The empty lattice approximation is a theoretical electronic band structure model in which the potential is periodic and weak (close to constant). One may also consider an empty irregular lattice, in which the potential is not even periodic. The empty lattice approximation describes a number of properties of energy dispersion relations of non-interacting free electrons that move through a crystal lattice. The energy of the electrons in the "empty lattice" is the same as the energy of free electrons. The model is useful because it clearly illustrates a number of the sometimes very complex features of energy dispersion relations in solids which are fundamental to all electronic band structures.
== Scattering and periodicity ==
The periodic potential of the lattice in this free electron model must be weak because otherwise the electrons would not be free. The strength of the scattering mainly depends on the geometry and topology of the system. Topologically defined parameters, like scattering cross sections, depend on the magnitude of the potential and the size of the potential well. In one-, two- and three-dimensional spaces, potential wells always scatter waves, no matter how small their potential, what its sign, or how limited their size. For a particle in a one-dimensional lattice, like the Kronig–Penney model, the band structure can be calculated analytically by substituting the values of the potential, the lattice spacing and the size of the potential well. For two- and three-dimensional problems it is more difficult to calculate a band structure accurately from a similar model with only a few parameters. Nevertheless, the properties of the band structure can easily be approximated in most regions by perturbation methods.
In theory the lattice is infinitely large, so a weak periodic scattering potential will eventually be strong enough to reflect the wave. The scattering process results in the well known Bragg reflections of electrons by the periodic potential of the crystal structure. This is the origin of the periodicity of the dispersion relation and the division of k-space in Brillouin zones. The periodic energy dispersion relation is expressed
as:

E_n(\mathbf{k}) = \frac{\hbar^2 (\mathbf{k} + \mathbf{G}_n)^2}{2m}
The G_n are the reciprocal lattice vectors to which the bands E_n(k) belong.
The figure on the right shows the dispersion relation for three periods in reciprocal space of a one-dimensional lattice with lattice cells of length a.
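These folded free-electron parabolas are simple enough to tabulate directly. The sketch below is a minimal illustration of the dispersion relation above for a one-dimensional lattice (the lattice period, the number of bands and the function names are illustrative assumptions, not values from the article):

```c
#include <stdio.h>

#define HBAR  1.054571817e-34      /* reduced Planck constant, J s */
#define EMASS 9.1093837015e-31     /* free electron mass, kg */
#define EV    1.602176634e-19      /* one electronvolt, J */
#define PI    3.14159265358979323846

/* Empty-lattice band n at wave vector k in the first Brillouin zone of a
   1D lattice with period a: E_n(k) = hbar^2 (k + G_n)^2 / (2 m), where
   G_n = 2*pi*n/a.  The result is returned in electronvolts.            */
double empty_lattice_band_eV(int n, double k, double a)
{
    double q = k + 2.0 * PI * (double)n / a;
    return HBAR * HBAR * q * q / (2.0 * EMASS) / EV;
}

int main(void)
{
    const double a = 5.0e-10;                 /* illustrative period: 0.5 nm */
    for (int i = -50; i <= 50; ++i) {
        double k = (double)i / 50.0 * PI / a; /* sweep -pi/a .. +pi/a */
        printf("%+.4e", k);
        for (int n = -2; n <= 2; ++n)         /* a few bands, G_n = 2*pi*n/a */
            printf("  %8.3f", empty_lattice_band_eV(n, k, a));
        printf("\n");
    }
    return 0;
}
```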
== The energy bands and the density of states ==
In a one-dimensional lattice the number of reciprocal lattice vectors G_n that determine the bands in an energy interval is limited to two when the energy rises. In two- and three-dimensional lattices the number of reciprocal lattice vectors that determine the free electron bands E_n(k) increases more rapidly when the length of the wave vector increases and the energy rises. This is because the number of reciprocal lattice vectors G_n that lie in an interval [k, k + dk] increases. The density of states in an energy interval [E, E + dE] depends on the number of states in an interval [k, k + dk] in reciprocal space and the slope of the dispersion relation E_n(k).
Though the lattice cells are not spherically symmetric, the dispersion relation still has spherical symmetry from the point of view of a fixed central point in a reciprocal lattice cell if the dispersion relation is extended outside the central Brillouin zone. The density of states in a three-dimensional lattice will be the same as in the case of the absence of a lattice. For the three-dimensional case the density of states
D_3(E) is:

D_3(E) = 2\pi \sqrt{\frac{E - E_0}{c_k^3}}
In three-dimensional space the Brillouin zone boundaries are planes. The dispersion relations show conics of the free-electron energy dispersion parabolas for all possible reciprocal lattice vectors. This results in a very complicated set of intersecting curves when the dispersion relations are calculated, because there is a large number of possible angles between evaluation trajectories, first- and higher-order Brillouin zone boundaries and dispersion parabola intersection cones.
== Second, third and higher Brillouin zones ==
"Free electrons" that move through the lattice of a solid with wave vectors
k far outside the first Brillouin zone are still reflected back into the first Brillouin zone. See the external links section for sites with examples and figures.
== The nearly free electron model ==
In most simple metals, like aluminium, the screening effect strongly reduces the electric field of the ions in the solid. The electrostatic potential is expressed as
V(r) = \frac{Ze}{r}\, e^{-qr}
where Z is the atomic number, e is the elementary unit charge, r is the distance to the nucleus of the embedded ion and q is a screening parameter that determines the range of the potential. The Fourier transform,
U_G, of the lattice potential, V(r), is expressed as

U_{\mathbf{G}} = \frac{4\pi Ze}{q^2 + G^2}
When the screening is strong (large q), the values of the off-diagonal elements U_G between the reciprocal lattice vectors in the Hamiltonian almost go to zero. As a result, the magnitude of the band gap 2|U_G| collapses and the empty lattice approximation is obtained.
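As a rough numerical illustration of this collapse, one can evaluate the Fourier component U_G = 4πZe/(q² + G²) for increasing screening. The sketch below works in hartree atomic units with e = 1, and the ion charge, reciprocal lattice vector and screening values are illustrative assumptions:

```c
#include <stdio.h>

/* Fourier component of the screened Coulomb potential in atomic units
   (e = 1, lengths in bohr): U_G = 4*pi*Z / (q^2 + G^2).               */
static double u_g(double Z, double q, double G)
{
    const double pi = 3.14159265358979323846;
    return 4.0 * pi * Z / (q * q + G * G);
}

int main(void)
{
    const double Z = 3.0;   /* aluminium-like ion charge, illustrative      */
    const double G = 1.5;   /* reciprocal lattice vector magnitude, 1/bohr  */

    /* As the screening parameter q grows, U_G and the band gap 2|U_G|
       shrink and the bands approach the empty-lattice result.          */
    for (double q = 0.0; q <= 4.0; q += 0.5)
        printf("q = %4.1f   U_G = %8.4f   gap 2|U_G| = %8.4f\n",
               q, u_g(Z, q, G), 2.0 * u_g(Z, q, G));
    return 0;
}
```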
== The electron bands of common metal crystals ==
Apart from a few exotic exceptions, metals crystallize in three kinds of crystal structures: the body-centered cubic (BCC) and face-centered cubic (FCC) crystal structures and the hexagonal close-packed (HCP) crystal structure.
== References ==
== External links ==
Brillouin Zone simple lattice diagrams by Thayer Watkins Archived 2006-09-14 at the Wayback Machine
Brillouin Zone 3d lattice diagrams by Technion. Archived 2006-12-05 at the Wayback Machine
DoITPoMS Teaching and Learning Package - "Brillouin Zones"
A biomolecule or biological molecule is loosely defined as a molecule produced by a living organism and essential to one or more typically biological processes. Biomolecules include large macromolecules such as proteins, carbohydrates, lipids, and nucleic acids, as well as small molecules such as vitamins and hormones. A general name for this class of material is biological materials. Biomolecules are an important element of living organisms. They are often endogenous, i.e. produced within the organism, but organisms usually also need exogenous biomolecules, for example certain nutrients, to survive.
Biomolecules and their reactions are studied in biology and its subfields of biochemistry and molecular biology. Most biomolecules are organic compounds, and just four elements—oxygen, carbon, hydrogen, and nitrogen—make up 96% of the human body's mass. But many other elements, such as the various biometals, are also present in small amounts.
The uniformity of both specific types of molecules (the biomolecules) and of certain metabolic pathways are invariant features among the wide diversity of life forms; thus these biomolecules and metabolic pathways are referred to as "biochemical universals" or "theory of material unity of the living beings", a unifying concept in biology, along with cell theory and evolution theory.
== Types of biomolecules ==
A diverse range of biomolecules exist, including:
Small molecules:
Lipids, fatty acids, glycolipids, sterols, monosaccharides
Vitamins
Hormones, neurotransmitters
Metabolites
Monomers, oligomers and polymers: these larger biomolecules, such as the nucleotides and nucleic acids, saccharides, lignin, lipids and proteins, are discussed in the sections below.
== Nucleosides and nucleotides ==
Nucleosides are molecules formed by attaching a nucleobase to a ribose or deoxyribose ring. Examples of these include cytidine (C), uridine (U), adenosine (A), guanosine (G), and thymidine (T).
Nucleosides can be phosphorylated by specific kinases in the cell, producing nucleotides.
Both DNA and RNA are polymers, consisting of long, linear molecules assembled by polymerase enzymes from repeating structural units, or monomers, of mononucleotides. DNA uses the deoxynucleotides C, G, A, and T, while RNA uses the ribonucleotides (which have an extra hydroxyl(OH) group on the pentose ring) C, G, A, and U. Modified bases are fairly common (such as with methyl groups on the base ring), as found in ribosomal RNA or transfer RNAs or for discriminating the new from old strands of DNA after replication.
Each nucleotide is made of a heterocyclic nitrogenous base, a pentose and one to three phosphate groups. They contain carbon, nitrogen, oxygen, hydrogen and phosphorus. They serve as sources of chemical energy (adenosine triphosphate and guanosine triphosphate), participate in cellular signaling (cyclic guanosine monophosphate and cyclic adenosine monophosphate), and are incorporated into important cofactors of enzymatic reactions (coenzyme A, flavin adenine dinucleotide, flavin mononucleotide, and nicotinamide adenine dinucleotide phosphate).
=== DNA and RNA structure ===
DNA structure is dominated by the well-known double helix formed by Watson-Crick base-pairing of C with G and A with T. This is known as B-form DNA, and is overwhelmingly the most favorable and common state of DNA; its highly specific and stable base-pairing is the basis of reliable genetic information storage. DNA can sometimes occur as single strands (often needing to be stabilized by single-strand binding proteins) or as A-form or Z-form helices, and occasionally in more complex 3D structures such as the crossover at Holliday junctions during DNA replication.
RNA, in contrast, forms large and complex 3D tertiary structures reminiscent of proteins, as well as the loose single strands with locally folded regions that constitute messenger RNA molecules. Those RNA structures contain many stretches of A-form double helix, connected into definite 3D arrangements by single-stranded loops, bulges, and junctions. Examples are tRNA, ribosomes, ribozymes, and riboswitches. These complex structures are facilitated by the fact that RNA backbone has less local flexibility than DNA but a large set of distinct conformations, apparently because of both positive and negative interactions of the extra OH on the ribose. Structured RNA molecules can do highly specific binding of other molecules and can themselves be recognized specifically; in addition, they can perform enzymatic catalysis (when they are known as "ribozymes", as initially discovered by Tom Cech and colleagues).
== Saccharides ==
Monosaccharides are the simplest form of carbohydrates, consisting of a single simple sugar unit. They contain an aldehyde or ketone group in their structure. The presence of an aldehyde group in a monosaccharide is indicated by the prefix aldo-. Similarly, a ketone group is denoted by the prefix keto-. Examples of monosaccharides include the hexoses (such as glucose, fructose and galactose), the pentoses (such as ribose and deoxyribose), and the trioses, tetroses and heptoses. Consumed fructose and glucose have different rates of gastric emptying, are differentially absorbed and have different metabolic fates, providing multiple opportunities for two different saccharides to differentially affect food intake. Most saccharides eventually provide fuel for cellular respiration.
Disaccharides are formed when two monosaccharides, or two single simple sugars, form a bond with removal of water. They can be hydrolyzed to yield their saccharide building blocks by boiling with dilute acid or reacting them with appropriate enzymes. Examples of disaccharides include sucrose, maltose, and lactose.
Polysaccharides are polymerized monosaccharides, or complex carbohydrates. They have multiple simple sugars. Examples are starch, cellulose, and glycogen. They are generally large and often have a complex branched connectivity. Because of their size, polysaccharides are not water-soluble, but their many hydroxy groups become hydrated individually when exposed to water, and some polysaccharides form thick colloidal dispersions when heated in water. Shorter polysaccharides, with 3 to 10 monomers, are called oligosaccharides.
A fluorescent indicator-displacement molecular imprinting sensor was developed for discriminating saccharides. It successfully discriminated three brands of orange juice beverage. The resulting change in fluorescence intensity of the sensing films is directly related to the saccharide concentration.
== Lignin ==
Lignin is a complex polyphenolic macromolecule composed mainly of beta-O4-aryl linkages. After cellulose, lignin is the second most abundant biopolymer and is one of the primary structural components of most plants. It contains subunits derived from p-coumaryl alcohol, coniferyl alcohol, and sinapyl alcohol, and is unusual among biomolecules in that it is racemic. The lack of optical activity is due to the polymerization of lignin which occurs via free radical coupling reactions in which there is no preference for either configuration at a chiral center.
== Lipid ==
Lipids (oleaginous) are chiefly fatty acid esters, and are the basic building blocks of biological membranes. Another biological role is energy storage (e.g., triglycerides). Most lipids consist of a polar or hydrophilic head (typically glycerol) and one to three nonpolar or hydrophobic fatty acid tails, and therefore they are amphiphilic. Fatty acids consist of unbranched chains of carbon atoms that are connected by single bonds alone (saturated fatty acids) or by both single and double bonds (unsaturated fatty acids). The chains are usually 14–24 carbon groups long and contain an even number of carbon atoms.
For lipids present in biological membranes, the hydrophilic head is from one of three classes:
Glycolipids, whose heads contain an oligosaccharide with 1-15 saccharide residues.
Phospholipids, whose heads contain a positively charged group that is linked to the tail by a negatively charged phosphate group.
Sterols, whose heads contain a planar steroid ring, for example, cholesterol.
Other lipids include prostaglandins and leukotrienes which are both 20-carbon fatty acyl units synthesized from arachidonic acid.
These fatty acid derivatives are collectively known as eicosanoids.
== Amino acids ==
Amino acids contain both amino and carboxylic acid functional groups. (In biochemistry, the term amino acid is used when referring to those amino acids in which the amino and carboxylate functionalities are attached to the same carbon, plus proline, which is strictly speaking an imino acid rather than an amino acid.)
Modified amino acids are sometimes observed in proteins; this is usually the result of enzymatic modification after translation (protein synthesis). For example, phosphorylation of serine by kinases and dephosphorylation by phosphatases is an important control mechanism in the cell cycle. Only two amino acids other than the standard twenty are known to be incorporated into proteins during translation, in certain organisms:
Selenocysteine is incorporated into some proteins at a UGA codon, which is normally a stop codon.
Pyrrolysine is incorporated into some proteins at a UAG codon. For instance, in some methanogens in enzymes that are used to produce methane.
Besides those used in protein synthesis, other biologically important amino acids include carnitine (used in lipid transport within a cell), ornithine, GABA and taurine.
=== Protein structure ===
The particular series of amino acids that form a protein is known as that protein's primary structure. This sequence is determined by the genetic makeup of the individual. It specifies the order of side-chain groups along the linear polypeptide "backbone".
Proteins have two types of well-classified, frequently occurring elements of local structure defined by a particular pattern of hydrogen bonds along the backbone: alpha helix and beta sheet. Their number and arrangement is called the secondary structure of the protein. Alpha helices are regular spirals stabilized by hydrogen bonds between the backbone CO group (carbonyl) of one amino acid residue and the backbone NH group (amide) of the i+4 residue. The spiral has about 3.6 amino acids per turn, and the amino acid side chains stick out from the cylinder of the helix. Beta pleated sheets are formed by backbone hydrogen bonds between individual beta strands each of which is in an "extended", or fully stretched-out, conformation. The strands may lie parallel or antiparallel to each other, and the side-chain direction alternates above and below the sheet. Hemoglobin contains only helices, natural silk is formed of beta pleated sheets, and many enzymes have a pattern of alternating helices and beta-strands. The secondary-structure elements are connected by "loop" or "coil" regions of non-repetitive conformation, which are sometimes quite mobile or disordered but usually adopt a well-defined, stable arrangement.
The overall, compact, 3D structure of a protein is termed its tertiary structure or its "fold". It is formed as result of various attractive forces like hydrogen bonding, disulfide bridges, hydrophobic interactions, hydrophilic interactions, van der Waals force etc.
When two or more polypeptide chains (either of identical or of different sequence) cluster to form a protein, quaternary structure of protein is formed. Quaternary structure is an attribute of polymeric (same-sequence chains) or heteromeric (different-sequence chains) proteins like hemoglobin, which consists of two "alpha" and two "beta" polypeptide chains.
==== Apoenzymes ====
An apoenzyme (or, generally, an apoprotein) is the protein without any small-molecule cofactors, substrates, or inhibitors bound. It is often important as an inactive storage, transport, or secretory form of a protein. This is required, for instance, to protect the secretory cell from the activity of that protein.
Apoenzymes become active enzymes on addition of a cofactor. Cofactors can be either inorganic (e.g., metal ions and iron-sulfur clusters) or organic compounds (e.g., flavin and heme). Organic cofactors can be either prosthetic groups, which are tightly bound to an enzyme, or coenzymes, which are released from the enzyme's active site during the reaction.
==== Isoenzymes ====
Isoenzymes, or isozymes, are multiple forms of an enzyme, with slightly different protein sequence and closely similar but usually not identical functions. They are either products of different genes, or else different products of alternative splicing. They may either be produced in different organs or cell types to perform the same function, or several isoenzymes may be produced in the same cell type under differential regulation to suit the needs of changing development or environment. LDH (lactate dehydrogenase) has multiple isozymes, while fetal hemoglobin is an example of a developmentally regulated isoform of a non-enzymatic protein. The relative levels of isoenzymes in blood can be used to diagnose problems in the organ of secretion.
== See also ==
Biomolecular engineering
List of biomolecules
Metabolism
Multi-state modeling of biomolecules
== References ==
== External links ==
Society for Biomolecular Sciences - provider of a forum for education and information exchange among professionals within drug discovery and related disciplines.
In statistical mechanics the Percus–Yevick approximation is a closure relation to solve the Ornstein–Zernike equation. It is also referred to as the Percus–Yevick equation. It is commonly used in fluid theory to obtain e.g. expressions for the radial distribution function. The approximation is named after Jerome K. Percus and George J. Yevick.
== Derivation ==
The direct correlation function represents the direct correlation between two particles in a system containing N − 2 other particles. It can be represented by
c(r) = g_{\rm total}(r) - g_{\rm indirect}(r)
where g_total(r) is the radial distribution function, i.e. g(r) = exp[-βw(r)] (with w(r) the potential of mean force), and g_indirect(r) is the radial distribution function without the direct interaction between pairs u(r) included; i.e. we write g_indirect(r) = exp[-β(w(r) - u(r))]. Thus we approximate c(r) by
c(r) = e^{-\beta w(r)} - e^{-\beta[w(r) - u(r)]}.
If we introduce the function y(r) = e^{βu(r)} g(r) into the approximation for c(r) one obtains
c(r) = g(r) - y(r) = e^{-\beta u(r)}\, y(r) - y(r) = f(r)\, y(r).
This is the essence of the Percus–Yevick approximation: substituting this result into the Ornstein–Zernike equation gives the Percus–Yevick equation:
y(r_{12}) = 1 + \rho \int f(r_{13})\, y(r_{13})\, h(r_{23})\, d\mathbf{r}_3 .
The approximation was defined by Percus and Yevick in 1958.
== Hard spheres ==
For hard spheres, the potential u(r) is either zero or infinite, and therefore the Boltzmann factor
e^{-u/k_{\rm B}T} is either one or zero, regardless of temperature T. Therefore the structure of a hard-sphere fluid is temperature independent. This leaves just two parameters: the hard-core radius R (which can be eliminated by rescaling distances or wavenumbers), and the packing fraction η (which has a maximum value of 0.64 for random close packing).
Under these conditions, the Percus–Yevick equation has an analytical solution, obtained by Wertheim in 1963.
=== Solution as C code ===
The static structure factor of the hard-spheres fluid in Percus–Yevick approximation can be computed using the following C function:
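A minimal sketch of such a function, based on Wertheim's closed-form solution for the direct correlation function (the function name, the use of the dimensionless wave number q·σ with σ the hard-sphere diameter, and the small-q branch are illustrative choices):

```c
#include <math.h>

/* Static structure factor S(q) of the hard-sphere fluid in the
   Percus-Yevick approximation, from Wertheim's analytical solution.
   eta: packing fraction;  qs: dimensionless wave number q*sigma,
   where sigma is the hard-sphere diameter.                          */
double py_hard_sphere_sq(double eta, double qs)
{
    double d = 1.0 - eta;
    double a = (1.0 + 2.0 * eta) * (1.0 + 2.0 * eta) / (d * d * d * d);
    double b = -6.0 * eta * (1.0 + 0.5 * eta) * (1.0 + 0.5 * eta) / (d * d * d * d);
    double g = 0.5 * eta * a;

    if (qs < 1.0e-6)  /* q -> 0 limit: S(0) = (1 - eta)^4 / (1 + 2 eta)^2 */
        return d * d * d * d / ((1.0 + 2.0 * eta) * (1.0 + 2.0 * eta));

    double s = qs, s2 = s * s, s3 = s2 * s, s4 = s2 * s2, s6 = s4 * s2;
    double sn = sin(s), cs = cos(s);
    double i1 = (sn - s * cs) / s3;
    double i2 = (2.0 * s * sn + (2.0 - s2) * cs - 2.0) / s4;
    double i3 = (-s4 * cs + 4.0 * ((3.0 * s2 - 6.0) * cs
                                   + (s3 - 6.0 * s) * sn + 6.0)) / s6;

    /* rho * c(q): Fourier transform of the direct correlation function
       multiplied by the number density.                                */
    double rho_c = -24.0 * eta * (a * i1 + b * i2 + g * i3);
    return 1.0 / (1.0 - rho_c);
}
```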
== Hard spheres in shear flow ==
For hard spheres in shear flow, the function u(r) arises from the solution to the steady-state two-body Smoluchowski convection–diffusion equation (the two-body Smoluchowski equation with shear flow). An approximate analytical solution to the Smoluchowski convection–diffusion equation was found by Banetta and Zaccone using the method of matched asymptotic expansions.
This analytical solution can then be used together with the Percus–Yevick approximation in the Ornstein–Zernike equation. Approximate solutions for the pair distribution function in the extensional and compressional sectors of shear flow, and hence for the angular-averaged radial distribution function, can be obtained; they are in good parameter-free agreement with numerical data up to packing fractions η ≈ 0.5.
== See also ==
Hypernetted-chain equation – another closure relation
Ornstein–Zernike equation
== References ==
Hybrid functionals are a class of approximations to the exchange–correlation energy functional in density functional theory (DFT) that incorporate a portion of exact exchange from Hartree–Fock theory with the rest of the exchange–correlation energy from other sources (ab initio or empirical). The exact exchange energy functional is expressed in terms of the Kohn–Sham orbitals rather than the density, so is termed an implicit density functional. One of the most commonly used versions is B3LYP, which stands for "Becke, 3-parameter, Lee–Yang–Parr".
== Origin ==
The hybrid approach to constructing density functional approximations was introduced by Axel Becke in 1993. Hybridization with Hartree–Fock (HF) exchange (also called exact exchange) provides a simple scheme for improving the calculation of many molecular properties, such as atomization energies, bond lengths and vibration frequencies, which tend to be poorly described with simple "ab initio" functionals.
== Method ==
A hybrid exchange–correlation functional is usually constructed as a linear combination of the Hartree–Fock exact exchange functional
E_{\text{x}}^{\text{HF}} = -\frac{1}{2}\sum_{i,j}\iint \psi_i^{*}(\mathbf{r}_1)\,\psi_j^{*}(\mathbf{r}_2)\,\frac{1}{r_{12}}\,\psi_j(\mathbf{r}_1)\,\psi_i(\mathbf{r}_2)\,d\mathbf{r}_1\,d\mathbf{r}_2
and any number of exchange and correlation explicit density functionals. The parameters determining the weight of each individual functional are typically specified by fitting the functional's predictions to experimental or accurately calculated thermochemical data, although in the case of the "adiabatic connection functionals" the weights can be set a priori.
=== B3LYP ===
For example, the popular B3LYP (Becke, 3-parameter, Lee–Yang–Parr) exchange-correlation functional is
E_{\text{xc}}^{\text{B3LYP}} = (1-a)E_{\text{x}}^{\text{LSDA}} + aE_{\text{x}}^{\text{HF}} + b\,\Delta E_{\text{x}}^{\text{B}} + (1-c)E_{\text{c}}^{\text{LSDA}} + cE_{\text{c}}^{\text{LYP}},
where a = 0.20, b = 0.72, and c = 0.81. Here E_x^B is the Becke 88 generalized gradient approximation exchange functional, E_c^LYP is the correlation functional of Lee, Yang and Parr, and E_c^LSDA is the VWN local spin density approximation to the correlation functional.
The three parameters defining B3LYP have been taken without modification from Becke's original fitting of the analogous B3PW91 functional to a set of atomization energies, ionization potentials, proton affinities, and total atomic energies.
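The mixing itself is just a weighted sum of component energies, each of which must be supplied by the host electronic-structure code. A minimal sketch of that final step (the function and argument names are illustrative):

```c
/* B3LYP exchange-correlation energy assembled from precomputed pieces:
   E_xc = (1-a) E_x^LSDA + a E_x^HF + b dE_x^B88
          + (1-c) E_c^LSDA + c E_c^LYP,  with a = 0.20, b = 0.72, c = 0.81. */
double b3lyp_xc(double ex_lsda, double ex_hf, double dex_b88,
                double ec_lsda, double ec_lyp)
{
    const double a = 0.20, b = 0.72, c = 0.81;
    return (1.0 - a) * ex_lsda + a * ex_hf + b * dex_b88
         + (1.0 - c) * ec_lsda + c * ec_lyp;
}
```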
=== PBE0 ===
The PBE0 functional mixes the Perdew–Burke–Ernzerhof (PBE) exchange energy and the Hartree–Fock exchange energy in a fixed 3:1 ratio, along with the full PBE correlation energy:
E_{\text{xc}}^{\text{PBE0}} = \frac{1}{4}E_{\text{x}}^{\text{HF}} + \frac{3}{4}E_{\text{x}}^{\text{PBE}} + E_{\text{c}}^{\text{PBE}},
where E_x^HF is the Hartree–Fock exact exchange functional, E_x^PBE is the PBE exchange functional, and E_c^PBE is the PBE correlation functional.
=== HSE ===
The HSE (Heyd–Scuseria–Ernzerhof) exchange–correlation functional uses an error-function-screened Coulomb potential to calculate the exchange portion of the energy in order to improve computational efficiency, especially for metallic systems:
E_{\text{xc}}^{\omega\text{PBEh}} = aE_{\text{x}}^{\text{HF,SR}}(\omega) + (1-a)E_{\text{x}}^{\text{PBE,SR}}(\omega) + E_{\text{x}}^{\text{PBE,LR}}(\omega) + E_{\text{c}}^{\text{PBE}},
where a is the mixing parameter and ω is an adjustable parameter controlling the short-rangeness of the interaction. Standard values of a = 1/4 and ω = 0.2 (usually referred to as HSE06) have been shown to give good results for most systems. The HSE exchange–correlation functional degenerates to the PBE0 hybrid functional for ω = 0.
E_x^HF,SR(ω) is the short-range Hartree–Fock exact exchange functional, E_x^PBE,SR(ω) and E_x^PBE,LR(ω) are the short- and long-range components of the PBE exchange functional, and E_c^PBE is the PBE correlation functional.
=== Meta-hybrid GGA ===
The M06 suite of functionals is a set of four meta-hybrid GGA and meta-GGA DFT functionals. These functionals are constructed by empirically fitting their parameters, while being constrained to reproduce the uniform electron gas limit.
The family includes the functionals M06-L, M06, M06-2X and M06-HF, with a different amount of exact exchange for each one. M06-L is fully local without HF exchange (thus it cannot be considered hybrid), M06 has 27% HF exchange, M06-2X 54% and M06-HF 100%.
The advantages and usefulness of each functional are
M06-L: Fast, good for transition metals, inorganic and organometallics.
M06: For main group, organometallics, kinetics and non-covalent bonds.
M06-2X: Main group, kinetics.
M06-HF: Charge-transfer TD-DFT, systems where self-interaction is pathological.
The suite gives good results for systems containing dispersion forces, one of the biggest deficiencies of standard DFT methods.
A recent review of DFT functionals concludes: "Despite their excellent performance for energies and geometries, we must suspect that modern highly parameterized functionals need further guidance from exact constraints, or exact density, or both"
== References ==
In theoretical chemistry and molecular physics, Coulson–Fischer theory provides a quantum mechanical description of the electronic structure of molecules. The 1949 seminal work of Coulson and Fischer established a theory of molecular electronic structure which combines the strengths of the two rival theories which emerged soon after the advent of quantum chemistry - valence bond theory and molecular orbital theory, whilst avoiding many of their weaknesses. For example, unlike the widely used Hartree–Fock molecular orbital method, Coulson–Fischer theory provides a qualitatively correct description of molecular dissociative processes. The Coulson–Fischer wave function has been said to provide a third way in quantum chemistry. Modern valence bond theory is often seen as an extension of the Coulson–Fischer method.
== Theory ==
Coulson–Fischer theory is an extension of modern valence bond theory that uses localized atomic orbitals as the basis for VBT structures. In Coulson–Fischer theory, orbitals are delocalized towards nearby atoms. This is described for H2 as follows:
\phi_1 = a + \lambda b
\phi_2 = b + \lambda a
where a and b are atomic 1s orbitals, that are used as the basis functions for VBT, and λ is a delocalization parameter from 0 to 1. The VB structures then use
\phi_1 and \phi_2 as the basis functions to describe the total electronic wavefunction as
\Phi_{CF} = \left| \phi_1 \overline{\phi_2} \right| - \left| \overline{\phi_1} \phi_2 \right|
in obvious analogy to the Heitler-London wavefunction. However, an expansion of the Coulson-Fischer description of the wavefunction in terms of a and b gives:
\Phi_{CF} = (1 + \lambda^{2})\left(\left| a\overline{b} \right| - \left| \overline{a}b \right|\right) + 2\lambda\left(\left| a\overline{a} \right| + \left| b\overline{b} \right|\right)

(Expanding the determinants and using |\overline{a}a| = -|a\overline{a}|, |\overline{b}b| = -|b\overline{b}|, |b\overline{a}| = -|\overline{a}b| and |\overline{b}a| = -|a\overline{b}|, the two covalent determinants acquire the common factor 1 + \lambda^{2}, while each ionic determinant appears with the factor 2\lambda.)
A full VBT description of H2 that includes both ionic and covalent contributions is
\Phi_{VBT} = \epsilon\left(\left| a\overline{b} \right| - \left| \overline{a}b \right|\right) + \mu\left(\left| a\overline{a} \right| + \left| b\overline{b} \right|\right)
where ε and μ are constants between 0 and 1.
As a result, the CF description gives the same description as a full valence bond description, but with just one VB structure.
== References ==
== External links ==
Stephen Wilson. "The Coulson-Fischer theory of molecular electronic structure". Retrieved 2020-11-20.
Minnesota Functionals (Myz) are a group of highly parameterized approximate exchange-correlation energy functionals in density functional theory (DFT). They are developed by the group of Donald Truhlar at the University of Minnesota. The Minnesota functionals are available in a large number of popular quantum chemistry computer programs, and can be used for traditional quantum chemistry and solid-state physics calculations.
These functionals are based on the meta-GGA approximation, i.e. they include terms that depend on the kinetic energy density, and are all based on complicated functional forms parametrized on high-quality benchmark databases. The Myz functionals are widely used and tested in the quantum chemistry community.
== Controversies ==
Independent evaluations of the strengths and limitations of the Minnesota functionals with respect to various chemical properties cast doubts on their accuracy. Some regard this criticism to be unfair. In this view, because Minnesota functionals are aiming for a balanced description for both main-group and transition-metal chemistry, the studies assessing Minnesota functionals solely based on the performance on main-group databases yield biased information, as the functionals that work well for main-group chemistry may fail for transition metal chemistry.
A study in 2017 highlighted what appeared to be the poor performance of Minnesota functionals on atomic densities. Others subsequently refuted this criticism, claiming that focusing only on atomic densities (including chemically unimportant, highly charged cations) is hardly relevant to real applications of density functional theory in computational chemistry. A further study supported this view, finding that, for Minnesota functionals, the errors in atomic densities and in energetics are indeed decoupled, and that the Minnesota functionals perform better for diatomic densities than for the atomic densities. The study concludes that atomic densities do not yield an accurate judgement of the performance of density functionals. Minnesota functionals have also been shown to reproduce chemically relevant Fukui functions better than they do the atomic densities.
== Family of functionals ==
=== Minnesota 05 ===
The first family of Minnesota functionals, published in 2005, is composed by:
M05: Global hybrid functional with 28% HF exchange.
M05-2X Global hybrid functional with 56% HF exchange.
In addition to the fraction of HF exchange, the M05 family of functionals includes 22 additional empirical parameters. A range-separated functional based on the M05 form, ωM05-D, which includes empirical atomic dispersion corrections, has been reported by Chai and coworkers.
=== Minnesota 06 ===
The '06 family represent a general improvement over the 05 family and is composed of:
M06-L: Local functional, 0% HF exchange. Intended to be fast, good for transition metals, inorganic and organometallics.
revM06-L: Local functional, 0% HF exchange. M06-L revised for smoother potential energy curves and improved overall accuracy.
M06: Global hybrid functional with 27% HF exchange. Intended for main group thermochemistry and non-covalent interactions, transition metal thermochemistry and organometallics. It is usually the most versatile of the 06 functionals, and because of this large applicability it can be slightly worse than M06-2X for specific properties that require high percentage of HF exchange, such as thermochemistry and kinetics.
revM06: Global hybrid functional with 40.4% HF exchange. Intended for a broad range of applications on main-group chemistry, transition-metal chemistry, and molecular structure prediction to replace M06 and M06-2X.
M06-2X: Global hybrid functional with 54% HF exchange. It is the top performer within the 06 functionals for main group thermochemistry, kinetics and non-covalent interactions, however it cannot be used for cases where multi-reference species are or might be involved, such as transition metal thermochemistry and organometallics.
M06-HF: Global hybrid functional with 100% HF exchange. Intended for charge transfer TD-DFT and systems where self-interaction is pathological.
The M06 and M06-2X functionals introduce 35 and 32 empirically optimized parameters, respectively, into the exchange-correlation functional. A range-separated functional based on the M06 form, ωM06-D3, which includes empirical atomic dispersion corrections, has been reported by Chai and coworkers.
=== Minnesota 08 ===
The '08 family was created with the primary intent to improve the M06-2X functional form, retaining the performances for main group thermochemistry, kinetics and non-covalent interactions. This family is composed by two functionals with a high percentage of HF exchange, with performances similar to those of M06-2X:
M08-HX: Global hybrid functional with 52.23% HF exchange. Intended for main group thermochemistry, kinetics and non-covalent interactions.
M08-SO: Global hybrid functional with 56.79% HF exchange. Intended for main group thermochemistry, kinetics and non-covalent interactions.
=== Minnesota 11 ===
The '11 family introduces range-separation in the Minnesota functionals and modifications in the functional form and in the training databases. These modifications also cut the number of functionals in a complete family from 4 (M06-L, M06, M06-2X and M06-HF) to just 2:
M11-L: Local functional (0% HF exchange) with dual-range DFT exchange. Intended to be fast, to be good for transition metals, inorganic, organometallics and non-covalent interactions, and to improve much over M06-L.
M11: Range-separated hybrid functional with 42.8% HF exchange in the short-range and 100% in the long-range. Intended for main group thermochemistry, kinetics and non-covalent interactions, with an intended performance comparable to that of M06-2X, and for TD-DFT applications, with an intended performance comparable to M06-HF.
revM11: Range-separated hybrid functional with 22.5% HF exchange in the short-range and 100% in the long-range. Intended for good performance for electronic excitations and good predictions across the board for ground-state properties.
=== Minnesota 12 ===
The 12 family uses a nonseparable (N in MN) functional form aiming to provide balanced performance for both chemistry and solid-state physics applications. It is composed by:
MN12-L: A local functional, 0% HF exchange. The aim of the functional was to be very versatile and provide good computational performance and accuracy for energetic and structural problems in both chemistry and solid-state physics.
MN12-SX: Screened-exchange (SX) hybrid functional with 25% HF exchange in the short-range and 0% HF exchange in the long-range. MN12-SX was intended to be very versatile and provide good performance for energetic and structural problems in both chemistry and solid-state physics, at a computational cost that is intermediate between local and global hybrid functionals.
=== Minnesota 15 ===
The 15 functionals are the newest addition to the Minnesota family. Like the 12 family, the functionals are based on a non-separable form, but unlike the 11 or 12 families the hybrid functional does not use range separation: MN15 is a global hybrid like those in the pre-11 families. The 15 family consists of two functionals:
MN15, a global hybrid with 44% HF exchange.
MN15-L, a local functional with 0% HF exchange.
== Main Software with Implementation of the Minnesota Functionals ==
== References ==
== External links ==
The Truhlar Group
Minnesota Databases for Chemistry and Physics
The most recent review article on the performance of the Minnesota functionals
In solid-state physics, the k·p perturbation theory is an approximate semi-empirical approach for calculating the band structure (particularly the effective mass) and optical properties of crystalline solids. It is pronounced "k dot p", and is also called the k·p method. This theory has been applied specifically in the framework of the Luttinger–Kohn model (after Joaquin Mazdak Luttinger and Walter Kohn), and of the Kane model (after Evan O. Kane).
== Background and derivation ==
=== Bloch's theorem and wavevectors ===
According to quantum mechanics (in the single-electron approximation), the quasi-free electrons in any solid are characterized by wavefunctions which are eigenstates of the following stationary Schrödinger equation:
\left(\frac{p^2}{2m} + V\right)\psi = E\psi
where p is the quantum-mechanical momentum operator, V is the potential, and m is the vacuum mass of the electron. (This equation neglects the spin–orbit effect; see below.)
In a crystalline solid, V is a periodic function, with the same periodicity as the crystal lattice. Bloch's theorem proves that the solutions to this differential equation can be written as follows:
\psi_{n,\mathbf{k}}(\mathbf{x}) = e^{i\mathbf{k}\cdot\mathbf{x}}\, u_{n,\mathbf{k}}(\mathbf{x})
where k is a vector (called the wavevector), n is a discrete index (called the band index), and un,k is a function with the same periodicity as the crystal lattice.
For any given n, the associated states are called a band. In each band, there will be a relation between the wavevector k and the energy of the state En,k, called the band dispersion. Calculating this dispersion is one of the primary applications of k·p perturbation theory.
=== Perturbation theory ===
The periodic function un,k satisfies the following Schrödinger-type equation (simply, a direct expansion of the Schrödinger equation with a Bloch-type wave function):
H_{\mathbf{k}}\, u_{n,\mathbf{k}} = E_{n,\mathbf{k}}\, u_{n,\mathbf{k}}
where the Hamiltonian is
H_{\mathbf{k}} = \frac{p^2}{2m} + \frac{\hbar\,\mathbf{k}\cdot\mathbf{p}}{m} + \frac{\hbar^2 k^2}{2m} + V
Note that k is a vector consisting of three real numbers with dimensions of inverse length, while p is a vector of operators; to be explicit,
\mathbf{k}\cdot\mathbf{p} = k_x\left(-i\hbar\frac{\partial}{\partial x}\right) + k_y\left(-i\hbar\frac{\partial}{\partial y}\right) + k_z\left(-i\hbar\frac{\partial}{\partial z}\right)
In any case, we write this Hamiltonian as the sum of two terms:
H_{\mathbf{k}} = H_0 + H'_{\mathbf{k}}, \qquad H_0 = \frac{p^2}{2m} + V, \qquad H'_{\mathbf{k}} = \frac{\hbar^2 k^2}{2m} + \frac{\hbar\,\mathbf{k}\cdot\mathbf{p}}{m}
This expression is the basis for perturbation theory. The "unperturbed Hamiltonian" is H0, which in fact equals the exact Hamiltonian at k = 0 (i.e., at the gamma point). The "perturbation" is the term
H'_k. The analysis that results is called k·p perturbation theory, due to the term proportional to k·p. The result of this analysis is an expression for E_{n,k} and u_{n,k} in terms of the energies and wavefunctions at k = 0.
Note that the "perturbation" term H'_k gets progressively smaller as k approaches zero. Therefore, k·p perturbation theory is most accurate for small values of k. However, if enough terms are included in the perturbative expansion, then the theory can in fact be reasonably accurate for any value of k in the entire Brillouin zone.
=== Expression for a nondegenerate band ===
For a nondegenerate band (i.e., a band which has a different energy at k = 0 from any other band), with an extremum at k = 0, and with no spin–orbit coupling, the result of k·p perturbation theory is (to lowest nontrivial order):
u_{n,\mathbf{k}} = u_{n,0} + \frac{\hbar}{m}\sum_{n'\neq n}\frac{\langle u_{n',0}|\mathbf{k}\cdot\mathbf{p}|u_{n,0}\rangle}{E_{n,0} - E_{n',0}}\, u_{n',0}
E_{n,\mathbf{k}} = E_{n,0} + \frac{\hbar^2 k^2}{2m} + \frac{\hbar^2}{m^2}\sum_{n'\neq n}\frac{|\langle u_{n,0}|\mathbf{k}\cdot\mathbf{p}|u_{n',0}\rangle|^2}{E_{n,0} - E_{n',0}}
Since k is a vector of real numbers (rather than a vector of more complicated linear operators), the matrix element in these expressions can be rewritten as:
\langle u_{n,0}|\mathbf{k}\cdot\mathbf{p}|u_{n',0}\rangle = \mathbf{k}\cdot\langle u_{n,0}|\mathbf{p}|u_{n',0}\rangle
Therefore, one can calculate the energy at any k using only a few unknown parameters, namely E_{n,0} and the matrix elements ⟨u_{n,0}|p|u_{n',0}⟩. The latter are called "optical matrix elements", closely related to transition dipole moments. These parameters are typically inferred from experimental data.
In practice, the sum over n often includes only the nearest one or two bands, since these tend to be the most important (due to the denominator). However, for improved accuracy, especially at larger k, more bands must be included, as well as more terms in the perturbative expansion than the ones written above.
==== Effective mass ====
Using the expression above for the energy dispersion relation, a simplified expression for the effective mass in the conduction band of a semiconductor can be found. To approximate the dispersion relation in the case of the conduction band, take the energy En0 as the minimum conduction band energy Ec0 and include in the summation only terms with energies near the valence band maximum, where the energy difference in the denominator is smallest. (These terms are the largest contributions to the summation.) This denominator is then approximated as the band gap Eg, leading to an energy expression:
E_c(\mathbf{k}) \approx E_{c0} + \frac{(\hbar k)^2}{2m} + \frac{\hbar^2}{E_g m^2}\sum_n |\langle u_{c,0}|\mathbf{k}\cdot\mathbf{p}|u_{n,0}\rangle|^2
The effective mass in direction ℓ is then:
\frac{1}{m_\ell} = \frac{1}{\hbar^2}\sum_m \frac{\partial^2 E_c(\mathbf{k})}{\partial k_\ell\,\partial k_m} \approx \frac{1}{m} + \frac{2}{E_g m^2}\sum_{m,n}\langle u_{c,0}|p_\ell|u_{n,0}\rangle\langle u_{n,0}|p_m|u_{c,0}\rangle
Ignoring the details of the matrix elements, the key consequences are that the effective mass varies with the smallest bandgap and goes to zero as the gap goes to zero. A useful approximation for the matrix elements in direct gap semiconductors is:
\frac{2}{E_g m^2}\sum_{m,n}|\langle u_{c,0}|p_\ell|u_{n,0}\rangle|\,|\langle u_{c,0}|p_m|u_{n,0}\rangle| \approx \frac{20\ \mathrm{eV}}{m\,E_g},
which applies within about 15% or better to most group-IV, III-V and II-VI semiconductors.
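Taken literally, this rule of thumb turns the band gap into a one-line estimate of the conduction-band effective mass, 1/m_eff ≈ (1/m)(1 + 20 eV/E_g). The sketch below is illustrative only; the sample gap values are assumptions:

```c
#include <stdio.h>

/* Conduction-band effective mass ratio m_eff/m estimated from the k.p
   rule of thumb above: 1/m_eff = (1/m) * (1 + 20 eV / E_g).            */
double effective_mass_ratio(double gap_eV)
{
    return 1.0 / (1.0 + 20.0 / gap_eV);
}

int main(void)
{
    /* Illustrative direct gaps in eV; a gap of about 1.4 eV gives an
       effective mass ratio near 0.066, in the range observed for GaAs. */
    const double gaps[] = { 0.35, 0.75, 1.42, 2.40 };
    for (int i = 0; i < 4; ++i)
        printf("E_g = %4.2f eV  ->  m_eff/m = %.3f\n",
               gaps[i], effective_mass_ratio(gaps[i]));
    return 0;
}
```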
In contrast to this simple approximation, in the case of valence band energy the spin–orbit interaction must be introduced (see below) and many more bands must be individually considered. The calculation is provided in Yu and Cardona. In the valence band the mobile carriers are holes. One finds there are two types of hole, named heavy and light, with anisotropic masses.
=== k·p model with spin–orbit interaction ===
Including the spin–orbit interaction, the Schrödinger equation for u is:
H_{\mathbf{k}}\, u_{n,\mathbf{k}} = E_{n,\mathbf{k}}\, u_{n,\mathbf{k}}
where
H_{\mathbf{k}} = \frac{p^2}{2m} + \frac{\hbar}{m}\mathbf{k}\cdot\mathbf{p} + \frac{\hbar^2 k^2}{2m} + V + \frac{\hbar}{4m^2c^2}\left(\nabla V \times (\mathbf{p} + \hbar\mathbf{k})\right)\cdot\vec{\sigma}
where \vec{\sigma} = (\sigma_x, \sigma_y, \sigma_z) is a vector consisting of the three Pauli matrices. This Hamiltonian can be subjected to the same sort of perturbation-theory analysis as above.
=== Calculation in degenerate case ===
For degenerate or nearly degenerate bands, in particular the valence bands in certain materials such as gallium arsenide, the equations can be analyzed by the methods of degenerate perturbation theory. Models of this type include the "Luttinger–Kohn model" (a.k.a. "Kohn–Luttinger model"), and the "Kane model".
Generally, an effective Hamiltonian H^eff is introduced, and to first order its matrix elements can be expressed as
H^{\rm eff}_{\mathbf{k},mn} = \langle u_{m,0}|H_0|u_{n,0}\rangle + \mathbf{k}\cdot\langle u_{m,0}|\nabla_{\mathbf{k}}H'_{\mathbf{k}}|u_{n,0}\rangle
After solving it, the wave functions and energy bands are obtained.
== See also ==
== Notes and references ==
The GW approximation (GWA) is an approximation made in order to calculate the self-energy of a many-body system of electrons. The approximation is that the expansion of the self-energy Σ in terms of the single particle Green's function G and the screened Coulomb interaction W (in units of
ℏ = 1)
\Sigma = iGW - GWGWG + \cdots
can be truncated after the first term:
\Sigma \approx iGW
In other words, the self-energy is expanded in a formal Taylor series in powers of the screened interaction W and the lowest order term is kept in the expansion in GWA.
== Theory ==
The above formulae are schematic in nature and show the overall idea of the approximation. More precisely, if we label an electron coordinate with its position, spin, and time and bundle all three into a composite index (the numbers 1, 2, etc.), we have
\Sigma(1,2) = iG(1,2)\,W(1^{+},2) - \int d3 \int d4\; G(1,3)\,G(3,4)\,G(4,2)\,W(1,4)\,W(3,2) + \ldots
where the "+" superscript means the time index is shifted forward by an infinitesimal amount. The GWA is then
\Sigma(1,2) \approx iG(1,2)\,W(1^{+},2)
To put this in context, if one replaces W by the bare Coulomb interaction (i.e. the usual 1/r interaction), one generates the standard perturbative series for the self-energy found in most many-body textbooks. The GWA with W replaced by the bare Coulomb yields nothing other than the Hartree–Fock exchange potential (self-energy). Therefore, loosely speaking, the GWA represents a type of dynamically screened Hartree–Fock self-energy.
In a solid state system, the series for the self-energy in terms of W should converge much faster than the traditional series in the bare Coulomb interaction. This is because the screening of the medium reduces the effective strength of the Coulomb interaction: for example, if one places an electron at some position in a material and asks what the potential is at some other position in the material, the value is smaller than given by the bare Coulomb interaction (inverse distance between the points) because the other electrons in the medium polarize (move or distort their electronic states) so as to screen the electric field. Therefore, W is a smaller quantity than the bare Coulomb interaction so that a series in W should have higher hopes of converging quickly.
To see the more rapid convergence, we can consider the simplest example involving the homogeneous or uniform electron gas which is characterized by an electron density or equivalently the average electron-electron separation or Wigner–Seitz radius
r_s. (We only present a scaling argument and will not compute numerical prefactors that are order unity.) Here are the key steps:
The kinetic energy of an electron scales as 1/r_s^2.
The average electron-electron repulsion from the bare (unscreened) Coulomb interaction scales as 1/r_s (simply the inverse of the typical separation).
The electron gas dielectric function in the simplest Thomas–Fermi screening model for a wave vector q is ε(q) = 1 + λ²/q², where λ is the screening wave number, which scales as r_s^(-1/2).
Typical wave vectors q scale as 1/r_s (again the typical inverse separation).
Hence a typical screening value is ε ~ 1 + r_s.
The screened Coulomb interaction is W(q) = V(q)/ε(q).
Thus for the bare Coulomb interaction, the ratio of Coulomb to kinetic energy is of order r_s, which is of order 2-5 for a typical metal and not small at all: in other words, the bare Coulomb interaction is rather strong and makes for a poor perturbative expansion. On the other hand, the ratio of a typical W to the kinetic energy is greatly reduced by the screening and is of order r_s/(1 + r_s), which is well behaved and smaller than unity even for large r_s: the screened interaction is much weaker and is more likely to give a rapidly converging perturbative series.
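A short numerical check of this comparison, with order-one prefactors ignored exactly as in the scaling argument above (the range of r_s values is arbitrary):

```c
#include <stdio.h>

/* Compare the bare expansion parameter (~ r_s) with the screened one
   (~ r_s / (1 + r_s)) for typical metallic densities.                 */
int main(void)
{
    for (double rs = 1.0; rs <= 6.0; rs += 1.0)
        printf("r_s = %.0f   bare ~ %.1f   screened ~ %.2f\n",
               rs, rs, rs / (1.0 + rs));
    return 0;
}
```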
== History ==
The first GWA-type calculation, building on the Hartree–Fock method, was carried out in 1958 by John Quinn and Richard Allan Ferrell, although with many approximations and a limited approach. Donald F. Dubois used this method in 1959 to obtain results for very small Wigner–Seitz radius, i.e. very large electron densities. The first full calculation using the GWA was done by Lars Hedin in 1965. The Hedin equations for the GWA are named after him.
With the advance of computational resources, real materials were first studied using the GWA in the 1980s, with the works of Mark S. Hybertsen and Steven Gwon Sheng Louie.
== Software implementing the GW approximation ==
ABINIT - plane-wave pseudopotential method
ADF - Slater basis set method
BerkeleyGW - plane-wave pseudopotential method
CP2K - Gaussian-based low-scaling all-electron and pseudopotential method
ELK - full-potential (linearized) augmented plane-wave (FP-LAPW) method
FHI-aims - numeric atom-centered orbitals method
Fiesta - Gaussian all-electron method
GAP - an all-electron GW code based on augmented plane-waves, currently interfaced with WIEN2k
GPAW
GREEN - fully self-consistent GW in Gaussian basis for molecules and solids
Molgw - small gaussian basis code
NanoGW - real-space wave functions and Lanczos iterative methods
PySCF
QuantumATK - LCAO and PW methods.
Quantum ESPRESSO - Wannier-function pseudopotential method
Questaal - Full Potential (FP-LMTO) method
SaX Archived 2009-02-03 at the Wayback Machine - plane-wave pseudopotential method
Spex - full-potential (linearized) augmented plane-wave (FP-LAPW) method
TURBOMOLE - Gaussian all-electron method
VASP - projector-augmented-wave (PAW) method
West - large scale GW
YAMBO code - plane-wave pseudopotential method
== Sources ==
The key publications concerning the application of the GW approximation Archived 2019-02-04 at the Wayback Machine
Picture of Lars Hedin, inventor of GW
GW100 - Benchmarking the GW approach for molecules.
== References ==
== Further reading ==
Electron Correlation in the Solid State, Norman H. March (editor), World Scientific Publishing Company
Aryasetiawan, Ferdi. "Correlation effects in solids from first principles" (PDF).
Local-density approximations (LDA) are a class of approximations to the exchange–correlation (XC) energy functional in density functional theory (DFT) that depend solely upon the value of the electronic density at each point in space (and not, for example, derivatives of the density or the Kohn–Sham orbitals). Many approaches can yield local approximations to the XC energy. However, overwhelmingly successful local approximations are those that have been derived from the homogeneous electron gas (HEG) model. In this regard, LDA is generally synonymous with functionals based on the HEG approximation, which are then applied to realistic systems (molecules and solids).
In general, for a spin-unpolarized system, a local-density approximation for the exchange-correlation energy is written as
{\displaystyle E_{\rm {xc}}^{\mathrm {LDA} }[\rho ]=\int \rho (\mathbf {r} )\epsilon _{\rm {xc}}(\rho (\mathbf {r} ))\ \mathrm {d} \mathbf {r} \ ,}
where ρ is the electronic density and ϵxc is the exchange-correlation energy per particle of a homogeneous electron gas of charge density ρ. The exchange-correlation energy is decomposed into exchange and correlation terms linearly,
{\displaystyle E_{\rm {xc}}=E_{\rm {x}}+E_{\rm {c}}\ ,}
so that separate expressions for Ex and Ec are sought. The exchange term takes on a simple analytic form for the HEG. Only limiting expressions for the correlation energy density are known exactly, leading to numerous different approximations for ϵc.
Local-density approximations are important in the construction of more sophisticated approximations to the exchange-correlation energy, such as generalized gradient approximations (GGA) or hybrid functionals, as a desirable property of any approximate exchange-correlation functional is that it reproduce the exact results of the HEG for non-varying densities. As such, LDA's are often an explicit component of such functionals.
The local-density approximation was first introduced by Walter Kohn and Lu Jeu Sham in 1965.
== Applications ==
Local-density approximations, like GGAs, are employed extensively by solid-state physicists in ab-initio DFT studies to interpret electronic and magnetic interactions in semiconductor materials, including semiconducting oxides and spintronics. The importance of these computational studies stems from the complexity of the systems, which brings about high sensitivity to synthesis parameters and necessitates first-principles analysis. The prediction of the Fermi level and band structure in doped semiconducting oxides is often carried out using LDA incorporated into simulation packages such as CASTEP and DMol3. However, the underestimation of band gap values often associated with LDA and GGA approximations may lead to false predictions of impurity-mediated conductivity and/or carrier-mediated magnetism in such systems. Starting in 1998, the application of the Rayleigh theorem for eigenvalues has led to mostly accurate calculated band gaps of materials using LDA potentials. A misunderstanding of the second theorem of DFT appears to explain most of the underestimation of the band gap by LDA and GGA calculations, as explained in the description of density functional theory, in connection with the statements of the two theorems of DFT.
== Homogeneous electron gas ==
Approximations for ϵxc that depend only upon the density can be developed in numerous ways. The most successful approach is based on the homogeneous electron gas. This is constructed by placing N interacting electrons into a volume, V, with a positive background charge keeping the system neutral. N and V are then taken to infinity in a manner that keeps the density (ρ = N / V) finite. This is a useful approximation, as the total energy consists of contributions only from the kinetic energy, the electrostatic interaction energy and the exchange-correlation energy, and the wavefunction is expressible in terms of plane waves. In particular, for a constant density ρ, the exchange energy density is proportional to ρ⅓.
== Exchange functional ==
The exchange-energy density of a HEG is known analytically. The LDA for exchange employs this expression under the approximation that the exchange energy in a system with an inhomogeneous density is obtained by applying the HEG result pointwise, yielding the expression
{\displaystyle E_{\rm {x}}^{\mathrm {LDA} }[\rho ]=-{\frac {3e^{2}}{16\pi \varepsilon _{0}}}\left({\frac {3}{\pi }}\right)^{1/3}\int \rho (\mathbf {r} )^{4/3}\ \mathrm {d} \mathbf {r} =-{\frac {3}{4}}\left({\frac {3}{\pi }}\right)^{1/3}\int \rho (\mathbf {r} )^{4/3}\ \mathrm {d} \mathbf {r} \,,}
where the second formulation applies in atomic units.
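As an illustration, the following short Python sketch (the numerical grid and the Gaussian "density" are made-up test data, not from the source) evaluates the LDA exchange energy in atomic units by applying the HEG expression pointwise and integrating on a grid:

```python
import numpy as np

def lda_exchange_energy(rho, dV):
    """LDA exchange energy in atomic units (Hartree):
    E_x = -(3/4) * (3/pi)**(1/3) * integral of rho(r)**(4/3),
    approximated here by a sum over grid points with volume element dV."""
    c_x = -0.75 * (3.0 / np.pi) ** (1.0 / 3.0)
    return c_x * np.sum(rho ** (4.0 / 3.0)) * dV

# Toy example: a Gaussian "density" sampled on a 3D grid (illustrative only).
n, L = 40, 10.0
x = np.linspace(-L / 2, L / 2, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
rho = np.exp(-(X**2 + Y**2 + Z**2))
dV = (x[1] - x[0]) ** 3

print("E_x(LDA) ~", lda_exchange_energy(rho, dV), "Hartree")
```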
== Correlation functional ==
Analytic expressions for the correlation energy of the HEG are available in the high- and low-density limits corresponding to infinitely-weak and infinitely-strong correlation. For a HEG with density ρ, the high-density limit of the correlation energy density is
{\displaystyle \epsilon _{\rm {c}}=A\ln(r_{\rm {s}})+B+r_{\rm {s}}(C\ln(r_{\rm {s}})+D)\ ,}
and the low-density limit
{\displaystyle \epsilon _{\rm {c}}={\frac {1}{2}}\left({\frac {g_{0}}{r_{\rm {s}}}}+{\frac {g_{1}}{r_{\rm {s}}^{3/2}}}+\dots \right)\ ,}
where the Wigner–Seitz parameter {\displaystyle r_{\rm {s}}} is dimensionless. It is defined as the radius of a sphere which encompasses exactly one electron, divided by the Bohr radius a0. In terms of the density ρ, this means
{\displaystyle {\frac {4}{3}}\pi r_{\rm {s}}^{3}={\frac {1}{\rho \,a_{0}^{3}}}\ .}
An analytical expression for the full range of densities has been proposed based on the many-body perturbation theory. The calculated correlation energies are in agreement with the results from quantum Monte Carlo simulation to within 2 milli-Hartree.
Accurate quantum Monte Carlo simulations for the energy of the HEG have been performed for several intermediate values of the density, in turn providing accurate values of the correlation energy density.
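For concreteness, a small Python helper (the chosen density value below is only an illustrative number) converts a density into the Wigner–Seitz parameter defined above:

```python
import numpy as np

def wigner_seitz_rs(rho, a0=1.0):
    """Dimensionless Wigner-Seitz parameter from a density rho (per a0**3):
    (4/3) * pi * r_s**3 = 1 / (rho * a0**3)."""
    return (3.0 / (4.0 * np.pi * rho * a0**3)) ** (1.0 / 3.0)

# Illustrative value: a density of 0.01 electrons per cubic bohr.
print(wigner_seitz_rs(0.01))   # ~ 2.9, a typical metallic r_s
```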
== Spin polarization ==
The extension of density functionals to spin-polarized systems is straightforward for exchange, where the exact spin-scaling is known, but for correlation further approximations must be employed. A spin polarized system in DFT employs two spin-densities, ρα and ρβ with ρ = ρα + ρβ, and the form of the local-spin-density approximation (LSDA) is
{\displaystyle E_{\rm {xc}}^{\mathrm {LSDA} }[\rho _{\alpha },\rho _{\beta }]=\int \mathrm {d} \mathbf {r} \ \rho (\mathbf {r} )\epsilon _{\rm {xc}}(\rho _{\alpha },\rho _{\beta })\ .}
For the exchange energy, the exact result (not just for local density approximations) is known in terms of the spin-unpolarized functional:
{\displaystyle E_{\rm {x}}[\rho _{\alpha },\rho _{\beta }]={\frac {1}{2}}{\bigg (}E_{\rm {x}}[2\rho _{\alpha }]+E_{\rm {x}}[2\rho _{\beta }]{\bigg )}\ .}
The spin-dependence of the correlation energy density is approached by introducing the relative spin-polarization:
{\displaystyle \zeta (\mathbf {r} )={\frac {\rho _{\alpha }(\mathbf {r} )-\rho _{\beta }(\mathbf {r} )}{\rho _{\alpha }(\mathbf {r} )+\rho _{\beta }(\mathbf {r} )}}\ .}
{\displaystyle \zeta =0\,} corresponds to the paramagnetic spin-unpolarized situation with equal {\displaystyle \alpha \,} and {\displaystyle \beta \,} spin densities, whereas {\displaystyle \zeta =\pm 1} corresponds to the ferromagnetic situation where one spin density vanishes. The spin correlation energy density for given values of the total density and relative polarization, ϵc(ρ,ζ), is constructed so as to interpolate between these extreme values. Several forms have been developed in conjunction with LDA correlation functionals.
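A minimal Python sketch of the exact spin-scaling relation for the exchange energy might look as follows; the grid and the two spin densities are purely illustrative toy data:

```python
import numpy as np

def lda_exchange_energy(rho, dV):
    """Spin-unpolarized LDA exchange energy (atomic units)."""
    c_x = -0.75 * (3.0 / np.pi) ** (1.0 / 3.0)
    return c_x * np.sum(rho ** (4.0 / 3.0)) * dV

def lsda_exchange_energy(rho_a, rho_b, dV):
    """Spin-resolved exchange via the exact spin-scaling relation:
    E_x[rho_a, rho_b] = (E_x[2 rho_a] + E_x[2 rho_b]) / 2."""
    return 0.5 * (lda_exchange_energy(2.0 * rho_a, dV)
                  + lda_exchange_energy(2.0 * rho_b, dV))

# Illustrative 1D "densities" on a grid (toy data, not a real system).
x = np.linspace(-5, 5, 201)
dV = x[1] - x[0]
rho_a = 0.6 * np.exp(-x**2)
rho_b = 0.4 * np.exp(-x**2)
print(lsda_exchange_energy(rho_a, rho_b, dV))
```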
== Exchange-correlation potential ==
The exchange-correlation potential corresponding to the exchange-correlation energy for a local density approximation is given by
{\displaystyle v_{\rm {xc}}^{\mathrm {LDA} }(\mathbf {r} )={\frac {\delta E^{\mathrm {LDA} }}{\delta \rho (\mathbf {r} )}}=\epsilon _{\rm {xc}}(\rho (\mathbf {r} ))+\rho (\mathbf {r} ){\frac {\partial \epsilon _{\rm {xc}}(\rho (\mathbf {r} ))}{\partial \rho (\mathbf {r} )}}\ .}
In finite systems, the LDA potential decays asymptotically with an exponential form. This is incorrect; the true exchange-correlation potential decays much more slowly, in a Coulombic manner. The artificially rapid decay manifests itself in the number of Kohn–Sham orbitals the potential can bind (that is, how many orbitals have energy less than zero). The LDA potential cannot support a Rydberg series, and those states it does bind are too high in energy. This results in the highest occupied molecular orbital (HOMO) energy being too high, so that any predictions for the ionization potential based on Koopmans' theorem are poor. Further, the LDA provides a poor description of electron-rich species such as anions, where it is often unable to bind an additional electron, erroneously predicting species to be unstable. In the case of spin polarization, the exchange-correlation potential acquires spin indices. However, if one considers only the exchange part of the exchange-correlation, one obtains a potential that is diagonal in spin indices (in atomic units):
{\displaystyle v_{\rm {xc,\alpha \beta }}^{\mathrm {LDA} }(\mathbf {r} )={\frac {\delta E^{\mathrm {LDA} }}{\delta \rho _{\alpha \beta }(\mathbf {r} )}}={\frac {1}{2}}\delta _{\alpha \beta }{\frac {\delta E^{\mathrm {LDA} }[2\rho _{\alpha }]}{\delta \rho _{\alpha }}}=-\delta _{\alpha \beta }{\Big (}{\frac {3}{\pi }}{\Big )}^{1/3}2^{1/3}\rho _{\alpha }^{1/3}}
== References == | Wikipedia/Local-density_approximation |
Semi-empirical quantum chemistry methods are based on the Hartree–Fock formalism, but make many approximations and obtain some parameters from empirical data. They are very important in computational chemistry for treating large molecules where the full Hartree–Fock method without the approximations is too expensive. The use of empirical parameters appears to allow some inclusion of electron correlation effects into the methods.
Within the framework of Hartree–Fock calculations, some pieces of information (such as two-electron integrals) are sometimes approximated or completely omitted. In order to correct for this loss, semi-empirical methods are parametrized, that is their results are fitted by a set of parameters, normally in such a way as to produce results that best agree with experimental data, but sometimes to agree with ab initio results.
== Type of simplifications used ==
Semi-empirical methods follow what are often called empirical methods where the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel. For all valence electron systems, the extended Hückel method was proposed by Roald Hoffmann.
Semi-empirical calculations are much faster than their ab initio counterparts, mostly due to the use of the zero differential overlap approximation. Their results, however, can be very wrong if the molecule being computed is not similar enough to the molecules in the database used to parametrize the method.
== Preferred application domains ==
=== Methods restricted to π-electrons ===
These methods exist for the calculation of electronically excited states of polyenes, both cyclic and linear. These methods, such as the Pariser–Parr–Pople method (PPP), can provide good estimates of the π-electronic excited states, when parameterized well. For many years, the PPP method outperformed ab initio excited state calculations.
=== Methods restricted to all valence electrons ===
These methods can be grouped into several groups:
Methods such as CNDO/2, INDO and NDDO that were introduced by John Pople. The implementations aimed to fit, not experiment, but ab initio minimum basis set results. These methods are now rarely used but the methodology is often the basis of later methods.
Methods that are in the MOPAC, AMPAC, SPARTAN and/or CP2K computer programs originally from the group of Michael Dewar. These are MINDO, MNDO, AM1, PM3, PM6, PM7 and SAM1. Here the objective is to use parameters to fit experimental heats of formation, dipole moments, ionization potentials, and geometries. This is by far the largest group of semiempirical methods.
Methods whose primary aim is to calculate excited states and hence predict electronic spectra. These include ZINDO and SINDO. The OMx (x=1,2,3) methods can also be viewed as belonging to this class, although they are also suitable for ground-state applications; in particular, the combination of OM2 and MRCI is an important tool for excited state molecular dynamics.
Tight-binding methods, e.g. a large family of methods known as DFTB, are sometimes classified as semiempirical methods as well. More recent examples include the semiempirical quantum mechanical methods GFNn-xTB (n=0,1,2), which are particularly suited for the geometry, vibrational frequencies, and non-covalent interactions of large molecules.
The NOTCH method includes many new, physically-motivated terms compared to the NDDO family of methods, is much less empirical than the other semi-empirical methods (almost all of its parameters are determined non-empirically), provides robust accuracy for bonds between uncommon element combinations, and is applicable to ground and excited states.
== See also ==
List of quantum chemistry and solid-state physics software
== References == | Wikipedia/Semi-empirical_quantum_chemistry_method |
The Kohn-Sham equations are a set of mathematical equations used in quantum mechanics to simplify the complex problem of understanding how electrons behave in atoms and molecules. They introduce fictitious non-interacting electrons and use them to find the most stable arrangement of electrons, which helps scientists understand and predict the properties of matter at the atomic and molecular scale.
== Description ==
In physics and quantum chemistry, specifically density functional theory, the Kohn–Sham equation is the non-interacting Schrödinger equation (more clearly, Schrödinger-like equation) of a fictitious system (the "Kohn–Sham system") of non-interacting particles (typically electrons) that generate the same density as any given system of interacting particles.
In the Kohn–Sham theory the introduction of the noninteracting kinetic energy functional Ts into the energy expression leads, upon functional differentiation, to a collection of one-particle equations whose solutions are the Kohn–Sham orbitals.
The Kohn–Sham equation is defined by a local effective (fictitious) external potential in which the non-interacting particles move, typically denoted as vs(r) or veff(r), called the Kohn–Sham potential. If the particles in the Kohn–Sham system are non-interacting fermions (non-fermion Density Functional Theory has been researched), the Kohn–Sham wavefunction is a single Slater determinant constructed from a set of orbitals that are the lowest-energy solutions to
{\displaystyle \left(-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+v_{\text{eff}}(\mathbf {r} )\right)\varphi _{i}(\mathbf {r} )=\varepsilon _{i}\varphi _{i}(\mathbf {r} ).}
This eigenvalue equation is the typical representation of the Kohn–Sham equations. Here εi is the orbital energy of the corresponding Kohn–Sham orbital {\displaystyle \varphi _{i}}, and the density for an N-particle system is
{\displaystyle \rho (\mathbf {r} )=\sum _{i}^{N}|\varphi _{i}(\mathbf {r} )|^{2}.}
== History ==
The Kohn–Sham equations are named after Walter Kohn and Lu Jeu Sham, who introduced the concept at the University of California, San Diego, in 1965.
Kohn received a Nobel Prize in Chemistry in 1998 for the Kohn–Sham equations and other work related to density functional theory (DFT).
== Kohn–Sham potential ==
In Kohn–Sham density functional theory, the total energy of a system is expressed as a functional of the charge density as
{\displaystyle E[\rho ]=T_{s}[\rho ]+\int d\mathbf {r} \,v_{\text{ext}}(\mathbf {r} )\rho (\mathbf {r} )+E_{\text{H}}[\rho ]+E_{\text{xc}}[\rho ],}
where Ts is the Kohn–Sham kinetic energy, which is expressed in terms of the Kohn–Sham orbitals as
{\displaystyle T_{s}[\rho ]=\sum _{i=1}^{N}\int d\mathbf {r} \,\varphi _{i}^{*}(\mathbf {r} )\left(-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\right)\varphi _{i}(\mathbf {r} ),}
vext is the external potential acting on the interacting system (at minimum, for a molecular system, the electron–nuclei interaction), EH is the Hartree (or Coulomb) energy
{\displaystyle E_{\text{H}}[\rho ]={\frac {e^{2}}{2}}\int d\mathbf {r} \int d\mathbf {r} '\,{\frac {\rho (\mathbf {r} )\rho (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}},}
and Exc is the exchange–correlation energy. The Kohn–Sham equations are found by varying the total energy expression with respect to a set of Kohn–Sham orbitals, subject to the constraint that the orbitals are orthogonal. This yields a time-independent Schrödinger equation with a scalar potential equal to the Kohn–Sham potential
{\displaystyle v_{\text{eff}}(\mathbf {r} )=v_{\text{ext}}(\mathbf {r} )+e^{2}\int {\frac {\rho (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,d\mathbf {r} '+{\frac {\delta E_{\text{xc}}[\rho ]}{\delta \rho (\mathbf {r} )}},}
where the last term
{\displaystyle v_{\text{xc}}(\mathbf {r} )\equiv {\frac {\delta E_{\text{xc}}[\rho ]}{\delta \rho (\mathbf {r} )}},}
is the exchange–correlation potential. This term, and the corresponding energy expression, are the only unknowns in the Kohn–Sham approach to density functional theory. An approximation that does not vary the orbitals is Harris functional theory.
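To illustrate how these pieces fit together, here is a purely schematic Python sketch of a one-dimensional Kohn–Sham self-consistency loop. The grid, the soft-Coulomb external potential, the exchange-only "LDA", and the particle number are all illustrative assumptions, not taken from the source; a production code would use a proper three-dimensional treatment and a full exchange–correlation functional.

```python
import numpy as np

# Illustrative 1D grid and soft-Coulomb external potential (toy model).
n, L = 200, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
v_ext = -2.0 / np.sqrt(x**2 + 1.0)      # "nucleus" of charge 2 (softened)

def hartree(rho):
    # Soft-Coulomb Hartree potential: v_H(x) = integral rho(x') / sqrt((x-x')^2 + 1) dx'
    return np.array([np.sum(rho / np.sqrt((xi - x)**2 + 1.0)) * dx for xi in x])

def v_x_lda(rho):
    # Toy exchange-only potential; the 3D LDA formula is applied to a 1D
    # density purely to illustrate the structure of the self-consistency.
    return -(3.0 / np.pi) ** (1.0 / 3.0) * rho ** (1.0 / 3.0)

# Kinetic energy operator: second-order finite differences (atomic units).
T = (np.diag(np.full(n, 1.0 / dx**2))
     - np.diag(np.full(n - 1, 0.5 / dx**2), 1)
     - np.diag(np.full(n - 1, 0.5 / dx**2), -1))

rho = np.full(n, 2.0 / L)                # two electrons, uniform initial guess
for it in range(50):
    v_eff = v_ext + hartree(rho) + v_x_lda(rho)   # Kohn-Sham potential
    eps, phi = np.linalg.eigh(T + np.diag(v_eff)) # solve the KS equation
    phi = phi / np.sqrt(dx)                       # normalize: integral |phi|^2 dx = 1
    rho_new = 2.0 * phi[:, 0] ** 2                # doubly occupy the lowest orbital
    if np.max(np.abs(rho_new - rho)) < 1e-8:
        break
    rho = 0.5 * rho + 0.5 * rho_new               # simple linear mixing
print("lowest KS eigenvalue:", eps[0])
```

The loop alternates between building the effective potential from the current density and re-diagonalizing the single-particle Hamiltonian, which is the essence of solving the Kohn–Sham equations self-consistently.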
The Kohn–Sham orbital energies εi, in general, have little physical meaning (see Koopmans' theorem). The sum of the orbital energies is related to the total energy as
{\displaystyle E=\sum _{i}^{N}\varepsilon _{i}-E_{\text{H}}[\rho ]+E_{\text{xc}}[\rho ]-\int {\frac {\delta E_{\text{xc}}[\rho ]}{\delta \rho (\mathbf {r} )}}\rho (\mathbf {r} )\,d\mathbf {r} .}
Because the orbital energies are non-unique in the more general restricted open-shell case, this equation only holds true for specific choices of orbital energies (see Koopmans' theorem).
== References == | Wikipedia/Kohn–Sham_equations |
Görling–Levy perturbation theory (GLPT) in Kohn–Sham (KS) density functional theory (DFT) is the analogue of what Møller–Plesset perturbation theory (MPPT) is in Hartree–Fock (HF) theory. Its basis is Rayleigh–Schrödinger perturbation theory (RSPT) and the adiabatic connection (AC). It describes electronic correlation effects. It is mostly used to second order (GL2), rarely to third (GL3) or fourth (GL4) order, because the computational expense increases rapidly with the order. It was published in 1993 and 1994 by Andreas Görling and Mel Levy.
== Kohn–Sham correlation energy from Görling–Levy perturbation ==
The basis of GL perturbation theory is the adiabatic connection (AC) with the coupling constant {\textstyle 0\leq \alpha \leq 1} connecting the artificial Kohn–Sham (KS) system of noninteracting electrons ({\textstyle \alpha =0}) to the real system of interacting electrons ({\textstyle \alpha =1}) with the AC Hamiltonian
{\displaystyle {\hat {H}}_{\alpha }={\hat {T}}+\alpha {\hat {V}}_{\text{ee}}+\sum _{i=1}^{N}v_{\alpha }(r_{i})}
where {\textstyle N} is the number of electrons, {\textstyle {\hat {T}}=-{\frac {1}{2}}\sum _{i}\nabla _{i}^{2}} the kinetic energy of the electrons, and {\textstyle {\hat {V}}_{\text{ee}}=\sum _{i}\sum _{j>i}|r_{i}-r_{j}|^{-1}} the electron-electron interaction. Görling and Levy expressed the coupling-strength-dependent local multiplicative potential, under the constraint that the density {\textstyle n(r)} stays fixed along the AC, as
{\displaystyle v_{\alpha }[n](r)=v_{S}[n](r)-\alpha v_{Hx}[n](r)-v_{c}^{\alpha }[n](r)}
where {\textstyle v_{S}} is the KS potential, {\textstyle v_{Hx}} the Hartree-exchange potential in first order, and the correlation potential for second and higher orders is {\textstyle v_{c}^{\alpha }(r)=\alpha ^{2}v_{c}^{(2)}+\alpha ^{3}v_{c}^{(3)}+\alpha ^{4}v_{c}^{(4)}+...}.
As usual in perturbation theory, we can express the correlation energy as a power series {\textstyle E_{c}^{(0)}+\alpha E_{c}^{(1)}+\alpha ^{2}E_{c}^{(2)}+...}, where in GLPT the zeroth- and first-order contributions vanish, i.e. {\textstyle E_{c}^{(0)}=E_{c}^{(1)}=0}.
The second term is the Görling–Levy second-order (GL2) correlation energy and can be evaluated, using the Slater–Condon rules and Brillouin's theorem, in terms of occupied ({\textstyle i,j}) and unoccupied ({\textstyle a,b}) KS orbitals and eigenvalues,
where {\textstyle \Phi _{S},\Phi _{k}} are the ground-state and excited KS determinants with their respective energies {\textstyle E_{0},E_{k}}, {\textstyle E_{c}^{\text{MP2}}} is exactly the second-order Møller–Plesset (MP2) correlation energy but evaluated with KS orbitals, {\textstyle E_{c}^{S}} is the so-called single-excitation contribution to the correlation, which is missing in regular MPPT but present in GLPT, {\textstyle {\hat {v}}_{x}^{\text{NL}}} is the nonlocal exchange operator from Hartree–Fock (HF) theory, {\textstyle {\hat {v}}_{x}} is the local Kohn–Sham (KS) exchange operator, both evaluated with KS orbitals, and lastly the notation {\textstyle \langle ij||ab\rangle =\langle ij|ab\rangle -\langle ij|ba\rangle } is used.
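As a rough illustration of the dominant, MP2-like double-excitation part of a GL2 evaluation, the following Python sketch sums over occupied and virtual Kohn–Sham spin orbitals given orbital energies and antisymmetrized two-electron integrals ⟨ij||ab⟩. The arrays `eps` and `eri_asym` are assumed, hypothetical inputs (random test numbers here), and the single-excitation term of GL2 is omitted for brevity:

```python
import numpy as np

def mp2_like_correlation(eps, eri_asym, n_occ):
    """Second-order (MP2-like) correlation energy evaluated with KS quantities:
    E = 1/4 * sum_{ij,ab} |<ij||ab>|^2 / (e_i + e_j - e_a - e_b),
    with i, j occupied and a, b unoccupied spin orbitals."""
    n = len(eps)
    occ, vir = range(n_occ), range(n_occ, n)
    e2 = 0.0
    for i in occ:
        for j in occ:
            for a in vir:
                for b in vir:
                    num = eri_asym[i, j, a, b] ** 2
                    den = eps[i] + eps[j] - eps[a] - eps[b]
                    e2 += 0.25 * num / den
    return e2

# Hypothetical tiny example: 2 occupied and 2 virtual spin orbitals
# with made-up integrals, for illustration of the summation only.
eps = np.array([-1.0, -0.8, 0.3, 0.5])
rng = np.random.default_rng(0)
eri_asym = rng.normal(scale=0.05, size=(4, 4, 4, 4))
print(mp2_like_correlation(eps, eri_asym, n_occ=2))
```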
== Hohenberg–Kohn functional from infinite Görling–Levy expansion ==
With GLPT up to infinite order one could in principle obtain the Hohenberg–Kohn (HK) functional exactly, {\textstyle F_{\text{HK}}[n]\equiv E[n]-\int drv(r)n(r)},
in terms of unoccupied and occupied KS orbitals {\textstyle \{\varphi _{i}\}} and their eigenvalues {\textstyle \{\varepsilon _{i}\}}, where {\textstyle E[n]} is the electronic ground-state energy and {\textstyle v(r)} the external potential. This is of course only conceptually interesting, since it is computationally impossible. With the coupling-constant expression
By setting {\textstyle \alpha =1}, hence
where in zeroth order {\textstyle E_{0}=T_{S}=\langle \Phi _{S}|{\hat {T}}|\Phi _{S}\rangle =\sum _{i}\int dr\phi _{i}^{*}(r)(-1/2\nabla ^{2})\phi _{i}(r)} is the KS kinetic energy with the KS potential {\textstyle v_{0}(r)=v_{S}(r)}, and in first order
{\textstyle E_{1}=\langle \Phi _{S}|{\hat {V}}_{\text{ee}}|\Phi _{S}\rangle =E_{Hx}[n]} is the Hartree-exchange (Hx) energy with its respective Hx potential {\textstyle v_{1}(r)=v_{Hx}(r)},
and from second order onwards the infinite GL{\textstyle n} correlation (c) energy with {\textstyle n\rightarrow \infty }, which is the exact Kohn–Sham (KS) correlation energy {\textstyle \lim _{n\rightarrow \infty }E_{c}^{\text{GLn}}[n]=\sum _{j=2}^{\infty }E_{j}=E_{c}[n]}, and the corresponding correlation potential {\textstyle v_{c}(r)=\sum _{j=2}^{\infty }v_{j}(r)}.
Similarly, if one carried out Møller–Plesset perturbation theory up to infinite order, one would obtain the exact Hartree–Fock (HF) correlation energy {\textstyle \lim _{n\rightarrow \infty }E_{c}^{\text{MPn}}=E_{c}^{\text{HF}}[\{\Phi ^{\text{HF}},\Phi _{i}^{a},\Phi _{ij}^{ab},\Phi _{ijk}^{abc},...\}]}, where {\textstyle i,j,k} denote occupied and {\textstyle a,b,c} unoccupied HF orbitals and their respective singly, doubly, triply and so on excited Slater determinants. In this notation {\textstyle \Phi ^{\text{HF}}} is the HF determinant and {\textstyle \Phi _{S}} the KS determinant.
=== Optimized Effective Potential (OEP) method ===
In the latter half of their article, Görling and Levy connect their perturbation theory to the optimized effective potential (OEP) method.
== References == | Wikipedia/Görling-Levy_pertubation_theory |
In solid state physics, the Luttinger–Ward functional, proposed by Joaquin Mazdak Luttinger and John Clive Ward in 1960, is a scalar functional of the bare electron-electron interaction and the renormalized one-particle propagator. In terms of Feynman diagrams, the Luttinger–Ward functional is the sum of all closed, bold, two-particle irreducible diagrams, i.e., all diagrams without particles going in or out that do not fall apart if one removes two propagator lines. It is usually written as
{\displaystyle \Phi [G]} or {\displaystyle \Phi [G,U]}, where {\displaystyle G} is the one-particle Green's function and {\displaystyle U} is the bare interaction.
The Luttinger–Ward functional has no direct physical meaning, but it is useful in proving conservation laws.
The functional is closely related to the Baym–Kadanoff functional constructed independently by Gordon Baym and Leo Kadanoff in 1961. Some authors use the terms interchangeably; if a distinction is made, then the Baym–Kadanoff functional is identical to the two-particle irreducible effective action
{\displaystyle \Gamma [G]}, which differs from the Luttinger–Ward functional by a trivial term.
== Construction ==
Given a system characterized by the action {\displaystyle S[c,{\bar {c}}]} in terms of Grassmann fields {\displaystyle c_{i},{\bar {c}}_{i}}, the partition function can be expressed as the path integral:
{\displaystyle Z[J]=\int \mathrm {D} [c,{\bar {c}}]\exp \!{\Big (}-S[c,{\bar {c}}]+\sum _{ij}{\bar {c}}_{i}J_{ij}c_{j}{\Big )},}
where {\displaystyle J} is a binary source field. By expansion in the Dyson series, one finds that {\displaystyle Z=Z[J=0]} is the sum of all (possibly disconnected) closed Feynman diagrams.
{\displaystyle Z[J]} in turn is the generating functional of the N-particle Green's function:
{\displaystyle G_{i_{1}j_{1}\ldots i_{N}j_{N}}=-\langle c_{i_{1}}{\bar {c}}_{j_{1}}\cdots c_{i_{N}}{\bar {c}}_{j_{N}}\rangle ={\frac {-1}{Z[0]}}\left.{\frac {\delta ^{N}Z[J]}{\delta J_{j_{1}i_{1}}\cdots \delta J_{j_{N}i_{N}}}}\right|_{J=0}}
The linked-cluster theorem asserts that the effective action {\displaystyle W=-\log Z} is the sum of all closed, connected, bare diagrams. {\displaystyle W[J]=-\log Z[J]} in turn is the generating functional for the connected Green's function. As an example, the two-particle connected Green's function reads:
{\displaystyle G_{ijkl}^{\mathrm {conn} }=-\langle c_{i}{\bar {c}}_{j}c_{k}{\bar {c}}_{l}\rangle +\langle c_{i}{\bar {c}}_{j}\rangle \langle c_{k}{\bar {c}}_{l}\rangle -\langle c_{i}{\bar {c}}_{l}\rangle \langle c_{k}{\bar {c}}_{j}\rangle =\left.{\frac {\delta ^{2}W[J]}{\delta J_{ji}\delta J_{lk}}}\right|_{J=0}}
To pass to the two-particle irreducible (2PI) effective action, one performs a Legendre transform of {\displaystyle W[J]} to a new binary source field. One chooses an, at this point arbitrary, convex {\displaystyle G_{ij}} as the source and obtains the 2PI functional, also known as the Baym–Kadanoff functional:
{\displaystyle \Gamma [G]={\Big [}W[J]-\sum _{ij}J_{ij}G_{ij}{\Big ]}_{J=J[G]}}
with
{\displaystyle G_{ij}=-{\frac {\delta W[J]}{\delta J_{ij}}}.}
Unlike the connected case, one more step is required to obtain a generating functional from the two-particle irreducible effective action {\displaystyle \Gamma } because of the presence of a non-interacting part. By subtracting it, one obtains the Luttinger–Ward functional:
{\displaystyle \Phi [G]=\Gamma [G]-\Gamma _{0}[G]=\Gamma [G]-\mathrm {tr} \log(-G)-\mathrm {tr} (\Sigma G),}
where {\displaystyle \Sigma } is the self-energy. Along the lines of the proof of the linked-cluster theorem, one can show that this is the generating functional for the two-particle irreducible propagators.
== Properties ==
Diagrammatically, the Luttinger–Ward functional is the sum of all closed, bold, two-particle irreducible Feynman diagrams (also known as “skeleton” diagrams):
The diagrams are closed as they do not have any external legs, i.e., no particles going in or out of the diagram. They are “bold” because they are formulated in terms of the interacting or bold propagator rather than the non-interacting one. They are two-particle irreducible since they do not become disconnected if we sever up to two fermionic lines.
The Luttinger–Ward functional is related to the grand potential {\displaystyle \Omega } of a system:
{\displaystyle \Omega =\mathrm {tr} \log(-G)+\mathrm {tr} (\Sigma G)+\Phi \left[G\right]}
{\displaystyle \Phi } is a generating functional for irreducible vertex quantities: the first functional derivative with respect to {\displaystyle G} gives the self-energy, while the second derivative gives the partially two-particle irreducible four-point vertex:
{\displaystyle \Sigma _{ij}={\frac {\delta \Phi }{\delta G_{ij}}}};
{\displaystyle \Gamma _{ijkl}={\frac {\delta ^{2}\Phi }{\delta G_{ij}\delta G_{kl}}}}
While the Luttinger–Ward functional exists, it can be shown to be not unique for Hubbard-like models. In particular, the irreducible vertex functions show a set of divergencies, which causes the self-energy to bifurcate into a physical and an unphysical solution.
Baym and Kadanoff showed that the conservation laws are satisfied for any functional {\displaystyle \Phi \left[G\right]}, thanks to Noether's theorem. This follows from the fact that the equation of motion of {\displaystyle G} responding to one-body external fields respects the space- and time-translational symmetries as well as the abelian gauge symmetry (phase symmetry), as long as the equation of motion is given in terms of the derivative of {\displaystyle \Phi \left[G\right]}. Note that the reverse is also true. Based on the diagrammatic analysis, what Baym found is that
{\displaystyle {\frac {\delta \Sigma (1,\left[G\right])}{\delta G(2)}}={\frac {\delta \Sigma (2,\left[G\right])}{\delta G(1)}}}
is needed to satisfy the conservation law. This is nothing but the completely-integrable condition, implying the existence of {\displaystyle \Phi \left[G\right]} such that
{\displaystyle \Sigma \left[G\right]={\frac {\delta \Phi \left[G\right]}{\delta G}}}
(recall the completely-integrable condition for {\displaystyle df=A(x,y)dx+B(x,y)dy}).
Thus the remaining problem is how to determine {\displaystyle \Phi \left[G\right]} approximately.
Such approximations are called conserving approximations. Some examples:
The (fully self-consistent) GW approximation is equivalent to truncating {\displaystyle \Phi } to so-called ring diagrams: {\displaystyle \Phi [G]\approx GUG+GUGGUG+\ldots } (a ring diagram consists of polarisation bubbles connected by interaction lines).
Dynamical mean-field theory is equivalent to taking only purely local diagrams into account: {\displaystyle \Phi [G_{ij},U_{ijkl}]\approx \Phi [G_{ii},U_{iiii}]}, where {\displaystyle i,j,k,l} are lattice site indices.
== See also ==
Luttinger's theorem
Ward identity
== References == | Wikipedia/Luttinger–Ward_functional |
A potential energy surface (PES) or energy landscape describes the energy of a system, especially a collection of atoms, in terms of certain parameters, normally the positions of the atoms. The surface might define the energy as a function of one or more coordinates; if there is only one coordinate, the surface is called a potential energy curve or energy profile. An example is the Morse/Long-range potential.
It is helpful to use the analogy of a landscape: for a system with two degrees of freedom (e.g. two bond lengths), the value of the energy (analogy: the height of the land) is a function of two bond lengths (analogy: the coordinates of the position on the ground).
The PES concept finds application in fields such as physics, chemistry and biochemistry, especially in the theoretical sub-branches of these subjects. It can be used to theoretically explore properties of structures composed of atoms, for example, finding the minimum energy shape of a molecule or computing the rates of a chemical reaction. It can be used to describe all possible conformations of a molecular entity, or the spatial positions of interacting molecules in a system, or parameters and their corresponding energy levels, typically Gibbs free energy. Geometrically, the energy landscape is the graph of the energy function across the configuration space of the system. The term is also used more generally in geometric perspectives to mathematical optimization, when the domain of the loss function is the parameter space of some system.
== Mathematical definition and computation ==
The geometry of a set of atoms can be described by a vector, r, whose elements represent the atom positions. The vector r could be the set of the Cartesian coordinates of the atoms, or could also be a set of inter-atomic distances and angles.
Given r, the energy as a function of the positions, E(r), is the value of E(r) for all r of interest. Using the landscape analogy from the introduction, E gives the height on the "energy landscape" so that the concept of a potential energy surface arises.
To study a chemical reaction using the PES as a function of atomic positions, it is necessary to calculate the energy for every atomic arrangement of interest. Methods of calculating the energy of a particular atomic arrangement are well described in the computational chemistry article, and the emphasis here will be on finding approximations of E(r) to yield fine-grained energy-position information.
For very simple chemical systems or when simplifying approximations are made about inter-atomic interactions, it is sometimes possible to use an analytically derived expression for the energy as a function of the atomic positions. An example is the London-Eyring-Polanyi-Sato potential for the system H + H2 as a function of the three H-H distances.
For more complicated systems, calculation of the energy of a particular arrangement of atoms is often too computationally expensive for large scale representations of the surface to be feasible. For these systems a possible approach is to calculate only a reduced set of points on the PES and then use a computationally cheaper interpolation method, for example Shepard interpolation, to fill in the gaps.
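As a hedged illustration of this idea, the sketch below samples a cheap analytic two-variable energy function on a coarse grid and interpolates it onto new geometries with SciPy; in practice the coarse-grid energies would come from expensive electronic-structure calculations rather than a model function:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Model "PES" over two coordinates (e.g. two bond lengths), purely illustrative.
def model_energy(r1, r2):
    return (r1 - 1.0) ** 2 + (r2 - 1.2) ** 2 + 0.3 * (r1 - 1.0) * (r2 - 1.2)

# Coarse grid of "expensive" points.
r1 = np.linspace(0.6, 1.8, 7)
r2 = np.linspace(0.6, 1.8, 7)
E_coarse = model_energy(*np.meshgrid(r1, r2, indexing="ij"))

# Cheap interpolation to fill in the gaps between computed points.
pes = RegularGridInterpolator((r1, r2), E_coarse, method="linear")
print(pes([[1.05, 1.15]]))   # interpolated energy at a new geometry
```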
== Application ==
A PES is a conceptual tool for aiding the analysis of molecular geometry and chemical reaction dynamics. Once the necessary points are evaluated on a PES, the points can be classified according to the first and second derivatives of the energy with respect to position, which respectively are the gradient and the curvature. Stationary points (or points with a zero gradient) have physical meaning: energy minima correspond to physically stable chemical species and saddle points correspond to transition states, the highest energy point on the reaction coordinate (which is the lowest energy pathway connecting a chemical reactant to a chemical product).
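The following short Python sketch (the two-variable model surface and the finite-difference step sizes are illustrative assumptions) evaluates the gradient and Hessian of an analytic PES numerically and classifies a stationary point from the Hessian eigenvalues:

```python
import numpy as np

def energy(q):
    # Illustrative 2D model surface with a saddle point at the origin.
    x, y = q
    return x**2 - y**2 + 0.1 * x * y

def gradient(f, q, h=1e-5):
    q = np.asarray(q, dtype=float)
    g = np.zeros_like(q)
    for i in range(len(q)):
        dq = np.zeros_like(q); dq[i] = h
        g[i] = (f(q + dq) - f(q - dq)) / (2 * h)
    return g

def hessian(f, q, h=1e-4):
    q = np.asarray(q, dtype=float)
    n = len(q)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            di = np.zeros(n); di[i] = h
            dj = np.zeros(n); dj[j] = h
            H[i, j] = (f(q + di + dj) - f(q + di - dj)
                       - f(q - di + dj) + f(q - di - dj)) / (4 * h**2)
    return H

q0 = np.array([0.0, 0.0])
g = gradient(energy, q0)
eigs = np.linalg.eigvalsh(hessian(energy, q0))
if np.linalg.norm(g) < 1e-6:
    kind = "minimum" if np.all(eigs > 0) else "saddle point (transition state)"
    print("stationary point:", kind, "| Hessian eigenvalues:", eigs)
```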
The term is useful when examining protein folding; while a protein can theoretically exist in a nearly infinite number of conformations along its energy landscape, in reality proteins fold (or "relax") into secondary and tertiary structures that possess the lowest possible free energy. The key concept in the energy landscape approach to protein folding is the folding funnel hypothesis.
In catalysis, when designing new catalysts or refining existing ones, energy landscapes are considered to avoid low-energy or high-energy intermediates that could halt the reaction or demand excessive energy to reach the final products.
In models of glasses, the local minima of an energy landscape correspond to metastable low-temperature states of a thermodynamic system.
In machine learning, artificial neural networks may be analyzed using analogous approaches. For example, a neural network may be able to perfectly fit the training set, corresponding to a global minimum of zero loss, but overfitting the model ("learning the noise" or "memorizing the training set"). Understanding when this happens can be studied using the geometry of the corresponding energy landscape.
== Attractive and repulsive surfaces ==
Potential energy surfaces for chemical reactions can be classified as attractive or repulsive by comparing the extensions of the bond lengths in the activated complex relative to those of the reactants and products. For a reaction of type A + B—C → A—B + C, the bond length extension for the newly formed A—B bond is defined as R*AB = RAB − R0AB, where RAB is the A—B bond length in the transition state and R0AB in the product molecule. Similarly for the bond which is broken in the reaction, R*BC = RBC − R0BC, where R0BC refers to the reactant molecule.
For exothermic reactions, a PES is classified as attractive (or early-downhill) if R*AB > R*BC, so that the transition state is reached while the reactants are approaching each other. After the transition state, the A—B bond length continues to decrease, so that much of the liberated reaction energy is converted into vibrational energy of the A—B bond. An example is the harpoon reaction K + Br2 → K—Br + Br, in which the initial long-range attraction of the reactants leads to an activated complex resembling K+•••Br−•••Br. The vibrationally excited populations of product molecules can be detected by infrared chemiluminescence.
In contrast the PES for the reaction H + Cl2 → HCl + Cl is repulsive (or late-downhill) because R*HCl < R*ClCl and the transition state is reached when the products are separating. For this reaction in which the atom A (here H) is lighter than B and C, the reaction energy is released primarily as translational kinetic energy of the products. For a reaction such as F + H2 → HF + H in which atom A is heavier than B and C, there is mixed energy release, both vibrational and translational, even though the PES is repulsive.
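A minimal sketch of this bookkeeping (the bond lengths below are invented numbers, purely for illustration) classifies an exothermic surface from the bond-length extensions defined above:

```python
def classify_exothermic_pes(R_AB_ts, R_AB_eq, R_BC_ts, R_BC_eq):
    """Classify an exothermic A + B-C -> A-B + C surface:
    attractive (early barrier) if R*_AB > R*_BC, repulsive (late) otherwise."""
    ext_AB = R_AB_ts - R_AB_eq   # R*_AB: extension of the forming bond
    ext_BC = R_BC_ts - R_BC_eq   # R*_BC: extension of the breaking bond
    return "attractive (early-downhill)" if ext_AB > ext_BC else "repulsive (late-downhill)"

# Invented transition-state and equilibrium bond lengths (angstrom).
print(classify_exothermic_pes(R_AB_ts=2.0, R_AB_eq=1.4, R_BC_ts=0.8, R_BC_eq=0.75))
```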
For endothermic reactions, the type of surface determines the type of energy which is most effective in bringing about reaction. Translational energy of the reactants is most effective at inducing reactions with an attractive surface, while vibrational excitation (to higher vibrational quantum number v) is more effective for reactions with a repulsive surface. As an example of the latter case, the reaction F + HCl(v=1) → Cl + HF is about five times faster than F + HCl(v=0) → Cl + HF for the same total energy of HCl.
== History ==
The concept of a potential energy surface for chemical reactions was first suggested by the French physicist René Marcelin in 1913. The first semi-empirical calculation of a potential energy surface was proposed for the H + H2 reaction by Henry Eyring and Michael Polanyi in 1931. Eyring used potential energy surfaces to calculate reaction rate constants in the transition state theory in 1935.
== H + H2 two-dimensional PES ==
Potential energy surfaces are commonly shown as three-dimensional graphs, but they can also be represented by two-dimensional graphs, in which the advancement of the reaction is plotted by the use of isoenergetic lines.
The collinear system H + H2 is a simple reaction that allows a two-dimension PES to be plotted in an easy and understandable way.
In this reaction, a hydrogen atom (H) reacts with a dihydrogen molecule (H2) by forming a new bond with one atom from the molecule, which in turn breaks the bond of the original molecule. This is symbolized as Ha + Hb–Hc → Ha–Hb + Hc. The progression of the reaction from reactants (Ha + Hb–Hc) to products (Ha–Hb + Hc), as well as the energy of the species that take part in the reaction, are well defined on the corresponding potential energy surface.
Energy profiles describe potential energy as a function of geometrical variables (PES in any dimension are independent of time and temperature).
We have different relevant elements in the 2-D PES:
The 2-D plot shows the minima points where we find reactants, the products and the saddle point or transition state.
The transition state is a maximum in the reaction coordinate and a minimum in the coordinate perpendicular to the reaction path.
The advance of time describes a trajectory in every reaction. Depending on the conditions of the reaction, the process will follow different paths toward product formation, plotted between the two axes.
== See also ==
Computational chemistry
Energy minimization (or geometry optimization)
Energy profile (chemistry)
Potential well
Reaction coordinate
== References ==
== Bibliography ==
Schön, J. C. (5 August 2024). "Energy landscapes—Past, present, and future: A perspective". Journal of Chemical Physics. 161 (5): 050901. Bibcode:2024JChPh.161e0901S. doi:10.1063/5.0212867. Retrieved 17 December 2024. | Wikipedia/Potential_energy_surface |
Dynamical mean-field theory (DMFT) is a method to determine the electronic structure of strongly correlated materials. In such materials, the approximation of independent electrons, which is used in density functional theory and usual band structure calculations, breaks down. Dynamical mean-field theory, a non-perturbative treatment of local interactions between electrons, bridges the gap between the nearly free electron gas limit and the atomic limit of condensed-matter physics.
DMFT consists in mapping a many-body lattice problem to a many-body local problem, called an impurity model. While the lattice problem is in general intractable, the impurity model is usually solvable through various schemes. The mapping in itself does not constitute an approximation. The only approximation made in ordinary DMFT schemes is to assume the lattice self-energy to be a momentum-independent (local) quantity. This approximation becomes exact in the limit of lattices with an infinite coordination.
One of DMFT's main successes is to describe the phase transition between a metal and a Mott insulator when the strength of electronic correlations is increased. It has been successfully applied to real materials, in combination with the local density approximation of density functional theory.
== Relation to mean-field theory ==
The DMFT treatment of lattice quantum models is similar to the mean-field theory (MFT) treatment of classical models such as the Ising model. In the Ising model, the lattice problem is mapped onto an effective single site problem, whose magnetization is to reproduce the lattice magnetization through an effective "mean-field". This condition is called the self-consistency condition. It stipulates that the single-site observables should reproduce the lattice "local" observables by means of an effective field. While the N-site Ising Hamiltonian is hard to solve analytically (to date, analytical solutions exist only for the 1D and 2D case), the single-site problem is easily solved.
Likewise, DMFT maps a lattice problem (e.g. the Hubbard model) onto a single-site problem. In DMFT, the local observable is the local Green's function. Thus, the self-consistency condition for DMFT is for the impurity Green's function to reproduce the lattice local Green's function through an effective mean-field which, in DMFT, is the hybridization function {\displaystyle \Delta (\tau )} of the impurity model. DMFT owes its name to the fact that the mean-field {\displaystyle \Delta (\tau )} is time-dependent, or dynamical. This also points to the major difference between the Ising MFT and DMFT: Ising MFT maps the N-spin problem into a single-site, single-spin problem. DMFT maps the lattice problem onto a single-site problem, but the latter fundamentally remains an N-body problem which captures the temporal fluctuations due to electron-electron correlations.
== Description of DMFT for the Hubbard model ==
=== The DMFT mapping ===
==== Single-orbital Hubbard model ====
The Hubbard model describes the onsite interaction between electrons of opposite spin by a single parameter, {\displaystyle U}. The Hubbard Hamiltonian may take the following form:
{\displaystyle H_{\text{Hubbard}}=t\sum _{\langle ij\rangle \sigma }c_{i\sigma }^{\dagger }c_{j\sigma }+U\sum _{i}n_{i\uparrow }n_{i\downarrow }}
where, on suppressing the spin-1/2 indices {\displaystyle \sigma }, {\displaystyle c_{i}^{\dagger },c_{i}} denote the creation and annihilation operators of an electron on a localized orbital on site {\displaystyle i}, and {\displaystyle n_{i}=c_{i}^{\dagger }c_{i}}.
The following assumptions have been made:
only one orbital contributes to the electronic properties (as might be the case of copper atoms in superconducting cuprates, whose {\displaystyle d}-bands are non-degenerate),
the orbitals are so localized that only nearest-neighbor hopping {\displaystyle t} is taken into account
==== The auxiliary problem: the Anderson impurity model ====
The Hubbard model is in general intractable under usual perturbation expansion techniques. DMFT maps this lattice model onto the so-called Anderson impurity model (AIM). This model describes the interaction of one site (the impurity) with a "bath" of electronic levels (described by the annihilation and creation operators {\displaystyle a_{p\sigma }} and {\displaystyle a_{p\sigma }^{\dagger }}) through a hybridization function. The Anderson model corresponding to our single-site model is a single-orbital Anderson impurity model, whose Hamiltonian formulation, on suppressing some spin-1/2 indices {\displaystyle \sigma }, is:
{\displaystyle H_{\text{AIM}}=\underbrace {\sum _{p}\epsilon _{p}a_{p}^{\dagger }a_{p}} _{H_{\text{bath}}}+\underbrace {\sum _{p\sigma }\left(V_{p}^{\sigma }c_{\sigma }^{\dagger }a_{p\sigma }+h.c.\right)} _{H_{\text{mix}}}+\underbrace {Un_{\uparrow }n_{\downarrow }-\mu \left(n_{\uparrow }+n_{\downarrow }\right)} _{H_{\text{loc}}}}
where
{\displaystyle H_{\text{bath}}} describes the non-correlated electronic levels {\displaystyle \epsilon _{p}} of the bath,
{\displaystyle H_{\text{loc}}} describes the impurity, where two electrons interact at the energy cost {\displaystyle U},
{\displaystyle H_{\text{mix}}} describes the hybridization (or coupling) between the impurity and the bath through hybridization terms {\displaystyle V_{p}^{\sigma }}.
The Matsubara Green's function of this model, defined by {\displaystyle G_{\text{imp}}(\tau )=-\langle Tc(\tau )c^{\dagger }(0)\rangle }, is entirely determined by the parameters {\displaystyle U,\mu } and the so-called hybridization function {\displaystyle \Delta _{\sigma }(i\omega _{n})=\sum _{p}{\frac {|V_{p}^{\sigma }|^{2}}{i\omega _{n}-\epsilon _{p}}}}, which is the imaginary-time Fourier transform of {\displaystyle \Delta _{\sigma }(\tau )}.
This hybridization function describes the dynamics of electrons hopping in and out of the bath. It should reproduce the lattice dynamics such that the impurity Green's function is the same as the local lattice Green's function. It is related to the non-interacting Green's function by the relation:
{\displaystyle ({\mathcal {G}}_{0})^{-1}(i\omega _{n})=i\omega _{n}+\mu -\Delta (i\omega _{n})}    (1)
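As a small, hedged illustration, the snippet below builds the hybridization function on the Matsubara axis from an arbitrary discrete set of bath levels and couplings (made-up numbers), and then the corresponding non-interacting Green's function from relation (1):

```python
import numpy as np

beta = 10.0                                    # inverse temperature (illustrative)
wn = (2 * np.arange(64) + 1) * np.pi / beta    # fermionic Matsubara frequencies
iwn = 1j * wn

# Made-up bath discretization: levels eps_p and couplings V_p.
eps_p = np.array([-1.0, -0.3, 0.3, 1.0])
V_p = np.array([0.4, 0.5, 0.5, 0.4])
mu = 0.0

# Hybridization function  Delta(iw_n) = sum_p |V_p|^2 / (iw_n - eps_p)
delta = np.array([np.sum(np.abs(V_p) ** 2 / (z - eps_p)) for z in iwn])

# Weiss field / non-interacting Green's function from relation (1):
g0 = 1.0 / (iwn + mu - delta)
print(g0[:3])
```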
Solving the Anderson impurity model consists in computing observables such as the interacting Green's function {\displaystyle G(i\omega _{n})} for a given hybridization function {\displaystyle \Delta (i\omega _{n})} and given {\displaystyle U,\mu }. It is a difficult but not intractable problem. There exist a number of ways to solve the AIM, such as
Numerical renormalization group
Exact diagonalization
Iterative perturbation theory
Non-crossing approximation
Continuous-time quantum Monte Carlo algorithms
=== Self-consistency equations ===
The self-consistency condition requires the impurity Green's function {\displaystyle G_{\mathrm {imp} }(\tau )} to coincide with the local lattice Green's function {\displaystyle G_{ii}(\tau )=-\langle Tc_{i}(\tau )c_{i}^{\dagger }(0)\rangle }:
{\displaystyle G_{\mathrm {imp} }(i\omega _{n})=G_{ii}(i\omega _{n})=\sum _{k}{\frac {1}{i\omega _{n}+\mu -\epsilon (k)-\Sigma (k,i\omega _{n})}}=G_{\mathrm {loc} }(i\omega _{n})}
where {\displaystyle \Sigma (k,i\omega _{n})} denotes the lattice self-energy.
=== DMFT approximation: locality of the lattice self-energy ===
The only DMFT approximation (apart from the approximation that can be made in order to solve the Anderson model) consists in neglecting the spatial fluctuations of the lattice self-energy, by equating it to the impurity self-energy:
{\displaystyle \Sigma (k,i\omega _{n})\approx \Sigma _{imp}(i\omega _{n})}
This approximation becomes exact in the limit of lattices with infinite coordination, that is when the number of neighbors of each site is infinite. Indeed, one can show that in the diagrammatic expansion of the lattice self-energy, only local diagrams survive when one goes into the infinite coordination limit.
Thus, as in classical mean-field theories, DMFT is supposed to get more accurate as the dimensionality (and thus the number of neighbors) increases. Put differently, for low dimensions, spatial fluctuations will render the DMFT approximation less reliable.
Spatial fluctuations also become relevant in the vicinity of phase transitions. Here, DMFT and classical mean-field theories result in mean-field critical exponents; the pronounced changes before the phase transition are not reflected in the DMFT self-energy.
=== The DMFT loop ===
In order to find the local lattice Green's function, one has to determine the hybridization function such that the corresponding impurity Green's function will coincide with the sought-after local lattice Green's function.
The most widespread way of solving this problem is by using a forward recursion method, namely, for a given {\displaystyle U}, {\displaystyle \mu } and temperature {\displaystyle T}:
Start with a guess for {\displaystyle \Sigma (k,i\omega _{n})} (typically, {\displaystyle \Sigma (k,i\omega _{n})=0})
Make the DMFT approximation: {\displaystyle \Sigma (k,i\omega _{n})\approx \Sigma _{\mathrm {imp} }(i\omega _{n})}
Compute the local Green's function {\displaystyle G_{\mathrm {loc} }(i\omega _{n})}
Compute the dynamical mean field {\displaystyle \Delta (i\omega _{n})=i\omega _{n}+\mu -G_{\mathrm {loc} }^{-1}(i\omega _{n})-\Sigma _{\mathrm {imp} }(i\omega _{n})}
Solve the AIM for a new impurity Green's function {\displaystyle G_{\mathrm {imp} }(i\omega _{n})}, and extract its self-energy: {\displaystyle \Sigma _{\mathrm {imp} }(i\omega _{n})=({\mathcal {G}}_{0})^{-1}(i\omega _{n})-(G_{\mathrm {imp} })^{-1}(i\omega _{n})}
Go back to step 2 until convergence, namely when {\displaystyle G_{\mathrm {imp} }^{n}=G_{\mathrm {imp} }^{n+1}}.
== Applications ==
The local lattice Green's function and other impurity observables can be used to calculate a number of physical quantities as a function of the correlation strength {\displaystyle U}, bandwidth, filling (chemical potential {\displaystyle \mu }), and temperature {\displaystyle T}:
the spectral function (which gives the band structure)
the kinetic energy
the double occupancy of a site
response functions (compressibility, optical conductivity, specific heat)
In particular, the drop of the double occupancy as {\displaystyle U} increases is a signature of the Mott transition.
== Extensions of DMFT ==
DMFT has several extensions, extending the above formalism to multi-orbital, multi-site problems, long-range correlations and non-equilibrium.
=== Multi-orbital extension ===
DMFT can be extended to Hubbard models with multiple orbitals, namely with electron-electron interactions of the form {\displaystyle U_{\alpha \beta }n_{\alpha }n_{\beta }}, where {\displaystyle \alpha } and {\displaystyle \beta } denote different orbitals. The combination with density functional theory (DFT+DMFT) then allows for a realistic calculation of correlated materials.
=== Extended DMFT ===
Extended DMFT yields a local impurity self energy for non-local interactions and hence allows us to apply DMFT for more general models such as the t-J model.
=== Cluster DMFT ===
In order to improve on the DMFT approximation, the Hubbard model can be mapped on a multi-site impurity (cluster) problem, which allows one to add some spatial dependence to the impurity self-energy. Clusters contain 4 to 8 sites at low temperature and up to 100 sites at high temperature.
The Typical Medium Dynamical Cluster Approximation (TMDCA) is a non-perturbative approach for obtaining the electronic ground state of strongly correlated many-body systems, built on the dynamical cluster approximation (DCA).
=== Diagrammatic extensions ===
Spatial dependencies of the self energy beyond DMFT, including long-range correlations in the vicinity of a phase transition, can be obtained also through diagrammatic extensions of DMFT using a combination of analytical and numerical techniques. The starting point of the dynamical vertex approximation and of the dual fermion approach is the local two-particle vertex.
=== Non-equilibrium ===
DMFT has been employed to study non-equilibrium transport and optical excitations. Here, the reliable calculation of the AIM's Green function out of equilibrium remains a big challenge. DMFT has also been applied to ecological models in order to describe the mean-field dynamics of a community with a thermodynamic number of species.
== References and notes ==
== See also ==
Strongly correlated material
== External links ==
Strongly Correlated Materials: Insights From Dynamical Mean-Field Theory G. Kotliar and D. Vollhardt
Lecture notes on the LDA+DMFT approach to strongly correlated materials Eva Pavarini, Erik Koch, Dieter Vollhardt, and Alexander Lichtenstein (eds.)
Lecture notes DMFT at 25: Infinite Dimensions Eva Pavarini, Erik Koch, Dieter Vollhardt, and Alexander Lichtenstein (eds.)
Lecture notes DMFT – From Infinite Dimensions to Real Materials Eva Pavarini, Erik Koch, Dieter Vollhardt, and Alexander Lichtenstein (eds.)
Lecture notes Dynamical Mean-Field Theory of Correlated Electrons Eva Pavarini, Erik Koch, Dieter Vollhardt, and Alexander Lichtenstein (eds.)
DMFT for two-site Hubbard dimer: in Dynamical Mean-Field Theory for Materials, Eva Pavarini
https://www.cond-mat.de/events/correl21/manuscripts/pavarini.pdf
DMFT for two-site Hubbard dimer: in Solving the strong-correlation problem in materials, Eva Pavarini
https://doi.org/10.1007/s40766-021-00025-8 | Wikipedia/Dynamical_mean_field_theory |
In statistical mechanics, the correlation function is a measure of the order in a system, as characterized by a mathematical correlation function. Correlation functions describe how microscopic variables, such as spin and density, at different positions or times are related. More specifically, correlation functions measure quantitatively the extent to which microscopic variables fluctuate together, on average, across space and/or time. Note that correlation does not automatically imply causation: a non-zero correlation between two points in space or time does not mean there is a direct causal link between them. A correlation can exist without any causal relationship, either coincidentally or because both points are driven by underlying confounding variables that cause them to covary statistically.
A classic example of spatial correlation can be seen in ferromagnetic and antiferromagnetic materials. In these materials, atomic spins tend to align in parallel and antiparallel configurations with their adjacent counterparts, respectively. The figure on the right visually represents this spatial correlation between spins in such materials.
== Definitions ==
The most common definition of a correlation function is the canonical ensemble (thermal) average of the scalar product of two random variables, {\displaystyle s_{1}} and {\displaystyle s_{2}}, at positions {\displaystyle R} and {\displaystyle R+r} and times {\displaystyle t} and {\displaystyle t+\tau }:
{\displaystyle C(r,\tau )=\langle \mathbf {s_{1}} (R,t)\cdot \mathbf {s_{2}} (R+r,t+\tau )\rangle -\langle \mathbf {s_{1}} (R,t)\rangle \langle \mathbf {s_{2}} (R+r,t+\tau )\rangle \,.}
Here the brackets, {\displaystyle \langle \cdot \rangle }, indicate the above-mentioned thermal average. It is important to note, however, that while the brackets are called an average, they are calculated as an expected value, not an average value. It is a matter of convention whether one subtracts the uncorrelated average product of {\displaystyle s_{1}} and {\displaystyle s_{2}}, {\displaystyle \langle \mathbf {s_{1}} (R,t)\rangle \langle \mathbf {s_{2}} (R+r,t+\tau )\rangle }, from the correlated product, {\displaystyle \langle \mathbf {s_{1}} (R,t)\cdot \mathbf {s_{2}} (R+r,t+\tau )\rangle }, with the convention differing among fields. The most common uses of correlation functions are when {\displaystyle s_{1}} and {\displaystyle s_{2}} describe the same variable, such as a spin-spin correlation function, or a particle position-position correlation function in an elemental liquid or a solid (often called a radial distribution function or a pair correlation function). Correlation functions between the same random variable are autocorrelation functions. However, in statistical mechanics, not all correlation functions are autocorrelation functions. For example, in multicomponent condensed phases, the pair correlation function between different elements is often of interest. Such mixed-element pair correlation functions are an example of cross-correlation functions, as the random variables {\displaystyle s_{1}} and {\displaystyle s_{2}} represent the average variations in density as a function of position for two distinct elements.
=== Equilibrium equal-time (spatial) correlation functions ===
Often, one is interested solely in the spatial influence of a given random variable, say the direction of a spin, on its local environment, without considering later times, {\displaystyle \tau }. In this case, we neglect the time evolution of the system, so the above definition is re-written with {\displaystyle \tau =0}. This defines the equal-time correlation function, {\displaystyle C(r,0)}. It is written as:
{\displaystyle C(r,0)=\langle \mathbf {s_{1}} (R,t)\cdot \mathbf {s_{2}} (R+r,t)\rangle -\langle \mathbf {s_{1}} (R,t)\rangle \langle \mathbf {s_{2}} (R+r,t)\rangle \,.}
Often, one omits the reference time, {\displaystyle t}, and reference radius, {\displaystyle R}, by assuming equilibrium (and thus time invariance of the ensemble) and averaging over all sample positions, yielding:
{\displaystyle C(r)=\langle \mathbf {s_{1}} (0)\cdot \mathbf {s_{2}} (r)\rangle -\langle \mathbf {s_{1}} (0)\rangle \langle \mathbf {s_{2}} (r)\rangle }
where, again, the choice of whether to subtract the uncorrelated variables differs among fields. The Radial distribution function is an example of an equal-time correlation function where the uncorrelated reference is generally not subtracted. Other equal-time spin-spin correlation functions are shown on this page for a variety of materials and conditions.
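As an illustration of the equal-time definition above, the following short Python sketch estimates C(r) for a periodic one-dimensional chain of scalar variables by averaging over all reference sites and over a set of sample configurations. The configurations are synthetic (smoothed white noise), chosen only so that the estimator has a short-ranged correlation to measure; none of this comes from the article itself.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_samples = 100, 200
# Toy configurations: a correlated field generated by smoothing white noise.
configs = np.array([np.convolve(rng.normal(size=L), np.ones(5) / 5, mode="same")
                    for _ in range(n_samples)])

def equal_time_correlation(configs):
    """Estimate C(r) = <s(R) s(R+r)> - <s>^2, averaged over sites R and samples."""
    mean = configs.mean()                       # <s>, assuming translational invariance
    C = np.empty(configs.shape[1])
    for r in range(configs.shape[1]):
        shifted = np.roll(configs, -r, axis=1)  # s(R + r) with periodic boundaries
        C[r] = (configs * shifted).mean() - mean ** 2
    return C

C = equal_time_correlation(configs)
print(C[:5])   # decays with separation r for this short-range-correlated field
```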
=== Equilibrium equal-position (temporal) correlation functions ===
One might also be interested in the temporal evolution of microscopic variables. In other words, how the value of a microscopic variable at a given position and time, {\displaystyle R} and {\displaystyle t}, influences the value of the same microscopic variable at a later time, {\displaystyle t+\tau } (and usually at the same position). Such temporal correlations are quantified via equal-position correlation functions, {\displaystyle C(0,\tau )}. They are defined analogously to the above equal-time correlation functions, but we now neglect spatial dependencies by setting {\displaystyle r=0}, yielding:
{\displaystyle C(0,\tau )=\langle \mathbf {s_{1}} (R,t)\cdot \mathbf {s_{2}} (R,t+\tau )\rangle -\langle \mathbf {s_{1}} (R,t)\rangle \langle \mathbf {s_{2}} (R,t+\tau )\rangle \,.}
Assuming equilibrium (and thus time invariance of the ensemble) and averaging over all sites in the sample gives a simpler expression for the equal-position correlation function, just as for the equal-time correlation function:
{\displaystyle C(\tau )=\langle \mathbf {s_{1}} (0)\cdot \mathbf {s_{2}} (\tau )\rangle -\langle \mathbf {s_{1}} (0)\rangle \langle \mathbf {s_{2}} (\tau )\rangle \,.}
The above assumption may seem non-intuitive at first: how can an ensemble which is time-invariant have a non-uniform temporal correlation function? Temporal correlations remain relevant to talk about in equilibrium systems because a time-invariant, macroscopic ensemble can still have non-trivial temporal dynamics microscopically. One example is in diffusion. A single-phase system at equilibrium has a homogeneous composition macroscopically. However, if one watches the microscopic movement of each atom, fluctuations in composition are constantly occurring due to the quasi-random walks taken by the individual atoms. Statistical mechanics allows one to make insightful statements about the temporal behavior of such fluctuations of equilibrium systems. This is discussed below in the section on the temporal evolution of correlation functions and Onsager's regression hypothesis.
=== Time correlation function ===
The time correlation function plays as significant a role in nonequilibrium statistical mechanics as the partition function does in equilibrium statistical mechanics. For instance, transport coefficients are closely related to time correlation functions through the Fourier transform; and the Green-Kubo relations, used to calculate relaxation and dissipation processes in a system, are expressed in terms of equilibrium time correlation functions. The time correlation function of two observables {\displaystyle A} and {\displaystyle B} is defined as
{\displaystyle C_{AB}(t_{1},t_{2})=\langle A(t_{1})B(t_{2})\rangle }
and this definition applies to both the classical and quantum versions. For a stationary (equilibrium) system, the time origin is irrelevant, and {\displaystyle C_{AB}(\tau )=C_{AB}(t_{1},t_{2})}, with {\displaystyle \tau =t_{2}-t_{1}} as the time difference.
The explicit expression of the classical time correlation function is
{\displaystyle C_{AB}(t)=\int d^{N}\mathbf {r} d^{N}\mathbf {p} f(\mathbf {r} _{0},\mathbf {p} _{0})A(\mathbf {r} _{0},\mathbf {p} _{0})B(\mathbf {r} _{t},\mathbf {p} _{t})}
where {\displaystyle A(\mathbf {r} _{0},\mathbf {p} _{0})} is the value of {\displaystyle A} at time {\displaystyle t=0}, {\displaystyle B(\mathbf {r} _{t},\mathbf {p} _{t})} is the value of {\displaystyle B} at time {\displaystyle t} given the initial state {\displaystyle (\mathbf {r} _{0},\mathbf {p} _{0})}, and {\displaystyle f(\mathbf {r} _{0},\mathbf {p} _{0})} is the phase space distribution function for the initial state. If ergodicity is assumed, then the ensemble average equals the time average over a long trajectory; mathematically,
{\displaystyle C_{AB}(\tau )=\langle A(\tau )B(0)\rangle =\lim _{T\to \infty }{\frac {1}{T}}\int _{0}^{T-\tau }dt\,A(t+\tau )B(t)}
Scanning different time windows {\displaystyle \tau } gives the time correlation function. As {\displaystyle t\to 0}, the correlation function {\displaystyle C_{AB}(0)=\langle AB\rangle }, while as {\displaystyle t\to \infty }, we may assume the correlation vanishes and {\displaystyle \lim _{t\to \infty }C_{AB}(t)=\langle A\rangle \langle B\rangle }.
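The time-average estimator above translates directly into code. The sketch below is illustrative only: the trajectory is a synthetic relaxation process, and a discretized sum over time origins stands in for the integral.

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, dt = 10000, 0.01
# Toy stationary signal: an Ornstein-Uhlenbeck-like relaxation process.
A = np.empty(n_steps)
A[0] = 0.0
for i in range(1, n_steps):
    A[i] = A[i - 1] - 0.5 * A[i - 1] * dt + np.sqrt(dt) * rng.normal()
B = A  # autocorrelation case, C_AA(tau)

def time_correlation(A, B, max_lag):
    """C_AB(lag) estimated by averaging A(t + lag) * B(t) over all time origins t."""
    C = np.empty(max_lag)
    for lag in range(max_lag):
        C[lag] = np.mean(A[lag:] * B[: len(B) - lag])
    return C

C = time_correlation(A, B, max_lag=500)
print(C[0], C[-1])  # C(0) ~ <A*B>; C decays toward <A><B> (about zero here) at long lag
```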
Correspondingly, the quantum time correlation function is, in the canonical ensemble,
{\displaystyle C_{AB}(t)={\frac {1}{Q(N,V,T)}}{\text{Tr}}\left[e^{-\beta {\hat {H}}}{\hat {A}}e^{i{\hat {H}}t/\hbar }{\hat {B}}e^{-i{\hat {H}}t/\hbar }\right]}
where {\displaystyle {\hat {A}}} and {\displaystyle {\hat {B}}} are quantum operators, and {\displaystyle {\hat {B}}(t)=e^{i{\hat {H}}t/\hbar }{\hat {B}}(0)e^{-i{\hat {H}}t/\hbar }} in the Heisenberg picture. Evaluating the (non-symmetrized) quantum time correlation function by expanding the trace in the energy eigenstates gives
{\displaystyle C_{AB}(t)={\frac {1}{Q(N,V,T)}}\sum _{j,k}e^{-\beta E_{j}}e^{i(E_{k}-E_{j})t/\hbar }A_{jk}B_{kj}}
Evaluating the quantum time correlation function exactly in this way is very expensive and cannot be applied to large systems with many degrees of freedom. Nevertheless, the semiclassical initial value representation (SC-IVR) is a family of methods for approximating the quantum time correlation function from this definition.
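As a concrete toy illustration of why the exact evaluation scales badly, the eigenstate sum above can be computed directly for a small random Hermitian Hamiltonian; the double sum costs the square of the Hilbert-space dimension per time point, which becomes prohibitive for a many-body system. The matrices below are arbitrary stand-ins, and the Boltzmann factor carries the conventional minus sign.

```python
import numpy as np

hbar, beta = 1.0, 2.0
rng = np.random.default_rng(2)

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

H, A, B = (random_hermitian(6) for _ in range(3))
E, V = np.linalg.eigh(H)                   # eigenvalues and eigenvectors of the toy Hamiltonian
A_eig = V.conj().T @ A @ V                 # matrix elements A_jk in the eigenbasis
B_eig = V.conj().T @ B @ V
Q = np.sum(np.exp(-beta * E))              # canonical partition function

def C_AB(t):
    """C_AB(t) = (1/Q) sum_{j,k} exp(-beta E_j) exp(i (E_k - E_j) t / hbar) A_jk B_kj."""
    phases = np.exp(1j * (E[None, :] - E[:, None]) * t / hbar)
    boltz = np.exp(-beta * E)[:, None]
    return np.sum(boltz * phases * A_eig * B_eig.T) / Q

print(C_AB(0.0), C_AB(1.0))
```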
Additionally, there are two alternative quantum time correlation functions, and they are both related to the definition of the quantum time correlation function in Fourier space. The first symmetrized correlation function {\displaystyle G_{AB}(t)} is defined by
{\displaystyle G_{AB}(t)={\frac {1}{Q(N,V,T)}}{\text{Tr}}\left[{\hat {A}}e^{i{\hat {H}}\tau _{c}^{*}/\hbar }{\hat {B}}e^{-i{\hat {H}}\tau _{c}/\hbar }\right]}
with {\displaystyle \tau _{c}\equiv t-i\beta \hbar /2} as a complex time variable. {\displaystyle G_{AB}(t)} is related to the quantum time correlation function by
{\displaystyle {\tilde {C}}_{AB}(\omega )=e^{\beta \hbar \omega /2}{\tilde {G}}_{AB}(\omega )}
The second symmetrized (Kubo transformed) correlation function is,
{\displaystyle K_{AB}(t)={\frac {1}{\beta Q(N,V,T)}}\int _{0}^{\beta }d\lambda \operatorname {Tr} \left[e^{-(\beta -\lambda ){\hat {H}}}{\hat {A}}e^{-\lambda {\hat {H}}}e^{i{\hat {H}}t/\hbar }{\hat {B}}e^{-i{\hat {H}}t/\hbar }\right]}
and {\displaystyle K_{AB}(t)} reduces to its classical counterpart in both the high-temperature and harmonic limits. {\displaystyle K_{AB}(t)} is related to the quantum time correlation function by
{\displaystyle {\tilde {C}}_{AB}(\omega )=\left[{\frac {\beta \hbar \omega }{1-e^{-\beta \hbar \omega }}}\right]{\tilde {K}}_{AB}(\omega )}
The symmetrized quantum time correlation functions are easier to evaluate, and the Fourier-transform relations make them applicable to calculating spectra, transport coefficients, etc. The quantum time correlation function can also be approximated using path integral molecular dynamics.
=== Generalization beyond equilibrium correlation functions ===
All of the above correlation functions have been defined in the context of equilibrium statistical mechanics. However, it is possible to define correlation functions for systems away from equilibrium. Examining the general definition of {\displaystyle C(r,\tau )}, it is clear that one can define the random variables used in these correlation functions, such as atomic positions and spins, away from equilibrium. As such, their scalar product is well-defined away from equilibrium. The operation which is no longer well-defined away from equilibrium is the average over the equilibrium ensemble. This averaging process for non-equilibrium systems is typically replaced by averaging the scalar product across the entire sample. This is typical in scattering experiments and computer simulations, and is often used to measure the radial distribution functions of glasses.
One can also define averages over states for systems perturbed slightly from equilibrium. See, for example, http://xbeams.chem.yale.edu/~batista/vaa/node56.html Archived 2018-12-25 at the Wayback Machine
== Measuring correlation functions ==
Correlation functions are typically measured with scattering experiments. For example, x-ray scattering experiments directly measure electron-electron equal-time correlations. From knowledge of elemental structure factors, one can also measure elemental pair correlation functions. See Radial distribution function for further information. Equal-time spin–spin correlation functions are measured with neutron scattering as opposed to x-ray scattering. Neutron scattering can also yield information on pair correlations as well. For systems composed of particles larger than about one micrometer, optical microscopy can be used to measure both equal-time and equal-position correlation functions. Optical microscopy is thus common for colloidal suspensions, especially in two dimensions.
== Time evolution of correlation functions ==
In 1931, Lars Onsager proposed that the regression of microscopic thermal fluctuations at equilibrium follows the macroscopic law of relaxation of small non-equilibrium disturbances. This is known as the Onsager regression hypothesis. As the values of microscopic variables separated by large timescales, {\displaystyle \tau }, should be uncorrelated beyond what we would expect from thermodynamic equilibrium, the evolution in time of a correlation function can be viewed from a physical standpoint as the system gradually 'forgetting' the initial conditions placed upon it via the specification of some microscopic variable. There is actually an intuitive connection between the time evolution of correlation functions and the time evolution of macroscopic systems: on average, the correlation function evolves in time in the same manner as if the system were prepared in the conditions specified by the correlation function's initial value and then allowed to evolve.
Equilibrium fluctuations of the system can be related to its response to external perturbations via the Fluctuation-dissipation theorem.
== The connection between phase transitions and correlation functions ==
Continuous phase transitions, such as order-disorder transitions in metallic alloys and ferromagnetic-paramagnetic transitions, involve a transition from an ordered to a disordered state. In terms of correlation functions, the equal-time correlation function is non-zero for all lattice points below the critical temperature, and is non-negligible for only a fairly small radius above the critical temperature. As the phase transition is continuous, the length over which the microscopic variables are correlated, {\displaystyle \xi }, must transition continuously from being infinite to finite when the material is heated through its critical temperature. This gives rise to a power-law dependence of the correlation function as a function of distance at the critical point. This is shown in the figure on the left for the case of a ferromagnetic material, with the quantitative details listed in the section on magnetism.
== Applications ==
=== Magnetism ===
In a spin system, the equal-time correlation function is especially well-studied. It describes the canonical ensemble (thermal) average of the scalar product of the spins at two lattice points over all possible orderings:
{\displaystyle C(r)=\langle \mathbf {s} (R)\cdot \mathbf {s} (R+r)\rangle -\langle \mathbf {s} (R)\rangle \langle \mathbf {s} (R+r)\rangle \,.}
Here the brackets mean the above-mentioned thermal average. Schematic plots of this function are shown for a ferromagnetic material below, at, and above its Curie temperature on the left.
Even in a magnetically disordered phase, spins at different positions are correlated, i.e., if the distance r is very small (compared to some length scale {\displaystyle \xi }), the interaction between the spins will cause them to be correlated.
The alignment that would naturally arise as a result of the interaction between spins is destroyed by thermal effects. At high temperatures exponentially-decaying correlations are observed with increasing distance, with the correlation function being given asymptotically by
{\displaystyle C(r)\approx {\frac {1}{r^{\vartheta }}}\exp {\left(-{\frac {r}{d}}\right)}\,,}
where r is the distance between spins, d is the dimension of the system, and {\displaystyle \vartheta } is an exponent whose value depends on whether the system is in the disordered phase (i.e. above the critical point) or in the ordered phase (i.e. below the critical point). At high temperatures, the correlation decays to zero exponentially with the distance between the spins. The same exponential decay as a function of radial distance is also observed below {\displaystyle T_{c}}, but with the limit at large distances being the mean magnetization {\displaystyle \langle M^{2}\rangle }. Precisely at the critical point, an algebraic behavior is seen
{\displaystyle C(r)\approx {\frac {1}{r^{(d-2+\eta )}}}\,,}
where {\displaystyle \eta } is a critical exponent, which does not have any simple relation with the non-critical exponent {\displaystyle \vartheta } introduced above.
For example, the exact solution of the two-dimensional Ising model (with short-ranged ferromagnetic interactions) gives precisely at criticality {\displaystyle \eta ={\frac {1}{4}}}, but above criticality {\displaystyle \vartheta ={\frac {1}{2}}} and below criticality {\displaystyle \vartheta =2}.
As the temperature is lowered, thermal disordering is lowered, and in a continuous phase transition the correlation length diverges, as the correlation length must transition continuously from a finite value above the phase transition, to infinite below the phase transition: {\displaystyle \xi \propto |T-T_{c}|^{-\nu }\,,}
with another critical exponent {\displaystyle \nu }. This power-law correlation is responsible for the scaling seen in these transitions. All exponents mentioned are independent of temperature. They are in fact universal, i.e. found to be the same in a wide variety of systems.
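For orientation, the two asymptotic forms quoted above can be compared numerically. The sketch below is an illustration only: it uses the 2D Ising exponents given in the text and, as an assumption, writes the exponential decay length as a correlation length ξ (playing the role of the denominator in the exponential form).

```python
import numpy as np

r = np.arange(1, 51, dtype=float)
d, eta, theta, xi = 2, 0.25, 0.5, 5.0

C_high_T   = np.exp(-r / xi) / r**theta      # off-critical: exponential decay
C_critical = 1.0 / r**(d - 2 + eta)          # at T_c: algebraic (power-law) decay

# At large r the exponential form falls far below the power law.
print(C_high_T[-1], C_critical[-1])
```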
=== Radial distribution functions ===
One common correlation function is the radial distribution function which is seen often in statistical mechanics and fluid mechanics. The correlation function can be calculated in exactly solvable models (one-dimensional Bose gas, spin chains, Hubbard model) by means of Quantum inverse scattering method and Bethe ansatz. In an isotropic XY model, time and temperature correlations were evaluated by Its, Korepin, Izergin & Slavnov.
==== Higher order correlation functions ====
Higher-order correlation functions involve multiple reference points, and are defined through a generalization of the above correlation function by taking the expected value of the product of more than two random variables:
{\displaystyle C_{i_{1}i_{2}\cdots i_{n}}(s_{1},s_{2},\cdots ,s_{n})=\langle X_{i_{1}}(s_{1})X_{i_{2}}(s_{2})\cdots X_{i_{n}}(s_{n})\rangle .}
However, such higher order correlation functions are relatively difficult to interpret and measure. For example, in order to measure the higher-order analogues of pair distribution functions, coherent x-ray sources are needed. Both the theory of such analysis and the experimental measurement of the needed X-ray cross-correlation functions are areas of active research.
== See also ==
Ornstein–Zernike equation
== References ==
== Further reading ==
Sethna, James P. (2006). "Chapter 10: Correlations, response, and dissipation". Statistical Mechanics: Entropy, Order Parameters, and Complexity. Oxford University Press. ISBN 978-0198566779.
Radial distribution function
Yeomans, J. M. (1992). Statistical Mechanics of Phase Transitions. Oxford Science Publications. ISBN 978-0-19-851730-6.
Fisher, M. E. (1974). "Renormalization Group in Theory of Critical Behavior". Reviews of Modern Physics. 46 (4): 597–616. Bibcode:1974RvMP...46..597F. doi:10.1103/RevModPhys.46.597.
C. Domb, M.S. Green, J.L. Lebowitz editors, Phase Transitions and Critical Phenomena, vol. 1-20 (1972–2001), Academic Press. | Wikipedia/Correlation_function_(statistical_mechanics) |
The optimized effective potential method (OEP) in Kohn-Sham (KS) density functional theory (DFT) is a method to determine the potentials as functional derivatives of the corresponding KS orbital-dependent energy density functionals. This can be in principle done for any arbitrary orbital-dependent functional, but is most common for exchange energy as the so-called exact exchange method (EXX), which will be considered here.
== Origin ==
The OEP method was developed in 1953 by R. T. Sharp and G. K. Horton, more than ten years before the work of Pierre Hohenberg, Walter Kohn and Lu Jeu Sham, in order to investigate what happens to Hartree-Fock (HF) theory when a local exchange potential is demanded instead of the regular nonlocal exchange potential. Much later, after 1990, it was found that this ansatz is useful in density functional theory.
== Background via chain rule ==
In density functional theory the exchange correlation (xc) potential is defined as the functional derivative of the exchange correlation (xc) energy with respect to the electron density {\displaystyle \rho (r)}, where the index {\displaystyle s} denotes either occupied or unoccupied KS orbitals and eigenvalues. The problem is that, although the xc energy is in principle (due to the Hohenberg-Kohn (HK) theorem) a functional of the density, its explicit dependence on the density is unknown (it is only known in the simple local density approximation (LDA) case); only its implicit dependence through the KS orbitals is known. That motivates the use of the chain rule
{\displaystyle v_{xc}(r)=\int dr'\sum _{s}{\bigg [}{\frac {\delta E_{xc}[\{\phi _{s}\}]}{\delta \phi _{s}(r')}}{\frac {\delta \phi _{s}(r')}{\delta \rho (r)}}+c.c.{\bigg ]}}
Unfortunately the functional derivative {\displaystyle \delta \phi _{s}/\delta \rho }, despite its existence, is also unknown. So one needs to invoke the chain rule once more, now with respect to the Kohn-Sham (KS) potential {\displaystyle v_{S}(r)}:
{\displaystyle v_{xc}(r)=\iint dr'dr''\sum _{s}{\bigg [}{\frac {\delta E_{xc}[\{\phi _{s}\}]}{\delta \phi _{s}(r')}}{\frac {\delta \phi _{s}(r')}{\delta v_{S}(r'')}}\underbrace {\frac {\delta v_{S}(r'')}{\delta \rho (r)}} _{\equiv X_{S}^{-1}(r,r')}+c.c.{\bigg ]}}
where {\displaystyle X_{S}^{-1}(r,r')} is defined as the inverse static Kohn-Sham (KS) response function.
== Formalism ==
The KS orbital-dependent exact exchange energy (EXX) is given in chemist's notation as {\displaystyle E_{x}[\{\phi _{i}\}]=-{\frac {1}{2}}\sum _{i}\sum _{j}(ij|ji)\equiv -{\frac {1}{2}}\sum _{i}\sum _{j}\iint drdr'{\frac {\phi _{i}^{\dagger }(r)\phi _{j}(r)\phi _{j}^{\dagger }(r')\phi _{i}(r')}{|r-r'|}}}
where {\displaystyle r,r'} denote electronic coordinates and {\displaystyle \dagger } the hermitian conjugate. The static Kohn-Sham (KS) response function is given as
where the indices {\displaystyle i} denote occupied and {\displaystyle a} unoccupied KS orbitals, and {\displaystyle c.c.} the complex conjugate. The right hand side (r.h.s.) of the OEP equation is
where {\displaystyle {\hat {v}}_{x}^{\text{NL}}} is the nonlocal exchange operator from Hartree-Fock (HF) theory but evaluated with KS orbitals, stemming from the functional derivative {\displaystyle \delta E_{xc}[\{\phi _{i}\}]/\delta \phi _{i}(r')}. Lastly note that the following functional derivative is given exactly by first order static perturbation theory:
{\displaystyle {\frac {\delta \phi _{s}(r')}{\delta v_{S}(r'')}}=\phi _{i}(r')\underbrace {\sum _{t,t\neq i}{\frac {\phi _{t}^{\dagger }(r')\phi _{t}(r)}{\varepsilon _{i}-\varepsilon _{t}}}} _{G(r,r')}}
which is a Green's function. Combining Eqs. (1), (2) and (3) leads to the optimized effective potential (OEP) integral equation
{\displaystyle \int dr'v_{x}(r')X_{S}(r,r')=t(r)}
== Implementation with a basis set ==
Usually the exchange potential is expanded in an auxiliary basis set (RI basis) {\displaystyle \{f_{\mu }\}} as
{\displaystyle v_{x}(r)=\sum _{\nu }v_{x,\nu }f_{\nu }(r)}
together with the regular orbital basis {\displaystyle \{\chi _{\lambda }\}}, requiring the so-called 3-index integrals of the form {\displaystyle (f_{\nu }|\chi _{\lambda }\chi _{\kappa })}, which turns the OEP integral equation into the linear algebra problem
{\displaystyle {\textbf {X}}_{\text{S}}{\textbf {v}}_{\text{x}}={\textbf {t}}}
It should be noted that many OEP codes suffer from numerical issues. There are two main causes: first, the Hohenberg-Kohn theorem is violated because, for practical reasons, a finite basis set is used; second, different spatial regions of the potential have different influence on the optimized energy, which leads, e.g., to oscillations in the convergence arising from poor conditioning.
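A minimal sketch of the final linear-algebra step is given below. It assumes the response matrix X_S and the right-hand side t have already been assembled in the auxiliary basis, and it illustrates one common way of coping with the ill-conditioning mentioned above, namely a truncated-SVD pseudoinverse; the cutoff value and the toy matrices are arbitrary choices, not part of any particular OEP code.

```python
import numpy as np

def solve_oep(X_S, t, svd_cutoff=1e-8):
    """Solve X_S v_x = t for the exchange-potential expansion coefficients."""
    U, s, Vt = np.linalg.svd(X_S)
    # Discard near-null singular directions to tame the poor conditioning.
    s_inv = np.where(s > svd_cutoff * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ t))

# Toy example with a deliberately ill-conditioned symmetric matrix.
rng = np.random.default_rng(3)
M = rng.normal(size=(20, 20))
X_S = M @ M.T + 1e-12 * np.eye(20)
t = rng.normal(size=20)
v_x = solve_oep(X_S, t)
print(np.linalg.norm(X_S @ v_x - t))   # residual of the regularized solution
```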
== References == | Wikipedia/Optimized_effective_potential_method |
The van der Waals equation is a mathematical formula that describes the behavior of real gases. It is an equation of state that relates the pressure, volume, number of molecules, and temperature in a fluid. The equation modifies the ideal gas law in two ways: first, it considers particles to have a finite diameter (whereas an ideal gas consists of point particles); second, its particles interact with each other (unlike an ideal gas, whose particles move as though alone in the volume).
The equation is named after Dutch physicist Johannes Diderik van der Waals, who first derived it in 1873 as part of his doctoral thesis. Van der Waals based the equation on the idea that fluids are composed of discrete particles, which few scientists believed existed. However, the equation accurately predicted the behavior of a fluid around its critical point, which had been discovered a few years earlier. Its qualitative and quantitative agreement with experiments ultimately cemented its acceptance in the scientific community. These accomplishments won van der Waals the 1910 Nobel Prize in Physics. Today the equation is recognized as an important model of phase change processes.
== Description ==
One explicit way to write the van der Waals equation is: {\displaystyle p={\frac {RT}{v-b}}-{\frac {a}{v^{2}}}} (1a) where
{\displaystyle p} is pressure, {\displaystyle T} is temperature, and {\displaystyle v=V/n=N_{\text{A}}V/N} is molar volume, the ratio of volume, {\displaystyle V}, to quantity of matter, {\displaystyle n} ({\displaystyle N_{\text{A}}} is the Avogadro constant and {\displaystyle N} the number of molecules). Also {\displaystyle a} and {\displaystyle b} are experimentally determinable, substance-specific constants, and {\displaystyle R=kN_{\text{A}}} is the universal gas constant. This form is useful for plotting isotherms (constant temperature curves).
Van der Waals wrote it in an equivalent form, explicit in temperature, in his thesis (although he could not denote absolute temperature by its modern form in 1873). This form is useful for plotting isobars (constant pressure curves). Writing {\displaystyle v=V/n} and multiplying both sides by {\displaystyle n}, it becomes the form that appears in Figure A.
When van der Waals created his equation, few scientists believed that fluids were composed of rapidly moving particles. Moreover, those who thought so did not know the atomic/molecular structure. The simplest conception of a particle, and the easiest to model mathematically, was a hard sphere of volume {\displaystyle V_{0}}; this is what van der Waals used, and he found the total excluded volume was {\displaystyle B=4NV_{0}}, namely 4 times the volume of all the particles.
The constant {\displaystyle b=BN_{\text{A}}/N} has the dimension of molar volume, [v]. The constant {\displaystyle a} expresses the strength of the hypothesized inter-particle attraction. Van der Waals only had Newton's law of gravitation, in which two particles are attracted in proportion to the product of their masses, as a model. Thus he argued that, in his case, the attractive pressure was proportional to the density squared. The proportionality constant, a, when written in the form used above, has the dimension [pv2] (pressure times molar volume squared).
The force magnitude between two spherically symmetric molecules is written as {\displaystyle F=-d\varphi /dr}, where {\displaystyle \varphi (r)} is the pair potential function, and the force direction is along the line connecting the two mass centers. The specific functional relation is most simply characterized by a single length, {\displaystyle \sigma }, and a minimum energy, {\displaystyle -\varepsilon } (with {\displaystyle \varepsilon \geq 0}). Two of the many such functions that have been suggested are shown in Fig. B.
A modern theory based on statistical mechanics produces the same result for {\displaystyle b=4N_{\text{A}}[(4\pi /3)(\sigma /2)^{3}]} obtained by van der Waals and his contemporaries. It also produces a constant value for {\displaystyle a/N_{\text{A}}\varepsilon b} when {\displaystyle \varepsilon /kT} is small enough.
Once the constants {\displaystyle a} and {\displaystyle b} are known for a given substance, the van der Waals equation can be used to predict attributes like the boiling point at any given pressure, and the critical point. These predictions are accurate for only a few substances; for most simple fluids they are only a valuable approximation.
=== Relationship to the ideal gas law ===
The ideal gas law follows from the van der Waals equation whenever the molar volume {\displaystyle v} is sufficiently large (when {\displaystyle v\gg b}, so {\displaystyle v-b\approx v}), or equivalently whenever the molar density, {\displaystyle \rho =1/v}, is sufficiently small (when {\displaystyle v\gg (a/p)^{1/2}}, so {\displaystyle p+a/v^{2}\approx p}).
When {\displaystyle v} is large enough that both inequalities are satisfied, these two approximations reduce the van der Waals equation to {\displaystyle p=RT/v}, or {\displaystyle pv=RT}. With {\displaystyle R=N_{\text{A}}k}, where {\displaystyle k} is the Boltzmann constant, and using the definition {\displaystyle v=V/n} given after Eq (1a), this becomes {\displaystyle pV=NkT}; either of these forms expresses the ideal gas law. This is unsurprising since the van der Waals equation was constructed from the ideal gas equation to obtain an equation valid beyond the low-density limit of ideal gas behavior.
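This limit is easy to check numerically. The sketch below compares the van der Waals and ideal-gas pressures for increasing molar volume; the constants a and b are illustrative values only (roughly those of carbon dioxide in SI units), not data taken from this article.

```python
R, a, b, T = 8.314, 0.364, 4.27e-5, 300.0      # J/(mol K), Pa m^6/mol^2, m^3/mol, K

def p_vdw(v, T):
    """van der Waals pressure p = RT/(v - b) - a/v^2."""
    return R * T / (v - b) - a / v**2

for v in [1e-3, 1e-2, 1e-1]:                   # molar volumes in m^3/mol
    print(v, p_vdw(v, T), R * T / v)           # the two pressures converge as v grows
```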
What is truly remarkable is the extent to which van der Waals succeeded. Indeed, Epstein in his classic thermodynamics textbook began his discussion of the van der Waals equation by writing, "Despite its simplicity, it comprehends both the gaseous and the liquid state and brings out, in a most remarkable way, all the phenomena pertaining to the continuity of these two states". Also, in Volume 5 of his Lectures on Theoretical Physics, Sommerfeld, in addition to noting that "Boltzmann described van der Waals as the Newton of real gases", also wrote "It is very remarkable that the theory due to van der Waals is in a position to predict, at least qualitatively, the unstable [referring to superheated liquid, and subcooled vapor, now called metastable] states" that are associated with the phase change process.
== History ==
The first to propose a volume correction to Boyle's law was Daniel Bernoulli, in the microscopic theory of his Hydrodynamica (1738); however, this model was mostly ignored.
In 1857 Rudolf Clausius published The Nature of the Motion which We Call Heat. In it he derived the relation {\displaystyle p=(N/V)m{\overline {c^{2}}}/3} for the pressure {\displaystyle p} in a gas, composed of particles in motion, with number density {\displaystyle N/V}, mass {\displaystyle m}, and mean square speed {\displaystyle {\overline {c^{2}}}}. He then noted that using the classical laws of Boyle and Charles, one could write {\displaystyle m{\overline {c^{2}}}/3=kT} with a constant of proportionality {\displaystyle k}. Hence temperature was proportional to the average kinetic energy of the particles. This article inspired further work based on the twin ideas that substances are composed of indivisible particles, and that heat is a consequence of the particle motion; movement that evolves according to Newton's laws. The work, known as the kinetic theory of gases, was done principally by Clausius, James Clerk Maxwell, and Ludwig Boltzmann. At about the same time, Josiah Willard Gibbs advanced the work by converting it into statistical mechanics.
This environment influenced Johannes Diderik van der Waals. After initially pursuing a teaching credential, he was accepted for doctoral studies at the University of Leiden under Pieter Rijke. This led, in 1873, to a dissertation that provided a simple, particle-based equation that described the gas-liquid change of state, the origin of a critical temperature, and the concept of corresponding states. The equation is based on two premises: first, that fluids are composed of particles with non-zero volumes, and second, that at a large enough distance each particle exerts an attractive force on all other particles in its vicinity. Boltzmann called these forces van der Waals cohesive forces.
In 1869 the Irish professor of chemistry Thomas Andrews at Queen's University Belfast, in a paper entitled On the Continuity of the Gaseous and Liquid States of Matter, displayed an experimentally obtained set of isotherms of carbonic acid (carbon dioxide, CO2) that showed, at low temperatures, a jump in density at a certain pressure, while at higher temperatures there was no abrupt change. Andrews called the isotherm at which the jump disappears the critical point. Given the similarity of the titles of this paper and van der Waals' subsequent thesis, one might think that van der Waals set out to develop a theoretical explanation of Andrews' experiments; however, this is not what happened. Van der Waals began work by trying to determine a molecular attraction that appeared in Laplace's theory of capillarity, and only after establishing his equation did he test it using Andrews' results.
By 1877 sprays of both liquid oxygen and liquid nitrogen had been produced, and a new field of research, low-temperature physics, had been opened. The van der Waals equation played a part in all this, especially for the liquefaction of hydrogen and helium, which was finally achieved in 1908. From measurements of {\displaystyle p_{1},T_{1}} and {\displaystyle p_{2},T_{2}} in two states with the same density, the van der Waals equation produces the values {\displaystyle b=v-{\frac {R\left(T_{2}-T_{1}\right)}{p_{2}-p_{1}}}\qquad {\text{and}}\qquad a=v^{2}{\frac {p_{2}T_{1}-p_{1}T_{2}}{T_{2}-T_{1}}}.}
Thus from two such measurements of pressure and temperature, one could determine {\displaystyle a} and {\displaystyle b}, and from these values calculate the expected critical pressure, temperature, and molar volume. Goodstein summarized this contribution of the van der Waals equation as follows:
All this labor required considerable faith in the belief that gas–liquid systems were all basically the same, even if no one had ever seen the liquid phase. This faith arose out of the repeated success of the van der Waals theory, which is essentially a universal equation of state, independent of the details of any particular substance once it has been properly scaled. [...] As a result, not only was it possible to believe that hydrogen could be liquefied, but it was even possible to predict the necessary temperature and pressure.
Van der Waals was awarded the Nobel Prize in 1910, in recognition of the contribution of his formulation of this "equation of state for gases and liquids".
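The inversion quoted above, and the subsequent prediction of the critical constants, can be written out as a short calculation. The input states below are made-up numbers chosen only so that a and b come out positive; they are not experimental data.

```python
R = 8.314                                   # universal gas constant, J/(mol K)

def vdw_constants(v, p1, T1, p2, T2):
    """Recover a and b from two (p, T) measurements at the same molar volume v."""
    b = v - R * (T2 - T1) / (p2 - p1)
    a = v**2 * (p2 * T1 - p1 * T2) / (T2 - T1)
    return a, b

def critical_point(a, b):
    """Predicted critical constants of the van der Waals fluid."""
    return a / (27 * b**2), 3 * b, 8 * a / (27 * R * b)   # p_c, v_c, T_c

a, b = vdw_constants(v=1e-3, p1=2.30e6, T1=280.0, p2=2.52e6, T2=305.0)
print(a, b, critical_point(a, b))
```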
== Use ==
The van der Waals equation has been, and remains, useful because:
Its coefficient of thermal expansion has a simple analytic expression
It explains the existence of the critical point, and establishes the theorem of corresponding states
Its internal energy and entropy have simple analytic expressions
Its specific heat at constant volume {\displaystyle c_{v}} is a function of {\displaystyle T} only
Its specific heat at constant pressure, {\displaystyle c_{p}}, has a simple relationship with {\displaystyle c_{v}}
Its Joule–Thomson coefficient and associated inversion curve, which are instrumental in the commercial liquefaction of gases, have simple analytic expressions
Together with the Maxwell construction it explains the existence of the liquid–vapor phase transition, including the observed metastable states
In addition
Its enthalpy and free energies all have simple analytic expressions
Its isothermal compressibility has a simple analytic expression
Its saturation curve has a simple analytic parametric solution
It is an intermediate mathematical model that is useful as a pedagogical tool when teaching physics, chemistry, and engineering
and
It plays an important role in the modern theory of phase transitions
It is the completely accurate equation of state for substances whose intermolecular potential matches the Sutherland potential
== Critical point and corresponding states ==
Figure 1 shows four isotherms of the van der Waals equation (abbreviated as vdW) on a {\displaystyle p,v} (pressure, molar volume) plane. The essential character of these curves is that they come in three forms:
At some critical temperature {\displaystyle T_{\text{c}}} (orange isotherm), the slope is negative everywhere except at a single inflection point: the critical point {\displaystyle (p_{\text{c}},v_{\text{c}})}, where both the slope and curvature are zero, {\displaystyle \left.{\frac {\partial p}{\partial v}}\right\vert _{T}=\left.{\frac {\partial ^{2}p}{\partial v^{2}}}\right\vert _{T}=0}.
At higher temperatures (red isotherm), the isotherm's slope is negative everywhere. (This corresponds to values of {\displaystyle p,T} for which the vdW equation has one real root for {\displaystyle v}.)
At lower temperatures (green and blue isotherms), all isotherms have two points with zero slope. (This corresponds to values of {\displaystyle p}, {\displaystyle T} for which the vdW equation has three real roots for {\displaystyle v}.)
The critical point can be analytically determined by equating the two partial derivatives of the vdW equation, created by differentiating Eq (1a), to zero. This produces the critical values {\displaystyle v_{\text{c}}=3b} and {\displaystyle T_{\text{c}}=8a/(27Rb)}. Finally, using these values in Eq (1a) gives {\displaystyle p_{\text{c}}=a/27b^{2}}. These results can also be obtained algebraically by noting that at the critical point the three roots are equal. Hence, Eqs (1) can be written as either {\displaystyle v^{3}-(b+RT_{\text{c}}/p_{\text{c}})v^{2}+(a/p_{\text{c}})v-ab/p_{\text{c}}=0} or {\displaystyle (v-v_{\text{c}})^{3}=0}; two forms with the same coefficients.
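These critical values can be verified symbolically, assuming the pressure-explicit form p = RT/(v − b) − a/v²; a short sympy check is sketched below.

```python
import sympy as sp

v, a, b, R, T = sp.symbols("v a b R T", positive=True)
p = R * T / (v - b) - a / v**2

# Solve dp/dv = 0 and d2p/dv2 = 0 simultaneously for (v, T).
sol = sp.solve([sp.diff(p, v), sp.diff(p, v, 2)], [v, T], dict=True)[0]
v_c, T_c = sol[v], sol[T]
p_c = sp.simplify(p.subs({v: v_c, T: T_c}))
print(v_c, T_c, p_c)   # expected: 3*b, 8*a/(27*R*b), a/(27*b**2)
```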
=== Course of the isotherms ===
Above the critical temperature {\displaystyle T_{\text{c}}}, van der Waals isotherms satisfy the stability criterion that {\displaystyle \partial p/\partial v|_{T}<0}. Below the critical temperature, each isotherm contains an interval where this condition is violated. This unstable region is the genesis of the phase change; there is a range {\displaystyle v_{\rm {min}}\leq v\leq v_{\rm {max}}} for which no observable states exist.
, for which no observable states exist. The states for
v
<
v
m
i
n
{\displaystyle v<v_{\rm {min}}}
are liquid, and those for
v
>
v
m
a
x
{\displaystyle v>v_{\rm {max}}}
are vapor; the denser liquid separates and lies below the vapor due to gravity. The transition points, states with zero slope, are called spinodal points. Their locus is the spinodal curve, a boundary that separates the regions of the plane for which liquid, vapor, and gas exist from a region where no observable homogeneous states exist. This spinodal curve is obtained here from the vdW equation by differentiation (or equivalently from
κ
T
=
∞
{\displaystyle \kappa _{T}=\infty }
) as
{\displaystyle T_{\rm {sp}}=2a{\frac {(v-b)^{2}}{Rv^{3}}}\qquad p_{\rm {sp}}={\frac {a(v-2b)}{v^{3}}}}
A projection of the spinodal curve is plotted in Figure 1 as the black dash-dot curve. It passes through the critical point, which is also a spinodal point.
=== Principle of corresponding states ===
Using the critical values to define reduced (dimensionless) variables {\displaystyle p_{r}=p/p_{\text{c}}}, {\displaystyle T_{r}=T/T_{\text{c}}}, and {\displaystyle v_{r}=v/v_{\text{c}}} renders the vdW equation in the dimensionless form (used to construct Fig. 1):
{\displaystyle p_{r}={\frac {8T_{r}}{3v_{r}-1}}-{\frac {3}{v_{r}^{2}}}}
This dimensionless form is a similarity relation; it indicates that all vdW fluids at the same {\displaystyle T_{r}} will plot on the same curve. It expresses the law of corresponding states, which Boltzmann described as follows:
All the constants characterizing the gas have dropped out of this equation. If one bases measurements on the van der Waals units [Boltzmann's name for the reduced quantities here], then he obtains the same equation of state for all gases. [...] Only the values of the critical volume, pressure, and temperature depend on the nature of the particular substance; the numbers that express the actual volume, pressure, and temperature as multiples of the critical values satisfy the same equation for all substances. In other words, the same equation relates the reduced volume, reduced pressure, and reduced temperature for all substances.
Obviously such a broad general relation is unlikely to be correct; nevertheless, the fact that one can obtain from it an essentially correct description of actual phenomena is very remarkable.
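The corresponding-states statement can be made concrete by noting that the reduced equation above contains no substance-specific constants; the small sketch below simply evaluates reduced isotherms, which are identical for every vdW fluid. The particular values of T_r and v_r are arbitrary.

```python
import numpy as np

def p_reduced(v_r, T_r):
    """Reduced van der Waals equation: p_r = 8 T_r / (3 v_r - 1) - 3 / v_r^2."""
    return 8 * T_r / (3 * v_r - 1) - 3 / v_r**2

v_r = np.linspace(0.5, 5.0, 10)          # v_r must exceed 1/3
for T_r in (0.85, 1.0, 1.2):
    print(T_r, np.round(p_reduced(v_r, T_r), 3))
```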
This "law" is just a special case of dimensional analysis in which an equation containing 6 dimensional quantities,
p
,
v
,
T
,
a
,
b
,
R
{\displaystyle p,v,T,a,b,R}
, and 3 independent dimensions, [p], [v], [T], must be expressible in terms of 6 − 3 = 3 dimensionless groups. Here
{\displaystyle v^{*}=b} is a characteristic molar volume, {\displaystyle p^{*}=a/b^{2}} a characteristic pressure, and {\displaystyle T^{*}=a/(Rb)} a characteristic temperature, and the 3 dimensionless groups are {\displaystyle p/p^{*},v/v^{*},T/T^{*}}. According to dimensional analysis the equation must then have the form {\displaystyle p/p^{*}=\Phi (v/v^{*},T/T^{*})}
, a general similarity relation. In his discussion of the vdW equation, Sommerfeld also mentioned this point. The reduced properties defined previously are {\displaystyle p_{r}=27(p/p^{*})}, {\displaystyle v_{r}=(1/3)(v/v^{*})}, and {\displaystyle T_{r}=(27/8)(T/T^{*})}. Recent research has suggested that there is a family of equations of state that depend on an additional dimensionless group, and this provides a more exact correlation of properties. Nevertheless, as Boltzmann observed, the van der Waals equation provides an essentially correct description.
The vdW equation produces the critical compressibility factor {\displaystyle Z_{\text{c}}=p_{\text{c}}v_{\text{c}}/(RT_{\text{c}})=3/8=0.375}, while for most real fluids {\displaystyle 0.23<Z_{\text{c}}<0.31}. Thus most real fluids do not satisfy this condition, and consequently their behavior is only described qualitatively by the vdW equation. However, the vdW equation of state is a member of a family of state equations based on the Pitzer (acentric) factor, {\displaystyle \omega }, and the liquid metals (mercury and cesium) are well approximated by it.
== Thermodynamic properties ==
The properties molar internal energy, {\displaystyle u}, and entropy, {\displaystyle s}, are defined by the first and second laws of thermodynamics. From these laws, they, and all other thermodynamic properties of a simple compressible substance, can be specified, up to a constant of integration, by two measurable functions. These are a mechanical equation of state, {\displaystyle p=p(v,T)}, and a constant volume specific heat, {\displaystyle c_{v}(v,T)}.
When {\displaystyle u(v,T)} represents a continuous surface, it must be a continuous function with continuous partial derivatives, and its second mixed partial derivatives must be equal, {\displaystyle \partial _{v}\partial _{T}u=\partial _{T}\partial _{v}u}. Then with {\displaystyle c_{v}=\partial _{T}u} this condition can be written simply as {\displaystyle \partial _{v}c_{v}(v,T)=\partial _{T}[T^{2}\partial _{T}(p/T)]}. Differentiating {\displaystyle p/T} for the vdW equation gives {\displaystyle T^{2}\partial _{T}(p/T)=a/v^{2}}, so {\displaystyle \partial _{v}c_{v}=0}. Consequently {\displaystyle c_{v}=c_{v}(T)} for a vdW fluid, exactly as it is for an ideal gas. To keep things simple, it is regarded as a constant in the following, {\displaystyle c_{v}=cR}, with {\displaystyle c} a number.
=== Internal energy, and entropy ===
The energetic equation of state gives the internal energy, and the entropic equation of state gives the entropy, as {\displaystyle {\begin{aligned}u-C_{u}&=\int c_{v}(v,T)\,dT+\int T^{2}\,{\frac {\partial (p/T)}{\partial T}}\,dv\\s-C_{s}&=\int c_{v}(T)\,{\frac {dT}{T}}+\int {\frac {\partial p}{\partial T}}\,dv\end{aligned}}}
where {\displaystyle C_{u},C_{s}} are arbitrary constants of integration. Both integrals for {\displaystyle u} can be easily evaluated, and the result is {\displaystyle u-C_{u}=cRT-a/v} (2). Likewise both integrals for {\displaystyle s} can be evaluated, with the result {\displaystyle s-C_{s}=R\ln[(v-b)T^{c}]} (3).
=== Free energies, and enthalpy ===
The Helmholtz free energy is {\displaystyle f=u-Ts}. Subtracting {\displaystyle T} times Eq (3) from Eq (2) gives {\displaystyle f} as
The enthalpy is {\displaystyle h=u+pv}, and the product {\displaystyle pv} is, using Eq (1a), {\displaystyle pv=RTv/(v-b)-a/v}. Adding Eq (2) gives {\displaystyle h} as {\displaystyle h-C_{u}=RT[c+v/(v-b)]-2a/v}
The Gibbs free energy is {\displaystyle g=h-Ts}, so subtracting {\displaystyle T} times Eq (3) from {\displaystyle h} produces {\displaystyle g} as
All these results can be rendered in reduced form by using the characteristic energy {\displaystyle RT_{\text{c}}}.
=== Derivatives: α, κT and cp ===
Any derivative of a thermodynamic property can be expressed in terms of three standard derivatives; a standard set is composed of {\displaystyle \alpha ,\kappa _{T},c_{v}}. For a vdW fluid {\displaystyle c_{v}(T)} is a known function, and the other two are obtained from the first partial derivatives of the vdW equation as
{\displaystyle \left({\frac {\partial p}{\partial T}}\right)_{v}={\frac {R}{v-b}}={\frac {\alpha }{\kappa _{T}}}\quad {\text{and}}\quad \left({\frac {\partial p}{\partial v}}\right)_{T}=-{\frac {RT}{(v-b)^{2}}}+{\frac {2a}{v^{3}}}=-{\frac {1}{v\kappa _{T}}}}
Here {\displaystyle \kappa _{T}=-v^{-1}\partial _{p}v} is the isothermal compressibility, and {\displaystyle \alpha =v^{-1}\partial _{T}v|_{p}} is the coefficient of thermal expansion. Therefore,
In the limit {\displaystyle v\to \infty }, {\displaystyle \alpha =1/T} and {\displaystyle \kappa _{T}=v/(RT)}. Since the vdW equation in this limit becomes {\displaystyle p=RT/v}, finally {\displaystyle \kappa _{T}=1/p}. Both of these are the ideal gas values.
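As a numerical illustration of these formulas, the sketch below evaluates α and κ_T from the two partial derivatives quoted above and checks the large-volume (ideal-gas) limits. The constants are the same illustrative CO2-like values used in the earlier sketch, not data from the article.

```python
R, a, b = 8.314, 0.364, 4.27e-5    # J/(mol K), Pa m^6/mol^2, m^3/mol

def kappa_T(v, T):
    """Isothermal compressibility from (dp/dv)_T = -RT/(v-b)^2 + 2a/v^3 = -1/(v kappa_T)."""
    dpdv = -R * T / (v - b)**2 + 2 * a / v**3
    return -1.0 / (v * dpdv)

def alpha(v, T):
    """Thermal expansion coefficient from (dp/dT)_v = R/(v-b) = alpha/kappa_T."""
    return kappa_T(v, T) * R / (v - b)

v, T = 1e-2, 300.0
print(alpha(v, T), 1.0 / T)            # approaches 1/T for large v
print(kappa_T(v, T), v / (R * T))      # approaches v/(RT) ~ 1/p for large v
```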
The specific heat at constant pressure, {\displaystyle c_{p}}, is defined as the partial derivative {\displaystyle c_{p}=\partial _{T}h|_{p}}. It is related to {\displaystyle c_{v}} by the Mayer equation, {\displaystyle c_{p}-c_{v}=-T(\partial _{T}p)^{2}/\partial _{v}p=Tv\alpha ^{2}/\kappa _{T}}. Then the two partials of the vdW equation can be used to express {\displaystyle c_{p}} as
Here in the limit {\displaystyle v\to \infty }, {\displaystyle c_{p}-c_{v}=R}, which is also the ideal gas result; however the limit {\displaystyle v\rightarrow b} gives the same result, which does not agree with experiments on liquids.
Finally {\displaystyle c_{p}}, {\displaystyle \alpha }, and {\displaystyle \kappa _{T}} are all infinite on the curve {\displaystyle T=2a(v-b)^{2}/(Rv^{3})=T_{\text{c}}(3v_{r}-1)^{2}/(4v_{r}^{3})}. This is the spinodal curve defined by {\displaystyle \kappa _{T}^{-1}=0} that was discussed in the subsection on the course of the isotherms.
== Saturation ==
Although the gap in {\displaystyle v} delimited by the two spinodal points on an isotherm (e.g. {\displaystyle T_{\text{r}}=7/8} in Fig. 1) is the origin of the phase change, the change occurs at some intermediate value of pressure. This can be understood by considering that the saturated liquid and vapor states can coexist in equilibrium, with the same pressure and temperature. However, the minimum and maximum spinodal points are not at the same pressure. Therefore, at a temperature {\displaystyle T_{\text{s}}}, the phase change is characterized by the pressure {\displaystyle p_{\text{s}}}, which lies within the range of {\displaystyle p} set by the spinodal points ({\displaystyle p_{\text{min}}<p_{\text{s}}<p_{\text{max}}}), and by the molar volume of liquid {\displaystyle v_{\text{f}}} and vapor {\displaystyle v_{\text{g}}}, which lie outside the range of {\displaystyle v} set by the spinodal points ({\displaystyle v_{\text{f}}<v_{\text{min}}} and {\displaystyle v_{\text{g}}>v_{\text{max}}}).
Applying Eq (1a) to the saturated liquid and saturated vapor states gives:
Equations (7) contain four variables ({\displaystyle p_{\text{s}},T_{\text{s}},v_{\text{f}},v_{\text{g}}}), so a third equation is required to uniquely specify three of these variables in terms of the fourth. In the case of a single substance, the equation is provided by the condition of equal Gibbs free energy, {\displaystyle g_{\text{g}}=g_{\text{f}}}. Using Eq (4b) applied to each state in this equation produces
This is a third equation that, along with Eqs. (7), can be solved numerically. This has been done, given a value for either {\displaystyle T_{\text{s}}} or {\displaystyle p_{\text{s}}}, and tabular results presented; however, the equations also admit an analytic parametric solution obtained by Lekner. Details of this solution may be found in the Maxwell construction, and the dimensionless results are:
{\displaystyle {\begin{aligned}T_{\text{rs}}(y)&={\frac {27}{8}}\cdot {\frac {2f(y)\left[\cosh y+f(y)\right]}{g(y)^{2}}},&p_{\text{rs}}&=27{\frac {f(y)^{2}\left[1-f(y)^{2}\right]}{g(y)^{2}}},\\[1ex]v_{\text{rf}}&={\frac {1+f(y)e^{y}}{3f(y)e^{y}}},&v_{\text{rg}}&={\frac {1+f(y)e^{-y}}{3f(y)e^{-y}}}\end{aligned}}}
where {\displaystyle {\begin{aligned}f(y)&={\frac {y\cosh y-\sinh y}{\sinh y\cosh y-y}},&g(y)&=1+2f(y)\cosh y+f(y)^{2}\end{aligned}}}
The parameter
0
≤
y
<
∞
{\displaystyle 0\leq y<\infty }
is given physically by
y
=
(
s
g
−
s
f
)
/
(
2
R
)
{\displaystyle y=(s_{\text{g}}-s_{\text{f}})/(2R)}
. This solution also produces values of all other property discontinuities across the saturation curve. These functions define the coexistence curve (or saturation curve), which is the locus of the saturated liquid and saturated vapor states of the vdW fluid. Projections of this saturation curve are plotted in Figures 1 and 2.
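A short numerical sketch (not part of the original article) can make the parametric solution concrete: given a value of the parameter y, the reduced saturation temperature, pressure, and molar volumes follow directly from the formulas above.

```python
# Sketch of Lekner's parametric solution quoted above; y > 0, with y -> 0 recovering
# the critical point (T_rs = p_rs = v_rf = v_rg = 1).
import numpy as np

def coexistence(y):
    f = (y * np.cosh(y) - np.sinh(y)) / (np.sinh(y) * np.cosh(y) - y)
    g = 1.0 + 2.0 * f * np.cosh(y) + f**2
    T_rs = (27.0 / 8.0) * 2.0 * f * (np.cosh(y) + f) / g**2
    p_rs = 27.0 * f**2 * (1.0 - f**2) / g**2
    v_rf = (1.0 + f * np.exp(y)) / (3.0 * f * np.exp(y))
    v_rg = (1.0 + f * np.exp(-y)) / (3.0 * f * np.exp(-y))
    return T_rs, p_rs, v_rf, v_rg

for y in (0.1, 1.0, 3.0):
    print(y, coexistence(y))
```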
Referring back to Figure 1, the isotherms for {\displaystyle T_{\text{r}}<1} are discontinuous. For example, the {\displaystyle T_{\text{r}}=7/8} (green) isotherm consists of two separate segments. The solid green lines are composed of stable states. They terminate at dots representing the saturated liquid and vapor states forming the phase change. The dashed green lines represent metastable states (superheated liquid and subcooled vapor). They are created in the phase transition, have a finite lifetime, and then devolve into their lower energy stable alternative.
At every point in the region between the two curves in Figure 2, there are two states: one stable and one metastable. The coexistence of these states can be seen in Figure 1: for discontinuous isotherms, there are values of {\displaystyle p_{\text{r}}} which correspond to two points on the isotherm, one on a solid segment (the stable state) and one on a dashed segment (the metastable state).
In his treatise of 1898, in which he described the van der Waals equation in great detail, Boltzmann discussed these metastable states in a section titled "Undercooling, Delayed evaporation". (Today these states are denoted "subcooled vapor" and "superheated liquid".) Moreover, it has since become clear that these metastable states occur regularly in the phase transition process. In particular, processes that involve very high heat fluxes create large numbers of these states, which transition to their stable alternative with a corresponding release of energy that can be dangerous. Consequently, there is a pressing need to study their thermal properties.
In the same section, Boltzmann also addressed and explained the negative pressures which some liquid metastable states exhibit (for example, the blue isotherm {\displaystyle T_{\text{r}}=4/5} in Fig. 1). He concluded that such liquid states of tensile stresses were real, as did Tien and Lienhard many years later, who wrote "The van der Waals equation predicts that at low temperatures liquids sustain enormous tension [...] In recent years measurements have been made that reveal this to be entirely correct."
Even though the phase change produces a mathematical discontinuity in the homogeneous fluid properties (for example {\displaystyle v}), there is no physical discontinuity. As the liquid begins to vaporize, the fluid becomes a heterogeneous mixture of liquid and vapor whose molar volume varies continuously from {\displaystyle v_{\text{f}}} to {\displaystyle v_{\text{g}}} according to the equation of state {\textstyle v=v_{\text{f}}+x(v_{\text{g}}-v_{\text{f}})}, where {\textstyle x=N_{\text{g}}/(N_{\text{f}}+N_{\text{g}})}, with {\displaystyle 0\leq x\leq 1}, is the mole fraction of the vapor. This equation is called the lever rule and applies to other properties as well. The states it represents form a horizontal line bridging the discontinuous region of an isotherm (not shown in Fig. 1 because it is a different equation from the vdW equation).
=== Extended corresponding states ===
The idea of corresponding states originated when van der Waals cast his equation in the dimensionless form, {\displaystyle p_{\text{r}}=p(v_{\text{r}},T_{\text{r}})}. However, as Boltzmann noted, such a simple representation could not correctly describe all substances. Indeed, the saturation analysis of this form produces {\displaystyle p_{\text{rs}}=p_{\text{s}}(T_{\text{r}})}; namely, that all substances have the same dimensionless coexistence curve, which is not true. To avoid this paradox, an extended principle of corresponding states has been suggested in which {\displaystyle p_{\text{r}}=p(v_{\text{r}},T_{\text{r}},\phi )}, where {\displaystyle \phi } is a substance-dependent dimensionless parameter related to the only physical feature associated with an individual substance: its critical point.
One candidate for {\displaystyle \phi } is the critical compressibility factor {\displaystyle Z_{\text{c}}=p_{\text{c}}v_{\text{c}}/(RT_{\text{c}})}; however, because {\displaystyle v_{\text{c}}} is difficult to measure accurately, the acentric factor developed by Kenneth Pitzer, {\displaystyle \omega =-\log _{10}[p_{\text{r}}(T_{\text{r}}=0.7)]-1}, is more useful. The saturation pressure in this situation is represented by a one-parameter family of curves, {\displaystyle p_{\text{rs}}=p_{\text{s}}(T_{\text{r}},\omega )}. Several investigators have produced correlations of saturation data for several substances; Dong and Lienhard give
{\displaystyle {\begin{aligned}\ln p_{\text{rs}}=5.37270(1-1/T_{\text{r}})+\omega (&7.49408-11.181777\ {T_{\text{r}}}^{3}+\\&3.68769\ {T_{\text{r}}}^{6}+17.92998\,\ln T_{\text{r}})\end{aligned}}}
which has an RMS error of {\displaystyle \pm 0.42} over the range {\displaystyle 0.3\leq T_{\text{r}}\leq 1}.
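As an illustration (not part of the original article), the correlation can be evaluated directly; the acentric factor used below for water, ω ≈ 0.344, is an assumed literature value used only as an example.

```python
import math

def p_rs(T_r, omega):
    # Dong–Lienhard correlation quoted above, valid roughly for 0.3 <= T_r <= 1
    ln_p = (5.37270 * (1.0 - 1.0 / T_r)
            + omega * (7.49408 - 11.181777 * T_r**3
                       + 3.68769 * T_r**6 + 17.92998 * math.log(T_r)))
    return math.exp(ln_p)

# By the definition of the acentric factor, p_rs(0.7, omega) should be close to 10**-(1 + omega).
print(p_rs(0.7, 0.344), 10.0 ** -(1.0 + 0.344))
```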
Figure 3 is a plot of {\displaystyle p_{\text{rs}}} vs. {\displaystyle T_{\text{r}}} for various values of the Pitzer factor {\displaystyle \omega } as given by this equation. The vertical axis is logarithmic to show the behavior at pressures closer to zero, where differences among the various substances (indicated by varying values of {\displaystyle \omega }) are more pronounced.
Figure 4 is another plot of the same equation, showing {\displaystyle T_{\text{r}}} as a function of {\displaystyle \omega } for various values of {\displaystyle p_{\text{rs}}}. It includes data from 51 substances, including the vdW fluid, over the range {\displaystyle -0.4<\omega <0.9}. This plot shows that the vdW fluid ({\displaystyle \omega =-0.302}) is a member of the class of real fluids; indeed, the vdW fluid can quantitatively approximate the behavior of the liquid metals cesium ({\displaystyle \omega =-0.267}) and mercury ({\displaystyle \omega =-0.21}), which share similar values of {\displaystyle \omega }. However, in general it can describe the behavior of fluids of various {\displaystyle \omega } only qualitatively.
== Joule–Thomson coefficient ==
The Joule–Thomson coefficient, {\displaystyle \mu _{\text{JT}}=\partial _{p}T|_{h}}, is of practical importance because the two end states of a throttling process ({\displaystyle h_{2}=h_{1}}) lie on a constant enthalpy curve. Although ideal gases, for which {\displaystyle h=h(T)}, do not change temperature in such a process, real gases do, and it is important in applications to know whether they heat up or cool down.
This coefficient can be found in terms of the previously derived {\displaystyle \alpha } and {\displaystyle c_{p}} as
{\displaystyle \mu _{\text{JT}}={\frac {v(\alpha T-1)}{c_{p}}}.}
When {\displaystyle \mu _{\text{JT}}} is positive, the gas temperature decreases as it passes through a throttling process, and when it is negative, the temperature increases. Therefore, the condition {\displaystyle \mu _{\text{JT}}=0} defines a curve that separates the region of the {\displaystyle T,p} plane where {\displaystyle \mu _{\text{JT}}>0} from the region where {\displaystyle \mu _{\text{JT}}<0}. This curve is called the inversion curve, and its equation is {\displaystyle \alpha T-1=0}. Evaluating this using the expression for {\displaystyle \alpha } derived in Eq. 5 produces
{\displaystyle 2a(v-b)^{2}-RTv^{2}b=0}
Note that for {\displaystyle v\gg b} there will be cooling for {\displaystyle 2a>RTb} (or, in terms of the critical temperature, {\displaystyle T<(27/4)\ T_{\text{c}}}). As Sommerfeld noted, "This is the case with air and with most other gases. Air can be cooled at will by repeated expansion and can finally be liquified."
Solving for {\displaystyle b/v>0}, and using this to eliminate {\displaystyle v} from Eq (1a), gives the inversion curve as
{\displaystyle {\frac {p}{p^{*}}}=-1+4\left({\frac {T}{2T^{*}}}\right)^{1/2}-3\left({\frac {T}{2T^{*}}}\right)}
where, for simplicity, {\displaystyle a,b,R} have been replaced by {\displaystyle p^{*},T^{*}}.
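The following sketch (an illustration, not part of the original article) evaluates this reduced inversion curve; its two zeros occur at T/(2T*) = 1/9 and 1, so throttling produces cooling only between these temperatures.

```python
import numpy as np

t = np.linspace(0.05, 1.0, 6)           # t = T / (2 T*)
p = -1.0 + 4.0 * np.sqrt(t) - 3.0 * t   # p / p*, the inversion curve quoted above
for ti, pi in zip(t, p):
    print(f"T/(2T*) = {ti:5.3f}   p/p* = {pi:6.3f}")
# The maximum, p/p* = 1/3, occurs at T/(2T*) = 4/9.
```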
A plot of the curve, in reduced variables, is shown in green in Figure 5. Sommerfeld also displays this plot, together with a curve drawn using experimental data from H2. The two curves agree qualitatively, but not quantitatively.
Figure 5 shows an overlap between the saturation curve and the inversion curve plotted in the same region. This crossover means a van der Waals gas can be liquified by passing it through a throttling process under the proper conditions; real gases are liquified in this way.
== Compressibility factor ==
Real gases are characterized by their difference from ideal gases by writing {\displaystyle pv=ZRT}, where {\displaystyle Z} is called the compressibility factor. It is expressed either as {\displaystyle Z(p,T)} or {\displaystyle Z(\rho ,T)}; in either case, in the limit as {\displaystyle p} or {\displaystyle \rho } approaches zero, {\displaystyle Z} approaches 1, its ideal gas value. In the second case {\displaystyle Z(\rho ,T)=p(\rho ,T)/\rho RT}, so for a van der Waals fluid from Eq (1) the compressibility factor is
or in terms of reduced variables
{\displaystyle Z={\frac {3}{3-\rho _{r}}}-{\frac {9\rho _{r}}{8T_{r}}}}
where {\displaystyle 0\leq \rho _{r}=1/v_{r}\leq 3}. At the critical point, {\displaystyle T_{r}=\rho _{r}=1} and {\displaystyle Z=Z_{\text{c}}=3/2-9/8=3/8}.
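For illustration (not from the original article), the reduced compressibility factor can be checked numerically at the critical point and in the dilute limit:

```python
def Z(rho_r, T_r):
    # Reduced vdW compressibility factor quoted above
    return 3.0 / (3.0 - rho_r) - 9.0 * rho_r / (8.0 * T_r)

print(Z(1.0, 1.0))    # 3/2 - 9/8 = 0.375, the critical value Z_c = 3/8
print(Z(1e-8, 1.0))   # approaches 1 in the ideal gas limit rho_r -> 0
```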
In the limit {\displaystyle \rho \rightarrow 0}, {\displaystyle Z=1}; the fluid behaves like an ideal gas, as mentioned before. The derivative
{\displaystyle \left({\frac {\partial Z}{\partial \rho }}\right)_{T}=b\left({\left(1-b\rho \right)}^{-2}-{\frac {a}{bRT}}\right)}
is never negative when {\displaystyle {a \over bRT}=T^{*}/T\leq 1}; that is, when {\displaystyle T/T^{*}\geq 1} (which corresponds to {\displaystyle T_{r}\geq 27/8}). Alternatively, the initial slope is negative when {\displaystyle T/T^{*}<1}, is zero at {\displaystyle b\rho =1-(T/T^{*})^{1/2}}, and is positive for larger {\displaystyle b\rho \leq 1} (see Fig. 6). In this case, the value of {\displaystyle Z} passes through {\displaystyle 1} when {\displaystyle b\rho _{B}=1-T_{B}/T^{*}}. Here {\displaystyle T_{B}=(27T_{\text{c}}/8)(1-b\rho _{B})} is called the Boyle temperature. It ranges over {\displaystyle 0\leq T_{B}\leq 27T_{\text{c}}/8}, and denotes a point in {\displaystyle T,\rho } space where the equation of state reduces to the ideal gas law. However, the fluid does not behave like an ideal gas there, because neither its derivatives {\displaystyle (\alpha ,\kappa _{T})} nor {\displaystyle c_{p}} reduce to their ideal gas values, except in the region {\displaystyle b\rho _{B}\ll 1,\,T_{B}\sim 27T_{\text{c}}/8}, the actual ideal gas region.
Figure 6 plots various isotherms of {\displaystyle Z(\rho ,T_{r})} vs {\displaystyle \rho _{r}}. Also shown are the spinodal and coexistence curves described previously. The subcritical isotherm consists of stable, metastable, and unstable segments (identified in the same way as in Fig. 1). Also included are the zero-initial-slope isotherm and the one corresponding to infinite temperature.
Figure 7 shows a generalized compressibility chart for a vdW gas. Like all other vdW properties, this is not quantitatively correct for most gases, but it has the correct qualitative features. Note the caustic generated by the crossing isotherms.
=== Virial expansion ===
Kamerlingh Onnes first suggested the virial expansion as an empirical alternative to the vdW equation. Subsequently, it was proven to result from statistical mechanics, in the form
{\displaystyle Z(\rho ,T)=1+\sum _{k=2}^{\infty }\,B_{k}(T)(\rho )^{k-1}}
where {\displaystyle Z=p/(\rho RT)} and the functions {\displaystyle B_{k}(T)} are the virial coefficients. The {\displaystyle k}th term represents a {\displaystyle k}-particle interaction.
Expanding the term {\displaystyle (1-b\rho )^{-1}} in the definition of {\displaystyle Z}, Eq (9), into an infinite series, absolutely convergent for {\displaystyle b\rho <1}, produces
{\displaystyle Z(\rho ,T)=1+\left(1-{a \over bRT}\right)b\rho +\sum _{k=3}^{\infty }(b\rho )^{k-1}.}
The second virial coefficient is the slope of {\displaystyle Z(\rho ,T)} at {\displaystyle \rho =0}. It is positive when {\displaystyle T/T^{*}>1} and negative when {\displaystyle T/T^{*}<1} (that is, {\displaystyle T_{\text{r}}=T/T_{\text{c}}} greater than or less than 27/8), in agreement with the result found by differentiation. Its vdW value, {\displaystyle B_{2}=b-a/RT}, agrees with a statistical mechanical calculation; however, the higher order coefficients are in error. This means that the vdW virial expansion, hence the vdW equation itself, is equivalent to a two-term asymptotic approximation to the virial equation.
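A small numerical illustration (not part of the original article; the argon constants below are approximate literature values, used only as an example) shows the sign change of B2 at T* = a/(bR) = (27/8)T_c:

```python
R = 8.314                  # J/(mol K)
a, b = 0.1355, 3.20e-5     # approximate vdW constants for argon, SI units (assumed)

def B2(T):
    return b - a / (R * T)     # vdW second virial coefficient quoted above

T_star = a / (b * R)           # roughly (27/8) T_c, about 509 K for these constants
print(T_star, B2(T_star))      # B2 vanishes at T*
print(B2(300.0))               # negative below T*: attraction dominates
```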
For molecules modeled as non-attracting hard spheres, {\displaystyle a=0}, and the vdW virial expansion becomes
{\displaystyle Z(\rho )=(1-b\rho )^{-1}=1+\sum _{k=2}^{\infty }(b\rho )^{k-1},}
which illustrates the effect of the excluded volume alone. It was recognized early on that this is in error beginning with the term {\displaystyle (b\rho )^{2}}. Boltzmann calculated its correct value as {\textstyle {\frac {5}{8}}(b\rho )^{2}}, and used the result to propose an enhanced version of the vdW equation:
{\displaystyle \left(p+{a \over v^{2}}\right)\left(v-{b \over 3}\right)=RT\left(1+{2b \over 3v}+{{7b^{2}} \over {24v^{2}}}\right).}
On expanding {\displaystyle (v-b/3)^{-1}}, this produced the correct coefficients through {\displaystyle (b/v)^{2}} and also gave infinite pressure at {\displaystyle v=b/3}, which is approximately the close-packing volume for hard spheres. This was one of the first of many equations of state proposed over the years that attempted to make quantitative improvements to the remarkably accurate explanations of real gas behavior produced by the vdW equation.
== Mixtures ==
In 1890 van der Waals published an article that initiated the study of fluid mixtures. It was subsequently included as Part III of a later published version of his thesis. His essential idea was that in a binary mixture of vdW fluids described by the equations
{\displaystyle p_{1}={\frac {RT}{v-b_{11}}}-{\frac {a_{11}}{v^{2}}}\quad {\text{and}}\quad p_{2}={\frac {RT}{v-b_{22}}}-{\frac {a_{22}}{v^{2}}}}
the mixture is also a vdW fluid given by
{\displaystyle p={\frac {RT}{v-b_{x}}}-{\frac {a_{x}}{v^{2}}}}
where
{\displaystyle {\begin{aligned}a_{x}&=a_{11}x_{1}^{2}+2a_{12}x_{1}x_{2}+a_{22}x_{2}^{2},\\[2pt]b_{x}&=b_{11}x_{1}^{2}+2b_{12}x_{1}x_{2}+b_{22}x_{2}^{2}.\end{aligned}}}
Here {\displaystyle x_{1}=N_{1}/N} and {\displaystyle x_{2}=N_{2}/N}, with {\displaystyle N=N_{1}+N_{2}} (so that {\displaystyle x_{1}+x_{2}=1}), are the mole fractions of the two fluid substances. Adding the equations for the two fluids shows that {\displaystyle p\neq p_{1}+p_{2}}, although for {\displaystyle v} sufficiently large {\displaystyle p\approx p_{1}+p_{2}}, with equality holding in the ideal gas limit. The quadratic forms for {\displaystyle a_{x}} and {\displaystyle b_{x}} are a consequence of the forces between molecules. This was first shown by Lorentz, and was credited to him by van der Waals. The quantities {\displaystyle a_{11},\,a_{22}} and {\displaystyle b_{11},\,b_{22}} in these expressions characterize collisions between two molecules of the same fluid component, while {\displaystyle a_{12}=a_{21}} and {\displaystyle b_{12}=b_{21}} represent collisions between one molecule of each of the two different fluid components. This idea of van der Waals' was later called a one fluid model of mixture behavior.
Assuming that {\displaystyle b_{12}} is the arithmetic mean of {\displaystyle b_{11}} and {\displaystyle b_{22}}, {\displaystyle b_{12}=(b_{11}+b_{22})/2}, substituting into the quadratic form, and noting that {\displaystyle x_{1}+x_{2}=1}, produces
{\displaystyle b=b_{11}x_{1}+b_{22}x_{2}}
Van der Waals wrote this relation, but did not make use of it initially. However, it has been used frequently in subsequent studies, and its use is said to produce good agreement with experimental results at high pressure.
=== Common tangent construction ===
In this article, van der Waals used the Helmholtz potential minimum principle to establish stability conditions. This principle states that in a system in diathermal contact with a heat reservoir at {\displaystyle T=T_{R}}, {\displaystyle DF=0} and {\displaystyle D^{2}F>0}; namely, at equilibrium the Helmholtz potential is a minimum. Since, like {\displaystyle g(p,T)}, the molar Helmholtz function {\displaystyle f(v,T)} is also a potential function whose differential is
{\displaystyle df=\left({\frac {\partial f}{\partial v}}\right)_{T}dv+\left({\frac {\partial f}{\partial T}}\right)_{v}dT=-p\,dv-s\,dT,}
this minimum principle leads to the stability condition {\displaystyle \partial ^{2}f/\partial v^{2}|_{T}=-\partial p/\partial v|_{T}>0}. This condition means that the function {\displaystyle f(v,T)} is convex at all stable states of the system. Moreover, for those states the previous stability condition for the pressure is also necessarily satisfied.
==== Single fluid ====
For a single substance, the definition of the molar Gibbs free energy can be written in the form {\displaystyle f=g-pv}. Thus when {\displaystyle p} and {\displaystyle g} are constant, the function {\displaystyle f(v)} is a straight line with slope {\displaystyle -p} and intercept {\displaystyle g}. Since the curve {\displaystyle f(T_{R},v)} has positive curvature everywhere when {\displaystyle T_{R}\geq T_{\text{c}}}, the curve and the straight line will have a single tangent. However, for a subcritical {\displaystyle T_{R}}, {\displaystyle f(T_{R},v)} is not everywhere convex. With {\displaystyle p=p_{\text{s}}(T_{R})} and a suitable value of {\displaystyle g}, the line will be tangent to {\displaystyle f(T_{R},v)} at the molar volume of each coexisting phase, saturated liquid {\displaystyle v_{f}(T_{R})} and saturated vapor {\displaystyle v_{g}(T_{R})}; there will be a double tangent. Furthermore, each of these points is characterized by the same values of {\displaystyle g}, {\displaystyle p}, and {\displaystyle T_{R}.} These are the same three specifications for coexistence that were used previously.
Figure 8 depicts an evaluation of {\displaystyle f(T_{R},v)} as a green curve, with {\displaystyle v_{f}} and {\displaystyle v_{g}} marked by the left and right green circles, respectively. The region on the green curve for {\displaystyle v\leq v_{f}} corresponds to the liquid state. As {\displaystyle v} increases past {\displaystyle v_{f}}, the curvature of {\displaystyle f} (proportional to {\displaystyle \partial _{v}\partial _{v}f=-\partial _{v}p}) continually decreases. The inflection point, characterized by zero curvature, is a spinodal point; between {\displaystyle v_{f}} and this point is the metastable superheated liquid. For further increases in {\displaystyle v} the curvature decreases to a minimum, then increases to another (zero curvature) spinodal point; between these two spinodal points is the unstable region, in which the fluid cannot exist in a homogeneous equilibrium state (represented by the dotted grey curve). With a further increase in {\displaystyle v} the curvature increases to a maximum at {\displaystyle v_{g}}, where the slope is {\displaystyle -p_{\text{s}}}; the region between this point and the second spinodal point is the metastable subcooled vapor. Finally, the region {\displaystyle v\geq v_{g}} is the vapor. In this region the curvature continually decreases until it is zero at infinitely large {\displaystyle v}. The double tangent line (solid black) that runs between {\displaystyle v_{f}} and {\displaystyle v_{g}} represents states that are stable but heterogeneous, not homogeneous solutions of the vdW equation. The states above this line (with larger Helmholtz free energy) are either metastable or unstable. The combined solid green-black curve in Figure 8 is the convex envelope of {\displaystyle f(T_{R},v)}, which is defined as the largest convex curve that is less than or equal to the function.
For a vdW fluid, the molar Helmholtz potential is given by Eq (4a). In reduced form this is
{\displaystyle f_{r}={\frac {f}{RT_{\text{c}}}}=C_{u}+T_{\text{r}}(c-C_{\text{s}}-\ln[T_{\text{r}}^{c}(3v_{\text{r}}-1)])-{\frac {9}{8v_{\text{r}}}}}
with derivative
{\displaystyle \partial _{v_{\text{r}}}f_{\text{r}}=-3T_{\text{r}}/(3v_{\text{r}}-1)+9/(8v_{\text{r}}^{2})=-(3/8)p_{\text{r}}}
A plot of this function {\displaystyle f_{\text{r}}}, whose slope at each point is thus fixed by {\displaystyle p_{\text{r}}} of the vdW equation, for the subcritical isotherm {\displaystyle T_{\text{r}}=7/8}, is shown in Figure 8 along with the line tangent to it at its two coexisting saturation points. The data illustrated in Figure 8 is the same as that shown in Figure 1 for this isotherm.
This double tangent construction thus provides a graphical alternative to the Maxwell construction to establish the saturated liquid and vapor points on an isotherm.
==== Binary fluid ====
Van der Waals used the Helmholtz function because its properties could be easily extended to the binary fluid situation. In a binary mixture of vdW fluids, the Helmholtz potential is a function of two variables, {\displaystyle f(T_{R},v,x)}, where {\displaystyle x} is a composition variable (for example {\displaystyle x=x_{2}}, so that {\displaystyle x_{1}=1-x}). In this case, there are three stability conditions:
{\displaystyle {\frac {\partial ^{2}f}{\partial v^{2}}}>0\qquad {\frac {\partial ^{2}f}{\partial x^{2}}}>0\qquad {\frac {\partial ^{2}f}{\partial v^{2}}}{\frac {\partial ^{2}f}{\partial x^{2}}}-\left({\frac {\partial ^{2}f}{\partial x\partial v}}\right)^{2}>0}
and the Helmholtz potential is a surface (of physical interest in the region {\displaystyle 0\leq x\leq 1}). The first two stability conditions show that the curvatures in each of the directions {\displaystyle v} and {\displaystyle x} are both non-negative for stable states, while the third condition indicates that stable states correspond to elliptic points on this surface. Moreover, its limit,
{\displaystyle {\frac {\partial ^{2}f}{\partial v^{2}}}{\frac {\partial ^{2}f}{\partial x^{2}}}-\left({\frac {\partial ^{2}f}{\partial x\partial v}}\right)^{2}=0}
specifies the spinodal curves on the surface.
For a binary mixture, the Euler equation can be written in the form
{\displaystyle {\begin{aligned}f&=-pv+\mu _{1}x_{1}+\mu _{2}x_{2}\\&=-pv+(\mu _{2}-\mu _{1})x+\mu _{1}\end{aligned}}}
where {\displaystyle \mu _{j}=\partial _{x_{j}}f} are the molar chemical potentials of each substance, {\displaystyle j=1,2}. For constant values of {\displaystyle p}, {\displaystyle \mu _{1}}, and {\displaystyle \mu _{2}}, this equation describes a plane with slope {\displaystyle -p} in the {\displaystyle v} direction, slope {\displaystyle \mu _{2}-\mu _{1}} in the {\displaystyle x} direction, and intercept {\displaystyle \mu _{1}}. As in the case of a single substance, here the plane and the surface can have a double tangent, and the locus of the coexisting phase points forms a curve on the surface. The coexistence conditions are that the two phases have the same {\displaystyle T}, {\displaystyle p}, {\displaystyle \mu _{2}-\mu _{1}}, and {\displaystyle \mu _{1}}; the last two are equivalent to having the same {\displaystyle \mu _{1}} and {\displaystyle \mu _{2}} individually, which are just the Gibbs conditions for material equilibrium in this situation. The two methods of producing the coexistence surface are equivalent.
Although this case is similar to that of a single fluid, here the geometry can be much more complex. The surface can develop a wave (called a plait or fold) in the {\displaystyle x} direction as well as the one in the {\displaystyle v} direction. Therefore, there can be two liquid phases that can be either miscible, or wholly or partially immiscible, as well as a vapor phase. Despite a great deal of both theoretical and experimental work on this problem by van der Waals and his successors, work which produced much useful knowledge about the various types of phase equilibria that are possible in fluid mixtures, complete solutions to the problem were only obtained after 1967, when the availability of modern computers made calculations of mathematical problems of this complexity feasible for the first time. The results obtained were, in Rowlinson's words,
a spectacular vindication of the essential physical correctness of the ideas behind the van der Waals equation, for almost every kind of critical behavior found in practice can be reproduced by the calculations, and the range of parameters that correlate with the different kinds of behavior are intelligible in terms of the expected effects of size and energy.
=== Mixing rules ===
To obtain these numerical results, the values of the constants of the individual component fluids {\displaystyle a_{11},a_{22},b_{11},b_{22}} must be known. In addition, the effect of collisions between molecules of the different components, given by {\displaystyle a_{12}} and {\displaystyle b_{12}}, must also be specified. In the absence of experimental data, or computer modeling results to estimate their values, the empirical combining rules, the geometric and algebraic means respectively, can be used:
{\displaystyle a_{12}=(a_{11}a_{22})^{1/2}\qquad {\text{and}}\qquad b_{12}^{1/3}=(b_{11}^{1/3}+b_{22}^{1/3})/2.}
These relations correspond to the empirical combining rules for the intermolecular force constants,
{\displaystyle \epsilon _{12}=(\epsilon _{11}\epsilon _{22})^{1/2}\qquad {\text{and}}\qquad \sigma _{12}=(\sigma _{11}+\sigma _{22})/2,}
the first of which follows from a simple interpretation of the dispersion forces in terms of polarizabilities of the individual molecules, while the second is exact for rigid molecules. Using these empirical combining rules to generalize for {\displaystyle n} fluid components, the quadratic mixing rules for the material constants are:
{\displaystyle {\begin{aligned}a_{x}&=\sum _{i=1}^{n}\sum _{j=1}^{n}{\left(a_{ii}a_{jj}\right)}^{1/2}x_{i}x_{j}={\left(\sum _{i=1}^{n}a_{ii}^{1/2}x_{i}\right)}^{2}\\b_{x}&={\tfrac {1}{8}}\sum _{i=1}^{n}\sum _{j=1}^{n}{\left(b_{ii}^{1/3}+b_{jj}^{1/3}\right)}^{3}x_{i}x_{j}\end{aligned}}}
These expressions come into use when mixing gases in proportion, such as when producing tanks of air for diving and managing the behavior of fluid mixtures in engineering applications. However, more sophisticated mixing rules are often necessary, to obtain satisfactory agreement with reality over the wide variety of mixtures encountered in practice.
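A minimal sketch of these quadratic mixing rules (not part of the original article; the component constants below are rough, assumed values for a nitrogen–oxygen pair, used only as an example) is:

```python
import numpy as np

def vdw_mixture(a, b, x):
    """Quadratic mixing rules quoted above: geometric mean for a_ij, Lorentz rule for b_ij."""
    a, b, x = map(np.asarray, (a, b, x))
    a_x = np.sum(np.sqrt(a) * x) ** 2
    b_ij = ((b[:, None] ** (1.0 / 3.0) + b[None, :] ** (1.0 / 3.0)) / 2.0) ** 3
    b_x = np.sum(b_ij * x[:, None] * x[None, :])
    return a_x, b_x

# Roughly N2 (a = 0.137 Pa m^6/mol^2, b = 3.87e-5 m^3/mol) and O2 (0.138, 3.19e-5), air-like composition.
print(vdw_mixture([0.137, 0.138], [3.87e-5, 3.19e-5], [0.79, 0.21]))
```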
Another method of specifying the vdW constants, pioneered by W.B. Kay and known as Kay's rule, specifies the effective critical temperature and pressure of the fluid mixture by
{\displaystyle T_{{\text{c}}x}=\sum _{i=1}^{n}T_{{\text{c}}i}x_{i}\qquad {\text{and}}\qquad p_{{\text{c}}x}=\sum _{i=1}^{n}\,p_{{\text{c}}i}x_{i}.}
In terms of these quantities, the vdW mixture constants are
{\displaystyle a_{x}=\left({\frac {3}{4}}\right)^{3}{\frac {(RT_{{\text{c}}x})^{2}}{p_{{\text{c}}x}}},\qquad \qquad b_{x}=\left({\frac {1}{2}}\right)^{3}{\frac {RT_{{\text{c}}x}}{p_{{\text{c}}x}}}}
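For comparison (again an illustration, not part of the original article), Kay's rule can be evaluated for the same kind of binary, using assumed critical constants:

```python
import numpy as np

def kay(Tc, pc, x, R=8.314):
    Tc, pc, x = map(np.asarray, (Tc, pc, x))
    Tcx, pcx = np.sum(Tc * x), np.sum(pc * x)          # Kay's rule quoted above
    a_x = (3.0 / 4.0) ** 3 * (R * Tcx) ** 2 / pcx      # vdW mixture constants quoted above
    b_x = (1.0 / 2.0) ** 3 * R * Tcx / pcx
    return Tcx, pcx, a_x, b_x

# Roughly N2 (Tc = 126.2 K, pc = 3.39 MPa) and O2 (154.6 K, 5.04 MPa), assumed values.
print(kay([126.2, 154.6], [3.39e6, 5.04e6], [0.79, 0.21]))
```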
which Kay used as the basis for calculations of the thermodynamic properties of mixtures. Kay's idea was adopted by T. W. Leland, who applied it to the molecular parameters {\displaystyle \epsilon ,\sigma }, which are related to {\displaystyle a,b} through {\displaystyle T_{\text{c}},p_{\text{c}}} by {\displaystyle a\propto \epsilon \sigma ^{3}} and {\displaystyle b\propto \sigma ^{3}}. Using these together with the quadratic mixing rules for {\displaystyle a,b} produces
{\displaystyle \sigma _{x}^{3}=\sum _{i=1}^{n}\sum _{j=1}^{n}\,\sigma _{ij}^{3}x_{i}x_{j}\qquad {\text{and}}\qquad \epsilon _{x}=\left[\sum _{i=1}^{n}\sum _{j=1}^{n}\epsilon _{ij}\sigma _{ij}^{3}x_{i}x_{j}\right]\left[\sum _{i=1}^{n}\sum _{j=1}^{n}\,\sigma _{ij}^{3}x_{i}x_{j}\right]^{-1}}
which is the van der Waals approximation expressed in terms of the intermolecular constants. This approximation, when compared with computer simulations for mixtures, is in good agreement over the range {\displaystyle 1/2<(\sigma _{11}/\sigma _{22})^{3}<2}, namely for molecules of similar diameters. Rowlinson said of this approximation, "It was, and indeed still is, hard to improve on the original van der Waals recipe when expressed in [this] form".
== Validity of the equation ==
Since van der Waals presented his thesis, "[m]any derivations, pseudo-derivations, and plausibility arguments have been given" for it. However, no mathematically rigorous derivation of the equation over its entire range of molar volume that begins from a statistical mechanical principle exists. Indeed, such a proof is not possible, even for hard spheres. Goodstein writes, "Obviously the value of the van der Waals equation rests principally on its empirical behavior rather than its theoretical foundation."
Although the use of the vdW equation is not justified mathematically, it has empirical validity. Its various applications, both qualitative and quantitative, which attest to this, have been described previously in this article. This point was also made by Alder, et al. who, at a conference marking the 100th anniversary of van der Waals' thesis, noted that:
It is doubtful whether we would celebrate the centennial of the Van der Waals equation if it were applicable only under circumstances where it has been proven to be rigorously valid. It is empirically well established that many systems whose molecules have attractive potentials that are neither long-range nor weak conform nearly quantitatively to the Van der Waals model. An example is the theoretically much studied system of Argon, where the attractive potential has only a range half as large as the repulsive core. They continued by saying that this model has "validity down to temperatures below the critical temperature, where the attractive potential is not weak at all but, in fact, comparable to the thermal energy." They also described its application to mixtures "where the Van der Waals model has also been applied with great success. In fact, its success has been so great that not a single other model of the many proposed since, has equalled its quantitative predictions, let alone its simplicity."
Engineers have made extensive use of this empirical validity, modifying the equation in numerous ways (by one account there have been some 400 cubic equations of state produced) to manage the liquids, and gases of pure substances and mixtures, that they encounter in practice.
This situation has been aptly described by Boltzmann:
... van der Waals has given us such a valuable tool that it would cost us much trouble to obtain by the subtlest deliberations a formula that would really be more useful than the one that van der Waals found by inspiration, as it were.
== Notes ==
== References ==
Alder, B. J.; Alley, W. E.; Rigby, M. (1974). "Correction to the van der Waals model for mixtures and for the diffusion coefficient". Physica. 74 (1): 143–155. Bibcode:1974Phy....73..143A. doi:10.1016/0031-8914(74)90231-6.
Andrews, T. (1869). "On the Continuity of the Gaseous and Liquid States of Matter". Philosophical Transactions of the Royal Society of London. 159: 575–590.
Barenblatt, G.I. (1979) [1978], Similarity, Self-Similarity, and Intermediate Asymptotics, translated by Stein, Norman, NY and London: Milton
Barrufet, M.A.; Eubank, P.T. (1989). "Generalized Saturation Properties of Pure Fluids Via Cubic Equations of State". Chemical Engineering Education. 23 (3): 168–175.
Boltzmann, L. (1995) [1967, original in German 1896 (Part I), 1898 (Part II)]. Lectures on Gas Theory. Translated by Brush, S.G. NY: Dover.
Brush, Stephen G. (1973). "J.D. van der Waals and the states of matter". Phys. Teach. 11 (5): 261–270. Bibcode:1973PhTea..11..261B. doi:10.1119/1.2349996.
Callen, H.B. (1960). Thermodynamics. NY: John Wiley and Sons.
DeBoer, J. (1974). "Van der Waals in his time and the present revival opening address". Physica. 73 (1): 1–27. Bibcode:1974Phy....73....1D. doi:10.1016/0031-8914(74)90223-7.
Dong, W.G.; Lienhard, J.H. (1986). "Corresponding States Correlation of Saturated and Metastable Properties". Can J Chem Eng. 64: 158–161. doi:10.1002/cjce.5450640123.
Epstein, P.S. (1937). Textbook of Thermodynamics. NY: John Wiley and Sons.
Gibbs, J.W. (1948) [1901]. The Collected Works of J. Willard Gibbs Volume II Part One Elementary Principles in Statistical Mechanics. New Haven: Yale University Press.
Goodstein, D.L. (1985) [1975]. States of Matter. NY: Dover.
Grattan-Guinness, Ivor (11 February 2005). Landmark Writings in Western Mathematics 1640-1940. Elsevier. ISBN 978-0-08-045744-4.
Hewitt, Nigel. "Who was Van der Waals anyway and what has he to do with my Nitrox fill?". Maths for Divers. Archived from the original on 11 March 2020. Retrieved 1 February 2019.
Hill, Terrell L. (1986). Statistical Thermodynamics. NY: Dover.
Hirschfelder, J. O.; Curtis, C. F.; Bird, R. B. (1964). Molecular Theory of Gases and Liquids, corrected printing. NY: John Wiley and Sons, Inc.
Johnston, D.C. (2014). Advances in Thermodynamics of the van der Waals Fluid. arXiv:1402.1205. Bibcode:2014atvd.book.....J. doi:10.1088/978-1-627-05532-1. ISBN 978-1-627-05532-1.
Kac, M.; Uhlenbeck, G.E.; Hemmer, P.C. (1963). "On the van der Waals Theory of the Vapor-Liquid Equilibrium. 1. Discussion of a One-Dimensional Model". J. Math. Phys. 4 (2): 216–228. Bibcode:1963JMP.....4..216K. doi:10.1063/1.1703946.
Kreyszig, E. (1959). Differential Geometry. Toronto: University of Toronto Press.
Klein, M. J. (1974). "The Historical Origins of the Van der Waals Equation". Physica. 73 (1): 28–47. Bibcode:1974Phy....73...28K. doi:10.1016/0031-8914(74)90224-9.
Kontogeorgis, G.M.; Privat, R.; Jaubert, J-N.J. (2019). "Taking Another Look at the van der Waals Equation of State – Almost 150 Years Later". J. Chem. Eng. Data. 64 (11): 4619–4637. doi:10.1021/acs.jced.9b00264.
Korteweg, D.T. (1891a). "On Van Der Waals Isothermal Equation". Nature. 45 (1155): 152–154. Bibcode:1891Natur..45..152K. doi:10.1038/045152a0.
Korteweg, D.T. (1891b). "On Van Der Waals Isothermal Equation". Nature. 45 (1160): 277. doi:10.1038/045277a0.
Lebowitz, J.L. (1974). "Exact Derivation of the Van Der Waals Equation". Physica. 73 (1): 48–60. Bibcode:1974Phy....73...48L. doi:10.1016/0031-8914(74)90225-0.
Lebowitz, J.L.; Penrose, O. (1966). "Rigorous Treatment of the Van der Waals–Maxwell Theory of the Liquid–Vapor Transition". Jour Math Phys. 7 (1): 98–113. Bibcode:1966JMP.....7...98L. doi:10.1063/1.1704821.
Lekner, J. (1982). "Parametric solution of the van der Waals liquid–vapor coexistence curve". Am. J. Phys. 50 (2): 161–163. Bibcode:1982AmJPh..50..161L. doi:10.1119/1.12877.
Leland, T. W.; Rowlinson, J.S.; Sather, G.A. (1968). "Statistical thermodynamics of mixtures of molecules of different sizes". Trans. Faraday Soc. 64: 1447–1460. doi:10.1039/tf9686401447.
Lienhard, J.H. (1986). "The Properties and Behavior of Superheated Liquids". Lat. Am. J. Heat and Mass Transfer. 10: 169–187.
Lienhard, J.H; Shamsundar, N.; Biney, P.O. (1986). "Spinodal Lines and Equations of State: A Review". Nuclear Engineering and Design. 95: 297–314. Bibcode:1986NuEnD..95..297L. doi:10.1016/0029-5493(86)90056-7.
Maxwell, J.C. (1875). "On the Dynamical Evidence of the Molecular Constitution of Bodies". Nature. 11 (279): 357–359. Bibcode:1875Natur..11..357C. doi:10.1038/011357a0.
Moran, M.J.; Shapiro, H.N. (2000). Fundamentals of Engineering Thermodynamics 4th Edition. NY: McGraw-Hill.
Niemeyer, Kyle. "Mixture properties". Computational Thermodynamics. Archived from the original on 2 April 2024. Retrieved 2 April 2024.
Peck, R.E. (1982). "The Assimilation of van der Waals Equation in the Corresponding States Family". Can. J. Chem. Eng. 60: 446–449. doi:10.1002/cjce.5450600319.
Pitzer, K.S.; Lippman, D.Z.; Curl, R.F.; Huggins, C.M.; Peterson, D.E. (1955). "The Volumetric and Thermodynamic Properties of Fluids. II. Compressibility Factor, Vapor Pressure and Entropy of Vaporization". J. Am. Chem. Soc. 77 (13): 3433–3440. Bibcode:1955JAChS..77.3433P. doi:10.1021/ja01618a002.
Redlich, O.; Kwong, J. N. S. (1949). "On the Thermodynamics of Solutions. V. An Equation of State. Fugacities of Gaseous Solutions" (PDF). Chemical Reviews. 44 (1): 233–244. doi:10.1021/cr60137a013. PMID 18125401. Retrieved 2 April 2024.
Shamsundar, N.; Lienhard, J.H. (1983). "Saturation and Metastable Properties of the van der Waals Fluid". Can J Chem Eng. 61 (6): 876–880. doi:10.1002/cjce.5450610617.
Sommerfeld, A. (1956). Bopp, F.; Meixner, J. (eds.). Thermodynamics and Statistical Mechanics – Lectures on Theoretical Physics Volume V. Translated by Kestin, J. NY: Academic Press. ISBN 978-0-323-13773-7.
Strutt, J.W., 3rd Baron Rayleigh (1891). "On the Virial of a System of Hard Colliding Bodies". Nature. 45 (1152): 80–82. Bibcode:1891Natur..45...80R. doi:10.1038/045080a0.
Su, G.J. (1946). "Modified Law of Corresponding States for Real Gases". Ind. Eng. Chem. 38 (8): 803–806. doi:10.1021/ie50440a018.
Tien, C.L.; Lienhard, J.H. (1979). Statistical Thermodynamics Revised Printing. NY: Hemisphere Publishing. Bibcode:1979wdch.book.....T.
Tonks, L. (1936). "The Complete Equation of State of One, Two, and Three-Dimensional Gases of Hard Elastic Spheres". Phys. Rev. 50 (10): 955–963. Bibcode:1936PhRv...50..955T. doi:10.1103/PhysRev.50.955.
Valderrama, J.O. (2003). "The State of the Cubic Equations of State". Ind. Chem. Eng. Res. 42 (8): 1603–1618. doi:10.1021/ie020447b.
van der Waals, J.D. (1873). Over de Continuïteit van den Gas en Vloeistoftoestand (Ph.D. thesis). Leiden Univ.
van der Waals, Johannes D. (1967) [1910]. "The Equation of State of Gases and Liquids". in Nobel Lectures, Physics 1901–1921. Amsterdam: Elsevier. pp. 254–265.
van der Waals, J.D. (2004) [1984]. Rowlinson, J.S. (ed.). On the Continuity of the Gaseous and Liquid States, edited and with an Introduction by J.S. Rowlinson. NY: Dover Phoenix Editions.
van Hove, L. (1949). "Quelques Proprieties Generales De L'Integrale De Configuration D'Un Systeme De Particules Avec Interaction". Physica. 15 (11–12): 951–961. Bibcode:1949Phy....15..951V. doi:10.1016/0031-8914(49)90059-2.
Van Wylen, G.J.; Sonntag, R.E. (1973). Fundamentals of Classical Thermodynamics Second Edition. NY: John Wiley and Sons.
Vera, J.H.; Prausnitz, J.M. (1972). "Generalized van der Waals Theory for Dense Fluids". Chem. Eng. Jour. 3: 1–13. doi:10.1016/0300-9467(72)85001-9.
Weinberg, S. (2021). Foundations of Modern Physics. Cambridge: Cambridge University Press. Bibcode:2021fmp..book.....W.
Whitman, A.M. (2023). Thermodynamics: Basic Principles and Engineering Applications 2nd Edition. NY: Springer.
== See also ==
Gas laws
Ideal gas
Inversion temperature
Iteration
Maxwell construction
Real gas
Theorem of corresponding states
Van der Waals constants (data page)
Redlich–Kwong equation of state
== Further reading ==
Chandler, David (1987). Introduction to Modern Statistical Mechanics. Oxford: Oxford University Press. pp. 287–295. ISBN 0195042778.
Cross, Michael (2004), "Lecture 3: First Order Phase Transitions" (PDF), Physics 127: Statistical Physics, Second Term, Pasadena, California: Division of Physics, Mathematics, and Astronomy, California Institute of Technology.
Dalgarno, A.; Davison, W.D. (1966). "The Calculation of Van Der Waals Interactions". Advances in Atomic and Molecular Physics. 2: 1–32. Bibcode:1966AdAMP...2....1D. doi:10.1016/S0065-2199(08)60216-X. ISBN 9780120038022.
Kittel, Charles; Kroemer, Herbert (1980). Thermal Physics (Revised ed.). New York: Macmillan. pp. 287–295. ISBN 0716710889.
Local-density approximations (LDA) are a class of approximations to the exchange–correlation (XC) energy functional in density functional theory (DFT) that depend solely upon the value of the electronic density at each point in space (and not, for example, derivatives of the density or the Kohn–Sham orbitals). Many approaches can yield local approximations to the XC energy. However, overwhelmingly successful local approximations are those that have been derived from the homogeneous electron gas (HEG) model. In this regard, LDA is generally synonymous with functionals based on the HEG approximation, which are then applied to realistic systems (molecules and solids).
In general, for a spin-unpolarized system, a local-density approximation for the exchange-correlation energy is written as
{\displaystyle E_{\rm {xc}}^{\mathrm {LDA} }[\rho ]=\int \rho (\mathbf {r} )\epsilon _{\rm {xc}}(\rho (\mathbf {r} ))\ \mathrm {d} \mathbf {r} \ ,}
where ρ is the electronic density and єxc is the exchange-correlation energy per particle of a homogeneous electron gas of charge density ρ. The exchange-correlation energy is decomposed into exchange and correlation terms linearly,
{\displaystyle E_{\rm {xc}}=E_{\rm {x}}+E_{\rm {c}}\ ,}
so that separate expressions for Ex and Ec are sought. The exchange term takes on a simple analytic form for the HEG. Only limiting expressions for the correlation density are known exactly, leading to numerous different approximations for єc.
Local-density approximations are important in the construction of more sophisticated approximations to the exchange-correlation energy, such as generalized gradient approximations (GGA) or hybrid functionals, as a desirable property of any approximate exchange-correlation functional is that it reproduce the exact results of the HEG for non-varying densities. As such, LDA's are often an explicit component of such functionals.
The local-density approximation was first introduced by Walter Kohn and Lu Jeu Sham in 1965.
== Applications ==
Local density approximations, as with GGAs, are employed extensively by solid state physicists in ab initio DFT studies to interpret electronic and magnetic interactions in semiconductor materials, including semiconducting oxides and spintronics. The importance of these computational studies stems from the system complexities which bring about high sensitivity to synthesis parameters, necessitating first-principles based analysis. The prediction of the Fermi level and band structure in doped semiconducting oxides is often carried out using LDA incorporated into simulation packages such as CASTEP and DMol3. However, an underestimation of band gap values, often associated with LDA and GGA approximations, may lead to false predictions of impurity-mediated conductivity and/or carrier-mediated magnetism in such systems. Starting in 1998, the application of the Rayleigh theorem for eigenvalues has led to mostly accurate calculated band gaps of materials, using LDA potentials. A misunderstanding of the second theorem of DFT appears to explain most of the underestimation of band gap by LDA and GGA calculations, as explained in the description of density functional theory, in connection with the statements of the two theorems of DFT.
== Homogeneous electron gas ==
Approximations for єxc that depend only upon the density can be developed in numerous ways. The most successful approach is based on the homogeneous electron gas. This is constructed by placing N interacting electrons into a volume V, with a positive background charge keeping the system neutral. N and V are then taken to infinity in the manner that keeps the density (ρ = N / V) finite. This is a useful approximation, as the total energy consists of contributions only from the kinetic energy, the electrostatic interaction energy, and the exchange-correlation energy, and the wavefunction is expressible in terms of plane waves. In particular, for a constant density ρ, the exchange energy density is proportional to ρ⅓.
== Exchange functional ==
The exchange-energy density of a HEG is known analytically. The LDA for exchange employs this expression under the approximation that the exchange-energy in a system where the density is not homogeneous, is obtained by applying the HEG results pointwise, yielding the expression
{\displaystyle E_{\rm {x}}^{\mathrm {LDA} }[\rho ]=-{\frac {3e^{2}}{16\pi \varepsilon _{0}}}\left({\frac {3}{\pi }}\right)^{1/3}\int \rho (\mathbf {r} )^{4/3}\ \mathrm {d} \mathbf {r} =-{\frac {3}{4}}\left({\frac {3}{\pi }}\right)^{1/3}\int \rho (\mathbf {r} )^{4/3}\ \mathrm {d} \mathbf {r} \,,}
where the second formulation applies in atomic units.
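As a simple illustration (not part of the original article), the LDA exchange energy can be evaluated numerically on a radial grid for an assumed test density; here the exponential hydrogen 1s density is used, in atomic units.

```python
import numpy as np

r = np.linspace(1e-6, 20.0, 20000)
dr = r[1] - r[0]
rho = np.exp(-2.0 * r) / np.pi                 # hydrogen 1s density, one electron in total

Cx = -(3.0 / 4.0) * (3.0 / np.pi) ** (1.0 / 3.0)
Ex = Cx * np.sum(rho ** (4.0 / 3.0) * 4.0 * np.pi * r**2) * dr   # radial integration of rho^(4/3)
print(Ex)   # about -0.21 Hartree with this spin-unpolarized formula
```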
== Correlation functional ==
Analytic expressions for the correlation energy of the HEG are available in the high- and low-density limits corresponding to infinitely-weak and infinitely-strong correlation. For a HEG with density ρ, the high-density limit of the correlation energy density is
{\displaystyle \epsilon _{\rm {c}}=A\ln(r_{\rm {s}})+B+r_{\rm {s}}(C\ln(r_{\rm {s}})+D)\ ,}
and the low-density limit
{\displaystyle \epsilon _{\rm {c}}={\frac {1}{2}}\left({\frac {g_{0}}{r_{\rm {s}}}}+{\frac {g_{1}}{r_{\rm {s}}^{3/2}}}+\dots \right)\ ,}
where the Wigner-Seitz parameter {\displaystyle r_{\rm {s}}} is dimensionless. It is defined as the radius of a sphere which encompasses exactly one electron, divided by the Bohr radius a0. In terms of the density ρ, this means
{\displaystyle {\frac {4}{3}}\pi r_{\rm {s}}^{3}={\frac {1}{\rho \,a_{0}^{3}}}\ .}
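A one-line conversion (an illustration, not from the original article) between the density and r_s follows from this relation:

```python
import math

def r_s(rho_a0):
    """Wigner-Seitz parameter for a density given in electrons per cubic Bohr radius."""
    return (3.0 / (4.0 * math.pi * rho_a0)) ** (1.0 / 3.0)

print(r_s(0.0025))   # about 4.6, a typical metallic value
```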
An analytical expression for the full range of densities has been proposed based on the many-body perturbation theory. The calculated correlation energies are in agreement with the results from quantum Monte Carlo simulation to within 2 milli-Hartree.
Accurate quantum Monte Carlo simulations for the energy of the HEG have been performed for several intermediate values of the density, in turn providing accurate values of the correlation energy density.
== Spin polarization ==
The extension of density functionals to spin-polarized systems is straightforward for exchange, where the exact spin-scaling is known, but for correlation further approximations must be employed. A spin polarized system in DFT employs two spin-densities, ρα and ρβ with ρ = ρα + ρβ, and the form of the local-spin-density approximation (LSDA) is
{\displaystyle E_{\rm {xc}}^{\mathrm {LSDA} }[\rho _{\alpha },\rho _{\beta }]=\int \mathrm {d} \mathbf {r} \ \rho (\mathbf {r} )\epsilon _{\rm {xc}}(\rho _{\alpha },\rho _{\beta })\ .}
For the exchange energy, the exact result (not just for local density approximations) is known in terms of the spin-unpolarized functional:
{\displaystyle E_{\rm {x}}[\rho _{\alpha },\rho _{\beta }]={\frac {1}{2}}{\bigg (}E_{\rm {x}}[2\rho _{\alpha }]+E_{\rm {x}}[2\rho _{\beta }]{\bigg )}\ .}
The spin-dependence of the correlation energy density is approached by introducing the relative spin-polarization:
{\displaystyle \zeta (\mathbf {r} )={\frac {\rho _{\alpha }(\mathbf {r} )-\rho _{\beta }(\mathbf {r} )}{\rho _{\alpha }(\mathbf {r} )+\rho _{\beta }(\mathbf {r} )}}\ .}
{\displaystyle \zeta =0\,} corresponds to the paramagnetic spin-unpolarized situation with equal {\displaystyle \alpha \,} and {\displaystyle \beta \,} spin densities, whereas {\displaystyle \zeta =\pm 1} corresponds to the ferromagnetic situation where one spin density vanishes. The spin correlation energy density for given values of the total density and relative polarization, єc(ρ,ζ), is constructed so as to interpolate between the extreme values. Several forms have been developed in conjunction with LDA correlation functionals.
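The exchange spin-scaling relation above can be illustrated with a short sketch (not part of the original article), using the uniform-gas LDA exchange energy density in atomic units; for a fixed total density, polarization increases the magnitude of the exchange energy.

```python
import math

def ex_unpolarized(rho):
    # LDA exchange energy per unit volume of a uniform gas of density rho (atomic units)
    return -(3.0 / 4.0) * (3.0 / math.pi) ** (1.0 / 3.0) * rho ** (4.0 / 3.0)

def ex_spin(rho_a, rho_b):
    # exact spin scaling: E_x[rho_a, rho_b] = (E_x[2 rho_a] + E_x[2 rho_b]) / 2
    return 0.5 * (ex_unpolarized(2.0 * rho_a) + ex_unpolarized(2.0 * rho_b))

rho_a, rho_b = 0.03, 0.01
zeta = (rho_a - rho_b) / (rho_a + rho_b)
print(zeta, ex_spin(rho_a, rho_b), ex_unpolarized(rho_a + rho_b))
```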
== Exchange-correlation potential ==
The exchange-correlation potential corresponding to the exchange-correlation energy for a local density approximation is given by
{\displaystyle v_{\rm {xc}}^{\mathrm {LDA} }(\mathbf {r} )={\frac {\delta E^{\mathrm {LDA} }}{\delta \rho (\mathbf {r} )}}=\epsilon _{\rm {xc}}(\rho (\mathbf {r} ))+\rho (\mathbf {r} ){\frac {\partial \epsilon _{\rm {xc}}(\rho (\mathbf {r} ))}{\partial \rho (\mathbf {r} )}}\ .}
In finite systems, the LDA potential decays asymptotically with an exponential form. This result is in error; the true exchange-correlation potential decays much more slowly, in a Coulombic manner. The artificially rapid decay manifests itself in the number of Kohn–Sham orbitals the potential can bind (that is, how many orbitals have energy less than zero). The LDA potential cannot support a Rydberg series, and those states it does bind are too high in energy. This results in the highest occupied molecular orbital (HOMO) energy being too high, so that any predictions for the ionization potential based on Koopmans' theorem are poor. Further, the LDA provides a poor description of electron-rich species such as anions, where it is often unable to bind an additional electron, erroneously predicting species to be unstable. In the case of spin polarization, the exchange-correlation potential acquires spin indices. However, if one only considers the exchange part of the exchange-correlation, one obtains a potential that is diagonal in spin indices (in atomic units):
{\displaystyle v_{\rm {xc,\alpha \beta }}^{\mathrm {LDA} }(\mathbf {r} )={\frac {\delta E^{\mathrm {LDA} }}{\delta \rho _{\alpha \beta }(\mathbf {r} )}}={\frac {1}{2}}\delta _{\alpha \beta }{\frac {\delta E^{\mathrm {LDA} }[2\rho _{\alpha }]}{\delta \rho _{\alpha }}}=-\delta _{\alpha \beta }{\Big (}{\frac {3}{\pi }}{\Big )}^{1/3}2^{1/3}\rho _{\alpha }^{1/3}}
== References ==
The Korringa–Kohn–Rostoker (KKR) method is used to calculate the electronic band structure of periodic solids. In the derivation of the method using multiple scattering theory by Jan Korringa and the derivation based on the Kohn and Rostoker variational method, the muffin-tin approximation was used. Later calculations are done with full potentials having no shape restrictions.
== Introduction ==
All solids in their ideal state are single crystals with the atoms arranged on a periodic lattice. In condensed matter physics, the properties of such solids are explained on the basis of their electronic structure. This requires the solution of a complicated many-electron problem, but the density functional theory of Walter Kohn makes it possible to reduce it to the solution of a Schroedinger equation with a one-electron periodic potential. The problem is further simplified with the use of group theory and in particular Bloch's theorem, which leads to the result that the energy eigenvalues depend on the crystal momentum {\displaystyle {\bf {k}}} and are divided into bands. Band theory is used to calculate the eigenvalues and wave functions.
As compared with other band structure methods, the Korringa-Kohn-Rostoker (KKR) band structure method has the advantage of dealing with small matrices, due to the fast convergence of scattering operators in angular momentum space, and of handling disordered systems, for which it allows ensemble configuration averages to be carried out with relative ease. The KKR method does have a few “bills” to pay, e.g., (1) the calculation of the KKR structure constants, the empty lattice propagators, must be carried out by Ewald's sums for each energy and k-point, and (2) the KKR functions have a pole structure on the real energy axis, which requires a much larger number of k points for the Brillouin zone (BZ) integration as compared with other band theory methods. The KKR method has been implemented in several codes for electronic structure and spectroscopy calculations, such as MuST, AkaiKKR, sprKKR, FEFF, GNXAS and JuKKR.
== Mathematical formulation ==
The KKR band theory equations for space-filling non-spherical potentials are derived in books and in the article on multiple scattering theory.
The wave function near site {\displaystyle j} is determined by the coefficients {\displaystyle c_{\ell 'm'}^{j}}. According to Bloch's theorem, these coefficients differ only through a phase factor, {\displaystyle c_{\ell 'm'}^{j}={e^{-i{\bf {k}}\cdot {\bf {R}}_{j}}}c_{\ell 'm'}(E,{\bf {k}})}. The {\displaystyle c_{\ell 'm'}(E,{\bf {k}})} satisfy the homogeneous equations
{\displaystyle \sum _{\ell 'm'}M_{\ell m,\ell 'm'}(E,{\bf {k}})c_{\ell 'm'}(E,{\bf {k}})=0,}
where
{\displaystyle {M_{\ell m,\ell 'm'}}(E,{\bf {k}})=m_{\ell m,\ell 'm'}(E)-A_{\ell m,\ell 'm'}(E,{\bf {k}})}
and
{\displaystyle A_{\ell m,\ell 'm'}(E,{\bf {k}})=\sum \limits _{j}{e^{i{\bf {{k}\cdot {\bf {{R}_{ij}}}}}}}g_{lm,l'm'}(E,{\bf {R}}_{ij})}.
The {\displaystyle m_{\ell m,\ell 'm'}(E)} is the inverse of the scattering matrix {\displaystyle t_{\ell m,\ell 'm'}(E)} calculated with the non-spherical potential for the site. As pointed out by Korringa, Ewald derived a summation process that makes it possible to calculate the structure constants,
{\displaystyle A_{\ell m,\ell 'm'}(E,{\bf {k}})}. The energy eigenvalues of the periodic solid for a particular {\displaystyle {\bf {k}}}, {\displaystyle E_{b}({\bf {{k})}}}, are the roots of the equation {\displaystyle \det {\bf {M}}(E,{\bf {k}})=0}. The eigenfunctions are found by solving for the {\displaystyle c_{\ell ,m}(E,{\bf {k}})} with {\displaystyle E=E_{b}({\bf {k}})}. By ignoring all contributions that correspond to an angular momentum {\displaystyle l} greater than {\displaystyle \ell _{\max }}, the matrices have dimension {\displaystyle (\ell _{\max }+1)^{2}}.
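A minimal numerical sketch of this step is shown below, assuming a user-supplied routine that returns the matrix M(E, k) at a fixed k (a hypothetical placeholder, not part of any specific KKR code): band energies are bracketed by scanning the real part of the determinant over an energy mesh and refined by bisection.

```python
import numpy as np

def band_energies(M_of_E, energies):
    """Locate roots of det M(E, k) = 0 for a fixed k by scanning a real
    energy mesh and bisecting each bracketed sign change.

    M_of_E: hypothetical callable returning the (l_max+1)^2 x (l_max+1)^2
    KKR matrix at energy E.
    """
    dets = [np.linalg.det(M_of_E(E)).real for E in energies]
    roots = []
    for i in range(len(energies) - 1):
        if dets[i] * dets[i + 1] < 0:            # sign change brackets a band energy
            lo, hi = energies[i], energies[i + 1]
            for _ in range(50):                   # simple bisection refinement
                mid = 0.5 * (lo + hi)
                if dets[i] * np.linalg.det(M_of_E(mid)).real < 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots
```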
In the original derivations of the KKR method, spherically symmetric muffin-tin potentials were used. Such potentials have the advantage that the inverse of the scattering matrix is diagonal in {\displaystyle l}:
{\displaystyle m_{\ell m,\ell 'm'}=\left[\alpha \cot \delta _{\ell }(E)-i\alpha \right]\delta _{\ell ,\ell '}\delta _{m,m'},}
where {\displaystyle \delta _{\ell }(E)} is the scattering phase shift that appears in the partial wave analysis in scattering theory. The muffin-tin approximation is good for closely packed metals, but it does not work well for ionic solids like semiconductors. It also leads to errors in calculations of interatomic forces.
== Applications ==
The KKR method may be combined with density functional theory (DFT) and used to study the electronic structure and consequent physical properties of molecules and materials. As with any DFT calculation, the electronic problem must be solved self-consistently, before quantities such as the total energy of a collection of atoms, the electron density, the band structure, and forces on individual atoms may be calculated.
One major advantage of the KKR formalism over other electronic structure methods is that it provides direct access to the Green's function of a given system. This, and other convenient mathematical quantities recovered from the derivation in terms of multiple scattering theory, facilitate access to a range of physically relevant quantities, including transport properties, magnetic properties, and spectroscopic properties.
One particularly powerful method which is unique to Green's function-based methods is the coherent potential approximation (CPA), which is an effective medium theory used to average over configurational disorder, such as is encountered in a substitutional alloy. The CPA captures the broken translational symmetry of the disordered alloy in a physically meaningful way, with the end result that the initially 'sharp' band structure is 'smeared-out', which reflects the finite lifetime of electronic states in such a system. The CPA can also be used to average over many possible orientations of magnetic moments, as is necessary to describe the paramagnetic state of a magnetic material (above its Curie temperature). This is referred to as the disordered local moment (DLM) picture.
== References == | Wikipedia/Korringa–Kohn–Rostoker_method |
Quantum chemistry composite methods (also referred to as thermochemical recipes) are computational chemistry methods that aim for high accuracy by combining the results of several calculations. They combine methods with a high level of theory and a small basis set with methods that employ lower levels of theory with larger basis sets. They are commonly used to calculate thermodynamic quantities such as enthalpies of formation, atomization energies, ionization energies and electron affinities. They aim for chemical accuracy which is usually defined as within 1 kcal/mol of the experimental value. The first systematic model chemistry of this type with broad applicability was called Gaussian-1 (G1) introduced by John Pople. This was quickly replaced by the Gaussian-2 (G2) which has been used extensively. The Gaussian-3 (G3) was introduced later.
== Gaussian-n theories ==
=== Gaussian-2 (G2) ===
The G2 uses seven calculations:
The molecular geometry is obtained by an MP2 optimization using the 6-31G(d) basis set with all electrons included in the perturbation. This geometry is used for all subsequent calculations.
The highest level of theory is a quadratic configuration interaction calculation with single and double excitations and a triples excitation contribution (QCISD(T)) with the 6-311G(d) basis set. Such a calculation in the Gaussian and Spartan programs also gives the MP2 and MP4 energies, which are also used.
The effect of polarization functions is assessed using an MP4 calculation with the 6-311G(2df,p) basis set.
The effect of diffuse functions is assessed using an MP4 calculation with the 6-311+G(d, p) basis set.
The largest basis set is 6-311+G(3df,2p) used at the MP2 level of theory.
A Hartree–Fock geometry optimization with the 6-31G(d) basis set is used to give a geometry for:
A frequency calculation with the 6-31G(d) basis set to obtain the zero-point vibrational energy (ZPVE)
The various energy changes are assumed to be additive so the combined energy is given by:
E[QCISD(T), step 2] + {E[MP4, step 3] − E[MP4, step 2]} + {E[MP4, step 4] − E[MP4, step 2]} + {E[MP2, step 5] + E[MP2, step 2] − E[MP2, step 3] − E[MP2, step 4]}
The second term corrects for the effect of adding the polarization functions. The third term corrects for the diffuse functions. The final term corrects for the larger basis set, with the terms from steps 2, 3 and 4 preventing contributions from being counted twice. Two final corrections are made to this energy. The ZPVE is scaled by 0.8929. An empirical correction is then added to account for factors not considered above. This is called the higher level correction (HLC) and is given by −0.00481 × (number of valence electrons) − 0.00019 × (number of unpaired valence electrons). The two numbers are obtained by calibrating the results against experimental results for a set of molecules. The scaled ZPVE and the HLC are added to give the final energy. For some molecules containing one of the third-row elements Ga–Xe, a further term is added to account for spin–orbit coupling.
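As a minimal sketch, the additive assembly described above can be written as a small function; the component energies, the ZPVE and the electron counts are hypothetical placeholders that would in practice come from the individual calculations in the steps listed above.

```python
# Minimal sketch of the additive G2 energy assembly described above.
# All component energies (in hartree) are hypothetical placeholders.
def g2_energy(e_qcisdt_6311Gd, e_mp4_6311Gd, e_mp4_2df_p, e_mp4_plus_dp,
              e_mp2_big, e_mp2_6311Gd, e_mp2_2df_p, e_mp2_plus_dp,
              zpve, n_valence, n_unpaired_valence):
    e = e_qcisdt_6311Gd
    e += e_mp4_2df_p - e_mp4_6311Gd                                # polarization-function correction
    e += e_mp4_plus_dp - e_mp4_6311Gd                              # diffuse-function correction
    e += e_mp2_big + e_mp2_6311Gd - e_mp2_2df_p - e_mp2_plus_dp    # large-basis correction
    e += 0.8929 * zpve                                             # scaled zero-point vibrational energy
    e += -0.00481 * n_valence - 0.00019 * n_unpaired_valence       # higher level correction (HLC)
    return e
```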
Several variants of this procedure have been used. Removing steps 3 and 4 and relying only on the MP2 result from step 5 is significantly cheaper and only slightly less accurate. This is the G2MP2 method. Sometimes the geometry is obtained using a density functional theory method such as B3LYP and sometimes the QCISD(T) method in step 2 is replaced by the coupled cluster method CCSD(T).
The G2(+) variant, where the "+" symbol refers to added diffuse functions, better describes anions than conventional G2 theory. The 6-31+G(d) basis set is used in place of the 6-31G(d) basis set for both the initial geometry optimization, as well as the second geometry optimization and frequency calculation. Additionally, the frozen-core approximation is made for the initial MP2 optimization, whereas G2 usually uses the full calculation.
=== Gaussian-3 (G3) ===
The G3 is very similar to G2 but learns from the experience with G2 theory. The 6-311G basis set is replaced by the smaller 6-31G basis. The final MP2 calculations use a larger basis set, generally just called G3large, and correlate all the electrons, not just the valence electrons as in G2 theory; additionally, a spin–orbit correction term and an empirical correction for valence electrons are introduced. This gives some core correlation contributions to the final energy. The HLC takes the same form but with different empirical parameters.
=== Gaussian-4 (G4) ===
G4 is a compound method in the spirit of the other Gaussian theories and attempts to take the accuracy achieved with G3X one small step further. This involves the introduction of an extrapolation scheme for obtaining basis-set-limit Hartree–Fock energies, the use of geometries and thermochemical corrections calculated at the B3LYP/6-31G(2df,p) level, a highest-level single-point calculation at the CCSD(T) instead of the QCISD(T) level, and the addition of extra polarization functions in the largest-basis-set MP2 calculations. Thus, Gaussian-4 (G4) theory is an approach for the calculation of energies of molecular species containing first-row, second-row, and third-row main-group elements. G4 theory is an improved modification of the earlier G3 theory. The modifications to G3 theory are the change in the estimate of the Hartree–Fock energy limit, an expanded polarization set for the large-basis-set calculation, the use of CCSD(T) energies, the use of geometries and zero-point energies from density functional theory, and two added higher level correction parameters. According to the developers, this theory gives significant improvement over G3 theory. The G4 and the related G4MP2 methods have been extended to cover transition metals. A variant of G4MP2, termed G4(MP2)-6X, has been developed with the aim of improving the accuracy with essentially identical quantum chemistry components. It applies scaling to the energy components in addition to using the HLC. In the G4(MP2)-XK method, which is related to G4(MP2)-6X, the Pople-type basis sets are replaced with customized Karlsruhe-type basis sets. In comparison with G4(MP2)-6X, which covers main-group elements up to krypton, G4(MP2)-XK is applicable to main-group elements up to radon.
== Feller-Peterson-Dixon approach (FPD) ==
Unlike fixed-recipe, "model chemistries", the FPD approach consists of a flexible sequence of (up to) 13 components that vary with the nature of the chemical system under study and the desired accuracy in the final results. In most instances, the primary component relies on coupled cluster theory, such as CCSD(T), or configuration interaction theory combined with large Gaussian basis sets (up through aug-cc-pV8Z, in some cases) and extrapolation to the complete basis set limit. As with some other approaches, additive corrections for core/valence, scalar relativistic and higher order correlation effects are usually included. Attention is paid to the uncertainties associated with each of the components so as to permit a crude estimate of the uncertainty in the overall results. Accurate structural parameters and vibrational frequencies are a natural byproduct of the method. While the computed molecular properties can be highly accurate, the computationally intensive nature of the FPD approach limits the size of the chemical system to which it can be applied to roughly 10 or fewer first/second row atoms.
The FPD approach has been heavily benchmarked against experiment. When applied at the highest possible level, FPD is capable of yielding a root-mean-square (RMS) deviation with respect to experiment of 0.30 kcal/mol (311 comparisons covering atomization energies, ionization potentials, electron affinities and proton affinities). In terms of equilibrium, bottom-of-the-well structures, FPD gives an RMS deviation of 0.0020 Å (114 comparisons not involving hydrogens) and 0.0034 Å (54 comparisons involving hydrogen). Similar good agreement was found for vibrational frequencies.
== T1 ==
The T1 method is an efficient computational approach developed for calculating accurate heats of formation of uncharged, closed-shell molecules comprising H, C, N, O, F, Si, P, S, Cl and Br, within experimental error. It is practical for molecules up to a molecular weight of ~500 a.m.u.
The T1 method, as incorporated in Spartan, consists of:
HF/6-31G* optimization.
RI-MP2/6-311+G(2d,p)[6-311G*] single point energy with dual basis set.
An empirical correction using atom counts, Mulliken bond orders, HF/6-31G* and RI-MP2 energies as variables.
T1 follows the G3(MP2) recipe; however, by substituting an HF/6-31G* geometry for the MP2/6-31G* geometry, eliminating both the HF/6-31G* frequency calculation and the QCISD(T)/6-31G* energy, and approximating the MP2/G3MP2large energy using dual-basis-set RI-MP2 techniques, the T1 method reduces computation time by up to 3 orders of magnitude. Atom counts, Mulliken bond orders and HF/6-31G* and RI-MP2 energies are introduced as variables in a linear regression fit to a set of 1126 G3(MP2) heats of formation. The T1 procedure reproduces these values with mean absolute and RMS errors of 1.8 and 2.5 kJ/mol, respectively. T1 reproduces experimental heats of formation for a set of 1805 diverse organic molecules from the NIST thermochemical database with mean absolute and RMS errors of 8.5 and 11.5 kJ/mol, respectively.
== Correlation consistent composite approach (ccCA) ==
This approach, developed at the University of North Texas by Angela K. Wilson's research group, utilizes the correlation consistent basis sets developed by Dunning and co-workers. Unlike the Gaussian-n methods, ccCA does not contain any empirically fitted term. The B3LYP density functional method with the cc-pVTZ basis set, and cc-pV(T+d)Z for third row elements (Na - Ar), are used to determine the equilibrium geometry. Single point calculations are then used to find the reference energy and additional contributions to the energy. The total ccCA energy for main group is calculated by:
EccCA = EMP2/CBS + ΔECC + ΔECV + ΔESR + ΔEZPE + ΔESO
The reference energy EMP2/CBS is the MP2/aug-cc-pVnZ (where n=D,T,Q) energies extrapolated at the complete basis set limit by the Peterson mixed gaussian exponential extrapolation scheme. CCSD(T)/cc-pVTZ is used to account for correlation beyond the MP2 theory:
ΔECC = ECCSD(T)/cc-pVTZ - EMP2/cc-pVTZ
Core-core and core-valence interactions are accounted for using MP2(FC1)/aug-cc-pCVTZ:
ΔECV= EMP2(FC1)/aug-cc-pCVTZ - EMP2/aug-cc-pVTZ
Scalar relativistic effects are also taken into account with a one-particle Douglas–Kroll–Hess Hamiltonian and recontracted basis sets:
ΔESR = EMP2-DK/cc-pVTZ-DK - EMP2/cc-pVTZ
The last two terms are the zero-point energy correction, scaled by a factor of 0.989 to account for deficiencies in the harmonic approximation, and spin–orbit corrections, which are considered only for atoms.
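A minimal sketch of this assembly is shown below; it assumes the commonly quoted mixed Gaussian/exponential form E(n) = E_CBS + B·exp[−(n−1)] + C·exp[−(n−1)²] for the MP2 reference extrapolation over n = D, T, Q, and treats all component energies as hypothetical placeholders.

```python
import numpy as np

def mp2_cbs(e_dz, e_tz, e_qz):
    """CBS estimate from aug-cc-pVnZ energies (n = 2, 3, 4), assuming the
    mixed Gaussian/exponential form, which is linear in its three parameters."""
    ns = np.array([2, 3, 4], dtype=float)
    A = np.column_stack([np.ones(3), np.exp(-(ns - 1)), np.exp(-(ns - 1) ** 2)])
    e_cbs, b, c = np.linalg.solve(A, np.array([e_dz, e_tz, e_qz]))
    return e_cbs

def ccca_energy(e_mp2_cbs, d_cc, d_cv, d_sr, zpe, d_so):
    # Additive assembly of the correction terms described in the text;
    # the ZPE is scaled by 0.989 as stated above.
    return e_mp2_cbs + d_cc + d_cv + d_sr + 0.989 * zpe + d_so
```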
The correlation consistent composite approach is available as a keyword in NWChem and GAMESS (ccCA-S4 and ccCA-CC(2,3)).
== Complete Basis Set methods (CBS) ==
The Complete Basis Set (CBS) methods are a family of composite methods, the members of which are CBS-4M, CBS-QB3, and CBS-APNO, in increasing order of accuracy. These methods offer errors of 2.5, 1.1, and 0.7 kcal/mol when tested against the G2 test set. The CBS methods were developed by George Petersson and coworkers, and they extrapolate several single-point energies to the "exact" energy. In comparison, the Gaussian-n methods perform their approximation using additive corrections. Similar to the modified G2(+) method, CBS-QB3 has been modified by the inclusion of diffuse functions in the geometry optimization step to give CBS-QB3(+). The CBS family of methods is available via keywords in the Gaussian 09 suite of programs.
== Weizmann-n theories ==
The Weizmann-n ab initio methods (Wn, n = 1–4) are highly accurate composite theories devoid of empirical parameters. These theories are capable of sub-kJ/mol accuracies in prediction of fundamental thermochemical quantities such as heats of formation and atomization energies, and unprecedented accuracies in prediction of spectroscopic constants. The Wn-P34 variants further extend the applicability from first- and second-row species to include heavy main-group systems (up to xenon).
The ability of these theories to successfully reproduce the CCSD(T)/CBS (W1 and W2), CCSDT(Q)/CBS (W3), and CCSDTQ5/CBS (W4) energies relies on judicious combination of very large Gaussian basis sets with basis-set extrapolation techniques. Thus, the high accuracy of Wn theories comes with the price of a significant computational cost. In practice, for systems consisting of more than ~9 non-hydrogen atoms (with C1 symmetry), even the computationally more economical W1 theory becomes prohibitively expensive with current mainstream server hardware.
In an attempt to extend the applicability of the Wn ab initio thermochemistry methods, explicitly correlated versions of these theories have been developed: Wn-F12 (n = 1–3) and more recently even a W4-F12 theory. W1-F12 was successfully applied to large hydrocarbons (e.g., dodecahedrane) as well as to systems of biological relevance (e.g., DNA bases). W4-F12 theory has been applied to systems as large as benzene. In a similar manner, the WnX protocols that have been developed independently further reduce the requirements on computational resources by using more efficient basis sets and, for the minor components, electron-correlation methods that are computationally less demanding.
== References ==
Cramer, Christopher J. (2002). Essentials of Computational Chemistry. Chichester: John Wiley and Sons. pp. 224–228. ISBN 0-471-48552-7.
Jensen, Frank (2007). Introduction to Computational Chemistry. Chichester, England: John Wiley and Sons. pp. 164–169. ISBN 978-0-470-01187-4. | Wikipedia/Quantum_chemistry_composite_methods |
In chemistry and physics, the exchange interaction is a quantum mechanical constraint on the states of indistinguishable particles. While sometimes called an exchange force, or, in the case of fermions, Pauli repulsion, its consequences cannot always be predicted based on classical ideas of force. Both bosons and fermions can experience the exchange interaction.
The wave function of indistinguishable particles is subject to exchange symmetry: the wave function either changes sign (for fermions) or remains unchanged (for bosons) when two particles are exchanged. The exchange symmetry alters the expectation value of the distance between two indistinguishable particles when their wave functions overlap. For fermions the expectation value of the distance increases, and for bosons it decreases (compared to distinguishable particles).
The exchange interaction arises from the combination of exchange symmetry and the Coulomb interaction. For an electron in an electron gas, the exchange symmetry creates an "exchange hole" in its vicinity, which other electrons with the same spin tend to avoid due to the Pauli exclusion principle. This decreases the energy associated with the Coulomb interactions between the electrons with same spin. Since two electrons with different spins are distinguishable from each other and not subject to the exchange symmetry, the effect tends to align the spins. Exchange interaction is the main physical effect responsible for ferromagnetism, and has no classical analogue.
For bosons, the exchange symmetry makes them bunch together, and the exchange interaction takes the form of an effective attraction that causes identical particles to be found closer together, as in Bose–Einstein condensation.
Exchange interaction effects were discovered independently by physicists Werner Heisenberg and Paul Dirac in 1926.
== Exchange symmetry ==
Quantum particles are fundamentally indistinguishable.
Wolfgang Pauli demonstrated that this is a type of symmetry: states of two particles must be either symmetric or antisymmetric when coordinate labels are exchanged.
In a simple one-dimensional system with two identical particles in two states {\displaystyle \psi _{a}} and {\displaystyle \psi _{b}} the system wavefunction can therefore be written two ways:
{\displaystyle \psi _{a}(x_{1})\psi _{b}(x_{2})\pm \psi _{a}(x_{2})\psi _{b}(x_{1}).}
Exchanging {\displaystyle x_{1}} and {\displaystyle x_{2}} gives either a symmetric combination of the states ("plus") or an antisymmetric combination ("minus"). Particles that give symmetric combinations are called bosons; those with antisymmetric combinations are called fermions.
The two possible combinations imply different physics. For example, the expectation value of the square of the distance between the two particles is
{\displaystyle \langle (x_{1}-x_{2})^{2}\rangle _{\pm }=\langle x^{2}\rangle _{a}+\langle x^{2}\rangle _{b}-2\langle x\rangle _{a}\langle x\rangle _{b}\mp 2{\big |}\langle x\rangle _{ab}{\big |}^{2}.}
The last term reduces the expected value for bosons and increases the value for fermions, but only when the states {\displaystyle \psi _{a}} and {\displaystyle \psi _{b}} physically overlap ({\displaystyle \langle x\rangle _{ab}\neq 0}).
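The effect of the last term can be illustrated numerically. The sketch below uses two particle-in-a-box orbitals as an arbitrary example of overlapping states and evaluates the three cases of the formula above on a grid; the choice of orbitals is purely illustrative.

```python
import numpy as np

# Two overlapping 1D orbitals (particle-in-a-box states n = 1, 2 on [0, 1]);
# evaluate the expectation values entering the formula above on a grid.
x = np.linspace(0.0, 1.0, 2001)
psi_a = np.sqrt(2.0) * np.sin(np.pi * x)
psi_b = np.sqrt(2.0) * np.sin(2.0 * np.pi * x)

def integral(f):
    # simple trapezoidal quadrature
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

x_a  = integral(psi_a * x * psi_a)       # <x>_a
x_b  = integral(psi_b * x * psi_b)       # <x>_b
x2_a = integral(psi_a * x**2 * psi_a)    # <x^2>_a
x2_b = integral(psi_b * x**2 * psi_b)    # <x^2>_b
x_ab = integral(psi_a * x * psi_b)       # <x>_ab (overlap term)

dist = x2_a + x2_b - 2 * x_a * x_b
print("distinguishable:", dist)
print("bosons (symmetric):", dist - 2 * abs(x_ab) ** 2)
print("fermions (antisymmetric):", dist + 2 * abs(x_ab) ** 2)
```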
The physical effect of the exchange symmetry requirement is not a force. Rather it is a significant geometrical constraint, increasing the curvature of wavefunctions to prevent the overlap of the states occupied by indistinguishable fermions. The terms "exchange force" and "Pauli repulsion" for fermions are sometimes used as an intuitive description of the effect, but this intuition can give incorrect physical results.
== Exchange interactions between localized electron magnetic moments ==
Quantum mechanical particles are classified as bosons or fermions. The spin–statistics theorem of quantum field theory demands that all particles with half-integer spin behave as fermions and all particles with integer spin behave as bosons. Multiple bosons may occupy the same quantum state; however, by the Pauli exclusion principle, no two fermions can occupy the same state. Since electrons have spin 1/2, they are fermions. This means that the overall wave function of a system must be antisymmetric when two electrons are exchanged, i.e. interchanged with respect to both spatial and spin coordinates. First, however, exchange will be explained with the neglect of spin.
=== Exchange of spatial coordinates ===
Taking a hydrogen molecule-like system (i.e. one with two electrons), one may attempt to model the state of each electron by first assuming the electrons behave independently (that is, as if the Pauli exclusion principle did not apply), and taking wave functions in position space of {\displaystyle \Phi _{a}(r_{1})} for the first electron and {\displaystyle \Phi _{b}(r_{2})} for the second electron. The functions {\displaystyle \Phi _{a}} and {\displaystyle \Phi _{b}} are orthogonal, and each corresponds to an energy eigenstate. Two wave functions for the overall system in position space can be constructed. One uses an antisymmetric combination of the product wave functions in position space:
The other uses a symmetric combination of the product wave functions in position space:
To treat the problem of the hydrogen molecule perturbatively, the overall Hamiltonian is decomposed into an unperturbed Hamiltonian of the non-interacting hydrogen atoms {\displaystyle {\mathcal {H}}^{(0)}} and a perturbing Hamiltonian, which accounts for interactions between the two atoms, {\displaystyle {\mathcal {H}}^{(1)}}. The full Hamiltonian is then:
{\displaystyle {\mathcal {H}}={\mathcal {H}}^{(0)}+{\mathcal {H}}^{(1)}}
where
{\displaystyle {\mathcal {H}}^{(0)}=-{\frac {\hbar ^{2}}{2m}}\nabla _{1}^{2}-{\frac {\hbar ^{2}}{2m}}\nabla _{2}^{2}-{\frac {e^{2}}{r_{a1}}}-{\frac {e^{2}}{r_{b2}}}}
and
{\displaystyle {\mathcal {H}}^{(1)}=\left({\frac {e^{2}}{R_{ab}}}+{\frac {e^{2}}{r_{12}}}-{\frac {e^{2}}{r_{a2}}}-{\frac {e^{2}}{r_{b1}}}\right)}
The first two terms of {\displaystyle {\mathcal {H}}^{(0)}} denote the kinetic energy of the electrons. The remaining terms account for attraction between the electrons and their host protons ({\displaystyle r_{a1/b2}}). The terms in {\displaystyle {\mathcal {H}}^{(1)}} account for the potential energy corresponding to: proton–proton repulsion ({\displaystyle R_{ab}}), electron–electron repulsion ({\displaystyle r_{12}}), and electron–proton attraction between the electron of one host atom and the proton of the other ({\displaystyle r_{a2/b1}}). All quantities are assumed to be real.
Two eigenvalues for the system energy are found:
where {\displaystyle E_{+}} is the spatially symmetric solution and {\displaystyle E_{-}} is the spatially antisymmetric solution, corresponding to {\displaystyle \Psi _{\rm {S}}} and {\displaystyle \Psi _{\rm {A}}} respectively. A variational calculation yields similar results.
{\displaystyle {\mathcal {H}}} can be diagonalized by using the position–space functions given by Eqs. (1) and (2). In Eq. (3), {\displaystyle C} is the two-site two-electron Coulomb integral (it may be interpreted as the repulsive potential for electron one at a particular point {\displaystyle \Phi _{a}({\vec {r}}_{1})^{2}} in an electric field created by electron two distributed over space with the probability density {\displaystyle \Phi _{b}({\vec {r}}_{2})^{2})}, {\displaystyle {\mathcal {S}}} is the overlap integral, and {\displaystyle J_{\mathrm {ex} }} is the exchange integral, which is similar to the two-site Coulomb integral but includes exchange of the two electrons. It has no simple physical interpretation, but it can be shown to arise entirely due to the anti-symmetry requirement. These integrals are given by:
Although in the hydrogen molecule the exchange integral, Eq. (6), is negative, Heisenberg first suggested that it changes sign at some critical ratio of internuclear distance to mean radial extension of the atomic orbital.
=== Inclusion of spin ===
The symmetric and antisymmetric combinations in Equations (1) and (2) did not include the spin variables (α = spin-up; β = spin-down); there are also antisymmetric and symmetric combinations of the spin variables:
To obtain the overall wave function, these spin combinations have to be coupled with Eqs. (1) and (2). The resulting overall wave functions, called spin-orbitals, are written as Slater determinants. When the orbital wave function is symmetrical the spin one must be anti-symmetrical and vice versa. Accordingly, {\displaystyle E_{+}} above corresponds to the spatially symmetric/spin-singlet solution and {\displaystyle E_{-}} to the spatially antisymmetric/spin-triplet solution.
J. H. Van Vleck presented the following analysis:
The potential energy of the interaction between the two electrons in orthogonal orbitals can be represented by a matrix, say {\displaystyle E_{\textrm {ex}}}. From Eq. (3), the characteristic values of this matrix are {\displaystyle C\pm J_{\textrm {ex}}}. The characteristic values of a matrix are its diagonal elements after it is converted to a diagonal matrix (that is, eigenvalues). Now, the characteristic values of the square of the magnitude of the resultant spin, {\displaystyle \langle ({\vec {s}}_{a}+{\vec {s}}_{b})^{2}\rangle }, are {\displaystyle S(S+1)}. The characteristic values of the matrices {\displaystyle \langle {\vec {s}}_{a}^{\;2}\rangle } and {\displaystyle \langle {\vec {s}}_{b}^{\;2}\rangle } are each {\displaystyle {\tfrac {1}{2}}({\tfrac {1}{2}}+1)={\tfrac {3}{4}}}, and {\displaystyle \langle ({\vec {s}}_{a}+{\vec {s}}_{b})^{2}\rangle =\langle {\vec {s}}_{a}^{\;2}\rangle +\langle {\vec {s}}_{b}^{\;2}\rangle +2\langle {\vec {s}}_{a}\cdot {\vec {s}}_{b}\rangle }. The characteristic values of the scalar product {\displaystyle \langle {\vec {s}}_{a}\cdot {\vec {s}}_{b}\rangle } are {\displaystyle {\tfrac {1}{2}}(0-{\tfrac {6}{4}})=-{\tfrac {3}{4}}} and {\displaystyle {\tfrac {1}{2}}(2-{\tfrac {6}{4}})={\tfrac {1}{4}}}, corresponding to the spin-singlet ({\displaystyle S=0}) and spin-triplet ({\displaystyle S=1}) states, respectively.
From Eq. (3) and the aforementioned relations, the matrix {\displaystyle E_{\textrm {ex}}} is seen to have the characteristic value {\displaystyle C+J_{\textrm {ex}}} when {\displaystyle \langle {\vec {s}}_{a}\cdot {\vec {s}}_{b}\rangle } has the characteristic value −3/4 (i.e. when {\displaystyle S=0}; the spatially symmetric/spin-singlet state). Alternatively, it has the characteristic value {\displaystyle C-J_{\textrm {ex}}} when {\displaystyle \langle {\vec {s}}_{a}\cdot {\vec {s}}_{b}\rangle } has the characteristic value +1/4 (i.e. when {\displaystyle S=1}; the spatially antisymmetric/spin-triplet state). Therefore,
and, hence,
where the spin momenta are given as {\displaystyle \langle {\vec {s}}_{a}\rangle } and {\displaystyle \langle {\vec {s}}_{b}\rangle }.
Dirac pointed out that the critical features of the exchange interaction could be obtained in an elementary way by neglecting the first two terms on the right-hand side of Eq. (9), thereby considering the two electrons as simply having their spins coupled by a potential of the form:
It follows that the exchange interaction Hamiltonian between two electrons in orbitals {\displaystyle \Phi _{a}} and {\displaystyle \Phi _{b}} can be written in terms of their spin momenta {\displaystyle {\vec {s}}_{a}} and {\displaystyle {\vec {s}}_{b}}. This interaction is named the Heisenberg exchange Hamiltonian or the Heisenberg–Dirac Hamiltonian in the older literature:
{\displaystyle J_{\textrm {ab}}} is not the same as the quantity labeled {\displaystyle J_{\textrm {ex}}} in Eq. (6). Rather, {\displaystyle J_{\textrm {ab}}}, which is termed the exchange constant, is a function of Eqs. (4), (5), and (6), namely,
However, with orthogonal orbitals (in which {\displaystyle {\mathcal {S}}} = 0), for example with different orbitals in the same atom, {\displaystyle J_{\textrm {ab}}=J_{\textrm {ex}}}.
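The characteristic values quoted above can be verified directly by diagonalizing the spin–spin operator for two spin-1/2 particles, as in the following minimal sketch (in units of ħ; the construction from Pauli matrices is standard and not specific to any reference).

```python
import numpy as np

# Verify the characteristic values of s_a . s_b quoted above:
# -3/4 for the singlet and +1/4 for the triplet.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# s_a . s_b acting on the two-spin (4-dimensional) product space
s_dot_s = np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)
print(np.round(np.linalg.eigvalsh(s_dot_s), 6))   # -> [-0.75  0.25  0.25  0.25]
```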
=== Effects of exchange ===
If {\displaystyle J_{\textrm {ab}}} is positive the exchange energy favors electrons with parallel spins; this is a primary cause of ferromagnetism in materials in which the electrons are considered localized in the Heitler–London model of chemical bonding, but this model of ferromagnetism has severe limitations in solids (see below). If {\displaystyle J_{\textrm {ab}}} is negative, the interaction favors electrons with antiparallel spins, potentially causing antiferromagnetism. The sign of {\displaystyle J_{\textrm {ab}}} is essentially determined by the relative sizes of {\displaystyle J_{\textrm {ex}}} and the product of {\displaystyle C{\mathcal {S}}}. This sign can be deduced from the expression for the difference between the energies of the triplet and singlet states, {\displaystyle E_{-}-E_{+}}:
Although these consequences of the exchange interaction are magnetic in nature, the cause is not; it is due primarily to electric repulsion and the Pauli exclusion principle. In general, the direct magnetic interaction between a pair of electrons (due to their electron magnetic moments) is negligibly small compared to this electric interaction.
Exchange energy splittings are very elusive to calculate for molecular systems at large internuclear distances. However, analytical formulae have been worked out for the hydrogen molecular ion (see references herein).
Normally, exchange interactions are very short-ranged, confined to electrons in orbitals on the same atom (intra-atomic exchange) or nearest neighbor atoms (direct exchange) but longer-ranged interactions can occur via intermediary atoms and this is termed superexchange.
== Direct exchange interactions in solids ==
In a crystal, generalization of the Heisenberg Hamiltonian in which the sum is taken over the exchange Hamiltonians for all the {\displaystyle (i,j)} pairs of atoms of the many-electron system gives:
The 1/2 factor is introduced because the interaction between the same two atoms is counted twice in performing the sums. Note that the {\displaystyle J} in Eq. (14) is the exchange constant {\displaystyle J_{\textrm {ab}}} above, not the exchange integral {\displaystyle J_{\textrm {ex}}}. The exchange integral {\displaystyle J_{\textrm {ex}}} is related to yet another quantity, called the exchange stiffness constant ({\displaystyle A}), which serves as a characteristic of a ferromagnetic material. The relationship depends on the crystal structure. For a simple cubic lattice with lattice parameter {\displaystyle a},
For a body-centered cubic lattice,
and for a face-centered cubic lattice,
The form of Eq. (14) corresponds identically to the Ising model of ferromagnetism except that in the Ising model, the dot product of the two spin angular momenta is replaced by the scalar product {\displaystyle S_{ij}S_{ji}}. The Ising model was invented by Wilhelm Lenz in 1920 and solved for the one-dimensional case by his doctoral student Ernst Ising in 1925. The energy of the Ising model is defined to be:
=== Limitations of the Heisenberg Hamiltonian and the localized electron model in solids ===
Because the Heisenberg Hamiltonian presumes the electrons involved in the exchange coupling are localized in the context of the Heitler–London, or valence bond (VB), theory of chemical bonding, it is an adequate model for explaining the magnetic properties of electrically insulating narrow-band ionic and covalent non-molecular solids where this picture of the bonding is reasonable. Nevertheless, theoretical evaluations of the exchange integral for non-molecular solids that display metallic conductivity in which the electrons responsible for the ferromagnetism are itinerant (e.g. iron, nickel, and cobalt) have historically been either of the wrong sign or much too small in magnitude to account for the experimentally determined exchange constant (e.g. as estimated from the Curie temperatures via {\displaystyle T_{C}\approx 2\langle J\rangle /3k_{\textrm {B}}}, where {\displaystyle \langle J\rangle } is the exchange interaction averaged over all sites).
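As an order-of-magnitude illustration of this estimate (with a purely hypothetical mean exchange constant, not a value for any real material):

```python
# Curie-temperature estimate T_C ~ 2<J>/(3 k_B) for an illustrative <J>.
k_B = 8.617333e-5        # Boltzmann constant in eV/K
J_mean = 0.010           # hypothetical mean exchange constant in eV
T_C = 2 * J_mean / (3 * k_B)
print(round(T_C))        # about 77 K for this illustrative value
```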
The Heisenberg model thus cannot explain the observed ferromagnetism in these materials. In these cases, a delocalized, or Hund–Mulliken–Bloch (molecular orbital/band) description, for the electron wave functions is more realistic. Accordingly, the Stoner model of ferromagnetism is more applicable.
In the Stoner model, the spin-only magnetic moment (in Bohr magnetons) per atom in a ferromagnet is given by the difference between the number of electrons per atom in the majority spin and minority spin states. The Stoner model thus permits non-integral values for the spin-only magnetic moment per atom. However, for ferromagnets the localized-moment expression {\displaystyle \mu _{S}=-g\mu _{\rm {B}}[S(S+1)]^{1/2}} ({\displaystyle g} = 2.0023 ≈ 2) tends to overestimate the total spin-only magnetic moment per atom.
For example, a net magnetic moment of 0.54 μB per atom for nickel metal is predicted by the Stoner model, which is very close to the 0.61 Bohr magnetons calculated based on the metal's observed saturation magnetic induction, its density, and its atomic weight. By contrast, an isolated Ni atom (electron configuration 3d⁸4s²) in a cubic crystal field will have two unpaired electrons of the same spin (hence, {\displaystyle {\vec {S}}=1}) and would thus be expected to have in the localized electron model a total spin magnetic moment of {\displaystyle \mu _{S}=2.83\mu _{\rm {B}}} (but the measured spin-only magnetic moment along one axis, the physical observable, will be given by {\displaystyle {\vec {\mu }}_{S}=g\mu _{\rm {B}}{\vec {S}}=2\mu _{\rm {B}}}).
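The quoted localized-moment values follow from simple arithmetic, as in this short check (g and S as given above):

```python
import math

# Localized-moment values for an isolated Ni atom with two unpaired
# electrons (S = 1), using g = 2.0023, in Bohr magnetons.
g, S = 2.0023, 1.0
mu_total = g * math.sqrt(S * (S + 1))   # total spin-only moment magnitude
mu_axis = g * S                         # projection along one axis
print(round(mu_total, 2), round(mu_axis, 1))   # 2.83, 2.0
```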
Generally, valence s and p electrons are best considered delocalized, while 4f electrons are localized and 5f and 3d/4d electrons are intermediate, depending on the particular internuclear distances. In the case of substances where both delocalized and localized electrons contribute to the magnetic properties (e.g. rare-earth systems), the Ruderman–Kittel–Kasuya–Yosida (RKKY) model is the currently accepted mechanism.
== See also ==
Double-exchange mechanism
Slater determinant
Superexchange
Holstein–Herring method
Spin-exchange interaction
Multipolar exchange interaction
Antisymmetric exchange
== Notes ==
== References ==
== Further reading ==
Cox, Paul Anthony (1992). Transition Metal Oxides: An Introduction to their Electronic Structure and Properties. International Series of Monographs on Chemistry. Vol. 27. Oxford: Clarendon Press. pp. 148–153. ISBN 978-0-19-855570-4.
Koch, Erik (2012). "Exchange Mechanisms" (PDF). In Pavarini, Eva; Koch, Erik; Anders, Frithjof; Jarrell, Mark (eds.). Correlated Electrons: From Models to Materials (PDF). Schriften des Forschungszentrums Jülich. Reihe Modeling and Simulation. Jülich: Forschungszentrum Jülich GmbH. ISBN 978-3-89336-796-2.
Mattis, Daniel C. (1981). "Exchange". The Theory of Magnetism I: Statics and Dynamics. Springer Series in Solid-State Sciences. Vol. 17. pp. 39–66. doi:10.1007/978-3-642-83238-3. ISBN 978-3-540-18425-6. ISSN 0171-1873.
Skomski, Ralph (2020). "Magnetic Exchange Interactions". In Coey, Michael; Parkin, Stuart (eds.). Handbook of Magnetism and Magnetic Materials. Cham: Springer International Publishing. pp. 1–50. doi:10.1007/978-3-030-63101-7_2-1. ISBN 978-3-030-63101-7. Retrieved 2024-02-18.
Yosida, Kei (1998). Theory of Magnetism. Springer Series in Solid-State Sciences (2 ed.). Berlin Heidelberg: Springer. pp. 49–64. ISBN 978-3-540-60651-2. | Wikipedia/Exchange_energy |
The linearized augmented-plane-wave method (LAPW) is an implementation of Kohn-Sham density functional theory (DFT) adapted to periodic materials. It typically goes along with the treatment of both valence and core electrons on the same footing in the context of DFT and the treatment of the full potential and charge density without any shape approximation. This is often referred to as the all-electron full-potential linearized augmented-plane-wave method (FLAPW). It does not rely on the pseudopotential approximation and employs a systematically extendable basis set. These features make it one of the most precise implementations of DFT, applicable to all crystalline materials, regardless of their chemical composition. It can be used as a reference for evaluating other approaches.
== Introduction ==
At the core of density functional theory, the Hohenberg-Kohn theorems state that every observable of an interacting many-electron system is a functional of its ground-state charge density and that this density minimizes the total energy of the system. The theorems do not answer the question of how to obtain such a ground-state density. A recipe for this is given by Walter Kohn and Lu Jeu Sham, who introduce an auxiliary system of noninteracting particles constructed such that it shares the same ground-state density with the interacting particle system. The Schrödinger-like equations describing this system are the Kohn-Sham equations. With these equations one can calculate the eigenstates of the system and, with these, the density. One contribution to the Kohn-Sham equations is the effective potential, which itself depends on the density. As the ground-state density is not known before a Kohn-Sham DFT calculation and it is an input as well as an output of such a calculation, the Kohn-Sham equations are solved in an iterative procedure together with a recalculation of the density and the potential in every iteration. It starts with an initial guess for the density, and after every iteration a new density is constructed as a mixture of the output density and previous densities. The calculation finishes as soon as a fixed point of a self-consistent density is found, i.e., input and output densities are identical. This is the ground-state density.
A method implementing Kohn-Sham DFT has to realize these different steps of the sketched iterative algorithm. The LAPW method is based on a partitioning of the material's unit cell into non-overlapping but nearly touching so-called muffin-tin (MT) spheres, centered at the atomic nuclei, and an interstitial region (IR) in between the spheres. The physical description and the representation of the Kohn-Sham orbitals, the charge density, and the potential is adapted to this partitioning. In the following this method design and the extraction of quantities from it are sketched in more detail. Variations and extensions are indicated.
== Solving the Kohn-Sham equations ==
The central aspect of practical DFT implementations is the question of how to solve the Kohn-Sham equations
{\displaystyle \left[{\hat {T}}_{\text{s}}+V_{\text{eff}}(\mathbf {r} )\right]\left|\Psi _{j}^{\mathbf {k} }(\mathbf {r} )\right\rangle =\epsilon _{j}^{\mathbf {k} }\left|\Psi _{j}^{\mathbf {k} }(\mathbf {r} )\right\rangle }
with the single-electron kinetic energy operator {\displaystyle {\hat {T}}_{\text{s}}}, the effective potential {\displaystyle V_{\text{eff}}(\mathbf {r} )}, Kohn-Sham states {\displaystyle \Psi _{j}^{\mathbf {k} }(\mathbf {r} )}, energy eigenvalues {\displaystyle \epsilon _{j}^{\mathbf {k} }}, and position and Bloch vectors {\displaystyle \mathbf {r} } and {\displaystyle \mathbf {k} }. While in abstract evaluations of Kohn-Sham DFT the model for the exchange-correlation contribution to the effective potential is the only fundamental approximation, in practice solving the Kohn-Sham equations is accompanied by the introduction of many additional approximations. These include the incompleteness of the basis set used to represent the Kohn-Sham orbitals, the choice of whether to use the pseudopotential approximation or to consider all electrons in the DFT scheme, the treatment of relativistic effects, and possible shape approximations to the potential. Beyond the partitioning of the unit cell, for the LAPW method the central design aspect is the use of the LAPW basis set
{\displaystyle \left\lbrace \phi _{\mathbf {k} ,\mathbf {G} }(\mathbf {r} )\right\rbrace } to represent the valence electron orbitals as
{\displaystyle \left|\Psi _{j}^{\mathbf {k} }(\mathbf {r} )\right\rangle =\sum \limits _{\mathbf {G} }c_{j}^{\mathbf {k} ,\mathbf {G} }\left|\phi _{\mathbf {k} ,\mathbf {G} }(\mathbf {r} )\right\rangle ,}
where {\displaystyle c_{j}^{\mathbf {k} ,\mathbf {G} }} are the expansion coefficients. The LAPW basis is designed to enable a precise representation of the orbitals and an accurate modelling of the physics in each region of the unit cell.
Considering a unit cell of volume {\displaystyle \Omega } covering atoms {\displaystyle \alpha } at positions {\displaystyle \mathbf {\tau } _{\alpha }}, an LAPW basis function is characterized by a reciprocal lattice vector {\displaystyle \mathbf {G} } and the considered Bloch vector {\displaystyle \mathbf {k} }. It is given as
{\displaystyle \phi _{\mathbf {k} ,\mathbf {G} }(\mathbf {r} )=\left\lbrace {\begin{array}{l l}{\frac {1}{\sqrt {\Omega }}}e^{i(\mathbf {k} +\mathbf {G} )\mathbf {r} }&{\text{for }}\mathbf {r} {\text{ in IR}}\\\sum \limits _{l=0}^{l_{{\text{max}},\alpha }}\sum \limits _{m=-l}^{l}\left[a_{l,m}^{\mathbf {k} ,\mathbf {G} ,\alpha }u_{l,\alpha }(r_{\alpha },E_{l,\alpha })+b_{l,m}^{\mathbf {k} ,\mathbf {G} ,\alpha }{\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })\right]Y_{l,m}(\mathbf {\hat {r}} _{\alpha })&{\text{for }}\mathbf {r} {\text{ in MT}}_{\alpha }\end{array}}\right.,}
where {\displaystyle \mathbf {r} _{\alpha }=\mathbf {r} -\mathbf {\tau } _{\alpha }} is the position vector relative to the position of atom nucleus {\displaystyle \alpha }. An LAPW basis function is thus a plane wave in the IR and a linear combination of the radial functions
{\displaystyle u_{l,\alpha }(r_{\alpha },E_{l,\alpha })} and {\displaystyle {\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })} multiplied by spherical harmonics {\displaystyle Y_{l,m}} in each MT sphere. The radial function {\displaystyle u_{l,\alpha }(r_{\alpha },E_{l,\alpha })} is hereby the solution of the Kohn-Sham Hamiltonian for the spherically averaged potential with regular behavior at the nucleus for the given energy parameter {\displaystyle E_{l,\alpha }}. Together with its energy derivative {\displaystyle {\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })}, these augmentations of the plane wave in each MT sphere enable a representation of the Kohn-Sham orbitals at arbitrary eigenenergies, linearized around the energy parameters. The coefficients {\displaystyle a_{l,m}^{\mathbf {k} ,\mathbf {G} ,\alpha }} and {\displaystyle b_{l,m}^{\mathbf {k} ,\mathbf {G} ,\alpha }} are automatically determined by enforcing the basis function to be continuously differentiable for the respective {\displaystyle (l,m)} channel. The set of LAPW basis functions is defined by specifying a cutoff parameter {\displaystyle K_{\text{max}}=|\mathbf {k} +\mathbf {G} |_{\text{max}}}. In each MT sphere, the expansion into spherical harmonics is limited to a maximum number of angular momenta {\displaystyle l_{{\text{max}},\alpha }\approx K_{\text{max}}R_{{\text{MT}}_{\alpha }}}, where {\displaystyle R_{{\text{MT}}_{\alpha }}} is the muffin-tin radius of atom {\displaystyle \alpha }. The choice of this cutoff is connected to the decay of the expansion coefficients for growing {\displaystyle l} in the Rayleigh expansion of plane waves into spherical harmonics.
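As an illustration of these cutoffs, the following sketch counts the plane waves with |k+G| ≤ K_max for a simple cubic cell at the Γ point and derives the corresponding per-atom angular momentum cutoff; the lattice constant, K_max and muffin-tin radius are arbitrary example values, not recommendations.

```python
import numpy as np

# Illustrative values (atomic units): lattice constant, plane-wave cutoff,
# and muffin-tin radius of a single atom.
a, K_max, R_MT = 7.0, 4.0, 2.2
b = 2 * np.pi / a                                   # reciprocal lattice constant
nmax = int(np.ceil(K_max / b)) + 1

# All reciprocal lattice vectors in a cube large enough to contain the cutoff sphere
G = np.array([(h, k, l) for h in range(-nmax, nmax + 1)
                        for k in range(-nmax, nmax + 1)
                        for l in range(-nmax, nmax + 1)], dtype=float) * b

kvec = np.zeros(3)                                  # Gamma point
n_basis = int(np.sum(np.linalg.norm(kvec + G, axis=1) <= K_max))
l_max = int(round(K_max * R_MT))                    # l_max ~ K_max * R_MT
print(n_basis, l_max)
```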
While the LAPW basis functions are used to represent the valence states, core electron states, which are completely confined within a MT sphere, are calculated for the spherically averaged potential on radial grids, for each atom separately applying atomic boundary conditions. Semicore states, which are still localized but slightly extended beyond the MT sphere boundary, may either be treated as core electron states or as valence electron states. For the latter choice the linearized representation is not sufficient because the related eigenenergy is typically far away from the energy parameters. To resolve this problem the LAPW basis can be extended by additional basis functions in the respective MT sphere, so called local orbitals (LOs). These are tailored to provide a precise representation of the semicore states.
The plane-wave form of the basis functions in the interstitial region makes setting up the Hamiltonian matrix
{\displaystyle H_{\mathbf {G'} ,\mathbf {G} }^{\mathbf {k} }=\left\langle \phi _{\mathbf {k} ,\mathbf {G'} }{\Big |}{\hat {H}}{\Big |}\phi _{\mathbf {k} ,\mathbf {G} }\right\rangle =\left\langle \phi _{\mathbf {k} ,\mathbf {G'} }{\Big |}{\hat {T}}_{\text{s}}+V_{\text{eff}}(\mathbf {r} ){\Big |}\phi _{\mathbf {k} ,\mathbf {G} }\right\rangle }
for that region simple. In the MT spheres this setup is also simple and computationally inexpensive for the kinetic energy and the spherically averaged potential, e.g., in the muffin-tin approximation. The simplicity hereby stems from the connection of the radial functions to the spherical Hamiltonian in the spheres {\displaystyle {\hat {H}}_{\text{sphr}}^{\alpha }}, i.e.,
{\displaystyle {\hat {H}}_{\text{sphr}}^{\alpha }\left|u_{l,\alpha }(r_{\alpha },E_{l,\alpha })\right\rangle =E_{l,\alpha }\left|u_{l,\alpha }(r_{\alpha },E_{l,\alpha })\right\rangle }
and
{\displaystyle {\hat {H}}_{\text{sphr}}^{\alpha }\left|{\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })\right\rangle =E_{l,\alpha }\left|{\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })\right\rangle +\left|u_{l,\alpha }(r_{\alpha },E_{l,\alpha })\right\rangle }.
In comparison to the MT approximation, for the full-potential description (FLAPW), contributions from the non-spherical part of the potential are added to the Hamiltonian matrix in the MT spheres, and in the IR, contributions related to deviations from the constant potential are added.
After the Hamiltonian matrix {\displaystyle H_{\mathbf {G'} ,\mathbf {G} }^{\mathbf {k} }} together with the overlap matrix {\displaystyle S_{\mathbf {G'} ,\mathbf {G} }^{\mathbf {k} }=\left\langle \phi _{\mathbf {k} ,\mathbf {G'} }{\Big |}\phi _{\mathbf {k} ,\mathbf {G} }\right\rangle } is set up, the Kohn-Sham orbitals are obtained as eigenfunctions from the algebraic generalized dense Hermitian eigenvalue problem
{\displaystyle \sum \limits _{\mathbf {G} }H_{\mathbf {G'} ,\mathbf {G} }^{\mathbf {k} }c_{j}^{\mathbf {k} ,\mathbf {G} }=\epsilon _{j}^{\mathbf {k} }\sum \limits _{\mathbf {G} }S_{\mathbf {G'} ,\mathbf {G} }^{\mathbf {k} }c_{j}^{\mathbf {k} ,\mathbf {G} }~~,}
where {\displaystyle \epsilon _{j}^{\mathbf {k} }} is the energy eigenvalue of the j-th Kohn-Sham state at Bloch vector {\displaystyle {\mathbf {k} }} and the state is given, as indicated above, by the expansion coefficients {\displaystyle c_{j}^{\mathbf {k} ,\mathbf {G} }}.
The considered degree of relativistic physics differs for core and valence electrons. The strong localization of core electrons due to the singularity of the effective potential at the atomic nucleus is connected to large kinetic energy contributions and thus a fully relativistic treatment is desirable and common. For the determination of the radial functions {\displaystyle u_{l,\alpha }(r_{\alpha },E_{l,\alpha })} and {\displaystyle {\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })} the common approach is to make an approximation to the fully relativistic description. This may be the scalar-relativistic approximation (SRA) or similar approaches. The dominant effect neglected by these approximations is the spin-orbit coupling. As indicated above, the construction of the Hamiltonian matrix within such an approximation is trivial. Spin-orbit coupling can additionally be included, though this leads to a more complex Hamiltonian matrix setup or a second-variation scheme, connected to increased computational demands. In the interstitial region it is reasonable and common to describe the valence electrons without considering relativistic effects.
== Representation of the charge density and the potential ==
After calculating the Kohn-Sham eigenfunctions, the next step is to construct the electron charge density by occupying the lowest energy eigenstates up to the Fermi level with electrons. The Fermi level itself is determined in this process by keeping charge neutrality in the unit cell. The resulting charge density {\displaystyle \rho (\mathbf {r} )} then has a region-specific form
{\displaystyle \rho (\mathbf {r} )=\left\{{\begin{array}{l l}\sum \limits _{\mathbf {G} }\rho _{\mathbf {G} }e^{i\mathbf {G} \mathbf {r} }&{\text{for }}\mathbf {r} {\text{ in IR}}\\\sum \limits _{l=0}^{l_{{\text{max}},\alpha }}\sum \limits _{m=-l}^{l}\rho _{l,m}^{\alpha }(r_{\alpha })Y_{l,m}(\mathbf {\hat {r}} _{\alpha })&{\text{for }}\mathbf {r} {\text{ in MT}}_{\alpha }\end{array}}\right.,}
i.e., it is given as a plane-wave expansion in the interstitial region and as an expansion into radial functions times spherical harmonics in each MT sphere. The radial functions are hereby given numerically on a mesh.
The representation of the effective potential follows the same scheme. In its construction a common approach is to employ Weinert's method for solving the Poisson equation. It efficiently and accurately provides a solution of the Poisson equation without shape approximation for an arbitrary periodic charge density based on the concept of multipole potentials and the boundary value problem for a sphere.
== Postprocessing and extracting results ==
Because they are based on the same theoretical framework, different DFT implementations offer access to very similar sets of material properties. However, the variations in the implementations result in differences in the ease of extracting certain quantities and also in differences in their interpretation. In the following, these circumstances are sketched for some examples.
The most basic quantity provided by DFT is the ground-state total energy of an investigated system. To avoid the calculation of derivatives of the eigenfunctions in its evaluation, the common implementation replaces the expectation value of the kinetic energy operator by the sum of the band energies of occupied Kohn-Sham states minus the energy due to the effective potential. The force exerted on an atom, which is given by the change of the total energy due to an infinitesimal displacement, has two major contributions. The first contribution is due to the displacement of the potential. It is known as Hellmann-Feynman force. The other, computationally more elaborate contribution, is due to the related change in the atom-position-dependent basis functions. It is often called Pulay force and requires a method-specific implementation. Beyond forces, similar method-specific implementations are also needed for further quantities derived from the total energy functional. For the LAPW method, formulations for the stress tensor and for phonons have been realized.
Independent of the actual size of an atom, evaluating atom-dependent quantities in LAPW is often interpreted as calculating the quantity in the respective MT sphere. This applies to quantities like charges at atoms, magnetic moments, or projections of the density of states or the band structure onto a certain orbital character at a given atom. Deviating interpretations of such quantities from experiments or other DFT implementations may lead to differences when comparing results. On a side note also some atom-specific LAPW inputs relate directly to the respective MT region. For example, in the DFT+U approach the Hubbard U only affects the MT sphere.
A strength of the LAPW approach is the inclusion of all electrons in the DFT calculation, which is crucial for the evaluation of certain quantities. Among these are hyperfine interaction parameters such as electric field gradients, whose calculation involves evaluating the curvature of the all-electron Coulomb potential near the nuclei. The prediction of such quantities with LAPW is very accurate.
Kohn-Sham DFT does not give direct access to all quantities one may be interested in. For example, most energy eigenvalues of the Kohn-Sham states are not directly related to the real interacting many-electron system. For the prediction of optical properties one therefore often uses DFT codes in combination with software implementing the GW approximation (GWA) to many-body perturbation theory and optionally the Bethe-Salpeter equation (BSE) to describe excitons. Such software has to be adapted to the representation used in the DFT implementation. Both the GWA and the BSE have been formulated in the LAPW context and several implementations of such tools are in use. In other postprocessing situations it may be useful to project Kohn-Sham states onto Wannier functions. For the LAPW method such projections have also been implemented and are in common use.
== Variants and extensions of the LAPW method ==
APW: The augmented-plane-wave method is the predecessor of LAPW. It uses the radial solution to the spherically averaged potential for the augmentation in the MT spheres. The energy derivative of this radial function is not involved. This missing linearization implies that the augmentation has to be adapted to each Kohn-Sham state individually, i.e., it depends on the Bloch vector and the band index, which subsequently leads to a non-linear, energy-dependent eigenvalue problem. In comparison to LAPW this is a more complex problem to solve. A relativistic generalization of this approach, RAPW, has also been formulated.
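Schematically, and in a generic matrix notation not tied to any particular code, the energy dependence of the APW augmentation turns the linear generalized eigenvalue problem of LAPW, {\displaystyle (H-\epsilon S)\,\mathbf {c} =0} with energy-independent Hamiltonian and overlap matrices, into a non-linear secular problem in which both matrices depend on the sought eigenvalue:
{\displaystyle \det \left[H(\epsilon )-\epsilon \,S(\epsilon )\right]=0.}
Each root then has to be located individually, which is the computational drawback of APW mentioned above.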
Local orbitals extensions: The LAPW basis can be extended by local orbitals (LOs). These are additional basis functions having nonvanishing values only in a single MT sphere. They are composed of the radial functions {\displaystyle u_{l,\alpha }(r_{\alpha },E_{l,\alpha })}, {\displaystyle {\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })}, and a third radial function tailored to describe use-case-specific physics. LOs have originally been proposed for the representation of semicore states. Other uses involve the representation of unoccupied states or the elimination of the linearization error for the valence states.
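As an illustrative sketch (the coefficients and the choice of the third radial function, here a radial solution at a second energy parameter, are introduced only for the purpose of the example and are fixed in practice by the matching conditions of the respective code), a local orbital for atom α and angular momentum channel (l,m) can take the form
{\displaystyle \phi _{lm}^{\alpha ,{\text{LO}}}(\mathbf {r} )=\left[a\,u_{l,\alpha }(r_{\alpha },E_{l,\alpha })+b\,{\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })+c\,u_{l,\alpha }(r_{\alpha },E_{l,\alpha }^{\text{lo}})\right]Y_{lm}({\hat {\mathbf {r} }}_{\alpha }),}
with the coefficients a, b, c chosen such that the function vanishes in value and slope at the MT sphere boundary, confining it to the sphere.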
APW+lo: In the APW+lo method the augmentation in the MT spheres only consists of the function {\displaystyle u_{l,\alpha }(r_{\alpha },E_{l,\alpha })}. It is matched to the plane wave in the interstitial region only in value. As an alternative implementation of the linearization the function {\displaystyle {\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })} is included in the basis set as an additional local orbital. While the matching conditions result in an unphysical kink of the basis functions at the MT sphere boundaries, a careful consideration of the kink in the construction of the Hamiltonian matrix suppresses it in the Kohn-Sham eigenfunctions. In comparison to the classical LAPW method the APW+lo approach leads to a less stiff basis set. The outcome is a faster convergence of the DFT calculations with respect to the basis set size.
Soler-Williams formulation of LAPW: In the Soler-Williams formulation of LAPW the plane waves cover the whole unit cell. In the MT spheres the augmentation is implemented by replacing, up to the angular momentum cutoff, the plane waves by the functions {\displaystyle u_{l,\alpha }(r_{\alpha },E_{l,\alpha })} and {\displaystyle {\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })}. This yields basis functions continuously differentiable also in the {\displaystyle (l,m)} channels above the angular momentum cutoff. As a consequence the Soler-Williams approach has reduced angular momentum cutoff requirements in comparison to the classical LAPW formulation.
ELAPW: In the extended LAPW method pairs of local orbitals introducing the functions {\displaystyle u_{l,\alpha }(r_{\alpha },E_{l,\alpha }^{\text{lo}})} and {\displaystyle {\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha }^{\text{lo}})} are added to the LAPW basis. The energy parameters {\displaystyle E_{l,\alpha }^{\text{lo}}} are chosen to systematically extend the energy region in which Kohn-Sham states are accurately described by the linearization in LAPW.
QAPW: In the quadratic APW method the augmentation in the MT spheres additionally includes the second energy derivative {\displaystyle {\ddot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })}. The matching at the MT sphere boundaries is performed by enforcing continuity of the basis functions in value, slope, and curvature. This is similar to the super-linearized APW (SLAPW) method, in which radial functions {\displaystyle u_{l,\alpha }} and/or their derivatives {\displaystyle {\dot {u}}_{l,\alpha }} at more than one energy parameter are used for the augmentation. In comparison to a pure LAPW basis these approaches can precisely represent Kohn-Sham orbitals in a broader energy window around the energy parameters. The drawback is that the stricter matching conditions lead to a stiffer basis set.
Lower-dimensional systems: The partitioning of the unit cell can be extended to explicitly include semi-infinite vacuum regions with their own augmentations of the plane waves. This enables efficient calculations for lower-dimensional systems such as surfaces and thin films. For the treatment of atomic chains an extension to one-dimensional setups has been formulated.
== Software implementations ==
There are various software projects implementing the LAPW method and/or its variants. Examples for such codes are
Elk
Exciting
Flair
FLEUR
HiLAPW
Wien2k
== References == | Wikipedia/Linearized_augmented-plane-wave_method |
The natural sciences saw various advancements during the Golden Age of Islam (from roughly the mid 8th to the mid 13th centuries), adding a number of innovations to the Transmission of the Classics (such as Aristotle, Ptolemy, Euclid, Neoplatonism). During this period, Islamic theology encouraged thinkers to seek knowledge. Thinkers from this period included Al-Farabi, Abu Bishr Matta, Ibn Sina, al-Hassan Ibn al-Haytham and Ibn Bajjah. These works and the important commentaries on them were the wellspring of science during the medieval period. They were translated into Arabic, the lingua franca of this period.
Islamic scholarship in the sciences had inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further. However, the Islamic world had a greater respect for knowledge gained from empirical observation and believed that the universe is governed by a single set of laws. Their use of empirical observation led to the formation of crude forms of the scientific method. The study of physics in the Islamic world started in Iraq and Egypt.
Fields of physics studied in this period include optics, mechanics (including statics, dynamics, kinematics and motion), and astronomy.
== Physics ==
Islamic scholarship had inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further, especially placing emphasis on observation and a priori reasoning, developing early forms of the scientific method. In Aristotelian physics, physics was seen as lower than the demonstrative mathematical sciences, but in terms of a larger theory of knowledge, physics was higher than astronomy, many of whose principles derive from physics and metaphysics. The primary subject of physics, according to Aristotle, was motion or change; three factors were involved in this change: the underlying thing, privation, and form. In his Metaphysics, Aristotle held that the Unmoved Mover was responsible for the movement of the cosmos, a view the Neoplatonists later generalized into the claim that the cosmos is eternal. Al-Kindi argued against the idea of an eternal cosmos by claiming that the eternality of the world lands one in a different sort of absurdity involving the infinite; he asserted that the cosmos must have a temporal origin because traversing an infinite was impossible.
One of the first commentaries on Aristotle's Metaphysics is by Al-Farabi. In "The Aims of Aristotle's Metaphysics", Al-Farabi argues that metaphysics is not specific to natural beings, but at the same time, metaphysics is higher in universality than natural beings.
== Optics ==
One field in physics, optics, developed rapidly in this period. By the ninth century, there were works on physiological optics as well as mirror reflections, and geometrical and physical optics. In the eleventh century, Ibn al-Haytham not only rejected the Greek idea about vision but also came up with a new theory.
Ibn Sahl (c. 940–1000), a mathematician and physicist connected with the court of Baghdad, wrote a treatise On Burning Mirrors and Lenses in 984 in which he set out his understanding of how curved mirrors and lenses bend and focus light. Ibn Sahl is credited with discovering the law of refraction, now usually called Snell's law. He used this law to work out the shapes of lenses that focus light with no geometric aberrations, known as anaclastic lenses.
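In modern notation (not the geometric construction Ibn Sahl himself used), the law of refraction he is credited with states that for light passing from a medium of refractive index n1 into one of index n2, the angles of incidence and refraction measured from the surface normal satisfy
{\displaystyle n_{1}\sin \theta _{1}=n_{2}\sin \theta _{2}.}
Lens shapes that focus light without geometric aberration can be derived from this relation.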
Ibn al-Haytham (known in Western Europe as Alhacen or Alhazen) (965-1040), often regarded as the "father of optics" and a pioneer of the scientific method, formulated "the first comprehensive and systematic alternative to Greek optical theories." He postulated in his Book of Optics that light is reflected from different surfaces in different directions, thus causing the different light signatures of the objects we see. This was a different approach from that previously taken by Greek scientists, such as Euclid or Ptolemy, who believed rays were emitted from the eye to an object and back again. Al-Haytham, with this new theory of optics, was able to study the geometric aspects of the visual cone theories without explaining the physiology of perception. Also in his Book of Optics, Ibn al-Haytham used mechanics to try to understand optics. Using projectiles, he observed that objects that hit a target perpendicularly exert much more force than projectiles that hit at an angle. Al-Haytham applied this observation to optics and tried to explain why direct light hurts the eye: direct light approaches perpendicularly rather than at an oblique angle. He developed a camera obscura to demonstrate that light and color from different candles can be passed through a single aperture in straight lines, without intermingling at the aperture. His theories were transmitted to the West. His work influenced Roger Bacon, John Peckham and Vitello, who built upon his work and ultimately transmitted it to Kepler.
Taqī al-Dīn tried to disprove the widely held belief that light is emitted by the eye and not the object that is being observed. He explained that, if light came from our eyes at a constant velocity it would take much too long to illuminate the stars for us to see them while we are still looking at them, because they are so far away. Therefore, the illumination must be coming from the stars so we can see them as soon as we open our eyes.
== Astronomy ==
The Islamic understanding of the astronomical model was based on the Greek Ptolemaic system. However, many early astronomers had started to question the model. It was not always accurate in its predictions and was overcomplicated because astronomers were trying to mathematically describe the movement of the heavenly bodies. Ibn al-Haytham published Al-Shukuk ala Batlamyus ("Doubts on Ptolemy"), which outlined his many criticisms of the Ptolemaic paradigm. This book encouraged other astronomers to develop new models to explain celestial movement better than Ptolemy. In al-Haytham's Book of Optics he argues that the celestial spheres were not made of solid matter, and that the heavens are less dense than air. Some astronomers theorized about gravity too; al-Khazini suggested that the gravity an object contains varies depending on its distance from the center of the universe. The center of the universe in this case refers to the center of the Earth.
== Mechanics ==
=== Impetus ===
John Philoponus had rejected the Aristotelian view of motion, and argued that an object acquires an inclination to move when it has a motive power impressed on it. In the eleventh century Ibn Sina had roughly adopted this idea, believing that a moving object has force which is dissipated by external agents like air resistance.
Ibn Sina made a distinction between 'force' and 'inclination' (called "mayl"); he claimed that an object gains mayl when the object is in opposition to its natural motion. He concluded that continuation of motion is attributed to the inclination that is transferred to the object, and that the object remains in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless it is acted upon. This conception of motion is consistent with Newton's first law of motion, inertia, which states that an object in motion will stay in motion unless it is acted on by an external force. This idea, which dissented from the Aristotelian view, was essentially abandoned until it was described as "impetus" by John Buridan, who may have been influenced by Ibn Sina.
=== Acceleration ===
In Abū Rayḥān al-Bīrūnī's text Shadows, he recognizes that non-uniform motion is the result of acceleration. Ibn Sina's theory of mayl tried to relate the velocity and weight of a moving object; this idea closely resembled the concept of momentum. Aristotle's theory of motion stated that a constant force produces a uniform motion; Abu'l-Barakāt al-Baghdādī contradicted this and developed his own theory of motion, in which he showed that velocity and acceleration are two different things and that force is proportional to acceleration and not velocity.
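In modern notation (an anachronistic but convenient summary of the contrast described above, not a formulation found in the medieval texts), the Aristotelian rule amounts to a force proportional to velocity, whereas al-Baghdādī's proposal corresponds to a force proportional to acceleration, the relation later formalized in Newton's second law:
{\displaystyle F\propto v\quad {\text{(Aristotelian rule)}}\qquad {\text{versus}}\qquad F\propto a.}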
== See also ==
Astronomy in the medieval Islamic world
History of optics
History of physics
History of scientific method
Islamic world contributions to Medieval Europe
Islamic Golden Age
Science in the medieval Islamic world
Science in the Middle Ages
== References == | Wikipedia/Physics_in_medieval_Islam |
Graph paper, coordinate paper, grid paper, or squared paper is writing paper that is printed with fine lines making up a regular grid. It is available either as loose leaf paper or bound in notebooks or graph books.
It is commonly found in mathematics and engineering education settings, exercise books, and in laboratory notebooks.
The lines are often used as guides for mathematical notation, plotting graphs of functions or experimental data, and drawing curves.
== History ==
The Metropolitan Museum of Art owns a pattern book dated to around 1596 in which each page bears a grid printed with a woodblock. The owner has used these grids to create block pictures in black and white and in colour.
The first commercially published "coordinate paper" is usually attributed to a Dr. Buxton of England, who patented paper printed with a rectangular coordinate grid, in 1794. A century later, E. H. Moore, a distinguished mathematician at the University of Chicago, advocated usage of paper or exercise books with "squared lines" by students of high schools and universities. The 1906 edition of Algebra for Beginners by H. S. Hall and S. R. Knight included a strong statement that "the squared paper should be of good quality and accurately ruled to inches and tenths of an inch. Experience shows that anything on a smaller scale (such as 'millimeter' paper) is practically worthless in the hands of beginners."
The term "graph paper" did not catch on quickly in American usage. A School Arithmetic (1919) by H. S. Hall and F. H. Stevens had a chapter on graphing with "squared paper". Analytic Geometry (1937) by W. A. Wilson and J. A. Tracey used the phrase "coordinate paper". The term "squared paper" remained in British usage for longer; for example it was used in Public School Arithmetic (2023) by W. M. Baker and A. A. Bourne published in London.
== Formats ==
Quad paper, sometimes referred to as quadrille paper from French quadrillé, 'small square', is a common form of graph paper with a sparse grid printed in light blue or gray and right to the edge of the paper. In the U.S. and Canada, it often has two, four or five squares per inch for work not needing too much detail. In Europe, it usually has 5 mm by 5 mm squares. It is used in mathematical exercise books and lab notebooks.
Dot grid paper uses dots at intersections instead of gridlines. It is often used for bullet journalling.
Engineering paper, or an engineer's pad, is traditionally printed on light green or tan translucent paper. It may have four, five or ten squares per inch. The grid lines are printed on the back side of each page and show through faintly to the front side. Each page has an unprinted margin. When photocopied or scanned, the grid lines typically do not show up in the resulting copy, which often gives the work a neat, uncluttered appearance. In the U.S. and Canada, some engineering professors require student homework to be completed on engineering paper.
Millimeter paper has ten squares per centimeter and is used for technical drawings.
Hexagonal paper shows regular hexagons instead of squares. These can be used to map geometric tiled or tessellated designs among other uses.
Isometric graph paper or 3D graph paper is a triangular graph paper which uses a series of three guidelines forming a 60° grid of small triangles. The triangles are arranged in groups of six to make hexagons. The name suggests the use for isometric views or pseudo-three-dimensional views. Among other functions, they can be used in the design of trianglepoint embroidery. It can be used to draw angles accurately.
Logarithmic paper has rectangles drawn in varying widths corresponding to logarithmic scales for semi-log plots or log-log plots; a short worked example is given at the end of this section.
Normal probability paper is another graph paper with rectangles of variable widths. It is designed so that "the graph of the normal distribution function is represented on it by a straight line", i.e. it can be used for a normal probability plot (see the example at the end of this section).
Polar coordinate paper has concentric circles divided into small arcs or 'pie wedges' to allow plotting in polar coordinates.
Ternary (triangular) graph paper has an equilateral triangle, divided into smaller equilateral triangles with usually 10 or more divisions per edge. It is used to plot compositional percentages in systems that have three constituents or three dimensions (see ternary plot).
In general, graphs showing grids are sometimes called Cartesian graphs because the square can be used to map measurements onto a Cartesian coordinate system.
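As a brief worked illustration of the logarithmic and normal probability papers described above (stated here in modern notation rather than taken from the grid descriptions themselves): a power law y = a·x^k plotted on log-log paper becomes the straight line
{\displaystyle \log y=\log a+k\log x,}
with slope k and intercept log a, while normal probability paper rescales the cumulative-frequency axis by the inverse of the standard normal distribution function Φ, so that data from a normal distribution with mean μ and standard deviation σ fall on the straight line
{\displaystyle \Phi ^{-1}{\bigl (}F(x){\bigr )}={\frac {x-\mu }{\sigma }}.}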
== Examples ==
== See also ==
Ruled paper
Examination book
== References ==
== External links ==
Graph paper downloads at Print-graph-paper.com | Wikipedia/Graph_paper |
A geographical pole or geographic pole is either of the two points on Earth where its axis of rotation intersects its surface. The North Pole lies in the Arctic Ocean while the South Pole is in Antarctica. North and South poles are also defined for other planets or satellites in the Solar System, with a North pole being on the same side of the invariable plane as Earth's North pole.
Relative to Earth's surface, the geographic poles move by a few metres over periods of a few years. This is a combination of Chandler wobble, a free oscillation with a period of about 433 days; an annual motion responding to seasonal movements of air and water masses; and an irregular drift towards the 80th west meridian. As cartography requires exact and unchanging coordinates, the averaged locations of geographical poles are taken as fixed cartographic poles and become the points where the body's great circles of longitude intersect.
== See also ==
Earth's rotation
Polar motion
Poles of astronomical bodies
True polar wander
== References == | Wikipedia/Geographic_pole |
The Pontifical Academy of Sciences (Italian: Pontificia accademia delle scienze, Latin: Pontificia Academia Scientiarum) is a scientific academy of the Vatican City, established in 1936 by Pope Pius XI. Its aim is to promote the progress of the mathematical, physical, and natural sciences and the study of related epistemological problems. The Accademia Pontificia dei Nuovi Lincei ("Pontifical Academy of the New Lynxes") was founded in 1847 as a more closely supervised successor to the Accademia dei Lincei ("Academy of Lynxes") established in Rome in 1603 by the learned Roman Prince, Federico Cesi (1585–1630), who was a young botanist and naturalist, and which claimed Galileo Galilei as its president. The Accademia dei Lincei survives as a wholly separate institution.
The Academy of Sciences, one of the Pontifical academies at the Vatican in Rome, is headquartered in the Casina Pio IV in the heart of the Vatican Gardens.
== History ==
Cesi wanted his academicians to adhere to a research methodology based upon observation, experimentation, and the inductive method. He thus called his academy "dei lincei" because its members had "eyes as sharp as lynxes," scrutinizing nature at both microscopic and macroscopic levels. The leader of the first academy was the scientist Galileo Galilei.
The Academy of Lynxes was dissolved after the death of its founder, but was re-created by Pope Pius IX in 1847 and given the name Accademia Pontificia dei Nuovi Lincei ("Pontifical Academy of the New Lynxes"). It was later re-founded in 1936 by Pope Pius XI and given its current name. Pope Paul VI in 1976 and Pope John Paul II in 1986 subsequently updated its statutes.
Since 1936, the Pontifical Academy of Sciences has been concerned both with investigating specific scientific subjects belonging to individual disciplines and with the promotion of interdisciplinary co-operation. It has progressively increased the number of its academicians and the international character of its membership. The Academy is an independent body within the Holy See and enjoys freedom of research. The statutes of 1976 express its goal: "The Pontifical Academy of Sciences has as its goal the promotion of the progress of the mathematical, physical, and natural sciences, and the study of related epistemological questions and issues."
== Activities ==
Since the Academy and its membership is not influenced by factors of a national, political, or religious character it represents a valuable source of objective scientific information which is made available to the Holy See and to the international scientific community. Today the work of the Academy covers six main areas:
fundamental science
the science and technology of global questions and issues
science in favor of the problems of the Third World
the ethics and politics of science
bioethics
epistemology
The disciplines involved are sub-divided into eight fields: the disciplines of physics and related disciplines; astronomy; chemistry; the earth and environmental sciences; the life sciences (botany, agronomy, zoology, genetics, molecular biology, biochemistry, the neurosciences, surgery); mathematics; the applied sciences; and the philosophy and history of sciences.
Principal among the many publications produced by the Academy are:
Acta – proceedings of the Plenary Sessions
Scripta Varia – major works such as full reports on Study Weeks & Working Groups held at the Academy; some, due to their special importance, have been taken up by foreign publishers
Documenta & Extra Series – for quick publication of summaries and conclusions of Study Weeks and Working Groups; also for rapid diffusion of Papal addresses to the Academy, and of significant documents such as the "Declaration on the Prevention of Nuclear War"
Commentarii – notes and memoirs as well as special studies on scientific subjects.
With the goal of promoting scientific research, the Pius XI Medal is awarded by the Academy every two years to a young scientist who is under the age of 45 and shows exceptional promise. A few of the winners have also become members of the Academy.
== Goals and hopes of the Academy ==
The goals and hopes of the Academy were expressed by Pope Pius XI in the motu proprio "In multis solaciis" which brought about its re-foundation in 1936:
"Amongst the many consolations with which divine Goodness has wished to make happy the years of our Pontificate, I am happy to place that of our having being able to see not a few of those who dedicate themselves to the studies of the sciences mature their attitude and their intellectual approach towards religion. Science, when it is real cognition, is never in contrast with the truth of the Christian faith. Indeed, as is well known to those who study the history of science, it must be recognized on the one hand that the Roman Pontiffs and the Catholic Church have always fostered the research of the learned in the experimental field as well, and on the other hand that such research has opened up the way to the defense of the deposit of supernatural truths entrusted to the Church.... We promise again that it is our strongly-held intention, that the 'Pontifical Academicians', through their work and our Institution, work ever more and ever more effectively for the progress of the sciences. Of them we do not ask anything else, since this praiseworthy intent and this noble work in the service of the truth is what we expect of them."
Forty years later (10 November 1979), John Paul II once again emphasized the role and goals of the Academy, on the 100th anniversary (centenary) of the birth of Albert Einstein:
"The existence of this Pontifical Academy of Sciences, of which in its ancient ancestry Galileo was a member and of which today eminent scientists are members, without any form of ethnic or religious discrimination, is a visible sign, raised amongst the peoples of the world, of the profound harmony that can exist between the truths of science and the truths of faith.... The Church of Rome together with all the Churches spread throughout the world attributes a great importance to the function of the Pontifical Academy of Sciences. The title of 'Pontifical' given to the Academy means, as you know, the interest and the commitment of the Church, in different forms from the ancient patronage, but no less profound and effective in character.... How could the Church have lacked interest in the most noble of the occupations which are most strictly human – the search for truth?"
"Both believing scientists and non-believing scientists are involved in deciphering the palimpsest of nature which has been built in a rather complex way, where the traces of the different stages of the long evolution of the world have been covered over and mixed up. The believer, perhaps, has the advantage of knowing that the puzzle has a solution, that the underlying writing is in the final analysis the work of an intelligent being, and that thus the problem posed by nature has been posed to be solved and that its difficulty is without doubt proportionate to the present or future capacity of humanity. This, perhaps, will not give him new resources for the investigation engaged in. But it will contribute to maintaining him in that healthy optimism without which a sustained effort cannot be engaged in for long."
On 8 November 2012 Pope Benedict XVI told members of the Pontifical Academy of Sciences:
"Dialogue and cooperation between faith and science are urgently needed for building a culture that respects people and the planet.... Without faith and science informing each other, the great questions of humanity leave the domain of reason and truth, and are abandoned to the irrational, to myth, or to indifference, with great damage to humanity itself, to world peace and to our ultimate destiny.... (As people strive to) unlock the mysteries of man and the universe, I am convinced of the urgent need for continued dialogue and cooperation between the worlds of science and of faith in building a culture of respect for man, for human dignity and freedom, for the future of our human family, and for the long-term sustainable development of our planet."
== Members ==
The new members of the Academy are elected by the body of Academicians and chosen from men and women of every race and religion based on the high scientific value of their activities and their high moral profile. They are then officially appointed by the Roman Pontiff. The Academy is governed by a President, appointed from its members by the Pope, who is helped by a scientific Council and by the Chancellor. It was initially made up of 80 Academicians, 70 of whom were appointed for life. In 1986 John Paul II raised the number of members for life to 80, side by side with a limited number of Honorary Academicians chosen because they are highly qualified figures, and others who are Academicians because of the posts they hold, including the Chancellor of the Academy, the Director of the Vatican Observatory, the Prefect of the Vatican Apostolic Library, and the Prefect of the Vatican Secret Archives.
=== President ===
The president of the Academy is appointed from its members by the Pope. The current president, as of 21 June 2017, is Joachim von Braun, who assumed the position after Werner Arber, a Nobel Prize laureate and the first Protestant to hold the position.
The list of all current and past presidents of the Academy is below:
== See also ==
Catholic Church & science
Science and the Popes
== Notes ==
== References ==
Based on The Pontifical Academy of Sciences: A Historical Profile (in PDF)
Pontifical Academy of Sciences website (in English)
== External links ==
Official website
Message to the Pontifical Academy of Sciences on Evolution by Pope John Paul II, 22 October 1996
History
Pontifical Academies – Website of the Holy See
Article about inner workings and relationship to other councils | Wikipedia/Pontifical_Academy_of_Sciences |
An ecosystem (or ecological system) is a system formed by organisms in interaction with their environment.: 458 The biotic and abiotic components are linked together through nutrient cycles and energy flows.
Ecosystems are controlled by external and internal factors. External factors, including climate, the parent material that forms the soil, and topography, control the overall structure of an ecosystem but are not themselves influenced by it. By contrast, internal factors both control and are controlled by ecosystem processes; these include decomposition, the types of species present, root competition, shading, disturbance, and succession. While external factors generally determine which resource inputs an ecosystem has, the availability of those resources within the ecosystem is controlled by internal factors.
Ecosystems are dynamic entities—they are subject to periodic disturbances and are always in the process of recovering from some past disturbance. The tendency of an ecosystem to remain close to its equilibrium state is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience.
Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, those that look at differences between ecosystems to elucidate how they work and direct manipulative experimentation. Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Biotic factors of the ecosystem are living things, such as plants, animals, and bacteria, while abiotic factors are non-living components, such as water, soil and the atmosphere.
Plants allow energy to enter the system through photosynthesis, building up plant tissue. Animals play an important role in the movement of matter and energy through the system, by feeding on plants and on one another. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and microbes.
Ecosystems provide a variety of goods and services upon which people depend and of which they may be a part. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species. These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered "collapsed". Ecosystem restoration can contribute to achieving the Sustainable Development Goals.
== Definition ==
An ecosystem (or ecological system) consists of all the organisms and the abiotic pools (or physical environment) with which they interact.: 5 : 458 The biotic and abiotic components are linked together through nutrient cycles and energy flows.
"Ecosystem processes" are the transfers of energy and materials from one pool to another.: 458 Ecosystem processes are known to "take place at a wide range of scales". Therefore, the correct scale of study depends on the question asked.: 5
=== Origin and development of the term ===
The term "ecosystem" was first used in 1935 in a publication by British ecologist Arthur Tansley. The term was coined by Arthur Roy Clapham, who came up with the word at Tansley's request. Tansley devised the concept to draw attention to the importance of transfers of materials between organisms and their environment.: 9 He later refined the term, describing it as "The whole system, ... including not only the organism-complex, but also the whole complex of physical factors forming what we call the environment". Tansley regarded ecosystems not simply as natural units, but as "mental isolates". Tansley later defined the spatial extent of ecosystems using the term "ecotope".
G. Evelyn Hutchinson, a limnologist who was a contemporary of Tansley's, combined Charles Elton's ideas about trophic ecology with those of Russian geochemist Vladimir Vernadsky. As a result, he suggested that mineral nutrient availability in a lake limited algal production. This would, in turn, limit the abundance of animals that feed on algae. Raymond Lindeman took these ideas further to suggest that the flow of energy through a lake was the primary driver of the ecosystem. Hutchinson's students, brothers Howard T. Odum and Eugene P. Odum, further developed a "systems approach" to the study of ecosystems. This allowed them to study the flow of energy and material through ecological systems.: 9
== Processes ==
=== External and internal factors ===
Ecosystems are controlled by both external and internal factors. External factors, also called state factors, control the overall structure of an ecosystem and the way things work within it, but are not themselves influenced by the ecosystem. On broad geographic scales, climate is the factor that "most strongly determines ecosystem processes and structure".: 14 Climate determines the biome in which the ecosystem is embedded. Rainfall patterns and seasonal temperatures influence photosynthesis and thereby determine the amount of energy available to the ecosystem.: 145
Parent material determines the nature of the soil in an ecosystem, and influences the supply of mineral nutrients. Topography also controls ecosystem processes by affecting things like microclimate, soil development and the movement of water through a system. For example, ecosystems can be quite different if situated in a small depression on the landscape, versus one present on an adjacent steep hillside.: 39 : 66
Other external factors that play an important role in ecosystem functioning include time and potential biota, the organisms that are present in a region and could potentially occupy a particular site. Ecosystems in similar environments that are located in different parts of the world can end up doing things very differently simply because they have different pools of species present.: 321 The introduction of non-native species can cause substantial shifts in ecosystem function.
Unlike external factors, internal factors in ecosystems not only control ecosystem processes but are also controlled by them.: 16 While the resource inputs are generally controlled by external processes like climate and parent material, the availability of these resources within the ecosystem is controlled by internal factors like decomposition, root competition or shading. Other factors like disturbance, succession or the types of species present are also internal factors.
=== Primary production ===
Primary production is the production of organic matter from inorganic carbon sources. This mainly occurs through photosynthesis. The energy incorporated through this process supports life on earth, while the carbon makes up much of the organic matter in living and dead biomass, soil carbon and fossil fuels. It also drives the carbon cycle, which influences global climate via the greenhouse effect.
Through the process of photosynthesis, plants capture energy from light and use it to combine carbon dioxide and water to produce carbohydrates and oxygen. The photosynthesis carried out by all the plants in an ecosystem is called the gross primary production (GPP).: 124 About half of the GPP is respired by plants in order to provide the energy that supports their growth and maintenance.: 157 The remainder, that portion of GPP that is not used up by respiration, is known as the net primary production (NPP).: 157 Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis.: 155
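Written as a simple balance (a restatement of the definitions above in equation form, with the symbol R_a introduced here only as shorthand for autotrophic, i.e. plant, respiration):
{\displaystyle {\text{NPP}}={\text{GPP}}-R_{\text{a}}.}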
=== Energy flow ===
Energy and carbon enter ecosystems through photosynthesis, are incorporated into living tissue, transferred to other organisms that feed on the living and dead plant matter, and eventually released through respiration.: 157 The carbon and energy incorporated into plant tissues (net primary production) is either consumed by animals while the plant is alive, or it remains uneaten when the plant tissue dies and becomes detritus. In terrestrial ecosystems, the vast majority of the net primary production ends up being broken down by decomposers. The remainder is consumed by animals while still alive and enters the plant-based trophic system. After plants and animals die, the organic matter contained in them enters the detritus-based trophic system.
Ecosystem respiration is the sum of respiration by all living organisms (plants, animals, and decomposers) in the ecosystem. Net ecosystem production is the difference between gross primary production (GPP) and ecosystem respiration. In the absence of disturbance, net ecosystem production is equivalent to the net carbon accumulation in the ecosystem.
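In the same shorthand notation (R_eco and R_h are labels chosen here for illustration, denoting total ecosystem respiration and the respiration of heterotrophs such as animals and decomposers), the definition above reads
{\displaystyle {\text{NEP}}={\text{GPP}}-R_{\text{eco}}={\text{NPP}}-R_{\text{h}},}
so that, in the absence of disturbance, NEP equals the net carbon accumulated by the ecosystem.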
Energy can also be released from an ecosystem through disturbances such as wildfire or transferred to other ecosystems (e.g., from a forest to a stream to a lake) by erosion.
In aquatic systems, the proportion of plant biomass that gets consumed by herbivores is much higher than in terrestrial systems. In trophic systems, photosynthetic organisms are the primary producers. The organisms that consume their tissues are called primary consumers or secondary producers—herbivores. Organisms which feed on microbes (bacteria and fungi) are termed microbivores. Animals that feed on primary consumers—carnivores—are secondary consumers. Each of these constitutes a trophic level.
The sequence of consumption—from plant to herbivore, to carnivore—forms a food chain. Real systems are much more complex than this—organisms will generally feed on more than one form of food, and may feed at more than one trophic level. Carnivores may capture some prey that is part of a plant-based trophic system and others that are part of a detritus-based trophic system (a bird that feeds both on herbivorous grasshoppers and earthworms, which consume detritus). Real systems, with all these complexities, form food webs rather than food chains; these webs display a number of common, non-random properties in the topology of their networks.
=== Decomposition ===
The carbon and nutrients in dead organic matter are broken down by a group of processes known as decomposition. This releases nutrients that can then be re-used for plant and microbial production and returns carbon dioxide to the atmosphere (or water) where it can be used for photosynthesis. In the absence of decomposition, the dead organic matter would accumulate in an ecosystem, and nutrients and atmospheric carbon dioxide would be depleted.: 183
Decomposition processes can be separated into three categories—leaching, fragmentation and chemical alteration of dead material. As water moves through dead organic matter, it dissolves and carries with it the water-soluble components. These are then taken up by organisms in the soil, react with mineral soil, or are transported beyond the confines of the ecosystem (and are considered lost to it).: 271–280 Newly shed leaves and newly dead animals have high concentrations of water-soluble components and include sugars, amino acids and mineral nutrients. Leaching is more important in wet environments and less important in dry ones.: 69–77
Fragmentation processes break organic material into smaller pieces, exposing new surfaces for colonization by microbes. Freshly shed leaf litter may be inaccessible due to an outer layer of cuticle or bark, and cell contents are protected by a cell wall. Newly dead animals may be covered by an exoskeleton. Fragmentation processes, which break through these protective layers, accelerate the rate of microbial decomposition.: 184 Animals fragment detritus as they hunt for food, as does passage through the gut. Freeze-thaw cycles and cycles of wetting and drying also fragment dead material.: 186
The chemical alteration of the dead organic matter is primarily achieved through bacterial and fungal action. Fungal hyphae produce enzymes that can break through the tough outer structures surrounding dead plant material. They also produce enzymes that break down lignin, which allows them access to both cell contents and the nitrogen in the lignin. Fungi can transfer carbon and nitrogen through their hyphal networks and thus, unlike bacteria, are not dependent solely on locally available resources.: 186
==== Decomposition rates ====
Decomposition rates vary among ecosystems. The rate of decomposition is governed by three sets of factors—the physical environment (temperature, moisture, and soil properties), the quantity and quality of the dead material available to decomposers, and the nature of the microbial community itself.: 194 Temperature controls the rate of microbial respiration; the higher the temperature, the faster the microbial decomposition occurs. Temperature also affects soil moisture, which affects decomposition. Freeze-thaw cycles also affect decomposition—freezing temperatures kill soil microorganisms, which allows leaching to play a more important role in moving nutrients around. This can be especially important as the soil thaws in the spring, creating a pulse of nutrients that become available.: 280
Decomposition rates are low under very wet or very dry conditions. Decomposition rates are highest in wet, moist conditions with adequate levels of oxygen. Wet soils tend to become deficient in oxygen (this is especially true in wetlands), which slows microbial growth. In dry soils, decomposition slows as well, but bacteria continue to grow (albeit at a slower rate) even after soils become too dry to support plant growth.: 200
=== Dynamics and resilience ===
Ecosystems are dynamic entities. They are subject to periodic disturbances and are always in the process of recovering from past disturbances.: 347 When a perturbation occurs, an ecosystem responds by moving away from its initial state. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Resilience thinking also includes humanity as an integral part of the biosphere where we are dependent on ecosystem services for our survival and must build and maintain their natural capacities to withstand shocks and disturbances. Time plays a central role over a wide range, for example, in the slow development of soil from bare rock and the faster recovery of a community from disturbance.: 67
Disturbance also plays an important role in ecological processes. F. Stuart Chapin and coauthors define disturbance as "a relatively discrete event in time that removes plant biomass".: 346 This can range from herbivore outbreaks, treefalls, fires, hurricanes, floods, glacial advances, to volcanic eruptions. Such disturbances can cause large changes in plant, animal and microbe populations, as well as soil organic matter content. Disturbance is followed by succession, a "directional change in ecosystem structure and functioning resulting from biotically driven changes in resource supply.": 470
The frequency and severity of disturbance determine the way it affects ecosystem function. A major disturbance like a volcanic eruption or glacial advance and retreat leaves behind soils that lack plants, animals or organic matter. Ecosystems that experience such disturbances undergo primary succession. A less severe disturbance like forest fires, hurricanes or cultivation results in secondary succession and a faster recovery.: 348 More severe and more frequent disturbances result in longer recovery times.
From one year to another, ecosystems experience variation in their biotic and abiotic environments. A drought, a colder than usual winter, and a pest outbreak all are short-term variability in environmental conditions. Animal populations vary from year to year, building up during resource-rich periods and crashing as they overshoot their food supply. Longer-term changes also shape ecosystem processes. For example, the forests of eastern North America still show legacies of cultivation which ceased in 1850 when large areas were reverted to forests.: 340 Another example is the methane production in eastern Siberian lakes that is controlled by organic matter which accumulated during the Pleistocene.
=== Nutrient cycling ===
Ecosystems continually exchange energy and carbon with the wider environment. Mineral nutrients, on the other hand, are mostly cycled back and forth between plants, animals, microbes and the soil. Most nitrogen enters ecosystems through biological nitrogen fixation, is deposited through precipitation, dust, gases or is applied as fertilizer.: 266 Most terrestrial ecosystems are nitrogen-limited in the short term making nitrogen cycling an important control on ecosystem production.: 289 Over the long term, phosphorus availability can also be critical.
Macronutrients which are required by all plants in large quantities include the primary nutrients (which are most limiting as they are used in largest amounts): Nitrogen, phosphorus, potassium.: 231 Secondary major nutrients (less often limiting) include: Calcium, magnesium, sulfur. Micronutrients required by all plants in small quantities include boron, chloride, copper, iron, manganese, molybdenum, zinc. Finally, there are also beneficial nutrients which may be required by certain plants or by plants under specific environmental conditions: aluminum, cobalt, iodine, nickel, selenium, silicon, sodium, vanadium.: 231
Until modern times, nitrogen fixation was the major source of nitrogen for ecosystems. Nitrogen-fixing bacteria either live symbiotically with plants or live freely in the soil. The energetic cost is high for plants that support nitrogen-fixing symbionts—as much as 25% of gross primary production when measured in controlled conditions. Many members of the legume plant family support nitrogen-fixing symbionts. Some cyanobacteria are also capable of nitrogen fixation. These are phototrophs, which carry out photosynthesis. Like other nitrogen-fixing bacteria, they can either be free-living or have symbiotic relationships with plants.: 360 Other sources of nitrogen include acid deposition produced through the combustion of fossil fuels, ammonia gas which evaporates from agricultural fields which have had fertilizers applied to them, and dust.: 270 Anthropogenic nitrogen inputs account for about 80% of all nitrogen fluxes in ecosystems.: 270
When plant tissues are shed or are eaten, the nitrogen in those tissues becomes available to animals and microbes. Microbial decomposition releases nitrogen compounds from dead organic matter in the soil, where plants, fungi, and bacteria compete for it. Some soil bacteria use organic nitrogen-containing compounds as a source of carbon, and release ammonium ions into the soil. This process is known as nitrogen mineralization. Others convert ammonium to nitrite and nitrate ions, a process known as nitrification. Nitric oxide and nitrous oxide are also produced during nitrification.: 277 Under nitrogen-rich and oxygen-poor conditions, nitrates and nitrites are converted to nitrogen gas, a process known as denitrification.: 281
Mycorrhizal fungi which are symbiotic with plant roots, use carbohydrates supplied by the plants and in return transfer phosphorus and nitrogen compounds back to the plant roots. This is an important pathway of organic nitrogen transfer from dead organic matter to plants. This mechanism may contribute to more than 70 Tg of annually assimilated plant nitrogen, thereby playing a critical role in global nutrient cycling and ecosystem function.
Phosphorus enters ecosystems through weathering. As ecosystems age this supply diminishes, making phosphorus-limitation more common in older landscapes (especially in the tropics).: 287–290 Calcium and sulfur are also produced by weathering, but acid deposition is an important source of sulfur in many ecosystems. Although magnesium and manganese are produced by weathering, exchanges between soil organic matter and living cells account for a significant portion of ecosystem fluxes. Potassium is primarily cycled between living cells and soil organic matter.: 291
=== Function and biodiversity ===
Biodiversity plays an important role in ecosystem functioning.: 449–453 Ecosystem processes are driven by the species in an ecosystem, the nature of the individual species, and the relative abundance of organisms among these species. Ecosystem processes are the net effect of the actions of individual organisms as they interact with their environment. Ecological theory suggests that in order to coexist, species must have some level of limiting similarity—they must be different from one another in some fundamental way, otherwise, one species would competitively exclude the other. Despite this, the cumulative effect of additional species in an ecosystem is not linear: additional species may enhance nitrogen retention, for example. However, beyond some level of species richness,: 331 additional species may have little additive effect unless they differ substantially from species already present.: 324 This is the case for example for exotic species.: 321
The addition (or loss) of species that are ecologically similar to those already present in an ecosystem tends to only have a small effect on ecosystem function. Ecologically distinct species, on the other hand, have a much larger effect. Similarly, dominant species have a large effect on ecosystem function, while rare species tend to have a small effect. Keystone species tend to have an effect on ecosystem function that is disproportionate to their abundance in an ecosystem.: 324
An ecosystem engineer is any organism that creates, significantly modifies, maintains or destroys a habitat.
== Study approaches ==
=== Ecosystem ecology ===
Ecosystem ecology is the "study of the interactions between organisms and their environment as an integrated system".: 458 The size of ecosystems can range up to ten orders of magnitude, from the surface layers of rocks to the surface of the planet.: 6
The Hubbard Brook Ecosystem Study started in 1963 to study the White Mountains in New Hampshire. It was the first successful attempt to study an entire watershed as an ecosystem. The study used stream chemistry as a means of monitoring ecosystem properties, and developed a detailed biogeochemical model of the ecosystem. Long-term research at the site led to the discovery of acid rain in North America in 1972. Researchers documented the depletion of soil cations (especially calcium) over the next several decades.
Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, those that look at differences between ecosystems to elucidate how they work and direct manipulative experimentation. Studies can be carried out at a variety of scales, ranging from whole-ecosystem studies to studying microcosms or mesocosms (simplified representations of ecosystems). American ecologist Stephen R. Carpenter has argued that microcosm experiments can be "irrelevant and diversionary" if they are not carried out in conjunction with field studies done at the ecosystem scale. In such cases, microcosm experiments may fail to accurately predict ecosystem-level dynamics.
=== Classifications ===
Biomes are general classes or categories of ecosystems.: 14 However, there is no clear distinction between biomes and ecosystems. Biomes are always defined at a very general level. Ecosystems can be described at levels that range from very general (in which case the names are sometimes the same as those of biomes) to very specific, such as "wet coastal needle-leafed forests".
Biomes vary due to global variations in climate. Biomes are often defined by their structure: at a general level, for example, tropical forests, temperate grasslands, and arctic tundra.: 14 There can be any degree of subcategories among ecosystem types that comprise a biome, e.g., needle-leafed boreal forests or wet tropical forests. Although ecosystems are most commonly categorized by their structure and geography, there are also other ways to categorize and classify ecosystems such as by their level of human impact (see anthropogenic biome), or by their integration with social processes or technological processes or their novelty (e.g. novel ecosystem). Each of these taxonomies of ecosystems tends to emphasize different structural or functional properties. None of these is the "best" classification.
Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Different approaches to ecological classifications have been developed in terrestrial, freshwater and marine disciplines, and a function-based typology has been proposed to leverage the strengths of these different approaches into a unified system.
== Human interactions with ecosystems ==
Human activities are important in almost all ecosystems. Although humans exist and operate within ecosystems, their cumulative effects are large enough to influence external factors like climate.: 14
=== Ecosystem goods and services ===
Ecosystems provide a variety of goods and services upon which people depend. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. They also include less tangible items like tourism and recreation, and genes from wild plants and animals that can be used to improve domestic species.
Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. While material from the ecosystem had traditionally been recognized as being the basis for things of economic value, ecosystem services tend to be taken for granted.
The Millennium Ecosystem Assessment is an international synthesis by over 1000 of the world's leading biological scientists that analyzes the state of the Earth's ecosystems and provides summaries and guidelines for decision-makers. The report identified four major categories of ecosystem services: provisioning, regulating, cultural and supporting services. It concludes that human activity is having a significant and escalating impact on the biodiversity of the world ecosystems, reducing both their resilience and biocapacity. The report refers to natural systems as humanity's "life-support system", providing essential ecosystem services. The assessment measures 24 ecosystem services and concludes that only four have shown improvement over the last 50 years, 15 are in serious decline, and five are in a precarious condition.: 6–19
The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) is an intergovernmental organization established to improve the interface between science and policy on issues of biodiversity and ecosystem services. It is intended to serve a similar role to the Intergovernmental Panel on Climate Change.
Ecosystem services are limited and also threatened by human activities. To help inform decision-makers, many ecosystem services are being assigned economic values, often based on the cost of replacement with anthropogenic alternatives. The ongoing challenge of prescribing economic value to nature, for example through biodiversity banking, is prompting transdisciplinary shifts in how we recognize and manage the environment, social responsibility, business opportunities, and our future as a species.
=== Degradation and decline ===
As human population and per capita consumption grow, so do the resource demands imposed on ecosystems and the effects of the human ecological footprint. Natural resources are vulnerable and limited. The environmental impacts of anthropogenic actions are becoming more apparent. Problems for all ecosystems include: environmental pollution, climate change and biodiversity loss. For terrestrial ecosystems further threats include air pollution, soil degradation, and deforestation. For aquatic ecosystems threats also include unsustainable exploitation of marine resources (for example overfishing), marine pollution, microplastics pollution, the effects of climate change on oceans (e.g. warming and acidification), and building on coastal areas.
Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species.
These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered collapsed (see also IUCN Red List of Ecosystems). Ecosystem collapse could be reversible and in this way differs from species extinction. Quantitative assessments of the risk of collapse are used as measures of conservation status and trends.
=== Management ===
When natural resource management is applied to whole ecosystems, rather than single species, it is termed ecosystem management. Although definitions of ecosystem management abound, there is a common set of principles which underlie these definitions: A fundamental principle is the long-term sustainability of the production of goods and services by the ecosystem; "intergenerational sustainability [is] a precondition for management, not an afterthought". While ecosystem management can be used as part of a plan for wilderness conservation, it can also be used in intensively managed ecosystems (see, for example, agroecosystem and close to nature forestry).
=== Restoration and sustainable development ===
Integrated conservation and development projects (ICDPs) aim to address conservation and human livelihood (sustainable development) concerns in developing countries together, rather than separately as was often done in the past.
== See also ==
Complex system
Earth science
Ecoregion
Ecological resilience
Ecosystem-based adaptation
Artificialization
Ecosystem structure
=== Types ===
The following articles are types of ecosystems for particular types of regions or zones:
Ecosystems grouped by condition
Agroecosystem
Closed ecosystem
Depauperate ecosystem
Novel ecosystem
Reference ecosystem
=== Instances ===
Ecosystem instances in specific regions of the world:
Greater Yellowstone Ecosystem
Leuser Ecosystem
Longleaf pine Ecosystem
Tarangire Ecosystem
== References ==
== External links ==
Media related to Ecosystems at Wikimedia Commons
The dictionary definition of ecosystem at Wiktionary
Wikidata: topic (Scholia)
Biomes and ecosystems travel guide from Wikivoyage | Wikipedia/Ecosystems |
A carbohydrate is a biomolecule composed of carbon (C), hydrogen (H), and oxygen (O) atoms. The typical hydrogen-to-oxygen atomic ratio is 2:1, analogous to that of water, and is represented by the empirical formula Cm(H2O)n (where m and n may differ). This formula does not imply direct covalent bonding between hydrogen and oxygen atoms; for example, in CH2O, hydrogen is covalently bonded to carbon, not oxygen. While the 2:1 hydrogen-to-oxygen ratio is characteristic of many carbohydrates, exceptions exist. For instance, uronic acids and deoxy-sugars like fucose deviate from this precise stoichiometric definition. Conversely, some compounds conforming to this definition, such as formaldehyde and acetic acid, are not classified as carbohydrates.
The term is predominantly used in biochemistry, functioning as a synonym for saccharide (from Ancient Greek σάκχαρον (sákkharon) 'sugar'), a group that includes sugars, starch, and cellulose. The saccharides are divided into four chemical groups: monosaccharides, disaccharides, oligosaccharides, and polysaccharides. Monosaccharides and disaccharides, the smallest (lower molecular weight) carbohydrates, are commonly referred to as sugars. While the scientific nomenclature of carbohydrates is complex, the names of the monosaccharides and disaccharides very often end in the suffix -ose, which was originally taken from the word glucose (from Ancient Greek γλεῦκος (gleûkos) 'wine, must'), and is used for almost all sugars (e.g., fructose (fruit sugar), sucrose (cane or beet sugar), ribose, lactose (milk sugar)).
Carbohydrates perform numerous roles in living organisms. Polysaccharides serve as an energy store (e.g., starch and glycogen) and as structural components (e.g., cellulose in plants and chitin in arthropods and fungi). The 5-carbon monosaccharide ribose is an important component of coenzymes (e.g., ATP, FAD and NAD) and the backbone of the genetic molecule known as RNA. The related deoxyribose is a component of DNA. Saccharides and their derivatives include many other important biomolecules that play key roles in the immune system, fertilization, preventing pathogenesis, blood clotting, and development.
Carbohydrates are central to nutrition and are found in a wide variety of natural and processed foods. Starch is a polysaccharide and is abundant in cereals (wheat, maize, rice), potatoes, and processed food based on cereal flour, such as bread, pizza or pasta. Sugars appear in human diet mainly as table sugar (sucrose, extracted from sugarcane or sugar beets), lactose (abundant in milk), glucose and fructose, both of which occur naturally in honey, many fruits, and some vegetables. Table sugar, milk, or honey is often added to drinks and many prepared foods such as jam, biscuits and cakes.
Cellulose, a polysaccharide found in the cell walls of all plants, is one of the main components of insoluble dietary fiber. Although it is not digestible by humans, cellulose and insoluble dietary fiber generally help maintain a healthy digestive system by facilitating bowel movements. Other polysaccharides contained in dietary fiber include resistant starch and inulin, which feed some bacteria in the microbiota of the large intestine, and are metabolized by these bacteria to yield short-chain fatty acids.
== Terminology ==
In scientific literature, the term "carbohydrate" has many synonyms, like "sugar" (in the broad sense), "saccharide", "ose", "glucide", "hydrate of carbon" or "polyhydroxy compounds with aldehyde or ketone". Some of these terms, especially "carbohydrate" and "sugar", are also used with other meanings.
In food science and in many informal contexts, the term "carbohydrate" often means any food that is particularly rich in the complex carbohydrate starch (such as cereals, bread and pasta) or simple carbohydrates, such as sugar (found in candy, jams, and desserts). This informality is sometimes confusing since it confounds chemical structure and digestibility in humans.
The term "carbohydrate" (or "carbohydrate by difference") refers also to dietary fiber, which is a carbohydrate, but, unlike sugars and starches, fibers are not hydrolyzed by human digestive enzymes. Fiber generally contributes little food energy in humans, but is often included in the calculation of total food energy. The fermentation of soluble fibers by gut microflora can yield short-chain fatty acids, and soluble fiber is estimated to provide about 2 kcal/g.
== History ==
The history of carbohydrates dates back around 10,000 years, to the cultivation of sugarcane in Papua New Guinea during the Neolithic agricultural revolution. The term "carbohydrate" was first proposed by the German chemist Carl Schmidt in 1844. In 1856, glycogen, a form of carbohydrate storage in animal livers, was discovered by French physiologist Claude Bernard.
== Structure ==
Formerly the name "carbohydrate" was used in chemistry for any compound with the formula Cm (H2O)n. Following this definition, some chemists considered formaldehyde (CH2O) to be the simplest carbohydrate, while others claimed that title for glycolaldehyde. Today, the term is generally understood in the biochemistry sense, which excludes compounds with only one or two carbons and includes many biological carbohydrates which deviate from this formula. For example, while the above representative formulas would seem to capture the commonly known carbohydrates, ubiquitous and abundant carbohydrates often deviate from this. For example, carbohydrates often display chemical groups such as: N-acetyl (e.g., chitin), sulfate (e.g., glycosaminoglycans), carboxylic acid and deoxy modifications (e.g., fucose and sialic acid).
Natural saccharides are generally built of simple carbohydrates called monosaccharides with general formula (CH2O)n where n is three or more. A typical monosaccharide has the structure H–(CHOH)x(C=O)–(CHOH)y–H, that is, an aldehyde or ketone with many hydroxyl groups added, usually one on each carbon atom that is not part of the aldehyde or ketone functional group. Examples of monosaccharides are glucose, fructose, and glyceraldehyde. However, some biological substances commonly called "monosaccharides" do not conform to this formula (e.g., uronic acids and deoxy-sugars such as fucose) and there are many chemicals that do conform to this formula but are not considered to be monosaccharides (e.g., formaldehyde CH2O and inositol (CH2O)6).
The open-chain form of a monosaccharide often coexists with a closed ring form where the aldehyde/ketone carbonyl group carbon (C=O) and hydroxyl group (–OH) react forming a hemiacetal with a new C–O–C bridge.
Monosaccharides can be linked together into what are called polysaccharides (or oligosaccharides) in a large variety of ways. Many carbohydrates contain one or more modified monosaccharide units that have had one or more groups replaced or removed. For example, deoxyribose, a component of DNA, is a modified version of ribose; chitin is composed of repeating units of N-acetyl glucosamine, a nitrogen-containing form of glucose.
== Division ==
Carbohydrates are polyhydroxy aldehydes, ketones, alcohols, acids, their simple derivatives and their polymers having linkages of the acetal type. They may be classified according to their degree of polymerization, and may be divided initially into three principal groups, namely sugars, oligosaccharides and polysaccharides.
== Monosaccharides ==
Monosaccharides are the simplest carbohydrates in that they cannot be hydrolyzed to smaller carbohydrates. They are aldehydes or ketones with two or more hydroxyl groups. The general chemical formula of an unmodified monosaccharide is (C•H2O)n, literally a "carbon hydrate". Monosaccharides are important fuel molecules as well as building blocks for nucleic acids. The smallest monosaccharides, for which n=3, are dihydroxyacetone and D- and L-glyceraldehydes.
=== Classification of monosaccharides ===
Monosaccharides are classified according to three different characteristics: the placement of its carbonyl group, the number of carbon atoms it contains, and its chiral handedness. If the carbonyl group is an aldehyde, the monosaccharide is an aldose; if the carbonyl group is a ketone, the monosaccharide is a ketose. Monosaccharides with three carbon atoms are called trioses, those with four are called tetroses, five are called pentoses, six are hexoses, and so on. These two systems of classification are often combined. For example, glucose is an aldohexose (a six-carbon aldehyde), ribose is an aldopentose (a five-carbon aldehyde), and fructose is a ketohexose (a six-carbon ketone).
Each carbon atom bearing a hydroxyl group (-OH), with the exception of the first and last carbons, is asymmetric, making it a stereocenter with two possible configurations (R or S). Because of this asymmetry, a number of isomers may exist for any given monosaccharide formula. Using the Le Bel–van 't Hoff rule, the aldohexose D-glucose, for example, has the formula (C·H2O)6, of which four of its six carbon atoms are stereogenic, making D-glucose one of 2^4 = 16 possible stereoisomers. In the case of glyceraldehyde, an aldotriose, there is one pair of possible stereoisomers, which are enantiomers and epimers. 1,3-Dihydroxyacetone, the ketose corresponding to the aldose glyceraldehyde, is a symmetric molecule with no stereocenters. The assignment of D or L is made according to the orientation of the asymmetric carbon furthest from the carbonyl group: in a standard Fischer projection, if the hydroxyl group is on the right the molecule is a D sugar, otherwise it is an L sugar. The "D-" and "L-" prefixes should not be confused with "d-" or "l-", which indicate the direction that the sugar rotates plane-polarized light. This usage of "d-" and "l-" is no longer followed in carbohydrate chemistry.
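As a worked application of this rule, using only the figures already given above for the aldohexose case:

```latex
N_{\text{stereoisomers}} = 2^{\,n_{\text{stereocenters}}}, \qquad n = 4 \;\Rightarrow\; N = 2^{4} = 16
```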
=== Ring-straight chain isomerism ===
The aldehyde or ketone group of a straight-chain monosaccharide will react reversibly with a hydroxyl group on a different carbon atom to form a hemiacetal or hemiketal, forming a heterocyclic ring with an oxygen bridge between two carbon atoms. Rings with five and six atoms are called furanose and pyranose forms, respectively, and exist in equilibrium with the straight-chain form.
During the conversion from straight-chain form to the cyclic form, the carbon atom containing the carbonyl oxygen, called the anomeric carbon, becomes a stereogenic center with two possible configurations: The oxygen atom may take a position either above or below the plane of the ring. The resulting possible pair of stereoisomers is called anomers. In the α anomer, the -OH substituent on the anomeric carbon rests on the opposite side (trans) of the ring from the CH2OH side branch. The alternative form, in which the CH2OH substituent and the anomeric hydroxyl are on the same side (cis) of the plane of the ring, is called the β anomer.
=== Use in living organisms ===
Monosaccharides are the major fuel source for metabolism, and glucose is an energy-rich molecule utilized to generate ATP in almost all living organisms. Glucose is a high-energy substrate produced in plants through photosynthesis by combining energy-poor water and carbon dioxide in an endothermic reaction fueled by solar energy. When monosaccharides are not immediately needed, they are often converted to more space-efficient (i.e., less water-soluble) forms, often polysaccharides. In animals, glucose circulating in the blood is a major metabolic substrate and is oxidized in the mitochondria to produce ATP for performing useful cellular work. In humans and other animals, serum glucose levels must be regulated carefully to maintain glucose within acceptable limits and prevent the deleterious effects of hypo- or hyperglycemia. Hormones such as insulin and glucagon serve to keep glucose levels in balance: insulin stimulates glucose uptake into the muscle and fat cells when glucose levels are high, whereas glucagon helps to raise glucose levels if they dip too low by stimulating hepatic glucose synthesis. In many animals, including humans, this storage form is glycogen, especially in liver and muscle cells. In plants, starch is used for the same purpose. The most abundant carbohydrate, cellulose, is a structural component of the cell wall of plants and many forms of algae. Ribose is a component of RNA. Deoxyribose is a component of DNA. Lyxose is a component of lyxoflavin found in the human heart. Ribulose and xylulose occur in the pentose phosphate pathway. Galactose, a component of milk sugar lactose, is found in galactolipids in plant cell membranes and in glycoproteins in many tissues. Mannose occurs in human metabolism, especially in the glycosylation of certain proteins. Fructose, or fruit sugar, is found in many plants; in humans it is metabolized in the liver, absorbed directly by the intestine during digestion, and found in semen. Trehalose, a major sugar of insects, is rapidly hydrolyzed into two glucose molecules to support continuous flight.
== Disaccharides ==
Two joined monosaccharides are called a disaccharide, the simplest kind of polysaccharide. Examples include sucrose and lactose. They are composed of two monosaccharide units bound together by a covalent bond known as a glycosidic linkage formed via a dehydration reaction, resulting in the loss of a hydrogen atom from one monosaccharide and a hydroxyl group from the other. The formula of unmodified disaccharides is C12H22O11. Although there are numerous kinds of disaccharides, a handful of disaccharides are particularly notable.
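The C12H22O11 formula follows from the dehydration reaction just described: two unmodified hexose units (C6H12O6) join with the loss of one water molecule, for example:

```latex
\mathrm{C_6H_{12}O_6} + \mathrm{C_6H_{12}O_6} \;\longrightarrow\; \mathrm{C_{12}H_{22}O_{11}} + \mathrm{H_2O}
```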
Sucrose is the most abundant disaccharide, and the main form in which carbohydrates are transported in plants. It is composed of one D-glucose molecule and one D-fructose molecule. The systematic name for sucrose, O-α-D-glucopyranosyl-(1→2)-D-fructofuranoside, indicates four things:
Its monosaccharides: glucose and fructose
Their ring types: glucose is a pyranose and fructose is a furanose
How they are linked together: the oxygen on carbon number 1 (C1) of α-D-glucose is linked to the C2 of D-fructose.
The -oside suffix indicates that the anomeric carbon of both monosaccharides participates in the glycosidic bond.
Lactose, a disaccharide composed of one D-galactose molecule and one D-glucose molecule, occurs naturally in mammalian milk. The systematic name for lactose is O-β-D-galactopyranosyl-(1→4)-D-glucopyranose. Other notable disaccharides include maltose (two D-glucoses linked α-1,4) and cellobiose (two D-glucoses linked β-1,4). Disaccharides can be classified into two types: reducing and non-reducing disaccharides. A disaccharide in which one anomeric carbon retains a free hemiacetal group is a reducing disaccharide (e.g., maltose and lactose); if both anomeric carbons participate in the glycosidic bond, as in sucrose, the disaccharide is non-reducing.
== Oligosaccharides and polysaccharides ==
=== Oligosaccharides ===
Oligosaccharides are saccharide polymers composed of three to ten units of monosaccharides, connected via glycosidic linkages, similar to disaccharides. They are usually linked to lipids or amino acids through O- or N-glycosidic bonds to form glycolipids and glycoproteins, though some, like the raffinose series and the fructooligosaccharides, are not. They have roles in cell recognition and cell adhesion.
=== Polysaccharides ===
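Polysaccharides are long chains of monosaccharide units joined by glycosidic bonds. As noted above, they serve as energy stores (e.g., starch in plants and glycogen in animals) and as structural components (e.g., cellulose in plant cell walls and chitin in arthropods and fungi).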
== Nutrition ==
Carbohydrate consumed in food yields 3.87 kilocalories of energy per gram for simple sugars, and 3.57 to 4.12 kilocalories per gram for complex carbohydrate in most other foods. Relatively high levels of carbohydrate are associated with processed foods or refined foods made from plants, including sweets, cookies and candy, table sugar, honey, soft drinks, breads and crackers, jams and fruit products, pastas and breakfast cereals. Refined carbohydrates from processed foods such as white bread or rice, soft drinks, and desserts are readily digestible, and many are known to have a high glycemic index, which reflects a rapid assimilation of glucose. By contrast, the digestion of whole, unprocessed, fiber-rich foods such as beans, peas, and whole grains produces a slower and steadier release of glucose and energy into the body. Animal-based foods generally have the lowest carbohydrate levels, although milk does contain a high proportion of lactose.
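To illustrate how such per-gram energy values are used in practice, the following sketch computes the energy of a food serving from its macronutrient content. It uses the rounded general Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat), which are simplified averages rather than the more precise carbohydrate values quoted above; the serving composition is a made-up example.

```python
# Rough food-energy estimate from macronutrient content using the rounded
# general Atwater factors (kcal per gram). These are simplified averages;
# the text above quotes more precise per-gram values for carbohydrate types.
ATWATER_KCAL_PER_G = {"carbohydrate": 4.0, "protein": 4.0, "fat": 9.0}

def food_energy_kcal(grams_by_macronutrient):
    """Return total kilocalories for a dict like {'carbohydrate': 30, ...}."""
    return sum(ATWATER_KCAL_PER_G[name] * grams
               for name, grams in grams_by_macronutrient.items())

# Hypothetical serving: 30 g carbohydrate, 5 g protein, 2 g fat
print(food_energy_kcal({"carbohydrate": 30, "protein": 5, "fat": 2}))  # 158.0
```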
Organisms typically cannot metabolize all types of carbohydrate to yield energy. Glucose is a nearly universal and accessible source of energy. Many organisms also have the ability to metabolize other monosaccharides and disaccharides but glucose is often metabolized first. In Escherichia coli, for example, the lac operon will express enzymes for the digestion of lactose when it is present, but if both lactose and glucose are present, the lac operon is repressed, resulting in the glucose being used first (see: Diauxie). Polysaccharides are also common sources of energy. Many organisms can easily break down starches into glucose; most organisms, however, cannot metabolize cellulose or other polysaccharides such as chitin and arabinoxylans. These carbohydrate types can be metabolized by some bacteria and protists. Ruminants and termites, for example, use microorganisms to process cellulose, fermenting it to caloric short-chain fatty acids. Even though humans lack the enzymes to digest fiber, dietary fiber represents an important dietary element for humans. Fibers promote healthy digestion, help regulate postprandial glucose and insulin levels, reduce cholesterol levels, and promote satiety.
The Institute of Medicine recommends that American and Canadian adults get between 45 and 65% of dietary energy from whole-grain carbohydrates. The Food and Agriculture Organization and World Health Organization jointly recommend that national dietary guidelines set a goal of 55–75% of total energy from carbohydrates, but only 10% directly from sugars (their term for simple carbohydrates). A 2017 Cochrane Systematic Review concluded that there was insufficient evidence to support the claim that whole grain diets can affect cardiovascular disease.
=== Classification ===
The term complex carbohydrate was first used in the U.S. Senate Select Committee on Nutrition and Human Needs publication Dietary Goals for the United States (1977) where it was intended to distinguish sugars from other carbohydrates (which were perceived to be nutritionally superior). However, the report put "fruit, vegetables and whole-grains" in the complex carbohydrate column, despite the fact that these may contain sugars as well as polysaccharides. The standard usage, however, is to classify carbohydrates chemically: simple if they are sugars (monosaccharides and disaccharides) and complex if they are polysaccharides (or oligosaccharides). Carbohydrates are sometimes divided into "available carbohydrates", which are absorbed in the small intestine and "unavailable carbohydrates", which pass to the large intestine, where they are subject to fermentation by the gastrointestinal microbiota.
==== Glycemic index ====
The glycemic index (GI) and glycemic load concepts characterize the potential for carbohydrates in food to raise blood glucose compared to a reference food (generally pure glucose). Expressed numerically as GI, carbohydrate-containing foods can be grouped as high-GI (score more than 70), moderate-GI (56–69), or low-GI (less than 55) relative to pure glucose (GI=100). Consumption of carbohydrate-rich, high-GI foods causes an abrupt increase in blood glucose concentration that declines rapidly following the meal, whereas low-GI foods with lower carbohydrate content produces a lower blood glucose concentration that returns gradually after the meal.
Glycemic load is a measure that combines the quality of the carbohydrates in a food (its GI) with the amount of carbohydrate in a single serving of that food.
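A minimal sketch of the two quantities just described, assuming the GI cut-offs quoted above and the conventional glycemic-load formula (GI multiplied by grams of available carbohydrate per serving, divided by 100); the example food values are hypothetical.

```python
def gi_category(gi):
    """Classify a glycemic index value. Boundary handling follows the common
    convention (low <= 55, moderate 56-69, high >= 70), consistent with the
    ranges quoted in the text above."""
    if gi >= 70:
        return "high"
    if gi >= 56:
        return "moderate"
    return "low"

def glycemic_load(gi, carb_grams_per_serving):
    """Conventional glycemic load: GI scaled by available carbohydrate per serving."""
    return gi * carb_grams_per_serving / 100.0

# Hypothetical food: GI of 72 with 30 g available carbohydrate per serving
print(gi_category(72))        # 'high'
print(glycemic_load(72, 30))  # 21.6
```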
=== Health effects of dietary carbohydrate restriction ===
Low-carbohydrate diets may miss the health advantages – such as increased intake of dietary fiber and phytochemicals – afforded by high-quality plant foods such as legumes and pulses, whole grains, fruits, and vegetables. A meta-analysis of moderate quality listed halitosis, headache and constipation among the adverse effects of the diet.
Carbohydrate-restricted diets can be as effective as low-fat diets in helping achieve weight loss over the short term when overall calorie intake is reduced. An Endocrine Society scientific statement said that "when calorie intake is held constant [...] body-fat accumulation does not appear to be affected by even very pronounced changes in the amount of fat vs carbohydrate in the diet." In the long term, low-carbohydrate diets do not appear to confer a "metabolic advantage," and effective weight loss or maintenance depends on the level of calorie restriction, not the ratio of macronutrients in a diet. Diet advocates reason that carbohydrates cause undue fat accumulation by increasing blood insulin levels, but a more balanced diet that restricts refined carbohydrates can also reduce serum glucose and insulin levels and may also suppress lipogenesis and promote fat oxidation. However, as far as energy expenditure itself is concerned, the claim that low-carbohydrate diets have a "metabolic advantage" is not supported by clinical evidence. Further, it is not clear how low-carbohydrate dieting affects cardiovascular health, although two reviews showed that carbohydrate restriction may improve lipid markers of cardiovascular disease risk.
Carbohydrate-restricted diets are no more effective than a conventional healthy diet in preventing the onset of type 2 diabetes, but for people with type 2 diabetes, they are a viable option for losing weight or helping with glycemic control. There is limited evidence to support routine use of low-carbohydrate dieting in managing type 1 diabetes. The American Diabetes Association recommends that people with diabetes should adopt a generally healthy diet, rather than a diet focused on carbohydrate or other macronutrients.
An extreme form of low-carbohydrate diet – the ketogenic diet – is established as a medical diet for treating epilepsy. Through celebrity endorsement during the early 21st century, it became a fad diet as a means of weight loss, but with risks of undesirable side effects, such as low energy levels and increased hunger, insomnia, nausea, and gastrointestinal discomfort. The British Dietetic Association named it one of the "top 5 worst celeb diets to avoid in 2018".
== Sources ==
Most dietary carbohydrates contain glucose, either as their only building block (as in the polysaccharides starch and glycogen), or together with another monosaccharide (as in the hetero-polysaccharides sucrose and lactose). Unbound glucose is one of the main ingredients of honey. Glucose is extremely abundant and has been isolated from a variety of natural sources across the world, including male cones of the coniferous tree Wollemia nobilis in Rome, the roots of Ilex asprella plants in China, and straws from rice in California.
== Metabolism ==
Carbohydrate metabolism is the series of biochemical processes responsible for the formation, breakdown and interconversion of carbohydrates in living organisms.
The most important carbohydrate is glucose, a simple sugar (monosaccharide) that is metabolized by nearly all known organisms. Glucose and other carbohydrates are part of a wide variety of metabolic pathways across species: plants synthesize carbohydrates from carbon dioxide and water by photosynthesis storing the absorbed energy internally, often in the form of starch or lipids. Plant components are consumed by animals and fungi, and used as fuel for cellular respiration. Oxidation of one gram of carbohydrate yields approximately 16 kJ (4 kcal) of energy, while the oxidation of one gram of lipids yields about 38 kJ (9 kcal). The human body stores between 300 and 500 g of carbohydrates depending on body weight, with the skeletal muscle contributing to a large portion of the storage. Energy obtained from metabolism (e.g., oxidation of glucose) is usually stored temporarily within cells in the form of ATP. Organisms capable of anaerobic and aerobic respiration metabolize glucose and oxygen (aerobic) to release energy, with carbon dioxide and water as byproducts.
=== Catabolism ===
Catabolism is the metabolic reaction which cells undergo to break down larger molecules, extracting energy. There are two major metabolic pathways of monosaccharide catabolism: glycolysis and the citric acid cycle.
In glycolysis, oligo- and polysaccharides are cleaved first to smaller monosaccharides by enzymes called glycoside hydrolases. The monosaccharide units can then enter into monosaccharide catabolism. A 2 ATP investment is required in the early steps of glycolysis to phosphorylate glucose to glucose 6-phosphate (G6P) and fructose 6-phosphate (F6P) to fructose 1,6-bisphosphate (FBP), thereby pushing the reaction forward irreversibly. In some cases, as with humans, not all carbohydrate types are usable as the digestive and metabolic enzymes necessary are not present.
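For orientation, the standard textbook ATP accounting for one glucose molecule passing through glycolysis (the later, ATP-producing payoff phase is not detailed in this passage) gives a net yield of two ATP:

```latex
\mathrm{ATP}_{\text{net}} = \mathrm{ATP}_{\text{produced}} - \mathrm{ATP}_{\text{invested}} = 4 - 2 = 2 \quad \text{per glucose}
```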
== Carbohydrate chemistry ==
Carbohydrate chemistry is a large and economically important branch of organic chemistry. Some of the main organic reactions that involve carbohydrates are:
Amadori rearrangement
Carbohydrate acetalisation
Carbohydrate digestion
Cyanohydrin reaction
Koenigs–Knorr reaction
Lobry de Bruyn–Van Ekenstein transformation
Nef reaction
Wohl degradation
Tipson-Cohen reaction
Ferrier rearrangement
Ferrier II reaction
== Chemical synthesis ==
Carbohydrate synthesis is a sub-field of organic chemistry concerned specifically with the generation of natural and unnatural carbohydrate structures. This can include the synthesis of monosaccharide residues or structures containing more than one monosaccharide, known as oligosaccharides. Selective formation of glycosidic linkages and selective reactions of hydroxyl groups are very important, and the usage of protecting groups is extensive.
Common reactions for glycosidic bond formation are as follows:
Chemical glycosylation
Fischer glycosidation
Koenigs-Knorr reaction
Crich beta-mannosylation
While some common protection methods are as below:
Carbohydrate acetalisation
Trimethylsilyl
Benzyl ether
p-Methoxybenzyl ether
== See also ==
Bioplastic
Carbohydrate NMR
Gluconeogenesis – A process where glucose can be synthesized by non-carbohydrate sources.
Glycobiology
Glycogen
Glycoinformatics
Glycolipid
Glycome
Glycomics
Glycosyl
Macromolecule
Saccharic acid
== References ==
== Further reading ==
"Compolition of foods raw, processed, prepared" (PDF). United States Department of Agriculture. September 2015. Archived (PDF) from the original on October 31, 2016. Retrieved October 30, 2016.
== External links ==
Carbohydrates, including interactive models and animations (Requires MDL Chime)
IUPAC-IUBMB Joint Commission on Biochemical Nomenclature (JCBN): Carbohydrate Nomenclature
Carbohydrates detailed
Carbohydrates and Glycosylation – The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Functional Glycomics Gateway, a collaboration between the Consortium for Functional Glycomics and Nature Publishing Group | Wikipedia/Carbohydrate |
Protein purification is a series of processes intended to isolate one or a few proteins from a complex mixture, usually cells, tissues, or whole organisms. Protein purification is vital for the specification of the function, structure, and interactions of the protein of interest. The purification process may separate the protein and non-protein parts of the mixture, and finally separate the desired protein from all other proteins. Ideally, to study a protein of interest, it must be separated from other components of the cell so that contaminants will not interfere in the examination of the protein of interest's structure and function. Separation of one protein from all others is typically the most laborious aspect of protein purification. Separation steps usually exploit differences in protein size, physico-chemical properties, binding affinity, and biological activity. The pure result may be termed protein isolate.
== Purpose ==
The protein manufacturing cost remains high and there is a growing demand to develop cost efficient and rapid protein purification methods. Understanding the different protein purification methods and optimizing the downstream processing is critical to minimize production costs while maintaining the quality of acceptable standards of homogeneity. Protein purification is either preparative or analytical.
Preparative purifications aim to produce a relatively large quantity of purified proteins for subsequent use. Examples include the preparation of commercial products such as enzymes (e.g. lactase), nutritional proteins (e.g. soy protein isolate), and certain biopharmaceuticals (e.g. insulin). Several preparative purification steps are often deployed to remove by-products, such as host cell proteins, which pose a potential threat to the patient's health.
Analytical purification produces a relatively small amount of a protein for a variety of research or analytical purposes, including identification, quantification, and studies of the protein's structure, post-translational modifications, and function. Each step of a protein purification scheme is monitored and takes into consideration purification levels and yield. A high purification level with a poor yield leaves hardly any protein with which to experiment. On the other hand, a high yield with low purification levels leaves many contaminants (proteins other than the one of interest) which interfere with research purposes.
== Preliminary steps ==
=== Extraction ===
If the protein of interest is not secreted by the organism into the surrounding solution, the first step of each purification process is the disruption of the cells containing the protein. Depending on how fragile the protein is and how stable the cells are, one could, for instance, use one of the following methods: i) repeated freezing and thawing, ii) sonication, iii) homogenization by high pressure (French press), iv) homogenization by grinding (bead mill), and v) permeabilization by detergents (e.g. Triton X-100) and/or enzymes (e.g. lysozyme). Finally, the cell debris can be removed by differential centrifugation, in which the homogenate is centrifuged at low speed to pellet intact cells and nuclei, and the supernatant is then centrifuged again at greater force. This yields several fractions of decreasing density, and more discriminating purification techniques are then applied to the fraction of interest.
Also, proteases are released during cell lysis, which will start digesting the proteins in the solution. If the protein of interest is sensitive to proteolysis, it is recommended to proceed quickly, and to keep the extract cooled, to slow down the digestion. Alternatively, one or more protease inhibitors can be added to the lysis buffer immediately before cell disruption. Sometimes it is also necessary to add DNAse in order to reduce the viscosity of the cell lysate caused by a high DNA content.
=== Ultracentrifugation ===
Centrifugation is a process that uses centrifugal force to separate mixtures of particles of varying masses or densities suspended in a liquid. When a vessel (typically a tube or bottle) containing a mixture of proteins or other particulate matter, such as bacterial cells, is rotated at high speeds, the inertia of each particle yields a force in the direction of the particle's velocity that is proportional to its mass. The tendency of a given particle to move through the liquid because of this force is offset by the resistance the liquid exerts on the particle. The net effect of "spinning" the sample in a centrifuge is that massive, small, and dense particles move outward faster than less massive particles or particles with more "drag" in the liquid. When suspensions of particles are "spun" in a centrifuge, a "pellet" may form at the bottom of the vessel that is enriched for the most massive particles with low drag in the liquid.
Non-compacted particles remain mostly in the liquid called "supernatant" and can be removed from the vessel thereby separating the supernatant from the pellet. The rate of centrifugation is determined by the angular acceleration applied to the sample, typically measured in comparison to the g-force. If samples are centrifuged long enough, the particles in the vessel will reach equilibrium wherein the particles accumulate specifically at a point in the vessel where their buoyant density is balanced with centrifugal force. Such an "equilibrium" centrifugation can allow extensive purification of a given particle.
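Since the rate of centrifugation is usually quoted as a multiple of g (relative centrifugal force, RCF), a small sketch of the standard conversion from rotor speed and radius may be useful; the rotor values in the example are hypothetical.

```python
import math

def relative_centrifugal_force(rpm, radius_cm):
    """Relative centrifugal force (in multiples of g) for a sample spinning at
    `rpm` revolutions per minute at `radius_cm` centimetres from the rotor axis.
    Uses RCF = omega^2 * r / g, with omega the angular velocity in rad/s."""
    omega = 2.0 * math.pi * rpm / 60.0            # angular velocity, rad/s
    return omega ** 2 * (radius_cm / 100.0) / 9.80665

# Hypothetical rotor: 10,000 rpm with the sample 8 cm from the axis
print(round(relative_centrifugal_force(10_000, 8.0)))  # ~8946 (x g)
```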
In sucrose gradient centrifugation, a linear concentration gradient of sugar (typically sucrose, glycerol, or a silica-based density gradient medium such as Percoll) is generated in a tube such that the highest concentration is at the bottom and the lowest at the top. A protein sample is then layered on top of the gradient and spun at high speeds in an ultracentrifuge. This causes heavy macromolecules to migrate towards the bottom of the tube faster than lighter material. During centrifugation in the absence of sucrose, as particles move farther and farther from the center of rotation, they experience more and more centrifugal force (the further they move, the faster they move). The problem with this is that the useful separation range within the vessel is restricted to a small observable window. Spinning a sample twice as long does not mean the particle of interest will go twice as far; in fact, it will go significantly further. However, when the proteins are moving through a sucrose gradient, they encounter liquid of increasing density and viscosity. A properly designed sucrose gradient will counteract the increasing centrifugal force so the particles move in close proportion to the time they have been in the centrifugal field. Separations performed on such gradients are referred to as "rate zonal" centrifugations. After separating the protein/particles, the gradient is then fractionated and collected. In biochemistry, ultracentrifugation is valuable for separating biomolecules and analyzing their physical properties.
== Purification strategies ==
The choice of starting material is key to the design of a purification process. In a plant or animal, a particular protein usually is not distributed homogeneously throughout the body; different organs or tissues have higher or lower concentrations of the protein. The use of only the tissues or organs with the highest concentration decreases the volumes needed to produce a given amount of purified protein. If the protein is present in low abundance, or if it has a high value, scientists may use recombinant DNA technology to develop cells that will produce large quantities of the desired protein (this is known as an expression system). Recombinant expression allows the protein to be tagged, e.g. by a His-tag or Strep-tag to facilitate purification, reducing the number of purification steps required.
Analytical purification generally utilizes three properties to separate proteins. First, proteins may be purified according to their isoelectric points by running them through a pH-graded gel or an ion exchange column. Second, proteins can be separated according to their size or molecular weight via size exclusion chromatography or by SDS-PAGE (sodium dodecyl sulfate-polyacrylamide gel electrophoresis) analysis. Proteins are often purified by using 2D-PAGE and are then analysed by peptide mass fingerprinting to establish the protein identity. This is very useful for scientific purposes and the detection limits for protein are nowadays very low and nanogram amounts of protein are sufficient for their analysis. Thirdly, proteins may be separated by polarity/hydrophobicity via high-performance liquid chromatography or reversed-phase chromatography.
Usually, a protein purification protocol contains one or more chromatographic steps. The basic procedure in chromatography is to flow the solution containing the protein through a column packed with various materials. Different proteins interact differently with the column material, and can thus be separated by the time required to pass the column, or the conditions required to elute the protein from the column. Proteins are typically detected as they are coming off the column by their absorbance at 280 nm. Many different chromatographic methods exist:
=== Precipitation and differential solubilization ===
Most proteins require some salt to dissolve in water, a process called salting in. As the salt concentration is increased, proteins can precipitate, a process called salting out which involves changing protein solubility. For example, in bulk protein purification, a common first step to isolate proteins is precipitation with ammonium sulfate (NH4)2SO4. This is performed by adding increasing amounts of ammonium sulfate and collecting the different fractions of precipitated protein. Subsequently, ammonium sulfate can be removed using dialysis (separating proteins from small molecules through a semipermeable membrane). During the ammonium sulfate precipitation step, hydrophobic groups present on the proteins are exposed to the atmosphere, attracting other hydrophobic groups; the result is the formation of an aggregate of hydrophobic components. In this case, the protein precipitate will typically be visible to the naked eye. One advantage of this method is that it can be performed inexpensively, even with very large volumes.
The first proteins to be purified are water-soluble proteins. Purification of integral membrane proteins requires disruption of the cell membrane in order to isolate any one particular protein from others that are in the same membrane compartment. Sometimes a particular membrane fraction can be isolated first, such as isolating mitochondria from cells before purifying a protein located in a mitochondrial membrane. A detergent such as sodium dodecyl sulfate (SDS) can be used to dissolve cell membranes and keep membrane proteins in solution during purification; however, because SDS causes denaturation, milder detergents such as Triton X-100 or CHAPS can be used to retain the protein's native conformation during complete purification.
=== Size exclusion (gel-filtration chromatography) ===
Chromatography through porous gels can be used to separate proteins under native or denaturing conditions. This more discriminating separation technique is known as size exclusion chromatography. The principle is that smaller molecules have to traverse a larger volume in a porous matrix. Consequently, proteins of a certain range in size will require a variable volume of eluent (solvent) before being collected at the other end of the column of gel. Larger molecules (or proteins) will travel through less volume and elute before smaller molecules.
In the context of protein purification, the eluent is usually pooled in different test tubes. All test tubes containing no measurable trace of the protein to purify are discarded. The remaining solution is thus made of the protein to purify and any other similarly-sized proteins.
=== Separation based on charge (ion-exchange chromatography) ===
One chromatography technique based on molecular properties is usually not sufficient in obtaining a protein of high purity. In addition to size, ion exchange chromatography separates compounds according to the nature and degree of their ionic charge. The column to be used is selected according to its type and strength of charge. Anion exchange resins have a positive charge and are used to retain and separate negatively charged compounds (anions), while cation exchange resins have a negative charge and are used to separate positively charged molecules (cations).
Before the separation begins a buffer is pumped through the column to equilibrate the opposing charged ions. Upon injection of the sample, solute molecules will exchange with the buffer ions as each competes for the binding sites on the resin. The length of retention for each solute depends upon the strength of its charge. The most weakly charged compounds will elute first, followed by those with successively stronger charges. Because of the nature of the separating mechanism, pH, buffer type, buffer concentration, and temperature all play important roles in controlling the separation.
Ion exchange chromatography is a very powerful tool for use in protein purification and is frequently used in both analytical and preparative separations. It is especially useful when purifying nucleic-acid binding proteins, where separation of the protein from the bound nucleic acid is required to obtain a pure sample devoid of nucleic acids co-purified from the expression system or the native source.
=== Free-flow-electrophoresis ===
Free-flow electrophoresis (FFE) is a carrier-free electrophoresis technique that allows preparative protein separation in a laminar buffer stream by using an orthogonal electric field. By making use of a pH gradient, which can for example be induced by ampholytes, this technique allows protein isoforms to be separated at a resolution of < 0.02 delta-pI.
=== Separation based on hydrophobicity (hydrophobic interaction chromatography) ===
HIC media is amphiphilic, with both hydrophobic and hydrophilic regions, allowing for the separation of proteins based on their surface hydrophobicity. Target proteins and their product aggregate species tend to have different hydrophobic properties and removing them via HIC further purifies the protein of interest. Additionally, the environment used typically employs less harsh denaturing conditions than other chromatography techniques, thus helping to preserve the protein of interest in its native and functional state. In pure water, the interactions between the resin and the hydrophobic regions of protein would be very weak, but this interaction is enhanced by applying a protein sample to HIC resin in a high ionic strength buffer. The ionic strength of the buffer is then reduced to elute proteins in order of decreasing hydrophobicity.
=== Affinity chromatography ===
Affinity Chromatography is another powerful separation technique that is highly selective for the protein of interest based upon molecular conformation, which frequently utilizes application specific resins. These resins have ligands attached to their surfaces which are specific for the compounds to be separated. Most frequently, these ligands function in a fashion similar to that of antibody-antigen interactions. This "lock and key" fit between the ligand and its target compound makes it highly specific, frequently generating a single peak, while all else in the sample is unretained.
Many membrane proteins are glycoproteins and can be purified by lectin affinity chromatography. Detergent-solubilized proteins can be allowed to bind to a chromatography resin that has been modified to have a covalently attached lectin. Proteins that do not bind to the lectin are washed away and then specifically bound glycoproteins can be eluted by adding a high concentration of a sugar that competes with the bound glycoproteins at the lectin binding site. Some lectins have high affinity binding to oligosaccharides of glycoproteins that is hard to compete with sugars, and bound glycoproteins need to be released by denaturing the lectin.
=== Immunoaffinity chromatography ===
Immunoaffinity chromatography uses the specific binding of an antibody-antigen to selectively purify the target protein. The procedure involves immobilizing a protein to a solid substrate (e.g. a porous bead or a membrane), which then selectively binds the target, while everything else flows through. The target protein can be eluted by changing the pH or the salinity. The immobilized ligand can be an antibody (such as immunoglobulin G) or it can be a protein (such as protein A). Because this method does not involve engineering in a tag, it can be used for proteins from natural sources.
=== HPLC ===
High-performance liquid chromatography or high-pressure liquid chromatography is a form of chromatography applying high pressure to drive the solutes through the column faster. This means that the diffusion is limited and the resolution is improved. The most common form is "reversed phase" HPLC, where the column material is hydrophobic. The proteins are eluted by a gradient of increasing amounts of an organic solvent, such as acetonitrile. The proteins elute according to their hydrophobicity. After purification by HPLC the protein is in a solution that only contains volatile compounds, and can easily be lyophilized. HPLC purification frequently results in denaturation of the purified proteins and is thus not applicable to proteins that do not spontaneously refold.
=== Purification of a tagged protein ===
Another way to tag proteins is to engineer an antigen peptide tag onto the protein, and then purify the protein on a column or by incubating with a loose resin that is coated with an immobilized antibody. This particular procedure is known as immunoprecipitation. Immunoprecipitation is capable of generating an extremely specific interaction which usually results in binding only the desired protein. The purified tagged proteins can then easily be separated from the other proteins in solution and later eluted back into clean solution.
When the tags are not needed anymore, they can be cleaved off by a protease. This often involves engineering a protease cleavage site between the tag and the protein.
Self-cleaving tags eliminate the need for proteases to separate the tag from the target protein during the purification process (e.g. iCapTag™). The main component of the tag is an intein, which cleaves itself off after a simple pH change. The tagless, pure target protein is then released into the elution buffer.
== Concentration of the purified protein ==
At the end of a protein purification, the protein often has to be concentrated. Different methods exist.
=== Lyophilization ===
If the solution doesn't contain any other soluble component than the protein in question the protein can be lyophilized (dried). This is commonly done after an HPLC run. This simply removes all volatile components, leaving the proteins behind.
=== Ultrafiltration ===
Ultrafiltration concentrates a protein solution using selective permeable membranes. The function of the membrane is to let the water and small molecules pass through while retaining the protein. The solution is forced against the membrane by mechanical pump, gas pressure, or centrifugation.
== Evaluating purification yield ==
The most general method to monitor the purification process is by running a SDS-PAGE of the different steps. This method only gives a rough measure of the amounts of different proteins in the mixture, and it is not able to distinguish between proteins with similar apparent molecular weight.
If the protein has a distinguishing spectroscopic feature or an enzymatic activity, this property can be used to detect and quantify the specific protein, and thus to select the fractions of the separation, that contains the protein. If antibodies against the protein are available then western blotting and ELISA can specifically detect and quantify the amount of desired protein. Some proteins function as receptors and can be detected during purification steps by a ligand binding assay, often using a radioactive ligand.
In order to evaluate the process of multistep purification, the amount of the specific protein has to be compared to the amount of total protein. The latter can be determined by the Bradford total protein assay or by absorbance of light at 280 nm, however some reagents used during the purification process may interfere with the quantification. For example, imidazole (commonly used for purification of polyhistidine-tagged recombinant proteins) is an amino acid analogue and at low concentrations will interfere with the bicinchoninic acid (BCA) assay for total protein quantification. Impurities in low-grade imidazole will also absorb at 280 nm, resulting in an inaccurate reading of protein concentration from UV absorbance.
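As a sketch of the absorbance-based estimate mentioned above, the Beer-Lambert law (A = ε·c·l) converts an A280 reading into a concentration; the extinction coefficient, molecular weight, and absorbance below are hypothetical example values, since ε280 depends on each protein's tryptophan, tyrosine, and cystine content.

```python
def protein_conc_mg_per_ml(a280, epsilon_280, mol_weight_da, path_cm=1.0):
    """Estimate protein concentration from absorbance at 280 nm via the
    Beer-Lambert law A = epsilon * c * l (epsilon in M^-1 cm^-1, c in mol/L),
    then convert mol/L to mg/mL using the molecular weight in daltons.
    Absorbing impurities such as low-grade imidazole will inflate the result."""
    molar_conc = a280 / (epsilon_280 * path_cm)   # mol/L
    return molar_conc * mol_weight_da             # g/L, numerically equal to mg/mL

# Hypothetical protein: epsilon280 = 43,824 M^-1 cm^-1, MW = 66,400 Da, A280 = 0.55
print(round(protein_conc_mg_per_ml(0.55, 43_824, 66_400), 3))  # ~0.833 mg/mL
```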
Another method to be considered is surface plasmon resonance (SPR). SPR can detect binding of label free molecules on the surface of a chip. If the desired protein is an antibody, binding can be translated directly to the activity of the protein. One can express the active concentration of the protein as the percent of the total protein. SPR can be a powerful method for quickly determining protein activity and overall yield. It is a powerful technology that requires an instrument to perform.
== Analytical ==
=== Denaturing-condition electrophoresis ===
Gel electrophoresis is a common laboratory technique that can be used both as a preparative and analytical method. The principle of electrophoresis relies on the movement of a charged ion in an electric field. In practice, the proteins are denatured in a solution containing a detergent (SDS). In these conditions, the proteins are unfolded and coated with negatively charged detergent molecules. The proteins in SDS-PAGE are separated on the sole basis of their size.
In analytical methods, the proteins migrate as bands based on size. Each band can be detected using stains such as Coomassie blue dye or silver stain. Preparative methods to purify large amounts of protein require the extraction of the protein from the electrophoretic gel. This extraction may involve excision of the gel containing a band, or eluting the band directly off the gel as it runs off the end of the gel.
In the context of a purification strategy, denaturing-condition electrophoresis provides improved resolution over size exclusion chromatography, but does not scale to large quantities of protein as well as chromatography columns do.
=== Non-denaturing-condition electrophoresis ===
A non-denaturing electrophoretic procedure for isolating bioactive metalloproteins in complex protein mixtures is preparative native PAGE. The intactness or the structural integrity of the isolated protein has to be confirmed by an independent method.
== See also ==
Salting out
Protein tag
Protein production
Host cell protein
== References ==
== External links ==
Protein purification ebook
Protein purification facility
Strategies for Protein Purification Handbook | Wikipedia/Protein_purification |
Nutritional science (also nutrition science, sometimes short nutrition, dated trophology) is the science that studies the physiological process of nutrition (primarily human nutrition), interpreting the nutrients and other substances in food in relation to maintenance, growth, reproduction, health and disease of an organism.
== History ==
Before nutritional science emerged as an independent discipline, it was mainly chemists who worked in this area, examining the chemical composition of food. Macronutrients, especially protein, fat and carbohydrates, have been the focus of the study of (human) nutrition since the 19th century. Until the discovery of vitamins and vital substances, the quality of nutrition was measured exclusively by the intake of nutritional energy.
The early years of the 20th century were summarized by Kenneth John Carpenter in his Short History of Nutritional Science as "the vitamin era". The first vitamin was isolated and chemically defined in 1926 (thiamine). The isolation of vitamin C followed in 1932, and its effect on health, protection against scurvy, was scientifically documented for the first time.
At the instigation of the British physiologist John Yudkin at the University of London, the degrees Bachelor of Science and Master of Science in nutritional science were established in the 1950s.
Nutritional science as a separate discipline was institutionalized in Germany in November 1956 when Hans-Diedrich Cremer was appointed to the chair for human nutrition in Giessen. The Institute for Nutritional Science was initially located at the Academy for Medical Research and Further Education, which was transferred to the Faculty of Human Medicine when the Justus Liebig University was reopened. Over time, seven other universities with similar institutions followed in Germany.
From the 1950s to 1970s, a focus of nutritional science was on dietary fat and sugar. From the 1970s to the 1990s, attention was put on diet-related chronic diseases and supplementation.
== Distinction ==
Nutritional science is often combined with food science (nutrition and food science).
Trophology is a term used globally for nutritional science in other languages, in English the term is dated. Today, it is partly still used for the approach of food combining that advocates specific combinations (or advises against certain combinations) of food. Ecotrophology is a branch of nutritional science concerned with everyday practice and elements from household management that is primarily studied in Germany.
== Academic studies and education ==
Nutritional science as a subject is taught at universities around the world. At the beginning of the programs, the basic subjects of biology, chemistry, mathematics and physics are part of the curriculum. Later, the focus shifts to inorganic chemistry, functional biology, biochemistry and genetics. At most universities, students can specialize in certain areas; these involve subjects such as special food chemistry, nutritional physiology, nutritional epidemiology, food law and nutritional medicine. Students who are more interested in the economic aspect usually specialize in the field of food economics. Laboratory exercises are also on the curriculum at most universities.
== Notable nutritional scientists ==
John Yudkin (1910–1995), who established the first degree in nutritional science in any European university
Hans Adalbert Schweigart (1900–1972), the creator of the term vital substances
Hans Konrad Biesalski (born 1949)
Hanni Rützler (born 1962)
== Scientific journals ==
Nutrition
Journal of Nutritional Science, published on behalf of The Nutrition Society
Journal of Nutritional Science and Vitaminology, edited by The Vitamin Society of Japan and Japan Society of Nutrition and Food Science, published by the Center for Academic Publications Japan
Food & Nutrition Research, published by the Swedish Nutrition Foundation
European Journal of Nutrition, published by Springer Science+Business Media in Germany
Journal of the Academy of Nutrition and Dietetics
== References ==
== External links ==
Society of Nutrition and Food Science | Wikipedia/Nutritional_science |
Electrosurgery is the application of a high-frequency (radio frequency), alternating-polarity electrical current to biological tissue as a means to cut, coagulate, desiccate, or fulgurate tissue. (These terms are used in specific ways for this methodology—see below.) Its benefits include the ability to make precise cuts with limited blood loss. Electrosurgical devices are frequently used during surgical operations, helping to prevent blood loss in hospital operating rooms or in outpatient procedures.
In electrosurgical procedures, the tissue is heated by an electric current. Although electrical devices that create a heated probe may be used for the cauterization of tissue in some applications, electrosurgery refers to a different method than electrocautery. Electrocautery uses heat conduction from a probe heated to a high temperature by a direct electrical current (much in the manner of a soldering iron). This may be accomplished by direct current from dry-cells in a penlight-type device.
Electrosurgery, by contrast, uses radio frequency (RF) alternating current to heat the tissue through RF-induced intracellular oscillation of ionized molecules, which results in an elevation of intracellular temperature. When the intracellular temperature reaches 60 degrees C, instantaneous cell death occurs. If tissue is heated to 60–99 degrees C, the simultaneous processes of tissue desiccation (dehydration) and protein coagulation occur. If the intracellular temperature rapidly reaches 100 degrees C, the intracellular contents undergo a liquid-to-gas conversion, massive volumetric expansion, and resulting explosive vaporization.
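As a rough illustration only, the temperature thresholds described above can be expressed as a simple classification. The following Python sketch is not from any source; the function name and labels are invented, and it simply encodes the cut-off values quoted in the preceding paragraph.

def tissue_effect(intracellular_temp_c):
    # Thresholds taken from the paragraph above; labels are illustrative only.
    if intracellular_temp_c >= 100:
        return "explosive vaporization"        # liquid-to-gas conversion, volumetric expansion
    if intracellular_temp_c >= 60:
        return "desiccation and coagulation"   # cell death, dehydration, protein coagulation
    return "sub-threshold heating"             # below the effects described above

print(tissue_effect(70))   # -> "desiccation and coagulation"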
Appropriately applied with electrosurgical forceps, desiccation and coagulation result in the occlusion of blood vessels and halting of bleeding. While the process is technically a process of electrocoagulation, the term "electrocautery" is sometimes loosely, nontechnically and incorrectly used to describe it. The process of vaporization can be used to ablate tissue targets, or, by linear extension, used to transect or cut tissue. While the processes of vaporization/cutting and desiccation/coagulation are best accomplished with relatively low voltage, continuous or near-continuous waveforms, the process of fulguration is performed with relatively high voltage modulated waveforms. Fulguration is a superficial type of coagulation, typically created by arcing modulated high voltage current to tissue that is rapidly desiccated and coagulated. The continued application of current to this high impedance tissue results in resistive heating and the achievement of very high temperatures—enough to cause breakdown of the organic molecules to sugars and even carbon, thus the dark textures from carbonization of tissue.
Diathermy is used by some as a synonym for electrosurgery but in other contexts diathermy means dielectric heating, produced by rotation of molecular dipoles in a high frequency electromagnetic field. This effect is most widely used in microwave ovens or some tissue ablative devices which operate at gigahertz frequencies. Lower frequencies, allowing for deeper penetration, are used in industrial processes.
RF electrosurgery is commonly used in virtually all surgical disciplines including dermatological, gynecological, cardiac, plastic, ocular, spine, ENT, maxillofacial, orthopedic, urological, neuro- and general surgical procedures as well as certain dental procedures.
RF electrosurgery is performed using an RF electrosurgical generator (also referred to as an electrosurgical unit or ESU) and a handpiece including one or two electrodes—a monopolar or bipolar instrument. All RF electrosurgery is bipolar, so the difference between monopolar and bipolar instruments is that monopolar instruments comprise only one electrode while bipolar instruments include both electrodes in their design.
The monopolar instrument, called an "active electrode" when energized, requires the application of another monopolar instrument, called a "dispersive electrode", elsewhere on the patient's body; the dispersive electrode functions to 'defocus' or disperse the RF current, thereby preventing thermal injury to the underlying tissue. This dispersive electrode is frequently and mistakenly called a "ground pad" or "neutral electrode". However, virtually all currently available RF electrosurgical systems are designed to function with isolated circuits—the dispersive electrode is directly attached to the ESU, not to "ground". The same electrical current is transmitted across both the dispersive electrode and the active electrode, so it is not "neutral". The term "return electrode" is also technically incorrect since alternating electrical currents involve alternating polarity, a circumstance that results in bidirectional flow across both electrodes in the circuit.
Bipolar instruments generally are designed with two "active" electrodes, such as a forceps for sealing blood vessels. However, the bipolar instrument can be designed such that one electrode is dispersive. The main advantage of bipolar instruments is that the only part of the patient included in the circuit is that which is between the two electrodes, a circumstance that eliminates the risk of current diversion and related adverse events. However, except for those devices designed to function in fluid, it is difficult to vaporize or cut tissue with bipolar instruments.
== Electrical stimulation of neural and muscle cells ==
Neural and muscle cells are electrically excitable, i.e. they can be stimulated by electric current. In human patients such stimulation may cause acute pain, muscle spasms, and even cardiac arrest. Sensitivity of nerve and muscle cells to electric fields is due to the voltage-gated ion channels present in their cell membranes. The stimulation threshold does not vary much at low frequencies, remaining at a constant level known as the rheobase. However, the threshold starts increasing as the duration of a pulse (or a cycle) drops below a characteristic duration known as the chronaxie. Typically, the chronaxie of neural cells is in the range of 0.1–10 ms, so the sensitivity to electrical stimulation (the inverse of the stimulation threshold) decreases with increasing frequency in the kHz range and above. (Note that the frequency of an alternating electric current is the inverse of the duration of a single cycle.)
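The strength-duration behaviour described above is often summarized by the classical Lapicque (Weiss) relation; this is a standard textbook expression rather than a formula from the text, and is included here only as a sketch:

I_{\mathrm{th}}(d) = I_{\mathrm{rh}}\left(1 + \frac{t_{\mathrm{ch}}}{d}\right)

where I_th is the threshold current for a stimulus of duration d, I_rh is the rheobase, and t_ch is the chronaxie. At d = t_ch the threshold is twice the rheobase, and for the very short cycle durations of RF current (d much smaller than t_ch) the threshold becomes very large, consistent with the reduced neuromuscular stimulation at radio frequencies.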
To minimize the effects of muscle and neural stimulation, electrosurgical equipment typically operates in the radio frequency (RF) range of 100 kHz to 5 MHz.
Operation at higher frequencies also helps to minimize the amount of hydrogen and oxygen generated by electrolysis of water. This is an especially important consideration for applications in liquid media in closed compartments, where the generation of gas bubbles may interfere with the procedure. For example, bubbles produced during an operation inside an eye may obscure the field of view.
== Common electrode configurations for devices with isolated circuits ==
There are several commonly used electrode configurations or circuit topologies:
With "bipolar" instruments the current is applied to the patient using a pair of similarly-sized electrodes. For example, special forceps, with one tine connected to one pole of the RF generator and the other tine connected to the other pole of the generator. When a piece of tissue is held by the forceps, the RF alternating polarity electrical current oscillates between the two forceps tines, heating the intervening tissue by the previously described synchronous oscillation of intracellular ions.
In the monopolar configuration the patient is attached to the dispersive electrode, a relatively large metal plate or flexible metalized plastic pad, which is connected to the RF generator or electrosurgical unit (ESU). The surgeon uses a pointed or blade-shaped electrode called the "active electrode" to make contact with the tissue and exert a tissue effect: vaporization and its linear propagation, called electrosurgical cutting, or the combination of desiccation and protein coagulation used to seal blood vessels for the purpose of hemostasis. The electric current oscillates between the active electrode and the dispersive electrode with the entire patient interposed between the two. Since the RF current spreads out with distance from the active electrode, the current density decreases rapidly (quadratically). Since the rate of tissue heating is proportional to the square of the current density, the heating occurs in a very localized region, only near the portion of the electrode, usually the tip, near to or in contact with the target tissue.
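As a hedged, idealized sketch (not taken from the text): if the tip of the active electrode is approximated as a small hemispherical contact delivering a total current I into homogeneous tissue of conductivity σ, the current density J and the volumetric heating rate p at a distance r from the tip scale roughly as

J(r) \approx \frac{I}{2\pi r^{2}}, \qquad p(r) = \frac{J(r)^{2}}{\sigma} \propto \frac{1}{r^{4}}

so the current density falls off quadratically with distance, and the heating, proportional to the square of the current density, is confined to the immediate vicinity of the tip, as described above.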
On an extremity such as a finger, there is limited cross-sectional area to disperse the current, a circumstance which might result in higher current density and some heating throughout the volume of the extremity.
Another bipolar instrument design places both electrodes on the same instrument, but with the dispersive electrode much larger than the active one. Since the current density is higher in front of the smaller electrode, the heating and associated tissue effects take place only (or primarily) in front of the active electrode, and the exact position of the dispersive electrode on the tissue is not critical. Sometimes such a configuration is called sesquipolar, even though the origin of this term in Latin (sesqui) means a ratio of 1.5.
=== Dedicated non-grounded machines without a dispersive electrode ===
Relatively low-powered, high-frequency electrosurgery can be performed on conscious outpatients with non-grounded machines that do not use a dispersive electrode. Operating at low currents with no dispersive electrode is possible because, at the medium RF frequencies (usually 100–500 kHz) that the machines generate, the self-capacitance of the patient's body (which is between the patient's body and the machine's ground) is large enough to allow the resulting displacement current to act as a virtual "circuit completion path."
One example of such a machine is called a hyfrecator. This term began in 1940 as a Birtcher Corporation brand name Hyfrecator for "High Frequency Eradicator", but now serves generically to describe a general class of single-electrode, non-isolated (earth-referenced) low-powered electrosurgical machines intended mainly for office use. An accidental circuit completion path through an earth-ground creates the danger of a burn at a site far away from the probe electrode, and for this reason single-electrode devices are used only on conscious patients who would be aware of such complications, and only on carefully insulated tables.
In such a setting, hyfrecators are not used to cut tissue, but to destroy relatively small lesions, and also to stop bleeding in surgical incisions made by blade instruments under local anesthesia.
== Electrosurgical modalities ==
In cutting mode the electrode touches the tissue, and a sufficiently high power density is applied to vaporize its water content. Since water vapor is not conductive under normal circumstances, electric current cannot flow through the vapor layer. Energy delivery beyond the vaporization threshold can continue if a sufficiently high voltage is applied (> +/-200 V) to ionize the vapor and convert it into a conductive plasma. Vapor and fragments of the overheated tissue are ejected, forming a crater. Electrode surfaces intended to be used for cutting often feature a fine wire or wire loop, as opposed to a flatter blade with a rounded surface.
Coagulation is performed using waveforms with lower average power, generating heat insufficient for explosive vaporization, but producing a thermal coagulum instead.
Electrosurgical desiccation occurs when the electrode touches tissue open to air, and the amount of generated heat is lower than that required for cutting. The tissue surface and some of the tissue deep to the probe dries out and forms a coagulum (a dry patch of dead tissue). This technique may be used for treating nodules under the skin where minimal damage to the skin surface is desired.
In fulguration mode, the electrode is held away from the tissue, so that when the air gap between the electrode and the tissue is ionized, an electric arc discharge develops. In this approach, the burning of the tissue is more superficial, because the current is spread over a tissue area larger than the tip of the electrode. Under these conditions, superficial skin charring or carbonization is seen over a wider area than when operating in contact with the probe, and this technique is therefore used for very superficial or protrusive lesions such as skin tags. Ionization of an air gap requires voltages in the kV range.
Besides the thermal effects in tissue, the electric field can produce pores in the cellular membranes – a phenomenon called electroporation. This effect may affect cells beyond the range of thermal damage.
=== Wet field electrosurgery ===
There are wet-field and dry-field electrosurgical devices. Wet-field devices operate in a saline solution or in an open wound. Heating results from an alternating current that passes between two electrodes. Heating is usually greatest where the current density is highest. Therefore, it is usually the smallest or sharpest electrode that generates the most heat.
Cut/Coag: Most wet field electrosurgical systems operate in two modes: "Cut" causes a small area of tissue to be vaporized, and "Coag" causes the tissue to "dry" (in the sense of bleeding being stopped). "Dried" tissues are killed (and will later slough or be replaced by fibrotic tissue) but they are temporarily physically intact after electrosurgical application. The depth of tissue death is typically a few millimeters near the contact of the electrode.
Cut: If the voltage level is high enough, the heat generated can create a vapour pocket. The vapour pocket typically reaches temperatures of approximately 400 degrees Celsius, which vaporizes and explodes a small section of soft tissue, resulting in an incision.
Coag: When the system is operating in "coag mode", the voltage output is usually higher than in cut mode. Tissue remains grossly intact, but cells are destroyed at the point of contact, and smaller vessels are destroyed and sealed, stopping capillary and small-arterial bleeding.
== Electrosurgical waveforms ==
Different waveforms can be used for different electrosurgical procedures. For cutting, a continuous single frequency sine wave is often employed. Rapid tissue heating leads to explosive vaporization of interstitial fluid. If the voltage is sufficiently high (> 400 V peak-to-peak) the vapor sheath is ionized, forming conductive plasma. Electric current continues to flow from the metal electrode through the ionized gas into the tissue. Rapid overheating of tissue results in its vaporization, fragmentation and ejection of fragments, allowing for tissue cutting. In applications of a continuous wave the heat diffusion typically leads to formation of a significant thermal damage zone at the edges of the lesion. Open circuit voltage in electrosurgical waveforms is typically in the range of 300–10,000 V peak-to-peak.
Higher precision can be achieved with pulsed waveforms. Using bursts of several tens of microseconds in duration the tissue can be cut, while the size of the heat diffusion zone does not exceed the cellular scale. Heat accumulation during repetitive application of bursts can also be avoided if sufficient delay is provided between the bursts, allowing the tissue to cool down.
The proportion of ON time to OFF time can be varied to allow control of the heating rate. A related parameter, the duty cycle, is defined as the ratio of the ON time to the period (the time of a single ON-OFF cycle). In the terminology of electrical engineering, the process of altering this ratio to achieve a desired average amplitude, instead of altering the amplitude directly, is called pulse-width modulation.
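In symbols (a standard definition, not specific to any particular generator), with t_on the ON time and T the period of one ON-OFF cycle,

D = \frac{t_{\mathrm{on}}}{T}, \qquad P_{\mathrm{avg}} \approx D \cdot P_{\mathrm{on}}

so, for a fixed peak power P_on during the ON phase, reducing the duty cycle D reduces the average power delivered to the tissue and hence the heating rate.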
For coagulation, the average power is typically reduced below the threshold of cutting. Typically, the sine wave is turned on and off in rapid succession. The overall effect is a slower heating process, which causes the tissue to coagulate. In simple coagulation/cutting mode machines, the lower duty cycle typical of coagulation mode is usually heard by the ear as a lower-frequency, rougher tone than the higher-frequency tone typical of cutting mode with the same equipment.
Many modern electrosurgical generators provide sophisticated wave forms with power adjusted in real time, based on changes of the tissue impedance.
== Prevention of unintended harm ==
Burns
For the high-power surgical uses during anesthesia, the monopolar modality relies on good electrical contact between a large area of the body (typically at least the entire back of the patient) and the return electrode or pad (also known as the dispersive pad or patient plate). Severe (third-degree) burns can occur if the contact with the return electrode is insufficient, or when a patient comes into contact with metal objects serving as an unintended (capacitive) leakage path to earth/ground.
To prevent unintended burns, the skin is cleaned and a conductive gel is used to enhance the contact with the return electrode. Proper electrical grounding practices must be followed in the electrical wiring of the building. It is also recommended to use a modern ElectroSurgical Unit that includes a return electrode monitoring system that continuously tests for reliable and safe patient contact. These systems interrogate the impedance of a split or dual-pad return electrode and will alarm out, disabling further generator output in case of fault. Prior generators relied on single pad return electrodes and thus had no means of verifying safe patient connection. Return electrodes should always have full contact with the skin and be placed on the same side of the body and close to the body part where the procedure is occurring.
If there is any metal in the body of the patient, the return electrode is placed on the opposite side of the body from the metal and between the metal and the operation site. This prevents current from passing selectively through the metal on the way to the return electrode. For example, for a patient who has had a right-sided hip replacement and who is scheduled for surgery, the return electrode is placed on the left side of the body on the lateral side of the lower abdomen, which places the return electrode between the location of the metal and the surgical site and on the opposite side from the metal. If there is metal on both sides of the body, the return electrode is placed between the metal and the procedure site when possible. Common return electrode locations include the lateral portions of the outer thighs, the abdomen, the back, or the shoulder blades.
The use of the bipolar option does not require the placement of a return electrode because the current only passes between tines of the forceps or other bipolar output device.
Electrosurgery should only be performed by a physician who has received specific training in this field and who is familiar with the techniques used to prevent burns.
Smoke toxicity
Concerns have also been raised regarding the toxicity of surgical smoke produced by electrosurgery. This smoke has been shown to contain various volatile organic compounds (VOCs), including formaldehyde, which may harm the patient, surgeon or operating theatre staff if inhaled.
Fire hazard
Electrical knives should not be used around flammable substances, like alcohol-based disinfectants.
== History ==
Development of the first commercial electrosurgical device is credited to William T. Bovie, who developed it while employed at Harvard University. The first use of an electrosurgical generator in an operating room occurred on October 1, 1926 at Peter Bent Brigham Hospital in Boston, Massachusetts. The operation—removal of a mass from a patient's head—was performed by Harvey Cushing. The low-powered hyfrecator for office use was introduced in 1940.
== See also ==
Cryosurgery
Laser surgery
Electrocautery
Dielectric heating
Microwave minimaze procedure
Na effect
Harmonic scalpel
Medical applications of radio frequency
== Notes ==
== External links ==
A Simple Guide to the Hyfrecator 2000, Richard J Motley, Schuco International Ltd.: a primer for low-powered outpatient dermatological devices, such as the Hyfrecator 2000 device.
Electrosurgery for the Skin Archived 2008-05-17 at the Wayback Machine, Barry L. Hainer M.D., Richard B. Usatine, M.D., American Family Physician (Journal of the American Academy of Family Physicians), 2002 Oct 1;66(7):1259-66.
Electrosurgical Generator Testing Online Journal of the Biomedical Engineering Association of Ireland (BEAI), May 1997.
Update on Electrosurgery Archived 2016-03-03 at the Wayback Machine, Judith Lee, Contributing Editor, Outpatient Surgery Magazine, February, 2002. | Wikipedia/Electrosurgery |
The green fluorescent protein (GFP) is a protein that exhibits green fluorescence when exposed to light in the blue to ultraviolet range. The label GFP traditionally refers to the protein first isolated from the jellyfish Aequorea victoria and is sometimes called avGFP. However, GFPs have been found in other organisms including corals, sea anemones, zoanthids, copepods and lancelets.
The GFP from A. victoria has a major excitation peak at a wavelength of 395 nm and a minor one at 475 nm. Its emission peak is at 509 nm, which is in the lower green portion of the visible spectrum. The fluorescence quantum yield (QY) of GFP is 0.79. The GFP from the sea pansy (Renilla reniformis) has a single major excitation peak at 498 nm. GFP makes for an excellent tool in many forms of biology due to its ability to form an internal chromophore without requiring any accessory cofactors, gene products, or enzymes / substrates other than molecular oxygen.
In cell and molecular biology, the GFP gene is frequently used as a reporter of expression. It has been used in modified forms to make biosensors, and many animals have been created that express GFP, which demonstrates a proof of concept that a gene can be expressed throughout a given organism, in selected organs, or in cells of interest. GFP can be introduced into animals or other species through transgenic techniques, and maintained in their genome and that of their offspring. GFP has been expressed in many species, including bacteria, yeasts, fungi, fish and mammals, including in human cells. Scientists Roger Y. Tsien, Osamu Shimomura, and Martin Chalfie were awarded the 2008 Nobel Prize in Chemistry on 10 October 2008 for their discovery and development of the green fluorescent protein.
Most commercially available genes for GFP and similar fluorescent proteins are around 730 base pairs long. The natural protein has 238 amino acids. Its molecular mass is 27 kDa. Therefore, fusing the GFP gene to the gene of a protein of interest can significantly increase the protein's size and molecular mass, and can impair the protein's natural function or change its location or trajectory of transport within the cell.
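As a rough consistency check (our own back-of-the-envelope arithmetic, not from the source; the ~110 Da average residue mass is an assumed typical value), the quoted numbers fit together as follows:

residues = 238
coding_length_nt = residues * 3 + 3   # 3 nt per codon plus one stop codon -> 717 nt
approx_mass_da = residues * 110       # assumed ~110 Da per residue -> ~26,000 Da

print(coding_length_nt)   # 717 nt, close to the ~730 bp of typical commercial constructs
print(approx_mass_da)     # ~26 kDa, in line with the quoted ~27 kDa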
== Background ==
=== Wild-type GFP (wtGFP) ===
In the 1960s and 1970s, GFP, along with the separate luminescent protein aequorin (an enzyme that catalyzes the breakdown of luciferin, releasing light), was first purified from the jellyfish Aequorea victoria and its properties studied by Osamu Shimomura. In A. victoria, GFP fluorescence occurs when aequorin interacts with Ca2+ ions, inducing a blue glow. Some of this luminescent energy is transferred to the GFP, shifting the overall color towards green. However, its utility as a tool for molecular biologists did not begin to be realized until 1992 when Douglas Prasher reported the cloning and nucleotide sequence of wtGFP in Gene. The funding for this project had run out, so Prasher sent cDNA samples to several labs. The lab of Martin Chalfie expressed the coding sequence of wtGFP, with the first few amino acids deleted, in heterologous cells of E. coli and C. elegans, publishing the results in Science in 1994. Frederick Tsuji's lab independently reported the expression of the recombinant protein one month later. Remarkably, the GFP molecule folded and was fluorescent at room temperature, without the need for exogenous cofactors specific to the jellyfish. Although this near-wtGFP was fluorescent, it had several drawbacks, including dual peaked excitation spectra, pH sensitivity, chloride sensitivity, poor fluorescence quantum yield, poor photostability and poor folding at 37 °C (99 °F).
The first reported crystal structure of a GFP was that of the S65T mutant by the Remington group in Science in 1996. One month later, the Phillips group independently reported the wild-type GFP structure in Nature Biotechnology. These crystal structures provided vital background on chromophore formation and neighboring residue interactions. Researchers have modified these residues by directed and random mutagenesis to produce the wide variety of GFP derivatives in use today. Further research into GFP has shown that it is resistant to detergents, proteases, guanidinium chloride (GdmCl) treatments, and drastic temperature changes.
=== GFP derivatives ===
Due to the potential for widespread usage and the evolving needs of researchers, many different mutants of GFP have been engineered. The first major improvement was a single point mutation (S65T) reported in 1995 in Nature by Roger Tsien. This mutation dramatically improved the spectral characteristics of GFP, resulting in increased fluorescence, photostability, and a shift of the major excitation peak to 488 nm, with the peak emission kept at 509 nm. This matched the spectral characteristics of commonly available FITC filter sets, increasing the practicality of use by the general researcher. A 37 °C folding efficiency (F64L) point mutant to this scaffold, yielding enhanced GFP (EGFP), was discovered in 1995 by the laboratories of Thastrup and Falkow. EGFP allowed the practical use of GFPs in mammalian cells. EGFP has an extinction coefficient (denoted ε) of 55,000 M−1cm−1. The fluorescence quantum yield (QY) of EGFP is 0.60. The relative brightness, expressed as ε•QY, is 33,000 M−1cm−1.
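For reference, the relative brightness figure quoted above is simply the product of the extinction coefficient and the fluorescence quantum yield (values for EGFP as given in this section):

\text{brightness} \approx \varepsilon \cdot QY = 55{,}000\ \mathrm{M^{-1}cm^{-1}} \times 0.60 = 33{,}000\ \mathrm{M^{-1}cm^{-1}}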
Superfolder GFP (sfGFP), a series of mutations that allow GFP to rapidly fold and mature even when fused to poorly folding peptides, was reported in 2006.
Many other mutations have been made, including color mutants; in particular, blue fluorescent protein (EBFP, EBFP2, Azurite, mKalama1), cyan fluorescent protein (ECFP, Cerulean, CyPet, mTurquoise2), and yellow fluorescent protein derivatives (YFP, Citrine, Venus, YPet). BFP derivatives (except mKalama1) contain the Y66H substitution. They exhibit a broad absorption band in the ultraviolet centered close to 380 nanometers and an emission maximum at 448 nanometers. A green fluorescent protein mutant (BFPms1) that preferentially binds Zn(II) and Cu(II) has been developed. BFPms1 has several important mutations, including the BFP chromophore mutation (Y66H), Y145F for higher quantum yield, H148G for creating a hole into the beta-barrel, and several other mutations that increase solubility. Zn(II) binding increases fluorescence intensity, while Cu(II) binding quenches fluorescence and shifts the absorbance maximum from 379 to 444 nm. Therefore, it can be used as a Zn(II) biosensor.
More color variants are possible via chromophore binding. The critical mutation in cyan derivatives is the Y66W substitution, which causes the chromophore to form with an indole rather than phenol component. Several additional compensatory mutations in the surrounding barrel are required to restore brightness to this modified chromophore due to the increased bulk of the indole group. In ECFP and Cerulean, the N-terminal half of the seventh strand exhibits two conformations. These conformations both have a complex set of van der Waals interactions with the chromophore. The Y145A and H148D mutations in Cerulean stabilize these interactions and allow the chromophore to be more planar, better packed, and less prone to collisional quenching.
Additional site-directed random mutagenesis in combination with fluorescence lifetime based screening has further stabilized the seventh β-strand resulting in a bright variant, mTurquoise2, with a quantum yield (QY) of 0.93. The red-shifted wavelength of the YFP derivatives is accomplished by the T203Y mutation and is due to π-electron stacking interactions between the substituted tyrosine residue and the chromophore. These two classes of spectral variants are often employed for Förster resonance energy transfer (FRET) experiments. Genetically encoded FRET reporters sensitive to cell signaling molecules, such as calcium or glutamate, protein phosphorylation state, protein complementation, receptor dimerization, and other processes provide highly specific optical readouts of cell activity in real time.
Semirational mutagenesis of a number of residues led to pH-sensitive mutants known as pHluorins, and later super-ecliptic pHluorins. By exploiting the rapid change in pH upon synaptic vesicle fusion, pHluorins tagged to synaptobrevin have been used to visualize synaptic activity in neurons.
Redox sensitive GFP (roGFP) was engineered by introduction of cysteines into the beta barrel structure. The redox state of the cysteines determines the fluorescent properties of roGFP.
=== Nomenclature ===
The nomenclature of modified GFPs is often confusing due to overlapping mapping of several GFP versions onto a single name. For example, mGFP often refers to a GFP with an N-terminal palmitoylation that causes the GFP to bind to cell membranes. However, the same term is also used to refer to monomeric GFP, which is often achieved by the dimer interface breaking A206K mutation. Wild-type GFP has a weak dimerization tendency at concentrations above 5 mg/mL. mGFP also stands for "modified GFP," which has been optimized through amino acid exchange for stable expression in plant cells.
== In nature ==
The purpose of both the (primary) bioluminescence (from aequorin's action on luciferin) and the (secondary) fluorescence of GFP in jellyfish is unknown. GFP is co-expressed with aequorin in small granules around the rim of the jellyfish bell. The secondary excitation peak (480 nm) of GFP does absorb some of the blue emission of aequorin, giving the bioluminescence a more green hue. The serine 65 residue of the GFP chromophore is responsible for the dual-peaked excitation spectra of wild-type GFP. It is conserved in all three GFP isoforms originally cloned by Prasher. Nearly all mutations of this residue consolidate the excitation spectra to a single peak at either 395 nm or 480 nm. The precise mechanism of this sensitivity is complex, but, it seems, involves donation of a hydrogen from serine 65 to glutamate 222, which influences chromophore ionization. Since a single mutation can dramatically enhance the 480 nm excitation peak, making GFP a much more efficient partner of aequorin, A. victoria appears to evolutionarily prefer the less-efficient, dual-peaked excitation spectrum. Roger Tsien has speculated that varying hydrostatic pressure with depth may affect serine 65's ability to donate a hydrogen to the chromophore and shift the ratio of the two excitation peaks. Thus, the jellyfish may change the color of its bioluminescence with depth. However, a collapse in the population of jellyfish in Friday Harbor, where GFP was originally discovered, has hampered further study of the role of GFP in the jellyfish's natural environment.
Most species of lancelet are known to produce GFP in various regions of their body. Unlike A. victoria, lancelets do not produce their own blue light, and the origin of their endogenous GFP is still unknown. Some speculate that it attracts plankton towards the mouth of the lancelet, serving as a passive hunting mechanism. It may also serve as a photoprotective agent in the larvae, preventing damage caused by high-intensity blue light by converting it into lower-intensity green light. However, these theories have not been tested.
GFP-like proteins have been found in multiple species of marine copepods, particularly from the Pontellidae and Aetideidae families. GFP isolated from Pontella mimocerami has shown high levels of brightness, with a quantum yield of 0.92, making it nearly two-fold brighter than the commonly used EGFP isolated from A. victoria.
== Other fluorescent proteins ==
There are many GFP-like proteins that, despite being in the same protein family as GFP, are not directly derived from Aequorea victoria. These include dsRed, eqFP611, Dronpa, TagRFPs, KFP, EosFP/IrisFP, Dendra, and so on. Having been developed from proteins in different organisms, these proteins can sometimes display unanticipated approaches to chromophore formation. Some of these, such as KFP, are developed from naturally non- or weakly-fluorescent proteins to be greatly improved upon by mutagenesis. When GFP-like barrels of different spectral characteristics are used, the excitation spectrum of one chromophore can be used to power another chromophore (FRET), allowing for conversion between wavelengths of light.
FMN-binding fluorescent proteins (FbFPs) were developed in 2007 and are a class of small (11–16 kDa), oxygen-independent fluorescent proteins that are derived from blue-light receptors. They are intended especially for the use under anaerobic or hypoxic conditions, since the formation and binding of the flavin chromophore does not require molecular oxygen, as it is the case with the synthesis of the GFP chromophore.
Fluorescent proteins with other chromophores, such as UnaG with bilirubin, can display unique properties like red-shifted emission above 600 nm or photoconversion from a green-emitting state to a red-emitting state. They can have excitation and emission wavelengths far enough apart to achieve conversion between red and green light.
A new class of fluorescent protein was engineered from α-allophycocyanin, a phycobiliprotein found in the cyanobacterium Trichodesmium erythraeum, and was named small ultra red fluorescent protein (smURFP) in 2016. smURFP autocatalytically incorporates the chromophore biliverdin without the need for an external protein known as a lyase. Jellyfish- and coral-derived GFP-like proteins require oxygen and produce a stoichiometric amount of hydrogen peroxide upon chromophore formation. smURFP does not require oxygen or produce hydrogen peroxide. smURFP has a large extinction coefficient (180,000 M−1 cm−1) and has a modest quantum yield (0.20), which makes it comparable biophysical brightness to eGFP and ~2-fold brighter than most red or far-red fluorescent proteins derived from coral. smURFP spectral properties are similar to the organic dye Cy5.
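A minimal sketch of the comparison implied above, using only values quoted in this article (the brightness-as-ε·QY approximation is the same one used for EGFP earlier; the variable names are ours):

# Relative brightness approximated as extinction coefficient x quantum yield.
egfp_brightness = 55_000 * 0.60      # EGFP values from the "GFP derivatives" section -> 33,000
smurfp_brightness = 180_000 * 0.20   # smURFP values from this paragraph -> 36,000

print(smurfp_brightness / egfp_brightness)   # ~1.09, i.e. comparable brightness, as stated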
Reviews on new classes of fluorescent proteins and applications can be found in the cited reviews.
== Structure ==
GFP has a beta barrel structure consisting of eleven β-strands with a pleated sheet arrangement, with an alpha helix containing the covalently bonded chromophore 4-(p-hydroxybenzylidene)imidazolidin-5-one (HBI) running through the center. Five shorter alpha helices form caps on the ends of the structure. The beta barrel structure is a nearly perfect cylinder, 42 Å long and 24 Å in diameter (some studies have reported a diameter of 30 Å), creating what is referred to as a "β-can" formation, which is unique to the GFP-like family. HBI, the spontaneously modified form of the tripeptide Ser65–Tyr66–Gly67, is nonfluorescent in the absence of the properly folded GFP scaffold and exists mainly in the un-ionized phenol form in wtGFP. Inward-facing sidechains of the barrel induce specific cyclization reactions in Ser65–Tyr66–Gly67 that lead to ionization of HBI to the phenolate form and chromophore formation. This process of post-translational modification is referred to as maturation. The hydrogen-bonding network and electron-stacking interactions with these sidechains influence the color, intensity and photostability of GFP and its numerous derivatives. The tightly packed nature of the barrel excludes solvent molecules, protecting the chromophore fluorescence from quenching by water. In addition to the auto-cyclization of Ser65–Tyr66–Gly67, a 1,2-dehydrogenation reaction occurs at the Tyr66 residue. Besides the three residues that form the chromophore, residues such as Gln94, Arg96, His148, Thr203, and Glu222 all act as stabilizers. The residues Gln94, Arg96, and His148 stabilize the chromophore by delocalizing its charge. Arg96 is the most important stabilizing residue because it prompts the structural realignments that are necessary for the HBI ring to form. Any mutation to the Arg96 residue would result in a decrease in the development rate of the chromophore because proper electrostatic and steric interactions would be lost. Tyr66 is the recipient of hydrogen bonds and does not ionize, in order to produce favorable electrostatics.
Blue fluorescent protein (BFP) is the blue variant of green fluorescent protein (GFP). BFP has a very similar structure to GFP. In the BFP structure, two substitution mutations in the amino acid sequence change its fluorescence from green to blue. The first mutation occurs inside the chromophore of GFP at position 66, which changes a tyrosine to a histidine. The other mutation in BFP is on the tyrosine at position 145, which mutates to phenylalanine. The autocatalytic cyclization and oxidation of the serine, tyrosine, and glycine form the GFP chromophore. These three residues at positions 65–67 make up the green fluorescent chromophore. When the tyrosine in the chromophore is substituted by a histidine, it changes the folding structure of the protein and the emission spectrum. The Y145F mutation is also added to increase the stability of the protein as well as to intensify the fluorescence. These mutations are what change GFP to BFP.
=== Autocatalytic formation of the chromophore in wtGFP ===
Mechanistically, the process involves base-mediated cyclization followed by dehydration and oxidation. The reaction of 7a to 8 involves the formation of an enamine from the imine, while in the reaction of 7b to 9 a proton is abstracted. The formed HBI fluorophore is highlighted in green.
The reactions are catalyzed by residues Glu222 and Arg96. An analogous mechanism is also possible with threonine in place of Ser65.
== Applications ==
=== Reporter assays ===
Green fluorescent protein may be used as a reporter gene.
For example, GFP can be used as a reporter for environmental toxicity levels. This protein has been shown to be an effective way to measure the toxicity levels of various chemicals including ethanol, p-formaldehyde, phenol, triclosan, and paraben. GFP is well suited as a reporter protein because it has no effect on the host when introduced to the host's cellular environment. Due to this ability, no external visualization stain, ATP, or cofactors are needed. With regard to pollutant levels, the fluorescence was measured in order to gauge the effect that the pollutants have on the host cell. The cellular density of the host cell was also measured. Results from the study conducted by Song, Kim, & Seo (2016) showed that there was a decrease in both fluorescence and cellular density as pollutant levels increased, indicating that cellular activity had decreased. More research into this specific application is needed to determine the mechanism by which GFP acts as a pollutant marker. Similar results have been observed in zebrafish: zebrafish injected with GFP were approximately twenty times more sensitive in recognizing cellular stresses than zebrafish that were not injected with GFP.
==== Advantages ====
The biggest advantage of GFP is that it can be heritable, depending on how it was introduced, allowing for continued study of cells and tissues it is expressed in. Visualizing GFP is noninvasive, requiring only illumination with blue light. GFP alone does not interfere with biological processes, but when fused to proteins of interest, careful design of linkers is required to maintain the function of the protein of interest. Moreover, if used with a monomer it is able to diffuse readily throughout cells.
=== Fluorescence microscopy ===
The availability of GFP and its derivatives has thoroughly redefined fluorescence microscopy and the way it is used in cell biology and other biological disciplines. While most small fluorescent molecules such as FITC (fluorescein isothiocyanate) are strongly phototoxic when used in live cells, fluorescent proteins such as GFP are usually much less harmful when illuminated in living cells. This has triggered the development of highly automated live-cell fluorescence microscopy systems, which can be used to observe cells over time expressing one or more proteins tagged with fluorescent proteins.
There are many techniques to utilize GFP in a live cell imaging experiment. The most direct way of utilizing GFP is to attach it directly to a protein of interest. For example, GFP can be included in a plasmid expressing other genes to indicate a successful transfection of a gene of interest. Another method is to use a GFP that contains a mutation where the fluorescence will change from green to yellow over time, which is referred to as a fluorescent timer. With the fluorescent timer, researchers can study the state of protein production, such as recently activated, continuously activated, or recently deactivated, based on the color reported by the fluorescent protein. In yet another example, scientists have modified GFP to become active only after exposure to irradiation, giving researchers a tool to selectively activate certain portions of a cell and observe where proteins tagged with the GFP move from the starting location. These are only a few examples in a burgeoning field of fluorescence microscopy, and a more complete review of biosensors utilizing GFP and other fluorescent proteins can be found in the cited reviews.
For example, GFP had been widely used in labelling the spermatozoa of various organisms for identification purposes as in Drosophila melanogaster, where expression of GFP can be used as a marker for a particular characteristic. GFP can also be expressed in different structures enabling morphological distinction. In such cases, the gene for the production of GFP is incorporated into the genome of the organism in the region of the DNA that codes for the target proteins and that is controlled by the same regulatory sequence; that is, the gene's regulatory sequence now controls the production of GFP, in addition to the tagged protein(s). In cells where the gene is expressed, and the tagged proteins are produced, GFP is produced at the same time. Thus, only those cells in which the tagged gene is expressed, or the target proteins are produced, will fluoresce when observed under fluorescence microscopy. Analysis of such time lapse movies has redefined the understanding of many biological processes including protein folding, protein transport, and RNA dynamics, which in the past had been studied using fixed (i.e., dead) material. Obtained data are also used to calibrate mathematical models of intracellular systems and to estimate rates of gene expression. Similarly, GFP can be used as an indicator of protein expression in heterologous systems. In this scenario, fusion proteins containing GFP are introduced indirectly, using RNA of the construct, or directly, with the tagged protein itself. This method is useful for studying structural and functional characteristics of the tagged protein on a macromolecular or single-molecule scale with fluorescence microscopy.
The Vertico SMI microscope using the SPDM Phymod technology uses the so-called "reversible photobleaching" effect of fluorescent dyes like GFP and its derivatives to localize them as single molecules in an optical resolution of 10 nm. This can also be performed as a co-localization of two GFP derivatives (2CLM).
Another powerful use of GFP is to express the protein in small sets of specific cells. This allows researchers to optically detect specific types of cells in vitro (in a dish), or even in vivo (in the living organism). GFP is considered to be a reliable reporter of gene expression in eukaryotic cells when the fluorescence is measured by flow cytometry. Genetically combining several spectral variants of GFP is a useful trick for the analysis of brain circuitry (Brainbow). Other interesting uses of fluorescent proteins in the literature include using FPs as sensors of neuron membrane potential, tracking of AMPA receptors on cell membranes, viral entry and the infection of individual influenza viruses and lentiviral viruses, etc.
It has also been found that new lines of transgenic GFP rats can be relevant for gene therapy as well as regenerative medicine. By using "high-expresser" GFP, transgenic rats display high expression in most tissues, and many cells that have not been characterized or have been only poorly characterized in previous GFP-transgenic rats.
GFP has been shown to be useful in cryobiology as a viability assay. The correlation between viability as measured by GFP and by trypan blue assays was 0.97. Another application is the use of GFP co-transfection as an internal control for transfection efficiency in mammalian cells.
A novel possible use of GFP includes using it as a sensitive monitor of intracellular processes via an eGFP laser system made out of a human embryonic kidney cell line. The first engineered living laser is made by an eGFP expressing cell inside a reflective optical cavity and hitting it with pulses of blue light. At a certain pulse threshold, the eGFP's optical output becomes brighter and completely uniform in color of pure green with a wavelength of 516 nm. Before being emitted as laser light, the light bounces back and forth within the resonator cavity and passes the cell numerous times. By studying the changes in optical activity, researchers may better understand cellular processes.
GFP is used widely in cancer research to label and track cancer cells. GFP-labelled cancer cells have been used to model metastasis, the process by which cancer cells spread to distant organs.
=== Split GFP ===
GFP can be used to analyse the colocalization of proteins. This is achieved by "splitting" the protein into two fragments which are able to self-assemble, and then fusing each of these to the two proteins of interest. Alone, these incomplete GFP fragments are unable to fluoresce. However, if the two proteins of interest colocalize, then the two GFP fragments assemble together to form a GFP-like structure which is able to fluoresce. Therefore, by measuring the level of fluorescence it is possible to determine whether the two proteins of interest colocalize.
=== Macro-photography ===
Macro-scale biological processes, such as the spread of virus infections, can be followed using GFP labeling. In the past, mutagenic ultraviolet light (UV) has been used to illuminate living organisms to detect and photograph the GFP expression. Recently, a technique using non-mutagenic LED lights has been developed for macro-photography. The technique uses an epifluorescence camera attachment based on the same principle used in the construction of epifluorescence microscopes.
=== Transgenic pets ===
Alba, a green-fluorescent rabbit, was created by a French laboratory commissioned by Eduardo Kac using GFP for purposes of art and social commentary. The US company Yorktown Technologies markets to aquarium shops green fluorescent zebrafish (GloFish) that were initially developed to detect pollution in waterways. NeonPets, a US-based company, has marketed green fluorescent mice to the pet industry as NeonMice. Green fluorescent pigs, known as Noels, were bred by a group of researchers led by Wu Shinn-Chih at the Department of Animal Science and Technology at National Taiwan University. A Japanese-American team created green-fluorescent cats as proof of concept to use them potentially as model organisms for diseases, particularly HIV. In 2009 a South Korean team from Seoul National University bred the first transgenic beagles using fibroblast cells carrying a fluorescent gene from sea anemones. The dogs give off a red fluorescent light, and they are meant to allow scientists to study the genes that cause human diseases like narcolepsy and blindness.
=== Art ===
Julian Voss-Andreae, a German-born artist specializing in "protein sculptures," created sculptures based on the structure of GFP, including the 1.70 metres (5 feet 7 inches) tall "Green Fluorescent Protein" (2004) and the 1.40 metres (4 feet 7 inches) tall "Steel Jellyfish" (2006). The latter sculpture is located at the place of GFP's discovery by Shimomura in 1962, the University of Washington's Friday Harbor Laboratories.
== See also ==
Protein tag
pGLO
Yellow fluorescent protein
Genetically encoded voltage indicator
== References ==
== Further reading ==
== External links ==
A comprehensive article on fluorescent proteins at Scholarpedia
Brief summary of landmark GFP papers
Interactive Java applet demonstrating the chemistry behind the formation of the GFP chromophore
Video of 2008 Nobel Prize lecture of Roger Tsien on fluorescent proteins
Excitation and emission spectra for various fluorescent proteins
Green Fluorescent Protein Chem Soc Rev themed issue dedicated to the 2008 Nobel Prize winners in Chemistry, Professors Osamu Shimomura, Martin Chalfie and Roger Y. Tsien
Molecule of the Month, June 2003: an illustrated overview of GFP by David Goodsell.
Molecule of the Month, June 2014: an illustrated overview of GFP-like variants by David Goodsell.
Green Fluorescent Protein on FPbase, a fluorescent protein database
Overview of all the structural information available in the PDB for UniProt: P42212 (Green fluorescent protein) at the PDBe-KB. | Wikipedia/Green_fluorescent_protein |
In molecular biology and genetics, transformation is the genetic alteration of a cell resulting from the direct uptake and incorporation of exogenous genetic material from its surroundings through the cell membrane(s). For transformation to take place, the recipient bacterium must be in a state of competence, which might occur in nature as a time-limited response to environmental conditions such as starvation and cell density, and may also be induced in a laboratory.
Transformation is one of three processes that lead to horizontal gene transfer, in which exogenous genetic material passes from one bacterium to another, the other two being conjugation (transfer of genetic material between two bacterial cells in direct contact) and transduction (injection of foreign DNA by a bacteriophage virus into the host bacterium). In transformation, the genetic material passes through the intervening medium, and uptake is completely dependent on the recipient bacterium.
As of 2014 about 80 species of bacteria were known to be capable of transformation, about evenly divided between Gram-positive and Gram-negative bacteria; the number might be an overestimate since several of the reports are supported by single papers.
"Transformation" may also be used to describe the insertion of new genetic material into nonbacterial cells, including animal and plant cells; however, because "transformation" has a special meaning in relation to animal cells, indicating progression to a cancerous state, the process is usually called "transfection".
== History ==
Transformation in bacteria was first demonstrated in 1928 by the British bacteriologist Frederick Griffith. Griffith was interested in determining whether injections of heat-killed bacteria could be used to vaccinate mice against pneumonia. However, he discovered that a non-virulent strain of Streptococcus pneumoniae could be made virulent after being exposed to heat-killed virulent strains. Griffith hypothesized that some "transforming principle" from the heat-killed strain was responsible for making the harmless strain virulent. In 1944 this "transforming principle" was identified as being genetic by Oswald Avery, Colin MacLeod, and Maclyn McCarty. They isolated DNA from a virulent strain of S. pneumoniae and, using just this DNA, were able to make a harmless strain virulent. They called this uptake and incorporation of DNA by bacteria "transformation" (see Avery-MacLeod-McCarty experiment). The results of Avery et al.'s experiments were at first received skeptically by the scientific community, and it was not until the development of genetic markers and the discovery of other methods of genetic transfer (conjugation in 1947 and transduction in 1953) by Joshua Lederberg that Avery's experiments were accepted.
It was originally thought that Escherichia coli, a commonly used laboratory organism, was refractory to transformation. However, in 1970, Morton Mandel and Akiko Higa showed that E. coli may be induced to take up DNA from bacteriophage λ without the use of helper phage after treatment with calcium chloride solution. Two years later in 1972, Stanley Norman Cohen, Annie Chang and Leslie Hsu showed that CaCl2 treatment is also effective for transformation of plasmid DNA. The method of transformation by Mandel and Higa was later improved upon by Douglas Hanahan. The discovery of artificially induced competence in E. coli created an efficient and convenient procedure for transforming bacteria which allows for simpler molecular cloning methods in biotechnology and research, and it is now a routinely used laboratory procedure.
Transformation using electroporation was developed in the late 1980s, increasing the efficiency of in-vitro transformation and increasing the number of bacterial strains that could be transformed. Transformation of animal and plant cells was also investigated with the first transgenic mouse being created by injecting a gene for a rat growth hormone into a mouse embryo in 1982. In 1897 a bacterium that caused plant tumors, Agrobacterium tumefaciens, was discovered and in the early 1970s the tumor-inducing agent was found to be a DNA plasmid called the Ti plasmid. By removing the genes in the plasmid that caused the tumor and adding in novel genes, researchers were able to infect plants with A. tumefaciens and let the bacteria insert their chosen DNA into the genomes of the plants. Not all plant cells are susceptible to infection by A. tumefaciens, so other methods were developed, including electroporation and micro-injection. Particle bombardment was made possible with the invention of the Biolistic Particle Delivery System (gene gun) by John Sanford in the 1980s.
== Definitions ==
Transformation is one of three forms of horizontal gene transfer that occur in nature among bacteria, in which DNA encoding for a trait passes from one bacterium to another and is integrated into the recipient genome by homologous recombination; the other two are transduction, carried out by means of a bacteriophage, and conjugation, in which a gene is passed through direct contact between bacteria. In transformation, the genetic material passes through the intervening medium, and uptake is completely dependent on the recipient bacterium.
Competence refers to a temporary state of being able to take up exogenous DNA from the environment; it may be induced in a laboratory.
Transformation appears to be an ancient process inherited from a common prokaryotic ancestor that is a beneficial adaptation for promoting recombinational repair of DNA damage, especially damage acquired under stressful conditions. Natural genetic transformation appears to be an adaptation for repair of DNA damage that also generates genetic diversity.
Transformation has been studied in medically important Gram-negative bacterial species such as Helicobacter pylori, Legionella pneumophila, Neisseria meningitidis, Neisseria gonorrhoeae, Haemophilus influenzae and Vibrio cholerae. It has also been studied in Gram-negative species found in soil such as Pseudomonas stutzeri and Acinetobacter baylyi, and in Gram-negative plant pathogens such as Ralstonia solanacearum and Xylella fastidiosa. Transformation among Gram-positive bacteria has been studied in medically important species such as Streptococcus pneumoniae, Streptococcus mutans, Staphylococcus aureus and Streptococcus sanguinis and in the Gram-positive soil bacterium Bacillus subtilis. It has also been reported in at least 30 species of Pseudomonadota distributed in several different classes. The best studied Pseudomonadota with respect to transformation are the medically important human pathogens Neisseria gonorrhoeae, Haemophilus influenzae, and Helicobacter pylori.
"Transformation" may also be used to describe the insertion of new genetic material into nonbacterial cells, including animal and plant cells; however, because "transformation" has a special meaning in relation to animal cells, indicating progression to a cancerous state, the process is usually called "transfection".
== Natural competence and transformation ==
Naturally competent bacteria carry sets of genes that provide the protein machinery to bring DNA across the cell membrane(s). The transport of the exogenous DNA into the cells may require proteins that are involved in the assembly of type IV pili and the type II secretion system, as well as a DNA translocase complex at the cytoplasmic membrane.
Due to the differences in structure of the cell envelope between Gram-positive and Gram-negative bacteria, there are some differences in the mechanisms of DNA uptake in these cells; however, most of them share common features that involve related proteins. The DNA first binds to the surface of the competent cells on a DNA receptor, and passes through the cytoplasmic membrane via DNA translocase. Only single-stranded DNA may pass through, the other strand being degraded by nucleases in the process. The translocated single-stranded DNA may then be integrated into the bacterial chromosome by a RecA-dependent process. In Gram-negative cells, due to the presence of an extra membrane, the DNA also requires a channel formed by secretins on the outer membrane. Pilin may be required for competence, but its role is uncertain. The uptake of DNA is generally non-sequence-specific, although in some species the presence of specific DNA uptake sequences may facilitate efficient DNA uptake.
=== Natural transformation ===
Natural transformation is a bacterial adaptation for DNA transfer that depends on the expression of numerous bacterial genes whose products appear to be responsible for this process. In general, transformation is a complex, energy-requiring developmental process. In order for a bacterium to bind, take up and recombine exogenous DNA into its chromosome, it must become competent, that is, enter a special physiological state. Competence development in Bacillus subtilis requires expression of about 40 genes. The DNA integrated into the host chromosome is usually (but with rare exceptions) derived from another bacterium of the same species, and is thus homologous to the resident chromosome.
In B. subtilis the length of the transferred DNA can be greater than 1271 kb (more than 1 million bases). The transferred DNA is likely double-stranded and is often more than a third of the total chromosome length of 4215 kb. It appears that about 7–9% of the recipient cells take up an entire chromosome.
The capacity for natural transformation appears to occur in a number of prokaryotes, and thus far 67 prokaryotic species (in seven different phyla) are known to undergo this process.
Competence for transformation is typically induced by high cell density and/or nutritional limitation, conditions associated with the stationary phase of bacterial growth. Transformation in Haemophilus influenzae occurs most efficiently at the end of exponential growth as bacterial growth approaches stationary phase. Transformation in Streptococcus mutans, as well as in many other streptococci, occurs at high cell density and is associated with biofilm formation. Competence in B. subtilis is induced toward the end of logarithmic growth, especially under conditions of amino acid limitation. Similarly, in Micrococcus luteus (a representative of the less well studied Actinomycetota phylum), competence develops during the mid-to-late exponential growth phase and is also triggered by amino acid starvation.
By releasing intact host and plasmid DNA, certain bacteriophages are thought to contribute to transformation.
=== Transformation, as an adaptation for DNA repair ===
Competence is specifically induced by DNA-damaging conditions. For instance, transformation is induced in Streptococcus pneumoniae by the DNA-damaging agents mitomycin C (a DNA cross-linking agent) and fluoroquinolone (a topoisomerase inhibitor that causes double-strand breaks). In B. subtilis, transformation is increased by UV light, a DNA-damaging agent. In Helicobacter pylori, ciprofloxacin, which interacts with DNA gyrase and introduces double-strand breaks, induces expression of competence genes, thus enhancing the frequency of transformation. Using Legionella pneumophila, Charpentier et al. tested 64 toxic molecules to determine which of these induce competence. Of these, only six, all DNA-damaging agents, caused strong induction. These DNA-damaging agents were mitomycin C (which causes DNA inter-strand crosslinks), norfloxacin, ofloxacin and nalidixic acid (inhibitors of DNA gyrase that cause double-strand breaks), bicyclomycin (which causes single- and double-strand breaks), and hydroxyurea (which induces DNA base oxidation). UV light also induced competence in L. pneumophila. Charpentier et al. suggested that competence for transformation probably evolved as a DNA damage response. Natural transformation in the extraordinarily radiation-resistant bacterium Deinococcus radiodurans is associated with the repair of DNA damage under stressful conditions.
Logarithmically growing bacteria differ from stationary phase bacteria with respect to the number of genome copies present in the cell, and this has implications for the capability to carry out an important DNA repair process. During logarithmic growth, two or more copies of any particular region of the chromosome may be present in a bacterial cell, as cell division is not precisely matched with chromosome replication. The process of homologous recombinational repair (HRR) is a key DNA repair process that is especially effective for repairing double-strand damage, such as double-strand breaks. This process depends on a second homologous chromosome in addition to the damaged chromosome. During logarithmic growth, DNA damage in one chromosome may be repaired by HRR using sequence information from the other homologous chromosome. Once cells approach stationary phase, however, they typically have just one copy of the chromosome, and HRR requires input of a homologous template from outside the cell by transformation.
To test whether the adaptive function of transformation is repair of DNA damage, a series of experiments was carried out using B. subtilis irradiated by UV light as the damaging agent (reviewed by Michod et al. and Bernstein et al.). The results of these experiments indicated that transforming DNA acts to repair potentially lethal DNA damage introduced by UV light in the recipient DNA. The particular process responsible for repair was likely HRR. Transformation in bacteria can be viewed as a primitive sexual process, since it involves interaction of homologous DNA from two individuals to form recombinant DNA that is passed on to succeeding generations. Bacterial transformation in prokaryotes may have been the ancestral process that gave rise to meiotic sexual reproduction in eukaryotes (see Evolution of sexual reproduction; Meiosis).
== Methods and mechanisms of transformation in laboratory ==
=== Bacterial ===
Artificial competence can be induced in laboratory procedures that involve making the cell passively permeable to DNA by exposing it to conditions that do not normally occur in nature. Typically the cells are incubated in a solution containing divalent cations (often calcium chloride) under cold conditions, before being exposed to a heat pulse (heat shock). Calcium chloride partially disrupts the cell membrane, which allows the recombinant DNA to enter the host cell. Cells that are able to take up the DNA are called competent cells.
It has been found that growth of Gram-negative bacteria in 20 mM Mg2+ reduces the number of protein-to-lipopolysaccharide bonds by increasing the ratio of ionic to covalent bonds, which increases membrane fluidity, facilitating transformation. The role of lipopolysaccharides here is supported by the observation that cells with shorter O-side chains are more effectively transformed – perhaps because of improved DNA accessibility.
The surface of bacteria such as E. coli is negatively charged due to phospholipids and lipopolysaccharides on its cell surface, and the DNA is also negatively charged. One function of the divalent cation therefore would be to shield the charges by coordinating the phosphate groups and other negative charges, thereby allowing a DNA molecule to adhere to the cell surface.
DNA entry into E. coli cells occurs through channels known as zones of adhesion or Bayer's junctions, with a typical cell carrying as many as 400 such zones. Their role was established when cobalamin (which also uses these channels) was found to competitively inhibit DNA uptake. Another type of channel implicated in DNA uptake consists of poly(HB):poly P:Ca. In this model, poly(HB) is envisioned to wrap around the DNA (itself a polyphosphate) and to be carried in a shield formed by Ca ions.
It is suggested that exposing the cells to divalent cations in cold condition may also change or weaken the cell surface structure, making it more permeable to DNA. The heat-pulse is thought to create a thermal imbalance across the cell membrane, which forces the DNA to enter the cells through either cell pores or the damaged cell wall.
Electroporation is another method of promoting competence. In this method the cells are briefly shocked with an electric field of 10-20 kV/cm, which is thought to create holes in the cell membrane through which the plasmid DNA may enter. After the electric shock, the holes are rapidly closed by the cell's membrane-repair mechanisms.
=== Yeast ===
Most species of yeast, including Saccharomyces cerevisiae, may be transformed by exogenous DNA in the environment. Several methods have been developed to facilitate this transformation at high frequency in the lab.
Yeast cells may be treated with enzymes to degrade their cell walls, yielding spheroplasts. These cells are very fragile but take up foreign DNA at a high rate.
Exposing intact yeast cells to alkali cations such as those of caesium or lithium allows the cells to take up plasmid DNA. Later protocols adapted this transformation method, using lithium acetate, polyethylene glycol, and single-stranded DNA. In these protocols, the single-stranded DNA preferentially binds to the yeast cell wall, preventing plasmid DNA from doing so and leaving it available for transformation.
Electroporation: Formation of transient holes in the cell membranes using electric shock; this allows DNA to enter as described above for bacteria.
Enzymatic digestion or agitation with glass beads may also be used to transform yeast cells.
Efficiency – Different yeast genera and species take up foreign DNA with different efficiencies. Also, most transformation protocols have been developed for baker's yeast, S. cerevisiae, and thus may not be optimal for other species. Even within one species, different strains have different transformation efficiencies, sometimes differing by three orders of magnitude. For instance, when S. cerevisiae strains were transformed with 10 μg of plasmid YEp13, the strain DKD-5D-H yielded between 550 and 3115 colonies while strain OS1 yielded fewer than five colonies.
=== Plants ===
A number of methods are available to transfer DNA into plant cells. Some vector-mediated methods are:
Agrobacterium-mediated transformation is the easiest and simplest method of plant transformation. Plant tissue (often leaves) is cut into small pieces, e.g. 10 × 10 mm, and soaked for ten minutes in a fluid containing suspended Agrobacterium. The bacteria attach to many of the plant cells exposed by the cut. The plant cells secrete wound-related phenolic compounds, which in turn act to upregulate the virulence operon of the Agrobacterium. The virulence operon includes many genes that encode proteins that are part of a Type IV secretion system, which exports proteins and DNA (delineated by specific recognition motifs called border sequences and excised as a single strand from the virulence plasmid) from the bacterium into the plant cell through a structure called a pilus. The transferred DNA (called T-DNA) is piloted to the plant cell nucleus by nuclear localization signals present in the Agrobacterium protein VirD2, which is covalently attached to the end of the T-DNA at the right border (RB). Exactly how the T-DNA is integrated into the host plant genomic DNA is an active area of plant biology research. Assuming that a selection marker (such as an antibiotic resistance gene) was included in the T-DNA, the transformed plant tissue can be cultured on selective media to produce shoots. The shoots are then transferred to a different medium to promote root formation. Once roots begin to grow from the transgenic shoot, the plants can be transferred to soil to complete a normal life cycle (make seeds). The seeds from this first plant (called the T1, for first transgenic generation) can be planted on a selective medium (containing an antibiotic) or, if an herbicide resistance gene was used, can alternatively be planted in soil and later treated with herbicide to kill wild-type segregants. Some plant species, such as Arabidopsis thaliana, can be transformed by dipping the flowers or the whole plant into a suspension of Agrobacterium tumefaciens, typically strain C58 (C=Cherry, 58=1958, the year in which this particular strain of A. tumefaciens was isolated from a cherry tree in an orchard at Cornell University in Ithaca, New York). Though many plants remain recalcitrant to transformation by this method, research is ongoing that continues to add to the list of species that have been successfully modified in this manner.
Viral transformation (transduction): Package the desired genetic material into a suitable plant virus and allow this modified virus to infect the plant. If the genetic material is DNA, it can recombine with the chromosomes to produce transformant cells. However, genomes of most plant viruses consist of single stranded RNA which replicates in the cytoplasm of infected cell. For such genomes this method is a form of transfection and not a real transformation, since the inserted genes never reach the nucleus of the cell and do not integrate into the host genome. The progeny of the infected plants is virus-free and also free of the inserted gene.
Some vector-less methods include:
Gene gun: Also referred to as particle bombardment, microprojectile bombardment, or biolistics. Particles of gold or tungsten are coated with DNA and then shot into young plant cells or plant embryos. Some genetic material will stay in the cells and transform them. This method also allows transformation of plant plastids. The transformation efficiency is lower than in Agrobacterium-mediated transformation, but most plants can be transformed with this method.
Electroporation: Formation of transient holes in cell membranes using electric pulses of high field strength; this allows DNA to enter as described above for bacteria.
=== Fungi ===
There are several methods to produce transgenic fungi, most of them analogous to those used for plants. However, fungi have to be treated differently due to some of their microscopic and biochemical traits:
A major issue is the dikaryotic state that parts of some fungi are in; dikaryotic cells contain two haploid nuclei, one of each parent fungus. If only one of these gets transformed, which is the rule, the percentage of transformed nuclei decreases after each sporulation.
Fungal cell walls are quite thick hindering DNA uptake so (partial) removal is often required; complete degradation, which is sometimes necessary, yields protoplasts.
Mycelial fungi consist of filamentous hyphae, which are, if at all, separated by internal cell walls interrupted by pores big enough to enable nutrients and organelles, sometimes even nuclei, to travel through each hypha. As a result, individual cells usually cannot be separated. This is problematic as neighbouring transformed cells may render untransformed ones immune to selection treatments, e.g. by delivering nutrients or proteins for antibiotic resistance.
Additionally, growth (and thereby mitosis) of these fungi occurs exclusively at the tips of their hyphae, which can also cause problems.
As stated earlier, an array of methods used for plant transformation do also work in fungi:
Agrobacterium is capable of infecting not only plants but also fungi; however, unlike plants, fungi do not secrete the phenolic compounds necessary to trigger Agrobacterium, so these compounds have to be added, e.g. in the form of acetosyringone.
Thanks to the development of an expression system for small RNAs in fungi, the introduction of a CRISPR/Cas9 system into fungal cells became possible. In 2016 the USDA declared that it would not regulate a white button mushroom strain edited with CRISPR/Cas9 to prevent fruit-body browning, prompting a broad discussion about placing CRISPR/Cas9-edited crops on the market.
Physical methods such as electroporation, biolistics ("gene gun"), and sonoporation (which uses cavitation of gas bubbles produced by ultrasound to penetrate the cell membrane) are also applicable to fungi.
=== Animals ===
Introduction of DNA into animal cells is usually called transfection, and is discussed in the corresponding article.
== Practical aspects of transformation in molecular biology ==
The discovery of artificially induced competence in bacteria allows bacteria such as Escherichia coli to be used as convenient hosts for the manipulation of DNA as well as for expressing proteins. Typically, plasmids are used for transformation in E. coli. In order to be stably maintained in the cell, a plasmid DNA molecule must contain an origin of replication, which allows it to be replicated in the cell independently of the replication of the cell's own chromosome.
The efficiency with which a competent culture can take up exogenous DNA and express its genes is known as transformation efficiency and is measured in colony forming units (cfu) per μg of DNA used. A transformation efficiency of 1×10^8 cfu/μg for a small plasmid like pUC19 is roughly equivalent to 1 in 2000 molecules of the plasmid used being transformed.
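As a rough cross-check of that figure, the number of plasmid molecules in a microgram of DNA can be estimated from the plasmid size and the average mass of a base pair. The sketch below assumes pUC19 is 2,686 bp and that a base pair weighs about 650 Da; these values are assumptions for illustration, and the result lands in the same order of magnitude as the ratio quoted above.

```python
# Rough sanity check of transformation efficiency expressed as a fraction of
# input plasmid molecules. Assumptions (not from the text): pUC19 is 2,686 bp
# and an average base pair weighs ~650 Da (g/mol).
AVOGADRO = 6.022e23          # molecules per mole
BP_MASS = 650.0              # g/mol per base pair (approximate)
PUC19_BP = 2686              # plasmid size in base pairs

def molecules_per_microgram(plasmid_bp: int) -> float:
    """Number of plasmid molecules in 1 ug of DNA."""
    grams = 1e-6
    moles = grams / (plasmid_bp * BP_MASS)
    return moles * AVOGADRO

molecules = molecules_per_microgram(PUC19_BP)   # roughly 3e11 molecules per ug
efficiency = 1e8                                # cfu per ug, as quoted above
fraction = efficiency / molecules               # fraction of molecules transformed
print(f"{molecules:.2e} molecules/ug, ~1 in {1/fraction:,.0f} molecules transformed")
```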
In calcium chloride transformation, the cells are prepared by chilling them in the presence of Ca2+ (in CaCl2 solution), making the cells permeable to plasmid DNA. The cells are incubated on ice with the DNA, and then briefly heat-shocked (e.g., at 42 °C for 30–120 seconds). This method works very well for circular plasmid DNA. Non-commercial preparations should normally give 10^6 to 10^7 transformants per microgram of plasmid; a poor preparation will be about 10^4/μg or less, but a good preparation of competent cells can give up to ~10^8 colonies per microgram of plasmid. Protocols exist, however, for making supercompetent cells that may yield a transformation efficiency of over 10^9. The chemical method usually does not work well for linear DNA, such as fragments of chromosomal DNA, probably because the cell's native exonuclease enzymes rapidly degrade linear DNA. In contrast, cells that are naturally competent are usually transformed more efficiently with linear DNA than with plasmid DNA.
The transformation efficiency using the CaCl2 method decreases with plasmid size, and electroporation therefore may be a more effective method for the uptake of large plasmid DNA. Cells used in electroporation should be prepared first by washing in cold double-distilled water to remove charged particles that may create sparks during the electroporation process.
=== Selection and screening in plasmid transformation ===
Because transformation usually produces a mixture of relatively few transformed cells and an abundance of non-transformed cells, a method is necessary to select for the cells that have acquired the plasmid. The plasmid therefore requires a selectable marker such that those cells without the plasmid may be killed or have their growth arrested. Antibiotic resistance is the most commonly used marker for prokaryotes. The transforming plasmid contains a gene that confers resistance to an antibiotic to which the bacteria are otherwise sensitive. The mixture of treated cells is cultured on media that contain the antibiotic so that only transformed cells are able to grow. Another method of selection is the use of certain auxotrophic markers that can compensate for an inability to metabolise certain amino acids, nucleotides, or sugars. This method requires the use of suitably mutated strains that are deficient in the synthesis or utilization of a particular biomolecule, and the transformed cells are cultured in a medium that allows only cells containing the plasmid to grow.
In a cloning experiment, a gene may be inserted into a plasmid used for transformation. However, in such an experiment, not all the plasmids may contain a successfully inserted gene. Additional techniques may therefore be employed to further screen for transformed cells that contain plasmid with the insert. Reporter genes can be used as markers, such as the lacZ gene, which codes for β-galactosidase and is used in blue-white screening. This method of screening relies on the principle of α-complementation, where a fragment of the lacZ gene (lacZα) in the plasmid can complement another mutant lacZ gene (lacZΔM15) in the cell. Both genes by themselves produce non-functional peptides; however, when expressed together, as when a plasmid containing lacZα is transformed into lacZΔM15 cells, they form a functional β-galactosidase. The presence of an active β-galactosidase may be detected when cells are grown in plates containing X-gal, forming characteristic blue colonies. However, the multiple cloning site, where a gene of interest may be ligated into the plasmid vector, is located within the lacZα gene. Successful ligation therefore disrupts the lacZα gene, and no functional β-galactosidase can form, resulting in white colonies. Cells containing a successfully ligated insert can then be easily identified by their white coloration, distinguishing them from the blue colonies carrying plasmid without an insert.
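The decision logic of blue-white screening is simple enough to state as a few lines of code. The sketch below is purely illustrative and merely restates the α-complementation rule described above; the function and parameter names are invented for the example.

```python
# Minimal sketch of the blue-white screening logic described above:
# a colony turns blue only if both the plasmid's lacZ-alpha fragment and the
# host's lacZ-delta-M15 fragment are functional, so alpha-complementation can
# produce beta-galactosidase that cleaves X-gal in the plate.
def colony_color(insert_ligated_into_mcs: bool, host_is_lacZdeltaM15: bool = True) -> str:
    lacZ_alpha_functional = not insert_ligated_into_mcs   # the MCS sits inside lacZ-alpha
    if lacZ_alpha_functional and host_is_lacZdeltaM15:
        return "blue"    # functional beta-galactosidase cleaves X-gal
    return "white"       # no alpha-complementation, X-gal not cleaved

print(colony_color(insert_ligated_into_mcs=True))    # -> "white" (carries the insert)
print(colony_color(insert_ligated_into_mcs=False))   # -> "blue"  (empty vector)
```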
Other commonly used reporter genes are green fluorescent protein (GFP), which produces cells that glow green under blue light, and the enzyme luciferase, which catalyzes a reaction with luciferin to emit light. The recombinant DNA may also be detected using other methods such as nucleic acid hybridization with radioactive RNA probe, while cells that expressed the desired protein from the plasmid may also be detected using immunological methods.
== References ==
== External links ==
Bacterial Transformation (a Flash Animation)
"Ready, aim, fire!" At the Max Planck Institute for Molecular Plant Physiology in Potsdam-Golm plant cells are 'bombarded' using a particle gun | Wikipedia/Transformation_(genetics) |
DNA sequencing is the process of determining the nucleic acid sequence – the order of nucleotides in DNA. It includes any method or technology that is used to determine the order of the four bases: adenine, thymine, cytosine, and guanine. The advent of rapid DNA sequencing methods has greatly accelerated biological and medical research and discovery.
Knowledge of DNA sequences has become indispensable for basic biological research, DNA Genographic Projects, and numerous applied fields such as medical diagnosis, biotechnology, forensic biology, virology and biological systematics. Comparing healthy and mutated DNA sequences can diagnose different diseases, including various cancers, characterize the antibody repertoire, and can be used to guide patient treatment. Having a quick way to sequence DNA allows for faster and more individualized medical care to be administered, and for more organisms to be identified and cataloged.
The rapid advancements in DNA sequencing technology have played a crucial role in sequencing complete genomes of various life forms, including humans, as well as numerous animal, plant, and microbial species.
The first DNA sequences were obtained in the early 1970s by academic researchers using laborious methods based on two-dimensional chromatography. Following the development of fluorescence-based sequencing methods with a DNA sequencer, DNA sequencing has become easier and orders of magnitude faster.
== Applications ==
DNA sequencing may be used to determine the sequence of individual genes, larger genetic regions (i.e. clusters of genes or operons), full chromosomes, or entire genomes of any organism. DNA sequencing is also the most efficient way to indirectly sequence RNA or proteins (via their open reading frames). In fact, DNA sequencing has become a key technology in many areas of biology and other sciences such as medicine, forensics, and anthropology.
=== Molecular biology ===
Sequencing is used in molecular biology to study genomes and the proteins they encode. Information obtained using sequencing allows researchers to identify changes in genes and noncoding DNA (including regulatory sequences), associations with diseases and phenotypes, and identify potential drug targets.
=== Evolutionary biology ===
Since DNA is an informative macromolecule in terms of transmission from one generation to another, DNA sequencing is used in evolutionary biology to study how different organisms are related and how they evolved. In February 2021, scientists reported, for the first time, the sequencing of DNA from animal remains, a mammoth in this instance, over a million years old, the oldest DNA sequenced to date.
=== Metagenomics ===
The field of metagenomics involves identification of organisms present in a body of water, sewage, dirt, debris filtered from the air, or swab samples from organisms. Knowing which organisms are present in a particular environment is critical to research in ecology, epidemiology, microbiology, and other fields. Sequencing enables researchers to determine which types of microbes may be present in a microbiome, for example.
=== Virology ===
As most viruses are too small to be seen by a light microscope, sequencing is one of the main tools in virology to identify and study the virus. Viral genomes can be based in DNA or RNA. RNA viruses are more time-sensitive for genome sequencing, as they degrade faster in clinical samples. Traditional Sanger sequencing and next-generation sequencing are used to sequence viruses in basic and clinical research, as well as for the diagnosis of emerging viral infections, molecular epidemiology of viral pathogens, and drug-resistance testing. There are more than 2.3 million unique viral sequences in GenBank. Recently, NGS has surpassed traditional Sanger as the most popular approach for generating viral genomes.
During the 1997 avian influenza outbreak, viral sequencing determined that the influenza sub-type originated through reassortment between quail and poultry. This led to legislation in Hong Kong that prohibited selling live quail and poultry together at market. Viral sequencing can also be used to estimate when a viral outbreak began by using a molecular clock technique.
=== Medicine ===
Medical technicians may sequence genes (or, theoretically, full genomes) from patients to determine if there is risk of genetic diseases. This is a form of genetic testing, though some genetic tests may not involve DNA sequencing.
As of 2013, DNA sequencing was increasingly used to diagnose and treat rare diseases. As more and more genes are identified that cause rare genetic diseases, molecular diagnoses for patients become more mainstream. DNA sequencing allows clinicians to identify genetic diseases, improve disease management, provide reproductive counseling, and offer more effective therapies. Gene sequencing panels are used to identify multiple potential genetic causes of a suspected disorder.
Also, DNA sequencing may be useful for identifying a specific bacterium, allowing more precise antibiotic treatments and thereby reducing the risk of creating antimicrobial resistance in bacterial populations.
=== Forensic investigation ===
DNA sequencing may be used along with DNA profiling methods for forensic identification and paternity testing. DNA testing has evolved tremendously in the last few decades to ultimately link a DNA print to what is under investigation. The DNA patterns in fingerprints, saliva, hair follicles, and other bodily samples uniquely distinguish each living organism from another, making DNA analysis an invaluable tool in forensic science. DNA testing detects specific genomic regions in a DNA strand to produce a unique and individualized pattern, which can be used to identify individuals or determine their relationships.
Advancements in DNA sequencing technology have made it possible to analyze and compare large amounts of genetic data quickly and accurately, allowing investigators to gather evidence and solve crimes more efficiently. The technology has been applied to forensic identification, paternity testing, and human identification in cases where traditional identification methods are unavailable or unreliable, and it has enabled new forensic techniques such as DNA phenotyping, which allows investigators to predict an individual's physical characteristics from genetic data.
Beyond forensics, DNA sequencing supports medical research and diagnosis, where it is used to identify genetic mutations and variations associated with diseases and disorders, enabling more accurate diagnoses and targeted treatments. It has also been used in conservation biology to study the genetic diversity of endangered species and to develop strategies for their conservation.
The use of DNA sequencing has also raised important ethical and legal considerations, including concerns about the privacy and security of genetic data and the potential for misuse or discrimination based on genetic information. These concerns have prompted ongoing debates about the regulations and guidelines needed to ensure the responsible use of DNA sequencing technology.
Overall, the development of DNA sequencing technology has revolutionized the field of forensic science and has far-reaching implications for our understanding of genetics, medicine, and conservation biology.
== The four canonical bases ==
The canonical structure of DNA has four bases: thymine (T), adenine (A), cytosine (C), and guanine (G). DNA sequencing is the determination of the physical order of these bases in a molecule of DNA. However, there are many other bases that may be present in a molecule. In some viruses (specifically, bacteriophages), cytosine may be replaced by hydroxymethylcytosine or glucosylated hydroxymethylcytosine. In mammalian DNA, variant bases with methyl groups or phosphosulfate may be found. Depending on the sequencing technique, a particular modification, e.g., the 5mC (5-methylcytosine) common in humans, may or may not be detected.
In almost all organisms, DNA is synthesized in vivo using only the four canonical bases; modifications that occur after replication create other bases such as 5-methylcytosine. However, some bacteriophages can incorporate a non-standard base directly.
In addition to modifications, DNA is under constant assault from environmental agents such as UV light and oxygen radicals. At present, such damaged bases are not detected by most DNA sequencing methods, although PacBio has published work on this.
== History ==
=== Discovery of DNA structure and function ===
Deoxyribonucleic acid (DNA) was first discovered and isolated by Friedrich Miescher in 1869, but it remained under-studied for many decades because proteins, rather than DNA, were thought to hold the genetic blueprint to life. This situation changed after 1944 as a result of some experiments by Oswald Avery, Colin MacLeod, and Maclyn McCarty demonstrating that purified DNA could change one strain of bacteria into another. This was the first time that DNA was shown capable of transforming the properties of cells.
In 1953, James Watson and Francis Crick put forward their double-helix model of DNA, based on X-ray diffraction data from DNA studied by Rosalind Franklin. According to the model, DNA is composed of two strands of nucleotides coiled around each other, linked together by hydrogen bonds and running in opposite directions. Each strand is composed of four complementary nucleotides – adenine (A), cytosine (C), guanine (G) and thymine (T) – with an A on one strand always paired with T on the other, and C always paired with G. They proposed that such a structure allowed each strand to be used to reconstruct the other, an idea central to the passing on of hereditary information between generations.
The foundation for sequencing proteins was first laid by the work of Frederick Sanger who by 1955 had completed the sequence of all the amino acids in insulin, a small protein secreted by the pancreas. This provided the first conclusive evidence that proteins were chemical entities with a specific molecular pattern rather than a random mixture of material suspended in fluid. Sanger's success in sequencing insulin spurred on x-ray crystallographers, including Watson and Crick, who by now were trying to understand how DNA directed the formation of proteins within a cell. Soon after attending a series of lectures given by Frederick Sanger in October 1954, Crick began developing a theory which argued that the arrangement of nucleotides in DNA determined the sequence of amino acids in proteins, which in turn helped determine the function of a protein. He published this theory in 1958.
=== RNA sequencing ===
RNA sequencing was one of the earliest forms of nucleotide sequencing. The major landmark of RNA sequencing is the sequence of the first complete gene and the complete genome of Bacteriophage MS2, identified and published by Walter Fiers and his coworkers at the University of Ghent (Ghent, Belgium), in 1972 and 1976. Traditional RNA sequencing methods require the creation of a cDNA molecule which must be sequenced.
==== Traditional RNA Sequencing Methods ====
Traditional RNA sequencing methods involve several steps:
1) Reverse transcription: The RNA molecule is first converted into a complementary DNA (cDNA) molecule using an enzyme called reverse transcriptase.
2) Amplification: The cDNA is then amplified by the polymerase chain reaction (PCR), producing multiple copies.
3) Sequencing: The amplified cDNA is then sequenced using a technique such as Sanger sequencing or Maxam–Gilbert sequencing.
==== Challenges and Limitations ====
Traditional RNA sequencing methods have several limitations. For example:
They require the creation of a cDNA molecule, which can be time-consuming and labor-intensive.
They are prone to errors and biases, which can affect the accuracy of the sequencing results.
They are limited in their ability to detect rare or low-abundance transcripts.
==== Advances in RNA Sequencing Technology ====
In recent years, advances in RNA sequencing technology have addressed some of these limitations. New methods such as next-generation sequencing (NGS) and single-molecule real-time (SMRT) sequencing have enabled faster, more accurate, and more cost-effective sequencing of RNA molecules. These advances have opened up new possibilities for studying gene expression, identifying new genes, and understanding the regulation of gene expression.
=== Early DNA sequencing methods ===
The first method for determining DNA sequences involved a location-specific primer extension strategy established by Ray Wu, a geneticist, at Cornell University in 1970. DNA polymerase catalysis and specific nucleotide labeling, both of which figure prominently in current sequencing schemes, were used to sequence the cohesive ends of lambda phage DNA. Between 1970 and 1973, Wu, scientist Radha Padmanabhan and colleagues demonstrated that this method can be employed to determine any DNA sequence using synthetic location-specific primers.
Walter Gilbert, a biochemist, and Allan Maxam, a molecular geneticist, at Harvard also developed sequencing methods, including one for "DNA sequencing by chemical degradation". In 1973, Gilbert and Maxam reported the sequence of 24 basepairs using a method known as wandering-spot analysis. Advancements in sequencing were aided by the concurrent development of recombinant DNA technology, allowing DNA samples to be isolated from sources other than viruses.
Two years later, in 1975, Frederick Sanger, a biochemist, and Alan Coulson, a genome scientist, developed a method to sequence DNA. The technique, known as the "Plus and Minus" method, involved supplying all the components for DNA synthesis but excluding one of the four bases needed to complete the DNA.
In 1976, Gilbert and Maxam, invented a method for rapidly sequencing DNA while at Harvard, known as the Maxam–Gilbert sequencing. The technique involved treating radiolabelled DNA with a chemical and using a polyacrylamide gel to determine the sequence.
In 1977, Sanger then adopted a primer-extension strategy to develop more rapid DNA sequencing methods at the MRC Centre, Cambridge, UK. This technique was similar to his "Plus and Minus" strategy; however, it was based upon the selective incorporation of chain-terminating dideoxynucleotides (ddNTPs) by DNA polymerase during in vitro DNA replication. Sanger published this method in the same year.
=== Sequencing of full genomes ===
The first full DNA genome to be sequenced was that of bacteriophage φX174 in 1977. Medical Research Council scientists deciphered the complete DNA sequence of the Epstein-Barr virus in 1984, finding it contained 172,282 nucleotides. Completion of the sequence marked a significant turning point in DNA sequencing because it was achieved with no prior genetic profile knowledge of the virus.
A non-radioactive method for transferring the DNA molecules of sequencing reaction mixtures onto an immobilizing matrix during electrophoresis was developed by Herbert Pohl and co-workers in the early 1980s. This was followed by the commercialization of the DNA sequencer "Direct-Blotting-Electrophoresis-System GATC 1500" by GATC Biotech, which was used intensively in the framework of the EU genome-sequencing programme to determine the complete DNA sequence of yeast Saccharomyces cerevisiae chromosome II. Leroy E. Hood's laboratory at the California Institute of Technology announced the first semi-automated DNA sequencing machine in 1986. This was followed by Applied Biosystems' marketing of the first fully automated sequencing machine, the ABI 370, in 1987 and by Dupont's Genesis 2000, which used a novel fluorescent labeling technique enabling all four dideoxynucleotides to be identified in a single lane. By 1990, the U.S. National Institutes of Health (NIH) had begun large-scale sequencing trials on Mycoplasma capricolum, Escherichia coli, Caenorhabditis elegans, and Saccharomyces cerevisiae at a cost of US$0.75 per base. Meanwhile, sequencing of human cDNA sequences called expressed sequence tags began in Craig Venter's lab, an attempt to capture the coding fraction of the human genome. In 1995, Venter, Hamilton Smith, and colleagues at The Institute for Genomic Research (TIGR) published the first complete genome of a free-living organism, the bacterium Haemophilus influenzae. The circular chromosome contains 1,830,137 bases and its publication in the journal Science marked the first published use of whole-genome shotgun sequencing, eliminating the need for initial mapping efforts.
By 2003, the Human Genome Project's shotgun sequencing methods had been used to produce a draft sequence of the human genome; it had a 92% accuracy. In 2022, scientists successfully sequenced the last 8% of the human genome. The fully sequenced standard reference genome is called GRCh38.p14, and it contains 3.1 billion base pairs.
=== High-throughput sequencing (HTS) methods ===
Several new methods for DNA sequencing were developed in the mid to late 1990s and were implemented in commercial DNA sequencers by 2000. Together these were called the "next-generation" or "second-generation" sequencing (NGS) methods, in order to distinguish them from the earlier methods, including Sanger sequencing. In contrast to the first generation of sequencing, NGS technology is typically characterized by being highly scalable, allowing the entire genome to be sequenced at once. Usually, this is accomplished by fragmenting the genome into small pieces, randomly sampling for a fragment, and sequencing it using one of a variety of technologies, such as those described below. An entire genome is possible because multiple fragments are sequenced at once (giving it the name "massively parallel" sequencing) in an automated process.
NGS technology has tremendously empowered researchers to look for insights into health, has enabled anthropologists to investigate human origins, and is catalyzing the "personalized medicine" movement. However, it has also opened the door to more room for error. There are many software tools to carry out the computational analysis of NGS data, often compiled at online platforms such as CSI NGS Portal, each with its own algorithm. Even the parameters within one software package can change the outcome of the analysis. In addition, the large quantities of data produced by DNA sequencing have also required development of new methods and programs for sequence analysis. Several efforts to develop standards in the NGS field have been attempted to address these challenges, most of which have been small-scale efforts arising from individual labs. Most recently, a large, organized, FDA-funded effort has culminated in the BioCompute standard.
On 26 October 1990, Roger Tsien, Pepi Ross, Margaret Fahnestock and Allan J Johnston filed a patent describing stepwise ("base-by-base") sequencing with removable 3' blockers on DNA arrays (blots and single DNA molecules).
In 1996, Pål Nyrén and his student Mostafa Ronaghi at the Royal Institute of Technology in Stockholm published their method of pyrosequencing.
On 1 April 1997, Pascal Mayer and Laurent Farinelli submitted patents to the World Intellectual Property Organization describing DNA colony sequencing. The DNA sample preparation and random surface-polymerase chain reaction (PCR) arraying methods described in this patent, coupled to Roger Tsien et al.'s "base-by-base" sequencing method, is now implemented in Illumina's Hi-Seq genome sequencers.
In 1998, Phil Green and Brent Ewing of the University of Washington described their phred quality score for sequencer data analysis, a landmark analysis technique that gained widespread adoption, and which is still the most common metric for assessing the accuracy of a sequencing platform.
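The phred scale itself is a simple logarithmic mapping: a quality score Q corresponds to an estimated base-call error probability P through Q = -10 · log10(P), so Q20 means roughly a 1-in-100 chance of a wrong call and Q30 a 1-in-1000 chance. A minimal sketch of the conversion:

```python
import math

def phred_quality(error_prob: float) -> float:
    """Phred quality score for a given base-call error probability."""
    return -10 * math.log10(error_prob)

def error_probability(q: float) -> float:
    """Inverse mapping: error probability implied by a phred score."""
    return 10 ** (-q / 10)

# Q20 = 1 error in 100 calls, Q30 = 1 in 1,000, Q40 = 1 in 10,000
for q in (20, 30, 40):
    print(f"Q{q}: error probability {error_probability(q):.0e}")
```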
Lynx Therapeutics published and marketed massively parallel signature sequencing (MPSS), in 2000. This method incorporated a parallelized, adapter/ligation-mediated, bead-based sequencing technology and served as the first commercially available "next-generation" sequencing method, though no DNA sequencers were sold to independent laboratories.
== Basic methods ==
=== Maxam-Gilbert sequencing ===
Allan Maxam and Walter Gilbert published a DNA sequencing method in 1977 based on chemical modification of DNA and subsequent cleavage at specific bases. Also known as chemical sequencing, this method allowed purified samples of double-stranded DNA to be used without further cloning. This method's use of radioactive labeling and its technical complexity discouraged extensive use after refinements in the Sanger methods had been made.
Maxam-Gilbert sequencing requires radioactive labeling at one 5' end of the DNA and purification of the DNA fragment to be sequenced. Chemical treatment then generates breaks at a small proportion of one or two of the four nucleotide bases in each of four reactions (G, A+G, C, C+T). The concentration of the modifying chemicals is controlled to introduce on average one modification per DNA molecule. Thus a series of labeled fragments is generated, from the radiolabeled end to the first "cut" site in each molecule. The fragments in the four reactions are electrophoresed side by side in denaturing acrylamide gels for size separation. To visualize the fragments, the gel is exposed to X-ray film for autoradiography, yielding a series of dark bands each corresponding to a radiolabeled DNA fragment, from which the sequence may be inferred.
This method is mostly obsolete as of 2023.
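As a purely illustrative toy (not a model of the chemistry), the way a sequence is read off the four lanes can be sketched in code: a cleavage at a given position produces a labeled fragment, and the lane(s) in which the corresponding band appears identify the base at that position (G appears in the G and A+G lanes, A only in A+G, C in the C and C+T lanes, T only in C+T). The lane names come from the description above; everything else is an assumption for illustration.

```python
# Toy illustration of reading a Maxam-Gilbert gel (not a model of the chemistry).
# Each sequence position yields one labeled fragment; the lanes in which that
# band appears identify the base at that position.
# Lanes: G cleaves G; A+G cleaves A and G; C cleaves C; C+T cleaves C and T.

def lanes_for(base: str) -> set[str]:
    return {
        "G": {"G", "A+G"},
        "A": {"A+G"},
        "C": {"C", "C+T"},
        "T": {"C+T"},
    }[base]

def simulate_gel(sequence: str) -> dict[int, set[str]]:
    """Map position (counted from the labeled 5' end) -> lanes showing a band."""
    return {i: lanes_for(base) for i, base in enumerate(sequence, start=1)}

def read_gel(bands: dict[int, set[str]]) -> str:
    seq = []
    for pos in sorted(bands):            # read the gel from the shortest band upward
        lanes = bands[pos]
        if "G" in lanes:
            seq.append("G")
        elif "A+G" in lanes:
            seq.append("A")
        elif "C" in lanes:
            seq.append("C")
        else:                            # band appears only in the C+T lane
            seq.append("T")
    return "".join(seq)

assert read_gel(simulate_gel("GATTACA")) == "GATTACA"
```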
=== Chain-termination methods ===
The chain-termination method developed by Frederick Sanger and coworkers in 1977 soon became the method of choice, owing to its relative ease and reliability. When invented, the chain-terminator method used fewer toxic chemicals and lower amounts of radioactivity than the Maxam and Gilbert method. Because of its comparative ease, the Sanger method was soon automated and was the method used in the first generation of DNA sequencers.
Sanger sequencing is the method which prevailed from the 1980s until the mid-2000s. Over that period, great advances were made in the technique, such as fluorescent labelling, capillary electrophoresis, and general automation. These developments allowed much more efficient sequencing, leading to lower costs. The Sanger method, in mass production form, is the technology which produced the first human genome in 2001, ushering in the age of genomics. However, later in the decade, radically different approaches reached the market, bringing the cost per genome down from $100 million in 2001 to $10,000 in 2011.
=== Sequencing by synthesis ===
The objective of sequencing by synthesis (SBS) is to determine the sequence of a DNA sample by detecting the incorporation of nucleotides by a DNA polymerase. An engineered polymerase is used to synthesize a copy of a single strand of DNA, and the incorporation of each nucleotide is monitored. The principle of real-time sequencing by synthesis was first described in 1993, with improvements published some years later. The key parts are highly similar for all embodiments of SBS and include (1) amplification of DNA (to enhance the subsequent signal) and attachment of the DNA to be sequenced to a solid support, (2) generation of single-stranded DNA on the solid support, (3) incorporation of nucleotides using an engineered polymerase and (4) real-time detection of the incorporation of each nucleotide. Steps 3 and 4 are repeated, and the sequence is assembled from the signals obtained in step 4. This principle of real-time sequencing-by-synthesis has been used for almost all massively parallel sequencing instruments, including 454, PacBio, IonTorrent, Illumina and MGI.
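A greatly simplified sketch of the read-out loop shared by these platforms (steps 3 and 4 above) is given below. It is illustrative only: real instruments detect light, pH or other physical signals from each incorporation event rather than looking up the template, and the strand orientation is idealized.

```python
# Greatly simplified sketch of sequencing-by-synthesis read-out: in each cycle
# one nucleotide complementary to the next template base is incorporated
# (step 3) and recorded as a "signal" (step 4); the read is assembled from
# the recorded signals.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def sbs_read(template_3to5: str, cycles: int) -> str:
    """Template is given in the 3'->5' order the polymerase encounters it;
    the returned read is the 5'->3' sequence of the newly synthesized strand."""
    signals = []
    for base in template_3to5[:cycles]:
        signals.append(COMPLEMENT[base])   # detected nucleotide for this cycle
    return "".join(signals)                # assemble the read from the signals

print(sbs_read("TACGCA", cycles=6))        # -> "ATGCGT"
```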
== Large-scale sequencing and de novo sequencing ==
Large-scale sequencing often aims at sequencing very long DNA pieces, such as whole chromosomes, although large-scale sequencing can also be used to generate very large numbers of short sequences, such as found in phage display. For longer targets such as chromosomes, common approaches consist of cutting (with restriction enzymes) or shearing (with mechanical forces) large DNA fragments into shorter DNA fragments. The fragmented DNA may then be cloned into a DNA vector and amplified in a bacterial host such as Escherichia coli. Short DNA fragments purified from individual bacterial colonies are individually sequenced and assembled electronically into one long, contiguous sequence. Studies have shown that adding a size selection step to collect DNA fragments of uniform size can improve sequencing efficiency and accuracy of the genome assembly. In these studies, automated sizing has proven to be more reproducible and precise than manual gel sizing.
The term "de novo sequencing" specifically refers to methods used to determine the sequence of DNA with no previously known sequence. De novo translates from Latin as "from the beginning". Gaps in the assembled sequence may be filled by primer walking. The different strategies have different tradeoffs in speed and accuracy; shotgun methods are often used for sequencing large genomes, but its assembly is complex and difficult, particularly with sequence repeats often causing gaps in genome assembly.
Most sequencing approaches use an in vitro cloning step to amplify individual DNA molecules, because their molecular detection methods are not sensitive enough for single molecule sequencing. Emulsion PCR isolates individual DNA molecules along with primer-coated beads in aqueous droplets within an oil phase. A polymerase chain reaction (PCR) then coats each bead with clonal copies of the DNA molecule followed by immobilization for later sequencing. Emulsion PCR is used in the methods developed by Margulies et al. (commercialized by 454 Life Sciences), Shendure and Porreca et al. (also known as "polony sequencing") and SOLiD sequencing (developed by Agencourt, later Applied Biosystems, now Life Technologies). Emulsion PCR is also used in the GemCode and Chromium platforms developed by 10x Genomics.
=== Shotgun sequencing ===
Shotgun sequencing is a sequencing method designed for analysis of DNA sequences longer than 1000 base pairs, up to and including entire chromosomes. This method requires the target DNA to be broken into random fragments. After sequencing individual fragments using the chain termination method, the sequences can be reassembled on the basis of their overlapping regions.
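The reassembly step can be illustrated with a naive greedy overlap merge, sketched below. This is not how production assemblers work (they use overlap graphs or de Bruijn graphs and must handle sequencing errors and repeats), but it shows the basic idea of joining reads by their overlapping ends. The read strings are invented for the example.

```python
def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of a that matches a prefix of b."""
    for size in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:size]):
            return size
    return 0

def greedy_assemble(fragments: list[str]) -> str:
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best = (0, 0, 1)                       # (overlap length, i, j)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    o = overlap(a, b)
                    if o > best[0]:
                        best = (o, i, j)
        o, i, j = best
        if o == 0:                             # no overlaps left: just concatenate
            return "".join(frags)
        merged = frags[i] + frags[j][o:]       # join the two reads across the overlap
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags[0]

reads = ["AGCTTAGC", "TAGCTAGG", "CTAGGCATT"]
print(greedy_assemble(reads))                  # -> "AGCTTAGCTAGGCATT"
```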
== High-throughput methods ==
High-throughput sequencing, which includes next-generation "short-read" and third-generation "long-read" sequencing methods, applies to exome sequencing, genome sequencing, genome resequencing, transcriptome profiling (RNA-Seq), DNA-protein interactions (ChIP-sequencing), and epigenome characterization.
The high demand for low-cost sequencing has driven the development of high-throughput sequencing technologies that parallelize the sequencing process, producing thousands or millions of sequences concurrently. High-throughput sequencing technologies are intended to lower the cost of DNA sequencing beyond what is possible with standard dye-terminator methods. In ultra-high-throughput sequencing as many as 500,000 sequencing-by-synthesis operations may be run in parallel. Such technologies led to the ability to sequence an entire human genome in as little as one day. As of 2019, corporate leaders in the development of high-throughput sequencing products included Illumina, Qiagen and ThermoFisher Scientific.
=== Long-read sequencing methods ===
==== Single molecule real time (SMRT) sequencing ====
SMRT sequencing is based on the sequencing by synthesis approach. The DNA is synthesized in zero-mode wave-guides (ZMWs) – small well-like containers with the capturing tools located at the bottom of the well. The sequencing is performed with use of unmodified polymerase (attached to the ZMW bottom) and fluorescently labelled nucleotides flowing freely in the solution. The wells are constructed in a way that only the fluorescence occurring by the bottom of the well is detected. The fluorescent label is detached from the nucleotide upon its incorporation into the DNA strand, leaving an unmodified DNA strand. According to Pacific Biosciences (PacBio), the SMRT technology developer, this methodology allows detection of nucleotide modifications (such as cytosine methylation). This happens through the observation of polymerase kinetics. This approach allows reads of 20,000 nucleotides or more, with average read lengths of 5 kilobases. In 2015, Pacific Biosciences announced the launch of a new sequencing instrument called the Sequel System, with 1 million ZMWs compared to 150,000 ZMWs in the PacBio RS II instrument. SMRT sequencing is referred to as "third-generation" or "long-read" sequencing.
==== Nanopore DNA sequencing ====
In nanopore sequencing, DNA passing through the nanopore changes the ion current through the pore. This change depends on the shape, size and length of the DNA sequence. Each type of nucleotide blocks the ion flow through the pore for a different period of time. The method does not require modified nucleotides and is performed in real time. Nanopore sequencing is referred to as "third-generation" or "long-read" sequencing, along with SMRT sequencing.
Early industrial research into this method was based on a technique called 'exonuclease sequencing', where the readout of electrical signals occurred as nucleotides passed by alpha (α)-hemolysin pores covalently bound with cyclodextrin. However, the subsequent commercial method, 'strand sequencing', sequenced DNA bases in an intact strand.
Two main areas of nanopore sequencing in development are solid-state nanopore sequencing and protein-based nanopore sequencing. Protein nanopore sequencing utilizes membrane protein complexes such as α-hemolysin, MspA (Mycobacterium smegmatis Porin A) or CssG, which show great promise given their ability to distinguish between individual nucleotides and groups of nucleotides. In contrast, solid-state nanopore sequencing utilizes synthetic materials such as silicon nitride and aluminum oxide and is preferred for its superior mechanical ability and thermal and chemical stability. The fabrication method is essential for this type of sequencing, given that the nanopore array can contain hundreds of pores with diameters smaller than eight nanometers.
The concept originated from the idea that single stranded DNA or RNA molecules can be electrophoretically driven in a strict linear sequence through a biological pore that can be less than eight nanometers, and can be detected given that the molecules release an ionic current while moving through the pore. The pore contains a detection region capable of recognizing different bases, with each base generating various time specific signals corresponding to the sequence of bases as they cross the pore which are then evaluated. Precise control over the DNA transport through the pore is crucial for success. Various enzymes such as exonucleases and polymerases have been used to moderate this process by positioning them near the pore's entrance.
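As a toy illustration of the decoding idea only: if each base blocked the current for its own characteristic dwell time, a base-caller could assign each blockade event to the nearest reference value. The dwell times below are invented, and real base-callers use statistical or neural models over noisy signals that depend on several bases occupying the pore at once.

```python
# Purely illustrative decoder: assign each blockade event to the base whose
# reference dwell time is closest. The reference values are invented; real
# nanopore base-calling works on noisy, multi-base ("k-mer") signals.
REFERENCE_DWELL_MS = {"A": 1.0, "C": 1.5, "G": 2.0, "T": 2.5}   # invented values

def call_bases(dwell_times_ms: list[float]) -> str:
    read = []
    for dwell in dwell_times_ms:
        base = min(REFERENCE_DWELL_MS, key=lambda b: abs(REFERENCE_DWELL_MS[b] - dwell))
        read.append(base)
    return "".join(read)

print(call_bases([1.1, 2.4, 1.6, 1.9]))   # -> "ATCG"
```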
=== Short-read sequencing methods ===
==== Massively parallel signature sequencing (MPSS) ====
The first of the high-throughput sequencing technologies, massively parallel signature sequencing (or MPSS, also called next generation sequencing), was developed in the 1990s at Lynx Therapeutics, a company founded in 1992 by Sydney Brenner and Sam Eletr. MPSS was a bead-based method that used a complex approach of adapter ligation followed by adapter decoding, reading the sequence in increments of four nucleotides. This method made it susceptible to sequence-specific bias or loss of specific sequences. Because the technology was so complex, MPSS was only performed 'in-house' by Lynx Therapeutics and no DNA sequencing machines were sold to independent laboratories. Lynx Therapeutics merged with Solexa (later acquired by Illumina) in 2004, leading to the development of sequencing-by-synthesis, a simpler approach acquired from Manteia Predictive Medicine, which rendered MPSS obsolete. However, the essential properties of the MPSS output were typical of later high-throughput data types, including hundreds of thousands of short DNA sequences. In the case of MPSS, these were typically used for sequencing cDNA for measurements of gene expression levels.
==== Polony sequencing ====
The polony sequencing method, developed in the laboratory of George M. Church at Harvard, was among the first high-throughput sequencing systems and was used to sequence a full E. coli genome in 2005. It combined an in vitro paired-tag library with emulsion PCR, an automated microscope, and ligation-based sequencing chemistry to sequence an E. coli genome at an accuracy of >99.9999% and a cost approximately 1/9 that of Sanger sequencing. The technology was licensed to Agencourt Biosciences, subsequently spun out into Agencourt Personal Genomics, and eventually incorporated into the Applied Biosystems SOLiD platform. Applied Biosystems was later acquired by Life Technologies, now part of Thermo Fisher Scientific.
==== 454 pyrosequencing ====
A parallelized version of pyrosequencing was developed by 454 Life Sciences, which has since been acquired by Roche Diagnostics. The method amplifies DNA inside water droplets in an oil solution (emulsion PCR), with each droplet containing a single DNA template attached to a single primer-coated bead that then forms a clonal colony. The sequencing machine contains many picoliter-volume wells each containing a single bead and sequencing enzymes. Pyrosequencing uses luciferase to generate light for detection of the individual nucleotides added to the nascent DNA, and the combined data are used to generate sequence reads. This technology provides intermediate read length and price per base compared to Sanger sequencing on one end and Solexa and SOLiD on the other.
==== Illumina (Solexa) sequencing ====
Solexa, now part of Illumina, was founded by Shankar Balasubramanian and David Klenerman in 1998, and developed a sequencing method based on reversible dye-terminator technology and engineered polymerases. The reversible terminated chemistry concept was invented by Bruno Canard and Simon Sarfati at the Pasteur Institute in Paris. It was developed internally at Solexa by those named on the relevant patents. In 2004, Solexa acquired the company Manteia Predictive Medicine in order to gain a massively parallel sequencing technology invented in 1997 by Pascal Mayer and Laurent Farinelli. It is based on "DNA clusters" or "DNA colonies", which involves the clonal amplification of DNA on a surface. The cluster technology was co-acquired with Lynx Therapeutics of California. Solexa Ltd. later merged with Lynx to form Solexa Inc.
In this method, DNA molecules and primers are first attached on a slide or flow cell and amplified with polymerase so that local clonal DNA colonies, later coined "DNA clusters", are formed. To determine the sequence, four types of reversible terminator bases (RT-bases) are added and non-incorporated nucleotides are washed away. A camera takes images of the fluorescently labeled nucleotides. Then the dye, along with the terminal 3' blocker, is chemically removed from the DNA, allowing for the next cycle to begin. Unlike pyrosequencing, the DNA chains are extended one nucleotide at a time and image acquisition can be performed at a delayed moment, allowing for very large arrays of DNA colonies to be captured by sequential images taken from a single camera.
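The cycle described above can be summarized algorithmically. The following is a minimal, hypothetical sketch, not Illumina's actual base-calling software: each cycle incorporates one reversible terminator per cluster, the imaged dye is reduced here to a base call, and blocker cleavage is implicit in allowing the next cycle. All function and variable names are illustrative assumptions.

```python
# Hypothetical sketch of one-base-per-cycle sequencing by synthesis.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def sequencing_by_synthesis(templates, n_cycles):
    """Simulate base calls for a list of clonal template strands (bases in read order)."""
    reads = ["" for _ in templates]
    for cycle in range(n_cycles):
        for i, template in enumerate(templates):
            if cycle >= len(template):
                continue  # template exhausted; nothing incorporated this cycle
            incorporated = COMPLEMENT[template[cycle]]  # one RT-base per cycle
            # Imaging step: the fluorescent label identifies the incorporated base,
            # then dye and 3' blocker are cleaved before the next cycle.
            reads[i] += incorporated
    return reads

print(sequencing_by_synthesis(["ACGTT", "TTGCA"], 5))  # ['TGCAA', 'AACGT']
```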
Decoupling the enzymatic reaction and the image capture allows for optimal throughput and theoretically unlimited sequencing capacity. With an optimal configuration, the ultimately reachable instrument throughput is thus dictated solely by the analog-to-digital conversion rate of the camera, multiplied by the number of cameras and divided by the number of pixels per DNA colony required for visualizing them optimally (approximately 10 pixels/colony). In 2012, with cameras operating at more than 10 MHz A/D conversion rates and available optics, fluidics and enzymatics, throughput can be multiples of 1 million nucleotides/second, corresponding roughly to 1 human genome equivalent at 1x coverage per hour per instrument, and 1 human genome re-sequenced (at approx. 30x) per day per instrument (equipped with a single camera).
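The throughput relationship stated above can be checked with a back-of-the-envelope calculation; the figures below are illustrative values taken from the text, not specifications of any particular instrument.

```python
# Throughput estimate: A/D rate x cameras / pixels per colony.
adc_rate_hz = 10e6        # pixels digitized per second per camera (10 MHz A/D)
n_cameras = 1
pixels_per_colony = 10    # ~10 pixels needed to resolve one DNA colony

nucleotides_per_second = adc_rate_hz * n_cameras / pixels_per_colony
print(f"{nucleotides_per_second:.0f} nucleotides/second")                 # 1,000,000
print(f"{nucleotides_per_second * 3600 / 3.2e9:.1f}x human genome (1x) per hour")
```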
==== Combinatorial probe anchor synthesis (cPAS) ====
This method is an upgraded modification of combinatorial probe anchor ligation (cPAL) technology described by Complete Genomics, which became part of the Chinese genomics company BGI in 2013. The two companies have refined the technology to allow for longer read lengths, shorter reaction times and faster time to results. In addition, data are now generated as contiguous full-length reads in the standard FASTQ file format and can be used as-is in most short-read-based bioinformatics analysis pipelines.
The two technologies that form the basis for this high-throughput sequencing technology are DNA nanoballs (DNBs) and patterned arrays for nanoball attachment to a solid surface. DNA nanoballs are formed by denaturing double-stranded, adapter-ligated libraries and ligating the forward strand only to a splint oligonucleotide to form a ssDNA circle. Faithful copies of the circles containing the DNA insert are produced using rolling circle amplification, which generates approximately 300–500 copies. The long strand of ssDNA folds upon itself to produce a three-dimensional nanoball structure that is approximately 220 nm in diameter. Making DNBs replaces the need to generate PCR copies of the library on the flow cell and as such can remove large proportions of duplicate reads, adapter-adapter ligations and PCR-induced errors.
The patterned array of positively charged spots is fabricated through photolithography and etching techniques followed by chemical modification to generate a sequencing flow cell. Each spot on the flow cell is approximately 250 nm in diameter, is separated from its neighbours by 700 nm (centre to centre), and allows easy attachment of a single negatively charged DNB, thus reducing under- or over-clustering on the flow cell.
Sequencing is then performed by addition of an oligonucleotide probe that attaches in combination to specific sites within the DNB. The probe acts as an anchor that then allows one of four single reversibly inactivated, labelled nucleotides to bind after flowing across the flow cell. Unbound nucleotides are washed away before laser excitation of the attached labels; the labels emit fluorescence, and the signal is captured by cameras and converted to a digital output for base calling. The attached base has its terminator and label chemically cleaved at completion of the cycle. The cycle is repeated with another flow of free, labelled nucleotides across the flow cell to allow the next nucleotide to bind and have its signal captured. This process is completed a number of times (usually 50 to 300 times) to determine the sequence of the inserted piece of DNA at a rate of approximately 40 million nucleotides per second as of 2018.
==== SOLiD sequencing ====
Applied Biosystems' (now a Life Technologies brand) SOLiD technology employs sequencing by ligation. Here, a pool of all possible oligonucleotides of a fixed length is labeled according to the sequenced position. Oligonucleotides are annealed and ligated; the preferential ligation by DNA ligase for matching sequences results in a signal informative of the nucleotide at that position. Each base in the template is sequenced twice, and the resulting data are decoded according to the two-base encoding scheme used in this method. Before sequencing, the DNA is amplified by emulsion PCR. The resulting beads, each containing single copies of the same DNA molecule, are deposited on a glass slide. The result is sequences of quantities and lengths comparable to Illumina sequencing. This sequencing-by-ligation method has been reported to have some issues sequencing palindromic sequences.
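The two-base encoding can be illustrated with a small sketch. The color assignments below follow one commonly published SOLiD color-space convention and are included for illustration only; decoding assumes an error-free color read and a known first base (normally the last base of the ligated adapter).

```python
# Sketch of two-base (color-space) encoding and decoding.
COLOR = {
    ("A", "A"): 0, ("C", "C"): 0, ("G", "G"): 0, ("T", "T"): 0,
    ("A", "C"): 1, ("C", "A"): 1, ("G", "T"): 1, ("T", "G"): 1,
    ("A", "G"): 2, ("G", "A"): 2, ("C", "T"): 2, ("T", "C"): 2,
    ("A", "T"): 3, ("T", "A"): 3, ("C", "G"): 3, ("G", "C"): 3,
}
DECODE = {(first, color): second for (first, second), color in COLOR.items()}

def encode(seq):
    """Each color reports on two adjacent bases, so every base is read twice."""
    return [COLOR[(a, b)] for a, b in zip(seq, seq[1:])]

def decode(first_base, colors):
    """Decoding needs a known first base to anchor the chain of dinucleotides."""
    seq = first_base
    for c in colors:
        seq += DECODE[(seq[-1], c)]
    return seq

colors = encode("ATGGCA")
print(colors)                  # [3, 1, 0, 3, 1]
print(decode("A", colors))     # 'ATGGCA'
```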
==== Ion Torrent semiconductor sequencing ====
Ion Torrent Systems Inc. (now owned by Life Technologies) developed a system based on standard sequencing chemistry but with a novel, semiconductor-based detection system. This method of sequencing is based on the detection of hydrogen ions that are released during the polymerisation of DNA, as opposed to the optical methods used in other sequencing systems. A microwell containing a template DNA strand to be sequenced is flooded with a single type of nucleotide. If the introduced nucleotide is complementary to the leading template nucleotide, it is incorporated into the growing complementary strand. This causes the release of a hydrogen ion that triggers a hypersensitive ion sensor, indicating that a reaction has occurred. If homopolymer repeats are present in the template sequence, multiple nucleotides will be incorporated in a single cycle. This leads to a corresponding number of released hydrogen ions and a proportionally higher electronic signal.
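The proportionality between homopolymer length and signal can be sketched as a simple flow-by-flow simulation. The flow order, signal model and function names below are illustrative assumptions, not a description of the vendor's software; real signals include noise and incomplete incorporation.

```python
# Illustrative flow-by-flow incorporation: each flow delivers one nucleotide type,
# and the ideal signal equals the number of bases incorporated in that flow.
def ion_torrent_flows(template, flow_order="TACG", n_flows=12):
    complement = {"A": "T", "T": "A", "C": "G", "G": "C"}
    target = "".join(complement[b] for b in template)  # strand being synthesized
    pos, signals = 0, []
    for i in range(n_flows):
        nucleotide = flow_order[i % len(flow_order)]
        count = 0
        while pos < len(target) and target[pos] == nucleotide:
            count += 1           # each incorporation releases one H+ ion
            pos += 1
        signals.append((nucleotide, count))  # signal ~ number of H+ released
    return signals

print(ion_torrent_flows("TTAGC"))  # the AA homopolymer gives a double-height signal
```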
==== DNA nanoball sequencing ====
DNA nanoball sequencing is a type of high throughput sequencing technology used to determine the entire genomic sequence of an organism. The company Complete Genomics uses this technology to sequence samples submitted by independent researchers. The method uses rolling circle replication to amplify small fragments of genomic DNA into DNA nanoballs. Unchained sequencing by ligation is then used to determine the nucleotide sequence. This method of DNA sequencing allows large numbers of DNA nanoballs to be sequenced per run and at low reagent costs compared to other high-throughput sequencing platforms. However, only short sequences of DNA are determined from each DNA nanoball which makes mapping the short reads to a reference genome difficult.
==== Heliscope single molecule sequencing ====
Heliscope sequencing is a method of single-molecule sequencing developed by Helicos Biosciences. It uses DNA fragments with added poly-A tail adapters which are attached to the flow cell surface. The next steps involve extension-based sequencing with cyclic washes of the flow cell with fluorescently labeled nucleotides (one nucleotide type at a time, as with the Sanger method). The reads are performed by the Heliscope sequencer. The reads are short, averaging 35 bp. What made this technology especially novel was that it was the first of its class to sequence non-amplified DNA, thus preventing any read errors associated with amplification steps. In 2009 a human genome was sequenced using the Heliscope; however, in 2012 the company went bankrupt.
==== Microfluidic Systems ====
There are two main microfluidic systems that are used to sequence DNA: droplet-based microfluidics and digital microfluidics. Microfluidic devices address many of the limitations of current sequencing arrays.
Abate et al. studied the use of droplet-based microfluidic devices for DNA sequencing. These devices can form and process picoliter-sized droplets at a rate of thousands per second. The devices were created from polydimethylsiloxane (PDMS) and used Förster resonance energy transfer (FRET) assays to read the sequences of DNA encompassed in the droplets. Each position on the array tested for a specific 15-base sequence.
Fair et al. used digital microfluidic devices to study DNA pyrosequencing. Significant advantages include the portability of the device, low reagent volumes, speed of analysis, suitability for mass manufacturing, and high throughput. This study provided a proof of concept showing that digital devices can be used for pyrosequencing; the study included sequencing by synthesis, which involves extension by enzymes and the addition of labeled nucleotides.
Boles et al. also studied pyrosequencing on digital microfluidic devices. They used an electrowetting device to create, mix, and split droplets. The sequencing uses a three-enzyme protocol and DNA templates anchored with magnetic beads. The device was tested using two protocols and resulted in 100% accuracy based on raw pyrogram levels. The advantages of these digital microfluidic devices include small size, low cost, and achievable levels of functional integration.
Microfluidic DNA sequencing approaches can also be applied to the sequencing of RNA, using similar droplet microfluidic techniques such as the inDrops method. This shows that many of these DNA sequencing techniques can be applied more broadly and used to understand more about genomes and transcriptomes.
== Methods in development ==
DNA sequencing methods currently under development include reading the sequence as a DNA strand transits through nanopores (a method that is now commercial, although subsequent generations such as solid-state nanopores are still in development), and microscopy-based techniques, such as atomic force microscopy or transmission electron microscopy, that are used to identify the positions of individual nucleotides within long DNA fragments (>5,000 bp) by nucleotide labeling with heavier elements (e.g., halogens) for visual detection and recording.
Third generation technologies aim to increase throughput and decrease the time to result and cost by eliminating the need for excessive reagents and harnessing the processivity of DNA polymerase.
=== Tunnelling currents DNA sequencing ===
Another approach uses measurements of the electrical tunnelling currents across single-strand DNA as it moves through a channel. Depending on its electronic structure, each base affects the tunnelling current differently, allowing differentiation between different bases.
The use of tunnelling currents has the potential to sequence orders of magnitude faster than ionic current methods and the sequencing of several DNA oligomers and micro-RNA has already been achieved.
=== Sequencing by hybridization ===
Sequencing by hybridization is a non-enzymatic method that uses a DNA microarray. A single pool of DNA whose sequence is to be determined is fluorescently labeled and hybridized to an array containing known sequences. Strong hybridization signals from a given spot on the array identify its sequence in the DNA being sequenced.
This method of sequencing utilizes the binding characteristics of a library of short single-stranded DNA molecules (oligonucleotides), also called DNA probes, to reconstruct a target DNA sequence. Non-specific hybrids are removed by washing and the target DNA is eluted. Hybrids are re-arranged such that the DNA sequence can be reconstructed. The benefit of this sequencing type is its ability to capture a large number of targets with homogeneous coverage. Large amounts of chemicals and starting DNA are usually required. However, with the advent of solution-based hybridization, much less equipment and fewer chemicals are necessary.
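Reconstructing a target from the set of probes that hybridized is essentially the problem of assembling a sequence from its k-mer spectrum. The sketch below assumes an idealized, error-free spectrum in which every (k-1)-base overlap is unique; real spectra with repeats require graph-based (Eulerian-path style) reconstruction, and all names here are illustrative.

```python
# Toy reconstruction of a sequence from its k-mer spectrum (unique overlaps assumed).
def spectrum(seq, k):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def reconstruct(kmers):
    kmers = set(kmers)
    suffixes = {km[1:] for km in kmers}
    # The starting k-mer is the one whose prefix is not the suffix of any other k-mer.
    start = next(km for km in kmers if km[:-1] not in suffixes)
    seq, current = start, start
    kmers.discard(start)
    while kmers:
        nxt = next(km for km in kmers if km[:-1] == current[1:])  # unique extension
        seq += nxt[-1]
        current = nxt
        kmers.discard(nxt)
    return seq

target = "GATTACAGGA"
print(reconstruct(spectrum(target, 4)) == target)   # True for this repeat-free example
```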
=== Sequencing with mass spectrometry ===
Mass spectrometry may be used to determine DNA sequences. Matrix-assisted laser desorption ionization time-of-flight mass spectrometry, or MALDI-TOF MS, has specifically been investigated as an alternative method to gel electrophoresis for visualizing DNA fragments. With this method, DNA fragments generated by chain-termination sequencing reactions are compared by mass rather than by size. The mass of each nucleotide is different from the others and this difference is detectable by mass spectrometry. Single-nucleotide mutations in a fragment can be more easily detected with MS than by gel electrophoresis alone. MALDI-TOF MS can more easily detect differences between RNA fragments, so researchers may indirectly sequence DNA with MS-based methods by converting it to RNA first.
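The mass differences that make this possible can be illustrated with approximate average residue masses of the four deoxynucleotides. The values and the simple summation below are illustrative; end-group chemistry and adducts are ignored, so only mass differences between fragments are meaningful.

```python
# Approximate average masses (Da) of nucleotide residues within a DNA strand.
RESIDUE_MASS = {"A": 313.21, "C": 289.18, "G": 329.21, "T": 304.20}

def strand_mass(seq):
    return sum(RESIDUE_MASS[base] for base in seq)

wild_type = "ACGTACGT"
mutant    = "ACGTGCGT"   # single A->G substitution
delta = strand_mass(mutant) - strand_mass(wild_type)
print(f"Mass shift from one A->G substitution: {delta:.2f} Da")   # ~16 Da
```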
The higher resolution of DNA fragments permitted by MS-based methods is of special interest to researchers in forensic science, as they may wish to find single-nucleotide polymorphisms in human DNA samples to identify individuals. These samples may be highly degraded so forensic researchers often prefer mitochondrial DNA for its higher stability and applications for lineage studies. MS-based sequencing methods have been used to compare the sequences of human mitochondrial DNA from samples in a Federal Bureau of Investigation database and from bones found in mass graves of World War I soldiers.
Early chain-termination and TOF MS methods demonstrated read lengths of up to 100 base pairs. Researchers have been unable to exceed this average read size; like chain-termination sequencing alone, MS-based DNA sequencing may not be suitable for large de novo sequencing projects. Even so, a recent study did use the short sequence reads and mass spectrometry to compare single-nucleotide polymorphisms in pathogenic Streptococcus strains.
=== Microfluidic Sanger sequencing ===
In microfluidic Sanger sequencing the entire thermocycling amplification of DNA fragments as well as their separation by electrophoresis is done on a single glass wafer (approximately 10 cm in diameter), thus reducing reagent usage as well as cost. In some instances researchers have shown that they can increase the throughput of conventional sequencing through the use of microchips. Further research is still needed to make this use of the technology effective.
=== Microscopy-based techniques ===
This approach directly visualizes the sequence of DNA molecules using electron microscopy. The first identification of DNA base pairs within intact DNA molecules has been demonstrated by enzymatically incorporating modified bases, which contain atoms of increased atomic number, followed by direct visualization and identification of the individually labeled bases within a synthetic 3,272 base-pair DNA molecule and a 7,249 base-pair viral genome.
=== RNAP sequencing ===
This method is based on the use of RNA polymerase (RNAP), which is attached to a polystyrene bead. One end of the DNA to be sequenced is attached to another bead, with both beads being placed in optical traps. RNAP motion during transcription brings the beads closer together and changes their relative distance, which can then be recorded at single-nucleotide resolution. The sequence is deduced from four readouts obtained with lowered concentrations of each of the four nucleotide types, similarly to the Sanger method. A comparison is made between regions, and sequence information is deduced by comparing the known sequence regions to the unknown sequence regions.
=== In vitro virus high-throughput sequencing ===
A method has been developed to analyze full sets of protein interactions using a combination of 454 pyrosequencing and an in vitro virus mRNA display method. Specifically, this method covalently links proteins of interest to the mRNAs encoding them, then detects the mRNA pieces using reverse transcription PCRs. The mRNA may then be amplified and sequenced. The combined method was titled IVV-HiTSeq and can be performed under cell-free conditions, though its results may not be representative of in vivo conditions.
== Market share ==
While there are many different ways to sequence DNA, only a few dominate the market. In 2022, Illumina held about 80% of the market; the remainder was shared by a few players (PacBio, Oxford Nanopore, 454, MGI).
== Sample preparation ==
The success of any DNA sequencing protocol relies upon the DNA or RNA sample extraction and preparation from the biological material of interest.
A successful DNA extraction will yield a DNA sample with long, non-degraded strands.
A successful RNA extraction will yield an RNA sample that should be converted to complementary DNA (cDNA) using reverse transcriptase—a DNA polymerase that synthesizes complementary DNA from an RNA template in a PCR-like manner. Complementary DNA can then be processed the same way as genomic DNA.
After DNA or RNA extraction, samples may require further preparation depending on the sequencing method. For Sanger sequencing, either cloning procedures or PCR are required prior to sequencing. In the case of next-generation sequencing methods, library preparation is required before processing. Assessing the quality and quantity of nucleic acids both after extraction and after library preparation identifies degraded, fragmented, and low-purity samples, and helps ensure high-quality sequencing data.
== Development initiatives ==
In October 2006, the X Prize Foundation established an initiative to promote the development of full genome sequencing technologies, called the Archon X Prize, intending to award $10 million to "the first Team that can build a device and use it to sequence 100 human genomes within 10 days or less, with an accuracy of no more than one error in every 100,000 bases sequenced, with sequences accurately covering at least 98% of the genome, and at a recurring cost of no more than $10,000 (US) per genome."
Each year the National Human Genome Research Institute, or NHGRI, promotes grants for new research and developments in genomics. 2010 grants and 2011 candidates include continuing work in microfluidic, polony and base-heavy sequencing methodologies.
== Computational challenges ==
The sequencing technologies described here produce raw data that needs to be assembled into longer sequences such as complete genomes (sequence assembly). There are many computational challenges to achieving this, such as the evaluation of the raw sequence data, which is done by programs and algorithms such as Phred and Phrap. Other challenges arise from repetitive sequences that often prevent complete genome assemblies because they occur in many places in the genome. As a consequence, many sequences may not be assigned to particular chromosomes. The production of raw sequence data is only the beginning of its detailed bioinformatic analysis, and new methods for sequencing and for correcting sequencing errors continue to be developed.
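The assembly problem itself can be illustrated with a toy example that greedily merges reads by their largest exact suffix–prefix overlap. This is only a sketch of the idea; real assemblers use graph-based methods and must handle sequencing errors, repeats and coverage variation. All reads and names here are illustrative.

```python
# Toy greedy-overlap assembly of error-free reads.
def overlap(a, b):
    """Length of the longest suffix of a that equals a prefix of b."""
    for length in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        # Find the pair with the largest overlap and merge it.
        length, i, j = max(((overlap(a, b), i, j)
                            for i, a in enumerate(reads)
                            for j, b in enumerate(reads) if i != j),
                           key=lambda t: t[0])
        merged = reads[i] + reads[j][length:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads[0]

print(greedy_assemble(["TTAGGC", "AGGCAT", "GCATCC"]))  # TTAGGCATCC
```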
=== Read trimming ===
Sometimes, the raw reads produced by the sequencer are accurate and precise only over a fraction of their length. Using the entire read may introduce artifacts in downstream analyses such as genome assembly, SNP calling, or gene expression estimation. Two classes of trimming programs have been introduced, based on window-based or running-sum algorithms, and a number of trimming tools of both classes are available.
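As an illustration of the window-based class, the sketch below scans a read's Phred quality scores with a fixed-size window and trims at the first window whose mean quality falls below a threshold. The window size and threshold are arbitrary illustrative parameters, not defaults of any particular tool.

```python
# Minimal window-based quality trimmer.
def window_trim(qualities, window=4, min_mean_q=20):
    for start in range(len(qualities) - window + 1):
        window_scores = qualities[start:start + window]
        if sum(window_scores) / window < min_mean_q:
            return start          # keep bases [0, start)
    return len(qualities)         # no low-quality window found; keep the whole read

phred = [35, 34, 36, 33, 30, 28, 25, 18, 12, 8, 5, 3]   # quality decays toward the 3' end
cut = window_trim(phred)
print(f"Keep the first {cut} bases")   # Keep the first 6 bases
```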
== Ethical issues ==
Human genetics has been included within the field of bioethics since the early 1970s, and the growth in the use of DNA sequencing (particularly high-throughput sequencing) has introduced a number of ethical issues. One key issue is the ownership of an individual's DNA and the data produced when that DNA is sequenced. Regarding the DNA molecule itself, the leading legal case on this topic, Moore v. Regents of the University of California (1990), ruled that individuals have no property rights to discarded cells or any profits made using these cells (for instance, as a patented cell line). However, individuals have a right to informed consent regarding removal and use of cells. Regarding the data produced through DNA sequencing, Moore gives the individual no rights to the information derived from their DNA.
As DNA sequencing becomes more widespread, the storage, security and sharing of genomic data has also become more important. For instance, one concern is that insurers may use an individual's genomic data to modify their quote, depending on the perceived future health of the individual based on their DNA. In May 2008, the Genetic Information Nondiscrimination Act (GINA) was signed in the United States, prohibiting discrimination on the basis of genetic information with respect to health insurance and employment. In 2012, the US Presidential Commission for the Study of Bioethical Issues reported that existing privacy legislation for DNA sequencing data such as GINA and the Health Insurance Portability and Accountability Act were insufficient, noting that whole-genome sequencing data was particularly sensitive, as it could be used to identify not only the individual from which the data was created, but also their relatives.
In most of the United States, DNA that is "abandoned", such as that found on a licked stamp or envelope, coffee cup, cigarette, chewing gum, household trash, or hair that has fallen on a public sidewalk, may legally be collected and sequenced by anyone, including the police, private investigators, political opponents, or people involved in paternity disputes. As of 2013, eleven states have laws that can be interpreted to prohibit "DNA theft".
Ethical issues have also been raised by the increasing use of genetic variation screening, both in newborns and in adults by companies such as 23andMe. It has been asserted that screening for genetic variations can be harmful, increasing anxiety in individuals who have been found to have an increased risk of disease. For example, in one case noted in Time, doctors screening an ill baby for genetic variants chose not to inform the parents of an unrelated variant linked to dementia due to the harm it would cause to the parents. However, a 2011 study in The New England Journal of Medicine showed that individuals undergoing disease risk profiling did not show increased levels of anxiety. The development of next-generation sequencing technologies such as nanopore-based sequencing has also raised further ethical concerns.
== See also ==
== Notes ==
== References ==
== External links ==
A wikibook on next generation sequencing | Wikipedia/DNA_sequencing |
In molecular biology, proteins are generally thought to adopt unique structures determined by their amino acid sequences. However, proteins are not strictly static objects, but rather populate ensembles of (sometimes similar) conformations. Transitions between these states occur on a variety of length scales (tenths of angstroms to nm) and time scales (ns to s),
and have been linked to functionally relevant phenomena such as allosteric signaling and enzyme catalysis.
The study of protein dynamics is most directly concerned with the transitions between these states, but can also involve the nature and equilibrium populations of the states themselves.
These two perspectives—kinetics and thermodynamics, respectively—can be conceptually synthesized in an "energy landscape" paradigm:
highly populated states and the kinetics of transitions between them can be described by the depths of energy wells and the heights of energy barriers, respectively.
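This picture can be made quantitative in a rough, illustrative way: equilibrium populations follow Boltzmann weights of the well depths, and transition rates fall off exponentially with barrier height (an Arrhenius/Kramers-type estimate). The prefactor and energies below are assumed values for illustration only.

```python
# Hedged numerical sketch of the energy-landscape picture.
import math

RT = 0.593  # kcal/mol at ~298 K

def populations(well_energies):
    """Boltzmann populations of states given their well energies (kcal/mol)."""
    weights = [math.exp(-e / RT) for e in well_energies]
    total = sum(weights)
    return [w / total for w in weights]

def rate(barrier, prefactor=1e9):
    """Arrhenius-type rate estimate; the prefactor is an assumed attempt frequency (1/s)."""
    return prefactor * math.exp(-barrier / RT)

print(populations([0.0, 1.5]))                        # the deeper well is more populated
print(f"{rate(5.0):.2e} /s vs {rate(10.0):.2e} /s")   # a higher barrier means slower transitions
```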
== Local flexibility: atoms and residues ==
Portions of protein structures often deviate from the equilibrium state.
Some such excursions are harmonic, such as stochastic fluctuations of chemical bonds and bond angles.
Others are anharmonic, such as sidechains that jump between separate discrete energy minima, or rotamers.
Evidence for local flexibility is often obtained from NMR spectroscopy. Flexible and potentially disordered regions of a protein can be detected using the random coil index. Flexibility in folded proteins can be identified by analyzing the spin relaxation of individual atoms in the protein. Flexibility can also be observed in very high-resolution electron density maps produced by X-ray crystallography,
particularly when diffraction data is collected at room temperature instead of the traditional cryogenic temperature (typically near 100 K). Information on the frequency distribution and dynamics of local protein flexibility can be obtained using Raman and optical Kerr-effect spectroscopy as well as anisotropic microspectroscopy in the terahertz frequency domain. The internal re-arrangement of the amino acids during protein motion involves elastic and plastic deformations induced by viscoelastic forces, which can be probed with nano-rheology techniques.
== Regional flexibility: intra-domain multi-residue coupling ==
Many residues are in close spatial proximity in protein structures. This is true for most residues that are contiguous in the primary sequence, but also for many that are distal in sequence yet are brought into contact in the final folded structure. Because of this proximity, these residues' energy landscapes become coupled based on various biophysical phenomena such as hydrogen bonds, ionic bonds, and van der Waals interactions (see figure).
Transitions between states for such sets of residues therefore become correlated.
This is perhaps most obvious for surface-exposed loops, which often shift collectively to adopt different conformations in different crystal structures (see figure). However, coupled conformational heterogeneity is also sometimes evident in secondary structure. For example, consecutive residues and residues offset by 4 in the primary sequence often interact in α helices. Also, residues offset by 2 in the primary sequence point their sidechains toward the same face of β sheets and are close enough to interact sterically, as are residues on adjacent strands of the same β sheet. Some of these conformational changes are induced by post-translational modifications in protein structure, such as phosphorylation and methylation.
When these coupled residues form pathways linking functionally important parts of a protein,
they may participate in allosteric signaling.
For example, when a molecule of oxygen binds to one subunit of the hemoglobin tetramer,
that information is allosterically propagated to the other three subunits, thereby enhancing their affinity for oxygen.
In this case, the coupled flexibility in hemoglobin allows for cooperative oxygen binding,
which is physiologically useful because it allows rapid oxygen loading in lung tissue and rapid oxygen unloading in oxygen-deprived tissues (e.g. muscle).
== Global flexibility: multiple domains ==
The presence of multiple domains in proteins gives rise to a great deal of flexibility and mobility, leading to protein domain dynamics.
Domain motions can be directly observed using spectra
measured by neutron spin echo spectroscopy.
They can also be suggested by sampling in extensive molecular dynamics trajectories and principal component analysis or inferred by comparing different structures of a protein (as in Database of Molecular Motions).
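A minimal sketch of such a principal component analysis is shown below, assuming the trajectory frames have already been superposed on a reference structure; the trajectory here is synthetic random data standing in for Cartesian coordinates, and all names are illustrative.

```python
# PCA of an (aligned) trajectory with shape (n_frames, n_atoms * 3).
import numpy as np

rng = np.random.default_rng(0)
trajectory = rng.normal(size=(500, 30))          # e.g. 10 atoms x 3 coordinates, synthetic

centered = trajectory - trajectory.mean(axis=0)  # remove the mean structure
covariance = np.cov(centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(covariance)
order = np.argsort(eigenvalues)[::-1]            # largest-variance modes first

# Fraction of total fluctuation captured by each mode, and the projection of
# every frame onto the first principal component (the largest collective motion).
explained = eigenvalues[order] / eigenvalues.sum()
pc1_projection = centered @ eigenvectors[:, order[0]]
print(explained[:3], pc1_projection.shape)
```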
Domain motions are important for:
ABC transporters
adherens junction
cellular locomotion and motor proteins
enzyme catalysis
formation of protein complexes
ion channels
mechanoreceptors and mechanotransduction
regulatory activity
transport of metabolites across cell membranes
One of the largest observed domain motions is the 'swivelling' mechanism in pyruvate phosphate dikinase. The phosphohistidine domain swivels between two states in order to bring a phosphate group from the active site of the nucleotide binding domain to that of the phosphoenolpyruvate/pyruvate domain. The phosphate group is moved over a distance of 45 Å involving a domain motion of about 100 degrees around a single residue. In enzymes, the closure of one domain onto another captures a substrate by an induced fit, allowing the reaction to take place in a controlled way. A detailed analysis by Gerstein led to the classification of two basic types of domain motion: hinge and shear. Only a relatively small portion of the chain, namely the inter-domain linker and side chains, undergoes significant conformational changes upon domain rearrangement.
=== Hinge motions ===
A study by Hayward found that the termini of α-helices and β-sheets form hinges in a large number of cases. Many hinges were found to involve two secondary structure elements acting like hinges of a door, allowing an opening and closing motion to occur. This can arise when two neighbouring strands within a β-sheet situated in one domain diverge as they join the other domain. The two resulting termini then form the bending regions between the two domains. α-helices that preserve their hydrogen bonding network when bent are found to behave as mechanical hinges, storing 'elastic energy' that drives the closure of domains for rapid capture of a substrate. Khade et al. worked on prediction of the hinges in any conformation and further built an elastic network model called hdANM that can model those motions.
=== Helical to extended conformation ===
The interconversion of helical and extended conformations at the site of a domain boundary is not uncommon. In calmodulin, torsion angles change for five residues in the middle of a domain linking α-helix. The helix is split into two, almost perpendicular, smaller helices separated by four residues of an extended strand.
=== Shear motions ===
Shear motions involve a small sliding movement of domain interfaces, controlled by the amino acid side chains within the interface. Proteins displaying shear motions often have a layered architecture: stacking of secondary structures. The interdomain linker has merely the role of keeping the domains in close proximity.
=== Domain motion and functional dynamics in enzymes ===
The analysis of the internal dynamics of structurally different, but functionally similar enzymes
has highlighted a common relationship between the positioning of the
active site and the two principal protein sub-domains. In fact, for several members of the hydrolase superfamily, the catalytic site is located close to the interface separating the two principal quasi-rigid domains. Such positioning appears instrumental for maintaining the precise geometry of the active site, while allowing for an appreciable functionally oriented modulation of the flanking regions resulting from the relative motion of the two sub-domains.
=== Quantifying internal protein motions using strain ===
A natural measure to quantify and classify the subtle motions of amino acids that occur during conformational changes is the strain. When a group of amino acids moves together as a rigid body, the strain vanishes. In contrast, high strain values indicate that neighboring amino acids and atoms have moved with respect to each other. The effective strain is the relative change in distances between neighboring amino acids, which is a measure sensitive enough to probe the effects of single mutations on the structural landscape of a protein.
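A minimal sketch of this effective-strain calculation is shown below, assuming two sets of synthetic coordinates standing in for C-alpha positions of the same protein in two conformations; the neighbour cutoff and all names are illustrative choices.

```python
# Per-residue effective strain between two conformations.
import numpy as np

def effective_strain(coords_a, coords_b, cutoff=10.0):
    """Average relative change in distance to spatial neighbours, per residue."""
    n = len(coords_a)
    strain = np.zeros(n)
    dist_a = np.linalg.norm(coords_a[:, None] - coords_a[None, :], axis=-1)
    dist_b = np.linalg.norm(coords_b[:, None] - coords_b[None, :], axis=-1)
    for i in range(n):
        neighbours = [j for j in range(n) if j != i and dist_a[i, j] < cutoff]
        rel_change = [abs(dist_b[i, j] - dist_a[i, j]) / dist_a[i, j] for j in neighbours]
        strain[i] = np.mean(rel_change) if rel_change else 0.0
    return strain

rng = np.random.default_rng(1)
conf_a = rng.uniform(0, 20, size=(50, 3))
conf_b = conf_a + rng.normal(scale=0.3, size=conf_a.shape)   # small conformational change
print(effective_strain(conf_a, conf_b)[:5])
```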
== Implications for macromolecular evolution ==
Evidence suggests that protein dynamics are important for function, e.g. enzyme catalysis in dihydrofolate reductase (DHFR),
yet they are also posited to facilitate the acquisition of new functions by molecular evolution.
This argument suggests that proteins have evolved to have stable, mostly unique folded structures,
but the unavoidable residual flexibility leads to some degree of functional promiscuity,
which can be amplified/harnessed/diverted by subsequent mutations.
Research on promiscuous proteins within the BCL-2 family revealed that nanosecond-scale protein dynamics can play a crucial role in protein binding behaviour and thus promiscuity.
However, there is growing awareness that intrinsically unstructured proteins are quite prevalent in eukaryotic genomes,
casting further doubt on the simplest interpretation of Anfinsen's dogma: "sequence determines structure (singular)".
In effect, the new paradigm is characterized by the addition of two caveats: "sequence and cellular environment determine structural ensemble".
== References == | Wikipedia/Protein_domain_dynamics |
Virophysics is a branch of biophysics in which the theoretical concepts and experimental techniques of physics are applied to study the mechanics and dynamics driving the interactions between virions and cells.
== Overview ==
Research in virophysics typically focuses on resolving the physical structure and structural properties of viruses, the dynamics of their assembly and disassembly, their population kinetics over the course of an infection, and the emergence and evolution of various strains. The common aim of these efforts is to establish a set of models (expressions or laws) that quantitatively describe the details of all processes involved in viral infections with reliable predictive power. Having such a quantitative understanding of viruses would not only rationalize the development of strategies to prevent, guide, or control the course of viral infections, but could also be used to exploit virus processes and put viruses to work in areas such as nanosciences, materials, and biotechnologies.
Traditionally, in vivo and in vitro experimentation has been the only way to study viral infections. This approach for deriving knowledge based solely on experimental observations relies on common-sense assumptions (e.g., a higher virus count means a fitter virus). These assumptions often go untested due to difficulties controlling individual components of these complex systems without affecting others. The use of mathematical models and computer simulations to describe such systems, however, makes it possible to deconstruct an experimental system into individual components and determine how the pieces combine to create the infection we observe.
Virophysics has large overlaps with other fields. For example, the modelling of infectious disease dynamics is a popular research topic in mathematics, notably in applied mathematics or mathematical biology. While most modelling efforts in mathematics have focused on elucidating the dynamics of spread of infectious diseases at an epidemiological scale (person-to-person), there is also important work being done at the cellular scale (cell-to-cell). Virophysics focuses almost exclusively on the single-cell or multi-cellular scale, utilizing physical models to resolve the temporal and spatial dynamics of viral infection spread within a cell culture (in vitro), an organ (ex vivo or in vivo) or an entire host (in vivo).
== References ==
== External links ==
=== Related meetings/conferences ===
Virophysics 2015
2nd Workshop on Virus Dynamics | Wikipedia/Virophysics |
In evolutionary biology, function is the reason some object or process occurred in a system that evolved through natural selection. That reason is typically that it achieves some result, such as that chlorophyll helps to capture the energy of sunlight in photosynthesis. Hence, the organism that contains it is more likely to survive and reproduce, in other words the function increases the organism's fitness. A characteristic that assists in evolution is called an adaptation; other characteristics may be non-functional spandrels, though these in turn may later be co-opted by evolution to serve new functions.
In biology, function has been defined in many ways. In physiology, it is simply what an organ, tissue, cell or molecule does.
In the philosophy of biology, talk of function inevitably suggests some kind of teleological purpose, even though natural selection operates without any goal for the future. All the same, biologists often use teleological language as a shorthand for function. In contemporary philosophy of biology, there are three major accounts of function in the biological world: theories of causal role, selected effect, and goal contribution.
== In pre-evolutionary biology ==
In physiology, a function is an activity or process carried out by a system in an organism, such as sensation or locomotion in an animal. This concept of function as opposed to form (respectively Aristotle's ergon and morphê) was central in biological explanations in classical antiquity. In more modern times it formed part of the 1830 Cuvier–Geoffroy debate, where Cuvier argued that an animal's structure was driven by its functional needs, while Geoffroy proposed that each animal's structure was modified from a common plan.
== In evolutionary biology ==
Function can be defined in a variety of ways, including as adaptation, as contributing to evolutionary fitness, in animal behaviour, and, as discussed below, also as some kind of causal role or goal in the philosophy of biology.
=== Adaptation ===
A functional characteristic is known in evolutionary biology as an adaptation, and the research strategy for investigating whether a character is adaptive is known as adaptationism. Although assuming that a character is functional may be helpful in research, some characteristics of organisms are non-functional, formed as accidental spandrels, side effects of neighbouring functional systems.
=== Natural selection ===
From the point of view of natural selection, biological functions exist to contribute to fitness, increasing the chance that an organism will survive to reproduce. For example, the function of chlorophyll in a plant is to capture the energy of sunlight for photosynthesis, which contributes to evolutionary success.
== In ethology ==
The ethologist Niko Tinbergen named four questions, based on Aristotle's Four Causes, that a biologist could ask to help explain a behaviour, though they have been generalised to a wider scope. 1) Mechanism: What mechanisms cause the animal to behave as it does? 2) Ontogeny: What developmental mechanisms in the animal's embryology (and its youth, if it learns) created the structures that cause the behaviour? 3) Function/adaptation: What is the evolutionary function of the behaviour? 4) Evolution: What is the phylogeny of the behaviour, or in other words, when did it first appear in the evolutionary history of the animal? The questions are interdependent, so that, for example, adaptive function is constrained by embryonic development.
== In philosophy of biology ==
Function is not the same as purpose in the teleological sense, that is, possessing conscious mental intention to achieve a goal. In the philosophy of biology, evolution is a blind process which has no 'goal' for the future. For example, a tree does not grow flowers for any purpose, but does so simply because it has evolved to do so. To say 'a tree grows flowers to attract pollinators' would be incorrect if the 'to' implies purpose. A function describes what something does, not what its 'purpose' is. However, teleological language is often used by biologists as a shorthand way of describing function, even though its applicability is disputed.
In contemporary philosophy of biology, there are three major accounts of function in the biological world: theories of causal role, selected effect, and goal contribution.
=== Causal role ===
Causal role theories of biological function trace their origin back to a 1975 paper by Robert Cummins. Cummins defines the functional role of a component of a system to be the causal effect that the component has on the larger containing system. For example, the heart has the actual causal role of pumping blood in the circulatory system; therefore, the function of the heart is to pump blood. This account has been objected to on the grounds that it is too loose a notion of function. For example, the heart also has the causal effect of producing a sound, but we would not consider producing sound to be the function of the heart.
=== Selected effect ===
Selected effect theories of biological functions hold that the function of a biological trait is the function that the trait was selected for, as argued by Ruth Millikan. For example, the function of the heart is pumping blood, for that is the action for which the heart was selected by evolution. In other words, pumping blood is the reason that the heart has evolved. This account has been criticized for being too restrictive a notion of function. It is not always clear which behavior has contributed to the selection of a trait, as biological traits can have functions even if they have not been selected for. Beneficial mutations are initially not selected for, but they do have functions.
=== Goal contribution ===
Goal contribution theories seek to carve a middle ground between causal role and selected effect theories, as with Boorse (1977). Boorse defines the function of a biological trait to be the statistically typical causal contribution of that trait to survival and reproduction. So for example, zebra stripes were sometimes said to work by confusing predators. This role of zebra stripes would contribute to the survival and reproduction of zebras, and that is why confusing predators would be said to be the function of zebra stripes. Under this account, whether or not a particular causal role of a trait is its function depends on whether that causal role contributes to the survival and reproduction of that organism.
== See also ==
Preadaptation
== References == | Wikipedia/Function_(biology) |
Membrane biology is the study of the biological and physiochemical characteristics of membranes, with applications in the study of cellular physiology.
Membrane bioelectrical impulses are described by the Hodgkin cycle.
== Biophysics ==
Membrane biophysics is the study of biological membrane structure and function using physical, computational, mathematical, and biophysical methods. A combination of these methods can be used to create phase diagrams of different types of membranes, which yields information on thermodynamic behavior of a membrane and its components. As opposed to membrane biology, membrane biophysics focuses on quantitative information and modeling of various membrane phenomena, such as lipid raft formation, rates of lipid and cholesterol flip-flop, protein-lipid coupling, and the effect of bending and elasticity functions of membranes on inter-cell connections.
== See also ==
== References == | Wikipedia/Membrane_biophysics |
Molecular biophysics is a rapidly evolving interdisciplinary area of research that combines concepts in physics, chemistry, engineering, mathematics and biology. It seeks to understand biomolecular systems and explain biological function in terms of molecular structure, structural organization, and dynamic behaviour at various levels of complexity (from single molecules to supramolecular structures, viruses and small living systems). This discipline covers topics such as the measurement of molecular forces, molecular associations, allosteric interactions, Brownian motion, and cable theory. Additional areas of study can be found on Outline of Biophysics. The discipline has required development of specialized equipment and procedures capable of imaging and manipulating minute living structures, as well as novel experimental approaches.
== Overview ==
Molecular biophysics typically addresses biological questions similar to those in biochemistry and molecular biology, seeking to find the physical underpinnings of biomolecular phenomena. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques are used to answer these questions.
Fluorescent imaging techniques, as well as electron microscopy, X-ray crystallography, NMR spectroscopy, atomic force microscopy (AFM) and small-angle scattering (SAS) both with X-rays and neutrons (SAXS/SANS), are often used to visualize structures of biological significance. Protein dynamics can be observed by neutron spin echo spectroscopy. Conformational change in structure can be measured using techniques such as dual polarisation interferometry, circular dichroism, SAXS and SANS. Direct manipulation of molecules using optical tweezers or AFM can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting entities which can be understood e.g. through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules.
== Areas of research ==
=== Computational biology ===
Computational biology involves the development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, ecological, behavioral, and social systems. The field is broadly defined and includes foundations in biology, applied mathematics, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, computer science and evolution. Computational biology has become an important part of developing emerging technologies for the field of biology.
Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies.
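At the heart of many molecular modelling methods is an empirical potential-energy function evaluated over atomic coordinates. The sketch below is a minimal illustration of this idea rather than a production force field: it sums the pairwise Lennard-Jones (van der Waals) term over a handful of atoms, using illustrative argon-like parameters; real force fields add bonded, electrostatic and many other terms and assign parameters per atom type.

```python
import numpy as np

def lennard_jones_energy(positions, epsilon=0.238, sigma=3.4):
    """Total pairwise Lennard-Jones energy of a set of atoms.

    positions: (N, 3) array of Cartesian coordinates in angstroms.
    epsilon, sigma: illustrative argon-like parameters (kcal/mol, angstroms).
    """
    energy = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            energy += 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return energy

# A toy three-atom configuration; the result is a non-bonded energy in kcal/mol.
atoms = np.array([[0.0, 0.0, 0.0],
                  [3.8, 0.0, 0.0],
                  [0.0, 3.8, 0.0]])
print(lennard_jones_energy(atoms))
```

Energy minimisation and molecular dynamics simulations build on functions of this kind by following their gradients with respect to the atomic coordinates.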
=== Membrane biophysics ===
Membrane biophysics is the study of biological membrane structure and function using physical, computational, mathematical, and biophysical methods. A combination of these methods can be used to create phase diagrams of different types of membranes, which yields information on thermodynamic behavior of a membrane and its components. As opposed to membrane biology, membrane biophysics focuses on quantitative information and modeling of various membrane phenomena, such as lipid raft formation, rates of lipid and cholesterol flip-flop, protein-lipid coupling, and the effect of bending and elasticity functions of membranes on inter-cell connections.
=== Motor proteins ===
Motor proteins are a class of molecular motors that can move along the cytoplasm of animal cells. They convert chemical energy into mechanical work by the hydrolysis of ATP. A good example is the muscle protein myosin which "motors" the contraction of muscle fibers in animals. Motor proteins are the driving force behind most active transport of proteins and vesicles in the cytoplasm. Kinesins and cytoplasmic dyneins play essential roles in intracellular transport such as axonal transport and in the formation of the spindle apparatus and the separation of the chromosomes during mitosis and meiosis. Axonemal dynein, found in cilia and flagella, is crucial to cell motility, for example in spermatozoa, and fluid transport, for example in trachea.
Some biological machines are motor proteins, such as myosin, which is responsible for muscle contraction, kinesin, which moves cargo inside cells away from the nucleus along microtubules, and dynein, which moves cargo inside cells towards the nucleus and produces the axonemal beating of motile cilia and flagella. "[I]n effect, the [motile cilium] is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines... Flexible linkers allow the mobile protein domains connected by them to recruit their binding partners and induce long-range allostery via protein domain dynamics." Other biological machines are responsible for energy production, for example ATP synthase which harnesses energy from proton gradients across membranes to drive a turbine-like motion used to synthesise ATP, the energy currency of a cell. Still other machines are responsible for gene expression, including DNA polymerases for replicating DNA, RNA polymerases for producing mRNA, the spliceosome for removing introns, and the ribosome for synthesising proteins. These machines and their nanoscale dynamics are far more complex than any molecular machines that have yet been artificially constructed.
These molecular motors are the essential agents of movement in living organisms. In general terms, a motor is a device that consumes energy in one form and converts it into motion or mechanical work; for example, many protein-based molecular motors harness the chemical free energy released by the hydrolysis of ATP in order to perform mechanical work. In terms of energetic efficiency, this type of motor can be superior to currently available man-made motors.
Richard Feynman theorized about the future of nanomedicine. He wrote about the idea of a medical use for biological machines. Feynman and Albert Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would be possible to (as Feynman put it) "swallow the doctor". The idea was discussed in Feynman's 1959 essay "There's Plenty of Room at the Bottom".
These biological machines might have applications in nanomedicine. For example, they could be used to identify and destroy cancer cells. Molecular nanotechnology is a speculative subfield of nanotechnology regarding the possibility of engineering molecular assemblers, biological machines which could re-order matter at a molecular or atomic scale. Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damages and infections. Molecular nanotechnology is highly theoretical, seeking to anticipate what inventions nanotechnology might yield and to propose an agenda for future inquiry. The proposed elements of molecular nanotechnology, such as molecular assemblers and nanorobots are far beyond current capabilities.
=== Protein folding ===
Protein folding is the physical process by which a protein chain acquires its native three-dimensional structure, a conformation that is usually biologically functional, in an expeditious and reproducible manner; in other words, it is the process by which a polypeptide folds from a random coil into its characteristic and functional three-dimensional structure.
Each protein exists as an unfolded polypeptide or random coil when translated from a sequence of mRNA to a linear chain of amino acids. This polypeptide lacks any stable (long-lasting) three-dimensional structure (the left hand side of the first figure). As the polypeptide chain is being synthesized by a ribosome, the linear chain begins to fold into its three-dimensional structure. Folding begins to occur even during the translation of the polypeptide chain. Amino acids interact with each other to produce a well-defined three-dimensional structure, the folded protein (the right-hand side of the figure), known as the native state. The resulting three-dimensional structure is determined by the amino acid sequence or primary structure (Anfinsen's dogma).
=== Protein structure determination ===
As the three-dimensional structure of proteins brings with it an understanding of its function and biological context, there is great effort placed in observing the structures of proteins. X-ray crystallography was the primary method used in the 20th century to solve the structures of proteins in their crystalline form. Ever since the early 2000s, cryogenic electron microscopy has been used to solve the structures of proteins closer to their native state, as well as observing cellular structures.
=== Protein structure prediction ===
Protein structure prediction is the inference of the three-dimensional structure of a protein from its amino acid sequence—that is, the prediction of its folding and its secondary and tertiary structure from its primary structure. Structure prediction is fundamentally different from the inverse problem of protein design. Protein structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; it is highly important in medicine, in drug design, biotechnology and in the design of novel enzymes. Every two years, the performance of current methods is assessed in the CASP experiment (Critical Assessment of Techniques for Protein Structure Prediction). A continuous evaluation of protein structure prediction web servers is performed by the community project CAMEO3D.
The challenge in predicting protein structures is that no physical model yet exists that can fully predict protein tertiary structures from their amino acid sequence. This problem is known as the de novo protein structure prediction problem and is one of the great problems of modern science. AlphaFold, an artificial intelligence program, is able to accurately predict the structures of proteins with genetic homology to other proteins that have been previously solved. However, this is not a solution to the de novo problem, as it relies on a database of previously solved structures and is therefore inherently biased toward them. A solution to the de novo protein structure prediction problem would be a purely physical model that simulates protein folding in its native environment, allowing the in silico observation of protein structures and dynamics that have never previously been observed.
=== Spectroscopy ===
Spectroscopic techniques like NMR, spin label electron spin resonance, Raman spectroscopy, infrared spectroscopy, circular dichroism, and so on have been widely used to understand structural dynamics of important biomolecules and intermolecular interactions.
== See also ==
== References == | Wikipedia/Molecular_biophysics |
Protein biosynthesis, or protein synthesis, is a core biological process, occurring inside cells, balancing the loss of cellular proteins (via degradation or export) through the production of new proteins. Proteins perform a number of critical functions as enzymes, structural proteins or hormones. Protein synthesis is a very similar process for both prokaryotes and eukaryotes but there are some distinct differences.
Protein synthesis can be divided broadly into two phases: transcription and translation. During transcription, a section of DNA encoding a protein, known as a gene, is converted into a molecule called messenger RNA (mRNA). This conversion is carried out by enzymes, known as RNA polymerases, in the nucleus of the cell. In eukaryotes, this mRNA is initially produced in a premature form (pre-mRNA) which undergoes post-transcriptional modifications to produce mature mRNA. The mature mRNA is exported from the cell nucleus via nuclear pores to the cytoplasm of the cell for translation to occur. During translation, the mRNA is read by ribosomes which use the nucleotide sequence of the mRNA to determine the sequence of amino acids. The ribosomes catalyze the formation of covalent peptide bonds between the encoded amino acids to form a polypeptide chain.
Following translation the polypeptide chain must fold to form a functional protein; for example, to function as an enzyme the polypeptide chain must fold correctly to produce a functional active site. To adopt a functional three-dimensional shape, the polypeptide chain must first form a series of smaller underlying structures called secondary structures. The polypeptide chain in these secondary structures then folds to produce the overall 3D tertiary structure. Once correctly folded, the protein can undergo further maturation through different post-translational modifications, which can alter the protein's ability to function, its location within the cell (e.g. cytoplasm or nucleus) and its ability to interact with other proteins.
Protein biosynthesis has a key role in disease as changes and errors in this process, through underlying DNA mutations or protein misfolding, are often the underlying causes of a disease. DNA mutations change the subsequent mRNA sequence, which then alters the mRNA encoded amino acid sequence. Mutations can cause the polypeptide chain to be shorter by generating a premature stop codon which causes early termination of translation. Alternatively, a mutation in the mRNA sequence changes the specific amino acid encoded at that position in the polypeptide chain. This amino acid change can impact the protein's ability to function or to fold correctly. Misfolded proteins have a tendency to form dense protein clumps, which are often implicated in diseases, particularly neurological disorders including Alzheimer's and Parkinson's disease.
== Transcription ==
Transcription occurs in the nucleus using DNA as a template to produce mRNA. In eukaryotes, this mRNA molecule is known as pre-mRNA as it undergoes post-transcriptional modifications in the nucleus to produce a mature mRNA molecule. However, in prokaryotes post-transcriptional modifications are not required so the mature mRNA molecule is immediately produced by transcription.
Initially, an enzyme known as a helicase acts on the molecule of DNA. DNA has an antiparallel, double helix structure composed of two, complementary polynucleotide strands, held together by hydrogen bonds between the base pairs. The helicase disrupts the hydrogen bonds causing a region of DNA – corresponding to a gene – to unwind, separating the two DNA strands and exposing a series of bases. Despite DNA being a double-stranded molecule, only one of the strands acts as a template for pre-mRNA synthesis; this strand is known as the template strand. The other DNA strand (which is complementary to the template strand) is known as the coding strand.
Both DNA and RNA have intrinsic directionality, meaning there are two distinct ends of the molecule. This property of directionality is due to the asymmetrical underlying nucleotide subunits, with a phosphate group on one side of the pentose sugar and a base on the other. The five carbons in the pentose sugar are numbered from 1' (where ' means prime) to 5'. Therefore, the phosphodiester bonds connecting the nucleotides are formed by joining the hydroxyl group on the 3' carbon of one nucleotide to the phosphate group on the 5' carbon of another nucleotide. Hence, the coding strand of DNA runs in a 5' to 3' direction and the complementary, template DNA strand runs in the opposite direction from 3' to 5'.
The enzyme RNA polymerase binds to the exposed template strand and reads from the gene in the 3' to 5' direction. Simultaneously, the RNA polymerase synthesizes a single strand of pre-mRNA in the 5'-to-3' direction by catalysing the formation of phosphodiester bonds between activated nucleotides (free in the nucleus) that are capable of complementary base pairing with the template strand. Behind the moving RNA polymerase the two strands of DNA rejoin, so only 12 base pairs of DNA are exposed at one time. RNA polymerase builds the pre-mRNA molecule at a rate of 20 nucleotides per second, enabling the production of thousands of pre-mRNA molecules from the same gene in an hour. Despite the fast rate of synthesis, the RNA polymerase enzyme contains its own proofreading mechanism. The proofreading mechanism allows the RNA polymerase to remove incorrect nucleotides (which are not complementary to the template strand of DNA) from the growing pre-mRNA molecule through an excision reaction. When RNA polymerase reaches a specific DNA sequence which terminates transcription, RNA polymerase detaches and pre-mRNA synthesis is complete.
The pre-mRNA molecule synthesized is complementary to the template DNA strand and shares the same nucleotide sequence as the coding DNA strand. However, there is one crucial difference in the nucleotide composition of DNA and mRNA molecules. DNA is composed of the bases: guanine, cytosine, adenine and thymine (G, C, A and T). RNA is also composed of four bases: guanine, cytosine, adenine and uracil. In RNA molecules, the DNA base thymine is replaced by uracil which is able to base pair with adenine. Therefore, in the pre-mRNA molecule, all complementary bases which would be thymine in the coding DNA strand are replaced by uracil.
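As a concrete illustration of the base-pairing rules just described, the short Python sketch below (a toy example, not part of any standard bioinformatics library) builds a pre-mRNA sequence from a hypothetical template strand: each template base is paired with its RNA complement, so adenine in the template is matched by uracil rather than thymine in the transcript.

```python
def transcribe(template_strand):
    """Return the pre-mRNA (5'->3') for a DNA template strand given 3'->5'.

    Reading the template 3'->5' mirrors the direction in which RNA polymerase
    synthesises the transcript, so the output reads 5'->3'.
    """
    pairing = {"A": "U", "T": "A", "G": "C", "C": "G"}  # DNA base -> RNA base
    return "".join(pairing[base] for base in template_strand)

# Hypothetical template fragment: the transcript matches the coding strand,
# with every thymine replaced by uracil.
print(transcribe("TACGGCTAA"))  # -> AUGCCGAUU
```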
=== Post-transcriptional modifications ===
Once transcription is complete, the pre-mRNA molecule undergoes post-transcriptional modifications to produce a mature mRNA molecule.
There are 3 key steps within post-transcriptional modifications:
Addition of a 5' cap to the 5' end of the pre-mRNA molecule
Addition of a 3' poly(A) tail to the 3' end of the pre-mRNA molecule
Removal of introns via RNA splicing
The 5' cap is added to the 5' end of the pre-mRNA molecule and is composed of a guanine nucleotide modified through methylation. The purpose of the 5' cap is to prevent breakdown of mature mRNA molecules before translation; the cap also aids binding of the ribosome to the mRNA to start translation and enables mRNA to be differentiated from other RNAs in the cell. In contrast, the 3' poly(A) tail is added to the 3' end of the mRNA molecule and is composed of 100–200 adenine bases. These distinct mRNA modifications enable the cell to detect that the full mRNA message is intact if both the 5' cap and 3' tail are present.
This modified pre-mRNA molecule then undergoes the process of RNA splicing. Genes are composed of a series of introns and exons: introns are nucleotide sequences which do not encode a protein, while exons are nucleotide sequences that directly encode a protein. Introns and exons are present in both the underlying DNA sequence and the pre-mRNA molecule; therefore, to produce a mature mRNA molecule encoding a protein, splicing must occur. During splicing, the intervening introns are removed from the pre-mRNA molecule by a multi-protein complex known as a spliceosome (composed of over 150 proteins and RNA). This mature mRNA molecule is then exported into the cytoplasm through nuclear pores in the envelope of the nucleus.
== Translation ==
During translation, ribosomes synthesize polypeptide chains from mRNA template molecules. In eukaryotes, translation occurs in the cytoplasm of the cell, where the ribosomes are located either free floating or attached to the endoplasmic reticulum. In prokaryotes, which lack a nucleus, the processes of both transcription and translation occur in the cytoplasm.
Ribosomes are complex molecular machines, made of a mixture of protein and ribosomal RNA, arranged into two subunits (a large and a small subunit), which surround the mRNA molecule. The ribosome reads the mRNA molecule in a 5'-3' direction and uses it as a template to determine the order of amino acids in the polypeptide chain. To translate the mRNA molecule, the ribosome uses small molecules, known as transfer RNAs (tRNA), to deliver the correct amino acids to the ribosome. Each tRNA is composed of 70–80 nucleotides and adopts a characteristic cloverleaf structure due to the formation of hydrogen bonds between the nucleotides within the molecule. There are around 60 different types of tRNA; each tRNA binds to a specific sequence of three nucleotides (known as a codon) within the mRNA molecule and delivers a specific amino acid.
The ribosome initially attaches to the mRNA at the start codon (AUG) and begins to translate the molecule. The mRNA nucleotide sequence is read in triplets; three adjacent nucleotides in the mRNA molecule correspond to a single codon. Each tRNA has an exposed sequence of three nucleotides, known as the anticodon, which is complementary in sequence to a specific codon that may be present in mRNA. For example, the first codon encountered is the start codon composed of the nucleotides AUG. The correct tRNA with the anticodon (the complementary three-nucleotide sequence UAC) binds to the mRNA within the ribosome. This tRNA delivers the correct amino acid corresponding to the mRNA codon; in the case of the start codon, this is the amino acid methionine. The next codon (adjacent to the start codon) is then bound by the correct tRNA with the complementary anticodon, delivering the next amino acid to the ribosome. The ribosome then uses its peptidyl transferase enzymatic activity to catalyze the formation of the covalent peptide bond between the two adjacent amino acids.
The ribosome then moves along the mRNA molecule to the third codon. The ribosome then releases the first tRNA molecule, as only two tRNA molecules can be brought together by a single ribosome at one time. The next tRNA, with the anticodon complementary to the third codon, is selected, delivering the next amino acid to the ribosome, which is covalently joined to the growing polypeptide chain. This process continues with the ribosome moving along the mRNA molecule adding up to 15 amino acids per second to the polypeptide chain. Behind the first ribosome, up to 50 additional ribosomes can bind to the mRNA molecule, forming a polysome; this enables simultaneous synthesis of multiple identical polypeptide chains. Termination of the growing polypeptide chain occurs when the ribosome encounters a stop codon (UAA, UAG, or UGA) in the mRNA molecule. When this occurs, no tRNA can recognise it and a release factor induces the release of the complete polypeptide chain from the ribosome. Har Gobind Khorana, a scientist originating from India, decoded the RNA sequences for about 20 amino acids. He was awarded the Nobel Prize in 1968, along with two other scientists, for his work.
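The reading-frame logic described above can be sketched in a few lines of Python. The example below is purely illustrative: it uses only a small subset of the 64-codon genetic code (and ignores wobble pairing and initiation factors), but it shows the essential steps of finding the start codon, reading non-overlapping triplets, and terminating at the first stop codon.

```python
def translate(mrna):
    """Translate an mRNA string (5'->3') into a one-letter amino-acid sequence."""
    codon_table = {  # illustrative subset of the standard genetic code
        "AUG": "M", "CCG": "P", "AUU": "I", "GAG": "E", "GUG": "V", "UUU": "F",
    }
    stop_codons = {"UAA", "UAG", "UGA"}
    start = mrna.find("AUG")                   # assumes an AUG start codon is present
    peptide = []
    for i in range(start, len(mrna) - 2, 3):   # read non-overlapping triplets
        codon = mrna[i:i + 3]
        if codon in stop_codons:               # release factor terminates here
            break
        peptide.append(codon_table.get(codon, "?"))  # '?' = codon not in the toy table
    return "".join(peptide)

print(translate("GGAUGCCGAUUUAAGC"))  # -> "MPI" (Met-Pro-Ile), stopping at UAA
```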
== Protein folding ==
Once synthesis of the polypeptide chain is complete, the polypeptide chain folds to adopt a specific structure which enables the protein to carry out its functions. The basic form of protein structure is known as the primary structure, which is simply the polypeptide chain i.e. a sequence of covalently bonded amino acids. The primary structure of a protein is encoded by a gene. Therefore, any changes to the sequence of the gene can alter the primary structure of the protein and all subsequent levels of protein structure, ultimately changing the overall structure and function.
The primary structure of a protein (the polypeptide chain) can then fold or coil to form the secondary structure of the protein. The most common types of secondary structure are known as an alpha helix or beta sheet; these are small structures produced by hydrogen bonds forming within the polypeptide chain. This secondary structure then folds to produce the tertiary structure of the protein. The tertiary structure is the protein's overall 3D structure, which is made of different secondary structures folding together. In the tertiary structure, key protein features, e.g. the active site, are folded and formed, enabling the protein to function. Finally, some proteins may adopt a complex quaternary structure. Most proteins are made of a single polypeptide chain; however, some proteins are composed of multiple polypeptide chains (known as subunits) which fold and interact to form the quaternary structure. Hence, the overall protein is a multi-subunit complex composed of multiple folded polypeptide chain subunits, e.g. haemoglobin.
== Post-translation events ==
There are events that follow protein biosynthesis such as proteolysis and protein-folding. Proteolysis refers to the cleavage of proteins by proteases and the breakdown of proteins into amino acids by the action of enzymes.
== Post-translational modifications ==
Completion of protein folding into the mature, functional 3D state is not necessarily the end of the protein maturation pathway. A folded protein can still undergo further processing through post-translational modifications. There are over 200 known types of post-translational modification; these modifications can alter protein activity, the ability of the protein to interact with other proteins and where the protein is found within the cell, e.g. in the cell nucleus or cytoplasm. Through post-translational modifications, the diversity of proteins encoded by the genome is expanded by 2 to 3 orders of magnitude.
There are four key classes of post-translational modification:
Cleavage
Addition of chemical groups
Addition of complex molecules
Formation of intramolecular bonds
=== Cleavage ===
Cleavage of proteins is an irreversible post-translational modification carried out by enzymes known as proteases. These proteases are often highly specific and cause hydrolysis of a limited number of peptide bonds within the target protein. The resulting shortened protein has an altered polypeptide chain with different amino acids at the start and end of the chain. This post-translational modification often alters the protein's function; the protein can be inactivated or activated by the cleavage and can display new biological activities.
=== Addition of chemical groups ===
Following translation, small chemical groups can be added onto amino acids within the mature protein structure. Examples of processes which add chemical groups to the target protein include methylation, acetylation and phosphorylation.
Methylation is the reversible addition of a methyl group onto an amino acid catalyzed by methyltransferase enzymes. Methylation occurs on at least 9 of the 20 common amino acids, however, it mainly occurs on the amino acids lysine and arginine. One example of a protein which is commonly methylated is a histone. Histones are proteins found in the nucleus of the cell. DNA is tightly wrapped round histones and held in place by other proteins and interactions between negative charges in the DNA and positive charges on the histone. A highly specific pattern of amino acid methylation on the histone proteins is used to determine which regions of DNA are tightly wound and unable to be transcribed and which regions are loosely wound and able to be transcribed.
Histone-based regulation of DNA transcription is also modified by acetylation. Acetylation is the reversible covalent addition of an acetyl group onto a lysine amino acid by the enzyme acetyltransferase. The acetyl group is removed from a donor molecule known as acetyl coenzyme A and transferred onto the target protein. Histones undergo acetylation on their lysine residues by enzymes known as histone acetyltransferase. The effect of acetylation is to weaken the charge interactions between the histone and DNA, thereby making more genes in the DNA accessible for transcription.
The final, prevalent post-translational chemical group modification is phosphorylation. Phosphorylation is the reversible, covalent addition of a phosphate group to specific amino acids (serine, threonine and tyrosine) within the protein. The phosphate group is removed from the donor molecule ATP by a protein kinase and transferred onto the hydroxyl group of the target amino acid, producing adenosine diphosphate as a byproduct. This process can be reversed and the phosphate group removed by the enzyme protein phosphatase. Phosphorylation can create a binding site on the phosphorylated protein which enables it to interact with other proteins and generate large, multi-protein complexes. Alternatively, phosphorylation can change the level of protein activity by altering the ability of the protein to bind its substrate.
=== Addition of complex molecules ===
Post-translational modifications can incorporate more complex, large molecules into the folded protein structure. One common example of this is glycosylation, the addition of a polysaccharide molecule, which is widely considered to be the most common post-translational modification.
In glycosylation, a polysaccharide molecule (known as a glycan) is covalently added to the target protein by glycosyltransferases enzymes and modified by glycosidases in the endoplasmic reticulum and Golgi apparatus. Glycosylation can have a critical role in determining the final, folded 3D structure of the target protein. In some cases glycosylation is necessary for correct folding. N-linked glycosylation promotes protein folding by increasing solubility and mediates the protein binding to protein chaperones. Chaperones are proteins responsible for folding and maintaining the structure of other proteins.
There are broadly two types of glycosylation, N-linked glycosylation and O-linked glycosylation. N-linked glycosylation starts in the endoplasmic reticulum with the addition of a precursor glycan. The precursor glycan is modified in the Golgi apparatus to produce a complex glycan bound covalently to the nitrogen in an asparagine amino acid. In contrast, O-linked glycosylation is the sequential covalent addition of individual sugars onto the oxygen in the amino acids serine and threonine within the mature protein structure.
=== Formation of covalent bonds ===
Many proteins produced within the cell are secreted outside the cell to function as extracellular proteins. Extracellular proteins are exposed to a wide variety of conditions. To stabilize the 3D protein structure, covalent bonds are formed either within the protein or between the different polypeptide chains in the quaternary structure. The most prevalent type is a disulfide bond (also known as a disulfide bridge). A disulfide bond is formed between two cysteine amino acids using their side chain chemical groups containing a sulphur atom; these chemical groups are known as thiol functional groups. Disulfide bonds act to stabilize the pre-existing structure of the protein. Disulfide bonds are formed in an oxidation reaction between two thiol groups and therefore need an oxidizing environment to react. As a result, disulfide bonds are typically formed in the oxidizing environment of the endoplasmic reticulum catalyzed by enzymes called protein disulfide isomerases. Disulfide bonds are rarely formed in the cytoplasm as it is a reducing environment.
== Role of protein synthesis in disease ==
Many diseases are caused by mutations in genes, due to the direct connection between the DNA nucleotide sequence and the amino acid sequence of the encoded protein. Changes to the primary structure of the protein can result in the protein mis-folding or malfunctioning. Diseases caused by mutations within a single gene, known as single gene disorders, include sickle cell disease.
=== Sickle cell disease ===
Sickle cell disease is a group of diseases caused by a mutation in a subunit of hemoglobin, a protein found in red blood cells responsible for transporting oxygen. The most dangerous of the sickle cell diseases is known as sickle cell anemia. Sickle cell anemia is the most common homozygous recessive single gene disorder, meaning the affected individual must carry a mutation in both copies of the affected gene (one inherited from each parent) to experience the disease. Hemoglobin has a complex quaternary structure and is composed of four polypeptide subunits – two A subunits and two B subunits. Patients with sickle cell anemia have a missense or substitution mutation in the gene encoding the hemoglobin B subunit polypeptide chain. A missense mutation means the nucleotide mutation alters the overall codon triplet such that a different amino acid is paired with the new codon. In the case of sickle cell anemia, the most common missense mutation is a single nucleotide mutation from thymine to adenine in the hemoglobin B subunit gene. This changes codon 6 from encoding the amino acid glutamic acid to encoding valine.
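As a worked illustration of how this single-base change propagates to the protein, consider the mRNA for codon 6. The template-strand thymine-to-adenine substitution described above appears in the mRNA as an A-to-U change in the middle of the codon, converting GAG (glutamic acid) into GUG (valine); the tiny lookup below is illustrative only and not a model of the full gene.

```python
# Textbook beta-globin change at codon 6: GAG (glutamic acid) -> GUG (valine).
codon_to_amino_acid = {"GAG": "glutamic acid", "GUG": "valine"}

normal_codon = "GAG"
mutant_codon = normal_codon[0] + "U" + normal_codon[2]  # middle base A -> U in the mRNA

print(normal_codon, "->", codon_to_amino_acid[normal_codon])  # GAG -> glutamic acid
print(mutant_codon, "->", codon_to_amino_acid[mutant_codon])  # GUG -> valine
```

A single hydrophilic, negatively charged residue is thereby replaced by a hydrophobic one, which underlies the aggregation behaviour described below.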
This change in the primary structure of the hemoglobin B subunit polypeptide chain alters the functionality of the hemoglobin multi-subunit complex in low oxygen conditions. When red blood cells unload oxygen into the tissues of the body, the mutated haemoglobin protein starts to stick together to form a semi-solid structure within the red blood cell. This distorts the shape of the red blood cell, resulting in the characteristic "sickle" shape, and reduces cell flexibility. This rigid, distorted red blood cell can accumulate in blood vessels creating a blockage. The blockage prevents blood flow to tissues and can lead to tissue death which causes great pain to the individual.
=== Cancer ===
Cancers form as a result of gene mutations as well as improper protein translation. In addition to cancer cells proliferating abnormally, they suppress the expression of anti-apoptotic or pro-apoptotic genes or proteins. Many cancer cells carry a mutation in the signaling protein Ras, which functions as an on/off signal transductor in cells. In these cancer cells, the Ras protein becomes persistently active, thus promoting the proliferation of the cell due to the absence of any regulation. Additionally, many cancer cells carry two mutant copies of the regulator gene p53, which acts as a gatekeeper for damaged genes and initiates apoptosis in malignant cells. In its absence, the cell cannot initiate apoptosis or signal for other cells to destroy it.
As the tumor cells proliferate, they either remain confined to one area and are called benign, or become malignant cells that migrate to other areas of the body. Oftentimes, these malignant cells secrete proteases that break apart the extracellular matrix of tissues. This then allows the cancer to enter its terminal stage called metastasis, in which the cells enter the bloodstream or the lymphatic system to travel to a new part of the body.
== See also ==
Central dogma of molecular biology
Genetic code
== References ==
== External links ==
A more advanced video detailing the different types of post-translational modifications and their chemical structures
A useful video visualising the process of converting DNA to protein via transcription and translation
Video visualising the process of protein folding from the non-functional primary structure to a mature, folded 3D protein structure with reference to the role of mutations and protein mis-folding in disease | Wikipedia/Protein_biosynthesis |
Medical physics deals with the application of the concepts and methods of physics to the prevention, diagnosis and treatment of human diseases with a specific goal of improving human health and well-being. Since 2008, medical physics has been included as a health profession according to International Standard Classification of Occupation of the International Labour Organization.
Although medical physics may sometimes also be referred to as biomedical physics, medical biophysics, applied physics in medicine, physics applications in medical science, radiological physics or hospital radio-physics, a "medical physicist" is specifically a health professional with specialist education and training in the concepts and techniques of applying physics in medicine and competent to practice independently in one or more of the subfields of medical physics. Traditionally, medical physicists are found in the following healthcare specialties: radiation oncology (also known as radiotherapy or radiation therapy), diagnostic and interventional radiology (also known as medical imaging), nuclear medicine, and radiation protection. Medical physics of radiation therapy can involve work such as dosimetry, linac quality assurance, and brachytherapy. Medical physics of diagnostic and interventional radiology involves medical imaging techniques such as magnetic resonance imaging, ultrasound, computed tomography and x-ray. Nuclear medicine will include positron emission tomography and radionuclide therapy. However, one can find medical physicists in many other areas such as physiological monitoring, audiology, neurology, neurophysiology, cardiology and others.
Medical physics departments may be found in institutions such as universities, hospitals, and laboratories. University departments are of two types. The first type are mainly concerned with preparing students for a career as a hospital Medical Physicist and research focuses on improving the practice of the profession. A second type (increasingly called 'biomedical physics') has a much wider scope and may include research in any applications of physics to medicine from the study of biomolecular structure to microscopy and nanomedicine.
== Mission statement of medical physicists ==
In hospital medical physics departments, the mission statement for medical physicists as adopted by the European Federation of Organisations for Medical Physics (EFOMP) is the following:
Medical Physicists will contribute to maintaining and improving the quality, safety and cost-effectiveness of healthcare services through patient-oriented activities requiring expert action, involvement or advice regarding the specification, selection, acceptance testing, commissioning, quality assurance/control and optimised clinical use of medical devices and regarding patient risks and protection from associated physical agents (e.g., x-rays, electromagnetic fields, laser light, radionuclides) including the prevention of unintended or accidental exposures; all activities will be based on current best evidence or own scientific research when the available evidence is not sufficient. The scope includes risks to volunteers in biomedical research, carers and comforters. The scope often includes risks to workers and public particularly when these impact patient risk
The term "physical agents" refers to ionising and non-ionising electromagnetic radiations, static electric and magnetic fields, ultrasound, laser light and any other Physical Agent associated with medical e.g., x-rays in computerised tomography (CT), gamma rays/radionuclides in nuclear medicine, magnetic fields and radio-frequencies in magnetic resonance imaging (MRI), ultrasound in ultrasound imaging and Doppler measurements.
This mission includes the following 11 key activities:
Scientific problem solving service: Comprehensive problem solving service involving recognition of less than optimal performance or optimised use of medical devices, identification and elimination of possible causes or misuse, and confirmation that proposed solutions have restored device performance and use to acceptable status. All activities are to be based on current best scientific evidence or own research when the available evidence is not sufficient.
Dosimetry measurements: Measurement of doses had by patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures (e.g., for legal or employment purposes); selection, calibration and maintenance of dosimetry related instrumentation; independent checking of dose related quantities provided by dose reporting devices (including software devices); measurement of dose related quantities required as inputs to dose reporting or estimating devices (including software). Measurements to be based on current recommended techniques and protocols. Includes dosimetry of all physical agents.
Patient safety/risk management (including volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures). Surveillance of medical devices and evaluation of clinical protocols to ensure the ongoing protection of patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures from the deleterious effects of physical agents in accordance with the latest published evidence or own research when the available evidence is not sufficient. Includes the development of risk assessment protocols.
Occupational and public safety/risk management (when there is an impact on medical exposure or own safety). Surveillance of medical devices and evaluation of clinical protocols with respect to protection of workers and public when impacting the exposure of patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures or responsibility with respect to own safety. Includes the development of risk assessment protocols in conjunction with other experts involved in occupational / public risks.
Clinical medical device management: Specification, selection, acceptance testing, commissioning and quality assurance/ control of medical devices in accordance with the latest published European or International recommendations and the management and supervision of associated programmes. Testing to be based on current recommended techniques and protocols.
Clinical involvement: Carrying out, participating in and supervising everyday radiation protection and quality control procedures to ensure ongoing effective and optimised use of medical radiological devices and including patient specific optimization.
Development of service quality and cost-effectiveness: Leading the introduction of new medical radiological devices into clinical service, the introduction of new medical physics services and participating in the introduction/development of clinical protocols/techniques whilst giving due attention to economic issues.
Expert consultancy: Provision of expert advice to outside clients (e.g., clinics with no in-house medical physics expertise).
Education of healthcare professionals (including medical physics trainees): Contributing to quality healthcare professional education through knowledge transfer activities concerning the technical-scientific knowledge, skills and competences supporting the clinically effective, safe, evidence-based and economical use of medical radiological devices. Participation in the education of medical physics students and organisation of medical physics residency programmes.
Health technology assessment (HTA): Taking responsibility for the physics component of health technology assessments related to medical radiological devices and /or the medical uses of radioactive substances/sources.
Innovation: Developing new or modifying existing devices (including software) and protocols for the solution of hitherto unresolved clinical problems.
== Medical biophysics and biomedical physics ==
Some education institutions house departments or programs bearing the title "medical biophysics" or "biomedical physics" or "applied physics in medicine". Generally, these fall into one of two categories: interdisciplinary departments that house biophysics, radiobiology, and medical physics under a single umbrella; and undergraduate programs that prepare students for further study in medical physics, biophysics, or medicine.
Most of the scientific concepts in bionanotechnology are derived from other fields. Biochemical principles that are used to understand the material properties of biological systems are central in bionanotechnology because those same principles are to be used to create new technologies. Material properties and applications studied in bionanoscience include mechanical properties (e.g. deformation, adhesion, failure), electrical/electronic (e.g. electromechanical stimulation, capacitors, energy storage/batteries), optical (e.g. absorption, luminescence, photochemistry), thermal (e.g. thermomutability, thermal management), biological (e.g. how cells interact with nanomaterials, molecular flaws/defects, biosensing, biological mechanisms such as mechanosensation), nanoscience of disease (e.g. genetic disease, cancer, organ/tissue failure), as well as computing (e.g. DNA computing) and agriculture (target delivery of pesticides, hormones and fertilizers).
== Areas of specialty ==
The International Organization for Medical Physics (IOMP) recognizes the following main areas of medical physics employment and focus.
=== Medical imaging physics ===
Medical imaging physics is also known as diagnostic and interventional radiology physics.
Clinical (both "in-house" and "consulting") physicists typically deal with areas of testing, optimization, and quality assurance of diagnostic radiology physics areas such as radiographic X-rays, fluoroscopy, mammography, angiography, and computed tomography, as well as non-ionizing radiation modalities such as ultrasound, and MRI. They may also be engaged with radiation protection issues such as dosimetry (for staff and patients). In addition, many imaging physicists are often also involved with nuclear medicine systems, including single photon emission computed tomography (SPECT) and positron emission tomography (PET).
Sometimes, imaging physicists may be engaged in clinical areas, but for research and teaching purposes, such as quantifying intravascular ultrasound as a possible method of imaging a particular vascular object.
=== Therapeutic medical physics ===
Radiation therapeutic physics is also known as radiotherapy physics or radiation oncology physics.
The majority of medical physicists currently working in the US, Canada, and some western countries are of this group. A radiation therapy physicist typically deals with linear accelerator (Linac) systems and kilovoltage x-ray treatment units on a daily basis, as well as other modalities such as TomoTherapy, gamma knife, Cyberknife, proton therapy, and brachytherapy.
The academic and research side of therapeutic physics may encompass fields such as boron neutron capture therapy, sealed source radiotherapy, terahertz radiation, high-intensity focused ultrasound (including lithotripsy), optical radiation (lasers, ultraviolet, etc.) including photodynamic therapy, as well as nuclear medicine including unsealed source radiotherapy, and photomedicine, which is the use of light to treat and diagnose disease.
=== Nuclear medicine physics ===
Nuclear medicine is a branch of medicine that uses radiation to provide information about the functioning of a person's specific organs or to treat disease. The thyroid, bones, heart, liver and many other organs can be easily imaged, and disorders in their function revealed. In some cases radiation sources can be used to treat diseased organs, or tumours. Five Nobel laureates have been intimately involved with the use of radioactive tracers in medicine.
Over 10,000 hospitals worldwide use radioisotopes in medicine, and about 90% of the procedures are for diagnosis. The most common radioisotope used in diagnosis is technetium-99m, with some 30 million procedures per year, accounting for 80% of all nuclear medicine procedures worldwide.
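Taken together, these figures imply a rough scale for the field: if technetium-99m's roughly 30 million procedures per year account for about 80% of nuclear medicine procedures, the worldwide total is on the order of 37–38 million procedures annually. A quick check of that arithmetic:

```python
tc99m_procedures_per_year = 30_000_000   # approximate figure quoted above
tc99m_share = 0.80                       # fraction of all nuclear medicine procedures

total_procedures = tc99m_procedures_per_year / tc99m_share
print(round(total_procedures))  # ~37,500,000 nuclear medicine procedures per year
```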
=== Health physics ===
Health physics is also known as radiation safety or radiation protection. Health physics is the applied physics of radiation protection for health and health care purposes. It is the science concerned with the recognition, evaluation, and control of health hazards to permit the safe use and application of ionizing radiation. Health physics professionals promote excellence in the science and practice of radiation protection and safety.
Background radiation
Radiation protection
Dosimetry
Health physics
Radiological protection of patients
=== Non-ionizing medical radiation physics ===
Some aspects of non-ionizing radiation physics may be considered under radiation protection or diagnostic imaging physics. Imaging modalities include MRI, optical imaging and ultrasound. Safety considerations cover these areas as well as lasers.
Lasers and applications in medicine
=== Physiological measurement ===
Physiological measurements have also been used to monitor and measure various physiological parameters. Many physiological measurement techniques are non-invasive and can be used in conjunction with, or as an alternative to, other invasive methods. Measurement methods include electrocardiography. Many of these areas may be covered by other specialities, for example medical engineering or vascular science.
=== Healthcare informatics and computational physics ===
Other closely related fields to medical physics include fields which deal with medical data, information technology and computer science for medicine.
Information and communication in medicine
Medical informatics
Image processing, display and visualization
Computer-aided diagnosis
Picture archiving and communication systems (PACS)
Standards: DICOM, ISO, IHE
Hospital information systems
e-Health
Telemedicine
Digital operating room
Workflow, patient-specific modeling
Medicine on the Internet of Things
Distant monitoring and telehomecare
=== Areas of research and academic development ===
Non-clinical physicists may or may not focus on the above areas from an academic and research point of view, but their scope of specialization may also encompass lasers and ultraviolet systems (such as photodynamic therapy), fMRI and other methods for functional imaging as well as molecular imaging, electrical impedance tomography, diffuse optical imaging, optical coherence tomography, and dual energy X-ray absorptiometry.
== Legislative and advisory bodies ==
=== International ===
ICRU: International Commission on Radiation Units and Measurements
ICRP: International Commission on Radiological Protection
IOMP: International Organization for Medical Physics
IAEA: International Atomic Energy Agency
=== United States of America ===
NCRP: National Council on Radiation Protection & Measurements
NRC: Nuclear Regulatory Commission
FDA: Food and Drug Administration
AAPM: American Association of Physicists in Medicine
=== United Kingdom ===
IPEM: Institute of Physics and Engineering in Medicine
MHRA: Medicines and Healthcare products Regulatory Agency
=== Other ===
AMPI: Association of Medical Physicists of India
CCPM: Canadian College of Physicists in Medicine
EFOMP: European Federation of Organisations for Medical Physics
ACPSEM: Australasian College of Physical Scientists and Engineers in Medicine
== References ==
== External links ==
Human Health Campus, The official website of the International Atomic Energy Agency dedicated to Professionals in Radiation Medicine. This site is managed by the Division of Human Health, Department of Nuclear Sciences and Applications
Australasian College of Physical Scientists and Engineers in Medicine (ACPSEM)
Canadian Organization of Medical Physicists - Organisation canadienne des physiciens médicaux
The American Association of Physicists in Medicine
Romanian College of Medical Physicists
medicalphysicsweb.org from the Institute of Physics
AIP Medical Physics portal
Institute of Physics & Engineering in Medicine (IPEM) - UK
European Federation of Organizations for Medical Physics (EFOMP)
International Organization for Medical Physics (IOMP) | Wikipedia/Medical_biophysics |
The unmoved mover (Ancient Greek: ὃ οὐ κινούμενον κινεῖ, romanized: ho ou kinoúmenon kineî, lit. 'that which moves without being moved') or prime mover (Latin: primum movens) is a concept advanced by Aristotle as a primary cause (or first uncaused cause) or "mover" of all the motion in the universe. As is implicit in the name, the unmoved mover moves other things, but is not itself moved by any prior action. In Book 12 (Ancient Greek: Λ) of his Metaphysics, Aristotle describes the unmoved mover as being perfectly beautiful, indivisible, and contemplating only the perfect contemplation: self-contemplation. He also equates this concept with the active intellect. This Aristotelian concept had its roots in cosmological speculations of the earliest Greek pre-Socratic philosophers and became highly influential and widely drawn upon in medieval philosophy and theology. St. Thomas Aquinas, for example, elaborated on the unmoved mover in the Five Ways.
== First philosophy ==
Aristotle argues, in Book 8 of the Physics and Book 12 of the Metaphysics, "that there must be an immortal, unchanging being, ultimately responsible for all wholeness and orderliness in the sensible world."
In the Physics (VIII 4–6) Aristotle finds "surprising difficulties" explaining even commonplace change, and in support of his approach of explanation by four causes, he required "a fair bit of technical machinery". This "machinery" includes potentiality and actuality, hylomorphism, the theory of categories, and "an audacious and intriguing argument, that the bare existence of change requires the postulation of a first cause, an unmoved mover whose necessary existence underpins the ceaseless activity of the world of motion". Aristotle's "first philosophy", or Metaphysics ("after the Physics"), develops his peculiar theology of the prime mover, as πρῶτον κινοῦν ἀκίνητον: an independent divine eternal unchanging immaterial substance.
=== Celestial spheres ===
Aristotle adopted the geometrical model of Eudoxus of Cnidus to provide a general explanation of the apparent wandering of the classical planets arising from uniform circular motions of celestial spheres. While the number of spheres in the model itself was subject to change (47 or 55), Aristotle's account of aether, and of potentiality and actuality, required an individual unmoved mover for each sphere.
=== Final cause and efficient cause ===
Simplicius argues that the first unmoved mover is a cause not only in the sense of being a final cause—which everyone in his day, as in ours, would accept—but also in the sense of being an efficient cause (1360. 24ff.), and his master Ammonius wrote a whole book defending the thesis (ibid. 1363. 8–10). Simplicius's arguments include citations of Plato's views in the Timaeus—evidence not relevant to the debate unless one happens to believe in the essential harmony of Plato and Aristotle—and inferences from approving remarks which Aristotle makes about the role of Nous in Anaxagoras, which require a good deal of reading between the lines. But he does point out rightly that the unmoved mover fits the definition of an efficient cause—"whence the first source of change or rest" (Phys. II. 3, 194b29–30; Simpl. 1361. 12ff.). The examples which Aristotle adduces do not obviously suggest an application to the first unmoved mover, and it is at least possible that Aristotle originated his fourfold distinction without reference to such an entity. But the real question is whether his definition of the efficient cause includes the unmoved mover willy-nilly. One curious fact remains: that Aristotle never acknowledges the alleged fact that the unmoved mover is an efficient cause (a problem of which Simplicius is well aware: 1363. 12–14)...
Despite their apparent function in the celestial model, the unmoved movers were a final cause, not an efficient cause for the movement of the spheres; they were solely a constant inspiration, and even if taken for an efficient cause precisely due to being a final cause, the nature of the explanation is purely teleological.
=== Aristotle's theology ===
The unmoved movers, if they were anywhere, were said to fill the outer void beyond the sphere of fixed stars:
It is clear then that there is neither place, nor void, nor time, outside the heaven. Hence whatever is there, is of such a nature as not to occupy any place, nor does time age it; nor is there any change in any of the things which lie beyond the outermost motion; they continue through their entire duration unalterable and unmodified, living the best and most self sufficient of lives… From [the fulfilment of the whole heaven] derive the being and life which other things, some more or less articulately but other feebly, enjoy.
The unmoved mover is an immaterial substance (separate and individual beings), having neither parts nor magnitude. As such, it would be physically impossible for them to move material objects of any size by pushing, pulling, or collision. Because matter is, for Aristotle, a substratum in which a potential to change can be actualized, any potentiality must be actualized in an eternal being, but it must not be still because continuous activity is essential for all forms of life. This immaterial form of activity must be intellectual and cannot be contingent upon sensory perception if it is to remain uniform; therefore, eternal substance must think only of thinking itself and exist outside the starry sphere, where even the notion of place is undefined for Aristotle. Their influence on lesser beings is purely the result of an "aspiration or desire," and each aetheric celestial sphere emulates one of the unmoved movers, as best it can, by uniform circular motion. The first heaven, the outmost sphere of fixed stars, is moved by a desire to emulate the prime mover (first cause), about whom, the subordinate movers suffer an accidental dependency.
Many of Aristotle's contemporaries complained that oblivious, powerless gods are unsatisfactory. Nonetheless, it was a life which Aristotle enthusiastically endorsed as one most enviable and perfect, the unembellished basis of theology. As the whole of nature depends on the inspiration of the eternal unmoved movers, Aristotle was concerned with establishing the metaphysical necessity of the perpetual motions of the heavens. Through the Sun's seasonal action upon the terrestrial spheres, the cycles of generation and corruption give rise to all natural motion as efficient cause. The intellect, nous, "or whatever else it be that is thought to rule and lead us by nature, and to have cognizance of what is noble and divine" is the highest activity, according to Aristotle (contemplation or speculative thinking, theōríā). It is also the most sustainable, pleasant, self-sufficient activity; something which is aimed at for its own sake. (Unlike politics and warfare, it does not involve doing things we'd rather not do, but rather something we do at our leisure.) This aim is not strictly human: to achieve it means to live following not mortal thoughts but something immortal and divine within humans. According to Aristotle, contemplation is the only type of happy activity that it would not be ridiculous to imagine the gods having. In Aristotle's psychology and biology, the intellect is the soul (see also eudaimonia).
According to Giovanni Reale, the first Unmoved Mover is a living, thinking, and personal God who "possesses the theoretical knowledge alone or in the highest degree...knows not only Himself, but all things in their causes and first principles."
=== First cause ===
In Book VIII of his Physics, Aristotle examines the notions of change or motion, and attempts to show, by a challenging argument, that the mere supposition of a 'before' and an 'after' requires a first principle. He argues that in the beginning, if the cosmos had come to be, its first motion would lack an antecedent state; and, as Parmenides said, "nothing comes from nothing". The cosmological argument, later attributed to Aristotle, thereby concludes that God exists. However, if the cosmos had a beginning, Aristotle argued, it would require an efficient first cause, a notion that Aristotle took to demonstrate a critical flaw.
But it is a wrong assumption to suppose universally that we have an adequate first principle in virtue of the fact that something always is so ... Thus Democritus reduces the causes that explain nature to the fact that things happened in the past in the same way as they happen now: but he does not think fit to seek for a first principle to explain this 'always' ... Let this conclude what we have to say in support of our contention that there never was a time when there was not motion, and never will be a time when there will not be motion.
The purpose of Aristotle's cosmological argument that at least one eternal unmoved mover must exist is to support everyday change.
Of things that exist, substances are the first. But if substances can, then all things can perish... and yet, time and change cannot. Now, the only continuous change is that of place, and the only continuous change of place is circular motion. Therefore, there must be an eternal circular motion and this is confirmed by the fixed stars which are moved by the eternal actual substance that's purely actual.
In Aristotle's estimation, an explanation without the temporal actuality and potentiality of an infinite locomotive chain is required for an eternal cosmos with neither beginning nor end: an unmoved eternal substance for whom the Primum Mobile turns diurnally, whereby all terrestrial cycles are driven: day and night, the seasons of the year, the transformation of the elements, and the nature of plants and animals.
== Substance and change ==
Aristotle begins by describing substance, of which he says there are three types: the sensible, subdivided into the perishable, which belongs to physics, and the eternal, which belongs to "another science." He notes that sensible substance is changeable and that there are several types of change, including quality and quantity, generation and destruction, increase and diminution, alteration, and motion. Change occurs when one given state becomes something contrary to it: that is to say, what exists potentially comes to exist actually (see potentiality and actuality). Therefore, "a thing [can come to be], incidentally, out of that which is not, [and] also all things come to be out of that which is, but is potentially, and is not actually." That by which something is changed is the mover, that which is changed is the matter, and that into which it is changed is the form.
Substance is necessarily composed of different elements. The proof for this is that there are things that are different from each other and that all things are composed of elements. Since elements combine to form composite substances, and because these substances differ from each other, there must be different elements: in other words, "b or a cannot be the same as ba."
== Number of movers ==
Near the end of Metaphysics, Book Λ, Aristotle introduces a surprising question, asking "whether we have to suppose one such [mover] or more than one, and if the latter, how many." Aristotle concludes that the number of all the movers equals the number of separate movements, and we can determine these by considering the mathematical science most akin to philosophy, i.e., astronomy. Although the mathematicians differ on the number of movements, Aristotle considers that the number of celestial spheres would be 47 or 55. Nonetheless, he concludes his Metaphysics, Book Λ, with a quotation from the Iliad: "The rule of many is not good; one ruler let there be."
== Influence ==
John Burnet (1892) noted
The Neoplatonists were quite justified in regarding themselves as the spiritual heirs of Pythagoras; and, in their hands, philosophy ceased to exist as such, and became theology. And this tendency was at work all along; hardly a single Greek philosopher was wholly uninfluenced by it. Perhaps Aristotle might seem to be an exception; but it is probable that, if we still possessed a few such "exoteric" works as the Protreptikos in their entirety, we should find that the enthusiastic words in which he speaks of the "blessed life" in the Metaphysics and in the Ethics (Nicomachean Ethics) were less isolated outbursts of feeling than they appear now. In later days, Apollonios of Tyana showed in practice what this sort of thing must ultimately lead to. The theurgy and thaumaturgy of the late Greek schools were only the fruit of the seed sown by the generation which immediately preceded the Persian War.
Aristotle's principles of being (see section above) influenced Anselm's view of God, whom he called "that than which nothing greater can be conceived." Anselm thought God did not feel emotions such as anger or love but appeared to do so through our imperfect understanding. The incongruity of judging "being" against something that might not exist may have led Anselm to his famous ontological argument for God's existence.
Many medieval philosophers used the idea of approaching a knowledge of God through negative attributes. For example, we should not say that God exists in the usual sense of the term; all we can safely say is that God is not nonexistent. We should not say that God is wise, but we can say that God is not ignorant (i.e., in some way, God has some properties of knowledge). We should not say that God is One, but we can state that there is no multiplicity in God's being.
Many later Jewish, Islamic, and Christian philosophers accepted Aristotelian theological concepts. Key Jewish philosophers included ibn Tibbon, Maimonides, and Gersonides, among many others. Their views of God are considered mainstream by many Jews of all denominations, even today. Preeminent among Islamic philosophers who were influenced by Aristotelian theology are Avicenna and Averroes. In Christian theology, the key philosopher influenced by Aristotle was undoubtedly Thomas Aquinas. There had been earlier Aristotelian influences within Christianity (notably Anselm), but Aquinas (who, incidentally, found his Aristotelian influence via Avicenna, Averroes, and Maimonides) incorporated extensive Aristotelian ideas throughout his theology. Through Aquinas and the Scholastic Christian theology of which he was a significant part, Aristotle became "academic theology's great authority in the thirteenth century", and his influence on Christian theology became widespread and deeply embedded. However, notable Christian theologians rejected Aristotelian theological influence, especially the first generation of Christian Reformers, most notably Martin Luther. In subsequent Protestant theology, Aristotelian thought quickly reemerged in Protestant scholasticism.
== See also ==
== Notes ==
== References ==
== Sources ==
The Theology of Aristotle in the Stanford Encyclopedia of Philosophy
John W. Watt (2019). The Aristotelian Tradition in Syriac. Routledge. ISBN 9780429817489.
Gilles Emery; Matthew Levering (2015). Aristotle in Aquinas's Theology. Oxford University Press. ISBN 9780198749639.
Richard Bodeus (2000). Aristotle and the Theology of the Living Immortals. SUNY Press. ISBN 9780791447284.
Otfried Hoffe (2003). Aristotle. SUNY Press. ISBN 9780791456347. | Wikipedia/Prime_mover_theory |
Ancient, medieval and Renaissance astronomers and philosophers developed many different theories about the dynamics of the celestial spheres. They explained the motions of the various nested spheres in terms of the materials of which they were made, external movers such as celestial intelligences, and internal movers such as motive souls or impressed forces. Most of these models were qualitative, although a few of them incorporated quantitative analyses that related speed, motive force and resistance.
== The celestial material and its natural motions ==
In considering the physics of the celestial spheres, scholars followed two different views about the material composition of the celestial spheres. For Plato, the celestial regions were made "mostly out of fire" on account of fire's mobility. Later Platonists, such as Plotinus, maintained that although fire moves naturally upward in a straight line toward its natural place at the periphery of the universe, when it arrived there, it would either rest or move naturally in a circle. This account was compatible with Aristotle's meteorology of a fiery region in the upper air, dragged along underneath the circular motion of the lunar sphere. For Aristotle, however, the spheres themselves were made entirely of a special fifth element, Aether (Αἰθήρ), the bright, untainted upper atmosphere in which the gods dwell, as distinct from the dense lower atmosphere, Aer (Ἀήρ). While the four terrestrial elements (earth, water, air and fire) gave rise to the generation and corruption of natural substances by their mutual transformations, aether was unchanging, moving always with a uniform circular motion that was uniquely suited to the celestial spheres, which were eternal. Earth and water had a natural heaviness (gravitas), which they expressed by moving downward toward the center of the universe. Fire and air had a natural lightness (levitas), such that they moved upward, away from the center. Aether, being neither heavy nor light, moved naturally around the center.
== The causes of celestial motion ==
As early as Plato, philosophers considered the heavens to be moved by immaterial agents. Plato believed the cause to be a world-soul, created according to mathematical principles, which governed the daily motion of the heavens (the motion of the Same) and the opposed motions of the planets along the zodiac (the motion of the Different). Aristotle proposed the existence of divine unmoved movers which act as final causes; the celestial spheres mimic the movers, as best they can, by moving with uniform circular motion. In his Metaphysics, Aristotle maintained that an individual unmoved mover would be required to ensure each individual motion in the heavens. While stipulating that the number of spheres, and thus gods, is subject to revision by astronomers, he estimated the total as 47 or 55, depending on whether one followed the model of Eudoxus or Callippus. In On the Heavens, Aristotle presented an alternate view of eternal circular motion as moving itself, in the manner of Plato's world-soul, which lent support to three principles of celestial motion: an internal soul, an external unmoved mover, and the celestial material (aether).
=== Later Greek interpreters ===
In his Planetary Hypotheses, Ptolemy (c. 90 – c. 168) rejected the Aristotelian concept of an external prime mover, maintaining instead that the planets have souls and move themselves with a voluntary motion. Each planet sends out motive emissions that direct its own motion and the motions of the epicycle and deferent that make up its system, just as a bird sends out emissions to its nerves that direct the motions of its feet and wings.
John Philoponus (490–570) considered that the heavens were made of fire, not of aether, yet maintained that circular motion is one of the two natural motions of fire. In a theological work, On the Creation of the World (De opificio mundi), he denied that the heavens are moved by either a soul or by angels, proposing that "it is not impossible that God, who created all these things, imparted a motive force to the Moon, the Sun, and other stars – just as the inclination to heavy and light bodies, and the movements due to the internal soul to all living beings – in order that the angels do not move them by force." This is interpreted as an application of the concept of impetus to the motion of the celestial spheres. In an earlier commentary on Aristotle's Physics, Philoponus compared the innate power or nature that accounts for the rotation of the heavens to the innate power or nature that accounts for the fall of rocks.
=== Islamic interpreters ===
The Islamic philosophers al-Farabi (c. 872 – c. 950) and Avicenna (c. 980–1037), following Plotinus, maintained that Aristotle's movers, called intelligences, came into being through a series of emanations beginning with God. A first intelligence emanated from God, and from the first intelligence emanated a sphere, its soul, and a second intelligence. The process continued down through the celestial spheres until the sphere of the Moon, its soul, and a final intelligence. They considered that each sphere was moved continually by its soul, seeking to emulate the perfection of its intelligence. Avicenna maintained that besides an intelligence and its soul, each sphere was also moved by a natural inclination (mayl).
An interpreter of Aristotle from Muslim Spain, al-Bitruji (d. c. 1204), proposed a radical transformation of astronomy that did away with epicycles and eccentrics, in which the celestial spheres were driven by a single unmoved mover at the periphery of the universe. The spheres thus moved with a "natural nonviolent motion". The mover's power diminished with increasing distance from the periphery so that the lower spheres lagged behind in their daily motion around the Earth; this power reached even as far as the sphere of water, producing the tides.
More influential for later Christian thinkers were the teachings of Averroes (1126–1198), who agreed with Avicenna that the intelligences and souls combine to move the spheres but rejected his concept of emanation. Considering how the soul acts, he maintained that the soul moves its sphere without effort, for the celestial material has no tendency to a contrary motion.
Much later, the mutakallim Adud al-Din al-Iji (1281–1355) rejected the principle of uniform and circular motion, following the Ash'ari doctrine of atomism, which maintained that all physical effects were caused directly by God's will rather than by natural causes. He maintained that the celestial spheres were "imaginary things" and "more tenuous than a spider's web". His views were challenged by al-Jurjani (1339–1413), who argued that even if the celestial spheres "do not have an external reality, yet they are things that are correctly imagined and correspond to what [exists] in actuality."
=== Medieval Western Europe ===
In the Early Middle Ages, Plato's picture of the heavens was dominant among European philosophers, which led Christian thinkers to question the role and nature of the world-soul. With the recovery of Aristotle's works in the twelfth and thirteenth centuries, Aristotle's views supplanted the earlier Platonism, and a new set of questions regarding the relationships of the unmoved movers to the spheres and to God emerged.
In the early phases of the Western recovery of Aristotle, Robert Grosseteste (c. 1175–1253), influenced by medieval Platonism and by the astronomy of al-Bitruji, rejected the idea that the heavens are moved by either souls or intelligences. Adam Marsh's (c. 1200–1259) treatise On the Ebb and Flow of the Sea, which was formerly attributed to Grosseteste, maintained al-Bitruji's opinion that the celestial spheres and the seas are moved by a peripheral mover whose motion weakens with distance.
Thomas Aquinas (c. 1225–1274), following Avicenna, interpreted Aristotle to mean that there were two immaterial substances responsible for the motion of each celestial sphere, a soul that was an integral part of its sphere, and an intelligence that was separate from its sphere. The soul shares the motion of its sphere and causes the sphere to move through its love and desire for the unmoved separate intelligence. Avicenna, al-Ghazali, Moses Maimonides, and most Christian scholastic philosophers identified Aristotle's intelligences with the angels of revelation, thereby associating an angel with each of the spheres. Moreover, Aquinas rejected the idea that celestial bodies are moved by an internal nature, similar to the heaviness and lightness that moves terrestrial bodies. Attributing souls to the spheres was theologically controversial, as that could make them animals. After the Condemnations of 1277, most philosophers came to reject the idea that the celestial spheres had souls.
Robert Kilwardby (c. 1215–1279) discussed three alternative explanations of the motions of the celestial spheres, rejecting the views that celestial bodies are animated and are moved by their own spirits or souls, or that the celestial bodies are moved by angelic spirits, which govern and move them. He maintained, instead, that "celestial bodies are moved by their own natural inclinations similar to weight." Just as heavy bodies are naturally moved by their own weight, which is an intrinsic active principle, so the celestial bodies are naturally moved by a similar intrinsic principle. Since the heavens are spherical, the only motion that could be natural to them is rotation. Kilwardby's idea had been earlier held by another Oxford scholar, John Blund (c. 1175–1248).
In two slightly different discussions, John Buridan (c. 1295 – c. 1358) suggested that when God created the celestial spheres, he began to move them, impressing in them a circular impetus that would be neither corrupted nor diminished, since there was neither an inclination to other movements nor any resistance in the celestial region. He noted that this would allow God to rest on the seventh day, but he left the matter to be resolved by the theologians.
Nicole Oresme (c. 1323-1382) explained the motion of the spheres in traditional terms of the action of intelligences but noted that, contrary to Aristotle, some intelligences are moved; for example, the intelligence that moves the Moon's epicycle shares the motion of the lunar orb in which the epicycle is embedded. He related the spheres' motions to the proportion of motive power to resistance that was impressed in each sphere when God created the heavens. In discussing the relation of the moving power of the intelligence, the resistance of the sphere, and the circular velocity, he said "this ratio ought not to be called a ratio of force to resistance except by analogy, because an intelligence moves by will alone ... and the heavens do not resist it."
According to Grant, except for Oresme, scholastic thinkers did not consider the force-resistance model to be properly applicable to the motion of celestial bodies, although some, such as Bartholomeus Amicus, thought analogically in terms of force and resistance. By the end of the Middle Ages it was the common opinion among philosophers that the celestial bodies were moved by external intelligences, or angels, and not by some kind of an internal mover.
=== The movers and Copernicanism ===
Although Nicolaus Copernicus (1473–1543) transformed Ptolemaic astronomy and Aristotelian cosmology by moving the Earth from the center of the universe, he retained both the traditional model of the celestial spheres and the medieval Aristotelian views of the causes of their motion. Copernicus followed Aristotle in maintaining that circular motion is natural to the form of a sphere. However, he also appears to have accepted the traditional philosophical belief that the spheres are moved by an external mover.
Johannes Kepler's (1571–1630) cosmology eliminated the celestial spheres, but he held that the planets were moved both by an external motive power, which he located in the Sun, and a motive soul associated with each planet. In an early manuscript discussing the motion of Mars, Kepler considered the Sun to cause the circular motion of the planet. He then attributed the inward and outward motion of the planet, which transforms its overall motion from circular to oval, to a moving soul in the planet since the motion is "not a natural motion, but more of an animate one". In various writings, Kepler often attributed a kind of intelligence to the inborn motive faculties associated with the stars.
In the aftermath of Copernicanism the planets came to be seen as bodies moving freely through a very subtle aethereal medium. Although many scholastics continued to maintain that intelligences were the celestial movers, they now associated the intelligences with the planets themselves, rather than with the celestial spheres.
== See also ==
Christian angelic hierarchy
== Notes ==
== References ==
=== Primary sources ===
=== Secondary sources === | Wikipedia/Dynamics_of_the_celestial_spheres |
Dialogue Concerning the Two Chief World Systems (Dialogo sopra i due massimi sistemi del mondo) is a 1632 book by Galileo Galilei comparing Nicolaus Copernicus's heliocentric system model with Ptolemy's geocentric model. Written in Italian, it was translated into Latin as Systema cosmicum (Cosmic System) in 1635 by Matthias Bernegger. The book was dedicated to Galileo's patron, Ferdinando II de' Medici, Grand Duke of Tuscany, who received the first printed copy on February 22, 1632. It consists of four Socratic dialogues between the Copernican Salviati, the educated layman Sagredo and the geocentrist Simplicio. They discuss the findings of their "mutual friend the Academician" (Galileo).
In the heliocentric system, the Earth and other planets orbit the Sun, while in the Ptolemaic system, everything in the Universe circles around the Earth. The Dialogue was published in Florence under a formal license from the Inquisition. In 1633, Galileo was found to be "vehemently suspect of heresy" based on the book, which was then placed on the Index of Forbidden Books, from which it was not removed until 1835 (after the theories it discussed had been permitted in print in 1822). In an action that was not announced at the time, the publication of anything else he had written or ever might write was also banned in Catholic countries.
== Overview ==
While writing the book, Galileo referred to it as his Dialogue on the Tides, and when the manuscript went to the Inquisition for approval, the title was Dialogue on the Ebb and Flow of the Sea. He was ordered to remove all mention of tides from the title and to change the preface, because granting approval to such a title would look like approval of his theory of the tides using the motion of the Earth as proof. As a result, the formal title on the title page is Dialogue, followed by Galileo's name, academic posts, and a long subtitle. The name by which the work is now known was extracted by the printer from the description on the title page when permission was given to reprint it with an approved preface by a Catholic theologian in 1744. This must be kept in mind when discussing Galileo's motives for writing the book. Although the book is presented formally as a consideration of both systems (as it needed to be in order to be published at all), there is no question that the Copernican side gets the better of the argument.
=== Structure ===
The book is presented as a series of discussions, over a span of four days, among two philosophers and a layman:
Salviati argues for the Copernican position and presents some of Galileo's views directly, calling him the "Academician" in honor of Galileo's membership in the Accademia dei Lincei. He is named after Galileo's friend Filippo Salviati (1582–1614).
Sagredo is an intelligent layman who is initially neutral. He is named after Galileo's friend Giovanni Francesco Sagredo (1571–1620).
Simplicio, a dedicated follower of Ptolemy and Aristotle, presents the traditional views and the arguments against the Copernican position. He is supposedly named after Simplicius of Cilicia, a sixth-century commentator on Aristotle, but it was suspected the name was a double entendre, as the Italian for "simple" (as in "simple minded") is "semplice". Simplicio is modeled on two contemporary conservative philosophers, Lodovico delle Colombe (1565–1616?), Galileo's opponent, and Cesare Cremonini (1550–1631), a Paduan colleague who had refused to look through the telescope. Colombe was the leader of a group of Florentine opponents of Galileo's, which some of the latter's friends referred to as "the pigeon league".
=== Content ===
The discussion is not narrowly limited to astronomical topics, but ranges over much of contemporary science. Some of this is to show what Galileo considered good science, such as the discussion of William Gilbert's work on magnetism. Other parts are important to the debate, answering erroneous arguments against the Earth's motion.
A classic argument against the Earth's motion is the absence of any sensation of speed at the Earth's surface, even though, because of the Earth's rotation, it moves at about 1,700 km/h at the equator. In this category there is a thought experiment in which a man is below decks on a ship and cannot tell whether the ship is docked or is moving smoothly through the water: he observes water dripping from a bottle, fish swimming in a tank, butterflies flying, and so on; and their behavior is the same whether the ship is moving or not. This is a classic exposition of the inertial frame of reference and refutes the objection that if we were moving hundreds of kilometres an hour as the Earth rotated, anything that one dropped would rapidly fall behind and drift to the west.
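As a rough check of that figure (a back-of-the-envelope estimate assuming an equatorial circumference of about 40,075 km and one rotation in roughly 24 hours, values not taken from the Dialogue itself):

$$v \approx \frac{40\,075\ \text{km}}{24\ \text{h}} \approx 1.7 \times 10^{3}\ \text{km/h}$$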
The bulk of Galileo's arguments may be divided into three classes:
Rebuttals to the objections raised by traditional philosophers; for example, the thought experiment on the ship.
Observations that are incompatible with the Ptolemaic model: the phases of Venus, for instance, which simply could not happen, or the apparent motions of sunspots, which could only be explained in the Ptolemaic or Tychonic systems as resulting from an implausibly complicated precession of the Sun's axis of rotation.
Arguments showing that the elegant unified theory of the Heavens that the philosophers held, which was believed to prove that the Earth was stationary, was incorrect; for instance, the mountains of the Moon, the moons of Jupiter, and the very existence of sunspots, none of which was part of the old astronomy.
Generally, these arguments have held up well in terms of the knowledge of the next four centuries. Just how convincing they ought to have been to an impartial reader in 1632 remains a contentious issue. Galileo attempted a fourth class of argument:
Direct physical argument for the Earth's motion, by means of an explanation of tides.
As an account of the causation of tides or a proof of the Earth's motion, it is a failure. The fundamental argument is internally inconsistent and actually leads to the conclusion that tides do not exist. But, Galileo was fond of the argument and devoted the "Fourth Day" of the discussion to it. The degree of its failure is—like nearly anything having to do with Galileo—a matter of controversy. On the one hand, the whole thing has recently been described in print as "cockamamie." On the other hand, Einstein used a rather different description:
It was Galileo's longing for a mechanical proof of the motion of the earth which misled him into formulating a wrong theory of the tides. The fascinating arguments in the last conversation would hardly have been accepted as proof by Galileo, had his temperament not got the better of him. [Emphasis added]
=== Omissions ===
The Dialogue does not treat the Tychonic system, which was becoming the preferred system of many astronomers at the time of publication and which was ultimately proven incorrect. The Tychonic system is a motionless Earth system but not a Ptolemaic system; it is a hybrid system of the Copernican and Ptolemaic models. Mercury and Venus orbit the Sun (as in the Copernican system) in small circles, while the Sun in turn orbits a stationary Earth; Mars, Jupiter, and Saturn orbit the Sun in much larger circles, which means they also orbit the Earth. The Tychonian system is mathematically equivalent to the Copernican system, except that the Copernican system predicts a stellar parallax, while the Tychonian system predicts none. Stellar parallax was not measurable until the 19th century, and therefore there was at the time no valid disproof of the Tychonic system on empirical grounds, nor any decisive observational evidence for the Copernican system.
Galileo never took Tycho's system seriously, as can be seen in his correspondence, regarding it as an inadequate and physically unsatisfactory compromise. A reason for the absence of Tycho's system (in spite of many references to Tycho and his work in the book) may be sought in Galileo's theory of the tides, which provided the original title and organizing principle of the Dialogue. While the Copernican and Tychonic systems are equivalent geometrically, they are quite different dynamically. Galileo's tidal theory entailed the actual, physical movement of the Earth; that is, if true, it would have provided the kind of proof that Foucault's pendulum apparently provided two centuries later. Without reference to Galileo's tidal theory, there would be no difference between the Copernican and Tychonic systems.
Galileo fails to discuss the possibility of non-circular orbits, although Johannes Kepler had sent him a copy of his 1609 book, Astronomia nova, in which he proposes elliptical orbits—correctly calculating that of Mars. Prince Federico Cesi's letter to Galileo of 1612 treated the two laws of planetary motion presented in the book as common knowledge; Kepler's third law was published in 1619. Four and a half decades after Galileo's death, Isaac Newton published his laws of motion and gravity, from which a heliocentric system with planets in approximately elliptical orbits is deducible.
== Summary ==
"Preface: To the Discerning Reader" refers to the ban on the "Pythagorean opinion that the earth moves" and says that the author "takes the Copernican side with a pure mathematical hypothesis". He introduces the friends Sagredo and Salviati with whom he had had discussions as well as the peripatetic philosopher Simplicio.
=== Day one ===
Salviati starts with Aristotle's proof of the completeness and perfection of the world (i.e. the universe) because of its three dimensions. Simplicio points out that three was favoured by the Pythagoreans whereas Salviati cannot understand why three legs are better than two or four. He suggests that the numbers were "trifles which later spread among the vulgar" and that their definitions, such as those of straight lines and right angles, were more useful in establishing the dimensions. Simplicio's response was that Aristotle thought that in physical matters mathematical demonstration was not always needed.
Salviati attacks Aristotle's definition of the heavens as incorruptible and unchanging, with only the lunar-bound zone showing change. He points to the changes seen in the skies: the new stars of 1572 and 1604 and the sunspots, seen through the new telescope. There is a discussion about Aristotle's use of a priori arguments. Salviati suggests that Aristotle, like anyone else, chose arguments from his own experience to support his conclusions, and that he would change his mind in the present circumstances.
Simplicio argues that sunspots could simply be small opaque objects passing in front of the Sun, but Salviati points out that some appear or disappear randomly and those at the edge are flattened, unlike separate bodies. Therefore, "it is better Aristotelian philosophy to say 'Heaven is alterable because my senses tell me' than 'Heaven is unalterable because Aristotle was so persuaded by reasoning.'" He adds "we possess a much better basis for reasoning about celestial things than Aristotle did...Now we, thanks to the telescope, have brought the heavens thirty or forty times closer to us than they were to Aristotle, so that we can discern many things in them that he could not see; among other things these sunspots, which were absolutely invisible to him." Experiments with a mirror are used to show that the Moon's surface must be opaque and not a perfect crystal sphere as Simplicio believes. He refuses to accept that mountains on the Moon cause shadows, or that reflected light from the Earth is responsible for the faint outline in a crescent moon.
Sagredo considers the Earth noble because of the changes in it, whereas Simplicio says that change in the Moon or stars would be useless because they do not benefit man. Salviati points out that days on the Moon are a month long and that, despite the varied terrain the telescope has disclosed, it would not sustain life. Humans acquire mathematical truths slowly and hesitantly, whereas God knows the full infinity of them intuitively. And when one looks into the marvelous things men have understood and contrived, then clearly the human mind is one of the most excellent of God's works.
=== Day two ===
The second day starts by repeating that Aristotle would be changing his opinions if he saw what they were seeing. "It is the followers of Aristotle who have crowned him with authority, not he who has usurped or appropriated it to himself." There is one supreme motion—that by which the Sun, Moon, planets and fixed stars appear to be moved from east to west in the space of 24 hours. This may as logically belong to the Earth alone as to the rest of the universe. Aristotle and Ptolemy, who understood this, do not argue against any other motion than this diurnal one. Motion is relative: the position of the sacks of grain on a ship can be identical at the end of the voyage despite the movement of the ship. Why should we believe that nature moves all these extremely large bodies with inconceivable velocities rather than simply moving the moderately sized Earth? If the Earth is removed from the picture, what happens to all the movement?
The movement of the skies from east to west is the opposite of all the other motions of the heavenly bodies which are from west to east; making the Earth rotate brings it into line with all the others. Although Aristotle argues that circular motions are not contraries, they could still lead to collisions. The great orbits of the planets take longer than the shorter: Saturn and Jupiter take many years, Mars two, whereas the Moon takes only a month. Jupiter's moons take even less. This is not changed if the Earth rotates every day, but if the Earth is stationary then we suddenly find that the sphere of the fixed stars rotates in 24 hours. Given the distances, that would more reasonably be thousands of years. In addition some of these stars have to travel faster than others: if the Pole Star was precisely at the axis, then it would be entirely stationary whereas those of the equator have unimaginable speed. The solidity of this supposed sphere is incomprehensible. Make the Earth the primum mobile and the need for this extra sphere disappears.
They consider three main objections to the motion of the Earth: that a falling body would be left behind by the Earth and thus fall far to the west of its point of release; that a cannonball fired to the west would similarly fly much further than one fired to the east; and that a cannonball fired vertically would also land far to the west. Salviati shows that these do not take account of the impetus of the cannon. He also points out that attempting to prove that the Earth does not move by using vertical fall commits the logical fault of paralogism (assuming what is to be proved), because if the Earth is moving then it is only in appearance that it is falling vertically; in fact it is falling at a slant, as happens with a cannonball rising through the cannon (illustrated).
In rebutting a work which claims that a ball falling from the Moon would take six days to arrive, the odd-number rule is introduced: a body falling 1 unit in an interval would fall 3 units in the next interval, 5 units in the subsequent one, etc. This gives rise to the rule by which the distance fallen is according to the square of the time. Using this he calculates the time is really little more than 3 hours. He also points out that density of the material does not make much difference: a lead ball might only accelerate twice as fast as one of cork. In fact, a ball falling from such a height would not fall behind but ahead of the vertical because the rotational motion would be in ever-decreasing circles. What makes the Earth move is similar to whatever moves Mars or Jupiter and is the same as that which pulls the stone to Earth. Calling it gravity does not explain what it is.
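A standard reconstruction (not a quotation from the Dialogue) of how the odd-number rule yields the times-squared law: if a body falls 1 unit of distance in the first equal interval of time, 3 in the second, 5 in the third, and so on, then after n intervals the total distance fallen is

$$s_n = 1 + 3 + 5 + \cdots + (2n-1) = n^{2},$$

so the distance grows as the square of the elapsed time, which is the rule Salviati uses in the calculation above.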
=== Day three ===
Salviati starts by dismissing the arguments of a book against the novas he has been reading overnight. Unlike comets, these were stationary, and their lack of parallax was easily checked, so they could not have been in the sublunary sphere. Simplicio now gives the greatest argument against the annual motion of the Earth: that if it moves, it can no longer be the center of the zodiac, the world. Aristotle gives proofs that the universe is finite, bounded, and spherical. Salviati points out that these proofs disappear if Aristotle is denied the assumption that the universe is movable, but he grants the assumption initially in order not to multiply disputes.
Salviati points out that if anything is the center, it must be the Sun not the Earth, because all the planets are closer or further away from the Earth at different times, Venus and Mars up to eight times. He encourages Simplicio to make a plan of the planets, starting with Venus and Mercury which are easily seen to rotate about the Sun. Mars must also go about the Sun (as well as the Earth) since it is never seen horned, unlike Venus now seen through the telescope; similarly with Jupiter and Saturn. Earth, which is between Mars with a period of two years and Venus with nine months, has a period of a year which may more elegantly be attributed to motion than a state of rest.
Sagredo brings up two other common objections. If the Earth rotated, the mountains would soon be in a position such that one would have to descend them rather than ascend. Secondly, the motion would be so rapid that someone at the bottom of a well would have only a brief instant to glimpse a star as it traversed overhead. Simplicio can see that the first is no different from travelling around the globe, as any who have circumnavigated it know; and though he realizes the second would be just the same if the heavens were rotating instead, he still does not understand it. Salviati says the first is no different from the argument of those who deny the antipodes. For the second, he encourages Simplicio to work out what fraction of the sky can be seen from down the well.
Salviati brings up another problem, which is that Mars and Venus are not as variable as the theory would suggest. He explains that the size of a star to the human eye is affected by the brightness and the sizes are not real. This is resolved by use of the telescope which also shows the crescent shape of Venus. A further objection to the movement of the Earth, the unique existence of the Moon, has been resolved by the discovery of the moons of Jupiter, which would appear like Earth's Moon to any Jovian.
Copernicus has succeeded in reducing some of the uneven motions of Ptolemy who had to deal with motions that sometimes go fast, sometimes slow, and sometimes backwards, by means of vast epicycles. Mars, above the Sun's sphere, often falls far below it, then soars above it. These anomalies are cured by the annual movement of the Earth. This is explained by a diagram in which the varying motion of Jupiter is shown using the Earth's orbit.
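The retrograde "anomalies" that the diagram illustrates can be reproduced with a toy numerical model; the following sketch (not from the Dialogue) assumes circular, coplanar orbits with approximate modern values for the radii and periods of the Earth and Jupiter, and checks how often Jupiter's apparent geocentric longitude runs backwards.

```python
import numpy as np

# Toy heliocentric model: circular, coplanar orbits (assumed radii in AU and
# periods in years). The Earth's annual motion makes Jupiter's apparent
# geocentric longitude run backwards around each opposition.
r_e, T_e = 1.0, 1.0          # Earth
r_j, T_j = 5.2, 11.86        # Jupiter (approximate)

t = np.linspace(0.0, 2.0, 400)                 # two years of positions
earth = r_e * np.exp(2j * np.pi * t / T_e)     # positions in the ecliptic plane
jup = r_j * np.exp(2j * np.pi * t / T_j)

lon = np.unwrap(np.angle(jup - earth))         # apparent geocentric longitude
retro = np.diff(lon) < 0                       # True where the motion is retrograde
print("fraction of time Jupiter appears retrograde:", round(float(retro.mean()), 2))
```

With these assumptions the printed fraction comes out to roughly a quarter of this two-year window, reflecting a retrograde episode of a few months around each opposition, with no epicycles required.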
Simplicio produces another booklet in which theological arguments are mixed with astronomic, but Salviati refuses to address the issues from Scripture. So he produces the argument that the fixed stars must be at an inconceivable distance with the smallest larger than the whole orbit of the Earth. Salviati explains that this all comes from a misrepresentation of what Copernicus said, resulting in a huge over-calculation of the size of a sixth magnitude star. But many other famous astronomers over-estimated the size of stars by ignoring the brightness factor. Not even Tycho, with his accurate instruments, set himself to measure the size of any star except the Sun and Moon. But Salviati (Galileo) was able to make a reasonable estimate simply by hanging a cord to obscure the star and measuring the distance from eye to cord.
But still many cannot believe that the fixed stars can individually be as big or bigger than the Sun. To what end are these? Salviati maintains that "it is brash for our feebleness to attempt to judge the reasons for God's actions, and to call everything in the universe vain and superfluous which does not serve us".
Has Tycho or any of his disciples tried to investigate in any way phenomena that might affirm or deny the movement of the Earth? Do any of them know how much variation is needed in the fixed stars? Simplicio objects to conceding that the distance of the fixed stars is too great for it to be detectable. Salviati points out how difficult it is even to detect the varying distances of Saturn. Many of the positions of the fixed stars are not known accurately and far better instruments than Tycho's are needed: say using a sight with a fixed position 60 miles away.
Sagredo then asks Salviati to explain how the Copernican system explains the seasons and inequalities of night and day. This he does with the aid of a diagram showing the position of the Earth in the four seasons. He points out how much simpler it is than the Ptolemaic system. But Simplicio thinks Aristotle was wise to avoid too much geometry. He prefers Aristotle's axiom to avoid more than one simple motion at a time.
=== Day four ===
They are in Sagredo's house in Venice, where tides are an important issue, and Salviati wants to show the effect of the Earth's movement on the tides. He first points out the three periods of the tides: daily (diurnal), generally with intervals of 6 hours of rising and six more of falling; monthly, seemingly from the Moon, which increases or decreases these tides; and annual, leading to different sizes at the equinoxes. He considers first the daily motion. Three varieties are observed: in some places the waters rise and fall without any forward motion; in others they move towards the east and back to the west without rising or falling; in still others there is a combination of both—this happens in Venice where the waters rise on entering and fall on leaving. In the Straits of Messina there are very swift currents between Scylla and Charybdis. In the open Mediterranean the alteration of height is small but the currents are noticeable.
Simplicio counters with the peripatetic explanations, which are based on the depths of the sea, and the dominion of the Moon over the water, though this does not explain the risings when the Moon is below the horizon. But he admits it could be a miracle. When the water in Venice rises, where does it come from? There is little rise in Corfu or Dubrovnik. From the ocean through the Straits of Gibraltar? It's much too far away and the currents are too slow. So could the movement of the container cause the disturbance? Consider the barges that bring water into Venice. When they hit an obstacle, the water rushes forward; when they speed up it will go to the back. For all this disturbance there is no need for new water and the level in the middle stays largely constant though the water there rushes backwards and forwards.
Consider a point on the Earth under the joint action of the annual and diurnal movements. At one time these are added together and 12 hours later they act against each other, so there is an alternate speeding up and slowing down. So the ocean basins are affected in the same way as the barge particularly in an east-west direction. The length of the barge makes a difference to the speed of oscillations, just as the length of a plumb bob changes its speed. The depth of water also makes a difference to the size of vibrations. The primary effect only explains tides once a day; one must look elsewhere for the six-hour change, to the oscillation periods of the water. In some places, such as the Hellespont and the Aegean the periods are briefer and variable. But a north-south sea like the Red Sea has very little tide whereas the Messina Strait carries the pent up effect of two basins.
Simplicio objects that if this accounts for the water, should it not even more be seen in the winds? Salviati suggests that the containing basins are not so effective and the air does not sustain its motion. Nevertheless, these forces are seen by the steady winds from east to west in the oceans in the tropical zone. It seems that the Moon also is taking part in the production of the daily effects, but that is repugnant to his mind. The motions of the Moon have caused great difficulty to astronomers. It's impossible to make a full account of these things given the irregular nature of the sea basins.
== See also ==
"Discourse on the Tides", 1616 Galileo essay
== Notes ==
== Bibliography ==
Drake, Stillman (1970). Galileo Studies. Ann Arbor: The University of Michigan Press. ISBN 0-472-08283-3.
Linton, Christopher M. (2004). From Eudoxus to Einstein – A History of Mathematical Astronomy. Cambridge: Cambridge University Press. ISBN 978-0-521-82750-8.
Sharratt, Michael (1994). Galileo: Decisive Innovator. Cambridge: Cambridge University Press. ISBN 0-521-56671-1.
== External links ==
Media related to Dialogo sopra i due massimi sistemi del mondo at Wikimedia Commons
Italian text with figures (in Italian)
Thomas Salusbury's 1661 English translation of the Dialogue. Online copy of full text.
Dialogo dei massimi sistemi. Fiorenza, Per Gio: Batista Landini, 1632. From the Rare Book and Special Collections Division at the Library of Congress
Audio book version by Brian Keating | Wikipedia/Dialogue_Concerning_the_Two_Chief_World_Systems |
Solid-state physics is the study of rigid matter, or solids, through methods such as solid-state chemistry, quantum mechanics, crystallography, electromagnetism, and metallurgy. It is the largest branch of condensed matter physics. Solid-state physics studies how the large-scale properties of solid materials result from their atomic-scale properties. Thus, solid-state physics forms a theoretical basis of materials science. Along with solid-state chemistry, it also has direct applications in the technology of transistors and semiconductors.
== Background ==
Solid materials are formed from densely packed atoms, which interact intensely. These interactions produce the mechanical (e.g. hardness and elasticity), thermal, electrical, magnetic and optical properties of solids. Depending on the material involved and the conditions in which it was formed, the atoms may be arranged in a regular, geometric pattern (crystalline solids, which include metals and ordinary water ice) or irregularly (an amorphous solid such as common window glass).
The bulk of solid-state physics, as a general theory, is focused on crystals. Primarily, this is because the periodicity of atoms in a crystal — its defining characteristic — facilitates mathematical modeling. Likewise, crystalline materials often have electrical, magnetic, optical, or mechanical properties that can be exploited for engineering purposes.
The forces between the atoms in a crystal can take a variety of forms. For example, in a crystal of sodium chloride (common salt), the crystal is made up of ionic sodium and chlorine, and held together with ionic bonds. In others, the atoms share electrons and form covalent bonds. In metals, electrons are shared amongst the whole crystal in metallic bonding. Finally, the noble gases do not undergo any of these types of bonding. In solid form, the noble gases are held together with van der Waals forces resulting from the polarisation of the electronic charge cloud on each atom. The differences between the types of solid result from the differences between their bonding.
== History ==
The physical properties of solids have been common subjects of scientific inquiry for centuries, but a separate field going by the name of solid-state physics did not emerge until the 1940s, in particular with the establishment of the Division of Solid State Physics (DSSP) within the American Physical Society. The DSSP catered to industrial physicists, and solid-state physics became associated with the technological applications made possible by research on solids. By the early 1960s, the DSSP was the largest division of the American Physical Society.
Large communities of solid state physicists also emerged in Europe after World War II, in particular in England, Germany, and the Soviet Union. In the United States and Europe, solid state became a prominent field through its investigations into semiconductors, superconductivity, nuclear magnetic resonance, and diverse other phenomena. During the early Cold War, research in solid state physics was often not restricted to solids, which led some physicists in the 1970s and 1980s to found the field of condensed matter physics, which organized around common techniques used to investigate solids, liquids, plasmas, and other complex matter. Today, solid-state physics is broadly considered to be the subfield of condensed matter physics, often referred to as hard condensed matter, that focuses on the properties of solids with regular crystal lattices.
== Crystal structure and properties ==
Many properties of materials are affected by their crystal structure. This structure can be investigated using a range of crystallographic techniques, including X-ray crystallography, neutron diffraction and electron diffraction.
The sizes of the individual crystals in a crystalline solid material vary depending on the material involved and the conditions when it was formed. Most crystalline materials encountered in everyday life are polycrystalline, with the individual crystals being microscopic in scale, but macroscopic single crystals can be produced either naturally (e.g. diamonds) or artificially.
Real crystals feature defects or irregularities in the ideal arrangements, and it is these defects that critically determine many of the electrical and mechanical properties of real materials.
== Electronic properties ==
Properties of materials such as electrical conduction and heat capacity are investigated by solid state physics. An early model of electrical conduction was the Drude model, which applied kinetic theory to the electrons in a solid. By assuming that the material contains immobile positive ions and an "electron gas" of classical, non-interacting electrons, the Drude model was able to explain electrical and thermal conductivity and the Hall effect in metals, although it greatly overestimated the electronic heat capacity.
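As a rough illustration of the kind of estimate the Drude model allows, the following sketch (an assumption-laden example, not a calculation from this article) evaluates the DC conductivity formula σ = ne²τ/m for copper-like values of the conduction-electron density and relaxation time.

```python
# Drude-model estimate of the DC conductivity, sigma = n * e^2 * tau / m.
# The density n and relaxation time tau below are illustrative, copper-like
# values assumed for this example, not figures quoted in the article.
e = 1.602e-19      # elementary charge, C
m_e = 9.109e-31    # electron mass, kg
n = 8.5e28         # conduction-electron density, m^-3 (assumed, ~copper)
tau = 2.5e-14      # relaxation time, s (assumed, room temperature)

sigma = n * e**2 * tau / m_e
print(f"Drude conductivity ~ {sigma:.1e} S/m")   # ~6e7 S/m, close to copper's measured value
```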
Arnold Sommerfeld combined the classical Drude model with quantum mechanics in the free electron model (or Drude-Sommerfeld model). Here, the electrons are modelled as a Fermi gas, a gas of particles which obey the quantum mechanical Fermi–Dirac statistics. The free electron model gave improved predictions for the heat capacity of metals, however, it was unable to explain the existence of insulators.
The nearly free electron model is a modification of the free electron model which includes a weak periodic perturbation meant to model the interaction between the conduction electrons and the ions in a crystalline solid. By introducing the idea of electronic bands, the theory explains the existence of conductors, semiconductors and insulators.
The nearly free electron model rewrites the Schrödinger equation for the case of a periodic potential. The solutions in this case are known as Bloch states. Since Bloch's theorem applies only to periodic potentials, and since unceasing random movements of atoms in a crystal disrupt periodicity, this use of Bloch's theorem is only an approximation, but it has proven to be a tremendously valuable approximation, without which most solid-state physics analysis would be intractable. Deviations from periodicity are treated by quantum mechanical perturbation theory.
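A minimal numerical sketch of the nearly free electron picture in one dimension, under assumptions chosen only for illustration (units with ħ = m = 1, lattice constant a = 1, and a single weak Fourier component V_G coupling plane waves that differ by a reciprocal lattice vector): diagonalizing the Bloch Hamiltonian in a small plane-wave basis recovers the free-electron parabolas as V_G → 0 and opens a band gap of roughly 2V_G at the Brillouin zone boundary, which is how the model accounts for the difference between conductors and insulators.

```python
import numpy as np

# 1-D nearly-free-electron bands in a plane-wave basis (hbar = m = a = 1).
# V_G is the single assumed Fourier component of the weak periodic potential.
a, V_G, n_G = 1.0, 0.2, 5
G = 2 * np.pi / a
ms = np.arange(-n_G, n_G + 1)             # plane waves k + m*G

def bands(k):
    """Eigenvalues of the Bloch Hamiltonian at crystal momentum k."""
    H = np.diag(0.5 * (k + ms * G) ** 2)  # free-electron kinetic energies
    for i in range(len(ms) - 1):          # weak coupling between waves differing by G
        H[i, i + 1] = H[i + 1, i] = V_G
    return np.linalg.eigvalsh(H)

ks = np.linspace(-np.pi / a, np.pi / a, 201)    # first Brillouin zone
E = np.array([bands(k) for k in ks])
print("gap at the zone boundary ~", round(E[-1, 1] - E[-1, 0], 3))  # ~ 2 * V_G
```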
== Modern research ==
Modern research topics in solid-state physics include:
High-temperature superconductivity
Quasicrystals
Spin glass
Strongly correlated materials
Two-dimensional materials
Nanomaterials
== See also ==
Condensed matter physics
Crystallography
Nuclear spectroscopy
Solid mechanics
== References ==
== Further reading ==
Neil W. Ashcroft and N. David Mermin, Solid State Physics (Harcourt: Orlando, 1976).
Charles Kittel, Introduction to Solid State Physics (Wiley: New York, 2004).
H. M. Rosenberg, The Solid State (Oxford University Press: Oxford, 1995).
Steven H. Simon, The Oxford Solid State Basics (Oxford University Press: Oxford, 2013).
Out of the Crystal Maze. Chapters from the History of Solid State Physics, ed. Lillian Hoddeson, Ernest Braun, Jürgen Teichmann, Spencer Weart (Oxford: Oxford University Press, 1992).
M. A. Omar, Elementary Solid State Physics (Revised Printing, Addison-Wesley, 1993).
Hofmann, Philip (2015-05-26). Solid State Physics (2 ed.). Wiley-VCH. ISBN 978-3527412822. | Wikipedia/State_theory |
In materials science, the term single-layer materials or 2D materials refers to crystalline solids consisting of a single layer of atoms. These materials are promising for some applications but remain the focus of research. Single-layer materials derived from single elements generally carry the -ene suffix in their names, e.g. graphene. Single-layer materials that are compounds of two or more elements have -ane or -ide suffixes. 2D materials can generally be categorized as either 2D allotropes of various elements or as compounds (consisting of two or more covalently bonding elements).
It is predicted that there are hundreds of stable single-layer materials. The atomic structure and calculated basic properties of these and many other potentially synthesisable single-layer materials can be found in computational databases. 2D materials can be produced using mainly two approaches: top-down exfoliation and bottom-up synthesis. The exfoliation methods include sonication, mechanical, hydrothermal, electrochemical, laser-assisted, and microwave-assisted exfoliation.
== Single element materials ==
=== C: graphene and graphyne ===
Graphene
Graphene is a crystalline allotrope of carbon in the form of a nearly transparent (to visible light) one atom thick sheet. It is hundreds of times stronger than most steels by weight. It has the highest known thermal and electrical conductivity, displaying current densities 1,000,000 times that of copper. It was first produced in 2004.
Andre Geim and Konstantin Novoselov won the 2010 Nobel Prize in Physics "for groundbreaking experiments regarding the two-dimensional material graphene". They first produced it by lifting graphene flakes from bulk graphite with adhesive tape and then transferring them onto a silicon wafer.
Graphyne
Graphyne is another 2-dimensional carbon allotrope whose structure is similar to graphene's. It can be seen as a lattice of benzene rings connected by acetylene bonds. Depending on the content of the acetylene groups, graphyne can be considered a mixed hybridization, sp^n, where 1 < n < 2, compared to graphene (pure sp^2) and diamond (pure sp^3).
First-principles calculations using phonon dispersion curves and ab initio finite-temperature quantum mechanical molecular dynamics simulations showed graphyne and its boron nitride analogues to be stable.
The existence of graphyne was conjectured before 1960. In 2010, graphdiyne (graphyne with diacetylene groups) was synthesized on copper substrates.
In 2022 a team claimed to have used alkyne metathesis to synthesise graphyne, though this claim was disputed. After an investigation, the team's paper was retracted by the publisher, which cited fabricated data.
Later during 2022 synthesis of multi-layered γ‑graphyne was successfully performed through the polymerization of 1,3,5-tribromo-2,4,6-triethynylbenzene under Sonogashira coupling conditions.
Recently, it has been claimed to be a competitor for graphene due to the potential of direction-dependent Dirac cones.
=== B: borophene ===
Borophene is a crystalline atomic monolayer of boron and is also known as boron sheet. First predicted by theory in the mid-1990s in a freestanding state, and then demonstrated as distinct monoatomic layers on substrates by Zhang et al., different borophene structures were experimentally confirmed in 2015.
=== Ge: germanene ===
Germanene is a two-dimensional allotrope of germanium with a buckled honeycomb structure. Experimentally synthesized germanene exhibits a honeycomb structure consisting of two hexagonal sub-lattices that are vertically displaced by 0.2 Å from each other.
=== Si: silicene ===
Silicene is a two-dimensional allotrope of silicon, with a hexagonal honeycomb structure similar to that of graphene. Its growth is scaffolded by a pervasive Si/Ag(111) surface alloy beneath the two-dimensional layer.
=== Sn: stanene ===
Stanene is a predicted topological insulator that may display dissipationless currents at its edges near room temperature. It is composed of tin atoms arranged in a single layer, in a manner similar to graphene. Its buckled structure leads to high reactivity against common air pollutants such as NOx and COx and it is able to trap and dissociate them at low temperature.
A structure determination of stanene using low energy electron diffraction has shown ultra-flat stanene on a Cu(111) surface.
=== Pb: plumbene ===
Plumbene is a two-dimensional allotrope of lead, with a hexagonal honeycomb structure similar to that of graphene.
=== P: phosphorene ===
Phosphorene is a 2-dimensional, crystalline allotrope of phosphorus. Its mono-atomic hexagonal structure makes it conceptually similar to graphene. However, phosphorene has substantially different electronic properties; in particular it possesses a nonzero band gap while displaying high electron mobility. This property potentially makes it a better semiconductor than graphene.
The synthesis of phosphorene mainly consists of micromechanical cleavage or liquid-phase exfoliation methods. The former has a low yield, while the latter produces free-standing nanosheets in solvent rather than on a solid support. Bottom-up approaches such as chemical vapor deposition (CVD) have not yet been established because of phosphorene's high reactivity. Therefore, at present, the most effective method for large-area fabrication of thin films of phosphorene consists of wet assembly techniques such as Langmuir–Blodgett, involving assembly followed by deposition of nanosheets on solid supports.
=== Sb: antimonene ===
Antimonene is a two-dimensional allotrope of antimony, with its atoms arranged in a buckled honeycomb lattice. Theoretical calculations predicted that antimonene would be a stable semiconductor in ambient conditions with suitable performance for (opto)electronics. Antimonene was first isolated in 2016 by micromechanical exfoliation and it was found to be very stable under ambient conditions. Its properties make it also a good candidate for biomedical and energy applications.
In a study made in 2018, antimonene-modified screen-printed electrodes (SPEs) were subjected to a galvanostatic charge/discharge test using a two-electrode approach to characterize their supercapacitive properties. The best configuration observed, which contained 36 nanograms of antimonene in the SPE, showed a specific capacitance of 1578 F g−1 at a current of 14 A g−1. Over 10,000 galvanostatic cycles, the capacitance retention dropped to 65% within the first 800 cycles but then remained between 65% and 63% for the remaining 9,200 cycles. The 36 ng antimonene/SPE system also showed an energy density of 20 mW h kg−1 and a power density of 4.8 kW kg−1. These supercapacitive properties indicate that antimonene is a promising electrode material for supercapacitor systems. A more recent study of antimonene-modified SPEs shows the inherent ability of antimonene layers to form electrochemically passivated layers that facilitate electroanalytical measurements in oxygenated environments, in which the presence of dissolved oxygen normally hinders the analytical procedure. The same study also describes the in-situ production of antimonene oxide/PEDOT:PSS nanocomposites as electrocatalytic platforms for the determination of nitroaromatic compounds.
=== Bi: bismuthene ===
Bismuthene, the two-dimensional (2D) allotrope of bismuth, was predicted to be a topological insulator. In 2015, it was predicted that bismuthene retains its topological phase when grown on silicon carbide; the prediction was realized when the material was synthesized in 2016. At first glance the system is similar to graphene, as the Bi atoms arrange in a honeycomb lattice. However, the bandgap is as large as 800 meV due to the large spin–orbit interaction (coupling) of the Bi atoms and their interaction with the substrate. Thus, room-temperature applications of the quantum spin Hall effect come into reach. It has been reported to be the largest nontrivial-bandgap 2D topological insulator in its natural state. Top-down exfoliation of bismuthene has been reported in various instances, with recent works promoting the implementation of bismuthene in the field of electrochemical sensing. Emdadul et al. predicted the mechanical strength and phonon thermal conductivity of monolayer β-bismuthene through atomic-scale analysis. The obtained room-temperature (300 K) fracture strength is ~4.21 N/m along the armchair direction and ~4.22 N/m along the zigzag direction. At 300 K, its Young's moduli are reported to be ~26.1 N/m and ~25.5 N/m along the armchair and zigzag directions, respectively. In addition, the predicted phonon thermal conductivity of ~1.3 W/m∙K at 300 K is considerably lower than that of other analogous 2D honeycombs, making it a promising material for thermoelectric applications.
=== Au: goldene ===
On 16 April 2024, scientists from Linköping University in Sweden reported that they had produced goldene, a single layer of gold atoms 100 nm wide. Lars Hultman, a materials scientist on the team behind the new research, is quoted as saying "we submit that goldene is the first free-standing 2D metal, to the best of our knowledge", meaning that it is not attached to any other material, unlike plumbene and stanene. Researchers from New York University Abu Dhabi (NYUAD) had previously reported synthesising goldene in 2022; however, various other scientists have contended that the NYUAD team failed to prove they made a single-layer sheet of gold, as opposed to a multi-layer sheet. Goldene is expected to be used primarily for its optical properties, with applications such as sensing or as a catalyst.
=== Metals ===
Single and double atom layers of platinum in a two-dimensional film geometry have been demonstrated. These atomically thin platinum films are epitaxially grown on graphene, which imposes a compressive strain that modifies the surface chemistry of the platinum, while also allowing charge transfer through the graphene. Single atom layers of palladium with thicknesses down to 2.6 Å, and of rhodium with thicknesses of less than 4 Å, have been synthesized and characterized with atomic force microscopy and transmission electron microscopy.
A 2D titanium formed by additive manufacturing (laser powder bed fusion) achieved greater strength than any known material (50% greater than magnesium alloy WE54). The material was arranged in a tubular lattice with a thin band running inside, merging two complementary lattice structures. This reduced by half the stress at the weakest points in the structure.
=== 2D supracrystals ===
The supracrystals of 2D materials have been proposed and theoretically simulated. These monolayer crystals are built of supra-atomic periodic structures in which the atoms at the nodes of the lattice are replaced by symmetric complexes. For example, in the hexagonal structure of graphene, patterns of 4 or 6 carbon atoms, rather than single atoms, would be arranged hexagonally as the repeating node of the unit cell.
== 2D alloys ==
Two-dimensional alloys (or surface alloys) are a single atomic layer of alloy that is incommensurate with the underlying substrate. One example is the 2D ordered alloys of Pb with Sn and with Bi. Surface alloys have been found to scaffold two-dimensional layers, as in the case of silicene.
== Compounds ==
Boron nitride nanosheet
Titanate nanosheet
Borocarbonitrides
MXenes
2D silica
Niobium bromide and niobium chloride (Nb3X8)
=== Transition metal dichalcogenide monolayers ===
The most commonly studied two-dimensional transition metal dichalcogenide (TMD) is monolayer molybdenum disulfide (MoS2). Several phases are known, notably the 1T and 2H phases. The naming convention reflects the structure: the 1T phase has one "sheet" (consisting of a layer of S-Mo-S) per unit cell in a trigonal crystal system, while the 2H phase has two sheets per unit cell in a hexagonal crystal system. The 2H phase is more common, as the 1T phase is metastable and spontaneously reverts to 2H unless stabilized by additional electron donors (typically surface S vacancies).
The 2H phase of MoS2 (Pearson symbol hP6; Strukturbericht designation C7) has space group P63/mmc. Each layer contains Mo surrounded by S in trigonal prismatic coordination. Conversely, the 1T phase (Pearson symbol hP3) has space group P-3m1, and octahedrally-coordinated Mo; with the 1T unit cell containing only one layer, the unit cell has a c parameter slightly less than half the length of that of the 2H unit cell (5.95 Å and 12.30 Å, respectively). The different crystal structures of the two phases result in differences in their electronic band structure as well. The d-orbitals of 2H-MoS2 are split into three bands: dz2, dx2-y2,xy, and dxz,yz. Of these, only the dz2 is filled; this combined with the splitting results in a semiconducting material with a bandgap of 1.9eV. 1T-MoS2, on the other hand, has partially filled d-orbitals which give it a metallic character.
Because the structure consists of in-plane covalent bonds and inter-layer van der Waals interactions, the electronic properties of monolayer TMDs are highly anisotropic. For example, the conductivity of MoS2 in the direction parallel to the planar layer (0.1–1 ohm−1cm−1) is ~2200 times larger than the conductivity perpendicular to the layers. There are also differences between the properties of a monolayer compared to the bulk material: the Hall mobility at room temperature is drastically lower for monolayer 2H MoS2 (0.1–10 cm2V−1s−1) than for bulk MoS2 (100–500 cm2V−1s−1). This difference arises primarily due to charge traps between the monolayer and the substrate it is deposited on.
MoS2 has important applications in (electro)catalysis. As with other two-dimensional materials, properties can be highly geometry-dependent; the surface of MoS2 is catalytically inactive, but the edges can act as active sites for catalyzing reactions. For this reason, device engineering and fabrication may involve considerations for maximizing catalytic surface area, for example by using small nanoparticles rather than large sheets or depositing the sheets vertically rather than horizontally. Catalytic efficiency also depends strongly on the phase: the aforementioned electronic properties of 2H MoS2 make it a poor candidate for catalysis applications, but these issues can be circumvented through a transition to the metallic (1T) phase. The 1T phase has more suitable properties, with a current density of 10 mA/cm2, an overpotential of −187 mV relative to RHE, and a Tafel slope of 43 mV/decade (compared to 94 mV/decade for the 2H phase).
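For orientation, the Tafel slope quoted above comes from the standard Tafel relation of electrode kinetics (general electrochemistry, not a result specific to this material):

\eta = b\,\log_{10}\!\left(\frac{j}{j_{0}}\right),

where η is the overpotential, j the current density, j₀ the exchange current density, and b the Tafel slope; a smaller slope (43 mV/decade for the 1T phase versus 94 mV/decade for 2H) means that less additional voltage is needed for each tenfold increase in current.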
=== Graphane ===
While graphene has a hexagonal honeycomb lattice structure with alternating double bonds emerging from its sp2-bonded carbons, graphane, which retains the hexagonal structure, is the fully hydrogenated version of graphene, with every sp3-hybridized carbon bonded to a hydrogen (chemical formula (CH)n). Furthermore, while graphene is planar due to its double-bonded nature, graphane is puckered, with the hexagons adopting different out-of-plane structural conformers such as the chair or boat to allow for the ideal 109.5° angles that reduce ring strain, in direct analogy to the conformers of cyclohexane.
Graphane was first theorized in 2003, was shown to be stable using first principles energy calculations in 2007, and was first experimentally synthesized in 2009. There are various experimental routes available for making graphane, including the top-down approaches of reduction of graphite in solution or hydrogenation of graphite using plasma/hydrogen gas as well as the bottom-up approach of chemical vapor deposition. Graphane is an insulator, with a predicted band gap of 3.5 eV; however, partially hydrogenated graphene is a semi-conductor, with the band gap being controlled by the degree of hydrogenation.
=== Germanane ===
Germanane is a single-layer crystal composed of germanium with one hydrogen bonded in the z-direction for each atom. Germanane's structure is similar to that of graphane; bulk germanium does not adopt this structure. Germanane is produced in a two-step route starting with calcium germanide. From this material, the calcium (Ca) is removed by de-intercalation with HCl to give a layered solid with the empirical formula GeH. The Ca sites in Zintl-phase CaGe2 interchange with the hydrogen atoms in the HCl solution, producing GeH and CaCl2.
=== SLSiN ===
SLSiN (an acronym for single-layer silicon nitride), a novel 2D material introduced as the first post-graphene member of the Si3N4 family, was first discovered computationally in 2020 via density-functional-theory-based simulations. The material is inherently two-dimensional, an insulator with a band gap of about 4 eV, and stable both thermodynamically and in terms of lattice dynamics.
== Combined surface alloying ==
Often single-layer materials, specifically elemental allotropes, are connected to the supporting substrate via surface alloys. By now this phenomenon has been proven, via a combination of different measurement techniques, for silicene, for which the alloy is difficult to prove by any single technique and hence had long gone unsuspected. Such scaffolding surface alloys can therefore also be expected beneath other two-dimensional materials, significantly influencing the properties of the two-dimensional layer. During growth, the alloy acts as both foundation and scaffold for the two-dimensional layer, for which it paves the way.
== Organic ==
Ni3(HITP)2 is an organic, crystalline, structurally tunable electrical conductor with a high surface area. HITP is an organic chemical (2,3,6,7,10,11-hexaaminotriphenylene). It shares graphene's hexagonal honeycomb structure. Multiple layers naturally form perfectly aligned stacks, with identical 2-nm openings at the centers of the hexagons. Room temperature electrical conductivity is ~40 S cm−1, comparable to that of bulk graphite and among the highest for any conducting metal-organic frameworks (MOFs). The temperature dependence of its conductivity is linear at temperatures between 100 K and 500 K, suggesting an unusual charge transport mechanism that has not been previously observed in organic semiconductors.
The material was claimed to be the first of a group formed by switching metals and/or organic compounds. The material can be isolated as a powder or a film with conductivity values of 2 and 40 S cm−1, respectively.
== Polymer ==
Using melamine (a carbon and nitrogen ring structure) as a monomer, researchers created 2DPA-1, a 2-dimensional polymer sheet held together by hydrogen bonds. The sheet forms spontaneously in solution, allowing thin films to be spin-coated. The polymer has a yield strength twice that of steel, and it withstands six times more deformation force than bulletproof glass. It is impermeable to gases and liquids.
== Combinations ==
Single layers of 2D materials can be combined into layered assemblies. For example, bilayer graphene is a material consisting of two layers of graphene. One of the first reports of bilayer graphene was in the seminal 2004 Science paper by Geim and colleagues, in which they described devices "which contained just one, two, or three atomic layers". Layered combinations of different 2D materials are generally called van der Waals heterostructures. Twistronics is the study of how the angle (the twist) between layers of two-dimensional materials can change their electrical properties.
== Characterization ==
Microscopy techniques such as transmission electron microscopy, 3D electron diffraction, scanning probe microscopy, scanning tunneling microscopy, and atomic force microscopy are used to characterize the thickness and size of the 2D materials. Electrical properties and structural properties such as composition and defects are characterized by Raman spectroscopy, X-ray diffraction, and X-ray photoelectron spectroscopy.
=== Mechanical characterization ===
The mechanical characterization of 2D materials is difficult due to the ambient reactivity and substrate constraints present in many 2D materials. To this end, many mechanical properties are calculated using molecular dynamics simulations or molecular mechanics simulations. Experimental mechanical characterization is possible only for 2D materials that can survive the conditions of the experimental setup and that can be deposited on suitable substrates or exist in a free-standing form. Many 2D materials also undergo out-of-plane deformation, which further complicates measurements.
Nanoindentation testing is commonly used to experimentally measure the elastic modulus, hardness, and fracture strength of 2D materials. From these directly measured values, models exist which allow the estimation of fracture toughness, work-hardening exponent, residual stress, and yield strength. These experiments are run using dedicated nanoindentation equipment or an atomic force microscope (AFM). Nanoindentation experiments are generally run either with the 2D material as a linear strip clamped on both ends and indented by a wedge, or with the 2D material as a circular membrane clamped around its circumference and indented by a curved tip at the center. The strip geometry is difficult to prepare but allows for easier analysis due to the linear resulting stress fields. The circular drum-like geometry is more commonly used and can be easily prepared by exfoliating samples onto a patterned substrate. The stress applied to the film in the clamping process is referred to as the residual stress. In the case of very thin layers of 2D materials, bending stress is generally ignored in indentation measurements, becoming relevant only in multilayer samples. Elastic modulus and residual stress values can be extracted by determining the linear and cubic portions of the experimental force-displacement curve. The fracture stress of the 2D sheet is extracted from the applied stress at failure of the sample. AFM tip size was found to have little effect on elastic property measurements, but the breaking force was found to have a strong tip-size dependence due to stress concentration at the apex of the tip. Using these techniques, the elastic modulus and yield strength of graphene were found to be 342 N/m and 55 N/m, respectively.
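A minimal sketch of how the 2D elastic modulus and residual pre-tension can be extracted from a drum-geometry force-displacement curve is given below. It assumes the membrane model commonly used in nanoindentation studies of suspended graphene, F = σ₀(πa)(δ/a) + E_2D(q³a)(δ/a)³ with q a function of Poisson's ratio; the membrane radius, Poisson's ratio, and the synthetic data are placeholders, not values taken from the text.

# Sketch: fit a drum-type nanoindentation force-displacement curve to extract
# the 2D elastic modulus (N/m) and residual pre-tension (N/m).
# Assumptions (not from the text): membrane model above, placeholder radius a,
# placeholder Poisson's ratio nu, and synthetic "measured" data.
import numpy as np
from scipy.optimize import curve_fit

a = 0.5e-6                                    # membrane radius in metres (placeholder)
nu = 0.165                                    # Poisson's ratio (placeholder)
q = 1.0 / (1.05 - 0.15 * nu - 0.16 * nu**2)   # geometry factor of the membrane model

def force(delta, sigma0, E2d):
    """Indentation force (N) at centre deflection delta (m)."""
    linear = sigma0 * (np.pi * a) * (delta / a)        # pre-tension term (linear in delta)
    cubic = E2d * (q**3 * a) * (delta / a) ** 3        # membrane-stretching term (cubic in delta)
    return linear + cubic

# Centre deflection (m) and AFM force (N); here synthetic placeholder data with noise.
delta = np.linspace(1e-9, 100e-9, 50)
rng = np.random.default_rng(0)
F_meas = force(delta, 0.3, 340.0) + rng.normal(0.0, 2e-9, delta.size)

# Fit the linear + cubic model to recover pre-tension and 2D modulus.
(sigma0_fit, E2d_fit), _ = curve_fit(force, delta, F_meas, p0=(0.1, 100.0))
print(f"residual pre-tension ~ {sigma0_fit:.2f} N/m, 2D elastic modulus ~ {E2d_fit:.0f} N/m")

In practice the same fit is applied to the measured curve, and the breaking force read off at failure is converted to a fracture strength using the tip radius, which is why the tip-size dependence mentioned above matters.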
Poisson's ratio measurements in 2D materials are generally straightforward. To obtain a value, a 2D sheet is placed under stress and the displacement responses are measured, or a molecular dynamics calculation is run. The unique structures found in 2D materials have been found to result in auxetic behavior in phosphorene and graphene and in a Poisson's ratio of zero in triangular-lattice borophene.
The shear modulus of graphene has been extracted by measuring the resonance frequency shift in a double-paddle oscillator experiment, as well as with molecular dynamics simulations.
Fracture toughness of 2D materials in Mode I (KIC) has been measured directly by stretching pre-cracked layers and monitoring crack propagation in real-time. MD simulations as well as molecular mechanics simulations have also been used to calculate fracture toughness in Mode I. In anisotropic materials, such as phosphorene, crack propagation was found to happen preferentially along certain directions. Most 2D materials were found to undergo brittle fracture.
== Applications ==
The major expectation held amongst researchers is that given their exceptional properties, 2D materials will replace conventional semiconductors to deliver a new generation of electronics.
=== Biological applications ===
Research on 2D nanomaterials is still in its infancy, with the majority of research focusing on elucidating the unique material characteristics and few reports focusing on biomedical applications of 2D nanomaterials. Nevertheless, recent rapid advances in 2D nanomaterials have raised important yet exciting questions about their interactions with biological moieties. 2D nanoparticles such as carbon-based 2D materials, silicate clays, transition metal dichalcogenides (TMDs), and transition metal oxides (TMOs) provide enhanced physical, chemical, and biological functionality owing to their uniform shapes, high surface-to-volume ratios, and surface charge.
Two-dimensional (2D) nanomaterials are ultrathin nanomaterials with a high degree of anisotropy and chemical functionality. 2D nanomaterials are highly diverse in terms of their mechanical, chemical, and optical properties, as well as in size, shape, biocompatibility, and degradability. These diverse properties make 2D nanomaterials suitable for a wide range of applications, including drug delivery, imaging, tissue engineering, biosensors, and gas sensors among others. However, their low-dimension nanostructure gives them some common characteristics. For example, 2D nanomaterials are the thinnest materials known, which means that they also possess the highest specific surface areas of all known materials. This characteristic makes these materials invaluable for applications requiring high levels of surface interactions on a small scale. As a result, 2D nanomaterials are being explored for use in drug delivery systems, where they can adsorb large numbers of drug molecules and enable superior control over release kinetics. Additionally, their exceptional surface area to volume ratios and typically high modulus values make them useful for improving the mechanical properties of biomedical nanocomposites and nanocomposite hydrogels, even at low concentrations. Their extreme thinness has been instrumental for breakthroughs in biosensing and gene sequencing. Moreover, the thinness of these molecules allows them to respond rapidly to external signals such as light, which has led to utility in optical therapies of all kinds, including imaging applications, photothermal therapy (PTT), and photodynamic therapy (PDT).
Despite the rapid pace of development in the field of 2D nanomaterials, these materials must be carefully evaluated for biocompatibility in order to be relevant for biomedical applications. The newness of this class of materials means that even the relatively well-established 2D materials like graphene are poorly understood in terms of their physiological interactions with living tissues. Additionally, the complexities of variable particle size and shape, impurities from manufacturing, and protein and immune interactions have resulted in a patchwork of knowledge on the biocompatibility of these materials.
== See also ==
Monolayer
Two-dimensional semiconductor
Transition metal dichalcogenide monolayers
== References ==
== External links ==
"What Are 2D Materials, and Why Do They Interest Scientists?" in Columbia News (March 6, 2024)
"Twenty years of 2D materials" in Nature Physics (January 16, 2024)
== Additional reading ==
Xu, Yang; Cheng, Cheng; Du, Sichao; Yang, Jianyi; Yu, Bin; Luo, Jack; Yin, Wenyan; Li, Erping; Dong, Shurong; Ye, Peide; Duan, Xiangfeng (2016). "Contacts between Two- and Three-Dimensional Materials: Ohmic, Schottky, and p–n Heterojunctions". ACS Nano. 10 (5): 4895–4919. doi:10.1021/acsnano.6b01842. PMID 27132492.
Briggs, Natalie; Subramanian, Shruti; Lin, Zhong; Li, Xufan; Zhang, Xiaotian; Zhang, Kehao; Xiao, Kai; Geohegan, David; Wallace, Robert; Chen, Long-Qing; Terrones, Mauricio; Ebrahimi, Aida; Das, Saptarshi; Redwing, Joan; Hinkle, Christopher; Momeni, Kasra; van Duin, Adri; Crespi, Vin; Kar, Swastik; Robinson, Joshua A. (2019). "A roadmap for electronic grade 2D materials". 2D Materials. 6 (2): 022001. Bibcode:2019TDM.....6b2001B. doi:10.1088/2053-1583/aaf836. OSTI 1503991. S2CID 188118830.
Shahzad, F.; Alhabeb, M.; Hatter, C. B.; Anasori, B.; Man Hong, S.; Koo, C. M.; Gogotsi, Y. (2016). "Electromagnetic interference shielding with 2D transition metal carbides (MXenes)". Science. 353 (6304): 1137–1140. Bibcode:2016Sci...353.1137S. doi:10.1126/science.aag2421. PMID 27609888.
"Graphene Uses & Applications". Graphenea. Retrieved 2014-04-13.
Cao, Yameng; Robson, Alexander J.; Alharbi, Abdullah; Roberts, Jonathan; Woodhead, Christopher Stephen; Noori, Yasir Jamal; Gavito, Ramon Bernardo; Shahrjerdi, Davood; Roedig, Utz (2017). "Optical identification using imperfections in 2D materials". 2D Materials. 4 (4): 045021. arXiv:1706.07949. Bibcode:2017TDM.....4d5021C. doi:10.1088/2053-1583/aa8b4d. S2CID 35147364.
Kolesnichenko, Pavel; Zhang, Qianhui; Zheng, Changxi; Fuhrer, Michael; Davis, Jeffrey (2021). "Multidimensional analysis of excitonic spectra of monolayers of tungsten disulphide: toward computer-aided identification of structural and environmental perturbations of 2D materials". Machine Learning: Science and Technology. 2 (2): 025021. arXiv:2003.01904. doi:10.1088/2632-2153/abd87c. | Wikipedia/Two-dimensional_materials |
In particle physics, the acronym WISP refers to a largely hypothetical weakly interacting sub-eV particle, or weakly interacting slender particle, or weakly interacting slim particle – low-mass particles which rarely interact with conventional particles.
The term is used to generally categorize a type of dark matter candidate, and is essentially synonymous with axion-like particle (ALP).
WISPs are generally hypothetical particles.
WISPs are the low-mass counterpart of weakly interacting massive particles (WIMPs).
== Discussion ==
Except for conventional, active neutrinos, all WISPs are candidate dark matter constituents, and many proposed experiments to detect WISPs might possibly be able to detect several different kinds. "WISP" is most often used to refer to low-mass hypothetical particles which are viable dark matter candidates. Examples include:
Axion – long-standing hypothetical strong force related light particle
Sterile neutrino – never-observed particles explicitly excluded (if they exist) from the weak, strong and electromagnetic interactions
Supersymmetric particles, particularly the lightest supersymmetric particle which might be a
Neutralino – supersymmetric fermions that are electrically neutral composites of superpartners to bosons
== Excluded active neutrinos ==
Although ordinary "active" neutrinos (left-chiral neutrinos and right-chiral antineutrinos) are particles known to exist, and though active neutrinos do indeed technically satisfy the description of the term, they are often excluded from lists of "WISP" particles.
The reason that active neutrinos are often not included among WISPs is that they are no longer viable dark matter candidates: current estimated limits on their number density and mass indicate that their cumulative mass-density could not be high enough to account for the amount of dark matter inferred from its observed effects, although they certainly do make some small contribution to dark matter density.
== Sources ==
The various sources of WISPs could possibly include hot astrophysical plasma and energy transport in stars. Note, however, that since they remain hypothetical (except for active neutrinos), the means of creation of WISPs depends on the theoretical framework used to propose them.
== See also ==
Axion
Feebly interacting particle (FIP)
Hot dark matter
Light dark matter
Lightest supersymmetric particle (LSP)
Sterile neutrino
Weakly interacting massive particle (WIMP)
== References == | Wikipedia/WISP_(particle_physics) |
Microlensing Observations in Astrophysics (MOA) is a collaborative project between researchers in New Zealand and Japan, led by Professor Yasushi Muraki of Nagoya University. They use microlensing to observe dark matter, extra-solar planets, and stellar atmospheres from the Southern Hemisphere. The group concentrates especially on the detection and observation of gravitational microlensing events of high magnification, of order 100 or more, as these provide the greatest sensitivity to extrasolar planets. They work with other groups in Australia, the United States and elsewhere. Observations are conducted at New Zealand's Mt. John University Observatory using a 1.8 m (70.9 in) reflector telescope built for the project.
In September 2020, astronomers using microlensing techniques reported the detection, for the first time, of an Earth-mass rogue planet unbound to any star and free-floating in the Milky Way galaxy. In January 2022, in collaboration with the Optical Gravitational Lensing Experiment (OGLE), they reported in a preprint the first isolated ("rogue") black hole; while there have been other candidates, this is the most solid detection so far, as the technique allowed them to measure not only the amplification of the light but also its deflection by the black hole from the microlensing data.
== Planets discovered ==
The following planets have been announced by this survey, some in conjunction with other surveys.
== See also ==
Optical Gravitational Lensing Experiment or OGLE, a similar microlensing survey
List of extrasolar planets
== References ==
== External links ==
MOA website
MicroFUN - Microlensing Follow-Up Network | Wikipedia/Microlensing_Observations_in_Astrophysics |
MACRO (Monopole, Astrophysics and Cosmic Ray Observatory) was a particle physics experiment located at the Laboratori Nazionali del Gran Sasso in Abruzzo, Italy. MACRO was proposed by 6 scientific institutions in the United States and 6 Italian institutions.
The primary goal of MACRO was to search for magnetic monopoles. The active elements of MACRO were liquid scintillator and streamer tubes, optimized for high resolution tracking and timing. This design also allowed MACRO to operate as a neutrino detector and as a cosmic ray observatory.
The experiment operated from 1989 to 2000. No monopole candidates were detected, meaning that the flux of monopoles is less than 1.4×10−16 per square centimetre per steradian per second (cm−2sr−1s−1) for velocities between 0.0001 c and 1 c (between 30000 m/s and 300000000 m/s).
The magnetic monopole is a theorized particle that has not yet been observed. If detected, it would disprove Gauss's law for magnetism, one of the four Maxwell's equations which describe the well-established modern understanding of electricity and magnetism.
One researcher claimed to have observed a monopole with a light-bulb-sized detector. The fact that a detector the size of multiple football pitches (MACRO) has not yet duplicated this feat leads most to disregard the earlier claim.
The MACRO project included a large cavern, approximately 800 metres underground, which was further hollowed out and housed hundreds of long chambers filled with scintillating fluid – a fluid that gives off photons when a charged or magnetic particle passes through it. At opposing ends of the chamber were a pair of photomultiplier tubes. Photomultiplier tubes contain a number of small charged "plates". They look like flood lights, but they are collectors that can take a handful of photons and "multiply" them. This multiplication begins by using the photo-electric effect to convert photons that hit the first "plate" into electrons. These electrons are then attracted to the next plate, which gives off more electrons than it receives. The next plate does the same, thus amplifying the signal more at each plate. The photomultipliers used in the MACRO experiment were produced by Thorn-EMI, and were sensitive to a signal as small as five photons. After decommissioning, MACRO donated about 800 photomultiplier tubes to the Daya Bay Reactor Neutrino Experiment. The exact voltage put on each plate was determined by a custom circuit board designed by some of the scientists involved with the project.
The scintillating chambers were assembled into high stacks and long rows. When a signal was detected, it was usually detected in multiple chambers. The timing of each signal from each photomultiplier told the approximate path and speed of the particle. The type of signal and the speed through the "pool" of chambers told researchers if they had observed a monopole or merely some common charged particle.
Very important results were obtained by MACRO in other sectors:
cosmic rays: flux, composition and shadow of the Sun and the Moon;
search for dark matter (WIMPS) from the center of the Sun and the Earth and dark matter with strange quarks;
search for low energy neutrinos from supernovae;
neutrino astronomy and neutrino oscillations.
In particular, MACRO showed evidence of neutrino oscillations at the Takayama neutrino conference immediately before the announcement of the discovery of oscillations by the Super-Kamiokande experiment.
== References ==
== External links ==
MACRO experiment record on INSPIRE-HEP | Wikipedia/Monopole,_Astrophysics_and_Cosmic_Ray_Observatory |
The Sachs–Wolfe effect, named after Rainer K. Sachs and Arthur M. Wolfe, is a property of the cosmic microwave background radiation (CMB), in which photons from the CMB are gravitationally redshifted, causing the CMB spectrum to appear uneven. This effect is the predominant source of fluctuations in the CMB for angular scales larger than about ten degrees.
== Non-integrated Sachs–Wolfe effect ==
The non-integrated Sachs–Wolfe effect is caused by gravitational redshift occurring at the surface of last scattering. The effect is not constant across the sky due to differences in the matter/energy density at the time of last scattering.
== Integrated Sachs–Wolfe effect ==
The integrated Sachs–Wolfe (ISW) effect is also caused by gravitational redshift, but it occurs between the surface of last scattering and the Earth, so it is not part of the primordial CMB. It occurs when the Universe is dominated in its energy density by something other than matter. If the Universe is dominated by matter, then large-scale gravitational potential energy wells and hills do not evolve significantly. If the Universe is dominated by radiation, or by dark energy, though, those potentials do evolve, subtly changing the energy of photons passing through them.
There are two contributions to the ISW effect. The "early-time" ISW occurs immediately after the (non-integrated) Sachs–Wolfe effect produces the primordial CMB, as photons course through density fluctuations while there is still enough radiation around to affect the Universe's expansion. Although it is physically the same as the late-time ISW, for observational purposes it is usually lumped in with the primordial CMB, since the matter fluctuations that cause it are in practice undetectable.
=== Late-time integrated Sachs–Wolfe effect ===
The "late-time" ISW effect arises quite recently in cosmic history, as dark energy, or the cosmological constant, starts to govern the Universe's expansion. Unfortunately, the nomenclature is a bit confusing. Often, "late-time ISW" implicitly refers to the late-time ISW effect to linear/first order in density perturbations. This linear part of the effect entirely vanishes in a flat universe with only matter, but dominates over the higher-order part of the effect in a universe with dark energy. The full nonlinear (linear + higher-order) late-time ISW effect, especially in the case of individual voids and clusters, is sometimes known as the Rees–Sciama effect, since Martin Rees and Dennis Sciama elucidated the following physical picture.
Accelerated expansion due to dark energy causes even strong large-scale potential wells (superclusters) and hills (voids) to decay over the time it takes a photon to travel through them. A photon gets a kick of energy going into a potential well (a supercluster), and it keeps some of that energy after it exits, because the well has been stretched out and made shallower while the photon was crossing it. Similarly, a photon has to expend energy entering a supervoid, but will not get all of it back upon exiting the slightly reduced potential hill.
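In the linear regime this net heating and cooling is usually summarised by an integral of the time derivative of the gravitational potential Φ along the photon path (a standard expression, stated here for orientation rather than taken from the text above):

\frac{\Delta T}{T}(\hat{n}) = \frac{2}{c^{2}} \int_{t_{*}}^{t_{0}} \dot{\Phi}(\hat{n},t)\,dt,

which vanishes while the potentials are constant in time (matter domination) and becomes nonzero once dark energy causes them to decay.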
A signature of the late-time ISW is a non-zero cross-correlation function between the galaxy density (the number of galaxies per square degree) and the temperature of the CMB, because superclusters gently heat photons, while supervoids gently cool them. This correlation has been detected at moderate to high significance.
In May 2008, Granett, Neyrinck & Szapudi showed that the late-time ISW can be pinned to discrete supervoids and superclusters identified in the SDSS Luminous Red Galaxy catalog. Their detection traces the localised ISW effect that supervoids and superclusters produce on the CMB. However, the amplitude of this localised detection is controversial, as it is significantly larger than expected and depends on several assumptions of the analysis.
== See also ==
Sunyaev–Zeldovich effect
Cosmic microwave background spectral distortions
== References ==
== External links ==
Sam LaRoque, The Integrated Sachs–Wolfe Effect. University of Chicago, IL.
Aguiar, Paulo, and Paulo Crawford, Sachs–Wolfe effect in some anisotropic models. (PDF format)
White, Martin; Hu, Wayne (1997). "The Sachs–Wolfe effect" (PDF). Astronomy and Astrophysics. 321: 89.
Sachs–Wolfe effect Level 5.
"Dark Energy and the Imprint of Super-Structures on the Microwave Background", a webpage by Granett, Neyrinck & Szapudi. | Wikipedia/Integrated_Sachs–Wolfe_effect |
Negative energy is a concept used in physics to explain the nature of certain fields, including the gravitational field and various quantum field effects.
== Gravitational energy ==
Gravitational energy, or gravitational potential energy, is the potential energy a massive object has because it is within a gravitational field. In classical mechanics, two or more masses always have a gravitational potential. Conservation of energy requires that this gravitational field energy is always negative, with the convention that it is zero when the objects are infinitely far apart. As two objects move apart and the distance between them approaches infinity, the gravitational force between them approaches zero from the positive side of the real number line and the gravitational potential approaches zero from the negative side. Conversely, as two massive objects move towards each other, the motion accelerates under gravity, causing an increase in the (positive) kinetic energy of the system; in order to conserve the total energy, the gravitational potential energy of the system must decrease by the same amount, becoming increasingly negative.
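For two point masses M and m separated by a distance r, this convention gives the familiar Newtonian expression (with G the gravitational constant):

U(r) = -\frac{GMm}{r}, \qquad U \to 0 \text{ as } r \to \infty,

so the potential energy is negative at every finite separation and rises toward zero as the masses are pulled apart.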
A universe in which positive energy dominates will eventually collapse in a Big Crunch, while an "open" universe in which negative energy dominates will either expand indefinitely or eventually disintegrate in a Big Rip. In the zero-energy universe model ("flat" or "Euclidean"), the total amount of energy in the universe is exactly zero: its amount of positive energy in the form of matter is exactly cancelled out by its negative energy in the form of gravity. It is unclear which, if any, of these models accurately describes the real universe.
=== Black hole ergosphere ===
For a classically rotating black hole, the rotation creates an ergosphere outside the event horizon, in which spacetime itself begins to rotate, in a phenomenon known as frame-dragging. Since the ergosphere is outside the event horizon, particles can escape from it. Within the ergosphere, a particle's energy may become negative (via the relativistic rotation of its Killing vector). The negative-energy particle then crosses the event horizon into the black hole, with the law of conservation of energy requiring that an equal amount of positive energy should escape.
In the Penrose process, a body divides in two, with one half gaining negative energy and falling in, while the other half gains an equal amount of positive energy and escapes. This is proposed as the mechanism by which the intense radiation emitted by quasars is generated.
== Quantum field effects ==
Negative energies and negative energy density are consistent with quantum field theory.
=== Virtual particles ===
In quantum theory, the uncertainty principle allows the vacuum of space to be filled with virtual particle-antiparticle pairs which appear spontaneously and exist for only a short time before, typically, annihilating themselves again. Some of these virtual particles can have negative energy. This behaviour plays a role in several important phenomena, as described below.
=== Casimir effect ===
In the Casimir effect, two flat plates placed very close together restrict the wavelengths of quanta which can exist between them. This in turn restricts the types and hence number and density of virtual particle pairs which can form in the intervening vacuum and can result in a negative energy density. Since this restriction does not exist or is much less significant on the opposite sides of the plates, the forces outside the plates are greater than those between the plates. This causes the plates to appear to pull on each other, which has been measured. More accurately, the vacuum energy caused by the virtual particle pairs is pushing the plates together, and the vacuum energy between the plates is too small to negate this effect since fewer virtual particles can exist per unit volume between the plates than can exist outside them.
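For ideal, perfectly conducting parallel plates separated by a distance d in vacuum, the attractive pressure is given by the standard Casimir result (not quoted in the text above):

\frac{F}{A} = -\frac{\pi^{2}\hbar c}{240\,d^{4}},

where ħ is the reduced Planck constant and c is the speed of light; the magnitude of the force per unit area grows steeply as the plates approach each other.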
=== Squeezed light ===
It is possible to arrange multiple beams of laser light such that destructive quantum interference suppresses the vacuum fluctuations. Such a squeezed vacuum state involves negative energy. The repetitive waveform of light leads to alternating regions of positive and negative energy.
=== Dirac sea ===
According to the theory of the Dirac sea, developed by Paul Dirac in 1930, the vacuum of space is full of negative energy. This theory was developed to explain the anomaly of negative-energy quantum states predicted by the Dirac equation. A year later, after work by Weyl, the negative energy concept was abandoned and replaced by a theory of antimatter. The following year, 1932, saw the discovery of the positron by Carl Anderson.
== Quantum gravity phenomena ==
The intense gravitational fields around black holes create phenomena which are attributed to both gravitational and quantum effects. In these situations, a particle's Killing vector may be rotated such that its energy becomes negative.
=== Hawking radiation ===
Virtual particles can exist for a short period. When a pair of such particles appears next to a black hole's event horizon, one of them may get drawn in. This rotates its Killing vector so that its energy becomes negative and the pair have no net energy. This allows them to become real and the positive particle escapes as Hawking radiation, while the negative-energy particle reduces the black hole's net energy. Thus, a black hole may slowly evaporate.
== Speculative suggestions ==
=== Wormholes ===
Negative energy appears in the speculative theory of wormholes, where it is needed to keep the wormhole open. A wormhole directly connects two locations which may be separated arbitrarily far apart in both space and time, and in principle allows near-instantaneous travel between them. However physicists such as Roger Penrose regard such ideas as unrealistic, more fiction than speculation.
=== Warp drive ===
A theoretical principle for a faster-than-light (FTL) warp drive for spaceships has been suggested, using negative energy. The Alcubierre drive is based on a solution to the Einstein field equations of general relativity in which a "bubble" of spacetime is constructed using a hypothetical negative energy. The bubble is then moved by expanding space behind it and shrinking space in front of it. The bubble may travel at arbitrary speeds and is not constrained by the speed of light. This does not contradict general relativity, as the bubble's contents do not actually move through their local spacetime.
=== Negative-energy particles ===
Speculative theoretical studies have suggested that particles with negative energies are consistent with relativistic quantum theory, with some noting interrelationships with negative mass and/or time reversal.
== See also ==
Antimatter
Dark energy
Dark matter
Negative mass
Negative pressure
== References ==
=== Inline notes ===
=== Bibliography ===
Lawrence H. Ford and Thomas A. Roman; "Negative energy, wormholes and warp drive", Scientific American January 2000, 282, Pages 46–53.
Roger Penrose; The Road to Reality, ppbk, Vintage, 2005. Chapter 30: Gravity's Role in Quantum State Reduction. | Wikipedia/Negative_kinetic_energy |
Modern Physics Letters A (MPLA) is the first in a series of journals published by World Scientific under the title Modern Physics Letters. It covers specifically papers and research on gravitation, cosmology, nuclear physics, and particles and fields.
== Related journals ==
Modern Physics Letters B
International Journal of Modern Physics A
International Journal of Modern Physics D
International Journal of Modern Physics E
== Abstracting and indexing ==
According to the Journal Citation Reports, the journal had an impact factor of 1.594 for 2021. The journal is abstracted and indexed in:
Science Citation Index
SciSearch
ISI Alerting Services
Current Contents/Physical, Chemical & Earth Sciences
Astrophysics Data System (ADS) Abstract Service
Mathematical Reviews
Inspec
Zentralblatt MATH
== External links ==
Official website
== References == | Wikipedia/Modern_Physics_Letters_A |
In theoretical physics and applied mathematics, a field equation is a partial differential equation which determines the dynamics of a physical field, specifically the time evolution and spatial distribution of the field. The solutions to the equation are mathematical functions which correspond directly to the field, as functions of time and space. Since the field equation is a partial differential equation, there are families of solutions which represent a variety of physical possibilities. Usually, there is not just a single equation, but a set of coupled equations which must be solved simultaneously. Field equations are not ordinary differential equations since a field depends on space and time, which requires at least two variables.
Whereas the "wave equation", the "diffusion equation", and the "continuity equation" all have standard forms (and various special cases or generalizations), there is no single, special equation referred to as "the field equation".
The topic broadly splits into equations of classical field theory and quantum field theory. Classical field equations describe many physical properties like temperature of a substance, velocity of a fluid, stresses in an elastic material, electric and magnetic fields from a current, etc. They also describe the fundamental forces of nature, like electromagnetism and gravity. In quantum field theory, particles or systems of "particles" like electrons and photons are associated with fields, allowing for infinite degrees of freedom (unlike finite degrees of freedom in particle mechanics) and variable particle numbers which can be created or annihilated.
== Generalities ==
=== Origin ===
Usually, field equations are postulated (like the Einstein field equations and the Schrödinger equation, which underlies all quantum field equations) or obtained from the results of experiments (like Maxwell's equations). The extent of their validity is their ability to correctly predict and agree with experimental results.
From a theoretical viewpoint, field equations can be formulated in the frameworks of Lagrangian field theory, Hamiltonian field theory, and field-theoretic formulations of the principle of stationary action. Given a suitable Lagrangian or Hamiltonian density (a function of the fields in a given system, as well as their derivatives), the principle of stationary action yields the field equation.
=== Symmetry ===
In both classical and quantum theories, field equations will satisfy the symmetry of the background physical theory. Most of the time Galilean symmetry is enough, for speeds (of propagating fields) much less than light. When particles and fields propagate at speeds close to light, Lorentz symmetry is one of the most common settings because the equation and its solutions are then consistent with special relativity.
Another symmetry arises from gauge freedom, which is intrinsic to the field equations. Fields which correspond to interactions may be gauge fields, which means they can be derived from a potential, and certain values of potentials correspond to the same value of the field.
=== Classification ===
Field equations can be classified in many ways: classical or quantum, nonrelativistic or relativistic, according to the spin or mass of the field, and the number of components the field has and how they change under coordinate transformations (e.g. scalar fields, vector fields, tensor fields, spinor fields, twistor fields etc.). They can also inherit the classification of differential equations, as linear or nonlinear, the order of the highest derivative, or even as fractional differential equations. Gauge fields may be classified as in group theory, as abelian or nonabelian.
=== Waves ===
Field equations underlie wave equations, because periodically changing fields generate waves. Wave equations can be thought of as field equations, in the sense that they can often be derived from field equations. Alternatively, given suitable Lagrangian or Hamiltonian densities and using the principle of stationary action, the wave equations can be obtained as well.
For example, Maxwell's equations can be used to derive inhomogeneous electromagnetic wave equations, and from the Einstein field equations one can derive equations for gravitational waves.
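As a concrete instance, combining Maxwell's equations in vacuum (no charges or currents) gives wave equations for the electric and magnetic fields:

\nabla^{2}\mathbf{E} - \frac{1}{c^{2}}\frac{\partial^{2}\mathbf{E}}{\partial t^{2}} = 0, \qquad \nabla^{2}\mathbf{B} - \frac{1}{c^{2}}\frac{\partial^{2}\mathbf{B}}{\partial t^{2}} = 0,

with wave speed c = 1/\sqrt{\mu_{0}\varepsilon_{0}}, the speed of light.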
=== Supplementary equations to field equations ===
Not every partial differential equation (PDE) in physics is automatically called a "field equation", even if fields are involved. They are extra equations to provide additional constraints for a given physical system.
"Continuity equations" and "diffusion equations" describe transport phenomena, even though they may involve fields which influence the transport processes.
If a "constitutive equation" takes the form of a PDE and involves fields, it is not usually called a field equation because it does not govern the dynamical behaviour of the fields. They relate one field to another, in a given material. Constitutive equations are used along with field equations when the effects of matter need to be taken into account.
== Classical field equation ==
Classical field equations arise in continuum mechanics (including elastodynamics and fluid mechanics), heat transfer, electromagnetism, and gravitation.
Fundamental classical field equations include
Newton's Law of Universal Gravitation for nonrelativistic gravitation.
Einstein field equations for relativistic gravitation
Maxwell's equations for electromagnetism.
Important equations derived from fundamental laws include:
Navier–Stokes equations for fluid flow.
As part of real-life mathematical modelling processes, classical field equations are accompanied by other equations of motion, equations of state, constitutive equations, and continuity equations.
== Quantum field equation ==
In quantum field theory, particles are described by quantum fields which satisfy the Schrödinger equation. These fields also act as creation and annihilation operators, which satisfy commutation relations and are subject to the spin–statistics theorem.
Particular cases of relativistic quantum field equations include
the Klein–Gordon equation for spin-0 particles
the Dirac equation for spin-1/2 particles
the Bargmann–Wigner equations for particles of any spin
In quantum field equations, it is common to use momentum components of the particle instead of position coordinates of the particle's location; the fields are then in momentum space, and Fourier transforms relate them to the position representation.
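Schematically, for a scalar field and in units where ħ = 1, the two representations are related by a Fourier transform of the form (normalisation conventions vary between texts):

\phi(\mathbf{x},t) = \int \frac{d^{3}p}{(2\pi)^{3}}\,\tilde{\phi}(\mathbf{p},t)\,e^{i\mathbf{p}\cdot\mathbf{x}}.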
== See also ==
Field strength
Wave function
Fundamental interaction
Field coupling
Field decoupling
Coupling parameter
Vacuum solution
== References ==
=== General ===
G. Woan (2010). The Cambridge Handbook of Physics Formulas. Cambridge University Press. ISBN 978-0-521-57507-2.
=== Classical field theory ===
Misner, Charles W.; Thorne, Kip. S.; Wheeler, John A. (1973), Gravitation, W. H. Freeman, ISBN 0-7167-0344-0
Chadwick, P. (1976), Continuum mechanics: Concise theory and problems, Dover (originally George Allen & Unwin Ltd.), Bibcode:1976nyhp.book.....C, ISBN 0-486-40180-4
=== Quantum field theory ===
Weinberg, S. (1995). The Quantum Theory of Fields. Vol. 1. Cambridge University Press. ISBN 0-521-55001-7.
V.B. Berestetskii, E.M. Lifshitz, L.P. Pitaevskii (1982). Quantum Electrodynamics. Course of Theoretical Physics. Vol. 4 (2nd ed.). Butterworth-Heinemann. ISBN 978-0-7506-3371-0.
Greiner, W.; Reinhardt, J. (1996), Field Quantization, Springer, ISBN 3-540-59179-6
Aitchison, I.J.R.; Hey, A.J.G. (2003). Gauge Theories in Particle Physics: From Relativistic Quantum Mechanics to QED. Vol. 1 (3rd ed.). IoP. ISBN 0-7503-0864-8.
Aitchison, I.J.R.; Hey, A.J.G. (2004). Gauge Theories in Particle Physics: Non-Abelian Gauge Theories: QCD and electroweak theory. Vol. 2 (3rd ed.). IoP. ISBN 0-7503-0950-4.
=== Classical and quantum field theory ===
Sexl, R. U.; Urbantke, H. K. (2001) [1992]. Relativity, Groups Particles. Special Relativity and Relativistic Symmetry in Field and Particle Physics. Springer. ISBN 978-3211834435.
== External links ==
J.C.A. Wevers (1999). "Physics formulary" (PDF). Archived from the original (PDF) on 27 December 2016. Retrieved 27 December 2016.
Glenn Elert (1998). "Frequently Used Equations". Retrieved 27 December 2016. | Wikipedia/Field_equation |
The Joint Dark Energy Mission (JDEM) was an Einstein probe that planned to focus on investigating dark energy. JDEM was a partnership between NASA and the U.S. Department of Energy (DOE).
In August 2010, the Board on Physics and Astronomy of the National Science Foundation (NSF) recommended the Wide Field Infrared Survey Telescope (WFIRST) mission, a renamed JDEM-Omega proposal which has superseded SNAP, Destiny, and Advanced Dark Energy Physics Telescope (ADEPT), as the highest priority for development in the decade around 2020. This would be a 1.5-meter telescope with a 144-megapixel HgCdTe focal plane array, located at the Sun-Earth L2 Lagrange point. The expected cost is around US$1.6 billion.
== Earlier proposals ==
=== Dark Energy Space Telescope (Destiny) ===
The Dark Energy Space Telescope (Destiny) was a project planned by NASA and DOE, designed to perform precision measurements of the universe to provide an understanding of dark energy. The space telescope would have derived the expansion history of the universe by measuring up to 3,000 distant supernovae in each year of its three-year mission lifetime, and would additionally have studied the structure of matter in the universe by measuring millions of galaxies in a weak gravitational lensing survey. The Destiny spacecraft featured an optical telescope with a 1.8-metre primary mirror, which would have imaged infrared light onto an array of solid-state detectors. The mission was designed to be deployed in a halo orbit about the Sun-Earth L2 Lagrange point.
The Destiny proposal has been superseded by the Wide Field Infrared Survey Telescope (WFIRST).
=== SuperNova Acceleration Probe (SNAP) ===
The SuperNova Acceleration Probe (SNAP) mission was proposed to provide an understanding of the mechanism driving the acceleration of the universe and to determine the nature of dark energy. To achieve these goals, the spacecraft needed to be able to detect supernovae at their brightest moment. The mission was proposed as an experiment for the JDEM. The satellite observatory would have been capable of measuring up to 2,000 distant supernovae in each year of its three-year mission lifetime. SNAP was also planned to observe the small distortions of light from distant galaxies to reveal more about the expansion history of the universe. SNAP was initially planned to launch in 2013.
To understand what is driving the acceleration of the universe, scientists need to observe supernovae at greater redshifts than can be seen from telescopes on Earth. SNAP would have detected redshifts of 1.7 from distant supernovae up to 10 billion light-years away. At this distance, the acceleration of the universe is easily seen. To measure the presence of dark energy, a process called weak lensing can be used.
SNAP would have used an optical setup called a three-mirror anastigmat. This consists of a main mirror with a diameter of 2 meters to gather light and reflect it to a second mirror; the light is then passed to two additional smaller mirrors which direct it to the spacecraft's instruments. The spacecraft would also have contained 72 different cameras, 36 of them able to detect visible light and the other 36 infrared light. Combined, its cameras would have produced the equivalent of a 600-megapixel camera, with a resolution of about 0.2 arcseconds in the visible spectrum and 0.3 arcseconds in the infrared spectrum. SNAP would also have carried a spectrograph, whose purpose was to determine what type of supernova SNAP was observing, determine its redshift, detect differences between supernovae, and store supernova spectra for future reference.
JDEM recognized several potential problems of the SNAP project:
The supernovae that SNAP would detect may not all be of Type Ia. Some Type Ib and Ic supernovae have similar spectra, which could potentially confuse SNAP.
Hypothetical gray dust could contaminate results. Gray dust absorbs all wavelengths of light, making supernovas dimmer than they actually are.
The behavior of a supernova could potentially be altered by a companion star in a binary system.
Any object between the observed supernova and SNAP could act as a gravitational lens and produce inaccurate results.
The SNAP proposal has been superseded by the Wide Field Infrared Survey Telescope (WFIRST).
== See also ==
Wide-field Infrared Survey Explorer (2009–2011)
== References ==
== External links ==
JDEM at Berkeley Lab Archived 1 September 2018 at the Wayback Machine | Wikipedia/Joint_Dark_Energy_Mission
In physics, mass–energy equivalence is the relationship between mass and energy in a system's rest frame. The two differ only by a multiplicative constant and the units of measurement. The principle is described by the physicist Albert Einstein's formula:
{\displaystyle E=mc^{2}}.
In a reference frame where the system is moving, its relativistic energy and relativistic mass (instead of rest mass) obey the same formula.
The formula defines the energy (E) of a particle in its rest frame as the product of mass (m) with the speed of light squared (c2). Because the speed of light is a large number in everyday units (approximately 300000 km/s or 186000 mi/s), the formula implies that a small amount of mass corresponds to an enormous amount of energy.
Rest mass, also called invariant mass, is a fundamental physical property of matter, independent of velocity. Massless particles such as photons have zero invariant mass, but massless free particles have both momentum and energy.
The equivalence principle implies that when mass is lost in chemical reactions or nuclear reactions, a corresponding amount of energy will be released. The energy can be released to the environment (outside of the system being considered) as radiant energy, such as light, or as thermal energy. The principle is fundamental to many fields of physics, including nuclear and particle physics.
Mass–energy equivalence arose from special relativity as a paradox described by the French polymath Henri Poincaré (1854–1912). Einstein was the first to propose the equivalence of mass and energy as a general principle and a consequence of the symmetries of space and time. The principle first appeared in "Does the inertia of a body depend upon its energy-content?", one of his annus mirabilis papers, published on 21 November 1905. The formula and its relationship to momentum, as described by the energy–momentum relation, were later developed by other physicists.
== Description ==
Mass–energy equivalence states that all objects having mass, or massive objects, have a corresponding intrinsic energy, even when they are stationary. In the rest frame of an object, where by definition it is motionless and so has no momentum, the mass and energy are equivalent, differing only by a constant factor, the speed of light squared (c2). In Newtonian mechanics, a motionless body has no kinetic energy, and it may or may not have other amounts of internal stored energy, like chemical energy or thermal energy, in addition to any potential energy it may have from its position in a field of force. These energies tend to be much smaller than the mass of the object multiplied by c2, which is on the order of 10^17 joules for a mass of one kilogram. Due to this principle, the mass of the atoms that come out of a nuclear reaction is less than the mass of the atoms that go in, and the difference in mass shows up as heat and light with the same equivalent energy as the difference. In analyzing these extreme events, Einstein's formula can be used with E as the energy released (removed), and m as the change in mass.
In relativity, all the energy that moves with an object (i.e., the energy as measured in the object's rest frame) contributes to the total mass of the body, which measures how much it resists acceleration. If an isolated box of ideal mirrors could contain light, the individually massless photons would contribute to the total mass of the box by the amount equal to their energy divided by c2. For an observer in the rest frame, removing energy is the same as removing mass and the formula m = E/c2 indicates how much mass is lost when energy is removed. In the same way, when any energy is added to an isolated system, the increase in the mass is equal to the added energy divided by c2.
== Mass in special relativity ==
An object moves at different speeds in different frames of reference, depending on the motion of the observer. This implies the kinetic energy, in both Newtonian mechanics and relativity, is 'frame dependent', so that the amount of relativistic energy that an object is measured to have depends on the observer. The relativistic mass of an object is given by the relativistic energy divided by c2. Because the relativistic mass is exactly proportional to the relativistic energy, relativistic mass and relativistic energy are nearly synonymous; the only difference between them is the units. The rest mass or invariant mass of an object is defined as the mass an object has in its rest frame, when it is not moving with respect to the observer. The rest mass is the same in all inertial frames; because it is independent of the motion of the observer, it is the smallest possible value of the relativistic mass of the object. Because of the attraction between components of a system, which results in potential energy, the rest mass is almost never additive; in general, the mass of an object is not the sum of the masses of its parts. The rest mass of an object is the total energy of all the parts, including kinetic energy, as observed from the center of momentum frame, and potential energy. The masses add up only if the constituents are at rest (as observed from the center of momentum frame) and do not attract or repel, so that they do not have any extra kinetic or potential energy. Massless particles are particles with no rest mass, and therefore have no intrinsic energy; their energy is due only to their momentum.
=== Relativistic mass ===
Relativistic mass depends on the motion of the object, so that different observers in relative motion see different values for it. The relativistic mass of a moving object is larger than the relativistic mass of an object at rest, because a moving object has kinetic energy. If the object moves slowly, the relativistic mass is nearly equal to the rest mass and both are nearly equal to the classical inertial mass (as it appears in Newton's laws of motion). If the object moves quickly, the relativistic mass is greater than the rest mass by an amount equal to the mass associated with the kinetic energy of the object. Massless particles also have relativistic mass derived from their kinetic energy, equal to their relativistic energy divided by c2, or mrel = E/c2. In natural units, where length and time are measured so that the speed of light equals 1, relativistic mass and relativistic energy are equal in value and dimension. As it is just another name for the energy, the use of the term relativistic mass is redundant and physicists generally reserve mass to refer to rest mass, or invariant mass, as opposed to relativistic mass. A consequence of this terminology is that the mass is not conserved in special relativity, whereas the conservation of momentum and conservation of energy are both fundamental laws.
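The growth of relativistic mass and energy with speed can be illustrated with a short numerical sketch. This is not from the article; the one-kilogram rest mass and the sample speeds below are arbitrary choices used only to show how the Lorentz factor γ = 1/√(1 − v²/c²) inflates the energy.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2)"""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def relativistic_mass(rest_mass, v):
    """m_rel = gamma * m_0; equals the rest mass at v = 0."""
    return lorentz_factor(v) * rest_mass

m0 = 1.0  # rest mass in kilograms (arbitrary example)
for v in (0.0, 0.1 * C, 0.5 * C, 0.9 * C, 0.99 * C):
    m_rel = relativistic_mass(m0, v)
    print(f"v = {v / C:4.2f} c -> m_rel = {m_rel:.4f} kg, E_rel = {m_rel * C**2:.3e} J")
```

At low speed the relativistic mass is indistinguishable from the rest mass; as v approaches c it grows without bound, matching the statement that mass is not conserved while energy is.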
=== Conservation of mass and energy ===
Conservation of energy is a universal principle in physics and holds for any interaction, along with the conservation of momentum. The classical conservation of mass, in contrast, is violated in certain relativistic settings. This concept has been experimentally proven in a number of ways, including the conversion of mass into kinetic energy in nuclear reactions and other interactions between elementary particles. While modern physics has discarded the expression 'conservation of mass', in older terminology a relativistic mass can also be defined to be equivalent to the energy of a moving system, allowing for a conservation of relativistic mass. Mass conservation breaks down when the energy associated with the mass of a particle is converted into other forms of energy, such as kinetic energy, thermal energy, or radiant energy.
=== Massless particles ===
Massless particles have zero rest mass. The Planck–Einstein relation for the energy of photons is given by the equation E = hf, where h is the Planck constant and f is the photon frequency. This frequency, and thus the relativistic energy, are frame-dependent. If an observer moves away from a photon in the direction the photon travels from a source, and the photon eventually catches up with the observer, the observer sees it as having less energy than it had at the source. The faster the observer is traveling with regard to the source when the photon catches up, the less energy the photon is seen to have. As an observer approaches the speed of light with regard to the source, the redshift of the photon increases, according to the relativistic Doppler effect. The energy of the photon is reduced, and as the wavelength becomes arbitrarily large, the photon's energy approaches zero, because of the massless nature of photons, which does not permit any intrinsic energy.
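As a rough illustration of this frame dependence, the sketch below combines the Planck–Einstein relation E = hf with the relativistic Doppler factor for a receding observer. The source frequency (roughly that of green visible light) is an illustrative value chosen here, not a figure from the article.

```python
import math

H = 6.62607015e-34  # Planck constant, J*s
C = 299_792_458.0   # speed of light, m/s

def photon_energy(frequency_hz):
    """Planck-Einstein relation E = h f."""
    return H * frequency_hz

def receding_observer_frequency(f_source, v):
    """Relativistic Doppler shift seen by an observer receding at speed v."""
    beta = v / C
    return f_source * math.sqrt((1 - beta) / (1 + beta))

f0 = 5.45e14  # roughly green visible light, Hz (illustrative value)
for beta in (0.0, 0.5, 0.9, 0.99):
    f_obs = receding_observer_frequency(f0, beta * C)
    ratio = photon_energy(f_obs) / photon_energy(f0)
    print(f"beta = {beta:4.2f}: E = {photon_energy(f_obs):.3e} J ({ratio:.3f} of source energy)")
```

The photon energy falls toward zero as the observer's recession speed approaches c, as described above.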
=== Composite systems ===
For closed systems made up of many parts, like an atomic nucleus, planet, or star, the relativistic energy is given by the sum of the relativistic energies of each of the parts, because energies are additive in these systems. If a system is bound by attractive forces, and the energy gained in excess of the work done is removed from the system, then mass is lost with this removed energy. The mass of an atomic nucleus is less than the total mass of the protons and neutrons that make it up. This mass decrease is also equivalent to the energy required to break up the nucleus into individual protons and neutrons. This effect can be understood by looking at the potential energy of the individual components. The individual particles have a force attracting them together, and forcing them apart increases the potential energy of the particles in the same way that lifting an object up on earth does. This energy is equal to the work required to split the particles apart. The mass of the Solar System is slightly less than the sum of its individual masses.
For an isolated system of particles moving in different directions, the invariant mass of the system is the analog of the rest mass, and is the same for all observers, even those in relative motion. It is defined as the total energy (divided by c2) in the center of momentum frame. The center of momentum frame is defined so that the system has zero total momentum; the term center of mass frame is also sometimes used, where the center of mass frame is a special case of the center of momentum frame where the center of mass is put at the origin. A simple example of an object with moving parts but zero total momentum is a container of gas. In this case, the mass of the container is given by its total energy (including the kinetic energy of the gas molecules), since the system's total energy and invariant mass are the same in any reference frame where the momentum is zero, and such a reference frame is also the only frame in which the object can be weighed. In a similar way, the theory of special relativity posits that the thermal energy in all objects, including solids, contributes to their total masses, even though this energy is present as the kinetic and potential energies of the atoms in the object, and it (in a similar way to the gas) is not seen in the rest masses of the atoms that make up the object. Similarly, even photons, if trapped in an isolated container, would contribute their energy to the mass of the container. Such extra mass, in theory, could be weighed in the same way as any other type of rest mass, even though individually photons have no rest mass. The property that trapped energy in any form adds weighable mass to systems that have no net momentum is one of the consequences of relativity. It has no counterpart in classical Newtonian physics, where energy never exhibits weighable mass.
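A minimal numerical sketch of this point: two photons of equal energy moving in opposite directions have zero net momentum, so the system has a nonzero invariant mass even though each photon is massless. The photon energy below is an arbitrary illustrative value, and the invariant-mass formula anticipates the energy–momentum relation given later in the article.

```python
import math

C = 299_792_458.0  # m/s

def invariant_mass(total_energy, total_momentum):
    """m_inv = sqrt(E^2 - |p|^2 c^2) / c^2 for a system of particles."""
    return math.sqrt(total_energy**2 - (total_momentum * C) ** 2) / C**2

E_photon = 1.0e-15       # energy of each photon in joules (arbitrary example)
p_photon = E_photon / C  # photon momentum magnitude

# Two photons moving in opposite directions: momenta cancel, energies add.
E_total = 2 * E_photon
p_total = p_photon - p_photon  # zero net momentum

print("invariant mass of each photon: 0 kg (massless)")
print(f"invariant mass of the two-photon system: {invariant_mass(E_total, p_total):.3e} kg")
```

The trapped-light box described above behaves the same way: the photons' energies add to the system's rest mass even though none of them has rest mass individually.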
=== Relation to gravity ===
Physics has two concepts of mass, the gravitational mass and the inertial mass. The gravitational mass is the quantity that determines the strength of the gravitational field generated by an object, as well as the gravitational force acting on the object when it is immersed in a gravitational field produced by other bodies. The inertial mass, on the other hand, quantifies how much an object accelerates if a given force is applied to it. The mass–energy equivalence in special relativity refers to the inertial mass. However, already in the context of Newtonian gravity, the weak equivalence principle is postulated: the gravitational and the inertial mass of every object are the same. Thus, the mass–energy equivalence, combined with the weak equivalence principle, results in the prediction that all forms of energy contribute to the gravitational field generated by an object. This observation is one of the pillars of the general theory of relativity.
The prediction that all forms of energy interact gravitationally has been subject to experimental tests. One of the first observations testing this prediction, called the Eddington experiment, was made during the solar eclipse of May 29, 1919. During the eclipse, the English astronomer and physicist Arthur Eddington observed that the light from stars passing close to the Sun was bent. The effect is due to the gravitational attraction of light by the Sun. The observation confirmed that the energy carried by light indeed is equivalent to a gravitational mass. Another seminal experiment, the Pound–Rebka experiment, was performed in 1960. In this test a beam of light was emitted from the top of a tower and detected at the bottom. The frequency of the light detected was higher than the light emitted. This result confirms that the energy of photons increases when they fall in the gravitational field of the Earth. The energy, and therefore the gravitational mass, of photons is proportional to their frequency as stated by the Planck relation.
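The size of the Pound–Rebka effect can be estimated from the approximate fractional frequency shift gh/c² for light falling a height h in a nearly uniform gravitational field. The tower height used below (about 22.5 m) is the commonly quoted figure for the Harvard tower and should be treated as an assumption of this sketch.

```python
G_EARTH = 9.81       # surface gravity, m/s^2
C = 299_792_458.0    # m/s
TOWER_HEIGHT = 22.5  # metres; approximate height assumed for the Pound-Rebka tower

# Fractional frequency shift for light falling a height h in a uniform field:
# delta_f / f ~ g h / c^2
fractional_shift = G_EARTH * TOWER_HEIGHT / C**2
print(f"fractional blueshift over {TOWER_HEIGHT} m: {fractional_shift:.2e}")
# roughly 2.5e-15, the tiny effect Pound and Rebka had to resolve
```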
== Efficiency ==
In some reactions, matter particles can be destroyed and their associated energy released to the environment as other forms of energy, such as light and heat. One example of such a conversion takes place in elementary particle interactions, where the rest energy is transformed into kinetic energy. Such conversions between types of energy happen in nuclear weapons, in which the protons and neutrons in atomic nuclei lose a small fraction of their original mass, though the mass lost is not due to the destruction of any smaller constituents. Nuclear fission allows a tiny fraction of the energy associated with the mass to be converted into usable energy such as radiation; in the decay of uranium, for instance, about 0.1% of the mass of the original atom is lost. In theory, it should be possible to destroy matter and convert all of the rest-energy associated with matter into heat and light, but none of the theoretically known methods are practical. One way to harness all the energy associated with mass is to annihilate matter with antimatter. Antimatter is rare in the universe, however, and the known mechanisms of production require more usable energy than would be released in annihilation. CERN estimated in 2011 that over a billion times more energy is required to make and store antimatter than could be released in its annihilation.
As most of the mass which comprises ordinary objects resides in protons and neutrons, converting all the energy of ordinary matter into more useful forms requires that the protons and neutrons be converted to lighter particles, or particles with no mass at all. In the Standard Model of particle physics, the number of protons plus neutrons is nearly exactly conserved. Despite this, Gerard 't Hooft showed that there is a process that converts protons and neutrons to antielectrons and neutrinos. This is the weak SU(2) instanton proposed by the physicists Alexander Belavin, Alexander Markovich Polyakov, Albert Schwarz, and Yu. S. Tyupkin. This process can in principle destroy matter and convert all the energy of matter into neutrinos and usable energy, but it is normally extraordinarily slow. It was later shown that the process occurs rapidly at extremely high temperatures that would only have been reached shortly after the Big Bang.
Many extensions of the standard model contain magnetic monopoles, and in some models of grand unification, these monopoles catalyze proton decay, a process known as the Callan–Rubakov effect. This process would be an efficient mass–energy conversion at ordinary temperatures, but it requires making monopoles and anti-monopoles, whose production is expected to be inefficient. Another method of completely annihilating matter uses the gravitational field of black holes. The British theoretical physicist Stephen Hawking theorized it is possible to throw matter into a black hole and use the emitted heat to generate power. According to the theory of Hawking radiation, however, larger black holes radiate less than smaller ones, so that usable power can only be produced by small black holes.
== Extension for systems in motion ==
Unlike a system's energy in an inertial frame, the relativistic energy ({\displaystyle E_{\rm {rel}}}) of a system depends on both the rest mass ({\displaystyle m_{0}}) and the total momentum of the system. The extension of Einstein's equation to these systems is given by:
{\displaystyle E_{\rm {rel}}^{2}-|\mathbf {p} |^{2}c^{2}=m_{0}^{2}c^{4}}
or
{\displaystyle E_{\rm {rel}}^{2}-(pc)^{2}=(m_{0}c^{2})^{2}}
or
{\displaystyle E_{\rm {rel}}={\sqrt {(m_{0}c^{2})^{2}+(pc)^{2}}}}
where the {\displaystyle (pc)^{2}} term represents the square of the Euclidean norm (total vector length) of the various momentum vectors in the system, which reduces to the square of the simple momentum magnitude if only a single particle is considered. This equation is called the energy–momentum relation and reduces to {\displaystyle E_{\rm {rel}}=mc^{2}} when the momentum term is zero. For photons, where {\displaystyle m_{0}=0}, the equation reduces to {\displaystyle E_{\rm {rel}}=pc}.
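A short sketch evaluating the energy–momentum relation for a massive particle and for a photon. The electron rest mass is a standard value; the momentum chosen is arbitrary and only meant to show the two limiting cases, E_rel = m0·c² at zero momentum and E_rel = pc at zero rest mass.

```python
import math

C = 299_792_458.0  # m/s

def relativistic_energy(rest_mass, momentum):
    """Energy-momentum relation: E_rel = sqrt((m0 c^2)^2 + (p c)^2)."""
    return math.sqrt((rest_mass * C**2) ** 2 + (momentum * C) ** 2)

m_e = 9.109e-31  # electron rest mass, kg
p = 1.0e-21      # an arbitrary momentum, kg*m/s

print(relativistic_energy(m_e, 0.0))       # reduces to m0 c^2 when p = 0
print(relativistic_energy(m_e, p))         # exceeds the rest energy for p > 0
print(relativistic_energy(0.0, p), p * C)  # photon limit: E_rel = p c
```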
== Low-speed approximation ==
Using the Lorentz factor, γ, the energy–momentum can be rewritten as E = γmc2 and expanded as a power series:
{\displaystyle E=m_{0}c^{2}\left[1+{\frac {1}{2}}\left({\frac {v}{c}}\right)^{2}+{\frac {3}{8}}\left({\frac {v}{c}}\right)^{4}+{\frac {5}{16}}\left({\frac {v}{c}}\right)^{6}+\ldots \right].}
For speeds much smaller than the speed of light, higher-order terms in this expression get smaller and smaller because v/c is small. For low speeds, all but the first two terms can be ignored:
{\displaystyle E\approx m_{0}c^{2}+{\frac {1}{2}}m_{0}v^{2}.}
In classical mechanics, both the m0c2 term and the high-speed corrections are ignored. The initial value of the energy is arbitrary, as only the change in energy can be measured and so the m0c2 term is ignored in classical physics. While the higher-order terms become important at higher speeds, the Newtonian equation is a highly accurate low-speed approximation; adding in the third term yields:
{\displaystyle E\approx m_{0}c^{2}+{\frac {1}{2}}m_{0}v^{2}\left(1+{\frac {3v^{2}}{4c^{2}}}\right).}
The difference between the two approximations is given by {\displaystyle {\tfrac {3v^{2}}{4c^{2}}}}, a number very small for everyday objects. In 2018 NASA announced the Parker Solar Probe was the fastest spacecraft ever, with a speed of 153,454 miles per hour (68,600 m/s). The difference between the approximations for the Parker Solar Probe in 2018 is {\displaystyle {\tfrac {3v^{2}}{4c^{2}}}\approx 3.9\times 10^{-8}}, which accounts for an energy correction of four parts per hundred million. The gravitational constant, in contrast, has a standard relative uncertainty of about {\displaystyle 2.2\times 10^{-5}}.
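A quick numerical check of the statements above, assuming the quoted Parker Solar Probe speed of 68,600 m/s and an arbitrary one-kilogram reference mass: the correction factor 3v²/4c² comes out near 3.9 × 10⁻⁸, and the Newtonian kinetic energy, the next-order term, and the exact relativistic kinetic energy agree to the expected precision.

```python
import math

C = 299_792_458.0  # m/s
V = 68_600.0       # Parker Solar Probe speed quoted above, m/s
M0 = 1.0           # reference rest mass, kg (arbitrary)

gamma = 1.0 / math.sqrt(1.0 - (V / C) ** 2)
exact_kinetic = (gamma - 1.0) * M0 * C**2  # relativistic kinetic energy
newtonian = 0.5 * M0 * V**2                # first-order (Newtonian) approximation
next_term = 0.5 * M0 * V**2 * (3 * V**2 / (4 * C**2))

print(f"relative size of the correction 3v^2/4c^2: {3 * V**2 / (4 * C**2):.2e}")  # ~3.9e-8
print(f"Newtonian KE:          {newtonian:.10e} J")
print(f"with next-order term:  {newtonian + next_term:.10e} J")
print(f"exact relativistic KE: {exact_kinetic:.10e} J")
```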
== Applications ==
=== Application to nuclear physics ===
The nuclear binding energy is the minimum energy that is required to disassemble the nucleus of an atom into its component parts. The mass of an atom is less than the sum of the masses of its constituents due to the attraction of the strong nuclear force. The difference between the two masses is called the mass defect and is related to the binding energy through Einstein's formula. The principle is used in modeling nuclear fission reactions, and it implies that a great amount of energy can be released by the nuclear fission chain reactions used in both nuclear weapons and nuclear power.
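As a concrete illustration of the mass defect, the sketch below estimates the binding energy of a helium-4 nucleus from approximate particle masses. The mass values (in unified atomic mass units) and the 931.494 MeV per u conversion are standard figures quoted here from memory and should be treated as approximate.

```python
# Approximate masses in unified atomic mass units (u); illustrative values.
M_PROTON = 1.007276
M_NEUTRON = 1.008665
M_HE4_NUCLEUS = 4.001506  # helium-4 nucleus (approximate)
U_TO_MEV = 931.494        # energy equivalent of 1 u, in MeV

mass_defect = 2 * M_PROTON + 2 * M_NEUTRON - M_HE4_NUCLEUS
binding_energy_mev = mass_defect * U_TO_MEV
print(f"mass defect: {mass_defect:.6f} u")
print(f"binding energy: {binding_energy_mev:.1f} MeV")  # roughly 28 MeV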
A water molecule weighs a little less than two free hydrogen atoms and an oxygen atom. The minuscule mass difference is the energy needed to split the molecule into three individual atoms (divided by c2), which was given off as heat when the molecule formed (this heat had mass). Similarly, a stick of dynamite in theory weighs a little bit more than the fragments after the explosion; in this case the mass difference is the energy and heat that is released when the dynamite explodes. Such a change in mass may only happen when the system is open, and the energy and mass are allowed to escape. Thus, if a stick of dynamite is detonated in a hermetically sealed chamber, the mass of the chamber and fragments, the heat, sound, and light would still be equal to the original mass of the chamber and dynamite. If sitting on a scale, the weight and mass would not change. This would in theory also happen even with a nuclear bomb, if it could be kept in an ideal box of infinite strength, which did not rupture or pass radiation. Thus, a 21.5 kiloton (9×10^13 joule) nuclear bomb produces about one gram of heat and electromagnetic radiation, but the mass of this energy would not be detectable in an exploded bomb in an ideal box sitting on a scale; instead, the contents of the box would be heated to millions of degrees without changing total mass and weight. If a transparent window passing only electromagnetic radiation were opened in such an ideal box after the explosion, and a beam of X-rays and other lower-energy light allowed to escape the box, it would eventually be found to weigh one gram less than it had before the explosion. This weight loss and mass loss would happen as the box was cooled by this process, to room temperature. However, any surrounding mass that absorbed the X-rays (and other "heat") would gain this gram of mass from the resulting heating, thus, in this case, the mass "loss" would represent merely its relocation.
=== Practical examples ===
Einstein used the centimetre–gram–second system of units (cgs), but the formula is independent of the system of units. In natural units, the numerical value of the speed of light is set to equal 1, and the formula expresses an equality of numerical values: E = m. In the SI system (expressing the ratio E/m in joules per kilogram using the value of c in metres per second):
E/m = c^2 = (299792458 m/s)^2 = 89875517873681764 J/kg (≈ 9.0 × 10^16 joules per kilogram).
So the energy equivalent of one kilogram of mass is
89.9 petajoules
25.0 billion kilowatt-hours (or 25,000 GW·h)
21.5 trillion kilocalories (or 21.5 Pcal)
85.2 trillion BTUs (or 0.0852 quads)
or the energy released by combustion of any of the following:
21 500 kilotons of TNT-equivalent energy (or 21.5 Mt)
2630000000 litres or 695000000 US gallons of automotive gasoline
Any time energy is released, the process can be evaluated from an E = mc2 perspective. For instance, the "gadget"-style bomb used in the Trinity test and the bombing of Nagasaki had an explosive yield equivalent to 21 kt of TNT. About 1 kg of the approximately 6.15 kg of plutonium in each of these bombs fissioned into lighter elements totaling almost exactly one gram less, after cooling. The electromagnetic radiation and kinetic energy (thermal and blast energy) released in this explosion carried the missing gram of mass.
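The unit conversions listed above, and the roughly one gram of mass carried away by a ~21 kt explosion, can be reproduced with a few lines of arithmetic. The TNT equivalent (4.184 × 10⁹ J per ton) and the other conversion factors are standard values; this is only a consistency check of the figures quoted, not a source for them.

```python
C = 299_792_458.0  # m/s
E_PER_KG = C**2    # joules released per kilogram of mass, ~8.99e16 J

print(f"{E_PER_KG:.3e} J/kg")
print(f"{E_PER_KG / 1e15:.1f} petajoules")
print(f"{E_PER_KG / 3.6e6 / 1e9:.1f} billion kilowatt-hours")
print(f"{E_PER_KG / 4184 / 1e12:.1f} trillion kilocalories")
print(f"{E_PER_KG / 4.184e12:.0f} kilotons of TNT equivalent")

# The ~21 kt "gadget" yield corresponds to roughly one gram of mass:
yield_joules = 21e3 * 4.184e9  # 21 kilotons of TNT in joules
print(f"mass equivalent of 21 kt: {yield_joules / C**2 * 1000:.2f} g")
```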
Whenever energy is added to a system, the system gains mass, as shown when the equation is rearranged:
A spring's mass increases whenever it is put into compression or tension. Its mass increase arises from the increased potential energy stored within it, which is bound in the stretched chemical (electron) bonds linking the atoms within the spring.
Raising the temperature of an object (increasing its thermal energy) increases its mass. For example, consider the world's primary mass standard for the kilogram, made of platinum and iridium. If its temperature is allowed to change by 1 °C, its mass changes by 1.5 picograms (1 pg = 1×10^−12 g).
A spinning ball has greater mass than when it is not spinning. Its increase of mass is exactly the equivalent of the mass of energy of rotation, which is itself the sum of the kinetic energies of all the moving parts of the ball. For example, the Earth itself is more massive due to its rotation than it would be with no rotation. The rotational energy of the Earth is greater than 10^24 joules, which corresponds to over 10^7 kg; a rough numerical check of these last two examples is sketched below.
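A rough numerical check of the thermal and rotational examples above. The specific heat assumed for the platinum–iridium prototype (about 130 J/(kg·K)) is an approximate value supplied here, not a figure from the article; the rotational-energy line simply reuses the 10^24 J quoted above.

```python
C = 299_792_458.0  # m/s

# Thermal example: warming a 1 kg platinum-iridium cylinder by 1 degree C.
SPECIFIC_HEAT = 130.0                        # J/(kg*K), approximate value assumed for Pt-Ir
delta_E_thermal = 1.0 * SPECIFIC_HEAT * 1.0  # mass * c_p * delta_T, in joules
print(f"thermal mass increase: {delta_E_thermal / C**2 * 1e15:.2f} pg")  # ~1.4 pg

# Rotational example: an energy of 1e24 J (the figure quoted above) in mass units.
E_ROTATION = 1e24  # joules
print(f"mass of rotational energy: {E_ROTATION / C**2:.2e} kg")  # ~1.1e7 kg
```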
== History ==
While Einstein was the first to have correctly deduced the mass–energy equivalence formula, he was not the first to have related energy with mass, though nearly all previous authors thought that the energy that contributes to mass comes only from electromagnetic fields. Once discovered, Einstein's formula was initially written in many different notations, and its interpretation and justification was further developed in several steps.
=== Developments prior to Einstein ===
Eighteenth century theories on the correlation of mass and energy included that devised by the English scientist Isaac Newton in 1717, who speculated that light particles and matter particles were interconvertible in "Query 30" of the Opticks, where he asks: "Are not the gross bodies and light convertible into one another, and may not bodies receive much of their activity from the particles of light which enter their composition?" Swedish scientist and theologian Emanuel Swedenborg, in his Principia of 1734 theorized that all matter is ultimately composed of dimensionless points of "pure and total motion". He described this motion as being without force, direction or speed, but having the potential for force, direction and speed everywhere within it.
During the nineteenth century there were several speculative attempts to show that mass and energy were proportional in various ether theories. In 1873 the Russian physicist and mathematician Nikolay Umov pointed out a relation between mass and energy for ether in the form of E = kmc2, where 0.5 ≤ k ≤ 1. English engineer Samuel Tolver Preston in 1875 and the Italian industrialist and geologist Olinto De Pretto in 1903, following physicist Georges-Louis Le Sage, imagined that the universe was filled with an ether of tiny particles that always move at speed c. Each of these particles has a kinetic energy of mc2 up to a small numerical factor, giving a mass–energy relation.
In 1905, independently of Einstein, French polymath Gustave Le Bon speculated that atoms could release large amounts of latent energy, reasoning from an all-encompassing qualitative philosophy of physics.
==== Electromagnetic mass ====
There were many attempts in the 19th and the beginning of the 20th century—like those of British physicists J. J. Thomson in 1881 and Oliver Heaviside in 1889, and George Frederick Charles Searle in 1897, German physicists Wilhelm Wien in 1900 and Max Abraham in 1902, and the Dutch physicist Hendrik Antoon Lorentz in 1904—to understand how the mass of a charged object depends on the electrostatic field. This concept was called electromagnetic mass, and was considered as being dependent on velocity and direction as well. Lorentz in 1904 gave the following expressions for longitudinal and transverse electromagnetic mass:
{\displaystyle m_{L}={\frac {m_{0}}{\left({\sqrt {1-{\frac {v^{2}}{c^{2}}}}}\right)^{3}}},\quad m_{T}={\frac {m_{0}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},}
where
{\displaystyle m_{0}={\frac {4}{3}}{\frac {E_{em}}{c^{2}}}.}
Another way of deriving a type of electromagnetic mass was based on the concept of radiation pressure. In 1900, French polymath Henri Poincaré associated electromagnetic radiation energy with a "fictitious fluid" having momentum and mass
{\displaystyle m_{em}={\frac {E_{em}}{c^{2}}}\,.}
By that, Poincaré tried to save the center of mass theorem in Lorentz's theory, though his treatment led to radiation paradoxes.
Austrian physicist Friedrich Hasenöhrl showed in 1904 that electromagnetic cavity radiation contributes the "apparent mass"
{\displaystyle m_{0}={\frac {4}{3}}{\frac {E_{em}}{c^{2}}}}
to the cavity's mass. He argued that this implies mass dependence on temperature as well.
=== Einstein: mass–energy equivalence ===
Einstein did not write the exact formula E = mc2 in his 1905 Annus Mirabilis paper "Does the Inertia of an object Depend Upon Its Energy Content?"; rather, the paper states that if a body gives off the energy L by emitting light, its mass diminishes by L/c2. This formulation relates only a change Δm in mass to a change L in energy without requiring the absolute relationship. The relationship convinced him that mass and energy can be seen as two names for the same underlying, conserved physical quantity. He has stated that the laws of conservation of energy and conservation of mass are "one and the same". Einstein elaborated in a 1946 essay that "the principle of the conservation of mass… proved inadequate in the face of the special theory of relativity. It was therefore merged with the energy conservation principle—just as, about 60 years before, the principle of the conservation of mechanical energy had been combined with the principle of the conservation of heat [thermal energy]. We might say that the principle of the conservation of energy, having previously swallowed up that of the conservation of heat, now proceeded to swallow that of the conservation of mass—and holds the field alone."
==== Mass–velocity relationship ====
In developing special relativity, Einstein found that the kinetic energy of a moving body is
{\displaystyle E_{k}=m_{0}c^{2}(\gamma -1)=m_{0}c^{2}\left({\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-1\right),}
with v the velocity, m0 the rest mass, and γ the Lorentz factor.
He included the second term on the right to make sure that for small velocities the energy would be the same as in classical mechanics, thus satisfying the correspondence principle:
{\displaystyle E_{k}={\frac {1}{2}}m_{0}v^{2}+\cdots }
Without this second term, there would be an additional contribution in the energy when the particle is not moving.
==== Einstein's view on mass ====
Einstein, following Lorentz and Abraham, used velocity- and direction-dependent mass concepts in his 1905 electrodynamics paper and in another paper in 1906. In Einstein's first 1905 paper on E = mc2, he treated m as what would now be called the rest mass, and it has been noted that in his later years he did not like the idea of "relativistic mass".
In modern physics terminology, relativistic energy is used in lieu of relativistic mass and the term "mass" is reserved for the rest mass. Historically, there has been considerable debate over the use of the concept of "relativistic mass" and the connection of "mass" in relativity to "mass" in Newtonian dynamics. One view is that only rest mass is a viable concept and is a property of the particle; while relativistic mass is a conglomeration of particle properties and properties of spacetime. Another view, attributed to Norwegian physicist Kjell Vøyenli, is that the Newtonian concept of mass as a particle property and the relativistic concept of mass have to be viewed as embedded in their own theories and as having no precise connection.
==== Einstein's 1905 derivation ====
Already in his relativity paper "On the electrodynamics of moving bodies", Einstein derived the correct expression for the kinetic energy of particles:
{\displaystyle E_{k}=mc^{2}\left({\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-1\right).}
Now the question remained open as to which formulation applies to bodies at rest. This was tackled by Einstein in his paper "Does the inertia of a body depend upon its energy content?", one of his Annus Mirabilis papers. Here, Einstein used V to represent the speed of light in vacuum and L to represent the energy lost by a body in the form of radiation. Consequently, the equation E = mc2 was not originally written as a formula but as a sentence in German saying that "if a body gives off the energy L in the form of radiation, its mass diminishes by L/V2." A remark placed above it informed that the equation was approximated by neglecting "magnitudes of fourth and higher orders" of a series expansion. Einstein used a body emitting two light pulses in opposite directions, having energies of E0 before and E1 after the emission as seen in its rest frame. As seen from a moving frame, E0 becomes H0 and E1 becomes H1. Einstein obtained, in modern notation:
{\displaystyle \left(H_{0}-E_{0}\right)-\left(H_{1}-E_{1}\right)=E\left({\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-1\right).}
He then argued that H − E can only differ from the kinetic energy K by an additive constant, which gives
{\displaystyle K_{0}-K_{1}=E\left({\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-1\right).}
Neglecting effects higher than third order in v/c after a Taylor series expansion of the right side of this yields:
{\displaystyle K_{0}-K_{1}={\frac {E}{c^{2}}}{\frac {v^{2}}{2}}.}
Einstein concluded that the emission reduces the body's mass by E/c2, and that the mass of a body is a measure of its energy content.
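The Taylor-expansion step can be verified symbolically. The sketch below (using the sympy library) expands the right-hand side E(1/√(1 − v²/c²) − 1) in powers of v and confirms that the leading term is (E/c²)(v²/2), as stated above.

```python
import sympy as sp

v, c, E = sp.symbols('v c E', positive=True)

# Right-hand side of Einstein's relation: E * (1/sqrt(1 - v^2/c^2) - 1)
rhs = E * (1 / sp.sqrt(1 - v**2 / c**2) - 1)

# Expand in powers of v and drop terms of fourth order and higher.
expansion = sp.series(rhs, v, 0, 4).removeO()
print(sp.simplify(expansion))  # E*v**2/(2*c**2), i.e. (E/c^2) * v^2/2
```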
The correctness of Einstein's 1905 derivation of E = mc2 was criticized by German theoretical physicist Max Planck in 1907, who argued that it is only valid to first approximation. Another criticism was formulated by American physicist Herbert Ives in 1952 and the Israeli physicist Max Jammer in 1961, asserting that Einstein's derivation is based on begging the question. Other scholars, such as American and Chilean philosophers John Stachel and Roberto Torretti, have argued that Ives' criticism was wrong, and that Einstein's derivation was correct. American physics writer Hans Ohanian, in 2008, agreed with Stachel/Torretti's criticism of Ives, though he argued that Einstein's derivation was wrong for other reasons.
==== Relativistic center-of-mass theorem of 1906 ====
Like Poincaré, Einstein concluded in 1906 that the inertia of electromagnetic energy is a necessary condition for the center-of-mass theorem to hold. On this occasion, Einstein referred to Poincaré's 1900 paper and wrote: "Although the merely formal considerations, which we will need for the proof, are already mostly contained in a work by H. Poincaré2, for the sake of clarity I will not rely on that work." In Einstein's more physical, as opposed to formal or mathematical, point of view, there was no need for fictitious masses. He could avoid the perpetual motion problem because, on the basis of the mass–energy equivalence, he could show that the transport of inertia that accompanies the emission and absorption of radiation solves the problem. Poincaré's rejection of the principle of action–reaction can be avoided through Einstein's E = mc2, because mass conservation appears as a special case of the energy conservation law.
==== Further developments ====
There were several further developments in the first decade of the twentieth century. In May 1907, Einstein explained that the expression for energy ε of a moving mass point assumes the simplest form when its expression for the state of rest is chosen to be ε0 = μV2 (where μ is the mass), which is in agreement with the "principle of the equivalence of mass and energy". In addition, Einstein used the formula μ = E0/V2, with E0 being the energy of a system of mass points, to describe the energy and mass increase of that system when the velocity of the differently moving mass points is increased. Max Planck rewrote Einstein's mass–energy relationship as M = (E0 + pV0)/c2 in June 1907, where p is the pressure and V0 the volume, to express the relation between mass, its latent energy, and thermodynamic energy within the body. Subsequently, in October 1907, this was rewritten as M0 = E0/c2 and given a quantum interpretation by German physicist Johannes Stark, who assumed its validity and correctness. In December 1907, Einstein expressed the equivalence in the form M = μ + E0/c2 and concluded: "A mass μ is equivalent, as regards inertia, to a quantity of energy μc2. […] It appears far more natural to consider every inertial mass as a store of energy." American physical chemists Gilbert N. Lewis and Richard C. Tolman used two variations of the formula in 1909: m = E/c2 and m0 = E0/c2, with E being the relativistic energy (the energy of an object when the object is moving), E0 is the rest energy (the energy when not moving), m is the relativistic mass (the rest mass and the extra mass gained when moving), and m0 is the rest mass. The same relations in different notation were used by Lorentz in 1913 and 1914, though he placed the energy on the left-hand side: ε = Mc2 and ε0 = mc2, with ε being the total energy (rest energy plus kinetic energy) of a moving material point, ε0 its rest energy, M the relativistic mass, and m the invariant mass.
In 1911, German physicist Max von Laue gave a more comprehensive proof of M0 = E0/c2 from the stress–energy tensor, which was later generalized by German mathematician Felix Klein in 1918.
Einstein returned to the topic once again after World War II and this time he wrote E = mc2 in the title of his article intended as an explanation for a general reader by analogy.
==== Alternative version ====
An alternative version of Einstein's thought experiment was proposed by American theoretical physicist Fritz Rohrlich in 1990, who based his reasoning on the Doppler effect. Like Einstein, he considered a body at rest with mass M. If the body is examined in a frame moving with nonrelativistic velocity v, it is no longer at rest and in the moving frame it has momentum P = Mv. Then he supposed the body emits two pulses of light to the left and to the right, each carrying an equal amount of energy E/2. In its rest frame, the object remains at rest after the emission since the two beams are equal in strength and carry opposite momentum. However, if the same process is considered in a frame that moves with velocity v to the left, the pulse moving to the left is redshifted, while the pulse moving to the right is blue shifted. The blue light carries more momentum than the red light, so that the momentum of the light in the moving frame is not balanced: the light is carrying some net momentum to the right. The object has not changed its velocity before or after the emission. Yet in this frame it has lost some right-momentum to the light. The only way it could have lost momentum is by losing mass. This also solves Poincaré's radiation paradox. The velocity is small, so the right-moving light is blueshifted by an amount equal to the nonrelativistic Doppler shift factor 1 − v/c. The momentum of the light is its energy divided by c, and it is increased by a factor of v/c. So the right-moving light is carrying an extra momentum ΔP given by:
{\displaystyle \Delta P={v \over c}{E \over 2c}.}
The left-moving light carries a little less momentum, by the same amount ΔP. So the total right-momentum in both light pulses is twice ΔP. This is the right-momentum that the object lost.
{\displaystyle 2\Delta P=v{E \over c^{2}}.}
The momentum of the object in the moving frame after the emission is reduced to this amount:
{\displaystyle P'=Mv-2\Delta P=\left(M-{E \over c^{2}}\right)v.}
So the change in the object's mass is equal to the total energy lost divided by c2. Since any emission of energy can be carried out by a two-step process, where first the energy is emitted as light and then the light is converted to some other form of energy, any emission of energy is accompanied by a loss of mass. Similarly, by considering absorption, a gain in energy is accompanied by a gain in mass.
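The momentum bookkeeping in this argument can be checked symbolically. The sketch below (using sympy) applies the nonrelativistic Doppler factors 1 ± v/c to the two pulses of energy E/2 and confirms that the net momentum carried by the light is Ev/c², so the object's remaining momentum factors as (M − E/c²)v.

```python
import sympy as sp

M, E, v, c = sp.symbols('M E v c', positive=True)

# Nonrelativistic Doppler bookkeeping for the two light pulses (each of energy E/2):
p_right = (E / (2 * c)) * (1 + v / c)  # blueshifted pulse carries more momentum
p_left = (E / (2 * c)) * (1 - v / c)   # redshifted pulse carries less
net_light_momentum = p_right - p_left

print(sp.simplify(net_light_momentum))          # E*v/c**2, i.e. 2*DeltaP
print(sp.factor(M * v - net_light_momentum))    # v*(M*c**2 - E)/c**2, i.e. (M - E/c^2)*v
```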
=== Radioactivity and nuclear energy ===
It was quickly noted after the discovery of radioactivity in 1897 that the total energy due to radioactive processes is about one million times greater than that involved in any known molecular change, raising the question of where the energy comes from. After eliminating the idea of absorption and emission of some sort of Lesagian ether particles, the existence of a huge amount of latent energy, stored within matter, was proposed by New Zealand physicist Ernest Rutherford and British radiochemist Frederick Soddy in 1903. Rutherford also suggested that this internal energy is stored within normal matter as well. He went on to speculate in 1904: "If it were ever found possible to control at will the rate of disintegration of the radio-elements, an enormous amount of energy could be obtained from a small quantity of matter."
Einstein's equation does not explain the large energies released in radioactive decay, but can be used to quantify them. The theoretical explanation for radioactive decay is given by the nuclear forces responsible for holding atoms together, though these forces were still unknown in 1905. The enormous energy released from radioactive decay had previously been measured by Rutherford and was much more easily measured than the small change in the gross mass of materials as a result. Einstein's equation, by theory, can give these energies by measuring mass differences before and after reactions, but in practice, these mass differences in 1905 were still too small to be measured in bulk. Prior to this, the ease of measuring radioactive decay energies with a calorimeter was thought likely to allow measurement of changes in mass difference, as a check on Einstein's equation itself. Einstein mentions in his 1905 paper that mass–energy equivalence might perhaps be tested with radioactive decay, which was known by then to release enough energy to possibly be "weighed," when missing from the system. However, radioactivity seemed to proceed at its own unalterable pace, and even when simple nuclear reactions became possible using proton bombardment, the idea that these great amounts of usable energy could be liberated at will with any practicality proved difficult to substantiate. Rutherford was reported in 1933 to have declared that this energy could not be exploited efficiently: "Anyone who expects a source of power from the transformation of the atom is talking moonshine."
This outlook changed dramatically in 1932 with the discovery of the neutron and its mass, allowing mass differences for single nuclides and their reactions to be calculated directly, and compared with the sum of masses for the particles that made up their composition. In 1933, the energy released from the reaction of lithium-7 plus protons giving rise to two alpha particles, allowed Einstein's equation to be tested to an error of ±0.5%. However, scientists still did not see such reactions as a practical source of power, due to the energy cost of accelerating reaction particles. After the very public demonstration of huge energies released from nuclear fission after the atomic bombings of Hiroshima and Nagasaki in 1945, the equation E = mc2 became directly linked in the public eye with the power and peril of nuclear weapons. The equation was featured on page 2 of the Smyth Report, the official 1945 release by the US government on the development of the atomic bomb, and by 1946 the equation was linked closely enough with Einstein's work that the cover of Time magazine prominently featured a picture of Einstein next to an image of a mushroom cloud emblazoned with the equation. Einstein himself had only a minor role in the Manhattan Project: he had cosigned a letter to the U.S. president in 1939 urging funding for research into atomic energy, warning that an atomic bomb was theoretically possible. The letter persuaded Roosevelt to devote a significant portion of the wartime budget to atomic research. Without a security clearance, Einstein's only scientific contribution was an analysis of an isotope separation method in theoretical terms. It was inconsequential, on account of Einstein not being given sufficient information to fully work on the problem.
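The 1933 lithium test can be illustrated with a back-of-the-envelope calculation of the reaction's energy release from approximate atomic masses. The mass values and the 931.494 MeV per u conversion below are standard figures quoted from memory and should be treated as approximate; the result, roughly 17 MeV per reaction, is consistent with the measured value.

```python
# Approximate atomic masses in unified atomic mass units (u); illustrative values.
M_H1 = 1.007825     # hydrogen-1
M_LI7 = 7.016003    # lithium-7
M_HE4 = 4.002602    # helium-4
U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV

mass_before = M_LI7 + M_H1
mass_after = 2 * M_HE4
q_value_mev = (mass_before - mass_after) * U_TO_MEV
print(f"mass converted: {mass_before - mass_after:.6f} u")
print(f"energy released: {q_value_mev:.2f} MeV")  # roughly 17 MeV per reaction
```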
While E = mc2 is useful for understanding the amount of energy potentially released in a fission reaction, it was not strictly necessary to develop the weapon, once the fission process was known, and its energy measured at 200 MeV (which was directly possible, using a quantitative Geiger counter, at that time). The physicist and Manhattan Project participant Robert Serber noted that somehow "the popular notion took hold long ago that Einstein's theory of relativity, in particular his equation E = mc2, plays some essential role in the theory of fission. Einstein had a part in alerting the United States government to the possibility of building an atomic bomb, but his theory of relativity is not required in discussing fission. The theory of fission is what physicists call a non-relativistic theory, meaning that relativistic effects are too small to affect the dynamics of the fission process significantly." There are other views on the equation's importance to nuclear reactions. In late 1938, the Austrian-Swedish and British physicists Lise Meitner and Otto Robert Frisch—while on a winter walk during which they solved the meaning of Hahn's experimental results and introduced the idea that would be called atomic fission—directly used Einstein's equation to help them understand the quantitative energetics of the reaction that overcame the "surface tension-like" forces that hold the nucleus together, and allowed the fission fragments to separate to a configuration from which their charges could force them into an energetic fission. To do this, they used packing fraction, or nuclear binding energy values for elements. These, together with use of E = mc2 allowed them to realize on the spot that the basic fission process was energetically possible.
=== Einstein's equation written ===
According to the Einstein Papers Project at the California Institute of Technology and Hebrew University of Jerusalem, there remain only four known copies of this equation as written by Einstein. One of these is a letter written in German to Ludwik Silberstein, which was in Silberstein's archives and sold at auction for $1.2 million on May 21, 2021, according to RR Auction of Boston, Massachusetts.
== See also ==
== Notes ==
== References ==
== External links ==
Einstein on the Inertia of Energy – MathPages
Einstein on film explaining mass–energy equivalence
Mass and Energy – Conversations About Science with Theoretical Physicist Matt Strassler
The Equivalence of Mass and Energy – Entry in the Stanford Encyclopedia of Philosophy
Merrifield, Michael; Copeland, Ed; Bowley, Roger. "E=mc2 – Mass–Energy Equivalence". Sixty Symbols. Brady Haran for the University of Nottingham. | Wikipedia/Mass-energy |
The Dark Energy Spectroscopic Instrument (DESI) is a scientific research instrument for conducting spectrographic astronomical surveys of distant galaxies. Its main components are a focal plane containing 5,000 fiber-positioning robots, and a bank of spectrographs which are fed by the fibers. The instrument enables an experiment to probe the expansion history of the universe and the mysterious physics of dark energy. The main DESI survey started in May 2021. DESI sits at an elevation of 6,880 feet (2,100 m), where it has been retrofitted onto the Mayall Telescope on top of Kitt Peak in the Sonoran Desert, which is located 55 miles (89 km) from Tucson, Arizona, US.
The instrument is operated by the Lawrence Berkeley National Laboratory under funding from the US Department of Energy's Office of Science. Construction of the instrument was principally funded by the US Department of Energy's Office of Science, and by other numerous sources including the US National Science Foundation, the UK Science and Technology Facilities Council, France's Alternative Energies and Atomic Energy Commission, Mexico's National Council of Science and Technology, Spain's Ministry of Science and Innovation, by the Gordon and Betty Moore Foundation, by the Heising-Simons Foundation, and by collaborating institutions worldwide.
== Science goals ==
The expansion history and large-scale structure of the universe is a key prediction of cosmological models, and DESI observations will permit scientists to probe diverse aspects of cosmology, from dark energy to alternatives to General Relativity to neutrino masses to the early universe. The data from DESI will be used to create three-dimensional maps of the distribution of matter covering an unprecedented volume of the universe with unparalleled detail. This will provide insight into the nature of dark energy and establish whether cosmic acceleration is due to a cosmic-scale modification of General Relativity. DESI will be transformative in the understanding of dark energy and the expansion rate of the universe at early times, one of the greatest mysteries in the understanding of the physical laws.
DESI will measure the expansion history of the universe using the baryon acoustic oscillations (BAO) imprinted in the clustering of galaxies, quasars, and the intergalactic medium. The BAO technique is a robust way to extract cosmological distance information from the clustering of matter and galaxies. It relies only on very large-scale structure and it does so in a manner that enables scientists to separate the acoustic peak of the BAO signature from uncertainties in most systematic errors in the data. BAO was identified in the 2006 Dark Energy Task Force report as one of the key methods for studying dark energy. In May 2014, the High-Energy Physics Advisory Panel, a federal advisory committee, commissioned by the US Department of Energy (DOE) and the National Science Foundation (NSF) endorsed DESI.
== 3D map of the universe ==
The baryon acoustic oscillations method requires a three-dimensional map of distant galaxies and quasars created from the angular and redshift information of a large statistical sample of cosmologically distant objects. By obtaining spectra of distant galaxies it is possible to determine their distance, via the measurement of their spectroscopic redshift, and thus create a 3-D map of the universe. The 3-D map of the large-scale structure of the universe also contains more information about dark energy than just the BAO and is sensitive to the mass of the neutrino and parameters that governed the primordial universe. During its five-year survey, which began on May 15, 2021, the DESI experiment is expected to observe 40 million galaxies and quasars.
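Converting a measured spectroscopic redshift into a distance requires assuming a cosmological model. The sketch below uses the astropy package's built-in Planck18 cosmology to compute comoving distances for a few sample redshifts; the redshift values are arbitrary examples, and any other astropy cosmology object could be substituted.

```python
# Requires the astropy package; Planck18 is one of its built-in cosmologies.
from astropy.cosmology import Planck18

for z in (0.5, 1.0, 2.0, 3.5):
    d = Planck18.comoving_distance(z)  # returns a Quantity in megaparsecs
    print(f"z = {z}: comoving distance ~ {d.to('Mpc').value:.0f} Mpc")
```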
== Development ==
The DESI instrument implements a new highly multiplexed optical spectrograph on the Mayall Telescope. The new optical corrector design creates a very large, 8.0 square degree field of view on the sky, which combined with the new focal plane instrumentation weighs approximately 10 tonnes. The focal plane accommodates 5,000 small robotic fiber positioners on a 10.4 millimeter pitch. The entire focal plane can be reconfigured for the next exposure in less than two minutes while the telescope slews to the next field. The DESI instrument is capable of taking 5,000 simultaneous spectra over a wavelength range from 360 nm to 980 nm. The DESI project scope included construction, installation, and commissioning of the new wide-field corrector and corrector support structure for the telescope, the focal plane assembly with 5,000 robotic fiber positioners and ten guide/focus/alignment sensors, a 40-meter optical fiber cabling system that brings light from the focal plane to the spectrographs, ten 3-arm spectrographs, an instrument control system, and a data analysis pipeline.
Instrument fabrication was managed by the Lawrence Berkeley National Laboratory, which also oversees operation of the experiment and its 600-person international scientific collaboration. Cost of construction was $56M from the US Department of Energy's Office of Science plus an additional $19M from other non-federal sources, including contributions in-kind. The leadership of DESI currently consists of the director, Dr. Michael E. Levi, collaboration co-spokespersons Prof. Alexie Leauthaud and Prof. Will Percival, project scientists Dr. David J. Schlegel and Dr. Julien Guy, project manager Dr. Patrick Jelinsky, instrument scientists Prof. Klaus Honscheid and Prof. Constance Rockosi. Past collaboration spokespersons have been Prof. Daniel Eisenstein, Prof. Risa Wechsler, Prof. Kyle Dawson, and Dr. Nathalie Palanque-Delabrouille.
The U.S. Department of Energy (DOE) approved CD-0 (Mission Need) on September 18, 2012, approved CD-1 (Alternative Selection and Cost Range) on March 19, 2015, and CD-2 (Performance Baseline) on September 17, 2015. U.S. Congressional approval for the start of DESI as a new Major Item of Equipment was provided in the Fiscal Year 2015 Energy & Water appropriations legislation. Construction of the new instrument started on June 22, 2016 with CD-3 (Start Construction) approval; the instrument was largely assembled by 2019, and commissioning finished on March 21, 2020, in advance of the pandemic, marking the formal end of the project (CD-4). DESI was completed under budget by $1.9M and 17 months ahead of schedule. As a consequence, the project received the DOE Project Management Excellence Award for 2020. After a pause for the pandemic and a transition to remote operations, DESI returned to survey operations in December 2020 with a final checkout and validation phase prior to starting its planned five-year survey. The five-year survey began on May 14, 2021. DESI was shut down for three months in the summer of 2022 due to the Contreras fire which engulfed Kitt Peak. DESI was undamaged and is acquiring scientific data.
== DESI Legacy Imaging Surveys ==
To provide targets for the DESI survey three telescopes surveyed the northern and part of the southern sky in the g, r and z-band. Those surveys were the Beijing-Arizona Sky Survey (BASS), using the Bok 2.3-m telescope, the Dark Energy Camera Legacy Survey (DECaLS), using the Blanco 4m telescope and the Mayall z-band Legacy Survey (MzLS), using the 4-meter Mayall telescope. The area of the surveys is 14,000 square degrees (about one third of the sky) and avoids the Milky Way. These surveys were combined into the DESI Legacy Imaging Surveys, or Legacy Surveys. Colored images of the survey can be viewed in the Legacy Survey Sky Browser. The legacy survey covers 16,000 square degrees of the night sky containing 1.6 billion objects including galaxies and quasars out to 11 billion years ago.
== History ==
DESI received a go-ahead to start R&D for the project in December 2012 with the assignment of the Lawrence Berkeley National Laboratory as the managing laboratory. Dr. Michael Levi, a senior scientist at the Lawrence Berkeley National Laboratory was appointed by the laboratory to be DESI's project director who served in that role starting in 2012 and throughout construction. Henry Heetderks was project manager from 2013 until 2016, Robert Besuner was project manager from 2016 until 2020. Congressional authorization was provided in 2015, and the US Department of Energy's Office of Science approved the start of physical construction in June 2016. First light of the new corrector system was obtained on the night of April 1, 2019, and first-light of the entire instrument was achieved on the night of October 22, 2019. Commissioning ensued after first light and was completed in March 2020, then paused during the pandemic in 2020. DESI started its 5-year main scientific survey on May 14, 2021. DESI is currently operating normally after surviving the Contreras fire in 2022.
== Data releases ==
All of the publicly available data, including redshift catalogs, added-value catalogs, and documentation, can be accessed through the DESI data portal. Individuals with accounts at the National Energy Research Scientific Computing Center (NERSC) can access the entire public portion of the DESI data. DESI catalogs also exist in a database format. For convenience, a copy of the public databases is also hosted by the NOIRLab Astro Data Lab science platform and by the SPectral Analysis and Retrievable Catalog Lab (SPARCL). One easy way to access DESI spectra online is to use the legacy viewer at the DESI Legacy Imaging Surveys: after checking the box for DESI spectra and clicking on an encircled galaxy or star, a link to the DESI Spectral Viewer appears, in which the spectrum can be explored (see External links under Index| Legacy Surveys).
=== Early data release ===
On 13 June 2023 the DESI Early Data Release (EDR) was announced. The EDR contains spectra of nearly two million galaxies, quasars and stars. One early result of the EDR was announced in February 2023 and described a mass migration of stars into the Andromeda Galaxy. The EDR also revealed very distant quasars and very metal-poor stars.
==== Possibly evolving dark energy levels ====
From the level of detail able to be observed, the largest 3-D map of the universe to date was created (2024). Commenting on these data, DESI Director Michael Levi stated: "We're also seeing some potentially interesting differences that could indicate that dark energy is evolving over time. Those may or may not go away with more data, so we're excited to start analyzing our three-year dataset soon."
=== Data Release 1 (DR1) ===
Data Release 1 (DR1), published on 19 March 2025, contains 18.7 million objects: roughly 4 million stars, 13.1 million galaxies, and 1.6 million quasars. One outcome of this large dataset was hints of evolving dark energy. If the result holds, it would mark the first major change in our understanding of the universe in decades. While the DESI observations on their own are consistent with the Lambda-CDM model, in combination with past surveys of the cosmic microwave background, supernovae, and weak lensing the data indicate that the influence of dark energy weakens over time. The signal does not reach 5 sigma, but at 2.8 to 4.2 sigma it nonetheless opens a new line of research; for example, the cause of the variation itself is not yet known.
== References ==
== External links ==
Official DESI site
Index| Legacy Survey
Legacy Survey Sky Browser
Science Final Design Report
Instrument Final Design Report
DESI data
Key publications
Omnibus DESI collaboration
Telescope tracks 35 million galaxies in Dark Energy hunt, BBC Science report, 28 October 2019 | Wikipedia/Dark_Energy_Spectroscopic_Instrument |
In theoretical physics, fine-tuning is the process in which parameters of a model must be adjusted very precisely in order to fit with certain observations.
Theories requiring fine-tuning are regarded as problematic in the absence of a known mechanism to explain why the parameters happen to take precisely the observed values. The heuristic rule that parameters in a fundamental physical theory should not be too fine-tuned is called naturalness.
== Background ==
The idea that naturalness will explain fine-tuning was brought into question by Nima Arkani-Hamed, a theoretical physicist, in his talk "Why is there a Macroscopic Universe?", a lecture from the mini-series "Multiverse & Fine Tuning" of the "Philosophy of Cosmology" project, a 2013 collaboration between the University of Oxford and the University of Cambridge. In it he describes how naturalness has usually provided a solution to problems in physics, and has usually done so earlier than expected. However, in addressing the problem of the cosmological constant, naturalness has failed to provide an explanation, though it would have been expected to have done so long ago.
The necessity of fine-tuning leads to various problems that do not show that the theories are incorrect, in the sense of falsifying observations, but nevertheless suggest that a piece of the story is missing. For example, the cosmological constant problem (why is the cosmological constant so small?); the hierarchy problem; and the strong CP problem, among others.
== Example ==
An example of a fine-tuning problem considered by the scientific community to have a plausible "natural" solution is the cosmological flatness problem, which is solved if inflationary theory is correct: inflation forces the universe to become very flat, answering the question of why the universe is today observed to be flat to such a high degree.
== Measurement ==
Although fine-tuning was traditionally measured by ad hoc fine-tuning measures, such as the Barbieri-Giudice-Ellis measure, over the past decade many scientists recognized that fine-tuning arguments were a specific application of Bayesian statistics.
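As a simple numerical illustration (not drawn from the sources above; the function name and the toy observable are assumptions), a Barbieri–Giudice-style sensitivity Δ = |∂ ln O / ∂ ln p| can be estimated with a finite difference; a large Δ indicates that the observable O depends delicately on the parameter p:

```python
# Minimal sketch: estimate Delta = |d ln O / d ln p| by central differences.
def fine_tuning_measure(observable, p, eps=1e-6):
    up, down = observable(p * (1 + eps)), observable(p * (1 - eps))
    return abs((up - down) / (2 * eps * observable(p)))

# Toy observable: a small number left over after two large terms cancel.
toy = lambda p: 1.0e6 * p - 999999.0
print(fine_tuning_measure(toy, 1.0))  # ~1e6, signalling strong fine-tuning
```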
== See also ==
Anthropic principle
Fine-tuned universe
Hierarchy problem
Strong CP problem
== References ==
== External links ==
Quotations related to Fine-tuning (physics) at Wikiquote | Wikipedia/Fine-tuning_(physics) |
Coulomb's inverse-square law, or simply Coulomb's law, is an experimental law of physics that calculates the amount of force between two electrically charged particles at rest. This electric force is conventionally called the electrostatic force or Coulomb force. Although the law was known earlier, it was first published in 1785 by French physicist Charles-Augustin de Coulomb. Coulomb's law was essential to the development of the theory of electromagnetism and maybe even its starting point, as it allowed meaningful discussions of the amount of electric charge in a particle.
The law states that the magnitude, or absolute value, of the attractive or repulsive electrostatic force between two point charges is directly proportional to the product of the magnitudes of their charges and inversely proportional to the square of the distance between them. Coulomb discovered that bodies with like electrical charges repel:
It follows therefore from these three tests, that the repulsive force that the two balls – [that were] electrified with the same kind of electricity – exert on each other, follows the inverse proportion of the square of the distance.
Coulomb also showed that oppositely charged bodies attract according to an inverse-square law:
{\displaystyle |F|=k_{\text{e}}{\frac {|q_{1}||q_{2}|}{r^{2}}}}
Here, ke is a constant, q1 and q2 are the quantities of each charge, and the scalar r is the distance between the charges.
The force is along the straight line joining the two charges. If the charges have the same sign, the electrostatic force between them makes them repel; if they have different signs, the force between them makes them attract.
Being an inverse-square law, the law is similar to Isaac Newton's inverse-square law of universal gravitation, but gravitational forces always make things attract, while electrostatic forces make charges attract or repel. Also, gravitational forces are much weaker than electrostatic forces. Coulomb's law can be used to derive Gauss's law, and vice versa. In the case of a single point charge at rest, the two laws are equivalent, expressing the same physical law in different ways. The law has been tested extensively, and observations have upheld the law on the scale from 10−16 m to 108 m.
== History ==
Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers and pieces of paper. Thales of Miletus made the first recorded description of static electricity around 600 BC, when he noticed that friction could make a piece of amber attract small objects.
In 1600, English scientist William Gilbert made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the Neo-Latin word electricus ("of amber" or "like amber", from ἤλεκτρον [elektron], the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646.
Early investigators of the 18th century who suspected that the electrical force diminished with distance as the force of gravity did (i.e., as the inverse square of the distance) included Daniel Bernoulli and Alessandro Volta, both of whom measured the force between plates of a capacitor, and Franz Aepinus who supposed the inverse-square law in 1758.
Based on experiments with electrically charged spheres, Joseph Priestley of England was among the first to propose that electrical force followed an inverse-square law, similar to Newton's law of universal gravitation. However, he did not generalize or elaborate on this. In 1767, he conjectured that the force between charges varied as the inverse square of the distance.
In 1769, Scottish physicist John Robison announced that, according to his measurements, the force of repulsion between two spheres with charges of the same sign varied as x−2.06.
In the early 1770s, the dependence of the force between charged bodies upon both distance and charge had already been discovered, but not published, by Henry Cavendish of England. In his notes, Cavendish wrote, "We may therefore conclude that the electric attraction and repulsion must be inversely as some power of the distance between that of the 2 + 1/50th and that of the 2 − 1/50th, and there is no reason to think that it differs at all from the inverse duplicate ratio".
Finally, in 1785, the French physicist Charles-Augustin de Coulomb published his first three reports of electricity and magnetism where he stated his law. This publication was essential to the development of the theory of electromagnetism. He used a torsion balance to study the repulsion and attraction forces of charged particles, and determined that the magnitude of the electric force between two point charges is directly proportional to the product of the charges and inversely proportional to the square of the distance between them.
The torsion balance consists of a bar suspended from its middle by a thin fiber. The fiber acts as a very weak torsion spring. In Coulomb's experiment, the torsion balance was an insulating rod with a metal-coated ball attached to one end, suspended by a silk thread. The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through a certain angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls and derive his inverse-square proportionality law.
== Mathematical form ==
Coulomb's law states that the electrostatic force F_1 experienced by a charge q_1 at position r_1, in the vicinity of another charge q_2 at position r_2, in a vacuum is equal to
{\displaystyle \mathbf {F} _{1}={\frac {q_{1}q_{2}}{4\pi \varepsilon _{0}}}{{\hat {\mathbf {r} }}_{12} \over {|\mathbf {r} _{12}|}^{2}}}
where r_12 = r_1 − r_2 is the displacement vector between the charges, r̂_12 is a unit vector pointing from q_2 to q_1, and ε_0 is the electric constant. Here, r̂_12 is used for the vector notation. The electrostatic force F_2 experienced by q_2, according to Newton's third law, is F_2 = −F_1.
If both charges have the same sign (like charges) then the product q_1 q_2 is positive and the direction of the force on q_1 is given by r̂_12; the charges repel each other. If the charges have opposite signs then the product q_1 q_2 is negative and the direction of the force on q_1 is −r̂_12; the charges attract each other.
=== System of discrete charges ===
The law of superposition allows Coulomb's law to be extended to include any number of point charges. The force acting on a point charge due to a system of point charges is simply the vector addition of the individual forces acting alone on that point charge due to each one of the charges. The resulting force vector is parallel to the electric field vector at that point, with that point charge removed.
Force F on a small charge q at position r, due to a system of n discrete charges in vacuum is
{\displaystyle \mathbf {F} (\mathbf {r} )={q \over 4\pi \varepsilon _{0}}\sum _{i=1}^{n}q_{i}{{\hat {\mathbf {r} }}_{i} \over {|\mathbf {r} _{i}|}^{2}},}
where q_i is the magnitude of the ith charge, r_i is the vector from its position to r and r̂_i is the unit vector in the direction of r_i.
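A direct numerical evaluation of this superposition sum might look as follows (an illustrative sketch in SI units; the function name and example values are assumptions, not part of the article):

```python
import numpy as np

EPS0 = 8.8541878128e-12                 # vacuum permittivity, F/m
K_E = 1.0 / (4.0 * np.pi * EPS0)        # Coulomb constant, N m^2 C^-2

def coulomb_force(q, r, charges):
    """Net force (N) on charge q (C) at position r (m) from point charges,
    given as (q_i, r_i) pairs, by superposition of Coulomb's law."""
    r = np.asarray(r, dtype=float)
    force = np.zeros(3)
    for qi, ri in charges:
        d = r - np.asarray(ri, dtype=float)      # vector from source i to q
        force += K_E * q * qi * d / np.linalg.norm(d) ** 3
    return force

# Two 1 uC charges 1 m apart repel with about 8.99e-3 N.
print(coulomb_force(1e-6, [1.0, 0.0, 0.0], [(1e-6, [0.0, 0.0, 0.0])]))
```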
=== Continuous charge distribution ===
In this case, the principle of linear superposition is also used. For a continuous charge distribution, an integral over the region containing the charge is equivalent to an infinite summation, treating each infinitesimal element of space as a point charge dq. The distribution of charge is usually linear, surface or volumetric.
For a linear charge distribution (a good approximation for charge in a wire) where λ(r′) gives the charge per unit length at position r′, and dℓ′ is an infinitesimal element of length,
{\displaystyle dq'=\lambda (\mathbf {r'} )\,d\ell '.}
For a surface charge distribution (a good approximation for charge on a plate in a parallel plate capacitor) where σ(r′) gives the charge per unit area at position r′, and dA′ is an infinitesimal element of area,
{\displaystyle dq'=\sigma (\mathbf {r'} )\,dA'.}
For a volume charge distribution (such as charge within a bulk metal) where ρ(r′) gives the charge per unit volume at position r′, and dV′ is an infinitesimal element of volume,
{\displaystyle dq'=\rho ({\boldsymbol {r'}})\,dV'.}
The force on a small test charge q at position r in vacuum is given by the integral over the distribution of charge
{\displaystyle \mathbf {F} (\mathbf {r} )={\frac {q}{4\pi \varepsilon _{0}}}\int dq'{\frac {\mathbf {r} -\mathbf {r'} }{|\mathbf {r} -\mathbf {r'} |^{3}}}.}
The "continuous charge" version of Coulomb's law is never supposed to be applied to locations for which
|
r
−
r
′
|
=
0
{\displaystyle |\mathbf {r} -\mathbf {r'} |=0}
because that location would directly overlap with the location of a charged particle (e.g. electron or proton) which is not a valid location to analyze the electric field or potential classically. Charge is always discrete in reality, and the "continuous charge" assumption is just an approximation that is not supposed to allow
|
r
−
r
′
|
=
0
{\displaystyle |\mathbf {r} -\mathbf {r'} |=0}
to be analyzed.
== Coulomb constant ==
The constant of proportionality, 1/(4πε_0), in Coulomb's law:
{\displaystyle \mathbf {F} _{1}={\frac {q_{1}q_{2}}{4\pi \varepsilon _{0}}}{{\hat {\mathbf {r} }}_{12} \over {|\mathbf {r} _{12}|}^{2}}}
is a consequence of historical choices for units.: 4–2
The constant ε_0 is the vacuum electric permittivity. Using the CODATA 2022 recommended value for ε_0, the Coulomb constant is
{\displaystyle k_{\text{e}}={\frac {1}{4\pi \varepsilon _{0}}}=8.987\ 551\ 7862(14)\times 10^{9}\ \mathrm {N{\cdot }m^{2}{\cdot }C^{-2}} }
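For a quick sense of scale (an illustrative calculation, not from the sources above), two point charges of 1 C separated by 1 m would repel with a force of roughly
{\displaystyle F=k_{\text{e}}{\frac {(1\ \mathrm {C} )^{2}}{(1\ \mathrm {m} )^{2}}}\approx 8.99\times 10^{9}\ \mathrm {N} ,}
which is why everyday charges are measured in microcoulombs rather than coulombs.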
== Limitations ==
There are three conditions to be fulfilled for the validity of Coulomb's inverse square law:
The charges must have a spherically symmetric distribution (e.g. be point charges, or a charged metal sphere).
The charges must not overlap (e.g. they must be distinct point charges).
The charges must be stationary with respect to a nonaccelerating frame of reference.
The last of these is known as the electrostatic approximation. When movement takes place, an extra factor is introduced, which alters the force produced on the two objects. This extra part of the force is called the magnetic force. For slow movement, the magnetic force is minimal and Coulomb's law can still be considered approximately correct. A more accurate approximation in this case is, however, the Weber force. When the charges are moving more quickly in relation to each other or accelerations occur, Maxwell's equations and Einstein's theory of relativity must be taken into consideration.
== Electric field ==
An electric field is a vector field that associates to each point in space the Coulomb force experienced by a unit test charge. The strength and direction of the Coulomb force F on a charge q_t depends on the electric field E established by other charges that it finds itself in, such that F = q_t E. In the simplest case, the field is considered to be generated solely by a single source point charge. More generally, the field can be generated by a distribution of charges that contribute to the overall field by the principle of superposition.
If the field is generated by a positive source point charge q, the direction of the electric field points along lines directed radially outwards from it, i.e. in the direction that a positive point test charge q_t would move if placed in the field. For a negative point source charge, the direction is radially inwards.
The magnitude of the electric field E can be derived from Coulomb's law. By choosing one of the point charges to be the source, and the other to be the test charge, it follows from Coulomb's law that the magnitude of the electric field E created by a single source point charge Q at a certain distance from it r in vacuum is given by
{\displaystyle |\mathbf {E} |=k_{\text{e}}{\frac {|q|}{r^{2}}}}
A system of n discrete charges q_i stationed at r_i = r − r_i produces an electric field whose magnitude and direction is, by superposition
{\displaystyle \mathbf {E} (\mathbf {r} )={1 \over 4\pi \varepsilon _{0}}\sum _{i=1}^{n}q_{i}{{\hat {\mathbf {r} }}_{i} \over {|\mathbf {r} _{i}|}^{2}}}
== Atomic forces ==
Coulomb's law holds even within atoms, correctly describing the force between the positively charged atomic nucleus and each of the negatively charged electrons. This simple law also correctly accounts for the forces that bind atoms together to form molecules and for the forces that bind atoms and molecules together to form solids and liquids. Generally, as the distance between ions increases, the force of attraction, and binding energy, approach zero and ionic bonding is less favorable. As the magnitude of opposing charges increases, energy increases and ionic bonding is more favorable.
== Relation to Gauss's law ==
=== Deriving Gauss's law from Coulomb's law ===
=== Deriving Coulomb's law from Gauss's law ===
Strictly speaking, Coulomb's law cannot be derived from Gauss's law alone, since Gauss's law does not give any information regarding the curl of E (see Helmholtz decomposition and Faraday's law). However, Coulomb's law can be proven from Gauss's law if it is assumed, in addition, that the electric field from a point charge is spherically symmetric (this assumption, like Coulomb's law itself, is exactly true if the charge is stationary, and approximately true if the charge is in motion).
== In relativity ==
Coulomb's law can be used to gain insight into the form of the magnetic field generated by moving charges, since by special relativity, in certain cases the magnetic field can be shown to be a transformation of forces caused by the electric field. When no acceleration is involved in a particle's history, Coulomb's law can be assumed on any test particle in its own inertial frame, supported by symmetry arguments in solving Maxwell's equations, as shown above. Coulomb's law can be expanded to moving test particles so as to keep the same form. This assumption is supported by the Lorentz force law, which, unlike Coulomb's law, is not limited to stationary test charges. Considering the charge to be invariant of observer, the electric and magnetic fields of a uniformly moving point charge can hence be derived by the Lorentz transformation of the four-force on the test charge in the charge's frame of reference given by Coulomb's law, attributing magnetic and electric fields by their definitions given by the form of the Lorentz force. The fields hence found for uniformly moving point charges are given by:
{\displaystyle \mathbf {E} ={\frac {q}{4\pi \epsilon _{0}r^{3}}}{\frac {1-\beta ^{2}}{(1-\beta ^{2}\sin ^{2}\theta )^{3/2}}}\mathbf {r} }
{\displaystyle \mathbf {B} ={\frac {q}{4\pi \epsilon _{0}r^{3}}}{\frac {1-\beta ^{2}}{(1-\beta ^{2}\sin ^{2}\theta )^{3/2}}}{\frac {\mathbf {v} \times \mathbf {r} }{c^{2}}}={\frac {\mathbf {v} \times \mathbf {E} }{c^{2}}}}
where q is the charge of the point source, r is the position vector from the point source to the point in space, v is the velocity vector of the charged particle, β is the ratio of the speed of the charged particle to the speed of light and θ is the angle between r and v.
This form of solutions need not obey Newton's third law, as is the case in the framework of special relativity (yet without violating relativistic energy–momentum conservation). Note that the expression for the electric field reduces to Coulomb's law for non-relativistic speeds of the point charge, and that the magnetic field in the non-relativistic limit (approximating β ≪ 1) can be applied to electric currents to get the Biot–Savart law. These solutions, when expressed in retarded time, also correspond to the general solution of Maxwell's equations given by solutions of the Liénard–Wiechert potential, due to the validity of Coulomb's law within its specific range of application. Also note that the spherical symmetry for Gauss's law on stationary charges is not valid for moving charges, owing to the breaking of symmetry by the specification of the direction of velocity in the problem. Agreement with Maxwell's equations can also be manually verified for the above two equations.
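The two expressions above can be evaluated numerically; the following is an illustrative sketch (SI units; the function name and the sample velocity are assumptions), valid for a uniformly moving charge with v ≠ 0:

```python
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C = 299792458.0           # speed of light, m/s

def moving_charge_fields(q, r_vec, v_vec):
    """E and B of a uniformly moving point charge, from the formulas above.
    r_vec points from the charge's present position to the field point."""
    r_vec, v_vec = np.asarray(r_vec, float), np.asarray(v_vec, float)
    r, v = np.linalg.norm(r_vec), np.linalg.norm(v_vec)
    beta = v / C
    cos_t = np.dot(r_vec, v_vec) / (r * v)        # cos(theta)
    sin2_t = 1.0 - cos_t ** 2
    factor = (1 - beta**2) / (1 - beta**2 * sin2_t) ** 1.5
    E = q * factor * r_vec / (4 * np.pi * EPS0 * r**3)
    B = np.cross(v_vec, E) / C**2
    return E, B

E, B = moving_charge_fields(1e-6, [0.0, 1.0, 0.0], [0.3 * C, 0.0, 0.0])
```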
== Coulomb potential ==
=== Quantum field theory ===
The Coulomb potential admits continuum states (with E > 0), describing electron-proton scattering, as well as discrete bound states, representing the hydrogen atom. It can also be derived within the non-relativistic limit between two charged particles, as follows:
Under the Born approximation, in non-relativistic quantum mechanics, the scattering amplitude A(|p⟩ → |p′⟩) is:
{\displaystyle {\mathcal {A}}(|\mathbf {p} \rangle \to |\mathbf {p} '\rangle )-1=2\pi \delta (E_{p}-E_{p'})(-i)\int d^{3}\mathbf {r} \,V(\mathbf {r} )e^{-i(\mathbf {p} -\mathbf {p} ')\mathbf {r} }}
This is to be compared to the expression:
{\displaystyle \int {\frac {d^{3}k}{(2\pi )^{3}}}e^{ikr_{0}}\langle p',k|S|p,k\rangle }
where we look at the (connected) S-matrix entry for two electrons scattering off each other, treating one with "fixed" momentum as the source of the potential, and the other scattering off that potential.
Using the Feynman rules to compute the S-matrix element, we obtain in the non-relativistic limit with m_0 ≫ |p|
{\displaystyle \langle p',k|S|p,k\rangle |_{conn}=-i{\frac {e^{2}}{|\mathbf {p} -\mathbf {p} '|^{2}-i\varepsilon }}(2m)^{2}\delta (E_{p,k}-E_{p',k})(2\pi )^{4}\delta (\mathbf {p} -\mathbf {p} ')}
Comparing with the QM scattering, we have to discard the (2m)^2 factor, which arises due to differing normalizations of momentum eigenstates in QFT compared to QM, and obtain:
{\displaystyle \int V(\mathbf {r} )e^{-i(\mathbf {p} -\mathbf {p} ')\mathbf {r} }d^{3}\mathbf {r} ={\frac {e^{2}}{|\mathbf {p} -\mathbf {p} '|^{2}-i\varepsilon }}}
where Fourier transforming both sides, solving the integral and taking ε → 0 at the end will yield
{\displaystyle V(r)={\frac {e^{2}}{4\pi r}}}
as the Coulomb potential.
However, the equivalent results of the classical Born derivations for the Coulomb problem are thought to be strictly accidental.
The Coulomb potential, and its derivation, can be seen as a special case of the Yukawa potential, which is the case where the exchanged boson – the photon – has no rest mass.
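As a sketch of that last step (a standard textbook calculation, not reproduced from the sources above), inverting the Fourier transform with a photon mass m and then letting m → 0 gives
{\displaystyle V(r)=\int {\frac {d^{3}k}{(2\pi )^{3}}}\,e^{i\mathbf {k} \cdot \mathbf {r} }\,{\frac {e^{2}}{|\mathbf {k} |^{2}+m^{2}}}={\frac {e^{2}}{4\pi r}}e^{-mr}\rightarrow {\frac {e^{2}}{4\pi r}}\quad (m\to 0),}
which exhibits the Coulomb potential as the massless limit of the Yukawa potential.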
== Verification ==
It is possible to verify Coulomb's law with a simple experiment. Consider two small spheres of mass m and same-sign charge q, hanging from two ropes of negligible mass of length l. The forces acting on each sphere are three: the weight mg, the rope tension T and the electric force F. In the equilibrium state:
T sin θ_1 = F_1   (1)
and
T cos θ_1 = mg   (2)
Dividing (1) by (2):
tan θ_1 = F_1 / (mg)   (3)
Let L_1 be the distance between the charged spheres; the repulsion force between them F_1, assuming Coulomb's law is correct, is equal to
F_1 = q^2 / (4πε_0 L_1^2)
so:
q^2 / (4πε_0 L_1^2) = mg tan θ_1   (4)
If we now discharge one of the spheres, and we put it in contact with the charged sphere, each one of them acquires a charge q/2. In the equilibrium state, the distance between the charges will be L_2 < L_1 and the repulsion force between them will be:
We know that F_2 = mg tan θ_2 and:
{\displaystyle {\frac {\frac {q^{2}}{4}}{4\pi \varepsilon _{0}L_{2}^{2}}}=mg\tan \theta _{2}}   (5)
Dividing (4) by (5), we get:
tan θ_1 / tan θ_2 = 4 L_2^2 / L_1^2   (6)
Measuring the angles θ_1 and θ_2 and the distances between the charges L_1 and L_2 is sufficient to verify that the equality is true, taking into account the experimental error. In practice, angles can be difficult to measure, so if the length of the ropes is sufficiently great, the angles will be small enough to make the following approximation:
tan θ ≈ sin θ = (L/2) / l
Using this approximation, the relationship (6) becomes the much simpler expression:
L_1^3 ≈ 4 L_2^3, i.e. L_1 / L_2 ≈ ∛4
In this way, the verification is limited to measuring the distance between the charges and checking that the division approximates the theoretical value.
== See also ==
== References ==
Spavieri, G., Gillies, G. T., & Rodriguez, M. (2004). Physical implications of Coulomb’s Law. Metrologia, 41(5), S159–S170. doi:10.1088/0026-1394/41/5/s06
== Related reading ==
Coulomb, Charles Augustin (1788) [1785]. "Premier mémoire sur l'électricité et le magnétisme". Histoire de l'Académie Royale des Sciences. Imprimerie Royale. pp. 569–577.
Coulomb, Charles Augustin (1788) [1785]. "Second mémoire sur l'électricité et le magnétisme". Histoire de l'Académie Royale des Sciences. Imprimerie Royale. pp. 578–611.
Coulomb, Charles Augustin (1788) [1785]. "Troisième mémoire sur l'électricité et le magnétisme". Histoire de l'Académie Royale des Sciences. Imprimerie Royale. pp. 612–638.
Griffiths, David J. (1999). Introduction to Electrodynamics (3rd ed.). Prentice Hall. ISBN 978-0-13-805326-0.
Tamm, Igor E. (1979) [1976]. Fundamentals of the Theory of Electricity (9th ed.). Moscow: Mir. pp. 23–27.
Tipler, Paul A.; Mosca, Gene (2008). Physics for Scientists and Engineers (6th ed.). New York: W. H. Freeman and Company. ISBN 978-0-7167-8964-2. LCCN 2007010418.
Young, Hugh D.; Freedman, Roger A. (2010). Sears and Zemansky's University Physics: With Modern Physics (13th ed.). Addison-Wesley (Pearson). ISBN 978-0-321-69686-1.
== External links ==
Coulomb's Law on Project PHYSNET
Electricity and the Atom Archived 2009-02-21 at the Wayback Machine—a chapter from an online textbook
A maze game for teaching Coulomb's law—a game created by the Molecular Workbench software
Electric Charges, Polarization, Electric Force, Coulomb's Law Walter Lewin, 8.02 Electricity and Magnetism, Spring 2002: Lecture 1 (video). MIT OpenCourseWare. License: Creative Commons Attribution-Noncommercial-Share Alike. | Wikipedia/Electric_force |
The Annual Review of Astronomy and Astrophysics is an annual peer-reviewed scientific journal published by Annual Reviews. The co-editors are Ewine van Dishoeck and Robert C. Kennicutt. The journal reviews scientific literature pertaining to local and distant celestial entities throughout the observable universe, as well as cosmology, instrumentation, techniques, and the history of developments. It was established in 1963. As of 2023, it is being published as open access, under the Subscribe to Open model.
== History ==
In November 1960, the board of directors of the nonprofit publisher Annual Reviews began investigating the need for a new journal of review articles that covered developments in astronomy and astrophysics. The board consulted an advisory group of experts, including Ronald Bracewell, Robert Jastrow, Joseph Kaplan, Paul Merrill, Otto Struve, and Harold Urey. The editorial committee met in August 1961 to determine the authors and topics for the first volume, which was published in 1963. As of 2020, it was published both in print and electronically.
== Scope and indexing ==
The Annual Review of Astronomy and Astrophysics defines its scope as covering significant developments in astronomy and astrophysics, including the Sun, the Solar System, exoplanets, stars, the interstellar medium, the Milky Way and other galaxies, galactic nuclei, cosmology, and the instrumentation and techniques used for research and analysis. As of 2024, Journal Citation Reports gives the journal an impact factor of 26.3, ranking it second out of 84 journals in the category "Astronomy and Astrophysics". It is abstracted and indexed in Scopus, Science Citation Index Expanded, Civil Engineering Abstracts, Inspec, and Academic Search, among others.
== Editorial processes ==
The Annual Review of Astronomy and Astrophysics is led by the editor or co-editors. They are assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee.
=== Editors of volumes ===
Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death.
Leo Goldberg (1963–1973)
Geoffrey Burbidge (1974–2004)
Roger Blandford (2005–2011)
Ewine van Dishoeck and Sandra Faber (2012–2021)
Ewine van Dishoeck and Robert C. Kennicutt (present)
=== Current editorial committee ===
As of 2025, the editorial committee consists of the co-editors and the following members:
Previous members (as of 2022) include: Joss Bland-Hawthorn and Eliot Quataert.
== See also ==
List of astronomy journals
== References == | Wikipedia/Annual_Review_of_Astronomy_and_Astrophysics |
The Cryogenic Low-Energy Astrophysics with Noble liquids (CLEAN) experiment by the DEAP/CLEAN collaboration is searching for dark matter using noble gases at the SNOLAB underground facility. CLEAN has studied neon and argon in the MicroCLEAN prototype, and is running the MiniCLEAN detector to test a multi-ton design.
== Design ==
Dark matter searches in isolated noble gas scintillators with xenon and argon have set limits on WIMP interactions, such as recent cross sections from LUX and XENON. Particles scattering in the target emit photons detected by PMTs, identified via pulse shape discrimination developed on DEAP results. Shielding reduces the cosmic and radiation background. Neon has been studied as a clear, dense, low-background scintillator. CLEAN can use neon or argon and plans runs with both to study nuclear mass dependence of any WIMP signals.
== Status ==
The MiniCLEAN detector will operate with argon in 2014. It will have 500 kg of noble cryogen in a spherical steel vessel with 92 PMTs shielded in a water tank with muon rejection.
== References == | Wikipedia/Cryogenic_Low-Energy_Astrophysics_with_Neon |
High Energy Stereoscopic System (H.E.S.S.) is a system of imaging atmospheric Cherenkov telescopes (IACTs) for the investigation of cosmic gamma rays in the photon energy range of 0.03 to 100 TeV. The acronym was chosen in honour of Victor Hess, who was the first to observe cosmic rays.
The name also emphasizes two main features of the installation, namely the simultaneous observation of air showers with several telescopes, under different viewing angles, and the combination of telescopes to a large system to increase the effective detection area for gamma rays. H.E.S.S. permits the exploration of gamma-ray sources with intensities at a level of a few thousandth parts of the flux of the Crab Nebula.
H.E.S.S. consists of five telescopes: four with mirrors just under 12 m in diameter, arranged as a square with 120 m sides, and one larger telescope with a 28 m mirror, located at the centre of the array. The four 12 m telescopes began operation in 2004, with the 28 m telescope added as an upgrade (called H.E.S.S. II) in 2012.
As with other gamma-ray telescopes, H.E.S.S. observes high energy processes in the universe. Gamma-ray producing sources include supernova remnants, active galactic nuclei and pulsar wind nebulae. It also actively tests unproven theories in physics such as looking for the predicted gamma-ray annihilation signal from WIMP dark matter particles and testing Lorentz invariance predictions of loop quantum gravity.
H.E.S.S. is located in the Khomas highlands of Namibia near the Gamsberg mountain, an area well known for its excellent optical quality. The first of the four telescopes of Phase I of the H.E.S.S. project went into operation in Summer 2002; all four were operational in December 2003.
In 2004 H.E.S.S. was the first IACT experiment to spatially resolve a source of cosmic gamma rays.
In 2005, it was announced that H.E.S.S. had detected eight new high-energy gamma ray sources, doubling the known number of such sources. As of 2014, more than 90 sources of teraelectronvolt gamma rays were discovered by H.E.S.S.
In 2016, the HESS collaboration reported deep gamma-ray observations showing the presence of petaelectronvolt protons originating from Sagittarius A*, the supermassive black hole at the centre of the Milky Way, which therefore should be considered a viable alternative to supernova remnants as a source of petaelectronvolt galactic cosmic rays.
== See also ==
Werner Hofmann (physicist)
Major Atmospheric Cerenkov Experiment Telescope
== References ==
== External links ==
High Energy Stereoscopic System Project (H.E.S.S.) on the internet
Nature: High energy particle acceleration in the shell of a supernova remnant
Science: A new population of very high energy gamma-ray sources in the Milky Way
New Scientist: Number of very high-energy gamma ray sources doubles
Aspera European network portal
HESS experiment record on INSPIRE-HEP | Wikipedia/High_Energy_Stereoscopic_System |
Astrophysics is a peer-reviewed scientific journal of astrophysics published by Springer. A volume is published every three months. It was founded in 1965 by the Soviet Armenian astrophysicist Viktor Ambartsumian. It is the English version of the journal Astrofizika, published by the Armenian National Academy of Sciences mostly in Russian. The current editor-in-chief is Arthur Nikoghossian.
== Aims and scope ==
The journal focuses on astronomy and astrophysics and is a translation of the peer-reviewed Russian-language journal Astrofizika.
== Abstracting and indexing ==
Astrophysics is indexed in the following databases:
== See also ==
List of astronomy journals
== References ==
== External links ==
Official website
astro.asj-oa.am | Wikipedia/Astrophysics_(journal) |
Stellar dynamics is the branch of astrophysics which describes in a statistical way the collective motions of stars subject to their mutual gravity. The essential difference from celestial mechanics is that the number of bodies N ≫ 10.
Typical galaxies contain upwards of millions of macroscopic gravitating bodies and a countless number of neutrinos and perhaps other dark microscopic bodies. Also, each star contributes more or less equally to the total gravitational field, whereas in celestial mechanics the pull of a massive body dominates any satellite orbits.
== Connection with fluid dynamics ==
Stellar dynamics also has connections to the field of plasma physics. The two fields underwent significant development during a similar time period in the early 20th century, and both borrow mathematical formalism originally developed in the field of fluid mechanics.
In accretion disks and stellar surfaces, the dense plasma or gas particles collide very frequently, and collisions result in equipartition and perhaps viscosity under magnetic field. We see various sizes for accretion disks and stellar atmospheres, both made of an enormous number of microscopic particles, with
(L/V, M/N) ∼ (10^−8 pc / 500 km/s, 1 M_⊙ / 10^55 = m_p) at stellar surfaces,
∼ (10^−4 pc / 10 km/s, 0.1 M_⊙ / 10^54 ∼ m_p) around Sun-like stars or km-sized stellar black holes,
∼ (10^−1 pc / 100 km/s, 10 M_⊙ / 10^56 ∼ m_p) around million-solar-mass black holes (about AU-sized) in centres of galaxies.
The system crossing time scale is long in stellar dynamics, where it is handy to note that
{\displaystyle 1000{\text{pc}}/1{\text{km/s}}=1000{\text{Myr}}={\text{HubbleTime}}/14.}
The long timescale means that, unlike gas particles in accretion disks, stars in galaxy disks very rarely see a collision in their stellar lifetime. However, galaxies collide occasionally in galaxy clusters, and stars have close encounters occasionally in star clusters.
As a rule of thumb, the typical scales concerned (see the Upper Portion of P.C. Budassi's Logarithmic Map of the Universe) are
(L/V, M/N) ∼ (10 pc / 10 km/s, 1000 M_⊙ / 1000) for the M13 star cluster,
∼ (100 kpc / 100 km/s, 10^11 M_⊙ / 10^11) for the M31 disk galaxy,
∼ (10 Mpc / 1000 km/s, 10^14 M_⊙ / 10^77 = m_ν) for neutrinos in the Bullet Cluster, which is a merging system of N = 1000 galaxies.
== Connection with Kepler problem and 3-body problem ==
At a superficial level, all of stellar dynamics might be formulated as an N-body problem
by Newton's second law, where the equation of motion (EOM) for internal interactions of an isolated stellar system of N members can be written down as,
{\displaystyle m_{i}{\frac {d^{2}\mathbf {r_{i}} }{dt^{2}}}=\sum _{j=1 \atop j\neq i}^{N}{\frac {Gm_{i}m_{j}\left(\mathbf {r} _{j}-\mathbf {r} _{i}\right)}{\left\|\mathbf {r} _{j}-\mathbf {r} _{i}\right\|^{3}}}.}
Here in the N-body system, any individual member m_i is influenced by the gravitational potentials of the remaining m_j members.
In practice, except for in the highest performance computer simulations, it is not feasible to calculate rigorously the future of a large N system this way. Also this EOM gives very little intuition. Historically, the methods utilised in stellar dynamics originated from the fields of both classical mechanics and statistical mechanics. In essence, the fundamental problem of stellar dynamics is the N-body problem, where the N members refer to the members of a given stellar system. Given the large number of objects in a stellar system, stellar dynamics can address both the global, statistical properties of many orbits as well as the specific data on the positions and velocities of individual orbits.
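For small N, the direct-summation EOM above can of course be integrated numerically; the following is an illustrative sketch of the pairwise acceleration sum (the unit system, G value and function name are choices made here, not prescriptions from the text):

```python
import numpy as np

def nbody_accelerations(masses, positions, G=4.30091e-3):
    """O(N^2) direct-summation accelerations for the N-body EOM above.
    Units: pc, solar masses, km/s, with G = 4.30091e-3 pc (km/s)^2 / Msun,
    so the returned accelerations are in (km/s)^2 per pc."""
    n = len(masses)
    acc = np.zeros((n, 3))
    for i in range(n):
        for j in range(n):
            if i != j:
                d = positions[j] - positions[i]
                acc[i] += G * masses[j] * d / np.linalg.norm(d) ** 3
    return acc

# Example: two solar-mass stars separated by 1 pc.
print(nbody_accelerations(np.array([1.0, 1.0]),
                          np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])))
```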
== Concept of a gravitational potential field ==
Stellar dynamics involves determining the gravitational potential of a substantial number of stars. The stars can be modeled as point masses whose orbits are determined by the combined interactions with each other. Typically, these point masses represent stars in a variety of clusters or galaxies, such as a galaxy cluster or a globular cluster. Without obtaining a system's gravitational potential by adding all of the point-mass potentials in the system at every second, stellar dynamicists develop potential models that can accurately model the system while remaining computationally inexpensive. The gravitational potential, Φ, of a system is related to the acceleration and the gravitational field g by:
{\displaystyle {\frac {d^{2}\mathbf {r} _{i}}{dt^{2}}}=\mathbf {g} =-\nabla _{\mathbf {r} _{i}}\Phi (\mathbf {r} _{i}),~~\Phi (\mathbf {r} _{i})=-\sum _{k=1 \atop k\neq i}^{N}{\frac {Gm_{k}}{\left\|\mathbf {r} _{i}-\mathbf {r} _{k}\right\|}},}
whereas the potential is related to a (smoothened) mass density, ρ, via the Poisson's equation in the integral form
{\displaystyle \Phi (\mathbf {r} )=-\int {G\rho (\mathbf {R} )d^{3}\mathbf {R} \over \left\|\mathbf {r} -\mathbf {R} \right\|}}
or the more common differential form
{\displaystyle \nabla ^{2}\Phi =4\pi G\rho .}
=== An example of the Poisson Equation and escape speed in a uniform sphere ===
Consider an analytically smooth spherical potential
{\displaystyle {\begin{aligned}\Phi (r)&\equiv \left(-V_{0}^{2}\right)+\left[{r^{2}-r_{0}^{2} \over 2r_{0}^{2}},~~1-{r_{0} \over r}\right]_{\max }\!\!\!\!V_{0}^{2}\equiv \Phi (r_{0})-{V_{e}^{2}(r) \over 2},~~\Phi (r_{0})=-V_{0}^{2},\\\mathbf {g} &=-\mathbf {\nabla } \Phi (r)=-\Omega ^{2}rH(r_{0}-r)-{GM_{0} \over r^{2}}H(r-r_{0}),~~\Omega ={V_{0} \over r_{0}},~~M_{0}={V_{0}^{2}r_{0} \over G},\end{aligned}}}
where V_e(r) takes the meaning of the speed to "escape to the edge" r_0, and √2 V_0 is the speed to "escape from the edge to infinity". The gravity is like the restoring force of a harmonic oscillator inside the sphere, and Keplerian outside, as described by the Heaviside functions.
We can fix the normalisation V_0 by computing the corresponding density using the spherical Poisson Equation
{\displaystyle G\rho ={d \over 4\pi r^{2}dr}{r^{2}d\Phi \over dr}={d(GM) \over 4\pi r^{2}dr}={3V_{0}^{2} \over 4\pi r_{0}^{2}}H(r_{0}-r),}
where the enclosed mass
{\displaystyle M(r)={r^{2}d\Phi \over Gdr}=\int _{0}^{r}dr\int _{0}^{\pi }(rd\theta )\int _{0}^{2\pi }(r\sin \theta d\varphi )\rho _{0}H(r_{0}-r)=\left.M_{0}x^{3}\right|_{x={r \over r_{0}}}.}
Hence the potential model corresponds to a uniform sphere of radius r_0, total mass M_0 with
{\displaystyle {V_{0} \over r_{0}}\equiv {\sqrt {4\pi G\rho _{0} \over 3}}={\sqrt {GM_{0} \over r_{0}^{3}}}.}
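The relations above can be checked numerically; here is an illustrative sketch in natural units (G = r_0 = V_0 = 1; the function name is an assumption) that evaluates the potential, gravity and enclosed mass of the uniform sphere and compares g with −dΦ/dr:

```python
import numpy as np

def uniform_sphere(r, r0=1.0, V0=1.0, G=1.0):
    """Potential, gravity and enclosed mass of the uniform-sphere model above,
    harmonic inside r0 and Keplerian outside (Heaviside selection)."""
    M0 = V0**2 * r0 / G
    inside = r < r0
    phi = np.where(inside, -V0**2 + 0.5 * V0**2 * (r**2 - r0**2) / r0**2,
                   -G * M0 / r)
    g = np.where(inside, -V0**2 * r / r0**2, -G * M0 / r**2)
    M = np.where(inside, M0 * (r / r0) ** 3, M0)
    return phi, g, M

r = np.linspace(0.01, 3.0, 300)
phi, g, M = uniform_sphere(r)
# Away from the edge r0, g should be close to -dphi/dr; a small printed
# value indicates the two expressions are consistent.
mask = np.abs(r - 1.0) > 0.05
print(np.max(np.abs(g[mask] + np.gradient(phi, r)[mask])))
```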
=== Key concepts ===
While both the equations of motion and Poisson Equation can also take on non-spherical forms, depending on the coordinate system and the symmetry of the physical system, the essence is the same:
The motions of stars in a galaxy or in a globular cluster are principally determined by the average distribution of the other, distant stars. The infrequent stellar encounters involve processes such as relaxation, mass segregation, tidal forces, and dynamical friction that influence the trajectories of the system's members.
== Relativistic Approximations ==
There are three related approximations made in the Newtonian EOM and Poisson Equation above.
=== SR and GR ===
Firstly, the above equations neglect relativistic corrections, which are of order of (v/c)^2 ≪ 10^−4, as the typical stellar 3-dimensional speed, v ∼ 3–3000 km/s, is much below the speed of light.
=== Eddington Limit ===
Secondly, non-gravitational forces are typically negligible in stellar systems. For example, in the vicinity of a typical star the ratio of the radiation force to the gravitational force on a hydrogen atom or ion is
{\displaystyle Q^{\text{Eddington}}={{\sigma _{e} \over 4\pi m_{H}c}{L_{\odot } \over r^{2}} \over {GM_{\odot } \over r^{2}}}={1 \over 30,000},}
hence the radiation force is negligible in general, except perhaps around a luminous O-type star of mass 30 M_⊙, or around a black hole accreting gas at the Eddington limit so that its luminosity-to-mass ratio L_•/M_• is defined by Q^Eddington = 1.
=== Loss cone ===
Thirdly, a star can be swallowed if it comes within a few Schwarzschild radii of the black hole. This radius of loss is given by
{\displaystyle s\leq s_{\text{Loss}}={\frac {6GM_{\bullet }}{c^{2}}}}
The loss cone can be visualised by considering infalling particles aiming at the black hole within a small solid angle (a cone in velocity). These particles with small θ ≪ 1 have small angular momentum per unit mass
{\displaystyle J\equiv rv\sin \theta \leq J_{\text{loss}}={\frac {4GM_{\bullet }}{c}}.}
Their small angular momentum (due to the small θ) does not make a high enough barrier near s_Loss to force the particle to turn around.
The effective potential
{\displaystyle \Phi _{\text{eff}}(r)\equiv E-{{\dot {r}}^{2} \over 2}={J^{2} \over 2r^{2}}+\Phi (r),}
is always positive infinity in Newtonian gravity. However, in GR, it nosedives to minus infinity near 6GM_•/c^2 if J ≤ 4GM_•/c.
Sparing a rigorous GR treatment, one can verify this s_loss, J_loss by computing the last stable circular orbit, where the effective potential is at an inflection point
{\displaystyle \Phi ''_{\text{eff}}(s_{\text{loss}})=\Phi '_{\text{eff}}(s_{\text{loss}})=0}
using an approximate classical potential of a Schwarzschild black hole
{\displaystyle \Phi (r)=-{(4GM_{\bullet }/c)^{2} \over 2r^{2}}\left[1+{3(6GM_{\bullet }/c^{2})^{2} \over 8r^{2}}\right]-{\frac {GM_{\bullet }}{r}}\left[1-{(6GM_{\bullet }/c^{2})^{2} \over r^{2}}\right].}
== Tidal disruption radius ==
A star can be tidally torn by a heavier black hole when coming within the so-called Hill's radius of the black hole, inside which a star's surface gravity yields to the tidal force from the black hole, i.e.,
{\displaystyle (1-1.5)\geq Q^{\text{tide}}\equiv {GM_{\odot }/R_{\odot }^{2} \over [GM_{\bullet }/s_{\text{Hill}}^{2}-GM_{\bullet }/(s_{\text{Hill}}+R_{\odot })^{2}]},~~~s_{\text{Hill}}\rightarrow R_{\odot }\left({(2-3)GM_{\bullet } \over GM_{\odot }}\right)^{1 \over 3},}
For typical black holes of M_• = (10^0 – 10^8.5) M_⊙ the destruction radius is
{\displaystyle \max[s_{\text{Hill}},s_{\text{Loss}}]=400R_{\odot }\max \left[\left({M_{\bullet } \over 3\times 10^{7}M_{\odot }}\right)^{1/3},{M_{\bullet } \over 3\times 10^{7}M_{\odot }}\right]=(1-4000)R_{\odot }\ll 0.001\mathrm {pc} ,}
where 0.001 pc is the stellar spacing in the densest stellar systems (e.g., the nuclear star cluster in the Milky Way centre). Hence (main sequence) stars are generally too compact internally and spaced too far apart to be disrupted by even the strongest black hole tides in galaxy or cluster environments.
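For orientation, the two radii can be evaluated for a Sun-like star around black holes of different masses (an illustrative sketch; the order-unity factor k ≈ 2–3 and the function name are assumptions consistent with the scalings quoted above):

```python
G = 6.674e-11                      # m^3 kg^-1 s^-2
C = 2.998e8                        # m/s
MSUN, RSUN = 1.989e30, 6.957e8     # kg, m

def destruction_radii(mbh_in_msun, k=2.5):
    """Return (s_Hill, s_Loss) in solar radii for a Sun-like star."""
    mbh = mbh_in_msun * MSUN
    s_hill = RSUN * (k * mbh / MSUN) ** (1.0 / 3.0)   # tidal (Hill) radius
    s_loss = 6.0 * G * mbh / C**2                     # capture radius 6GM/c^2
    return s_hill / RSUN, s_loss / RSUN

for m in (1e2, 1e6, 3e7, 1e8):
    print(m, destruction_radii(m))   # both ~400 Rsun near 3e7 Msun
```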
== Radius of sphere of influence ==
A particle of mass m with a relative speed V will be deflected when entering the (much larger) cross section πs_•^2 of a black hole. This so-called sphere of influence is loosely defined by, up to a Q-like fudge factor √(ln Λ),
{\displaystyle 1\sim {\sqrt {\ln \Lambda }}\equiv {\frac {V^{2}/2}{G(M_{\bullet }+m)/s_{\bullet }}},}
hence for a Sun-like star we have
{\displaystyle s_{\bullet }={G(M_{\bullet }+M_{\odot }){\sqrt {\ln \Lambda }} \over V^{2}/2}\approx {M_{\bullet } \over M_{\odot }}{V_{\odot }^{2} \over V^{2}}R_{\odot }>[s_{\text{Hill}},s_{\text{Loss}}]_{max}=(1-4000)R_{\odot },}
i.e., stars will neither be tidally disrupted nor physically hit/swallowed in a typical encounter with the black hole, thanks to the high surface escape speed
{\displaystyle V_{\odot }={\sqrt {2GM_{\odot }/R_{\odot }}}=615\mathrm {km/s} }
from any solar mass star, comparable to the internal speed between galaxies in the Bullet Cluster of galaxies, and greater than the typical internal speed
{\displaystyle V\sim {\sqrt {2G(NM_{\odot })/R}}\ll \mathrm {300km/s} }
inside all star clusters and in galaxies.
== Connections between star loss cone and gravitational gas accretion physics ==
First consider a heavy black hole of mass M_• moving through a dissipational gas of (rescaled) thermal sound speed ς' and density ρ_gas; then every gas particle of mass m will likely transfer its relative momentum mV_• to the BH when coming within a cross-section of radius
{\displaystyle s_{\bullet }\equiv {(GM_{\bullet }+Gm){\sqrt {\ln \Lambda }} \over (V_{\bullet }^{2}+{\text{ς'}}^{2})/2},}
In a time scale t_fric over which the black hole loses half of its streaming velocity, its mass may double by Bondi accretion, a process of capturing most of the gas particles that enter its sphere of influence s_•, which dissipate kinetic energy by gas collisions and fall into the black hole. The gas capture rate is
{\displaystyle {M_{\bullet } \over t_{\text{Bondi}}^{gas}}={\sqrt {{\text{ς'}}^{2}+V_{\bullet }^{2}}}(\pi s_{\bullet }^{2})\rho _{\text{gas}}=4\pi \rho _{\text{gas}}\left[{(GM_{\bullet })^{2} \over ({\text{ς'}}^{2}+V_{\bullet }^{2})^{3 \over 2}}\right]\ln \Lambda ,~~{\text{ς'}}\equiv \sigma {\sqrt {1+\gamma ^{3} \over 2(9/8)^{2/3}}}\approx [{\text{ς}},\gamma \sigma ]_{\text{max}},}
where the polytropic index γ is the sound speed in units of velocity dispersion squared, and the rescaled sound speed ς' allows us to match the Bondi spherical accretion rate,
{\displaystyle {\dot {M}}_{\bullet }\approx \pi \rho _{\text{gas}}{\text{ς}}\left[{(GM_{\bullet }) \over {\text{ς}}^{2}}\right]^{2}}
for the adiabatic gas γ = 5/3, compared to
{\displaystyle {\dot {M}}_{\bullet }\approx 4\pi \rho _{\text{gas}}{\text{ς}}\left[{(GM_{\bullet }) \over {\text{ς}}^{2}}\right]^{2}}
of the isothermal case γ = 1.
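A minimal numerical sketch of the gas capture rate above (SI units; the function name and the default ln Λ = 1 are assumptions made here) is:

```python
import numpy as np

def gas_capture_rate(mbh, rho_gas, sigma, vbh, gamma=5.0/3.0, lnL=1.0):
    """dM/dt = 4 pi rho (G M)^2 lnL / (cs'^2 + V^2)^(3/2) in kg/s,
    with the rescaled sound speed cs' built from sigma and gamma as above."""
    G = 6.674e-11
    cs = sigma * np.sqrt((1 + gamma**3) / (2 * (9.0 / 8.0) ** (2.0 / 3.0)))
    return 4 * np.pi * rho_gas * (G * mbh) ** 2 * lnL / (cs**2 + vbh**2) ** 1.5

# Example call: a ~1e6 solar-mass BH (2e36 kg) in diffuse gas.
print(gas_capture_rate(mbh=2e36, rho_gas=1e-20, sigma=1e5, vbh=1e5))  # kg/s
```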
Coming back to star tidal disruption and star capture by a (moving) black hole, setting ln Λ = 1, we could summarise the BH's growth rate from gas and stars, M_•/t_Bondi^gas + M_•/t_loss^*, with
{\displaystyle {\dot {M}}_{\bullet }={\sqrt {{\text{ς'}}^{2}+V_{\bullet }^{2}}}mn(\pi s_{\bullet }^{2},\pi s_{\text{Hill}}^{2},\pi s_{\text{Loss}}^{2})_{\text{max}},~~s_{\bullet }\approx {(GM_{\bullet }+Gm) \over (V_{\bullet }^{2}+{\text{ς'}}^{2})/2},}
because the black hole consumes a fraction of, or most of, the star/gas particles passing within its sphere of influence.
== Gravitational dynamical friction ==
Consider the case that a heavy black hole of mass M_• moves relative to a background of stars in random motion in a cluster of total mass (N M_⊙) with a mean number density n ∼ (N−1)/(4πR^3/3) within a typical size R.
Intuition says that gravity causes the light bodies to accelerate and gain momentum and kinetic energy (see slingshot effect). By conservation of energy and momentum, we may conclude that the heavier body will be slowed by an amount to compensate. Since there is a loss of momentum and kinetic energy for the body under consideration, the effect is called dynamical friction.
After a certain relaxation time, the heavy black hole's kinetic energy should be in equipartition with that of the less-massive background objects. The slow-down of the black hole can be described as
{\displaystyle -{M_{\bullet }{\dot {V}}_{\bullet }}={M_{\bullet }V_{\bullet } \over t_{\text{fric}}^{\text{star}}},}
where {\displaystyle t_{\text{fric}}^{\text{star}}} is called a dynamical friction time.
=== Dynamical friction time vs Crossing time in a virialised system ===
Consider a Mach-1 BH, which travels initially at the sound speed {\displaystyle {\text{ς}}=V_{0}}, hence its Bondi radius {\displaystyle s_{\bullet }} satisfies
{\displaystyle {GM_{\bullet }{\sqrt {\ln \Lambda }} \over s_{\bullet }}=V_{0}^{2}={\text{ς}}^{2}={0.4053GM_{\odot }(N-1) \over R},}
where the sound speed is {\displaystyle {\text{ς}}={\sqrt {4GM_{\odot }(N-1) \over \pi ^{2}R}}}
with the prefactor {\displaystyle {4 \over \pi ^{2}}\approx {4 \over 10}=0.4} fixed by the fact that for a uniform spherical cluster of mass density {\displaystyle \rho =nM_{\odot }\approx {M_{\odot }(N-1) \over 4.19R^{3}}}, half of a circular period is the time for "sound" to make a one-way crossing along its longest dimension, i.e.,
{\displaystyle 2t_{\text{ς}}\equiv 2t_{\text{cross}}\equiv {2R \over {\text{ς}}}=\pi {\sqrt {R^{3} \over GM_{\odot }(N-1)}}\approx (0.4244G\rho )^{-1/2}.}
It is customary to call the "half-diameter" crossing time {\displaystyle t_{\text{cross}}} the dynamical time scale.
Assume the BH stops after traveling a length of {\displaystyle l_{\text{fric}}\equiv {\text{ς}}t_{\text{fric}}} with its momentum {\displaystyle M_{\bullet }V_{0}=M_{\bullet }{\text{ς}}} deposited to {\displaystyle {M_{\bullet } \over M_{\odot }}} stars in its path over {\displaystyle l_{\text{fric}}/(2R)} crossings; then the number of stars deflected by the BH's Bondi cross-section per "diameter" crossing time is
{\displaystyle N^{\text{defl}}={({M_{\bullet } \over M_{\odot }})}{2R \over l_{\text{fric}}}=N{\pi s_{\bullet }^{2} \over \pi R^{2}}=N\left({M_{\bullet } \over 0.4053M_{\odot }N}\right)^{2}\ln \Lambda .}
More generally, the Equation of Motion of the BH at a general velocity {\displaystyle \mathbf {V} _{\bullet }} in the potential {\displaystyle \Phi } of a sea of stars can be written as
−
d
d
t
(
M
∙
V
∙
)
−
M
∙
∇
Φ
≡
(
M
∙
V
∙
)
t
fric
=
N
π
s
∙
2
π
R
2
⏞
N
defl
(
M
⊙
V
∙
)
2
t
ς
=
8
ln
Λ
′
N
t
ς
M
∙
V
∙
,
{\displaystyle -{d \over dt}(M_{\bullet }V_{\bullet })-M_{\bullet }\nabla \Phi \equiv {(M_{\bullet }V_{\bullet }) \over t_{\text{fric}}}=\overbrace {N\pi s_{\bullet }^{2} \over \pi R^{2}} ^{N^{\text{defl}}}{(M_{\odot }V_{\bullet }) \over 2t_{\text{ς}}}={8\ln \Lambda ' \over Nt_{\text{ς}}}M_{\bullet }V_{\bullet },}
where {\displaystyle {\pi ^{2} \over 8}\approx 1} and the Coulomb logarithm modifying factor
ln
Λ
′
ln
Λ
≡
[
π
2
8
]
2
[
(
1
+
V
∙
2
ς'
2
)
]
−
2
(
1
+
M
⊙
M
∙
)
≤
[
ς'
V
∙
]
4
≤
1
{\displaystyle {\ln \Lambda ' \over \ln \Lambda }\equiv \left[{\pi ^{2} \over 8}\right]^{2}\left[(1+{V_{\bullet }^{2} \over {\text{ς'}}^{2}})\right]^{-2}(1+{M_{\odot } \over M_{\bullet }})\leq \left[{{\text{ς'}} \over V_{\bullet }}\right]^{4}\leq 1}
discounts friction on a supersonic moving BH with mass {\displaystyle M_{\bullet }\geq M_{\odot }}. As a rule of thumb, it takes about one sound-crossing time {\displaystyle t_{\text{ς'}}} to "sink" subsonic BHs from the edge to the centre without overshooting, if they weigh more than 1/8th of the total cluster mass. Lighter and faster holes can stay afloat much longer.
=== More rigorous formulation of dynamical friction ===
The full Chandrasekhar dynamical friction formula for the change in velocity of the object involves integrating over the phase space density of the field of matter and is far from transparent.
It reads as
{\displaystyle {M_{\bullet }d(\mathbf {V} _{\bullet }) \over dt}=-{M_{\bullet }\mathbf {V} _{\bullet } \over t_{\text{fric}}^{\text{star}}}=-{m\mathbf {V} _{\bullet }~n(\mathbf {x} )d\mathbf {x} ^{3} \over dt}\ln \Lambda _{\text{lag}},}
where
{\displaystyle ~~n(\mathbf {x} )dx^{3}=dtV_{\bullet }(\pi s_{\bullet }^{2})n(\mathbf {x} )=dtn(\mathbf {x} )|V_{\bullet }|\pi \left[{G(m+M_{\bullet }) \over |V_{\bullet }|^{2}/2}\right]^{2}}
is the number of particles in an infinitesimal cylindrical volume of length {\displaystyle |V_{\bullet }dt|} and cross-section {\displaystyle \pi s_{\bullet }^{2}} within the black hole's sphere of influence.
Like the "Couloumb logarithm"
ln
Λ
{\displaystyle \ln \Lambda }
factors in the contribution of distant background particles, here the factor
ln
(
Λ
lag
)
{\displaystyle \ln(\Lambda _{\text{lag}})}
also
factors in the probability of finding a background slower-than-BH particle to contribute to the drag. The more particles are overtaken by the BH, the more particles drag the BH, and the greater is
{\displaystyle \ln(\Lambda _{\text{beaten}})}. Also the bigger the system, the greater is {\displaystyle \ln \Lambda }.
A background of elementary (gas or dark) particles can also induce dynamical friction, which scales with the mass density of the surrounding medium, {\displaystyle m~n}; the lower particle mass m is compensated by the higher number density n. The more massive the object, the more matter will be pulled into the wake.
Summing up the gravitational drag of both collisional gas and collisionless stars, we have
M
∙
d
(
V
∙
)
M
∙
d
t
=
−
4
π
[
G
M
∙
|
V
∙
|
]
2
V
^
∙
(
ρ
gas
ln
Λ
lag
g
a
s
+
m
n
*
ln
Λ
lag
∗
)
.
{\displaystyle M_{\bullet }{d(\mathbf {V} _{\bullet }) \over M_{\bullet }dt}=-4\pi \left[{GM_{\bullet } \over |V_{\bullet }|}\right]^{2}\mathbf {\hat {V}} _{\bullet }(\rho _{\text{gas}}\ln \Lambda _{\text{lag}}^{gas}+mn_{\text{*}}\ln \Lambda _{\text{lag}}^{*}).~~}
Here the "lagging-behind" fraction for gas and for stars are given by
ln
Λ
lag
g
a
s
(
u
)
=
ln
[
1
+
u
λ
]
1
2
[
|
1
−
u
|
λ
]
H
[
u
−
λ
−
1
]
−
H
[
1
−
λ
−
u
]
2
exp
[
u
+
λ
,
1
]
min
2
−
[
u
−
λ
,
1
]
min
2
4
λ
,
≈
ln
[
(
u
3
−
1
)
2
+
λ
3
+
u
3
−
1
1
+
λ
3
−
1
]
1
3
,
u
≡
|
V
∙
|
t
ς'
t
,
λ
≡
(
s
∙
ς'
t
)
ln
Λ
lag
∗
ln
Λ
≡
∫
0
|
m
V
∙
|
(
4
π
p
2
d
p
)
e
−
p
2
2
(
m
σ
)
2
(
2
π
m
σ
)
3
|
p
=
m
|
v
|
≈
|
V
∙
|
3
|
V
∙
|
3
+
3.45
σ
3
,
ln
Λ
=
∫
d
x
1
3
2
H
e
a
v
i
s
i
d
e
[
n
(
x
1
)
n
(
x
)
−
1
−
M
∙
N
M
⊙
]
(
s
∙
2
+
|
x
1
−
x
|
2
)
3
2
≈
ln
1
+
(
0.123
N
M
⊙
M
∙
)
2
,
{\displaystyle {\begin{aligned}\ln \Lambda _{\text{lag}}^{gas}(u)&=\ln ~{\left[{1+u \over \lambda }\right]^{1 \over 2}\left[{|1-u| \over \lambda }\right]^{H[u-\lambda -1]-H[1-\lambda -u] \over 2} \over \exp {[u+\lambda ,1]_{\min }^{2}-[u-\lambda ,1]_{\min }^{2} \over 4\lambda }},\\&\approx \ln \left[{{\sqrt {(u^{3}-1)^{2}+\lambda ^{3}}}+u^{3}-1 \over {\sqrt {1+\lambda ^{3}}}-1}\right]^{1 \over 3},~~u\equiv {|V_{\bullet }|t \over {\text{ς'}}t},~~\lambda \equiv ({s_{\bullet } \over {\text{ς'}}t})\\{\ln \Lambda _{\text{lag}}^{*} \over \ln \Lambda }&\equiv \int _{0}^{|mV_{\bullet }|}\!\!\!\!{(4\pi p^{2}dp)e^{-{p^{2} \over 2(m\sigma )^{2}}} \over ({\sqrt {2\pi }}m\sigma )^{3}}\left.\right|_{p=m|v|}\approx {|\mathbf {V} _{\bullet }|^{3} \over |\mathbf {V} _{\bullet }|^{3}+3.45\sigma ^{3}},\\\ln \Lambda &=\int {d\mathbf {x_{1}} ^{3}~2Heaviside[{n(\mathbf {x_{1}} ) \over n(\mathbf {x} )}-1-{M_{\bullet } \over NM_{\odot }}] \over (s_{\bullet }^{2}+|\mathbf {x_{1}} -\mathbf {x} |^{2})^{3 \over 2}}\approx \ln {\sqrt {1+\left({0.123NM_{\odot } \over M_{\bullet }}\right)^{2}}},\end{aligned}}}
where we have further assumed that the BH starts to move from time {\displaystyle t=0}; the gas is isothermal with sound speed {\displaystyle {\text{ς}}}; and the background stars have a (mass) density {\displaystyle mn(\mathbf {x} )} in a Maxwell distribution of momentum {\displaystyle p=mv} with a Gaussian velocity spread {\displaystyle \sigma } (called the velocity dispersion, typically {\displaystyle \sigma \leq {\text{ς}}}).
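The stellar "lagging-behind" factor above is just the fraction of a Maxwellian background moving slower than the BH; a minimal sketch (assuming only the formulas quoted above) compares the exact velocity-space integral with the |V|³/(|V|³ + 3.45σ³) fit:

```python
import math

def lag_fraction_exact(V, sigma):
    """Fraction of a Maxwellian (1-D dispersion sigma) with speed below V,
    i.e. the velocity-space integral defining ln(Lambda_lag^*)/ln(Lambda)."""
    x = V / (math.sqrt(2.0) * sigma)
    return math.erf(x) - math.sqrt(2.0 / math.pi) * (V / sigma) * math.exp(-x * x)

def lag_fraction_approx(V, sigma):
    """The fit quoted in the text: |V|^3 / (|V|^3 + 3.45 sigma^3)."""
    return V**3 / (V**3 + 3.45 * sigma**3)

for mach in (0.5, 1.0, 2.0, 4.0):
    exact, approx = lag_fraction_exact(mach, 1.0), lag_fraction_approx(mach, 1.0)
    print(f"V = {mach:3.1f} sigma : exact = {exact:.3f}, approx = {approx:.3f}")
```

The two agree at the ten-percent level over the whole range, which is all the order-of-magnitude friction estimates require.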
Interestingly, the {\displaystyle G^{2}(m+M_{\bullet })(mn(\mathbf {x} ))} dependence suggests that dynamical friction arises from the gravitational pull of the wake, which is induced by the gravitational focusing of the massive body in its two-body encounters with background objects.
We see the force is also proportional to the inverse square of the velocity at the high end, hence the fractional rate of energy loss drops rapidly at high velocities.
Dynamical friction is, therefore, unimportant for objects that move relativistically, such as photons. This can be rationalized by realizing that the faster the object moves through the media, the less time there is for a wake to build up behind it. Friction tends to be the highest at the sound barrier, where
{\displaystyle \ln \Lambda _{\text{lag}}^{gas}\left.\right|_{u=1}=\ln {{\text{ς'}}t \over s_{\bullet }}}.
== Gravitational encounters and relaxation ==
Stars in a stellar system will influence each other's trajectories due to strong and weak gravitational encounters. An encounter between two stars is defined to be strong/weak if their mutual potential energy at the closest passage is comparable/minuscule to their initial kinetic energy. Strong encounters are rare, and they are typically only considered important in dense stellar systems, e.g., a passing star can be sling-shot out by binary stars in the core of a globular cluster. This means that two stars need to come within a separation,
{\displaystyle s_{*}={GM_{\odot }+GM_{\odot } \over V^{2}/2}={2 \over 1.5}{GM_{\odot } \over {\text{ς}}^{2}}={3.29R \over N-1},}
where we used the Virial Theorem, "mutual potential energy balances twice kinetic energy on average", i.e., "the pairwise potential energy per star balances with twice kinetic energy associated with the sound speed in three directions",
{\displaystyle 1\sim Q^{\text{virial}}\equiv {\overbrace {2K} ^{(NM_{\odot })V^{2}} \over |W|}={NM_{\odot }{\text{ς}}^{2}+NM_{\odot }{\text{ς}}^{2}+NM_{\odot }{\text{ς}}^{2} \over {N(N-1) \over 2}{GM_{\odot }^{2} \over R_{pair}}},}
where the factor {\displaystyle N(N-1)/2} is the number of handshakes between pairs of stars without double-counting, and the mean pair separation {\displaystyle R_{\text{pair}}={\pi ^{2} \over 24}R\approx 0.411234R} is only about 40% of the radius of the uniform sphere.
Note also the similarity {\displaystyle Q^{\text{virial}}\leftarrow \rightarrow {\sqrt {\ln \Lambda }}.}
=== Mean free path ===
The mean free path of strong encounters in a typical {\displaystyle (N-1)=4.19nR^{3}\gg 100} stellar system is then
{\displaystyle l_{\text{strong}}={1 \over (\pi s_{*}^{2})n}\approx {(N-1) \over 8.117}R\gg R,}
i.e., it takes about {\displaystyle 0.123N} radius crossings for a typical star to come within a cross-section {\displaystyle \pi s_{*}^{2}} and be deflected from its path completely. Hence the mean free time of a strong encounter is much longer than the crossing time {\displaystyle R/V}.
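A quick numerical check of the strong-encounter mean free path quoted above (a sketch; N and R are arbitrary example values):

```python
import math

def strong_encounter_mfp(N, R):
    """Mean free path against strong deflections in a virialised N-body
    system of radius R (same units as R), following the estimates above."""
    s_star = 3.29 * R / (N - 1)                  # strong-encounter radius
    n = 3.0 * (N - 1) / (4.0 * math.pi * R**3)   # mean number density
    return 1.0 / (math.pi * s_star**2 * n)

N, R = 1000, 1.0
print(strong_encounter_mfp(N, R), (N - 1) * R / 8.117)   # both ~123 R
```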
=== Weak encounters ===
Weak encounters have a more profound effect on the evolution of a stellar system over the course of many passages. The effects of gravitational encounters can be studied with the concept of relaxation time. A simple example illustrating relaxation is two-body relaxation, where a star's orbit is altered due to the gravitational interaction with another star.
Initially, the subject star travels along an orbit with initial velocity,
v
{\displaystyle \mathbf {v} }
, that is perpendicular to the impact parameter, the distance of closest approach, to the field star whose gravitational field will affect the original orbit. Using Newton's laws, the change in the subject star's velocity,
δ
v
{\displaystyle \delta \mathbf {v} }
, is approximately equal to the acceleration at the impact parameter, multiplied by the time duration of the acceleration.
The relaxation time can be thought of as the time it takes for {\displaystyle \delta \mathbf {v} } to equal {\displaystyle \mathbf {v} }
, or the time it takes for the small deviations in velocity to equal the star's initial velocity. The number of "half-diameter" crossings for an average star to relax in a stellar system of
{\displaystyle N} objects is approximately
{\displaystyle {t_{\text{relax}} \over t_{\text{ς}}}=N^{\text{relax}}\backsimeq {\frac {0.123(N-1)}{\ln(N-1)}}\gg 1}
from a more rigorous calculation than the above mean free time estimates for strong deflection.
The answer makes sense because there is no relaxation for a single body or 2-body system. A better approximation of the ratio of timescales is
{\displaystyle \left.{\frac {N'}{\ln {\sqrt {1+N'^{2}}}}}\right|_{N'=0.123(N-2)}}
, hence the relaxation times for 3-body, 4-body, 5-body, 7-body, 10-body, ..., 42-body, 72-body, 140-body, 210-body, 550-body systems are about 16, 8, 6, 4, 3, ..., 3, 4, 6, 8, 16 crossings. There is no relaxation for an isolated binary, and the relaxation is fastest for a 16-body system; it takes about 2.5 crossings for orbits to scatter each other. A system with
{\displaystyle N\sim 10^{2}-10^{10}} has a much smoother potential and typically takes {\displaystyle \sim \ln N'\approx (2-20)} weak encounters to build up a strong deflection that changes the orbital energy significantly.
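The quoted crossing numbers can be reproduced directly from the refined ratio above; a minimal sketch:

```python
import math

def relax_crossings(N):
    """Crossings needed for two-body relaxation, using the refined ratio
    N'/ln(sqrt(1 + N'^2)) with N' = 0.123 (N - 2) quoted above."""
    Np = 0.123 * (N - 2)
    return Np / math.log(math.sqrt(1.0 + Np * Np))

for N in (3, 4, 5, 7, 10, 16, 42, 72, 140, 210, 550):
    print(f"N = {N:4d} : {relax_crossings(N):5.1f} crossings")
# Reproduces ~16, 8, 6, 4, 3, 2.5 (minimum near N = 16), 3, 4, 6, 8, 16.
```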
=== Relation between friction and relaxation ===
Clearly the dynamical friction of a black hole is much faster than the relaxation, by roughly a factor {\displaystyle M_{\odot }/M_{\bullet }}, but the two timescales become very similar for a cluster of black holes,
{\displaystyle N^{\text{fric}}={t_{\text{fric}} \over t_{\text{ς}}}\rightarrow {t_{\text{relax}} \over t_{\text{ς}}}=N^{\text{relax}}\sim {(N-1) \over 10-100},~{\text{when}}~{M_{\bullet }\rightarrow m\leftarrow M_{\odot }}.}
For a star cluster or galaxy cluster with, say,
{\displaystyle N=10^{3},~R=\mathrm {1pc-10^{5}pc} ,~V=\mathrm {1km/s-10^{3}km/s} }
, we have
{\displaystyle t_{\text{relax}}\sim 100t_{\text{ς}}\approx 100\mathrm {Myr} -10\mathrm {Gyr} }
. Hence encounters of members in these stellar or galaxy clusters are significant during the typical 10 Gyr lifetime.
On the other hand, a typical galaxy with, say, {\displaystyle N=10^{6}-10^{11}} stars would have a crossing time
{\displaystyle t_{\text{ς}}\sim {1\mathrm {kpc} -100\mathrm {kpc} \over 1\mathrm {km/s} -100\mathrm {km/s} }\sim 100\mathrm {Myr} }
and their relaxation time is much longer than the age of the Universe. This justifies modelling galaxy potentials with mathematically smooth functions, neglecting two-body encounters throughout the lifetime of typical galaxies. And inside such a typical galaxy the dynamical friction and accretion on stellar black holes over a 10-Gyr Hubble time change the black hole's velocity and mass by only an insignificant fraction
{\displaystyle \Delta \sim {M_{\bullet } \over 0.1NM_{\odot }}{t \over t_{\text{ς}}}\leq {M_{\bullet } \over 0.1\%NM_{\odot }}}
if the black hole makes up less than 0.1% of the total galaxy mass {\displaystyle NM_{\odot }\sim 10^{6-11}M_{\odot }}
. Especially when {\displaystyle M_{\bullet }\sim M_{\odot }}, we see that a typical star never experiences an encounter, hence stays on its orbit in a smooth galaxy potential.
The dynamical friction or relaxation time identifies collisionless vs. collisional particle systems. Dynamics on timescales much less than the relaxation time is effectively collisionless because a typical star will deviate from its initial orbit size by only a tiny fraction {\displaystyle t/t_{\text{relax}}\ll 1}
. They are also identified as systems where subject stars interact with a smooth gravitational potential as opposed to the sum of point-mass potentials. The accumulated effects of two-body relaxation in a galaxy can lead to what is known as mass segregation, where more massive stars gather near the center of clusters, while the less massive ones are pushed towards the outer parts of the cluster.
=== A Spherical-Cow Summary of Continuity Eq. in Collisional and Collisionless Processes ===
Having gone through the details of the rather complex interactions of particles in a gravitational system, it is helpful to zoom out and extract some generic themes, at an affordable price in rigour, so we can carry on with a lighter load.
The first important concept is "gravity balancing motion", both near the perturber and for the background as a whole:
Perturber Virial
≈
G
M
∙
s
∙
≈
V
cir
2
≈
⟨
V
⟩
2
≈
⟨
V
2
⟩
¯
≈
σ
2
≈
(
R
t
ς
)
2
≈
c
ς
2
≈
G
(
N
m
)
R
≈
Background Virial
,
{\displaystyle {\text{Perturber Virial}}\approx {GM_{\bullet } \over s_{\bullet }}\approx V_{\text{cir}}^{2}\approx \langle V\rangle ^{2}\approx {\overline {\langle V^{2}\rangle }}\approx \sigma ^{2}\approx \left({R \over t_{\text{ς}}}\right)^{2}\approx c_{\text{ς}}^{2}\approx {G(Nm) \over R}\approx {\text{Background Virial}},}
by consistently omitting all factors of order unity ({\displaystyle 4\pi }, {\displaystyle \pi }, {\displaystyle \ln {\text{Λ}}}, etc.) for clarity, approximating the combined mass {\displaystyle M_{\bullet }+m\approx M_{\bullet }}, and staying deliberately vague about whether the geometry of the system is a thin/thick gas/stellar disk or a (non-)uniform stellar/dark sphere with or without a boundary, and about the subtle distinctions among the kinetic energies from the local circular rotation speed {\displaystyle V_{\text{cir}}}, the radial infall speed {\displaystyle \langle V\rangle }, the globally isotropic or anisotropic random motion {\displaystyle \sigma } in one or three directions, or the (non-)uniform isotropic sound speed {\displaystyle c_{\text{ς}}}, in order to emphasise the logic behind the order of magnitude of the friction time scale.
Second, we can very loosely summarise the various collisional and collisionless processes of gas, stars, or dark matter encountered so far by a spherical-cow-style continuity equation for any generic quantity Q of the system:
d
Q
d
t
≈
±
Q
(
l
c
ς
)
,
Q being mass M, energy E, momentum (M V), Phase density f, size R, density
N
m
4
π
3
R
3
.
.
.
,
{\displaystyle {dQ \over dt}\approx {\pm Q \over ({l \over c_{\text{ς}}})},~{\text{Q being mass M, energy E, momentum (M V), Phase density f, size R, density}}{Nm \over {4\pi \over 3}R^{3}}...,}
where the {\displaystyle \pm } sign is generally negative except for the (accreting) mass M, and the mean free path {\displaystyle l=c_{\text{ς}}t_{\text{fric}}} or the friction time {\displaystyle t_{\text{fric}}} can be due to direct molecular viscosity from a physical collision cross-section, or due to gravitational scattering (bending/focusing/slingshot) of particles; generally the influenced area is the greatest among the competing processes of Bondi accretion, tidal disruption, and loss-cone capture,
{\displaystyle s^{2}\approx \max \left[{\text{Bondi radius}}~s_{\bullet },{\text{Tidal radius}}~s_{\text{Hill}},{\text{physical size}}~s_{\text{Loss cone}}\right]^{2}.}
E.g., in case Q is the perturber's mass {\displaystyle Q=M_{\bullet }}, then we can estimate the dynamical friction time via the (gas/star) accretion rate
M
˙
∙
=
M
∙
t
fric
≈
∫
0
s
2
d
(
area
)
(
background mean flux
)
≈
s
2
(
ρ
c
ς
)
≈
Perturber influenced cross section
(
s
2
)
background system cross section
(
R
2
)
×
background mass
(
N
m
)
crossing time
t
ς
≈
R
c
ς
≈
1
G
(
N
m
)
R
3
∼
G
ρ
∼
κ
≈
G
M
∙
G
t
ς
G
M
∙
G
(
N
m
)
≈
(
ρ
c
ς
)
(
G
M
∙
c
ς
2
)
2
,
if consider only gravitationally focusing,
≈
M
∙
N
t
ς
,
if for a light perturber
M
∙
→
m
=
M
⊙
→
0
,
if practically collisionless
N
→
∞
,
{\displaystyle {\begin{aligned}{\dot {M}}_{\bullet }=&{M_{\bullet } \over t_{\text{fric}}}\approx \int _{0}^{s^{2}}d({\text{area}})~({\text{background mean flux}})\approx s^{2}(\rho c_{\text{ς}})\\\approx &{\frac {{\text{Perturber influenced cross section}}~(s^{2})}{{\text{background system cross section}}~(R^{2})}}\times {\frac {{\text{background mass}}~(Nm)}{{\text{crossing time}}~t_{\text{ς}}\approx {R \over c_{\text{ς}}}\approx {1 \over {\sqrt {G(Nm) \over R^{3}}}\sim {\sqrt {G\rho }}\sim \kappa }}}\\\approx &{GM_{\bullet } \over Gt_{\text{ς}}}{GM_{\bullet } \over G(Nm)}\approx (\rho c_{\text{ς}})\left({GM_{\bullet } \over c_{\text{ς}}^{2}}\right)^{2},~~{\text{if consider only gravitationally focusing,}}\\\approx &{M_{\bullet } \over Nt_{\text{ς}}},~~{\text{if for a light perturber}}M_{\bullet }\rightarrow m=M_{\odot }\\\rightarrow &0,~~{\text{if practically collisionless}}~~N\rightarrow \infty ,\end{aligned}}}
where we have applied the motion-balancing-gravity relations above.
In the limit that the perturber is just one of the N background particles, {\displaystyle M_{\bullet }\rightarrow m}, this friction time is identified with the (gravitational) relaxation time. Again, all Coulomb logarithms etc. are suppressed without changing the estimates from these qualitative equations.
For the rest of stellar dynamics, we will consistently work on precise calculations, primarily through worked examples, by neglecting the gravitational friction and relaxation of the perturber and working in the limit {\displaystyle N\rightarrow \infty }, which is approximately true in most galaxies over the 14-Gyr Hubble time scale, even though it is sometimes violated for some clusters of stars or clusters of galaxies.
A concise one-page summary of some main equations in stellar dynamics and accretion disc physics is shown here, where one attempts to be more rigorous about the qualitative equations above.
== Connections to statistical mechanics and plasma physics ==
The statistical nature of stellar dynamics originates from the application of the kinetic theory of gases to stellar systems by physicists such as James Jeans in the early 20th century. The Jeans equations, which describe the time evolution of a system of stars in a gravitational field, are analogous to Euler's equations for an ideal fluid, and were derived from the collisionless Boltzmann equation. This was originally developed by Ludwig Boltzmann to describe the non-equilibrium behavior of a thermodynamic system. Similarly to statistical mechanics, stellar dynamics make use of distribution functions that encapsulate the information of a stellar system in a probabilistic manner. The single particle phase-space distribution function,
{\displaystyle f(\mathbf {x} ,\mathbf {v} ,t)}, is defined in a way such that
{\displaystyle f(\mathbf {x} ,\mathbf {v} ,t)\,d\mathbf {x} \,d\mathbf {v} =dN}
where {\displaystyle dN/N} represents the probability of finding a given star with position {\displaystyle \mathbf {x} } around a differential volume {\displaystyle d\mathbf {x} } and velocity {\displaystyle {\text{v}}} around a differential velocity space volume {\displaystyle d\mathbf {v} }
. The distribution function is normalized (sometimes) such that integrating it over all positions and velocities will equal N, the total number of bodies of the system. For collisional systems, Liouville's theorem is applied to study the microstate of a stellar system, and is also commonly used to study the different statistical ensembles of statistical mechanics.
=== Convention and notation in case of a thermal distribution ===
In most of stellar dynamics literature, it is convenient to adopt the convention that the particle mass is unity in solar mass unit
{\displaystyle M_{\odot }}, hence a particle's momentum and velocity are identical, i.e.,
{\displaystyle \mathbf {p} =m\mathbf {v} =\mathbf {v} ,~m=1,~N_{\text{total}}=M_{\text{total}},}
{\displaystyle {dM \over dx^{3}dv^{3}}=f(\mathbf {x} ,\mathbf {v} ,t)=f(\mathbf {x} ,\mathbf {p} ,t)\equiv {dN \over dx^{3}dp^{3}}}
For example, the thermal velocity distribution of air molecules (of typically about 29 times the proton mass per molecule) in a room of constant temperature {\displaystyle T_{0}\sim \mathrm {300K} }
would have a Maxwell distribution
{\displaystyle f^{\text{Max}}(x,y,z,mV_{x},mV_{y},mV_{z})={1 \over (2\pi \hbar )^{3}}{1 \over \exp \left({E(x,y,z,p_{x},p_{y},p_{z})-\mu \over kT_{0}}\right)+1}}
{\displaystyle f^{\text{Max}}\sim {1 \over (2\pi \hbar /m)^{3}}e^{\mu \over kT_{0}}e^{-E \over m\sigma _{1}^{2}},}
where the energy per unit mass
{\displaystyle E/m=\Phi (x,y,z)+(V_{x}^{2}+V_{y}^{2}+V_{z}^{2})/2,}
where
{\displaystyle \Phi (x,y,z)\equiv g_{0}z=0} and {\textstyle \sigma _{1}={\sqrt {kT_{0}/m}}\sim \mathrm {0.3km/s} }
is the width of the velocity Maxwell distribution, identical in each direction and everywhere in the room, and the normalisation constant
{\displaystyle e^{\mu \over kT_{0}}} (assume the chemical potential {\textstyle \mu \sim (m\sigma _{1}^{2})\ln \left[n_{0}\left({{\sqrt {2\pi }}\hbar \over m\sigma _{1}}\right)^{3}\right]\ll 0}
such that the Fermi-Dirac distribution reduces to a Maxwell velocity distribution) is fixed by the constant gas number density
{\displaystyle n_{0}=n(x,y,0)}
at the floor level, where
n
(
x
,
y
,
0
)
=
∫
−
∞
∞
m
d
V
x
∫
−
∞
∞
m
d
V
y
∫
−
∞
∞
m
d
V
z
f
(
x
,
y
,
0
,
m
V
x
,
m
V
y
,
m
V
z
)
{\displaystyle n(x,y,0)=\!\!\int _{-\infty }^{\infty }mdV_{x}\!\!\int _{-\infty }^{\infty }mdV_{y}\!\!\int _{-\infty }^{\infty }mdV_{z}f(x,y,0,mV_{x},mV_{y},mV_{z})}
{\displaystyle n\approx {(2\pi )^{3/2}(m\sigma _{1})^{3} \over (2\pi \hbar )^{3}}e^{\mu \over m\sigma _{1}^{2}}.}
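A one-line numerical check of the quoted width of the Maxwell distribution, assuming a mean molecular mass of about 29 proton masses for air (an assumption, not a value fixed by the text):

```python
import math

K_B = 1.381e-23       # J/K
M_P = 1.673e-27       # kg

# Width of the Maxwell velocity distribution for air at room temperature.
T0 = 300.0
m = 29 * M_P          # assumed mean molecular mass of air
sigma1 = math.sqrt(K_B * T0 / m)
print(f"sigma_1 ~ {sigma1/1e3:.2f} km/s per direction")   # ~0.3 km/s
```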
=== The CBE ===
In plasma physics, the collisionless Boltzmann equation is referred to as the Vlasov equation, which is used to study the time evolution of a plasma's distribution function.
The Boltzmann equation is often written more generally with the Liouville operator {\displaystyle {\mathcal {L}}} as
{\displaystyle {\mathcal {L}}f(t,\mathbf {x} ,\mathbf {p} )={f_{\text{fit}}^{\text{Max}}-f(t,\mathbf {x} ,\mathbf {p} ) \over t_{\text{relax}}},}
{\displaystyle {\mathcal {L}}\equiv {\frac {\partial }{\partial t}}+{\frac {\mathbf {p} }{m}}\cdot \nabla +\mathbf {F} \cdot {\frac {\partial }{\partial \mathbf {p} }}\,.}
where {\displaystyle \mathbf {F} \equiv \mathbf {\dot {p}} =-m\nabla \Phi } is the gravitational force and {\displaystyle f_{\text{fit}}^{\text{Max}}} is the Maxwell (equipartition) distribution (fitted to the same density, mean velocity and rms velocity as {\displaystyle f(t,\mathbf {x} ,\mathbf {p} )}). The equation means that the non-Gaussianity will decay on a (relaxation) time scale of {\displaystyle t_{\text{relax}}}, and the system will ultimately relax to a Maxwell (equipartition) distribution.
Whereas Jeans applied the collisionless Boltzmann equation, along with Poisson's equation, to a system of stars interacting via the long range force of gravity, Anatoly Vlasov applied Boltzmann's equation with Maxwell's equations to a system of particles interacting via the Coulomb Force. Both approaches separate themselves from the kinetic theory of gases by introducing long-range forces to study the long term evolution of a many particle system. In addition to the Vlasov equation, the concept of Landau damping in plasmas was applied to gravitational systems by Donald Lynden-Bell to describe the effects of damping in spherical stellar systems.
A nice property of f(t,x,v) is that many other dynamical quantities can be formed by its moments, e.g., the total mass, local density, pressure, and mean velocity. Applying the collisionless Boltzmann equation, these moments are then related by various forms of continuity equations, of which most notable are the Jeans equations and Virial theorem.
=== Probability-weighted moments and hydrostatic equilibrium ===
Jeans computed the weighted velocity of the Boltzmann Equation after integrating over velocity space
{\displaystyle {1 \over \rho _{p}}\int \!\left\{\mathbf {v} _{p}{d[f_{p}m_{p}] \over dt}-\langle {\mathbf {v} }\rangle _{p}{d[f_{p}m_{p}] \over dt}\right\}d^{3}\mathbf {v} =0,}
and obtained the Momentum (Jeans) Eqs. of a {\displaystyle ^{p}}opulation (e.g., gas, stars, dark matter):
(
∂
∂
t
+
∑
j
=
1
3
⟨
v
j
p
⟩
∂
∂
x
j
)
⟨
v
i
p
⟩
⏞
⟨
v
⟩
˙
i
p
=
⏟
E
o
M
−
∂
Φ
(
t
,
x
)
∂
x
i
⏞
g
i
∼
O
(
−
G
M
/
R
2
)
−
⏟
balance
pressure
∑
j
=
1
3
∂
ρ
p
∂
x
j
[
ρ
p
(
t
,
x
)
⏟
∫
∞
m
p
f
p
d
3
v
σ
j
i
p
(
t
,
x
)
⏟
O
(
c
s
2
)
]
⏞
∫
∞
d
v
3
(
v
j
−
⟨
v
⟩
j
p
)
(
v
i
−
⟨
v
⟩
i
p
)
m
p
f
p
−
⟨
v
i
p
⟩
[
m
˙
p
/
m
p
]
⏞
1
/
t
|
visc
m
p
=
M
gas
fric
⏟
snow.plough
,
0
=
−
∂
Φ
(
t
,
x
)
∂
x
i
−
∂
(
n
σ
2
)
n
∂
x
i
,
hydrostatic isotropic velocity, no flow and friction
.
{\displaystyle {\begin{aligned}\overbrace {\left({\partial \over \partial t}+\sum _{j=1}^{3}\langle {v_{j}^{p}}\rangle {\partial \over \partial x_{j}}\right)\langle {v_{i}^{p}}\rangle } ^{{\dot {\langle {v}\rangle }}_{i}^{p}}&\underbrace {=} _{EoM}\overbrace {-\partial \Phi (t,\mathbf {x} ) \over \partial x_{i}} ^{g_{i}\sim O(-GM/R^{2})}~~\underbrace {-} _{\text{balance}}^{\text{pressure}}~~\sum _{j=1}^{3}{\partial \over \rho ^{p}\partial x_{j}}\overbrace {[\underbrace {\rho ^{p}(t,\mathbf {x} )} _{\int _{\infty }\!\!\!\!m_{p}f_{p}d^{3}\mathbf {v} }\underbrace {\sigma _{ji}^{p}(t,\mathbf {x} )} _{O(c_{s}^{2})}]} ^{\int \limits _{\infty }\!\!d\mathbf {v} ^{3}(\mathbf {v} _{j}-\langle {v}\rangle _{j}^{p})(\mathbf {v} _{i}-\langle {v}\rangle _{i}^{p})m_{p}f_{p}}-{\underbrace {\langle {v_{i}^{p}}\rangle \overbrace {[{\dot {m}}_{p}/m_{p}]} ^{1/t|_{{\text{visc}}~m_{p}=M_{\text{gas}}}^{\text{fric}}}} _{\text{snow.plough}}},\\0&=-{\partial \Phi (t,\mathbf {x} ) \over \partial x_{i}}-{\partial (n\sigma ^{2}) \over n\partial x_{i}},~~{\text{hydrostatic isotropic velocity, no flow and friction }}.\end{aligned}}}
The general version of the Jeans equation, involving the (3 × 3) velocity moments, is cumbersome. It only becomes useful or solvable if we can drop some of these moments, especially the off-diagonal cross terms for systems of high symmetry, and also drop the net rotation or net inflow speed everywhere.
The isotropic version is also called the hydrostatic equilibrium equation, in which the pressure gradient balances gravity; it works for axisymmetric disks as well, after replacing the derivative dr with the vertical coordinate dz. It means that we can measure the gravity (of dark matter) by observing the gradients of the velocity dispersion and the number density of stars.
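As a sketch of that last point, the isotropic hydrostatic equation can be inverted for the enclosed mass once the tracer density and dispersion profiles are observed; the power-law profile, constant dispersion, and numbers below are assumptions for illustration only:

```python
G = 6.674e-11
M_SUN = 1.989e30
KPC = 3.086e19

def enclosed_mass(r, sigma, gamma):
    """Spherical, isotropic hydrostatic equilibrium with constant dispersion
    sigma and tracer density n ~ r^(-gamma):  G M(<r)/r^2 = -d(n sigma^2)/(n dr)
    gives M(<r) = gamma * sigma^2 * r / G.  (Assumed toy profile, SI units.)"""
    return gamma * sigma**2 * r / G

# Assumed dwarf-galaxy-like numbers: sigma = 10 km/s, n ~ r^-3, r = 1 kpc.
M = enclosed_mass(1.0 * KPC, 1.0e4, 3.0)
print(f"M(<1 kpc) ~ {M / M_SUN:.2e} Msun")
```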
== Applications and examples ==
Stellar dynamics is primarily used to study the mass distributions within stellar systems and galaxies. Early examples of applying stellar dynamics to clusters include Albert Einstein's 1921 paper applying the virial theorem to spherical star clusters and Fritz Zwicky's 1933 paper applying the virial theorem specifically to the Coma Cluster, which was one of the original harbingers of the idea of dark matter in the universe. The Jeans equations have been used to understand different observational data of stellar motions in the Milky Way galaxy. For example, Jan Oort utilized the Jeans equations to determine the average matter density in the vicinity of the solar neighborhood, whereas the concept of asymmetric drift came from studying the Jeans equations in cylindrical coordinates.
Stellar dynamics also provides insight into the structure of galaxy formation and evolution. Dynamical models and observations are used to study the triaxial structure of elliptical galaxies and suggest that prominent spiral galaxies are created from galaxy mergers. Stellar dynamical models are also used to study the evolution of active galactic nuclei and their black holes, as well as to estimate the mass distribution of dark matter in galaxies.
=== A unified thick disk potential ===
Consider an oblate potential in cylindrical coordinates
Φ
(
R
,
z
)
=
G
M
0
2
z
0
[
2
sinh
−
1
Q
−
sinh
−
1
Q
+
−
sinh
−
1
Q
−
]
=
G
M
0
2
z
0
log
(
1
+
Q
2
+
Q
)
2
[
1
+
Q
+
2
+
Q
+
]
[
1
+
Q
−
2
+
Q
−
]
,
Q
±
≡
R
0
+
|
|
z
|
±
z
0
|
R
,
Q
≡
R
0
+
[
0
,
|
z
|
−
z
0
]
max
R
,
{\displaystyle {\begin{aligned}\Phi (R,z)&={GM_{0} \over 2z_{0}}\left[2\sinh ^{-1}\!\!Q-\sinh ^{-1}\!\!Q_{+}-\sinh ^{-1}\!\!Q_{-}\right]\\&={GM_{0} \over 2z_{0}}\log {({\sqrt {1+Q^{2}}}+Q)^{2} \over \left[{\sqrt {1+Q_{+}^{2}}}+Q_{+}\right]\left[{\sqrt {1+Q_{-}^{2}}}+Q_{-}\right]},\\Q_{\pm }&\equiv {R_{0}+\left|~|z|\pm z_{0}~\right| \over R},\\Q&\equiv {R_{0}+[0,|z|-z_{0}]_{\max } \over R},\\\end{aligned}}}
where {\displaystyle z_{0},R_{0}} are (positive) vertical and radial length scales.
Despite its complexity, we can easily see some limiting properties of the model.
First we can see the total mass of the system is {\displaystyle M_{0}} because
{\displaystyle \Phi (R,z)\rightarrow {GM_{0} \over 2z_{0}}(2Q_{-}-Q_{-}-Q_{+})=-{GM_{0} \over R},}
when we take the large-radii limit {\displaystyle R\rightarrow \infty ,~|z|\geq z_{0},} so that
{\displaystyle Q=Q_{-}=Q_{+}-{2z_{0} \over R}={|z|+(R_{0}-z_{0}) \over R}\rightarrow 0.}
We can also show that some special cases of this unified potential become the potential of the Kuzmin razor-thin disk, that of the point mass {\displaystyle M_{0}}, and that of a uniform-needle mass distribution:
{\displaystyle \Phi _{KM}(R,z)=-{GM_{0} \over {\sqrt {R^{2}+(|z|+R_{0})^{2}}}},~~z_{0}=0,}
{\displaystyle \Phi _{PT}(R,z)=-{GM_{0} \over {\sqrt {R^{2}+z^{2}}}},~~z_{0}=R_{0}=0,}
{\displaystyle \Phi _{UN}^{R_{0}=0}(R,z)={GM_{0} \over 2z_{0}}\left[2\sinh ^{-1}\!\!{(0,|z|-z_{0})_{\max } \over R}-\sinh ^{-1}\!\!{z_{0}+|z| \over R}-\sinh ^{-1}\!\!{\left|~z_{0}-|z|~\right| \over R}\right].}
=== A worked example of gravity vector field in a thick disk ===
First consider the vertical gravity at the boundary,
{\displaystyle g_{z}(R,z)=-\partial _{z}\Phi (R,z)=-{GM_{0}z \over 2z_{0}^{2}}\left[{1 \over {\sqrt {R_{0}^{2}+R^{2}}}}-{1 \over {\sqrt {(R_{0}+2z_{0})^{2}+R^{2}}}}\right],~~z=\pm z_{0},}
Note that both the potential and the vertical gravity are continuous across the boundaries, hence no razor disk at the boundaries.
This is thanks to the fact that, at the boundary, {\displaystyle \partial _{|z|}(2Q)-\partial _{|z|}Q_{-}=\partial _{|z|}\left(Q_{+}-{\frac {2z_{0}}{R}}\right)={1 \over R}} is continuous. Applying Gauss's theorem by integrating the vertical force over the entire upper and lower disk boundaries, we have
{\displaystyle 2\int _{0}^{\infty }(2\pi RdR)|g_{z}(R,z_{0})|=4\pi GM_{0},}
confirming that {\displaystyle M_{0}} takes the meaning of the total disk mass.
The vertical gravity drops with {\displaystyle -g_{z}\rightarrow GM_{0}z(1+R_{0}/z_{0})/R^{3}} at large radii, which is enhanced over the vertical gravity of a point mass {\displaystyle GM_{0}z/R^{3}} due to the self-gravity of the thick disk.
=== Density of a thick disk from Poisson Equation ===
Inserting into the cylindrical Poisson equation, we find
{\displaystyle \rho (R,z)={\partial _{z}\partial _{z}\Phi \over 4\pi G}+{\partial _{R}(R\partial _{R}\Phi ) \over 4\pi GR}={M_{0}R_{0}/z_{0} \over 4\pi (R^{2}+R_{0}^{2})^{3/2}}H(z_{0}-|z|),}
which drops with radius, and is zero beyond {\displaystyle |z|>z_{0}} and uniform along the z-direction within the boundary.
=== Surface density and mass of a thick disk ===
Integrating over the entire thick disc of uniform thickness {\displaystyle 2z_{0}}, we find the surface density and the total mass as
{\displaystyle \Sigma (R)=(2z_{0})\rho (R,0),~~M_{0}=\int _{0}^{\infty }(2\pi RdR)\Sigma (R).}
This confirms the absence of extra razor-thin discs at the boundaries. In the limit {\displaystyle z_{0}\rightarrow 0}, this thick disc potential reduces to that of a razor-thin Kuzmin disk, for which we can verify {\displaystyle {|g_{z}(R,0+)| \over 2\pi G}\rightarrow \Sigma (R)\rightarrow {M_{0}R_{0} \over 2\pi (R^{2}+R_{0}^{2})^{3/2}}}.
=== Oscillation frequencies in a thick disk ===
To find the vertical and radial oscillation frequencies, we do a Taylor expansion of the potential around the midplane:
{\displaystyle \Phi (R_{1},z)\approx \Phi (R,0)+{\omega ^{2}R}(R_{1}-R)+{\kappa ^{2} \over 2}(R_{1}-R)^{2}+{\nu ^{2} \over 2}z^{2}}
and we find the circular speed {\displaystyle V_{\text{cir}}} and the vertical and radial epicycle frequencies to be given by
{\displaystyle (R\omega )^{2}\equiv V_{\text{cir}}^{2}=\left[{(1+R_{0}/z_{0})GM_{0} \over {\sqrt {R^{2}+(R_{0}+z_{0})^{2}}}}-{(R_{0}/z_{0})GM_{0} \over {\sqrt {R^{2}+R_{0}^{2}}}}\right],}
{\displaystyle \nu ^{2}={GM_{0}(R_{0}/z_{0}+1) \over (R^{2}+(R_{0}+z_{0})^{2})^{3/2}},}
{\displaystyle \kappa ^{2}+\nu ^{2}-2\omega ^{2}=4\pi G\rho (R,0)={GM_{0}R_{0}/z_{0} \over (R^{2}+R_{0}^{2})^{3/2}}.}
Interestingly the rotation curve {\displaystyle V_{\text{cir}}} is solid-body-like near the centre {\displaystyle R\ll R_{0}}, and is Keplerian far away.
At large radii the three frequencies satisfy {\textstyle \left.\left[\omega ,\nu ,\kappa ,{\sqrt {4\pi G\rho }}\right]\right|_{R\to \infty }\to [1,1+R_{0}/z_{0},1,R_{0}/z_{0}]^{1 \over 2}{\sqrt {GM_{0} \over R^{3}}}}.
E.g., in the case that {\displaystyle R\to \infty } and {\displaystyle R_{0}/z_{0}=3}, the oscillations {\displaystyle \omega :\nu :\kappa =1:2:1} form a resonance.
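A sketch verifying the large-radius frequency ratios for R0/z0 = 3 from the formulas above (dimensionless units assumed):

```python
import math

G, M0, R0, z0 = 1.0, 1.0, 3.0, 1.0     # R0/z0 = 3 (assumed units)
rho0 = lambda R: M0 * R0 / z0 / (4 * math.pi * (R**2 + R0**2)**1.5)

def frequencies(R):
    """omega, nu, kappa from the thick-disk formulas above."""
    Vc2 = ((1 + R0 / z0) * G * M0 / math.sqrt(R**2 + (R0 + z0)**2)
           - (R0 / z0) * G * M0 / math.sqrt(R**2 + R0**2))
    omega2 = Vc2 / R**2
    nu2 = G * M0 * (R0 / z0 + 1) / (R**2 + (R0 + z0)**2)**1.5
    kappa2 = 4 * math.pi * G * rho0(R) + 2 * omega2 - nu2
    return math.sqrt(omega2), math.sqrt(nu2), math.sqrt(kappa2)

w, n, k = frequencies(1e3 * (R0 + z0))          # far outside the disk
print(n / w, k / w)                             # ~2 and ~1: a 1:2:1 resonance
```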
In the case that {\displaystyle R_{0}=0}, the density is zero everywhere except for a uniform needle between {\displaystyle |z|\leq z_{0}} along the z-axis.
If we further require {\displaystyle z_{0}=0}, then we recover the well-known property of closed ellipse orbits in a point-mass potential, {\displaystyle \omega :\nu :\kappa =1:1:1.}
=== A worked example for neutrinos in galaxies ===
For example, the phase space distribution function of non-relativistic neutrinos of mass m anywhere will not exceed the maximum value set by
{\displaystyle f(\mathbf {x} ,\mathbf {v} ,t)={dN \over dx^{3}dv^{3}}\leq {6 \over (2\pi \hbar /m)^{3}},~~~}
where Fermi–Dirac statistics says there are at most 6 flavours of neutrinos within a volume {\displaystyle dx^{3}} and a velocity volume {\displaystyle dv^{3}=(dp/m)^{3}=[(2\pi \hbar /dx)/m]^{3}.}
Let's approximate the distribution as being at this maximum, i.e.,
{\displaystyle f(x,y,z,V_{x},V_{y},V_{z})={6 \over (2\pi \hbar /m)^{3}}q^{\alpha \over 2},~~0\leq q(E)={\Phi _{\max }-E \over V_{0}^{2}/2}\leq 1,}
where
{\displaystyle 0\geq \Phi _{\max }\geq E=\Phi (x,y,z)+{V_{x}^{2}+V_{y}^{2}+V_{z}^{2} \over 2}\geq \Phi _{\min }\equiv \Phi _{\max }-{V_{0}^{2} \over 2}}
such that {\displaystyle E_{\min },E_{\max }} are, respectively, the potential energies at the centre and at the edge of the gravitationally bound system. The corresponding neutrino mass density, assumed spherical, would be
{\displaystyle \rho (r)=n(x,y,z)m=\int dV_{x}\int dV_{y}\int dV_{z}~m~f(x,y,z,V_{x},V_{y},V_{z}),}
which reduces to
{\displaystyle \rho (r)={C(\Phi _{\max }-\Phi (r))^{3+\alpha \over 2} \over (\Phi _{\max }-\Phi _{\min })^{\alpha \over 2}},~~~C={6m\pi 2^{5/2}B\left(1+{\alpha \over 2},{3 \over 2}\right) \over (2\pi \hbar /m)^{3}}}
Taking the simple case {\displaystyle \alpha \to 0} and estimating the density at the centre {\displaystyle r=0} with an escape speed {\displaystyle V_{0}}, we have
{\displaystyle \rho (r)\leq \rho (0)\rightarrow {m^{4}V_{0}^{3} \over \pi ^{2}\hbar ^{3}}\approx m_{\mathrm {eV} }^{4}V_{200}^{3}\times {\text{[Cosmic Critical Density]}}.}
Clearly eV-scale neutrinos with {\displaystyle m_{eV}\sim 0.1-1} are too light to make up the 100–10000 over-density in galaxies with escape velocity {\displaystyle V_{200}\equiv V/(\mathrm {200km/s} )\sim 0.1-3.4}, while neutrinos in clusters with {\displaystyle V\sim \mathrm {2000km/s} } could make up {\displaystyle 100-1000} times the cosmic background density.
By the way, the freeze-out cosmic neutrinos in your room have a non-thermal random momentum {\textstyle \sim {(\mathrm {2.7K} )k \over c}\sim (1~\mathrm {eV} /c^{2})(\mathrm {70km/s} )}, and do not follow a Maxwell distribution, and are not in thermal equilibrium with the air molecules because of the extremely low cross-section of neutrino-baryon interactions.
=== A Recap on Harmonic Motions in Uniform Sphere Potential ===
Consider building a steady-state model of the aforementioned uniform sphere of density {\displaystyle \rho _{0}} and potential {\displaystyle \Phi (r)}:
{\displaystyle {\begin{aligned}\rho (|\mathbf {r} |)&=\rho _{0}\equiv M_{\odot }n_{0},~~|\mathbf {r} |^{2}=x^{2}+y^{2}+z^{2}\leq r_{0}^{2},~~\Omega \equiv {\sqrt {4\pi G\rho _{0} \over 3}}\equiv {V_{0} \over r_{0}}\\\Phi (|\mathbf {r} |)&={\Omega ^{2}(x^{2}+y^{2}+z^{2})-3V_{0}^{2} \over 2}={V_{e}(r)^{2} \over 2}-\Phi (r_{0}),\end{aligned}}}
where {\displaystyle V_{e}(r)=V_{0}{\sqrt {1-{r^{2} \over r_{0}^{2}}}}={\sqrt {2\Phi (r_{0})-2\Phi (r)}}} is the speed to escape to the edge {\displaystyle r_{0}}.
First a recap on motion "inside" the uniform sphere potential. Inside this constant-density core region, individual stars go on resonant harmonic oscillations of angular frequency {\displaystyle \Omega } with
{\displaystyle {\begin{aligned}{\ddot {x}}=&-\Omega ^{2}x=-\partial _{x}\Phi ,\\{\ddot {y}}=&-\Omega ^{2}y,~~~{{\dot {y}}(t)^{2} \over 2}+{\Omega ^{2}y(t)^{2} \over 2}\equiv I_{y}(y,{\dot {y}})={{\dot {y}}(0)^{2} \over 2}+{\Omega ^{2}y(0)^{2} \over 2}\leq {(\Omega r_{0})^{2} \over 2}\\{\ddot {z}}=&-\Omega ^{2}z,\rightarrow {\dot {z}}(t)={\dot {z}}(0)\cos(\Omega t)-\Omega z(0)\sin(\Omega t).\end{aligned}}}
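A quick numerical sketch (leapfrog integration, arbitrary assumed initial conditions) confirming that the harmonic motion inside the core conserves the integral I_y and follows the analytic solution:

```python
import math

Omega = 1.0
y, vy = 0.3, 0.7                       # arbitrary initial conditions (assumed)
I0 = 0.5 * vy**2 + 0.5 * Omega**2 * y**2

# Leapfrog (kick-drift-kick) integration of  y'' = -Omega^2 y  over ten periods.
dt, t = 0.001, 0.0
y_num, v_num = y, vy
for _ in range(int(10 * 2 * math.pi / Omega / dt)):
    v_num += -Omega**2 * y_num * 0.5 * dt
    y_num += v_num * dt
    v_num += -Omega**2 * y_num * 0.5 * dt
    t += dt

y_exact = y * math.cos(Omega * t) + (vy / Omega) * math.sin(Omega * t)
I_num = 0.5 * v_num**2 + 0.5 * Omega**2 * y_num**2
print(abs(y_num - y_exact), abs(I_num - I0))   # both tiny: I_y is conserved
```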
Loosely speaking our goal is to put stars on a weighted distribution of orbits with various energies
{\displaystyle f\left(I_{x}(x,{\dot {x}}),I_{y}(y,{\dot {y}}),I_{z}(z,{\dot {z}})\right)=DF(\mathbf {r} ,\mathbf {V} )}, i.e., the phase space density or distribution function, such that their overall stellar number density reproduces the constant core, hence their collective "steady-state" potential. Once this is reached, we say the system is in a self-consistent equilibrium.
=== Example on Jeans theorem and CBE on Uniform Sphere Potential ===
Generally for a time-independent system, Jeans theorem predicts that
{\displaystyle f(\mathbf {x} ,\mathbf {v} )}
is an implicit function of the position and velocity through a functional dependence on "constants of motion".
For the uniform sphere, a solution for the Boltzmann Equation, written in spherical coordinates
{\displaystyle (r,\theta ,\phi )} and its velocity components {\displaystyle (V_{r},V_{\theta },V_{\phi })}
is
f
(
r
,
θ
,
φ
,
V
r
,
V
θ
,
V
φ
)
=
C
0
V
0
3
V
0
2
2
Q
,
{\displaystyle f(r,\theta ,\varphi ,V_{r},V_{\theta },V_{\varphi })={C_{0} \over V_{0}^{3}}{\sqrt {V_{0}^{2} \over 2Q}},}
where
{\displaystyle C_{0}=2\pi ^{-2}\rho _{0}}
is a normalisation constant, which has the dimension of (mass) density. We also define a positive quantity Q with the enthalpy-like dimension {\displaystyle {\text{km}}^{2}/{\text{s}}^{2}}:
Q
[
x
,
v
]
≡
[
0
,
(
−
V
0
2
−
E
)
+
J
2
2
r
0
2
]
max
[
J
z
|
J
z
|
,
0
]
max
.
{\displaystyle Q[\mathbf {x} ,\mathbf {v} ]\equiv \left[0,\left(-V_{0}^{2}-E\right)+{J^{2} \over 2r_{0}^{2}}\right]_{\max }\left[{J_{z} \over |J_{z}|},0\right]_{\max }.}
Clearly anti-clockwise rotating stars with {\displaystyle J_{z}\leq 0,~~Q=0} are excluded.
It is easy to see in spherical coordinates that
{\displaystyle J^{2}=r^{2}V_{t}^{2}=r^{2}(V_{\theta }^{2}+V_{\varphi }^{2}),}
{\displaystyle J_{z}=V_{\varphi }r\sin \theta ,}
{\displaystyle E={V_{r}^{2}+V_{t}^{2} \over 2}+\Phi (r),~V_{t}\equiv {\sqrt {V_{\theta }^{2}+V_{\varphi }^{2}}}}
Inserting the potential and these definitions of the orbital energy E, the angular momentum J, and its z-component Jz along every stellar orbit, we have
2
Q
=
Heaviside
(
V
φ
|
V
φ
|
)
×
[
V
0
2
(
1
−
r
2
r
0
2
)
−
V
r
2
−
(
1
−
r
2
r
0
2
)
(
V
θ
2
+
V
φ
2
)
,
0
]
max
,
{\displaystyle 2Q={\text{Heaviside}}\left({V_{\varphi } \over |V_{\varphi }|}\right)\times \left[V_{0}^{2}\left(1-{r^{2} \over r_{0}^{2}}\right)-V_{r}^{2}-\left(1-{r^{2} \over r_{0}^{2}}\right){\left(V_{\theta }^{2}+V_{\varphi }^{2}\right)},0\right]_{\max },}
which implies {\displaystyle |V_{r}|\leq V_{e}(r)}, and {\displaystyle |V_{\theta }|,V_{\varphi }} between zero and {\displaystyle V_{0}}.
To verify that the above {\displaystyle E,~J_{z}} are constants of motion in our spherical potential, we note
d
E
/
d
t
=
∂
E
∂
t
+
v
∂
E
∂
x
+
(
−
∇
Φ
)
∂
E
∂
v
{\displaystyle dE/dt={\partial E \over \partial t}+\mathbf {v} {\partial E \over \partial \mathbf {x} }+(\mathbf {-\nabla \Phi } ){\partial E \over \partial \mathbf {v} }}
d
E
/
d
t
=
∂
Φ
∂
t
+
v
∂
Φ
∂
x
+
(
−
∇
Φ
)
v
=
∂
Φ
∂
t
=
0
{\displaystyle dE/dt={\partial \Phi \over \partial t}+\mathbf {v} {\partial \Phi \over \partial \mathbf {x} }+(\mathbf {-\nabla \Phi } )\mathbf {v} ={\partial \Phi \over \partial t}=0}
for any "steady state" potential.
d
J
z
/
d
t
=
∂
J
z
∂
t
+
∂
J
z
∂
x
⋅
v
−
(
∇
Φ
)
⋅
∂
J
z
∂
v
,
{\displaystyle dJ_{z}/dt={\partial J_{z} \over \partial t}+{\partial J_{z} \over \partial \mathbf {x} }\cdot \mathbf {v} -(\mathbf {\nabla \Phi } )\cdot {\partial J_{z} \over \partial \mathbf {v} },}
which reduces to
d
J
z
/
d
t
=
0
+
[
(
V
y
)
V
x
+
(
−
V
x
)
V
y
]
−
[
(
−
y
)
x
R
∂
Φ
(
R
,
z
)
∂
R
+
(
x
)
y
R
∂
Φ
(
R
,
z
)
∂
R
]
=
0
{\displaystyle dJ_{z}/dt=0+[(V_{y})V_{x}+(-V_{x})V_{y}]-\left[(-y){x \over R}{\partial \Phi (R,z) \over \partial R}+(x){y \over R}{\partial \Phi (R,z) \over \partial R}\right]=0}
around the z-axis of any axisymmetric potential, where
R
=
x
2
+
y
2
{\textstyle R={\sqrt {x^{2}+y^{2}}}}
.
Likewise the x and y components of the angular momentum are also conserved for a spherical potential. Hence
d
J
/
d
t
=
0
{\displaystyle dJ/dt=0}
.
So for any time-independent spherical potential (including our uniform sphere model),
the orbital energy E and angular momentum J and its z-component Jz along every stellar orbit satisfy
d
E
[
x
,
v
]
/
d
t
=
d
J
[
x
,
v
]
/
d
t
=
d
J
z
[
x
,
v
]
/
d
t
=
0.
{\displaystyle dE[\mathbf {x} ,\mathbf {v} ]/dt=dJ[\mathbf {x} ,\mathbf {v} ]/dt=dJ_{z}[\mathbf {x} ,\mathbf {v} ]/dt=0.}
Hence using the chain rule, we have
d
d
t
Q
(
E
[
x
,
v
]
,
J
[
x
,
v
]
,
J
z
[
x
,
v
]
)
=
∂
Q
∂
E
d
E
d
t
+
∂
Q
∂
J
z
d
J
z
d
t
+
∂
Q
∂
J
d
J
d
t
=
0
,
{\displaystyle {d \over dt}Q(E[\mathbf {x} ,\mathbf {v} ],J[\mathbf {x} ,\mathbf {v} ],J_{z}[\mathbf {x} ,\mathbf {v} ])={\partial Q \over \partial E}{dE \over dt}+{\partial Q \over \partial J_{z}}{dJ_{z} \over dt}+{\partial Q \over \partial J}{dJ \over dt}=0,}
i.e.,
d
d
t
f
=
f
′
(
Q
)
d
Q
[
x
,
v
]
d
t
=
0
{\textstyle {d \over dt}f=f'(Q){dQ[\mathbf {x} ,\mathbf {v} ] \over dt}=0}
, so that CBE is satisfied, i.e., our
f
(
x
,
v
)
=
f
(
E
[
x
,
v
]
,
J
[
x
,
v
]
,
J
z
[
x
,
v
]
)
{\displaystyle f(\mathbf {x} ,\mathbf {v} )=f(E[\mathbf {x} ,\mathbf {v} ],J[\mathbf {x} ,\mathbf {v} ],J_{z}[\mathbf {x} ,\mathbf {v} ])}
is a solution to the Collisionless Boltzmann Equation for our static spherical potential.
=== A worked example on moments of distribution functions in a uniform spherical cluster ===
We can find various moments of the above distribution function, reformatted with the help of three Heaviside functions as
f
(
|
r
|
,
V
r
,
V
θ
,
V
φ
)
=
C
0
V
0
3
H
(
1
−
x
)
(
1
−
x
2
)
1
2
|
x
≡
|
r
|
r
0
H
(
V
φ
)
H
(
1
−
q
)
(
1
−
q
)
1
2
,
q
(
r
,
V
)
≡
V
r
2
V
e
(
|
r
|
)
2
+
V
θ
2
V
0
2
+
V
φ
2
V
0
2
,
{\displaystyle f(|\mathbf {r} |,V_{r},V_{\theta },V_{\varphi })={C_{0} \over V_{0}^{3}}\left.{{\text{H}}(1-x) \over \left(1-x^{2}\right)^{1 \over 2}}\right|_{x\equiv {|\mathbf {r} | \over r_{0}}}{{\text{H}}(V_{\varphi }){\text{H}}(1-q) \over (1-q)^{1 \over 2}},~~q(\mathbf {r} ,\mathbf {V} )\equiv {V_{r}^{2} \over V_{e}(|\mathbf {r} |)^{2}}+{V_{\theta }^{2} \over V_{0}^{2}}+{V_{\varphi }^{2} \over V_{0}^{2}},}
once we input the expression for the earlier potential {\displaystyle \Phi (r)} inside {\displaystyle r\leq r_{0}}, or even better the speed to "escape from r to the edge" {\displaystyle r_{0}} of a uniform sphere,
{\displaystyle V_{e}(r)=V_{0}{\sqrt {1-{r^{2} \over r_{0}^{2}}}}.}
Clearly the factor {\displaystyle {V_{e}(|\mathbf {r} |) \over {\sqrt {2Q}}}={\sqrt {\max[0,{1 \over 1-q}]}}} in the DF (distribution function) is well-defined only if {\displaystyle Q\geq 0\rightarrow q\leq 1}, which implies a finite range in radius {\displaystyle 0\leq |\mathbf {r} |<r_{0}} and excludes high-velocity particles, e.g., {\displaystyle V_{t}>V_{0}>V_{e}(r)}, from the distribution function (i.e., the phase space density).
In fact, the positivity carves the ({\displaystyle V_{\varphi }\geq 0}) left-half of an ellipsoid in the {\displaystyle [V_{r},V_{\theta },V_{\varphi }]} velocity space (the "velocity ellipsoid"),
{\displaystyle q(\mathbf {r} ,\mathbf {V} )\equiv {V_{r}^{2} \over V_{0}^{2}(1-r^{2}/r_{0}^{2})}+\left({V_{\theta }^{2} \over V_{0}^{2}}+{V_{\varphi }^{2} \over V_{0}^{2}}\right)\equiv u_{r}^{2}+u_{\theta }^{2}+u_{\varphi }^{2}\leq 1,}
where {\displaystyle (u_{r},u_{\theta },u_{\varphi })} is {\displaystyle (V_{r},V_{\theta },V_{\varphi })} rescaled by the function {\displaystyle V_{e}(r)=V_{0}{\sqrt {1-r^{2}/r_{0}^{2}}}} or {\displaystyle V_{0}}, respectively.
The velocity ellipsoid (in this case) has rotational symmetry around the r axis or {\displaystyle V_{r}} axis. It is more squashed (in this case) along the radial direction, hence more tangentially anisotropic, because everywhere {\displaystyle V_{e}(r)<V_{0}} except at the origin, where the ellipsoid looks isotropic. Now we compute the moments of the phase space distribution.
E.g., the resulting density (moment) is
ρ
(
r
,
θ
,
φ
)
=
∫
−
V
e
(
r
)
V
e
(
r
)
d
V
r
∫
−
V
0
V
0
d
V
θ
∫
0
V
0
d
V
φ
C
0
V
0
3
(
2
Q
V
0
2
)
−
1
/
2
=
∫
−
1
1
∫
−
1
1
∫
0
1
(
V
e
d
u
r
)
(
V
0
d
u
θ
)
(
V
0
d
u
φ
)
C
0
V
0
3
(
1
−
r
2
/
r
0
2
)
1
/
2
(
1
−
q
)
1
/
2
|
q
=
u
r
2
+
u
θ
2
+
u
φ
2
=
C
0
∫
0
1
(
1
−
u
2
)
−
1
/
2
(
2
π
u
2
d
u
)
=
ρ
0
{\displaystyle {\begin{aligned}\rho (r,\theta ,\varphi )&=\int _{-V_{e}(r)}^{V_{e}(r)}dV_{r}\int _{-V_{0}}^{V_{0}}dV_{\theta }\int _{0}^{V_{0}}dV_{\varphi }{C_{0} \over V_{0}^{3}}\left({2Q \over V_{0}^{2}}\right)^{-1/2}\\&=\int _{-1}^{1}\int _{-1}^{1}\int _{0}^{1}{(V_{e}du_{r})(V_{0}du_{\theta })(V_{0}du_{\varphi })C_{0} \over V_{0}^{3}(1-r^{2}/r_{0}^{2})^{1/2}(1-q)^{1/2}}\left.\right|_{q=u_{r}^{2}+u_{\theta }^{2}+u_{\varphi }^{2}}\\&=C_{0}{\int _{0}^{1}(1-u^{2})^{-1/2}(2\pi u^{2}du)}=\rho _{0}\end{aligned}}}
is indeed a spherical (angle-independent) and uniform (radius-independent) density inside the edge, where the normalisation constant {\displaystyle C_{0}=2\pi ^{-2}\rho _{0}}.
The streaming velocity is computed as the weighted mean of the velocity vector
⟨
V
⟩
(
x
)
≡
∫
f
d
V
3
V
∫
f
d
V
3
=
1
ρ
∫
f
d
V
3
[
V
r
,
V
θ
,
V
φ
]
C
0
V
0
2
(
2
Q
)
−
1
/
2
=
[
∫
−
1
1
u
r
.
.
.
d
u
r
,
∫
−
1
1
u
θ
.
.
.
d
u
θ
,
∫
0
1
(
2
d
u
r
)
∫
0
1
−
u
r
2
(
2
d
u
θ
)
∫
0
1
−
u
r
2
−
u
θ
2
d
u
φ
u
φ
V
0
(
1
−
u
r
2
−
u
θ
2
−
u
φ
2
)
1
/
2
∫
0
1
(
2
π
U
d
U
)
∫
0
1
−
U
2
d
u
φ
(
1
−
U
2
−
u
φ
2
)
−
1
/
2
]
=
[
0
,
0
,
4
V
0
3
π
]
=
V
(
x
)
¯
,
{\displaystyle {\begin{aligned}\langle \mathbf {V} \rangle (\mathbf {x} )&\equiv {\int fd\mathbf {V} ^{3}\mathbf {V} \over \int fd\mathbf {V} ^{3}}\\&={1 \over \rho }\int fd\mathbf {V} ^{3}[V_{r},V_{\theta },V_{\varphi }]{C_{0}V_{0}^{2}(2Q)^{-1/2}}\\&=\left[{\int _{-1}^{1}\!\!u_{r}...du_{r},~~\int _{-1}^{1}\!\!u_{\theta }...du_{\theta },~~\int _{0}^{1}(2du_{r})\int _{0}^{\sqrt {1-u_{r}^{2}}}\!\!(2du_{\theta })\int _{0}^{\sqrt {1-u_{r}^{2}-u_{\theta }^{2}}}\!\!\!\!\!\!\!\!\!\!{du_{\varphi }u_{\varphi }V_{0} \over (1-u_{r}^{2}-u_{\theta }^{2}-u_{\varphi }^{2})^{1/2}} \over \int _{0}^{1}(2\pi UdU)\int _{0}^{\sqrt {1-U^{2}}}du_{\varphi }(1-U^{2}-u_{\varphi }^{2})^{-1/2}}\right]\\&=\left[0,0,{4V_{0} \over 3\pi }\right]={\overline {\mathbf {V} (\mathbf {x} )}},\end{aligned}}}
where the global average (indicated by the overline bar) of the flow implies a uniform pattern of flat azimuthal rotation, but zero net streaming everywhere in the meridional {\displaystyle (r,\theta )} plane.
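A minimal Monte Carlo sketch (illustrative only; the sampling scheme below is an assumption, not the article's method) reproduces the flat-rotation value ⟨V_φ⟩ = 4V_0/(3π) ≈ 0.4244 V_0 by weighting points of the half-ball u_φ ≥ 0 with the factor (1 − q)^(−1/2) that appears in the integrals above.

# Monte Carlo check of <V_phi> = 4 V_0 / (3 pi) for the weight (1-q)^(-1/2) on the
# half-ball u_phi >= 0 (rescaled velocities u; units with V_0 = 1).
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, size=(2_000_000, 3))     # columns: u_r, u_theta, u_phi
u = u[np.sum(u**2, axis=1) < 1.0]                   # keep points inside the unit ball
u = u[u[:, 2] >= 0.0]                               # keep the u_phi >= 0 half
w = 1.0 / np.sqrt(1.0 - np.sum(u**2, axis=1))       # DF weight (1 - q)^(-1/2)
print(np.average(u[:, 2], weights=w), 4.0 / (3.0 * np.pi))   # both ~ 0.4244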
Incidentally, the angular momentum global average of this flat-rotation sphere is
{\displaystyle {\overline {\mathbf {r} \times \langle \mathbf {V} \rangle }}=\int _{0}^{r_{0}}{(\rho 4\pi r^{2}dr) \over M_{0}}[0,0,r\langle V_{\varphi }\rangle ]=[0,0,{3r_{0} \over 4}{\overline {V_{\varphi }}}].}
Note that the global average of the centre-of-mass motion does not change, so {\displaystyle {\overline {\mathbf {V} _{i}(\mathbf {x} )}}=0} due to global momentum conservation in each rectangular direction {\displaystyle i=x,y,z}, and this does not contradict the global non-zero rotation.
Likewise, thanks to the symmetry of {\displaystyle f(r,\theta ,\varphi ,V_{r},V_{\theta },V_{\varphi })=f(r,\theta ,\pm \varphi ,\pm V_{r},\pm V_{\theta },V_{\varphi })}, we have {\displaystyle \langle \mathbf {(\pm V_{r})V_{\varphi }} \rangle =0}, {\displaystyle \langle \mathbf {(\pm V_{\theta })V_{\varphi }} \rangle =0}, {\displaystyle \langle \mathbf {(\pm V_{r})V_{\theta }} \rangle =0} everywhere.
Likewise the rms velocity in the rotation direction is computed as a weighted mean, e.g.,
{\displaystyle {\begin{aligned}\langle \mathbf {V} _{\varphi }^{2}\rangle (|\mathbf {x} |)&\equiv {\int fd\mathbf {V} ^{3}V_{\varphi }^{2} \over \rho (|\mathbf {r} |)}\\&={\int _{0}^{1}(2du_{r})\int _{0}^{\sqrt {1-u_{r}^{2}}}(2du_{\theta })\int _{0}^{\sqrt {1-u_{r}^{2}-u_{\theta }^{2}}}du_{\varphi }{(u_{\varphi }V_{0})^{2} \over (1-q)^{1/2}} \over \int _{0}^{1}{(2\pi u^{2}du)(1-u^{2})^{-1/2}}}\\&=0.25V_{0}^{2}=0.5\langle V_{t}^{2}\rangle \\&={\!\!\int _{0}^{1}(2du_{r})\!\!\int _{0}^{\sqrt {1-u_{r}^{2}}}(2du_{\varphi })\!\!\int _{0}^{\sqrt {1-u_{r}^{2}-u_{\varphi }^{2}}}du_{\theta }{(u_{\theta }V_{0})^{2} \over (1-q)^{1/2}} \over \int _{0}^{1}{(2\pi u^{2}du)(1-u^{2})^{-1/2}}}\\&=\langle \mathbf {V} _{\theta }^{2}\rangle (|\mathbf {x} |),\\\end{aligned}}}
Here {\displaystyle \langle V_{t}^{2}\rangle =\langle V_{\theta }^{2}+V_{\varphi }^{2}\rangle =0.5V_{0}^{2}.}
Likewise
{\displaystyle \langle \mathbf {V} _{r}^{2}\rangle (\mathbf {x} )={\!\!\int _{0}^{1}(du_{\varphi })\int _{0}^{\sqrt {1-u_{\varphi }^{2}}}\!\!(2du_{\theta })\!\!\int _{0}^{\sqrt {1-u_{\varphi }^{2}-u_{\theta }^{2}}}\!\!\!{(2du_{r})(u_{r}V_{e}(r))^{2} \over (1-q)^{1/2}} \over \int _{0}^{1}{(2\pi u^{2}du)(1-u^{2})^{-1/2}}}=\left({V_{0} \over 2}{\sqrt {1-{r^{2} \over r_{0}^{2}}}}\right)^{2}.}
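Continuing the same illustrative Monte Carlo sketch, the weighted second moments of the rescaled velocities come out as ⟨u_r²⟩ = ⟨u_θ²⟩ = ⟨u_φ²⟩ = 1/4, consistent with ⟨V_θ²⟩ = ⟨V_φ²⟩ = V_0²/4 and ⟨V_r²⟩ = V_e(r)²/4 quoted above.

# Each rescaled second moment should be ~0.25, so that <V_phi^2> = <V_theta^2> = V_0^2/4
# and <V_r^2> = V_e(r)^2/4 = (V_0^2/4)(1 - r^2/r_0^2) after undoing the rescaling.
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(-1.0, 1.0, size=(2_000_000, 3))
u = u[np.sum(u**2, axis=1) < 1.0]
u = u[u[:, 2] >= 0.0]                               # u_phi >= 0 half-ball
w = 1.0 / np.sqrt(1.0 - np.sum(u**2, axis=1))
for name, col in [("u_r^2", 0), ("u_theta^2", 1), ("u_phi^2", 2)]:
    print(name, np.average(u[:, col]**2, weights=w))   # each ~ 0.25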
So the pressure tensor or dispersion tensor is
{\displaystyle {\begin{aligned}\sigma _{ij}^{2}(\mathbf {r} )=&{P_{ij}(\mathbf {r} ) \over \rho (\mathbf {r} )}\\=&\langle \mathbf {V} _{i}\mathbf {V} _{j}\rangle -\langle \mathbf {V} _{i}\rangle \langle \mathbf {V} _{j}\rangle \\=&{\begin{bmatrix}\left[1-({r \over r_{0}})^{2}\right]\left({V_{0} \over 2}\right)^{2}&0&0\\0&\left({V_{0} \over 2}\right)^{2}&0\\0&0&\left[1-({8 \over 3\pi })^{2}\right]\left({V_{0} \over 2}\right)^{2}\end{bmatrix}}\end{aligned}}}
with zero off-diagonal terms because of the symmetric velocity distribution.
Note that while no Dark Matter is needed to produce the above flat rotation curve, the price is a reduction factor {\displaystyle {8 \over 3\pi }=0.8488} in the random velocity spread in the azimuthal direction. Among the diagonal dispersion tensor moments, {\displaystyle \sigma _{\theta }\equiv {\sqrt {\sigma _{\theta \theta }^{2}}}=0.5V_{0}} is the biggest of the three at all radii, and {\displaystyle \sigma _{\varphi }\equiv {\sqrt {\sigma _{\varphi \varphi }^{2}}}\geq \sigma _{r}\equiv {\sqrt {\sigma _{rr}^{2}}}} only near the edge, between {\displaystyle 0.8488r_{0}\leq r\leq r_{0}}.
The larger tangential kinetic energy compared with that of radial motion seen in the diagonal dispersions is often phrased in terms of an anisotropy parameter
{\displaystyle \beta (r)\equiv 1-{0.5\langle {\mathbf {V} _{t}}^{2}(|\mathbf {r} |)\rangle \over \langle {\mathbf {V} _{r}}^{2}\rangle (|\mathbf {r} |)}=1-{\langle {\mathbf {V} _{\theta }}^{2}(|\mathbf {r} |)\rangle \over \langle {\mathbf {V} _{r}}^{2}\rangle (|\mathbf {r} |)}=-{r^{2} \over r_{0}^{2}-r^{2}}\leq 0;}
a positive anisotropy would have meant that radial motion dominated, and a negative anisotropy means that tangential motion dominates (as in this uniform sphere).
=== A worked example of Virial Theorem ===
Twice the kinetic energy per unit mass of the above uniform sphere is
{\displaystyle {\begin{aligned}{2K \over M_{0}}&={\overline {\langle V^{2}\rangle }}\equiv \langle {\overline {V^{2}}}\rangle \\&=M_{0}^{-1}\int _{0}^{M_{0}}\langle V_{\theta }^{2}+V_{\varphi }^{2}+V_{r}^{2}\rangle dM\\&=M_{0}^{-1}\int _{0}^{1}\left({V_{0}^{2} \over 4}+{V_{0}^{2} \over 4}+{(1-x^{2})V_{0}^{2} \over 4}\right)d(x^{3}M_{0})=0.6V_{0}^{2},~~x\equiv {r \over r_{0}}=\left({M \over M_{0}}\right)^{1 \over 3},\end{aligned}}}
which balances the potential energy per unit mass of the uniform sphere, inside which {\displaystyle M\propto r^{3}\propto x^{3}}.
The average Virial per unit mass can be computed from averaging its local value {\displaystyle \mathbf {r} \cdot (-\mathbf {\nabla } \Phi )}, which yields
{\displaystyle {\begin{aligned}{W \over M_{0}}&={\overline {\mathbf {r} \cdot (-\mathbf {\nabla } \Phi )}}\\&=M_{0}^{-1}\int _{0}^{r_{0}}\mathbf {r} \cdot {-GM\mathbf {r}  \over |\mathbf {r} |^{3}}(\rho d\mathbf {r} ^{3})=-M_{0}^{-1}\int _{0}^{M_{0}}{GM \over |\mathbf {r} |}dM\\&=-M_{0}^{-1}\int _{0}^{M_{0}}{GM~dM \over r_{0}~(M/M_{0})^{1 \over 3}}=-{3GM_{0} \over 5r_{0}}=-0.6V_{0}^{2},\end{aligned}}}
as required by the Virial Theorem. For this self-gravitating sphere, we can also verify that the Virial per unit mass equals the average of half of the potential,
{\displaystyle {\begin{aligned}{E_{\text{pot}} \over M_{0}}&={\overline {\langle {\Phi  \over 2}\rangle }}\\&=M_{0}^{-1}\int _{x>0}^{x<1}{\Phi (r_{0}x) \over 2}d(M_{0}x^{3})\\&={W \over M_{0}}={-2K \over M_{0}}.\end{aligned}}}
Hence we have verified the validity of the Virial Theorem for a uniform sphere under self-gravity, i.e., the gravity due to the mass density of the stars is also the gravity in which the stars move self-consistently; no additional dark matter halo contributes to its potential, for example.
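A minimal arithmetic check of the two averages (in illustrative units V_0 = r_0 = GM_0 = 1, so that V_0² = GM_0/r_0; the integration variable is x = r/r_0):

# 2K/M_0 = Int_0^1 [V_0^2/4 + V_0^2/4 + (1-x^2) V_0^2/4] d(x^3) and
# W/M_0  = -Int_0^1 (G M(r)/r) d(M/M_0) with M/M_0 = x^3, in units V_0 = r_0 = G M_0 = 1.
import numpy as np
from scipy.integrate import quad

two_K, _ = quad(lambda x: (0.25 + 0.25 + 0.25 * (1.0 - x**2)) * 3.0 * x**2, 0.0, 1.0)
W, _ = quad(lambda x: -(x**3 / x) * 3.0 * x**2, 0.0, 1.0)
print(two_K, W, two_K + W)    # 0.6, -0.6, ~0, as the Virial Theorem requires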
=== A worked example of Jeans Equation in a uniform sphere ===
The Jeans Equation relates how the pressure gradient of a system balances the potential gradient in an equilibrium galaxy. In our uniform sphere, the potential gradient, or gravity, is
{\displaystyle \nabla \Phi ={d\Phi  \over dr}={\Omega ^{2}r}\geq 0,~~\Omega ={V_{0} \over r_{0}}.}
The radial pressure gradient is
{\displaystyle -{d(\rho \sigma _{r}^{2}) \over \rho dr}=-{d\sigma _{r}^{2} \over dr}-{\sigma _{r}^{2} \over r}{d\log \rho  \over d\log r}={\Omega ^{2}r \over 2}+0\geq 0.}
The reason for the discrepancy is partly the centrifugal force
{\displaystyle {{\bar {V}}_{\varphi }^{2} \over r}={(0.4244V_{0})^{2} \over r}>0,}
and partly due to the anisotropic pressure,
{\displaystyle {\begin{aligned}{(\sigma _{\theta }^{2}-\sigma _{r}^{2}) \over r}&=0.25\Omega ^{2}r\geq 0\\{(\sigma _{\varphi }^{2}-\sigma _{r}^{2}) \over r}&=0.25\Omega ^{2}r-{0.1801V_{0}^{2} \over r},\end{aligned}}}
where the second difference can take either sign: {\displaystyle 0.2643V_{0}=\sigma _{\varphi }<\sigma _{r}=0.5V_{0}} at the very centre, the two balance at the radius {\displaystyle r=0.8488r_{0}}, and they reverse to {\displaystyle 0.2643V_{0}=\sigma _{\varphi }>\sigma _{r}=0} at the very edge.
Now we can verify that
{\displaystyle {\begin{aligned}{\partial \langle V_{r}\rangle  \over \partial t}&=(-\sum _{i=x,y,z}V_{i}\partial _{i}\langle V_{r}\rangle )-{\cancel {\langle V_{r}\rangle  \over t_{\text{fric}}}}-\nabla _{r}\Phi +\sum _{i=x,y,z}{-\partial _{i}(n\sigma _{ir}^{2}) \over n}\\&={{\bar {V}}_{\theta }^{2}+{\bar {V}}_{\varphi }^{2}-2{\bar {V}}_{r}^{2} \over r}-0-{\partial \Phi  \over \partial r}+\left[-{d(\rho \sigma _{r}^{2}) \over \rho dr}+{\sigma _{\theta }^{2}+\sigma _{\varphi }^{2}-2\sigma _{r}^{2} \over r}\right]\\&={0+(0.4244V_{0})^{2}-2\times 0 \over r}-(\Omega ^{2}r)+\\&\left[{\Omega ^{2}r \over 2}+{(0.5V_{0})^{2}+(0.2643V_{0})^{2}-2\times 0.25\Omega ^{2}(r_{0}^{2}-r^{2}) \over r}\right]\\&=0.\end{aligned}}}
Here the first line above is essentially the Jeans equation in the r-direction, which reduces to the second line, the Jeans equation in an anisotropic (aka {\displaystyle \beta \neq 0}), rotational (aka {\displaystyle \langle V_{\varphi }\rangle \neq 0}), axisymmetric ({\displaystyle \partial _{\varphi }\Phi (\mathbf {x} ,t)=0}) sphere (aka {\displaystyle \partial _{\theta }n(\mathbf {x} ,t)=0}) after much coordinate manipulation of the dispersion tensor; similar equations of motion can be obtained for the two tangential directions, e.g., {\displaystyle {\partial \langle V_{\varphi }\rangle  \over \partial t}}, which are useful in modelling ocean currents on the rotating Earth's surface or angular momentum transfer in accretion disks, where the frictional term {\displaystyle -{\langle V_{\varphi }\rangle  \over t_{\text{fric}}}} is important.
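A numerical spot-check of the radial balance just written out (an illustrative sketch in units V_0 = r_0 = Ω = 1; the chosen radius is arbitrary):

# The terms of the radial Jeans equation above should sum to zero at any 0 < r < r_0.
import numpy as np

r = 0.5
Vphi_bar = 4.0 / (3.0 * np.pi)              # streaming velocity, ~0.4244
sigma_theta2 = 0.25                         # sigma_theta^2
sigma_phi2 = 0.25 - Vphi_bar**2             # sigma_phi^2 ~ (0.2643)^2
sigma_r2 = 0.25 * (1.0 - r**2)              # sigma_r^2
balance = (0.0 + Vphi_bar**2 - 2.0 * 0.0) / r \
          - r \
          + (0.5 * r + (sigma_theta2 + sigma_phi2 - 2.0 * sigma_r2) / r)
print(balance)                              # ~ 0 up to rounding error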
The fact that the l.h.s. {\displaystyle {\partial V_{r} \over \partial t}=0} means that the force is balanced on the r.h.s. for this uniform (aka {\displaystyle \nabla _{\mathbf {x} }mn(\mathbf {x} ,t)=0}) spherical model of a galaxy (cluster) to stay in a steady state (aka a time-independent equilibrium, {\displaystyle {\partial n(\mathbf {x} ,t) \over \partial t}=0} everywhere) statically (aka with zero flow, {\displaystyle \langle \mathbf {V} (\mathbf {x} ,t)\rangle =0} everywhere). Note that systems like accretion disks can have a steady net radial inflow {\displaystyle \langle \mathbf {V} (\mathbf {x} )\rangle <0} everywhere at all times.
=== A worked example of Jeans equation in a thick disk ===
Consider again the thick disk potential in the above example.
If the density is that of a gas fluid, then the pressure would be zero at the boundary {\displaystyle z=\pm z_{0}}. To find the peak of the pressure, we note that
{\displaystyle P(R,z)=\int _{z}^{z_{0}}\partial _{z}\Phi \rho (R)dz=\rho (R)[\Phi (R,z_{0})-\Phi (R,z)].}
So the fluid temperature per unit mass, i.e., the 1-dimensional velocity dispersion squared would be
{\displaystyle \sigma ^{2}(R,z)={P(R,z) \over \rho (R)},~~|z|\leq z_{0}}
{\displaystyle \sigma ^{2}={GM_{0} \over 2z_{0}}\log {Q(z)Q(-z) \over Q(z_{0})Q(-z_{0})},~~Q(z)\equiv R_{0}+z_{0}+z+{\sqrt {R^{2}+(R_{0}+z_{0}+z)^{2}}}.}
Along the rotational z-axis,
{\displaystyle \sigma ^{2}(0,z)={GM_{0} \over 2z_{0}}\log {4(R_{0}+z_{0}+z)(R_{0}+z_{0}-z) \over 4R_{0}(R_{0}+2z_{0})}}
{\displaystyle \sigma (0,z)={\sqrt {GM_{0} \over 2z_{0}}}{\sqrt {\log {(R_{0}+z_{0})^{2}-z^{2} \over (R_{0}+z_{0})^{2}-z_{0}^{2}}}},}
which is clearly the highest at the centre and zero at the boundaries {\displaystyle z=\pm z_{0}}. Both the pressure and the dispersion peak at the midplane {\displaystyle z=0}. In fact the hottest and densest point is the centre, where
{\displaystyle P(0,0)={M_{0} \over 4\pi R_{0}^{2}z_{0}}{-GM_{0}\log[1-(1+R_{0}/z_{0})^{-2}] \over 2z_{0}}.}
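An illustrative evaluation of the vertical dispersion profile along the z-axis (the parameter values below are made up purely for this check, which confirms that σ peaks at the midplane and vanishes at z = ±z_0):

# sigma^2(0,z) = (G M_0 / 2 z_0) * log[((R_0+z_0)^2 - z^2) / ((R_0+z_0)^2 - z_0^2)]
# with illustrative parameters G = M_0 = 1, R_0 = 2, z_0 = 1.
import numpy as np

G = M0 = 1.0
R0, z0 = 2.0, 1.0
z = np.linspace(-z0, z0, 5)
sigma2 = (G * M0 / (2.0 * z0)) * np.log(((R0 + z0)**2 - z**2) / ((R0 + z0)**2 - z0**2))
print(list(zip(z, np.round(sigma2, 4))))    # maximum at z = 0, zero at z = +/- z_0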
=== A recap on worked examples on Jeans Eq., Virial and Phase space density ===
Having looked at a few applications of the Poisson equation and the phase-space density, and especially the Jeans equation, we can extract a general theme, again using the spherical cow approach.
The Jeans equation links gravity with the pressure gradient; it is a generalisation of the equation of motion for single particles. While the Jeans equation can be solved in disk systems, the most user-friendly version is the spherical anisotropic version for a static ({\displaystyle \langle {v_{j}}\rangle =0}) frictionless ({\displaystyle t_{\text{fric}}\rightarrow \infty }) system, hence the local velocity dispersion
{\displaystyle \sigma _{j}^{2}(r)=\langle {v_{j}^{2}}\rangle (r)-\underbrace {\langle {v_{j}}\rangle ^{2}(r)} _{=0}={\int \limits _{\infty }\!\!dv_{r}dv_{\theta }dv_{\varphi }({v}_{j}-\overbrace {\langle {v}\rangle _{j}^{p}} ^{=0})^{2}f_{p} \over \int \limits _{\infty }\!\!dv_{r}dv_{\theta }dv_{\varphi }f_{p}},}
everywhere for each of the three directions {\displaystyle j=r,\theta ,\varphi }.
One can project the phase space into these moments, which is easy in a highly spherical system, which admits conservation of the energy {\displaystyle E} and the angular momentum J. The boundary of the system sets the integration range of the velocities bound in the system.
In summary, in the spherical Jeans eq.,
{\displaystyle {\begin{aligned}{d\Phi  \over dr}=&{GM(r) \over r^{2}}\\=&-{d(n\langle {v_{r}^{2}}\rangle ) \over n(r)dr}+{\langle {v_{\theta }^{2}}\rangle +\langle {v_{\phi }^{2}}\rangle -2\langle {v_{r}^{2}}\rangle  \over r},\\=&-{d(n\langle {v_{r}^{2}}\rangle ) \over n(r)dr},~~{\text{hydrostatic equilibrium if isotropic velocity }}\\=&{\langle v_{t}^{2}\rangle  \over r},~~{\text{if purely centrifugal balancing of gravity with no radial motion}},\langle v_{t}^{2}\rangle \equiv \langle {v_{\theta }^{2}}\rangle +\langle {v_{\phi }^{2}}\rangle \end{aligned}}}
which matches the expectation from the Virial theorem
{\displaystyle {\overline {r\partial _{r}\Phi }}={\overline {v_{\text{cir}}^{2}}}={\overline {GM \over r}}={\overline {\langle v_{t}^{2}\rangle }}},
or in other words, the globally averaged kinetic energy of an equilibrium system equals the average kinetic energy on circular orbits with purely transverse motion.
== See also ==
Stellar classification
Boltzmann equation
Dynamical friction
Jeans equations
Mass segregation (astronomy)
N-body problem
Virial theorem
Stellar kinematics
Poisson's equation
Vector calculus
Accretion disk
Relaxation (physics)
== Further reading ==
Dynamics and Evolution of Galactic Nuclei, D. Merritt (2013). Princeton University Press.
Galactic Dynamics, J. Binney and S. Tremaine (2008). Princeton University Press.
Gravitational N-Body Simulations: Tools and Algorithms, S. Aarseth (2003). Cambridge University Press.
Principles of Stellar Dynamics, S. Chandrasekhar (1960). Dover.
== References ==
In physics, the Saha ionization equation is an expression that relates the ionization state of a gas in thermal equilibrium to the temperature and pressure. The equation is a result of combining ideas of quantum mechanics and statistical mechanics and is used to explain the spectral classification of stars. The expression was developed by physicist Meghnad Saha in 1920. It is discussed in many textbooks on statistical physics and plasma physics.
== Description ==
For a gas at a high enough temperature (here measured in energy units, i.e. keV or J) and/or density, the thermal collisions of the atoms will ionize some of the atoms, making an ionized gas. When several or more of the electrons that are normally bound to the atom in orbits around the atomic nucleus are freed, they form an independent electron gas cloud co-existing with the surrounding gas of atomic ions and neutral atoms. With sufficient ionization, the gas can become the state of matter called plasma.
The Saha equation describes the degree of ionization for any gas in thermal equilibrium as a function of the temperature, density, and ionization energies of the atoms.
For a gas composed of a single atomic species, the Saha equation is written:
{\displaystyle {\frac {n_{i+1}n_{\text{e}}}{n_{i}}}={\frac {2}{\lambda _{\text{th}}^{3}}}{\frac {g_{i+1}}{g_{i}}}\exp \left[-{\frac {\varepsilon _{i+1}-\varepsilon _{i}}{k_{\text{B}}T}}\right]}
where:
{\displaystyle n_{i}} is the number density of atoms in the i-th state of ionization, that is with i electrons removed.
{\displaystyle g_{i}} is the degeneracy of states for the i-ions.
{\displaystyle \varepsilon _{i}} is the energy required to remove i electrons from a neutral atom, creating an i-level ion.
{\displaystyle n_{\text{e}}} is the electron density.
{\displaystyle k_{\text{B}}} is the Boltzmann constant.
{\displaystyle \lambda _{\text{th}}} is the thermal de Broglie wavelength of an electron,
{\displaystyle \lambda _{\text{th}}\ {\stackrel {\mathrm {def} }{=}}\ {\frac {h}{\sqrt {2\pi m_{\text{e}}k_{\text{B}}T}}}}
{\displaystyle m_{\text{e}}} is the mass of an electron.
{\displaystyle T} is the temperature of the gas.
{\displaystyle h} is the Planck constant.
The expression {\textstyle (\varepsilon _{i+1}-\varepsilon _{i})} is the energy required to ionize the species from state {\displaystyle i} to state {\displaystyle i+1}.
In the case where only one level of ionization is important, we have {\textstyle n_{1}=n_{\text{e}}} for H+; defining the total density H/H+ as {\textstyle n=n_{0}+n_{1},} the Saha equation simplifies to:
{\displaystyle {\frac {n_{\text{e}}^{2}}{n-n_{\text{e}}}}={\frac {2}{\lambda _{\text{th}}^{3}}}{\frac {g_{1}}{g_{0}}}\exp \left[{\frac {-\varepsilon }{k_{\text{B}}T}}\right]}
where {\displaystyle \varepsilon } is the energy of ionization. We can define the degree of ionization {\textstyle x=n_{1}/n} and find
{\displaystyle {\frac {x^{2}}{1-x}}=A={\frac {2}{n\lambda _{\text{th}}^{3}}}{\frac {g_{1}}{g_{0}}}\exp \left[{\frac {-\varepsilon }{k_{\text{B}}T}}\right]}
This gives a quadratic equation that can be solved in closed form:
{\displaystyle x^{2}+Ax-A=0,\qquad x=\left(A{\sqrt {1+{\tfrac {4}{A}}}}-A\right)/2}
For small {\textstyle A(T),} i.e. low temperature, {\textstyle x\approx A^{1/2}\propto n^{-1/2},} so that the ionization decreases with higher number density.
Note that except for weakly ionized plasmas, the plasma environment affects the atomic structure with the subsequent lowering of the ionization potentials and the "cutoff" of the partition function. Therefore,
{\displaystyle \varepsilon _{i}} and {\displaystyle g_{i}} depend, in general, on {\displaystyle T} and {\displaystyle n_{\text{e}}}, and solving the Saha equation is only possible iteratively.
As a simple example, imagine a gas of monatomic hydrogen, set {\displaystyle g_{0}=g_{1}} and let {\displaystyle \varepsilon } = 13.6 eV (158000 K), the ionization energy of hydrogen from its ground state. Let {\displaystyle n} = 2.69×10²⁵ m⁻³, which is the Loschmidt constant, the particle density of Earth's atmosphere at standard pressure and temperature. At {\displaystyle T} = 300 K, the ionization is essentially zero: {\displaystyle x} = 5×10⁻¹¹⁵, and there would almost certainly be no ionized atoms in the volume of Earth's atmosphere. But {\displaystyle x} increases rapidly with {\displaystyle T}, reaching 0.35 for {\displaystyle T} = 20000 K. There is substantial ionization even though this {\textstyle k_{B}T} is much less than the ionization energy (although this depends somewhat on density). This is a common occurrence. Physically, it stems from the fact that at a given temperature, the particles have a distribution of energies, including some with several times {\textstyle k_{B}T.} These high energy particles are much more effective at ionizing atoms.
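A short numerical sketch of this worked example (function and variable names are illustrative; the physical constants come from scipy.constants):

# Solve x^2/(1-x) = A for the hydrogen example above: g_0 = g_1, epsilon = 13.6 eV,
# n = 2.69e25 m^-3.  The positive root of x^2 + A x - A = 0 is used.
import numpy as np
from scipy.constants import h, m_e, k, e

def saha_x(T, n=2.69e25, eps_eV=13.6, g_ratio=1.0):
    lam = h / np.sqrt(2.0 * np.pi * m_e * k * T)          # thermal de Broglie wavelength
    A = (2.0 / (n * lam**3)) * g_ratio * np.exp(-eps_eV * e / (k * T))
    return (np.sqrt(A * (A + 4.0)) - A) / 2.0

print(saha_x(300.0))      # essentially zero at room temperature
print(saha_x(20000.0))    # ~ 0.35, as quoted above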
In Earth's atmosphere, ionization is actually governed not by the Saha equation but by very energetic cosmic rays, largely muons. These particles are not in thermal equilibrium with the atmosphere, so they are not at its temperature and the Saha logic does not apply.
Rigorously, the Saha equation is only valid for dilute gases, due to the underlying ideal gas assumption used in its derivation. For dense gases this assumption no longer holds, because significant particle interactions modify the chemical potential of the species and the compressibility of the ionized gas and plasma. Hence, the Saha ionization framework has been extended to systems denser than the ideal gas limit p/RT [mol/m³] by incorporating corrections for these non-ideal interactions into the thermodynamic potential. This correction leads to improved estimates for the degree of ionization in the corona of the Sun.
== Particle densities ==
The Saha equation is useful for determining the ratio of particle densities for two different ionization levels. The most useful form of the Saha equation for this purpose is
{\displaystyle {\frac {Z_{i}}{N_{i}}}={\frac {Z_{i+1}Z_{e}}{N_{i+1}N_{e}}},}
where Z denotes the partition function of atom/ion resp. electron. The Saha equation can be seen as a restatement of the equilibrium condition for the chemical potentials:
{\displaystyle \mu _{i}=\mu _{i+1}+\mu _{e}\,}
This equation simply states that the potential for an atom of ionization state i to ionize is the same as the potential for an electron and an atom of ionization state i + 1. The potentials are equal, therefore the system is in equilibrium and no net change of ionization will occur.
== Stellar atmospheres ==
In the early twenties Ralph H. Fowler (in collaboration with Charles Galton Darwin) developed a new method in statistical mechanics permitting a systematic calculation of the equilibrium properties of matter. He used this to provide a rigorous derivation of the ionization formula which Saha had obtained, by extending to the ionization of atoms the theorem of Jacobus Henricus van 't Hoff, used in physical chemistry for its application to molecular dissociation. Also, a significant improvement in the Saha equation introduced by Fowler was to include the effect of the excited states of atoms and ions. A further important step forward came in 1923, when Edward Arthur Milne and R.H. Fowler published a paper in the Monthly Notices of the Royal Astronomical Society, showing that the criterion of the maximum intensity of absorption lines (belonging to subordinate series of a neutral atom) was much more fruitful in giving information about physical parameters of stellar atmospheres than the criterion employed by Saha which consisted in the marginal appearance or disappearance of absorption lines. The latter criterion requires some knowledge of the relevant pressures in the stellar atmospheres, and Saha following the generally accepted view at the time assumed a value of the order of 1 to 0.1 atmosphere. Milne wrote:
Saha had concentrated on the marginal appearances and disappearances of absorption lines in the stellar sequence, assuming an order of magnitude for the pressure in a stellar atmosphere and calculating the temperature where increasing ionization, for example, inhibited further absorption of the line in question owing to the loss of the series electron. As Fowler and I were one day stamping round my rooms in Trinity and discussing this, it suddenly occurred to me that the maximum intensity of the Balmer lines of hydrogen, for example, was readily explained by the consideration that at the lower temperatures there were too few excited atoms to give appreciable absorption, whilst at the higher temperatures there are too few neutral atoms left to give any absorption. ... That evening I did a hasty order of magnitude calculation of the effect and found that to agree with a temperature of 10000° [K] for the stars of type A0, where the Balmer lines have their maximum, a pressure of the order of 10−4 atmosphere was required. This was very exciting, because standard determinations of pressures in stellar atmospheres from line shifts and line widths had been supposed to indicate a pressure of the order of one atmosphere or more, and I had begun on other grounds to disbelieve this.
The generally accepted view at the time assumed that the composition of stars was similar to that of Earth. However, in 1925 Cecilia Payne used Saha's ionization theory to calculate that the composition of stellar atmospheres is as we now know it: mostly hydrogen and helium, expanding the knowledge of stars.
== Stellar coronae ==
Saha equilibrium prevails when the plasma is in local thermodynamic equilibrium, which is not the case in the optically thin corona. Here the equilibrium ionization states must be estimated by detailed statistical calculation of collision and recombination rates.
== Early universe ==
Equilibrium ionization, described by the Saha equation, explains evolution in the early universe. After the Big Bang, all atoms were ionized, leaving mostly protons and electrons (looking in the past). According to Saha's approach, when the universe had expanded and cooled such that the temperature reached about 3000 K, electrons (re)combined with protons (10 fm) forming hydrogen atoms (0.1 nm). At this point, 700 millennia since it was 100 million K, the universe became transparent to most electromagnetic radiation. That 3000 K surface, red-shifted in time by a factor of about 1,000, generated the 2.7 K cosmic microwave background radiation, which pervades the universe today.
== References ==
== External links ==
Derivation & Discussion by Hale Bradt www.cambridge.org
A detailed derivation from the University of Utah Physics Department
Lecture notes from the University of Maryland Department of Astronomy
Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space—what they are, rather than where they are", which is studied in celestial mechanics.
Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium, and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
In practice, modern astronomical research often involves substantial work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, and quantum and physical cosmology (the physical study of the largest-scale structures of the universe), including string cosmology and astroparticle physics.
== History ==
Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthly world was the realm which underwent growth and decay and in which natural motion was in a straight line and ended when the moving object reached its goal. Consequently, it was held that the celestial region was made of a fundamentally different kind of matter from that found in the terrestrial sphere; either Fire as maintained by Plato, or Aether as maintained by Aristotle.
During the 17th century, natural philosophers such as Galileo, Descartes, and Newton began to maintain that the celestial and terrestrial regions were made of similar kinds of material and were subject to the same natural laws. Their challenge was that the tools had not yet been invented with which to prove these assertions.
For much of the nineteenth century, astronomical research was focused on the routine work of measuring the positions and computing the motions of astronomical objects. A new astronomy, soon to be called astrophysics, began to emerge when William Hyde Wollaston and Joseph von Fraunhofer independently discovered that, when decomposing the light from the Sun, a multitude of dark lines (regions where there was less or no light) were observed in the spectrum. By 1860 the physicist, Gustav Kirchhoff, and the chemist, Robert Bunsen, had demonstrated that the dark lines in the solar spectrum corresponded to bright lines in the spectra of known gases, specific lines corresponding to unique chemical elements. Kirchhoff deduced that the dark lines in the solar spectrum are caused by absorption by chemical elements in the Solar atmosphere. In this way it was proved that the chemical elements found in the Sun and stars were also found on Earth.
Among those who extended the study of solar and stellar spectra was Norman Lockyer, who in 1868 detected radiant, as well as dark lines in solar spectra. Working with chemist Edward Frankland to investigate the spectra of elements at various temperatures and pressures, he could not associate a yellow line in the solar spectrum with any known elements. He thus claimed the line represented a new element, which was called helium, after the Greek Helios, the Sun personified.
In 1885, Edward C. Pickering undertook an ambitious program of stellar spectral classification at Harvard College Observatory, in which a team of women computers, notably Williamina Fleming, Antonia Maury, and Annie Jump Cannon, classified the spectra recorded on photographic plates. By 1890, a catalog of over 10,000 stars had been prepared that grouped them into thirteen spectral types. Following Pickering's vision, by 1924 Cannon expanded the catalog to nine volumes and over a quarter of a million stars, developing the Harvard Classification Scheme which was accepted for worldwide use in 1922.
In 1895, George Ellery Hale and James E. Keeler, along with a group of ten associate editors from Europe and the United States, established The Astrophysical Journal: An International Review of Spectroscopy and Astronomical Physics. It was intended that the journal would fill the gap between journals in astronomy and physics, providing a venue for publication of articles on astronomical applications of the spectroscope; on laboratory research closely allied to astronomical physics, including wavelength determinations of metallic and gaseous spectra and experiments on radiation and absorption; on theories of the Sun, Moon, planets, comets, meteors, and nebulae; and on instrumentation for telescopes and laboratories.
Around 1920, following the discovery of the Hertzsprung–Russell diagram still used as the basis for classifying stars and their evolution, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even that stars are largely composed of hydrogen (see metallicity), had not yet been discovered.
In 1925 Cecilia Helena Payne (later Cecilia Payne-Gaposchkin) wrote an influential doctoral dissertation at Radcliffe College, in which she applied Saha's ionization theory to stellar atmospheres to relate the spectral classes to the temperature of stars. Most significantly, she discovered that hydrogen and helium were the principal components of stars, not the composition of Earth. Despite Eddington's suggestion, the discovery was so unexpected that her dissertation readers (including Russell) convinced her to modify the conclusion before publication. However, later research confirmed her discovery.
By the end of the 20th century, studies of astronomical spectra had expanded to cover wavelengths extending from radio waves through optical, x-ray, and gamma wavelengths. In the 21st century, it further expanded to include observations based on gravitational waves.
== Observational astrophysics ==
Observational astronomy is a division of the astronomical science that is concerned with recording and interpreting data, in contrast with theoretical astrophysics, which is mainly concerned with finding out the measurable implications of physical models. It is the practice of observing celestial objects by using telescopes and other astronomical apparatus.
Most astrophysical observations are made using the electromagnetic spectrum.
Radio astronomy studies radiation with a wavelength greater than a few millimeters. Example areas of study are radio waves, usually emitted by cold objects such as interstellar gas and dust clouds; the cosmic microwave background radiation which is the redshifted light from the Big Bang; pulsars, which were first detected at microwave frequencies. The study of these waves requires very large radio telescopes.
Infrared astronomy studies radiation with a wavelength that is too long to be visible to the naked eye but is shorter than radio waves. Infrared observations are usually made with telescopes similar to the familiar optical telescopes. Objects colder than stars (such as planets) are normally studied at infrared frequencies.
Optical astronomy was the earliest kind of astronomy. Telescopes paired with a charge-coupled device or spectroscopes are the most common instruments used. The Earth's atmosphere interferes somewhat with optical observations, so adaptive optics and space telescopes are used to obtain the highest possible image quality. In this wavelength range, stars are highly visible, and many chemical spectra can be observed to study the chemical composition of stars, galaxies, and nebulae.
Ultraviolet, X-ray and gamma ray astronomy study very energetic processes such as binary pulsars, black holes, magnetars, and many others. These kinds of radiation do not penetrate the Earth's atmosphere well. There are two methods in use to observe this part of the electromagnetic spectrum—space-based telescopes and ground-based imaging air Cherenkov telescopes (IACT). Examples of Observatories of the first type are RXTE, the Chandra X-ray Observatory and the Compton Gamma Ray Observatory. Examples of IACTs are the High Energy Stereoscopic System (H.E.S.S.) and the MAGIC telescope.
Other than electromagnetic radiation, few things may be observed from the Earth that originate from great distances. A few gravitational wave observatories have been constructed, but gravitational waves are extremely difficult to detect. Neutrino observatories have also been built, primarily to study the Sun. Cosmic rays consisting of very high-energy particles can be observed hitting the Earth's atmosphere.
Observations can also vary in their time scale. Most optical observations take minutes to hours, so phenomena that change faster than this cannot readily be observed. However, historical data on some objects is available, spanning centuries or millennia. On the other hand, radio observations may look at events on a millisecond timescale (millisecond pulsars) or combine years of data (pulsar deceleration studies). The information obtained from these different timescales is very different.
The study of the Sun has a special place in observational astrophysics. Due to the tremendous distance of all other stars, the Sun can be observed in a kind of detail unparalleled by any other star. Understanding the Sun serves as a guide to understanding of other stars.
The topic of how stars change, or stellar evolution, is often modeled by placing the varieties of star types in their respective positions on the Hertzsprung–Russell diagram, which can be viewed as representing the state of a stellar object, from birth to destruction.
== Theoretical astrophysics ==
Theoretical astrophysicists use a wide variety of tools which include analytical models (for example, polytropes to approximate the behaviors of a star) and computational numerical simulations. Each has some advantages. Analytical models of a process are generally better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise not be seen.
Theorists in astrophysics endeavor to create theoretical models and figure out the observational consequences of those models. This helps allow observers to look for data that can refute a model or help in choosing between several alternate or conflicting models.
Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.
Topics studied by theoretical astrophysicists include stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Relativistic astrophysics serves as a tool to gauge the properties of large-scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole (astro)physics and the study of gravitational waves.
Some widely accepted and studied theories and models in astrophysics, now included in the Lambda-CDM model, are the Big Bang, cosmic inflation, dark matter, dark energy and fundamental theories of physics.
== Popularization ==
The roots of astrophysics can be found in the seventeenth century emergence of a unified physics, in which the same laws applied to the celestial and terrestrial realms. There were scientists who were qualified in both physics and astronomy who laid the firm foundation for the current science of astrophysics. In modern times, students continue to be drawn to astrophysics due to its popularization by the Royal Astronomical Society and notable educators such as prominent professors Lawrence Krauss, Subrahmanyan Chandrasekhar, Stephen Hawking, Hubert Reeves, Carl Sagan and Patrick Moore. The efforts of the early, late, and present scientists continue to attract young people to study the history and science of astrophysics.
The television sitcom show The Big Bang Theory popularized the field of astrophysics with the general public, and featured some well known scientists like Stephen Hawking and Neil deGrasse Tyson.
== See also ==
Astrochemistry – Study of molecules in the Universe and their reactions
Astronomical observatories
Astronomical spectroscopy – Measurement of electromagnetic radiation for astronomy
Astroparticle physics – Branch of particle physics
Gravitational-wave astronomy – Branch of astronomy using gravitational waves
Hertzsprung–Russell diagram – Scatter plot of stars showing the relationship of luminosity to stellar classification
High-energy astronomy – Study of astronomical objects that release highly energetic electromagnetic radiation
Important publications in astrophysics
List of astronomers – (includes astrophysicists)
Neutrino astronomy – Observing low-mass stellar particles
Timeline of gravitational physics and relativity
Timeline of knowledge about galaxies, clusters of galaxies, and large-scale structure
Timeline of white dwarfs, neutron stars, and supernovae – Chronological list of developments in knowledge and records
== References ==
== Further reading ==
Longair, Malcolm S. (2006), The Cosmic Century: A History of Astrophysics and Cosmology, Cambridge: Cambridge University Press, ISBN 978-0-521-47436-8
Astrophysics, Scholarpedia Expert articles
== External links ==
Astronomy and Astrophysics, a European Journal
Astrophysical Journal
Cosmic Journey: A History of Scientific Cosmology Archived 2008-10-21 at the Wayback Machine from the American Institute of Physics
International Journal of Modern Physics D from World Scientific
List and directory of peer-reviewed Astronomy / Astrophysics Journals
Ned Wright's Cosmology Tutorial, UCLA
Boundary conditions in fluid dynamics are the set of constraints to boundary value problems in computational fluid dynamics. These boundary conditions include inlet boundary conditions, outlet boundary conditions, wall boundary conditions, constant pressure boundary conditions, axisymmetric boundary conditions, symmetric boundary conditions, and periodic or cyclic boundary conditions.
Transient problems additionally require initial conditions, in which initial values of the flow variables are specified at the nodes of the flow domain. The various types of boundary conditions used in CFD for different conditions and purposes are discussed as follows.
== Inlet boundary conditions ==
In inlet boundary conditions, the distribution of all flow variables needs to be specified at the inlet boundaries, mainly the flow velocity. This type of boundary condition is common and is specified mostly where the inlet flow velocity is known.
== Outlet boundary condition ==
In outlet boundary conditions, the distribution of all flow variables needs to be specified, mainly the flow velocity. This can be thought of as the counterpart of the inlet boundary condition. This type of boundary condition is common and is specified mostly where the outlet velocity is known.
The flow attains a fully developed state, in which no change occurs in the flow direction, when the outlet is selected far away from geometrical disturbances. In such a region, an outlet can be placed and the gradients of all variables except pressure can be equated to zero in the flow direction.
== No-slip boundary condition ==
The most common boundary that comes upon in confined fluid flow problems is the wall of the conduit. The appropriate requirement is called the no-slip boundary condition, wherein the normal component of velocity is fixed at zero, and the tangential component is set equal to the velocity of the wall. It may run counter to intuition, but the no-slip condition has been firmly established in both experiment and theory, though only after decades of controversy and debate.
{\displaystyle V_{\text{normal}}=0}
{\displaystyle V_{\text{tangential}}=V_{\text{wall}}}
Heat transfer through the wall can be specified; if the walls are considered adiabatic, the heat transfer across the wall is set to zero:
{\displaystyle Q_{\text{Adiabatic Walls}}=0}
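As a minimal illustration (the array names and grid layout below are assumptions, not part of any standard), applying a no-slip, impermeable, adiabatic wall on the bottom row of a simple finite-difference grid amounts to:

# No-slip wall at the bottom row j = 0 of a 2-D grid: tangential velocity = wall velocity,
# normal velocity = 0, and zero heat flux enforced by copying the adjacent interior temperature.
import numpy as np

nx, ny = 64, 32
u = np.zeros((ny, nx))          # wall-tangential velocity component
v = np.zeros((ny, nx))          # wall-normal velocity component
T = np.full((ny, nx), 300.0)    # temperature field
u_wall = 0.0                    # nonzero for a moving wall

u[0, :] = u_wall                # no-slip condition
v[0, :] = 0.0                   # impermeable wall
T[0, :] = T[1, :]               # adiabatic wall: dT/dn = 0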
== Constant pressure boundary conditions ==
This type of boundary condition is used where the boundary values of pressure are known and the exact details of the flow distribution are unknown. This mainly includes pressure inlet and outlet conditions. Typical examples that utilize this boundary condition include buoyancy-driven flows, internal flows with multiple outlets, free surface flows and external flows around objects. An example is a flow outlet into the atmosphere, where the pressure is atmospheric.
== Axisymmetric boundary conditions ==
In this boundary condition, the model is axisymmetric with respect to the main axis, such that at a particular radius r = R, for all θ and at each z = Z slice, each flow variable has the same value. A good example is the flow in a circular pipe where the flow and pipe axes coincide.
{\displaystyle V_{r}(R,\theta ,Z)=Constant}
at {\displaystyle (r=R,\theta ,Z)}.
== Symmetric boundary condition ==
In this boundary condition, it is assumed that the same physical processes exist on the two sides of the boundary. All the variables have the same values and gradients at the same distance from the boundary. It acts as a mirror that reflects all the flow distribution to the other side.
The conditions at symmetric boundary are no flow across boundary and no scalar flux across boundary.
A good example is a pipe flow with a symmetric obstacle in the flow. The obstacle divides the upper flow and the lower flow into mirrored halves.
== Periodic or cyclic boundary condition ==
A periodic or cyclic boundary condition arises from a different type of symmetry in a problem, in which a component has a pattern repeated more than twice in the flow distribution, so that the mirror-image requirement of the symmetric boundary condition is violated. A good example would be a swept vane pump, where a given flow passage is repeated four times in r–θ coordinates. The cyclic-symmetric areas should have the same flow variables and distribution, and this should be satisfied in every Z-slice.
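A minimal sketch of a periodic (cyclic) boundary in one direction using ghost cells (the array layout is illustrative):

# Periodic boundary with one ghost cell on each side: the ghost values are copies of the
# interior values from the opposite side of the repeating segment.
import numpy as np

phi = np.zeros(66)                               # 64 interior cells + 2 ghost cells
phi[1:-1] = np.random.default_rng(0).random(64)  # some interior field
phi[0] = phi[-2]                                 # left ghost  <- last interior cell
phi[-1] = phi[1]                                 # right ghost <- first interior cell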
== See also ==
Flow conditioning
Initial value problem
== Notes ==
== References ==
Versteeg (1995). "Chapter 9". An Introduction to Computational Fluid Dynamics: The Finite Volume Method, 2/e. Longman Scientific & Technical. pp. 192–206. ISBN 0-582-21884-5.
In computational fluid dynamics, the immersed boundary method originally referred to an approach developed by Charles Peskin in 1972 to simulate fluid-structure (fiber) interactions. Treating the coupling of the structure deformations and the fluid flow poses a number of challenging problems for numerical simulations (the elastic boundary changes the flow of the fluid and the fluid moves the elastic boundary simultaneously). In the immersed boundary method the fluid is represented in an Eulerian coordinate system and the structure is represented in Lagrangian coordinates. For Newtonian fluids governed by the Navier–Stokes equations, the fluid equations are
{\displaystyle \rho \left({\frac {\partial {u}({x},t)}{\partial {t}}}+{u}\cdot \nabla {u}\right)=-\nabla p+\mu \,\Delta u(x,t)+f(x,t)}
and if the flow is incompressible, we have the further condition that
{\displaystyle \nabla \cdot u=0.\,}
The immersed structures are typically represented as a collection of one-dimensional fibers, denoted by {\displaystyle \Gamma }. Each fiber can be viewed as a parametric curve {\displaystyle X(s,t)} where {\displaystyle s} is the Lagrangian coordinate along the fiber and {\displaystyle t} is time. The physics of the fiber is represented via a fiber force distribution function {\displaystyle F(s,t)}. Spring forces, bending resistance or any other type of behavior can be built into this term. The force exerted by the structure on the fluid is then interpolated as a source term in the momentum equation using
{\displaystyle f(x,t)=\int _{\Gamma }F(s,t)\,\delta {\big (}x-X(s,t){\big )}\,ds,}
where {\displaystyle \delta } is the Dirac δ function. The forcing can be extended to multiple dimensions to model elastic surfaces or three-dimensional solids. Assuming a massless structure, the elastic fiber moves with the local fluid velocity and can be interpolated via the delta function
{\displaystyle {\frac {\partial X(s,t)}{\partial t}}=u(X,t)=\int _{\Omega }u(x,t)\,\delta {\big (}x-X(s,t){\big )}\,dx,}
where {\displaystyle \Omega } denotes the entire fluid domain.
Discretization of these equations can be done by assuming an Eulerian grid on the fluid and a separate Lagrangian grid on the fiber.
Approximations of the Delta distribution by smoother functions will allow us to interpolate between the two grids.
Any existing fluid solver can be coupled to a solver for the fiber equations to solve the Immersed Boundary equations.
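A minimal one-dimensional sketch of the two delta-function operations above — spreading Lagrangian fiber forces onto the Eulerian grid and interpolating the grid velocity back to the fiber points — using one common smoothed delta function with support 4h (the point positions, forces and velocity field below are made up purely for illustration):

# Smoothed delta: delta_h(r) = (1 + cos(pi r / (2 h))) / (4 h) for |r| < 2 h, else 0.
import numpy as np

def delta_h(r, h):
    out = np.zeros_like(r)
    m = np.abs(r) < 2.0 * h
    out[m] = (1.0 + np.cos(np.pi * r[m] / (2.0 * h))) / (4.0 * h)
    return out

h = 1.0 / 64
x = np.arange(64) * h                       # Eulerian grid nodes
X = np.array([0.31, 0.52, 0.73])            # Lagrangian fiber points (illustrative)
F = np.array([1.0, -2.0, 0.5])              # fiber force density at those points
ds = 0.21                                   # Lagrangian segment length (illustrative)

# Spread forces to the grid:  f(x) = sum_k F_k * delta_h(x - X_k) * ds
f = sum(Fk * delta_h(x - Xk, h) * ds for Fk, Xk in zip(F, X))
# Interpolate a grid velocity back to the fiber: U_k = sum_j u(x_j) * delta_h(x_j - X_k) * h
u = np.sin(2.0 * np.pi * x)
U = np.array([np.sum(u * delta_h(x - Xk, h) * h) for Xk in X])
print(f.shape, U)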
Variants of this basic approach have been applied to simulate a wide variety of mechanical systems involving elastic structures which interact with fluid flows.
Since the original development of this method by Peskin, a variety of approaches have been developed. These include stochastic formulations for microscopic systems, viscoelastic soft materials and complex fluids, such as the Stochastic Immersed Boundary Methods of Atzberger, Kramer, and Peskin; methods for simulating flows over complicated immersed solid bodies on grids that do not conform to the surface of the body (Mittal and Iaccarino); and other approaches that incorporate mass and rotational degrees of freedom (Olson, Lim, Cortez). Methods for complicated body shapes include the immersed interface method, the Cartesian grid method, the ghost fluid method and the cut-cell methods; these are often categorized into continuous forcing and discrete forcing immersed boundary methods. Methods have also been developed for simulations of viscoelastic fluids, curved fluid interfaces, microscopic biophysical systems (proteins in lipid bilayer membranes, swimmers), and engineered devices, such as the Stochastic Immersed Boundary Methods of Atzberger, Kramer, and Peskin, the Stochastic Eulerian Lagrangian Methods of Atzberger, the Massed Immersed Boundary Methods of Mori, and the Rotational Immersed Boundary Methods of Olson, Lim, and Cortez.
In general, for immersed boundary methods and related variants, there is an active research community that is still developing new techniques and related software implementations and incorporating related techniques into simulation packages and CAD engineering software. For more details see below.
== See also ==
Stochastic Eulerian Lagrangian methods
Stokesian dynamics
Volume of fluid method
Level-set method
Marker-and-cell method
== Software: Numerical codes ==
FloEFD: Commercial CFD IBM code
Advanced Simulation Library
Mango-Selm : Immersed Boundary Methods and SELM Simulations, 3D Package, (Python interface, LAMMPS MD Integration), P. Atzberger, UCSB
Stochastic Immersed Boundary Methods in 3D, P. Atzberger, UCSB
Immersed Boundary Method for Uniform Meshes in 2D, A. Fogelson, Utah
IBAMR : Immersed Boundary Method for Adaptive Meshes in 3D, B. Griffith, NYU.
IB2d: Immersed Boundary Method for MATLAB and Python in 2D with 60+ examples, N.A. Battista, TCNJ
ESPResSo: Immersed Boundary Method for soft elastic objects
CFD IBM code based on OpenFoam
sdfibm: Another CFD IBM code based on OpenFoam
SimScale: Immersed Boundary Method for fluid mechanics and conjugate heat transfer simulation in the cloud
== Notes ==
== References ==
Atzberger, Paul J. (2011). "Stochastic Eulerian Lagrangian Methods for Fluid Structure Interactions with Thermal Fluctuations". Journal of Computational Physics. 230 (8): 2821–2837. arXiv:1009.5648. Bibcode:2011JCoPh.230.2821A. doi:10.1016/j.jcp.2010.12.028. S2CID 6067032.
Atzberger, Paul J.; Kramer, Peter R.; Peskin, Charles S. (2007). "A Stochastic Immersed Boundary Method for Fluid-Structure Dynamics at Microscopic Length Scales". Journal of Computational Physics. 224 (2): 1255–1292. arXiv:0910.5748. Bibcode:2007JCoPh.224.1255A. doi:10.1016/j.jcp.2006.11.015. S2CID 17977915.
Atzberger, Paul (2013), "Incorporating Shear into Stochastic Eulerian Lagrangian Methods for Rheological Studies of Complex Fluids and Soft Materials", Physica D, 265: 57–70, arXiv:2212.10651, doi:10.1016/j.physd.2013.09.002
Jindal, S.; Khalighi, B.; Johnson, J.; Chen, K. (2007), "The Immersed Boundary CFD Approach for Complex Aerodynamics Flow Predictions", SAE Technical Paper Series, vol. 1, doi:10.4271/2007-01-0109.
Atzberger, Paul (2016). "Hydrodynamic Coupling of Particle Inclusions Embedded in Curved Lipid Bilayer Membranes". Soft Matter. 12 (32): 6685–6707. arXiv:1601.06461. doi:10.1039/C6SM00194G. PMID 27373277..
Kim, Jungwoo; Kim, Dongjoo; Choi, Haecheon (2001). "An Immersed-Boundary Finite Volume Method for Simulations of Flow in Complex Geometries". Journal of Computational Physics. 171 (1): 132–150. Bibcode:2001JCoPh.171..132K. doi:10.1006/jcph.2001.6778.
Rower, David A.; Padidar, Misha; Atzberger, Paul J. (April 2022). "Surface fluctuating hydrodynamics methods for the drift-diffusion dynamics of particles and microstructures within curved fluid interfaces". Journal of Computational Physics. 455: 110994. arXiv:1906.01146. doi:10.1016/j.jcp.2022.110994.
Mittal, Rajat; Iaccarino, Gianluca (2005). "Immersed Boundary Methods". Annual Review of Fluid Mechanics. 37 (1): 239–261. Bibcode:2005AnRFM..37..239M. doi:10.1146/annurev.fluid.37.061903.175743.
Mori, Yoichiro; Peskin, Charles S. (2008). "Implicit Second-Order Immersed Boundary Methods with Boundary Mass". Computer Methods in Applied Mechanics and Engineering. 197 (25–28): 2049–2067. Bibcode:2008CMAME.197.2049M. doi:10.1016/j.cma.2007.05.028.
Peskin, Charles S. (2002). "The immersed boundary method". Acta Numerica. 11: 479–517. doi:10.1017/S0962492902000077.
Peskin, Charles S. (1977). "Numerical analysis of blood flow in the heart". Journal of Computational Physics. 25 (3): 220–252. Bibcode:1977JCoPh..25..220P. doi:10.1016/0021-9991(77)90100-0.
Roma, Alexandre M.; Peskin, Charles S.; Berger, Marsha J. (1999). "An Adaptive Version of the Immersed Boundary Method". Journal of Computational Physics. 153 (2): 509–534. Bibcode:1999JCoPh.153..509R. doi:10.1006/jcph.1999.6293.
Singh Bhalla, Amneet Pal; Bale, Rahul; Griffith, Boyce E.; Patankar, Neelesh A. (2013). "A unified mathematical framework and an adaptive numerical method for fluid–structure interaction with rigid, deforming, and elastic bodies". Journal of Computational Physics. 250: 446–476. Bibcode:2013JCoPh.250..446B. doi:10.1016/j.jcp.2013.04.033.
Zhu, Luoding; Peskin, Charles S. (2002). "Simulation of a Flapping Flexible Filament in a Flowing Soap Film by the Immersed Boundary Method" (PDF). Journal of Computational Physics. 179 (2): 452–468. Bibcode:2002JCoPh.179..452Z. doi:10.1006/jcph.2002.7066. S2CID 947507. Archived from the original (PDF) on 2020-01-01. | Wikipedia/Immersed_boundary_method |
In fluid dynamics, the entrance length is the distance a flow travels after entering a pipe before the flow becomes fully developed. Entrance length refers to the length of the entry region, the area following the pipe entrance where effects originating from the interior wall of the pipe propagate into the flow as an expanding boundary layer. When the boundary layer expands to fill the entire pipe, the developing flow becomes a fully developed flow, where flow characteristics no longer change with increased distance along the pipe. Many different entrance lengths exist to describe a variety of flow conditions. Hydrodynamic entrance length describes the formation of a velocity profile caused by viscous forces propagating from the pipe wall. Thermal entrance length describes the formation of a temperature profile. Awareness of entrance length may be necessary for the effective placement of instrumentation, such as fluid flow meters.
== Hydrodynamic entrance length ==
The hydrodynamic entrance region refers to the area of a pipe where fluid entering a pipe develops a velocity profile due to viscous forces propagating from the interior wall of a pipe. This region is characterized by a non-uniform flow. The fluid enters a pipe at a uniform velocity, then fluid particles in the layer in contact with the surface of the pipe come to a complete stop due to the no-slip condition. Due to viscous forces within the fluid, the layer in contact with the pipe surface resists the motion of adjacent layers and slows adjacent layers of fluid down gradually, forming a velocity profile. For the conservation of mass to hold true, the velocity of layers of the fluid in the center of the pipe increases to compensate for the reduced velocities of the layers of fluid near the pipe surface. This develops a velocity gradient across the cross-section of the pipe.
=== Boundary layer ===
The layer in which the shearing viscous forces are significant is called the boundary layer. This boundary layer is a hypothetical concept. It divides the flow in the pipe into two regions:
Boundary layer region: The region in which viscous effects and the velocity changes are significant.
The irrotational (core) flow region: The region in which viscous effects and velocity changes are negligible, also known as the inviscid core.
When the fluid just enters the pipe, the thickness of the boundary layer gradually increases from zero moving in the direction of fluid flow and eventually reaches the pipe center and fills the entire pipe. The region from the entrance of the pipe to the point where the boundary layer covers the entire pipe is termed the hydrodynamic entrance region, and the length of the pipe in this region is termed the hydrodynamic entry length. In this region the velocity profile develops, and the flow is therefore called hydrodynamically developing flow. Beyond this region the velocity profile is fully developed and continues unchanged; this region is called the hydrodynamically fully developed region. However, the flow is not considered fully developed until the normalized temperature profile also becomes constant.
In the case of laminar flow, the velocity profile in the fully developed region is parabolic, but in the case of turbulent flow it is somewhat flatter due to vigorous mixing in the radial direction and eddy motion.
The velocity profile remains unchanged in the fully developed region.
The hydrodynamically fully developed velocity profile for laminar flow satisfies
{\displaystyle {\frac {\partial u(r,x)}{\partial x}}=0\quad \Rightarrow u=u(r)}
where x is the flow direction.
=== Shear stress ===
In the hydrodynamic entrance region, the wall shear stress, τ_w, is highest at the pipe inlet, where the boundary layer thickness is smallest, and it decreases along the flow direction. That is why the pressure drop is highest in the entrance region of a pipe, which increases the average friction factor for the whole pipe; this increase in the friction factor is negligible for long pipes. In the fully developed region, the pressure gradient and the shear stress in the flow are in balance.
=== Calculating hydrodynamic entrance length ===
The length of the hydrodynamic entrance region along the pipe is called the hydrodynamic entry length. It is a function of the Reynolds number of the flow. In the case of laminar flow, this length is given by:
{\displaystyle L_{h,laminar}=0.0575Re_{D}D}
where Re_D is the Reynolds number (based on the pipe diameter) and D is the diameter of the pipe. In the case of turbulent flow,
{\displaystyle L_{h,turbulent}=1.359D(Re_{D})^{1/4}.}
Thus, the entry length in turbulent flow is much shorter than in laminar flow. In most practical engineering applications the entrance effect becomes insignificant beyond a pipe length of about 10 diameters, and hence it is approximated as:
{\displaystyle L_{h,turbulent}\approx 10D}
Other authors give much longer entrance lengths, e.g. Nikuradse recommends 40D and Lien et al. recommend 150D for high-Reynolds-number flows.
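The correlations above are straightforward to evaluate. The following Python sketch (a minimal illustration with hypothetical inputs, not part of the original text) returns the laminar and turbulent entry-length estimates for a given Reynolds number and pipe diameter.

```python
def hydrodynamic_entry_length(reynolds: float, diameter: float) -> dict:
    """Estimate hydrodynamic entry lengths (same units as the diameter).

    Uses the correlations quoted above:
      laminar:   L_h = 0.0575 * Re_D * D
      turbulent: L_h = 1.359 * D * Re_D**0.25  (often approximated as 10 D)
    """
    return {
        "laminar": 0.0575 * reynolds * diameter,
        "turbulent": 1.359 * diameter * reynolds ** 0.25,
        "turbulent_rule_of_thumb": 10.0 * diameter,
    }

# Example: a 50 mm pipe in the laminar and turbulent regimes.
print(hydrodynamic_entry_length(reynolds=2_000, diameter=0.05))
print(hydrodynamic_entry_length(reynolds=50_000, diameter=0.05))
```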
=== Entry length for pipes with non-circular cross-sections ===
In the case of a pipe with a non-circular cross-section, the same formulas can be used to find the entry length with a small modification. A new parameter, the hydraulic diameter, relates the flow in a non-circular pipe to that in a circular pipe, and is valid as long as the cross-sectional shape is not too exaggerated. The hydraulic diameter is defined as:
{\displaystyle D_{h}={\frac {4A}{P}}}
where A is the cross-sectional area and P is the wetted perimeter of the pipe.
=== Average velocity of fully developed flow ===
By doing a force balance on a small volume element in the fully developed flow region of the pipe (laminar flow), the velocity is found to be a function of radius only, i.e. it does not depend on the axial distance from the entry point:
{\displaystyle u(r)=-{\frac {R^{2}}{4\mu }}{\frac {dP}{dx}}\left(1-{\frac {r^{2}}{R^{2}}}\right)}
where dP/dx is constant.
By definition, the average velocity is given by
{\displaystyle V_{avg}={\frac {\int u\mathrm {d} A}{A_{c}}}}
where A_c is the cross-sectional area.
Thus,
{\displaystyle {\begin{aligned}V_{avg}&={\frac {2}{R^{2}}}\int _{0}^{R}u(r)r\mathrm {d} r\\&=-{\frac {2}{R^{2}}}\int _{0}^{R}{\frac {R^{2}}{4\mu }}{\frac {dP}{dx}}\left(1-{\frac {r^{2}}{R^{2}}}\right)r\mathrm {d} r\\&=-{\frac {R^{2}}{8\mu }}{\frac {dP}{dx}}\end{aligned}}}
For fully developed flow, the maximum velocity occurs at r = 0. Thus,
{\displaystyle U_{max}=2V_{avg}.}
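The factor of two can be checked numerically. The following Python sketch (with made-up parameter values, purely illustrative) integrates the parabolic profile over the cross-section and compares the average with the centerline velocity.

```python
import numpy as np

# Illustrative values only (SI units); any consistent set would do.
R = 0.025        # pipe radius [m]
mu = 1.0e-3      # dynamic viscosity [Pa s]
dPdx = -2.0      # axial pressure gradient [Pa/m]

r = np.linspace(0.0, R, 10_001)
u = -(R**2) / (4.0 * mu) * dPdx * (1.0 - (r / R) ** 2)   # parabolic profile u(r)

# Area-weighted average over the circular cross-section: V_avg = (2/R^2) * integral of u(r) r dr
integrand = u * r
v_avg = 2.0 / R**2 * float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(r)) / 2.0)

u_max = u[0]                        # maximum velocity, at r = 0
print(v_avg, u_max, u_max / v_avg)  # ratio is approximately 2, matching U_max = 2 V_avg
```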
== Thermal entrance length ==
The thermal entrance length is the distance for incoming flow in a pipe to form a temperature profile with a stable shape. The shape of the fully developed temperature profile is determined by temperature and heat flux conditions along the inside wall of the pipe, as well as fluid properties.
=== Overview ===
Fully developed heat flow in a pipe can be considered in the following situation. If the wall of the pipe is constantly heated or cooled so that the heat flux from the wall to the fluid via convection is a fixed value, then the bulk temperature of the fluid steadily increases or decreases respectively at a fixed rate along the flow direction.
An example can be a pipe entirely covered by an electrical heating pad with the flow being introduced after a uniform heat flux from the pad is achieved. At some distance away from the entrance of the fluid, fully developed heat flow is achieved when the heat transfer coefficient of the fluid becomes constant and the temperature profile has the same shape along the flow. This distance is defined as the thermal entrance length, which is important for engineers to design efficient heat transfer processes.
=== Laminar flow ===
For laminar flow, the thermal entrance length is a function of pipe diameter and the dimensionless Reynolds number and Prandtl number.
{\displaystyle \left({\frac {x_{fd,t}}{D}}\right)_{laminar}\approx 0.05Re_{D}Pr}
where Re_D is the Reynolds number (based on the pipe diameter) and Pr is the Prandtl number.
The Prandtl number modifies the hydrodynamic entrance length to determine thermal entrance length. The Prandtl number is the dimensionless number for the ratio of momentum diffusivity to thermal diffusivity. The thermal entrance length for a fluid with a Prandtl number greater than one will be longer than the hydrodynamic entrance length, and shorter if the Prandtl number is less than one. For example, molten sodium has a low Prandtl number of 0.004, so the thermal entrance length will be significantly shorter than the hydraulic entrance length.
For turbulent flows, thermal entrance length may be approximated solely based on pipe diameter.
{\displaystyle \left({\frac {x_{fd,t}}{D}}\right)_{turbulent}\approx 10}
where x_{fd,t} is the thermal entrance length and D is the pipe inner diameter.
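A short Python sketch (illustrative only, not from the original text) makes the contrast between the laminar and turbulent estimates, and the effect of the Prandtl number, concrete:

```python
def thermal_entry_length(reynolds: float, prandtl: float, diameter: float,
                         turbulent: bool = False) -> float:
    """Thermal entrance length x_fd,t (same units as the diameter).

    laminar:   x_fd,t ~ 0.05 * Re_D * Pr * D
    turbulent: x_fd,t ~ 10 * D   (roughly independent of Re and Pr)
    """
    if turbulent:
        return 10.0 * diameter
    return 0.05 * reynolds * prandtl * diameter

# Laminar flow in a 20 mm pipe: water-like fluid (Pr ~ 7) vs. liquid sodium (Pr ~ 0.004).
print(thermal_entry_length(reynolds=1000, prandtl=7.0, diameter=0.02))    # long
print(thermal_entry_length(reynolds=1000, prandtl=0.004, diameter=0.02))  # very short
```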
=== Heat transfer ===
The development of the temperature profile in the flow is driven by heat transfer determined conditions on the inside surface of the pipe and the fluid. Heat transfer may be a result of a constant heat flux or constant surface temperature. Constant heat flux may be caused by joule heating from a heat source, like heat tape, wrapped around the pipe. Constant temperature conditions may be produced by a phase transition, such as condensation of saturated steam on a pipe surface.
Newton's law of cooling describes convection, the main form of heat transport between the fluid and the pipe:
{\displaystyle q''_{s}=h(T_{s}-T_{m})}
where q''_s is the heat flux into the fluid, h is the convection coefficient, T_s is the surface temperature, and T_m is the mean stream temperature.
A constant surface heat flux results in T_s − T_m becoming constant as the flow develops, while a constant surface temperature results in T_s − T_m approaching zero.
=== Thermally fully developed flow ===
Unlike hydrodynamically developed flow, a constant profile shape is used to define thermally fully developed flow, because the temperature continually approaches the ambient temperature. Dimensionless analysis of the change in profile shape defines when a flow is thermally fully developed.
Requirement for thermally fully developed flow:
{\displaystyle {\frac {\partial }{\partial x}}\left({\frac {T_{s}-T}{T_{s}-T_{m}}}\right)_{fd,t}=0}
Thermally developed flow results in reduced heat transfer compared to developing flow because the difference between the surface temperature of the pipe and the mean temperature of the flow is greater than the temperature difference between surface temperature of the pipe and the temperature of the fluid near the pipe boundary.
== Concentration entrance length ==
The concentration entrance length describes the length needed for the concentration profile in a flow to be fully developed. The concentration entrance length can be determined by relating it to the hydrodynamic entrance length with the Schmidt number or by experimental techniques. The Schmidt number describes the ratio of momentum diffusivity to mass diffusivity.
{\displaystyle x_{fd,c}\approx 0.05DRe_{D}Sc}
where x_{fd,c} is the concentration entrance length, D is the pipe inner diameter, Re_D is the Reynolds number (based on the pipe diameter), and Sc is the Schmidt number.
== Applications ==
Understanding the entrance length is important for the design and analysis of flow systems. The entrance region will have different velocity, temperature, and other profiles than exist in the fully developed region of the pipe.
=== Flow meters ===
Many types of flow instrumentation, such as flow meters, require a fully developed flow to function properly. Common flow meters, including vortex flow meters and differential-pressure flow meters, require hydrodynamically fully developed flow. Hydraulically fully developed flow is commonly achieved by having long, straight sections of pipe before the flow meter. Alternatively, flow conditioners and straightening devices may be used to produce the desired flow.
=== Wind tunnels ===
Wind tunnels use an inviscid flow of air to test the aerodynamics of an object. Flow straighteners, which consist of many parallel ducts which limit turbulence, are used to produce inviscid flow. Entrance length must be considered in the design of wind tunnels, because the object being tested must be located in the irrotational flow region, between the flow straighteners and the entrance length.
== Exit length ==
Similar to the development of flow at the entrance of the pipe, the flow velocity profile changes before the exit of a pipe. The exit length is much shorter than the entrance length, and is not significant at moderate to high Reynolds numbers.
Hydraulic exit length for laminar flows may be approximated as:
{\displaystyle \left({\frac {x}{D}}\right)_{Lam}\approx {\begin{cases}{\frac {1}{2}}&{\text{Low }}Re\\0&Re>100\end{cases}}}
where x is the exit length, D is the pipe inner diameter, and Re is the Reynolds number.
== See also ==
Fluid dynamics
Heat transfer
Laminar flow
Thermal entrance length
Turbulent flow
Viscosity
== References == | Wikipedia/Entrance_length_(fluid_dynamics) |
In continuum mechanics and thermodynamics, a control volume (CV) is a mathematical abstraction employed in the process of creating mathematical models of physical processes. In an inertial frame of reference, it is a fictitious region of a given volume fixed in space or moving with constant flow velocity through which the continuum (a continuous medium such as gas, liquid or solid) flows. The closed surface enclosing the region is referred to as the control surface.
At steady state, a control volume can be thought of as an arbitrary volume in which the mass of the continuum remains constant. As a continuum moves through the control volume, the mass entering the control volume is equal to the mass leaving the control volume. At steady state, and in the absence of work and heat transfer, the energy within the control volume remains constant. It is analogous to the classical mechanics concept of the free body diagram.
== Overview ==
Typically, to understand how a given physical law applies to the system under consideration, one first begins by considering how it applies to a small control volume, or "representative volume". There is nothing special about a particular control volume; it simply represents a small part of the system to which physical laws can be easily applied. This gives rise to what is termed a volumetric, or volume-wise, formulation of the mathematical model.
One can then argue that since the physical laws behave in a certain way on a particular control volume, they behave the same way on all such volumes, since that particular control volume was not special in any way. In this way, the corresponding point-wise formulation of the mathematical model can be developed so it can describe the physical behaviour of an entire (and maybe more complex) system.
In continuum mechanics the conservation equations (for instance, the Navier-Stokes equations) are in integral form. They therefore apply on volumes. Finding forms of the equation that are independent of the control volumes allows simplification of the integral signs. The control volumes can be stationary or they can move with an arbitrary velocity.
== Substantive derivative ==
Computations in continuum mechanics often require that the regular time derivative operator d/dt is replaced by the substantive derivative operator D/Dt.
This can be seen as follows.
Consider a bug that is moving through a volume where there is some scalar, e.g. pressure, that varies with time and position: p = p(t, x, y, z).
If the bug during the time interval from t to t + dt moves from (x, y, z) to (x + dx, y + dy, z + dz), then the bug experiences a change dp in the scalar value,
{\displaystyle dp={\frac {\partial p}{\partial t}}dt+{\frac {\partial p}{\partial x}}dx+{\frac {\partial p}{\partial y}}dy+{\frac {\partial p}{\partial z}}dz}
(the total differential). If the bug is moving with a velocity v = (v_x, v_y, v_z), the change in particle position is v dt = (v_x dt, v_y dt, v_z dt), and we may write
{\displaystyle {\begin{alignedat}{2}dp&={\frac {\partial p}{\partial t}}dt+{\frac {\partial p}{\partial x}}v_{x}dt+{\frac {\partial p}{\partial y}}v_{y}dt+{\frac {\partial p}{\partial z}}v_{z}dt\\&=\left({\frac {\partial p}{\partial t}}+{\frac {\partial p}{\partial x}}v_{x}+{\frac {\partial p}{\partial y}}v_{y}+{\frac {\partial p}{\partial z}}v_{z}\right)dt\\&=\left({\frac {\partial p}{\partial t}}+\mathbf {v} \cdot \nabla p\right)dt.\end{alignedat}}}
where ∇p is the gradient of the scalar field p. So:
{\displaystyle {\frac {d}{dt}}={\frac {\partial }{\partial t}}+\mathbf {v} \cdot \nabla .}
If the bug is just moving with the flow, the same formula applies, but now the velocity vector, v, is that of the flow, u.
The last parenthesized expression is the substantive derivative of the scalar pressure.
Since the pressure p in this computation is an arbitrary scalar field, we may abstract it and write the substantive derivative operator as
{\displaystyle {\frac {D}{Dt}}={\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla .}
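To make the operator concrete, here is a small NumPy sketch (a hypothetical example, not from the original article) that evaluates Dp/Dt on a 1-D periodic grid for a field advected with uniform speed, for which the substantive derivative is exactly zero.

```python
import numpy as np

# Illustrative field: p(t, x) = sin(x - c t), advected with speed u = c.
c = 0.7
n, L = 400, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
dx, dt, t = L / n, 1e-4, 1.3

p_now = np.sin(x - c * t)
p_next = np.sin(x - c * (t + dt))

dpdt = (p_next - p_now) / dt                                   # Eulerian time derivative
dpdx = (np.roll(p_now, -1) - np.roll(p_now, 1)) / (2.0 * dx)   # central-difference gradient

u = c * np.ones(n)
Dp_Dt = dpdt + u * dpdx   # material derivative; close to zero up to discretization error

print(np.max(np.abs(Dp_Dt)))
```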
== See also ==
Continuum mechanics
Cauchy momentum equation
Special relativity
Substantive derivative
== References ==
James R. Welty, Charles E. Wicks, Robert E. Wilson & Gregory Rorrer Fundamentals of Momentum, Heat, and Mass Transfer ISBN 0-471-38149-7
=== Notes ===
== External links ==
=== PDFs ===
Integral Approach to the Control Volume analysis of Fluid Flow | Wikipedia/Control_surface_(fluid_dynamics) |
Dimensionless numbers (or characteristic numbers) have an important role in analyzing the behavior of fluids and their flow as well as in other transport phenomena. They include the Reynolds and the Mach numbers, which describe as ratios the relative magnitude of fluid and physical system characteristics, such as density, viscosity, speed of sound, and flow speed.
To compare a real situation (e.g. an aircraft) with a small-scale model it is necessary to keep the important characteristic numbers the same. Names and formulation of these numbers were standardized in ISO 31-12 and in ISO 80000-11.
== Diffusive numbers in transport phenomena ==
As a general example of how dimensionless numbers arise in fluid mechanics, the classical numbers in transport phenomena of mass, momentum, and energy are principally analyzed by the ratio of effective diffusivities in each transport mechanism. The six dimensionless numbers give the relative strengths of the different phenomena of inertia, viscosity, conductive heat transport, and diffusive mass transport. (In the table, the diagonals give common symbols for the quantities, and the given dimensionless number is the ratio of the left column quantity over top row quantity; e.g. Re = inertial force/viscous force = vd/ν.) These same quantities may alternatively be expressed as ratios of characteristic time, length, or energy scales. Such forms are less commonly used in practice, but can provide insight into particular applications.
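As a small illustration of these diffusivity ratios, the following Python sketch (hypothetical, water-like values, not from the article) computes a few of the classical groups from momentum, thermal, and mass diffusivities.

```python
# Characteristic scales and diffusivities (illustrative values, SI units).
v, d = 1.0, 0.01          # flow speed [m/s], length scale [m]
nu = 1.0e-6               # momentum diffusivity (kinematic viscosity) [m^2/s]
alpha = 1.4e-7            # thermal diffusivity [m^2/s]
D_ab = 1.0e-9             # mass diffusivity [m^2/s]

Re = v * d / nu           # inertia / viscous forces
Pr = nu / alpha           # momentum / thermal diffusivity
Sc = nu / D_ab            # momentum / mass diffusivity
Pe_heat = Re * Pr         # heat advection / conduction
Le = alpha / D_ab         # thermal / mass diffusivity

print(f"Re={Re:.0f}, Pr={Pr:.2f}, Sc={Sc:.0f}, Pe={Pe_heat:.0f}, Le={Le:.0f}")
```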
== Droplet formation ==
Droplet formation mostly depends on momentum, viscosity and surface tension. In inkjet printing for example, an ink with a too high Ohnesorge number would not jet properly, and an ink with a too low Ohnesorge number would be jetted with many satellite drops. Not all of the quantity ratios are explicitly named, though each of the unnamed ratios could be expressed as a product of two other named dimensionless numbers.
== List ==
All numbers are dimensionless quantities. See other article for extensive list of dimensionless quantities. Certain dimensionless quantities of some importance to fluid mechanics are given below:
== References ==
Tropea, C.; Yarin, A.L.; Foss, J.F. (2007). Springer Handbook of Experimental Fluid Mechanics. Springer-Verlag. | Wikipedia/Dimensionless_numbers_in_fluid_mechanics |
In computational fluid dynamics, the Stochastic Eulerian Lagrangian Method (SELM) is an approach to capture essential features of fluid-structure interactions subject to thermal fluctuations while introducing approximations which facilitate analysis and the development of tractable numerical methods. SELM is a hybrid approach utilizing an Eulerian description for the continuum hydrodynamic fields and a Lagrangian description for elastic structures. Thermal fluctuations are introduced through stochastic driving fields. Approaches are also introduced for the stochastic fields of the SPDEs to obtain numerical methods that take numerical discretization artifacts into account so as to maintain statistical principles, such as fluctuation-dissipation balance and other properties in statistical mechanics.
The SELM fluid-structure equations typically used are
{\displaystyle \rho {\frac {d{u}}{d{t}}}=\mu \,\Delta u-\nabla p+\Lambda [\Upsilon (V-\Gamma {u})]+\lambda +f_{\mathrm {thm} }(x,t)}
{\displaystyle m{\frac {d{V}}{d{t}}}=-\Upsilon (V-\Gamma {u})-\nabla \Phi [X]+\xi +F_{\mathrm {thm} }}
{\displaystyle {\frac {d{X}}{d{t}}}=V.}
The pressure p is determined by the incompressibility condition for the fluid,
{\displaystyle \nabla \cdot u=0.}
The operators Γ and Λ couple the Eulerian and Lagrangian degrees of freedom. The X and V denote the composite vectors of the full set of Lagrangian coordinates for the structures. The Φ is the potential energy for a configuration of the structures. The f_thm and F_thm are stochastic driving fields accounting for thermal fluctuations, and λ, ξ are Lagrange multipliers imposing constraints, such as local rigid body deformations. To ensure that dissipation occurs only through the Υ coupling, and not as a consequence of the interconversion by the operators Γ and Λ, the following adjoint condition is imposed:
{\displaystyle \Gamma =\Lambda ^{T}.}
Thermal fluctuations are introduced through Gaussian random fields with mean zero and the covariance structure
{\displaystyle \langle f_{\mathrm {thm} }(s)f_{\mathrm {thm} }^{T}(t)\rangle =-\left(2k_{B}{T}\right)\left(\mu \Delta -\Lambda \Upsilon \Gamma \right)\delta (t-s).}
{\displaystyle \langle F_{\mathrm {thm} }(s)F_{\mathrm {thm} }^{T}(t)\rangle =2k_{B}{T}\Upsilon \delta (t-s).}
{\displaystyle \langle f_{\mathrm {thm} }(s)F_{\mathrm {thm} }^{T}(t)\rangle =-2k_{B}{T}\Lambda \Upsilon \delta (t-s).}
To obtain simplified descriptions and efficient numerical methods, approximations in various limiting physical regimes have been considered to remove dynamics on small time-scales or inertial degrees of freedom. In different limiting regimes, the SELM framework can be related to the immersed boundary method, accelerated Stokesian dynamics, and arbitrary Lagrangian Eulerian method. The SELM approach has been shown to yield stochastic fluid-structure dynamics that are consistent with statistical mechanics. In particular, the SELM dynamics have been shown to satisfy detailed-balance for the Gibbs–Boltzmann ensemble. Different types of coupling operators have also been introduced allowing for descriptions of structures involving generalized coordinates and additional translational or rotational degrees of freedom. For numerically discretizing the SELM SPDEs, general methods were also introduced for deriving numerical stochastic fields for SPDEs that take discretization artifacts into account to maintain statistical principles, such as fluctuation-dissipation balance and other properties in statistical mechanics.
SELM methods have been used for simulations of viscoelastic fluids and soft materials, particle inclusions within curved fluid interfaces, and other microscopic systems and engineered devices.
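As a minimal illustration of how such stochastic driving terms are realized in discrete time (a sketch under simplifying assumptions, not the implementation of any particular SELM package), the particle forcing F_thm with covariance 2 k_B T Υ δ(t−s) can be sampled per time step of size Δt using a matrix square root of Υ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from any specific SELM code).
kB_T = 4.1e-21          # thermal energy [J]
dt = 1e-6               # time step [s]
n = 6                   # number of particle degrees of freedom

# A symmetric positive-definite coupling/drag operator Upsilon (made-up example).
Upsilon = 1e-8 * (np.eye(n) + 0.1 * np.ones((n, n)))

# Discrete-time forcing: <F F^T> = 2 kB T Upsilon / dt, realized via a Cholesky factor.
L = np.linalg.cholesky(2.0 * kB_T * Upsilon / dt)
F_thm = L @ rng.standard_normal(n)   # one sample of the thermal force for this step

print(F_thm)
```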
== See also ==
Immersed boundary method
Stokesian dynamics
Volume of fluid method
Level-set method
Marker-and-cell method
== References ==
== Software: Numerical Codes and Simulation Packages ==
Mango-Selm : Stochastic Eulerian Lagrangian and Immersed Boundary Methods, 3D Simulation Package, (Python interface, LAMMPS MD Integration), P. Atzberger, UCSB | Wikipedia/Stochastic_Eulerian_Lagrangian_method |
In mathematics, the method of matched asymptotic expansions is a common approach to finding an accurate approximation to the solution to an equation, or system of equations. It is particularly used when solving singularly perturbed differential equations. It involves finding several different approximate solutions, each of which is valid (i.e. accurate) for part of the range of the independent variable, and then combining these different solutions together to give a single approximate solution that is valid for the whole range of values of the independent variable. In the Russian literature, these methods were known under the name of "intermediate asymptotics" and were introduced in the work of Yakov Zeldovich and Grigory Barenblatt.
== Method overview ==
In a large class of singularly perturbed problems, the domain may be divided into two or more subdomains. In one of these, often the largest, the solution is accurately approximated by an asymptotic series found by treating the problem as a regular perturbation (i.e. by setting a relatively small parameter to zero). The other subdomains consist of one or more small regions in which that approximation is inaccurate, generally because the perturbation terms in the problem are not negligible there. These areas are referred to as transition layers in general, and specifically as boundary layers or interior layers depending on whether they occur at the domain boundary (as is the usual case in applications) or inside the domain, respectively.
An approximation in the form of an asymptotic series is obtained in the transition layer(s) by treating that part of the domain as a separate perturbation problem. This approximation is called the inner solution, and the other is the outer solution, named for their relationship to the transition layer(s). The outer and inner solutions are then combined through a process called "matching" in such a way that an approximate solution for the whole domain is obtained.
== A simple example ==
Consider the boundary value problem
{\displaystyle \varepsilon y''+(1+\varepsilon )y'+y=0,}
where y is a function of the independent time variable t, which ranges from 0 to 1, the boundary conditions are y(0) = 0 and y(1) = 1, and ε is a small parameter, such that 0 < ε ≪ 1.
=== Outer solution, valid for t = O(1) ===
Since ε is very small, our first approach is to treat the equation as a regular perturbation problem, i.e. make the approximation ε = 0, and hence find the solution to the problem
{\displaystyle y'+y=0.}
Alternatively, consider that when y and t are both of size O(1), the four terms on the left hand side of the original equation are respectively of sizes O(ε), O(1), O(ε) and O(1). The leading-order balance on this timescale, valid in the distinguished limit ε → 0, is therefore given by the second and fourth terms, i.e.,
{\displaystyle y'+y=0.}
This has solution
{\displaystyle y=Ae^{-t}}
for some constant A. Applying the boundary condition y(0) = 0, we would have A = 0; applying the boundary condition y(1) = 1, we would have A = e. It is therefore impossible to satisfy both boundary conditions, so ε = 0 is not a valid approximation to make across the whole of the domain (i.e. this is a singular perturbation problem). From this we infer that there must be a boundary layer at one of the endpoints of the domain where ε needs to be included. This region will be where ε is no longer negligible compared to the independent variable t, i.e. t and ε are of comparable size, i.e. the boundary layer is adjacent to t = 0. Therefore, the other boundary condition y(1) = 1 applies in this outer region, so A = e, i.e.
{\displaystyle y_{\mathrm {O} }=e^{1-t}}
is an accurate approximate solution to the original boundary value problem in this outer region. It is the leading-order solution.
=== Inner solution, valid for t = O(ε) ===
In the inner region, t and ε are both tiny, but of comparable size, so define the new O(1) time variable τ = t/ε. Rescale the original boundary value problem by replacing t with τε, and the problem becomes
{\displaystyle {\frac {1}{\varepsilon }}y''(\tau )+\left({1+\varepsilon }\right){\frac {1}{\varepsilon }}y'(\tau )+y(\tau )=0,}
which, after multiplying by ε and taking ε = 0, is
{\displaystyle y''+y'=0.}
Alternatively, consider that when t has reduced to size O(ε), then y is still of size O(1) (using the expression for y_O), and so the four terms on the left hand side of the original equation are respectively of sizes O(ε⁻¹), O(ε⁻¹), O(1) and O(1). The leading-order balance on this timescale, valid in the distinguished limit ε → 0, is therefore given by the first and second terms, i.e.
{\displaystyle y''+y'=0.}
This has solution
{\displaystyle y=B-Ce^{-\tau }}
for some constants B and C. Since y(0) = 0 applies in this inner region, this gives B = C, so an accurate approximate solution to the original boundary value problem in this inner region (it is the leading-order solution) is
{\displaystyle y_{\mathrm {I} }=B\left({1-e^{-\tau }}\right)=B\left({1-e^{-t/\varepsilon }}\right).}
=== Matching ===
We use matching to find the value of the constant B. The idea of matching is that the inner and outer solutions should agree for values of t in an intermediate (or overlap) region, i.e. where ε ≪ t ≪ 1. We need the outer limit of the inner solution to match the inner limit of the outer solution, i.e.,
{\displaystyle \lim _{\tau \to \infty }y_{\mathrm {I} }=\lim _{t\to 0}y_{\mathrm {O} },}
which gives B = e.
The above problem is the simplest of the simple problems dealing with matched asymptotic expansions. One can immediately calculate that e^{1−t} is the entire asymptotic series for the outer region, whereas the O(ε) correction to the inner solution y_I is B(1 − e^{−t/ε}), and the constant of integration B must be obtained from inner-outer matching.
Notice that the intuitive idea for matching by taking limits, i.e. lim_{τ→∞} y_I = lim_{t→0} y_O, does not apply at this level, simply because the relevant term does not converge to a limit. The methods to follow in these types of cases are either (a) the method of an intermediate variable or (b) the Van Dyke matching rule. The former method is cumbersome but always works, whereas the Van Dyke matching rule is easy to implement but of limited applicability. A concrete boundary value problem having all the essential ingredients is the following.
Consider the boundary value problem
{\displaystyle \varepsilon y''-x^{2}y'-y=1,\quad y(0)=y(1)=1.}
The conventional outer expansion
{\displaystyle y_{\mathrm {O} }=y_{0}+\varepsilon y_{1}+\cdots }
gives
{\displaystyle y_{0}=\alpha e^{1/x}-1,}
where α must be obtained from matching.
The problem has boundary layers both on the left and on the right. The left boundary layer near 0 has a thickness ε^{1/2}, whereas the right boundary layer near 1 has thickness ε. Let us first calculate the solution on the left boundary layer by rescaling X = x/ε^{1/2}, Y = y; then the differential equation to satisfy on the left is
{\displaystyle Y''-\varepsilon ^{1/2}X^{2}Y'-Y=1,\quad Y(0)=1,}
and accordingly, we assume an expansion
{\displaystyle Y^{l}=Y_{0}^{l}+\varepsilon ^{1/2}Y_{1/2}^{l}+\cdots .}
The O(1) inhomogeneous condition on the left provides us the reason to start the expansion at O(1). The leading-order solution is
{\displaystyle Y_{0}^{l}=2e^{-X}-1.}
This, with 1−1 Van Dyke matching, gives α = 0.
Let us now calculate the solution on the right by rescaling X = (1 − x)/ε, Y = y; then the differential equation to satisfy on the right is
{\displaystyle Y''+\left(1-2\varepsilon X+\varepsilon ^{2}X^{2}\right)Y'-\varepsilon Y=\varepsilon ,\quad Y(1)=1,}
and accordingly, we assume an expansion
{\displaystyle Y^{r}=Y_{0}^{r}+\varepsilon Y_{1}^{r}+\cdots .}
The O(1) inhomogeneous condition on the right provides us the reason to start the expansion at O(1). The leading-order solution is
{\displaystyle Y_{0}^{r}=(1-B)+Be^{-X}.}
This, with 1−1 Van Dyke matching, gives B = 2.
Proceeding in a similar fashion, if we calculate the higher-order corrections we get the solutions as
{\displaystyle Y^{l}=2e^{-X}-1+\varepsilon ^{1/2}e^{-X}\left({\frac {X^{3}}{3}}+{\frac {X^{2}}{2}}+{\frac {X}{2}}\right)+{\mathcal {O}}(\varepsilon ),\quad X={\frac {x}{\varepsilon ^{1/2}}},}
{\displaystyle y\equiv -1,}
{\displaystyle Y^{r}=2e^{-X}-1+2\varepsilon e^{-X}\left(X+X^{2}\right)+{\mathcal {O}}(\varepsilon ^{2}),\quad X={\frac {1-x}{\varepsilon }}.}
=== Composite solution ===
To obtain our final, matched, composite solution, valid on the whole domain, one popular method is the uniform method. In this method, we add the inner and outer approximations and subtract their overlapping value, y_overlap, which would otherwise be counted twice. The overlapping value is the outer limit of the inner boundary layer solution, and the inner limit of the outer solution; these limits were above found to equal e. Therefore, the final approximate solution to this boundary value problem is
{\displaystyle y(t)=y_{\mathrm {I} }+y_{\mathrm {O} }-y_{\mathrm {overlap} }=e\left({1-e^{-t/\varepsilon }}\right)+e^{1-t}-e=e\left({e^{-t}-e^{-t/\varepsilon }}\right).}
Note that this expression correctly reduces to the expressions for y_I and y_O when t is O(ε) and O(1), respectively.
=== Accuracy ===
This final solution satisfies the problem's original differential equation (shown by substituting it and its derivatives into the original equation). Also, the boundary conditions produced by this final solution match the values given in the problem, up to a constant multiple. This implies, due to the uniqueness of the solution, that the matched asymptotic solution is identical to the exact solution up to a constant multiple. This is not necessarily always the case; in general, any remaining terms should go to zero uniformly as ε → 0.
Not only does our solution successfully approximately solve the problem at hand, it closely approximates the problem's exact solution. It happens that this particular problem is easily found to have exact solution
{\displaystyle y(t)={\frac {e^{-t}-e^{-t/\varepsilon }}{e^{-1}-e^{-1/\varepsilon }}},}
which has the same form as the approximate solution, up to the multiplicative constant. The approximate solution is the first term in a binomial expansion of the exact solution in powers of e^{1−1/ε}.
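To see how good the leading-order composite approximation is, here is a short Python check (a sketch added for illustration, not part of the original exposition) that compares it with the exact solution for a small ε:

```python
import numpy as np

eps = 0.05
t = np.linspace(0.0, 1.0, 501)

# Leading-order composite (matched) solution derived above.
y_composite = np.e * (np.exp(-t) - np.exp(-t / eps))

# Exact solution of  eps*y'' + (1+eps)*y' + y = 0,  y(0) = 0, y(1) = 1.
y_exact = (np.exp(-t) - np.exp(-t / eps)) / (np.exp(-1) - np.exp(-1 / eps))

print(np.max(np.abs(y_exact - y_composite)))   # small, and shrinking as eps -> 0
```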
=== Location of boundary layer ===
Conveniently, we can see that the boundary layer, where y' and y'' are large, is near t = 0, as we supposed earlier. If we had supposed it to be at the other endpoint and proceeded by making the rescaling τ = (1 − t)/ε, we would have found it impossible to satisfy the resulting matching condition. For many problems, this kind of trial and error is the only way to determine the true location of the boundary layer.
== Harder problems ==
The problem above is a simple example because it is a single equation with only one dependent variable, and there is one boundary layer in the solution. Harder problems may contain several co-dependent variables in a system of several equations, and/or with several boundary and/or interior layers in the solution.
It is often desirable to find more terms in the asymptotic expansions of both the outer and the inner solutions. The appropriate form of these expansions is not always clear: while a power-series expansion in ε may work, sometimes the appropriate form involves fractional powers of ε, functions such as ε log ε, et cetera. As in the above example, we will obtain outer and inner expansions with some coefficients which must be determined by matching.
== Second-order differential equations ==
=== Schrödinger-like second-order differential equations ===
A method of matched asymptotic expansions - with matching of solutions in the common domain of validity - has been developed and used extensively by Dingle and Müller-Kirsten for the derivation of asymptotic expansions of the solutions and characteristic numbers (band boundaries) of Schrödinger-like second-order differential equations with periodic potentials - in particular for the Mathieu equation (best example), Lamé and ellipsoidal wave equations, oblate and prolate spheroidal wave equations, and equations with anharmonic potentials.
=== Convection–diffusion equations ===
Methods of matched asymptotic expansions have been developed to find approximate solutions to the Smoluchowski convection–diffusion equation, which is a singularly perturbed second-order differential equation. The problem has been studied particularly in the context of colloid particles in linear flow fields, where the variable is given by the pair distribution function around a test particle. In the limit of low Péclet number, the convection–diffusion equation also presents a singularity at infinite distance (where normally the far-field boundary condition should be placed) due to the flow field being linear in the interparticle separation. This problem can be circumvented with a spatial Fourier transform as shown by Jan Dhont.
A different approach to solving this problem was developed by Alessio Zaccone and coworkers and consists in placing the boundary condition right at the boundary layer distance, upon assuming (in a first-order approximation) a constant value of the pair distribution function in the outer layer due to convection being dominant there. This leads to an approximate theory for the encounter rate of two interacting colloid particles in a linear flow field in good agreement with the full numerical solution.
When the Péclet number is significantly larger than one, the singularity at infinite separation no longer occurs and the method of matched asymptotics can be applied to construct the full solution for the pair distribution function across the entire domain.
== See also ==
Asymptotic analysis
Multiple-scale analysis
Activation energy asymptotics
== References == | Wikipedia/Method_of_matched_asymptotic_expansions |
Stokesian dynamics is a solution technique for the Langevin equation, which is the relevant form of Newton's second law for a Brownian particle. The method treats the suspended particles in a discrete sense while the continuum approximation remains valid for the surrounding fluid, i.e., the suspended particles are generally assumed to be significantly larger than the molecules of the solvent. The particles then interact through hydrodynamic forces transmitted via the continuum fluid, and when the particle Reynolds number is small, these forces are determined through the linear Stokes equations (hence the name of the method). In addition, the method can also resolve non-hydrodynamic forces, such as Brownian forces, arising from the fluctuating motion of the fluid, and interparticle or external forces. Stokesian dynamics can thus be applied to a variety of problems, including sedimentation, diffusion and rheology, and it aims to provide the same level of understanding for multiphase particulate systems as molecular dynamics does for statistical properties of matter. For N rigid particles of radius a suspended in an incompressible Newtonian fluid of viscosity η and density ρ, the motion of the fluid is governed by the Navier–Stokes equations, while the motion of the particles is described by the coupled equation of motion:
{\displaystyle \mathbf {m} {\frac {d\mathbf {U} }{dt}}=\mathbf {F} ^{\mathrm {H} }+\mathbf {F} ^{\mathrm {B} }+\mathbf {F} ^{\mathrm {P} }.}
In the above equation, U is the particle translational/rotational velocity vector of dimension 6N. F^H is the hydrodynamic force, i.e., the force exerted by the fluid on the particle due to relative motion between them. F^B is the stochastic Brownian force due to thermal motion of fluid particles. F^P is the deterministic nonhydrodynamic force, which may be almost any form of interparticle or external force, e.g. electrostatic repulsion between like-charged particles. Brownian dynamics is one of the popular techniques of solving the Langevin equation, but the hydrodynamic interaction in Brownian dynamics is highly simplified and normally includes only the isolated body resistance. On the other hand, Stokesian dynamics includes the many-body hydrodynamic interactions. Hydrodynamic interaction is very important for non-equilibrium suspensions, like a sheared suspension, where it plays a vital role in the microstructure and hence the properties of the suspension. Stokesian dynamics is used primarily for non-equilibrium suspensions, where it has been shown to provide results which agree with experiments.
== Hydrodynamic interaction ==
When the motion on the particle scale is such that the particle Reynolds number is small, the hydrodynamic force exerted on the particles in a suspension undergoing a bulk linear shear flow is:
{\displaystyle \mathbf {F} ^{\mathrm {H} }=-\mathbf {R} _{\mathrm {FU} }(\mathbf {U} -\mathbf {U} ^{\infty })+\mathbf {R} _{\mathrm {FE} }:\mathbf {E} ^{\infty }.}
Here, U^∞ is the velocity of the bulk shear flow evaluated at the particle center, and E^∞ is the symmetric part of the velocity-gradient tensor; R_FU and R_FE are the configuration-dependent resistance matrices that give the hydrodynamic force/torque on the particles due to their motion relative to the fluid (R_FU) and due to the imposed shear flow (R_FE). Note that the subscripts on the matrices indicate the coupling between kinematic (U) and dynamic (F) quantities.
One of the key features of Stokesian dynamics is its handling of the hydrodynamic interactions, which is fairly accurate without being computationally prohibitive (like boundary integral methods) for a large number of particles. Classical Stokesian dynamics requires O(N³) operations, where N is the number of particles in the system (usually a periodic box). Recent advances have reduced the computational cost to about O(N^{1.25} log N).
== Brownian force ==
The stochastic or Brownian force F^B arises from the thermal fluctuations in the fluid and is characterized by:
{\displaystyle \left\langle \mathbf {F} ^{\mathrm {B} }\right\rangle =0}
{\displaystyle \left\langle \mathbf {F} ^{\mathrm {B} }(0)\mathbf {F} ^{\mathrm {B} }(t)\right\rangle =2kT\mathbf {R} _{\mathrm {FU} }\delta (t)}
The angle brackets denote an ensemble average, k is the Boltzmann constant, T is the absolute temperature and δ(t) is the delta function. The amplitude of the correlation between the Brownian forces at time 0 and at time t results from the fluctuation-dissipation theorem for the N-body system.
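As a minimal sketch of how this correlation structure is realized in a discrete-time simulation (illustrative only; production Stokesian dynamics codes construct and factor the resistance matrices far more carefully), the Brownian force over a time step Δt can be sampled from a Cholesky factor of R_FU:

```python
import numpy as np

rng = np.random.default_rng(1)

kT = 4.1e-21      # thermal energy [J]
dt = 1e-4         # time step [s]
n_dof = 6         # one particle: 3 translational + 3 rotational degrees of freedom

# A made-up symmetric positive-definite resistance matrix R_FU for illustration.
R_FU = 1e-8 * np.eye(n_dof) + 1e-9 * np.ones((n_dof, n_dof))

# Discrete-time Brownian force with <F F^T> = 2 kT R_FU / dt.
L = np.linalg.cholesky(2.0 * kT * R_FU / dt)
F_brownian = L @ rng.standard_normal(n_dof)

print(F_brownian)
```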
== See also ==
Immersed boundary methods
Stochastic Eulerian Lagrangian methods
== References == | Wikipedia/Stokesian_dynamics |
The Lambda-CDM, Lambda cold dark matter, or ΛCDM model is a mathematical model of the Big Bang theory with three major components:
a cosmological constant, denoted by lambda (Λ), associated with dark energy;
the postulated cold dark matter, denoted by CDM;
ordinary matter.
It is the current standard model of Big Bang cosmology, as it is the simplest model that provides a reasonably good account of:
the existence and structure of the cosmic microwave background;
the large-scale structure in the distribution of galaxies;
the observed abundances of hydrogen (including deuterium), helium, and lithium;
the accelerating expansion of the universe observed in the light from distant galaxies and supernovae.
The model assumes that general relativity is the correct theory of gravity on cosmological scales. It emerged in the late 1990s as a concordance cosmology, after a period when disparate observed properties of the universe appeared mutually inconsistent, and there was no consensus on the makeup of the energy density of the universe.
The ΛCDM model has been successful in modeling a broad collection of astronomical observations over decades. Remaining issues challenge the assumptions of the ΛCDM model and have led to many alternative models.
== Overview ==
The ΛCDM model is based on three postulates on the structure of spacetime:: 227
The cosmological principle, that the universe is the same everywhere and in all directions, and that it is expanding,
A postulate by Hermann Weyl that the lines of spacetime (geodesics) intersect at only one point, where time along each line can be synchronized; the behavior resembles an expanding perfect fluid,: 175
general relativity that relates the geometry of spacetime to the distribution of matter and energy.
This combination greatly simplifies the equations of general relativity into a form called the Friedmann equations. These equations specify the evolution of the scale factor of the universe in terms of the pressure and density of a perfect fluid. The evolving density is composed of different kinds of energy and matter, each with its own role in affecting the scale factor.: 7 For example, a model might include baryons, photons, neutrinos, and dark matter.: 25.1.1 These component densities become parameters extracted when the model is constrained to match astrophysical observations. The model aims to describe the observable universe from approximately 0.1 s to the present.: 605
The most accurate observations which are sensitive to the component densities are consequences of statistical inhomogeneity called "perturbations" in the early universe. Since the Friedmann equations assume homogeneity, additional theory must be added before comparison to experiments. Inflation is a simple model producing perturbations by postulating an extremely rapid expansion early in the universe that separates quantum fluctuations before they can equilibrate. The perturbations are characterized by additional parameters also determined by matching observations.: 25.1.2
Finally, the light which will become astronomical observations must pass through the universe. The latter part of that journey will pass through ionized space, where the electrons can scatter the light, altering the anisotropies. This effect is characterized by one additional parameter.: 25.1.3
The ΛCDM model includes an expansion of the spatial metric that is well documented, both as the redshift of prominent spectral absorption or emission lines in the light from distant galaxies, and as the time dilation in the light decay of supernova luminosity curves. Both effects are attributed to a Doppler shift in electromagnetic radiation as it travels across expanding space. Although this expansion increases the distance between objects that are not under shared gravitational influence, it does not increase the size of the objects (e.g. galaxies) in space. Also, since it originates from ordinary general relativity, it, like general relativity, allows for distant galaxies to recede from each other at speeds greater than the speed of light; local expansion is less than the speed of light, but expansion summed across great distances can collectively exceed the speed of light.
The letter Λ (lambda) represents the cosmological constant, which is associated with a vacuum energy or dark energy in empty space that is used to explain the contemporary accelerating expansion of space against the attractive effects of gravity. A cosmological constant has negative pressure, p = −ρc², which contributes to the stress–energy tensor that, according to the general theory of relativity, causes accelerating expansion. The fraction of the total energy density of our (flat or almost flat) universe that is dark energy, Ω_Λ, is estimated to be 0.669 ± 0.038 based on the 2018 Dark Energy Survey results using Type Ia supernovae, or 0.6847 ± 0.0073 based on the 2018 release of Planck satellite data, i.e. more than 68.3% (2018 estimate) of the mass–energy density of the universe.
Dark matter is postulated in order to account for gravitational effects observed in very large-scale structures (the "non-keplerian" rotation curves of galaxies; the gravitational lensing of light by galaxy clusters; and the enhanced clustering of galaxies) that cannot be accounted for by the quantity of observed matter.
The ΛCDM model proposes specifically cold dark matter, hypothesized as:
Non-baryonic: Consists of matter other than protons and neutrons (and electrons, by convention, although electrons are not baryons)
Cold: Its velocity is far less than the speed of light at the epoch of radiation–matter equality (thus neutrinos are excluded, being non-baryonic but not cold)
Dissipationless: Cannot cool by radiating photons
Collisionless: Dark matter particles interact with each other and other particles only through gravity and possibly the weak force
Dark matter constitutes about 26.5% of the mass–energy density of the universe. The remaining 4.9% comprises all ordinary matter observed as atoms, chemical elements, gas and plasma, the stuff of which visible planets, stars and galaxies are made. The great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10% of the ordinary matter contribution to the mass–energy density of the universe.
The model includes a single originating event, the "Big Bang", which was not an explosion but the abrupt appearance of expanding spacetime containing radiation at temperatures of around 1015 K. This was immediately (within 10−29 seconds) followed by an exponential expansion of space by a scale multiplier of 1027 or more, known as cosmic inflation. The early universe remained hot (above 10 000 K) for several hundred thousand years, a state that is detectable as a residual cosmic microwave background, or CMB, a very low-energy radiation emanating from all parts of the sky. The "Big Bang" scenario, with cosmic inflation and standard particle physics, is the only cosmological model consistent with the observed continuing expansion of space, the observed distribution of lighter elements in the universe (hydrogen, helium, and lithium), and the spatial texture of minute irregularities (anisotropies) in the CMB radiation. Cosmic inflation also addresses the "horizon problem" in the CMB; indeed, it seems likely that the universe is larger than the observable particle horizon.
== Cosmic expansion history ==
The expansion of the universe is parameterized by a dimensionless scale factor a = a(t) (with time t counted from the birth of the universe), defined relative to the present time, so a_0 = a(t_0) = 1; the usual convention in cosmology is that subscript 0 denotes present-day values, so t_0 denotes the age of the universe. The scale factor is related to the observed redshift z of the light emitted at time t_em by
{\displaystyle a(t_{\text{em}})={\frac {1}{1+z}}\,.}
The expansion rate is described by the time-dependent Hubble parameter H(t), defined as
{\displaystyle H(t)\equiv {\frac {\dot {a}}{a}},}
where ȧ is the time derivative of the scale factor. The first Friedmann equation gives the expansion rate in terms of the matter+radiation density ρ, the curvature k, and the cosmological constant Λ:
{\displaystyle H^{2}=\left({\frac {\dot {a}}{a}}\right)^{2}={\frac {8\pi G}{3}}\rho -{\frac {kc^{2}}{a^{2}}}+{\frac {\Lambda c^{2}}{3}},}
where, as usual, c is the speed of light and G is the gravitational constant.
A critical density ρ_crit is the present-day density which gives zero curvature k, assuming the cosmological constant Λ is zero, regardless of its actual value. Substituting these conditions into the Friedmann equation gives
{\displaystyle \rho _{\mathrm {crit} }={\frac {3H_{0}^{2}}{8\pi G}}=1.878\;47(23)\times 10^{-26}\;h^{2}\;\mathrm {kg{\cdot }m^{-3}} ,}
where {\displaystyle h\equiv H_{0}/(100\;\mathrm {km{\cdot }s^{-1}{\cdot }Mpc^{-1}} )} is the reduced Hubble constant.
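As a quick numerical illustration (not from the source; a minimal sketch assuming SI values of the constants and an illustrative, Planck-like Hubble constant of about 67.4 km/s/Mpc), the critical density can be evaluated directly from H_0:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22        # metres per megaparsec

def critical_density(H0_km_s_Mpc):
    """Critical density rho_crit = 3 H0^2 / (8 pi G) in kg/m^3."""
    H0 = H0_km_s_Mpc * 1e3 / Mpc        # convert km/s/Mpc to 1/s
    return 3 * H0**2 / (8 * math.pi * G)

# For H0 = 67.4 km/s/Mpc (h = 0.674, an assumed illustrative value):
print(critical_density(67.4))           # ≈ 8.5e-27 kg/m^3, i.e. 1.878e-26 * 0.674^2
```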
If the cosmological constant were actually zero, the critical density would also mark the dividing line between eventual recollapse of the universe to a Big Crunch, or unlimited expansion. For the Lambda-CDM model with a positive cosmological constant (as observed), the universe is predicted to expand forever regardless of whether the total density is slightly above or below the critical density; though other outcomes are possible in extended models where the dark energy is not constant but actually time-dependent.
The present-day density parameter Ω_x for various species is defined as the dimensionless ratio
{\displaystyle \Omega _{x}\equiv {\frac {\rho _{x}(t=t_{0})}{\rho _{\mathrm {crit} }}}={\frac {8\pi G\rho _{x}(t=t_{0})}{3H_{0}^{2}}}}
where the subscript x is one of b for baryons, c for cold dark matter, rad for radiation (photons plus relativistic neutrinos), and Λ for dark energy.
Since the densities of various species scale as different powers of a (e.g. a^−3 for matter), the Friedmann equation can be conveniently rewritten in terms of the various density parameters as
{\displaystyle H(a)\equiv {\frac {\dot {a}}{a}}=H_{0}{\sqrt {(\Omega _{\rm {c}}+\Omega _{\rm {b}})a^{-3}+\Omega _{\mathrm {rad} }a^{-4}+\Omega _{k}a^{-2}+\Omega _{\Lambda }a^{-3(1+w)}}},}
where w is the equation-of-state parameter of dark energy, assuming negligible neutrino mass (significant neutrino mass requires a more complex equation). The various Ω parameters add up to 1 by construction. In the general case this is integrated by computer to give the expansion history a(t) and also observable distance–redshift relations for any chosen values of the cosmological parameters, which can then be compared with observations such as supernovae and baryon acoustic oscillations.
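The following is a minimal sketch (not from the source) of how such an integration might be done numerically for the flat case, assuming illustrative Planck-like parameter values (Ω_m ≈ 0.315, Ω_rad ≈ 9×10^−5, Ω_Λ fixed by flatness, H_0 ≈ 67.4 km/s/Mpc) and w = −1:

```python
import math

H0 = 67.4 * 1e3 / 3.0857e22                # Hubble constant in s^-1 (illustrative value)
Om, Orad = 0.315, 9e-5                     # illustrative density parameters
OL = 1.0 - Om - Orad                       # flat universe by assumption

def H(a):
    """Hubble rate H(a) for flat LambdaCDM with w = -1."""
    return H0 * math.sqrt(Om / a**3 + Orad / a**4 + OL)

def age(a_end=1.0, a_start=1e-8, steps=200000):
    """Integrate dt = da / (a H(a)) from a_start to a_end (simple midpoint rule)."""
    t, a = 0.0, a_start
    da = (a_end - a_start) / steps
    for _ in range(steps):
        am = a + 0.5 * da
        t += da / (am * H(am))
        a += da
    return t

Gyr = 3.156e16                             # seconds per gigayear
print(age() / Gyr)                         # ≈ 13.8 Gyr for these assumed parameters
```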
In the minimal 6-parameter Lambda-CDM model, it is assumed that curvature Ω_k is zero and w = −1, so this simplifies to
{\displaystyle H(a)=H_{0}{\sqrt {\Omega _{\rm {m}}a^{-3}+\Omega _{\mathrm {rad} }a^{-4}+\Omega _{\Lambda }}}}
Observations show that the radiation density is very small today, Ω_rad ~ 10^−4; if this term is neglected, the above has the analytic solution
{\displaystyle a(t)=(\Omega _{\rm {m}}/\Omega _{\Lambda })^{1/3}\,\sinh ^{2/3}(t/t_{\Lambda })}
where
{\displaystyle t_{\Lambda }\equiv 2/(3H_{0}{\sqrt {\Omega _{\Lambda }}})\ ;}
this is fairly accurate for a > 0.01 or t > 10 million years. Solving for a(t) = 1 gives the present age of the universe t_0 in terms of the other parameters.
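Carrying out that inversion numerically (a sketch, not from the source, again assuming illustrative flat-ΛCDM values Ω_m ≈ 0.315, Ω_Λ ≈ 0.685 and H_0 ≈ 67.4 km/s/Mpc):

```python
import math

H0 = 67.4 * 1e3 / 3.0857e22        # s^-1 (illustrative value)
Om, OL = 0.315, 0.685              # illustrative matter and dark-energy fractions

t_lambda = 2.0 / (3.0 * H0 * math.sqrt(OL))

# Setting a(t0) = 1 in the analytic solution and inverting gives
# t0 = t_lambda * asinh( sqrt(OL / Om) ).
t0 = t_lambda * math.asinh(math.sqrt(OL / Om))

print(t0 / 3.156e16)               # age in Gyr, ≈ 13.8 for these parameters
```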
It follows that the transition from decelerating to accelerating expansion (the second derivative ä crossing zero) occurred when
{\displaystyle a=(\Omega _{\rm {m}}/2\Omega _{\Lambda })^{1/3},}
which evaluates to a ~ 0.6 or z ~ 0.66 for the best-fit parameters estimated from the Planck spacecraft.
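For illustration (not from the source), with the same assumed values Ω_m ≈ 0.315 and Ω_Λ ≈ 0.685 this works out as:

```python
Om, OL = 0.315, 0.685                     # illustrative Planck-like values

a_acc = (Om / (2 * OL)) ** (1.0 / 3.0)    # scale factor at which acceleration begins
z_acc = 1.0 / a_acc - 1.0                 # corresponding redshift

print(a_acc, z_acc)                       # ≈ 0.61 and ≈ 0.63 for these values
```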
== Parameters ==
Multiple variants of the ΛCDM model are used with some differences in parameters.: 25.1 One such set is outlined in the table below.
The Planck collaboration version of the ΛCDM model is based on six parameters: baryon density parameter; dark matter density parameter; scalar spectral index; two parameters related to curvature fluctuation amplitude; and the probability that photons from the early universe will be scattered once en route (called the reionization optical depth). Six is the smallest number of parameters needed to give an acceptable fit to the observations; other possible parameters are fixed at "natural" values, e.g. total density parameter = 1.00, dark energy equation of state = −1.
The parameter values, and uncertainties, are estimated using computer searches to locate the region of parameter space providing an acceptable match to cosmological observations. From these six parameters, the other model values, such as the Hubble constant and the dark energy density, can be calculated.
== Historical development ==
The discovery of the cosmic microwave background (CMB) in 1964 confirmed a key prediction of the Big Bang cosmology. From that point on, it was generally accepted that the universe started in a hot, dense state and has been expanding over time. The rate of expansion depends on the types of matter and energy present in the universe, and in particular, whether the total density is above or below the so-called critical density.
During the 1970s, most attention focused on pure-baryonic models, but there were serious challenges explaining the formation of galaxies, given the small anisotropies in the CMB (upper limits at that time). In the early 1980s, it was realized that this could be resolved if cold dark matter dominated over the baryons, and the theory of cosmic inflation motivated models with critical density.
During the 1980s, most research focused on cold dark matter with critical density in matter, around 95% CDM and 5% baryons: these showed success at forming galaxies and clusters of galaxies, but problems remained; notably, the model required a Hubble constant lower than preferred by observations, and observations around 1988–1990 showed more large-scale galaxy clustering than predicted.
These difficulties sharpened with the discovery of CMB anisotropy by the Cosmic Background Explorer in 1992, and several modified CDM models, including ΛCDM and mixed cold and hot dark matter, came under active consideration through the mid-1990s. The ΛCDM model then became the leading model following the observations of accelerating expansion in 1998, and was quickly supported by other observations: in 2000, the BOOMERanG microwave background experiment measured the total (matter–energy) density to be close to 100% of critical, whereas in 2001 the 2dFGRS galaxy redshift survey measured the matter density to be near 25%; the large difference between these values supports a positive Λ or dark energy. Much more precise spacecraft measurements of the microwave background from WMAP in 2003–2010 and Planck in 2013–2015 have continued to support the model and pin down the parameter values, most of which are constrained below 1 percent uncertainty.
== Successes ==
Among all cosmological models, the ΛCDM model has been the most successful; it describes a wide range of astronomical observations with remarkable accuracy.: 58 The notable successes include:
Accurate modeling of the high-precision CMB angular distribution measured by the Planck mission and the Atacama Cosmology Telescope.
Accurate description of the linear E-mode polarization of the CMB radiation due to fluctuations at the surface of last scattering.
Prediction of the observed B-mode polarization of the CMB light due to primordial gravitational waves.
Observations of H2O emission spectra from a galaxy 12.8 billion light years away that are consistent with molecules excited by a cosmic background radiation much warmer (16–20 K) than the CMB we observe now (about 3 K).
Predictions of the primordial abundance of deuterium as a result of Big Bang nucleosynthesis. The observed abundance matches the one derived from the nucleosynthesis model with the value for baryon density derived from CMB measurements.: 4.1.2
In addition to explaining many pre-2000 observations, the model has made a number of successful predictions: notably the existence of the baryon acoustic oscillation feature, discovered in 2005 in the predicted location; and the statistics of weak gravitational lensing, first observed in 2000 by several teams. The polarization of the CMB, discovered in 2002 by DASI, has been successfully predicted by the model: in the 2015 Planck data release, there are seven observed peaks in the temperature (TT) power spectrum, six peaks in the temperature–polarization (TE) cross spectrum, and five peaks in the polarization (EE) spectrum. The six free parameters can be well constrained by the TT spectrum alone, and then the TE and EE spectra can be predicted theoretically to few-percent precision with no further adjustments allowed.
== Challenges ==
Despite the widespread success of ΛCDM in matching observations of our universe, cosmologists believe that the model may be an approximation of a more fundamental model.
=== Lack of detection ===
Extensive searches for dark matter particles have so far shown no well-agreed detection, while dark energy may be almost impossible to detect in a laboratory, and its value is extremely small compared to vacuum energy theoretical predictions.
=== Violations of the cosmological principle ===
The ΛCDM model, like all models built on the Friedmann–Lemaître–Robertson–Walker metric, assumes that the universe looks the same in all directions (isotropy) and from every location (homogeneity) on a large enough scale: "the universe looks the same whoever and wherever you are." This cosmological principle allows the Friedmann–Lemaître–Robertson–Walker metric to be derived and developed into a theory that can be compared with experiments. Without the principle, a metric would need to be extracted from astronomical data, which may not be possible.: 408 These assumptions were carried over into the ΛCDM model. However, some findings have suggested violations of the cosmological principle.
==== Violations of isotropy ====
Evidence from galaxy clusters, quasars, and type Ia supernovae suggests that isotropy is violated on large scales.
Data from the Planck Mission shows hemispheric bias in the cosmic microwave background in two respects: one with respect to average temperature (i.e. temperature fluctuations), the second with respect to larger variations in the degree of perturbations (i.e. densities). The European Space Agency (the governing body of the Planck Mission) has concluded that these anisotropies in the CMB are, in fact, statistically significant and can no longer be ignored.
Already in 1967, Dennis Sciama predicted that the cosmic microwave background has a significant dipole anisotropy. In recent years, the CMB dipole has been tested, and the results suggest our motion with respect to distant radio galaxies and quasars differs from our motion with respect to the cosmic microwave background. The same conclusion has been reached in recent studies of the Hubble diagram of Type Ia supernovae and quasars. This contradicts the cosmological principle.
The CMB dipole is hinted at through a number of other observations. First, even within the cosmic microwave background, there are curious directional alignments and an anomalous parity asymmetry that may have an origin in the CMB dipole. Separately, the CMB dipole direction has emerged as a preferred direction in studies of alignments in quasar polarizations, scaling relations in galaxy clusters, strong lensing time delay, Type Ia supernovae, and quasars and gamma-ray bursts as standard candles. The fact that all these independent observables, based on different physics, are tracking the CMB dipole direction suggests that the Universe is anisotropic in the direction of the CMB dipole.
Nevertheless, some authors have stated that the universe around Earth is isotropic at high significance by studies of the combined cosmic microwave background temperature and polarization maps.
==== Violations of homogeneity ====
The homogeneity of the universe needed for the ΛCDM applies to very large volumes of space.
N-body simulations in ΛCDM show that the spatial distribution of galaxies is statistically homogeneous if averaged over scales 260/h Mpc or more.
Numerous claims of large-scale structures reported to be in conflict with the predicted scale of homogeneity for ΛCDM do not withstand statistical analysis.: 7.8
=== El Gordo galaxy cluster collision ===
El Gordo is a massive interacting galaxy cluster in the early Universe (z = 0.87). The extreme properties of El Gordo in terms of its redshift, mass, and collision velocity lead to strong (6.16σ) tension with the ΛCDM model. The properties of El Gordo are, however, consistent with cosmological simulations in the framework of MOND, owing to more rapid structure formation.
=== KBC void ===
The KBC void is an immense, comparatively empty region of space containing the Milky Way, approximately 2 billion light-years (600 megaparsecs, Mpc) in diameter. Some authors have said the existence of the KBC void violates the assumption that the CMB reflects baryonic density fluctuations at z = 1100 or Einstein's theory of general relativity, either of which would violate the ΛCDM model, while other authors have claimed that supervoids as large as the KBC void are consistent with the ΛCDM model.
=== Hubble tension ===
Statistically significant differences remain in values of the Hubble constant derived by matching the ΛCDM model to data from the "early universe", like the cosmic background radiation, compared to values derived from stellar distance measurements, called the "late universe". While systematic error in the measurements remains a possibility, many different kinds of observations agree with one of these two values of the constant. This difference, called the Hubble tension, is widely acknowledged to be a major problem for the ΛCDM model.
Dozens of proposals for modifications of ΛCDM or completely new models have been published to explain the Hubble tension. Among these models are many that modify the properties of dark energy or of dark matter over time, interactions between dark energy and dark matter, unified dark energy and matter, other forms of dark radiation like sterile neutrinos, modifications to the properties of gravity, or the modification of the effects of inflation, changes to the properties of elementary particles in the early universe, among others. None of these models can simultaneously explain the breadth of other cosmological data as well as ΛCDM.
=== S8 tension ===
The "
S
8
{\displaystyle S_{8}}
tension" is a name for another question mark for the ΛCDM model. The
S
8
{\displaystyle S_{8}}
parameter in the ΛCDM model quantifies the amplitude of matter fluctuations in the late universe and is defined as
S
8
≡
σ
8
Ω
m
/
0.3
{\displaystyle S_{8}\equiv \sigma _{8}{\sqrt {\Omega _{\rm {m}}/0.3}}}
Early- (e.g. from CMB data collected using the Planck observatory) and late-time (e.g. measuring weak gravitational lensing events) facilitate increasingly precise values of
S
8
{\displaystyle S_{8}}
. However, these two categories of measurement differ by more standard deviations than their uncertainties. This discrepancy is called the
S
8
{\displaystyle S_{8}}
tension. The name "tension" reflects that the disagreement is not merely between two data sets: the many sets of early- and late-time measurements agree well within their own categories, but there is an unexplained difference between values obtained from different points in the evolution of the universe. Such a tension indicates that the ΛCDM model may be incomplete or in need of correction.
Some values for S_8 are 0.832±0.013 (2020 Planck), 0.766+0.020−0.014 (2021 KiDS), 0.776±0.017 (2022 DES), 0.790+0.018−0.014 (2023 DES+KiDS), 0.769+0.031−0.034 to 0.776+0.032−0.033 (2023 HSC-SSP), and 0.86±0.01 (2024 eROSITA). Values have also been obtained using peculiar velocities, 0.637±0.054 (2020) and 0.776±0.033 (2020), among other methods.
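As a quick consistency check (not from the source; the σ_8 and Ω_m inputs below are illustrative Planck-like numbers, not quoted values), the definition above can be evaluated directly:

```python
import math

def S8(sigma8, Omega_m):
    """S8 = sigma8 * sqrt(Omega_m / 0.3), the late-time fluctuation amplitude."""
    return sigma8 * math.sqrt(Omega_m / 0.3)

# Illustrative Planck-like values: sigma8 ≈ 0.81, Omega_m ≈ 0.315
print(round(S8(0.81, 0.315), 3))    # ≈ 0.83, close to the quoted early-time value
```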
=== Axis of evil ===
The "axis of evil" is a name given to a purported correlation between the plane of the Solar System and aspects of the cosmic microwave background (CMB). Such a correlation would give the plane of the Solar System and hence the location of Earth a greater significance than might be expected by chance, a result which has been claimed to be evidence of a departure from the Copernican principle. However, a 2016 study compared isotropic and anisotropic cosmological models against WMAP and Planck data and found no evidence for anisotropy.
=== Cosmological lithium problem ===
The actual observable amount of lithium in the universe is less than the calculated amount from the ΛCDM model by a factor of 3–4.: 141 If every calculation is correct, then solutions beyond the existing ΛCDM model might be needed.
=== Shape of the universe ===
The ΛCDM model assumes that the shape of the universe is of zero curvature (is flat) and has an undetermined topology. In 2019, interpretation of Planck data suggested that the curvature of the universe might be positive (often called "closed"), which would contradict the ΛCDM model. Some authors have suggested that the Planck data detecting a positive curvature could be evidence of a local inhomogeneity in the curvature of the universe rather than the universe actually being globally a 3-manifold of positive curvature.
=== Violations of the strong equivalence principle ===
The ΛCDM model assumes that the strong equivalence principle is true. However, in 2020 a group of astronomers analyzed data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample, together with estimates of the large-scale external gravitational field from an all-sky galaxy catalog. They concluded that there was highly statistically significant evidence of violations of the strong equivalence principle in weak gravitational fields in the vicinity of rotationally supported galaxies. They observed an effect inconsistent with tidal effects in the ΛCDM model. These results have been challenged as failing to consider inaccuracies in the rotation curves and correlations between galaxy properties and clustering strength, and as inconsistent with similar analyses of other galaxies.
=== Cold dark matter discrepancies ===
Several discrepancies between the predictions of cold dark matter in the ΛCDM model and observations of galaxies and their clustering have arisen. Some of these problems have proposed solutions, but it remains unclear whether they can be solved without abandoning the ΛCDM model.
Milgrom, McGaugh, and Kroupa have criticized the dark matter portions of the theory from the perspective of galaxy formation models and supporting the alternative modified Newtonian dynamics (MOND) theory, which requires a modification of the Einstein field equations and the Friedmann equations as seen in proposals such as modified gravity theory (MOG theory) or tensor–vector–scalar gravity theory (TeVeS theory). Other proposals by theoretical astrophysicists of cosmological alternatives to Einstein's general relativity that attempt to account for dark energy or dark matter include f(R) gravity, scalar–tensor theories such as galileon theories (see Galilean invariance), brane cosmologies, the DGP model, and massive gravity and its extensions such as bimetric gravity.
==== Cuspy halo problem ====
The density distributions of dark matter halos in cold dark matter simulations (at least those that do not include the impact of baryonic feedback) are much more peaked than what is observed in galaxies by investigating their rotation curves.
==== Dwarf galaxy problem ====
Cold dark matter simulations predict large numbers of small dark matter halos, more numerous than the number of small dwarf galaxies that are observed around galaxies like the Milky Way.
==== Satellite disk problem ====
Dwarf galaxies around the Milky Way and Andromeda galaxies are observed to be orbiting in thin, planar structures, whereas the simulations predict that they should be distributed randomly about their parent galaxies. However, more recent research suggests that this seemingly unusual alignment is a quirk that will dissolve over time.
==== High-velocity galaxy problem ====
Galaxies in the NGC 3109 association are moving away too rapidly to be consistent with expectations in the ΛCDM model. In this framework, NGC 3109 is too massive and distant from the Local Group for it to have been flung out in a three-body interaction involving the Milky Way or Andromeda Galaxy.
==== Galaxy morphology problem ====
If galaxies grew hierarchically, then massive galaxies required many mergers. Major mergers inevitably create a classical bulge. On the contrary, about 80% of observed galaxies give evidence of no such bulges, and giant pure-disc galaxies are commonplace. The tension can be quantified by comparing the observed distribution of galaxy shapes today with predictions from high-resolution hydrodynamical cosmological simulations in the ΛCDM framework, revealing a highly significant problem that is unlikely to be solved by improving the resolution of the simulations. The high bulgeless fraction was nearly constant for 8 billion years.
==== Fast galaxy bar problem ====
If galaxies were embedded within massive halos of cold dark matter, then the bars that often develop in their central regions would be slowed down by dynamical friction with the halo. This is in serious tension with the fact that observed galaxy bars are typically fast.
==== Small scale crisis ====
Comparison of the model with observations may have some problems on sub-galaxy scales, possibly predicting too many dwarf galaxies and too much dark matter in the innermost regions of galaxies. This problem is called the "small scale crisis". These small scales are harder to resolve in computer simulations, so it is not yet clear whether the problem is the simulations, non-standard properties of dark matter, or a more radical error in the model.
==== High redshift galaxies ====
Observations from the James Webb Space Telescope have yielded various galaxies confirmed by spectroscopy at high redshift, such as JADES-GS-z13-0 at a cosmological redshift of 13.2. Other candidate galaxies that have not been confirmed by spectroscopy include CEERS-93316 at a cosmological redshift of 16.4.
The existence of surprisingly massive galaxies in the early universe challenges the preferred models describing how dark matter halos drive galaxy formation. It remains to be seen whether a revision of the Lambda-CDM model with the parameters given by the Planck Collaboration is necessary to resolve this issue. The discrepancies could also be explained by particular properties (stellar masses or effective volume) of the candidate galaxies, by a yet-unknown force or particle outside of the Standard Model through which dark matter interacts, by more efficient baryonic matter accumulation by dark matter halos, by early dark energy models, or by the hypothesized, long-sought Population III stars.
=== Missing baryon problem ===
Massimo Persic and Paolo Salucci first estimated the baryonic density today present in ellipticals, spirals, groups and clusters of galaxies.
They performed an integration of the baryonic mass-to-light ratio over luminosity (in the following, M_b/L), weighted with the luminosity function ϕ(L), over the previously mentioned classes of astrophysical objects:
{\displaystyle \rho _{\rm {b}}=\sum \int L\phi (L){\frac {M_{\rm {b}}}{L}}\,dL.}
The result was:
{\displaystyle \Omega _{\rm {b}}=\Omega _{*}+\Omega _{\text{gas}}=2.2\times 10^{-3}+1.5\times 10^{-3}\;h^{-1.3}\simeq 0.003,}
where h ≃ 0.72. Note that this value is much lower than the prediction of standard cosmic nucleosynthesis, Ω_b ≃ 0.0486, so that stars and gas in galaxies and in galaxy groups and clusters account for less than 10% of the primordially synthesized baryons. This issue is known as the problem of the "missing baryons".
The missing baryon problem is claimed to be resolved. Using observations of the kinematic Sunyaev–Zel'dovich effect spanning more than 90% of the lifetime of the Universe, in 2021 astrophysicists found that approximately 50% of all baryonic matter is outside dark matter haloes, filling the space between galaxies. Together with the amount of baryons inside galaxies and surrounding them, the total amount of baryons in the late time Universe is compatible with early Universe measurements.
=== Conventionalism ===
It has been argued that the ΛCDM model has adopted conventionalist stratagems, rendering it unfalsifiable in the sense defined by Karl Popper. When faced with new data not in accord with a prevailing model, the conventionalist will find ways to adapt the theory rather than declare it false. Thus dark matter was added after the observations of anomalous galaxy rotation rates. Thomas Kuhn viewed the process differently, as "problem solving" within the existing paradigm.
== Extended models ==
Extended models allow one or more of the "fixed" parameters above to vary, in addition to the basic six; so these models join smoothly to the basic six-parameter model in the limit that the additional parameter(s) approach the default values. For example, possible extensions of the simplest ΛCDM model allow for spatial curvature (Ω_tot may be different from 1), or quintessence rather than a cosmological constant, where the equation of state of dark energy is allowed to differ from −1. Cosmic inflation predicts tensor fluctuations (gravitational waves). Their amplitude is parameterized by the tensor-to-scalar ratio (denoted r), which is determined by the unknown energy scale of inflation. Other modifications allow hot dark matter in the form of neutrinos more massive than the minimal value, or a running spectral index; the latter is generally not favoured by simple cosmic inflation models.
Allowing additional variable parameter(s) will generally increase the uncertainties in the standard six parameters quoted above, and may also shift the central values slightly. The table below shows results for each of the possible "6+1" scenarios with one additional variable parameter; this indicates that, as of 2015, there is no convincing evidence that any additional parameter is different from its default value.
Some researchers have suggested that there is a running spectral index, but no statistically significant study has revealed one. Theoretical expectations suggest that the tensor-to-scalar ratio r should be between 0 and 0.3, and the latest results are within those limits.
== See also ==
Bolshoi cosmological simulation
Galaxy formation and evolution
Illustris project
List of cosmological computation software
Millennium Run
Weakly interacting massive particles (WIMPs)
Inhomogeneous cosmology
The ΛCDM model is also known as the standard model of cosmology, but is not related to the Standard Model of particle physics.
== References ==
== Further reading ==
Ostriker, J. P.; Steinhardt, P. J. (1995). "Cosmic Concordance". arXiv:astro-ph/9505066.
Ostriker, Jeremiah P.; Mitton, Simon (2013). Heart of Darkness: Unraveling the mysteries of the invisible universe. Princeton, NJ: Princeton University Press. ISBN 978-0-691-13430-7.
Rebolo, R.; et al. (2004). "Cosmological parameter estimation using Very Small Array data out to ℓ= 1500". Monthly Notices of the Royal Astronomical Society. 353 (3): 747–759. arXiv:astro-ph/0402466. Bibcode:2004MNRAS.353..747R. doi:10.1111/j.1365-2966.2004.08102.x. S2CID 13971059.
== External links ==
Cosmology tutorial/NedWright
Millennium Simulation
WMAP estimated cosmological parameters/Latest Summary | Wikipedia/Standard_cosmological_model |
Scalar–tensor–vector gravity (STVG) is a modified theory of gravity developed by John Moffat, a researcher at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario. The theory is also often referred to by the acronym MOG (MOdified Gravity).
== Overview ==
Scalar–tensor–vector gravity theory, also known as MOdified Gravity (MOG), is based on an action principle and postulates the existence of a vector field, while elevating the three constants of the theory to scalar fields. In the weak-field approximation, STVG produces a Yukawa-like modification of the gravitational force due to a point source. Intuitively, this result can be described as follows: far from a source gravity is stronger than the Newtonian prediction, but at shorter distances, it is counteracted by a repulsive fifth force due to the vector field.
STVG has been used successfully to explain galaxy rotation curves, the mass profiles of galaxy clusters, gravitational lensing in the Bullet Cluster, and cosmological observations without the need for dark matter. On a smaller scale, in the Solar System, STVG predicts no observable deviation from general relativity. The theory may also offer an explanation for the origin of inertia.
== Mathematical details ==
STVG is formulated using the action principle. In the following discussion, a metric signature of [+,−,−,−] will be used; the speed of light is set to c = 1, and we are using the following definition for the Ricci tensor:
{\displaystyle R_{\alpha \beta }=\partial _{\gamma }\Gamma _{\alpha \beta }^{\gamma }-\partial _{\beta }\Gamma _{\alpha \gamma }^{\gamma }+\Gamma _{\alpha \beta }^{\gamma }\Gamma _{\gamma \delta }^{\delta }-\Gamma _{\alpha \delta }^{\gamma }\Gamma _{\gamma \beta }^{\delta }.}
We begin with the Einstein–Hilbert Lagrangian:
{\displaystyle {\mathcal {L}}_{G}=-{\frac {1}{16\pi G}}(R+2\Lambda ){\sqrt {-g}},}
where R is the trace of the Ricci tensor, G is the gravitational constant, g is the determinant of the metric tensor g_αβ, while Λ is the cosmological constant.
We introduce the Maxwell–Proca Lagrangian for the STVG covector field ϕ_α:
{\displaystyle {\mathcal {L}}_{\phi }=-{\frac {1}{4\pi }}\omega \left[{\frac {1}{4}}B^{\alpha \beta }B_{\alpha \beta }-{\frac {1}{2}}\mu ^{2}\phi _{\alpha }\phi ^{\alpha }+V_{\phi }(\phi )\right]{\sqrt {-g}},}
where B_αβ = ∂_α ϕ_β − ∂_β ϕ_α = (dϕ)_αβ is the field strength of ϕ_α (given by the exterior derivative), μ is the mass of the vector field, ω characterizes the strength of the coupling between the fifth force and matter, and V_ϕ is a self-interaction potential.
The three constants of the theory, G, μ, and ω, are promoted to scalar fields by introducing associated kinetic and potential terms in the Lagrangian density:
{\displaystyle {\mathcal {L}}_{S}=-{\frac {1}{G}}\left[{\frac {1}{2}}g^{\alpha \beta }\left({\frac {\partial _{\alpha }G\partial _{\beta }G}{G^{2}}}+{\frac {\partial _{\alpha }\mu \partial _{\beta }\mu }{\mu ^{2}}}-\partial _{\alpha }\omega \partial _{\beta }\omega \right)+{\frac {V_{G}(G)}{G^{2}}}+{\frac {V_{\mu }(\mu )}{\mu ^{2}}}+V_{\omega }(\omega )\right]{\sqrt {-g}},}
where V_G, V_μ, and V_ω are the self-interaction potentials associated with the scalar fields.
The STVG action integral takes the form
{\displaystyle S=\int {({\mathcal {L}}_{G}+{\mathcal {L}}_{\phi }+{\mathcal {L}}_{S}+{\mathcal {L}}_{M})}~\mathrm {d^{4}} x,}
where L_M is the ordinary matter Lagrangian density.
== Spherically symmetric, static vacuum solution ==
The field equations of STVG can be developed from the action integral using the variational principle. First a test particle Lagrangian is postulated in the form
{\displaystyle {\mathcal {L}}_{\mathrm {TP} }=-m+\alpha \omega q_{5}\phi _{\mu }u^{\mu },}
where m is the test particle mass, α is a factor representing the nonlinearity of the theory, q_5 is the test particle's fifth-force charge, and u^μ = dx^μ/ds is its four-velocity. Assuming that the fifth-force charge is proportional to mass, i.e., q_5 = κm, the value κ = √(G_N/ω) is determined and the following equation of motion is obtained in the spherically symmetric, static gravitational field of a point mass of mass M:
{\displaystyle {\ddot {r}}=-{\frac {G_{N}M}{r^{2}}}\left[1+\alpha -\alpha (1+\mu r)e^{-\mu r}\right],}
where G_N is Newton's constant of gravitation. Further study of the field equations allows a determination of α and μ for a point gravitational source of mass M in the form
{\displaystyle \mu ={\frac {D}{\sqrt {M}}},}
{\displaystyle \alpha ={\frac {G_{\infty }-G_{N}}{G_{N}}}{\frac {M}{({\sqrt {M}}+E)^{2}}},}
where G_∞ ≃ 20 G_N is determined from cosmological observations, while for the constants D and E galaxy rotation curves yield the following values:
{\displaystyle D\simeq 25^{2}\cdot \,10M_{\odot }^{1/2}\mathrm {kpc} ^{-1},}
{\displaystyle E\simeq 50^{2}\cdot \,10M_{\odot }^{1/2},}
where M_⊙ is the mass of the Sun. These results form the basis of a series of calculations that are used to confront the theory with observation.
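The weak-field acceleration law above is straightforward to evaluate numerically. The sketch below (not from the source) compares the MOG and Newtonian accelerations for a galaxy-scale point mass, using the parameter values quoted above (G_∞ ≈ 20 G_N, D = 25²·10 M_⊙^{1/2} kpc^{-1}, E = 50²·10 M_⊙^{1/2}); treating an extended galaxy as a point mass here is only an illustration:

```python
import math

G_N = 6.674e-11                  # Newton's constant, SI
Msun = 1.989e30                  # kg
kpc = 3.0857e19                  # m

# Parameter values quoted in the text (D and E in solar-mass / kpc units)
G_inf_over_GN = 20.0
D = 6250.0                       # = 25^2 * 10, in Msun^(1/2) kpc^-1
E = 25000.0                      # = 50^2 * 10, in Msun^(1/2)

def mog_acceleration(M_solar, r_kpc):
    """Magnitude of the STVG weak-field acceleration toward a point mass, in m/s^2."""
    mu = D / math.sqrt(M_solar) / kpc                       # convert kpc^-1 to m^-1
    alpha = (G_inf_over_GN - 1.0) * M_solar / (math.sqrt(M_solar) + E) ** 2
    r = r_kpc * kpc
    newton = G_N * M_solar * Msun / r ** 2
    return newton * (1.0 + alpha - alpha * (1.0 + mu * r) * math.exp(-mu * r))

# Example: a 5e10 solar-mass point mass at 20 kpc (roughly galactic scales)
print(mog_acceleration(5e10, 20.0))    # enhanced relative to the Newtonian G_N*M/r^2
```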
== Agreement with observations ==
STVG/MOG has been applied successfully to a range of astronomical, astrophysical, and cosmological phenomena.
On the scale of the Solar System, the theory predicts no deviation from the results of Newton and Einstein. This is also true for star clusters containing no more than a few million solar masses.
The theory accounts for the rotation curves of spiral galaxies, correctly reproducing the Tully–Fisher law.
STVG is in good agreement with the mass profiles of galaxy clusters.
STVG can also account for key cosmological observations, including:
The acoustic peaks in the cosmic microwave background radiation;
The accelerating expansion of the universe that is apparent from type Ia supernova observations;
The matter power spectrum of the universe that is observed in the form of galaxy-galaxy correlations.
== Problems and criticism ==
A 2017 article on Forbes by Ethan Siegel states that the Bullet Cluster still "proves dark matter exists, but not for the reason most physicists think". There he argues in favor of dark matter over non-local gravity theories, such as STVG/MOG. Observations show that in "undisturbed" galaxy clusters the reconstructed mass from gravitational lensing is located where matter is distributed, and a separation of matter from gravitation only seems to appear after a collision or interaction has taken place. According to Ethan Siegel: "Adding dark matter makes this work, but non-local gravity would make differing before-and-after predictions that can't both match up, simultaneously, with what we observe."
== See also ==
Modified Newtonian dynamics
Nonsymmetric gravitational theory
Tensor–vector–scalar gravity
Reinventing Gravity
== References == | Wikipedia/Modified_gravity_theory |
The European Space Agency (ESA) is a 23-member international organization devoted to space exploration. With its headquarters in Paris and a staff of around 2,547 people globally as of 2023, ESA was founded in 1975 in the context of European integration. Its 2025 annual budget was €7.7 billion.
The ESA Human and Robotic Exploration programme includes human spaceflight (mainly through participation in the International Space Station programme) as well as the launch and operation of uncrewed exploration missions to the Moon and to other planets such as Mars, Jupiter and Mercury. The agency also conducts Earth observation, science and telecommunication missions, designs launch vehicles, and maintains a major spaceport, the Guiana Space Centre at Kourou (French Guiana). Further programmes include space safety, satellite navigation, applications and commercialisation.
The main European launch vehicle, Ariane 6, is operated through Arianespace, with ESA sharing in the costs of launching and further developing the launch vehicle. The agency also collaborates with NASA to manufacture the European Service Module (ESM) of the Orion spacecraft, which flies on the Space Launch System.
== History ==
=== Foundation ===
After World War II, many European scientists left Western Europe in order to work with the United States. Although the 1950s boom made it possible for Western European countries to invest in research and specifically in space-related activities, Western European scientists realised solely national projects would not be able to compete with the two main superpowers. In 1958, only months after the Sputnik shock, Edoardo Amaldi (Italy) and Pierre Auger (France), two prominent members of the Western European scientific community, met to discuss the foundation of a common Western European space agency. The meeting was attended by scientific representatives from eight countries.
The Western European nations decided to have two agencies: one concerned with developing a launch system, ELDO (European Launcher Development Organisation), and the other the precursor of the European Space Agency, ESRO (European Space Research Organisation). The latter was established on 20 March 1964 by an agreement signed on 14 June 1962. From 1968 to 1972, ESRO launched seven research satellites, but ELDO was not able to deliver a launch vehicle. Both agencies struggled with the underfunding and diverging interests of their participants.
The ESA in its current form was founded with the ESA Convention in 1975, when ESRO was merged with ELDO. The ESA had ten founding member states: Belgium, Denmark, France, West Germany, Italy, the Netherlands, Spain, Sweden, Switzerland, and the United Kingdom. These signed the ESA Convention in 1975 and deposited the instruments of ratification by 1980, when the convention came into force. During this interval the agency functioned in a de facto fashion. ESA launched its first major scientific mission in 1975, Cos-B, a space probe monitoring gamma-ray emissions in the universe, which was first worked on by ESRO.
=== Later activities ===
ESA collaborated with NASA on the International Ultraviolet Explorer (IUE), the world's first high-orbit telescope, which was launched in 1978 and operated successfully for 18 years. A number of successful Earth-orbit projects followed, and in 1986 ESA began Giotto, its first deep-space mission, to study the comets Halley and Grigg–Skjellerup. Hipparcos, a star-mapping mission, was launched in 1989 and in the 1990s SOHO, Ulysses and the Hubble Space Telescope were all jointly carried out with NASA. Later scientific missions in cooperation with NASA include the Cassini–Huygens space probe, to which the ESA contributed by building the Titan landing module Huygens.
As the successor of ELDO, the ESA has also constructed rockets for scientific and commercial payloads. Ariane 1, launched in 1979, carried mostly commercial payloads into orbit from 1984 onward. The next two versions of the Ariane rocket were intermediate stages in the development of a more advanced launch system, the Ariane 4, which operated between 1988 and 2003 and established the ESA as the world leader in commercial space launches in the 1990s. Although the succeeding Ariane 5 experienced a failure on its first flight, it has since firmly established itself within the heavily competitive commercial space launch market with 112 successful launches until 2023. The successor launch vehicle, Ariane 6, had its maiden flight on 9 July 2024. It was followed by flight VA263, the first commercial launch, on 6 March 2025 at 13:24 local time (16:24 BST, 17:24 CET), delivering the Composante Spatiale Optique CSO-3 satellite.
The beginning of the new millennium saw the ESA become, along with agencies like NASA, JAXA, ISRO, the CSA and Roscosmos, one of the major participants in scientific space research. Although ESA had relied on co-operation with NASA in previous decades, especially the 1990s, changed circumstances (such as tough legal restrictions on information sharing by the United States military) led to decisions to rely more on itself and on co-operation with Russia. A 2011 press issue thus stated:
Russia was ESA's first partner in its efforts to ensure long-term access to space. There is a framework agreement between ESA and the government of the Russian Federation on cooperation and partnership in the exploration and use of outer space for peaceful purposes, and cooperation is already underway in two different areas of launcher activity that will bring benefits to both partners.
Notable ESA programmes include SMART-1, a probe testing cutting-edge space propulsion technology, the Mars Express and Venus Express missions, as well as the development of the Ariane 5 rocket and its role in the ISS partnership. The ESA maintains its scientific and research projects mainly for astronomy-space missions such as Corot, launched on 27 December 2006, a milestone in the search for exoplanets.
On 21 January 2019, ArianeGroup and Arianespace announced a one-year contract with the ESA to study and prepare for a mission to mine the Moon for lunar regolith.
In 2021 the ESA ministerial council agreed to the "Matosinhos manifesto" which set three priority areas (referred to as accelerators) "space for a green future, a rapid and resilient crisis response, and the protection of space assets", and two further high visibility projects (referred to as inspirators) an icy moon sample return mission; and human space exploration. In the same year the recruitment process began for the 2022 European Space Agency Astronaut Group.
The first half of 2023 saw the launches of the Jupiter Icy Moons Explorer and the Euclid spacecraft, the latter developed jointly with the Euclid Consortium. After 10 years of planning and building, it is designed to better understand dark energy and dark matter by accurately measuring the accelerating expansion of the universe.
The most notable ESA mission of 2024 was Hera, which launched on 7 October that year to perform a post-impact survey of the asteroid Dimorphos, which was deflected by NASA's Double Asteroid Redirection Test mission.
In early 2025, the European Space Agency released its Strategy 2040, a long-term roadmap adopted by the ESA council to define the agency's priorities. The strategy is centered on 5 key goals:
Protecting the planet and climate
Advancing space exploration
Strengthening European autonomy and resilience
Boosting economic growth and competitiveness
Inspiring future generations
In March 2025, ESA officially launched its European Launcher Challenge (ELC) by publishing the Invitation to Tender (ITT). Initially introduced in November 2023, the program aims to foster new European sovereign launch capabilities, beginning with small launch vehicles and ultimately paving the way for an Ariane 6 successor.
=== Facilities ===
The agency's facilities date back to ESRO and are deliberately distributed among various countries and areas. The most important are the following centres:
ESA headquarters in Paris, France;
ESA science missions are based at ESTEC in Noordwijk, Netherlands;
Earth Observation missions at the ESA Centre for Earth Observation in Frascati, Italy;
ESA Mission Control (ESOC) is in Darmstadt, Germany;
The European Astronaut Centre (EAC) that trains astronauts for future missions is situated in Cologne, Germany;
The European Centre for Space Applications and Telecommunications (ECSAT), a research institute created in 2009, is located in Harwell, England, United Kingdom;
The European Space Astronomy Centre (ESAC) is located in Villanueva de la Cañada, Madrid, Spain.
The European Space Security and Education Centre (ESEC), located in Redu, Belgium;
The ESTRACK tracking and deep space communication network.
Many other facilities are operated by national space agencies in close collaboration with ESA.
Esrange near Kiruna in Sweden;
Guiana Space Centre in Kourou, France;
Toulouse Space Centre, France;
Institute of Space Propulsion in Lampoldshausen, Germany;
Columbus Control Centre in Oberpfaffenhofen, Germany.
== Mission ==
The treaty establishing the European Space Agency reads:
Article II, Purpose, Convention of establishment of a European Space Agency, SP-1271(E) from 2003 -- The purpose of the Agency shall be to provide for and to promote, for exclusively peaceful purposes, cooperation among European States in space research and technology and their space applications, with a view to their being used for scientific purposes and for operational space applications systems…
The ESA is responsible for setting a unified space and related industrial policy, recommending space objectives to the member states, and integrating national programs like satellite development, into the European program as much as possible.
Jean-Jacques Dordain – ESA's Director General (2003–2015) – outlined the European Space Agency's mission in a 2003 interview:
Today space activities have pursued the benefit of citizens, and citizens are asking for a better quality of life on Earth. They want greater security and economic wealth, but they also want to pursue their dreams, to increase their knowledge, and they want younger people to be attracted to the pursuit of science and technology.
I think that space can do all of this: it can produce a higher quality of life, better security, more economic wealth, and also fulfill our citizens' dreams and thirst for knowledge, and attract the young generation. This is the reason space exploration is an integral part of overall space activities. It has always been so, and it will be even more important in the future.
== Activities and programmes ==
The ESA describes its work in two overlapping ways:
For the general public, the various fields of work are described as "Activities".
Budgets are organised as "Programmes".
These are either mandatory or optional.
=== Activities ===
According to the ESA website, the activities are:
Observing the Earth
Human and Robotic Exploration
Launchers
Navigation
Space Science
Space Engineering & Technology
Operations
Telecommunications & Integrated Applications
Preparing for the Future
Space for Climate
=== Programmes ===
==== Mandatory ====
Every member country (known as 'Member States') must contribute to these programmes: The European Space Agency Science Programme is a long-term programme of space science missions.
Technology Development Element Programme
Science Core Technology Programme
General Study Programme
European Component Initiative
==== Optional ====
Depending on their individual choices the countries can contribute to the following programmes, becoming 'Participating States', listed according to:
== Employment ==
As of 2023, the ESA employs around 2,547 people and thousands of contractors. Initially, new employees are contracted for an extendable four-year term, up to the organization's retirement age of 63. According to the ESA's documents, staff can receive a myriad of perks, such as financial childcare support, retirement plans, and financial help when migrating. The ESA also prevents employees from disclosing any private documents or correspondence to outside parties. Ars Technica's 2023 report, which contained testimonies of 18 people, suggested that there is widespread harassment between management and employees, especially contractors. Since the ESA is an international organization, unaffiliated with any single nation, any form of legal action against it is difficult to raise.
== Member states, funding and budget ==
=== Membership and contribution to the ESA ===
Member states participate to varying degrees with both mandatory space programs and those that are optional. As of 2008, the mandatory programmes made up 25% of total expenditures while optional space programmes were the other 75%. The ESA has traditionally implemented a policy of "georeturn", where funds that ESA member states provide to the ESA "are returned in the form of contracts to companies in those countries."
By 2015, the ESA was an intergovernmental organisation of 22 member states.
The 2008 ESA budget amounted to €3.0 billion whilst the 2009 budget amounted to €3.6 billion. The total budget amounted to about €3.7 billion in 2010, €3.99 billion in 2011, €4.02 billion in 2012, €4.28 billion in 2013, €4.10 billion in 2014, €4.43 billion in 2015, €5.25 billion in 2016, €5.75 billion in 2017, €5.60 billion in 2018, €5.72 billion in 2019, €6.68 billion in 2020, €6.49 billion in 2021, €7.15 billion in 2022, €7.46 billion in 2023 and €7.79 billion in 2024.
English and French are the two official languages of the ESA. Additionally, official documents are also provided in German and documents regarding the Spacelab have been also provided in Italian. If found appropriate, the agency may conduct its correspondence in any language of a member state.
The following table lists all the member states and adjunct members, their ESA convention ratification dates, and their contributions as of 2024:
=== Non-full member states ===
Previously associated members were Austria, Norway, Finland and Slovenia, all of which later joined the ESA as full members. Since January 2025 there have been four associate members: Latvia, Lithuania, Slovakia and Canada. The three European associate members have shown interest in full membership and may eventually apply within the next few years.
==== Latvia ====
Latvia became the second current associated member on 30 June 2020, when the Association Agreement was signed by ESA Director Jan Wörner and the Minister of Education and Science of Latvia, Ilga Šuplinska in Riga. The Saeima ratified it on 27 July.
==== Lithuania ====
In May 2021, Lithuania became the third current associated member. As a consequence its citizens became eligible to apply to the 2022 ESA Astronaut group, applications for which were scheduled to close one week later. The deadline was therefore extended by three weeks to allow Lithuanians a fair chance to apply.
==== Slovakia ====
Slovakia's Associate membership came into effect on 13 October 2022, for an initial duration of seven years. The Association Agreement supersedes the European Cooperating State (ECS) Agreement, which entered into force upon Slovakia's subscription to the Plan for European Cooperating States Charter on 4 February 2016, a scheme introduced at ESA in 2001. The ECS Agreement was subsequently extended until 3 August 2022.
==== Canada ====
Since 1 January 1979, Canada has had the special status of a Cooperating State within the ESA. By virtue of this accord, the Canadian Space Agency takes part in the ESA's deliberative bodies and decision-making and also in the ESA's programmes and activities. Canadian firms can bid for and receive contracts to work on programmes. The accord has a provision ensuring a fair industrial return to Canada. The most recent Cooperation Agreement was signed on 15 December 2010 with a term extending to 2020. For 2014, Canada's annual assessed contribution to the ESA general budget was €6,059,449 (CAD$8,559,050). For 2017, Canada has increased its annual contribution to €21,600,000 (CAD$30,000,000).
=== Budget appropriation and allocation ===
The ESA is funded from annual contributions by national governments of members as well as from an annual contribution by the European Union (EU).
The budget of the ESA was €5.250 billion in 2016. Every 3–4 years, ESA member states agree on a budget plan for several years at an ESA member states conference. This plan can be amended in future years, however provides the major guideline for the ESA for several years. The 2016 budget allocations for major areas of the ESA activity are shown in the chart on the right.
Countries typically have their own space programmes that differ in how they operate organisationally and financially with the ESA. For example, the French space agency CNES has a total budget of €2,015 million, of which €755 million is paid as direct financial contribution to the ESA. Several space-related projects are joint projects between national space agencies and the ESA (e.g. COROT). Also, the ESA is not the only European governmental space organisation (for example European Union Satellite Centre and the European Union Space Programme Agency).
=== Enlargement ===
After the decision of the ESA Council of 21/22 March 2001, the procedure for accession of European states was detailed in the document titled "The Plan for European Co-operating States (PECS)". Nations that want to become a full member of the ESA do so in three stages. First, a Cooperation Agreement is signed between the country and the ESA. In this stage, the country has very limited financial responsibilities. If a country wants to co-operate more fully with the ESA, it signs a European Cooperating State (ECS) Agreement, although to be a candidate for such an agreement, a country must be European. The ECS Agreement makes companies based in the country eligible for participation in ESA procurements. The country can also participate in all ESA programmes, except for the Basic Technology Research Programme. While the financial contribution of the country concerned increases, it is still much lower than that of a full member state. The agreement is normally followed by a Plan for European Cooperating State (or PECS Charter). This is a 5-year programme of basic research and development activities aimed at improving the nation's space industry capacity. At the end of the 5-year period, the country can either begin negotiations to become a full member state or an associated state, or sign a new PECS Charter. Many countries, most of which joined the EU in 2004 or 2007, have started to co-operate with the ESA on various levels:
During the Ministerial Meeting in December 2014, ESA ministers approved a resolution calling for discussions to begin with Israel, Australia and South Africa on future association agreements. The ministers noted that "concrete cooperation is at an advanced stage" with these nations and that "prospects for mutual benefits are existing".
A separate space exploration strategy resolution calls for further co-operation with the United States, Russia and China on "LEO exploration, including a continuation of ISS cooperation and the development of a robust plan for the coordinated use of space transportation vehicles and systems for exploration purposes, participation in robotic missions for the exploration of the Moon, the robotic exploration of Mars, leading to a broad Mars Sample Return mission in which Europe should be involved as a full partner, and human missions beyond LEO in the longer term."
In August 2019, the ESA and the Australian Space Agency signed a joint statement of intent "to explore deeper cooperation and identify projects in a range of areas including deep space, communications, navigation, remote asset management, data analytics and mission support." Details of the cooperation were laid out in a framework agreement signed by the two entities.
On 17 November 2020, ESA signed a memorandum of understanding (MOU) with the South African National Space Agency (SANSA). SANSA CEO Dr. Valanathan Munsami tweeted: "Today saw another landmark event for SANSA with the signing of an MoU with the ESA. This builds on initiatives that we have been discussing for a while already and which gives effect to these. Thanks Jan for your hand of friendship and making this possible."
== Launch vehicles ==
The ESA currently has two operational launch vehicles: Vega-C and Ariane 6. Rocket launches are carried out by Arianespace, which has 23 shareholders representing the industry that manufactures the Ariane 5 as well as CNES, at the ESA's Guiana Space Centre. Because many communication satellites have equatorial orbits, launches from French Guiana can take larger payloads into space than launches from spaceports at higher latitudes. In addition, equatorial launches give spacecraft an extra 'push' of nearly 500 m/s due to the higher rotational velocity of the Earth at the equator compared to near the Earth's poles, where the rotational velocity approaches zero.
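To make the size of this equatorial advantage concrete, the eastward surface speed due to Earth's rotation can be estimated from the sidereal rotation period. The Python sketch below is illustrative only; the radius, period and launch-site latitudes are rounded assumptions, not figures taken from this article.

```python
import math

# Rounded assumptions: mean equatorial radius and sidereal day.
EARTH_RADIUS_M = 6.378e6   # metres
SIDEREAL_DAY_S = 86164.1   # seconds

def surface_rotation_speed(latitude_deg: float) -> float:
    """Eastward speed (m/s) of Earth's surface due to rotation at a given latitude."""
    circumference = 2.0 * math.pi * EARTH_RADIUS_M * math.cos(math.radians(latitude_deg))
    return circumference / SIDEREAL_DAY_S

for site, lat in [("Kourou (French Guiana)", 5.2), ("Cape Canaveral", 28.5), ("Baikonur", 45.9)]:
    print(f"{site:25s} {surface_rotation_speed(lat):6.0f} m/s")
# Kourou comes out near 460-465 m/s, consistent with the "nearly 500 m/s" figure above.
```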
=== Ariane 6 ===
Ariane 6 is a heavy-lift expendable launch vehicle developed by Arianespace. Its inaugural flight campaign began on 26 April 2024, and the first flight was conducted on 9 July 2024.
=== Vega-C ===
Vega is the ESA's carrier for small satellites, developed by seven ESA members led by Italy. It is capable of carrying a payload with a mass of between 300 and 1500 kg to an altitude of 700 km in low polar orbit. Its maiden launch from Kourou was on 13 February 2012, and Vega began full commercial exploitation in December 2015.
The rocket has three solid propulsion stages and a liquid propulsion upper stage (the AVUM) for accurate orbital insertion and the ability to place multiple payloads into different orbits.
A larger version of the Vega launcher, Vega-C had its first flight in July 2022. The new evolution of the rocket incorporates a larger first stage booster, the P120C replacing the P80, an upgraded Zefiro (rocket stage) second stage, and the AVUM+ upper stage. This new variant enables larger single payloads, dual payloads, return missions, and orbital transfer capabilities.
=== Ariane launch vehicle development funding ===
Historically, the Ariane family rockets have been funded primarily "with money contributed by ESA governments seeking to participate in the program rather than through competitive industry bids. This [has meant that] governments commit multiyear funding to the development with the expectation of a roughly 90% return on investment in the form of industrial workshare." ESA is proposing changes to this scheme by moving to competitive bids for the development of the Ariane 6.
=== Future rocket development ===
Future projects include the Prometheus reusable engine technology demonstrator, Phoebus (an upgraded second stage for Ariane 6), and Themis (a reusable first stage).
== Human space flight ==
=== Formation and development ===
At the time the ESA was formed, its main goals did not encompass human space flight; rather, it considered itself primarily a scientific research organisation for uncrewed space exploration, in contrast to its American and Soviet counterparts. It is therefore not surprising that the first non-Soviet European in space was not an ESA astronaut on a European spacecraft: it was the Czechoslovak Vladimír Remek who in 1978 became the first person in space who was neither Soviet nor American (the first man in space being Yuri Gagarin of the Soviet Union), flying on a Soviet Soyuz spacecraft, followed by the Pole Mirosław Hermaszewski and the East German Sigmund Jähn in the same year. This Soviet co-operation programme, known as Intercosmos, primarily involved the participation of Eastern Bloc countries. In 1982, however, Jean-Loup Chrétien became the first non-Communist Bloc astronaut on a flight to the Soviet Salyut 7 space station.
Because Chrétien did not officially fly into space as an ESA astronaut, but rather as a member of the French CNES astronaut corps, the German Ulf Merbold is considered the first ESA astronaut to fly into space. He participated in the STS-9 Space Shuttle mission that included the first use of the European-built Spacelab in 1983. STS-9 marked the beginning of an extensive ESA/NASA joint partnership that included dozens of space flights of ESA astronauts in the following years. Some of these missions with Spacelab were fully funded and organisationally and scientifically controlled by the ESA (such as two missions by Germany and one by Japan) with European astronauts as full crew members rather than guests on board. Beside paying for Spacelab flights and seats on the shuttles, the ESA continued its human space flight co-operation with the Soviet Union and later Russia, including numerous visits to Mir.
During the latter half of the 1980s, European human space flights changed from being the exception to routine and therefore, in 1990, the European Astronaut Centre in Cologne, Germany was established. It selects and trains prospective astronauts and is responsible for the co-ordination with international partners, especially with regard to the International Space Station. As of 2006, the ESA astronaut corps officially included twelve members, including nationals from most large European countries except the United Kingdom.
In 2008, the ESA started to recruit new astronauts so that final selection would be due in spring 2009. Almost 10,000 people registered as astronaut candidates before registration ended in June 2008. 8,413 fulfilled the initial application criteria. Of the applicants, 918 were chosen to take part in the first stage of psychological testing, which narrowed down the field to 192. After two-stage psychological tests and medical evaluation in early 2009, as well as formal interviews, six new members of the European Astronaut Corps were selected – five men and one woman.
=== Crew vehicles ===
In the 1980s, France pressed for an independent European crew launch vehicle. Around 1978, it was decided to pursue a reusable spacecraft model and starting in November 1987 a project to create a mini-shuttle by the name of Hermes was introduced. The craft was comparable to early proposals for the Space Shuttle and consisted of a small reusable spaceship that would carry 3 to 5 astronauts and 3 to 4 metric tons of payload for scientific experiments. With a total maximum weight of 21 metric tons it would have been launched on the Ariane 5 rocket, which was being developed at that time. It was planned solely for use in low Earth orbit space flights. The planning and pre-development phase concluded in 1991; the production phase was never fully implemented because at that time the political landscape had changed significantly. With the fall of the Soviet Union, the ESA looked forward to co-operation with Russia to build a next-generation space vehicle. Thus the Hermes programme was cancelled in 1995 after about 3 billion dollars had been spent. The Columbus space station programme had a similar fate.
In the 21st century, the ESA started new programmes in order to create its own crew vehicles, most notable among its various projects and proposals is Hopper, whose prototype by EADS, called Phoenix, has already been tested. While projects such as Hopper are neither concrete nor to be realised within the next decade, other possibilities for human spaceflight in co-operation with the Russian Space Agency have emerged. Following talks with the Russian Space Agency in 2004 and June 2005, a co-operation between the ESA and the Russian Space Agency was announced to jointly work on the Russian-designed Kliper, a reusable spacecraft that would be available for space travel beyond LEO (e.g. the moon or even Mars). It was speculated that Europe would finance part of it. A €50 million participation study for Kliper, which was expected to be approved in December 2005, was finally not approved by ESA member states. The Russian state tender for the project was subsequently cancelled in 2006.
In June 2006, ESA member states granted 15 million to the Crew Space Transportation System (CSTS) study, a two-year study to design a spacecraft capable of going beyond Low-Earth orbit based on the current Soyuz design. This project was pursued with Roskosmos instead of the cancelled Kliper proposal. A decision on the actual implementation and construction of the CSTS spacecraft was contemplated for 2008.
In mid-2009 EADS Astrium was awarded a €21 million study into designing a crew vehicle based on the European ATV which is believed to now be the basis of the Advanced Crew Transportation System design.
In November 2012, the ESA decided to join NASA's Orion programme. The ATV would form the basis of a propulsion unit for NASA's new crewed spacecraft. The ESA may also seek to work with NASA on Orion's launch system as well in order to secure a seat on the spacecraft for its own astronauts.
In September 2014, the ESA signed an agreement with Sierra Nevada Corporation for co-operation in Dream Chaser project. Further studies on the Dream Chaser for European Utilization or DC4EU project were funded, including the feasibility of launching a Europeanised Dream Chaser onboard Ariane 5.
== Cooperation with other countries and organisations ==
The ESA has signed co-operation agreements with the following states that currently neither plan to integrate as tightly with ESA institutions as Canada, nor envision future membership of the ESA: Argentina, Brazil, China, India (for the Chandrayan mission), Russia and Turkey.
Additionally, the ESA has joint projects with the EUSPA of the European Union, NASA of the United States and is participating in the International Space Station together with the United States (NASA), Russia and Japan (JAXA).
=== National space organisations of member states ===
The Centre National d'Études Spatiales (CNES) (National Centre for Space Study) is the French government space agency (administratively, a "public establishment of industrial and commercial character"). Its headquarters are in central Paris. CNES is the main participant in the Ariane project; indeed, CNES designed and tested all Ariane family rockets, mainly from its centre in Évry near Paris.
The UK Space Agency is a partnership of the UK government departments which are active in space. Through the UK Space Agency, the partners provide delegates to represent the UK on the various ESA governing bodies. Each partner funds its own programme.
The Italian Space Agency (Agenzia Spaziale Italiana or ASI) was founded in 1988 to promote, co-ordinate and conduct space activities in Italy. Operating under the Ministry of the Universities and of Scientific and Technological Research, the agency cooperates with numerous entities active in space technology and with the president of the Council of Ministers. Internationally, the ASI provides Italy's delegation to the Council of the European Space Agency and to its subordinate bodies.
The German Aerospace Center (DLR) (German: Deutsches Zentrum für Luft- und Raumfahrt e. V.) is the national research centre for aviation and space flight of the Federal Republic of Germany and of other member states in the Helmholtz Association. Its extensive research and development projects are included in national and international cooperative programmes. In addition to its research projects, the centre is the assigned space agency of Germany and serves as the headquarters of German space flight activities and its associates.
The Instituto Nacional de Técnica Aeroespacial (INTA) (National Institute for Aerospace Technique) is a Public Research Organisation specialised in aerospace research and technology development in Spain. Among other functions, it serves as a platform for space research and acts as a significant testing facility for the aeronautic and space sector in the country.
=== NASA ===
The ESA has a long history of collaboration with NASA. Since the ESA's astronaut corps was formed, the Space Shuttle was the primary launch vehicle used by the ESA's astronauts to get into space through partnership programmes with NASA. In the 1980s and 1990s, the Spacelab programme was an ESA–NASA joint research programme in which the ESA developed and manufactured orbital labs for the Space Shuttle for several flights, with ESA astronauts participating in the experiments.
In robotic science mission and exploration missions, NASA has been the ESA's main partner. Cassini–Huygens was a joint NASA-ESA mission, along with the Infrared Space Observatory, INTEGRAL, SOHO, and others. Also, the Hubble Space Telescope is a joint project of NASA and the ESA. Future ESA-NASA joint projects include the James Webb Space Telescope and the proposed Laser Interferometer Space Antenna. NASA has supported the ESA's MarcoPolo-R mission which landed on asteroid Bennu in October 2020 and is scheduled to return a sample to Earth for further analysis in 2023. NASA and the ESA will also likely join for a Mars sample-return mission. In October 2020, the ESA entered into a memorandum of understanding (MOU) with NASA to work together on the Artemis program, which will provide an orbiting Lunar Gateway and also accomplish the first crewed lunar landing in 50 years, whose team will include the first woman on the Moon. Astronaut selection announcements are expected within two years of the 2024 scheduled launch date. The ESA also purchases seats on the NASA operated Commercial Crew Program. The first ESA astronaut to be on a Commercial Crew Program mission is Thomas Pesquet. Pesquet launched into space aboard Crew Dragon Endeavour on the Crew-2 mission. The ESA also has seats on Crew-3 with Matthias Maurer and Crew-4 with Samantha Cristoforetti.
=== SpaceX ===
In 2023, following the successful launch of the Euclid telescope on a Falcon 9 rocket in July, the ESA approached SpaceX about launching four Galileo navigation satellites on two Falcon 9 rockets in 2024; however, the arrangement required approval from the European Commission and all member states of the European Union to proceed.
=== Cooperation with other space agencies ===
As China has invested more money in space activities, the Chinese space agency has sought international partnerships. Besides the Russian space agency, the ESA is one of its most important partners. The two space agencies cooperated in the development of the Double Star mission. In 2017, the ESA sent two astronauts to China for two weeks of sea survival training with Chinese astronauts in Yantai, Shandong.
The ESA entered into a major joint venture with Russia in the form of the CSTS, the preparation of French Guiana spaceport for launches of Soyuz-2 rockets and other projects. With India, the ESA agreed to send instruments into space aboard the ISRO's Chandrayaan-1 in 2008. The ESA is also co-operating with Japan, the most notable current project in collaboration with JAXA is the BepiColombo mission to Mercury.
=== International Space Station ===
With regard to the International Space Station (ISS), the ESA is not represented by all of its member states: 11 of the 22 ESA member states currently participate in the project: Belgium, Denmark, France, Germany, Italy, Netherlands, Norway, Spain, Sweden, Switzerland and United Kingdom. Austria, Finland and Ireland chose not to participate, because of lack of interest or concerns about the expense of the project. Portugal, Luxembourg, Greece, the Czech Republic, Romania, Poland, Estonia and Hungary joined ESA after the agreement had been signed.
The ESA takes part in the construction and operation of the ISS, with contributions such as Columbus, a science laboratory module that was brought into orbit by NASA's STS-122 Space Shuttle mission, and the Cupola observatory module that was completed in July 2005 by Alenia Spazio for the ESA. The current estimates for the ISS are approaching €100 billion in total (development, construction and 10 years of maintaining the station) of which the ESA has committed to paying €8 billion. About 90% of the costs of the ESA's ISS share will be contributed by Germany (41%), France (28%) and Italy (20%). German ESA astronaut Thomas Reiter was the first long-term ISS crew member.
The ESA has developed the Automated Transfer Vehicle for ISS resupply. Each ATV has a cargo capacity of 7,667 kilograms (16,903 lb). The first ATV, Jules Verne, was launched on 9 March 2008 and on 3 April 2008 successfully docked with the ISS. This manoeuvre, considered a major technical feat, involved using automated systems to allow the ATV to track the ISS, moving at 27,000 km/h, and attach itself with an accuracy of 2 cm. Five vehicles were launched before the program ended with the launch of the fifth ATV, Georges Lemaître, in 2014.
As of 2020, the spacecraft establishing supply links to the ISS are the Russian Progress and Soyuz, Japanese Kounotori (HTV), and the United States vehicles Cargo Dragon 2 and Cygnus stemmed from the Commercial Resupply Services program.
European Life and Physical Sciences research on board the International Space Station (ISS) is mainly based on the European Programme for Life and Physical Sciences in Space programme that was initiated in 2001.
=== Facilities ===
ESA Headquarters, Paris, France
European Space Operations Centre (ESOC), Darmstadt, Germany
European Space Research and Technology Centre (ESTEC), Noordwijk, Netherlands
European Space Astronomy Centre (ESAC), Madrid, Spain
European Centre for Space Applications and Telecommunications (ECSAT), Oxfordshire, United Kingdom
European Astronaut Centre (EAC), Cologne, Germany
ESA Centre for Earth Observation (ESRIN), Frascati, Italy
Guiana Space Centre (CSG), Kourou, French Guiana
European Space Tracking Network (ESTRACK)
European Data Relay System
== Link between ESA and EU ==
The ESA is an independent space agency and not under the jurisdiction of the European Union, although they have common goals, share funding, and work together often.
The initial aim of the European Union (EU) was to make the European Space Agency an agency of the EU by 2014. While the EU and its member states fund together 86% of the budget of the ESA, it is not an EU agency. Furthermore, the ESA has several non-EU members, most notably the United Kingdom which left the EU while remaining a full member of the ESA. The ESA is partnered with the EU on its two current flagship space programmes, the Copernicus series of Earth observation satellites and the Galileo satellite navigation system, with the ESA providing technical oversight and, in the case of Copernicus, some of the funding. The EU, though, has shown an interest in expanding into new areas, whence the proposal to rename and expand its satellite navigation agency (the European GNSS Agency) into the EU Agency for the Space Programme. The proposal drew strong criticism from the ESA, as it was perceived as encroaching on the ESA's turf.
In January 2021, after years of acrimonious relations, EU and ESA officials mended their relationship, with the EU Internal Market commissioner Thierry Breton saying "The European space policy will continue to rely on the ESA and its unique technical, engineering and science expertise," and that the "ESA will continue to be the European agency for space matters. If we are to be successful in our European strategy for space, and we will be, I will need the ESA by my side." ESA director Aschbacher reciprocated, saying "I would really like to make the ESA the main agency, the go-to agency of the European Commission for all its flagship programmes." The ESA and EUSPA are now seen to have distinct roles and competencies, which will be officialised in the Financial Framework Partnership Agreement (FFPA). Whereas the ESA's focus will be on the technical elements of the EU space programmes, the EUSPA will handle the operational elements of those programmes.
== Security incidents ==
On 3 August 1984, the ESA's Paris headquarters were severely damaged and six people were hurt when a bomb exploded. It was planted by the far-left armed Action Directe group.
On 14 December 2015, hackers from Anonymous breached the ESA's subdomains and leaked thousands of login credentials.
== See also ==
European integration § Space
European Space Security and Education Centre
Eurospace
List of European Space Agency programmes and missions
List of government space agencies
SEDS
Space Night
=== European Union matters ===
Agencies of the European Union
Directorate-General for Defence Industry and Space
Enhanced co-operation
European Union Agency for the Space Programme
== Notes ==
== References ==
== Further reading ==
== External links ==
Official website
A European strategy for space – Europa
Convention for the establishment of a European Space Agency, September 2005
Convention for the Establishment of a European Space Agency, Annex I: Privileges and Immunities
European Space Agency fonds and 'Oral History of Europe in Space' project run by the European Space Agency at the Historical Archives of the EU in Florence
Open access at the European Space Agency | Wikipedia/ESA_Science_&_Technology |
The Friedmann equations, also known as the Friedmann–Lemaître (FL) equations, are a set of equations in physical cosmology that govern cosmic expansion in homogeneous and isotropic models of the universe within the context of general relativity. They were first derived by Alexander Friedmann in 1922 from Einstein's field equations of gravitation for the Friedmann–Lemaître–Robertson–Walker metric and a perfect fluid with a given mass density ρ and pressure p. The equations for negative spatial curvature were given by Friedmann in 1924.
The physical models built on the Friedmann equations are called FRW or FLRW models and form the standard model of modern cosmology, although such a description is also associated with the further-developed Lambda-CDM model. The FLRW model was developed independently by the named authors in the 1920s and 1930s.
== Assumptions ==
The Friedmann equations build on three assumptions:: 22.1.3
the Friedmann–Lemaître–Robertson–Walker metric,
Einstein's equations for general relativity, and
a perfect fluid source.
The metric in turn starts with the simplifying assumption that the universe is spatially homogeneous and isotropic, that is, the cosmological principle; empirically, this is justified on scales larger than the order of 100 Mpc.
The metric can be written as:: 65
{\displaystyle c^{2}d\tau ^{2}=c^{2}dt^{2}-R^{2}(t)\left(dr^{2}+S_{k}^{2}(r)d\psi ^{2}\right)}
where
{\displaystyle S_{-1}(r)=\sinh(r),\quad S_{0}(r)=r,\quad S_{1}(r)=\sin(r).}
These three possibilities correspond to the values of the parameter k: 0 for flat space, +1 for a sphere of constant positive curvature, and −1 for a hyperbolic space of constant negative curvature.
Here the radial position has been decomposed into a time-dependent scale factor, R(t), and a comoving coordinate, r.
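As a small side illustration, the three curvature cases of the function S_k(r) defined above can be encoded in a few lines of Python. This is a minimal sketch (the flat case is taken as S_0(r) = r, and the function name is chosen here purely for convenience).

```python
import math

def S_k(k: int, r: float) -> float:
    """Radial function appearing in the FLRW line element for curvature index k."""
    if k == 1:       # positive curvature (closed, spherical)
        return math.sin(r)
    elif k == 0:     # zero curvature (flat)
        return r
    elif k == -1:    # negative curvature (open, hyperbolic)
        return math.sinh(r)
    raise ValueError("k must be -1, 0 or +1")

# For small r all three cases nearly agree, reflecting local flatness:
for k in (-1, 0, 1):
    print(k, S_k(k, 0.1))   # each is approximately 0.1
```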
Inserting this metric into Einstein's field equations relates the evolution of the scale factor to the pressure and energy of the matter in the universe. Using the stress–energy tensor of a perfect fluid results in the equations described below.: 73
== Equations ==
There are two independent Friedmann equations for modelling a homogeneous, isotropic universe.
The first is:
{\displaystyle H^{2}\equiv {\left({\frac {\dot {R}}{R}}\right)}^{2}={\frac {8\pi G\rho }{3}}-{\frac {k}{R^{2}}}+{\frac {\Lambda }{3}},}
and the second is:
{\displaystyle {\frac {\ddot {R}}{R}}={\frac {\Lambda }{3}}-{\frac {4\pi G}{3}}\left(\rho +3p\right).}
The term Friedmann equation sometimes is used only for the first equation.
In these equations, R(t) is the cosmological scale factor, G is the Newtonian constant of gravitation, Λ is the cosmological constant (with dimension length−2), ρ is the energy density and p is the isotropic pressure. The curvature parameter k is constant throughout a particular solution, but may vary from one solution to another. The units set the speed of light in vacuum to one.
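As a numerical illustration of the first Friedmann equation, the sketch below evaluates the expansion rate H for a given density, curvature and cosmological constant. It works in SI units with ρ taken as a mass density and the factors of c (set to one in the text) restored; all input values are hypothetical placeholders chosen for illustration.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s

def hubble_rate(rho: float, k: int = 0, R: float = 1.0, Lambda: float = 0.0) -> float:
    """Expansion rate H (1/s) from the first Friedmann equation, written with
    mass density rho (kg/m^3) and the factors of c restored:
        H^2 = 8*pi*G*rho/3 - k*c^2/R^2 + Lambda*c^2/3
    """
    H_squared = 8.0 * math.pi * G * rho / 3.0 - k * C**2 / R**2 + Lambda * C**2 / 3.0
    if H_squared < 0:
        raise ValueError("These parameters give H^2 < 0 (a recollapsing configuration).")
    return math.sqrt(H_squared)

# Hypothetical flat, Lambda = 0 universe at roughly the critical density:
print(hubble_rate(rho=8.6e-27))   # ~2.2e-18 s^-1, of the order of the present Hubble rate
```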
In the previous equations, R, ρ, and p are functions of time. If the cosmological constant Λ is ignored, the term −k/R² in the first Friedmann equation can be interpreted as a Newtonian total energy, so the evolution of the universe pits a gravitational potential-energy term, 8πGρ/3, against a kinetic-energy term, (Ṙ/R)². The outcome depends upon the k value in the total energy: if k is +1, gravity eventually causes the universe to contract. These conclusions are altered if Λ is not zero.
Using the first equation, the second equation can be re-expressed as:
{\displaystyle {\dot {\rho }}=-3H\left(\rho +{\frac {p}{c^{2}}}\right),}
which eliminates Λ. Alternatively, the conservation of mass–energy,
{\displaystyle T^{\alpha \beta }{}_{;\beta }=0,}
leads to the same result.
=== Spatial curvature ===
The first Friedmann equation contains a discrete parameter k = +1, 0 or −1 depending on whether the shape of the universe is a closed 3-sphere, flat (Euclidean space) or an open 3-hyperboloid, respectively. If k is positive, then the universe is "closed": paths through the universe that start off from some point eventually return to the starting point. Such a universe is analogous to a sphere: finite but unbounded. If k is negative, then the universe is "open": infinite and no paths return. If k = 0, then the universe is Euclidean (flat) and infinite.: 69
== Dimensionless scale factor ==
A dimensionless scale factor can be defined:
{\displaystyle a(t)\equiv {\frac {R(t)}{R_{0}}}}
using the present-day value
{\displaystyle R_{0}=R({\text{now}}).}
The Friedmann equations can be written in terms of this dimensionless scale factor:
{\displaystyle H^{2}(t)=\left({\frac {\dot {a}}{a}}\right)^{2}={\frac {8\pi G}{3}}\left[\rho (t)+{\frac {\rho _{c}-\rho _{0}}{a^{2}(t)}}\right]}
where ȧ = da/dt, ρ_c = 3H₀²/(8πG), and ρ₀ = ρ(now).: 3
== Critical density ==
The value of the mass–energy density ρ that gives k = 0 when Λ = 0 is called the critical density:
{\displaystyle \rho _{c}\equiv {\frac {3H^{2}}{8\pi G}}.}
If the universe has a higher density, ρ > ρ_c, it is called "spatially closed": in this simple approximation the universe would eventually contract. If, on the other hand, it has a lower density, ρ < ρ_c, it is called "spatially open" and expands forever. The geometry of the universe is therefore directly connected to its density.: 73
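A quick numerical check of the critical density formula is sketched below in Python. The Hubble constant value used here (about 70 km/s/Mpc) is an assumption for illustration only and differs slightly from the figures quoted later in this article.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
M_HYDROGEN = 1.674e-27   # mass of a hydrogen atom, kg

def critical_density(H0_km_s_Mpc: float) -> float:
    """Critical density rho_c = 3 H^2 / (8 pi G), in kg/m^3."""
    H0 = H0_km_s_Mpc / KM_PER_MPC          # convert to 1/s
    return 3.0 * H0**2 / (8.0 * math.pi * G)

rho_c = critical_density(70.0)
print(f"rho_c ~ {rho_c:.2e} kg/m^3")                                      # ~9e-27 kg/m^3
print(f"equivalent to ~ {rho_c / M_HYDROGEN:.1f} hydrogen atoms per m^3")  # roughly five
```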
== Density parameter ==
The density parameter Ω is defined as the ratio of the actual (or observed) density ρ to the critical density ρ_c of the Friedmann universe:: 74
{\displaystyle \Omega :={\frac {\rho }{\rho _{c}}}={\frac {8\pi G\rho }{3H^{2}}}.}
Both the density ρ(t) and the Hubble parameter H(t) depend upon time, and thus the density parameter varies with time.: 74
The critical density is equivalent to approximately five atoms (of monatomic hydrogen) per cubic metre, whereas the average density of ordinary matter in the Universe is believed to be 0.2–0.25 atoms per cubic metre.
A much greater density comes from the unidentified dark matter, although both ordinary and dark matter contribute in favour of contraction of the universe. However, the largest part comes from so-called dark energy, which accounts for the cosmological constant term. Although the total density is equal to the critical density (exactly, up to measurement error), dark energy does not lead to contraction of the universe but rather may accelerate its expansion.
An expression for the critical density is found by assuming Λ to be zero (as it is for all basic Friedmann universes) and setting the normalised spatial curvature k equal to zero. When these substitutions are applied to the first of the Friedmann equations, with H set to its present-day value H₀, we find:
{\displaystyle {\begin{aligned}\rho ={\frac {3H_{0}^{2}}{8\pi G}}&\approx 1.10\times 10^{-26}\mathrm {kg\,m^{-3}} \\&\approx 1.88\times 10^{-26}{\rm {h}}^{2}\,{\rm {kg}}\,{\rm {m}}^{-3}\\&\approx 2.78\times 10^{11}h^{2}M_{\odot }\,{\rm {Mpc}}^{-3}\end{aligned}}}
where:
{\textstyle H_{0}=76.5\pm 2.2\,\mathrm {km\,s^{-1}\,Mpc^{-1}} \approx 2.48\times 10^{-18}\mathrm {s^{-1}} }
{\textstyle h={\frac {H_{0}}{100\,\mathrm {(km/s)/Mpc} }}}
{\displaystyle \rho _{c}=8.5\times 10^{-27}\mathrm {kg/m^{3}} }
The corresponding dark-energy density parameter is given here as
{\displaystyle \Omega _{\Lambda }=0.647.}
This term originally was used as a means to determine the spatial geometry of the universe, where ρc is the critical density for which the spatial geometry is flat (or Euclidean). Assuming a zero vacuum energy density, if Ω is larger than unity, the space sections of the universe are closed; the universe will eventually stop expanding, then collapse. If Ω is less than unity, they are open; and the universe expands forever. However, one can also subsume the spatial curvature and vacuum energy terms into a more general expression for Ω in which case this density parameter equals exactly unity. Then it is a matter of measuring the different components, usually designated by subscripts. According to the ΛCDM model, there are important components of Ω due to baryons, cold dark matter and dark energy. The spatial geometry of the universe has been measured by the WMAP spacecraft to be nearly flat. This means that the universe can be well approximated by a model where the spatial curvature parameter k is zero; however, this does not necessarily imply that the universe is infinite: it might merely be that the universe is much larger than the part we see.
The first Friedmann equation is often seen in terms of the present values of the density parameters, that is:
{\displaystyle {\frac {H^{2}}{H_{0}^{2}}}=\Omega _{0,\mathrm {R} }a^{-4}+\Omega _{0,\mathrm {M} }a^{-3}+\Omega _{0,k}a^{-2}+\Omega _{0,\Lambda }.}
Here Ω0,R is the radiation density today (when a = 1), Ω0,M is the matter (dark plus baryonic) density today, Ω0,k = 1 − Ω0 is the "spatial curvature density" today, and Ω0,Λ is the cosmological constant or vacuum density today.
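The sketch below evaluates this form of the first Friedmann equation in Python for an assumed set of present-day density parameters (roughly ΛCDM-like values, chosen here for illustration only), returning H as a function of the scale factor or of redshift.

```python
import math

H0 = 70.0                      # assumed Hubble constant, km/s/Mpc (illustrative)
OMEGA_R, OMEGA_M, OMEGA_L = 9e-5, 0.3, 0.7
OMEGA_K = 1.0 - (OMEGA_R + OMEGA_M + OMEGA_L)   # "curvature density" closes the budget

def hubble_parameter(a: float) -> float:
    """H(a) in km/s/Mpc from H^2/H0^2 = Om_R a^-4 + Om_M a^-3 + Om_k a^-2 + Om_L."""
    E_squared = OMEGA_R * a**-4 + OMEGA_M * a**-3 + OMEGA_K * a**-2 + OMEGA_L
    return H0 * math.sqrt(E_squared)

def hubble_at_redshift(z: float) -> float:
    return hubble_parameter(1.0 / (1.0 + z))

print(hubble_parameter(1.0))     # recovers H0 today (a = 1)
print(hubble_at_redshift(1.0))   # the expansion rate was larger at z = 1
```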
=== Other forms ===
The Hubble parameter can change over time if other parts of the equation are time dependent (in particular the mass density, the vacuum energy, or the spatial curvature). Evaluating the Hubble parameter at the present time yields Hubble's constant which is the proportionality constant of Hubble's law. Applied to a fluid with a given equation of state, the Friedmann equations yield the time evolution and geometry of the universe as a function of the fluid density.
== FLRW models ==
Relativistic cosmology models based on the FLRW metric and obeying the Friedmann equations are called FLRW (or FRW) models.: 73
Direct observation of distant galaxies has shown their velocities to be dominated by radial recession, validating these assumptions for cosmological models.: 65
These models are the basis of the standard model of Big Bang cosmology, including the current ΛCDM model.: 25.1.3
To apply the metric to cosmology and predict its time evolution via the scale factor a(t) requires Einstein's field equations together with a way of calculating the density ρ(t), such as a cosmological equation of state.
This process allows an approximate analytic solution of Einstein's field equations
{\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu }}
giving the Friedmann equations when the energy–momentum tensor is similarly assumed to be isotropic and homogeneous. The resulting equations are:
{\displaystyle {\begin{aligned}{\left({\frac {\dot {a}}{a}}\right)}^{2}+{\frac {kc^{2}}{a^{2}}}-{\frac {\Lambda c^{2}}{3}}&={\frac {\kappa c^{4}}{3}}\rho \\[4pt]2{\frac {\ddot {a}}{a}}+{\left({\frac {\dot {a}}{a}}\right)}^{2}+{\frac {kc^{2}}{a^{2}}}-\Lambda c^{2}&=-\kappa c^{2}p.\end{aligned}}}
Because the FLRW model assumes homogeneity, some popular accounts mistakenly assert that the Big Bang model cannot account for the observed lumpiness of the universe. In a strictly FLRW model, there are no clusters of galaxies or stars, since these are objects much denser than a typical part of the universe. Nonetheless, the FLRW model is used as a first approximation for the evolution of the real, lumpy universe because it is simple to calculate, and models that calculate the lumpiness in the universe are added onto the FLRW models as extensions. Most cosmologists agree that the observable universe is well approximated by an almost FLRW model, i.e., a model that follows the FLRW metric apart from primordial density fluctuations. As of 2003, the theoretical implications of the various extensions to the FLRW model appear to be well understood, and the goal is to make these consistent with observations from COBE and WMAP.
=== Interpretation ===
The pair of equations given above is equivalent to the following pair of equations:
{\displaystyle {\begin{aligned}{\dot {\rho }}&=-3{\frac {\dot {a}}{a}}\left(\rho +{\frac {p}{c^{2}}}\right)\\[1ex]{\frac {\ddot {a}}{a}}&=-{\frac {\kappa c^{4}}{6}}\left(\rho +{\frac {3p}{c^{2}}}\right)+{\frac {\Lambda c^{2}}{3}}\end{aligned}}}
with k, the spatial curvature index, serving as a constant of integration for the first equation.
The first equation can be derived also from thermodynamical considerations and is equivalent to the first law of thermodynamics, assuming the expansion of the universe is an adiabatic process (which is implicitly assumed in the derivation of the Friedmann–Lemaître–Robertson–Walker metric).
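This equivalence can be checked symbolically. The sketch below, using sympy purely as an illustration, starts from the first law for adiabatic expansion of a comoving volume, d(ρc²a³) = −p d(a³), and recovers the fluid equation quoted above.

```python
import sympy as sp

t = sp.symbols('t')
c = sp.symbols('c', positive=True)
a = sp.Function('a')(t)
rho = sp.Function('rho')(t)
p = sp.Function('p')(t)

# First law for adiabatic expansion of a comoving volume V ∝ a^3:
# d(energy)/dt = -p dV/dt, with energy = rho*c^2*a^3.
first_law = sp.Eq(sp.diff(rho * c**2 * a**3, t), -p * sp.diff(a**3, t))

# Solve for rho_dot and compare with the fluid equation quoted above.
rho_dot = sp.solve(first_law, sp.diff(rho, t))[0]
fluid_eq = -3 * sp.diff(a, t) / a * (rho + p / c**2)

print(sp.simplify(rho_dot - fluid_eq) == 0)   # True: the two expressions agree
```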
The second equation states that both the energy density and the pressure cause the expansion rate of the universe ȧ to decrease, i.e., both cause a deceleration in the expansion of the universe. This is a consequence of gravitation, with pressure playing a similar role to that of energy (or mass) density, according to the principles of general relativity. The cosmological constant, on the other hand, causes an acceleration in the expansion of the universe.
=== Cosmological constant ===
The cosmological constant term can be omitted if we make the following replacements:
{\displaystyle {\begin{aligned}\rho &\to \rho -{\frac {\Lambda }{\kappa c^{2}}},&p&\to p+{\frac {\Lambda }{\kappa }}.\end{aligned}}}
Therefore, the cosmological constant can be interpreted as arising from a form of energy that has negative pressure, equal in magnitude to its (positive) energy density:
{\displaystyle p=-\rho c^{2}\,,}
which is an equation of state of vacuum with dark energy.
An attempt to generalize this to
{\displaystyle p=w\rho c^{2}}
would not have general invariance without further modification.
In fact, in order to get a term that causes an acceleration of the universe's expansion, it is enough to have a scalar field that satisfies
{\displaystyle p<-{\frac {\rho c^{2}}{3}}.}
Such a field is sometimes called quintessence.
=== Newtonian interpretation ===
This interpretation is due to McCrea and Milne, although it is sometimes incorrectly ascribed to Friedmann. The Friedmann equations are equivalent to this pair of equations:
{\displaystyle {\begin{aligned}-a^{3}{\dot {\rho }}=3a^{2}{\dot {a}}\rho +{\frac {3a^{2}p{\dot {a}}}{c^{2}}}\,\\[1ex]{\frac {{\dot {a}}^{2}}{2}}-{\frac {\kappa c^{4}a^{3}\rho }{6a}}=-{\frac {kc^{2}}{2}}\,.\end{aligned}}}
The first equation says that the decrease in the mass contained in a fixed cube (whose side is momentarily a) is the amount that leaves through the sides due to the expansion of the universe plus the mass equivalent of the work done by pressure against the material being expelled. This is the conservation of mass–energy (first law of thermodynamics) contained within a part of the universe.
The second equation says that the kinetic energy (seen from the origin) of a particle of unit mass moving with the expansion plus its (negative) gravitational potential energy (relative to the mass contained in the sphere of matter closer to the origin) is equal to a constant related to the curvature of the universe. In other words, the energy (relative to the origin) of a co-moving particle in free-fall is conserved. General relativity merely adds a connection between the spatial curvature of the universe and the energy of such a particle: positive total energy implies negative curvature and negative total energy implies positive curvature.
The cosmological constant term is assumed to be treated as dark energy and thus merged into the density and pressure terms.
During the Planck epoch, one cannot neglect quantum effects, which may cause a deviation from the Friedmann equations.
== Useful solutions ==
The Friedmann equations can be solved exactly in the presence of a perfect fluid with equation of state
{\displaystyle p=w\rho c^{2},}
where p is the pressure, ρ is the mass density of the fluid in the comoving frame and w is some constant.
In the spatially flat case (k = 0), the solution for the scale factor is
{\displaystyle a(t)=a_{0}\,t^{\frac {2}{3(w+1)}}}
where a₀ is some integration constant to be fixed by the choice of initial conditions. This family of solutions labelled by w is extremely important for cosmology. For example, w = 0 describes a matter-dominated universe, where the pressure is negligible with respect to the mass density. From the generic solution one easily sees that in a matter-dominated universe the scale factor goes as
{\displaystyle a(t)\propto t^{2/3}\qquad {\text{matter-dominated}}}
Another important example is the case of a radiation-dominated universe, namely when w = 1/3. This leads to
{\displaystyle a(t)\propto t^{1/2}\qquad {\text{radiation-dominated}}}
Note that this solution is not valid for domination of the cosmological constant, which corresponds to w = −1. In this case the energy density is constant and the scale factor grows exponentially.
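A small check of these power-law solutions: for a flat, single-fluid universe the exponent 2/(3(w+1)) follows from substituting a(t) ∝ tⁿ back into the Friedmann equation with ρ ∝ a^(−3(1+w)). The Python sketch below verifies that consistency condition exactly with rational arithmetic; the "stiff fluid" row is an extra illustrative case not discussed in this article.

```python
from fractions import Fraction

def exponent(w: Fraction) -> Fraction:
    """n in a(t) ∝ t^n for a flat single-fluid universe with p = w rho c^2."""
    return Fraction(2, 3) / (w + 1)

def check(w: Fraction) -> bool:
    """Verify that a = t^n solves (a_dot/a)^2 ∝ rho ∝ a^(-3(1+w)).

    With a = t^n:  (a_dot/a)^2 ∝ t^(-2), while a^(-3(1+w)) ∝ t^(-3n(1+w)).
    The two agree exactly when 3*n*(1+w) = 2.
    """
    n = exponent(w)
    return 3 * n * (1 + w) == 2

for name, w in [("matter", Fraction(0)), ("radiation", Fraction(1, 3)), ("stiff fluid", Fraction(1))]:
    print(f"{name:12s} w = {w}:  n = {exponent(w)}  consistent: {check(w)}")
# matter: n = 2/3, radiation: n = 1/2, stiff fluid: n = 1/3
```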
Solutions for other values of k can be found in Tersic, Balsa, "Lecture Notes on Astrophysics" (retrieved 24 February 2022).
=== Mixtures ===
If the matter is a mixture of two or more non-interacting fluids each with such an equation of state, then
{\displaystyle {\dot {\rho }}_{f}=-3H\left(\rho _{f}+{\frac {p_{f}}{c^{2}}}\right)}
holds separately for each such fluid f. In each case,
{\displaystyle {\dot {\rho }}_{f}=-3H\left(\rho _{f}+w_{f}\rho _{f}\right)\,}
from which we get
{\displaystyle {\rho }_{f}\propto a^{-3\left(1+w_{f}\right)}\,.}
For example, one can form a linear combination of such terms
{\displaystyle \rho =Aa^{-3}+Ba^{-4}+Ca^{0}\,}
where A is the density of "dust" (ordinary matter, w = 0) when a = 1; B is the density of radiation (w = 1/3) when a = 1; and C is the density of "dark energy" (w = −1). One then substitutes this into
{\displaystyle \left({\frac {\dot {a}}{a}}\right)^{2}={\frac {8\pi G}{3}}\rho -{\frac {kc^{2}}{a^{2}}}}
and solves for a as a function of time.
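As a sketch of that last step, the Friedmann equation with the mixture density ρ = A a⁻³ + B a⁻⁴ + C can be integrated numerically. The snippet below does this for a flat universe in units where 8πG/3 = 1 and with made-up values of A, B and C; it is a crude forward-Euler illustration, not a calibrated cosmological model.

```python
import math

# Hypothetical mixture coefficients in units where 8*pi*G/3 = 1 and k = 0.
A, B, C = 0.3, 9e-5, 0.7     # "dust", radiation and dark-energy densities at a = 1

def rho(a: float) -> float:
    return A * a**-3 + B * a**-4 + C

def evolve(a0: float = 1e-4, a_end: float = 1.0, dt: float = 1e-4) -> float:
    """Integrate da/dt = a*sqrt(rho(a)) forward in time and return the elapsed time."""
    a, t = a0, 0.0
    while a < a_end:
        a += dt * a * math.sqrt(rho(a))
        t += dt
    return t

# Time (in these arbitrary units, i.e. in units of 1/H0) for a to grow from a0 to 1:
print(round(evolve(), 3))   # of order unity, roughly 0.96 for these parameters
```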
== History ==
Friedmann published two cosmology papers in the 1922-1923 time frame. He adopted the same homogeneity and isotropy assumptions used by Albert Einstein and by Willem de Sitter in their papers, both published in 1917. Both of the earlier works also assumed the universe was static, eternally unchanging. Einstein postulated an additional term to his equations of general relativity to ensure this stability. In his paper, de Sitter showed that spacetime had curvature even in the absence of matter: the new equations of general relativity implied that a vacuum had properties that altered spacetime.: 152
The idea of a static universe was a fundamental assumption of philosophy and science. However, Friedmann abandoned the idea in his first paper, "On the curvature of space". Starting with Einstein's ten equations of general relativity, Friedmann applied the symmetry of an isotropic universe and a simple model for the mass–energy density to derive a relationship between that density and the curvature of spacetime. He demonstrated that, in addition to one static solution, many time-dependent solutions also exist.: 157
Friedmann's second paper, "On the possibility of a world with constant negative curvature," published in 1924, explored more complex geometrical ideas. This paper established that the finiteness of spacetime is not a property that can be settled by the equations of general relativity alone: both finite and infinite geometries give solutions. Friedmann used two pictures of a three-dimensional sphere as an analogy: a trip at constant latitude could return to the starting point, or the sphere might have an infinite number of sheets so that the trip never repeats.: 167
Friedmann's papers were largely ignored, except – initially – by Einstein, who actively dismissed them. Once Edwin Hubble published astronomical evidence that the universe was expanding, however, Einstein became convinced. Unfortunately for Friedmann, Georges Lemaître independently discovered some aspects of the same solutions and wrote persuasively about the concept of a universe born from a "primordial atom". Historians therefore give the two scientists equal billing for the discovery.
== In popular culture ==
Several students at Tsinghua University (CCP leader Xi Jinping's alma mater) participating in the 2022 COVID-19 protests in China carried placards with Friedmann equations scrawled on them, interpreted by some as a play on the words "Free man". Others have interpreted the use of the equations as a call to “open up” China and stop its Zero Covid policy, as the Friedmann equations relate to the expansion, or “opening” of the universe.
== See also ==
Mathematics of general relativity
Solutions of the Einstein field equations
== Sources ==
== Further reading ==
Liebscher, Dierck-Ekkehard (2005). "Expansion". Cosmology. Berlin: Springer. pp. 53–77. ISBN 3-540-23261-3. | Wikipedia/Friedmann_equation |
Bimetric gravity or bigravity refers to two different classes of theories. The first class of theories relies on modified mathematical theories of gravity (or gravitation) in which two metric tensors are used instead of one. The second metric may be introduced at high energies, with the implication that the speed of light could be energy-dependent, enabling models with a variable speed of light.
If the two metrics are dynamical and interact, a first possibility implies two graviton modes, one massive and one massless; such bimetric theories are then closely related to massive gravity. Several bimetric theories with massive gravitons exist, such as those attributed to Nathan Rosen (1909–1995) or Mordehai Milgrom with relativistic extensions of Modified Newtonian Dynamics (MOND). More recently, developments in massive gravity have also led to new consistent theories of bimetric gravity. Though none has been shown to account for physical observations more accurately or more consistently than the theory of general relativity, Rosen's theory has been shown to be inconsistent with observations of the Hulse–Taylor binary pulsar. Some of these theories lead to cosmic acceleration at late times and are therefore alternatives to dark energy. Bimetric gravity is also at odds with measurements of gravitational waves emitted by the neutron-star merger GW170817.
On the contrary, the second class of bimetric gravity theories does not rely on massive gravitons and does not modify Newton's law, but instead describes the universe as a manifold having two coupled Riemannian metrics, where the matter populating the two sectors interacts through gravitation (and antigravitation, if the topology and the Newtonian approximation considered introduce negative mass and negative energy states in cosmology as an alternative to dark matter and dark energy). Some of these cosmological models also use a variable speed of light in the high energy density state of the radiation-dominated era of the universe, challenging the inflation hypothesis.
== Rosen's bigravity (1940 to 1989) ==
In general relativity (GR), it is assumed that the distance between two points in spacetime is given by the metric tensor. Einstein's field equation is then used to calculate the form of the metric based on the distribution of energy and momentum.
In 1940, Rosen proposed that at each point of space-time, there is a Euclidean metric tensor γ_ij in addition to the Riemannian metric tensor g_ij. Thus at each point of space-time there are two metrics:
{\displaystyle ds^{2}=g_{ij}dx^{i}dx^{j}}
{\displaystyle d\sigma ^{2}=\gamma _{ij}dx^{i}dx^{j}}
The first metric tensor, g_ij, describes the geometry of space-time and thus the gravitational field. The second metric tensor, γ_ij, refers to the flat space-time and describes the inertial forces. The Christoffel symbols formed from g_ij and γ_ij are denoted by {^i_jk} and Γ^i_jk, respectively.
Since the difference of two connections is a tensor, one can define the tensor field Δ^i_jk given by:
{\displaystyle \Delta _{jk}^{i}=\{_{jk}^{i}\}-\Gamma _{jk}^{i}\qquad (1)}
Two kinds of covariant differentiation then arise:
g-differentiation based on g_ij (denoted by a semicolon, e.g. X_{;a}), and γ-differentiation based on γ_ij (denoted by a slash, e.g. X_{/a}). Ordinary partial derivatives are represented by a comma (e.g. X_{,a}). Let R^h_ijk and P^h_ijk be the Riemann curvature tensors calculated from g_ij and γ_ij, respectively. In the above approach the curvature tensor P^h_ijk is zero, since γ_ij is the flat space-time metric.
A straightforward calculation yields the Riemann curvature tensor
{\displaystyle {\begin{aligned}R_{ijk}^{h}&=P_{ijk}^{h}-\Delta _{ij/k}^{h}+\Delta _{ik/j}^{h}+\Delta _{mj}^{h}\Delta _{ik}^{m}-\Delta _{mk}^{h}\Delta _{ij}^{m}\\&=-\Delta _{ij/k}^{h}+\Delta _{ik/j}^{h}+\Delta _{mj}^{h}\Delta _{ik}^{m}-\Delta _{mk}^{h}\Delta _{ij}^{m}\end{aligned}}}
Each term on the right-hand side is a tensor. It is seen that from GR one can go to the new formulation just by replacing {^i_jk} by Δ^i_jk and ordinary differentiation by covariant γ-differentiation, √−g by √(g/γ), and the integration measure d⁴x by √−γ d⁴x, where g = det(g_ij), γ = det(γ_ij) and d⁴x = dx¹dx²dx³dx⁴. Having once introduced γ_ij into the theory, one has a great number of new tensors and scalars at one's disposal. One can set up field equations other than Einstein's, and it is possible that some of these will be more satisfactory for the description of nature.
The geodesic equation in bimetric relativity (BR) takes the form
{\displaystyle {\frac {d^{2}x^{i}}{ds^{2}}}+\Gamma _{jk}^{i}{\frac {dx^{j}}{ds}}{\frac {dx^{k}}{ds}}+\Delta _{jk}^{i}{\frac {dx^{j}}{ds}}{\frac {dx^{k}}{ds}}=0.\qquad (2)}
It is seen from equations (1) and (2) that Γ can be regarded as describing the inertial field, because it vanishes by a suitable coordinate transformation.
Since the quantity Δ is a tensor, it is independent of any coordinate system and hence may be regarded as describing the permanent gravitational field.
Rosen (1973) found a BR satisfying the covariance and equivalence principles. In 1966, Rosen showed that the introduction of the flat-space metric into the framework of general relativity not only enables one to get the energy-momentum density tensor of the gravitational field, but also enables one to obtain this tensor from a variational principle. The field equations of BR derived from the variational principle are
{\displaystyle K_{j}^{i}=N_{j}^{i}-{\frac {1}{2}}\delta _{j}^{i}N=-8\pi \kappa T_{j}^{i}\qquad (3)}
where
{\displaystyle N_{j}^{i}={\frac {1}{2}}\gamma ^{\alpha \beta }(g^{hi}g_{hj/\alpha })_{/\beta }}
or
{\displaystyle {\begin{aligned}N_{j}^{i}&={\frac {1}{2}}\gamma ^{\alpha \beta }\left\{\left(g^{hi}g_{hj,\alpha }\right)_{,\beta }-\left(g^{hi}g_{mj}\Gamma _{h\alpha }^{m}\right)_{,\beta }-\gamma ^{\alpha \beta }\left(\Gamma _{j\alpha }^{i}\right)_{,\beta }+\Gamma _{\lambda \beta }^{i}\left[g^{h\lambda }g_{hj,\alpha }-g^{h\lambda }g_{mj}\Gamma _{h\alpha }^{m}-\Gamma _{j\alpha }^{\lambda }\right]-\right.\\&\qquad \Gamma _{j\beta }^{\lambda }\left[g^{hi}g_{h\lambda ,\alpha }-g^{hi}g_{m\lambda }\Gamma _{h\alpha }^{m}-\Gamma _{\lambda \alpha }^{i}\right]+\Gamma _{\alpha \beta }^{\lambda }\left.\left[g^{hi}g_{hj,\lambda }-g^{hi}g_{mj}\Gamma _{h\lambda }^{m}-\Gamma _{j\lambda }^{i}\right]\right\}\end{aligned}}}
with N = g^{ij}N_{ij}, κ = √(g/γ), and T^i_j the energy-momentum tensor.
The variational principle also leads to the relation
{\displaystyle T_{j;i}^{i}=0.}
Hence from (3)
{\displaystyle K_{j;i}^{i}=0,}
which implies that in BR a test particle in a gravitational field moves on a geodesic with respect to g_ij.
Rosen continued improving his bimetric gravity theory with additional publications in 1978 and 1980, in which he made an attempt "to remove singularities arising in general relativity by modifying it so as to take into account the existence of a fundamental rest frame in the universe." In 1985 Rosen tried again to remove singularities and pseudo-tensors from General Relativity. Twice in 1989 with publications in March and November Rosen further developed his concept of elementary particles in a bimetric field of General Relativity.
It is found that the BR and GR theories differ in the following cases:
propagation of electromagnetic waves
the external field of a high density star
the behaviour of intense gravitational waves propagating through a strong static gravitational field.
The predictions of gravitational radiation in Rosen's theory have been shown since 1992 to be in conflict with observations of the Hulse–Taylor binary pulsar.
== Massive bigravity ==
Since 2010 there has been renewed interest in bigravity after the development by Claudia de Rham, Gregory Gabadadze, and Andrew Tolley (dRGT) of a healthy theory of massive gravity. Massive gravity is a bimetric theory in the sense that nontrivial interaction terms for the metric g_μν can only be written down with the help of a second metric, as the only nonderivative term that can be written using one metric is a cosmological constant. In the dRGT theory, a nondynamical "reference metric" f_μν is introduced, and the interaction terms are built out of the matrix square root of g⁻¹f.
In dRGT massive gravity, the reference metric must be specified by hand. One can give the reference metric an Einstein–Hilbert term, in which case f_μν is not chosen but instead evolves dynamically in response to g_μν and possibly matter. This massive bigravity was introduced by Fawad Hassan and Rachel Rosen as an extension of dRGT massive gravity.
The dRGT theory is crucial to developing a theory with two dynamical metrics because general bimetric theories are plagued by the Boulware–Deser ghost, a possible sixth polarization for a massive graviton. The dRGT potential is constructed specifically to render this ghost nondynamical, and as long as the kinetic term for the second metric is of the Einstein–Hilbert form, the resulting theory remains ghost-free.
The action for the ghost-free massive bigravity is given by
{\displaystyle S=-{\frac {M_{g}^{2}}{2}}\int d^{4}x{\sqrt {-g}}R(g)-{\frac {M_{f}^{2}}{2}}\int d^{4}x{\sqrt {-f}}R(f)+m^{2}M_{g}^{2}\int d^{4}x{\sqrt {-g}}\displaystyle \sum _{n=0}^{4}\beta _{n}e_{n}(\mathbb {X} )+\int d^{4}x{\sqrt {-g}}{\mathcal {L}}_{\mathrm {m} }(g,\Phi _{i}).}
As in standard general relativity, the metric g_μν has an Einstein–Hilbert kinetic term proportional to the Ricci scalar R(g) and a minimal coupling to the matter Lagrangian L_m, with Φ_i representing all of the matter fields, such as those of the Standard Model. An Einstein–Hilbert term is also given for f_μν. Each metric has its own Planck mass, denoted M_g and M_f respectively. The interaction potential is the same as in dRGT massive gravity. The β_i are dimensionless coupling constants and m (or specifically β_i^{1/2} m) is related to the mass of the massive graviton. This theory propagates seven degrees of freedom, corresponding to a massless graviton and a massive graviton (although the massive and massless states do not align with either of the metrics).
The interaction potential is built out of the elementary symmetric polynomials e_n of the eigenvalues of the matrices 𝕂 = 𝕀 − √(g⁻¹f) or 𝕏 = √(g⁻¹f), parametrized by dimensionless coupling constants α_i or β_i, respectively. Here √(g⁻¹f) is the matrix square root of the matrix g⁻¹f. Written in index notation, 𝕏 is defined by the relation
{\displaystyle X^{\mu }{}_{\alpha }X^{\alpha }{}_{\nu }=g^{\mu \alpha }f_{\nu \alpha }.}
The $e_n$ can be written directly in terms of $\mathbb{X}$ as
$$\begin{aligned}e_0(\mathbb{X})&=1,\\ e_1(\mathbb{X})&=[\mathbb{X}],\\ e_2(\mathbb{X})&=\tfrac{1}{2}\left([\mathbb{X}]^2-[\mathbb{X}^2]\right),\\ e_3(\mathbb{X})&=\tfrac{1}{6}\left([\mathbb{X}]^3-3[\mathbb{X}][\mathbb{X}^2]+2[\mathbb{X}^3]\right),\\ e_4(\mathbb{X})&=\det\mathbb{X},\end{aligned}$$
where brackets indicate a trace, $[\mathbb{X}]\equiv X^{\mu}{}_{\mu}$. It is the particular antisymmetric combination of terms in each of the $e_n$ which is responsible for rendering the Boulware–Deser ghost nondynamical.
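For readers who want to see this algebra in action, the following is a minimal numerical sketch (not part of the theory's literature): it builds two arbitrary toy metrics, forms $\mathbb{X}=\sqrt{g^{-1}f}$ with SciPy, and checks the trace formulas for the $e_n$ against the elementary symmetric polynomials of the eigenvalues of $\mathbb{X}$. The example metrics and all numbers are illustrative assumptions.

```python
# Toy check of e_n(X) for X = sqrt(g^{-1} f); the metrics are arbitrary diagonal examples.
import numpy as np
from scipy.linalg import sqrtm

g = np.diag([-1.0, 1.1, 1.2, 1.3])          # hypothetical "spacetime" metric g_{mu nu}
f = np.diag([-0.9, 1.0, 1.4, 1.5])          # hypothetical reference metric f_{mu nu}

X = sqrtm(np.linalg.inv(g) @ f)             # matrix square root of g^{-1} f
assert np.allclose(X @ X, np.linalg.inv(g) @ f)

tr = lambda M: np.trace(M)                  # [X] in the article's bracket notation
e = [
    1.0,
    tr(X),
    0.5 * (tr(X)**2 - tr(X @ X)),
    (tr(X)**3 - 3*tr(X)*tr(X @ X) + 2*tr(X @ X @ X)) / 6.0,
    np.linalg.det(X),
]

# Cross-check: the e_n are the elementary symmetric polynomials of the eigenvalues of X.
lam = np.linalg.eigvals(X)
assert np.isclose(e[1], lam.sum())
assert np.isclose(e[4], lam.prod())
print(e)
```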
== See also ==
Alternatives to general relativity
DGP model
Scalar–tensor theory
== References == | Wikipedia/Bimetric_theory |
The tidal force or tide-generating force is the difference in gravitational attraction between different points in a gravitational field, which pulls a body unevenly and, as a result, stretches it towards the attracting mass. It is the differential force of gravity: the net difference between the gravitational forces at different points, the spatial rate of change (gradient) of the gravitational field. Tidal forces are therefore a residual force, a secondary effect of gravity that highlights its spatial variation, making the closer near side of a body more strongly attracted than the more distant far side.
This produces a range of tidal phenomena, such as ocean tides. Earth's tides are mainly produced by the relatively close gravitational field of the Moon and, to a lesser extent, by the stronger but more distant gravitational field of the Sun. The ocean on the side of Earth facing the Moon is pulled by the Moon's gravity away from Earth's crust, while on the opposite side the crust is pulled away from the ocean, so that Earth is stretched, bulging on both sides and showing high tides on opposite sides at the same time. Tidal forces viewed from Earth, that is, from a rotating reference frame, appear as centripetal and centrifugal forces, but are not caused by the rotation.
Further tidal phenomena include solid-earth tides, tidal locking, breaking apart of celestial bodies and formation of ring systems within the Roche limit, and in extreme cases, spaghettification of objects. Tidal forces have also been shown to be fundamentally related to gravitational waves.
In celestial mechanics, the expression tidal force can refer to a situation in which a body or material (for example, tidal water) is mainly under the gravitational influence of a second body (for example, the Earth), but is also perturbed by the gravitational effects of a third body (for example, the Moon). The perturbing force is sometimes in such cases called a tidal force (for example, the perturbing force on the Moon): it is the difference between the force exerted by the third body on the second and the force exerted by the third body on the first.
== Explanation ==
When a body (body 1) is acted on by the gravity of another body (body 2), the field can vary significantly on body 1 between the side of the body facing body 2 and the side facing away from body 2. Figure 2 shows the differential force of gravity on a spherical body (body 1) exerted by another body (body 2).
These tidal forces cause strains on both bodies and may distort them or even, in extreme cases, break one or the other apart. The Roche limit is the distance from a planet at which tidal effects would cause an object to disintegrate because the differential force of gravity from the planet overcomes the attraction of the parts of the object for one another. These strains would not occur if the gravitational field were uniform, because a uniform field only causes the entire body to accelerate together in the same direction and at the same rate.
== Size and distance ==
The relationship of an astronomical body's size to its distance from another body strongly influences the magnitude of tidal force. The tidal force acting on an astronomical body, such as the Earth, is directly proportional to the diameter of the Earth and inversely proportional to the cube of the distance from another body producing a gravitational attraction, such as the Moon or the Sun. Tidal action on bath tubs, swimming pools, lakes, and other small bodies of water is negligible.
Figure 3 is a graph showing how gravitational force declines with distance. In this graph, the attractive force decreases in proportion to the square of the distance (Y = 1/X²), while the slope (Y′ = −2/X³) is inversely proportional to the cube of the distance.
The tidal force corresponds to the difference in Y between two points on the graph, with one point on the near side of the body, and the other point on the far side. The tidal force becomes larger, when the two points are either farther apart, or when they are more to the left on the graph, meaning closer to the attracting body.
For example, even though the Sun has a stronger overall gravitational pull on Earth, the Moon creates a larger tidal bulge because the Moon is closer. This difference is due to the way gravity weakens with distance: the Moon's closer proximity creates a steeper decline in its gravitational pull as you move across Earth (compared to the Sun's very gradual decline from its vast distance). This steeper gradient in the Moon's pull results in a larger difference in force between the near and far sides of Earth, which is what creates the bigger tidal bulge.
Gravitational attraction is inversely proportional to the square of the distance from the source. The attraction will be stronger on the side of a body facing the source, and weaker on the side away from the source. The tidal force is proportional to the difference.
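A rough numerical illustration of the Moon-versus-Sun comparison above (a sketch, using standard approximate masses and mean distances rather than figures from this article; since tidal acceleration scales as M/d³, the Earth's radius cancels from the ratio):

```python
# Ratio of solar to lunar tidal effect on Earth, using tidal scaling M / d^3.
M_sun,  d_sun  = 1.989e30, 1.496e11   # kg, m (approximate)
M_moon, d_moon = 7.342e22, 3.844e8    # kg, m (approximate)

tidal_moon = M_moon / d_moon**3
tidal_sun  = M_sun  / d_sun**3

print(tidal_sun / tidal_moon)   # ~0.46: the Sun's tide-raising effect is roughly
                                # 45% of the Moon's, as quoted later in the article
```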
=== Sun, Earth, and Moon ===
The Earth is 81 times more massive than the Moon and has roughly 4 times the Moon's radius. As a result, at the same distance, the tidal force of the Earth at the surface of the Moon is about 20 times stronger than that of the Moon at the Earth's surface.
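A back-of-the-envelope check of that factor of about 20 (the mass ratio 81.3 and radius ratio 3.67 used below are standard approximate values; the article quotes roughly 81 and 4):

```python
# At the same separation d, the tidal acceleration across a body of radius r caused
# by a mass M scales as M * r / d^3, so the d^3 factor cancels in the ratio.
M_earth_over_M_moon = 81.3     # Earth mass / Moon mass
r_earth_over_r_moon = 3.67     # Earth radius / Moon radius

ratio = M_earth_over_M_moon / r_earth_over_r_moon
print(ratio)                   # ~22: Earth's tidal pull at the Moon's surface is about
                               # 20 times the Moon's tidal pull at Earth's surface
```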
== Effects ==
In the case of an infinitesimally small elastic sphere, the effect of a tidal force is to distort the shape of the body without any change in volume. The sphere becomes an ellipsoid with two bulges, pointing towards and away from the other body. Larger objects distort into an ovoid, and are slightly compressed, which is what happens to the Earth's oceans under the action of the Moon. All parts of the Earth are subject to the Moon's gravitational forces, causing the water in the oceans to redistribute, forming bulges on the sides near the Moon and far from the Moon.
When a body rotates while subject to tidal forces, internal friction results in the gradual dissipation of its rotational kinetic energy as heat. In the case of the Earth and the Earth's Moon, the loss of rotational kinetic energy results in the day lengthening by about 2 milliseconds per century. If the body is close enough to its primary, this can result in a rotation which is tidally locked to the orbital motion, as in the case of the Earth's Moon. Tidal heating produces dramatic volcanic effects on Jupiter's moon Io. Stresses caused by tidal forces also cause a regular monthly pattern of moonquakes on Earth's Moon.
Tidal forces contribute to ocean currents, which moderate global temperatures by transporting heat energy toward the poles. It has been suggested that variations in tidal forces correlate with cool periods in the global temperature record at 6- to 10-year intervals, and that harmonic beat variations in tidal forcing may contribute to millennial climate changes. No strong link to millennial climate changes has been found to date.
Tidal effects become particularly pronounced near small bodies of high mass, such as neutron stars or black holes, where they are responsible for the "spaghettification" of infalling matter. Tidal forces create the oceanic tide of Earth's oceans, where the attracting bodies are the Moon and, to a lesser extent, the Sun. Tidal forces are also responsible for tidal locking, tidal acceleration, and tidal heating. Tides may also induce seismicity.
By driving the motion of conducting fluids within the interior of the Earth, tidal forces also affect the Earth's magnetic field.
== Formulation ==
For a given (externally generated) gravitational field, the tidal acceleration at a point with respect to a body is obtained by vector subtraction of the gravitational acceleration at the center of the body (due to the given externally generated field) from the gravitational acceleration (due to the same field) at the given point. Correspondingly, the term tidal force is used to describe the forces due to tidal acceleration. Note that for these purposes the only gravitational field considered is the external one; the gravitational field of the body (as shown in the graphic) is not relevant. (In other words, the comparison is with the conditions at the given point as they would be if there were no externally generated field acting unequally at the given point and at the center of the reference body. The externally generated field is usually that produced by a perturbing third body, often the Sun or the Moon in the frequent example-cases of points on or above the Earth's surface in a geocentric reference frame.)
Tidal acceleration does not require rotation or orbiting bodies; for example, the body may be freefalling in a straight line under the influence of a gravitational field while still being influenced by (changing) tidal acceleration.
By Newton's law of universal gravitation and laws of motion, a body of mass m at distance R from the center of a sphere of mass M feels a force $\vec{F}_g$,
$$\vec{F}_g = -\hat{r}\,G\,\frac{Mm}{R^2}$$
equivalent to an acceleration $\vec{a}_g$,
$$\vec{a}_g = -\hat{r}\,G\,\frac{M}{R^2}$$
where $\hat{r}$ is a unit vector pointing from the body M to the body m (here, acceleration from m towards M has negative sign).
Consider now the acceleration due to the sphere of mass M experienced by a particle in the vicinity of the body of mass m. With R as the distance from the center of M to the center of m, let ∆r be the (relatively small) distance of the particle from the center of the body of mass m. For simplicity, distances are first considered only in the direction pointing towards or away from the sphere of mass M. If the body of mass m is itself a sphere of radius ∆r, then the new particle considered may be located on its surface, at a distance (R ± ∆r) from the centre of the sphere of mass M, and ∆r may be taken as positive where the particle's distance from M is greater than R. Leaving aside whatever gravitational acceleration may be experienced by the particle towards m on account of m's own mass, we have the acceleration on the particle due to gravitational force towards M as:
$$\vec{a}_g = -\hat{r}\,G\,\frac{M}{(R\pm\Delta r)^2}$$
Pulling out the $R^2$ term from the denominator gives:
$$\vec{a}_g = -\hat{r}\,G\,\frac{M}{R^2}\,\frac{1}{\left(1\pm\frac{\Delta r}{R}\right)^2}$$
The Maclaurin series of $1/(1\pm x)^2$ is $1\mp 2x+3x^2\mp\cdots$, which gives a series expansion of:
$$\vec{a}_g = -\hat{r}\,G\,\frac{M}{R^2} \pm \hat{r}\,G\,\frac{2M}{R^2}\,\frac{\Delta r}{R}+\cdots$$
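As a quick symbolic cross-check (a sketch using SymPy, not part of the original derivation), expanding M/(R + Δr)² in powers of Δr/R reproduces the leading term M/R² and the first residual (tidal) term −2MΔr/R³, i.e. the series used above with the upper sign:

```python
# Symbolic expansion of the upper-sign case G*M/(R + dr)^2 to second order in dr.
import sympy as sp

G, M, R, dr = sp.symbols('G M R dr', positive=True)
a = G*M/(R + dr)**2
series = sp.series(a, dr, 0, 3).removeO()
print(sp.expand(series))
# G*M/R**2 - 2*G*M*dr/R**3 + 3*G*M*dr**2/R**4
```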
The first term is the gravitational acceleration due to M at the center of the reference body $m$, i.e., at the point where $\Delta r$ is zero. This term does not affect the observed acceleration of particles on the surface of m because with respect to M, m (and everything on its surface) is in free fall. When the force on the far particle is subtracted from the force on the near particle, this first term cancels, as do all other even-order terms. The remaining (residual) terms represent the difference mentioned above and are tidal force (acceleration) terms. When ∆r is small compared to R, the terms after the first residual term are very small and can be neglected, giving the approximate tidal acceleration $\vec{a}_{t,\text{axial}}$ for the distances ∆r considered, along the axis joining the centers of m and M:
$$\vec{a}_{t,\text{axial}} \approx \pm\hat{r}\,2\Delta r\,G\,\frac{M}{R^3}$$
When calculated in this way for the case where ∆r is a distance along the axis joining the centers of m and M, $\vec{a}_t$ is directed outwards from the center of m (where ∆r is zero).
Tidal accelerations can also be calculated away from the axis connecting the bodies m and M, requiring a vector calculation. In the plane perpendicular to that axis, the tidal acceleration is directed inwards (towards the center where ∆r is zero), and its magnitude is $\tfrac{1}{2}\left|\vec{a}_{t,\text{axial}}\right|$ in linear approximation as in Figure 2.
The tidal accelerations at the surfaces of planets in the Solar System are generally very small. For example, the lunar tidal acceleration at the Earth's surface along the Moon–Earth axis is about 1.1×10⁻⁷ g, while the solar tidal acceleration at the Earth's surface along the Sun–Earth axis is about 0.52×10⁻⁷ g, where g is the gravitational acceleration at the Earth's surface. Hence the tide-raising force (acceleration) due to the Sun is about 45% of that due to the Moon. The solar tidal acceleration at the Earth's surface was first given by Newton in the Principia.
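These figures can be reproduced with the approximate axial formula a_t ≈ 2GMΔr/R³ derived above, taking Δr equal to the Earth's radius; the constants below are standard approximate values (a sketch, not a substitute for the cited calculation):

```python
# Lunar and solar tidal accelerations at Earth's surface, in units of surface gravity g.
G       = 6.674e-11        # m^3 kg^-1 s^-2
g0      = 9.81             # m s^-2, surface gravity
r_earth = 6.371e6          # m
M_moon, d_moon = 7.342e22, 3.844e8    # kg, m
M_sun,  d_sun  = 1.989e30, 1.496e11   # kg, m

a_moon = 2 * G * M_moon * r_earth / d_moon**3
a_sun  = 2 * G * M_sun  * r_earth / d_sun**3

print(a_moon / g0)   # ~1.1e-7 g
print(a_sun  / g0)   # ~0.52e-7 g
```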
== See also ==
Amphidromic point
Disrupted planet
Galactic tide
Tidal resonance
Tidal stripping
Tidal tensor
Spacetime curvature
== References ==
== External links ==
Analysis and Prediction of Tides: GeoTide
Gravitational Tides by J. Christopher Mihos of Case Western Reserve University
Audio: Cain/Gay – Astronomy Cast Tidal Forces – July 2007.
Gray, Meghan; Merrifield, Michael. "Tidal Forces". Sixty Symbols. Brady Haran for the University of Nottingham.
Pau Amaro Seoane. "Stellar collisions: Tidal disruption of a star by a massive black hole". Retrieved 2018-12-28.
Myths about Gravity and Tides by Mikolaj Sawicki of John A. Logan College and the University of Colorado.
Tidal Misconceptions by Donald E. Simanek
Tides and centrifugal force by Paolo Sirtoli | Wikipedia/Tidal_force |
A Master of Science (Latin: Magister Scientiae; abbreviated MS, M.S., MSc, M.Sc., SM, S.M., ScM or Sc.M.) is a master's degree. In contrast to the Master of Arts degree, the Master of Science degree is typically granted for studies in sciences, engineering and medicine and is usually for programs that are more focused on scientific and mathematical subjects; however, different universities have different conventions and may also offer the degree for fields typically considered within the humanities and social sciences. While it ultimately depends upon the specific program, earning a Master of Science degree typically includes writing a thesis.
The Master of Science degree was introduced at the University of Michigan in 1858. One of the first recipients of the degree was De Volson Wood, who was conferred a Master of Science degree at the University of Michigan in 1859.
== Algeria ==
Algeria follows the Bologna Process.
== Australia ==
Australian universities commonly have coursework or research-based Master of Science courses for graduate students. They typically run for 1–2 years full-time, with varying amounts of research involved.
== Bangladesh ==
All Bangladeshi private and public universities offer Master of Science courses as postgraduate degrees. These include most of the major state-owned colleges, and a number of private colleges also offer MS degrees. After passing a Bachelor of Science, any student becomes eligible to study in this discipline.
== Belgium ==
Like all EU member states, Belgium follows the Bologna Process. In Belgium, the typical university track involved obtaining two degrees, being a two-year Kandidaat prerequisite track (replaced by Bachelor) followed by a two- or three-year Licentiaat track. The latter was replaced by the Master of Science (M.Sc.) academic degree. This system was not exclusive to scientific degrees and was also used for other programs like law and literature.
== Canada ==
In Canada, Master of Science (MSc) degrees may be entirely course-based, entirely research-based or (more typically) a mixture. Master's programs typically take one to three years to complete and the completion of a scientific thesis is often required. Admission to a master's program is contingent upon holding a four-year university bachelor's degree. Some universities require a master's degree in order to progress to a doctoral program (PhD).
=== Quebec ===
In the province of Quebec, the Master of Science follows the same principles as in the rest of Canada, with one exception regarding admission to a master's program. Since Québécois students complete two to three years of college before entering university, they have the opportunity to complete a bachelor's degree in three years instead of four. Some undergraduate degrees, such as the Bachelor of Education and the Bachelor of Engineering, require four years of study. After completing their bachelor's degree, students can be admitted into a graduate program to eventually obtain a master's degree.
While some students complete their master's program, others use it as a bridge to doctoral research programs. After one year of study and research in the master's program, many students become eligible to apply directly to a Doctor of Philosophy (Ph.D.) program, without first obtaining the Master of Science degree.
== Chile ==
Chilean universities have commonly used "Magíster" for a master's degree, but otherwise the degree is similar to that in the rest of South America.
== Cyprus ==
Like all EU member states, the Republic of Cyprus follows the Bologna Process. Universities in Cyprus have used either "Magíster Scientiae or Artium" or Master of Arts/Science for a master's degree worth 90 to 120 ECTS, with a duration of studies of between 1.5 and 2 years.
== Czech Republic and Slovakia ==
Like all EU member states, Czech Republic and Slovakia follow the Bologna Process. Czech Republic and Slovakia both award two different types of master's degrees; both award a title of Mgr. or Ing. to be used before the name.
Prior to reforms for compliance with the Bologna process, a master's degree could only be obtained after 5 years of uninterrupted study. Under the new system, it takes only 2 years but requires a previously completed 3-year bachelor's program (a Bc. title). Writing a thesis (in both master's and bachelor's programs) and passing final exams are necessary to obtain the degree. The final exams usually cover the main study areas of the whole study program, i.e. a student is required to prove their knowledge in the subjects they attended during the 2 or 3 years of their study, respectively. Exams also include the defence of a thesis before an academic board.
Ing. (Engineer) degrees are usually awarded for master's degrees achieved in the Natural Sciences or Mathematics-heavy study programmes, whereas an Mgr. (Magister) is generally awarded for Master's studies in social sciences, humanities and the arts.
== Egypt ==
The Master of Science (M.Sc.) is an academic degree for post-graduate candidates or researchers; it usually takes 4 to 7 years after passing the Bachelor of Science (B.Sc.) degree. Master's programs are awarded in many sciences in the Egyptian universities. Completion of the degree requires finishing pre-master studies followed by a scientific thesis or research. All M.Sc. degree holders are allowed to take a step forward in the academic track to obtain a PhD degree.
== Finland ==
Like all EU member states, Finland follows the Bologna Process. The Master of Science (M.Sc.) academic degree usually follows the Bachelor of Science (B.Sc.) studies which typically last five years. For the completion of both the bachelor and the master studies the student must accumulate a total of 300 ECTS credits, thus most Masters programs are two-year programs with 120 credits. The completion of a scientific thesis is required.
== Germany ==
Like all EU member states, Germany follows the Bologna Process. The Master of Science (M.Sc.) academic degree replaces the once common Diplom or Magister programs that typically lasted four to five years. It is awarded in science-related studies with a high percentage of mathematics. For the completion the student must accumulate 300 ECTS Credits, thus most Masters programs are two-year programs with 120 credits. The completion of a scientific thesis is required.
== South America ==
In Argentina, Brazil, Colombia, Ecuador, Mexico, Panama, Peru, Uruguay and Venezuela, the Master of Science or Magister is a postgraduate degree lasting two to four years. The admission to a master's program (Spanish: Licenciatura; Portuguese: Mestrado) requires the full completion of a four to five year long undergraduate degree, bachelor's degree, engineer's degree or a licentiate of the same length. Defense of a research thesis is required. All master's degrees qualify for a doctorate program. Depending on the country, one ECTS credit point can equal on average between 22 and 30 actual study hours. In most of these cases, the number of required attendance hours to the university classes will be at least half of that (one ECTS will mean around 11 to 15 mandatory hours of on-site classes).
== Southeastern Europe ==
In Slavic countries in the European southeast (particularly the former Yugoslav republics), the education system was largely based on the German university system, owing to the presence and influence of the Austro-Hungarian Empire in the region. Prior to the implementation of the Bologna Process, academic university studies comprised a 4–5-year-long graduate diplom program, which could be followed by a 2–4-year-long magistar program and later by a 2–5-year-long doctor of science program.
After the Bologna Process implementation, again based on the German implementation, diplom titles and programs were replaced by entirely professional bachelor's and master's programs. The studies are structured so that a master's program lasts long enough for the student to accumulate a total of 300 ECTS credits, so its duration depends on the number of credits acquired during the bachelor studies. Pre-Bologna magistar programs were abandoned: after earning an M.Sc. degree and satisfying other academic requirements, a student could proceed to earn a doctor of science degree directly, and for some time it was even possible to skip the M.Sc. if the diplom program had lasted more than 3 years.
== Guyana ==
In Guyana, all universities, including the University of Guyana, Texila American University, and the American International School of Medicine, offer Master of Science courses as postgraduate degrees. Students who have completed an undergraduate Bachelor of Science degree are eligible to study in this discipline.
== India ==
In India, universities offer M.Sc. programs usually in sciences discipline. Generally, post-graduate scientific courses lead to M.Sc. degree while post-graduate engineering courses lead to ME or MTech degree. For example, a master's in automotive engineering would normally be an ME or MTech, while a master's in physics would be an M.Sc. A few top universities also offer combined undergraduate-postgraduate programs leading to a master's degree which is known as integrated masters.
A Master of Science in Engineering (MS.Engg.) degree is also offered in India. It is usually structured as an engineering research degree, ranked below the PhD and considered to be parallel to the M.Phil. degree in humanities and science. Some institutes such as IITs offer an MS degree for postgraduate engineering courses. This degree is considered a research-oriented degree, whereas the MTech or ME degree is usually not a research degree in India. The M.Sc. degree is also awarded by the various IISERs, which are among the top institutes in India.
== Iran ==
In Iran, similar to Canada, Master of Science (MSc) or in Iranian form Kārshenāsi-e arshad degrees may be entirely course-based, entirely research-based, or most commonly a mixture. Master's programs typically take two to three years to complete and the completion of a scientific thesis is often required.
== Ireland ==
Like all EU member states, Ireland follows the Bologna Process. In Ireland, Master of Science (MSc) may be course-based with a research component or entirely research based. The program is most commonly a one-year program and a thesis is required for both course-based and research based degrees.
== Israel ==
In Israel, Master of Science (MSc) may be entirely course-based or include research. The program is most commonly a two-year program and a thesis is required only for research based degrees.
== Italy ==
Like all EU member states, Italy follows the Bologna Process. The degree Master of Science is awarded in the Italian form, Laurea Magistrale. Before the current organization of academic studies there was the Laurea. According to the subject the laurea could require four, five or six years of study. The laurea was subsequently split into a "laurea triennale" (three years) and a "laurea magistrale" (two more years).
== Nepal ==
In Nepal, universities offer the Master of Science degree usually in science and engineering areas. Tribhuvan University offers MSc degree for all the science and engineering courses. Pokhara University and Purbanchal University offer ME for engineering and MSc for science. Kathmandu University offers MS by Research and ME degrees for science and engineering.
== Netherlands ==
Like all EU member states, the Netherlands follows the Bologna Process. In the past graduates of applied universities (HBO) were excluded from using titles such as MSc, as HBO institutions are formally not universities but polytechnic institutions of higher education. However, since 2014 academic titles are granted to any university graduate.
However, older academic titles used in the Netherlands are:
ingenieur (abbreviated as ir.) (for graduates who followed a technical or agricultural program)
meester (abbreviated as mr.) (for graduates who followed an LLM law program)
doctorandus (abbreviated as drs.) (in all other cases).
The bearers of these titles may use either the older title or the corresponding MSc, LL.M or MA title, but not both for the same field of study.
== New Zealand ==
New Zealand universities commonly have coursework or research-based Master of Science courses for graduate students. They typically run for 2 years full-time, with varying amounts of research involved.
== Norway ==
Norway follows the Bologna Process. For engineering, the Master of Science academic degree has been recently introduced and has replaced the previous award forms "Sivilingeniør" (engineer, a.k.a. engineering master) and "Hovedfag" (academic master). Both were awarded after 5 years of university-level studies and required the completion of a scientific thesis.
"Siv.ing", is a protected title traditionally awarded to engineering students who completed a five-year education at The Norwegian University of Science and Technology (Norwegian: Norges teknisk-naturvitenskapelige universitet, NTNU) or other university programs deemed to be equivalent in academic merit. Historically there was no bachelor's degree involved and today's program is a five years master's degree education. The "Siv.ing" title is in the process of being phased out, replaced by (for now, complemented by) the "M.Sc." title. By and large, "Siv.ing" is a title tightly being held on to for the sake of tradition. In academia, the new program offers separate three-year bachelor and two-year master programs. It is awarded in the natural sciences, mathematics and computer science fields. The completion of a scientific thesis is required. All master's degrees are designed to certify a level of education and qualify for a doctorate program.
Master of Science in Business is the English title for those taking a higher business degree, "Siviløkonom" in Norwegian. In addition, there is, for example, the 'Master of Business Administration' (MBA), a practically oriented master's degree in business, but with less mathematics and econometrics, due to its less specific entry requirements and smaller focus on research.
== Pakistan ==
Pakistan inherited its conventions pertaining to higher education from the United Kingdom after independence in 1947. The Master of Science degree is typically abbreviated as M.Sc. (as in the United Kingdom) and is awarded after 16 years of education (equivalent to a bachelor's degree in the US and many other countries). Recently, following reforms by the Higher Education Commission of Pakistan (the regulatory body of higher education in Pakistan), the traditional 2-year Bachelor of Science (B.Sc.) degree has been replaced by the 4-year Bachelor of Science degree, abbreviated as B.S., to align Pakistani degrees with those of the rest of the world. Students who pass a 4-year B.S. degree awarded after 16 years of education are then eligible to apply for the M.S. degree, which is considered at par with the Master of Philosophy (M.Phil.) degree.
== Poland ==
Like all EU member states, Poland follows the Bologna Process. The Polish equivalent of Master of Science is "magister" (abbreviated "mgr", written pre-nominally much like "Dr"). Starting in 2001, the MSc programs typically lasting 5 years began to be replaced as below:
3-year associates programs, (licentiate degree termed "licencjat" in Polish. No abbreviated pre-nominal or title.)
3.5-year engineer programs (termed "inżynier", utilizing the pre-nominal abbreviation "inż.")
2-year master programs open to both "licencjat" and "inż." graduates.
1.5-year master programs open only to "inż." graduates.
The degree is awarded predominantly in the natural sciences, mathematics, computer science, economics, as well as in the arts and other disciplines. Those who graduate from an engineering program prior to being awarded a master's degree are allowed to use the "mgr inż." pre-nominal ("master engineer"). This is most common in engineering and agricultural fields of study. Defense of a research thesis is required. All master's degrees in Poland qualify for a doctorate program.
== Russia ==
The title of "master" was introduced by Alexander I at 24 January 1803. The Master had an intermediate position between the candidate and doctor according to the decree "About colleges structure". The master's degree was abolished from 1917 to 1934. Russia has followed the Bologna Process for higher education in Europe since 2011.
== Spain ==
Like all EU member states, Spain follows the Bologna Process. The Master of Science (MSc) degree is a program officially recognized by the Spanish Ministry of Education. It usually involves 1 or 2 years of full-time study. It is targeted at pre-experience candidates who have recently finished their undergraduate studies. An MSc degree can be awarded in every field of study. An MSc degree is required in order to progress to a PhD. MSci, MPhil and DEA are equivalent in Spain.
== Sweden ==
Like all EU member states, Sweden follows the Bologna Process. The Master of Science academic degree has, as in Germany, recently been introduced in Sweden. Students in Master of Science in Engineering programs are awarded both the English Master of Science degree and the Swedish equivalent "Teknologisk masterexamen", whereas "Civilingenjör" denotes an education of at least five years.
== Syria ==
The Master of Science is a degree that can be studied only in public universities. The program is usually 2 years, but it can be extended to 3 or 4 years. The student is required to pass a specific bachelor's degree to attend a specific Master of Science degree program. The master of science is mostly a research degree, except for some types of programs held with cooperation of foreign universities. The student typically attends courses in the first year of the program and should then prepare a research thesis. Publishing two research papers is recommended and will increase the final evaluation grade.
== United Kingdom ==
The Master of Science (MSc) is typically a taught postgraduate degree, involving lectures, examinations and a project dissertation (normally taking up a third of the program). Master's programs usually involve a minimum of 1 year of full-time study (180 UK credits, of which 150 must be at master's level) and sometimes up to 2 years of full-time study (or the equivalent period part-time). Taught master's degrees are normally classified into Pass, Merit and Distinction (although some universities do not give Merit). Some universities also offer MSc by research programs, where a longer project or set of projects is undertaken full-time; master's degrees by research are normally pass/fail, although some universities may offer a distinction.
The more recent Master in Science (MSci or M.Sci.) degree (Master of Natural Sciences at the University of Cambridge), is an undergraduate (UG) level integrated master's degree offered by UK institutions since the 1990s. It is offered as a first degree with the first three (four in Scotland) years similar to a BSc course and a final year (120 UK credits) at master's level, including a dissertation. The final MSci qualification is thus at the same level as a traditional MSc.
== United States ==
The Master of Science (Magister Scientiæ) degree is normally a full-time two-year degree often abbreviated "MS" or "M.S." It is the primary type in most subjects and may be entirely course-based, entirely research-based or (more typically) a combination of the two. The combination often involves writing and defending a thesis or completing a research project which represents the culmination of the material learned.
Admission to a master's program is normally contingent upon holding a bachelor's degree and progressing to a doctoral program may require a master's degree. In some fields or graduate programs, work on a doctorate can begin immediately after the bachelor's degree. Some programs provide for a joint bachelor's and master's degree after about five years. Some universities use the Latin degree names and due to the flexibility of word order in Latin, Artium Magister (A.M.) or Scientiæ Magister (S.M. or Sc.M.) may be used in some institutions.
== See also ==
Master of Science in Accounting
Master of Science in Administration
Master of Science in Computer Science
Master of Science in Corporate Communication
Master of Science in Economics
Master of Science in Engineering
Master of Science in Finance
Master of Science in Foreign Service
Master of Science in Information Systems
Master of Science in Information Technology
Master of Science in Management
Master of Science in Nursing
Master of Science in Occupational Therapy
Master of Science in Physician Assistant Studies
Master of Science in Project Management
Master of Science in Systems Management
== References == | Wikipedia/Master_of_Science |
Engineering science and mechanics (ESM) is a multidisciplinary and interdisciplinary engineering program and/or academic department. It is available at various American universities, including Pennsylvania State University, University of Virginia, Virginia Polytechnic Institute and State University, Georgia Institute of Technology, and University of Alabama.
== Programs ==
A Bachelor of Science, Master of Science, Master of Engineering, or Ph.D. degree in engineering science, engineering mechanics, or engineering science and mechanics is awarded upon completion of the respective program.
Areas of specialization include aerodynamics, biomechanics, bionanotechnology, biosensors and bioelectronics, composite materials, continuum mechanics, data mining, electromagnetics of complex materials, electronic materials and devices, experimental mechanics, fluid mechanics, laser-assisted micromanufacturing, metamaterials, microfabrication, microfluidic systems, microelectromechanical systems (MEMS) and microoptoelectromechanical systems (MOEMS), nanotechnology, neural engineering, non-destructive testing or evaluation, nonlinear dynamics, optoelectronics, photonics and plasmonics, quantum mechanics, solar-energy-harvesting materials, solid mechanics, solid-state physics, structural health monitoring, and thin films and nanostructured materials.
== History ==
In 1972, the department of engineering mechanics at the Virginia Polytechnic Institute and State University changed its name and undergraduate program to engineering science and mechanics. In 1974, the department of engineering mechanics at the Pennsylvania State University merged with the engineering science program, and the department was renamed engineering science and mechanics. Engineering science and mechanics is a graduate program in the School of Civil and Environmental Engineering at the Georgia Institute of Technology. The department of aerospace engineering and mechanics at the University of Alabama offers graduate degrees in engineering science and mechanics.
== Academic departments and programs ==
Department of Engineering Science and Mechanics, Pennsylvania State University.
Department of Engineering Science and Mechanics, Virginia Polytechnic Institute and State University.
Graduate Programs in Engineering Science and Mechanics, Georgia Institute of Technology.
Graduate Programs in Engineering Science and Mechanics, University of Alabama.
== See also ==
Applied physics
Applied mechanics
Engineering physics
== References ==
== External links ==
Department of Engineering Science and Mechanics at Pennsylvania State University
Society of Engineering Science Inc. | Wikipedia/Engineering_science_and_mechanics |
Environmental engineering science (EES) is a multidisciplinary field of engineering science that combines the biological, chemical and physical sciences with the field of engineering. This major traditionally requires the student to take basic engineering classes in fields such as thermodynamics, advanced math, computer modeling and simulation and technical classes in subjects such as statics, mechanics, hydrology, and fluid dynamics. As the student progresses, the upper division elective classes define a specific field of study for the student with a choice in a range of science, technology and engineering related classes.
== Difference with related fields ==
As a recently created program, environmental engineering science has not yet been fully incorporated into the terminology used among environmentally focused professionals. In the few engineering colleges that offer this major, the curriculum shares more classes in common with environmental engineering than it does with environmental science. Typically, EES students follow a course curriculum similar to that of environmental engineers until their fields diverge during the last year of college. Most environmental engineering students must take classes designed to connect their knowledge of the environment to modern building materials and construction methods. This is meant to direct the environmental engineer into a field where they will more than likely assist in building treatment facilities, preparing environmental impact assessments or helping to mitigate air pollution from specific point sources.
Meanwhile, the environmental engineering science student chooses a direction for their career. From the range of electives available, these students can move into fields such as the design of nuclear storage facilities, bacterial bioreactors, or environmental policy. These students combine the practical design background of an engineer with the detailed theory found in many of the biological and physical sciences.
== Description at universities ==
=== Stanford University ===
The Civil and Environmental Engineering department at Stanford University provides the following description for their program in Environmental Engineering and Science:
The Environmental Engineering and Science (EES) program focuses on the chemical and biological processes involved in water quality engineering, water and air pollution, remediation and hazardous substance control, human exposure to pollutants, environmental biotechnology, and environmental protection.
=== UC Berkeley ===
The College of Engineering at UC Berkeley defines Environmental Engineering Science, including the following:
This is a multidisciplinary field requiring an integration of physical, chemical and biological principles with engineering analysis for environmental protection and restoration. The program incorporates courses from many departments on campus to create a discipline that is rigorously based in science and engineering, while addressing a wide variety of environmental issues. Although an environmental engineering option exists within the civil engineering major, the engineering science curriculum provides a more broadly based foundation in the sciences than is possible in civil engineering
=== Massachusetts Institute of Technology ===
At MIT, the major is described in their curriculum, including the following:
The Bachelor of Science in Environmental Engineering Science emphasizes the fundamental physical, chemical, and biological processes necessary for understanding the interactions between man and the environment. Issues considered include the provision of clean and reliable water supplies, flood forecasting and protection, development of renewable and nonrenewable energy sources, causes and implications of climate change, and the impact of human activities on natural cycles
=== University of Florida ===
The College of Engineering at UF defines Environmental Engineering Science as follows:
The broad undergraduate environmental engineering curriculum of EES has earned the department a ranking as a leading undergraduate program. The ABET accredited engineering bachelor's degree is comprehensively based on physical, chemical, and biological principles to solve environmental problems affecting air, land, and water resources. An advising scheme including select faculty, led by the undergraduate coordinator, guides each student through the program.
The program educational objectives of the EES program at the University of Florida are to produce engineering practitioners and graduate students who 3-5 years after graduation:
Continue to learn, develop and apply their knowledge and skills to identify, prevent, and solve environmental problems.
Have careers that benefit society as a result of their educational experiences in science, engineering analysis and design, as well as in their social and cultural studies.
Communicate and work effectively in all work settings including those that are multidisciplinary.
== Lower division coursework ==
Lower division coursework in this field requires the student to take several laboratory-based classes in calculus-based physics, chemistry, biology, programming and analysis. This is intended to give the student background information in order to introduce them to the engineering fields and to prepare them for more technical information in their upper division coursework.
== Upper division coursework ==
The upper division classes in Environmental Engineering Science prepares the student for work in the fields of engineering and science with coursework in subjects including the following:
Fluid mechanics
Mechanics of materials
Thermodynamics
Environmental engineering
Advanced math and statistics
Geology
Physical, organic and atmospheric chemistry
Biochemistry
Microbiology
Ecology
== Electives ==
=== Process engineering ===
On this track, students are introduced to the fundamental reaction mechanisms in the field of chemical and biochemical engineering.
=== Resource engineering ===
For this track, students take classes introducing them to ways to conserve natural resources. This can include classes in water chemistry, sanitation, combustion, air pollution and radioactive waste management.
=== Geoengineering ===
This examines geoengineering in detail.
=== Ecology ===
This prepares the students for using their engineering and scientific knowledge to solve the interactions between plants, animals and the biosphere.
=== Biology ===
This includes further education about microbial, molecular and cell biology. Classes can include cell biology, virology, microbial and plant biology.
=== Policy ===
This covers in more detail ways the environment can be protected through political means. This is done by introducing students to qualitative and quantitative tools in classes such as economics, sociology, political science and energy and resources.
== Fields of work ==
The multidisciplinary approach in environmental engineering science gives the student expertise in technical fields related to their own personal interests. While some graduates choose to use this major to go to graduate school, students who choose to work often go into the fields of civil and environmental engineering, biotechnology, and research. In addition, the background in math, programming and writing gives students opportunities to pursue IT work and technical writing.
== See also ==
Civil engineering
Environmental engineering
Environmental science
Sustainability
Green building
Sustainable engineering
== Notes ==
== References ==
"MIT Course Catalog: Department of Civil and Environmental Engineering." Massachusetts Institute of Technology. <http://web.mit.edu/catalogue/degre.engin.civil.shtml>.
2008-2009 Announcement. Brochure. Berkeley, 2008. Engineering Announcement 2008-2009. University of California, Berkeley. <https://web.archive.org/web/20081203005457/http://coe.berkeley.edu/students/EngAnn08.pdf>.
== External links ==
Engineering Engineering and Science program at Stanford University [1]
What people go on to do in Engineering Science at UC Berkeley [2]
Curriculum at University of Florida [3]
Curriculum at MIT [4]
Curriculum at University of Illinois [5] | Wikipedia/Environmental_engineering_science |
In integrated circuit design, physical design is a step in the standard design cycle which follows after the circuit design. At this step, circuit representations of the components (devices and interconnects) of the design are converted into geometric representations of shapes which, when manufactured in the corresponding layers of materials, will ensure the required functioning of the components. This geometric representation is called integrated circuit layout. This step is usually split into several sub-steps, which include both design and verification and validation of the layout.
Modern day Integrated Circuit (IC) design is split up into Front-end Design using HDLs and Back-end Design or Physical Design. The inputs to physical design are (i) a netlist, (ii) library information on the basic devices in the design, and (iii) a technology file containing the manufacturing constraints. Physical design is usually concluded by Layout Post Processing, in which amendments and additions to the chip layout are performed. This is followed by the Fabrication or Manufacturing Process where designs are transferred onto silicon dies which are then packaged into ICs.
Each of the phases mentioned above has design flows associated with them. These design flows lay down the process and guide-lines/framework for that phase. The physical design flow uses the technology libraries that are provided by the fabrication houses. These technology files provide information regarding the type of silicon wafer used, the standard-cells used, the layout rules (like DRC in VLSI), etc.
The physical design engineer (sometimes called physical engineer or physical designer) is responsible for the design and layout (routing), specifically in ASIC/FPGA design.
== Divisions ==
Typically, the IC physical design is categorized into full custom and semi-custom design.
Full-Custom: Designer has full flexibility on the layout design, no predefined cells are used.
Semi-Custom: Pre-designed library cells (preferably tested with DFM) are used, designer has flexibility in placement of the cells and routing.
One can use ASIC for Full Custom design and FPGA for Semi-Custom design flows. The reason being that one has the flexibility to design/modify design blocks from vendor provided libraries in ASIC. This flexibility is missing for Semi-Custom flows using FPGAs (e.g. Altera).
== ASIC physical design flow ==
The main steps in the ASIC physical design flow are:
Design Netlist (after synthesis)
Floorplanning
Partitioning
Placement
Clock-tree Synthesis (CTS)
Routing
Physical Verification
Layout Post Processing with Mask Data Generation
These steps are just the basics. There are detailed PD flows that are used depending on the Tools used and the methodology/technology. Some of the tools/software used in the back-end design are:
Cadence (Cadence Encounter RTL Compiler, Encounter Digital Implementation, Cadence Voltus IC Power Integrity Solution, Cadence Tempus Timing Signoff Solution)
Synopsys (Design Compiler, IC Compiler II, IC Validator, PrimeTime, PrimePower, PrimeRail)
Magma (BlastFusion, etc.)
Mentor Graphics (Olympus SoC, IC-Station, Calibre)
The ASIC physical design flow uses the technology libraries that are provided by the fabrication houses. Technologies are commonly classified according to minimal feature size. Standard sizes, in the order of miniaturization, are 2 μm, 1 μm, 0.5 μm, 0.35 μm, 0.25 μm, 180 nm, 130 nm, 90 nm, 65 nm, 45 nm, 28 nm, 22 nm, 18 nm, 14 nm, etc. They may also be classified according to major manufacturing approaches: n-well process, twin-well process, SOI process, etc.
== Design netlist ==
Physical design is based on a netlist which is the end result of the synthesis process. Synthesis converts the RTL design usually coded in VHDL or Verilog HDL to gate-level descriptions which the next set of tools can read/understand. This netlist contains information on the cells used, their interconnections, area used, and other details. Typical synthesis tools are:
Cadence RTL Compiler/Build Gates/Physically Knowledgeable Synthesis (PKS)
Synopsys Design Compiler
During the synthesis process, constraints are applied to ensure that the design meets the required functionality and speed (specifications). Only after the netlist is verified for functionality and timing it is sent for the physical design flow.
== Steps ==
=== Partitioning ===
Partitioning is a process of dividing the chip into small blocks. This is done mainly to separate different functional blocks and also to make placement and routing easier. Partitioning can be done in the RTL design phase when the design engineer partitions the entire design into sub-blocks and then proceeds to design each module. These modules are linked together in the main module called the TOP LEVEL module. This kind of partitioning is commonly referred to as Logical Partitioning. The goal of partitioning is to split the circuit such that the number of connections between partitions is minimized.
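As an illustration of the min-cut objective (a sketch, not a production tool flow), a small netlist modelled as a graph can be bisected with the Kernighan–Lin heuristic available in the networkx library; the instance names and connectivity below are made up:

```python
# Bisect a toy gate-level netlist so that few nets cross the cut.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

netlist = nx.Graph()
netlist.add_edges_from([
    ("u1", "u2"), ("u2", "u3"), ("u1", "u3"),   # tightly coupled block A
    ("u4", "u5"), ("u5", "u6"), ("u4", "u6"),   # tightly coupled block B
    ("u3", "u4"),                               # single net between the blocks
])

part_a, part_b = kernighan_lin_bisection(netlist, seed=0)
cut = nx.cut_size(netlist, part_a, part_b)
print(part_a, part_b, "cut nets:", cut)   # ideally only the single u3-u4 net is cut
```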
=== Floorplanning ===
The second step in the physical design flow is floorplanning. Floorplanning is the process of identifying structures that should be placed close together, and allocating space for them in such a manner as to meet the sometimes conflicting goals of available space (cost of the chip), required performance, and the desire to have everything close to everything else.
Based on the area of the design and the hierarchy, a suitable floorplan is decided upon. Floorplanning takes into account the macros used in the design, memory, other IP cores and their placement needs, the routing possibilities, and also the area of the entire design. Floorplanning also determines the IO structure and aspect ratio of the design. A bad floorplan will lead to wastage of die area and routing congestion.
In many design methodologies, area and speed are the subjects of trade-offs. This is due to limited routing resources, as the more resources used, the slower the operation. Optimizing for minimum area allows the design both to use fewer resources, and for greater proximity of the sections of the design. This leads to shorter interconnect distances, fewer routing resources used, faster end-to-end signal paths, and even faster and more consistent place and route times. Done correctly, there are no negatives to floorplanning.
As a general rule, data-path sections benefit most from floorplanning, whereas random logic, state machines, and other non-structured logic can safely be left to the placer section of the place and route software.
Data paths are typically the areas of the design where multiple bits are processed in parallel with each bit being modified the same way with maybe some influence from adjacent bits. Example structures that make up data paths are Adders, Subtractors, Counters, Registers, and Muxes.
=== Placement ===
Before the start of placement optimization all Wire Load Models (WLM) are removed. Placement uses RC values from Virtual Route (VR) to calculate timing. VR is the shortest Manhattan distance between two pins. VR RCs are more accurate than WLM RCs.
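A minimal sketch of the virtual-route idea: the VR length between two pins is simply their Manhattan distance, from which a crude lumped-RC delay estimate can be formed. The per-unit resistance and capacitance values below are placeholders, not real technology data:

```python
# Virtual route length (Manhattan distance) and a rough distributed-RC delay estimate.
def virtual_route_length(pin_a, pin_b):
    (xa, ya), (xb, yb) = pin_a, pin_b
    return abs(xa - xb) + abs(ya - yb)      # shortest Manhattan distance

R_PER_UM = 0.5      # ohm/um   (hypothetical per-unit wire resistance)
C_PER_UM = 0.2e-15  # F/um     (hypothetical per-unit wire capacitance)

length = virtual_route_length((10.0, 4.0), (25.0, 19.0))           # um
delay = 0.5 * (R_PER_UM * length) * (C_PER_UM * length)            # distributed RC ~ RC/2
print(length, delay)
```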
Placement is performed in four optimization phases:
Pre-placement optimization
In placement optimization
Post Placement Optimization (PPO) before clock tree synthesis (CTS)
PPO after CTS.
Pre-placement Optimization optimizes the netlist before placement, HFNs (High Fanout Nets) are collapsed. It can also downsize the cells.
In-placement optimization re-optimizes the logic based on VR. This can perform cell sizing, cell moving, cell bypassing, net splitting, gate duplication, buffer insertion, area recovery. Optimization performs iteration of setup fixing, incremental timing and congestion driven placement.
Post-placement optimization before CTS performs netlist optimization with ideal clocks. It can fix setup, hold, and maximum transition/capacitance violations. It can do placement optimization based on global routing. It redoes HFN synthesis.
Post placement optimization after CTS optimizes timing with propagated clock. It tries to preserve clock skew.
=== Clock tree synthesis ===
The goal of clock tree synthesis (CTS) is to minimize skew and insertion delay. Clock is not propagated before CTS as shown in the picture. After CTS hold slack should improve. Clock tree begins at .sdc defined clock source and ends at stop pins of flop. There are two types of stop pins known as ignore pins and sync pins. 'Don't touch' circuits and pins in front end (logic synthesis) are treated as 'ignore' circuits or pins at back end (physical synthesis). 'Ignore' pins are ignored for timing analysis. If clock is divided then separate skew analysis is necessary.
Global skew achieves zero skew between two synchronous pins without considering logic relationship.
Local skew achieves zero skew between two synchronous pins while considering logic relationship.
If clock is skewed intentionally to improve setup slack then it is known as useful skew.
Rigidity is the term coined in Astro to indicate the relaxation of constraints; the higher the rigidity, the tighter the constraints.
In clock tree optimization (CTO) the clock can be shielded so that noise is not coupled to other signals, but shielding increases area by 12 to 15%. Since the clock signal is global in nature, the same metal layer used for power routing is also used for the clock. CTO is achieved by buffer sizing, gate sizing, buffer relocation, level adjustment and HFN synthesis. Setup slack is improved in the pre-placement, in-placement and post-placement optimization stages before CTS, while hold slack is neglected; in post-placement optimization after CTS, hold slack is improved. As a result of CTS a lot of buffers are added; generally, for 100k gates, around 650 buffers are added.
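The skew quantities discussed above can be illustrated with a few lines of code (a sketch; the arrival times and flop names are made-up values, not from any real design):

```python
# Global skew, local skew and insertion delay from clock arrival times at sink flops.
arrival = {"ff1": 0.42, "ff2": 0.45, "ff3": 0.40, "ff4": 0.44}   # ns
timing_pairs = [("ff1", "ff2"), ("ff3", "ff4")]                  # flops that exchange data

global_skew = max(arrival.values()) - min(arrival.values())       # spread over all sinks
local_skew = max(abs(arrival[a] - arrival[b]) for a, b in timing_pairs)
insertion_delay = max(arrival.values())                           # longest source-to-sink path

print(global_skew, local_skew, insertion_delay)   # ~0.05 ns, ~0.04 ns, 0.45 ns
```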
=== Routing ===
There are two types of routing in the physical design process, global routing and detailed routing. Global routing allocates the routing resources that are used for connections and also performs track assignment for particular nets.
Detailed routing makes the actual connections. Constraints to be taken care of during routing include DRC, wire length and timing.
=== Physical verification ===
Physical verification checks the correctness of the generated layout design. This includes verifying that the layout
Complies with all technology requirements – Design Rule Checking (DRC)
Is consistent with the original netlist – Layout vs. Schematic (LVS)
Has no antenna effects – Antenna Rule Checking
This also includes density verification at the full chip level; cleaning density is a very critical step at the lower technology nodes.
Complies with all electrical requirements – Electrical Rule Checking (ERC).
=== Layout post processing ===
Layout post processing, also known as mask data preparation, often concludes physical design and verification. It converts the physical layout (polygons) into mask data (instructions for the photomask writer). It includes:
Chip finishing, such as inserting company/chip labels and final structures (e.g., seal ring, filler structures),
Generating a reticle layout with test patterns and alignment marks,
Layout-to-mask preparation that extends layout data with graphics operations (e.g., resolution enhancement technologies, RET) and adjusts the data to mask production devices (photomask writer).
== See also ==
FEOL
BEOL
== References == | Wikipedia/Physical_design_engineer |
Basic research, also called pure research, fundamental research, basic science, or pure science, is a type of scientific research with the aim of improving scientific theories for better understanding and prediction of natural or other phenomena. In contrast, applied research uses scientific theories to develop technology or techniques, which can be used to intervene and alter natural or other phenomena. Though often driven simply by curiosity, basic research often fuels the technological innovations of applied science. The two aims are often practiced simultaneously in coordinated research and development.
In addition to innovations, basic research provides insight into nature and can build public support for it, possibly improving conservation efforts. Such insights may also influence engineering design, as when the beak of a kingfisher influenced the design of a high-speed bullet train.
== Overview ==
Basic research advances fundamental knowledge about the world. It focuses on creating and refuting or supporting theories that explain observed phenomena. Pure research is the source of most new scientific ideas and ways of thinking about the world. It can be exploratory, descriptive, or explanatory; however, explanatory research is the most common.
Basic research generates new ideas, principles, and theories, which may not be immediately utilized but nonetheless form the basis of progress and development in different fields. Today's computers, for example, could not exist without research in pure mathematics conducted over a century ago, for which there was no known practical application at the time. Basic research rarely helps practitioners directly with their everyday concerns; nevertheless, it stimulates new ways of thinking that have the potential to revolutionize and dramatically improve how practitioners deal with a problem in the future.
== By country ==
In the United States, basic research is funded mainly by the federal government and done mainly at universities and institutes. As government funding has diminished in the 2010s, however, private funding is increasingly important.
== Basic versus applied science ==
Applied science focuses on the development of technology and techniques. In contrast, basic science develops scientific knowledge and predictions, principally in natural sciences but also in other empirical sciences, which are used as the scientific foundation for applied science. Basic science develops and establishes information to predict phenomena and perhaps to understand nature, whereas applied science uses portions of basic science to develop interventions via technology or technique to alter events or outcomes. Applied and basic sciences can interface closely in research and development. The interface between basic research and applied research has been studied by the National Science Foundation: "A worker in basic scientific research is motivated by a driving curiosity about the unknown. When his explorations yield new knowledge, he experiences the satisfaction of those who first attain the summit of a mountain or the upper reaches of a river flowing through unmapped territory. Discovery of truth and understanding of nature are his objectives. His professional standing among his fellows depends upon the originality and soundness of his work. Creativeness in science is of a cloth with that of the poet or painter." It conducted a study in which it traced the relationship between basic scientific research efforts and the development of major innovations, such as oral contraceptives and videotape recorders. This study found that basic research played a key role in the development of all of the innovations. The amount of basic science research that assisted in the production of a given innovation peaked between 20 and 30 years before the innovation itself. While most innovation takes the form of applied science and most innovation occurs in the private sector, basic research is a necessary precursor to almost all applied science and associated instances of innovation. Roughly 76% of basic research is conducted by universities.
A distinction can be made between basic science and disciplines such as medicine and technology. They can be grouped as STM (science, technology, and medicine; not to be confused with STEM [science, technology, engineering, and mathematics]) or STS (science, technology, and society). These groups are interrelated and influence each other, although they may differ in the specifics such as methods and standards.
The Nobel Prize mixes basic with applied sciences for its award in Physiology or Medicine. In contrast, the Royal Society of London awards distinguish natural science from applied science.
== See also ==
Blue skies research
Hard and soft science
Metascience
Normative science
Physics
Precautionary principle
Pure mathematics
Pure Chemistry
== References ==
== Further reading ==
Levy, David M. (2002). "Research and Development". In David R. Henderson (ed.). Concise Encyclopedia of Economics (1st ed.). Library of Economics and Liberty. OCLC 317650570, 50016270, 163149563 | Wikipedia/Pure_science |
In quantum mechanics, the Holstein–Primakoff transformation is a mapping from boson creation and annihilation operators to the spin operators, effectively truncating their infinite-dimensional Fock space to finite-dimensional subspaces.
One important aspect of quantum mechanics is the occurrence of—in general—non-commuting operators which represent observables, quantities that can be measured.
A standard example of a set of such operators are the three components of the angular momentum operators, which are crucial in many quantum systems.
These operators are complicated, and one would like to find a simpler representation, which can be used to generate approximate calculational schemes.
The transformation was developed in 1940 by Theodore Holstein, a graduate student at the time, and Henry Primakoff. This method has found widespread applicability and has been extended in many different directions.
There is a close link to other methods of boson mapping of operator algebras: in particular, the (non-Hermitian) Dyson–Maleev technique, and to a lesser extent the Jordan–Schwinger map. There is, furthermore, a close link to the theory of (generalized) coherent states in Lie algebras.
== Description ==
The basic idea can be illustrated by the elementary example of the spin operators of quantum mechanics.
For any set of right-handed orthogonal axes, define the components of this vector operator as
{\displaystyle S_{x}}, {\displaystyle S_{y}}, and {\displaystyle S_{z}}, which are mutually noncommuting, i.e., {\displaystyle \left[S_{x},S_{y}\right]=i\hbar S_{z}} and its cyclic permutations.
In order to uniquely specify the states of a spin, one may diagonalise any set of commuting operators. Normally one uses the SU(2) Casimir operators
{\displaystyle S^{2}} and {\displaystyle S_{z}}, which leads to states with the quantum numbers {\displaystyle \left|s,m_{s}\right\rangle },

{\displaystyle S^{2}\left|s,m_{s}\right\rangle =\hbar ^{2}s(s+1)\left|s,m_{s}\right\rangle ,}

{\displaystyle S_{z}\left|s,m_{s}\right\rangle =\hbar m_{s}\left|s,m_{s}\right\rangle .}

The projection quantum number {\displaystyle m_{s}} takes on all the values {\displaystyle (-s,-s+1,\ldots ,s-1,s)}.
Consider a single particle of spin s (i.e., look at a single irreducible representation of SU(2)). Now take the state with maximal projection
{\displaystyle \left|s,m_{s}=+s\right\rangle }, the extremal weight state, as a vacuum for a set of boson operators, and each subsequent state with lower projection quantum number as a boson excitation of the previous one,

{\displaystyle \left|s,s-n\right\rangle \mapsto {\frac {1}{\sqrt {n!}}}\left(a^{\dagger }\right)^{n}|0\rangle _{B}~.}
Each additional boson then corresponds to a decrease of ħ in the spin projection. Thus, the spin raising and lowering operators
{\displaystyle S_{+}=S_{x}+iS_{y}} and {\displaystyle S_{-}=S_{x}-iS_{y}}, so that {\displaystyle [S_{+},S_{-}]=2\hbar S_{z}}, correspond (in the sense detailed below) to the bosonic annihilation and creation operators, respectively.
The precise relations between the operators must be chosen to ensure the correct commutation relations for the spin operators, such that they act on a finite-dimensional space, unlike the original Fock space.
The resulting Holstein–Primakoff transformation can be written as

{\displaystyle S_{+}=\hbar {\sqrt {2s-a^{\dagger }a}}\,a~,\qquad S_{-}=\hbar \,a^{\dagger }{\sqrt {2s-a^{\dagger }a}}~,\qquad S_{z}=\hbar (s-a^{\dagger }a)~.}
The transformation is particularly useful in the case where s is large, when the square roots can be expanded as Taylor series, to give an expansion in decreasing powers of s.
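As an illustration (not part of the original article), the following Python sketch builds the standard Holstein–Primakoff realization S+ = ħ√(2s − a†a) a, Sz = ħ(s − a†a) on the truncated (2s + 1)-dimensional boson space and checks the su(2) commutation relation; the spin value s is an assumed example:

import numpy as np

s, hbar = 1, 1.0                                  # assumed example spin
dim = int(2 * s + 1)

a = np.diag(np.sqrt(np.arange(1, dim)), k=1)      # truncated boson annihilation operator
n = a.conj().T @ a                                # number operator a†a
root = np.diag(np.sqrt(2 * s - np.diag(n)))       # sqrt(2s − a†a) on the physical subspace

Sp = hbar * root @ a                              # S+ = ħ sqrt(2s − a†a) a
Sm = Sp.conj().T                                  # S− = ħ a† sqrt(2s − a†a)
Sz = hbar * (s * np.eye(dim) - n)                 # Sz = ħ (s − a†a)

print(np.allclose(Sp @ Sm - Sm @ Sp, 2 * hbar * Sz))   # [S+, S−] = 2ħ Sz → True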
Alternatively to a Taylor expansion there has been recent progress with a resummation of the series that made expressions possible that are polynomial in bosonic operators but still mathematically exact (on the physical subspace). The first method develops a resummation method that is exact for spin
{\displaystyle s=1/2}
, while the latter employs a Newton series (a finite difference) expansion with an identical result, as shown below
While the expression above is not exact for spins higher than 1/2, it is an improvement over the Taylor series. Exact expressions also exist for higher spins and include {\displaystyle 2s+1} terms. Much like the result above, for the expressions of higher spins {\displaystyle S_{+}=S_{-}^{\dagger }} holds as well, and therefore the resummation is Hermitian.
There also exists a non-Hermitian Dyson–Maleev variant realization (due to Freeman Dyson and S. V. Maleev), related to the above and valid for all spins,

{\displaystyle J_{+}=\hbar \,a~,\qquad J_{-}=S_{-}~{\sqrt {2s-a^{\dagger }a}}=\hbar a^{\dagger }\,(2s-a^{\dagger }a)~,\qquad J_{z}=S_{z}=\hbar (s-a^{\dagger }a)~,}
satisfying the same commutation relations and characterized by the same Casimir invariant.
The technique can be further extended to the Witt algebra, which is the centerless Virasoro algebra.
== See also ==
Spin wave
Jordan–Wigner transformation
Jordan–Schwinger transformation
Bogoliubov–Valatin transformation
Klein transformation
== References == | Wikipedia/Holstein–Primakoff_transformation |
This article concerns the rotation operator, as it appears in quantum mechanics.
== Quantum mechanical rotations ==
With every physical rotation {\displaystyle R}, we postulate a quantum mechanical rotation operator {\displaystyle {\widehat {D}}(R):H\to H} that is the rule that assigns to each vector in the space {\displaystyle H} the vector {\displaystyle |\alpha \rangle _{R}={\widehat {D}}(R)|\alpha \rangle } that is also in {\displaystyle H}. We will show that, in terms of the generators of rotation,

{\displaystyle {\widehat {D}}(\mathbf {\hat {n}} ,\phi )=\exp \left(-i\phi {\frac {\mathbf {\hat {n}} \cdot {\widehat {\mathbf {J} }}}{\hbar }}\right),}

where {\displaystyle \mathbf {\hat {n}} } is the rotation axis, {\displaystyle {\widehat {\mathbf {J} }}} is the angular momentum operator, and {\displaystyle \hbar } is the reduced Planck constant.
== The translation operator ==
The rotation operator {\displaystyle \operatorname {R} (z,\theta )}, with the first argument {\displaystyle z} indicating the rotation axis and the second {\displaystyle \theta } the rotation angle, can operate through the translation operator {\displaystyle \operatorname {T} (a)} for infinitesimal rotations, as explained below. This is why it is first shown how the translation operator acts on a particle at position x (the particle is then in the state {\displaystyle |x\rangle } according to quantum mechanics).

Translation of the particle at position {\displaystyle x} to position {\displaystyle x+a}:

{\displaystyle \operatorname {T} (a)|x\rangle =|x+a\rangle }
Because a translation of 0 does not change the position of the particle, we have (with 1 meaning the identity operator, which does nothing):
{\displaystyle \operatorname {T} (0)=1}

{\displaystyle \operatorname {T} (a)\operatorname {T} (da)|x\rangle =\operatorname {T} (a)|x+da\rangle =|x+a+da\rangle =\operatorname {T} (a+da)|x\rangle \Rightarrow \operatorname {T} (a)\operatorname {T} (da)=\operatorname {T} (a+da)}

Taylor development gives:

{\displaystyle \operatorname {T} (da)=\operatorname {T} (0)+{\frac {d\operatorname {T} (0)}{da}}da+\cdots =1-{\frac {i}{\hbar }}p_{x}da}

with

{\displaystyle p_{x}=i\hbar {\frac {d\operatorname {T} (0)}{da}}}

From that follows:

{\displaystyle \operatorname {T} (a+da)=\operatorname {T} (a)\operatorname {T} (da)=\operatorname {T} (a)\left(1-{\frac {i}{\hbar }}p_{x}da\right)\Rightarrow {\frac {\operatorname {T} (a+da)-\operatorname {T} (a)}{da}}={\frac {d\operatorname {T} }{da}}=-{\frac {i}{\hbar }}p_{x}\operatorname {T} (a)}

This is a differential equation with the solution

{\displaystyle \operatorname {T} (a)=\exp \left(-{\frac {i}{\hbar }}p_{x}a\right).}

Additionally, suppose a Hamiltonian {\displaystyle H} is independent of the {\displaystyle x} position. Because the translation operator can be written in terms of {\displaystyle p_{x}}, and {\displaystyle [p_{x},H]=0}, we know that {\displaystyle [H,\operatorname {T} (a)]=0.}
This result means that linear momentum for the system is conserved.
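A minimal numerical sketch (Python, illustrative and not from the original text) of the translation operator acting on a sampled wave function: T(a) = exp(−i p a/ħ) is applied in the momentum representation via the FFT, and the packet's peak moves from x ≈ 0 to x ≈ a. The grid size and translation distance are assumed example values:

import numpy as np

hbar = 1.0
N, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=x[1] - x[0])   # momentum grid conjugate to x

psi = np.exp(-x**2)          # a Gaussian centred at x = 0
a = 3.0                      # translation distance (assumed example value)

# T(a) = exp(-i p a / ħ) applied in the momentum representation
psi_shifted = np.fft.ifft(np.exp(-1j * p * a / hbar) * np.fft.fft(psi))

print(x[np.argmax(np.abs(psi))], x[np.argmax(np.abs(psi_shifted))])   # ≈ 0.0 and ≈ 3.0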
== In relation to the orbital angular momentum ==
Classically we have for the angular momentum {\displaystyle \mathbf {L} =\mathbf {r} \times \mathbf {p} .} This is the same in quantum mechanics considering {\displaystyle \mathbf {r} } and {\displaystyle \mathbf {p} } as operators. Classically, an infinitesimal rotation {\displaystyle dt} of the vector {\displaystyle \mathbf {r} =(x,y,z)} about the {\displaystyle z}-axis to {\displaystyle \mathbf {r} '=(x',y',z)} leaving {\displaystyle z} unchanged can be expressed by the following infinitesimal translations (using Taylor approximation):

{\displaystyle {\begin{aligned}x'&=r\cos(t+dt)=x-y\,dt+\cdots \\y'&=r\sin(t+dt)=y+x\,dt+\cdots \end{aligned}}}
From that follows for states:
{\displaystyle \operatorname {R} (z,dt)|r\rangle =\operatorname {R} (z,dt)|x,y,z\rangle =|x-y\,dt,y+x\,dt,z\rangle =\operatorname {T} _{x}(-y\,dt)\operatorname {T} _{y}(x\,dt)|x,y,z\rangle =\operatorname {T} _{x}(-y\,dt)\operatorname {T} _{y}(x\,dt)|r\rangle }

And consequently:

{\displaystyle \operatorname {R} (z,dt)=\operatorname {T} _{x}(-y\,dt)\operatorname {T} _{y}(x\,dt)}

Using {\displaystyle T_{k}(a)=\exp \left(-{\frac {i}{\hbar }}p_{k}a\right)} from above with {\displaystyle k=x,y} and Taylor expansion we get:

{\displaystyle \operatorname {R} (z,dt)=\exp \left[-{\frac {i}{\hbar }}\left(xp_{y}-yp_{x}\right)dt\right]=\exp \left(-{\frac {i}{\hbar }}L_{z}dt\right)=1-{\frac {i}{\hbar }}L_{z}dt+\cdots }

with {\displaystyle L_{z}=xp_{y}-yp_{x}} the {\displaystyle z}-component of the angular momentum according to the classical cross product.
To get a rotation for the angle {\displaystyle t}, we construct the following differential equation using the condition {\displaystyle \operatorname {R} (z,0)=1}:

{\displaystyle {\begin{aligned}&\operatorname {R} (z,t+dt)=\operatorname {R} (z,t)\operatorname {R} (z,dt)\\[1.1ex]\Rightarrow {}&{\frac {d\operatorname {R} }{dt}}={\frac {\operatorname {R} (z,t+dt)-\operatorname {R} (z,t)}{dt}}=\operatorname {R} (z,t){\frac {\operatorname {R} (z,dt)-1}{dt}}=-{\frac {i}{\hbar }}L_{z}\operatorname {R} (z,t)\\[1.1ex]\Rightarrow {}&\operatorname {R} (z,t)=\exp \left(-{\frac {i}{\hbar }}\,t\,L_{z}\right)\end{aligned}}}
Similar to the translation operator, if we are given a Hamiltonian {\displaystyle H} which is rotationally symmetric about the {\displaystyle z}-axis, {\displaystyle [L_{z},H]=0} implies {\displaystyle [\operatorname {R} (z,t),H]=0}. This result means that angular momentum is conserved.
For the spin angular momentum about, for example, the {\displaystyle y}-axis, we just replace {\displaystyle L_{z}} with {\textstyle S_{y}={\frac {\hbar }{2}}\sigma _{y}} (where {\displaystyle \sigma _{y}} is the Pauli Y matrix) and we get the spin rotation operator

{\displaystyle \operatorname {D} (y,t)=\exp \left(-i{\frac {t}{2}}\sigma _{y}\right).}
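As an illustration (not part of the original text), a short Python sketch evaluates D(y, t) through its closed form exp(−i t σ_y/2) = cos(t/2) I − i sin(t/2) σ_y and applies it to the spin-up state; the angle t = π/2 is an assumed example and rotates |+z⟩ into (|+z⟩ + |−z⟩)/√2:

import numpy as np

t = np.pi / 2                                   # assumed example rotation angle
sigma_y = np.array([[0, -1j], [1j, 0]])

# D(y, t) = exp(-i t σ_y / 2), written in closed form since σ_y² = I
D = np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * sigma_y

up = np.array([1.0, 0.0], dtype=complex)        # |+z⟩
print(np.round(D @ up, 3))                      # ≈ [0.707, 0.707]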
== Effect on the spin operator and quantum states ==
Operators can be represented by matrices. From linear algebra one knows that a certain matrix {\displaystyle A} can be represented in another basis through the transformation

{\displaystyle A'=PAP^{-1}}

where {\displaystyle P} is the basis transformation matrix. If the vectors {\displaystyle b} respectively {\displaystyle c} are the z-axis in one basis respectively another, they are perpendicular to the y-axis with a certain angle {\displaystyle t} between them. The spin operator {\displaystyle S_{b}} in the first basis can then be transformed into the spin operator {\displaystyle S_{c}} of the other basis through the following transformation:

{\displaystyle S_{c}=\operatorname {D} (y,t)S_{b}\operatorname {D} ^{-1}(y,t)}
From standard quantum mechanics we have the known results {\textstyle S_{b}|b+\rangle ={\frac {\hbar }{2}}|b+\rangle } and {\textstyle S_{c}|c+\rangle ={\frac {\hbar }{2}}|c+\rangle } where {\displaystyle |b+\rangle } and {\displaystyle |c+\rangle } are the top spins in their corresponding bases. So we have:

{\displaystyle {\frac {\hbar }{2}}|c+\rangle =S_{c}|c+\rangle =\operatorname {D} (y,t)S_{b}\operatorname {D} ^{-1}(y,t)|c+\rangle \Rightarrow }

{\displaystyle S_{b}\operatorname {D} ^{-1}(y,t)|c+\rangle ={\frac {\hbar }{2}}\operatorname {D} ^{-1}(y,t)|c+\rangle }

Comparison with {\textstyle S_{b}|b+\rangle ={\frac {\hbar }{2}}|b+\rangle } yields {\displaystyle |b+\rangle =D^{-1}(y,t)|c+\rangle }.
This means that if the state {\displaystyle |c+\rangle } is rotated about the {\displaystyle y}-axis by an angle {\displaystyle t}, it becomes the state {\displaystyle |b+\rangle }, a result that can be generalized to arbitrary axes.
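A brief numerical check (Python, illustrative; the angle is an assumed example) of the basis-change relation S_c = D(y, t) S_b D⁻¹(y, t): conjugating S_b = (ħ/2)σ_z by D(y, t) yields the spin operator along the z-axis rotated about y by t, i.e. (ħ/2)(cos t σ_z + sin t σ_x):

import numpy as np

t, hbar = 0.7, 1.0                              # assumed example angle
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

D = np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * sigma_y   # D(y, t)
Sb = hbar / 2 * sigma_z                                        # spin along the original (b) axis

Sc = D @ Sb @ np.linalg.inv(D)                                 # S_c = D S_b D^{-1}
print(np.allclose(Sc, hbar / 2 * (np.cos(t) * sigma_z + np.sin(t) * sigma_x)))  # True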
== See also ==
Symmetry in quantum mechanics
Spherical basis
Optical phase space
== References ==
L.D. Landau and E.M. Lifshitz: Quantum Mechanics: Non-Relativistic Theory, Pergamon Press, 1985
P.A.M. Dirac: The Principles of Quantum Mechanics, Oxford University Press, 1958
R.P. Feynman, R.B. Leighton and M. Sands: The Feynman Lectures on Physics, Addison-Wesley, 1965 | Wikipedia/Rotation_operator_(quantum_mechanics) |
In quantum mechanics, the expectation value is the probabilistic expected value of the result (measurement) of an experiment. It can be thought of as an average of all the possible outcomes of a measurement as weighted by their likelihood, and as such it is not the most probable value of a measurement; indeed the expectation value may have zero probability of occurring (e.g. measurements which can only yield integer values may have a non-integer mean), like the expected value from statistics. It is a fundamental concept in all areas of quantum physics.
== Operational definition ==
Consider an operator {\displaystyle A}. The expectation value is then

{\displaystyle \langle A\rangle =\langle \psi |A|\psi \rangle }

in Dirac notation with {\displaystyle |\psi \rangle } a normalized state vector.
== Formalism in quantum mechanics ==
In quantum theory, an experimental setup is described by the observable {\displaystyle A} to be measured, and the state {\displaystyle \sigma } of the system. The expectation value of {\displaystyle A} in the state {\displaystyle \sigma } is denoted as {\displaystyle \langle A\rangle _{\sigma }}.
Mathematically, {\displaystyle A} is a self-adjoint operator on a separable complex Hilbert space. In the most commonly used case in quantum mechanics, {\displaystyle \sigma } is a pure state, described by a normalized vector {\displaystyle \psi } in the Hilbert space. The expectation value of {\displaystyle A} in the state {\displaystyle \psi } is defined as

{\displaystyle \langle A\rangle _{\psi }=\langle \psi |A|\psi \rangle .}    (1)
If dynamics is considered, either the vector {\displaystyle \psi } or the operator {\displaystyle A} is taken to be time-dependent, depending on whether the Schrödinger picture or Heisenberg picture is used. The evolution of the expectation value does not depend on this choice, however.
If {\displaystyle A} has a complete set of eigenvectors {\displaystyle \phi _{j}}, with eigenvalues {\displaystyle a_{j}}, so that

{\displaystyle A=\sum _{j}a_{j}|\phi _{j}\rangle \langle \phi _{j}|,}

then (1) can be expressed as

{\displaystyle \langle A\rangle _{\psi }=\sum _{j}a_{j}|\langle \psi |\phi _{j}\rangle |^{2}.}    (2)
This expression is similar to the arithmetic mean, and illustrates the physical meaning of the mathematical formalism: The eigenvalues {\displaystyle a_{j}} are the possible outcomes of the experiment, and their corresponding coefficient {\displaystyle |\langle \psi |\phi _{j}\rangle |^{2}} is the probability that this outcome will occur; it is often called the transition probability.
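A small numerical illustration (Python; the observable and state are assumed example values, not from the original article) that the weighted sum of eigenvalues Σ_j a_j |⟨ψ|φ_j⟩|² agrees with ⟨ψ|A|ψ⟩:

import numpy as np

A = np.array([[1.0, 1.0], [1.0, -1.0]])         # a Hermitian observable (assumed example)
psi = np.array([1.0, 1.0j]) / np.sqrt(2)        # a normalized state (assumed example)

a_j, phi = np.linalg.eigh(A)                    # eigenvalues a_j and orthonormal eigenvectors φ_j
probs = np.abs(phi.conj().T @ psi) ** 2         # transition probabilities |⟨φ_j|ψ⟩|²

print(np.allclose(np.sum(a_j * probs), np.vdot(psi, A @ psi).real))   # True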
A particularly simple case arises when {\displaystyle A} is a projection, and thus has only the eigenvalues 0 and 1. This physically corresponds to a "yes-no" type of experiment. In this case, the expectation value is the probability that the experiment results in "1", and it can be computed as

{\displaystyle \langle A\rangle _{\psi }=\langle \psi |A|\psi \rangle =\|A\psi \|^{2}.}    (3)
In quantum theory, it is also possible for an operator to have a non-discrete spectrum, such as the position operator {\displaystyle X} in quantum mechanics. This operator has a completely continuous spectrum, with eigenvalues and eigenvectors depending on a continuous parameter, {\displaystyle x}. Specifically, the operator {\displaystyle X} acts on a spatial vector {\displaystyle |x\rangle } as {\displaystyle X|x\rangle =x|x\rangle }. In this case, the vector {\displaystyle \psi } can be written as a complex-valued function {\displaystyle \psi (x)} on the spectrum of {\displaystyle X} (usually the real line). This is formally achieved by projecting the state vector {\displaystyle |\psi \rangle } onto the eigenvalues of the operator, as in the discrete case {\textstyle \psi (x)\equiv \langle x|\psi \rangle }. It happens that the eigenvectors of the position operator form a complete basis for the vector space of states, and therefore obey a completeness relation in quantum mechanics:

{\displaystyle \int |x\rangle \langle x|\,dx\equiv \mathbb {I} }
The above may be used to derive the common, integral expression for the expected value (4), by inserting identities into the vector expression of expected value, then expanding in the position basis:
{\displaystyle {\begin{aligned}\langle X\rangle _{\psi }&=\langle \psi |X|\psi \rangle =\langle \psi |\mathbb {I} X\mathbb {I} |\psi \rangle \\&=\iint \langle \psi |x\rangle \langle x|X|x'\rangle \langle x'|\psi \rangle dx\ dx'\\&=\iint \langle x|\psi \rangle ^{*}x'\langle x|x'\rangle \langle x'|\psi \rangle dx\ dx'\\&=\iint \langle x|\psi \rangle ^{*}x'\delta (x-x')\langle x'|\psi \rangle dx\ dx'\\&=\int \psi (x)^{*}x\psi (x)dx=\int x\psi (x)^{*}\psi (x)dx=\int x|\psi (x)|^{2}dx\end{aligned}}}
Where the orthonormality relation of the position basis vectors, {\displaystyle \langle x|x'\rangle =\delta (x-x')}, reduces the double integral to a single integral. The last line uses the modulus of a complex valued function to replace {\displaystyle \psi ^{*}\psi } with {\displaystyle |\psi |^{2}}, which is a common substitution in quantum-mechanical integrals.
The expectation value may then be stated, where x is unbounded, as the formula

{\displaystyle \langle X\rangle _{\psi }=\int _{-\infty }^{\infty }x\,|\psi (x)|^{2}\,dx.}    (4)
A similar formula holds for the momentum operator, in systems where it has continuous spectrum.
All the above formulas are valid for pure states {\displaystyle \sigma } only. Prominently in thermodynamics and quantum optics, also mixed states are of importance; these are described by a positive trace-class operator {\textstyle \rho =\sum _{i}p_{i}|\psi _{i}\rangle \langle \psi _{i}|}, the statistical operator or density matrix. The expectation value then can be obtained as

{\displaystyle \langle A\rangle =\operatorname {Tr} (\rho A).}    (5)
== General formulation ==
In general, quantum states {\displaystyle \sigma } are described by positive normalized linear functionals on the set of observables, mathematically often taken to be a C*-algebra. The expectation value of an observable {\displaystyle A} is then given by

{\displaystyle \langle A\rangle _{\sigma }=\sigma (A).}    (6)
If the algebra of observables acts irreducibly on a Hilbert space, and if {\displaystyle \sigma } is a normal functional, that is, it is continuous in the ultraweak topology, then it can be written as {\displaystyle \sigma (\cdot )=\operatorname {Tr} (\rho \;\cdot )} with a positive trace-class operator {\displaystyle \rho } of trace 1. This gives formula (5) above. In the case of a pure state, {\displaystyle \rho =|\psi \rangle \langle \psi |} is a projection onto a unit vector {\displaystyle \psi }. Then {\displaystyle \sigma =\langle \psi |\cdot \;\psi \rangle }, which gives formula (1) above.
{\displaystyle A} is assumed to be a self-adjoint operator. In the general case, its spectrum will neither be entirely discrete nor entirely continuous. Still, one can write {\displaystyle A} in a spectral decomposition,

{\displaystyle A=\int a\,dP(a)}

with a projection-valued measure {\displaystyle P}. For the expectation value of {\displaystyle A} in a pure state {\displaystyle \sigma =\langle \psi |\cdot \,\psi \rangle }, this means

{\displaystyle \langle A\rangle _{\sigma }=\int a\;d\langle \psi |P(a)\psi \rangle ,}

which may be seen as a common generalization of formulas (2) and (4) above.
In non-relativistic theories of finitely many particles (quantum mechanics, in the strict sense), the states considered are generally normal. However, in other areas of quantum theory, also non-normal states are in use: They appear, for example, in the form of KMS states in quantum statistical mechanics of infinitely extended media, and as charged states in quantum field theory. In these cases, the expectation value is determined only by the more general formula (6).
== Example in configuration space ==
As an example, consider a quantum mechanical particle in one spatial dimension, in the configuration space representation. Here the Hilbert space is
{\displaystyle {\mathcal {H}}=L^{2}(\mathbb {R} )}, the space of square-integrable functions on the real line. Vectors {\displaystyle \psi \in {\mathcal {H}}} are represented by functions {\displaystyle \psi (x)}, called wave functions. The scalar product is given by {\textstyle \langle \psi _{1}|\psi _{2}\rangle =\int \psi _{1}^{\ast }(x)\psi _{2}(x)\,dx}. The wave functions have a direct interpretation as a probability distribution:

{\displaystyle \rho (x)dx=\psi ^{*}(x)\psi (x)dx}

gives the probability of finding the particle in an infinitesimal interval of length {\displaystyle dx} about some point {\displaystyle x}.
As an observable, consider the position operator {\displaystyle Q}, which acts on wavefunctions {\displaystyle \psi } by

{\displaystyle (Q\psi )(x)=x\psi (x).}
The expectation value, or mean value of measurements, of {\displaystyle Q} performed on a very large number of identical independent systems will be given by

{\displaystyle \langle Q\rangle _{\psi }=\langle \psi |Q|\psi \rangle =\int _{-\infty }^{\infty }\psi ^{\ast }(x)\,x\,\psi (x)\,dx=\int _{-\infty }^{\infty }x\,\rho (x)\,dx.}
The expectation value only exists if the integral converges, which is not the case for all vectors {\displaystyle \psi }. This is because the position operator is unbounded, and {\displaystyle \psi } has to be chosen from its domain of definition.
In general, the expectation of any observable can be calculated by replacing {\displaystyle Q} with the appropriate operator. For example, to calculate the average momentum, one uses the momentum operator in configuration space, {\textstyle \mathbf {p} =-i\hbar \,{\frac {d}{dx}}}. Explicitly, its expectation value is

{\displaystyle \langle \mathbf {p} \rangle _{\psi }=-i\hbar \int _{-\infty }^{\infty }\psi ^{\ast }(x)\,{\frac {d\psi (x)}{dx}}\,dx.}
Not all operators in general provide a measurable value. An operator that has a pure real expectation value is called an observable and its value can be directly measured in experiment.
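A numerical sketch (Python, illustrative; the wave packet parameters are assumed example values) of the two configuration-space expectation values above, evaluated for a normalized Gaussian packet with mean position x0 and mean momentum p0:

import numpy as np

hbar = 1.0
x = np.linspace(-10, 10, 2001)
x0, p0, sigma = 1.5, 2.0, 1.0                   # assumed example values

# Normalized Gaussian wave packet exp(i p0 x / ħ) centred at x0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0) ** 2 / (4 * sigma**2) + 1j * p0 * x / hbar)

q_mean = np.trapz(psi.conj() * x * psi, x).real                              # ⟨Q⟩ = ∫ ψ* x ψ dx
p_mean = np.trapz(psi.conj() * (-1j * hbar) * np.gradient(psi, x), x).real   # ⟨p⟩ = −iħ ∫ ψ* dψ/dx dx

print(round(q_mean, 3), round(p_mean, 3))       # ≈ 1.5 and ≈ 2.0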
== See also ==
Rayleigh quotient
Uncertainty principle
Virial theorem
== Notes ==
== References ==
== Further reading ==
The expectation value, in particular as presented in the section "Formalism in quantum mechanics", is covered in most elementary textbooks on quantum mechanics.
For a discussion of conceptual aspects, see:
Isham, Chris J (1995). Lectures on Quantum Theory: Mathematical and Structural Foundations. Imperial College Press. ISBN 978-1-86094-001-9. | Wikipedia/Expectation_value_(quantum_physics) |
Degenerate matter occurs when the Pauli exclusion principle significantly alters a state of matter at low temperature. The term is used in astrophysics to refer to dense stellar objects such as white dwarfs and neutron stars, where thermal pressure alone is not enough to prevent gravitational collapse. The term also applies to metals in the Fermi gas approximation.
Degenerate matter is usually modelled as an ideal Fermi gas, an ensemble of non-interacting fermions. In a quantum mechanical description, particles limited to a finite volume may take only a discrete set of energies, called quantum states. The Pauli exclusion principle prevents identical fermions from occupying the same quantum state. At lowest total energy (when the thermal energy of the particles is negligible), all the lowest energy quantum states are filled. This state is referred to as full degeneracy. This degeneracy pressure remains non-zero even at absolute zero temperature. Adding particles or reducing the volume forces the particles into higher-energy quantum states. In this situation, a compression force is required, and is made manifest as a resisting pressure. The key feature is that this degeneracy pressure does not depend on the temperature but only on the density of the fermions. Degeneracy pressure keeps dense stars in equilibrium, independent of the thermal structure of the star.
A degenerate mass whose fermions have velocities close to the speed of light (particle kinetic energy larger than its rest mass energy) is called relativistic degenerate matter.
The concept of degenerate stars, stellar objects composed of degenerate matter, was originally developed in a joint effort between Arthur Eddington, Ralph Fowler and Arthur Milne.
== Concept ==
Quantum mechanics uses the word 'degenerate' in two ways: degenerate energy levels and as the low temperature ground state limit for states of matter.: 437 The electron degeneracy pressure occurs in the ground state systems which are non-degenerate in energy levels. The term "degeneracy" derives from work on the specific heat of gases that pre-dates the use of the term in quantum mechanics.
Degenerate matter exhibits quantum mechanical properties when a fermion system temperature approaches absolute zero.: 30 These properties result from a combination of the Pauli exclusion principle and quantum confinement. The Pauli principle allows only one fermion in each quantum state and the confinement ensures that energy of these states increases as they are filled. The lowest states fill up and fermions are forced to occupy high energy states even at low temperature.
While the Pauli principle and Fermi-Dirac distribution apply to all matter, the interesting cases for degenerate matter involve systems of many fermions. These cases can be understood with the help of the Fermi gas model. Examples include electrons in metals and in white dwarf stars and neutrons in neutron stars.: 436 The electrons are confined by Coulomb attraction to positive ion cores; the neutrons are confined by gravitational attraction. The fermions, forced into higher levels by the Pauli principle, exert pressure preventing further compression.
The allocation or distribution of fermions into quantum states ranked by energy is called the Fermi-Dirac distribution.: 30 Degenerate matter exhibits the results of Fermi-Dirac distribution.
== Degeneracy pressure ==
Unlike a classical ideal gas, whose pressure is proportional to its temperature
{\displaystyle P=k_{\rm {B}}{\frac {NT}{V}},}
where P is pressure, kB is the Boltzmann constant, N is the number of particles (typically atoms or molecules), T is temperature, and V is the volume, the pressure exerted by degenerate matter depends only weakly on its temperature. In particular, the pressure remains nonzero even at absolute zero temperature. At relatively low densities, the pressure of a fully degenerate gas can be derived by treating the system as an ideal Fermi gas, in this way
{\displaystyle P={\frac {(3\pi ^{2})^{2/3}\hbar ^{2}}{5m}}\left({\frac {N}{V}}\right)^{5/3},}
where m is the mass of the individual particles making up the gas. At very high densities, where most of the particles are forced into quantum states with relativistic energies, the pressure is given by
{\displaystyle P=K\left({\frac {N}{V}}\right)^{4/3},}
where K is another proportionality constant depending on the properties of the particles making up the gas.
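For illustration (not part of the original article), the non-relativistic formula above can be evaluated numerically; the electron number density used below is an assumed example of white-dwarf order:

import numpy as np

hbar = 1.054571817e-34    # reduced Planck constant, J·s
m_e = 9.1093837015e-31    # electron mass, kg

def degeneracy_pressure(n):
    """Non-relativistic degeneracy pressure P = (3π²)^(2/3) ħ²/(5m) n^(5/3), in pascals."""
    return (3 * np.pi**2) ** (2 / 3) * hbar**2 / (5 * m_e) * n ** (5 / 3)

n_e = 1e36                # assumed electron number density, m⁻³
print(f"{degeneracy_pressure(n_e):.1e} Pa")     # ≈ 2.3e22 Pa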
All matter experiences both normal thermal pressure and degeneracy pressure, but in commonly encountered gases, thermal pressure dominates so much that degeneracy pressure can be ignored. Likewise, degenerate matter still has normal thermal pressure; the degeneracy pressure dominates to the point that temperature has a negligible effect on the total pressure. The adjacent figure shows the thermal pressure (red line) and total pressure (blue line) in a Fermi gas, with the difference between the two being the degeneracy pressure. As the temperature falls, the density and the degeneracy pressure increase, until the degeneracy pressure contributes most of the total pressure.
While degeneracy pressure usually dominates at extremely high densities, it is the ratio between degenerate pressure and thermal pressure which determines degeneracy. Given a sufficiently drastic increase in temperature (such as during a red giant star's helium flash), matter can become non-degenerate without reducing its density.
Degeneracy pressure contributes to the pressure of conventional solids, but these are not usually considered to be degenerate matter because a significant contribution to their pressure is provided by electrical repulsion of atomic nuclei and the screening of nuclei from each other by electrons. The free electron model of metals derives their physical properties by considering the conduction electrons alone as a degenerate gas, while the majority of the electrons are regarded as occupying bound quantum states. This solid state contrasts with degenerate matter that forms the body of a white dwarf, where most of the electrons would be treated as occupying free particle momentum states.
Exotic examples of degenerate matter include neutron degenerate matter, strange matter, metallic hydrogen and white dwarf matter.
== Degenerate gases ==
Degenerate gases are gases composed of fermions such as electrons, protons, and neutrons rather than molecules of ordinary matter. The electron gas in ordinary metals and in the interior of white dwarfs are two examples. Following the Pauli exclusion principle, there can be only one fermion occupying each quantum state. In a degenerate gas, all quantum states are filled up to the Fermi energy. Most stars are supported against their own gravitation by normal thermal gas pressure, while in white dwarf stars the supporting force comes from the degeneracy pressure of the electron gas in their interior. In neutron stars, the degenerate particles are neutrons.
A fermion gas in which all quantum states below a given energy level are filled is called a fully degenerate fermion gas. The difference between this energy level and the lowest energy level is known as the Fermi energy.
=== Electron degeneracy ===
In an ordinary fermion gas in which thermal effects dominate, most of the available electron energy levels are unfilled and the electrons are free to move to these states. As particle density is increased, electrons progressively fill the lower energy states and additional electrons are forced to occupy states of higher energy even at low temperatures. Degenerate gases strongly resist further compression because the electrons cannot move to already filled lower energy levels due to the Pauli exclusion principle. Since electrons cannot give up energy by moving to lower energy states, no thermal energy can be extracted. The momentum of the fermions in the fermion gas nevertheless generates pressure, termed "degeneracy pressure".
Under high densities, matter becomes a degenerate gas when all electrons are stripped from their parent atoms. The core of a star, once hydrogen burning nuclear fusion reactions stops, becomes a collection of positively charged ions, largely helium and carbon nuclei, floating in a sea of electrons, which have been stripped from the nuclei. Degenerate gas is an almost perfect conductor of heat and does not obey ordinary gas laws. White dwarfs are luminous not because they are generating energy but rather because they have trapped a large amount of heat which is gradually radiated away. Normal gas exerts higher pressure when it is heated and expands, but the pressure in a degenerate gas does not depend on the temperature. When gas becomes super-compressed, particles position right up against each other to produce degenerate gas that behaves more like a solid. In degenerate gases the kinetic energies of electrons are quite high and the rate of collision between electrons and other particles is quite low, therefore degenerate electrons can travel great distances at velocities that approach the speed of light. Instead of temperature, the pressure in a degenerate gas depends only on the speed of the degenerate particles; however, adding heat does not increase the speed of most of the electrons, because they are stuck in fully occupied quantum states. Pressure is increased only by the mass of the particles, which increases the gravitational force pulling the particles closer together. Therefore, the phenomenon is the opposite of that normally found in matter where if the mass of the matter is increased, the object becomes bigger. In degenerate gas, when the mass is increased, the particles become spaced closer together due to gravity (and the pressure is increased), so the object becomes smaller. Degenerate gas can be compressed to very high densities, typical values being in the range of 10,000 kilograms per cubic centimeter.
There is an upper limit to the mass of an electron-degenerate object, the Chandrasekhar limit, beyond which electron degeneracy pressure cannot support the object against collapse. The limit is approximately 1.44 solar masses for objects with typical compositions expected for white dwarf stars (carbon and oxygen with two baryons per electron). This mass cut-off is appropriate only for a star supported by ideal electron degeneracy pressure under Newtonian gravity; in general relativity and with realistic Coulomb corrections, the corresponding mass limit is around 1.38 solar masses. The limit may also change with the chemical composition of the object, as it affects the ratio of mass to number of electrons present. The object's rotation, which counteracts the gravitational force, also changes the limit for any particular object. Celestial objects below this limit are white dwarf stars, formed by the gradual shrinking of the cores of stars that run out of fuel. During this shrinking, an electron-degenerate gas forms in the core, providing sufficient degeneracy pressure as it is compressed to resist further collapse. Above this mass limit, a neutron star (primarily supported by neutron degeneracy pressure) or a black hole may be formed instead.
=== Neutron degeneracy ===
Neutron degeneracy is analogous to electron degeneracy and exists in neutron stars, which are partially supported by the pressure from a degenerate neutron gas. Neutron stars are formed either directly from the supernova of stars with masses between 10 and 25 M☉ (solar masses), or by white dwarfs acquiring a mass in excess of the Chandrasekhar limit of 1.44 M☉, usually either as a result of a merger or by feeding off of a close binary partner. Above the Chandrasekhar limit, the gravitational pressure at the core exceeds the electron degeneracy pressure, and electrons begin to combine with protons to produce neutrons (via inverse beta decay, also termed electron capture). The result is an extremely compact star composed of "nuclear matter", which is predominantly a degenerate neutron gas with a small admixture of degenerate proton and electron gases.
Neutrons in a degenerate neutron gas are spaced much more closely than electrons in an electron-degenerate gas because the more massive neutron has a much shorter wavelength at a given energy. This phenomenon is compounded by the fact that the pressures within neutron stars are much higher than those in white dwarfs. The pressure increase is caused by the fact that the compactness of a neutron star causes gravitational forces to be much higher than in a less compact body with similar mass. The result is a star with a diameter on the order of a thousandth that of a white dwarf.
The properties of neutron matter set an upper limit to the mass of a neutron star, the Tolman–Oppenheimer–Volkoff limit, which is analogous to the Chandrasekhar limit for white dwarf stars.
=== Proton degeneracy ===
Sufficiently dense matter containing protons experiences proton degeneracy pressure, in a manner similar to the electron degeneracy pressure in electron-degenerate matter: protons confined to a sufficiently small volume have a large uncertainty in their momentum due to the Heisenberg uncertainty principle. However, because protons are much more massive than electrons, the same momentum represents a much smaller velocity for protons than for electrons. As a result, in matter with approximately equal numbers of protons and electrons, proton degeneracy pressure is much smaller than electron degeneracy pressure, and proton degeneracy is usually modelled as a correction to the equations of state of electron-degenerate matter.
=== Quark degeneracy ===
At densities greater than those supported by neutron degeneracy, quark-degenerate matter
may occur in the cores of neutron stars, depending on the equations of state of neutron-degenerate matter. There is no observational evidence to support this conjecture and theoretical models that predict de-confined quark matter are only valid at masses higher than any observed neutron star.: 435
== History ==
In 1914 Walther Nernst described the reduction of the specific heat of gases at very low temperature as "degeneration"; he attributed this to quantum effects. In subsequent work in various papers on quantum thermodynamics by Albert Einstein, by Max Planck, and by Erwin Schrödinger, the effect at low temperatures came to be called "gas degeneracy". A fully degenerate gas has no volume dependence on pressure when temperature approaches absolute zero.
Early in 1927 Enrico Fermi and separately Llewellyn Thomas developed a semi-classical model for electrons in a metal. The model treated the electrons as a gas. Later in 1927, Arnold Sommerfeld applied the Pauli principle via Fermi-Dirac statistics to this electron gas model, computing the specific heat of metals; the result became the Fermi gas model for metals. Sommerfeld called the low temperature region with quantum effects a "wholly degenerate gas".
The concept of degenerate stars, stellar objects composed of degenerate matter, was originally developed in a joint effort between Arthur Eddington, Ralph Fowler and Arthur Milne. Eddington had suggested that the atoms in Sirius B were almost completely ionised and closely packed. Fowler described white dwarfs as composed of a gas of particles that became degenerate at low temperature; he also pointed out that ordinary atoms are broadly similar in regards to the filling of energy levels by fermions. In 1926, Milne proposed that degenerate matter is found in core of stars, not only in compact stars.
In 1927 Ralph H. Fowler applied Fermi's model to the puzzle of the stability of white dwarf stars. This approach was extended to relativistic models by later studies and with the work of Subrahmanyan Chandrasekhar became the accepted model for star stability.
== See also ==
Bose–Einstein condensate – Degenerate bosonic gas
Fermi liquid theory – Theoretical model in physics
Metallic hydrogen – High-pressure phase of hydrogen
== Citations ==
== References ==
Cohen-Tanoudji, Claude (2011). Advances in Atomic Physics. World Scientific. p. 791. ISBN 978-981-277-496-5. Archived from the original on 2012-05-11. Retrieved 2012-01-31.
== External links ==
Lecture 17: Stellar Evolution. Discusses degenerate gases in models of stars | Wikipedia/Degenerate_matter |
Photon energy is the energy carried by a single photon. The amount of energy is directly proportional to the photon's electromagnetic frequency and thus, equivalently, is inversely proportional to the wavelength. The higher the photon's frequency, the higher its energy. Equivalently, the longer the photon's wavelength, the lower its energy.
Photon energy can be expressed using any energy unit. Among the units commonly used to denote photon energy are the electronvolt (eV) and the joule (as well as its multiples, such as the microjoule). As one joule equals 6.24×1018 eV, the larger units may be more useful in denoting the energy of photons with higher frequency and higher energy, such as gamma rays, as opposed to lower energy photons as in the optical and radio frequency regions of the electromagnetic spectrum.
== Formulas ==
=== Physics ===
Photon energy is directly proportional to frequency.
{\displaystyle E=hf}

where

{\displaystyle E} is energy (joules in the SI system),
{\displaystyle h} is the Planck constant, and
{\displaystyle f} is frequency.
This equation is known as the Planck relation.
Additionally, using equation f = c/λ,
{\displaystyle E={\frac {hc}{\lambda }}}
where
E is the photon's energy
λ is the photon's wavelength
c is the speed of light in vacuum
h is the Planck constant
The photon energy at 1 Hz is equal to 6.62607015×10−34 J, which is equal to 4.135667697×10−15 eV.
=== Electronvolt ===
Photon energy is often measured in electronvolts. One electronvolt (eV) is exactly 1.602176634×10−19 J or, using the atto prefix, 0.1602176634 aJ, in the SI system. To find the photon energy in electronvolt using the wavelength in micrometres, the equation is approximately
{\displaystyle E{\text{ (eV)}}={\frac {1.2398}{\lambda {\text{ (μm)}}}}}
since
{\displaystyle hc/e}
= 1.239841984...×10−6 eV⋅m where h is the Planck constant, c is the speed of light, and e is the elementary charge.
The photon energy of near infrared radiation at 1 μm wavelength is approximately 1.2398 eV.
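The two conversions above can be checked with a few lines of Python (an illustrative sketch; the constants below are the exact SI defining values):

h = 6.62607015e-34        # Planck constant, J·s
c = 2.99792458e8          # speed of light, m/s
e = 1.602176634e-19       # elementary charge, C (joules per electronvolt)

def photon_energy_eV(wavelength_m):
    """Photon energy E = h c / λ, converted from joules to electronvolts."""
    return h * c / wavelength_m / e

print(round(photon_energy_eV(1e-6), 4))         # 1 μm photon: ≈ 1.2398 eV
print(round(photon_energy_eV(700e-9), 2))       # 700 nm photon: ≈ 1.77 eV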
== Examples ==
An FM radio station transmitting at 100 MHz emits photons with an energy of about 4.1357×10−7 eV. This minuscule amount of energy is approximately 8×10−13 times the electron's mass (via mass–energy equivalence).
Very-high-energy gamma rays have photon energies of 100 GeV to over 1 PeV (1011 to 1015 electronvolts) or 16 nJ to 160 μJ. This corresponds to frequencies of 2.42×1025 Hz to 2.42×1029 Hz.
During photosynthesis, specific chlorophyll molecules absorb red-light photons at a wavelength of 700 nm in the photosystem I, corresponding to an energy of each photon of ≈ 2 eV ≈ 3×10−19 J ≈ 75 kBT, where kBT denotes the thermal energy. A minimum of 48 photons is needed for the synthesis of a single glucose molecule from CO2 and water (chemical potential difference 5×10−18 J) with a maximal energy conversion efficiency of 35%.
== See also ==
Electromagnetic radiation
Electromagnetic spectrum
Planck relation
Soft photon
== References == | Wikipedia/Photon_energy |
In astrodynamics, the vis-viva equation is one of the equations that model the motion of orbiting bodies. It is the direct result of the principle of conservation of mechanical energy which applies when the only force acting on an object is its own weight which is the gravitational force determined by the product of the mass of the object and the strength of the surrounding gravitational field.
Vis viva (Latin for "living force") is a term from the history of mechanics and this name is given to the orbital equation originally derived by Isaac Newton.: 30 It represents the principle that the difference between the total work of the accelerating forces of a system and that of the retarding forces is equal to one half the vis viva accumulated or lost in the system while the work is being done.
== Formulation ==
For any Keplerian orbit (elliptic, parabolic, hyperbolic, or radial), the vis-viva equation: 30 is as follows:

{\displaystyle v^{2}=GM\left({2 \over r}-{1 \over a}\right)}
where:
v is the relative speed of the two bodies
r is the distance between the two bodies' centers of mass
a is the length of the semi-major axis (a > 0 for ellipses, a = ∞ or 1/a = 0 for parabolas, and a < 0 for hyperbolas)
G is the gravitational constant
M is the mass of the central body
The product of GM can also be expressed as the standard gravitational parameter using the Greek letter μ.: 33
== Practical applications ==
Given the total mass and the scalars r and v at a single point of the orbit, one can compute:
r and v at any other point in the orbit; and
the specific orbital energy {\displaystyle \varepsilon }, allowing an object orbiting a larger object to be classified as having not enough energy to remain in orbit, hence being "suborbital" (a ballistic missile, for example), having enough energy to be "orbital", but without the possibility to complete a full orbit anyway because it eventually collides with the other body, or having enough energy to come from and/or go to infinity (as a meteor, for example).
The formula for escape velocity can be obtained from the Vis-viva equation by taking the limit as
{\displaystyle a} approaches {\displaystyle \infty }:

{\displaystyle v_{e}^{2}=GM\left({\frac {2}{r}}-0\right)\rightarrow v_{e}={\sqrt {\frac {2GM}{r}}}}

For a given orbital radius, the escape velocity will be {\displaystyle {\sqrt {2}}} times the orbital velocity.: 32
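A short numerical sketch (Python, illustrative; the orbit is an assumed example of a circular low Earth orbit) of the vis-viva equation and of the √2 relation between circular and escape speed:

import math

mu = 3.986004418e14       # standard gravitational parameter GM of Earth, m³/s²

def vis_viva_speed(r, a):
    """Orbital speed v = sqrt(GM (2/r − 1/a))."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

r = 6.778e6               # assumed radius, ≈ 400 km altitude circular orbit (r = a)
v_circ = vis_viva_speed(r, r)
v_esc = vis_viva_speed(r, float("inf"))         # 1/a → 0 for the escape (parabolic) case
print(round(v_circ, 1), round(v_esc, 1), round(v_esc / v_circ, 3))   # ≈ 7668.6, 10845.0, 1.414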
== Derivation for elliptic orbits (0 ≤ eccentricity < 1) ==
Specific total energy is constant throughout the orbit. Thus, using the subscripts a and p to denote apoapsis (apogee) and periapsis (perigee), respectively,
{\displaystyle \varepsilon ={\frac {v_{a}^{2}}{2}}-{\frac {GM}{r_{a}}}={\frac {v_{p}^{2}}{2}}-{\frac {GM}{r_{p}}}}

Rearranging,

{\displaystyle {\frac {v_{a}^{2}}{2}}-{\frac {v_{p}^{2}}{2}}={\frac {GM}{r_{a}}}-{\frac {GM}{r_{p}}}}
Recalling that for an elliptical orbit (and hence also a circular orbit) the velocity and radius vectors are perpendicular at apoapsis and periapsis, conservation of angular momentum requires specific angular momentum {\displaystyle h=r_{p}v_{p}=r_{a}v_{a}={\text{constant}}}, thus {\displaystyle v_{p}={\frac {r_{a}}{r_{p}}}v_{a}}:

{\displaystyle {\frac {1}{2}}\left(1-{\frac {r_{a}^{2}}{r_{p}^{2}}}\right)v_{a}^{2}={\frac {GM}{r_{a}}}-{\frac {GM}{r_{p}}}}

{\displaystyle {\frac {1}{2}}\left({\frac {r_{p}^{2}-r_{a}^{2}}{r_{p}^{2}}}\right)v_{a}^{2}={\frac {GM}{r_{a}}}-{\frac {GM}{r_{p}}}}
Isolating the kinetic energy at apoapsis and simplifying,
1
2
v
a
2
=
(
G
M
r
a
−
G
M
r
p
)
⋅
r
p
2
r
p
2
−
r
a
2
1
2
v
a
2
=
G
M
(
r
p
−
r
a
r
a
r
p
)
r
p
2
r
p
2
−
r
a
2
1
2
v
a
2
=
G
M
r
p
r
a
(
r
p
+
r
a
)
{\displaystyle {\begin{aligned}{\frac {1}{2}}v_{a}^{2}&=\left({\frac {GM}{r_{a}}}-{\frac {GM}{r_{p}}}\right)\cdot {\frac {r_{p}^{2}}{r_{p}^{2}-r_{a}^{2}}}\\{\frac {1}{2}}v_{a}^{2}&=GM\left({\frac {r_{p}-r_{a}}{r_{a}r_{p}}}\right){\frac {r_{p}^{2}}{r_{p}^{2}-r_{a}^{2}}}\\{\frac {1}{2}}v_{a}^{2}&=GM{\frac {r_{p}}{r_{a}(r_{p}+r_{a})}}\end{aligned}}}
From the geometry of an ellipse,
2
a
=
r
p
+
r
a
{\displaystyle 2a=r_{p}+r_{a}}
where a is the length of the semimajor axis. Thus,
{\displaystyle {\frac {1}{2}}v_{a}^{2}=GM{\frac {2a-r_{a}}{r_{a}(2a)}}=GM\left({\frac {1}{r_{a}}}-{\frac {1}{2a}}\right)={\frac {GM}{r_{a}}}-{\frac {GM}{2a}}}
Substituting this into our original expression for specific orbital energy,
{\displaystyle \varepsilon ={\frac {v^{2}}{2}}-{\frac {GM}{r}}={\frac {v_{p}^{2}}{2}}-{\frac {GM}{r_{p}}}={\frac {v_{a}^{2}}{2}}-{\frac {GM}{r_{a}}}=-{\frac {GM}{2a}}}
Thus,
{\displaystyle \varepsilon =-{\frac {GM}{2a}}}
and the vis-viva equation may be written
{\displaystyle {\frac {v^{2}}{2}}-{\frac {GM}{r}}=-{\frac {GM}{2a}}}
or
{\displaystyle v^{2}=GM\left({\frac {2}{r}}-{\frac {1}{a}}\right)}
Therefore, the conserved angular momentum L = mh can be derived using
{\displaystyle r_{a}+r_{p}=2a} and {\displaystyle r_{a}r_{p}=b^{2}}
, where a is semi-major axis and b is semi-minor axis of the elliptical orbit, as follows:
{\displaystyle v_{a}^{2}=GM\left({\frac {2}{r_{a}}}-{\frac {1}{a}}\right)={\frac {GM}{a}}\left({\frac {2a-r_{a}}{r_{a}}}\right)={\frac {GM}{a}}\left({\frac {r_{p}}{r_{a}}}\right)={\frac {GM}{a}}\left({\frac {b}{r_{a}}}\right)^{2}}
and alternately,
{\displaystyle v_{p}^{2}=GM\left({\frac {2}{r_{p}}}-{\frac {1}{a}}\right)={\frac {GM}{a}}\left({\frac {2a-r_{p}}{r_{p}}}\right)={\frac {GM}{a}}\left({\frac {r_{a}}{r_{p}}}\right)={\frac {GM}{a}}\left({\frac {b}{r_{p}}}\right)^{2}}
Therefore, specific angular momentum
{\displaystyle h=r_{p}v_{p}=r_{a}v_{a}=b{\sqrt {\frac {GM}{a}}}}, and
Total angular momentum {\displaystyle L=mh=mb{\sqrt {\frac {GM}{a}}}}
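A minimal numerical check of this result, under assumed example values for GM and the apsis radii, is sketched below: the vis-viva speeds at periapsis and apoapsis give the same specific angular momentum as b times the square root of GM/a.

# Quick numerical check (illustrative, all values assumed): r_p*v_p, r_a*v_a and
# b*sqrt(GM/a) agree for a sample ellipse when the speeds come from vis-viva.
import math

GM = 3.986004418e14          # assumed: Earth's GM, m^3/s^2
r_p, r_a = 7.0e6, 1.4e7      # assumed periapsis and apoapsis radii, m
a = (r_p + r_a) / 2.0        # semi-major axis
b = math.sqrt(r_p * r_a)     # semi-minor axis

v_p = math.sqrt(GM * (2.0 / r_p - 1.0 / a))
v_a = math.sqrt(GM * (2.0 / r_a - 1.0 / a))

print(r_p * v_p, r_a * v_a, b * math.sqrt(GM / a))   # all three agree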
== References ==
In fluid dynamics, drag, sometimes referred to as fluid resistance, is a force acting opposite to the direction of motion of any object moving with respect to a surrounding fluid. This can exist between two fluid layers, two solid surfaces, or between a fluid and a solid surface. Drag forces tend to decrease fluid velocity relative to the solid object in the fluid's path.
Unlike other resistive forces, drag force depends on velocity. Drag force is proportional to the relative velocity for low-speed flow and is proportional to the velocity squared for high-speed flow. This distinction between low and high-speed flow is measured by the Reynolds number.
Drag is instantaneously related to vorticity dynamics through the Josephson-Anderson relation.
== Examples ==
Examples of drag include:
Net aerodynamic or hydrodynamic force: Drag acting opposite to the direction of movement of a solid object such as cars, aircraft, and boat hulls.
Viscous drag of fluid in a pipe: Drag force on the immobile pipe restricts the velocity of the fluid through the pipe.
In the physics of sports, drag force is necessary to explain the motion of balls, javelins, arrows, and frisbees and the performance of runners and swimmers. For a top sprinter, overcoming drag can require 5% of their energy output.
== Types ==
There are many distinct types of drag caused by different physical interactions between the object and fluid. Two types of drag are relevant for all objects:
Form drag, which is caused by the pressure exerted on the object as the fluid flow goes around the object. Form drag is determined by the cross-sectional shape and area of the body.
Skin friction drag (or viscous drag), which is caused by friction between the fluid and the surface of the object. The surface may be the outside of an object, such as a boat hull, or the inside of an object, such as the bore of a pipe.
Two further types of drag are primarily relevant for aircraft:
Lift-induced drag appears with wings or a lifting body in aviation and with semi-planing or planing hulls for watercraft
Wave drag (aerodynamics) is caused by the presence of shockwaves and first appears at subsonic aircraft speeds when local flow velocities become supersonic. The wave drag of the supersonic Concorde prototype aircraft was reduced at Mach 2 by 1.8% by applying the area rule which extended the rear fuselage 3.73 m (12.2 ft) on the production aircraft.
Wave resistance affects watercraft:
Wave resistance (ship hydrodynamics) occurs when a solid object is moving along a fluid boundary and making surface waves.
Last, in aerodynamics the term "parasitic drag" is often used. Parasitic drag is the sum of form drag and skin friction drag and is entirely negative to an aircraft, in contrast with lift-induced drag which is a consequence of generating lift.
=== Comparison of form drag and skin friction ===
The effect of streamlining on the relative proportions of skin friction and form drag is shown in the table at right for an airfoil, which is a streamlined body, and a cylinder, which is a bluff body. Also shown is a flat plate in two different orientations, illustrating the effect of orientation on the relative proportions of skin friction and form drag, and showing the pressure difference between front and back.
A body is known as bluff or blunt when the source of drag is dominated by pressure forces, and streamlined if the drag is dominated by viscous forces. For example, road vehicles are bluff bodies. For aircraft, pressure and friction drag are included in the definition of parasitic drag. Parasite drag is often expressed in terms of a hypothetical equivalent parasite area, discussed below.
=== Lift-induced drag ===
Lift-induced drag (also called induced drag) is drag which occurs as the result of the creation of lift on a three-dimensional lifting body, such as the wing or propeller of an airplane. Induced drag consists primarily of two components: drag due to the creation of trailing vortices (vortex drag); and the presence of additional viscous drag (lift-induced viscous drag) that is not present when lift is zero. The trailing vortices in the flow-field, present in the wake of a lifting body, derive from the turbulent mixing of air from above and below the body which flows in slightly different directions as a consequence of creation of lift.
With other parameters remaining the same, as the lift generated by a body increases, so does the lift-induced drag. This means that as the wing's angle of attack increases (up to a maximum called the stalling angle), the lift coefficient also increases, and so too does the lift-induced drag. At the onset of stall, lift is abruptly decreased, as is lift-induced drag, but viscous pressure drag, a component of parasite drag, increases due to the formation of turbulent unattached flow in the wake behind the body.
=== Parasitic drag ===
Parasitic drag, or profile drag, is the sum of viscous pressure drag (form drag) and drag due to surface roughness (skin friction drag). Additionally, the presence of multiple bodies in relative proximity may incur so called interference drag, which is sometimes described as a component of parasitic drag. In aeronautics the parasitic drag and lift-induced drag are often given separately.
For an aircraft at low speed, induced drag tends to be relatively greater than parasitic drag because a high angle of attack is required to maintain lift, increasing induced drag. As speed increases, the angle of attack is reduced and the induced drag decreases. Parasitic drag, however, increases because the fluid is flowing more quickly around protruding objects increasing friction or drag. At even higher speeds (transonic), wave drag enters the picture. Each of these forms of drag changes in proportion to the others based on speed. The combined overall drag curve therefore shows a minimum at some airspeed - an aircraft flying at this speed will be at or close to its optimal efficiency. Pilots will use this speed to maximize endurance (minimum fuel consumption), or maximize gliding range in the event of an engine failure.
The equivalent parasite area is the area which a flat plate perpendicular to the flow would have to match the parasite drag of an aircraft. It is a measure used when comparing the drag of different aircraft. For example, the Douglas DC-3 has an equivalent parasite area of 2.20 m2 (23.7 sq ft) and the McDonnell Douglas DC-9, with 30 years of advancement in aircraft design, an area of 1.91 m2 (20.6 sq ft) although it carried five times as many passengers.
== The drag equation ==
Drag depends on the properties of the fluid and on the size, shape, and speed of the object. One way to express this is by means of the drag equation:
{\displaystyle F_{\mathrm {D} }\,=\,{\tfrac {1}{2}}\,\rho \,v^{2}\,C_{\mathrm {D} }\,A}
where
{\displaystyle F_{\rm {D}}} is the drag force,
{\displaystyle \rho } is the density of the fluid,
{\displaystyle v} is the speed of the object relative to the fluid,
{\displaystyle A} is the cross sectional area, and
{\displaystyle C_{\rm {D}}} is the drag coefficient – a dimensionless number.
The drag coefficient depends on the shape of the object and on the Reynolds number
{\displaystyle \mathrm {Re} ={\frac {vD}{\nu }}={\frac {\rho vD}{\mu }},}
where {\displaystyle D} is some characteristic diameter or linear dimension. Actually, {\displaystyle D} is the equivalent diameter {\displaystyle D_{e}} of the object. For a sphere, {\displaystyle D_{e}} is the D of the sphere itself.
For a rectangular cross-section in the motion direction, {\displaystyle D_{e}=1.30\cdot {\frac {(a\cdot b)^{0.625}}{(a+b)^{0.25}}}}, where a and b are the rectangle edges.
{\displaystyle {\nu }} is the kinematic viscosity of the fluid (equal to the dynamic viscosity {\displaystyle {\mu }} divided by the density {\displaystyle {\rho }}).
At low {\displaystyle \mathrm {Re} }, {\displaystyle C_{\rm {D}}} is asymptotically proportional to {\displaystyle \mathrm {Re} ^{-1}}, which means that the drag is linearly proportional to the speed, i.e. the drag force on a small sphere moving through a viscous fluid is given by Stokes' law:
{\displaystyle F_{\rm {d}}=3\pi \mu Dv}
At high {\displaystyle \mathrm {Re} }, {\displaystyle C_{\rm {D}}} is more or less constant, but drag varies as the square of the speed. The graph to the right shows how {\displaystyle C_{\rm {D}}} varies with {\displaystyle \mathrm {Re} } for the case of a sphere. Since the power needed to overcome the drag force is the product of the force times speed, the power needed to overcome drag will vary as the square of the speed at low Reynolds numbers, and as the cube of the speed at high numbers.
It can be demonstrated that drag force can be expressed as a function of a dimensionless number, which is dimensionally identical to the Bejan number. Consequently, drag force and drag coefficient can be a function of Bejan number. In fact, from the expression of drag force it has been obtained:
{\displaystyle F_{\rm {d}}=\Delta _{\rm {p}}A_{\rm {w}}={\frac {1}{2}}C_{\rm {D}}A_{\rm {f}}{\frac {\nu \mu }{l^{2}}}\mathrm {Re} _{L}^{2}}
and consequently allows expressing the drag coefficient {\displaystyle C_{\rm {D}}} as a function of Bejan number and the ratio between wet area {\displaystyle A_{\rm {w}}} and front area {\displaystyle A_{\rm {f}}}:
{\displaystyle C_{\rm {D}}=2{\frac {A_{\rm {w}}}{A_{\rm {f}}}}{\frac {\mathrm {Be} }{\mathrm {Re} _{L}^{2}}}}
where {\displaystyle \mathrm {Re} _{L}} is the Reynolds number related to fluid path length L.
== At high velocity ==
As mentioned, the drag equation with a constant drag coefficient gives the force on an object moving through a fluid at relatively large velocity, i.e. high Reynolds number, Re > ~1000. This is also called quadratic drag.
{\displaystyle F_{D}\,=\,{\tfrac {1}{2}}\,\rho \,v^{2}\,C_{D}\,A,}
The derivation of this equation is presented at Drag equation § Derivation.
The reference area A is often the orthographic projection of the object, or the frontal area, on a plane perpendicular to the direction of motion. For objects with a simple shape, such as a sphere, this is the cross sectional area. Sometimes a body is a composite of different parts, each with a different reference area (drag coefficient corresponding to each of those different areas must be determined).
In the case of a wing, the reference areas are the same, and the drag force is in the same ratio as the lift force. Therefore, the reference for a wing is often the lifting area, sometimes referred to as "wing area" rather than the frontal area.
For an object with a smooth surface, and non-fixed separation points (like a sphere or circular cylinder), the drag coefficient may vary with Reynolds number Re, up to extremely high values (Re of the order 107).
For an object with well-defined fixed separation points, like a circular disk with its plane normal to the flow direction, the drag coefficient is constant for Re > 3,500.
Further, the drag coefficient Cd is, in general, a function of the orientation of the flow with respect to the object (apart from symmetrical objects like a sphere).
=== Power ===
Under the assumption that the fluid is not moving relative to the currently used reference system, the power required to overcome the aerodynamic drag is given by:
{\displaystyle P_{D}=\mathbf {F} _{D}\cdot \mathbf {v} ={\tfrac {1}{2}}\rho v^{3}AC_{D}}
The power needed to push an object through a fluid increases as the cube of the velocity increases. For example, a car cruising on a highway at 50 mph (80 km/h) may require only 10 horsepower (7.5 kW) to overcome aerodynamic drag, but that same car at 100 mph (160 km/h) requires 80 hp (60 kW). With a doubling of speeds, the drag/force quadruples per the formula. Exerting 4 times the force over a fixed distance produces 4 times as much work. At twice the speed, the work (resulting in displacement over a fixed distance) is done twice as fast. Since power is the rate of doing work, 4 times the work done in half the time requires 8 times the power.
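The cube-of-speed scaling can be illustrated with a short sketch; the density, drag coefficient, and frontal area below are assumed example values, and only the ratio of the two powers matters.

# Sketch (assumed illustrative numbers): doubling the speed quadruples the drag
# force and multiplies the power needed to overcome drag by eight.
rho, Cd, A = 1.2, 0.30, 2.2     # assumed air density (kg/m^3), drag coefficient, frontal area (m^2)

def drag_power(v):
    force = 0.5 * rho * v**2 * Cd * A     # drag equation
    return force * v                      # P = F * v

p_50 = drag_power(22.35)    # ~50 mph expressed in m/s
p_100 = drag_power(44.70)   # ~100 mph expressed in m/s
print(p_100 / p_50)         # 8.0 -- eight times the power at twice the speed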
When the fluid is moving relative to the reference system, for example, a car driving into headwind, the power required to overcome the aerodynamic drag is given by the following formula:
{\displaystyle P_{D}=\mathbf {F} _{D}\cdot \mathbf {v_{o}} ={\tfrac {1}{2}}C_{D}A\rho (v_{w}+v_{o})^{2}v_{o}}
where {\displaystyle v_{w}} is the wind speed and {\displaystyle v_{o}} is the object speed (both relative to ground).
=== Velocity of a falling object ===
Velocity as a function of time for an object falling through a non-dense medium, and released at zero relative-velocity v = 0 at time t = 0, is roughly given by a function involving a hyperbolic tangent (tanh):
{\displaystyle v(t)={\sqrt {\frac {2mg}{\rho AC_{D}}}}\tanh \left(t{\sqrt {\frac {g\rho C_{D}A}{2m}}}\right).\,}
The hyperbolic tangent has a limit value of one, for large time t. In other words, velocity asymptotically approaches a maximum value called the terminal velocity vt:
{\displaystyle v_{t}={\sqrt {\frac {2mg}{\rho AC_{D}}}}.\,}
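A minimal sketch of these two formulas, with all parameter values assumed for illustration, is:

# Sketch (assumed values) of terminal velocity and the tanh time history
# for an object released from rest in a fluid at rest.
import math

m, g = 80.0, 9.81          # assumed mass (kg) and gravitational acceleration (m/s^2)
rho, Cd, A = 1.2, 1.0, 0.7 # assumed air density, drag coefficient, frontal area

v_t = math.sqrt(2 * m * g / (rho * A * Cd))

def v(t):
    return v_t * math.tanh(t * g / v_t)   # same as tanh(t*sqrt(g*rho*Cd*A/(2m)))

print(v_t)            # terminal velocity, ~43 m/s for these numbers
print(v(5), v(30))    # approaches v_t for large t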
For an object falling and released at relative velocity v = vi at time t = 0, with vi < vt, the velocity is also defined in terms of the hyperbolic tangent function:
{\displaystyle v(t)=v_{t}\tanh \left(t{\frac {g}{v_{t}}}+\operatorname {arctanh} \left({\frac {v_{i}}{v_{t}}}\right)\right).\,}
For vi > vt, the velocity function is defined in terms of the hyperbolic cotangent function:
{\displaystyle v(t)=v_{t}\coth \left(t{\frac {g}{v_{t}}}+\coth ^{-1}\left({\frac {v_{i}}{v_{t}}}\right)\right).\,}
The hyperbolic cotangent also has a limit value of one, for large time t. Velocity asymptotically tends to the terminal velocity vt, strictly from above vt.
For vi = vt, the velocity is constant:
{\displaystyle v(t)=v_{t}.}
These functions are defined by the solution of the following differential equation:
{\displaystyle g-{\frac {\rho AC_{D}}{2m}}v^{2}={\frac {dv}{dt}}.\,}
Or, more generically (where F(v) are the forces acting on the object beyond drag):
{\displaystyle {\frac {1}{m}}\sum F(v)-{\frac {\rho AC_{D}}{2m}}v^{2}={\frac {dv}{dt}}.\,}
For a potato-shaped object of average diameter d and of density ρobj, terminal velocity is about
{\displaystyle v_{t}={\sqrt {gd{\frac {\rho _{obj}}{\rho }}}}.\,}
For objects of water-like density (raindrops, hail, live objects—mammals, birds, insects, etc.) falling in air near Earth's surface at sea level, the terminal velocity is roughly equal to
{\displaystyle v_{t}=90{\sqrt {d}},\,}
with d in metres and vt in m/s.
For example, for a human body ({\displaystyle d} ≈ 0.6 m) {\displaystyle v_{t}} ≈ 70 m/s, for a small animal like a cat ({\displaystyle d} ≈ 0.2 m) {\displaystyle v_{t}} ≈ 40 m/s, for a small bird ({\displaystyle d} ≈ 0.05 m) {\displaystyle v_{t}} ≈ 20 m/s, for an insect ({\displaystyle d} ≈ 0.01 m) {\displaystyle v_{t}} ≈ 9 m/s, and so on. Terminal velocity for very small objects (pollen, etc.) at low Reynolds numbers is determined by Stokes law.
In short, terminal velocity is higher for larger creatures, and thus potentially more deadly. A creature such as a mouse falling at its terminal velocity is much more likely to survive impact with the ground than a human falling at its terminal velocity.
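A rough check of the vt ≈ 90√d rule of thumb for the diameters quoted above (illustrative only):

# Evaluating the v_t ~ 90*sqrt(d) rule of thumb for water-density objects in air,
# with d in metres and v_t in m/s (diameters taken from the examples above).
import math

for label, d in [("human", 0.6), ("cat", 0.2), ("small bird", 0.05), ("insect", 0.01)]:
    print(label, round(90 * math.sqrt(d)))   # ~70, ~40, ~20, ~9 m/s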
== Low Reynolds numbers: Stokes' drag ==
The equation for viscous resistance or linear drag is appropriate for objects or particles moving through a fluid at relatively slow speeds (assuming there is no turbulence). Purely laminar flow only exists up to Re = 0.1 under this definition. In this case, the force of drag is approximately proportional to velocity. The equation for viscous resistance is:
{\displaystyle \mathbf {F} _{D}=-b\mathbf {v} \,}
where:
{\displaystyle b} is a constant that depends on both the material properties of the object and fluid, as well as the geometry of the object; and
{\displaystyle \mathbf {v} } is the velocity of the object.
When an object falls from rest, its velocity will be
{\displaystyle v(t)={\frac {(\rho -\rho _{0})\,V\,g}{b}}\left(1-e^{-b\,t/m}\right)}
where:
{\displaystyle \rho } is the density of the object,
{\displaystyle \rho _{0}} is the density of the fluid,
{\displaystyle V} is the volume of the object,
{\displaystyle g} is the acceleration due to gravity (i.e., 9.8 m/s{\displaystyle ^{2}}), and
{\displaystyle m} is the mass of the object.
The velocity asymptotically approaches the terminal velocity {\displaystyle v_{t}={\frac {(\rho -\rho _{0})Vg}{b}}}. For a given {\displaystyle b}, denser objects fall more quickly.
For the special case of small spherical objects moving slowly through a viscous fluid (and thus at small Reynolds number), George Gabriel Stokes derived an expression for the drag constant:
{\displaystyle b=6\pi \eta r\,}
where {\displaystyle r} is the Stokes radius of the particle, and {\displaystyle \eta } is the fluid viscosity.
The resulting expression for the drag is known as Stokes' drag:
{\displaystyle \mathbf {F} _{D}=-6\pi \eta r\,\mathbf {v} .}
For example, consider a small sphere with radius {\displaystyle r} = 0.5 micrometre (diameter = 1.0 μm) moving through water at a velocity {\displaystyle v} of 10 μm/s. Using 10⁻³ Pa·s as the dynamic viscosity of water in SI units, we find a drag force of 0.09 pN. This is about the drag force that a bacterium experiences as it swims through water.
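The same estimate can be reproduced with a few lines (values taken from the example above):

# Stokes-drag estimate for a 0.5 um radius sphere moving at 10 um/s in water.
import math

mu = 1.0e-3      # dynamic viscosity of water, Pa*s
r = 0.5e-6       # particle radius, m
v = 10.0e-6      # speed, m/s

F = 6 * math.pi * mu * r * v
print(F)         # ~9.4e-14 N, i.e. about 0.09 pN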
The drag coefficient of a sphere can be determined for the general case of a laminar flow with Reynolds numbers less than {\displaystyle 2\cdot 10^{5}} using the following formula:
{\displaystyle C_{D}={\frac {24}{Re}}+{\frac {4}{\sqrt {Re}}}+0.4~{\text{;}}~~~~~Re<2\cdot 10^{5}}
For Reynolds numbers less than 1, Stokes' law applies and the drag coefficient approaches {\displaystyle {\frac {24}{Re}}}.
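A small sketch of this empirical formula (illustrative; the sampled Reynolds numbers are arbitrary):

# Empirical sphere drag-coefficient formula quoted above, valid for Re < 2e5.
import math

def sphere_cd(re):
    return 24.0 / re + 4.0 / math.sqrt(re) + 0.4

for re in (0.1, 1.0, 100.0, 1.0e4):
    print(re, sphere_cd(re))   # ~24/Re at small Re, roughly 0.4-0.5 at large Re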
== Aerodynamics ==
In aerodynamics, aerodynamic drag, also known as air resistance, is the fluid drag force that acts on any moving solid body in the direction of the air's freestream flow.
From the body's perspective (near-field approach), the drag results from forces due to pressure distributions over the body surface, symbolized {\displaystyle D_{pr}}, and from forces due to skin friction, which is a result of viscosity, denoted {\displaystyle D_{f}}.
Alternatively, calculated from the flow field perspective (far-field approach), the drag force results from three natural phenomena: shock waves, vortex sheet, and viscosity.
=== Overview of aerodynamics ===
When the airplane produces lift, another drag component results. Induced drag, symbolized {\displaystyle D_{i}}, is due to a modification of the pressure distribution due to the trailing vortex system that accompanies the lift production. An alternative perspective on lift and drag is gained from considering the change of momentum of the airflow. The wing intercepts the airflow and forces the flow to move downward. This results in an equal and opposite force acting upward on the wing which is the lift force. The change of momentum of the airflow downward results in a reduction of the rearward momentum of the flow which is the result of a force acting forward on the airflow and applied by the wing to the air flow; an equal but opposite force acts on the wing rearward which is the induced drag. Another drag component, namely wave drag, {\displaystyle D_{w}}, results from shock waves in transonic and supersonic flight speeds. The shock waves induce changes in the boundary layer and pressure distribution over the body surface.
Therefore, there are three ways of categorizing drag:
Pressure drag and friction drag
Profile drag and induced drag
Vortex drag, wave drag and wake drag
The pressure distribution acting on a body's surface exerts normal forces on the body. Those forces can be added together and the component of that force that acts downstream represents the drag force, {\displaystyle D_{pr}}. The nature of these normal forces combines shock wave effects, vortex system generation effects, and wake viscous mechanisms.
Viscosity of the fluid has a major effect on drag. In the absence of viscosity, the pressure forces acting to hinder the vehicle are canceled by a pressure force further aft that acts to push the vehicle forward; this is called pressure recovery and the result is that the drag is zero. That is to say, the work the body does on the airflow is reversible and is recovered as there are no frictional effects to convert the flow energy into heat. Pressure recovery acts even in the case of viscous flow. Viscosity, however, results in pressure drag, and it is the dominant component of drag in the case of vehicles with regions of separated flow, in which the pressure recovery is ineffective.
The friction drag force, which is a tangential force on the aircraft surface, depends substantially on boundary layer configuration and viscosity. The net friction drag, {\displaystyle D_{f}}, is calculated as the downstream projection of the viscous forces evaluated over the body's surface. The sum of friction drag and pressure (form) drag is called viscous drag. This drag component is due to viscosity.
=== History ===
The idea that a moving body passing through air or another fluid encounters resistance had been known since the time of Aristotle. According to Mervyn O'Gorman, this was named "drag" by Archibald Reith Low. Louis Charles Breguet's paper of 1922 began efforts to reduce drag by streamlining. Breguet went on to put his ideas into practice by designing several record-breaking aircraft in the 1920s and 1930s. Ludwig Prandtl's boundary layer theory in the 1920s provided the impetus to minimise skin friction. A further major call for streamlining was made by Sir Melvill Jones who provided the theoretical concepts to demonstrate emphatically the importance of streamlining in aircraft design.
In 1929 his paper 'The Streamline Airplane' presented to the Royal Aeronautical Society was seminal. He proposed an ideal aircraft that would have minimal drag which led to the concepts of a 'clean' monoplane and retractable undercarriage. The aspect of Jones's paper that most shocked the designers of the time was his plot of the horse power required versus velocity, for an actual and an ideal plane. By looking at a data point for a given aircraft and extrapolating it horizontally to the ideal curve, the velocity gain for the same power can be seen. When Jones finished his presentation, a member of the audience described the results as being of the same level of importance as the Carnot cycle in thermodynamics.
=== Power curve in aviation ===
The interaction of parasitic and induced drag vs. airspeed can be plotted as a characteristic curve, illustrated here. In aviation, this is often referred to as the power curve, and is important to pilots because it shows that, below a certain airspeed, maintaining airspeed counterintuitively requires more thrust as speed decreases, rather than less. The consequences of being "behind the curve" in flight are important and are taught as part of pilot training. At the subsonic airspeeds where the "U" shape of this curve is significant, wave drag has not yet become a factor, and so it is not shown in the curve.
=== Wave drag in transonic and supersonic flow ===
Wave drag, sometimes referred to as compressibility drag, is drag that is created when a body moves in a compressible fluid and at the speed that is close to the speed of sound in that fluid. In aerodynamics, wave drag consists of multiple components depending on the speed regime of the flight.
In transonic flight, wave drag is the result of the formation of shockwaves in the fluid, formed when local areas of supersonic (Mach number greater than 1.0) flow are created. In practice, supersonic flow occurs on bodies traveling well below the speed of sound, as the local speed of air increases as it accelerates over the body to speeds above Mach 1.0. However, full supersonic flow over the vehicle will not develop until well past Mach 1.0. Aircraft flying at transonic speed often incur wave drag through the normal course of operation. In transonic flight, wave drag is commonly referred to as transonic compressibility drag. Transonic compressibility drag increases significantly as the speed of flight increases towards Mach 1.0, dominating other forms of drag at those speeds.
In supersonic flight (Mach numbers greater than 1.0), wave drag is the result of shockwaves present in the fluid and attached to the body, typically oblique shockwaves formed at the leading and trailing edges of the body. In highly supersonic flows, or in bodies with turning angles sufficiently large, unattached shockwaves, or bow waves will instead form. Additionally, local areas of transonic flow behind the initial shockwave may occur at lower supersonic speeds, and can lead to the development of additional, smaller shockwaves present on the surfaces of other lifting bodies, similar to those found in transonic flows. In supersonic flow regimes, wave drag is commonly separated into two components, supersonic lift-dependent wave drag and supersonic volume-dependent wave drag.
The closed form solution for the minimum wave drag of a body of revolution with a fixed length was found by Sears and Haack, and is known as the Sears-Haack Distribution. Similarly, for a fixed volume, the shape for minimum wave drag is the Von Karman Ogive.
The Busemann biplane theoretical concept is not subject to wave drag when operated at its design speed, but is incapable of generating lift in this condition.
== d'Alembert's paradox ==
In 1752 d'Alembert proved that potential flow, the 18th century state-of-the-art inviscid flow theory amenable to mathematical solutions, resulted in the prediction of zero drag. This was in contradiction with experimental evidence, and became known as d'Alembert's paradox. In the 19th century the Navier–Stokes equations for the description of viscous flow were developed by Saint-Venant, Navier and Stokes. Stokes derived the drag around a sphere at very low Reynolds numbers, the result of which is called Stokes' law.
In the limit of high Reynolds numbers, the Navier–Stokes equations approach the inviscid Euler equations, of which the potential-flow solutions considered by d'Alembert are solutions. However, all experiments at high Reynolds numbers showed there is drag. Attempts to construct inviscid steady flow solutions to the Euler equations, other than the potential flow solutions, did not result in realistic results.
The notion of boundary layers—introduced by Prandtl in 1904, founded on both theory and experiments—explained the causes of drag at high Reynolds numbers. The boundary layer is the thin layer of fluid close to the object's boundary, where viscous effects remain important even when the viscosity is very small (or equivalently the Reynolds number is very large).
== See also ==
== References ==
'Improved Empirical Model for Base Drag Prediction on Missile Configurations, based on New Wind Tunnel Data', Frank G Moore et al. NASA Langley Center
'Computational Investigation of Base Drag Reduction for a Projectile at Different Flight Regimes', M A Suliman et al. Proceedings of 13th International Conference on Aerospace Sciences & Aviation Technology, ASAT- 13, May 26 – 28, 2009
'Base Drag and Thick Trailing Edges', Sighard F. Hoerner, Air Materiel Command, in: Journal of the Aeronautical Sciences, Oct 1950, pp 622–628
== Bibliography ==
French, A. P. (1970). Newtonian Mechanics (The M.I.T. Introductory Physics Series) (1st ed.). W. W. White & Company Inc., New York. ISBN 978-0-393-09958-4.
G. Falkovich (2011). Fluid Mechanics (A short course for physicists). Cambridge University Press. ISBN 978-1-107-00575-4.
Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 978-0-534-40842-8.
Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 978-0-7167-0809-4.
Huntley, H. E. (1967). Dimensional Analysis. LOC 67-17978.
Batchelor, George (2000). An introduction to fluid dynamics. Cambridge Mathematical Library (2nd ed.). Cambridge University Press. ISBN 978-0-521-66396-0. MR 1744638.
L. J. Clancy (1975), Aerodynamics, Pitman Publishing Limited, London. ISBN 978-0-273-01120-0
Anderson, John D. Jr. (2000); Introduction to Flight, Fourth Edition, McGraw Hill Higher Education, Boston, Massachusetts, USA. 8th ed. 2015, ISBN 978-0078027673.
== External links ==
Educational materials on air resistance
Aerodynamic Drag and its effect on the acceleration and top speed of a vehicle.
Vehicle Aerodynamic Drag calculator based on drag coefficient, frontal area and speed.
Smithsonian National Air and Space Museum's How Things Fly website
Effect of dimples on a golf ball and a car
In orbital mechanics, Kepler's equation relates various geometric properties of the orbit of a body subject to a central force.
It was derived by Johannes Kepler in 1609 in Chapter 60 of his Astronomia nova, and in book V of his Epitome of Copernican Astronomy (1621) Kepler proposed an iterative solution to the equation. This equation and its solution, however, first appeared in a 9th-century work by Habash al-Hasib al-Marwazi, which dealt with problems of parallax. The equation has played an important role in the history of both physics and mathematics, particularly classical celestial mechanics.
== Equation ==
Kepler's equation is
{\displaystyle M=E-e\sin E}
where {\displaystyle M} is the mean anomaly, {\displaystyle E} is the eccentric anomaly, and {\displaystyle e} is the eccentricity.
The 'eccentric anomaly' {\displaystyle E} is useful to compute the position of a point moving in a Keplerian orbit. For instance, if the body passes the periastron at coordinates {\displaystyle x=a(1-e)}, {\displaystyle y=0}, at time {\displaystyle t=t_{0}}, then to find out the position of the body at any time, you first calculate the mean anomaly {\displaystyle M} from the time and the mean motion {\displaystyle n} by the formula {\displaystyle M=n(t-t_{0})}, then solve the Kepler equation above to get {\displaystyle E}, then get the coordinates from:
{\displaystyle x=a(\cos E-e),\qquad y=b\sin E}
where {\displaystyle a} is the semi-major axis and {\displaystyle b} the semi-minor axis.
Kepler's equation is a transcendental equation because sine is a transcendental function, and it cannot be solved for {\displaystyle E} algebraically. Numerical analysis and series expansions are generally required to evaluate {\displaystyle E}.
== Alternate forms ==
There are several forms of Kepler's equation. Each form is associated with a specific type of orbit. The standard Kepler equation is used for elliptic orbits ({\displaystyle 0\leq e<1}). The hyperbolic Kepler equation is used for hyperbolic trajectories ({\displaystyle e>1}). The radial Kepler equation is used for linear (radial) trajectories ({\displaystyle e=1}). Barker's equation is used for parabolic trajectories (for which {\displaystyle e=1}). With the parabolic orbit, unlike the elliptical or hyperbolic orbits, it is possible to solve Barker's equation and find a closed-form expression for the position as a function of time.
When {\displaystyle e=0}, the orbit is circular. Increasing {\displaystyle e} causes the circle to become elliptical. When {\displaystyle e=1}, there are four possibilities:
a parabolic trajectory,
a trajectory that goes back and forth along a line segment from the centre of attraction to a point at some distance away,
a trajectory going in or out along an infinite ray emanating from the centre of attraction, with its speed going to zero with distance
or a trajectory along a ray, but with speed not going to zero with distance.
A value of {\displaystyle e} slightly above 1 results in a hyperbolic orbit with a turning angle of just under 180 degrees. Further increases reduce the turning angle, and as {\displaystyle e} goes to infinity, the orbit becomes a straight line of infinite length.
=== Hyperbolic Kepler equation ===
The hyperbolic Kepler equation is:
{\displaystyle M=e\sinh H-H}
where {\displaystyle H} is the hyperbolic eccentric anomaly.
This equation is derived by redefining M to be the square root of −1 times the right-hand side of the elliptical equation: {\displaystyle M=i\left(E-e\sin E\right)} (in which {\displaystyle E} is now imaginary) and then replacing {\displaystyle E} by {\displaystyle iH}.
=== Radial Kepler equations ===
The radial Kepler equation for the case where the object does not have enough energy to escape is:
{\displaystyle t(x)=\sin ^{-1}({\sqrt {x}})-{\sqrt {x(1-x)}}}
where {\displaystyle t} is proportional to time and {\displaystyle x} is proportional to the distance from the centre of attraction along the ray and attains the value 1 at the maximum distance. This equation is derived by multiplying Kepler's equation by 1/2 and setting {\displaystyle e} to 1:
{\displaystyle t(x)={\frac {1}{2}}\left[E-\sin E\right].}
and then making the substitution
{\displaystyle E=2\sin ^{-1}({\sqrt {x}}).}
The radial equation for when the object has enough energy to escape is:
When the energy is exactly the minimum amount needed to escape, then the time is simply proportional to the distance to the power 3/2.
== Inverse problem ==
Calculating {\displaystyle M} for a given value of {\displaystyle E} is straightforward. However, solving for {\displaystyle E} when {\displaystyle M} is given can be considerably more challenging. There is no closed-form solution. Solving for {\displaystyle E} is more or less equivalent to solving for the true anomaly, or the difference between the true anomaly and the mean anomaly, which is called the "equation of the center".
One can write an infinite series expression for the solution to Kepler's equation using Lagrange inversion, but the series does not converge for all combinations of {\displaystyle e} and {\displaystyle M} (see below).
Confusion over the solvability of Kepler's equation has persisted in the literature for four centuries. Kepler himself expressed doubt at the possibility of finding a general solution:
I am sufficiently satisfied that it [Kepler's equation] cannot be solved a priori, on account of the different nature of the arc and the sine. But if I am mistaken, and any one shall point out the way to me, he will be in my eyes the great Apollonius.
Fourier series expansion (with respect to {\displaystyle M}) using Bessel functions is
{\displaystyle E=M+\sum _{m=1}^{\infty }{\frac {2}{m}}J_{m}(me)\sin(mM),\quad e\leq 1,\quad M\in [-\pi ,\pi ].}
With respect to {\displaystyle e}, it is a Kapteyn series.
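The truncated series is easy to evaluate numerically; the sketch below assumes SciPy's Bessel function jv and arbitrary sample values of M and e, and checks the result against Kepler's equation.

# Illustrative sketch: truncated Bessel-function series for E, checked against
# Kepler's equation M = E - e*sin(E). scipy.special.jv(m, x) is the Bessel
# function of the first kind of order m.
import math
from scipy.special import jv

def eccentric_anomaly_series(M, e, terms=50):
    E = M
    for m in range(1, terms + 1):
        E += (2.0 / m) * jv(m, m * e) * math.sin(m * M)
    return E

M, e = 1.0, 0.3                   # assumed sample values, M in radians
E = eccentric_anomaly_series(M, e)
print(E, E - e * math.sin(E))     # the second value recovers M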
=== Inverse Kepler equation ===
The inverse Kepler equation is the solution of Kepler's equation for all real values of {\displaystyle e}:
{\displaystyle E={\begin{cases}\displaystyle \sum _{n=1}^{\infty }{\frac {M^{\frac {n}{3}}}{n!}}\lim _{\theta \to 0^{+}}\!{\Bigg (}{\frac {\mathrm {d} ^{\,n-1}}{\mathrm {d} \theta ^{\,n-1}}}{\bigg (}{\bigg (}{\frac {\theta }{\sqrt[{3}]{\theta -\sin(\theta )}}}{\bigg )}^{\!\!\!n}{\bigg )}{\Bigg )},&e=1\\\displaystyle \sum _{n=1}^{\infty }{\frac {M^{n}}{n!}}\lim _{\theta \to 0^{+}}\!{\Bigg (}{\frac {\mathrm {d} ^{\,n-1}}{\mathrm {d} \theta ^{\,n-1}}}{\bigg (}{\Big (}{\frac {\theta }{\theta -e\sin(\theta )}}{\Big )}^{\!n}{\bigg )}{\Bigg )},&e\neq 1\end{cases}}}
Evaluating this yields:
{\displaystyle E={\begin{cases}\displaystyle s+{\frac {1}{60}}s^{3}+{\frac {1}{1400}}s^{5}+{\frac {1}{25200}}s^{7}+{\frac {43}{17248000}}s^{9}+{\frac {1213}{7207200000}}s^{11}+{\frac {151439}{12713500800000}}s^{13}+\cdots {\text{ with }}s=(6M)^{1/3},&e=1\\\\\displaystyle {\frac {1}{1-e}}M-{\frac {e}{(1-e)^{4}}}{\frac {M^{3}}{3!}}+{\frac {(9e^{2}+e)}{(1-e)^{7}}}{\frac {M^{5}}{5!}}-{\frac {(225e^{3}+54e^{2}+e)}{(1-e)^{10}}}{\frac {M^{7}}{7!}}+{\frac {(11025e^{4}+4131e^{3}+243e^{2}+e)}{(1-e)^{13}}}{\frac {M^{9}}{9!}}+\cdots ,&e\neq 1\end{cases}}}
These series can be reproduced in Mathematica with the InverseSeries operation.
InverseSeries[Series[M - Sin[M], {M, 0, 10}]]
InverseSeries[Series[M - e Sin[M], {M, 0, 10}]]
These functions are simple Maclaurin series. Such Taylor series representations of transcendental functions are considered to be definitions of those functions. Therefore, this solution is a formal definition of the inverse Kepler equation. However,
{\displaystyle E} is not an entire function of {\displaystyle M} at a given non-zero {\displaystyle e}. Indeed, the derivative {\displaystyle \mathrm {d} M/\mathrm {d} E=1-e\cos E} goes to zero at an infinite set of complex numbers when {\displaystyle e<1}, the nearest to zero being at {\displaystyle E=\pm i\cosh ^{-1}(1/e)}, and at these two points {\displaystyle M=E-e\sin E=\pm i\left(\cosh ^{-1}(1/e)-{\sqrt {1-e^{2}}}\right)} (where inverse cosh is taken to be positive), and {\displaystyle \mathrm {d} E/\mathrm {d} M} goes to infinity at these values of {\displaystyle M}. This means that the radius of convergence of the Maclaurin series is {\displaystyle \cosh ^{-1}(1/e)-{\sqrt {1-e^{2}}}} and the series will not converge for values of {\displaystyle M} larger than this. The series can also be used for the hyperbolic case, in which case the radius of convergence is {\displaystyle \cos ^{-1}(1/e)-{\sqrt {e^{2}-1}}.} The series for when {\displaystyle e=1} converges when {\displaystyle M<2\pi }.
While this solution is the simplest in a certain mathematical sense, other solutions are preferable for most applications. Alternatively, Kepler's equation can be solved numerically.
The solution for {\displaystyle e\neq 1} was found by Karl Stumpff in 1968, but its significance wasn't recognized.
One can also write a Maclaurin series in {\displaystyle e}. This series does not converge when {\displaystyle e} is larger than the Laplace limit (about 0.66), regardless of the value of {\displaystyle M} (unless {\displaystyle M} is a multiple of 2π), but it converges for all {\displaystyle M} if {\displaystyle e} is less than the Laplace limit. The coefficients in the series, other than the first (which is simply {\displaystyle M}), depend on {\displaystyle M} in a periodic way with period 2π.
=== Inverse radial Kepler equation ===
The inverse radial Kepler equation ({\displaystyle e=1}) for the case in which the object does not have enough energy to escape can similarly be written as:
{\displaystyle x(t)=\sum _{n=1}^{\infty }\left[\lim _{r\to 0^{+}}\left({\frac {t^{{\frac {2}{3}}n}}{n!}}{\frac {\mathrm {d} ^{\,n-1}}{\mathrm {d} r^{\,n-1}}}\!\left(r^{n}\left({\frac {3}{2}}{\Big (}\sin ^{-1}({\sqrt {r}})-{\sqrt {r-r^{2}}}{\Big )}\right)^{\!-{\frac {2}{3}}n}\right)\right)\right]}
Evaluating this yields:
{\displaystyle x(t)=p-{\frac {1}{5}}p^{2}-{\frac {3}{175}}p^{3}-{\frac {23}{7875}}p^{4}-{\frac {1894}{3031875}}p^{5}-{\frac {3293}{21896875}}p^{6}-{\frac {2418092}{62077640625}}p^{7}-\ \cdots \ {\bigg |}{p=\left({\tfrac {3}{2}}t\right)^{2/3}}}
To obtain this result using Mathematica:
InverseSeries[Series[ArcSin[Sqrt[t]] - Sqrt[(1 - t) t], {t, 0, 15}]]
== Numerical approximation of inverse problem ==
=== Newton's method ===
For most applications, the inverse problem can be computed numerically by finding the root of the function:
{\displaystyle f(E)=E-e\sin(E)-M(t)}
This can be done iteratively via Newton's method:
{\displaystyle E_{n+1}=E_{n}-{\frac {f(E_{n})}{f'(E_{n})}}=E_{n}-{\frac {E_{n}-e\sin(E_{n})-M(t)}{1-e\cos(E_{n})}}}
Note that {\displaystyle E} and {\displaystyle M} are in units of radians in this computation. This iteration is repeated until desired accuracy is obtained (e.g. when {\displaystyle f(E)} < desired accuracy). For most elliptical orbits an initial value of {\displaystyle E_{0}=M(t)} is sufficient. For orbits with {\displaystyle e>0.8}, an initial value of {\displaystyle E_{0}=\pi } can be used. Numerous works have developed more accurate (but also more complex) starting guesses. If {\displaystyle e} is identically 1, then the derivative of {\displaystyle f}, which is in the denominator of Newton's method, can get close to zero, making derivative-based methods such as Newton–Raphson, secant, or regula falsi numerically unstable. In that case, the bisection method will provide guaranteed convergence, particularly since the solution can be bounded in a small initial interval. On modern computers, it is possible to achieve 4 or 5 digits of accuracy in 17 to 18 iterations. A similar approach can be used for the hyperbolic form of Kepler's equation. In the case of a parabolic trajectory, Barker's equation is used.
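A minimal sketch of this Newton iteration, with an assumed tolerance and the starting values suggested above (angles in radians):

# Newton's method for Kepler's equation, E - e*sin(E) = M.
import math

def solve_kepler(M, e, tol=1e-12, max_iter=100):
    E = M if e < 0.8 else math.pi          # starting value as suggested in the text
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M
        E -= f / (1.0 - e * math.cos(E))   # Newton step
        if abs(f) < tol:
            break
    return E

E = solve_kepler(M=1.0, e=0.5)
print(E, E - 0.5 * math.sin(E))            # the second value recovers M = 1.0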
=== Fixed-point iteration ===
A related method starts by noting that {\displaystyle E=M+e\sin {E}}. Repeatedly substituting the expression on the right for the {\displaystyle E} on the right yields a simple fixed-point iteration algorithm for evaluating {\displaystyle E(e,M)}. This method is identical to Kepler's 1621 solution. In pseudocode:
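(The original pseudocode is not reproduced in this copy; the following is a minimal Python sketch of the same fixed-point scheme, with the iteration count n treated as an assumed input.)

# Fixed-point iteration for Kepler's equation, E = M + e*sin(E).
import math

def kepler_fixed_point(M, e, n=30):
    E = M                        # start from the mean anomaly
    for _ in range(n):
        E = M + e * math.sin(E)  # repeatedly substitute E on the right-hand side
    return E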
The number of iterations, {\displaystyle n}, depends on the value of {\displaystyle e}. The hyperbolic form similarly has {\displaystyle H=\sinh ^{-1}\left({\frac {H+M}{e}}\right)}.
This method is related to the Newton's method solution above in that
{\displaystyle E_{n+1}=E_{n}-{\frac {E_{n}-e\sin(E_{n})-M(t)}{1-e\cos(E_{n})}}=E_{n}+{\frac {(M+e\sin {E_{n}}-E_{n})(1+e\cos {E_{n}})}{1-e^{2}(\cos {E_{n}})^{2}}}}
To first order in the small quantities {\displaystyle M-E_{n}} and {\displaystyle e}, {\displaystyle E_{n+1}\approx M+e\sin {E_{n}}}.
== See also ==
Equation of the center
Kepler's laws of planetary motion
Kepler problem
Kepler problem in general relativity
Radial trajectory
== References ==
== External links ==
Danby, John M.; Burkardt, Thomas M. (1983). "The solution of Kepler's equation. I". Celestial Mechanics. 31 (2): 95–107. Bibcode:1983CeMec..31...95D. doi:10.1007/BF01686811. S2CID 189832421.
Conway, Bruce A. (1986). "An improved algorithm due to Laguerre for the solution of Kepler's equation". 24th Aerospace Sciences Meeting. doi:10.2514/6.1986-84.
Mikkola, Seppo (1987). "A cubic approximation for Kepler's equation" (PDF). Celestial Mechanics. 40 (3): 329–334. Bibcode:1987CeMec..40..329M. doi:10.1007/BF01235850. S2CID 122237945.
Nijenhuis, Albert (1991). "Solving Kepler's equation with high efficiency and accuracy". Celestial Mechanics and Dynamical Astronomy. 51 (4): 319–330. Bibcode:1991CeMDA..51..319N. doi:10.1007/BF00052925. S2CID 121845017.
Markley, F. Landis (1995). "Kepler equation solver". Celestial Mechanics and Dynamical Astronomy. 63 (1): 101–111. Bibcode:1995CeMDA..63..101M. doi:10.1007/BF00691917. S2CID 120405765.
Fukushima, Toshio (1996). "A method solving kepler's equation without transcendental function evaluations". Celestial Mechanics and Dynamical Astronomy. 66 (3): 309–319. Bibcode:1996CeMDA..66..309F. doi:10.1007/BF00049384. S2CID 120352687.
Charles, Edgar D.; Tatum, Jeremy B. (1997). "The convergence of Newton-Raphson iteration with Kepler's equation". Celestial Mechanics and Dynamical Astronomy. 69 (4): 357–372. Bibcode:1997CeMDA..69..357C. doi:10.1023/A:1008200607490. S2CID 118637706.
Stumpf, Laura (1999). "Chaotic behaviour in the Newton iterative function associated with Kepler's equation". Celestial Mechanics and Dynamical Astronomy. 74 (2): 95–109. Bibcode:1999CeMDA..74...95S. doi:10.1023/A:1008339416143. S2CID 122491746.
Palacios, Manuel (2002). "Kepler equation and accelerated Newton method". Journal of Computational and Applied Mathematics. 138 (2): 335–346. Bibcode:2002JCoAM.138..335P. doi:10.1016/S0377-0427(01)00369-7.
Boyd, John P. (2007). "Rootfinding for a transcendental equation without a first guess: Polynomialization of Kepler's equation through Chebyshev polynomial equation of the sine". Applied Numerical Mathematics. 57 (1): 12–18. doi:10.1016/j.apnum.2005.11.010.
Pál, András (2009). "An analytical solution for Kepler's problem". Monthly Notices of the Royal Astronomical Society. 396 (3): 1737–1742. arXiv:0904.0324. Bibcode:2009MNRAS.396.1737P. doi:10.1111/j.1365-2966.2009.14853.x.
Esmaelzadeh, Reza; Ghadiri, Hossein (2014). "Appropriate starter for solving the Kepler's equation". International Journal of Computer Applications. 89 (7): 31–38. Bibcode:2014IJCA...89g..31E. doi:10.5120/15517-4394.
Zechmeister, Mathias (2018). "CORDIC-like method for solving Kepler's equation". Astronomy and Astrophysics. 619: A128. arXiv:1808.07062. Bibcode:2018A&A...619A.128Z. doi:10.1051/0004-6361/201833162.
Kepler's Equation at Wolfram Mathworld
The classical rocket equation, or ideal rocket equation is a mathematical equation that describes the motion of vehicles that follow the basic principle of a rocket: a device that can apply acceleration to itself using thrust by expelling part of its mass with high velocity and can thereby move due to the conservation of momentum.
It is credited to Konstantin Tsiolkovsky, who independently derived it and published it in 1903, although it had been independently derived and published by William Moore in 1810, and later published in a separate book in 1813. Robert Goddard also developed it independently in 1912, and Hermann Oberth derived it independently about 1920.
The maximum change of velocity of the vehicle, {\displaystyle \Delta v} (with no external forces acting), is:
{\displaystyle \Delta v=v_{\text{e}}\ln {\frac {m_{0}}{m_{f}}}=I_{\text{sp}}g_{0}\ln {\frac {m_{0}}{m_{f}}},}
where:
{\displaystyle v_{\text{e}}} is the effective exhaust velocity;
{\displaystyle I_{\text{sp}}} is the specific impulse in dimension of time;
{\displaystyle g_{0}} is standard gravity;
{\displaystyle \ln } is the natural logarithm function;
{\displaystyle m_{0}} is the initial total mass, including propellant, a.k.a. wet mass;
{\displaystyle m_{f}} is the final total mass without propellant, a.k.a. dry mass.
Given the effective exhaust velocity determined by the rocket motor's design, the desired delta-v (e.g., orbital speed or escape velocity), and a given dry mass {\displaystyle m_{f}}, the equation can be solved for the required wet mass {\displaystyle m_{0}}:
{\displaystyle m_{0}=m_{f}e^{\Delta v/v_{\text{e}}}.}
The required propellant mass is then
{\displaystyle m_{0}-m_{f}=m_{f}(e^{\Delta v/v_{\text{e}}}-1)}
The necessary wet mass grows exponentially with the desired delta-v.
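A short sketch of this calculation, with the exhaust velocity, delta-v, and dry mass all assumed example values:

# Solving the rocket equation for the wet and propellant masses needed for a
# given delta-v, exhaust velocity, and dry mass (all values assumed).
import math

v_e = 4400.0       # assumed effective exhaust velocity, m/s
delta_v = 9300.0   # assumed delta-v, m/s
m_f = 10000.0      # assumed dry mass, kg

m_0 = m_f * math.exp(delta_v / v_e)   # required wet mass
print(m_0, m_0 - m_f)                 # wet mass and propellant mass
print(v_e * math.log(m_0 / m_f))      # recovers delta_v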
== History ==
The equation is named after Russian scientist Konstantin Tsiolkovsky who independently derived it and published it in his 1903 work.
The equation had been derived earlier by the British mathematician William Moore in 1810, and later published in a separate book in 1813.
American Robert Goddard independently developed the equation in 1912 when he began his research to improve rocket engines for possible space flight. German engineer Hermann Oberth independently derived the equation about 1920 as he studied the feasibility of space travel.
While the derivation of the rocket equation is a straightforward calculus exercise, Tsiolkovsky is honored as being the first to apply it to the question of whether rockets could achieve speeds necessary for space travel.
== Experiment of the boat ==
In order to understand the principle of rocket propulsion, Konstantin Tsiolkovsky proposed the famous thought experiment of "the boat". A person is in a boat away from the shore without oars. They want to reach this shore. They notice that the boat is loaded with a certain quantity of stones and have the idea of quickly and repeatedly throwing the stones in succession in the opposite direction. Effectively, the momentum carried by the stones thrown in one direction corresponds to an equal and opposite momentum gained by the boat in the other direction (ignoring friction / drag).
== Derivation ==
=== Most popular derivation ===
Consider the following system:
In the following derivation, "the rocket" is taken to mean "the rocket and all of its unexpended propellant".
Newton's second law of motion relates external forces ({\displaystyle {\vec {F}}_{i}}) to the change in linear momentum of the whole system (including rocket and exhaust) as follows:
{\displaystyle \sum _{i}{\vec {F}}_{i}=\lim _{\Delta t\to 0}{\frac {{\vec {P}}_{\Delta t}-{\vec {P}}_{0}}{\Delta t}}}
where {\displaystyle {\vec {P}}_{0}} is the momentum of the rocket at time {\displaystyle t=0}:
{\displaystyle {\vec {P}}_{0}=m{\vec {V}}}
and {\displaystyle {\vec {P}}_{\Delta t}} is the momentum of the rocket and exhausted mass at time {\displaystyle t=\Delta t}:
{\displaystyle {\vec {P}}_{\Delta t}=\left(m-\Delta m\right)\left({\vec {V}}+\Delta {\vec {V}}\right)+\Delta m{\vec {V}}_{\text{e}}}
and where, with respect to the observer:
{\displaystyle {\vec {V}}} is the velocity of the rocket at time {\displaystyle t=0}
{\displaystyle {\vec {V}}+\Delta {\vec {V}}} is the velocity of the rocket at time {\displaystyle t=\Delta t}
{\displaystyle {\vec {V}}_{\text{e}}} is the velocity of the mass added to the exhaust (and lost by the rocket) during time {\displaystyle \Delta t}
{\displaystyle m} is the mass of the rocket at time {\displaystyle t=0}
{\displaystyle \left(m-\Delta m\right)} is the mass of the rocket at time {\displaystyle t=\Delta t}
The velocity of the exhaust {\displaystyle {\vec {V}}_{\text{e}}} in the observer frame is related to the velocity of the exhaust in the rocket frame {\displaystyle v_{\text{e}}} by:
{\displaystyle {\vec {v}}_{\text{e}}={\vec {V}}_{\text{e}}-{\vec {V}}}
thus,
{\displaystyle {\vec {V}}_{\text{e}}={\vec {V}}+{\vec {v}}_{\text{e}}}
Solving this yields:
{\displaystyle {\vec {P}}_{\Delta t}-{\vec {P}}_{0}=m\Delta {\vec {V}}+{\vec {v}}_{\text{e}}\Delta m-\Delta m\Delta {\vec {V}}}
If {\displaystyle {\vec {V}}} and {\displaystyle {\vec {v}}_{\text{e}}} are opposite, {\displaystyle {\vec {F}}_{i}} have the same direction as {\displaystyle {\vec {V}}}, {\displaystyle \Delta m\Delta {\vec {V}}} is negligible (since {\displaystyle dm\,d{\vec {v}}\to 0}), and using {\displaystyle dm=-\Delta m} (since ejecting a positive {\displaystyle \Delta m} results in a decrease in rocket mass in time),
{\displaystyle \sum _{i}F_{i}=m{\frac {dV}{dt}}+v_{\text{e}}{\frac {dm}{dt}}}
If there are no external forces then {\textstyle \sum _{i}F_{i}=0} (conservation of linear momentum) and
{\displaystyle -m{\frac {dV}{dt}}=v_{\text{e}}{\frac {dm}{dt}}}
Assuming that {\displaystyle v_{\text{e}}} is constant (known as Tsiolkovsky's hypothesis), so that it is not subject to integration, the above equation may be integrated as follows:
{\displaystyle -\int _{V}^{V+\Delta V}\,dV={v_{e}}\int _{m_{0}}^{m_{f}}{\frac {dm}{m}}}
This then yields
{\displaystyle \Delta V=v_{\text{e}}\ln {\frac {m_{0}}{m_{f}}}}
or equivalently
{\displaystyle m_{f}=m_{0}e^{-\Delta V\ /v_{\text{e}}}}
or
{\displaystyle m_{0}=m_{f}e^{\Delta V/v_{\text{e}}}}
or
{\displaystyle m_{0}-m_{f}=m_{f}\left(e^{\Delta V/v_{\text{e}}}-1\right)}
where {\displaystyle m_{0}} is the initial total mass including propellant, {\displaystyle m_{f}} the final mass, and {\displaystyle v_{\text{e}}} the velocity of the rocket exhaust with respect to the rocket (the specific impulse, or, if measured in time, that multiplied by gravity-on-Earth acceleration). If {\displaystyle v_{\text{e}}} is not constant, the rocket equations are generally not as simple as the above forms. Much research on rocket dynamics has been based on Tsiolkovsky's hypothesis of constant {\displaystyle v_{\text{e}}}.
The value {\displaystyle m_{0}-m_{f}} is the total working mass of propellant expended.
{\displaystyle \Delta V} (delta-v) is the integration over time of the magnitude of the acceleration produced by using the rocket engine (what would be the actual acceleration if external forces were absent). In free space, for the case of acceleration in the direction of the velocity, this is the increase of the speed. In the case of an acceleration in opposite direction (deceleration) it is the decrease of the speed. Of course gravity and drag also accelerate the vehicle, and they can add or subtract to the change in velocity experienced by the vehicle. Hence delta-v may not always be the actual change in speed or velocity of the vehicle.
=== Other derivations ===
==== Impulse-based ====
The equation can also be derived from the basic integral of acceleration in the form of force (thrust) over mass.
By representing the delta-v equation as the following:
{\displaystyle \Delta v=\int _{t_{0}}^{t_{f}}{\frac {|T|}{{m_{0}}-{t}\Delta {m}}}~dt}
where T is thrust, {\displaystyle m_{0}} is the initial (wet) mass and {\displaystyle \Delta m} is the initial mass minus the final (dry) mass,
and realising that the integral of a resultant force over time is total impulse, assuming thrust is the only force involved,
{\displaystyle \int _{t_{0}}^{t_{f}}F~dt=J}
The integral is found to be:
{\displaystyle J~{\frac {\ln({m_{0}})-\ln({m_{f}})}{\Delta m}}}
Realising that impulse over the change in mass is equivalent to force over propellant mass flow rate (p), which is itself equivalent to exhaust velocity,
{\displaystyle {\frac {J}{\Delta m}}={\frac {F}{p}}=V_{\text{exh}}}
the integral can be equated to
{\displaystyle \Delta v=V_{\text{exh}}~\ln \left({\frac {m_{0}}{m_{f}}}\right)}
==== Acceleration-based ====
Imagine a rocket at rest in space with no forces exerted on it (Newton's first law of motion). From the moment its engine is started (clock set to 0) the rocket expels gas mass at a constant mass flow rate R (kg/s) and at exhaust velocity relative to the rocket ve (m/s). This creates a constant force F propelling the rocket that is equal to R × ve. The rocket is subject to a constant force, but its total mass is decreasing steadily because it is expelling gas. According to Newton's second law of motion, its acceleration at any time t is its propelling force F divided by its current mass m:
{\displaystyle ~a={\frac {dv}{dt}}=-{\frac {F}{m(t)}}=-{\frac {Rv_{\text{e}}}{m(t)}}}
Now, the mass of fuel the rocket initially has on board is equal to m0 – mf. For the constant mass flow rate R it will therefore take a time T = (m0 – mf)/R to burn all this fuel. Integrating both sides of the equation with respect to time from 0 to T (and noting that R = dm/dt allows a substitution on the right) obtains:
{\displaystyle ~\Delta v=v_{f}-v_{0}=-v_{\text{e}}\left[\ln m_{f}-\ln m_{0}\right]=~v_{\text{e}}\ln \left({\frac {m_{0}}{m_{f}}}\right).}
==== Limit of finite mass "pellet" expulsion ====
The rocket equation can also be derived as the limiting case of the speed change for a rocket that expels its fuel in the form of {\displaystyle N} pellets consecutively, as {\displaystyle N\to \infty }, with an effective exhaust speed {\displaystyle v_{\text{eff}}} such that the mechanical energy gained per unit fuel mass is given by {\textstyle {\tfrac {1}{2}}v_{\text{eff}}^{2}}.
In the rocket's center-of-mass frame, if a pellet of mass {\displaystyle m_{p}} is ejected at speed {\displaystyle u} and the remaining mass of the rocket is {\displaystyle m}, the amount of energy converted to increase the rocket's and pellet's kinetic energy is
{\displaystyle {\tfrac {1}{2}}m_{p}v_{\text{eff}}^{2}={\tfrac {1}{2}}m_{p}u^{2}+{\tfrac {1}{2}}m(\Delta v)^{2}.}
Using momentum conservation in the rocket's frame just prior to ejection, {\textstyle u=\Delta v{\tfrac {m}{m_{p}}}}, from which we find
{\displaystyle \Delta v=v_{\text{eff}}{\frac {m_{p}}{\sqrt {m(m+m_{p})}}}.}
Let {\displaystyle \phi } be the initial fuel mass fraction on board and {\displaystyle m_{0}} the initial fueled-up mass of the rocket. Divide the total mass of fuel {\displaystyle \phi m_{0}} into {\displaystyle N} discrete pellets each of mass {\displaystyle m_{p}=\phi m_{0}/N}. The remaining mass of the rocket after ejecting {\displaystyle j} pellets is then {\displaystyle m=m_{0}(1-j\phi /N)}. The overall speed change after ejecting {\displaystyle j} pellets is the sum
{\displaystyle \Delta v=v_{\text{eff}}\sum _{j=1}^{j=N}{\frac {\phi /N}{\sqrt {(1-j\phi /N)(1-j\phi /N+\phi /N)}}}}
Notice that for large {\displaystyle N} the last term in the denominator satisfies {\displaystyle \phi /N\ll 1} and can be neglected to give
{\displaystyle \Delta v\approx v_{\text{eff}}\sum _{j=1}^{j=N}{\frac {\phi /N}{1-j\phi /N}}=v_{\text{eff}}\sum _{j=1}^{j=N}{\frac {\Delta x}{1-x_{j}}}}
where {\textstyle \Delta x={\frac {\phi }{N}}} and {\textstyle x_{j}={\frac {j\phi }{N}}}.
As {\displaystyle N\rightarrow \infty } this Riemann sum becomes the definite integral
{\displaystyle \lim _{N\to \infty }\Delta v=v_{\text{eff}}\int _{0}^{\phi }{\frac {dx}{1-x}}=v_{\text{eff}}\ln {\frac {1}{1-\phi }}=v_{\text{eff}}\ln {\frac {m_{0}}{m_{f}}},}
since the final remaining mass of the rocket is {\displaystyle m_{f}=m_{0}(1-\phi )}.
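This limit is easy to verify numerically. The sketch below (not part of the article; the chosen values of the fuel fraction and pellet counts are arbitrary assumptions) evaluates the finite-N pellet sum and compares it with {\displaystyle v_{\text{eff}}\ln(1/(1-\phi ))}:

```python
import math

def pellet_delta_v(v_eff: float, phi: float, n: int) -> float:
    """Speed change after expelling the fuel as n equal pellets (finite-N sum)."""
    total = 0.0
    for j in range(1, n + 1):
        remaining = 1.0 - j * phi / n      # m / m0 after the j-th pellet
        previous = remaining + phi / n     # m / m0 before the j-th pellet
        total += (phi / n) / math.sqrt(remaining * previous)
    return v_eff * total

if __name__ == "__main__":
    v_eff, phi = 1.0, 0.8                  # illustrative assumptions
    exact = v_eff * math.log(1.0 / (1.0 - phi))
    for n in (1, 10, 100, 10_000):
        print(n, round(pellet_delta_v(v_eff, phi, n), 6), "->", round(exact, 6))
```

As the number of pellets grows, the finite sum approaches the logarithmic rocket-equation value from below.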
=== Special relativity ===
If special relativity is taken into account, the following equation can be derived for a relativistic rocket, with {\displaystyle \Delta v} again standing for the rocket's final velocity (after expelling all its reaction mass and being reduced to a rest mass of {\displaystyle m_{1}}) in the inertial frame of reference where the rocket started at rest (with the rest mass including fuel being {\displaystyle m_{0}} initially), and {\displaystyle c} standing for the speed of light in vacuum:
{\displaystyle {\frac {m_{0}}{m_{1}}}=\left[{\frac {1+{\frac {\Delta v}{c}}}{1-{\frac {\Delta v}{c}}}}\right]^{\frac {c}{2v_{\text{e}}}}}
Writing {\textstyle {\frac {m_{0}}{m_{1}}}} as {\displaystyle R} allows this equation to be rearranged as
{\displaystyle {\frac {\Delta v}{c}}={\frac {R^{\frac {2v_{\text{e}}}{c}}-1}{R^{\frac {2v_{\text{e}}}{c}}+1}}}
Then, using the identity {\textstyle R^{\frac {2v_{\text{e}}}{c}}=\exp \left[{\frac {2v_{\text{e}}}{c}}\ln R\right]} (here "exp" denotes the exponential function; see also Natural logarithm as well as the "power" identity at logarithmic identities) and the identity {\textstyle \tanh x={\frac {e^{2x}-1}{e^{2x}+1}}} (see Hyperbolic function), this is equivalent to
{\displaystyle \Delta v=c\tanh \left({\frac {v_{\text{e}}}{c}}\ln {\frac {m_{0}}{m_{1}}}\right)}
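For illustration only, a small Python sketch of this relativistic form (the function name and the sample mass ratio are assumptions, not from the article) is:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def relativistic_delta_v(m0: float, m1: float, ve: float) -> float:
    """Final speed from delta_v = c * tanh((ve/c) * ln(m0/m1))."""
    return C * math.tanh((ve / C) * math.log(m0 / m1))

if __name__ == "__main__":
    # Illustrative: photon-like exhaust (ve = c) and a mass ratio of 10.
    print(relativistic_delta_v(10.0, 1.0, C) / C)  # ~0.98, as a fraction of c
```

The hyperbolic tangent guarantees that the result stays below the speed of light for any finite mass ratio.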
== Terms of the equation ==
=== Delta-v ===
Delta-v (literally "change in velocity"), symbolised as Δv and pronounced delta-vee, as used in spacecraft flight dynamics, is a measure of the impulse that is needed to perform a maneuver such as launching from, or landing on a planet or moon, or an in-space orbital maneuver. It is a scalar that has the units of speed. As used in this context, it is not the same as the physical change in velocity of the vehicle.
Delta-v is produced by reaction engines, such as rocket engines, is proportional to the thrust per unit mass and burn time, and is used to determine the mass of propellant required for the given manoeuvre through the rocket equation.
For multiple manoeuvres, delta-v sums linearly.
For interplanetary missions delta-v is often plotted on a porkchop plot which displays the required mission delta-v as a function of launch date.
=== Mass fraction ===
In aerospace engineering, the propellant mass fraction is the portion of a vehicle's mass which does not reach the destination, usually used as a measure of the vehicle's performance. In other words, the propellant mass fraction is the ratio between the propellant mass and the initial mass of the vehicle. In a spacecraft, the destination is usually an orbit, while for aircraft it is their landing location. A higher mass fraction represents less weight in a design. Another related measure is the payload fraction, which is the fraction of initial weight that is payload.
=== Effective exhaust velocity ===
The effective exhaust velocity is often specified as a specific impulse and they are related to each other by:
{\displaystyle v_{\text{e}}=g_{0}I_{\text{sp}},}
where
{\displaystyle I_{\text{sp}}} is the specific impulse in seconds,
{\displaystyle v_{\text{e}}} is the specific impulse measured in m/s, which is the same as the effective exhaust velocity measured in m/s (or ft/s if g is in ft/s2),
{\displaystyle g_{0}} is the standard gravity, 9.80665 m/s2 (in Imperial units 32.174 ft/s2).
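A minimal sketch of this conversion (illustrative only; the function name and the sample specific impulse are assumptions) is:

```python
G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity_from_isp(isp_seconds: float) -> float:
    """Effective exhaust velocity (m/s) from specific impulse given in seconds."""
    return G0 * isp_seconds

if __name__ == "__main__":
    # Illustrative value: an Isp of 450 s gives roughly 4413 m/s.
    print(exhaust_velocity_from_isp(450.0))
```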
== Applicability ==
The rocket equation captures the essentials of rocket flight physics in a single short equation. It also holds true for rocket-like reaction vehicles whenever the effective exhaust velocity is constant, and can be summed or integrated when the effective exhaust velocity varies. The rocket equation only accounts for the reaction force from the rocket engine; it does not include other forces that may act on a rocket, such as aerodynamic or gravitational forces. As such, when using it to calculate the propellant requirement for launch from (or powered descent to) a planet with an atmosphere, the effects of these forces must be included in the delta-V requirement (see Examples below). In what has been called "the tyranny of the rocket equation", there is a limit to the amount of payload that the rocket can carry, as higher amounts of propellant increment the overall weight, and thus also increase the fuel consumption. The equation does not apply to non-rocket systems such as aerobraking, gun launches, space elevators, launch loops, tether propulsion or light sails.
The rocket equation can be applied to orbital maneuvers in order to determine how much propellant is needed to change to a particular new orbit, or to find the new orbit as the result of a particular propellant burn. When applying to orbital maneuvers, one assumes an impulsive maneuver, in which the propellant is discharged and delta-v applied instantaneously. This assumption is relatively accurate for short-duration burns such as for mid-course corrections and orbital insertion maneuvers. As the burn duration increases, the result is less accurate due to the effect of gravity on the vehicle over the duration of the maneuver. For low-thrust, long duration propulsion, such as electric propulsion, more complicated analysis based on the propagation of the spacecraft's state vector and the integration of thrust are used to predict orbital motion.
== Examples ==
Assume an exhaust velocity of 4,500 meters per second (15,000 ft/s) and a {\displaystyle \Delta v} of 9,700 meters per second (32,000 ft/s) (Earth to LEO, including {\displaystyle \Delta v} to overcome gravity and aerodynamic drag).
Single-stage-to-orbit rocket: {\displaystyle 1-e^{-9.7/4.5}} = 0.884, therefore 88.4% of the initial total mass has to be propellant. The remaining 11.6% is for the engines, the tank, and the payload.
Two-stage-to-orbit: suppose that the first stage should provide a {\displaystyle \Delta v} of 5,000 meters per second (16,000 ft/s); {\displaystyle 1-e^{-5.0/4.5}} = 0.671, therefore 67.1% of the initial total mass has to be propellant to the first stage. The remaining mass is 32.9%. After disposing of the first stage, a mass remains equal to this 32.9%, minus the mass of the tank and engines of the first stage. Assume that this is 8% of the initial total mass, then 24.9% remains. The second stage should provide a {\displaystyle \Delta v} of 4,700 meters per second (15,000 ft/s); {\displaystyle 1-e^{-4.7/4.5}} = 0.648, therefore 64.8% of the remaining mass has to be propellant, which is 16.2% of the original total mass, and 8.7% remains for the tank and engines of the second stage, the payload, and in the case of a space shuttle, also the orbiter. Thus together 16.7% of the original launch mass is available for all engines, the tanks, and payload.
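The percentages above follow directly from the propellant fraction {\displaystyle 1-e^{-\Delta v/v_{\text{e}}}}. A minimal Python sketch reproducing the arithmetic (the staging masses are the same assumed values as in the text) is:

```python
import math

def propellant_fraction(dv: float, ve: float) -> float:
    """Fraction of the current total mass that must be propellant for a given delta-v."""
    return 1.0 - math.exp(-dv / ve)

if __name__ == "__main__":
    ve = 4.5  # km/s, as in the example
    print(f"SSTO propellant fraction: {propellant_fraction(9.7, ve):.3f}")      # ~0.884
    first = propellant_fraction(5.0, ve)                                        # ~0.671
    after_staging = 1.0 - first - 0.08   # assumed 8% first-stage dry mass
    second = propellant_fraction(4.7, ve)                                       # ~0.648
    print(f"second-stage propellant: {after_staging * second:.3f} of liftoff mass")
    print(f"left for structure and payload: {after_staging * (1.0 - second):.3f}")
```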
== Stages ==
In the case of sequentially thrusting rocket stages, the equation applies for each stage, where for each stage the initial mass in the equation is the total mass of the rocket after discarding the previous stage, and the final mass in the equation is the total mass of the rocket just before discarding the stage concerned. For each stage the specific impulse may be different.
For example, if 80% of the mass of a rocket is the fuel of the first stage, and 10% is the dry mass of the first stage, and 10% is the remaining rocket, then
{\displaystyle {\begin{aligned}\Delta v\ &=v_{\text{e}}\ln {100 \over 100-80}\\&=v_{\text{e}}\ln 5\\&=1.61v_{\text{e}}.\\\end{aligned}}}
With three similar, subsequently smaller stages with the same {\displaystyle v_{\text{e}}} for each stage, we have
{\displaystyle \Delta v\ =3v_{\text{e}}\ln 5\ =4.83v_{\text{e}}}
and the payload is 10% × 10% × 10% = 0.1% of the initial mass.
A comparable SSTO rocket, also with a 0.1% payload, could have a mass of 11.1% for fuel tanks and engines, and 88.8% for fuel. This would give
{\displaystyle \Delta v\ =v_{\text{e}}\ln(100/11.2)\ =2.19v_{\text{e}}.}
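The comparison can be checked with a short sketch (illustrative only; the mass breakdowns are the same assumed values as in the text, and the helper name is hypothetical):

```python
import math

def stage_delta_v(ve: float, full_mass: float, empty_mass: float) -> float:
    """Delta-v contributed by one stage burning from full_mass down to empty_mass."""
    return ve * math.log(full_mass / empty_mass)

if __name__ == "__main__":
    ve = 1.0  # work in units of the exhaust velocity, as in the text
    three_stage = 3 * stage_delta_v(ve, 100.0, 20.0)   # three stages, 80% propellant each
    ssto = stage_delta_v(ve, 100.0, 11.2)               # 0.1% payload, 11.1% structure
    print(f"three stages: {three_stage:.2f} ve, SSTO: {ssto:.2f} ve")
```

This reproduces the 4.83 ve versus 2.19 ve comparison quoted above for the same 0.1% payload.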
If the motor of a new stage is ignited before the previous stage has been discarded and the simultaneously working motors have a different specific impulse (as is often the case with solid rocket boosters and a liquid-fuel stage), the situation is more complicated.
== See also ==
Delta-v budget
Jeep problem
Mass ratio
Oberth effect - applying delta-v in a gravity well increases the final velocity
Relativistic rocket
Reversibility of orbits
Robert H. Goddard - added terms for gravity and drag in vertical flight
Spacecraft propulsion
Stigler’s law of eponymy
== References ==
== External links ==
How to derive the rocket equation
Relativity Calculator – Learn Tsiolkovsky's rocket equations
Tsiolkovsky's rocket equations plot and calculator in WolframAlpha | Wikipedia/Tsiolkovsky_rocket_equation |
In astrodynamics, the patched conic approximation or patched two-body approximation is a method to simplify trajectory calculations for spacecraft in a multiple-body environment.
== Method ==
The simplification is achieved by dividing space into various parts by assigning each of the n bodies (e.g. the Sun, planets, moons) its own sphere of influence. When the spacecraft is within the sphere of influence of a smaller body, only the gravitational force between the spacecraft and that smaller body is considered, otherwise the gravitational force between the spacecraft and the larger body is used. This reduces a complicated n-body problem to multiple two-body problems, for which the solutions are the well-known conic sections of the Kepler orbits.
Although this method gives a good approximation of trajectories for interplanetary spacecraft missions, there are missions for which this approximation does not provide sufficiently accurate results. Notably, it does not model Lagrangian points.
== Example ==
On an Earth-to-Mars transfer, a hyperbolic trajectory is required to escape from Earth's gravity well, then an elliptic or hyperbolic trajectory in the Sun's sphere of influence is required to transfer from Earth's sphere of influence to that of Mars, etc. By patching these conic sections together—matching the position and velocity vectors between segments—the appropriate mission trajectory can be found.
== See also ==
Two-body problem
N-body problem
Sphere of influence
Kerbal Space Program, a popular spaceflight simulator based on the patched conic approximation
== References ==
== Bibliography ==
Carlson, K. M. (1970-11-30). An Analytical Solution to Patched-Conic Trajectories Satisfying Initial and Final Boundary Conditions (pdf). Technical Memorandum (Technical report). Bellcomm Inc. TM-70-2011-1. | Wikipedia/Patched_conic_approximation |
A numerical model of the Solar System is a set of mathematical equations, which, when solved, give the approximate positions of the planets as a function of time. Attempts to create such a model established the more general field of celestial mechanics. The results of this simulation can be compared with past measurements to check for accuracy and then be used to predict future positions. Its main use therefore is in preparation of almanacs.
== Older efforts ==
The simulations can be done in either Cartesian or in spherical coordinates. The former are easier, but extremely calculation intensive, and only practical on an electronic computer. As such only the latter was used in former times. Strictly speaking, the latter was not much less calculation intensive, but it was possible to start with some simple approximations and then to add perturbations, as much as needed to reach the wanted accuracy.
In essence this mathematical simulation of the Solar System is a form of the N-body problem. The symbol N represents the number of bodies, which can grow quite large if one includes the Sun, 8 planets, dozens of moons, and countless planetoids, comets and so forth. However the influence of the Sun on any other body is so large, and the influence of all the other bodies on each other so small, that the problem can be reduced to the analytically solvable 2-body problem. The result for each planet is an orbit, a simple description of its position as function of time. Once this is solved the influences moons and planets have on each other are added as small corrections. These are small compared to a full planetary orbit. Some corrections might be still several degrees large, while measurements can be made to an accuracy of better than 1″.
Although this method is no longer used for simulations, it is still useful to find an approximate ephemeris as one can take the relatively simple main solution, perhaps add a few of the largest perturbations, and arrive without too much effort at the wanted planetary position. The disadvantage is that perturbation theory is very advanced mathematics.
== Modern method ==
The modern method consists of numerical integration in 3-dimensional space. One starts with a high accuracy value for the position (x, y, z) and the velocity (vx, vy, vz) for each of the bodies involved. When also the mass of each body is known, the acceleration (ax, ay, az) can be calculated from Newton's law of gravitation. Each body attracts each other body, the total acceleration being the sum of all these attractions. Next one chooses a small time-step Δt and applies Newton's second law of motion. The acceleration multiplied with Δt gives a correction to the velocity. The velocity multiplied with Δt gives a correction to the position. This procedure is repeated for all other bodies.
The result is a new value for position and velocity for all bodies. Then, using these new values, one starts the whole calculation over for the next time-step Δt. Repeating this procedure often enough, one ends up with a description of the positions of all bodies over time.
The advantage of this method is that for a computer it is a very easy job to do, and it yields highly accurate results for all bodies at the same time, doing away with the complex and difficult procedures for determining perturbations. The disadvantages are that one must start with highly accurate figures in the first place, or the results will drift away from reality over time; that one gets x, y, z positions which often must first be transformed into more practical ecliptical or equatorial coordinates before they can be used; and that it is an all-or-nothing approach: if one wants to know the position of one planet at one particular time, then all other planets and all intermediate time-steps must be calculated too.
== Integration ==
In the previous section it was assumed that acceleration remains constant over a small timestep Δt, so that the calculation reduces to simply the addition of V × Δt to R and so forth. In reality this is not the case, except when one takes Δt so small that the number of steps to be taken would be prohibitively high. This is because, while the position is changed by the acceleration at every moment, the value of the acceleration is itself determined by the instantaneous position. Evidently a full integration is needed.
Several methods are available. First notice the needed equations:
{\displaystyle {\vec {a}}_{j}=\sum _{i\neq j}^{n}G{\frac {M_{i}}{|{\vec {r}}_{i}-{\vec {r}}_{j}|^{3}}}({\vec {r}}_{i}-{\vec {r}}_{j})}
This equation describes the acceleration that all bodies i, running from 1 to N, exert on a particular body j. It is a vector equation, so it is to be split into three equations, one for each of the X, Y, Z components, yielding:
{\displaystyle (a_{j})_{x}=\sum _{i\neq j}^{n}G{\frac {M_{i}}{((x_{i}-x_{j})^{2}+(y_{i}-y_{j})^{2}+(z_{i}-z_{j})^{2})^{3/2}}}(x_{i}-x_{j})}
with the additional relationships
{\displaystyle a_{x}={\frac {dv_{x}}{dt}}}, {\displaystyle v_{x}={\frac {dx}{dt}}}
likewise for Y and Z.
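A direct translation of these component equations into code might look like the following Python sketch (this is an illustration, not from the article; it uses plain lists of 3-vectors, and the gravitational constant G is passed in by the caller):

```python
def accelerations(positions, masses, G):
    """Pairwise Newtonian accelerations.

    positions: list of (x, y, z) tuples; masses: list of masses in the same order.
    Returns a list of (ax, ay, az) tuples, one per body.
    """
    n = len(positions)
    acc = []
    for j in range(n):
        ax = ay = az = 0.0
        xj, yj, zj = positions[j]
        for i in range(n):
            if i == j:
                continue  # a body does not attract itself
            xi, yi, zi = positions[i]
            dx, dy, dz = xi - xj, yi - yj, zi - zj
            r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
            ax += G * masses[i] * dx / r3
            ay += G * masses[i] * dy / r3
            az += G * masses[i] * dz / r3
        acc.append((ax, ay, az))
    return acc
```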
The former equation (gravitation) may look forbidding, but its calculation is no problem. The latter equations (the laws of motion) seem simpler, yet they cannot be evaluated directly. Computers cannot integrate and cannot work with infinitesimal values, so instead of dt we use Δt and bring the resulting variable to the left:
{\displaystyle \Delta v_{x}=a_{x}\Delta t}, and:
{\displaystyle \Delta x=v_{x}\Delta t}
Remember that a is still a function of time. The simplest way to solve these is just the Euler algorithm, which in essence is the linear addition described above. Limiting ourselves to 1 dimension only in some general computer language:
a.old = gravitationfunction(x.old)
x.new = x.old + v.old * dt
v.new = v.old + a.old * dt
Since the acceleration used for the whole duration of the timestep is, in essence, the one at the beginning of the timestep, this simple method is not very accurate. Much better results are achieved by taking a mean acceleration: the average between the value at the beginning and the expected (unperturbed) value at the end:
a.old = gravitationfunction(x.old)
x.expect = x.old + v.old * dt
a.expect = gravitationfunction(x.expect)
v.new = v.old + (a.old + a.expect) * 0.5 * dt
x.new = x.old + (v.new + v.old) * 0.5 * dt
Of course still better results can be expected by taking intermediate values. This is what happens when using the Runge-Kutta methods, of which those of order 4 or 5 are most useful. The most commonly used method is the leapfrog method, due to its good long-term energy conservation.
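For concreteness, the following Python sketch (an illustration under assumed scaled units, not code from the article) advances a single planet around a central mass both with the simple Euler step described above and with a leapfrog (kick-drift-kick) step; the leapfrog version keeps the orbital radius and energy much more stable over many steps:

```python
import math

def accel(x, y, gm=4 * math.pi ** 2):
    """Central-body acceleration; units: AU, years, solar masses (GM = 4*pi^2)."""
    r3 = (x * x + y * y) ** 1.5
    return -gm * x / r3, -gm * y / r3

def euler_step(x, y, vx, vy, dt):
    ax, ay = accel(x, y)
    return x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt

def leapfrog_step(x, y, vx, vy, dt):
    ax, ay = accel(x, y)
    vx, vy = vx + 0.5 * dt * ax, vy + 0.5 * dt * ay   # half kick
    x, y = x + dt * vx, y + dt * vy                   # drift
    ax, ay = accel(x, y)
    return x, y, vx + 0.5 * dt * ax, vy + 0.5 * dt * ay  # second half kick

if __name__ == "__main__":
    # Earth-like circular orbit: r = 1 AU, v = 2*pi AU/year, dt = 1 day.
    state_e = state_l = (1.0, 0.0, 0.0, 2 * math.pi)
    dt = 1.0 / 365.25
    for _ in range(3653):  # about ten years
        state_e = euler_step(*state_e, dt)
        state_l = leapfrog_step(*state_l, dt)
    for name, (x, y, *_rest) in (("euler", state_e), ("leapfrog", state_l)):
        print(name, "radius =", round(math.hypot(x, y), 4))
```

The Euler orbit slowly spirals outward, while the leapfrog orbit stays close to 1 AU, illustrating why symplectic integrators are preferred for long runs.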
A completely different method is the use of Taylor series. In that case we write:
{\displaystyle r=r_{0}+r'_{0}t+r''_{0}{\frac {t^{2}}{2!}}+...}
but rather than developing up to some higher derivative in r only, one can develop in r and v (that is r') by writing
{\displaystyle r=fr_{0}+gr'_{0}}
and then write out the factors f and g in a series.
== Approximations ==
To calculate the accelerations, the gravitational attraction of each body on each other body must be taken into account. As a consequence the amount of calculation in the simulation goes up with the square of the number of bodies: doubling the number of bodies increases the work by a factor of four. To increase the accuracy of the simulation, not only more decimals but also smaller timesteps must be taken, again quickly increasing the amount of work. Evidently tricks must be applied to reduce the amount of work. Some of these tricks are given here.
By far the most important trick is the use of a proper integration method, as already outlined above.
The choice of units is important. Rather than working in SI units, which would make some values extremely small and some extremely large, all units should be scaled such that they are in the neighbourhood of 1. For example, for distances in the Solar System the astronomical unit is most straightforward. If this is not done, one is almost certain to see a simulation abandoned in the middle of a calculation by a floating-point overflow or underflow, and even if it does not come to that, accuracy is likely to be lost due to truncation errors.
If N is large (not so much in Solar System simulations, but more in galaxy simulations) it is customary to create dynamic groups of bodies. All bodies in a particular direction and at a large distance from the reference body, which is being calculated at that moment, are taken together and their gravitational attraction is averaged over the whole group.
The total energy and angular momentum of a closed system are conserved quantities. By calculating these quantities after every time step, the simulation can be programmed to increase the stepsize Δt if they do not change significantly, and to reduce it if they start to do so. Combining the bodies into groups as in the previous trick, and applying larger and thus fewer timesteps to the faraway bodies than to the closer ones, is also possible.
To cope with an excessively rapid change of the acceleration when a particular body is close to the reference body, it is customary to introduce a small softening parameter e so that
{\displaystyle a={\frac {GM}{r^{2}+e}}}
== Complications ==
If the highest possible accuracy is needed, the calculations become much more complex. In the case of comets, nongravitational forces, such as radiation pressure and gas drag, must be taken into account. In the case of Mercury, and other planets for long-term calculations, relativistic effects cannot be ignored. Then the total energy is also no longer a constant (because it is the energy–momentum four-vector that is conserved). The finite speed of light also makes it important to allow for light-time effects, both classical and relativistic. Planets can no longer be considered as particles, but their shape and density must also be considered. For example, the flattening of the Earth causes precession, which causes the axial tilt to change, which affects the long-term movements of all planets.
Long term models, going beyond a few tens of millions of years, are not possible due to the lack of stability of the Solar System.
== See also ==
Ephemeris
VSOP (planets)
== References ==
Boulet, Dan L. (1991). Methods of orbit determination for the microcomputer. Richmond, Virginia: Willmann-Bell, Inc. ISBN 978-0-943396-34-7. OCLC 23287041. | Wikipedia/Numerical_model_of_solar_system |
Lunar theory attempts to account for the motions of the Moon. There are many small variations (or perturbations) in the Moon's motion, and many attempts have been made to account for them. After centuries of being problematic, lunar motion can now be modeled to a very high degree of accuracy (see section Modern developments).
Lunar theory includes:
the background of general theory; including mathematical techniques used to analyze the Moon's motion and to generate formulae and algorithms for predicting its movements; and also
quantitative formulae, algorithms, and geometrical diagrams that may be used to compute the Moon's position for a given time; often by the help of tables based on the algorithms.
Lunar theory has a history of over 2000 years of investigation. Its more modern developments have been used over the last three centuries for fundamental scientific and technological purposes, and are still being used in that way.
== Applications ==
Applications of lunar theory have included the following:
In the eighteenth century, comparison between lunar theory and observation was used to test Newton's law of universal gravitation by the motion of the lunar apogee.
In the eighteenth and nineteenth centuries, navigational tables based on lunar theory, initially in the Nautical Almanac, were much used for the determination of longitude at sea by the method of lunar distances.
In the very early twentieth century, comparison between lunar theory and observation was used in another test of gravitational theory, to test (and rule out) Simon Newcomb's suggestion that a well-known discrepancy in the motion of the perihelion of Mercury might be explained by a fractional adjustment of the power -2 in Newton's inverse square law of gravitation (the discrepancy was later successfully explained by the general theory of relativity).
In the mid-twentieth century, before the development of atomic clocks, lunar theory and observation were used in combination to implement an astronomical time scale (ephemeris time) free of the irregularities of mean solar time.
In the late twentieth and early twenty-first centuries, modern developments of lunar theory are being used in the Jet Propulsion Laboratory Development Ephemeris series of models of the Solar System, in conjunction with high-precision observations, to test the exactness of physical relationships associated with the general theory of relativity, including the strong equivalence principle, relativistic gravitation, geodetic precession, and the constancy of the gravitational constant.
== History ==
The Moon has been observed for millennia. Over these ages, various levels of care and precision have been possible, according to the techniques of observation available at any time. There is a correspondingly long history of lunar theories: it stretches from the times of the Babylonian and Greek astronomers, down to modern lunar laser ranging.
The history can be considered to fall into three parts: from ancient times to Newton; the period of classical (Newtonian) physics; and modern developments.
=== Babylon ===
Of Babylonian astronomy, practically nothing was known to historians of science before the 1880s. Surviving ancient writings of Pliny had made bare mention of three astronomical schools in Mesopotamia – at Babylon, Uruk, and 'Hipparenum' (possibly 'Sippar'). But definite modern knowledge of any details only began when Joseph Epping deciphered cuneiform texts on clay tablets from a Babylonian archive: In these texts he identified an ephemeris of positions of the Moon. Since then, knowledge of the subject, still fragmentary, has had to be built up by painstaking analysis of deciphered texts, mainly in numerical form, on tablets from Babylon and Uruk (no trace has yet been found of anything from the third school mentioned by Pliny).
To the Babylonian astronomer Kidinnu (in Greek or Latin, Kidenas or Cidenas) has been attributed the invention (5th or 4th century BC) of what is now called "System B" for predicting the position of the moon, taking account that the moon continually changes its speed along its path relative to the background of fixed stars. This system involved calculating daily stepwise changes of lunar speed, up or down, with a minimum and a maximum approximately each month. The basis of these systems appears to have been arithmetical rather than geometrical, but they did approximately account for the main lunar inequality now known as the equation of the center.
The Babylonians kept very accurate records for hundreds of years of new moons and eclipses. Some time between the years 500 BC and 400 BC they identified and began to use the 19 year cyclic relation between lunar months and solar years now known as the Metonic cycle.
This helped them build a numerical theory of the main irregularities in the Moon's motion, reaching remarkably good estimates for the (different) periods of the three most prominent features of the Moon's motion:
The synodic month, i.e. the mean period for the phases of the Moon. Now called "System B", it reckons the synodic month as 29 days and (sexagesimally) 3,11;0,50 "time degrees", where each time degree is one degree of the apparent motion of the stars, or 4 minutes of time, and the sexagesimal values after the semicolon are fractions of a time degree. This converts to 29.530594 days = 29d 12h 44m 3.33s, to compare with a modern value (as at 1900 Jan 0) of 29.530589 days, or 29d 12h 44m 2.9s. This same value was used by Hipparchos and Ptolemy, was used throughout the Middle Ages, and still forms the basis of the Hebrew calendar.
The mean lunar velocity relative to the stars they estimated at 13° 10′ 35″ per day, giving a corresponding month of 27.321598 days, to compare with modern values of 13° 10′ 35.0275″ and 27.321582 days.
The anomalistic month, i.e. the mean period for the Moon's approximately monthly accelerations and decelerations in its rate of movement against the stars, had a Babylonian estimate of 27.5545833 days, to compare with a modern value 27.554551 days.
The draconitic month, i.e. the mean period with which the path of the Moon against the stars deviates first north and then south in ecliptic latitude by comparison with the ecliptic path of the Sun, was indicated by a number of different parameters leading to various estimates, e.g. of 27.212204 days, to compare with a modern value of 27.212221, but the Babylonians also had a numerical relationship that 5458 synodic months were equal to 5923 draconitic months, which when compared with their accurate value for the synodic month leads to practically exactly the modern figure for the draconitic month.
The Babylonian estimate for the synodic month was adopted for the greater part of two millennia by Hipparchus, Ptolemy, and medieval writers (and it is still in use as part of the basis for the calculated Hebrew (Jewish) calendar).
=== Greece and Hellenistic Egypt ===
Thereafter, from Hipparchus and Ptolemy in the Bithynian and Ptolemaic epochs down to the time of Newton's work in the seventeenth century, lunar theories were composed mainly with the help of geometrical ideas, inspired more or less directly by long series of positional observations of the moon. Prominent in these geometrical lunar theories were combinations of circular motions – applications of the theory of epicycles.
==== Hipparchus ====
Hipparchus, whose works are mostly lost and known mainly from quotations by other authors, assumed that the Moon moved in a circle inclined at 5° to the ecliptic, rotating in a retrograde direction (i.e. opposite to the direction of annual and monthly apparent movements of the Sun and Moon relative to the fixed stars) once in 18 2⁄3 years. The circle acted as a deferent, carrying an epicycle along which the Moon was assumed to move in a retrograde direction. The center of the epicycle moved at a rate corresponding to the mean change in Moon's longitude, while the period of the Moon around the epicycle was an anomalistic month. This epicycle approximately provided for what was later recognized as the elliptical inequality, the equation of the center, and its size approximated to an equation of the center of about 5° 1'. This figure is much smaller than the modern value: but it is close to the difference between the modern coefficients of the equation of the center (1st term) and that of the evection: the difference is accounted for by the fact that the ancient measurements were taken at times of eclipses, and the effect of the evection (which subtracts under those conditions from the equation of the center) was at that time unknown and overlooked. For further information see also separate article Evection.
==== Ptolemy ====
Ptolemy's work the Almagest had wide and long-lasting acceptance and influence for over a millennium. He gave a geometrical lunar theory that improved on that of Hipparchus by providing for a second inequality of the Moon's motion, using a device that made the apparent apogee oscillate a little – prosneusis of the epicycle. This second inequality or second anomaly accounted rather approximately, not only for the equation of the center, but also for what became known (much later) as the evection. But this theory, applied to its logical conclusion, would make the distance (and apparent diameter) of the Moon appear to vary by a factor of about 2, which is clearly not seen in reality. (The apparent angular diameter of the Moon does vary monthly, but only over a much narrower range of about 0.49°–0.55°.) This defect of the Ptolemaic theory led to proposed replacements by Ibn al-Shatir in the 14th century and by Copernicus in the 16th century.
=== Ibn al-Shatir and Copernicus ===
Significant advances in lunar theory were made by the Arab astronomer, Ibn al-Shatir (1304–1375). Drawing on the observation that the distance to the Moon did not change as drastically as required by Ptolemy's lunar model, he produced a new lunar model that replaced Ptolemy's crank mechanism with a double epicycle model that reduced the computed range of distances of the Moon from the Earth. A similar lunar theory, developed some 150 years later by the Renaissance astronomer Nicolaus Copernicus, had the same advantage concerning the lunar distances.
=== Tycho Brahe, Johannes Kepler, and Jeremiah Horrocks ===
Tycho Brahe and Johannes Kepler refined the Ptolemaic lunar theory, but did not overcome its central defect of giving a poor account of the (mainly monthly) variations in the Moon's distance, apparent diameter and parallax. Their work added to the lunar theory three substantial further discoveries.
The nodes and the inclination of the lunar orbital plane both appear to librate, with a monthly (according to Tycho) or semi-annual period (according to Kepler).
The lunar longitude has a twice-monthly Variation, by which the Moon moves faster than expected at new and full moon, and slower than expected at the quarters.
There is also an annual effect, by which the lunar motion slows down a little in January and speeds up a little in July: the annual equation.
The refinements of Brahe and Kepler were recognized by their immediate successors as improvements, but their seventeenth-century successors tried numerous alternative geometrical configurations for the lunar motions to improve matters further. A notable success was achieved by Jeremiah Horrocks, who proposed a scheme involving an approximate 6 monthly libration in the position of the lunar apogee and also in the size of the elliptical eccentricity. This scheme had the great merit of giving a more realistic description of the changes in distance, diameter and parallax of the Moon.
=== Newton ===
A first gravitational period for lunar theory started with the work of Newton. He was the first to define the problem of the perturbed motion of the Moon in recognisably modern terms. His groundbreaking work is shown for example in the Principia in all versions including the first edition published in 1687.
Newton's biographer, David Brewster, reported that the complexity of Lunar Theory impacted Newton's health: "[H]e was deprived of his appetite and sleep" during his work on the problem in 1692–3, and told the astronomer John Machin that "his head never ached but when he was studying the subject". According to Brewster, Edmund Halley also told John Conduitt that when pressed to complete his analysis Newton "always replied that it made his head ache, and kept him awake so often, that he would think of it no more" [Emphasis in original].
==== Solar perturbation of lunar motion ====
Newton identified how to evaluate the perturbing effect on the relative motion of the Earth and Moon, arising from their gravity towards the Sun, in Book 1, Proposition 66, and in Book 3, Proposition 25. The starting-point for this approach is Corollary VI to the laws of motion. This shows that if the external accelerative forces from some massive body happen to act equally and in parallel on some different other bodies considered, then those bodies would be affected equally, and in that case their motions (relative to each other) would continue as if there were no such external accelerative forces at all. It is only in the case that the external forces (e.g. in Book 1, Prop. 66, and Book 3, Prop. 25, the gravitational attractions towards the Sun) are different in size or in direction in their accelerative effects on the different bodies considered (e.g. on the Earth and Moon), that consequent effects are appreciable on the relative motions of the latter bodies. (Newton referred to accelerative forces or accelerative gravity due to some external massive attractor such as the Sun. The measure he used was the acceleration that the force tends to produce (in modern terms, force per unit mass), rather than what we would now call the force itself.)
Thus Newton concluded that it is only the difference between the Sun's accelerative attraction on the Moon and the Sun's attraction on the Earth that perturbs the motion of the Moon relative to the Earth.
Newton then in effect used vector decomposition of forces, to carry out this analysis. In Book 1, Proposition 66 and in Book 3, Proposition 25, he showed by a geometrical construction, starting from the total gravitational attraction of the Sun on the Earth, and of the Sun on the Moon, the difference that represents the perturbing effect on the motion of the Moon relative to the Earth. In summary, line LS in Newton's diagram as shown below represents the size and direction of the perturbing acceleration acting on the Moon in the Moon's current position P (line LS does not pass through point P, but the text shows that this is not intended to be significant, it is a result of the scale factors and the way the diagram has been built up).
Shown here is Newton's diagram from the first (1687) Latin edition of the Principia (Book 3, Proposition 25, p. 434). Here he introduced his analysis of perturbing accelerations on the Moon in the Sun-Earth-Moon system. Q represents the Sun, S the Earth, and P the Moon.
Parts of this diagram represent distances, other parts gravitational accelerations (attractive forces per unit mass). In a dual significance, SQ represents the Earth-Sun distance, and then it also represents the size and direction of the Earth-Sun gravitational acceleration. Other distances in the diagram are then in proportion to distance SQ. Other attractions are in proportion to attraction SQ.
The Sun's attractions are SQ (on the Earth) and LQ (on the Moon). The size of LQ is drawn so that the ratio of attractions LQ:SQ is the inverse square of the ratio of distances PQ:SQ. (Newton constructs KQ=SQ, giving an easier view of the proportions.) The Earth's attraction on the Moon acts along direction PS. (But line PS signifies only distance and direction so far, nothing has been defined about the scale factor between solar and terrestrial attractions).
After showing solar attractions LQ on the Moon and SQ on the Earth, on the same scale, Newton then makes a vector decomposition of LQ into components LM and MQ. Then he identifies the perturbing acceleration on the Moon as the difference of this from SQ. SQ and MQ are parallel to each other, so SQ can be directly subtracted from MQ, leaving MS. The resulting difference, after subtracting SQ from LQ, is therefore the vector sum of LM and MS: these add up to a perturbing acceleration LS.
Later Newton identified another resolution of the perturbing acceleration LM+MS = LS, into orthogonal components: a transverse component parallel to LE, and a radial component, effectively ES.
Newton's diagrammatic scheme, since his time, has been re-presented in other and perhaps visually clearer ways. Shown here is a vector presentation indicating, for two different positions, P1 and P2, of the Moon in its orbit around the Earth, the respective vectors LS1 and LS2 for the perturbing acceleration due to the Sun. The Moon's position at P1 is fairly close to what it was at P in Newton's diagram; corresponding perturbation LS1 is like Newton's LS in size and direction. At another position P2, the Moon is farther away from the Sun than the Earth is, the Sun's attraction LQ2 on the Moon is weaker than the Sun's attraction SQ=SQ2 on the Earth, and then the resulting perturbation LS2 points obliquely away from the Sun.
Constructions like those in Newton's diagram can be repeated for many different positions of the Moon in its orbit. For each position, the result is a perturbation vector like LS1 or LS2 in the second diagram. Shown here is an often-presented form of the diagram that summarises sizes and directions of the perturbation vectors for many different positions of the Moon in its orbit. Each small arrow is a perturbation vector like LS, applicable to the Moon in the particular position around the orbit from which the arrow begins. The perturbations on the Moon when it is nearly in line along the Earth-Sun axis, i.e. near new or full moon, point outwards, away from the Earth. When the Moon-Earth line is 90° from the Earth-Sun axis they point inwards, towards the Earth, with a size that is only half the maximum size of the axial (outwards) perturbations. (Newton gave a rather good quantitative estimate for the size of the solar perturbing force: at quadrature where it adds to the Earth's attraction he put it at 1⁄178.725 of the mean terrestrial attraction, and twice as much as that at the new and full moons where it opposes and diminishes the Earth's attraction.)
Newton also showed that the same pattern of perturbation applies, not only to the Moon, in its relation to the Earth as disturbed by the Sun, but also to other particles more generally in their relation to the solid Earth as disturbed by the Sun (or by the Moon); for example different portions of the tidal waters at the Earth's surface. The study of the common pattern of these perturbing accelerations grew out of Newton's initial study of the perturbations of the Moon, which he also applied to the forces moving tidal waters. Nowadays this common pattern itself has become often known as a tidal force whether it is being applied to the disturbances of the motions of the Moon, or of the Earth's tidal waters – or of the motions of any other object that suffers perturbations of analogous pattern.
After introducing his diagram 'to find the force of the Sun to perturb the Moon' in Book 3, Proposition 25, Newton developed a first approximation to the solar perturbing force, showing in further detail how its components vary as the Moon follows its monthly path around the Earth. He also took the first steps in investigating how the perturbing force shows its effects by producing irregularities in the lunar motions.
For a selected few of the lunar inequalities, Newton showed in some quantitative detail how they arise from the solar perturbing force.
Much of this lunar work of Newton's was done in the 1680s, and the extent and accuracy of his first steps in the gravitational analysis was limited by several factors, including his own choice to develop and present the work in what was, on the whole, a difficult geometrical way, and by the limited accuracy and uncertainty of many astronomical measurements in his time.
== Classical gravitational period after Newton ==
The main aim of Newton's successors, from Leonhard Euler, Alexis Clairaut and Jean d'Alembert in the mid-eighteenth century, down to Ernest William Brown in the late nineteenth and early twentieth century, was to account completely and much more precisely for the moon's motions on the basis of Newton's laws, i.e. the laws of motion and of universal gravitation by attractions inversely proportional to the squares of the distances between the attracting bodies. They also wished to put the inverse-square law of gravitation to the test, and for a time in the 1740s it was seriously doubted, on account of what was then thought to be a large discrepancy between the Newton-theoretical and the observed rates in the motion of the lunar apogee. However Clairaut showed shortly afterwards (1749–50) that at least the major cause of the discrepancy lay not in the lunar theory based on Newton's laws, but in excessive approximations that he and others had relied on to evaluate it.
Most of the improvements in theory after Newton were made in algebraic form: they involved voluminous and highly laborious amounts of infinitesimal calculus and trigonometry. It also remained necessary, for completing the theories of this period, to refer to observational measurements.
=== Results of the theories ===
The lunar theorists used (and invented) many different mathematical approaches to analyse the gravitational problem. Not surprisingly, their results tended to converge. From the time of the earliest gravitational analysts among Newton's successors, Euler, Clairaut and d'Alembert, it was recognized that nearly all of the main lunar perturbations could be expressed in terms of just a few angular arguments and coefficients. These can be represented by:
the mean motions or positions of the Moon and the Sun, together with three coefficients and three angular positions, which together define the shape and location of their apparent orbits:
the two eccentricities ({\displaystyle e}, about 0.0549, and {\displaystyle e'}, about 0.01675) of the ellipses that approximate to the apparent orbits of the Moon and the Sun;
the angular direction of the perigees ({\displaystyle \Gamma } and {\displaystyle \Gamma '}) (or their opposite points the apogees) of the two orbits; and
the angle of inclination ({\displaystyle i}, mean value about 18523″) between the planes of the two orbits, together with the direction ({\displaystyle \Omega }) of the line of nodes in which those two planes intersect. The ascending node ({\displaystyle \Omega }) is the node passed by the Moon when it is tending northwards relative to the ecliptic.
From these basic parameters, just four basic differential angular arguments are enough to express, in their different combinations, nearly all of the most significant perturbations of the lunar motions. They are given here with their conventional symbols due to Delaunay; they are sometimes known as the Delaunay arguments:
{\displaystyle l} the Moon's mean anomaly (angular distance of the mean longitude of the Moon from the mean longitude of its perigee {\displaystyle \Gamma });
{\displaystyle l'} the Sun's mean anomaly (angular distance of the mean longitude of the Sun from the mean longitude of its perigee {\displaystyle \Gamma '});
{\displaystyle F} the Moon's mean argument of latitude (angular distance of the mean longitude of the Moon from the mean longitude of its ascending (northward-bound) node {\displaystyle \Omega });
{\displaystyle D} the Moon's mean (solar) elongation (angular distance of the mean longitude of the Moon from the mean longitude of the Sun).
This work culminated in Brown's lunar theory (1897–1908) and Tables of the Motion of the Moon (1919). These were used in the American Ephemeris and Nautical Almanac until 1968, and in a modified form until 1984.
=== Largest or named lunar inequalities ===
Several of the largest lunar perturbations in longitude (contributions to the difference in its true ecliptic longitude relative to its mean longitude) have been named. In terms of the differential arguments, they can be expressed in the following way, with coefficients rounded to the nearest second of arc ("):
==== Equation of the center ====
The Moon's equation of the center, or elliptic inequality, was known at least in approximation, to the ancients from the Babylonians and Hipparchus onwards. Knowledge of more recent date is that it corresponds to the approximate application of Kepler's law of equal areas in an elliptical orbit, and represents the speeding-up of the Moon as its distance from the Earth decreases while it moves towards its perigee, and then its slowing down as its distance from the Earth increases while it moves towards its apogee. The effect on the Moon's longitude can be approximated by a series of terms, of which the first three are
{\displaystyle +22639''\sin(l)+769''\sin(2l)+36''\sin(3l)}.
==== Evection ====
The evection (or its approximation) was known to Ptolemy, but its name and knowledge of its cause dates from the 17th century. Its effect on the Moon's longitude has an odd-appearing period of about 31.8 days. This can be represented in a number of ways, for example as the result of an approximate 6-monthly libration in the position of perigee, with an accompanying 6-monthly pulsation in the size of the Moon's orbital eccentricity. Its principal term is
{\displaystyle +4586''\sin(2D-l)}.
==== Variation ====
The Variation, discovered by Tycho Brahe, is a speeding-up of the Moon as it approaches new-moon and full-moon, and a slowing-down as it approaches first and last quarter. Its gravitational explanation with a quantitative estimate was first given by Newton. Its principal term is
{\displaystyle +2370''\sin(2D)}.
==== Annual equation ====
The annual equation, also discovered by Brahe, was qualitatively explained by Newton in terms that the Moon's orbit becomes slightly expanded in size, and longer in period, when the Earth is at perihelion closest to the Sun at the beginning of January, and the Sun's perturbing effect is strongest, and then slightly contracted in size and shorter in period when the Sun is most distant in early July, so that its perturbing effect is weaker: the modern value for the principal term due to this effect is
{\displaystyle -668''\sin(l')}.
==== Parallactic inequality ====
The parallactic inequality, first found by Newton, makes Brahe's Variation a little asymmetric as a result of the finite distance and non-zero parallax of the Sun. Its effect is that the Moon is a little behind at first quarter, and a little ahead at last quarter. Its principal term is
{\displaystyle -125''\sin(D)}.
==== Reduction to the ecliptic ====
The reduction to the ecliptic represents the geometric effect of expressing the Moon's motion in terms of a longitude in the plane of the ecliptic, although its motion is really taking place in a plane that is inclined by about 5 degrees. Its principal term is
{\displaystyle -412''\sin(2F)}.
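To make the use of these terms concrete, the following Python sketch (illustrative only; it takes the Delaunay arguments as inputs in degrees, uses a hypothetical function name, and includes just the principal coefficients quoted above) adds up the named contributions to the Moon's longitude, in arcseconds:

```python
import math

def principal_longitude_terms(l, l_prime, f, d):
    """Sum of the principal named perturbation terms (arcseconds).

    l, l_prime, f, d are the Delaunay arguments in degrees.
    Only the leading coefficients quoted in the text are included.
    """
    s, rad = math.sin, math.radians
    return (22639.0 * s(rad(l))              # equation of the center, 1st term
            + 769.0 * s(rad(2 * l))          # equation of the center, 2nd term
            + 36.0 * s(rad(3 * l))           # equation of the center, 3rd term
            + 4586.0 * s(rad(2 * d - l))     # evection
            + 2370.0 * s(rad(2 * d))         # variation
            - 668.0 * s(rad(l_prime))        # annual equation
            - 125.0 * s(rad(d))              # parallactic inequality
            - 412.0 * s(rad(2 * f)))         # reduction to the ecliptic

if __name__ == "__main__":
    # Arbitrary sample arguments (degrees), purely for illustration:
    print(principal_longitude_terms(30.0, 100.0, 45.0, 60.0), "arcseconds")
```

A full modern theory uses vastly more terms, as the next paragraph notes; this sketch only shows how the named terms combine.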
The analysts of the mid-18th century expressed the perturbations of the Moon's position in longitude using about 25 to 30 trigonometrical terms. However, work in the nineteenth and twentieth centuries led to very different formulations of the theory, so these terms are no longer current. The number of terms needed to express the Moon's position with the accuracy sought at the beginning of the twentieth century was over 1400; and the number of terms needed to emulate the accuracy of modern numerical integrations based on laser-ranging observations is in the tens of thousands: there is no limit to the increase in the number of terms needed as requirements of accuracy increase.
== Modern developments ==
=== Digital computers and lunar laser ranging ===
Since the Second World War and especially since the 1960s, lunar theory has been further developed in a somewhat different way. This has been stimulated in two ways: on the one hand, by the use of automatic digital computation, and on the other hand, by modern observational data-types, with greatly increased accuracy and precision.
Wallace John Eckert, a student of Ernest William Brown and employee at IBM, used the experimental digital computers developed there after the Second World War for computation of astronomical ephemerides. One of the projects was to put Brown's lunar theory into the machine and evaluate the expressions directly. Another project was something entirely new: a numerical integration of the equations of motion for the Sun and the four major planets. This became feasible only after electronic digital computers became available. Eventually this led to the Jet Propulsion Laboratory Development Ephemeris series.
In the meantime, Brown's theory was improved with better constants and the introduction of Ephemeris Time and the removal of some empirical corrections associated with this. This led to the Improved Lunar Ephemeris (ILE), which, with some minor successive improvements, was used in the astronomical almanacs from 1960 through 1983 and enabled lunar landing missions.
The most significant improvement of position observations of the Moon have been the Lunar Laser Ranging measurements, obtained using Earth-bound lasers and special retroreflectors placed on the surface of the Moon. The time-of-flight of a pulse of laser light to one of the retroreflectors and back gives a measure of the Moon's distance at that time. The first of five retroreflectors that are operational today was taken to the Moon in the Apollo 11 spacecraft in July 1969 and placed in a suitable position on the Moon's surface by Buzz Aldrin. Range precision has been extended further by the Apache Point Observatory Lunar Laser-ranging Operation, established in 2005.
=== Numerical integrations, relativity, tides, librations ===
The lunar theory, as developed numerically to fine precision using these modern measures, is based on a larger range of considerations than the classical theories: It takes account not only of gravitational forces (with relativistic corrections) but also of many tidal and geophysical effects and a greatly extended theory of lunar libration. Like many other scientific fields this one has now developed so as to be based on the work of large teams and institutions. An institution notably taking one of the leading parts in these developments has been the Jet Propulsion Laboratory (JPL) at California Institute of Technology; and names particularly associated with the transition, from the early 1970s onwards, from classical lunar theories and ephemerides towards the modern state of the science include those of J. Derral Mulholland and J.G. Williams, and for the linked development of solar system (planetary) ephemerides E. Myles Standish.
Since the 1970s, JPL has produced a series of numerically integrated Development Ephemerides (numbered DExxx), incorporating Lunar Ephemerides (LExxx). Planetary and lunar ephemerides DE200/LE200 were used in the official Astronomical Almanac ephemerides for 1984–2002, and ephemerides DE405/LE405, of further improved accuracy and precision, have been in use as from the issue for 2003. The current ephemeris is DE440.
=== Analytical developments ===
In parallel with these developments, a new class of analytical lunar theory has also been developed in recent years, notably the Ephemeride Lunaire Parisienne by Jean Chapront and Michelle Chapront-Touzé from the Bureau des Longitudes. Using computer-assisted algebra, the analytical developments have been taken further than previously could be done by the classical analysts working manually. Also, some of these new analytical theories (like ELP) have been fitted to the numerical ephemerides previously developed at JPL as mentioned above. The main aims of these recent analytical theories, in contrast to the aims of the classical theories of past centuries, have not been to generate improved positional data for current dates; rather, their aims have included the study of further aspects of the motion, such as long-term properties, which may not so easily be apparent from the modern numerical theories themselves.
== Notable astronomers ==
Among notable astronomers and mathematicians down the ages, whose names are associated with lunar theories, are:
Babylonian/Chaldean
Naburimannu
Kidinnu
Soudines
Ancient Greeks/Hellenistic
Hipparchus
Ptolemy
Medieval Islamic world
Ibn al-Shatir
European Middle Ages
Sandivogius of Czechel
Albert Brudzewski
Nicolaus Copernicus
European, 16th to early 20th centuries
Tycho Brahe
Johannes Kepler
Jeremiah Horrocks
Ismaël Bullialdus
John Flamsteed
Isaac Newton
Edmond Halley
Leonhard Euler
Alexis Clairaut
Jean d'Alembert
Tobias Mayer
Johann Tobias Bürg
Pierre-Simon Laplace
Philippe le Doulcet
Johann Karl Burckhardt
Peter Andreas Hansen
Charles-Eugène Delaunay
John Couch Adams
North American, 19th to early 20th centuries
Simon Newcomb
George William Hill
Ernest William Brown
Wallace John Eckert
Other notable mathematicians and mathematical astronomers also made significant contributions.
== Notes ==
== References ==
== Bibliography ==
'AE 1871': "Nautical Almanac & Astronomical Ephemeris" for 1871, (London, 1867).
E W Brown (1896). An Introductory Treatise on the Lunar Theory, Cambridge University Press.
E W Brown. "Theory of the Motion of the Moon", Memoirs of the Royal Astronomical Society, 53 (1897), 39–116.
E W Brown. "Theory of the Motion of the Moon", Memoirs of the Royal Astronomical Society, 53 (1899), 163–202.
E W Brown. "Theory of the Motion of the Moon", Memoirs of the Royal Astronomical Society, 54 (1900), 1–63.
E W Brown. "On the verification of the Newtonian law", Monthly Notes of the Royal Astronomical Society 63 (1903), 396–397.
E W Brown. "Theory of the Motion of the Moon", Memoirs of the Royal Astronomical Society, 57 (1905), 51–145.
E W Brown. "Theory of the Motion of the Moon", Memoirs of the Royal Astronomical Society, 59 (1908), 1–103.
E W Brown (1919). Tables of the Motion of the Moon, New Haven.
M Chapront-Touzé & J Chapront. "The lunar ephemeris ELP-2000", Astronomy & Astrophysics 124 (1983), 50–62.
M Chapront-Touzé & J Chapront: "ELP2000-85: a semi-analytical lunar ephemeris adequate for historical times", Astronomy & Astrophysics 190 (1988), 342–352.
M Chapront-Touzé & J Chapront, Analytical Ephemerides of the Moon in the 20th Century (Observatoire de Paris, 2002).
J Chapront; M Chapront-Touzé; G Francou. "A new determination of lunar orbital parameters, precession constant and tidal acceleration from LLR measurements", Astronomy & Astrophysics 387 (2002), 700–709.
J Chapront & G Francou. "The lunar theory ELP revisited. Introduction of new planetary perturbations", Astronomy & Astrophysics 404 (2003), 735–742.
I B Cohen and Anne Whitman (1999). Isaac Newton: 'The Principia', a new translation, University of California Press. (For bibliographic details but no text, see external link.)
J O Dickey; P L Bender; J E Faller; and others. "Lunar Laser Ranging: A Continuing Legacy of the Apollo Program", Science 265 (1994), pp. 482–490.
J L E Dreyer (1906). A History of Astronomy from Thales to Kepler, Cambridge University Press, (later republished under the modified title "History of the Planetary Systems from Thales to Kepler").
W J Eckert et al. Improved Lunar Ephemeris 1952–1959: A Joint Supplement to the American Ephemeris and the (British) Nautical Almanac, (US Government Printing Office, 1954).
J Epping & J N Strassmaier. "Zur Entzifferung der astronomischen Tafeln der Chaldaer" ("On the Deciphering of the Astronomical Tables of the Chaldaeans"), Stimmen aus Maria Laach, vol. 21 (1881), pp. 277–292.
'ESAE 1961': Explanatory Supplement to the Astronomical Ephemeris and the American Ephemeris and Nautical Almanac ('prepared jointly by the Nautical Almanac Offices of the United Kingdom and the United States of America'), London (HMSO), 1961.
K Garthwaite; D B Holdridge & J D Mulholland. "A preliminary special perturbation theory for the lunar motion", Astronomical Journal 75 (1970), 1133.
H Godfray (1885). Elementary Treatise on the Lunar Theory, London, (4th ed.).
Andrew Motte (1729a) (translator). "The Mathematical Principles of Natural Philosophy, by Sir Isaac Newton, translated into English", Volume I, containing Book 1.
Andrew Motte (1729b) (translator). "The Mathematical Principles of Natural Philosophy, by Sir Isaac Newton, translated into English", Volume II, containing Books 2 and 3 (with Index, Appendix containing additional (Newtonian) proofs, and "The Laws of the Moon's Motion according to Gravity", by John Machin).
J D Mulholland & P J Shelus. "Improvement of the numerical lunar ephemeris with laser ranging data", Moon 8 (1973), 532.
O Neugebauer (1975). A History of Ancient Mathematical Astronomy, (in 3 volumes), New York (Springer).
X X Newhall; E M Standish; J G Williams. "DE102: A numerically integrated ephemeris of the Moon and planets spanning forty-four centuries", Astronomy and Astrophysics 125 (1983), 150.
U S Naval Observatory (2009). "History of the Astronomical Almanac" Archived 2009-03-05 at the Wayback Machine.
J G Williams et al. "Making solutions from lunar laser ranging data", Bulletin of the American Astronomical Society (1972), 4Q, 267.
J.G. Williams; S.G. Turyshev; & D.H. Boggs. "Progress in Lunar Laser Ranging Tests of Relativistic Gravity", Physical Review Letters, 93 (2004), 261101.
== External links ==
Quotations related to Lunar theory at Wikiquote | Wikipedia/Lunar_theory |
A sphere of influence (SOI) in astrodynamics and astronomy is the oblate spheroid-shaped region where a particular celestial body exerts the main gravitational influence on an orbiting object. This is usually used to describe the areas in the Solar System where planets dominate the orbits of surrounding objects such as moons, despite the presence of the much more massive but distant Sun.
In the patched conic approximation, used in estimating the trajectories of bodies moving between the neighbourhoods of different bodies using two-body solutions (ellipses and hyperbolae), the SOI is taken as the boundary where the trajectory switches which mass field it is influenced by. It is not to be confused with the sphere of activity, which extends well beyond the sphere of influence.
== Models ==
The most common base models used to calculate the sphere of influence are the Hill sphere and the Laplace sphere, but updated and more dynamic models have also been described.
The general equation describing the radius of the sphere {\displaystyle r_{\text{SOI}}} of a planet is:
{\displaystyle r_{\text{SOI}}\approx a\left({\frac {m}{M}}\right)^{2/5}}
where {\displaystyle a} is the semimajor axis of the smaller object's (usually a planet's) orbit around the larger body (usually the Sun), and {\displaystyle m} and {\displaystyle M} are the masses of the smaller and the larger object (usually a planet and the Sun), respectively.
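As a quick numerical check of this formula (a sketch using rounded textbook values for the masses and semimajor axes), one can evaluate the SOI radius of the Earth with respect to the Sun and of the Moon with respect to the Earth:
def r_soi(a, m, M):
    # Sphere-of-influence radius: r_SOI ~ a * (m / M)**(2/5)
    return a * (m / M) ** 0.4

M_SUN, M_EARTH, M_MOON = 1.989e30, 5.972e24, 7.342e22   # kg (rounded)
AU, A_MOON = 1.496e11, 3.844e8                           # m (rounded semimajor axes)

print(r_soi(AU, M_EARTH, M_SUN) / 1e3)       # ~9.2e5 km: Earth's SOI about the Sun
print(r_soi(A_MOON, M_MOON, M_EARTH) / 1e3)  # ~6.6e4 km: Moon's SOI about the Earth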
In the patched conic approximation, once an object leaves the planet's SOI, the primary/only gravitational influence is the Sun (until the object enters another body's SOI). Because the definition of rSOI relies on the presence of the Sun and a planet, the term is only applicable in a three-body or greater system and requires the mass of the primary body to be much greater than the mass of the secondary body. This changes the three-body problem into a restricted two-body problem.
== Table of selected SOI radii ==
The table shows the values of the sphere of influence of the bodies of the Solar System in relation to the Sun (with the exception of the Moon, which is reported relative to the Earth):
An important point to draw from this table is that the "sphere of influence" listed here is relative to each body's primary. For example, although Jupiter is much more massive than, say, Neptune, its SOI relative to the Sun is much smaller because of Jupiter's much closer proximity to the Sun.
== Increased accuracy on the SOI ==
The sphere of influence is, in fact, not quite a sphere. The distance to the SOI depends on the angular distance {\displaystyle \theta } from the massive body. A more accurate formula is given by
{\displaystyle r_{\text{SOI}}(\theta )\approx a\left({\frac {m}{M}}\right)^{2/5}{\frac {1}{\sqrt[{10}]{1+3\cos ^{2}(\theta )}}}}
Averaging over all possible directions we get:
{\displaystyle {\overline {r_{\text{SOI}}}}=0.9431a\left({\frac {m}{M}}\right)^{2/5}}
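The 0.9431 factor can be checked by averaging the directional correction over the sphere (weighting by sin θ); a minimal numerical sketch:
import math

def direction_factor(theta):
    # Angular correction (1 + 3*cos(theta)**2) ** (-1/10) to the SOI radius.
    return (1.0 + 3.0 * math.cos(theta) ** 2) ** (-0.1)

# Average over all directions: (1/2) * integral of f(theta)*sin(theta) dtheta on [0, pi],
# evaluated with a simple midpoint rule.
N = 100_000
h = math.pi / N
average = 0.5 * h * sum(direction_factor((k + 0.5) * h) * math.sin((k + 0.5) * h) for k in range(N))
print(average)  # ~0.9431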
== Derivation ==
Consider two point masses {\displaystyle A} and {\displaystyle B} at locations {\displaystyle r_{A}} and {\displaystyle r_{B}}, with masses {\displaystyle m_{A}} and {\displaystyle m_{B}} respectively. The distance {\displaystyle R=|r_{B}-r_{A}|} separates the two objects. Given a massless third point {\displaystyle C} at location {\displaystyle r_{C}}, one can ask whether to use a frame centered on {\displaystyle A} or on {\displaystyle B} to analyse the dynamics of {\displaystyle C}.
Consider a frame centered on {\displaystyle A}. The gravity of {\displaystyle B} is denoted as {\displaystyle g_{B}} and will be treated as a perturbation to the dynamics of {\displaystyle C} due to the gravity {\displaystyle g_{A}} of body {\displaystyle A}. Due to their gravitational interactions, point {\displaystyle A} is attracted to point {\displaystyle B} with acceleration {\displaystyle a_{A}={\frac {Gm_{B}}{R^{3}}}(r_{B}-r_{A})}; this frame is therefore non-inertial. To quantify the effects of the perturbations in this frame, one should consider the ratio of the perturbations to the main body gravity, i.e. {\displaystyle \chi _{A}={\frac {|g_{B}-a_{A}|}{|g_{A}|}}}. The perturbation {\displaystyle g_{B}-a_{A}} is also known as the tidal force due to body {\displaystyle B}. The perturbation ratio {\displaystyle \chi _{B}} for the frame centered on {\displaystyle B} is constructed by interchanging {\displaystyle A\leftrightarrow B}.
As {\displaystyle C} gets close to {\displaystyle A}, {\displaystyle \chi _{A}\rightarrow 0} and {\displaystyle \chi _{B}\rightarrow \infty }, and vice versa. The frame to choose is the one that has the smallest perturbation ratio. The surface for which {\displaystyle \chi _{A}=\chi _{B}} separates the two regions of influence. In general this region is rather complicated, but in the case that one mass dominates the other, say {\displaystyle m_{A}\ll m_{B}}, it is possible to approximate the separating surface. In such a case the surface must be close to the mass {\displaystyle A}; denote {\displaystyle r} as the distance from {\displaystyle A} to the separating surface.
The distance to the sphere of influence must thus satisfy
{\displaystyle {\frac {m_{B}}{m_{A}}}{\frac {r^{3}}{R^{3}}}={\frac {m_{A}}{m_{B}}}{\frac {R^{2}}{r^{2}}}}
and so
{\displaystyle r=R\left({\frac {m_{A}}{m_{B}}}\right)^{2/5}}
is the radius of the sphere of influence of body {\displaystyle A}.
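As a numerical illustration of this derivation (a sketch using rounded Earth-Sun values; note that along the line joining the two bodies the exact crossing point of the two perturbation ratios comes out somewhat smaller than the direction-averaged 2/5 estimate, consistent with the angular factor quoted earlier), one can locate by bisection the distance at which the two ratios are equal:
def chi_A(r, R, mA, mB):
    # Perturbation ratio in the frame centred on A for a test point on the A-B line
    # at distance r from A: tidal pull of B divided by A's own gravity (G cancels).
    return (mB / mA) * r**2 * abs(1.0 / (R - r)**2 - 1.0 / R**2)

def chi_B(r, R, mA, mB):
    # Same quantity for the frame centred on B.
    return (mA / mB) * (R - r)**2 * abs(1.0 / r**2 - 1.0 / R**2)

def crossing(R, mA, mB, lo, hi, iterations=200):
    # Bisect for the distance r from A at which chi_A == chi_B.
    for _ in range(iterations):
        mid = 0.5 * (lo + hi)
        if chi_A(mid, R, mA, mB) < chi_B(mid, R, mA, mB):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M_SUN, M_EARTH, AU = 1.989e30, 5.972e24, 1.496e11
print(crossing(AU, M_EARTH, M_SUN, 1e7, 1e10))  # ~8.0e8 m along the Earth-Sun line
print(AU * (M_EARTH / M_SUN) ** 0.4)            # ~9.2e8 m from the 2/5 estimate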
== Gravity well ==
A gravity well (or funnel) is a metaphorical concept for the gravitational field of a mass: the field is pictured as a funnel-shaped well around the mass, illustrating the steep gravitational potential whose energy must be overcome in order to escape from, or enter, the main part of a sphere of influence.
An example of this is the strong gravitational field of the Sun, with Mercury lying deep within it. At perihelion Mercury descends even deeper into the Sun's gravity well, causing an anomalistic (perihelion) apsidal precession that is more recognizable than for other planets precisely because Mercury sits so deep in the well. This characteristic of Mercury's orbit was famously calculated by Albert Einstein using his general theory of relativity, and it eventually became one of the first cases confirming that theory.
== See also ==
Hill sphere
Sphere of influence (black hole)
Clearing the neighbourhood
== References ==
== General references ==
Bate, Roger R.; Mueller, Donald D.; White, Jerry E. (1971). Fundamentals of astrodynamics. Dover books on astronomy. New York: Dover Publications. pp. 333–334. ISBN 978-0-486-60061-1.
Sellers, Jerry Jon; Astore, William J.; Giffen, Robert B.; Larson, Wiley J. (2015). Marilyn (ed.). Understanding space: an introduction to astronautics (4th ed.). New York: McGraw-Hill Companies. pp. 228, 738. ISBN 978-0-9904299-4-4.
Danby, J. M. A. (1992). Fundamentals of celestial mechanics (2nd ed.). Richmond, Va., U.S.A: Willmann-Bell. pp. 352–353. ISBN 978-0-943396-20-0.
== External links ==
Project Pluto | Wikipedia/Sphere_of_influence_(astrodynamics) |
In mathematics, a transcendental function is an analytic function that does not satisfy a polynomial equation whose coefficients are functions of the independent variable that can be written using only the basic operations of addition, subtraction, multiplication, and division (without the need of taking limits). This is in contrast to an algebraic function.
Examples of transcendental functions include the exponential function, the logarithm function, the hyperbolic functions, and the trigonometric functions. Equations over these expressions are called transcendental equations.
== Definition ==
Formally, an analytic function {\displaystyle f} of one real or complex variable is transcendental if it is algebraically independent of that variable. This means the function does not satisfy any polynomial equation whose coefficients are themselves polynomials in that variable. For example, the function {\displaystyle f} given by {\displaystyle f(x)={\frac {ax+b}{cx+d}}} for all {\displaystyle x} is not transcendental, but algebraic, because it satisfies the polynomial equation {\displaystyle (ax+b)-(cx+d)f(x)=0}.
Similarly, the function {\displaystyle f} that satisfies the equation {\displaystyle f(x)^{5}+f(x)=x} for all {\displaystyle x} is not transcendental, but algebraic, even though it cannot be written as a finite expression involving the basic arithmetic operations.
This definition can be extended to functions of several variables.
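The rational-function example above can be verified mechanically; a minimal sketch using the sympy computer-algebra library (assumed to be available):
from sympy import symbols, simplify

a, b, c, d, x = symbols('a b c d x')
f = (a*x + b) / (c*x + d)                    # the algebraic function from the example
print(simplify((a*x + b) - (c*x + d) * f))   # prints 0: the polynomial relation holds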
== History ==
The transcendental functions sine and cosine were tabulated from physical measurements in antiquity, as evidenced in Greece (Hipparchus) and India (jya and koti-jya). In describing Ptolemy's table of chords, an equivalent to a table of sines, Olaf Pedersen wrote:
The mathematical notion of continuity as an explicit concept is unknown to Ptolemy. That he, in fact, treats these functions as continuous appears from his unspoken presumption that it is possible to determine a value of the dependent variable corresponding to any value of the independent variable by the simple process of linear interpolation.
A revolutionary understanding of these circular functions occurred in the 17th century and was explicated by Leonhard Euler in 1748 in his Introduction to the Analysis of the Infinite. These ancient transcendental functions became known as continuous functions through quadrature of the rectangular hyperbola xy = 1 by Grégoire de Saint-Vincent in 1647, two millennia after Archimedes had produced The Quadrature of the Parabola.
The area under the hyperbola was shown to have the scaling property of constant area for a constant ratio of bounds. The hyperbolic logarithm function so described was of limited service until 1748 when Leonhard Euler related it to functions where a constant is raised to a variable exponent, such as the exponential function where the constant base is e. By introducing these transcendental functions and noting the bijection property that implies an inverse function, some facility was provided for algebraic manipulations of the natural logarithm even if it is not an algebraic function.
The exponential function is written {\displaystyle \exp(x)=e^{x}}. Euler identified it with the infinite series {\textstyle \sum _{k=0}^{\infty }x^{k}/k!}, where k! denotes the factorial of k.
The even and odd terms of this series provide sums denoting cosh(x) and sinh(x), so that
{\displaystyle e^{x}=\cosh x+\sinh x.}
These transcendental hyperbolic functions can be converted into circular functions sine and cosine by introducing (−1)k into the series, resulting in alternating series. After Euler, mathematicians view the sine and cosine this way to relate the transcendence to logarithm and exponent functions, often through Euler's formula in complex number arithmetic.
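A small numerical sketch of Euler's series and its even/odd split into cosh and sinh (plain Python, truncating the series after a modest number of terms):
import math

def exp_series_parts(x, terms=30):
    # Partial sums of sum_k x**k / k!: even-index terms give cosh, odd-index terms give sinh.
    even = sum(x**k / math.factorial(k) for k in range(0, terms, 2))
    odd = sum(x**k / math.factorial(k) for k in range(1, terms, 2))
    return even, odd

even, odd = exp_series_parts(1.5)
print(even + odd, math.exp(1.5))   # full series vs exp(x)
print(even, math.cosh(1.5))        # even terms vs cosh(x)
print(odd, math.sinh(1.5))         # odd terms vs sinh(x)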
== Examples ==
The following functions are transcendental:
{\displaystyle {\begin{aligned}f_{1}(x)&=x^{\pi }\\[2pt]f_{2}(x)&=e^{x}\\[2pt]f_{3}(x)&=\log _{e}{x}\\[2pt]f_{4}(x)&=\cosh {x}\\f_{5}(x)&=\sinh {x}\\f_{6}(x)&=\tanh {x}\\f_{7}(x)&=\sinh ^{-1}{x}\\[2pt]f_{8}(x)&=\tanh ^{-1}{x}\\[2pt]f_{9}(x)&=\cos {x}\\f_{10}(x)&=\sin {x}\\f_{11}(x)&=\tan {x}\\f_{12}(x)&=\sin ^{-1}{x}\\[2pt]f_{13}(x)&=\tan ^{-1}{x}\\[2pt]f_{14}(x)&=x!\\f_{15}(x)&=1/x!\\[2pt]f_{16}(x)&=x^{x}\\[2pt]\end{aligned}}}
For the first function {\displaystyle f_{1}(x)}, the exponent {\displaystyle \pi } can be replaced by any other irrational number, and the function will remain transcendental. For the second and third functions {\displaystyle f_{2}(x)} and {\displaystyle f_{3}(x)}, the base {\displaystyle e} can be replaced by any other positive real base not equal to 1, and the functions will remain transcendental. Functions 4-8 denote the hyperbolic trigonometric functions, while functions 9-13 denote the circular trigonometric functions. The fourteenth function {\displaystyle f_{14}(x)} denotes the analytic extension of the factorial function via the gamma function, and {\displaystyle f_{15}(x)} is its reciprocal, an entire function. Finally, in the last function {\displaystyle f_{16}(x)}, the exponent {\displaystyle x} can be replaced by {\displaystyle kx} for any nonzero real {\displaystyle k}, and the function will remain transcendental.
== Algebraic and transcendental functions ==
The most familiar transcendental functions are the logarithm, the exponential (with any non-trivial base), the trigonometric, and the hyperbolic functions, and the inverses of all of these. Less familiar are the special functions of analysis, such as the gamma, elliptic, and zeta functions, all of which are transcendental. The generalized hypergeometric and Bessel functions are transcendental in general, but algebraic for some special parameter values.
Transcendental functions cannot be defined using only the operations of addition, subtraction, multiplication, division, and {\displaystyle n}th roots (where {\displaystyle n} is any integer), without using some "limiting process".
A function that is not transcendental is algebraic. Simple examples of algebraic functions are the rational functions and the square root function, but in general, algebraic functions cannot be defined as finite formulas of the elementary functions, as shown by the example above with
{\displaystyle f(x)^{5}+f(x)=x} (see Abel–Ruffini theorem).
The indefinite integral of many algebraic functions is transcendental. For example, the integral
{\displaystyle \int _{t=1}^{x}{\frac {1}{t}}dt} turns out to equal the logarithm function {\displaystyle \log _{e}(x)}. Similarly, the limit or the infinite sum of many algebraic function sequences is transcendental. For example, {\displaystyle \lim _{n\to \infty }(1+x/n)^{n}} converges to the exponential function {\displaystyle e^{x}}, and the infinite sum {\displaystyle \sum _{n=0}^{\infty }{\frac {x^{2n}}{(2n)!}}} turns out to equal the hyperbolic cosine function {\displaystyle \cosh x}. In fact, it is impossible to define any transcendental function in terms of algebraic functions without using some such "limiting procedure" (integrals, sequential limits, and infinite sums are just a few).
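These limiting processes are easy to observe numerically; the sketch below (plain Python, with step counts chosen only for illustration) compares the limit defining the exponential and a Riemann-sum approximation of the integral of 1/t with the corresponding transcendental functions:
import math

x = 2.0

# (1 + x/n)**n approaches e**x as n grows
for n in (10, 1_000, 100_000):
    print(n, (1.0 + x / n) ** n)
print('exp(x) =', math.exp(x))

# A midpoint Riemann sum for the integral of 1/t from 1 to x approaches log_e(x)
N = 100_000
h = (x - 1.0) / N
integral = h * sum(1.0 / (1.0 + (k + 0.5) * h) for k in range(N))
print(integral, math.log(x))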
Differential algebra examines how integration frequently creates functions that are algebraically independent of some class, such as when one takes polynomials with trigonometric functions as variables.
== Transcendentally transcendental functions ==
Most familiar transcendental functions, including the special functions of mathematical physics, are solutions of algebraic differential equations. Those that are not, such as the gamma and the zeta functions, are called transcendentally transcendental or hypertranscendental functions.
== Exceptional set ==
If f is an algebraic function and {\displaystyle \alpha } is an algebraic number then f(α) is also an algebraic number. The converse is not true: there are entire transcendental functions f such that f(α) is an algebraic number for any algebraic α. For a given transcendental function the set of algebraic numbers giving algebraic results is called the exceptional set of that function. Formally it is defined by:
{\displaystyle {\mathcal {E}}(f)=\left\{\alpha \in {\overline {\mathbb {Q} }}\,:\,f(\alpha )\in {\overline {\mathbb {Q} }}\right\}.}
In many instances the exceptional set is fairly small. For example, {\displaystyle {\mathcal {E}}(\exp )=\{0\}}; this was proved by Lindemann in 1882. In particular exp(1) = e is transcendental. Also, since exp(iπ) = −1 is algebraic, we know that iπ cannot be algebraic. Since i is algebraic, this implies that π is a transcendental number.
In general, finding the exceptional set of a function is a difficult problem, but if it can be calculated then it can often lead to results in transcendental number theory. Here are some other known exceptional sets:
Klein's j-invariant:
{\displaystyle {\mathcal {E}}(j)=\left\{\alpha \in {\mathcal {H}}\,:\,[\mathbb {Q} (\alpha ):\mathbb {Q} ]=2\right\},}
where {\displaystyle {\mathcal {H}}} is the upper half-plane, and {\displaystyle [\mathbb {Q} (\alpha ):\mathbb {Q} ]} is the degree of the number field {\displaystyle \mathbb {Q} (\alpha )}. This result is due to Theodor Schneider.
Exponential function in base 2:
{\displaystyle {\mathcal {E}}(2^{x})=\mathbb {Q} .}
This result is a corollary of the Gelfond–Schneider theorem, which states that if {\displaystyle \alpha \neq 0,1} is algebraic, and {\displaystyle \beta } is algebraic and irrational, then {\displaystyle \alpha ^{\beta }} is transcendental. Thus the function 2x could be replaced by cx for any algebraic c not equal to 0 or 1. Indeed, we have:
{\displaystyle {\mathcal {E}}(x^{x})={\mathcal {E}}\left(x^{\frac {1}{x}}\right)=\mathbb {Q} \setminus \{0\}.}
A consequence of Schanuel's conjecture in transcendental number theory would be that
{\displaystyle {\mathcal {E}}\left(e^{e^{x}}\right)=\emptyset .}
A function with empty exceptional set that does not require assuming Schanuel's conjecture is
{\displaystyle f(x)=\exp(1+\pi x).}
While calculating the exceptional set for a given function is not easy, it is known that given any subset of the algebraic numbers, say A, there is a transcendental function whose exceptional set is A. The subset does not need to be proper, meaning that A can be the whole set of algebraic numbers. This directly implies that there exist transcendental functions that produce transcendental numbers only when given transcendental numbers. Alex Wilkie also proved that there exist transcendental functions for which first-order-logic proofs of their transcendence do not exist, by exhibiting an example of such an analytic function.
== Dimensional analysis ==
In dimensional analysis, transcendental functions are notable because they make sense only when their argument is dimensionless (possibly after algebraic reduction). Because of this, transcendental functions can be an easy-to-spot source of dimensional errors. For example, log(5 metres) is a nonsensical expression, unlike log(5 metres / 3 metres) or log(3) metres. One could attempt to apply a logarithmic identity to get log(5) + log(metres), which highlights the problem: applying a non-algebraic operation to a dimension creates meaningless results.
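This rule is straightforward to enforce in software; the toy quantity class below (a sketch, not modelled on any particular units library) refuses to take the logarithm of a dimensioned value but accepts a dimensionless ratio:
import math

class Quantity:
    # A value together with an integer exponent for a single dimension (e.g. length).
    def __init__(self, value, dim=0):
        self.value, self.dim = value, dim

    def __truediv__(self, other):
        return Quantity(self.value / other.value, self.dim - other.dim)

    def log(self):
        if self.dim != 0:
            raise ValueError("log() of a dimensioned quantity is meaningless")
        return math.log(self.value)

five_metres = Quantity(5.0, dim=1)
three_metres = Quantity(3.0, dim=1)
print((five_metres / three_metres).log())  # fine: the ratio is dimensionless
# five_metres.log() would raise ValueError, just as log(5 metres) is nonsensical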
== See also ==
Complex function
Function (mathematics)
Generalized function
List of special functions and eponyms
List of types of functions
Rational function
Special functions
== References ==
== External links ==
Definition of "Transcendental function" in the Encyclopedia of Math | Wikipedia/Transcendental_functions |