The bidirectional reflectance distribution function (BRDF), symbol fr(ωi, ωr), is a function of four real variables that defines how light from a source is reflected off an opaque surface. It is employed in the optics of real-world light, in computer graphics algorithms, and in computer vision algorithms. The function takes an incoming light direction, ωi, and an outgoing direction, ωr (taken in a coordinate system where the surface normal n lies along the z-axis), and returns the ratio of reflected radiance exiting along ωr to the irradiance incident on the surface from direction ωi. Each direction ω is itself parameterized by an azimuth angle φ and a zenith angle θ; the BRDF as a whole is therefore a function of four variables. The BRDF has units of sr⁻¹, where the steradian (sr) is the unit of solid angle.
== Definition ==
The BRDF was first defined by Fred Nicodemus around 1965. The definition is:

$$f_{\text{r}}(\omega_{\text{i}},\,\omega_{\text{r}}) = \frac{\mathrm{d}L_{\text{r}}(\omega_{\text{r}})}{\mathrm{d}E_{\text{i}}(\omega_{\text{i}})} = \frac{\mathrm{d}L_{\text{r}}(\omega_{\text{r}})}{L_{\text{i}}(\omega_{\text{i}})\cos\theta_{\text{i}}\,\mathrm{d}\omega_{\text{i}}}$$

where L is radiance, or power per unit solid angle in the direction of a ray per unit projected area perpendicular to the ray, E is irradiance, or power per unit surface area, and θi is the angle between ωi and the surface normal, n. The index i indicates incident light, whereas the index r indicates reflected light.
The reason the function is defined as a quotient of two differentials, rather than directly as a quotient of the undifferentiated quantities, is that light other than dEi(ωi), which is of no interest for fr(ωi, ωr), might illuminate the surface and unintentionally affect Lr(ωr), whereas dLr(ωr) is affected only by dEi(ωi).
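As an illustration, the definition can be checked numerically in the simplest case, a Lambertian (perfectly diffuse) surface, whose BRDF is the constant albedo/π. The sketch below (plain Python; all numeric values are illustrative choices, not from the text) estimates the irradiance E and the reflected radiance Lr for a surface under uniform incident radiance over the hemisphere; analytically E = π·Li and Lr = albedo·Li, independent of the outgoing direction.

```python
import math, random

# A Lambertian surface has the constant BRDF f_r = albedo / pi (units sr^-1).
def f_r(omega_i, omega_r, albedo=0.5):
    return albedo / math.pi

# Monte Carlo estimate of irradiance E and reflected radiance L_r for a
# surface lit by uniform incident radiance L_i over the whole hemisphere.
# Analytically E = pi * L_i and L_r = albedo * L_i for this setup.
def estimate(L_i=2.0, albedo=0.5, n=200_000, seed=1):
    rng = random.Random(seed)
    E = L_r = 0.0
    for _ in range(n):
        cos_t = rng.random()     # uniform hemisphere directions: cos(theta) ~ U(0,1)
        w = 2 * math.pi / n      # 1/pdf divided by the sample count
        E   += L_i * cos_t * w
        L_r += f_r(None, None, albedo) * L_i * cos_t * w
    return E, L_r

E, L_r = estimate()   # E close to pi * 2, L_r close to 0.5 * 2 = 1.0
```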
== Related functions ==
The Spatially Varying Bidirectional Reflectance Distribution Function (SVBRDF) is a 6-dimensional function, fr(ωi, ωr, x), where x describes a 2D location over an object's surface.
The Bidirectional Texture Function (BTF) is appropriate for modeling non-flat surfaces, and has the same parameterization as the SVBRDF; however in contrast, the BTF includes non-local scattering effects like shadowing, masking, interreflections or subsurface scattering. The functions defined by the BTF at each point on the surface are thus called Apparent BRDFs.
The Bidirectional Surface Scattering Reflectance Distribution Function (BSSRDF) is a further generalized 8-dimensional function S(xi, ωi, xr, ωr) in which light entering the surface may scatter internally and exit at another location.
In all these cases, the dependence on the wavelength of light has been ignored. In reality, the BRDF is wavelength dependent, and to account for effects such as iridescence or luminescence the dependence on wavelength must be made explicit: fr(λi, ωi, λr, ωr). Note that in the typical case where all optical elements are linear, the function will obey fr(λi, ωi, λr, ωr) = 0 except when λi = λr: that is, the surface will only reflect light at a wavelength equal to that of the incoming light. In this case it can be parameterized as fr(λ, ωi, ωr), with only one wavelength parameter.
== Physically based BRDFs ==
Physically realistic BRDFs for reciprocal linear optics have additional properties, including:
positivity: fr(ωi, ωr) ≥ 0
obeying Helmholtz reciprocity: fr(ωi, ωr) = fr(ωr, ωi)
conserving energy: ∀ωi, ∫Ω fr(ωi, ωr) cos θr dωr ≤ 1
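These properties can be verified numerically for a concrete model. The sketch below (plain Python; the normalized Phong lobe and all parameter values are illustrative choices, not from the text) checks Helmholtz reciprocity by swapping the two directions, and bounds the energy integral by hemisphere quadrature.

```python
import math

def reflect(w):
    # mirror w about the surface normal n = (0, 0, 1)
    return (-w[0], -w[1], w[2])

def phong_brdf(wi, wr, rho_d=0.3, rho_s=0.5, exp_n=20):
    # diffuse term plus a normalized Phong lobe around the mirror direction
    spec = max(0.0, sum(a * b for a, b in zip(reflect(wi), wr)))
    return rho_d / math.pi + rho_s * (exp_n + 2) / (2 * math.pi) * spec ** exp_n

def direction(theta, phi):
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

wi, wr = direction(0.4, 0.3), direction(1.1, 2.0)

# Helmholtz reciprocity: swapping the two directions leaves the BRDF unchanged
assert abs(phong_brdf(wi, wr) - phong_brdf(wr, wi)) < 1e-12

# energy conservation: midpoint-rule quadrature of the integral of
# f_r * cos(theta_r) over the hemisphere; it comes out below 1
# (roughly rho_d + rho_s = 0.8 for this near-normal incidence)
total, m = 0.0, 128
for it in range(m):
    theta = (it + 0.5) * (math.pi / 2) / m
    for ip in range(2 * m):
        phi = (ip + 0.5) * math.pi / m
        dw = math.sin(theta) * (math.pi / 2 / m) * (math.pi / m)
        total += phong_brdf(wi, direction(theta, phi)) * math.cos(theta) * dw
```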
== Applications ==
The BRDF is a fundamental radiometric concept, and accordingly is used in computer graphics for photorealistic rendering of synthetic scenes (see the rendering equation), as well as in computer vision for many inverse problems such as object recognition. BRDF has also been used for modeling light trapping in solar cells (e.g. using the OPTOS formalism) or low concentration solar photovoltaic systems.
In the context of satellite remote sensing, NASA uses a BRDF model to characterise surface reflectance anisotropy. For a given land area, the BRDF is established based on selected multiangular observations of surface reflectance. While single observations depend on view geometry and solar angle, the MODIS BRDF/Albedo product describes intrinsic surface properties in several spectral bands, at a resolution of 500 meters. The BRDF/Albedo product can be used to model surface albedo depending on atmospheric scattering.
== Models ==
BRDFs can be measured directly from real objects using calibrated cameras and light sources; however, many phenomenological and analytic models have been proposed, including the Lambertian reflectance model frequently assumed in computer graphics. Some useful features of recent models include:
accommodating anisotropic reflection
editable using a small number of intuitive parameters
accounting for Fresnel effects at grazing angles
being well-suited to Monte Carlo methods.
W. Matusik et al. found that interpolating between measured samples produced realistic results and was easy to understand.
=== Some examples ===
Lambertian model, representing perfectly diffuse (matte) surfaces by a constant BRDF.
Lommel–Seeliger, lunar and Martian reflection.
Hapke scattering model, physically motivated approximation of the radiative transfer solution for a porous, irregular, and particulate surface. Often used in astronomy for planet/small body surface reflection simulations. Multiple versions and modifications exist.
Phong reflectance model, a phenomenological model akin to plastic-like specularity.
Blinn–Phong model, resembling Phong, but allowing for certain quantities to be interpolated, reducing computational overhead.
Torrance–Sparrow model, a general model representing surfaces as distributions of perfectly specular microfacets.
Cook–Torrance model, a specular-microfacet model (Torrance–Sparrow) accounting for wavelength and thus color shifting.
Ward model, a specular-microfacet model with an elliptical-Gaussian distribution function dependent on surface tangent orientation (in addition to surface normal).
Oren–Nayar model, a "directed-diffuse" microfacet model, with perfectly diffuse (rather than specular) microfacets.
Ashikhmin–Shirley model, allowing for anisotropic reflectance, along with a diffuse substrate under a specular surface.
HTSG (He, Torrance, Sillion, Greenberg), a comprehensive physically based model.
Fitted Lafortune model, a generalization of Phong with multiple specular lobes, and intended for parametric fits of measured data.
Lebedev model for analytical-grid BRDF approximation.
ABC-like model for accurate and efficient rendering of glossy surfaces.
ABg model
K-correlation (ABC) model
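To make the relationship between two of the listed models concrete, the sketch below (plain Python; parameter values are illustrative) evaluates the Phong specular lobe, built around the mirror reflection direction, and the Blinn–Phong lobe, built around the half vector. Both peak in the exact mirror configuration, but for the same exponent the Blinn–Phong lobe is broader, which is why a roughly four-times-larger exponent is commonly used to match a given Phong highlight.

```python
import math

def normalize(v):
    s = math.sqrt(sum(a * a for a in v))
    return tuple(a / s for a in v)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

N = (0.0, 0.0, 1.0)  # surface normal

def phong_spec(wi, wr, p=32):
    # lobe around the mirror reflection of the incoming direction
    r = (-wi[0], -wi[1], wi[2])          # reflect wi about N = z-axis
    return max(0.0, dot(r, wr)) ** p

def blinn_phong_spec(wi, wr, p=32):
    # lobe around the half vector between incoming and outgoing directions
    h = normalize((wi[0] + wr[0], wi[1] + wr[1], wi[2] + wr[2]))
    return max(0.0, dot(h, N)) ** p

# in the exact mirror configuration both lobes reach their peak value of 1
wi = (math.sin(0.5), 0.0, math.cos(0.5))
wr = (-math.sin(0.5), 0.0, math.cos(0.5))
peak_phong = phong_spec(wi, wr)
peak_blinn = blinn_phong_spec(wi, wr)
```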
== Acquisition ==
Traditionally, BRDF measurement devices called gonioreflectometers employ one or more goniometric arms to position a light source and a detector at various directions from a flat sample of the material to be measured. To measure a full BRDF, this process must be repeated many times, moving the light source each time to measure a different incidence angle.
Unfortunately, using such a device to densely measure the BRDF is very time-consuming. One of the first improvements on these techniques used a half-silvered mirror and a digital camera to take many BRDF samples of a planar target at once. Since this work, many researchers have developed other devices for efficiently acquiring BRDFs from real world samples, and it remains an active area of research.
There is an alternative way to measure BRDF based on HDR images. The standard algorithm is to measure the BRDF point cloud from images and fit it with one of the BRDF models.
A fast way to measure BRDF or BTDF is a conoscopic scatterometer. The advantage of this measurement instrument is that a near-hemispheric measurement can be captured in a fraction of a second with resolution of roughly 0.1°. This instrument has two disadvantages. The first is that the dynamic range is limited by the camera being used; this can be as low as 8 bits for older image sensors or as high as 32 bits for the newer automotive image sensors. The other disadvantage is that for BRDF measurements the beam must pass from an external light source, bounce off a pellicle and pass in reverse through the first few elements of the conoscope before being scattered by the sample. Each of these elements is antireflection-coated, but roughly 0.3% of the light is reflected at each air-glass interface. These reflections will show up in the image as a spurious signal. For scattering surfaces with a large signal, this is not a problem, but for Lambertian surfaces it is.
== BRDF fabrication ==
BRDF fabrication refers to the process of implementing a surface based on the measured or synthesized information of a target BRDF. There exist three ways to perform such a task, but in general, the process can be summarized as the following steps:
Measuring or synthesizing the target BRDF distribution.
Sampling this distribution to discretize it and make fabrication feasible.
Designing a geometry that produces this distribution (with microfacets, halftoning).
Optimizing the continuity and smoothness of the surface with respect to the manufacturing procedure.
Many approaches have been proposed for manufacturing the BRDF of the target:
Milling the BRDF: This procedure starts with sampling the BRDF distribution and generating it with microfacet geometry; the surface is then optimized in terms of smoothness and continuity to meet the limitations of the milling machine. The final BRDF distribution is the convolution of the substrate and the geometry of the milled surface.
Printing the BRDF: In order to generate a spatially varying BRDF (svBRDF), it has been proposed to use gamut mapping and halftoning to achieve the targeted BRDF. Given a set of metallic inks with known BRDFs, an algorithm has been proposed to linearly combine them to produce the targeted distribution. So far, printing has meant only gray-scale or color printing, but real-world surfaces can exhibit different amounts of specularity that affect their final appearance; as a result, this novel method can help us print images even more realistically.
Combination of Ink and Geometry: In addition to color and specularity, real-world objects also contain texture. A 3D printer can be used to manufacture the geometry and cover the surface with a suitable ink; by optimally creating the facets and choosing the ink combination, this method can give us a higher degree of freedom in design and more accurate BRDF fabrication.
== See also ==
Albedo
BSDF
Gonioreflectometer
Opposition spike
Photometry (astronomy)
Radiometry
Reflectance
Schlick's approximation
Specular highlight
== References ==
== Further reading ==
Lubin, Dan; Robert Massom (2006-02-10). Polar Remote Sensing. Volume I: Atmosphere and Oceans (1st ed.). Springer. p. 756. ISBN 978-3-540-43097-1.
Matt, Pharr; Greg Humphreys (2004). Physically Based Rendering (1st ed.). Morgan Kaufmann. p. 1019. ISBN 978-0-12-553180-1.
Schaepman-Strub, G.; M. E. Schaepman; T. H. Painter; S. Dangel; J. V. Martonchik (2006-07-15). "Reflectance quantities in optical remote sensing: definitions and case studies". Remote Sensing of Environment. 103 (1): 27–42. Bibcode:2006RSEnv.103...27S. doi:10.1016/j.rse.2006.03.002.
An intuitive introduction to the concept of reflection model and BRDF.
In 3D computer graphics, radiosity is an application of the finite element method to solving the rendering equation for scenes with surfaces that reflect light diffusely. Unlike rendering methods that use Monte Carlo algorithms (such as path tracing), which handle all types of light paths, typical radiosity methods only account for paths (represented by the code "LD*E") which leave a light source and are reflected diffusely some number of times (possibly zero) before hitting the eye. Radiosity is a global illumination algorithm in the sense that the illumination arriving on a surface comes not just directly from the light sources, but also from other surfaces reflecting light. Radiosity is viewpoint independent, which increases the calculations involved, but makes the results useful for all viewpoints.
Radiosity methods were first developed in about 1950 in the engineering field of heat transfer. They were later refined specifically for the problem of rendering computer graphics in 1984–1985 by researchers at Cornell University and Hiroshima University.
Notable commercial radiosity engines are Enlighten by Geomerics (used for games including Battlefield 3 and Need for Speed: The Run); 3ds Max; form•Z; LightWave 3D and the Electric Image Animation System.
== Visual characteristics ==
The inclusion of radiosity calculations in the rendering process often lends an added element of realism to the finished scene, because of the way it mimics real-world phenomena. Consider a simple room scene.
The image on the left was rendered with a typical direct illumination renderer. There are three types of lighting in this scene which have been specifically chosen and placed by the artist in an attempt to create realistic lighting: spot lighting with shadows (placed outside the window to create the light shining on the floor), ambient lighting (without which any part of the room not lit directly by a light source would be totally dark), and omnidirectional lighting without shadows (to reduce the flatness of the ambient lighting).
The image on the right was rendered using a radiosity algorithm. There is only one source of light: an image of the sky placed outside the window. The difference is marked. The room glows with light. Soft shadows are visible on the floor, and subtle lighting effects are noticeable around the room. Furthermore, the red color from the carpet has bled onto the grey walls, giving them a slightly warm appearance. None of these effects were specifically chosen or designed by the artist.
== Overview of the radiosity algorithm ==
The surfaces of the scene to be rendered are each divided up into one or more smaller surfaces (patches).
A view factor (also known as form factor) is computed for each pair of patches; it is a coefficient describing how well the patches can see each other. Patches that are far away from each other, or oriented at oblique angles relative to one another, will have smaller view factors. If other patches are in the way, the view factor will be reduced or zero, depending on whether the occlusion is partial or total.
The view factors are used as coefficients in a linear system of rendering equations. Solving this system yields the radiosity, or brightness, of each patch, taking into account diffuse interreflections and soft shadows.
Progressive radiosity solves the system iteratively with intermediate radiosity values for the patch, corresponding to bounce levels. That is, after one iteration we know how the scene looks after one light bounce, after two passes we know how it looks after two bounces, and so forth. This is useful for getting an interactive preview of the scene. Also, the user can stop the iterations once the image looks good enough, rather than wait for the computation to numerically converge.
Another common method for solving the radiosity equation is "shooting radiosity," which iteratively solves the radiosity equation by "shooting" light from the patch with the most energy at each step. After the first pass, only those patches which are in direct line of sight of a light-emitting patch will be illuminated. After the second pass, more patches will become illuminated as the light begins to bounce around the scene. The scene continues to grow brighter and eventually reaches a steady state.
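The shooting strategy described above can be sketched in a few lines. This is a simplified, hypothetical version (plain Python; equal patch areas are assumed, and F[j][i] denotes the fraction of light leaving patch i that arrives at patch j):

```python
def shooting_radiosity(E, rho, F, steps=200):
    # E: emission per patch, rho: reflectivity per patch,
    # F[j][i]: fraction of light leaving patch i that reaches patch j
    # (equal patch areas assumed for simplicity)
    n = len(E)
    B = list(E)            # current radiosity estimate
    unshot = list(E)       # energy not yet distributed ("shot")
    for _ in range(steps):
        i = max(range(n), key=lambda k: unshot[k])   # brightest shooter
        if unshot[i] < 1e-12:
            break          # steady state: (almost) no energy left to shoot
        for j in range(n):
            if j != i:
                dB = rho[j] * F[j][i] * unshot[i]    # received and reflected
                B[j] += dB
                unshot[j] += dB
        unshot[i] = 0.0
    return B

# two facing patches, one emitter: B converges to [16/15, 4/15]
B = shooting_radiosity([1.0, 0.0], [0.5, 0.5], [[0.0, 0.5], [0.5, 0.0]])
```

Each pass of the loop distributes the brightest patch's unshot energy, so the scene brightens monotonically toward the steady state, mirroring the description above.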
== Mathematical formulation ==
The basic radiosity method has its basis in the theory of thermal radiation, since radiosity relies on computing the amount of light energy transferred among surfaces. In order to simplify computations, the method assumes that all scattering is perfectly diffuse. Surfaces are typically discretized into quadrilateral or triangular elements over which a piecewise polynomial function is defined.
After this breakdown, the amount of light energy transfer can be computed by using the known reflectivity of the reflecting patch, combined with the view factor of the two patches. This dimensionless quantity is computed from the geometric orientation of two patches, and can be thought of as the fraction of the total possible emitting area of the first patch which is covered by the second.
More correctly, radiosity B is the energy per unit area leaving the patch surface per discrete time interval and is the combination of emitted and reflected energy:
$$B(x)\,\mathrm{d}A = E(x)\,\mathrm{d}A + \rho(x)\,\mathrm{d}A \int_{S} B(x')\,\frac{1}{\pi r^{2}}\,\cos\theta_{x}\,\cos\theta_{x'}\cdot \mathrm{Vis}(x,x')\,\mathrm{d}A'$$
where:
B(x) dA is the total energy leaving a small area dA around a point x.
E(x) dA is the emitted energy.
ρ(x) is the reflectivity of the point, giving reflected energy per unit area by multiplying by the incident energy per unit area (the total energy which arrives from other patches).
S denotes that the integration variable x' runs over all the surfaces in the scene.
r is the distance between x and x'.
θx and θx' are the angles between the line joining x and x' and vectors normal to the surface at x and x' respectively.
Vis(x,x' ) is a visibility function, defined to be 1 if the two points x and x' are visible from each other, and 0 if they are not.
If the surfaces are approximated by a finite number of planar patches, each of which is taken to have a constant radiosity Bi and reflectivity ρi, the above equation gives the discrete radiosity equation,
$$B_{i} = E_{i} + \rho_{i}\sum_{j=1}^{n} F_{ij} B_{j}$$
where Fij is the geometrical view factor for the radiation leaving j and hitting patch i.
This equation can then be applied to each patch. The equation is monochromatic, so color radiosity rendering requires calculation for each of the required colors.
=== Solution methods ===
The equation can formally be solved as a matrix equation, to give the vector solution:

$$B = (I - \rho F)^{-1} E$$
This gives the full "infinite bounce" solution for B directly. However the number of calculations to compute the matrix solution scales according to n³, where n is the number of patches. This becomes prohibitive for realistically large values of n.
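For a toy scene the matrix solution is a one-liner with a linear solver. The sketch below (NumPy; the three-patch numbers are illustrative, not from the text) forms (I − ρF) and solves for B directly:

```python
import numpy as np

E   = np.array([1.0, 0.0, 0.0])           # only patch 0 emits
rho = np.array([0.4, 0.6, 0.8])           # per-patch reflectivities
F   = np.array([[0.0, 0.3, 0.2],          # F[i, j]: view factor of j seen from i
                [0.3, 0.0, 0.4],
                [0.2, 0.4, 0.0]])

A = np.eye(3) - rho[:, None] * F          # (I - rho F), rho applied row-wise
B = np.linalg.solve(A, E)                 # "infinite bounce" radiosity per patch
```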
Instead, the equation can more readily be solved iteratively, by repeatedly applying the single-bounce update formula above. Formally, this is a solution of the matrix equation by Jacobi iteration. Because the reflectivities ρi are less than 1, this scheme converges quickly, typically requiring only a handful of iterations to produce a reasonable solution. Other standard iterative methods for matrix equation solutions can also be used, for example the Gauss–Seidel method, where updated values for each patch are used in the calculation as soon as they are computed, rather than all being updated synchronously at the end of each sweep. The solution can also be tweaked to iterate over each of the sending elements in turn in its main outermost loop for each update, rather than each of the receiving patches. This is known as the shooting variant of the algorithm, as opposed to the gathering variant. Using the view factor reciprocity, Ai Fij = Aj Fji, the update equation can also be re-written in terms of the view factor Fji seen by each sending patch Aj:
$$A_{i} B_{i} = A_{i} E_{i} + \rho_{i}\sum_{j=1}^{n} A_{j} B_{j} F_{ji}$$
This is sometimes known as the "power" formulation, since it is now the total transmitted power of each element that is being updated, rather than its radiosity.
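The Jacobi iteration described above is equally short: each sweep applies the single-bounce update to every patch simultaneously, and it converges to the direct matrix solution because the reflectivities are below 1. (NumPy sketch; the three-patch numbers are illustrative, not from the text.)

```python
import numpy as np

def jacobi_radiosity(E, rho, F, sweeps=60):
    # each sweep accounts for one more bounce of light
    B = E.copy()
    for _ in range(sweeps):
        B = E + rho * (F @ B)   # gather: B_i = E_i + rho_i * sum_j F_ij * B_j
    return B

E   = np.array([1.0, 0.0, 0.0])
rho = np.array([0.4, 0.6, 0.8])
F   = np.array([[0.0, 0.3, 0.2],
                [0.3, 0.0, 0.4],
                [0.2, 0.4, 0.0]])
B = jacobi_radiosity(E, rho, F)   # matches the direct (I - rho F)^-1 E solve
```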
The view factor Fij itself can be calculated in a number of ways. Early methods used a hemicube (an imaginary cube centered upon the first surface to which the second surface was projected, devised by Michael F. Cohen and Donald P. Greenberg in 1985). The surface of the hemicube was divided into pixel-like squares, for each of which a view factor can be readily calculated analytically. The full form factor could then be approximated by adding up the contribution from each of the pixel-like squares. The projection onto the hemicube, which could be adapted from standard methods for determining the visibility of polygons, also solved the problem of intervening patches partially obscuring those behind.
However all this was quite computationally expensive, because ideally form factors must be derived for every possible pair of patches, leading to a quadratic increase in computation as the number of patches increased. This can be reduced somewhat by using a binary space partitioning tree to reduce the amount of time spent determining which patches are completely hidden from others in complex scenes; but even so, the time spent to determine the form factor still typically scales as n log n. New methods include adaptive integration.
=== Sampling approaches ===
The form factors Fij themselves are not in fact explicitly needed in either of the update equations; neither to estimate the total intensity Σj Fij Bj gathered from the whole view, nor to estimate how the power Aj Bj being radiated is distributed. Instead, these updates can be estimated by sampling methods, without ever having to calculate form factors explicitly. Since the mid 1990s such sampling approaches have been the methods most predominantly used for practical radiosity calculations.
The gathered intensity can be estimated by generating a set of samples in the unit circle, lifting these onto the hemisphere, and then seeing what was the radiosity of the element that a ray incoming in that direction would have originated on. The estimate for the total gathered intensity is then just the average of the radiosities discovered by each ray. Similarly, in the power formulation, power can be distributed by generating a set of rays from the radiating element in the same way, and spreading the power to be distributed equally between each element a ray hits.
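The lift from the unit circle to the hemisphere mentioned above is often implemented as Malley's method: sampling the unit disk uniformly by area and projecting straight up yields directions distributed proportionally to cos θ, which is exactly the weighting needed for the diffuse gathering estimate. A sketch (plain Python; helper name is ours):

```python
import math, random

def cosine_sample_hemisphere(rng):
    # uniform sample on the unit disk by area, then lifted vertically
    # onto the hemisphere: resulting density is cos(theta) / pi
    r = math.sqrt(rng.random())
    phi = 2 * math.pi * rng.random()
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))   # the lift
    return (x, y, z)

# sanity check: under the density cos(theta)/pi, the mean of cos(theta)
# is the integral of cos(theta) * cos(theta)/pi over the hemisphere = 2/3
rng = random.Random(0)
mean_z = sum(cosine_sample_hemisphere(rng)[2] for _ in range(100_000)) / 100_000
```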
This is essentially the same distribution that a path-tracing program would sample in tracing back one diffuse reflection step; or that a bidirectional ray-tracing program would sample to achieve one forward diffuse reflection step when light source mapping forwards. The sampling approach therefore to some extent represents a convergence between the two techniques, the key difference remaining that the radiosity technique aims to build up a sufficiently accurate map of the radiance of all the surfaces in the scene, rather than just a representation of the current view.
== Reducing computation time ==
Although in its basic form radiosity is assumed to have a quadratic increase in computation time with added geometry (surfaces and patches), this need not be the case. The radiosity problem can be rephrased as a problem of rendering a texture mapped scene. In this case, the computation time increases only linearly with the number of patches (ignoring complex issues like cache use).
Following the commercial enthusiasm for radiosity-enhanced imagery, but prior to the standardization of rapid radiosity calculation, many architects and graphic artists used a technique referred to loosely as false radiosity. By darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via self-illumination or diffuse mapping, a radiosity-like effect of patch interaction could be created with a standard scanline renderer (cf. ambient occlusion).
Static, pre-computed radiosity may be displayed in real time via lightmaps using standard rasterization techniques.
== Advantages ==
One of the advantages of the Radiosity algorithm is that it is relatively simple to explain and implement. This makes it a useful algorithm for teaching students about global illumination algorithms. A typical direct illumination renderer already contains nearly all of the algorithms (perspective transformations, texture mapping, hidden surface removal) required to implement radiosity. A strong grasp of mathematics is not required to understand or implement this algorithm.
== Limitations ==
Typical radiosity methods only account for light paths of the form LD*E, i.e. paths which start at a light source and make multiple diffuse bounces before reaching the eye. Although there are several approaches to integrating other illumination effects such as specular and glossy reflections, radiosity-based methods are generally not used to solve the complete rendering equation.
Basic radiosity also has trouble resolving sudden changes in visibility (e.g. hard-edged shadows) because coarse, regular discretization into piecewise constant elements corresponds to a low-pass box filter of the spatial domain. Discontinuity meshing [1] uses knowledge of visibility events to generate a more intelligent discretization.
== Confusion about terminology ==
Radiosity was perhaps the first rendering algorithm in widespread use which accounted for diffuse indirect lighting. Earlier rendering algorithms, such as Whitted-style ray tracing were capable of computing effects such as reflections, refractions, and shadows, but despite being highly global phenomena, these effects were not commonly referred to as "global illumination." As a consequence, the terms "diffuse interreflection" and "radiosity" both became confused with "global illumination" in popular parlance. However, the three are distinct concepts.
The radiosity method, in the context of computer graphics, derives from (and is fundamentally the same as) the radiosity method in heat transfer. In this context, radiosity is the total radiative flux (both reflected and re-radiated) leaving a surface; this is also sometimes known as radiant exitance. Calculation of radiosity, rather than surface temperatures, is a key aspect of the radiosity method that permits linear matrix methods to be applied to the problem.
== See also ==
Cornell Box
Photon Mapping
Ray tracing
Path tracing
== References ==
== Further reading ==
Radiosity Overview, from HyperGraph of SIGGRAPH (provides full matrix radiosity algorithm and progressive radiosity algorithm)
Radiosity, by Hugo Elias (also provides a general overview of lighting algorithms, along with programming examples)
Radiosity, by Allen Martin (a slightly more mathematical explanation of radiosity)
ROVER, by Dr. Tralvex Yeap (Radiosity Abstracts & Bibliography Library)
Radiosity: Basic Implementations (Basic radiosity survey)
== External links ==
RADical, by Parag Chaudhuri (an implementation of shooting & sorting variant of progressive radiosity algorithm with OpenGL acceleration, extending from GLUTRAD by Colbeck)
Radiosity Renderer and Visualizer (simple implementation of radiosity renderer based on OpenGL)
Enlighten (Licensed software code that provides realtime radiosity for computer game applications. Developed by the UK company Geomerics)
In numerical analysis, the quasi-Monte Carlo method is a method for numerical integration and solving some other problems using low-discrepancy sequences (also called quasi-random sequences or sub-random sequences) to achieve variance reduction. This is in contrast to the regular Monte Carlo method or Monte Carlo integration, which are based on sequences of pseudorandom numbers.
Monte Carlo and quasi-Monte Carlo methods are stated in a similar way.
The problem is to approximate the integral of a function f as the average of the function evaluated at a set of points x1, ..., xN:
$$\int_{[0,1]^{s}} f(u)\,\mathrm{d}u \approx \frac{1}{N}\sum_{i=1}^{N} f(x_{i}).$$
Since we are integrating over the s-dimensional unit cube, each xi is a vector of s elements. The difference between quasi-Monte Carlo and Monte Carlo is the way the xi are chosen. Quasi-Monte Carlo uses a low-discrepancy sequence such as the Halton sequence, the Sobol sequence, or the Faure sequence, whereas Monte Carlo uses a pseudorandom sequence. The advantage of using low-discrepancy sequences is a faster rate of convergence. Quasi-Monte Carlo has a rate of convergence close to O(1/N), whereas the rate for the Monte Carlo method is O(N^(−1/2)).
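A minimal sketch of the difference (plain Python; the integrand x·y, with exact integral 1/4, is an illustrative choice): the Halton sequence is built from the radical inverse in coprime bases, and its points are substituted directly for the pseudorandom points in the same average.

```python
import random

def radical_inverse(i, base):
    # mirror the base-b digits of i about the radix point: 6 = 110_2 -> 0.011_2
    inv, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        inv += (i % base) / denom
        i //= base
    return inv

def halton(i, bases=(2, 3)):
    # i-th point of the s-dimensional Halton sequence (one coprime base per axis)
    return tuple(radical_inverse(i, b) for b in bases)

# integrate f(x, y) = x * y over [0,1]^2 (exact value 1/4) both ways
N = 4096
qmc = sum(x * y for x, y in (halton(i) for i in range(1, N + 1))) / N
rng = random.Random(0)
mc = sum(rng.random() * rng.random() for _ in range(N)) / N
```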
The Quasi-Monte Carlo method recently became popular in the area of mathematical finance or computational finance. In these areas, high-dimensional numerical integrals, where the integral should be evaluated within a threshold ε, occur frequently. Hence, the Monte Carlo method and the quasi-Monte Carlo method are beneficial in these situations.
== Approximation error bounds of quasi-Monte Carlo ==
The approximation error of the quasi-Monte Carlo method is bounded by a term proportional to the discrepancy of the set x1, ..., xN. Specifically, the Koksma–Hlawka inequality states that the error
$$\varepsilon = \left|\int_{[0,1]^{s}} f(u)\,\mathrm{d}u - \frac{1}{N}\sum_{i=1}^{N} f(x_{i})\right|$$
is bounded by
$$|\varepsilon| \leq V(f)\,D_{N},$$
where V(f) is the Hardy–Krause variation of the function f (see Morokoff and Caflisch (1995) for the detailed definitions). DN is the so-called star discrepancy of the set (x1,...,xN) and is defined as
$$D_{N} = \sup_{Q\subset [0,1]^{s}}\left|\frac{\text{number of points in }Q}{N} - \operatorname{volume}(Q)\right|,$$
where Q is a rectangular solid in [0,1]^s with sides parallel to the coordinate axes. The inequality
$$|\varepsilon| \leq V(f)\,D_{N}$$

can be used to show that the error of the approximation by the quasi-Monte Carlo method is O((log N)^s / N), whereas the Monte Carlo method has a probabilistic error of O(N^(−1/2)). Thus, for sufficiently large N, quasi-Monte Carlo will always outperform random Monte Carlo. However, (log N)^s grows exponentially quickly with the dimension, meaning a poorly-chosen sequence can be much worse than Monte Carlo in high dimensions. In practice, it is almost always possible to select an appropriate low-discrepancy sequence, or apply an appropriate transformation to the integrand, to ensure that quasi-Monte Carlo performs at least as well as Monte Carlo (and often much better).
== Monte Carlo and quasi-Monte Carlo for multidimensional integrations ==
For one-dimensional integration, quadrature methods such as the trapezoidal rule, Simpson's rule, or Newton–Cotes formulas are known to be efficient if the function is smooth. These approaches can also be used for multidimensional integration by repeating the one-dimensional integrals over multiple dimensions. However, the number of function evaluations grows exponentially as s, the number of dimensions, increases. Hence, a method that can overcome this curse of dimensionality should be used for multidimensional integration. The standard Monte Carlo method is frequently used when the quadrature methods are difficult or expensive to implement. Monte Carlo and quasi-Monte Carlo methods are accurate and relatively fast when the dimension is high, up to 300 or higher.
Morokoff and Caflisch studied the performance of Monte Carlo and quasi-Monte Carlo methods for integration. In the paper, Halton, Sobol, and Faure sequences for quasi-Monte Carlo are compared with the standard Monte Carlo method using pseudorandom sequences. They found that the Halton sequence performs best for dimensions up to around 6; the Sobol sequence performs best for higher dimensions; and the Faure sequence, while outperformed by the other two, still performs better than a pseudorandom sequence.
However, Morokoff and Caflisch gave examples where the advantage of the quasi-Monte Carlo is less than expected theoretically. Still, in the examples studied by Morokoff and Caflisch, the quasi-Monte Carlo method did yield a more accurate result than the Monte Carlo method with the same number of points. Morokoff and Caflisch remark that the advantage of the quasi-Monte Carlo method is greater if the integrand is smooth, and the number of dimensions s of the integral is small.
== Drawbacks of quasi-Monte Carlo ==
Lemieux mentioned the drawbacks of quasi-Monte Carlo:
In order for {\displaystyle O\left({\frac {(\log N)^{s}}{N}}\right)} to be smaller than {\displaystyle O\left({\frac {1}{\sqrt {N}}}\right)}, {\displaystyle s} needs to be small and {\displaystyle N} needs to be large (e.g. {\displaystyle N>2^{s}}). For large s, depending on the value of N, the discrepancy of a point set from a low-discrepancy generator might not be smaller than that of a random set.
For many functions arising in practice, {\displaystyle V(f)=\infty } (e.g. if Gaussian variables are used).
We only know an upper bound on the error (i.e., ε ≤ V(f)DN), and it is difficult to compute {\displaystyle D_{N}^{*}} and {\displaystyle V(f)}.
In order to overcome some of these difficulties, we can use a randomized quasi-Monte Carlo method.
== Randomization of quasi-Monte Carlo ==
Since the low-discrepancy sequences are not random but deterministic, the quasi-Monte Carlo method can be seen as a deterministic or derandomized algorithm. In this case, we only have the bound (e.g., ε ≤ V(f)DN) on the error, and the error is hard to estimate. In order to recover our ability to analyze and estimate the variance, we can randomize the method (see randomization for the general idea). The resulting method is called the randomized quasi-Monte Carlo method and can also be viewed as a variance reduction technique for the standard Monte Carlo method. Among several methods, the simplest transformation procedure is random shifting. Let {x1,...,xN} be the point set from the low-discrepancy sequence. We sample an s-dimensional random vector U and mix it with {x1, ..., xN}. In detail, for each xj, create
{\displaystyle y_{j}=x_{j}+U{\pmod {1}}} and use the sequence {\displaystyle (y_{j})} instead of {\displaystyle (x_{j})}. If we have R replications for Monte Carlo, we sample a fresh s-dimensional random vector U for each replication. Randomization makes it possible to estimate the variance while still using quasi-random sequences. Compared to pure quasi-Monte Carlo, the number of samples of the quasi-random sequence is divided by R for an equivalent computational cost, which reduces the theoretical convergence rate. Compared to standard Monte Carlo, the variance and the computation speed are slightly better, according to the experimental results in Tuffin (2008).
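The random-shift procedure can be sketched as follows (not from the source; the 1-D van der Corput sequence, the integrand, and the replication counts are illustrative choices):

```python
import random

def van_der_corput(i, base=2):
    """Radical inverse of i in the given base (a 1-D low-discrepancy point)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def shifted_qmc_estimate(f, n, rng):
    """One randomized-QMC replication: shift every low-discrepancy
    point by the same random offset U, modulo 1 (y_j = x_j + U mod 1)."""
    u = rng.random()
    return sum(f((van_der_corput(j) + u) % 1.0) for j in range(1, n + 1)) / n

# R independent replications give an unbiased estimate plus a variance estimate.
rng = random.Random(1)
R, n = 10, 512
estimates = [shifted_qmc_estimate(lambda x: x * x, n, rng) for _ in range(R)]
mean = sum(estimates) / R
var = sum((e - mean) ** 2 for e in estimates) / (R - 1)
print(mean)  # close to the exact integral of x^2 over [0,1], i.e. 1/3
```

Because each replication uses a fresh shift U, the R estimates are independent and identically distributed, so their sample variance is a valid error estimate, which plain (unrandomized) quasi-Monte Carlo cannot provide.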
== See also ==
Monte Carlo method – Probabilistic problem-solving algorithm
Monte Carlo methods in finance – Probabilistic measurement methods
Quasi-Monte Carlo methods in finance
Biology Monte Carlo method – Method for simulating ion transport
Computational physics – Numerical simulations of physical problems via computers
Low-discrepancy sequences – Type of mathematical sequence
Discrepancy theory – Theory of irregularities of distribution
Markov chain Monte Carlo – Calculation of complex statistical distributions
== References ==
R. E. Caflisch, Monte Carlo and quasi-Monte Carlo methods, Acta Numerica vol. 7, Cambridge University Press, 1998, pp. 1–49.
Josef Dick and Friedrich Pillichshammer, Digital Nets and Sequences. Discrepancy Theory and Quasi-Monte Carlo Integration, Cambridge University Press, Cambridge, 2010, ISBN 978-0-521-19159-3
Gunther Leobacher and Friedrich Pillichshammer, Introduction to quasi-Monte Carlo Integration and Applications, Compact Textbooks in Mathematics, Birkhäuser, 2014, ISBN 978-3-319-03424-9
Michael Drmota and Robert F. Tichy, Sequences, discrepancies and applications, Lecture Notes in Math., 1651, Springer, Berlin, 1997, ISBN 3-540-62606-9
William J. Morokoff and Russel E. Caflisch, Quasi-random sequences and their discrepancies, SIAM J. Sci. Comput. 15 (1994), no. 6, 1251–1279 (At CiteSeer:[2])
Harald Niederreiter. Random Number Generation and Quasi-Monte Carlo Methods. Society for Industrial and Applied Mathematics, 1992. ISBN 0-89871-295-5
Harald G. Niederreiter, Quasi-Monte Carlo methods and pseudo-random numbers, Bull. Amer. Math. Soc. 84 (1978), no. 6, 957–1041
Oto Strauch and Štefan Porubský, Distribution of Sequences: A Sampler, Peter Lang Publishing House, Frankfurt am Main 2005, ISBN 3-631-54013-2
== External links ==
The MCQMC Wiki page contains a lot of free online material on Monte Carlo and quasi-Monte Carlo methods
A very intuitive and comprehensive introduction to Quasi-Monte Carlo methods
The Phong reflection model (also called Phong illumination or Phong lighting) is an empirical model of the local illumination of points on a surface designed by the computer graphics researcher Bui Tuong Phong. In 3D computer graphics, it is sometimes referred to as "Phong shading", particularly if the model is used with the interpolation method of the same name and in the context of pixel shaders or other places where a lighting calculation can be referred to as “shading”.
== History ==
The Phong reflection model was developed by Bui Tuong Phong at the University of Utah, who published it in his 1975 Ph.D. dissertation. It was published in conjunction with a method for interpolating the calculation for each individual pixel that is rasterized from a polygonal surface model; the interpolation technique is known as Phong shading, even when it is used with a reflection model other than Phong's. Phong's methods were considered radical at the time of their introduction, but have since become the de facto baseline shading method for many rendering applications. Phong's methods have proven popular due to their generally efficient use of computation time per rendered pixel.
== Concepts ==
Phong reflection is an empirical model of local illumination. It describes the way a surface reflects light as a combination of the diffuse reflection of rough surfaces with the specular reflection of shiny surfaces. It is based on Phong's informal observation that shiny surfaces have small intense specular highlights, while dull surfaces have large highlights that fall off more gradually. The model also includes an ambient term to account for the small amount of light that is scattered about the entire scene.
For each light source in the scene, components {\displaystyle i_{\text{s}}} and {\displaystyle i_{\text{d}}} are defined as the intensities (often as RGB values) of the specular and diffuse components of the light sources, respectively. A single term {\displaystyle i_{\text{a}}} controls the ambient lighting; it is sometimes computed as a sum of contributions from all light sources.
For each material in the scene, the following parameters are defined:
{\displaystyle k_{\text{s}}}, a specular reflection constant, the ratio of reflection of the specular term of incoming light,
{\displaystyle k_{\text{d}}}, a diffuse reflection constant, the ratio of reflection of the diffuse term of incoming light (Lambertian reflectance),
{\displaystyle k_{\text{a}}}, an ambient reflection constant, the ratio of reflection of the ambient term present in all points in the scene rendered, and
{\displaystyle \alpha }, a shininess constant for this material, which is larger for surfaces that are smoother and more mirror-like. When this constant is large the specular highlight is small.
Furthermore, there is
{\displaystyle {\text{lights}}}, the set of all light sources,
{\displaystyle {\hat {L}}_{m}}, the direction vector from the point on the surface toward each light source ({\displaystyle m} specifies the light source),
{\displaystyle {\hat {N}}}, the normal at this point on the surface,
{\displaystyle {\hat {R}}_{m}}, the direction that a perfectly reflected ray of light would take from this point on the surface, and
{\displaystyle {\hat {V}}}, the direction pointing towards the viewer (such as a virtual camera).
Then the Phong reflection model provides an equation for computing the illumination of each surface point {\displaystyle I_{\text{p}}}:
{\displaystyle I_{\text{p}}=k_{\text{a}}i_{\text{a}}+\sum _{m\;\in \;{\text{lights}}}(k_{\text{d}}({\hat {L}}_{m}\cdot {\hat {N}})i_{m,{\text{d}}}+k_{\text{s}}({\hat {R}}_{m}\cdot {\hat {V}})^{\alpha }i_{m,{\text{s}}}).}
where the direction vector {\displaystyle {\hat {R}}_{m}} is calculated as the reflection of {\displaystyle {\hat {L}}_{m}} on the surface characterized by the surface normal {\displaystyle {\hat {N}}}, using
{\displaystyle {\hat {R}}_{m}=2({\hat {L}}_{m}\cdot {\hat {N}}){\hat {N}}-{\hat {L}}_{m}}
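As a concrete illustration, a minimal single-channel implementation of the model and the reflection formula above might look like the following sketch (all parameter values are arbitrary; per-channel RGB evaluation would simply run this once per channel):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(surface_normal, light_dirs, view_dir,
          ka, kd, ks, alpha, ia, i_d, i_s):
    """Scalar (single-channel) Phong illumination at a surface point.
    light_dirs: vectors from the point toward each light source;
    i_d, i_s: per-light diffuse and specular intensities."""
    n = normalize(surface_normal)
    v = normalize(view_dir)
    intensity = ka * ia                      # ambient term
    for m, l in enumerate(light_dirs):
        l = normalize(l)
        ldotn = dot(l, n)
        if ldotn <= 0:                       # light below the surface
            continue
        # perfect mirror reflection: R = 2(L.N)N - L
        r = tuple(2 * ldotn * nc - lc for nc, lc in zip(n, l))
        intensity += kd * ldotn * i_d[m]     # diffuse term
        rdotv = dot(r, v)
        if rdotv > 0:                        # clamp the specular term
            intensity += ks * rdotv ** alpha * i_s[m]
    return intensity

# Light straight above a flat surface, viewer along the reflection direction:
I = phong((0, 0, 1), [(0, 0, 1)], (0, 0, 1),
          ka=0.1, kd=0.6, ks=0.3, alpha=32, ia=1.0, i_d=[1.0], i_s=[1.0])
print(I)  # 0.1 + 0.6 + 0.3 = 1.0
```

The clamping of the two dot products implements the caveat discussed below: each term is included only when its dot product is positive.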
The hats indicate that the vectors are normalized. The diffuse term is not affected by the viewer direction ({\displaystyle {\hat {V}}}). The specular term is large only when the viewer direction ({\displaystyle {\hat {V}}}) is aligned with the reflection direction {\displaystyle {\hat {R}}_{m}}. Their alignment is measured by the {\displaystyle \alpha } power of the cosine of the angle between them. The cosine of the angle between the normalized vectors {\displaystyle {\hat {R}}_{m}} and {\displaystyle {\hat {V}}} is equal to their dot product. When {\displaystyle \alpha } is large, in the case of a nearly mirror-like reflection, the specular highlight will be small, because any viewpoint not aligned with the reflection will have a cosine less than one that rapidly approaches zero when raised to a high power.
Although the above formulation is the common way of presenting the Phong reflection model, each term should only be included if the term's dot product is positive. (Additionally, the specular term should only be included if the dot product of the diffuse term is positive.)
When the color is represented as RGB values, as is often the case in computer graphics, this equation is typically computed separately for the R, G and B intensities, allowing different reflection constants {\displaystyle k_{\text{a}},} {\displaystyle k_{\text{d}}} and {\displaystyle k_{\text{s}}} for the different color channels.
When implementing the Phong reflection model, there are a number of methods for approximating the model, rather than implementing the exact formulas, which can speed up the calculation; for example, the Blinn–Phong reflection model is a modification of the Phong reflection model, which is more efficient if the viewer and the light source are treated to be at infinity.
Another approximation, addressing the calculation of the exponentiation in the specular term, is the following. Considering that the specular term should be taken into account only if its dot product is positive, it can be approximated as
{\displaystyle \max(0,{\hat {R}}_{m}\cdot {\hat {V}})^{\alpha }=\max(0,1-\lambda )^{\beta \gamma }=\left(\max(0,1-\lambda )^{\beta }\right)^{\gamma }\approx \max(0,1-\beta \lambda )^{\gamma }}
where {\displaystyle \lambda =1-{\hat {R}}_{m}\cdot {\hat {V}}}, and {\displaystyle \beta =\alpha /\gamma \,} is a real number which doesn't have to be an integer. If {\displaystyle \gamma } is chosen to be a power of 2, i.e. {\displaystyle \gamma =2^{n}} where {\displaystyle n} is an integer, then the expression {\displaystyle (1-\beta \lambda )^{\gamma }} can be calculated more efficiently by squaring {\displaystyle (1-\beta \lambda )} {\displaystyle n} times, i.e.
{\displaystyle (1-\beta \lambda )^{\gamma }\,=\,(1-\beta \lambda )^{2^{n}}\,=\,(1-\beta \lambda )^{\overbrace {\scriptstyle 2\,\cdot \,2\,\cdot \,\dots \,\cdot \,2} ^{n}}\,=\,(\dots ((1-\beta \lambda )\overbrace {^{2})^{2}\dots )^{2}} ^{n}.}
This approximation of the specular term holds for a sufficiently large integer {\displaystyle \gamma } (typically, 4 or 8 will be enough).
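The repeated-squaring trick above can be sketched as follows; the choices of α, n, and the test value are arbitrary, and the code compares the approximation against the exact power:

```python
def specular_approx(r_dot_v, alpha, n=3):
    """Approximate max(0, R.V)^alpha by the scheme described above:
    with gamma = 2**n and beta = alpha/gamma,
    max(0, R.V)^alpha ~= max(0, 1 - beta*lambda)**gamma, lambda = 1 - R.V.
    The power gamma is computed by squaring n times instead of calling pow."""
    gamma = 2 ** n
    beta = alpha / gamma
    lam = 1.0 - r_dot_v
    base = max(0.0, 1.0 - beta * lam)
    for _ in range(n):          # base ** (2**n) via n squarings
        base *= base
    return base

exact = max(0.0, 0.95) ** 24            # true specular falloff, alpha = 24
approx = specular_approx(0.95, alpha=24, n=3)
print(exact, approx)
```

For viewing directions close to the reflection direction (λ small) the two values stay close, which is the regime where the specular highlight is visible anyway.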
Furthermore, the value {\displaystyle \lambda } can be approximated as {\displaystyle \lambda =({\hat {R}}_{m}-{\hat {V}})\cdot ({\hat {R}}_{m}-{\hat {V}})/2}, or as {\displaystyle \lambda =({\hat {R}}_{m}\times {\hat {V}})\cdot ({\hat {R}}_{m}\times {\hat {V}})/2.}
The latter is much less sensitive to normalization errors in {\displaystyle {\hat {R}}_{m}} and {\displaystyle {\hat {V}}} than Phong's dot-product-based {\displaystyle \lambda =1-{\hat {R}}_{m}\cdot {\hat {V}}} is, and practically doesn't require {\displaystyle {\hat {R}}_{m}} and {\displaystyle {\hat {V}}} to be normalized except for very low-resolution triangle meshes.
This method substitutes a few multiplications for a variable exponentiation, and removes the need for an accurate reciprocal-square-root-based vector normalization.
== Inverse model ==
The Phong reflection model in combination with Phong shading is an approximation of shading of objects in real life. This means that the Phong equation can relate the shading seen in a photograph with the surface normals of the visible object. Inverse refers to the wish to estimate the surface normals given a rendered image, natural or computer-made.
The Phong reflection model contains many parameters, such as the surface diffuse reflection parameter (albedo), which may vary within the object. Thus the normals of an object in a photograph can only be determined by introducing additional information such as the number of lights, the light directions and the reflection parameters.
For example, we have a cylindrical object, for instance a finger, and wish to compute the normal {\displaystyle N=[N_{x},N_{z}]} on a line on the object. We assume only one light, no specular reflection, and uniform known (approximated) reflection parameters. We can then simplify the Phong equation to:
{\displaystyle I_{p}(x)=C_{a}+C_{d}(L(x)\cdot N(x))}
with {\displaystyle C_{a}} a constant equal to the ambient light and {\displaystyle C_{d}} a constant equal to the diffuse reflection. We can rewrite the equation as:
{\displaystyle (I_{p}(x)-C_{a})/C_{d}=L(x)\cdot N(x)}
which can be rewritten for a line through the cylindrical object as:
{\displaystyle (I_{p}-C_{a})/C_{d}=L_{x}N_{x}+L_{z}N_{z}}
For instance, if the light direction is 45 degrees above the object, {\displaystyle L=[0.71,0.71]}, we get two equations with two unknowns:
{\displaystyle (I_{p}-C_{a})/C_{d}=0.71N_{x}+0.71N_{z}}
{\displaystyle 1={\sqrt {(N_{x}^{2}+N_{z}^{2})}}}
Because of the squared terms in the equations there are two possible solutions for the normal direction. Thus some prior information on the geometry is needed to select the correct normal direction. The normals are directly related to angles of inclination of the line on the object surface. Thus the normals allow the calculation of the relative surface heights of the line on the object using a line integral, if we assume a continuous surface.
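The two-equation system above can be solved in closed form; this sketch (not from the source; the intensity and reflection constants are illustrative) returns both candidate normals, making the two-fold ambiguity explicit:

```python
import math

def inverse_phong_normal(i_p, c_a, c_d, l=(0.71, 0.71)):
    """Recover the two candidate unit normals [Nx, Nz] from a single
    Lambertian intensity: (Ip - Ca)/Cd = Lx*Nx + Lz*Nz with |N| = 1.
    Assumes Lx == Lz (the 45-degree light of the example above).
    Returns both solutions; prior geometric knowledge must pick one."""
    lx, lz = l
    a = (i_p - c_a) / c_d            # the normalized dot product L . N
    s = a / lx                       # then Nx + Nz = s
    # Substitute Nz = s - Nx into Nx^2 + Nz^2 = 1:
    # Nx is a root of t^2 - s*t + (s^2 - 1)/2 = 0.
    disc = 2.0 - s * s
    if disc < 0:
        raise ValueError("intensity inconsistent with a unit normal")
    root = math.sqrt(disc)
    nx1, nx2 = (s + root) / 2, (s - root) / 2
    return [(nx1, s - nx1), (nx2, s - nx2)]

# A point lit exactly along one axis: intensity Ca + Cd*0.71 yields the
# ambiguous pair N = [1, 0] and N = [0, 1].
sols = inverse_phong_normal(i_p=0.1 + 0.5 * 0.71, c_a=0.1, c_d=0.5)
print(sols)
```

Both returned normals reproduce the observed intensity exactly, which is why the text requires prior geometric information to choose between them.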
If the object is not cylindrical, we have three unknown normal values {\displaystyle N=[N_{x},N_{y},N_{z}]}. Then the two equations still allow the normal to rotate around the view vector, so additional constraints are needed from prior geometric information. For instance, in face recognition those geometric constraints can be obtained using principal component analysis (PCA) on a database of depth maps of faces, allowing only surface-normal solutions which are found in a normal population.
== Applications ==
The Phong reflection model is often used together with Phong shading to shade surfaces in 3D computer graphics software. Apart from this, it may also be used for other purposes. For example, it has been used to model the reflection of thermal radiation from the Pioneer probes in an attempt to explain the Pioneer anomaly.
== See also ==
Bidirectional reflectance distribution function – Function of four real variables that defines how light is reflected at an opaque surface
Blinn–Phong shading model – Shading algorithm in computer graphics
List of common shading algorithms
Gamma correction – Image luminance mapping function
Phong shading – Interpolation technique for surface shading
Specular highlight – Bright spot of light that appears on shiny objects when illuminated
== References ==
== External links ==
Phong reflection model in Matlab
Phong reflection model in GLSL
In 3D computer graphics rendering, a hemicube is one way to represent a 180° view from a surface or point in space.
== Overview ==
A hemicube is a data structure used in computer graphics to represent a 180° view from a surface or point in space. It is a cube that has been cut in half along a plane parallel to one of its faces. The half of the environment above the cut is projected onto the five remaining faces: the full square top face and four half-height rectangular side faces.
Hemicubes are used in radiosity rendering, a method for calculating global illumination in 3D scenes. Radiosity calculates the amount of light that is reflected from one surface to another, taking into account the shape and material properties of the surfaces involved. Hemicubes are used to store the radiosity information for a hemisphere, which can then be used to calculate the radiosity for the entire scene.
Hemicubes are a relatively efficient way to store radiosity information, and they can be used to render scenes with complex lighting arrangements. However, they can be inaccurate for scenes with very bright or very dark areas.
Here are some of the advantages of using hemicubes in computer graphics:
They are relatively efficient to store and render.
They can be used to render scenes with complex lighting arrangements.
They are accurate for most scenes.
Here are some of the disadvantages of using hemicubes in computer graphics:
They can be inaccurate for scenes with very bright or very dark areas.
They can be difficult to implement in some rendering engines.
== Shape ==
Although the name implies any half of a cube, a hemicube is usually a cube cut in half through a plane parallel to one of its faces. The faces onto which the environment is projected are the one full square face on top and the four half faces (2:1 aspect ratio rectangles) on the sides; the open cut plane, on which the viewing point lies, is not itself projected onto.
The reason for this specific arrangement of faces is that it allows for an efficient representation of a 180° view from a surface or point in space. The top face captures directions near the surface normal (the zenith), while the four half side faces capture directions closer to the horizon. Together the five faces cover the full hemisphere of directions, and because each face is planar, each view can be rendered with a standard perspective projection, which allows for an efficient implementation of radiosity algorithms.
The hemicube data structure was first introduced by Cohen and Greenberg in 1985. They used it to develop a radiosity algorithm that could be used to render complex scenes with global illumination. Since then, hemicubes have been used in a variety of other applications, including environment mapping and reflection mapping.
The hemicube data structure is a relatively simple data structure, but it is very efficient for representing a 180° view.
Hemicubes can be used to render scenes with complex lighting arrangements, including scenes with shadows and reflections.
Hemicubes can be used to implement radiosity algorithms, which are used to calculate global illumination in 3D scenes.
Hemicubes can also be used for environment mapping and reflection mapping.
== Uses ==
The hemicube may be used in the Radiosity algorithm or other Light Transport algorithms in order to determine the amount of light arriving at a particular point on a surface.
The Radiosity algorithm is a method for calculating global illumination in 3D scenes. Global illumination is the process of taking into account the reflections and refractions of light as it travels through a scene. This results in more realistic images, as the light is not simply assumed to travel in straight lines.
The hemicube is used in the Radiosity algorithm to store the view factors for a hemisphere. A view factor is a measure of the amount of light that is reflected from one surface to another. The hemicube is divided into a grid of cells, and each cell stores the view factor for the direction that corresponds to that cell.
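The per-cell view factors can be precomputed from the cell's position on the hemicube. The sketch below (not from the source) uses the standard delta form-factor formulas for a hemicube whose faces span [-1, 1] laterally with unit height, as derived in Cohen and Greenberg's paper; summed over every cell, the deltas cover the whole hemisphere and so approach 1:

```python
import math

def top_face_delta_form_factor(x, y, da):
    """Delta form factor of a top-face hemicube cell centred at (x, y, 1):
    dF = dA / (pi * (x^2 + y^2 + 1)^2)."""
    return da / (math.pi * (x * x + y * y + 1) ** 2)

def side_face_delta_form_factor(y, z, da):
    """Delta form factor of a side-face cell at lateral offset y, height z:
    dF = z * dA / (pi * (y^2 + z^2 + 1)^2)."""
    return z * da / (math.pi * (y * y + z * z + 1) ** 2)

def total_form_factor(res=64):
    """Sum the delta form factors over all hemicube cells."""
    da = (2.0 / res) ** 2                # each cell is (2/res) x (2/res)
    total = 0.0
    # top face: cells covering [-1, 1] x [-1, 1] at height 1
    for i in range(res):
        for j in range(res):
            x = -1 + (i + 0.5) * 2 / res
            y = -1 + (j + 0.5) * 2 / res
            total += top_face_delta_form_factor(x, y, da)
    # four identical side half-faces: lateral in [-1, 1], height in [0, 1]
    for i in range(res):
        for j in range(res // 2):
            y = -1 + (i + 0.5) * 2 / res
            z = (j + 0.5) * 2 / res
            total += 4 * side_face_delta_form_factor(y, z, da)
    return total

print(total_form_factor())   # close to 1.0
```

In a full radiosity implementation, the scene is rasterized onto the hemicube faces and the form factor of each visible patch is the sum of the deltas of the cells it covers.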
When the Radiosity algorithm is run, it uses the hemicube to calculate the amount of light that is arriving at each point on a surface. The algorithm starts at a point on the surface and then traces rays in all directions. The view factors from the hemicube are used to calculate the amount of light that is reflected from each surface that the ray intersects.
The Radiosity algorithm is a computationally expensive algorithm, but it can produce very realistic images. The hemicube is a key part of the Radiosity algorithm, as it allows the algorithm to store the view factors for a hemisphere in a relatively efficient way.
The hemicube was first proposed by Michael F. Cohen and Donald P. Greenberg in their 1985 paper "The Hemi-cube: A Radiosity Solution for Complex Environments".
The hemicube has been used in a number of other Light Transport algorithms, including the Progressive Radiosity algorithm and the Monte Carlo Radiosity algorithm.
The hemicube can also be used for other purposes, such as environment mapping and reflection mapping.
Wireless sensor networks (WSNs) refer to networks of spatially dispersed and dedicated sensors that monitor and record the physical conditions of the environment and forward the collected data to a central location. WSNs can measure environmental conditions such as temperature, sound, pollution levels, humidity and wind.
These are similar to wireless ad hoc networks in the sense that they rely on wireless connectivity and spontaneous formation of networks so that sensor data can be transported wirelessly. WSNs monitor physical conditions, such as temperature, sound, and pressure. Modern networks are bi-directional, both collecting data and enabling control of sensor activity. The development of these networks was motivated by military applications such as battlefield surveillance. Such networks are used in industrial and consumer applications, such as industrial process monitoring and control and machine health monitoring and agriculture.
A WSN is built of "nodes" – from a few to hundreds or thousands, where each node is connected to other sensors. Each such node typically has several parts: a radio transceiver with an internal antenna or connection to an external antenna, a microcontroller, an electronic circuit for interfacing with the sensors and an energy source, usually a battery or an embedded form of energy harvesting. A sensor node might vary in size from a shoebox to (theoretically) a grain of dust, although microscopic dimensions have yet to be realized. Sensor node cost is similarly variable, ranging from a few to hundreds of dollars, depending on node sophistication. Size and cost constraints constrain resources such as energy, memory, computational speed and communications bandwidth. The topology of a WSN can vary from a simple star network to an advanced multi-hop wireless mesh network. Propagation can employ routing or flooding.
In computer science and telecommunications, wireless sensor networks are an active research area supporting many workshops and conferences, including International Workshop on Embedded Networked Sensors (EmNetS), IPSN, SenSys, MobiCom and EWSN. As of 2010, approximately 120 million remote units in wireless sensor networks had been deployed worldwide.
== Application ==
=== Area monitoring ===
Area monitoring is a common application of WSNs. In area monitoring, the WSN is deployed over a region where some phenomenon is to be monitored. A military example is the use of sensors to detect enemy intrusion; a civilian example is the geo-fencing of gas or oil pipelines.
=== Health care monitoring ===
There are several types of sensor networks for medical applications: implanted, wearable, and environment-embedded. Implantable medical devices are those that are inserted inside the human body. Wearable devices are used on the body surface of a human or in close proximity to the user. Environment-embedded systems employ sensors contained in the environment. Possible applications include body position measurement, location of persons, and overall monitoring of ill patients in hospitals and at home. Devices embedded in the environment track the physical state of a person for continuous health diagnosis, using as input the data from a network of depth cameras, a sensing floor, or other similar devices. Body-area networks can collect information about an individual's health, fitness, and energy expenditure. In health care applications the privacy and authenticity of user data is of prime importance. In particular, the integration of sensor networks with the IoT makes user authentication more challenging; solutions have, however, been proposed in recent work.
=== Habitat monitoring ===
Wireless sensor networks have been used to monitor various species and habitats, beginning with the Great Duck Island Deployment, including marmots, cane toads in Australia and zebras in Kenya.
=== Environmental/Earth sensing ===
There are many applications in monitoring environmental parameters, examples of which are given below. They share the extra challenges of harsh environments and reduced power supply.
==== Air quality monitoring ====
Experiments have shown that personal exposure to air pollution in cities can vary significantly. Therefore, it is of interest to have higher temporal and spatial resolution of pollutants and particulates. For research purposes, wireless sensor networks have been deployed to monitor the concentration of gases dangerous to citizens (e.g., in London). However, sensors for gases and particulate matter suffer from high unit-to-unit variability, cross-sensitivities, and (concept) drift. Moreover, the quality of data is currently insufficient for trustworthy decision-making, as field calibration leads to unreliable measurement results, and frequent recalibration might be required. A possible solution could be blind calibration or the usage of mobile references.
==== Forest fire detection ====
A network of sensor nodes can be installed in a forest to detect when a fire has started. The nodes can be equipped with sensors to measure temperature, humidity and the gases produced by fire in the trees or vegetation. Early detection is crucial for successful firefighting; thanks to wireless sensor networks, the fire brigade can learn when a fire has started and how it is spreading.
==== Landslide detection ====
A landslide detection system makes use of a wireless sensor network to detect the slight movements of soil and changes in various parameters that may occur before or during a landslide. Through the data gathered, it may be possible to predict the occurrence of a landslide long before it actually happens.
==== Water quality monitoring ====
Water quality monitoring involves analyzing water properties in dams, rivers, lakes and oceans, as well as underground water reserves. The use of many wireless distributed sensors enables the creation of a more accurate map of the water status, and allows the permanent deployment of monitoring stations in locations of difficult access, without the need of manual data retrieval.
==== Natural disaster prevention ====
Wireless sensor networks can be effective in preventing adverse consequences of natural disasters, like floods. Wireless nodes have been deployed successfully in rivers, where changes in water levels must be monitored in real time.
=== Industrial monitoring ===
==== Machine health monitoring ====
Wireless sensor networks have been developed for machinery condition-based maintenance (CBM) as they offer significant cost savings and enable new functionality.
Wireless sensors can be placed in locations difficult or impossible to reach with a wired system, such as rotating machinery and untethered vehicles.
==== Data logging ====
Wireless sensor networks also are used for the collection of data for monitoring of environmental information. This can be as simple as monitoring the temperature in a fridge or the level of water in overflow tanks in nuclear power plants. The statistical information can then be used to show how systems have been working. The advantage of WSNs over conventional loggers is the "live" data feed that is possible.
==== Water/waste water monitoring ====
Monitoring the quality and level of water includes many activities, such as checking the quality of underground or surface water and safeguarding a country's water infrastructure for the benefit of both humans and animals. It may also be used to prevent the wastage of water.
==== Structural health monitoring ====
WSN can be used to monitor the condition of civil infrastructure and related geo-physical processes close to real time, and over long periods through data logging, using appropriately interfaced sensors.
==== Wine production ====
Wireless sensor networks are used to monitor wine production, both in the field and the cellar.
=== Threat detection ===
The Wide Area Tracking System (WATS) is a prototype network for detecting a ground-based nuclear device such as a nuclear "briefcase bomb". WATS is being developed at the Lawrence Livermore National Laboratory (LLNL). WATS would be made up of wireless gamma and neutron sensors connected through a communications network. Data picked up by the sensors undergoes "data fusion", which converts the information into easily interpreted forms; this data fusion is the most important aspect of the system.
The data fusion process occurs within the sensor network rather than at a centralized computer and is performed by a specially developed algorithm based on Bayesian statistics. WATS would not use a centralized computer for analysis because researchers found that factors such as latency and available bandwidth tended to create significant bottlenecks. Data processed in the field by the network itself (by transferring small amounts of data between neighboring sensors) is faster and makes the network more scalable.
An important factor in WATS development is ease of deployment, since more sensors both improves the detection rate and reduces false alarms. WATS sensors could be deployed in permanent positions or mounted in vehicles for mobile protection of specific locations. One barrier to the implementation of WATS is the size, weight, energy requirements and cost of currently available wireless sensors. The development of improved sensors is a major component of current research at the Nonproliferation, Arms Control, and International Security (NAI) Directorate at LLNL.
WATS was profiled to the U.S. House of Representatives' Military Research and Development Subcommittee on October 1, 1997, during a hearing on nuclear terrorism and countermeasures. On August 4, 1998, in a subsequent meeting of that subcommittee, Chairman Curt Weldon stated that research funding for WATS had been cut by the Clinton administration to a subsistence level and that the program had been poorly re-organized.
==== Incident monitoring ====
Studies show that using sensors for incident monitoring improves the response of firefighters and police to unexpected situations. For early detection of incidents, acoustic sensors can detect a spike in city noise caused by a possible accident, and thermal sensors can detect a possible fire.
=== Supply chains ===
Using low-power electronics, WSNs can also be applied cost-efficiently in supply chains across various industries.
== Characteristics ==
The main characteristics of a WSN include:
Power consumption constraints for nodes using batteries or energy harvesting. Examples of suppliers are ReVibe Energy and Perpetuum
Ability to cope with node failures (resilience)
Some mobility of nodes (for highly mobile nodes see MWSNs)
Heterogeneity or homogeneity of nodes, depending on the deployment
Scalability to large scale of deployment
Ability to withstand harsh environmental conditions
Ease of use
Cross-layer optimization
Cross-layer design is becoming an important area of study for wireless communications, because the traditional layered approach presents three main problems:
The traditional layered approach cannot share information among different layers, so each layer lacks complete information; it therefore cannot guarantee the optimization of the entire network.
The traditional layered approach does not have the ability to adapt to environmental change.
Because of interference between different users, access conflicts, fading, and the changing environment in wireless sensor networks, the traditional layered approach designed for wired networks is not applicable to wireless networks.
Cross-layer design can therefore be used to choose the optimal modulation and improve transmission performance, such as data rate, energy efficiency, and quality of service (QoS). Sensor nodes can be imagined as small computers that are extremely basic in terms of their interfaces and components. They usually consist of a processing unit with limited computational power and limited memory, sensors or MEMS (including specific conditioning circuitry), a communication device (usually a radio transceiver or, alternatively, an optical device), and a power source, usually in the form of a battery. Other possible inclusions are energy harvesting modules, secondary ASICs, and possibly a secondary communication interface (e.g. RS-232 or USB).
The base stations are one or more components of the WSN with much greater computational, energy, and communication resources. They act as a gateway between sensor nodes and the end user, as they typically forward data from the WSN on to a server. Other special components in routing-based networks are routers, designed to compute and distribute the routing tables.
== Platforms ==
=== Hardware ===
One major challenge in a WSN is to produce low cost and tiny sensor nodes. There are an increasing number of small companies producing WSN hardware and the commercial situation can be compared to home computing in the 1970s. Many of the nodes are still in the research and development stage, particularly their software. Also inherent to sensor network adoption is the use of very low power methods for radio communication and data acquisition.
In many applications, a WSN communicates with a local area network or wide area network through a gateway. The Gateway acts as a bridge between the WSN and the other network. This enables data to be stored and processed by devices with more resources, for example, in a remotely located server. A wireless wide area network used primarily for low-power devices is known as a Low-Power Wide-Area Network (LPWAN).
=== Wireless ===
There are several wireless standards and solutions for sensor node connectivity. Thread and Zigbee can connect sensors operating at 2.4 GHz with a data rate of 250 kbit/s. Many use a lower frequency to increase radio range (typically 1 km), for example Z-wave operates at 915 MHz and in the EU 868 MHz has been widely used but these have a lower data rate (typically 50 kbit/s). The IEEE 802.15.4 working group provides a standard for low power device connectivity and commonly sensors and smart meters use one of these standards for connectivity. With the emergence of Internet of Things, many other proposals have been made to provide sensor connectivity. LoRa is a form of LPWAN which provides long range low power wireless connectivity for devices, which has been used in smart meters and other long range sensor applications. Wi-SUN connects devices at home. NarrowBand IOT and LTE-M can connect up to millions of sensors and devices using cellular technology.
=== Software ===
Energy is the scarcest resource of WSN nodes, and it determines the lifetime of WSNs. WSNs may be deployed in large numbers in various environments, including remote and hostile regions, where ad hoc communications are a key component. For this reason, algorithms and protocols need to address the following issues:
Increased lifespan
Robustness and fault tolerance
Self-configuration
Lifetime maximization: Energy/Power Consumption of the sensing device should be minimized and sensor nodes should be energy efficient since their limited energy resource determines their lifetime. To conserve power, wireless sensor nodes normally power off both the radio transmitter and the radio receiver when not in use.
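The effect of powering the radio off when it is not in use can be illustrated with a back-of-the-envelope lifetime estimate. The figures below (battery capacity, active and sleep currents, duty cycle) are illustrative assumptions, not values from any particular platform.

```python
# Sketch: how duty cycling the radio extends node lifetime.
# All numbers are illustrative assumptions.

def lifetime_days(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Estimate lifetime from the average current draw."""
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return capacity_mah / avg_ma / 24  # hours -> days

always_on = lifetime_days(2400, 20.0, 0.01, 1.0)   # radio always on
cycled    = lifetime_days(2400, 20.0, 0.01, 0.01)  # 1% duty cycle

print(f"always on: {always_on:.0f} days, 1% duty cycle: {cycled:.0f} days")
```

Even with a modest sleep current, reducing the duty cycle to 1% turns a lifetime of days into a lifetime of more than a year, which is why duty cycling dominates WSN power management.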
==== Routing protocols ====
Wireless sensor networks are composed of low-energy, small-size, and low-range unattended sensor nodes. Recently, it has been observed that by periodically turning the sensing and communication capabilities of sensor nodes on and off, we can significantly reduce the active time and thus prolong network lifetime. However, this duty cycling may result in high network latency, routing overhead, and neighbor discovery delays due to asynchronous sleep and wake-up scheduling. These limitations call for a countermeasure for duty-cycled wireless sensor networks that minimizes routing information, routing traffic load, and energy consumption. Researchers from Sungkyunkwan University have proposed a lightweight non-increasing delivery-latency interval routing scheme referred to as LNDIR. This scheme can discover minimum-latency routes at each non-increasing delivery-latency interval instead of at each time slot. Simulation experiments demonstrated the validity of this approach in minimizing the routing information stored at each sensor. Furthermore, this routing can also guarantee the minimum delivery latency from each source to the sink. Performance improvements of up to 12-fold and 11-fold are observed in terms of routing traffic load reduction and energy efficiency, respectively, as compared to existing schemes.
==== Operating systems ====
Operating systems for wireless sensor network nodes are typically less complex than general-purpose operating systems. They more strongly resemble embedded systems, for two reasons. First, wireless sensor networks are typically deployed with a particular application in mind, rather than as a general platform. Second, a need for low costs and low power leads most wireless sensor nodes to have low-power microcontrollers ensuring that mechanisms such as virtual memory are either unnecessary or too expensive to implement.
It is therefore possible to use embedded operating systems such as eCos or uC/OS for sensor networks. However, such operating systems are often designed with real-time properties.
TinyOS, developed by David Culler, is perhaps the first operating system specifically designed for wireless sensor networks. TinyOS is based on an event-driven programming model instead of multithreading. TinyOS programs are composed of event handlers and tasks with run-to-completion semantics. When an external event occurs, such as an incoming data packet or a sensor reading, TinyOS signals the appropriate event handler to handle the event. Event handlers can post tasks that are scheduled by the TinyOS kernel some time later.
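The run-to-completion model described above can be sketched in a few lines. This is a hypothetical Python analogue of the TinyOS scheduling idea (real TinyOS components are written in nesC): an event handler runs to completion and defers longer work by posting a task, which the kernel later runs from a FIFO queue.

```python
# Minimal sketch of TinyOS-style event handlers and run-to-completion tasks
# (illustrative Python analogue, not real TinyOS code).
from collections import deque

task_queue = deque()
log = []

def post(task):
    """Event handlers post tasks; the kernel schedules them for later."""
    task_queue.append(task)

def on_packet(data):
    """Event handler: runs to completion, deferring longer work to a task."""
    log.append(f"event: {len(data)}-byte packet")
    post(lambda: log.append("task: packet processed"))

on_packet(b"\x01\x02")   # an external event signals its handler
while task_queue:        # the kernel runs posted tasks to completion, FIFO
    task_queue.popleft()()

print(log)
```

Because handlers and tasks never block or preempt one another, a single stack suffices, which is what makes this model attractive on memory-constrained nodes.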
LiteOS is a newly developed OS for wireless sensor networks, which provides UNIX-like abstraction and support for the C programming language.
Contiki, developed by Adam Dunkels, is an OS which uses a simpler programming style in C while providing advances such as 6LoWPAN and Protothreads.
RIOT (operating system) is a more recent real-time OS including similar functionality to Contiki.
PreonVM is an OS for wireless sensor networks, which provides 6LoWPAN based on Contiki and support for the Java programming language.
=== Online collaborative sensor data management platforms ===
Online collaborative sensor data management platforms are on-line database services that allow sensor owners to register and connect their devices to feed data into an online database for storage and also allow developers to connect to the database and build their own applications based on that data. Examples include Xively and the Wikisensing platform Archived 2021-06-09 at the Wayback Machine. Such platforms simplify online collaboration between users over diverse data sets ranging from energy and environment data to that collected from transport services. Other services include allowing developers to embed real-time graphs & widgets in websites; analyse and process historical data pulled from the data feeds; send real-time alerts from any datastream to control scripts, devices and environments.
The architecture of the Wikisensing system describes the key components of such systems to include APIs and interfaces for online collaborators, a middleware containing the business logic needed for the sensor data management and processing and a storage model suitable for the efficient storage and retrieval of large volumes of data.
== Simulation ==
At present, agent-based modeling and simulation is the only paradigm which allows the simulation of complex behavior in the environments of wireless sensors (such as flocking). Agent-based simulation of wireless sensor and ad hoc networks is a relatively new paradigm. Agent-based modelling was originally based on social simulation.
Network simulators like Opnet, Tetcos NetSim and NS can be used to simulate a wireless sensor network.
== Other concepts ==
=== Localization ===
Network localization refers to the problem of estimating the location of wireless sensor nodes during deployments and in dynamic settings. For ultra-low power sensors, size, cost and environment precludes the use of Global Positioning System receivers on sensors. In 2000, Nirupama Bulusu, John Heidemann and Deborah Estrin first motivated and proposed a radio connectivity based system for localization of wireless sensor networks. Subsequently, such localization systems have been referred to as range free localization systems, and many localization systems for wireless sensor networks have been subsequently proposed including AHLoS, APS, and Stardust.
=== Sensor data calibration and fault tolerance ===
Sensors and devices used in wireless sensor networks are state-of-the-art technology with the lowest possible price. The sensor measurements we get from these devices are therefore often noisy, incomplete and inaccurate. Researchers studying wireless sensor networks hypothesize that much more information can be extracted from hundreds of unreliable measurements spread across a field of interest than from a smaller number of high-quality, high-reliability instruments with the same total cost.
=== Macroprogramming ===
Macro-programming is a term coined by Matt Welsh. It refers to programming the entire sensor network as an ensemble, rather than individual sensor nodes. Another way to macro-program a network is to view the sensor network as a database, which was popularized by the TinyDB system developed by Sam Madden.
=== Reprogramming ===
Reprogramming is the process of updating the code on the sensor nodes. The most feasible form of reprogramming is remote reprogramming whereby the code is disseminated wirelessly while the nodes are deployed. Different reprogramming protocols exist that provide different levels of speed of operation, reliability, energy expenditure, requirement of code resident on the nodes, suitability to different wireless environments, resistance to DoS, etc. Popular reprogramming protocols are Deluge (2004), Trickle (2004), MNP (2005), Synapse (2008), and Zephyr (2009).
=== Security ===
Infrastructure-less architecture (i.e. no gateways are included, etc.) and inherent requirements (i.e. unattended working environment, etc.) of WSNs might pose several weak points that attract adversaries. Therefore, security is a big concern when WSNs are deployed for special applications such as military and healthcare. Owing to their unique characteristics, traditional security methods of computer networks would be useless (or less effective) for WSNs. Hence, lack of security mechanisms would cause intrusions towards those networks. These intrusions need to be detected and mitigation methods should be applied.
There have been important innovations in securing wireless sensor networks. Most wireless embedded networks use omnidirectional antennas, and therefore neighbors can overhear communication in and out of nodes. This observation was used to develop a primitive called "local monitoring", which was used for the detection of sophisticated attacks, like blackhole or wormhole attacks, which degrade the throughput of large networks to close to zero. This primitive has since been used by many researchers and commercial wireless packet sniffers. It was subsequently refined to handle more sophisticated attackers, including those using collusion, mobility, and multi-antenna, multi-channel devices.
=== Distributed sensor network ===
If a centralized architecture is used in a sensor network and the central node fails, the entire network collapses; however, the reliability of the sensor network can be increased by using a distributed control architecture. Distributed control is used in WSNs for the following reasons:
Sensor nodes are prone to failure,
For better collection of data,
To provide nodes with backup in case of failure of the central node.
There is also no centralised body to allocate resources, so the nodes have to be self-organized.
In distributed filtering over a sensor network, the general setup is to observe the underlying process through a group of sensors organized according to a given network topology, so that each individual observer estimates the system state based not only on its own measurement but also on its neighbors'.
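A minimal sketch of neighbor-aided estimation is consensus averaging: each node repeatedly mixes its estimate with its neighbors' estimates until the whole network agrees on the average. The ring topology, weights, and measurement values below are illustrative assumptions.

```python
# Sketch of distributed estimation by consensus averaging on a 4-node ring.
# Topology, weights, and readings are illustrative assumptions.

neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # 4-node ring
x = [10.0, 12.0, 9.0, 11.0]  # noisy local measurements of one quantity

for _ in range(50):
    # each node replaces its estimate with the mean of itself + 2 neighbors
    x = [(x[i] + sum(x[j] for j in neighbors[i])) / 3 for i in range(4)]

print(x)  # every estimate approaches the network-wide average, 10.5
```

Because the mixing weights here are symmetric and sum to one, the iteration converges to the average of the initial measurements, so each node ends up with an estimate it could not have computed from its own measurement alone.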
=== Data integration and sensor web ===
The data gathered from wireless sensor networks is usually saved in the form of numerical data in a central base station. Additionally, the Open Geospatial Consortium (OGC) is specifying standards for interoperability interfaces and metadata encodings that enable real time integration of heterogeneous sensor webs into the Internet, allowing any individual to monitor or control wireless sensor networks through a web browser.
=== In-network processing ===
To reduce communication costs some algorithms remove or reduce nodes' redundant sensor information and avoid forwarding data that is of no use. This technique has been used, for instance, for distributed anomaly detection or distributed optimization. As nodes can inspect the data they forward, they can measure averages or directionality for example of readings from other nodes. For example, in sensing and monitoring applications, it is generally the case that neighboring sensor nodes monitoring an environmental feature typically register similar values. This kind of data redundancy due to the spatial correlation between sensor observations inspires techniques for in-network data aggregation and mining. Aggregation reduces the amount of network traffic which helps to reduce energy consumption on sensor nodes. Recently, it has been found that network gateways also play an important role in improving energy efficiency of sensor nodes by scheduling more resources for the nodes with more critical energy efficiency need and advanced energy efficient scheduling algorithms need to be implemented at network gateways for the improvement of the overall network energy efficiency.
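The traffic saving from aggregation can be sketched with a toy routing tree: instead of forwarding every raw reading to the sink, each node forwards a single partial (sum, count) record for its subtree, from which the sink computes the network-wide mean. The tree shape and readings below are illustrative assumptions.

```python
# Sketch of in-network aggregation of a mean over a routing tree.
# Topology and readings are illustrative assumptions.

tree = {"sink": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}
reading = {"a": 21.0, "b": 23.0, "c": 22.0, "d": 20.0}  # sink has no sensor

def aggregate(node):
    """Return the (sum, count) partial aggregate of the subtree at node."""
    s = reading.get(node, 0.0)
    n = 1 if node in reading else 0
    for child in tree[node]:
        cs, cn = aggregate(child)
        s, n = s + cs, n + cn
    return s, n

total, count = aggregate("sink")
print(f"network mean = {total / count}")
```

Each link carries one fixed-size (sum, count) pair regardless of subtree size, which is the redundancy-exploiting saving the paragraph above describes.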
=== Secure data aggregation ===
This is a form of in-network processing in which sensor nodes are assumed to be unsecured and to have limited available energy, while the base station is assumed to be secure with unlimited available energy. Aggregation complicates the existing security challenges for wireless sensor networks and requires new security techniques tailored specifically to these scenarios. Providing security to aggregated data in wireless sensor networks is known as secure data aggregation in WSN.
Two main security challenges in secure data aggregation are confidentiality and integrity of data. While encryption is traditionally used to provide end to end confidentiality in wireless sensor network, the aggregators in a secure data aggregation scenario need to decrypt the encrypted data to perform aggregation. This exposes the plaintext at the aggregators, making the data vulnerable to attacks from an adversary. Similarly an aggregator can inject false data into the aggregate and make the base station accept false data. Thus, while data aggregation improves energy efficiency of a network, it complicates the existing security challenges.
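One line of work avoids exposing plaintext at aggregators by using additively homomorphic encryption, so ciphertexts can be summed directly. The keyed-modular scheme below is a toy illustration of the idea (in the spirit of concealed data aggregation), not a real cipher; keys and readings are assumptions.

```python
# Toy sketch of additively homomorphic aggregation: the aggregator sums
# ciphertexts without ever decrypting a reading. Illustrative scheme only.

M = 2**32
keys = {1: 123456, 2: 987654, 3: 555555}  # each shared with the base station
readings = {1: 40, 2: 42, 3: 44}

# Each sensor masks its reading with its key.
ciphertexts = {i: (readings[i] + keys[i]) % M for i in keys}

# The aggregator adds ciphertexts; it never sees a plaintext reading.
agg = sum(ciphertexts.values()) % M

# The base station removes the sum of the keys to recover the aggregate.
total = (agg - sum(keys.values())) % M
print(total)  # 126 == 40 + 42 + 44
```

This addresses the confidentiality problem for additive aggregates, though not integrity: a compromised aggregator can still inject a bogus ciphertext, which is why integrity mechanisms are studied separately.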
== See also ==
Autonomous system
Bluetooth mesh networking
Center for Embedded Network Sensing
List of ad hoc routing protocols
Meteorological instrumentation
Mobile wireless sensor networks
OpenWSN
Optical wireless communications
Robotic mapping
Smart object
Unattended ground sensor
Virtual sensor network
Wireless ad hoc networks
== References ==
== Further reading ==
Amir Hozhabri; Mohammadreza Eslaminejad; Mitra Mahrouyan, Chain-based Gateway nodes routing for energy efficiency in WSN
Kiran Maraiya, Kamal Kant, Nitin Gupta "Wireless Sensor Network: A Review on Data Aggregation" International Journal of Scientific & Engineering Research Volume 2 Issue 4, April 2011.
Chalermek Intanagonwiwat, Deborah Estrin, Ramesh Govindan, John Heidemann, "Impact of Network Density on Data Aggregation in Wireless SensorNetworks," November 4, 2001.
Bulusu, Nirupama; Jha, Sanjay (2005). Wireless Sensor Networks: A Systems Perspective. Artech House. ISBN 978-1-58053-867-1.
== External links ==
Media related to Wireless sensor networks at Wikimedia Commons
IEEE 802.15.4 Standardization Committee
Secure Data Aggregation in Wireless Sensor Networks: A Survey
A list of secure aggregation proposals for WSN | Wikipedia/Wireless_sensor_network |
Computer network programming involves writing computer programs that enable processes to communicate with each other across a computer network.
== Connection-oriented and connectionless communications ==
Very generally, most communications can be divided into connection-oriented and connectionless. Whether a communication is connection-oriented or connectionless is defined by the communication protocol, not by the application programming interface (API). Examples of connection-oriented protocols include the Transmission Control Protocol (TCP) and Sequenced Packet Exchange (SPX); examples of connectionless protocols include the User Datagram Protocol (UDP), "raw IP", and Internetwork Packet Exchange (IPX).
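At the API level the distinction surfaces as the socket type requested from the operating system: `SOCK_STREAM` selects a connection-oriented transport (TCP), `SOCK_DGRAM` a connectionless one (UDP). A minimal sketch using Python's standard `socket` module:

```python
# The socket type chooses the transport semantics: connection-oriented
# (SOCK_STREAM / TCP) vs. connectionless (SOCK_DGRAM / UDP).
import socket

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # must connect first
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # may just send

print(tcp.type, udp.type)
tcp.close()
udp.close()
```

The program exchanges nothing; it only shows that the protocol choice is fixed when the socket is created, before any data is sent.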
== Clients and servers ==
For connection-oriented communications, the communicating parties usually have different roles. One party is usually waiting for incoming connections; this party is usually referred to as the "server". The other party is the one that initiates the connection; this party is usually referred to as the "client".
For connectionless communications, one party (the "server") is usually waiting for an incoming packet, and the other party (the "client") is usually understood as the one that sends an unsolicited packet to the "server".
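The two roles can be demonstrated in a single short program over the loopback interface: the server thread waits on `accept()`, while the client initiates the connection with `connect()`. The echo-and-uppercase behavior is just an illustrative payload.

```python
# Minimal connection-oriented client/server exchange over loopback:
# the server waits; the client initiates.
import socket
import threading

def serve(sock):
    conn, _ = sock.accept()                  # server waits for a connection
    conn.sendall(conn.recv(1024).upper())    # echo the data back, uppercased
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))                   # port 0: let the OS pick a port
srv.listen(1)
t = threading.Thread(target=serve, args=(srv,))
t.start()

cli = socket.socket()
cli.connect(srv.getsockname())               # client initiates the connection
cli.sendall(b"ping")
reply = cli.recv(1024)
print(reply)  # b'PING'
cli.close()
t.join()
srv.close()
```

Note that nothing in the data path distinguishes client from server after the handshake; the roles only describe who waited and who initiated.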
== Popular protocols and APIs ==
Network programming traditionally covers different layers of OSI/ISO model (most of application-level programming belongs to L4 and up). The table below contains some examples of popular protocols belonging to different OSI/ISO layers, and popular APIs for them.
== See also ==
Software-defined networking
Infrastructure as code
Site reliability engineering
DevOps
== References ==
W. Richard Stevens: UNIX Network Programming, Volume 1, Second Edition: Networking APIs: Sockets and XTI, Prentice Hall, 1998, ISBN 0-13-490012-X | Wikipedia/Computer_network_programming |
The Media Gateway Control Protocol (MGCP) is a telecommunication protocol for signaling and call control in hybrid voice over IP (VoIP) and traditional telecommunication systems. It implements the media gateway control protocol architecture for controlling media gateways connected to the public switched telephone network (PSTN). The media gateways provide conversion of traditional electronic media to the Internet Protocol (IP) network. The protocol is a successor to the Simple Gateway Control Protocol (SGCP), which was developed by Bellcore and Cisco, and the Internet Protocol Device Control (IPDC).
The methodology of MGCP reflects the structure of the PSTN with the control over the network residing in a call control center softswitch, which is analogous to the central office in the telephone network. The endpoints are low-intelligence devices, mostly executing control commands from a media gateway controller, also called call agent, in the softswitch and providing result indications in response. The protocol represents a decomposition of other VoIP models, such as H.323 and the Session Initiation Protocol (SIP), in which the endpoint devices of a call have higher levels of signaling intelligence.
MGCP is a text-based protocol consisting of commands and responses. It uses the Session Description Protocol (SDP) for specifying and negotiating the media streams to be transmitted in a call session and the Real-time Transport Protocol (RTP) for framing the media streams.
== Architecture ==
The media gateway control protocol architecture and its methodologies and programming interfaces are described in RFC 2805.
MGCP is a master-slave protocol in which media gateways (MGs) are controlled by a call control agent or softswitch. This controller is called a media gateway controller (MGC) or call agent (CA). With the network protocol it can control each specific port on a media gateway. This facilitates centralized gateway administration and provides scalable IP telephony solutions. The distributed system is composed of at least one call agent and one or, usually, multiple media gateways, which perform the conversion of media signals between circuit-switched and packet-switched networks, and at least one signaling gateway (SG) when connected to the PSTN.
MGCP presents a call control architecture with limited intelligence at the edge (endpoints, media gateways) and intelligence at the core controllers. The MGCP model assumes that call agents synchronize with each other to send coherent commands and responses to the gateways under their control.
The call agent uses MGCP to request event notifications, reports, status, and configuration data from the media gateway, as well as to specify connection parameters and activation of signals toward the PSTN telephony interface.
A softswitch is typically used in conjunction with signaling gateways, for access to Signalling System No. 7 (SS7) functionality, for example. The call agent does not use MGCP to control a signaling gateway; rather, SIGTRAN protocols are used to backhaul signaling between a signaling gateway and the call agents.
=== Multiple call agents ===
Typically, a media gateway may be configured with a list of call agents from which it may accept control commands.
In principle, event notifications may be sent to different call agents for each endpoint on the gateway, according to the instructions received from the call agents by setting the NotifiedEntity parameter. In practice, however, it is usually desirable that all endpoints of a gateway are controlled by the same call agent; other call agents are available to provide redundancy in the event that the primary call agent fails, or loses contact with the media gateway. In the event of such a failure it is the backup call agent's responsibility to reconfigure the media gateway so that it reports to the backup call agent. The gateway may be audited to determine the controlling call agent, a query that may be used to resolve any conflicts.
In case of multiple call agents, MGCP assumes that they maintain knowledge of device state among themselves. Such failover features take into account both planned and unplanned outages.
== Protocol overview ==
MGCP recognizes three essential elements of communication, the media gateway controller (call agent), the media gateway endpoint, and connections between these entities. A media gateway may host multiple endpoints and each endpoint should be able to engage in multiple connections. Multiple connections on the endpoints support calling features such as call waiting and three-way calling.
MGCP is a text-based protocol using a command and response model. Commands and responses are encoded in messages that are structured and formatted with the whitespace characters space, horizontal tab, carriage return, linefeed, colon, and full stop. Messages are transmitted using the User Datagram Protocol (UDP). Media gateways use the port number 2427, and call agents use 2727 by default.
The message sequence of command (or request) and its response is known as a transaction, which is identified by the numerical Transaction Identifier exchanged in each transaction. The protocol specification defines nine standard commands that are distinguished by a four-letter command verb: AUEP, AUCX, CRCX, DLCX, EPCF, MDCX, NTFY, RQNT, and RSIP. Responses begin with a three-digit numerical response code that identifies the outcome or result of the transaction.
Two verbs are used by a call agent to query the state of an endpoint and its associated connections.
AUEP: Audit Endpoint
AUCX: Audit Connection
Three verbs are used by a call agent to manage the connection to a media gateway endpoint.
CRCX: Create Connection
DLCX: Delete Connection. This command may also be issued by an endpoint to terminate a connection.
MDCX: Modify Connection. This command is used to alter operating characteristics of the connection, e.g. speech encoders, muting, half-duplex/full-duplex state and others.
One verb is used by a call agent to request notification of events occurring at the endpoint, and to apply signals to the connected PSTN network link, or to a connected telephony endpoint, e.g., a telephone.
RQNT: Request for Notification
One verb is used by an endpoint to indicate to the call agent that it has detected an event for which the call agent had previously requested notification with the RQNT command:
NTFY: Notify
One verb is used by a call agent to modify coding characteristics expected by the line side of the endpoint:
EPCF: Endpoint Configuration
One verb is used by an endpoint to indicate to the call agent that it is in the process of restarting:
RSIP: Restart In Progress
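The text-based transaction structure described above can be sketched by composing a command and matching response as strings. The message layout (verb, transaction ID, endpoint, parameter lines, three-digit response code) follows RFC 3435, but the endpoint name, call ID, and parameter values here are illustrative assumptions.

```python
# Sketch of one MGCP transaction: a CRCX command and its response share a
# transaction ID. Endpoint and parameter values are illustrative.

command = (
    "CRCX 147483 aaln/1@gw1.example.net MGCP 1.0\r\n"
    "C: A3C47F21456789F0\r\n"   # call ID
    "L: p:10, a:PCMU\r\n"       # local options: packetization, codec
    "M: sendrecv\r\n"           # connection mode
)
response = "200 147483 OK\r\n"

verb, txid = command.split()[:2]    # four-letter verb, transaction ID
code, rtxid = response.split()[:2]  # three-digit result code, same ID

print(f"{verb} transaction {txid} -> response code {code}")
```

Matching the numeric transaction identifier is how the call agent pairs a response with its outstanding command over UDP, where ordering and delivery are not guaranteed.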
== Standards documents ==
RFC 3435 – Media Gateway Control Protocol (MGCP) Version 1.0 (this supersedes RFC 2705)
RFC 3660 – Basic Media Gateway Control Protocol (MGCP) Packages (informational)
RFC 3661 – Media Gateway Control Protocol (MGCP) Return Code Usage
RFC 3064 – MGCP CAS Packages
RFC 3149 – MGCP Business Phone Packages
RFC 3991 – Media Gateway Control Protocol (MGCP) Redirect and Reset Package
RFC 3992 – Media Gateway Control Protocol (MGCP) Lockstep State Reporting Mechanism (informational)
RFC 2805 – Media Gateway Control Protocol Architecture and Requirements
RFC 2897 – Proposal for an MGCP Advanced Audio Package
== Megaco ==
Another implementation of the media gateway control protocol architecture is the H.248/Megaco protocol, a collaboration of the Internet Engineering Task Force (RFC 3525) and the International Telecommunication Union (Recommendation H.248.1). Both protocols follow the guidelines of the overlying media gateway control protocol architecture, as described in RFC 2805. However, the protocols are incompatible due to differences in protocol syntax and underlying connection model.
== See also ==
RTP audio video profile
== References ==
== External links ==
MGCP Information Site Information related to MGCP
H.248 Information Site Information related to H.248/Megaco, including pointers to standards and draft specifications | Wikipedia/Media_Gateway_Control_Protocol |
Open Network Computing (ONC) Remote Procedure Call (RPC), commonly known as Sun RPC, is a remote procedure call system. ONC was originally developed by Sun Microsystems in the 1980s as part of their Network File System project.
ONC is based on calling conventions used in Unix and the C programming language. It serializes data using the External Data Representation (XDR), which has also found some use to encode and decode data in files that are to be accessed on more than one platform. ONC then delivers the XDR payload using either UDP or TCP. Access to RPC services on a machine is provided via a port mapper that listens for queries on a well-known port (number 111) over UDP and TCP.
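The basic XDR encoding rules are straightforward to sketch: integers are 4-byte big-endian quantities, and strings are length-prefixed and zero-padded to a 4-byte boundary (RFC 4506). A minimal illustration in Python — the function names are for illustration only:

```python
import struct

def xdr_int(value):
    # XDR encodes integers as 4-byte big-endian two's complement.
    return struct.pack(">i", value)

def xdr_string(s):
    # XDR strings: 4-byte length, the bytes, then zero padding to a
    # multiple of 4 bytes.
    data = s.encode("ascii")
    pad = (4 - len(data) % 4) % 4
    return struct.pack(">I", len(data)) + data + b"\x00" * pad

payload = xdr_int(111) + xdr_string("hi")
print(payload.hex())  # 0000006f0000000268690000
```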
ONC RPC version 2 was first described in RFC 1050, published in April 1988. It was updated by RFC 1057 in June 1988, and again by RFC 1831, published in August 1995. RFC 5531, published in May 2009, is the current version. All of these documents describe only version 2; version 1 was never covered by an RFC. Authentication mechanisms used by ONC RPC are described in RFC 2695, RFC 2203, and RFC 2623.
Implementations of ONC RPC exist in most Unix-like systems. Microsoft supplied an implementation for Windows in their (now discontinued) Microsoft Windows Services for UNIX product; in addition, a number of third-party implementations of ONC RPC for Windows exist, including versions for C/C++, Java, and .NET (see external links).
In 2009, Sun relicensed the ONC RPC code under the standard 3-clause BSD license, which was reconfirmed by Oracle Corporation in 2010 following confusion about the scope of the relicensing.
== See also ==
XDR – The grammar defined in RFC 1831 is a small extension of the XDR grammar defined in RFC 4506.
DCE
XML-RPC
== References ==
Birrell, A. D.; Nelson, B. J. (1984). "Implementing remote procedure calls". ACM Transactions on Computer Systems. 2: 39–59. doi:10.1145/2080.357392. S2CID 11525846.
=== Notes ===
== External links ==
RFC 5531 - RPC: Remote Procedure Call Protocol Specification Version 2 (current version)
RFC 1831 - RPC: Remote Procedure Call Protocol Specification Version 2 (third published version)
RFC 1057 - RPC: Remote Procedure Call Protocol Specification Version 2 (second published version)
RFC 1050 - RPC: Remote Procedure Call Protocol Specification Version 2 (first published version)
Remote Procedure Calls (RPC) — A tutorial on ONC RPC by Dr Dave Marshall of Cardiff University
Introduction to RPC Programming — A developer's introduction to RPC and XDR, from SGI IRIX documentation.
Sun ONC Developer's guide
Netbula's PowerRPC for Windows (ONC RPC for Windows with extended IDL)
Netbula's JRPC (ONC RPC for Java; supports J2SE, J2ME and Android)
ONC/RPC Implementation of the University of Aachen (Germany)
Remote Tea (LGPL Java Implementation)
Remote Tea .Net (LGPL C# Implementation)
Distinct Corporation's ONC RPC for Windows
Linux Journal article on ONC RPC
Java NIO based ONC RPC library | Wikipedia/Open_Network_Computing_Remote_Procedure_Call |
A software design description (a.k.a. software design document or SDD; just design document; also Software Design Specification) is a representation of a software design that is to be used for recording design information, addressing various design concerns, and communicating that information to the design’s stakeholders. An SDD usually accompanies an architecture diagram with pointers to detailed feature specifications of smaller pieces of the design. Practically, the description is required to coordinate a large team under a single vision, needs to be a stable reference, and outline all parts of the software and how they will work.
== Composition ==
The SDD usually contains the following information:
The Data-driven design describes structures that reside within the software. Attributes and relationships between data objects dictate the choice of data structures.
The architecture design uses information flow characteristics and maps them into the program structure. The transformation mapping method is applied to exhibit distinct boundaries between incoming and outgoing data. The data flow diagrams allocate control input, processing and output along three separate modules.
The interface design describes internal and external program interfaces, as well as the design of the human interface. Internal and external interface designs are based on the information obtained from the analysis model.
The procedural design describes structured programming concepts using graphical, tabular and textual notations.
These design mediums enable the designer to represent procedural detail that facilitates translation to code. This blueprint for implementation forms the basis for all subsequent software engineering work.
== IEEE 1016 ==
IEEE 1016-2009, titled IEEE Standard for Information Technology—Systems Design—Software Design Descriptions, is an IEEE standard that specifies "the required information content and organization" for an SDD. IEEE 1016 does not specify the medium of an SDD; it is "applicable to automated databases and design description languages but can be used for paper documents and other means of descriptions."
The 2009 edition was a major revision to IEEE 1016-1998, elevating it from recommended practice to full standard. This revision was modeled after IEEE Std 1471-2000, Recommended Practice for Architectural Description of Software-intensive Systems, extending the concepts of view, viewpoint, stakeholder, and concern from architecture description to support documentation of high-level and detailed design and construction of software. [IEEE 1016, Introduction]
Following the IEEE 1016 conceptual model, an SDD is organized into one or more design views. Each design view follows the conventions of its design viewpoint. IEEE 1016 defines the following design viewpoints for use:
Context viewpoint
Composition viewpoint
Logical viewpoint
Dependency viewpoint
Information viewpoint
Patterns use viewpoint
Interface viewpoint
Structure viewpoint
Interaction viewpoint
State dynamics viewpoint
Algorithm viewpoint
Resource viewpoint
In addition, users of the standard are not limited to these viewpoints but may define their own.
== IEEE status ==
IEEE 1016-2009 is currently listed as 'Inactive - Reserved'.
== See also ==
Game design document
High-level design
Low-level design
== References ==
== External links ==
IEEE 1016 website | Wikipedia/Software_design_document |
Fundamental modeling concepts (FMC) provide a framework to describe software-intensive systems. It strongly emphasizes the communication about software-intensive systems by using a semi-formal graphical notation that can easily be understood.
== Introduction ==
FMC distinguishes three perspectives to look at a software system:
Structure of the system
Processes in the system
Value domains of the system
FMC defines a dedicated diagram type for each perspective. FMC diagrams use a simple and lean notation. The purpose of FMC diagrams is to facilitate communication about a software system, not only among technical experts but also between technical experts and business or domain experts. The comprehensibility of FMC diagrams has made them popular among their supporters.
The common approach when working with FMC is to start with a high-level diagram of the compositional structure of a system. This “big picture” diagram serves as a reference in the communication with all involved stakeholders of the project. Later on, the high-level diagram is iteratively refined to model technical details of the system. Complementary diagrams for processes observed in the system or value domains found in the system are introduced as needed.
== Diagram Types ==
FMC uses three diagram types to model different aspects of a system:
Compositional Structure Diagram depicts the static structure of a system. This diagram type is also known as FMC Block Diagram
Dynamic Structure Diagram depicts processes that can be observed in a system. This diagram type is also known as FMC Petri-net
Value Range Structure Diagram depicts structures of values found in the system. This diagram type is also known as FMC E/R Diagram
All FMC diagrams are bipartite graphs. Each bipartite graph consists of two disjoint sets of vertices with the condition that no vertex is connected to another vertex of the same set. In FMC diagrams, members of one set are represented by angular shapes, and members of the other set are represented by curved shapes. Each element in an FMC diagram can be refined by another diagram of the same type, provided that the combined graph is also bipartite. This mechanism allows modeling all relevant layers of abstraction with the same notation.
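The bipartiteness condition can be checked mechanically with standard two-coloring via breadth-first search. A sketch in Python — the adjacency structure and node names are illustrative, not part of the FMC notation:

```python
from collections import deque

def is_bipartite(adjacency):
    """Two-color the graph with BFS; succeeds iff no edge joins two
    vertices of the same set (the FMC well-formedness condition)."""
    color = {}
    for start in adjacency:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in adjacency[node]:
                if neighbor not in color:
                    color[neighbor] = 1 - color[node]
                    queue.append(neighbor)
                elif color[neighbor] == color[node]:
                    return False  # two vertices of the same set are adjacent
    return True

# Agents (angular shapes) only connect to storages/channels (curved shapes):
diagram = {
    "Order Processor": ["Orders", "Product Catalog"],
    "Supplier Manager": ["Product Catalog"],
    "Orders": ["Order Processor"],
    "Product Catalog": ["Order Processor", "Supplier Manager"],
}
print(is_bipartite(diagram))  # True
```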
=== Compositional Structure Diagram ===
Compositional structure diagrams depict the static structure of a system, and the relationships between system components. System components can be active or passive. Agents are active system components. They perform activities in the system. Storages and channels are passive components which store or transmit information.
The image to the right is an example of a compositional structure diagram. It contains the agents Order Processor, Supplier Manager, Supplier, Online Shop and an unnamed human agent. Agents are represented by rectangles. The dots and the shadow of the agent Supplier indicate that this agent has multiple instances, i.e. the Supplier Manager communicates with one or many suppliers. The so-called human agent represents a user interacting with the system.
The diagram contains the storages Orders, Purchase Order and Product Catalog. Storages are represented by curved shapes. Agents can read from storages, write to storages or modify the content of storages. The directions of the arrows indicate which operation is performed by an agent. In the diagram, the Supplier Manager can modify the content of the Product Catalog, whereas the Order Processor can only read the content of the Product Catalog.
Agents communicate via channels. The direction of information flow is either indicated by arrows (not shown in the picture), by a request-response-symbol (e.g. between Supplier Manager and Supplier) or omitted (e.g. between Order Processor and Supplier Manager).
=== Dynamic Structure Diagram ===
Dynamic structures are derived from Petri nets.
"They are used to express system behavior over time, depicting the actions performed by the agents. So they clarify how a system is working and how communication takes place between different agents."
=== Value Range Structure Diagram ===
Value range structure diagrams (also known as FMC Entity Relationship Diagrams) can be compared with the Entity-relationship model.
"[They] are used to depict value range structures or topics as mathematical structures. Value range structures describe observable values at locations within the system whereas topic diagrams allow a much wider usage in order to cover all correlations between interesting points."
== References ==
Knoepfel, Andreas; Bernhard Groene; Peter Tabeling (2005). Fundamental Modeling Concepts – Effective Communication of IT Systems. Wiley. ISBN 0-470-02710-X.
== External links ==
FMC home page
FMC-Stencils for MS-Visio
FMC-Coaching & Training
| Wikipedia/Fundamental_modeling_concepts
In software engineering, a software design pattern or design pattern is a general, reusable solution to a commonly occurring problem in many contexts in software design. A design pattern is not a rigid structure to be transplanted directly into source code. Rather, it is a description or a template for solving a particular type of problem that can be deployed in many different situations. Design patterns can be viewed as formalized best practices that the programmer may use to solve common problems when designing a software application or system.
Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Patterns that imply mutable state may be unsuited for functional programming languages. Some patterns can be rendered unnecessary in languages that have built-in support for solving the problem they are trying to solve, and object-oriented patterns are not necessarily suitable for non-object-oriented languages.
Design patterns may be viewed as a structured approach to computer programming intermediate between the levels of a programming paradigm and a concrete algorithm.
== History ==
Patterns originated as an architectural concept by Christopher Alexander as early as 1977 in A Pattern Language (cf. his article, "The Pattern of Streets," JOURNAL OF THE AIP, September, 1966, Vol. 32, No. 5, pp. 273–278). In 1987, Kent Beck and Ward Cunningham began experimenting with the idea of applying patterns to programming – specifically pattern languages – and presented their results at the OOPSLA conference that year. In the following years, Beck, Cunningham and others followed up on this work.
Design patterns gained popularity in computer science after the book Design Patterns: Elements of Reusable Object-Oriented Software was published in 1994 by the so-called "Gang of Four" (Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides), which is frequently abbreviated as "GoF". That same year, the first Pattern Languages of Programming Conference was held, and the following year the Portland Pattern Repository was set up for documentation of design patterns. The scope of the term remains a matter of dispute. Notable books in the design pattern genre include:
Gamma, Erich; Helm, Richard; Johnson, Ralph; Vlissides, John (1994). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley. ISBN 978-0-201-63361-0.
Brinch Hansen, Per (1995). Studies in Computational Science: Parallel Programming Paradigms. Prentice Hall. ISBN 978-0-13-439324-7.
Buschmann, Frank; Meunier, Regine; Rohnert, Hans; Sommerlad, Peter (1996). Pattern-Oriented Software Architecture, Volume 1: A System of Patterns. John Wiley & Sons. ISBN 978-0-471-95869-7.
Beck, Kent (1997). Smalltalk Best Practice Patterns. Prentice Hall. ISBN 978-0134769042.
Schmidt, Douglas C.; Stal, Michael; Rohnert, Hans; Buschmann, Frank (2000). Pattern-Oriented Software Architecture, Volume 2: Patterns for Concurrent and Networked Objects. John Wiley & Sons. ISBN 978-0-471-60695-6.
Fowler, Martin (2002). Patterns of Enterprise Application Architecture. Addison-Wesley. ISBN 978-0-321-12742-6.
Hohpe, Gregor; Woolf, Bobby (2003). Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions. Addison-Wesley. ISBN 978-0-321-20068-6.
Freeman, Eric T.; Robson, Elisabeth; Bates, Bert; Sierra, Kathy (2004). Head First Design Patterns. O'Reilly Media. ISBN 978-0-596-00712-6.
Larman, Craig (2004). Applying UML and Patterns (3rd Ed, 1st Ed 1995). Pearson. ISBN 978-0131489066.
Although design patterns have been applied practically for a long time, formalization of the concept of design patterns languished for several years.
== Practice ==
Design patterns can speed up the development process by providing proven development paradigms. Effective software design requires considering issues that may not become apparent until later in the implementation. Freshly written code can often have hidden, subtle issues that take time to be detected; issues that sometimes can cause major problems down the road. Reusing design patterns can help to prevent such issues, and enhance code readability for those familiar with the patterns.
Software design techniques are difficult to apply to a broader range of problems. Design patterns provide general solutions, documented in a format that does not require specifics tied to a particular problem.
In 1996, Christopher Alexander was invited to give a Keynote Speech to the 1996 OOPSLA Convention. Here he reflected on how his work on Patterns in Architecture had developed and his hopes for how the Software Design community could help Architecture extend Patterns to create living structures that use generative schemes that are more like computer code.
== Motif ==
A pattern describes a design motif, a.k.a. prototypical micro-architecture, as a set of program constituents (e.g., classes, methods...) and their relationships. A developer adapts the motif to their codebase to solve the problem described by the pattern. The resulting code has structure and organization similar to the chosen motif.
== Domain-specific patterns ==
Efforts have also been made to codify design patterns in particular domains, including the use of existing design patterns as well as domain-specific design patterns. Examples include user interface design patterns, information visualization, secure design, "secure usability", Web design and business model design.
The annual Pattern Languages of Programming Conference proceedings include many examples of domain-specific patterns.
== Object-oriented programming ==
Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Patterns that imply mutable state may be unsuited for functional programming languages. Some patterns can be rendered unnecessary in languages that have built-in support for solving the problem they are trying to solve, and object-oriented patterns are not necessarily suitable for non-object-oriented languages.
== Examples ==
Design patterns can be organized into groups based on what kind of problem they solve. Creational patterns create objects. Structural patterns organize classes and objects to form larger structures that provide new functionality. Behavioral patterns describe collaboration between objects.
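As a concrete illustration of one of these groups, a minimal sketch of the Observer pattern (behavioral) in Python — the class names are illustrative:

```python
class Subject:
    """The subject keeps a list of observers and notifies each one
    whenever its state changes (the Observer pattern)."""
    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self._state = state
        for observer in self._observers:
            observer.update(state)  # push the change to every observer

class LoggingObserver:
    """A concrete observer that simply records each state it is told about."""
    def __init__(self):
        self.seen = []

    def update(self, state):
        self.seen.append(state)

subject = Subject()
watcher = LoggingObserver()
subject.attach(watcher)
subject.set_state("ready")
print(watcher.seen)  # ['ready']
```

The point of the pattern is the decoupling: the subject depends only on the `update` interface, not on any concrete observer class.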
=== Creational patterns ===
=== Structural patterns ===
=== Behavioral patterns ===
=== Concurrency patterns ===
== Documentation ==
The documentation for a design pattern describes the context in which the pattern is used, the forces within the context that the pattern seeks to resolve, and the suggested solution. There is no single, standard format for documenting design patterns. Rather, a variety of different formats have been used by different pattern authors. However, according to Martin Fowler, certain pattern forms have become more well-known than others, and consequently become common starting points for new pattern-writing efforts. One example of a commonly used documentation format is the one used by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides in their book Design Patterns. It contains the following sections:
Pattern Name and Classification: A descriptive and unique name that helps in identifying and referring to the pattern.
Intent: A description of the goal behind the pattern and the reason for using it.
Also Known As: Other names for the pattern.
Motivation (Forces): A scenario consisting of a problem and a context in which this pattern can be used.
Applicability: Situations in which this pattern is usable; the context for the pattern.
Structure: A graphical representation of the pattern. Class diagrams and Interaction diagrams may be used for this purpose.
Participants: A listing of the classes and objects used in the pattern and their roles in the design.
Collaboration: A description of how classes and objects used in the pattern interact with each other.
Consequences: A description of the results, side effects, and trade-offs caused by using the pattern.
Implementation: A description of an implementation of the pattern; the solution part of the pattern.
Sample Code: An illustration of how the pattern can be used in a programming language.
Known Uses: Examples of real usages of the pattern.
Related Patterns: Other patterns that have some relationship with the pattern; discussion of the differences between the pattern and similar patterns.
== Criticism ==
Some suggest that design patterns may be a sign that features are missing in a given programming language (Java or C++ for instance). Peter Norvig demonstrates that 16 out of the 23 patterns in the Design Patterns book (which is primarily focused on C++) are simplified or eliminated (via direct language support) in Lisp or Dylan. Related observations were made by Hannemann and Kiczales who implemented several of the 23 design patterns using an aspect-oriented programming language (AspectJ) and showed that code-level dependencies were removed from the implementations of 17 of the 23 design patterns and that aspect-oriented programming could simplify the implementations of design patterns.
See also Paul Graham's essay "Revenge of the Nerds".
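Norvig's point can be made concrete: in a language with first-class functions, the Strategy pattern reduces to passing a function, with no strategy interface or concrete strategy classes. A sketch in Python (the names are illustrative):

```python
def total_price(items, pricing):
    """'pricing' plays the strategy role: any callable that maps a
    price to an adjusted price can be plugged in."""
    return sum(pricing(price) for price in items)

def regular(price):
    return price

def ten_percent_off(price):
    return price * 0.9

cart = [100, 50]
print(total_price(cart, regular))          # 150
print(total_price(cart, ten_percent_off))  # 135.0
```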
Inappropriate use of patterns may unnecessarily increase complexity. FizzBuzzEnterpriseEdition offers a humorous example of over-complexity introduced by design patterns.
By definition, a pattern must be programmed anew into each application that uses it. Since some authors see this as a step backward from software reuse as provided by components, researchers have worked to turn patterns into components. Meyer and Arnout were able to provide full or partial componentization of two-thirds of the patterns they attempted.
In order to achieve flexibility, design patterns may introduce additional levels of indirection, which may complicate the resulting design and decrease runtime performance.
== Relationship to other topics ==
Software design patterns offer finer granularity compared to software architecture patterns and software architecture styles, as design patterns focus on solving detailed, low-level design problems within individual components or subsystems. Examples include Singleton, Factory Method, and Observer.
Software Architecture Pattern refers to a reusable, proven solution to a recurring problem at the system level, addressing concerns related to the overall structure, component interactions, and quality attributes of the system. Software architecture patterns operate at a higher level of abstraction than design patterns, solving broader system-level challenges. While these patterns typically affect system-level concerns, the distinction between architectural patterns and architectural styles can sometimes be blurry. Examples include Circuit Breaker.
Software Architecture Style refers to a high-level structural organization that defines the overall system organization, specifying how components are organized, how they interact, and the constraints on those interactions. Architecture styles typically include a vocabulary of component and connector types, as well as semantic models for interpreting the system's properties. These styles represent the most coarse-grained level of system organization. Examples include Layered Architecture, Microservices, and Event-Driven Architecture.
== See also ==
== References ==
== Further reading == | Wikipedia/Design_pattern_(computer_science) |
User-centered design (UCD) or user-driven development (UDD) is a framework of processes in which usability goals, user characteristics, environment, tasks and workflow of a product, service or brand are given extensive attention at each stage of the design process. This attention includes testing which is conducted during each stage of design and development from the envisioned requirements, through pre-production models to post production.
Testing is beneficial as it is often difficult for the designers of a product to understand the experiences of first-time users and each user's learning curve. UCD is based on the understanding of a user, their demands, priorities and experiences, and can lead to increased product usefulness and usability. UCD applies cognitive science principles to create intuitive, efficient products by understanding users' mental processes, behaviors, and needs.
UCD differs from other product design philosophies in that it tries to optimize the product around how users engage with the product, in order that users are not forced to change their behavior and expectations to accommodate the product. The users are at the focus, followed by the product's context, objectives and operating environment, and then the granular details of task development, organization, and flow.
== History ==
The term user-centered design (UCD) was coined by Rob Kling in 1977 and later adopted in Donald A. Norman's research laboratory at the University of California, San Diego. The concept became popular as a result of Norman's 1986 book User-Centered System Design: New Perspectives on Human-Computer Interaction and the concept gained further attention and acceptance in Norman's 1988 book The Design of Everyday Things, in which Norman describes the psychology behind what he deems 'good' and 'bad' design through examples. He exalts the importance of design in our everyday lives and the consequences of errors caused by bad designs.
Norman describes principles for building well-designed products. His recommendations are based on the user's needs, leaving aside what he considers secondary issues like aesthetics. The main highlights of these are:
Simplifying the structure of the tasks such that the possible actions at any moment are intuitive.
Making things visible, including the conceptual model of the system, actions, results of actions and feedback.
Achieving correct mappings between intended results and required actions.
Embracing and exploiting the constraints of systems.
In a later book, Emotional Design (p. 5 onwards), Norman returns to some of his earlier ideas to elaborate on what he had come to find as overly reductive.
== Models and approaches ==
The UCD process considers user requirements from the beginning and throughout the product cycle. Requirements are noted and refined through investigative methods including: ethnographic study, contextual inquiry, prototype testing, usability testing and other methods. Generative methods may also be used including: card sorting, affinity diagramming and participatory design sessions. In addition, user requirements can be inferred by careful analysis of usable products similar to the product being designed.
UCD takes inspiration from the following models:
Cooperative design (a.k.a. co-design) which involves designers and users on an equal footing. This is the Scandinavian tradition of design of IT artifacts and it has been evolving since 1970.
Participatory design (PD), a North American model inspired by cooperative design, with focus on the participation of users. Since 1990, bi-annual conferences have been held.
Contextual design (CD, a.k.a. customer-centered design) involves gathering data from actual customers in real-world situations and applying findings to the final design.
The following principles help in ensuring a design is user-centered:
Design is based upon an explicit understanding of users, tasks and environments.
Users are involved throughout design and development.
Design is driven and refined by user-centered evaluation.
Process is iterative (see below).
Design addresses the whole user experience.
Design team includes multidisciplinary skills and perspectives.
== User-centered design process ==
The goal of UCD is to make products with a high degree of usability (i.e., convenience of use, manageability, effectiveness, and meeting the user's requirements). The general phases of the UCD process are:
Specify context of use: Identify the primary users of the product and their reasons, requirements and environment for product use.
Specify requirements: Identify the detailed technical requirements of the product. This can aid designers in planning development and setting goals.
Create design solutions and development: Based on product goals and requirements, create an iterative cycle of product testing and refinement.
Evaluate product: Perform usability testing and collect user feedback at every design stage.
The above procedure is repeated to further refine the product. These phases are general approaches and factors such as design goals, team and their timeline, and environment in which the product is developed, determine the appropriate phases for a project and their order. Practical models include the waterfall model, agile model or any other software engineering practice.
== Analysis tools ==
There are a number of tools that are used in the analysis of UCD, mainly: personas, scenarios, and essential use cases.
=== Persona ===
During the UCD process, the design team may create a persona, an archetype representing a product user which helps guide decisions about product features, navigation, interactions, and aesthetics. In most cases, personas are synthesized from a series of ethnographic interviews with real people, then captured in one- or two-page descriptions that include behavior patterns, goals, skills, attitudes, and environment, and possibly fictional personal details to give it more character.
== See also ==
== References ==
== Further reading ==
ISO 13407:1999 Human-centred design processes for interactive systems
ISO 9241-210:2010 Ergonomics of human-system interaction -- Part 210: Human-centred design for interactive systems
Human Centered Design, IDEO’s David Kelley (video)
User Centered Design, Don Norman (video) | Wikipedia/User_centered_design |
The systems modeling language (SysML) is a general-purpose modeling language for systems engineering applications. It supports the specification, analysis, design, verification and validation of a broad range of systems and systems-of-systems.
SysML was originally developed by an open source specification project, and includes an open source license for distribution and use. SysML is defined as an extension of a subset of the Unified Modeling Language (UML) using UML's profile mechanism. The language's extensions were designed to support systems engineering activities.
== Contrast with UML ==
SysML offers several systems engineering specific improvements over UML, which has been developed as a software modeling language. These improvements include the following:
SysML's diagrams express systems engineering concepts better, owing to the removal of UML's software-centric restrictions and the addition of two new diagram types, requirement and parametric diagrams. The former can be used for requirements engineering; the latter can be used for performance analysis and quantitative analysis. As a consequence of these enhancements, SysML is able to model a wide range of systems, which may include hardware, software, information, processes, personnel, and facilities.
SysML is a comparatively small language that is easier to learn and apply. Since SysML removes many of UML's software-centric constructs, the overall language is smaller both in diagram types and total constructs.
SysML allocation tables support common kinds of allocations. Whereas UML provides only limited support for tabular notations, SysML furnishes flexible allocation tables that support requirements allocation, functional allocation, and structural allocation. This capability facilitates automated verification and validation (V&V) and gap analysis.
SysML model management constructs support models, views, and viewpoints. These constructs extend UML's capabilities and are architecturally aligned with IEEE-Std-1471-2000 (IEEE Recommended Practice for Architectural Description of Software Intensive Systems).
SysML reuses seven of UML 2's fourteen "nominative" types of diagrams, and adds two diagrams (requirement and parametric diagrams) for a total of nine diagram types. SysML also supports allocation tables, a tabular format that can be dynamically derived from SysML allocation relationships. A table which compares SysML and UML 2 diagrams is available in the SysML FAQ.
Consider modeling an automotive system: with SysML one can use Requirement diagrams to efficiently capture functional, performance, and interface requirements, whereas with UML one is subject to the limitations of use case diagrams to define high-level functional requirements. Likewise, with SysML one can use Parametric diagrams to precisely define performance and quantitative constraints like maximum acceleration, minimum curb weight, and total air conditioning capacity. UML provides no straightforward mechanism to capture this sort of essential performance and quantitative information.
Concerning the rest of the automotive system, enhanced activity diagrams and state machine diagrams can be used to specify the embedded software control logic and information flows for the on-board automotive computers. Other SysML structural and behavioral diagrams can be used to model factories that build the automobiles, as well as the interfaces between the organizations that work in the factories.
== History ==
The SysML initiative originated in a January 2001 decision by the International Council on Systems Engineering (INCOSE) Model Driven Systems Design workgroup to customize the UML for systems engineering applications. Following this decision, INCOSE and the Object Management Group (OMG), which maintains the UML specification, jointly chartered the OMG Systems Engineering Domain Special Interest Group (SE DSIG) in July 2001. The SE DSIG, with support from INCOSE and the ISO AP 233 workgroup, developed the requirements for the modeling language, which were subsequently issued by the OMG as part of the UML for Systems Engineering Request for Proposal (UML for SE RFP; OMG document ad/03-03-41) in March 2003.
In 2003 David Oliver and Sanford Friedenthal of INCOSE requested that Cris Kobryn, who successfully led the UML 1 and UML 2 language design teams, lead their joint effort to respond to the UML for SE RFP. As Chair of the SysML Partners, Kobryn coined the language name "SysML" (short for "Systems Modeling Language"), designed the original SysML logo, and organized the SysML Language Design team as an open source specification project. Friedenthal served as Deputy Chair, and helped organize the original SysML Partners team.
In January 2005, the SysML Partners published the SysML v0.9 draft specification. Later, in August 2005, Friedenthal and several other original SysML Partners left to establish a competing SysML Submission Team (SST). The SysML Partners released the SysML v1.0 Alpha specification in November 2005.
=== OMG SysML ===
After a series of competing SysML specification proposals, a SysML Merge Team was proposed to the OMG in April 2006. This proposal was voted upon and adopted by the OMG in July 2006 as OMG SysML, to differentiate it from the original open source specification from which it was derived. Because OMG SysML is derived from open source SysML, it also includes an open source license for distribution and use.
The OMG SysML v. 1.0 specification was issued by the OMG as an Available Specification in September 2007. The current version of OMG SysML is v1.6, which was issued by the OMG in December 2019. In addition, SysML was published by the International Organization for Standardization (ISO) in 2017 as a full International Standard (IS), ISO/IEC 19514:2017 (Information technology -- Object management group systems modeling language).
The OMG has been working on the next generation of SysML and issued a Request for Proposals (RFP) for version 2 on December 8, 2017, following its open standardization process. The resulting specification, which will incorporate language enhancements from experience applying the language, will include a UML profile, a metamodel, and a mapping between the profile and metamodel. A second RFP for a SysML v2 Application Programming Interface (API) and Services RFP was issued in June 2018. Its aim is to enhance the interoperability of model-based systems engineering tools.
== Diagrams ==
SysML includes 9 types of diagram, some of which are taken from UML.
Activity diagram
Block definition diagram
Internal block diagram
Package diagram
Parametric diagram
Requirement diagram
Sequence diagram
State machine diagram
Use case diagram
== Tools ==
There are several modeling tool vendors offering SysML support. Lists of tool vendors who support SysML or OMG SysML can be found on the SysML Forum or SysML websites, respectively.
=== Model exchange ===
As an OMG UML 2.0 profile, SysML models are designed to be exchanged using the XML Metadata Interchange (XMI) standard. In addition, architectural alignment work is underway to support the ISO 10303 (also known as STEP, the Standard for the Exchange of Product model data) AP-233 standard for exchanging and sharing information between systems engineering software applications and tools.
== See also ==
SoaML
Energy systems language
Object process methodology
Universal Systems Language
List of SysML tools
== References ==
== Further reading ==
Balmelli, Laurent (2007). An Overview of the Systems Modeling Language for Products and Systems Development (PDF). Journal of Object Technology, vol. 6, no. 6, July–August 2007, pp. 149-177.
Delligatti, Lenny (2013). SysML Distilled: A Brief Guide to the Systems Modeling Language. Addison-Wesley Professional. ISBN 978-0-321-92786-6.
Holt, Jon (2008). SysML for Systems Engineering. The Institution of Engineering and Technology. ISBN 978-0-86341-825-9.
Weilkiens, Tim (2008). Systems Engineering with SysML/UML: Modeling, Analysis, Design. Morgan Kaufmann / The OMG Press. ISBN 978-0-12-374274-2.
Friedenthal, Sanford; Moore, Alan; Steiner, Rick (2016). A Practical Guide to SysML: The Systems Modeling Language (Third ed.). Morgan Kaufmann / The OMG Press. ISBN 978-0-12-800202-5.
Douglass, Bruce (2015). Agile Systems Engineering. Morgan Kaufmann. ISBN 978-0128021200.
== External links ==
Introduction to Systems Modeling Language (SysML), Part 1 and Part 2. YouTube.
SysML Open Source Specification Project Provides information related to SysML open source specifications, FAQ, mailing lists, and open source licenses.
OMG SysML Website Furnishes information related to the OMG SysML specification, SysML tutorial, papers, and tool vendor information.
Article "EE Times article on SysML (May 8, 2006)"
SE^2 MBSE Challenge team: "Telescope Modeling"
Paper "System Modelling Language explained" (PDF format)
Bruce Douglass: Real-Time Agile Systems and Software Development
List of Popular SysML Modeling Tools | Wikipedia/Systems_Modeling_Language |
In mathematics, the Iwasawa algebra Λ(G) of a profinite group G is a variation of the group ring of G with p-adic coefficients that take the topology of G into account. More precisely, Λ(G) is the inverse limit of the group rings Zp[G/H] as H runs through the open normal subgroups of G. Commutative Iwasawa algebras were introduced by Iwasawa (1959) in his study of Zp-extensions in Iwasawa theory, and non-commutative Iwasawa algebras of compact p-adic analytic groups were introduced by Lazard (1965).
== Iwasawa algebra of the p-adic integers ==
In the special case when the profinite group G is isomorphic to the additive group of the ring of p-adic integers Zp, the Iwasawa algebra Λ(G) is isomorphic to the ring of formal power series Zp[[T]] in one variable over Zp. The isomorphism is given by identifying 1 + T with a topological generator of G. This ring is a 2-dimensional complete Noetherian regular local ring, and in particular a unique factorization domain.
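This isomorphism can be explored numerically by working with truncations of Zp[[T]], i.e. modulo (p^k, T^m). The following Python sketch (the precision parameters p = 3, k = 4, m = 6 and all helper names are arbitrary illustrative choices, not part of any standard library) identifies the topological generator γ with 1 + T and computes γ^p = (1 + T)^p:

```python
# Elements of Z_p[[T]] truncated mod (p^K, T^M): a list of M coefficients in Z/p^K.
P, K, M = 3, 4, 6          # p = 3, work mod 3^4 = 81, keep T^0 .. T^5
MOD = P ** K

def mul(a, b):
    """Product of two truncated power series mod (p^K, T^M)."""
    c = [0] * M
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < M:
                c[i + j] = (c[i + j] + ai * bj) % MOD
    return c

def power(a, n):
    """a(T)^n by repeated squaring."""
    result = [1] + [0] * (M - 1)
    while n:
        if n & 1:
            result = mul(result, a)
        a = mul(a, a)
        n >>= 1
    return result

gamma = [1, 1] + [0] * (M - 2)   # the topological generator, sent to 1 + T

# gamma^p = (1 + T)^p; the middle binomial coefficients C(p, k) are divisible by p.
print(power(gamma, P))           # [1, 3, 3, 1, 0, 0] for p = 3
```

Reducing the output mod 3 gives [1, 0, 0, 1, 0, 0], i.e. (1 + T)^3 ≡ 1 + T^3 (mod 3), since the middle binomial coefficients are divisible by p.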
It follows from the Weierstrass preparation theorem for formal power series over a complete local ring that the prime ideals of this ring are as follows:
Height 0: the zero ideal.
Height 1: the ideal (p), and the ideals generated by irreducible distinguished polynomials (polynomials with leading coefficient 1 and all other coefficients divisible by p).
Height 2: the maximal ideal (p,T).
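Whether a given polynomial is distinguished can be checked coefficient by coefficient; the following Python sketch (the function name is an illustrative choice) tests the defining condition, with polynomials given as coefficient lists in increasing degree:

```python
def is_distinguished(coeffs, p):
    """True if coeffs[0] + coeffs[1]*T + ... + coeffs[-1]*T^n is distinguished:
    leading coefficient 1, every lower-order coefficient divisible by p."""
    return coeffs[-1] == 1 and all(c % p == 0 for c in coeffs[:-1])

# T^2 + 3T + 3 is distinguished for p = 3 (and Eisenstein, hence irreducible):
print(is_distinguished([3, 3, 1], 3))   # True
# T^2 + T + 3 is not: the coefficient of T is a unit.
print(is_distinguished([3, 1, 1], 3))   # False
```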
=== Finitely generated modules ===
The rank of a finitely generated module is the number of times the module Zp[[T]] occurs in it. This is well-defined and is additive for short exact sequences of finitely-generated modules. The rank of a finitely generated module is zero if and only if the module is a torsion module, which happens if and only if the support has dimension at most 1.
Many of the modules over this algebra that occur in Iwasawa theory are finitely generated torsion modules. The structure of such modules can be described as follows. A quasi-isomorphism of modules is a homomorphism whose kernel and cokernel are both finite groups, in other words modules with support either empty or the height 2 prime ideal. For any finitely generated torsion module there is a quasi-isomorphism to a finite sum of modules of the form Zp[[T]]/(f^n) where f is a generator of a height 1 prime ideal. Moreover, the number of times any module Zp[[T]]/(f) occurs in the module is well defined and independent of the composition series. The torsion module therefore has a characteristic power series, a formal power series given by the product of the power series f^n, that is uniquely defined up to multiplication by a unit. The ideal generated by the characteristic power series is called the characteristic ideal of the Iwasawa module. More generally, any generator of the characteristic ideal is called a characteristic power series.
The μ-invariant of a finitely-generated torsion module is the number of times the module Zp[[T]]/(p) occurs in it. This invariant is additive on short exact sequences of finitely generated torsion modules (though it is not additive on short exact sequences of finitely generated modules). It vanishes if and only if the finitely generated torsion module is finitely generated as a module over the subring Zp. The λ-invariant is the sum of the degrees of the distinguished polynomials that occur. In other words, if the module is pseudo-isomorphic to
{\displaystyle \bigoplus _{i}\mathbf {Z} _{p}[\![T]\!]/(p^{\mu _{i}})\oplus \bigoplus _{j}\mathbf {Z} _{p}[\![T]\!]/(f_{j}^{m_{j}})}
where the fj are distinguished polynomials, then
{\displaystyle \mu =\sum _{i}\mu _{i}}
and
{\displaystyle \lambda =\sum _{j}m_{j}\deg(f_{j}).}
In terms of the characteristic power series, the μ-invariant is the minimum of the (p-adic) valuations of the coefficients and the λ-invariant is the power of T at which that minimum first occurs.
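This coefficient-wise description makes the two invariants directly computable from (enough of) the coefficients of a characteristic power series; a Python sketch with illustrative helper names, assuming the listed coefficients already reach the minimum valuation:

```python
def p_val(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def mu_lambda(coeffs, p):
    """(mu, lambda) read off a characteristic power series:
    mu = minimal p-adic valuation among the coefficients,
    lambda = first power of T at which that minimum occurs."""
    vals = [p_val(c, p) if c != 0 else float("inf") for c in coeffs]
    mu = min(vals)
    lam = vals.index(mu)
    return mu, lam

# (T + 3)^2 = 9 + 6T + T^2 is a distinguished polynomial of degree 2,
# so mu = 0 and lambda = 2 for p = 3:
print(mu_lambda([9, 6, 1], 3))       # (0, 2)
# Multiplying by p raises mu by one: 3*(T + 3)^2 has mu = 1, lambda = 2.
print(mu_lambda([27, 18, 3], 3))     # (1, 2)
```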
If the rank, the μ-invariant, and the λ-invariant of a finitely generated module all vanish, the module is finite (and conversely); in other words its underlying abelian group is a finite abelian p-group. These are the finitely generated modules whose support has dimension at most 0. Such modules are Artinian and have a well defined length, which is finite and additive on short exact sequences.
=== Iwasawa's theorem ===
Write νn for the element 1 + γ + γ^2 + ... + γ^(p^n−1), where γ is a topological generator of Γ ≅ Zp. Iwasawa (1959) showed that if X is a finitely generated torsion module over the Iwasawa algebra and X/νnX has order p^(e_n), then
{\displaystyle e_{n}=\mu p^{n}+\lambda n+c}
for n sufficiently large, where μ, λ, and c depend only on X and not on n. Iwasawa's original argument was ad hoc, and Serre (1958) pointed out that Iwasawa's result could be deduced from standard results about the structure of modules over integrally closed Noetherian rings such as the Iwasawa algebra.
In particular this applies to the case when p^(e_n) is the largest power of p dividing the order of the ideal class group of the cyclotomic field generated by the roots of unity of order p^(n+1). The Ferrero–Washington theorem states that μ = 0 in this case.
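For illustration, once μ, λ and c are known the asymptotic formula is straightforward to evaluate; the invariants used below are made-up values, not drawn from an actual class group computation:

```python
def iwasawa_exponent(n, p, mu, lam, c):
    """Exponent e_n in #(X / nu_n X) = p^(e_n), valid for n large."""
    return mu * p**n + lam * n + c

# Hypothetical invariants mu = 0, lambda = 2, c = 1 for p = 3: linear growth.
print([iwasawa_exponent(n, 3, 0, 2, 1) for n in range(5)])  # [1, 3, 5, 7, 9]
# A nonzero mu makes e_n grow exponentially in n:
print([iwasawa_exponent(n, 3, 1, 0, 0) for n in range(5)])  # [1, 3, 9, 27, 81]
```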
== Higher rank and non-commutative Iwasawa algebras ==
More general Iwasawa algebras are of the form
{\displaystyle \Lambda (G):=\varprojlim _{H}\mathbf {Z} _{p}[G/H]}
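The inverse system behind this definition can be made concrete for G = Zp: the finite stages are the group rings (Z/p^k)[Z/p^n Z], and the transition maps add up coefficients along cosets. The Python sketch below (with arbitrarily chosen p = 3, coefficient modulus 3^2 and sample elements) checks that the transition map respects group-ring multiplication, as membership in the inverse limit requires:

```python
P = 3
MOD = P ** 2          # coefficients in Z/9

def convolve(a, b):
    """Multiply in (Z/MOD)[Z/nZ], where n = len(a) = len(b)."""
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % MOD
    return c

def project(a, m):
    """Natural map (Z/MOD)[Z/nZ] -> (Z/MOD)[Z/mZ] for m | n: sum over cosets."""
    out = [0] * m
    for i, ai in enumerate(a):
        out[i % m] = (out[i % m] + ai) % MOD
    return out

# Two sample elements of (Z/9)[Z/9Z], written as coefficient lists:
a = [1, 4, 7, 2, 5, 8, 0, 3, 6]
b = [2, 2, 3, 5, 7, 1, 8, 0, 4]

# The projection (Z/9)[Z/9Z] -> (Z/9)[Z/3Z] is a ring homomorphism:
print(project(convolve(a, b), 3) == convolve(project(a, 3), project(b, 3)))  # True
```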
where G is a compact p-adic Lie group. The case above corresponds to G = Zp. A classification of modules over Λ(G) up to pseudo-isomorphism is possible in case G = Zp^n. For non-commutative G, Λ(G)-modules are classified up to so-called pseudo-null modules.
== References ==
Ardakov, K.; Brown, K. A. (2006), "Ring-theoretic properties of Iwasawa algebras: a survey", Documenta Mathematica: 7–33, arXiv:math/0511345, Bibcode:2005math.....11345A, ISSN 1431-0635, MR 2290583
Iwasawa, Kenkichi (1959), "On Γ-extensions of algebraic number fields", Bulletin of the American Mathematical Society, 65 (4): 183–226, doi:10.1090/S0002-9904-1959-10317-7, ISSN 0002-9904, MR 0124316
Lazard, Michel (1965), "Groupes analytiques p-adiques", Publications Mathématiques de l'IHÉS, 26 (26): 389–603, ISSN 1618-1913, MR 0209286
Neukirch, Jürgen; Schmidt, Alexander; Wingberg, Kay (2000), "Chapter 5", Cohomology of Number Fields, Grundlehren der Mathematischen Wissenschaften, vol. 323 (1st ed.), Berlin: Springer-Verlag, ISBN 978-3-540-66671-4, MR 1737196, Zbl 0948.11001
Serre, Jean-Pierre (1958), "Classes des corps cyclotomiques (d'après K. Iwasawa) Exp.174", Séminaire Bourbaki, Vol. 5, Paris: Société Mathématique de France, pp. 83–93, MR 1603459 | Wikipedia/Iwasawa_algebra |
The mathematical disciplines of combinatorics and dynamical systems interact in a number of ways. The ergodic theory of dynamical systems has recently been used to prove combinatorial theorems about number theory, which has given rise to the field of arithmetic combinatorics. Dynamical systems theory is also heavily involved in the relatively recent field of combinatorics on words, and combinatorial aspects of dynamical systems are studied in their own right. Dynamical systems can also be defined on combinatorial objects; see, for example, graph dynamical system.
== See also ==
Symbolic dynamics
Analytic combinatorics
Combinatorics and physics
Arithmetic dynamics
== References ==
Alsedà, Lluís; Llibre, Jaume; Misiurewicz, Michał (October 2000), Combinatorial Dynamics and Entropy in Dimension One (2nd ed.), World Scientific, ISBN 978-981-02-4053-0
Baake, Michael; Damanik, David; Putnam, Ian; Solomyak, Boris (2004), Aperiodic Order: Dynamical Systems, Combinatorics, and Operators (PDF), Banff International Research Station for Mathematical Innovation and Discovery.
Berthé, Valérie; Ferenczi, Sébastien; Zamboni, Luca Q. (2005), "Interactions between dynamics, arithmetics and combinatorics: the good, the bad, and the ugly", Algebraic and topological dynamics, Contemp. Math., vol. 385, Providence, RI: Amer. Math. Soc., pp. 333–364, MR 2180244.
Fauvet, F.; Mitschi, C. (2003), From combinatorics to dynamical systems: Proceedings of the Computer Algebra Conference in honor of Jean Thomann held in Strasbourg, March 22–23, 2002, IRMA Lectures in Mathematics and Theoretical Physics, vol. 3, Berlin: Walter de Gruyter & Co., ISBN 3-11-017875-3, MR 2049418.
Fogg, N. Pytheas (2002), Fogg, N. Pytheas; Berthé, Valéré; Ferenczi, Sébastien; Mauduit, Christian; Siegel, Anne (eds.), Substitutions in Dynamics, Arithmetics and Combinatorics, Lecture Notes in Mathematics, vol. 1794, Berlin: Springer-Verlag, doi:10.1007/b13861, ISBN 3-540-44141-7, MR 1970385.
Forman, Robin (1998), "Combinatorial vector fields and dynamical systems", Mathematische Zeitschrift, 228 (4): 629–681, doi:10.1007/PL00004638, MR 1644432, S2CID 121002180.
Kaimanovich, V.; Lodkin, A. (2006), Representation theory, dynamical systems, and asymptotic combinatorics (Papers from the conference held in St. Petersburg, June 8–13, 2004), American Mathematical Society Translations, Series 2, vol. 217, Providence, RI: American Mathematical Society, ISBN 978-0-8218-4208-9, MR 2286117.
Latapy, Matthieu (2000), "Generalized integer partitions, tilings of zonotopes and lattices", in Krob, Daniel; Mikhalev, Alexander A. (eds.), Formal Power Series and Algebraic Combinatorics: 12th International Conference, FPSAC'00, Moscow, Russia, June 2000, Proceedings, Berlin: Springer, pp. 256–267, arXiv:math/0008022, Bibcode:2000math......8022L, MR 1798219.
Lothaire, M. (2005), Applied combinatorics on words, Encyclopedia of Mathematics and its Applications, vol. 105, Cambridge: Cambridge University Press, ISBN 978-0-521-84802-2, MR 2165687.
Lundberg, Erik (2007), "Almost all orbit types imply period-3", Topology and Its Applications, 154 (14): 2741–2744, doi:10.1016/j.topol.2007.05.009.
Mortveit, Henning S.; Reidys, Christian M. (2008), An introduction to sequential dynamical systems, Universitext, New York: Springer, ISBN 978-0-387-30654-4, MR 2357144.
Nekrashevych, Volodymyr (2008), "Symbolic dynamics and self-similar groups", Holomorphic Dynamics and Renormalization: A Volume in Honour of John Milnor's 75th Birthday, Fields Inst. Commun., vol. 53, Providence, RI: Amer. Math. Soc., pp. 25–73, MR 2477417.
Starke, Jens; Schanz, Michael (1998), "Dynamical system approaches to combinatorial optimization", Handbook of combinatorial optimization, Vol. 2, Boston, MA: Kluwer Acad. Publ., pp. 471–524, MR 1665408.
== External links ==
Combinatorics of Iterated Functions: Combinatorial Dynamics & Dynamical Combinatorics
Combinatorial dynamics at Scholarpedia | Wikipedia/Combinatorics_and_dynamical_systems |
In algebraic geometry and number theory, the torsion conjecture or uniform boundedness conjecture for torsion points for abelian varieties states that the order of the torsion group of an abelian variety over a number field can be bounded in terms of the dimension of the variety and the number field. A stronger version of the conjecture is that the torsion is bounded in terms of the dimension of the variety and the degree of the number field. The torsion conjecture has been completely resolved in the case of elliptic curves.
== Elliptic curves ==
From 1906 to 1911, Beppo Levi published a series of papers investigating the possible finite orders of points on elliptic curves over the rationals. He showed that there are infinitely many elliptic curves over the rationals with the following torsion groups:
Cn with 1 ≤ n ≤ 10, where Cn denotes the cyclic group of order n;
C12;
C2n × C2 with 1 ≤ n ≤ 4, where × denotes the direct sum.
At the 1908 International Mathematical Congress in Rome, Levi conjectured that this is a complete list of torsion groups for elliptic curves over the rationals. The torsion conjecture for elliptic curves over the rationals was independently reformulated by Trygve Nagell (1952) and again by Andrew Ogg (1971), with the conjecture becoming commonly known as Ogg's conjecture.
Andrew Ogg (1971) drew the connection between the torsion conjecture for elliptic curves over the rationals and the theory of classical modular curves. In the early 1970s, the work of Gérard Ligozat, Daniel Kubert, Barry Mazur, and John Tate showed that several small values of n do not occur as orders of torsion points on elliptic curves over the rationals. Barry Mazur (1977, 1978) proved the full torsion conjecture for elliptic curves over the rationals. His techniques were generalized by Kamienny (1992) and Kamienny & Mazur (1995), who obtained uniform boundedness for quadratic fields and number fields of degree at most 8 respectively. Finally, Loïc Merel (1996) proved the conjecture for elliptic curves over any number field. He proved for K a number field of degree
d = [K:Q] and an elliptic curve E/K that there is a bound on the order of the torsion group depending only on the degree:
{\displaystyle |E(K)_{\text{tors}}|\leq B(d)}
Furthermore, if P ∈ E(K)tors is a point of prime order p, we have
{\displaystyle p\leq d^{3d^{2}}.}
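Merel's bound on prime torsion orders is completely explicit, though it grows very quickly with the degree; a short Python sketch (the function name is illustrative, and since the meaningful statement in degree 1 is Mazur's theorem, the evaluations start at d = 2):

```python
def merel_prime_bound(d):
    """Merel-type bound d^(3d^2) on primes p that can occur as the order of
    a torsion point on an elliptic curve over a degree-d number field."""
    return d ** (3 * d * d)

print(merel_prime_bound(2))   # 2^12 = 4096
print(merel_prime_bound(3))   # 3^27 = 7625597484987
```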
An effective bound for the size of the torsion group in terms of the degree of the number field was given by Parent (1999). Parent proved that for
P ∈ E(K)tors a point of prime power order p^n we have
{\displaystyle p^{n}\leq B(d,p)={\begin{cases}129(3^{d}-1)(3d)^{6}&{\text{if }}p=2,\\65(5^{d}-1)(2d)^{6}&{\text{if }}p=3,\\65(3^{d}-1)(2d)^{6}&{\text{if }}p>3.\end{cases}}}
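Parent's piecewise bound translates directly into code; a Python sketch with an illustrative function name:

```python
def parent_bound(d, p):
    """Parent's bound B(d, p) on prime powers p^n occurring as the order of
    a torsion point on an elliptic curve over a degree-d number field."""
    if p == 2:
        return 129 * (3**d - 1) * (3 * d) ** 6
    if p == 3:
        return 65 * (5**d - 1) * (2 * d) ** 6
    return 65 * (3**d - 1) * (2 * d) ** 6   # case p > 3

print(parent_bound(1, 2))   # 129 * 2 * 729 = 188082
print(parent_bound(1, 3))   # 65 * 4 * 64  = 16640
print(parent_bound(1, 5))   # 65 * 2 * 64  = 8320
```

Note that the p > 3 branch does not depend on p, so the same value bounds every larger prime.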
Setting
{\displaystyle B_{\text{max}}(d)=129(5^{d}-1)(3d)^{6}}
we get, from the structure result behind the Mordell–Weil theorem (there are two integers n1, n2 such that E(K)tors ≅ Z/n1Z × Z/n2Z), a coarse but effective bound
{\displaystyle B(d)=\left(B_{\text{max}}(d)^{B_{\text{max}}(d)}\right)^{2}.}
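The coarse bound B(d) is far too large to write down in full even for d = 1, but its size can be measured through logarithms without constructing the integer; a Python sketch with illustrative function names:

```python
import math

def b_max(d):
    """B_max(d) = 129 * (5^d - 1) * (3d)^6."""
    return 129 * (5**d - 1) * (3 * d) ** 6

def coarse_bound_digits(d):
    """Approximate number of decimal digits of B(d) = (B_max(d)^B_max(d))^2,
    computed via logarithms instead of the full power."""
    bm = b_max(d)
    return math.floor(2 * bm * math.log10(bm)) + 1

print(b_max(1))                 # 129 * 4 * 729 = 376164
print(coarse_bound_digits(1))   # roughly 4.2 million decimal digits
```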
Joseph Oesterlé gave in private notes from 1994 a slightly better bound for points of prime order
p of
{\displaystyle p\leq (3^{d/2}+1)^{2},}
which turns out to be useful for computations over fields of small order, but alone is not enough to yield an effective bound for |E(K)tors|. Derickx et al. (2023) provide a published version of Oesterlé's result.
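Because Oesterlé's bound is small for low degrees, the candidate primes can simply be enumerated; a Python sketch using a naive primality test (helper names are illustrative). For d = 1 it recovers exactly the primes 2, 3, 5, 7 permitted by Mazur's theorem, and for d = 2 the set {2, 3, 5, 7, 11, 13} known for quadratic fields:

```python
import math

def oesterle_candidates(d):
    """Primes p <= (3^(d/2) + 1)^2, Oesterlé's bound for prime-order
    torsion points over degree-d number fields."""
    bound = (3 ** (d / 2) + 1) ** 2

    def is_prime(n):
        return n > 1 and all(n % k for k in range(2, math.isqrt(n) + 1))

    return [p for p in range(2, int(bound) + 1) if is_prime(p)]

print(oesterle_candidates(1))   # [2, 3, 5, 7]
print(oesterle_candidates(2))   # [2, 3, 5, 7, 11, 13]
```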
For number fields of small degree more refined results are known (Sutherland 2012). A complete list of possible torsion groups has been given for elliptic curves over
Q (see above) and for quadratic and cubic number fields. In degree 1 and 2 all groups that arise occur infinitely often. The same holds for cubic fields except for the group C21, which occurs only in a single elliptic curve over K = Q(ζ9)+. For quartic and quintic number fields the torsion groups that arise infinitely often have been determined. The following table gives the set of all prime numbers S(d) that actually arise as the order of a torsion point P ∈ E(K)tors, where Primes(q) denotes the set of all prime numbers at most q (Derickx et al. (2023) and Khawaja (2024)).
The next table gives the set of all prime numbers
S′(d)
that arise infinitely often as the order of a torsion point (Derickx et al. (2023)).
Barry Mazur gave a survey talk on the torsion conjecture on the occasion of the establishment of the Ogg Professorship at the Institute for Advanced Study in October 2022.
== See also ==
Bombieri–Lang conjecture
Uniform boundedness conjecture for preperiodic points
Uniform boundedness conjecture for rational points
== References ==
== Bibliography ==
Kamienny, Sheldon (1992). "Torsion points on elliptic curves and q-coefficients of modular forms". Inventiones Mathematicae. 109 (2): 221–229. Bibcode:1992InMat.109..221K. doi:10.1007/BF01232025. MR 1172689. S2CID 118750444.
Kamienny, Sheldon; Mazur, Barry (1995). "Rational torsion of prime order in elliptic curves over number fields". Astérisque. 228. With an appendix by A. Granville: 81–100. MR 1330929.
Mazur, Barry (1977). "Modular curves and the Eisenstein ideal". Publications Mathématiques de l'IHÉS. 47 (1): 33–186. doi:10.1007/BF02684339. MR 0488287. S2CID 122609075.
Mazur, Barry (1978), "Rational isogenies of prime degree", Inventiones Mathematicae, 44 (2), with appendix by Dorian Goldfeld: 129–162, Bibcode:1978InMat..44..129M, doi:10.1007/BF01390348, MR 0482230, S2CID 121987166
Merel, Loïc (1996). "Bornes pour la torsion des courbes elliptiques sur les corps de nombres" [Bounds for the torsion of elliptic curves over number fields]. Inventiones Mathematicae (in French). 124 (1): 437–449. Bibcode:1996InMat.124..437M. doi:10.1007/s002220050059. MR 1369424. S2CID 3590991.
Nagell, Trygve (1952). "Problems in the theory of exceptional points on plane cubics of genus one". Den 11te Skandinaviske Matematikerkongress, Trondheim 1949, Oslo. Johan Grundt Tanum forlag. pp. 71–76. OCLC 608098404.
Ogg, Andrew (1971). "Rational points of finite order on elliptic curves". Inventiones Mathematicae. 12 (2): 105–111. Bibcode:1971InMat..12..105O. doi:10.1007/BF01404654. S2CID 121794531.
Ogg, Andrew (1973). "Rational points on certain elliptic modular curves". Proc. Symp. Pure Math. Proceedings of Symposia in Pure Mathematics. 24: 221–231. doi:10.1090/pspum/024/0337974. ISBN 9780821814246.
Parent, Pierre (1999). "Bornes effectives pour la torsion des courbes elliptiques sur les corps de nombres" [Effective bounds for the torsion of elliptic curves over number fields]. Journal für die Reine und Angewandte Mathematik (in French). 1999 (506): 85–116. arXiv:alg-geom/9611022. doi:10.1515/crll.1999.009. MR 1665681.
Schappacher, Norbert; Schoof, René (1996), "Beppo Levi and the arithmetic of elliptic curves" (PDF), The Mathematical Intelligencer, 18 (1): 57–69, doi:10.1007/bf03024818, MR 1381581, S2CID 125072148, Zbl 0849.01036
Sutherland, Andrew V. (2012). "Torsion subgroups of elliptic curves over number fields" (PDF). math.mit.edu.
Derickx, Maarten; Kamienny, Sheldon; Stein, William; Stoll, Michael (2023). "Torsion points on elliptic curves over number fields of small degree". Algebra & Number Theory. 17 (2): 267–308. arXiv:1707.00364. doi:10.2140/ant.2023.17.267.
Khawaja, Maleeha (2024). "Torsion primes for elliptic curves over degree 8 number fields". Research in Number Theory. 10 (2). arXiv:2304.14284. doi:10.1007/s40993-024-00533-6. | Wikipedia/Uniform_boundedness_conjecture_for_torsion_points |
In mathematics, the arithmetic of abelian varieties is the study of the number theory of an abelian variety, or a family of abelian varieties. It goes back to the studies of Pierre de Fermat on what are now recognized as elliptic curves; and has become a very substantial area of arithmetic geometry both in terms of results and conjectures. Most of these can be posed for an abelian variety A over a number field K; or more generally (for global fields or more general finitely-generated rings or fields).
== Integer points on abelian varieties ==
There is some tension here between concepts: integer point belongs in a sense to affine geometry, while abelian variety is inherently defined in projective geometry. The basic results, such as Siegel's theorem on integral points, come from the theory of diophantine approximation.
== Rational points on abelian varieties ==
The basic result, the Mordell–Weil theorem in Diophantine geometry, says that A(K), the group of points on A over K, is a finitely-generated abelian group. A great deal of information about its possible torsion subgroups is known, at least when A is an elliptic curve. The question of the rank is thought to be bound up with L-functions (see below).
The torsor theory here leads to the Selmer group and Tate–Shafarevich group, the latter (conjecturally finite) being difficult to study.
== Heights ==
The theory of heights plays a prominent role in the arithmetic of abelian varieties. For instance, the canonical Néron–Tate height is a quadratic form with remarkable properties that appear in the statement of the Birch and Swinnerton-Dyer conjecture.
== Reduction mod p ==
Reduction of an abelian variety A modulo a prime ideal of (the integers of) K — say, a prime number p — to get an abelian variety Ap over a finite field, is possible for almost all p. The 'bad' primes, for which the reduction degenerates by acquiring singular points, are known to reveal very interesting information. As often happens in number theory, the 'bad' primes play a rather active role in the theory.
Here a refined theory of (in effect) a right adjoint to reduction mod p — the Néron model — cannot always be avoided. In the case of an elliptic curve there is an algorithm of John Tate describing it.
== L-functions ==
For abelian varieties such as Ap, there is a definition of local zeta-function available. To get an L-function for A itself, one takes a suitable Euler product of such local functions; to understand the finite number of factors for the 'bad' primes one has to refer to the Tate module of A, which is (dual to) the étale cohomology group H1(A), and the Galois group action on it. In this way one gets a respectable definition of Hasse–Weil L-function for A. In general its properties, such as functional equation, are still conjectural – the Taniyama–Shimura conjecture (which was proven in 2001) was just a special case, so that's hardly surprising.
It is in terms of this L-function that the conjecture of Birch and Swinnerton-Dyer is posed. It is just one particularly interesting aspect of the general theory about values of L-functions L(s) at integer values of s, and there is much empirical evidence supporting it.
== Complex multiplication ==
Since the time of Carl Friedrich Gauss (who knew of the lemniscate function case) the special role has been known of those abelian varieties
A with extra automorphisms, and more generally endomorphisms. In terms of the ring End(A)
, there is a definition of abelian variety of CM-type that singles out the richest class. These are special in their arithmetic. This is seen in their L-functions in rather favourable terms – the harmonic analysis required is all of the Pontryagin duality type, rather than needing more general automorphic representations. That reflects a good understanding of their Tate modules as Galois modules. It also makes them harder to deal with in terms of the conjectural algebraic geometry (Hodge conjecture and Tate conjecture). In those problems the special situation is more demanding than the general.
In the case of elliptic curves, the Kronecker Jugendtraum was the programme Leopold Kronecker proposed, to use elliptic curves of CM-type to do class field theory explicitly for imaginary quadratic fields – in the way that roots of unity allow one to do this for the field of rational numbers. This generalises, but in some sense with loss of explicit information (as is typical of several complex variables).
== Manin–Mumford conjecture ==
The Manin–Mumford conjecture of Yuri Manin and David Mumford, proved by Michel Raynaud, states that a curve C in its Jacobian variety J can only contain a finite number of points that are of finite order (a torsion point) in J, unless C = J. There are other more general versions, such as the Bogomolov conjecture which generalizes the statement to non-torsion points.
== References ==
In arithmetic geometry, the uniform boundedness conjecture for rational points asserts that for a given number field K and a positive integer g ≥ 2, there exists a number N(K, g) depending only on K and g such that any algebraic curve C defined over K of genus g has at most N(K, g) K-rational points. This is a refinement of Faltings's theorem, which asserts that the set of K-rational points C(K) is necessarily finite.
== Progress ==
The first significant progress towards the conjecture was due to Caporaso, Harris, and Mazur. They proved that the conjecture holds if one assumes the Bombieri–Lang conjecture.
== Mazur's conjecture B ==
Mazur's conjecture B is a weaker variant of the uniform boundedness conjecture asserting that there should be a number N(K, g, r) such that any algebraic curve C defined over K of genus g whose Jacobian variety JC has Mordell–Weil rank r over K has at most N(K, g, r) K-rational points.
Michael Stoll proved that Mazur's conjecture B holds for hyperelliptic curves under the additional hypothesis that r ≤ g − 3. Stoll's result was further refined by Katz, Rabinoff, and Zureick-Brown in 2015. Both of these works rely on Chabauty's method.
Mazur's conjecture B was resolved by Dimitrov, Gao, and Habegger in 2021 using the earlier work of Gao and Habegger on the geometric Bogomolov conjecture instead of Chabauty's method.
== References ==
In mathematics, the Birch and Swinnerton-Dyer conjecture (often called the Birch–Swinnerton-Dyer conjecture) describes the set of rational solutions to equations defining an elliptic curve. It is an open problem in the field of number theory and is widely recognized as one of the most challenging mathematical problems. It is named after mathematicians Bryan John Birch and Peter Swinnerton-Dyer, who developed the conjecture during the first half of the 1960s with the help of machine computation. Only special cases of the conjecture have been proven.
The modern formulation of the conjecture relates arithmetic data associated with an elliptic curve E over a number field K to the behaviour of the Hasse–Weil L-function L(E, s) of E at s = 1. More specifically, it is conjectured that the rank of the abelian group E(K) of points of E is the order of the zero of L(E, s) at s = 1. The first non-zero coefficient in the Taylor expansion of L(E, s) at s = 1 is given by more refined arithmetic data attached to E over K (Wiles 2006).
The conjecture was chosen as one of the seven Millennium Prize Problems listed by the Clay Mathematics Institute, which has offered a $1,000,000 prize for the first correct proof.
== Background ==
Mordell (1922) proved Mordell's theorem: the group of rational points on an elliptic curve has a finite basis. This means that for any elliptic curve there is a finite subset of the rational points on the curve, from which all further rational points may be generated.
If the number of rational points on a curve is infinite then some point in a finite basis must have infinite order. The number of independent basis points with infinite order is called the rank of the curve, and is an important invariant property of an elliptic curve.
If the rank of an elliptic curve is 0, then the curve has only a finite number of rational points. On the other hand, if the rank of the curve is greater than 0, then the curve has an infinite number of rational points.
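The chord-and-tangent construction underlying the group law makes this concrete: doubling a rational point of infinite order with exact rational arithmetic keeps producing new rational points. A sketch in Python, with the curve and starting point chosen purely for illustration:

```python
from fractions import Fraction

# Tangent-line doubling on y^2 = x^3 + a*x + b with exact rational arithmetic.
# Starting from one rational point of infinite order, repeated doubling
# produces infinitely many distinct rational points. Illustrative sketch only.

def double(P, a):
    x, y = P
    lam = (3 * x * x + a) / (2 * y)   # slope of the tangent at P (y != 0)
    x3 = lam * lam - 2 * x            # x-coordinate of 2P
    y3 = lam * (x - x3) - y           # y-coordinate of 2P
    return (x3, y3)

def on_curve(P, a, b):
    x, y = P
    return y * y == x**3 + a * x + b

# On y^2 = x^3 - 2 the point P = (3, 5) has infinite order.
P = (Fraction(3), Fraction(5))
Q = double(P, a=0)  # 2P = (129/100, -383/1000), again a rational point
```

Note how the denominators grow rapidly under doubling; this growth of height is what the Néron–Tate canonical height quantifies.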
Although Mordell's theorem shows that the rank of an elliptic curve is always finite, it does not give an effective method for calculating the rank of every curve. The rank of certain elliptic curves can be calculated using numerical methods but (in the current state of knowledge) it is unknown if these methods handle all curves.
An L-function L(E, s) can be defined for an elliptic curve E by constructing an Euler product from the number of points on the curve modulo each prime p. This L-function is analogous to the Riemann zeta function and the Dirichlet L-series that is defined for a binary quadratic form. It is a special case of a Hasse–Weil L-function.
The natural definition of L(E, s) only converges for values of s in the complex plane with Re(s) > 3/2. Helmut Hasse conjectured that L(E, s) could be extended by analytic continuation to the whole complex plane. This conjecture was first proved by Deuring (1941) for elliptic curves with complex multiplication. It was subsequently shown to be true for all elliptic curves over Q, as a consequence of the modularity theorem in 2001.
Finding rational points on a general elliptic curve is a difficult problem. Finding the points on an elliptic curve modulo a given prime p is conceptually straightforward, as there are only a finite number of possibilities to check. However, for large primes it is computationally intensive.
== History ==
In the early 1960s Peter Swinnerton-Dyer used the EDSAC-2 computer at the University of Cambridge Computer Laboratory to calculate the number of points modulo p (denoted by Np) for a large number of primes p on elliptic curves whose rank was known. From these numerical results Birch & Swinnerton-Dyer (1965) conjectured that Np for a curve E with rank r obeys an asymptotic law
{\displaystyle \prod _{p\leq x}{\frac {N_{p}}{p}}\approx C\log(x)^{r}{\mbox{ as }}x\rightarrow \infty }
where C is a constant.
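In the same spirit as those computations, Np can be found for small primes by brute force, together with the partial product over primes up to x. A sketch (function names are illustrative; the product is restricted to odd primes of good reduction, since characteristic 2 needs separate treatment):

```python
# Brute-force count of points on y^2 = x^3 + a*x + b over F_p, including the
# point at infinity, and the partial product  ∏_{p<=x} N_p / p.

def count_points(a: int, b: int, p: int) -> int:
    squares = {y * y % p for y in range(p)}     # squares mod p (including 0)
    n_affine = 0
    for x in range(p):
        rhs = (x**3 + a * x + b) % p
        if rhs == 0:
            n_affine += 1                       # single solution y = 0
        elif rhs in squares:
            n_affine += 2                       # two square roots ±y
    return n_affine + 1                         # plus the point at infinity

def bsd_partial_product(a: int, b: int, x_max: int) -> float:
    """Product of N_p/p over odd primes p <= x_max of good reduction."""
    disc = -16 * (4 * a**3 + 27 * b**2)
    prod = 1.0
    for p in range(3, x_max + 1):
        if all(p % d for d in range(2, int(p**0.5) + 1)):   # p is prime
            if disc % p != 0:                               # good reduction
                prod *= count_points(a, b, p) / p
    return prod
```

The conjecture predicts that for a curve of rank r this product grows roughly like C·(log x)^r as x increases.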
Initially, this was based on somewhat tenuous trends in graphical plots; this induced a measure of skepticism in J. W. S. Cassels (Birch's Ph.D. advisor). Over time the numerical evidence stacked up.
This in turn led them to make a general conjecture about the behavior of a curve's L-function L(E, s) at s = 1, namely that it would have a zero of order r at this point. This was a far-sighted conjecture for the time, given that the analytic continuation of L(E, s) was only established for curves with complex multiplication, which were also the main source of numerical examples. (NB that the reciprocal of the L-function is from some points of view a more natural object of study; on occasion, this means that one should consider poles rather than zeroes.)
The conjecture was subsequently extended to include the prediction of the precise leading Taylor coefficient of the L-function at s = 1. It is conjecturally given by
{\displaystyle {\frac {L^{(r)}(E,1)}{r!}}={\frac {\#\mathrm {Sha} (E)\Omega _{E}R_{E}\prod _{p|N}c_{p}}{(\#E_{\mathrm {tor} })^{2}}}}
where the quantities on the right-hand side are invariants of the curve, studied by Cassels, Tate, Shafarevich and others (Wiles 2006):
#E_tor is the order of the torsion group,
#Ш(E) is the order of the Tate–Shafarevich group,
Ω_E is the real period of E multiplied by the number of connected components of E,
R_E is the regulator of E, which is defined via the canonical heights of a basis of rational points,
c_p is the Tamagawa number of E at a prime p dividing the conductor N of E; it can be found by Tate's algorithm.
At the time of the inception of the conjecture little was known, not even the well-definedness of the left side (referred to as analytic) or the right side (referred to as algebraic) of this equation. John Tate expressed this in 1974 in a famous quote:
This remarkable conjecture relates the behavior of a function L at a point where it is not at present known to be defined to the order of a group Ш which is not known to be finite!
By the modularity theorem proved in 2001 for elliptic curves over Q, the left side is now known to be well-defined, and the finiteness of Ш(E) is known when additionally the analytic rank is at most 1, i.e., if L(E, s) vanishes at most to order 1 at s = 1. Both parts remain open.
== Current status ==
The Birch and Swinnerton-Dyer conjecture has been proved only in special cases:
Coates & Wiles (1977) proved that if E is a curve over a number field F with complex multiplication by an imaginary quadratic field K of class number 1, F = K or Q, and L(E, 1) is not 0 then E(F) is a finite group. This was extended to the case where F is any finite abelian extension of K by Arthaud (1978).
Gross & Zagier (1986) showed that if a modular elliptic curve has a first-order zero at s = 1 then it has a rational point of infinite order; see Gross–Zagier theorem.
Kolyvagin (1989) showed that a modular elliptic curve E for which L(E, 1) is not zero has rank 0, and a modular elliptic curve E for which L(E, 1) has a first-order zero at s = 1 has rank 1.
Rubin (1991) showed that for elliptic curves defined over an imaginary quadratic field K with complex multiplication by K, if the L-series of the elliptic curve was not zero at s = 1, then the p-part of the Tate–Shafarevich group had the order predicted by the Birch and Swinnerton-Dyer conjecture, for all primes p > 7.
Breuil et al. (2001), extending work of Wiles (1995), proved that all elliptic curves defined over the rational numbers are modular, which extends results #2 and #3 to all elliptic curves over the rationals, and shows that the L-functions of all elliptic curves over Q are defined at s = 1.
Bhargava & Shankar (2015) proved that the average rank of the Mordell–Weil group of an elliptic curve over Q is bounded above by 7/6. Combining this with the p-parity theorem of Nekovář (2009) and Dokchitser & Dokchitser (2010) and with the proof of the main conjecture of Iwasawa theory for GL(2) by Skinner & Urban (2014), they conclude that a positive proportion of elliptic curves over Q have analytic rank zero, and hence, by Kolyvagin (1989), satisfy the Birch and Swinnerton-Dyer conjecture.
There are currently no proofs involving curves with a rank greater than 1.
There is extensive numerical evidence for the truth of the conjecture.
== Consequences ==
Much like the Riemann hypothesis, this conjecture has multiple consequences, including the following two:
Let n be an odd square-free integer. Assuming the Birch and Swinnerton-Dyer conjecture, n is the area of a right triangle with rational side lengths (a congruent number) if and only if the number of triplets of integers (x, y, z) satisfying 2x2 + y2 + 8z2 = n is twice the number of triplets satisfying 2x2 + y2 + 32z2 = n. This statement, due to Tunnell's theorem (Tunnell 1983), is related to the fact that n is a congruent number if and only if the elliptic curve y2 = x3 − n2x has a rational point of infinite order (thus, under the Birch and Swinnerton-Dyer conjecture, its L-function has a zero at 1). The interest in this statement is that the condition is easily verified.
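The condition is easy to verify by exhaustive counting over the finitely many integer triples that can satisfy each form. A sketch (function names are illustrative; this direction of the criterion assumes BSD for the converse, as stated above):

```python
# Tunnell's counting criterion for odd square-free n:
#   n is congruent  iff  #{(x,y,z): 2x^2 + y^2 + 8z^2 = n}
#                        = 2 * #{(x,y,z): 2x^2 + y^2 + 32z^2 = n},
# counting integer triples with all sign choices.

def count_reps(n: int, z_coef: int) -> int:
    """Number of integer triples (x, y, z) with 2x^2 + y^2 + z_coef*z^2 = n."""
    bx = int((n / 2) ** 0.5) + 1
    by = int(n**0.5) + 1
    bz = int((n / z_coef) ** 0.5) + 1
    count = 0
    for x in range(-bx, bx + 1):
        for y in range(-by, by + 1):
            for z in range(-bz, bz + 1):
                if 2 * x * x + y * y + z_coef * z * z == n:
                    count += 1
    return count

def tunnell_congruent(n: int) -> bool:
    """Apply the criterion to an odd square-free n."""
    return count_reps(n, 8) == 2 * count_reps(n, 32)

# 1 and 3 fail the criterion (not congruent); 5 and 7 satisfy it.
```

For n = 1, for example, each form has exactly the two representations (0, ±1, 0), so the required factor-of-two relation fails, while for n = 5 and n = 7 both counts are zero and the relation holds.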
In a different direction, certain analytic methods allow for an estimation of the order of zero in the center of the critical strip of families of L-functions. Admitting the BSD conjecture, these estimations correspond to information about the rank of families of elliptic curves in question. For example: suppose the generalized Riemann hypothesis and the BSD conjecture, the average rank of curves given by y2 = x3 + ax+ b is smaller than 2.
Because of the existence of the functional equation of the L-function of an elliptic curve, BSD allows us to calculate the parity of the rank of an elliptic curve. This is a conjecture in its own right called the parity conjecture, and it relates the parity of the rank of an elliptic curve to its global root number. This leads to many explicit arithmetic phenomena which are yet to be proved unconditionally. For instance:
Every positive integer n ≡ 5, 6 or 7 (mod 8) is a congruent number.
The elliptic curve given by y2 = x3 + ax + b where a ≡ b (mod 2) has infinitely many solutions over Q(ζ8).
Every positive rational number d can be written in the form d = s2(t3 – 91t – 182) for s and t in Q.
For every rational number t, the elliptic curve given by y2 = x(x2 – 49(1 + t4)2) has rank at least 1.
There are many more examples for elliptic curves over number fields.
== Generalizations ==
There is a version of this conjecture for general abelian varieties over number fields. A version for abelian varieties over Q is the following:
{\displaystyle \lim _{s\to 1}{\frac {L(A/\mathbb {Q} ,s)}{(s-1)^{r}}}={\frac {\#\mathrm {Sha} (A)\Omega _{A}R_{A}\prod _{p|N}c_{p}}{\#A(\mathbb {Q} )_{\text{tors}}\cdot \#{\hat {A}}(\mathbb {Q} )_{\text{tors}}}}.}
All of the terms have the same meaning as for elliptic curves, except that the square of the order of the torsion needs to be replaced by the product #A(Q)_tors · #Â(Q)_tors involving the dual abelian variety Â. Elliptic curves, as 1-dimensional abelian varieties, are their own duals, i.e. Ê = E, which simplifies the statement of the BSD conjecture. The regulator R_A needs to be understood for the pairing between a basis for the free parts of A(Q) and Â(Q) relative to the Poincaré bundle on the product A × Â.
The rank-one Birch-Swinnerton-Dyer conjecture for modular elliptic curves and modular abelian varieties of GL(2)-type over totally real number fields was proved by Shou-Wu Zhang in 2001.
Another generalization is given by the Bloch-Kato conjecture.
== Notes ==
== References ==
Shoeib, Maisara (26 May 2025). "A Topological Perspective on the Birch and Swinnerton–Dyer Conjecture". arXiv:2505.19796.
== External links ==
Weisstein, Eric W. "Swinnerton-Dyer Conjecture". MathWorld.
"Birch and Swinnerton-Dyer Conjecture". PlanetMath.
The Birch and Swinnerton-Dyer Conjecture: An Interview with Professor Henri Darmon by Agnes F. Beaudry
What is the Birch and Swinnerton-Dyer Conjecture? lecture by Manjul Bhargava (September 2016) given during the Clay Research Conference held at the University of Oxford
Cadence Design Systems, Inc. (stylized as cādence) is an American multinational technology and computational software company. Headquartered in San Jose, California, Cadence was formed in 1988 through the merger of SDA Systems and ECAD. Initially specializing in electronic design automation (EDA) software for the semiconductor industry, the company now makes software and hardware for designing products such as integrated circuits, systems on chips (SoCs), printed circuit boards, and pharmaceutical drugs, and also licenses intellectual property for the electronics, aerospace, defense, and automotive industries, among others.
== History ==
=== 1983–1999 ===
Founded in 1983 in San Jose, California, Cadence Design Systems began as an electronic design automation (EDA) company named Solomon Design Automation (SDA). SDA's cofounders included James Solomon, Richard Newton, and Alberto Sangiovanni-Vincentelli. Cadence was formed by the merger of SDA and ECAD. A public company, ECAD had been co-founded by Ping Chao, Glen Antle, and Paul Huang in 1982. Cadence Design Systems was officially formed through SDA and ECAD's 1988 merger, with Joseph Costello appointed both CEO and president of the newly combined company. After the merger, Cadence began trading on the New York Stock Exchange and Costello oversaw further mergers and acquisitions.
In 1989, the company acquired Gateway Design Automation for $72 million. In 1990 it acquired Automated Systems Inc., and in doing so added "board design to its existing line of chip design software." In 1991, Cadence acquired its rival Valid Logic Systems for around $200 million, its biggest acquisition yet. The revenues of the combined company were $390 million, making Cadence "the largest provider of the software used by electronic engineers to design computer chips and circuit boards," according to the New York Times.
In 1996, Cadence acquired High Level Design Systems, at which point Cadence had 3,300 employees and $742 million in annual revenue. Following the resignation of Cadence's original CEO Joe Costello in 1997, Jack Harding was appointed CEO. Ray Bingham was named CEO in 1999. In 1998, Cadence purchased Ambit Design Systems, which made tools for system-on-a-chip technology, for $260 million, and it purchased OrCAD Systems in 1999. After acquiring Quickturn Design in 1999, Cadence was described as a "white knight" by the New York Times, as Quickturn had been subject to a hostile takeover attempt by Cadence's rival Mentor Graphics.
=== 2000–2019 ===
Under urging by executives such as Jim Hogan and executive vice president Penny Herscher, between 2001 and 2003, Cadence purchased a number of implementation tools through acquisition, such as Silicon Perspective, Verplex, and Celestry Design. The acquisitions were apparently in part to counter the 2001 purchase of Avanti by Synopsys, as Synopsys had become their primary market rival. In 2004, Mike Fister became Cadence's new CEO and president, with Ray Bingham becoming chairman. The former chairman, Donald L. Lucas, remained on the Cadence board. Between 2004 and 2007, Cadence purchased four companies, including the software developer Verisity, and in 2006, it spent $1 billion in stock buybacks.
In 2007, Cadence announced it would be introducing a new chip-making process that laid wires diagonally as well as horizontally and vertically, arguing it would make its designs more efficient. In June 2007, Cadence had a market value of around $6.4 billion. That year, Cadence was rumored to be in talks with Kohlberg Kravis Roberts and Blackstone Group regarding a possible sale of the company. Cadence withdrew a $1.6 billion offer to purchase Mentor Graphics in 2008. Also that year, Cadence's board appointed Lip-Bu Tan as acting CEO, after the resignation of Mike Fister; Tan had served on the Cadence board of directors since 2004. In January 2009, the board of directors of Cadence voted unanimously to confirm Lip-Bu Tan as president and CEO. In 2011, it purchased Altos Design Automation. Subsequent notable acquisitions included Cosmic Circuits and Tensilica in 2013, Forte Design Systems in 2014, and the AWR Corporation in 2019.
=== 2020–2025 ===
Cadence had 9,300 employees and annual revenue of $3 billion in 2021. Most of its revenue came from licensing its software and intellectual property. In April 2021, following a Washington Post report on the use of Cadence and Synopsys technology in the People's Liberation Army's military-civil fusion efforts, U.S. legislators Michael McCaul and Tom Cotton requested that the United States Department of Commerce tighten controls on the sales of semiconductor manufacturing software. On December 15, 2021, Anirudh Devgan assumed the role of Cadence president and CEO, after having been named Cadence president in 2017. Lip-Bu Tan retired as CEO to become executive chairman, and left that position and the board in May 2023. In 2021, Cadence launched an artificial intelligence platform to streamline processor development.
Although most of Cadence's customers for decades were "traditional semiconductor firms," around 40% of Cadence's revenue by 2022 came from customers who were "systems" oriented, or seeking products tailored for various industries that utilized chips in a central role. Cadence was also increasingly designing customized chips for clients and having them manufactured by third parties such as Taiwan Semiconductor Manufacturing, a practice which had become more popular in the face of worldwide chip shortages and shipping issues, according to Reuters. By late 2022, Cadence had clients such as Tesla and Apple Inc. Cadence acquired OpenEye Scientific Software for $500 million in September 2022, rebranding the company OpenEye Cadence Molecular Sciences and making it into a business unit. OpenEye signed Pfizer as a software client in October 2023.
Cadence purchased various businesses from Rambus in 2023. As of September 2023, Cadence was "looking into" applying for funding from the $52 billion CHIPS and Science Act, passed in 2022 to bring more of the international semiconductor supply chain into the United States. In February 2024, Cadence "quietly stepped into the supercomputer business," according to TechRadar, when it unveiled the M1, its own supercomputer designed to run computational fluid dynamics (CFD) while utilizing AI. In June 2024, Cadence purchased BETA CAE Systems.
In January 2025, Cadence announced the acquisition of Secure-IC, a leading embedded security IP platform provider; the acquisition is expected to close by mid-2025, following the usual regulatory approvals and other closing conditions, and be immaterial to 2025 revenue and earnings.
In 2025, the Trump administration paused the issuing of licenses for exports of Cadence software to China.
== Products ==
Originally known as a creator of electronic design automation (EDA) software, the company currently develops software, hardware and intellectual property (IP) used to design chips, chiplet-style products, and printed circuit boards, while also selling hardware systems that run its chip design software.
It also has tools for "electromagnetics, thermal and computational fluid dynamics in the high-tech electronics, aerospace and defense and automotive sectors," and according to Investor's Business Daily in 2023, it specializes in products for fields such as "artificial intelligence and machine learning, cloud computing, 3D technology, and AI-enabled big data analytics." Among market applications are "hyperscale computing, 5G communications, automotive, mobile, aerospace, consumer, industrial and health care."
=== Integrated circuit software ===
The company develops a number of technologies for creating custom integrated circuits. For example, its Virtuoso Platform, later renamed Virtuoso Studio, incorporates tools for designing full-custom integrated circuits. In 2019, Cadence introduced its Spectre X parallel circuit simulator, so that users could distribute time- and frequency-domain simulations across hundreds of CPUs for speed. Cadence also developed AWR, a radio frequency to millimeter wave design environment for designing 5G/wireless products. AWR is used for communications, aerospace and defense, semiconductor, computer, and consumer electronics.
=== Digital implementation and signoff ===
Cadence has a number of digital implementation and signoff tools, including Genus, Innovus, Tempus & Voltus, among others. In 2020, Cadence integrated its Innovus place and route engine and optimizer into Genus Synthesis. Stratus is Cadence's high-level synthesis tool, and is used to create RTL implementations from C, C++, or SystemC code. Other formal verification and signoff tools include Conformal Equivalence Checker, Joules RTL Power Solution, Quantus Extraction Solution, and Cadence's Modus DFT Software Solution.
=== System verification ===
Cadence has developed a number of formal verification products for chip design. JasperGold is a formal verification tool, initially introduced in 2003 and upgraded with machine learning in 2019. vManager is a verification management tool for tracking the verification process. Cadence announced Perspec System Verifier in 2014 for defining and verifying system-level verification scenarios, with Perspec made compatible with the Accellera Portable Test and Stimulus Standard (PSS) several years later. Introduced in 2017, Cadence's parallel simulator Xcelium is based on a multi-core parallel computing architecture.
=== Hardware emulation ===
In 2015, Cadence announced the Palladium Z1 hardware emulation platform, with over 100 million gates per hour compile speed and greater than 1 MHz execution for billion-gate designs, which was based on emulation technology from Cadence's 1998 acquisition of Quickturn. Cadence announced Palladium Z2 in 2021, claiming a 1.5X performance and 2X capacity improvement over the Z1.
The Protium FPGA prototyping platform was introduced in 2014, followed by the Protium S1 in 2017, which was built on Xilinx Virtex UltraScale FPGAs. Protium X1 rack-based prototyping was introduced in 2019, which Cadence claimed supported 1.2-billion-gate SoCs at around 5 MHz, with the Protium S1/X1 and Palladium sharing a single compilation flow. In 2021, Protium X2 was announced; Cadence claimed a 1.5X performance and 2X capacity improvement over Protium X1.
=== SIP blocks ===
Cadence supplies semiconductor intellectual property (SIP) blocks, covering interface design, USB, MIPI, ethernet, memory, analog, SoC peripherals, and data plane processing units. Cadence also develops chip verification technologies including simulators and formal verification tools. Cadence develops Tensilica DSP processors for audio, vision, wireless modems, and convolutional neural nets. Tensilica DSP processors IP in 2019 included: Tensilica Vision DSPs for imaging, vision, and AI processing; Tensilica HiFi DSPs for audio processing; Tensilica Fusion DSPs for IoT; Tensilica ConnX DSPs for radar, lidar, and communications processing; and Tensilica DNA Processor Family for AI acceleration. In 2021, Cadence launched the Tensilica AI Platform to accelerate AI SoC development and improve performances.
=== PCB and packaging technologies ===
The company has a number of printed circuit board (PCB) and packaging technologies for designing circuit boards. Its Allegro Platform has tools for co-design of integrated circuits, packages, and PCBs. OrCAD/PSpice has tools for smaller design teams and individual PCB designers. OrbitIO Interconnect Designer is a die/package planning & route optimization tool. InspectAR uses augmented reality to map out complicated circuit board electronics for real-time labelling of board schematics.
=== Systems design and analysis ===
The company has a number of tools for system analysis. Sigrity has tools for signal, power integrity, and thermal integrity analysis and IC package design. Introduced in April 2019 as part of Cadence's expansion into system analysis, Clarity is a 3D field solver for electromagnetic analysis, that uses distributed adaptive meshing to partition jobs across multiple cores. In September 2019, Cadence announced Celsius, a parallel architecture thermal solver that uses finite element analysis for solid structures and computational fluid dynamics (CFD) for fluids.
Cascade Technologies, Inc. provides high-fidelity CFD solvers for multiphysics analysis of turbulent fluid flow. Fidelity Pointwise, acquired by Cadence from Pointwise in 2021, is a computational fluid dynamics (CFD) mesh generation tool.
=== Machine design and digital twins ===
Cadence in 2021 acquired a number of system analysis products from NUMECA, known for software tools used in the automotive, marine, aerospace, and power generation industries. Among the tools were Fidelity (formerly known as OMNIS), a computational fluid dynamics (CFD), mesh generation, multi-physics simulation, and optimization product. Its Cadence Reality digital twin platform creates manipulatable digital models of designs or factories.
Cadence Design Systems in February 2024 launched its Cadence Millennium Enterprise Multiphysics Platform, or Millennium M1. The hardware/software combination was designed for creating digital twins. It draws from Cadence's older Fidelity CFD suite.
=== Drug design ===
Cadence's OpenEye Scientific division has computational molecular modeling and simulation software used by pharmaceutical and biotechnology companies for purposes such as drug discovery and antibody discovery. Orion is OpenEye's software-as-a-service platform. OpenEye Scientific has its headquarters in Santa Fe, New Mexico.
=== Artificial intelligence ===
The company was increasingly incorporating artificial intelligence (AI) in 2023, according to Reuters, by "providing tools to design chips for AI" as well as by "adding AI into its own software to help in the complex process of designing chips." Cerebrus, released in 2021, is a machine learning-based tool that utilizes reinforcement learning and is meant to automatically optimize the Cadence digital design flow. In 2022, Cadence introduced the AI platform Optimality Intelligent System Explorer, a system design tool with multiphysics system analysis software, designed to be compatible with Clarity 3D and SigrityX; Microsoft was an early adopter. In September 2023, Cadence released software called ChipGPT, allowing companies to create custom silicon with assistance from AI.
== Recognition ==
In 2016, former Cadence CEO Lip-Bu Tan was awarded the Dr. Morris Chang Exemplary Leadership Award by the Global Semiconductor Alliance. In 2019, Investor's Business Daily ranked Cadence Design Systems #5 on its 50 Best Environmental, Social, and Governance (ESG) Companies list. In 2020, Cadence ranked #45 on People magazine's Companies that Care list. Fortune magazine named Cadence to its 100 Best Companies to Work For list for the sixth consecutive year in 2020. In 2021, Anirudh Devgan was awarded the prestigious IEEE/SEMI Phil Kaufman award and in 2022 was inducted into National Academy of Engineering.
== Sponsorship ==
In May 2022, the Formula 1 motor racing team McLaren announced a multi-year partnership deal with Cadence. Cadence partnered with the San Francisco 49ers in April 2023 on a multi-year technology project to improve energy efficiency at Levi's Stadium. The deal also gave Cadence the naming rights to the team's mobile app.
== Acquisitions timeline ==
== Lawsuits ==
=== Avanti Corporation ===
From 1995 until 2002, Cadence was involved in a 6-year-long legal dispute with Avanti Corporation (brand name "Avant!"), in which Cadence claimed Avanti stole Cadence code, and Avanti denied it. According to Business Week, "The Avanti case is probably the most dramatic tale of white-collar crime in the history of Silicon Valley". The Avanti executives eventually pleaded no contest and Cadence received several hundred million dollars in restitution. Avanti was then purchased by Synopsys, which paid $265 million more to settle the remaining claims. The case resulted in a number of legal precedents.
=== Aptix Corporation ===
Quickturn Design Systems, a company acquired by Cadence, was involved in a series of legal events with Aptix Corporation. Aptix licensed a patent to Mentor Graphics, and the two companies jointly sued Quickturn over an alleged patent infringement. Amr Mohsen, CEO of Aptix, forged and tampered with legal evidence and was subsequently charged with conspiracy, perjury, and obstruction of justice. Mohsen was arrested after violating his bail agreement by attempting to flee the country. While in jail, Mohsen plotted to intimidate witnesses and kill the federal judge presiding over his case. Mohsen was further charged with attempting to delay a federal trial by feigning incompetency. Due to the overwhelming misconduct, the judge ruled the asserted patent unenforceable, and Mohsen was sentenced to 17 years in prison. Mentor Graphics subsequently sued Aptix to recoup legal costs. Cadence also sued Mentor Graphics and Aptix to recover legal costs.
=== Berkeley Design Automation ===
In 2013, Cadence sued Berkeley Design Automation (BDA) for circumvention of a license scheme used to link its Analog FastSpice (AFS) simulator to Cadence's Analog Design Environment (Virtuoso ADE). The lawsuit was settled less than a year later with an undisclosed payment by BDA and a multi-year agreement to support interoperability of AFS with ADE through Cadence's official interface. BDA was bought by Mentor Graphics a few months later.
== See also ==
Comparison of EDA software
List of EDA companies
List of semiconductor IP core vendors
List of the largest software companies
List of S&P 400 companies
Semiconductor intellectual property core
Ken Kundert, Cadence fellow and creator of the Spectre circuit simulation family of products (including SpectreRF) and the Verilog-A analog hardware description language
== References ==
== External links ==
Official website
Business data for Cadence Design Systems, Inc.
Functional safety is the part of the overall safety of a system or piece of equipment that depends on automatic protection operating correctly in response to its inputs or failure in a predictable manner (fail-safe). The automatic protection system should be designed to properly handle likely systematic errors, hardware failures and operational/environmental stress.
== Objective ==
The objective of functional safety is freedom from unacceptable risk of physical injury or of damage to the health of people either directly or indirectly (through damage to property or to the environment) by the proper implementation of one or more automatic protection functions (often called safety functions). A safety system (often called a safety-related system) consists of one or more safety functions.
Functional safety is intrinsically end-to-end in scope in that it has to treat the function of a component or subsystem as part of the function of the entire automatic protection function of any system. Thus, although functional safety standards focus on electrical, electronic, and programmable systems (E/E/PS), the end-to-end scope means that in practice, functional safety methods must extend to the non-E/E/PS parts of the system, such as the actuators, valves, and motors, that the E/E/PS controls or monitors.
== Achieving functional safety ==
Functional safety is achieved when every specified safety function is carried out and the level of performance required of each safety function is met. This is normally achieved by a process that includes the following steps as a minimum:
Identifying the required safety functions. This means the hazards and safety functions have to be known. A process of function reviews, formal HAZIDs, HAZOPs, and accident reviews is applied to identify these.
Assessment of the risk-reduction required by the safety function, which will involve a safety integrity level (SIL) or performance level or other quantification assessment. A SIL (or PL, AgPL, ASIL) applies to an end-to-end safety function of the safety-related system, not just to a component or a part of the system.
Ensuring the safety function performs to the design intent, including under conditions of incorrect operator input and failure modes. This will involve having the design and lifecycle managed by qualified and competent engineers carrying out processes to a recognized functional safety standard. In Europe, that standard is IEC EN 61508, or one of the industry specific standards derived from IEC EN 61508, or for simple systems some other standard like ISO 13849.
Verification that the system meets the assigned SIL, ASIL, PL or agPL by determining the probability of dangerous failure, checking minimum levels of redundancy, and reviewing systematic capability (SC). These three metrics have been called "the three barriers". Failure modes of a device are typically determined by a failure mode and effects analysis (FMEA) of the system. Failure probabilities for each failure mode are typically determined using a failure mode, effects, and diagnostic analysis (FMEDA).
Conducting functional safety audits to examine and assess the evidence that the appropriate safety lifecycle management techniques were applied consistently and thoroughly in the relevant lifecycle stages of the product.
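The probability-of-dangerous-failure barrier mentioned above is usually expressed, for low-demand systems, as an average probability of failure on demand (PFDavg). The sketch below uses the common simplified single-channel (1oo1) approximation PFDavg ≈ λ_DU · TI / 2 and maps the result to the IEC 61508 low-demand SIL bands; it deliberately ignores diagnostics, common-cause factors, and the redundancy and systematic-capability barriers, which a real verification must also cover. The function names and the example failure rate are illustrative assumptions, not taken from any standard.

```python
def pfd_avg_1oo1(lambda_du, proof_test_interval_h):
    """Simplified average probability of failure on demand for a
    single-channel (1oo1) system: PFDavg ~= lambda_DU * TI / 2,
    where lambda_DU is the dangerous-undetected failure rate (per hour)
    and TI is the proof-test interval (hours)."""
    return lambda_du * proof_test_interval_h / 2.0


def sil_band_low_demand(pfd_avg):
    """Map a PFDavg value to the SIL claim limit for low-demand mode
    (IEC 61508-1 target failure measures)."""
    bands = [(1e-5, 1e-4, 4), (1e-4, 1e-3, 3), (1e-3, 1e-2, 2), (1e-2, 1e-1, 1)]
    for lo, hi, sil in bands:
        if lo <= pfd_avg < hi:
            return sil
    return 0  # outside the SIL 1-4 claim range


# Example: lambda_DU = 2e-7 per hour with an annual proof test (8760 h)
pfd = pfd_avg_1oo1(2e-7, 8760)   # falls within the SIL 3 band
```

On these illustrative numbers the hardware-failure barrier alone would support a SIL 3 claim; in practice the architectural constraints and systematic capability then limit what can actually be claimed.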
Neither safety nor functional safety can be determined without considering the system as a whole and the environment with which it interacts. Functional safety is inherently end-to-end in scope. Modern systems often have software intensively commanding and controlling safety-critical functions. Therefore, software functionality and correct software behavior must be part of the functional safety engineering effort to ensure acceptable safety risk at the system level.
== Certifying functional safety ==
Any claim of functional safety for a component, subsystem or system should be independently certified to one of the recognized functional safety standards. A certified product can then be claimed to be functionally safe to a particular safety integrity level or a performance level in a specific range of applications: the certificate and the assessment report is provided to the customers describing the scope and limits of performance.
=== Certification bodies ===
Functional safety is a technically challenging field. Certifications should be done by independent organizations with experience and strong technical depth (electronics, programmable electronics, mechanical, and probabilistic analysis). Functional safety certification is performed by accredited certification bodies (CB). Accreditation is awarded to a CB organization by an accreditation body (AB). In most countries there is one AB. In the United States, the American National Standards Institute (ANSI) is the AB for functional safety accreditation. In the United Kingdom, the United Kingdom Accreditation Service (UKAS) provides functional safety accreditation. ABs are members of the International Accreditation Forum (IAF) for work in management systems, products, services, and personnel accreditation or the International Laboratory Accreditation Cooperation (ILAC) for laboratory testing accreditation. A multilateral recognition arrangement (MLA) between ABs will ensure global recognition of accredited CBs.
IEC 61508 functional safety certification programs have been established by several global Certification Bodies. Each has defined their own scheme based upon IEC 61508 and other functional safety standards. The scheme lists the referenced standards and specifies procedures which describes their test methods, surveillance audit policy, public documentation policies, and other specific aspects of their program. Functional safety certification programs for IEC 61508 standards are being offered globally by several recognized CBs including Intertek, SGS, TÜV Rheinland, TÜV SÜD and UL.
An important element of functional safety certification is on-going surveillance by the certification agency. Most CB organizations have included surveillance audits in their scheme. The follow-up surveillance ensures that the product, sub-system, or system is still being manufactured in accordance with what was originally certified for functional safety. Follow-up surveillance may occur at various frequencies depending on the certification body, but will typically look at the product's field failure history, hardware design changes, software changes, as well as the manufacturer's ongoing compliance of functional safety management systems.
=== Military aerospace ===
For military aerospace and defense systems, MIL-STD-882E addresses functional hazard analyses (FHA) and determining which functions implemented in hardware and software are safety significant. The functional safety focus is on ensuring that safety-critical functions and functional threads in the system, subsystem, and software are analyzed and verified for correct behavior per safety requirements, including functional failure conditions and faults and appropriate mitigation in the design. These system safety principles underpinning functional safety were developed in the military, nuclear, and aerospace industries, and were then taken up by the rail transport, process, and control industries, which developed sector-specific standards. Functional safety standards are applied across all industry sectors dealing with safety-critical requirements and are especially applicable whenever software commands, controls, or monitors a safety-critical function. Thousands of products and processes meet the standards based on IEC 61508: from bathroom showers, automotive safety products, sensors, actuators, diving equipment, and process controllers to their integration into ships, aircraft, and major plants.
=== Aviation ===
The US FAA has similar functional safety certification processes, in the form of ARP4761, US RTCA DO-178C for software, and DO-254 for complex electronic hardware, which are applied throughout the aerospace industry. Functional safety and design assurance on civil/commercial transport aircraft is documented in SAE ARP4754A as functional design assurance levels (FDALs). The system FDALs drive the depth of engineering safety analysis. The level of rigor (LOR), or the safety tasks performed to ensure acceptable risk, depends on the identification of specific functional failure conditions and hazard severities relating to the safety-critical functions (SCFs). In many cases, functional behavior in embedded software is thoroughly analyzed and tested to ensure the system functions as intended under credible fault and failure conditions. Functional safety is becoming the normal focused approach on complex software-intensive systems and highly integrated systems with safety consequences. The traditional software safety tasks and model-based functional safety tasks are performed to provide objective safety evidence that the system functionality and safety features perform as intended under normal and off-nominal failure conditions. Functional safety begins early in the process with functional hazard analyses (FHA) to identify hazards and risks and to influence the safety design requirements and the functional allocation and decomposition that mitigate hazards. The behavior of the software and SCFs at the system level is a vital part of any functional safety effort. Analysis and implementation results are documented in functional hazard assessments, system safety assessments, or safety cases. Model-based functional safety processes are often used, and required on highly integrated and complex software-intensive systems, to understand the many interactions and predicted behaviors and to help in the safety verification and certification process.
==== Safety Review Boards ====
At Boeing, a Safety Review Board (SRB) is responsible for deciding only if an issue is or is not a safety issue. An SRB brings together multiple company subject-matter experts (SMEs) in many disciplines. The most knowledgeable SME presents the issue, assisted and guided by the Aviation Safety organization. The safety decision is taken as a vote. Any vote for "safety" results in a board decision of "safety".
=== Space ===
In the US, NASA developed an infrastructure for safety-critical systems adopted widely by industry, both in North America and elsewhere, with a standard supported by guidelines. The NASA standard and guidelines are built on ISO 12207, which is a software practice standard rather than a safety-critical standard, hence the extensive documentation NASA has been obliged to add compared to using a purpose-designed standard such as IEC EN 61508. A certification process exists for systems developed in accordance with the NASA guidelines.
=== Automotive ===
The automotive industry has developed ISO 26262 "Road Vehicles Functional Safety Standard" based on IEC 61508. The certification of those systems ensures compliance with the relevant regulations and helps protect the public. The ATEX Directive has also adopted a functional safety standard: BS EN 50495:2010, "Safety Devices Required for the Safe Functioning of Equipment with Respect to Explosion Risks", which covers safety-related devices such as purge controllers and Ex e motor circuit breakers and is applied by notified bodies under the ATEX Directive. ISO 26262 particularly addresses the automotive development cycle. It is a multi-part standard defining requirements and providing guidelines for achieving functional safety in E/E systems installed in series production passenger cars, and it is considered a best-practice framework for achieving automotive functional safety. The compliance process usually takes time, as employees need to be trained to develop the expected competencies.
== Contemporary functional safety standards ==
The primary functional safety standards in current use are listed below:
IEC EN 61508 Parts 1 to 7 is a core functional safety standard, applied widely to all types of safety critical E/E/PS and to systems with a safety function incorporating E/E/PS.
UK Defence Standard 00-56 Issue 2
US RTCA DO-178C, North American Avionics Software
US RTCA DO-254, North American Avionics Hardware
EUROCAE ED-12B, European Airborne Flight Safety Systems
IEC 61513, Nuclear power plants – Instrumentation and control for systems important to safety – General requirements for systems, based on EN 61508
IEC 61511-1, Functional safety – Safety instrumented systems for the process industry sector – Part 1: Framework, definitions, system, hardware and software requirements, based on EN 61508
IEC 61511-2, Functional safety – Safety instrumented systems for the process industry sector – Part 2: Guidelines for the application of IEC 61511-1, based on EN 61508
IEC 61511-3, Functional safety – Safety instrumented systems for the process industry sector – Part 3: Guidance for the determination of the required safety integrity levels, based on EN 61508
IEC 62061, Safety of machinery – Functional safety of safety-related electrical, electronic and programmable electronic control systems, based on EN 61508
ISO 13849-1, -2, Safety of machinery – Safety-related parts of control systems. Non-technology dependent standard for control system safety of machinery.
EN 50126, Railway industry specific – RAMS review of operations, system and maintenance conditions for project equipment
EN 50128, Railway industry specific - Software (communications, signaling & processing systems) safety review
EN 50129, Railway industry specific - System safety in electronic systems
EN 50495, Safety devices required for the safe functioning of equipment with respect to explosion risks
NASA Safety Critical Guidelines
ISO 19014, Earth moving machinery – Functional safety
ISO 25119, Tractors and machinery for agriculture and forestry – Safety-related parts of control systems
ISO 26262, Road vehicles functional safety
== See also ==
FMEA
FMEDA
IEC 61508
Plant process and emergency shutdown systems
Safety integrity level
Spurious trip level
== References ==
== External links ==
IEC Functional safety zone
61508.org The 61508 Association
Electronic Design magazine, founded in 1952, is an electronics and electrical engineering trade magazine and website.
== History ==
Hayden Publishing Company began publishing the bi-weekly magazine Electronic Design in December 1952; the magazine was later published by InformaUSA, Inc.
In 1986, Verenigde Nederlandse Uitgeverijen purchased Hayden Publishing Inc.
In June 1988, Verenigde Nederlandse Uitgeverijen purchased Electronic Design from McGraw-Hill.
In July 1989, Penton Media purchased Electronic Design, then in Hasbrouck, N.J., from Verenigde Nederlandse Uitgeverijen.
In July 2007, Penton Media's OEM electronics publication, EE Product News, merged with Penton Media's "Electronic Design" magazine. EE Product News was founded in 1941, as a monthly publication.
In September 2016, Informa purchased Penton Media, including Electronic Design.
In November 2019, Endeavor Business Media purchased Electronic Design from Informa.
== Content ==
Sections include Technology Reports (products), Engineering Essentials (new standards), Engineering Features (events), and Embedded in Electronic Design (embedded hardware and software). Design Solutions are contributed by field engineers and Ideas For Design are submitted by readers. Electronic Design also covers components. Techview presents news and products in the categories of Analog & Power, Digital, Electronic design automation, Communications, Test, and Wireless. The magazine covers emerging technologies and large-scale trends.
Six "big" issues are published per year. The Technology Forecast issue is published in January. In June, the Megatrends issue describes industry trends. The "Best" issue reviews the year's "best" designs, events and products. "Your Issue" covers topics from the annual reader survey results. "One Powerful Issue" covers Power and "Wireless Everywhere" covers Wireless.
Editorial staff include: William Wong, Senior Content Director; James Morra, Senior Editor; Andy Turudic, Technology Editor; Cabe Atwell, Technology Editor; Alix Paultre, Editor-at-Large; and David Maliniak, Senior Editor.
== Notable contributor ==
Bob Pease was an electronics engineer and author employed by National Semiconductor Corporation who wrote a monthly column, Pease Porridge, about analog electronics, and answered letters.
== Distribution ==
The publication is free, in print and PDF, for qualified engineers and North American industry managers. It is also available online.
== See also ==
EE Times
EDN
Electronics (magazine)
Electronic News
== References ==
== External links ==
Electronic Design
Endeavor Business Media
In semiconductor device fabrication, inverse lithography technology (ILT) is an optical proximity correction approach to optimizing photomask design. It is an approach to solving an inverse imaging problem: calculating the shapes of the openings in a photomask (the "source") so that the passing light produces a good approximation of the desired pattern (the "target") on the illuminated material, typically a photoresist. As such, it is treated as a mathematical optimization problem of a special kind, because an analytical solution usually does not exist. In conventional approaches, known as optical proximity correction (OPC), a "target" shape is augmented with carefully tuned rectangles to produce a "Manhattan shape" for the "source", as shown in the illustration. The ILT approach generates curvilinear shapes for the "source", which deliver better approximations of the "target".
ILT was proposed in the 1980s, but at that time it was impractical due to the huge computational power required and the complicated "source" shapes, which presented difficulties for verification (design rule checking) and manufacturing. In the late 2000s, however, developers started reconsidering ILT due to significant increases in computational power.
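The inverse problem can be made concrete with a toy pixel-based model (an illustrative sketch only, not a production ILT algorithm): treat the optics as a Gaussian low-pass filter, the resist as a sigmoid threshold, and adjust the mask by gradient descent until the simulated print approximates the target. The kernel width, sigmoid steepness, and step size below are arbitrary assumptions.

```python
import numpy as np

STEEP = 8.0  # resist threshold sharpness (illustrative)

def blur(x, kernel_f):
    # circular convolution via FFT; models the optics as a low-pass filter
    return np.real(np.fft.ifft2(np.fft.fft2(x) * kernel_f))

def sigmoid(x):
    # crude resist model: exposure above 0.5 "prints"
    return 1.0 / (1.0 + np.exp(-STEEP * (x - 0.5)))

def optimize_mask(target, sigma=1.5, steps=200, lr=0.1):
    """Gradient-descent search for a mask whose simulated print matches target."""
    n = target.shape[0]
    d = np.minimum(np.arange(n), n - np.arange(n))  # wrap-around pixel distances
    yy, xx = np.meshgrid(d, d, indexing="ij")
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    kernel_f = np.fft.fft2(k / k.sum())

    def loss(m):
        return float(np.mean((sigmoid(blur(m, kernel_f)) - target) ** 2))

    mask = target.astype(float).copy()      # start from the target itself
    best_mask, best_loss = mask, loss(mask)
    for _ in range(steps):
        printed = sigmoid(blur(mask, kernel_f))
        diff = printed - target
        # chain rule through sigmoid and blur; the Gaussian kernel is
        # symmetric, so its transpose is the same convolution
        grad = blur(2.0 * diff * STEEP * printed * (1.0 - printed), kernel_f)
        mask = np.clip(mask - lr * grad, 0.0, 1.0)
        if loss(mask) < best_loss:
            best_mask, best_loss = mask.copy(), loss(mask)
    return best_mask, best_loss
```

Starting from the target pattern itself reproduces the OPC intuition: the optimizer grows curvilinear corrections around corners, where the low-pass optics round off the naive mask.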
== References ==
In the automated design of integrated circuits, signoff (also written as sign-off) checks is the collective name given to a series of verification steps that the design must pass before it can be taped out. This implies an iterative process involving incremental fixes across the board using one or more check types, followed by retesting of the design. There are two types of sign-off: front-end sign-off and back-end sign-off. After back-end sign-off, the chip goes to fabrication. After listing all the features in the specification, the verification engineer writes coverage for those features to identify bugs and sends the RTL design back to the designer. Bugs, or defects, can include issues such as missing features (found by comparing the layout to the specification) and errors in the design (typographical and functional errors). When the coverage reaches a target percentage, the verification team signs it off. By using a methodology such as UVM, OVM, or VMM, the verification team develops a reusable environment; UVM is now the most widely used of these.
== History ==
During the late 1960s, engineers at semiconductor companies like Intel used rubylith to produce semiconductor lithography photomasks. Circuit draft schematics of the semiconductor devices, drawn by hand by engineers, were transferred manually onto D-sized vellum sheets by a skilled schematic designer to make a physical layout of the device on a photomask.
The vellum would later be hand-checked and signed off by the original engineer; all edits to the schematics would also be noted, checked, and, again, signed off.
== Check types ==
Signoff checks have become more complex as VLSI designs approach 22nm and below process nodes, because of the increased impact of previously ignored (or more crudely approximated) second-order effects. There are several categories of signoff checks.
Layout Versus Schematic (LVS) – Also known as schematic verification, this is used to verify that the placement and routing of the standard cells in the design has not altered the functionality of the constructed circuit.
Design rule checking (DRC) – Also sometimes known as geometric verification, this involves verifying if the design can be reliably manufactured given current photolithography limitations. In advanced process nodes, DFM rules are upgraded from optional (for better yield) to required.
Formal verification – Here, the logical functionality of the post-layout netlist (including any layout-driven optimization) is verified against the pre-layout, post-synthesis netlist.
Voltage drop analysis – Also known as IR-drop analysis, this check verifies if the power grid is strong enough to ensure that the voltage representing the binary high value never dips lower than a set margin (below which the circuit will not function correctly or reliably) due to the combined switching of millions of transistors.
Signal integrity analysis – Here, noise due to crosstalk and other issues is analyzed, and its effect on circuit functionality is checked to ensure that capacitive glitches are not large enough to cross the threshold voltage of gates along the data path.
Static timing analysis (STA) – Slowly being superseded by statistical static timing analysis (SSTA), STA is used to verify whether all the logic data paths in the design can work at the intended clock frequency, especially under the effects of on-chip variation. STA is run as a replacement for SPICE because SPICE simulation's runtime makes it infeasible for full-chip analysis of modern designs.
Electromigration lifetime checks – To ensure a minimum lifetime of operation at the intended clock frequency without the circuit succumbing to electromigration.
Functional static sign-off checks – These use search and analysis techniques to check for design failures under all possible test cases; functional static sign-off domains include clock domain crossing, reset domain crossing, and X-propagation.
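The timing checks above reduce, at their core, to a longest-path computation over a directed acyclic timing graph. The sketch below is a deliberately simplified illustration (lumped per-gate delays; no interconnect parasitics, separate rise/fall arcs, or on-chip variation, all of which real STA tools model): it propagates arrival times in topological order and reports the worst slack against a clock period. The data layout and names are assumptions for illustration.

```python
from collections import deque

def arrival_times(delay, fanin):
    """Longest-path arrival time at each node of a DAG timing graph.
    delay[node] is the lumped gate delay; fanin[node] lists its drivers."""
    fanout = {n: [] for n in delay}
    indeg = {n: 0 for n in delay}
    for n in delay:
        for p in fanin.get(n, []):
            fanout[p].append(n)
            indeg[n] += 1
    arrival = {n: delay[n] for n in delay}
    queue = deque(n for n in delay if indeg[n] == 0)  # primary inputs
    visited = 0
    while queue:
        n = queue.popleft()
        visited += 1
        for s in fanout[n]:
            # a gate's output settles only after its slowest input
            arrival[s] = max(arrival[s], arrival[n] + delay[s])
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    if visited != len(delay):
        raise ValueError("timing graph contains a combinational loop")
    return arrival

def worst_slack(arrival, endpoints, clock_period):
    # negative slack at any endpoint means the design fails timing
    return min(clock_period - arrival[e] for e in endpoints)
```

For example, a gate fed by a fast and a slow branch takes the slow branch's arrival time, and the slack check flags the endpoint if the clock period is too tight.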
== Tools ==
A small subset of tools are classified as "golden" or signoff-quality. Categorizing a tool as signoff-quality without vendor-bias is a matter of trial and error, since the accuracy of the tool can only be determined after the design has been fabricated. So, one of the metrics that is in use (and often touted by the tool manufacturer/vendor) is the number of successful tapeouts enabled by the tool in question. It has been argued that this metric is insufficient, ill-defined, and irrelevant for certain tools, especially tools that play only a part in the full flow.
While vendors often embellish the ease of end-to-end (typically RTL to GDS for ASICs, and RTL to timing closure for FPGAs) execution through their respective tool suite, most semiconductor design companies use a combination of tools from various vendors (often called "best of breed" tools) in order to minimize correlation errors pre- and post-silicon. Since independent tool evaluation is expensive (single licenses for design tools from major vendors like Synopsys and Cadence may cost tens or hundreds of thousands of dollars) and a risky proposition (if the failed evaluation is done on a production design, resulting in a time to market delay), it is feasible only for the largest design companies (like Intel, IBM, Freescale, and TI). As a value add, several semiconductor foundries now provide pre-evaluated reference/recommended methodologies (sometimes referred to as "RM" flows) which includes a list of recommended tools, versions, and scripts to move data from one tool to another and automate the entire process.
This list of vendors and tools is meant to be representative and is not exhaustive:
DRC/LVS - Mentor HyperLynx DRC Free/Gold, Mentor Calibre, Magma Quartz, Synopsys Hercules, Cadence Assura
Voltage drop analysis - Cadence Voltus, Apache Redhawk, Magma Quartz Rail
Signal integrity analysis - Cadence CeltIC (crosstalk noise), Cadence Tempus Timing Signoff Solution, Synopsys PrimeTime SI (crosstalk delay/noise), Extreme-DA GoldTime SI (crosstalk delay/noise)
Static timing analysis - Synopsys PrimeTime, Magma Quartz SSTA, Cadence ETS, Cadence Tempus Timing Signoff Solution, Extreme-DA GoldTime
== References ==
Daisy Systems Corporation, incorporated in 1981 in Mountain View, California, was a computer-aided engineering company, a pioneer in the electronic design automation (EDA) industry.
It was a manufacturer of computer hardware and software for EDA, including schematic capture, logic simulation, parameter extraction and other tools for printed circuit board design and semiconductor chip layout.
In the mid-1980s, it had a subsidiary in Germany, Daisy Systems GmbH, and one in Israel.
The company merged with Cadnetix Corporation of Boulder, Colorado in 1988, with the resulting company then known officially as Daisy/Cadnetix, Inc. with the trade name DAZIX. It filed for protection under Chapter 11 of the Federal Bankruptcy Code in 1990 and was acquired by Intergraph later that year. Intergraph incorporated DAZIX into its EDA business unit, which was later spun off as an independent subsidiary named VeriBest, Inc. VeriBest was ultimately acquired by Mentor Graphics in late 1999. The VeriBest tool suite became Mentor's flagship layout tool; today it is known as Mentor Xpedition.
Daisy Systems was founded by Aryeh Finegold, David Stamm and Vinod Khosla; its original investors were Fred Adler and Oak Investment Partners.
Daisy, along with Valid Logic Systems and Mentor Graphics (collectively known as DMV), added front-end design to the existing computer-aided design aspects of computer automation.
== People ==
Many notable people in the EDA industry once worked for Daisy Systems, including Harvey Jones, who became the CEO of Synopsys, and Vinod Khosla, who, a year later in 1982, co-founded Sun Microsystems. Aryeh Finegold went on to co-found Mercury Interactive, and Dave Stamm and Don Smith went on to co-found Clarify. Tony Zingale became CEO of Clarify and then CEO of Mercury Interactive and later CEO of Jive Software. Mike Schuh co-founded Intrinsa Corporation before joining Foundation Capital as General Partner. George T. Haber went on to work at Sun and later founded CompCore Multimedia, GigaPixel, Mobilygen and CrestaTech. Dave Millman and Rick Carlson founded EDAC (now ESD Alliance), the industry organization for EDA vendors.
== Software ==
Daisy applications ran on the Daisy-DNIX operating system, a Unix-like operating system running on Intel 80286 and later processors.
In 1983, DABL (Daisy Behavioral Language) was developed at Daisy by Fred Chow. It was a hardware modelling language similar to VHDL.
The use of DABL for simulation models of processor interconnection networks is described by Lynn R. Freytag.
== References ==
Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD), is a category of software tools for designing electronic systems such as integrated circuits and printed circuit boards. The tools work together in a design flow that chip designers use to design and analyze entire semiconductor chips. Since a modern semiconductor chip can have billions of components, EDA tools are essential for their design; this article in particular describes EDA specifically with respect to integrated circuits (ICs).
== History ==
=== Early days ===
The earliest electronic design automation is attributed to IBM with the documentation of its 700 series computers in the 1950s.
Prior to the development of EDA, integrated circuits were designed by hand and manually laid out. Some advanced shops used geometric software to generate tapes for a Gerber photoplotter, responsible for generating a monochromatic exposure image, but even those copied digital recordings of mechanically drawn components. The process was fundamentally graphic, with the translation from electronics to graphics done manually; the best-known company from this era was Calma, whose GDSII format is still in use today. By the mid-1970s, developers started to automate circuit design in addition to drafting and the first placement and routing tools were developed; as this occurred, the proceedings of the Design Automation Conference catalogued the large majority of the developments of the time.
The next era began following the publication of "Introduction to VLSI Systems" by Carver Mead and Lynn Conway in 1980, which is considered the standard textbook for chip design. The result was an increase in the complexity of the chips that could be designed, with improved access to design verification tools that used logic simulation. The chips were easier to lay out and more likely to function correctly, since their designs could be simulated more thoroughly prior to construction. Although the languages and tools have evolved, this general approach of specifying the desired behavior in a textual programming language and letting the tools derive the detailed physical design remains the basis of digital IC design today.
The earliest EDA tools were produced academically. One of the most famous was the "Berkeley VLSI Tools Tarball", a set of UNIX utilities used to design early VLSI systems. Widely used were the Espresso heuristic logic minimizer, responsible for circuit complexity reductions and Magic, a computer-aided design platform. Another crucial development was the formation of MOSIS, a consortium of universities and fabricators that developed an inexpensive way to train student chip designers by producing real integrated circuits. The basic concept was to use reliable, low-cost, relatively low-technology IC processes and pack a large number of projects per wafer, with several copies of chips from each project remaining preserved. Cooperating fabricators either donated the processed wafers or sold them at cost, as they saw the program as helpful to their own long-term growth.
=== Commercial birth ===
1981 marked the beginning of EDA as an industry. For many years, the larger electronic companies, such as Hewlett-Packard, Tektronix and Intel, had pursued EDA internally; managers and developers then began to spin out of these companies to concentrate on EDA as a business. Daisy Systems, Mentor Graphics and Valid Logic Systems were all founded around this time and are collectively referred to as DMV. In 1981, the U.S. Department of Defense additionally began funding the development of VHDL as a hardware description language. Within a few years, there were many companies specializing in EDA, each with a slightly different emphasis.
The first trade show for EDA was held at the Design Automation Conference in 1984 and in 1986, Verilog, another popular high-level design language, was first introduced as a hardware description language by Gateway Design Automation. Simulators quickly followed these introductions, permitting direct simulation of chip designs and executable specifications. Within several years, back-ends were developed to perform logic synthesis.
=== Modern day ===
Current digital flows are extremely modular, with front ends producing standardized design descriptions that compile into invocations of units similar to cells without regard to their individual technology. Cells implement logic or other electronic functions using a particular integrated circuit technology. Fabricators generally provide libraries of components for their production processes, with simulation models that fit standard simulation tools.
Most analog circuits are still designed in a manual fashion, requiring specialist knowledge that is unique to analog design (such as matching concepts). Hence, analog EDA tools are far less modular, since many more functions are required, they interact more strongly and the components are, in general, less ideal.
EDA for electronics has rapidly increased in importance with the continuous scaling of semiconductor technology. Its users range from foundry operators, who run the semiconductor fabrication facilities ("fabs"), to design-service companies, which use EDA software to evaluate an incoming design for manufacturing readiness. EDA tools are also used to program design functionality into field-programmable gate arrays (FPGAs), customisable integrated circuit designs.
== Software focuses ==
=== Design ===
A design flow is characterised by several primary components; these include:
High-level synthesis (also known as behavioral synthesis or algorithmic synthesis) – The high-level design description (e.g. in C/C++) is converted into a register-transfer level (RTL) description, which represents circuitry as interactions between registers.
Logic synthesis – The translation of an RTL design description (e.g. written in Verilog or VHDL) into a netlist of logic gates.
Schematic capture – Entry of the design as a schematic, for standard-cell digital, analog and RF design; examples include Capture CIS in OrCAD by Cadence and ISIS in Proteus.
Layout – Usually schematic-driven layout, such as Layout in OrCAD by Cadence and ARES in Proteus.
=== Simulation ===
Transistor simulation – low-level transistor-simulation of a schematic/layout's behavior, accurate at device-level.
Logic simulation – digital-simulation of an RTL or gate-netlist's digital (Boolean 0/1) behavior, accurate at Boolean-level.
Behavioral simulation – high-level simulation of a design's architectural operation, accurate at cycle-level or interface-level.
Hardware emulation – Use of special purpose hardware to emulate the logic of a proposed design. Can sometimes be plugged into a system in place of a yet-to-be-built chip; this is called in-circuit emulation.
Technology CAD – Simulation and analysis of the underlying process technology. Electrical properties of devices are derived directly from device physics.
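As an illustration of the logic simulation entry above, the following is a minimal sketch, not any particular tool's API, of evaluating a gate-level netlist for one input vector. The half-adder netlist and all names are hypothetical:

```python
# Minimal gate-level logic simulator: a netlist is a list of
# (output, gate_type, inputs) entries evaluated in topological order.
GATES = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
    "NOT": lambda a: a ^ 1,
}

def simulate(netlist, inputs):
    """Evaluate a combinational netlist for one 0/1 input vector."""
    values = dict(inputs)
    for out, gate, ins in netlist:  # assumes entries are in topological order
        values[out] = GATES[gate](*(values[i] for i in ins))
    return values

# A half adder as a hypothetical example netlist.
half_adder = [
    ("sum",   "XOR", ("a", "b")),
    ("carry", "AND", ("a", "b")),
]

result = simulate(half_adder, {"a": 1, "b": 1})
# result["sum"] == 0, result["carry"] == 1
```

Real logic simulators are event-driven and handle timing and multi-valued logic (X, Z), but the Boolean evaluation at the core is the same idea.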
=== Analysis and verification ===
Functional verification: ensures logic design matches specifications and executes tasks correctly. Includes dynamic functional verification via simulation, emulation, and prototypes.
RTL Linting for adherence to coding rules such as syntax, semantics, and style.
Clock domain crossing verification (CDC check): similar to linting, but these checks/tools specialize in detecting and reporting potential issues like data loss, meta-stability due to use of multiple clock domains in the design.
Formal verification, also model checking: attempts to prove, by mathematical methods, that the system has certain desired properties, and that some undesired effects (such as deadlock) cannot occur.
Equivalence checking: algorithmic comparison between a chip's RTL-description and synthesized gate-netlist, to ensure functional equivalence at the logical level.
Static timing analysis: analysis of the timing of a circuit in an input-independent manner, hence finding a worst case over all possible inputs.
Layout extraction: starting with a proposed layout, compute the (approximate) electrical characteristics of every wire and device. Often used in conjunction with static timing analysis above to estimate the performance of the completed chip.
Electromagnetic field solvers, or just field solvers, solve Maxwell's equations directly for cases of interest in IC and PCB design. They are known for being slower but more accurate than the layout extraction above.
Physical verification, PV: checking if a design is physically manufacturable, and that the resulting chips will not have any function-preventing physical defects, and will meet original specifications.
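The static timing analysis entry above amounts to a longest-path computation over a delay-annotated graph, independent of input values. The following sketch assumes a purely combinational design; node names and delays are hypothetical:

```python
# Static timing analysis sketch: the circuit is a DAG whose edges carry
# gate/wire delays; the worst-case arrival time at each node is the
# longest path from any primary input.
from collections import defaultdict

def worst_arrival(edges, primary_inputs):
    """edges: list of (src, dst, delay). Returns worst arrival time per node."""
    graph = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set(primary_inputs)
    for src, dst, d in edges:
        graph[src].append((dst, d))
        indegree[dst] += 1
        nodes.update((src, dst))
    arrival = {n: 0.0 for n in nodes}
    ready = [n for n in nodes if indegree[n] == 0]  # process in topological order
    while ready:
        n = ready.pop()
        for dst, d in graph[n]:
            arrival[dst] = max(arrival[dst], arrival[n] + d)
            indegree[dst] -= 1
            if indegree[dst] == 0:
                ready.append(dst)
    return arrival

# Hypothetical two-gate path: in -> g1 (2 ns) -> g2 (3 ns), plus a 1 ns shortcut.
times = worst_arrival([("in", "g1", 2.0), ("g1", "g2", 3.0),
                       ("in", "g2", 1.0)], ["in"])
# times["g2"] == 5.0  (the 2.0 + 3.0 path dominates the 1.0 direct edge)
```

Production STA additionally models clocks, setup/hold constraints, and signal slew, but the worst-case-over-all-paths principle is as above.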
=== Manufacturing preparation ===
Mask data preparation (MDP) – The generation of actual lithography photomasks, used to physically manufacture the chip.
Chip finishing, which includes custom designations and structures that improve manufacturability of the layout. Examples of the latter are a seal ring and filler structures.
Producing a reticle layout with test patterns and alignment marks.
Layout-to-mask preparation that enhances layout data with graphics operations, such as resolution enhancement techniques (RET) – methods for increasing the quality of the final photomask. This also includes optical proximity correction (OPC) or inverse lithography technology (ILT) – the up-front compensation for diffraction and interference effects occurring later when chip is manufactured using this mask.
Mask generation – The generation of flat mask image from hierarchical design.
Automatic test pattern generation or ATPG – The generation of pattern data systematically to exercise as many logic-gates and other components as possible.
Built-in self-test or BIST – The installation of self-contained test-controllers to automatically test a logic or memory structure in the design.
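The idea behind ATPG above can be shown in miniature: search for an input vector on which a faulty copy of the circuit disagrees with the good one. Real tools use structural algorithms (e.g. the D-algorithm) rather than exhaustive search; the circuit and fault below are hypothetical:

```python
# ATPG sketch: exhaustively search for an input vector that exposes a
# given stuck-at fault, i.e. makes the faulty circuit's output differ
# from the good circuit's output.
from itertools import product

def good(a, b, c):    # hypothetical circuit: (a AND b) OR c
    return (a & b) | c

def faulty(a, b, c):  # same circuit with the AND gate's output stuck at 0
    return 0 | c

def find_test_vector():
    for a, b, c in product((0, 1), repeat=3):
        if good(a, b, c) != faulty(a, b, c):
            return (a, b, c)
    return None       # fault is undetectable

vec = find_test_vector()
# vec == (1, 1, 0): only a=b=1 with c=0 propagates the stuck-at-0 fault
```

The search illustrates the two requirements of a test pattern: activating the fault site and propagating its effect to an observable output.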
=== Functional safety ===
Functional safety analysis, systematic computation of failure in time (FIT) rates and diagnostic coverage metrics for designs in order to meet the compliance requirements for the desired safety integrity levels.
Functional safety synthesis, adding reliability enhancements to structured elements (modules, RAMs, ROMs, register files, FIFOs) to improve fault detection / fault tolerance. This includes (but is not limited to) the addition of error detection and/or correction codes (Hamming), redundant logic for fault detection and fault tolerance (duplicate / triplicate) and protocol checks (interface parity, address alignment, beat count).
Functional safety verification, running of a fault campaign, including insertion of faults into the design and verification that the safety mechanism reacts in an appropriate manner for the faults that are deemed covered.
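The error-correction codes named above can be illustrated with the classic single-error-correcting Hamming(7,4) code; this is a generic textbook construction, not any particular tool's implementation:

```python
# Hamming(7,4) sketch: parity bits at positions 1, 2 and 4 each cover
# the code positions whose binary index has that bit set.
def hamming74_encode(d):
    """d: 4 data bits. Returns 7 code bits for positions 1..7."""
    c = [0] * 8                       # index 0 unused for clarity
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def hamming74_correct(code):
    """Locate and flip a single bit error; returns (error_position, corrected)."""
    c = [0] + list(code)
    syndrome = 0
    for p in (1, 2, 4):
        parity = 0
        for i in range(1, 8):
            if i & p:
                parity ^= c[i]
        if parity:
            syndrome += p             # syndrome is the error position
    if syndrome:
        c[syndrome] ^= 1
    return syndrome, c[1:]

word = hamming74_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[4] ^= 1                     # flip code position 5
pos, fixed = hamming74_correct(corrupted)
# pos == 5 and fixed == word
```

Safety-oriented memories typically use an extended (SECDED) variant with an extra overall parity bit so double errors are detected rather than miscorrected.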
== Companies ==
=== Current ===
Market capitalization and company name as of March 2023:
$57.87 billion – Synopsys
$56.68 billion – Cadence Design Systems
$24.98 billion – Ansys
AU$4.88 billion – Altium
¥77.25 billion – Zuken
=== Defunct ===
Market capitalization and company name as of December 2011:
$2.33 billion – Mentor Graphics; Siemens acquired Mentor in 2017 and renamed as Siemens EDA in 2021
$507 million – Magma Design Automation; Synopsys acquired Magma in February 2012
NT$6.44 billion – SpringSoft; Synopsys acquired SpringSoft in August 2012
=== Acquisitions ===
Many EDA companies acquire small companies with software or other technology that can be adapted to their core business. Most of the market leaders are amalgamations of many smaller companies, and this trend is helped by the tendency of software companies to design tools as accessories that fit naturally into a larger vendor's suite of programs for digital circuitry; many new tools incorporate analog design and mixed systems. This is happening due to a trend to place entire electronic systems on a single chip.
== Technical conferences ==
Design Automation Conference
International Conference on Computer-Aided Design
Design Automation and Test in Europe
Asia and South Pacific Design Automation Conference
Symposia on VLSI Technology and Circuits
== See also ==
Computer-aided design (CAD)
Circuit design
EDA database
Foundations and Trends in Electronic Design Automation
Signoff (electronic design automation)
Comparison of EDA software
Platform-based design
Silicon compiler
== References ==
Notes
Resolution enhancement technologies are methods used to modify the photomasks in the lithographic processes used to make integrated circuits (ICs or "chips") to compensate for limitations in the optical resolution of the projection systems. These processes allow the creation of features well beyond the limit that would normally apply due to the Rayleigh criterion. Modern technologies allow the creation of features on the order of 5 nanometers (nm), far below the normal resolution possible using deep ultraviolet (DUV) light.
== Background ==
Integrated circuits are created in a multi-step process known as photolithography. This process starts with the design of the IC circuitry as a series of layers that will be patterned onto the surface of a sheet of silicon or other semiconductor material known as a wafer.
Each layer of the ultimate design is patterned onto a photomask, which in modern systems is made of fine lines of chromium deposited on highly purified quartz glass. Chromium is used because it is highly opaque to UV light, and quartz because it has limited thermal expansion under the intense heat of the light sources as well as being highly transparent to ultraviolet light. The mask is positioned over the wafer and then exposed to an intense UV light source. With a proper optical imaging system between the mask and the wafer (or no imaging system if the mask is positioned sufficiently close to the wafer, as in early lithography machines), the mask pattern is imaged on a thin layer of photoresist on the surface of the wafer, and the (UV or EUV) exposed parts of the photoresist undergo chemical reactions that cause the pattern to be physically created on the wafer.
When light shines on a pattern like that on a mask, diffraction effects occur. The sharply focused light from the UV lamp spreads out on the far side of the mask, becoming increasingly unfocused with distance. In early systems in the 1970s, avoiding these effects required the mask to be placed in direct contact with the wafer in order to reduce the distance from the mask to the surface. When the mask was lifted it would often pull off the resist coating and ruin that wafer. Producing a diffraction-free image was ultimately solved by the projection aligner systems, which dominated chip making through the 1970s and early 1980s.
The relentless drive of Moore's law ultimately reached the limit of what the projection aligners could handle. Efforts were made to extend their lifetimes by moving to ever-higher UV wavelengths, first to DUV and then to EUV, but the small amounts of light given off at these wavelengths made the machines impractical, requiring enormous lamps and long exposure times. This was solved through the introduction of the steppers, which used a mask at much larger sizes and used lenses to reduce the image. These systems continued to improve in a fashion similar to the aligners, but by the late 1990s were also facing the same issues.
At the time, there was considerable debate about how to continue the move to smaller features. Systems using excimer lasers in the soft-X-ray region were one solution, but these were incredibly expensive and difficult to work with. It was at this time that resolution enhancement began to be used.
== Basic concept ==
The basic concept underlying the various resolution enhancement systems is the creative use of diffraction in certain locations to offset the diffraction in others. For instance, when light diffracts around a line on the mask it will produce a series of bright and dark lines, or "bands", that spread out the desired sharp pattern. To offset this, a second pattern is deposited whose diffraction pattern overlaps with the desired features, and whose bands are positioned to overlap the original pattern's to produce the opposite effect, dark on light or vice versa. Multiple features of this sort are added, and the combined pattern produces the original feature. Typically, on the mask these additional features look like additional lines lying parallel to the desired feature.
Adding these enhancement features has been an area of continual improvement since the early 2000s. In addition to using additional patterning, modern systems add phase-shifting materials, multiple-patterning and other techniques. Together, they have allowed feature size to continue to shrink to orders of magnitude below the diffraction limit of the optics.
== Using resolution enhancement ==
Traditionally, after an IC design has been converted into a physical layout, the timing verified, and the polygons certified to be DRC-clean, the IC was ready for fabrication. The data files representing the various layers were shipped to a mask shop, which used mask-writing equipment to convert each data layer into a corresponding mask, and the masks were shipped to the fab where they were used to repeatedly manufacture the designs in silicon. In the past, the creation of the IC layout was the end of the involvement of electronic design automation.
However, as Moore's law has driven features to ever-smaller dimensions, new physical effects that could be effectively ignored in the past are now affecting the features that are formed on the silicon wafer. So even though the final layout may represent what is desired in silicon, the layout can still undergo dramatic alteration through several EDA tools before the masks are fabricated and shipped. These alterations are required not to make any change in the device as designed, but to simply allow the manufacturing equipment, often purchased and optimized for making ICs one or two generations behind, to deliver the new devices. These alterations can be classed as being of two types.
The first type is distortion correction: pre-compensating for distortions inherent in the manufacturing process, whether from processing steps such as photolithography, etching, planarization, or deposition. These distortions are measured and a suitable model fitted; compensation is then carried out, usually with a rule-based or model-based algorithm. When applied to printing distortions during photolithography, this distortion compensation is known as optical proximity correction (OPC).
The second type of reticle enhancement involves actually improving the manufacturability or resolution of the process, for example through the phase-shifting and multiple-patterning techniques described above.
For each of these manufacturability improvement techniques there are certain layouts that either cannot be improved or cause issues in printing. These are classed as non-compliant layouts, and are avoided at the design stage using, for instance, radically restrictive design rules and/or additional DRC checks where appropriate. Both the lithographic compensations and manufacturability improvements are usually grouped under the heading of resolution enhancement techniques (RET). Such techniques have been used since the 180 nm node and have become more aggressively applied as the minimum feature size has dropped significantly below the imaging wavelength, currently limited to 13.5 nm.
This is closely related to, and a part of, the more general category of design for manufacturability (IC) or DFM.
After RET, the next step in an EDA flow is usually mask data preparation.
== See also ==
Inverse lithography technology
== References ==
Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer, ISBN 0-8493-3096-3 A survey of the field, from which this summary was derived, with permission.
Design for manufacturability (also sometimes known as design for manufacturing or DFM) is the general engineering practice of designing products in such a way that they are easy to manufacture. The concept exists in almost all engineering disciplines, but the implementation differs widely depending on the manufacturing technology. DFM describes the process of designing or engineering a product to facilitate the manufacturing process and reduce manufacturing costs. DFM allows potential problems to be fixed in the design phase, which is the least expensive place to address them. Other factors may affect manufacturability, such as the type of raw material, the form of the raw material, dimensional tolerances, and secondary processing such as finishing.
Depending on various types of manufacturing processes there are set guidelines for DFM practices. These DFM guidelines help to precisely define various tolerances, rules and common manufacturing checks related to DFM.
While DFM is applicable to the design process, a similar concept called DFSS (design for Six Sigma) is also practiced in many organizations.
== For printed circuit boards (PCB) ==
In the PCB design process, DFM leads to a set of design guidelines that attempt to ensure manufacturability. By doing so, probable production problems may be addressed during the design stage.
Ideally, DFM guidelines take into account the processes and capabilities of the manufacturing industry. Therefore, DFM is constantly evolving.
As manufacturing companies evolve and automate more and more stages of the processes, these processes tend to become cheaper. DFM is usually used to reduce these costs. For example, if a process may be done automatically by machines (i.e. SMT component placement and soldering), such process is likely to be cheaper than doing so by hand.
== For integrated circuits (IC) ==
Semiconductor Design for Manufacturing (DFM) is a comprehensive set of principles and techniques used in integrated circuit (IC) design to ensure that those designs transition smoothly into high-volume manufacturing with optimal yield and reliability. DFM focuses on anticipating potential fabrication issues and proactively modifying chip layouts and circuits to mitigate their impact.
=== Background ===
As semiconductor technology scales to smaller nodes, transistors and interconnects become incredibly dense and sensitive to subtle variations in the manufacturing process. These variations can lead to defects that cause chips to malfunction or degrade their performance. DFM aims to minimize the impact of these variations, improving yield and making chip manufacturing more cost-effective.
=== Key Concepts in DFM ===
Design Rules: Foundries provide detailed design rules that specify minimum dimensions, spacing, and other geometrical constraints that must be adhered to for successful fabrication. DFM-aware design tools automatically check designs against these rules, flagging potential violations for correction.
Process Variability: DFM techniques account for inherent variability in manufacturing processes such as lithography, etching, and deposition. By simulating how variations might affect specific design structures, designers can modify layouts to minimize sensitivity to these variations.
Yield Optimization: DFM aims to maximize yield, the percentage of chips that function correctly out of a manufactured wafer. This involves identifying critical areas of the design, adding redundancy, and implementing layout strategies that improve the likelihood of successful fabrication.
Reliability: DFM encompasses techniques to ensure chips are reliable throughout their expected lifespan. This involves analyzing how design choices impact electromigration, hot carrier injection, and other potential failure mechanisms, and designing accordingly.
=== DFM Techniques ===
Some common DFM techniques used in semiconductor design include:
Redundancy: Adding extra transistors or circuit elements to critical paths, so if one element fails, the chip can still function.
Fill Patterns: Adding non-functional geometrical shapes to empty areas of a layout to improve pattern density and minimize local manufacturing variations.
Optical Proximity Correction (OPC): Modifying mask patterns to compensate for distortions that occur during the lithography process.
Restricted Design Rules (RDR): A subset of design rules that are more conservative than standard rules, offering higher manufacturability.
Yield Simulations: Using statistical models to predict how design and process variations impact yield, allowing for informed design modification.
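As a toy illustration of the design-rule checking that underpins these techniques, a minimum-spacing check can be sketched as a pairwise comparison of layout rectangles. The shapes, units and the 0.10 µm rule are hypothetical, and production checkers use far more efficient geometric data structures:

```python
# Minimal design-rule check sketch: flag pairs of rectangles on the same
# layer whose edge-to-edge spacing is below a hypothetical minimum.
def spacing(r1, r2):
    """r = (x1, y1, x2, y2). Edge-to-edge gap; 0 if the rectangles touch/overlap."""
    dx = max(r1[0] - r2[2], r2[0] - r1[2], 0)
    dy = max(r1[1] - r2[3], r2[1] - r1[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def drc_spacing(shapes, min_spacing):
    """Return index pairs of shapes closer than min_spacing (overlaps excluded)."""
    violations = []
    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            if 0 < spacing(shapes[i], shapes[j]) < min_spacing:
                violations.append((i, j))
    return violations

# Two parallel wires 0.08 um apart, checked against a hypothetical 0.10 um rule.
wires = [(0.0, 0.0, 1.0, 0.1), (0.0, 0.18, 1.0, 0.28)]
# drc_spacing(wires, 0.10) == [(0, 1)]
```

Overlapping shapes are excluded here because overlap is governed by separate rules (e.g. enclosure or short checks) rather than spacing.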
=== DFM and Design Flow ===
DFM is integrated throughout the semiconductor design flow:
Design: Designers use DFM-aware tools that automatically check for rule violations and potential manufacturability issues.
Verification: Verification processes include extensive DFM checks to ensure the design meets all manufacturing requirements.
Physical Implementation: During this stage, techniques like fill insertion and OPC are applied to the design for manufacturing optimization.
Signoff: A thorough design rule check (DRC) and layout vs. schematic (LVS) verification is performed to ensure the design is ready for fabrication.
=== Importance of DFM ===
DFM is essential for the successful and cost-effective production of advanced semiconductor devices. By proactively addressing manufacturability issues during the design stage, DFM leads to:
Higher yields
Faster time-to-market
Reduced risk of design re-spins
Lower manufacturing costs
== For CNC machining ==
=== Objective ===
The objective is to design for lower cost. The cost is driven by time, so the design must minimize not just the machining time (removing the material), but also the set-up time of the CNC machine, NC programming, fixturing and many other activities that are dependent on the complexity and size of the part.
=== Set-Up time of operations (flip of the part) ===
Unless a 4th and/or 5th axis is used, a CNC can only approach the part from a single direction. One side must be machined at a time (called an operation or op). Then the part must be flipped from side to side to machine all of the features. The geometry of the features dictates whether the part must be flipped over or not. The more ops (flip of the part), the more expensive the part because it incurs substantial set-up and load/unload time.
Each operation (flip of the part) has set-up time, machine time, time to load/unload tools, time to load/unload parts, and time to create the NC program for each operation. If a part has only 1 operation, then parts only have to be loaded/unloaded once. If it has 5 operations, then load/unload time is significant.
The low hanging fruit is minimizing the number of operations (flip of the part) to create significant savings. For example, it may take only 2 minutes to machine the face of a small part, but it will take an hour to set the machine up to do it. Or, if there are 5 operations at 1.5 hours each, but only 30 minutes total machine time, then 7.5 hours is charged for just 30 minutes of machining.
Lastly, the volume (number of parts to machine) plays a critical role in amortizing the set-up time, programming time and other activities into the cost of the part. In the example above, the part in quantities of 10 could cost 7–10 times the cost in quantities of 100.
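The amortization arithmetic above can be made concrete with a short sketch; the $100/hour shop rate and the job parameters are hypothetical:

```python
# Amortizing per-job setup time over part quantity: fixed hours are
# spread across every part in the run, while machine time is per part.
def cost_per_part(setup_hours, machine_minutes_per_part, quantity,
                  shop_rate=100.0):   # hypothetical $/hour shop rate
    fixed = setup_hours * shop_rate
    variable = machine_minutes_per_part / 60.0 * shop_rate
    return fixed / quantity + variable

small_run = cost_per_part(setup_hours=7.5, machine_minutes_per_part=6,
                          quantity=10)
large_run = cost_per_part(setup_hours=7.5, machine_minutes_per_part=6,
                          quantity=100)
# small_run is roughly $85 per part, large_run roughly $17.50 per part:
# the 10-piece run costs nearly 5x more per part than the 100-piece run.
```

At a few hundred parts the fixed term shrinks toward zero, which is the diminishing-returns point described below.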
Typically, the law of diminishing returns presents itself at volumes of 100–300 because set-up times, custom tooling and fixturing can be amortized into the noise.
=== Material type ===
The most easily machined types of metals include aluminum, brass, and softer metals. As materials get harder, denser and stronger, such as steel, stainless steel, titanium, and exotic alloys, they become much harder to machine and take much longer, thus being less manufacturable. Most types of plastic are easy to machine, although additions of fiberglass or carbon fiber can reduce the machinability. Plastics that are particularly soft and gummy may have machinability problems of their own.
=== Material form ===
Metals come in all forms. In the case of aluminum as an example, bar stock and plate are the two most common forms from which machined parts are made. The size and shape of the component may determine which form of material must be used. It is common for engineering drawings to specify one form over the other. Bar stock is generally close to 1/2 of the cost of plate on a per pound basis. So although the material form isn't directly related to the geometry of the component, cost can be removed at the design stage by specifying the least expensive form of the material.
=== Tolerances ===
A significant contributing factor to the cost of a machined component is the geometric tolerance to which the features must be made. The tighter the tolerance required, the more expensive the component will be to machine. When designing, specify the loosest tolerance that will serve the function of the component. Tolerances must be specified on a feature by feature basis. There are creative ways to engineer components with lower tolerances that still perform as well as ones with higher tolerances.
=== Design and shape ===
As machining is a subtractive process, the time to remove the material is a major factor in determining the machining cost. The volume and shape of the material to be removed, as well as how fast the tools can be fed, determine the machining time. When using milling cutters, the strength and stiffness of the tool, which is determined in part by its length-to-diameter ratio, will play the largest role in determining that speed. The shorter the tool is relative to its diameter, the faster it can be fed through the material. A ratio of 3:1 (L:D) or under is optimal; if that ratio cannot be achieved, special tooling solutions can be used. For holes, the length-to-diameter ratio of the tools is less critical, but should still be kept under 10:1.
There are many other types of features which are more or less expensive to machine. Generally, chamfers cost less to machine than radii on outer horizontal edges. 3D interpolation is used to create radii on edges that are not on the same plane, which incurs roughly 10 times the cost. Undercuts are more expensive to machine. Features that require smaller tools, regardless of L:D ratio, are more expensive.
== Design for inspection ==
The concept of design for inspection (DFI) should complement and work in collaboration with design for manufacturability (DFM) and design for assembly (DFA) to reduce product manufacturing cost and increase manufacturing practicality. There are instances when this method could cause calendar delays since it consumes many hours of additional work such as the case of the need to prepare for design review presentations and documents. To address this, it is proposed that instead of periodic inspections, organizations could adopt the framework of empowerment, particularly at the stage of product development, wherein the senior management empowers the project leader to evaluate manufacturing processes and outcomes against expectations on product performance, cost, quality and development time. Experts, however, cite the necessity for the DFI because it is crucial in performance and quality control, determining key factors such as product reliability, safety, and life cycles. For an aerospace components company, where inspection is mandatory, there is the requirement for the suitability of the manufacturing process for inspection. Here, a mechanism is adopted such as an inspectability index, which evaluates design proposals. Another example of DFI is the concept of cumulative count of conforming chart (CCC chart), which is applied in inspection and maintenance planning for systems where different types of inspection and maintenance are available.
== Design for additive manufacturing ==
Additive manufacturing broadens the ability of a designer to optimize the design of a product or part (to save materials for example). Designs tailored for additive manufacturing are sometimes very different from designs tailored for machining or forming manufacturing operations.
In addition, due to size constraints of additive manufacturing machines, larger designs are sometimes split into smaller sections with self-assembly features or fastener locators.
A common characteristic of additive manufacturing methods, such as fused deposition modeling, is the need for temporary support structures for overhanging part features. Post-processing removal of these temporary support structures increases the overall cost of fabrication. Parts can be designed for additive manufacturing by eliminating or reducing the need for temporary support structures. This can be done by limiting the angle of overhanging structures to less than the limit of the given additive manufacturing machine, material, and process (for example, less than 70 degrees from vertical).
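The overhang rule above can be sketched as a test on each facet's surface normal; conventions for measuring the angle vary by tool, and the 45-degree limit here is a hypothetical machine- and material-specific parameter:

```python
# Support-structure check sketch: a downward-facing facet needs support
# when its normal points close to straight down.
import math

def overhang_angle(normal):
    """Angle (degrees) between a facet's normal (x, y, z) and straight down."""
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    # angle between the normal and the downward direction (0, 0, -1)
    return math.degrees(math.acos(max(-1.0, min(1.0, -nz / length))))

def needs_support(normal, limit_deg=45.0):  # hypothetical machine limit
    return overhang_angle(normal) < limit_deg

# A facet facing straight down (normal (0, 0, -1)) always needs support;
# a vertical wall (normal (1, 0, 0)) never does.
```

Slicers apply a test like this per facet and then generate support only under the regions that fail it.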
== See also ==
Design for X
Electronic design automation
Reliability engineering
Six Sigma
Statistical process control
DFMA
Rule-based DFM analysis for direct metal laser sintering
Rule based analysis of extrusion process
Rule based DFM analysis for metal spinning
Rule based DFM analysis for deep drawing
Rule based DFM analysis for forging
DFM analysis for stereolithography
Rule-based DFM analysis for electric discharge machining
== References ==
== Sources ==
Mentor Graphics - DFM: What is it and what will it do? (must fill request form).
Mentor Graphics - DFM: Magic Bullet or Marketing Hype (must fill request form).
Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer, ISBN 0-8493-3096-3 A survey of the field of EDA. The above summary was derived, with permission, from Volume II, Chapter 19, Design for Manufacturability in the Nanometer Era, by Nicola Dragone, Carlo Guardiani, and Andrzej J. Strojwas.
Design for Manufacturability And Statistical Design: A Constructive Approach, by Michael Orshansky, Sani Nassif, Duane Boning ISBN 0-387-30928-4
Estimating Space ASICs Using SEER-IC/H, by Robert Cisneros, Tecolote Research, Inc. (2008) Complete Presentation Archived 2012-02-20 at the Wayback Machine
== External links ==
Why DFM/DFMA is Business Critical
Design for manufacturing checklist – DFM/DFA (design for assembly) checklist from Quick-teck PCB manufacturer
Arc Design for Manufacturability Tips
Design for Manufacturing and Assembly
Turning designs into reality: The Manufacturability paradigm
Magma Design Automation was a software company in the electronic design automation (EDA) industry. The company was founded in 1997 and maintained headquarters in San Jose, California, with facilities throughout North America, Europe and Asia. Magma software products were used in major elements of integrated circuit design, including: synthesis, placement, routing, power management, circuit simulation, verification and analog/mixed-signal design.
Magma was acquired by Synopsys in a merger finalized February 22, 2012 at a cash value of about $523 million, or $7.35 per share.
== History ==
Magma was founded in 1997 by a team including Rajeev Madhavan, who was chairman, CEO and president from the company's inception. The company initially competed primarily with Cadence and Avanti Corporation in physical design but eventually broadened its product portfolio and competed with all three of the largest established EDA companies: Cadence, Mentor Graphics and Synopsys. Magma had a particularly strong presence in the convergence device segment through key customers such as Qualcomm, Broadcom and Texas Instruments. In 2001 Roy Jewell joined Magma as chief operating officer and later that year added the title of president.
Magma completed an initial public offering on Nasdaq, under the ticker symbol LAVA, on November 20, 2001 — the last EDA company to go public — and achieved its peak annual revenue of $214.4 million in its 2008 fiscal year. Magma was the fourth largest EDA company by revenue.
In 2002 Magma was named to the Red Herring 100 for innovation and business strategy. In 2005 Forbes ranked Magma No. 2 on its list of fastest-growing technology companies.
=== Patent Dispute ===
Magma was involved in a legal dispute with Synopsys beginning in September 2004, when Synopsys sued Magma for allegedly infringing two patents. Claims and counter-claims accelerated, resulting in separate court cases in California and Delaware, and a number of disputed patents. On March 29, 2007, Magma and Synopsys announced the companies had agreed to settle all pending litigation between them. As part of the settlement Magma made a $12.5 million payment to Synopsys and each company cross-licensed four previously disputed patents to the other.
== Synopsys Acquisition ==
On November 30, 2011, Magma and Synopsys announced they had entered into a definitive agreement by which Synopsys would buy Magma for US$507 million. The merger was finalized on February 22, 2012, with the cash value of the transaction at about $523 million, or $7.35 per Magma share.
== References ==
== External links ==
Magma homepage
Statistics (from German: Statistik, orig. "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.
When census data (comprising every member of the target population) cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation.
Two main statistical methods are used in data analysis: descriptive statistics, which summarize data from a sample using indexes such as the mean or standard deviation, and inferential statistics, which draw conclusions from data that are subject to random variation (e.g., observational errors, sampling variation). Descriptive statistics are most often concerned with two sets of properties of a distribution (sample or population): central tendency (or location) seeks to characterize the distribution's central or typical value, while dispersion (or variability) characterizes the extent to which members of the distribution depart from its center and each other. Inferences made using mathematical statistics employ the framework of probability theory, which deals with the analysis of random phenomena.
A standard statistical procedure involves the collection of data leading to a test of the relationship between two statistical data sets, or a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, an alternative to an idealized null hypothesis of no relationship between two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false, given the data that are used in the test. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (null hypothesis is rejected when it is in fact true, giving a "false positive") and Type II errors (null hypothesis fails to be rejected when it is in fact false, giving a "false negative"). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis.
Statistical measurement processes are also prone to error in regards to the data that they generate. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e.g., blunder, such as when an analyst reports incorrect units) can also occur. The presence of missing data or censoring may result in biased estimates and specific techniques have been developed to address these problems.
== Introduction ==
"Statistics is both the science of uncertainty and the technology of extracting information from data." - featured in the International Encyclopedia of Statistical Science. Statistics is the discipline that deals with data, the facts and figures from which meaningful information is inferred. Data may represent a numerical value, in the form of quantitative data, or a label, as with qualitative data. Collecting, presenting and summarising data is the task of descriptive statistics; two elementary summaries of a data set, each singularly called a statistic, are its mean and its dispersion. Inferential statistics, by contrast, interprets data from a population sample to make statements and predictions about the population as a whole.
Statistics is regarded as a body of science or a branch of mathematics. It is based on probability, a branch of mathematics that studies random events. Statistics is considered the science of uncertainty, a view that arises from its ways of coping with measurement and sampling error as well as with uncertainties in modelling. Although probability and statistics were once paired together as a single subject, they are conceptually distinct from one another: the former deduces answers to specific situations from a general theory of probability, while statistics induces statements about a population from a data set. Statistics serves to bridge the gap between probability theory and applied mathematical fields.
Some consider statistics to be a distinct mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is generally concerned with the use of data in the context of uncertainty and decision-making in the face of uncertainty. Statistics is indexed at 62, a subclass of probability theory and stochastic processes, in the Mathematics Subject Classification. Mathematical statistics is covered in the range QA276–280 of subclass QA (science > mathematics) in the Library of Congress Classification.
The word statistics ultimately comes from the Latin word status, meaning "situation" or "condition" in society, which in late Latin came to mean "state". From this, the political scientist Gottfried Achenwall coined the German word Statistik (a summary of how things stand). In 1770 the term entered the English language through German, referring to the study of political arrangements; it gained its modern meaning in the 1790s in the works of John Sinclair. In modern German, Statistik is synonymous with mathematical statistics. The singular term statistic denotes a quantity computed from a data sample, and by extension the function used to compute it.
== Statistical data ==
=== Data collection ===
==== Sampling ====
When full census data cannot be collected, statisticians collect sample data by developing specific experiment designs and survey samples. Statistics itself also provides tools for prediction and forecasting through statistical models.
To use a sample as a guide to an entire population, it is important that it truly represents the overall population. Representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. A major problem lies in determining the extent that the sample chosen is actually representative. Statistics offers methods to estimate and correct for any bias within the sample and data collection procedures. There are also methods of experimental design that can lessen these issues at the outset of a study, strengthening its capability to discern truths about the population.
Sampling theory is part of the mathematical discipline of probability theory. Probability is used in mathematical statistics to study the sampling distributions of sample statistics and, more generally, the properties of statistical procedures. The use of any statistical method is valid when the system or population under consideration satisfies the assumptions of the method. The difference in point of view between classic probability theory and sampling theory is, roughly, that probability theory starts from the given parameters of a total population to deduce probabilities that pertain to samples. Statistical inference, however, moves in the opposite direction—inductively inferring from samples to the parameters of a larger or total population.
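This deductive direction can be made concrete with a small simulation sketch (standard-library Python; the population parameters below are illustrative assumptions, not values from the text). Starting from known parameters, probability theory predicts that the sample mean is centred on the population mean with standard deviation sigma divided by the square root of n:

```python
import random
import statistics

random.seed(0)

# Known population parameters (the deductive starting point).
MU, SIGMA, N, TRIALS = 10.0, 2.0, 25, 2000

# Draw many samples and record each sample mean.
sample_means = [
    statistics.mean(random.gauss(MU, SIGMA) for _ in range(N))
    for _ in range(TRIALS)
]

# The simulated sampling distribution of the mean is centred on MU,
# with spread close to SIGMA / sqrt(N) = 0.4.
print(round(statistics.mean(sample_means), 2))
print(round(statistics.stdev(sample_means), 2))
```

Statistical inference would run the other way: observe one such sample and infer MU and SIGMA from it.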
==== Experimental and observational studies ====
A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion on the effect of changes in the values of predictors or independent variables on dependent variables. There are two major types of causal statistical studies: experimental studies and observational studies. In both types of studies, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable are observed. The difference between the two types lies in how the study is actually conducted. Each can be very effective. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements with different levels using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Instead, data are gathered and correlations between predictors and response are investigated. While the tools of data analysis work best on data from randomized studies, they are also applied to other kinds of data—like natural experiments and observational studies—for which a statistician would use a modified, more structured estimation method (e.g., difference in differences estimation and instrumental variables, among many others) that produce consistent estimators.
===== Experiments =====
The basic steps of a statistical experiment are:
Planning the research, including finding the number of replicates of the study, using the following information: preliminary estimates regarding the size of treatment effects, alternative hypotheses, and the estimated experimental variability. Consideration of the selection of experimental subjects and the ethics of research is necessary. Statisticians recommend that experiments compare (at least) one new treatment with a standard treatment or control, to allow an unbiased estimate of the difference in treatment effects.
Design of experiments, using blocking to reduce the influence of confounding variables, and randomized assignment of treatments to subjects to allow unbiased estimates of treatment effects and experimental error. At this stage, the experimenters and statisticians write the experimental protocol that will guide the performance of the experiment and which specifies the primary analysis of the experimental data.
Performing the experiment following the experimental protocol and analyzing the data following the experimental protocol.
Further examining the data set in secondary analyses, to suggest new hypotheses for future study.
Documenting and presenting the results of the study.
Experiments on human behavior have special concerns. The famous Hawthorne study examined changes to the working environment at the Hawthorne plant of the Western Electric Company. The researchers were interested in determining whether increased illumination would increase the productivity of the assembly line workers. The researchers first measured the productivity in the plant, then modified the illumination in an area of the plant and checked if the changes in illumination affected productivity. It turned out that productivity indeed improved (under the experimental conditions). However, the study is heavily criticized today for errors in experimental procedures, specifically for the lack of a control group and blindness. The Hawthorne effect refers to finding that an outcome (in this case, worker productivity) changed due to observation itself. Those in the Hawthorne study became more productive not because the lighting was changed but because they were being observed.
===== Observational study =====
An example of an observational study is one that explores the association between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers, perhaps through a cohort study, and then look for the number of cases of lung cancer in each group. A case-control study is another type of observational study in which people with and without the outcome of interest (e.g. lung cancer) are invited to participate and their exposure histories are collected.
=== Types of data ===
Various attempts have been made to produce a taxonomy of levels of measurement. The psychophysicist Stanley Smith Stevens defined nominal, ordinal, interval, and ratio scales. Nominal measurements do not have meaningful rank order among values, and permit any one-to-one (injective) transformation. Ordinal measurements have imprecise differences between consecutive values, but have a meaningful order to those values, and permit any order-preserving transformation. Interval measurements have meaningful distances between measurements defined, but the zero value is arbitrary (as in the case with longitude and temperature measurements in Celsius or Fahrenheit), and permit any linear transformation. Ratio measurements have both a meaningful zero value and the distances between different measurements defined, and permit any rescaling transformation.
Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, sometimes they are grouped together as categorical variables, whereas ratio and interval measurements are grouped together as quantitative variables, which can be either discrete or continuous, due to their numerical nature. Such distinctions can often be loosely correlated with data type in computer science, in that dichotomous categorical variables may be represented with the Boolean data type, polytomous categorical variables with arbitrarily assigned integers in the integral data type, and continuous variables with the real data type involving floating-point arithmetic. But the mapping of computer science data types to statistical data types depends on which categorization of the latter is being implemented.
Other categorizations have been proposed. For example, Mosteller and Tukey (1977) distinguished grades, ranks, counted fractions, counts, amounts, and balances. Nelder (1990) described continuous counts, continuous ratios, count ratios, and categorical modes of data. (See also: Chrisman (1998), van den Berg (1991).)
The issue of whether or not it is appropriate to apply different kinds of statistical methods to data obtained from different kinds of measurement procedures is complicated by issues concerning the transformation of variables and the precise interpretation of research questions. "The relationship between the data and what they describe merely reflects the fact that certain kinds of statistical statements may have truth values which are not invariant under some transformations. Whether or not a transformation is sensible to contemplate depends on the question one is trying to answer.": 82
== Methods ==
=== Descriptive statistics ===
A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent.
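As a minimal sketch of descriptive statistics in the count-noun sense (the data values are illustrative), each summary below is a single statistic describing the sample itself, with no claim about any larger population:

```python
import statistics

# A small sample; each summary below is a descriptive statistic.
data = [2, 4, 4, 4, 5, 5, 7, 9]

mean = statistics.mean(data)      # central tendency
median = statistics.median(data)  # central tendency, robust to outliers
stdev = statistics.pstdev(data)   # dispersion (population form)

print(mean, median, stdev)  # 5 4.5 2.0
```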
=== Inferential statistics ===
Statistical inference is the process of using data analysis to deduce properties of an underlying probability distribution. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population. Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population.
==== Terminology and theory of inferential statistics ====
===== Statistics, estimators and pivotal quantities =====
Consider independent identically distributed (IID) random variables with a given probability distribution: standard statistical inference and estimation theory defines a random sample as the random vector given by the column vector of these IID variables. The population being examined is described by a probability distribution that may have unknown parameters.
A statistic is a random variable that is a function of the random sample, but not a function of unknown parameters. The probability distribution of the statistic, though, may have unknown parameters. Consider now a function of the unknown parameter: an estimator is a statistic used to estimate such function. Commonly used estimators include sample mean, unbiased sample variance and sample covariance.
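A short sketch of these estimators (the population parameters are illustrative assumptions): the sample mean and the unbiased sample variance are functions of the sample alone, even though the quantities they estimate are unknown to them.

```python
import random
import statistics

random.seed(1)

# An IID random sample; the estimators below never see the
# true parameters (mean 50, standard deviation 10) directly.
sample = [random.gauss(50, 10) for _ in range(200)]

sample_mean = statistics.mean(sample)       # estimates the population mean
unbiased_var = statistics.variance(sample)  # n-1 denominator: unbiased

print(sample_mean, unbiased_var)
```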
A random variable that is a function of the random sample and of the unknown parameter, but whose probability distribution does not depend on the unknown parameter is called a pivotal quantity or pivot. Widely used pivots include the z-score, the chi square statistic and Student's t-value.
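The defining property of a pivot can be checked by simulation (a sketch with made-up parameters): the z-score depends on the unknown mean, yet its distribution is standard normal whatever that mean is.

```python
import math
import random
import statistics

random.seed(2)

def z_score(sample, mu, sigma):
    """Pivot: its distribution is N(0, 1) regardless of mu."""
    n = len(sample)
    return (statistics.mean(sample) - mu) / (sigma / math.sqrt(n))

# Shift the population mean; the z-scores look the same either way.
for mu in (0.0, 100.0):
    zs = [z_score([random.gauss(mu, 5) for _ in range(30)], mu, 5)
          for _ in range(1000)]
    print(round(statistics.mean(zs), 2), round(statistics.stdev(zs), 2))
```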
Between two estimators of a given parameter, the one with lower mean squared error is said to be more efficient. Furthermore, an estimator is said to be unbiased if its expected value is equal to the true value of the unknown parameter being estimated, and asymptotically unbiased if its expected value converges at the limit to the true value of such parameter.
Other desirable properties for estimators include: UMVUE estimators, which have the lowest variance for all possible values of the parameter to be estimated (this is usually an easier property to verify than efficiency), and consistent estimators, which converge in probability to the true value of the parameter.
This still leaves the question of how to obtain estimators in a given situation and carry out the computation. Several methods have been proposed: the method of moments, the maximum likelihood method, the least squares method and the more recent method of estimating equations.
===== Null hypothesis and alternative hypothesis =====
Interpretation of statistical information can often involve the development of a null hypothesis which is usually (but not necessarily) that no relationship exists among variables or that no change occurred over time. The alternative hypothesis is the name of the hypothesis that contradicts the null hypothesis.
The best illustration for a novice is the predicament encountered in a criminal trial. The null hypothesis, H0, asserts that the defendant is innocent, whereas the alternative hypothesis, H1, asserts that the defendant is guilty. The indictment comes because of suspicion of guilt. The H0 (the status quo) stands in opposition to H1 and is maintained unless H1 is supported by evidence "beyond a reasonable doubt". However, "failure to reject H0" in this case does not imply innocence, but merely that the evidence was insufficient to convict. So the jury does not necessarily accept H0 but fails to reject H0. While one cannot "prove" a null hypothesis, one can test how close it is to being true with a power test, which tests for type II errors.
===== Error =====
Working from a null hypothesis, two broad categories of error are recognized:
Type I errors where the null hypothesis is falsely rejected, giving a "false positive".
Type II errors where the null hypothesis fails to be rejected and an actual difference between populations is missed, giving a "false negative".
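The meaning of a Type I error can be seen in a simulation sketch (parameters are illustrative): when the null hypothesis really is true, a two-sided z-test at the 5% level should falsely reject it about 5% of the time.

```python
import math
import random
import statistics

random.seed(3)
Z_CUTOFF = 1.96  # two-sided 5% critical value for a z-test

def z_test_rejects(sample, mu0, sigma):
    z = (statistics.mean(sample) - mu0) / (sigma / math.sqrt(len(sample)))
    return abs(z) > Z_CUTOFF

# Null hypothesis true: data really come from mean 0.
# Every rejection here is a Type I error ("false positive").
trials = 4000
type_i = sum(
    z_test_rejects([random.gauss(0, 1) for _ in range(20)], 0, 1)
    for _ in range(trials)
)
print(type_i / trials)  # close to the 0.05 significance level
```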
Standard deviation refers to the extent to which individual observations in a sample differ from a central value, such as the sample or population mean, while standard error refers to an estimate of the difference between the sample mean and the population mean.
A statistical error is the amount by which an observation differs from its expected value. A residual is the amount an observation differs from the value the estimator of the expected value assumes on a given sample (also called prediction).
Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error.
Many statistical methods seek to minimize the residual sum of squares, and these are called "methods of least squares", in contrast to least absolute deviations. The latter gives equal weight to small and big errors, while the former gives more weight to large errors. The residual sum of squares is also differentiable, which provides a handy property for doing regression. Least squares applied to linear regression is called the ordinary least squares method, and least squares applied to nonlinear regression is called non-linear least squares. Also, in a linear regression model the non-deterministic part of the model is called the error term, disturbance or, more simply, noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve.
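For simple linear regression, minimizing the residual sum of squares has a closed-form solution. A minimal sketch (the data points are illustrative):

```python
def ols_fit(xs, ys):
    """Ordinary least squares for y = a + b*x:
    the (a, b) that minimize the residual sum of squares."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
    a = y_bar - b * x_bar
    return a, b

# Noise-free data on the line y = 1 + 2x is recovered exactly.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
print(ols_fit(xs, ys))  # (1.0, 2.0)
```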
Measurement processes that generate statistical data are also subject to error. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e.g., blunder, such as when an analyst reports incorrect units) can also be important. The presence of missing data or censoring may result in biased estimates and specific techniques have been developed to address these problems.
===== Interval estimation =====
Most studies only sample part of a population, so results do not fully represent the whole population. Any estimates obtained from the sample only approximate the population value. Confidence intervals allow statisticians to express how closely the sample estimate matches the true value in the whole population. Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%. From the frequentist perspective, such a claim does not even make sense, as the true value is not a random variable. Either the true value is or is not within the given interval. However, it is true that, before any data are sampled and given a plan for how to construct the confidence interval, the probability is 95% that the yet-to-be-calculated interval will cover the true value: at this point, the limits of the interval are yet-to-be-observed random variables. One approach that does yield an interval that can be interpreted as having a given probability of containing the true value is to use a credible interval from Bayesian statistics: this approach depends on a different way of interpreting what is meant by "probability", that is as a Bayesian probability.
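The repeated-sampling interpretation above can be checked directly by simulation (a sketch with made-up parameters): constructing a 95% interval from each of many independent samples, the fixed true value falls inside the interval in about 95% of cases.

```python
import math
import random
import statistics

random.seed(4)
TRUE_MU, SIGMA, N = 7.0, 3.0, 40
Z = 1.96  # 95% normal quantile

trials = 3000
covered = 0
for _ in range(trials):
    sample = [random.gauss(TRUE_MU, SIGMA) for _ in range(N)]
    m = statistics.mean(sample)
    half = Z * SIGMA / math.sqrt(N)
    # Does this realised interval contain the fixed true value?
    if m - half <= TRUE_MU <= m + half:
        covered += 1

print(covered / trials)  # close to 0.95 over repeated sampling
```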
In principle confidence intervals can be symmetrical or asymmetrical. An interval can be asymmetrical because it works as lower or upper bound for a parameter (left-sided interval or right sided interval), but it can also be asymmetrical because the two sided interval is built violating symmetry around the estimate. Sometimes the bounds for a confidence interval are reached asymptotically and these are used to approximate the true bounds.
===== Significance =====
Statistics rarely give a simple yes/no answer to the question under analysis. Interpretation often comes down to the level of statistical significance applied to the numbers, and often refers to the probability, assuming the null hypothesis is true, of obtaining a result at least as extreme as the one observed (the p-value).
The standard approach is to test a null hypothesis against an alternative hypothesis. A critical region is the set of values of the estimator that leads to refuting the null hypothesis. The probability of type I error is therefore the probability that the estimator belongs to the critical region given that null hypothesis is true (statistical significance) and the probability of type II error is the probability that the estimator does not belong to the critical region given that the alternative hypothesis is true. The statistical power of a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false.
Referring to statistical significance does not necessarily mean that the overall result is significant in real world terms. For example, in a large study of a drug it may be shown that the drug has a statistically significant but very small beneficial effect, such that the drug is unlikely to help the patient noticeably.
Although in principle the acceptable level of statistical significance may be subject to debate, the significance level is the largest p-value that allows the test to reject the null hypothesis. This is logically equivalent to saying that the p-value is the probability, assuming the null hypothesis is true, of observing a result at least as extreme as the test statistic. Therefore, the smaller the significance level, the lower the probability of committing a type I error.
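A p-value of this kind can be estimated empirically by simulating the null distribution (a sketch; all numbers are illustrative, and in practice a closed-form test would be used here):

```python
import random
import statistics

random.seed(5)

def empirical_p_value(observed_mean, n, mu0, sigma, sims=5000):
    """P(sample mean at least this far from mu0 | null is true),
    estimated by simulating samples under the null hypothesis."""
    obs_dev = abs(observed_mean - mu0)
    hits = 0
    for _ in range(sims):
        m = statistics.mean(random.gauss(mu0, sigma) for _ in range(n))
        if abs(m - mu0) >= obs_dev:
            hits += 1
    return hits / sims

# Observed mean 0.9 from n = 25 with null mu0 = 0, sigma = 2:
# the z value is 0.9 / (2/5) = 2.25, so p should be near 0.024.
print(empirical_p_value(0.9, 25, 0.0, 2.0))
```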
Some problems are usually associated with this framework (See criticism of hypothesis testing):
A difference that is highly statistically significant can still be of no practical significance, but it is possible to properly formulate tests to account for this. One response involves going beyond reporting only the significance level to include the p-value when reporting whether a hypothesis is rejected or accepted. The p-value, however, does not indicate the size or importance of the observed effect and can also seem to exaggerate the importance of minor differences in large studies. A better and increasingly common approach is to report confidence intervals. Although these are produced from the same calculations as those of hypothesis tests or p-values, they describe both the size of the effect and the uncertainty surrounding it.
Fallacy of the transposed conditional, aka prosecutor's fallacy: criticisms arise because the hypothesis testing approach forces one hypothesis (the null hypothesis) to be favored, since what is being evaluated is the probability of the observed result given the null hypothesis and not probability of the null hypothesis given the observed result. An alternative to this approach is offered by Bayesian inference, although it requires establishing a prior probability.
Rejecting the null hypothesis does not automatically prove the alternative hypothesis.
Like everything in inferential statistics, hypothesis testing relies on sample size; under fat-tailed distributions, p-values may therefore be seriously miscomputed.
===== Examples =====
Some well-known statistical tests and procedures are:
=== Bayesian statistics ===
An alternative paradigm to the popular frequentist paradigm is to use Bayes' theorem to update the prior probability of the hypotheses in consideration based on the relative likelihood of the evidence gathered to obtain a posterior probability. Bayesian methods have been aided by the increase in available computing power to compute the posterior probability using numerical approximation techniques like Markov Chain Monte Carlo.
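In the special conjugate case, the posterior is available in closed form and no numerical sampling is needed. A minimal sketch of Bayes' theorem updating a prior to a posterior (the coin-flipping setup is an illustration, not from the text):

```python
# Beta-binomial conjugate update: with a Beta(a, b) prior on a
# coin's head probability, observing h heads and t tails yields
# a Beta(a + h, b + t) posterior in closed form.
def update(a, b, heads, tails):
    return a + heads, b + tails

def beta_mean(a, b):
    return a / (a + b)

a, b = 1, 1                # uniform Beta(1, 1) prior
a, b = update(a, b, 7, 3)  # evidence: 7 heads, 3 tails
print(beta_mean(a, b))     # posterior mean 8/12, about 0.667
```

For non-conjugate models the posterior has no such closed form, which is where numerical techniques like Markov chain Monte Carlo come in.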
For statistical modelling purposes, Bayesian models tend to be hierarchical. For example, one could model each YouTube channel as having video views distributed as a normal distribution with channel-dependent mean and variance,
{\displaystyle {\mathcal {N}}(\mu _{i},\sigma _{i})}
, while modeling the channel means as themselves coming from a normal distribution representing the distribution of average video view counts per channel, and the variances as coming from another distribution.
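The hierarchical model just described can be sketched as a generative simulation (all hyper-parameter values below are made-up illustrations): channel-level means and spreads are drawn from top-level distributions, and each channel's video views are then drawn from its own distribution.

```python
import random
import statistics

random.seed(6)

# Top level: average views per channel ~ N(1000, 300).
# Each channel i then has its own N(mu_i, sigma_i) over video views.
channels = []
for _ in range(50):
    mu_i = random.gauss(1000, 300)        # channel-level mean
    sigma_i = abs(random.gauss(200, 50))  # channel-level spread
    views = [random.gauss(mu_i, sigma_i) for _ in range(30)]
    channels.append(views)

channel_means = [statistics.mean(v) for v in channels]
print(round(statistics.mean(channel_means)))  # near the top-level mean
```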
The concept of using likelihood ratio can also be prominently seen in medical diagnostic testing.
=== Exploratory data analysis ===
Exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling or hypothesis testing task.
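A first numeric EDA pass often starts with a five-number summary, the basis of a box plot (a sketch with illustrative data):

```python
import statistics

data = [12, 15, 11, 19, 22, 14, 30, 17, 13, 16]

# Quartiles split the data into four equal-probability pieces.
q1, q2, q3 = statistics.quantiles(data, n=4)
summary = {
    "min": min(data), "q1": q1, "median": q2,
    "q3": q3, "max": max(data),
}
print(summary)
```

The wide gap between q3 and the maximum here would flag the value 30 for a closer look, before any formal modeling.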
=== Mathematical statistics ===
Mathematical statistics is the application of mathematics to statistics. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory. All statistical analyses make use of at least some mathematics, and mathematical statistics can therefore be regarded as a fundamental component of general statistics.
== History ==
Formal discussions on inference date back to the mathematicians and cryptographers of the Islamic Golden Age between the 8th and 13th centuries. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains one of the first uses of permutations and combinations, to list all possible Arabic words with and without vowels. Al-Kindi's Manuscript on Deciphering Cryptographic Messages gave a detailed description of how to use frequency analysis to decipher encrypted messages, providing an early example of statistical inference for decoding. Ibn Adlan (1187–1268) later made an important contribution on the use of sample size in frequency analysis.
Although the term statistic was introduced by the Italian scholar Girolamo Ghilini in 1589 with reference to a collection of facts and information about a state, it was the German Gottfried Achenwall in 1749 who started using the term as a collection of quantitative information, in the modern use for this science. The earliest writing containing statistics in Europe dates back to 1663, with the publication of Natural and Political Observations upon the Bills of Mortality by John Graunt. Early applications of statistical thinking revolved around the needs of states to base policy on demographic and economic data, hence its stat- etymology. The scope of the discipline of statistics broadened in the early 19th century to include the collection and analysis of data in general. Today, statistics is widely employed in government, business, and natural and social sciences.
The mathematical foundations of statistics developed from discussions concerning games of chance among mathematicians such as Gerolamo Cardano, Blaise Pascal, Pierre de Fermat, and Christiaan Huygens. Although the idea of probability was already examined in ancient and medieval law and philosophy (such as the work of Juan Caramuel), probability theory as a mathematical discipline only took shape at the very end of the 17th century, particularly in Jacob Bernoulli's posthumous work Ars Conjectandi. This was the first book where the realm of games of chance and the realm of the probable (which concerned opinion, evidence, and argument) were combined and submitted to mathematical analysis. The method of least squares was first described by Adrien-Marie Legendre in 1805, though Carl Friedrich Gauss presumably made use of it a decade earlier in 1795.
The modern field of statistics emerged in the late 19th and early 20th century in three stages. The first wave, at the turn of the century, was led by the work of Francis Galton and Karl Pearson, who transformed statistics into a rigorous mathematical discipline used for analysis, not just in science, but in industry and politics as well. Galton's contributions included introducing the concepts of standard deviation, correlation, regression analysis and the application of these methods to the study of the variety of human characteristics—height, weight and eyelash length among others. Pearson developed the Pearson product-moment correlation coefficient, defined as a product-moment, the method of moments for the fitting of distributions to samples and the Pearson distribution, among many other things. Galton and Pearson founded Biometrika as the first journal of mathematical statistics and biostatistics (then called biometry), and the latter founded the world's first university statistics department at University College London.
The second wave of the 1910s and 20s was initiated by William Sealy Gosset, and reached its culmination in the insights of Ronald Fisher, who wrote the textbooks that were to define the academic discipline in universities around the world. Fisher's most important publications were his 1918 seminal paper The Correlation between Relatives on the Supposition of Mendelian Inheritance (which was the first to use the statistical term, variance), his classic 1925 work Statistical Methods for Research Workers and his 1935 The Design of Experiments, where he developed rigorous design of experiments models. He originated the concepts of sufficiency, ancillary statistics, Fisher's linear discriminator and Fisher information. He also coined the term null hypothesis during the Lady tasting tea experiment, which "is never proved or established, but is possibly disproved, in the course of experimentation". In his 1930 book The Genetical Theory of Natural Selection, he applied statistics to various biological concepts such as Fisher's principle (which A. W. F. Edwards called "probably the most celebrated argument in evolutionary biology") and Fisherian runaway, a concept in sexual selection about a positive feedback runaway effect found in evolution.
The final wave, which mainly saw the refinement and expansion of earlier developments, emerged from the collaborative work between Egon Pearson and Jerzy Neyman in the 1930s. They introduced the concepts of "Type II" error, power of a test and confidence intervals. Jerzy Neyman in 1934 showed that stratified random sampling was in general a better method of estimation than purposive (quota) sampling.
Among the early attempts to measure national economic activity were those of William Petty in the 17th century. In the 20th century the uniform System of National Accounts was developed.
Today, statistical methods are applied in all fields that involve decision making, for making accurate inferences from a collated body of data and for making decisions in the face of uncertainty based on statistical methodology. The use of modern computers has expedited large-scale statistical computations and has also made possible new methods that are impractical to perform manually. Statistics continues to be an area of active research, for example on the problem of how to analyze big data.
== Applications ==
=== Applied statistics, theoretical statistics and mathematical statistics ===
Applied statistics, sometimes referred to as statistical science, comprises descriptive statistics and the application of inferential statistics. Theoretical statistics concerns the logical arguments underlying justification of approaches to statistical inference, as well as encompassing mathematical statistics. Mathematical statistics includes not only the manipulation of probability distributions necessary for deriving results related to methods of estimation and inference, but also various aspects of computational statistics and the design of experiments.
Statistical consultants can help organizations and companies that do not have in-house expertise relevant to their particular questions.
=== Machine learning and data mining ===
Machine learning models are statistical and probabilistic models that capture patterns in the data through use of computational algorithms.
=== Statistics in academia ===
Statistics is applicable to a wide variety of academic disciplines, including natural and social sciences, government, and business. Business statistics applies statistical methods in econometrics, auditing and production and operations, including services improvement and marketing research. A study of two journals in tropical biology found that the 12 most frequent statistical tests are: analysis of variance (ANOVA), chi-squared test, Student's t-test, linear regression, Pearson's correlation coefficient, Mann-Whitney U test, Kruskal-Wallis test, Shannon's diversity index, Tukey's range test, cluster analysis, Spearman's rank correlation coefficient and principal component analysis.
A typical statistics course covers descriptive statistics, probability, binomial and normal distributions, test of hypotheses and confidence intervals, linear regression, and correlation. Modern fundamental statistical courses for undergraduate students focus on correct test selection, results interpretation, and use of free statistics software.
=== Statistical computing ===
The rapid and sustained increases in computing power starting from the second half of the 20th century have had a substantial impact on the practice of statistical science. Early statistical models were almost always from the class of linear models, but powerful computers, coupled with suitable numerical algorithms, caused an increased interest in nonlinear models (such as neural networks) as well as the creation of new types, such as generalized linear models and multilevel models.
Increased computing power has also led to the growing popularity of computationally intensive methods based on resampling, such as permutation tests and the bootstrap, while techniques such as Gibbs sampling have made use of Bayesian models more feasible. The computer revolution has implications for the future of statistics with a new emphasis on "experimental" and "empirical" statistics. A large number of both general and special purpose statistical software are now available. Examples of available software capable of complex statistical computation include programs such as Mathematica, SAS, SPSS, and R.
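The resampling methods mentioned above are simple enough to sketch directly. The following is a minimal illustration (not taken from any particular software package) of the percentile bootstrap: the sample is resampled with replacement many times, the statistic is recomputed on each resample, and the middle 95% of the resulting estimates forms a confidence interval. The data values are invented for the example.

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_resamples=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = random.Random(seed)
    n = len(data)
    estimates = sorted(
        stat([rng.choice(data) for _ in range(n)])
        for _ in range(n_resamples)
    )
    lo = estimates[int((alpha / 2) * n_resamples)]
    hi = estimates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

sample = [2.1, 2.5, 2.8, 3.0, 3.2, 3.7, 4.1, 4.5, 5.0, 5.3]
low, high = bootstrap_ci(sample)
print(f"95% bootstrap CI for the mean: ({low:.2f}, {high:.2f})")
```

The appeal of such methods is exactly what the text notes: they substitute raw computation (thousands of resamples) for distributional assumptions, which was impractical before cheap computing.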
=== Business statistics ===
In business, "statistics" is a widely used management and decision-support tool. It is particularly applied in financial management, marketing management, and production, services and operations management. Statistics is also heavily used in management accounting and auditing. The discipline of Management Science formalizes the use of statistics, and other mathematics, in business. (Econometrics is the application of statistical methods to economic data in order to give empirical content to economic relationships.)
A typical "Business Statistics" course is intended for business majors, and covers descriptive statistics (collection, description, analysis, and summary of data), probability (typically the binomial and normal distributions), test of hypotheses and confidence intervals, linear regression, and correlation; (follow-on) courses may include forecasting, time series, decision trees, multiple linear regression, and other topics from business analytics more generally. Professional certification programs, such as the CFA, often include topics in statistics.
== Specialized disciplines ==
Statistical techniques are used in a wide range of types of scientific and social research, including: biostatistics, computational biology, computational sociology, network biology, social science, sociology and social research. Some fields of inquiry use applied statistics so extensively that they have specialized terminology. These disciplines include:
In addition, there are particular types of statistical analysis that have also developed their own specialised terminology and methodology:
Statistics is a key tool in business and manufacturing as well. It is used to understand variability in measurement systems, to control processes (as in statistical process control, or SPC), to summarize data, and to make data-driven decisions.
== Misuse ==
Misuse of statistics can produce subtle but serious errors in description and interpretation—subtle in the sense that even experienced professionals make such errors, and serious in the sense that they can lead to devastating decision errors. For instance, social policy, medical practice, and the reliability of structures like bridges all rely on the proper use of statistics.
Even when statistical techniques are correctly applied, the results can be difficult to interpret for those lacking expertise. The statistical significance of a trend in the data—which measures the extent to which a trend could be caused by random variation in the sample—may or may not agree with an intuitive sense of its significance. The set of basic statistical skills (and skepticism) that people need to deal with information in their everyday lives properly is referred to as statistical literacy.
There is a general perception that statistical knowledge is all-too-frequently intentionally misused by finding ways to interpret only the data that are favorable to the presenter. A mistrust and misunderstanding of statistics is associated with the quotation, "There are three kinds of lies: lies, damned lies, and statistics". Misuse of statistics can be both inadvertent and intentional, and the book How to Lie with Statistics, by Darrell Huff, outlines a range of considerations. In an attempt to shed light on the use and misuse of statistics, reviews of statistical techniques used in particular fields are conducted (e.g. Warne, Lazo, Ramos, and Ritter (2012)).
Ways to avoid misuse of statistics include using proper diagrams and avoiding bias. Misuse can occur when conclusions are overgeneralized and claimed to be representative of more than they really are, often by either deliberately or unconsciously overlooking sampling bias. Bar graphs are arguably the easiest diagrams to use and understand, and they can be made either by hand or with simple computer programs. Most people do not look for bias or errors, so they are not noticed. Thus, people may often believe that something is true even if it is not well represented. To make data gathered from statistics believable and accurate, the sample taken must be representative of the whole. According to Huff, "The dependability of a sample can be destroyed by [bias]... allow yourself some degree of skepticism."
To assist in the understanding of statistics Huff proposed a series of questions to be asked in each case:
Who says so? (Does he/she have an axe to grind?)
How does he/she know? (Does he/she have the resources to know the facts?)
What's missing? (Does he/she give us a complete picture?)
Did someone change the subject? (Does he/she offer us the right answer to the wrong problem?)
Does it make sense? (Is his/her conclusion logical and consistent with what we already know?)
=== Misinterpretation: correlation ===
The concept of correlation is particularly noteworthy for the potential confusion it can cause. Statistical analysis of a data set often reveals that two variables (properties) of the population under consideration tend to vary together, as if they were connected. For example, a study of annual income that also looks at age of death might find that poor people tend to have shorter lives than affluent people. The two variables are said to be correlated; however, they may or may not be the cause of one another. The correlation phenomenon could be caused by a third, previously unconsidered phenomenon, called a lurking variable or confounding variable. For this reason, there is no way to immediately infer the existence of a causal relationship between the two variables.
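A lurking variable is easy to demonstrate by simulation. In this sketch (the variables and effect sizes are invented for illustration), neither x nor y influences the other; both are driven by a hidden variable z, yet they come out strongly correlated:

```python
import random

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(42)
z = [rng.gauss(0, 1) for _ in range(10_000)]   # the lurking variable
x = [zi + rng.gauss(0, 0.5) for zi in z]       # driven by z, not by y
y = [zi + rng.gauss(0, 0.5) for zi in z]       # driven by z, not by x

print(f"corr(x, y) = {pearson(x, y):.2f}")     # strongly positive
```

An analyst who observed only x and y would see a clear association; only by measuring z as well could the absence of a direct causal link be detected.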
== See also ==
Foundations and major areas of statistics
== References ==
== Further reading ==
Lydia Denworth, "A Significant Problem: Standard scientific methods are under fire. Will anything change?", Scientific American, vol. 321, no. 4 (October 2019), pp. 62–67. "The use of p values for nearly a century [since 1925] to determine statistical significance of experimental results has contributed to an illusion of certainty and [to] reproducibility crises in many scientific fields. There is growing determination to reform statistical analysis... Some [researchers] suggest changing statistical methods, whereas others would do away with a threshold for defining "significant" results". (p. 63.)
Barbara Illowsky; Susan Dean (2014). Introductory Statistics. OpenStax CNX. ISBN 978-1938168208.
Stockburger, David W. "Introductory Statistics: Concepts, Models, and Applications". Missouri State University (3rd Web ed.). Archived from the original on 28 May 2020.
OpenIntro Statistics Archived 2019-06-16 at the Wayback Machine, 3rd edition by Diez, Barr, and Cetinkaya-Rundel
Stephen Jones, 2010. Statistics in Psychology: Explanations without Equations. Palgrave Macmillan. ISBN 978-1137282392.
Cohen, J (1990). "Things I have learned (so far)" (PDF). American Psychologist. 45 (12): 1304–1312. doi:10.1037/0003-066x.45.12.1304. S2CID 7180431. Archived from the original (PDF) on 2017-10-18.
Gigerenzer, G (2004). "Mindless statistics". Journal of Socio-Economics. 33 (5): 587–606. doi:10.1016/j.socec.2004.09.033.
Ioannidis, J.P.A. (2005). "Why most published research findings are false". PLOS Medicine. 2 (4): 696–701. doi:10.1371/journal.pmed.0040168. PMC 1855693. PMID 17456002.
== External links ==
(Electronic Version): TIBCO Software Inc. (2020). Data Science Textbook.
Online Statistics Education: An Interactive Multimedia Course of Study. Developed by Rice University (Lead Developer), University of Houston Clear Lake, Tufts University, and National Science Foundation.
UCLA Statistical Computing Resources (archived 17 July 2006)
Philosophy of Statistics from the Stanford Encyclopedia of Philosophy
Criminal procedure is the adjudication process of the criminal law. While criminal procedure differs dramatically by jurisdiction, the process generally begins with a formal criminal charge with the person on trial either being free on bail or incarcerated, and results in the conviction or acquittal of the defendant. Criminal procedure can take either an inquisitorial or an adversarial form.
== Basic rights ==
Currently, in many countries with a democratic system and the rule of law, criminal procedure puts the burden of proof on the prosecution – that is, it is up to the prosecution to prove that the defendant is guilty beyond any reasonable doubt, as opposed to having the defense prove that they are innocent, and any doubt is resolved in favor of the defendant. This provision, known as the presumption of innocence, is required, for example, in the 46 countries that are members of the Council of Europe, under Article 6 of the European Convention on Human Rights, and it is included in other human rights documents. However, in practice, it operates somewhat differently in different countries. Such basic rights also include the right for the defendant to know what offence he or she has been arrested for or is being charged with, and the right to appear before a judicial official within a certain time of being arrested. Many jurisdictions also allow the defendant the right to legal counsel and provide any defendant who cannot afford their own lawyer with a lawyer paid for at the public expense.
== Difference between criminal and civil cases ==
Countries using the common law tend to make a clear distinction between civil and criminal procedures. For example, an English criminal court may force a convicted accused to pay a fine to the Crown as punishment for the crime, and sometimes to pay the legal costs of the prosecution, but does not normally order the convicted accused to pay any compensation to the victim of the crime. The victim must pursue their claim for compensation in a civil, not a criminal, action. In countries using the continental civil law system, such as France and Italy, the victim of a crime (known as the "injured party") may be awarded damages by a criminal court judge.
The standards of proof are higher in a criminal action than in a civil one since the loser risks not only financial penalties but also being sent to prison (or, in some countries, execution). In English law, the prosecution must prove the guilt of a criminal "beyond reasonable doubt", while the plaintiff in a civil action is required to prove his case "on the balance of probabilities". "Beyond reasonable doubt" is not defined for the jury which decides the verdict, but it has been said by appeal courts that proving guilt beyond reasonable doubt requires the prosecution to exclude any reasonable hypothesis consistent with innocence: Plomp v. R. In a civil case, however, the court simply weighs the evidence and decides what is most probable.
Criminal and civil procedure are different. Although some systems, including the English, allow a private citizen to bring a criminal prosecution against another citizen, criminal actions are nearly always started by the state. Civil actions, on the other hand, are usually started by individuals.
In Anglo-American law, the party bringing a criminal action (that is, in most cases, the state) is called the prosecution, but the party bringing a civil action is the plaintiff. In a civil action the other party is known as the defendant. In a criminal case, the private party may be known as the defendant or the accused. A criminal case in the United States against a person named Ms. Sanchez would be entitled United States v. (short for versus, or against) Sanchez if initiated by the federal government; if brought by a state, the case would typically be called State v. Sanchez or People v. Sanchez. In the United Kingdom, the criminal case would be styled R. (short for Rex or Regina, that is, the King or Queen) v. Sanchez. In both the United States and the United Kingdom, a civil action between Ms. Sanchez and a Mr. Smith would be Sanchez v. Smith if started by Sanchez and Smith v. Sanchez if begun by Smith.
Evidence given at a criminal trial is not necessarily admissible in a civil action about the same matter, just as evidence given in a civil cause is not necessarily admissible on a criminal trial. For example, the victim of a road accident does not directly benefit if the driver who injured him is found guilty of the crime of careless driving. He still has to prove his case in a civil action. In fact he may be able to prove his civil case even when the driver is found not guilty in the criminal trial. If the accused has given evidence on his trial he may be cross-examined on those statements in a subsequent civil action regardless of the criminal verdict.
Once the plaintiff has shown that the defendant is liable, the main argument in a civil court is about the amount of money, or damages, which the defendant should pay to the plaintiff.
== Differences between civil law and common law systems ==
The majority of civil law jurisdictions ('civil law' as a type of law system, not as opposed to criminal law) follow an inquisitorial system of adjudication, in which judges undertake an active investigation of the claims by examining the evidence at the trial (while other judges contribute likewise by preparing reports).
In common law systems, the trial judge presides over proceedings grounded in the adversarial system of dispute resolution, where both the prosecution and the defence prepare arguments to be presented before the court. Some civil law systems have adopted adversarial procedures.
Proponents of either system tend to consider that their system defends best the rights of the innocent. There is a tendency in common law countries to believe that civil law / inquisitorial systems do not have the so-called "presumption of innocence", and do not provide the defence with adequate rights. Conversely, there is a tendency in countries with an inquisitorial system to believe that accusatorial proceedings unduly favour rich defendants who can afford large legal teams, and therefore disfavour poorer defendants.
== See also ==
Offence (law)
Trial (law)
Civil procedure
Code of Criminal Procedure, 1973 of India
Court Appointed Special Advocates
Criminal Procedure Act
Criminal procedure in the United States
Formal procedure law in Switzerland
Italian Criminal Procedure
Code of Criminal Procedure (Japan)
Criminal Procedure Code (Malaysia)
Criminal Procedure Code (Ukraine)
== References ==
== Further reading ==
Israel, Jerold H.; Kamisar, Yale; LaFave, Wayne R. (2003). Criminal Procedure and the Constitution: Leading Supreme Court Cases and Introductory Text. St. Paul, MN: West Publishing. ISBN 0-314-14669-5.
A case–control study (also known as case–referent study) is a type of observational study in which two existing groups differing in outcome are identified and compared on the basis of some supposed causal attribute. Case–control studies are often used to identify factors that may contribute to a medical condition by comparing subjects who have the condition with patients who do not have the condition but are otherwise similar. They require fewer resources but provide less evidence for causal inference than a randomized controlled trial. A case–control study is often used to produce an odds ratio. Some statistical methods make it possible to use a case–control study to also estimate relative risk, risk differences, and other quantities.
== Definition ==
Porta's Dictionary of Epidemiology defines the case–control study as: "an observational epidemiological study of persons with the disease (or another outcome variable) of interest and a suitable control group of persons without the disease (comparison group, reference group). The potential relationship of a suspected risk factor or an attribute to the disease is examined by comparing the diseased and nondiseased subjects with regard to how frequently the factor or attribute is present (or, if quantitative, the levels of the attribute) in each of the groups (diseased and nondiseased)."
The case–control study is frequently contrasted with cohort studies, wherein exposed and unexposed subjects are observed until they develop an outcome of interest.
=== Control group selection ===
Controls need not be in good health; inclusion of sick people is sometimes appropriate, as the control group should represent those at risk of becoming a case. Controls should come from the same population as the cases, and their selection should be independent of the exposures of interest.
Controls can carry the same disease as the experimental group, but of a different grade or severity, and thus differ with respect to the outcome of interest. However, because the difference between the cases and the controls will then be smaller, this results in a lower power to detect an exposure effect.
As with any epidemiological study, greater numbers in the study will increase the power of the study. Numbers of cases and controls do not have to be equal. In many situations, it is much easier to recruit controls than to find cases. Increasing the number of controls above the number of cases, up to a ratio of about 4 to 1, may be a cost-effective way to improve the study.
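The diminishing return beyond a ratio of about 4 to 1 can be seen in the standard error of the log odds ratio, which by Woolf's formula is the square root of the sum of reciprocals of the four cell counts. The sketch below uses invented counts (100 cases, 40% exposed; controls with 20% exposure) purely to illustrate the trend:

```python
import math

def se_log_or(cases_exp, cases_unexp, ctrl_exp, ctrl_unexp):
    """Standard error of the log odds ratio (Woolf's formula)."""
    return math.sqrt(1 / cases_exp + 1 / cases_unexp
                     + 1 / ctrl_exp + 1 / ctrl_unexp)

# Fixed 100 cases (40 exposed); controls with 20% exposure, at varying ratios.
for ratio in (1, 2, 4, 10):
    n_ctrl = 100 * ratio
    se = se_log_or(40, 60, 0.2 * n_ctrl, 0.8 * n_ctrl)
    print(f"{ratio}:1 controls -> SE(log OR) = {se:.3f}")
```

Because the case cells contribute a fixed amount to the variance, adding ever more controls shrinks only the control terms, so precision gains flatten out quickly past roughly 4:1.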
=== Prospective vs. retrospective cohort studies ===
A prospective study watches for outcomes, such as the development of a disease, during the study period and relates this to other factors such as suspected risk or protection factor(s). The study usually involves taking a cohort of subjects and watching them over a long period. The outcome of interest should be common; otherwise, the number of outcomes observed will be too small to be statistically meaningful (indistinguishable from those that may have arisen by chance). All efforts should be made to avoid sources of bias such as the loss of individuals to follow up during the study. Prospective studies usually have fewer potential sources of bias and confounding than retrospective studies.
A retrospective study, on the other hand, looks backwards and examines exposures to suspected risk or protection factors in relation to an outcome that is established at the start of the study. Many valuable case–control studies, such as Lane and Claypon's 1926 investigation of risk factors for breast cancer, were retrospective investigations. Most sources of error due to confounding and bias are more common in retrospective studies than in prospective studies. For this reason, retrospective investigations are often criticised. If the outcome of interest is uncommon, however, the size of prospective investigation required to estimate relative risk is often too large to be feasible. In retrospective studies the odds ratio provides an estimate of relative risk. One should take special care to avoid sources of bias and confounding in retrospective studies.
== Strengths and weaknesses ==
Case–control studies are a relatively inexpensive and frequently used type of epidemiological study that can be carried out by small teams or individual researchers in single facilities in a way that more structured experimental studies often cannot be. They have pointed the way to a number of important discoveries and advances. The case–control study design is often used in the study of rare diseases or as a preliminary study where little is known about the association between the risk factor and disease of interest.
Compared to prospective cohort studies they tend to be less costly and shorter in duration. In several situations, they have greater statistical power than cohort studies, which must often wait for a 'sufficient' number of disease events to accrue.
Case–control studies are observational in nature and thus do not provide the same level of evidence as randomized controlled trials. The results may be confounded by other factors, to the extent of giving the opposite answer to better studies. A meta-analysis of what was considered 30 high-quality studies concluded that use of a product halved a risk, when in fact the risk was, if anything, increased. It may also be more difficult to establish the timeline of exposure to disease outcome in the setting of a case–control study than within a prospective cohort study design where the exposure is ascertained prior to following the subjects over time in order to ascertain their outcome status. The most important drawback in case–control studies relates to the difficulty of obtaining reliable information about an individual's exposure status over time. Case–control studies are therefore placed low in the hierarchy of evidence.
== Examples ==
One of the most significant triumphs of the case–control study was the demonstration of the link between tobacco smoking and lung cancer, by Richard Doll and Bradford Hill. They showed a statistically significant association in a large case–control study. Opponents argued for many years that this type of study cannot prove causation, but the eventual results of cohort studies confirmed the causal link which the case–control studies suggested, and it is now accepted that tobacco smoking is the cause of about 87% of all lung cancer mortality in the US.
== Analysis ==
Case–control studies were initially analyzed by testing whether or not there were significant differences between the proportion of exposed subjects among cases and controls. Subsequently, Cornfield pointed out that, when the disease outcome of interest is rare, the odds ratio of exposure can be used to estimate the relative risk (see rare disease assumption). The validity of the odds ratio depends highly on the nature of the disease studied, on the sampling methodology and on the type of follow-up. Although in classical case–control studies, it remains true that the odds ratio can only approximate the relative risk in the case of rare diseases, there is a number of other types of studies (case–cohort, nested case–control, cohort studies) in which it was later shown that the odds ratio of exposure can be used to estimate the relative risk or the incidence rate ratio of exposure without the need for the rare disease assumption.
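Cornfield's observation is easy to check numerically. The counts below are hypothetical full-population figures for a rare disease (about 1% incidence), chosen for illustration; with a rare outcome, the non-case counts are close to the full group sizes, so the odds ratio lands close to the relative risk:

```python
def odds_ratio(a, b, c, d):
    """2x2 table: a=exposed cases, b=unexposed cases,
    c=exposed non-cases, d=unexposed non-cases."""
    return (a * d) / (b * c)

def relative_risk(a, b, c, d):
    """Risk among exposed divided by risk among unexposed
    (requires full-population counts, not sampled controls)."""
    return (a / (a + c)) / (b / (b + d))

# Hypothetical population of 4000: 2000 exposed, 2000 unexposed.
a, b, c, d = 30, 10, 1970, 1990
print(f"RR = {relative_risk(a, b, c, d):.2f}")   # 3.00
print(f"OR = {odds_ratio(a, b, c, d):.2f}")      # 3.03
```

Note that a real case–control study samples controls rather than observing the whole population, so the relative risk cannot be computed directly from its table; the point of the rare disease assumption is that the odds ratio, which can be computed, stands in for it.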
When the logistic regression model is used to model the case–control data and the odds ratio is of interest, both the prospective and retrospective likelihood methods lead to identical maximum likelihood estimates for the covariates, except for the intercept. The usual methods of estimating more interpretable parameters than odds ratios—such as risk ratios, levels, and differences—are biased if applied to case–control data, but special statistical procedures provide easy-to-use consistent estimators.
== Impact on longevity and public health ==
Tetlock and Gardner claimed that the contributions of medical science to increasing human longevity and public health were negligible, and too often negative, until Scottish physician Archie Cochrane was able to convince the medical establishment to adopt randomized controlled trials after World War II.
== See also ==
Nested case–control study
Retrospective cohort study
Prospective cohort study
Randomized controlled trial
== References ==
== Further reading ==
Stolley, Paul D., Schlesselman, James J. (1982). Case–control studies: design, conduct, analysis. Oxford [Oxfordshire]: Oxford University Press. ISBN 0-19-502933-X. (Still a very useful book, and a great place to start, but now a bit out of date.)
== External links ==
Wellcome Trust Case Control Consortium
In natural and social science research, a protocol is most commonly a predefined procedural method in the design and implementation of an experiment. Protocols are written whenever it is desirable to standardize a laboratory method to ensure successful replication of results by others in the same laboratory or by other laboratories. Additionally, and by extension, protocols have the advantage of facilitating the assessment of experimental results through peer review. In addition to detailed procedures, equipment, and instruments, protocols will also contain study objectives, reasoning for experimental design, reasoning for chosen sample sizes, safety precautions, and how results were calculated and reported, including statistical analysis and any rules for predefining and documenting excluded data to avoid bias.
Similarly, a protocol may refer to the procedural methods of health organizations, commercial laboratories, manufacturing plants, etc. to ensure their activities (e.g., blood testing at a hospital, testing of certified reference materials at a calibration laboratory, and manufacturing of transmission gears at a facility) are consistent to a specific standard, encouraging safe use and accurate results.
Finally, in the field of social science, a protocol may also refer to a "descriptive record" of observed events or a "sequence of behavior" of one or more organisms, recorded during or immediately after an activity (e.g., how an infant reacts to certain stimuli or how gorillas behave in natural habitat) to better identify "consistent patterns and cause-effect relationships." These protocols may take the form of hand-written journals or electronically documented media, including video and audio capture.
== Experiment and study protocol ==
Various fields of science, such as environmental science and clinical research, require the coordinated, standardized work of many participants. Additionally, any associated laboratory testing and experimentation must be done in a way that is both ethically sound and replicable by others using the same methods and equipment. As such, rigorous and vetted testing and experimental protocols are required. In fact, such predefined protocols are an essential component of Good Laboratory Practice (GLP) and Good Clinical Practice (GCP) regulations. Protocols written for use by a specific laboratory may incorporate or reference standard operating procedures (SOP) governing general practices required by the laboratory. A protocol may also reference laws and regulations applicable to the procedures described. Formal protocols typically require approval by one or more individuals—including, for example, a laboratory director, study director, and/or independent ethics committee—before they are implemented for general use. Clearly defined protocols are also required for research funded by the National Institutes of Health.
In a clinical trial, the protocol is carefully designed to safeguard the health of the participants as well as answer specific research questions. A protocol describes what types of people may participate in the trial; the schedule of tests, procedures, medications, and dosages; and the length of the study. While in a clinical trial, participants following a protocol are seen regularly by research staff to monitor their health and to determine the safety and effectiveness of their treatment. Since 1996, clinical trials conducted are widely expected to conform to and report the information called for in the CONSORT Statement, which provides a framework for designing and reporting protocols. Though tailored to health and medicine, ideas in the CONSORT statement are broadly applicable to other fields where experimental research is used.
Protocols will often address:
safety: Safety precautions are a valuable addition to a protocol, and can range from requiring goggles to provisions for containment of microbes, environmental hazards, toxic substances, and volatile solvents. Procedural contingencies in the event of an accident may be included in a protocol or in a referenced SOP.
procedures: Procedural information may include not only safety procedures but also procedures for avoiding contamination, calibration of equipment, equipment testing, documentation, and all other relevant issues. These procedural protocols can be used by skeptics to invalidate any claimed results if flaws are found.
equipment used: Equipment testing and documentation includes all necessary specifications, calibrations, operating ranges, etc. Environmental factors such as temperature, humidity, barometric pressure, and other factors can often have effects on results. Documenting these factors should be a part of any good procedure.
reporting: A protocol may specify reporting requirements, which would include all elements of the experiment's design and protocols and any environmental factors or mechanical limitations that might affect the validity of the results.
calculations and statistics: Protocols for methods that produce numerical results generally include detailed formulas for calculation of results. A formula may also be included for preparation of reagents and other solutions required for the work. Methods of statistical analysis may be included to guide interpretation of the data.
bias: Many protocols include provisions for avoiding bias in the interpretation of results. Approximation error is common to all measurements. These errors can be absolute errors from limitations of the equipment or propagation errors from approximate numbers used in calculations. Sample bias is the most common and sometimes the hardest bias to quantify. Statisticians often go to great lengths to ensure that the sample used is representative. For instance, political polls are best when restricted to likely voters, and this is one of the reasons why web polls cannot be considered scientific. The sample size is another important concept and can lead to biased data simply due to an unlikely event. A sample size of 10, i.e., polling 10 people, will seldom give valid polling results. Standard deviation and variance are concepts used to quantify the likely relevance of a given sample size. The placebo effect and observer bias often require the blinding of patients and researchers as well as a control group.
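The role of sample size can be made concrete with a short numerical sketch (illustrative values only; the function name and the 50% worst-case proportion are choices made for this example, not part of any standard protocol):

```python
import math

def standard_error(p, n):
    """Standard error of a sample proportion p estimated from n respondents."""
    return math.sqrt(p * (1 - p) / n)

# A poll where true support is 50% -- the worst case for precision.
for n in (10, 100, 1000):
    se = standard_error(0.5, n)
    # A rough 95% interval spans about +/- 1.96 standard errors.
    print(f"n={n:5d}  standard error={se:.3f}  ~95% margin=+/-{1.96 * se:.3f}")
```

The margin at n = 10 is roughly ten times that at n = 1000, which is why tiny samples seldom give valid polling results.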
Best practice recommends publishing the protocol of a review before initiating it, to reduce the risk of unplanned research duplication and to enable transparency and consistency between methodology and protocol.
=== Blinded protocols ===
A protocol may require blinding to avoid bias. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, while blinding would be useful, it is impossible or unethical. A good clinical protocol ensures that blinding is as effective as possible within ethical and practical constraints.
During the course of an experiment, a participant becomes unblinded if they deduce or otherwise obtain information that has been masked to them. Unblinding that occurs before the conclusion of a study is a source of experimental error, as the bias that was eliminated by blinding is re-introduced. Unblinding is common in blind experiments, and must be measured and reported. Reporting guidelines recommend that all studies assess and report unblinding. In practice, very few studies assess unblinding.
An experimenter may have latitude defining procedures for blinding and controls but may be required to justify those choices if the results are published or submitted to a regulatory agency. When it is known during the experiment which data was negative there are often reasons to rationalize why that data shouldn't be included. Positive data are rarely rationalized the same way.
== See also ==
== References == | Wikipedia/Protocol_(natural_sciences) |
The Mathematical Sciences are a group of areas of study that includes, in addition to mathematics, those academic disciplines that are primarily mathematical in nature but may not be universally considered subfields of mathematics proper.
Statistics, for example, is mathematical in its methods but grew out of bureaucratic and scientific observations, which merged with inverse probability and then grew through applications in some areas of physics, biometrics, and the social sciences to become its own separate, though closely allied, field. Theoretical astronomy, theoretical physics, theoretical and applied mechanics, continuum mechanics, mathematical chemistry, actuarial science, computer science, computational science, data science, operations research, quantitative biology, control theory, econometrics, geophysics and mathematical geosciences are likewise other fields often considered part of the mathematical sciences.
Some institutions offer degrees in mathematical sciences (e.g. the United States Military Academy, Stanford University, and University of Khartoum) or applied mathematical sciences (for example, the University of Rhode Island).
== See also ==
Exact sciences – Sciences that admit of absolute precision in their results
Formal science – Study of abstract structures described by formal systems
Relationship between mathematics and physics
== References ==
== External links ==
Division of Mathematical Sciences at the National Science Foundation, including a list of disciplinary areas supported
Faculty of Mathematical Sciences at University of Khartoum, offers academic degrees in Mathematics, Computer Sciences and Statistics
Programs of the Mathematical Sciences Research Institute
Research topics studied at the Isaac Newton Institute for Mathematical Sciences
Mathematical Sciences in the U.S. FY 2016 Budget; a report from the AAAS | Wikipedia/Mathematical_sciences |
Structural equation modeling (SEM) is a diverse set of methods used by scientists for both observational and experimental research. SEM is used mostly in the social and behavioral science fields, but it is also used in epidemiology, business, and other fields. A common definition of SEM is "...a class of methodologies that seeks to represent hypotheses about the means, variances, and covariances of observed data in terms of a smaller number of 'structural' parameters defined by a hypothesized underlying conceptual or theoretical model."
SEM involves a model representing how various aspects of some phenomenon are thought to causally connect to one another. Structural equation models often contain postulated causal connections among some latent variables (variables thought to exist but which can't be directly observed). Additional causal connections link those latent variables to observed variables whose values appear in a data set. The causal connections are represented using equations but the postulated structuring can also be presented using diagrams containing arrows as in Figures 1 and 2. The causal structures imply that specific patterns should appear among the values of the observed variables. This makes it possible to use the connections between the observed variables' values to estimate the magnitudes of the postulated effects, and to test whether or not the observed data are consistent with the requirements of the hypothesized causal structures.
The boundary between what is and is not a structural equation model is not always clear but SE models often contain postulated causal connections among a set of latent variables (variables thought to exist but which can't be directly observed, like an attitude, intelligence or mental illness) and causal connections linking the postulated latent variables to variables that can be observed and whose values are available in some data set. Variations among the styles of latent causal connections, variations among the observed variables measuring the latent variables, and variations in the statistical estimation strategies result in the SEM toolkit including confirmatory factor analysis (CFA), confirmatory composite analysis, path analysis, multi-group modeling, longitudinal modeling, partial least squares path modeling, latent growth modeling and hierarchical or multilevel modeling.
SEM researchers use computer programs to estimate the strength and sign of the coefficients corresponding to the modeled structural connections, for example the numbers connected to the arrows in Figure 1. Because a postulated model such as Figure 1 may not correspond to the worldly forces controlling the observed data measurements, the programs also provide model tests and diagnostic clues suggesting which indicators, or which model components, might introduce inconsistency between the model and observed data. Criticisms of SEM methods hint at: disregard of available model tests, problems in the model's specification, a tendency to accept models without considering external validity, and potential philosophical biases.
A great advantage of SEM is that all of these measurements and tests occur simultaneously in one statistical estimation procedure, where all the model coefficients are calculated using all information from the observed variables. This means the estimates are more accurate than if a researcher were to calculate each part of the model separately.
== History ==
Structural equation modeling (SEM) began differentiating itself from correlation and regression when Sewall Wright provided explicit causal interpretations for a set of regression-style equations based on a solid understanding of the physical and physiological mechanisms producing direct and indirect effects among his observed variables. The equations were estimated like ordinary regression equations but the substantive context for the measured variables permitted clear causal, not merely predictive, understandings. O. D. Duncan introduced SEM to the social sciences in his 1975 book, and SEM blossomed in the late 1970s and 1980s when increasing computing power permitted practical model estimation. In 1987, Hayduk provided the first book-length introduction to structural equation modeling with latent variables, and this was soon followed by Bollen's popular text (1989).
Different yet mathematically related modeling approaches developed in psychology, sociology, and economics. Early Cowles Commission work on simultaneous equations estimation centered on Koopmans and Hood's (1953) algorithms from transport economics and optimal routing, with maximum likelihood estimation, and closed form algebraic calculations, as iterative solution search techniques were limited in the days before computers. The convergence of two of these developmental streams (factor analysis from psychology, and path analysis from sociology via Duncan) produced the current core of SEM. One of several programs Karl Jöreskog developed at Educational Testing Service, LISREL embedded latent variables (which psychologists knew as the latent factors from factor analysis) within path-analysis-style equations (which sociologists inherited from Wright and Duncan). The factor-structured portion of the model incorporated measurement errors which permitted measurement-error-adjustment, though not necessarily error-free estimation, of effects connecting different postulated latent variables.
Traces of the historical convergence of the factor analytic and path analytic traditions persist as the distinction between the measurement and structural portions of models; and as continuing disagreements over model testing, and whether measurement should precede or accompany structural estimates. Viewing factor analysis as a data-reduction technique deemphasizes testing, which contrasts with path analytic appreciation for testing postulated causal connections – where the test result might signal model misspecification. The friction between the factor analytic and path analytic traditions continues to surface in the literature.
Wright's path analysis influenced Hermann Wold, Wold's student Karl Jöreskog, and Jöreskog's student Claes Fornell, but SEM never gained a large following among U.S. econometricians, possibly due to fundamental differences in modeling objectives and typical data structures. The prolonged separation of SEM's economic branch led to procedural and terminological differences, though deep mathematical and statistical connections remain. Disciplinary differences in approaches can be seen in SEMNET discussions of endogeneity, and in discussions on causality via directed acyclic graphs (DAGs). Discussions comparing and contrasting various SEM approaches are available highlighting disciplinary differences in data structures and the concerns motivating economic models.
Judea Pearl extended SEM from linear to nonparametric models, and proposed causal and counterfactual interpretations of the equations. Nonparametric SEMs permit estimating total, direct and indirect effects without making any commitment to linearity of effects or assumptions about the distributions of the error terms.
SEM analyses are popular in the social sciences because these analytic techniques help us break down complex concepts and understand causal processes, but the complexity of the models can introduce substantial variability in the results depending on the presence or absence of conventional control variables, the sample size, and the variables of interest. The use of experimental designs may address some of these doubts.
Today, SEM forms the basis of machine learning and (interpretable) neural networks. Exploratory and confirmatory factor analyses in classical statistics mirror unsupervised and supervised machine learning.
== General steps and considerations ==
The following considerations apply to the construction and assessment of many structural equation models.
=== Model specification ===
Building or specifying a model requires attending to:
the set of variables to be employed,
what is known about the variables,
what is theorized or hypothesized about the variables' causal connections and disconnections,
what the researcher seeks to learn from the modeling, and
the instances of missing values and/or the need for imputation.
Structural equation models attempt to mirror the worldly forces operative for causally homogeneous cases – namely cases enmeshed in the same worldly causal structures but whose values on the causes differ and who therefore possess different values on the outcome variables. Causal homogeneity can be facilitated by case selection, or by segregating cases in a multi-group model. A model's specification is not complete until the researcher specifies:
which effects and/or correlations/covariances are to be included and estimated,
which effects and other coefficients are forbidden or presumed unnecessary,
and which coefficients will be given fixed/unchanging values (e.g. to provide measurement scales for latent variables as in Figure 2).
The latent level of a model is composed of endogenous and exogenous variables. The endogenous latent variables are the true-score variables postulated as receiving effects from at least one other modeled variable. Each endogenous variable is modeled as the dependent variable in a regression-style equation. The exogenous latent variables are background variables postulated as causing one or more of the endogenous variables and are modeled like the predictor variables in regression-style equations. Causal connections among the exogenous variables are not explicitly modeled but are usually acknowledged by modeling the exogenous variables as freely correlating with one another. The model may include intervening variables – variables receiving effects from some variables but also sending effects to other variables. As in regression, each endogenous variable is assigned a residual or error variable encapsulating the effects of unavailable and usually unknown causes. Each latent variable, whether exogenous or endogenous, is thought of as containing the cases' true-scores on that variable, and these true-scores causally contribute valid/genuine variations into one or more of the observed/reported indicator variables.
The LISREL program assigned Greek names to the elements in a set of matrices to keep track of the various model components. These names became relatively standard notation, though the notation has been extended and altered to accommodate a variety of statistical considerations. Texts and programs "simplifying" model specification via diagrams or by using equations permitting user-selected variable names, re-convert the user's model into some standard matrix-algebra form in the background. The "simplifications" are achieved by implicitly introducing default program "assumptions" about model features with which users supposedly need not concern themselves. Unfortunately, these default assumptions easily obscure model components that leave unrecognized issues lurking within the model's structure, and underlying matrices.
Two main components of models are distinguished in SEM: the structural model showing potential causal dependencies between endogenous and exogenous latent variables, and the measurement model showing the causal connections between the latent variables and the indicators. Exploratory and confirmatory factor analysis models, for example, focus on the causal measurement connections, while path models more closely correspond to SEMs latent structural connections.
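The way the measurement and structural components jointly determine the model-implied covariances can be sketched with the standard factor-analytic decomposition Σ = ΛΦΛᵀ + Θ. The loading, latent-covariance, and error values below are hypothetical, chosen only to illustrate the computation for a two-latent, four-indicator model:

```python
import numpy as np

# Hypothetical measurement model: four indicators, two latent variables.
# Lambda holds the loadings linking indicators (rows) to latents (columns);
# the 1.0 entries fix the measurement scales of the latents.
Lambda = np.array([[1.0, 0.0],
                   [0.8, 0.0],
                   [0.0, 1.0],
                   [0.0, 0.7]])

# Phi: covariance matrix of the latent variables (here the "structural"
# part is simply their covariance of 0.5).
Phi = np.array([[1.0, 0.5],
                [0.5, 1.0]])

# Theta: diagonal covariance matrix of the measurement errors.
Theta = np.diag([0.3, 0.4, 0.3, 0.5])

# Model-implied covariance matrix of the observed indicators.
Sigma = Lambda @ Phi @ Lambda.T + Theta
print(np.round(Sigma, 3))
```

Estimation then amounts to choosing the free entries of Λ, Φ, and Θ so that Σ matches the observed covariance matrix as closely as possible.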
Modelers specify each coefficient in a model as being free to be estimated, or fixed at some value. The free coefficients may be postulated effects the researcher wishes to test, background correlations among the exogenous variables, or the variances of the residual or error variables providing additional variations in the endogenous latent variables. The fixed coefficients may be values like the 1.0 values in Figure 2 that provide scales for the latent variables, or values of 0.0 which assert causal disconnections such as the assertion of no-direct-effects (no arrows) pointing from Academic Achievement to any of the four scales in Figure 1. SEM programs provide estimates and tests of the free coefficients, while the fixed coefficients contribute importantly to testing the overall model structure. Various kinds of constraints between coefficients can also be used. The model specification depends on what is known from the literature, the researcher's experience with the modeled indicator variables, and the features being investigated by using the specific model structure.
There is a limit to how many coefficients can be estimated in a model. If there are fewer data points than the number of estimated coefficients, the resulting model is said to be "unidentified" and no coefficient estimates can be obtained. Reciprocal effects and other causal loops may also interfere with estimation.
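The data-points-versus-coefficients comparison is often called the t-rule: p observed variables supply p(p + 1)/2 distinct variances and covariances, and a model whose free coefficients outnumber these moments cannot be identified. A minimal sketch (function names are illustrative, and the rule is necessary but not sufficient for identification):

```python
def moment_count(p):
    """Number of distinct variances and covariances among p observed variables."""
    return p * (p + 1) // 2

def t_rule_ok(p, free_coefficients):
    """Necessary (not sufficient) identification check: free coefficients
    must not exceed the available data moments."""
    return free_coefficients <= moment_count(p)

# Four indicators supply 4*5/2 = 10 moments; a model with 12 free
# coefficients fails the counting rule, while one with 9 passes it.
print(moment_count(4))      # 10
print(t_rule_ok(4, 12))     # False
print(t_rule_ok(4, 9))      # True
```

Passing the counting rule does not guarantee identification – reciprocal effects, for example, can remain underidentified even when the counts allow estimation.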
=== Estimation of free model coefficients ===
Model coefficients fixed at zero, 1.0, or other values, do not require estimation because they already have specified values. Estimated values for free model coefficients are obtained by maximizing fit to, or minimizing difference from, the data relative to what the data's features would be if the free model coefficients took on the estimated values. The model's implications for what the data should look like for a specific set of coefficient values depends on:
a) the coefficients' locations in the model (e.g. which variables are connected/disconnected),
b) the nature of the connections between the variables (covariances or effects; with effects often assumed to be linear),
c) the nature of the error or residual variables (often assumed to be independent of, or causally-disconnected from, many variables),
and d) the measurement scales appropriate for the variables (interval level measurement is often assumed).
A stronger effect connecting two latent variables implies the indicators of those latents should be more strongly correlated. Hence, a reasonable estimate of a latent's effect will be whatever value best matches the correlations between the indicators of the corresponding latent variables – namely the estimate-value maximizing the match with the data, or minimizing the differences from the data. With maximum likelihood estimation, the numerical values of all the free model coefficients are individually adjusted (progressively increased or decreased from initial start values) until they maximize the likelihood of observing the sample data – whether the data are the variables' covariances/correlations, or the cases' actual values on the indicator variables. Ordinary least squares estimates are the coefficient values that minimize the squared differences between the data and what the data would look like if the model was correctly specified, namely if all the model's estimated features correspond to real worldly features.
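For the simplest possible model – a single effect of X on Y – the minimize-the-difference logic reduces to a closed form. The sketch below simulates data with a known effect of 0.6 and recovers it by matching the model-implied covariance b·var(X) to the observed covariance (simulated data and values chosen for illustration; this is not a general SEM estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from a known structure: y = 0.6 * x + error.
n = 10_000
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=0.8, size=n)

# Observed covariance between x and y, and variance of x.
cov_xy = np.cov(x, y)[0, 1]
var_x = np.var(x, ddof=1)

# For the model y = b*x + e, the implied covariance is b * var(x).
# Minimizing the squared difference (cov_xy - b*var_x)^2 gives the
# closed-form solution below -- the familiar regression slope.
b_hat = cov_xy / var_x
print(round(b_hat, 2))  # close to the true value 0.6
```

With many coefficients the same matching is done simultaneously over the whole covariance matrix, which is why iterative maximization is needed in practice.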
The appropriate statistical feature to maximize or minimize to obtain estimates depends on the variables' levels of measurement (estimation is generally easier with interval level measurements than with nominal or ordinal measures), and where a specific variable appears in the model (e.g. endogenous dichotomous variables create more estimation difficulties than exogenous dichotomous variables). Most SEM programs provide several options for what is to be maximized or minimized to obtain estimates the model's coefficients. The choices often include maximum likelihood estimation (MLE), full information maximum likelihood (FIML), ordinary least squares (OLS), weighted least squares (WLS), diagonally weighted least squares (DWLS), and two stage least squares.
One common problem is that a coefficient's estimated value may be underidentified because it is insufficiently constrained by the model and data. No unique best-estimate exists unless the model and data together sufficiently constrain or restrict a coefficient's value. For example, the magnitude of a single data correlation between two variables is insufficient to provide estimates of a reciprocal pair of modeled effects between those variables. The correlation might be accounted for by one of the reciprocal effects being stronger than the other, by the reverse, or by effects of equal magnitude. Underidentified effect estimates can be rendered identified by introducing additional model and/or data constraints. For example, reciprocal effects can be rendered identified by constraining one effect estimate to be double, triple, or equivalent to, the other effect estimate, but the resultant estimates will only be trustworthy if the additional model constraint corresponds to the world's structure. Data on a third variable that directly causes only one of a pair of reciprocally causally connected variables can also assist identification. Constraining a third variable to not directly cause one of the reciprocally-causal variables breaks the symmetry otherwise plaguing the reciprocal effect estimates because that third variable must be more strongly correlated with the variable it causes directly than with the variable at the "other" end of the reciprocal which it impacts only indirectly. Notice that this again presumes the properness of the model's causal specification – namely that there really is a direct effect leading from the third variable to the variable at this end of the reciprocal effects and no direct effect on the variable at the "other end" of the reciprocally connected pair of variables.
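The help provided by such a third variable can be illustrated with an instrumental-variable sketch: z directly causes x but not y, so the z-driven variation in x identifies x's effect on y even though x and y share an unobserved disturbance. The simulation below is a simplified stand-in for the reciprocal-effects case (a single directed effect rather than a full reciprocal pair), with all numerical values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# z is the third variable ("instrument"): it directly causes x but not y.
z = rng.normal(size=n)
u = rng.normal(size=n)                 # disturbance shared by x and y
x = 1.0 * z + u + rng.normal(size=n)
y = 0.5 * x + u                        # true effect of x on y is 0.5

# Naive slope is biased because x is correlated with the disturbance u.
b_naive = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# The instrumental-variable estimate uses only the variation in x that
# comes from z, which is causally disconnected from u.
b_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(round(b_naive, 2), round(b_iv, 2))
```

The naive estimate is pulled away from 0.5 by the shared disturbance, while the instrument-based estimate recovers it – but, as the text notes, only because the model correctly asserts that z has no direct effect on y.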
Theoretical demands for null/zero effects provide helpful constraints assisting estimation, though theories often fail to clearly report which effects are allegedly nonexistent.
=== Model assessment ===
Model assessment depends on the theory, the data, the model, and the estimation strategy. Hence model assessments consider:
whether the data contain reasonable measurements of appropriate variables,
whether the modeled cases are causally homogeneous, (It makes no sense to estimate one model if the data cases reflect two or more different causal networks.)
whether the model appropriately represents the theory or features of interest, (Models are unpersuasive if they omit features required by a theory, or contain coefficients inconsistent with that theory.)
whether the estimates are statistically justifiable, (Substantive assessments may be devastated: by violating assumptions, by using an inappropriate estimator, and/or by encountering non-convergence of iterative estimators.)
the substantive reasonableness of the estimates, (Negative variances, and correlations exceeding 1.0 or -1.0, are impossible. Statistically possible estimates that are inconsistent with theory may also challenge theory, and our understanding.)
the remaining consistency, or inconsistency, between the model and data. (The estimation process minimizes the differences between the model and data but important and informative differences may remain.)
Research claiming to test or "investigate" a theory requires attending to beyond-chance model-data inconsistency. Estimation adjusts the model's free coefficients to provide the best possible fit to the data. The output from SEM programs includes a matrix reporting the relationships among the observed variables that would be observed if the estimated model effects actually controlled the observed variables' values. The "fit" of a model reports match or mismatch between the model-implied relationships (often covariances) and the corresponding observed relationships among the variables. Large and significant differences between the data and the model's implications signal problems. The probability accompanying a χ2 (chi-squared) test is the probability that the data could arise by random sampling variations if the estimated model constituted the real underlying population forces. A small χ2 probability reports it would be unlikely for the current data to have arisen if the modeled structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations.
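The model test described here can be sketched with the maximum-likelihood discrepancy function commonly used in SEM, F_ML = ln|Σ| − ln|S| + tr(SΣ⁻¹) − p, where S is the sample covariance matrix, Σ the model-implied one, and p the number of observed variables; (N − 1)·F_ML is the model χ2 statistic. The matrices and sample size below are illustrative:

```python
import numpy as np

def ml_discrepancy(S, Sigma):
    """Maximum-likelihood discrepancy between a sample covariance matrix S
    and a model-implied covariance matrix Sigma."""
    p = S.shape[0]
    return (np.log(np.linalg.det(Sigma)) - np.log(np.linalg.det(S))
            + np.trace(S @ np.linalg.inv(Sigma)) - p)

# Illustrative sample and model-implied covariance matrices.
S = np.array([[1.0, 0.4],
              [0.4, 1.0]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])

N = 500  # sample size
chi2 = (N - 1) * ml_discrepancy(S, Sigma)
print(round(chi2, 2))

# A model implying exactly the observed covariances has essentially
# zero discrepancy, and hence a chi-squared of essentially zero.
print(round((N - 1) * ml_discrepancy(S, S), 6))
```

The (N − 1) multiplier shows directly why, for a misspecified model, the χ2 statistic grows with sample size: the same nonzero discrepancy is scaled by an ever-larger factor.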
If a model remains inconsistent with the data despite selecting optimal coefficient estimates, an honest research response reports and attends to this evidence (often a significant model χ2 test). Beyond-chance model-data inconsistency challenges both the coefficient estimates and the model's capacity for adjudicating the model's structure, irrespective of whether the inconsistency originates in problematic data, inappropriate statistical estimation, or incorrect model specification.
Coefficient estimates in data-inconsistent ("failing") models are interpretable, as reports of how the world would appear to someone believing a model that conflicts with the available data. The estimates in data-inconsistent models do not necessarily become "obviously wrong" by becoming statistically strange, or wrongly signed according to theory. The estimates may even closely match a theory's requirements but the remaining data inconsistency renders the match between the estimates and theory unable to provide succor. Failing models remain interpretable, but only as interpretations that conflict with available evidence.
Replication is unlikely to detect misspecified models which inappropriately-fit the data. If the replicate data is within random variations of the original data, the same incorrect coefficient placements that provided inappropriate-fit to the original data will likely also inappropriately-fit the replicate data. Replication helps detect issues such as data mistakes (made by different research groups), but is especially weak at detecting misspecifications after exploratory model modification – as when confirmatory factor analysis is applied to a random second-half of data following exploratory factor analysis (EFA) of first-half data.
A modification index is an estimate of how much a model's fit to the data would "improve" (but not necessarily how much the model's structure would improve) if a specific currently-fixed model coefficient were freed for estimation. Researchers confronting data-inconsistent models can easily free coefficients the modification indices report as likely to produce substantial improvements in fit. This simultaneously introduces a substantial risk of moving from a causally-wrong-and-failing model to a causally-wrong-but-fitting model because improved data-fit does not provide assurance that the freed coefficients are substantively reasonable or world matching. The original model may contain causal misspecifications such as incorrectly directed effects, or incorrect assumptions about unavailable variables, and such problems cannot be corrected by adding coefficients to the current model. Consequently, such models remain misspecified despite the closer fit provided by additional coefficients. Fitting yet worldly-inconsistent models are especially likely to arise if a researcher committed to a particular model (for example a factor model having a desired number of factors) gets an initially-failing model to fit by inserting measurement error covariances "suggested" by modification indices. MacCallum (1986) demonstrated that "even under favorable conditions, models arising from specification searches must be viewed with caution." Model misspecification may sometimes be corrected by insertion of coefficients suggested by the modification indices, but many more corrective possibilities are raised by employing a few indicators of similar-yet-importantly-different latent variables.
"Accepting" failing models as "close enough" is also not a reasonable alternative. A cautionary instance was provided by Browne, MacCallum, Kim, Anderson, and Glaser who addressed the mathematics behind why the χ2 test can have (though it does not always have) considerable power to detect model misspecification. The probability accompanying a χ2 test is the probability that the data could arise by random sampling variations if the current model, with its optimal estimates, constituted the real underlying population forces. A small χ2 probability reports it would be unlikely for the current data to have arisen if the current model structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations. Browne, MacCallum, Kim, Anderson, and Glaser presented a factor model they viewed as acceptable despite the model being significantly inconsistent with their data according to χ2. The fallaciousness of their claim that close fit should be treated as good enough was demonstrated by Hayduk, Pazderka-Robinson, Cummings, Levers, and Beres, who demonstrated a fitting model for Browne et al.'s own data by incorporating an experimental feature Browne et al. overlooked. The fault was not in the math of the indices or in the over-sensitivity of χ2 testing. The fault was in Browne, MacCallum, and the other authors forgetting, neglecting, or overlooking that the amount of ill fit cannot be trusted to correspond to the nature, location, or seriousness of problems in a model's specification.
Many researchers tried to justify switching to fit-indices, rather than testing their models, by claiming that χ2 increases (and hence χ2 probability decreases) with increasing sample size (N). There are two mistakes in discounting χ2 on this basis. First, for proper models, χ2 does not increase with increasing N, so if χ2 increases with N that itself is a sign that something is detectably problematic. Second, for models that are detectably misspecified, χ2 increase with N provides the good news of increasing statistical power to detect model misspecification (that is, power to avoid a Type II error). Some kinds of important misspecifications cannot be detected by χ2, so any amount of ill fit beyond what might reasonably be produced by random variations warrants report and consideration. The χ2 model test, possibly adjusted, is the strongest available structural equation model test.
Numerous fit indices quantify how closely a model fits the data but all fit indices suffer from the logical difficulty that the size or amount of ill fit is not trustably coordinated with the severity or nature of the issues producing the data inconsistency. Models with different causal structures which fit the data identically well, have been called equivalent models. Such models are data-fit-equivalent though not causally equivalent, so at least one of the so-called equivalent models must be inconsistent with the world's structure. If there is a perfect 1.0 correlation between X and Y and we model this as X causes Y, there will be perfect fit and zero residual error. But the model may not match the world because Y may actually cause X, or both X and Y may be responding to a common cause Z, or the world may contain a mixture of these effects (e.g. like a common cause plus an effect of Y on X), or other causal structures. The perfect fit does not tell us the model's structure corresponds to the world's structure, and this in turn implies that getting closer to perfect fit does not necessarily correspond to getting closer to the world's structure – maybe it does, maybe it doesn't. This makes it incorrect for a researcher to claim that even perfect model fit implies the model is correctly causally specified. For even moderately complex models, precisely equivalently-fitting models are rare. Models almost-fitting the data, according to any index, unavoidably introduce additional potentially-important yet unknown model misspecifications. These models constitute a greater research impediment.
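The fit-equivalence of causally different models described above can be illustrated with a small numerical sketch. The variances and effect sizes here are arbitrary illustrative assumptions, not values from any cited study:

```python
import numpy as np

# Two causally different two-variable models that imply the identical
# covariance matrix, so no fit measure can distinguish them.
# Model A: X -> Y with effect b and residual variance var_e.
var_x, b, var_e = 1.0, 0.8, 0.36
sigma_a = np.array([[var_x,      b * var_x],
                    [b * var_x,  b**2 * var_x + var_e]])

# Model B: Y -> X, with parameters chosen to reproduce sigma_a exactly.
var_y = sigma_a[1, 1]
b2 = sigma_a[0, 1] / var_y
var_e2 = sigma_a[0, 0] - b2**2 * var_y
sigma_b = np.array([[b2**2 * var_y + var_e2, b2 * var_y],
                    [b2 * var_y,             var_y]])

print(np.allclose(sigma_a, sigma_b))  # True: identical implied covariances
```

Because both models reproduce the same covariance matrix, every fit index assigns them identical fit, even though at most one of the two causal structures can match the world.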
This logical weakness renders all fit indices "unhelpful" whenever a structural equation model is significantly inconsistent with the data, but several forces continue to propagate fit-index use. For example, Dag Sörbom reported that when someone asked Karl Jöreskog, the developer of the first structural equation modeling program, "Why have you then added GFI?" to the LISREL program, Jöreskog replied "Well, users threaten us saying they would stop using LISREL if it always produces such large chi-squares. So we had to invent something to make people happy. GFI serves that purpose." The χ2 evidence of model-data inconsistency was too statistically solid to be dislodged or discarded, but people could at least be provided a way to distract from the "disturbing" evidence. Career profits can still be accrued by developing additional indices, reporting investigations of index behavior, and publishing models that intentionally bury evidence of model-data inconsistency under an MDI (a mound of distracting indices). There seems no general justification for why a researcher should "accept" a causally wrong model rather than attempting to correct detected misspecifications. And some portions of the literature seem not to have noticed that "accepting a model" (on the basis of a "satisfying" index value) suffers from an intensified version of the criticism applied to "acceptance" of a null hypothesis. Introductory statistics texts usually recommend replacing the term "accept" with "failed to reject the null hypothesis" to acknowledge the possibility of Type II error. A Type III error arises from "accepting" a model hypothesis when the current data are sufficient to reject the model.
Whether or not researchers are committed to seeking the world’s structure is a fundamental concern. Displacing test evidence of model-data inconsistency by hiding it behind index claims of acceptable fit introduces the discipline-wide cost of diverting attention away from whatever the discipline might have done to attain a structurally improved understanding of the discipline’s substance. The discipline ends up paying a real cost for index-based displacement of evidence of model misspecification. The frictions created by disagreements over the necessity of correcting model misspecifications will likely increase with increasing use of non-factor-structured models, and with use of fewer, more precise, indicators of similar yet importantly different latent variables.
The considerations relevant to using fit indices include checking:
whether data concerns have been addressed (to ensure data mistakes are not driving model-data inconsistency);
whether criterion values for the index have been investigated for models structured like the researcher's model (e.g. index criteria based on factor-structured models are only appropriate if the researcher's model actually is factor structured);
whether the kinds of potential misspecifications in the current model correspond to the kinds of misspecifications on which the index criteria are based (e.g. criteria based on simulations of omitted factor loadings may not be appropriate for misspecification resulting from failure to include appropriate control variables);
whether the researcher knowingly agrees to disregard evidence pointing to the kinds of misspecifications on which the index criteria were based. (If the index criterion is based on simulating a missing factor loading or two, using that criterion acknowledges the researcher's willingness to accept a model missing a factor loading or two.);
whether the latest, not outdated, index criteria are being used (because the criteria for some indices tightened over time);
whether satisfying criterion values on pairs of indices are required (e.g. Hu and Bentler report that some common indices function inappropriately unless they are assessed together.);
whether a model test is, or is not, available. (A χ2 value, degrees of freedom, and probability will be available for models reporting indices based on χ2.)
and whether the researcher has considered both alpha (Type I) and beta (Type II) errors in making their index-based decisions (e.g. if the model is significantly data-inconsistent, the "tolerable" amount of inconsistency is likely to differ across medical, business, social, and psychological contexts).
Some of the more commonly used fit statistics include
Chi-square
A fundamental test of fit used in the calculation of many other fit measures. It is a function of the discrepancy between the observed covariance matrix and the model-implied covariance matrix. Chi-square increases with sample size only if the model is detectably misspecified.
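A sketch of the computation behind the likelihood-ratio χ2 statistic may help. The function name is illustrative, and real SEM software additionally supplies the model's degrees of freedom and handles estimation; this only shows the discrepancy-based statistic itself:

```python
import numpy as np

def ml_chi_square(S, Sigma, n):
    """Likelihood-ratio chi-square: (n - 1) times the maximum-likelihood
    discrepancy F = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p between the
    observed covariance matrix S and the model-implied matrix Sigma."""
    p = S.shape[0]
    _, logdet_sigma = np.linalg.slogdet(Sigma)
    _, logdet_s = np.linalg.slogdet(S)
    F = logdet_sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_s - p
    return (n - 1) * F

S = np.array([[1.0, 0.5],
              [0.5, 1.0]])
print(ml_chi_square(S, S, 200))  # ≈ 0: implied matrix equals observed matrix
```

When the model-implied matrix departs from the observed matrix, the discrepancy F grows, and the statistic scales with sample size, which is why χ2 increases with N only for detectably misspecified models.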
Akaike information criterion (AIC)
An index of relative model fit: The preferred model is the one with the lowest AIC value.
AIC = 2k − 2 ln(L)
where k is the number of parameters in the statistical model, and L is the maximized value of the likelihood of the model.
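The formula translates directly into code. A minimal sketch (the function name and example values are illustrative):

```python
def aic(k, log_likelihood):
    """AIC = 2k - 2 ln(L), with ln(L) the maximized log-likelihood."""
    return 2 * k - 2 * log_likelihood

# Lower AIC is preferred: a 3-parameter model with ln(L) = -100
# beats a 5-parameter model with ln(L) = -99.5, because the small
# likelihood gain does not justify the two extra parameters.
print(aic(3, -100.0), aic(5, -99.5))  # → 206.0 209.0
```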
Root Mean Square Error of Approximation (RMSEA)
Fit index where a value of zero indicates the best fit. Guidelines for determining a "close fit" using RMSEA are highly contested.
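A sketch of the commonly used point estimate of RMSEA, computed from the model's χ2, degrees of freedom, and sample size (the function name is illustrative, and some sources use N rather than N − 1 in the denominator):

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of RMSEA; returns 0 when chi2 <= df.
    rmsea = sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(rmsea(30.0, 20, 201))  # ≈ 0.05
print(rmsea(15.0, 20, 201))  # 0.0: chi2 is below its expected value
```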
Standardized Root Mean Squared Residual (SRMR)
The SRMR is a popular absolute fit indicator. Hu and Bentler (1999) suggested .08 or smaller as a guideline for good fit.
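One common formulation of SRMR averages the squared standardized residuals over the unique elements of the observed and implied matrices. A sketch (the function name and example matrices are illustrative, and published formulations differ in minor details):

```python
import numpy as np

def srmr(S, Sigma):
    """Root mean square of standardized residuals between the observed
    covariance matrix S and model-implied matrix Sigma, taken over the
    p(p+1)/2 unique (lower-triangular plus diagonal) elements."""
    p = S.shape[0]
    d = np.sqrt(np.diag(S))
    sq_resid, count = 0.0, 0
    for i in range(p):
        for j in range(i + 1):
            sq_resid += ((S[i, j] - Sigma[i, j]) / (d[i] * d[j])) ** 2
            count += 1
    return np.sqrt(sq_resid / count)

S = np.array([[1.0, 0.5], [0.5, 1.0]])
Sigma = np.array([[1.0, 0.4], [0.4, 1.0]])
print(srmr(S, Sigma))  # ≈ 0.058, under Hu and Bentler's .08 guideline
```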
Comparative Fit Index (CFI)
In examining baseline comparisons, the CFI depends in large part on the average size of the correlations in the data. If the average correlation between variables is not high, then the CFI will not be very high. A CFI value of .95 or higher is desirable.
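The baseline comparison underlying the CFI can be sketched as follows (the function name and example values are illustrative):

```python
def cfi(chi2_model, df_model, chi2_null, df_null):
    """CFI compares the model's noncentrality (chi2 - df) with that of
    the null/baseline model that leaves all variables uncorrelated."""
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, d_model)
    return 1.0 if d_null == 0.0 else 1.0 - d_model / d_null

# A weak baseline (small chi2_null, i.e. low average correlations in
# the data) caps how high CFI can go for the same model chi2:
print(cfi(30.0, 20, 500.0, 28))  # ≈ 0.979
print(cfi(30.0, 20, 60.0, 28))   # ≈ 0.688
```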
The following table provides references documenting these, and other, features for some common indices: the RMSEA (Root Mean Square Error of Approximation), SRMR (Standardized Root Mean Squared Residual), CFI (Comparative Fit Index), and the TLI (the Tucker-Lewis Index). Additional indices such as the AIC (Akaike Information Criterion) can be found in most SEM introductions. For each measure of fit, a decision as to what represents a good-enough fit between the model and the data reflects the researcher's modeling objective (perhaps challenging someone else's model, or improving measurement); whether or not the model is to be claimed as having been "tested"; and whether the researcher is comfortable "disregarding" evidence of the index-documented degree of ill fit.
=== Sample size, power, and estimation ===
Researchers agree samples should be large enough to provide stable coefficient estimates and reasonable testing power but there is no general consensus regarding specific required sample sizes, or even how to determine appropriate sample sizes. Recommendations have been based on the number of coefficients to be estimated, the number of modeled variables, and Monte Carlo simulations addressing specific model coefficients. Sample size recommendations based on the ratio of the number of indicators to latents are factor oriented and do not apply to models employing single indicators having fixed nonzero measurement error variances. Overall, for moderate sized models without statistically difficult-to-estimate coefficients, the required sample sizes (N’s) seem roughly comparable to the N’s required for a regression employing all the indicators.
The larger the sample size, the greater the likelihood of including cases that are not causally homogeneous. Consequently, increasing N to improve the likelihood of being able to report a desired coefficient as statistically significant, simultaneously increases the risk of model misspecification, and the power to detect the misspecification. Researchers seeking to learn from their modeling (including potentially learning their model requires adjustment or replacement) will strive for as large a sample size as permitted by funding and by their assessment of likely population-based causal heterogeneity/homogeneity. If the available N is huge, modeling sub-sets of cases can control for variables that might otherwise disrupt causal homogeneity. Researchers fearing they might have to report their model’s deficiencies are torn between wanting a larger N to provide sufficient power to detect structural coefficients of interest, while avoiding the power capable of signaling model-data inconsistency. The huge variation in model structures and data characteristics suggests adequate sample sizes might be usefully located by considering other researchers’ experiences (both good and bad) with models of comparable size and complexity that have been estimated with similar data.
=== Interpretation ===
Causal interpretations of SE models are the clearest and most understandable but those interpretations will be fallacious/wrong if the model’s structure does not correspond to the world’s causal structure. Consequently, interpretation should address the overall status and structure of the model, not merely the model’s estimated coefficients. Whether a model fits the data, and/or how a model came to fit the data, are paramount for interpretation. Data fit obtained by exploring, or by following successive modification indices, does not guarantee the model is wrong but raises serious doubts because these approaches are prone to incorrectly modeling data features. For example, exploring to see how many factors are required preempts finding the data are not factor structured, especially if the factor model has been “persuaded” to fit via inclusion of measurement error covariances. Data’s ability to speak against a postulated model is progressively eroded with each unwarranted inclusion of a “modification index suggested” effect or error covariance. It becomes exceedingly difficult to recover a proper model if the initial/base model contains several misspecifications.
Direct-effect estimates are interpreted in parallel to the interpretation of coefficients in regression equations but with causal commitment. Each unit increase in a causal variable’s value is viewed as producing a change of the estimated magnitude in the dependent variable’s value given control or adjustment for all the other operative/modeled causal mechanisms. Indirect effects are interpreted similarly, with the magnitude of a specific indirect effect equaling the product of the series of direct effects comprising that indirect effect. The units involved are the real scales of observed variables’ values, and the assigned scale values for latent variables. A specified/fixed 1.0 effect of a latent on a specific indicator coordinates that indicator’s scale with the latent variable’s scale. The presumption that the remainder of the model remains constant or unchanging may require discounting indirect effects that might, in the real world, be simultaneously prompted by a real unit increase. And the unit increase itself might be inconsistent with what is possible in the real world because there may be no known way to change the causal variable’s value. If a model adjusts for measurement errors, the adjustment permits interpreting latent-level effects as referring to variations in true scores.
SEM interpretations depart most radically from regression interpretations when a network of causal coefficients connects the latent variables because regressions do not contain estimates of indirect effects. SEM interpretations should convey the consequences of the patterns of indirect effects that carry effects from background variables through intervening variables to the downstream dependent variables. SEM interpretations encourage understanding how multiple worldly causal pathways can work in coordination, or independently, or even counteract one another. Direct effects may be counteracted (or reinforced) by indirect effects, or have their correlational implications counteracted (or reinforced) by the effects of common causes. The meaning and interpretation of specific estimates should be contextualized in the full model.
SE model interpretation should connect specific model causal segments to their variance and covariance implications. A single direct effect reports that the variance in the independent variable produces a specific amount of variation in the dependent variable’s values, but the causal details of precisely what makes this happens remains unspecified because a single effect coefficient does not contain sub-components available for integration into a structured story of how that effect arises. A more fine-grained SE model incorporating variables intervening between the cause and effect would be required to provide features constituting a story about how any one effect functions. Until such a model arrives each estimated direct effect retains a tinge of the unknown, thereby invoking the essence of a theory. A parallel essential unknownness would accompany each estimated coefficient in even the more fine-grained model, so the sense of fundamental mystery is never fully eradicated from SE models.
Even if each modeled effect is unknown beyond the identity of the variables involved and the estimated magnitude of the effect, the structures linking multiple modeled effects provide opportunities to express how things function to coordinate the observed variables – thereby providing useful interpretation possibilities. For example, a common cause contributes to the covariance or correlation between two effected variables, because if the value of the cause goes up, the values of both effects should also go up (assuming positive effects) even if we do not know the full story underlying each cause. (A correlation is the covariance between two variables that have both been standardized to have variance 1.0). Another interpretive contribution might be made by expressing how two causal variables can both explain variance in a dependent variable, as well as how covariance between two such causes can increase or decrease explained variance in the dependent variable. That is, interpretation may involve explaining how a pattern of effects and covariances can contribute to decreasing a dependent variable’s variance. Understanding causal implications implicitly connects to understanding “controlling”, and potentially explaining why some variables, but not others, should be controlled. As models become more complex these fundamental components can combine in non-intuitive ways, such as explaining how there can be no correlation (zero covariance) between two variables despite the variables being connected by a direct non-zero causal effect.
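The common-cause contribution to correlation described above can be illustrated by simulation. The coefficients and sample size below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                    # common cause Z
x = 0.7 * z + 0.5 * rng.normal(size=n)    # Z -> X plus independent noise
y = 0.6 * z + 0.5 * rng.normal(size=n)    # Z -> Y plus independent noise

# X and Y are correlated solely because both respond to Z: when Z
# rises, both X and Y tend to rise (positive effects), even though
# neither X nor Y causes the other.
r = np.corrcoef(x, y)[0, 1]
print(round(r, 3))  # close to the implied value 0.42/sqrt(0.74*0.61) ≈ 0.625
```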
The statistical insignificance of an effect estimate indicates the estimate could rather easily arise as a random sampling variation around a null/zero effect, so interpreting the estimate as a real effect becomes equivocal. As in regression, the proportion of each dependent variable’s variance explained by variations in the modeled causes are provided by R2, though the Blocked-Error R2 should be used if the dependent variable is involved in reciprocal or looped effects, or if it has an error variable correlated with any predictor’s error variable.
The caution appearing in the Model Assessment section warrants repeat. Interpretation should be possible whether a model is or is not consistent with the data. The estimates report how the world would appear to someone believing the model – even if that belief is unfounded because the model happens to be wrong. Interpretation should acknowledge that the model coefficients may or may not correspond to “parameters” – because the model’s coefficients may not have corresponding worldly structural features.
Adding new latent variables entering or exiting the original model at a few clear causal locations/variables contributes to detecting model misspecifications which could otherwise ruin coefficient interpretations. The correlations between the new latent’s indicators and all the original indicators contribute to testing the original model’s structure because the few new and focused effect coefficients must work in coordination with the model’s original direct and indirect effects to coordinate the new indicators with the original indicators. If the original model’s structure was problematic, the sparse new causal connections will be insufficient to coordinate the new indicators with the original indicators, thereby signaling the inappropriateness of the original model’s coefficients through model-data inconsistency. The correlational constraints grounded in null/zero effect coefficients, and coefficients assigned fixed nonzero values, contribute to both model testing and coefficient estimation, and hence deserve acknowledgment as the scaffolding supporting the estimates and their interpretation.
Interpretations become progressively more complex for models containing interactions, nonlinearities, multiple groups, multiple levels, and categorical variables. Effects touching causal loops, reciprocal effects, or correlated residuals also require slightly revised interpretations.
Careful interpretation of both failing and fitting models can provide research advancement. To be dependable, the model should investigate academically informative causal structures, fit applicable data with understandable estimates, and not include vacuous coefficients. Dependable fitting models are rarer than failing models or models inappropriately bludgeoned into fitting, but appropriately-fitting models are possible.
The multiple ways of conceptualizing PLS models complicate interpretation of PLS models. Many of the above comments are applicable if a PLS modeler adopts a realist perspective by striving to ensure their modeled indicators combine in a way that matches some existing but unavailable latent variable. Non-causal PLS models, such as those focusing primarily on R2 or out-of-sample predictive power, change the interpretation criteria by diminishing concern for whether or not the model’s coefficients have worldly counterparts. The fundamental features differentiating the five PLS modeling perspectives discussed by Rigdon, Sarstedt and Ringle point to differences in PLS modelers’ objectives, and corresponding differences in model features warranting interpretation.
Caution should be taken when making claims of causality even when experiments or time-ordered investigations have been undertaken. The term causal model must be understood to mean "a model that conveys causal assumptions", not necessarily a model that produces validated causal conclusions; maybe it does, maybe it does not. Collecting data at multiple time points and using an experimental or quasi-experimental design can help rule out certain rival hypotheses, but even randomized experiments cannot fully rule out threats to causal claims. No research design can fully guarantee causal structures.
=== Controversies and movements ===
Structural equation modeling is fraught with controversies. Researchers from the factor analytic tradition commonly attempt to reduce sets of multiple indicators to fewer, more manageable, scales or factor-scores for later use in path-structured models. This constitutes a stepwise process with the initial measurement step providing scales or factor-scores which are to be used later in a path-structured model. This stepwise approach seems obvious but actually confronts severe underlying deficiencies. The segmentation into steps interferes with thorough checking of whether the scales or factor-scores validly represent the indicators, and/or validly report on latent level effects. A structural equation model simultaneously incorporating both the measurement and latent-level structures not only checks whether the latent factors appropriately coordinate the indicators, it also checks whether each latent simultaneously and appropriately coordinates its indicators with the indicators of theorized causes and/or consequences of that latent. If a latent is unable to do both these styles of coordination, the validity of that latent is questioned, and any scale or factor-scores purporting to measure that latent are questioned. The disagreements swirled around respect for, or disrespect of, evidence challenging the validity of postulated latent factors. The simmering, sometimes boiling, discussions resulted in a special issue of the journal Structural Equation Modeling focused on a target article by Hayduk and Glaser followed by several comments and a rejoinder, all made freely available, thanks to the efforts of George Marcoulides.
These discussions fueled disagreement over whether or not structural equation models should be tested for consistency with the data, and model testing became the next focus of discussions. Scholars having path-modeling histories tended to defend careful model testing while those with factor-histories tended to defend fit-indexing rather than fit-testing. These discussions led to a target article in Personality and Individual Differences by Paul Barrett who said: “In fact, I would now recommend banning ALL such indices from ever appearing in any paper as indicative of model “acceptability” or “degree of misfit”.” (page 821). Barrett’s article was also accompanied by commentary from both perspectives.
The controversy over model testing declined as clear reporting of significant model-data inconsistency became mandatory. Scientists do not get to ignore, or fail to report, evidence just because they do not like what the evidence reports. The requirement of attending to evidence pointing toward model misspecification underpins more recent concern for addressing “endogeneity” – a style of model misspecification that interferes with estimation due to lack of independence of error/residual variables. In general, the controversy over the causal nature of structural equation models, including factor models, has also been declining. Stan Mulaik, a factor-analysis stalwart, has acknowledged the causal basis of factor models. The comments by Bollen and Pearl regarding myths about causality in the context of SEM reinforced the centrality of causal thinking in the context of SEM.
A briefer controversy focused on competing models. Comparing competing models can be very helpful but there are fundamental issues that cannot be resolved by creating two models and retaining the better fitting model. The statistical sophistication of presentations like Levy and Hancock (2007), for example, makes it easy to overlook that a researcher might begin with one terrible model and one atrocious model, and end by retaining the structurally terrible model because some index reports it as better fitting than the atrocious model. It is unfortunate that even otherwise strong SEM texts like Kline (2016) remain disturbingly weak in their presentation of model testing. Overall, the contributions that can be made by structural equation modeling depend on careful and detailed model assessment, even if a failing model happens to be the best available.
An additional controversy that touched the fringes of the previous controversies awaits ignition. Factor models and theory-embedded factor structures having multiple indicators tend to fail, and dropping weak indicators tends to reduce the model-data inconsistency. Reducing the number of indicators leads to concern for, and controversy over, the minimum number of indicators required to support a latent variable in a structural equation model. Researchers tied to factor tradition can be persuaded to reduce the number of indicators to three per latent variable, but three or even two indicators may still be inconsistent with a proposed underlying factor common cause. Hayduk and Littvay (2012) discussed how to think about, defend, and adjust for measurement error, when using only a single indicator for each modeled latent variable. Single indicators have been used effectively in SE models for a long time, but controversy remains only as far away as a reviewer who has considered measurement from only the factor analytic perspective.
Though declining, traces of these controversies are scattered throughout the SEM literature, and you can easily incite disagreement by asking: What should be done with models that are significantly inconsistent with the data? Or by asking: Does model simplicity override respect for evidence of data inconsistency? Or, what weight should be given to indexes which show close or not-so-close data fit for some models? Or, should we be especially lenient toward, and “reward”, parsimonious models that are inconsistent with the data? Or, given that the RMSEA condones disregarding some real ill fit for each model degree of freedom, doesn’t that mean that people testing models with null-hypotheses of non-zero RMSEA are doing deficient model testing? Considerable variation in statistical sophistication is required to cogently address such questions, though responses will likely center on the non-technical matter of whether or not researchers are required to report and respect evidence.
== Extensions, modeling alternatives, and statistical kin ==
Categorical dependent variables
Categorical intervening variables
Copulas
Deep Path Modelling
Exploratory Structural Equation Modeling
Fusion validity models
Item response theory models
Latent class models
Latent growth modeling
Link functions
Longitudinal models
Measurement invariance models
Mixture model
Multilevel models, hierarchical models (e.g. people nested in groups)
Multiple group modelling with or without constraints between groups (genders, cultures, test forms, languages, etc.)
Multi-method multi-trait models
Random intercepts models
Structural Equation Model Trees
Structural Equation Multidimensional scaling
== Software ==
Structural equation modeling programs differ widely in their capabilities and user requirements. Below is a table of available software.
== See also ==
Causal model – Conceptual model in philosophy of science
Graphical model – Probabilistic model
Judea Pearl
Multivariate statistics – Simultaneous observation and analysis of more than one outcome variable
Partial least squares path modeling – Method for structural equation modeling
Partial least squares regression – Statistical method
Simultaneous equations model – Type of statistical model
Causal map – A network consisting of links or arcs between nodes or factors
Bayesian Network – Statistical model
== References ==
== Bibliography ==
Hu, Li-tze; Bentler, Peter M (1999). "Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives". Structural Equation Modeling. 6: 1–55. doi:10.1080/10705519909540118. hdl:2027.42/139911.
Kaplan, D. (2008). Structural Equation Modeling: Foundations and Extensions (2nd ed.). SAGE. ISBN 978-1412916240.
Kline, Rex (2011). Principles and Practice of Structural Equation Modeling (Third ed.). Guilford. ISBN 978-1-60623-876-9.
MacCallum, Robert; Austin, James (2000). "Applications of Structural Equation Modeling in Psychological Research" (PDF). Annual Review of Psychology. 51: 201–226. doi:10.1146/annurev.psych.51.1.201. PMID 10751970. Archived from the original (PDF) on 28 January 2015. Retrieved 25 January 2015.
Quintana, Stephen M.; Maxwell, Scott E. (1999). "Implications of Recent Developments in Structural Equation Modeling for Counseling Psychology". The Counseling Psychologist. 27 (4): 485–527. doi:10.1177/0011000099274002. S2CID 145586057.
== Further reading ==
Bagozzi, Richard P; Yi, Youjae (2011). "Specification, evaluation, and interpretation of structural equation models". Journal of the Academy of Marketing Science. 40 (1): 8–34. doi:10.1007/s11747-011-0278-x. S2CID 167896719.
Bartholomew, D. J., and Knott, M. (1999) Latent Variable Models and Factor Analysis Kendall's Library of Statistics, vol. 7, Edward Arnold Publishers, ISBN 0-340-69243-X
Bentler, P.M. & Bonett, D.G. (1980), "Significance tests and goodness of fit in the analysis of covariance structures", Psychological Bulletin, 88, 588–606.
Bollen, K. A. (1989). Structural Equations with Latent Variables. Wiley, ISBN 0-471-01171-1
Byrne, B. M. (2001) Structural Equation Modeling with AMOS - Basic Concepts, Applications, and Programming.LEA, ISBN 0-8058-4104-0
Goldberger, A. S. (1972). Structural equation models in the social sciences. Econometrica 40, 979- 1001.
Haavelmo, Trygve (January 1943). "The Statistical Implications of a System of Simultaneous Equations". Econometrica. 11 (1): 1–12. doi:10.2307/1905714. JSTOR 1905714.
Hoyle, R H (ed) (1995) Structural Equation Modeling: Concepts, Issues, and Applications. SAGE, ISBN 0-8039-5318-6
Jöreskog, Karl G.; Yang, Fan (1996). "Non-linear structural equation models: The Kenny-Judd model with interaction effects". In Marcoulides, George A.; Schumacker, Randall E. (eds.). Advanced structural equation modeling: Concepts, issues, and applications. Thousand Oaks, CA: Sage Publications. pp. 57–88. ISBN 978-1-317-84380-1.
Lewis-Beck, Michael; Bryman, Alan E.; Bryman, Emeritus Professor Alan; Liao, Tim Futing (2004). "Structural Equation Modeling". The SAGE Encyclopedia of Social Science Research Methods. doi:10.4135/9781412950589.n979. hdl:2022/21973. ISBN 978-0-7619-2363-3.
Schermelleh-Engel, K.; Moosbrugger, H.; Müller, H. (2003), "Evaluating the fit of structural equation models" (PDF), Methods of Psychological Research, 8 (2): 23–74.
== External links ==
Structural equation modeling page under David Garson's StatNotes, NCSU
Issues and Opinion on Structural Equation Modeling, SEM in IS Research
The causal interpretation of structural equations (or SEM survival kit) by Judea Pearl 2000.
Structural Equation Modeling Reference List by Jason Newsom: journal articles and book chapters on structural equation models
Handbook of Management Scales, a collection of previously used multi-item scales to measure constructs for SEM | Wikipedia/Structural_equation_modelling |
Least squares is a mathematical optimization method that aims to determine the best fit function by minimizing the sum of the squares of the differences between the observed values and the predicted values of the model. The method is widely used in areas such as regression analysis, curve fitting and data modeling. The least squares method can be categorized into linear and nonlinear forms, depending on the relationship between the model parameters and the observed data. The method was first proposed by Adrien-Marie Legendre in 1805 and further developed by Carl Friedrich Gauss.
== History ==
=== Founding ===
The method of least squares grew out of the fields of astronomy and geodesy, as scientists and mathematicians sought to provide solutions to the challenges of navigating the Earth's oceans during the Age of Discovery. The accurate description of the behavior of celestial bodies was the key to enabling ships to sail in open seas, where sailors could no longer rely on land sightings for navigation.
The method was the culmination of several advances that took place during the course of the eighteenth century:
The combination of different observations as being the best estimate of the true value (errors decrease with aggregation rather than increase) first appeared in Isaac Newton's work in 1671, though it went unpublished, and again in 1700. It was perhaps first expressed formally by Roger Cotes in 1722.
The combination of different observations taken under the same conditions, as opposed to simply trying one's best to observe and record a single observation accurately. This approach was known as the method of averages. It was notably used by Newton while studying equinoxes in 1700, when he also wrote down the first of the 'normal equations' known from ordinary least squares; by Tobias Mayer while studying the librations of the Moon in 1750; and by Pierre-Simon Laplace in his work on explaining the differences in motion of Jupiter and Saturn in 1788.
The combination of different observations taken under different conditions. The method came to be known as the method of least absolute deviation. It was notably performed by Roger Joseph Boscovich in his work on the shape of the Earth in 1757 and by Pierre-Simon Laplace for the same problem in 1789 and 1799.
The development of a criterion that can be evaluated to determine when the solution with the minimum error has been achieved. Laplace tried to specify a mathematical form of the probability density for the errors and define a method of estimation that minimizes the error of estimation. For this purpose, Laplace used a symmetric two-sided exponential distribution we now call Laplace distribution to model the error distribution, and used the sum of absolute deviation as error of estimation. He felt these to be the simplest assumptions he could make, and he had hoped to obtain the arithmetic mean as the best estimate. Instead, his estimator was the posterior median.
=== The method ===
The first clear and concise exposition of the method of least squares was published by Legendre in 1805. The technique is described as an algebraic procedure for fitting linear equations to data and Legendre demonstrates the new method by analyzing the same data as Laplace for the shape of the Earth. Within ten years after Legendre's publication, the method of least squares had been adopted as a standard tool in astronomy and geodesy in France, Italy, and Prussia, which constitutes an extraordinarily rapid acceptance of a scientific technique.
In 1809 Carl Friedrich Gauss published his method of calculating the orbits of celestial bodies. In that work he claimed to have been in possession of the method of least squares since 1795, which naturally led to a priority dispute with Legendre. However, to Gauss's credit, he went beyond Legendre and succeeded in connecting the method of least squares with the principles of probability and with the normal distribution. He managed to complete Laplace's program of specifying a mathematical form of the probability density for the observations, depending on a finite number of unknown parameters, and to define a method of estimation that minimizes the error of estimation. Gauss showed that the arithmetic mean is indeed the best estimate of the location parameter by changing both the probability density and the method of estimation. He then turned the problem around by asking what form the density should have and what method of estimation should be used to get the arithmetic mean as the estimate of the location parameter. In this attempt, he invented the normal distribution.
An early demonstration of the strength of Gauss's method came when it was used to predict the future location of the newly discovered asteroid Ceres. On 1 January 1801, the Italian astronomer Giuseppe Piazzi discovered Ceres and was able to track its path for 40 days before it was lost in the glare of the Sun. Based on these data, astronomers desired to determine the location of Ceres after it emerged from behind the Sun without solving Kepler's complicated nonlinear equations of planetary motion. The only predictions that successfully allowed Hungarian astronomer Franz Xaver von Zach to relocate Ceres were those performed by the 24-year-old Gauss using least-squares analysis.
In 1810, after reading Gauss's work, Laplace, after proving the central limit theorem, used it to give a large sample justification for the method of least squares and the normal distribution. In 1822, Gauss was able to state that the least-squares approach to regression analysis is optimal in the sense that in a linear model where the errors have a mean of zero, are uncorrelated, normally distributed, and have equal variances, the best linear unbiased estimator of the coefficients is the least-squares estimator. An extended version of this result is known as the Gauss–Markov theorem.
The idea of least-squares analysis was also independently formulated by the American Robert Adrain in 1808. In the next two centuries workers in the theory of errors and in statistics found many different ways of implementing least squares.
== Problem statement ==
The objective consists of adjusting the parameters of a model function to best fit a data set. A simple data set consists of n points (data pairs) {\displaystyle (x_{i},y_{i})}, i = 1, …, n, where {\displaystyle x_{i}} is an independent variable and {\displaystyle y_{i}} is a dependent variable whose value is found by observation. The model function has the form {\displaystyle f(x,{\boldsymbol {\beta }})}, where the m adjustable parameters are held in the vector {\displaystyle {\boldsymbol {\beta }}}. The goal is to find the parameter values for the model that "best" fit the data. The fit of a model to a data point is measured by its residual, defined as the difference between the observed value of the dependent variable and the value predicted by the model:
{\displaystyle r_{i}=y_{i}-f(x_{i},{\boldsymbol {\beta }}).}
The least-squares method finds the optimal parameter values by minimizing the sum of squared residuals, {\displaystyle S}:
{\displaystyle S=\sum _{i=1}^{n}r_{i}^{2}.}
In the simplest case {\displaystyle f(x_{i},{\boldsymbol {\beta }})=\beta } and the result of the least-squares method is the arithmetic mean of the input data.
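This simplest case can be checked numerically. The sketch below (with arbitrary illustrative data) minimizes the sum of squared residuals for a constant model over a fine grid and recovers the arithmetic mean:

```python
import numpy as np

y = np.array([2.0, 3.0, 5.0, 10.0])

# Sum of squared residuals S(beta) for the constant model f(x_i, beta) = beta.
def S(beta):
    return np.sum((y - beta) ** 2)

# Minimizing S over a fine grid of candidate values recovers the arithmetic mean.
grid = np.linspace(y.min(), y.max(), 100001)
best = grid[np.argmin([S(b) for b in grid])]

print(best, y.mean())  # both are 5.0
```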
An example of a model in two dimensions is that of the straight line. Denoting the y-intercept as {\displaystyle \beta _{0}} and the slope as {\displaystyle \beta _{1}}, the model function is given by {\displaystyle f(x,{\boldsymbol {\beta }})=\beta _{0}+\beta _{1}x}. See linear least squares for a fully worked out example of this model.
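A straight-line fit of this form can be sketched as follows; the data values are assumed for illustration, and `numpy.linalg.lstsq` performs the sum-of-squared-residuals minimization:

```python
import numpy as np

# Noisy observations roughly following y = 1 + 2x (assumed illustrative data).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.0])

# Design matrix with a column of ones for the intercept beta_0.
X = np.column_stack([np.ones_like(x), x])

# np.linalg.lstsq minimizes the sum of squared residuals ||y - X beta||^2.
(beta0, beta1), *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta0, beta1)
```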
A data point may consist of more than one independent variable. For example, when fitting a plane to a set of height measurements, the plane is a function of two independent variables, x and z, say. In the most general case there may be one or more independent variables and one or more dependent variables at each data point.
To the right is a residual plot illustrating random fluctuations about {\displaystyle r_{i}=0}, indicating that a linear model {\displaystyle (Y_{i}=\beta _{0}+\beta _{1}x_{i}+U_{i})} is appropriate. {\displaystyle U_{i}} is an independent, random variable.
If the residual points had some sort of a shape and were not randomly fluctuating, a linear model would not be appropriate. For example, if the residual plot had a parabolic shape as seen to the right, a parabolic model {\displaystyle (Y_{i}=\beta _{0}+\beta _{1}x_{i}+\beta _{2}x_{i}^{2}+U_{i})} would be appropriate for the data. The residuals for a parabolic model can be calculated via {\displaystyle r_{i}=y_{i}-{\hat {\beta }}_{0}-{\hat {\beta }}_{1}x_{i}-{\hat {\beta }}_{2}x_{i}^{2}}.
== Limitations ==
This regression formulation considers only observational errors in the dependent variable (but the alternative total least squares regression can account for errors in both variables). There are two rather different contexts with different implications:
Regression for prediction. Here a model is fitted to provide a prediction rule for application in a similar situation to which the data used for fitting apply. Here the dependent variables corresponding to such future application would be subject to the same types of observation error as those in the data used for fitting. It is therefore logically consistent to use the least-squares prediction rule for such data.
Regression for fitting a "true relationship". In standard regression analysis that leads to fitting by least squares there is an implicit assumption that errors in the independent variable are zero or strictly controlled so as to be negligible. When errors in the independent variable are non-negligible, models of measurement error can be used; such methods can lead to parameter estimates, hypothesis testing and confidence intervals that take into account the presence of observation errors in the independent variables. An alternative approach is to fit a model by total least squares; this can be viewed as taking a pragmatic approach to balancing the effects of the different sources of error in formulating an objective function for use in model-fitting.
== Solving the least squares problem ==
The minimum of the sum of squares is found by setting the gradient to zero. Since the model contains m parameters, there are m gradient equations:
{\displaystyle {\frac {\partial S}{\partial \beta _{j}}}=2\sum _{i}r_{i}{\frac {\partial r_{i}}{\partial \beta _{j}}}=0,\ j=1,\ldots ,m,}
and since {\displaystyle r_{i}=y_{i}-f(x_{i},{\boldsymbol {\beta }})}, the gradient equations become
{\displaystyle -2\sum _{i}r_{i}{\frac {\partial f(x_{i},{\boldsymbol {\beta }})}{\partial \beta _{j}}}=0,\ j=1,\ldots ,m.}
The gradient equations apply to all least squares problems. Each particular problem requires particular expressions for the model and its partial derivatives.
=== Linear least squares ===
A regression model is a linear one when the model comprises a linear combination of the parameters, i.e.,
{\displaystyle f(x,{\boldsymbol {\beta }})=\sum _{j=1}^{m}\beta _{j}\phi _{j}(x),}
where the function {\displaystyle \phi _{j}} is a function of {\displaystyle x}.
Letting {\displaystyle X_{ij}=\phi _{j}(x_{i})} and putting the independent and dependent variables in matrices {\displaystyle X} and {\displaystyle Y,} respectively, we can compute the least squares in the following way. Note that {\displaystyle D} is the set of all data.
{\displaystyle L(D,{\boldsymbol {\beta }})=\left\|Y-X{\boldsymbol {\beta }}\right\|^{2}=(Y-X{\boldsymbol {\beta }})^{\mathsf {T}}(Y-X{\boldsymbol {\beta }})}
{\displaystyle =Y^{\mathsf {T}}Y-2Y^{\mathsf {T}}X{\boldsymbol {\beta }}+{\boldsymbol {\beta }}^{\mathsf {T}}X^{\mathsf {T}}X{\boldsymbol {\beta }}}
The gradient of the loss is:
{\displaystyle {\frac {\partial L(D,{\boldsymbol {\beta }})}{\partial {\boldsymbol {\beta }}}}={\frac {\partial \left(Y^{\mathsf {T}}Y-2Y^{\mathsf {T}}X{\boldsymbol {\beta }}+{\boldsymbol {\beta }}^{\mathsf {T}}X^{\mathsf {T}}X{\boldsymbol {\beta }}\right)}{\partial {\boldsymbol {\beta }}}}=-2X^{\mathsf {T}}Y+2X^{\mathsf {T}}X{\boldsymbol {\beta }}}
Setting the gradient of the loss to zero and solving for {\displaystyle {\boldsymbol {\beta }}}, we get:
{\displaystyle -2X^{\mathsf {T}}Y+2X^{\mathsf {T}}X{\boldsymbol {\beta }}=0\Rightarrow X^{\mathsf {T}}Y=X^{\mathsf {T}}X{\boldsymbol {\beta }}}
{\displaystyle {\boldsymbol {\hat {\beta }}}=\left(X^{\mathsf {T}}X\right)^{-1}X^{\mathsf {T}}Y}
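The closed-form solution above can be sketched numerically. The data and basis functions here are assumed for illustration; solving the normal equations with `np.linalg.solve` avoids forming the explicit inverse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from y = 2*phi_1(x) - 3*phi_2(x) with basis functions
# phi_1(x) = 1 and phi_2(x) = x (assumed for illustration).
x = np.linspace(0, 1, 50)
X = np.column_stack([np.ones_like(x), x])          # X_ij = phi_j(x_i)
Y = X @ np.array([2.0, -3.0]) + 0.01 * rng.standard_normal(50)

# Solve the normal equations X^T X beta = X^T Y. np.linalg.solve is preferred
# over computing (X^T X)^{-1} explicitly for numerical stability.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_hat)  # close to [2, -3]
```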
=== Non-linear least squares ===
There is, in some cases, a closed-form solution to a non-linear least squares problem – but in general there is not. In the case of no closed-form solution, numerical algorithms are used to find the value of the parameters {\displaystyle \beta } that minimizes the objective. Most algorithms involve choosing initial values for the parameters. Then, the parameters are refined iteratively, that is, the values are obtained by successive approximation:
{\displaystyle {\beta _{j}}^{k+1}={\beta _{j}}^{k}+\Delta \beta _{j},}
where a superscript k is an iteration number, and the vector of increments {\displaystyle \Delta \beta _{j}} is called the shift vector. In some commonly used algorithms, at each iteration the model may be linearized by approximation to a first-order Taylor series expansion about {\displaystyle {\boldsymbol {\beta }}^{k}}:
{\displaystyle {\begin{aligned}f(x_{i},{\boldsymbol {\beta }})&=f^{k}(x_{i},{\boldsymbol {\beta }})+\sum _{j}{\frac {\partial f(x_{i},{\boldsymbol {\beta }})}{\partial \beta _{j}}}\left(\beta _{j}-{\beta _{j}}^{k}\right)\\[1ex]&=f^{k}(x_{i},{\boldsymbol {\beta }})+\sum _{j}J_{ij}\,\Delta \beta _{j}.\end{aligned}}}
The Jacobian J is a function of constants, the independent variable and the parameters, so it changes from one iteration to the next. The residuals are given by
{\displaystyle r_{i}=y_{i}-f^{k}(x_{i},{\boldsymbol {\beta }})-\sum _{k=1}^{m}J_{ik}\,\Delta \beta _{k}=\Delta y_{i}-\sum _{j=1}^{m}J_{ij}\,\Delta \beta _{j}.}
To minimize the sum of squares of {\displaystyle r_{i}}, the gradient equation is set to zero and solved for {\displaystyle \Delta \beta _{j}}:
{\displaystyle -2\sum _{i=1}^{n}J_{ij}\left(\Delta y_{i}-\sum _{k=1}^{m}J_{ik}\,\Delta \beta _{k}\right)=0,}
which, on rearrangement, become m simultaneous linear equations, the normal equations:
{\displaystyle \sum _{i=1}^{n}\sum _{k=1}^{m}J_{ij}J_{ik}\,\Delta \beta _{k}=\sum _{i=1}^{n}J_{ij}\,\Delta y_{i}\qquad (j=1,\ldots ,m).}
The normal equations are written in matrix notation as
{\displaystyle \left(\mathbf {J} ^{\mathsf {T}}\mathbf {J} \right)\Delta {\boldsymbol {\beta }}=\mathbf {J} ^{\mathsf {T}}\Delta \mathbf {y} .}
These are the defining equations of the Gauss–Newton algorithm.
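The iteration above can be sketched for a hypothetical exponential model (the model and data below are assumed for illustration, not taken from the text). Each step linearizes the model, solves the normal equations for the shift vector, and updates the parameters:

```python
import numpy as np

# Gauss-Newton sketch for the model f(x, beta) = beta1 * exp(beta2 * x)
# (a hypothetical model chosen for illustration), on noise-free data.
x = np.linspace(0.0, 1.0, 20)
true = np.array([2.0, -1.5])
y = true[0] * np.exp(true[1] * x)

def f(x, b):
    return b[0] * np.exp(b[1] * x)

def jacobian(x, b):
    # Columns are df/dbeta1 and df/dbeta2 evaluated at the current iterate.
    return np.column_stack([np.exp(b[1] * x), b[0] * x * np.exp(b[1] * x)])

beta = np.array([1.0, -1.0])          # initial values are required
for _ in range(20):
    J = jacobian(x, beta)
    dy = y - f(x, beta)               # residual vector, Delta y
    # Normal equations (J^T J) Delta beta = J^T Delta y
    dbeta = np.linalg.solve(J.T @ J, J.T @ dy)
    beta = beta + dbeta

print(beta)  # converges to [2.0, -1.5] on this noise-free data
```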
=== Differences between linear and nonlinear least squares ===
The model function, f, in LLSQ (linear least squares) is a linear combination of parameters of the form
{\displaystyle f=X_{i1}\beta _{1}+X_{i2}\beta _{2}+\cdots }
The model may represent a straight line, a parabola or any other linear combination of functions. In NLLSQ (nonlinear least squares) the parameters appear as functions, such as {\displaystyle \beta ^{2},e^{\beta x}} and so forth. If the derivatives {\displaystyle \partial f/\partial \beta _{j}} are either constant or depend only on the values of the independent variable, the model is linear in the parameters. Otherwise, the model is nonlinear.
Initial values for the parameters are needed to find the solution to a NLLSQ problem; LLSQ does not require them.
Solution algorithms for NLLSQ often require that the Jacobian can be calculated, similarly to LLSQ. Analytical expressions for the partial derivatives can be complicated. If analytical expressions are impossible to obtain, either the partial derivatives must be calculated by numerical approximation or an estimate must be made of the Jacobian, often via finite differences.
Non-convergence (failure of the algorithm to find a minimum) is a common phenomenon in NLLSQ.
LLSQ is globally convex, so non-convergence is not an issue.
Solving NLLSQ is usually an iterative process which has to be terminated when a convergence criterion is satisfied. LLSQ solutions can be computed using direct methods, although problems with large numbers of parameters are typically solved with iterative methods, such as the Gauss–Seidel method.
In LLSQ the solution is unique, but in NLLSQ there may be multiple minima in the sum of squares.
Under the condition that the errors are uncorrelated with the predictor variables, LLSQ yields unbiased estimates, but even under that condition NLLSQ estimates are generally biased.
These differences must be considered whenever the solution to a nonlinear least squares problem is being sought.
== Example ==
Consider a simple example drawn from physics. A spring should obey Hooke's law, which states that the extension of a spring y is proportional to the force, F, applied to it.
{\displaystyle y=f(F,k)=kF}
constitutes the model, where F is the independent variable. In order to estimate the force constant, k, we conduct a series of n measurements with different forces to produce a set of data, {\displaystyle (F_{i},y_{i}),\ i=1,\dots ,n}, where yi is a measured spring extension. Each experimental observation will contain some error, {\displaystyle \varepsilon }, and so we may specify an empirical model for our observations,
{\displaystyle y_{i}=kF_{i}+\varepsilon _{i}.}
There are many methods we might use to estimate the unknown parameter k. Since the n equations in our data comprise an overdetermined system with one unknown, we estimate k using least squares. The sum of squares to be minimized is
{\displaystyle S=\sum _{i=1}^{n}\left(y_{i}-kF_{i}\right)^{2}.}
The least squares estimate of the force constant, k, is given by
{\displaystyle {\hat {k}}={\frac {\sum _{i}F_{i}y_{i}}{\sum _{i}F_{i}^{2}}}.}
We assume that applying force causes the spring to expand. After having derived the force constant by least squares fitting, we predict the extension from Hooke's law.
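The estimate of k and the subsequent prediction can be computed directly from the formula above; the force and extension values here are made-up illustrative numbers:

```python
import numpy as np

# Forces applied (N) and measured extensions (m); illustrative numbers only.
F = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.11, 0.19, 0.31, 0.40, 0.49])

# Least squares estimate: k_hat = sum(F_i * y_i) / sum(F_i^2)
k_hat = np.sum(F * y) / np.sum(F ** 2)
print(k_hat)

# Predict the extension for a new force from the fitted Hooke's-law model.
print(k_hat * 6.0)
```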
== Uncertainty quantification ==
In a least squares calculation with unit weights, or in linear regression, the variance on the jth parameter, denoted {\displaystyle \operatorname {var} ({\hat {\beta }}_{j})}, is usually estimated with
{\displaystyle \operatorname {var} ({\hat {\beta }}_{j})=\sigma ^{2}\left(\left[X^{\mathsf {T}}X\right]^{-1}\right)_{jj}\approx {\hat {\sigma }}^{2}C_{jj},}
{\displaystyle {\hat {\sigma }}^{2}\approx {\frac {S}{n-m}}}
{\displaystyle C=\left(X^{\mathsf {T}}X\right)^{-1},}
where the true error variance σ2 is replaced by an estimate, the reduced chi-squared statistic, based on the minimized value of the residual sum of squares (objective function), S. The denominator, n − m, is the statistical degrees of freedom; see effective degrees of freedom for generalizations. C is the covariance matrix.
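These variance estimates can be sketched for a straight-line fit; the data are simulated for illustration with true noise standard deviation 0.3:

```python
import numpy as np

rng = np.random.default_rng(1)

# Straight-line fit with n = 100 observations and m = 2 parameters
# (synthetic data assumed for illustration; true sigma = 0.3).
n, m = 100, 2
x = np.linspace(0, 10, n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 0.5]) + 0.3 * rng.standard_normal(n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
S = np.sum((y - X @ beta_hat) ** 2)        # minimized residual sum of squares

sigma2_hat = S / (n - m)                   # estimate of sigma^2, with n - m degrees of freedom
C = np.linalg.inv(X.T @ X)                 # covariance factor C = (X^T X)^{-1}
var_beta = sigma2_hat * np.diag(C)         # var(beta_j) ~= sigma2_hat * C_jj

print(var_beta)
```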
== Statistical testing ==
If the probability distribution of the parameters is known or an asymptotic approximation is made, confidence limits can be found. Similarly, statistical tests on the residuals can be conducted if the probability distribution of the residuals is known or assumed. We can derive the probability distribution of any linear combination of the dependent variables if the probability distribution of experimental errors is known or assumed. Inference is straightforward when the errors are assumed to follow a normal distribution, which implies that the parameter estimates and residuals will also be normally distributed conditional on the values of the independent variables.
It is necessary to make assumptions about the nature of the experimental errors to test the results statistically. A common assumption is that the errors belong to a normal distribution. The central limit theorem supports the idea that this is a good approximation in many cases.
The Gauss–Markov theorem. In a linear model in which the errors have expectation zero conditional on the independent variables, are uncorrelated and have equal variances, the best linear unbiased estimator of any linear combination of the observations, is its least-squares estimator. "Best" means that the least squares estimators of the parameters have minimum variance. The assumption of equal variance is valid when the errors all belong to the same distribution.
If the errors belong to a normal distribution, the least-squares estimators are also the maximum likelihood estimators in a linear model.
However, suppose the errors are not normally distributed. In that case, a central limit theorem often nonetheless implies that the parameter estimates will be approximately normally distributed so long as the sample is reasonably large. For this reason, given the important property that the error mean is independent of the independent variables, the distribution of the error term is not an important issue in regression analysis. Specifically, it is not typically important whether the error term follows a normal distribution.
== Weighted least squares ==
A special case of generalized least squares called weighted least squares occurs when all the off-diagonal entries of Ω (the correlation matrix of the residuals) are null; the variances of the observations (along the covariance matrix diagonal) may still be unequal (heteroscedasticity). In simpler terms, heteroscedasticity is when the variance of {\displaystyle Y_{i}} depends on the value of {\displaystyle x_{i}}, which causes the residual plot to create a "fanning out" effect towards larger {\displaystyle Y_{i}} values, as seen in the residual plot to the right. On the other hand, homoscedasticity is assuming that the variance of {\displaystyle Y_{i}} and the variance of {\displaystyle U_{i}} are equal.
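With a diagonal weight matrix W whose entries are the reciprocal variances, the weighted estimator is (XᵀWX)⁻¹XᵀWy. A minimal sketch on simulated heteroscedastic data (setup assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Heteroscedastic data: the noise standard deviation grows with x
# (synthetic setup assumed for illustration).
n = 200
x = np.linspace(1, 10, n)
X = np.column_stack([np.ones(n), x])
sigma = 0.1 * x                             # unequal variances
y = X @ np.array([2.0, 1.0]) + sigma * rng.standard_normal(n)

# Weighted least squares with W = diag(1 / sigma_i^2):
# beta_hat = (X^T W X)^{-1} X^T W y
W = np.diag(1.0 / sigma ** 2)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta_wls)  # close to [2, 1]
```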
== Relationship to principal components ==
The first principal component about the mean of a set of points can be represented by that line which most closely approaches the data points (as measured by squared distance of closest approach, i.e. perpendicular to the line). In contrast, linear least squares tries to minimize the distance in the
y
{\displaystyle y}
direction only. Thus, although the two use a similar error metric, linear least squares is a method that treats one dimension of the data preferentially, while PCA treats all dimensions equally.
== Relationship to measure theory ==
Notable statistician Sara van de Geer used empirical process theory and the Vapnik–Chervonenkis dimension to prove a least-squares estimator can be interpreted as a measure on the space of square-integrable functions.
== Regularization ==
=== Tikhonov regularization ===
In some contexts, a regularized version of the least squares solution may be preferable. Tikhonov regularization (or ridge regression) adds a constraint that {\displaystyle \left\|\beta \right\|_{2}^{2}}, the squared {\displaystyle \ell _{2}}-norm of the parameter vector, is not greater than a given value to the least squares formulation, leading to a constrained minimization problem. This is equivalent to the unconstrained minimization problem where the objective function is the residual sum of squares plus a penalty term {\displaystyle \alpha \left\|\beta \right\|_{2}^{2}} and {\displaystyle \alpha } is a tuning parameter (this is the Lagrangian form of the constrained minimization problem).
In a Bayesian context, this is equivalent to placing a zero-mean normally distributed prior on the parameter vector.
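The penalized objective has the closed-form minimizer (XᵀX + αI)⁻¹Xᵀy. A sketch on a deliberately ill-conditioned design (synthetic data assumed for illustration) shows how the penalty stabilizes the solution:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ill-conditioned design: two nearly collinear predictors
# (synthetic setup assumed for illustration).
n = 100
x1 = rng.standard_normal(n)
x2 = x1 + 1e-3 * rng.standard_normal(n)     # almost a copy of x1
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.standard_normal(n)

def ridge(X, y, alpha):
    # Tikhonov-regularized solution: (X^T X + alpha I)^{-1} X^T y
    m = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(m), X.T @ y)

# Unregularized coefficients can be far from (1, 1) due to collinearity;
# the ridge solution keeps them moderate.
print(ridge(X, y, 0.0))
print(ridge(X, y, 1.0))
```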
=== Lasso method ===
An alternative regularized version of least squares is Lasso (least absolute shrinkage and selection operator), which uses the constraint that {\displaystyle \|\beta \|_{1}}, the L1-norm of the parameter vector, is no greater than a given value. (One can show, as above using Lagrange multipliers, that this is equivalent to an unconstrained minimization of the least-squares penalty with {\displaystyle \alpha \|\beta \|_{1}} added.) In a Bayesian context, this is equivalent to placing a zero-mean Laplace prior distribution on the parameter vector. The optimization problem may be solved using quadratic programming or more general convex optimization methods, as well as by specific algorithms such as the least angle regression algorithm.
One of the prime differences between Lasso and ridge regression is that in ridge regression, as the penalty is increased, all parameters are reduced while still remaining non-zero, while in Lasso, increasing the penalty will cause more and more of the parameters to be driven to zero. This is an advantage of Lasso over ridge regression, as driving parameters to zero deselects the features from the regression. Thus, Lasso automatically selects more relevant features and discards the others, whereas ridge regression never fully discards any features. Some feature selection techniques have been developed based on the LASSO, including Bolasso, which bootstraps samples, and FeaLect, which analyzes the regression coefficients corresponding to different values of {\displaystyle \alpha } to score all the features.
The L1-regularized formulation is useful in some contexts due to its tendency to prefer solutions where more parameters are zero, which gives solutions that depend on fewer variables. For this reason, the Lasso and its variants are fundamental to the field of compressed sensing. An extension of this approach is elastic net regularization.
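The sparsity-inducing behavior of the L1 penalty can be sketched with a simple proximal-gradient (ISTA) solver, one of the convex-optimization approaches mentioned above; the sparse data-generating setup is assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Sparse recovery sketch: only the first of five predictors matters
# (synthetic setup assumed for illustration).
n, m = 100, 5
X = rng.standard_normal((n, m))
y = X @ np.array([3.0, 0.0, 0.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(n)

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, alpha, iters=5000):
    # Proximal gradient (ISTA) for (1/2)||y - X beta||^2 + alpha * ||beta||_1.
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ beta - y)
        beta = soft_threshold(beta - grad / L, alpha / L)
    return beta

beta_lasso = lasso_ista(X, y, alpha=20.0)
print(beta_lasso)  # only the first coefficient is clearly non-zero
```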
== See also ==
== References ==
== Further reading ==
Björck, Å. (1996). Numerical Methods for Least Squares Problems. SIAM. ISBN 978-0-89871-360-2.
Kariya, T.; Kurata, H. (2004). Generalized Least Squares. Hoboken: Wiley. ISBN 978-0-470-86697-9.
Luenberger, D. G. (1997) [1969]. "Least-Squares Estimation". Optimization by Vector Space Methods. New York: John Wiley & Sons. pp. 78–102. ISBN 978-0-471-18117-0.
Rao, C. R.; Toutenburg, H.; et al. (2008). Linear Models: Least Squares and Alternatives. Springer Series in Statistics (3rd ed.). Berlin: Springer. ISBN 978-3-540-74226-5.
Van de moortel, Koen (April 2021). "Multidirectional regression analysis".
Wolberg, J. (2005). Data Analysis Using the Method of Least Squares: Extracting the Most Information from Experiments. Berlin: Springer. ISBN 978-3-540-25674-8.
== External links ==
Media related to Least squares at Wikimedia Commons | Wikipedia/Method_of_least_squares |
Multilevel models are statistical models of parameters that vary at more than one level. An example could be a model of student performance that contains measures for individual students as well as measures for classrooms within which the students are grouped. These models can be seen as generalizations of linear models (in particular, linear regression), although they can also extend to non-linear models. These models became much more popular after sufficient computing power and software became available.
Multilevel models are particularly appropriate for research designs where data for participants are organized at more than one level (i.e., nested data). The units of analysis are usually individuals (at a lower level) who are nested within contextual/aggregate units (at a higher level). While the lowest level of data in multilevel models is usually an individual, repeated measurements of individuals may also be examined. As such, multilevel models provide an alternative type of analysis for univariate or multivariate analysis of repeated measures. Individual differences in growth curves may be examined. Furthermore, multilevel models can be used as an alternative to ANCOVA, where scores on the dependent variable are adjusted for covariates (e.g. individual differences) before testing treatment differences. Multilevel models are able to analyze these experiments without the assumption of homogeneity of regression slopes that is required by ANCOVA.
Multilevel models can be used on data with many levels, although 2-level models are the most common and the rest of this article deals only with these. The dependent variable must be examined at the lowest level of analysis.
== Level 1 regression equation ==
When there is a single level 1 independent variable, the level 1 model is
{\displaystyle Y_{ij}=\beta _{0j}+\beta _{1j}X_{ij}+e_{ij}}.
{\displaystyle Y_{ij}} refers to the score on the dependent variable for an individual observation at Level 1 (subscript i refers to the individual case, subscript j refers to the group).
{\displaystyle X_{ij}} refers to the Level 1 predictor.
{\displaystyle \beta _{0j}} refers to the intercept of the dependent variable for group j.
{\displaystyle \beta _{1j}} refers to the slope for the relationship in group j (Level 2) between the Level 1 predictor and the dependent variable.
{\displaystyle e_{ij}} refers to the random errors of prediction for the Level 1 equation (it is also sometimes referred to as {\displaystyle r_{ij}}), with {\displaystyle e_{ij}\sim {\mathcal {N}}(0,\sigma _{1}^{2})}.
At Level 1, both the intercepts and slopes in the groups can be either fixed (meaning that all groups have the same values, although in the real world this would be a rare occurrence), non-randomly varying (meaning that the intercepts and/or slopes are predictable from an independent variable at Level 2), or randomly varying (meaning that the intercepts and/or slopes are different in the different groups, and that each have their own overall mean and variance).
When there are multiple level 1 independent variables, the model can be expanded by substituting vectors and matrices in the equation.
When the relationship between the response
Y
i
j
{\displaystyle Y_{ij}}
and predictor
X
i
j
{\displaystyle X_{ij}}
can not be described by the linear relationship, then one can find some non linear functional relationship between the response and predictor, and extend the model to nonlinear mixed-effects model. For example, when the response
Y
i
j
{\displaystyle Y_{ij}}
is the cumulative infection trajectory of the
i
{\displaystyle i}
-th country, and
X
i
j
{\displaystyle X_{ij}}
represents the
j
{\displaystyle j}
-th time points, then the ordered pair
{\displaystyle (X_{ij},Y_{ij})}
for each country may show a shape similar to the logistic function.
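Under that assumption, each country's cumulative trajectory could be sketched with a logistic curve; the carrying capacity, growth rate, and midpoint below are illustrative parameters only.

```python
import numpy as np

def logistic(t, K, r, t0):
    """Logistic curve: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# A hypothetical cumulative infection trajectory observed at 11 time points.
t = np.linspace(0, 100, 11)
y = logistic(t, K=10000, r=0.15, t0=50)  # y is monotone and equals K/2 at t0
```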
== Level 2 regression equation ==
The dependent variables are the intercepts and the slopes for the independent variables at Level 1 in the groups of Level 2.
{\displaystyle u_{0j}\sim {\mathcal {N}}(0,\sigma _{2}^{2})}
{\displaystyle u_{1j}\sim {\mathcal {N}}(0,\sigma _{3}^{2})}
{\displaystyle \beta _{0j}=\gamma _{00}+\gamma _{01}w_{j}+u_{0j}}
{\displaystyle \beta _{1j}=\gamma _{10}+\gamma _{11}w_{j}+u_{1j}}
{\displaystyle \gamma _{00}}
refers to the overall intercept. This is the grand mean of the scores on the dependent variable across all the groups when all the predictors are equal to 0.
{\displaystyle \gamma _{10}}
refers to the average slope between the dependent variable and the Level 1 predictor.
{\displaystyle w_{j}}
refers to the Level 2 predictor.
{\displaystyle \gamma _{01}}
and
{\displaystyle \gamma _{11}}
refer to the effect of the Level 2 predictor on the Level 1 intercept and slope respectively.
{\displaystyle u_{0j}}
refers to the deviation in group j from the overall intercept.
{\displaystyle u_{1j}}
refers to the deviation in group j from the average slope between the dependent variable and the Level 1 predictor.
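Substituting the Level 2 equations into the Level 1 equation gives a complete generative model, which can be sketched in a short simulation; all gamma values and variance components below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fixed effects (gammas) and variance components.
g00, g01 = 10.0, 1.5   # intercept equation: beta_0j = g00 + g01*w_j + u_0j
g10, g11 = 0.5, -0.3   # slope equation:     beta_1j = g10 + g11*w_j + u_1j
sd_u0, sd_u1, sd_e = 2.0, 0.2, 1.0

n_groups, n_per_group = 50, 30
w = rng.normal(size=n_groups)                # level-2 predictor w_j

u0 = rng.normal(scale=sd_u0, size=n_groups)  # u_0j ~ N(0, sigma_2^2)
u1 = rng.normal(scale=sd_u1, size=n_groups)  # u_1j ~ N(0, sigma_3^2)

beta0 = g00 + g01 * w + u0                   # Level 2 intercept equation
beta1 = g10 + g11 * w + u1                   # Level 2 slope equation

X = rng.normal(size=(n_groups, n_per_group))
e = rng.normal(scale=sd_e, size=(n_groups, n_per_group))
Y = beta0[:, None] + beta1[:, None] * X + e  # substituted Level 1 model
```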
== Types of models ==
Before conducting a multilevel model analysis, a researcher must make several decisions. First, which predictors, if any, are to be included in the analysis. Second, whether parameter values (i.e., the elements that will be estimated) will be fixed or random. Fixed parameters are composed of a constant over all the groups, whereas a random parameter has a different value for each of the groups. Additionally, the researcher must decide whether to employ maximum likelihood estimation or restricted maximum likelihood estimation.
=== Random intercepts model ===
A random intercepts model is a model in which intercepts are allowed to vary, and therefore, the scores on the dependent variable for each individual observation are predicted by the intercept that varies across groups. This model assumes that slopes are fixed (the same across different contexts). In addition, this model provides information about intraclass correlations, which are helpful in determining whether multilevel models are required in the first place.
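The intraclass correlation mentioned here is the share of total variance that lies between groups; a minimal sketch, using hypothetical variance components rather than fitted ones:

```python
def intraclass_correlation(var_between, var_within):
    """ICC = between-group variance / total variance.

    A large ICC suggests observations within a group resemble each other
    and a multilevel model is warranted; near 0, ordinary single-level
    regression may suffice."""
    return var_between / (var_between + var_within)

# Hypothetical variance components from a random intercepts model.
icc = intraclass_correlation(var_between=4.0, var_within=12.0)  # -> 0.25
```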
=== Random slopes model ===
A random slopes model is a model in which slopes are allowed to vary according to a correlation matrix, and therefore, the slopes differ across a grouping variable such as time or individuals. This model assumes that intercepts are fixed (the same across different contexts).
=== Random intercepts and slopes model ===
A model that includes both random intercepts and random slopes is likely the most realistic type of model, although it is also the most complex. In this model, both intercepts and slopes are allowed to vary across groups, meaning that they are different in different contexts.
=== Developing a multilevel model ===
In order to conduct a multilevel model analysis, one would start with fixed coefficients (slopes and intercepts). One aspect would be allowed to vary at a time (that is, would be changed), and compared with the previous model in order to assess better model fit. There are three different questions that a researcher would ask in assessing a model. First, is it a good model? Second, is a more complex model better? Third, what contribution do individual predictors make to the model?
In order to assess models, different model fit statistics would be examined. One such statistic is the chi-square likelihood-ratio test, which assesses the difference between models. The likelihood-ratio test can be employed for model building in general, for examining what happens when effects in a model are allowed to vary, and when testing a dummy-coded categorical variable as a single effect. However, the test can only be used when models are nested (meaning that a more complex model includes all of the effects of a simpler model). When testing non-nested models, comparisons between models can be made using the Akaike information criterion (AIC) or the Bayesian information criterion (BIC), among others. See further Model selection.
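These comparison statistics can be computed directly from the fitted log-likelihoods; the sketch below uses hypothetical log-likelihood values and, for simplicity, assumes the models differ by a single parameter.

```python
import math

def lrt_statistic(ll_simple, ll_complex):
    """Chi-square likelihood-ratio statistic for nested models
    (the complex model must include all effects of the simple one)."""
    return 2.0 * (ll_complex - ll_simple)

def aic(ll, k):
    """Akaike information criterion: lower is better; also valid for
    comparing non-nested models."""
    return 2 * k - 2 * ll

def bic(ll, k, n):
    """Bayesian information criterion: penalizes parameters more
    heavily than AIC as the sample size n grows."""
    return k * math.log(n) - 2 * ll

# Hypothetical fit: a random-intercept model vs. one extra parameter.
stat = lrt_statistic(ll_simple=-512.3, ll_complex=-508.1)  # = 8.4
p_value = math.erfc(math.sqrt(stat / 2))  # chi-square sf with df = 1
```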
== Assumptions ==
Multilevel models have the same assumptions as other major general linear models (e.g., ANOVA, regression), but some of the assumptions are modified for the hierarchical nature of the design (i.e., nested data).
Linearity
The assumption of linearity states that there is a rectilinear (straight-line, as opposed to non-linear or U-shaped) relationship between variables. However, the model can be extended to nonlinear relationships. Particularly, when the mean part of the level 1 regression equation is replaced with a non-linear parametric function, then such a model framework is widely called the nonlinear mixed-effects model.
Normality
The assumption of normality states that the error terms at every level of the model are normally distributed. However, most statistical software allows one to specify different distributions for the variance terms, such as Poisson, binomial, or logistic distributions. The multilevel modelling approach can be used for all forms of generalized linear models.
Homoscedasticity
The assumption of homoscedasticity, also known as homogeneity of variance, assumes equality of population variances. However, a different variance-correlation matrix can be specified to account for this, and the heterogeneity of variance can itself be modeled.
Independence of observations (No Autocorrelation of Model's Residuals)
Independence is an assumption of general linear models, which states that cases are random samples from the population and that scores on the dependent variable are independent of each other. One of the main purposes of multilevel models is to deal with cases where the assumption of independence is violated; multilevel models do, however, assume that 1) the level 1 and level 2 residuals are uncorrelated and 2) The errors (as measured by the residuals) at the highest level are uncorrelated.
Orthogonality of regressors to random effects
The regressors must not correlate with the random effects,
{\displaystyle u_{0j}}
. This assumption is testable but often ignored, rendering the estimator inconsistent. If this assumption is violated, the random effect must be modeled explicitly in the fixed part of the model, either by using dummy variables or by including cluster means of all
{\displaystyle X_{ij}}
regressors. This assumption is probably the most important assumption the estimator makes, but one that is misunderstood by most applied researchers using these types of models.
== Statistical tests ==
The type of statistical test employed in multilevel models depends on whether one is examining fixed effects or variance components. When examining fixed effects, the estimate of the fixed effect is divided by its standard error, which results in a Z-test. A t-test can also be computed. When computing a t-test, it is important to keep in mind the degrees of freedom, which depend on the level of the predictor (e.g., a level 1 predictor or a level 2 predictor). For a level 1 predictor, the degrees of freedom are based on the number of level 1 predictors, the number of groups, and the number of individual observations. For a level 2 predictor, the degrees of freedom are based on the number of level 2 predictors and the number of groups.
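A minimal sketch of the Wald z-test for a fixed effect, using hypothetical estimate and standard-error values:

```python
import math

def z_test(estimate, std_error):
    """Wald z statistic for a fixed effect and its two-sided p-value
    under the standard normal reference distribution."""
    z = estimate / std_error
    p = math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))
    return z, p

# Hypothetical fixed-effect estimate and standard error.
z, p = z_test(estimate=0.42, std_error=0.15)  # z = 2.8
```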
== Statistical power ==
Statistical power for multilevel models differs depending on whether it is level 1 or level 2 effects that are being examined. Power for level 1 effects is dependent upon the number of individual observations, whereas the power for level 2 effects is dependent upon the number of groups. To conduct research with sufficient power, large sample sizes are required in multilevel models. However, the number of individual observations in groups is not as important as the number of groups in a study. In order to detect cross-level interactions, given that the group sizes are not too small, recommendations have been made that at least 20 groups are needed, although many fewer can be used if one is only interested in inference on the fixed effects and the random effects are control, or "nuisance", variables. The issue of statistical power in multilevel models is complicated by the fact that power varies as a function of effect size and intraclass correlations, it differs for fixed effects versus random effects, and it changes depending on the number of groups and the number of individual observations per group.
== Applications ==
=== Level ===
The concept of level is the keystone of this approach. In an educational research example, the levels for a 2-level model might be
pupil
class
However, if one were studying multiple schools and multiple school districts, a 4-level model could include
pupil
class
school
district
The researcher must establish for each variable the level at which it was measured. In this example "test score" might be measured at pupil level, "teacher experience" at class level, "school funding" at school level, and "urban" at district level.
=== Example ===
As a simple example, consider a basic linear regression model that predicts income as a function of age, class, gender and race. It might then be observed that income levels also vary depending on the city and state of residence. A simple way to incorporate this into the regression model would be to add an additional independent categorical variable to account for the location (i.e. a set of additional binary predictors and associated regression coefficients, one per location). This would have the effect of shifting the mean income up or down—but it would still assume, for example, that the effect of race and gender on income is the same everywhere. In reality, this is unlikely to be the case—different local laws, different retirement policies, differences in level of racial prejudice, etc. are likely to cause all of the predictors to have different sorts of effects in different locales.
In other words, a simple linear regression model might, for example, predict that a given randomly sampled person in Seattle would have an average yearly income $10,000 higher than a similar person in Mobile, Alabama. However, it would also predict, for example, that a white person might have an average income $7,000 above a black person, and a 65-year-old might have an income $3,000 below a 45-year-old, in both cases regardless of location. A multilevel model, however, would allow for different regression coefficients for each predictor in each location. Essentially, it would assume that people in a given location have correlated incomes generated by a single set of regression coefficients, whereas people in another location have incomes generated by a different set of coefficients. Meanwhile, the coefficients themselves are assumed to be correlated and generated from a single set of hyperparameters. Additional levels are possible: For example, people might be grouped by cities, and the city-level regression coefficients grouped by state, and the state-level coefficients generated from a single hyper-hyperparameter.
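This generative story, per-location coefficients drawn from shared hyperparameters, can be sketched in a few lines; every number below is illustrative (not an estimate from real income data), and the covariates are reduced to age and a binary group indicator.

```python
import numpy as np

rng = np.random.default_rng(2)

n_cities, n_people = 20, 200

# Hyperparameters: each city's coefficients come from common distributions.
base = rng.normal(loc=40000, scale=8000, size=n_cities)     # per-city intercept
age_slope = rng.normal(loc=150, scale=50, size=n_cities)    # per-city age effect
group_gap = rng.normal(loc=7000, scale=2000, size=n_cities) # per-city group effect

age = rng.integers(20, 70, size=(n_cities, n_people))
group = rng.integers(0, 2, size=(n_cities, n_people))       # binary indicator
noise = rng.normal(scale=5000, size=(n_cities, n_people))

# People within one city share that city's coefficients; cities differ,
# but their coefficients are tied together through the hyperparameters.
income = base[:, None] + age_slope[:, None] * age + group_gap[:, None] * group + noise
```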
Multilevel models are a subclass of hierarchical Bayesian models, which are general models with multiple levels of random variables and arbitrary relationships among the different variables. Multilevel analysis has been extended to include multilevel structural equation modeling, multilevel latent class modeling, and other more general models.
=== Uses ===
Multilevel models have been used in education research or geographical research, to estimate separately the variance between pupils within the same school, and the variance between schools. In psychological applications, the multiple levels are items in an instrument, individuals, and families. In sociological applications, multilevel models are used to examine individuals embedded within regions or countries. In organizational psychology research, data from individuals must often be nested within teams or other functional units. They are often used in ecological research as well under the more general term mixed models.
Different covariables may be relevant on different levels. They can be used for longitudinal studies, as with growth studies, to separate changes within one individual and differences between individuals.
Cross-level interactions may also be of substantive interest; for example, when a slope is allowed to vary randomly, a level-2 predictor may be included in the slope formula for the level-1 covariate. For example, one may estimate the interaction of race and neighborhood to obtain an estimate of the interaction between an individual's characteristics and the social context.
=== Applications to longitudinal (repeated measures) data ===
== Alternative ways of analyzing hierarchical data ==
There are several alternative ways of analyzing hierarchical data, although most of them have some problems. First, traditional statistical techniques can be used. One could disaggregate higher-order variables to the individual level, and thus conduct an analysis on this individual level (for example, assign class variables to the individual level). The problem with this approach is that it would violate the assumption of independence, and thus could bias our results. This is known as atomistic fallacy. Another way to analyze the data using traditional statistical approaches is to aggregate individual level variables to higher-order variables and then to conduct an analysis on this higher level. The problem with this approach is that it discards all within-group information (because it takes the average of the individual level variables). As much as 80–90% of the variance could be wasted, and the relationship between aggregated variables is inflated, and thus distorted. This is known as ecological fallacy, and statistically, this type of analysis results in decreased power in addition to the loss of information.
Another way to analyze hierarchical data would be through a random-coefficients model. This model assumes that each group has a different regression model, with its own intercept and slope. Because groups are sampled, the model assumes that the intercepts and slopes are also randomly sampled from a population of group intercepts and slopes. This allows for an analysis in which one can assume that slopes are fixed but intercepts are allowed to vary. However, this presents a problem: individual components are independent, whereas group components are independent between groups but dependent within groups. This also allows for an analysis in which the slopes are random; however, the correlations of the error terms (disturbances) are then dependent on the values of the individual-level variables. Thus, the problem with using a random-coefficients model to analyze hierarchical data is that it is still not possible to incorporate higher-order variables.
== Error terms ==
Multilevel models have two error terms, which are also known as disturbances. The individual components are all independent, but there are also group components, which are independent between groups but correlated within groups. However, variance components can differ, as some groups are more homogeneous than others.
== Bayesian nonlinear mixed-effects model ==
Multilevel modeling is frequently used in diverse applications and can be formulated within the Bayesian framework. In particular, Bayesian nonlinear mixed-effects models have recently received significant attention. A basic version of the Bayesian nonlinear mixed-effects model is represented in the following three stages:
Stage 1: Individual-Level Model
{\displaystyle {\begin{aligned}&y_{ij}=f(t_{ij};\theta _{1i},\theta _{2i},\ldots ,\theta _{li},\ldots ,\theta _{Ki})+\epsilon _{ij},\\&\epsilon _{ij}\sim N(0,\sigma ^{2}),\\&i=1,\ldots ,N,\,j=1,\ldots ,M_{i}.\end{aligned}}}
Stage 2: Population Model
{\displaystyle {\begin{aligned}&\theta _{li}=\alpha _{l}+\sum _{b=1}^{P}\beta _{lb}x_{ib}+\eta _{li},\\&\eta _{li}\sim N(0,\omega _{l}^{2}),\\&i=1,\ldots ,N,\,l=1,\ldots ,K.\end{aligned}}}
Stage 3: Prior
{\displaystyle {\begin{aligned}&\sigma ^{2}\sim \pi (\sigma ^{2}),\\&\alpha _{l}\sim \pi (\alpha _{l}),\\&(\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP})\sim \pi (\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP}),\\&\omega _{l}^{2}\sim \pi (\omega _{l}^{2}),\\&l=1,\ldots ,K.\end{aligned}}}
Here,
{\displaystyle y_{ij}}
denotes the continuous response of the
{\displaystyle i}
-th subject at the time point
{\displaystyle t_{ij}}
, and
{\displaystyle x_{ib}}
is the
{\displaystyle b}
-th covariate of the
{\displaystyle i}
-th subject. Parameters involved in the model are written in Greek letters.
{\displaystyle f(t;\theta _{1},\ldots ,\theta _{K})}
is a known function parameterized by the
{\displaystyle K}
-dimensional vector
{\displaystyle (\theta _{1},\ldots ,\theta _{K})}
. Typically,
{\displaystyle f}
is a 'nonlinear' function and describes the temporal trajectory of individuals. In the model,
{\displaystyle \epsilon _{ij}}
and
{\displaystyle \eta _{li}}
describe within-individual variability and between-individual variability, respectively. If Stage 3: Prior is not considered, then the model reduces to a frequentist nonlinear mixed-effect model.
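Stages 1 and 2 describe a generative process that can be simulated directly. The sketch below assumes a logistic trajectory for f, with K = 2 subject-level parameters and one covariate; all numeric values are illustrative choices, not estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

N, M = 40, 25                      # subjects, time points per subject
t = np.linspace(0, 10, M)          # common observation times t_ij

# Stage 2 (population model): theta_li = alpha_l + sum_b beta_lb x_ib + eta_li.
alpha = np.array([5.0, 1.0])               # alpha_l (K = 2)
beta = np.array([[0.8], [0.1]])            # beta_lb (P = 1 covariate)
omega = np.array([0.5, 0.1])               # between-subject sds omega_l
x = rng.normal(size=(N, 1))                # covariates x_ib
eta = rng.normal(size=(N, 2)) * omega      # eta_li ~ N(0, omega_l^2)
theta = alpha + x @ beta.T + eta           # subject-level parameters theta_li

# Stage 1 (individual-level model): y_ij = f(t_ij; theta_i) + eps_ij.
sigma = 0.02                               # within-subject sd

def f(tt, th):
    """Assumed logistic trajectory with midpoint th[0] and scale th[1]."""
    return 1.0 / (1.0 + np.exp(-(tt - th[0]) / th[1]))

y = np.array([f(t, theta[i]) + rng.normal(scale=sigma, size=M) for i in range(N)])
```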
A central task in the application of the Bayesian nonlinear mixed-effect models is to evaluate the posterior density:
{\displaystyle \pi (\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K}|\{y_{ij}\}_{i=1,j=1}^{N,M_{i}})}
{\displaystyle \propto \pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}},\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}
{\displaystyle {\begin{aligned}=&~\left.{\pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}}|\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2})}\right\}{\text{Stage 1: Individual-Level Model}}\\\times &~\left.{\pi (\{\theta _{li}\}_{i=1,l=1}^{N,K}|\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}\right\}{\text{Stage 2: Population Model}}\\\times &~\left.{\pi (\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}\right\}{\text{Stage 3: Prior}}\end{aligned}}}
The panel on the right displays the Bayesian research cycle using the Bayesian nonlinear mixed-effects model. A research cycle using this model comprises two steps: (a) a standard research cycle and (b) a Bayesian-specific workflow. The standard research cycle involves literature review, defining a problem, and specifying the research question and hypothesis. The Bayesian-specific workflow comprises three sub-steps: (b)–(i) formalizing prior distributions based on background knowledge and prior elicitation; (b)–(ii) determining the likelihood function based on a nonlinear function
{\displaystyle f}
; and (b)–(iii) making a posterior inference. The resulting posterior inference can be used to start a new research cycle.
== See also ==
Hyperparameter
Mixed-design analysis of variance
Multiscale modeling
Random effects model
Nonlinear mixed-effects model
Bayesian hierarchical modeling
Restricted randomization
== Notes ==
== References ==
== Further reading ==
Gelman, A.; Hill, J. (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. New York: Cambridge University Press. pp. 235–299. ISBN 978-0-521-68689-1.
Goldstein, H. (2011). Multilevel Statistical Models (4th ed.). London: Wiley. ISBN 978-0-470-74865-7.
Hedeker, D.; Gibbons, R. D. (2012). Longitudinal Data Analysis (2nd ed.). New York: Wiley. ISBN 978-0-470-88918-3.
Hox, J. J. (2010). Multilevel Analysis: Techniques and Applications (2nd ed.). New York: Routledge. ISBN 978-1-84872-845-5.
Raudenbush, S. W.; Bryk, A. S. (2002). Hierarchical Linear Models: Applications and Data Analysis Methods (2nd ed.). Thousand Oaks, CA: Sage. This concentrates on education.
Snijders, T. A. B.; Bosker, R. J. (2011). Multilevel Analysis: an Introduction to Basic and Advanced Multilevel Modeling (2nd ed.). London: Sage. ISBN 9781446254332.
Swamy, P. A. V. B.; Tavlas, George S. (2001). "Random Coefficient Models". In Baltagi, Badi H. (ed.). A Companion to Theoretical Econometrics. Oxford: Blackwell. pp. 410–429. ISBN 978-0-631-21254-6.
Verbeke, G.; Molenberghs, G. (2013). Linear Mixed Models for Longitudinal Data. Springer. Includes SAS code
Gomes, Dylan G.E. (20 January 2022). "Should I use fixed effects or random effects when I have fewer than five levels of a grouping factor in a mixed-effects model?". PeerJ. 10: e12794. doi:10.7717/peerj.12794. PMC 8784019. PMID 35116198.
== External links ==
Centre for Multilevel Modelling
The Design of Experiments is a 1935 book by the English statistician Ronald Fisher about the design of experiments and is considered a foundational work in experimental design. Among other contributions, the book introduced the concept of the null hypothesis in the context of the lady tasting tea experiment. A chapter is devoted to the Latin square.
== Chapters ==
Introduction
The principles of experimentation, illustrated by a psycho-physical experiment
A historical experiment on growth rate
An agricultural experiment in randomized blocks
The Latin square
The factorial design in experimentation
Confounding
Special cases of partial confounding
The increase of precision by concomitant measurements. Statistical Control
The generalization of null hypotheses. Fiducial probability
The measurement of amount of information in general
== Quotations regarding the null hypothesis ==
Fisher introduced the null hypothesis by an example, the now famous Lady tasting tea experiment, as a casual wager. She claimed the ability to determine the means of tea preparation by taste. Fisher proposed an experiment and an analysis to test her claim. She was to be offered 8 cups of tea, 4 prepared by each method, for determination. He proposed the null hypothesis that she possessed no such ability, so she was just guessing. With this assumption, the number of correct guesses (the test statistic) formed a hypergeometric distribution. Fisher calculated that her chance of guessing all cups correctly was 1/70. He was provisionally willing to concede her ability (rejecting the null hypothesis) in this case only. Having an example, Fisher commented:
"...the null hypothesis is never proved or established, but is possibly disproved, in the course of experimentation. Every experiment may be said to exist only in order to give the facts a chance of disproving the null hypothesis."
"...the null hypothesis must be exact, that is free from vagueness and ambiguity, because it must supply the basis of the 'problem of distribution,' of which the test of significance is the solution."
"We may, however, choose any null hypothesis we please, provided it is exact."
Regarding an alternative non-directional significance test of the Lady tasting tea experiment:
"For this purpose the new test proposed would be entirely inappropriate, and no experimenter would be tempted to employ it. Mathematically, however, it is as valid as any other, in that with proper randomisation it is demonstrable that it would give a significant result with known probability, if the null hypothesis were true."
Regarding which test of significance to apply:
"The notion that different tests of significance are appropriate to test different features of the same null hypothesis presents no difficulty to workers engaged in practical experimentation, but has been the occasion of much theoretical discussion among statisticians."
On selecting the appropriate experimental measurement and null hypothesis:
"This question, when the answer to it is not already known, can be fruitfully discussed only when the experimenter has in view, not a single null hypothesis, but a class of such hypotheses, in the significance of deviations from each of which he is equally interested."
== See also ==
Statistical Methods for Research Workers (1925)
Combinatorics of Experimental Design (1987)
List of important publications in statistics
== Notes ==
== Bibliography ==
Fisher, Ronald A. (1971) [1935]. The Design of Experiments (9th ed.). Macmillan. ISBN 0-02-844690-9.
A Bayesian network (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). While it is one of several forms of causal notation, causal networks are special cases of Bayesian networks. Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
Efficient algorithms can perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
== Graphical model ==
Formally, Bayesian networks are directed acyclic graphs (DAGs) whose nodes represent variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Each edge represents a direct conditional dependency. Any pair of nodes that are not connected (i.e. no path connects one node to the other) represent variables that are conditionally independent of each other. Each node is associated with a probability function that takes, as input, a particular set of values for the node's parent variables, and gives (as output) the probability (or probability distribution, if applicable) of the variable represented by the node. For example, if
{\displaystyle m}
parent nodes represent
{\displaystyle m}
Boolean variables, then the probability function could be represented by a table of
{\displaystyle 2^{m}}
entries, one entry for each of the
{\displaystyle 2^{m}}
possible parent combinations. Similar ideas may be applied to undirected, and possibly cyclic, graphs such as Markov networks.
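For instance, the rows of a CPT for a node with three Boolean parents can be enumerated directly; the sketch below only counts the parent-value combinations and does not assign probabilities.

```python
from itertools import product

def cpt_rows(n_parents):
    """A node with m Boolean parents needs one probability entry per
    combination of parent values: 2**m rows in its CPT."""
    return list(product([False, True], repeat=n_parents))

rows = cpt_rows(3)  # 2**3 = 8 parent combinations
```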
== Example ==
Suppose we want to model the dependencies between three variables: the sprinkler (or more appropriately, its state - whether it is on or not), the presence or absence of rain and whether the grass is wet or not. Observe that two events can cause the grass to become wet: an active sprinkler or rain. Rain has a direct effect on the use of the sprinkler (namely that when it rains, the sprinkler usually is not active). This situation can be modeled with a Bayesian network (shown to the right). Each variable has two possible values, T (for true) and F (for false).
The joint probability function is, by the chain rule of probability,
{\displaystyle \Pr(G,S,R)=\Pr(G\mid S,R)\Pr(S\mid R)\Pr(R)}
where G = "Grass wet (true/false)", S = "Sprinkler turned on (true/false)", and R = "Raining (true/false)".
The model can answer questions about the presence of a cause given the presence of an effect (so-called inverse probability) like "What is the probability that it is raining, given the grass is wet?" by using the conditional probability formula and summing over all nuisance variables:
{\displaystyle \Pr(R=T\mid G=T)={\frac {\Pr(G=T,R=T)}{\Pr(G=T)}}={\frac {\sum _{x\in \{T,F\}}\Pr(G=T,S=x,R=T)}{\sum _{x,y\in \{T,F\}}\Pr(G=T,S=x,R=y)}}}
Using the expansion for the joint probability function
{\displaystyle \Pr(G,S,R)}
and the conditional probabilities from the conditional probability tables (CPTs) stated in the diagram, one can evaluate each term in the sums in the numerator and denominator. For example,
{\displaystyle {\begin{aligned}\Pr(G=T,S=T,R=T)&=\Pr(G=T\mid S=T,R=T)\Pr(S=T\mid R=T)\Pr(R=T)\\&=0.99\times 0.01\times 0.2\\&=0.00198.\end{aligned}}}
Then the numerical results (subscripted by the associated variable values) are
{\displaystyle \Pr(R=T\mid G=T)={\frac {0.00198_{TTT}+0.1584_{TFT}}{0.00198_{TTT}+0.288_{TTF}+0.1584_{TFT}+0.0_{TFF}}}={\frac {891}{2491}}\approx 35.77\%.}
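This computation can be reproduced by enumeration. The CPT values below are recovered from the products shown above (e.g. Pr(G=T | S=T, R=T) = 0.99 and Pr(R=T) = 0.2), with the remaining entries implied by those same terms.

```python
from itertools import product

P_R = {True: 0.2, False: 0.8}                    # Pr(R)
P_S_given_R = {True: 0.01, False: 0.4}           # Pr(S=T | R)
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}  # Pr(G=T | S, R)

def joint(g, s, r):
    """Chain-rule factorization Pr(G,S,R) = Pr(G|S,R) Pr(S|R) Pr(R)."""
    pg = P_G_given_SR[(s, r)] if g else 1 - P_G_given_SR[(s, r)]
    ps = P_S_given_R[r] if s else 1 - P_S_given_R[r]
    return pg * ps * P_R[r]

# Pr(R=T | G=T): sum the joint over the nuisance variable S.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(True, s, r) for s, r in product((True, False), repeat=2))
posterior = num / den   # = 891/2491, about 35.77%
```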
To answer an interventional question, such as "What is the probability that it would rain, given that we wet the grass?" the answer is governed by the post-intervention joint distribution function
{\displaystyle \Pr(S,R\mid {\text{do}}(G=T))=\Pr(S\mid R)\Pr(R)}
obtained by removing the factor
{\displaystyle \Pr(G\mid S,R)}
from the pre-intervention distribution. The do operator forces the value of G to be true. The probability of rain is unaffected by the action:
{\displaystyle \Pr(R\mid {\text{do}}(G=T))=\Pr(R).}
To predict the impact of turning the sprinkler on:
{\displaystyle \Pr(R,G\mid {\text{do}}(S=T))=\Pr(R)\Pr(G\mid R,S=T)}
with the term
{\displaystyle \Pr(S=T\mid R)}
removed, showing that the action affects the grass but not the rain.
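The truncated factorization can be checked numerically: deleting the Pr(S | R) factor and clamping S = T leaves the rain marginal untouched. The CPT values below are the same illustrative ones used in the worked example.

```python
# Pre-intervention CPTs (illustrative sprinkler-network values).
P_R = {True: 0.2, False: 0.8}
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}

def post_intervention(r, g):
    """Pr(R, G | do(S=T)) = Pr(R) * Pr(G | R, S=T): the Pr(S | R) factor
    is removed and S is clamped to True."""
    pg = P_G_given_SR[(True, r)] if g else 1 - P_G_given_SR[(True, r)]
    return P_R[r] * pg

# Marginalizing out G shows Pr(R=T | do(S=T)) = Pr(R=T) = 0.2:
p_rain = post_intervention(True, True) + post_intervention(True, False)
```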
These predictions may not be feasible given unobserved variables, as in most policy evaluation problems. The effect of the action
do
(
x
)
{\displaystyle {\text{do}}(x)}
can still be predicted, however, whenever the back-door criterion is satisfied. It states that, if a set Z of nodes can be observed that d-separates (or blocks) all back-door paths from X to Y then
{\displaystyle \Pr(Y,Z\mid {\text{do}}(x))={\frac {\Pr(Y,Z,X=x)}{\Pr(X=x\mid Z)}}.}
A back-door path is one that ends with an arrow into X. Sets that satisfy the back-door criterion are called "sufficient" or "admissible." For example, the set Z = R is admissible for predicting the effect of S = T on G, because R d-separates the (only) back-door path S ← R → G. However, if S is not observed, no other set d-separates this path and the effect of turning the sprinkler on (S = T) on the grass (G) cannot be predicted from passive observations. In that case P(G | do(S = T)) is not "identified". This reflects the fact that, lacking interventional data, it cannot be determined whether the observed dependence between S and G is due to a causal connection or is spurious (apparent dependence arising from a common cause, R); see Simpson's paradox.
To determine whether a causal relation is identified from an arbitrary Bayesian network with unobserved variables, one can use the three rules of "do-calculus" and test whether all do terms can be removed from the expression of that relation, thus confirming that the desired quantity is estimable from frequency data.
Using a Bayesian network can save considerable amounts of memory over exhaustive probability tables, if the dependencies in the joint distribution are sparse. For example, a naive way of storing the conditional probabilities of 10 two-valued variables as a table requires storage space for
{\displaystyle 2^{10}=1024}
values. If no variable's local distribution depends on more than three parent variables, the Bayesian network representation stores at most
{\displaystyle 10\cdot 2^{3}=80}
values.
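The memory comparison is simple arithmetic; a minimal helper makes it explicit (one stored value per parent configuration of each binary variable):

```python
def joint_table_size(n):
    """Entries in a full joint table over n binary variables."""
    return 2 ** n

def bn_table_size(n, max_parents):
    """Upper bound on stored probabilities when each of the n binary
    nodes has at most `max_parents` parents (one value per parent
    configuration)."""
    return n * 2 ** max_parents

print(joint_table_size(10))   # 1024
print(bn_table_size(10, 3))   # 80
```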
One advantage of Bayesian networks is that it is intuitively easier for a human to understand (a sparse set of) direct dependencies and local distributions than complete joint distributions.
== Inference and learning ==
Bayesian networks perform three main inference tasks:
=== Inferring unobserved variables ===
Because a Bayesian network is a complete model for its variables and their relationships, it can be used to answer probabilistic queries about them. For example, the network can be used to update knowledge of the state of a subset of variables when other variables (the evidence variables) are observed. This process of computing the posterior distribution of variables given evidence is called probabilistic inference. The posterior gives a universal sufficient statistic for detection applications, when choosing values for the variable subset that minimize some expected loss function, for instance the probability of decision error. A Bayesian network can thus be considered a mechanism for automatically applying Bayes' theorem to complex problems.
The most common exact inference methods are: variable elimination, which eliminates (by integration or summation) the non-observed non-query variables one by one by distributing the sum over the product; clique tree propagation, which caches the computation so that many variables can be queried at one time and new evidence can be propagated quickly; and recursive conditioning and AND/OR search, which allow for a space–time tradeoff and match the efficiency of variable elimination when enough space is used. All of these methods have complexity that is exponential in the network's treewidth. The most common approximate inference algorithms are importance sampling, stochastic MCMC simulation, mini-bucket elimination, loopy belief propagation, generalized belief propagation and variational methods.
=== Parameter learning ===
In order to fully specify the Bayesian network and thus fully represent the joint probability distribution, it is necessary to specify for each node X the probability distribution for X conditional upon X's parents. The distribution of X conditional upon its parents may have any form. It is common to work with discrete or Gaussian distributions since that simplifies calculations. Sometimes only constraints on distribution are known; one can then use the principle of maximum entropy to determine a single distribution, the one with the greatest entropy given the constraints. (Analogously, in the specific context of a dynamic Bayesian network, the conditional distribution for the hidden state's temporal evolution is commonly specified to maximize the entropy rate of the implied stochastic process.)
Often these conditional distributions include parameters that are unknown and must be estimated from data, e.g., via the maximum likelihood approach. Direct maximization of the likelihood (or of the posterior probability) is often complex given unobserved variables. A classical approach to this problem is the expectation-maximization algorithm, which alternates computing expected values of the unobserved variables conditional on observed data, with maximizing the complete likelihood (or posterior) assuming that previously computed expected values are correct. Under mild regularity conditions, this process converges on maximum likelihood (or maximum posterior) values for parameters.
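As an illustration of the expectation-maximization idea (a generic sketch, not an algorithm from the article), the classic two-coin example below alternates soft assignments of a hidden coin choice (E-step) with weighted maximum-likelihood updates of each coin's bias (M-step). The data and starting values are hypothetical:

```python
# Illustrative EM for a two-coin mixture (hypothetical data).
# data[i] = number of heads in m = 10 flips of one of two coins with
# unknown biases; which coin was flipped is the unobserved variable.
data = [5, 9, 8, 4, 7]
m = 10

def likelihood(heads, theta):
    # Binomial likelihood up to a constant (the binomial coefficient
    # cancels in the E-step ratio).
    return theta**heads * (1 - theta)**(m - heads)

theta_a, theta_b = 0.6, 0.5            # arbitrary starting guesses
for _ in range(50):
    # E-step: expected (soft) assignment of each sequence to coin A.
    resp = [likelihood(h, theta_a) /
            (likelihood(h, theta_a) + likelihood(h, theta_b)) for h in data]
    # M-step: weighted maximum-likelihood update of each coin's bias.
    theta_a = sum(r * h for r, h in zip(resp, data)) / sum(r * m for r in resp)
    theta_b = (sum((1 - r) * h for r, h in zip(resp, data))
               / sum((1 - r) * m for r in resp))

print(theta_a, theta_b)   # the two biases separate into distinct values
```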
A more fully Bayesian approach to parameters is to treat them as additional unobserved variables and to compute a full posterior distribution over all nodes conditional upon observed data, then to integrate out the parameters. This approach can be expensive and lead to large dimension models, making classical parameter-setting approaches more tractable.
=== Structure learning ===
In the simplest case, a Bayesian network is specified by an expert and is then used to perform inference. In other applications, the task of defining the network is too complex for humans. In this case, the network structure and the parameters of the local distributions must be learned from data.
Automatically learning the graph structure of a Bayesian network (BN) is a challenge pursued within machine learning. The basic idea goes back to a recovery algorithm developed by Rebane and Pearl and rests on the distinction between the three possible patterns allowed in a 3-node DAG: the chain X → Y → Z, the fork X ← Y → Z, and the collider X → Y ← Z.
The first 2 represent the same dependencies (X and Z are independent given Y) and are, therefore, indistinguishable. The collider, however, can be uniquely identified, since X and Z are marginally independent and all other pairs are dependent. Thus, while the skeletons (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies when X and Z have common parents, except that one must first condition on those parents. Algorithms have been developed to systematically determine the skeleton of the underlying graph and, then, orient all arrows whose directionality is dictated by the conditional independences observed.
An alternative method of structural learning uses optimization-based search. It requires a scoring function and a search strategy. A common scoring function is posterior probability of the structure given the training data, like the BIC or the BDeu. The time requirement of an exhaustive search returning a structure that maximizes the score is superexponential in the number of variables. A local search strategy makes incremental changes aimed at improving the score of the structure. A global search algorithm like Markov chain Monte Carlo can avoid getting trapped in local minima. Friedman et al. discuss using mutual information between variables and finding a structure that maximizes this. They do this by restricting the parent candidate set to k nodes and exhaustively searching therein.
A particularly fast method for exact BN learning is to cast the problem as an optimization problem, and solve it using integer programming. Acyclicity constraints are added to the integer program (IP) during solving in the form of cutting planes. Such methods can handle problems with up to 100 variables.
In order to deal with problems with thousands of variables, a different approach is necessary. One is to first sample one ordering, and then find the optimal BN structure with respect to that ordering. This implies working on the search space of the possible orderings, which is convenient as it is smaller than the space of network structures. Multiple orderings are then sampled and evaluated. This method has been proven to be the best available in literature when the number of variables is huge.
Another method consists of focusing on the sub-class of decomposable models, for which the MLE has a closed form. It is then possible to discover a consistent structure for hundreds of variables.
Learning Bayesian networks with bounded treewidth is necessary to allow exact, tractable inference, since the worst-case inference complexity is exponential in the treewidth k (under the exponential time hypothesis). Yet, as a global property of the graph, it considerably increases the difficulty of the learning process. In this context it is possible to use K-trees for effective learning.
== Statistical introduction ==
Given data x and parameter θ, a simple Bayesian analysis starts with a prior probability (prior) p(θ) and likelihood p(x ∣ θ) to compute a posterior probability
{\displaystyle p(\theta \mid x)\propto p(x\mid \theta )p(\theta )}.
Often the prior on θ depends in turn on other parameters φ that are not mentioned in the likelihood. So, the prior p(θ) must be replaced by a likelihood p(θ ∣ φ), and a prior p(φ) on the newly introduced parameters φ is required, resulting in a posterior probability
{\displaystyle p(\theta ,\varphi \mid x)\propto p(x\mid \theta )p(\theta \mid \varphi )p(\varphi ).}
This is the simplest example of a hierarchical Bayes model.
The process may be repeated; for example, the parameters φ may depend in turn on additional parameters ψ, which require their own prior. Eventually the process must terminate, with priors that do not depend on unmentioned parameters.
=== Introductory examples ===
Given the measured quantities x₁, …, xₙ, each with normally distributed errors of known standard deviation σ,
{\displaystyle x_{i}\sim N(\theta _{i},\sigma ^{2})}
Suppose we are interested in estimating the θᵢ. An approach would be to estimate the θᵢ using a maximum likelihood approach; since the observations are independent, the likelihood factorizes and the maximum likelihood estimate is simply
{\displaystyle \theta _{i}=x_{i}.}
However, if the quantities are related, so that for example the individual θᵢ have themselves been drawn from an underlying distribution, then this relationship destroys the independence and suggests a more complex model, e.g.,
{\displaystyle x_{i}\sim N(\theta _{i},\sigma ^{2}),}
{\displaystyle \theta _{i}\sim N(\varphi ,\tau ^{2}),}
with improper priors φ ∼ flat, τ ∼ flat ∈ (0, ∞). When n ≥ 3, this is an identified model (i.e. there exists a unique solution for the model's parameters), and the posterior distributions of the individual θᵢ will tend to move, or shrink, away from the maximum likelihood estimates towards their common mean. This shrinkage is a typical behavior in hierarchical Bayes models.
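The shrinkage behaviour can be illustrated with a simplified version of the model in which the hyperparameters φ and τ are held fixed (rather than given flat priors): each θᵢ then has a conjugate normal posterior whose mean is a precision-weighted average of xᵢ and φ. All numbers below are hypothetical:

```python
# Shrinkage of posterior means in a normal hierarchical model, with
# the hyperparameters held fixed for illustration (phi, tau known).
x = [2.0, 5.0, 9.0]      # observed values (hypothetical)
sigma = 2.0              # known measurement standard deviation
phi, tau = 5.0, 1.0      # fixed hyperparameters of theta_i ~ N(phi, tau^2)

# Posterior mean of theta_i = weighted average of x_i and phi,
# with weights proportional to the precisions 1/sigma^2 and 1/tau^2.
w = (1 / sigma**2) / (1 / sigma**2 + 1 / tau**2)
post_means = [w * xi + (1 - w) * phi for xi in x]
print(post_means)   # each x_i pulled toward the common mean phi = 5: ≈ [4.4, 5.0, 5.8]
```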
=== Restrictions on priors ===
Some care is needed when choosing priors in a hierarchical model, particularly on scale variables at higher levels of the hierarchy such as the variable τ in the example. The usual priors such as the Jeffreys prior often do not work, because the posterior distribution will not be normalizable and estimates made by minimizing the expected loss will be inadmissible.
== Definitions and concepts ==
Several equivalent definitions of a Bayesian network have been offered. For the following, let G = (V,E) be a directed acyclic graph (DAG) and let X = (Xv), v ∈ V be a set of random variables indexed by V.
=== Factorization definition ===
X is a Bayesian network with respect to G if its joint probability density function (with respect to a product measure) can be written as a product of the individual density functions, conditional on their parent variables:
{\displaystyle p(x)=\prod _{v\in V}p\left(x_{v}\,{\big |}\,x_{\operatorname {pa} (v)}\right)}
where pa(v) is the set of parents of v (i.e. those vertices pointing directly to v via a single edge).
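The factorization can be read off directly as code. The sketch below uses a hypothetical three-node chain a → b → c of binary variables (the parent lists and conditional tables are illustrative, not from the article):

```python
# Joint probability as the product of per-node conditionals,
# for a hypothetical chain a -> b -> c of binary variables.
parents = {"a": [], "b": ["a"], "c": ["b"]}
cpt = {
    "a": lambda x: 0.3 if x["a"] else 0.7,
    "b": lambda x: (0.9 if x["b"] else 0.1) if x["a"] else (0.2 if x["b"] else 0.8),
    "c": lambda x: (0.5 if x["c"] else 0.5) if x["b"] else (0.4 if x["c"] else 0.6),
}

def joint(assignment):
    """p(x) = product over v of p(x_v | x_pa(v))."""
    p = 1.0
    for v in parents:
        p *= cpt[v](assignment)
    return p

print(joint({"a": True, "b": True, "c": False}))  # 0.3 * 0.9 * 0.5 = 0.135
```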
For any set of random variables, the probability of any member of a joint distribution can be calculated from conditional probabilities using the chain rule (given a topological ordering of X) as follows:
{\displaystyle \operatorname {P} (X_{1}=x_{1},\ldots ,X_{n}=x_{n})=\prod _{v=1}^{n}\operatorname {P} \left(X_{v}=x_{v}\mid X_{v+1}=x_{v+1},\ldots ,X_{n}=x_{n}\right)}
Using the definition above, this can be written as:
{\displaystyle \operatorname {P} (X_{1}=x_{1},\ldots ,X_{n}=x_{n})=\prod _{v=1}^{n}\operatorname {P} (X_{v}=x_{v}\mid X_{j}=x_{j}{\text{ for each }}X_{j}\,{\text{ that is a parent of }}X_{v}\,)}
The difference between the two expressions is the conditional independence of the variables from any of their non-descendants, given the values of their parent variables.
=== Local Markov property ===
X is a Bayesian network with respect to G if it satisfies the local Markov property: each variable is conditionally independent of its non-descendants given its parent variables:
{\displaystyle X_{v}\perp \!\!\!\perp X_{V\,\smallsetminus \,\operatorname {de} (v)}\mid X_{\operatorname {pa} (v)}\quad {\text{for all }}v\in V}
where de(v) is the set of descendants and V \ de(v) is the set of non-descendants of v.
This can be expressed in terms similar to the first definition, as
{\displaystyle {\begin{aligned}&\operatorname {P} (X_{v}=x_{v}\mid X_{i}=x_{i}{\text{ for each }}X_{i}{\text{ that is not a descendant of }}X_{v}\,)\\[6pt]={}&P(X_{v}=x_{v}\mid X_{j}=x_{j}{\text{ for each }}X_{j}{\text{ that is a parent of }}X_{v}\,)\end{aligned}}}
The set of parents is a subset of the set of non-descendants because the graph is acyclic.
=== Marginal independence structure ===
In general, learning a Bayesian network from data is known to be NP-hard. This is due in part to the combinatorial explosion of enumerating DAGs as the number of variables increases. Nevertheless, insights about an underlying Bayesian network can be learned from data in polynomial time by focusing on its marginal independence structure: while the conditional independence statements of a distribution modeled by a Bayesian network are encoded by a DAG (according to the factorization and Markov properties above), its marginal independence statements—the conditional independence statements in which the conditioning set is empty—are encoded by a simple undirected graph with special properties such as equal intersection and independence numbers.
=== Developing Bayesian networks ===
Developing a Bayesian network often begins with creating a DAG G such that X satisfies the local Markov property with respect to G. Sometimes this is a causal DAG. The conditional probability distributions of each variable given its parents in G are assessed. In many cases, in particular in the case where the variables are discrete, if the joint distribution of X is the product of these conditional distributions, then X is a Bayesian network with respect to G.
=== Markov blanket ===
The Markov blanket of a node is the set of nodes consisting of its parents, its children, and any other parents of its children. The Markov blanket renders the node independent of the rest of the network; the joint distribution of the variables in the Markov blanket of a node is sufficient knowledge for calculating the distribution of the node. X is a Bayesian network with respect to G if every node is conditionally independent of all other nodes in the network, given its Markov blanket.
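A minimal sketch of this definition: given a DAG as an edge list, the Markov blanket is the union of parents, children, and the children's other parents. The sprinkler network (R = rain, S = sprinkler, G = wet grass) serves as the example:

```python
# Markov blanket from an edge list: parents, children, and the
# children's other parents ("spouses").
def markov_blanket(node, edges):
    parents = {u for u, v in edges if v == node}
    children = {v for u, v in edges if u == node}
    spouses = {u for u, v in edges if v in children} - {node}
    return parents | children | spouses

# Sprinkler network: R -> S, R -> G, S -> G.
edges = [("R", "S"), ("R", "G"), ("S", "G")]
print(markov_blanket("S", edges))   # {'R', 'G'}: rain and wet grass
```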
==== d-separation ====
This definition can be made more general by defining the "d"-separation of two nodes, where d stands for directional. We first define the "d"-separation of a trail and then we will define the "d"-separation of two nodes in terms of that.
Let P be a trail from node u to v. A trail is a loop-free, undirected (i.e. all edge directions are ignored) path between two nodes. Then P is said to be d-separated by a set of nodes Z if any of the following conditions holds:
P contains a directed chain (though P itself need not be entirely directed), u ⋯ ← m ← ⋯ v or u ⋯ → m → ⋯ v, such that the middle node m is in Z,
P contains a fork, u ⋯ ← m → ⋯ v, such that the middle node m is in Z, or
P contains an inverted fork (or collider), u ⋯ → m ← ⋯ v, such that the middle node m is not in Z and no descendant of m is in Z.
The nodes u and v are d-separated by Z if all trails between them are d-separated. If u and v are not d-separated, they are d-connected.
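For small graphs, these rules can be checked directly by enumerating trails. The sketch below (an illustrative implementation, not from the article) tests each intermediate node of every trail against the three blocking conditions:

```python
# Illustrative d-separation test for small DAGs.
def descendants(node, edges):
    """All nodes reachable from `node` along directed edges."""
    out, stack = set(), [node]
    while stack:
        n = stack.pop()
        for u, v in edges:
            if u == n and v not in out:
                out.add(v)
                stack.append(v)
    return out

def trails(u, v, edges, path=None):
    """All loop-free undirected paths from u to v."""
    path = path or [u]
    if u == v:
        yield path
        return
    for a, b in edges:
        nxt = b if a == u else a if b == u else None
        if nxt is not None and nxt not in path:
            yield from trails(nxt, v, edges, path + [nxt])

def d_separated(u, v, z, edges):
    z = set(z)
    for path in trails(u, v, edges):
        blocked = False
        for i in range(1, len(path) - 1):
            a, m, b = path[i - 1], path[i], path[i + 1]
            if (a, m) in edges and (b, m) in edges:        # collider at m
                if m not in z and not (descendants(m, edges) & z):
                    blocked = True
                    break
            elif m in z:                                   # chain or fork through m
                blocked = True
                break
        if not blocked:
            return False   # an active trail d-connects u and v
    return True

chain = [("a", "b"), ("b", "c")]
collider = [("a", "c"), ("b", "c")]
print(d_separated("a", "c", {"b"}, chain))     # True: b blocks the chain
print(d_separated("a", "b", set(), collider))  # True: unobserved collider blocks
print(d_separated("a", "b", {"c"}, collider))  # False: conditioning on c opens the trail
```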
X is a Bayesian network with respect to G if, for any two nodes u, v:
{\displaystyle X_{u}\perp \!\!\!\perp X_{v}\mid X_{Z}}
where Z is a set which d-separates u and v. (The Markov blanket is the minimal set of nodes which d-separates node v from all other nodes.)
=== Causal networks ===
Although Bayesian networks are often used to represent causal relationships, this need not be the case: a directed edge from u to v does not require that Xv be causally dependent on Xu. This is demonstrated by the fact that Bayesian networks on the graphs:
{\displaystyle a\rightarrow b\rightarrow c\qquad {\text{and}}\qquad a\leftarrow b\leftarrow c}
are equivalent: that is, they impose exactly the same conditional independence requirements.
A causal network is a Bayesian network with the requirement that the relationships be causal. The additional semantics of causal networks specify that if a node X is actively caused to be in a given state x (an action written as do(X = x)), then the probability density function changes to that of the network obtained by cutting the links from the parents of X to X, and setting X to the caused value x. Using these semantics, the impact of external interventions from data obtained prior to intervention can be predicted.
== Inference complexity and approximation algorithms ==
In 1990, while working at Stanford University on large bioinformatic applications, Cooper proved that exact inference in Bayesian networks is NP-hard. This result prompted research on approximation algorithms with the aim of developing a tractable approximation to probabilistic inference. In 1993, Paul Dagum and Michael Luby proved two surprising results on the complexity of approximation of probabilistic inference in Bayesian networks. First, they proved that no tractable deterministic algorithm can approximate probabilistic inference to within an absolute error ɛ < 1/2. Second, they proved that no tractable randomized algorithm can approximate probabilistic inference to within an absolute error ɛ < 1/2 with confidence probability greater than 1/2.
At about the same time, Roth proved that exact inference in Bayesian networks is in fact #P-complete (and thus as hard as counting the number of satisfying assignments of a conjunctive normal form formula (CNF)) and that approximate inference within a factor 2^(n^(1−ɛ)) for every ɛ > 0, even for Bayesian networks with restricted architecture, is NP-hard.
In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as naïve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm developed by Dagum and Luby was the first provable fast approximation algorithm to efficiently approximate probabilistic inference in Bayesian networks with guarantees on the error approximation. This powerful algorithm required the minor restriction on the conditional probabilities of the Bayesian network to be bounded away from zero and one by
1/p(n), where p(n) is any polynomial in the number of nodes in the network, n.
== Software ==
Notable software for Bayesian networks include:
Just another Gibbs sampler (JAGS) – Open-source alternative to WinBUGS. Uses Gibbs sampling.
OpenBUGS – Open-source development of WinBUGS.
SPSS Modeler – Commercial software that includes an implementation for Bayesian networks.
Stan – An open-source package for obtaining Bayesian inference using the No-U-Turn sampler (NUTS), a variant of Hamiltonian Monte Carlo.
PyMC – A Python library implementing an embedded domain-specific language to represent Bayesian networks, and a variety of samplers (including NUTS).
WinBUGS – One of the first computational implementations of MCMC samplers. No longer maintained.
== History ==
The term Bayesian network was coined by Judea Pearl in 1985 to emphasize:
the often subjective nature of the input information
the reliance on Bayes' conditioning as the basis for updating information
the distinction between causal and evidential modes of reasoning
In the late 1980s Pearl's Probabilistic Reasoning in Intelligent Systems and Neapolitan's Probabilistic Reasoning in Expert Systems summarized their properties and established them as a field of study.
== Further reading ==
Conrady S, Jouffe L (2015-07-01). Bayesian Networks and BayesiaLab – A practical introduction for researchers. Franklin, Tennessee: Bayesian USA. ISBN 978-0-9965333-0-0.
Charniak E (Winter 1991). "Bayesian networks without tears" (PDF). AI Magazine.
Kruse R, Borgelt C, Klawonn F, Moewes C, Steinbrecher M, Held P (2013). Computational Intelligence A Methodological Introduction. London: Springer-Verlag. ISBN 978-1-4471-5012-1.
Borgelt C, Steinbrecher M, Kruse R (2009). Graphical Models – Representations for Learning, Reasoning and Data Mining (Second ed.). Chichester: Wiley. ISBN 978-0-470-74956-2.
== External links ==
An Introduction to Bayesian Networks and their Contemporary Applications
On-line Tutorial on Bayesian nets and probability
Web-App to create Bayesian nets and run it with a Monte Carlo method
Continuous Time Bayesian Networks
Bayesian Networks: Explanation and Analogy
A live tutorial on learning Bayesian networks
A hierarchical Bayes Model for handling sample heterogeneity in classification problems, provides a classification model taking into consideration the uncertainty associated with measuring replicate samples.
Hierarchical Naive Bayes Model for handling sample uncertainty Archived 2007-09-28 at the Wayback Machine, shows how to perform classification and learning with continuous and discrete variables with replicated measurements. | Wikipedia/Bayesian_model |
The average treatment effect (ATE) is a measure used to compare treatments (or interventions) in randomized experiments, evaluation of policy interventions, and medical trials. The ATE measures the difference in mean (average) outcomes between units assigned to the treatment and units assigned to the control. In a randomized trial (i.e., an experimental study), the average treatment effect can be estimated from a sample using a comparison in mean outcomes for treated and untreated units. However, the ATE is generally understood as a causal parameter (i.e., an estimate or property of a population) that a researcher desires to know, defined without reference to the study design or estimation procedure. Both observational studies and experimental study designs with random assignment may enable one to estimate an ATE in a variety of ways.
The average treatment effect is under some conditions directly related to the partial dependence plot.
== General definition ==
Originating from early statistical analysis in the fields of agriculture and medicine, the term "treatment" is now applied, more generally, to other fields of natural and social science, especially psychology, political science, and economics, for example in the evaluation of the impact of public policies. The nature of a treatment or outcome is relatively unimportant in the estimation of the ATE—that is to say, calculation of the ATE requires that a treatment be applied to some units and not others, but the nature of that treatment (e.g., a pharmaceutical, an incentive payment, a political advertisement) is irrelevant to the definition and estimation of the ATE.
The expression "treatment effect" refers to the causal effect of a given treatment or intervention (for example, the administering of a drug) on an outcome variable of interest (for example, the health of the patient). In the Neyman-Rubin "potential outcomes framework" of causality a treatment effect is defined for each individual unit in terms of two "potential outcomes." Each unit has one outcome that would manifest if the unit were exposed to the treatment and another outcome that would manifest if the unit were exposed to the control. The "treatment effect" is the difference between these two potential outcomes. However, this individual-level treatment effect is unobservable because individual units can only receive the treatment or the control, but not both. Random assignment to treatment ensures that units assigned to the treatment and units assigned to the control are identical (over a large number of iterations of the experiment). Indeed, units in both groups have identical distributions of covariates and potential outcomes. Thus the average outcome among the treatment units serves as a counterfactual for the average outcome among the control units. The difference between these two averages is the ATE, which is an estimate of the central tendency of the distribution of unobservable individual-level treatment effects. If a sample is randomly constituted from a population, the sample ATE (abbreviated SATE) is also an estimate of the population ATE (abbreviated PATE).
While an experiment ensures, in expectation, that potential outcomes (and all covariates) are equivalently distributed in the treatment and control groups, this is not the case in an observational study. In an observational study, units are not assigned to treatment and control randomly, so their assignment to treatment may depend on unobserved or unobservable factors. Observed factors can be statistically controlled (e.g., through regression or matching), but any estimate of the ATE could be confounded by unobservable factors that influenced which units received the treatment versus the control.
== Formal definition ==
In order to define formally the ATE, we define two potential outcomes: y₀(i) is the value of the outcome variable for individual i if they are not treated, and y₁(i) is the value of the outcome variable for individual i if they are treated. For example, y₀(i) is the health status of the individual if they are not administered the drug under study and y₁(i) is the health status if they are administered the drug.
The treatment effect for individual i is given by y₁(i) − y₀(i) = β(i). In the general case, there is no reason to expect this effect to be constant across individuals. The average treatment effect is given by
{\displaystyle {\text{ATE}}=\mathbb {E} [y_{1}-y_{0}]}
and can be estimated (if a law of large numbers holds)
{\displaystyle {\widehat {ATE}}={\frac {1}{N}}\sum _{i}(y_{1}(i)-y_{0}(i))}
where the summation occurs over all N individuals in the population.
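With (hypothetical) fully observed potential outcomes, the estimator above is a one-line average of individual effects; all numbers below are illustrative:

```python
# Plug-in ATE from hypothetical, fully observed potential outcomes.
# In practice only one of y0[i], y1[i] is observed per individual;
# here both are given to illustrate the definition.
y0 = [3.0, 1.0, 4.0, 2.0]   # outcomes without treatment
y1 = [5.0, 2.0, 4.5, 4.0]   # outcomes with treatment

ate = sum(b - a for a, b in zip(y0, y1)) / len(y0)
print(ate)   # mean of the individual effects (2.0, 1.0, 0.5, 2.0) = 1.375
```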
If we could observe, for each individual, y₁(i) and y₀(i) among a large representative sample of the population, we could estimate the ATE simply by taking the average value of y₁(i) − y₀(i) across the sample. However, we cannot observe both y₁(i) and y₀(i) for each individual, since an individual cannot be both treated and not treated. For example, in the drug example, we can only observe y₁(i) for individuals who have received the drug and y₀(i) for those who did not receive it. This is the main problem faced by scientists in the evaluation of treatment effects and has triggered a large body of estimation techniques.
== Estimation ==
Depending on the data and its underlying circumstances, many methods can be used to estimate the ATE. The most common ones are:
Natural experiments
Difference in differences
Regression discontinuity designs
Propensity score matching
Instrumental variables estimation
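To give a flavor of one of these methods, a minimal two-group, two-period difference in differences estimate subtracts the control group's before/after change from the treatment group's. All numbers are hypothetical:

```python
# Two-group, two-period difference-in-differences (hypothetical numbers).
treat_before, treat_after = 10.0, 16.0   # treatment group outcome means
ctrl_before, ctrl_after = 9.0, 12.0      # control group outcome means

# Treatment group's change minus the control group's change.
did = (treat_after - treat_before) - (ctrl_after - ctrl_before)
print(did)   # (16 - 10) - (12 - 9) = 3.0, the estimated treatment effect
```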
== An example ==
Consider an example where all units are unemployed individuals, and some experience a policy intervention (the treatment group), while others do not (the control group). The causal effect of interest is the impact a job search monitoring policy (the treatment) has on the length of an unemployment spell: On average, how much shorter would one's unemployment be if they experienced the intervention? The ATE, in this case, is the difference in expected values (means) of the treatment and control groups' length of unemployment.
A positive ATE, in this example, would suggest that the job policy increased the length of unemployment. A negative ATE would suggest that the job policy decreased the length of unemployment. An ATE estimate equal to zero would suggest that there was no advantage or disadvantage to providing the treatment in terms of the length of unemployment. Determining whether an ATE estimate is distinguishable from zero (either positively or negatively) requires statistical inference.
Because the ATE is an estimate of the average effect of the treatment, a positive or negative ATE does not indicate that any particular individual would benefit or be harmed by the treatment. Thus the average treatment effect neglects the distribution of the treatment effect. Some parts of the population might be worse off with the treatment even if the mean effect is positive.
== Heterogeneous treatment effects ==
Some researchers call a treatment effect "heterogeneous" if it affects different individuals differently (heterogeneously). For example, perhaps the above treatment of a job search monitoring policy affected men and women differently, or people who live in different states differently. The ATE requires a strong assumption known as the stable unit treatment value assumption (SUTVA), which requires the value of the potential outcome y(i) to be unaffected by the mechanism used to assign the treatment and by the treatment exposure of all other individuals. Let d be the treatment assignment; the treatment effect for individual i is then given by y₁(i, d) − y₀(i, d). The SUTVA assumption allows us to declare y₁(i, d) = y₁(i) and y₀(i, d) = y₀(i).
One way to look for heterogeneous treatment effects is to divide the study data into subgroups (e.g., men and women, or by state), and see if the average treatment effects are different by subgroup. If the average treatment effects are different, SUTVA is violated. A per-subgroup ATE is called a "conditional average treatment effect" (CATE), i.e. the ATE conditioned on membership in the subgroup. CATE can be used as an estimate if SUTVA does not hold.
A challenge with this approach is that each subgroup may have substantially less data than the study as a whole, so if the study has been powered to detect the main effects without subgroup analysis, there may not be enough data to properly judge the effects on subgroups.
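A conditional ATE is the same difference of means computed within each subgroup. A minimal sketch with made-up records (the subgroup labels and outcome values are purely illustrative):

```python
from statistics import mean

# Hypothetical records: (subgroup, treated?, outcome in weeks).
records = [
    ("men",   True,  18.0), ("men",   False, 25.0),
    ("men",   True,  20.0), ("men",   False, 23.0),
    ("women", True,  21.0), ("women", False, 22.0),
    ("women", True,  22.0), ("women", False, 21.0),
]

def cate(records, subgroup):
    """ATE conditioned on subgroup membership: difference of subgroup means."""
    treated = [y for g, d, y in records if g == subgroup and d]
    control = [y for g, d, y in records if g == subgroup and not d]
    return mean(treated) - mean(control)

print(cate(records, "men"), cate(records, "women"))  # prints: -5.0 0.0
```

Note that the caveat above applies directly: each subgroup estimate here rests on only a few observations.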
There is some work on detecting heterogeneous treatment effects using random forests as well as detecting heterogeneous subpopulations using cluster analysis. Recently, metalearning approaches have been developed that use arbitrary regression frameworks as base learners to infer the CATE. Representation learning can be used to further improve the performance of these methods.
== References ==
== Further reading ==
Wooldridge, Jeffrey M. (2013). "Policy Analysis with Pooled Cross Sections". Introductory Econometrics: A Modern Approach. Mason, OH: Thomson South-Western. pp. 438–443. ISBN 978-1-111-53104-1.
Energy statistics refers to collecting, compiling, analyzing and disseminating data on commodities such as coal, crude oil, natural gas, electricity, or renewable energy sources (biomass, geothermal, wind or solar energy), when they are used for the energy they contain. Energy is the capability of some substances, resulting from their physico-chemical properties, to do work or produce heat. Some energy commodities, called fuels, release their energy content as heat when they burn. This heat could be used to run an internal or external combustion engine.
The need for statistics on energy commodities became obvious during the 1973 oil crisis, which brought a tenfold increase in petroleum prices. Before the crisis, having accurate data on global energy supply and demand was not deemed critical. Another concern of energy statistics today is the huge gap in energy use between developed and developing countries. As the gap narrows, the pressure on energy supply increases tremendously.
The data on energy and electricity come from three principal sources:
Energy industry
Other industries ("self-producers")
Consumers
The flows of and trade in energy commodities are measured both in physical units (e.g., metric tons) and, when energy balances are calculated, in energy units (e.g., terajoules or tons of oil equivalent). What makes energy statistics specific and different from other fields of economic statistics is that energy commodities undergo a greater number of transformations (flows) than other commodities. In these transformations energy is conserved, as defined by and within the limitations of the first and second laws of thermodynamics.
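Compiling an energy balance requires converting physical units into a common energy unit. A small sketch, assuming the commonly used convention that 1 tonne of oil equivalent (toe) equals 41.868 GJ:

```python
# Conversion between tons of oil equivalent and terajoules, assuming the
# conventional definition 1 toe = 41.868 GJ.
GJ_PER_TOE = 41.868
GJ_PER_TJ = 1000.0

def toe_to_terajoules(toe):
    return toe * GJ_PER_TOE / GJ_PER_TJ

def terajoules_to_toe(tj):
    return tj * GJ_PER_TJ / GJ_PER_TOE

print(toe_to_terajoules(1000))  # 1000 toe = 41.868 TJ
```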
== See also ==
Energy system
World energy resources and consumption
== External links ==
Statistical Energy Database Review: Enerdata Yearbook 2012
International Energy Agency: Statistics
United Nations: Energy Statistics
The Oslo Group on Energy Statistics
DOE Energy Information Administration
Year of Energy 2009
European Energy Statistics & Key Indicators
== Publications ==
Energy Statistics Yearbook 2004, United Nations, 2006
Energy Balances and Electricity Profiles 2004, United Nations, 2006
Probability theory or probability calculus is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event.
Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes (which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion).
Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability theory describing such behaviour are the law of large numbers and the central limit theorem.
As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics or sequential estimation. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics.
== History of probability ==
The modern mathematical theory of probability has its roots in attempts to analyze games of chance by Gerolamo Cardano in the sixteenth century, and by Pierre de Fermat and Blaise Pascal in the seventeenth century (for example the "problem of points"). Christiaan Huygens published a book on the subject in 1657. In the 19th century, what is considered the classical definition of probability was completed by Pierre-Simon Laplace.
Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory.
This culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, and measure theory and presented his axiom system for probability theory in 1933. This became the mostly undisputed axiomatic basis for modern probability theory; but, alternatives exist, such as the adoption of finite rather than countable additivity by Bruno de Finetti.
== Treatment ==
Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately. The measure theory-based treatment of probability covers the discrete, continuous, a mix of the two, and more.
=== Motivation ===
Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment. The power set of the sample space (or equivalently, the event space) is formed by considering all different collections of possible results. For example, rolling an honest die produces one of six possible results. One collection of possible results corresponds to getting an odd number. Thus, the subset {1,3,5} is an element of the power set of the sample space of dice rolls. These collections are called events. In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred.
Probability is a way of assigning every "event" a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) be assigned a value of one. To qualify as a probability distribution, the assignment of values must satisfy the requirement that if you look at a collection of mutually exclusive events (events that contain no common results, e.g., the events {1,6}, {3}, and {2,4} are all mutually exclusive), the probability that any of these events occurs is given by the sum of the probabilities of the events.
The probability that any one of the events {1,6}, {3}, or {2,4} will occur is 5/6. This is the same as saying that the probability of event {1,2,3,4,6} is 5/6. This event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1,2,3,4,5,6} has a probability of 1, that is, absolute certainty.
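The additivity requirement for mutually exclusive events can be checked directly for the die example; a small sketch:

```python
from fractions import Fraction

# Each face of a fair die has probability 1/6; an event's probability is the
# number of outcomes it contains divided by 6.
def prob(event):
    return Fraction(len(event), 6)

# The mutually exclusive events from the text.
a, b, c = {1, 6}, {3}, {2, 4}
union = a | b | c  # {1, 2, 3, 4, 6}

# Additivity: the probability that any of them occurs is the sum of their
# probabilities.
assert prob(union) == prob(a) + prob(b) + prob(c)
print(prob(union))  # prints: 5/6
```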
When doing calculations using the outcomes of an experiment, it is necessary that all those elementary events have a number assigned to them. This is done using a random variable. A random variable is a function that assigns to each elementary event in the sample space a real number. This function is usually denoted by a capital letter. In the case of a die, the assignment of a number to certain elementary events can be done using the identity function. This does not always work. For example, when flipping a coin the two possible outcomes are "heads" and "tails". In this example, the random variable X could assign to the outcome "heads" the number "0" ({\textstyle X({\text{heads}})=0}) and to the outcome "tails" the number "1" ({\displaystyle X({\text{tails}})=1}).
=== Discrete probability distributions ===
Discrete probability theory deals with events that occur in countable sample spaces.
Examples: Throwing dice, experiments with decks of cards, random walk, and tossing coins.
Classical definition:
Initially, the probability of an event was defined as the number of cases favorable to the event divided by the total number of outcomes possible in an equiprobable sample space; see Classical definition of probability.
For example, if the event is "occurrence of an even number when a die is rolled", the probability is given by {\displaystyle {\tfrac {3}{6}}={\tfrac {1}{2}}}, since 3 faces out of the 6 have even numbers and each face has the same probability of appearing.
Modern definition:
The modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by {\displaystyle \Omega }. It is then assumed that for each element {\displaystyle x\in \Omega \,}, an intrinsic "probability" value {\displaystyle f(x)\,} is attached, which satisfies the following properties:
{\displaystyle f(x)\in [0,1]{\mbox{ for all }}x\in \Omega \,;}
{\displaystyle \sum _{x\in \Omega }f(x)=1\,.}
That is, the probability function f(x) lies between zero and one for every value of x in the sample space Ω, and the sum of f(x) over all values x in the sample space Ω is equal to 1. An event is defined as any subset {\displaystyle E\,} of the sample space {\displaystyle \Omega \,}. The probability of the event {\displaystyle E\,} is defined as
{\displaystyle P(E)=\sum _{x\in E}f(x)\,.}
So, the probability of the entire sample space is 1, and the probability of the null event is 0.
The function {\displaystyle f(x)\,} mapping a point in the sample space to the "probability" value is called a probability mass function, abbreviated as pmf.
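For a finite sample space, the two defining properties of a pmf are easy to verify mechanically; a sketch for the fair-die pmf:

```python
from fractions import Fraction

# A pmf for a fair die: each outcome of the sample space gets value 1/6.
omega = {1, 2, 3, 4, 5, 6}
f = {x: Fraction(1, 6) for x in omega}

# Defining properties: f(x) in [0, 1] for all x, and the values sum to 1.
assert all(0 <= f[x] <= 1 for x in omega)
assert sum(f.values()) == 1

# The probability of an event E (a subset of omega) is the sum of f over E.
def P(event):
    return sum(f[x] for x in event)

print(P({1, 3, 5}))  # prints: 1/2
```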
=== Continuous probability distributions ===
Continuous probability theory deals with events that occur in a continuous sample space.
Classical definition:
The classical definition breaks down when confronted with the continuous case. See Bertrand's paradox.
Modern definition:
If the sample space of a random variable X is the set of real numbers ({\displaystyle \mathbb {R} }) or a subset thereof, then a function called the cumulative distribution function (CDF) {\displaystyle F\,} exists, defined by {\displaystyle F(x)=P(X\leq x)\,}. That is, F(x) returns the probability that X will be less than or equal to x.
The CDF necessarily satisfies the following properties.
{\displaystyle F\,} is a monotonically non-decreasing, right-continuous function;
{\displaystyle \lim _{x\rightarrow -\infty }F(x)=0\,;}
{\displaystyle \lim _{x\rightarrow \infty }F(x)=1\,.}
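These properties can be checked numerically for a concrete CDF. A sketch using the exponential distribution with rate 1 (an illustrative choice):

```python
import math

# CDF of an exponential distribution with rate 1:
# F(x) = 1 - exp(-x) for x >= 0, and F(x) = 0 for x < 0.
def F(x):
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

xs = [x / 10.0 for x in range(-50, 51)]

# Monotonically non-decreasing on a grid of points ...
assert all(F(a) <= F(b) for a, b in zip(xs, xs[1:]))
# ... and the limits at minus and plus infinity are 0 and 1.
assert F(-1e9) == 0.0
assert abs(F(1e9) - 1.0) < 1e-12
```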
The random variable {\displaystyle X} is said to have a continuous probability distribution if the corresponding CDF {\displaystyle F} is continuous. If {\displaystyle F\,} is absolutely continuous, then its derivative exists almost everywhere, and integrating the derivative gives us the CDF back again. In this case, the random variable X is said to have a probability density function (PDF) or simply density
{\displaystyle f(x)={\frac {dF(x)}{dx}}\,.}
For a set {\displaystyle E\subseteq \mathbb {R} }, the probability of the random variable X being in {\displaystyle E\,} is
{\displaystyle P(X\in E)=\int _{x\in E}dF(x)\,.}
In case the PDF exists, this can be written as
{\displaystyle P(X\in E)=\int _{x\in E}f(x)\,dx\,.}
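The agreement between the two expressions can be illustrated numerically: for a density with a known CDF, a Riemann sum of the PDF over an interval should match the CDF difference. A sketch with the exponential density (an illustrative choice):

```python
import math

# Exponential density f(x) = exp(-x) for x >= 0, with CDF F(x) = 1 - exp(-x).
f = lambda x: math.exp(-x)
F = lambda x: 1.0 - math.exp(-x)

# P(X in [a, b]) via a midpoint Riemann sum of the PDF ...
a, b, n = 0.5, 2.0, 100_000
h = (b - a) / n
riemann = sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# ... agrees with the CDF difference F(b) - F(a).
assert abs(riemann - (F(b) - F(a))) < 1e-6
print(round(riemann, 4))  # prints: 0.4712
```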
Whereas the PDF exists only for continuous random variables, the CDF exists for all random variables (including discrete random variables) that take values in {\displaystyle \mathbb {R} \,.} These concepts can be generalized for multidimensional cases on {\displaystyle \mathbb {R} ^{n}} and other continuous sample spaces.
=== Measure-theoretic probability theory ===
The utility of the measure-theoretic treatment of probability is that it unifies the discrete and the continuous cases, and makes the difference a question of which measure is used. Furthermore, it covers distributions that are neither discrete nor continuous nor mixtures of the two.
An example of such a distribution is a mix of discrete and continuous components: for example, a random variable that is 0 with probability 1/2 and takes a random value from a normal distribution with probability 1/2. It can still be studied to some extent by considering it to have a PDF of {\displaystyle (\delta [x]+\varphi (x))/2}, where {\displaystyle \delta [x]} is the Dirac delta function.
Other distributions may not even be a mix, for example, the Cantor distribution has no positive probability for any single point, neither does it have a density. The modern approach to probability theory solves these problems using measure theory to define the probability space:
Given any set {\displaystyle \Omega \,} (also called the sample space) and a σ-algebra {\displaystyle {\mathcal {F}}\,} on it, a measure {\displaystyle P\,} defined on {\displaystyle {\mathcal {F}}\,} is called a probability measure if {\displaystyle P(\Omega )=1.\,}
If {\displaystyle {\mathcal {F}}\,} is the Borel σ-algebra on the set of real numbers, then there is a unique probability measure on {\displaystyle {\mathcal {F}}\,} for any CDF, and vice versa. The measure corresponding to a CDF is said to be induced by the CDF. This measure coincides with the pmf for discrete variables and the PDF for continuous variables, making the measure-theoretic approach free of fallacies.
The probability of a set {\displaystyle E\,} in the σ-algebra {\displaystyle {\mathcal {F}}\,} is defined as
{\displaystyle P(E)=\int _{\omega \in E}\mu _{F}(d\omega )\,}
where the integration is with respect to the measure {\displaystyle \mu _{F}\,} induced by {\displaystyle F\,.}
Along with providing better understanding and unification of discrete and continuous probabilities, the measure-theoretic treatment also allows us to work with probabilities outside {\displaystyle \mathbb {R} ^{n}}, as in the theory of stochastic processes. For example, to study Brownian motion, probability is defined on a space of functions.
When it is convenient to work with a dominating measure, the Radon-Nikodym theorem is used to define a density as the Radon-Nikodym derivative of the probability distribution of interest with respect to this dominating measure. Discrete densities are usually defined as this derivative with respect to a counting measure over the set of all possible outcomes. Densities for absolutely continuous distributions are usually defined as this derivative with respect to the Lebesgue measure. If a theorem can be proved in this general setting, it holds for both discrete and continuous distributions as well as others; separate proofs are not required for discrete and continuous distributions.
== Classical probability distributions ==
Certain random variables occur very often in probability theory because they well describe many natural or physical processes. Their distributions, therefore, have gained special importance in probability theory. Some fundamental discrete distributions are the discrete uniform, Bernoulli, binomial, negative binomial, Poisson and geometric distributions. Important continuous distributions include the continuous uniform, normal, exponential, gamma and beta distributions.
== Convergence of random variables ==
In probability theory, there are several notions of convergence for random variables. They are listed below in the order of strength, i.e., any subsequent notion of convergence in the list implies convergence according to all of the preceding notions.
Weak convergence
A sequence of random variables {\displaystyle X_{1},X_{2},\dots ,\,} converges weakly to the random variable {\displaystyle X\,} if the sequence of their CDFs {\displaystyle F_{1},F_{2},\dots \,} converges to the CDF {\displaystyle F\,} of {\displaystyle X\,} at every point where {\displaystyle F\,} is continuous. Weak convergence is also called convergence in distribution.
Most common shorthand notation: {\displaystyle \displaystyle X_{n}\,{\xrightarrow {\mathcal {D}}}\,X}
Convergence in probability
The sequence of random variables {\displaystyle X_{1},X_{2},\dots \,} is said to converge towards the random variable {\displaystyle X\,} in probability if {\displaystyle \lim _{n\rightarrow \infty }P\left(\left|X_{n}-X\right|\geq \varepsilon \right)=0} for every ε > 0.
Most common shorthand notation: {\displaystyle \displaystyle X_{n}\,{\xrightarrow {P}}\,X}
Strong convergence
The sequence of random variables {\displaystyle X_{1},X_{2},\dots \,} is said to converge towards the random variable {\displaystyle X\,} strongly if {\displaystyle P(\lim _{n\rightarrow \infty }X_{n}=X)=1}. Strong convergence is also known as almost sure convergence.
Most common shorthand notation: {\displaystyle \displaystyle X_{n}\,{\xrightarrow {\mathrm {a.s.} }}\,X}
As the names indicate, weak convergence is weaker than strong convergence. In fact, strong convergence implies convergence in probability, and convergence in probability implies weak convergence. The reverse statements are not always true.
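Convergence in probability can be illustrated by simulation. In the toy example below, X_n = X + Z/n with Z standard normal, so |X_n - X| = |Z|/n and the deviation probability shrinks to 0 as n grows (the choice of noise distribution and of ε is illustrative):

```python
import random

random.seed(1)

# If X_n = X + Z/n with Z standard normal, then |X_n - X| = |Z|/n, so
# P(|X_n - X| >= eps) = P(|Z| >= n * eps) -> 0: convergence in probability.
eps = 0.1

def prob_deviation(n, trials=10_000):
    """Monte Carlo estimate of P(|X_n - X| >= eps)."""
    hits = sum(abs(random.gauss(0, 1)) / n >= eps for _ in range(trials))
    return hits / trials

estimates = [prob_deviation(n) for n in (1, 10, 100)]
print(estimates)  # the estimates shrink towards 0 as n grows
```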
=== Law of large numbers ===
Common intuition suggests that if a fair coin is tossed many times, then roughly half of the time it will turn up heads, and the other half it will turn up tails. Furthermore, the more often the coin is tossed, the more likely it should be that the ratio of the number of heads to the number of tails will approach unity. Modern probability theory provides a formal version of this intuitive idea, known as the law of large numbers. This law is remarkable because it is not assumed in the foundations of probability theory, but instead emerges from these foundations as a theorem. Since it links theoretically derived probabilities to their actual frequency of occurrence in the real world, the law of large numbers is considered as a pillar in the history of statistical theory and has had widespread influence.
The law of large numbers (LLN) states that the sample average
{\displaystyle {\overline {X}}_{n}={\frac {1}{n}}{\sum _{k=1}^{n}X_{k}}}
of a sequence of independent and identically distributed random variables {\displaystyle X_{k}} converges towards their common expectation (expected value) {\displaystyle \mu }, provided that the expectation of {\displaystyle |X_{k}|} is finite.
The weak and the strong law of large numbers differ in the form of convergence of random variables that they assert:
Weak law: {\displaystyle \displaystyle {\overline {X}}_{n}\,{\xrightarrow {P}}\,\mu } for {\displaystyle n\to \infty }
Strong law: {\displaystyle \displaystyle {\overline {X}}_{n}\,{\xrightarrow {\mathrm {a.\,s.} }}\,\mu } for {\displaystyle n\to \infty .}
It follows from the LLN that if an event of probability p is observed repeatedly during independent experiments, the ratio of the observed frequency of that event to the total number of repetitions converges towards p.
For example, if {\displaystyle Y_{1},Y_{2},...\,} are independent Bernoulli random variables taking values 1 with probability p and 0 with probability 1 − p, then {\displaystyle {\textrm {E}}(Y_{i})=p} for all i, so that {\displaystyle {\bar {Y}}_{n}} converges to p almost surely.
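This Bernoulli case is easy to simulate; a sketch (p = 0.3 is an arbitrary illustrative choice):

```python
import random

random.seed(42)

# i.i.d. Bernoulli(p) draws; by the LLN the sample average converges to p.
p = 0.3
draws = [1 if random.random() < p else 0 for _ in range(100_000)]

def sample_mean(n):
    return sum(draws[:n]) / n

# With 100,000 draws the sample average sits very close to p = 0.3.
print(sample_mean(100), sample_mean(100_000))
```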
=== Central limit theorem ===
The central limit theorem (CLT) explains the ubiquitous occurrence of the normal distribution in nature, and this theorem, according to David Williams, "is one of the great results of mathematics."
The theorem states that the average of many independent and identically distributed random variables with finite variance tends towards a normal distribution irrespective of the distribution followed by the original random variables. Formally, let {\displaystyle X_{1},X_{2},\dots \,} be independent random variables with mean {\displaystyle \mu } and variance {\displaystyle \sigma ^{2}>0.\,} Then the sequence of random variables
{\displaystyle Z_{n}={\frac {\sum _{i=1}^{n}(X_{i}-\mu )}{\sigma {\sqrt {n}}}}\,}
converges in distribution to a standard normal random variable.
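The CLT can also be illustrated by simulation: standardized sums of uniform(0, 1) variables, which have mean 1/2 and variance 1/12, behave approximately like a standard normal. A sketch; the sample sizes are illustrative:

```python
import math
import random
import statistics

random.seed(0)

# Standardize sums of n i.i.d. uniform(0, 1) variables, which have
# mean mu = 1/2 and variance sigma^2 = 1/12.
mu, sigma, n = 0.5, math.sqrt(1 / 12), 1_000

def z_n():
    s = sum(random.random() for _ in range(n))
    return (s - n * mu) / (sigma * math.sqrt(n))

zs = [z_n() for _ in range(5_000)]

# Approximate standard-normal behaviour: mean near 0, stdev near 1, and
# roughly 95% of the mass within 1.96 standard deviations.
print(round(statistics.mean(zs), 2), round(statistics.stdev(zs), 2))
print(sum(abs(z) < 1.96 for z in zs) / len(zs))
```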
For some classes of random variables, the classic central limit theorem works rather fast, as illustrated in the Berry–Esseen theorem; for example, it applies quickly to distributions with finite first, second, and third moments from the exponential family. On the other hand, for some random variables of the heavy-tail and fat-tail variety it works very slowly or may not work at all; in such cases one may use the generalized central limit theorem (GCLT).
== See also ==
Mathematical statistics – Branch of statistics
Expected value – Average value of a random variable
Variance – Statistical measure of how far values spread from their average
Fuzzy logic – System for reasoning about vagueness
Fuzzy measure theory – Theory of generalized measures in which the additive property is replaced by the weaker property of monotonicity
Glossary of probability and statistics
Likelihood function – Function related to statistics and probability theory
Notation in probability
Predictive modelling – Form of modelling that uses statistics to predict outcomes
Probabilistic logic – Applications of logic under uncertainty
Probabilistic proofs of non-probabilistic theorems
Probability distribution – Mathematical function for the probability a given outcome occurs in an experiment
Probability axioms – Foundations of probability theory
Probability interpretations – Philosophical interpretation of the axioms of probability
Probability space – Mathematical concept
Statistical independence – When the occurrence of one event does not affect the likelihood of another
Statistical physics – Physics of many interacting particles
Subjective logic – Type of probabilistic logic
Pairwise independence § Probability of the union of pairwise independent events – Set of random variables of which any two are independent
=== Lists ===
Catalog of articles in probability theory
List of probability topics
List of publications in statistics
List of statistical topics
== References ==
=== Citations ===
=== Sources ===
In the design of experiments, hypotheses are applied to experimental units in a treatment group. In comparative experiments, members of a control group receive a standard treatment, a placebo, or no treatment at all. There may be more than one treatment group, more than one control group, or both.
A placebo control group can be used to support a double-blind study, in which some subjects are given an ineffective treatment (in medical studies typically a sugar pill) to minimize differences in the experiences of subjects in the different groups; this is done in a way that ensures no participant in the experiment (subject or experimenter) knows to which group each subject belongs. In such cases, a third, non-treatment control group can be used to measure the placebo effect directly, as the difference between the responses of placebo subjects and untreated subjects, perhaps paired by age group or other factors (such as being twins).
For the conclusions drawn from the results of an experiment to have validity, it is essential that the items or patients assigned to treatment and control groups be representative of the same population. In some experiments, such as many in agriculture or psychology, this can be achieved by randomly assigning items from a common population to one of the treatment and control groups. In studies of twins involving just one treatment group and a control group, it is statistically efficient to do this random assignment separately for each pair of twins, so that one is in the treatment group and one in the control group.
In some medical studies, where it may be unethical not to treat patients who present with symptoms, controls may be given a standard treatment, rather than no treatment at all. An alternative is to select controls from a wider population, provided that this population is well-defined and that those presenting with symptoms at the clinic are representative of those in the wider population. Another method to reduce ethical concerns would be to test early-onset symptoms, with enough time later to offer real treatments to the control subjects, and let those subjects know the first treatments are "experimental" and might not be as effective as later treatments, again with the understanding there would be ample time to try other remedies.
== Relevance ==
A clinical control group can be a placebo arm or it can involve an old method used to address a clinical outcome when testing a new idea. For example, in a 1995 study published in the British Medical Journal on the effects of strict versus more relaxed blood pressure control in diabetic patients, the clinical control group comprised the diabetic patients who did not receive tight blood pressure control. In order to qualify for the study, the patients had to meet the inclusion criteria and not match the exclusion criteria. Once the study population was determined, the patients were placed in either the experimental group (strict blood pressure control, <150/80 mmHg) or the control group (less strict control, <180/110 mmHg). There were a wide variety of end points for patients, such as death, myocardial infarction, and stroke. The study was stopped before completion because strict blood pressure control was so much more effective at preventing end points that continuing the relaxed-control group was no longer considered ethical.
The clinical control group is not always a placebo group. Sometimes the clinical control group can involve comparing a new drug to an older drug in a superiority trial; in that case, the clinical control group is the older medication rather than the new medication. For example, in the ALLHAT trial, thiazide diuretics were demonstrated to be superior to calcium channel blockers or angiotensin-converting enzyme (ACE) inhibitors in reducing cardiovascular events in high-risk patients with hypertension. In the ALLHAT study, the clinical control group was not a placebo; it was the ACE inhibitor and calcium channel blocker arms.
Overall, clinical control groups can either be a placebo or an old standard of therapy.
== See also ==
Scientific control
Wait list control group
Blocking (statistics)
Hawthorne effect
== References ==
This is a list of cryptographers. Cryptography is the practice and study of techniques for secure communication in the presence of third parties called adversaries.
== Pre-twentieth century ==
Al-Khalil ibn Ahmad al-Farahidi: wrote a (now lost) book on cryptography titled the "Book of Cryptographic Messages".
Al-Kindi, 9th century Arabic polymath and originator of frequency analysis.
Athanasius Kircher, attempted to decipher encrypted messages
Augustus the Younger, Duke of Brunswick-Lüneburg, wrote a standard book on cryptography
Ibn Wahshiyya: published several cipher alphabets that were used to encrypt magic formulas.
John Dee, wrote an occult book, which in fact was a cover for encrypted text
Ibn 'Adlan: 13th-century cryptographer who made important contributions on the sample size of the frequency analysis.
Francesco I Gonzaga, Duke of Mantua, used the earliest known example of a homophonic substitution cipher, in the early 1400s.
Ibn al-Durayhim: gave detailed descriptions of eight cipher systems that discussed substitution ciphers, leading to the earliest suggestion of a "tableau" of the kind that two centuries later became known as the "Vigenère table".
Ahmad al-Qalqashandi: Author of Subh al-a 'sha, a fourteen volume encyclopedia in Arabic, which included a section on cryptology. The list of ciphers in this work included both substitution and transposition, and for the first time, a cipher with multiple substitutions for each plaintext letter.
Charles Babbage, UK, 19th century mathematician who, about the time of the Crimean War, secretly developed an effective attack against polyalphabetic substitution ciphers.
Leone Battista Alberti, polymath/universal genius, inventor of polyalphabetic substitution (more specifically, the Alberti cipher), and what may have been the first mechanical encryption aid.
Giovanni Battista della Porta, author of a seminal work on cryptanalysis.
Étienne Bazeries, French, military, considered one of the greatest natural cryptanalysts. Best known for developing the "Bazeries Cylinder" and his influential 1901 text Les Chiffres secrets dévoilés ("Secret ciphers unveiled").
Giovan Battista Bellaso, Italian cryptologist
Giovanni Fontana (engineer), wrote two encrypted books
Hildegard of Bingen used her own alphabet to write letters.
Julius Caesar, Roman general/politician, has the Caesar cipher named after him, and a lost work on cryptography by Probus (probably Valerius Probus) is claimed to have covered his use of military cryptography in some detail. It is likely that he did not invent the cipher named after him, as other substitution ciphers were in use well before his time.
Friedrich Kasiski, author of the first published attack on the Vigenère cipher, now known as the Kasiski test.
Auguste Kerckhoffs, known for contributing cipher design principles.
Edgar Allan Poe, author of the book, A Few Words on Secret Writing, an essay on cryptanalysis, and The Gold Bug, a short story featuring the use of letter frequencies in the solution of a cryptogram.
Johannes Trithemius, mystic and first to describe tableaux (tables) for use in polyalphabetic substitution. Wrote an early work on steganography and cryptography generally.
Philips van Marnix, lord of Sint-Aldegonde, deciphered Spanish messages for William the Silent during the Dutch revolt against the Spanish.
John Wallis codebreaker for Cromwell and Charles II
Sir Charles Wheatstone, inventor of the so-called Playfair cipher and general polymath.
== World War I and World War II wartime cryptographers ==
Richard J. Hayes (1902–1976) Irish code breaker in World War II.
Jean Argles (1925–2023), British code breaker in World War II
Arne Beurling (1905–1986), Swedish mathematician and cryptographer.
Lambros D. Callimahos, US, NSA, worked with William F. Friedman, taught NSA cryptanalysts.
Ann Z. Caracristi, US, SIS, solved Japanese Army codes in World War II, later became deputy director of National Security Agency.
Alec Naylor Dakin, UK, Hut 4, Bletchley Park during World War II.
Ludomir Danilewicz, Poland, Biuro Szyfrów, helped construct copies of the Enigma machine for use in breaking its ciphers.
Patricia Davies (born 1923), British code breaker in World War II
Alastair Denniston, UK, director of the Government Code and Cypher School at Bletchley Park from 1919 to 1942.
Agnes Meyer Driscoll, US, broke several Japanese ciphers.
Genevieve Grotjan Feinstein, US, SIS, noticed the pattern that led to breaking Purple.
Elizebeth Smith Friedman, US, Coast Guard and US Treasury Department cryptographer, co-invented modern cryptography.
William F. Friedman, US, SIS, introduced statistical methods into cryptography.
Cecilia Elspeth Giles, UK, Bletchley Park
Jack Good UK, Government Code and Cypher School, Bletchley Park worked with Alan Turing on the statistical approach to cryptanalysis.
Nigel de Grey, UK, Room 40, played an important role in the decryption of the Zimmermann Telegram during World War I.
Dillwyn Knox, UK, Room 40 and Government Code and Cypher School, broke commercial Enigma cipher as used by the Abwehr (German military intelligence).
Solomon Kullback US, SIS, helped break the Japanese Red cipher, later Chief Scientist at the National Security Agency.
Frank W. Lewis US, worked with William F. Friedman, puzzle master
William Hamilton Martin and Bernon F. Mitchell, U.S. National Security Agency cryptologists who defected to the Soviet Union in 1960
Leo Marks UK, Special Operations Executive cryptography director, author and playwright.
Donald Michie UK, Government Code and Cypher School, Bletchley Park worked on Cryptanalysis of the Lorenz cipher and the Colossus computer.
Consuelo Milner, US, cryptographer for the Naval Applied Science Lab
Max Newman, UK, Government Code and Cypher School, Bletchley Park headed the section that developed the Colossus computer for Cryptanalysis of the Lorenz cipher.
Georges Painvin French, broke the ADFGVX cipher during the First World War.
Marian Rejewski, Poland, Biuro Szyfrów, a Polish mathematician and cryptologist who, in 1932, solved the Enigma machine with plugboard, the main cipher device then in use by Germany, becoming the first in history to break the cipher.
John Joseph Rochefort US, made major contributions to the break into JN-25 after the attack on Pearl Harbor.
Leo Rosen US, SIS, deduced that the Japanese Purple machine was built with stepping switches.
Frank Rowlett US, SIS, leader of the team that broke Purple.
Jerzy Różycki, Poland, Biuro Szyfrów, helped break German Enigma ciphers.
Luigi Sacco, Italy, Italian General and author of the Manual of Cryptography.
Laurance Safford US, chief cryptographer for the US Navy for more than two decades, including World War II.
Abraham Sinkov US, SIS.
John Tiltman UK, Brigadier, Room 40, Government Code and Cypher School, Bletchley Park, GCHQ, NSA. Extraordinary length and range of cryptographic service
Alan Mathison Turing UK, Government Code and Cypher School, Bletchley Park where he was chief cryptographer, inventor of the Bombe that was used in decrypting Enigma, mathematician, logician, and renowned pioneer of Computer Science.
William Thomas Tutte UK, Government Code and Cypher School, Bletchley Park, with John Tiltman, broke Lorenz SZ 40/42 encryption machine (codenamed Tunny) leading to the development of the Colossus computer.
Betty Webb (code breaker), British codebreaker during World War II
William Stone Weedon, US,
Gordon Welchman UK, Government Code and Cypher School, Bletchley Park where he was head of Hut Six (German Army and Air Force Enigma cipher decryption), made an important contribution to the design of the Bombe.
Herbert Yardley US, MI8 (US), author "The American Black Chamber", worked in China as a cryptographer and briefly in Canada.
Henryk Zygalski, Poland, Biuro Szyfrów, inventor of Zygalski sheets, broke German Enigma ciphers pre-1939.
Karl Stein German, Head of the Division IVa (security of own processes) at Cipher Department of the High Command of the Wehrmacht. Discoverer of Stein manifold.
Gisbert Hasenjaeger German, Tester of the Enigma. Discovered new proof of the completeness theorem of Kurt Gödel for predicate logic.
Heinrich Scholz German, Worked in Division IVa at OKW. Logician and pen friend of Alan Turing.
Gottfried Köthe German, Cryptanalyst at OKW. Mathematician created theory of topological vector spaces.
Ernst Witt German, Mathematician at OKW. Several mathematical discoveries are named after him.
Helmut Grunsky German, worked in complex analysis and geometric function theory. He introduced Grunsky's theorem and the Grunsky inequalities.
Georg Hamel.
Oswald Teichmüller German, temporarily employed at OKW as cryptanalyst. Introduced quasiconformal mappings and differential geometric methods into complex analysis. Described by Friedrich L. Bauer as an extreme Nazi and a true genius.
Hans Rohrbach German, Mathematician at AA/Pers Z, the German department of state, civilian diplomatic cryptological agency.
Wolfgang Franz German, Mathematician who worked at OKW. Later significant discoveries in Topology.
Werner Weber German, Mathematician at OKW.
Georg Aumann German, Mathematician at OKW. His doctoral student was Friedrich L. Bauer.
Otto Leiberich German, Mathematician who worked as a linguist at the Cipher Department of the High Command of the Wehrmacht.
Alexander Aigner German, Mathematician who worked at OKW.
Erich Hüttenhain German, Chief cryptanalyst of and led Chi IV (section 4) of the Cipher Department of the High Command of the Wehrmacht. A German mathematician and cryptanalyst who tested a number of German cipher machines and found them to be breakable.
Wilhelm Fenner German, Chief Cryptologist and Director of Cipher Department of the High Command of the Wehrmacht.
Walther Fricke German, Worked alongside Dr Erich Hüttenhain at Cipher Department of the High Command of the Wehrmacht. Mathematician, logician, cryptanalyst and linguist.
Fritz Menzer German. Inventor of SG39 and SG41.
== Other pre-computer ==
Rosario Candela, US, Architect and notable amateur cryptologist who authored books and taught classes on the subject to civilians at Hunter College.
Claude Elwood Shannon, US, founder of information theory, proved the one-time pad to be unbreakable.
== Modern ==
See also: Category:Modern cryptographers for a more exhaustive list.
=== Symmetric-key algorithm inventors ===
Ross Anderson, UK, University of Cambridge, co-inventor of the Serpent cipher.
Paulo S. L. M. Barreto, Brazilian, University of São Paulo, co-inventor of the Whirlpool hash function.
George Blakley, US, independent inventor of secret sharing.
Eli Biham, Israel, co-inventor of the Serpent cipher.
Don Coppersmith, co-inventor of DES and MARS ciphers.
Joan Daemen, Belgian, Radboud University, co-developer of Rijndael which became the Advanced Encryption Standard (AES), and Keccak which became SHA-3.
Horst Feistel, German, IBM, namesake of Feistel networks and Lucifer cipher.
Lars Knudsen, Denmark, co-inventor of the Serpent cipher.
Ralph Merkle, US, inventor of Merkle trees.
Bart Preneel, Belgian, KU Leuven, co-inventor of RIPEMD-160.
Vincent Rijmen, Belgian, KU Leuven, co-developer of Rijndael which became the Advanced Encryption Standard (AES).
Ronald L. Rivest, US, MIT, inventor of RC cipher series and MD algorithm series.
Bruce Schneier, US, inventor of Blowfish and co-inventor of Twofish and Threefish.
Xuejia Lai, CH, co-inventor of International Data Encryption Algorithm (IDEA).
Adi Shamir, Israel, Weizmann Institute, inventor of secret sharing.
Walter Tuchman, US, led the Data Encryption Standard development team at IBM; inventor of Triple DES.
=== Asymmetric-key algorithm inventors ===
Leonard Adleman, US, USC, the 'A' in RSA.
David Chaum, US, inventor of blind signatures.
Clifford Cocks, UK, GCHQ, first inventor of RSA, a fact that remained secret until 1997 and so was unknown to Rivest, Shamir, and Adleman.
Whitfield Diffie, US, (public) co-inventor of the Diffie-Hellman key-exchange protocol.
Taher Elgamal, US (born Egyptian), inventor of the Elgamal discrete log cryptosystem.
Shafi Goldwasser, US and Israel, MIT and Weizmann Institute, co-discoverer of zero-knowledge proofs, and of Semantic security.
Martin Hellman, US, (public) co-inventor of the Diffie-Hellman key-exchange protocol.
Neal Koblitz, independent co-creator of elliptic curve cryptography.
Alfred Menezes, co-inventor of MQV, an elliptic curve technique.
Silvio Micali, US (born Italian), MIT, co-discoverer of zero-knowledge proofs, and of Semantic security.
Victor Miller, independent co-creator of elliptic curve cryptography.
David Naccache, inventor of the Naccache–Stern cryptosystem and of the Naccache–Stern knapsack cryptosystem.
Moni Naor, co-inventor of the Naor–Yung encryption paradigm for CCA security.
Rafail Ostrovsky, co-inventor of Oblivious RAM, of single-server Private Information Retrieval, and proactive cryptosystems.
Pascal Paillier, inventor of Paillier encryption.
Michael O. Rabin, Israel, inventor of Rabin encryption.
Ronald L. Rivest, US, MIT, the 'R' in RSA.
Adi Shamir, Israel, Weizmann Institute, the 'S' in RSA.
Victor Shoup, US, NYU Courant, co-inventor of the Cramer-Shoup cryptosystem.
Moti Yung, co-inventor of the Naor–Yung encryption paradigm for CCA security, of threshold cryptosystems, and proactive cryptosystems.
=== Cryptanalysts ===
Joan Clarke, English cryptanalyst and numismatist best known for her work as a code-breaker at Bletchley Park during the Second World War.
Ross Anderson, UK.
Eli Biham, Israel, co-discoverer of differential cryptanalysis and Related-key attack.
Matt Blaze, US.
Dan Boneh, US, Stanford University.
Niels Ferguson, Netherlands, co-inventor of Twofish and Fortuna.
Ian Goldberg, Canada, University of Waterloo.
Lars Knudsen, Denmark, DTU, discovered integral cryptanalysis.
Paul Kocher, US, discovered differential power analysis.
Mitsuru Matsui, Japan, discoverer of linear cryptanalysis.
Kenny Paterson, UK, previously Royal Holloway, now ETH Zurich, known for several attacks on cryptosystems.
David Wagner, US, UC Berkeley, co-discoverer of the slide and boomerang attacks.
Xiaoyun Wang, the People's Republic of China, known for MD5 and SHA-1 hash function attacks.
Alex Biryukov, University of Luxembourg, known for impossible differential cryptanalysis and slide attack.
Moti Yung, Kleptography.
Bill Buchanan, creator of ASecuritySite, one of the most comprehensive cryptography websites in the world.
=== Algorithmic number theorists ===
Daniel J. Bernstein, US, developed several popular algorithms, fought US government restrictions in Bernstein v. United States.
Don Coppersmith, US
Dorian M. Goldfeld, US, Along with Michael Anshel and Iris Anshel invented the Anshel–Anshel–Goldfeld key exchange and the Algebraic Eraser. They also helped found Braid Group Cryptography.
Victor Shoup, US, NYU Courant.
=== Theoreticians ===
Mihir Bellare, US, UCSD, co-proposer of the Random oracle model.
Dan Boneh, US, Stanford.
Gilles Brassard, Canada, Université de Montréal. Co-inventor of quantum cryptography.
Claude Crépeau, Canada, McGill University.
Oded Goldreich, Israel, Weizmann Institute, author of Foundations of Cryptography.
Shafi Goldwasser, US and Israel.
Silvio Micali, US, MIT.
Rafail Ostrovsky, US, UCLA.
Charles Rackoff, co-discoverer of zero-knowledge proofs.
Oded Regev, inventor of learning with errors.
Phillip Rogaway, US, UC Davis, co-proposer of the Random oracle model.
Amit Sahai, US, UCLA.
Victor Shoup, US, NYU Courant.
Gustavus Simmons, US, Sandia, authentication theory.
Moti Yung, US, Google.
=== Government cryptographers ===
Clifford Cocks, UK, GCHQ, secret inventor of the algorithm later known as RSA.
James H. Ellis, UK, GCHQ, secretly proved the possibility of asymmetric encryption.
Lowell Frazer, US, National Security Agency
Laura Holmes, US, National Security Agency
Julia Wetzel, US, National Security Agency
Malcolm Williamson, UK, GCHQ, secret inventor of the protocol later known as the Diffie–Hellman key exchange.
=== Cryptographer businesspeople ===
Bruce Schneier, US, CTO and founder of Counterpane Internet Security, Inc. and cryptography author.
Scott Vanstone, Canada, founder of Certicom and elliptic curve cryptography proponent.
== See also ==
Cryptography
== References ==
== External links ==
List of cryptographers' home pages
The Genetical Theory of Natural Selection is a book by Ronald Fisher which combines Mendelian genetics with Charles Darwin's theory of natural selection, with Fisher being the first to argue that "Mendelism therefore validates Darwinism" and stating with regard to mutations that "The vast majority of large mutations are deleterious; small mutations are both far more frequent and more likely to be useful", thus refuting orthogenesis. First published in 1930 by The Clarendon Press, it is one of the most important books of the modern synthesis, and helped define population genetics. It has been described by J. F. Crow as the "deepest book on evolution since Darwin".
It is commonly cited in biology books, outlining many concepts that are still considered important such as Fisherian runaway, Fisher's principle, reproductive value, Fisher's fundamental theorem of natural selection, Fisher's geometric model, the sexy son hypothesis, mimicry and the evolution of dominance. It was dictated to his wife in the evenings as he worked at Rothamsted Research in the day.
== Contents ==
In the preface, Fisher considers some general points, including that there must be an understanding of natural selection distinct from that of evolution, and that the then-recent advances in the field of genetics (see history of genetics) now allowed this. In the first chapter, Fisher considers the nature of inheritance, rejecting blending inheritance, because it would eliminate genetic variance, in favour of particulate inheritance. The second chapter introduces Fisher's fundamental theorem of natural selection. The third considers the evolution of dominance, which Fisher believed was strongly influenced by modifiers. Other chapters discuss parental investment; Fisher's geometric model, concerning how spontaneous mutations affect biological fitness; Fisher's principle, which explains why the sex ratio between males and females is almost always 1:1; reproductive value, examining the demography of reproduction; and the Fisherian runaway, which explores how sexual selection can lead to a positive feedback loop, producing features such as the peacock's plumage.
=== Eugenics ===
The last five chapters (8-12) include Fisher's concern about dysgenics and proposals for eugenics. Fisher attributed the fall of civilizations to the fertility of their upper classes being diminished, and used British 1911 census data to show an inverse relationship between fertility and social class, partly due, he claimed, to the lower financial costs and hence increasing social status of families with fewer children. He proposed the abolition of extra allowances to large families, with the allowances proportional to the earnings of the father. He served in several official committees to promote eugenics. In 1934, he resigned from the Eugenics Society over a dispute about increasing the power of scientists within the movement.
== Editions ==
A second, slightly revised edition was republished in 1958. In 1999, a third, variorum edition (ISBN 0-19-850440-3) was published, edited by professor John Henry Bennett of the University of Adelaide; it contains the original 1930 text annotated with the 1958 alterations, together with notes and alterations accidentally omitted from the second edition.
== Dedication ==
The book is dedicated to Major Leonard Darwin, Fisher's friend, correspondent and son of Charles Darwin, "In gratitude for the encouragement, given to the author, during the last fifteen years, by discussing many of the problems dealt with in this book."
== Reviews ==
The book was reviewed by Charles Galton Darwin, who sent Fisher his copy of the book, with notes in the margin, starting a correspondence which lasted several years. The book also had a major influence on W. D. Hamilton's theories on the genetic basis of kin selection.
John Henry Bennett gave an account of the writing and reception of the book.
Sewall Wright, who had many disagreements with Fisher, reviewed the book and wrote that it was "certain to take rank as one of the major contributions to the theory of evolution." J. B. S. Haldane described it as "brilliant." Reginald Punnett was negative, however.
The book was largely overlooked for 40 years, and in particular Fisher's fundamental theorem of natural selection was misunderstood. The work had a great effect on W. D. Hamilton, who discovered it as an undergraduate at the University of Cambridge and later praised it in comments that appear on the rear cover of the 1999 variorum edition.
The publication of the variorum edition in 1999 led to renewed interest in the work and reviews by Laurence Cook, Brian Charlesworth, James F. Crow, and A. W. F. Edwards.
== References ==
== Bibliography ==
== External links ==
The Genetical Theory Of Natural Selection at the Internet Archive
The Genetical Theory Of Natural Selection
In computer science, an integer is a datum of integral data type, a data type that represents some range of mathematical integers. Integral data types may be of different sizes and may or may not be allowed to contain negative values. Integers are commonly represented in a computer as a group of binary digits (bits). The size of the grouping varies so the set of integer sizes available varies between different types of computers. Computer hardware nearly always provides a way to represent a processor register or memory address as an integer.
== Value and representation ==
The value of an item with an integral type is the mathematical integer that it corresponds to. Integral types may be unsigned (capable of representing only non-negative integers) or signed (capable of representing negative integers as well).
An integer value is typically specified in the source code of a program as a sequence of digits optionally prefixed with + or −. Some programming languages allow other notations, such as hexadecimal (base 16) or octal (base 8). Some programming languages also permit digit group separators.
The internal representation of this datum is the way the value is stored in the computer's memory. Unlike mathematical integers, a typical datum in a computer has some minimum and maximum possible value.
The most common representation of a positive integer is a string of bits, using the binary numeral system. The order of the memory bytes storing the bits varies; see endianness. The width, precision, or bitness of an integral type is the number of bits in its representation. An integral type with n bits can encode 2^n numbers; for example an unsigned type typically represents the non-negative values 0 through 2^n − 1. Other encodings of integer values to bit patterns are sometimes used, for example binary-coded decimal or Gray code, or as printed character codes such as ASCII.
There are four well-known ways to represent signed numbers in a binary computing system. The most common is two's complement, which allows a signed integral type with n bits to represent numbers from −2^(n−1) through 2^(n−1) − 1. Two's complement arithmetic is convenient because there is a perfect one-to-one correspondence between representations and values (in particular, no separate +0 and −0), and because addition, subtraction and multiplication do not need to distinguish between signed and unsigned types. Other possibilities include offset binary, sign-magnitude, and ones' complement.
Some computer languages define integer sizes in a machine-independent way; others have varying definitions depending on the underlying processor word size. Not all language implementations define variables of all integer sizes, and defined sizes may not even be distinct in a particular implementation. An integer in one programming language may be a different size in a different language, on a different processor, or in an execution context of different bitness; see § Words.
Some older computer architectures used decimal representations of integers, stored in binary-coded decimal (BCD) or other format. These values generally require data sizes of 4 bits per decimal digit (sometimes called a nibble), usually with additional bits for a sign. Many modern CPUs provide limited support for decimal integers as an extended datatype, providing instructions for converting such values to and from binary values. Depending on the architecture, decimal integers may have fixed sizes (e.g., 7 decimal digits plus a sign fit into a 32-bit word), or may be variable-length (up to some maximum digit size), typically occupying two digits per byte (octet).
== Common integral data types ==
Different CPUs support different integral data types. Typically, hardware will support both signed and unsigned types, but only a small, fixed set of widths.
The table above lists integral type widths that are supported in hardware by common processors. High-level programming languages provide more possibilities. It is common to have a 'double width' integral type that has twice as many bits as the biggest hardware-supported type. Many languages also have bit-field types (a specified number of bits, usually constrained to be less than the maximum hardware-supported width) and range types (that can represent only the integers in a specified range).
Some languages, such as Lisp, Smalltalk, REXX, Haskell, Python, and Raku, support arbitrary precision integers (also known as infinite precision integers or bignums). Other languages that do not support this concept as a top-level construct may have libraries available to represent very large numbers using arrays of smaller variables, such as Java's BigInteger class or Perl's "bigint" package. These use as much of the computer's memory as is necessary to store the numbers; however, a computer has only a finite amount of storage, so they, too, can only represent a finite subset of the mathematical integers. These schemes support very large numbers; for example one kilobyte of memory could be used to store numbers up to 2466 decimal digits long.
A Boolean type is a type that can represent only two values: 0 and 1, usually identified with false and true respectively. This type can be stored in memory using a single bit, but is often given a full byte for convenience of addressing and speed of access.
A four-bit quantity is known as a nibble (when eating, being smaller than a bite) or nybble (being a pun on the form of the word byte). One nibble corresponds to one digit in hexadecimal and holds one digit or a sign code in binary-coded decimal.
=== Bytes and octets ===
The term byte initially meant 'the smallest addressable unit of memory'. In the past, 5-, 6-, 7-, 8-, and 9-bit bytes have all been used. There have also been computers that could address individual bits ('bit-addressed machine'), or that could only address 16- or 32-bit quantities ('word-addressed machine'). The term byte was usually not used at all in connection with bit- and word-addressed machines.
The term octet always refers to an 8-bit quantity. It is mostly used in the field of computer networking, where computers with different byte widths might have to communicate.
In modern usage byte almost invariably means eight bits, since all other sizes have fallen into disuse; thus byte has come to be synonymous with octet.
=== Words ===
The term 'word' is used for a small group of bits that are handled simultaneously by processors of a particular architecture. The size of a word is thus CPU-specific. Many different word sizes have been used, including 6-, 8-, 12-, 16-, 18-, 24-, 32-, 36-, 39-, 40-, 48-, 60-, and 64-bit. Since it is architectural, the size of a word is usually set by the first CPU in a family, rather than the characteristics of a later compatible CPU. The meanings of terms derived from word, such as longword, doubleword, quadword, and halfword, also vary with the CPU and OS.
Practically all new desktop processors are capable of using 64-bit words, though embedded processors with 8- and 16-bit word size are still common. The 36-bit word length was common in the early days of computers.
One important cause of non-portability of software is the incorrect assumption that all computers have the same word size as the computer used by the programmer. For example, if a programmer using the C language incorrectly declares as int a variable that will be used to store values greater than 2^15−1, the program will fail on computers with 16-bit integers. That variable should have been declared as long, which has at least 32 bits on any computer. Programmers may also incorrectly assume that a pointer can be converted to an integer without loss of information, which may work on (some) 32-bit computers, but fail on 64-bit computers with 64-bit pointers and 32-bit integers. This issue is resolved by C99 in stdint.h in the form of intptr_t.
The bitness of a program may refer to the word size (or bitness) of the processor on which it runs, or it may refer to the width of a memory address or pointer, which can differ between execution modes or contexts. For example, 64-bit versions of Microsoft Windows support existing 32-bit binaries, and programs compiled for Linux's x32 ABI run in 64-bit mode yet use 32-bit memory addresses.
=== Standard integer ===
The standard integer size is platform-dependent.
In C, it is denoted by int and required to be at least 16 bits. Windows and Unix systems have 32-bit ints on both 32-bit and 64-bit architectures.
=== Short integer ===
A short integer can represent a whole number that may take less storage, while having a smaller range, compared with a standard integer on the same machine.
In C, it is denoted by short. It is required to be at least 16 bits, and is often smaller than a standard integer, but this is not required. A conforming program can assume that it can safely store values between −(2^15−1) and 2^15−1, but it may not assume that the range is not larger. In Java, a short is always a 16-bit integer. In the Windows API, the datatype SHORT is defined as a 16-bit signed integer on all machines.
=== Long integer ===
A long integer can represent a whole integer whose range is greater than or equal to that of a standard integer on the same machine.
In C, it is denoted by long. It is required to be at least 32 bits, and may or may not be larger than a standard integer. A conforming program can assume that it can safely store values between −(2^31−1) and 2^31−1, but it may not assume that the range is not larger.
=== Long long ===
In the C99 version of the C programming language and the C++11 version of C++, a long long type is supported that has double the minimum capacity of the standard long. This type is not supported by compilers that require C code to be compliant with the previous C++ standard, C++03, because the long long type did not exist in C++03. For an ANSI/ISO compliant compiler, the minimum requirements for the specified ranges, that is, −(2^63−1) to 2^63−1 for signed and 0 to 2^64−1 for unsigned, must be fulfilled; however, extending this range is permitted. This can be an issue when exchanging code and data between platforms, or doing direct hardware access. Thus, there are several sets of headers providing platform independent exact width types. The C standard library provides stdint.h; this was introduced in C99 and C++11.
== Syntax ==
Integer literals can be written as regular Arabic numerals, consisting of a sequence of digits and with negation indicated by a minus sign before the value. However, most programming languages disallow use of commas or spaces for digit grouping. Examples of integer literals are:
42
10000
-233000
There are several alternate methods for writing integer literals in many programming languages:
Many programming languages, especially those influenced by C, prefix an integer literal with 0X or 0x to represent a hexadecimal value, e.g. 0xDEADBEEF. Other languages may use a different notation, e.g. some assembly languages append an H or h to the end of a hexadecimal value.
Perl, Ruby, Java, Julia, D, Go, C#, Rust, Python (starting from version 3.6), and PHP (from version 7.4.0 onwards) allow embedded underscores for clarity, e.g. 10_000_000, and fixed-form Fortran ignores embedded spaces in integer literals. C (starting from C23) and C++ use single quotes for this purpose.
In C and C++, a leading zero indicates an octal value, e.g. 0755. This was primarily intended to be used with Unix modes; however, it has been criticized because normal integers may also lead with zero. As such, Python, Ruby, Haskell, and OCaml prefix octal values with 0O or 0o, following the layout used by hexadecimal values.
Several languages, including Java, C#, Scala, Python, Ruby, OCaml, C (starting from C23) and C++ can represent binary values by prefixing a number with 0B or 0b.
== Extreme values ==
In many programming languages, there exist predefined constants representing the least and the greatest values representable with a given integer type.
Names for these include
SmallBASIC: MAXINT
Java: java.lang.Integer.MAX_VALUE, java.lang.Integer.MIN_VALUE
Corresponding fields exist for the other integer classes in Java.
C: INT_MAX, etc.
GLib: G_MININT, G_MAXINT, G_MAXUINT, ...
Haskell: minBound, maxBound
Pascal: MaxInt
Python 2: sys.maxint
Turing: maxint
== See also ==
Arbitrary-precision arithmetic
Binary-coded decimal (BCD)
C data types
Integer overflow
Signed number representations
== Notes ==
== References ==
A biological network is a method of representing systems as complex sets of binary interactions or relations between various biological entities. In general, networks or graphs are used to capture relationships between entities or objects. A typical graphing representation consists of a set of nodes connected by edges.
== History of networks ==
As early as 1736 Leonhard Euler analyzed a real-world problem known as the Seven Bridges of Königsberg, which established the foundation of graph theory. From the 1930s to the 1950s the study of random graphs was developed. During the mid-1990s, it was discovered that many different types of "real" networks have structural properties quite different from random networks. In the late 2000s, scale-free and small-world networks began shaping the emergence of systems biology, network biology, and network medicine. In 2014, graph-theoretical methods were used by Frank Emmert-Streib to analyze biological networks.
In the 1980s, researchers started viewing DNA or genomes as the dynamic storage of a language system with precise computable finite states represented as a finite-state machine. Recent complex systems research has also suggested some far-reaching commonality in the organization of information in problems from biology, computer science, and physics.
== Networks in biology ==
=== Protein–protein interaction networks ===
Protein-protein interaction networks (PINs) represent the physical relationship among proteins present in a cell, where proteins are nodes, and their interactions are undirected edges. Due to their undirected nature, it is difficult to identify all the proteins involved in an interaction. Protein–protein interactions (PPIs) are essential to the cellular processes and also the most intensely analyzed networks in biology. PPIs could be discovered by various experimental techniques, among which the yeast two-hybrid system is a commonly used technique for the study of binary interactions. Recently, high-throughput studies using mass spectrometry have identified large sets of protein interactions.
Many international efforts have resulted in databases that catalog experimentally determined protein-protein interactions. Some of them are the Human Protein Reference Database, Database of Interacting Proteins, the Molecular Interaction Database (MINT), IntAct, and BioGRID. At the same time, multiple computational approaches have been proposed to predict interactions. FunCoup and STRING are examples of such databases, where protein-protein interactions inferred from multiple evidences are gathered and made available for public usage.
Recent studies have indicated the conservation of molecular networks through deep evolutionary time. Moreover, it has been discovered that proteins with high degrees of connectedness are more likely to be essential for survival than proteins with lesser degrees. This observation suggests that the overall composition of the network (not simply interactions between protein pairs) is vital for an organism's overall functioning.
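The degree-based notion of a highly connected ("hub") protein can be sketched with a toy undirected interaction network; the protein names below are placeholders, not real interaction data:

```python
# A minimal undirected PPI network stored as an adjacency map.
# Protein names (P1..P5) are hypothetical placeholders.
from collections import defaultdict

edges = [("P1", "P2"), ("P1", "P3"), ("P1", "P4"), ("P2", "P3"), ("P4", "P5")]
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)   # undirected: store both directions
    adj[b].add(a)

degree = {p: len(nbrs) for p, nbrs in adj.items()}
hub = max(degree, key=degree.get)
assert hub == "P1"  # the most-connected node, a candidate "essential" protein
```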
=== Gene regulatory networks (DNA–protein interaction networks) ===
The genome encodes thousands of genes whose products (mRNAs, proteins) are crucial to the various processes of life, such as cell differentiation, cell survival, and metabolism. Genes produce such products through a process called transcription, which is regulated by a class of proteins called transcription factors. For instance, the human genome encodes almost 1,500 DNA-binding transcription factors that regulate the expression of more than 20,000 human genes. The complete set of gene products and the interactions among them constitutes gene regulatory networks (GRN). GRNs regulate the levels of gene products within the cell and, in turn, the cellular processes.
GRNs are represented with genes and transcriptional factors as nodes and the relationship between them as edges. These edges are directional, representing the regulatory relationship between the two ends of the edge. For example, the directed edge from gene A to gene B indicates that A regulates the expression of B. Thus, these directional edges can not only represent the promotion of gene regulation but also its inhibition.
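Such directed, signed regulatory edges can be sketched as follows (the gene names and regulatory signs are illustrative assumptions):

```python
# A toy GRN: directed edges carry a sign, +1 for activation, -1 for inhibition.
# Gene names are illustrative only.
grn = {
    ("geneA", "geneB"): +1,   # A activates B
    ("geneB", "geneC"): -1,   # B represses C
}

regulators_of_C = [src for (src, dst), _ in grn.items() if dst == "geneC"]
assert regulators_of_C == ["geneB"]
assert grn[("geneB", "geneC")] == -1   # the edge also records inhibition
```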
GRNs are usually constructed by utilizing the gene regulation knowledge available from databases such as Reactome and KEGG. High-throughput measurement technologies, such as microarray, RNA-Seq, ChIP-chip, and ChIP-seq, have enabled the accumulation of large-scale transcriptomics data, which can help in understanding complex gene regulation patterns.
=== Gene co-expression networks (transcript–transcript association networks) ===
Gene co-expression networks can be perceived as association networks between variables that measure transcript abundances. These networks have been used to provide a systems-biologic analysis of DNA microarray data, RNA-seq data, miRNA data, etc. Weighted gene co-expression network analysis is extensively used to identify co-expression modules and intramodular hub genes. Co-expression modules may correspond to cell types or pathways, while highly connected intramodular hubs can be interpreted as representatives of their respective modules.
=== Metabolic networks ===
Cells break down the food and nutrients into small molecules necessary for cellular processing through a series of biochemical reactions. These biochemical reactions are catalyzed by enzymes. The complete set of all these biochemical reactions in all the pathways represents the metabolic network. Within the metabolic network, the small molecules take the roles of nodes, and they could be either carbohydrates, lipids, or amino acids. The reactions which convert these small molecules from one form to another are represented as edges. It is possible to use network analyses to infer how selection acts on metabolic pathways.
=== Signaling networks ===
Signals are transduced within cells or between cells, and thus form complex signaling networks that play a key role in tissue structure. For instance, the MAPK/ERK pathway is transduced from the cell surface to the cell nucleus by a series of protein–protein interactions, phosphorylation reactions, and other events. Signaling networks typically integrate protein–protein interaction networks, gene regulatory networks, and metabolic networks. Single-cell sequencing technologies allow the extraction of intercellular signaling; an example is NicheNet, which models intercellular communication by linking ligands to target genes.
=== Neuronal networks ===
The complex interactions in the brain make it a perfect candidate to apply network theory. Neurons in the brain are deeply connected with one another, and this results in complex networks being present in the structural and functional aspects of the brain. For instance, small-world network properties have been demonstrated in connections between cortical regions of the primate brain or during swallowing in humans. This suggests that cortical areas of the brain are not directly interacting with each other, but most areas can be reached from all others through only a few interactions.
=== Food webs ===
All organisms are connected through feeding interactions. If a species eats or is eaten by another species, they are connected in an intricate food web of predator and prey interactions. The stability of these interactions has been a long-standing question in ecology. That is to say if certain individuals are removed, what happens to the network (i.e., does it collapse or adapt)? Network analysis can be used to explore food web stability and determine if certain network properties result in more stable networks. Moreover, network analysis can be used to determine how selective removals of species will influence the food web as a whole. This is especially important considering the potential species loss due to global climate change.
=== Between-species interaction networks ===
In biology, pairwise interactions have historically been the focus of intense study. With the recent advances in network science, it has become possible to scale up pairwise interactions to include individuals of many species involved in many sets of interactions to understand the structure and function of larger ecological networks. The use of network analysis can allow for both the discovery and understanding of how these complex interactions link together within the system's network, a property that has previously been overlooked. This powerful tool allows for the study of various types of interactions (from competitive to cooperative) using the same general framework. For example, plant-pollinator interactions are mutually beneficial and often involve many different species of pollinators as well as many different species of plants. These interactions are critical to plant reproduction and thus the accumulation of resources at the base of the food chain for primary consumers, yet these interaction networks are threatened by anthropogenic change. The use of network analysis can illuminate how pollination networks work and may, in turn, inform conservation efforts. Within pollination networks, nestedness (i.e., specialists interact with a subset of species that generalists interact with), redundancy (i.e., most plants are pollinated by many pollinators), and modularity play a large role in network stability. These network properties may actually work to slow the spread of disturbance effects through the system and potentially buffer the pollination network from anthropogenic changes somewhat. More generally, the structure of species interactions within an ecological network can tell us something about the diversity, richness, and robustness of the network. Researchers can even compare current constructions of species interactions networks with historical reconstructions of ancient networks to determine how networks have changed over time. 
Much research into these complex species interactions networks is highly concerned with understanding what factors (e.g., species richness, connectance, nature of the physical environment) lead to network stability.
=== Within-species interaction networks ===
Network analysis provides the ability to quantify associations between individuals, which makes it possible to infer details about the network as a whole at the species and/or population level. One of the most attractive features of the network paradigm would be that it provides a single conceptual framework in which the social organization of animals at all levels (individual, dyad, group, population) and for all types of interaction (aggressive, cooperative, sexual, etc.) can be studied.
Researchers interested in ethology across many taxa, from insects to primates, are starting to incorporate network analysis into their research. Researchers interested in social insects (e.g., ants and bees) have used network analyses better to understand the division of labor, task allocation, and foraging optimization within colonies. Other researchers are interested in how specific network properties at the group and/or population level can explain individual-level behaviors. Studies have demonstrated how animal social network structure can be influenced by factors ranging from characteristics of the environment to characteristics of the individual, such as developmental experience and personality. At the level of the individual, the patterning of social connections can be an important determinant of fitness, predicting both survival and reproductive success. At the population level, network structure can influence the patterning of ecological and evolutionary processes, such as frequency-dependent selection and disease and information transmission. For instance, a study on wire-tailed manakins (a small passerine bird) found that a male's degree in the network largely predicted the ability of the male to rise in the social hierarchy (i.e., eventually obtain a territory and matings). In bottlenose dolphin groups, an individual's degree and betweenness centrality values may predict whether or not that individual will exhibit certain behaviors, like the use of side flopping and upside-down lobtailing to lead group traveling efforts; individuals with high betweenness values are more connected and can obtain more information, and thus are better suited to lead group travel and therefore tend to exhibit these signaling behaviors more than other group members.
Social network analysis can also be used to describe the social organization within a species more generally, which frequently reveals important proximate mechanisms promoting the use of certain behavioral strategies. These descriptions are frequently linked to ecological properties (e.g., resource distribution). For example, network analyses revealed subtle differences in the group dynamics of two related equid fission-fusion species, Grevy's zebra and onagers, living in variable environments; Grevy's zebras show distinct preferences in their association choices when they fission into smaller groups, whereas onagers do not. Similarly, researchers interested in primates have also utilized network analyses to compare social organizations across the diverse primate order, suggesting that using network measures (such as centrality, assortativity, modularity, and betweenness) may be useful in terms of explaining the types of social behaviors we see within certain groups and not others.
Finally, social network analysis can also reveal important fluctuations in animal behaviors across changing environments. For example, network analyses in female chacma baboons (Papio hamadryas ursinus) revealed important dynamic changes across seasons that were previously unknown; instead of creating stable, long-lasting social bonds with friends, baboons were found to exhibit more variable relationships which were dependent on short-term contingencies related to group-level dynamics as well as environmental variability. Changes in an individual's social network environment can also influence characteristics such as 'personality': for example, social spiders that huddle with bolder neighbors tend to increase also in boldness. This is a very small set of broad examples of how researchers can use network analysis to study animal behavior. Research in this area is currently expanding very rapidly, especially since the broader development of animal-borne tags and computer vision can be used to automate the collection of social associations. Social network analysis is a valuable tool for studying animal behavior across all animal species and has the potential to uncover new information about animal behavior and social ecology that was previously poorly understood.
=== DNA-DNA chromatin networks ===
Within a nucleus, DNA is constantly in motion. Perpetual actions such as genome folding and cohesin extrusion morph the shape of a genome in real time. The spatial location of strands of chromatin relative to each other plays an important role in the activation or suppression of certain genes. DNA–DNA chromatin networks help biologists to understand these interactions by analyzing commonalities among different loci. The size of a network can vary significantly, from a few genes to several thousand, and thus network analysis can provide vital support in understanding relationships among different areas of the genome. As an example, analysis of spatially similar loci within the organization of a nucleus with Genome Architecture Mapping (GAM) can be used to construct a network of loci with edges representing highly linked genomic regions.
The first graphic showcases the Hist1 region of the mm9 mouse genome, with each node representing a genomic locus. Two nodes are connected by an edge if their linkage disequilibrium is greater than the average across all 81 genomic windows. The locations of the nodes within the graphic are randomly selected, and this method of choosing edges yields a simple but rudimentary graphical representation of the relationships in the dataset. The second visual presents the same information; however, the network starts with every locus placed sequentially in a ring configuration. It then pulls nodes together, using linear interpolation, in proportion to their linkage. The figure illustrates strong connections between the central genomic windows as well as the edge loci at the beginning and end of the Hist1 region.
== Modelling biological networks ==
=== Introduction ===
To draw useful information from a biological network, an understanding of the statistical and mathematical techniques of identifying relationships within a network is vital. Procedures to identify association, communities, and centrality within nodes in a biological network can provide insight into the relationships of whatever the nodes represent whether they are genes, species, etc. Formulation of these methods transcends disciplines and relies heavily on graph theory, computer science, and bioinformatics.
=== Association ===
There are many different ways to measure the relationships of nodes when analyzing a network. In many cases, the measure used to find nodes that share similarity within a network is specific to the application in which it is used. One of the measures that biologists utilize is correlation, which centers on the linear relationship between two variables. As an example, weighted gene co-expression network analysis uses Pearson correlation to analyze linked gene expression and understand genetics at a systems level. Another measure of association is linkage disequilibrium, which describes the non-random association of genetic sequences among loci in a given chromosome. An example of its use is in detecting relationships in GAM data across genomic intervals based upon detection frequencies of certain loci.
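As an illustration, the Pearson correlation underlying co-expression edges can be computed directly; the two expression profiles below are made-up toy data:

```python
# Pearson correlation between two (toy) expression profiles: the basic
# association measure behind co-expression network construction.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

g1 = [1.0, 2.0, 3.0, 4.0]
g2 = [2.1, 3.9, 6.2, 8.0]   # roughly proportional to g1
r = pearson(g1, g2)
assert r > 0.99             # strongly co-expressed: a candidate network edge
```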
=== Centrality ===
The concept of centrality can be extremely useful when analyzing biological network structures. There are many different methods of measuring centrality, such as betweenness, degree, eigenvector, and Katz centrality. Each type of centrality technique can provide different insights on nodes in a particular network; however, they all aim to measure the prominence of a node in a network.
In 2005, researchers at Harvard Medical School utilized centrality measures with the yeast protein interaction network. They found that proteins exhibiting high betweenness centrality were more essential, and that betweenness corresponded closely to a given protein's evolutionary age.
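Betweenness centrality itself can be computed with Brandes' algorithm; a compact sketch for a small unweighted, undirected graph (node labels are arbitrary):

```python
# Brandes' algorithm for betweenness centrality on an unweighted graph.
from collections import deque

def betweenness(adj):
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s: shortest-path counts (sigma) and predecessor lists
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        # accumulate pair dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Path graph A-B-C: the middle node lies on every A-to-C shortest path.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
bc = betweenness(adj)
assert bc["B"] == 2.0 and bc["A"] == 0.0  # each unordered pair counted from both endpoints
```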
=== Communities ===
Studying the community structure of a network by subdividing groups of nodes into like regions can be an integral tool for bioinformatics when exploring data as a network. A food web of the Secaucus High School Marsh exemplifies the benefits of grouping, as the relationships between nodes are far easier to analyze with well-made communities. While the first graphic is hard to visualize, the second provides a better view of the pockets of highly connected feeding relationships that would be expected in a food web. Community detection remains an active area of research: scientists and graph theorists continuously discover new ways of subsectioning networks, and thus a plethora of different algorithms exist for creating these relationships. Like many other tools that biologists utilize to understand data with network models, every algorithm can provide its own unique insight and may vary widely on aspects such as accuracy or time complexity of calculation.
In 2002, a food web of marine mammals in the Chesapeake Bay was divided into communities by biologists using a community detection algorithm based on neighbors of nodes with high degree centrality. The resulting communities displayed a sizable split between pelagic and benthic organisms. Two very common community detection algorithms for biological networks are the Louvain method and the Leiden algorithm.
The Louvain method is a greedy algorithm that attempts to maximize modularity, which favors heavy edges within communities and sparse edges between them, within a set of nodes. The algorithm starts with each node in its own community; nodes are iteratively added to whichever neighboring community yields the highest modularity gain. Once no modularity increase can occur by joining nodes to a community, a new weighted network is constructed with communities as nodes, edges representing between-community edges, and loops representing edges within a community. The process continues until no increase in modularity occurs. While the Louvain method provides good community detection, it has a few limitations. By focusing mainly on maximizing a given measure of modularity, it may be led to craft badly connected communities for the sake of the modularity metric; however, the Louvain method performs fairly well and is easy to understand compared to many other community detection algorithms.
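The modularity score that the Louvain method greedily maximizes can be written out directly. A sketch for a toy graph of two triangles joined by a bridge edge (the partition shown is an assumption chosen for illustration):

```python
# Modularity Q of a partition: rewards within-community edges relative to a
# degree-preserving random null model. Q = (1/2m) * sum_ij [A_ij - k_i k_j / 2m]
# over node pairs in the same community.
def modularity(adj, community):
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
    q = 0.0
    for i in adj:
        for j in adj:
            if community[i] != community[j]:
                continue
            a_ij = 1 if j in adj[i] else 0
            q += a_ij - len(adj[i]) * len(adj[j]) / (2 * m)
    return q / (2 * m)

# Two triangles (0-1-2 and 3-4-5) joined by the bridge edge 2-3.
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
two_communities = {0: "L", 1: "L", 2: "L", 3: "R", 4: "R", 5: "R"}
one_community = {v: "all" for v in adj}
# Splitting at the bridge scores higher than lumping everything together.
assert modularity(adj, two_communities) > modularity(adj, one_community)
```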
The Leiden algorithm expands on the Louvain method by providing a number of improvements. When joining nodes to a community, only neighborhoods that have recently changed are considered, which greatly improves the speed of merging nodes. Another optimization is the refinement phase, in which the algorithm randomly chooses, for a node, a community to merge with from a set of candidate communities. This allows for greater depth in choosing communities, whereas the Louvain method solely focuses on maximizing the chosen modularity. The Leiden algorithm, while more complex than the Louvain method, performs faster, with better community detection, and can be a valuable tool for identifying groups.
=== Network Motifs ===
Network motifs, or statistically significant recurring interaction patterns within a network, are a commonly used tool for understanding biological networks. A major use case of network motifs is in neurophysiology, where motif analysis is commonly used to understand interconnected neuronal functions at varying scales. As an example, in 2017, researchers at Beijing Normal University analyzed highly represented 2- and 3-node network motifs in directed functional brain networks, constructed from resting-state fMRI data, to study the basic mechanisms of information flow in the brain.
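A minimal version of such motif counting, classifying 2-node connections in a directed network as mutual or single-direction, can be sketched as follows (the edge list is toy data):

```python
# Count directed 2-node motifs in a toy directed network:
# "mutual" (u<->v) vs. "single" (u->v with no reverse edge).
edges = {("a", "b"), ("b", "a"), ("b", "c"), ("c", "d")}

mutual = sum(1 for (u, v) in edges if (v, u) in edges) // 2  # each pair counted twice
single = sum(1 for (u, v) in edges if (v, u) not in edges)

assert mutual == 1   # a <-> b
assert single == 2   # b -> c and c -> d
```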
== See also ==
List of omics topics in biology
Biological network inference
Biostatistics
Computational biology
Systems biology
Weighted correlation network analysis
Interactome
Network medicine
Ecological network
== References ==
== Books ==
== External links ==
Networkbio.org, The site of the series of Integrative Network Biology (INB) meetings. For the 2012 event also see www.networkbio.org
Network Tools and Applications in Biology (NETTAB) workshops.
Networkbiology.org, NetworkBiology wiki site.
Linding Lab, Technical University of Denmark (DTU) studies Network Biology and Cellular Information Processing, and is also organizing the Denmark branch of the annual "Integrative Network Biology and Cancer" symposium series.
NRNB.org, The National Resource for Network Biology. A US National Institute of Health (NIH) Biomedical Technology Research Center dedicated to the study of biological networks.
Network Repository The first interactive data and network data repository with real-time visual analytics.
Animal Social Network Repository (ASNR) The first multi-taxonomic repository that collates 790 social networks from more than 45 species, including those of mammals, reptiles, fish, birds, and insects | Wikipedia/Network_biology |
Statistical Methods for Research Workers is a classic book on statistics, written by the statistician R. A. Fisher. It is considered by some to be one of the 20th century's most influential books on statistical methods, together with his The Design of Experiments (1935). It was originally published in 1925, by Oliver & Boyd (Edinburgh); the final and posthumous 14th edition was published in 1970. The impulse to write a book on the statistical methodology he had developed came not from Fisher himself but from D. Ward Cutler, one of the two editors of a series of "Biological Monographs and Manuals" being published by Oliver and Boyd.
== Reviews ==
According to Denis Conniffe:
Ronald A. Fisher was "interested in application and in the popularization of statistical methods and his early book Statistical Methods for Research Workers, published in 1925, went through many editions and motivated and influenced the practical use of statistics in many fields of study. His Design of Experiments (1935) [promoted] statistical technique and application. In that book he emphasized examples and how to design experiments systematically from a statistical point of view. The mathematical justification of the methods described was not stressed and, indeed, proofs were often barely sketched or omitted altogether ..., a fact which led H. B. Mann to fill the gaps with a rigorous mathematical treatment in his well-known treatise, Mann (1949)."
According to Erich L. Lehmann:
Even reviewers who were not offended by Fisher's attack on traditional methods found much to criticize. In particular, they complained about Fisher's dogmatism, the lack of proofs, the emphasis on small samples, and the difficulty of the book. However, a review by Harold Hotelling, which was submitted to the Journal of the American Statistical Association in 1927, did justice to Fisher's achievement. Hotelling stated in his review that "most books on statistics consist of pedagogic rehashes of identical material. This comfortably orthodox subject matter is absent from the volume under review, which summarizes for the reader the author's independent codification of statistical theory and some of his brilliant contributions to the subject, not all of which have previously been published".
== Chapters ==
Prefaces
Introduction
Diagrams
Distributions
Tests of Goodness of Fit, Independence and Homogeneity; with table of χ2
Tests of Significance of Means, Difference of Means, and Regression Coefficients
The Correlation Coefficient
Intraclass Correlations and the Analysis of Variance
Further Applications of the Analysis of Variance
Sources Used for Data and Methods
Index
In the second edition of 1928 a chapter 9 was added: The Principles of Statistical Estimation.
== See also ==
The Design of Experiments
== Notes ==
== Further reading ==
The March 1951 issue of the Journal of the American Statistical Association contains articles celebrating the 25th anniversary of the publication of the first edition.
A.W.F. Edwards (2005) "R. A. Fisher, Statistical Methods for Research Workers, 1925," in I. Grattan-Guinness (ed) Landmark Writings in Western Mathematics: Case Studies, 1640-1940, Amsterdam: Elsevier.
Savage, Leonard J. (1976). "On Rereading R. A. Fisher". Annals of Statistics. 4 (3): 441–500. doi:10.1214/aos/1176343456.
=== Reviews ===
Nature anonymous review of Fisher's Statistical Methods
BMJ anonymous review of Fisher's Statistical Methods
Student's review of Fisher's Statistical Methods
Egon Pearson's reviews of Fisher's Statistical Methods
Harold Hotelling's review of Fisher's Statistical Methods
Leon Isserlis's review of Fisher's Statistical Methods
W. P. Elderton's review of Fisher's Statistical Methods
== External links ==
Text of first edition
The 14th edition (prepared from notes left by Fisher when he died in 1962) is reprinted as the first part of Statistical Methods, Experimental Design and Scientific Inference | Wikipedia/Statistical_Methods_for_Research_Workers |
Bayesian hierarchical modelling is a statistical model written in multiple levels (hierarchical form) that estimates the parameters of the posterior distribution using the Bayesian method. The sub-models combine to form the hierarchical model, and Bayes' theorem is used to integrate them with the observed data and account for all the uncertainty that is present. The result of this integration is the posterior distribution, which provides an updated probability estimate for the parameters.
Frequentist statistics may yield conclusions seemingly incompatible with those offered by Bayesian statistics due to the Bayesian treatment of the parameters as random variables and its use of subjective information in establishing assumptions on these parameters. Because the approaches answer different questions, the formal results are not technically contradictory, but the two approaches disagree over which answer is relevant to particular applications. Bayesians argue that relevant information regarding decision-making and updating beliefs cannot be ignored and that hierarchical modeling has the potential to overrule classical methods in applications where respondents give multiple observational data. Moreover, the model has proven to be robust, with the posterior distribution less sensitive to the more flexible hierarchical priors.
Hierarchical modeling, as its name implies, retains nested data structure, and is used when information is available at several different levels of observational units. For example, in epidemiological modeling to describe infection trajectories for multiple countries, observational units are countries, and each country has its own time-based profile of daily infected cases. In decline curve analysis to describe oil or gas production decline curves for multiple wells, observational units are oil or gas wells in a reservoir region, and each well has its own time-based profile of oil or gas production rates (usually, barrels per month). Hierarchical modeling is used to devise computation-based strategies for multiparameter problems.
== Philosophy ==
Statistical methods and models commonly involve multiple parameters that can be regarded as related or connected in such a way that the problem implies a dependence of the joint probability model for these parameters.
Individual degrees of belief, expressed in the form of probabilities, come with uncertainty. Amidst this is the change of the degrees of belief over time. As was stated by Professor José M. Bernardo and Professor Adrian F. Smith, “The actuality of the learning process consists in the evolution of individual and subjective beliefs about the reality.” These subjective probabilities are more directly involved in the mind rather than the physical probabilities. Hence, it is with this need of updating beliefs that Bayesians have formulated an alternative statistical model which takes into account the prior occurrence of a particular event.
== Bayes' theorem ==
The assumed occurrence of a real-world event will typically modify preferences between certain options. This is done by modifying the degrees of belief attached, by an individual, to the events defining the options.
Suppose in a study of the effectiveness of cardiac treatments, with the patients in hospital j having survival probability
{\displaystyle \theta _{j}}
, the survival probability will be updated with the occurrence of y, the event in which a controversial serum is created which, as believed by some, increases survival in cardiac patients.
In order to make updated probability statements about
{\displaystyle \theta _{j}}
, given the occurrence of event y, we must begin with a model providing a joint probability distribution for
{\displaystyle \theta _{j}}
and y. This can be written as a product of the two distributions that are often referred to as the prior distribution
{\displaystyle P(\theta )}
and the sampling distribution
{\displaystyle P(y\mid \theta )}
respectively:
{\displaystyle P(\theta ,y)=P(\theta )P(y\mid \theta )}
Using the basic property of conditional probability, the posterior distribution will yield:
{\displaystyle P(\theta \mid y)={\frac {P(\theta ,y)}{P(y)}}={\frac {P(y\mid \theta )P(\theta )}{P(y)}}}
This equation, showing the relationship between the conditional probability and the individual events, is known as Bayes' theorem. This simple expression encapsulates the technical core of Bayesian inference, which aims to deconstruct the probability
{\displaystyle P(\theta \mid y)}
relative to solvable subsets of its supportive evidence.
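For a discrete set of candidate values of θ, Bayes' theorem can be applied directly. In the sketch below, the candidate survival probabilities, the flat prior, and the observed data (3 survivals in 4 patients) are illustrative assumptions, not values from any study:

```python
# Discrete Bayes update: posterior = likelihood * prior / P(y).
thetas = [0.2, 0.5, 0.8]          # candidate survival probabilities theta (assumed)
prior = [1 / 3, 1 / 3, 1 / 3]     # flat prior P(theta)

def likelihood(theta):
    # P(y | theta) for the assumed observation: 3 survivals, 1 death
    return theta ** 3 * (1 - theta)

joint = [likelihood(t) * p for t, p in zip(thetas, prior)]   # P(theta, y)
p_y = sum(joint)                                             # marginal P(y)
posterior = [j / p_y for j in joint]                         # P(theta | y)

assert abs(sum(posterior) - 1.0) < 1e-12
assert posterior[2] == max(posterior)   # theta = 0.8 best explains 3/4 survivals
```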
== Exchangeability ==
The usual starting point of a statistical analysis is the assumption that the n values
{\displaystyle y_{1},y_{2},\ldots ,y_{n}}
are exchangeable. If no information – other than data y – is available to distinguish any of the
{\displaystyle \theta _{j}}
's from any others, and no ordering or grouping of the parameters can be made, one must assume symmetry of prior distribution parameters. This symmetry is represented probabilistically by exchangeability. Generally, it is useful and appropriate to model data from an exchangeable distribution as independently and identically distributed, given some unknown parameter vector
{\displaystyle \theta }
, with distribution
{\displaystyle P(\theta )}
.
=== Finite exchangeability ===
For a fixed number n, the set {\displaystyle y_{1},y_{2},\ldots ,y_{n}} is exchangeable if the joint probability {\displaystyle P(y_{1},y_{2},\ldots ,y_{n})} is invariant under permutations of the indices. That is, for every permutation {\displaystyle \pi } or {\displaystyle (\pi _{1},\pi _{2},\ldots ,\pi _{n})} of (1, 2, …, n),
{\displaystyle P(y_{1},y_{2},\ldots ,y_{n})=P(y_{\pi _{1}},y_{\pi _{2}},\ldots ,y_{\pi _{n}}).}
The following is an example that is exchangeable but not independent and identically distributed (iid):
Consider an urn with a red ball and a blue ball inside, with probability {\displaystyle {\frac {1}{2}}} of drawing either. Balls are drawn without replacement, i.e. after one ball is drawn from the n balls, there will be n − 1 remaining balls for the next draw.
{\displaystyle {\text{Let }}Y_{i}={\begin{cases}1,&{\text{if the }}i{\text{th ball is red}},\\0,&{\text{otherwise}}.\end{cases}}}
The probability of selecting a red ball in the first draw and a blue ball in the second draw is equal to the probability of selecting a blue ball on the first draw and a red on the second, both of which are 1/2:
{\displaystyle P(y_{1}=1,y_{2}=0)=P(y_{1}=0,y_{2}=1)={\frac {1}{2}}}
This makes {\displaystyle y_{1}} and {\displaystyle y_{2}} exchangeable.
But the probability of selecting a red ball on the second draw given that the red ball has already been selected in the first is 0. This is not equal to the probability that the red ball is selected in the second draw, which is 1/2:
{\displaystyle P(y_{2}=1\mid y_{1}=1)=0\neq P(y_{2}=1)={\frac {1}{2}}}
Thus, {\displaystyle y_{1}} and {\displaystyle y_{2}} are not independent.
If {\displaystyle x_{1},\ldots ,x_{n}} are independent and identically distributed, then they are exchangeable, but the converse is not necessarily true.
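The urn example above can be checked by simulation; the sketch below (with an assumed trial count) verifies that the two draws have equal marginal probabilities while a red ball can never appear twice:

```python
import random

# Simulation of the two-ball urn: each ordering of (red, blue) is equally
# likely, so the draws are exchangeable, but they are not independent.
random.seed(0)
trials = 100_000                       # assumed trial count
first_red = second_red = red_twice = 0
for _ in range(trials):
    balls = ["red", "blue"]
    random.shuffle(balls)              # drawing without replacement
    y1, y2 = balls
    first_red += (y1 == "red")
    second_red += (y2 == "red")
    red_twice += (y1 == "red" and y2 == "red")

# Marginals agree, P(y1 = red) ≈ P(y2 = red) ≈ 1/2 (exchangeability) ...
print(first_red / trials, second_red / trials)
# ... but P(y2 = red | y1 = red) = 0, so the draws are not independent.
print(red_twice)   # exactly 0: the single red ball cannot repeat
```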
=== Infinite exchangeability ===
Infinite exchangeability is the property that every finite subset of an infinite sequence {\displaystyle y_{1},y_{2},\ldots } is exchangeable: for any n, the sequence {\displaystyle y_{1},y_{2},\ldots ,y_{n}} is exchangeable.
== Hierarchical models ==
=== Components ===
Bayesian hierarchical modeling makes use of two important concepts in deriving the posterior distribution, namely:
Hyperparameters: parameters of the prior distribution
Hyperpriors: distributions of hyperparameters
Suppose a random variable Y follows a normal distribution with parameter {\displaystyle \theta } as the mean and 1 as the variance, that is {\displaystyle Y\mid \theta \sim N(\theta ,1)}. The tilde relation {\displaystyle \sim } can be read as "has the distribution of" or "is distributed as". Suppose also that the parameter {\displaystyle \theta } has a distribution given by a normal distribution with mean {\displaystyle \mu } and variance 1, i.e. {\displaystyle \theta \mid \mu \sim N(\mu ,1)}. Furthermore, {\displaystyle \mu } follows another distribution given, for example, by the standard normal distribution, {\displaystyle {\text{N}}(0,1)}. The parameter {\displaystyle \mu } is called the hyperparameter, while its distribution given by {\displaystyle {\text{N}}(0,1)} is an example of a hyperprior distribution. The notation of the distribution of Y changes as another parameter is added, i.e. {\displaystyle Y\mid \theta ,\mu \sim N(\theta ,1)}. If there is another stage, say, {\displaystyle \mu } following another normal distribution with a mean of {\displaystyle \beta } and a variance of {\displaystyle \epsilon }, so that {\displaystyle \mu \sim N(\beta ,\epsilon )}, then {\displaystyle \beta } and {\displaystyle \epsilon } can also be called hyperparameters with hyperprior distributions.
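The three-stage normal example above can be sampled ancestrally; this sketch uses the stated distributions N(0, 1) for μ, N(μ, 1) for θ, and N(θ, 1) for Y, and checks the implied marginal variance of Y (the sample size is an assumption):

```python
import numpy as np

# Ancestral sampling from the hierarchy in the text:
#   mu ~ N(0, 1)  (hyperprior),  theta | mu ~ N(mu, 1),  Y | theta ~ N(theta, 1)
rng = np.random.default_rng(42)
n = 200_000                            # assumed sample size

mu = rng.normal(0.0, 1.0, size=n)      # hyperprior draws
theta = rng.normal(mu, 1.0)            # prior given the hyperparameter
y = rng.normal(theta, 1.0)             # sampling distribution

# Marginally Y is the sum of three independent N(0, 1) increments,
# so E[Y] = 0 and Var(Y) = 3.
print(round(float(y.mean()), 2), round(float(y.var()), 2))
```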
=== Framework ===
Let {\displaystyle y_{j}} be an observation and {\displaystyle \theta _{j}} a parameter governing the data generating process for {\displaystyle y_{j}}. Assume further that the parameters {\displaystyle \theta _{1},\theta _{2},\ldots ,\theta _{j}} are generated exchangeably from a common population, with distribution governed by a hyperparameter {\displaystyle \phi }.
The Bayesian hierarchical model contains the following stages:
{\displaystyle {\text{Stage I: }}y_{j}\mid \theta _{j},\phi \sim P(y_{j}\mid \theta _{j},\phi )}
{\displaystyle {\text{Stage II: }}\theta _{j}\mid \phi \sim P(\theta _{j}\mid \phi )}
{\displaystyle {\text{Stage III: }}\phi \sim P(\phi )}
The likelihood, as seen in stage I, is {\displaystyle P(y_{j}\mid \theta _{j},\phi )}, with {\displaystyle P(\theta _{j},\phi )} as its prior distribution. Note that the likelihood depends on {\displaystyle \phi } only through {\displaystyle \theta _{j}}.
The prior distribution from stage I can be broken down into:
{\displaystyle P(\theta _{j},\phi )=P(\theta _{j}\mid \phi )P(\phi )} [from the definition of conditional probability]
with {\displaystyle \phi } as its hyperparameter with hyperprior distribution {\displaystyle P(\phi )}.
Thus, the posterior distribution is proportional to:
{\displaystyle P(\phi ,\theta _{j}\mid y)\propto P(y_{j}\mid \theta _{j},\phi )P(\theta _{j},\phi )} [using Bayes' theorem]
{\displaystyle P(\phi ,\theta _{j}\mid y)\propto P(y_{j}\mid \theta _{j})P(\theta _{j}\mid \phi )P(\phi )}
=== Example calculation ===
As an example, a teacher wants to estimate how well a student did on the SAT. The teacher uses the student's current grade point average (GPA) for an estimate. The current GPA, denoted by {\displaystyle Y}, has a likelihood given by some probability function with parameter {\displaystyle \theta }, i.e. {\displaystyle Y\mid \theta \sim P(Y\mid \theta )}. This parameter {\displaystyle \theta } is the SAT score of the student. The SAT score is viewed as a sample coming from a common population distribution indexed by another parameter {\displaystyle \phi }, which is the high school grade of the student (freshman, sophomore, junior or senior). That is, {\displaystyle \theta \mid \phi \sim P(\theta \mid \phi )}. Moreover, the hyperparameter {\displaystyle \phi } follows its own distribution given by {\displaystyle P(\phi )}, a hyperprior.
These relationships can be used to calculate the likelihood of a specific SAT score relative to a particular GPA:
{\displaystyle P(\theta ,\phi \mid Y)\propto P(Y\mid \theta ,\phi )P(\theta ,\phi )}
{\displaystyle P(\theta ,\phi \mid Y)\propto P(Y\mid \theta )P(\theta \mid \phi )P(\phi )}
All information in the problem will be used to solve for the posterior distribution. Instead of solving only using the prior distribution and the likelihood function, using hyperpriors allows a more nuanced distinction of relationships between given variables.
=== 2-stage hierarchical model ===
In general, the joint posterior distribution of interest in 2-stage hierarchical models is:
{\displaystyle P(\theta ,\phi \mid Y)={P(Y\mid \theta ,\phi )P(\theta ,\phi ) \over P(Y)}={P(Y\mid \theta )P(\theta \mid \phi )P(\phi ) \over P(Y)}}
{\displaystyle P(\theta ,\phi \mid Y)\propto P(Y\mid \theta )P(\theta \mid \phi )P(\phi )}
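As an illustrative sketch (the specific normal likelihood, prior, and hyperprior below are assumptions, not from the text), the unnormalized 2-stage posterior P(Y∣θ)P(θ∣φ)P(φ) can be evaluated on a grid and normalized numerically:

```python
import numpy as np

def npdf(x, mean, sd):
    """Normal density, written out to keep the sketch dependency-free."""
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Assumed 2-stage model: Y | theta ~ N(theta, 1), theta | phi ~ N(phi, 1),
# phi ~ N(0, 2).  The joint posterior P(theta, phi | Y) is built on a grid.
y_obs = 1.5
theta_grid = np.linspace(-8, 8, 801)
phi_grid = np.linspace(-8, 8, 801)
T, P = np.meshgrid(theta_grid, phi_grid, indexing="ij")

unnorm = npdf(y_obs, T, 1.0) * npdf(T, P, 1.0) * npdf(P, 0.0, 2.0)
posterior = unnorm / unnorm.sum()              # normalize over the grid

# The posterior mean of theta shrinks the observation toward the hyperprior
# mean 0; for these normals the exact value is y_obs * 5/6 = 1.25.
theta_mean = float((posterior.sum(axis=1) * theta_grid).sum())
print(round(theta_mean, 2))
```

The shrinkage toward the hyperprior mean is the characteristic effect of the hierarchical structure: the hyperprior pools information across the levels.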
=== 3-stage hierarchical model ===
For 3-stage hierarchical models, the posterior distribution is given by:
{\displaystyle P(\theta ,\phi ,X\mid Y)={P(Y\mid \theta )P(\theta \mid \phi )P(\phi \mid X)P(X) \over P(Y)}}
{\displaystyle P(\theta ,\phi ,X\mid Y)\propto P(Y\mid \theta )P(\theta \mid \phi )P(\phi \mid X)P(X)}
== Bayesian nonlinear mixed-effects model ==
A three stage version of Bayesian hierarchical modeling could be used to calculate probability at 1) an individual level, 2) at the level of population and 3) the prior, which is an assumed probability distribution that takes place before evidence is initially acquired:
Stage 1: Individual-Level Model
{\displaystyle {y}_{ij}=f(t_{ij};\theta _{1i},\theta _{2i},\ldots ,\theta _{li},\ldots ,\theta _{Ki})+\epsilon _{ij},\quad \epsilon _{ij}\sim N(0,\sigma ^{2}),\quad i=1,\ldots ,N,\,j=1,\ldots ,M_{i}.}
Stage 2: Population Model
{\displaystyle \theta _{li}=\alpha _{l}+\sum _{b=1}^{P}\beta _{lb}x_{ib}+\eta _{li},\quad \eta _{li}\sim N(0,\omega _{l}^{2}),\quad i=1,\ldots ,N,\,l=1,\ldots ,K.}
Stage 3: Prior
{\displaystyle \sigma ^{2}\sim \pi (\sigma ^{2}),\quad \alpha _{l}\sim \pi (\alpha _{l}),\quad (\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP})\sim \pi (\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP}),\quad \omega _{l}^{2}\sim \pi (\omega _{l}^{2}),\quad l=1,\ldots ,K.}
Here, {\displaystyle y_{ij}} denotes the continuous response of the {\displaystyle i}-th subject at the time point {\displaystyle t_{ij}}, and {\displaystyle x_{ib}} is the {\displaystyle b}-th covariate of the {\displaystyle i}-th subject. Parameters involved in the model are written in Greek letters. The function {\displaystyle f(t;\theta _{1},\ldots ,\theta _{K})} is a known function parameterized by the {\displaystyle K}-dimensional vector {\displaystyle (\theta _{1},\ldots ,\theta _{K})}.
Typically, {\displaystyle f} is a nonlinear function that describes the temporal trajectory of individuals. In the model, {\displaystyle \epsilon _{ij}} and {\displaystyle \eta _{li}} describe within-individual variability and between-individual variability, respectively. If the prior is not considered, the relationship reduces to a frequentist nonlinear mixed-effects model.
A central task in the application of the Bayesian nonlinear mixed-effect models is to evaluate posterior density:
{\displaystyle \pi (\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K}|\{y_{ij}\}_{i=1,j=1}^{N,M_{i}})}
{\displaystyle \propto \pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}},\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}
{\displaystyle =\underbrace {\pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}}|\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2})} _{\text{Stage 1: Individual-Level Model}}\times \underbrace {\pi (\{\theta _{li}\}_{i=1,l=1}^{N,K}|\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})} _{\text{Stage 2: Population Model}}\times \underbrace {\pi (\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})} _{\text{Stage 3: Prior}}}
The panel on the right displays the Bayesian research cycle using the Bayesian nonlinear mixed-effects model. A research cycle using the Bayesian nonlinear mixed-effects model comprises two steps: (a) standard research cycle and (b) Bayesian-specific workflow.
A standard research cycle involves 1) literature review, 2) defining a problem and 3) specifying the research question and hypothesis. The Bayesian-specific workflow stratifies this approach to include three sub-steps: (b)–(i) formalizing prior distributions based on background knowledge and prior elicitation; (b)–(ii) determining the likelihood function based on a nonlinear function {\displaystyle f}; and (b)–(iii) making a posterior inference. The resulting posterior inference can be used to start a new research cycle.
== References ==
Management science (or managerial science) is a wide and interdisciplinary study of solving complex problems and making strategic decisions as it pertains to institutions, corporations, governments and other types of organizational entities. It is closely related to management, economics, business, engineering, management consulting, and other fields. It uses various scientific research-based principles, strategies, and analytical methods, including mathematical modeling, statistics and numerical algorithms, and aims to improve an organization's ability to enact rational and accurate management decisions by arriving at optimal or near-optimal solutions to complex decision problems.
Management science looks to help businesses achieve goals using a number of scientific methods. The field was initially an outgrowth of applied mathematics, where early challenges were problems relating to the optimization of systems which could be modeled linearly, i.e., determining the optima (maximum value of profit, assembly line performance, crop yield, bandwidth, etc. or minimum of loss, risk, costs, etc.) of some objective function. Today, the discipline of management science may encompass a diverse range of managerial and organizational activity as it regards to a problem which is structured in mathematical or other quantitative form in order to derive managerially relevant insights and solutions.
== Overview ==
Management science is concerned with a number of areas of study:
Developing and applying models and concepts that may prove useful in helping to illuminate management issues and solve managerial problems. The models used can often be represented mathematically, but sometimes computer-based, visual or verbal representations are used as well or instead.
Designing and developing new and better models of organizational excellence.
Helping to improve, stabilize or otherwise manage profit margins in enterprises.
Management science research can be done on three levels:
The fundamental level lies in three mathematical disciplines: probability, optimization, and dynamical systems theory.
The modeling level is about building models, analyzing them mathematically, gathering and analyzing data, implementing models on computers, solving them, experimenting with them—all this is part of management science research on the modeling level. This level is mainly instrumental, and driven mainly by statistics and econometrics.
The application level, just as in any other engineering and economics disciplines, strives to make a practical impact and be a driver for change in the real world.
The management scientist's mandate is to use rational, systematic and science-based techniques to inform and improve decisions of all kinds. The techniques of management science are not restricted to business applications but may be applied to military, medical, public administration, charitable groups, political groups or community groups. The norm for scholars in management science is to focus their work in a certain area or subfield of management like public administration, finance, calculus, information and so forth.
== History ==
Although management science as it exists now covers a myriad of topics having to do with coming up with solutions that increase the efficiency of a business, it was not even a field of study in the not too distant past. A number of businessmen and management specialists can be credited with the creation of the idea of management science. Most commonly, however, the founder of the field is considered to be Frederick Winslow Taylor in the early 20th century. Likewise, administration expert Luther Gulick and management expert Peter Drucker both had an impact on the development of management science in the 1930s and 1940s. Drucker is quoted as having said that "the purpose of the corporation is to be economically efficient." This thought process is foundational to management science. Even before the influence of these men, there was Louis Brandeis, who became known as "the people's lawyer". In 1910, Brandeis was the creator of a new business approach which he called "scientific management", a term that is often falsely attributed to the aforementioned Frederick Winslow Taylor.
These men represent some of the earliest ideas of management science at its conception. After the idea was born, it was further explored around the time of World War II. It was at this time that management science became more than an idea and was put into practice. This sort of experimentation was essential to the development of the field as it is known today.
The origins of management science can be traced to operations research, which became influential during World War II when the Allied forces recruited scientists of various disciplines to assist with military operations. In these early applications, the scientists used simple mathematical models to make efficient use of limited technologies and resources. The application of these models to the corporate sector became known as management science.
In 1967 Stafford Beer characterized the field of management science as "the business use of operations research".
== Theory ==
Some of the fields that management science involves include:
== Applications ==
The applications of management science are diverse, allowing its use in many fields. Below are examples of the applications of management science.
In finance, management science is instrumental in portfolio optimization, risk management, and investment strategies. By employing mathematical models, analysts can assess market trends, optimize asset allocation, and mitigate financial risks, contributing to more informed and strategic decision-making.
In healthcare, management science plays a crucial role in optimizing resource allocation, patient scheduling, and facility management. Mathematical models aid healthcare professionals in streamlining operations, reducing waiting times, and improving overall efficiency in the delivery of care.
Logistics and supply chain management benefit significantly from management science applications. Optimization algorithms assist in route planning, inventory management, and demand forecasting, enhancing the efficiency of the entire supply chain.
In manufacturing, management science supports process optimization, production planning, and quality control. Mathematical models help identify bottlenecks, reduce production costs, and enhance overall productivity.
Furthermore, management science contributes to strategic decision-making in project management, marketing, and human resources. By leveraging quantitative techniques, organizations can make data-driven decisions, allocate resources effectively, and enhance overall performance across diverse functional areas.
== See also ==
== References ==
== Further reading ==
Kenneth R. Baker, Dean H. Kropp (1985). Management Science: An Introduction to the Use of Decision Models
David Charles Heinze (1982). Management Science: Introductory Concepts and Applications
Lee J. Krajewski, Howard E. Thompson (1981). "Management Science: Quantitative Methods in Context"
Thomas W. Knowles (1989). Management science: Building and Using Models
Kamlesh Mathur, Daniel Solow (1994). Management Science: The Art of Decision Making
Laurence J. Moore, Sang M. Lee, Bernard W. Taylor (1993). Management Science
William Thomas Morris (1968). Management Science: A Bayesian Introduction.
William E. Pinney, Donald B. McWilliams (1987). Management Science: An Introduction to Quantitative Analysis for Management
Gerald E. Thompson (1982). Management Science: An Introduction to Modern Quantitative Analysis and Decision Making. New York: McGraw-Hill Publishing Co.
In statistics, the delta method is a method of deriving the asymptotic distribution of a random variable. It is applicable when the random variable being considered can be defined as a differentiable function of a random variable which is asymptotically Gaussian.
== History ==
The delta method was derived from propagation of error, and the idea behind it was known in the early 20th century. Its statistical application can be traced as far back as 1928 to T. L. Kelley. A formal description of the method was presented by J. L. Doob in 1935. Robert Dorfman also described a version of it in 1938.
== Univariate delta method ==
While the delta method generalizes easily to a multivariate setting, careful motivation of the technique is more easily demonstrated in univariate terms. Roughly, if there is a sequence of random variables Xn satisfying
{\displaystyle {{\sqrt {n}}[X_{n}-\theta ]\,{\xrightarrow {D}}\,{\mathcal {N}}(0,\sigma ^{2})},}
where θ and σ2 are finite valued constants and {\displaystyle {\xrightarrow {D}}} denotes convergence in distribution, then
{\displaystyle {{\sqrt {n}}[g(X_{n})-g(\theta )]\,{\xrightarrow {D}}\,{\mathcal {N}}(0,\sigma ^{2}\cdot [g'(\theta )]^{2})}}
for any function g satisfying the property that its first derivative, evaluated at {\displaystyle \theta }, {\displaystyle g'(\theta )}, exists and is non-zero valued.
The intuition of the delta method is that any such g function, in a "small enough" range of the function, can be approximated via a first order Taylor series (which is basically a linear function). If the random variable is roughly normal then a linear transformation of it is also normal. Small range can be achieved when approximating the function around the mean, when the variance is "small enough". When g is applied to a random variable such as the mean, the delta method would tend to work better as the sample size increases, since it would help reduce the variance, and thus the Taylor approximation would be applied to a smaller range of the function g at the point of interest.
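A quick Monte Carlo sketch can illustrate the statement above; the choice g(x) = exp(x) and the sample sizes are assumptions made for the demonstration only:

```python
import numpy as np

# Monte Carlo illustration with g(x) = exp(x): if sqrt(n)(X_n - theta)
# converges to N(0, sigma^2), the delta method predicts
# sqrt(n)(g(X_n) - g(theta)) -> N(0, sigma^2 * [g'(theta)]^2).
rng = np.random.default_rng(1)
theta, sigma = 2.0, 1.0
n, reps = 2_000, 2_000                 # assumed sizes

# X_n is a sample mean of n iid N(theta, sigma^2) draws.
x_bar = rng.normal(theta, sigma, size=(reps, n)).mean(axis=1)
scaled = np.sqrt(n) * (np.exp(x_bar) - np.exp(theta))

predicted_sd = sigma * np.exp(theta)   # g'(theta) = exp(theta)
print(round(float(scaled.std()), 2), round(float(predicted_sd), 2))
```

The empirical standard deviation of the scaled transform should be close to σ·g′(θ), and the agreement improves as n grows, exactly as the intuition about shrinking Taylor ranges suggests.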
=== Proof in the univariate case ===
Demonstration of this result is fairly straightforward under the assumption that {\displaystyle g(x)} is differentiable in a neighborhood of {\displaystyle \theta } and {\displaystyle g'(x)} is continuous at {\displaystyle \theta } with {\displaystyle g'(\theta )\neq 0}. To begin, we use the mean value theorem (i.e.: the first order approximation of a Taylor series using Taylor's theorem):
{\displaystyle g(X_{n})=g(\theta )+g'({\tilde {\theta }})(X_{n}-\theta ),}
where {\displaystyle {\tilde {\theta }}} lies between Xn and θ.
Note that since {\displaystyle X_{n}\,{\xrightarrow {P}}\,\theta } and {\displaystyle |{\tilde {\theta }}-\theta |<|X_{n}-\theta |}, it must be that {\displaystyle {\tilde {\theta }}\,{\xrightarrow {P}}\,\theta }, and since g′(θ) is continuous, applying the continuous mapping theorem yields
{\displaystyle g'({\tilde {\theta }})\,{\xrightarrow {P}}\,g'(\theta ),}
where {\displaystyle {\xrightarrow {P}}} denotes convergence in probability.
Rearranging the terms and multiplying by {\displaystyle {\sqrt {n}}} gives
{\displaystyle {\sqrt {n}}[g(X_{n})-g(\theta )]=g'\left({\tilde {\theta }}\right){\sqrt {n}}[X_{n}-\theta ].}
Since {\displaystyle {{\sqrt {n}}[X_{n}-\theta ]{\xrightarrow {D}}{\mathcal {N}}(0,\sigma ^{2})}} by assumption, it follows immediately from appeal to Slutsky's theorem that
by assumption, it follows immediately from appeal to Slutsky's theorem that
n
[
g
(
X
n
)
−
g
(
θ
)
]
→
D
N
(
0
,
σ
2
[
g
′
(
θ
)
]
2
)
.
{\displaystyle {{\sqrt {n}}[g(X_{n})-g(\theta )]{\xrightarrow {D}}{\mathcal {N}}(0,\sigma ^{2}[g'(\theta )]^{2})}.}
This concludes the proof.
==== Proof with an explicit order of approximation ====
Alternatively, one can add one more step at the end, to obtain the order of approximation:
{\displaystyle {\begin{aligned}{\sqrt {n}}[g(X_{n})-g(\theta )]&=g'\left({\tilde {\theta }}\right){\sqrt {n}}[X_{n}-\theta ]\\[5pt]&={\sqrt {n}}[X_{n}-\theta ]\left[g'({\tilde {\theta }})+g'(\theta )-g'(\theta )\right]\\[5pt]&={\sqrt {n}}[X_{n}-\theta ]\left[g'(\theta )\right]+{\sqrt {n}}[X_{n}-\theta ]\left[g'({\tilde {\theta }})-g'(\theta )\right]\\[5pt]&={\sqrt {n}}[X_{n}-\theta ]\left[g'(\theta )\right]+O_{p}(1)\cdot o_{p}(1)\\[5pt]&={\sqrt {n}}[X_{n}-\theta ]\left[g'(\theta )\right]+o_{p}(1)\end{aligned}}}
This suggests that the error in the approximation converges to 0 in probability.
== Multivariate delta method ==
By definition, a consistent estimator B converges in probability to its true value β, and often a central limit theorem can be applied to obtain asymptotic normality:
{\displaystyle {\sqrt {n}}\left(B-\beta \right)\,{\xrightarrow {D}}\,N\left(0,\Sigma \right),}
where n is the number of observations and Σ is a (symmetric positive semi-definite) covariance matrix. Suppose we want to estimate the variance of a scalar-valued function h of the estimator B. Keeping only the first two terms of the Taylor series, and using vector notation for the gradient, we can estimate h(B) as
{\displaystyle h(B)\approx h(\beta )+\nabla h(\beta )^{T}\cdot (B-\beta )}
which implies the variance of h(B) is approximately
{\displaystyle {\begin{aligned}\operatorname {Var} \left(h(B)\right)&\approx \operatorname {Var} \left(h(\beta )+\nabla h(\beta )^{T}\cdot (B-\beta )\right)\\[5pt]&=\operatorname {Var} \left(h(\beta )+\nabla h(\beta )^{T}\cdot B-\nabla h(\beta )^{T}\cdot \beta \right)\\[5pt]&=\operatorname {Var} \left(\nabla h(\beta )^{T}\cdot B\right)\\[5pt]&=\nabla h(\beta )^{T}\cdot \operatorname {Cov} (B)\cdot \nabla h(\beta )\\[5pt]&=\nabla h(\beta )^{T}\cdot {\frac {\Sigma }{n}}\cdot \nabla h(\beta )\end{aligned}}}
One can use the mean value theorem (for real-valued functions of many variables) to see that this does not rely on taking a first-order approximation.
The delta method therefore implies that
{\displaystyle {\sqrt {n}}\left(h(B)-h(\beta )\right)\,{\xrightarrow {D}}\,N\left(0,\nabla h(\beta )^{T}\cdot \Sigma \cdot \nabla h(\beta )\right)}
or in univariate terms,
{\displaystyle {\sqrt {n}}\left(h(B)-h(\beta )\right)\,{\xrightarrow {D}}\,N\left(0,\sigma ^{2}\cdot \left(h^{\prime }(\beta )\right)^{2}\right).}
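A sketch of the multivariate version for a hypothetical ratio h(B) = B1/B2 (the numbers for β and Σ below are assumed for illustration), comparing the delta-method variance ∇h(β)ᵀ(Σ/n)∇h(β) against simulation:

```python
import numpy as np

# Hypothetical estimator B ~ N(beta, Sigma / n); the scalar function is the
# ratio h(B) = B[0] / B[1].
beta = np.array([2.0, 4.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
n = 10_000

grad = np.array([1.0 / beta[1], -beta[0] / beta[1] ** 2])  # gradient of h at beta
var_h = float(grad @ (Sigma / n) @ grad)                   # delta-method variance

# Monte Carlo check of the approximation
rng = np.random.default_rng(7)
B = rng.multivariate_normal(beta, Sigma / n, size=200_000)
ratio = B[:, 0] / B[:, 1]
print(var_h, float(ratio.var()))
```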
== Example: the binomial proportion ==
Suppose Xn is binomial with parameters {\displaystyle p\in (0,1]} and n. Since
{\displaystyle {{\sqrt {n}}\left[{\frac {X_{n}}{n}}-p\right]\,{\xrightarrow {D}}\,N(0,p(1-p))},}
we can apply the Delta method with g(θ) = log(θ) to see
{\displaystyle {{\sqrt {n}}\left[\log \left({\frac {X_{n}}{n}}\right)-\log(p)\right]\,{\xrightarrow {D}}\,N(0,p(1-p)[1/p]^{2})}}
Hence, even though for any finite n the variance of {\displaystyle \log \left({\frac {X_{n}}{n}}\right)} does not actually exist (since Xn can be zero), the asymptotic variance of {\displaystyle \log \left({\frac {X_{n}}{n}}\right)} does exist and is equal to
{\displaystyle {\frac {1-p}{np}}.}
Note that since p > 0, {\displaystyle \Pr \left({\frac {X_{n}}{n}}>0\right)\rightarrow 1} as {\displaystyle n\rightarrow \infty }, so with probability converging to one, {\displaystyle \log \left({\frac {X_{n}}{n}}\right)} is finite for large n.
Moreover, if {\displaystyle {\hat {p}}} and {\displaystyle {\hat {q}}} are estimates of different group rates from independent samples of sizes n and m respectively, then the logarithm of the estimated relative risk {\displaystyle {\frac {\hat {p}}{\hat {q}}}} has asymptotic variance equal to
{\displaystyle {\frac {1-p}{p\,n}}+{\frac {1-q}{q\,m}}.}
This is useful to construct a hypothesis test or to make a confidence interval for the relative risk.
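The asymptotic variance (1 − p)/(np) of log(Xn/n) can be checked numerically; the values of p, n, and the replication count below are assumed for illustration:

```python
import numpy as np

# Simulation check of the delta-method variance (1 - p) / (n p) for
# the log of a binomial proportion.
rng = np.random.default_rng(3)
p, n, reps = 0.3, 1_000, 100_000

x = rng.binomial(n, p, size=reps)      # X_n; P(X_n = 0) is negligible here
log_phat = np.log(x / n)

asymptotic_var = (1 - p) / (n * p)
print(round(float(log_phat.var()), 6), round(asymptotic_var, 6))
```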
== Alternative form ==
The delta method is often used in a form that is essentially identical to that above, but without the assumption that Xn or B is asymptotically normal. Often the only context is that the variance is "small". The results then just give approximations to the means and covariances of the transformed quantities. For example, the formulae presented in Klein (1953, p. 258) are:
{\displaystyle {\begin{aligned}\operatorname {Var} \left(h_{r}\right)=&\sum _{i}\left({\frac {\partial h_{r}}{\partial B_{i}}}\right)^{2}\operatorname {Var} \left(B_{i}\right)+\sum _{i}\sum _{j\neq i}\left({\frac {\partial h_{r}}{\partial B_{i}}}\right)\left({\frac {\partial h_{r}}{\partial B_{j}}}\right)\operatorname {Cov} \left(B_{i},B_{j}\right)\\\operatorname {Cov} \left(h_{r},h_{s}\right)=&\sum _{i}\left({\frac {\partial h_{r}}{\partial B_{i}}}\right)\left({\frac {\partial h_{s}}{\partial B_{i}}}\right)\operatorname {Var} \left(B_{i}\right)+\sum _{i}\sum _{j\neq i}\left({\frac {\partial h_{r}}{\partial B_{i}}}\right)\left({\frac {\partial h_{s}}{\partial B_{j}}}\right)\operatorname {Cov} \left(B_{i},B_{j}\right)\end{aligned}}}
where hr is the rth element of h(B) and Bi is the ith element of B.
== Second-order delta method ==
When g′(θ) = 0 the delta method cannot be applied. However, if g′′(θ) exists and is not zero, the second-order delta method can be applied. By the Taylor expansion,
{\displaystyle n[g(X_{n})-g(\theta )]={\frac {1}{2}}n[X_{n}-\theta ]^{2}\left[g''(\theta )\right]+o_{p}(1)}
, so that the variance of {\displaystyle g\left(X_{n}\right)} relies on up to the 4th moment of {\displaystyle X_{n}}.
The second-order delta method is also useful in conducting a more accurate approximation of the distribution of {\displaystyle g\left(X_{n}\right)} when the sample size is small.
{\displaystyle {\sqrt {n}}[g(X_{n})-g(\theta )]={\sqrt {n}}[X_{n}-\theta ]g'(\theta )+{\frac {1}{2}}{\frac {n[X_{n}-\theta ]^{2}}{\sqrt {n}}}g''(\theta )+o_{p}(1)}
For example, when {\displaystyle X_{n}} follows the standard normal distribution, {\displaystyle g\left(X_{n}\right)} can be approximated as the weighted sum of a standard normal and a chi-square with 1 degree of freedom.
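The degenerate case g′(θ) = 0 can be illustrated by simulation. In the sketch below (the choices g(x) = x², θ = 0, and i.i.d. standard normal data are assumptions for illustration), √n(X̄ₙ − θ) is exactly standard normal, so n[g(X̄ₙ) − g(θ)] should behave like ½g″(θ)·χ²₁ = χ²₁, which has mean 1 and variance 2:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 20000

# Sample means of iid N(0,1) draws; g(x) = x^2 has g'(0) = 0, g''(0) = 2.
xbar = rng.standard_normal((reps, n)).mean(axis=1)
stat = n * xbar ** 2          # n [g(X̄_n) - g(0)]

# Second-order delta method: stat ≈ (1/2) g''(0) chi²₁ = chi²₁,
# so its sample mean should be close to 1 and its variance close to 2.
print(stat.mean(), stat.var())
```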
== Nonparametric delta method ==
A version of the delta method exists in nonparametric statistics. Let {\displaystyle X_{i}\sim F} be independent and identically distributed random variables forming a sample of size {\displaystyle n} with empirical distribution function {\displaystyle {\hat {F}}_{n}}, and let {\displaystyle T} be a functional. If {\displaystyle T} is Hadamard differentiable with respect to the Chebyshev metric, then
{\displaystyle {\frac {T({\hat {F}}_{n})-T(F)}{\widehat {\text{se}}}}\xrightarrow {D} N(0,1)}
where {\displaystyle {\widehat {\text{se}}}={\frac {\hat {\tau }}{\sqrt {n}}}} and {\displaystyle {\hat {\tau }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}{\hat {L}}^{2}(X_{i})}, with {\displaystyle {\hat {L}}(x)=L_{{\hat {F}}_{n}}(\delta _{x})} denoting the empirical influence function for {\displaystyle T}. A nonparametric {\displaystyle (1-\alpha )} pointwise asymptotic confidence interval for {\displaystyle T(F)} is therefore given by
{\displaystyle T({\hat {F}}_{n})\pm z_{\alpha /2}{\widehat {\text{se}}}}
where {\displaystyle z_{q}} denotes the {\displaystyle q}-quantile of the standard normal. See Wasserman (2006) p. 19f. for details and examples.
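As a concrete sketch, when T(F) is the mean the empirical influence function is L̂(x) = x − T(F̂ₙ), and the interval reduces to the familiar normal-approximation interval for the mean (the data below are made up for illustration):

```python
import math

# Nonparametric delta-method CI for T(F) = mean of F, on made-up data.
x = [4.1, 5.3, 2.8, 6.0, 4.7, 5.5, 3.9, 4.4]
n = len(x)
T_hat = sum(x) / n

# Empirical influence function of the mean: L_hat(x_i) = x_i - T_hat.
L_hat = [xi - T_hat for xi in x]
tau2_hat = sum(l ** 2 for l in L_hat) / n
se_hat = math.sqrt(tau2_hat) / math.sqrt(n)

z = 1.959963984540054          # 0.975 quantile of the standard normal
ci = (T_hat - z * se_hat, T_hat + z * se_hat)
```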
== See also ==
Taylor expansions for the moments of functions of random variables
Variance-stabilizing transformation
== References ==
== Further reading ==
Oehlert, G. W. (1992). "A Note on the Delta Method". The American Statistician. 46 (1): 27–29. doi:10.1080/00031305.1992.10475842. JSTOR 2684406.
Wolter, Kirk M. (1985). "Taylor Series Methods". Introduction to Variance Estimation. New York: Springer. pp. 221–247. ISBN 0-387-96119-4.
Wasserman, Larry (2006). All of Nonparametric Statistics. New York: Springer. pp. 19–20. ISBN 0-387-25145-6.
== External links ==
Asmussen, Søren (2005). "Some Applications of the Delta Method" (PDF). Lecture notes. Aarhus University. Archived from the original (PDF) on May 25, 2015.
Feiveson, Alan H. "Explanation of the delta method". Stata Corp. | Wikipedia/Delta_method |
In statistics, regression validation is the process of deciding whether the numerical results quantifying hypothesized relationships between variables, obtained from regression analysis, are acceptable as descriptions of the data. The validation process can involve analyzing the goodness of fit of the regression, analyzing whether the regression residuals are random, and checking whether the model's predictive performance deteriorates substantially when applied to data that were not used in model estimation.
== Goodness of fit ==
One measure of goodness of fit is the coefficient of determination, often denoted, R2. In ordinary least squares with an intercept, it ranges between 0 and 1. However, an R2 close to 1 does not guarantee that the model fits the data well. For example, if the functional form of the model does not match the data, R2 can be high despite a poor model fit. Anscombe's quartet consists of four example data sets with similarly high R2 values, but data that sometimes clearly does not fit the regression line. Instead, the data sets include outliers, high-leverage points, or non-linearities.
One problem with the R2 as a measure of model validity is that it can always be increased by adding more variables into the model, except in the unlikely event that the additional variables are exactly uncorrelated with the dependent variable in the data sample being used. This problem can be avoided by doing an F-test of the statistical significance of the increase in the R2, or by instead using the adjusted R2.
== Analysis of residuals ==
The residuals from a fitted model are the differences between the responses observed at each combination of values of the explanatory variables and the corresponding prediction of the response computed using the regression function. Mathematically, the definition of the residual for the ith observation in the data set is written
{\displaystyle e_{i}=y_{i}-f(x_{i};{\hat {\beta }}),}
with yi denoting the ith response in the data set and xi the vector of explanatory variables, each set at the corresponding values found in the ith observation in the data set.
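The residuals of an ordinary least-squares fit can be computed directly from this definition. A minimal sketch with made-up data:

```python
import numpy as np

# Made-up data: y roughly linear in x, plus noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Fit y = b0 + b1*x by ordinary least squares.
X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Residuals: e_i = y_i - f(x_i; beta_hat).
e = y - X @ beta_hat
```

With an intercept in the model, the OLS residuals sum to zero up to rounding, which is a quick sanity check on the fit.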
If the model fit to the data were correct, the residuals would approximate the random errors that make the relationship between the explanatory variables and the response variable a statistical relationship. Therefore, if the residuals appear to behave randomly, it suggests that the model fits the data well. On the other hand, if non-random structure is evident in the residuals, it is a clear sign that the model fits the data poorly. The next section details the types of plots to use to test different aspects of a model and gives the correct interpretations of different results that could be observed for each type of plot.
=== Graphical analysis of residuals ===
A basic, though not quantitatively precise, way to check for problems that render a model inadequate is to conduct a visual examination of the residuals (the mispredictions of the data used in quantifying the model) to look for obvious deviations from randomness. If a visual examination suggests, for example, the possible presence of heteroscedasticity (a relationship between the variance of the model errors and the size of an independent variable's observations), then statistical tests can be performed to confirm or reject this hunch; if it is confirmed, different modeling procedures are called for.
Different types of plots of the residuals from a fitted model provide information on the adequacy of different aspects of the model.
sufficiency of the functional part of the model: scatter plots of residuals versus predictors
non-constant variation across the data: scatter plots of residuals versus predictors; for data collected over time, also plots of residuals against time
drift in the errors (data collected over time): run charts of the response and errors versus time
independence of errors: lag plot
normality of errors: histogram and normal probability plot
Graphical methods have an advantage over numerical methods for model validation because they readily illustrate a broad range of complex aspects of the relationship between the model and the data.
=== Quantitative analysis of residuals ===
Numerical methods also play an important role in model validation. For example, the lack-of-fit test for assessing the correctness of the functional part of the model can aid in interpreting a borderline residual plot. One common situation when numerical validation methods take precedence over graphical methods is when the number of parameters being estimated is relatively close to the size of the data set. In this situation residual plots are often difficult to interpret due to constraints on the residuals imposed by the estimation of the unknown parameters. One area in which this typically happens is in optimization applications using designed experiments. Logistic regression with binary data is another area in which graphical residual analysis can be difficult.
Serial correlation of the residuals can indicate model misspecification, and can be checked for with the Durbin–Watson statistic. The problem of heteroskedasticity can be checked for in any of several ways.
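The Durbin–Watson statistic is straightforward to compute from the residual series: values near 2 suggest little first-order serial correlation, values near 0 positive correlation, and values near 4 negative correlation. A small sketch (the residual sequences are illustrative):

```python
import numpy as np

def durbin_watson(e):
    # Sum of squared successive differences of the residuals,
    # divided by their sum of squares.
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# An alternating pattern (strong negative serial correlation) pushes the
# statistic toward 4; a smooth trend pushes it toward 0.
dw_alternating = durbin_watson([1, -1, 1, -1, 1, -1])
dw_trend = durbin_watson([-3, -2, -1, 1, 2, 3])
```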
== Out-of-sample evaluation ==
Cross-validation is the process of assessing how the results of a statistical analysis will generalize to an independent data set. If the model has been estimated over some, but not all, of the available data, then the model using the estimated parameters can be used to predict the held-back data. If, for example, the out-of-sample mean squared error, also known as the mean squared prediction error, is substantially higher than the in-sample mean square error, this is a sign of deficiency in the model.
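The comparison of in-sample and out-of-sample mean squared error can be sketched as follows (the data, the 80/20 split point, and the noise level are made-up choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up linear data; hold back the last 20 observations for validation.
x = np.linspace(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=100)
x_tr, x_te, y_tr, y_te = x[:80], x[80:], y[:80], y[80:]

# Estimate on the training portion only.
X_tr = np.column_stack([np.ones_like(x_tr), x_tr])
beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

X_te = np.column_stack([np.ones_like(x_te), x_te])
mse_in = np.mean((y_tr - X_tr @ beta) ** 2)    # in-sample MSE
mse_out = np.mean((y_te - X_te @ beta) ** 2)   # mean squared prediction error
```

Here the model is correctly specified, so the two error measures should be of similar magnitude; a large gap between them would be the sign of deficiency described above.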
A development in medical statistics is the use of out-of-sample cross validation techniques in meta-analysis. It forms the basis of the validation statistic, Vn, which is used to test the statistical validity of meta-analysis summary estimates. Essentially it measures a type of normalized prediction error, and its distribution is a linear combination of χ2 variables, each with 1 degree of freedom.
== See also ==
All models are wrong
Model selection
Prediction error
Prediction interval
Resampling (statistics)
Statistical conclusion validity
Statistical model specification
Statistical model validation
Validity (statistics)
Coefficient of determination
Lack-of-fit sum of squares
Reduced chi-squared
== References ==
== Further reading ==
Arboretti Giancristofaro, R.; Salmaso, L. (2003), "Model performance analysis and model validation in logistic regression", Statistica, 63: 375–396
Kmenta, Jan (1986), Elements of Econometrics (Second ed.), Macmillan, pp. 593–600; republished in 1997 by University of Michigan Press
== External links ==
How can I tell if a model fits my data? (NIST)
NIST/SEMATECH e-Handbook of Statistical Methods
Model Diagnostics (Eberly College of Science)
This article incorporates public domain material from the National Institute of Standards and Technology | Wikipedia/Regression_model_validation |
In probability theory, the method of moments is a way of proving convergence in distribution by proving convergence of a sequence of moment sequences. Suppose X is a random variable and that all of the moments
{\displaystyle \operatorname {E} (X^{k})\,}
exist. Further suppose the probability distribution of X is completely determined by its moments, i.e., there is no other probability distribution with the same sequence of moments
(cf. the problem of moments). If
{\displaystyle \lim _{n\to \infty }\operatorname {E} (X_{n}^{k})=\operatorname {E} (X^{k})\,}
for all values of k, then the sequence {Xn} converges to X in distribution.
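A classical illustration (chosen here for concreteness, not taken from the text) is Xₙ ~ Binomial(n, λ/n) converging in distribution to a Poisson(λ) variable, whose distribution is determined by its moments. The sketch below checks the first few moments numerically, building each pmf recursively to avoid huge intermediate terms:

```python
from math import exp

lam = 3.0

def binom_moment(n, p, k):
    # E[X^k] for X ~ Binomial(n, p); the pmf is built up recursively
    # to avoid enormous intermediate binomial coefficients.
    total, pmf = 0.0, (1 - p) ** n          # pmf = P(X = 0)
    for j in range(n + 1):
        total += j ** k * pmf
        if j < n:
            pmf *= (n - j) / (j + 1) * p / (1 - p)
    return total

def poisson_moment(lam, k, terms=120):
    # E[X^k] for X ~ Poisson(lam), truncating the series (tail negligible).
    total, term = 0.0, exp(-lam)            # term = P(X = 0)
    for j in range(terms):
        total += j ** k * term
        term *= lam / (j + 1)
    return total

for k in (1, 2, 3):
    print(k, round(binom_moment(2000, lam / 2000, k), 4),
             round(poisson_moment(lam, k), 4))
```

For λ = 3 the Poisson moments are E(X) = 3, E(X²) = 12, E(X³) = 57, and the binomial moments approach them as n grows.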
The method of moments was introduced by Pafnuty Chebyshev for proving the central limit theorem; Chebyshev cited earlier contributions by Irénée-Jules Bienaymé. More recently, it has been applied by Eugene Wigner to prove Wigner's semicircle law, and has since found numerous applications in the theory of random matrices.
== Notes == | Wikipedia/Method_of_moments_(probability_theory) |
In econometrics and statistics, the generalized method of moments (GMM) is a generic method for estimating parameters in statistical models. Usually it is applied in the context of semiparametric models, where the parameter of interest is finite-dimensional, whereas the full shape of the data's distribution function may not be known, and therefore maximum likelihood estimation is not applicable.
The method requires that a certain number of moment conditions be specified for the model. These moment conditions are functions of the model parameters and the data, such that their expectation is zero at the parameters' true values. The GMM method then minimizes a certain norm of the sample averages of the moment conditions, and can therefore be thought of as a special case of minimum-distance estimation.
The GMM estimators are known to be consistent, asymptotically normal, and most efficient in the class of all estimators that do not use any extra information aside from that contained in the moment conditions. GMM were advocated by Lars Peter Hansen in 1982 as a generalization of the method of moments, introduced by Karl Pearson in 1894. However, these estimators are mathematically equivalent to those based on "orthogonality conditions" (Sargan, 1958, 1959) or "unbiased estimating equations" (Huber, 1967; Wang et al., 1997).
== Description ==
Suppose the available data consists of T observations {Yt } t = 1,...,T, where each observation Yt is an n-dimensional multivariate random variable. We assume that the data come from a certain statistical model, defined up to an unknown parameter θ ∈ Θ. The goal of the estimation problem is to find the “true” value of this parameter, θ0, or at least a reasonably close estimate.
A general assumption of GMM is that the data Yt be generated by a weakly stationary ergodic stochastic process. (The case of independent and identically distributed (iid) variables Yt is a special case of this condition.)
In order to apply GMM, we need to have "moment conditions", that is, we need to know a vector-valued function g(Y,θ) such that
{\displaystyle m(\theta _{0})\equiv \operatorname {E} [\,g(Y_{t},\theta _{0})\,]=0,}
where E denotes expectation, and Yt is a generic observation. Moreover, the function m(θ) must differ from zero for θ ≠ θ0, otherwise the parameter θ will not be point-identified.
The basic idea behind GMM is to replace the theoretical expected value E[⋅] with its empirical analog—sample average:
{\displaystyle {\hat {m}}(\theta )\equiv {\frac {1}{T}}\sum _{t=1}^{T}g(Y_{t},\theta )}
and then to minimize the norm of this expression with respect to θ. The minimizing value of θ is our estimate for θ0.
By the law of large numbers,
{\displaystyle \scriptstyle {\hat {m}}(\theta )\,\approx \;\operatorname {E} [g(Y_{t},\theta )]\,=\,m(\theta )}
for large values of T, and thus we expect that
{\displaystyle \scriptstyle {\hat {m}}(\theta _{0})\;\approx \;m(\theta _{0})\;=\;0}. The generalized method of moments looks for a number {\displaystyle \scriptstyle {\hat {\theta }}} which would make {\displaystyle \scriptstyle {\hat {m}}(\;\!{\hat {\theta }}\;\!)} as close to zero as possible. Mathematically, this is equivalent to minimizing a certain norm of {\displaystyle \scriptstyle {\hat {m}}(\theta )} (the norm of m, denoted as ||m||, measures the distance between m and zero). The properties of the resulting estimator will depend on the particular choice of the norm function, and therefore the theory of GMM considers an entire family of norms, defined as
{\displaystyle \|{\hat {m}}(\theta )\|_{W}^{2}={\hat {m}}(\theta )^{\mathsf {T}}\,W{\hat {m}}(\theta ),}
where W is a positive-definite weighting matrix, and {\displaystyle m^{\mathsf {T}}} denotes transposition. In practice, the weighting matrix W is computed based on the available data set, which will be denoted as {\displaystyle \scriptstyle {\hat {W}}}. Thus, the GMM estimator can be written as
{\displaystyle {\hat {\theta }}=\operatorname {arg} \min _{\theta \in \Theta }{\bigg (}{\frac {1}{T}}\sum _{t=1}^{T}g(Y_{t},\theta ){\bigg )}^{\mathsf {T}}{\hat {W}}{\bigg (}{\frac {1}{T}}\sum _{t=1}^{T}g(Y_{t},\theta ){\bigg )}}
Under suitable conditions this estimator is consistent, asymptotically normal, and with the right choice of weighting matrix {\displaystyle \scriptstyle {\hat {W}}} also asymptotically efficient.
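As an illustration of the minimization, consider the (hypothetical) moment conditions g(Y, θ) = (Y − μ, (Y − μ)² − σ²), which identify θ = (μ, σ²). The sketch below minimizes the quadratic-form objective with an identity weighting matrix; the data, starting values, and use of `scipy.optimize.minimize` are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
Y = rng.normal(loc=5.0, scale=2.0, size=5000)    # "observed" data

def g(Y, theta):
    # Moment conditions: E[Y - mu] = 0 and E[(Y - mu)^2 - sigma2] = 0.
    mu, sigma2 = theta
    return np.column_stack([Y - mu, (Y - mu) ** 2 - sigma2])

def gmm_objective(theta, Y, W):
    m_hat = g(Y, theta).mean(axis=0)             # sample moment averages
    return m_hat @ W @ m_hat                     # quadratic-form norm

W = np.eye(2)                                    # identity weighting matrix
res = minimize(gmm_objective, x0=[4.0, 3.0], args=(Y, W), method="Nelder-Mead")
mu_hat, sigma2_hat = res.x
```

Since this toy model is exactly identified (two moments, two parameters), the minimizer simply drives the sample moments to zero, recovering the sample mean and (uncorrected) sample variance.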
== Properties ==
=== Consistency ===
Consistency is a statistical property of an estimator stating that, having a sufficient number of observations, the estimator will converge in probability to the true value of parameter:
{\displaystyle {\hat {\theta }}{\xrightarrow {p}}\theta _{0}\ {\text{as}}\ T\to \infty .}
Sufficient conditions for a GMM estimator to be consistent are as follows:
{\displaystyle {\hat {W}}_{T}{\xrightarrow {p}}W,} where W is a positive semi-definite matrix,
{\displaystyle \,W\operatorname {E} [\,g(Y_{t},\theta )\,]=0} only for {\displaystyle \,\theta =\theta _{0},}
The space of possible parameters {\displaystyle \Theta \subset \mathbb {R} ^{k}} is compact,
{\displaystyle \,g(Y,\theta )} is continuous at each θ with probability one,
{\displaystyle \operatorname {E} [\,\textstyle \sup _{\theta \in \Theta }\lVert g(Y,\theta )\rVert \,]<\infty .}
The second condition here (so-called Global identification condition) is often particularly hard to verify. There exist simpler necessary but not sufficient conditions, which may be used to detect non-identification problem:
Order condition. The dimension of moment function m(θ) should be at least as large as the dimension of parameter vector θ.
Local identification. If g(Y,θ) is continuously differentiable in a neighborhood of {\displaystyle \theta _{0}}, then the matrix {\displaystyle W\operatorname {E} [\nabla _{\theta }g(Y_{t},\theta _{0})]} must have full column rank.
In practice applied econometricians often simply assume that global identification holds, without actually proving it.
=== Asymptotic normality ===
Asymptotic normality is a useful property, as it allows us to construct confidence bands for the estimator, and conduct different tests. Before we can make a statement about the asymptotic distribution of the GMM estimator, we need to define two auxiliary matrices:
{\displaystyle G=\operatorname {E} [\,\nabla _{\!\theta }\,g(Y_{t},\theta _{0})\,],\qquad \Omega =\operatorname {E} [\,g(Y_{t},\theta _{0})g(Y_{t},\theta _{0})^{\mathsf {T}}\,]}
Then under conditions 1–6 listed below, the GMM estimator will be asymptotically normal with limiting distribution:
{\displaystyle {\sqrt {T}}{\big (}{\hat {\theta }}-\theta _{0}{\big )}\ {\xrightarrow {d}}\ {\mathcal {N}}{\big [}0,(G^{\mathsf {T}}WG)^{-1}G^{\mathsf {T}}W\Omega W^{\mathsf {T}}G(G^{\mathsf {T}}W^{\mathsf {T}}G)^{-1}{\big ]}.}
Conditions:
1. {\displaystyle {\hat {\theta }}} is consistent (see previous section),
2. The set of possible parameters {\displaystyle \Theta \subset \mathbb {R} ^{k}} is compact,
3. {\displaystyle \,g(Y,\theta )} is continuously differentiable in some neighborhood N of {\displaystyle \theta _{0}} with probability one,
4. {\displaystyle \operatorname {E} [\,\lVert g(Y_{t},\theta )\rVert ^{2}\,]<\infty ,}
5. {\displaystyle \operatorname {E} [\,\textstyle \sup _{\theta \in N}\lVert \nabla _{\theta }g(Y_{t},\theta )\rVert \,]<\infty ,}
6. the matrix {\displaystyle G^{\mathsf {T}}WG} is nonsingular.
=== Relative Efficiency ===
So far we have said nothing about the choice of matrix W, except that it must be positive semi-definite. In fact any such matrix will produce a consistent and asymptotically normal GMM estimator, the only difference will be in the asymptotic variance of that estimator. It can be shown that taking
{\displaystyle W\propto \ \Omega ^{-1}}
will result in the most efficient estimator in the class of all (generalized) method of moments estimators. Only with an infinite number of orthogonality conditions can the smallest possible variance, the Cramér–Rao bound, be attained.
In this case the formula for the asymptotic distribution of the GMM estimator simplifies to
{\displaystyle {\sqrt {T}}{\big (}{\hat {\theta }}-\theta _{0}{\big )}\ {\xrightarrow {d}}\ {\mathcal {N}}{\big [}0,(G^{\mathsf {T}}\,\Omega ^{-1}G)^{-1}{\big ]}}
The proof that such a choice of weighting matrix is indeed locally optimal is often adopted with slight modifications when establishing efficiency of other estimators. As a rule of thumb, a weighting matrix is closer to optimality the closer the resulting asymptotic variance is to the Cramér–Rao bound.
== Implementation ==
One difficulty with implementing the outlined method is that we cannot take W = Ω−1 because, by the definition of matrix Ω, we need to know the value of θ0 in order to compute this matrix, and θ0 is precisely the quantity we do not know and are trying to estimate in the first place. In the case of Yt being iid we can estimate W as
{\displaystyle {\hat {W}}_{T}({\hat {\theta }})={\bigg (}{\frac {1}{T}}\sum _{t=1}^{T}g(Y_{t},{\hat {\theta }})g(Y_{t},{\hat {\theta }})^{\mathsf {T}}{\bigg )}^{-1}.}
Several approaches exist to deal with this issue, the first one being the most popular:
Two-step feasible GMM: in the first step estimate θ using an arbitrary weighting matrix (typically the identity), then compute {\displaystyle \scriptstyle {\hat {W}}_{T}} at this preliminary estimate and re-estimate θ with the new weighting matrix;
Iterated GMM: repeat the two-step procedure, recomputing the weighting matrix at each new estimate, until convergence;
Continuously updating GMM (CUGMM): minimize the objective function with the weighting matrix treated as a function of θ.
Another important issue in implementation of minimization procedure is that the function is supposed to search through (possibly high-dimensional) parameter space Θ and find the value of θ which minimizes the objective function. No generic recommendation for such procedure exists, it is a subject of its own field, numerical optimization.
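A sketch of the two-step procedure, in which a preliminary estimate with W = I is used to build the data-based weighting matrix for a second estimation (the mean–variance moment conditions, data, and starting values are illustrative assumptions; `scipy` is assumed available):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
Y = rng.normal(loc=5.0, scale=2.0, size=5000)

def g(Y, theta):
    # Illustrative moment conditions for the mean and variance.
    mu, sigma2 = theta
    return np.column_stack([Y - mu, (Y - mu) ** 2 - sigma2])

def estimate(Y, W, x0):
    def obj(theta):
        m_hat = g(Y, theta).mean(axis=0)
        return m_hat @ W @ m_hat
    return minimize(obj, x0=x0, method="Nelder-Mead").x

# Step 1: preliminary estimate with the identity weighting matrix.
theta1 = estimate(Y, np.eye(2), x0=[4.0, 3.0])

# Step 2: W_hat = ((1/T) sum g g^T)^{-1} evaluated at the first-step estimate,
# then re-estimate with this efficient weighting matrix.
G1 = g(Y, theta1)
W_hat = np.linalg.inv(G1.T @ G1 / len(Y))
theta2 = estimate(Y, W_hat, x0=theta1)
```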
== Sargan–Hansen J-test ==
When the number of moment conditions is greater than the dimension of the parameter vector θ, the model is said to be over-identified. Sargan (1958) proposed tests for over-identifying restrictions based on instrumental variables estimators that are distributed in large samples as Chi-square variables with degrees of freedom that depend on the number of over-identifying restrictions. Subsequently, Hansen (1982) applied this test to the mathematically equivalent formulation of GMM estimators. Note, however, that such statistics can be negative in empirical applications where the models are misspecified, and likelihood ratio tests can yield insights since the models are estimated under both null and alternative hypotheses (Bhargava and Sargan, 1983).
Conceptually we can check whether {\displaystyle {\hat {m}}({\hat {\theta }})} is sufficiently close to zero to suggest that the model fits the data well. The GMM method has then replaced the problem of solving the equation {\displaystyle {\hat {m}}(\theta )=0}, which chooses {\displaystyle \theta } to match the restrictions exactly, by a minimization calculation. The minimization can always be conducted even when no {\displaystyle \theta _{0}} exists such that {\displaystyle m(\theta _{0})=0}. This is what the J-test does. The J-test is also called a test for over-identifying restrictions.
Formally we consider two hypotheses:
{\displaystyle H_{0}:\ m(\theta _{0})=0} (the null hypothesis that the model is “valid”), and
{\displaystyle H_{1}:\ m(\theta )\neq 0,\ \forall \theta \in \Theta } (the alternative hypothesis that the model is “invalid”; the data does not come close to meeting the restrictions)
Under hypothesis {\displaystyle H_{0}}, the following so-called J-statistic is asymptotically chi-squared distributed with k–ℓ degrees of freedom. Define J to be:
{\displaystyle J\equiv T\cdot {\bigg (}{\frac {1}{T}}\sum _{t=1}^{T}g(Y_{t},{\hat {\theta }}){\bigg )}^{\mathsf {T}}{\hat {W}}_{T}{\bigg (}{\frac {1}{T}}\sum _{t=1}^{T}g(Y_{t},{\hat {\theta }}){\bigg )}\ {\xrightarrow {d}}\ \chi _{k-\ell }^{2}}
under {\displaystyle H_{0},}
where {\displaystyle {\hat {\theta }}} is the GMM estimator of the parameter {\displaystyle \theta _{0}}, k is the number of moment conditions (dimension of vector g), and ℓ is the number of estimated parameters (dimension of vector θ). Matrix {\displaystyle {\hat {W}}_{T}} must converge in probability to {\displaystyle \Omega ^{-1}}, the efficient weighting matrix (note that previously we only required that W be proportional to {\displaystyle \Omega ^{-1}} for the estimator to be efficient; however in order to conduct the J-test W must be exactly equal to {\displaystyle \Omega ^{-1}}, not simply proportional).
Under the alternative hypothesis {\displaystyle H_{1}}, the J-statistic is asymptotically unbounded:
{\displaystyle J\ {\xrightarrow {p}}\ \infty } under {\displaystyle H_{1}}
To conduct the test we compute the value of J from the data. It is a nonnegative number. We compare it with (for example) the 0.95 quantile of the {\displaystyle \chi _{k-\ell }^{2}} distribution:
{\displaystyle H_{0}} is rejected at the 95% confidence level if {\displaystyle J>q_{0.95}^{\chi _{k-\ell }^{2}}}
{\displaystyle H_{0}} cannot be rejected at the 95% confidence level if {\displaystyle J<q_{0.95}^{\chi _{k-\ell }^{2}}}
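A sketch of the J-test for an over-identified toy model: three moment conditions implied by an exponential distribution with mean θ (so k = 3, ℓ = 1, giving 2 degrees of freedom). The model, data, and two-step weighting are illustrative assumptions; `scipy` is assumed available:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

rng = np.random.default_rng(0)
Y = rng.exponential(scale=2.0, size=5000)   # true theta (the mean) = 2

def g(Y, theta):
    # Exponential(theta) implies E[Y^r] = r! * theta^r for r = 1, 2, 3.
    return np.column_stack([Y - theta, Y**2 - 2 * theta**2, Y**3 - 6 * theta**3])

def j_stat(Y, theta, W):
    m_hat = g(Y, theta).mean(axis=0)
    return len(Y) * m_hat @ W @ m_hat

# Two-step: identity weighting first, then the efficient weighting matrix.
th1 = minimize_scalar(lambda t: j_stat(Y, t, np.eye(3)),
                      bounds=(0.1, 10), method="bounded").x
G1 = g(Y, th1)
W_hat = np.linalg.inv(G1.T @ G1 / len(Y))
th2 = minimize_scalar(lambda t: j_stat(Y, t, W_hat),
                      bounds=(0.1, 10), method="bounded").x

J = j_stat(Y, th2, W_hat)
k, l = 3, 1                                 # moment conditions vs parameters
crit = chi2.ppf(0.95, df=k - l)             # 0.95 quantile of chi2 with 2 df
```

With a correctly specified model, J should typically fall below the critical value; rejection would signal that the over-identifying restrictions are inconsistent with the data.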
== Scope ==
Many other popular estimation techniques can be cast in terms of GMM optimization:
== An Alternative to the GMM ==
In method of moments, an alternative to the original (non-generalized) Method of Moments (MoM) is described, and references to some applications and a list of theoretical advantages and disadvantages relative to the traditional method are provided. This Bayesian-Like MoM (BL-MoM) is distinct from all the related methods described above, which are subsumed by the GMM. The literature does not contain a direct comparison between the GMM and the BL-MoM in specific applications.
== Implementations ==
R Programming wikibook, Method of Moments
R
Stata
EViews
SAS
Gretl
== See also ==
Method of maximum likelihood
Generalized empirical likelihood
Arellano–Bond estimator
Approximate Bayesian computation
== References ==
== Further reading ==
Huber, P. (1967). The behavior of maximum likelihood estimates under nonstandard conditions. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability 1, 221-233.
Newey W., McFadden D. (1994). Large sample estimation and hypothesis testing, in Handbook of Econometrics, Ch.36. Elsevier Science.
Imbens, Guido W.; Spady, Richard H.; Johnson, Phillip (1998). "Information theoretic approaches to inference in moment condition models" (PDF). Econometrica. 66 (2): 333–357. doi:10.2307/2998561. JSTOR 2998561.
Sargan, J.D. (1958). The estimation of economic relationships using instrumental variables. Econometrica, 26, 393-415.
Sargan, J.D. (1959). The estimation of relationships with autocorrelated residuals by the use on instrumental variables. Journal of the Royal Statistical Society B, 21, 91-105.
Wang, C.Y., Wang, S., and Carroll, R. (1997). Estimation in choice-based sampling with measurement error and bootstrap analysis. Journal of Econometrics, 77, 65-86.
Bhargava, A., and Sargan, J.D. (1983). Estimating dynamic random effects from panel data covering short time periods. Econometrica, 51, 6, 1635-1659.
Hayashi, Fumio (2000). Econometrics. Princeton: Princeton University Press. ISBN 0-691-01018-8.
Hansen, Lars Peter (2002). "Method of Moments". In Smelser, N. J.; Bates, P. B. (eds.). International Encyclopedia of the Social and Behavior Sciences. Oxford: Pergamon.
Hall, Alastair R. (2005). Generalized Method of Moments. Advanced Texts in Econometrics. Oxford University Press. ISBN 0-19-877520-2.
Faciane, Kirby Adam Jr. (2006). Statistics for Empirical and Quantitative Finance. Statistics for Empirical and Quantitative Finance. H.C. Baird. ISBN 0-9788208-9-4.
Special issues of Journal of Business and Economic Statistics: vol. 14, no. 3 and vol. 20, no. 4.
Short Introduction to the Generalized Method of Moments | Wikipedia/Generalized_method_of_moments |
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
If the likelihood function is differentiable, the derivative test for finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, the ordinary least squares estimator for a linear regression model maximizes the likelihood when the random errors are assumed to have normal distributions with the same variance.
From the perspective of Bayesian inference, MLE is generally equivalent to maximum a posteriori (MAP) estimation with a prior distribution that is uniform in the region of interest. In frequentist inference, MLE is a special case of an extremum estimator, with the objective function being the likelihood.
== Principles ==
We model a set of observations as a random sample from an unknown joint probability distribution which is expressed in terms of a set of parameters. The goal of maximum likelihood estimation is to determine the parameters for which the observed data have the highest joint probability. We write the parameters governing the joint distribution as a vector
{\displaystyle \;\theta =\left[\theta _{1},\,\theta _{2},\,\ldots ,\,\theta _{k}\right]^{\mathsf {T}}\;}
so that this distribution falls within a parametric family
{\displaystyle \;\{f(\cdot \,;\theta )\mid \theta \in \Theta \}\;,}
where {\displaystyle \,\Theta \,} is called the parameter space, a finite-dimensional subset of Euclidean space. Evaluating the joint density at the observed data sample
{\displaystyle \;\mathbf {y} =(y_{1},y_{2},\ldots ,y_{n})\;}
gives a real-valued function,
{\displaystyle {\mathcal {L}}_{n}(\theta )={\mathcal {L}}_{n}(\theta ;\mathbf {y} )=f_{n}(\mathbf {y} ;\theta )\;,}
which is called the likelihood function. For independent random variables,
{\displaystyle f_{n}(\mathbf {y} ;\theta )}
will be the product of univariate density functions:
{\displaystyle f_{n}(\mathbf {y} ;\theta )=\prod _{k=1}^{n}\,f_{k}^{\mathsf {univar}}(y_{k};\theta )~.}
The goal of maximum likelihood estimation is to find the values of the model parameters that maximize the likelihood function over the parameter space, that is:
{\displaystyle {\hat {\theta }}={\underset {\theta \in \Theta }{\operatorname {arg\;max} }}\,{\mathcal {L}}_{n}(\theta \,;\mathbf {y} )~.}
Intuitively, this selects the parameter values that make the observed data most probable. The specific value
{\displaystyle ~{\hat {\theta }}={\hat {\theta }}_{n}(\mathbf {y} )\in \Theta ~} that maximizes the likelihood function {\displaystyle \,{\mathcal {L}}_{n}\,} is called the maximum likelihood estimate. Further, if the function {\displaystyle \;{\hat {\theta }}_{n}:\mathbb {R} ^{n}\to \Theta \;} so defined is measurable, then it is called the maximum likelihood estimator. It is generally a function defined over the sample space, i.e. taking a given sample as its argument. A sufficient but not necessary condition for its existence is for the likelihood function to be continuous over a parameter space {\displaystyle \,\Theta \,} that is compact. For an open {\displaystyle \,\Theta \,} the likelihood function may increase without ever reaching a supremum value.
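For a model with a closed-form solution, the normal distribution: maximizing the likelihood over (μ, σ) yields the sample mean and the uncorrected sample standard deviation. The sketch below checks this against a numerical maximizer (the data and the log-σ parameterization are illustrative choices; `scipy` is assumed available):

```python
import numpy as np
from scipy.optimize import minimize

y = np.array([2.1, 3.4, 1.9, 2.8, 3.1, 2.5])   # made-up observations

def neg_log_lik(params, y):
    # Parameterize by log(sigma) so that sigma stays positive.
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (y - mu)**2 / sigma**2)

res = minimize(neg_log_lik, x0=[0.0, 0.0], args=(y,))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# Closed form for the normal model: the sample mean and the biased
# (1/n) sample standard deviation.
print(mu_hat, sigma_hat)
```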
In practice, it is often convenient to work with the natural logarithm of the likelihood function, called the log-likelihood:
{\displaystyle \ell (\theta \,;\mathbf {y} )=\ln {\mathcal {L}}_{n}(\theta \,;\mathbf {y} )~.}
Since the logarithm is a monotonic function, the maximum of {\displaystyle \;\ell (\theta \,;\mathbf {y} )\;} occurs at the same value of {\displaystyle \theta } as does the maximum of {\displaystyle \,{\mathcal {L}}_{n}~.}
If {\displaystyle \ell (\theta \,;\mathbf {y} )} is differentiable in {\displaystyle \,\Theta \,,} necessary conditions for the occurrence of a maximum (or a minimum) at an interior point are
{\displaystyle {\frac {\partial \ell }{\partial \theta _{1}}}=0,\quad {\frac {\partial \ell }{\partial \theta _{2}}}=0,\quad \ldots ,\quad {\frac {\partial \ell }{\partial \theta _{k}}}=0~,}
known as the likelihood equations. For some models, these equations can be explicitly solved for
θ
^
,
{\displaystyle \,{\widehat {\theta \,}}\,,}
but in general no closed-form solution to the maximization problem is known or available, and an MLE can only be found via numerical optimization. Another problem is that in finite samples, there may exist multiple roots for the likelihood equations. Whether the identified root
{\displaystyle \,{\widehat {\theta \,}}\,}
of the likelihood equations is indeed a (local) maximum depends on whether the matrix of second-order partial and cross-partial derivatives, the so-called Hessian matrix
{\displaystyle \mathbf {H} \left({\widehat {\theta \,}}\right)={\begin{bmatrix}\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}\,\partial \theta _{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}\,\partial \theta _{k}}}\right|_{\theta ={\widehat {\theta \,}}}\\\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}\,\partial \theta _{1}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}\,\partial \theta _{k}}}\right|_{\theta ={\widehat {\theta \,}}}\\\vdots &\vdots &\ddots &\vdots \\\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}\,\partial \theta _{1}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}\,\partial \theta _{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}\end{bmatrix}}~,}
is negative semi-definite at
{\displaystyle {\widehat {\theta \,}}}
, as this indicates local concavity. Conveniently, most common probability distributions – in particular the exponential family – are logarithmically concave.
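For a one-parameter illustration of solving the likelihood equation numerically (a sketch, with hypothetical data and an exponential model, neither taken from the article): the exponential log-likelihood ℓ(λ) = n ln λ − λΣyᵢ has second derivative −n/λ², which is strictly negative, so the unique root of ℓ′(λ) = 0 is the global maximum and Newton's method recovers the closed-form MLE n/Σyᵢ:

```python
# Newton's method on the likelihood equation dl/dlam = n/lam - sum(y) = 0
# for the exponential model f(y; lam) = lam * exp(-lam * y).
data = [0.5, 1.2, 0.3, 2.0, 0.8]   # hypothetical observations
n, s = len(data), sum(data)

lam = 0.5                           # starting guess
for _ in range(50):
    grad = n / lam - s              # first derivative of the log-likelihood
    hess = -n / lam ** 2            # second derivative: negative, so l is concave
    lam -= grad / hess              # Newton step

# lam now agrees with the closed-form MLE n / sum(data)
```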
=== Restricted parameter space ===
While the domain of the likelihood function—the parameter space—is generally a finite-dimensional subset of Euclidean space, additional restrictions sometimes need to be incorporated into the estimation process. The parameter space can be expressed as
{\displaystyle \Theta =\left\{\theta :\theta \in \mathbb {R} ^{k},\;h(\theta )=0\right\}~,}
where
{\displaystyle \;h(\theta )=\left[h_{1}(\theta ),h_{2}(\theta ),\ldots ,h_{r}(\theta )\right]\;}
is a vector-valued function mapping
{\displaystyle \,\mathbb {R} ^{k}\,}
into
{\displaystyle \;\mathbb {R} ^{r}~.}
Estimating the true parameter
{\displaystyle \theta }
belonging to
{\displaystyle \Theta }
then, as a practical matter, means to find the maximum of the likelihood function subject to the constraint
{\displaystyle ~h(\theta )=0~.}
Theoretically, the most natural approach to this constrained optimization problem is the method of substitution, that is "filling out" the restrictions
{\displaystyle \;h_{1},h_{2},\ldots ,h_{r}\;}
to a set
{\displaystyle \;h_{1},h_{2},\ldots ,h_{r},h_{r+1},\ldots ,h_{k}\;}
in such a way that
{\displaystyle \;h^{\ast }=\left[h_{1},h_{2},\ldots ,h_{k}\right]\;}
is a one-to-one function from
{\displaystyle \mathbb {R} ^{k}}
to itself, and reparameterize the likelihood function by setting
{\displaystyle \;\phi _{i}=h_{i}(\theta _{1},\theta _{2},\ldots ,\theta _{k})~.}
Because of the equivariance of the maximum likelihood estimator, the properties of the MLE apply to the restricted estimates also. For instance, in a multivariate normal distribution the covariance matrix
{\displaystyle \,\Sigma \,}
must be positive-definite; this restriction can be imposed by replacing
{\displaystyle \;\Sigma =\Gamma ^{\mathsf {T}}\Gamma \;,}
where
{\displaystyle \Gamma }
is a real upper triangular matrix and
{\displaystyle \Gamma ^{\mathsf {T}}}
is its transpose.
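A minimal sketch of this reparameterization in the 2 × 2 case (with made-up entries for Γ): any upper triangular Γ with nonzero diagonal yields a positive-definite Σ = ΓᵀΓ, so an unconstrained search over the entries of Γ respects the constraint automatically.

```python
def sigma_from_gamma(g11, g12, g22):
    """Sigma = Gamma^T Gamma for upper triangular Gamma = [[g11, g12], [0, g22]]."""
    return [[g11 * g11, g11 * g12],
            [g11 * g12, g12 * g12 + g22 * g22]]

S = sigma_from_gamma(1.0, 0.5, 2.0)   # hypothetical Gamma entries
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
# det equals (g11 * g22)^2 > 0 and S[0][0] = g11^2 > 0,
# so by Sylvester's criterion S is positive-definite.
```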
In practice, restrictions are usually imposed using the method of Lagrange which, given the constraints as defined above, leads to the restricted likelihood equations
{\displaystyle {\frac {\partial \ell }{\partial \theta }}-{\frac {\partial h(\theta )^{\mathsf {T}}}{\partial \theta }}\lambda =0}
and
{\displaystyle h(\theta )=0\;,}
where
{\displaystyle ~\lambda =\left[\lambda _{1},\lambda _{2},\ldots ,\lambda _{r}\right]^{\mathsf {T}}~}
is a column-vector of Lagrange multipliers and
{\displaystyle \;{\frac {\partial h(\theta )^{\mathsf {T}}}{\partial \theta }}\;}
is the k × r Jacobian matrix of partial derivatives. Naturally, if the constraints are not binding at the maximum, the Lagrange multipliers should be zero. This in turn allows for a statistical test of the "validity" of the constraint, known as the Lagrange multiplier test.
=== Nonparametric maximum likelihood estimation ===
Nonparametric maximum likelihood estimation can be performed using the empirical likelihood.
== Properties ==
A maximum likelihood estimator is an extremum estimator obtained by maximizing, as a function of θ, the objective function
{\displaystyle {\widehat {\ell \,}}(\theta \,;x)}
. If the data are independent and identically distributed, then we have
{\displaystyle {\widehat {\ell \,}}(\theta \,;x)=\sum _{i=1}^{n}\ln f(x_{i}\mid \theta ),}
this being the sample analogue of the expected log-likelihood
{\displaystyle \ell (\theta )=\operatorname {\mathbb {E} } [\,\ln f(x_{i}\mid \theta )\,]}
, where this expectation is taken with respect to the true density.
Maximum-likelihood estimators have no optimal properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may be more concentrated around the true parameter value. However, like other estimation methods, maximum likelihood estimation possesses a number of attractive limiting properties: as the sample size increases to infinity, sequences of maximum likelihood estimators have these properties:
Consistency: the sequence of MLEs converges in probability to the value being estimated.
Equivariance: If
{\displaystyle {\hat {\theta }}}
is the maximum likelihood estimator for
{\displaystyle \theta }
, and if
{\displaystyle g(\theta )}
is a bijective transform of
{\displaystyle \theta }
, then the maximum likelihood estimator for
{\displaystyle \alpha =g(\theta )}
is
{\displaystyle {\hat {\alpha }}=g({\hat {\theta }})}
. The equivariance property can be generalized to non-bijective transforms, although it applies in that case on the maximum of an induced likelihood function which is not the true likelihood in general.
Efficiency, i.e. it achieves the Cramér–Rao lower bound when the sample size tends to infinity. This means that no consistent estimator has lower asymptotic mean squared error than the MLE (or other estimators attaining this bound), which also means that MLE has asymptotic normality.
Second-order efficiency after correction for bias.
=== Consistency ===
Under the conditions outlined below, the maximum likelihood estimator is consistent. The consistency means that if the data were generated by
{\displaystyle f(\cdot \,;\theta _{0})}
and we have a sufficiently large number of observations n, then it is possible to find the value of θ0 with arbitrary precision. In mathematical terms this means that as n goes to infinity the estimator
{\displaystyle {\widehat {\theta \,}}}
converges in probability to its true value:
{\displaystyle {\widehat {\theta \,}}_{\mathrm {mle} }\ {\xrightarrow {\text{p}}}\ \theta _{0}.}
Under slightly stronger conditions, the estimator converges almost surely (or strongly):
{\displaystyle {\widehat {\theta \,}}_{\mathrm {mle} }\ {\xrightarrow {\text{a.s.}}}\ \theta _{0}.}
In practical applications, data is never generated by
{\displaystyle f(\cdot \,;\theta _{0})}
. Rather,
{\displaystyle f(\cdot \,;\theta _{0})}
is a model, often in idealized form, of the process generated by the data. It is a common aphorism in statistics that all models are wrong. Thus, true consistency does not occur in practical applications. Nevertheless, consistency is often considered to be a desirable property for an estimator to have.
To establish consistency, the following conditions are sufficient.
The dominance condition can be employed in the case of i.i.d. observations. In the non-i.i.d. case, the uniform convergence in probability can be checked by showing that the sequence
{\displaystyle {\widehat {\ell \,}}(\theta \mid x)}
is stochastically equicontinuous.
If one wants to demonstrate that the ML estimator
{\displaystyle {\widehat {\theta \,}}}
converges to θ0 almost surely, then a stronger condition of uniform convergence almost surely has to be imposed:
{\displaystyle \sup _{\theta \in \Theta }\left\|\;{\widehat {\ell \,}}(\theta \mid x)-\ell (\theta )\;\right\|\ \xrightarrow {\text{a.s.}} \ 0.}
Additionally, if (as assumed above) the data were generated by
{\displaystyle f(\cdot \,;\theta _{0})}
, then under certain conditions, it can also be shown that the maximum likelihood estimator converges in distribution to a normal distribution. Specifically,
{\displaystyle {\sqrt {n}}\left({\widehat {\theta \,}}_{\mathrm {mle} }-\theta _{0}\right)\ \xrightarrow {d} \ {\mathcal {N}}\left(0,\,I^{-1}\right)}
where I is the Fisher information matrix.
=== Functional invariance ===
The maximum likelihood estimator selects the parameter value which gives the observed data the largest possible probability (or probability density, in the continuous case). If the parameter consists of a number of components, then we define their separate maximum likelihood estimators, as the corresponding component of the MLE of the complete parameter. Consistent with this, if
{\displaystyle {\widehat {\theta \,}}}
is the MLE for
{\displaystyle \theta }
, and if
{\displaystyle g(\theta )}
is any transformation of
{\displaystyle \theta }
, then the MLE for
{\displaystyle \alpha =g(\theta )}
is by definition
{\displaystyle {\widehat {\alpha }}=g(\,{\widehat {\theta \,}}\,).\,}
It maximizes the so-called profile likelihood:
{\displaystyle {\bar {L}}(\alpha )=\sup _{\theta :\alpha =g(\theta )}L(\theta ).\,}
The MLE is also equivariant with respect to certain transformations of the data. If
{\displaystyle y=g(x)}
where
{\displaystyle g}
is one to one and does not depend on the parameters to be estimated, then the density functions satisfy
{\displaystyle f_{Y}(y)=f_{X}(g^{-1}(y))\,|(g^{-1}(y))^{\prime }|}
and hence the likelihood functions for
{\displaystyle X}
and
{\displaystyle Y}
differ only by a factor that does not depend on the model parameters.
For example, the MLE parameters of the log-normal distribution are the same as those of the normal distribution fitted to the logarithm of the data. In fact, in the log-normal case if
{\displaystyle X\sim {\mathcal {N}}(0,1)}
, then
{\displaystyle Y=g(X)=e^{X}}
follows a log-normal distribution. The density of Y follows with
{\displaystyle f_{X}}
standard Normal and
{\displaystyle g^{-1}(y)=\log(y)}
,
{\displaystyle |(g^{-1}(y))^{\prime }|={\frac {1}{y}}}
for
{\displaystyle y>0}
.
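This equivalence can be checked numerically. The sketch below (with made-up positive data) maximizes the log-normal log-likelihood over a grid of μ values, holding the variance at its MLE, and compares the maximizer with the normal-distribution MLE computed from the logarithms of the data:

```python
import math

data = [1.2, 3.4, 0.7, 2.1, 5.0]        # hypothetical positive observations
logs = [math.log(y) for y in data]
mu_normal = sum(logs) / len(logs)        # normal MLE for the mean of log(data)
s2 = sum((z - mu_normal) ** 2 for z in logs) / len(logs)

def ll_lognormal(mu):
    # log-density of the log-normal: normal density of log(y) times the Jacobian 1/y
    return sum(-math.log(y) - 0.5 * math.log(2 * math.pi * s2)
               - (math.log(y) - mu) ** 2 / (2 * s2) for y in data)

grid = [i / 1000 for i in range(-2000, 3001)]
mu_lognormal = max(grid, key=ll_lognormal)
# mu_lognormal agrees with mu_normal up to the grid resolution
```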
=== Efficiency ===
As assumed above, if the data were generated by
{\displaystyle ~f(\cdot \,;\theta _{0})~,}
then under certain conditions, it can also be shown that the maximum likelihood estimator converges in distribution to a normal distribution. It is √n -consistent and asymptotically efficient, meaning that it reaches the Cramér–Rao bound. Specifically,
{\displaystyle {\sqrt {n\,}}\,\left({\widehat {\theta \,}}_{\text{mle}}-\theta _{0}\right)\ \ \xrightarrow {d} \ \ {\mathcal {N}}\left(0,\ {\mathcal {I}}^{-1}\right)~,}
where
{\displaystyle ~{\mathcal {I}}~}
is the Fisher information matrix:
{\displaystyle {\mathcal {I}}_{jk}=\operatorname {\mathbb {E} } \,{\biggl [}\;-{\frac {\partial ^{2}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{j}\,\partial \theta _{k}}}\;{\biggr ]}~.}
In particular, it means that the bias of the maximum likelihood estimator is equal to zero up to the order 1/√n .
=== Second-order efficiency after correction for bias ===
However, when we consider the higher-order terms in the expansion of the distribution of this estimator, it turns out that θmle has bias of order 1⁄n. This bias is equal to (componentwise)
{\displaystyle b_{h}\;\equiv \;\operatorname {\mathbb {E} } {\biggl [}\;\left({\widehat {\theta }}_{\mathrm {mle} }-\theta _{0}\right)_{h}\;{\biggr ]}\;=\;{\frac {1}{\,n\,}}\,\sum _{i,j,k=1}^{m}\;{\mathcal {I}}^{hi}\;{\mathcal {I}}^{jk}\left({\frac {1}{\,2\,}}\,K_{ijk}\;+\;J_{j,ik}\right)}
where
{\displaystyle {\mathcal {I}}^{jk}}
(with superscripts) denotes the (j,k)-th component of the inverse Fisher information matrix
{\displaystyle {\mathcal {I}}^{-1}}
, and
{\displaystyle {\frac {1}{\,2\,}}\,K_{ijk}\;+\;J_{j,ik}\;=\;\operatorname {\mathbb {E} } \,{\biggl [}\;{\frac {1}{2}}{\frac {\partial ^{3}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{i}\;\partial \theta _{j}\;\partial \theta _{k}}}+{\frac {\;\partial \ln f_{\theta _{0}}(X_{t})\;}{\partial \theta _{j}}}\,{\frac {\;\partial ^{2}\ln f_{\theta _{0}}(X_{t})\;}{\partial \theta _{i}\,\partial \theta _{k}}}\;{\biggr ]}~.}
Using these formulae it is possible to estimate the second-order bias of the maximum likelihood estimator, and correct for that bias by subtracting it:
{\displaystyle {\widehat {\theta \,}}_{\text{mle}}^{*}={\widehat {\theta \,}}_{\text{mle}}-{\widehat {b\,}}~.}
This estimator is unbiased up to the terms of order 1/n, and is called the bias-corrected maximum likelihood estimator.
This bias-corrected estimator is second-order efficient (at least within the curved exponential family), meaning that it has minimal mean squared error among all second-order bias-corrected estimators, up to the terms of the order 1/n². It is possible to continue this process, that is to derive the third-order bias-correction term, and so on. However, the maximum likelihood estimator is not third-order efficient.
=== Relation to Bayesian inference ===
A maximum likelihood estimator coincides with the most probable Bayesian estimator given a uniform prior distribution on the parameters. Indeed, the maximum a posteriori estimate is the parameter θ that maximizes the probability of θ given the data, given by Bayes' theorem:
{\displaystyle \operatorname {\mathbb {P} } (\theta \mid x_{1},x_{2},\ldots ,x_{n})={\frac {f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )\operatorname {\mathbb {P} } (\theta )}{\operatorname {\mathbb {P} } (x_{1},x_{2},\ldots ,x_{n})}}}
where
{\displaystyle \operatorname {\mathbb {P} } (\theta )}
is the prior distribution for the parameter θ and where
{\displaystyle \operatorname {\mathbb {P} } (x_{1},x_{2},\ldots ,x_{n})}
is the probability of the data averaged over all parameters. Since the denominator is independent of θ, the Bayesian estimator is obtained by maximizing
{\displaystyle f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )\operatorname {\mathbb {P} } (\theta )}
with respect to θ. If we further assume that the prior
{\displaystyle \operatorname {\mathbb {P} } (\theta )}
is a uniform distribution, the Bayesian estimator is obtained by maximizing the likelihood function
{\displaystyle f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )}
. Thus the Bayesian estimator coincides with the maximum likelihood estimator for a uniform prior distribution
{\displaystyle \operatorname {\mathbb {P} } (\theta )}.
==== Application of maximum-likelihood estimation in Bayes decision theory ====
In many practical applications in machine learning, maximum-likelihood estimation is used as the model for parameter estimation.
Bayesian decision theory is about designing a classifier that minimizes total expected risk; in particular, when the costs (the loss function) associated with different decisions are equal, the classifier minimizes the error over the whole distribution.
Thus, the Bayes Decision Rule is stated as
"decide
{\displaystyle \;w_{1}\;}
if
{\displaystyle ~\operatorname {\mathbb {P} } (w_{1}|x)\;>\;\operatorname {\mathbb {P} } (w_{2}|x)~;~}
otherwise decide
{\displaystyle \;w_{2}\;}"
where
{\displaystyle \;w_{1}\,,w_{2}\;}
are predictions of different classes. From a perspective of minimizing error, it can also be stated as
{\displaystyle w={\underset {w}{\operatorname {arg\;min} }}\;\int _{-\infty }^{\infty }\operatorname {\mathbb {P} } ({\text{ error}}\mid x)\operatorname {\mathbb {P} } (x)\,\operatorname {d} x~}
where
{\displaystyle \operatorname {\mathbb {P} } ({\text{ error}}\mid x)=\operatorname {\mathbb {P} } (w_{1}\mid x)~}
if we decide
{\displaystyle \;w_{2}\;}
and
{\displaystyle \;\operatorname {\mathbb {P} } ({\text{ error}}\mid x)=\operatorname {\mathbb {P} } (w_{2}\mid x)\;}
if we decide
{\displaystyle \;w_{1}\;.}
By applying Bayes' theorem
{\displaystyle \operatorname {\mathbb {P} } (w_{i}\mid x)={\frac {\operatorname {\mathbb {P} } (x\mid w_{i})\operatorname {\mathbb {P} } (w_{i})}{\operatorname {\mathbb {P} } (x)}}},
and if we further assume the zero-one loss function, which assigns the same loss to all errors, the Bayes decision rule can be reformulated as:
{\displaystyle h_{\text{Bayes}}={\underset {w}{\operatorname {arg\;max} }}\,{\bigl [}\,\operatorname {\mathbb {P} } (x\mid w)\,\operatorname {\mathbb {P} } (w)\,{\bigr ]}\;,}
where
{\displaystyle h_{\text{Bayes}}}
is the prediction and
{\displaystyle \;\operatorname {\mathbb {P} } (w)\;}
is the prior probability.
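A minimal sketch of this rule with two hypothetical one-dimensional Gaussian classes and unequal priors (all numbers made up for illustration): the decision simply compares P(x|w)P(w) across classes.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# hypothetical class-conditional densities P(x | w) and priors P(w)
priors = {"w1": 0.6, "w2": 0.4}
params = {"w1": (0.0, 1.0), "w2": (2.0, 1.0)}  # (mean, std dev) per class

def decide(x):
    # pick the class maximizing P(x | w) * P(w)
    scores = {w: normal_pdf(x, *params[w]) * priors[w] for w in priors}
    return max(scores, key=scores.get)
```

Points far to the left are assigned to w1 and far to the right to w2; the boundary sits slightly right of the midpoint between the class means because w1 has the larger prior.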
=== Relation to minimizing Kullback–Leibler divergence and cross entropy ===
Finding
{\displaystyle {\hat {\theta }}}
that maximizes the likelihood is asymptotically equivalent to finding the
{\displaystyle {\hat {\theta }}}
that defines a probability distribution (
{\displaystyle Q_{\hat {\theta }}}
) that has a minimal distance, in terms of Kullback–Leibler divergence, to the real probability distribution from which our data were generated (i.e., generated by
{\displaystyle P_{\theta _{0}}}
). In an ideal world, P and Q are the same (and the only thing unknown is
{\displaystyle \theta }
that defines P), but even if they are not and the model we use is misspecified, still the MLE will give us the "closest" distribution (within the restriction of a model Q that depends on
{\displaystyle {\hat {\theta }}}
) to the real distribution
{\displaystyle P_{\theta _{0}}}.
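A sketch of this equivalence for a two-point distribution (values chosen arbitrarily): minimizing D_KL(P ‖ Q_θ) over θ and maximizing the expected log-likelihood E_P[ln q_θ(X)] pick out the same θ, which is the true parameter when the model is well specified.

```python
import math

P = [0.3, 0.7]                      # true two-point distribution (hypothetical)

def q(theta):
    return [theta, 1 - theta]       # model family Q_theta

def kl(theta):
    # D_KL(P || Q_theta) = sum p * ln(p / q)
    return sum(p * math.log(p / qi) for p, qi in zip(P, q(theta)))

def expected_ll(theta):
    # expected log-likelihood under the true distribution P
    return sum(p * math.log(qi) for p, qi in zip(P, q(theta)))

grid = [i / 1000 for i in range(1, 1000)]
theta_kl = min(grid, key=kl)        # KL minimizer
theta_ml = max(grid, key=expected_ll)  # expected log-likelihood maximizer
# both equal 0.3, the true parameter
```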
== Examples ==
=== Discrete uniform distribution ===
Consider a case where n tickets numbered from 1 to n are placed in a box and one is selected at random (see uniform distribution); thus, the sample size is 1. If n is unknown, then the maximum likelihood estimator
{\displaystyle {\widehat {n}}}
of n is the number m on the drawn ticket. (The likelihood is 0 for n < m, 1⁄n for n ≥ m, and this is greatest when n = m. Note that the maximum likelihood estimate of n occurs at the lower extreme of possible values {m, m + 1, ...}, rather than somewhere in the "middle" of the range of possible values, which would result in less bias.) The expected value of the number m on the drawn ticket, and therefore the expected value of
{\displaystyle {\widehat {n}}}
, is (n + 1)/2. As a result, with a sample size of 1, the maximum likelihood estimator for n will systematically underestimate n by (n − 1)/2.
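The bias can be verified by direct enumeration. For a hypothetical box with n = 5 tickets, the MLE n̂ = m averages to (n + 1)/2 = 3 over the equally likely draws, underestimating n by (n − 1)/2 = 2:

```python
n = 5
draws = range(1, n + 1)           # each ticket is equally likely to be drawn
expected_mle = sum(draws) / n     # E[n_hat] = E[m] = (n + 1) / 2
bias = expected_mle - n           # = -(n - 1) / 2
```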
=== Discrete distribution, finite parameter space ===
Suppose one wishes to determine just how biased an unfair coin is. Call the probability of tossing a 'head' p. The goal then becomes to determine p.
Suppose the coin is tossed 80 times: i.e. the sample might be something like x1 = H, x2 = T, ..., x80 = T, and the count of the number of heads "H" is observed.
The probability of tossing tails is 1 − p (so here p is θ above). Suppose the outcome is 49 heads and 31 tails, and suppose the coin was taken from a box containing three coins: one which gives heads with probability p = 1⁄3, one which gives heads with probability p = 1⁄2 and another which gives heads with probability p = 2⁄3. The coins have lost their labels, so which one it was is unknown. Using maximum likelihood estimation, the coin that has the largest likelihood can be found, given the data that were observed. By using the probability mass function of the binomial distribution with sample size equal to 80, number successes equal to 49 but for different values of p (the "probability of success"), the likelihood function (defined below) takes one of three values:
{\displaystyle {\begin{aligned}\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {1}{3}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {1}{3}})^{49}(1-{\tfrac {1}{3}})^{31}\approx 0.000,\\[6pt]\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {1}{2}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {1}{2}})^{49}(1-{\tfrac {1}{2}})^{31}\approx 0.012,\\[6pt]\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {2}{3}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {2}{3}})^{49}(1-{\tfrac {2}{3}})^{31}\approx 0.054~.\end{aligned}}}
The likelihood is maximized when p = 2⁄3, and so this is the maximum likelihood estimate for p.
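The three likelihood values can be reproduced directly from the binomial probability mass function:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# likelihood of 49 heads in 80 tosses for each of the three candidate coins
likelihoods = {p: binom_pmf(49, 80, p) for p in (1 / 3, 1 / 2, 2 / 3)}
p_hat = max(likelihoods, key=likelihoods.get)   # maximum likelihood estimate: 2/3
```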
=== Discrete distribution, continuous parameter space ===
Now suppose that there was only one coin but its p could have been any value 0 ≤ p ≤ 1 . The likelihood function to be maximised is
{\displaystyle L(p)=f_{D}(\mathrm {H} =49\mid p)={\binom {80}{49}}p^{49}(1-p)^{31}~,}
and the maximisation is over all possible values 0 ≤ p ≤ 1 .
One way to maximize this function is by differentiating with respect to p and setting to zero:
{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial p}}\left({\binom {80}{49}}p^{49}(1-p)^{31}\right)~,\\[8pt]0&=49p^{48}(1-p)^{31}-31p^{49}(1-p)^{30}\\[8pt]&=p^{48}(1-p)^{30}\left[49(1-p)-31p\right]\\[8pt]&=p^{48}(1-p)^{30}\left[49-80p\right]~.\end{aligned}}}
This is a product of three terms. The first term is 0 when p = 0. The second is 0 when p = 1. The third is zero when p = 49⁄80. The solution that maximizes the likelihood is clearly p = 49⁄80 (since p = 0 and p = 1 result in a likelihood of 0). Thus the maximum likelihood estimator for p is 49⁄80.
This result is easily generalized by substituting a letter such as s in the place of 49 to represent the observed number of 'successes' of our Bernoulli trials, and a letter such as n in the place of 80 to represent the number of Bernoulli trials. Exactly the same calculation yields s⁄n which is the maximum likelihood estimator for any sequence of n Bernoulli trials resulting in s 'successes'.
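A quick numerical check of this result, using the article's numbers (s = 49 successes in n = 80 trials): a grid search over p recovers the stationary point 49⁄80 found by differentiation.

```python
from math import comb

s, n = 49, 80

def likelihood(p):
    # binomial likelihood L(p) = C(n, s) * p^s * (1 - p)^(n - s)
    return comb(n, s) * p ** s * (1 - p) ** (n - s)

grid = [i / 10000 for i in range(1, 10000)]
p_hat = max(grid, key=likelihood)
# p_hat == 0.6125 == s / n
```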
=== Continuous distribution, continuous parameter space ===
For the normal distribution
{\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})}
which has probability density function
{\displaystyle f(x\mid \mu ,\sigma ^{2})={\frac {1}{{\sqrt {2\pi \sigma ^{2}}}\ }}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right),}
the corresponding probability density function for a sample of n independent identically distributed normal random variables (the likelihood) is
{\displaystyle f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})=\prod _{i=1}^{n}f(x_{i}\mid \mu ,\sigma ^{2})=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left(-{\frac {\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{2\sigma ^{2}}}\right).}
This family of distributions has two parameters: θ = (μ, σ); so we maximize the likelihood,
{\displaystyle {\mathcal {L}}(\mu ,\sigma ^{2})=f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})}
, over both parameters simultaneously, or if possible, individually.
Since the logarithm function itself is a continuous strictly increasing function over the range of the likelihood, the values which maximize the likelihood will also maximize its logarithm (the log-likelihood itself is not necessarily strictly increasing). The log-likelihood can be written as follows:
{\displaystyle \log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=-{\frac {\,n\,}{2}}\log(2\pi \sigma ^{2})-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}(\,x_{i}-\mu \,)^{2}}
(Note: the log-likelihood is closely related to information entropy and Fisher information.)
We now compute the derivatives of this log-likelihood as follows.
{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial \mu }}\log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=0-{\frac {\;-2n({\bar {x}}-\mu )\;}{2\sigma ^{2}}}.\end{aligned}}}
where
{\displaystyle {\bar {x}}}
is the sample mean. This is solved by
{\displaystyle {\widehat {\mu }}={\bar {x}}=\sum _{i=1}^{n}{\frac {\,x_{i}\,}{n}}.}
This is indeed the maximum of the function, since it is the only turning point in μ and the second derivative is strictly less than zero. Its expected value is equal to the parameter μ of the given distribution,
{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;{\widehat {\mu }}\;{\bigr ]}=\mu ,\,}
which means that the maximum likelihood estimator
{\displaystyle {\widehat {\mu }}}
is unbiased.
Similarly we differentiate the log-likelihood with respect to σ and equate to zero:
{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial \sigma }}\log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=-{\frac {\,n\,}{\sigma }}+{\frac {1}{\sigma ^{3}}}\sum _{i=1}^{n}(\,x_{i}-\mu \,)^{2}.\end{aligned}}}
which is solved by
{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}.}
Inserting the estimate
{\displaystyle \mu ={\widehat {\mu }}}
we obtain
{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}^{2}-{\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}x_{i}x_{j}.}
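The centered and expanded forms of σ̂² agree, as a quick check with hypothetical data confirms; with the eight values below the sample mean is 5 and σ̂² = 4:

```python
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # made-up sample
n = len(data)

mu_hat = sum(data) / n
s2_centered = sum((x - mu_hat) ** 2 for x in data) / n
s2_expanded = (sum(x * x for x in data) / n
               - sum(xi * xj for xi in data for xj in data) / n ** 2)
# the two expressions give the same value of sigma_hat^2
```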
To calculate its expected value, it is convenient to rewrite the expression in terms of zero-mean random variables (statistical error)
{\displaystyle \delta _{i}\equiv \mu -x_{i}}
. Expressing the estimate in these variables yields
{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(\mu -\delta _{i})^{2}-{\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}(\mu -\delta _{i})(\mu -\delta _{j}).}
Simplifying the expression above, utilizing the facts that
{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;\delta _{i}\;{\bigr ]}=0}
and
{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;\delta _{i}^{2}\;{\bigr ]}=\sigma ^{2}}
, allows us to obtain
{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;{\widehat {\sigma }}^{2}\;{\bigr ]}={\frac {\,n-1\,}{n}}\sigma ^{2}.}
This means that the estimator
{\displaystyle {\widehat {\sigma }}^{2}}
is biased for
{\displaystyle \sigma ^{2}}
. It can also be shown that
{\displaystyle {\widehat {\sigma }}}
is biased for
{\displaystyle \sigma }
, but that both
{\displaystyle {\widehat {\sigma }}^{2}}
and
{\displaystyle {\widehat {\sigma }}}
are consistent.
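The factor (n − 1)/n can be verified exactly by enumerating every sample from a small discrete distribution (a sketch with X uniform on {0, 1}, so σ² = 1⁄4, and n = 2):

```python
from itertools import product

values = [0, 1]      # X uniform on {0, 1}: true variance sigma^2 = 0.25
n = 2

def sigma2_hat(sample):
    m = sum(sample) / len(sample)
    return sum((x - m) ** 2 for x in sample) / len(sample)

samples = list(product(values, repeat=n))        # all equally likely samples
avg = sum(sigma2_hat(s) for s in samples) / len(samples)
# avg == (n - 1) / n * 0.25 == 0.125, matching the bias formula exactly
```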
Formally we say that the maximum likelihood estimator for
{\displaystyle \theta =(\mu ,\sigma ^{2})}
is
{\displaystyle {\widehat {\theta \,}}=\left({\widehat {\mu }},{\widehat {\sigma }}^{2}\right).}
In this case the MLEs could be obtained individually. In general this may not be the case, and the MLEs would have to be obtained simultaneously.
The normal log-likelihood at its maximum takes a particularly simple form:
{\displaystyle \log {\Bigl (}{\mathcal {L}}({\widehat {\mu }},{\widehat {\sigma }}){\Bigr )}={\frac {\,-n\;\;}{2}}{\bigl (}\,\log(2\pi {\widehat {\sigma }}^{2})+1\,{\bigr )}}
This maximum log-likelihood can be shown to be the same for more general least squares, even for non-linear least squares. This is often used in determining likelihood-based approximate confidence intervals and confidence regions, which are generally more accurate than those using the asymptotic normality discussed above.
== Non-independent variables ==
It may be the case that variables are correlated, or more generally, not independent. Two random variables
{\displaystyle y_{1}}
and
{\displaystyle y_{2}}
are independent only if their joint probability density function is the product of the individual probability density functions, i.e.
{\displaystyle f(y_{1},y_{2})=f(y_{1})f(y_{2})\,}
Suppose one constructs an order-n Gaussian vector out of random variables
{\displaystyle (y_{1},\ldots ,y_{n})}
, where each variable has means given by
{\displaystyle (\mu _{1},\ldots ,\mu _{n})}
. Furthermore, let the covariance matrix be denoted by
{\displaystyle {\mathit {\Sigma }}}
. The joint probability density function of these n random variables then follows a multivariate normal distribution given by:
{\displaystyle f(y_{1},\ldots ,y_{n})={\frac {1}{(2\pi )^{n/2}{\sqrt {\det({\mathit {\Sigma }})}}}}\exp \left(-{\frac {1}{2}}\left[y_{1}-\mu _{1},\ldots ,y_{n}-\mu _{n}\right]{\mathit {\Sigma }}^{-1}\left[y_{1}-\mu _{1},\ldots ,y_{n}-\mu _{n}\right]^{\mathrm {T} }\right)}
In the bivariate case, the joint probability density function is given by:
{\displaystyle f(y_{1},y_{2})={\frac {1}{2\pi \sigma _{1}\sigma _{2}{\sqrt {1-\rho ^{2}}}}}\exp \left[-{\frac {1}{2(1-\rho ^{2})}}\left({\frac {(y_{1}-\mu _{1})^{2}}{\sigma _{1}^{2}}}-{\frac {2\rho (y_{1}-\mu _{1})(y_{2}-\mu _{2})}{\sigma _{1}\sigma _{2}}}+{\frac {(y_{2}-\mu _{2})^{2}}{\sigma _{2}^{2}}}\right)\right]}
In this and other cases where a joint density function exists, the likelihood function is defined as above, in the section "principles," using this density.
=== Example ===
{\displaystyle X_{1},\ X_{2},\ldots ,\ X_{m}} are counts in cells / boxes 1 up to m; each box has a different probability (think of the boxes being bigger or smaller) and we fix the number of balls that fall to be {\displaystyle n}: {\displaystyle x_{1}+x_{2}+\cdots +x_{m}=n}. The probability of each box is {\displaystyle p_{i}}, with a constraint: {\displaystyle p_{1}+p_{2}+\cdots +p_{m}=1}. This is a case in which the {\displaystyle X_{i}} are not independent; the joint probability of a vector {\displaystyle x_{1},\ x_{2},\ldots ,x_{m}} is called the multinomial and has the form:
{\displaystyle f(x_{1},x_{2},\ldots ,x_{m}\mid p_{1},p_{2},\ldots ,p_{m})={\frac {n!}{\prod x_{i}!}}\prod p_{i}^{x_{i}}={\binom {n}{x_{1},x_{2},\ldots ,x_{m}}}p_{1}^{x_{1}}p_{2}^{x_{2}}\cdots p_{m}^{x_{m}}}
Each box taken separately against all the other boxes is a binomial and this is an extension thereof.
The log-likelihood of this is:
{\displaystyle \ell (p_{1},p_{2},\ldots ,p_{m})=\log n!-\sum _{i=1}^{m}\log x_{i}!+\sum _{i=1}^{m}x_{i}\log p_{i}}
The constraint has to be taken into account, using Lagrange multipliers:
{\displaystyle L(p_{1},p_{2},\ldots ,p_{m},\lambda )=\ell (p_{1},p_{2},\ldots ,p_{m})+\lambda \left(1-\sum _{i=1}^{m}p_{i}\right)}
Setting all the derivatives to 0, the most natural estimate is derived:
{\displaystyle {\hat {p}}_{i}={\frac {x_{i}}{n}}}
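As a numerical sanity check, this closed-form estimate can be compared against the log-likelihood at other points of the probability simplex. The sketch below uses made-up counts and a random search over the simplex; it is purely illustrative, not part of the derivation:

```python
import numpy as np
from scipy.stats import multinomial

# Numerical check that p_i = x_i / n maximizes the multinomial likelihood.
x = np.array([12, 30, 8, 50])   # hypothetical box counts
n = x.sum()
p_hat = x / n                   # the closed-form MLE derived above

rng = np.random.default_rng(3)
best = multinomial.logpmf(x, n, p_hat)
for _ in range(100):
    p = rng.dirichlet(np.ones(len(x)))   # random point on the simplex
    assert multinomial.logpmf(x, n, p) <= best
```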
Maximizing the log-likelihood, with or without constraints, may have no closed-form solution, in which case iterative procedures must be used.
== Iterative procedures ==
Except for special cases, the likelihood equations
{\displaystyle {\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}=0}
cannot be solved explicitly for an estimator {\displaystyle {\widehat {\theta }}={\widehat {\theta }}(\mathbf {y} )}. Instead, they need to be solved iteratively: starting from an initial guess of {\displaystyle \theta } (say {\displaystyle {\widehat {\theta }}_{1}}), one seeks to obtain a convergent sequence {\displaystyle \left\{{\widehat {\theta }}_{r}\right\}}. Many methods for this kind of optimization problem are available, but the most commonly used ones are algorithms based on an updating formula of the form
{\displaystyle {\widehat {\theta }}_{r+1}={\widehat {\theta }}_{r}+\eta _{r}\mathbf {d} _{r}\left({\widehat {\theta }}\right)}
where the vector {\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)} indicates the descent direction of the rth "step," and the scalar {\displaystyle \eta _{r}} captures the "step length," also known as the learning rate.
=== Gradient descent method ===
(Note: here it is a maximization problem, so the sign before gradient is flipped)
{\displaystyle \eta _{r}\in \mathbb {R} ^{+}} is a step size small enough for convergence, and {\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=\nabla \ell \left({\widehat {\theta }}_{r};\mathbf {y} \right)}
The gradient descent method requires calculating the gradient at the rth iteration, but does not require calculating the inverse of the second-order derivative, i.e., the Hessian matrix. It is therefore computationally cheaper per iteration than the Newton–Raphson method.
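As a minimal illustration of this scheme, consider gradient ascent (the sign flipped, as noted above) for the mean of a normal distribution with known variance; the data, step size and iteration count below are arbitrary choices for the sketch:

```python
import numpy as np

# Gradient ascent for the MLE of a normal mean with sigma^2 = 1 (known):
# theta_{r+1} = theta_r + eta_r * d_r, where d_r is the gradient of the
# log-likelihood, d l / d mu = sum(y - mu).
rng = np.random.default_rng(0)
y = rng.normal(loc=3.0, scale=1.0, size=1000)

mu_hat = 0.0              # initial guess theta_1
eta = 0.5 / len(y)        # fixed step size, small enough for convergence
for _ in range(200):
    mu_hat += eta * np.sum(y - mu_hat)
```

With this step size each iteration halves the distance to the sample mean, which is the closed-form MLE the iterates converge to.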
=== Newton–Raphson method ===
{\displaystyle \eta _{r}=1} and {\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=-\mathbf {H} _{r}^{-1}\left({\widehat {\theta }}\right)\mathbf {s} _{r}\left({\widehat {\theta }}\right)}
where {\displaystyle \mathbf {s} _{r}({\widehat {\theta }})} is the score and {\displaystyle \mathbf {H} _{r}^{-1}\left({\widehat {\theta }}\right)} is the inverse of the Hessian matrix of the log-likelihood function, both evaluated at the rth iteration. But because the calculation of the Hessian matrix is computationally costly, numerous alternatives have been proposed. The popular Berndt–Hall–Hall–Hausman algorithm approximates the Hessian with the outer product of the expected gradient, such that
{\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=-\left[{\frac {1}{n}}\sum _{t=1}^{n}{\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}\left({\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}\right)^{\mathsf {T}}\right]^{-1}\mathbf {s} _{r}\left({\widehat {\theta }}\right)}
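For a one-parameter problem the Newton–Raphson update can be written out explicitly. The sketch below estimates the rate of an exponential distribution; the data, starting value and iteration count are arbitrary illustrative choices:

```python
import numpy as np

# Newton-Raphson for the MLE of an exponential rate lambda:
#   log-likelihood: l(lam) = n*log(lam) - lam*sum(y)
#   score:          s(lam) = n/lam - sum(y)
#   Hessian:        H(lam) = -n/lam**2
#   update (eta_r = 1): lam <- lam - s(lam)/H(lam)
rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=5000)   # true rate = 0.5
n, s_y = len(y), y.sum()

lam = 0.2   # initial guess; must be below 2/mean(y) for this iteration to converge
for _ in range(50):
    score = n / lam - s_y
    hessian = -n / lam**2
    lam = lam - score / hessian
```

The iterates converge to the closed-form MLE, the reciprocal of the sample mean.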
=== Quasi-Newton methods ===
Other quasi-Newton methods use more elaborate secant updates to give an approximation of the Hessian matrix.
==== Davidon–Fletcher–Powell formula ====
The DFP formula finds a solution that is symmetric, positive-definite and closest to the current approximation of the second-order derivative:
{\displaystyle \mathbf {H} _{k+1}=\left(I-\gamma _{k}y_{k}s_{k}^{\mathsf {T}}\right)\mathbf {H} _{k}\left(I-\gamma _{k}s_{k}y_{k}^{\mathsf {T}}\right)+\gamma _{k}y_{k}y_{k}^{\mathsf {T}},}
where
{\displaystyle y_{k}=\nabla \ell (x_{k}+s_{k})-\nabla \ell (x_{k}),}
{\displaystyle \gamma _{k}={\frac {1}{y_{k}^{T}s_{k}}},}
{\displaystyle s_{k}=x_{k+1}-x_{k}.}
==== Broyden–Fletcher–Goldfarb–Shanno algorithm ====
BFGS also gives a solution that is symmetric and positive-definite:
{\displaystyle B_{k+1}=B_{k}+{\frac {y_{k}y_{k}^{\mathsf {T}}}{y_{k}^{\mathsf {T}}s_{k}}}-{\frac {B_{k}s_{k}s_{k}^{\mathsf {T}}B_{k}^{\mathsf {T}}}{s_{k}^{\mathsf {T}}B_{k}s_{k}}}\ ,}
where
{\displaystyle y_{k}=\nabla \ell (x_{k}+s_{k})-\nabla \ell (x_{k}),}
{\displaystyle s_{k}=x_{k+1}-x_{k}.}
The BFGS method is not guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum. However, BFGS can have acceptable performance even for non-smooth optimization instances.
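In practice, BFGS is rarely implemented by hand; optimization libraries provide it directly. The sketch below maximizes a normal log-likelihood by minimizing its negative with SciPy's BFGS implementation; the data and parameterization (optimizing log σ to keep σ positive) are illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

# Fit a normal distribution by maximum likelihood using BFGS
# (minimizing the negative log-likelihood).
rng = np.random.default_rng(2)
y = rng.normal(loc=5.0, scale=2.0, size=2000)

def neg_log_likelihood(params, y):
    mu, log_sigma = params          # optimize log(sigma) so sigma stays positive
    sigma = np.exp(log_sigma)
    n = len(y)
    return 0.5 * n * np.log(2 * np.pi * sigma**2) + np.sum((y - mu)**2) / (2 * sigma**2)

res = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(y,), method="BFGS")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

The result agrees with the closed-form MLEs: the sample mean and the (biased) sample standard deviation.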
==== Fisher's scoring ====
Another popular method is to replace the Hessian with the Fisher information matrix, {\displaystyle {\mathcal {I}}(\theta )=\operatorname {\mathbb {E} } \left[\mathbf {H} _{r}\left({\widehat {\theta }}\right)\right]}, giving us the Fisher scoring algorithm. This procedure is standard in the estimation of many methods, such as generalized linear models.
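A classic instance is logistic regression, where Fisher scoring takes the form of iteratively reweighted least squares. The sketch below uses simulated data with arbitrary true coefficients:

```python
import numpy as np

# Fisher scoring (IRLS) for logistic regression:
#   beta <- beta + I(beta)^{-1} s(beta)
# with score s = X^T (y - p) and Fisher information I = X^T W X,
# W = diag(p_i * (1 - p_i)).
rng = np.random.default_rng(6)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])       # hypothetical true coefficients
p = 1 / (1 + np.exp(-X @ beta_true))
y = rng.binomial(1, p)

beta = np.zeros(2)                       # initial guess
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    score = X.T @ (y - mu)
    info = X.T @ (X * (mu * (1 - mu))[:, None])
    beta = beta + np.linalg.solve(info, score)
```

At convergence the score vector is (numerically) zero, i.e. the likelihood equations are satisfied.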
Although popular, quasi-Newton methods may converge to a stationary point that is not necessarily a local or global maximum, but rather a local minimum or a saddle point. Therefore, it is important to assess the validity of the obtained solution to the likelihood equations, by verifying that the Hessian, evaluated at the solution, is both negative definite and well-conditioned.
== History ==
Early users of maximum likelihood include Carl Friedrich Gauss, Pierre-Simon Laplace, Thorvald N. Thiele, and Francis Ysidro Edgeworth. It was Ronald Fisher, however, between 1912 and 1922, who singlehandedly created the modern version of the method.
Maximum-likelihood estimation finally transcended heuristic justification in a proof published by Samuel S. Wilks in 1938, now called Wilks' theorem. The theorem shows that the error in the logarithm of likelihood values for estimates from multiple independent observations is asymptotically χ 2-distributed, which enables convenient determination of a confidence region around any estimate of the parameters. The only difficult part of Wilks' proof depends on the expected value of the Fisher information matrix, which is provided by a theorem proven by Fisher. Wilks continued to improve on the generality of the theorem throughout his life, with his most general proof published in 1962.
Reviews of the development of maximum likelihood estimation have been provided by a number of authors.
== See also ==
=== Related concepts ===
Akaike information criterion: a criterion to compare statistical models, based on MLE
Extremum estimator: a more general class of estimators to which MLE belongs
Fisher information: information matrix, its relationship to covariance matrix of ML estimates
Mean squared error: a measure of how 'good' an estimator of a distributional parameter is (be it the maximum likelihood estimator or some other estimator)
RANSAC: a method to estimate parameters of a mathematical model given data that contains outliers
Rao–Blackwell theorem: yields a process for finding the best possible unbiased estimator (in the sense of having minimal mean squared error); the MLE is often a good starting place for the process
Wilks' theorem: provides a means of estimating the size and shape of the region of roughly equally-probable estimates for the population's parameter values, using the information from a single sample, using a chi-squared distribution
=== Other estimation methods ===
Generalized method of moments: methods related to the likelihood equation in maximum likelihood estimation
M-estimator: an approach used in robust statistics
Maximum a posteriori (MAP) estimator: for a contrast in the way to calculate estimators when prior knowledge is postulated
Maximum spacing estimation: a related method that is more robust in many situations
Maximum entropy estimation
Method of moments (statistics): another popular method for finding parameters of distributions
Method of support, a variation of the maximum likelihood technique
Minimum-distance estimation
Partial likelihood methods for panel data
Quasi-maximum likelihood estimator: an MLE estimator that is misspecified, but still consistent
Restricted maximum likelihood: a variation using a likelihood function calculated from a transformed set of data
== References ==
== Further reading ==
Cramer, J.S. (1986). Econometric Applications of Maximum Likelihood Methods. New York, NY: Cambridge University Press. ISBN 0-521-25317-9.
Eliason, Scott R. (1993). Maximum Likelihood Estimation: Logic and Practice. Newbury Park: Sage. ISBN 0-8039-4107-2.
King, Gary (1989). Unifying Political Methodology: The Likelihood Theory of Statistical Inference. Cambridge University Press. ISBN 0-521-36697-6.
Le Cam, Lucien (1990). "Maximum likelihood: An Introduction". ISI Review. 58 (2): 153–171. doi:10.2307/1403464. JSTOR 1403464.
Magnus, Jan R. (2017). "Maximum Likelihood". Introduction to the Theory of Econometrics. Amsterdam, NL: VU University Press. pp. 53–68. ISBN 978-90-8659-766-6.
Millar, Russell B. (2011). Maximum Likelihood Estimation and Inference. Hoboken, NJ: Wiley. ISBN 978-0-470-09482-2.
Pickles, Andrew (1986). An Introduction to Likelihood Analysis. Norwich: W. H. Hutchins & Sons. ISBN 0-86094-190-6.
Severini, Thomas A. (2000). Likelihood Methods in Statistics. New York, NY: Oxford University Press. ISBN 0-19-850650-3.
Ward, Michael D.; Ahlquist, John S. (2018). Maximum Likelihood for Social Science: Strategies for Analysis. Cambridge University Press. ISBN 978-1-316-63682-4.
== External links ==
Tilevik, Andreas (2022). Maximum likelihood vs least squares in linear regression (video)
"Maximum-likelihood method", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Purcell, S. "Maximum Likelihood Estimation".
Sargent, Thomas; Stachurski, John. "Maximum Likelihood Estimation". Quantitative Economics with Python.
Toomet, Ott; Henningsen, Arne (2019-05-19). "maxLik: A package for maximum likelihood estimation in R".
Lesser, Lawrence M. (2007). "'MLE' song lyrics". Mathematical Sciences / College of Science. University of Texas. El Paso, TX. Retrieved 2021-03-06.
In the statistical analysis of time series, autoregressive–moving-average (ARMA) models are a way to describe a (weakly) stationary stochastic process using autoregression (AR) and a moving average (MA), each with a polynomial. They are a tool for understanding a series and predicting future values. AR involves regressing the variable on its own lagged (i.e., past) values. MA involves modeling the error as a linear combination of error terms occurring contemporaneously and at various times in the past. The model is usually denoted ARMA(p, q), where p is the order of AR and q is the order of MA.
The general ARMA model was described in the 1951 thesis of Peter Whittle, Hypothesis testing in time series analysis, and it was popularized in the 1970 book by George E. P. Box and Gwilym Jenkins.
ARMA models can be estimated by using the Box–Jenkins method.
== Mathematical formulation ==
=== Autoregressive model ===
The notation AR(p) refers to the autoregressive model of order p. The AR(p) model is written as
{\displaystyle X_{t}=\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\varepsilon _{t}}
where {\displaystyle \varphi _{1},\ldots ,\varphi _{p}} are parameters and the random variable {\displaystyle \varepsilon _{t}} is white noise, usually independent and identically distributed (i.i.d.) normal random variables.
In order for the model to remain stationary, the roots of its characteristic polynomial must lie outside the unit circle. For example, processes in the AR(1) model with {\displaystyle |\varphi _{1}|\geq 1} are not stationary because the root of {\displaystyle 1-\varphi _{1}B=0} lies on or within the unit circle.
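This stationarity condition can be checked numerically by finding the roots of the characteristic polynomial; the coefficient values below are arbitrary examples:

```python
import numpy as np

# Check AR(p) stationarity: all roots of the characteristic polynomial
# 1 - phi_1 z - ... - phi_p z^p must lie outside the unit circle.
def is_stationary(phi):
    """phi = [phi_1, ..., phi_p]; True if the AR(p) process is stationary."""
    # np.roots expects coefficients from highest degree to lowest:
    # -phi_p z^p - ... - phi_1 z + 1
    coeffs = np.r_[-np.asarray(phi, dtype=float)[::-1], 1.0]
    roots = np.roots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0))

assert is_stationary([0.5])           # AR(1) with |phi_1| < 1
assert not is_stationary([1.2])       # explosive AR(1): root 1/1.2 inside circle
assert is_stationary([0.5, -0.3])     # a stationary AR(2)
```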
The augmented Dickey–Fuller test can assess the stability of an intrinsic mode function and trend components. For stationary time series, ARMA models can be used, while for non-stationary series, long short-term memory (LSTM) models can be used to derive abstract features. The final value is obtained by reconstructing the predicted outcomes of each time series.
=== Moving average model ===
The notation MA(q) refers to the moving average model of order q:
{\displaystyle X_{t}=\mu +\varepsilon _{t}+\sum _{i=1}^{q}\theta _{i}\varepsilon _{t-i}\,}
where the {\displaystyle \theta _{1},...,\theta _{q}} are the parameters of the model, {\displaystyle \mu } is the expectation of {\displaystyle X_{t}} (often assumed to equal 0), and {\displaystyle \varepsilon _{1}}, ..., {\displaystyle \varepsilon _{t}} are i.i.d. white noise error terms that are commonly normal random variables.
=== ARMA model ===
The notation ARMA(p, q) refers to the model with p autoregressive terms and q moving-average terms. This model contains the AR(p) and MA(q) models,
{\displaystyle X_{t}=\varepsilon _{t}+\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\sum _{i=1}^{q}\theta _{i}\varepsilon _{t-i}.\,}
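The recursion above can be simulated directly; the ARMA(1,1) coefficients below are arbitrary stationary and invertible values chosen for illustration:

```python
import numpy as np

# Simulate an ARMA(1,1): X_t = phi_1 X_{t-1} + eps_t + theta_1 eps_{t-1}
rng = np.random.default_rng(4)
phi1, theta1 = 0.6, 0.4
T = 10_000
eps = rng.normal(size=T)

x = np.zeros(T)
for t in range(1, T):
    x[t] = phi1 * x[t - 1] + eps[t] + theta1 * eps[t - 1]

# Known lag-1 autocorrelation of an ARMA(1,1) process:
rho1_theory = (1 + phi1 * theta1) * (phi1 + theta1) / (1 + theta1**2 + 2 * phi1 * theta1)
rho1_sample = np.corrcoef(x[:-1], x[1:])[0, 1]
```

The sample lag-1 autocorrelation of the simulated series matches the theoretical value up to sampling error.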
=== In terms of lag operator ===
In some texts, the models are specified using the lag operator L. In these terms, the AR(p) model is given by
{\displaystyle \varepsilon _{t}=\left(1-\sum _{i=1}^{p}\varphi _{i}L^{i}\right)X_{t}=\varphi (L)X_{t}\,}
where {\displaystyle \varphi } represents the polynomial
{\displaystyle \varphi (L)=1-\sum _{i=1}^{p}\varphi _{i}L^{i}.\,}
The MA(q) model is given by
{\displaystyle X_{t}-\mu =\left(1+\sum _{i=1}^{q}\theta _{i}L^{i}\right)\varepsilon _{t}=\theta (L)\varepsilon _{t},\,}
where {\displaystyle \theta } represents the polynomial
{\displaystyle \theta (L)=1+\sum _{i=1}^{q}\theta _{i}L^{i}.\,}
Finally, the combined ARMA(p, q) model is given by
{\displaystyle \left(1-\sum _{i=1}^{p}\varphi _{i}L^{i}\right)X_{t}=\left(1+\sum _{i=1}^{q}\theta _{i}L^{i}\right)\varepsilon _{t}\,,}
or more concisely,
{\displaystyle \varphi (L)X_{t}=\theta (L)\varepsilon _{t}\,}
or
{\displaystyle {\frac {\varphi (L)}{\theta (L)}}X_{t}=\varepsilon _{t}\,.}
This is the form used in Box, Jenkins & Reinsel.
Moreover, starting summations from {\displaystyle i=0} and setting {\displaystyle \phi _{0}=-1} and {\displaystyle \theta _{0}=1}, we get an even more elegant formulation:
{\displaystyle -\sum _{i=0}^{p}\phi _{i}L^{i}\;X_{t}=\sum _{i=0}^{q}\theta _{i}L^{i}\;\varepsilon _{t}\,.}
== Spectrum ==
The spectral density of an ARMA process is
{\displaystyle S(f)={\frac {\sigma ^{2}}{2\pi }}\left\vert {\frac {\theta (e^{-if})}{\phi (e^{-if})}}\right\vert ^{2}}
where {\displaystyle \sigma ^{2}} is the variance of the white noise, {\displaystyle \theta } is the characteristic polynomial of the moving average part of the ARMA model, and {\displaystyle \phi } is the characteristic polynomial of the autoregressive part of the ARMA model.
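The density can be evaluated numerically on a frequency grid; the ARMA(1,1) coefficients below are arbitrary illustrative values. As a check, integrating S(f) over (−π, π) recovers the process variance, which for an ARMA(1,1) equals σ²(1 + θ² + 2φθ)/(1 − φ²):

```python
import numpy as np

# Spectral density of an ARMA(1,1):
#   S(f) = sigma^2/(2*pi) * |theta(e^{-if}) / phi(e^{-if})|^2
phi1, theta1, sigma2 = 0.6, 0.4, 1.0   # hypothetical coefficients

f = np.linspace(0.0, np.pi, 512)
z = np.exp(-1j * f)
S = (sigma2 / (2 * np.pi)) * np.abs((1 + theta1 * z) / (1 - phi1 * z)) ** 2

# Integrate over (-pi, pi): by symmetry, twice a Riemann sum over (0, pi).
variance = 2.0 * np.sum(S) * (f[1] - f[0])
gamma0 = sigma2 * (1 + theta1**2 + 2 * phi1 * theta1) / (1 - phi1**2)
```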
== Fitting models ==
=== Choosing p and q ===
An appropriate value of p in the ARMA(p, q) model can be found by plotting the partial autocorrelation functions. Similarly, q can be estimated by using the autocorrelation functions. Both p and q can be determined simultaneously using extended autocorrelation functions (EACF). Further information can be gleaned by considering the same functions for the residuals of a model fitted with an initial selection of p and q.
Brockwell & Davis recommend using Akaike information criterion (AIC) for finding p and q. Another option is the Bayesian information criterion (BIC).
=== Estimating coefficients ===
After choosing p and q, ARMA models can be fitted by least squares regression to find the values of the parameters which minimize the error term. It is good practice to find the smallest values of p and q which provide an acceptable fit to the data. For a pure AR model, the Yule-Walker equations may be used to provide a fit.
ARMA outputs are used primarily to forecast (predict), and not to infer causation as in other areas of econometrics and regression methods such as OLS and 2SLS.
=== Software implementations ===
In R, the standard package stats has the function arima, documented in ARIMA Modelling of Time Series. Package astsa has an improved script called sarima for fitting ARMA models (seasonal and nonseasonal) and sarima.sim to simulate data from these models. Extension packages contain related and extended functionality: package tseries includes the function arma(), documented in "Fit ARMA Models to Time Series"; package fracdiff contains fracdiff() for fractionally integrated ARMA processes; and package forecast includes auto.arima for selecting a parsimonious set of p, q. The CRAN task view on Time Series contains links to most of these.
Mathematica has a complete library of time series functions including ARMA.
MATLAB includes functions such as arma, ar and arx to estimate autoregressive, exogenous autoregressive and ARMAX models. See System Identification Toolbox and Econometrics Toolbox for details.
Julia has community-driven packages that implement fitting with an ARMA model such as arma.jl.
Python has the statsmodels package, which includes many models and functions for time series analysis, including ARMA. Formerly a SciPy toolkit (scikits.statsmodels), it is now stand-alone and integrates well with Pandas.
PyFlux has a Python-based implementation of ARIMAX models, including Bayesian ARIMAX models.
IMSL Numerical Libraries are libraries of numerical analysis functionality including ARMA and ARIMA procedures implemented in standard programming languages like C, Java, C# .NET, and Fortran.
gretl can estimate ARMA models.
GNU Octave extra package octave-forge supports AR models.
Stata includes the function arima for ARMA and ARIMA models.
SuanShu is a Java library of numerical methods that implements univariate/multivariate ARMA, ARIMA, ARMAX, etc models, documented in "SuanShu, a Java numerical and statistical library".
SAS has an econometric package, ETS, that estimates ARIMA models.
== History and interpretations ==
The general ARMA model was described in the 1951 thesis of Peter Whittle, who used mathematical analysis (Laurent series and Fourier analysis) and statistical inference. ARMA models were popularized by a 1970 book by George E. P. Box and Jenkins, who expounded an iterative (Box–Jenkins) method for choosing and estimating them. This method was useful for low-order polynomials (of degree three or less).
ARMA is essentially an infinite impulse response filter applied to white noise, with some additional interpretation placed on it.
In digital signal processing, ARMA is represented as a digital filter with white noise at the input and the ARMA process at the output.
== Applications ==
ARMA is appropriate when a system is a function of a series of unobserved shocks (the MA or moving average part) as well as its own behavior. For example, stock prices may be shocked by fundamental information as well as exhibiting technical trending and mean-reversion effects due to market participants.
== Generalizations ==
There are various generalizations of ARMA. Nonlinear AR (NAR), nonlinear MA (NMA) and nonlinear ARMA (NARMA) model nonlinear dependence on past values and error terms. Vector AR (VAR) and vector ARMA (VARMA) model multivariate time series. Autoregressive integrated moving average (ARIMA) models non-stationary time series (that is, series whose mean changes over time). Autoregressive conditional heteroskedasticity (ARCH) models time series where the variance changes. Seasonal ARIMA (SARIMA or periodic ARMA) models periodic variation. Autoregressive fractionally integrated moving average (ARFIMA, or fractional ARIMA, FARIMA) models time series that exhibit long memory. Multiscale AR (MAR) is indexed by the nodes of a tree instead of integers.
=== Autoregressive–moving-average model with exogenous inputs (ARMAX) ===
The notation ARMAX(p, q, b) refers to a model with p autoregressive terms, q moving average terms and b exogenous inputs terms. The last term is a linear combination of the last b terms of a known and external time series
{\displaystyle d_{t}}. It is given by:
{\displaystyle X_{t}=\varepsilon _{t}+\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\sum _{i=1}^{q}\theta _{i}\varepsilon _{t-i}+\sum _{i=1}^{b}\eta _{i}d_{t-i}.\,}
where {\displaystyle \eta _{1},\ldots ,\eta _{b}} are the parameters of the exogenous input {\displaystyle d_{t}}.
Some nonlinear variants of models with exogenous variables have been defined: see for example Nonlinear autoregressive exogenous model.
Statistical packages implement the ARMAX model through the use of "exogenous" (that is, independent) variables. Care must be taken when interpreting the output of those packages, because the estimated parameters usually (for example, in R and gretl) refer to the regression:
{\displaystyle X_{t}-m_{t}=\varepsilon _{t}+\sum _{i=1}^{p}\varphi _{i}(X_{t-i}-m_{t-i})+\sum _{i=1}^{q}\theta _{i}\varepsilon _{t-i}.\,}
where {\displaystyle m_{t}} incorporates all exogenous (or independent) variables:
{\displaystyle m_{t}=c+\sum _{i=0}^{b}\eta _{i}d_{t-i}.\,}
== See also ==
Autoregressive integrated moving average (ARIMA)
Exponential smoothing
Linear predictive coding
Predictive analytics
Infinite impulse response
Finite impulse response
== References ==
== Further reading ==
Mills, Terence C. (1990). Time Series Techniques for Economists. Cambridge University Press. ISBN 0521343399.
Percival, Donald B.; Walden, Andrew T. (1993). Spectral Analysis for Physical Applications. Cambridge University Press. ISBN 052135532X.
Francq, C.; Zakoïan, J.-M. (2005), "Recent results for linear time series models with non independent innovations", in Duchesne, P.; Remillard, B. (eds.), Statistical Modeling and Analysis for Complex Data Problems, Springer, pp. 241–265, CiteSeerX 10.1.1.721.1754.
Shumway, R.H. and Stoffer, D.S. (2017). Time Series Analysis and Its Applications with R Examples. Springer. DOI: 10.1007/978-3-319-52452-8
In an adaptive design of a clinical trial, the parameters and conduct of the trial for a candidate drug or vaccine may be changed based on an interim analysis. Adaptive design typically involves advanced statistics to interpret a clinical trial endpoint. This is in contrast to traditional single-arm (i.e. non-randomized) clinical trials or randomized clinical trials (RCTs) that are static in their protocol and do not modify any parameters until the trial is completed. The adaptation process takes place at certain points in the trial, prescribed in the trial protocol. Importantly, this trial protocol is set before the trial begins, with the adaptation schedule and processes specified. Adaptations may include modifications to: dosage, sample size, drug undergoing trial, patient selection criteria and/or "cocktail" mix. The PANDA (A Practical Adaptive & Novel Designs and Analysis toolkit) provides not only a summary of different adaptive designs, but also comprehensive information on adaptive design planning, conduct, analysis and reporting.
== Purpose ==
The aim of an adaptive trial is to more quickly identify drugs or devices that have a therapeutic effect, and to zero in on patient populations for whom the drug is appropriate. When conducted efficiently, adaptive trials have the potential to find new treatments while minimizing the number of patients exposed to the risks of clinical trials. Specifically, adaptive trials can efficiently discover new treatments by reducing the number of patients enrolled in treatment groups that show minimal efficacy or higher adverse-event rates. Adaptive trials can adjust almost any part of its design, based on pre-set rules and statistical design, such as sample size, adding new groups, dropping less effective groups and changing the probability of being randomized to a particular group, for example.
== History ==
In 2004, a Strategic Path Initiative was introduced by the United States Food and Drug Administration (FDA) to modify the way drugs travel from lab to market. This initiative aimed at dealing with the high attrition levels observed in the clinical phase. It also attempted to offer flexibility to investigators to find the optimal clinical benefit without affecting the study's validity. Adaptive clinical trials initially came under this regime.
The FDA issued draft guidance on adaptive trial design in 2010. In 2012, the President's Council of Advisors on Science and Technology (PCAST) recommended that the FDA "run pilot projects to explore adaptive approval mechanisms to generate evidence across the lifecycle of a drug from the pre-market through the post-market phase." While not specifically related to clinical trials, the council also recommended that they "make full use of accelerated approval for all drugs meeting the statutory standard of addressing an unmet need for a serious or life-threatening disease, and demonstrating an impact on a clinical endpoint other than survival or irreversible morbidity, or on a surrogate endpoint, likely to predict clinical benefit."
By 2019, the FDA updated their 2010 recommendations and issued "Adaptive Design Clinical Trials for Drugs and Biologics Guidance". In October of 2021, the FDA Center for Veterinary Medicine issued the Guidance Document "Adaptive and Other Innovative Designs for Effectiveness Studies of New Animal Drugs".
== Characteristics ==
Traditionally, clinical trials are conducted in three steps:
The trial is designed.
The trial is conducted as prescribed by the design.
Once the data are ready, they are analysed according to a pre-specified analysis plan.
== Types ==
=== Overview ===
Any trial design that can change its design during active enrollment could be considered an adaptive clinical trial. There are a number of different types, and real-life trials may combine elements from these different trial types. In some cases, trials have become an ongoing process that regularly adds and drops therapies and patient groups as more information is gained.
=== Dose finding design ===
Phase I of clinical research focuses on selecting a particular dose of a drug to carry forward into future trials. Historically, such trials have had a "rules-based" (or "algorithm-based") design, such as the 3+3 design. However, these "A+B" rules-based designs are not appropriate for phase I studies and are inferior to adaptive, model-based designs. An example of a superior design is the continual reassessment method (CRM).
=== Group sequential design ===
Group sequential design is the application of sequential analysis to clinical trials. At each interim analysis, investigators use the current data to decide whether the trial should stop or continue to recruit more participants. The trial might stop either because the evidence that the treatment is working is strong ("stopping for benefit") or weak ("stopping for futility"). Whether a trial may stop for futility only, benefit only, or either, is stated in advance. A design has "binding stopping rules" when the trial must stop when a particular threshold of (either strong or weak) evidence is crossed at a particular interim analysis. Otherwise it has "non-binding stopping rules", in which case other information can be taken into account, for example safety data. The number of interim analyses is specified in advance, and can be anything from a single interim analysis (a "two-stage" design) to an interim analysis after every participant ("continuous monitoring").
For trials with a binary (response/no response) outcome and a single treatment arm, a popular and simple group sequential design with two stages is the Simon design. In this design, there is a single interim analysis partway through the trial, at which point the trial either stops for futility or continues to the second stage. Mander and Thomson also proposed a design with a single interim analysis, at which point the trial could stop for either futility or benefit.
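The operating characteristics of a two-stage design of this kind can be computed directly from binomial probabilities. The sketch below uses hypothetical parameters (the r1, n1, r, n values and null response rate are made up for illustration, not taken from any published design):

```python
from scipy.stats import binom

# Simon-style two-stage design: enroll n1 patients in stage 1 and stop for
# futility if at most r1 respond; otherwise enroll up to n in total and
# declare the drug ineffective if at most r respond overall.
def simon_oc(p, r1, n1, r, n):
    """Probability of rejecting the drug, probability of early termination
    (PET), and expected sample size, at true response rate p."""
    pet = binom.cdf(r1, n1, p)  # probability of stopping after stage 1
    p_reject = pet + sum(
        binom.pmf(x1, n1, p) * binom.cdf(r - x1, n - n1, p)
        for x1 in range(r1 + 1, min(r, n1) + 1)
    )
    expected_n = n1 + (1 - pet) * (n - n1)
    return p_reject, pet, expected_n

# Hypothetical design r1=1, n1=10, r=5, n=29, under a null response rate of 10%:
p_reject, pet, en = simon_oc(0.10, r1=1, n1=10, r=5, n=29)
```

Early stopping for futility is what reduces the expected sample size below the maximum n under the null.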
For single-arm, single-stage binary outcome trials, a trial's success or failure is determined by the number of responses observed by the end of the trial. This means that it may be possible to know the conclusion of the trial (success or failure) with certainty before all the data are available. Planning to stop a trial once the conclusion is known with certainty is called non-stochastic curtailment. This reduces the sample size on average. Planning to stop a trial when the probability of success, based on the results so far, is either above or below a certain threshold is called stochastic curtailment. This reduces the average sample size even more than non-stochastic curtailment. Stochastic and non-stochastic curtailment can also be used in two-arm binary outcome trials, where a trial's success or failure is determined by the number of responses observed on each arm by the end of the trial.
== Usage ==
The adaptive design method developed mainly in the early 21st century. In November 2019, the US Food and Drug Administration provided guidelines for using adaptive designs in clinical trials.
=== In 2020 COVID-19 related trials ===
In April 2020, the World Health Organization published an "R&D Blueprint (for the) novel Coronavirus" (Blueprint). The Blueprint documented a "large, international, multi-site, individually randomized controlled clinical trial" to allow "the concurrent evaluation of the benefits and risks of each promising candidate vaccine within 3–6 months of it being made available for the trial." The Blueprint listed a Global Target Product Profile (TPP) for COVID‑19, identifying favorable attributes of safe and effective vaccines under two broad categories: "vaccines for the long-term protection of people at higher risk of COVID-19, such as healthcare workers", and other vaccines to provide rapid-response immunity for new outbreaks.
The international TPP team was formed to 1) assess the development of the most promising candidate vaccines; 2) map candidate vaccines and their clinical trials worldwide, publishing a frequently-updated "landscape" of vaccines in development; 3) rapidly evaluate and screen for the most promising candidate vaccines simultaneously before they are tested in humans; and 4) design and coordinate a multiple-site, international randomized controlled trial – the "Solidarity trial" for vaccines – to enable simultaneous evaluation of the benefits and risks of different vaccine candidates under clinical trials in countries where there are high rates of COVID‑19 disease, ensuring fast interpretation and sharing of results around the world. The WHO vaccine coalition prioritized which vaccines would go into Phase II and III clinical trials, and determined harmonized Phase III protocols for all vaccines achieving the pivotal trial stage.
The global "Solidarity" and European "Discovery" trials of hospitalized people with severe COVID‑19 infection applied adaptive design to rapidly alter trial parameters as results from the four experimental therapeutic strategies emerge. The US National Institute of Allergy and Infectious Diseases (NIAID) initiated an adaptive design, international Phase III trial (called "ACTT") to involve up to 800 hospitalized COVID‑19 people at 100 sites in multiple countries.
=== Breast cancer ===
An adaptive trial design enabled two experimental breast cancer drugs to deliver promising results after just six months of testing, far shorter than usual. Researchers assessed the results while the trial was in process and found that cancer had been eradicated in more than half of one group of patients. The trial, known as I-Spy 2, tested 12 experimental drugs.
==== I-SPY 1 ====
For its predecessor I-SPY 1, 10 cancer centers and the National Cancer Institute (NCI SPORE program and the NCI Cooperative groups) collaborated to identify response indicators that would best predict survival for women with high-risk breast cancer. During 2002–2006, the study monitored 237 patients undergoing neoadjuvant therapy before surgery. Iterative MRI and tissue samples monitored patients' biological response to chemotherapy given in a neoadjuvant, or presurgical, setting. Evaluating chemotherapy's direct impact on tumor tissue took much less time than monitoring outcomes in thousands of patients over long time periods. The approach helped to standardize the imaging and tumor sampling processes, and led to miniaturized assays. Key findings included that tumor response was a good predictor of patient survival, and that tumor shrinkage during treatment was a good predictor of long-term outcome. Importantly, the vast majority of tumors were identified as high risk by molecular signature. However, there was heterogeneity within this group of women, and measuring response within tumor subtypes was more informative than viewing the group as a whole. Within genetic signatures, level of response to treatment appears to be a reasonable predictor of outcome. Additionally, its shared database has furthered the understanding of drug response and generated new targets and agents for subsequent testing.
==== I-SPY 2 ====
I-SPY 2 is an adaptive clinical trial of multiple Phase 2 treatment regimens combined with standard chemotherapy. I-SPY 2 linked 19 academic cancer centers, two community centers, the FDA, the NCI, pharmaceutical and biotech companies, patient advocates and philanthropic partners. The trial is sponsored by the Biomarker Consortium of the Foundation for the NIH (FNIH), and is co-managed by the FNIH and QuantumLeap Healthcare Collaborative. I-SPY 2 was designed to explore the hypothesis that different combinations of cancer therapies have varying degrees of success for different patients. Conventional clinical trials that evaluate post-surgical tumor response require a separate trial with long intervals and large populations to test each combination. Instead, I-SPY 2 is organized as a continuous process. It efficiently evaluates multiple therapy regimes by relying on the predictors developed in I-SPY 1 that help quickly determine whether patients with a particular genetic signature will respond to a given treatment regime. The trial is adaptive in that the investigators learn as they go, and do not continue treatments that appear to be ineffective. All patients are categorized based on tissue and imaging markers collected early and iteratively (a patient's markers may change over time) throughout the trial, so that early insights can guide treatments for later patients. Treatments that show positive effects for a patient group can be ushered to confirmatory clinical trials, while those that do not can be rapidly sidelined. Importantly, confirmatory trials can serve as a pathway for FDA Accelerated Approval. I-SPY 2 can simultaneously evaluate candidates developed by multiple companies, escalating or eliminating drugs based on immediate results. Using a single standard arm for comparison for all candidates in the trial saves significant costs over individual Phase 3 trials. All data are shared across the industry. 
As of January 2016, I-SPY 2 was comparing 11 new treatments against 'standard therapy', and was estimated to complete in September 2017. By mid-2016 several treatments had been selected for later-stage trials.
=== Alzheimer's ===
Researchers under the EPAD project by the Innovative Medicines Initiative are utilizing an adaptive trial design to help speed development of Alzheimer's disease treatments, with a budget of 53 million euros. The first trial under the initiative was expected to begin in 2015 and to involve about a dozen companies. As of 2020, 2,000 people over the age of 50 have been recruited across Europe for a long-term study on the earliest stages of Alzheimer's. The EPAD project plans to use the results from this study and other data to inform 1,500-person adaptive clinical trials of drugs to prevent Alzheimer's.
== Bayesian designs ==
The adjustable nature of adaptive trials inherently suggests the use of Bayesian statistical analysis, which is built around updating beliefs as new information arrives, such as the interim results that accumulate during an adaptive trial. The problem of adaptive clinical trial design is more or less exactly the bandit problem as studied in the field of reinforcement learning.
According to FDA guidelines, an adaptive Bayesian clinical trial can involve:
Interim looks to stop or to adjust patient accrual
Interim looks to assess stopping the trial early either for success, futility or harm
Reversing the hypothesis of non-inferiority to superiority or vice versa
Dropping arms or doses or adjusting doses
Modification of the randomization rate to increase the probability that a patient is allocated to the most appropriate treatment (or arm in the multi-armed bandit model)
The Bayesian framework Continuous Individualized Risk Index, which is based on dynamic measurements from cancer patients, can be effectively used for adaptive trial designs. Platform trials rely heavily on Bayesian designs.
For regulatory submission of a Bayesian clinical trial design, there are two Bayesian decision rules that trial sponsors frequently use. First, the posterior probability approach is mainly used in decision-making to quantify the evidence addressing the question, "Do the current data provide convincing evidence in favor of the alternative hypothesis?" Its key quantity is the posterior probability that the alternative hypothesis is true, given the data observed up to the point of analysis. Second, the predictive probability approach is mainly used at an interim analysis to answer the question, "Is the trial likely to present compelling evidence in favor of the alternative hypothesis if we gather additional data, potentially up to the maximum sample size?" Its key quantity is the posterior predictive probability of trial success given the interim data.
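The posterior probability approach can be sketched for a binary endpoint under a conjugate Beta prior. This is a minimal illustration, not a regulatory analysis: the prior parameters, the threshold `p0`, and the interim counts are all hypothetical, and the probability is estimated here by Monte Carlo sampling from the posterior.

```python
import random

def posterior_prob_superior(successes, n, p0, a=1.0, b=1.0, draws=100_000, seed=0):
    """Estimate P(response rate > p0 | data) under a Beta(a, b) prior.

    With `successes` responses in n patients, the posterior is
    Beta(a + successes, b + n - successes); the tail probability beyond
    p0 is approximated by Monte Carlo draws from that posterior.
    """
    rng = random.Random(seed)
    post_a, post_b = a + successes, b + n - successes
    hits = sum(rng.betavariate(post_a, post_b) > p0 for _ in range(draws))
    return hits / draws
```

A trial might, for example, declare success at an interim look when this probability exceeds a prespecified bound such as 0.95.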
In most regulatory submissions, Bayesian trial designs are calibrated to possess good frequentist properties. In this spirit, and in adherence to regulatory practice, regulatory agencies typically recommend that sponsors provide the frequentist type I and II error rates for the sponsor's proposed Bayesian analysis plan. In other words, the Bayesian designs for the regulatory submission need to satisfy the type I and II error requirements, in the frequentist sense, in most cases. Some exceptions may arise in the context of external data borrowing, where the type I error rate requirement can be relaxed to some degree depending on the confidence in the historical information.
== Added complexity ==
The logistics of managing traditional, non-adaptive design clinical trials may be complex. In adaptive design clinical trials, adapting the design as results arrive adds to the complexity of design, monitoring, drug supply, data capture and randomization. Furthermore, it should be stated in the trial's protocol exactly what kind of adaptation will be permitted. Publishing the trial protocol in advance increases the validity of the final results, as it makes clear that any adaptation that took place during the trial was planned, rather than ad hoc. According to PCAST "One approach is to focus studies on specific subsets of patients most likely to benefit, identified based on validated biomarkers. In some cases, using appropriate biomarkers can make it possible to dramatically decrease the sample size required to achieve statistical significance—for example, from 1500 to 50 patients."
Adaptive designs have added statistical complexity compared to traditional clinical trial designs. For example, any multiple testing, either from looking at multiple treatment arms or from looking at a single treatment arm multiple times, must be accounted for. Another example is statistical bias, which can be more likely when using adaptive designs, and again must be accounted for.
While an adaptive design may be an improvement over a non-adaptive design in some respects (for example, expected sample size), it is not always the case that an adaptive design is a better choice overall: in some cases, the added complexity of the adaptive design may not justify its benefits. An example of this is when the trial is based on a measurement that takes a long time to observe, as this would mean having an interim analysis when many participants have started treatment but cannot yet contribute to the interim results.
== Risks ==
Shorter trials may not reveal longer term risks, such as a cancer's return.
== Resources (external links) ==
"What are adaptive clinical trials?" (video). youtube.com. Medical Research Council Biostatistics Unit. 17 November 2022.
Burnett, Thomas; Mozgunov, Pavel; Pallmann, Philip; Villar, Sofia S.; Wheeler, Graham M.; Jaki, Thomas (2020). "Adding flexibility to clinical trial designs: An example-based guide to the practical use of adaptive designs". BMC Medicine. 18 (1): 352. doi:10.1186/s12916-020-01808-2. PMC 7677786. PMID 33208155.
Jennison, Christopher; Turnbull, Bruce (1999). Group Sequential Methods with Applications to Clinical Trials. Taylor & Francis. ISBN 0849303168.
Wason, James M. S.; Brocklehurst, Peter; Yap, Christina (2019). "When to keep it simple – adaptive designs are not always useful". BMC Medicine. 17 (1): 152. doi:10.1186/s12916-019-1391-9. PMC 6676635. PMID 31370839.
Wheeler, Graham M.; Mander, Adrian P.; Bedding, Alun; Brock, Kristian; Cornelius, Victoria; Grieve, Andrew P.; Jaki, Thomas; Love, Sharon B.; Odondi, Lang'o; Weir, Christopher J.; Yap, Christina; Bond, Simon J. (2019). "How to design a dose-finding study using the continual reassessment method". BMC Medical Research Methodology. 19 (1): 18. doi:10.1186/s12874-018-0638-z. PMC 6339349. PMID 30658575.
Grayling, Michael John; Wheeler, Graham Mark (2020). "A review of available software for adaptive clinical trial design". Clinical Trials. 17 (3): 323–331. doi:10.1177/1740774520906398. PMC 7736777. PMID 32063024. S2CID 189762427.
== See also ==
== References ==
== Sources ==
Kurtz, Esfahani, Scherer (July 2019). "Dynamic Risk Profiling Using Serial Tumor Biomarkers for Personalized Outcome Prediction". Cell. 178 (3): 699–713.e19. doi:10.1016/j.cell.2019.06.011. PMC 7380118. PMID 31280963.{{cite journal}}: CS1 maint: multiple names: authors list (link)
President's Council of Advisors on Science and Technology (September 2012). "Report To The President on Propelling Innovation in Drug Discovery, Development and Evaluation" (PDF). Executive Office of the President. Archived (PDF) from the original on 21 January 2017. Retrieved 4 January 2014.
Brennan, Zachary (5 June 2013). "CROs Slowly Shifting to Adaptive Clinical Trial Designs". Outsourcing-pharma.com. Retrieved 5 January 2014.
Spiegelhalter, David (April 2010). "Bayesian methods in clinical trials: Has there been any progress?" (PDF). Archived from the original (PDF) on 6 January 2014.
Carlin, Bradley P. (25 March 2009). "Bayesian Adaptive Methods for Clinical Trial Design and Analysis" (PDF).
== External links ==
Gottlieb K. (2016) The FDA adaptive trial design guidance in a nutshell - A review in Q&A format for decision makers. PeerJ Preprints 4:e1825v1 [1]
Coffey, C. S.; Kairalla, J. A. (2008). "Adaptive clinical trials: Progress and challenges". Drugs in R&D. 9 (4): 229–242. doi:10.2165/00126839-200809040-00003. PMID 18588354. S2CID 11861515.
Center for Drug Evaluation and Research (CDER); Center for Biologics Evaluation and Research (CBER) (February 2010). "Adaptive Design Clinical Trials for Drugs and Biologics" (PDF). Food and Drug Administration. Archived from the original (PDF) on 5 January 2014.
Yi, Cheng; Yu, Shen. "Bayesian Adaptive Designs for Clinical Trials" (PDF). M. D. Anderson.
Berry, Scott M.; Carlin, Bradley P.; Lee, J. Jack; Muller, Peter (20 July 2010). Bayesian Adaptive Methods for Clinical Trials. CRC Press. ISBN 978-1-4398-2551-8. Berry on BAMCT on YouTube
Press, W. H. (2009). "Bandit solutions provide unified ethical models for randomized clinical trials and comparative effectiveness research". Proceedings of the National Academy of Sciences. 106 (52): 22387–92. doi:10.1073/pnas.0912378106. PMC 2793317. PMID 20018711.
In statistics and in machine learning, a linear predictor function is a linear function (linear combination) of a set of coefficients and explanatory variables (independent variables), whose value is used to predict the outcome of a dependent variable. This sort of function usually arises in linear regression, where the coefficients are called regression coefficients. However, such functions also occur in various types of linear classifiers (e.g. logistic regression, perceptrons, support vector machines, and linear discriminant analysis), as well as in various other models, such as principal component analysis and factor analysis. In many of these models, the coefficients are referred to as "weights".
== Definition ==
The basic form of a linear predictor function {\displaystyle f(i)} for data point i (consisting of p explanatory variables), for i = 1, ..., n, is
{\displaystyle f(i)=\beta _{0}+\beta _{1}x_{i1}+\cdots +\beta _{p}x_{ip},}
where {\displaystyle x_{ik}}, for k = 1, ..., p, is the value of the k-th explanatory variable for data point i, and {\displaystyle \beta _{0},\ldots ,\beta _{p}} are the coefficients (regression coefficients, weights, etc.) indicating the relative effect of a particular explanatory variable on the outcome.
=== Notations ===
It is common to write the predictor function in a more compact form as follows:
The coefficients β0, β1, ..., βp are grouped into a single vector β of size p + 1.
For each data point i, an additional explanatory pseudo-variable xi0 is added, with a fixed value of 1, corresponding to the intercept coefficient β0.
The resulting explanatory variables xi0(= 1), xi1, ..., xip are then grouped into a single vector xi of size p + 1.
==== Vector Notation ====
This makes it possible to write the linear predictor function as follows:
{\displaystyle f(i)={\boldsymbol {\beta }}\cdot \mathbf {x} _{i}}
using the notation for a dot product between two vectors.
==== Matrix Notation ====
An equivalent form using matrix notation is as follows:
{\displaystyle f(i)={\boldsymbol {\beta }}^{\mathrm {T} }\mathbf {x} _{i}=\mathbf {x} _{i}^{\mathrm {T} }{\boldsymbol {\beta }}}
where {\displaystyle {\boldsymbol {\beta }}} and {\displaystyle \mathbf {x} _{i}} are assumed to be (p+1)-by-1 column vectors, {\displaystyle {\boldsymbol {\beta }}^{\mathrm {T} }} is the matrix transpose of {\displaystyle {\boldsymbol {\beta }}} (so {\displaystyle {\boldsymbol {\beta }}^{\mathrm {T} }} is a 1-by-(p+1) row vector), and {\displaystyle {\boldsymbol {\beta }}^{\mathrm {T} }\mathbf {x} _{i}} indicates matrix multiplication between the 1-by-(p+1) row vector and the (p+1)-by-1 column vector, producing a 1-by-1 matrix that is taken to be a scalar.
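In code, the vector form reduces to a single dot product. A minimal sketch (the function name and the example numbers are hypothetical):

```python
def linear_predictor(beta, x):
    """Compute f(i) = beta . x_i, where x[0] is the intercept pseudo-variable (== 1)."""
    if len(beta) != len(x):
        raise ValueError("beta and x must both have length p + 1")
    return sum(b * xk for b, xk in zip(beta, x))

# beta = (beta0, beta1, beta2), x_i = (1, x_i1, x_i2)
value = linear_predictor([1.0, 2.0, -0.5], [1.0, 3.0, 4.0])  # 1 + 6 - 2 = 5.0
```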
== Linear regression ==
An example of the usage of a linear predictor function is in linear regression, where each data point is associated with a continuous outcome yi, and the relationship is written
{\displaystyle y_{i}=f(i)+\varepsilon _{i}={\boldsymbol {\beta }}^{\mathrm {T} }\mathbf {x} _{i}\ +\varepsilon _{i},}
where {\displaystyle \varepsilon _{i}} is a disturbance term or error variable, an unobserved random variable that adds noise to the linear relationship between the dependent variable and predictor function.
== Stacking ==
In some models (standard linear regression, in particular), the equations for each of the data points i = 1, ..., n are stacked together and written in vector form as
{\displaystyle \mathbf {y} =\mathbf {X} {\boldsymbol {\beta }}+{\boldsymbol {\varepsilon }},\,}
where
{\displaystyle \mathbf {y} ={\begin{pmatrix}y_{1}\\y_{2}\\\vdots \\y_{n}\end{pmatrix}},\quad \mathbf {X} ={\begin{pmatrix}\mathbf {x} '_{1}\\\mathbf {x} '_{2}\\\vdots \\\mathbf {x} '_{n}\end{pmatrix}}={\begin{pmatrix}x_{11}&\cdots &x_{1p}\\x_{21}&\cdots &x_{2p}\\\vdots &\ddots &\vdots \\x_{n1}&\cdots &x_{np}\end{pmatrix}},\quad {\boldsymbol {\beta }}={\begin{pmatrix}\beta _{1}\\\vdots \\\beta _{p}\end{pmatrix}},\quad {\boldsymbol {\varepsilon }}={\begin{pmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\vdots \\\varepsilon _{n}\end{pmatrix}}.}
The matrix X is known as the design matrix and encodes all known information about the independent variables. The variables {\displaystyle \varepsilon _{i}} are random variables, which in standard linear regression are independent draws from a normal distribution; they express the influence of any unknown factors on the outcome.
This makes it possible to find optimal coefficients through the method of least squares using simple matrix operations. In particular, the optimal coefficients {\displaystyle {\boldsymbol {\hat {\beta }}}} as estimated by least squares can be written as follows:
{\displaystyle {\boldsymbol {\hat {\beta }}}=(X^{\mathrm {T} }X)^{-1}X^{\mathrm {T} }\mathbf {y} .}
The matrix {\displaystyle (X^{\mathrm {T} }X)^{-1}X^{\mathrm {T} }} is known as the Moore–Penrose pseudoinverse of X. The use of the matrix inverse in this formula requires that X is of full rank, i.e. that there is no perfect multicollinearity among the explanatory variables (no explanatory variable can be perfectly predicted from the others). When X is not of full rank, the singular value decomposition can be used to compute the pseudoinverse.
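The closed-form estimator can be checked numerically. The toy data below are hypothetical and noise-free (y = 1 + 2x exactly), so least squares recovers the coefficients exactly; both the normal-equations route and the SVD-based route give the same answer.

```python
import numpy as np

# Design matrix with an intercept column of ones; y = 1 + 2x with no noise.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

# Normal-equations form of (X^T X)^{-1} X^T y ...
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
# ... and the SVD-based route, which also copes with rank deficiency.
beta_svd, *_ = np.linalg.lstsq(X, y, rcond=None)
```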
== Preprocessing of explanatory variables ==
When a fixed set of nonlinear functions are used to transform the value(s) of a data point, these functions are known as basis functions. An example is polynomial regression, which uses a linear predictor function to fit an arbitrary degree polynomial relationship (up to a given order) between two sets of data points (i.e. a single real-valued explanatory variable and a related real-valued dependent variable), by adding multiple explanatory variables corresponding to various powers of the existing explanatory variable. Mathematically, the form looks like this:
{\displaystyle y_{i}=\beta _{0}+\beta _{1}x_{i}+\beta _{2}x_{i}^{2}+\cdots +\beta _{p}x_{i}^{p}.}
In this case, for each data point i, a set of explanatory variables is created as follows:
{\displaystyle (x_{i1}=x_{i},\quad x_{i2}=x_{i}^{2},\quad \ldots ,\quad x_{ip}=x_{i}^{p})}
and then standard linear regression is run. The basis functions in this example would be
{\displaystyle {\boldsymbol {\phi }}(x)=(\phi _{1}(x),\phi _{2}(x),\ldots ,\phi _{p}(x))=(x,x^{2},\ldots ,x^{p}).}
This example shows that a linear predictor function can actually be much more powerful than it first appears: It only really needs to be linear in the coefficients. All sorts of non-linear functions of the explanatory variables can be fit by the model.
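The polynomial expansion above can be sketched directly: the design matrix is built from powers of a single variable, then fitted by ordinary least squares. The degree and the data below are hypothetical, chosen so the fit is exact.

```python
import numpy as np

def polynomial_design(x, degree):
    """Columns (1, x, x^2, ..., x^degree) for a single explanatory variable."""
    x = np.asarray(x, dtype=float)
    return np.column_stack([x ** k for k in range(degree + 1)])

# Fitting y = x^2 with a degree-2 polynomial recovers beta = (0, 0, 1).
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = x ** 2
beta, *_ = np.linalg.lstsq(polynomial_design(x, 2), y, rcond=None)
```

The fit is still linear regression: only the inputs were transformed, not the coefficients.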
There is no particular need for the inputs to basis functions to be univariate or single-dimensional (or their outputs, for that matter, although in such a case, a K-dimensional output value is likely to be treated as K separate scalar-output basis functions). An example of this is radial basis functions (RBFs), which compute some transformed version of the distance to some fixed point:
{\displaystyle \phi (\mathbf {x} ;\mathbf {c} )=\phi (||\mathbf {x} -\mathbf {c} ||)=\phi ({\sqrt {(x_{1}-c_{1})^{2}+\ldots +(x_{K}-c_{K})^{2}}})}
An example is the Gaussian RBF, which has the same functional form as the normal distribution:
{\displaystyle \phi (\mathbf {x} ;\mathbf {c} )=e^{-b||\mathbf {x} -\mathbf {c} ||^{2}}}
which drops off rapidly as the distance from c increases.
A possible usage of RBFs is to create one for every observed data point. This means that the result of an RBF applied to a new data point will be close to 0 unless the new point is near to the point around which the RBF was applied. That is, the application of the radial basis functions will pick out the nearest point, and its regression coefficient will dominate. The result will be a form of nearest neighbor interpolation, where predictions are made by simply using the prediction of the nearest observed data point, possibly interpolating between multiple nearby data points when they are all similar distances away. This type of nearest neighbor method for prediction is often considered diametrically opposed to the type of prediction used in standard linear regression: But in fact, the transformations that can be applied to the explanatory variables in a linear predictor function are so powerful that even the nearest neighbor method can be implemented as a type of linear regression.
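The Gaussian RBF above is a one-line computation. A minimal sketch, with a hypothetical center `c` and width parameter `b`:

```python
import numpy as np

def gaussian_rbf(x, c, b=1.0):
    """Gaussian radial basis function exp(-b * ||x - c||^2) centered at c."""
    x, c = np.asarray(x, dtype=float), np.asarray(c, dtype=float)
    return float(np.exp(-b * np.sum((x - c) ** 2)))

# The feature equals 1 at the center and decays rapidly with distance from it,
# which is what makes the nearest center dominate the prediction.
```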
It is even possible to fit some functions that appear non-linear in the coefficients by transforming the coefficients into new coefficients that do appear linear. For example, a function of the form
{\displaystyle a+b^{2}x_{i1}+{\sqrt {c}}x_{i2}}
for coefficients {\displaystyle a,b,c} could be transformed into the appropriate linear function by applying the substitutions {\displaystyle b'=b^{2},c'={\sqrt {c}},} leading to
{\displaystyle a+b'x_{i1}+c'x_{i2},}
which is linear. Linear regression and similar techniques could be applied and will often still find the optimal coefficients, but their error estimates and such will be wrong.
The explanatory variables may be of any type: real-valued, binary, categorical, etc. The main distinction is between continuous variables (e.g. income, age, blood pressure, etc.) and discrete variables (e.g. sex, race, political party, etc.). Discrete variables referring to more than two possible choices are typically coded using dummy variables (or indicator variables), i.e. separate explanatory variables taking the value 0 or 1 are created for each possible value of the discrete variable, with a 1 meaning "variable does have the given value" and a 0 meaning "variable does not have the given value". For example, a four-way discrete variable of blood type with the possible values "A, B, AB, O" would be converted to separate two-way dummy variables, "is-A, is-B, is-AB, is-O", where only one of them has the value 1 and all the rest have the value 0. This allows for separate regression coefficients to be matched for each possible value of the discrete variable.
Note that, for K categories, not all K dummy variables are independent of each other. For example, in the above blood type example, only three of the four dummy variables are independent, in the sense that once the values of three of the variables are known, the fourth is automatically determined. Thus, it's really only necessary to encode three of the four possibilities as dummy variables, and in fact if all four possibilities are encoded, the overall model becomes non-identifiable. This causes problems for a number of methods, such as the simple closed-form solution used in linear regression. The solution is either to avoid such cases by eliminating one of the dummy variables, and/or introduce a regularization constraint (which necessitates a more powerful, typically iterative, method for finding the optimal coefficients).
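The dummy-variable scheme with a dropped reference category can be sketched as a small helper (the function name and the blood-type example follow the text; the implementation is an illustration, not a standard library routine):

```python
def dummy_encode(value, categories):
    """One-hot encode a discrete variable, dropping the last category as the
    reference level so the design matrix stays identifiable."""
    if value not in categories:
        raise ValueError(f"unknown category: {value!r}")
    return [1 if value == c else 0 for c in categories[:-1]]

blood_types = ["A", "B", "AB", "O"]
# "O" is the reference level: it is encoded as all zeros, and the other
# three categories each get their own 0/1 indicator column.
```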
== See also ==
Linear model
Linear regression
== References ==
In the statistical analysis of time series, autoregressive–moving-average (ARMA) models are a way to describe a (weakly) stationary stochastic process using autoregression (AR) and a moving average (MA), each with a polynomial. They are a tool for understanding a series and predicting future values. AR involves regressing the variable on its own lagged (i.e., past) values. MA involves modeling the error as a linear combination of error terms occurring contemporaneously and at various times in the past. The model is usually denoted ARMA(p, q), where p is the order of AR and q is the order of MA.
The general ARMA model was described in the 1951 thesis of Peter Whittle, Hypothesis testing in time series analysis, and it was popularized in the 1970 book by George E. P. Box and Gwilym Jenkins.
ARMA models can be estimated by using the Box–Jenkins method.
== Mathematical formulation ==
=== Autoregressive model ===
The notation AR(p) refers to the autoregressive model of order p. The AR(p) model is written as
{\displaystyle X_{t}=\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\varepsilon _{t}}
where {\displaystyle \varphi _{1},\ldots ,\varphi _{p}} are parameters and the random variable {\displaystyle \varepsilon _{t}} is white noise, usually a sequence of independent and identically distributed (i.i.d.) normal random variables.
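An AR(p) path can be simulated by direct recursion on the defining equation. This is an illustrative sketch: the coefficient, burn-in length, and seed are hypothetical, and the burn-in period is discarded so the returned path is approximately drawn from the stationary distribution.

```python
import numpy as np

def simulate_ar(phi, n, sigma=1.0, burn=500, seed=0):
    """Simulate an AR(p) path X_t = sum_i phi_i * X_{t-i} + eps_t."""
    rng = np.random.default_rng(seed)
    p = len(phi)
    eps = rng.normal(0.0, sigma, size=n + burn)
    x = np.zeros(n + burn)
    for t in range(p, n + burn):
        x[t] = sum(phi[i] * x[t - 1 - i] for i in range(p)) + eps[t]
    return x[burn:]  # drop the burn-in so initial zeros do not matter

x = simulate_ar([0.7], 5000)
```

For an AR(1) process the lag-1 autocorrelation of the simulated path should be close to the coefficient 0.7.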
In order for the model to remain stationary, the roots of its characteristic polynomial must lie outside the unit circle. For example, processes in the AR(1) model with {\displaystyle |\varphi _{1}|\geq 1} are not stationary because the root of {\displaystyle 1-\varphi _{1}B=0} lies on or within the unit circle.
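The stationarity condition can be checked numerically by computing the roots of the characteristic polynomial (a minimal sketch; the helper name is hypothetical):

```python
import numpy as np

def ar_is_stationary(phi):
    """Stationarity check for AR(p): all roots of
    1 - phi_1 B - ... - phi_p B^p must lie strictly outside the unit circle."""
    # np.roots expects coefficients ordered from the highest power of B downward.
    coeffs = [-c for c in reversed(phi)] + [1.0]
    roots = np.roots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0))
```

For example, phi = [0.5] gives the root B = 2 (stationary), while phi = [1.2] gives B ≈ 0.83 (not stationary).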
The augmented Dickey–Fuller test can assess the stability of an intrinsic mode function and trend components. For stationary time series, ARMA models can be used, while for non-stationary series, long short-term memory (LSTM) models can be used to derive abstract features. The final value is obtained by reconstructing the predicted outcomes of each time series.
=== Moving average model ===
The notation MA(q) refers to the moving average model of order q:
{\displaystyle X_{t}=\mu +\varepsilon _{t}+\sum _{i=1}^{q}\theta _{i}\varepsilon _{t-i}\,}
where the {\displaystyle \theta _{1},...,\theta _{q}} are the parameters of the model, {\displaystyle \mu } is the expectation of {\displaystyle X_{t}} (often assumed to equal 0), and {\displaystyle \varepsilon _{1}}, ..., {\displaystyle \varepsilon _{t}} are i.i.d. white noise error terms that are commonly normal random variables.
=== ARMA model ===
The notation ARMA(p, q) refers to the model with p autoregressive terms and q moving-average terms. This model contains the AR(p) and MA(q) models,
{\displaystyle X_{t}=\varepsilon _{t}+\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\sum _{i=1}^{q}\theta _{i}\varepsilon _{t-i}.\,}
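The combined model can likewise be simulated by recursion, keeping a history of both past values and past shocks. This is an illustrative sketch with hypothetical parameter values.

```python
import numpy as np

def simulate_arma(phi, theta, n, sigma=1.0, burn=500, seed=0):
    """Simulate ARMA(p, q): X_t = eps_t + sum_i phi_i X_{t-i} + sum_i theta_i eps_{t-i}."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    eps = rng.normal(0.0, sigma, size=n + burn)
    x = np.zeros(n + burn)
    for t in range(max(p, q), n + burn):
        ar_part = sum(phi[i] * x[t - 1 - i] for i in range(p))
        ma_part = sum(theta[i] * eps[t - 1 - i] for i in range(q))
        x[t] = eps[t] + ar_part + ma_part
    return x[burn:]  # discard the burn-in period

x = simulate_arma([0.5], [0.4], 4000)
```

For ARMA(1, 1) with these parameters the theoretical lag-1 autocorrelation is about 0.69, which the simulated path should reflect.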
=== In terms of lag operator ===
In some texts, the model is specified using the lag operator L. In these terms, the AR(p) model is given by
{\displaystyle \varepsilon _{t}=\left(1-\sum _{i=1}^{p}\varphi _{i}L^{i}\right)X_{t}=\varphi (L)X_{t}\,}
where {\displaystyle \varphi } represents the polynomial
{\displaystyle \varphi (L)=1-\sum _{i=1}^{p}\varphi _{i}L^{i}.\,}
The MA(q) model is given by
{\displaystyle X_{t}-\mu =\left(1+\sum _{i=1}^{q}\theta _{i}L^{i}\right)\varepsilon _{t}=\theta (L)\varepsilon _{t},\,}
where {\displaystyle \theta } represents the polynomial
{\displaystyle \theta (L)=1+\sum _{i=1}^{q}\theta _{i}L^{i}.\,}
Finally, the combined ARMA(p, q) model is given by
{\displaystyle \left(1-\sum _{i=1}^{p}\varphi _{i}L^{i}\right)X_{t}=\left(1+\sum _{i=1}^{q}\theta _{i}L^{i}\right)\varepsilon _{t}\,,}
or more concisely,
{\displaystyle \varphi (L)X_{t}=\theta (L)\varepsilon _{t}\,}
or
{\displaystyle {\frac {\varphi (L)}{\theta (L)}}X_{t}=\varepsilon _{t}\,.}
This is the form used in Box, Jenkins & Reinsel.
Moreover, starting the summations from {\displaystyle i=0} and setting {\displaystyle \phi _{0}=-1} and {\displaystyle \theta _{0}=1} gives an even more elegant formulation:
{\displaystyle -\sum _{i=0}^{p}\phi _{i}L^{i}\;X_{t}=\sum _{i=0}^{q}\theta _{i}L^{i}\;\varepsilon _{t}\,.}
== Spectrum ==
The spectral density of an ARMA process is
S(f) = \frac{\sigma^2}{2\pi} \left| \frac{\theta(e^{-if})}{\phi(e^{-if})} \right|^2
where \sigma^2 is the variance of the white noise, \theta is the characteristic polynomial of the moving-average part of the ARMA model, and \phi is the characteristic polynomial of the autoregressive part of the ARMA model.
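Integrating S(f) over (−π, π) recovers the process variance. The sketch below (hypothetical helper name) evaluates the density numerically and checks that property for an AR(1) process, whose variance is σ²/(1 − φ₁²):

```python
import numpy as np

def arma_spectrum(f, phi, theta, sigma2=1.0):
    """S(f) = sigma^2/(2*pi) * |theta(e^{-if}) / phi(e^{-if})|^2, with
    phi(z) = 1 - sum_i phi_i z^i and theta(z) = 1 + sum_i theta_i z^i."""
    z = np.exp(-1j * f)
    phi_z = 1 - sum(c * z ** (i + 1) for i, c in enumerate(phi))
    theta_z = 1 + sum(c * z ** (i + 1) for i, c in enumerate(theta))
    return sigma2 / (2 * np.pi) * np.abs(theta_z / phi_z) ** 2

# trapezoidal integration of S over one period: should equal 1/(1 - 0.5^2)
f = np.linspace(-np.pi, np.pi, 100001)
s = arma_spectrum(f, [0.5], [])
var_ar1 = np.sum((s[1:] + s[:-1]) / 2 * np.diff(f))
```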
== Fitting models ==
=== Choosing p and q ===
An appropriate value of p in the ARMA(p, q) model can be found by plotting the partial autocorrelation functions. Similarly, q can be estimated by using the autocorrelation functions. Both p and q can be determined simultaneously using extended autocorrelation functions (EACF). Further information can be gleaned by considering the same functions for the residuals of a model fitted with an initial selection of p and q.
Brockwell & Davis recommend using the Akaike information criterion (AIC) for finding p and q. Another option is the Bayesian information criterion (BIC).
=== Estimating coefficients ===
After choosing p and q, ARMA models can be fitted by least squares regression to find the values of the parameters which minimize the error term. It is good practice to find the smallest values of p and q which provide an acceptable fit to the data. For a pure AR model, the Yule-Walker equations may be used to provide a fit.
ARMA outputs are used primarily to forecast (predict), and not to infer causation as in other areas of econometrics and regression methods such as OLS and 2SLS.
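As a minimal illustration of these ideas (not a substitute for the statsmodels or R routines mentioned below), a pure AR(p) model can be fitted by conditional least squares and the order chosen by a simple AIC-style criterion. All function names here are hypothetical:

```python
import numpy as np

def fit_ar_ls(x, p):
    """Conditional least-squares AR(p) fit: regress x_t on x_{t-1}, ..., x_{t-p}."""
    X = np.column_stack([x[p - l : len(x) - l] for l in range(1, p + 1)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid.var()

def select_order(x, max_p=5):
    """Choose p minimizing an AIC-style score n*log(sigma_hat^2) + 2p."""
    n = len(x)
    scores = {p: n * np.log(fit_ar_ls(x, p)[1]) + 2 * p for p in range(1, max_p + 1)}
    return min(scores, key=scores.get)

# Simulated AR(1) with phi = 0.6: the fitted coefficient should be near 0.6
rng = np.random.default_rng(2)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.6 * x[t - 1] + rng.normal()
coef, _ = fit_ar_ls(x, 1)
```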
=== Software implementations ===
In R, the standard package stats has the function arima, documented in ARIMA Modelling of Time Series. Package astsa has an improved script called sarima for fitting ARMA models (seasonal and nonseasonal) and sarima.sim to simulate data from these models. Extension packages contain related and extended functionality: package tseries includes the function arma(), documented in "Fit ARMA Models to Time Series"; package fracdiff contains fracdiff() for fractionally integrated ARMA processes; and package forecast includes auto.arima for selecting a parsimonious set of p and q. The CRAN task view on Time Series contains links to most of these.
Mathematica has a complete library of time series functions including ARMA.
MATLAB includes functions such as arma, ar and arx to estimate autoregressive, exogenous autoregressive and ARMAX models. See System Identification Toolbox and Econometrics Toolbox for details.
Julia has community-driven packages that implement fitting with an ARMA model such as arma.jl.
Python has the statsmodels package, which includes many models and functions for time series analysis, including ARMA. Originally a scikits package (scikits.statsmodels), it is now stand-alone and integrates well with Pandas.
PyFlux has a Python-based implementation of ARIMAX models, including Bayesian ARIMAX models.
IMSL Numerical Libraries are libraries of numerical analysis functionality including ARMA and ARIMA procedures implemented in standard programming languages like C, Java, C# .NET, and Fortran.
gretl can estimate ARMA models.
GNU Octave extra package octave-forge supports AR models.
Stata includes the function arima for ARMA and ARIMA models.
SuanShu is a Java library of numerical methods that implements univariate/multivariate ARMA, ARIMA, ARMAX, etc models, documented in "SuanShu, a Java numerical and statistical library".
SAS has an econometric package, ETS, that estimates ARIMA models.
== History and interpretations ==
The general ARMA model was described in the 1951 thesis of Peter Whittle, who used mathematical analysis (Laurent series and Fourier analysis) and statistical inference. ARMA models were popularized by a 1970 book by George E. P. Box and Jenkins, who expounded an iterative (Box–Jenkins) method for choosing and estimating them. This method was useful for low-order polynomials (of degree three or less).
ARMA is essentially an infinite impulse response filter applied to white noise, with some additional interpretation placed on it.
In digital signal processing, ARMA is represented as a digital filter with white noise at the input and the ARMA process at the output.
== Applications ==
ARMA is appropriate when a system is a function of a series of unobserved shocks (the MA or moving average part) as well as its own behavior. For example, stock prices may be shocked by fundamental information as well as exhibiting technical trending and mean-reversion effects due to market participants.
== Generalizations ==
There are various generalizations of ARMA. Nonlinear AR (NAR), nonlinear MA (NMA) and nonlinear ARMA (NARMA) model nonlinear dependence on past values and error terms. Vector AR (VAR) and vector ARMA (VARMA) model multivariate time series. Autoregressive integrated moving average (ARIMA) models non-stationary time series (that is, series whose mean changes over time). Autoregressive conditional heteroskedasticity (ARCH) models time series where the variance changes. Seasonal ARIMA (SARIMA, or periodic ARMA) models periodic variation. Autoregressive fractionally integrated moving average (ARFIMA, or fractional ARIMA, FARIMA) models time series that exhibit long memory. Multiscale AR (MAR) is indexed by the nodes of a tree instead of integers.
=== Autoregressive–moving-average model with exogenous inputs (ARMAX) ===
The notation ARMAX(p, q, b) refers to a model with p autoregressive terms, q moving average terms and b exogenous inputs terms. The last term is a linear combination of the last b terms of a known and external time series
d_t. It is given by:
X_t = \varepsilon_t + \sum_{i=1}^{p} \varphi_i X_{t-i} + \sum_{i=1}^{q} \theta_i \varepsilon_{t-i} + \sum_{i=1}^{b} \eta_i d_{t-i},
where \eta_1, \ldots, \eta_b are the parameters of the exogenous input d_t.
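The ARMAX recursion can be simulated the same way as a plain ARMA model, with the extra exogenous sum. The following sketch is illustrative (hypothetical helper name, Gaussian noise assumed):

```python
import numpy as np

def simulate_armax(phi, theta, eta, d, sigma=1.0, seed=0):
    """X_t = eps_t + sum phi_i X_{t-i} + sum theta_i eps_{t-i} + sum eta_i d_{t-i}."""
    rng = np.random.default_rng(seed)
    p, q, b = len(phi), len(theta), len(eta)
    lag = max(p, q, b)
    n = len(d)
    eps = rng.normal(0.0, sigma, n)
    x = np.zeros(n)
    for t in range(lag, n):
        x[t] = (eps[t]
                + sum(phi[i] * x[t - 1 - i] for i in range(p))
                + sum(theta[i] * eps[t - 1 - i] for i in range(q))
                + sum(eta[i] * d[t - 1 - i] for i in range(b)))
    return x

d = np.sin(np.linspace(0, 20, 400))          # a known, external exogenous series
x = simulate_armax([0.4], [0.2], [1.5], d)
```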
Some nonlinear variants of models with exogenous variables have been defined: see for example Nonlinear autoregressive exogenous model.
Statistical packages implement the ARMAX model through the use of "exogenous" (that is, independent) variables. Care must be taken when interpreting the output of those packages, because the estimated parameters usually (for example, in R and gretl) refer to the regression:
X_t - m_t = \varepsilon_t + \sum_{i=1}^{p} \varphi_i (X_{t-i} - m_{t-i}) + \sum_{i=1}^{q} \theta_i \varepsilon_{t-i},
where m_t incorporates all exogenous (or independent) variables:
m_t = c + \sum_{i=0}^{b} \eta_i d_{t-i}.
== See also ==
Autoregressive integrated moving average (ARIMA)
Exponential smoothing
Linear predictive coding
Predictive analytics
Infinite impulse response
Finite impulse response
== References ==
== Further reading ==
Mills, Terence C. (1990). Time Series Techniques for Economists. Cambridge University Press. ISBN 0521343399.
Percival, Donald B.; Walden, Andrew T. (1993). Spectral Analysis for Physical Applications. Cambridge University Press. ISBN 052135532X.
Francq, C.; Zakoïan, J.-M. (2005), "Recent results for linear time series models with non independent innovations", in Duchesne, P.; Remillard, B. (eds.), Statistical Modeling and Analysis for Complex Data Problems, Springer, pp. 241–265, CiteSeerX 10.1.1.721.1754.
Shumway, R.H. and Stoffer, D.S. (2017). Time Series Analysis and Its Applications with R Examples. Springer. DOI: 10.1007/978-3-319-52452-8
Hypertabastic survival models were introduced in 2007 by Mohammad Tabatabai, Zoran Bursac, David Williams, and Karan Singh. This distribution can be used to analyze time-to-event data in biomedical and public health areas, where such analysis is normally called survival analysis. In engineering, time-to-event analysis is referred to as reliability theory, and in business and economics it is called duration analysis. Other fields may use different names for the same analysis. These survival models are applicable in many fields, such as biomedicine, behavioral science, social science, statistics, medicine, bioinformatics, medical informatics, data science (especially machine learning), computational biology, business economics, engineering, and commercial applications. They consider not only the time to the event but also whether or not the event occurred. These time-to-event models can be applied in a variety of settings: for instance, time after diagnosis of cancer until death, comparison of individualized treatment with standard care in cancer research, time until an individual defaults on a loan, time to relapse in drug and smoking cessation, time until a property is sold after being put on the market, time until an individual upgrades to a new phone, time until job relocation, time until bones develop microscopic fractures under different stress levels, time from marriage until divorce, time until infection due to a catheter, and time from bridge completion until first repair.
== Hypertabastic cumulative distribution function ==
The Hypertabastic cumulative distribution function, or simply the hypertabastic distribution function, F(t) is defined as the probability that the random variable T will take a value less than or equal to t. The hypertabastic distribution function is defined as
F(t) = \begin{cases} 1 - \operatorname{sech}\left(\dfrac{\alpha(1 - t^{\beta} \coth(t^{\beta}))}{\beta}\right) & t > 0 \\ 0 & t \leq 0 \end{cases},
where \operatorname{sech} and \coth are the hyperbolic secant and hyperbolic cotangent functions respectively, and the parameters \alpha and \beta are both positive. The Hypertabastic probability density function is
f(t) = \begin{cases} \operatorname{sech}(W(t)) \left( \alpha t^{2\beta-1} \operatorname{csch}^2(t^{\beta}) - \alpha t^{\beta-1} \coth(t^{\beta}) \right) \tanh(W(t)) & t > 0 \\ 0 & t < 0 \end{cases},
where \operatorname{csch} and \tanh are the hyperbolic cosecant and hyperbolic tangent functions respectively, and
W(t) = \frac{\alpha(1 - t^{\beta} \coth(t^{\beta}))}{\beta}.
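These definitions translate directly into code. The sketch below uses hypothetical helper names and NumPy (both assumptions, not from the article) to evaluate W(t), F(t) and f(t); the pdf can be checked against a numerical derivative of the cdf:

```python
import numpy as np

def W(t, alpha, beta):
    tb = t ** beta
    return alpha * (1.0 - tb / np.tanh(tb)) / beta   # coth(u) = 1/tanh(u)

def hypertabastic_cdf(t, alpha, beta):
    """F(t) = 1 - sech(W(t)) for t > 0."""
    return 1.0 - 1.0 / np.cosh(W(t, alpha, beta))

def hypertabastic_pdf(t, alpha, beta):
    """f(t) = sech(W(t)) * (a*t^(2b-1)*csch^2(t^b) - a*t^(b-1)*coth(t^b)) * tanh(W(t))."""
    tb = t ** beta
    w = W(t, alpha, beta)
    term = (alpha * t ** (2 * beta - 1) / np.sinh(tb) ** 2
            - alpha * t ** (beta - 1) / np.tanh(tb))
    return (1.0 / np.cosh(w)) * term * np.tanh(w)

# the pdf should match the slope of the cdf
t0, h = 1.0, 1e-5
slope = (hypertabastic_cdf(t0 + h, 1.0, 1.0) - hypertabastic_cdf(t0 - h, 1.0, 1.0)) / (2 * h)
```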
== Hypertabastic survival function ==
The Hypertabastic survival function is defined as
S(t) = \operatorname{sech}\left(\frac{\alpha(1 - t^{\beta} \coth(t^{\beta}))}{\beta}\right),
where S(t) is the probability that the waiting time exceeds t.
For t > 0, the Restricted Expected (mean) Survival Time of the random variable T is denoted by REST(t) and is defined as
REST(t) = \int_0^t S(u)\, du.
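REST(t) can be approximated by numerical quadrature. This small sketch (trapezoidal rule, hypothetical function names) evaluates it for the survival function above; since 0 < S(u) < 1 for u > 0, REST(t) must lie between 0 and t:

```python
import numpy as np

def survival(t, alpha, beta):
    tb = t ** beta
    return 1.0 / np.cosh(alpha * (1.0 - tb / np.tanh(tb)) / beta)

def rest(t, alpha, beta, n=20000):
    """REST(t) = integral_0^t S(u) du, via the trapezoidal rule."""
    u = np.linspace(1e-9, t, n)      # start just above 0 to avoid 0/0 in S
    s = survival(u, alpha, beta)
    return np.sum((s[1:] + s[:-1]) / 2 * np.diff(u))

r1, r2 = rest(1.0, 1.0, 1.0), rest(2.0, 1.0, 1.0)
```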
== Hypertabastic hazard function ==
For the continuous random variable T representing time to event, the Hypertabastic hazard function h(t), which represents the instantaneous failure rate at time t given survival up to time t, is defined as
h(t) = \lim_{\Delta t \to 0^{+}} \frac{P(t \leq T < t + \Delta t \mid T \geq t)}{\Delta t} = \alpha \left( t^{2\beta-1} \operatorname{csch}^2(t^{\beta}) - t^{\beta-1} \coth(t^{\beta}) \right) \tanh(W(t)).
The Hypertabastic hazard function has the flexibility to model a variety of hazard shapes (Spirko, L. (2017), Variable Selection and Supervised Dimension Reduction for Large-Scale Genomic Data with Censored Survival Outcomes, PhD thesis, Temple University). These different shapes can apply to mechanisms for which the hazard function does not agree with conventional models. The following is a list of possible shapes for the Hypertabastic hazard function:
For 0 < \beta \leq 0.25, the Hypertabastic hazard function is monotonically decreasing, indicating a higher likelihood of failure at early times. For 0.25 < \beta < 1, the Hypertabastic hazard curve first increases with time until it reaches its maximum failure rate, and thereafter the failure rate decreases with time (unimodal). For \beta = 1, the Hypertabastic hazard function initially increases with time and then approaches its horizontal asymptote \alpha. For 1 < \beta < 2, the Hypertabastic hazard function first increases with upward concavity until it reaches its inflection point and subsequently continues to increase with downward concavity. For \beta = 2, the Hypertabastic hazard function initially increases with upward concavity until it reaches its point of inflection, thereafter approaching a linear asymptote with slope \alpha. For \beta > 2, the Hypertabastic hazard function increases with upward concavity.
The Hypertabastic cumulative hazard function is
H(t) = \int_0^t h(v)\, dv = -\ln(S(t)).
== Hypertabastic proportional hazards model ==
The hazard function h(t \mid \mathbf{x}, \boldsymbol{\theta}) of the Hypertabastic proportional hazards model has the form
h(t \mid \mathbf{x}, \boldsymbol{\theta}) = h(t)\, g(\boldsymbol{\theta} \mid \mathbf{x}),
where \mathbf{x} is a p-dimensional vector of explanatory variables and \boldsymbol{\theta} is a vector of unknown parameters. The combined effect of explanatory variables
g(\boldsymbol{\theta} \mid \mathbf{x}) = e^{-\theta_0 - \sum_{k=1}^{p} \theta_k x_k}
is a non-negative function of \mathbf{x} with g(\boldsymbol{\theta} \mid \mathbf{0}) = e^{-\theta_0}.
The Hypertabastic survival function S(t \mid \mathbf{x}, \boldsymbol{\theta}) for the proportional hazards model is defined as
S(t \mid \mathbf{x}, \boldsymbol{\theta}) = [S(t)]^{g(\boldsymbol{\theta} \mid \mathbf{x})}
and the Hypertabastic probability density function for the proportional hazards model is given by
f(t \mid \mathbf{x}, \boldsymbol{\theta}) = f(t)\, [S(t)]^{g(\boldsymbol{\theta} \mid \mathbf{x}) - 1}\, g(\boldsymbol{\theta} \mid \mathbf{x}).
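In code, the proportional hazards construction reduces to raising the baseline survival function to the power g(θ|x). A sketch with hypothetical names (the link and baseline functions are illustrative helpers, not from the article):

```python
import numpy as np

def g_link(theta, x):
    """g(theta|x) = exp(-theta_0 - sum_k theta_k x_k)."""
    return np.exp(-theta[0] - np.dot(theta[1:], x))

def baseline_survival(t, alpha, beta):
    """Hypertabastic baseline S(t) = sech(alpha*(1 - t^b*coth(t^b))/b)."""
    tb = t ** beta
    return 1.0 / np.cosh(alpha * (1.0 - tb / np.tanh(tb)) / beta)

def ph_survival(t, x, theta, alpha, beta):
    """Proportional hazards survival: S(t|x, theta) = [S(t)]^g(theta|x)."""
    return baseline_survival(t, alpha, beta) ** g_link(theta, x)

# with theta_0 = 0 and x = 0, g = 1 and the model reduces to the baseline
theta = np.array([0.0, 0.5])
s = ph_survival(1.0, np.array([0.0]), theta, 1.0, 1.0)
```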
Depending on the type of censoring, the maximum likelihood function technique along with an appropriate log-likelihood function may be used to estimate the model parameters.
If the sample consists of right censored data and the model to use is the Hypertabastic proportional hazards model, then the proportional hazards log-likelihood function is
LL(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) = \sum_{i=1}^{n} \left( \ln[\operatorname{sech}(W(t_i))]\, g(\boldsymbol{\theta} \mid \mathbf{x}_i) + \delta_i \ln\left[ \left( \alpha t_i^{2\beta-1} \operatorname{csch}^2(t_i^{\beta}) - \alpha t_i^{\beta-1} \coth(t_i^{\beta}) \right) \tanh(W(t_i))\, g(\boldsymbol{\theta} \mid \mathbf{x}_i) \right] \right).
== Hypertabastic accelerated failure time model ==
When the covariates act multiplicatively on the time scale, the model is called an accelerated failure time model. The Hypertabastic survival function for the accelerated failure time model is given by
S(t \mid \mathbf{x}, \boldsymbol{\theta}) = S(t\, g(\boldsymbol{\theta} \mid \mathbf{x})).
The Hypertabastic accelerated failure time model has a hazard function h(t \mid \mathbf{x}, \boldsymbol{\theta}) of the form
h(t \mid \mathbf{x}, \boldsymbol{\theta}) = h(t\, g(\boldsymbol{\theta} \mid \mathbf{x}))\, g(\boldsymbol{\theta} \mid \mathbf{x}).
The Hypertabastic probability density function for the accelerated failure time model is
f(t \mid \mathbf{x}, \boldsymbol{\theta}) = f(t\, g(\boldsymbol{\theta} \mid \mathbf{x}))\, g(\boldsymbol{\theta} \mid \mathbf{x}).
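The accelerated failure time construction instead rescales the time axis by g(θ|x). The sketch below (hypothetical names) shows the survival function and its time-scaling property: when g = 2, survival at time t equals baseline survival at time 2t:

```python
import numpy as np

def baseline_survival(t, alpha, beta):
    """Hypertabastic baseline S(t)."""
    tb = t ** beta
    return 1.0 / np.cosh(alpha * (1.0 - tb / np.tanh(tb)) / beta)

def aft_survival(t, x, theta, alpha, beta):
    """AFT survival: S(t|x, theta) = S(t * g(theta|x))."""
    g = np.exp(-theta[0] - np.dot(theta[1:], x))
    return baseline_survival(t * g, alpha, beta)

theta = np.array([-np.log(2.0), 0.0])   # chosen so that g = 2 when x = 0
s_fast = aft_survival(1.0, np.array([0.0]), theta, 1.0, 1.0)
s_base = baseline_survival(2.0, 1.0, 1.0)
```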
For right censored data, the log-likelihood function for the Hypertabastic accelerated failure time model is given by
LL(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) = \sum_{i=1}^{n} \left( \ln[\operatorname{sech}(W(Z(t_i)))] + \delta_i \ln\left[ \left( \alpha Z(t_i)^{2\beta-1} \operatorname{csch}^2(Z(t_i)^{\beta}) - \alpha Z(t_i)^{\beta-1} \coth(Z(t_i)^{\beta}) \right) \tanh(W(Z(t_i)))\, g(\boldsymbol{\theta} \mid \mathbf{x}_i) \right] \right),
where Z(t_i) = t_i\, g(\boldsymbol{\theta} \mid \mathbf{x}_i) and W(\cdot) is as defined above.
A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is used to test the goodness-of-fit of Hypertabastic accelerated failure time models and to compare them with models having unimodal hazard rate functions. Simulation studies have shown that the Hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions because of its flexible hazard shapes. The Hypertabastic distribution is also a competitor for statistical modeling when compared with the Birnbaum-Saunders and inverse Gaussian distributions.
== Likelihood functions for survival analysis ==
Consider a sample of survival times of n individuals t_1, t_2, \ldots, t_n with associated p-dimensional covariate vectors \mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n and an unknown parameter vector \boldsymbol{\theta} = (\theta_0, \theta_1, \ldots, \theta_p). Let f(t_i \mid \mathbf{x}_i, \boldsymbol{\theta}), F(t_i \mid \mathbf{x}_i, \boldsymbol{\theta}), S(t_i \mid \mathbf{x}_i, \boldsymbol{\theta}) and h(t_i \mid \mathbf{x}_i, \boldsymbol{\theta}) stand for the corresponding probability density function, cumulative distribution function, survival function and hazard function respectively.
In the absence of censoring (censoring normally occurs when the failure time of some individuals cannot be observed), the likelihood function is
L(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) = \prod_{i=1}^{n} f(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})
and the log-likelihood LL(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) is
LL(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) = \sum_{i=1}^{n} \ln[f(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})].
For right censored data, the likelihood function is
L(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) = \prod_{i=1}^{n} [f(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})]^{\delta_i} [S(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})]^{1-\delta_i}
or equivalently,
L(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) = \prod_{i=1}^{n} [h(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})]^{\delta_i}\, S(t_i \mid \mathbf{x}_i, \boldsymbol{\theta}),
and the log-likelihood function is
LL(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) = \sum_{i=1}^{n} \left( \delta_i \ln[f(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})] + (1 - \delta_i) \ln[S(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})] \right)
or equivalently,
LL(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) = \sum_{i=1}^{n} \left( \delta_i \ln[h(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})] + \ln[S(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})] \right),
where
\delta_i = \begin{cases} 0 & t_i \text{ is a right censored observation} \\ 1 & \text{otherwise.} \end{cases}
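The right-censored log-likelihood can be evaluated directly from these pieces: using f = h·S, the summand δ_i ln f + (1 − δ_i) ln S simplifies to ln S + δ_i ln h. The sketch below does this for the hypertabastic distribution without covariates (hypothetical helper names):

```python
import numpy as np

def log_s_and_log_h(t, alpha, beta):
    """Return (ln S(t), ln h(t)) for the hypertabastic distribution."""
    tb = t ** beta
    w = alpha * (1.0 - tb / np.tanh(tb)) / beta
    log_s = -np.log(np.cosh(w))
    hazard = alpha * (t ** (2 * beta - 1) / np.sinh(tb) ** 2
                      - t ** (beta - 1) / np.tanh(tb)) * np.tanh(w)
    return log_s, np.log(hazard)

def right_censored_loglik(t, delta, alpha, beta):
    """LL = sum_i [ ln S(t_i) + delta_i * ln h(t_i) ], delta_i = 1 if observed."""
    log_s, log_h = log_s_and_log_h(np.asarray(t, float), alpha, beta)
    return np.sum(log_s + np.asarray(delta) * log_h)

ll = right_censored_loglik([0.5, 1.2, 2.0], [1, 0, 1], alpha=1.0, beta=1.0)
```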
In the presence of left censored data, the likelihood function is
L(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) = \prod_{i=1}^{n} [f(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})]^{\gamma_i} [F(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})]^{1-\gamma_i}
and the corresponding log-likelihood function is
LL(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) = \sum_{i=1}^{n} \left( \gamma_i \ln[f(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})] + (1 - \gamma_i) \ln[F(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})] \right)
where
\gamma_i = \begin{cases} 0 & t_i \text{ is a left censored observation} \\ 1 & \text{otherwise.} \end{cases}
In the presence of interval censored data, the likelihood function is
L(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) = \prod_{i=1}^{n} \left( [f(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})]^{\xi_i} [F(v_i \mid \mathbf{x}_i, \boldsymbol{\theta}) - F(u_i \mid \mathbf{x}_i, \boldsymbol{\theta})]^{1-\xi_i} \right)
and the log-likelihood function is
LL(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) = \sum_{i=1}^{n} \left( \xi_i \ln[f(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})] + (1 - \xi_i) \ln[F(v_i \mid \mathbf{x}_i, \boldsymbol{\theta}) - F(u_i \mid \mathbf{x}_i, \boldsymbol{\theta})] \right)
where u_i \leq t_i \leq v_i for all interval censored observations and
\xi_i = \begin{cases} 0 & t_i \text{ is an interval censored observation} \\ 1 & \text{otherwise.} \end{cases}
If the intended sample consists of all types of censored data (right censored, left censored and interval censored), then its likelihood function takes the following form:
L(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) = \prod_{i=1}^{n} \left( [S(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})]^{1-\delta_i} [F(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})]^{1-\gamma_i} [F(v_i \mid \mathbf{x}_i, \boldsymbol{\theta}) - F(u_i \mid \mathbf{x}_i, \boldsymbol{\theta})]^{1-\xi_i} [f(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})]^{\delta_i + \gamma_i + \xi_i - 2} \right)
and its corresponding log-likelihood function is given by
LL(\boldsymbol{\theta}, \alpha, \beta : \mathbf{x}) = \sum_{i=1}^{n} \left( (1-\delta_i) \ln[S(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})] + (1-\gamma_i) \ln[F(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})] + (1-\xi_i) \ln[F(v_i \mid \mathbf{x}_i, \boldsymbol{\theta}) - F(u_i \mid \mathbf{x}_i, \boldsymbol{\theta})] + (\delta_i + \gamma_i + \xi_i - 2) \ln[f(t_i \mid \mathbf{x}_i, \boldsymbol{\theta})] \right).
== Applications of hypertabastic survival models ==
=== Cutaneous or mucosal melanoma ===
The Hypertabastic Accelerated Failure Time model was used to analyze a total of 27,532 patients regarding the impact of histology on the survival of patients with cutaneous or mucosal melanoma. Understanding patients’ histological subtypes and their failure rate assessment would enable clinicians and healthcare providers to perform individualized treatment, resulting in a lower risk of complication and higher survivability of patients.
=== Oil field quantities ===
The quantities at 49 locations in the same area of an oil field were examined to identify their underlying distribution. Using a generalized chi-squared test, the distribution of oil field quantities was represented by the Hypertabastic distribution and compared with the lognormal (LN), log-logistic (LL), Birnbaum-Saunders (BS) and inverse Gaussian (IG) distributions.
=== Remission duration for acute leukemia ===
The times of remission from a clinical trial for acute leukemia in children were used to analyze the remission duration for two groups of patients, controlling for the log of white blood cell counts. The Hypertabastic accelerated failure time model was used to analyze the remission duration of acute leukemia patients.
=== Brain tumor study of malignant glioma patients ===
A randomized clinical trial compared two chemotherapy regimens for 447 individuals with malignant glioma. A total of 293 patients died within a five-year period, and the median survival time was about 11 months. The overall model fit, in comparison with other parametric distributions, was assessed using generalized chi-square test statistics and the proportional hazards model.
=== Analysis of breast cancer patients ===
The Hypertabastic proportional hazard model was used to analyze numerous breast cancer data including the survival of breast cancer patients by exploring the role of a metastasis variable in combination with clinical and gene expression variables.
=== Analysis of hypertensive patients ===
One hundred five Nigerian patients who were diagnosed with hypertension from January 2013 to July 2018 were included in this study, where death was the event of interest. Six parametric models (exponential, Weibull, lognormal, log-logistic, Gompertz and Hypertabastic distributions, all lifetime distributions) were fitted to the data, and goodness-of-fit measures such as the standard error (S.E.), AIC, and BIC were used to determine the best-fitting model.
=== Analysis of cortical bone fracture ===
Stress fractures in older individuals are very important due to the growing number of elderly. Fatigue tests on 23 female bone samples from three individuals were analyzed. Hypertabastic survival and hazard functions of the normalized stress level and age were developed using previously published bone fatigue stress data. The event of interest was the number of cycles until the bone gets microscopic fracture. Furthermore, Hypertabastic proportional hazard models were used to investigate tensile fatigue and cycle-to-fatigue for cortical bone data.
=== Analysis of unemployment ===
Hypertabastic survival models have been used in the analysis of unemployment data and its comparison with the cox regression model.
=== Analysis of kidney carcinoma patients ===
Using National Cancer Institute data from 1975 to 2016, the impact of histological subtypes on the survival probability of 134,150 kidney carcinoma patients was examined. The study variables were race/ethnicity, age, sex, tumor grade, type of surgery, geographical location of the patient, and stage of disease. The Hypertabastic proportional hazards model was used to analyze the survival time of patients diagnosed with kidney carcinoma, to explore the effect of histological subtypes on their survival probability, and to assess the relationship between the histological subtypes, tumor stage, tumor grade, and type of surgery.
==== Kidney carcinoma SAS example code ====
Sample code in SAS:
=== Applications of hypertabastic survival models in bridge engineering ===
Although survival analysis tools and techniques have been widely used in medical and biomedical applications over the last few decades, their applications to engineering problems have been more sporadic and limited. The probabilistic assessment of service life of a wide variety of engineering systems, from small mechanical components to large bridge structures, can substantially benefit from well-established survival analysis techniques. Modeling of time-to-event phenomena in engineering applications can be performed under the influence of numerical and categorical covariates using observational or test data. The "survival" of an engineering component or system is synonymous with the more commonly used term "reliability". The term "hazard rate" or "conditional failure rate" (defined as the probability of failure per unit time assuming survival up to that time) is an important measure of the change in the rate of failure over time. In this context, failure is defined as reaching the target event in the time-to-event process, which could be a particular serviceability condition state, localized/partial structural failure, or global/catastrophic failure. The Hypertabastic parametric accelerated failure time survival model was applied to develop probabilistic models of bridge deck service life for Wisconsin. Bridge decks are typically concrete slabs on which traffic rides, as seen in the Marquette Interchange bridge. The authors used the National Bridge Inventory (NBI) dataset to obtain the data needed for their study. NBI records include discrete numerical ratings for bridge decks (and other bridge components) as well as other basic information such as Average Daily Traffic (ADT) and deck surface area (obtained by multiplying bridge length by bridge deck width). The numerical ratings range from 0 to 9, with 9 corresponding to brand-new condition and 0 to complete failure.
A deck condition rating of 5 was selected as the effective end of service life of bridge deck. The numerical covariates used were the ADT and deck surface area, while the categorical covariate was the superstructure material (structural steel or concrete).
The hypertabastic Proportional Hazards and Accelerated Failure Time models are useful techniques in analyzing bridge-related structures due to its flexibility of hazard curves, which can be monotonically increasing or decreasing with upward or downward concavity. It can also take the shape of a single mound curve. This flexibility in modeling various hazard shapes makes the model suitable for a wide variety of engineering problems.
Tabatabai et al. extended the Hypertabastic bridge deck models developed for Wisconsin bridges to bridges in six northern US states (Nabizadeh, A. (2015). Reliability of Bridge Superstructures in Wisconsin. Master's Thesis. UWM Digital Commons), and then to all 50 US states. The study of bridge decks in all 50 states indicated important differences in the reliability of bridge decks in different states and regions. Stevens et al. discuss the importance of survival analyses in identifying key bridge performance indicators and the use of Hypertabastic survival models for bridges. Nabizadeh et al. further extended the use of Hypertabastic survival models to bridge superstructures; the covariates used were ADT, maximum bridge span length, and superstructure type.
The survival function can be used to determine the expected life using the following equation (the area under the entire survival curve):
{\displaystyle {EL}_{0}=\int _{0}^{\infty }S(t)dt}
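The expected-life integral can be approximated numerically. The following sketch uses a simple trapezoidal rule; the Weibull survival curve and its parameters are illustrative assumptions, not values from the studies discussed here.

```python
import math

def expected_life(survival, upper=500.0, steps=200_000):
    """Approximate EL_0 = integral of S(t) dt over [0, infinity)
    by trapezoidal integration on [0, upper], where S(upper) ~ 0."""
    h = upper / steps
    total = 0.5 * (survival(0.0) + survival(upper))
    for i in range(1, steps):
        total += survival(i * h)
    return total * h

# Illustrative (assumed) Weibull survival curve:
# S(t) = exp(-(t/scale)^shape), scale = 40 years, shape = 2.
S = lambda t: math.exp(-((t / 40.0) ** 2.0))

el0 = expected_life(S)
# Closed form for this Weibull: EL_0 = scale * Gamma(1 + 1/shape)
closed = 40.0 * math.gamma(1.5)
```

The numeric estimate agrees with the closed-form Weibull mean, which is a quick sanity check on the quadrature.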
It is important to note that both the survival function and the expected life change as time passes. The conditional survival function {\displaystyle CS} is a function of time {\displaystyle t} and survival time {\displaystyle t_{s}}, and is defined as
{\displaystyle CS(t,t_{s})={\begin{cases}1&0\leq t\leq t_{s}\\{\frac {S(t)}{S(t_{s})}}&t>t_{s}\end{cases}}}
Nabizadeh et al. used the Hypertabastic survival functions developed for Wisconsin to analyze conditional survival functions and conditional expected service lives {\displaystyle EL_{c}(t_{s})}:
{\displaystyle {EL}_{c}(t_{s})=\int _{0}^{\infty }CS(t,t_{s})dt=t_{s}+\int _{t_{s}}^{\infty }CS(t,t_{s})dt=t_{s}+\int _{t_{s}}^{\infty }{\frac {S(t)}{S(t_{s})}}dt}
The conditional expected life would continue to increase as the survival time {\displaystyle t_{s}} increases. Nabizadeh et al. term this additional expected life the "survival dividend".
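The conditional expected life and the survival dividend can be sketched numerically. The survival curve below is an assumed Weibull, used only to illustrate that EL_c(t_s) grows with t_s; it is not one of the fitted Hypertabastic models.

```python
import math

def conditional_expected_life(survival, t_s, upper=500.0, steps=100_000):
    """EL_c(t_s) = t_s + (1/S(t_s)) * integral_{t_s}^{inf} S(t) dt,
    approximated by trapezoidal integration on [t_s, upper]."""
    h = (upper - t_s) / steps
    total = 0.5 * (survival(t_s) + survival(upper))
    for i in range(1, steps):
        total += survival(t_s + i * h)
    return t_s + total * h / survival(t_s)

# Illustrative (assumed) Weibull survival curve.
S = lambda t: math.exp(-((t / 40.0) ** 2.0))

el_new = conditional_expected_life(S, 0.0)   # expected life at t_s = 0
el_30 = conditional_expected_life(S, 30.0)   # after surviving 30 years
dividend = el_30 - el_new                    # the "survival dividend"
```

For any survival curve with an increasing hazard of this kind, the conditional expected life exceeds the unconditional one, and the difference is the survival dividend.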
An important mode of failure in bridge engineering is metal fatigue, which can result from repetitive applications of stress cycles to various details and connections in the structure. As the number of cycles {\displaystyle n_{c}} increases, the probability of fatigue failure increases. An important factor in fatigue life {\displaystyle N_{c}} is the stress range (Sr) (maximum minus minimum stress in a cycle). The probabilistic engineering fatigue problem can be treated as a "time"-to-event survival analysis problem if the number of cycles {\displaystyle n_{c}} is treated as a fictitious time variable {\displaystyle t}. This facilitates the application of well-established survival analysis techniques to engineering fatigue problems (Tabatabai et al.). The survival function {\displaystyle S(n_{c})}, probability density function {\displaystyle f(n_{c})}, hazard rate {\displaystyle h(n_{c})}, and cumulative probability of failure {\displaystyle F(n_{c})} can then be defined as
{\displaystyle S(n_{c})=P(N_{c}>n_{c})=1-F(n_{c})}
{\displaystyle f(n_{c})=\lim _{\delta n_{c}\to 0}{\frac {P(n_{c}<N_{c}<n_{c}+\delta n_{c})}{\delta n_{c}}}}
{\displaystyle h(n_{c})=\lim _{\delta n_{c}\to 0}{\frac {P(n_{c}<N_{c}<n_{c}+\delta n_{c}|N_{c}>n_{c})}{\delta n_{c}}}}
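These four cycle-domain functions can be illustrated with a concrete model. The sketch below assumes a Weibull distribution for fatigue life (a common but here hypothetical choice; the source uses the Hypertabastic model, whose formulas are not given in this section), with made-up parameters.

```python
import math

# Assumed Weibull fatigue-life model: S(n_c) = exp(-(n_c/eta)^beta),
# with hypothetical characteristic life eta (cycles) and shape beta.
ETA, BETA = 2.0e6, 3.0

def S(n_c):               # survival function S(n_c) = P(N_c > n_c)
    return math.exp(-((n_c / ETA) ** BETA))

def F(n_c):               # cumulative probability of failure
    return 1.0 - S(n_c)

def f(n_c):               # probability density function
    return (BETA / ETA) * (n_c / ETA) ** (BETA - 1) * S(n_c)

def h(n_c):               # hazard rate, h = f / S
    return f(n_c) / S(n_c)
```

With a shape parameter above 1, the hazard rate rises with the number of cycles, matching the intuition that fatigue failure becomes more likely as cycles accumulate.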
The Hypertabastic accelerated failure time model was used to analyze probabilistic fatigue life for various fatigue detail categories in steel bridges.
== References ==
Annualized failure rate (AFR) gives the estimated probability that a device or component will fail during a full year of use. It is a relation between the mean time between failure (MTBF) and the hours that a number of devices are run per year. AFR is estimated from a sample of like components—AFR and MTBF as given by vendors are population statistics that can not predict the behaviour of an individual unit.
== Hard disk drives ==
For example, AFR is used to characterize the reliability of hard disk drives.
The relationship between AFR and MTBF (in hours) is:
{\displaystyle AFR=1-\exp(-8766/MTBF)}
This equation assumes that the device or component is powered on for the full 8766 hours of a year, and gives the estimated fraction of an original sample of devices or components that will fail in one year, or, equivalently, 1 − AFR is the fraction of devices or components that will show no failures over a year. It is based on an exponential failure distribution (see failure rate for a full derivation).
Note: Some manufacturers count a year as 8760 hours.
Assuming a small AFR, this ratio can be approximated by
{\displaystyle AFR\approx {8766 \over MTBF}}
For example, a common specification for PATA and SATA drives may be an MTBF of 300,000 hours, giving an approximate theoretical 2.92% annualized failure rate i.e. a 2.92% chance that a given drive will fail during a year of use.
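The exact and approximate forms can be compared directly for the 300,000-hour example. This is a minimal sketch; the constant 8766 follows the convention stated above (some manufacturers use 8760).

```python
import math

HOURS_PER_YEAR = 8766  # 365.25 days; some manufacturers use 8760

def afr_exact(mtbf_hours):
    """AFR = 1 - exp(-8766/MTBF), from the exponential failure model."""
    return 1.0 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

def afr_approx(mtbf_hours):
    """First-order approximation, valid when AFR is small."""
    return HOURS_PER_YEAR / mtbf_hours

# The 300,000-hour MTBF drive from the text:
exact = afr_exact(300_000)    # ~2.88%
approx = afr_approx(300_000)  # ~2.92%
```

The approximation always slightly overstates the exact value, since 1 − e^(−x) < x for x > 0; at these MTBF levels the two agree to within a few hundredths of a percent.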
The AFR for a drive is derived from time-to-fail data from a reliability-demonstration test (RDT).
AFR will increase towards and beyond the end of the service life of a device or component. Google's 2007 study found, based on a large field sample of drives, that actual AFRs for individual drives ranged from 1.7% for first year drives to over 8.6% for three-year-old drives. A CMU 2007 study showed an estimated 3% mean AFR over 1–5 years based on replacement logs for a large sample of drives.
== See also ==
Failure rate
Frequency of exceedance
== References ==
In actuarial science, force of mortality represents the instantaneous rate of mortality at a certain age measured on an annualized basis. It is identical in concept to failure rate, also called hazard function, in reliability theory.
== Motivation and definition ==
In a life table, we consider the probability of a person dying from age x to x + 1, called qx. In the continuous case, we could also consider the conditional probability of a person who has attained age (x) dying between ages x and x + Δx, which is
{\displaystyle P_{x}(\Delta x)=P(x<X<x+\Delta x\mid X>x)={\frac {F_{X}(x+\Delta x)-F_{X}(x)}{1-F_{X}(x)}}}
where FX(x) is the cumulative distribution function of the continuous age-at-death random variable, X. As Δx tends to zero, so does this probability in the continuous case. The approximate force of mortality is this probability divided by Δx. If we let Δx tend to zero, we get the function for force of mortality, denoted by
{\displaystyle \mu (x)}:
{\displaystyle \mu (x)=\lim _{\Delta x\rightarrow 0}{\frac {F_{X}(x+\Delta x)-F_{X}(x)}{\Delta x(1-F_{X}(x))}}={\frac {F'_{X}(x)}{1-F_{X}(x)}}}
Since fX(x)=F 'X(x) is the probability density function of X, and S(x) = 1 - FX(x) is the survival function, the force of mortality can also be expressed variously as:
{\displaystyle \mu (x)={\frac {f_{X}(x)}{1-F_{X}(x)}}=-{\frac {S'(x)}{S(x)}}=-{\frac {d}{dx}}\ln[S(x)].}
To understand conceptually how the force of mortality operates within a population, consider that at ages x where the probability density function fX(x) is zero, there is no chance of dying; thus the force of mortality at these ages is zero. The force of mortality μ(x) uniquely defines a probability density function fX(x).
The force of mortality {\displaystyle \mu (x)}
can be interpreted as the conditional density of failure at age x, while f(x) is the unconditional density of failure at age x. The unconditional density of failure at age x is the product of the probability of survival to age x, and the conditional density of failure at age x, given survival to age x.
This is expressed in symbols as
{\displaystyle \mu (x)S(x)=f_{X}(x)}
or equivalently
{\displaystyle \mu (x)={\frac {f_{X}(x)}{S(x)}}.}
In many instances, it is also desirable to determine the survival probability function when the force of mortality is known. To do this, integrate the force of mortality over the interval x to x + t
{\displaystyle \int _{x}^{x+t}\mu (y)\,dy=\int _{x}^{x+t}-{\frac {d}{dy}}\ln[S(y)]\,dy.}
By the fundamental theorem of calculus, this is simply
{\displaystyle -\int _{x}^{x+t}\mu (y)\,dy=\ln[S(x+t)]-\ln[S(x)].}
Let us denote
{\displaystyle S_{x}(t)={\frac {S(x+t)}{S(x)}},}
then, exponentiating with base e, the survival probability of an individual of age x in terms of the force of mortality is
{\displaystyle S_{x}(t)=\exp \left(-\int _{x}^{x+t}\mu (y)\,dy\,\right).}
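This relation can be checked numerically: integrate an assumed force of mortality and exponentiate. The sketch below uses a constant force of mortality with a made-up value, which must reproduce the exponential survival function derived in the example that follows.

```python
import math

def survival_from_mu(mu, x, t, steps=100_000):
    """S_x(t) = exp(-integral_{x}^{x+t} mu(y) dy), with the integral
    approximated by the trapezoidal rule."""
    h = t / steps
    total = 0.5 * (mu(x) + mu(x + t))
    for i in range(1, steps):
        total += mu(x + i * h)
    return math.exp(-total * h)

# Constant force of mortality (assumed value): mu(y) = 0.02 per year.
lam = 0.02
s = survival_from_mu(lambda y: lam, x=30.0, t=10.0)
# Closed form for a constant hazard: S_x(t) = exp(-lam * t)
```

Because a constant hazard makes the integrand flat, the trapezoidal estimate agrees with e^(−λt) essentially to machine precision.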
== Examples ==
The simplest example is when the force of mortality is constant:
{\displaystyle \mu (y)=\lambda ,}
then the survival function is
{\displaystyle S_{x}(t)=e^{-\int _{x}^{x+t}\lambda dy}=e^{-\lambda t},}
which is the exponential distribution.
When the force of mortality is
{\displaystyle \mu (y)={\frac {y^{\alpha -1}e^{-y}}{\Gamma (\alpha )-\gamma (\alpha ,y)}},}
where γ(α,y) is the lower incomplete gamma function, the probability density function is that of the Gamma distribution
{\displaystyle f(x)={\frac {x^{\alpha -1}e^{-x}}{\Gamma (\alpha )}}.}
When the force of mortality is
{\displaystyle \mu (y)=\alpha \lambda ^{\alpha }y^{\alpha -1},}
where α ≥ 0, we have
{\displaystyle \int _{x}^{x+t}\mu (y)dy=\alpha \lambda ^{\alpha }\int _{x}^{x+t}y^{\alpha -1}dy=\lambda ^{\alpha }((x+t)^{\alpha }-x^{\alpha }).}
Thus, the survival function is
{\displaystyle S_{x}(t)=e^{-\int _{x}^{x+t}\mu (y)dy}=A(x)e^{-(\lambda (x+t))^{\alpha }},}
where
{\displaystyle A(x)=e^{(\lambda x)^{\alpha }}.}
This is the survival function for the Weibull distribution. For α = 1, it is the same as the exponential distribution.
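The Weibull derivation can also be verified numerically by comparing the integrated hazard against the closed form just obtained. The parameters α and λ below are arbitrary illustrative values.

```python
import math

def survival_from_mu(mu, x, t, steps=100_000):
    """S_x(t) = exp(-integral_{x}^{x+t} mu(y) dy), trapezoidal rule."""
    h = t / steps
    total = 0.5 * (mu(x) + mu(x + t))
    for i in range(1, steps):
        total += mu(x + i * h)
    return math.exp(-total * h)

# Weibull force of mortality mu(y) = alpha * lam^alpha * y^(alpha-1),
# with assumed parameters.
alpha, lam = 2.0, 0.05
mu = lambda y: alpha * lam**alpha * y ** (alpha - 1)

x, t = 10.0, 5.0
numeric = survival_from_mu(mu, x, t)
# Closed form from the text: S_x(t) = exp(lam^alpha * (x^alpha - (x+t)^alpha))
closed = math.exp(lam**alpha * (x**alpha - (x + t) ** alpha))
```

For α = 2 the hazard is linear in y, so the trapezoidal rule integrates it exactly and the two values coincide up to rounding.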
Another famous example is when the survival model follows Gompertz–Makeham law of mortality. In this case, the force of mortality is
{\displaystyle \mu (y)=A+Bc^{y}\quad {\text{for }}y\geqslant 0.}
Using the last formula, we have
{\displaystyle \int _{x}^{x+t}(A+Bc^{y})dy=At+B(c^{x+t}-c^{x})/\ln[c].}
Then
{\displaystyle S_{x}(t)=e^{-(At+B(c^{x+t}-c^{x})/\ln[c])}=e^{-At}g^{c^{x}(c^{t}-1)}}
where
{\displaystyle g=e^{-B/\ln[c]}.}
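The two Gompertz–Makeham forms above are algebraically identical, which is easy to confirm numerically. The parameter values A, B, and c below are hypothetical, chosen only to give a plausible human-mortality-like curve.

```python
import math

# Gompertz–Makeham force of mortality mu(y) = A + B*c^y,
# with assumed illustrative parameters.
A, B, c = 0.0005, 0.00003, 1.10

def survival_gm(x, t):
    """Direct form: S_x(t) = exp(-(A*t + B*(c^(x+t) - c^x)/ln(c)))."""
    return math.exp(-(A * t + B * (c ** (x + t) - c**x) / math.log(c)))

def survival_gm_g(x, t):
    """Equivalent form: S_x(t) = e^(-A*t) * g^(c^x * (c^t - 1)),
    with g = exp(-B/ln(c))."""
    g = math.exp(-B / math.log(c))
    return math.exp(-A * t) * g ** (c**x * (c**t - 1))
```

Both forms return the same survival probability; the second isolates the age-independent constant g, which is the form traditionally tabulated in actuarial work.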
== See also ==
Failure rate
Hazard function
Actuarial present value
Actuarial science
Reliability theory
Life expectancy
== References ==
Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time, or will operate in a defined environment without failure. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
The reliability function is theoretically defined as the probability of success. In practice, it is calculated using different techniques, and its value ranges between 0 and 1, where 0 indicates no probability of success while 1 indicates definite success. This probability is estimated from detailed (physics of failure) analysis, previous data sets, or through reliability testing and reliability modeling. Availability, testability, maintainability, and maintenance are often defined as a part of "reliability engineering" in reliability programs. Reliability often plays a key role in the cost-effectiveness of systems.
Reliability engineering deals with the prediction, prevention, and management of high levels of "lifetime" engineering uncertainty and risks of failure. Although stochastic parameters define and affect reliability, reliability is not only achieved by mathematics and statistics. "Nearly all teaching and literature on the subject emphasize these aspects and ignore the reality that the ranges of uncertainty involved largely invalidate quantitative methods for prediction and measurement." For example, it is easy to represent "probability of failure" as a symbol or value in an equation, but it is almost impossible to predict its true magnitude in practice, which is massively multivariate, so having the equation for reliability does not begin to equal having an accurate predictive measurement of reliability.
Reliability engineering relates closely to Quality Engineering, safety engineering, and system safety, in that they use common methods for their analysis and may require input from each other. It can be said that a system must be reliably safe.
Reliability engineering focuses on the costs of failure caused by system downtime, cost of spares, repair equipment, personnel, and cost of warranty claims.
== History ==
The word reliability can be traced back to 1816 and is first attested in the writings of the poet Samuel Taylor Coleridge. Before World War II the term was linked mostly to repeatability; a test (in any type of science) was considered "reliable" if the same results would be obtained repeatedly. In the 1920s, product improvement through the use of statistical process control was promoted by Dr. Walter A. Shewhart at Bell Labs, around the time that Waloddi Weibull was working on statistical models for fatigue. The development of reliability engineering was here on a parallel path with quality. The modern use of the word reliability was defined by the U.S. military in the 1940s, characterizing a product that would operate when expected and for a specified period.
In World War II, many reliability issues were due to the inherent unreliability of electronic equipment available at the time, and to fatigue issues. In 1945, M.A. Miner published a seminal paper titled "Cumulative Damage in Fatigue" in an ASME journal. A main application for reliability engineering in the military was for the vacuum tube as used in radar systems and other electronics, for which reliability proved to be very problematic and costly. The IEEE formed the Reliability Society in 1948. In 1950, the United States Department of Defense formed a group called the "Advisory Group on the Reliability of Electronic Equipment" (AGREE) to investigate reliability methods for military equipment. This group recommended three main ways of working:
Improve component reliability.
Establish quality and reliability requirements for suppliers.
Collect field data and find root causes of failures.
In the 1960s, more emphasis was given to reliability testing on component and system levels. The famous military standard MIL-STD-781 was created at that time. Around this period, RCA published the much-used predecessor to Military Handbook 217, which was used for the prediction of failure rates of electronic components. The emphasis on component reliability and empirical research (e.g. Mil Std 217) alone slowly decreased. More pragmatic approaches, as used in the consumer industries, were being used. In the 1980s, televisions were increasingly made up of solid-state semiconductors. Automobiles rapidly increased their use of semiconductors with a variety of microcomputers under the hood and in the dash. Large air conditioning systems developed electronic controllers, as did microwave ovens and a variety of other appliances. Communications systems began to adopt electronics to replace older mechanical switching systems. Bellcore issued the first consumer prediction methodology for telecommunications, and SAE developed a similar document SAE870050 for automotive applications. The nature of predictions evolved during the decade, and it became apparent that die complexity wasn't the only factor that determined failure rates for integrated circuits (ICs).
Kam Wong published a paper questioning the bathtub curve—see also reliability-centered maintenance. During this decade, the failure rate of many components dropped by a factor of 10. Software became important to the reliability of systems. By the 1990s, the pace of IC development was picking up. Wider use of stand-alone microcomputers was common, and the PC market helped keep IC densities following Moore's law and doubling about every 18 months. Reliability engineering was now changing as it moved towards understanding the physics of failure. Failure rates for components kept dropping, but system-level issues became more prominent. Systems thinking has become more and more important. For software, the CMM model (Capability Maturity Model) was developed, which gave a more qualitative approach to reliability. ISO 9000 added reliability measures as part of the design and development portion of certification.
The expansion of the World Wide Web created new challenges of security and trust. The older problem of too little reliable information available had now been replaced by too much information of questionable value. Consumer reliability problems could now be discussed online in real-time using data. New technologies such as micro-electromechanical systems (MEMS), handheld GPS, and hand-held devices that combine cell phones and computers all represent challenges to maintaining reliability. Product development time continued to shorten through this decade, and what had been done in three years was being done in 18 months. This meant that reliability tools and tasks had to be more closely tied to the development process itself. In many ways, reliability has become part of everyday life and consumer expectations.
== Overview ==
Reliability is the probability of a product performing its intended function under specified operating conditions in a manner that meets or exceeds customer expectations.
=== Objective ===
The objectives of reliability engineering, in decreasing order of priority, are:
To apply engineering knowledge and specialist techniques to prevent or to reduce the likelihood or frequency of failures.
To identify and correct the causes of failures that do occur despite the efforts to prevent them.
To determine ways of coping with failures that do occur, if their causes have not been corrected.
To apply methods for estimating the likely reliability of new designs, and for analysing reliability data.
The reason for the priority emphasis is that it is by far the most effective way of working, in terms of minimizing costs and generating reliable products. The primary skills that are required, therefore, are the ability to understand and anticipate the possible causes of failures, and knowledge of how to prevent them. It is also necessary to know the methods that can be used for analyzing designs and data.
=== Scope and techniques ===
Reliability engineering for "complex systems" requires a different, more elaborate systems approach than for non-complex systems. Reliability engineering may in that case involve:
System availability and mission readiness analysis and related reliability and maintenance requirement allocation
Functional system failure analysis and derived requirements specification
Inherent (system) design reliability analysis and derived requirements specification for both hardware and software design
System diagnostics design
Fault tolerant systems (e.g. by redundancy)
Predictive and preventive maintenance (e.g. reliability-centered maintenance)
Human factors / human interaction / human errors
Manufacturing- and assembly-induced failures (effect on the detected "0-hour quality" and reliability)
Maintenance-induced failures
Transport-induced failures
Storage-induced failures
Use (load) studies, component stress analysis, and derived requirements specification
Software (systematic) failures
Failure / reliability testing (and derived requirements)
Field failure monitoring and corrective actions
Spare parts stocking (availability control)
Technical documentation, caution and warning analysis
Data and information acquisition/organisation (creation of a general reliability development hazard log and FRACAS system)
Chaos engineering
Effective reliability engineering requires understanding of the basics of failure mechanisms for which experience, broad engineering skills and good knowledge from many different special fields of engineering are required, for example:
Tribology
Stress (mechanics)
Fracture mechanics / fatigue
Thermal engineering
Fluid mechanics / shock-loading engineering
Electrical engineering
Chemical engineering (e.g. corrosion)
Material science
=== Definitions ===
Reliability may be defined in the following ways:
The idea that an item is fit for a purpose
The capacity of a designed, produced, or maintained item to perform as required
The capacity of a population of designed, produced or maintained items to perform as required
The resistance to failure of an item
The probability of an item to perform a required function under stated conditions
The durability of an object
=== Basics of a reliability assessment ===
Many engineering techniques are used in reliability risk assessments, such as reliability block diagrams, hazard analysis, failure mode and effects analysis (FMEA), fault tree analysis (FTA), Reliability Centered Maintenance, (probabilistic) load and material stress and wear calculations, (probabilistic) fatigue and creep analysis, human error analysis, manufacturing defect analysis, reliability testing, etc. These analyses must be done properly and with much attention to detail to be effective. Because of the large number of reliability techniques, their expense, and the varying degrees of reliability required for different situations, most projects develop a reliability program plan to specify the reliability tasks (statement of work (SoW) requirements) that will be performed for that specific system.
Consistent with the creation of safety cases, for example per ARP4761, the goal of reliability assessments is to provide a robust set of qualitative and quantitative evidence that the use of a component or system will not be associated with unacceptable risk. The basic steps to take are to:
Thoroughly identify relevant unreliability "hazards", e.g. potential conditions, events, human errors, failure modes, interactions, failure mechanisms, and root causes, by specific analysis or tests.
Assess the associated system risk, by specific analysis or testing.
Propose mitigation, e.g. requirements, design changes, detection logic, maintenance, and training, by which the risks may be lowered and controlled at an acceptable level.
Determine the best mitigation and get agreement on final, acceptable risk levels, possibly based on cost/benefit analysis.
The risk here is the combination of probability and severity of the failure incident (scenario) occurring. The severity can be looked at from a system safety or a system availability point of view. Reliability for safety can be thought of as a very different focus from reliability for system availability. Availability and safety can exist in dynamic tension as keeping a system too available can be unsafe. Forcing an engineering system into a safe state too quickly can force false alarms that impede the availability of the system.
In a de minimis definition, the severity of failures includes the cost of spare parts, man-hours, logistics, damage (secondary failures), and downtime of machines which may cause production loss. A more complete definition of failure also can mean injury, dismemberment, and death of people within the system (witness mine accidents, industrial accidents, space shuttle failures) and the same to innocent bystanders (witness the citizenry of cities like Bhopal, Love Canal, Chernobyl, or Sendai, and other victims of the 2011 Tōhoku earthquake and tsunami)—in this case, reliability engineering becomes system safety. What is acceptable is determined by the managing authority or customers or the affected communities. Residual risk is the risk that is left over after all reliability activities have finished, and includes the unidentified risk—and is therefore not completely quantifiable.
The complexity of the technical systems such as improvements of design and materials, planned inspections, fool-proof design, and backup redundancy decreases risk and increases the cost. The risk can be decreased to ALARA (as low as reasonably achievable) or ALAPA (as low as practically achievable) levels.
== Reliability and availability program plan ==
Implementing a reliability program is not simply a software purchase; it is not just a checklist of items that must be completed that ensure one has reliable products and processes. A reliability program is a complex learning and knowledge-based system unique to one's products and processes. It is supported by leadership, built on the skills that one develops within a team, integrated into business processes, and executed by following proven standard work practices.
A reliability program plan is used to document exactly what "best practices" (tasks, methods, tools, analysis, and tests) are required for a particular (sub)system, as well as clarify customer requirements for reliability assessment. For large-scale complex systems, the reliability program plan should be a separate document. Resource determination for manpower and budgets for testing and other tasks is critical for a successful program. In general, the amount of work required for an effective program for complex systems is large.
A reliability program plan is essential for achieving high levels of reliability, testability, maintainability, and the resulting system availability, and is developed early during system development and refined over the system's life cycle. It specifies not only what the reliability engineer does, but also the tasks performed by other stakeholders. An effective reliability program plan must be approved by top program management, which is responsible for the allocation of sufficient resources for its implementation.
A reliability program plan may also be used to evaluate and improve the availability of a system by the strategy of focusing on increasing testability & maintainability and not on reliability. Improving maintainability is generally easier than improving reliability. Maintainability estimates (repair rates) are also generally more accurate. However, because the uncertainties in the reliability estimates are in most cases very large, they are likely to dominate the availability calculation (prediction uncertainty problem), even when maintainability levels are very high. When reliability is not under control, more complicated issues may arise, like manpower (maintainers/customer service capability) shortages, spare part availability, logistic delays, lack of repair facilities, extensive retrofit and complex configuration management costs, and others. The problem of unreliability may be increased also due to the "domino effect" of maintenance-induced failures after repairs. Focusing only on maintainability is therefore not enough. If failures are prevented, none of the other issues are of any importance, and therefore reliability is generally regarded as the most important part of availability. Reliability needs to be evaluated and improved related to both availability and the total cost of ownership (TCO) due to the cost of spare parts, maintenance man-hours, transport costs, storage costs, part obsolete risks, etc. But, as GM and Toyota have belatedly discovered, TCO also includes the downstream liability costs when reliability calculations have not sufficiently or accurately addressed customers' bodily risks. Often a trade-off is needed between the two. There might be a maximum ratio between availability and cost of ownership. The testability of a system should also be addressed in the plan, as this is the link between reliability and maintainability. 
The maintenance strategy can influence the reliability of a system (e.g., by preventive and/or predictive maintenance), although it can never bring it above the inherent reliability.
The reliability plan should clearly provide a strategy for availability control. Whether only availability or also cost of ownership is more important depends on the use of the system. For example, a system that is a critical link in a production system—e.g., a big oil platform—is normally allowed to have a very high cost of ownership if that cost translates to even a minor increase in availability, as the unavailability of the platform results in a massive loss of revenue which can easily exceed the high cost of ownership. A proper reliability plan should always address RAMT analysis in its total context. RAMT stands for reliability, availability, maintainability/maintenance, and testability in the context of the customer's needs.
== Reliability requirements ==
For any system, one of the first tasks of reliability engineering is to adequately specify the reliability and maintainability requirements allocated from the overall availability needs and, more importantly, derived from proper design failure analysis or preliminary prototype test results. Clear requirements (able to be designed to) should constrain the designers from designing particular unreliable items/constructions/interfaces/systems. Setting only availability, reliability, testability, or maintainability targets (e.g., max. failure rates) is not appropriate. This is a broad misunderstanding about Reliability Requirements Engineering. Reliability requirements address the system itself, including test and assessment requirements, and associated tasks and documentation. Reliability requirements are included in the appropriate system or subsystem requirements specifications, test plans, and contract statements. The creation of proper lower-level requirements is critical.
The provision of only quantitative minimum targets (e.g., Mean Time Between Failure (MTBF) values or failure rates) is not sufficient for different reasons. One reason is that a full validation (related to correctness and verifiability in time) of a quantitative reliability allocation (requirement spec) on lower levels for complex systems can (often) not be made as a consequence of (1) the fact that the requirements are probabilistic, (2) the extremely high level of uncertainties involved for showing compliance with all these probabilistic requirements, and because (3) reliability is a function of time, and accurate estimates of a (probabilistic) reliability number per item are available only very late in the project, sometimes even after many years of in-service use. Compare this problem with the continuous (re-)balancing of, for example, lower-level-system mass requirements in the development of an aircraft, which is already often a big undertaking. Notice that in this case masses differ only by a few percent, are not a function of time, and the data are non-probabilistic and already available in CAD models. In the case of reliability, the levels of unreliability (failure rates) may change with factors of decades (multiples of 10) as a result of very minor deviations in design, process, or anything else. The information is often not available without huge uncertainties within the development phase. This makes this allocation problem almost impossible to do in a useful, practical, valid manner that does not result in massive over- or under-specification. A pragmatic approach is therefore needed, for example: the use of general levels/classes of quantitative requirements depending only on severity of failure effects. Also, the validation of results is a far more subjective task than for any other type of requirement. (Quantitative) reliability parameters, in terms of MTBF, are by far the most uncertain design parameters in any design.
Furthermore, reliability design requirements should drive a (system or part) design to incorporate features that prevent failures from occurring, or limit consequences from failure, in the first place. Besides aiding some predictions, this effort keeps the engineering work from degenerating into a kind of accounting exercise. A design requirement should be precise enough that a designer can "design to" it and can also prove, through analysis or testing, that the requirement has been achieved and, if possible, within a stated confidence. Any type of reliability requirement should be detailed and could be derived from failure analysis (finite-element stress and fatigue analysis, reliability hazard analysis, FTA, FMEA, human factor analysis, functional hazard analysis, etc.) or any type of reliability testing. Requirements are also needed for verification tests (e.g., required overload stresses) and the test time needed. To derive these requirements effectively, a systems-engineering-based risk assessment and mitigation logic should be used, and robust hazard log systems must be created that contain detailed information on why and how systems could fail or have failed. Requirements are derived and tracked in this way. These practical design requirements shall drive the design and not be used only for verification purposes. These requirements (often design constraints) are thus derived from failure analysis or preliminary tests. Understanding this difference from purely quantitative (logistic) requirement specification (e.g., a failure rate / MTBF target) is paramount in the development of successful (complex) systems.
The maintainability requirements address the costs of repairs as well as repair time. Testability (not to be confused with test requirements) requirements provide the link between reliability and maintainability and should address detectability of failure modes (on a particular system level), isolation levels, and the creation of diagnostics (procedures).
As indicated above, reliability engineers should also address requirements for various reliability tasks and documentation during system development, testing, production, and operation. These requirements are generally specified in the contract statement of work and depend on how much leeway the customer wishes to provide to the contractor. Reliability tasks include various analyses, planning, and failure reporting. Task selection depends on the criticality of the system as well as cost. A safety-critical system may require a formal failure reporting and review process throughout development, whereas a non-critical system may rely on final test reports. The most common reliability program tasks are documented in reliability program standards, such as MIL-STD-785 and IEEE 1332. Failure reporting analysis and corrective action systems are a common approach for product/process reliability monitoring.
== Reliability culture / human errors / human factors ==
In practice, most failures can be traced back to some type of human error, for example in:
Management decisions (e.g. in budgeting, timing, and required tasks)
Systems Engineering: Use studies (load cases)
Systems Engineering: Requirement analysis / setting
Systems Engineering: Configuration control
Assumptions
Calculations / simulations / FEM analysis
Design
Design drawings
Testing (e.g. incorrect load settings or failure measurement)
Statistical analysis
Manufacturing
Quality control
Maintenance
Maintenance manuals
Training
Classifying and ordering of information
Feedback of field information (e.g. incorrect or too vague)
etc.
However, humans are also very good at detecting such failures, correcting them, and improvising when abnormal situations occur. Therefore, policies that completely rule out human actions in design and production processes to improve reliability may not be effective. Some tasks are better performed by humans and some are better performed by machines.
Furthermore, human errors in management, the organization of data and information, or the misuse or abuse of items may also contribute to unreliability. This is the core reason why high levels of reliability for complex systems can only be achieved by following a robust systems engineering process with proper planning and execution of the validation and verification tasks. This also includes careful organization of data and information sharing and the creation of a "reliability culture", in the same way that having a "safety culture" is paramount in the development of safety-critical systems.
== Reliability prediction and improvement ==
Reliability prediction combines:
creation of a proper reliability model (see further on this page)
estimation (and justification) of input parameters for this model (e.g. failure rates for a particular failure mode or event and the mean time to repair the system for a particular failure)
estimation of output reliability parameters at system or part level (i.e. system availability or frequency of a particular functional failure)
The emphasis on quantification and target setting (e.g. MTBF) might imply there is a limit to achievable reliability; however, there is no inherent limit, and the development of higher reliability does not need to be more costly. Critics of this emphasis argue that prediction of reliability from historical data can be very misleading, with comparisons only valid for identical designs, products, manufacturing processes, and maintenance under identical operating loads and usage environments. Even minor changes in any of these could have major effects on reliability. Furthermore, the most unreliable and important items (i.e. the most interesting candidates for a reliability investigation) are the most likely to have been modified and re-engineered since the historical data was gathered, making the standard (re-active or pro-active) statistical methods and processes used in e.g. the medical or insurance industries less effective. Another surprising but logical argument is that to accurately predict reliability by testing, the exact mechanisms of failure must be known, and therefore, in most cases, could be prevented. Following the incorrect route of trying to quantify and solve a complex reliability engineering problem in terms of MTBF or probability using an incorrect (for example, re-active) approach is referred to by Barnard as "Playing the Numbers Game" and is regarded as bad practice.
For existing systems, it is arguable that any attempt by a responsible program to correct the root cause of discovered failures may render the initial MTBF estimate invalid, as new assumptions (themselves subject to high error levels) about the effect of this correction must be made. Another practical issue is the general unavailability of detailed failure data; where data is available, it often features inconsistent filtering of failure (feedback) data and ignores statistical errors (which are very high for rare events like reliability-related failures). Very clear guidelines must be present to count and compare failures related to different types of root causes (e.g. manufacturing-, maintenance-, transport-, system-induced, or inherent design failures). Comparing different types of causes may lead to incorrect estimations and incorrect business decisions about the focus of improvement.
Performing a proper quantitative reliability prediction for systems may be difficult and very expensive if done by testing. At the individual part level, reliability results can often be obtained with comparatively high confidence, as testing of many sample parts might be possible within the available testing budget. Unfortunately, these tests may lack validity at a system level due to the assumptions made in part-level testing. Testing parts or systems until failure, and learning from such failures to improve the system or part, is therefore emphasized. The general conclusion is that an accurate and absolute prediction of reliability, by either field-data comparison or testing, is in most cases not possible. An exception might be failures due to wear-out problems such as fatigue failures. The introduction of MIL-STD-785 states that reliability prediction should be used with great caution, if not solely for comparison in trade-off studies.
=== Design for reliability ===
Design for Reliability (DfR) is a process that encompasses tools and procedures to ensure that a product meets its reliability requirements, under its use environment, for the duration of its lifetime. DfR is implemented in the design stage of a product to proactively improve product reliability. DfR is often used as part of an overall Design for Excellence (DfX) strategy.
==== Statistics-based approach (i.e. MTBF) ====
Reliability design begins with the development of a (system) model. Reliability and availability models use block diagrams and Fault Tree Analysis to provide a graphical means of evaluating the relationships between different parts of the system. These models may incorporate predictions based on failure rates taken from historical data. While the (input data) predictions are often not accurate in an absolute sense, they are valuable to assess relative differences in design alternatives. Maintainability parameters, for example Mean time to repair (MTTR), can also be used as inputs for such models.
The most important fundamental initiating causes and failure mechanisms are to be identified and analyzed with engineering tools. A diverse set of practical guidance as to performance and reliability should be provided to designers so that they can generate low-stressed designs and products that protect, or are protected against, damage and excessive wear. Proper validation of input loads (requirements) may be needed, in addition to verification for reliability "performance" by testing.
One of the most important design techniques is redundancy: if one part of the system fails, there is an alternate success path, such as a backup system. The reason this can be the ultimate design choice is that high-confidence reliability evidence for new parts or systems is often not available, or is extremely expensive to obtain. By combining redundancy with a high level of failure monitoring and the avoidance of common-cause failures, even a system with relatively poor single-channel (part) reliability can be made highly reliable at a system level (up to mission-critical reliability), without requiring reliability testing. In conjunction with redundancy, the use of dissimilar designs or manufacturing processes (e.g. via different suppliers of similar parts) for single independent channels can reduce sensitivity to quality issues (e.g. early childhood failures at a single supplier), allowing very high levels of reliability to be achieved at all moments of the development cycle (from early life to long-term). Redundancy can also be applied in systems engineering by double-checking requirements, data, designs, calculations, software, and tests to overcome systematic failures.
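As a minimal illustration of the redundancy arithmetic (assuming independent, identical channels with no common-cause failures, an idealization the paragraph above warns about), the system reliability of n parallel channels follows directly from the single-channel reliability:

```python
def parallel_reliability(r_channel: float, n_channels: int) -> float:
    """Reliability of n redundant channels where any one channel suffices.

    Assumes channel failures are statistically independent, i.e. no
    common-cause failures; real systems rarely achieve this fully.
    """
    return 1.0 - (1.0 - r_channel) ** n_channels

# A channel with only 90% reliability, triplicated:
r_sys = parallel_reliability(0.90, 3)   # 1 - 0.1**3 ≈ 0.999
```

The example shows why redundancy is attractive: a mediocre 0.90 channel yields roughly "three nines" when triplicated, provided the independence assumption holds.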
Another effective way to deal with reliability issues is to perform analysis that predicts degradation, enabling the prevention of unscheduled downtime events / failures. RCM (Reliability Centered Maintenance) programs can be used for this.
==== Physics-of-failure-based approach ====
For electronic assemblies, there has been an increasing shift towards a different approach called physics of failure. This technique relies on understanding the physical static and dynamic failure mechanisms. It accounts for variation in load, strength, and stress that lead to failure with a high level of detail, made possible with the use of modern finite element method (FEM) software programs that can handle complex geometries and mechanisms such as creep, stress relaxation, fatigue, and probabilistic design (Monte Carlo Methods/DOE). The material or component can be re-designed to reduce the probability of failure and to make it more robust against such variations. Another common design technique is component derating: i.e. selecting components whose specifications significantly exceed the expected stress levels, such as using heavier gauge electrical wire than might normally be specified for the expected electric current.
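The derating rule described above can be sketched as a simple check; the 50% derating factor used here is an illustrative policy value, not a universal standard:

```python
def derated_ok(applied: float, rated: float, derating_factor: float = 0.5) -> bool:
    """Check that the applied stress stays below the derated rating.

    derating_factor is the fraction of the rating allowed in service
    (0.5 means the part is run at no more than 50% of its rated value).
    The appropriate factor depends on the component type and policy.
    """
    return applied <= derating_factor * rated

# A wire rated for 10 A, under a 50% derating policy:
derated_ok(4.0, 10.0)   # within the 5 A derated limit
derated_ok(6.0, 10.0)   # exceeds the 5 A derated limit
```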
==== Common tools and techniques ====
Many of the tasks, techniques, and analyses used in Reliability Engineering are specific to particular industries and applications, but can commonly include:
Physics of failure (PoF)
Built-in self-test (BIT or BIST) (testability analysis)
Failure mode and effects analysis (FMEA)
Reliability hazard analysis
Reliability block-diagram analysis
Dynamic reliability block-diagram analysis
Fault tree analysis
Root cause analysis
Statistical engineering, design of experiments – e.g. on simulations / FEM models or with testing
Sneak circuit analysis
Accelerated testing
Reliability growth analysis (re-active reliability)
Weibull analysis (for testing or mainly "re-active" reliability)
Hypertabastic survival models
Thermal analysis by finite element analysis (FEA) and / or measurement
Thermal induced, shock and vibration fatigue analysis by FEA and / or measurement
Electromagnetic analysis
Avoidance of single point of failure (SPOF)
Functional analysis and functional failure analysis (e.g., function FMEA, FHA or FFA)
Predictive and preventive maintenance: reliability centered maintenance (RCM) analysis
Testability analysis
Failure diagnostics analysis (normally also incorporated in FMEA)
Human error analysis
Operational hazard analysis
Preventative/Planned Maintenance Optimization (PMO)
Manual screening
Integrated logistics support
Results from these methods are presented during reviews of part or system design, and logistics. Reliability is just one requirement among many for a complex part or system. Engineering trade-off studies are used to determine the optimum balance between reliability requirements and other constraints.
=== The importance of language ===
Reliability engineers, whether using quantitative or qualitative methods to describe a failure or hazard, rely on language to pinpoint the risks and enable issues to be solved. The language used must help create an orderly description of the function/item/system and its complex surroundings as it relates to the failure of these functions/items/systems. Systems engineering is very much about finding the correct words to describe the problem (and related risks), so that they can be readily solved via engineering solutions. Jack Ring said that a systems engineer's job is to "language the project." (Ring et al. 2000) For part/system failures, reliability engineers should concentrate more on the "why and how", rather than on predicting "when". Understanding "why" a failure has occurred (e.g. due to over-stressed components or manufacturing issues) is far more likely to lead to improvement in the designs and processes used than quantifying "when" a failure is likely to occur (e.g. via determining MTBF). To do this, first the reliability hazards relating to the part/system need to be classified and ordered (based on some form of qualitative and quantitative logic if possible) to allow for more efficient assessment and eventual improvement. This is partly done in pure language and proposition logic, but also based on experience with similar items. This can for example be seen in descriptions of events in fault tree analysis, FMEA analysis, and hazard (tracking) logs. In this sense language and proper grammar (part of qualitative analysis) play an important role in reliability engineering, just as they do in safety engineering or, in general, within systems engineering.
Correct use of language can also be key to identifying or reducing the risks of human error, which are often the root cause of many failures. This can include proper instructions in maintenance manuals, operation manuals, emergency procedures, and others to prevent systematic human errors that may result in system failures. These should be written by trained or experienced technical authors using so-called simplified English or Simplified Technical English, where words and structure are specifically chosen and created so as to reduce ambiguity or risk of confusion (e.g. "replace the old part" could ambiguously refer to swapping a worn-out part with a non-worn-out part, or replacing a part with one using a more recent and hopefully improved design).
== Reliability modeling ==
Reliability modeling is the process of predicting or understanding the reliability of a component or system prior to its implementation. Two types of analysis that are often used to model a complete system's availability behavior including effects from logistics issues like spare part provisioning, transport and manpower are fault tree analysis and reliability block diagrams. At a component level, the same types of analyses can be used together with others. The input for the models can come from many sources including testing; prior operational experience; field data; as well as data handbooks from similar or related industries. Regardless of source, all model input data must be used with great caution, as predictions are only valid in cases where the same product was used in the same context. As such, predictions are often only used to help compare alternatives.
For part level predictions, two separate fields of investigation are common:
The physics of failure approach uses an understanding of physical failure mechanisms involved, such as mechanical crack propagation or chemical corrosion degradation or failure;
The parts stress modelling approach is an empirical method for prediction based on counting the number and type of components of the system, and the stress they undergo during operation.
=== Reliability theory ===
Reliability is defined as the probability that a device will perform its intended function during a specified period of time under stated conditions. Mathematically, this may be expressed as,
{\displaystyle R(t)=\Pr\{T>t\}=\int _{t}^{\infty }f(x)\,dx,}
where f(x) is the failure probability density function and t is the length of the period of time (which is assumed to start from time zero).
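As a sketch of this definition, the reliability function can be evaluated numerically for an assumed exponential failure density (constant failure rate λ) and compared with the known closed form R(t) = e^(−λt):

```python
import math

def reliability(t, pdf, upper=1e3, steps=100_000):
    """R(t) = ∫_t^∞ f(x) dx, approximated by the trapezoidal rule.

    `upper` truncates the improper integral; it must be chosen large
    enough that the tail of the pdf beyond it is negligible.
    """
    h = (upper - t) / steps
    xs = [t + i * h for i in range(steps + 1)]
    ys = [pdf(x) for x in xs]
    return h * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])

lam = 0.01                               # assumed constant failure rate, 1/hour
f = lambda x: lam * math.exp(-lam * x)   # exponential failure density
r = reliability(100.0, f)                # ≈ exp(-lam * 100) ≈ 0.368
```

The exponential density is only one possible model (the constant-failure-rate case); other densities, e.g. Weibull, plug into the same integral.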
There are a few key elements of this definition:
Reliability is predicated on "intended function": generally, this is taken to mean operation without failure. However, even if no individual part of the system fails, but the system as a whole does not do what was intended, the event is still charged against the system reliability. The system requirements specification is the criterion against which reliability is measured.
Reliability applies to a specified period of time. In practical terms, this means that a system has a specified chance that it will operate without failure before time T. Reliability engineering ensures that components and materials will meet the requirements during the specified time. Note that units other than time may sometimes be used (e.g. "a mission", "operation cycles").
Reliability is restricted to operation under stated (or explicitly defined) conditions. This constraint is necessary because it is impossible to design a system for unlimited conditions. A Mars rover will have different specified conditions than a family car. The operating environment must be addressed during design and testing. That same rover may be required to operate in varying conditions requiring additional scrutiny.
Two notable references on reliability theory and its mathematical and statistical foundations are Barlow, R. E. and Proschan, F. (1982) and Samaniego, F. J. (2007).
=== Quantitative system reliability parameters—theory ===
Quantitative requirements are specified using reliability parameters. The most common reliability parameter is the mean time to failure (MTTF), which can also be specified as the failure rate (expressed as a frequency or conditional probability density function (PDF)) or the number of failures during a given period. These parameters may be useful for higher system levels and systems that are operated frequently (e.g. vehicles, machinery, and electronic equipment). Reliability increases as the MTTF increases. The MTTF is usually specified in hours, but can also be used with other units of measurement, such as miles or cycles. Using MTTF values at lower system levels can be very misleading, especially if they do not specify the associated failure modes and mechanisms (the "F" in MTTF).
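A minimal sketch of the usual MTTF point estimate, using hypothetical fleet data; it is valid only under a constant-failure-rate assumption and, as the paragraph above cautions, says nothing about the underlying failure modes:

```python
def mttf_point_estimate(total_operating_hours: float, n_failures: int) -> float:
    """Point estimate of MTTF: cumulative operating time / number of failures.

    Assumes a constant failure rate (exponential life model); with zero
    failures, a confidence-bound method must be used instead.
    """
    if n_failures == 0:
        raise ValueError("no failures observed; use a confidence bound instead")
    return total_operating_hours / n_failures

fleet_hours = 50 * 2_000                     # hypothetical: 50 units, 2,000 h each
mttf = mttf_point_estimate(fleet_hours, 4)   # 25,000 h
failure_rate = 1.0 / mttf                    # failures per hour
```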
In other cases, reliability is specified as the probability of mission success. For example, reliability of a scheduled aircraft flight can be specified as a dimensionless probability or a percentage, as often used in system safety engineering.
A special case of mission success is the single-shot device or system. These are devices or systems that remain relatively dormant and only operate once. Examples include automobile airbags, thermal batteries and missiles. Single-shot reliability is specified as a probability of one-time success or is subsumed into a related parameter. Single-shot missile reliability may be specified as a requirement for the probability of a hit. For such systems, the probability of failure on demand (PFD) is the reliability measure – this is actually an "unavailability" number. The PFD is derived from failure rate (a frequency of occurrence) and mission time for non-repairable systems.
For repairable systems, it is obtained from failure rate, mean-time-to-repair (MTTR), and test interval. This measure may not be unique for a given system as this measure depends on the kind of demand. In addition to system level requirements, reliability requirements may be specified for critical subsystems. In most cases, reliability parameters are specified with appropriate statistical confidence intervals.
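For a periodically proof-tested, repairable single channel, a commonly used first-order approximation of the average PFD is λ·τ/2, where τ is the proof-test interval; a sketch with hypothetical numbers:

```python
def pfd_average(failure_rate: float, test_interval: float) -> float:
    """Average probability of failure on demand, PFD_avg ≈ λ·τ/2.

    First-order approximation, valid for λ·τ << 1; repair time is
    assumed negligible compared with the test interval τ.
    """
    return failure_rate * test_interval / 2.0

# Dangerous failure rate 1e-6 per hour, proof test once a year (8760 h):
pfd = pfd_average(1e-6, 8760.0)   # ≈ 4.4e-3
```

Because PFD is an "unavailability" number, a smaller value is better; shortening the test interval τ directly reduces it.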
== Reliability testing ==
The purpose of reliability testing (or reliability verification) is to discover potential problems with the design as early as possible and, ultimately, provide confidence that the system meets its reliability requirements. The reliability of the product in all environments, such as expected use, transportation, or storage during the specified lifespan, should be considered. Reliability testing exposes the product to natural or artificial environmental conditions in order to evaluate its performance under the conditions of actual use, transportation, and storage, and to analyze the degree of influence of environmental factors and their mechanisms of action. Environmental test equipment is used to simulate high temperature, low temperature, high humidity, and temperature changes in the climatic environment, accelerating the product's response to its use environment, to verify whether it reaches the quality expected from R&D, design, and manufacturing.
Reliability verification, also called reliability testing, uses modeling, statistics, and other methods to evaluate the reliability of a product based on its life span and expected performance. Most products on the market require reliability testing, for example automobiles, integrated circuits, heavy machinery used to mine natural resources, and aircraft automation software.
Reliability testing may be performed at several levels and there are different types of testing. Complex systems may be tested at component, circuit board, unit, assembly, subsystem and system levels.
(The test level nomenclature varies among applications.) For example, performing environmental stress screening tests at lower levels, such as piece parts or small assemblies, catches problems before they cause failures at higher levels. Testing proceeds during each level of integration through full-up system testing, developmental testing, and operational testing, thereby reducing program risk. However, testing does not mitigate unreliability risk.
With each test both statistical type I and type II errors could be made, depending on sample size, test time, assumptions and the needed discrimination ratio. There is risk of incorrectly rejecting a good design (type I error) and the risk of incorrectly accepting a bad design (type II error).
It is not always feasible to test all system requirements. Some systems are prohibitively expensive to test; some failure modes may take years to observe; some complex interactions result in a huge number of possible test cases; and some tests require the use of limited test ranges or other resources. In such cases, different approaches to testing can be used, such as (highly) accelerated life testing, design of experiments, and simulations.
The desired level of statistical confidence also plays a role in reliability testing. Statistical confidence is increased by increasing either the test time or the number of items tested. Reliability test plans are designed to achieve the specified reliability at the specified confidence level with the minimum number of test units and test time. Different test plans result in different levels of risk to the producer and consumer. The desired reliability, statistical confidence, and risk levels for each side influence the ultimate test plan. The customer and developer should agree in advance on how reliability requirements will be tested.
A key aspect of reliability testing is to define "failure". Although this may seem obvious, there are many situations where it is not clear whether a failure is really the fault of the system. Variations in test conditions, operator differences, weather and unexpected situations create differences between the customer and the system developer. One strategy to address this issue is to use a scoring conference process. A scoring conference includes representatives from the customer, the developer, the test organization, the reliability organization, and sometimes independent observers. The scoring conference process is defined in the statement of work. Each test case is considered by the group and "scored" as a success or failure. This scoring is the official result used by the reliability engineer.
As part of the requirements phase, the reliability engineer develops a test strategy with the customer. The test strategy makes trade-offs between the needs of the reliability organization, which wants as much data as possible, and constraints such as cost, schedule and available resources. Test plans and procedures are developed for each reliability test, and results are documented.
Reliability testing is common in the Photonics industry. Examples of reliability tests of lasers are life test and burn-in. These tests consist of the highly accelerated aging, under controlled conditions, of a group of lasers. The data collected from these life tests are used to predict laser life expectancy under the intended operating characteristics.
=== Reliability test requirements ===
The criteria to test depend on the product or process being tested; most commonly, five components are addressed:
Product life span
Intended function
Operating Condition
Probability of Performance
User expectations
The product life span can be divided into different periods for analysis. Useful life is the estimated economic life of the product, defined as the time it can be used before the cost of repair no longer justifies its continued use. Warranty life is the period within which the product should perform its function. Design life is the lifetime considered during the design of the product, where the designer takes into account the lifetimes of competitive products and customer desires, and ensures that the product does not result in customer dissatisfaction.
Reliability test requirements can follow from any analysis for which the first estimate of failure probability, failure mode or effect needs to be justified. Evidence can be generated with some level of confidence by testing. With software-based systems, the probability is a mix of software and hardware-based failures. Testing reliability requirements is problematic for several reasons. A single test is in most cases insufficient to generate enough statistical data. Multiple tests or long-duration tests are usually very expensive. Some tests are simply impractical, and environmental conditions can be hard to predict over a systems life-cycle.
Reliability engineering is used to design a realistic and affordable test program that provides empirical evidence that the system meets its reliability requirements. Statistical confidence levels are used to address some of these concerns. A certain parameter is expressed along with a corresponding confidence level: for example, an MTBF of 1000 hours at 90% confidence level. From this specification, the reliability engineer can, for example, design a test with explicit criteria for the number of hours and number of failures until the requirement is met or failed. Different sorts of tests are possible.
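One common test design of this kind is the zero-failure (success-run) demonstration under an exponential life assumption: the cumulative test time needed to demonstrate a required MTBF at a given confidence level, with no failures allowed, is T = MTBF · (−ln(1 − CL)). A sketch:

```python
import math

def zero_failure_test_hours(mtbf_required: float, confidence: float) -> float:
    """Cumulative test hours to demonstrate an MTBF at a confidence level,
    assuming an exponential life model and zero failures during the test.

    T = MTBF * (-ln(1 - CL)); allowing failures requires the more general
    chi-squared formulation instead.
    """
    return mtbf_required * -math.log(1.0 - confidence)

# Demonstrating 1000 h MTBF at 90% confidence:
hours = zero_failure_test_hours(1000.0, 0.90)   # ≈ 2303 cumulative test hours
```

The test hours can be spread over several units (e.g. 10 units for about 230 h each), which is one of the trade-offs a test plan makes between test time and sample size.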
The combination of required reliability level and required confidence level greatly affects the development cost and the risk to both the customer and producer. Care is needed to select the best combination of requirements—e.g. cost-effectiveness. Reliability testing may be performed at various levels, such as component, subsystem and system. Also, many factors must be addressed during testing and operation, such as extreme temperature and humidity, shock, vibration, or other environmental factors (like loss of signal, cooling or power; or other catastrophes such as fire, floods, excessive heat, physical or security violations or other myriad forms of damage or degradation). For systems that must last many years, accelerated life tests may be needed.
=== Testing method ===
A systematic approach to reliability testing is to first determine the reliability goal, then perform tests that are linked to performance in order to determine the reliability of the product. Reliability verification tests in modern industries should clearly establish how they relate to the product's overall reliability performance and how individual tests impact warranty cost and customer satisfaction.
=== Accelerated testing ===
The purpose of accelerated life testing (ALT test) is to induce field failure in the laboratory at a much faster rate by providing a harsher, but nonetheless representative, environment. In such a test, the product is expected to fail in the lab just as it would have failed in the field—but in much less time.
The main objective of an accelerated test is either of the following:
To discover failure modes
To predict the normal field life from the high stress lab life
An accelerated testing program can be broken down into the following steps:
Define objective and scope of the test
Collect required information about the product
Identify the stress(es)
Determine level of stress(es)
Conduct the accelerated test and analyze the collected data.
Common ways to determine a life stress relationship are:
Arrhenius model
Eyring model
Inverse power law model
Temperature–humidity model
Temperature non-thermal model
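As an example of the Arrhenius model from the list above, the acceleration factor between a use temperature and a stress temperature can be computed as follows; the 0.7 eV activation energy is an assumed, mechanism-specific value, not a universal constant:

```python
import math

BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K

def arrhenius_af(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Arrhenius acceleration factor between use and stress temperatures.

    AF = exp( (Ea / k) * (1/T_use - 1/T_stress) ), with temperatures
    converted to kelvin. Ea (activation energy, eV) depends on the
    failure mechanism and must be justified for the product at hand.
    """
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))

# Assumed Ea = 0.7 eV, 55 °C use vs. 125 °C stress:
af = arrhenius_af(0.7, 55.0, 125.0)   # each stress hour ≈ `af` field hours
```

With these assumed numbers the acceleration factor is on the order of tens, which is why modest temperature increases can compress years of field life into weeks of testing.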
== Software reliability ==
Software reliability is a special aspect of reliability engineering. It focuses on foundations and techniques to make software more reliable, i.e., resilient to faults. System reliability, by definition, includes all parts of the system, including hardware, software, supporting infrastructure (including critical external interfaces), operators and procedures. Traditionally, reliability engineering focuses on critical hardware parts of the system. Since the widespread use of digital integrated circuit technology, software has become an increasingly critical part of most electronics and, hence, nearly all present day systems. Therefore, software reliability has gained prominence within the field of system reliability.
There are significant differences, however, in how software and hardware behave.
Most hardware unreliability is the result of a component or material failure that results in the system not performing its intended function. Repairing or replacing the hardware component restores the system to its original operating state.
However, software does not fail in the same sense that hardware fails. Instead, software unreliability is the result of unanticipated results of software operations. Even relatively small software programs can have astronomically large combinations of inputs and states that are infeasible to exhaustively test. Restoring software to its original state only works until the same combination of inputs and states results in the same unintended result. Software reliability engineering must take this into account.
Despite this difference in the source of failure between software and hardware, several software reliability models based on statistics have been proposed to quantify what we experience with software: the longer software is run, the higher the probability that it will eventually be used in an untested manner and exhibit a latent defect that results in a failure (Shooman 1987), (Musa 2005), (Denney 2005).
As with hardware, software reliability depends on good requirements, design and implementation. Software reliability engineering relies heavily on a disciplined software engineering process to anticipate and design against unintended consequences. There is more overlap between software quality engineering and software reliability engineering than between hardware quality and reliability. A good software development plan is a key aspect of the software reliability program. The software development plan describes the design and coding standards, peer reviews, unit tests, configuration management, software metrics and software models to be used during software development.
A common reliability metric is the number of software faults per line of code (FLOC), usually expressed as faults per thousand lines of code. This metric, along with software execution time, is key to most software reliability models and estimates. The theory is that the software reliability increases as the number of faults (or fault density) decreases. Establishing a direct connection between fault density and mean-time-between-failure is difficult, however, because of the way software faults are distributed in the code, their severity, and the probability of the combination of inputs necessary to encounter the fault. Nevertheless, fault density serves as a useful indicator for the reliability engineer. Other software metrics, such as complexity, are also used. This metric remains controversial, since changes in software development and verification practices can have dramatic impact on overall defect rates.
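The fault-density metric described above is a simple ratio; a short sketch of the computation, with a hypothetical function name and example figures:

```python
def fault_density_per_kloc(fault_count, lines_of_code):
    """Software faults per thousand lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return fault_count * 1000.0 / lines_of_code

# Illustrative example: 42 faults found in a 60,000-line code base.
density = fault_density_per_kloc(42, 60_000)  # 0.7 faults/KLOC
```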
Software testing is an important aspect of software reliability. Even the best software development process results in some software faults that are nearly undetectable until tested. Software is tested at several levels, starting with individual units, through integration and full-up system testing. During all phases of testing, software faults are discovered, corrected, and re-tested. Reliability estimates are updated based on the fault density and other metrics. At a system level, mean-time-between-failure data can be collected and used to estimate reliability. Unlike hardware, performing exactly the same test on exactly the same software configuration does not provide increased statistical confidence. Instead, software reliability uses different metrics, such as code coverage.
The Software Engineering Institute's capability maturity model is a common means of assessing the overall software development process for reliability and quality purposes.
== Structural reliability ==
Structural reliability or the reliability of structures is the application of reliability theory to the behavior of structures. It is used in both the design and maintenance of different types of structures including concrete and steel structures. In structural reliability studies both loads and resistances are modeled as probabilistic variables. Using this approach the probability of failure of a structure is calculated.
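The load/resistance formulation above can be illustrated with a Monte Carlo sketch. The normal distributions and their parameters below are illustrative assumptions, not values from any design code:

```python
import random

def estimate_failure_probability(n_trials=100_000, seed=1):
    """Monte Carlo estimate of P(load > resistance).

    Illustrative assumption: resistance ~ N(50, 5) kN, load ~ N(30, 8) kN,
    both independent. Failure occurs when the load exceeds the resistance.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        resistance = rng.gauss(50.0, 5.0)
        load = rng.gauss(30.0, 8.0)
        if load > resistance:
            failures += 1
    return failures / n_trials
```

For these parameters the margin load − resistance is N(−20, √89), so the exact failure probability is about 1.7%; the simulation should land near that value.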
== Comparison to safety engineering ==
Reliability for safety and reliability for availability are often closely related. Lost availability of an engineering system can cost money. If a subway system is unavailable the subway operator will lose money for each hour the system is down. The subway operator will lose more money if safety is compromised. The definition of reliability is tied to a probability of not encountering a failure. A failure can cause loss of safety, loss of availability or both. It is undesirable to lose safety or availability in a critical system.
Reliability engineering is concerned with overall minimisation of failures that could lead to financial losses for the responsible entity, whereas safety engineering focuses on minimising a specific set of failure types that in general could lead to loss of life, injury or damage to equipment.
Reliability hazards could transform into incidents leading to a loss of revenue for the company or the customer, for example due to direct and indirect costs associated with: loss of production due to system unavailability; unexpected high or low demands for spares; repair costs; man-hours; re-designs or interruptions to normal production.
Safety engineering is often highly specific, relating only to certain tightly regulated industries, applications, or areas. It primarily focuses on system safety hazards that could lead to severe accidents including: loss of life; destruction of equipment; or environmental damage. As such, the related system functional reliability requirements are often extremely high. Although it deals with unwanted failures in the same sense as reliability engineering, it, however, has less of a focus on direct costs, and is not concerned with post-failure repair actions. Another difference is the level of impact of failures on society, leading to a tendency for strict control by governments or regulatory bodies (e.g. nuclear, aerospace, defense, rail and oil industries).
=== Fault tolerance ===
Safety can be increased using a 2oo2 cross checked redundant system. Availability can be increased by using "1oo2" (1 out of 2) redundancy at a part or system level. If both redundant elements disagree the more permissive element will maximize availability. A 1oo2 system should never be relied on for safety. Fault-tolerant systems often rely on additional redundancy (e.g. 2oo3 voting logic) where multiple redundant elements must agree on a potentially unsafe action before it is performed. This increases both availability and safety at a system level. This is common practice in aerospace systems that need continued availability and do not have a fail-safe mode. For example, aircraft may use triple modular redundancy for flight computers and control surfaces (including occasionally different modes of operation e.g. electrical/mechanical/hydraulic) as these need to always be operational, due to the fact that there are no "safe" default positions for control surfaces such as rudders or ailerons when the aircraft is flying.
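The 2oo3 voting logic described above can be sketched as a majority vote over three channel outputs, together with the standard reliability formula for three independent, identical channels (independence is an assumption):

```python
def vote_2oo3(a, b, c):
    """Majority vote of three boolean channel outputs (2-out-of-3)."""
    return (a and b) or (a and c) or (b and c)

def reliability_2oo3(r):
    """Probability a 2oo3 system works, given channel reliability r and
    independent channels: R = 3*r^2 - 2*r^3 (at least 2 of 3 working)."""
    return 3 * r**2 - 2 * r**3
```

For r = 0.9 this gives R = 0.972, higher than a single channel; for very low r the voting system is actually worse, which is why channel reliability still matters.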
=== Basic reliability and mission reliability ===
The above example of a 2oo3 fault tolerant system increases both mission reliability as well as safety. However, the "basic" reliability of the system will in this case still be lower than a non-redundant (1oo1) or 2oo2 system. Basic reliability engineering covers all failures, including those that might not result in system failure, but do result in additional cost due to: maintenance repair actions; logistics; spare parts etc. For example, replacement or repair of 1 faulty channel in a 2oo3 voting system, (the system is still operating, although with one failed channel it has actually become a 2oo2 system) is contributing to basic unreliability but not mission unreliability. As an example, the failure of the tail-light of an aircraft will not prevent the plane from flying (and so is not considered a mission failure), but it does need to be remedied (with a related cost, and so does contribute to the basic unreliability levels).
=== Detectability and common cause failures ===
When using fault tolerant (redundant) systems or systems that are equipped with protection functions, detectability of failures and avoidance of common cause failures becomes paramount for safe functioning and/or mission reliability.
== Reliability versus quality (Six Sigma) ==
Quality often focuses on manufacturing defects during the warranty phase. Reliability looks at the failure intensity over the whole life of a product or engineering system from commissioning to decommissioning. Six Sigma has its roots in statistical control in quality of manufacturing. Reliability engineering is a specialty part of systems engineering. The systems engineering process is a discovery process that is often unlike a manufacturing process. A manufacturing process is often focused on repetitive activities that achieve high quality outputs with minimum cost and time.
The everyday usage term "quality of a product" is loosely taken to mean its inherent degree of excellence. In industry, a more precise definition of quality as "conformance to requirements or specifications at the start of use" is used. Assuming the final product specification adequately captures the original requirements and customer/system needs, the quality level can be measured as the fraction of product units shipped that meet specifications. Manufactured goods quality often focuses on the number of warranty claims during the warranty period.
Quality is a snapshot at the start of life through the warranty period and is related to the control of lower-level product specifications. This includes time-zero defects i.e. where manufacturing mistakes escaped final Quality Control. In theory the quality level might be described by a single fraction of defective products. Reliability, as a part of systems engineering, acts as more of an ongoing assessment of failure rates over many years. Theoretically, all items will fail over an infinite period of time. Defects that appear over time are referred to as reliability fallout. To describe reliability fallout a probability model that describes the fraction fallout over time is needed. This is known as the life distribution model. Some of these reliability issues may be due to inherent design issues, which may exist even though the product conforms to specifications. Even items that are produced perfectly will fail over time due to one or more failure mechanisms (e.g. due to human error or mechanical, electrical, and chemical factors). These reliability issues can also be influenced by acceptable levels of variation during initial production.
Quality and reliability are, therefore, related to manufacturing. Reliability is more targeted towards clients who are focused on failures throughout the whole life of the product such as the military, airlines or railroads. Items that do not conform to product specification will generally do worse in terms of reliability (having a lower MTTF), but this does not always have to be the case. The full mathematical quantification (in statistical models) of this combined relation is in general very difficult or even practically impossible. In cases where manufacturing variances can be effectively reduced, six sigma tools have been shown to be useful to find optimal process solutions which can increase quality and reliability. Six Sigma may also help to design products that are more robust to manufacturing induced failures and infant mortality defects in engineering systems and manufactured product.
In contrast with Six Sigma, reliability engineering solutions are generally found by focusing on reliability testing and system design. Solutions are found in different ways, such as by simplifying a system to allow more of the mechanisms of failure involved to be understood; performing detailed calculations of material stress levels allowing suitable safety factors to be determined; finding possible abnormal system load conditions and using this to increase robustness of a design to manufacturing variance related failure mechanisms. Furthermore, reliability engineering uses system-level solutions, like designing redundant and fault-tolerant systems for situations with high availability needs (see Reliability engineering vs Safety engineering above).
Note: A "defect" in six-sigma/quality literature is not the same as a "failure" (a field failure, e.g. a fractured item) in reliability. A six-sigma/quality defect refers generally to non-conformance with a requirement (e.g. basic functionality or a key dimension). Items can, however, fail over time, even if these requirements are all fulfilled. Quality is generally not concerned with asking the crucial question "are the requirements actually correct?", whereas reliability is.
== Reliability operational assessment ==
Once systems or parts are being produced, reliability engineering attempts to monitor, assess, and correct deficiencies. Monitoring includes electronic and visual surveillance of critical parameters identified during the fault tree analysis design stage. Data collection is highly dependent on the nature of the system. Most large organizations have quality control groups that collect failure data on vehicles, equipment and machinery. Consumer product failures are often tracked by the number of returns. For systems in dormant storage or on standby, it is necessary to establish a formal surveillance program to inspect and test random samples. Any changes to the system, such as field upgrades or recall repairs, require additional reliability testing to ensure the reliability of the modification. Since it is not possible to anticipate all the failure modes of a given system, especially ones with a human element, failures will occur. The reliability program also includes a systematic root cause analysis that identifies the causal relationships involved in the failure such that effective corrective actions may be implemented. When possible, system failures and corrective actions are reported to the reliability engineering organization.
Some of the most common methods to apply to a reliability operational assessment are failure reporting, analysis, and corrective action systems (FRACAS). This systematic approach develops a reliability, safety, and logistics assessment based on failure/incident reporting, management, analysis, and corrective/preventive actions. Organizations today are adopting this method and utilizing commercial systems (such as Web-based FRACAS applications) that enable them to create a failure/incident data repository from which statistics can be derived to view accurate and genuine reliability, safety, and quality metrics.
It is extremely important for an organization to adopt a common FRACAS system for all end items. Also, it should allow test results to be captured in a practical way. Failure to adopt one easy-to-use (in terms of ease of data-entry for field engineers and repair shop engineers) and easy-to-maintain integrated system is likely to result in a failure of the FRACAS program itself.
Some of the common outputs from a FRACAS system include Field MTBF, MTTR, spares consumption, reliability growth, failure/incidents distribution by type, location, part no., serial no., and symptom.
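Field MTBF and MTTR of the kind a FRACAS system reports reduce to simple ratios over the collected records; a sketch with a hypothetical function name and illustrative figures:

```python
def mtbf_mttr(total_operating_hours, total_repair_hours, failure_count):
    """Field MTBF and MTTR from aggregated failure/repair records."""
    if failure_count == 0:
        raise ValueError("no failures recorded")
    mtbf = total_operating_hours / failure_count  # mean time between failures
    mttr = total_repair_hours / failure_count     # mean time to repair
    return mtbf, mttr

# Illustrative example: 8,760 operating hours, 12 repair hours, 4 failures.
mtbf, mttr = mtbf_mttr(8_760.0, 12.0, 4)  # 2,190 h MTBF, 3 h MTTR
```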
The use of past data to predict the reliability of new comparable systems/items can be misleading as reliability is a function of the context of use and can be affected by small changes in design/manufacturing.
== Reliability organizations ==
Systems of any significant complexity are developed by organizations of people, such as a commercial company or a government agency. The reliability engineering organization must be consistent with the company's organizational structure. For small, non-critical systems, reliability engineering may be informal. As complexity grows, the need arises for a formal reliability function. Because reliability is important to the customer, the customer may even specify certain aspects of the reliability organization.
There are several common types of reliability organizations. The project manager or chief engineer may employ one or more reliability engineers directly. In larger organizations, there is usually a product assurance or specialty engineering organization, which may include reliability, maintainability, quality, safety, human factors, logistics, etc. In such case, the reliability engineer reports to the product assurance manager or specialty engineering manager.
In some cases, a company may wish to establish an independent reliability organization. This is desirable to ensure that the system reliability, which is often expensive and time-consuming, is not unduly slighted due to budget and schedule pressures. In such cases, the reliability engineer works for the project day-to-day, but is actually employed and paid by a separate organization within the company.
Because reliability engineering is critical to early system design, it has become common for reliability engineers, however the organization is structured, to work as part of an integrated product team.
== Education ==
Some universities offer graduate degrees in reliability engineering. Other reliability professionals typically have a physics degree from a university or college program. Many engineering programs offer reliability courses, and some universities have entire reliability engineering programs. A reliability engineer may be registered as a professional engineer by the state or province, but not all reliability professionals are engineers. Reliability engineers are required in systems where public safety is at risk. There are many professional conferences and industry training programs available for reliability engineers. Several professional organizations exist for reliability engineers, including the American Society for Quality Reliability Division (ASQ-RD), the IEEE Reliability Society, the American Society for Quality (ASQ), and the Society of Reliability Engineers (SRE).
== See also ==
== References ==
N. Diaz, R. Pascual, F. Ruggeri, E. López Droguett (2017). "Modeling age replacement policy under multiple time scales and stochastic usage profiles". International Journal of Production Economics. 188: 22–28. doi:10.1016/j.ijpe.2017.03.009.{{cite journal}}: CS1 maint: multiple names: authors list (link)
== Further reading ==
Barlow, R. E. and Proschan, F. (1981) Statistical Theory of Reliability and Life Testing, To Begin With Press, Silver Spring, MD.
Blanchard, Benjamin S. (1992), Logistics Engineering and Management (Fourth Ed.), Prentice-Hall, Inc., Englewood Cliffs, New Jersey.
Breitler, Alan L. and Sloan, C. (2005), Proceedings of the American Institute of Aeronautics and Astronautics (AIAA) Air Force T&E Days Conference, Nashville, TN, December, 2005: System Reliability Prediction: towards a General Approach Using a Neural Network.
Ebeling, Charles E., (1997), An Introduction to Reliability and Maintainability Engineering, McGraw-Hill Companies, Inc., Boston.
Denney, Richard (2005) Succeeding with Use Cases: Working Smart to Deliver Quality. Addison-Wesley Professional Publishing. ISBN. Discusses the use of software reliability engineering in use case driven software development.
Gano, Dean L. (2007), "Apollo Root Cause Analysis" (Third Edition), Apollonian Publications, LLC., Richland, Washington
Holmes, Oliver Wendell Sr. The Deacon's Masterpiece
Horsburgh, Peter (2018), "5 Habits of an Extraordinary Reliability Engineer", Reliability Web
Kapur, K.C., and Lamberson, L.R., (1977), Reliability in Engineering Design, John Wiley & Sons, New York.
Kececioglu, Dimitri, (1991) "Reliability Engineering Handbook", Prentice-Hall, Englewood Cliffs, New Jersey
Trevor Kletz (1998) Process Plants: A Handbook for Inherently Safer Design CRC ISBN 1-56032-619-0
Leemis, Lawrence, (1995) Reliability: Probabilistic Models and Statistical Methods, 1995, Prentice-Hall. ISBN 0-13-720517-1
Lees, Frank (2005). Loss Prevention in the Process Industries (3rd ed.). Elsevier. ISBN 978-0-7506-7555-0.
MacDiarmid, Preston; Morris, Seymour; et al., (1995), Reliability Toolkit: Commercial Practices Edition, Reliability Analysis Center and Rome Laboratory, Rome, New York.
Modarres, Mohammad; Kaminskiy, Mark; Krivtsov, Vasiliy (1999), Reliability Engineering and Risk Analysis: A Practical Guide, CRC Press, ISBN 0-8247-2000-8.
Musa, John (2005) Software Reliability Engineering: More Reliable Software Faster and Cheaper, 2nd. Edition, AuthorHouse. ISBN
Neubeck, Ken (2004) "Practical Reliability Analysis", Prentice Hall, New Jersey
Neufelder, Ann Marie, (1993), Ensuring Software Reliability, Marcel Dekker, Inc., New York.
O'Connor, Patrick D. T. (2002), Practical Reliability Engineering (Fourth Ed.), John Wiley & Sons, New York. ISBN 978-0-4708-4462-5.
Samaniego, Francisco J. (2007) "System Signatures and their Applications in Engineering Reliability", Springer (International Series in Operations Research and Management Science), New York.
Shooman, Martin, (1987), Software Engineering: Design, Reliability, and Management, McGraw-Hill, New York.
Tobias, Trindade, (1995), Applied Reliability, Chapman & Hall/CRC, ISBN 0-442-00469-9
Springer Series in Reliability Engineering
Nelson, Wayne B., (2004), Accelerated Testing—Statistical Models, Test Plans, and Data Analysis, John Wiley & Sons, New York, ISBN 0-471-69736-2
Bagdonavicius, V., Nikulin, M., (2002), "Accelerated Life Models. Modeling and Statistical analysis", CHAPMAN&HALL/CRC, Boca Raton, ISBN 1-58488-186-0
Todinov, M. (2016), "Reliability and Risk Models: setting reliability requirements", Wiley, 978-1-118-87332-8.
=== US standards, specifications, and handbooks ===
Aerospace Report Number: TOR-2007(8583)-6889 Reliability Program Requirements for Space Systems, The Aerospace Corporation (10 July 2007)
DoD 3235.1-H (3rd Ed) Test and Evaluation of System Reliability, Availability, and Maintainability (A Primer), U.S. Department of Defense (March 1982).
NASA GSFC 431-REF-000370 Flight Assurance Procedure: Performing a Failure Mode and Effects Analysis, National Aeronautics and Space Administration Goddard Space Flight Center (10 August 1996).
IEEE 1332–1998 IEEE Standard Reliability Program for the Development and Production of Electronic Systems and Equipment, Institute of Electrical and Electronics Engineers (1998).
JPL D-5703 Reliability Analysis Handbook, National Aeronautics and Space Administration Jet Propulsion Laboratory (July 1990).
MIL-STD-785B Reliability Program for Systems and Equipment Development and Production, U.S. Department of Defense (15 September 1980). (*Obsolete, superseded by ANSI/GEIA-STD-0009-2008 titled Reliability Program Standard for Systems Design, Development, and Manufacturing, 13 Nov 2008)
MIL-HDBK-217F Reliability Prediction of Electronic Equipment, U.S. Department of Defense (2 December 1991).
MIL-HDBK-217F (Notice 1) Reliability Prediction of Electronic Equipment, U.S. Department of Defense (10 July 1992).
MIL-HDBK-217F (Notice 2) Reliability Prediction of Electronic Equipment, U.S. Department of Defense (28 February 1995).
MIL-STD-690D Failure Rate Sampling Plans and Procedures, U.S. Department of Defense (10 June 2005).
MIL-HDBK-338B Electronic Reliability Design Handbook, U.S. Department of Defense (1 October 1998).
MIL-HDBK-2173 Reliability-Centered Maintenance (RCM) Requirements for Naval Aircraft, Weapon Systems, and Support Equipment, U.S. Department of Defense (30 January 1998); (superseded by NAVAIR 00-25-403).
MIL-STD-1543B Reliability Program Requirements for Space and Launch Vehicles, U.S. Department of Defense (25 October 1988).
MIL-STD-1629A Procedures for Performing a Failure Mode Effects and Criticality Analysis, U.S. Department of Defense (24 November 1980).
MIL-HDBK-781A Reliability Test Methods, Plans, and Environments for Engineering Development, Qualification, and Production, U.S. Department of Defense (1 April 1996).
NSWC-06 (Part A & B) Handbook of Reliability Prediction Procedures for Mechanical Equipment, Naval Surface Warfare Center (10 January 2006).
SR-332 Reliability Prediction Procedure for Electronic Equipment, Telcordia Technologies (January 2011).
FD-ARPP-01 Automated Reliability Prediction Procedure, Telcordia Technologies (January 2011).
GR-357 Generic Requirements for Assuring the Reliability of Components Used in Telecommunications Equipment, Telcordia Technologies (March 2001).
http://standards.sae.org/ja1000/1_199903/ SAE JA1000/1 Reliability Program Standard Implementation Guide
=== UK standards ===
In the UK, more up-to-date standards are maintained under the sponsorship of the UK MOD as Defence Standards. The relevant standards include:
DEF STAN 00-40 Reliability and Maintainability (R&M)
PART 1: Issue 5: Management Responsibilities and Requirements for Programmes and Plans
PART 4: (ARMP-4)Issue 2: Guidance for Writing NATO R&M Requirements Documents
PART 6: Issue 1: IN-SERVICE R & M
PART 7 (ARMP-7) Issue 1: NATO R&M Terminology Applicable to ARMP's
DEF STAN 00-42 RELIABILITY AND MAINTAINABILITY ASSURANCE GUIDES
PART 1: Issue 1: ONE-SHOT DEVICES/SYSTEMS
PART 2: Issue 1: SOFTWARE
PART 3: Issue 2: R&M CASE
PART 4: Issue 1: Testability
PART 5: Issue 1: IN-SERVICE RELIABILITY DEMONSTRATIONS
DEF STAN 00-43 RELIABILITY AND MAINTAINABILITY ASSURANCE ACTIVITY
PART 2: Issue 1: IN-SERVICE MAINTAINABILITY DEMONSTRATIONS
DEF STAN 00-44 RELIABILITY AND MAINTAINABILITY DATA COLLECTION AND CLASSIFICATION
PART 1: Issue 2: MAINTENANCE DATA & DEFECT REPORTING IN THE ROYAL NAVY, THE ARMY AND THE ROYAL AIR FORCE
PART 2: Issue 1: DATA CLASSIFICATION AND INCIDENT SENTENCING—GENERAL
PART 3: Issue 1: INCIDENT SENTENCING—SEA
PART 4: Issue 1: INCIDENT SENTENCING—LAND
DEF STAN 00-45 Issue 1: RELIABILITY CENTERED MAINTENANCE
DEF STAN 00-49 Issue 1: RELIABILITY AND MAINTAINABILITY MOD GUIDE TO TERMINOLOGY DEFINITIONS
These can be obtained from DSTAN. There are also many commercial standards, produced by many organisations including the SAE, MSG, ARP, and IEE.
=== French standards ===
FIDES [1]. The FIDES methodology (UTE-C 80-811) is based on the physics of failures and supported by the analysis of test data, field returns and existing modelling.
UTE-C 80–810 or RDF2000 [2] Archived 17 July 2011 at the Wayback Machine. The RDF2000 methodology is based on the French telecom experience.
=== International standards ===
TC 56 Standards: Dependability Archived 10 September 2019 at the Wayback Machine
== External links ==
Media related to Reliability engineering at Wikimedia Commons
John P. Rankin Collection, The University of Alabama in Huntsville Archives and Special Collections NASA reliability engineering research on sneak circuits.
In mathematics, a monotonic function (or monotone function) is a function between ordered sets that preserves or reverses the given order. This concept first arose in calculus, and was later generalized to the more abstract setting of order theory.
== In calculus and analysis ==
In calculus, a function $f$ defined on a subset of the real numbers with real values is called monotonic if it is either entirely non-decreasing, or entirely non-increasing. That is, as per Fig. 1, a function that increases monotonically does not exclusively have to increase, it simply must not decrease.
A function is termed monotonically increasing (also increasing or non-decreasing) if for all $x$ and $y$ such that $x \leq y$ one has $f(x) \leq f(y)$, so $f$ preserves the order (see Figure 1). Likewise, a function is called monotonically decreasing (also decreasing or non-increasing) if, whenever $x \leq y$, then $f(x) \geq f(y)$, so it reverses the order (see Figure 2).
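On a finite list of sampled values, the definitions above reduce to pairwise comparisons of consecutive elements; a small Python sketch:

```python
def is_non_decreasing(values):
    """True if values[i] <= values[i+1] for every consecutive pair."""
    return all(x <= y for x, y in zip(values, values[1:]))

def is_non_increasing(values):
    """True if values[i] >= values[i+1] for every consecutive pair."""
    return all(x >= y for x, y in zip(values, values[1:]))

def is_monotonic(values):
    """Monotonic = entirely non-decreasing or entirely non-increasing."""
    return is_non_decreasing(values) or is_non_increasing(values)
```

Note that a constant run such as [2, 2, 2] counts as both non-decreasing and non-increasing, matching the weak (non-strict) definitions.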
If the order $\leq$ in the definition of monotonicity is replaced by the strict order $<$, one obtains a stronger requirement. A function with this property is called strictly increasing (also increasing). Again, by inverting the order symbol, one finds a corresponding concept called strictly decreasing (also decreasing). A function with either property is called strictly monotone. Functions that are strictly monotone are one-to-one (because for $x$ not equal to $y$, either $x < y$ or $x > y$ and so, by monotonicity, either $f(x) < f(y)$ or $f(x) > f(y)$, thus $f(x) \neq f(y)$.)
To avoid ambiguity, the terms weakly monotone, weakly increasing and weakly decreasing are often used to refer to non-strict monotonicity.
The terms "non-decreasing" and "non-increasing" should not be confused with the (much weaker) negative qualifications "not decreasing" and "not increasing". For example, the non-monotonic function shown in figure 3 first falls, then rises, then falls again. It is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing.
A function $f$ is said to be absolutely monotonic over an interval $(a, b)$ if the derivatives of all orders of $f$ are nonnegative or all nonpositive at all points on the interval.
=== Inverse of function ===
All strictly monotonic functions are invertible because they are guaranteed to have a one-to-one mapping from their range to their domain.
However, functions that are only weakly monotone are not invertible because they are constant on some interval (and therefore are not one-to-one).
A function may be strictly monotonic over a limited range of values and thus have an inverse on that range even though it is not strictly monotonic everywhere. For example, if $y = g(x)$ is strictly increasing on the range $[a, b]$, then it has an inverse $x = h(y)$ on the range $[g(a), g(b)]$.
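The restricted inverse described above can be computed numerically; a sketch using bisection, assuming g is continuous and strictly increasing on [a, b] (the function name is hypothetical):

```python
def invert_increasing(g, y, a, b, tol=1e-10):
    """Find x in [a, b] with g(x) ~= y, for a continuous, strictly
    increasing g and y in [g(a), g(b)], by bisection."""
    if not (g(a) <= y <= g(b)):
        raise ValueError("y outside [g(a), g(b)]")
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if g(mid) < y:
            lo = mid   # the solution lies to the right of mid
        else:
            hi = mid   # the solution lies at or to the left of mid
    return (lo + hi) / 2.0

# Example: g(x) = x**3 is strictly increasing on [0, 2];
# the inverse of y = 0.125 is x = 0.5.
x = invert_increasing(lambda t: t**3, 0.125, 0.0, 2.0)
```

Monotonicity is exactly what makes the bisection step valid: g(mid) < y tells us the solution must lie to the right.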
The term monotonic is sometimes used in place of strictly monotonic, so a source may state that all monotonic functions are invertible when they really mean that all strictly monotonic functions are invertible.
=== Monotonic transformation ===
The term monotonic transformation (or monotone transformation) may also cause confusion because it refers to a transformation by a strictly increasing function. This is the case in economics with respect to the ordinal properties of a utility function being preserved across a monotonic transform (see also monotone preferences). In this context, the term "monotonic transformation" refers to a positive monotonic transformation and is intended to distinguish it from a "negative monotonic transformation," which reverses the order of the numbers.
=== Some basic applications and results ===
The following properties are true for a monotonic function $f \colon \mathbb{R} \to \mathbb{R}$:

$f$ has limits from the right and from the left at every point of its domain;
$f$ has a limit at positive or negative infinity ($\pm\infty$) of either a real number, $\infty$, or $-\infty$;
$f$ can only have jump discontinuities;
$f$ can only have countably many discontinuities in its domain. The discontinuities, however, do not necessarily consist of isolated points and may even be dense in an interval $(a, b)$. For example, for any summable sequence $(a_i)$ of positive numbers and any enumeration $(q_i)$ of the rational numbers, the monotonically increasing function

$$f(x) = \sum_{q_i \leq x} a_i$$

is continuous exactly at every irrational number (cf. picture). It is the cumulative distribution function of the discrete measure on the rational numbers, where $a_i$ is the weight of $q_i$.
If {\displaystyle f} is differentiable at {\displaystyle x^{*}\in {\mathbb {R}}} and {\displaystyle f'(x^{*})>0}, then there is a non-degenerate interval I such that {\displaystyle x^{*}\in I} and {\displaystyle f} is increasing on I. As a partial converse, if f is differentiable and increasing on an interval I, then its derivative is non-negative at every point in I.
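The dense-jump example mentioned above can be explored numerically. The sketch below truncates the enumeration of the rationals to finitely many terms (a simplification; the weights a_i = 2^{-(i+1)} are an illustrative summable choice) and checks that the resulting partial sums are monotonically increasing:

```python
from fractions import Fraction

# Truncated version (a simplification) of the monotone function
# f(x) = sum of a_i over enumerated rationals q_i <= x, with the
# summable weights a_i = 2**-(i+1).
def first_rationals(n):
    """First n distinct rationals in (0, 1), ordered by denominator."""
    seen, out, d = set(), [], 2
    while True:
        for num in range(1, d):
            q = Fraction(num, d)
            if q not in seen:
                seen.add(q)
                out.append(q)
                if len(out) == n:
                    return out
        d += 1

def f(x, rationals):
    return sum(2.0 ** -(i + 1) for i, q in enumerate(rationals) if q <= x)

qs = first_rationals(200)
ys = [f(i / 1000, qs) for i in range(1001)]
# The partial sums are monotonically increasing in x.
assert all(a <= b for a, b in zip(ys, ys[1:]))
```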
These properties are the reason why monotonic functions are useful in technical work in analysis. Other important properties of these functions include:
if {\displaystyle f} is a monotonic function defined on an interval {\displaystyle I}, then {\displaystyle f} is differentiable almost everywhere on {\displaystyle I}; i.e. the set of numbers {\displaystyle x} in {\displaystyle I} such that {\displaystyle f} is not differentiable in {\displaystyle x} has Lebesgue measure zero. In addition, this result cannot be improved to countable: see Cantor function.
if this set is countable, then {\displaystyle f} is absolutely continuous
if {\displaystyle f} is a monotonic function defined on an interval {\displaystyle \left[a,b\right]}, then {\displaystyle f} is Riemann integrable.
An important application of monotonic functions is in probability theory. If {\displaystyle X} is a random variable, its cumulative distribution function {\displaystyle F_{X}\!\left(x\right)={\text{Prob}}\!\left(X\leq x\right)} is a monotonically increasing function.
A function is unimodal if it is monotonically increasing up to some point (the mode) and then monotonically decreasing.
When {\displaystyle f} is a strictly monotonic function, then {\displaystyle f} is injective on its domain, and if {\displaystyle T} is the range of {\displaystyle f}, then there is an inverse function on {\displaystyle T} for {\displaystyle f}. In contrast, each constant function is monotonic, but not injective, and hence cannot have an inverse.
The graphic shows six monotonic functions. Their simplest forms are shown in the plot area and the expressions used to create them are shown on the y-axis.
== In topology ==
A map {\displaystyle f:X\to Y} is said to be monotone if each of its fibers is connected; that is, for each element {\displaystyle y\in Y,} the (possibly empty) set {\displaystyle f^{-1}(y)} is a connected subspace of {\displaystyle X.}
== In functional analysis ==
In functional analysis on a topological vector space {\displaystyle X}, a (possibly non-linear) operator {\displaystyle T:X\rightarrow X^{*}} is said to be a monotone operator if
{\displaystyle (Tu-Tv,u-v)\geq 0\quad \forall u,v\in X.}
Kachurovskii's theorem shows that convex functions on Banach spaces have monotonic operators as their derivatives.
A subset {\displaystyle G} of {\displaystyle X\times X^{*}} is said to be a monotone set if for every pair {\displaystyle [u_{1},w_{1}]} and {\displaystyle [u_{2},w_{2}]} in {\displaystyle G},
{\displaystyle (w_{1}-w_{2},u_{1}-u_{2})\geq 0.}
{\displaystyle G} is said to be maximal monotone if it is maximal among all monotone sets in the sense of set inclusion. The graph of a monotone operator {\displaystyle G(T)} is a monotone set. A monotone operator is said to be maximal monotone if its graph is a maximal monotone set.
== In order theory ==
Order theory deals with arbitrary partially ordered sets and preordered sets as a generalization of real numbers. The above definition of monotonicity is relevant in these cases as well. However, the terms "increasing" and "decreasing" are avoided, since their conventional pictorial representation does not apply to orders that are not total. Furthermore, the strict relations {\displaystyle <} and {\displaystyle >} are of little use in many non-total orders and hence no additional terminology is introduced for them.
Letting {\displaystyle \leq } denote the partial order relation of any partially ordered set, a monotone function, also called isotone, or order-preserving, satisfies the property
{\displaystyle x\leq y\implies f(x)\leq f(y)}
for all x and y in its domain. The composite of two monotone mappings is also monotone.
The dual notion is often called antitone, anti-monotone, or order-reversing. Hence, an antitone function f satisfies the property
{\displaystyle x\leq y\implies f(y)\leq f(x),}
for all x and y in its domain.
A constant function is both monotone and antitone; conversely, if f is both monotone and antitone, and if the domain of f is a lattice, then f must be constant.
Monotone functions are central in order theory. They appear in most articles on the subject and examples from special applications are found in these places. Some notable special monotone functions are order embeddings (functions for which {\displaystyle x\leq y} if and only if {\displaystyle f(x)\leq f(y))} and order isomorphisms (surjective order embeddings).
== In the context of search algorithms ==
In the context of search algorithms monotonicity (also called consistency) is a condition applied to heuristic functions. A heuristic {\displaystyle h(n)} is monotonic if, for every node n and every successor n' of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n':
{\displaystyle h(n)\leq c\left(n,a,n'\right)+h\left(n'\right).}
This is a form of triangle inequality, with n, n', and the goal Gn closest to n. Because every monotonic heuristic is also admissible, monotonicity is a stricter requirement than admissibility. Some heuristic algorithms such as A* can be proven optimal provided that the heuristic they use is monotonic.
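A minimal sketch of this condition, checked by brute force on a small hypothetical graph (node labels, step costs, and heuristic values are all illustrative, not from any source):

```python
# Consistency (monotonicity) check for a heuristic on a small
# hypothetical weighted graph. step_cost[(n, n2)] is c(n, a, n2);
# heur[n] is h(n); 'G' is the goal node.
step_cost = {('S', 'A'): 1, ('S', 'B'): 4, ('A', 'B'): 2,
             ('A', 'G'): 5, ('B', 'G'): 1}
heur = {'S': 4, 'A': 3, 'B': 1, 'G': 0}

def is_consistent(heur, step_cost, goal='G'):
    # h(goal) = 0 and h(n) <= c(n, a, n') + h(n') on every edge.
    return heur[goal] == 0 and all(heur[n] <= c + heur[n2]
                                   for (n, n2), c in step_cost.items())

assert is_consistent(heur, step_cost)
```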
== In Boolean functions ==
In Boolean algebra, a monotonic function is one such that for all ai and bi in {0,1}, if a1 ≤ b1, a2 ≤ b2, ..., an ≤ bn (i.e. the Cartesian product {0, 1}n is ordered coordinatewise), then f(a1, ..., an) ≤ f(b1, ..., bn). In other words, a Boolean function is monotonic if, for every combination of inputs, switching one of the inputs from false to true can only cause the output to switch from false to true and not from true to false. Graphically, this means that an n-ary Boolean function is monotonic when its representation as an n-cube labelled with truth values has no upward edge from true to false. (This labelled Hasse diagram is the dual of the function's labelled Venn diagram, which is the more common representation for n ≤ 3.)
The monotonic Boolean functions are precisely those that can be defined by an expression combining the inputs (which may appear more than once) using only the operators and and or (in particular not is forbidden). For instance "at least two of a, b, c hold" is a monotonic function of a, b, c, since it can be written for instance as ((a and b) or (a and c) or (b and c)).
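Monotonicity of a Boolean function can be verified by brute force over all input combinations; the sketch below checks the "at least two of a, b, c" example and a non-monotonic counterexample:

```python
from itertools import product

def is_monotone(f, n):
    """Brute force: switching any one input from 0 to 1 must never
    switch the output from true to false."""
    for bits in product((0, 1), repeat=n):
        for i in range(n):
            if bits[i] == 0:
                flipped = bits[:i] + (1,) + bits[i + 1:]
                if f(*bits) > f(*flipped):
                    return False
    return True

def at_least_two(a, b, c):
    # Uses only and/or, so it is monotonic.
    return (a and b) or (a and c) or (b and c)

assert is_monotone(at_least_two, 3)
assert not is_monotone(lambda a: not a, 1)   # negation is not monotone
```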
The number of such functions on n variables is known as the Dedekind number of n.
SAT solving, generally an NP-hard task, can be achieved efficiently when all involved functions and predicates are monotonic and Boolean.
== See also ==
Monotone cubic interpolation
Pseudo-monotone operator
Spearman's rank correlation coefficient - measure of monotonicity in a set of data
Total monotonicity
Cyclical monotonicity
Operator monotone function
Monotone set function
Absolutely and completely monotonic functions and sequences
== Notes ==
== Bibliography ==
Bartle, Robert G. (1976). The elements of real analysis (second ed.).
Grätzer, George (1971). Lattice theory: first concepts and distributive lattices. W. H. Freeman. ISBN 0-7167-0442-0.
Pemberton, Malcolm; Rau, Nicholas (2001). Mathematics for economists: an introductory textbook. Manchester University Press. ISBN 0-7190-3341-1.
Renardy, Michael & Rogers, Robert C. (2004). An introduction to partial differential equations. Texts in Applied Mathematics 13 (Second ed.). New York: Springer-Verlag. p. 356. ISBN 0-387-00444-0.
Riesz, Frigyes & Béla Szőkefalvi-Nagy (1990). Functional Analysis. Courier Dover Publications. ISBN 978-0-486-66289-3.
Russell, Stuart J.; Norvig, Peter (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 978-0-13-604259-4.
Simon, Carl P.; Blume, Lawrence (April 1994). Mathematics for Economists (first ed.). Norton. ISBN 978-0-393-95733-4. (Definition 9.31)
== External links ==
"Monotone function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Convergence of a Monotonic Sequence by Anik Debnath and Thomas Roxlo (The Harker School), Wolfram Demonstrations Project.
Weisstein, Eric W. "Monotonic Function". MathWorld.
The reliability theory of aging is an attempt to apply the principles of reliability theory to create a mathematical model of senescence. The theory was published in Russian by Leonid A. Gavrilov and Natalia S. Gavrilova as Biologiia prodolzhitelʹnosti zhizni in 1986, and in English translation as The Biology of Life Span: A Quantitative Approach in 1991.
One of the models suggested in the book is based on an analogy with reliability theory. The underlying hypothesis is based on the previously suggested premise that humans are born in a highly defective state, which is then made worse by environmental and mutational damage; exceptionally high redundancy, due to the extremely large number of low-reliability components (e.g., cells), allows the organism to survive for a while.
The theory suggests an explanation of two aging phenomena for higher organisms: the Gompertz law of exponential increase in mortality rates with age and the "late-life mortality plateau" (mortality deceleration compared to the Gompertz law at higher ages).
The book criticizes a number of hypotheses known at the time, discusses drawbacks of the hypotheses put forth by the authors themselves, and concludes that regardless of the suggested mathematical models, the underlying biological mechanisms remain unknown.
== See also ==
DNA damage theory of aging
== References ==
A case–control study (also known as case–referent study) is a type of observational study in which two existing groups differing in outcome are identified and compared on the basis of some supposed causal attribute. Case–control studies are often used to identify factors that may contribute to a medical condition by comparing subjects who have the condition with patients who do not have the condition but are otherwise similar. They require fewer resources but provide less evidence for causal inference than a randomized controlled trial. A case–control study is often used to produce an odds ratio. Some statistical methods make it possible to use a case–control study to also estimate relative risk, risk differences, and other quantities.
== Definition ==
Porta's Dictionary of Epidemiology defines the case–control study as: "an observational epidemiological study of persons with the disease (or another outcome variable) of interest and a suitable control group of persons without the disease (comparison group, reference group). The potential relationship of a suspected risk factor or an attribute to the disease is examined by comparing the diseased and nondiseased subjects with regard to how frequently the factor or attribute is present (or, if quantitative, the levels of the attribute) in each of the groups (diseased and nondiseased)."
The case–control study is frequently contrasted with cohort studies, wherein exposed and unexposed subjects are observed until they develop an outcome of interest.
=== Control group selection ===
Controls need not be in good health; inclusion of sick people is sometimes appropriate, as the control group should represent those at risk of becoming a case. Controls should come from the same population as the cases, and their selection should be independent of the exposures of interest.
Controls can carry the same disease as the experimental group, but of another grade/severity, therefore being different from the outcome of interest. However, because the difference between the cases and the controls will be smaller, this results in a lower power to detect an exposure effect.
As with any epidemiological study, greater numbers in the study will increase the power of the study. Numbers of cases and controls do not have to be equal. In many situations, it is much easier to recruit controls than to find cases. Increasing the number of controls above the number of cases, up to a ratio of about 4 to 1, may be a cost-effective way to improve the study.
=== Prospective vs. retrospective cohort studies ===
A prospective study watches for outcomes, such as the development of a disease, during the study period and relates this to other factors such as suspected risk or protection factor(s). The study usually involves taking a cohort of subjects and watching them over a long period. The outcome of interest should be common; otherwise, the number of outcomes observed will be too small to be statistically meaningful (indistinguishable from those that may have arisen by chance). All efforts should be made to avoid sources of bias such as the loss of individuals to follow up during the study. Prospective studies usually have fewer potential sources of bias and confounding than retrospective studies.
A retrospective study, on the other hand, looks backwards and examines exposures to suspected risk or protection factors in relation to an outcome that is established at the start of the study. Many valuable case–control studies, such as Lane and Claypon's 1926 investigation of risk factors for breast cancer, were retrospective investigations. Most sources of error due to confounding and bias are more common in retrospective studies than in prospective studies. For this reason, retrospective investigations are often criticised. If the outcome of interest is uncommon, however, the size of prospective investigation required to estimate relative risk is often too large to be feasible. In retrospective studies the odds ratio provides an estimate of relative risk. One should take special care to avoid sources of bias and confounding in retrospective studies.
== Strengths and weaknesses ==
Case–control studies are a relatively inexpensive and frequently used type of epidemiological study that can be carried out by small teams or individual researchers in single facilities in a way that more structured experimental studies often cannot be. They have pointed the way to a number of important discoveries and advances. The case–control study design is often used in the study of rare diseases or as a preliminary study where little is known about the association between the risk factor and disease of interest.
Compared to prospective cohort studies they tend to be less costly and shorter in duration. In several situations, they have greater statistical power than cohort studies, which must often wait for a 'sufficient' number of disease events to accrue.
Case–control studies are observational in nature and thus do not provide the same level of evidence as randomized controlled trials. The results may be confounded by other factors, to the extent of giving the opposite answer to better studies. A meta-analysis of what was considered 30 high-quality studies concluded that use of a product halved a risk, when in fact the risk was, if anything, increased. It may also be more difficult to establish the timeline of exposure to disease outcome in the setting of a case–control study than within a prospective cohort study design where the exposure is ascertained prior to following the subjects over time in order to ascertain their outcome status. The most important drawback in case–control studies relates to the difficulty of obtaining reliable information about an individual's exposure status over time. Case–control studies are therefore placed low in the hierarchy of evidence.
== Examples ==
One of the most significant triumphs of the case–control study was the demonstration of the link between tobacco smoking and lung cancer, by Richard Doll and Bradford Hill. They showed a statistically significant association in a large case–control study. Opponents argued for many years that this type of study cannot prove causation, but the eventual results of cohort studies confirmed the causal link which the case–control studies suggested, and it is now accepted that tobacco smoking is the cause of about 87% of all lung cancer mortality in the US.
== Analysis ==
Case–control studies were initially analyzed by testing whether or not there were significant differences between the proportion of exposed subjects among cases and controls. Subsequently, Cornfield pointed out that, when the disease outcome of interest is rare, the odds ratio of exposure can be used to estimate the relative risk (see rare disease assumption). The validity of the odds ratio depends highly on the nature of the disease studied, on the sampling methodology and on the type of follow-up. Although in classical case–control studies, it remains true that the odds ratio can only approximate the relative risk in the case of rare diseases, there is a number of other types of studies (case–cohort, nested case–control, cohort studies) in which it was later shown that the odds ratio of exposure can be used to estimate the relative risk or the incidence rate ratio of exposure without the need for the rare disease assumption.
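As a concrete illustration (the 2×2 counts below are hypothetical, not taken from any particular study), the odds ratio and an approximate confidence interval on its logarithm (Woolf's method) can be computed directly from the exposure table:

```python
import math

# Odds ratio from a 2x2 case-control table; the counts are
# hypothetical, not from any particular study.
#              cases  controls
# exposed      a=688   b=650
# unexposed    c=21    d=59
a, b, c, d = 688, 650, 21, 59

odds_ratio = (a * d) / (b * c)

# Approximate 95% CI on the log odds ratio (Woolf's method).
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)

assert ci_low < odds_ratio < ci_high
```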
When the logistic regression model is used to model the case–control data and the odds ratio is of interest, both the prospective and retrospective likelihood methods will lead to identical maximum likelihood estimations for covariate, except for the intercept. The usual methods of estimating more interpretable parameters than odds ratios—such as risk ratios, levels, and differences—is biased if applied to case–control data, but special statistical procedures provide easy to use consistent estimators.
== Impact on longevity and public health ==
Tetlock and Gardner claimed that the contributions of medical science to increasing human longevity and public health were negligible, and too often negative, until Scottish physician Archie Cochrane was able to convince the medical establishment to adopt randomized control trials after World War II.
== See also ==
Nested case–control study
Retrospective cohort study
Prospective cohort study
Randomized controlled trial
== References ==
== Further reading ==
Stolley, Paul D., Schlesselman, James J. (1982). Case–control studies: design, conduct, analysis. Oxford [Oxfordshire]: Oxford University Press. ISBN 0-19-502933-X. (Still a very useful book, and a great place to start, but now a bit out of date.)
== External links ==
Wellcome Trust Case Control Consortium
In statistical mechanics, the hard hexagon model is a 2-dimensional lattice model of a gas, where particles are allowed to be on the vertices of a triangular lattice but no two particles may be adjacent.
The model was solved by Rodney Baxter (1980), who found that it was related to the Rogers–Ramanujan identities.
== The partition function of the hard hexagon model ==
The hard hexagon model occurs within the framework of the grand canonical ensemble, where the total number of particles (the "hexagons") is allowed to vary naturally, and is fixed by a chemical potential. In the hard hexagon model, all valid states have zero energy, and so the only important thermodynamic control variable is the ratio of chemical potential to temperature μ/(kT). The exponential of this ratio, z = exp(μ/(kT)) is called the activity and larger values correspond roughly to denser configurations.
For a triangular lattice with N sites, the grand partition function is
{\displaystyle \displaystyle {\mathcal {Z}}(z)=\sum _{n}z^{n}g(n,N)=1+Nz+{\tfrac {1}{2}}N(N-7)z^{2}+\cdots }
where g(n, N) is the number of ways of placing n particles on distinct lattice sites such that no 2 are adjacent. The function κ is defined by
{\displaystyle \kappa (z)=\lim _{N\rightarrow \infty }{\mathcal {Z}}(z)^{1/N}=1+z-3z^{2}+\cdots }
so that log(κ) is the free energy per unit site. Solving the hard hexagon model means (roughly) finding an exact expression for κ as a function of z.
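The first coefficients of the small-z expansion above can be checked by brute-force enumeration on a small periodic triangular lattice; in particular g(2, N) = N(N − 7)/2 (each site has six neighbours, so 3N edges are excluded from the N(N − 1)/2 pairs), matching the z² term:

```python
from itertools import combinations

# Brute-force check of the small-z coefficients of the grand
# partition function on a small periodic L x L triangular lattice.
L = 4
N = L * L
sites = [(i, j) for i in range(L) for j in range(L)]

def neighbors(i, j):
    # Six neighbours of a triangular-lattice site (periodic boundaries).
    deltas = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
    return {((i + di) % L, (j + dj) % L) for di, dj in deltas}

adj = {s: neighbors(*s) for s in sites}

def g(n):
    """Number of ways to place n particles with no two adjacent."""
    return sum(1 for sub in combinations(sites, n)
               if all(t not in adj[s] for s, t in combinations(sub, 2)))

assert g(0) == 1
assert g(1) == N
assert g(2) == N * (N - 7) // 2   # the z**2 coefficient N(N-7)/2
```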
The mean density ρ is given for small z by
{\displaystyle \rho =z{\frac {d\log(\kappa )}{dz}}=z-7z^{2}+58z^{3}-519z^{4}+4856z^{5}+\cdots .}
The vertices of the lattice fall into 3 classes numbered 1, 2, and 3, given by the 3 different ways to fill space with hard hexagons. There are 3 local densities ρ1, ρ2, ρ3, corresponding to the 3 classes of sites. When the activity is large the system approximates one of these 3 packings, so the local densities differ, but when the activity is below a critical point the three local densities are the same. The critical point separating the low-activity homogeneous phase from the high-activity ordered phase is
{\displaystyle z_{c}=(11+5{\sqrt {5}})/2=\phi ^{5}=11.09017....}
with golden ratio φ. Above the critical point the local densities differ and in the phase where most hexagons are on sites of type 1 can be expanded as
{\displaystyle \rho _{1}=1-z^{-1}-5z^{-2}-34z^{-3}-267z^{-4}-2037z^{-5}-\cdots }
{\displaystyle \rho _{2}=\rho _{3}=z^{-2}+9z^{-3}+80z^{-4}+965z^{-5}-\cdots .}
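The closed form of the critical activity can be sanity-checked numerically:

```python
import math

phi = (1 + math.sqrt(5)) / 2            # golden ratio
z_c = (11 + 5 * math.sqrt(5)) / 2       # critical activity
assert abs(z_c - phi ** 5) < 1e-12      # (11 + 5*sqrt(5))/2 = phi**5
assert abs(z_c - 11.09017) < 5e-6
```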
== Solution ==
The solution is given for small values of z < zc by
{\displaystyle \displaystyle z={\frac {-xH(x)^{5}}{G(x)^{5}}}}
{\displaystyle \kappa ={\frac {H(x)^{3}Q(x^{5})^{2}}{G(x)^{2}}}\prod _{n\geq 1}{\frac {(1-x^{6n-4})(1-x^{6n-3})^{2}(1-x^{6n-2})}{(1-x^{6n-5})(1-x^{6n-1})(1-x^{6n})^{2}}}}
{\displaystyle \rho =\rho _{1}=\rho _{2}=\rho _{3}={\frac {-xG(x)H(x^{6})P(x^{3})}{P(x)}}}
where
{\displaystyle G(x)=\prod _{n\geq 1}{\frac {1}{(1-x^{5n-4})(1-x^{5n-1})}}}
{\displaystyle H(x)=\prod _{n\geq 1}{\frac {1}{(1-x^{5n-3})(1-x^{5n-2})}}}
{\displaystyle P(x)=\prod _{n\geq 1}(1-x^{2n-1})=Q(x)/Q(x^{2})}
{\displaystyle Q(x)=\prod _{n\geq 1}(1-x^{n}).}
For large z > zc the solution (in the phase where most occupied sites have type 1) is given by
{\displaystyle \displaystyle z={\frac {G(x)^{5}}{xH(x)^{5}}}}
{\displaystyle \kappa =x^{-{\frac {1}{3}}}{\frac {G(x)^{3}Q(x^{5})^{2}}{H(x)^{2}}}\prod _{n\geq 1}{\frac {(1-x^{3n-2})(1-x^{3n-1})}{(1-x^{3n})^{2}}}}
{\displaystyle \rho _{1}={\frac {H(x)Q(x)(G(x)Q(x)+x^{2}H(x^{9})Q(x^{9}))}{Q(x^{3})^{2}}}}
{\displaystyle \rho _{2}=\rho _{3}={\frac {x^{2}H(x)Q(x)H(x^{9})Q(x^{9})}{Q(x^{3})^{2}}}}
{\displaystyle R=\rho _{1}-\rho _{2}={\frac {Q(x)Q(x^{5})}{Q(x^{3})^{2}}}.}
The functions G and H turn up in the Rogers–Ramanujan identities, and the function Q is the Euler function, which is closely related to the Dedekind eta function. If x = e2πiτ, then x−1/60G(x), x11/60H(x), x−1/24P(x), z, κ, ρ, ρ1, ρ2, and ρ3 are modular functions of τ, while x1/24Q(x) is a modular form of weight 1/2. Since any two modular functions are related by an algebraic relation, this implies that the functions κ, z, R, ρ are all algebraic functions of each other (of quite high degree) (Joyce 1988). In particular, the value of κ(1), which Eric Weisstein dubbed the hard hexagon entropy constant (Weisstein), is an algebraic number of degree 24 equal to 1.395485972... (OEIS: A085851).
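The connection to the Rogers–Ramanujan identities can be checked with truncated power series: the product form of G(x) defined above agrees, coefficient by coefficient, with the Rogers–Ramanujan sum Σ x^{n²}/((1−x)⋯(1−x^n)). The sketch below verifies this up to degree 30 using plain polynomial arithmetic:

```python
D = 30  # truncation order

def mul(p, q):
    """Multiply two power series (coefficient lists), truncated at D."""
    r = [0] * (D + 1)
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if i + j <= D:
                    r[i + j] += a * b
    return r

def inv_one_minus_xk(k):
    # Series of 1/(1 - x**k): 1 + x**k + x**(2k) + ...
    return [1 if m % k == 0 else 0 for m in range(D + 1)]

# Product side of G(x): 1 / prod over (1-x**(5n-4))(1-x**(5n-1)).
G = [1] + [0] * D
for k in range(1, D + 1):
    if k % 5 in (1, 4):
        G = mul(G, inv_one_minus_xk(k))

# Sum side of the first Rogers-Ramanujan identity.
S = [0] * (D + 1)
term = [1] + [0] * D            # n = 0 term: 1
n = 0
while n * n <= D:
    for i in range(D + 1):
        if n * n + i <= D:
            S[n * n + i] += term[i]
    n += 1
    term = mul(term, inv_one_minus_xk(n))

assert G == S   # the two sides agree through degree D
```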
== Related models ==
The hard hexagon model can be defined similarly on the square and honeycomb lattices. No exact solution is known for either of these models, but the critical point zc is near 3.7962±0.0001 for the square lattice and 7.92±0.08 for the honeycomb lattice; κ(1) is approximately 1.503048082... (OEIS: A085850) for the square lattice and 1.546440708... for the honeycomb lattice (Baxter 1999).
== References ==
Andrews, George E. (1981), "The hard-hexagon model and Rogers-Ramanujan type identities", Proceedings of the National Academy of Sciences of the United States of America, 78 (9): 5290–5292, Bibcode:1981PNAS...78.5290A, doi:10.1073/pnas.78.9.5290, ISSN 0027-8424, MR 0629656, PMC 348728, PMID 16593082
Baxter, Rodney J. (1980), "Hard hexagons: exact solution", Journal of Physics A: Mathematical and General, 13 (3): L61 – L70, Bibcode:1980JPhA...13L..61B, doi:10.1088/0305-4470/13/3/007, ISSN 0305-4470, MR 0560533
Baxter, Rodney J. (1982), Exactly solved models in statistical mechanics (PDF), London: Academic Press Inc. [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-083180-7, MR 0690578, archived from the original (PDF) on 2021-04-14, retrieved 2012-08-12
Joyce, G. S. (1988), "Exact results for the activity and isothermal compressibility of the hard-hexagon model", Journal of Physics A: Mathematical and General, 21 (20): L983 – L988, Bibcode:1988JPhA...21L.983J, doi:10.1088/0305-4470/21/20/005, ISSN 0305-4470, MR 0966792
Exton, H. (1983), q-Hypergeometric Functions and Applications, New York: Halstead Press, Chichester: Ellis Horwood
Weisstein, Eric W., "Hard Hexagon Entropy Constant", MathWorld
Baxter, R. J.; Enting, I. G.; Tsang, S. K. (April 1980), "Hard-square lattice gas", Journal of Statistical Physics, 22 (4): 465–489, Bibcode:1980JSP....22..465B, doi:10.1007/BF01012867, S2CID 121413715
Runnels, L. K.; Combs, L. L.; Salvant, James P. (15 November 1967), "Exact Finite Method of Lattice Statistics. II. Honeycomb-Lattice Gas of Hard Molecules", The Journal of Chemical Physics, 47 (10): 4015–4020, Bibcode:1967JChPh..47.4015R, doi:10.1063/1.1701569
Baxter, R. J. (1 June 1999), "Planar lattice gases with nearest-neighbor exclusion", Annals of Combinatorics, 3 (2): 191–203, arXiv:cond-mat/9811264, doi:10.1007/BF01608783, S2CID 13600601
== External links ==
Weisstein, Eric W. "Hard Hexagon Entropy Constant". MathWorld.
In physics, specifically statistical mechanics, an ensemble (also statistical ensemble) is an idealization consisting of a large number of virtual copies (sometimes infinitely many) of a system, considered all at once, each of which represents a possible state that the real system might be in. In other words, a statistical ensemble is a set of systems of particles used in statistical mechanics to describe a single system. The concept of an ensemble was introduced by J. Willard Gibbs in 1902.
A thermodynamic ensemble is a specific variety of statistical ensemble that, among other properties, is in statistical equilibrium (defined below), and is used to derive the properties of thermodynamic systems from the laws of classical or quantum mechanics.
== Physical considerations ==
The ensemble formalises the notion that an experimenter repeating an experiment again and again under the same macroscopic conditions, but unable to control the microscopic details, may expect to observe a range of different outcomes.
The notional size of ensembles in thermodynamics, statistical mechanics and quantum statistical mechanics can be very large, including every possible microscopic state the system could be in, consistent with its observed macroscopic properties. For many important physical cases, it is possible to calculate averages directly over the whole of the thermodynamic ensemble, to obtain explicit formulas for many of the thermodynamic quantities of interest, often in terms of the appropriate partition function.
The concept of an equilibrium or stationary ensemble is crucial to many applications of statistical ensembles. Although a mechanical system certainly evolves over time, the ensemble does not necessarily have to evolve. In fact, the ensemble will not evolve if it contains all past and future phases of the system. Such a statistical ensemble, one that does not change over time, is called stationary and can be said to be in statistical equilibrium.
=== Terminology ===
The word "ensemble" is also used for a smaller set of possibilities sampled from the full set of possible states. For example, a collection of walkers in a Markov chain Monte Carlo iteration is called an ensemble in some of the literature.
The term "ensemble" is often used in physics and the physics-influenced literature. In probability theory, the term probability space is more prevalent.
== Main types ==
The study of thermodynamics is concerned with systems that appear to human perception to be "static" (despite the motion of their internal parts), and which can be described simply by a set of macroscopically observable variables. These systems can be described by statistical ensembles that depend on a few observable parameters, and which are in statistical equilibrium. Gibbs noted that different macroscopic constraints lead to different types of ensembles, with particular statistical characteristics.
"We may imagine a great number of systems of the same nature, but differing in the configurations and velocities which they have at a given instant, and differing not merely infinitesimally, but it may be so as to embrace every conceivable combination of configuration and velocities..." J. W. Gibbs (1903)
Three important thermodynamic ensembles were defined by Gibbs:
Microcanonical ensemble (or NVE ensemble) —a statistical ensemble where the total energy of the system and the number of particles in the system are each fixed to particular values; each of the members of the ensemble are required to have the same total energy and particle number. The system must remain totally isolated (unable to exchange energy or particles with its environment) in order to stay in statistical equilibrium.
Canonical ensemble (or NVT ensemble)—a statistical ensemble where the energy is not known exactly but the number of particles is fixed. In place of the energy, the temperature is specified. The canonical ensemble is appropriate for describing a closed system which is in, or has been in, weak thermal contact with a heat bath. In order to be in statistical equilibrium, the system must remain totally closed (unable to exchange particles with its environment) and may come into weak thermal contact with other systems that are described by ensembles with the same temperature.
Grand canonical ensemble (or μVT ensemble)—a statistical ensemble where neither the energy nor particle number are fixed. In their place, the temperature and chemical potential are specified. The grand canonical ensemble is appropriate for describing an open system: one which is in, or has been in, weak contact with a reservoir (thermal contact, chemical contact, radiative contact, electrical contact, etc.). The ensemble remains in statistical equilibrium if the system comes into weak contact with other systems that are described by ensembles with the same temperature and chemical potential.
The calculations that can be made using each of these ensembles are explored further in their respective articles.
Other thermodynamic ensembles can also be defined, corresponding to different physical requirements, for which analogous formulae can often similarly be derived.
For example, in the reaction ensemble, particle number fluctuations are only allowed to occur according to the stoichiometry of the chemical reactions which are present in the system.
=== Equivalence ===
In the thermodynamic limit, all ensembles should produce identical observables, since they are related by Legendre transforms; deviations from this rule occur when the relevant state variables are non-convex, for example in systems at the small molecular scale.
== Representations ==
The precise mathematical expression for a statistical ensemble has a distinct form depending on the type of mechanics under consideration (quantum or classical). In the classical case, the ensemble is a probability distribution over the microstates. In quantum mechanics, this notion, due to von Neumann, is a way of assigning a probability distribution over the results of each complete set of commuting observables.
In classical mechanics, the ensemble is instead written as a probability distribution in phase space; the microstates are the result of partitioning phase space into equal-sized units, although the size of these units can be chosen somewhat arbitrarily.
=== Requirements for representations ===
Putting aside for the moment the question of how statistical ensembles are generated operationally, we should be able to perform the following two operations on ensembles A, B of the same system:
Test whether A, B are statistically equivalent.
If p is a real number such that 0 < p < 1, then produce a new ensemble by probabilistic sampling from A with probability p and from B with probability 1 − p.
Under certain conditions, therefore, equivalence classes of statistical ensembles have the structure of a convex set.
=== Quantum mechanical ===
A statistical ensemble in quantum mechanics (also known as a mixed state) is most often represented by a density matrix, denoted by {\displaystyle {\hat {\rho }}}. The density matrix provides a fully general tool that can incorporate both quantum uncertainties (present even if the state of the system were completely known) and classical uncertainties (due to a lack of knowledge) in a unified manner. Any physical observable X in quantum mechanics can be written as an operator, {\displaystyle {\hat {X}}}. The expectation value of this operator on the statistical ensemble {\displaystyle \rho } is given by the following trace:
{\displaystyle \langle X\rangle =\operatorname {Tr} ({\hat {X}}\rho ).}
This can be used to evaluate averages (operator {\displaystyle {\hat {X}}}), variances (using operator {\displaystyle {\hat {X}}^{2}}), covariances (using operator {\displaystyle {\hat {X}}{\hat {Y}}}), etc. The density matrix must always have a trace of 1:
{\displaystyle \operatorname {Tr} {\hat {\rho }}=1}
(this essentially is the condition that the probabilities must add up to one).
In general, the ensemble evolves over time according to the von Neumann equation.
Equilibrium ensembles (those that do not evolve over time, {\displaystyle d{\hat {\rho }}/dt=0}) can be written solely as a function of conserved variables. For example, the microcanonical ensemble and canonical ensemble are strictly functions of the total energy, which is measured by the total energy operator {\displaystyle {\hat {H}}} (the Hamiltonian). The grand canonical ensemble is additionally a function of the particle number, measured by the total particle number operator {\displaystyle {\hat {N}}}. Such equilibrium ensembles are diagonal matrices in the orthogonal basis of states that simultaneously diagonalize each conserved variable. In bra–ket notation, the density matrix is
{\displaystyle {\hat {\rho }}=\sum _{i}P_{i}|\psi _{i}\rangle \langle \psi _{i}|,}
where the |ψi⟩, indexed by i, are the elements of a complete and orthogonal basis. (Note that in other bases, the density matrix is not necessarily diagonal.)
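The construction above can be sketched numerically. The following is a minimal example, assuming an illustrative two-level (qubit) system with probabilities 0.7 and 0.3 over two basis states; the observable chosen (the Pauli-Z matrix) and all parameter values are assumptions for illustration, not taken from the article.

```python
import numpy as np

# Two orthonormal basis states of a qubit (illustrative system).
psi0 = np.array([1.0, 0.0])
psi1 = np.array([0.0, 1.0])

# Classical mixture: 70% in |psi0>, 30% in |psi1>.
P = [0.7, 0.3]
rho = sum(p * np.outer(psi, psi.conj()) for p, psi in zip(P, [psi0, psi1]))

# The density matrix must have unit trace (probabilities sum to one).
assert np.isclose(np.trace(rho).real, 1.0)

# Observable: Pauli-Z operator, with eigenvalues +1 and -1.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Ensemble expectation value <Z> = Tr(Z rho) = 0.7*(+1) + 0.3*(-1).
expectation = np.trace(Z @ rho).real
print(expectation)  # ≈ 0.4
```

In the chosen basis the density matrix is diagonal, matching the equilibrium form above; a different basis would generally give off-diagonal entries.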
=== Classical mechanical ===
In classical mechanics, an ensemble is represented by a probability density function defined over the system's phase space. While an individual system evolves according to Hamilton's equations, the density function (the ensemble) evolves over time according to Liouville's equation.
In a mechanical system with a defined number of parts, the phase space has n generalized coordinates called q1, ... qn, and n associated canonical momenta called p1, ... pn. The ensemble is then represented by a joint probability density function ρ(p1, ... pn, q1, ... qn).
If the number of parts in the system is allowed to vary among the systems in the ensemble (as in a grand ensemble where the number of particles is a random quantity), then it is a probability distribution over an extended phase space that includes further variables such as particle numbers N1 (first kind of particle), N2 (second kind of particle), and so on up to Ns (the last kind of particle; s is how many different kinds of particles there are). The ensemble is then represented by a joint probability density function ρ(N1, ... Ns, p1, ... pn, q1, ... qn). The number of coordinates n varies with the numbers of particles.
Any mechanical quantity X can be written as a function of the system's phase. The expectation value of any such quantity is given by an integral over the entire phase space of this quantity weighted by ρ:
{\displaystyle \langle X\rangle =\sum _{N_{1}=0}^{\infty }\cdots \sum _{N_{s}=0}^{\infty }\int \cdots \int \rho X\,dp_{1}\cdots dq_{n}.}
The condition of probability normalization applies, requiring
{\displaystyle \sum _{N_{1}=0}^{\infty }\cdots \sum _{N_{s}=0}^{\infty }\int \cdots \int \rho \,dp_{1}\cdots dq_{n}=1.}
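The normalization condition can be checked numerically for a concrete density. The sketch below assumes the canonical distribution ρ ∝ exp(−βH) of a 1D harmonic oscillator with m = k = 1 and β = 1; these parameters are illustrative choices, not from the article.

```python
import math

# Canonical phase-space weight for a 1D harmonic oscillator,
# H = p^2/2 + q^2/2 (m = k = 1), at beta = 1 (illustrative parameters).
beta = 1.0
H = lambda q, p: 0.5 * (p * p + q * q)

# Riemann sum over a truncated phase space; the Gaussian tails beyond
# |q|, |p| = 6 are negligible.
d = 0.02
grid = [i * d for i in range(-300, 301)]
Z = sum(math.exp(-beta * H(q, p)) for q in grid for p in grid) * d * d

# Exact Gaussian integrals give Z = 2*pi/beta, so rho = exp(-beta*H)/Z
# integrates to 1 over phase space.
print(abs(Z - 2 * math.pi) < 1e-2)  # True
```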
Phase space is a continuous space containing an infinite number of distinct physical states within any small region. In order to connect the probability density in phase space to a probability distribution over microstates, it is necessary to somehow partition the phase space into blocks that represent the different states of the system in a fair way. It turns out that the correct way to do this simply results in equal-sized blocks of canonical phase space, and so a microstate in classical mechanics is an extended region in the phase space of canonical coordinates that has a particular volume. In particular, the probability density function in phase space, ρ, is related to the probability distribution over microstates, P, by a factor
{\displaystyle \rho ={\frac {1}{h^{n}C}}P,}
where
h is an arbitrary but predetermined constant with the units of energy×time, setting the extent of the microstate and providing correct dimensions to ρ.
C is an overcounting correction factor (see below), generally dependent on the number of particles and similar concerns.
Since h can be chosen arbitrarily, the notional size of a microstate is also arbitrary. Still, the value of h influences the offsets of quantities such as entropy and chemical potential, and so it is important to be consistent with the value of h when comparing different systems.
==== Correcting overcounting in phase space ====
Typically, the phase space contains duplicates of the same physical state in multiple distinct locations. This is a consequence of the way that a physical state is encoded into mathematical coordinates; the simplest choice of coordinate system often allows a state to be encoded in multiple ways. An example of this is a gas of identical particles whose state is written in terms of the particles' individual positions and momenta: when two particles are exchanged, the resulting point in phase space is different, and yet it corresponds to an identical physical state of the system. It is important in statistical mechanics (a theory about physical states) to recognize that the phase space is just a mathematical construction, and to not naively overcount actual physical states when integrating over phase space. Overcounting can cause serious problems:
Dependence of derived quantities (such as entropy and chemical potential) on the choice of coordinate system, since one coordinate system might show more or less overcounting than another.
Erroneous conclusions that are inconsistent with physical experience, as in the mixing paradox.
Foundational issues in defining the chemical potential and the grand canonical ensemble.
It is in general difficult to find a coordinate system that uniquely encodes each physical state. As a result, it is usually necessary to use a coordinate system with multiple copies of each state, and then to recognize and remove the overcounting.
A crude way to remove the overcounting would be to manually define a subregion of phase space that includes each physical state only once and then exclude all other parts of phase space. In a gas, for example, one could include only those phases where the particles' x coordinates are sorted in ascending order. While this would solve the problem, the resulting integral over phase space would be tedious to perform due to its unusual boundary shape. (In this case, the factor C introduced above would be set to C = 1, and the integral would be restricted to the selected subregion of phase space.)
A simpler way to correct the overcounting is to integrate over all of phase space but to reduce the weight of each phase in order to exactly compensate the overcounting. This is accomplished by the factor C introduced above, which is a whole number that represents how many ways a physical state can be represented in phase space. Its value does not vary with the continuous canonical coordinates, so overcounting can be corrected simply by integrating over the full range of canonical coordinates, then dividing the result by the overcounting factor. However, C does vary strongly with discrete variables such as numbers of particles, and so it must be applied before summing over particle numbers.
As mentioned above, the classic example of this overcounting is for a fluid system containing various kinds of particles, where any two particles of the same kind are indistinguishable and exchangeable. When the state is written in terms of the particles' individual positions and momenta, then the overcounting related to the exchange of identical particles is corrected by using
{\displaystyle C=N_{1}!N_{2}!\cdots N_{s}!.}
This is known as "correct Boltzmann counting".
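A small enumeration illustrates the overcounting being corrected. The sketch assumes three identical particles occupying three distinct positions; every permutation of particle labels is a distinct phase-space point but the same physical state, so the correction factor is C = 3!.

```python
import itertools
import math

# Three identical particles at three distinct positions (illustrative labels).
positions = ['a', 'b', 'c']

# Each assignment of labelled particles to positions is a distinct
# phase-space point, yet all describe the same physical state.
phase_space_points = list(itertools.permutations(positions))
print(len(phase_space_points))  # 6 points for one physical state

# Correct Boltzmann counting divides by C = N! for each species.
C = math.factorial(len(positions))
physical_states = len(phase_space_points) // C
print(physical_states)  # 1
```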
== Ensembles in statistics ==
The formulation of statistical ensembles used in physics has now been widely adopted in other fields, in part because it has been recognized that the canonical ensemble or Gibbs measure serves to maximize the entropy of a system, subject to a set of constraints: this is the principle of maximum entropy. This principle has now been widely applied to problems in linguistics, robotics, and the like.
In addition, statistical ensembles in physics are often built on a principle of locality: that all interactions are only between neighboring atoms or nearby molecules. Thus, for example, lattice models, such as the Ising model, model ferromagnetic materials by means of nearest-neighbor interactions between spins. The statistical formulation of the principle of locality is now seen to be a form of the Markov property in the broad sense; nearest neighbors are now Markov blankets. Thus, the general notion of a statistical ensemble with nearest-neighbor interactions leads to Markov random fields, which again find broad applicability; for example in Hopfield networks.
== Ensemble average ==
In statistical mechanics, the ensemble average is defined as the mean of a quantity that is a function of the microstate of a system, according to the distribution of the system on its micro-states in this ensemble.
Since the ensemble average is dependent on the ensemble chosen, its mathematical expression varies from ensemble to ensemble. However, the mean obtained for a given physical quantity does not depend on the ensemble chosen in the thermodynamic limit.
The grand canonical ensemble, for example, describes an open system.
=== Classical statistical mechanics ===
For a classical system in thermal equilibrium with its environment, the ensemble average takes the form of an integral over the phase space of the system:
{\displaystyle {\bar {A}}={\frac {\displaystyle \int {A\exp \left[-\beta H(q_{1},q_{2},\dots ,q_{N},p_{1},p_{2},\dots ,p_{N})\right]\,d\tau }}{\displaystyle \int {\exp \left[-\beta H(q_{1},q_{2},\dots ,q_{N},p_{1},p_{2},\dots ,p_{N})\right]\,d\tau }}},}
where
{\displaystyle {\bar {A}}} is the ensemble average of the system property A,
{\displaystyle \beta ={\frac {1}{kT}}} is the thermodynamic beta,
H is the Hamiltonian of the classical system in terms of the set of coordinates {\displaystyle q_{i}} and their conjugate generalized momenta {\displaystyle p_{i}},
{\displaystyle d\tau } is the volume element of the classical phase space of interest.
The denominator in this expression is known as the partition function and is denoted by the letter Z.
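The integral above can be evaluated numerically for a simple system. The sketch below assumes a 1D harmonic oscillator H = p²/2 + q²/2 (m = k = 1) at β = 1, an illustrative choice: equipartition then predicts kT/2 per quadratic term, so the ensemble average of the energy should be kT = 1/β = 1.

```python
import math

# Classical canonical average <H> for a 1D harmonic oscillator
# (illustrative parameters: m = k = 1, beta = 1).
beta = 1.0

def H(q, p):
    return 0.5 * (p * p + q * q)

d = 0.02
grid = [i * d for i in range(-300, 301)]  # truncate phase space at |q|,|p| = 6

num = 0.0   # numerator:   integral of H * exp(-beta*H)
Z = 0.0     # denominator: the partition function
for q in grid:
    for p in grid:
        e = H(q, p)
        w = math.exp(-beta * e)
        num += e * w
        Z += w

E_avg = num / Z
# Equipartition gives kT/2 per quadratic term, so <H> = kT = 1/beta.
print(E_avg)  # ≈ 1.0
```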
=== Quantum statistical mechanics ===
In quantum statistical mechanics, for a quantum system in thermal equilibrium with its environment, the weighted average takes the form of a sum over quantum energy states, rather than a continuous integral:
{\displaystyle {\bar {A}}={\frac {\sum _{i}A_{i}e^{-\beta E_{i}}}{\sum _{i}e^{-\beta E_{i}}}}.}
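This sum over energy states can be sketched for the smallest nontrivial case, a two-level system with levels E₀ = 0 and E₁ = ε; the values of ε and β below are illustrative assumptions.

```python
import math

# Thermal average of the energy for a two-level system with E_0 = 0 and
# E_1 = eps (illustrative parameters).
eps, beta = 1.0, 1.0
E = [0.0, eps]

weights = [math.exp(-beta * Ei) for Ei in E]  # Boltzmann factors
Z = sum(weights)                              # partition function
E_avg = sum(Ei * w for Ei, w in zip(E, weights)) / Z

# Closed form for the two-level system: <E> = eps / (exp(beta*eps) + 1).
print(abs(E_avg - eps / (math.exp(beta * eps) + 1.0)) < 1e-12)  # True
```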
=== Canonical ensemble average ===
The generalized version of the partition function provides the complete framework for working with ensemble averages in thermodynamics, information theory, statistical mechanics and quantum mechanics.
The microcanonical ensemble represents an isolated system in which the energy (E), volume (V) and number of particles (N) are all constant. The canonical ensemble represents a closed system which can exchange energy (E) with its surroundings (usually a heat bath), while the volume (V) and the number of particles (N) are held constant. The grand canonical ensemble represents an open system which can exchange energy (E) and particles (N) with its surroundings, while the volume (V) is kept constant.
== Operational interpretation ==
The discussion given so far, while rigorous, has taken for granted that the notion of an ensemble is valid a priori, as is commonly done in physical contexts. What has not been shown is that the ensemble itself (not the consequent results) is a precisely defined mathematical object. For instance,
It is not clear where this very large set of systems exists (for example, is it a gas of particles inside a container?)
It is not clear how to physically generate an ensemble.
In this section, we attempt to partially answer this question.
Suppose we have a preparation procedure for a system in a physics lab: For example, the procedure might involve a physical apparatus and some protocols for manipulating the apparatus. As a result of this preparation procedure, some system is produced and maintained in isolation for some small period of time. By repeating this laboratory preparation procedure we obtain a sequence of systems X1, X2, ...,Xk, which in our mathematical idealization, we assume is an infinite sequence of systems. The systems are similar in that they were all produced in the same way. This infinite sequence is an ensemble.
In a laboratory setting, each one of these prepared systems might be used as input for one subsequent testing procedure. Again, the testing procedure involves a physical apparatus and some protocols; as a result of the testing procedure we obtain a yes or no answer. Given a testing procedure E applied to each prepared system, we obtain a sequence of values Meas (E, X1), Meas (E, X2), ..., Meas (E, Xk). Each one of these values is a 0 (or no) or a 1 (yes).
Assume the following time average exists:
{\displaystyle \sigma (E)=\lim _{N\rightarrow \infty }{\frac {1}{N}}\sum _{k=1}^{N}\operatorname {Meas} (E,X_{k})}
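The operational definition can be simulated directly. In the sketch below the "preparation apparatus" is replaced by a seeded random number generator that answers yes with an assumed probability of 0.3; the probability, seed, and sample count are all illustrative assumptions.

```python
import random

# Simulate repeated preparation and a yes/no test E applied to each copy.
random.seed(42)
p_yes = 0.3  # assumed probability that test E answers "yes"

def meas(_k):
    """Yes/no (1/0) outcome of test E on the k-th prepared system."""
    return 1 if random.random() < p_yes else 0

N = 100_000
sigma_E = sum(meas(k) for k in range(N)) / N

# The finite-N average approximates the limit sigma(E) defined above.
print(abs(sigma_E - p_yes) < 0.01)  # True
```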
For quantum mechanical systems, an important assumption made in the quantum logic approach to quantum mechanics is the identification of yes–no questions with the lattice of closed subspaces of a Hilbert space. With some additional technical assumptions one can then infer that states are given by density operators S so that:
{\displaystyle \sigma (E)=\operatorname {Tr} (ES).}
We see this reflects the definition of quantum states in general: A quantum state is a mapping from the observables to their expectation values.
== See also ==
Density matrix – Mathematical tool in quantum physics
Ensemble (fluid mechanics) – Imaginary collection of notionally identical experiments
Ensemble interpretation – Concept in quantum mechanics
Phase space – Space of all possible states that a system can take
Liouville's theorem (Hamiltonian) – Key result in Hamiltonian mechanics and statistical mechanics
Maxwell–Boltzmann statistics – Statistical distribution used in many-particle mechanics
Replication (statistics) – Principle that variation can be better estimated with nonvarying repetition of conditions
== Notes ==
== References ==
== External links ==
Monte Carlo applet applied in statistical physics problems. | Wikipedia/Ensemble_(mathematical_physics) |
In physics, statistical mechanics is a mathematical framework that applies statistical methods and probability theory to large assemblies of microscopic entities. Sometimes called statistical physics or statistical thermodynamics, its applications include many problems in a wide variety of fields such as biology, neuroscience, computer science, information theory and sociology. Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion.
Statistical mechanics arose out of the development of classical thermodynamics, a field for which it was successful in explaining macroscopic physical properties—such as temperature, pressure, and heat capacity—in terms of microscopic parameters that fluctuate about average values and are characterized by probability distributions.: 1–4
While classical thermodynamics is primarily concerned with thermodynamic equilibrium, statistical mechanics has been applied in non-equilibrium statistical mechanics to the issues of microscopically modeling the speed of irreversible processes that are driven by imbalances.: 3 Examples of such processes include chemical reactions and flows of particles and heat. The fluctuation–dissipation theorem is the basic knowledge obtained from applying non-equilibrium statistical mechanics to study the simplest non-equilibrium situation of a steady state current flow in a system of many particles.: 572–573
== History ==
In 1738, Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion.
The founding of the field of statistical mechanics is generally credited to three physicists:
Ludwig Boltzmann, who developed the fundamental interpretation of entropy in terms of a collection of microstates
James Clerk Maxwell, who developed models of probability distribution of such states
Josiah Willard Gibbs, who coined the name of the field in 1884
In 1859, after reading a paper on the diffusion of molecules by Rudolf Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. Five years later, in 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and spent much of his life developing the subject further.
Statistical mechanics was initiated in the 1870s with the work of Boltzmann, much of which was collectively published in his 1896 Lectures on Gas Theory. Boltzmann's original papers on the statistical interpretation of thermodynamics, the H-theorem, transport theory, thermal equilibrium, the equation of state of gases, and similar subjects, occupy about 2,000 pages in the proceedings of the Vienna Academy and other societies. Boltzmann introduced the concept of an equilibrium statistical ensemble and also investigated for the first time non-equilibrium statistical mechanics, with his H-theorem.
The term "statistical mechanics" was coined by the American mathematical physicist J. Willard Gibbs in 1884. According to Gibbs, the term "statistical", in the context of mechanics, i.e. statistical mechanics, was first used by the Scottish physicist James Clerk Maxwell in 1871:
"In dealing with masses of matter, while we do not perceive the individual molecules, we are compelled to adopt what I have described as the statistical method of calculation, and to abandon the strict dynamical method, in which we follow every motion by the calculus."
"Probabilistic mechanics" might today seem a more appropriate term, but "statistical mechanics" is firmly entrenched. Shortly before his death, Gibbs published in 1902 Elementary Principles in Statistical Mechanics, a book which formalized statistical mechanics as a fully general approach to address all mechanical systems—macroscopic or microscopic, gaseous or non-gaseous. Gibbs' methods were initially derived in the framework classical mechanics, however they were of such generality that they were found to adapt easily to the later quantum mechanics, and still form the foundation of statistical mechanics to this day.
== Principles: mechanics and ensembles ==
In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics. For both types of mechanics, the standard mathematical approach is to consider two concepts:
The complete state of the mechanical system at a given time, mathematically encoded as a phase point (classical mechanics) or a pure quantum state vector (quantum mechanics).
An equation of motion which carries the state forward in time: Hamilton's equations (classical mechanics) or the Schrödinger equation (quantum mechanics)
Using these two concepts, the state at any other time, past or future, can in principle be calculated.
There is however a disconnect between these laws and everyday life experiences, as we do not find it necessary (nor even theoretically possible) to know exactly at a microscopic level the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction). Statistical mechanics fills this disconnection between the laws of mechanics and the practical experience of incomplete knowledge, by adding some uncertainty about which state the system is in.
Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces the statistical ensemble, which is a large collection of virtual, independent copies of the system in various states. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in a phase space with canonical coordinate axes. In quantum statistical mechanics, the ensemble is a probability distribution over pure states and can be compactly summarized as a density matrix.
As is usual for probabilities, the ensemble can be interpreted in different ways:
an ensemble can be taken to represent the various possible states that a single system could be in (epistemic probability, a form of knowledge), or
the members of the ensemble can be understood as the states of the systems in experiments repeated on independent systems which have been prepared in a similar but imperfectly controlled manner (empirical probability), in the limit of an infinite number of trials.
These two meanings are equivalent for many purposes, and will be used interchangeably in this article.
However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state.
One special class of ensemble is those ensembles that do not evolve over time. These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. (By contrast, mechanical equilibrium is a state with a balance of forces that has ceased to evolve.) The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems.
== Statistical thermodynamics ==
The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium, and the microscopic behaviours and motions occurring inside the material.
Whereas statistical mechanics proper involves dynamics, here the attention is focused on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving (mechanical equilibrium), rather, only that the ensemble is not evolving.
=== Fundamental postulate ===
A sufficient (but not necessary) condition for statistical equilibrium of an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.).
There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics. Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another.
A common approach found in many textbooks is to take the equal a priori probability postulate. This postulate states that
For an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge.
The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate:
Ergodic hypothesis: An ergodic system is one that evolves over time to explore "all accessible" states: all those with the same energy and composition. In an ergodic system, the microcanonical ensemble is the only possible equilibrium ensemble with fixed energy. This approach has limited applicability, since most systems are not ergodic.
Principle of indifference: In the absence of any further information, we can only assign equal probabilities to each compatible situation.
Maximum information entropy: A more elaborate version of the principle of indifference states that the correct ensemble is the ensemble that is compatible with the known information and that has the largest Gibbs entropy (information entropy).
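The equal a priori probability postulate can be made concrete by enumeration. The sketch below assumes a toy isolated system of four two-state spins whose conserved quantity fixes exactly two spins up; the model and its parameters are illustrative assumptions.

```python
import itertools

# Toy isolated system: four spins (0 = down, 1 = up), with the conserved
# quantity fixing exactly two spins up (illustrative model).
configs = list(itertools.product([0, 1], repeat=4))
accessible = [c for c in configs if sum(c) == 2]

# The microcanonical ensemble assigns each accessible microstate the
# same probability, per the equal a priori probability postulate.
P = {c: 1 / len(accessible) for c in accessible}
print(len(accessible))            # 6 microstates, i.e. C(4, 2)
print(round(sum(P.values()), 6))  # 1.0
```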
Other fundamental postulates for statistical mechanics have also been proposed. For example, recent studies show that the theory of statistical mechanics can be built without the equal a priori probability postulate. One such formalism is based on the fundamental thermodynamic relation together with the following set of postulates:
where the third postulate can be replaced by the following:
=== Three thermodynamic ensembles ===
There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume. These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics.
Microcanonical ensemble
describes a system with a precisely given energy and fixed composition (precise number of particles). The microcanonical ensemble contains with equal probability each possible state that is consistent with that energy and composition.
Canonical ensemble
describes a system of fixed composition that is in thermal equilibrium with a heat bath of a precise temperature. The canonical ensemble contains states of varying energy but identical composition; the different states in the ensemble are accorded different probabilities depending on their total energy.
Grand canonical ensemble
describes a system with non-fixed composition (uncertain particle numbers) that is in thermal and chemical equilibrium with a thermodynamic reservoir. The reservoir has a precise temperature, and precise chemical potentials for various types of particle. The grand canonical ensemble contains states of varying energy and varying numbers of particles; the different states in the ensemble are accorded different probabilities depending on their total energy and total particle numbers.
For systems containing many particles (the thermodynamic limit), all three of the ensembles listed above tend to give identical behaviour. It is then simply a matter of mathematical convenience which ensemble is used.: 227 The Gibbs theorem about equivalence of ensembles was developed into the theory of concentration of measure phenomenon, which has applications in many areas of science, from functional analysis to methods of artificial intelligence and big data technology.
Important cases where the thermodynamic ensembles do not give identical results include:
Microscopic systems.
Large systems at a phase transition.
Large systems with long-range interactions.
In these cases the correct thermodynamic ensemble must be chosen as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized—in other words, the ensemble that reflects the knowledge about that system.
=== Calculation methods ===
Once the characteristic state function for an ensemble has been calculated for a given system, that system is 'solved' (macroscopic observables can be extracted from the characteristic state function). Calculating the characteristic state function of a thermodynamic ensemble is not necessarily a simple task, however, since it involves considering every possible state of the system. While some hypothetical systems have been exactly solved, the most general (and realistic) case is too complex for an exact solution. Various approaches exist to approximate the true ensemble and allow calculation of average quantities.
==== Exact ====
There are some cases which allow exact solutions.
For very small microscopic systems, the ensembles can be directly computed by simply enumerating over all possible states of the system (using exact diagonalization in quantum mechanics, or integral over all phase space in classical mechanics).
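As a minimal sketch of this direct-enumeration approach (the function name and parameter values below are invented for illustration): the canonical partition function of a handful of non-interacting Ising spins in a field can be computed by brute force over all 2^N microstates, and checked against the factorized closed form Z = (2 cosh βh)^N with ⟨E⟩ = −Nh tanh(βh).

```python
import itertools
import math

def enumerate_canonical(N, beta, h):
    """Brute-force canonical ensemble for N Ising spins in a field h:
    enumerate all 2^N microstates, accumulating the partition function Z
    and the ensemble-average energy <E>."""
    Z = 0.0
    E_acc = 0.0
    for spins in itertools.product((-1, +1), repeat=N):
        E = -h * sum(spins)          # energy of this microstate
        w = math.exp(-beta * E)      # Boltzmann weight
        Z += w
        E_acc += E * w
    return Z, E_acc / Z

# For non-interacting spins the exact result factorizes:
# Z = (2 cosh(beta*h))^N and <E> = -N*h*tanh(beta*h).
Z, E = enumerate_canonical(3, beta=0.7, h=1.2)
```

For interacting models the same loop applies unchanged; only the energy function changes, at the cost of the exponential growth in the number of states that makes this feasible only for very small systems.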
Some large systems consist of many separable microscopic systems, and each of the subsystems can be analysed independently. Notably, idealized gases of non-interacting particles have this property, allowing exact derivations of Maxwell–Boltzmann statistics, Fermi–Dirac statistics, and Bose–Einstein statistics.
A few large systems with interaction have been solved. Using subtle mathematical techniques, exact solutions have been found for a few toy models; examples include the Bethe ansatz, the square-lattice Ising model in zero field, and the hard hexagon model.
==== Monte Carlo ====
Although some problems in statistical physics can be solved analytically using approximations and expansions, most current research utilizes the large processing power of modern computers to simulate or approximate solutions. A common approach to statistical problems is to use a Monte Carlo simulation to yield insight into the properties of a complex system. Monte Carlo methods are important in computational physics, physical chemistry, and related fields, and have diverse applications including medical physics, where they are used to model radiation transport for radiation dosimetry calculations.
The Monte Carlo method examines just a few of the possible states of the system, with the states chosen randomly (with a fair weight). As long as these states form a representative sample of the whole set of states of the system, the approximate characteristic function is obtained. As more and more random samples are included, the errors are reduced to an arbitrarily low level.
The Metropolis–Hastings algorithm is a classic Monte Carlo method, initially used to sample the canonical ensemble.
Path integral Monte Carlo is also used to sample the canonical ensemble.
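A toy illustration of Metropolis sampling (a hypothetical sketch; the function name and parameters are invented for this example): sampling a two-level system in the canonical ensemble and comparing the sampled occupation of the excited level against the exact Boltzmann result.

```python
import math
import random

def metropolis_two_level(beta, dE, n_steps, seed=0):
    """Metropolis sampling of a two-level system (energies 0 and dE) in the
    canonical ensemble; returns the sampled occupation of the excited state."""
    rng = random.Random(seed)
    state = 0                     # start in the ground state (energy 0)
    excited = 0
    for _ in range(n_steps):
        proposal = 1 - state      # propose hopping to the other level
        dE_move = (proposal - state) * dE
        # Metropolis rule: always accept downhill moves,
        # accept uphill moves with probability exp(-beta*dE).
        if dE_move <= 0 or rng.random() < math.exp(-beta * dE_move):
            state = proposal
        excited += state
    return excited / n_steps

p_sampled = metropolis_two_level(beta=1.0, dE=1.0, n_steps=200_000)
p_exact = math.exp(-1.0) / (1.0 + math.exp(-1.0))  # Boltzmann weight of level dE
```

The sampled occupation converges on the exact value as more random samples are included, in line with the error reduction described above.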
==== Other ====
For rarefied non-ideal gases, approaches such as the cluster expansion use perturbation theory to include the effect of weak interactions, leading to a virial expansion.
For dense fluids, another approximate approach is based on reduced distribution functions, in particular the radial distribution function.
Molecular dynamics computer simulations can be used to calculate microcanonical ensemble averages, in ergodic systems. With the inclusion of a connection to a stochastic heat bath, they can also model canonical and grand canonical conditions.
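The heat-bath idea can be sketched with overdamped Langevin dynamics, where a stochastic force plays the role of the thermostat (an illustrative toy, not a production molecular dynamics code; the Euler–Maruyama discretization used here introduces a small O(dt) bias):

```python
import math
import random

def langevin_msd(k=1.0, kT=1.0, dt=0.01, n_steps=500_000, seed=1):
    """Overdamped Langevin dynamics dx = -k*x*dt + sqrt(2*kT*dt)*xi, integrated
    with Euler-Maruyama. The noise term acts as a stochastic heat bath at
    temperature kT, so the trajectory samples the canonical distribution
    proportional to exp(-k*x^2 / (2*kT))."""
    rng = random.Random(seed)
    x, acc = 0.0, 0.0
    for _ in range(n_steps):
        x += -k * x * dt + math.sqrt(2.0 * kT * dt) * rng.gauss(0.0, 1.0)
        acc += x * x
    return acc / n_steps

# Equipartition: the time-averaged <x^2> should approach kT/k.
msd = langevin_msd()
```

With the noise term removed, the same integrator conserves nothing but relaxes to x = 0; it is the stochastic bath that generates canonical fluctuations.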
Mixed methods involving non-equilibrium statistical mechanical results (see below) may be useful.
== Non-equilibrium statistical mechanics ==
Many physical phenomena involve quasi-thermodynamic processes out of equilibrium, for example:
heat transport by the internal motions in a material, driven by a temperature imbalance,
electric currents carried by the motion of charges in a conductor, driven by a voltage imbalance,
spontaneous chemical reactions driven by a decrease in free energy,
friction, dissipation, quantum decoherence,
systems being pumped by external forces (optical pumping, etc.),
and irreversible processes in general.
All of these processes occur over time with characteristic rates. These rates are important in engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level. (Statistical thermodynamics can only be used to calculate the final result, after the external imbalances have been removed and the ensemble has settled back down to equilibrium.)
In principle, non-equilibrium statistical mechanics could be mathematically exact: ensembles for an isolated system evolve over time according to deterministic equations such as Liouville's equation or its quantum equivalent, the von Neumann equation. These equations are the result of applying the mechanical equations of motion independently to each state in the ensemble. These ensemble evolution equations inherit much of the complexity of the underlying mechanical motion, and so exact solutions are very difficult to obtain. Moreover, the ensemble evolution equations are fully reversible and do not destroy information (the ensemble's Gibbs entropy is preserved). In order to make headway in modelling irreversible processes, it is necessary to consider additional factors besides probability and reversible mechanics.
Non-equilibrium mechanics is therefore an active area of theoretical research as the range of validity of these additional assumptions continues to be explored. A few approaches are described in the following subsections.
=== Stochastic methods ===
One approach to non-equilibrium statistical mechanics is to incorporate stochastic (random) behaviour into the system. Stochastic behaviour destroys information contained in the ensemble. While this is technically inaccurate (aside from hypothetical situations involving black holes, a system cannot in itself cause loss of information), the randomness is added to reflect that information of interest becomes converted over time into subtle correlations within the system, or to correlations between the system and environment. These correlations appear as chaotic or pseudorandom influences on the variables of interest. By replacing these correlations with randomness proper, the calculations can be made much easier.
=== Near-equilibrium methods ===
Another important class of non-equilibrium statistical mechanical models deals with systems that are only very slightly perturbed from equilibrium. With very small perturbations, the response can be analysed in linear response theory. A remarkable result, as formalized by the fluctuation–dissipation theorem, is that the response of a system when near equilibrium is precisely related to the fluctuations that occur when the system is in total equilibrium. Essentially, a system that is slightly away from equilibrium—whether put there by external forces or by fluctuations—relaxes towards equilibrium in the same way, since the system cannot tell the difference or "know" how it came to be away from equilibrium.
This provides an indirect avenue for obtaining numbers such as ohmic conductivity and thermal conductivity by extracting results from equilibrium statistical mechanics. Since equilibrium statistical mechanics is mathematically well defined and (in some cases) more amenable for calculations, the fluctuation–dissipation connection can be a convenient shortcut for calculations in near-equilibrium statistical mechanics.
A few of the theoretical tools used to make this connection include:
Fluctuation–dissipation theorem
Onsager reciprocal relations
Green–Kubo relations
Landauer–Büttiker formalism
Mori–Zwanzig formalism
GENERIC formalism
=== Hybrid methods ===
An advanced approach uses a combination of stochastic methods and linear response theory. As an example, one approach to computing quantum coherence effects (weak localization, conductance fluctuations) in the conductance of an electronic system is to use the Green–Kubo relations, with the inclusion of stochastic dephasing from electron–electron interactions via the Keldysh method.
== Applications ==
The ensemble formalism can be used to analyze general mechanical systems with uncertainty in knowledge about the state of a system. Ensembles are also used in:
propagation of uncertainty over time,
regression analysis of gravitational orbits,
ensemble forecasting of weather,
dynamics of neural networks,
bounded-rational potential games in game theory and non-equilibrium economics.
Statistical physics explains and quantitatively describes superconductivity, superfluidity, turbulence, collective phenomena in solids and plasmas, and the structural features of liquids. It underlies modern astrophysics and the virial theorem. In solid state physics, statistical physics aids the study of liquid crystals, phase transitions, and critical phenomena. Many experimental studies of matter are entirely based on the statistical description of a system, including the scattering of cold neutrons, X-rays, visible light, and more. Statistical physics also plays a role in materials science, nuclear physics, astrophysics, chemistry, biology and medicine (e.g. the study of the spread of infectious diseases).
Analytical and computational techniques derived from the statistical physics of disordered systems can be extended to large-scale problems, including machine learning, e.g. to analyze the weight space of deep neural networks. Statistical physics is thus finding applications in areas such as medical diagnostics.
=== Quantum statistical mechanics ===
Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems. In quantum mechanics, a statistical ensemble (probability distribution over possible quantum states) is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics. One such formalism is provided by quantum logic.
== Index of statistical mechanics topics ==
=== Physics ===
Probability amplitude
Statistical physics
Boltzmann factor
Feynman–Kac formula
Fluctuation theorem
Information entropy
Vacuum expectation value
Cosmic variance
Negative probability
Gibbs state
Master equation
Partition function (mathematics)
Quantum probability
=== Percolation theory ===
Percolation theory
Schramm–Loewner evolution
== See also ==
List of textbooks in thermodynamics and statistical mechanics
Laplace transform § Statistical mechanics
== References ==
== Further reading ==
Reif, F. (2009). Fundamentals of Statistical and Thermal Physics. Waveland Press. ISBN 978-1-4786-1005-2.
Müller-Kirsten, Harald J W. (2013). Basics of Statistical Physics (PDF). doi:10.1142/8709. ISBN 978-981-4449-53-3.
Kadanoff, Leo P. "Statistical Physics and other resources". Archived from the original on August 12, 2021. Retrieved June 18, 2023.
Kadanoff, Leo P. (2000). Statistical Physics: Statics, Dynamics and Renormalization. World Scientific. ISBN 978-981-02-3764-6.
Flamm, Dieter (1998). "History and outlook of statistical physics". arXiv:physics/9803005.
== External links ==
Philosophy of Statistical Mechanics article by Lawrence Sklar for the Stanford Encyclopedia of Philosophy.
Sklogwiki - Thermodynamics, statistical mechanics, and the computer simulation of materials. SklogWiki is particularly orientated towards liquids and soft condensed matter.
Thermodynamics and Statistical Mechanics by Richard Fitzpatrick
Cohen, Doron (2011). "Lecture Notes in Statistical Mechanics and Mesoscopics". arXiv:1107.0568 [quant-ph].
Videos of lecture series in statistical mechanics on YouTube taught by Leonard Susskind.
Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. The original wiki site is down; see the archived copy of the article from April 28, 2012.
The partition function or configuration integral, as used in probability theory, information theory and dynamical systems, is a generalization of the definition of a partition function in statistical mechanics. It is a special case of a normalizing constant in probability theory, for the Boltzmann distribution. The partition function occurs in many problems of probability theory because, in situations where there is a natural symmetry, its associated probability measure, the Gibbs measure, has the Markov property. This means that the partition function occurs not only in physical systems with translation symmetry, but also in such varied settings as neural networks (the Hopfield network), and applications such as genomics, corpus linguistics and artificial intelligence, which employ Markov networks, and Markov logic networks. The Gibbs measure is also the unique measure that has the property of maximizing the entropy for a fixed expectation value of the energy; this underlies the appearance of the partition function in maximum entropy methods and the algorithms derived therefrom.
The partition function ties together many different concepts, and thus offers a general framework in which many different kinds of quantities may be calculated. In particular, it shows how to calculate expectation values and Green's functions, forming a bridge to Fredholm theory. It also provides a natural setting for the information geometry approach to information theory, where the Fisher information metric can be understood to be a correlation function derived from the partition function; it happens to define a Riemannian manifold.
When the setting for random variables is on complex projective space or projective Hilbert space, geometrized with the Fubini–Study metric, the theory of quantum mechanics and more generally quantum field theory results. In these theories, the partition function is heavily exploited in the path integral formulation, with great success, leading to many formulas nearly identical to those reviewed here. However, because the underlying measure space is complex-valued, as opposed to the real-valued simplex of probability theory, an extra factor of i appears in many formulas. Tracking this factor is troublesome, and is not done here. This article focuses primarily on classical probability theory, where the sum of probabilities total to one.
== Definition ==
Given a set of random variables X_i taking on values x_i, and some sort of potential function or Hamiltonian H(x_1, x_2, …), the partition function is defined as

{\displaystyle Z(\beta )=\sum _{x_{i}}\exp \left(-\beta H(x_{1},x_{2},\dots )\right)}

The function H is understood to be a real-valued function on the space of states {X_1, X_2, …}, while β is a real-valued free parameter (conventionally, the inverse temperature). The sum over the x_i is understood to be a sum over all possible values that each of the random variables X_i may take. The sum is therefore to be replaced by an integral when the X_i are continuous, rather than discrete, in which case one writes

{\displaystyle Z(\beta )=\int \exp \left(-\beta H(x_{1},x_{2},\dots )\right)\,dx_{1}\,dx_{2}\cdots }

for the case of continuously-varying X_i.
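As a toy check of the relation between the sum and integral forms of the definition (the helper name and grid here are invented for illustration): for the quadratic Hamiltonian H(x) = x²/2, the continuous partition function evaluates in closed form to Z(β) = √(2π/β), and a fine Riemann sum over an explicit list of states reproduces it.

```python
import math

def Z_discrete(beta, H, states):
    """Partition function as a plain sum over an explicit list of states."""
    return sum(math.exp(-beta * H(x)) for x in states)

# Continuous case, H(x) = x^2/2: the integral form gives Z = sqrt(2*pi/beta).
# Approximate the integral by a Riemann sum on a fine symmetric grid.
beta = 2.0
dx = 0.001
grid = [i * dx for i in range(-8000, 8001)]       # covers [-8, 8]
Z_num = Z_discrete(beta, lambda x: 0.5 * x * x, grid) * dx
Z_exact = math.sqrt(2 * math.pi / beta)
```

The agreement is excellent because the integrand decays rapidly; for slowly decaying weights the grid and cutoff would need more care.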
When H is an observable, such as a finite-dimensional matrix or an infinite-dimensional Hilbert space operator or element of a C-star algebra, it is common to express the summation as a trace, so that
{\displaystyle Z(\beta )=\operatorname {tr} \left(\exp \left(-\beta H\right)\right)}
When H is infinite-dimensional, then, for the above notation to be valid, the argument must be trace class, that is, of a form such that the summation exists and is bounded.
The number of variables X_i need not be countable, in which case the sums are to be replaced by functional integrals. Although there are many notations for functional integrals, a common one would be
{\displaystyle Z=\int {\mathcal {D}}\varphi \exp \left(-\beta H[\varphi ]\right)}
Such is the case for the partition function in quantum field theory.
A common, useful modification to the partition function is to introduce auxiliary functions. This allows, for example, the partition function to be used as a generating function for correlation functions. This is discussed in greater detail below.
== The parameter β ==
The role or meaning of the parameter β can be understood in a variety of different ways. In classical thermodynamics, it is an inverse temperature. More generally, one would say that it is the variable that is conjugate to some (arbitrary) function H of the random variables X. The word conjugate here is used in the sense of conjugate generalized coordinates in Lagrangian mechanics; thus, properly, β is a Lagrange multiplier. It is not uncommonly called the generalized force. All of these concepts have in common the idea that one value is meant to be kept fixed, as others, interconnected in some complicated way, are allowed to vary. In the current case, the value to be kept fixed is the expectation value of H, even as many different probability distributions can give rise to exactly this same (fixed) value.
For the general case, one considers a set of functions {H_k(x_1, …)} that each depend on the random variables X_i. These functions are chosen because one wants to hold their expectation values constant, for one reason or another. To constrain the expectation values in this way, one applies the method of Lagrange multipliers. In the general case, maximum entropy methods illustrate the manner in which this is done.
Some specific examples are in order. In basic thermodynamics problems, when using the canonical ensemble, the use of just one parameter β reflects the fact that there is only one expectation value that must be held constant: the energy (due to conservation of energy). For chemistry problems involving chemical reactions, the grand canonical ensemble provides the appropriate foundation, and there are two Lagrange multipliers. One is to hold the energy constant, and another, the fugacity, is to hold the particle count constant (as chemical reactions involve the recombination of a fixed number of atoms).
For the general case, one has

{\displaystyle Z(\beta )=\sum _{x_{i}}\exp \left(-\sum _{k}\beta _{k}H_{k}(x_{i})\right)}

with β = (β_1, β_2, …) a point in a space.
For a collection of observables H_k, one would write

{\displaystyle Z(\beta )=\operatorname {tr} \left[\,\exp \left(-\sum _{k}\beta _{k}H_{k}\right)\right]}
As before, it is presumed that the argument of tr is trace class.
The corresponding Gibbs measure then provides a probability distribution such that the expectation value of each H_k is a fixed value. More precisely, one has

{\displaystyle {\frac {\partial }{\partial \beta _{k}}}\left(-\log Z\right)=\langle H_{k}\rangle =\mathrm {E} \left[H_{k}\right]}

with the angle brackets ⟨H_k⟩ denoting the expected value of H_k, and E[·] being a common alternative notation. A precise definition of this expectation value is given below.
Although the value of β is commonly taken to be real, it need not be, in general; this is discussed in the section Normalization below. The values of β can be understood to be the coordinates of points in a space; this space is in fact a manifold, as sketched below. The study of these spaces as manifolds constitutes the field of information geometry.
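The defining property of the multipliers, ∂(−log Z)/∂β_k = ⟨H_k⟩, can be verified numerically on a small discrete state space (a hypothetical two-observable example; the state labels, observables and parameter values are all invented for illustration):

```python
import math

# A tiny state space with two observables, and a two-multiplier partition
# function Z(b1, b2) = sum_x exp(-b1*H1(x) - b2*H2(x)).
states = [(-1, 0), (1, 0), (-1, 1), (1, 1)]   # (spin, occupation)-like labels
H1 = {s: s[0] ** 2 + s[1] for s in states}    # an "energy"-like observable
H2 = {s: s[1] for s in states}                # a "particle-number"-like observable

def logZ(b1, b2):
    return math.log(sum(math.exp(-b1 * H1[s] - b2 * H2[s]) for s in states))

def expval(obs, b1, b2):
    Z = math.exp(logZ(b1, b2))
    return sum(obs[s] * math.exp(-b1 * H1[s] - b2 * H2[s]) for s in states) / Z

# d(-log Z)/d(b1) should equal <H1>; check by central finite difference.
b1, b2, eps = 0.4, 0.9, 1e-6
dH1 = -(logZ(b1 + eps, b2) - logZ(b1 - eps, b2)) / (2 * eps)
```

Each multiplier independently controls the expectation value of its conjugate observable, which is exactly the Lagrange-multiplier picture described above.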
== Symmetry ==
The potential function itself commonly takes the form of a sum:

{\displaystyle H(x_{1},x_{2},\dots )=\sum _{s}V(s)\,}

where the sum over s is a sum over some subset of the power set P(X) of the set X = {x_1, x_2, …}. For example, in statistical mechanics, such as the Ising model, the sum is over pairs of nearest neighbors. In probability theory, such as Markov networks, the sum might be over the cliques of a graph; so, for the Ising model and other lattice models, the maximal cliques are edges.
The fact that the potential function can be written as a sum usually reflects the fact that it is invariant under the action of a group symmetry, such as translational invariance. Such symmetries can be discrete or continuous; they materialize in the correlation functions for the random variables (discussed below). Thus a symmetry in the Hamiltonian becomes a symmetry of the correlation function (and vice versa).
This symmetry has a critically important interpretation in probability theory: it implies that the Gibbs measure has the Markov property; that is, it is independent of the random variables in a certain way, or, equivalently, the measure is identical on the equivalence classes of the symmetry. This leads to the widespread appearance of the partition function in problems with the Markov property, such as Hopfield networks.
== As a measure ==
The value of the expression

{\displaystyle \exp \left(-\beta H(x_{1},x_{2},\dots )\right)}

can be interpreted as a likelihood that a specific configuration of values (x_1, x_2, …) occurs in the system. Thus, given a specific configuration (x_1, x_2, …),

{\displaystyle P(x_{1},x_{2},\dots )={\frac {1}{Z(\beta )}}\exp \left(-\beta H(x_{1},x_{2},\dots )\right)}

is the probability of the configuration (x_1, x_2, …) occurring in the system, which is now properly normalized so that 0 ≤ P(x_1, x_2, …) ≤ 1, and such that the sum over all configurations totals to one. As such, the partition function can be understood to provide a measure (a probability measure) on the probability space; formally, it is called the Gibbs measure. It generalizes the narrower concepts of the grand canonical ensemble and canonical ensemble in statistical mechanics.
There exists at least one configuration (x_1, x_2, …) for which the probability is maximized; this configuration is conventionally called the ground state. If the configuration is unique, the ground state is said to be non-degenerate, and the system is said to be ergodic; otherwise the ground state is degenerate. The ground state may or may not commute with the generators of the symmetry; if it commutes, it is said to be an invariant measure. When it does not commute, the symmetry is said to be spontaneously broken.
Conditions under which a ground state exists and is unique are given by the Karush–Kuhn–Tucker conditions; these conditions are commonly used to justify the use of the Gibbs measure in maximum-entropy problems.
== Normalization ==
The values taken by β depend on the mathematical space over which the random field varies. Thus, real-valued random fields take values on a simplex: this is the geometrical way of saying that the sum of probabilities must total to one. For quantum mechanics, the random variables range over complex projective space (or complex-valued projective Hilbert space), where the random variables are interpreted as probability amplitudes. The emphasis here is on the word projective, as the amplitudes are still normalized to one. The normalization for the potential function is the Jacobian for the appropriate mathematical space: it is 1 for ordinary probabilities, and i for Hilbert space; thus, in quantum field theory, one sees itH in the exponential, rather than βH. The partition function is very heavily exploited in the path integral formulation of quantum field theory, to great effect. The theory there is very nearly identical to that presented here, aside from this difference, and the fact that it is usually formulated on four-dimensional space-time, rather than in a general way.
== Expectation values ==
The partition function is commonly used as a probability-generating function for expectation values of various functions of the random variables. So, for example, taking β as an adjustable parameter, the derivative of log(Z(β)) with respect to β,

{\displaystyle \operatorname {E} [H]=\langle H\rangle =-{\frac {\partial \log(Z(\beta ))}{\partial \beta }}}

gives the average (expectation value) of H. In physics, this would be called the average energy of the system.
Given the definition of the probability measure above, the expectation value of any function f of the random variables X may now be written as expected: so, for discrete-valued X, one writes
{\displaystyle {\begin{aligned}\langle f\rangle &=\sum _{x_{i}}f(x_{1},x_{2},\dots )P(x_{1},x_{2},\dots )\\&={\frac {1}{Z(\beta )}}\sum _{x_{i}}f(x_{1},x_{2},\dots )\exp \left(-\beta H(x_{1},x_{2},\dots )\right)\end{aligned}}}
The above notation makes sense for a finite number of discrete random variables. In more general settings, the summations should be replaced with integrals over a probability space.
Thus, for example, the entropy is given by
{\displaystyle {\begin{aligned}S&=-k_{\text{B}}\langle \ln P\rangle \\[1ex]&=-k_{\text{B}}\sum _{x_{i}}P(x_{1},x_{2},\dots )\ln P(x_{1},x_{2},\dots )\\&=k_{\text{B}}\left(\beta \langle H\rangle +\log Z(\beta )\right)\end{aligned}}}
The Gibbs measure is the unique statistical distribution that maximizes the entropy for a fixed expectation value of the energy; this underlies its use in maximum entropy methods.
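The last line of the entropy identity, S = k_B(β⟨H⟩ + log Z), follows algebraically from ln P = −βH − log Z, and can be confirmed numerically on a toy four-state system (energies invented for illustration; k_B set to 1):

```python
import math

beta = 1.3
energies = [0.0, 0.7, 1.1, 2.5]          # H(x) over a four-state space
weights = [math.exp(-beta * e) for e in energies]
Z = sum(weights)
P = [w / Z for w in weights]             # normalized Gibbs probabilities

H_avg = sum(p * e for p, e in zip(P, energies))
S_direct = -sum(p * math.log(p) for p in P)    # -<ln P>, with k_B = 1
S_identity = beta * H_avg + math.log(Z)        # beta*<H> + log Z
```

The two expressions agree to machine precision, since the identity is exact for any Gibbs distribution, not an approximation.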
== Information geometry ==
The points β can be understood to form a space, and specifically, a manifold. Thus, it is reasonable to ask about the structure of this manifold; this is the task of information geometry.
Multiple derivatives with respect to the Lagrange multipliers give rise to a positive semi-definite covariance matrix

{\displaystyle g_{ij}(\beta )={\frac {\partial ^{2}}{\partial \beta ^{i}\partial \beta ^{j}}}\left(-\log Z(\beta )\right)=\langle \left(H_{i}-\langle H_{i}\rangle \right)\left(H_{j}-\langle H_{j}\rangle \right)\rangle }
This matrix is positive semi-definite, and may be interpreted as a metric tensor, specifically, a Riemannian metric. Equipping the space of Lagrange multipliers with a metric in this way turns it into a Riemannian manifold. The study of such manifolds is referred to as information geometry; the metric above is the Fisher information metric. Here, β serves as a coordinate on the manifold. It is interesting to compare the above definition to the simpler Fisher information, from which it is inspired.
That the above defines the Fisher information metric can be readily seen by explicitly substituting for the expectation value:

{\displaystyle {\begin{aligned}g_{ij}(\beta )&=\left\langle \left(H_{i}-\left\langle H_{i}\right\rangle \right)\left(H_{j}-\left\langle H_{j}\right\rangle \right)\right\rangle \\&=\sum _{x}P(x)\left(H_{i}-\left\langle H_{i}\right\rangle \right)\left(H_{j}-\left\langle H_{j}\right\rangle \right)\\&=\sum _{x}P(x)\left(H_{i}+{\frac {\partial \log Z}{\partial \beta _{i}}}\right)\left(H_{j}+{\frac {\partial \log Z}{\partial \beta _{j}}}\right)\\&=\sum _{x}P(x){\frac {\partial \log P(x)}{\partial \beta ^{i}}}{\frac {\partial \log P(x)}{\partial \beta ^{j}}}\\\end{aligned}}}

where we have written P(x) for P(x_1, x_2, …) and the summation is understood to be over all values of all random variables X_k. For continuous-valued random variables, the summations are replaced by integrals, of course.
Curiously, the Fisher information metric can also be understood as the flat-space Euclidean metric, after an appropriate change of variables, as described in the main article on it. When the β are complex-valued, the resulting metric is the Fubini–Study metric. When written in terms of mixed states, instead of pure states, it is known as the Bures metric.
== Correlation functions ==
By introducing artificial auxiliary functions J_k into the partition function, it can then be used to obtain the expectation value of the random variables. Thus, for example, by writing

{\displaystyle {\begin{aligned}Z(\beta ,J)&=Z(\beta ,J_{1},J_{2},\dots )\\&=\sum _{x_{i}}\exp \left(-\beta H(x_{1},x_{2},\dots )+\sum _{n}J_{n}x_{n}\right)\end{aligned}}}
one then has

{\displaystyle \operatorname {E} [x_{k}]=\langle x_{k}\rangle =\left.{\frac {\partial }{\partial J_{k}}}\log Z(\beta ,J)\right|_{J=0}}

as the expectation value of x_k. In the path integral formulation of quantum field theory, these auxiliary functions are commonly referred to as source fields.
Multiple differentiations lead to the connected correlation functions of the random variables. Thus the correlation function C(x_j, x_k) between variables x_j and x_k is given by:

{\displaystyle C(x_{j},x_{k})=\left.{\frac {\partial }{\partial J_{j}}}{\frac {\partial }{\partial J_{k}}}\log Z(\beta ,J)\right|_{J=0}}
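A quick numerical illustration (a toy model with invented values): for two coupled ±1 spins with H = −x₁x₂, the mixed J-derivative of log Z at J = 0 reproduces the connected correlation ⟨x₁x₂⟩ − ⟨x₁⟩⟨x₂⟩, which for this model equals tanh β.

```python
import math
from itertools import product

# Two coupled +-1 spins, H = -x1*x2; source terms J.x are added in the exponent.
def logZ(beta, J):
    return math.log(sum(
        math.exp(beta * x1 * x2 + J[0] * x1 + J[1] * x2)
        for x1, x2 in product((-1, 1), repeat=2)))

def connected(beta, eps=1e-4):
    """Connected correlation <x1 x2> - <x1><x2>, obtained as the mixed
    second derivative of log Z with respect to the sources at J = 0."""
    return (logZ(beta, (eps, eps)) - logZ(beta, (eps, -eps))
            - logZ(beta, (-eps, eps)) + logZ(beta, (-eps, -eps))) / (4 * eps * eps)

# Exact result for this model: <x1> = <x2> = 0 and <x1 x2> = tanh(beta).
C = connected(beta=0.6)
```

Higher derivatives with respect to further sources would give higher connected correlators in exactly the same way.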
== Gaussian integrals ==
For the case where H can be written as a quadratic form involving a differential operator, that is, as

{\displaystyle H={\frac {1}{2}}\sum _{n}x_{n}Dx_{n}}

then the partition function can be understood to be a sum or integral over Gaussians. The correlation function C(x_j, x_k) can be understood to be the Green's function for the differential operator (and generally giving rise to Fredholm theory). In the quantum field theory setting, such functions are referred to as propagators; higher order correlators are called n-point functions; working with them defines the effective action of a theory.
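When D is an ordinary finite-dimensional positive-definite matrix, the Gaussian integral is elementary: Z = (2π/β)^{n/2} (det D)^{−1/2}, with correlation matrix (βD)^{−1}. A brute-force check of the determinant formula for a 2×2 case (toy values, invented for illustration):

```python
import math

# Quadratic Hamiltonian H = (1/2) x^T D x for a 2x2 positive-definite D.
beta = 1.5
D = [[2.0, 0.5], [0.5, 1.0]]
detD = D[0][0] * D[1][1] - D[0][1] * D[1][0]

# Closed form for n = 2: Z = (2*pi/beta)^(n/2) / sqrt(det D).
Z_closed = (2 * math.pi / beta) / math.sqrt(detD)

# Compare against a brute-force Riemann sum over a truncated grid.
h, L = 0.05, 6.0
n = int(L / h)
Z_num = 0.0
for i in range(-n, n + 1):
    for j in range(-n, n + 1):
        x, y = i * h, j * h
        Hx = 0.5 * (D[0][0] * x * x + 2 * D[0][1] * x * y + D[1][1] * y * y)
        Z_num += math.exp(-beta * Hx) * h * h
```

The same determinant structure persists in infinite dimensions, where "det D" must be regularized; that is precisely where Fredholm theory and propagators enter.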
When the random variables are anti-commuting Grassmann numbers, then the partition function can be expressed as a determinant of the operator D. This is done by writing it as a Berezin integral (also called Grassmann integral).
== General properties ==
Partition functions are used to discuss critical scaling, universality and are subject to the renormalization group.
== See also ==
Exponential family
Partition function (statistical mechanics)
Partition problem
Markov random field
== References ==
In physics, maximum entropy thermodynamics (colloquially, MaxEnt thermodynamics) views equilibrium thermodynamics and statistical mechanics as inference processes. More specifically, MaxEnt applies inference techniques rooted in Shannon information theory, Bayesian probability, and the principle of maximum entropy. These techniques are relevant to any situation requiring prediction from incomplete or insufficient data (e.g., image reconstruction, signal processing, spectral analysis, and inverse problems). MaxEnt thermodynamics began with two papers by Edwin T. Jaynes published in the 1957 Physical Review.
== Maximum Shannon entropy ==
Central to the MaxEnt thesis is the principle of maximum entropy. It takes as given a partly specified model and some specified data related to the model, and selects a preferred probability distribution to represent the model. The given data state "testable information" about the probability distribution, for example particular expectation values, but are not in themselves sufficient to uniquely determine it. The principle states that one should prefer the distribution which maximizes the Shannon information entropy,
{\displaystyle S_{\text{I}}=-\sum _{i}p_{i}\ln p_{i}.}
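As a minimal numerical sketch (not part of the article's sources), the prescription can be carried out for Jaynes' classic die problem: among all distributions on the faces 1–6 with a prescribed mean, the entropy maximizer has the exponential (Gibbs) form p_i ∝ exp(λx_i), and the multiplier λ can be found by bisection on the mean. The target mean 4.5 is an illustrative choice.

```python
import math

# Maximize S = -sum p_i ln p_i over faces 1..6 subject to <x> = 4.5.
# The maximizer has the form p_i ∝ exp(lam * x_i); solve for lam by bisection.
faces = [1, 2, 3, 4, 5, 6]
target_mean = 4.5

def dist(lam):
    w = [math.exp(lam * x) for x in faces]
    z = sum(w)                      # normalizing constant (partition function)
    return [wi / z for wi in w]

def mean(p):
    return sum(x * pi for x, pi in zip(faces, p))

lo, hi = -10.0, 10.0                # mean(dist(lam)) is increasing in lam
for _ in range(200):
    mid = (lo + hi) / 2
    if mean(dist(mid)) < target_mean:
        lo = mid
    else:
        hi = mid

p = dist((lo + hi) / 2)
entropy = -sum(pi * math.log(pi) for pi in p)
print([round(pi, 4) for pi in p], round(entropy, 4))
```

Because the constrained mean (4.5) exceeds the unconstrained mean (3.5), the multiplier is positive and the resulting weights increase with the face value.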
This is known as the Gibbs algorithm, having been introduced by J. Willard Gibbs in 1878, to set up statistical ensembles to predict the properties of thermodynamic systems at equilibrium. It is the cornerstone of the statistical mechanical analysis of the thermodynamic properties of equilibrium systems (see partition function).
A direct connection is thus made between the equilibrium thermodynamic entropy STh, a state function of pressure, volume, temperature, etc., and the information entropy for the predicted distribution with maximum uncertainty conditioned only on the expectation values of those variables:
{\displaystyle S_{\text{Th}}(P,V,T,\ldots )_{\text{(eqm)}}=k_{\text{B}}\,S_{\text{I}}(P,V,T,\ldots )}
kB, the Boltzmann constant, has no fundamental physical significance here, but is necessary to retain consistency with the previous historical definition of entropy by Clausius (1865) (see Boltzmann constant).
However, the MaxEnt school argue that the MaxEnt approach is a general technique of statistical inference, with applications far beyond this. It can therefore also be used to predict a distribution for "trajectories" Γ "over a period of time" by maximising:
{\displaystyle S_{\text{I}}=-\sum p_{\Gamma }\ln p_{\Gamma }}
This "information entropy" does not necessarily have a simple correspondence with thermodynamic entropy. But it can be used to predict features of nonequilibrium thermodynamic systems as they evolve over time.
For non-equilibrium scenarios, in an approximation that assumes local thermodynamic equilibrium, with the maximum entropy approach, the Onsager reciprocal relations and the Green–Kubo relations fall out directly. The approach also creates a theoretical framework for the study of some very special cases of far-from-equilibrium scenarios, making the derivation of the entropy production fluctuation theorem straightforward. For non-equilibrium processes, as is so for macroscopic descriptions, a general definition of entropy for microscopic statistical mechanical accounts is also lacking.
Technical note: For the reasons discussed in the article differential entropy, the simple definition of Shannon entropy ceases to be directly applicable for random variables with continuous probability distribution functions. Instead the appropriate quantity to maximize is the "relative information entropy",
{\displaystyle H_{\text{c}}=-\int p(x)\log {\frac {p(x)}{m(x)}}\,dx.}
Hc is the negative of the Kullback–Leibler divergence, or discrimination information, of m(x) from p(x), where m(x) is a prior invariant measure for the variable(s). The relative entropy Hc is never greater than zero, and can be thought of as (the negative of) the number of bits of uncertainty lost by fixing on p(x) rather than m(x). Unlike the Shannon entropy, the relative entropy Hc has the advantage of remaining finite and well-defined for continuous x, and invariant under 1-to-1 coordinate transformations. The two expressions coincide for discrete probability distributions, if one can make the assumption that m(xi) is uniform – i.e. the principle of equal a-priori probability, which underlies statistical thermodynamics.
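These two properties of Hc can be checked numerically. A small sketch (the distributions below are arbitrary illustrative choices): Hc is never positive, it vanishes when p = m, and for a uniform m over n states it differs from the Shannon entropy only by the constant ln n, so the two maximizations agree.

```python
import math

def relative_entropy(p, m):
    """H_c = -sum p_i ln(p_i / m_i): the negative KL divergence of m from p."""
    return -sum(pi * math.log(pi / mi) for pi, mi in zip(p, m) if pi > 0)

p = [0.7, 0.2, 0.1]
m = [1 / 3, 1 / 3, 1 / 3]           # uniform prior invariant measure

hc = relative_entropy(p, m)
shannon = -sum(pi * math.log(pi) for pi in p)
# For uniform m over n states: H_c = S_Shannon - ln(n) <= 0, so maximizing
# H_c and maximizing the Shannon entropy select the same distribution.
print(round(hc, 6), round(shannon - math.log(3), 6))
```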
== Philosophical implications ==
Adherents to the MaxEnt viewpoint take a clear position on some of the conceptual/philosophical questions in thermodynamics. This position is sketched below.
=== The nature of the probabilities in statistical mechanics ===
Jaynes (1985, 2003, et passim) discussed the concept of probability. According to the MaxEnt viewpoint, the probabilities in statistical mechanics are determined jointly by two factors: by respectively specified particular models for the underlying state space (e.g. Liouvillian phase space); and by respectively specified particular partial descriptions of the system (the macroscopic description of the system used to constrain the MaxEnt probability assignment). The probabilities are objective in the sense that, given these inputs, a uniquely defined probability distribution will result, the same for every rational investigator, independent of the subjectivity or arbitrary opinion of particular persons. The probabilities are epistemic in the sense that they are defined in terms of specified data and derived from those data by definite and objective rules of inference, the same for every rational investigator. Here the word epistemic, which refers to objective and impersonal scientific knowledge, the same for every rational investigator, is used in the sense that contrasts it with opiniative, which refers to the subjective or arbitrary beliefs of particular persons; this contrast was used by Plato and Aristotle, and stands reliable today.
Jaynes also used the word 'subjective' in this context because others have used it in this context. He accepted that in a sense, a state of knowledge has a subjective aspect, simply because it refers to thought, which is a mental process. But he emphasized that the principle of maximum entropy refers only to thought which is rational and objective, independent of the personality of the thinker. In general, from a philosophical viewpoint, the words 'subjective' and 'objective' are not contradictory; often an entity has both subjective and objective aspects. Jaynes explicitly rejected the criticism of some writers that, just because one can say that thought has a subjective aspect, thought is automatically non-objective. He explicitly rejected subjectivity as a basis for scientific reasoning, the epistemology of science; he required that scientific reasoning have a fully and strictly objective basis. Nevertheless, critics continue to attack Jaynes, alleging that his ideas are "subjective". One writer even goes so far as to label Jaynes' approach as "ultrasubjectivist", and to mention "the panic that the term subjectivism created amongst physicists".
The probabilities represent both the degree of knowledge and lack of information in the data and the model used in the analyst's macroscopic description of the system, and also what those data say about the nature of the underlying reality.
The fitness of the probabilities depends on whether the constraints of the specified macroscopic model are a sufficiently accurate and/or complete description of the system to capture all of the experimentally reproducible behavior. This cannot be guaranteed, a priori. For this reason MaxEnt proponents also call the method predictive statistical mechanics. The predictions can fail. But if they do, this is informative, because it signals the presence of new constraints needed to capture reproducible behavior in the system, which had not been taken into account.
=== Is entropy "real"? ===
The thermodynamic entropy (at equilibrium) is a function of the state variables of the model description. It is therefore as "real" as the other variables in the model description. If the model constraints in the probability assignment are a "good" description, containing all the information needed to predict reproducible experimental results, then that includes all of the results one could predict using the formulae involving entropy from classical thermodynamics. To that extent, the MaxEnt STh is as "real" as the entropy in classical thermodynamics.
Of course, in reality there is only one real state of the system. The entropy is not a direct function of that state. It is a function of the real state only through the (subjectively chosen) macroscopic model description.
=== Is ergodic theory relevant? ===
The Gibbsian ensemble idealizes the notion of repeating an experiment again and again on different systems, not again and again on the same system. So long-term time averages and the ergodic hypothesis, despite the intense interest in them in the first part of the twentieth century, strictly speaking are not relevant to the probability assignment for the state one might find the system in.
However, this changes if there is additional knowledge that the system is being prepared in a particular way some time before the measurement. One must then consider whether this gives further information which is still relevant at the time of measurement. The question of how 'rapidly mixing' different properties of the system are then becomes very much of interest. Information about some degrees of freedom of the combined system may become unusable very quickly; information about other properties of the system may go on being relevant for a considerable time.
If nothing else, the medium and long-run time correlation properties of the system are interesting subjects for experimentation in themselves. Failure to accurately predict them is a good indicator that relevant macroscopically determinable physics may be missing from the model.
=== The second law ===
According to Liouville's theorem for Hamiltonian dynamics, the hyper-volume of a cloud of points in phase space remains constant as the system evolves. Therefore, the information entropy must also remain constant, if we condition on the original information, and then follow each of those microstates forward in time:
{\displaystyle \Delta S_{\text{I}}=0\,}
However, as time evolves, that initial information we had becomes less directly accessible. Instead of being easily summarizable in the macroscopic description of the system, it increasingly relates to very subtle correlations between the positions and momenta of individual molecules. (Compare to Boltzmann's H-theorem.) Equivalently, it means that the probability distribution for the whole system, in 6N-dimensional phase space, becomes increasingly irregular, spreading out into long thin fingers rather than the initial tightly defined volume of possibilities.
Classical thermodynamics is built on the assumption that entropy is a state function of the macroscopic variables—i.e., that none of the history of the system matters, so that it can all be ignored.
The extended, wispy, evolved probability distribution, which still has the initial Shannon entropy STh(1), should reproduce the expectation values of the observed macroscopic variables at time t2. However it will no longer necessarily be a maximum entropy distribution for that new macroscopic description. On the other hand, the new thermodynamic entropy STh(2) assuredly will measure the maximum entropy distribution, by construction. Therefore, we expect:
{\displaystyle {S_{\text{Th}}}^{(2)}\geq {S_{\text{Th}}}^{(1)}}
At an abstract level, this result implies that some of the information we originally had about the system has become "no longer useful" at a macroscopic level. At the level of the 6N-dimensional probability distribution, this result represents coarse graining—i.e., information loss by smoothing out very fine-scale detail.
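The coarse-graining remark admits a compact numerical illustration (a sketch, with an arbitrarily chosen one-dimensional probability vector and a three-cell averaging kernel standing in for the smoothing of fine-scale detail): repeatedly averaging neighbouring cells is a doubly stochastic map, so it can only increase the Shannon entropy.

```python
import math

def shannon(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def smooth(p):
    """One coarse-graining step: average each cell with its two neighbours
    (periodic boundary). The map is doubly stochastic, so by concavity of
    the entropy it is non-decreasing under this smoothing."""
    n = len(p)
    return [(p[(i - 1) % n] + p[i] + p[(i + 1) % n]) / 3 for i in range(n)]

p = [0.9, 0.05, 0.03, 0.02, 0.0, 0.0, 0.0, 0.0]   # sharply peaked start
entropies = [shannon(p)]
for _ in range(5):
    p = smooth(p)
    entropies.append(shannon(p))
print([round(s, 4) for s in entropies])           # non-decreasing sequence
```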
=== Caveats with the argument ===
Some caveats should be considered with the above.
1. Like all statistical mechanical results according to the MaxEnt school, this increase in thermodynamic entropy is only a prediction. It assumes in particular that the initial macroscopic description contains all of the information relevant to predicting the later macroscopic state. This may not be the case, for example if the initial description fails to reflect some aspect of the preparation of the system which later becomes relevant. In that case the "failure" of a MaxEnt prediction tells us that there is something more which is relevant that we may have overlooked in the physics of the system.
It is also sometimes suggested that quantum measurement, especially in the decoherence interpretation, may give an apparently unexpected reduction in entropy per this argument, as it appears to involve macroscopic information becoming available which was previously inaccessible. (However, the entropy accounting of quantum measurement is tricky, because to get full decoherence one may be assuming an infinite environment, with an infinite entropy).
2. The argument so far has glossed over the question of fluctuations. It has also implicitly assumed that the uncertainty predicted at time t1 for the variables at time t2 will be much smaller than the measurement error. But if the measurements do meaningfully update our knowledge of the system, our uncertainty as to its state is reduced, giving a new SI(2) which is less than SI(1). (Note that if we allow ourselves the abilities of Laplace's demon, the consequences of this new information can also be mapped backwards, so our uncertainty about the dynamical state at time t1 is now also reduced from SI(1) to SI(2)).
We know that STh(2) > SI(2); but we can now no longer be certain that it is greater than STh(1) = SI(1). This then leaves open the possibility for fluctuations in STh. The thermodynamic entropy may go "down" as well as up. A more sophisticated analysis is given by the entropy Fluctuation Theorem, which can be established as a consequence of the time-dependent MaxEnt picture.
3. As just indicated, the MaxEnt inference runs equally well in reverse. So given a particular final state, we can ask, what can we "retrodict" to improve our knowledge about earlier states? However the Second Law argument above also runs in reverse: given macroscopic information at time t2, we should expect it too to become less useful. The two procedures are time-symmetric. But now the information will become less and less useful at earlier and earlier times. (Compare with Loschmidt's paradox.) The MaxEnt inference would predict that the most probable origin of a currently low-entropy state would be as a spontaneous fluctuation from an earlier high entropy state. But this conflicts with what we know to have happened, namely that entropy has been increasing steadily, even back in the past.
The MaxEnt proponents' response to this would be that such a systematic failing in the prediction of a MaxEnt inference is a "good" thing. It means that there is thus clear evidence that some important physical information has been missed in the specification the problem. If it is correct that the dynamics "are" time-symmetric, it appears that we need to put in by hand a prior probability that initial configurations with a low thermodynamic entropy are more likely than initial configurations with a high thermodynamic entropy. This cannot be explained by the immediate dynamics. Quite possibly, it arises as a reflection of the evident time-asymmetric evolution of the universe on a cosmological scale (see arrow of time).
== Criticisms ==
The Maximum Entropy thermodynamics has some important opposition, in part because of the relative paucity of published results from the MaxEnt school, especially with regard to new testable predictions far-from-equilibrium.
The theory has also been criticized on the grounds of internal consistency. For instance, Radu Balescu provides a strong criticism of the MaxEnt school and of Jaynes' work. Balescu states that the theory of Jaynes and coworkers is based on a non-transitive evolution law that produces ambiguous results. Although some difficulties of the theory can be cured, the theory "lacks a solid foundation" and "has not led to any new concrete result".
Though the maximum entropy approach is based directly on informational entropy, it is applicable to physics only when there is a clear physical definition of entropy. There is no clear unique general physical definition of entropy for non-equilibrium systems, which are general physical systems considered during a process rather than thermodynamic systems in their own internal states of thermodynamic equilibrium. It follows that the maximum entropy approach will not be applicable to non-equilibrium systems until there is found a clear physical definition of entropy. This problem is related to the fact that heat may be transferred from a hotter to a colder physical system even when local thermodynamic equilibrium does not hold so that neither system has a well defined temperature. Classical entropy is defined for a system in its own internal state of thermodynamic equilibrium, which is defined by state variables, with no non-zero fluxes, so that flux variables do not appear as state variables. But for a strongly non-equilibrium system, during a process, the state variables must include non-zero flux variables. Classical physical definitions of entropy do not cover this case, especially when the fluxes are large enough to destroy local thermodynamic equilibrium. In other words, for entropy for non-equilibrium systems in general, the definition will need at least to involve specification of the process including non-zero fluxes, beyond the classical static thermodynamic state variables. The 'entropy' that is maximized needs to be defined suitably for the problem at hand. If an inappropriate 'entropy' is maximized, a wrong result is likely. In principle, maximum entropy thermodynamics does not refer narrowly and only to classical thermodynamic entropy. It is about informational entropy applied to physics, explicitly depending on the data used to formulate the problem at hand. 
According to Attard, for physical problems analyzed by strongly non-equilibrium thermodynamics, several physically distinct kinds of entropy need to be considered, including what he calls second entropy. Attard writes: "Maximizing the second entropy over the microstates in the given initial macrostate gives the most likely target macrostate." The physically defined second entropy can also be considered from an informational viewpoint.
== See also ==
Edwin Thompson Jaynes
First law of thermodynamics
Second law of thermodynamics
Principle of maximum entropy
Principle of Minimum Discrimination Information
Kullback–Leibler divergence
Quantum relative entropy
Information theory and measure theory
Entropy power inequality
== References ==
=== Bibliography of cited references ===
Balescu, Radu (1997). Statistical Dynamics: Matter out of equilibrium. London: Imperial College Press. Bibcode:1997sdmo.book.....B.
Jaynes, E.T. (September 1968). "Prior Probabilities" (PDF). IEEE Transactions on Systems Science and Cybernetics. SSC–4 (3): 227–241. doi:10.1109/TSSC.1968.300117.
Guttmann, Y.M. (1999). The Concept of Probability in Statistical Physics, Cambridge University Press, Cambridge UK, ISBN 978-0-521-62128-1.
Jaynes, E.T. (1979). "Where do we stand on maximum entropy?" (PDF). In Levine, R.; Tribus M. (eds.). The Maximum Entropy Formalism. MIT Press. ISBN 978-0-262-12080-7.
Jaynes, E.T. (1985). "Some random observations". Synthese. 63: 115–138. doi:10.1007/BF00485957. S2CID 46975520.
Jaynes, E.T. (2003). Bretthorst, G.L. (ed.). Probability Theory: The Logic of Science. Cambridge: Cambridge University Press. ISBN 978-0-521-59271-0.
Kleidon, Axel; Lorenz, Ralph D. (2005). Non-equilibrium thermodynamics and the production of entropy: life, earth, and beyond. Springer. pp. 42–. ISBN 978-3-540-22495-2.
== Further reading ==
In statistical mechanics, the two-dimensional square lattice Ising model is a simple lattice model of interacting magnetic spins. The model is notable for having nontrivial interactions, yet having an analytical solution. The model was solved by Lars Onsager for the special case that the external magnetic field H = 0. An analytical solution for the general case for {\displaystyle H\neq 0} has yet to be found.
== Defining the partition function ==
Consider a 2D Ising model on a square lattice {\displaystyle \Lambda } with N sites and periodic boundary conditions in both the horizontal and vertical directions, which effectively reduces the topology of the model to a torus. Generally, the horizontal coupling {\displaystyle J} and the vertical coupling {\displaystyle J^{*}} are not equal. With {\displaystyle \textstyle \beta ={\frac {1}{kT}}} and absolute temperature {\displaystyle T} and the Boltzmann constant {\displaystyle k}, the partition function is
{\displaystyle Z_{N}(K\equiv \beta J,L\equiv \beta J^{*})=\sum _{\{\sigma \}}\exp \left(K\sum _{\langle ij\rangle _{H}}\sigma _{i}\sigma _{j}+L\sum _{\langle ij\rangle _{V}}\sigma _{i}\sigma _{j}\right).}
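For very small lattices the partition function can be evaluated directly by enumerating all spin configurations, which is useful for checking the exact results below. The following sketch (names and lattice sizes are illustrative) implements the sum above on an n × m torus; with m = 2 each pair of horizontal neighbours is joined by two parallel bonds, which is the standard torus (multigraph) convention.

```python
import math
from itertools import product

def ising_Z(n, m, K, L):
    """Brute-force partition function of an n x m square-lattice Ising model
    with periodic boundary conditions (a torus): K = beta*J (horizontal),
    L = beta*J* (vertical). Cost grows as 2**(n*m), so keep lattices tiny."""
    sites = [(i, j) for i in range(n) for j in range(m)]
    Z = 0.0
    for spins in product((-1, 1), repeat=n * m):
        s = dict(zip(sites, spins))
        energy = sum(K * s[(i, j)] * s[(i, (j + 1) % m)] +
                     L * s[(i, j)] * s[((i + 1) % n, j)]
                     for i, j in sites)
        Z += math.exp(energy)
    return Z

# At K = L = 0 every configuration has weight 1, so Z = 2^N.
print(ising_Z(2, 2, 0.0, 0.0))   # 16.0
```

For a square torus, rotating the lattice by 90° swaps the roles of the couplings, so Z(K, L) = Z(L, K).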
== Critical temperature ==
The critical temperature {\displaystyle T_{\text{c}}} can be obtained from the Kramers–Wannier duality relation. Denoting the free energy per site as {\displaystyle F(K,L)}, one has:
{\displaystyle \beta F\left(K^{*},L^{*}\right)=\beta F\left(K,L\right)+{\frac {1}{2}}\log {\big [}\sinh \left(2K\right)\sinh \left(2L\right){\big ]}}
where
{\displaystyle \sinh \left(2K^{*}\right)\sinh \left(2L\right)=1}
{\displaystyle \sinh \left(2L^{*}\right)\sinh \left(2K\right)=1}
Assuming that there is only one critical line in the (K, L) plane, the duality relation implies that this is given by:
{\displaystyle \sinh \left(2K\right)\sinh \left(2L\right)=1}
For the isotropic case {\displaystyle J=J^{*}}, one finds the famous relation for the critical temperature {\displaystyle T_{c}}
{\displaystyle {\frac {kT_{\text{c}}}{J}}={\frac {2}{\ln(1+{\sqrt {2}})}}\approx 2.26918531421}
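The self-dual point can also be located numerically: in the isotropic case the critical condition reduces to sinh(2K)² = 1, and a simple bisection recovers the closed-form value. A small check (the bracketing interval is an arbitrary choice):

```python
import math

# Isotropic case K = L = J/(kT): the self-dual point solves sinh(2K)^2 = 1,
# i.e. sinh(2Kc) = 1, giving kTc/J = 1/Kc = 2/ln(1 + sqrt(2)).
def f(K):
    return math.sinh(2 * K) ** 2 - 1   # increasing in K

lo, hi = 0.1, 1.0                      # f(lo) < 0 < f(hi)
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
Kc = (lo + hi) / 2
print(round(1 / Kc, 8), round(2 / math.log(1 + math.sqrt(2)), 8))
```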
== Dual lattice ==
Consider a configuration of spins {\displaystyle \{\sigma \}} on the square lattice {\displaystyle \Lambda }. Let r and s denote the number of unlike neighbours in the vertical and horizontal directions respectively. Then the summand in {\displaystyle Z_{N}} corresponding to {\displaystyle \{\sigma \}} is given by
{\displaystyle e^{K(N-2s)+L(N-2r)}}
Construct a dual lattice {\displaystyle \Lambda _{D}} as depicted in the diagram. For every configuration {\displaystyle \{\sigma \}}, a polygon is associated to the lattice by drawing a line on the edge of the dual lattice if the spins separated by the edge are unlike. Since by traversing a vertex of {\displaystyle \Lambda } the spins need to change an even number of times so that one arrives at the starting point with the same charge, every vertex of the dual lattice is connected to an even number of lines in the configuration, defining a polygon.
This reduces the partition function to
{\displaystyle Z_{N}(K,L)=2e^{N(K+L)}\sum _{P\subset \Lambda _{D}}e^{-2Lr-2Ks}}
summing over all polygons in the dual lattice, where r and s are the number of horizontal and vertical lines in the polygon, with the factor of 2 arising from the inversion of spin configuration.
== Low-temperature expansion ==
At low temperatures, K, L approach infinity, so that as {\displaystyle T\rightarrow 0}, {\displaystyle e^{-K},e^{-L}\rightarrow 0}, and
{\displaystyle Z_{N}(K,L)=2e^{N(K+L)}\sum _{P\subset \Lambda _{D}}e^{-2Lr-2Ks}}
defines a low temperature expansion of {\displaystyle Z_{N}(K,L)}.
== High-temperature expansion ==
Since {\displaystyle \sigma \sigma '=\pm 1}, one has
{\displaystyle e^{K\sigma \sigma '}=\cosh K+\sinh K(\sigma \sigma ')=\cosh K(1+\tanh K(\sigma \sigma ')).}
Therefore
{\displaystyle Z_{N}(K,L)=(\cosh K\cosh L)^{N}\sum _{\{\sigma \}}\prod _{\langle ij\rangle _{H}}(1+v\sigma _{i}\sigma _{j})\prod _{\langle ij\rangle _{V}}(1+w\sigma _{i}\sigma _{j})}
where {\displaystyle v=\tanh K} and {\displaystyle w=\tanh L}. Since there are N horizontal and N vertical edges, there are a total of {\displaystyle 2^{2N}} terms in the expansion. Every term corresponds to a configuration of lines of the lattice, by associating a line connecting i and j if the term {\displaystyle v\sigma _{i}\sigma _{j}} (or {\displaystyle w\sigma _{i}\sigma _{j}}) is chosen in the product. Summing over the configurations, using
{\displaystyle \sum _{\sigma _{i}=\pm 1}\sigma _{i}^{n}={\begin{cases}0&{\mbox{for }}n{\mbox{ odd}}\\2&{\mbox{for }}n{\mbox{ even}}\end{cases}}}
shows that only configurations with an even number of lines at each vertex (polygons) will contribute to the partition function, giving
{\displaystyle Z_{N}(K,L)=2^{N}(\cosh K\cosh L)^{N}\sum _{P\subset \Lambda }v^{r}w^{s}}
where the sum is over all polygons in the lattice. Since tanh K, tanh L {\displaystyle \rightarrow 0} as {\displaystyle T\rightarrow \infty }, this gives the high temperature expansion of {\displaystyle Z_{N}(K,L)}.
The two expansions can be related using the Kramers–Wannier duality.
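The polygon identity can be verified exactly on a 2 × 2 torus, where both sides are small enough to enumerate (a sketch; the couplings are arbitrary, and the torus is treated as a multigraph in which each pair of neighbouring sites is joined by two parallel bonds, so "polygons" are the edge subsets with even degree at every vertex).

```python
import math
from itertools import product

# Check Z_N = 2^N (cosh K cosh L)^N * sum over even subgraphs of v^r w^s.
n = 2
N = n * n
K, L = 0.37, 0.21
sites = [(i, j) for i in range(n) for j in range(n)]
hbonds = [((i, j), (i, (j + 1) % n)) for i, j in sites]
vbonds = [((i, j), ((i + 1) % n, j)) for i, j in sites]
bonds = [(a, b, 'h') for a, b in hbonds] + [(a, b, 'v') for a, b in vbonds]

# Left-hand side: brute force over the 2^N spin configurations.
Z = 0.0
for spins in product((-1, 1), repeat=N):
    s = dict(zip(sites, spins))
    E = sum((K if t == 'h' else L) * s[a] * s[b] for a, b, t in bonds)
    Z += math.exp(E)

# Right-hand side: sum v^r w^s over even-degree edge subsets ("polygons").
v, w = math.tanh(K), math.tanh(L)
poly = 0.0
for choice in product((0, 1), repeat=len(bonds)):
    deg = {site: 0 for site in sites}
    r_cnt = s_cnt = 0
    for c, (a, b, t) in zip(choice, bonds):
        if c:
            deg[a] += 1
            deg[b] += 1
            if t == 'h':
                r_cnt += 1
            else:
                s_cnt += 1
    if all(d % 2 == 0 for d in deg.values()):
        poly += v ** r_cnt * w ** s_cnt
rhs = 2 ** N * (math.cosh(K) * math.cosh(L)) ** N * poly
print(Z, rhs)   # the two sides agree
```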
== Exact solution ==
The free energy per site in the limit {\displaystyle N\to \infty } is given as follows. Define the parameter {\displaystyle k} as
{\displaystyle k={\frac {1}{\sinh \left(2K\right)\sinh \left(2L\right)}}}
The Helmholtz free energy per site {\displaystyle F} can be expressed as
{\displaystyle -\beta F={\frac {\log(2)}{2}}+{\frac {1}{2\pi }}\int _{0}^{\pi }\log \left[\cosh \left(2K\right)\cosh \left(2L\right)+{\frac {1}{k}}{\sqrt {1+k^{2}-2k\cos(2\theta )}}\right]d\theta }
For the isotropic case {\displaystyle J=J^{*}}, from the above expression one finds for the internal energy per site:
{\displaystyle U=-J\coth(2\beta J)\left[1+{\frac {2}{\pi }}(2\tanh ^{2}(2\beta J)-1)\int _{0}^{\pi /2}{\frac {1}{\sqrt {1-4k(1+k)^{-2}\sin ^{2}(\theta )}}}d\theta \right]}
and the spontaneous magnetization is, for {\displaystyle T<T_{\text{c}}},
{\displaystyle M=\left[1-\sinh ^{-4}(2\beta J)\right]^{1/8}}
and {\displaystyle M=0} for {\displaystyle T\geq T_{\text{c}}}.
== Notes ==
== References ==
Baxter, Rodney J. (1982), Exactly solved models in statistical mechanics (PDF), London: Academic Press Inc. [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-083180-7, MR 0690578
Baxter, Rodney J. (2016). "The bulk, surface and corner free energies of the square lattice Ising model". Journal of Physics A: Mathematical and Theoretical. 50 (1). IOP Publishing: 014001. arXiv:1606.02029. doi:10.1088/1751-8113/50/1/014001. ISSN 1751-8113. S2CID 2467419.
Kurt Binder (2001) [1994], "Ising model", Encyclopedia of Mathematics, EMS Press
BRUSH, STEPHEN G. (1967-10-01). "History of the Lenz-Ising Model". Reviews of Modern Physics. 39 (4). American Physical Society (APS): 883–893. Bibcode:1967RvMP...39..883B. doi:10.1103/revmodphys.39.883. ISSN 0034-6861.
Huang, Kerson (1987), Statistical mechanics (2nd edition), Wiley, ISBN 978-0471815181
Hucht, Alfred (2021). "The square lattice Ising model on the rectangle III: Hankel and Toeplitz determinants". Journal of Physics A: Mathematical and Theoretical. 54 (37). IOP Publishing: 375201. arXiv:2103.10776. Bibcode:2021JPhA...54K5201H. doi:10.1088/1751-8121/ac0983. ISSN 1751-8113. S2CID 232290629.
Ising, Ernst (1925), "Beitrag zur Theorie des Ferromagnetismus", Z. Phys., 31 (1): 253–258, Bibcode:1925ZPhy...31..253I, doi:10.1007/BF02980577, S2CID 122157319
Itzykson, Claude; Drouffe, Jean-Michel (1989), Théorie statistique des champs, Volume 1, Savoirs actuels (CNRS), EDP Sciences Editions, ISBN 978-2868833600
Itzykson, Claude; Drouffe, Jean-Michel (1989), Statistical field theory, Volume 1: From Brownian motion to renormalization and lattice gauge theory, Cambridge University Press, ISBN 978-0521408059
Barry M. McCoy and Tai Tsun Wu (1973), The Two-Dimensional Ising Model. Harvard University Press, Cambridge Massachusetts, ISBN 0-674-91440-6
Montroll, Elliott W.; Potts, Renfrey B.; Ward, John C. (1963), "Correlations and spontaneous magnetization of the two-dimensional Ising model", Journal of Mathematical Physics, 4 (2): 308–322, Bibcode:1963JMP.....4..308M, doi:10.1063/1.1703955, ISSN 0022-2488, MR 0148406, archived from the original on 2013-01-12
Onsager, Lars (1944), "Crystal statistics. I. A two-dimensional model with an order-disorder transition", Phys. Rev., Series II, 65 (3–4): 117–149, Bibcode:1944PhRv...65..117O, doi:10.1103/PhysRev.65.117, MR 0010315
Onsager, Lars (1949), "Discussion", Supplemento al Nuovo Cimento, 6: 261
John Palmer (2007), Planar Ising Correlations. Birkhäuser, Boston, ISBN 978-0-8176-4248-8.
Yang, C. N. (1952), "The spontaneous magnetization of a two-dimensional Ising model", Physical Review, Series II, 85 (5): 808–816, Bibcode:1952PhRv...85..808Y, doi:10.1103/PhysRev.85.808, MR 0051740
In thermodynamics, the Gibbs free energy (or Gibbs energy as the recommended name; symbol {\displaystyle G}) is a thermodynamic potential that can be used to calculate the maximum amount of work, other than pressure–volume work, that may be performed by a thermodynamically closed system at constant temperature and pressure. It also provides a necessary condition for processes such as chemical reactions that may occur under these conditions. The Gibbs free energy is expressed as
{\displaystyle G(p,T)=U+pV-TS=H-TS}
where:
{\textstyle U} is the internal energy of the system
{\textstyle H} is the enthalpy of the system
{\textstyle S} is the entropy of the system
{\textstyle T} is the temperature of the system
{\textstyle V} is the volume of the system
{\textstyle p} is the pressure of the system (which must be equal to that of the surroundings for mechanical equilibrium).
The Gibbs free energy change ({\displaystyle \Delta G=\Delta H-T\Delta S}, measured in joules in SI) is the maximum amount of non-volume expansion work that can be extracted from a closed system (one that can exchange heat and work with its surroundings, but not matter) at fixed temperature and pressure. This maximum can be attained only in a completely reversible process. When a system transforms reversibly from an initial state to a final state under these conditions, the decrease in Gibbs free energy equals the work done by the system on its surroundings, minus the work of the pressure forces.
The Gibbs energy is the thermodynamic potential that is minimized when a system reaches chemical equilibrium at constant pressure and temperature when not driven by an applied electrolytic voltage. Its derivative with respect to the reaction coordinate of the system then vanishes at the equilibrium point. As such, a reduction in {\displaystyle G} is necessary for a reaction to be spontaneous under these conditions.
The concept of Gibbs free energy, originally called available energy, was developed in the 1870s by the American scientist Josiah Willard Gibbs. In 1873, Gibbs described this "available energy" as:
the greatest amount of mechanical work which can be obtained from a given quantity of a certain substance in a given initial state, without increasing its total volume or allowing heat to pass to or from external bodies, except such as at the close of the processes are left in their initial condition.
The initial state of the body, according to Gibbs, is supposed to be such that "the body can be made to pass from it to states of dissipated energy by reversible processes". In his 1876 magnum opus On the Equilibrium of Heterogeneous Substances, a graphical analysis of multi-phase chemical systems, he engaged his thoughts on chemical free energy in full.
If the reactants and products are all in their thermodynamic standard states, then the defining equation is written as
{\displaystyle \Delta G^{\circ }=\Delta H^{\circ }-T\Delta S^{\circ }}
, where {\displaystyle H} is enthalpy, {\displaystyle T} is absolute temperature, and {\displaystyle S} is entropy.
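As a minimal numerical sketch of ΔG° = ΔH° − TΔS°, the snippet below evaluates the sign of ΔG for an exothermic, entropy-decreasing reaction at two temperatures; the ΔH and ΔS values are illustrative placeholders, not measured data.

```python
# Sketch: evaluating ΔG = ΔH − TΔS and checking spontaneity (ΔG < 0).
# The thermochemical values below are illustrative, not real data.

def gibbs_free_energy_change(delta_h, temperature, delta_s):
    """Return ΔG = ΔH − TΔS (J/mol) given ΔH (J/mol), T (K), ΔS (J/(mol·K))."""
    return delta_h - temperature * delta_s

# A reaction with ΔH = −100 kJ/mol and ΔS = −150 J/(mol·K):
dg_298 = gibbs_free_energy_change(-100e3, 298.15, -150.0)
print(f"ΔG at 298 K: {dg_298/1e3:.1f} kJ/mol, spontaneous: {dg_298 < 0}")

# The same reaction becomes non-spontaneous above T = ΔH/ΔS ≈ 667 K:
dg_800 = gibbs_free_energy_change(-100e3, 800.0, -150.0)
print(f"ΔG at 800 K: {dg_800/1e3:.1f} kJ/mol, spontaneous: {dg_800 < 0}")
```

This illustrates the competing enthalpic and entropic terms: a reaction driven by enthalpy but opposed by entropy flips from spontaneous to non-spontaneous as T grows.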
== Overview ==
According to the second law of thermodynamics, for systems reacting at fixed temperature and pressure without input of non-pressure-volume (pV) work, there is a general natural tendency to achieve a minimum of the Gibbs free energy.
A quantitative measure of the favorability of a given reaction under these conditions is the change ΔG (sometimes written "delta G" or "dG") in Gibbs free energy that is (or would be) caused by the reaction. As a necessary condition for the reaction to occur at constant temperature and pressure, ΔG must be smaller than the non-pressure-volume (non-pV, e.g. electrical) work, which is often equal to zero (then ΔG must be negative). ΔG equals the maximum amount of non-pV work that can be performed as a result of the chemical reaction for the case of a reversible process. If analysis indicates a positive ΔG for a reaction, then energy — in the form of electrical or other non-pV work — would have to be added to the reacting system for ΔG to be smaller than the non-pV work and make it possible for the reaction to occur.: 298–299
One can think of ∆G as the amount of "free" or "useful" energy available to do non-pV work at constant temperature and pressure. The equation can be also seen from the perspective of the system taken together with its surroundings (the rest of the universe). First, one assumes that the given reaction at constant temperature and pressure is the only one that is occurring. Then the entropy released or absorbed by the system equals the entropy that the environment must absorb or release, respectively. The reaction will only be allowed if the total entropy change of the universe is zero or positive. This is reflected in a negative ΔG, and the reaction is called an exergonic process.
If two chemical reactions are coupled, then an otherwise endergonic reaction (one with positive ΔG) can be made to happen. The input of heat into an inherently endergonic reaction, such as the elimination of cyclohexanol to cyclohexene, can be seen as coupling an unfavorable reaction (elimination) to a favorable one (burning of coal or other provision of heat) such that the total entropy change of the universe is greater than or equal to zero, making the total Gibbs free energy change of the coupled reactions negative.
In traditional use, the term "free" was included in "Gibbs free energy" to mean "available in the form of useful work". The characterization becomes more precise if we add the qualification that it is the energy available for non-pressure-volume work. (An analogous, but slightly different, meaning of "free" applies in conjunction with the Helmholtz free energy, for systems at constant temperature). However, an increasing number of books and journal articles do not include the attachment "free", referring to G as simply "Gibbs energy". This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the removal of the adjective "free" was recommended. This standard, however, has not yet been universally adopted.
The name "free enthalpy" was also used for G in the past.
== History ==
The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which was used by chemists in the earlier years of physical chemistry to describe the force that caused chemical reactions.
In 1873, Josiah Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, in which he sketched the principles of his new equation that was able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies composed of part solid, part liquid, and part vapor, and by using a three-dimensional volume-entropy-internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes would ensue. Further, Gibbs stated:
If we wish to express in a single equation the necessary and sufficient condition of thermodynamic equilibrium for a substance when surrounded by a medium of constant pressure p and temperature T, this equation may be written:
δ(ε − Tη + pν) = 0
when δ refers to the variation produced by any variations in the state of the parts of the body, and (when different parts of the body are in different states) in the proportion in which the body is divided between the different states. The condition of stable equilibrium is that the value of the expression in the parenthesis shall be a minimum.
In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body...
Thereafter, in 1882, the German scientist Hermann von Helmholtz characterized the affinity as the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at T = constant, P = constant or Helmholtz free energy F at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or F is the amount of energy "free" for work under the given conditions.
Until this point, the general view had been such that: "all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish". Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world.: 206
== Definitions ==
The Gibbs free energy is defined as
{\displaystyle G(p,T)=U+pV-TS,}
which is the same as
{\displaystyle G(p,T)=H-TS,}
where:
U is the internal energy (SI unit: joule),
p is pressure (SI unit: pascal),
V is volume (SI unit: m3),
T is the temperature (SI unit: kelvin),
S is the entropy (SI unit: joule per kelvin),
H is the enthalpy (SI unit: joule).
The expression for the infinitesimal reversible change in the Gibbs free energy as a function of its "natural variables" p and T, for an open system, subjected to the operation of external forces (for instance, electrical or magnetic) Xi, which cause the external parameters of the system ai to change by an amount dai, can be derived as follows from the first law for reversible processes:
{\displaystyle {\begin{aligned}T\,\mathrm {d} S&=\mathrm {d} U+p\,\mathrm {d} V-\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}+\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \\\mathrm {d} (TS)-S\,\mathrm {d} T&=\mathrm {d} U+\mathrm {d} (pV)-V\,\mathrm {d} p-\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}+\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \\\mathrm {d} (U-TS+pV)&=V\,\mathrm {d} p-S\,\mathrm {d} T+\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}-\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \\\mathrm {d} G&=V\,\mathrm {d} p-S\,\mathrm {d} T+\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}-\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \end{aligned}}}
where:
μi is the chemical potential of the ith chemical component. (SI unit: joules per particle or joules per mole)
Ni is the number of particles (or number of moles) composing the ith chemical component.
This is one form of the Gibbs fundamental equation. In the infinitesimal expression, the term involving the chemical potential accounts for changes in Gibbs free energy resulting from an influx or outflux of particles. In other words, it holds for an open system or for a closed, chemically reacting system where the Ni are changing. For a closed, non-reacting system, this term may be dropped.
Any number of extra terms may be added, depending on the particular system being considered. Aside from mechanical work, a system may, in addition, perform numerous other types of work. For example, in the infinitesimal expression, the contractile work energy associated with a thermodynamic system that is a contractile fiber that shortens by an amount −dl under a force f would result in a term f dl being added. If a quantity of charge −de is acquired by a system at an electrical potential Ψ, the electrical work associated with this is −Ψ de, which would be included in the infinitesimal expression. Other work terms are added on per system requirements.
Each quantity in the equations above can be divided by the amount of substance, measured in moles, to form molar Gibbs free energy. The Gibbs free energy is one of the most important thermodynamic functions for the characterization of a system. It is a factor in determining outcomes such as the voltage of an electrochemical cell, and the equilibrium constant for a reversible reaction. In isothermal, isobaric systems, Gibbs free energy can be thought of as a "dynamic" quantity, in that it is a representative measure of the competing effects of the enthalpic and entropic driving forces involved in a thermodynamic process.
The temperature dependence of the Gibbs energy for an ideal gas is given by the Gibbs–Helmholtz equation, and its pressure dependence is given by
{\displaystyle {\frac {G}{N}}={\frac {G^{\circ }}{N}}+kT\ln {\frac {p}{p^{\circ }}}.}
or more conveniently as its chemical potential:
{\displaystyle {\frac {G}{N}}=\mu =\mu ^{\circ }+kT\ln {\frac {p}{p^{\circ }}}.}
In non-ideal systems, fugacity comes into play.
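The pressure dependence μ = μ° + kT ln(p/p°) can be sketched numerically as below; the reference state μ° = 0 and the pressures are illustrative, and k is the Boltzmann constant (per-particle form).

```python
import math

# Sketch of the ideal-gas pressure dependence μ = μ° + kT·ln(p/p°).
# μ° = 0 and the pressures are illustrative values.

K_B = 1.380649e-23  # Boltzmann constant, J/K

def chemical_potential(mu_standard, temperature, pressure, p_standard=1e5):
    """Per-particle chemical potential (J) of an ideal gas at pressure p (Pa)."""
    return mu_standard + K_B * temperature * math.log(pressure / p_standard)

# Doubling the pressure at 298.15 K raises μ by kT·ln 2 ≈ 2.85e-21 J:
shift = chemical_potential(0.0, 298.15, 2e5) - chemical_potential(0.0, 298.15, 1e5)
print(f"kT ln 2 = {shift:.3e} J")
```

Multiplying by the Avogadro constant gives the molar form RT ln(p/p°) used in chemical thermodynamics tables.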
== Derivation ==
The Gibbs free energy total differential with respect to natural variables may be derived by Legendre transforms of the internal energy.
{\displaystyle \mathrm {d} U=T\,\mathrm {d} S-p\,\mathrm {d} V+\sum _{i}\mu _{i}\,\mathrm {d} N_{i}.}
The definition of G from above is
{\displaystyle G=U+pV-TS}.
Taking the total differential, we have
{\displaystyle \mathrm {d} G=\mathrm {d} U+p\,\mathrm {d} V+V\,\mathrm {d} p-T\,\mathrm {d} S-S\,\mathrm {d} T.}
Replacing dU with the result from the first law gives
{\displaystyle {\begin{aligned}\mathrm {d} G&=T\,\mathrm {d} S-p\,\mathrm {d} V+\sum _{i}\mu _{i}\,\mathrm {d} N_{i}+p\,\mathrm {d} V+V\,\mathrm {d} p-T\,\mathrm {d} S-S\,\mathrm {d} T\\&=V\,\mathrm {d} p-S\,\mathrm {d} T+\sum _{i}\mu _{i}\,\mathrm {d} N_{i}.\end{aligned}}}
The natural variables of G are then p, T, and {Ni}.
=== Homogeneous systems ===
Because S, V, and Ni are extensive variables, an Euler relation allows easy integration of dU:
{\displaystyle U=TS-pV+\sum _{i}\mu _{i}N_{i}.}
Because some of the natural variables of G are intensive, dG may not be integrated using Euler relations as is the case with internal energy. However, simply substituting the above integrated result for U into the definition of G gives a standard expression for G:
{\displaystyle {\begin{aligned}G&=U+pV-TS\\&=\left(TS-pV+\sum _{i}\mu _{i}N_{i}\right)+pV-TS\\&=\sum _{i}\mu _{i}N_{i}.\end{aligned}}}
This result shows that the chemical potential of a substance {\displaystyle i} is its (partial) molar (or molecular) Gibbs free energy. It applies to homogeneous, macroscopic systems, but not to all thermodynamic systems.
== Gibbs free energy of reactions ==
The system under consideration is held at constant temperature and pressure, and is closed (no matter can come in or out). The Gibbs energy of any system is
{\displaystyle G=U+pV-TS}
and an infinitesimal change in G, at constant temperature and pressure, yields
{\displaystyle dG=dU+pdV-TdS.}
By the first law of thermodynamics, a change in the internal energy U is given by
{\displaystyle dU=\delta Q+\delta W}
where δQ is energy added as heat, and δW is energy added as work. The work done on the system may be written as δW = −pdV + δWx, where −pdV is the mechanical work of compression/expansion done on or by the system and δWx is all other forms of work, which may include electrical, magnetic, etc. Then
{\displaystyle dU=\delta Q-pdV+\delta W_{x}}
and the infinitesimal change in G is
{\displaystyle dG=\delta Q-TdS+\delta W_{x}.}
The second law of thermodynamics states that for a closed system at constant temperature (in a heat bath),
{\displaystyle TdS\geq \delta Q}
, and so it follows that
{\displaystyle dG\leq \delta W_{x}}
Assuming that only mechanical work is done, this simplifies to
{\displaystyle dG\leq 0}
This means that for such a system when not in equilibrium, the Gibbs energy will always be decreasing, and in equilibrium, the infinitesimal change dG will be zero. In particular, this will be true if the system is experiencing any number of internal chemical reactions on its path to equilibrium.
=== In electrochemical thermodynamics ===
When electric charge dQele is passed between the electrodes of an electrochemical cell generating an emf
E
{\displaystyle {\mathcal {E}}}
, an electrical work term appears in the expression for the change in Gibbs energy:
{\displaystyle dG=-SdT+Vdp+{\mathcal {E}}dQ_{ele},}
where S is the entropy, V is the system volume, p is its pressure and T is its absolute temperature.
The combination ({\displaystyle {\mathcal {E}}}
, Qele) is an example of a conjugate pair of variables. At constant pressure the above equation produces a Maxwell relation that links the change in open cell voltage with temperature T (a measurable quantity) to the change in entropy S when charge is passed isothermally and isobarically. The latter is closely related to the reaction entropy of the electrochemical reaction that lends the battery its power. This Maxwell relation is:
{\displaystyle \left({\frac {\partial {\mathcal {E}}}{\partial T}}\right)_{Q_{ele},p}=-\left({\frac {\partial S}{\partial Q_{ele}}}\right)_{T,p}}
If a mole of ions goes into solution (for example, in a Daniell cell, as discussed below) the charge through the external circuit is
{\displaystyle \Delta Q_{ele}=-n_{0}F_{0}\,,}
where n0 is the number of electrons/ion, and F0 is the Faraday constant and the minus sign indicates discharge of the cell. Assuming constant pressure and volume, the thermodynamic properties of the cell are related strictly to the behavior of its emf by
{\displaystyle \Delta H=-n_{0}F_{0}\left({\mathcal {E}}-T{\frac {d{\mathcal {E}}}{dT}}\right),}
where ΔH is the enthalpy of reaction. The quantities on the right are all directly measurable.
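A quick sketch of this relation, ΔH = −n₀F₀(ℰ − T dℰ/dT): the emf and temperature coefficient below are illustrative values for a hypothetical two-electron cell, not measurements of any real battery.

```python
# Sketch: reaction enthalpy of a cell from its emf and temperature
# coefficient dE/dT, via ΔH = −nF(E − T·dE/dT). Illustrative values only.

F = 96485.332  # Faraday constant, C/mol

def reaction_enthalpy(n, emf, temperature, demf_dt):
    """ΔH (J/mol) from cell emf (V), T (K) and temperature coefficient (V/K)."""
    return -n * F * (emf - temperature * demf_dt)

# A two-electron cell with E = 1.10 V and dE/dT = −0.4 mV/K at 298.15 K:
dh = reaction_enthalpy(2, 1.10, 298.15, -0.4e-3)
print(f"ΔH ≈ {dh/1e3:.1f} kJ/mol")
```

Note that both terms on the right are electrically measurable, which is what makes this route to ΔH useful in practice.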
== Useful identities to derive the Nernst equation ==
During a reversible electrochemical reaction at constant temperature and pressure, the following equations involving the Gibbs free energy hold:
{\displaystyle \Delta _{\text{r}}G=\Delta _{\text{r}}G^{\circ }+RT\ln Q_{\text{r}}}
(see chemical equilibrium),
{\displaystyle \Delta _{\text{r}}G^{\circ }=-RT\ln K_{\text{eq}}}
(for a system at chemical equilibrium),
{\displaystyle \Delta _{\text{r}}G=w_{\text{elec,rev}}=-nF{\mathcal {E}}}
(for a reversible electrochemical process at constant temperature and pressure),
{\displaystyle \Delta _{\text{r}}G^{\circ }=-nF{\mathcal {E}}^{\circ }}
(definition of {\displaystyle {\mathcal {E}}^{\circ }}),
and rearranging gives
{\displaystyle {\begin{aligned}nF{\mathcal {E}}^{\circ }&=RT\ln K_{\text{eq}},\\nF{\mathcal {E}}&=nF{\mathcal {E}}^{\circ }-RT\ln Q_{\text{r}},\\{\mathcal {E}}&={\mathcal {E}}^{\circ }-{\frac {RT}{nF}}\ln Q_{\text{r}},\end{aligned}}}
which relates the cell potential resulting from the reaction to the equilibrium constant and reaction quotient for that reaction (Nernst equation),
where
ΔrG, Gibbs free energy change per mole of reaction,
ΔrG°, Gibbs free energy change per mole of reaction for unmixed reactants and products at standard conditions (i.e. 298 K, 100 kPa, 1 M of each reactant and product),
R, gas constant,
T, absolute temperature,
ln, natural logarithm,
Qr, reaction quotient (unitless),
Keq, equilibrium constant (unitless),
welec,rev, electrical work in a reversible process (chemistry sign convention),
n, number of moles of electrons transferred in the reaction,
F = NAe ≈ 96485 C/mol, Faraday constant (charge per mole of electrons),
{\displaystyle {\mathcal {E}}}, cell potential,
{\displaystyle {\mathcal {E}}^{\circ }}, standard cell potential.
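The Nernst equation above can be sketched directly in code; the standard potential and reaction quotient below are illustrative inputs for a hypothetical two-electron cell.

```python
import math

# Sketch of the Nernst equation E = E° − (RT/nF)·ln Q.
# E° and Q are illustrative, not data for a specific cell.

R = 8.314462618   # gas constant, J/(mol·K)
F = 96485.332     # Faraday constant, C/mol

def nernst(e_standard, n, reaction_quotient, temperature=298.15):
    """Cell potential (V) from standard potential and reaction quotient."""
    return e_standard - (R * temperature) / (n * F) * math.log(reaction_quotient)

# At Q = 1 the cell sits at its standard potential:
assert nernst(1.10, 2, 1.0) == 1.10

# A tenfold excess of products (Q = 10) lowers a two-electron cell's
# potential by about 29.6 mV at 298.15 K:
print(f"{nernst(1.10, 2, 10.0):.4f} V")
```

The prefactor RT/F ≈ 25.7 mV at room temperature is why cell potentials shift by tens of millivolts per decade of concentration.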
Moreover, we also have
{\displaystyle {\begin{aligned}K_{\text{eq}}&=e^{-{\frac {\Delta _{\text{r}}G^{\circ }}{RT}}},\\\Delta _{\text{r}}G^{\circ }&=-RT\left(\ln K_{\text{eq}}\right)=-2.303\,RT\left(\log _{10}K_{\text{eq}}\right),\end{aligned}}}
which relates the equilibrium constant with Gibbs free energy. This implies that at equilibrium
{\displaystyle Q_{\text{r}}=K_{\text{eq}}}
and
{\displaystyle \Delta _{\text{r}}G=0.}
== Standard Gibbs energy change of formation ==
The standard Gibbs free energy of formation of a compound is the change of Gibbs free energy that accompanies the formation of 1 mole of that substance from its component elements, in their standard states (the most stable form of the element at 25 °C and 100 kPa). Its symbol is ΔfG˚.
All elements in their standard states (diatomic oxygen gas, graphite, etc.) have standard Gibbs free energy change of formation equal to zero, as there is no change involved.
ΔfG = ΔfG˚ + RT ln Qf,
where Qf is the reaction quotient.
At equilibrium, ΔfG = 0, and Qf = K, so the equation becomes
ΔfG˚ = −RT ln K,
where K is the equilibrium constant of the formation reaction of the substance from the elements in their standard states.
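The relation K = exp(−ΔG°/RT) is easy to sketch numerically; the ΔG° value below is an illustrative placeholder, not a tabulated formation energy.

```python
import math

# Sketch: the equilibrium constant implied by a standard Gibbs energy
# change, K = exp(−ΔG°/RT). The ΔG° value is illustrative.

R = 8.314462618  # gas constant, J/(mol·K)

def equilibrium_constant(delta_g_standard, temperature=298.15):
    """Dimensionless K from ΔG° (J/mol) at temperature T (K)."""
    return math.exp(-delta_g_standard / (R * temperature))

# ΔG° = 0 means K = 1: reactants and products equally favored.
assert equilibrium_constant(0.0) == 1.0

# Each −5.7 kJ/mol of ΔG° multiplies K by roughly 10 at 298.15 K:
print(equilibrium_constant(-5.7e3))
```

The exponential relationship means modest changes in ΔG° translate into order-of-magnitude changes in K.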
== Graphical interpretation by Gibbs ==
Gibbs free energy was originally defined graphically. In 1873, American scientist Willard Gibbs published his first thermodynamics paper, "Graphical Methods in the Thermodynamics of Fluids", in which Gibbs used the two coordinates of the entropy and volume to represent the state of the body. In his second follow-up paper, "A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces", published later that year, Gibbs added in the third coordinate of the energy of the body, defined on three figures. In 1874, Scottish physicist James Clerk Maxwell used Gibbs' figures to make a 3D energy-entropy-volume thermodynamic surface of a fictitious water-like substance. Thus, in order to understand the concept of Gibbs free energy, it may help to understand its interpretation by Gibbs as section AB on his figure 3, and as Maxwell sculpted that section on his 3D surface figure.
== See also ==
Bioenergetics
Calphad (CALculation of PHAse Diagrams)
Critical point (thermodynamics)
Electron equivalent
Enthalpy–entropy compensation
Free entropy
Gibbs–Helmholtz equation
Grand potential
Non-random two-liquid model (NRTL model) – Gibbs energy of excess and mixing calculation and activity coefficients
Spinodal – Spinodal Curves (Hessian matrix)
Standard molar entropy
Thermodynamic free energy
UNIQUAC model – Gibbs energy of excess and mixing calculation and activity coefficients
== Notes and references ==
== External links ==
IUPAC definition (Gibbs energy)
Gibbs Free Energy – Georgia State University
A depletion force is an effective attractive force that arises between large colloidal particles that are suspended in a dilute solution of depletants, which are smaller solutes that are preferentially excluded from the vicinity of the large particles. One of the earliest reports of depletion forces that lead to particle coagulation is that of Bondy, who observed the separation or "creaming" of rubber latex upon addition of polymer depletant molecules (sodium alginate) to solution. More generally, depletants can include polymers, micelles, osmolytes, ink, mud, or paint dispersed in a continuous phase.
Depletion forces are often regarded as entropic forces, as was first explained by the established Asakura–Oosawa model. In this theory the depletion force arises from an increase in osmotic pressure of the surrounding solution when colloidal particles get close enough such that the excluded cosolutes (depletants) cannot fit in between them.
Because the particles were considered as hard-core (completely rigid) particles, the emerging picture of the underlying mechanism inducing the force was necessarily entropic.
== Causes ==
=== Sterics ===
The system of colloids and depletants in solution is typically modeled by treating the large colloids and small depletants as dissimilarly sized hard spheres. Hard spheres are characterized as non-interacting and impenetrable spheres. These two fundamental properties of hard spheres are described mathematically by the hard-sphere potential. The hard-sphere potential imposes steric constraint around large spheres which in turn gives rise to excluded volume, that is, volume that is unavailable for small spheres to occupy.
==== Hard-sphere potential ====
In a colloidal dispersion, the colloid-colloid interaction potential is approximated as the interaction potential between two hard spheres. For two hard spheres of diameter of
σ
{\displaystyle \sigma }
, the interaction potential as a function of interparticle separation is:
{\displaystyle V(h)=\left\{{\begin{matrix}0&{\mbox{if}}\quad h\geq \sigma \\\infty &{\mbox{if}}\quad h<\sigma \end{matrix}}\right.}
called the hard-sphere potential where
h
{\displaystyle h}
is the center-to-center distance between the spheres.
If both colloids and depletants are in a dispersion, there is interaction potential between colloidal particles and depletant particles that is described similarly by the hard-sphere potential. Again, approximating the particles to be hard-spheres, the interaction potential between colloids of diameter
D
{\displaystyle D}
and depletant sols of diameter
d
{\displaystyle d}
is:
{\displaystyle V(h)=\left\{{\begin{matrix}0&{\mbox{if}}\quad h\geq {\Big (}{\frac {D+d}{2}}{\Big )}\\\infty &{\mbox{if}}\quad h<{\Big (}{\frac {D+d}{2}}{\Big )}\end{matrix}}\right.}
where
h
{\displaystyle h}
is the center-to-center distance between the spheres. Typically, depletant particles are very small compared to the colloids so
{\displaystyle d\ll D}.
The underlying consequence of the hard-sphere potential is that dispersed colloids cannot penetrate each other and have no mutual attraction or repulsion.
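The colloid–depletant hard-sphere potential above is simple enough to sketch directly; the diameters below are arbitrary illustrative values with d ≪ D.

```python
# Sketch of the colloid–depletant hard-sphere potential from the text:
# zero when the centers are farther apart than (D + d)/2, infinite otherwise.

def hard_sphere_potential(h, diameter_large, diameter_small):
    """Pair potential between a colloid (diameter D) and a depletant (d)."""
    contact = (diameter_large + diameter_small) / 2
    return 0.0 if h >= contact else float("inf")

D, d = 1.0, 0.1  # illustrative diameters with d << D
print(hard_sphere_potential(0.6, D, d))   # 0.0  (no core overlap)
print(hard_sphere_potential(0.5, D, d))   # inf  (cores overlap)
```

The infinite branch is what enforces impenetrability; the zero branch encodes the absence of any direct attraction or repulsion.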
==== Excluded volume ====
When both large colloidal particles and small depletants are in a suspension, there is a region which surrounds every large colloidal particle that is unavailable for the centers of the depletants to occupy. This steric restriction is due to the colloid-depletant hard-sphere potential. The total volume of the excluded region for two spheres is
{\displaystyle V_{\mathrm {E} }={\frac {\pi {\big (}D+d{\big )}^{3}}{3}}}
where
D
{\displaystyle D}
is the diameter of the large spheres and
d
{\displaystyle d}
is the diameter of the small spheres.
When the large spheres get close enough, the excluded volumes surrounding the spheres intersect. The overlapping volumes result in a reduced excluded volume, that is, an increase in the total free volume available to small spheres. The reduced excluded volume,
V
E
′
{\displaystyle V'_{\mathrm {E} }}
can be written
{\displaystyle V'_{\mathrm {E} }=V_{\mathrm {E} }-{\frac {2\pi l^{2}}{3}}{\bigg [}{\frac {3\left(D+d\right)}{2}}-l{\bigg ]}}
where
{\displaystyle l=(D+d)/2-h/2}
is half the width of the lens-shaped region of overlap volume formed by spherical caps. The volume available
V
A
{\displaystyle V_{\mathrm {A} }}
for small spheres is the difference between the total volume of the system and the excluded volume. To determine the available volume for small spheres, there are two distinguishable cases: first, the separation of the large spheres is big enough so small spheres can penetrate in between them; second, the large spheres are close enough so that small spheres cannot penetrate between them. For each case, the available volume for small spheres is given by
{\displaystyle V_{\mathrm {A} }=\left\{{\begin{matrix}V-V_{\mathrm {E} }&{\mbox{if}}\quad h\geq D+d\\V-V'_{\mathrm {E} }&{\mbox{if}}\quad h<D+d\end{matrix}}\right.}
In the latter case small spheres are depleted from the interparticle region between large spheres and a depletion force ensues.
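The excluded-volume bookkeeping above (V_E, the reduced V′_E, and the available volume V_A) can be sketched as a single function of the colloid separation h; the diameters and total system volume below are arbitrary illustrative values.

```python
import math

# Sketch of the available volume V_A(h) for the depletants, using V_E,
# the overlap-reduced V'_E, and the two cases from the text.
# D, d, V are illustrative (arbitrary units).

def available_volume(h, D, d, V):
    """Volume available to small spheres at colloid separation h."""
    v_excluded = math.pi * (D + d) ** 3 / 3
    if h >= D + d:                        # exclusion shells do not overlap
        return V - v_excluded
    l = (D + d) / 2 - h / 2               # half-width of the overlap lens
    overlap = 2 * math.pi * l**2 / 3 * (3 * (D + d) / 2 - l)
    return V - (v_excluded - overlap)

D, d, V = 1.0, 0.1, 100.0
far, near = available_volume(2.0, D, d, V), available_volume(1.0, D, d, V)
print(near > far)   # True: overlapping shells free volume for the depletants
```

The inequality is the whole mechanism in miniature: bringing the colloids closer strictly increases the volume the depletants can explore.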
=== Thermodynamics ===
The depletion force is described as an entropic force because it is fundamentally a manifestation of the second law of thermodynamics, which states that a system tends to increase its entropy. The gain in translational entropy of the depletants, owing to the increased available volume, is much greater than the loss of entropy from flocculation of the colloids. The positive change in entropy lowers the Helmholtz free energy and causes colloidal flocculation to happen spontaneously. The system of colloids and depletants in a solution is modeled as a canonical ensemble of hard spheres for statistical determinations of thermodynamic quantities.
However, recent experiments and theoretical models found that depletion forces can be enthalpically driven. In these instances, the intricate balance of interactions between the solution components results in the net exclusion of cosolute from macromolecule. This exclusion results in an effective stabilization of the macromolecule self-association, which can be not only enthalpically dominated, but also entropically unfavorable.
==== Entropy and Helmholtz energy ====
The total volume available for small spheres increases when the excluded volumes around large spheres overlap. The increased volume allotted for small spheres allows them greater translational freedom which increases their entropy. Because the canonical ensemble is an athermal system at a constant volume the Helmholtz free energy is written
{\displaystyle A=-TS}
where {\displaystyle A} is the Helmholtz free energy, {\displaystyle S} is the entropy and {\displaystyle T}
is the temperature. The system's net gain in entropy is positive from increased volume, thus the Helmholtz free energy is negative and depletion flocculation happens spontaneously.
The free energy of the system is obtained from a statistical definition of Helmholtz free energy
{\displaystyle A=-k_{\mathrm {B} }T\ln Q}
where
Q
{\displaystyle Q}
is the partition function for the canonical ensemble. The partition function contains statistical information that describes the canonical ensemble including its total volume, the total number of small spheres, the volume available for small spheres to occupy, and the de Broglie wavelength. If hard-spheres are assumed, the partition function
Q
{\displaystyle Q}
is
{\displaystyle Q={\frac {V_{\mathrm {A} }^{N}}{N!\Lambda ^{3N}}}}
The volume available for small spheres,
V
A
{\displaystyle V_{\mathrm {A} }}
was calculated above.
N
{\displaystyle N}
is the number of small spheres and
Λ
{\displaystyle \Lambda }
is the de Broglie wavelength. Substituting
Q
{\displaystyle Q}
into the statistical definition, the Helmholtz free energy now reads
{\displaystyle A=-k_{\mathrm {B} }T\ln {\bigg (}{\frac {V_{\mathrm {A} }^{N}}{N!\Lambda ^{3N}}}{\bigg )}}
The magnitude of the depletion force,
F
{\displaystyle {\mathcal {F}}}
is equal to the change in Helmholtz free energy with distance between two large spheres and is given by
{\displaystyle {\mathcal {F}}=-{\bigg (}{\frac {\partial A}{\partial h}}{\bigg )}_{T}}
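A minimal numerical sketch of this derivative: since the N! and de Broglie factors are independent of h, A(h) = −N·kB·T·ln V_A(h) up to an additive constant, and the force can be taken by finite differences. All parameters below are illustrative.

```python
import math

# Sketch: the depletion force as F = −∂A/∂h via a central finite
# difference, with A(h) = −N·kB·T·ln V_A(h) (h-independent terms of the
# partition function drop out of the derivative). Illustrative parameters.

K_B = 1.380649e-23  # Boltzmann constant, J/K

def available_volume(h, D, d, V):
    v_excluded = math.pi * (D + d) ** 3 / 3
    if h >= D + d:
        return V - v_excluded
    l = (D + d) / 2 - h / 2
    overlap = 2 * math.pi * l**2 / 3 * (3 * (D + d) / 2 - l)
    return V - (v_excluded - overlap)

def depletion_force(h, D, d, V, N, T, dh=1e-6):
    a = lambda x: -N * K_B * T * math.log(available_volume(x, D, d, V))
    return -(a(h + dh) - a(h - dh)) / (2 * dh)

# Inside the overlap region the force is negative (attractive, pulling the
# colloids together); once h > D + d it vanishes:
f_in = depletion_force(1.05, D=1.0, d=0.1, V=100.0, N=1000, T=298.0)
f_out = depletion_force(1.50, D=1.0, d=0.1, V=100.0, N=1000, T=298.0)
print(f_in < 0, f_out == 0.0)   # True True
```

The force is short-ranged by construction: it acts only while the exclusion shells overlap, i.e. for separations below D + d.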
The entropic nature of depletion forces was proven experimentally in some cases. For example, some polymeric crowders induce entropic depletion forces that stabilize proteins in their native state.
Other examples include many systems with hard-core only interactions.
=== Osmotic pressure ===
The depletion force is an effect of increased osmotic pressure in the surrounding solution.
When colloids get sufficiently close, that is when their excluded volumes overlap, depletants are expelled from the interparticle region. This region between colloids then becomes a phase of pure solvent. When this occurs, there is a higher depletant concentration in the surrounding solution than in the interparticle region. The resulting density gradient gives rise to an osmotic pressure that is anisotropic in nature, acting on the outer sides of the colloids and promoting flocculation. If the hard-sphere approximation is employed, the osmotic pressure is:
{\displaystyle p_{0}=\rho k_{\mathrm {B} }T}
where {\displaystyle p_{0}} is the osmotic pressure, {\displaystyle \rho } is the number density of small spheres, and {\displaystyle k_{\mathrm {B} }} is the Boltzmann constant.
== Asakura–Oosawa model ==
Depletion forces were first described by Sho Asakura and Fumio Oosawa in 1954. In their model, the force is always considered to be attractive. Additionally, the force is considered to be proportional to the osmotic pressure. The Asakura–Oosawa model assumes low macromolecule densities and that the density distribution,
{\displaystyle \rho (r)}
, of the macromolecules is constant. Asakura and Oosawa described four cases in which depletion forces would occur. They first described the most general case as two solid plates in a solution of macromolecules. The principles for the first case were then extended to three additional cases.
=== Free energy change due to the depletion force ===
In the Asakura–Oosawa model for depletion forces, the change in free-energy imposed by an excluded cosolute,
{\displaystyle \Delta G}
, is:
{\displaystyle \Delta G(r)=\Pi \Delta V_{exclusion}}
where {\displaystyle \Pi } is the osmotic pressure, and {\displaystyle \Delta V_{exclusion}}
is the change in excluded volume (which is related to molecular size and shape). The very same result can be derived using the Kirkwood-Buff solution theory.
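Combining ΔG = Π·ΔV_exclusion with the dilute-limit osmotic pressure Π = ρ·kB·T gives a one-line estimate of the depletion free energy. In the sketch below the sign convention (overlap lowers G) and all numbers are illustrative assumptions.

```python
# Sketch of the Asakura–Oosawa estimate ΔG = Π·ΔV, with Π = ρ·kB·T from
# the dilute limit. Sign chosen so that an overlap of exclusion volumes
# lowers G. Inputs are illustrative (SI units: m³, m⁻³, K).

K_B = 1.380649e-23  # Boltzmann constant, J/K

def ao_free_energy_gain(overlap_volume, number_density, temperature):
    """Free-energy change (J) from an overlap of exclusion volumes."""
    osmotic_pressure = number_density * K_B * temperature  # Π = ρ kB T
    return -osmotic_pressure * overlap_volume              # released on overlap

# An exclusion-shell overlap of 1e-21 m³ in a depletant solution of
# number density 1e21 m⁻³ at 298 K is worth about one kT:
dg = ao_free_energy_gain(1e-21, 1e21, 298.0)
print(f"ΔG ≈ {dg / (K_B * 298.0):.2f} kT")
```

Note the convenient rule of thumb this exposes: the depletion free energy in units of kT is simply ρ·ΔV, the number of depletants that fit in the freed volume.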
=== Solid plates in a solution of macromolecules ===
In the first case, two solid plates are placed in a solution of rigid spherical macromolecules. If the distance between two plates,
{\displaystyle a}, is smaller than the diameter of solute molecules, {\displaystyle d}
, then no solute can enter between the plates. This results in pure solvent existing between the plates. The difference in concentration of macromolecules in the solution between the plates and the bulk solution causes a force equal to the osmotic pressure to act on the plates. In a very dilute and monodisperse solution the force is defined by
{\displaystyle p=k_{\mathrm {B} }TN\left({\frac {\partial \ln Q}{\partial a}}\right)}
where {\displaystyle p} is the force and {\displaystyle N} is the total number of solute molecules. The force increases the entropy of the macromolecules and is attractive when {\displaystyle a<d}.
=== Rod-like macromolecules ===
Asakura and Oosawa described the second case as consisting of two plates in a solution of rod-like macromolecules. The rod-like macromolecules are described as having a length,
{\displaystyle l}
, where
{\displaystyle l^{2}\ll A}
, the area of the plates. As the length of the rods increases, the concentration of the rods between the plates is decreased as it becomes more difficult for the rods to enter between the plates due to steric hindrances. As a result, the force acting on the plates increases with the length of the rods until it becomes equal to the osmotic pressure. In this context, it is worth mentioning that even the isotropic-nematic transition of lyotropic liquid crystals, as first explained in Onsager's theory, can in itself be considered a special case of depletion forces.
=== Plates in a solution of polymers ===
The third case described by Asakura and Oosawa is two plates in a solution of polymers. Because of the size of the polymers, the polymer concentration in the neighborhood of the plates is reduced, which decreases the conformational entropy of the polymers. The case can be approximated by modeling it as diffusion in a vessel whose walls absorb diffusing particles. The force, {\displaystyle p}, can then be calculated according to:
{\displaystyle p=-Ap_{o}{\Bigg \{}(1-f)-a\left({\frac {\partial f}{\partial a}}\right){\Bigg \}}}
In this equation, {\displaystyle 1-f} is the attraction from the osmotic effect, and {\displaystyle {\frac {\partial f}{\partial a}}} is the repulsion due to chain molecules confined between the plates. The force {\displaystyle p} acts over a range on the order of {\displaystyle \langle r\rangle }, the mean end-to-end distance of the chain molecules in free space.
=== Large hard spheres in a solution of small hard spheres ===
The final case described by Asakura and Oosawa involves two large, hard spheres of diameter {\displaystyle D}, in a solution of small, hard spheres of diameter {\displaystyle d}. If the distance between the centers of the spheres, {\displaystyle h}, is less than {\displaystyle (D+d)}
, then the small spheres are excluded from the space between the large spheres. This results in the area between the large spheres having a reduced concentration of small spheres and therefore reduced entropy. This reduced entropy causes a force to act upon the large spheres pushing them together. This effect was convincingly demonstrated in experiments with vibrofluidized granular materials where attraction can be directly visualized.
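The geometry above yields the classic Asakura–Oosawa pair potential: the attraction equals the depletant osmotic pressure times the lens-shaped overlap of the two excluded-volume shells (each a sphere of radius (D+d)/2 around a colloid center). A minimal Python sketch, assuming the ideal osmotic pressure for the small spheres; the function name and parameter values are illustrative:

```python
import math

def ao_potential(h, D, d, rho, kB=1.380649e-23, T=298.0):
    """Asakura-Oosawa depletion potential (J) between two hard spheres of
    diameter D whose centers are a distance h apart, in a dilute bath of
    hard spheres of diameter d.  Sketch assuming the ideal osmotic
    pressure p0 = rho*kB*T; physically meaningful for h >= D."""
    a = (D + d) / 2.0                 # radius of each excluded-volume shell
    if h >= 2.0 * a:                  # excluded volumes no longer overlap
        return 0.0
    # lens-shaped overlap volume of two spheres of radius a, centers h apart
    v_overlap = (math.pi / 12.0) * (2.0 * a - h) ** 2 * (h + 4.0 * a)
    return -rho * kB * T * v_overlap  # negative: attraction
```

At contact (h = D) the well depth grows linearly with the depletant density, in line with the osmotic-pressure picture above.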
== Improvements upon Asakura–Oosawa model ==
=== Derjaguin approximation ===
==== Theory ====
Asakura and Oosawa assumed low concentrations of macromolecules. However, at high concentrations of macromolecules, structural correlation effects in the macromolecular liquid become important. Additionally, the repulsive interaction strength strongly increases for large values of
{\displaystyle R/r}
(large radius/small radius). In order to account for these issues, the Derjaguin approximation, which is valid for any type of force law, has been applied to depletion forces. The Derjaguin approximation relates the force between two spheres to the force between two plates. The force is then integrated between small regions on one surface and the opposite surface, which is assumed to be locally flat.
==== Equations ====
If there are two spheres of radii {\displaystyle R_{1}} and {\displaystyle R_{2}} on the {\displaystyle z} axis, and the spheres are a distance {\displaystyle h+R_{1}+R_{2}} apart, where {\displaystyle h} is much smaller than {\displaystyle R_{1}} and {\displaystyle R_{2}}, then the force, {\displaystyle F}, in the {\displaystyle z} direction is
{\displaystyle F(h)\approx 2\pi \left({\frac {R_{1}R_{2}}{R_{1}+R_{2}}}\right)W(h)}
In this equation, {\displaystyle W(h)=\textstyle \int _{h}^{\infty }f(z)dz}, where {\displaystyle f(z)} is the normal force per unit area between two flat surfaces a distance {\displaystyle z} apart.
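The Derjaguin construction lends itself to a direct numerical check: integrate an assumed flat-plate force law to obtain W(h), then scale by the geometric prefactor. A sketch assuming an exponentially decaying plate interaction (the force law, function name, and cutoff are illustrative, not from the source):

```python
import math

def derjaguin_force(h, R1, R2, f, z_max, n=10000):
    """Force between two spheres of radii R1, R2 whose surfaces are h apart,
    from the flat-plate force per unit area f(z), via the Derjaguin
    approximation F(h) = 2*pi*(R1*R2/(R1+R2)) * W(h) with
    W(h) = integral of f(z) from h to infinity.  The improper integral is
    truncated at z_max (assumes f decays fast) and done by the trapezoid rule."""
    dz = (z_max - h) / n
    zs = [h + i * dz for i in range(n + 1)]
    w = sum((f(zs[i]) + f(zs[i + 1])) * dz / 2.0 for i in range(n))
    return 2.0 * math.pi * (R1 * R2 / (R1 + R2)) * w

# Example: f(z) = f0*exp(-z/lam), for which W(h) = f0*lam*exp(-h/lam) exactly.
f0, lam = 1.0, 1.0
F = derjaguin_force(2.0, 1.0, 1.0, lambda z: f0 * math.exp(-z / lam), z_max=40.0)
exact = 2.0 * math.pi * (1.0 * 1.0 / 2.0) * f0 * lam * math.exp(-2.0 / lam)
```

The quadrature result agrees with the closed-form W(h) to well under a tenth of a percent, confirming the prefactor bookkeeping.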
When the Derjaguin approximation is applied to depletion forces with {\displaystyle 0<h<2R_{\text{S}}}, the depletion force given by the Derjaguin approximation is
{\displaystyle F(h)=-\pi \epsilon \left(R_{\text{B}}+R_{\text{S}}\right){\big [}p(\rho )(2R_{\text{S}}-h)+\gamma (\rho ,\infty ){\big ]}}
In this equation, {\displaystyle \epsilon } is the geometrical factor, which is set to 1, and {\displaystyle \gamma (\rho ,\infty )=2\gamma (\rho )} is the interfacial tension at the wall–fluid interface.
=== Density functional theory ===
==== Theory ====
Asakura and Oosawa assumed a uniform particle density, which is true in a homogeneous solution. However, if an external potential is applied to a solution, then the uniform particle density is disrupted, making Asakura and Oosawa's assumption invalid. Density functional theory accounts for variations in particle density by using the grand canonical potential. The grand canonical potential, which is a state function for the grand canonical ensemble, is used to calculate the probability density for microscopic states in a macroscopic state. When applied to depletion forces, the grand canonical potential calculates the local particle densities in a solution.
==== Equations ====
Density functional theory states that when any fluid is exposed to an external potential,
{\displaystyle V(R)}
, then all equilibrium quantities become functions of number density profile,
{\displaystyle \rho (R)}
. As a result, the total free energy is minimized. The grand canonical potential, {\displaystyle \Omega \left({\big [}\rho (R){\big ]};\mu ,T\right)}
, is then written
{\displaystyle \Omega \left({\big [}\rho (R){\big ]};\mu ,T\right)=A\left({\big [}\rho (R){\big ]};T\right)-\int d^{3}R{\big [}\mu -V(R){\big ]}\rho (R),}
where {\displaystyle \mu } is the chemical potential, {\displaystyle T} is the temperature, and {\displaystyle A[\rho ]} is the Helmholtz free energy.
== Enthalpic depletion forces ==
The original Asakura–Oosawa model considered only hard-core interactions. In such an athermal mixture the origin of depletion forces is necessarily entropic. If the intermolecular potentials also include repulsive and/or attractive terms, and if the solvent is considered explicitly, the depletion interaction can have additional thermodynamic contributions.
The notion that depletion forces can also be enthalpically driven has surfaced due to recent experiments regarding protein stabilization induced by compatible osmolytes, such as trehalose, glycerol, and sorbitol. These osmolytes are preferentially excluded from protein surfaces, forming a layer of preferential hydration around the proteins. When the protein folds, this exclusion volume diminishes, making the folded state lower in free energy. Hence the excluded osmolytes shift the folding equilibrium towards the folded state. This effect was generally thought to be an entropic force, in the spirit of the original Asakura–Oosawa model and of macromolecular crowding. However, thermodynamic breakdown of the free-energy gain due to osmolyte addition showed that the effect is in fact enthalpically driven, whereas the entropic contribution can even be unfavorable.
For many cases, the molecular origin of this enthalpically driven depletion force can be traced to an effective "soft" repulsion in the potential of mean force between macromolecule and cosolute. Both Monte Carlo simulations and a simple analytic model demonstrate that when the hard-core potential (as in Asakura and Oosawa's model) is supplemented with an additional repulsive "softer" interaction, the depletion force can become enthalpically dominated.
== Measurement and experimentation ==
Depletion forces have been observed and measured using a variety of instrumentation including atomic force microscopy, optical tweezers, and hydrodynamic force balance machines.
=== Atomic force microscopy ===
Atomic force microscopy (AFM) is commonly used to directly measure the magnitude of depletion forces. This method uses the deflection of a very small cantilever contacting a sample which is measured by a laser. The force required to cause a certain amount of beam deflection can be determined from the change in angle of the laser. The small scale of AFM allows for dispersion particles to be measured directly yielding a relatively accurate measurement of depletion forces.
=== Optical tweezers ===
The force required to separate two colloid particles can be measured using optical tweezers. This method uses a focused laser beam to apply an attractive or repulsive force on dielectric micro and nanoparticles. This technique is used with dispersion particles by applying a force which resists depletion forces. The displacement of the particles is then measured and used to find the attractive force between the particles.
=== Hydrodynamic force balance ===
HFB machines measure the strength of particle interactions using liquid flow to separate the particles. This method is used to find depletion-force strength by adhering one particle of a dispersion-particle doublet to a static plate and applying shear force through fluid flow. The drag created by the dispersion particles resists the depletion force between them, pulling the free particle away from the adhered particle. A force balance on the particles at separation can be used to determine the depletion force between the particles.
== Colloidal destabilization ==
=== Mechanism ===
Depletion forces are used extensively as a method of destabilizing colloids. By introducing particles into a colloidal dispersion, attractive depletion forces can be induced between dispersed particles. These attractive interactions bring the dispersed particles together resulting in flocculation. This destabilizes the colloid as the particles are no longer dispersed in the liquid but concentrated in floc formations. Flocs are then easily removed through filtration processes leaving behind a non-dispersed, pure liquid.
=== Water treatment ===
The use of depletion forces to initiate flocculation is a common process in water treatment. The relatively small size of dispersed particles in waste water renders typical filtration methods ineffective. However, if the dispersion is destabilized and flocculation occurs, the particles can then be filtered out to produce pure water. Therefore, coagulants and flocculants are typically introduced to waste water to create these depletion forces between the dispersed particles.
=== Winemaking ===
Some wine production methods also use depletion forces to remove dispersed particles from wine. Unwanted colloidal particles can be found in wine originating from the must or produced during the winemaking process. These particles typically consist of carbohydrates, pigmentation molecules, or proteins which may adversely affect the taste and purity of the wine. Therefore, flocculants are often added to induce floc precipitation for easy filtration.
=== Common flocculants ===
The table below lists common flocculants along with their chemical formulas, net electrical charge, molecular weight and current applications.
== Biological systems ==
There are suggestions that depletion forces may be a significant contributor in some biological systems, specifically in membrane interactions between cells or any membranous structure. With concentrations of large molecules such as proteins or carbohydrates in the extracellular matrix, it is likely some depletion force effects are observed between cells or vesicles that are very close. However, due to the complexity of most biological systems, it is difficult to determine how much these depletion forces influence membrane interactions. Models of vesicle interactions with depletion forces have been developed, but these are greatly simplified and their applicability to real biological systems is questionable.
== Generalization: anisotropic colloids and systems without polymers ==
Depletion forces in colloid–polymer mixtures drive colloids to form aggregates that are densely packed locally. This local dense packing is also observed in colloidal systems without polymer depletants. Without polymer depletants the mechanism is similar, because the particles in a dense colloidal suspension act, effectively, as depletants for one another. This effect is particularly striking for anisotropically shaped colloidal particles, where the anisotropy of the shape leads to the emergence of directional entropic forces that are responsible for the ordering of hard anisotropic colloids into a wide range of crystal structures.
== References ==
A neural network is a group of interconnected units called neurons that send signals to one another. Neurons can be either biological cells or signal pathways. While individual neurons are simple, many of them together in a network can perform complex tasks. There are two main types of neural networks.
In neuroscience, a biological neural network is a physical structure found in brains and complex nervous systems – a population of nerve cells connected by synapses.
In machine learning, an artificial neural network is a mathematical model used to approximate nonlinear functions. Artificial neural networks are used to solve artificial intelligence problems.
== In biology ==
In the context of biology, a neural network is a population of biological neurons chemically connected to each other by synapses. A given neuron can be connected to hundreds of thousands of synapses.
Each neuron sends and receives electrochemical signals called action potentials to its connected neighbors. A neuron can serve an excitatory role, amplifying and propagating signals it receives, or an inhibitory role, suppressing signals instead.
Populations of interconnected neurons that are smaller than neural networks are called neural circuits. Very large interconnected networks are called large scale brain networks, and many of these together form brains and nervous systems.
Signals generated by neural networks in the brain eventually travel through the nervous system and across neuromuscular junctions to muscle cells, where they cause contraction and thereby motion.
== In machine learning ==
In machine learning, a neural network is an artificial mathematical model used to approximate nonlinear functions. While early artificial neural networks were physical machines, today they are almost always implemented in software.
Neurons in an artificial neural network are usually arranged into layers, with information passing from the first layer (the input layer) through one or more intermediate layers (the hidden layers) to the final layer (the output layer).
The "signal" input to each neuron is a number, specifically a linear combination of the outputs of the connected neurons in the previous layer. The signal each neuron outputs is calculated from this number, according to its activation function. The behavior of the network depends on the strengths (or weights) of the connections between neurons. A network is trained by modifying these weights through empirical risk minimization or backpropagation in order to fit some preexisting dataset.
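As a concrete, toy-sized illustration of the layered computation described above, here is a pure-Python forward pass with hand-set weights rather than trained ones; the XOR wiring is a standard textbook choice, not something from this article:

```python
def step(x):
    """Threshold activation: 1 if the weighted input is positive, else 0."""
    return 1 if x > 0 else 0

def forward(x, layers):
    """Propagate an input vector through a list of layers; each layer is a
    list of (weights, bias) pairs, one per neuron.  Every neuron outputs the
    activation of a linear combination of the previous layer's outputs."""
    for layer in layers:
        x = [step(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in layer]
    return x

# A hand-set 2-2-1 network computing XOR (weights chosen by hand, not trained):
xor_net = [
    [([1.0, 1.0], -0.5), ([1.0, 1.0], -1.5)],  # hidden layer
    [([1.0, -1.0], -0.5)],                     # output layer
]
for a in (0, 1):
    for b in (0, 1):
        print(a, b, forward([a, b], xor_net)[0])  # prints the XOR truth table
```

Training would replace the hand-set weights by values found through empirical risk minimization, as the text describes; the forward computation itself is unchanged.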
The term deep neural network refers to neural networks that have more than three layers, typically including at least two hidden layers in addition to the input and output layers.
Neural networks are used to solve problems in artificial intelligence, and have thereby found applications in many disciplines, including predictive modeling, adaptive control, facial recognition, handwriting recognition, general game playing, and generative AI.
== History ==
The theoretical base for contemporary neural networks was independently proposed by Alexander Bain in 1873 and William James in 1890. Both posited that human thought emerged from interactions among large numbers of neurons inside the brain. In 1949, Donald Hebb described Hebbian learning, the idea that neural networks can change and learn over time by strengthening a synapse every time a signal travels along it.
Artificial neural networks were originally used to model biological neural networks starting in the 1930s under the approach of connectionism. However, beginning with the McCulloch–Pitts model of an artificial neuron, proposed by Warren McCulloch and Walter Pitts in 1943, and the perceptron, a simple artificial neural network that Frank Rosenblatt implemented in hardware in 1957, artificial neural networks became increasingly used for machine learning applications instead, and increasingly different from their biological counterparts.
== See also ==
Emergence
Biological cybernetics
Biologically-inspired computing
== References ==
In physics, specifically statistical mechanics, an ensemble (also statistical ensemble) is an idealization consisting of a large number of virtual copies (sometimes infinitely many) of a system, considered all at once, each of which represents a possible state that the real system might be in. In other words, a statistical ensemble is a set of systems of particles used in statistical mechanics to describe a single system. The concept of an ensemble was introduced by J. Willard Gibbs in 1902.
A thermodynamic ensemble is a specific variety of statistical ensemble that, among other properties, is in statistical equilibrium (defined below), and is used to derive the properties of thermodynamic systems from the laws of classical or quantum mechanics.
== Physical considerations ==
The ensemble formalises the notion that an experimenter repeating an experiment again and again under the same macroscopic conditions, but unable to control the microscopic details, may expect to observe a range of different outcomes.
The notional size of ensembles in thermodynamics, statistical mechanics and quantum statistical mechanics can be very large, including every possible microscopic state the system could be in, consistent with its observed macroscopic properties. For many important physical cases, it is possible to calculate averages directly over the whole of the thermodynamic ensemble, to obtain explicit formulas for many of the thermodynamic quantities of interest, often in terms of the appropriate partition function.
The concept of an equilibrium or stationary ensemble is crucial to many applications of statistical ensembles. Although a mechanical system certainly evolves over time, the ensemble does not necessarily have to evolve. In fact, the ensemble will not evolve if it contains all past and future phases of the system. Such a statistical ensemble, one that does not change over time, is called stationary and can be said to be in statistical equilibrium.
=== Terminology ===
The word "ensemble" is also used for a smaller set of possibilities sampled from the full set of possible states. For example, a collection of walkers in a Markov chain Monte Carlo iteration is called an ensemble in some of the literature.
The term "ensemble" is often used in physics and the physics-influenced literature. In probability theory, the term probability space is more prevalent.
== Main types ==
The study of thermodynamics is concerned with systems that appear to human perception to be "static" (despite the motion of their internal parts), and which can be described simply by a set of macroscopically observable variables. These systems can be described by statistical ensembles that depend on a few observable parameters, and which are in statistical equilibrium. Gibbs noted that different macroscopic constraints lead to different types of ensembles, with particular statistical characteristics.
"We may imagine a great number of systems of the same nature, but differing in the configurations and velocities which they have at a given instant, and differing not merely infinitesimally, but it may be so as to embrace every conceivable combination of configuration and velocities..." J. W. Gibbs (1903)
Three important thermodynamic ensembles were defined by Gibbs:
Microcanonical ensemble (or NVE ensemble) —a statistical ensemble where the total energy of the system and the number of particles in the system are each fixed to particular values; each of the members of the ensemble are required to have the same total energy and particle number. The system must remain totally isolated (unable to exchange energy or particles with its environment) in order to stay in statistical equilibrium.
Canonical ensemble (or NVT ensemble)—a statistical ensemble where the energy is not known exactly but the number of particles is fixed. In place of the energy, the temperature is specified. The canonical ensemble is appropriate for describing a closed system which is in, or has been in, weak thermal contact with a heat bath. In order to be in statistical equilibrium, the system must remain totally closed (unable to exchange particles with its environment) and may come into weak thermal contact with other systems that are described by ensembles with the same temperature.
Grand canonical ensemble (or μVT ensemble)—a statistical ensemble where neither the energy nor particle number are fixed. In their place, the temperature and chemical potential are specified. The grand canonical ensemble is appropriate for describing an open system: one which is in, or has been in, weak contact with a reservoir (thermal contact, chemical contact, radiative contact, electrical contact, etc.). The ensemble remains in statistical equilibrium if the system comes into weak contact with other systems that are described by ensembles with the same temperature and chemical potential.
The calculations that can be made using each of these ensembles are explored further in their respective articles.
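For a discrete system, the canonical ensemble of the list above reduces to Boltzmann weights over energy levels; the following sketch shows the specified-temperature construction (the two-level system and its gap are an illustrative choice):

```python
import math

def canonical_probs(energies, kT):
    """Canonical-ensemble (Boltzmann) probabilities for a set of discrete
    energy levels at fixed temperature, p_i = exp(-E_i/kT)/Z.  Shifting by
    the minimum energy keeps the exponentials well conditioned."""
    e0 = min(energies)
    weights = [math.exp(-(e - e0) / kT) for e in energies]
    Z = sum(weights)  # (shifted) partition function
    return [w / Z for w in weights]

# Two-level system with an energy gap of 1 in units of kT:
p = canonical_probs([0.0, 1.0], kT=1.0)
# probabilities sum to one, and the lower level is more populated
```

Lowering kT concentrates the probability in the ground state, while raising it drives the two occupations toward equality, the expected canonical behavior.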
Other thermodynamic ensembles can be also defined, corresponding to different physical requirements, for which analogous formulae can often similarly be derived.
For example, in the reaction ensemble, particle number fluctuations are only allowed to occur according to the stoichiometry of the chemical reactions which are present in the system.
=== Equivalence ===
In the thermodynamic limit, all ensembles should produce identical observables, as their thermodynamic potentials are related by Legendre transforms; deviations from this rule occur when the relevant state variables are non-convex, as in measurements on small molecular systems.
== Representations ==
The precise mathematical expression for a statistical ensemble has a distinct form depending on the type of mechanics under consideration (quantum or classical). In the classical case, the ensemble is a probability distribution over the microstates. In quantum mechanics, this notion, due to von Neumann, is a way of assigning a probability distribution over the results of each complete set of commuting observables.
In classical mechanics, the ensemble is instead written as a probability distribution in phase space; the microstates are the result of partitioning phase space into equal-sized units, although the size of these units can be chosen somewhat arbitrarily.
=== Requirements for representations ===
Putting aside for the moment the question of how statistical ensembles are generated operationally, we should be able to perform the following two operations on ensembles A, B of the same system:
Test whether A, B are statistically equivalent.
If p is a real number such that 0 < p < 1, then produce a new ensemble by probabilistic sampling from A with probability p and from B with probability 1 − p.
Under certain conditions, therefore, equivalence classes of statistical ensembles have the structure of a convex set.
=== Quantum mechanical ===
A statistical ensemble in quantum mechanics (also known as a mixed state) is most often represented by a density matrix, denoted by
{\displaystyle {\hat {\rho }}}
. The density matrix provides a fully general tool that can incorporate both quantum uncertainties (present even if the state of the system were completely known) and classical uncertainties (due to a lack of knowledge) in a unified manner. Any physical observable X in quantum mechanics can be written as an operator,
{\displaystyle {\hat {X}}}
. The expectation value of this operator on the statistical ensemble
{\displaystyle \rho }
is given by the following trace:
{\displaystyle \langle X\rangle =\operatorname {Tr} ({\hat {X}}\rho ).}
This can be used to evaluate averages (operator {\displaystyle {\hat {X}}}), variances (using operator {\displaystyle {\hat {X}}^{2}}), covariances (using operator {\displaystyle {\hat {X}}{\hat {Y}}}), etc. The density matrix must always have a trace of 1: {\displaystyle \operatorname {Tr} {\hat {\rho }}=1} (this essentially is the condition that the probabilities must add up to one).
In general, the ensemble evolves over time according to the von Neumann equation.
Equilibrium ensembles (those that do not evolve over time, {\displaystyle d{\hat {\rho }}/dt=0}) can be written solely as a function of conserved variables. For example, the microcanonical ensemble and canonical ensemble are strictly functions of the total energy, which is measured by the total energy operator {\displaystyle {\hat {H}}} (Hamiltonian). The grand canonical ensemble is additionally a function of the particle number, measured by the total particle number operator {\displaystyle {\hat {N}}}. Such equilibrium ensembles are a diagonal matrix in the orthogonal basis of states that simultaneously diagonalize each conserved variable. In bra–ket notation, the density matrix is
{\displaystyle {\hat {\rho }}=\sum _{i}P_{i}|\psi _{i}\rangle \langle \psi _{i}|,}
where the |ψi⟩, indexed by i, are the elements of a complete and orthogonal basis. (Note that in other bases, the density matrix is not necessarily diagonal.)
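The diagonal mixed-state construction and the trace rule can be verified in a few lines; this sketch uses real vectors and a two-level system for simplicity (the 75/25 mixture and the helper names are arbitrary illustrations):

```python
def outer(v):
    """|v><v| for a real normalized column vector v."""
    return [[a * b for b in v] for a in v]

def matmul(A, B):
    """Product of two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# Mixed state: 75% |0>, 25% |1>  ->  rho = 0.75|0><0| + 0.25|1><1|
P = [0.75, 0.25]
basis = [[1.0, 0.0], [0.0, 1.0]]
rho = [[sum(p * outer(v)[i][j] for p, v in zip(P, basis)) for j in range(2)]
       for i in range(2)]

# Observable: Pauli-Z, whose expectation is Tr(Z rho) = 0.75 - 0.25
Z = [[1.0, 0.0], [0.0, -1.0]]
print(trace(rho))             # 1.0  (probabilities sum to one)
print(trace(matmul(Z, rho)))  # 0.5
```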
=== Classical mechanical ===
In classical mechanics, an ensemble is represented by a probability density function defined over the system's phase space. While an individual system evolves according to Hamilton's equations, the density function (the ensemble) evolves over time according to Liouville's equation.
In a mechanical system with a defined number of parts, the phase space has n generalized coordinates called q1, ... qn, and n associated canonical momenta called p1, ... pn. The ensemble is then represented by a joint probability density function ρ(p1, ... pn, q1, ... qn).
If the number of parts in the system is allowed to vary among the systems in the ensemble (as in a grand ensemble where the number of particles is a random quantity), then it is a probability distribution over an extended phase space that includes further variables such as particle numbers N1 (first kind of particle), N2 (second kind of particle), and so on up to Ns (the last kind of particle; s is how many different kinds of particles there are). The ensemble is then represented by a joint probability density function ρ(N1, ... Ns, p1, ... pn, q1, ... qn). The number of coordinates n varies with the numbers of particles.
Any mechanical quantity X can be written as a function of the system's phase. The expectation value of any such quantity is given by an integral over the entire phase space of this quantity weighted by ρ:
{\displaystyle \langle X\rangle =\sum _{N_{1}=0}^{\infty }\cdots \sum _{N_{s}=0}^{\infty }\int \cdots \int \rho X\,dp_{1}\cdots dq_{n}.}
The condition of probability normalization applies, requiring
{\displaystyle \sum _{N_{1}=0}^{\infty }\cdots \sum _{N_{s}=0}^{\infty }\int \cdots \int \rho \,dp_{1}\cdots dq_{n}=1.}
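For configuration-space observables the momentum integrals factor out, so the normalized phase-space average reduces to a weighted integral over coordinates. A sketch for a 1D harmonic oscillator in the canonical ensemble (the choice of units with kT = 1 and spring constant k = 1, and the quadrature scheme, are illustrative); it should reproduce the equipartition result ⟨x²⟩ = 1/(βk):

```python
import math

def ensemble_average(f, beta, k=1.0, xmax=10.0, n=20001):
    """<f(x)> over the canonical configurational distribution
    rho(x) ~ exp(-beta*k*x^2/2) of a 1D harmonic oscillator, by a simple
    quadrature on [-xmax, xmax].  The momentum part of phase space factors
    out of configuration-space averages, so only x is integrated."""
    dx = 2.0 * xmax / (n - 1)
    xs = [-xmax + i * dx for i in range(n)]
    w = [math.exp(-beta * k * x * x / 2.0) for x in xs]
    Z = sum(w) * dx  # normalization (the configurational partition function)
    return sum(wi * f(x) for wi, x in zip(w, xs)) * dx / Z

mean_x2 = ensemble_average(lambda x: x * x, beta=1.0, k=1.0)
# equipartition predicts <x^2> = 1/(beta*k) = 1.0 here
```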
Phase space is a continuous space containing an infinite number of distinct physical states within any small region. In order to connect the probability density in phase space to a probability distribution over microstates, it is necessary to somehow partition the phase space into blocks that are distributed representing the different states of the system in a fair way. It turns out that the correct way to do this simply results in equal-sized blocks of canonical phase space, and so a microstate in classical mechanics is an extended region in the phase space of canonical coordinates that has a particular volume. In particular, the probability density function in phase space, ρ, is related to the probability distribution over microstates, P by a factor
{\displaystyle \rho ={\frac {1}{h^{n}C}}P,}
where
h is an arbitrary but predetermined constant with the units of energy×time, setting the extent of the microstate and providing correct dimensions to ρ.
C is an overcounting correction factor (see below), generally dependent on the number of particles and similar concerns.
Since h can be chosen arbitrarily, the notional size of a microstate is also arbitrary. Still, the value of h influences the offsets of quantities such as entropy and chemical potential, and so it is important to be consistent with the value of h when comparing different systems.
==== Correcting overcounting in phase space ====
Typically, the phase space contains duplicates of the same physical state in multiple distinct locations. This is a consequence of the way that a physical state is encoded into mathematical coordinates; the simplest choice of coordinate system often allows a state to be encoded in multiple ways. An example of this is a gas of identical particles whose state is written in terms of the particles' individual positions and momenta: when two particles are exchanged, the resulting point in phase space is different, and yet it corresponds to an identical physical state of the system. It is important in statistical mechanics (a theory about physical states) to recognize that the phase space is just a mathematical construction, and to not naively overcount actual physical states when integrating over phase space. Overcounting can cause serious problems:
Dependence of derived quantities (such as entropy and chemical potential) on the choice of coordinate system, since one coordinate system might show more or less overcounting than another.
Erroneous conclusions that are inconsistent with physical experience, as in the mixing paradox.
Foundational issues in defining the chemical potential and the grand canonical ensemble.
It is in general difficult to find a coordinate system that uniquely encodes each physical state. As a result, it is usually necessary to use a coordinate system with multiple copies of each state, and then to recognize and remove the overcounting.
A crude way to remove the overcounting would be to manually define a subregion of phase space that includes each physical state only once and then exclude all other parts of phase space. In a gas, for example, one could include only those phases where the particles' x coordinates are sorted in ascending order. While this would solve the problem, the resulting integral over phase space would be tedious to perform due to its unusual boundary shape. (In this case, the factor C introduced above would be set to C = 1, and the integral would be restricted to the selected subregion of phase space.)
A simpler way to correct the overcounting is to integrate over all of phase space but to reduce the weight of each phase in order to exactly compensate the overcounting. This is accomplished by the factor C introduced above, which is a whole number that represents how many ways a physical state can be represented in phase space. Its value does not vary with the continuous canonical coordinates, so overcounting can be corrected simply by integrating over the full range of canonical coordinates, then dividing the result by the overcounting factor. However, C does vary strongly with discrete variables such as numbers of particles, and so it must be applied before summing over particle numbers.
As mentioned above, the classic example of this overcounting is for a fluid system containing various kinds of particles, where any two particles of the same kind are indistinguishable and exchangeable. When the state is written in terms of the particles' individual positions and momenta, then the overcounting related to the exchange of identical particles is corrected by using
{\displaystyle C=N_{1}!N_{2}!\cdots N_{s}!.}
This is known as "correct Boltzmann counting".
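The correction factor is easy to compute directly. A minimal sketch (the function name and example counts are illustrative, not from the text):

```python
from math import factorial

def boltzmann_counting_factor(counts):
    """Overcounting factor C = N_1! N_2! ... N_s! for a fluid containing
    counts[k] indistinguishable particles of kind k."""
    C = 1
    for n in counts:
        C *= factorial(n)
    return C

# A gas with 3 particles of one kind and 2 of another: exchanging
# identical particles gives 3! * 2! = 12 phase-space copies of every
# physical state.
print(boltzmann_counting_factor([3, 2]))  # 12
```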
== Ensembles in statistics ==
The formulation of statistical ensembles used in physics has now been widely adopted in other fields, in part because it has been recognized that the canonical ensemble or Gibbs measure serves to maximize the entropy of a system, subject to a set of constraints: this is the principle of maximum entropy. This principle has now been widely applied to problems in linguistics, robotics, and the like.
In addition, statistical ensembles in physics are often built on a principle of locality: that all interactions are only between neighboring atoms or nearby molecules. Thus, for example, lattice models, such as the Ising model, model ferromagnetic materials by means of nearest-neighbor interactions between spins. The statistical formulation of the principle of locality is now seen to be a form of the Markov property in the broad sense; nearest neighbors are now Markov blankets. Thus, the general notion of a statistical ensemble with nearest-neighbor interactions leads to Markov random fields, which again find broad applicability; for example in Hopfield networks.
== Ensemble average ==
In statistical mechanics, the ensemble average is defined as the mean of a quantity that is a function of the microstate of a system, weighted according to the distribution of the system over its microstates in that ensemble.
Since the ensemble average depends on the ensemble chosen, its mathematical expression varies from ensemble to ensemble. However, the mean obtained for a given physical quantity does not depend on the ensemble chosen in the thermodynamic limit.
The grand canonical ensemble is an example of an open system.
=== Classical statistical mechanics ===
For a classical system in thermal equilibrium with its environment, the ensemble average takes the form of an integral over the phase space of the system:
{\displaystyle {\bar {A}}={\frac {\displaystyle \int {A\exp \left[-\beta H(q_{1},q_{2},\dots ,q_{M},p_{1},p_{2},\dots ,p_{N})\right]\,d\tau }}{\displaystyle \int {\exp \left[-\beta H(q_{1},q_{2},\dots ,q_{M},p_{1},p_{2},\dots ,p_{N})\right]\,d\tau }}},}
where
{\displaystyle {\bar {A}}} is the ensemble average of the system property A,
{\displaystyle \beta } is {\displaystyle {\frac {1}{kT}}}, known as thermodynamic beta,
H is the Hamiltonian of the classical system in terms of the set of coordinates {\displaystyle q_{i}} and their conjugate generalized momenta {\displaystyle p_{i}},
{\displaystyle d\tau } is the volume element of the classical phase space of interest.
The denominator in this expression is known as the partition function and is denoted by the letter Z.
=== Quantum statistical mechanics ===
In quantum statistical mechanics, for a quantum system in thermal equilibrium with its environment, the weighted average takes the form of a sum over quantum energy states, rather than a continuous integral:
{\displaystyle {\bar {A}}={\frac {\sum _{i}A_{i}e^{-\beta E_{i}}}{\sum _{i}e^{-\beta E_{i}}}}.}
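This Boltzmann-weighted sum is straightforward to evaluate numerically for a discrete spectrum. A sketch (the two-level example is illustrative):

```python
from math import exp

def ensemble_average(A_values, energies, beta):
    """Canonical ensemble average of observable A over discrete energy
    states: sum_i A_i exp(-beta E_i) / sum_i exp(-beta E_i)."""
    weights = [exp(-beta * E) for E in energies]
    Z = sum(weights)  # the partition function (the denominator)
    return sum(A * w for A, w in zip(A_values, weights)) / Z

# Two-level system with energies 0 and 1: the mean energy interpolates
# between 0.5 (high temperature, beta -> 0) and 0 (low temperature).
print(ensemble_average([0.0, 1.0], [0.0, 1.0], beta=0.0))    # 0.5
print(ensemble_average([0.0, 1.0], [0.0, 1.0], beta=100.0))  # ~0.0
```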
=== Canonical ensemble average ===
The generalized version of the partition function provides the complete framework for working with ensemble averages in thermodynamics, information theory, statistical mechanics and quantum mechanics.
The microcanonical ensemble represents an isolated system in which energy (E), volume (V) and the number of particles (N) are all constant. The canonical ensemble represents a closed system which can exchange energy (E) with its surroundings (usually a heat bath), but the volume (V) and the number of particles (N) are all constant. The grand canonical ensemble represents an open system which can exchange energy (E) and particles (N) with its surroundings, but the volume (V) is kept constant.
== Operational interpretation ==
The discussion given so far, while rigorous, has taken for granted that the notion of an ensemble is valid a priori, as is commonly done in physical contexts. What has not been shown is that the ensemble itself (not merely the consequent results) is a precisely defined mathematical object. For instance,
It is not clear where this very large set of systems exists (for example, is it a gas of particles inside a container?)
It is not clear how to physically generate an ensemble.
In this section, we attempt to partially answer this question.
Suppose we have a preparation procedure for a system in a physics lab: For example, the procedure might involve a physical apparatus and some protocols for manipulating the apparatus. As a result of this preparation procedure, some system is produced and maintained in isolation for some small period of time. By repeating this laboratory preparation procedure we obtain a sequence of systems X1, X2, ...,Xk, which in our mathematical idealization, we assume is an infinite sequence of systems. The systems are similar in that they were all produced in the same way. This infinite sequence is an ensemble.
In a laboratory setting, each one of these prepared systems might be used as input for one subsequent testing procedure. Again, the testing procedure involves a physical apparatus and some protocols; as a result of the testing procedure we obtain a yes-or-no answer. Given a testing procedure E applied to each prepared system, we obtain a sequence of values Meas(E, X1), Meas(E, X2), ..., Meas(E, Xk). Each one of these values is a 0 (no) or a 1 (yes).
Assume the following time average exists:
{\displaystyle \sigma (E)=\lim _{N\rightarrow \infty }{\frac {1}{N}}\sum _{k=1}^{N}\operatorname {Meas} (E,X_{k})}
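This time average can be illustrated with a simulated preparation-and-test procedure. The Bernoulli "measurement" below is a toy assumption for illustration, not part of the formal construction:

```python
import random

def time_average(meas, systems):
    """Empirical average (1/N) * sum_k Meas(E, X_k) of yes/no outcomes."""
    outcomes = [meas(x) for x in systems]
    return sum(outcomes) / len(outcomes)

# Toy preparation: each "system" is a random number in [0, 1); the test E
# answers yes (1) with probability 0.3 and no (0) otherwise.
random.seed(0)
systems = [random.random() for _ in range(100_000)]
meas = lambda x: 1 if x < 0.3 else 0

sigma_E = time_average(meas, systems)
print(sigma_E)  # close to 0.3 for large N
```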
For quantum mechanical systems, an important assumption made in the quantum logic approach to quantum mechanics is the identification of yes–no questions to the lattice of closed subspaces of a Hilbert space. With some additional technical assumptions one can then infer that states are given by density operators S so that:
{\displaystyle \sigma (E)=\operatorname {Tr} (ES).}
We see this reflects the definition of quantum states in general: A quantum state is a mapping from the observables to their expectation values.
== See also ==
Density matrix – Mathematical tool in quantum physics
Ensemble (fluid mechanics) – Imaginary collection of notionally identical experiments
Ensemble interpretation – Concept in Quantum mechanics
Phase space – Space of all possible states that a system can take
Liouville's theorem (Hamiltonian) – Key result in Hamiltonian mechanics and statistical mechanics
Maxwell–Boltzmann statistics – Statistical distribution used in many-particle mechanics
Replication (statistics) – Principle that variation can be better estimated with nonvarying repetition of conditions
== Notes ==
== References ==
== External links ==
Monte Carlo applet applied in statistical physics problems.
In information theory, the entropy of a random variable quantifies the average level of uncertainty or information associated with the variable's potential states or possible outcomes. This measures the expected amount of information needed to describe the state of the variable, considering the distribution of probabilities across all potential states. Given a discrete random variable
{\displaystyle X}, which may be any member {\displaystyle x} within the set {\displaystyle {\mathcal {X}}} and is distributed according to {\displaystyle p\colon {\mathcal {X}}\to [0,1]}, the entropy is
{\displaystyle \mathrm {H} (X):=-\sum _{x\in {\mathcal {X}}}p(x)\log p(x),}
where {\displaystyle \Sigma } denotes the sum over the variable's possible values. The choice of base for {\displaystyle \log }, the logarithm, varies for different applications. Base 2 gives the unit of bits (or "shannons"), while base e gives "natural units" nat, and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is the expected value of the self-information of a variable.
The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication", and is also referred to as Shannon entropy. Shannon's theory defines a data communication system composed of three elements: a source of data, a communication channel, and a receiver. The "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel. Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in his source coding theorem that the entropy represents an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in his noisy-channel coding theorem.
Entropy in information theory is directly analogous to the entropy in statistical thermodynamics. The analogy results when the values of the random variable designate energies of microstates, so Gibbs's formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such as combinatorics and machine learning. The definition can be derived from a set of axioms establishing that entropy should be a measure of how informative the average outcome of a variable is. For a continuous random variable, differential entropy is analogous to entropy. The definition
{\displaystyle \mathbb {E} [-\log p(X)]} generalizes the above.
== Introduction ==
The core idea of information theory is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative. For instance, the knowledge that some particular number will not be the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular number will win a lottery has high informational value because it communicates the occurrence of a very low probability event.
The information content, also called the surprisal or self-information, of an event {\displaystyle E} is a function that increases as the probability {\displaystyle p(E)} of an event decreases. When {\displaystyle p(E)} is close to 1, the surprisal of the event is low, but if {\displaystyle p(E)} is close to 0, the surprisal of the event is high. This relationship is described by the function
{\displaystyle \log \left({\frac {1}{p(E)}}\right),}
where {\displaystyle \log } is the logarithm, which gives 0 surprise when the probability of the event is 1. In fact, log is the only function that satisfies a specific set of conditions defined in section § Characterization.
Hence, we can define the information, or surprisal, of an event {\displaystyle E} by
{\displaystyle I(E)=\log \left({\frac {1}{p(E)}}\right),}
or equivalently,
{\displaystyle I(E)=-\log(p(E)).}
Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial.: 67  This implies that rolling a die has higher entropy than tossing a coin because each outcome of a die roll has smaller probability ({\displaystyle p=1/6}) than each outcome of a coin toss ({\displaystyle p=1/2}).
Consider a coin with probability p of landing on heads and probability 1 − p of landing on tails. The maximum surprise is when p = 1/2, for which one outcome is not expected over the other. In this case a coin flip has an entropy of one bit (similarly, one trit with equiprobable values contains {\displaystyle \log _{2}3} (about 1.58496) bits of information because it can have one of three values). The minimum surprise is when p = 0 (impossibility) or p = 1 (certainty) and the entropy is zero bits. When the entropy is zero, there is no uncertainty at all – no freedom of choice – no information. Other values of p give entropies between zero and one bits.
=== Example ===
Information theory is useful to calculate the smallest amount of information required to convey a message, as in data compression. For example, consider the transmission of sequences comprising the 4 characters 'A', 'B', 'C', and 'D' over a binary channel. If all 4 letters are equally likely (25%), one cannot do better than using two bits to encode each letter. 'A' might code as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. However, if the probabilities of each letter are unequal, say 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable length codes. In this case, 'A' would be coded as '0', 'B' as '10', 'C' as '110', and 'D' as '111'. With this representation, 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to the high prevalence of 'A' followed by 'B' – together 96% of characters). The calculation of the sum of probability-weighted log probabilities measures and captures this effect.
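This can be checked directly. A short sketch computing the entropy of the stated distribution and the average length of the variable-length code:

```python
from math import log2

probs = {'A': 0.70, 'B': 0.26, 'C': 0.02, 'D': 0.02}
code_lengths = {'A': 1, 'B': 2, 'C': 3, 'D': 3}  # '0', '10', '110', '111'

# Entropy H = -sum p log2 p: the lower bound on bits per character.
entropy = -sum(p * log2(p) for p in probs.values())
# Expected length of the variable-length code above.
avg_len = sum(probs[c] * code_lengths[c] for c in probs)

print(f"entropy      = {entropy:.3f} bits/char")  # about 1.09
print(f"average code = {avg_len:.2f} bits/char")  # 1.34, down from 2
```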
English text, treated as a string of characters, has fairly low entropy; i.e. it is fairly predictable. We can be fairly certain that, for example, 'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. English text has between 0.6 and 1.3 bits of entropy per character of the message.: 234
== Definition ==
Named after Boltzmann's Η-theorem, Shannon defined the entropy Η (Greek capital letter eta) of a discrete random variable
{\textstyle X}, which takes values in the set {\displaystyle {\mathcal {X}}} and is distributed according to {\displaystyle p:{\mathcal {X}}\to [0,1]} such that {\displaystyle p(x):=\mathbb {P} [X=x]}:
{\displaystyle \mathrm {H} (X)=\mathbb {E} [\operatorname {I} (X)]=\mathbb {E} [-\log p(X)].}
Here {\displaystyle \mathbb {E} } is the expected value operator, and I is the information content of X.: 11 : 19–20  {\displaystyle \operatorname {I} (X)} is itself a random variable.
The entropy can explicitly be written as:
{\displaystyle \mathrm {H} (X)=-\sum _{x\in {\mathcal {X}}}p(x)\log _{b}p(x),}
where b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the corresponding units of entropy are the bits for b = 2, nats for b = e, and bans for b = 10.
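A minimal sketch of this formula, with the base b as a parameter (the function name is illustrative):

```python
from math import e, log

def entropy(probs, b=2):
    """Shannon entropy H(X) = -sum_x p(x) log_b p(x).
    Summands with p(x) = 0 contribute 0, matching the limit p log p -> 0."""
    return -sum(p * log(p, b) for p in probs if p > 0)

print(entropy([0.5, 0.5]))        # 1.0 bit  (fair coin)
print(entropy([1/6] * 6))         # ~2.585 bits (fair die)
print(entropy([0.5, 0.5], b=e))   # ~0.693 nats
print(entropy([0.5, 0.5], b=10))  # ~0.301 bans
```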
In the case of {\displaystyle p(x)=0} for some {\displaystyle x\in {\mathcal {X}}}, the value of the corresponding summand 0 logb(0) is taken to be 0, which is consistent with the limit:: 13
{\displaystyle \lim _{p\to 0^{+}}p\log(p)=0.}
One may also define the conditional entropy of two variables {\displaystyle X} and {\displaystyle Y} taking values from sets {\displaystyle {\mathcal {X}}} and {\displaystyle {\mathcal {Y}}} respectively, as:: 16
{\displaystyle \mathrm {H} (X|Y)=-\sum _{x,y\in {\mathcal {X}}\times {\mathcal {Y}}}p_{X,Y}(x,y)\log {\frac {p_{X,Y}(x,y)}{p_{Y}(y)}},}
where {\displaystyle p_{X,Y}(x,y):=\mathbb {P} [X=x,Y=y]} and {\displaystyle p_{Y}(y)=\mathbb {P} [Y=y]}. This quantity should be understood as the remaining randomness in the random variable {\displaystyle X} given the random variable {\displaystyle Y}.
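A short sketch of this definition for a finite joint distribution (the dictionary encoding and the example distributions are illustrative):

```python
from math import log2

def conditional_entropy(p_xy):
    """H(X|Y) = -sum_{x,y} p(x,y) log2( p(x,y) / p_Y(y) )
    for a joint distribution given as {(x, y): probability}."""
    p_y = {}
    for (x, y), p in p_xy.items():
        p_y[y] = p_y.get(y, 0.0) + p  # marginalize over x
    total = sum(p * log2(p / p_y[y]) for (x, y), p in p_xy.items() if p > 0)
    return 0.0 if total == 0 else -total

# X is a fair bit and Y = X (fully dependent): no randomness remains.
print(conditional_entropy({(0, 0): 0.5, (1, 1): 0.5}))  # 0.0
# X and Y independent fair bits: H(X|Y) = H(X) = 1 bit.
print(conditional_entropy({(x, y): 0.25 for x in (0, 1) for y in (0, 1)}))  # 1.0
```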
=== Measure theory ===
Entropy can be formally defined in the language of measure theory as follows: Let
{\displaystyle (X,\Sigma ,\mu )} be a probability space. Let {\displaystyle A\in \Sigma } be an event. The surprisal of {\displaystyle A} is
{\displaystyle \sigma _{\mu }(A)=-\ln \mu (A).}
The expected surprisal of {\displaystyle A} is
{\displaystyle h_{\mu }(A)=\mu (A)\sigma _{\mu }(A).}
A {\displaystyle \mu }-almost partition is a set family {\displaystyle P\subseteq {\mathcal {P}}(X)} such that {\displaystyle \mu (\mathop {\cup } P)=1} and {\displaystyle \mu (A\cap B)=0} for all distinct {\displaystyle A,B\in P}. (This is a relaxation of the usual conditions for a partition.) The entropy of {\displaystyle P} is
{\displaystyle \mathrm {H} _{\mu }(P)=\sum _{A\in P}h_{\mu }(A).}
Let {\displaystyle M} be a sigma-algebra on {\displaystyle X}. The entropy of {\displaystyle M} is
{\displaystyle \mathrm {H} _{\mu }(M)=\sup _{P\subseteq M}\mathrm {H} _{\mu }(P).}
Finally, the entropy of the probability space is {\displaystyle \mathrm {H} _{\mu }(\Sigma )}, that is, the entropy with respect to {\displaystyle \mu } of the sigma-algebra of all measurable subsets of {\displaystyle X}.
Recent studies on layered dynamical systems have introduced the concept of symbolic conditional entropy, further extending classical entropy measures to more abstract informational structures.
== Example ==
Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modeled as a Bernoulli process.
The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is because
{\displaystyle {\begin{aligned}\mathrm {H} (X)&=-\sum _{i=1}^{n}{p(x_{i})\log _{b}p(x_{i})}\\&=-\sum _{i=1}^{2}{{\frac {1}{2}}\log _{2}{\frac {1}{2}}}\\&=-\sum _{i=1}^{2}{{\frac {1}{2}}\cdot (-1)}=1.\end{aligned}}}
However, if we know the coin is not fair, but comes up heads or tails with probabilities p and q, where p ≠ q, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, if p = 0.7, then
{\displaystyle {\begin{aligned}\mathrm {H} (X)&=-p\log _{2}p-q\log _{2}q\\[1ex]&=-0.7\log _{2}(0.7)-0.3\log _{2}(0.3)\\[1ex]&\approx -0.7\cdot (-0.515)-0.3\cdot (-1.737)\\[1ex]&=0.8816<1.\end{aligned}}}
Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain.: 14–15
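The binary entropy function used in these coin examples can be sketched as follows (the function name is illustrative):

```python
from math import log2

def binary_entropy(p):
    """H(p) = -p log2 p - (1 - p) log2 (1 - p), in bits; 0 at p = 0 or 1."""
    if p in (0.0, 1.0):
        return 0.0  # certain outcome: no uncertainty, no information
    return -p * log2(p) - (1 - p) * log2(1 - p)

print(binary_entropy(0.5))  # 1.0    (fair coin: maximum uncertainty)
print(binary_entropy(0.7))  # ~0.881 (biased coin: less than one bit)
print(binary_entropy(1.0))  # 0.0    (double-headed coin: no uncertainty)
```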
== Characterization ==
To understand the meaning of −Σ pi log(pi), first define an information function I in terms of an event i with probability pi. The amount of information acquired due to the observation of event i follows from Shannon's solution of the fundamental properties of information:
I(p) is monotonically decreasing in p: an increase in the probability of an event decreases the information from an observed event, and vice versa.
I(1) = 0: events that always occur do not communicate information.
I(p1·p2) = I(p1) + I(p2): the information learned from independent events is the sum of the information learned from each event.
I(p) is a twice continuously differentiable function of p.
Given two independent events, if the first event can yield one of n equiprobable outcomes and another has one of m equiprobable outcomes then there are mn equiprobable outcomes of the joint event. This means that if log2(n) bits are needed to encode the first value and log2(m) to encode the second, one needs log2(mn) = log2(m) + log2(n) to encode both.
Shannon discovered that a suitable choice of {\displaystyle \operatorname {I} } is given by:
{\displaystyle \operatorname {I} (p)=\log \left({\tfrac {1}{p}}\right)=-\log(p).}
In fact, the only possible values of {\displaystyle \operatorname {I} } are {\displaystyle \operatorname {I} (u)=k\log u} for {\displaystyle k<0}. Additionally, choosing a value for k is equivalent to choosing a value {\displaystyle x>1} for {\displaystyle k=-1/\log x}, so that x corresponds to the base for the logarithm. Thus, entropy is characterized by the above four properties.
The different units of information (bits for the binary logarithm log2, nats for the natural logarithm ln, bans for the decimal logarithm log10 and so on) are constant multiples of each other. For instance, in case of a fair coin toss, heads provides log2(2) = 1 bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity, n tosses provide n bits of information, which is approximately 0.693n nats or 0.301n decimal digits.
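The conversion between units is just a change of logarithm base; a small sketch:

```python
from math import log

# One fair coin toss carries 1 bit; the same quantity expressed in other
# units is a constant multiple (a change of logarithm base).
bits = 1.0
nats = bits * log(2)             # ~0.693 nats per bit
bans = bits * log(2) / log(10)   # ~0.301 decimal digits (bans) per bit

print(f"{bits} bit = {nats:.3f} nat = {bans:.3f} ban")
```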
The meaning of the events observed (the meaning of messages) does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves.
=== Alternative characterization ===
Another characterization of entropy uses the following properties. We denote pi = Pr(X = xi) and Ηn(p1, ..., pn) = Η(X).
Continuity: H should be continuous, so that changing the values of the probabilities by a very small amount should only change the entropy by a small amount.
Symmetry: H should be unchanged if the outcomes xi are re-ordered. That is,
{\displaystyle \mathrm {H} _{n}\left(p_{1},p_{2},\ldots ,p_{n}\right)=\mathrm {H} _{n}\left(p_{i_{1}},p_{i_{2}},\ldots ,p_{i_{n}}\right)}
for any permutation {\displaystyle \{i_{1},...,i_{n}\}} of {\displaystyle \{1,...,n\}}.
Maximum: {\displaystyle \mathrm {H} _{n}} should be maximal if all the outcomes are equally likely, i.e. {\displaystyle \mathrm {H} _{n}(p_{1},\ldots ,p_{n})\leq \mathrm {H} _{n}\left({\frac {1}{n}},\ldots ,{\frac {1}{n}}\right)}.
Increasing number of outcomes: for equiprobable events, the entropy should increase with the number of outcomes i.e.
{\displaystyle \mathrm {H} _{n}{\bigg (}\underbrace {{\frac {1}{n}},\ldots ,{\frac {1}{n}}} _{n}{\bigg )}<\mathrm {H} _{n+1}{\bigg (}\underbrace {{\frac {1}{n+1}},\ldots ,{\frac {1}{n+1}}} _{n+1}{\bigg )}.}
Additivity: given an ensemble of n uniformly distributed elements that are partitioned into k boxes (sub-systems) with b1, ..., bk elements each, the entropy of the whole ensemble should be equal to the sum of the entropy of the system of boxes and the individual entropies of the boxes, each weighted with the probability of being in that particular box.
==== Discussion ====
The rule of additivity has the following consequences: for positive integers bi where b1 + ... + bk = n,
{\displaystyle \mathrm {H} _{n}\left({\frac {1}{n}},\ldots ,{\frac {1}{n}}\right)=\mathrm {H} _{k}\left({\frac {b_{1}}{n}},\ldots ,{\frac {b_{k}}{n}}\right)+\sum _{i=1}^{k}{\frac {b_{i}}{n}}\,\mathrm {H} _{b_{i}}\left({\frac {1}{b_{i}}},\ldots ,{\frac {1}{b_{i}}}\right).}
Choosing k = n, b1 = ... = bn = 1 this implies that the entropy of a certain outcome is zero: Η1(1) = 0. This implies that the efficiency of a source set with n symbols can be defined simply as being equal to its n-ary entropy. See also Redundancy (information theory).
The characterization here imposes an additive property with respect to a partition of a set. Meanwhile, the conditional probability is defined in terms of a multiplicative property,
{\displaystyle P(A\mid B)\cdot P(B)=P(A\cap B)}. Observe that a logarithm mediates between these two operations. The conditional entropy and related quantities inherit this simple relation in turn. The measure-theoretic definition in the previous section defined the entropy as a sum over expected surprisals {\displaystyle -\mu (A)\cdot \ln \mu (A)} for an extremal partition. Here the logarithm is ad hoc and the entropy is not a measure in itself. At least in the information theory of a binary string, {\displaystyle \log _{2}} lends itself to practical interpretations.
Motivated by such relations, a plethora of related and competing quantities have been defined. For example, David Ellerman's analysis of a "logic of partitions" defines a competing measure in structures dual to that of subsets of a universal set. Information is quantified as "dits" (distinctions), a measure on partitions. "Dits" can be converted into Shannon's bits, to get the formulas for conditional entropy, and so on.
=== Alternative characterization via additivity and subadditivity ===
Another succinct axiomatic characterization of Shannon entropy was given by Aczél, Forte and Ng, via the following properties:
Subadditivity: {\displaystyle \mathrm {H} (X,Y)\leq \mathrm {H} (X)+\mathrm {H} (Y)} for jointly distributed random variables {\displaystyle X,Y}.
Additivity: {\displaystyle \mathrm {H} (X,Y)=\mathrm {H} (X)+\mathrm {H} (Y)} when the random variables {\displaystyle X,Y} are independent.
Expansibility: {\displaystyle \mathrm {H} _{n+1}(p_{1},\ldots ,p_{n},0)=\mathrm {H} _{n}(p_{1},\ldots ,p_{n})}, i.e., adding an outcome with probability zero does not change the entropy.
Symmetry: {\displaystyle \mathrm {H} _{n}(p_{1},\ldots ,p_{n})} is invariant under permutation of {\displaystyle p_{1},\ldots ,p_{n}}.
Small for small probabilities: {\displaystyle \lim _{q\to 0^{+}}\mathrm {H} _{2}(1-q,q)=0}.
==== Discussion ====
It was shown that any function {\displaystyle \mathrm {H} } satisfying the above properties must be a constant multiple of Shannon entropy, with a non-negative constant. Compared to the previously mentioned characterizations of entropy, this characterization focuses on the properties of entropy as a function of random variables (subadditivity and additivity), rather than the properties of entropy as a function of the probability vector {\displaystyle p_{1},\ldots ,p_{n}}.
It is worth noting that if we drop the "small for small probabilities" property, then {\displaystyle \mathrm {H} } must be a non-negative linear combination of the Shannon entropy and the Hartley entropy.
== Further properties ==
The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the expected amount of information learned (or uncertainty eliminated) by revealing the value of a random variable X:
Adding or removing an event with probability zero does not contribute to the entropy:
{\displaystyle \mathrm {H} _{n+1}(p_{1},\ldots ,p_{n},0)=\mathrm {H} _{n}(p_{1},\ldots ,p_{n}).}
The maximal entropy of an event with n different outcomes is logb(n): it is attained by the uniform probability distribution. That is, uncertainty is maximal when all possible events are equiprobable:: 29
{\displaystyle \mathrm {H} (p_{1},\dots ,p_{n})\leq \log _{b}n.}
The entropy or the amount of information revealed by evaluating (X,Y) (that is, evaluating X and Y simultaneously) is equal to the information revealed by conducting two consecutive experiments: first evaluating the value of Y, then revealing the value of X given that you know the value of Y. This may be written as:: 16
{\displaystyle \mathrm {H} (X,Y)=\mathrm {H} (X|Y)+\mathrm {H} (Y)=\mathrm {H} (Y|X)+\mathrm {H} (X).}
If {\displaystyle Y=f(X)} where {\displaystyle f} is a function, then {\displaystyle \mathrm {H} (f(X)|X)=0}. Applying the previous formula to {\displaystyle \mathrm {H} (X,f(X))} yields
{\displaystyle \mathrm {H} (X)+\mathrm {H} (f(X)|X)=\mathrm {H} (f(X))+\mathrm {H} (X|f(X)),}
so {\displaystyle \mathrm {H} (f(X))\leq \mathrm {H} (X)}: the entropy of a variable can only decrease when it is passed through a function.
If X and Y are two independent random variables, then knowing the value of Y doesn't influence our knowledge of the value of X (since the two don't influence each other by independence):
{\displaystyle \mathrm {H} (X|Y)=\mathrm {H} (X).}
More generally, for any random variables X and Y, we have: 29
{\displaystyle \mathrm {H} (X|Y)\leq \mathrm {H} (X).}
The entropy of two simultaneous events is no more than the sum of the entropies of each individual event i.e.,
{\displaystyle \mathrm {H} (X,Y)\leq \mathrm {H} (X)+\mathrm {H} (Y)}, with equality if and only if the two events are independent.: 28
The entropy {\displaystyle \mathrm {H} (p)} is concave in the probability mass function {\displaystyle p}, i.e.: 30
{\displaystyle \mathrm {H} (\lambda p_{1}+(1-\lambda )p_{2})\geq \lambda \mathrm {H} (p_{1})+(1-\lambda )\mathrm {H} (p_{2})}
for all probability mass functions {\displaystyle p_{1},p_{2}} and {\displaystyle 0\leq \lambda \leq 1}.: 32
Accordingly, the negative entropy (negentropy) function is convex, and its convex conjugate is LogSumExp.
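The concavity inequality is easy to verify numerically; the sketch below (with two arbitrary example distributions) checks it for several mixing weights λ:

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a list of probabilities."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Two arbitrary example distributions over the same three outcomes.
p1 = [0.7, 0.2, 0.1]
p2 = [0.1, 0.3, 0.6]

for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    mix = [lam * a + (1 - lam) * b for a, b in zip(p1, p2)]
    # H(lam*p1 + (1-lam)*p2) >= lam*H(p1) + (1-lam)*H(p2)
    assert entropy(mix) >= lam * entropy(p1) + (1 - lam) * entropy(p2) - 1e-12
```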
== Aspects ==
=== Relationship to thermodynamic entropy ===
The inspiration for adopting the word entropy in information theory came from the close resemblance between Shannon's formula and very similar known formulae from statistical mechanics.
In statistical thermodynamics the most general formula for the thermodynamic entropy S of a thermodynamic system is the Gibbs entropy
{\displaystyle S=-k_{\text{B}}\sum _{i}p_{i}\ln p_{i}\,,}
where kB is the Boltzmann constant, and pi is the probability of a microstate. The Gibbs entropy was defined by J. Willard Gibbs in 1878 after earlier work by Ludwig Boltzmann (1872).
The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy introduced by John von Neumann in 1927:
{\displaystyle S=-k_{\text{B}}\,{\rm {Tr}}(\rho \ln \rho )\,,}
where ρ is the density matrix of the quantum mechanical system and Tr is the trace.
At an everyday practical level, the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. As the minuteness of the Boltzmann constant kB indicates, the changes in S / kB for even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything in data compression or signal processing. In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy.
The connection between thermodynamics and what is now known as information theory was first made by Boltzmann and expressed by his equation:
{\displaystyle S=k_{\text{B}}\ln W,}
where S is the thermodynamic entropy of a particular macrostate (defined by thermodynamic parameters such as temperature, volume, energy, etc.), W is the number of microstates (various combinations of particles in various energy states) that can yield the given macrostate, and kB is the Boltzmann constant. It is assumed that each microstate is equally likely, so that the probability of a given microstate is pi = 1/W. When these probabilities are substituted into the above expression for the Gibbs entropy (or equivalently kB times the Shannon entropy), Boltzmann's equation results. In information theoretic terms, the information entropy of a system is the amount of "missing" information needed to determine a microstate, given the macrostate.
In the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. Adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, making any complete state description longer. (See article: maximum entropy thermodynamics). Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). Landauer's principle imposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient.
=== Data compression ===
Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits. Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. The minimum channel capacity can be realized in theory by using the typical set or in practice using Huffman, Lempel–Ziv or arithmetic coding. (See also Kolmogorov complexity.) In practice, compression algorithms deliberately include some judicious redundancy in the form of checksums to protect against errors. The entropy rate of a data source is the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English; the PPM compression algorithm can compress English text down to about 1.5 bits per character.
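As an illustration (not from the source), an order-0 estimate of the per-character entropy of a text can be computed from empirical symbol frequencies; because it ignores context, it overestimates the true entropy rate of English:

```python
import math
from collections import Counter

def empirical_entropy(text):
    """Order-0 per-character entropy estimate (bits/char) from symbol frequencies."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog"
h = empirical_entropy(sample)

# The estimate is positive and cannot exceed log2 of the alphabet
# actually used; context-aware models would push it lower still.
assert 0 < h <= math.log2(len(set(sample)))
```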
If a compression scheme is lossless – one in which you can always recover the entire original message by decompression – then a compressed message has the same quantity of information as the original but is communicated in fewer characters. It has more information (higher entropy) per character. A compressed message has less redundancy. Shannon's source coding theorem states a lossless compression scheme cannot compress messages, on average, to have more than one bit of information per bit of message, but that any value less than one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains. Shannon's theorem also implies that no lossless compression scheme can shorten all messages. If some messages come out shorter, at least one must come out longer due to the pigeonhole principle. In practical use, this is generally not a problem, because one is usually only interested in compressing certain types of messages, such as a document in English, as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger.
A 2011 study in Science estimates the world's technological capacity to store and communicate optimally compressed information, normalized on the most effective compression algorithms available in the year 2007, thereby estimating the entropy of the technologically available sources.
The authors estimate humankind's technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. They break the information into three categories: information stored on a medium, information received through one-way broadcast networks, and information exchanged through two-way telecommunications networks.
=== Entropy as a measure of diversity ===
Entropy is one of several ways to measure biodiversity and is applied in the form of the Shannon index. A diversity index is a quantitative statistical measure of how many different types exist in a dataset, such as species in a community, accounting for ecological richness, evenness, and dominance. Specifically, Shannon entropy is the logarithm of ¹D, the true diversity index with parameter equal to 1. The Shannon index is related to the proportional abundances of types.
=== Entropy of a sequence ===
There are a number of entropy-related concepts that mathematically quantify information content of a sequence or message:
the self-information of an individual message or symbol taken from a given probability distribution (message or sequence seen as an individual event),
the joint entropy of the symbols forming the message or sequence (seen as a set of events),
the entropy rate of a stochastic process (message or sequence is seen as a succession of events).
(The "rate of self-information" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of a stationary process.) Other quantities of information are also used to compare or relate different sources of information.
It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the "entropy" of the English language is about 1 bit per character, they are actually modeling the English language as a stochastic process and talking about its entropy rate. Shannon himself used the term in this way.
If very large blocks are used, the estimate of per-character entropy rate may become artificially low because the probability distribution of the sequence is not known exactly; it is only an estimate. If one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book, and if there are N published books, and each book is only published once, the estimate of the probability of each book is 1/N, and the entropy (in bits) is −log2(1/N) = log2(N). As a practical code, this corresponds to assigning each book a unique identifier and using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered. Kolmogorov complexity is a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortest program for a universal computer that outputs the sequence. A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest.
The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, .... Treating the sequence as a message and each number as a symbol, there are almost as many distinct symbols as there are characters in the message, giving an entropy of approximately log2(n). The first 128 symbols of the Fibonacci sequence thus have an entropy of approximately 7 bits/symbol. However, the sequence can be expressed using a formula [F(n) = F(n−1) + F(n−2) for n = 3, 4, 5, ..., F(1) = 1, F(2) = 1], and this formula has a much lower entropy and applies to any length of the Fibonacci sequence.
=== Limitations of entropy in cryptography ===
In cryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key, though its real uncertainty is unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy. It also takes (on average) 2^127 guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly. Instead, a measure called guesswork can be used to measure the effort required for a brute force attack.
Other problems may arise from non-uniform distributions used in cryptography. Consider, for example, a 1,000,000-digit binary one-time pad using exclusive or. If the pad has 1,000,000 bits of entropy, it is perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy), it may provide good security. But if the pad has 999,999 bits of entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all.
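The last scenario can be sketched directly (scaled down to a hypothetical 16-bit pad): a pad whose first bit is fixed leaves the first plaintext bit unencrypted, even though the remaining positions are perfectly random.

```python
import random

random.seed(42)
n = 16  # toy pad length standing in for 1,000,000 bits

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def flawed_pad():
    """Pad with the first bit fixed to 0; only n-1 bits of entropy."""
    return [0] + [random.randint(0, 1) for _ in range(n - 1)]

plaintext = [random.randint(0, 1) for _ in range(n)]
ciphertext = xor(plaintext, flawed_pad())

# The first bit is transmitted in the clear: c[0] = p[0] XOR 0 = p[0].
assert ciphertext[0] == plaintext[0]
```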
=== Data as a Markov process ===
A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independently of the previous characters), the binary entropy is:
{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\log p_{i},}
where pi is the probability of i. For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is:
{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\sum _{j}\ p_{i}(j)\log p_{i}(j),}
where i is a state (certain preceding characters) and pi(j) is the probability of j given i as the previous character.
For a second order Markov source, the entropy rate is
{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\sum _{j}p_{i}(j)\sum _{k}p_{i,j}(k)\ \log p_{i,j}(k).}
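The first-order formula can be sketched in Python with a hypothetical two-state source; the stationary probabilities and transition table below are made up for illustration (and chosen so that p is indeed stationary under the transitions):

```python
import math

# Hypothetical first-order Markov source over {'a', 'b'}.
stationary = {'a': 0.6, 'b': 0.4}
transition = {'a': {'a': 0.9, 'b': 0.1},
              'b': {'a': 0.15, 'b': 0.85}}

def markov_entropy_rate(p, t):
    """H(S) = -sum_i p_i sum_j p_i(j) log2 p_i(j), in bits per symbol."""
    return -sum(
        p[i] * sum(t[i][j] * math.log2(t[i][j]) for j in t[i] if t[i][j] > 0)
        for i in p
    )

rate = markov_entropy_rate(stationary, transition)

# Conditioning on the previous character lowers the rate below the
# order-0 entropy of the stationary distribution.
order0 = -sum(q * math.log2(q) for q in stationary.values())
assert 0 < rate < order0
```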
== Efficiency (normalized entropy) ==
A source set
X with a non-uniform distribution will have less entropy than the same set with a uniform distribution (i.e. the "optimized alphabet"). This deficiency in entropy can be expressed as a ratio called efficiency:
{\displaystyle \eta (X)={\frac {H}{H_{\text{max}}}}=-\sum _{i=1}^{n}{\frac {p(x_{i})\log _{b}(p(x_{i}))}{\log _{b}(n)}}.}
Applying the basic properties of the logarithm, this quantity can also be expressed as:
{\displaystyle {\begin{aligned}\eta (X)&=-\sum _{i=1}^{n}{\frac {p(x_{i})\log _{b}(p(x_{i}))}{\log _{b}(n)}}=\sum _{i=1}^{n}{\frac {\log _{b}\left(p(x_{i})^{-p(x_{i})}\right)}{\log _{b}(n)}}\\[1ex]&=\sum _{i=1}^{n}\log _{n}\left(p(x_{i})^{-p(x_{i})}\right)=\log _{n}\left(\prod _{i=1}^{n}p(x_{i})^{-p(x_{i})}\right).\end{aligned}}}
Efficiency has utility in quantifying the effective use of a communication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropy
logb(n). Furthermore, the efficiency is indifferent to the choice of (positive) base b, as the final expression above no longer depends on b.
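A short sketch of the efficiency computation, checking both the uniform-distribution case and the base-independence noted above:

```python
import math

def efficiency(probs, b=2):
    """Normalized entropy: H divided by the maximum entropy log_b(n)."""
    n = len(probs)
    h = -sum(p * math.log(p, b) for p in probs if p > 0)
    return h / math.log(n, b)

p = [0.7, 0.2, 0.1]

# Efficiency lies in (0, 1] and equals 1 only for the uniform distribution.
assert 0 < efficiency(p) < 1
assert abs(efficiency([0.25] * 4) - 1.0) < 1e-12

# The choice of (positive) base b does not matter.
assert abs(efficiency(p, 2) - efficiency(p, 10)) < 1e-12
```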
== Entropy for continuous random variables ==
=== Differential entropy ===
The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable with probability density function f(x) with finite or infinite support
X on the real line is defined by analogy, using the above form of the entropy as an expectation:
{\displaystyle \mathrm {H} (X)=\mathbb {E} [-\log f(X)]=-\int _{\mathbb {X} }f(x)\log f(x)\,\mathrm {d} x.}
This is the differential entropy (or continuous entropy). A precursor of the continuous entropy h[f] is the expression for the functional Η in the H-theorem of Boltzmann.
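As a numerical illustration, the differential entropy of a standard normal density can be approximated by midpoint-rule integration and compared with the known closed form ½ ln(2πe) nats:

```python
import math

def gaussian_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def differential_entropy(pdf, lo=-10.0, hi=10.0, steps=40000):
    """Approximate -integral of f(x) ln f(x) dx (nats) with the midpoint rule."""
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        f = pdf(lo + (i + 0.5) * dx)
        if f > 0:
            total -= f * math.log(f) * dx
    return total

h_numeric = differential_entropy(gaussian_pdf)
h_exact = 0.5 * math.log(2 * math.pi * math.e)  # known closed form, about 1.4189 nats
assert abs(h_numeric - h_exact) < 1e-4
```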
Although the analogy between both functions is suggestive, the following question must be asked: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and corrections have been suggested, notably the limiting density of discrete points.
To answer this question, a connection must be established between the two functions, in order to obtain a generally finite measure as the bin size goes to zero. In the discrete case, the bin size is the (implicit) width of each of the n (finite or infinite) bins whose probabilities are denoted by pn. As the domain is generalized to the continuum, the width must be made explicit.
To do this, start with a continuous function f discretized into bins of size Δ.
By the mean-value theorem there exists a value xi in each bin such that
{\displaystyle f(x_{i})\Delta =\int _{i\Delta }^{(i+1)\Delta }f(x)\,dx}
the integral of the function f can be approximated (in the Riemann sense) by
{\displaystyle \int _{-\infty }^{\infty }f(x)\,dx=\lim _{\Delta \to 0}\sum _{i=-\infty }^{\infty }f(x_{i})\Delta ,}
where this limit and "bin size goes to zero" are equivalent.
We will denote
{\displaystyle \mathrm {H} ^{\Delta }:=-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log \left(f(x_{i})\Delta \right)}
and expanding the logarithm, we have
{\displaystyle \mathrm {H} ^{\Delta }=-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(f(x_{i}))-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(\Delta ).}
As Δ → 0, we have
{\displaystyle {\begin{aligned}\sum _{i=-\infty }^{\infty }f(x_{i})\Delta &\to \int _{-\infty }^{\infty }f(x)\,dx=1\\\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(f(x_{i}))&\to \int _{-\infty }^{\infty }f(x)\log f(x)\,dx.\end{aligned}}}
Note that, since log(Δ) → −∞ as Δ → 0, a special definition of the differential or continuous entropy is required:
{\displaystyle h[f]=\lim _{\Delta \to 0}\left(\mathrm {H} ^{\Delta }+\log \Delta \right)=-\int _{-\infty }^{\infty }f(x)\log f(x)\,dx,}
which is, as said before, referred to as the differential entropy. This means that the differential entropy is not a limit of the Shannon entropy for n → ∞. Rather, it differs from the limit of the Shannon entropy by an infinite offset (see also the article on information dimension).
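This relationship can be made concrete with a uniform density on [0, 2], whose differential entropy is ln 2 nats. The discretized entropy H^Δ diverges like −log Δ, but H^Δ + log Δ recovers ln 2 for every bin size:

```python
import math

width = 2.0          # support of the uniform density
f = 1.0 / width      # f(x) = 1/2 on [0, 2], so h[f] = ln 2 nats

for delta in (0.5, 0.1, 0.01):
    n_bins = int(round(width / delta))
    p_bin = f * delta                               # probability mass per bin
    h_delta = -n_bins * p_bin * math.log(p_bin)     # Shannon entropy of the bins
    # H^Delta grows as -log(delta), but H^Delta + log(delta) = h[f] = ln(width).
    assert abs(h_delta + math.log(delta) - math.log(width)) < 1e-9
```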
=== Limiting density of discrete points ===
It turns out as a result that, unlike the Shannon entropy, the differential entropy is not in general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous co-ordinate transformations. This problem may be illustrated by a change of units when x is a dimensioned variable: f(x) then has the units of 1/x, but the argument of the logarithm must be dimensionless, so the differential entropy as given above is improper. If Δ is some "standard" value of x (i.e. "bin size") and therefore has the same units, then a modified differential entropy may be written in proper form as:
{\displaystyle \mathrm {H} =\int _{-\infty }^{\infty }f(x)\log(f(x)\,\Delta )\,dx,}
and the result will be the same for any choice of units for x. In fact, the limit of discrete entropy as
N → ∞ would also include a term of log(N), which would in general be infinite. This is expected: continuous variables would typically have infinite entropy when discretized. The limiting density of discrete points is really a measure of how much easier a distribution is to describe than a distribution that is uniform over its quantization scheme.
=== Relative entropy ===
Another useful measure of entropy that works equally well in the discrete and the continuous case is the relative entropy of a distribution. It is defined as the Kullback–Leibler divergence from the distribution to a reference measure m as follows. Assume that a probability distribution p is absolutely continuous with respect to a measure m, i.e. is of the form p(dx) = f(x)m(dx) for some non-negative m-integrable function f with m-integral 1, then the relative entropy can be defined as
{\displaystyle D_{\mathrm {KL} }(p\|m)=\int \log(f(x))p(dx)=\int f(x)\log(f(x))m(dx).}
In this form the relative entropy generalizes (up to change in sign) both the discrete entropy, where the measure m is the counting measure, and the differential entropy, where the measure m is the Lebesgue measure. If the measure m is itself a probability distribution, the relative entropy is non-negative, and zero if p = m as measures. It is defined for any measure space, hence coordinate independent and invariant under co-ordinate reparameterizations if one properly takes into account the transformation of the measure m. The relative entropy, and (implicitly) entropy and differential entropy, do depend on the "reference" measure m.
== Use in number theory ==
Terence Tao used entropy to make a useful connection trying to solve the Erdős discrepancy problem.
Intuitively, the idea behind the proof was as follows: if there is low information, in terms of Shannon entropy, between consecutive random variables (here the random variable is defined using the Liouville function, a useful mathematical function for studying the distribution of primes, via XH = λ(n + H)), then the sum over an interval [n, n + H] could become arbitrarily large. For example, a sequence of +1's (which are values XH could take) has trivially low entropy and its sum would become big. The key insight was that showing a reduction in entropy by non-negligible amounts as H expands, leading in turn to unbounded growth of a mathematical object over this random variable, is equivalent to showing the unbounded growth required by the Erdős discrepancy problem.
The proof is quite involved; it brought together breakthroughs not just in the novel use of Shannon entropy, but also in the use of the Liouville function along with averages of modulated multiplicative functions in short intervals. Proving it also broke the "parity barrier" for this specific problem.
While the use of Shannon entropy in the proof is novel, it is likely to open new research in this direction.
== Use in combinatorics ==
Entropy has become a useful quantity in combinatorics.
=== Loomis–Whitney inequality ===
A simple example of this is an alternative proof of the Loomis–Whitney inequality: for every subset A ⊆ Zd, we have
{\displaystyle |A|^{d-1}\leq \prod _{i=1}^{d}|P_{i}(A)|}
where Pi is the orthogonal projection in the ith coordinate:
{\displaystyle P_{i}(A)=\{(x_{1},\ldots ,x_{i-1},x_{i+1},\ldots ,x_{d}):(x_{1},\ldots ,x_{d})\in A\}.}
The proof follows as a simple corollary of Shearer's inequality: if X1, ..., Xd are random variables and S1, ..., Sn are subsets of {1, ..., d} such that every integer between 1 and d lies in exactly r of these subsets, then
{\displaystyle \mathrm {H} [(X_{1},\ldots ,X_{d})]\leq {\frac {1}{r}}\sum _{i=1}^{n}\mathrm {H} [(X_{j})_{j\in S_{i}}]}
where
{\displaystyle (X_{j})_{j\in S_{i}}} is the Cartesian product of random variables Xj with indexes j in Si (so the dimension of this vector is equal to the size of Si).
We sketch how Loomis–Whitney follows from this: Indeed, let X be a uniformly distributed random variable with values in A, so that each point in A occurs with equal probability. Then (by the further properties of entropy mentioned above) Η(X) = log|A|, where |A| denotes the cardinality of A. Let Si = {1, 2, ..., i−1, i+1, ..., d}. The range of
{\displaystyle (X_{j})_{j\in S_{i}}}
is contained in Pi(A) and hence
{\displaystyle \mathrm {H} [(X_{j})_{j\in S_{i}}]\leq \log |P_{i}(A)|}. Now use this to bound the right side of Shearer's inequality and exponentiate both sides of the resulting inequality to obtain the result.
=== Approximation to binomial coefficient ===
For integers 0 < k < n let q = k/n. Then
{\displaystyle {\frac {2^{n\mathrm {H} (q)}}{n+1}}\leq {\tbinom {n}{k}}\leq 2^{n\mathrm {H} (q)},}
where
{\displaystyle \mathrm {H} (q)=-q\log _{2}(q)-(1-q)\log _{2}(1-q).}
A nice interpretation of this is that the number of binary strings of length n with exactly k many 1's is approximately
{\displaystyle 2^{n\mathrm {H} (k/n)}}.
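These bounds are easy to verify exhaustively for moderate n:

```python
import math

def binary_entropy(q):
    """H(q) = -q log2(q) - (1-q) log2(1-q)."""
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

n = 100
for k in range(1, n):
    bound = 2.0 ** (n * binary_entropy(k / n))
    c = math.comb(n, k)
    # 2^{nH(q)}/(n+1) <= C(n, k) <= 2^{nH(q)}
    assert bound / (n + 1) <= c <= bound
```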
== Use in machine learning ==
Machine learning techniques arise largely from statistics and also information theory. In general, entropy is a measure of uncertainty and the objective of machine learning is to minimize uncertainty.
Decision tree learning algorithms use relative entropy to determine the decision rules that govern the data at each node. The information gain in decision trees
IG(Y, X), which is equal to the difference between the entropy of Y and the conditional entropy of Y given X, quantifies the expected information, or the reduction in entropy, from additionally knowing the value of an attribute X. The information gain is used to identify which attributes of the dataset provide the most information and should be used to split the nodes of the tree optimally.
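A minimal sketch of information gain on a toy dataset (the attribute and label names are hypothetical):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute, labels):
    """IG(Y, X) = H(Y) - H(Y|X) for paired samples of attribute X and label Y."""
    n = len(labels)
    h_cond = 0.0
    for value in set(attribute):
        subset = [y for x, y in zip(attribute, labels) if x == value]
        h_cond += len(subset) / n * entropy(subset)
    return entropy(labels) - h_cond

outlook = ['sunny', 'sunny', 'rainy', 'rainy']
play = ['no', 'no', 'yes', 'yes']

# A perfectly predictive attribute yields IG equal to H(Y);
# an uninformative one yields IG of 0.
assert abs(information_gain(outlook, play) - entropy(play)) < 1e-12
assert abs(information_gain(['a', 'b', 'a', 'b'], play)) < 1e-12
```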
Bayesian inference models often apply the principle of maximum entropy to obtain prior probability distributions. The idea is that the distribution that best represents the current state of knowledge of a system is the one with the largest entropy, and is therefore suitable to be the prior.
Classification in machine learning performed by logistic regression or artificial neural networks often employs a standard loss function, called cross-entropy loss, that minimizes the average cross entropy between the ground truth and predicted distributions. In general, cross entropy is a measure of the difference between two probability distributions, closely related to the KL divergence (also known as relative entropy).
== See also ==
== Notes ==
== References ==
This article incorporates material from Shannon's entropy on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
== Further reading ==
=== Textbooks on information theory ===
Cover, T.M., Thomas, J.A. (2006), Elements of Information Theory – 2nd Ed., Wiley-Interscience, ISBN 978-0-471-24195-9
MacKay, D.J.C. (2003), Information Theory, Inference and Learning Algorithms, Cambridge University Press, ISBN 978-0-521-64298-9
Arndt, C. (2004), Information Measures: Information and its Description in Science and Engineering, Springer, ISBN 978-3-540-40855-0
Gray, R. M. (2011), Entropy and Information Theory, Springer.
Martin, Nathaniel F.G.; England, James W. (2011). Mathematical Theory of Entropy. Cambridge University Press. ISBN 978-0-521-17738-2.
Shannon, C.E., Weaver, W. (1949) The Mathematical Theory of Communication, Univ of Illinois Press. ISBN 0-252-72548-4
Stone, J. V. (2014), Chapter 1 of Information Theory: A Tutorial Introduction Archived 3 June 2016 at the Wayback Machine, University of Sheffield, England. ISBN 978-0956372857.
== External links ==
"Entropy", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"Entropy" Archived 4 June 2016 at the Wayback Machine at Rosetta Code—repository of implementations of Shannon entropy in different programming languages.
Entropy Archived 31 May 2016 at the Wayback Machine an interdisciplinary journal on all aspects of the entropy concept. Open access.
The concept of entropy was first developed by German physicist Rudolf Clausius in the mid-nineteenth century as a thermodynamic property that predicts that certain spontaneous processes are irreversible or impossible. In statistical mechanics, entropy is formulated as a statistical property using probability theory. The statistical entropy perspective was introduced in 1870 by Austrian physicist Ludwig Boltzmann, who established a new field of physics that provided the descriptive linkage between the macroscopic observation of nature and the microscopic view based on the rigorous treatment of large ensembles of microscopic states that constitute thermodynamic systems.
== Boltzmann's principle ==
Ludwig Boltzmann defined entropy as a measure of the number of possible microscopic states (microstates) of a system in thermodynamic equilibrium, consistent with its macroscopic thermodynamic properties, which constitute the macrostate of the system. A useful illustration is the example of a sample of gas contained in a container. The easily measurable parameters volume, pressure, and temperature of the gas describe its macroscopic condition (state). At a microscopic level, the gas consists of a vast number of freely moving atoms or molecules, which randomly collide with one another and with the walls of the container. The collisions with the walls produce the macroscopic pressure of the gas, which illustrates the connection between microscopic and macroscopic phenomena.
A microstate of the system is a description of the positions and momenta of all its particles. The large number of particles of the gas provides an essentially infinite number of possible microstates for the sample, but collectively they exhibit a well-defined average configuration, which is exhibited as the macrostate of the system, to which each individual microstate's contribution is negligibly small. The ensemble of microstates comprises a statistical distribution of probability for each microstate, and the group of most probable configurations accounts for the macroscopic state. Therefore, the system can be described as a whole by only a few macroscopic parameters, called the thermodynamic variables: the total energy E, volume V, pressure P, temperature T, and so forth. However, this description is relatively simple only when the system is in a state of equilibrium.
Equilibrium may be illustrated with a simple example of a drop of food coloring falling into a glass of water. The dye diffuses in a complicated manner, which is difficult to precisely predict. However, after sufficient time has passed, the system reaches a uniform color, a state much easier to describe and explain.
Boltzmann formulated a simple relationship between entropy and the number of possible microstates of a system, which is denoted by the symbol Ω. The entropy S is proportional to the natural logarithm of this number:
{\displaystyle S=k_{\text{B}}\ln \Omega }
The proportionality constant kB is one of the fundamental constants of physics and is named the Boltzmann constant in honor of its discoverer.
Boltzmann's entropy describes the system when all the accessible microstates are equally likely. It is the configuration corresponding to the maximum of entropy at equilibrium. The randomness or disorder is maximal, and so is the lack of distinction (or information) of each microstate.
Entropy is a thermodynamic property just like pressure, volume, or temperature. Therefore, it connects the microscopic and the macroscopic world view.
Boltzmann's principle is regarded as the foundation of statistical mechanics.
== Gibbs entropy formula ==
The macroscopic state of a system is characterized by a distribution on the microstates. The entropy of this distribution is given by the Gibbs entropy formula, named after J. Willard Gibbs. For a classical system (i.e., a collection of classical particles) with a discrete set of microstates, if Ei is the energy of microstate i, and pi is the probability that it occurs during the system's fluctuations, then the entropy of the system is
{\displaystyle S=-k_{\text{B}}\,\sum _{i}p_{i}\ln(p_{i})}
The quantity {\displaystyle k_{\text{B}}} is the Boltzmann constant, which multiplies the summation. The summation is dimensionless, since each {\displaystyle p_{i}} is a probability and therefore dimensionless, and ln is the natural logarithm. Hence the SI unit on both sides of the equation is that of heat capacity:
{\displaystyle [S]=[k_{\text{B}}]=\mathrm {\frac {J}{K}} }
This definition remains meaningful even when the system is far away from equilibrium. Other definitions assume that the system is in thermal equilibrium, either as an isolated system, or as a system in exchange with its surroundings. The set of microstates (with probability distribution) over which the sum is found is called a statistical ensemble. Each type of statistical ensemble (micro-canonical, canonical, grand-canonical, etc.) describes a different configuration of the system's exchanges with the outside, varying from a completely isolated system to a system that can exchange one or more quantities with a reservoir, like energy, volume or molecules. In every ensemble, the equilibrium configuration of the system is dictated by the maximization of the entropy of the union of the system and its reservoir, according to the second law of thermodynamics (see the statistical mechanics article).
Neglecting correlations (or, more generally, statistical dependencies) between the states of individual particles will lead to an incorrect probability distribution on the microstates and hence to an overestimate of the entropy. Such correlations occur in any system with nontrivially interacting particles, that is, in all systems more complex than an ideal gas.
This S is almost universally called simply the entropy. It can also be called the statistical entropy or the thermodynamic entropy without changing the meaning. Note the above expression of the statistical entropy is a discretized version of Shannon entropy. The von Neumann entropy formula is an extension of the Gibbs entropy formula to the quantum-mechanical case.
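The Gibbs formula, and its reduction to Boltzmann's expression for a uniform distribution, can be checked numerically. A short sketch (the convention that terms with p_i = 0 contribute nothing is assumed, as in the Shannon entropy):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def gibbs_entropy(probs):
    """S = -k_B * sum_i p_i * ln(p_i); terms with p_i = 0 contribute nothing."""
    return -K_B * sum(p * math.log(p) for p in probs if p > 0)

# For a uniform distribution over Omega equally likely microstates,
# the Gibbs formula reduces to Boltzmann's S = k_B * ln(Omega):
omega = 8
uniform = [1 / omega] * omega
assert abs(gibbs_entropy(uniform) - K_B * math.log(omega)) < 1e-30
```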
It has been shown that the Gibbs entropy is equal to the classical "heat engine" entropy characterized by
{\displaystyle dS={\delta Q}/{T}}, and that the generalized Boltzmann distribution is a necessary and sufficient condition for this equivalence. Furthermore, under certain natural postulates, the Gibbs entropy is the only entropy measure that is equivalent to the classical "heat engine" entropy.
=== Ensembles ===
The various ensembles used in statistical thermodynamics are linked to the entropy by the following relations:
{\displaystyle S=k_{\text{B}}\ln \Omega _{\text{mic}}=k_{\text{B}}(\ln Z_{\text{can}}+\beta {\bar {E}})=k_{\text{B}}(\ln {\mathcal {Z}}_{\text{gr}}+\beta ({\bar {E}}-\mu {\bar {N}}))}
where
{\displaystyle \Omega _{\text{mic}}} is the microcanonical partition function,
{\displaystyle Z_{\text{can}}} is the canonical partition function, and
{\displaystyle {\mathcal {Z}}_{\text{gr}}} is the grand canonical partition function.
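The canonical relation S = k_B(ln Z_can + βĒ) can be verified against the Gibbs sum for a concrete system. A sketch for a hypothetical two-level system with energies 0 and ε (k_B set to 1 for simplicity):

```python
import math

def canonical_entropy_two_level(eps, beta):
    """Compare S = ln(Z) + beta*E_bar with the Gibbs sum -sum p*ln(p)
    for a two-level system with energies 0 and eps (units with k_B = 1)."""
    Z = 1 + math.exp(-beta * eps)           # canonical partition function
    p = [1 / Z, math.exp(-beta * eps) / Z]  # Boltzmann probabilities
    E_bar = p[1] * eps                      # ensemble-average energy
    s_ensemble = math.log(Z) + beta * E_bar
    s_gibbs = -sum(q * math.log(q) for q in p)
    return s_ensemble, s_gibbs

s1, s2 = canonical_entropy_two_level(eps=1.0, beta=2.0)
assert abs(s1 - s2) < 1e-12  # the two expressions agree
```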
== Order through chaos and the second law of thermodynamics ==
We can think of Ω as a measure of our lack of knowledge about a system. To illustrate this idea, consider a set of 100 coins, each of which is either heads up or tails up. In this example, let us suppose that the macrostates are specified by the total number of heads and tails, while the microstates are specified by the facings of each individual coin (i.e., the exact order in which heads and tails occur). For the macrostates of 100 heads or 100 tails, there is exactly one possible configuration, so our knowledge of the system is complete. At the opposite extreme, the macrostate which gives us the least knowledge about the system consists of 50 heads and 50 tails in any order, for which there are 100891344545564193334812497256 (100 choose 50) ≈ 1.009 × 10^29 possible microstates.
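The counting in the coin example can be reproduced directly with the binomial coefficient:

```python
import math

# Number of microstates for a macrostate with k heads out of n coins:
n = 100
omega_extreme = math.comb(n, 0)   # all tails: exactly one arrangement
omega_middle = math.comb(n, 50)   # 50/50 split: the most arrangements

assert omega_extreme == 1
assert omega_middle == 100891344545564193334812497256  # ≈ 1.009e29

# Entropy in units of k_B is ln(Omega), so the 50/50 macrostate carries
# about 67 k_B more entropy than the all-heads or all-tails macrostate.
print(math.log(omega_middle))  # ≈ 66.78
```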
Even when a system is entirely isolated from external influences, its microstate is constantly changing. For instance, the particles in a gas are constantly moving, and thus occupy a different position at each moment of time; their momenta are also constantly changing as they collide with each other or with the container walls. Suppose we prepare the system in an artificially highly ordered equilibrium state. For instance, imagine dividing a container with a partition and placing a gas on one side of the partition, with a vacuum on the other side. If we remove the partition and watch the subsequent behavior of the gas, we will find that its microstate evolves according to some chaotic and unpredictable pattern, and that on average these microstates will correspond to a more disordered macrostate than before. It is possible, but extremely unlikely, for the gas molecules to bounce off one another in such a way that they remain in one half of the container. It is overwhelmingly probable for the gas to spread out to fill the container evenly, which is the new equilibrium macrostate of the system.
This is an example illustrating the second law of thermodynamics:
the total entropy of any isolated thermodynamic system tends to increase over time, approaching a maximum value.
Since its discovery, this idea has been the focus of a great deal of thought, some of it confused. A chief point of confusion is the fact that the Second Law applies only to isolated systems. For example, the Earth is not an isolated system because it is constantly receiving energy in the form of sunlight. In contrast, the universe may be considered an isolated system, so that its total entropy is constantly increasing.
== Counting of microstates ==
In classical statistical mechanics, the number of microstates is actually uncountably infinite, since the properties of classical systems are continuous. For example, a microstate of a classical ideal gas is specified by the positions and momenta of all the atoms, which range continuously over the real numbers. If we want to define Ω, we have to come up with a method of grouping the microstates together to obtain a countable set. This procedure is known as coarse graining. In the case of the ideal gas, we count two states of an atom as the "same" state if their positions and momenta are within δx and δp of each other. Since the values of δx and δp can be chosen arbitrarily, the entropy is not uniquely defined. It is defined only up to an additive constant. (As we will see, the thermodynamic definition of entropy is also defined only up to a constant.)
To avoid coarse graining one can take the entropy as defined by the H-theorem.
{\displaystyle S=-k_{\text{B}}H_{\text{B}}:=-k_{\text{B}}\int f(q_{i},p_{i})\,\ln f(q_{i},p_{i})\,dq_{1}\,dp_{1}\cdots dq_{N}\,dp_{N}}
However, this ambiguity can be resolved with quantum mechanics. The quantum state of a system can be expressed as a superposition of "basis" states, which can be chosen to be energy eigenstates (i.e. eigenstates of the quantum Hamiltonian). Usually, the quantum states are discrete, even though there may be an infinite number of them. For a system with some specified energy E, one takes Ω to be the number of energy eigenstates within a macroscopically small energy range between E and E + δE. In the thermodynamic limit, the specific entropy becomes independent of the choice of δE.
An important result, known as Nernst's theorem or the third law of thermodynamics, states that the entropy of a system at zero absolute temperature is a well-defined constant. This is because a system at zero temperature exists in its lowest-energy state, or ground state, so that its entropy is determined by the degeneracy of the ground state. Many systems, such as crystal lattices, have a unique ground state, and (since ln(1) = 0) this means that they have zero entropy at absolute zero. Other systems have more than one state with the same, lowest energy, and have a non-vanishing "zero-point entropy". For instance, ordinary ice has a zero-point entropy of 3.41 J/(mol⋅K), because its underlying crystal structure possesses multiple configurations with the same energy (a phenomenon known as geometrical frustration).
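The zero-point entropy of ice quoted above can be compared with Pauling's classic combinatorial estimate, which counts roughly (3/2)^N proton configurations allowed by the hydrogen-bond "ice rules" for N water molecules. A sketch (Pauling's estimate is an added detail, not stated in the text above):

```python
import math

R = 8.31446261815324  # molar gas constant, J/(mol*K)

# Pauling's estimate of the residual (zero-point) entropy of ice:
# about (3/2)^N allowed proton configurations give S ≈ R * ln(3/2) per mole.
s_pauling = R * math.log(3 / 2)
print(round(s_pauling, 2))  # ≈ 3.37 J/(mol*K), close to the measured 3.41
```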
The third law of thermodynamics states that the entropy of a perfect crystal at absolute zero (0 K) is zero. This means that nearly all molecular motion should cease. The oscillator equation for predicting quantized vibrational levels shows that even when the vibrational quantum number is 0, the molecule still has vibrational energy:
{\displaystyle E_{\nu }=h\nu _{0}\left(n+{\tfrac {1}{2}}\right)}
where {\displaystyle h} is the Planck constant, {\displaystyle \nu _{0}} is the characteristic frequency of the vibration, and {\displaystyle n} is the vibrational quantum number. Even when {\displaystyle n=0} (the zero-point energy), {\displaystyle E_{n}} does not equal 0, in adherence to the Heisenberg uncertainty principle.
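The oscillator formula is easy to evaluate numerically; the frequency below is an illustrative value, not a property of any particular molecule:

```python
import math

H = 6.62607015e-34  # Planck constant, J*s (exact since the 2019 SI redefinition)

def vibrational_energy(nu0, n):
    """E_n = h * nu0 * (n + 1/2) for a quantum harmonic oscillator."""
    return H * nu0 * (n + 0.5)

# Even in the ground state (n = 0) a vibration at, say, 1e14 Hz retains
# a nonzero zero-point energy of h*nu0/2.
e0 = vibrational_energy(1e14, 0)
assert e0 > 0
assert abs(e0 - H * 1e14 / 2) < 1e-40
```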
== See also ==
== References == | Wikipedia/Entropy_(statistical_thermodynamics) |
Elementary Principles in Statistical Mechanics, published in March 1902, is a work of scientific literature by Josiah Willard Gibbs which is considered to be the foundation of modern statistical mechanics. Its full title was Elementary Principles in Statistical Mechanics, developed with especial reference to the rational foundation of thermodynamics.
== Overview ==
In this book, Gibbs carefully showed how the laws of thermodynamics would arise exactly from a generic classical mechanical system, if one allowed for a certain natural uncertainty about the state of that system.
The themes of thermodynamic connections to statistical mechanics had been explored in the preceding decades by Clausius, Maxwell, and Boltzmann, who together wrote thousands of pages on this topic. One of Gibbs' aims in writing the book was to distill these results into a cohesive and simple picture. Gibbs wrote in 1892 to his colleague Lord Rayleigh: "Just now I am trying to get ready for publication something on thermodynamics from the a-priori point of view, or rather on 'statistical mechanics' [...] I do not know that I shall have anything particularly new in substance, but shall be contented if I can so choose my standpoint (as seems to me possible) as to get a simpler view of the subject." He had been working on this topic for some time, at least as early as 1884, when he produced a paper (now lost except for its abstract) on the topic of statistical mechanics.
Gibbs' book simplified statistical mechanics into a treatise of 207 pages. At the same time, Gibbs fully generalized and expanded statistical mechanics into the form in which it is known today. Gibbs showed how statistical mechanics could be used even to extend thermodynamics beyond classical thermodynamics, to systems of any number of degrees of freedom (including microscopic systems) and non-extensive systems.
At the time of the book's writing, the prevailing understanding of nature was purely in classical terms: Quantum mechanics had not yet been conceived, and even basic facts taken for granted today (such as the existence of atoms) were still contested among scientists. Gibbs was careful in assuming the least about the nature of physical systems under study, and as a result the principles of statistical mechanics laid down by Gibbs have retained their accuracy (with some changes in detail but not in theme), in spite of the major upheavals of modern physics during the early 20th century.
== Content ==
V. Kumaran wrote the following comment regarding Elementary Principles in Statistical Mechanics:
... In this, he introduced the now standard concept of ‘ensemble’, which is a collection of a large number of indistinguishable replicas of the system under consideration, which interact with each other, but which are isolated from the rest of the universe. The replicas could be in different microscopic states, as determined by the positions and momenta of the constituent molecules, for example, but the macroscopic state determined by the pressure, temperature and / or other thermodynamic variables are identical.
Gibbs argued that the properties of the system, averaged over time, are identical to an average over all the members of the ensemble if the ‘ergodic hypothesis’ is valid. The ergodic hypothesis, which states that all the microstates of the system are sampled with equal probability, is applicable to most systems, with the exception of systems such as quenched glasses which are in metastable states. Thus, the ensemble averaging method provides us with an easy way to calculate the thermodynamic properties of the system, without having to observe it for long periods of time.
Gibbs also used this tool to obtain relationships between systems constrained in different ways, for example, to relate the properties of a system at constant volume and energy with those at constant temperature and pressure. Even today, the concept of ensemble is widely used for sampling in computer simulations of the thermodynamic properties of materials, and has subsequently found uses in other fields such as quantum theory.
== References ==
== External links ==
Freely available digitized version on the Internet Archive | Wikipedia/Elementary_Principles_in_Statistical_Mechanics |
The random phase approximation (RPA) is an approximation method in condensed matter physics and nuclear physics. It was first introduced by David Bohm and David Pines as an important result in a series of seminal papers of 1952 and 1953. For decades physicists had been trying to incorporate the effect of microscopic quantum mechanical interactions between electrons in the theory of matter. Bohm and Pines' RPA accounts for the weak screened Coulomb interaction and is commonly used for describing the dynamic linear electronic response of electron systems. It was further developed to the relativistic form (RRPA) by solving the Dirac equation.
In the RPA, electrons are assumed to respond only to the total electric potential V(r) which is the sum of the external perturbing potential Vext(r) and a screening potential Vsc(r). The external perturbing potential is assumed to oscillate at a single frequency ω, so that the model yields via a self-consistent field (SCF) method a dynamic dielectric function denoted by εRPA(k, ω).
The contribution to the dielectric function from the total electric potential is assumed to average out, so that only the potential at wave vector k contributes. This is what is meant by the random phase approximation. The resulting dielectric function, also called the Lindhard dielectric function, correctly predicts a number of properties of the electron gas, including plasmons.
The RPA was criticized in the late 1950s for overcounting the degrees of freedom and the call for justification led to intense work among theoretical physicists. In a seminal paper Murray Gell-Mann and Keith Brueckner showed that the RPA can be derived from a summation of leading-order chain Feynman diagrams in a dense electron gas.
The consistency in these results became an important justification and motivated a very strong growth in theoretical physics in the late 50s and 60s.
== Applications ==
=== Ground state of an interacting bosonic system ===
The RPA vacuum {\displaystyle \left|\mathrm {RPA} \right\rangle } for a bosonic system can be expressed in terms of the non-correlated bosonic vacuum {\displaystyle \left|\mathrm {MFT} \right\rangle } and the original boson excitations {\displaystyle \mathbf {a} _{i}^{\dagger }}:
{\displaystyle \left|\mathrm {RPA} \right\rangle ={\mathcal {N}}\mathbf {e} ^{Z_{ij}\mathbf {a} _{i}^{\dagger }\mathbf {a} _{j}^{\dagger }/2}\left|\mathrm {MFT} \right\rangle }
where Z is a symmetric matrix with {\displaystyle |Z|\leq 1} and
{\displaystyle {\mathcal {N}}={\frac {\left\langle \mathrm {MFT} \right|\left.\mathrm {RPA} \right\rangle }{\left\langle \mathrm {MFT} \right|\left.\mathrm {MFT} \right\rangle }}}
The normalization can be calculated by
{\displaystyle \langle \mathrm {RPA} |\mathrm {RPA} \rangle ={\mathcal {N}}^{2}\langle \mathrm {MFT} |\mathbf {e} ^{z_{i}({\tilde {\mathbf {q} }}_{i})^{2}/2}\mathbf {e} ^{z_{j}({\tilde {\mathbf {q} }}_{j}^{\dagger })^{2}/2}|\mathrm {MFT} \rangle =1}
where {\displaystyle Z_{ij}=(X^{\mathrm {t} })_{i}^{k}z_{k}X_{j}^{k}} is the singular value decomposition of {\displaystyle Z_{ij}}, and {\displaystyle {\tilde {\mathbf {q} }}^{i}=(X^{\dagger })_{j}^{i}\mathbf {a} ^{j}}. Then
{\displaystyle {\mathcal {N}}^{-2}=\sum _{m_{i}}\sum _{n_{j}}{\frac {(z_{i}/2)^{m_{i}}(z_{j}/2)^{n_{j}}}{m!n!}}\langle \mathrm {MFT} |\prod _{i\,j}({\tilde {\mathbf {q} }}_{i})^{2m_{i}}({\tilde {\mathbf {q} }}_{j}^{\dagger })^{2n_{j}}|\mathrm {MFT} \rangle =\prod _{i}\sum _{m_{i}}(z_{i}/2)^{2m_{i}}{\frac {(2m_{i})!}{m_{i}!^{2}}}=\prod _{i}\sum _{m_{i}}(z_{i})^{2m_{i}}{1/2 \choose m_{i}}={\sqrt {\det(1-|Z|^{2})}}}
The connection between new and old excitations is given by
{\displaystyle {\tilde {\mathbf {a} }}_{i}=\left({\frac {1}{\sqrt {1-Z^{2}}}}\right)_{ij}\mathbf {a} _{j}+\left({\frac {1}{\sqrt {1-Z^{2}}}}Z\right)_{ij}\mathbf {a} _{j}^{\dagger }.}
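For a single mode (scalar z), the transformation reduces to a Bogoliubov rotation with coefficients u = 1/√(1−z²) and v = z/√(1−z²). A minimal numerical sketch checking the condition u² − v² = 1, which is what preserves the bosonic commutator [ã, ã†] = 1 (the single-mode reduction is an illustrative simplification, not stated in the text above):

```python
import math

def rpa_coefficients(z):
    """Single-mode version of the transformation: u = 1/sqrt(1 - z^2),
    v = z/sqrt(1 - z^2). Requires |z| < 1."""
    u = 1 / math.sqrt(1 - z**2)
    v = z / math.sqrt(1 - z**2)
    return u, v

# u^2 - v^2 = 1 is the Bogoliubov condition that keeps the transformed
# operators bosonic, i.e. preserves the commutator [a~, a~†] = 1.
for z in (0.0, 0.3, 0.9):
    u, v = rpa_coefficients(z)
    assert abs(u**2 - v**2 - 1) < 1e-9
```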
== References == | Wikipedia/Random_phase_approximation |
Thermodynamics is a branch of physics that deals with heat, work, and temperature, and their relation to energy, entropy, and the physical properties of matter and radiation. The behavior of these quantities is governed by the four laws of thermodynamics, which convey a quantitative description using measurable macroscopic physical quantities but may be explained in terms of microscopic constituents by statistical mechanics. Thermodynamics applies to various topics in science and engineering, especially physical chemistry, biochemistry, chemical engineering, and mechanical engineering, as well as other complex fields such as meteorology.
Historically, thermodynamics developed out of a desire to increase the efficiency of early steam engines, particularly through the work of French physicist Sadi Carnot (1824) who believed that engine efficiency was the key that could help France win the Napoleonic Wars. Scots-Irish physicist Lord Kelvin was the first to formulate a concise definition of thermodynamics in 1854 which stated, "Thermo-dynamics is the subject of the relation of heat to forces acting between contiguous parts of bodies, and the relation of heat to electrical agency." German physicist and mathematician Rudolf Clausius restated Carnot's principle known as the Carnot cycle and gave the theory of heat a truer and sounder basis. His most important paper, "On the Moving Force of Heat", published in 1850, first stated the second law of thermodynamics. In 1865 he introduced the concept of entropy. In 1870 he introduced the virial theorem, which applied to heat.
The initial application of thermodynamics to mechanical heat engines was quickly extended to the study of chemical compounds and chemical reactions. Chemical thermodynamics studies the nature of the role of entropy in the process of chemical reactions and has provided the bulk of expansion and knowledge of the field. Other formulations of thermodynamics emerged. Statistical thermodynamics, or statistical mechanics, concerns itself with statistical predictions of the collective motion of particles from their microscopic behavior. In 1909, Constantin Carathéodory presented a purely mathematical approach in an axiomatic formulation, a description often referred to as geometrical thermodynamics.
== Introduction ==
A description of any thermodynamic system employs the four laws of thermodynamics that form an axiomatic basis. The first law specifies that energy can be transferred between physical systems as heat, as work, and with transfer of matter. The second law defines the existence of a quantity called entropy, that describes the direction, thermodynamically, that a system can evolve and quantifies the state of order of a system and that can be used to quantify the useful work that can be extracted from the system.
In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of the thermodynamic system and its surroundings. A system is composed of particles, whose average motions define its properties, and those properties are in turn related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.
With these tools, thermodynamics can be used to describe how systems respond to changes in their environment. This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. The results of thermodynamics are essential for other fields of physics and for chemistry, chemical engineering, corrosion engineering, aerospace engineering, mechanical engineering, cell biology, biomedical engineering, materials science, and economics, to name a few.
This article is focused mainly on classical thermodynamics which primarily studies systems in thermodynamic equilibrium. Non-equilibrium thermodynamics is often treated as an extension of the classical treatment, but statistical mechanics has brought many advances to that field.
== History ==
The history of thermodynamics as a scientific discipline generally begins with Otto von Guericke who, in 1650, built and designed the world's first vacuum pump and demonstrated a vacuum using his Magdeburg hemispheres. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that 'nature abhors a vacuum'. Shortly after Guericke, the Anglo-Irish physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, Boyle's Law was formulated, which states that pressure and volume are inversely proportional. Then, in 1679, based on these concepts, an associate of Boyle's named Denis Papin built a steam digester, which was a closed vessel with a tightly fitting lid that confined steam until a high pressure was generated.
Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and a cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine, followed by Thomas Newcomen in 1712. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time.
The fundamental concepts of heat capacity and latent heat, which were necessary for the development of thermodynamics, were developed by Professor Joseph Black at the University of Glasgow, where James Watt was employed as an instrument maker. Black and Watt performed experiments together, but it was Watt who conceived the idea of the external condenser which resulted in a large increase in steam engine efficiency. Drawing on all the previous work led Sadi Carnot, the "father of thermodynamics", to publish Reflections on the Motive Power of Fire (1824), a discourse on heat, power, energy and engine efficiency. The book outlined the basic energetic relations between the Carnot engine, the Carnot cycle, and motive power. It marked the start of thermodynamics as a modern science.
The first thermodynamic textbook was written in 1859 by William Rankine, originally trained as a physicist and a civil and mechanical engineering professor at the University of Glasgow. The first and second laws of thermodynamics emerged simultaneously in the 1850s, primarily out of the works of William Rankine, Rudolf Clausius, and William Thomson (Lord Kelvin).
The foundations of statistical thermodynamics were set out by physicists such as James Clerk Maxwell, Ludwig Boltzmann, Max Planck, Rudolf Clausius and J. Willard Gibbs.
Clausius, who first stated the basic ideas of the second law in his paper "On the Moving Force of Heat", published in 1850, and is called "one of the founding fathers of thermodynamics", introduced the concept of entropy in 1865.
During the years 1873–76 the American mathematical physicist Josiah Willard Gibbs published a series of three papers, the most famous being On the Equilibrium of Heterogeneous Substances, in which he showed how thermodynamic processes, including chemical reactions, could be graphically analyzed. By studying the energy, entropy, volume, temperature and pressure of the thermodynamic system in such a manner, one can determine whether a process would occur spontaneously. Also Pierre Duhem in the 19th century wrote about chemical thermodynamics. During the early 20th century, chemists such as Gilbert N. Lewis, Merle Randall, and E. A. Guggenheim applied the mathematical methods of Gibbs to the analysis of chemical processes.
== Etymology ==
Thermodynamics has an intricate etymology.
By a surface-level analysis, the word consists of two parts that can be traced back to Ancient Greek. Firstly, thermo- ("of heat"; used in words such as thermometer) can be traced back to the root θέρμη therme, meaning "heat". Secondly, the word dynamics ("science of force [or power]") can be traced back to the root δύναμις dynamis, meaning "power".
In 1849, the adjective thermo-dynamic was used by William Thomson.
In 1854, the noun thermo-dynamics was used by Thomson and William Rankine to represent the science of generalized heat engines.
Pierre Perrot claims that the term thermodynamics was coined by James Joule in 1858 to designate the science of relations between heat and power; however, Joule never used that term, instead using the expression perfect thermo-dynamic engine in reference to Thomson's 1849 phraseology.
== Branches of thermodynamics ==
The study of thermodynamical systems has developed into several related branches, each using a different fundamental model as a theoretical or experimental basis, or applying the principles to varying types of systems.
=== Classical thermodynamics ===
Classical thermodynamics is the description of the states of thermodynamic systems at near-equilibrium, that uses macroscopic, measurable properties. It is used to model exchanges of energy, work and heat based on the laws of thermodynamics. The qualifier classical reflects the fact that it represents the first level of understanding of the subject as it developed in the 19th century and describes the changes of a system in terms of macroscopic empirical (large scale, and measurable) parameters. A microscopic interpretation of these concepts was later provided by the development of statistical mechanics.
=== Statistical mechanics ===
Statistical mechanics, also known as statistical thermodynamics, emerged with the development of atomic and molecular theories in the late 19th century and early 20th century, and supplemented classical thermodynamics with an interpretation of the microscopic interactions between individual particles or quantum-mechanical states. This field relates the microscopic properties of individual atoms and molecules to the macroscopic, bulk properties of materials that can be observed on the human scale, thereby explaining classical thermodynamics as a natural result of statistics, classical mechanics, and quantum theory at the microscopic level.
=== Chemical thermodynamics ===
Chemical thermodynamics is the study of the interrelation of energy with chemical reactions or with a physical change of state within the confines of the laws of thermodynamics. The primary objective of chemical thermodynamics is determining the spontaneity of a given transformation.
=== Equilibrium thermodynamics ===
Equilibrium thermodynamics is the study of transfers of matter and energy in systems or bodies that, by agencies in their surroundings, can be driven from one state of thermodynamic equilibrium to another. The term 'thermodynamic equilibrium' indicates a state of balance, in which all macroscopic flows are zero; in the case of the simplest systems or bodies, their intensive properties are homogeneous, and their pressures are perpendicular to their boundaries. In an equilibrium state there are no unbalanced potentials, or driving forces, between macroscopically distinct parts of the system. A central aim in equilibrium thermodynamics is: given a system in a well-defined initial equilibrium state, and given its surroundings, and given its constitutive walls, to calculate what will be the final equilibrium state of the system after a specified thermodynamic operation has changed its walls or surroundings.
=== Non-equilibrium thermodynamics ===
Non-equilibrium thermodynamics is a branch of thermodynamics that deals with systems that are not in thermodynamic equilibrium. Most systems found in nature are not in thermodynamic equilibrium because they are not in stationary states, and are continuously and discontinuously subject to flux of matter and energy to and from other systems. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. Many natural systems still today remain beyond the scope of currently known macroscopic thermodynamic methods.
== Laws of thermodynamics ==
Thermodynamics is principally based on a set of four laws which are universally valid when applied to systems that fall within the constraints implied by each. In the various theoretical descriptions of thermodynamics these laws may be expressed in seemingly differing forms, but the most prominent formulations are the following.
=== Zeroth law ===
The zeroth law of thermodynamics states: If two systems are each in thermal equilibrium with a third, they are also in thermal equilibrium with each other.
This statement implies that thermal equilibrium is an equivalence relation on the set of thermodynamic systems under consideration. Systems are said to be in equilibrium if the small, random exchanges between them (e.g. Brownian motion) do not lead to a net change in energy. This law is tacitly assumed in every measurement of temperature. Thus, if one seeks to decide whether two bodies are at the same temperature, it is not necessary to bring them into contact and measure any changes of their observable properties in time. The law provides an empirical definition of temperature, and justification for the construction of practical thermometers.
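The equivalence-relation reading of the zeroth law can be made concrete with a toy sketch (the system labels and the pair set are hypothetical; this illustrates the logical structure, not any physical computation): if A is in equilibrium with C and B is in equilibrium with C, transitivity lets us conclude A ~ B without bringing A and B into contact, which is exactly how a thermometer C is used.

```python
# Toy model: thermal equilibrium as a relation between labeled systems.
equilibrium_pairs = {("A", "C"), ("B", "C")}

def in_equilibrium(x, y, pairs):
    """x ~ y if they are the same system, share a recorded pair,
    or are linked through a common third system (transitivity)."""
    if x == y or (x, y) in pairs or (y, x) in pairs:
        return True
    thirds = {b for a, b in pairs if a == x} | {a for a, b in pairs if b == x}
    # Remove the traversed pair so the recursion terminates.
    return any(in_equilibrium(t, y, pairs - {(x, t), (t, x)}) for t in thirds)

assert in_equilibrium("A", "B", equilibrium_pairs)  # via the "thermometer" C
```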
The zeroth law was not initially recognized as a separate law of thermodynamics, as its basis in thermodynamical equilibrium was implied in the other laws. The first, second, and third laws had been explicitly stated already, and found common acceptance in the physics community before the importance of the zeroth law for the definition of temperature was realized. As it was impractical to renumber the other laws, it was named the zeroth law.
=== First law ===
The first law of thermodynamics states: In a process without transfer of matter, the change in internal energy, {\displaystyle \Delta U}, of a thermodynamic system is equal to the energy gained as heat, {\displaystyle Q}, less the thermodynamic work, {\displaystyle W}, done by the system on its surroundings:
{\displaystyle \Delta U=Q-W.}
where {\displaystyle \Delta U} denotes the change in the internal energy of a closed system (for which heat or work through the system boundary are possible, but matter transfer is not possible), {\displaystyle Q} denotes the quantity of energy supplied to the system as heat, and {\displaystyle W} denotes the amount of thermodynamic work done by the system on its surroundings. An equivalent statement is that perpetual motion machines of the first kind are impossible; work {\displaystyle W} done by a system on its surroundings requires that the system's internal energy {\displaystyle U} decrease or be consumed, so that the amount of internal energy lost by that work must be resupplied as heat {\displaystyle Q} by an external energy source or as work by an external machine acting on the system (so that {\displaystyle U} is recovered) to make the system work continuously.
For processes that include transfer of matter, a further statement is needed: With due account of the respective fiducial reference states of the systems, when two systems, which may be of different chemical compositions, initially separated only by an impermeable wall, and otherwise isolated, are combined into a new system by the thermodynamic operation of removal of the wall, then
{\displaystyle U_{0}=U_{1}+U_{2}},
where U0 denotes the internal energy of the combined system, and U1 and U2 denote the internal energies of the respective separated systems.
Adapted for thermodynamics, this law is an expression of the principle of conservation of energy, which states that energy can be transformed (changed from one form to another), but cannot be created or destroyed.
Internal energy is a principal property of the thermodynamic state, while heat and work are modes of energy transfer by which a process may change this state. A change of internal energy of a system may be achieved by any combination of heat added or removed and work performed on or by the system. As a function of state, the internal energy does not depend on the manner, or on the path through intermediate steps, by which the system arrived at its state.
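The bookkeeping of the first law can be sketched numerically; the sign convention here matches the statement above (W is work done by the system), and the numbers are illustrative, not from the article.

```python
def delta_internal_energy(Q, W):
    """First law for a closed system: change in internal energy equals
    heat supplied to the system minus work done by the system (dU = Q - W)."""
    return Q - W

# A gas absorbs 500 J as heat and does 200 J of work on its surroundings:
dU = delta_internal_energy(500.0, 200.0)  # internal energy rises by 300 J
```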
=== Second law ===
A traditional version of the second law of thermodynamics states: Heat does not spontaneously flow from a colder body to a hotter body.
The second law refers to a system of matter and radiation, initially with inhomogeneities in temperature, pressure, chemical potential, and other intensive properties, that are due to internal 'constraints', or impermeable rigid walls, within it, or to externally imposed forces. The law observes that, when the system is isolated from the outside world and from those forces, there is a definite thermodynamic quantity, its entropy, that increases as the constraints are removed, eventually reaching a maximum value at thermodynamic equilibrium, when the inhomogeneities practically vanish. For systems that are initially far from thermodynamic equilibrium, though several have been proposed, there is known no general physical principle that determines the rates of approach to thermodynamic equilibrium, and thermodynamics does not deal with such rates. The many versions of the second law all express the general irreversibility of the transitions involved in systems approaching thermodynamic equilibrium.
In macroscopic thermodynamics, the second law is a basic observation applicable to any actual thermodynamic process; in statistical thermodynamics, the second law is postulated to be a consequence of molecular chaos.
=== Third law ===
The third law of thermodynamics states: As the temperature of a system approaches absolute zero, all processes cease and the entropy of the system approaches a minimum value.
This law of thermodynamics is a statistical law of nature regarding entropy and the impossibility of reaching absolute zero of temperature. This law provides an absolute reference point for the determination of entropy. The entropy determined relative to this point is the absolute entropy. Alternate definitions include "the entropy of all systems and of all states of a system is smallest at absolute zero," or equivalently "it is impossible to reach the absolute zero of temperature by any finite number of processes".
Absolute zero, at which all activity would stop if it were possible to achieve, is −273.15 °C (degrees Celsius), or −459.67 °F (degrees Fahrenheit), or 0 K (kelvin), or 0° R (degrees Rankine).
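The equivalence of the absolute-zero values quoted above can be verified with the standard temperature-scale conversions, which are exact by definition; a minimal sketch:

```python
def kelvin_to_celsius(K):
    """°C = K - 273.15 (exact by definition)."""
    return K - 273.15

def kelvin_to_fahrenheit(K):
    """°F = K * 9/5 - 459.67 (exact by definition)."""
    return K * 9.0 / 5.0 - 459.67

def kelvin_to_rankine(K):
    """°R = K * 9/5 (exact by definition)."""
    return K * 9.0 / 5.0

# Absolute zero (0 K) expressed on the other scales:
zero = (kelvin_to_celsius(0.0), kelvin_to_fahrenheit(0.0), kelvin_to_rankine(0.0))
```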
== System models ==
An important concept in thermodynamics is the thermodynamic system, which is a precisely defined region of the universe under study. Everything in the universe except the system is called the surroundings. A system is separated from the remainder of the universe by a boundary, which may be physical or notional, but serves to confine the system to a finite volume. Segments of the boundary are often described as walls; they have respective defined 'permeabilities'. Transfers of energy as work, or as heat, or of matter, between the system and the surroundings, take place through the walls, according to their respective permeabilities.
Matter or energy that pass across the boundary so as to effect a change in the internal energy of the system need to be accounted for in the energy balance equation. The volume contained by the walls can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. The system could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics. When a looser viewpoint is adopted, and the requirement of thermodynamic equilibrium is dropped, the system can be the body of a tropical cyclone, such as Kerry Emanuel theorized in 1986 in the field of atmospheric thermodynamics, or the event horizon of a black hole.
Boundaries are of four types: fixed, movable, real, and imaginary. For example, in an engine, a fixed boundary means the piston is locked at its position, within which a constant volume process might occur. If the piston is allowed to move that boundary is movable while the cylinder and cylinder head boundaries are fixed. For closed systems, boundaries are real while for open systems boundaries are often imaginary. In the case of a jet engine, a fixed imaginary boundary might be assumed at the intake of the engine, fixed boundaries along the surface of the case and a second fixed imaginary boundary across the exhaust nozzle.
Generally, thermodynamics distinguishes three classes of systems, defined in terms of what is allowed to cross their boundaries: isolated systems (neither matter nor energy may cross), closed systems (energy but not matter may cross), and open systems (both matter and energy may cross).
As time passes in an isolated system, internal differences of pressures, densities, and temperatures tend to even out. A system in which all equalizing processes have gone to completion is said to be in a state of thermodynamic equilibrium.
Once in thermodynamic equilibrium, a system's properties are, by definition, unchanging in time. Systems in equilibrium are much simpler and easier to understand than are systems which are not in equilibrium. Often, when analysing a dynamic thermodynamic process, the simplifying assumption is made that each intermediate state in the process is at equilibrium, producing thermodynamic processes which develop so slowly as to allow each intermediate step to be an equilibrium state and are said to be reversible processes.
== States and processes ==
When a system is at equilibrium under a given set of conditions, it is said to be in a definite thermodynamic state. The state of the system can be described by a number of state quantities that do not depend on the process by which the system arrived at its state. They are called intensive variables or extensive variables according to how they change when the size of the system changes. The properties of the system can be described by an equation of state which specifies the relationship between these variables. State may be thought of as the instantaneous quantitative description of a system with a set number of variables held constant.
A thermodynamic process may be defined as the energetic evolution of a thermodynamic system proceeding from an initial state to a final state. It can be described by process quantities. Typically, each thermodynamic process is distinguished from other processes in energetic character according to what parameters, such as temperature, pressure, or volume, etc., are held fixed. Furthermore, it is useful to group these processes into pairs, in which each variable held constant is one member of a conjugate pair.
Several commonly studied thermodynamic processes are:
Adiabatic process: occurs without loss or gain of energy by heat
Isenthalpic process: occurs at a constant enthalpy
Isentropic process: a reversible adiabatic process, occurs at a constant entropy
Isobaric process: occurs at constant pressure
Isochoric process: occurs at constant volume (also called isometric/isovolumetric)
Isothermal process: occurs at a constant temperature
Steady state process: occurs without a change in the internal energy
== Instrumentation ==
There are two types of thermodynamic instruments, the meter and the reservoir. A thermodynamic meter is any device which measures any parameter of a thermodynamic system. In some cases, the thermodynamic parameter is actually defined in terms of an idealized measuring instrument. For example, the zeroth law states that if two bodies are in thermal equilibrium with a third body, they are also in thermal equilibrium with each other. This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized thermometer is a sample of an ideal gas at constant pressure. From the ideal gas law pV=nRT, the volume of such a sample can be used as an indicator of temperature; in this manner it defines temperature. Although pressure is defined mechanically, a pressure-measuring device, called a barometer may also be constructed from a sample of an ideal gas held at a constant temperature. A calorimeter is a device which is used to measure and define the internal energy of a system.
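The gas-thermometer idea follows directly from pV = nRT: at fixed pressure and amount of gas, the measured volume determines the temperature. A minimal sketch with illustrative numbers:

```python
R = 8.314  # molar gas constant, J/(mol K)

def temperature_from_gas(p, V, n):
    """Ideal-gas thermometer reading: T = pV / (nR).
    p in pascals, V in cubic metres, n in moles."""
    return p * V / (n * R)

# One mole at 101325 Pa occupying 22.414 L reads close to 273.15 K:
T = temperature_from_gas(101325.0, 0.022414, 1.0)
```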
A thermodynamic reservoir is a system which is so large that its state parameters are not appreciably altered when it is brought into contact with the system of interest. When the reservoir is brought into contact with the system, the system is brought into equilibrium with the reservoir. For example, a pressure reservoir is a system at a particular pressure, which imposes that pressure upon the system to which it is mechanically connected. The Earth's atmosphere is often used as a pressure reservoir. The ocean can act as temperature reservoir when used to cool power plants.
== Conjugate variables ==
The central concept of thermodynamics is that of energy, the ability to do work. By the First Law, the total energy of a system and its surroundings is conserved. Energy may be transferred into a system by heating, compression, or addition of matter, and extracted from a system by cooling, expansion, or extraction of matter. In mechanics, for example, energy transfer equals the product of the force applied to a body and the resulting displacement.
Conjugate variables are pairs of thermodynamic concepts, with the first being akin to a "force" applied to some thermodynamic system, the second being akin to the resulting "displacement", and the product of the two equaling the amount of energy transferred. The common conjugate variables are:
Pressure-volume (the mechanical parameters);
Temperature-entropy (thermal parameters);
Chemical potential-particle number (material parameters).
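For the pressure–volume pair, the energy transferred is the work W = ∫ p dV. As a sketch (values illustrative), the work done by an ideal gas expanding isothermally can be computed by quadrature and compared with the closed form nRT ln(V2/V1):

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def isothermal_work(n, T, V1, V2, steps=100_000):
    """Midpoint-rule quadrature of W = ∫ p dV with p = nRT/V (ideal gas)."""
    dV = (V2 - V1) / steps
    return sum(n * R * T / (V1 + (k + 0.5) * dV) * dV for k in range(steps))

n, T, V1, V2 = 1.0, 300.0, 0.010, 0.020  # doubling the volume at 300 K
W_numeric = isothermal_work(n, T, V1, V2)
W_exact = n * R * T * math.log(V2 / V1)  # closed form: nRT ln(V2/V1)
```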
== Potentials ==
Thermodynamic potentials are different quantitative measures of the stored energy in a system. Potentials are used to measure the energy changes in systems as they evolve from an initial state to a final state. The potential used depends on the constraints of the system, such as constant temperature or pressure. For example, the Helmholtz and Gibbs energies are the energies available in a system to do useful work when the temperature and volume or the pressure and temperature are fixed, respectively. Thermodynamic potentials cannot be measured in laboratories, but can be computed using molecular thermodynamics.
The five most well known potentials are: the internal energy {\displaystyle U}, the Helmholtz free energy {\displaystyle F=U-TS}, the enthalpy {\displaystyle H=U+pV}, the Gibbs free energy {\displaystyle G=U+pV-TS}, and the grand potential {\displaystyle \Omega =U-TS-\sum _{i}\mu _{i}N_{i}},
where {\displaystyle T} is the temperature, {\displaystyle S} the entropy, {\displaystyle p} the pressure, {\displaystyle V} the volume, {\displaystyle \mu } the chemical potential, {\displaystyle N} the number of particles in the system, and {\displaystyle i} is the count of particle types in the system.
Thermodynamic potentials can be derived from the energy balance equation applied to a thermodynamic system. Other thermodynamic potentials can also be obtained through Legendre transformation.
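The potentials obtained from U by Legendre transformation can be sketched as simple combinations of state variables. The state values below are illustrative, not from the article; the identities G = H − TS = F + pV fall out automatically.

```python
def enthalpy(U, p, V):
    """H = U + pV."""
    return U + p * V

def helmholtz(U, T, S):
    """F = U - TS."""
    return U - T * S

def gibbs(U, T, S, p, V):
    """G = U + pV - TS."""
    return U + p * V - T * S

# Illustrative state values:
U, T, S, p, V = 1000.0, 300.0, 2.0, 101325.0, 0.001
H = enthalpy(U, p, V)
F = helmholtz(U, T, S)
G = gibbs(U, T, S, p, V)   # note G = H - TS = F + pV
```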
== Axiomatic thermodynamics ==
Axiomatic thermodynamics is a mathematical discipline that aims to describe thermodynamics in terms of rigorous axioms, for example by finding a mathematically rigorous way to express the familiar laws of thermodynamics.
The first attempt at an axiomatic theory of thermodynamics was Constantin Carathéodory's 1909 work Investigations on the Foundations of Thermodynamics, which made use of Pfaffian systems and the concept of adiabatic accessibility, a notion that was introduced by Carathéodory himself. In this formulation, thermodynamic concepts such as heat, entropy, and temperature are derived from quantities that are more directly measurable. Theories that came after, differed in the sense that they made assumptions regarding thermodynamic processes with arbitrary initial and final states, as opposed to considering only neighboring states.
== Applied fields ==
== See also ==
Thermodynamic process path
=== Lists and timelines ===
List of important publications in thermodynamics
List of textbooks on thermodynamics and statistical mechanics
List of thermal conductivities
List of thermodynamic properties
Table of thermodynamic equations
Timeline of thermodynamics
Thermodynamic equations
== Notes ==
== References ==
== Further reading ==
Goldstein, Martin & Inge F. (1993). The Refrigerator and the Universe. Harvard University Press. ISBN 978-0-674-75325-9. OCLC 32826343. A nontechnical introduction, good on historical and interpretive matters.
Kazakov, Andrei; Muzny, Chris D.; Chirico, Robert D.; Diky, Vladimir V.; Frenkel, Michael (2008). "Web Thermo Tables – an On-Line Version of the TRC Thermodynamic Tables". Journal of Research of the National Institute of Standards and Technology. 113 (4): 209–220. doi:10.6028/jres.113.016. ISSN 1044-677X. PMC 4651616. PMID 27096122.
Gibbs J.W. (1928). The Collected Works of J. Willard Gibbs Thermodynamics. New York: Longmans, Green and Co. Vol. 1, pp. 55–349.
Guggenheim E.A. (1933). Modern thermodynamics by the methods of Willard Gibbs. London: Methuen & co. ltd.
Denbigh K. (1981). The Principles of Chemical Equilibrium: With Applications in Chemistry and Chemical Engineering. London: Cambridge University Press.
Stull, D.R., Westrum Jr., E.F. and Sinke, G.C. (1969). The Chemical Thermodynamics of Organic Compounds. London: John Wiley and Sons, Inc.{{cite book}}: CS1 maint: multiple names: authors list (link)
Bazarov I.P. (2010). Thermodynamics: Textbook. St. Petersburg: Lan publishing house. p. 384. ISBN 978-5-8114-1003-3. 5th ed. (in Russian)
Bawendi Moungi G., Alberty Robert A. and Silbey Robert J. (2004). Physical Chemistry. J. Wiley & Sons, Incorporated.
Alberty Robert A. (2003). Thermodynamics of Biochemical Reactions. Wiley-Interscience.
Alberty Robert A. (2006). Biochemical Thermodynamics: Applications of Mathematica. Vol. 48. John Wiley & Sons, Inc. pp. 1–458. ISBN 978-0-471-75798-6. PMID 16878778. {{cite book}}: |journal= ignored (help)
Dill Ken A., Bromberg Sarina (2011). Molecular Driving Forces: Statistical Thermodynamics in Biology, Chemistry, Physics, and Nanoscience. Garland Science. ISBN 978-0-8153-4430-8.
M. Scott Shell (2015). Thermodynamics and Statistical Mechanics: An Integrated Approach. Cambridge University Press. ISBN 978-1107656789.
Douglas E. Barrick (2018). Biomolecular Thermodynamics: From Theory to Applications. CRC Press. ISBN 978-1-4398-0019-5.
The following titles are more technical:
Bejan, Adrian (2016). Advanced Engineering Thermodynamics (4 ed.). Wiley. ISBN 978-1-119-05209-8.
Cengel, Yunus A., & Boles, Michael A. (2002). Thermodynamics – an Engineering Approach. McGraw Hill. ISBN 978-0-07-238332-4. OCLC 45791449.{{cite book}}: CS1 maint: multiple names: authors list (link)
Dunning-Davies, Jeremy (1997). Concise Thermodynamics: Principles and Applications. Horwood Publishing. ISBN 978-1-8985-6315-0. OCLC 36025958.
Kroemer, Herbert & Kittel, Charles (1980). Thermal Physics. W.H. Freeman Company. ISBN 978-0-7167-1088-2. OCLC 32932988.
== External links ==
Media related to Thermodynamics at Wikimedia Commons
Callendar, Hugh Longbourne (1911). "Thermodynamics" . Encyclopædia Britannica. Vol. 26 (11th ed.). pp. 808–814.
Thermodynamics Data & Property Calculation Websites
Thermodynamics Educational Websites
Biochemistry Thermodynamics
Thermodynamics and Statistical Mechanics
Engineering Thermodynamics – A Graphical Approach
Thermodynamics and Statistical Mechanics by Richard Fitzpatrick
The Boltzmann equation or Boltzmann transport equation (BTE) describes the statistical behaviour of a thermodynamic system not in a state of equilibrium; it was devised by Ludwig Boltzmann in 1872.
The classic example of such a system is a fluid with temperature gradients in space causing heat to flow from hotter regions to colder ones, by the random but biased transport of the particles making up that fluid. In the modern literature the term Boltzmann equation is often used in a more general sense, referring to any kinetic equation that describes the change of a macroscopic quantity in a thermodynamic system, such as energy, charge or particle number.
The equation arises not by analyzing the individual positions and momenta of each particle in the fluid but rather by considering a probability distribution for the position and momentum of a typical particle—that is, the probability that the particle occupies a given very small region of space (mathematically the volume element {\displaystyle d^{3}\mathbf {r} }) centered at the position {\displaystyle \mathbf {r} }, and has momentum nearly equal to a given momentum vector {\displaystyle \mathbf {p} } (thus occupying a very small region of momentum space {\displaystyle d^{3}\mathbf {p} }), at an instant of time.
The Boltzmann equation can be used to determine how physical quantities change, such as heat energy and momentum, when a fluid is in transport. One may also derive other properties characteristic to fluids such as viscosity, thermal conductivity, and electrical conductivity (by treating the charge carriers in a material as a gas). See also convection–diffusion equation.
The equation is a nonlinear integro-differential equation, and the unknown function in the equation is a probability density function in six-dimensional space of a particle position and momentum. The problem of existence and uniqueness of solutions is still not fully resolved, but some recent results are quite promising.
== Overview ==
=== The phase space and density function ===
The set of all possible positions r and momenta p is called the phase space of the system; in other words a set of three coordinates for each position coordinate x, y, z, and three more for each momentum component px, py, pz. The entire space is 6-dimensional: a point in this space is (r, p) = (x, y, z, px, py, pz), and each coordinate is parameterized by time t. A relevant differential element is written
{\displaystyle d^{3}\mathbf {r} \,d^{3}\mathbf {p} =dx\,dy\,dz\,dp_{x}\,dp_{y}\,dp_{z}.}
Since the probability of N molecules, which all have r and p within {\displaystyle d^{3}\mathbf {r} \,d^{3}\mathbf {p} }, is in question, at the heart of the equation is a quantity f which gives this probability per unit phase-space volume, or probability per unit length cubed per unit momentum cubed, at an instant of time t. This is a probability density function: f(r, p, t), defined so that
{\displaystyle dN=f(\mathbf {r} ,\mathbf {p} ,t)\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} }
is the number of molecules which all have positions lying within a volume element {\displaystyle d^{3}\mathbf {r} } about r and momenta lying within a momentum space element {\displaystyle d^{3}\mathbf {p} } about p, at time t. Integrating over a region of position space and momentum space gives the total number of particles which have positions and momenta in that region:
{\displaystyle {\begin{aligned}N&=\int \limits _{\mathrm {momenta} }d^{3}\mathbf {p} \int \limits _{\mathrm {positions} }d^{3}\mathbf {r} \,f(\mathbf {r} ,\mathbf {p} ,t)\\[5pt]&=\iiint \limits _{\mathrm {momenta} }\quad \iiint \limits _{\mathrm {positions} }f(x,y,z,p_{x},p_{y},p_{z},t)\,dx\,dy\,dz\,dp_{x}\,dp_{y}\,dp_{z}\end{aligned}}}
which is a 6-fold integral. While f is associated with a number of particles, the phase space is for one-particle (not all of them, which is usually the case with deterministic many-body systems), since only one r and p is in question. It is not part of the analysis to use r1, p1 for particle 1, r2, p2 for particle 2, etc. up to rN, pN for particle N.
It is assumed the particles in the system are identical (so each has an identical mass m). For a mixture of more than one chemical species, one distribution is needed for each, see below.
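The defining property — that integrating f over phase space counts particles — can be checked in one dimension with a Maxwellian momentum distribution. This is a sketch in illustrative units (m = kT = 1, number density n = 5), not a quantity taken from the article.

```python
import math

m, kT, n = 1.0, 1.0, 5.0  # particle mass, temperature scale, number density

def f(p):
    """1D Maxwellian momentum density, normalized so that ∫ f dp = n."""
    return n * math.exp(-p * p / (2.0 * m * kT)) / math.sqrt(2.0 * math.pi * m * kT)

# Midpoint-rule integral over a wide momentum window (the tails beyond
# ten thermal momenta are negligible):
P, steps = 10.0, 200_000
dp = 2.0 * P / steps
N = sum(f(-P + (k + 0.5) * dp) for k in range(steps)) * dp  # recovers n
```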
=== Principal statement ===
The general equation can then be written as
{\displaystyle {\frac {df}{dt}}=\left({\frac {\partial f}{\partial t}}\right)_{\text{force}}+\left({\frac {\partial f}{\partial t}}\right)_{\text{diff}}+\left({\frac {\partial f}{\partial t}}\right)_{\text{coll}},}
where the "force" term corresponds to the forces exerted on the particles by an external influence (not by the particles themselves), the "diff" term represents the diffusion of particles, and "coll" is the collision term – accounting for the forces acting between particles in collisions. Expressions for each term on the right side are provided below.
Note that some authors use the particle velocity v instead of momentum p; they are related in the definition of momentum by p = mv.
== The force and diffusion terms ==
Consider particles described by f, each experiencing an external force F not due to other particles (see the collision term for the latter treatment).
Suppose at time t some number of particles all have position r within element {\displaystyle d^{3}\mathbf {r} } and momentum p within {\displaystyle d^{3}\mathbf {p} }. If a force F instantly acts on each particle, then at time t + Δt their position will be
{\displaystyle \mathbf {r} +\Delta \mathbf {r} =\mathbf {r} +{\frac {\mathbf {p} }{m}}\,\Delta t}
and momentum p + Δp = p + FΔt. Then, in the absence of collisions, f must satisfy
{\displaystyle f\left(\mathbf {r} +{\frac {\mathbf {p} }{m}}\,\Delta t,\mathbf {p} +\mathbf {F} \,\Delta t,t+\Delta t\right)\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} =f(\mathbf {r} ,\mathbf {p} ,t)\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} }
Note that we have used the fact that the phase space volume element {\displaystyle d^{3}\mathbf {r} \,d^{3}\mathbf {p} } is constant, which can be shown using Hamilton's equations (see the discussion under Liouville's theorem). However, since collisions do occur, the particle density in the phase-space volume {\displaystyle d^{3}\mathbf {r} \,d^{3}\mathbf {p} } changes, so
{\displaystyle f\left(\mathbf {r} +{\frac {\mathbf {p} }{m}}\,\Delta t,\mathbf {p} +\mathbf {F} \,\Delta t,t+\Delta t\right)\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} -f(\mathbf {r} ,\mathbf {p} ,t)\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} =\Delta f\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} \quad (1)}
where Δf is the total change in f. Dividing (1) by {\displaystyle d^{3}\mathbf {r} \,d^{3}\mathbf {p} \,\Delta t} and taking the limits Δt → 0 and Δf → 0, we have
{\displaystyle {\frac {df}{dt}}=\left({\frac {\partial f}{\partial t}}\right)_{\text{coll}}\quad (2)}
The total differential of f is:
{\displaystyle df={\frac {\partial f}{\partial t}}\,dt+\nabla f\cdot d\mathbf {r} +{\frac {\partial f}{\partial \mathbf {p} }}\cdot d\mathbf {p} \quad (3)}
where ∇ is the gradient operator, · is the dot product,
{\displaystyle {\frac {\partial f}{\partial \mathbf {p} }}=\mathbf {\hat {e}} _{x}{\frac {\partial f}{\partial p_{x}}}+\mathbf {\hat {e}} _{y}{\frac {\partial f}{\partial p_{y}}}+\mathbf {\hat {e}} _{z}{\frac {\partial f}{\partial p_{z}}}=\nabla _{\mathbf {p} }f}
is a shorthand for the momentum analogue of ∇, and êx, êy, êz are Cartesian unit vectors.
=== Final statement ===
Dividing (3) by dt and substituting into (2) gives:
{\displaystyle {\frac {\partial f}{\partial t}}+{\frac {\mathbf {p} }{m}}\cdot \nabla f+\mathbf {F} \cdot {\frac {\partial f}{\partial \mathbf {p} }}=\left({\frac {\partial f}{\partial t}}\right)_{\mathrm {coll} }}
In this context, F(r, t) is the force field acting on the particles in the fluid, and m is the mass of the particles. The term on the right hand side is added to describe the effect of collisions between particles; if it is zero then the particles do not collide. The collisionless Boltzmann equation, where individual collisions are replaced with long-range aggregated interactions, e.g. Coulomb interactions, is often called the Vlasov equation.
This equation is more useful than the principal one above, yet still incomplete, since f cannot be solved unless the collision term in f is known. This term cannot be found as easily or generally as the others – it is a statistical term representing the particle collisions, and requires knowledge of the statistics the particles obey, like the Maxwell–Boltzmann, Fermi–Dirac or Bose–Einstein distributions.
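With the collision term set to zero and no external force, any function of the form f(r, p, t) = g(r − pt/m, p) solves the equation (free streaming). A finite-difference spot check in one dimension, using a Gaussian profile and illustrative parameters:

```python
import math

m = 2.0  # particle mass (illustrative)

def f(x, p, t):
    """Free-streaming ansatz f(x, p, t) = g(x - p t / m, p) with Gaussian g."""
    u = x - p * t / m
    return math.exp(-u * u) * math.exp(-p * p)

# Residual of the collisionless 1D equation df/dt + (p/m) df/dx = 0,
# evaluated by central differences at one sample point:
x, p, t, h = 0.3, 0.7, 1.1, 1e-5
df_dt = (f(x, p, t + h) - f(x, p, t - h)) / (2.0 * h)
df_dx = (f(x + h, p, t) - f(x - h, p, t)) / (2.0 * h)
residual = df_dt + (p / m) * df_dx  # vanishes up to discretization error
```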
== The collision term (Stosszahlansatz) and molecular chaos ==
=== Two-body collision term ===
A key insight applied by Boltzmann was to determine the collision term resulting solely from two-body collisions between particles that are assumed to be uncorrelated prior to the collision. This assumption was referred to by Boltzmann as the "Stosszahlansatz" and is also known as the "molecular chaos assumption". Under this assumption the collision term can be written as a momentum-space integral over the product of one-particle distribution functions:
{\displaystyle \left({\frac {\partial f}{\partial t}}\right)_{\text{coll}}=\iint gI(g,\Omega )[f(\mathbf {r} ,\mathbf {p'} _{A},t)f(\mathbf {r} ,\mathbf {p'} _{B},t)-f(\mathbf {r} ,\mathbf {p} _{A},t)f(\mathbf {r} ,\mathbf {p} _{B},t)]\,d\Omega \,d^{3}\mathbf {p} _{B},}
where pA and pB are the momenta of any two particles (labeled as A and B for convenience) before a collision, p′A and p′B are the momenta after the collision,
{\displaystyle g=|\mathbf {p} _{B}-\mathbf {p} _{A}|=|\mathbf {p'} _{B}-\mathbf {p'} _{A}|}
is the magnitude of the relative momenta (see relative velocity for more on this concept), and I(g, Ω) is the differential cross section of the collision, in which the relative momenta of the colliding particles turns through an angle θ into the element of the solid angle dΩ, due to the collision.
=== Simplifications to the collision term ===
Since much of the challenge in solving the Boltzmann equation originates with the complex collision term, attempts have been made to "model" and simplify the collision term. The best known model equation is due to Bhatnagar, Gross and Krook. The assumption in the BGK approximation is that the effect of molecular collisions is to force a non-equilibrium distribution function at a point in physical space back to a Maxwellian equilibrium distribution function and that the rate at which this occurs is proportional to the molecular collision frequency. The Boltzmann equation is therefore modified to the BGK form:
{\displaystyle {\frac {\partial f}{\partial t}}+{\frac {\mathbf {p} }{m}}\cdot \nabla f+\mathbf {F} \cdot {\frac {\partial f}{\partial \mathbf {p} }}=\nu (f_{0}-f),}
where {\displaystyle \nu } is the molecular collision frequency, and {\displaystyle f_{0}} is the local Maxwellian distribution function given the gas temperature at this point in space. This is also called the "relaxation time approximation".
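At a single phase-space point, dropping the streaming and force terms, the BGK model reduces to df/dt = ν(f0 − f), which relaxes exponentially toward the local Maxwellian. A sketch with illustrative values, integrated by forward Euler and checked against the exact exponential solution:

```python
import math

nu, f0 = 2.0, 1.0                 # collision frequency, local Maxwellian value
f, dt, steps = 0.2, 1e-4, 10_000  # integrate to t = 1 by forward Euler

for _ in range(steps):
    f += nu * (f0 - f) * dt       # df/dt = nu * (f0 - f)

# Exact solution for comparison: f(t) = f0 + (f(0) - f0) * exp(-nu t)
f_exact = f0 + (0.2 - f0) * math.exp(-nu * 1.0)
```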
== General equation (for a mixture) ==
For a mixture of chemical species labelled by indices i = 1, 2, 3, ..., n the equation for species i is
{\displaystyle {\frac {\partial f_{i}}{\partial t}}+{\frac {\mathbf {p} _{i}}{m_{i}}}\cdot \nabla f_{i}+\mathbf {F} \cdot {\frac {\partial f_{i}}{\partial \mathbf {p} _{i}}}=\left({\frac {\partial f_{i}}{\partial t}}\right)_{\text{coll}},}
where fi = fi(r, pi, t), and the collision term is
{\displaystyle \left({\frac {\partial f_{i}}{\partial t}}\right)_{\mathrm {coll} }=\sum _{j=1}^{n}\iint g_{ij}I_{ij}(g_{ij},\Omega )[f'_{i}f'_{j}-f_{i}f_{j}]\,d\Omega \,d^{3}\mathbf {p'} ,}
where f′ = f′(p′i, t), the magnitude of the relative momenta is
{\displaystyle g_{ij}=|\mathbf {p} _{i}-\mathbf {p} _{j}|=|\mathbf {p} '_{i}-\mathbf {p} '_{j}|,}
and Iij is the differential cross-section, as before, between particles i and j. The integration is over the momentum components in the integrand (which are labelled i and j). The sum of integrals describes the entry and exit of particles of species i in or out of the phase-space element.
== Applications and extensions ==
=== Conservation equations ===
The Boltzmann equation can be used to derive the fluid dynamic conservation laws for mass, charge, momentum, and energy.: 163 For a fluid consisting of only one kind of particle, the number density n is given by
{\displaystyle n=\int f\,d^{3}\mathbf {p} .}
The average value of any function A is
{\displaystyle \langle A\rangle ={\frac {1}{n}}\int Af\,d^{3}\mathbf {p} .}
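As a check on the averaging formula, the mean kinetic energy ⟨p²/2m⟩ under a one-dimensional Maxwellian should come out to kT/2 (equipartition for a single degree of freedom). A sketch in illustrative units with m = kT = 1 and n = 1:

```python
import math

m, kT = 1.0, 1.0

def f(p):
    """1D Maxwellian, normalized so that n = ∫ f dp = 1."""
    return math.exp(-p * p / (2.0 * m * kT)) / math.sqrt(2.0 * math.pi * m * kT)

# <A> = (1/n) ∫ A f dp with A = p^2 / (2m), midpoint rule over a wide window:
P, steps = 10.0, 200_000
dp = 2.0 * P / steps
avg_KE = sum((q * q / (2.0 * m)) * f(q)
             for q in (-P + (k + 0.5) * dp for k in range(steps))) * dp
# equals kT/2 for the 1D Maxwellian
```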
Since the conservation equations involve tensors, the Einstein summation convention will be used where repeated indices in a product indicate summation over those indices. Thus {\displaystyle \mathbf {x} \mapsto x_{i}} and {\displaystyle \mathbf {p} \mapsto p_{i}=mv_{i}}, where {\displaystyle v_{i}} is the particle velocity vector. Define {\displaystyle A(p_{i})} as some function of momentum {\displaystyle p_{i}} only, whose total value is conserved in a collision. Assume also that the force {\displaystyle F_{i}} is a function of position only, and that f is zero for {\displaystyle p_{i}\to \pm \infty }. Multiplying the Boltzmann equation by A and integrating over momentum yields four terms, which, using integration by parts, can be expressed as
∫
A
∂
f
∂
t
d
3
p
=
∂
∂
t
(
n
⟨
A
⟩
)
,
{\displaystyle \int A{\frac {\partial f}{\partial t}}\,d^{3}\mathbf {p} ={\frac {\partial }{\partial t}}(n\langle A\rangle ),}
∫
p
j
A
m
∂
f
∂
x
j
d
3
p
=
1
m
∂
∂
x
j
(
n
⟨
A
p
j
⟩
)
,
{\displaystyle \int {\frac {p_{j}A}{m}}{\frac {\partial f}{\partial x_{j}}}\,d^{3}\mathbf {p} ={\frac {1}{m}}{\frac {\partial }{\partial x_{j}}}(n\langle Ap_{j}\rangle ),}
∫
A
F
j
∂
f
∂
p
j
d
3
p
=
−
n
F
j
⟨
∂
A
∂
p
j
⟩
,
{\displaystyle \int AF_{j}{\frac {\partial f}{\partial p_{j}}}\,d^{3}\mathbf {p} =-nF_{j}\left\langle {\frac {\partial A}{\partial p_{j}}}\right\rangle ,}
{\displaystyle \int A\left({\frac {\partial f}{\partial t}}\right)_{\text{coll}}\,d^{3}\mathbf {p} =\left({\frac {\partial }{\partial t}}(n\langle A\rangle )\right)_{\text{coll}}=0,}
where the last term is zero, since A is conserved in a collision. The values of A correspond to moments of velocity
{\displaystyle v_{i}}
(and momentum
{\displaystyle p_{i}}
, as they are linearly dependent).
==== Zeroth moment ====
Letting
{\displaystyle A=m(v_{i})^{0}=m}
, the mass of the particle, the integrated Boltzmann equation becomes the conservation of mass equation:
{\displaystyle {\frac {\partial }{\partial t}}\rho +{\frac {\partial }{\partial x_{j}}}(\rho V_{j})=0,}
where
{\displaystyle \rho =mn}
is the mass density, and
{\displaystyle V_{i}=\langle v_{i}\rangle }
is the average fluid velocity.
==== First moment ====
Letting
{\displaystyle A=m(v_{i})^{1}=p_{i}}
, the momentum of the particle, the integrated Boltzmann equation becomes the conservation of momentum equation:
{\displaystyle {\frac {\partial }{\partial t}}(\rho V_{i})+{\frac {\partial }{\partial x_{j}}}(\rho V_{i}V_{j}+P_{ij})-nF_{i}=0,}
where
{\displaystyle P_{ij}=\rho \langle (v_{i}-V_{i})(v_{j}-V_{j})\rangle }
is the pressure tensor (the viscous stress tensor plus the hydrostatic pressure).
==== Second moment ====
Letting
{\displaystyle A={\frac {m(v_{i})^{2}}{2}}={\frac {p_{i}p_{i}}{2m}}}
, the kinetic energy of the particle, the integrated Boltzmann equation becomes the conservation of energy equation:
{\displaystyle {\frac {\partial }{\partial t}}\left(u+{\tfrac {1}{2}}\rho V_{i}V_{i}\right)+{\frac {\partial }{\partial x_{j}}}\left(uV_{j}+{\tfrac {1}{2}}\rho V_{i}V_{i}V_{j}+J_{qj}+P_{ij}V_{i}\right)-nF_{i}V_{i}=0,}
where
{\textstyle u={\tfrac {1}{2}}\rho \langle (v_{i}-V_{i})(v_{i}-V_{i})\rangle }
is the kinetic thermal energy density, and
{\textstyle J_{qi}={\tfrac {1}{2}}\rho \langle (v_{i}-V_{i})(v_{k}-V_{k})(v_{k}-V_{k})\rangle }
is the heat flux vector.
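The moment relations above can be checked numerically. The sketch below is illustrative (the drifting Maxwell–Boltzmann gas and all parameter values are assumptions, not from the text): it samples particle velocities, then forms the zeroth, first, and second moments, recovering the bulk velocity V_i and an isotropic pressure tensor P_ij ≈ n kT δ_ij as expected for an equilibrium gas.

```python
import numpy as np

rng = np.random.default_rng(0)
m, kT, n = 1.0, 1.5, 1.0          # particle mass, temperature, number density (assumed units)
V = np.array([0.3, 0.0, -0.2])    # bulk (average) fluid velocity

# Sample velocities from a drifting Maxwell-Boltzmann distribution,
# which plays the role of the distribution function f here.
N = 200_000
v = rng.normal(loc=V, scale=np.sqrt(kT / m), size=(N, 3))

# Zeroth/first moments: mass density and average fluid velocity.
rho = m * n
V_est = v.mean(axis=0)            # <v_i>

# Second moment: pressure tensor P_ij = rho <(v_i - V_i)(v_j - V_j)>.
w = v - V_est
P = rho * (w[:, :, None] * w[:, None, :]).mean(axis=0)
# For an equilibrium gas the pressure tensor is isotropic: P_ij ~ n kT delta_ij.
```

The sample averages play the role of the ensemble averages ⟨A⟩ defined at the start of the section.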
=== Hamiltonian mechanics ===
In Hamiltonian mechanics, the Boltzmann equation is often written more generally as
{\displaystyle {\hat {\mathbf {L} }}[f]=\mathbf {C} [f],}
where L is the Liouville operator (defined here with a convention that differs from that used in some other treatments) describing the evolution of a phase-space volume, and C is the collision operator. The non-relativistic form of L is
{\displaystyle {\hat {\mathbf {L} }}_{\mathrm {NR} }={\frac {\partial }{\partial t}}+{\frac {\mathbf {p} }{m}}\cdot \nabla +\mathbf {F} \cdot {\frac {\partial }{\partial \mathbf {p} }}\,.}
=== Quantum theory and violation of particle number conservation ===
It is possible to write down relativistic quantum Boltzmann equations for relativistic quantum systems in which the number of particles is not conserved in collisions. This has several applications in physical cosmology, including the formation of the light elements in Big Bang nucleosynthesis, the production of dark matter and baryogenesis. It is not a priori clear that the state of a quantum system can be characterized by a classical phase space density f. However, for a wide class of applications a well-defined generalization of f exists which is the solution of an effective Boltzmann equation that can be derived from first principles of quantum field theory.
=== General relativity and astronomy ===
The Boltzmann equation is of use in galactic dynamics. A galaxy, under certain assumptions, may be approximated as a continuous fluid; its mass distribution is then represented by f; in galaxies, physical collisions between the stars are very rare, and the effect of gravitational collisions can be neglected for times far longer than the age of the universe.
Its generalization in general relativity is
{\displaystyle {\hat {\mathbf {L} }}_{\mathrm {GR} }[f]=p^{\alpha }{\frac {\partial f}{\partial x^{\alpha }}}-\Gamma ^{\alpha }{}_{\beta \gamma }p^{\beta }p^{\gamma }{\frac {\partial f}{\partial p^{\alpha }}}=C[f],}
where Γαβγ is the Christoffel symbol of the second kind (this assumes there are no external forces, so that particles move along geodesics in the absence of collisions), with the important subtlety that the density is a function in mixed contravariant-covariant phase space (x^α, p_α) as opposed to fully contravariant phase space (x^α, p^α).
In physical cosmology the fully covariant approach has been used to study the cosmic microwave background radiation. More generically, the study of processes in the early universe often attempts to take into account the effects of quantum mechanics and general relativity. In the very dense medium formed by the primordial plasma after the Big Bang, particles are continuously created and annihilated. In such an environment quantum coherence and the spatial extension of the wavefunction can affect the dynamics, making it questionable whether the classical phase-space distribution f that appears in the Boltzmann equation is suitable to describe the system. In many cases it is, however, possible to derive an effective Boltzmann equation for a generalized distribution function from first principles of quantum field theory. This includes the formation of the light elements in Big Bang nucleosynthesis, the production of dark matter, and baryogenesis.
== Solving the equation ==
Exact solutions to the Boltzmann equations have been proven to exist in some cases; this analytical approach provides insight, but is not generally usable in practical problems.
Instead, numerical methods (including finite element and lattice Boltzmann methods) are generally used to find approximate solutions to the various forms of the Boltzmann equation. Example applications range from hypersonic aerodynamics in rarefied gas flows to plasma flows. An application of the Boltzmann equation in electrodynamics is the calculation of the electrical conductivity; to leading order the result is identical with the semiclassical result.
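As a minimal illustration of the relaxation-time (BGK) collision model that underlies lattice Boltzmann methods, the sketch below assumes a space-homogeneous gas on a 1-D velocity grid with a single relaxation time τ (all parameters are illustrative, and this is a toy relaxation model, not a full solver). The collision term drives f toward a local Maxwellian while conserving mass, momentum, and energy.

```python
import numpy as np

# Space-homogeneous BGK model: df/dt = -(f - f_eq)/tau on a 1-D velocity grid.
# The local Maxwellian f_eq shares the density, mean velocity and temperature
# of f, so those moments are conserved while f relaxes toward equilibrium.
v = np.linspace(-6, 6, 241)
dv = v[1] - v[0]

def moments(f):
    n = np.sum(f) * dv
    u = np.sum(f * v) * dv / n
    T = np.sum(f * (v - u) ** 2) * dv / n
    return n, u, T

def maxwellian(n, u, T):
    return n / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))

# Start far from equilibrium: two counter-streaming beams.
f = maxwellian(0.5, -1.0, 0.1) + maxwellian(0.5, 1.0, 0.1)
n0, u0, T0 = moments(f)

tau, dt = 1.0, 0.01
for _ in range(2000):                 # integrate to t = 20 >> tau
    f_eq = maxwellian(*moments(f))
    f += dt * (f_eq - f) / tau

n1, u1, T1 = moments(f)   # moments unchanged; f is now near-Maxwellian
```

Because the BGK collision term has vanishing mass, momentum, and energy moments by construction, the conservation laws of the previous section hold automatically.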
Close to local equilibrium, the solution of the Boltzmann equation can be represented by an asymptotic expansion in powers of the Knudsen number (the Chapman–Enskog expansion). The first two terms of this expansion give the Euler equations and the Navier–Stokes equations. The higher terms have singularities. The problem of rigorously developing the limiting processes that lead from the atomistic view (represented by Boltzmann's equation) to the laws of motion of continua is an important part of Hilbert's sixth problem.
== Limitations and further uses of the Boltzmann equation ==
The Boltzmann equation is valid only under several assumptions. For instance, the particles are assumed to be pointlike, i.e. without a finite size. There exists a generalization of the Boltzmann equation, called the Enskog equation, in which the collision term is modified so that particles have a finite size; for example, they can be modelled as spheres of fixed radius.
No further degrees of freedom besides translational motion are assumed for the particles. If there are internal degrees of freedom, the Boltzmann equation has to be generalized and might possess inelastic collisions.
Many real fluids, like liquids or dense gases, have, besides the features mentioned above, more complex forms of collisions: there will be not only binary but also ternary and higher-order collisions. These must be derived by using the BBGKY hierarchy.
Boltzmann-like equations are also used for the movement of cells. Since cells are composite particles that carry internal degrees of freedom, the corresponding generalized Boltzmann equations must have inelastic collision integrals. Such equations can describe invasions of cancer cells in tissue, morphogenesis, and chemotaxis-related effects.
== See also ==
== Notes ==
== References ==
Harris, Stewart (1971). An introduction to the theory of the Boltzmann equation. Dover Books. p. 221. ISBN 978-0-486-43831-3. A very inexpensive introduction to the modern framework (starting from a formal deduction from Liouville and the Bogoliubov–Born–Green–Kirkwood–Yvon (BBGKY) hierarchy, in which the Boltzmann equation is placed). Most statistical mechanics textbooks, like Huang's, still treat the topic using Boltzmann's original arguments. To derive the equation, these books use a heuristic explanation that does not bring out the range of validity and the characteristic assumptions that distinguish Boltzmann's equation from other transport equations like the Fokker–Planck or Landau equations.
Arkeryd, Leif (1972). "On the Boltzmann equation part I: Existence". Arch. Rational Mech. Anal. 45 (1): 1–16. Bibcode:1972ArRMA..45....1A. doi:10.1007/BF00253392.
Arkeryd, Leif (1972). "On the Boltzmann equation part II: The full initial value problem". Arch. Rational Mech. Anal. 45 (1): 17–34. Bibcode:1972ArRMA..45...17A. doi:10.1007/BF00253393.
== External links ==
The Boltzmann Transport Equation by Franz Vesely
Boltzmann gaseous behaviors solved
In physics, statistical mechanics is a mathematical framework that applies statistical methods and probability theory to large assemblies of microscopic entities. Sometimes called statistical physics or statistical thermodynamics, its applications include many problems in a wide variety of fields such as biology, neuroscience, computer science, information theory and sociology. Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion.
Statistical mechanics arose out of the development of classical thermodynamics, a field for which it was successful in explaining macroscopic physical properties—such as temperature, pressure, and heat capacity—in terms of microscopic parameters that fluctuate about average values and are characterized by probability distributions.
While classical thermodynamics is primarily concerned with thermodynamic equilibrium, statistical mechanics has been applied in non-equilibrium statistical mechanics to the issues of microscopically modeling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions and flows of particles and heat. The fluctuation–dissipation theorem is the basic knowledge obtained from applying non-equilibrium statistical mechanics to study the simplest non-equilibrium situation of a steady state current flow in a system of many particles.
== History ==
In 1738, Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion.
The founding of the field of statistical mechanics is generally credited to three physicists:
Ludwig Boltzmann, who developed the fundamental interpretation of entropy in terms of a collection of microstates
James Clerk Maxwell, who developed models of probability distribution of such states
Josiah Willard Gibbs, who coined the name of the field in 1884
In 1859, after reading a paper on the diffusion of molecules by Rudolf Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. Five years later, in 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and spent much of his life developing the subject further.
Statistical mechanics was initiated in the 1870s with the work of Boltzmann, much of which was collectively published in his 1896 Lectures on Gas Theory. Boltzmann's original papers on the statistical interpretation of thermodynamics, the H-theorem, transport theory, thermal equilibrium, the equation of state of gases, and similar subjects, occupy about 2,000 pages in the proceedings of the Vienna Academy and other societies. Boltzmann introduced the concept of an equilibrium statistical ensemble and also investigated for the first time non-equilibrium statistical mechanics, with his H-theorem.
The term "statistical mechanics" was coined by the American mathematical physicist J. Willard Gibbs in 1884. According to Gibbs, the term "statistical", in the context of mechanics, i.e. statistical mechanics, was first used by the Scottish physicist James Clerk Maxwell in 1871:
"In dealing with masses of matter, while we do not perceive the individual molecules, we are compelled to adopt what I have described as the statistical method of calculation, and to abandon the strict dynamical method, in which we follow every motion by the calculus."
"Probabilistic mechanics" might today seem a more appropriate term, but "statistical mechanics" is firmly entrenched. Shortly before his death, Gibbs published in 1902 Elementary Principles in Statistical Mechanics, a book which formalized statistical mechanics as a fully general approach to address all mechanical systems—macroscopic or microscopic, gaseous or non-gaseous. Gibbs' methods were initially derived in the framework classical mechanics, however they were of such generality that they were found to adapt easily to the later quantum mechanics, and still form the foundation of statistical mechanics to this day.
== Principles: mechanics and ensembles ==
In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics. For both types of mechanics, the standard mathematical approach is to consider two concepts:
The complete state of the mechanical system at a given time, mathematically encoded as a phase point (classical mechanics) or a pure quantum state vector (quantum mechanics).
An equation of motion which carries the state forward in time: Hamilton's equations (classical mechanics) or the Schrödinger equation (quantum mechanics)
Using these two concepts, the state at any other time, past or future, can in principle be calculated.
There is however a disconnect between these laws and everyday life experiences, as we do not find it necessary (nor even theoretically possible) to know exactly at a microscopic level the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction). Statistical mechanics fills this disconnection between the laws of mechanics and the practical experience of incomplete knowledge, by adding some uncertainty about which state the system is in.
Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces the statistical ensemble, which is a large collection of virtual, independent copies of the system in various states. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in a phase space with canonical coordinate axes. In quantum statistical mechanics, the ensemble is a probability distribution over pure states and can be compactly summarized as a density matrix.
As is usual for probabilities, the ensemble can be interpreted in different ways:
an ensemble can be taken to represent the various possible states that a single system could be in (epistemic probability, a form of knowledge), or
the members of the ensemble can be understood as the states of the systems in experiments repeated on independent systems which have been prepared in a similar but imperfectly controlled manner (empirical probability), in the limit of an infinite number of trials.
These two meanings are equivalent for many purposes, and will be used interchangeably in this article.
However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state.
One special class of ensemble is those ensembles that do not evolve over time. These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. (By contrast, mechanical equilibrium is a state with a balance of forces that has ceased to evolve.) The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems.
== Statistical thermodynamics ==
The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium, and the microscopic behaviours and motions occurring inside the material.
Whereas statistical mechanics proper involves dynamics, here the attention is focused on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving (mechanical equilibrium), rather, only that the ensemble is not evolving.
=== Fundamental postulate ===
A sufficient (but not necessary) condition for statistical equilibrium with an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.).
There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics. Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another.
A common approach found in many textbooks is to take the equal a priori probability postulate. This postulate states that
For an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge.
The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate:
Ergodic hypothesis: An ergodic system is one that evolves over time to explore "all accessible" states: all those with the same energy and composition. In an ergodic system, the microcanonical ensemble is the only possible equilibrium ensemble with fixed energy. This approach has limited applicability, since most systems are not ergodic.
Principle of indifference: In the absence of any further information, we can only assign equal probabilities to each compatible situation.
Maximum information entropy: A more elaborate version of the principle of indifference states that the correct ensemble is the ensemble that is compatible with the known information and that has the largest Gibbs entropy (information entropy).
Other fundamental postulates for statistical mechanics have also been proposed. For example, recent studies show that the theory of statistical mechanics can be built without the equal a priori probability postulate. One such formalism is based on the fundamental thermodynamic relation together with the following set of postulates:
where the third postulate can be replaced by the following:
=== Three thermodynamic ensembles ===
There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume. These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics.
Microcanonical ensemble
describes a system with a precisely given energy and fixed composition (precise number of particles). The microcanonical ensemble contains with equal probability each possible state that is consistent with that energy and composition.
Canonical ensemble
describes a system of fixed composition that is in thermal equilibrium with a heat bath of a precise temperature. The canonical ensemble contains states of varying energy but identical composition; the different states in the ensemble are accorded different probabilities depending on their total energy.
Grand canonical ensemble
describes a system with non-fixed composition (uncertain particle numbers) that is in thermal and chemical equilibrium with a thermodynamic reservoir. The reservoir has a precise temperature, and precise chemical potentials for various types of particle. The grand canonical ensemble contains states of varying energy and varying numbers of particles; the different states in the ensemble are accorded different probabilities depending on their total energy and total particle numbers.
For systems containing many particles (the thermodynamic limit), all three of the ensembles listed above tend to give identical behaviour. It is then simply a matter of mathematical convenience which ensemble is used. The Gibbs theorem about equivalence of ensembles was developed into the theory of the concentration of measure phenomenon, which has applications in many areas of science, from functional analysis to methods of artificial intelligence and big data technology.
Important cases where the thermodynamic ensembles do not give identical results include:
Microscopic systems.
Large systems at a phase transition.
Large systems with long-range interactions.
In these cases the correct thermodynamic ensemble must be chosen as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized—in other words, the ensemble that reflects the knowledge about that system.
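As a minimal illustration of the canonical weights described above, the sketch below uses a hypothetical two-level system (energies 0 and eps are assumptions for illustration): each state receives probability p_i ∝ exp(−E_i/kT), so the ground state dominates at low temperature while both states approach equal probability at high temperature.

```python
import math

# Canonical ensemble for a two-level system with energies 0 and eps:
#   p_i = exp(-E_i / kT) / Z,  with Z the partition function.
def canonical_stats(eps, kT):
    weights = [math.exp(-E / kT) for E in (0.0, eps)]
    Z = sum(weights)
    p = [w / Z for w in weights]
    E_avg = p[1] * eps          # average energy <E>
    return p, E_avg

p_cold, E_cold = canonical_stats(eps=1.0, kT=0.1)    # ground state dominates
p_hot, E_hot = canonical_stats(eps=1.0, kT=100.0)    # both states ~ equally likely
```

The same two limits illustrate why the canonical and microcanonical descriptions can differ for microscopic systems: the canonical energy fluctuates, while a microcanonical system has a precisely fixed energy.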
=== Calculation methods ===
Once the characteristic state function for an ensemble has been calculated for a given system, that system is 'solved' (macroscopic observables can be extracted from the characteristic state function). Calculating the characteristic state function of a thermodynamic ensemble is not necessarily a simple task, however, since it involves considering every possible state of the system. While some hypothetical systems have been exactly solved, the most general (and realistic) case is too complex for an exact solution. Various approaches exist to approximate the true ensemble and allow calculation of average quantities.
==== Exact ====
There are some cases which allow exact solutions.
For very small microscopic systems, the ensembles can be directly computed by simply enumerating over all possible states of the system (using exact diagonalization in quantum mechanics, or integral over all phase space in classical mechanics).
Some large systems consist of many separable microscopic systems, and each of the subsystems can be analysed independently. Notably, idealized gases of non-interacting particles have this property, allowing exact derivations of Maxwell–Boltzmann statistics, Fermi–Dirac statistics, and Bose–Einstein statistics.
A few large systems with interaction have been solved. By the use of subtle mathematical techniques, exact solutions have been found for a few toy models. Some examples include the Bethe ansatz, square-lattice Ising model in zero field, hard hexagon model.
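The exact-enumeration approach can be demonstrated on one of the solvable models just mentioned: a short zero-field 1-D Ising chain, small enough that all 2^N spin states can be summed directly and checked against the known closed form. The parameters below are assumptions for illustration.

```python
import itertools
import math

# Exact enumeration of the canonical partition function for a small
# 1-D Ising chain (zero field, free boundaries):
#   E(s) = -J * sum_i s_i s_{i+1},   Z = sum over all 2^N spin states.
# For this model Z is known analytically: Z = 2^N * cosh(beta*J)^(N-1).
def ising_Z(N, J, beta):
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=N):
        E = -J * sum(spins[i] * spins[i + 1] for i in range(N - 1))
        Z += math.exp(-beta * E)
    return Z

N, J, beta = 8, 1.0, 0.7
Z_enum = ising_Z(N, J, beta)                         # 256-state enumeration
Z_exact = 2**N * math.cosh(beta * J) ** (N - 1)      # known closed form
```

The cost grows as 2^N, which is exactly why enumeration works only for very small systems.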
==== Monte Carlo ====
Although some problems in statistical physics can be solved analytically using approximations and expansions, most current research utilizes the large processing power of modern computers to simulate or approximate solutions. A common approach to statistical problems is to use a Monte Carlo simulation to yield insight into the properties of a complex system. Monte Carlo methods are important in computational physics, physical chemistry, and related fields, and have diverse applications including medical physics, where they are used to model radiation transport for radiation dosimetry calculations.
The Monte Carlo method examines just a few of the possible states of the system, with the states chosen randomly (with a fair weight). As long as these states form a representative sample of the whole set of states of the system, the approximate characteristic function is obtained. As more and more random samples are included, the errors are reduced to an arbitrarily low level.
The Metropolis–Hastings algorithm is a classic Monte Carlo method which was initially used to sample the canonical ensemble.
Path integral Monte Carlo, also used to sample the canonical ensemble.
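The Metropolis sampling of the canonical ensemble mentioned above can be sketched on a hypothetical one-particle system, a harmonic oscillator with E(x) = x²/2 in units where the spring constant is 1 (all parameters are illustrative). Equipartition predicts ⟨x²⟩ = kT, which the chain of randomly chosen, weight-accepted states reproduces.

```python
import random
import math

# Metropolis sampling of the canonical ensemble for a particle in a
# harmonic potential E(x) = x**2 / 2.  Equilibrium statistical mechanics
# predicts <x**2> = kT (equipartition).
random.seed(1)
kT, x, step = 0.5, 0.0, 1.0
samples = []
for i in range(200_000):
    x_new = x + random.uniform(-step, step)
    dE = 0.5 * (x_new**2 - x**2)
    if dE <= 0 or random.random() < math.exp(-dE / kT):
        x = x_new                 # accept with probability min(1, e^{-dE/kT})
    if i >= 10_000:               # discard burn-in before averaging
        samples.append(x * x)

x2_avg = sum(samples) / len(samples)   # should be close to kT = 0.5
```

As the text notes, the randomly chosen states form a representative sample, and the statistical error shrinks as more samples are accumulated.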
==== Other ====
For rarefied non-ideal gases, approaches such as the cluster expansion use perturbation theory to include the effect of weak interactions, leading to a virial expansion.
For dense fluids, another approximate approach is based on reduced distribution functions, in particular the radial distribution function.
Molecular dynamics computer simulations can be used to calculate microcanonical ensemble averages, in ergodic systems. With the inclusion of a connection to a stochastic heat bath, they can also model canonical and grand canonical conditions.
Mixed methods involving non-equilibrium statistical mechanical results (see below) may be useful.
== Non-equilibrium statistical mechanics ==
Many physical phenomena involve quasi-thermodynamic processes out of equilibrium, for example:
heat transport by the internal motions in a material, driven by a temperature imbalance,
electric currents carried by the motion of charges in a conductor, driven by a voltage imbalance,
spontaneous chemical reactions driven by a decrease in free energy,
friction, dissipation, quantum decoherence,
systems being pumped by external forces (optical pumping, etc.),
and irreversible processes in general.
All of these processes occur over time with characteristic rates. These rates are important in engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level. (Statistical thermodynamics can only be used to calculate the final result, after the external imbalances have been removed and the ensemble has settled back down to equilibrium.)
In principle, non-equilibrium statistical mechanics could be mathematically exact: ensembles for an isolated system evolve over time according to deterministic equations such as Liouville's equation or its quantum equivalent, the von Neumann equation. These equations are the result of applying the mechanical equations of motion independently to each state in the ensemble. These ensemble evolution equations inherit much of the complexity of the underlying mechanical motion, and so exact solutions are very difficult to obtain. Moreover, the ensemble evolution equations are fully reversible and do not destroy information (the ensemble's Gibbs entropy is preserved). In order to make headway in modelling irreversible processes, it is necessary to consider additional factors besides probability and reversible mechanics.
Non-equilibrium mechanics is therefore an active area of theoretical research as the range of validity of these additional assumptions continues to be explored. A few approaches are described in the following subsections.
=== Stochastic methods ===
One approach to non-equilibrium statistical mechanics is to incorporate stochastic (random) behaviour into the system. Stochastic behaviour destroys information contained in the ensemble. While this is technically inaccurate (aside from hypothetical situations involving black holes, a system cannot in itself cause loss of information), the randomness is added to reflect that information of interest becomes converted over time into subtle correlations within the system, or to correlations between the system and environment. These correlations appear as chaotic or pseudorandom influences on the variables of interest. By replacing these correlations with randomness proper, the calculations can be made much easier.
=== Near-equilibrium methods ===
Another important class of non-equilibrium statistical mechanical models deals with systems that are only very slightly perturbed from equilibrium. With very small perturbations, the response can be analysed in linear response theory. A remarkable result, as formalized by the fluctuation–dissipation theorem, is that the response of a system when near equilibrium is precisely related to the fluctuations that occur when the system is in total equilibrium. Essentially, a system that is slightly away from equilibrium—whether put there by external forces or by fluctuations—relaxes towards equilibrium in the same way, since the system cannot tell the difference or "know" how it came to be away from equilibrium.
This provides an indirect avenue for obtaining numbers such as ohmic conductivity and thermal conductivity by extracting results from equilibrium statistical mechanics. Since equilibrium statistical mechanics is mathematically well defined and (in some cases) more amenable for calculations, the fluctuation–dissipation connection can be a convenient shortcut for calculations in near-equilibrium statistical mechanics.
A few of the theoretical tools used to make this connection include:
Fluctuation–dissipation theorem
Onsager reciprocal relations
Green–Kubo relations
Landauer–Büttiker formalism
Mori–Zwanzig formalism
GENERIC formalism
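Of the tools listed, the Green–Kubo relations lend themselves to a compact numerical illustration. The sketch below uses an assumed Ornstein–Uhlenbeck velocity process (a standard Langevin toy model; all parameters are illustrative, not from the text): the diffusion coefficient is estimated as the time integral of the equilibrium velocity autocorrelation function and compared with the analytic value kT/(mγ), an instance of extracting a transport coefficient from equilibrium fluctuations.

```python
import numpy as np

# Green-Kubo sketch: the diffusion coefficient of a particle whose velocity
# follows an Ornstein-Uhlenbeck process equals the time integral of the
# equilibrium velocity autocorrelation function C(t) = (kT/m) exp(-gamma t),
# giving D = kT/(m*gamma).
rng = np.random.default_rng(42)
gamma, kT_over_m, dt, nsteps = 1.0, 1.0, 0.01, 400_000

# Exact one-step update for the OU process.
a = np.exp(-gamma * dt)
sigma = np.sqrt(kT_over_m * (1 - a * a))
v = np.empty(nsteps)
v[0] = 0.0
noise = rng.normal(size=nsteps - 1)
for i in range(nsteps - 1):
    v[i + 1] = a * v[i] + sigma * noise[i]

# Velocity autocorrelation up to a few relaxation times, then a trapezoid rule.
lags = int(6 / (gamma * dt))
C = np.array([np.mean(v[: nsteps - k] * v[k:]) for k in range(lags)])
D_green_kubo = dt * (C.sum() - 0.5 * (C[0] + C[-1]))   # expect about kT/(m*gamma) = 1.0
```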
=== Hybrid methods ===
An advanced approach uses a combination of stochastic methods and linear response theory. As an example, one approach to compute quantum coherence effects (weak localization, conductance fluctuations) in the conductance of an electronic system is the use of the Green–Kubo relations, with the inclusion of stochastic dephasing by interactions between various electrons by use of the Keldysh method.
== Applications ==
The ensemble formalism can be used to analyze general mechanical systems with uncertainty in knowledge about the state of a system. Ensembles are also used in:
propagation of uncertainty over time,
regression analysis of gravitational orbits,
ensemble forecasting of weather,
dynamics of neural networks,
bounded-rational potential games in game theory and non-equilibrium economics.
Statistical physics explains and quantitatively describes superconductivity, superfluidity, turbulence, collective phenomena in solids and plasmas, and the structural features of liquids. It underlies modern astrophysics and the virial theorem. In solid state physics, statistical physics aids the study of liquid crystals, phase transitions, and critical phenomena. Many experimental studies of matter are entirely based on the statistical description of a system. These include the scattering of cold neutrons, X-rays, visible light, and more. Statistical physics also plays a role in materials science, nuclear physics, astrophysics, chemistry, biology and medicine (e.g. the study of the spread of infectious diseases).
Analytical and computational techniques derived from the statistical physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyze the weight space of deep neural networks. Statistical physics is thus finding applications in the area of medical diagnostics.
=== Quantum statistical mechanics ===
Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems. In quantum mechanics, a statistical ensemble (probability distribution over possible quantum states) is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics. One such formalism is provided by quantum logic.
== Index of statistical mechanics topics ==
=== Physics ===
Probability amplitude
Statistical physics
Boltzmann factor
Feynman–Kac formula
Fluctuation theorem
Information entropy
Vacuum expectation value
Cosmic variance
Negative probability
Gibbs state
Master equation
Partition function (mathematics)
Quantum probability
=== Percolation theory ===
Percolation theory
Schramm–Loewner evolution
== See also ==
List of textbooks in thermodynamics and statistical mechanics
Laplace transform § Statistical mechanics
== References ==
== Further reading ==
Reif, F. (2009). Fundamentals of Statistical and Thermal Physics. Waveland Press. ISBN 978-1-4786-1005-2.
Müller-Kirsten, Harald J W. (2013). Basics of Statistical Physics (PDF). doi:10.1142/8709. ISBN 978-981-4449-53-3.
Kadanoff, Leo P. "Statistical Physics and other resources". Archived from the original on August 12, 2021. Retrieved June 18, 2023.
Kadanoff, Leo P. (2000). Statistical Physics: Statics, Dynamics and Renormalization. World Scientific. ISBN 978-981-02-3764-6.
Flamm, Dieter (1998). "History and outlook of statistical physics". arXiv:physics/9803005.
== External links ==
Philosophy of Statistical Mechanics article by Lawrence Sklar for the Stanford Encyclopedia of Philosophy.
Sklogwiki - Thermodynamics, statistical mechanics, and the computer simulation of materials. SklogWiki is particularly orientated towards liquids and soft condensed matter.
Thermodynamics and Statistical Mechanics by Richard Fitzpatrick
Cohen, Doron (2011). "Lecture Notes in Statistical Mechanics and Mesoscopics". arXiv:1107.0568 [quant-ph].
Videos of lecture series in statistical mechanics on YouTube taught by Leonard Susskind.
Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. This wiki site is down; see the archived version of the article from April 28, 2012.
In physics, the von Neumann entropy, named after John von Neumann, is a measure of the statistical uncertainty within a description of a quantum system. It extends the concept of Gibbs entropy from classical statistical mechanics to quantum statistical mechanics, and it is the quantum counterpart of the Shannon entropy from classical information theory. For a quantum-mechanical system described by a density matrix ρ, the von Neumann entropy is
{\displaystyle S=-\operatorname {tr} (\rho \ln \rho ),}
where {\displaystyle \operatorname {tr} } denotes the trace and {\displaystyle \ln } denotes the matrix version of the natural logarithm. If the density matrix ρ is written in a basis of its eigenvectors {\displaystyle |1\rangle ,|2\rangle ,|3\rangle ,\dots } as
{\displaystyle \rho =\sum _{j}\eta _{j}\left|j\right\rangle \left\langle j\right|,}
then the von Neumann entropy is merely
{\displaystyle S=-\sum _{j}\eta _{j}\ln \eta _{j}.}
In this form, S can be seen as the Shannon entropy of the eigenvalues, reinterpreted as probabilities.
The von Neumann entropy and quantities based upon it are widely used in the study of quantum entanglement.
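As a minimal numerical sketch (using NumPy; an illustration, not part of the article), the entropy can be computed directly from the eigenvalues of ρ, with the convention that 0 ln 0 = 0:

```python
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """S(rho) = -tr(rho ln rho), computed from the eigenvalues of rho."""
    eigvals = np.linalg.eigvalsh(rho)      # rho is Hermitian
    eigvals = eigvals[eigvals > 1e-12]     # drop zeros: 0 ln 0 = 0 by convention
    return float(-np.sum(eigvals * np.log(eigvals)))

# A maximally mixed qubit state has entropy ln 2; a pure state has entropy 0.
rho_mixed = np.eye(2) / 2
rho_pure = np.array([[1.0, 0.0], [0.0, 0.0]])

S_mixed = von_neumann_entropy(rho_mixed)   # ln 2
S_pure = von_neumann_entropy(rho_pure)     # 0
```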
== Fundamentals ==
In quantum mechanics, probabilities for the outcomes of experiments made upon a system are calculated from the quantum state describing that system. Each physical system is associated with a vector space, or more specifically a Hilbert space. The dimension of the Hilbert space may be infinite, as it is for the space of square-integrable functions on a line, which is used to define the quantum physics of a continuous degree of freedom. Alternatively, the Hilbert space may be finite-dimensional, as occurs for spin degrees of freedom. A density operator, the mathematical representation of a quantum state, is a positive semi-definite, self-adjoint operator of trace one acting on the Hilbert space of the system. A density operator that is a rank-1 projection is known as a pure quantum state, and all quantum states that are not pure are designated mixed. Pure states are also known as wavefunctions. Assigning a pure state to a quantum system implies certainty about the outcome of some measurement on that system (i.e.,
{\displaystyle P(x)=1} for some outcome {\displaystyle x}). The state space of a quantum system is the set of all states, pure and mixed, that can be assigned to it. For any system, the state space is a convex set: Any mixed state can be written as a convex combination of pure states, though not in a unique way. The von Neumann entropy quantifies the extent to which a state is mixed.
The prototypical example of a finite-dimensional Hilbert space is a qubit, a quantum system whose Hilbert space is 2-dimensional. An arbitrary state for a qubit can be written as a linear combination of the Pauli matrices, which provide a basis for
{\displaystyle 2\times 2} self-adjoint matrices:
{\displaystyle \rho ={\tfrac {1}{2}}\left(I+r_{x}\sigma _{x}+r_{y}\sigma _{y}+r_{z}\sigma _{z}\right),}
where the real numbers {\displaystyle (r_{x},r_{y},r_{z})} are the coordinates of a point within the unit ball and
{\displaystyle \sigma _{x}={\begin{pmatrix}0&1\\1&0\end{pmatrix}},\quad \sigma _{y}={\begin{pmatrix}0&-i\\i&0\end{pmatrix}},\quad \sigma _{z}={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}.}
The von Neumann entropy vanishes when {\displaystyle \rho } is a pure state, i.e., when the point {\displaystyle (r_{x},r_{y},r_{z})} lies on the surface of the unit ball, and it attains its maximum value when {\displaystyle \rho } is the maximally mixed state, which is given by {\displaystyle r_{x}=r_{y}=r_{z}=0}.
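Since the eigenvalues of ρ = ½(I + r·σ) are (1 ± |r|)/2, the qubit entropy depends only on the length of the Bloch vector. A short NumPy sketch (an illustration, not from the article):

```python
import numpy as np

def qubit_entropy(r: np.ndarray) -> float:
    """Entropy of rho = (I + r.sigma)/2; its eigenvalues are (1 +/- |r|)/2."""
    p = (1 + np.linalg.norm(r)) / 2
    if p in (0.0, 1.0):            # pure state on the surface of the ball
        return 0.0
    return float(-p * np.log(p) - (1 - p) * np.log(1 - p))

S_surface = qubit_entropy(np.array([0.0, 0.0, 1.0]))  # pure state: 0
S_center = qubit_entropy(np.array([0.0, 0.0, 0.0]))   # maximally mixed: ln 2
```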
== Properties ==
Some properties of the von Neumann entropy:
S(ρ) is zero if and only if ρ represents a pure state.
S(ρ) is maximal and equal to {\displaystyle \ln N} for a maximally mixed state, N being the dimension of the Hilbert space.
S(ρ) is invariant under changes in the basis of ρ, that is, S(ρ) = S(UρU†), with U a unitary transformation.
S(ρ) is concave, that is, given a collection of positive numbers λi which sum to unity ({\displaystyle \Sigma _{i}\lambda _{i}=1}) and density operators ρi, we have
{\displaystyle S{\bigg (}\sum _{i=1}^{k}\lambda _{i}\rho _{i}{\bigg )}\geq \sum _{i=1}^{k}\lambda _{i}S(\rho _{i}).}
S(ρ) is additive for independent systems. Given two density matrices ρA , ρB describing independent systems A and B, we have
{\displaystyle S(\rho _{A}\otimes \rho _{B})=S(\rho _{A})+S(\rho _{B}).}
S(ρ) is strongly subadditive. That is, for any three systems A, B, and C:
{\displaystyle S(\rho _{ABC})+S(\rho _{B})\leq S(\rho _{AB})+S(\rho _{BC}).}
This automatically means that S(ρ) is subadditive:
{\displaystyle S(\rho _{AC})\leq S(\rho _{A})+S(\rho _{C}).}
Below, the concept of subadditivity is discussed, followed by its generalization to strong subadditivity.
=== Subadditivity ===
If ρA, ρB are the reduced density matrices of the general state ρAB, then
{\displaystyle \left|S(\rho _{A})-S(\rho _{B})\right|\leq S(\rho _{AB})\leq S(\rho _{A})+S(\rho _{B}).}
The right hand inequality is known as subadditivity, and the left is sometimes known as the triangle inequality. While in Shannon's theory the entropy of a composite system can never be lower than the entropy of any of its parts, in quantum theory this is not the case; i.e., it is possible that S(ρAB) = 0, while S(ρA) = S(ρB) > 0. This is expressed by saying that the Shannon entropy is monotonic but the von Neumann entropy is not. For example, take the Bell state of two spin-1/2 particles:
{\displaystyle \left|\psi \right\rangle ={\tfrac {1}{\sqrt {2}}}\left(\left|\uparrow \downarrow \right\rangle +\left|\downarrow \uparrow \right\rangle \right).}
This is a pure state with zero entropy, but each spin has maximum entropy when considered individually, because its reduced density matrix is the maximally mixed state. This indicates that it is an entangled state; the use of entropy as an entanglement measure is discussed further below.
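This can be checked numerically. The NumPy sketch below (an illustration, not from the article) builds the normalized Bell state, traces out one spin, and compares the joint and reduced entropies:

```python
import numpy as np

def entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

# Normalized Bell state (|up,down> + |down,up>)/sqrt(2) on C^2 (x) C^2.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2)
rho_AB = np.outer(psi, psi)

# Partial trace over B: (rho_A)_{ij} = sum_k rho_{ik,jk}.
rho_A = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=1, axis2=3)

S_joint = entropy(rho_AB)  # pure joint state: 0
S_A = entropy(rho_A)       # maximally mixed reduced state: ln 2
```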
=== Strong subadditivity ===
The von Neumann entropy is also strongly subadditive. Given three Hilbert spaces, A, B, C,
{\displaystyle S(\rho _{ABC})+S(\rho _{B})\leq S(\rho _{AB})+S(\rho _{BC}).}
By using the proof technique that establishes the left side of the triangle inequality above, one can show that the strong subadditivity inequality is equivalent to the following inequality:
{\displaystyle S(\rho _{A})+S(\rho _{C})\leq S(\rho _{AB})+S(\rho _{BC})}
where ρAB, etc. are the reduced density matrices of a density matrix ρABC. If we apply ordinary subadditivity to the left side of this inequality, we then find
{\displaystyle S(\rho _{AC})\leq S(\rho _{AB})+S(\rho _{BC}).}
By symmetry, for any tripartite state ρABC, each of the three numbers S(ρAB), S(ρBC), S(ρAC) is less than or equal to the sum of the other two.
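Strong subadditivity can be spot-checked numerically. The sketch below (NumPy; the random three-qubit state construction is an assumption made for the demonstration) draws a generic full-rank density matrix and verifies the inequality:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

# Random full-rank density matrix on A (x) B (x) C (three qubits).
G = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
rho = G @ G.conj().T
rho /= np.trace(rho).real
t = rho.reshape(2, 2, 2, 2, 2, 2)  # indices (a, b, c, a', b', c')

rho_AB = np.einsum('abcxyc->abxy', t).reshape(4, 4)  # trace out C
rho_BC = np.einsum('abcayz->bcyz', t).reshape(4, 4)  # trace out A
rho_B = np.einsum('abcayc->by', t)                   # trace out A and C

# Strong subadditivity: S(ABC) + S(B) <= S(AB) + S(BC).
lhs = entropy(rho) + entropy(rho_B)
rhs = entropy(rho_AB) + entropy(rho_BC)
```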
=== Minimum Shannon entropy ===
Given a quantum state and a specification of a quantum measurement, we can calculate the probabilities for the different possible results of that measurement, and thus we can find the Shannon entropy of that probability distribution. A quantum measurement can be specified mathematically as a positive operator valued measure, or POVM. In the simplest case, a system with a finite-dimensional Hilbert space and measurement with a finite number of outcomes, a POVM is a set of positive semi-definite matrices
{\displaystyle \{F_{i}\}} on the Hilbert space that sum to the identity matrix,
{\displaystyle \sum _{i=1}^{n}F_{i}=\operatorname {I} .}
The POVM element {\displaystyle F_{i}} is associated with the measurement outcome {\displaystyle i}, such that the probability of obtaining it when making a measurement on the quantum state {\displaystyle \rho } is given by
{\displaystyle {\text{Prob}}(i)=\operatorname {tr} (\rho F_{i}).}
A POVM is rank-1 if all of the elements are proportional to rank-1 projection operators. The von Neumann entropy is the minimum achievable Shannon entropy, where the minimization is taken over all rank-1 POVMs.
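For a qubit this minimization can be illustrated concretely (a NumPy sketch; the particular state and bases are made-up examples): measuring in the eigenbasis of ρ reproduces the von Neumann entropy, while measuring in the Hadamard basis gives a larger Shannon entropy here:

```python
import numpy as np

def shannon(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

rho = np.diag([0.9, 0.1])  # already diagonal: its eigenbasis is the z basis

# Rank-1 projective POVM in the eigenbasis of rho.
probs_eig = np.array([0.9, 0.1])

# Rank-1 projective POVM in the Hadamard (x) basis.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
probs_x = np.array([plus @ rho @ plus, minus @ rho @ minus])  # both 1/2

S_vn = shannon(np.linalg.eigvalsh(rho))  # von Neumann entropy
H_eig = shannon(probs_eig)               # equals S_vn
H_x = shannon(probs_x)                   # larger: ln 2
```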
=== Holevo χ quantity ===
If ρi are density operators and λi is a collection of positive numbers which sum to unity (
{\displaystyle \Sigma _{i}\lambda _{i}=1}
), then
{\displaystyle \rho =\sum _{i=1}^{k}\lambda _{i}\rho _{i}}
is a valid density operator, and the difference between its von Neumann entropy and the weighted average of the entropies of the ρi is bounded by the Shannon entropy of the λi:
{\displaystyle S{\bigg (}\sum _{i=1}^{k}\lambda _{i}\rho _{i}{\bigg )}-\sum _{i=1}^{k}\lambda _{i}S(\rho _{i})\leq -\sum _{i=1}^{k}\lambda _{i}\log \lambda _{i}.}
Equality is attained when the supports of the ρi – the spaces spanned by their eigenvectors corresponding to nonzero eigenvalues – are orthogonal. The difference on the left-hand side of this inequality is known as the Holevo χ quantity and also appears in Holevo's theorem, an important result in quantum information theory.
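As an illustrative computation (NumPy; the specific ensemble is a made-up example), the χ quantity for an equal mixture of two non-orthogonal pure states is strictly between 0 and the Shannon bound ln 2:

```python
import numpy as np

def entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

# Equal mixture of two non-orthogonal pure states |0> and |+>.
lam = np.array([0.5, 0.5])
psi0 = np.array([1.0, 0.0])
psi1 = np.array([1.0, 1.0]) / np.sqrt(2)
rhos = [np.outer(p, p) for p in (psi0, psi1)]

rho_avg = sum(l * r for l, r in zip(lam, rhos))
# Holevo chi: entropy of the average minus average of the entropies.
chi = entropy(rho_avg) - sum(l * entropy(r) for l, r in zip(lam, rhos))
shannon_bound = float(-np.sum(lam * np.log(lam)))  # ln 2
```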
== Change under time evolution ==
=== Unitary ===
The time evolution of an isolated system is described by a unitary operator:
{\displaystyle \rho \to U\rho U^{\dagger }.}
Unitary evolution takes pure states into pure states, and it leaves the von Neumann entropy unchanged. This follows from the fact that the entropy of {\displaystyle \rho } is a function of the eigenvalues of {\displaystyle \rho }.
=== Measurement ===
A measurement upon a quantum system will generally bring about a change of the quantum state of that system. Writing a POVM does not provide the complete information necessary to describe this state-change process. To remedy this, further information is specified by decomposing each POVM element into a product:
{\displaystyle E_{i}=A_{i}^{\dagger }A_{i}.}
The Kraus operators {\displaystyle A_{i}}, named for Karl Kraus, provide a specification of the state-change process. They are not necessarily self-adjoint, but the products {\displaystyle A_{i}^{\dagger }A_{i}} are. If upon performing the measurement the outcome {\displaystyle E_{i}} is obtained, then the initial state {\displaystyle \rho } is updated to
{\displaystyle \rho \to \rho '={\frac {A_{i}\rho A_{i}^{\dagger }}{\mathrm {Prob} (i)}}={\frac {A_{i}\rho A_{i}^{\dagger }}{\operatorname {tr} (\rho E_{i})}}.}
An important special case is the Lüders rule, named for Gerhart Lüders. If the POVM elements are projection operators, then the Kraus operators can be taken to be the projectors themselves:
{\displaystyle \rho \to \rho '={\frac {\Pi _{i}\rho \Pi _{i}}{\operatorname {tr} (\rho \Pi _{i})}}.}
If the initial state {\displaystyle \rho } is pure, and the projectors {\displaystyle \Pi _{i}} have rank 1, they can be written as projectors onto the vectors {\displaystyle |\psi \rangle } and {\displaystyle |i\rangle }, respectively. The formula then simplifies to
{\displaystyle \rho =|\psi \rangle \langle \psi |\to \rho '={\frac {|i\rangle \langle i|\psi \rangle \langle \psi |i\rangle \langle i|}{|\langle i|\psi \rangle |^{2}}}=|i\rangle \langle i|.}
We can define a linear, trace-preserving, completely positive map by summing over all the possible post-measurement states of a POVM without the normalisation:
{\displaystyle \rho \to \sum _{i}A_{i}\rho A_{i}^{\dagger }.}
It is an example of a quantum channel, and can be interpreted as expressing how a quantum state changes if a measurement is performed but the result of that measurement is lost. Channels defined by projective measurements can never decrease the von Neumann entropy; they leave the entropy unchanged only if they do not change the density matrix. A quantum channel will increase or leave constant the von Neumann entropy of every input state if and only if the channel is unital, i.e., if it leaves fixed the maximally mixed state. An example of a channel that decreases the von Neumann entropy is the amplitude damping channel for a qubit, which sends all mixed states towards a pure state.
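The amplitude damping channel mentioned above can be sketched with its standard Kraus operators (a NumPy illustration; γ = 0.5 is an arbitrary choice). Applied to the maximally mixed state, it strictly decreases the entropy, consistent with the channel not being unital:

```python
import numpy as np

def entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

gamma = 0.5  # damping strength (arbitrary choice for the demo)
A0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
A1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

rho_in = np.eye(2) / 2  # maximally mixed input
rho_out = A0 @ rho_in @ A0.T + A1 @ rho_in @ A1.T

S_in = entropy(rho_in)    # ln 2
S_out = entropy(rho_out)  # strictly smaller: the channel is not unital
```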
== Thermodynamic meaning ==
The quantum version of the canonical distribution, the Gibbs state, is found by maximizing the von Neumann entropy under the constraint that the expected value of the Hamiltonian is fixed. A Gibbs state is a density operator with the same eigenvectors as the Hamiltonian, and its eigenvalues are
{\displaystyle \lambda _{i}={\frac {1}{Z}}\exp \left(-{\frac {E_{i}}{k_{B}T}}\right),}
where T is the temperature, {\displaystyle k_{B}} is the Boltzmann constant, and Z is the partition function. The von Neumann entropy of a Gibbs state is, up to a factor {\displaystyle k_{B}}, the thermodynamic entropy.
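A small numerical sketch (NumPy; the two-level Hamiltonian and units with k_B = 1 are assumptions of the demo) confirming that the von Neumann entropy of a Gibbs state matches the thermodynamic identity S = (⟨H⟩ − F)/T with F = −T ln Z:

```python
import numpy as np

def entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

# Gibbs state of a two-level Hamiltonian at temperature T (units with k_B = 1).
E = np.array([0.0, 1.0])   # energy eigenvalues
H = np.diag(E)
T = 1.0

w = np.exp(-E / T)
Z = w.sum()                # partition function
rho_gibbs = np.diag(w / Z)

avg_E = float(np.trace(rho_gibbs @ H))
S_thermo = avg_E / T + np.log(Z)   # (⟨H⟩ - F)/T with F = -T ln Z
S_vn = entropy(rho_gibbs)
```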
== Generalizations and derived quantities ==
=== Conditional entropy ===
Let {\displaystyle \rho _{AB}} be a joint state for the bipartite quantum system AB. Then the conditional von Neumann entropy {\displaystyle S(A|B)} is the difference between the entropy of {\displaystyle \rho _{AB}} and the entropy of the marginal state for subsystem B alone:
{\displaystyle S(A|B)=S(\rho _{AB})-S(\rho _{B}).}
This is bounded above by {\displaystyle S(\rho _{A})}. In other words, conditioning the description of subsystem A upon subsystem B cannot increase the entropy associated with A.
Quantum mutual information can be defined as the difference between the entropy of the joint state and the total entropy of the marginals:
{\displaystyle S(A:B)=S(\rho _{A})+S(\rho _{B})-S(\rho _{AB}),}
which can also be expressed in terms of conditional entropy:
{\displaystyle S(A:B)=S(A)-S(A|B)=S(B)-S(B|A).}
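For an entangled pure state such as the Bell state, the conditional entropy is negative, which has no classical (Shannon) counterpart; a NumPy sketch (illustration, not from the article):

```python
import numpy as np

def entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

# Bell state (|01> + |10>)/sqrt(2): S(A|B) is negative.
psi = np.zeros(4)
psi[1] = psi[2] = 1 / np.sqrt(2)
rho_AB = np.outer(psi, psi)
t = rho_AB.reshape(2, 2, 2, 2)
rho_A = np.einsum('ikjk->ij', t)  # trace out B
rho_B = np.einsum('kikj->ij', t)  # trace out A

S_cond = entropy(rho_AB) - entropy(rho_B)                 # -ln 2
I_AB = entropy(rho_A) + entropy(rho_B) - entropy(rho_AB)  # 2 ln 2
```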
=== Relative entropy ===
Let {\displaystyle \rho } and {\displaystyle \sigma } be two density operators in the same state space. The relative entropy is defined to be
{\displaystyle S(\sigma |\rho )=\operatorname {tr} [\rho (\log \rho -\log \sigma )].}
The relative entropy is always greater than or equal to zero; it equals zero if and only if {\displaystyle \rho =\sigma }. Unlike the von Neumann entropy itself, the relative entropy is monotonic, in that it decreases (or remains constant) when part of a system is traced over:
{\displaystyle S(\sigma _{A}|\rho _{A})\leq S(\sigma _{AB}|\rho _{AB}).}
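When the two density operators commute, the relative entropy reduces to the classical Kullback–Leibler divergence of their eigenvalue distributions; a minimal sketch (illustration; diagonal states assumed):

```python
import numpy as np

# For commuting (here: diagonal) density matrices, tr[rho(ln rho - ln sigma)]
# is the classical KL divergence of the eigenvalue distributions.
p = np.array([0.7, 0.3])  # eigenvalues of rho
q = np.array([0.5, 0.5])  # eigenvalues of sigma
rel_entropy = float(np.sum(p * (np.log(p) - np.log(q))))  # >= 0, zero iff p == q
```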
=== Entanglement measures ===
Just as energy is a resource that facilitates mechanical operations, entanglement is a resource that facilitates performing tasks that involve communication and computation. The mathematical definition of entanglement can be paraphrased as saying that maximal knowledge about the whole of a system does not imply maximal knowledge about the individual parts of that system. If the quantum state that describes a pair of particles is entangled, then the results of measurements upon one half of the pair can be strongly correlated with the results of measurements upon the other. However, entanglement is not the same as "correlation" as understood in classical probability theory and in daily life. Instead, entanglement can be thought of as potential correlation that can be used to generate actual correlation in an appropriate experiment. The state of a composite system is always expressible as a sum, or superposition, of products of states of local constituents; it is entangled if this sum cannot be written as a single product term. Entropy provides one tool that can be used to quantify entanglement. If the overall system is described by a pure state, the entropy of one subsystem can be used to measure its degree of entanglement with the other subsystems. For bipartite pure states, the von Neumann entropy of reduced states is the unique measure of entanglement in the sense that it is the only function on the family of states that satisfies certain axioms required of an entanglement measure. It is thus known as the entanglement entropy.
It is a classical result that the Shannon entropy achieves its maximum at, and only at, the uniform probability distribution {1/n, ..., 1/n}. Therefore, a bipartite pure state ρ ∈ HA ⊗ HB is said to be a maximally entangled state if the reduced state of each subsystem of ρ is the diagonal matrix
{\displaystyle {\begin{pmatrix}{\frac {1}{n}}&&\\&\ddots &\\&&{\frac {1}{n}}\end{pmatrix}}.}
For mixed states, the reduced von Neumann entropy is not the only reasonable entanglement measure. Some of the other measures are also entropic in character. For example, the relative entropy of entanglement is given by minimizing the relative entropy between a given state {\displaystyle \rho } and the set of nonentangled, or separable, states. The entanglement of formation is defined by minimizing, over all possible ways of writing {\displaystyle \rho } as a convex combination of pure states, the average entanglement entropy of those pure states. The squashed entanglement is based on the idea of extending a bipartite state {\displaystyle \rho _{AB}} to a state describing a larger system, {\displaystyle \rho _{ABE}}, such that the partial trace of {\displaystyle \rho _{ABE}} over E yields {\displaystyle \rho _{AB}}. One then finds the infimum of the quantity
{\displaystyle {\frac {1}{2}}[S(\rho _{AE})+S(\rho _{BE})-S(\rho _{E})-S(\rho _{ABE})],}
over all possible choices of {\displaystyle \rho _{ABE}}.
=== Quantum Rényi entropies ===
Just as the Shannon entropy function is one member of the broader family of classical Rényi entropies, so too can the von Neumann entropy be generalized to the quantum Rényi entropies:
{\displaystyle S_{\alpha }(\rho )={\frac {1}{1-\alpha }}\ln[\operatorname {tr} \rho ^{\alpha }]={\frac {1}{1-\alpha }}\ln \sum _{i=1}^{N}\lambda _{i}^{\alpha }.}
In the limit that {\displaystyle \alpha \to 1}, this recovers the von Neumann entropy. The quantum Rényi entropies are all additive for product states, and for any {\displaystyle \alpha }, the Rényi entropy {\displaystyle S_{\alpha }} vanishes for pure states and is maximized by the maximally mixed state. For any given state {\displaystyle \rho }, {\displaystyle S_{\alpha }(\rho )} is a continuous, nonincreasing function of the parameter {\displaystyle \alpha }. A weak version of subadditivity can be proven:
{\displaystyle S_{\alpha }(\rho _{A})-S_{0}(\rho _{B})\leq S_{\alpha }(\rho _{AB})\leq S_{\alpha }(\rho _{A})+S_{0}(\rho _{B}).}
Here, {\displaystyle S_{0}} is the quantum version of the Hartley entropy, i.e., the logarithm of the rank of the density matrix.
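A short NumPy sketch of the Rényi family (illustrative; the example spectrum is arbitrary), checking the α → 1 limit and that S_α does not exceed the von Neumann entropy at α = 2:

```python
import numpy as np

def renyi_entropy(rho, alpha):
    """Quantum Renyi entropy S_alpha = ln(tr rho^alpha) / (1 - alpha)."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-15]
    return float(np.log(np.sum(lam ** alpha)) / (1 - alpha))

rho = np.diag([0.5, 0.3, 0.2])  # arbitrary example spectrum

S_von_neumann = float(-np.sum(np.diag(rho) * np.log(np.diag(rho))))
S_near_1 = renyi_entropy(rho, 1.0 + 1e-6)  # approaches S_von_neumann
S_2 = renyi_entropy(rho, 2.0)              # collision entropy
```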
== History ==
The density matrix was introduced, with different motivations, by von Neumann and by Lev Landau. The motivation that inspired Landau was the impossibility of describing a subsystem of a composite quantum system by a state vector. On the other hand, von Neumann introduced the density matrix in order to develop both quantum statistical mechanics and a theory of quantum measurements. He introduced the expression now known as von Neumann entropy by arguing that a probabilistic combination of pure states is analogous to a mixture of ideal gases. Von Neumann first published on the topic in 1927. His argument was built upon earlier work by Albert Einstein and Leo Szilard.
Max Delbrück and Gert Molière proved the concavity and subadditivity properties of the von Neumann entropy in 1936. Quantum relative entropy was introduced by Hisaharu Umegaki in 1962. The subadditivity and triangle inequalities were proved in 1970 by Huzihiro Araki and Elliott H. Lieb. Strong subadditivity is a more difficult theorem. It was conjectured by Oscar Lanford and Derek Robinson in 1968. Lieb and Mary Beth Ruskai proved the theorem in 1973, using a matrix inequality proved earlier by Lieb.
== References ==
Bengtsson, Ingemar; Życzkowski, Karol (2017). Geometry of Quantum States: An Introduction to Quantum Entanglement (2nd ed.). Cambridge University Press. ISBN 978-1-107-02625-4.
Holevo, Alexander S. (2001). Statistical Structure of Quantum Theory. Lecture Notes in Physics. Monographs. Springer. ISBN 3-540-42082-7.
Nielsen, Michael A.; Chuang, Isaac L. (2010). Quantum Computation and Quantum Information (10th anniversary ed.). Cambridge: Cambridge Univ. Press. ISBN 978-0-521-63503-5.
Peres, Asher (1993). Quantum Theory: Concepts and Methods. Kluwer. ISBN 0-7923-2549-4.
Rau, Jochen (2017). Statistical Physics and Thermodynamics. Oxford University Press. ISBN 978-0-19-959506-8.
Rau, Jochen (2021). Quantum Theory: An Information Processing Approach. Oxford University Press. ISBN 978-0-19-289630-8.
Rieffel, Eleanor; Polak, Wolfgang (2011). Quantum Computing: A Gentle Introduction. Scientific and engineering computation. Cambridge, Mass: MIT Press. ISBN 978-0-262-01506-6.
Wilde, Mark M. (2017). Quantum Information Theory (2nd ed.). Cambridge University Press. arXiv:1106.1445. doi:10.1017/9781316809976. ISBN 9781316809976.
Zwiebach, Barton (2022). Mastering Quantum Mechanics: Essentials, Theory, and Applications. MIT Press. ISBN 978-0-262-04613-8.
The quantum Heisenberg model, developed by Werner Heisenberg, is a statistical mechanical model used in the study of critical points and phase transitions of magnetic systems, in which the spins of the magnetic systems are treated quantum mechanically. It is related to the prototypical Ising model, where at each site of a lattice, a spin
{\displaystyle \sigma _{i}\in \{\pm 1\}} represents a microscopic magnetic dipole whose magnetic moment is either up or down. Besides the coupling between magnetic dipole moments, there is also a multipolar version of the Heisenberg model called the multipolar exchange interaction.
== Overview ==
For quantum mechanical reasons (see exchange interaction or Magnetism § Quantum-mechanical origin of magnetism), the dominant coupling between two dipoles may cause nearest-neighbors to have lowest energy when they are aligned. Under this assumption (so that magnetic interactions only occur between adjacent dipoles) and on a 1-dimensional periodic lattice, the Hamiltonian can be written in the form
{\displaystyle {\hat {H}}=-J\sum _{j=1}^{N}\sigma _{j}\sigma _{j+1}-h\sum _{j=1}^{N}\sigma _{j}},
where {\displaystyle J} is the coupling constant and dipoles are represented by classical vectors (or "spins") σj, subject to the periodic boundary condition {\displaystyle \sigma _{N+1}=\sigma _{1}}.
The Heisenberg model is a more realistic model in that it treats the spins quantum-mechanically, by replacing the spin by a quantum operator acting upon the tensor product {\displaystyle (\mathbb {C} ^{2})^{\otimes N}}, of dimension {\displaystyle 2^{N}}. To define it, recall the Pauli spin-1/2 matrices
{\displaystyle \sigma ^{x}={\begin{pmatrix}0&1\\1&0\end{pmatrix}},\quad \sigma ^{y}={\begin{pmatrix}0&-i\\i&0\end{pmatrix}},\quad \sigma ^{z}={\begin{pmatrix}1&0\\0&-1\end{pmatrix}},}
and for {\displaystyle 1\leq j\leq N} and {\displaystyle a\in \{x,y,z\}} denote {\displaystyle \sigma _{j}^{a}=I^{\otimes j-1}\otimes \sigma ^{a}\otimes I^{\otimes N-j}}, where {\displaystyle I} is the {\displaystyle 2\times 2} identity matrix.
Given a choice of real-valued coupling constants {\displaystyle J_{x},J_{y},} and {\displaystyle J_{z}}, the Hamiltonian is given by
{\displaystyle {\hat {H}}=-{\frac {1}{2}}\sum _{j=1}^{N}(J_{x}\sigma _{j}^{x}\sigma _{j+1}^{x}+J_{y}\sigma _{j}^{y}\sigma _{j+1}^{y}+J_{z}\sigma _{j}^{z}\sigma _{j+1}^{z}+h\sigma _{j}^{z})}
where the {\displaystyle h} on the right-hand side indicates the external magnetic field, with periodic boundary conditions. The objective is to determine the spectrum of the Hamiltonian, from which the partition function can be calculated and the thermodynamics of the system can be studied.
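For small N the Hamiltonian can be built explicitly as a 2^N × 2^N matrix, using the tensor-product definition of the site operators σ_j^a above. A NumPy sketch (an illustration, not from the article; for the ferromagnetic XXX chain with N = 4 and J = 1 the all-spins-up state gives the ground energy −NJ/2 = −2):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, j, N):
    """sigma_j^a = I^{(j-1) factors} (x) sigma^a (x) I^{(N-j) factors}."""
    out = np.array([[1.0 + 0j]])
    for n in range(N):
        out = np.kron(out, op if n == j else I2)
    return out

def heisenberg(N, Jx, Jy, Jz, h=0.0):
    """H = -(1/2) sum_j (Jx XX + Jy YY + Jz ZZ + h Z), periodic chain."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for j in range(N):
        k = (j + 1) % N  # periodic boundary condition
        for J, s in ((Jx, sx), (Jy, sy), (Jz, sz)):
            H -= 0.5 * J * site_op(s, j, N) @ site_op(s, k, N)
        H -= 0.5 * h * site_op(sz, j, N)
    return H

H = heisenberg(N=4, Jx=1.0, Jy=1.0, Jz=1.0)  # XXX chain, ferromagnetic J > 0
evals = np.linalg.eigvalsh(H)
ground_energy = float(evals[0])  # -2 for this chain
```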
It is common to name the model depending on the values of {\displaystyle J_{x}}, {\displaystyle J_{y}} and {\displaystyle J_{z}}: if {\displaystyle J_{x}\neq J_{y}\neq J_{z}}, the model is called the Heisenberg XYZ model; in the case of {\displaystyle J=J_{x}=J_{y}\neq J_{z}=\Delta }, it is the Heisenberg XXZ model; if {\displaystyle J_{x}=J_{y}=J_{z}=J}, it is the Heisenberg XXX model. The spin-1/2 Heisenberg model in one dimension may be solved exactly using the Bethe ansatz. In the algebraic formulation, these are related to particular quantum affine algebras and elliptic quantum groups in the XXZ and XYZ cases respectively. Other approaches solve the model without the Bethe ansatz.
=== XXX model ===
The physics of the Heisenberg XXX model strongly depends on the sign of the coupling constant {\displaystyle J} and the dimension of the space. For positive {\displaystyle J} the ground state is always ferromagnetic. At negative {\displaystyle J} the ground state is antiferromagnetic in two and three dimensions. In one dimension the nature of correlations in the antiferromagnetic Heisenberg model depends on the spin of the magnetic dipoles. If the spin is integer then only short-range order is present. A system of half-integer spins exhibits quasi-long-range order.
A simplified version of the Heisenberg model is the one-dimensional Ising model, where the transverse magnetic field is in the x-direction, and the interaction is only in the z-direction:
{\displaystyle {\hat {H}}=-J\sum _{j=1}^{N}\sigma _{j}^{z}\sigma _{j+1}^{z}-gJ\sum _{j=1}^{N}\sigma _{j}^{x}}.
At small g and large g, the ground state degeneracy is different, which implies that there must be a quantum phase transition in between. It can be solved exactly for the critical point using the duality analysis. The duality transition of the Pauli matrices is
{\textstyle \sigma _{i}^{z}=\prod _{j\leq i}S_{j}^{x}} and {\displaystyle \sigma _{i}^{x}=S_{i}^{z}S_{i+1}^{z}}, where {\displaystyle S^{x}} and {\displaystyle S^{z}} are also Pauli matrices which obey the Pauli matrix algebra.
Under periodic boundary conditions, the transformed Hamiltonian can be shown to be of a very similar form:
{\displaystyle {\hat {H}}=-gJ\sum _{j=1}^{N}S_{j}^{z}S_{j+1}^{z}-J\sum _{j=1}^{N}S_{j}^{x}}
but with the {\displaystyle g} attached to the spin interaction term. Assuming that there is only one critical point, we can conclude that the phase transition happens at {\displaystyle g=1}.
== Solution by Bethe ansatz ==
=== XXX1/2 model ===
Following the approach of Ludwig Faddeev (1996), the spectrum of the Hamiltonian for the XXX model
H
=
1
4
∑
α
,
n
(
σ
n
α
σ
n
+
1
α
−
1
)
{\displaystyle H={\frac {1}{4}}\sum _{\alpha ,n}(\sigma _{n}^{\alpha }\sigma _{n+1}^{\alpha }-1)}
can be determined by the Bethe ansatz. In this context, for an appropriately defined family of operators
{\displaystyle B(\lambda )} dependent on a spectral parameter {\displaystyle \lambda \in \mathbb {C} } acting on the total Hilbert space {\displaystyle {\mathcal {H}}=\bigotimes _{n=1}^{N}h_{n}} with each {\displaystyle h_{n}\cong \mathbb {C} ^{2}}, a Bethe vector is a vector of the form
{\displaystyle \Phi (\lambda _{1},\cdots ,\lambda _{m})=B(\lambda _{1})\cdots B(\lambda _{m})v_{0}}
where {\displaystyle v_{0}=\bigotimes _{n=1}^{N}|\uparrow \,\rangle }.
If the {\displaystyle \lambda _{k}} satisfy the Bethe equation
{\displaystyle \left({\frac {\lambda _{k}+i/2}{\lambda _{k}-i/2}}\right)^{N}=\prod _{j\neq k}{\frac {\lambda _{k}-\lambda _{j}+i}{\lambda _{k}-\lambda _{j}-i}},}
then the Bethe vector is an eigenvector of {\displaystyle H} with eigenvalue {\displaystyle -\sum _{k}{\frac {1}{2}}{\frac {1}{\lambda _{k}^{2}+1/4}}}.
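The eigenvalue formula can be verified in the simplest, one-magnon case. The sketch below is an illustrative numerical check: in the sector with a single flipped spin, the XXX Hamiltonian reduces to an N×N hopping matrix (a standard computation), and its spectrum is compared with the Bethe prediction E = cos p − 1 obtained by substituting the one-magnon rapidity λ = (1/2) cot(p/2) into the eigenvalue above.

```python
import numpy as np

N = 8  # chain length (arbitrary choice for the check)

# In the one-down-spin sector, H = (1/4) sum_{alpha,n}(sigma sigma - 1)
# acts as a hopping matrix: amplitude 1/2 between neighbouring magnon
# positions and -1 on the diagonal (periodic chain).
H1 = -np.eye(N)
for j in range(N):
    H1[j, (j + 1) % N] += 0.5
    H1[(j + 1) % N, j] += 0.5

numeric = np.sort(np.linalg.eigvalsh(H1))

# Bethe prediction: a magnon with momentum p = 2 pi n / N has rapidity
# lambda = (1/2) cot(p/2), so the eigenvalue formula gives
# E = -(1/2) / (lambda^2 + 1/4) = cos p - 1.
p = 2 * np.pi * np.arange(N) / N
bethe = np.sort(np.cos(p) - 1)

print(np.max(np.abs(numeric - bethe)))  # agreement to machine precision
```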
The family {\displaystyle B(\lambda )} as well as three other families come from a transfer matrix {\displaystyle T(\lambda )} (in turn defined using a Lax matrix), which acts on {\displaystyle {\mathcal {H}}} along with an auxiliary space {\displaystyle h_{a}\cong \mathbb {C} ^{2}}, and can be written as a {\displaystyle 2\times 2} block matrix with entries in {\displaystyle \mathrm {End} ({\mathcal {H}})},
{\displaystyle T(\lambda )={\begin{pmatrix}A(\lambda )&B(\lambda )\\C(\lambda )&D(\lambda )\end{pmatrix}},}
which satisfies fundamental commutation relations (FCRs) similar in form to the Yang–Baxter equation used to derive the Bethe equations. The FCRs also show there is a large commuting subalgebra given by the generating function
{\displaystyle F(\lambda )=\mathrm {tr} _{a}(T(\lambda ))=A(\lambda )+D(\lambda )}, as {\displaystyle [F(\lambda ),F(\mu )]=0}, so when {\displaystyle F(\lambda )} is written as a polynomial in {\displaystyle \lambda }, the coefficients all commute, spanning a commutative subalgebra which {\displaystyle H} is an element of. The Bethe vectors are in fact simultaneous eigenvectors for the whole subalgebra.
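The commutativity [F(λ), F(μ)] = 0 can be checked numerically for a short chain. The sketch below uses the standard spin-1/2 Lax matrix L(λ) = λ·1 + i Σ_α S_n^α σ_a^α (conventions follow Faddeev's notes; normalizations vary between references) and multiplies the Lax matrices as 2×2 block matrices over the auxiliary space.

```python
import numpy as np

N = 4        # chain length; total Hilbert space dimension 2^N = 16
dim = 2 ** N

# Spin-1/2 operators S^alpha = sigma^alpha / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def site_op(op, n):
    """Embed a single-site operator at site n of the N-site chain."""
    mats = [np.eye(2, dtype=complex)] * N
    mats[n] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def lax(lam, n):
    """L(lam) = lam*1 + i * sum_alpha S_n^alpha sigma_a^alpha,
    written as a 2x2 block matrix over the auxiliary space h_a."""
    Sx, Sy, Sz = site_op(sx, n), site_op(sy, n), site_op(sz, n)
    one = lam * np.eye(dim, dtype=complex)
    return [[one + 1j * Sz, 1j * (Sx - 1j * Sy)],
            [1j * (Sx + 1j * Sy), one - 1j * Sz]]

def F(lam):
    """F(lam) = tr_a T(lam), with T(lam) = L_N(lam) ... L_1(lam)."""
    T = lax(lam, 0)
    for n in range(1, N):
        L = lax(lam, n)
        T = [[L[0][0] @ T[0][0] + L[0][1] @ T[1][0],
              L[0][0] @ T[0][1] + L[0][1] @ T[1][1]],
             [L[1][0] @ T[0][0] + L[1][1] @ T[1][0],
              L[1][0] @ T[0][1] + L[1][1] @ T[1][1]]]
    return T[0][0] + T[1][1]   # A(lam) + D(lam)

comm = F(0.7) @ F(1.3) - F(1.3) @ F(0.7)
print(np.max(np.abs(comm)))   # numerically zero
```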
=== XXXs model ===
For higher spins, say spin {\displaystyle s}, replace {\displaystyle \sigma ^{\alpha }} with {\displaystyle S^{\alpha }} coming from the representation of the Lie algebra {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} of dimension {\displaystyle 2s+1}. The XXXs Hamiltonian
{\displaystyle H=\sum _{\alpha ,n}(S_{n}^{\alpha }S_{n+1}^{\alpha }-(S_{n}^{\alpha }S_{n+1}^{\alpha })^{2})}
is solvable by Bethe ansatz with Bethe equations
{\displaystyle \left({\frac {\lambda _{k}+is}{\lambda _{k}-is}}\right)^{N}=\prod _{j\neq k}{\frac {\lambda _{k}-\lambda _{j}+i}{\lambda _{k}-\lambda _{j}-i}}.}
=== XXZs model ===
For spin {\displaystyle s} and a parameter {\displaystyle \gamma } for the deformation from the XXX model, the BAE (Bethe ansatz equation) is
{\displaystyle \left({\frac {\sinh(\lambda _{k}+is\gamma )}{\sinh(\lambda _{k}-is\gamma )}}\right)^{N}=\prod _{j\neq k}{\frac {\sinh(\lambda _{k}-\lambda _{j}+i\gamma )}{\sinh(\lambda _{k}-\lambda _{j}-i\gamma )}}.}
Notably, for {\displaystyle s={\frac {1}{2}}} these are precisely the BAEs for the six-vertex model, after identifying {\displaystyle \gamma =2\eta }, where {\displaystyle \eta } is the anisotropy parameter of the six-vertex model. This was originally thought to be coincidental until Baxter showed the XXZ Hamiltonian was contained in the algebra generated by the transfer matrix {\displaystyle T(\nu )}, given exactly by
{\displaystyle H_{XXZ_{1/2}}=-i\sin 2\eta {\frac {d}{d\nu }}\log T(\nu ){\Big |}_{\nu =-i\eta }-{\frac {1}{2}}\cos 2\eta \,1^{\otimes N}.}
== Applications ==
Another important object is entanglement entropy. One way to describe it is to subdivide the unique ground state into a block (several sequential spins) and the environment (the rest of the ground state). The entropy of the block can be considered as entanglement entropy. At zero temperature in the critical region (thermodynamic limit) it scales logarithmically with the size of the block. As the temperature increases the logarithmic dependence changes into a linear function. For large temperatures linear dependence follows from the second law of thermodynamics.
The Heisenberg model provides an important and tractable theoretical example for applying density matrix renormalisation.
The six-vertex model can be solved using the algebraic Bethe ansatz for the Heisenberg spin chain (Baxter 1982).
The half-filled Hubbard model in the limit of strong repulsive interactions can be mapped onto a Heisenberg model with {\displaystyle J<0} representing the strength of the superexchange interaction.
Limits of the model as the lattice spacing is sent to zero (and various limits are taken for variables appearing in the theory) describe integrable field theories, both non-relativistic, such as the nonlinear Schrödinger equation, and relativistic, such as the {\displaystyle S^{2}} sigma model, the {\displaystyle S^{3}} sigma model (which is also a principal chiral model) and the sine-Gordon model.
Calculating certain correlation functions in the planar or large {\displaystyle N} limit of N = 4 supersymmetric Yang–Mills theory.
== Extended symmetry ==
The integrability is underpinned by the existence of large symmetry algebras for the different models. For the XXX case this is the Yangian {\displaystyle Y({\mathfrak {sl}}_{2})}, while in the XXZ case this is the quantum group {\displaystyle {\hat {{\mathfrak {sl}}_{q}(2)}}}, the q-deformation of the affine Lie algebra {\displaystyle {\hat {{\mathfrak {sl}}_{2}}}}, as explained in the notes by Faddeev (1996).
These appear through the transfer matrix, and the condition that the Bethe vectors are generated from a state {\displaystyle \Omega } satisfying {\displaystyle C(\lambda )\cdot \Omega =0} corresponds to the solutions being part of a highest-weight representation of the extended symmetry algebras.
== See also ==
Classical Heisenberg model
DMRG of the Heisenberg model
Quantum rotor model
t-J model
J1 J2 model
Majumdar–Ghosh model
AKLT model
Multipolar exchange interaction
== References ==
R.J. Baxter, Exactly solved models in statistical mechanics, London, Academic Press, 1982
Heisenberg, W. (1 September 1928). "Zur Theorie des Ferromagnetismus" [On the theory of ferromagnetism]. Zeitschrift für Physik (in German). 49 (9): 619–636. Bibcode:1928ZPhy...49..619H. doi:10.1007/BF01328601. S2CID 122524239.
Bethe, H. (1 March 1931). "Zur Theorie der Metalle" [On the theory of metals]. Zeitschrift für Physik (in German). 71 (3): 205–226. Bibcode:1931ZPhy...71..205B. doi:10.1007/BF01341708. S2CID 124225487.
== Notes == | Wikipedia/Heisenberg_model_(quantum) |
In statistical mechanics, Boltzmann's entropy formula (also known as the Boltzmann–Planck equation, not to be confused with the more general Boltzmann equation, which is a partial differential equation) is a probability equation relating the entropy {\displaystyle S}, also written as {\displaystyle S_{\mathrm {B} }}, of an ideal gas to the multiplicity (commonly denoted as {\displaystyle \Omega } or {\displaystyle W}), the number of real microstates corresponding to the gas's macrostate:
{\displaystyle S=k_{\mathrm {B} }\ln W}
where {\displaystyle k_{\mathrm {B} }} is the Boltzmann constant (also written as simply {\displaystyle k}), equal to 1.380649 × 10−23 J/K, and {\displaystyle \ln } is the natural logarithm function (log base e).
In short, the Boltzmann formula shows the relationship between entropy and the number of ways the atoms or molecules of a certain kind of thermodynamic system can be arranged.
== History ==
The equation was originally formulated by Ludwig Boltzmann between 1872 and 1875, but later put into its current form by Max Planck in about 1900. To quote Planck, "the logarithmic connection between entropy and probability was first stated by L. Boltzmann in his kinetic theory of gases".
A 'microstate' is a state specified in terms of the constituent particles of a body of matter or radiation that has been specified as a macrostate in terms of such variables as internal energy and pressure. A macrostate is experimentally observable, with at least a finite extent in spacetime. A microstate can be instantaneous, or can be a trajectory composed of a temporal progression of instantaneous microstates. In experimental practice, such are scarcely observable. The present account concerns instantaneous microstates.
The value of W was originally intended to be proportional to the Wahrscheinlichkeit (the German word for probability) of a macroscopic state for some probability distribution of possible microstates—the collection of (unobservable microscopic single particle) "ways" in which the (observable macroscopic) thermodynamic state of a system can be realized by assigning different positions and momenta to the respective molecules.
There are many instantaneous microstates that apply to a given macrostate. Boltzmann considered collections of such microstates. For a given macrostate, he called the collection of all possible instantaneous microstates of a certain kind by the name monode, for which Gibbs' term ensemble is used nowadays. For single particle instantaneous microstates, Boltzmann called the collection an ergode. Subsequently, Gibbs called it a microcanonical ensemble, and this name is widely used today, perhaps partly because Bohr was more interested in the writings of Gibbs than of Boltzmann.
Interpreted in this way, Boltzmann's formula is the most basic formula for the thermodynamic entropy. Boltzmann's paradigm was an ideal gas of N identical particles, of which Ni are in the i-th microscopic condition (range) of position and momentum. For this case, the probability of each microstate of the system is equal, so it was equivalent for Boltzmann to calculate the number of microstates associated with a macrostate. W was historically misinterpreted as literally meaning the number of microstates, and that is what it usually means today. W can be counted using the formula for permutations
{\displaystyle W=N!\prod _{i}{\frac {1}{N_{i}!}}}
where i ranges over all possible molecular conditions and "!" denotes factorial. The "correction" in the denominator is due to the fact that identical particles in the same condition are indistinguishable. W is sometimes called the "thermodynamic probability" since it is an integer greater than one, while mathematical probabilities are always numbers between zero and one.
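As a small illustration of the counting (the occupation numbers below are invented for the example), the permutation formula and the resulting entropy S = k_B ln W can be computed directly. A macrostate whose particles are spread over several conditions has many more microstates, and hence more entropy, than one with all particles in a single condition.

```python
from math import factorial, log, prod

def multiplicity(occupations):
    """W = N! / (n_1! n_2! ...): the number of ways to assign N labelled
    particles to conditions with the given occupation numbers."""
    N = sum(occupations)
    return factorial(N) // prod(factorial(n) for n in occupations)

k_B = 1.380649e-23  # Boltzmann constant, J/K

# 4 particles spread over 3 conditions vs. all 4 packed into one:
W_spread = multiplicity([2, 1, 1])   # 4!/(2! 1! 1!) = 12
W_packed = multiplicity([4, 0, 0])   # 4!/4! = 1

S_spread = k_B * log(W_spread)
S_packed = k_B * log(W_packed)       # ln 1 = 0: a unique arrangement
print(W_spread, W_packed, S_spread, S_packed)
```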
== Introduction of the natural logarithm ==
In Boltzmann’s 1877 paper, he clarifies how molecular states are counted to determine the state distribution number, introducing the logarithm to simplify the equation.
Boltzmann writes:
“The first task is to determine the permutation number, previously designated by 𝒫, for any state distribution. Denoting by J the sum of the permutations 𝒫 for all possible state distributions, the quotient 𝒫/J is the state distribution’s probability, henceforth denoted by W. We would first like to calculate the permutations 𝒫 for the state distribution characterized by w0 molecules with kinetic energy 0, w1 molecules with kinetic energy ϵ, etc. …
“The most likely state distribution will be for those w0, w1 … values for which 𝒫 is a maximum or, since the numerator is a constant, for which the denominator is a minimum. The values w0, w1 must simultaneously satisfy the two constraints (1) and (2). Since the denominator of 𝒫 is a product, it is easiest to determine the minimum of its logarithm, …”
Therefore, by making the denominator small, he maximizes the number of states. So to simplify the product of the factorials, he uses their natural logarithm to add them. This is the reason for the natural logarithm in Boltzmann’s entropy formula.
== Generalization ==
Boltzmann's formula applies to microstates of a system, each possible microstate of which is presumed to be equally probable.
But in thermodynamics, the universe is divided into a system of interest, plus its surroundings; then the entropy of Boltzmann's microscopically specified system can be identified with the system entropy in classical thermodynamics. The microstates of such a thermodynamic system are not equally probable—for example, high energy microstates are less probable than low energy microstates for a thermodynamic system kept at a fixed temperature by allowing contact with a heat bath.
For thermodynamic systems where microstates of the system may not have equal probabilities, the appropriate generalization, called the Gibbs entropy formula, is:
{\displaystyle S=-k_{\mathrm {B} }\sum _{i}p_{i}\ln p_{i}}
This reduces to equation (1) if the probabilities pi are all equal.
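A quick numerical check of that reduction (illustrative only; W = 1024 is an arbitrary choice): when all W microstates are equally probable, p_i = 1/W, the Gibbs formula gives exactly k_B ln W, and any non-uniform distribution over the same microstates gives less.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def gibbs_entropy(probs):
    """S = -k_B sum_i p_i ln p_i (terms with p_i = 0 contribute nothing)."""
    return -k_B * sum(p * math.log(p) for p in probs if p > 0)

W = 1024
uniform = [1.0 / W] * W
print(gibbs_entropy(uniform), k_B * math.log(W))  # equal

# A non-uniform distribution over the same W microstates has lower entropy:
skewed = [0.5] + [0.5 / (W - 1)] * (W - 1)
print(gibbs_entropy(skewed) < k_B * math.log(W))
```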
Boltzmann used a {\displaystyle \rho \ln \rho } formula as early as 1866. He interpreted {\displaystyle \rho } as a density in phase space, without mentioning probability, but since this satisfies the axiomatic definition of a probability measure we can retrospectively interpret it as a probability anyway. Gibbs gave an explicitly probabilistic interpretation in 1878.
Boltzmann himself used an expression equivalent to (3) in his later work and recognized it as more general than equation (1). That is, equation (1) is a corollary of equation (3): in every situation where equation (1) is valid, equation (3) is valid also, but not vice versa.
== Boltzmann entropy excludes statistical dependencies ==
The term Boltzmann entropy is also sometimes used to indicate entropies calculated based on the approximation that the overall probability can be factored into an identical separate term for each particle—i.e., assuming each particle has an identical independent probability distribution, and ignoring interactions and correlations between the particles. This is exact for an ideal gas of identical particles that move independently apart from instantaneous collisions, and is an approximation, possibly a poor one, for other systems.
The Boltzmann entropy is obtained if one assumes one can treat all the component particles of a thermodynamic system as statistically independent. The probability distribution of the system as a whole then factorises into the product of N separate identical terms, one term for each particle; and when the summation is taken over each possible state in the 6-dimensional phase space of a single particle (rather than the 6N-dimensional phase space of the system as a whole), the Gibbs entropy formula simplifies to the Boltzmann entropy {\displaystyle S_{\mathrm {B} }}.
This reflects the original statistical entropy function introduced by Ludwig Boltzmann in 1872. For the special case of an ideal gas it exactly corresponds to the proper thermodynamic entropy.
For anything but the most dilute of real gases, {\displaystyle S_{\mathrm {B} }} leads to increasingly wrong predictions of entropies and physical behaviours, by ignoring the interactions and correlations between different molecules. Instead one must consider the ensemble of states of the system as a whole, called by Boltzmann a holode, rather than single-particle states. Gibbs considered several such kinds of ensembles; relevant here is the canonical one.
== See also ==
History of entropy
H theorem
Gibbs entropy formula
nat (unit)
Shannon entropy
von Neumann entropy
== References ==
== External links ==
Introduction to Boltzmann's Equation
Vorlesungen über Gastheorie, Ludwig Boltzmann (1896) vol. I, J.A. Barth, Leipzig
Vorlesungen über Gastheorie, Ludwig Boltzmann (1898) vol. II. J.A. Barth, Leipzig. | Wikipedia/Boltzmann_entropy |
The principle of maximum entropy states that the probability distribution which best represents the current state of knowledge about a system is the one with largest entropy, in the context of precisely stated prior data (such as a proposition that expresses testable information).
Another way of stating this: Take precisely stated prior data or testable information about a probability distribution function. Consider the set of all trial probability distributions that would encode the prior data. According to this principle, the distribution with maximal information entropy is the best choice.
== History ==
The principle was first expounded by E. T. Jaynes in two papers in 1957, where he emphasized a natural correspondence between statistical mechanics and information theory. In particular, Jaynes argued that the Gibbsian method of statistical mechanics is sound by also arguing that the entropy of statistical mechanics and the information entropy of information theory are the same concept. Consequently, statistical mechanics should be considered a particular application of a general tool of logical inference and information theory.
== Overview ==
In most practical cases, the stated prior data or testable information is given by a set of conserved quantities (average values of some moment functions), associated with the probability distribution in question. This is the way the maximum entropy principle is most often used in statistical thermodynamics. Another possibility is to prescribe some symmetries of the probability distribution. The equivalence between conserved quantities and corresponding symmetry groups implies a similar equivalence for these two ways of specifying the testable information in the maximum entropy method.
The maximum entropy principle is also needed to guarantee the uniqueness and consistency of probability assignments obtained by different methods, statistical mechanics and logical inference in particular.
The maximum entropy principle makes explicit our freedom in using different forms of prior data. As a special case, a uniform prior probability density (Laplace's principle of indifference, sometimes called the principle of insufficient reason), may be adopted. Thus, the maximum entropy principle is not merely an alternative way to view the usual methods of inference of classical statistics, but represents a significant conceptual generalization of those methods.
However these statements do not imply that thermodynamical systems need not be shown to be ergodic to justify treatment as a statistical ensemble.
In ordinary language, the principle of maximum entropy can be said to express a claim of epistemic modesty, or of maximum ignorance. The selected distribution is the one that makes the least claim to being informed beyond the stated prior data, that is to say the one that admits the most ignorance beyond the stated prior data.
== Testable information ==
The principle of maximum entropy is useful explicitly only when applied to testable information. Testable information is a statement about a probability distribution whose truth or falsity is well-defined. For example, the statements
“the expectation of the variable {\displaystyle x} is 2.87” and “{\displaystyle p_{2}+p_{3}>0.6}” (where {\displaystyle p_{2}} and {\displaystyle p_{3}} are probabilities of events) are statements of testable information.
Given testable information, the maximum entropy procedure consists of seeking the probability distribution which maximizes information entropy, subject to the constraints of the information. This constrained optimization problem is typically solved using the method of Lagrange multipliers.
Entropy maximization with no testable information respects the universal "constraint" that the sum of the probabilities is one. Under this constraint, the maximum entropy discrete probability distribution is the uniform distribution,
{\displaystyle p_{i}={\frac {1}{n}}\ {\rm {for\ all}}\ i\in \{\,1,\dots ,n\,\}.}
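This can be checked numerically: among randomly drawn distributions on n outcomes, none exceeds the entropy log n of the uniform distribution (a sketch; n = 6 and the number of trials are arbitrary choices).

```python
import math
import random

def entropy(p):
    """Shannon entropy in nats; terms with p_i = 0 contribute nothing."""
    return -sum(x * math.log(x) for x in p if x > 0)

n = 6
uniform_H = math.log(n)  # entropy of the uniform distribution on n outcomes

random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(n)]
    total = sum(w)
    p = [x / total for x in w]  # a random normalized distribution
    assert entropy(p) <= uniform_H + 1e-12

print("no random distribution exceeded log n =", uniform_H)
```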
== Applications ==
The principle of maximum entropy is commonly applied in two ways to inferential problems:
=== Prior probabilities ===
The principle of maximum entropy is often used to obtain prior probability distributions for Bayesian inference. Jaynes was a strong advocate of this approach, claiming the maximum entropy distribution represented the least informative distribution.
A large amount of literature is now dedicated to the elicitation of maximum entropy priors and links with channel coding.
=== Posterior probabilities ===
Maximum entropy is a sufficient updating rule for radical probabilism. Richard Jeffrey's probability kinematics is a special case of maximum entropy inference. However, maximum entropy is not a generalisation of all such sufficient updating rules.
=== Maximum entropy models ===
Alternatively, the principle is often invoked for model specification: in this case the observed data itself is assumed to be the testable information. Such models are widely used in natural language processing. An example of such a model is logistic regression, which corresponds to the maximum entropy classifier for independent observations.
=== Probability density estimation ===
One of the main applications of the maximum entropy principle is in discrete and continuous density estimation.
Similar to support vector machine estimators,
the maximum entropy principle may require the solution to a quadratic programming problem, and thus provide
a sparse mixture model as the optimal density estimator. One important advantage of the method is its ability to incorporate prior information in the density estimation.
== General solution for the maximum entropy distribution with linear constraints ==
=== Discrete case ===
We have some testable information I about a quantity x taking values in {x1, x2,..., xn}. We assume this information has the form of m constraints on the expectations of the functions fk; that is, we require our probability distribution to satisfy the moment inequality/equality constraints:
{\displaystyle \sum _{i=1}^{n}\Pr(x_{i})f_{k}(x_{i})\geq F_{k}\qquad k=1,\ldots ,m.}
where the {\displaystyle F_{k}} are observables. We also require the probability density to sum to one, which may be viewed as a primitive constraint on the identity function and an observable equal to 1, giving the constraint
{\displaystyle \sum _{i=1}^{n}\Pr(x_{i})=1.}
The probability distribution with maximum information entropy subject to these inequality/equality constraints is of the form:
{\displaystyle \Pr(x_{i})={\frac {1}{Z(\lambda _{1},\ldots ,\lambda _{m})}}\exp \left[\lambda _{1}f_{1}(x_{i})+\cdots +\lambda _{m}f_{m}(x_{i})\right],}
for some {\displaystyle \lambda _{1},\ldots ,\lambda _{m}}. It is sometimes called the Gibbs distribution. The normalization constant is determined by:
{\displaystyle Z(\lambda _{1},\ldots ,\lambda _{m})=\sum _{i=1}^{n}\exp \left[\lambda _{1}f_{1}(x_{i})+\cdots +\lambda _{m}f_{m}(x_{i})\right],}
and is conventionally called the partition function. (The Pitman–Koopman theorem states that the necessary and sufficient condition for a sampling distribution to admit sufficient statistics of bounded dimension is that it have the general form of a maximum entropy distribution.)
The λk parameters are Lagrange multipliers. In the case of equality constraints their values are determined from the solution of the nonlinear equations
{\displaystyle F_{k}={\frac {\partial }{\partial \lambda _{k}}}\log Z(\lambda _{1},\ldots ,\lambda _{m}).}
In the case of inequality constraints, the Lagrange multipliers are determined from the solution of a convex optimization program with linear constraints.
In both cases, there is no closed form solution, and the computation of the Lagrange multipliers usually requires numerical methods.
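As a worked instance of these equations (a sketch with one equality constraint, following Jaynes' well-known "Brandeis dice" setup), take x ∈ {1,…,6} with the single constraint ⟨x⟩ = 4.5. There is then one multiplier λ, and since ∂ log Z/∂λ = ⟨x⟩ is strictly increasing in λ (its derivative is the variance of x), λ can be found by simple bisection.

```python
import math

xs = [1, 2, 3, 4, 5, 6]
F_target = 4.5  # constrained mean <x>

def mean_for(lam):
    """<x> under p_i proportional to exp(lam * x_i)."""
    weights = [math.exp(lam * x) for x in xs]
    Z = sum(weights)
    return sum(x * w for x, w in zip(xs, weights)) / Z

# Solve d(log Z)/d(lam) = <x> = 4.5 by bisection (monotone in lam).
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) < F_target:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)

weights = [math.exp(lam * x) for x in xs]
Z = sum(weights)
p = [w / Z for w in weights]
print(lam, p)  # probabilities increase geometrically, ratio e^lam
```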
=== Continuous case ===
For continuous distributions, the Shannon entropy cannot be used, as it is only defined for discrete probability spaces. Instead Edwin Jaynes (1963, 1968, 2003) gave the following formula, which is closely related to the relative entropy (see also differential entropy).
{\displaystyle H_{c}=-\int p(x)\log {\frac {p(x)}{q(x)}}\,dx}
where q(x), which Jaynes called the "invariant measure", is proportional to the limiting density of discrete points. For now, we shall assume that q is known; we will discuss it further after the solution equations are given.
A closely related quantity, the relative entropy, is usually defined as the Kullback–Leibler divergence of p from q (although it is sometimes, confusingly, defined as the negative of this). The inference principle of minimizing this, due to Kullback, is known as the Principle of Minimum Discrimination Information.
We have some testable information I about a quantity x which takes values in some interval of the real numbers (all integrals below are over this interval). We assume this information has the form of m constraints on the expectations of the functions fk, i.e. we require our probability density function to satisfy the inequality (or purely equality) moment constraints:
{\displaystyle \int p(x)f_{k}(x)\,dx\geq F_{k}\qquad k=1,\dotsc ,m.}
where the {\displaystyle F_{k}} are observables. We also require the probability density to integrate to one, which may be viewed as a primitive constraint on the identity function and an observable equal to 1, giving the constraint
{\displaystyle \int p(x)\,dx=1.}
The probability density function with maximum Hc subject to these constraints is:
{\displaystyle p(x)={\frac {1}{Z(\lambda _{1},\dotsc ,\lambda _{m})}}q(x)\exp \left[\lambda _{1}f_{1}(x)+\dotsb +\lambda _{m}f_{m}(x)\right]}
with the partition function determined by
{\displaystyle Z(\lambda _{1},\dotsc ,\lambda _{m})=\int q(x)\exp \left[\lambda _{1}f_{1}(x)+\dotsb +\lambda _{m}f_{m}(x)\right]\,dx.}
As in the discrete case, in the case where all moment constraints are equalities, the values of the {\displaystyle \lambda _{k}} parameters are determined by the system of nonlinear equations:
{\displaystyle F_{k}={\frac {\partial }{\partial \lambda _{k}}}\log Z(\lambda _{1},\dotsc ,\lambda _{m}).}
In the case with inequality moment constraints the Lagrange multipliers are determined from the solution of a convex optimization program.
The invariant measure function q(x) can be best understood by supposing that x is known to take values only in the bounded interval (a, b), and that no other information is given. Then the maximum entropy probability density function is
{\displaystyle p(x)=A\cdot q(x),\qquad a<x<b}
where A is a normalization constant. The invariant measure function is actually the prior density function encoding 'lack of relevant information'. It cannot be determined by the principle of maximum entropy, and must be determined by some other logical method, such as the principle of transformation groups or marginalization theory.
=== Examples ===
For several examples of maximum entropy distributions, see the article on maximum entropy probability distributions.
== Justifications for the principle of maximum entropy ==
Proponents of the principle of maximum entropy justify its use in assigning probabilities in several ways, including the following two arguments. These arguments take the use of Bayesian probability as given, and are thus subject to the same postulates.
=== Information entropy as a measure of 'uninformativeness' ===
Consider a discrete probability distribution among {\displaystyle m} mutually exclusive propositions. The most informative distribution would occur when one of the propositions was known to be true. In that case, the information entropy would be equal to zero. The least informative distribution would occur when there is no reason to favor any one of the propositions over the others. In that case, the only reasonable probability distribution would be uniform, and then the information entropy would be equal to its maximum possible value, {\displaystyle \log m}. The information entropy can therefore be seen as a numerical measure which describes how uninformative a particular probability distribution is, ranging from zero (completely informative) to {\displaystyle \log m} (completely uninformative).
By choosing to use the distribution with the maximum entropy allowed by our information, the argument goes, we are choosing the most uninformative distribution possible. To choose a distribution with lower entropy would be to assume information we do not possess. Thus the maximum entropy distribution is the only reasonable distribution. The dependence of the solution on the dominating measure represented by {\displaystyle m(x)} is however a source of criticism of the approach, since this dominating measure is in fact arbitrary.
=== The Wallis derivation ===
The following argument is the result of a suggestion made by Graham Wallis to E. T. Jaynes in 1962. It is essentially the same mathematical argument used for the Maxwell–Boltzmann statistics in statistical mechanics, although the conceptual emphasis is quite different. It has the advantage of being strictly combinatorial in nature, making no reference to information entropy as a measure of 'uncertainty', 'uninformativeness', or any other imprecisely defined concept. The information entropy function is not assumed a priori, but rather is found in the course of the argument; and the argument leads naturally to the procedure of maximizing the information entropy, rather than treating it in some other way.
Suppose an individual wishes to make a probability assignment among {\displaystyle m} mutually exclusive propositions. They have some testable information, but are not sure how to go about including this information in their probability assessment. They therefore conceive of the following random experiment. They will distribute {\displaystyle N} quanta of probability (each worth {\displaystyle 1/N}) at random among the {\displaystyle m} possibilities. (One might imagine that they will throw {\displaystyle N} balls into {\displaystyle m} buckets while blindfolded. In order to be as fair as possible, each throw is to be independent of any other, and every bucket is to be the same size.) Once the experiment is done, they will check if the probability assignment thus obtained is consistent with their information. (For this step to be successful, the information must be a constraint given by an open set in the space of probability measures.) If it is inconsistent, they will reject it and try again. If it is consistent, their assessment will be
{\displaystyle p_{i}={\frac {n_{i}}{N}}}
where {\displaystyle p_{i}} is the probability of the {\displaystyle i}th proposition, while {\displaystyle n_{i}} is the number of quanta that were assigned to the {\displaystyle i}th proposition (i.e. the number of balls that ended up in bucket {\displaystyle i}).
Now, in order to reduce the 'graininess' of the probability assignment, it will be necessary to use quite a large number of quanta of probability. Rather than actually carry out, and possibly have to repeat, the rather long random experiment, the protagonist decides to simply calculate and use the most probable result. The probability of any particular result is the multinomial distribution,
{\displaystyle Pr(\mathbf {p} )=W\cdot m^{-N}}
where
{\displaystyle W={\frac {N!}{n_{1}!\,n_{2}!\,\dotsb \,n_{m}!}}}
is sometimes known as the multiplicity of the outcome.
The most probable result is the one which maximizes the multiplicity {\displaystyle W}. Rather than maximizing {\displaystyle W} directly, the protagonist could equivalently maximize any monotonic increasing function of {\displaystyle W}. They decide to maximize
{\displaystyle {\begin{aligned}{\frac {1}{N}}\log W&={\frac {1}{N}}\log {\frac {N!}{n_{1}!\,n_{2}!\,\dotsb \,n_{m}!}}\\[6pt]&={\frac {1}{N}}\log {\frac {N!}{(Np_{1})!\,(Np_{2})!\,\dotsb \,(Np_{m})!}}\\[6pt]&={\frac {1}{N}}\left(\log N!-\sum _{i=1}^{m}\log((Np_{i})!)\right).\end{aligned}}}
At this point, in order to simplify the expression, the protagonist takes the limit as {\displaystyle N\to \infty }, i.e. as the probability levels go from grainy discrete values to smooth continuous values. Using Stirling's approximation, they find
{\displaystyle {\begin{aligned}\lim _{N\to \infty }\left({\frac {1}{N}}\log W\right)&={\frac {1}{N}}\left(N\log N-\sum _{i=1}^{m}Np_{i}\log(Np_{i})\right)\\[6pt]&=\log N-\sum _{i=1}^{m}p_{i}\log(Np_{i})\\[6pt]&=\log N-\log N\sum _{i=1}^{m}p_{i}-\sum _{i=1}^{m}p_{i}\log p_{i}\\[6pt]&=\left(1-\sum _{i=1}^{m}p_{i}\right)\log N-\sum _{i=1}^{m}p_{i}\log p_{i}\\[6pt]&=-\sum _{i=1}^{m}p_{i}\log p_{i}\\[6pt]&=H(\mathbf {p} ).\end{aligned}}}
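The convergence of {\displaystyle {\tfrac {1}{N}}\log W} to {\displaystyle H(\mathbf {p} )} can be checked numerically. The following is an illustrative sketch only (Python, using `math.lgamma` to evaluate log-factorials without overflow):

```python
import math

def log_multiplicity(counts):
    """log W = log( N! / (n_1! ... n_m!) ), evaluated via lgamma."""
    N = sum(counts)
    return math.lgamma(N + 1) - sum(math.lgamma(n + 1) for n in counts)

def shannon_entropy(p):
    """H(p) = -sum_i p_i log p_i (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.25, 0.125, 0.125]
for N in (100, 10_000, 1_000_000):
    counts = [round(N * pi) for pi in p]
    approx = log_multiplicity(counts) / N  # tends to H(p) as N grows
```

At {\displaystyle N=10^{6}} the normalized log-multiplicity already agrees with {\displaystyle H(\mathbf {p} )} to a few parts in {\displaystyle 10^{5}}, matching the Stirling argument above.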
All that remains for the protagonist to do is to maximize entropy under the constraints of their testable information. They have found that the maximum entropy distribution is the most probable of all "fair" random distributions, in the limit as the probability levels go from discrete to continuous.
=== Compatibility with Bayes' theorem ===
Giffin and Caticha (2007) state that Bayes' theorem and the principle of maximum entropy are completely compatible and can be seen as special cases of the "method of maximum relative entropy". They state that this method reproduces every aspect of orthodox Bayesian inference methods. In addition this new method opens the door to tackling problems that could not be addressed by either the maximal entropy principle or orthodox Bayesian methods individually. Moreover, recent contributions (Lazar 2003, and Schennach 2005) show that frequentist relative-entropy-based inference approaches (such as empirical likelihood and exponentially tilted empirical likelihood – see e.g. Owen 2001 and Kitamura 2006) can be combined with prior information to perform Bayesian posterior analysis.
Jaynes stated Bayes' theorem was a way to calculate a probability, while maximum entropy was a way to assign a prior probability distribution.
It is, however, possible in principle to solve for a posterior distribution directly from a stated prior distribution using the principle of minimum cross-entropy (the principle of maximum entropy being the special case of a uniform given prior), independently of any Bayesian considerations, by treating the problem formally as a constrained optimisation problem with the entropy functional as the objective function. For the case of given average values as testable information (averaged over the sought-after probability distribution), the sought-after distribution is formally the Gibbs (or Boltzmann) distribution, whose parameters must be solved for in order to achieve minimum cross-entropy and satisfy the given testable information.
== Relevance to physics ==
The principle of maximum entropy bears a relation to a key assumption of kinetic theory of gases known as molecular chaos or Stosszahlansatz. This asserts that the distribution function characterizing particles entering a collision can be factorized. Though this statement can be understood as a strictly physical hypothesis, it can also be interpreted as a heuristic hypothesis regarding the most probable configuration of particles before colliding.
== See also ==
== Notes ==
== References ==
Bajkova, A. T. (1992). "The generalization of maximum entropy method for reconstruction of complex functions". Astronomical and Astrophysical Transactions. 1 (4): 313–320. Bibcode:1992A&AT....1..313B. doi:10.1080/10556799208230532.
Fornalski, K.W.; Parzych, G.; Pylak, M.; Satuła, D.; Dobrzyński, L. (2010). "Application of Bayesian reasoning and the Maximum Entropy Method to some reconstruction problems" (PDF). Acta Physica Polonica A. 117 (6): 892–899. Bibcode:2010AcPPA.117..892F. doi:10.12693/APhysPolA.117.892.
Giffin, A. and Caticha, A., 2007, Updating Probabilities with Data and Moments
Guiasu, S.; Shenitzer, A. (1985). "The principle of maximum entropy". The Mathematical Intelligencer. 7 (1): 42–48. doi:10.1007/bf03023004. S2CID 53059968.
Harremoës, P.; Topsøe (2001). "Maximum entropy fundamentals". Entropy. 3 (3): 191–226. Bibcode:2001Entrp...3..191H. doi:10.3390/e3030191.
Jaynes, E. T. (1963). "Information Theory and Statistical Mechanics". In Ford, K. (ed.). Statistical Physics. New York: Benjamin. p. 181.
Jaynes, E. T., 1986 (new version online 1996), "Monkeys, kangaroos and N", in Maximum-Entropy and Bayesian Methods in Applied Statistics, J. H. Justice (ed.), Cambridge University Press, Cambridge, p. 26.
Kapur, J. N.; and Kesavan, H. K., 1992, Entropy Optimization Principles with Applications, Boston: Academic Press. ISBN 0-12-397670-7
Kitamura, Y., 2006, Empirical Likelihood Methods in Econometrics: Theory and Practice, Cowles Foundation Discussion Papers 1569, Cowles Foundation, Yale University.
Lazar, N (2003). "Bayesian empirical likelihood". Biometrika. 90 (2): 319–326. doi:10.1093/biomet/90.2.319.
Owen, A. B., 2001, Empirical Likelihood, Chapman and Hall/CRC. ISBN 1-58-488071-6.
Schennach, S. M. (2005). "Bayesian exponentially tilted empirical likelihood". Biometrika. 92 (1): 31–46. doi:10.1093/biomet/92.1.31.
Uffink, Jos (1995). "Can the Maximum Entropy Principle be explained as a consistency requirement?" (PDF). Studies in History and Philosophy of Modern Physics. 26B (3): 223–261. Bibcode:1995SHPMP..26..223U. CiteSeerX 10.1.1.27.6392. doi:10.1016/1355-2198(95)00015-1. hdl:1874/2649. Archived from the original (PDF) on 2006-06-03.
== Further reading ==
Boyd, Stephen; Lieven Vandenberghe (2004). Convex Optimization (PDF). Cambridge University Press. p. 362. ISBN 0-521-83378-7. Retrieved 2008-08-24.
Ratnaparkhi A. (1997) "A simple introduction to maximum entropy models for natural language processing" Technical Report 97-08, Institute for Research in Cognitive Science, University of Pennsylvania. An easy-to-read introduction to maximum entropy methods in the context of natural language processing.
Tang, A.; Jackson, D.; Hobbs, J.; Chen, W.; Smith, J. L.; Patel, H.; Prieto, A.; Petrusca, D.; Grivich, M. I.; Sher, A.; Hottowy, P.; Dabrowski, W.; Litke, A. M.; Beggs, J. M. (2008). "A Maximum Entropy Model Applied to Spatial and Temporal Correlations from Cortical Networks in Vitro". Journal of Neuroscience. 28 (2): 505–518. doi:10.1523/JNEUROSCI.3359-07.2008. PMC 6670549. PMID 18184793. Open access article containing pointers to various papers and software implementations of Maximum Entropy Model on the net. | Wikipedia/Principle_of_maximum_entropy |
The concept of entropy was first developed by German physicist Rudolf Clausius in the mid-nineteenth century as a thermodynamic property that predicts that certain spontaneous processes are irreversible or impossible. In statistical mechanics, entropy is formulated as a statistical property using probability theory. The statistical entropy perspective was introduced in the 1870s by Austrian physicist Ludwig Boltzmann, who established a new field of physics that provided the descriptive linkage between the macroscopic observation of nature and the microscopic view based on the rigorous treatment of large ensembles of microscopic states that constitute thermodynamic systems.
== Boltzmann's principle ==
Ludwig Boltzmann defined entropy as a measure of the number of possible microscopic states (microstates) of a system in thermodynamic equilibrium, consistent with its macroscopic thermodynamic properties, which constitute the macrostate of the system. A useful illustration is the example of a sample of gas contained in a container. The easily measurable parameters volume, pressure, and temperature of the gas describe its macroscopic condition (state). At a microscopic level, the gas consists of a vast number of freely moving atoms or molecules, which randomly collide with one another and with the walls of the container. The collisions with the walls produce the macroscopic pressure of the gas, which illustrates the connection between microscopic and macroscopic phenomena.
A microstate of the system is a description of the positions and momenta of all its particles. The large number of particles of the gas provides an infinite number of possible microstates for the sample, but collectively they exhibit a well-defined average of configuration, which is exhibited as the macrostate of the system, to which each individual microstate contribution is negligibly small. The ensemble of microstates comprises a statistical distribution of probability for each microstate, and the group of most probable configurations accounts for the macroscopic state. Therefore, the system can be described as a whole by only a few macroscopic parameters, called the thermodynamic variables: the total energy E, volume V, pressure P, temperature T, and so forth. However, this description is relatively simple only when the system is in a state of equilibrium.
Equilibrium may be illustrated with a simple example of a drop of food coloring falling into a glass of water. The dye diffuses in a complicated manner, which is difficult to precisely predict. However, after sufficient time has passed, the system reaches a uniform color, a state much easier to describe and explain.
Boltzmann formulated a simple relationship between entropy and the number of possible microstates of a system, which is denoted by the symbol Ω. The entropy S is proportional to the natural logarithm of this number:
{\displaystyle S=k_{\text{B}}\ln \Omega }
The proportionality constant kB is one of the fundamental constants of physics and is named the Boltzmann constant in honor of its discoverer.
Boltzmann's entropy describes the system when all the accessible microstates are equally likely. It is the configuration corresponding to the maximum of entropy at equilibrium. The randomness or disorder is maximal, and so is the lack of distinction (or information) of each microstate.
Entropy is a thermodynamic property just like pressure, volume, or temperature. Therefore, it connects the microscopic and the macroscopic world view.
Boltzmann's principle is regarded as the foundation of statistical mechanics.
== Gibbs entropy formula ==
The macroscopic state of a system is characterized by a distribution on the microstates. The entropy of this distribution is given by the Gibbs entropy formula, named after J. Willard Gibbs. For a classical system (i.e., a collection of classical particles) with a discrete set of microstates, if Ei is the energy of microstate i, and pi is the probability that it occurs during the system's fluctuations, then the entropy of the system is
{\displaystyle S=-k_{\text{B}}\,\sum _{i}p_{i}\ln(p_{i})}
The quantity {\displaystyle k_{\text{B}}} is the Boltzmann constant, a multiplier of the summation expression. The summation is dimensionless, since the value {\displaystyle p_{i}} is a probability and therefore dimensionless, and ln is the natural logarithm. Hence the SI unit on both sides of the equation is that of heat capacity:
{\displaystyle [S]=[k_{\text{B}}]=\mathrm {\frac {J}{K}} }
This definition remains meaningful even when the system is far away from equilibrium. Other definitions assume that the system is in thermal equilibrium, either as an isolated system, or as a system in exchange with its surroundings. The set of microstates (with probability distribution) over which the sum is found is called a statistical ensemble. Each type of statistical ensemble (micro-canonical, canonical, grand-canonical, etc.) describes a different configuration of the system's exchanges with the outside, varying from a completely isolated system to a system that can exchange one or more quantities with a reservoir, like energy, volume or molecules. In every ensemble, the equilibrium configuration of the system is dictated by the maximization of the entropy of the union of the system and its reservoir, according to the second law of thermodynamics (see the statistical mechanics article).
Neglecting correlations (or, more generally, statistical dependencies) between the states of individual particles will lead to an incorrect probability distribution on the microstates and hence to an overestimate of the entropy. Such correlations occur in any system with nontrivially interacting particles, that is, in all systems more complex than an ideal gas.
This S is almost universally called simply the entropy. It can also be called the statistical entropy or the thermodynamic entropy without changing the meaning. Note the above expression of the statistical entropy is a discretized version of Shannon entropy. The von Neumann entropy formula is an extension of the Gibbs entropy formula to the quantum-mechanical case.
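As a quick numeric illustration of the Gibbs formula (a minimal Python sketch; for a uniform distribution over W microstates it reduces to Boltzmann's S = k_B ln W, as stated above):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def gibbs_entropy(probs):
    """S = -k_B * sum_i p_i ln(p_i), in J/K; zero-probability terms drop out."""
    return -K_B * sum(p * math.log(p) for p in probs if p > 0)

# For W equally likely microstates this reduces to Boltzmann's S = k_B ln W.
W = 8
S_uniform = gibbs_entropy([1 / W] * W)
```

A single certain microstate (one probability equal to 1) gives zero entropy, consistent with the discussion of unique ground states below.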
It has been shown that the Gibbs entropy is equal to the classical "heat engine" entropy characterized by {\displaystyle dS={\delta Q}/{T}}, and the generalized Boltzmann distribution is a sufficient and necessary condition for this equivalence. Furthermore, the Gibbs entropy is the only entropy measure that is equivalent to the classical "heat engine" entropy under the following postulates:
=== Ensembles ===
The various ensembles used in statistical thermodynamics are linked to the entropy by the following relations:
{\displaystyle S=k_{\text{B}}\ln \Omega _{\text{mic}}=k_{\text{B}}(\ln Z_{\text{can}}+\beta {\bar {E}})=k_{\text{B}}(\ln {\mathcal {Z}}_{\text{gr}}+\beta ({\bar {E}}-\mu {\bar {N}}))}
where {\displaystyle \Omega _{\text{mic}}} is the microcanonical partition function, {\displaystyle Z_{\text{can}}} is the canonical partition function, and {\displaystyle {\mathcal {Z}}_{\text{gr}}} is the grand canonical partition function.
== Order through chaos and the second law of thermodynamics ==
We can think of Ω as a measure of our lack of knowledge about a system. To illustrate this idea, consider a set of 100 coins, each of which is either heads up or tails up. In this example, let us suppose that the macrostates are specified by the total number of heads and tails, while the microstates are specified by the facings of each individual coin (i.e., the exact order in which heads and tails occur). For the macrostates of 100 heads or 100 tails, there is exactly one possible configuration, so our knowledge of the system is complete. At the opposite extreme, the macrostate which gives us the least knowledge about the system consists of 50 heads and 50 tails in any order, for which there are 100891344545564193334812497256 (100 choose 50) ≈ 10^29 possible microstates.
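The multiplicity quoted for the 50-heads macrostate can be verified directly (a short Python sketch):

```python
import math

# Multiplicity of the 50-heads / 50-tails macrostate of 100 coins
omega = math.comb(100, 50)

# Dimensionless entropy S / k_B = ln(omega) of that macrostate
S_over_kB = math.log(omega)
```

By contrast, `math.comb(100, 100)` is 1, so the all-heads macrostate has S / k_B = ln(1) = 0: complete knowledge, zero entropy.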
Even when a system is entirely isolated from external influences, its microstate is constantly changing. For instance, the particles in a gas are constantly moving, and thus occupy a different position at each moment of time; their momenta are also constantly changing as they collide with each other or with the container walls. Suppose we prepare the system in an artificially highly ordered equilibrium state. For instance, imagine dividing a container with a partition and placing a gas on one side of the partition, with a vacuum on the other side. If we remove the partition and watch the subsequent behavior of the gas, we will find that its microstate evolves according to some chaotic and unpredictable pattern, and that on average these microstates will correspond to a more disordered macrostate than before. It is possible, but extremely unlikely, for the gas molecules to bounce off one another in such a way that they remain in one half of the container. It is overwhelmingly probable for the gas to spread out to fill the container evenly, which is the new equilibrium macrostate of the system.
This is an example illustrating the second law of thermodynamics:
the total entropy of any isolated thermodynamic system tends to increase over time, approaching a maximum value.
Since its discovery, this idea has been the focus of a great deal of thought, some of it confused. A chief point of confusion is the fact that the Second Law applies only to isolated systems. For example, the Earth is not an isolated system because it is constantly receiving energy in the form of sunlight. In contrast, the universe may be considered an isolated system, so that its total entropy is constantly increasing.
== Counting of microstates ==
In classical statistical mechanics, the number of microstates is actually uncountably infinite, since the properties of classical systems are continuous. For example, a microstate of a classical ideal gas is specified by the positions and momenta of all the atoms, which range continuously over the real numbers. If we want to define Ω, we have to come up with a method of grouping the microstates together to obtain a countable set. This procedure is known as coarse graining. In the case of the ideal gas, we count two states of an atom as the "same" state if their positions and momenta are within δx and δp of each other. Since the values of δx and δp can be chosen arbitrarily, the entropy is not uniquely defined. It is defined only up to an additive constant. (As we will see, the thermodynamic definition of entropy is also defined only up to a constant.)
To avoid coarse graining one can take the entropy as defined by the H-theorem.
{\displaystyle S=-k_{\text{B}}H_{\text{B}}:=-k_{\text{B}}\int f(q_{i},p_{i})\,\ln f(q_{i},p_{i})\,dq_{1}\,dp_{1}\cdots dq_{N}\,dp_{N}}
However, this ambiguity can be resolved with quantum mechanics. The quantum state of a system can be expressed as a superposition of "basis" states, which can be chosen to be energy eigenstates (i.e. eigenstates of the quantum Hamiltonian). Usually, the quantum states are discrete, even though there may be an infinite number of them. For a system with some specified energy E, one takes Ω to be the number of energy eigenstates within a macroscopically small energy range between E and E + δE. In the thermodynamic limit, the specific entropy becomes independent of the choice of δE.
An important result, known as Nernst's theorem or the third law of thermodynamics, states that the entropy of a system at zero absolute temperature is a well-defined constant. This is because a system at zero temperature exists in its lowest-energy state, or ground state, so that its entropy is determined by the degeneracy of the ground state. Many systems, such as crystal lattices, have a unique ground state, and (since ln(1) = 0) this means that they have zero entropy at absolute zero. Other systems have more than one state with the same, lowest energy, and have a non-vanishing "zero-point entropy". For instance, ordinary ice has a zero-point entropy of 3.41 J/(mol⋅K), because its underlying crystal structure possesses multiple configurations with the same energy (a phenomenon known as geometrical frustration).
The third law of thermodynamics states that the entropy of a perfect crystal at absolute zero (0 K) is zero. This means that nearly all molecular motion should cease. The oscillator equation for predicting quantized vibrational levels shows that even when the vibrational quantum number is 0, the molecule still has vibrational energy:
{\displaystyle E_{\nu }=h\nu _{0}\left(n+{\tfrac {1}{2}}\right)}
where {\displaystyle h} is the Planck constant, {\displaystyle \nu _{0}} is the characteristic frequency of the vibration, and {\displaystyle n} is the vibrational quantum number. Even when {\displaystyle n=0} (the zero-point energy), {\displaystyle E_{n}} does not equal 0, in adherence to the Heisenberg uncertainty principle.
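A short numeric illustration of the oscillator formula (Python; the frequency value is an assumed, illustrative number, not taken from the text):

```python
H_PLANCK = 6.62607015e-34  # Planck constant in J*s (exact SI value)

def vibrational_energy(n, nu0):
    """E_n = h * nu0 * (n + 1/2) for a quantum harmonic oscillator."""
    return H_PLANCK * nu0 * (n + 0.5)

# Illustrative vibrational frequency (assumed value, roughly an H-Cl stretch):
nu0 = 8.88e13  # Hz
E0 = vibrational_energy(0, nu0)  # zero-point energy: nonzero even at n = 0
```

The ground state retains half a quantum of energy, which is the point of the zero-point argument above.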
== See also ==
== References == | Wikipedia/Gibbs_entropy |
In statistical mechanics, the radial distribution function (or pair correlation function) {\displaystyle g(r)} in a system of particles (atoms, molecules, colloids, etc.) describes how density varies as a function of distance from a reference particle.
If a given particle is taken to be at the origin O, and if {\displaystyle \rho =N/V} is the average number density of particles, then the local time-averaged density at a distance {\displaystyle r} from O is {\displaystyle \rho g(r)}. This simplified definition holds for a homogeneous and isotropic system. A more general case will be considered below.
In simplest terms it is a measure of the probability of finding a particle at a distance of {\displaystyle r} away from a given reference particle, relative to that for an ideal gas. The general algorithm involves determining how many particles are within a distance of {\displaystyle r} and {\displaystyle r+dr} away from a particle. This general theme is depicted to the right, where the red particle is our reference particle, and the blue particles are those whose centers are within the circular shell, dotted in orange.
The radial distribution function is usually determined by calculating the distance between all particle pairs and binning them into a histogram. The histogram is then normalized with respect to an ideal gas, where particle histograms are completely uncorrelated. For three dimensions, this normalization is the number density of the system {\displaystyle (\rho )} multiplied by the volume of the spherical shell, which symbolically can be expressed as {\displaystyle \rho \,4\pi r^{2}dr}.
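The binning-and-normalization procedure just described can be sketched as follows (Python; the O(N²) pair loop and the minimum-image convention for a cubic periodic box are illustrative assumptions, not prescribed by the text):

```python
import math
import random

def radial_distribution(positions, box, dr, r_max):
    """Estimate g(r) for particles in a cubic periodic box of side `box`
    by binning pair distances and normalizing against an ideal gas
    (number density rho times shell volume 4*pi*r^2*dr)."""
    n = len(positions)
    rho = n / box**3
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a, b in zip(positions[i], positions[j]):
                diff = a - b
                diff -= box * round(diff / box)  # minimum-image convention
                d2 += diff * diff
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2  # each pair seen from both particles
    g = []
    for k, h in enumerate(hist):
        r = (k + 0.5) * dr
        shell = 4 * math.pi * r * r * dr  # ideal-gas shell volume
        g.append(h / (n * rho * shell))
    return g

# For uncorrelated (ideal-gas-like) positions, g(r) should hover around 1.
rng = random.Random(1)
pts = [(rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform(0, 10))
       for _ in range(400)]
g = radial_distribution(pts, box=10.0, dr=0.25, r_max=4.0)
```

For an interacting fluid the same procedure would instead show the familiar depletion at short range and oscillating shells of neighbors.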
Given a potential energy function, the radial distribution function can be computed either via computer simulation methods like the Monte Carlo method, or via the Ornstein–Zernike equation, using approximative closure relations like the Percus–Yevick approximation or the hypernetted-chain theory. It can also be determined experimentally, by radiation scattering techniques or by direct visualization for large enough (micrometer-sized) particles via traditional or confocal microscopy.
The radial distribution function is of fundamental importance since it can be used, using the Kirkwood–Buff solution theory, to link the microscopic details to macroscopic properties. Moreover, by the reversion of the Kirkwood–Buff theory, it is possible to attain the microscopic details of the radial distribution function from the macroscopic properties. The radial distribution function may also be inverted to predict the potential energy function using the Ornstein–Zernike equation or structure-optimized potential refinement.
== Definition ==
Consider a system of {\displaystyle N} particles in a volume {\displaystyle V} (for an average number density {\displaystyle \rho =N/V}) and at a temperature {\displaystyle T} (let us also define {\displaystyle \textstyle \beta ={\frac {1}{kT}}}; {\displaystyle k} is the Boltzmann constant). The particle coordinates are {\displaystyle \mathbf {r} _{i}}, with {\displaystyle \textstyle i=1,\,\ldots ,\,N}. The potential energy due to the interaction between particles is {\displaystyle \textstyle U_{N}(\mathbf {r} _{1}\,\ldots ,\,\mathbf {r} _{N})} and we do not consider the case of an externally applied field.
The appropriate averages are taken in the canonical ensemble {\displaystyle (N,V,T)}, with {\displaystyle \textstyle Z_{N}=\int \cdots \int \mathrm {e} ^{-\beta U_{N}}\mathrm {d} \mathbf {r} _{1}\cdots \mathrm {d} \mathbf {r} _{N}} the configurational integral, taken over all possible combinations of particle positions. The probability of an elementary configuration, namely finding particle 1 in {\displaystyle \textstyle \mathrm {d} \mathbf {r} _{1}}, particle 2 in {\displaystyle \textstyle \mathrm {d} \mathbf {r} _{2}}, etc. is given by
{\displaystyle P^{(N)}(\mathbf {r} _{1},\ldots ,\mathbf {r} _{N})\,\mathrm {d} \mathbf {r} _{1}\cdots \mathrm {d} \mathbf {r} _{N}={\frac {\mathrm {e} ^{-\beta U_{N}}}{Z_{N}}}\,\mathrm {d} \mathbf {r} _{1}\cdots \mathrm {d} \mathbf {r} _{N}.\qquad (1)}
The total number of particles is huge, so that {\displaystyle P^{(N)}} in itself is not very useful. However, one can also obtain the probability of a reduced configuration, where the positions of only {\displaystyle n<N} particles are fixed, in {\displaystyle \textstyle \mathbf {r} _{1}\,\ldots ,\,\mathbf {r} _{n}}, with no constraints on the remaining {\displaystyle N-n} particles. To this end, one has to integrate (1) over the remaining coordinates {\displaystyle \mathbf {r} _{n+1}\,\ldots ,\,\mathbf {r} _{N}}:
{\displaystyle P^{(n)}(\mathbf {r} _{1},\ldots ,\mathbf {r} _{n})={\frac {1}{Z_{N}}}\int \cdots \int \mathrm {e} ^{-\beta U_{N}}\,\mathrm {d} ^{3}\mathbf {r} _{n+1}\cdots \mathrm {d} ^{3}\mathbf {r} _{N}\,}
.
If the particles are non-interacting, in the sense that the potential energy of each particle does not depend on any of the other particles, {\textstyle U_{N}(\mathbf {r} _{1},\dots ,\mathbf {r} _{N})=\sum _{i=1}^{N}U_{1}(\mathbf {r} _{i})}, then the partition function factorizes, and the probability of an elementary configuration decomposes with independent arguments to a product of single particle probabilities,
{\displaystyle {\begin{aligned}Z_{N}&=\prod _{i=1}^{N}\int \mathrm {d} ^{3}\mathbf {r} _{i}e^{-\beta U_{1}}=Z_{1}^{N}\\P^{(n)}(\mathbf {r} _{1},\dots ,\mathbf {r} _{n})&=P^{(1)}(\mathbf {r} _{1})\cdots P^{(1)}(\mathbf {r} _{n})\end{aligned}}}
Note how for non-interacting particles the probability is symmetric in its arguments. This is not true in general, and the order in which the positions occupy the argument slots of {\displaystyle P^{(n)}} matters. Given a set of positions, the way that the {\displaystyle N} particles can occupy those positions is {\displaystyle N!}. The probability that those positions are occupied is found by summing over all configurations in which a particle is at each of those locations. This can be done by taking every permutation, {\displaystyle \pi }, in the symmetric group on {\displaystyle N} objects, {\displaystyle S_{N}}, to write {\textstyle \sum _{\pi \in S_{N}}P^{(N)}(\mathbf {r} _{\pi (1)},\ldots ,\mathbf {r} _{\pi (N)})}. For fewer positions, we integrate over extraneous arguments, and include a correction factor to prevent overcounting,
{\displaystyle {\begin{aligned}\rho ^{(n)}(\mathbf {r} _{1},\ldots ,\mathbf {r} _{n})&={\frac {1}{(N-n)!}}\left(\prod _{i=n+1}^{N}\int \mathrm {d} ^{3}\mathbf {r} _{i}\right)\sum _{\pi \in S_{N}}P^{(N)}(\mathbf {r} _{\pi (1)},\ldots ,\mathbf {r} _{\pi (N)})\end{aligned}}}
This quantity is called the n-particle density function. For indistinguishable particles, one could permute all the particle positions, {\displaystyle \forall i,\mathbf {r} _{i}\rightarrow \mathbf {r} _{\pi (i)}}, without changing the probability of an elementary configuration, {\displaystyle P(\mathbf {r} _{\pi (1)},\dots ,\mathbf {r} _{\pi (N)})=P(\mathbf {r} _{1},\dots ,\mathbf {r} _{N})}, so that the n-particle density function reduces to
{\displaystyle {\begin{aligned}\rho ^{(n)}(\mathbf {r} _{1},\ldots ,\mathbf {r} _{n})&={\frac {N!}{(N-n)!}}P^{(n)}(\mathbf {r} _{1},\ldots ,\mathbf {r} _{n})\end{aligned}}}
Integrating the n-particle density gives the permutation factor {\displaystyle _{N}P_{n}}, counting the number of ways one can sequentially pick particles to place at the {\displaystyle n} positions out of the total {\displaystyle N} particles. Now let's turn to how we interpret these functions for different values of {\displaystyle n}.
For {\displaystyle n=1}, we have the one-particle density. For a crystal it is a periodic function with sharp maxima at the lattice sites. For a non-interacting gas, it is independent of the position {\displaystyle \textstyle \mathbf {r} _{1}} and equal to the overall number density, {\displaystyle \rho }, of the system. To see this, first note that {\displaystyle U_{N}=0} in the volume occupied by the gas, and infinite everywhere else. The partition function in this case is
{\displaystyle Z_{N}=\prod _{i=1}^{N}\int \mathrm {d} ^{3}\mathbf {r} _{i}\ 1=V^{N}}
from which the definition gives the desired result
{\displaystyle \rho ^{(1)}(\mathbf {r} )={\frac {N!}{(N-1)!}}{\frac {1}{V^{N}}}\prod _{i=2}^{N}\int \mathrm {d} ^{3}\mathbf {r} _{i}1={\frac {N}{V}}=\rho .}
In fact, for this special case every n-particle density is independent of coordinates, and can be computed explicitly
{\displaystyle {\begin{aligned}\rho ^{(n)}(\mathbf {r} _{1},\dots ,\mathbf {r} _{n})&={\frac {N!}{(N-n)!}}{\frac {1}{V^{N}}}\prod _{i=n+1}^{N}\int \mathrm {d} ^{3}\mathbf {r} _{i}1\\&={\frac {N!}{(N-n)!}}{\frac {1}{V^{n}}}\end{aligned}}}
For {\displaystyle N\gg n}, the non-interacting n-particle density is approximately
{\displaystyle \rho _{\text{non-interacting}}^{(n)}(\mathbf {r} _{1},\dots ,\mathbf {r} _{n})=\left(1-n(n-1)/2N+\cdots \right)\rho ^{n}\approx \rho ^{n}}
. With this in hand, the n-point correlation function {\displaystyle g^{(n)}} is defined by factoring out the non-interacting contribution,
{\displaystyle \rho ^{(n)}(\mathbf {r} _{1},\ldots ,\,\mathbf {r} _{n})=\rho _{\text{non-interacting}}^{(n)}g^{(n)}(\mathbf {r} _{1}\,\ldots ,\,\mathbf {r} _{n})}
Explicitly, this definition reads
{\displaystyle {\begin{aligned}g^{(n)}(\mathbf {r} _{1},\ldots ,\,\mathbf {r} _{n})&={\frac {V^{N}}{N!}}\left(\prod _{i=n+1}^{N}{\frac {1}{V}}\!\!\int \!\!\mathrm {d} ^{3}\mathbf {r} _{i}\right){\frac {1}{Z_{N}}}\sum _{\pi \in S_{N}}e^{-\beta U(\mathbf {r} _{\pi (1)},\ldots ,\,\mathbf {r} _{\pi (N)})}\end{aligned}}}
where it is clear that the n-point correlation function is dimensionless.
== Relations involving g(r) ==
=== Structure factor ===
The second-order correlation function {\displaystyle g^{(2)}(\mathbf {r} _{1},\mathbf {r} _{2})} is of special importance, as it is directly related (via a Fourier transform) to the structure factor of the system and can thus be determined experimentally using X-ray diffraction or neutron diffraction.
If the system consists of spherically symmetric particles, {\displaystyle g^{(2)}(\mathbf {r} _{1},\mathbf {r} _{2})} depends only on the relative distance between them, {\displaystyle \mathbf {r} _{12}=\mathbf {r} _{2}-\mathbf {r} _{1}}. We will drop the sub- and superscript:
{\displaystyle \textstyle g(\mathbf {r} )\equiv g^{(2)}(\mathbf {r} _{12})}. Taking particle 0 as fixed at the origin of the coordinates,
{\displaystyle \textstyle \rho g(\mathbf {r} )\,d^{3}r=\mathrm {d} n(\mathbf {r} )} is the average number of particles (among the remaining {\displaystyle N-1}) to be found in the volume {\displaystyle \textstyle d^{3}r} around the position {\displaystyle \textstyle \mathbf {r} }.
We can formally count these particles and take the average via the expression
{\displaystyle \textstyle {\frac {\mathrm {d} n(\mathbf {r} )}{d^{3}r}}=\langle \sum _{i\neq 0}\delta (\mathbf {r} -\mathbf {r} _{i})\rangle }, with {\displaystyle \textstyle \langle \cdot \rangle } the ensemble average, yielding:
where the second equality requires the equivalence of particles {\displaystyle \textstyle 1,\,\ldots ,\,N-1}. The formula above is useful for relating {\displaystyle g(\mathbf {r} )} to the static structure factor {\displaystyle S(\mathbf {q} )}, defined by
{\displaystyle \textstyle S(\mathbf {q} )=\langle \sum _{ij}\mathrm {e} ^{-i\mathbf {q} (\mathbf {r} _{i}-\mathbf {r} _{j})}\rangle /N}, since we have:
{\displaystyle {\begin{aligned}S(\mathbf {q} )&=1+{\frac {1}{N}}\langle \sum _{i\neq j}\mathrm {e} ^{-i\mathbf {q} (\mathbf {r} _{i}-\mathbf {r} _{j})}\rangle =1+{\frac {1}{N}}\left\langle \int _{V}\mathrm {d} \mathbf {r} \,\mathrm {e} ^{-i\mathbf {q} \mathbf {r} }\sum _{i\neq j}\delta \left[\mathbf {r} -(\mathbf {r} _{i}-\mathbf {r} _{j})\right]\right\rangle \\&=1+{\frac {N(N-1)}{N}}\int _{V}\mathrm {d} \mathbf {r} \,\mathrm {e} ^{-i\mathbf {q} \mathbf {r} }\left\langle \delta (\mathbf {r} -\mathbf {r} _{1})\right\rangle ,\end{aligned}}}
and thus:
{\displaystyle S(\mathbf {q} )=1+\rho \int _{V}\mathrm {d} \mathbf {r} \,\mathrm {e} ^{-i\mathbf {q} \mathbf {r} }g(\mathbf {r} )}, proving the Fourier relation alluded to above.
This equation is only valid in the sense of distributions, since {\displaystyle g(\mathbf {r} )} is not normalized: {\displaystyle \textstyle \lim _{r\rightarrow \infty }g(\mathbf {r} )=1}, so that {\displaystyle \textstyle \int _{V}\mathrm {d} \mathbf {r} \,g(\mathbf {r} )} diverges as the volume {\displaystyle V}, leading to a Dirac peak at the origin for the structure factor. Since this contribution is inaccessible experimentally we can subtract it from the equation above and redefine the structure factor as a regular function:
{\displaystyle S'(\mathbf {q} )=S(\mathbf {q} )-\rho \delta (\mathbf {q} )=1+\rho \int _{V}\mathrm {d} \mathbf {r} \,\mathrm {e} ^{-i\mathbf {q} \mathbf {r} }[g(\mathbf {r} )-1].}
Finally, we rename {\displaystyle S(\mathbf {q} )\equiv S'(\mathbf {q} )} and, if the system is a liquid, we can invoke its isotropy:
=== Compressibility equation ===
Evaluating (6) at {\displaystyle q=0} and using the relation between the isothermal compressibility {\displaystyle \textstyle \chi _{T}} and the structure factor at the origin yields the compressibility equation:
=== Potential of mean force ===
It can be shown that the radial distribution function is related to the two-particle potential of mean force {\displaystyle w^{(2)}(r)} by:
In the dilute limit, the potential of mean force is the exact pair potential under which the equilibrium point configuration has a given {\displaystyle g(r)}.
=== Energy equation ===
If the particles interact via identical pairwise potentials: {\displaystyle \textstyle U_{N}=\sum _{i>j=1}^{N}u(\left|\mathbf {r} _{i}-\mathbf {r} _{j}\right|)}, the average internal energy per particle is:
=== Pressure equation of state ===
Developing the virial equation yields the pressure equation of state:
=== Thermodynamic properties in 3D ===
The radial distribution function is an important measure because several key thermodynamic properties, such as potential energy and pressure can be calculated from it.
For a 3-D system where particles interact via pairwise potentials, the potential energy of the system can be calculated as follows:
{\displaystyle PE={\frac {N}{2}}4\pi \rho \int _{0}^{\infty }r^{2}u(r)g(r)dr,}
where N is the number of particles in the system, {\displaystyle \rho } is the number density, and {\displaystyle u(r)} is the pair potential.
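As a sketch, the potential-energy integral above can be evaluated numerically for a model pair potential and radial distribution function (all inputs and names are illustrative):

```python
import numpy as np

def potential_energy(r, u, g, N, rho):
    """PE = (N/2) * 4*pi*rho * int_0^R r^2 u(r) g(r) dr,
    evaluated on a uniform r grid by a simple Riemann sum."""
    dr = r[1] - r[0]
    return 0.5 * N * 4.0 * np.pi * rho * np.sum(r**2 * u * g) * dr
```

In practice u(r) and g(r) would come from a model potential and a measured or simulated radial distribution function, tabulated on the same grid.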
The pressure of the system can also be calculated by relating the second virial coefficient to {\displaystyle g(r)}:
{\displaystyle P=\rho kT-{\frac {2}{3}}\pi \rho ^{2}\int _{0}^{\infty }dr{\frac {du(r)}{dr}}r^{3}g(r)}.
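A corresponding numerical sketch of the pressure equation, with the derivative of the pair potential taken by finite differences (inputs illustrative):

```python
import numpy as np

def pressure(r, u, g, rho, kT):
    """P = rho*kT - (2/3)*pi*rho^2 * int du/dr r^3 g(r) dr
    on a uniform r grid, with du/dr from finite differences."""
    dr = r[1] - r[0]
    dudr = np.gradient(u, r)
    return rho * kT - (2.0 / 3.0) * np.pi * rho**2 * np.sum(dudr * r**3 * g) * dr
```

For a constant (flat) potential the integral term vanishes and the ideal-gas law P = ρkT is recovered, which is a useful sanity check.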
Note that the results for the potential energy and pressure will not be as accurate as computing these properties directly, because of the averaging involved in the calculation of {\displaystyle g(r)}.
== Approximations ==
For dilute systems (e.g. gases), the correlations in the positions of the particles that {\displaystyle g(r)} accounts for are due only to the potential {\displaystyle u(r)} engendered by the reference particle, neglecting indirect effects. In the first approximation, it is thus simply given by the Boltzmann distribution law:
If {\displaystyle u(r)} were zero for all {\displaystyle r} – i.e., if the particles did not exert any influence on each other – then {\displaystyle g(r)=1} for all {\displaystyle \mathbf {r} } and the mean local density would be equal to the mean density {\displaystyle \rho }: the presence of a particle at O would not influence the particle distribution around it and the gas would be ideal. For distances {\displaystyle r} such that {\displaystyle u(r)} is significant, the mean local density will differ from the mean density {\displaystyle \rho }, depending on the sign of {\displaystyle u(r)} (higher for negative interaction energy and lower for positive {\displaystyle u(r)}).
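The low-density (Boltzmann) approximation referenced above is commonly written g(r) ≈ exp(−u(r)/kT). A sketch in reduced units with a Lennard-Jones pair potential (the choice of potential and all parameter values are illustrative):

```python
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential in reduced units."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

def g_dilute(r, kT=1.0):
    """Low-density limit of the radial distribution function:
    g(r) ~ exp(-u(r)/kT)."""
    return np.exp(-lj(r) / kT)

r = np.array([0.9, 2.0 ** (1 / 6), 3.0])
g = g_dilute(r)  # depleted core, enhanced first shell, g -> 1 far away
```

This reproduces the qualitative behaviour described above: g < 1 where the potential is repulsive, g > 1 near the attractive minimum, and g → 1 at large separation.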
As the density of the gas increases, the low-density limit becomes less and less accurate, since a particle situated at {\displaystyle \mathbf {r} } experiences not only the interaction with the particle at O but also with the other neighbours, themselves influenced by the reference particle. This mediated interaction increases with the density, since there are more neighbours to interact with: it makes physical sense to write a density expansion of {\displaystyle g(r)}, which resembles the virial equation:
This similarity is not accidental; indeed, substituting (12) in the relations above for the thermodynamic parameters (Equations 7, 9 and 10) yields the corresponding virial expansions. The auxiliary function {\displaystyle y(r)} is known as the cavity distribution function. It has been shown that for classical fluids at a fixed density and a fixed positive temperature, the effective pair potential that generates a given {\displaystyle g(r)} under equilibrium is unique up to an additive constant, if it exists.
In recent years, some attention has been given to develop pair correlation functions for spatially-discrete data such as lattices or networks.
== Experimental ==
One can determine {\displaystyle g(r)} indirectly (via its relation with the structure factor {\displaystyle S(q)}) using neutron scattering or X-ray scattering data. The technique can be used at very short length scales (down to the atomic level) but involves significant space and time averaging (over the sample size and the acquisition time, respectively). In this way, the radial distribution function has been determined for a wide variety of systems, ranging from liquid metals to charged colloids. Going from the experimental {\displaystyle S(q)} to {\displaystyle g(r)} is not straightforward, and the analysis can be quite involved.
It is also possible to calculate {\displaystyle g(r)} directly by extracting particle positions from traditional or confocal microscopy. This technique is limited to particles large enough for optical detection (in the micrometer range), but it has the advantage of being time-resolved, so that, aside from static information, it also gives access to dynamical parameters (e.g. diffusion constants). It is also space-resolved (to the level of the individual particle), allowing it to reveal the morphology and dynamics of local structures in colloidal crystals, glasses, gels, and hydrodynamic interactions.
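Given particle coordinates from such microscopy (or from a simulation), g(r) can be estimated by histogramming pair distances and normalizing by the ideal-gas expectation for each spherical shell. A sketch for a cubic periodic box (function and parameter names are illustrative):

```python
import numpy as np

def radial_distribution(pos, box, nbins=50, rmax=None):
    """Estimate g(r) from coordinates in a cubic periodic box by binning
    all pair distances and dividing by the ideal-gas pair count."""
    N = len(pos)
    rho = N / box**3
    if rmax is None:
        rmax = box / 2.0          # minimum-image convention limits r to L/2
    dists = []
    for i in range(N - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)          # minimum-image displacement
        dists.append(np.sqrt((d**2).sum(axis=1)))
    dists = np.concatenate(dists)
    hist, edges = np.histogram(dists, bins=nbins, range=(0.0, rmax))
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell * N / 2.0             # expected pair counts, ideal gas
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid, hist / ideal
```

For uniformly random (ideal-gas-like) positions the estimate fluctuates around g(r) = 1; structured samples show the familiar peaks at shell distances.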
Direct visualization of a full (distance-dependent and angle-dependent) pair correlation function was achieved by scanning tunneling microscopy in the case of 2D molecular gases.
== Higher-order correlation functions ==
It has been noted that radial distribution functions alone are insufficient to characterize structural information. Distinct point processes may possess identical or practically indistinguishable radial distribution functions, known as the degeneracy problem. In such cases, higher order correlation functions are needed to further describe the structure.
Higher-order distribution functions {\displaystyle \textstyle g^{(k)}} with {\displaystyle \textstyle k>2} have been studied less, since they are generally less important for the thermodynamics of the system; at the same time, they are not accessible by conventional scattering techniques. They can, however, be measured by coherent X-ray scattering and are interesting insofar as they can reveal local symmetries in disordered systems.
== See also ==
Ornstein–Zernike equation
Structure factor
== References ==
Widom, B. (2002). Statistical Mechanics: A Concise Introduction for Chemists. Cambridge University Press.
McQuarrie, D. A. (1976). Statistical Mechanics. Harper Collins Publishers.
In the context of chemistry, molecular physics, physical chemistry, and molecular modelling, a force field is a computational model that is used to describe the forces between atoms (or collections of atoms) within molecules or between molecules as well as in crystals. Force fields are a variety of interatomic potentials. More precisely, the force field refers to the functional form and parameter sets used to calculate the potential energy of a system on the atomistic level. Force fields are usually used in molecular dynamics or Monte Carlo simulations. The parameters for a chosen energy function may be derived from classical laboratory experiment data, calculations in quantum mechanics, or both. Force fields utilize the same concept as force fields in classical physics, with the main difference being that the force field parameters in chemistry describe the energy landscape on the atomistic level. From a force field, the acting forces on every particle are derived as a gradient of the potential energy with respect to the particle coordinates.
A large number of different force field types exist today (e.g. for organic molecules, ions, polymers, minerals, and metals). Depending on the material, different functional forms are usually chosen for the force fields since different types of atomistic interactions dominate the material behavior.
There are various criteria that can be used for categorizing force field parametrization strategies. An important differentiation is 'component-specific' and 'transferable'. For a component-specific parametrization, the considered force field is developed solely for describing a single given substance (e.g. water). For a transferable force field, all or some parameters are designed as building blocks and become transferable/ applicable for different substances (e.g. methyl groups in alkane transferable force fields). A different important differentiation addresses the physical structure of the models: All-atom force fields provide parameters for every type of atom in a system, including hydrogen, while united-atom interatomic potentials treat the hydrogen and carbon atoms in methyl groups and methylene bridges as one interaction center. Coarse-grained potentials, which are often used in long-time simulations of macromolecules such as proteins, nucleic acids, and multi-component complexes, sacrifice chemical details for higher computing efficiency.
== Force fields for molecular systems ==
The basic functional form of potential energy for modeling molecular systems includes intramolecular interaction terms for interactions of atoms that are linked by covalent bonds, and intermolecular (i.e. nonbonded also termed noncovalent) terms that describe the long-range electrostatic and van der Waals forces. The specific decomposition of the terms depends on the force field, but a general form for the total energy in an additive force field can be written as
{\displaystyle E_{\text{total}}=E_{\text{bonded}}+E_{\text{nonbonded}}}
where the components of the covalent and noncovalent contributions are given by the following summations:
{\displaystyle E_{\text{bonded}}=E_{\text{bond}}+E_{\text{angle}}+E_{\text{dihedral}}}
{\displaystyle E_{\text{nonbonded}}=E_{\text{electrostatic}}+E_{\text{van der Waals}}}
The bond and angle terms are usually modeled by quadratic energy functions that do not allow bond breaking. A more realistic description of a covalent bond at higher stretching is provided by the more expensive Morse potential. The functional form for dihedral energy is variable from one force field to another. Additionally, "improper torsional" terms may be added to enforce the planarity of aromatic rings and other conjugated systems, and "cross-terms" may be included to describe the coupling of different internal variables, such as angles and bond lengths. Some force fields also include explicit terms for hydrogen bonds.
The nonbonded terms are computationally most intensive. A popular choice is to limit interactions to pairwise energies. The van der Waals term is usually computed with a Lennard-Jones potential or the Mie potential and the electrostatic term with Coulomb's law. However, both can be buffered or scaled by a constant factor to account for electronic polarizability. A large number of force fields based on this or similar energy expressions have been proposed in the past decades for modeling different types of materials such as molecular substances, metals, glasses etc. - see below for a comprehensive list of force fields.
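The additive decomposition above maps directly onto code. A minimal, hypothetical sketch with harmonic bonds plus pairwise Lennard-Jones and Coulomb terms; real force fields use per-atom-type parameter tables, whereas the single global parameters here (k, r0, eps, sigma, kc) are illustrative only:

```python
import numpy as np

def total_energy(pos, bonds, charges, k, r0, eps, sigma, kc=1.0):
    """E_total = E_bonded + E_nonbonded for a toy additive force field.
    bonds: list of (i, j) index pairs; bonded pairs are excluded from
    the nonbonded sum, mimicking the usual 1-2 exclusion."""
    e_bonded = 0.0
    bonded = set()
    for i, j in bonds:
        r = np.linalg.norm(pos[i] - pos[j])
        e_bonded += 0.5 * k * (r - r0) ** 2          # harmonic bond term
        bonded.add((min(i, j), max(i, j)))
    e_nonbonded = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) in bonded:
                continue
            r = np.linalg.norm(pos[i] - pos[j])
            sr6 = (sigma / r) ** 6
            # Lennard-Jones (van der Waals) plus Coulomb contributions
            e_nonbonded += 4.0 * eps * (sr6**2 - sr6) + kc * charges[i] * charges[j] / r
    return e_bonded + e_nonbonded
```

A bonded pair sitting exactly at its equilibrium length contributes zero bonded energy, and with zero charges and no nonbonded pairs the total vanishes.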
=== Bond stretching ===
As it is rare for bonds to deviate significantly from their equilibrium values, the most simplistic approaches utilize a Hooke's law formula:
{\displaystyle E_{\text{bond}}={\frac {k_{ij}}{2}}(l_{ij}-l_{0,ij})^{2},}
where {\displaystyle k_{ij}} is the force constant, {\displaystyle l_{ij}} is the bond length, and {\displaystyle l_{0,ij}} is the value for the bond length between atoms {\displaystyle i} and {\displaystyle j} when all other terms in the force field are set to 0. The term {\displaystyle l_{0,ij}} is at times defined differently or taken at different thermodynamic conditions.
The bond stretching constant {\displaystyle k_{ij}} can be determined from the experimental infrared spectrum, Raman spectrum, or high-level quantum-mechanical calculations. The constant {\displaystyle k_{ij}} determines vibrational frequencies in molecular dynamics simulations. The stronger the bond is between atoms, the higher is the value of the force constant and the higher the wavenumber (energy) in the IR/Raman spectrum.
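The link between force constant and wavenumber can be checked with the harmonic-oscillator relation ν̃ = (1/2πc)·sqrt(k/μ). A sketch, where the force constant of ~500 N/m for a C–H stretch is an illustrative round number, not a parameter from any particular force field:

```python
import math

def wavenumber_cm1(k_si, mu_kg):
    """Harmonic-oscillator wavenumber in cm^-1: sqrt(k/mu) / (2*pi*c)."""
    c_cm = 2.99792458e10                      # speed of light in cm/s
    return math.sqrt(k_si / mu_kg) / (2.0 * math.pi * c_cm)

amu = 1.66053906660e-27                       # atomic mass unit in kg
mu_CH = 12.0 * 1.008 / (12.0 + 1.008) * amu   # reduced mass of a C-H pair
nu = wavenumber_cm1(500.0, mu_CH)             # ~3000 cm^-1, typical C-H stretch range
```

Doubling the force constant raises the wavenumber by sqrt(2), illustrating the stronger-bond/higher-wavenumber trend stated above.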
Though the formula of Hooke's law provides a reasonable level of accuracy at bond lengths near the equilibrium distance, it is less accurate as one moves away. In order to model the Morse curve better, one could employ cubic and higher powers. However, for most practical applications these differences are negligible, and inaccuracies in predictions of bond lengths are on the order of the thousandth of an angstrom, which is also the limit of reliability for common force fields. A Morse potential can be employed instead to enable bond breaking and higher accuracy, even though it is less efficient to compute. For reactive force fields, bond breaking and bond orders are additionally considered.
=== Electrostatic interactions ===
Electrostatic interactions are represented by a Coulomb energy, which utilizes atomic charges {\displaystyle q_{i}} to represent chemical bonding ranging from covalent to polar covalent and ionic bonding. The typical formula is the Coulomb law:
{\displaystyle E_{\text{Coulomb}}={\frac {1}{4\pi \varepsilon _{0}}}{\frac {q_{i}q_{j}}{r_{ij}}},}
where {\displaystyle r_{ij}} is the distance between two atoms {\displaystyle i} and {\displaystyle j}. The total Coulomb energy is a sum over all pairwise combinations of atoms and usually excludes 1,2-bonded atoms, 1,3-bonded atoms, as well as 1,4-bonded atoms.
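A direct pairwise sum with such an exclusion list can be sketched as follows (SI units; the explicit exclusion set stands in for the 1-2/1-3/1-4 bonded-neighbour bookkeeping a real force field engine performs):

```python
import numpy as np

def coulomb_energy(pos, q, excluded=frozenset()):
    """Sum of q_i q_j / (4*pi*eps0*r_ij) over all pairs not in `excluded`.
    `excluded` holds (i, j) index pairs with i < j, e.g. bonded neighbours."""
    ke = 8.9875517923e9          # Coulomb constant 1/(4*pi*eps0), N m^2 C^-2
    E = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) in excluded:
                continue
            E += ke * q[i] * q[j] / np.linalg.norm(pos[i] - pos[j])
    return E
```

Excluding a pair simply removes its contribution; in production codes the same effect is achieved with per-molecule exclusion tables and scaled 1-4 terms.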
Atomic charges can make dominant contributions to the potential energy, especially for polar molecules and ionic compounds, and are critical to simulate the geometry, interaction energy, and the reactivity. The assignment of charges usually uses some heuristic approach, with different possible solutions.
== Force fields for crystal systems ==
Atomistic interactions in crystal systems significantly deviate from those in molecular systems, e.g. of organic molecules. In crystal systems, multi-body interactions in particular are important and cannot be neglected if a high accuracy of the force field is the aim. For crystal systems with covalent bonding, bond order potentials are usually used, e.g. Tersoff potentials. For metal systems, usually embedded atom potentials are used. Additionally, Drude model potentials have been developed, which describe a form of attachment of electrons to nuclei.
== Parameterization ==
In addition to the functional form of the potentials, a force field consists of the parameters of these functions. Together, they specify the interactions on the atomistic level. The parametrization, i.e. the determination of the parameter values, is crucial for the accuracy and reliability of the force field. Different parametrization procedures have been developed for different substances, e.g. metals, ions, and molecules. For different material types, usually different parametrization strategies are used. In general, two main types can be distinguished: either using data/information from the atomistic level, e.g. from quantum mechanical calculations or spectroscopic data, or using data on macroscopic properties, e.g. the hardness or compressibility of a given material. Often a combination of these routes is used. Hence, one way or the other, the force field parameters are always determined in an empirical way. Nevertheless, the term 'empirical' is often used in the context of force field parameters when macroscopic material property data was used for the fitting. Experimental data (microscopic and macroscopic) included in the fit comprise, for example, the enthalpy of vaporization, enthalpy of sublimation, dipole moments, and various spectroscopic properties such as vibrational frequencies. Often, for molecular systems, quantum mechanical calculations in the gas phase are used for parametrizing intramolecular interactions, while intermolecular dispersive interactions are parametrized using macroscopic properties such as liquid densities. The assignment of atomic charges often follows quantum mechanical protocols with some heuristics, which can lead to significant deviation in representing specific properties.
A large number of workflows and parametrization procedures have been employed in the past decades using different data and optimization strategies for determining the force field parameters. They differ significantly, which is also due to different focuses of different developments. The parameters for molecular simulations of biological macromolecules such as proteins, DNA, and RNA were often derived/transferred from observations for small organic molecules, which are more accessible for experimental studies and quantum calculations.
Atom types are defined for different elements as well as for the same elements in sufficiently different chemical environments. For example, an oxygen atom in water and an oxygen atom in a carbonyl functional group are classified as different force field types. Typical molecular force field parameter sets include values for atomic mass, atomic charge, and Lennard-Jones parameters for every atom type, as well as equilibrium values of bond lengths, bond angles, and dihedral angles. The bonded terms refer to pairs, triplets, and quadruplets of bonded atoms, and include values for the effective spring constant for each potential.
Heuristic force field parametrization procedures have been very successful for many years, but have recently been criticized, since they are usually not fully automated and are therefore subject to some subjectivity of the developers, which also brings problems regarding the reproducibility of the parametrization procedure.
Efforts to provide open source codes and methods include openMM and openMD. The use of semi- or full automation, without input from chemical knowledge, is likely to increase inconsistencies at the level of atomic charges and in the assignment of the remaining parameters, and is likely to dilute the interpretability and performance of parameters.
== Force field databases ==
A large number of force fields has been published in the past decades - mostly in scientific publications. In recent years, some databases have attempted to collect, categorize and make force fields digitally available. Therein, different databases focus on different types of force fields. For example, the OpenKIM database focuses on interatomic functions describing the individual interactions between specific elements. The TraPPE database focuses on transferable force fields of organic molecules (developed by the Siepmann group). The MolMod database focuses on molecular and ionic force fields (both component-specific and transferable).
== Transferability and mixing function types ==
Functional forms and parameter sets have been defined by the developers of interatomic potentials and feature variable degrees of self-consistency and transferability. When functional forms of the potential terms vary or are mixed, the parameters from one interatomic potential function can typically not be used together with another interatomic potential function. In some cases, modifications can be made with minor effort, for example, between 9-6 Lennard-Jones potentials to 12-6 Lennard-Jones potentials. Transfers from Buckingham potentials to harmonic potentials, or from Embedded Atom Models to harmonic potentials, on the contrary, would require many additional assumptions and may not be possible.
In many cases, force fields can be straightforwardly combined. Yet, often, additional specifications and assumptions are required.
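The 9-6 to 12-6 conversion mentioned above is simple when both potentials are written in terms of the well depth ε and the minimum position r0, since matching those two quantities is the whole transfer. A sketch of both forms (a generic parameterization, not the convention of any specific force field):

```python
def lj_12_6(r, eps, r0):
    """12-6 Lennard-Jones in (eps, r_min) form: minimum of -eps at r = r0."""
    x = r0 / r
    return eps * (x**12 - 2.0 * x**6)

def lj_9_6(r, eps, r0):
    """9-6 Lennard-Jones in (eps, r_min) form: minimum of -eps at r = r0."""
    x = r0 / r
    return eps * (2.0 * x**9 - 3.0 * x**6)
```

Both curves share the same depth and minimum position; they differ in the steepness of the repulsive wall, which is why the transfer is approximate rather than exact.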
== Limitations ==
All interatomic potentials are based on approximations and experimental data, therefore often termed empirical. The performance varies from higher accuracy than density functional theory (DFT) calculations, with access to million times larger systems and time scales, to random guesses depending on the force field. The use of accurate representations of chemical bonding, combined with reproducible experimental data and validation, can lead to lasting interatomic potentials of high quality with much fewer parameters and assumptions in comparison to DFT-level quantum methods.
Possible limitations include atomic charges, also called point charges. Most force fields rely on point charges to reproduce the electrostatic potential around molecules, which works less well for anisotropic charge distributions. The remedy is that point charges have a clear interpretation and virtual electrons can be added to capture essential features of the electronic structure, such additional polarizability in metallic systems to describe the image potential, internal multipole moments in π-conjugated systems, and lone pairs in water. Electronic polarization of the environment may be better included by using polarizable force fields or using a macroscopic dielectric constant. However, application of one value of dielectric constant is a coarse approximation in the highly heterogeneous environments of proteins, biological membranes, minerals, or electrolytes.
All types of van der Waals forces are also strongly environment-dependent because these forces originate from interactions of induced and "instantaneous" dipoles (see Intermolecular force). The original Fritz London theory of these forces applies only in a vacuum. A more general theory of van der Waals forces in condensed media was developed by A. D. McLachlan in 1963 and included the original London approach as a special case. The McLachlan theory predicts that van der Waals attractions in media are weaker than in vacuum and follow the like dissolves like rule, which means that different types of atoms interact more weakly than identical types of atoms. This is in contrast to the combinatorial rules or the Slater-Kirkwood equation applied in the development of the classical force fields. The combinatorial rules state that the interaction energy of two dissimilar atoms (e.g., C...N) is an average of the interaction energies of corresponding identical atom pairs (i.e., C...C and N...N). According to McLachlan's theory, the interactions of particles in media can even be fully repulsive, as observed for liquid helium; however, the lack of vaporization and the presence of a freezing point contradict a theory of purely repulsive interactions. Measurements of attractive forces between different materials (Hamaker constant) have been explained by Jacob Israelachvili. For example, "the interaction between hydrocarbons across water is about 10% of that across vacuum". Such effects are represented in molecular dynamics through pairwise interactions that are spatially more dense in the condensed phase relative to the gas phase and are reproduced once the parameters for all phases are validated to reproduce chemical bonding, density, and cohesive/surface energy.
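For concreteness, the combinatorial rules discussed above are commonly realized as a geometric mean for the well depth and an arithmetic mean for the size parameter (the Lorentz-Berthelot form; a sketch, not tied to any specific force field):

```python
import math

def lorentz_berthelot(eps_i, sigma_i, eps_j, sigma_j):
    """Combining rules for unlike Lennard-Jones pairs:
    geometric mean for the well depth, arithmetic mean for sigma."""
    eps_ij = math.sqrt(eps_i * eps_j)
    sigma_ij = 0.5 * (sigma_i + sigma_j)
    return eps_ij, sigma_ij
```

Such averaging is exactly the behaviour McLachlan's theory calls into question for condensed media, where unlike-pair attractions can be weaker than the combining rules predict.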
Limitations have been strongly felt in protein structure refinement. The major underlying challenge is the huge conformation space of polymeric molecules, which grows beyond current computational feasibility when containing more than ~20 monomers. Participants in Critical Assessment of protein Structure Prediction (CASP) did not try to refine their models to avoid "a central embarrassment of molecular mechanics, namely that energy minimization or molecular dynamics generally leads to a model that is less like the experimental structure". Force fields have been applied successfully for protein structure refinement in different X-ray crystallography and NMR spectroscopy applications, especially using program XPLOR. However, the refinement is driven mainly by a set of experimental constraints and the interatomic potentials serve mainly to remove interatomic hindrances. The results of calculations were practically the same with rigid sphere potentials implemented in program DYANA (calculations from NMR data), or with programs for crystallographic refinement that use no energy functions at all. These shortcomings are related to interatomic potentials and to the inability to sample the conformation space of large molecules effectively. Thereby also the development of parameters to tackle such large-scale problems requires new approaches. A specific problem area is homology modeling of proteins. Meanwhile, alternative empirical scoring functions have been developed for ligand docking, protein folding, homology model refinement, computational protein design, and modeling of proteins in membranes.
It was also argued that some protein force fields operate with energies that are irrelevant to protein folding or ligand binding. The parameters of protein force fields reproduce the enthalpy of sublimation, i.e., the energy of evaporation of molecular crystals. However, protein folding and ligand binding are thermodynamically closer to crystallization, or liquid-solid transitions, as these processes represent freezing of mobile molecules in condensed media. Thus, free energy changes during protein folding or ligand binding are expected to represent a combination of an energy similar to the heat of fusion (energy absorbed during melting of molecular crystals), a conformational entropy contribution, and solvation free energy. The heat of fusion is significantly smaller than the enthalpy of sublimation. Hence, the potentials describing protein folding or ligand binding need more consistent parameterization protocols, e.g., as described for IFF. Indeed, the energies of H-bonds in proteins are ~ −1.5 kcal/mol when estimated from protein engineering or alpha helix to coil transition data, but the same energies estimated from the sublimation enthalpy of molecular crystals were −4 to −6 kcal/mol, which is related to re-forming existing hydrogen bonds rather than forming hydrogen bonds from scratch. The depths of modified Lennard-Jones potentials derived from protein engineering data were also smaller than in typical potential parameters and followed the like dissolves like rule, as predicted by McLachlan theory.
== Force fields available in literature ==
Different force fields are designed for different purposes:
=== Classical ===
AMBER (Assisted Model Building and Energy Refinement) – widely used for proteins and DNA.
CFF (Consistent Force Field) – a family of force fields adapted to a broad variety of organic compounds, includes force fields for polymers, metals, etc. CFF was developed by Arieh Warshel, Lifson, and coworkers as a general method for unifying studies of energies, structures, and vibration of general molecules and molecular crystals. The CFF program, developed by Levitt and Warshel, is based on the Cartesian representation of all the atoms, and it served as the basis for many subsequent simulation programs.
CHARMM (Chemistry at HARvard Molecular Mechanics) – originally developed at Harvard, widely used for both small molecules and macromolecules
COSMOS-NMR – hybrid QM/MM force field adapted to various inorganic compounds, organic compounds, and biological macromolecules, including semi-empirical calculation of atomic charges NMR properties. COSMOS-NMR is optimized for NMR-based structure elucidation and implemented in COSMOS molecular modelling package.
CVFF – also used broadly for small molecules and macromolecules.
ECEPP – first force field for polypeptide molecules - developed by F.A. Momany, H.A. Scheraga and colleagues. ECEPP was developed specifically for the modeling of peptides and proteins. It uses fixed geometries of amino acid residues to simplify the potential energy surface. Thus, the energy minimization is conducted in the space of protein torsion angles. Both MM2 and ECEPP include potentials for H-bonds and torsion potentials for describing rotations around single bonds. ECEPP/3 was implemented (with some modifications) in Internal Coordinate Mechanics and FANTOM.
GROMOS (GROningen MOlecular Simulation) – a force field that comes as part of the GROMOS software, a general-purpose molecular dynamics computer simulation package for the study of biomolecular systems. GROMOS force field A-version has been developed for application to aqueous or apolar solutions of proteins, nucleotides, and sugars. A B-version to simulate gas phase isolated molecules is also available.
IFF (Interface Force Field) – covers metals, minerals, 2D materials, and polymers, using 12-6 LJ and 9-6 LJ interactions. IFF was developed for compounds across the periodic table and assumes one single energy expression for all of them (with 9-6 and 12-6 LJ options). It assigns consistent charges, utilizes standard conditions as a reference state, reproduces structures, energies, and energy derivatives, and quantifies limitations for all included compounds. IFF is in most parts non-polarizable, but also comprises polarizable parts, e.g. for some metals (Au, W) and pi-conjugated molecules.
MMFF (Merck Molecular Force Field) – developed at Merck for a broad range of molecules.
MM2 – developed by Norman Allinger, mainly for conformational analysis of hydrocarbons and other small organic molecules. It is designed to reproduce the equilibrium covalent geometry of molecules as precisely as possible. It implements a large set of parameters that is continuously refined and updated for many different classes of organic compounds (MM3 and MM4).
OPLS (Optimized Potential for Liquid Simulations) (variants include OPLS-AA, OPLS-UA, OPLS-2001, OPLS-2005, OPLS3e, OPLS4) – developed by William L. Jorgensen at the Yale University Department of Chemistry.
QCFF/PI – a general force field for conjugated molecules.
UFF (Universal Force Field) – A general force field with parameters for the full periodic table up to and including the actinoids, developed at Colorado State University. The reliability is known to be poor due to lack of validation and interpretation of the parameters for nearly all claimed compounds, especially metals and inorganic compounds.
=== Polarizable ===
Several force fields explicitly capture polarizability, where a particle's effective charge can be influenced by electrostatic interactions with its neighbors. Core-shell models are common, which consist of a positively charged core particle, representing the polarizable atom, and a negatively charged particle attached to the core atom through a spring-like harmonic oscillator potential. Recent examples include polarizable models with virtual electrons that reproduce image charges in metals and polarizable biomolecular force fields.
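The core-shell idea above can be sketched numerically. The following is a minimal illustration in reduced units, not any specific force field's implementation: a shell charge bound to its core by a harmonic spring responds to an external field, producing an induced dipole with effective polarizability α = q²/k.

```python
# Minimal sketch of a core-shell (Drude-type) polarizable site in 1D.
# All names and numerical values are illustrative assumptions.
# Shell charge -q is bound to core charge +q by a spring of constant k;
# force balance k*d = q*E gives displacement d, dipole mu = q*d,
# i.e. an effective isotropic polarizability alpha = q**2 / k.

def induced_dipole(q, k, E_field):
    """Equilibrium shell displacement and induced dipole for field E_field."""
    d = q * E_field / k   # spring force balances electrostatic force
    mu = q * d            # induced dipole moment
    return d, mu

q, k, E = 1.0, 4.0, 0.1   # reduced units (illustrative)
d, mu = induced_dipole(q, k, E)
alpha = q**2 / k          # effective polarizability
print(d, mu, alpha)       # mu equals alpha * E
```

The same force balance underlies self-consistent shell relaxation in actual core-shell simulations; there the field at each shell also includes contributions from all other charges.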
AMBER – polarizable force field developed by Jim Caldwell and coworkers.
AMOEBA (Atomic Multipole Optimized Energetics for Biomolecular Applications) – force field developed by Pengyu Ren (University of Texas at Austin) and Jay W. Ponder (Washington University). AMOEBA force field is gradually moving to more physics-rich AMOEBA+.
CHARMM – polarizable force field developed by S. Patel (University of Delaware) and C. L. Brooks III (University of Michigan). Based on the classical Drude oscillator developed by Alexander MacKerell (University of Maryland, Baltimore) and Benoit Roux (University of Chicago).
CFF/ind and ENZYMIX – The first polarizable force field which has subsequently been used in many applications to biological systems.
COSMOS-NMR (Computer Simulation of Molecular Structure) – developed by Ulrich Sternberg and coworkers. Hybrid QM/MM force field enables explicit quantum-mechanical calculation of electrostatic properties using localized bond orbitals with fast BPT formalism. Atomic charge fluctuation is possible in each molecular dynamics step.
DRF90 – developed by P. Th. van Duijnen and coworkers.
NEMO (Non-Empirical Molecular Orbital) – procedure developed by Gunnar Karlström and coworkers at Lund University (Sweden)
PIPF – The polarizable intermolecular potential for fluids is an induced point-dipole force field for organic liquids and biopolymers. The molecular polarization is based on Thole's interacting dipole (TID) model and was developed by Jiali Gao at the University of Minnesota.
Polarizable Force Field (PFF) – developed by Richard A. Friesner and coworkers.
SP-basis Chemical Potential Equalization (CPE) – approach developed by R. Chelli and P. Procacci.
PHAST – polarizable potential developed by Chris Cioce and coworkers.
ORIENT – procedure developed by Anthony J. Stone (Cambridge University) and coworkers.
Gaussian Electrostatic Model (GEM) – a polarizable force field based on Density Fitting developed by Thomas A. Darden and G. Andrés Cisneros at NIEHS; and Jean-Philip Piquemal at Paris VI University.
APPLE&P (Atomistic Polarizable Potential for Liquids, Electrolytes, and Polymers) – developed by Oleg Borodin, Dmitry Bedrov and coworkers, and distributed by Wasatch Molecular Incorporated.
Polarizable procedure based on the Kim-Gordon approach developed by Jürg Hutter and coworkers (University of Zürich)
GFN-FF (Geometry, Frequency, and Noncovalent Interaction Force-Field) – a completely automated partially polarizable generic force-field for the accurate description of structures and dynamics of large molecules across the periodic table developed by Stefan Grimme and Sebastian Spicher at the University of Bonn.
WASABe v1.0 PFF (for Water, orgAnic Solvents, And Battery electrolytes) – an isotropic atomic dipole polarizable force field for accurate description of battery electrolytes, in terms of thermodynamic and dynamic properties, at high lithium salt concentrations in sulfonate solvent, developed by Oleg Starovoytov.
XED (eXtended Electron Distribution) - a polarizable force-field created as a modification of an atom-centered charge model, developed by Andy Vinter. Partially charged monopoles are placed surrounding atoms to simulate more geometrically accurate electrostatic potentials at a fraction of the expense of using quantum mechanical methods. Primarily used by software packages supplied by Cresset Biomolecular Discovery.
=== Reactive ===
EVB (Empirical valence bond) – reactive force field introduced by Warshel and coworkers for use in modeling chemical reactions in different environments. The EVB facilitates calculating activation free energies in condensed phases and in enzymes.
ReaxFF – reactive force field (interatomic potential) developed by Adri van Duin, William Goddard and coworkers. It is slower than classical MD (~50×), needs parameter sets with specific validation, and lacks validation for surface and interfacial energies; its parameters are non-interpretable. It can be used for atomistic-scale dynamical simulations of chemical reactions. Parallelized ReaxFF allows reactive simulations on >>1,000,000 atoms on large supercomputers.
=== Coarse-grained ===
DPD (Dissipative particle dynamics) – a method commonly applied in chemical engineering. It is typically used for studying the hydrodynamics of various simple and complex fluids which require consideration of time and length scales larger than those accessible to classical molecular dynamics. The potential was originally proposed by Hoogerbrugge and Koelman, with later modifications by Español and Warren. The state of the art was documented in a CECAM workshop in 2008. Recently, work has been undertaken to capture some of the chemical subtleties relevant to solutions, which has led to work considering automated parameterisation of the DPD interaction potentials against experimental observables.
MARTINI – a coarse-grained potential developed by Marrink and coworkers at the University of Groningen, initially developed for molecular dynamics simulations of lipids, later extended to various other molecules. The force field applies a mapping of four heavy atoms to one CG interaction site and is parameterized with the aim of reproducing thermodynamic properties.
SAFT – A top-down coarse-grained model developed in the Molecular Systems Engineering group at Imperial College London fitted to liquid phase densities and vapor pressures of pure compounds by using the SAFT equation of state.
SIRAH – a coarse-grained force field developed by Pantano and coworkers at the Biomolecular Simulations Group, Institut Pasteur of Montevideo, Uruguay; developed for molecular dynamics of water, DNA, and proteins. Freely available for the AMBER and GROMACS packages.
VAMM (Virtual atom molecular mechanics) – a coarse-grained force field developed by Korkut and Hendrickson for molecular mechanics calculations such as large scale conformational transitions based on the virtual interactions of C-alpha atoms. It is a knowledge based force field and formulated to capture features dependent on secondary structure and on residue-specific contact information in proteins.
=== Machine learning ===
MACE (Multi Atomic Cluster Expansion) is a highly accurate machine learning force field architecture that combines the rigorous many-body expansion of the total potential energy with rotationally equivariant representations of the system.
ANI (Artificial Narrow Intelligence) is a transferable neural network potential, built from atomic environment vectors, and able to provide DFT accuracy in terms of energies.
FFLUX (originally QCTFF) – a set of trained Kriging models which operate together to provide a molecular force field trained on Atoms in Molecules or Quantum Chemical Topology energy terms, including electrostatics, exchange, and electron correlation.
TensorMol – a mixed model in which a neural network provides a short-range potential, whilst more traditional potentials add screened long-range terms.
Δ-ML – not a force field method, but a model that adds learned correctional energy terms to approximate, relatively computationally cheap quantum chemical methods in order to provide the accuracy level of a higher-order, more computationally expensive quantum chemical model.
SchNet – a neural network utilising continuous-filter convolutional layers to predict chemical properties and potential energy surfaces.
PhysNet – a neural-network-based energy function to predict energies, forces, and (fluctuating) partial charges.
=== Water ===
The set of parameters used to model water or aqueous solutions (basically a force field for water) is called a water model. Many water models have been proposed; some examples are TIP3P, TIP4P, SPC, flexible simple point charge water model (flexible SPC), ST2, and mW. Other solvents and methods of solvent representation are also applied within computational chemistry and physics; these are termed solvent models.
=== Modified amino acids ===
Forcefield_PTM – An AMBER-based forcefield and webtool for modeling common post-translational modifications of amino acids in proteins developed by Chris Floudas and coworkers. It uses the ff03 charge model and has several side-chain torsion corrections parameterized to match the quantum chemical rotational surface.
Forcefield_NCAA - An AMBER-based forcefield and webtool for modeling common non-natural amino acids in proteins in condensed-phase simulations using the ff03 charge model. The charges have been reported to be correlated with hydration free energies of corresponding side-chain analogs.
=== Other ===
LFMM (Ligand Field Molecular Mechanics) – functions for the coordination sphere around transition metals based on the angular overlap model (AOM). Implemented in the Molecular Operating Environment (MOE) as DommiMOE and in Tinker.
VALBOND - a function for angle bending that is based on valence bond theory and works for large angular distortions, hypervalent molecules, and transition metal complexes. It can be incorporated into other force fields such as CHARMM and UFF.
== See also ==
== References ==
== Further reading == | Wikipedia/Force_field_(chemistry) |
In statistical mechanics, Boltzmann's entropy formula (also known as the Boltzmann–Planck equation, not to be confused with the more general Boltzmann equation, which is a partial differential equation) is a probability equation relating the entropy {\displaystyle S}, also written as {\displaystyle S_{\mathrm {B} }}, of an ideal gas to the multiplicity (commonly denoted as {\displaystyle \Omega } or {\displaystyle W}), the number of real microstates corresponding to the gas's macrostate:
{\displaystyle S=k_{\mathrm {B} }\ln W}
where {\displaystyle k_{\mathrm {B} }} is the Boltzmann constant (also written simply as {\displaystyle k}), equal to 1.380649 × 10−23 J/K, and {\displaystyle \ln } is the natural logarithm (log base e).
In short, the Boltzmann formula shows the relationship between entropy and the number of ways the atoms or molecules of a certain kind of thermodynamic system can be arranged.
== History ==
The equation was originally formulated by Ludwig Boltzmann between 1872 and 1875, but later put into its current form by Max Planck in about 1900. To quote Planck, "the logarithmic connection between entropy and probability was first stated by L. Boltzmann in his kinetic theory of gases".
A 'microstate' is a state specified in terms of the constituent particles of a body of matter or radiation that has been specified as a macrostate in terms of such variables as internal energy and pressure. A macrostate is experimentally observable, with at least a finite extent in spacetime. A microstate can be instantaneous, or can be a trajectory composed of a temporal progression of instantaneous microstates. In experimental practice, such microstates are scarcely observable. The present account concerns instantaneous microstates.
The value of W was originally intended to be proportional to the Wahrscheinlichkeit (the German word for probability) of a macroscopic state for some probability distribution of possible microstates—the collection of (unobservable microscopic single particle) "ways" in which the (observable macroscopic) thermodynamic state of a system can be realized by assigning different positions and momenta to the respective molecules.
There are many instantaneous microstates that apply to a given macrostate. Boltzmann considered collections of such microstates. For a given macrostate, he called the collection of all possible instantaneous microstates of a certain kind by the name monode, for which Gibbs' term ensemble is used nowadays. For single particle instantaneous microstates, Boltzmann called the collection an ergode. Subsequently, Gibbs called it a microcanonical ensemble, and this name is widely used today, perhaps partly because Bohr was more interested in the writings of Gibbs than of Boltzmann.
Interpreted in this way, Boltzmann's formula is the most basic formula for the thermodynamic entropy. Boltzmann's paradigm was an ideal gas of N identical particles, of which Ni are in the i-th microscopic condition (range) of position and momentum. For this case, the probability of each microstate of the system is equal, so it was equivalent for Boltzmann to calculate the number of microstates associated with a macrostate. W was historically misinterpreted as literally meaning the number of microstates, and that is what it usually means today. W can be counted using the formula for permutations
{\displaystyle W={\frac {N!}{\prod _{i}N_{i}!}}}
where i ranges over all possible molecular conditions and "!" denotes factorial. The "correction" in the denominator is due to the fact that identical particles in the same condition are indistinguishable. W is sometimes called the "thermodynamic probability" since it is an integer greater than one, while mathematical probabilities are always numbers between zero and one.
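The permutation count and the resulting entropy can be evaluated directly. The sketch below (illustrative occupation numbers) computes W = N!/∏ Ni! and S = kB ln W, using log-gamma so the logarithm stays tractable even when the factorials would overflow:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (2019 SI exact value)

def multiplicity(occupations):
    """W = N! / prod(N_i!): arrangements of N identical particles with
    N_i particles in molecular condition i."""
    n = sum(occupations)
    w = math.factorial(n)
    for n_i in occupations:
        w //= math.factorial(n_i)
    return w

def boltzmann_entropy(occupations):
    """S = k_B ln W, computed via ln N! - sum(ln N_i!) using lgamma,
    which avoids forming the (possibly astronomically large) integer W."""
    n = sum(occupations)
    ln_w = math.lgamma(n + 1) - sum(math.lgamma(n_i + 1) for n_i in occupations)
    return K_B * ln_w

occ = [3, 2, 1]              # 6 particles spread over 3 conditions
print(multiplicity(occ))     # 720 / (6 * 2 * 1) = 60
```

Because only ln W enters the entropy, the lgamma route works unchanged for occupation numbers of order 10²³, where the direct factorial is unrepresentable.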
== Introduction of the natural logarithm ==
In his 1877 paper, Boltzmann sets out how to count molecular states in order to determine the state distribution number, introducing the logarithm to simplify the equation.
Boltzmann writes:
“The first task is to determine the permutation number, previously designated by 𝒫, for any state distribution. Denoting by J the sum of the permutations 𝒫 for all possible state distributions, the quotient 𝒫/J is the state distribution’s probability, henceforth denoted by W. We would first like to calculate the permutations 𝒫 for the state distribution characterized by w0 molecules with kinetic energy 0, w1 molecules with kinetic energy ϵ, etc. …
“The most likely state distribution will be for those w0, w1 … values for which 𝒫 is a maximum or, since the numerator is a constant, for which the denominator is a minimum. The values w0, w1 must simultaneously satisfy the two constraints (1) and (2). Since the denominator of 𝒫 is a product, it is easiest to determine the minimum of its logarithm, …”
Therefore, by minimizing the denominator, he maximizes the permutation number. To simplify the product of the factorials, he takes their natural logarithm, which turns the product into a sum. This is the reason for the natural logarithm in Boltzmann’s entropy formula.
== Generalization ==
Boltzmann's formula applies to microstates of a system, each possible microstate of which is presumed to be equally probable.
But in thermodynamics, the universe is divided into a system of interest, plus its surroundings; then the entropy of Boltzmann's microscopically specified system can be identified with the system entropy in classical thermodynamics. The microstates of such a thermodynamic system are not equally probable—for example, high energy microstates are less probable than low energy microstates for a thermodynamic system kept at a fixed temperature by allowing contact with a heat bath.
For thermodynamic systems where microstates of the system may not have equal probabilities, the appropriate generalization, called the Gibbs entropy formula, is:
{\displaystyle S=-k_{\mathrm {B} }\sum _{i}p_{i}\ln p_{i}}
This reduces to equation (1) if the probabilities pi are all equal.
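The reduction can be checked numerically: for the Gibbs entropy S = −kB Σ pi ln pi, setting all W probabilities equal to 1/W recovers kB ln W, and any non-uniform distribution over the same states gives less. The numbers below are illustrative:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def gibbs_entropy(probs):
    """S = -k_B * sum(p_i ln p_i); zero-probability states contribute nothing."""
    return -K_B * sum(p * math.log(p) for p in probs if p > 0)

W = 1000
uniform = [1.0 / W] * W
print(math.isclose(gibbs_entropy(uniform), K_B * math.log(W)))  # True

# A skewed distribution over the same W states has strictly lower entropy:
skewed = [0.5] + [0.5 / (W - 1)] * (W - 1)
print(gibbs_entropy(skewed) < K_B * math.log(W))  # True
```

This is the numerical face of the statement that equation (1) is the equal-probability special case of the Gibbs formula.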
Boltzmann used a {\displaystyle \rho \ln \rho } formula as early as 1866. He interpreted ρ as a density in phase space—without mentioning probability—but since this satisfies the axiomatic definition of a probability measure we can retrospectively interpret it as a probability anyway. Gibbs gave an explicitly probabilistic interpretation in 1878.
Boltzmann himself used an expression equivalent to (3) in his later work and recognized it as more general than equation (1). That is, equation (1) is a corollary of equation (3)—and not vice versa. In every situation where equation (1) is valid, equation (3) is valid also—and not vice versa.
== Boltzmann entropy excludes statistical dependencies ==
The term Boltzmann entropy is also sometimes used to indicate entropies calculated based on the approximation that the overall probability can be factored into an identical separate term for each particle—i.e., assuming each particle has an identical independent probability distribution, and ignoring interactions and correlations between the particles. This is exact for an ideal gas of identical particles that move independently apart from instantaneous collisions, and is an approximation, possibly a poor one, for other systems.
The Boltzmann entropy is obtained if one assumes one can treat all the component particles of a thermodynamic system as statistically independent. The probability distribution of the system as a whole then factorises into the product of N separate identical terms, one term for each particle; and when the summation is taken over each possible state in the 6-dimensional phase space of a single particle (rather than the 6N-dimensional phase space of the system as a whole), the Gibbs entropy formula simplifies to the Boltzmann entropy {\displaystyle S_{\mathrm {B} }}.
This reflects the original statistical entropy function introduced by Ludwig Boltzmann in 1872. For the special case of an ideal gas it exactly corresponds to the proper thermodynamic entropy.
For anything but the most dilute of real gases, {\displaystyle S_{\mathrm {B} }} leads to increasingly wrong predictions of entropies and physical behaviours, by ignoring the interactions and correlations between different molecules. Instead one must consider the ensemble of states of the system as a whole, called by Boltzmann a holode, rather than single particle states. Gibbs considered several such kinds of ensembles; relevant here is the canonical one.
== See also ==
History of entropy
H theorem
Gibbs entropy formula
nat (unit)
Shannon entropy
von Neumann entropy
== References ==
== External links ==
Introduction to Boltzmann's Equation
Vorlesungen über Gastheorie, Ludwig Boltzmann (1896) vol. I, J.A. Barth, Leipzig
Vorlesungen über Gastheorie, Ludwig Boltzmann (1898) vol. II. J.A. Barth, Leipzig. | Wikipedia/Boltzmann's_entropy_formula |
In classical thermodynamics, entropy (from Greek τρoπή (tropḗ) 'transformation') is a property of a thermodynamic system that expresses the direction or outcome of spontaneous changes in the system. The term was introduced by Rudolf Clausius in the mid-19th century to explain the relationship of the internal energy that is available or unavailable for transformations in form of heat and work. Entropy predicts that certain processes are irreversible or impossible, despite not violating the conservation of energy. The definition of entropy is central to the establishment of the second law of thermodynamics, which states that the entropy of isolated systems cannot decrease with time, as they always tend to arrive at a state of thermodynamic equilibrium, where the entropy is highest. Entropy is therefore also considered to be a measure of disorder in the system.
Ludwig Boltzmann explained the entropy as a measure of the number of possible microscopic configurations Ω of the individual atoms and molecules of the system (microstates) which correspond to the macroscopic state (macrostate) of the system. He showed that the thermodynamic entropy is k ln Ω, where the factor k has since been known as the Boltzmann constant.
== Concept ==
Differences in pressure, density, and temperature of a thermodynamic system tend to equalize over time. For example, in a room containing a glass of melting ice, the difference in temperature between the warm room and the cold glass of ice and water is equalized by energy flowing as heat from the room to the cooler ice and water mixture. Over time, the temperature of the glass and its contents and the temperature of the room achieve a balance. The entropy of the room has decreased. However, the entropy of the glass of ice and water has increased more than the entropy of the room has decreased. In an isolated system, such as the room and ice water taken together, the dispersal of energy from warmer to cooler regions always results in a net increase in entropy. Thus, when the system of the room and ice water system has reached thermal equilibrium, the entropy change from the initial state is at its maximum. The entropy of the thermodynamic system is a measure of the progress of the equalization.
Many irreversible processes result in an increase of entropy. One of them is mixing of two or more different substances, occasioned by bringing them together by removing a wall that separates them, keeping the temperature and pressure constant. The mixing is accompanied by the entropy of mixing. In the important case of mixing of ideal gases, the combined system does not change its internal energy by work or heat transfer; the entropy increase is then entirely due to the spreading of the different substances into their new common volume.
From a macroscopic perspective, in classical thermodynamics, the entropy is a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. Entropy is a key ingredient of the Second law of thermodynamics, which has important consequences e.g. for the performance of heat engines, refrigerators, and heat pumps.
== Definition ==
According to the Clausius equality, for a closed homogeneous system, in which only reversible processes take place,
{\displaystyle \oint {\frac {\delta Q}{T}}=0.}
Here {\displaystyle T} is the uniform temperature of the closed system and {\displaystyle \delta Q} the incremental reversible transfer of heat energy into that system.
That means the line integral {\textstyle \int _{L}{\frac {\delta Q}{T}}} is path-independent.
A state function {\displaystyle S}, called entropy, may be defined which satisfies
{\displaystyle \mathrm {d} S={\frac {\delta Q}{T}}.}
== Entropy measurement ==
The thermodynamic state of a uniform closed system is determined by its temperature T and pressure P. A change in entropy can be written as
{\displaystyle \mathrm {d} S=\left({\frac {\partial S}{\partial T}}\right)_{P}\mathrm {d} T+\left({\frac {\partial S}{\partial P}}\right)_{T}\mathrm {d} P.}
The first contribution depends on the heat capacity at constant pressure CP through
{\displaystyle \left({\frac {\partial S}{\partial T}}\right)_{P}={\frac {C_{P}}{T}}.}
This is the result of the definition of the heat capacity by δQ = CP dT and T dS = δQ. The second term may be rewritten with one of the Maxwell relations
{\displaystyle \left({\frac {\partial S}{\partial P}}\right)_{T}=-\left({\frac {\partial V}{\partial T}}\right)_{P}}
and the definition of the volumetric thermal-expansion coefficient
{\displaystyle \alpha _{V}={\frac {1}{V}}\left({\frac {\partial V}{\partial T}}\right)_{P}}
so that
{\displaystyle \mathrm {d} S={\frac {C_{P}}{T}}\mathrm {d} T-\alpha _{V}V\mathrm {d} P.}
With this expression the entropy S at arbitrary P and T can be related to the entropy S0 at some reference state at P0 and T0 according to
{\displaystyle S(P,T)=S(P_{0},T_{0})+\int _{T_{0}}^{T}{\frac {C_{P}(P_{0},T^{\prime })}{T^{\prime }}}\mathrm {d} T^{\prime }-\int _{P_{0}}^{P}\alpha _{V}(P^{\prime },T)V(P^{\prime },T)\mathrm {d} P^{\prime }.}
In classical thermodynamics, the entropy of the reference state can be put equal to zero at any convenient temperature and pressure. For example, for pure substances, one can take the entropy of the solid at the melting point at 1 bar equal to zero. From a more fundamental point of view, the third law of thermodynamics suggests that there is a preference to take S = 0 at T = 0 (absolute zero) for perfectly ordered materials such as crystals.
S(P, T) is determined by following a specific path in the P-T diagram: in the first integral one integrates over T at constant pressure P0 (so that dP = 0), and in the second integral one integrates over P at constant temperature T (so that dT = 0). As the entropy is a function of state, the result is independent of the path.
The above relation shows that the determination of the entropy requires knowledge of the heat capacity and the equation of state (which is the relation between P, V, and T of the substance involved). Normally these are complicated functions and numerical integration is needed. In simple cases it is possible to get analytical expressions for the entropy. In the case of an ideal gas, the heat capacity is constant and the ideal gas law PV = nRT gives αVV = V/T = nR/P, with n the number of moles and R the molar ideal-gas constant. So, the molar entropy of an ideal gas is given by
{\displaystyle S_{m}(P,T)=S_{m}(P_{0},T_{0})+C_{P}\ln {\frac {T}{T_{0}}}-R\ln {\frac {P}{P_{0}}}.}
In this expression CP now is the molar heat capacity.
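The analytic expression above can be evaluated directly. The sketch below (illustrative states, and CP taken as the constant molar heat capacity (5/2)R of a monatomic ideal gas, an assumption not fixed by the text) computes the molar entropy change between two states:

```python
import math

R = 8.314462618      # molar gas constant, J/(mol K)
C_P = 2.5 * R        # assumed: monatomic ideal gas, C_P = (5/2) R

def delta_molar_entropy(P0, T0, P, T):
    """S_m(P,T) - S_m(P0,T0) = C_P ln(T/T0) - R ln(P/P0) for an ideal gas
    with constant molar heat capacity C_P."""
    return C_P * math.log(T / T0) - R * math.log(P / P0)

# Doubling T at constant pressure: the pressure term vanishes,
# leaving dS = C_P ln 2.
dS = delta_molar_entropy(1e5, 300.0, 1e5, 600.0)
print(round(dS, 2))  # C_P ln 2 ≈ 14.41 J/(mol K)
```

Swapping in a temperature-dependent CP(T) would simply replace the closed-form log with a numerical quadrature of CP/T, as the general relation before it requires.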
The entropy of inhomogeneous systems is the sum of the entropies of the various subsystems. The laws of thermodynamics hold rigorously for inhomogeneous systems even though they may be far from internal equilibrium. The only condition is that the thermodynamic parameters of the composing subsystems are (reasonably) well-defined.
== Temperature-entropy diagrams ==
Entropy values of important substances may be obtained from reference works or with commercial software in tabular form or as diagrams. One of the most common diagrams is the temperature-entropy diagram (TS-diagram). For example, Fig.2 shows the TS-diagram of nitrogen, depicting the melting curve and saturated liquid and vapor values with isobars and isenthalps.
== Entropy change in irreversible transformations ==
We now consider inhomogeneous systems in which internal transformations (processes) can take place. If we calculate the entropy S1 before and S2 after such an internal process the Second Law of Thermodynamics demands that S2 ≥ S1 where the equality sign holds if the process is reversible. The difference Si = S2 − S1 is the entropy production due to the irreversible process. The Second law demands that the entropy of an isolated system cannot decrease.
Suppose a system is thermally and mechanically isolated from the environment (isolated system). For example, consider an insulating rigid box divided by a movable partition into two volumes, each filled with gas. If the pressure of one gas is higher, it will expand by moving the partition, thus performing work on the other gas. Also, if the gases are at different temperatures, heat can flow from one gas to the other provided the partition allows heat conduction. Our above result indicates that the entropy of the system as a whole will increase during these processes. There exists a maximum amount of entropy the system may possess under the circumstances. This entropy corresponds to a state of stable equilibrium, since a transformation to any other equilibrium state would cause the entropy to decrease, which is forbidden. Once the system reaches this maximum-entropy state, no part of the system can perform work on any other part. It is in this sense that entropy is a measure of the energy in a system that cannot be used to do work.
An irreversible process degrades the performance of a thermodynamic system, designed to do work or produce cooling, and results in entropy production. The entropy generation during a reversible process is zero. Thus entropy production is a measure of the irreversibility and may be used to compare engineering processes and machines.
== Thermal machines ==
Clausius' identification of S as a significant quantity was motivated by the study of reversible and irreversible thermodynamic transformations. A heat engine is a thermodynamic system that can undergo a sequence of transformations which ultimately return it to its original state. Such a sequence is called a cyclic process, or simply a cycle. During some transformations, the engine may exchange energy with its environment. The net result of a cycle is
mechanical work done by the system (which can be positive or negative, the latter meaning that work is done on the engine),
heat transferred from one part of the environment to another. In the steady state, by the conservation of energy, the net energy lost by the environment is equal to the work done by the engine.
If every transformation in the cycle is reversible, the cycle is reversible, and it can be run in reverse, so that the heat transfers occur in the opposite directions and the amount of work done switches sign.
=== Heat engines ===
Consider a heat engine working between two temperatures TH and Ta. With Ta we have ambient temperature in mind, but, in principle it may also be some other low temperature. The heat engine is in thermal contact with two heat reservoirs which are supposed to have a very large heat capacity so that their temperatures do not change significantly if heat QH is removed from the hot reservoir and Qa is added to the lower reservoir. Under normal operation TH > Ta and QH, Qa, and W are all positive.
As our thermodynamical system we take a big system which includes the engine and the two reservoirs. It is indicated in Fig.3 by the dotted rectangle. It is inhomogeneous, closed (no exchange of matter with its surroundings), and adiabatic (no exchange of heat with its surroundings). It is not isolated since per cycle a certain amount of work W is produced by the system given by the first law of thermodynamics
{\displaystyle W=Q_{H}-Q_{a}.}
We used the fact that the engine itself is periodic, so its internal energy has not changed after one cycle. The same is true for its entropy, so the entropy increase S2 − S1 of our system after one cycle is given by the decrease in entropy of the hot source together with the increase in entropy of the cold sink. The entropy increase of the total system S2 − S1 is equal to the entropy production Si due to irreversible processes in the engine, so
{\displaystyle S_{i}=-{\frac {Q_{H}}{T_{H}}}+{\frac {Q_{a}}{T_{a}}}.}
The Second law demands that Si ≥ 0. Eliminating Qa from the two relations gives
{\displaystyle W=\left(1-{\frac {T_{a}}{T_{H}}}\right)Q_{H}-T_{a}S_{i}.}
The first term is the maximum possible work for a heat engine, given by a reversible engine, such as one operating along a Carnot cycle. Finally,
{\displaystyle W=W_{\text{max}}-T_{a}S_{i}.}
This equation tells us that the production of work is reduced by the generation of entropy. The term TaSi gives the lost work, or dissipated energy, by the machine.
Correspondingly, the amount of heat, discarded to the cold sink, is increased by the entropy generation
{\displaystyle Q_{a}={\frac {T_{a}}{T_{H}}}Q_{H}+T_{a}S_{i}=Q_{a,{\text{min}}}+T_{a}S_{i}.}
These important relations can also be obtained without the inclusion of the heat reservoirs. See the article on entropy production.
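The relations above can be checked with simple arithmetic. The sketch below uses illustrative values for the temperatures, heat input, and entropy production (they are assumptions, not data from the text):

```python
# Heat engine with entropy production: W = W_max - Ta*Si, Qa = QH - W.
# All numbers below are illustrative assumptions.
TH, Ta = 600.0, 300.0   # hot-reservoir and ambient temperature [K]
QH = 1000.0             # heat taken from the hot reservoir per cycle [J]
Si = 0.5                # entropy production per cycle [J/K]

W_max = (1 - Ta / TH) * QH   # reversible (Carnot) work
W = W_max - Ta * Si          # actual work, reduced by the lost work Ta*Si
Qa = QH - W                  # heat rejected to the cold sink (first law)

print(W_max, W, Qa)  # 500.0 350.0 650.0
```

Note that the rejected heat 650 J equals Qa,min = (Ta/TH) QH = 500 J plus TaSi = 150 J, consistent with the last relation above.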
=== Refrigerators ===
The same principle can be applied to a refrigerator working between a low temperature TL and ambient temperature. The schematic drawing is exactly the same as Fig.3 with TH replaced by TL, QH by QL, and the sign of W reversed. In this case the entropy production is
{\displaystyle S_{i}={\frac {Q_{a}}{T_{a}}}-{\frac {Q_{L}}{T_{L}}}}
and the work needed to extract heat QL from the cold source is
{\displaystyle W=Q_{L}\left({\frac {T_{a}}{T_{L}}}-1\right)+T_{a}S_{i}.}
The first term is the minimum required work, which corresponds to a reversible refrigerator, so we have
{\displaystyle W=W_{\text{min}}+T_{a}S_{i}}
i.e., the refrigerator compressor has to perform extra work to compensate for the dissipated energy due to irreversible processes which lead to entropy production.
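The same kind of numerical check works for the refrigerator; again, all input values are illustrative assumptions:

```python
# Refrigerator with entropy production: W = W_min + Ta*Si.
# All numbers below are illustrative assumptions.
TL, Ta = 250.0, 300.0   # cold-space and ambient temperature [K]
QL = 1000.0             # heat extracted from the cold space per cycle [J]
Si = 0.4                # entropy production per cycle [J/K]

W_min = QL * (Ta - TL) / TL   # reversible minimum work, QL*(Ta/TL - 1)
W = W_min + Ta * Si           # extra compressor work due to dissipation
cop = QL / W                  # coefficient of performance

print(W_min, W, cop)  # 200.0 320.0 3.125
```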
== See also ==
Entropy
Enthalpy
Entropy production
Fundamental thermodynamic relation
Thermodynamic free energy
History of entropy
Entropy (statistical views)
== References ==
== Further reading ==
E.A. Guggenheim Thermodynamics, an advanced treatment for chemists and physicists North-Holland Publishing Company, Amsterdam, 1959.
C. Kittel and H. Kroemer Thermal Physics W.H. Freeman and Company, New York, 1980.
Goldstein, Martin, and Inge F., 1993. The Refrigerator and the Universe. Harvard Univ. Press. A gentle introduction at a lower level than this entry.
In thermodynamics and solid-state physics, the Debye model is a method developed by Peter Debye in 1912 to estimate the phonon contribution to the specific heat (heat capacity) of a solid. It treats the vibrations of the atomic lattice (heat) as phonons in a box, in contrast to the Einstein model, which treats the solid as many individual, non-interacting quantum harmonic oscillators. The Debye model correctly predicts the low-temperature dependence of the heat capacity of solids, which is proportional to the cube of temperature – the Debye T^3 law. Similarly to the Einstein model, it recovers the Dulong–Petit law at high temperatures. Due to simplifying assumptions, its accuracy suffers at intermediate temperatures.
== Derivation ==
The Debye model treats atomic vibrations as phonons confined in the solid's volume. It is analogous to Planck's law of black body radiation, which treats electromagnetic radiation as a photon gas confined in a vacuum space. Most of the calculation steps are identical, as both are examples of a massless Bose gas with a linear dispersion relation.
For a cube of side-length {\displaystyle L}, the resonating modes of the sonic disturbances (considering for now only those aligned with one axis), treated as particles in a box, have wavelengths given as
{\displaystyle \lambda _{n}={2L \over n}\,,}
where {\displaystyle n} is an integer. The energy of a phonon is given as
{\displaystyle E_{n}\ =h\nu _{n}\,,}
where {\displaystyle h} is the Planck constant and {\displaystyle \nu _{n}} is the frequency of the phonon. Making the approximation that the frequency is inversely proportional to the wavelength,
{\displaystyle E_{n}=h\nu _{n}={hc_{\rm {s}} \over \lambda _{n}}={hc_{s}n \over 2L}\,,}
in which {\displaystyle c_{s}} is the speed of sound inside the solid. In three dimensions, the energy can be generalized to
{\displaystyle E_{n}^{2}={p_{n}^{2}c_{\rm {s}}^{2}}=\left({hc_{\rm {s}} \over 2L}\right)^{2}\left(n_{x}^{2}+n_{y}^{2}+n_{z}^{2}\right)\,,}
in which {\displaystyle p_{n}} is the magnitude of the three-dimensional momentum of the phonon, and {\displaystyle n_{x}}, {\displaystyle n_{y}}, and {\displaystyle n_{z}} are the components of the resonating mode along each of the three axes.
The approximation that the frequency is inversely proportional to the wavelength (giving a constant speed of sound) is good for low-energy phonons but not for high-energy phonons, which is a limitation of the Debye model. This approximation leads to incorrect results at intermediate temperatures, whereas the results are exact at the low and high temperature limits.
The total energy in the box,
U
{\displaystyle U}
, is given by
U
=
∑
n
E
n
N
¯
(
E
n
)
,
{\displaystyle U=\sum _{n}E_{n}\,{\bar {N}}(E_{n})\,,}
where
N
¯
(
E
n
)
{\displaystyle {\bar {N}}(E_{n})}
is the number of phonons in the box with energy
E
n
{\displaystyle E_{n}}
; the total energy is equal to the sum of energies over all energy levels, and the energy at a given level is found by multiplying its energy by the number of phonons with that energy. In three dimensions, each combination of modes in each of the three axes corresponds to an energy level, giving the total energy as:
U
=
∑
n
x
∑
n
y
∑
n
z
E
n
N
¯
(
E
n
)
.
{\displaystyle U=\sum _{n_{x}}\sum _{n_{y}}\sum _{n_{z}}E_{n}\,{\bar {N}}(E_{n})\,.}
The Debye model and Planck's law of black body radiation differ here with respect to this sum. Unlike electromagnetic photon radiation in a box, there are a finite number of phonon energy states because a phonon cannot have an arbitrarily high frequency. Its frequency is bounded by its propagation medium—the atomic lattice of the solid. The following illustration describes transverse phonons in a cubic solid at varying frequencies:
It is reasonable to assume that the minimum wavelength of a phonon is twice the atomic separation, as shown in the lowest example. With {\displaystyle N} atoms in a cubic solid, each axis of the cube measures {\displaystyle {\sqrt[{3}]{N}}} atoms long. Atomic separation is then given by {\displaystyle L/{\sqrt[{3}]{N}}}, and the minimum wavelength is
{\displaystyle \lambda _{\rm {min}}={2L \over {\sqrt[{3}]{N}}}\,,}
making the maximum mode number {\displaystyle n_{\rm {max}}}:
{\displaystyle n_{\rm {max}}={\sqrt[{3}]{N}}\,.}
This contrasts with photons, for which the maximum mode number is infinite. This number bounds the upper limit of the triple energy sum
{\displaystyle U=\sum _{n_{x}}^{\sqrt[{3}]{N}}\sum _{n_{y}}^{\sqrt[{3}]{N}}\sum _{n_{z}}^{\sqrt[{3}]{N}}E_{n}\,{\bar {N}}(E_{n})\,.}
If {\displaystyle E_{n}} is a function that varies slowly with respect to {\displaystyle n}, the sums can be approximated with integrals:
{\displaystyle U\approx \int _{0}^{\sqrt[{3}]{N}}\int _{0}^{\sqrt[{3}]{N}}\int _{0}^{\sqrt[{3}]{N}}E(n)\,{\bar {N}}\left(E(n)\right)\,dn_{x}\,dn_{y}\,dn_{z}\,.}
To evaluate this integral, the function {\displaystyle {\bar {N}}(E)}, the number of phonons with energy {\displaystyle E}, must also be known. Phonons obey Bose–Einstein statistics, and their distribution is given by the Bose–Einstein formula:
{\displaystyle \langle N\rangle _{BE}={1 \over e^{E/kT}-1}\,.}
Because a phonon has three possible polarization states (one longitudinal, and two transverse, which approximately do not affect its energy) the formula above must be multiplied by 3,
{\displaystyle {\bar {N}}(E)={3 \over e^{E/kT}-1}\,.}
Considering all three polarization states together also means that an effective sonic velocity {\displaystyle c_{\rm {eff}}} must be determined and used as the value of the standard sonic velocity {\displaystyle c_{s}}. The Debye temperature {\displaystyle T_{\rm {D}}} defined below is proportional to {\displaystyle c_{\rm {eff}}}; more precisely,
{\displaystyle T_{\rm {D}}^{-3}\propto c_{\rm {eff}}^{-3}:={\frac {1}{3}}c_{\rm {long}}^{-3}+{\frac {2}{3}}c_{\rm {trans}}^{-3},}
where the longitudinal and transverse sound-wave velocities are averaged, weighted by the number of polarization states. The Debye temperature, or equivalently the effective sonic velocity, is a measure of the hardness of the crystal.
Substituting {\displaystyle {\bar {N}}(E)} into the energy integral yields
{\displaystyle U=\int _{0}^{\sqrt[{3}]{N}}\int _{0}^{\sqrt[{3}]{N}}\int _{0}^{\sqrt[{3}]{N}}E(n)\,{3 \over e^{E(n)/kT}-1}\,dn_{x}\,dn_{y}\,dn_{z}\,.}
These integrals are evaluated easily for photons because their frequency, at least semi-classically, is unbounded. The same is not true for phonons, so in order to approximate this triple integral, Peter Debye used spherical coordinates,
{\displaystyle \ (n_{x},n_{y},n_{z})=(n\sin \theta \cos \phi ,n\sin \theta \sin \phi ,n\cos \theta )\,,}
and approximated the cube with an eighth of a sphere,
{\displaystyle U\approx \int _{0}^{\pi /2}\int _{0}^{\pi /2}\int _{0}^{R}E(n)\,{3 \over e^{E(n)/kT}-1}n^{2}\sin \theta \,dn\,d\theta \,d\phi \,,}
where {\displaystyle R} is the radius of this sphere. As the energy function does not depend on either of the angles, the equation can be simplified to
{\displaystyle \,3\int _{0}^{\pi /2}\int _{0}^{\pi /2}\sin \theta \,d\theta \,d\phi \,\int _{0}^{R}E(n)\,{\frac {1}{e^{E(n)/kT}-1}}n^{2}dn\,={\frac {3\pi }{2}}\int _{0}^{R}E(n)\,{\frac {1}{e^{E(n)/kT}-1}}n^{2}dn\,}
The number of particles in the original cube and in the eighth of a sphere should be equivalent. The volume of the cube is {\displaystyle N} unit cell volumes,
{\displaystyle N={1 \over 8}{4 \over 3}\pi R^{3}\,,}
such that the radius must be
{\displaystyle R={\sqrt[{3}]{6N \over \pi }}\,.}
The substitution of integration over a sphere for the correct integral over a cube introduces another source of inaccuracy into the resulting model.
After making the spherical substitution and substituting in the function {\displaystyle E(n)}, the energy integral becomes
{\displaystyle U={3\pi \over 2}\int _{0}^{R}\,{hc_{s}n \over 2L}{n^{2} \over e^{hc_{\rm {s}}n/2LkT}-1}\,dn\,.}
Changing the integration variable to {\displaystyle x={hc_{\rm {s}}n \over 2LkT}},
{\displaystyle U={3\pi \over 2}kT\left({2LkT \over hc_{\rm {s}}}\right)^{3}\int _{0}^{hc_{\rm {s}}R/2LkT}{x^{3} \over e^{x}-1}\,dx.}
To simplify the appearance of this expression, define the Debye temperature {\displaystyle T_{\rm {D}}}:
{\displaystyle T_{\rm {D}}\ {\stackrel {\mathrm {def} }{=}}\ {hc_{\rm {s}}R \over 2Lk}={hc_{\rm {s}} \over 2Lk}{\sqrt[{3}]{6N \over \pi }}={hc_{\rm {s}} \over 2k}{\sqrt[{3}]{{6 \over \pi }{N \over V}}}}
where {\displaystyle V} is the volume of the cubic box of side-length {\displaystyle L}.
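The defining formula is easy to evaluate numerically. In this sketch the speed of sound and the atomic number density are rough, illustrative assumptions, not values for any particular material:

```python
import math

# Debye temperature from T_D = (h*c_s / 2k) * ((6/pi) * N/V)**(1/3).
h = 6.62607015e-34    # Planck constant [J s]
k = 1.380649e-23      # Boltzmann constant [J/K]
c_s = 5000.0          # effective speed of sound [m/s] (assumed)
n_over_v = 8.5e28     # atomic number density [1/m^3] (assumed)

T_D = (h * c_s / (2 * k)) * (6 / math.pi * n_over_v) ** (1 / 3)
print(T_D)  # several hundred kelvin for these inputs
```

Typical elemental solids indeed have Debye temperatures of order a few hundred kelvin, so the order of magnitude is a quick sanity check on the formula.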
Some authors describe the Debye temperature as merely shorthand for some constants and material-dependent variables. However, {\displaystyle kT_{\rm {D}}} is roughly equal to the phonon energy of the minimum-wavelength mode, so the Debye temperature can be interpreted as the temperature at which the highest-frequency mode is excited. Additionally, since all other modes have lower energy than the highest-frequency mode, all modes are excited at this temperature.
From the total energy, the specific internal energy can be calculated:
{\displaystyle {\frac {U}{Nk}}=9T\left({T \over T_{\rm {D}}}\right)^{3}\int _{0}^{T_{\rm {D}}/T}{x^{3} \over e^{x}-1}\,dx=3TD_{3}\left({T_{\rm {D}} \over T}\right)\,,}
where {\displaystyle D_{3}(x)} is the third Debye function. Differentiating this function with respect to {\displaystyle T} produces the dimensionless heat capacity:
{\displaystyle {\frac {C_{V}}{Nk}}=9\left({T \over T_{\rm {D}}}\right)^{3}\int _{0}^{T_{\rm {D}}/T}{x^{4}e^{x} \over \left(e^{x}-1\right)^{2}}\,dx\,.}
These formulae treat the Debye model at all temperatures. The more elementary formulae given further down give the asymptotic behavior in the limit of low and high temperatures. The essential reason for the exactness at low and high energies is, respectively, that the Debye model gives the exact dispersion relation {\displaystyle E(\nu )} at low frequencies, and corresponds to the exact density of states {\textstyle (\int g(\nu )\,d\nu \equiv 3N)} at high temperatures, concerning the number of vibrations per frequency interval.
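The full heat-capacity formula can be checked by direct numerical integration. The sketch below uses a simple midpoint rule (the step count is an arbitrary choice); it recovers the Dulong–Petit value at high temperature and the T^3 law at low temperature, as derived later in the article:

```python
import math

def debye_cv(t_over_td, steps=100000):
    """Dimensionless heat capacity C_V/(N k) from the Debye integral,
    evaluated with a midpoint rule (step count is an arbitrary choice)."""
    xmax = 1.0 / t_over_td             # upper limit T_D / T
    h = xmax / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h              # midpoint avoids the x = 0 endpoint
        total += x**4 * math.exp(x) / math.expm1(x)**2 * h
    return 9.0 * t_over_td**3 * total

print(debye_cv(10.0))                  # close to the Dulong-Petit value 3
print(debye_cv(0.05))                  # close to (12*pi^4/5)*(T/T_D)^3
print(12 * math.pi**4 / 5 * 0.05**3)
```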
== Debye's derivation ==
Debye derived his equation differently and more simply. Using continuum mechanics, he found that the number of vibrational states with a frequency less than a particular value was asymptotic to
{\displaystyle n\sim {1 \over 3}\nu ^{3}VF\,,}
in which {\displaystyle V} is the volume and {\displaystyle F} is a factor that he calculated from elasticity coefficients and density. Combining this formula with the expected energy of a harmonic oscillator at temperature {\displaystyle T} (already used by Einstein in his model) would give an energy of
{\displaystyle U=\int _{0}^{\infty }\,{h\nu ^{3}VF \over e^{h\nu /kT}-1}\,d\nu \,,}
if the vibrational frequencies continued to infinity. This form gives the {\displaystyle T^{3}} behaviour which is correct at low temperatures. But Debye realized that there could not be more than {\displaystyle 3N} vibrational states for {\displaystyle N} atoms. He made the assumption that in an atomic solid, the spectrum of frequencies of the vibrational states would continue to follow the above rule, up to a maximum frequency {\displaystyle \nu _{m}} chosen so that the total number of states is
{\displaystyle 3N={1 \over 3}\nu _{m}^{3}VF\,.}
Debye knew that this assumption was not really correct (the higher frequencies are more closely spaced than assumed), but it guarantees the proper behaviour at high temperature (the Dulong–Petit law). The energy is then given by
{\displaystyle {\begin{aligned}U&=\int _{0}^{\nu _{m}}\,{h\nu ^{3}VF \over e^{h\nu /kT}-1}\,d\nu \,,\\&=VFkT(kT/h)^{3}\int _{0}^{T_{\rm {D}}/T}\,{x^{3} \over e^{x}-1}\,dx\,.\end{aligned}}}
Substituting {\displaystyle T_{\rm {D}}} for {\displaystyle h\nu _{m}/k},
{\displaystyle {\begin{aligned}U&=9NkT(T/T_{\rm {D}})^{3}\int _{0}^{T_{\rm {D}}/T}\,{x^{3} \over e^{x}-1}\,dx\,,\\&=3NkTD_{3}(T_{\rm {D}}/T)\,,\end{aligned}}}
where {\displaystyle D_{3}} is the function later given the name of third-order Debye function.
== Another derivation ==
First the vibrational frequency distribution is derived from Appendix VI of Terrell L. Hill's An Introduction to Statistical Mechanics. Consider a three-dimensional isotropic elastic solid with N atoms in the shape of a rectangular parallelepiped with side-lengths {\displaystyle L_{x},L_{y},L_{z}}. The elastic waves obey the wave equation and are plane waves; consider the wave vector {\displaystyle \mathbf {k} =(k_{x},k_{y},k_{z})} and define {\displaystyle l_{x}={\frac {k_{x}}{|\mathbf {k} |}},l_{y}={\frac {k_{y}}{|\mathbf {k} |}},l_{z}={\frac {k_{z}}{|\mathbf {k} |}}}, such that
Solutions to the wave equation are
{\displaystyle u(x,y,z,t)=\sin(2\pi \nu t)\sin \left({\frac {2\pi l_{x}x}{\lambda }}\right)\sin \left({\frac {2\pi l_{y}y}{\lambda }}\right)\sin \left({\frac {2\pi l_{z}z}{\lambda }}\right)}
and with the boundary conditions {\displaystyle u=0} at {\displaystyle x,y,z=0,x=L_{x},y=L_{y},z=L_{z}}, where {\displaystyle n_{x},n_{y},n_{z}} are positive integers. Substituting (2) into (1) and also using the dispersion relation {\displaystyle c_{s}=\lambda \nu },
{\displaystyle {\frac {n_{x}^{2}}{(2\nu L_{x}/c_{s})^{2}}}+{\frac {n_{y}^{2}}{(2\nu L_{y}/c_{s})^{2}}}+{\frac {n_{z}^{2}}{(2\nu L_{z}/c_{s})^{2}}}=1.}
The above equation, for fixed frequency {\displaystyle \nu }, describes an eighth of an ellipsoid in "mode space" (an eighth because {\displaystyle n_{x},n_{y},n_{z}} are positive). The number of modes with frequency less than {\displaystyle \nu } is thus the number of integer points inside the ellipsoid, which, in the limit of {\displaystyle L_{x},L_{y},L_{z}\to \infty } (i.e. for a very large parallelepiped), can be approximated by the volume of the ellipsoid. Hence, the number of modes {\displaystyle N(\nu )} with frequency in the range {\displaystyle [0,\nu ]} is
where {\displaystyle V=L_{x}L_{y}L_{z}} is the volume of the parallelepiped. The wave speed in the longitudinal direction differs from that in the transverse direction, and the waves can be polarised one way in the longitudinal direction and two ways in the transverse direction; an effective sound speed can therefore be defined by
{\displaystyle {\frac {3}{c_{s}^{3}}}={\frac {1}{c_{\text{long}}^{3}}}+{\frac {2}{c_{\text{trans}}^{3}}}\,.}
Following the derivation from A First Course in Thermodynamics, an upper limit to the frequency of vibration, {\displaystyle \nu _{D}}, is defined; since there are {\displaystyle N} atoms in the solid, there are {\displaystyle 3N} quantum harmonic oscillators (3 for each x-, y-, z- direction) oscillating over the range of frequencies {\displaystyle [0,\nu _{D}]}. {\displaystyle \nu _{D}} can be determined using
By defining {\displaystyle \nu _{\rm {D}}={\frac {kT_{\rm {D}}}{h}}}, where k is the Boltzmann constant and h is the Planck constant, and substituting (4) into (3),
this definition is more standard; the energy contribution for all oscillators oscillating at frequency {\displaystyle \nu } can be found. Quantum harmonic oscillators can have energies {\displaystyle E_{i}=(i+1/2)h\nu } where {\displaystyle i=0,1,2,\dotsc }, and using Maxwell–Boltzmann statistics, the number of particles with energy {\displaystyle E_{i}} is
{\displaystyle n_{i}={\frac {1}{A}}e^{-E_{i}/(kT)}={\frac {1}{A}}e^{-(i+1/2)h\nu /(kT)}.}
The energy contribution for oscillators with frequency {\displaystyle \nu } is then
By noting that {\displaystyle \sum _{i=0}^{\infty }n_{i}=dN(\nu )} (because there are {\displaystyle dN(\nu )} modes oscillating with frequency {\displaystyle \nu }),
{\displaystyle {\frac {1}{A}}e^{-1/2h\nu /(kT)}\sum _{i=0}^{\infty }e^{-ih\nu /(kT)}={\frac {1}{A}}e^{-1/2h\nu /(kT)}{\frac {1}{1-e^{-h\nu /(kT)}}}=dN(\nu ).}
From above, we can get an expression for 1/A; substituting it into (6),
{\displaystyle {\begin{aligned}dU&=dN(\nu )e^{1/2h\nu /(kT)}(1-e^{-h\nu /(kT)})\sum _{i=0}^{\infty }h\nu (i+1/2)e^{-h\nu (i+1/2)/(kT)}\\\\&=dN(\nu )(1-e^{-h\nu /(kT)})\sum _{i=0}^{\infty }h\nu (i+1/2)e^{-h\nu i/(kT)}\\&=dN(\nu )h\nu \left({\frac {1}{2}}+(1-e^{-h\nu /(kT)})\sum _{i=0}^{\infty }ie^{-h\nu i/(kT)}\right)\\&=dN(\nu )h\nu \left({\frac {1}{2}}+{\frac {1}{e^{h\nu /(kT)}-1}}\right).\end{aligned}}}
Integrating with respect to ν yields
{\displaystyle U={\frac {9Nh^{4}}{k^{3}T_{\rm {D}}^{3}}}\int _{0}^{\nu _{D}}\left({\frac {1}{2}}+{\frac {1}{e^{h\nu /(kT)}-1}}\right)\nu ^{3}d\nu .}
== Temperature limits ==
The temperature of a Debye solid is said to be low if {\displaystyle T\ll T_{\rm {D}}}, leading to
{\displaystyle {\frac {C_{V}}{Nk}}\sim 9\left({T \over T_{\rm {D}}}\right)^{3}\int _{0}^{\infty }{x^{4}e^{x} \over \left(e^{x}-1\right)^{2}}\,dx.}
This definite integral can be evaluated exactly:
{\displaystyle {\frac {C_{V}}{Nk}}\sim {12\pi ^{4} \over 5}\left({T \over T_{\rm {D}}}\right)^{3}.}
In the low-temperature limit, the limitations of the Debye model mentioned above do not apply, and it gives a correct relationship between (phononic) heat capacity, temperature, the elastic coefficients, and the volume per atom (the latter quantities being contained in the Debye temperature).
The temperature of a Debye solid is said to be high if {\displaystyle T\gg T_{\rm {D}}}. Using {\displaystyle e^{x}-1\approx x} if {\displaystyle |x|\ll 1} leads to
{\displaystyle {\frac {C_{V}}{Nk}}\sim 9\left({T \over T_{\rm {D}}}\right)^{3}\int _{0}^{T_{\rm {D}}/T}{x^{4} \over x^{2}}\,dx}
which upon integration gives
{\displaystyle {\frac {C_{V}}{Nk}}\sim 3\,.}
This is the Dulong–Petit law, and is fairly accurate although it does not take into account anharmonicity, which causes the heat capacity to rise further. The total heat capacity of the solid, if it is a conductor or semiconductor, may also contain a non-negligible contribution from the electrons.
== Debye versus Einstein ==
The Debye and Einstein models correspond closely to experimental data, but the Debye model is correct at low temperatures whereas the Einstein model is not. To visualize the difference between the models, one would naturally plot the two on the same set of axes, but this is not immediately possible, as both the Einstein model and the Debye model provide only a functional form for the heat capacity. As models, they require scales to relate them to their real-world counterparts. One can see that the scale of the Einstein model is given by {\displaystyle \epsilon /k}:
{\displaystyle C_{V}=3Nk\left({\epsilon \over kT}\right)^{2}{e^{\epsilon /kT} \over \left(e^{\epsilon /kT}-1\right)^{2}}.}
The scale of the Debye model is {\displaystyle T_{\rm {D}}}, the Debye temperature. Both are usually found by fitting the models to the experimental data. (The Debye temperature can theoretically be calculated from the speed of sound and crystal dimensions.) Because the two methods approach the problem from different directions and different geometries, Einstein and Debye scales are not the same, that is to say
{\displaystyle {\epsilon \over k}\neq T_{\rm {D}}\,,}
which means that plotting them on the same set of axes makes no sense. They are two models of the same thing, but of different scales. If one defines the Einstein temperature as
{\displaystyle T_{\rm {E}}\ {\stackrel {\mathrm {def} }{=}}\ {\epsilon \over k}\,,}
then one can say
{\displaystyle T_{\rm {E}}\neq T_{\rm {D}}\,,}
and, to relate the two, the ratio
{\displaystyle {\frac {T_{\rm {E}}}{T_{\rm {D}}}}\,}
is used.
is used.
The Einstein solid is composed of single-frequency quantum harmonic oscillators, {\displaystyle \epsilon =\hbar \omega =h\nu }. That frequency, if it indeed existed, would be related to the speed of sound in the solid. If one imagines the propagation of sound as a sequence of atoms hitting one another, then the frequency of oscillation must correspond to the minimum wavelength sustainable by the atomic lattice, {\displaystyle \lambda _{min}}, where
{\displaystyle \nu ={c_{\rm {s}} \over \lambda }={c_{\rm {s}}{\sqrt[{3}]{N}} \over 2L}={c_{\rm {s}} \over 2}{\sqrt[{3}]{N \over V}}\,,}
which makes the Einstein temperature
{\displaystyle T_{\rm {E}}={\epsilon \over k}={h\nu \over k}={hc_{\rm {s}} \over 2k}{\sqrt[{3}]{N \over V}}\,,}
and the sought ratio is therefore
{\displaystyle {T_{\rm {E}} \over T_{\rm {D}}}={\sqrt[{3}]{\pi \over 6}}\ =0.805995977...}
Using the ratio, both models can be plotted on the same graph. It is the cube root of the ratio of the volume of one octant of a three-dimensional sphere to the volume of the cube that contains it, which is just the correction factor used by Debye when approximating the energy integral above. Alternatively, the ratio of the two temperatures can be seen to be the ratio of Einstein's single frequency at which all oscillators oscillate and Debye's maximum frequency. Einstein's single frequency can then be seen to be a mean of the frequencies available to the Debye model.
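The quoted numerical value follows directly from the geometric interpretation, as a one-line check:

```python
import math

# The Einstein-to-Debye temperature ratio is the cube root of pi/6,
# the volume ratio of a sphere octant to its enclosing cube.
ratio = (math.pi / 6) ** (1 / 3)
print(ratio)  # ≈ 0.80599...
```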
== Debye temperature table ==
Even though the Debye model is not completely correct, it gives a good approximation for the low-temperature heat capacity of insulating, crystalline solids, where other contributions (such as highly mobile conduction electrons) are negligible. For metals, the electron contribution to the heat capacity is proportional to {\displaystyle T}, which at low temperatures dominates the Debye {\displaystyle T^{3}} result for lattice vibrations. In this case, the Debye model can only be said to approximate the lattice contribution to the specific heat. The following table lists Debye temperatures for several pure elements and sapphire:
The Debye model's fit to experimental data is often phenomenologically improved by allowing the Debye temperature to become temperature dependent; for example, the value for ice increases from about 222 K to 300 K as the temperature goes from absolute zero to about 100 K.
== Extension to other quasi-particles ==
For other bosonic quasi-particles, e.g., magnons (quantized spin waves) in ferromagnets instead of the phonons (quantized sound waves), one can derive analogous results. In this case at low frequencies one has different dispersion relations of momentum and energy, e.g., {\displaystyle E(\nu )\propto k^{2}} in the case of magnons, instead of {\displaystyle E(\nu )\propto k} for phonons (with {\displaystyle k=2\pi /\lambda }). One also has different densities of states (e.g., {\displaystyle \int g(\nu ){\rm {d}}\nu \equiv N\,}). As a consequence, in ferromagnets one gets a magnon contribution to the heat capacity, {\displaystyle \Delta C_{\,{\rm {V|\,magnon}}}\,\propto T^{3/2}}, which at sufficiently low temperatures dominates the phonon contribution, {\displaystyle \,\Delta C_{\,{\rm {V|\,phonon}}}\propto T^{3}}. In metals, in contrast, the main low-temperature contribution to the heat capacity, {\displaystyle \propto T}, comes from the electrons. It is fermionic, and is calculated by different methods going back to Sommerfeld's free electron model.
== Extension to liquids ==
It was long thought that phonon theory is not able to explain the heat capacity of liquids, since liquids only sustain longitudinal, but not transverse phonons, which in solids are responsible for 2/3 of the heat capacity. However, Brillouin scattering experiments with neutrons and with X-rays, confirming an intuition of Yakov Frenkel, have shown that transverse phonons do exist in liquids, albeit restricted to frequencies above a threshold called the Frenkel frequency. Since most energy is contained in these high-frequency modes, a simple modification of the Debye model is sufficient to yield a good approximation to experimental heat capacities of simple liquids. More recently, it has been shown that instantaneous normal modes associated with relaxations from saddle points in the liquid energy landscape, which dominate the frequency spectrum of liquids at low frequencies, may determine the specific heat of liquids as a function of temperature over a broad range.
== Debye frequency ==
The Debye frequency (symbol: {\displaystyle \omega _{\rm {Debye}}} or {\displaystyle \omega _{\rm {D}}}) is a parameter in the Debye model that refers to a cut-off angular frequency for waves of a harmonic chain of masses, used to describe the movement of ions in a crystal lattice and, more specifically, to correctly predict that the heat capacity in such crystals is constant at high temperatures (Dulong–Petit law). The concept was first introduced by Peter Debye in 1912.
Throughout this section, periodic boundary conditions are assumed.
=== Definition ===
Assuming the dispersion relation is
{\displaystyle \omega =v_{\rm {s}}|\mathbf {k} |,}
with {\displaystyle v_{\rm {s}}} the speed of sound in the crystal and k the wave vector, the value of the Debye frequency is as follows:
For a one-dimensional monatomic chain, the Debye frequency is equal to
{\displaystyle \omega _{\rm {D}}=v_{\rm {s}}\pi /a=v_{\rm {s}}\pi N/L=v_{\rm {s}}\pi \lambda ,}
with {\displaystyle a} the distance between two neighbouring atoms in the chain when the system is in its ground state of energy (here meaning that none of the atoms are moving with respect to one another); {\displaystyle N} the total number of atoms in the chain; {\displaystyle L} the size of the system, which is the length of the chain; and {\displaystyle \lambda } the linear number density. For {\displaystyle L}, {\displaystyle N}, and {\displaystyle a}, the relation {\displaystyle L=Na} holds.
For a two-dimensional monatomic square lattice, the Debye frequency is equal to
{\displaystyle \omega _{\rm {D}}^{2}={\frac {4\pi }{a^{2}}}v_{\rm {s}}^{2}={\frac {4\pi N}{A}}v_{\rm {s}}^{2}\equiv 4\pi \sigma v_{\rm {s}}^{2},}
with {\displaystyle A\equiv L^{2}=Na^{2}} the size (area) of the surface, and {\displaystyle \sigma } the surface number density.
For a three-dimensional monatomic primitive cubic crystal, the Debye frequency is equal to
{\displaystyle \omega _{\rm {D}}^{3}={\frac {6\pi ^{2}}{a^{3}}}v_{\rm {s}}^{3}={\frac {6\pi ^{2}N}{V}}v_{\rm {s}}^{3}\equiv 6\pi ^{2}\rho v_{\rm {s}}^{3},}
with {\displaystyle V\equiv L^{3}=Na^{3}} the volume of the system, and {\displaystyle \rho } the volume number density.
The general formula for the Debye frequency as a function of {\displaystyle n}, the number of dimensions for a (hyper)cubic lattice, is
{\displaystyle \omega _{\rm {D}}^{n}=2^{n}\pi ^{n/2}\Gamma \left(1+{\tfrac {n}{2}}\right){\frac {N}{L^{n}}}v_{\rm {s}}^{n},}
with {\displaystyle \Gamma } being the gamma function.
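As a quick numerical check, the general formula can be evaluated and compared against the 1D, 2D and 3D special cases given above. The numbers below (N, L, v_s) are purely illustrative:

```python
import math

def debye_frequency(n, N, L, v_s):
    """Debye frequency for an n-dimensional (hyper)cubic lattice:
    omega_D^n = 2^n * pi^(n/2) * Gamma(1 + n/2) * (N / L^n) * v_s^n."""
    omega_n = 2**n * math.pi**(n / 2) * math.gamma(1 + n / 2) * (N / L**n) * v_s**n
    return omega_n**(1 / n)

# Illustrative numbers: N = 1000 atoms, L = 1e-6 m, v_s = 5000 m/s.
N, L, v_s = 1000, 1e-6, 5000.0

w1 = debye_frequency(1, N, L, v_s)
assert math.isclose(w1, v_s * math.pi * N / L)                   # omega_D = pi v_s N / L

w2 = debye_frequency(2, N, L, v_s)
assert math.isclose(w2**2, 4 * math.pi * N / L**2 * v_s**2)      # omega_D^2 = 4 pi (N/A) v_s^2

w3 = debye_frequency(3, N, L, v_s)
assert math.isclose(w3**3, 6 * math.pi**2 * N / L**3 * v_s**3)   # omega_D^3 = 6 pi^2 (N/V) v_s^3
```

The agreement follows from Γ(3/2) = √π/2, Γ(2) = 1 and Γ(5/2) = 3√π/4, which reduce the general prefactor to π, 4π and 6π², respectively.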
The speed of sound in the crystal depends on the mass of the atoms, the strength of their interaction, the pressure on the system, and the polarisation of the wave (longitudinal or transverse), among other factors. For the following, the speed of sound is assumed to be the same for any polarisation, although this limits the applicability of the result.
The assumed dispersion relation is easily proven inaccurate for a one-dimensional chain of masses, but in Debye's model, this does not prove to be problematic.
=== Relation to Debye's temperature ===
The Debye temperature {\displaystyle \theta _{\rm {D}}}, another parameter in the Debye model, is related to the Debye frequency by the relation
{\displaystyle \theta _{\rm {D}}={\frac {\hbar }{k_{\rm {B}}}}\omega _{\rm {D}},}
where {\displaystyle \hbar } is the reduced Planck constant and {\displaystyle k_{\rm {B}}} is the Boltzmann constant.
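This conversion is a one-liner; the sketch below hard-codes the CODATA constants to stay self-contained, and the example Debye frequency of 4.5×10¹³ rad/s is an illustrative value of the order found in common metals such as copper:

```python
# Physical constants (CODATA values, hard-coded to keep this self-contained).
hbar = 1.054571817e-34   # reduced Planck constant, J s
k_B = 1.380649e-23       # Boltzmann constant, J/K

def debye_temperature(omega_D):
    """Debye temperature theta_D = (hbar / k_B) * omega_D."""
    return hbar / k_B * omega_D

# An illustrative Debye frequency of ~4.5e13 rad/s corresponds to a
# Debye temperature of a few hundred kelvin.
theta = debye_temperature(4.5e13)
assert 300 < theta < 400
```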
=== Debye's derivation ===
==== Three-dimensional crystal ====
In Debye's derivation of the heat capacity, he sums over all possible modes of the system, accounting for different directions and polarisations. He assumed the total number of modes per polarization to be {\displaystyle N}, the number of masses in the system, and the total to be
{\displaystyle \sum _{\rm {modes}}3=3N,}
with three polarizations per mode. The sum runs over all modes without differentiating between different polarizations, and then counts the total number of polarization-mode combinations. Debye made this assumption based on the classical-mechanics result that the number of modes per polarization in a chain of masses is always equal to the number of masses in the chain.
The left hand side can be made explicit to show how it depends on the Debye frequency, introduced first as a cut-off frequency beyond which no frequencies exist. By relating the cut-off frequency to the maximum number of modes, an expression for the cut-off frequency can be derived.
First of all, by assuming {\displaystyle L} to be very large ({\displaystyle L} ≫ 1, with {\displaystyle L} the size of the system in any of the three directions), the smallest wave vector in any direction can be approximated by {\displaystyle dk_{i}=2\pi /L}, with {\displaystyle i=x,y,z}. Smaller wave vectors cannot exist because of the periodic boundary conditions. Thus the summation becomes
{\displaystyle \sum _{\rm {modes}}3={\frac {3V}{(2\pi )^{3}}}\iiint d\mathbf {k} ,}
where {\displaystyle \mathbf {k} \equiv (k_{x},k_{y},k_{z})}; {\displaystyle V\equiv L^{3}} is the size of the system; and the integral runs (as the summation did) over all possible modes, which is assumed to be a finite region (bounded by the cut-off frequency).
The triple integral can be rewritten as a single integral over all possible values of the absolute value of {\displaystyle \mathbf {k} } (see Jacobian for spherical coordinates). The result is
{\displaystyle {\frac {3V}{(2\pi )^{3}}}\iiint d\mathbf {k} ={\frac {3V}{2\pi ^{2}}}\int _{0}^{k_{\rm {D}}}|\mathbf {k} |^{2}dk,}
with {\displaystyle k_{\rm {D}}} the absolute value of the wave vector corresponding to the Debye frequency, so {\displaystyle k_{\rm {D}}=\omega _{\rm {D}}/v_{\rm {s}}}.
Since the dispersion relation is {\displaystyle \omega =v_{\rm {s}}|\mathbf {k} |}, it can be written as an integral over all possible {\displaystyle \omega }:
{\displaystyle {\frac {3V}{2\pi ^{2}}}\int _{0}^{k_{\rm {D}}}|\mathbf {k} |^{2}dk={\frac {3V}{2\pi ^{2}v_{\rm {s}}^{3}}}\int _{0}^{\omega _{\rm {D}}}\omega ^{2}d\omega .}
After solving the integral, it is again equated to {\displaystyle 3N} to find
{\displaystyle {\frac {V}{2\pi ^{2}v_{\rm {s}}^{3}}}\omega _{\rm {D}}^{3}=3N.}
It can be rearranged into
{\displaystyle \omega _{\rm {D}}^{3}={\frac {6\pi ^{2}N}{V}}v_{\rm {s}}^{3}.}
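The mode-counting step above can be verified numerically: integrating the mode density 3V/(2π²v_s³)·ω² from 0 to the Debye frequency should recover exactly 3N modes. The numbers below are illustrative:

```python
import math

# Check that the density of modes 3V/(2 pi^2 v_s^3) * omega^2, integrated
# from 0 to omega_D, yields 3N when omega_D^3 = 6 pi^2 (N/V) v_s^3.
N, V, v_s = 1000, 1e-24, 5000.0          # illustrative values
omega_D = (6 * math.pi**2 * N / V)**(1 / 3) * v_s

# Trapezoidal integration of the mode density over [0, omega_D].
M = 100_000
total = 0.0
for j in range(M):
    w0 = omega_D * j / M
    w1 = omega_D * (j + 1) / M
    f0 = 3 * V / (2 * math.pi**2 * v_s**3) * w0**2
    f1 = 3 * V / (2 * math.pi**2 * v_s**3) * w1**2
    total += 0.5 * (f0 + f1) * (w1 - w0)

assert math.isclose(total, 3 * N, rel_tol=1e-6)
```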
==== One-dimensional chain in 3D space ====
The same derivation could be done for a one-dimensional chain of atoms. The number of modes remains unchanged, because there are still three polarizations, so
{\displaystyle \sum _{\rm {modes}}3=3N.}
The rest of the derivation is analogous to the previous one, so the left-hand side is rewritten with respect to the Debye frequency:
{\displaystyle \sum _{\rm {modes}}3={\frac {3L}{2\pi }}\int _{-k_{\rm {D}}}^{k_{\rm {D}}}dk={\frac {3L}{\pi v_{\rm {s}}}}\int _{0}^{\omega _{\rm {D}}}d\omega .}
In the last step, the factor of two appears because the integrand of the first integral is even and its bounds are symmetric about the origin, so the integral can be rewritten as running from 0 to {\displaystyle k_{D}} after scaling by a factor of 2. (Equivalently, the volume of a one-dimensional ball is twice its radius.) Substituting {\displaystyle k={\frac {\omega }{v_{s}}}} changes the bounds to 0 and {\displaystyle \omega _{D}=k_{D}v_{s}}, which gives the rightmost integral. Continuing,
{\displaystyle {\frac {3L}{\pi v_{\rm {s}}}}\int _{0}^{\omega _{\rm {D}}}d\omega ={\frac {3L}{\pi v_{\rm {s}}}}\omega _{\rm {D}}=3N.}
Conclusion:
{\displaystyle \omega _{\rm {D}}={\frac {\pi v_{\rm {s}}N}{L}}.}
==== Two-dimensional crystal ====
The same derivation could be done for a two-dimensional crystal. The number of modes remains unchanged, because there are still three polarizations. The derivation is analogous to the previous two. We start with the same equation,
{\displaystyle \sum _{\rm {modes}}3=3N.}
Then the left-hand side is rewritten and equated to {\displaystyle 3N}:
{\displaystyle \sum _{\rm {modes}}3={\frac {3A}{(2\pi )^{2}}}\iint d\mathbf {k} ={\frac {3A}{2\pi v_{\rm {s}}^{2}}}\int _{0}^{\omega _{\rm {D}}}\omega d\omega ={\frac {3A\omega _{\rm {D}}^{2}}{4\pi v_{\rm {s}}^{2}}}=3N,}
where {\displaystyle A\equiv L^{2}} is the size of the system.
It can be rewritten as
{\displaystyle \omega _{\rm {D}}^{2}={\frac {4\pi N}{A}}v_{\rm {s}}^{2}.}
=== Polarization dependence ===
In reality, longitudinal waves often have a different wave velocity from transverse waves. Assuming the velocities to be equal simplified the final result, but reintroducing the distinction improves its accuracy.
The dispersion relation becomes {\displaystyle \omega _{i}=v_{s,i}|\mathbf {k} |}, with {\displaystyle i=1,2,3}, each corresponding to one of the three polarizations. The cut-off frequency {\displaystyle \omega _{\rm {D}}}, however, does not depend on {\displaystyle i}. We can write the total number of modes as {\displaystyle \sum _{i}\sum _{\rm {modes}}1}, which is again equal to {\displaystyle 3N}. Here the summation over the modes depends on {\displaystyle i}.
==== One-dimensional chain in 3D space ====
The summation over the modes is rewritten
{\displaystyle \sum _{i}\sum _{\rm {modes}}1=\sum _{i}{\frac {L}{\pi v_{s,i}}}\int _{0}^{\omega _{\rm {D}}}d\omega _{i}=3N.}
The result is
{\displaystyle {\frac {L\omega _{\rm {D}}}{\pi }}({\frac {1}{v_{s,1}}}+{\frac {1}{v_{s,2}}}+{\frac {1}{v_{s,3}}})=3N.}
Thus the Debye frequency is found to be
{\displaystyle \omega _{\rm {D}}={\frac {\pi N}{L}}{\frac {3}{{\frac {1}{v_{s,1}}}+{\frac {1}{v_{s,2}}}+{\frac {1}{v_{s,3}}}}}={\frac {3\pi N}{L}}{\frac {v_{s,1}v_{s,2}v_{s,3}}{v_{s,2}v_{s,3}+v_{s,1}v_{s,3}+v_{s,1}v_{s,2}}}={\frac {\pi N}{L}}v_{\mathrm {eff} }\,.}
The calculated effective velocity {\displaystyle v_{\mathrm {eff} }} is the harmonic mean of the velocities for each polarization. By assuming the two transverse polarizations to have the same phase speed and frequency,
{\displaystyle \omega _{\rm {D}}={\frac {3\pi N}{L}}{\frac {v_{s,t}v_{s,l}}{2v_{s,l}+v_{s,t}}}.}
Setting {\displaystyle v_{s,t}=v_{s,l}} recovers the expression previously derived under the assumption that the velocity is the same for all polarization modes.
==== Two-dimensional crystal ====
The same derivation can be done for a two-dimensional crystal to find
{\displaystyle \omega _{\rm {D}}^{2}={\frac {4\pi N}{A}}{\frac {3}{{\frac {1}{v_{s,1}^{2}}}+{\frac {1}{v_{s,2}^{2}}}+{\frac {1}{v_{s,3}^{2}}}}}={\frac {12\pi N}{A}}{\frac {(v_{s,1}v_{s,2}v_{s,3})^{2}}{(v_{s,2}v_{s,3})^{2}+(v_{s,1}v_{s,3})^{2}+(v_{s,1}v_{s,2})^{2}}}={\frac {4\pi N}{A}}v_{\mathrm {eff} }^{2}\,.}
The calculated effective velocity {\displaystyle v_{\mathrm {eff} }} is the square root of the harmonic mean of the squares of the velocities. By assuming the two transverse polarizations to be the same,
{\displaystyle \omega _{\rm {D}}^{2}={\frac {12\pi N}{A}}{\frac {(v_{s,t}v_{s,l})^{2}}{2v_{s,l}^{2}+v_{s,t}^{2}}}.}
Setting {\displaystyle v_{s,t}=v_{s,l}} recovers the expression previously derived under the assumption that the velocity is the same for all polarization modes.
==== Three-dimensional crystal ====
The same derivation can be done for a three-dimensional crystal to find (the derivation is analogous to previous derivations)
{\displaystyle \omega _{\rm {D}}^{3}={\frac {6\pi ^{2}N}{V}}{\frac {3}{{\frac {1}{v_{s,1}^{3}}}+{\frac {1}{v_{s,2}^{3}}}+{\frac {1}{v_{s,3}^{3}}}}}={\frac {18\pi ^{2}N}{V}}{\frac {(v_{s,1}v_{s,2}v_{s,3})^{3}}{(v_{s,2}v_{s,3})^{3}+(v_{s,1}v_{s,3})^{3}+(v_{s,1}v_{s,2})^{3}}}={\frac {6\pi ^{2}N}{V}}v_{\mathrm {eff} }^{3}\,.}
The calculated effective velocity {\displaystyle v_{\mathrm {eff} }} is the cube root of the harmonic mean of the cubes of the velocities. By assuming the two transverse polarizations to be the same,
{\displaystyle \omega _{\rm {D}}^{3}={\frac {18\pi ^{2}N}{V}}{\frac {(v_{s,t}v_{s,l})^{3}}{2v_{s,l}^{3}+v_{s,t}^{3}}}.}
Setting {\displaystyle v_{s,t}=v_{s,l}} recovers the expression previously derived under the assumption that the velocity is the same for all polarization modes.
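The effective velocities in one, two and three dimensions share one pattern: the d-th root of the harmonic mean of the d-th powers of the polarization velocities. A short sketch (illustrative velocities) confirms this and the single-speed limit:

```python
import math

def v_eff(vels, d):
    """Effective sound velocity in d dimensions: the d-th root of the
    harmonic mean of the d-th powers of the polarization velocities."""
    hm = len(vels) / sum(1 / v**d for v in vels)
    return hm**(1 / d)

# Two transverse polarizations with speed v_t and one longitudinal with v_l
# (illustrative numbers, m/s):
v_t, v_l = 3000.0, 5000.0
vels = [v_t, v_t, v_l]

# 1D: v_eff = 3 v_t v_l / (2 v_l + v_t)
assert math.isclose(v_eff(vels, 1), 3 * v_t * v_l / (2 * v_l + v_t))
# 3D: v_eff^3 = 3 (v_t v_l)^3 / (2 v_l^3 + v_t^3)
assert math.isclose(v_eff(vels, 3)**3, 3 * (v_t * v_l)**3 / (2 * v_l**3 + v_t**3))
# Equal velocities recover the single-speed result in any dimension.
assert math.isclose(v_eff([v_l, v_l, v_l], 2), v_l)
```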
=== Derivation with the actual dispersion relation ===
This problem can be made more applicable by relaxing the assumption of linearity of the dispersion relation. Instead of using the dispersion relation {\displaystyle \omega =v_{\rm {s}}k}, a more accurate dispersion relation can be used. In classical mechanics, it is known that for an equidistant chain of masses which interact harmonically with each other, the dispersion relation is
{\displaystyle \omega (k)=2{\sqrt {\frac {\kappa }{m}}}\left|\sin \left({\frac {ka}{2}}\right)\right|,}
with {\displaystyle m} being the mass of each atom, {\displaystyle \kappa } the spring constant of the harmonic oscillator, and {\displaystyle a} still the spacing between atoms in the ground state. After plotting this relation, Debye's estimation of the cut-off wavelength based on the linear assumption remains accurate, because for every wavenumber bigger than {\displaystyle \pi /a} (that is, for {\displaystyle \lambda } smaller than {\displaystyle 2a}), a wavenumber smaller than {\displaystyle \pi /a} can be found with the same angular frequency. This means the resulting physical manifestation of the mode with the larger wavenumber is indistinguishable from that of the mode with the smaller wavenumber. Therefore, the study of the dispersion relation can be limited to the first Brillouin zone {\textstyle k\in \left[-{\frac {\pi }{a}},{\frac {\pi }{a}}\right]} without any loss of accuracy or information. This is possible because the system consists of discretized points, as is demonstrated in the animated picture. Dividing the dispersion relation by {\displaystyle k} and inserting {\displaystyle \pi /a} for {\displaystyle k}, we find the speed of a wave with {\displaystyle k=\pi /a} to be
{\displaystyle v_{\rm {s}}(k=\pi /a)={\frac {2a}{\pi }}{\sqrt {\frac {\kappa }{m}}}.}
By simply inserting {\displaystyle k=\pi /a} in the original dispersion relation, we find
{\displaystyle \omega (k=\pi /a)=2{\sqrt {\frac {\kappa }{m}}}=\omega _{\rm {D}}.}
Combining these results, the same result is once again found:
{\displaystyle \omega _{\rm {D}}={\frac {\pi v_{\rm {s}}}{a}}.}
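These zone-boundary relations are easy to check directly from the sinusoidal dispersion: at k = π/a the frequency equals 2√(κ/m), and dividing by k gives the zone-boundary phase speed used in the text. The spring constant, mass and spacing below are illustrative values:

```python
import math

# Harmonic chain dispersion omega(k) = 2*sqrt(kappa/m)*|sin(k a / 2)|.
kappa, m, a = 10.0, 1e-3, 1e-2   # illustrative spring constant, mass, spacing

def omega(k):
    return 2 * math.sqrt(kappa / m) * abs(math.sin(k * a / 2))

k_D = math.pi / a                 # zone-boundary (cut-off) wavenumber
v_s = omega(k_D) / k_D            # phase speed of the zone-boundary wave

assert math.isclose(v_s, (2 * a / math.pi) * math.sqrt(kappa / m))
assert math.isclose(omega(k_D), 2 * math.sqrt(kappa / m))     # omega_D
assert math.isclose(omega(k_D), math.pi * v_s / a)            # omega_D = pi v_s / a
```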
However, for any chain with greater complexity, including diatomic chains, the associated cut-off frequency and wavelength are not very accurate, since the cut-off wavelength is twice as big and the dispersion relation acquires additional branches (two in total for a diatomic chain). It is also not certain from this result whether, for higher-dimensional systems, the cut-off frequency was accurately predicted by Debye when the more accurate dispersion relation is taken into account.
=== Alternative derivation ===
For a one-dimensional chain, the formula for the Debye frequency can also be reproduced using a theorem for describing aliasing. The Nyquist–Shannon sampling theorem is used for this derivation, the main difference being that in the case of a one-dimensional chain, the discretization is not in time, but in space.
The cut-off frequency can be determined from the cut-off wavelength. From the sampling theorem, we know that for wavelengths smaller than {\displaystyle 2a}, or twice the sampling distance, every mode is a repeat of a mode with wavelength larger than {\displaystyle 2a}, so the cut-off wavelength should be at {\displaystyle \lambda _{\rm {D}}=2a}. This results again in {\displaystyle k_{\rm {D}}={\frac {2\pi }{\lambda _{D}}}=\pi /a}, rendering
{\displaystyle \omega _{\rm {D}}={\frac {\pi v_{\rm {s}}}{a}}.}
It does not matter which dispersion relation is used, as the same cut-off frequency would be calculated.
== See also ==
Bose gas
Gas in a box
Grüneisen parameter
Bloch–Grüneisen temperature
Electrical resistivity and conductivity#Temperature dependence
== References ==
== Further reading ==
CRC Handbook of Chemistry and Physics, 56th Edition (1975–1976)
Schroeder, Daniel V. An Introduction to Thermal Physics. Addison-Wesley, San Francisco (2000). Section 7.5.
== External links ==
Experimental determination of specific heat, thermal and heat conductivity of quartz using a cryostat.
Simon, Steven H. (2014) The Oxford Solid State Basics (most relevant ones: 1, 2 and 6) | Wikipedia/Debye_model |
In physics, a partition function describes the statistical properties of a system in thermodynamic equilibrium. Partition functions are functions of the thermodynamic state variables, such as the temperature and volume. Most of the aggregate thermodynamic variables of the system, such as the total energy, free energy, entropy, and pressure, can be expressed in terms of the partition function or its derivatives. The partition function is dimensionless.
Each partition function is constructed to represent a particular statistical ensemble (which, in turn, corresponds to a particular free energy). The most common statistical ensembles have named partition functions. The canonical partition function applies to a canonical ensemble, in which the system is allowed to exchange heat with the environment at fixed temperature, volume, and number of particles. The grand canonical partition function applies to a grand canonical ensemble, in which the system can exchange both heat and particles with the environment, at fixed temperature, volume, and chemical potential. Other types of partition functions can be defined for different circumstances; see partition function (mathematics) for generalizations. The partition function has many physical meanings, as discussed in Meaning and significance.
== Canonical partition function ==
=== Definition ===
Initially, let us assume that a thermodynamically large system is in thermal contact with the environment, with a temperature T, and both the volume of the system and the number of constituent particles are fixed. A collection of this kind of system comprises an ensemble called a canonical ensemble. The appropriate mathematical expression for the canonical partition function depends on the degrees of freedom of the system, whether the context is classical mechanics or quantum mechanics, and whether the spectrum of states is discrete or continuous.
==== Classical discrete system ====
For a canonical ensemble that is classical and discrete, the canonical partition function is defined as
{\displaystyle Z=\sum _{i}e^{-\beta E_{i}},}
where {\displaystyle i} is the index for the microstates of the system; {\displaystyle e} is Euler's number; {\displaystyle \beta } is the thermodynamic beta, defined as {\displaystyle {\tfrac {1}{k_{\text{B}}T}}} where {\displaystyle k_{\text{B}}} is the Boltzmann constant; and {\displaystyle E_{i}} is the total energy of the system in the respective microstate.
The exponential factor {\displaystyle e^{-\beta E_{i}}} is otherwise known as the Boltzmann factor.
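For a small discrete system, the partition function and the resulting Boltzmann probabilities can be computed directly from this definition. A minimal sketch, using units in which k_B = 1 and an illustrative two-level spectrum:

```python
import math

def canonical_probabilities(energies, T, k_B=1.0):
    """Partition function Z = sum_i exp(-beta E_i) and the Boltzmann
    probabilities p_i = exp(-beta E_i) / Z for a discrete system."""
    beta = 1.0 / (k_B * T)
    factors = [math.exp(-beta * E) for E in energies]  # Boltzmann factors
    Z = sum(factors)                                   # partition function
    return Z, [f / Z for f in factors]

# A two-level system with energies 0 and 1 at temperature T = 1:
Z, probs = canonical_probabilities([0.0, 1.0], T=1.0)
assert math.isclose(Z, 1.0 + math.exp(-1.0))
assert math.isclose(sum(probs), 1.0)      # probabilities are normalised
assert probs[0] > probs[1]                # the lower level is more populated
```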
==== Classical continuous system ====
In classical mechanics, the position and momentum variables of a particle can vary continuously, so the set of microstates is actually uncountable. In classical statistical mechanics, it is rather inaccurate to express the partition function as a sum of discrete terms. In this case we must describe the partition function using an integral rather than a sum. For a canonical ensemble that is classical and continuous, the canonical partition function is defined as
{\displaystyle Z={\frac {1}{h^{3}}}\int e^{-\beta H(q,p)}\,d^{3}q\,d^{3}p,}
where {\displaystyle h} is the Planck constant; {\displaystyle \beta } is the thermodynamic beta, defined as {\displaystyle {\tfrac {1}{k_{\text{B}}T}}}; {\displaystyle H(q,p)} is the Hamiltonian of the system; {\displaystyle q} is the canonical position; and {\displaystyle p} is the canonical momentum.
To make it into a dimensionless quantity, we must divide it by h, which is some quantity with units of action (usually taken to be the Planck constant).
For generalized cases, the partition function of {\displaystyle N} particles in {\displaystyle d} dimensions is given by
{\displaystyle Z={\frac {1}{h^{Nd}}}\int \prod _{i=1}^{N}e^{-\beta {\mathcal {H}}({\textbf {q}}_{i},{\textbf {p}}_{i})}\,d^{d}{\textbf {q}}_{i}\,d^{d}{\textbf {p}}_{i}.}
==== Classical continuous system (multiple identical particles) ====
For a gas of {\displaystyle N} identical classical non-interacting particles in three dimensions, the partition function is
{\displaystyle Z={\frac {1}{N!h^{3N}}}\int \,\exp \left(-\beta \sum _{i=1}^{N}H({\textbf {q}}_{i},{\textbf {p}}_{i})\right)\;d^{3}q_{1}\cdots d^{3}q_{N}\,d^{3}p_{1}\cdots d^{3}p_{N}={\frac {Z_{\text{single}}^{N}}{N!}},}
where {\displaystyle h} is the Planck constant; {\displaystyle \beta } is the thermodynamic beta, defined as {\displaystyle {\tfrac {1}{k_{\text{B}}T}}}; {\displaystyle i} is the index for the particles of the system; {\displaystyle H} is the Hamiltonian of a respective particle; {\displaystyle q_{i}} is the canonical position of the respective particle; {\displaystyle p_{i}} is the canonical momentum of the respective particle; and {\displaystyle d^{3}} is shorthand notation indicating that {\displaystyle q_{i}} and {\displaystyle p_{i}} are vectors in three-dimensional space. {\displaystyle Z_{\text{single}}} is the classical continuous partition function of a single particle, as given in the previous section.
The reason for the factorial factor N! is discussed below. The extra constant factor in the denominator is needed because, unlike the discrete form, the continuous form shown above is not dimensionless. As stated in the previous section, to make it into a dimensionless quantity, we must divide it by h3N (where h is usually taken to be the Planck constant).
==== Quantum mechanical discrete system ====
For a canonical ensemble that is quantum mechanical and discrete, the canonical partition function is defined as the trace of the Boltzmann factor:
{\displaystyle Z=\operatorname {tr} (e^{-\beta {\hat {H}}}),}
where {\displaystyle \operatorname {tr} (\circ )} is the trace of a matrix; {\displaystyle \beta } is the thermodynamic beta, defined as {\displaystyle {\tfrac {1}{k_{\text{B}}T}}}; and {\displaystyle {\hat {H}}} is the Hamiltonian operator. The dimension of {\displaystyle e^{-\beta {\hat {H}}}} is the number of energy eigenstates of the system.
==== Quantum mechanical continuous system ====
For a canonical ensemble that is quantum mechanical and continuous, the canonical partition function is defined as
{\displaystyle Z={\frac {1}{h}}\int \left\langle q,p\right\vert e^{-\beta {\hat {H}}}\left\vert q,p\right\rangle \,dq\,dp,}
where {\displaystyle h} is the Planck constant; {\displaystyle \beta } is the thermodynamic beta, defined as {\displaystyle {\tfrac {1}{k_{\text{B}}T}}}; {\displaystyle {\hat {H}}} is the Hamiltonian operator; {\displaystyle q} is the canonical position; and {\displaystyle p} is the canonical momentum.
In systems with multiple quantum states s sharing the same energy Es, it is said that the energy levels of the system are degenerate. In the case of degenerate energy levels, we can write the partition function in terms of the contribution from energy levels (indexed by j) as follows:
{\displaystyle Z=\sum _{j}g_{j}\,e^{-\beta E_{j}},}
where gj is the degeneracy factor, or number of quantum states s that have the same energy level defined by Ej = Es.
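The state-sum and level-sum forms of Z must agree, since grouping states by energy only reorders the terms. A small check with an illustrative four-state spectrum in which two states share an energy:

```python
import math
from collections import Counter

# Illustrative spectrum: four microstates, two of them sharing energy 1.0.
state_energies = [0.0, 1.0, 1.0, 2.0]
beta = 0.7

# Sum over individual microstates s:
Z_states = sum(math.exp(-beta * E) for E in state_energies)

# Sum over distinct energy levels j with degeneracies g_j:
levels = Counter(state_energies)           # {0.0: 1, 1.0: 2, 2.0: 1}
Z_levels = sum(g * math.exp(-beta * E) for E, g in levels.items())

assert math.isclose(Z_states, Z_levels)
```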
The above treatment applies to quantum statistical mechanics, where a physical system inside a finite-sized box will typically have a discrete set of energy eigenstates, which we can use as the states s above. In quantum mechanics, the partition function can be more formally written as a trace over the state space (which is independent of the choice of basis):
{\displaystyle Z=\operatorname {tr} (e^{-\beta {\hat {H}}}),}
where Ĥ is the quantum Hamiltonian operator. The exponential of an operator can be defined using the exponential power series.
The classical form of Z is recovered when the trace is expressed in terms of coherent states and when quantum-mechanical uncertainties in the position and momentum of a particle are regarded as negligible. Formally, using bra–ket notation, one inserts under the trace for each degree of freedom the identity:
{\displaystyle {\boldsymbol {1}}=\int |x,p\rangle \langle x,p|{\frac {dx\,dp}{h}},}
where |x, p⟩ is a normalised Gaussian wavepacket centered at position x and momentum p. Thus
{\displaystyle Z=\int \operatorname {tr} \left(e^{-\beta {\hat {H}}}|x,p\rangle \langle x,p|\right){\frac {dx\,dp}{h}}=\int \langle x,p|e^{-\beta {\hat {H}}}|x,p\rangle {\frac {dx\,dp}{h}}.}
A coherent state is an approximate eigenstate of both operators {\displaystyle {\hat {x}}} and {\displaystyle {\hat {p}}}, hence also of the Hamiltonian Ĥ, with errors of the size of the uncertainties. If Δx and Δp can be regarded as zero, the action of Ĥ reduces to multiplication by the classical Hamiltonian, and Z reduces to the classical configuration integral.
=== Connection to probability theory ===
For simplicity, we will use the discrete form of the partition function in this section. Our results will apply equally well to the continuous form.
Consider a system S embedded into a heat bath B. Let the total energy of both systems be E. Let pi denote the probability that the system S is in a particular microstate, i, with energy Ei. According to the fundamental postulate of statistical mechanics (which states that all attainable microstates of a system are equally probable), the probability pi will be inversely proportional to the number of microstates of the total closed system (S, B) in which S is in microstate i with energy Ei. Equivalently, pi will be proportional to the number of microstates of the heat bath B with energy E − Ei:
{\displaystyle p_{i}={\frac {\Omega _{B}(E-E_{i})}{\Omega _{(S,B)}(E)}}.}
Assuming that the heat bath's internal energy is much larger than the energy of S (E ≫ Ei), we can Taylor-expand {\displaystyle \Omega _{B}} to first order in Ei and use the thermodynamic relation {\displaystyle \partial S_{B}/\partial E=1/T}, where {\displaystyle S_{B}} and {\displaystyle T} are the entropy and temperature of the bath, respectively:
{\displaystyle {\begin{aligned}k\ln p_{i}&=k\ln \Omega _{B}(E-E_{i})-k\ln \Omega _{(S,B)}(E)\\[5pt]&\approx -{\frac {\partial {\big (}k\ln \Omega _{B}(E){\big )}}{\partial E}}E_{i}+k\ln \Omega _{B}(E)-k\ln \Omega _{(S,B)}(E)\\[5pt]&\approx -{\frac {\partial S_{B}}{\partial E}}E_{i}+k\ln {\frac {\Omega _{B}(E)}{\Omega _{(S,B)}(E)}}\\[5pt]&\approx -{\frac {E_{i}}{T}}+k\ln {\frac {\Omega _{B}(E)}{\Omega _{(S,B)}(E)}}\end{aligned}}}
Thus
{\displaystyle p_{i}\propto e^{-E_{i}/(kT)}=e^{-\beta E_{i}}.}
Since the total probability to find the system in some microstate (the sum of all pi) must be equal to 1, we know that the constant of proportionality must be the normalization constant, and so, we can define the partition function to be this constant:
{\displaystyle Z=\sum _{i}e^{-\beta E_{i}}={\frac {\Omega _{(S,B)}(E)}{\Omega _{B}(E)}}.}
=== Calculating the thermodynamic total energy ===
In order to demonstrate the usefulness of the partition function, let us calculate the thermodynamic value of the total energy. This is simply the expected value, or ensemble average for the energy, which is the sum of the microstate energies weighted by their probabilities:
{\displaystyle {\begin{aligned}\langle E\rangle =\sum _{s}E_{s}P_{s}&={\frac {1}{Z}}\sum _{s}E_{s}e^{-\beta E_{s}}\\[1ex]&=-{\frac {1}{Z}}{\frac {\partial }{\partial \beta }}Z(\beta ,E_{1},E_{2},\dots )\\[1ex]&=-{\frac {\partial \ln Z}{\partial \beta }}\end{aligned}}}
or, equivalently,
{\displaystyle \langle E\rangle =k_{\text{B}}T^{2}{\frac {\partial \ln Z}{\partial T}}.}
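The identity ⟨E⟩ = −∂(ln Z)/∂β can be confirmed numerically for a small discrete system by comparing the direct ensemble average against a finite-difference derivative of ln Z. The three-level spectrum below is illustrative:

```python
import math

# Illustrative three-level spectrum and inverse temperature.
energies = [0.0, 1.0, 3.0]
beta = 0.5

def ln_Z(b):
    return math.log(sum(math.exp(-b * E) for E in energies))

# Direct ensemble average: sum_s E_s p_s.
Z = math.exp(ln_Z(beta))
E_avg = sum(E * math.exp(-beta * E) for E in energies) / Z

# Central finite difference of -d(ln Z)/d(beta):
h = 1e-6
E_from_derivative = -(ln_Z(beta + h) - ln_Z(beta - h)) / (2 * h)

assert math.isclose(E_avg, E_from_derivative, rel_tol=1e-6)
```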
Incidentally, one should note that if the microstate energies depend on a parameter λ in the manner
{\displaystyle E_{s}=E_{s}^{(0)}+\lambda A_{s}\qquad {\text{for all}}\;s}
then the expected value of A is
{\displaystyle \langle A\rangle =\sum _{s}A_{s}P_{s}=-{\frac {1}{\beta }}{\frac {\partial }{\partial \lambda }}\ln Z(\beta ,\lambda ).}
This provides us with a method for calculating the expected values of many microscopic quantities. We add the quantity artificially to the microstate energies (or, in the language of quantum mechanics, to the Hamiltonian), calculate the new partition function and expected value, and then set λ to zero in the final expression. This is analogous to the source field method used in the path integral formulation of quantum field theory.
=== Relation to thermodynamic variables ===
In this section, we will state the relationships between the partition function and the various thermodynamic parameters of the system. These results can be derived using the method of the previous section and the various thermodynamic relations.
As we have already seen, the thermodynamic energy is
{\displaystyle \langle E\rangle =-{\frac {\partial \ln Z}{\partial \beta }}.}
The variance in the energy (or "energy fluctuation") is
{\displaystyle \left\langle (\Delta E)^{2}\right\rangle \equiv \left\langle (E-\langle E\rangle )^{2}\right\rangle =\left\langle E^{2}\right\rangle -{\left\langle E\right\rangle }^{2}={\frac {\partial ^{2}\ln Z}{\partial \beta ^{2}}}.}
The heat capacity is
{\displaystyle C_{v}={\frac {\partial \langle E\rangle }{\partial T}}={\frac {1}{k_{\text{B}}T^{2}}}\left\langle (\Delta E)^{2}\right\rangle .}
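The fluctuation form of the heat capacity can likewise be checked against a direct numerical temperature derivative of ⟨E⟩, again in units with k_B = 1 and with an illustrative spectrum:

```python
import math

# Illustrative three-level spectrum; units with k_B = 1.
energies = [0.0, 1.0, 3.0]

def averages(T):
    """Return <E> and <E^2> at temperature T."""
    beta = 1.0 / T
    weights = [math.exp(-beta * E) for E in energies]
    Z = sum(weights)
    E1 = sum(E * w for E, w in zip(energies, weights)) / Z
    E2 = sum(E * E * w for E, w in zip(energies, weights)) / Z
    return E1, E2

T = 2.0
E1, E2 = averages(T)
C_fluct = (E2 - E1**2) / T**2            # C_v = <(Delta E)^2> / (k_B T^2)

# Compare against a central finite difference of d<E>/dT:
h = 1e-5
C_deriv = (averages(T + h)[0] - averages(T - h)[0]) / (2 * h)
assert math.isclose(C_fluct, C_deriv, rel_tol=1e-5)
```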
In general, consider the extensive variable X and intensive variable Y where X and Y form a pair of conjugate variables. In ensembles where Y is fixed (and X is allowed to fluctuate), then the average value of X will be:
{\displaystyle \langle X\rangle =\pm {\frac {\partial \ln Z}{\partial \beta Y}}.}
The sign will depend on the specific definitions of the variables X and Y. An example would be X = volume and Y = pressure. Additionally, the variance in X will be
{\displaystyle \left\langle (\Delta X)^{2}\right\rangle \equiv \left\langle (X-\langle X\rangle )^{2}\right\rangle ={\frac {\partial \langle X\rangle }{\partial \beta Y}}={\frac {\partial ^{2}\ln Z}{\partial (\beta Y)^{2}}}.}
As a special case, the entropy is given by
{\displaystyle S\equiv -k_{\text{B}}\sum _{s}P_{s}\ln P_{s}=k_{\text{B}}(\ln Z+\beta \langle E\rangle )={\frac {\partial }{\partial T}}(k_{\text{B}}T\ln Z)=-{\frac {\partial A}{\partial T}}}
where A is the Helmholtz free energy defined as A = U − TS, where U = ⟨E⟩ is the total energy and S is the entropy, so that
{\displaystyle A=\langle E\rangle -TS=-k_{\text{B}}T\ln Z.}
Furthermore, the heat capacity can be expressed as
{\displaystyle C_{\text{v}}=T{\frac {\partial S}{\partial T}}=-T{\frac {\partial ^{2}A}{\partial T^{2}}}.}
=== Partition functions of subsystems ===
Suppose a system is subdivided into N sub-systems with negligible interaction energy, that is, we can assume the particles are essentially non-interacting. If the partition functions of the sub-systems are ζ1, ζ2, ..., ζN, then the partition function of the entire system is the product of the individual partition functions:
{\displaystyle Z=\prod _{j=1}^{N}\zeta _{j}.}
If the sub-systems have the same physical properties, then their partition functions are equal, ζ1 = ζ2 = ... = ζ, in which case
{\displaystyle Z=\zeta ^{N}.}
However, there is a well-known exception to this rule. If the sub-systems are actually identical particles, in the quantum mechanical sense that they are impossible to distinguish even in principle, the total partition function must be divided by N! (N factorial):
{\displaystyle Z={\frac {\zeta ^{N}}{N!}}.}
This is to ensure that we do not "over-count" the number of microstates. While this may seem like a strange requirement, it is actually necessary to preserve the existence of a thermodynamic limit for such systems. This is known as the Gibbs paradox.
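The factorization for non-interacting sub-systems can be verified by brute-force enumeration. The following sketch uses a hypothetical three-level single-particle spectrum; the levels and temperature are illustrative choices only.

```python
import math
import itertools

def zeta(levels, beta):
    """Single-particle (sub-system) partition function."""
    return sum(math.exp(-beta * e) for e in levels)

levels, beta, N = [0.0, 1.0, 2.5], 1.3, 3

# Distinguishable, non-interacting particles: the exact partition
# function, summed over all configurations, factorizes into zeta^N.
Z_dist = sum(math.exp(-beta * sum(cfg))
             for cfg in itertools.product(levels, repeat=N))
assert abs(Z_dist - zeta(levels, beta) ** N) < 1e-9

# Identical particles (Boltzmann counting): divide by N! so that
# permutations of the same occupation pattern are not over-counted.
Z_indist = zeta(levels, beta) ** N / math.factorial(N)
```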
=== Meaning and significance ===
It may not be obvious why the partition function, as we have defined it above, is an important quantity. First, consider what goes into it. The partition function is a function of the temperature T and the microstate energies E1, E2, E3, etc. The microstate energies are determined by other thermodynamic variables, such as the number of particles and the volume, as well as microscopic quantities like the mass of the constituent particles. This dependence on microscopic variables is the central point of statistical mechanics. With a model of the microscopic constituents of a system, one can calculate the microstate energies, and thus the partition function, which will then allow us to calculate all the other thermodynamic properties of the system.
The partition function can be related to thermodynamic properties because it has a very important statistical meaning. The probability Ps that the system occupies microstate s is
{\displaystyle P_{s}={\frac {1}{Z}}e^{-\beta E_{s}}.}
Thus, as shown above, the partition function plays the role of a normalizing constant (note that it does not depend on s), ensuring that the probabilities sum up to one:
{\displaystyle \sum _{s}P_{s}={\frac {1}{Z}}\sum _{s}e^{-\beta E_{s}}={\frac {1}{Z}}Z=1.}
This is the reason for calling Z the "partition function": it encodes how the probabilities are partitioned among the different microstates, based on their individual energies. Other partition functions for different ensembles divide up the probabilities based on other macrostate variables. As an example, the partition function for the isothermal-isobaric ensemble, the generalized Boltzmann distribution, divides up probabilities based on particle number, pressure, and temperature, and the energy is replaced by the characteristic potential of that ensemble, the Gibbs free energy. The letter Z stands for the German word Zustandssumme, "sum over states". The usefulness of the partition function stems from the fact that the macroscopic thermodynamic quantities of a system can be related to its microscopic details through the derivatives of its partition function. Finding the partition function is also equivalent to performing a Laplace transform of the density of states function from the energy domain to the β domain, and the inverse Laplace transform of the partition function reclaims the density of states function of energies.
== Grand canonical partition function ==
We can define a grand canonical partition function for a grand canonical ensemble, which describes the statistics of a constant-volume system that can exchange both heat and particles with a reservoir. The reservoir has a constant temperature T, and a chemical potential μ.
The grand canonical partition function, denoted by {\displaystyle {\mathcal {Z}}}, is the following sum over microstates:
{\displaystyle {\mathcal {Z}}(\mu ,V,T)=\sum _{i}\exp \left({\frac {N_{i}\mu -E_{i}}{k_{B}T}}\right).}
Here, each microstate is labelled by {\displaystyle i} and has total particle number {\displaystyle N_{i}} and total energy {\displaystyle E_{i}}. This partition function is closely related to the grand potential, {\displaystyle \Phi _{\rm {G}}}, by the relation
{\displaystyle -k_{\text{B}}T\ln {\mathcal {Z}}=\Phi _{\rm {G}}=\langle E\rangle -TS-\mu \langle N\rangle .}
This can be contrasted to the canonical partition function above, which is related instead to the Helmholtz free energy.
It is important to note that the number of microstates in the grand canonical ensemble may be much larger than in the canonical ensemble, since here we consider not only variations in energy but also in particle number. Again, the utility of the grand canonical partition function is that it is related to the probability that the system is in state {\displaystyle i}:
{\displaystyle p_{i}={\frac {1}{\mathcal {Z}}}\exp \left({\frac {N_{i}\mu -E_{i}}{k_{B}T}}\right).}
An important application of the grand canonical ensemble is in deriving exactly the statistics of a non-interacting many-body quantum gas (Fermi–Dirac statistics for fermions, Bose–Einstein statistics for bosons); however, it is much more generally applicable than that. The grand canonical ensemble may also be used to describe classical systems, or even interacting quantum gases.
The grand partition function is sometimes written (equivalently) in terms of alternate variables as
{\displaystyle {\mathcal {Z}}(z,V,T)=\sum _{N_{i}}z^{N_{i}}Z(N_{i},V,T),}
where {\displaystyle z\equiv \exp(\mu /k_{\text{B}}T)} is known as the absolute activity (or fugacity) and {\displaystyle Z(N_{i},V,T)} is the canonical partition function.
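As a minimal sketch of the fugacity form, consider a hypothetical single fermionic orbital of energy ε, whose occupation can only be 0 or 1; the grand canonical average occupation then reproduces the Fermi–Dirac distribution. The parameter values below are illustrative.

```python
import math

# One fermionic orbital with energy eps: particle number N in {0, 1}.
eps, mu, beta = 0.7, 0.2, 1.5
z = math.exp(beta * mu)                  # fugacity z = exp(mu / kT), k_B T = 1/beta
Zc = {0: 1.0, 1: math.exp(-beta * eps)}  # canonical partition functions Z(N)

# Grand partition function as a fugacity-weighted sum over N.
grand_Z = sum(z**N * Zc[N] for N in (0, 1))

# Average occupation reproduces the Fermi-Dirac distribution.
N_mean = sum(N * z**N * Zc[N] for N in (0, 1)) / grand_Z
assert abs(N_mean - 1 / (math.exp(beta * (eps - mu)) + 1)) < 1e-12
```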
== See also ==
Partition function (mathematics)
Partition function (quantum field theory)
Virial theorem
Widom insertion method
== References ==
In quantum mechanics, a density matrix (or density operator) is a matrix used in calculating the probabilities of the outcomes of measurements performed on physical systems. It is a generalization of the state vectors or wavefunctions: while those can only represent pure states, density matrices can also represent mixed states.: 73 : 100 These arise in quantum mechanics in two different situations:
when the preparation of a system can randomly produce different pure states, and thus one must deal with the statistics of possible preparations, and
when one wants to describe a physical system that is entangled with another, without describing their combined state. This case is typical for a system interacting with some environment (e.g. decoherence). In this case, the density matrix of an entangled system differs from that of an ensemble of pure states that, combined, would give the same statistical results upon measurement.
Density matrices are thus crucial tools in areas of quantum mechanics that deal with mixed states, such as quantum statistical mechanics, open quantum systems and quantum information.
== Definition and motivation ==
The density matrix is a representation of a linear operator called the density operator. The density matrix is obtained from the density operator by a choice of an orthonormal basis in the underlying space. In practice, the terms density matrix and density operator are often used interchangeably.
Pick a basis with states {\displaystyle |0\rangle }, {\displaystyle |1\rangle } in a two-dimensional Hilbert space; then the density operator is represented by the matrix
{\displaystyle (\rho _{ij})=\left({\begin{matrix}\rho _{00}&\rho _{01}\\\rho _{10}&\rho _{11}\end{matrix}}\right)=\left({\begin{matrix}p_{0}&\rho _{01}\\\rho _{01}^{*}&p_{1}\end{matrix}}\right)}
where the diagonal elements are real numbers that sum to one (also called populations of the two states {\displaystyle |0\rangle }, {\displaystyle |1\rangle }).
The off-diagonal elements are complex conjugates of each other (also called coherences); they are restricted in magnitude by the requirement that {\displaystyle (\rho _{ij})} be a positive semi-definite operator, see below.
A density operator is a positive semi-definite, self-adjoint operator of trace one acting on the Hilbert space of the system. This definition can be motivated by considering a situation where some pure states
{\displaystyle |\psi _{j}\rangle } (which are not necessarily orthogonal) are prepared with probability {\displaystyle p_{j}} each. This is known as an ensemble of pure states. The probability of obtaining projective measurement result {\displaystyle m} when using projectors {\displaystyle \Pi _{m}} is given by: 99
{\displaystyle p(m)=\sum _{j}p_{j}\left\langle \psi _{j}\right|\Pi _{m}\left|\psi _{j}\right\rangle =\operatorname {tr} \left[\Pi _{m}\left(\sum _{j}p_{j}\left|\psi _{j}\right\rangle \left\langle \psi _{j}\right|\right)\right],}
which makes the density operator, defined as
{\displaystyle \rho =\sum _{j}p_{j}\left|\psi _{j}\right\rangle \left\langle \psi _{j}\right|,}
a convenient representation for the state of this ensemble. It is easy to check that this operator is positive semi-definite, self-adjoint, and has trace one. Conversely, it follows from the spectral theorem that every operator with these properties can be written as
{\textstyle \sum _{j}p_{j}\left|\psi _{j}\right\rangle \left\langle \psi _{j}\right|} for some states {\displaystyle \left|\psi _{j}\right\rangle } and coefficients {\displaystyle p_{j}} that are non-negative and add up to one.: 102 However, this representation will not be unique, as shown by the Schrödinger–HJW theorem.
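The defining properties (trace one, self-adjointness, positive semi-definiteness) can be checked directly for a density matrix built from an ensemble. The sketch below uses a hypothetical qubit ensemble of two non-orthogonal states; the probabilities are illustrative.

```python
import numpy as np

# Ensemble: |0> with probability 0.3, |+> = (|0>+|1>)/sqrt(2) with 0.7.
psi0 = np.array([1.0, 0.0], dtype=complex)
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
probs, states = [0.3, 0.7], [psi0, plus]

# rho = sum_j p_j |psi_j><psi_j|
rho = sum(p * np.outer(s, s.conj()) for p, s in zip(probs, states))

assert abs(np.trace(rho) - 1.0) < 1e-12            # trace one
assert np.allclose(rho, rho.conj().T)              # self-adjoint
assert np.linalg.eigvalsh(rho).min() > -1e-12      # positive semi-definite
```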
Another motivation for the definition of density operators comes from considering local measurements on entangled states. Let
{\displaystyle |\Psi \rangle } be a pure entangled state in the composite Hilbert space {\displaystyle {\mathcal {H}}_{1}\otimes {\mathcal {H}}_{2}}. The probability of obtaining measurement result {\displaystyle m} when measuring projectors {\displaystyle \Pi _{m}} on the Hilbert space {\displaystyle {\mathcal {H}}_{1}} alone is given by: 107
{\displaystyle p(m)=\left\langle \Psi \right|\left(\Pi _{m}\otimes I\right)\left|\Psi \right\rangle =\operatorname {tr} \left[\Pi _{m}\left(\operatorname {tr} _{2}\left|\Psi \right\rangle \left\langle \Psi \right|\right)\right],}
where {\displaystyle \operatorname {tr} _{2}} denotes the partial trace over the Hilbert space {\displaystyle {\mathcal {H}}_{2}}. This makes the operator
{\displaystyle \rho =\operatorname {tr} _{2}\left|\Psi \right\rangle \left\langle \Psi \right|}
a convenient tool to calculate the probabilities of these local measurements. It is known as the reduced density matrix of {\displaystyle |\Psi \rangle } on subsystem 1. It is easy to check that this operator has all the properties of a density operator. Conversely, the Schrödinger–HJW theorem implies that all density operators can be written as {\displaystyle \operatorname {tr} _{2}\left|\Psi \right\rangle \left\langle \Psi \right|} for some state {\displaystyle \left|\Psi \right\rangle }.
== Pure and mixed states ==
A pure quantum state is a state that cannot be written as a probabilistic mixture, or convex combination, of other quantum states. There are several equivalent characterizations of pure states in the language of density operators.: 73 A density operator represents a pure state if and only if:
it can be written as an outer product of a state vector {\displaystyle |\psi \rangle } with itself, that is, {\displaystyle \rho =|\psi \rangle \langle \psi |.}
it is a projection, in particular of rank one.
it is idempotent, that is, {\displaystyle \rho =\rho ^{2}.}
it has purity one, that is, {\displaystyle \operatorname {tr} (\rho ^{2})=1.}
It is important to emphasize the difference between a probabilistic mixture (i.e. an ensemble) of quantum states and the superposition of two states. If an ensemble is prepared to have half of its systems in state
{\displaystyle |\psi _{1}\rangle } and the other half in {\displaystyle |\psi _{2}\rangle }, it can be described by the density matrix:
{\displaystyle \rho ={\frac {1}{2}}{\begin{pmatrix}1&0\\0&1\end{pmatrix}},}
where {\displaystyle |\psi _{1}\rangle } and {\displaystyle |\psi _{2}\rangle } are assumed orthogonal and of dimension 2, for simplicity. On the other hand, a quantum superposition of these two states with equal probability amplitudes results in the pure state
{\displaystyle |\psi \rangle =(|\psi _{1}\rangle +|\psi _{2}\rangle )/{\sqrt {2}},}
with density matrix
{\displaystyle |\psi \rangle \langle \psi |={\frac {1}{2}}{\begin{pmatrix}1&1\\1&1\end{pmatrix}}.}
Unlike the probabilistic mixture, this superposition can display quantum interference.: 81
Geometrically, the set of density operators is a convex set, and the pure states are the extremal points of that set. The simplest case is that of a two-dimensional Hilbert space, known as a qubit. An arbitrary mixed state for a qubit can be written as a linear combination of the Pauli matrices, which together with the identity matrix provide a basis for
{\displaystyle 2\times 2} self-adjoint matrices:: 126
{\displaystyle \rho ={\frac {1}{2}}\left(I+r_{x}\sigma _{x}+r_{y}\sigma _{y}+r_{z}\sigma _{z}\right),}
where the real numbers
{\displaystyle (r_{x},r_{y},r_{z})} are the coordinates of a point within the unit ball and
{\displaystyle \sigma _{x}={\begin{pmatrix}0&1\\1&0\end{pmatrix}},\quad \sigma _{y}={\begin{pmatrix}0&-i\\i&0\end{pmatrix}},\quad \sigma _{z}={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}.}
Points with {\displaystyle r_{x}^{2}+r_{y}^{2}+r_{z}^{2}=1} represent pure states, while mixed states are represented by points in the interior. This is known as the Bloch sphere picture of qubit state space.
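The Bloch-vector parameterization can be sketched numerically: a unit-length vector gives a pure state with purity tr(ρ²) = 1, while a shorter vector gives a mixed state with purity below 1. The particular vectors below are arbitrary illustrative choices.

```python
import numpy as np

# Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_from_bloch(r):
    """Qubit density matrix from a Bloch vector r = (rx, ry, rz), |r| <= 1."""
    rx, ry, rz = r
    return 0.5 * (np.eye(2) + rx * sx + ry * sy + rz * sz)

pure = rho_from_bloch([0.0, 0.0, 1.0])    # on the sphere: pure state
mixed = rho_from_bloch([0.3, 0.1, 0.2])   # interior point: mixed state

assert abs(np.trace(pure @ pure).real - 1.0) < 1e-12   # purity 1
assert np.trace(mixed @ mixed).real < 1.0              # purity < 1
```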
=== Example: light polarization ===
An example of pure and mixed states is light polarization. An individual photon
can be described as having right or left circular polarization, described by the orthogonal quantum states {\displaystyle |\mathrm {R} \rangle } and {\displaystyle |\mathrm {L} \rangle }, or a superposition of the two: it can be in any state {\displaystyle \alpha |\mathrm {R} \rangle +\beta |\mathrm {L} \rangle } (with {\displaystyle |\alpha |^{2}+|\beta |^{2}=1}), corresponding to linear, circular, or elliptical polarization. Consider now a vertically polarized photon, described by the state {\displaystyle |\mathrm {V} \rangle =(|\mathrm {R} \rangle +|\mathrm {L} \rangle )/{\sqrt {2}}}
. If we pass it through a circular polarizer that allows either only {\displaystyle |\mathrm {R} \rangle } polarized light, or only {\displaystyle |\mathrm {L} \rangle } polarized light, half of the photons are absorbed in both cases. This may make it seem like half of the photons are in state {\displaystyle |\mathrm {R} \rangle } and the other half in state {\displaystyle |\mathrm {L} \rangle }, but this is not correct: if we pass {\displaystyle (|\mathrm {R} \rangle +|\mathrm {L} \rangle )/{\sqrt {2}}} through a linear polarizer there is no absorption whatsoever, but if we pass either state {\displaystyle |\mathrm {R} \rangle } or {\displaystyle |\mathrm {L} \rangle }, half of the photons are absorbed.
Unpolarized light (such as the light from an incandescent light bulb) cannot be described as any state of the form
{\displaystyle \alpha |\mathrm {R} \rangle +\beta |\mathrm {L} \rangle } (linear, circular, or elliptical polarization). Unlike polarized light, it passes through a polarizer with 50% intensity loss whatever the orientation of the polarizer, and it cannot be made polarized by passing it through any wave plate. However, unpolarized light can be described as a statistical ensemble, e.g. as each photon having either {\displaystyle |\mathrm {R} \rangle } polarization or {\displaystyle |\mathrm {L} \rangle } polarization with probability 1/2. The same behavior would occur if each photon had either vertical polarization {\displaystyle |\mathrm {V} \rangle } or horizontal polarization {\displaystyle |\mathrm {H} \rangle } with probability 1/2. These two ensembles are completely indistinguishable experimentally, and therefore they are considered the same mixed state. For this example of unpolarized light, the density operator equals: 75
{\displaystyle \rho ={\frac {1}{2}}|\mathrm {R} \rangle \langle \mathrm {R} |+{\frac {1}{2}}|\mathrm {L} \rangle \langle \mathrm {L} |={\frac {1}{2}}|\mathrm {H} \rangle \langle \mathrm {H} |+{\frac {1}{2}}|\mathrm {V} \rangle \langle \mathrm {V} |={\frac {1}{2}}{\begin{pmatrix}1&0\\0&1\end{pmatrix}}.}
There are also other ways to generate unpolarized light: one possibility is to introduce uncertainty in the preparation of the photon, for example, passing it through a birefringent crystal with a rough surface, so that slightly different parts of the light beam acquire different polarizations. Another possibility is using entangled states: a radioactive decay can emit two photons traveling in opposite directions, in the quantum state
{\displaystyle (|\mathrm {R} ,\mathrm {L} \rangle +|\mathrm {L} ,\mathrm {R} \rangle )/{\sqrt {2}}}. The joint state of the two photons together is pure, but the density matrix for each photon individually, found by taking the partial trace of the joint density matrix, is completely mixed.: 106
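This pure-joint, mixed-marginal situation can be sketched directly: taking the partial trace of the two-photon state over the second photon yields the completely mixed single-photon state. The basis ordering below is an illustrative convention.

```python
import numpy as np

# Two-photon entangled state (|R,L> + |L,R>)/sqrt(2) in the product
# basis ordered {|R,R>, |R,L>, |L,R>, |L,L>}.
psi = np.array([0.0, 1.0, 1.0, 0.0], dtype=complex) / np.sqrt(2)
rho_joint = np.outer(psi, psi.conj())

# Partial trace over the second photon: reshape to (i1, i2, j1, j2)
# and sum over i2 = j2.
rho1 = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

assert np.allclose(rho1, np.eye(2) / 2)                    # completely mixed
assert abs(np.trace(rho_joint @ rho_joint) - 1.0) < 1e-12  # joint state pure
```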
== Equivalent ensembles and purifications ==
A given density operator does not uniquely determine which ensemble of pure states gives rise to it; in general there are infinitely many different ensembles generating the same density matrix. Those cannot be distinguished by any measurement. The equivalent ensembles can be completely characterized: let
{\displaystyle \{p_{j},|\psi _{j}\rangle \}} be an ensemble. Then for any complex matrix {\displaystyle U} such that {\displaystyle U^{\dagger }U=I} (a partial isometry), the ensemble {\displaystyle \{q_{i},|\varphi _{i}\rangle \}} defined by
{\displaystyle {\sqrt {q_{i}}}\left|\varphi _{i}\right\rangle =\sum _{j}U_{ij}{\sqrt {p_{j}}}\left|\psi _{j}\right\rangle }
will give rise to the same density operator, and all equivalent ensembles are of this form.
A closely related fact is that a given density operator has infinitely many different purifications, which are pure states that generate the density operator when a partial trace is taken. Let
{\displaystyle \rho =\sum _{j}p_{j}|\psi _{j}\rangle \langle \psi _{j}|} be the density operator generated by the ensemble {\displaystyle \{p_{j},|\psi _{j}\rangle \}}, with states {\displaystyle |\psi _{j}\rangle } not necessarily orthogonal. Then for all partial isometries {\displaystyle U} we have that
{\displaystyle |\Psi \rangle =\sum _{j}{\sqrt {p_{j}}}|\psi _{j}\rangle U|a_{j}\rangle }
is a purification of {\displaystyle \rho }, where {\displaystyle |a_{j}\rangle } is an orthogonal basis; furthermore, all purifications of {\displaystyle \rho } are of this form.
== Measurement ==
Let {\displaystyle A} be an observable of the system, and suppose the ensemble is in a mixed state such that each of the pure states {\displaystyle |\psi _{j}\rangle } occurs with probability {\displaystyle p_{j}}. Then the corresponding density operator equals
{\displaystyle \rho =\sum _{j}p_{j}|\psi _{j}\rangle \langle \psi _{j}|.}
The expectation value of the measurement can be calculated by extending from the case of pure states:
{\displaystyle \langle A\rangle =\sum _{j}p_{j}\langle \psi _{j}|A|\psi _{j}\rangle =\sum _{j}p_{j}\operatorname {tr} \left(|\psi _{j}\rangle \langle \psi _{j}|A\right)=\operatorname {tr} \left(\sum _{j}p_{j}|\psi _{j}\rangle \langle \psi _{j}|A\right)=\operatorname {tr} (\rho A),}
where {\displaystyle \operatorname {tr} } denotes trace. Thus, the familiar expression {\displaystyle \langle A\rangle =\langle \psi |A|\psi \rangle } for pure states is replaced by {\displaystyle \langle A\rangle =\operatorname {tr} (\rho A)} for mixed states.: 73
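The equality between the ensemble average and the trace formula can be sketched numerically for a hypothetical qubit observable and a two-state ensemble; the random seed and states below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian qubit observable.
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = (M + M.conj().T) / 2

# Ensemble of two (non-orthogonal) pure states.
probs = [0.4, 0.6]
states = [np.array([1.0, 0.0], dtype=complex),
          np.array([1.0, 1j]) / np.sqrt(2)]

rho = sum(p * np.outer(s, s.conj()) for p, s in zip(probs, states))

# <A> via the ensemble average and via tr(rho A) must agree.
avg_ensemble = sum(p * (s.conj() @ A @ s).real for p, s in zip(probs, states))
avg_trace = np.trace(rho @ A).real
assert abs(avg_ensemble - avg_trace) < 1e-12
```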
Moreover, if {\displaystyle A} has spectral resolution
{\displaystyle A=\sum _{i}a_{i}P_{i},}
where {\displaystyle P_{i}} is the projection operator into the eigenspace corresponding to eigenvalue {\displaystyle a_{i}}, the post-measurement density operator is given by
{\displaystyle \rho _{i}'={\frac {P_{i}\rho P_{i}}{\operatorname {tr} \left[\rho P_{i}\right]}}}
when outcome i is obtained. In the case where the measurement result is not known, the ensemble is instead described by
{\displaystyle \;\rho '=\sum _{i}P_{i}\rho P_{i}.}
If one assumes that the probabilities of measurement outcomes are linear functions of the projectors {\displaystyle P_{i}}, then they must be given by the trace of the projector with a density operator. Gleason's theorem shows that in Hilbert spaces of dimension 3 or larger the assumption of linearity can be replaced with an assumption of non-contextuality. This restriction on the dimension can be removed by assuming non-contextuality for POVMs as well, but this has been criticized as physically unmotivated.
== Entropy ==
The von Neumann entropy {\displaystyle S} of a mixture can be expressed in terms of the eigenvalues of {\displaystyle \rho } or in terms of the trace and logarithm of the density operator {\displaystyle \rho }. Since {\displaystyle \rho } is a positive semi-definite operator, it has a spectral decomposition such that {\displaystyle \rho =\textstyle \sum _{i}\lambda _{i}|\varphi _{i}\rangle \langle \varphi _{i}|}, where {\displaystyle |\varphi _{i}\rangle } are orthonormal vectors, {\displaystyle \lambda _{i}\geq 0}, and {\textstyle \sum \lambda _{i}=1}. Then the entropy of a quantum system with density matrix {\displaystyle \rho } is
{\displaystyle S=-\sum _{i}\lambda _{i}\ln \lambda _{i}=-\operatorname {tr} (\rho \ln \rho ).}
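A minimal numerical sketch of this eigenvalue formula: a pure state has entropy zero, and the maximally mixed qubit state has entropy ln 2. The small eigenvalue cutoff below is an illustrative implementation choice for the 0 ln 0 = 0 convention.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -sum_i lambda_i ln(lambda_i), computed from the spectrum of rho."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]          # drop zeros: 0 ln 0 -> 0 by convention
    return float(-(lam * np.log(lam)).sum())

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
mixed = np.eye(2) / 2                        # maximally mixed qubit

assert abs(von_neumann_entropy(pure)) < 1e-9
assert abs(von_neumann_entropy(mixed) - np.log(2)) < 1e-12
```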
This definition implies that the von Neumann entropy of any pure state is zero.: 217 If {\displaystyle \rho _{i}} are states that have support on orthogonal subspaces, then the von Neumann entropy of a convex combination of these states,
{\displaystyle \rho =\sum _{i}p_{i}\rho _{i},}
is given by the von Neumann entropies of the states {\displaystyle \rho _{i}} and the Shannon entropy of the probability distribution {\displaystyle p_{i}}:
{\displaystyle S(\rho )=H(p_{i})+\sum _{i}p_{i}S(\rho _{i}).}
When the states {\displaystyle \rho _{i}} do not have orthogonal supports, the sum on the right-hand side is strictly greater than the von Neumann entropy of the convex combination {\displaystyle \rho }.: 518
Given a density operator {\displaystyle \rho } and a projective measurement as in the previous section, the state {\displaystyle \rho '} defined by the convex combination
{\displaystyle \rho '=\sum _{i}P_{i}\rho P_{i},}
which can be interpreted as the state produced by performing the measurement but not recording which outcome occurred,: 159 has a von Neumann entropy larger than that of {\displaystyle \rho }, except if {\displaystyle \rho =\rho '}. It is however possible for the {\displaystyle \rho '} produced by a generalized measurement, or POVM, to have a lower von Neumann entropy than {\displaystyle \rho }.: 514
== Von Neumann equation for time evolution ==
Just as the Schrödinger equation describes how pure states evolve in time, the von Neumann equation (also known as the Liouville–von Neumann equation) describes how a density operator evolves in time. The von Neumann equation dictates that
{\displaystyle i\hbar {\frac {d}{dt}}\rho =[H,\rho ]~,}
where the brackets denote a commutator.
This equation only holds when the density operator is taken to be in the Schrödinger picture, even though it seems at first glance to emulate the Heisenberg equation of motion in the Heisenberg picture, with a crucial sign difference:
{\displaystyle i\hbar {\frac {d}{dt}}A_{\text{H}}=-[H,A_{\text{H}}]~,}
where {\displaystyle A_{\text{H}}(t)} is some Heisenberg-picture operator; but in this picture the density matrix is not time-dependent, and the relative sign ensures that the time derivative of the expected value {\displaystyle \langle A\rangle } comes out the same as in the Schrödinger picture.
If the Hamiltonian is time-independent, the von Neumann equation can be easily solved to yield
{\displaystyle \rho (t)=e^{-iHt/\hbar }\rho (0)e^{iHt/\hbar }.}
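This closed-form solution can be checked numerically: conjugating by the matrix exponential satisfies the von Neumann equation (here with ħ = 1) and preserves both the trace and the purity. The Hamiltonian and initial state below are arbitrary illustrative choices.

```python
import numpy as np

# hbar = 1. Arbitrary qubit Hamiltonian and initial mixed state.
H = np.array([[0.5, 0.2], [0.2, -0.5]], dtype=complex)
rho0 = np.array([[0.7, 0.1], [0.1, 0.3]], dtype=complex)

# U(t) = exp(-i H t) built from the eigendecomposition of H.
evals, V = np.linalg.eigh(H)
U = lambda t: V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T
rho = lambda t: U(t) @ rho0 @ U(t).conj().T

# Check i d(rho)/dt = [H, rho] at t = 0.4 by central finite difference.
t, h = 0.4, 1e-6
lhs = 1j * (rho(t + h) - rho(t - h)) / (2 * h)
rhs = H @ rho(t) - rho(t) @ H
assert np.allclose(lhs, rhs, atol=1e-6)

# Unitary evolution preserves the trace and the purity.
assert abs(np.trace(rho(t)) - 1.0) < 1e-10
assert abs(np.trace(rho(t) @ rho(t)) - np.trace(rho0 @ rho0)) < 1e-10
```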
For a more general Hamiltonian, if {\displaystyle G(t)} is the wavefunction propagator over some interval, then the time evolution of the density matrix over that same interval is given by
{\displaystyle \rho (t)=G(t)\rho (0)G(t)^{\dagger }.}
If one enters the interaction picture, choosing to focus on some component {\displaystyle H_{1}} of the Hamiltonian {\displaystyle H=H_{0}+H_{1}}, the equation for the evolution of the interaction-picture density operator {\displaystyle \rho _{\text{I}}(t)} has the same structure as the von Neumann equation, except that the Hamiltonian must also be transformed into the new picture:
{\displaystyle i\hbar {\frac {d}{dt}}\rho _{\text{I}}(t)=[H_{1,{\text{I}}}(t),\rho _{\text{I}}(t)],}
where {\displaystyle H_{1,{\text{I}}}(t)=e^{iH_{0}t/\hbar }H_{1}e^{-iH_{0}t/\hbar }}.
== Wigner functions and classical analogies ==
The density matrix operator may also be realized in phase space. Under the Wigner map, the density matrix transforms into the equivalent Wigner function,
{\displaystyle W(x,p)\,\ {\stackrel {\mathrm {def} }{=}}\ \,{\frac {1}{\pi \hbar }}\int _{-\infty }^{\infty }\psi ^{*}(x+y)\psi (x-y)e^{2ipy/\hbar }\,dy.}
The equation for the time evolution of the Wigner function, known as Moyal equation, is then the Wigner-transform of the above von Neumann equation,
{\displaystyle {\frac {\partial W(x,p,t)}{\partial t}}=-\{\{W(x,p,t),H(x,p)\}\},}
where {\displaystyle H(x,p)} is the Hamiltonian and {\displaystyle \{\{\cdot ,\cdot \}\}} is the Moyal bracket, the transform of the quantum commutator.
The evolution equation for the Wigner function is then analogous to that of its classical limit, the Liouville equation of classical physics. In the limit of a vanishing Planck constant {\displaystyle \hbar }, {\displaystyle W(x,p,t)} reduces to the classical Liouville probability density function in phase space.
== Example applications ==
Density matrices are a basic tool of quantum mechanics, and appear at least occasionally in almost any type of quantum-mechanical calculation. Some specific examples where density matrices are especially helpful and common are as follows:
Statistical mechanics uses density matrices, most prominently to express the idea that a system is prepared at a nonzero temperature. Constructing a density matrix using a canonical ensemble gives a result of the form {\displaystyle \rho =\exp(-\beta H)/Z(\beta )}, where {\displaystyle \beta } is the inverse temperature {\displaystyle (k_{\rm {B}}T)^{-1}} and {\displaystyle H} is the system's Hamiltonian. The normalization condition that the trace of {\displaystyle \rho } be equal to 1 defines the partition function to be {\displaystyle Z(\beta )=\mathrm {tr} \exp(-\beta H)}. If the number of particles involved in the system is itself not certain, then a grand canonical ensemble can be applied, where the states summed over to make the density matrix are drawn from a Fock space.: 174
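The thermal density matrix can be sketched for a hypothetical three-level system with a diagonal Hamiltonian; the level spacings and temperature are illustrative values only.

```python
import numpy as np

beta = 1.2
H = np.diag([0.0, 1.0, 2.0])        # hypothetical three-level Hamiltonian

# Canonical (thermal) density matrix rho = exp(-beta H) / Z.
w = np.exp(-beta * np.diag(H))       # Boltzmann weights of the eigenstates
Z = w.sum()                          # Z(beta) = tr exp(-beta H)
rho = np.diag(w) / Z

assert abs(np.trace(rho) - 1.0) < 1e-12          # normalization fixes Z
E_mean = np.trace(rho @ H)                       # <E> = tr(rho H)
assert abs(E_mean - (np.diag(H) * w).sum() / Z) < 1e-12
```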
Quantum decoherence theory typically involves non-isolated quantum systems developing entanglement with other systems, including measurement apparatuses. Density matrices make it much easier to describe the process and calculate its consequences. Quantum decoherence explains why a system interacting with an environment transitions from being a pure state, exhibiting superpositions, to a mixed state, an incoherent combination of classical alternatives. This transition is fundamentally reversible, as the combined state of system and environment is still pure, but for all practical purposes irreversible, as the environment is a very large and complex quantum system, and it is not feasible to reverse their interaction. Decoherence is thus very important for explaining the classical limit of quantum mechanics, but cannot explain wave function collapse, as all classical alternatives are still present in the mixed state, and wave function collapse selects only one of them.
Similarly, in quantum computation, quantum information theory, open quantum systems, and other fields where state preparation is noisy and decoherence can occur, density matrices are frequently used. Noise is often modelled via a depolarizing channel or an amplitude damping channel. Quantum tomography is a process by which, given a set of data representing the results of quantum measurements, a density matrix consistent with those measurement results is computed.
When analyzing a system with many electrons, such as an atom or molecule, an imperfect but useful first approximation is to treat the electrons as uncorrelated or each having an independent single-particle wavefunction. This is the usual starting point when building the Slater determinant in the Hartree–Fock method. If there are
{\displaystyle N} electrons filling the {\displaystyle N} single-particle wavefunctions {\displaystyle |\psi _{i}\rangle } and if only single-particle observables are considered, then their expectation values for the {\displaystyle N}-electron system can be computed using the density matrix {\textstyle \sum _{i=1}^{N}|\psi _{i}\rangle \langle \psi _{i}|} (the one-particle density matrix of the {\displaystyle N}-electron system).
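As an illustrative sketch (the basis, orbitals, and observable below are hypothetical), the one-particle density matrix of two occupied orthonormal orbitals can be built as a sum of projectors, and a single-particle expectation value obtained as a trace:

```python
import numpy as np

# Hypothetical 3-dimensional single-particle basis with two occupied,
# orthonormal orbitals (N = 2 electrons, spin ignored for simplicity).
psi1 = np.array([1.0, 0.0, 0.0])
psi2 = np.array([0.0, 1.0, 0.0])

# One-particle density matrix: sum of projectors onto occupied orbitals.
rho = np.outer(psi1, psi1) + np.outer(psi2, psi2)

# An arbitrary Hermitian single-particle observable.
O = np.diag([0.5, 1.5, 3.0])

# Expectation value for the N-electron system: tr(rho O).
expectation = np.trace(rho @ O)
```

Note that the trace of this density matrix equals the number of electrons N rather than 1, which is the usual convention for the one-particle density matrix.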
== C*-algebraic formulation of states ==
It is now generally accepted that the description of quantum mechanics in which all self-adjoint operators represent observables is untenable. For this reason, observables are identified with elements of an abstract C*-algebra A (that is one without a distinguished representation as an algebra of operators) and states are positive linear functionals on A. However, by using the GNS construction, we can recover Hilbert spaces that realize A as a subalgebra of operators.
Geometrically, a pure state on a C*-algebra A is a state that is an extreme point of the set of all states on A. By properties of the GNS construction these states correspond to irreducible representations of A.
The states of the C*-algebra of compact operators K(H) correspond exactly to the density operators, and therefore the pure states of K(H) are exactly the pure states in the sense of quantum mechanics.
The C*-algebraic formulation can be seen to include both classical and quantum systems. When the system is classical, the algebra of observables become an abelian C*-algebra. In that case the states become probability measures.
== History ==
The formalism of density operators and matrices was introduced in 1927 by John von Neumann and independently, but less systematically, by Lev Landau and later in 1946 by Felix Bloch. Von Neumann introduced the density matrix in order to develop both quantum statistical mechanics and a theory of quantum measurements. The name density matrix itself relates to its classical correspondence to a phase-space probability measure (probability distribution of position and momentum) in classical statistical mechanics, which was introduced by Eugene Wigner in 1932.
In contrast, the motivation that inspired Landau was the impossibility of describing a subsystem of a composite quantum system by a state vector.
== See also ==
== Notes and references == | Wikipedia/Von_Neumann_equation |
Monte Carlo in statistical physics refers to the application of the Monte Carlo method to problems in statistical physics, or statistical mechanics.
== Overview ==
The general motivation to use the Monte Carlo method in statistical physics is to evaluate a multivariable integral. The typical problem begins with a system whose Hamiltonian is known, which is held at a given temperature, and which obeys Boltzmann statistics. To obtain the mean value of some macroscopic variable, say A, the general approach is to compute, over the whole phase space (PS for short), the mean value of A using the Boltzmann distribution:
{\displaystyle \langle A\rangle =\int _{PS}A_{\vec {r}}{\frac {e^{-\beta E_{\vec {r}}}}{Z}}d{\vec {r}}.}
where
{\displaystyle E({\vec {r}})=E_{\vec {r}}} is the energy of the system for a given state defined by {\displaystyle {\vec {r}}}, a vector with all the degrees of freedom (for instance, for a mechanical system, {\displaystyle {\vec {r}}=\left({\vec {q}},{\vec {p}}\right)}), {\displaystyle \beta \equiv 1/k_{\text{B}}T}, and {\displaystyle Z=\int _{PS}e^{-\beta E_{\vec {r}}}d{\vec {r}}} is the partition function.
One possible approach to solve this multivariable integral is to exactly enumerate all possible configurations of the system, and calculate averages at will. This is done in exactly solvable systems, and in simulations of simple systems with few particles. In realistic systems, on the other hand, an exact enumeration can be difficult or impossible to implement.
For those systems, Monte Carlo integration (not to be confused with Monte Carlo molecular modeling, which is used to simulate molecular chains) is generally employed. The main motivation for its use is that, with Monte Carlo integration, the error scales as {\displaystyle 1/{\sqrt {N}}}, independently of the dimension of the integral. Another important concept related to Monte Carlo integration is importance sampling, a technique that improves the computational time of the simulation.
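The 1/√N scaling can be illustrated with a plain Monte Carlo estimate of a simple one-dimensional integral (the integrand and sample sizes below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(n):
    # Plain Monte Carlo estimate of the integral of x^2 over [0, 1],
    # whose exact value is 1/3, using n uniform samples.
    x = rng.random(n)
    return (x ** 2).mean()

# Absolute errors for increasing sample sizes; they shrink roughly as 1/sqrt(N).
errors = {n: abs(mc_estimate(n) - 1.0 / 3.0) for n in (100, 10_000, 1_000_000)}
```

Typical runs show the error dropping by roughly a factor of ten for each hundredfold increase in N, consistent with the 1/√N law.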
In the following sections, the general implementation of the Monte Carlo integration for solving this kind of problems is discussed.
== Importance sampling ==
An estimation, under Monte Carlo integration, of an integral defined as
{\displaystyle \langle A\rangle =\int _{PS}A_{\vec {r}}e^{-\beta E_{\vec {r}}}d{\vec {r}}/Z}
is
{\displaystyle \langle A\rangle \simeq {\frac {1}{N}}\sum _{i=1}^{N}A_{{\vec {r}}_{i}}e^{-\beta E_{{\vec {r}}_{i}}}/Z}
where
{\displaystyle {\vec {r}}_{i}}
are uniformly obtained from all the phase space (PS) and N is the number of sampling points (or function evaluations).
Some regions of phase space generally contribute more to the mean of {\displaystyle A} than others. In particular, states whose Boltzmann weight {\displaystyle e^{-\beta E_{{\vec {r}}_{i}}}} is large compared with the rest of the energy spectrum are the most relevant for the integral. This suggests a natural question: is it possible to sample, with higher frequency, the states known to be more relevant to the integral? The answer is yes, using the importance sampling technique.
Let us assume {\displaystyle p({\vec {r}})} is a distribution that preferentially selects the states known to be more relevant to the integral.
The mean value of {\displaystyle A} can be rewritten as
{\displaystyle \langle A\rangle =\int _{PS}p({\vec {r}})\,{\frac {A_{\vec {r}}\,e^{-\beta E_{\vec {r}}}}{Z\,p({\vec {r}})}}\,d{\vec {r}},}
where the integrand is the original one reweighted by the importance probability {\displaystyle p({\vec {r}})}. This integral can be estimated by
{\displaystyle \langle A\rangle \simeq {\frac {1}{N}}\sum _{i=1}^{N}{\frac {A_{{\vec {r}}_{i}}\,e^{-\beta E_{{\vec {r}}_{i}}}}{Z\,p({\vec {r}}_{i})}}}
where {\displaystyle {\vec {r}}_{i}} are now randomly generated from the {\displaystyle p({\vec {r}})} distribution. Since in most cases it is not easy to generate states with a given distribution directly, the Metropolis algorithm is used.
=== Canonical ===
Because the most likely states are those that maximize the Boltzmann weight, a good distribution {\displaystyle p({\vec {r}})} to choose for importance sampling is the Boltzmann, or canonical, distribution. Let
{\displaystyle p({\vec {r}})={\frac {e^{-\beta E_{\vec {r}}}}{Z}}}
be the distribution to use. Substituting into the previous sum,
{\displaystyle \langle A\rangle \simeq {\frac {1}{N}}\sum _{i=1}^{N}A_{{\vec {r}}_{i}}.}
So the procedure to obtain the mean value of a given variable with the canonical distribution is to use the Metropolis algorithm to generate states distributed according to {\displaystyle p({\vec {r}})} and average {\displaystyle A_{\vec {r}}} over them.
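A minimal sketch of this collapse, using a toy three-state system with arbitrary energies and observable values, and drawing states directly from the known Boltzmann probabilities (rather than via Metropolis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discrete "phase space": three states with arbitrary energies and
# observable values, at inverse temperature beta = 1.
E = np.array([0.0, 1.0, 2.0])
A = np.array([1.0, -1.0, 0.5])
beta = 1.0

# Exact canonical average for reference.
w = np.exp(-beta * E)
Z = w.sum()
exact = (A * w).sum() / Z

# Sampling states from the Boltzmann distribution itself makes the
# importance-sampling estimator a plain average of A.
states = rng.choice(len(E), size=200_000, p=w / Z)
estimate = A[states].mean()
```

The plain average over Boltzmann-distributed samples converges to the exact canonical mean, with no reweighting factors left in the estimator.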
One important issue must be considered when using the Metropolis algorithm with the canonical distribution: when performing a given measurement, i.e. a realization of {\displaystyle {\vec {r}}_{i}}, one must ensure that the realization is not correlated with the previous state of the system (otherwise the states are not being generated "randomly"). On systems with relevant energy gaps, this is the major drawback of using the canonical distribution, because the time needed for the system to de-correlate from the previous state can tend to infinity.
=== Multi-canonical ===
As stated before, the canonical approach has a major drawback, which becomes relevant in most of the systems that use Monte Carlo integration. For systems with "rough energy landscapes", the multicanonical approach can be used instead.
The multicanonical approach uses a different choice for importance sampling:
{\displaystyle p({\vec {r}})={\frac {1}{\Omega (E_{\vec {r}})}}}
where {\displaystyle \Omega (E)} is the density of states of the system. The major advantage of this choice is that the energy histogram is flat, i.e. the generated states are equally distributed in energy. This means that, when using the Metropolis algorithm, the simulation does not see the "rough energy landscape", because every energy is treated equally.
The major drawback of this choice is the fact that, in most systems, {\displaystyle \Omega (E)} is unknown. To overcome this, the Wang and Landau algorithm is normally used to obtain the density of states (DOS) during the simulation. Note that after the DOS is known, the mean values of every variable can be calculated for every temperature, since the generation of states does not depend on {\displaystyle \beta }.
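A minimal Wang–Landau sketch, on a toy system where the "energy" is the number of up spins among n spins, so that the exact density of states is a binomial coefficient (all parameters, including the flatness criterion, are illustrative assumptions rather than canonical choices):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(3)

# Toy system: n spins in {0, 1}, "energy" E = number of up spins, so the
# exact density of states is Omega(E) = C(n, E).
n = 10
spins = rng.choice([0, 1], size=n)
E = int(spins.sum())

lng = np.zeros(n + 1)            # running estimate of ln Omega(E)
hist = np.zeros(n + 1)
lnf = 1.0                        # modification factor, reduced over time

while lnf > 1e-4:
    for _ in range(10000):
        i = rng.integers(n)
        E_new = E + (1 - 2 * int(spins[i]))   # flipping spin i changes E by +-1
        # Accept with probability min(1, Omega(E) / Omega(E_new)).
        if rng.random() < np.exp(lng[E] - lng[E_new]):
            spins[i] ^= 1
            E = E_new
        lng[E] += lnf            # update DOS estimate and histogram
        hist[E] += 1
    if hist.min() > 0.8 * hist.mean():        # "flat enough" histogram
        hist[:] = 0
        lnf /= 2.0

# Normalize so the estimated total number of states equals 2^n.
shift = lng.max()
lng += np.log(2.0 ** n) - (shift + np.log(np.exp(lng - shift).sum()))
est = np.exp(lng)
exact = np.array([comb(n, k) for k in range(n + 1)])
```

After convergence, `est` approximates the binomial coefficients, and any canonical average at any temperature can then be computed from the estimated DOS alone.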
== Implementation ==
In this section, the implementation will focus on the Ising model. Consider a two-dimensional spin lattice with L spins (lattice sites) on each side. There are naturally {\displaystyle N=L^{2}} spins, and so the phase space is discrete and is characterized by N spins,
{\displaystyle {\vec {r}}=(\sigma _{1},\sigma _{2},...,\sigma _{N})}
where {\displaystyle \sigma _{i}\in \{-1,1\}} is the spin of each lattice site. The system's energy is given by
{\displaystyle E({\vec {r}})=\sum _{i=1}^{N}\sum _{j\in viz_{i}}(1-J_{ij}\sigma _{i}\sigma _{j})}
, where {\displaystyle viz_{i}} is the set of nearest-neighbour spins of i and J is the interaction matrix (for a ferromagnetic Ising model, J is the identity matrix). The problem is thus stated.
In this example, the objective is to obtain {\displaystyle \langle M\rangle } and {\displaystyle \langle M^{2}\rangle }
(for instance, to obtain the magnetic susceptibility of the system) since it is straightforward to generalize to other observables. According to the definition,
{\displaystyle M({\vec {r}})=\sum _{i=1}^{N}\sigma _{i}.}
=== Canonical ===
First, the system must be initialized: let {\displaystyle \beta =1/k_{\text{B}}T} be the system's inverse temperature and initialize the system in an arbitrary initial state (the final result should not depend on it).
With the canonical choice, the Metropolis method must be employed. Because there is no single right way of choosing which state to propose, one can particularize and choose to try to flip one spin at a time. This choice is usually called single spin flip. The following steps are to be made to perform a single measurement.
step 1: generate a state that follows the {\displaystyle p({\vec {r}})} distribution:
step 1.1: Perform TT times the following iteration:
step 1.1.1: pick a lattice site at random (with probability 1/N), which will be called i, with spin {\displaystyle \sigma _{i}}.
step 1.1.2: pick a random number {\displaystyle \alpha \in [0,1]}.
step 1.1.3: calculate the energy change of trying to flip the spin i, {\displaystyle \Delta E=2\sigma _{i}\sum _{j\in viz_{i}}\sigma _{j}}, and its magnetization change, {\displaystyle \Delta M=-2\sigma _{i}}.
step 1.1.4: if {\displaystyle \alpha <\min(1,e^{-\beta \Delta E})}, flip the spin ({\displaystyle \sigma _{i}=-\sigma _{i}}); otherwise, do not.
step 1.1.5: if the spin flipped, update the macroscopic variables: {\displaystyle E=E+\Delta E}, {\displaystyle M=M+\Delta M}.
After TT iterations, the system is considered to be de-correlated from its previous state, which means that, at this moment, the probability for the system to be in a given state follows the Boltzmann distribution, which is the objective proposed by this method.
step 2: perform the measurement:
step 2.1: save, in a histogram, the values of M and M².
As a final note, TT is not easy to estimate, because it is not easy to say when the system has de-correlated from the previous state. To surpass this point, one generally does not use a fixed TT, but instead takes TT to be a tunneling time. One tunneling time is defined as the number of step-1 iterations the system needs to go from the minimum of its energy to the maximum of its energy and back.
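The single-spin-flip procedure above can be sketched as follows (lattice size, temperature, and the numbers of equilibration and measurement sweeps are illustrative choices, with a fixed sweep length standing in for TT):

```python
import numpy as np

rng = np.random.default_rng(42)

# Single-spin-flip Metropolis for the ferromagnetic 2D Ising model.
L = 16
N = L * L
beta = 0.3                      # high temperature: expect <M>/N near zero
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins):
    # One sweep = N single-spin-flip attempts (steps 1.1.1 to 1.1.5).
    for _ in range(N):
        i, j = rng.integers(L), rng.integers(L)
        # Sum over the four nearest neighbours, with periodic boundaries.
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn          # energy change of flipping (i, j)
        if rng.random() < min(1.0, np.exp(-beta * dE)):
            spins[i, j] = -spins[i, j]       # accept the flip

for _ in range(50):                          # equilibration sweeps
    sweep(spins)

samples = []
for _ in range(200):                         # one sweep between measurements
    sweep(spins)
    samples.append(spins.sum())              # magnetization M of this state

m = np.mean(samples) / N                     # magnetization per spin
```

At this temperature (above the critical point) the magnetization per spin fluctuates around zero; lowering the temperature below criticality would produce a spontaneous nonzero magnetization.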
A major drawback of this method with the single-spin-flip choice in systems like the Ising model is that the tunneling time scales as a power law, {\displaystyle N^{2+z}}, where z is greater than 0.5, a phenomenon known as critical slowing down.
== Applicability ==
The method thus neglects dynamics, which can be a major drawback, or a great advantage. Indeed, the method can only be applied to static quantities, but the freedom to choose moves makes the method very flexible. An additional advantage is that some systems, such as the Ising model, lack a dynamical description and are only defined by an energy prescription; for these the Monte Carlo approach is the only one feasible.
== Generalizations ==
The great success of this method in statistical mechanics has led to various generalizations such as the method of simulated annealing for optimization, in which a fictitious temperature is introduced and then gradually lowered.
== See also ==
Monte Carlo integration
Metropolis algorithm
Importance sampling
Quantum Monte Carlo
Monte Carlo molecular modeling
== References ==
Allen, M.P. & Tildesley, D.J. (1987). Computer Simulation of Liquids. Oxford University Press. ISBN 0-19-855645-4.
Frenkel, D. & Smit, B. (2001). Understanding Molecular Simulation. Academic Press. ISBN 0-12-267351-4.
Binder, K. & Heermann, D.W. (2002). Monte Carlo Simulation in Statistical Physics. An Introduction (4th ed.). Springer. ISBN 3-540-43221-3.
Spanier, Jerome; Gelbard, Ely M. (2008). "Importance Sampling". Monte Carlo Principles and Neutron Transport Problems. Dover. pp. 110–124. ISBN 978-0-486-46293-6. | Wikipedia/Monte_Carlo_method_in_statistical_mechanics |
A linear response function describes the input-output relationship of a signal transducer, such as a radio turning electromagnetic waves into music or a neuron turning synaptic input into a response. Because of its many applications in information theory, physics and engineering there exist alternative names for specific linear response functions such as susceptibility, impulse response or impedance; see also transfer function. The concept of a Green's function or fundamental solution of an ordinary differential equation is closely related.
== Mathematical definition ==
Denote the input of a system by {\displaystyle h(t)} (e.g. a force), and the response of the system by {\displaystyle x(t)} (e.g. a position). Generally, the value of {\displaystyle x(t)} will depend not only on the present value of {\displaystyle h(t)}, but also on past values. Approximately, {\displaystyle x(t)} is a weighted sum of the previous values of {\displaystyle h(t')}, with the weights given by the linear response function {\displaystyle \chi (t-t')}:
{\displaystyle x(t)=\int _{-\infty }^{t}dt'\,\chi (t-t')h(t')+\cdots \,.}
The explicit term on the right-hand side is the leading order term of a Volterra expansion for the full nonlinear response. If the system in question is highly non-linear, higher order terms in the expansion, denoted by the dots, become important and the signal transducer cannot adequately be described just by its linear response function.
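Discretizing this convolution on a time grid gives a direct way to compute the leading-order response (the exponential kernel and pulse input below are arbitrary illustrations):

```python
import numpy as np

# Discretized linear response x(t) = integral of chi(t - t') h(t') dt'.
dt = 0.01
t = np.arange(0.0, 10.0, dt)

chi = np.exp(-t)                   # a causal kernel chi(t) = e^{-t}, t >= 0
h = np.where(t < 1.0, 1.0, 0.0)    # input: unit pulse of duration 1

# Discrete causal convolution; keep the first len(t) points, scale by dt.
x = np.convolve(chi, h)[: len(t)] * dt

# During the pulse the response approaches 1 - e^{-t}; at t = 1 it is
# close to 1 - e^{-1}.
idx = int(round(1.0 / dt))
```

After the pulse ends, the response decays exponentially, as dictated by the kernel rather than the input.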
The complex-valued Fourier transform {\displaystyle {\tilde {\chi }}(\omega )} of the linear response function is very useful as it describes the output of the system if the input is a sine wave {\displaystyle h(t)=h_{0}\sin(\omega t)} with frequency {\displaystyle \omega }. The output reads
{\displaystyle x(t)=\left|{\tilde {\chi }}(\omega )\right|h_{0}\sin(\omega t+\arg {\tilde {\chi }}(\omega ))\,,}
with amplitude gain {\displaystyle |{\tilde {\chi }}(\omega )|} and phase shift {\displaystyle \arg {\tilde {\chi }}(\omega )}.
== Example ==
Consider a damped harmonic oscillator with input given by an external driving force {\displaystyle h(t)},
{\displaystyle {\ddot {x}}(t)+\gamma {\dot {x}}(t)+\omega _{0}^{2}x(t)=h(t).}
The complex-valued Fourier transform of the linear response function is given by
{\displaystyle {\tilde {\chi }}(\omega )={\frac {{\tilde {x}}(\omega )}{{\tilde {h}}(\omega )}}={\frac {1}{\omega _{0}^{2}-\omega ^{2}+i\gamma \omega }}.}
The amplitude gain is given by the magnitude of the complex number {\displaystyle {\tilde {\chi }}(\omega )}, and the phase shift by the arctangent of the ratio of its imaginary part to its real part.
From this representation, we see that for small {\displaystyle \gamma } the Fourier transform {\displaystyle {\tilde {\chi }}(\omega )} of the linear response function yields a pronounced maximum ("resonance") at the frequency {\displaystyle \omega \approx \omega _{0}}. The linear response function for a harmonic oscillator is mathematically identical to that of an RLC circuit. The width of the maximum, {\displaystyle \Delta \omega ,} is typically much smaller than {\displaystyle \omega _{0},} so that the quality factor {\displaystyle Q:=\omega _{0}/\Delta \omega } can be extremely large.
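A short numerical sketch of this example, locating the resonance and the phase shift for assumed oscillator parameters:

```python
import numpy as np

# Susceptibility of the damped harmonic oscillator (assumed parameters).
omega0, gamma = 1.0, 0.1

def chi(omega):
    # chi~(omega) = 1 / (omega0^2 - omega^2 + i * gamma * omega)
    return 1.0 / (omega0 ** 2 - omega ** 2 + 1j * gamma * omega)

omega = np.linspace(0.01, 2.0, 2000)
gain = np.abs(chi(omega))          # amplitude gain |chi~(omega)|
phase = np.angle(chi(omega))       # phase shift arg chi~(omega)

omega_peak = omega[np.argmax(gain)]   # resonance near omega0 for small gamma
Q = omega0 / gamma                    # quality factor for small damping
```

The gain peaks close to ω0, the phase passes through −π/2 at resonance, and the quality factor grows as the damping γ shrinks.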
== Kubo formula ==
The exposition of linear response theory, in the context of quantum statistics, can be found in a paper by Ryogo Kubo. This defines particularly the Kubo formula, which considers the general case that the "force" h(t) is a perturbation of the basic operator of the system, the Hamiltonian,
{\displaystyle {\hat {H}}_{0}\to {\hat {H}}_{0}-h(t'){\hat {B}}(t')}
where {\displaystyle {\hat {B}}} corresponds to a measurable quantity as input, while the output x(t) is the perturbation of the thermal expectation of another measurable quantity {\displaystyle {\hat {A}}(t)}. The Kubo formula then defines the quantum-statistical calculation of the susceptibility {\displaystyle \chi (t-t')} by a general formula involving only the mentioned operators.
As a consequence of the principle of causality, the complex-valued function {\displaystyle {\tilde {\chi }}(\omega )} has poles only in the lower half-plane. This leads to the Kramers–Kronig relations, which relate the real and imaginary parts of {\displaystyle {\tilde {\chi }}(\omega )} by integration. The simplest example is once more the damped harmonic oscillator.
== See also ==
Convolution
Green–Kubo relations
Fluctuation theorem
Dispersion (optics)
Lindbladian
Semilinear response
Green's function
Impulse response
Resolvent formalism
Propagator
== References ==
== External links ==
Linear Response Functions in Eva Pavarini, Erik Koch, Dieter Vollhardt, and Alexander Lichtenstein (eds.): DMFT at 25: Infinite Dimensions, Verlag des Forschungszentrum Jülich, 2014 ISBN 978-3-89336-953-9 | Wikipedia/Linear_response_theory |
In physics, a partition function describes the statistical properties of a system in thermodynamic equilibrium. Partition functions are functions of the thermodynamic state variables, such as the temperature and volume. Most of the aggregate thermodynamic variables of the system, such as the total energy, free energy, entropy, and pressure, can be expressed in terms of the partition function or its derivatives. The partition function is dimensionless.
Each partition function is constructed to represent a particular statistical ensemble (which, in turn, corresponds to a particular free energy). The most common statistical ensembles have named partition functions. The canonical partition function applies to a canonical ensemble, in which the system is allowed to exchange heat with the environment at fixed temperature, volume, and number of particles. The grand canonical partition function applies to a grand canonical ensemble, in which the system can exchange both heat and particles with the environment, at fixed temperature, volume, and chemical potential. Other types of partition functions can be defined for different circumstances; see partition function (mathematics) for generalizations. The partition function has many physical meanings, as discussed in Meaning and significance.
== Canonical partition function ==
=== Definition ===
Initially, let us assume that a thermodynamically large system is in thermal contact with the environment, with a temperature T, and both the volume of the system and the number of constituent particles are fixed. A collection of this kind of system comprises an ensemble called a canonical ensemble. The appropriate mathematical expression for the canonical partition function depends on the degrees of freedom of the system, whether the context is classical mechanics or quantum mechanics, and whether the spectrum of states is discrete or continuous.
==== Classical discrete system ====
For a canonical ensemble that is classical and discrete, the canonical partition function is defined as
{\displaystyle Z=\sum _{i}e^{-\beta E_{i}},}
where {\displaystyle i} is the index for the microstates of the system; {\displaystyle e} is Euler's number; {\displaystyle \beta } is the thermodynamic beta, defined as {\displaystyle {\tfrac {1}{k_{\text{B}}T}}} where {\displaystyle k_{\text{B}}} is the Boltzmann constant; and {\displaystyle E_{i}} is the total energy of the system in the respective microstate.
The exponential factor {\displaystyle e^{-\beta E_{i}}} is otherwise known as the Boltzmann factor.
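As a minimal illustration (a hypothetical two-level system with energies 0 and 1, in units where the Boltzmann constant equals 1):

```python
import numpy as np

# Canonical partition function of a two-level system (illustrative values).
T = 2.0
beta = 1.0 / T
energies = np.array([0.0, 1.0])

boltzmann = np.exp(-beta * energies)   # Boltzmann factors e^{-beta E_i}
Z = boltzmann.sum()                    # partition function
probs = boltzmann / Z                  # microstate probabilities
```

Dividing each Boltzmann factor by Z yields normalized probabilities, with the lower-energy state always the more probable at finite temperature.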
==== Classical continuous system ====
In classical mechanics, the position and momentum variables of a particle can vary continuously, so the set of microstates is actually uncountable. In classical statistical mechanics, it is rather inaccurate to express the partition function as a sum of discrete terms. In this case we must describe the partition function using an integral rather than a sum. For a canonical ensemble that is classical and continuous, the canonical partition function is defined as
{\displaystyle Z={\frac {1}{h^{3}}}\int e^{-\beta H(q,p)}\,d^{3}q\,d^{3}p,}
where {\displaystyle h} is the Planck constant; {\displaystyle \beta } is the thermodynamic beta, defined as {\displaystyle {\tfrac {1}{k_{\text{B}}T}}}; {\displaystyle H(q,p)} is the Hamiltonian of the system; {\displaystyle q} is the canonical position; and {\displaystyle p} is the canonical momentum.
To make it into a dimensionless quantity, we must divide it by one factor of h per position–momentum pair (here h3), where h is some quantity with units of action (usually taken to be the Planck constant).
For generalized cases, the partition function of {\displaystyle N} particles in {\displaystyle d} dimensions is given by
{\displaystyle Z={\frac {1}{h^{Nd}}}\int \prod _{i=1}^{N}e^{-\beta {\mathcal {H}}({\textbf {q}}_{i},{\textbf {p}}_{i})}\,d^{d}{\textbf {q}}_{i}\,d^{d}{\textbf {p}}_{i}.}
==== Classical continuous system (multiple identical particles) ====
For a gas of {\displaystyle N} identical classical non-interacting particles in three dimensions, the partition function is
{\displaystyle Z={\frac {1}{N!h^{3N}}}\int \,\exp \left(-\beta \sum _{i=1}^{N}H({\textbf {q}}_{i},{\textbf {p}}_{i})\right)\;d^{3}q_{1}\cdots d^{3}q_{N}\,d^{3}p_{1}\cdots d^{3}p_{N}={\frac {Z_{\text{single}}^{N}}{N!}}}
where {\displaystyle h} is the Planck constant; {\displaystyle \beta } is the thermodynamic beta, defined as {\displaystyle {\tfrac {1}{k_{\text{B}}T}}}; {\displaystyle i} is the index for the particles of the system; {\displaystyle H} is the Hamiltonian of a respective particle; {\displaystyle q_{i}} is the canonical position of the respective particle; {\displaystyle p_{i}} is the canonical momentum of the respective particle; {\displaystyle d^{3}} is shorthand notation indicating that {\displaystyle q_{i}} and {\displaystyle p_{i}} are vectors in three-dimensional space; and {\displaystyle Z_{\text{single}}} is the classical continuous partition function of a single particle as given in the previous section.
The reason for the factorial factor N! is discussed below. The extra constant factor in the denominator was introduced because, unlike the discrete form, the continuous form shown above is not dimensionless. As stated in the previous section, to make it into a dimensionless quantity, we must divide it by h3N (where h is usually taken to be the Planck constant).
==== Quantum mechanical discrete system ====
For a canonical ensemble that is quantum mechanical and discrete, the canonical partition function is defined as the trace of the Boltzmann factor:
{\displaystyle Z=\operatorname {tr} (e^{-\beta {\hat {H}}}),}
where {\displaystyle \operatorname {tr} (\circ )} is the trace of a matrix; {\displaystyle \beta } is the thermodynamic beta, defined as {\displaystyle {\tfrac {1}{k_{\text{B}}T}}}; and {\displaystyle {\hat {H}}} is the Hamiltonian operator. The dimension of {\displaystyle e^{-\beta {\hat {H}}}} is the number of energy eigenstates of the system.
==== Quantum mechanical continuous system ====
For a canonical ensemble that is quantum mechanical and continuous, the canonical partition function is defined as
{\displaystyle Z={\frac {1}{h}}\int \left\langle q,p\right\vert e^{-\beta {\hat {H}}}\left\vert q,p\right\rangle \,dq\,dp,}
where {\displaystyle h} is the Planck constant; {\displaystyle \beta } is the thermodynamic beta, defined as {\displaystyle {\tfrac {1}{k_{\text{B}}T}}}; {\displaystyle {\hat {H}}} is the Hamiltonian operator; {\displaystyle q} is the canonical position; and {\displaystyle p} is the canonical momentum.
In systems with multiple quantum states s sharing the same energy Es, it is said that the energy levels of the system are degenerate. In the case of degenerate energy levels, we can write the partition function in terms of the contribution from energy levels (indexed by j) as follows:
{\displaystyle Z=\sum _{j}g_{j}\,e^{-\beta E_{j}},}
where gj is the degeneracy factor, or number of quantum states s that have the same energy level defined by Ej = Es.
The above treatment applies to quantum statistical mechanics, where a physical system inside a finite-sized box will typically have a discrete set of energy eigenstates, which we can use as the states s above. In quantum mechanics, the partition function can be more formally written as a trace over the state space (which is independent of the choice of basis):
{\displaystyle Z=\operatorname {tr} (e^{-\beta {\hat {H}}}),}
where Ĥ is the quantum Hamiltonian operator. The exponential of an operator can be defined using the exponential power series.
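As a sketch, for a small Hermitian matrix standing in for Ĥ (the matrix entries and β are arbitrary), the trace can be evaluated either as a sum over energy eigenstates or via the matrix exponential built from the eigendecomposition:

```python
import numpy as np

# Z = tr(exp(-beta H)) for a small Hermitian matrix Hamiltonian.
beta = 0.5
H = np.array([[1.0, 0.3],
              [0.3, 2.0]])

# Sum over energy eigenstates.
eigvals = np.linalg.eigvalsh(H)
Z_eig = np.exp(-beta * eigvals).sum()

# Basis-independent trace of the matrix exponential, assembled from the
# eigendecomposition of H.
w, V = np.linalg.eigh(H)
Z_tr = np.trace(V @ np.diag(np.exp(-beta * w)) @ V.T)
```

Both evaluations agree, reflecting the basis independence of the trace.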
The classical form of Z is recovered when the trace is expressed in terms of coherent states and when quantum-mechanical uncertainties in the position and momentum of a particle are regarded as negligible. Formally, using bra–ket notation, one inserts under the trace for each degree of freedom the identity:
{\displaystyle {\boldsymbol {1}}=\int |x,p\rangle \langle x,p|{\frac {dx\,dp}{h}},}
where |x, p⟩ is a normalised Gaussian wavepacket centered at position x and momentum p. Thus
{\displaystyle Z=\int \operatorname {tr} \left(e^{-\beta {\hat {H}}}|x,p\rangle \langle x,p|\right){\frac {dx\,dp}{h}}=\int \langle x,p|e^{-\beta {\hat {H}}}|x,p\rangle {\frac {dx\,dp}{h}}.}
A coherent state is an approximate eigenstate of both operators {\displaystyle {\hat {x}}} and {\displaystyle {\hat {p}}}, hence also of the Hamiltonian Ĥ, with errors of the size of the uncertainties. If Δx and Δp can be regarded as zero, the action of Ĥ reduces to multiplication by the classical Hamiltonian, and Z reduces to the classical configuration integral.
=== Connection to probability theory ===
For simplicity, we will use the discrete form of the partition function in this section. Our results will apply equally well to the continuous form.
Consider a system S embedded into a heat bath B. Let the total energy of both systems be E. Let pi denote the probability that the system S is in a particular microstate, i, with energy Ei. According to the fundamental postulate of statistical mechanics (which states that all attainable microstates of a system are equally probable), the probability pi will be inversely proportional to the number of microstates of the total closed system (S, B) in which S is in microstate i with energy Ei. Equivalently, pi will be proportional to the number of microstates of the heat bath B with energy E − Ei:
{\displaystyle p_{i}={\frac {\Omega _{B}(E-E_{i})}{\Omega _{(S,B)}(E)}}.}
Assuming that the heat bath's internal energy is much larger than the energy of S (E ≫ Ei), we can Taylor-expand {\displaystyle \Omega _{B}} to first order in Ei and use the thermodynamic relation {\displaystyle \partial S_{B}/\partial E=1/T}, where {\displaystyle S_{B}} and {\displaystyle T} are the entropy and temperature of the bath respectively:
{\displaystyle {\begin{aligned}k\ln p_{i}&=k\ln \Omega _{B}(E-E_{i})-k\ln \Omega _{(S,B)}(E)\\[5pt]&\approx -{\frac {\partial {\big (}k\ln \Omega _{B}(E){\big )}}{\partial E}}E_{i}+k\ln \Omega _{B}(E)-k\ln \Omega _{(S,B)}(E)\\[5pt]&\approx -{\frac {\partial S_{B}}{\partial E}}E_{i}+k\ln {\frac {\Omega _{B}(E)}{\Omega _{(S,B)}(E)}}\\[5pt]&\approx -{\frac {E_{i}}{T}}+k\ln {\frac {\Omega _{B}(E)}{\Omega _{(S,B)}(E)}}\end{aligned}}}
Thus
{\displaystyle p_{i}\propto e^{-E_{i}/(kT)}=e^{-\beta E_{i}}.}
Since the total probability to find the system in some microstate (the sum of all pi) must be equal to 1, we know that the constant of proportionality must be the normalization constant, and so, we can define the partition function to be this constant:
{\displaystyle Z=\sum _{i}e^{-\beta E_{i}}={\frac {\Omega _{(S,B)}(E)}{\Omega _{B}(E)}}.}
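The normalization role of Z can be made concrete with a short numerical sketch (not part of the original article; the function name and the example two-level spectrum are illustrative choices, in units where k = 1):

```python
import math

def boltzmann_probabilities(energies, kT):
    """Boltzmann distribution p_i = exp(-E_i / kT) / Z for a discrete spectrum."""
    beta = 1.0 / kT
    weights = [math.exp(-beta * E) for E in energies]
    Z = sum(weights)                      # the partition function (normalization)
    return Z, [w / Z for w in weights]

# Two-level system with energies 0 and 1 at kT = 1
Z, probs = boltzmann_probabilities([0.0, 1.0], kT=1.0)
# probs sums to 1 by construction, and the lower-energy state is more probable
```

Here Z = 1 + e^(-1), so the ground state carries probability 1/Z ≈ 0.73.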
=== Calculating the thermodynamic total energy ===
In order to demonstrate the usefulness of the partition function, let us calculate the thermodynamic value of the total energy. This is simply the expected value, or ensemble average for the energy, which is the sum of the microstate energies weighted by their probabilities:
{\displaystyle {\begin{aligned}\langle E\rangle =\sum _{s}E_{s}P_{s}&={\frac {1}{Z}}\sum _{s}E_{s}e^{-\beta E_{s}}\\[1ex]&=-{\frac {1}{Z}}{\frac {\partial }{\partial \beta }}Z(\beta ,E_{1},E_{2},\dots )\\[1ex]&=-{\frac {\partial \ln Z}{\partial \beta }}\end{aligned}}}
or, equivalently,
{\displaystyle \langle E\rangle =k_{\text{B}}T^{2}{\frac {\partial \ln Z}{\partial T}}.}
Note that if the microstate energies depend on a parameter λ in the manner
{\displaystyle E_{s}=E_{s}^{(0)}+\lambda A_{s}\qquad {\text{for all}}\;s}
then the expected value of A is
{\displaystyle \langle A\rangle =\sum _{s}A_{s}P_{s}=-{\frac {1}{\beta }}{\frac {\partial }{\partial \lambda }}\ln Z(\beta ,\lambda ).}
This provides us with a method for calculating the expected values of many microscopic quantities. We add the quantity artificially to the microstate energies (or, in the language of quantum mechanics, to the Hamiltonian), calculate the new partition function and expected value, and then set λ to zero in the final expression. This is analogous to the source field method used in the path integral formulation of quantum field theory.
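The λ-derivative trick described above can be sketched numerically (an illustration, not from the article; the derivative is again approximated by a finite difference and evaluated at λ = 0):

```python
import math

def expected_A_via_lambda(E0, A, beta, lam=0.0, h=1e-6):
    """<A> = -(1/beta) d(ln Z(beta, lambda))/d(lambda), evaluated at lam."""
    def lnZ(l):
        return math.log(sum(math.exp(-beta * (e + l * a))
                            for e, a in zip(E0, A)))
    return -(1.0 / beta) * (lnZ(lam + h) - lnZ(lam - h)) / (2.0 * h)

def expected_A_direct(E0, A, beta):
    """<A> computed directly as the Boltzmann-weighted average of A_s."""
    w = [math.exp(-beta * e) for e in E0]
    Z = sum(w)
    return sum(a * wi for a, wi in zip(A, w)) / Z

E0 = [0.0, 0.4, 1.1]    # unperturbed microstate energies E_s^(0)
A  = [1.0, -2.0, 0.5]   # the quantity A_s coupled in via lambda
beta = 1.3
```

At λ = 0 the derivative route reproduces the direct ensemble average of A.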
=== Relation to thermodynamic variables ===
In this section, we will state the relationships between the partition function and the various thermodynamic parameters of the system. These results can be derived using the method of the previous section and the various thermodynamic relations.
As we have already seen, the thermodynamic energy is
{\displaystyle \langle E\rangle =-{\frac {\partial \ln Z}{\partial \beta }}.}
The variance in the energy (or "energy fluctuation") is
{\displaystyle \left\langle (\Delta E)^{2}\right\rangle \equiv \left\langle (E-\langle E\rangle )^{2}\right\rangle =\left\langle E^{2}\right\rangle -{\left\langle E\right\rangle }^{2}={\frac {\partial ^{2}\ln Z}{\partial \beta ^{2}}}.}
The heat capacity is
{\displaystyle C_{v}={\frac {\partial \langle E\rangle }{\partial T}}={\frac {1}{k_{\text{B}}T^{2}}}\left\langle (\Delta E)^{2}\right\rangle .}
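This fluctuation–dissipation style relation between heat capacity and energy variance is easy to verify numerically. A minimal sketch (not from the article; units with k_B = 1, derivative by finite difference):

```python
import math

def moments(energies, beta):
    """Return (<E>, variance of E) in the canonical ensemble."""
    w = [math.exp(-beta * E) for E in energies]
    Z = sum(w)
    e1 = sum(E * wi for E, wi in zip(energies, w)) / Z
    e2 = sum(E * E * wi for E, wi in zip(energies, w)) / Z
    return e1, e2 - e1 * e1

def heat_capacity_fd(energies, T, h=1e-6):
    """C_v = d<E>/dT via a central finite difference (k_B = 1 units)."""
    E_of_T = lambda t: moments(energies, 1.0 / t)[0]
    return (E_of_T(T + h) - E_of_T(T - h)) / (2.0 * h)

E, T = [0.0, 1.0, 2.5], 0.8
_, var = moments(E, 1.0 / T)
# heat_capacity_fd(E, T) agrees with var / T**2
```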
In general, consider the extensive variable X and intensive variable Y where X and Y form a pair of conjugate variables. In ensembles where Y is fixed (and X is allowed to fluctuate), then the average value of X will be:
{\displaystyle \langle X\rangle =\pm {\frac {\partial \ln Z}{\partial (\beta Y)}}.}
The sign will depend on the specific definitions of the variables X and Y. An example would be X = volume and Y = pressure. Additionally, the variance in X will be
{\displaystyle \left\langle (\Delta X)^{2}\right\rangle \equiv \left\langle (X-\langle X\rangle )^{2}\right\rangle ={\frac {\partial \langle X\rangle }{\partial \beta Y}}={\frac {\partial ^{2}\ln Z}{\partial (\beta Y)^{2}}}.}
In the special case of entropy, it is given by
{\displaystyle S\equiv -k_{\text{B}}\sum _{s}P_{s}\ln P_{s}=k_{\text{B}}(\ln Z+\beta \langle E\rangle )={\frac {\partial }{\partial T}}(k_{\text{B}}T\ln Z)=-{\frac {\partial A}{\partial T}}}
where A is the Helmholtz free energy defined as A = U − TS, where U = ⟨E⟩ is the total energy and S is the entropy, so that
{\displaystyle A=\langle E\rangle -TS=-k_{\text{B}}T\ln Z.}
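The internal consistency of these three quantities (⟨E⟩, S, A) follows directly from the definitions, and a short numerical sketch confirms it (illustrative only, not from the article; k_B = 1 units):

```python
import math

def thermo_from_Z(energies, T):
    """Compute <E>, entropy S, and Helmholtz free energy A from Z (k_B = 1)."""
    beta = 1.0 / T
    w = [math.exp(-beta * E) for E in energies]
    Z = sum(w)
    avg_E = sum(E * wi for E, wi in zip(energies, w)) / Z
    S = math.log(Z) + beta * avg_E   # Gibbs entropy, S = k(ln Z + beta <E>)
    A = -T * math.log(Z)             # Helmholtz free energy, A = -kT ln Z
    return avg_E, S, A

avg_E, S, A = thermo_from_Z([0.0, 1.0, 2.0], T=1.5)
# Consistency check: A equals <E> - T S
```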
Furthermore, the heat capacity can be expressed as
{\displaystyle C_{\text{v}}=T{\frac {\partial S}{\partial T}}=-T{\frac {\partial ^{2}A}{\partial T^{2}}}.}
=== Partition functions of subsystems ===
Suppose a system is subdivided into N sub-systems with negligible interaction energy, that is, we can assume the particles are essentially non-interacting. If the partition functions of the sub-systems are ζ1, ζ2, ..., ζN, then the partition function of the entire system is the product of the individual partition functions:
{\displaystyle Z=\prod _{j=1}^{N}\zeta _{j}.}
If the sub-systems have the same physical properties, then their partition functions are equal, ζ1 = ζ2 = ... = ζ, in which case
{\displaystyle Z=\zeta ^{N}.}
However, there is a well-known exception to this rule. If the sub-systems are actually identical particles, in the quantum mechanical sense that they are impossible to distinguish even in principle, the total partition function must be divided by N! (N factorial):
{\displaystyle Z={\frac {\zeta ^{N}}{N!}}.}
This is to ensure that we do not "over-count" the number of microstates. While this may seem like a strange requirement, it is actually necessary to preserve the existence of a thermodynamic limit for such systems. This is known as the Gibbs paradox.
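The product rule for distinguishable sub-systems can be checked by brute force: summing the Boltzmann factor over every joint microstate of N identical two-level sub-systems reproduces ζ^N exactly. A minimal sketch (not from the article; the two-level spectrum and β = 1 are illustrative choices):

```python
import math
from itertools import product

beta = 1.0
levels = [0.0, 1.0]   # energy levels of a single two-level sub-system
zeta = sum(math.exp(-beta * e) for e in levels)   # single-subsystem partition function

N = 3
# Brute-force sum over all 2**N joint microstates of N distinguishable sub-systems
Z_brute = sum(math.exp(-beta * sum(cfg)) for cfg in product(levels, repeat=N))
# Z_brute equals zeta**N; for indistinguishable particles one would instead
# use the corrected form zeta**N / math.factorial(N)
```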
=== Meaning and significance ===
It may not be obvious why the partition function, as we have defined it above, is an important quantity. First, consider what goes into it. The partition function is a function of the temperature T and the microstate energies E1, E2, E3, etc. The microstate energies are determined by other thermodynamic variables, such as the number of particles and the volume, as well as microscopic quantities like the mass of the constituent particles. This dependence on microscopic variables is the central point of statistical mechanics. With a model of the microscopic constituents of a system, one can calculate the microstate energies, and thus the partition function, which will then allow us to calculate all the other thermodynamic properties of the system.
The partition function can be related to thermodynamic properties because it has a very important statistical meaning. The probability Ps that the system occupies microstate s is
{\displaystyle P_{s}={\frac {1}{Z}}e^{-\beta E_{s}}.}
Thus, as shown above, the partition function plays the role of a normalizing constant (note that it does not depend on s), ensuring that the probabilities sum up to one:
{\displaystyle \sum _{s}P_{s}={\frac {1}{Z}}\sum _{s}e^{-\beta E_{s}}={\frac {1}{Z}}Z=1.}
This is the reason for calling Z the "partition function": it encodes how the probabilities are partitioned among the different microstates, based on their individual energies. Other partition functions for different ensembles divide up the probabilities based on other macrostate variables. As an example: the partition function for the isothermal-isobaric ensemble, the generalized Boltzmann distribution, divides up probabilities based on particle number, pressure, and temperature. The energy is replaced by the characteristic potential of that ensemble, the Gibbs free energy. The letter Z stands for the German word Zustandssumme, "sum over states". The usefulness of the partition function stems from the fact that the macroscopic thermodynamic quantities of a system can be related to its microscopic details through the derivatives of its partition function. Finding the partition function is also equivalent to performing a Laplace transform of the density of states function from the energy domain to the β domain, and the inverse Laplace transform of the partition function recovers the density of states function of energies.
== Grand canonical partition function ==
We can define a grand canonical partition function for a grand canonical ensemble, which describes the statistics of a constant-volume system that can exchange both heat and particles with a reservoir. The reservoir has a constant temperature T, and a chemical potential μ.
The grand canonical partition function, denoted by
{\displaystyle {\mathcal {Z}}}
, is the following sum over microstates
{\displaystyle {\mathcal {Z}}(\mu ,V,T)=\sum _{i}\exp \left({\frac {N_{i}\mu -E_{i}}{k_{B}T}}\right).}
Here, each microstate is labelled by
{\displaystyle i}
, and has total particle number
{\displaystyle N_{i}}
and total energy
{\displaystyle E_{i}}
. This partition function is closely related to the grand potential,
{\displaystyle \Phi _{\rm {G}}}
, by the relation
{\displaystyle -k_{\text{B}}T\ln {\mathcal {Z}}=\Phi _{\rm {G}}=\langle E\rangle -TS-\mu \langle N\rangle .}
This can be contrasted to the canonical partition function above, which is related instead to the Helmholtz free energy.
It is important to note that the number of microstates in the grand canonical ensemble may be much larger than in the canonical ensemble, since here we consider not only variations in energy but also in particle number. Again, the utility of the grand canonical partition function is that it is related to the probability that the system is in state
{\displaystyle i}
:
{\displaystyle p_{i}={\frac {1}{\mathcal {Z}}}\exp \left({\frac {N_{i}\mu -E_{i}}{k_{B}T}}\right).}
An important application of the grand canonical ensemble is in deriving exactly the statistics of a non-interacting many-body quantum gas (Fermi–Dirac statistics for fermions, Bose–Einstein statistics for bosons); however, it is much more generally applicable than that. The grand canonical ensemble may also be used to describe classical systems, or even interacting quantum gases.
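As a concrete illustration of the fermionic case (a sketch, not from the article): a single fermionic orbital has exactly two grand canonical microstates, empty (N = 0, E = 0) and filled (N = 1, E = ε), so its grand partition function and mean occupation can be written down directly, in units where k_B = 1:

```python
import math

def fermi_occupation(eps, mu, kT):
    """Mean occupation of one fermionic orbital from its grand partition function.

    Microstates: empty (N=0, E=0) and filled (N=1, E=eps);
    each is weighted by exp((N*mu - E) / kT).
    """
    beta = 1.0 / kT
    weights = {0: 1.0, 1: math.exp(beta * (mu - eps))}
    Zg = sum(weights.values())           # grand partition function
    return weights[1] / Zg               # probability the orbital is filled

# This reduces to the Fermi-Dirac distribution 1 / (exp((eps - mu)/kT) + 1)
n = fermi_occupation(eps=0.3, mu=0.1, kT=0.05)
```

At ε = μ the occupation is exactly 1/2, as expected for the Fermi–Dirac distribution.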
The grand partition function is sometimes written (equivalently) in terms of alternate variables as
{\displaystyle {\mathcal {Z}}(z,V,T)=\sum _{N_{i}}z^{N_{i}}Z(N_{i},V,T),}
where
{\displaystyle z\equiv \exp(\mu /k_{\text{B}}T)}
is known as the absolute activity (or fugacity) and
{\displaystyle Z(N_{i},V,T)}
is the canonical partition function.
== See also ==
Partition function (mathematics)
Partition function (quantum field theory)
Virial theorem
Widom insertion method
== References ==