In the gravitational two-body problem, the specific orbital energy ε (or specific vis-viva energy) of two orbiting bodies is the constant quotient of their mechanical energy (the sum of their mutual potential energy, ε_p, and their kinetic energy, ε_k) to their reduced mass.
According to the orbital energy conservation equation (also referred to as the vis-viva equation), it does not vary with time:

\varepsilon = \varepsilon_{k} + \varepsilon_{p} = \frac{v^{2}}{2} - \frac{\mu}{r} = -\frac{1}{2}\,\frac{\mu^{2}}{h^{2}}\left(1 - e^{2}\right) = -\frac{\mu}{2a}
where
v is the relative orbital speed;
r is the orbital distance between the bodies;
μ = G(m1 + m2) is the sum of the standard gravitational parameters of the bodies;
h is the specific relative angular momentum in the sense of relative angular momentum divided by the reduced mass;
e is the orbital eccentricity;
a is the semi-major axis.
It is a kind of specific energy, typically expressed in units of MJ/kg (megajoules per kilogram) or km²/s² (square kilometers per square second). For an elliptic orbit the specific orbital energy is the negative of the additional energy required to accelerate a mass of one kilogram to escape velocity (parabolic orbit). For a hyperbolic orbit, it is equal to the excess energy compared to that of a parabolic orbit. In this case the specific orbital energy is also referred to as characteristic energy.
== Equation forms for different orbits ==
For an elliptic orbit, the specific orbital energy equation, when combined with conservation of specific angular momentum at one of the orbit's apsides, simplifies to:
\varepsilon = -\frac{\mu}{2a}
where
μ = G(m1 + m2) is the standard gravitational parameter;
a is the semi-major axis of the orbit.
For a parabolic orbit this equation simplifies to
\varepsilon = 0.
For a hyperbolic trajectory this specific orbital energy is either given by
\varepsilon = \frac{\mu}{2a},

or the same as for an ellipse, depending on the convention for the sign of a.
In this case the specific orbital energy is also referred to as characteristic energy (or C3) and is equal to the excess specific energy compared to that for a parabolic orbit.
It is related to the hyperbolic excess velocity v∞ (the orbital velocity at infinity) by

2\varepsilon = C_{3} = v_{\infty}^{2}.
It is relevant for interplanetary missions.
Thus, if the orbital position vector (r) and orbital velocity vector (v) are known at one position, and μ is known, then the energy can be computed and from that, for any other position, the orbital speed.
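As a minimal illustration of this computation, the following sketch applies the conservation equation to a single known state and then uses the resulting energy to get the speed at another distance. The state vector, the μ value, and the helper names are illustrative assumptions, not values from the article:

import math

def specific_orbital_energy(r_vec, v_vec, mu):
    # eps = v^2/2 - mu/r, evaluated from one known position/velocity pair
    r = math.sqrt(sum(c * c for c in r_vec))   # orbital distance |r|
    v = math.sqrt(sum(c * c for c in v_vec))   # relative orbital speed |v|
    return v * v / 2.0 - mu / r

def speed_at_distance(eps, r_other, mu):
    # Rearranged vis-viva: v = sqrt(2*(eps + mu/r)) at any distance the orbit reaches
    return math.sqrt(2.0 * (eps + mu / r_other))

# Illustrative low-Earth-orbit state (assumed values)
mu_earth = 3.986e14                               # m^3/s^2
r_vec = (6.78e6, 0.0, 0.0)                        # m
v_vec = (0.0, 8.0e3, 0.0)                         # m/s
eps = specific_orbital_energy(r_vec, v_vec, mu_earth)
print(eps / 1e6)                                  # about -26.8 MJ/kg
print(speed_at_distance(eps, 7.0e6, mu_earth))    # speed at r = 7000 km, about 7.8 km/s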
== Rate of change ==
For an elliptic orbit the rate of change of the specific orbital energy with respect to a change in the semi-major axis is

\frac{\mu}{2a^{2}}
where
μ = G(m1 + m2) is the standard gravitational parameter;
a is the semi-major axis of the orbit.
In the case of circular orbits, this rate is one half of the local gravitational acceleration at the orbit. This corresponds to the fact that for such orbits the total energy is one half of the potential energy, because the kinetic energy is minus one half of the potential energy.
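Explicitly, this rate is obtained by differentiating ε = −μ/(2a); the circular-orbit statement above follows because then r = a and the local gravitational acceleration is g = μ/r²:

\frac{d\varepsilon}{da} = \frac{d}{da}\left(-\frac{\mu}{2a}\right) = \frac{\mu}{2a^{2}},
\qquad\text{and for } r = a:\quad \frac{\mu}{2a^{2}} = \frac{1}{2}\,\frac{\mu}{r^{2}} = \frac{g}{2}.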
== Additional energy ==
If the central body has radius R, then the additional specific energy of an elliptic orbit compared to being stationary at the surface is
-\frac{\mu}{2a} + \frac{\mu}{R} = \frac{\mu(2a - R)}{2aR}
The quantity 2a − R is the height the ellipse extends above the surface, plus the periapsis distance (the distance the ellipse extends beyond the center of the Earth). For the Earth and a just a little more than R, the additional specific energy is gR/2, which is the kinetic energy of the horizontal component of the velocity:

\tfrac{1}{2}V^{2} = \tfrac{1}{2}gR, \qquad V = \sqrt{gR}.
== Examples ==
=== ISS ===
The International Space Station has an orbital period of 91.74 minutes (5504 s), hence by Kepler's Third Law the semi-major axis of its orbit is 6,738 km.
The specific orbital energy associated with this orbit is −29.6 MJ/kg: the potential energy is −59.2 MJ/kg, and the kinetic energy 29.6 MJ/kg. Compared with the potential energy at the surface, which is −62.6 MJ/kg, the extra potential energy is 3.4 MJ/kg, and the total extra energy is 33.0 MJ/kg. The average speed is 7.7 km/s, the net delta-v to reach this orbit is 8.1 km/s (the actual delta-v is typically 1.5–2.0 km/s more for atmospheric drag and gravity drag).
The increase per meter would be 4.4 J/kg; this rate corresponds to one half of the local gravity of 8.8 m/s2.
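These figures can be checked from the orbital period alone. The short sketch below redoes the calculation; Earth's gravitational parameter and mean radius are assumed constants, as they are not stated in the text:

import math

mu = 3.986e14        # m^3/s^2, Earth's standard gravitational parameter (assumed)
R_earth = 6.371e6    # m, Earth's mean radius (assumed)
T = 5504.0           # s, ISS orbital period given above

# Kepler's third law: T^2 = 4*pi^2*a^3/mu  =>  a = (mu*T^2 / (4*pi^2))**(1/3)
a = (mu * T ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)   # about 6.74e6 m

eps = -mu / (2.0 * a)        # specific orbital energy, about -29.6 MJ/kg
eps_p = -mu / a              # potential energy (near-circular orbit, r ~ a), about -59.2 MJ/kg
eps_k = eps - eps_p          # kinetic energy, about +29.6 MJ/kg
eps_surface = -mu / R_earth  # potential energy at the surface, about -62.6 MJ/kg

print(a / 1e3, eps / 1e6, eps_k / 1e6, (eps - eps_surface) / 1e6)   # km, MJ/kg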
For an altitude of 100 km (radius is 6471 km):
The energy is −30.8 MJ/kg: the potential energy is −61.6 MJ/kg, and the kinetic energy 30.8 MJ/kg. Compare with the potential energy at the surface, which is −62.6 MJ/kg. The extra potential energy is 1.0 MJ/kg, the total extra energy is 31.8 MJ/kg.
The increase per meter would be 4.8 J/kg; this rate corresponds to one half of the local gravity of 9.5 m/s2. The speed is 7.8 km/s, the net delta-v to reach this orbit is 8.0 km/s.
Taking into account the rotation of the Earth, the delta-v is up to 0.46 km/s less (starting at the equator and going east) or more (if going west).
=== Voyager 1 ===
For Voyager 1, with respect to the Sun:
μ = GM = 132,712,440,018 km³⋅s⁻² is the standard gravitational parameter of the Sun
r = 17 billion kilometers
v = 17.1 km/s
Hence:
\varepsilon = \varepsilon_{k} + \varepsilon_{p} = \frac{v^{2}}{2} - \frac{\mu}{r} = 146\ \mathrm{km^{2}\,s^{-2}} - 8\ \mathrm{km^{2}\,s^{-2}} = 138\ \mathrm{km^{2}\,s^{-2}}
Thus the hyperbolic excess velocity (the theoretical orbital velocity at infinity) is given by
v_{\infty} = 16.6\ \mathrm{km/s}
However, Voyager 1 does not have enough velocity to leave the Milky Way. The computed speed applies far away from the Sun, but at such a position that the potential energy with respect to the Milky Way as a whole has changed negligibly, and only if there is no strong interaction with celestial bodies other than the Sun.
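The same arithmetic, using only the values quoted above, reproduces these results:

import math

mu_sun = 1.32712440018e11   # km^3/s^2, standard gravitational parameter of the Sun
r = 17.0e9                  # km, distance from the Sun
v = 17.1                    # km/s, heliocentric speed

eps = v ** 2 / 2.0 - mu_sun / r   # specific orbital energy, about 138 km^2/s^2
v_inf = math.sqrt(2.0 * eps)      # hyperbolic excess velocity, about 16.6 km/s

print(eps, v_inf)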
== Applying thrust ==
Assume:
a is the acceleration due to thrust (the time-rate at which delta-v is spent)
g is the gravitational field strength
v is the velocity of the rocket
Then the time-rate of change of the specific energy of the rocket is v ⋅ a: an amount v ⋅ (a − g) for the kinetic energy and an amount v ⋅ g for the potential energy.
The change of the specific energy of the rocket per unit change of delta-v is v ⋅ a / |a|, which is |v| times the cosine of the angle between v and a.
Thus, when applying delta-v to increase specific orbital energy, this is done most efficiently if a is applied in the direction of v, and when |v| is large. If the angle between v and g is obtuse, for example in a launch and in a transfer to a higher orbit, this means applying the delta-v as early as possible and at full capacity. See also gravity drag. When passing by a celestial body it means applying thrust when nearest to the body. When gradually making an elliptic orbit larger, it means applying thrust each time when near the periapsis. Such a maneuver is called an Oberth maneuver or powered flyby.
When applying delta-v to decrease specific orbital energy, this is done most efficiently if a is applied in the direction opposite to that of v, and again when |v| is large. If the angle between v and g is acute, for example in a landing (on a celestial body without atmosphere) and in a transfer to a circular orbit around a celestial body when arriving from outside, this means applying the delta-v as late as possible. When passing by a planet it means applying thrust when nearest to the planet. When gradually making an elliptic orbit smaller, it means applying thrust each time when near the periapsis.
If a is in the direction of v:

\Delta \varepsilon = \int v\, d(\Delta v) = \int v\, a\, dt
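Since Δε ≈ v Δv for a small prograde burn, the same delta-v buys more energy where the orbital speed is higher. The sketch below compares an identical small burn at periapsis and at apoapsis of an assumed elliptical orbit; the orbit and the 10 m/s burn are illustrative, not taken from the article:

import math

mu = 3.986e14                  # m^3/s^2, Earth's standard gravitational parameter (assumed)
r_peri, r_apo = 6.6e6, 4.2e7   # m, periapsis and apoapsis of an illustrative orbit
a = (r_peri + r_apo) / 2.0     # semi-major axis

def speed(r):
    # Vis-viva speed at distance r on this orbit
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

dv = 10.0                           # m/s, small prograde burn
gain_peri = speed(r_peri) * dv      # specific-energy gain when thrust is along v at periapsis
gain_apo = speed(r_apo) * dv        # the same burn applied at apoapsis

print(gain_peri, gain_apo)          # the periapsis burn gains several times more specific energy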
== Tangential velocities at altitude ==
== See also ==
Specific energy change of rockets
Characteristic energy C3 (Double the specific orbital energy)
== References ==
(Source: Wikipedia, "Specific orbital energy")
Uric acid is a heterocyclic compound of carbon, nitrogen, oxygen, and hydrogen with the formula C5H4N4O3. It forms ions and salts known as urates and acid urates, such as ammonium acid urate. Uric acid is a product of the metabolic breakdown of purine nucleotides, and it is a normal component of urine. High blood concentrations of uric acid can lead to gout and are associated with other medical conditions, including diabetes and the formation of ammonium acid urate kidney stones.
== Chemistry ==
Uric acid was first isolated from kidney stones in 1776 by Swedish chemist Carl Wilhelm Scheele. In 1882, the Ukrainian chemist Ivan Horbaczewski first synthesized uric acid by melting urea with glycine.
Uric acid displays lactam–lactim tautomerism. Uric acid crystallizes in the lactam form, with computational chemistry also indicating that tautomer to be the most stable. Uric acid is a diprotic acid with pKa1 = 5.4 and pKa2 = 10.3. At physiological pH, urate predominates in solution.
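The statement that urate predominates at physiological pH follows from the Henderson–Hasselbalch relation applied to the first dissociation (pKa1 = 5.4); at pH 7.4 the urate-to-uric-acid ratio is about

\frac{[\text{urate}^{-}]}{[\text{uric acid}]} = 10^{\,\mathrm{pH}-\mathrm{p}K_{a1}} = 10^{\,7.4-5.4} = 100,

so roughly 99% of the circulating compound is present as the urate ion.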
== Biochemistry ==
The enzyme xanthine oxidase (XO) catalyzes the formation of uric acid from xanthine and hypoxanthine. XO, which is found in mammals, functions primarily as a dehydrogenase and rarely as an oxidase, despite its name. Xanthine in turn is produced from other purines. Xanthine oxidase is a large enzyme whose active site consists of the metal molybdenum bound to sulfur and oxygen. Uric acid is released in hypoxic conditions (low oxygen saturation).
=== Water solubility ===
In general, the water solubility of uric acid and its alkali metal and alkaline earth salts is rather low. All these salts exhibit greater solubility in hot water than cold, allowing for easy recrystallization. This low solubility is significant for the etiology of gout. The solubility of the acid and its salts in ethanol is very low or negligible. In ethanol/water mixtures, the solubilities are somewhere between the end values for pure ethanol and pure water.
The solubility figures indicate what mass of water is required to dissolve a unit mass of the compound indicated; the lower the number, the more soluble the substance is in the given solvent.
== Genetic and physiological diversity ==
=== Primates ===
In humans uric acid (actually hydrogen urate ion) is the final oxidation (breakdown) product of purine metabolism and is excreted in urine, whereas in most other mammals, the enzyme uricase further oxidizes uric acid to allantoin. The loss of uricase in higher primates parallels the similar loss of the ability to synthesize ascorbic acid, leading to the suggestion that urate may partially substitute for ascorbate in such species. Both uric acid and ascorbic acid are strong reducing agents (electron donors) and potent antioxidants. In humans, over half the antioxidant capacity of blood plasma comes from hydrogen urate ion.
==== Humans ====
The normal concentration range of uric acid (or hydrogen urate ion) in human blood is 25 to 80 mg/L for men and 15 to 60 mg/L for women (but see below for slightly different values). An individual can have serum values as high as 96 mg/L and not have gout. In humans, about 70% of daily uric acid disposal occurs via the kidneys, and in 5–25% of humans, impaired renal (kidney) excretion leads to hyperuricemia. Normal excretion of uric acid in the urine is 270 to 360 mg per day (concentration of 270 to 360 mg/L if one litre of urine is produced per day – higher than the solubility of uric acid because it is in the form of dissolved acid urates), roughly 1% as much as the daily excretion of urea.
=== Dogs ===
The Dalmatian has a genetic defect in uric acid uptake by the liver and kidneys, resulting in decreased conversion to allantoin, so this breed excretes uric acid, and not allantoin, in the urine.
=== Birds, reptiles and desert-dwelling mammals ===
In birds and reptiles, and in some desert-dwelling mammals (such as the kangaroo rat), uric acid also is the end product of purine metabolism, but it is excreted in feces as a dry mass. This involves a complex metabolic pathway that is energetically costly in comparison to processing of other nitrogenous wastes such as urea (from the urea cycle) or ammonia, but has the advantages of reducing water loss and preventing dehydration.
=== Invertebrates ===
Platynereis dumerilii, a marine polychaete worm, uses uric acid as a sexual pheromone. The female of the species releases uric acid into the water during mating, which induces males to release sperm.
=== Bacteria ===
In the human gut, uric acid metabolism is carried out by roughly one fifth of bacterial species, drawn from four of the six major phyla. This metabolism is anaerobic and involves as-yet-uncharacterized ammonia lyase, peptidase, carbamoyl transferase, and oxidoreductase enzymes. The result is that uric acid is converted into xanthine, or into lactate and short-chain fatty acids such as acetate and butyrate. Radioisotope studies suggest that about one third of uric acid removal in healthy people occurs in the gut, rising to roughly two thirds in those with kidney disease. In mouse models, such bacteria compensate for the loss of uricase, leading researchers to raise the possibility "that antibiotics targeting anaerobic bacteria, which would ablate gut bacteria, increase the risk for developing gout in humans".
== Genetics ==
Although foods such as meat and seafood can elevate serum urate levels, genetic variation is a much greater contributor to high serum urate. A proportion of people have mutations in the urate transport proteins responsible for the excretion of uric acid by the kidneys. Variants of a number of genes, linked to serum urate, have so far been identified: SLC2A9; ABCG2; SLC17A1; SLC22A11; SLC22A12; SLC16A9; GCKR; LRRC16A; and PDZK1. GLUT9, encoded by the SLC2A9 gene, is known to transport both uric acid and fructose.
Myogenic hyperuricemia, as a result of the purine nucleotide cycle running when ATP reservoirs in muscle cells are low, is a common pathophysiologic feature of glycogenoses, such as GSD-III, which is a metabolic myopathy impairing the ability of ATP (energy) production for muscle cells. In these metabolic myopathies, myogenic hyperuricemia is exercise-induced; inosine, hypoxanthine and uric acid increase in plasma after exercise and decrease over hours with rest. Excess AMP (adenosine monophosphate) is converted into uric acid.
AMP → IMP → Inosine → Hypoxanthine → Xanthine → Uric Acid
== Clinical significance and research ==
In human blood plasma, the reference range of uric acid is typically 3.4–7.2 mg per 100 mL (200–430 μmol/L) for men, and 2.4–6.1 mg per 100 mL (140–360 μmol/L) for women. Uric acid concentrations in blood plasma above and below the normal range are known as, respectively, hyperuricemia and hypouricemia. Likewise, uric acid concentrations in urine above and below normal are known as hyperuricosuria and hypouricosuria. Uric acid levels in saliva may be associated with blood uric acid levels.
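The two sets of units used here (mg per 100 mL and μmol/L) are linked by the molar mass of uric acid, about 168.1 g/mol for C5H4N4O3. A minimal conversion sketch, reproducing the quoted ranges:

def mg_per_100ml_to_umol_per_l(mg_per_100ml, molar_mass_g_per_mol=168.11):
    # mg per 100 mL -> mg per L -> mmol per L -> umol per L
    mg_per_l = mg_per_100ml * 10.0
    return mg_per_l / molar_mass_g_per_mol * 1000.0

print(mg_per_100ml_to_umol_per_l(3.4), mg_per_100ml_to_umol_per_l(7.2))   # about 202 and 428 umol/L
print(mg_per_100ml_to_umol_per_l(2.4), mg_per_100ml_to_umol_per_l(6.1))   # about 143 and 363 umol/L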
=== High uric acid ===
Hyperuricemia (high levels of uric acid), which induces gout, has various potential origins:
Diet may be a factor. High intake of dietary purine, high-fructose corn syrup, and sucrose can increase levels of uric acid.
Serum uric acid can be elevated by reduced excretion via the kidneys.
Fasting or rapid weight loss can temporarily elevate uric acid levels.
Certain drugs, such as thiazide diuretics, can increase blood uric acid levels by interfering with renal clearance.
Tumor lysis syndrome, a metabolic complication of certain cancers or chemotherapy, due to nucleobase and potassium release into the plasma.
Pseudohypoxia (disrupted NADH/NAD+ ratio) caused by diabetic hyperglycemia and excessive alcohol consumption.
==== Gout ====
A 2011 survey in the United States indicated that 3.9% of the population had gout, whereas 21.4% had hyperuricemia without having symptoms.
Excess blood uric acid (serum urate) can induce gout, a painful condition resulting from needle-like crystals of uric acid termed monosodium urate crystals precipitating in joints, capillaries, skin, and other tissues. Gout can occur where serum uric acid levels are as low as 6 mg per 100 mL (357 μmol/L), but an individual can have serum values as high as 9.6 mg per 100 mL (565 μmol/L) and not have gout.
In humans, purines are metabolized into uric acid, which is then excreted in the urine. Consumption of large amounts of some types of purine-rich foods, particularly meat and seafood, increases gout risk. Purine-rich foods include liver, kidney, and sweetbreads, and certain types of seafood, including anchovies, herring, sardines, mussels, scallops, trout, haddock, mackerel, and tuna. Moderate intake of purine-rich vegetables, however, is not associated with an increased risk of gout.
One treatment for gout in the 19th century was administration of lithium salts; lithium urate is more soluble. Today, inflammation during attacks is more commonly treated with NSAIDs, colchicine, or corticosteroids, and urate levels are managed with allopurinol. Allopurinol, which weakly inhibits xanthine oxidase, is an analog of hypoxanthine that is hydroxylated by xanthine oxidoreductase at the 2-position to give oxipurinol.
==== Tumor lysis syndrome ====
Tumor lysis syndrome, an emergency condition that may result from blood cancers, produces high uric acid levels in blood when tumor cells release their contents into the blood, either spontaneously or following chemotherapy. Tumor lysis syndrome may lead to acute kidney injury when uric acid crystals are deposited in the kidneys. Treatment includes hyperhydration to dilute and excrete uric acid via urine, rasburicase to reduce levels of poorly soluble uric acid in blood, or allopurinol to inhibit purine catabolism from adding to uric acid levels.
==== Lesch–Nyhan syndrome ====
Lesch–Nyhan syndrome, a rare inherited disorder, is also associated with high serum uric acid levels. Spasticity, involuntary movement, and cognitive retardation as well as manifestations of gout are seen in this syndrome.
==== Cardiovascular disease ====
Hyperuricemia is associated with an increase in risk factors for cardiovascular disease. It is also possible that high levels of uric acid may have a causal role in the development of atherosclerotic cardiovascular disease, but this is controversial and the data are conflicting.
==== Uric acid stone formation ====
Kidney stones can form through deposits of sodium urate microcrystals.
Saturation levels of uric acid in blood may result in one form of kidney stones when the urate crystallizes in the kidney. These uric acid stones are radiolucent, so do not appear on an abdominal plain X-ray. Uric acid crystals can also promote the formation of calcium oxalate stones, acting as "seed crystals".
==== Diabetes ====
Hyperuricemia is associated with components of metabolic syndrome, including in children.
=== Low uric acid ===
Low uric acid (hypouricemia) can have numerous causes. Low dietary zinc intakes cause lower uric acid levels. This effect can be even more pronounced in women taking oral contraceptive medication. Sevelamer, a drug indicated for prevention of hyperphosphataemia in people with chronic kidney failure, can significantly reduce serum uric acid.
==== Multiple sclerosis ====
Meta-analysis of 10 case-control studies found that the serum uric acid levels of patients with multiple sclerosis were significantly lower compared to those of healthy controls, possibly indicating a diagnostic biomarker for multiple sclerosis.
==== Normalizing low uric acid ====
Correcting low or deficient zinc levels can help elevate serum uric acid.
== See also ==
Theacrine or 1,3,7,9-tetramethyluric acid, a purine alkaloid found in some teas
Uracil – pyrimidine nucleobase named by Robert Behrend, who was attempting to synthesize derivatives of uric acid
Metabolic myopathy
Purine nucleotide cycle
== References ==
== External links ==
Uric acid blood test – MedlinePlus
(Source: Wikipedia, "Urate")
Surgery is a medical specialty that uses manual and instrumental techniques to diagnose or treat pathological conditions (e.g., trauma, disease, injury, malignancy), to alter bodily functions (e.g., malabsorption created by bariatric surgery such as gastric bypass), to reconstruct or alter aesthetics and appearance (cosmetic surgery), or to remove unwanted tissues (body fat, glands, scars or skin tags) or foreign bodies.
The act of performing surgery may be called a surgical procedure or surgical operation, or simply "surgery" or "operation". In this context, the verb "operate" means to perform surgery. The adjective surgical means pertaining to surgery; e.g. surgical instruments, surgical facility or surgical nurse. Most surgical procedures are performed by a pair of operators: a surgeon who is the main operator performing the surgery, and a surgical assistant who provides in-procedure manual assistance during surgery. Modern surgical operations typically require a surgical team that typically consists of the surgeon, the surgical assistant, an anaesthetist (often also complemented by an anaesthetic nurse), a scrub nurse (who handles sterile equipment), a circulating nurse and a surgical technologist, while procedures that mandate cardiopulmonary bypass will also have a perfusionist. All surgical procedures are considered invasive and often require a period of postoperative care (sometimes intensive care) for the patient to recover from the iatrogenic trauma inflicted by the procedure. The duration of surgery can span from several minutes to tens of hours depending on the specialty, the nature of the condition, the target body parts involved and the circumstance of each procedure, but most surgeries are designed to be one-off interventions that are typically not intended as an ongoing or repeated type of treatment.
In British colloquialism, the term "surgery" can also refer to the facility where surgery is performed, or simply the office/clinic of a physician, dentist or veterinarian.
== Definitions ==
As a general rule, a procedure is considered surgical when it involves cutting of a person's tissues or closure of a previously sustained wound. Other procedures that do not necessarily fall under this rubric, such as angioplasty or endoscopy, may be considered surgery if they involve "common" surgical procedures or settings, such as use of antiseptic measures and sterile fields, sedation/anesthesia, proactive hemostasis, typical surgical instruments, and suturing or stapling. All forms of surgery are considered invasive procedures; so-called "noninvasive surgery" would more appropriately be called minimally invasive procedures, a term that usually refers to procedures that utilize natural orifices (e.g. most urological procedures), do not penetrate the structure being excised (e.g. endoscopic polyp excision, rubber band ligation, laser eye surgery), are percutaneous (e.g. arthroscopy, catheter ablation, angioplasty and valvuloplasty), or are radiosurgical (e.g. irradiation of a tumor).
=== Types of surgery ===
Surgical procedures are commonly categorized by urgency, type of procedure, body system involved, the degree of invasiveness, and special instrumentation.
Based on timing:
Elective surgery is done to correct a non-life-threatening condition, and is carried out at the person's convenience, or to the surgeon's and the surgical facility's availability.
Semi-elective surgery is one that is better done early to avoid complications or potential deterioration of the patient's condition, but such risks are sufficiently low that the procedure can be postponed for a short period of time.
Emergency surgery is surgery which must be done without any delay to prevent death or serious disabilities or loss of limbs and functions.
Based on purpose:
Exploratory surgery is performed to establish or aid a diagnosis.
Therapeutic surgery is performed to treat a previously diagnosed condition.
Curative surgery is a therapeutic procedure done to permanently remove a pathology.
Plastic surgery is done to improve a body part's function or appearance.
Reconstructive plastic surgery is done to improve the function or subjective appearance of a damaged or malformed body part.
Cosmetic surgery is done to subjectively improve the appearance of an otherwise normal body part.
Bariatric surgery is done to assist weight loss when dietary and pharmaceutical methods alone have failed.
Non-survival surgery, or terminal surgery, is surgery in which euthanasia is performed while the subject is under anesthesia, so that the subject will not regain conscious pain perception. This type of surgery is usually done in animal testing experiments.
By type of procedure:
Amputation involves removing an entire body part, usually a limb or digit; castration is the amputation of testes; circumcision is the removal of prepuce from the penis or clitoral hood from the clitoris (see female circumcision). Replantation involves reattaching a severed body part.
Resection is the removal of all or part of an internal organ and/or connective tissue. A segmental resection specifically removes an independent vascular region of an organ such as a hepatic segment, a bronchopulmonary segment or a renal lobe. Excision is the resection of only part of an organ, tissue or other body part (e.g. skin) without discriminating specific vascular territories. Exenteration is the complete removal of all organs and soft tissue content (especially lymphoid tissues) within a body cavity.
Extirpation is the complete excision or surgical destruction of a body part.
Ablation is destruction of tissue through the use of energy-transmitting devices such as electrocautery/fulguration, laser, focused ultrasound or freezing.
Repair involves the direct closure or restoration of an injured, mutilated or deformed organ or body part, usually by suturing or internal fixation. Reconstruction is an extensive repair of a complex body part (such as joints), often with some degrees of structural/functional replacement and commonly involves grafting and/or use of implants.
Grafting is the relocation and establishment of a tissue from one part of the body to another. A flap is the relocation of a tissue without complete separation of its original attachment, and a free flap is a completely detached flap that carries an intact neurovascular structure ready for grafting onto a new location.
Bypass involves the relocation/grafting of a tubular structure onto another in order to reroute the content flow of that target structure from a specific segment directly to a more distal ("downstream") segment.
Implantation is insertion of artificial medical devices to replace or augment existing tissue.
Transplantation is the replacement of an organ or body part by insertion of another from a different human (or animal) into the person undergoing surgery. Harvesting is the resection of an organ or body part from a live human or animal (known as the donor) for transplantation into another patient (known as the recipient).
By organ system: Surgical specialties are traditionally and academically categorized by the organ, organ system or body region involved. Examples include:
Cardiac surgery — the heart and mediastinal great vessels;
Thoracic surgery — the thoracic cavity including the lungs;
Gastrointestinal surgery — the digestive tract and its accessory organs;
Vascular surgery — the extra-mediastinal great vessels and peripheral circulatory system;
Urological surgery — the genitourinary system;
ENT surgery — ear, nose and throat, also known as head and neck surgery when including the neck region;
Oral and maxillofacial surgery — the oral cavity, jaws, and face;
Neurosurgery — the central nervous system, and;
Orthopedic surgery — the musculoskeletal system.
By degree of invasiveness of surgical procedures:
Conventional open surgery (such as a laparotomy) requires a large incision to access the area of interest, and directly exposes the internal body cavity to the outside.
Minimally-invasive surgery involves much smaller surface incisions or even natural orifices (nostril, mouth, anus or urethra) to insert miniaturized instruments within a body cavity or structure, as in laparoscopic surgery or angioplasty.
Hybrid surgery uses a combination of open and minimally-invasive techniques, and may include hand ports or larger incisions to assist with performance of elements of the procedure.
By equipment used:
Laser surgery involves use of laser ablation to divide tissue instead of a scalpel, scissors or similar sharp-edged instruments.
Cryosurgery uses low-temperature cryoablation to freeze and destroy a target tissue.
Electrosurgery involves use of electrocautery to cut and coagulate tissue.
Microsurgery involves the use of an operating microscope for the surgeon to see and manipulate small structures.
Endoscopic surgery uses optical instruments to relay the image from inside an enclosed body cavity to the outside, and the surgeon performs the procedure using specialized handheld instruments inserted through trocars placed through the body wall. Most modern endoscopic procedures are video-assisted, meaning the images are viewed on a display screen rather than through the eyepiece on the endoscope.
Robotic surgery makes use of robotics such as the Da Vinci or the ZEUS robotic surgical systems, to remotely control endoscopic or minimally-invasive instruments.
=== Terminology ===
Resection and excisional procedures start with a prefix for the target organ to be excised (cut out) and end in the suffix -ectomy. For example, removal of part of the stomach would be called a subtotal gastrectomy.
Procedures involving cutting into an organ or tissue end in -otomy. A surgical procedure cutting through the abdominal wall to gain access to the abdominal cavity is a laparotomy.
Minimally invasive procedures, involving small incisions through which an endoscope is inserted, end in -oscopy. For example, such surgery in the abdominal cavity is called laparoscopy.
Procedures for formation of a permanent or semi-permanent opening called a stoma in the body end in -ostomy, such as creation of a colostomy, a connection of colon and the abdominal wall. This prefix is also used for connection between two viscera, such as how an esophagojejunostomy refers to a connection created between the esophagus and the jejunum.
Plastic and reconstruction procedures start with the name for the body part to be reconstructed and end in -plasty. For example, rhino- is a prefix meaning "nose", therefore a rhinoplasty is a reconstructive or cosmetic surgery for the nose. A pyloroplasty refers to a type of reconstruction of the gastric pylorus.
Procedures that involve cutting the muscular layers of an organ end in -myotomy. A pyloromyotomy refers to cutting the muscular layers of the gastric pylorus.
Repair of a damaged or abnormal structure ends in -orrhaphy. This includes herniorrhaphy, another name for a hernia repair.
Reoperation, revision, or "redo" procedures refer to a planned or unplanned return to the operating theater after a surgery is performed to re-address an aspect of patient care. Unplanned reasons for reoperation include postoperative complications such as bleeding or hematoma formation, development of a seroma or abscess, anastomotic leak, tissue necrosis requiring debridement or excision, or in the case of malignancy, close or involved resection margins that may require re-excision to avoid local recurrence. Reoperation can be performed in the acute phase, or it can be also performed months to years later if the surgery failed to solve the indicated problem. Reoperation can also be planned as a staged operation where components of the procedure are performed or reversed under separate anesthesia.
== Description of surgical procedure ==
=== Setting ===
Inpatient surgery is performed in a hospital, and the person undergoing surgery stays at least one night in the hospital after the surgery. Outpatient surgery occurs in a hospital outpatient department or freestanding ambulatory surgery center, and the person who had surgery is discharged the same working day. Office-based surgery occurs in a physician's office, and the person is discharged the same day.
At a hospital, modern surgery is often performed in an operating theater using surgical instruments, an operating table, and other equipment. Among United States hospitalizations for non-maternal and non-neonatal conditions in 2012, more than one-fourth of stays and half of hospital costs involved stays that included operating room (OR) procedures. The environment and procedures used in surgery are governed by the principles of aseptic technique: the strict separation of "sterile" (free of microorganisms) things from "unsterile" or "contaminated" things. All surgical instruments must be sterilized, and an instrument must be replaced or re-sterilized if it becomes contaminated (i.e. handled in an unsterile manner, or allowed to touch an unsterile surface). Operating room staff must wear sterile attire (scrubs, a scrub cap, a sterile surgical gown, sterile latex or non-latex polymer gloves and a surgical mask), and they must scrub hands and arms with an approved disinfectant agent before each procedure.
=== Preoperative care ===
Prior to surgery, the person is given a medical examination, receives certain pre-operative tests, and their physical status is rated according to the ASA physical status classification system. If these results are satisfactory, the person requiring surgery signs a consent form and is given a surgical clearance. If the procedure is expected to result in significant blood loss, an autologous blood donation may be made some weeks prior to surgery. If the surgery involves the digestive system, the person requiring surgery may be instructed to perform a bowel prep by drinking a solution of polyethylene glycol the night before the procedure. People preparing for surgery are also instructed to abstain from food or drink (an NPO order after midnight on the night before the procedure), to minimize the effect of stomach contents on pre-operative medications and reduce the risk of aspiration if the person vomits during or after the procedure.
Some medical systems have a practice of routinely performing chest x-rays before surgery. The premise behind this practice is that the physician might discover some unknown medical condition which would complicate the surgery, and that upon discovering this with the chest x-ray, the physician would adapt the surgery practice accordingly. However, medical specialty professional organizations recommend against routine pre-operative chest x-rays for people who have an unremarkable medical history and presented with a physical exam which did not indicate a chest x-ray. Routine x-ray examination is more likely to result in problems like misdiagnosis, overtreatment, or other negative outcomes than it is to result in a benefit to the person. Likewise, other tests including complete blood count, prothrombin time, partial thromboplastin time, basic metabolic panel, and urinalysis should not be done unless the results of these tests can help evaluate surgical risk.
=== Preparing for surgery ===
A surgical team may include a surgeon, anesthetist, a circulating nurse, and a "scrub tech", or surgical technician, as well as other assistants who provide equipment and supplies as required. While informed consent discussions may be performed in a clinic or acute care setting, the pre-operative holding area is where documentation is reviewed and where family members can also meet the surgical team. Nurses in the preoperative holding area confirm orders and answer additional questions of the family members of the patient prior to surgery. In the pre-operative holding area, the person preparing for surgery changes out of their street clothes and are asked to confirm the details of his or her surgery as previously discussed during the process of informed consent. A set of vital signs are recorded, a peripheral IV line is placed, and pre-operative medications (antibiotics, sedatives, etc.) are given.
When the patient enters the operating room and is appropriately anesthetized, the team will then position the patient in an appropriate surgical position. If hair is present at the surgical site, it is clipped (instead of shaving). The skin surface within the operating field is cleansed and prepared by applying an antiseptic (typically chlorhexidine gluconate in alcohol, as this is twice as effective as povidone-iodine at reducing the risk of infection). Sterile drapes are then used to cover the borders of the operating field. Depending on the type of procedure, the cephalad drapes are secured to a pair of poles near the head of the bed to form an "ether screen", which separate the anesthetist/anesthesiologist's working area (unsterile) from the surgical site (sterile).
Anesthesia is administered to prevent pain from the trauma of cutting, tissue manipulation, application of thermal energy, and suturing. Depending on the type of operation, anesthesia may be provided locally, regionally, or as general anesthesia. Spinal anesthesia may be used when the surgical site is too large or deep for a local block, but general anesthesia may not be desirable. With local and spinal anesthesia, the surgical site is anesthetized, but the person can remain conscious or minimally sedated. In contrast, general anesthesia may render the person unconscious and paralyzed during surgery. The person is typically intubated to protect their airway and placed on a mechanical ventilator, and anesthesia is produced by a combination of injected and inhaled agents. The choice of surgical method and anesthetic technique aims to solve the indicated problem, minimize the risk of complications, optimize the time needed for recovery, and limit the surgical stress response.
=== Intraoperative phase ===
The intraoperative phase begins when the surgery subject is received in the surgical area (such as the operating theater or surgical department), and lasts until the subject is transferred to a recovery area (such as a post-anesthesia care unit).
An incision is made to access the surgical site. Blood vessels may be clamped or cauterized to prevent bleeding, and retractors may be used to expose the site or keep the incision open. The approach to the surgical site may involve several layers of incision and dissection, as in abdominal surgery, where the incision must traverse skin, subcutaneous tissue, three layers of muscle and then the peritoneum. In certain cases, bone may be cut to further access the interior of the body; for example, cutting the skull for brain surgery or cutting the sternum for thoracic (chest) surgery to open up the rib cage. Whilst in surgery aseptic technique is used to prevent infection or further spreading of the disease. The surgeons' and assistants' hands, wrists and forearms are washed thoroughly for at least 4 minutes to prevent germs getting into the operative field, then sterile gloves are placed onto their hands. An antiseptic solution is applied to the area of the person's body that will be operated on. Sterile drapes are placed around the operative site. Surgical masks are worn by the surgical team to avoid germs on droplets of liquid from their mouths and noses from contaminating the operative site.
Work to correct the problem in body then proceeds. This work may involve:
excision – cutting out an organ, tumor, or other tissue.
resection – partial removal of an organ or other bodily structure.
reconnection of organs, tissues, etc., particularly if severed. Resection of organs such as intestines involves reconnection. Internal suturing or stapling may be used. Surgical connection between blood vessels or other tubular or hollow structures such as loops of intestine is called anastomosis.
reduction – the movement or realignment of a body part to its normal position. e.g. Reduction of a broken nose involves the physical manipulation of the bone or cartilage from their displaced state back to their original position to restore normal airflow and aesthetics.
ligation – tying off blood vessels, ducts, or "tubes".
grafts – may be severed pieces of tissue cut from the same (or different) body or flaps of tissue still partly connected to the body but resewn for rearranging or restructuring of the area of the body in question. Although grafting is often used in cosmetic surgery, it is also used in other surgery. Grafts may be taken from one area of the person's body and inserted to another area of the body. An example is bypass surgery, where clogged blood vessels are bypassed with a graft from another part of the body. Alternatively, grafts may be from other persons, cadavers, or animals.
insertion of prosthetic parts when needed. Pins or screws to set and hold bones may be used. Sections of bone may be replaced with prosthetic rods or other parts. Sometimes a plate is inserted to replace a damaged area of skull. Artificial hip replacement has become more common. Heart pacemakers or valves may be inserted. Many other types of prostheses are used.
creation of a stoma, a permanent or semi-permanent opening in the body
in transplant surgery, the donor organ (taken out of the donor's body) is inserted into the recipient's body and reconnected to the recipient in all necessary ways (blood vessels, ducts, etc.).
arthrodesis – surgical connection of adjacent bones so the bones can grow together into one. Spinal fusion is an example of adjacent vertebrae connected allowing them to grow together into one piece.
modifying the digestive tract in bariatric surgery for weight loss.
repair of a fistula, hernia, or prolapse.
repair according to the ICD-10-PCS, in the Medical and Surgical Section 0, root operation Q, means restoring, to the extent possible, a body part to its normal anatomic structure and function. This definition, repair, is used only when the method used to accomplish the repair is not one of the other root operations. Examples would be colostomy takedown, herniorrhaphy of a hernia, and the surgical suture of a laceration.
other procedures, including:
clearing clogged ducts, blood or other vessels
removal of calculi (stones)
draining of accumulated fluids
debridement – removal of dead, damaged, or diseased tissue
Blood or blood expanders may be administered to compensate for blood lost during surgery. Once the procedure is complete, sutures or staples are used to close the incision. Once the incision is closed, the anesthetic agents are stopped or reversed, and the person is taken off ventilation and extubated (if general anesthesia was administered).
=== Postoperative care ===
After completion of surgery, the person is transferred to the post anesthesia care unit and closely monitored. When the person is judged to have recovered from the anesthesia, he/she is either transferred to a surgical ward elsewhere in the hospital or discharged home. During the post-operative period, the person's general function is assessed, the outcome of the procedure is assessed, and the surgical site is checked for signs of infection. There are several risk factors associated with postoperative complications, such as immune deficiency and obesity. Obesity has long been considered a risk factor for adverse post-surgical outcomes. It has been linked to many disorders such as obesity hypoventilation syndrome, atelectasis and pulmonary embolism, adverse cardiovascular effects, and wound healing complications. If removable skin closures are used, they are removed after 7 to 10 days post-operatively, or after healing of the incision is well under way.
It is not uncommon for surgical drains to be required to remove blood or fluid from the surgical wound during recovery. Mostly these drains stay in until the volume tapers off, then they are removed. These drains can become clogged, leading to abscess.
Postoperative therapy may include adjuvant treatment such as chemotherapy, radiation therapy, or administration of medication such as anti-rejection medication for transplants. For postoperative nausea and vomiting (PONV), solutions like saline, water, controlled breathing placebo and aromatherapy can be used in addition to medication. Other follow-up studies or rehabilitation may be prescribed during and after the recovery period. A recent post-operative care philosophy has been early ambulation. Ambulation is getting the patient moving around. This can be as simple as sitting up or even walking around. The goal is to get the patient moving as early as possible. It has been found to shorten the patient's length of stay. Length of stay is the amount of time a patient spends in the hospital after surgery before they are discharged. In a recent study done with lumbar decompressions, the patient's length of stay was decreased by 1–3 days.
The use of topical antibiotics on surgical wounds to reduce infection rates has been questioned. Antibiotic ointments are likely to irritate the skin, slow healing, and could increase risk of developing contact dermatitis and antibiotic resistance. It has also been suggested that topical antibiotics should only be used when a person shows signs of infection and not as a preventative. A systematic review published by Cochrane (organisation) in 2016, though, concluded that topical antibiotics applied over certain types of surgical wounds reduce the risk of surgical site infections, when compared to no treatment or use of antiseptics. The review also did not find conclusive evidence to suggest that topical antibiotics increased the risk of local skin reactions or antibiotic resistance.
Through a retrospective analysis of national administrative data, the association between mortality and day of elective surgical procedure suggests a higher risk in procedures carried out later in the working week and on weekends. The odds of death were 44% higher for procedures performed on a Friday and 82% higher for weekend procedures. This "weekday effect" has been postulated to stem from several factors, including poorer availability of services on a weekend and a reduced number and experience level of staff over a weekend.
Postoperative pain affects an estimated 80% of people who underwent surgery. While pain is expected after surgery, there is growing evidence that pain may be inadequately treated in many people in the acute period immediately after surgery. It has been reported that the incidence of inadequately controlled pain after surgery ranged from 25.1% to 78.4% across all surgical disciplines. There is insufficient evidence to determine whether giving opioid pain medication pre-emptively (before surgery) reduces postoperative pain or the amount of medication needed after surgery.
Postoperative recovery has been defined as an energy‐requiring process to decrease physical symptoms, reach a level of emotional well‐being, regain functions, and re‐establish activities. Most people are discharged from the hospital or surgical center before they are fully recovered. The recovery process may include complications such as postoperative cognitive dysfunction and postoperative depression.
== Epidemiology ==
=== United States ===
In 2011, of the 38.6 million hospital stays in U.S. hospitals, 29% included at least one operating room procedure. These stays accounted for 48% of the total $387 billion in hospital costs.
The overall number of procedures remained stable from 2001 to 2011. In 2011, over 15 million operating room procedures were performed in U.S. hospitals.
Data from 2003 to 2011 showed that U.S. hospital costs were highest for the surgical service line; the surgical service line costs were $17,600 in 2003 and projected to be $22,500 in 2013. For hospital stays in 2012 in the United States, private insurance had the highest percentage of surgical expenditure. In 2012, mean hospital costs in the United States were highest for surgical stays.
== Special populations ==
=== Elderly people ===
Older adults have widely varying physical health. Frail elderly people are at significant risk of post-surgical complications and the need for extended care. Assessment of older people before elective surgery can accurately predict the person's recovery trajectories. One frailty scale uses five items: unintentional weight loss, muscle weakness, exhaustion, low physical activity, and slowed walking speed. A healthy person scores 0; a very frail person scores 5. Compared to non-frail elderly people, people with intermediate frailty scores (2 or 3) are twice as likely to have post-surgical complications, spend 50% more time in the hospital, and are three times as likely to be discharged to a skilled nursing facility instead of to their own homes. People who are frail and elderly (score of 4 or 5) have even worse outcomes, with the risk of being discharged to a nursing home rising to twenty times the rate for non-frail elderly people.
=== Children ===
Surgery on children requires considerations that are not common in adult surgery. Children and adolescents are still developing physically and mentally making it difficult for them to make informed decisions and give consent for surgical treatments. Bariatric surgery in youth is among the controversial topics related to surgery in children.
=== Vulnerable populations ===
Doctors perform surgery with the consent of the person undergoing surgery. Some people are able to give better informed consent than others. Populations such as incarcerated persons, people living with dementia, the mentally incompetent, persons subject to coercion, and other people who are not able to make decisions with the same authority as others, have special needs when making decisions about their personal healthcare, including surgery.
== Global surgery ==
Global surgery has been defined as "the multidisciplinary enterprise of providing improved and equitable surgical care to the world's population, with its core belief as the issues of need, access and quality". Halfdan T. Mahler, the 3rd Director-General of the World Health Organization (WHO), first brought attention to the disparities in surgery and surgical care in 1980 when he stated in his address to the World Congress of the International College of Surgeons, "the vast majority of the world's population has no access whatsoever to skilled surgical care and little is being done to find a solution." As such, surgical care globally has been described as the "neglected stepchild of global health", a term coined by Paul Farmer to highlight the urgent need for further work in this area. Furthermore, Jim Yong Kim, the former President of the World Bank, proclaimed in 2014 that "surgery is an indivisible, indispensable part of health care and of progress towards universal health coverage."
In 2015, the Lancet Commission on Global Surgery (LCoGS) published the landmark report titled "Global Surgery 2030: evidence and solutions for achieving health, welfare, and economic development", describing the large, pre-existing burden of surgical diseases in low- and middle-income countries (LMICs) and future directions for increasing universal access to safe surgery by the year 2030. The Commission highlighted that about 5 billion people lack access to safe and affordable surgical and anesthesia care and 143 million additional procedures were needed every year to prevent further morbidity and mortality from treatable surgical conditions as well as a $12.3 trillion loss in economic productivity by the year 2030. This was especially true in the poorest countries, which account for over one-third of the population but only 3.5% of all surgeries that occur worldwide. It emphasized the need to significantly improve the capacity for Bellwether procedures – laparotomy, caesarean section, open fracture care – which are considered a minimum level of care that first-level hospitals should be able to provide in order to capture the most basic emergency surgical care. In terms of the financial impact on the patients, the lack of adequate surgical and anesthesia care has resulted in 33 million individuals every year facing catastrophic health expenditure – the out-of-pocket healthcare cost exceeding 40% of a given household's income.
In alignment with the LCoGS call for action, the World Health Assembly adopted the resolution WHA68.15 in 2015 that stated, "Strengthening emergency and essential surgical care and anesthesia as a component of universal health coverage." This not only mandated the WHO to prioritize strengthening the surgical and anesthesia care globally, but also led to governments of the member states recognizing the urgent need for increasing capacity in surgery and anesthesia. Additionally, the third edition of Disease Control Priorities (DCP3), published in 2015 by the World Bank, declared surgery as essential and featured an entire volume dedicated to building surgical capacity.
Data from WHO and the World Bank indicate that scaling up infrastructure to enable access to surgical care in regions where it is currently limited or non-existent is a low-cost measure relative to the significant morbidity and mortality caused by lack of surgical treatment. In fact, a systematic review found that the cost-effectiveness ratio – dollars spent per DALY averted – for surgical interventions is on par with or exceeds that of major public health interventions such as oral rehydration therapy, breastfeeding promotion, and even HIV/AIDS antiretroviral therapy. This finding challenged the common misconception that surgical care is a financially prohibitive endeavor not worth pursuing in LMICs.
A key policy framework that arose from this renewed global commitment towards surgical care worldwide is the National Surgical, Obstetric and Anesthesia Plan (NSOAP). NSOAP focuses on policy-to-action capacity building for surgical care with tangible steps as follows: (1) analysis of baseline indicators, (2) partnership with local champions, (3) broad stakeholder engagement, (4) consensus building and synthesis of ideas, (5) language refinement, (6) costing, (7) dissemination, and (8) implementation. This approach has been widely adopted and has served as a set of guiding principles between international collaborators and local institutions and governments. Successful implementations have allowed for sustainability in terms of long-term monitoring, quality improvement, and continued political and financial support.
== Human rights ==
Access to surgical care is increasingly recognized as an integral aspect of healthcare and is therefore evolving into a normative derivation of the human right to health. ICESCR Articles 12.1 and 12.2 define the human right to health as "the right of everyone to the enjoyment of the highest attainable standard of physical and mental health". In August 2000, the UN Committee on Economic, Social and Cultural Rights (CESCR) interpreted this to mean the "right to the enjoyment of a variety of facilities, goods, services, and conditions necessary for the realization of the highest attainable health". Surgical care can thereby be viewed as a positive right – an entitlement to protective healthcare.
Woven through the international human and health rights literature is the right to be free from surgical disease. The 1966 ICESCR Article 12.2a described the need for "provision for the reduction of the stillbirth-rate and of infant mortality and for the healthy development of the child", which was subsequently interpreted to mean "requiring measures to improve… emergency obstetric services". Article 12.2d of the ICESCR stipulates the need for "the creation of conditions which would assure to all medical service and medical attention in the event of sickness", and is interpreted in the 2000 comment to include timely access to "basic preventative, curative services… for appropriate treatment of injury and disability". Obstetric care shares close ties with reproductive rights, which include access to reproductive health.
Surgeons and public health advocates, such as Kelly McQueen, have described surgery as "integral to the right to health". This is reflected in the establishment of the WHO Global Initiative for Emergency and Essential Surgical Care in 2005, the 2013 formation of the Lancet Commission on Global Surgery, the World Bank's 2015 publication of the Disease Control Priorities volume "Essential Surgery", and the World Health Assembly's 2015 adoption of resolution WHA68.15 on strengthening emergency and essential surgical care and anesthesia as a component of universal health coverage. The Lancet Commission on Global Surgery outlined the need for access to "available, affordable, timely and safe" surgical and anesthesia care, dimensions paralleled in ICESCR General Comment No. 14, which similarly outlines the need for available, accessible, affordable, and timely healthcare.
== History ==
=== Trepanation ===
Surgical treatments date back to the prehistoric era. The oldest procedure for which there is evidence is trepanation, in which a hole is drilled or scraped into the skull, thus exposing the dura mater in order to treat health problems related to intracranial pressure.
=== Ancient Egypt ===
Prehistoric surgical techniques are seen in Ancient Egypt, where a mandible dated to approximately 2650 BC shows two perforations just below the root of the first molar, indicating the draining of an abscessed tooth. Surgical texts from ancient Egypt date back about 3,500 years. Surgical operations were performed by priests who specialized in medical treatment, similar to physicians today, and who used sutures to close wounds. Infections were treated with honey.
=== India ===
9,000-year-old skeletal remains of a prehistoric individual from the Indus River valley show evidence of teeth having been drilled. Sushruta Samhita is one of the oldest known surgical texts and its period is usually placed in the first millennium BCE. It describes in detail the examination, diagnosis, treatment, and prognosis of numerous ailments, as well as procedures for various forms of cosmetic surgery, plastic surgery and rhinoplasty.
=== Sri Lanka ===
In 1982, archaeologists excavating the ruins of the ancient site known as Alahana Pirivena, situated in Polonnaruwa, uncovered the remains of an ancient hospital. The hospital building was 147.5 feet wide and 109.2 feet long. Among the objects discovered at the site were instruments used for complex surgeries, including forceps, scissors, probes, lancets, and scalpels, dated to about the 11th century AD.
=== Ancient and Medieval Greece ===
In ancient Greece, temples dedicated to the healer-god Asclepius, known as Asclepieia (Greek: Ασκληπιεία, sing. Asclepieion Ασκληπιείον), functioned as centers of medical advice, prognosis, and healing. In the Asclepieion of Epidaurus, some of the surgical cures listed, such as the opening of an abdominal abscess or the removal of traumatic foreign material, are realistic enough to have taken place. The Greek Galen was one of the greatest surgeons of the ancient world and performed many audacious operations – including brain and eye surgery – that were not tried again for almost two millennia. Hippocrates stated in the oath (c. 400 BCE) "I will not use the knife, even upon those suffering from stones, but I will leave this to those who are trained in this craft."
Researchers from Adelphi University discovered at Paliokastro on Thasos the skeletal remains of ten individuals, four women and six men, buried between the fourth and seventh centuries AD. Their bones illuminated their physical activities and traumas, and even revealed a complex form of brain surgery. According to the researchers: "The very serious trauma cases sustained by both males and females had been treated surgically or orthopedically by a very experienced physician/surgeon with great training in trauma care. We believe it to have been a military physician". The researchers were impressed by the complexity of the brain surgical operation.
In 1991, at the Polystylon fort in Greece, researchers discovered the head of a 14th-century Byzantine warrior. Analysis of the lower jaw revealed that surgery had been performed while the warrior was alive: the badly fractured jaw had been tied back together and held in place until it healed.
=== Islamic world ===
During the Islamic Golden Age, largely based upon Paul of Aegina's Pragmateia, the writings of Albucasis (Abu al-Qasim Khalaf ibn al-Abbas Al-Zahrawi), an Andalusian-Arab physician and scientist who practiced in the Zahra suburb of Córdoba, were influential. Al-Zahrawi specialized in curing disease by cauterization. He invented several surgical instruments for purposes such as inspection of the interior of the urethra and for removing foreign bodies from the throat, the ear, and other body organs. He was also the first to illustrate the various cannulae and to treat warts with an iron tube and caustic metal as a boring instrument. He describes what is thought to be the first attempt at reduction mammaplasty for the management of gynaecomastia and the first mastectomy to treat breast cancer. He is credited with the performance of the first thyroidectomy. Al-Zahrawi pioneered techniques of neurosurgery and neurological diagnosis, treating head injuries, skull fractures, spinal injuries, hydrocephalus, subdural effusions and headache. The first clinical description of an operative procedure for hydrocephalus was given by Al-Zahrawi, who clearly describes the evacuation of superficial intracranial fluid in hydrocephalic children.
=== Early modern Europe ===
In Europe, the demand grew for surgeons to formally study for many years before practicing; universities such as Montpellier, Padua and Bologna were particularly renowned. In the 12th century, Rogerius Salernitanus composed his Chirurgia, laying the foundation for modern Western surgical manuals. Barber-surgeons generally had a bad reputation that was not to improve until the development of academic surgery as a specialty of medicine, rather than an accessory field. Basic surgical principles for asepsis and related practices are known as Halsted's principles.
There were some important advances to the art of surgery during this period. The professor of anatomy at the University of Padua, Andreas Vesalius, was a pivotal figure in the Renaissance transition from classical medicine and anatomy based on the works of Galen to an empirical approach of 'hands-on' dissection. In his anatomical treatise De humani corporis fabrica, he exposed the many anatomical errors in Galen and advocated that all surgeons should train by engaging in practical dissections themselves.
The second figure of importance in this era was Ambroise Paré (sometimes spelled "Ambrose"), a French army surgeon from the 1530s until his death in 1590. The practice for cauterizing gunshot wounds on the battlefield had been to use boiling oil, an extremely dangerous and painful procedure. Paré began to employ a less irritating emollient made of egg yolk, rose oil and turpentine. He also described more efficient techniques for the effective ligation of the blood vessels during an amputation.
=== Modern surgery ===
The discipline of surgery was put on a sound, scientific footing during the Age of Enlightenment in Europe. An important figure in this regard was the Scottish surgical scientist, John Hunter, generally regarded as the father of modern scientific surgery. He brought an empirical and experimental approach to the science and was renowned around Europe for the quality of his research and his written works. Hunter reconstructed surgical knowledge from scratch; refusing to rely on the testimonies of others, he conducted his own surgical experiments to determine the truth of the matter. To aid comparative analysis, he built up a collection of over 13,000 specimens of separate organ systems, from the simplest plants and animals to humans.
He greatly advanced knowledge of venereal disease and introduced many new techniques of surgery, including new methods for repairing damage to the Achilles tendon and a more effective method for applying ligature of the arteries in case of an aneurysm. He was also one of the first to understand the importance of pathology, the danger of the spread of infection and how the problem of inflammation of the wound, bone lesions and even tuberculosis often undid any benefit that was gained from the intervention. He consequently adopted the position that all surgical procedures should be used only as a last resort.
Other important 18th- and early 19th-century surgeons included Percival Pott (1713–1788), who described tuberculosis of the spine and first demonstrated that a cancer may be caused by an environmental carcinogen (he noticed a connection between chimney sweeps' exposure to soot and their high incidence of scrotal cancer). Astley Paston Cooper (1768–1841) first performed a successful ligation of the abdominal aorta, and James Syme (1799–1870) pioneered the Syme amputation at the ankle joint and successfully carried out the first hip disarticulation.
Modern pain control through anesthesia was discovered in the mid-19th century. Before the advent of anesthesia, surgery was a traumatically painful procedure and surgeons were encouraged to be as swift as possible to minimize patient suffering. This also meant that operations were largely restricted to amputations and external growth removals. Beginning in the 1840s, surgery began to change dramatically in character with the discovery of effective and practical anaesthetic chemicals such as ether, first used by the American surgeon Crawford Long, and chloroform, discovered by Scottish obstetrician James Young Simpson and later pioneered by John Snow, physician to Queen Victoria. In addition to relieving patient suffering, anaesthesia allowed more intricate operations in the internal regions of the human body. In addition, the discovery of muscle relaxants such as curare allowed for safer applications.
==== Infection and antisepsis ====
The introduction of anesthetics encouraged more surgery, which inadvertently caused more dangerous post-operative infections in patients. The concept of infection was unknown until relatively modern times. The first progress in combating infection was made in 1847 by the Hungarian doctor Ignaz Semmelweis, who noticed that medical students fresh from the dissecting room were causing more maternal deaths than midwives were. Semmelweis, despite ridicule and opposition, introduced compulsory handwashing for everyone entering the maternity wards and was rewarded with a plunge in maternal and fetal deaths; however, the Royal Society dismissed his advice.
Until the pioneering work of British surgeon Joseph Lister in the 1860s, most medical men believed that chemical damage from exposures to bad air (see "miasma") was responsible for infections in wounds, and facilities for washing hands or a patient's wounds were not available. Lister became aware of the work of French chemist Louis Pasteur, who showed that rotting and fermentation could occur under anaerobic conditions if micro-organisms were present. Pasteur suggested three methods to eliminate the micro-organisms responsible for gangrene: filtration, exposure to heat, or exposure to chemical solutions. Lister confirmed Pasteur's conclusions with his own experiments and decided to use his findings to develop antiseptic techniques for wounds. As the first two methods suggested by Pasteur were inappropriate for the treatment of human tissue, Lister experimented with the third, spraying carbolic acid on his instruments. He found that this remarkably reduced the incidence of gangrene and he published his results in The Lancet. Later, on 9 August 1867, he read a paper before the British Medical Association in Dublin, on the Antiseptic Principle of the Practice of Surgery, which was reprinted in the British Medical Journal. His work was groundbreaking and laid the foundations for a rapid advance in infection control that saw modern antiseptic operating theatres widely used within 50 years.
Lister continued to develop improved methods of antisepsis and asepsis when he realised that infection could be better avoided by preventing bacteria from getting into wounds in the first place. This led to the rise of sterile surgery. Lister introduced the Steam Steriliser to sterilize equipment, instituted rigorous hand washing and later implemented the wearing of rubber gloves. These three crucial advances – the adoption of a scientific methodology toward surgical operations, the use of anaesthetic and the introduction of sterilised equipment – laid the groundwork for the modern invasive surgical techniques of today.
The use of X-rays as an important medical diagnostic tool began with their discovery in 1895 by German physicist Wilhelm Röntgen. He noticed that these rays could penetrate the skin, allowing the skeletal structure to be captured on a specially treated photographic plate.
== Further reading ==
Bartolo, M., Bargellesi, S., Castioni, C. A., Intiso, D., Fontana, A., Copetti, M., Scarponi, F., Bonaiuti, D., & Intensive Care and Neurorehabilitation Italian Study Group (2017). Mobilization in early rehabilitation in intensive care unit patients with severe acquired brain injury: An observational study. Journal of rehabilitation medicine, 49(9), 715–722.
Ni, C.-yan, Wang, Z.-hong, Huang, Z.-ping, Zhou, H., Fu, L.-juan, Cai, H., Huang, X.-xuan, Yang, Y., Li, H.-fen, & Zhou, W.-ping. (2018). Early enforced mobilization after liver resection: A prospective randomized controlled trial. International Journal of Surgery, 54, 254–258.
Lei, Y. T., Xie, J. W., Huang, Q., Huang, W., & Pei, F. X. (2021). Benefits of early ambulation within 24 h after total knee arthroplasty: a multicenter retrospective cohort study in China. Military Medical Research, 8(1), 17.
Stethen, T. W., Ghazi, Y. A., Heidel, R. E., Daley, B. J., Barnes, L., Patterson, D., & McLoughlin, J. M. (2018). Walking to recovery: the effects of missed ambulation events on postsurgical recovery after bowel resection. Journal of gastrointestinal oncology, 9(5), 953–961.
Yakkanti, R. R., Miller, A. J., Smith, L. S., Feher, A. W., Mont, M. A., & Malkani, A. L. (2019). Impact of early mobilization on length of stay after primary total knee arthroplasty. Annals of translational medicine, 7(4), 69. | Wikipedia/Surgery |
Kidney stone disease is known as renal calculus disease, nephrolithiasis or urolithiasis in medical terminology. "Renal" is Latin for kidney, while "nephro" is the Greek equivalent. "Lithiasis" (Gr.) and "calculus" (Lat.; pl. calculi) both mean stone(s). Kidney stone disease is a crystallopathy and occurs when there are too many minerals in the urine and not enough liquid or hydration. This imbalance causes tiny pieces of crystal to aggregate and form hard masses, or calculi (stones), in the upper urinary tract. Renal calculi typically form in the kidney and, if small enough, are able to leave the body in the urine stream. A small calculus may pass without causing symptoms. However, if a stone grows to more than 5 millimeters (0.2 inches), it can cause blockage of the ureter, resulting in extremely sharp and severe pain (renal colic) in the lower back that often radiates downward to the groin. A calculus may also result in blood in the urine, vomiting (due to severe pain), or painful urination. About half of all people who have had a kidney stone are likely to develop another within ten years.
Most calculi form by a combination of genetics and environmental factors. Risk factors include high urine calcium levels, obesity, certain foods, some medications, calcium supplements, gout, hyperparathyroidism and not drinking enough fluids. Calculi form in the kidney when minerals in urine are at high concentration. The diagnosis is usually based on symptoms, urine testing, and medical imaging. Blood tests may also be useful. Calculi are typically classified by their location: nephrolithiasis (in the kidney), ureterolithiasis (in the ureter), cystolithiasis (in the bladder), or by what they are made of (calcium oxalate, uric acid, struvite, cystine).
In those who have had renal calculi, drinking fluids, especially water, is a way to prevent them. Drinking enough fluid to produce more than two liters of urine per day is recommended. If fluid intake alone is not effective in preventing renal calculi, the medications thiazide diuretics, citrate, or allopurinol may be suggested. Soft drinks containing phosphoric acid (typically colas) should be avoided. When a calculus causes no symptoms, no treatment is needed. For those with symptoms, pain control is usually the first measure, using medications such as nonsteroidal anti-inflammatory drugs or opioids. Larger calculi may be helped to pass with the medication tamsulosin, or may require procedures such as extracorporeal shock wave lithotripsy, ureteroscopy, or percutaneous nephrolithotomy.
Renal calculi have affected humans throughout history, with a description of surgery to remove them, attributed to Sushruta in ancient India, dating from as early as 600 BC. Between 1% and 15% of people globally are affected by renal calculi at some point in their lives. In 2015, 22.1 million cases occurred, resulting in about 16,100 deaths. They have become more common in the Western world since the 1970s. Generally, more men are affected than women. The prevalence and incidence of the disease are rising worldwide and continue to be challenging for patients, physicians, and healthcare systems alike. In this context, epidemiological studies are striving to elucidate the worldwide changes in the patterns and burden of the disease and to identify modifiable risk factors that contribute to the development of renal calculi.
== Signs and symptoms ==
The hallmark of a stone that obstructs the ureter or renal pelvis is excruciating, intermittent pain that radiates from the flank to the groin or to the inner thigh. This is due to the transfer of referred pain signals from the lower thoracic splanchnic nerves to the lumbar splanchnic nerves as the stone passes down from the kidney or proximal ureter to the distal ureter. This pain, known as renal colic, is often described as one of the strongest pain sensations known. Renal colic caused by kidney stones is commonly accompanied by urinary urgency, restlessness, hematuria, sweating, nausea, and vomiting. It typically comes in waves lasting 20 to 60 minutes caused by peristaltic contractions of the ureter as it attempts to expel the stone.
The embryological link between the urinary tract, the genital system, and the gastrointestinal tract is the basis of the radiation of pain to the gonads, as well as the nausea and vomiting that are also common in urolithiasis. Postrenal azotemia and hydronephrosis can be observed following the obstruction of urine flow through one or both ureters.
Pain in the lower-left quadrant can sometimes be confused with diverticulitis because the sigmoid colon overlaps the ureter, and the exact location of the pain may be difficult to isolate due to the proximity of these two structures.
== Risk factors ==
Dehydration from low fluid intake is a factor in stone formation. Individuals living in warm climates are at higher risk due to increased fluid loss. Obesity, immobility, and sedentary lifestyles are other leading risk factors.
High dietary intake of animal protein, sodium, and sugars (including honey, refined sugars, fructose, and high-fructose corn syrup), as well as excessive consumption of fruit juices, may increase the risk of kidney stone formation due to increased uric acid excretion and elevated urinary oxalate levels (whereas tea, coffee, wine and beer may decrease the risk).
Kidney stones can result from an underlying metabolic condition, such as distal renal tubular acidosis, Dent's disease, hyperparathyroidism, primary hyperoxaluria, or medullary sponge kidney. 3–20% of people who form kidney stones have medullary sponge kidney.
Kidney stones are more common in people with Crohn's disease; Crohn's disease is associated with hyperoxaluria and malabsorption of magnesium.
A person with recurrent kidney stones may be screened for such disorders. This is typically done with a 24-hour urine collection. The urine is analyzed for features that promote stone formation.
=== Calcium oxalate ===
Calcium is one component of the most common type of human kidney stones, calcium oxalate. Some studies suggest that people who take calcium or vitamin D as a dietary supplement have a higher risk of developing kidney stones. In the United States, kidney stone formation was used as an indicator of excess calcium intake by the Reference Daily Intake committee for calcium in adults.
In the early 1990s, a study conducted for the Women's Health Initiative in the US found that postmenopausal women who consumed 1000 mg of supplemental calcium and 400 international units of vitamin D per day for seven years had a 17% higher risk of developing kidney stones than subjects taking a placebo. The Nurses' Health Study also showed an association between supplemental calcium intake and kidney stone formation.
Unlike supplemental calcium, high intakes of dietary calcium do not appear to cause kidney stones and may actually protect against their development. This is perhaps related to the role of calcium in binding ingested oxalate in the gastrointestinal tract. As the amount of calcium intake decreases, the amount of oxalate available for absorption into the bloodstream increases; this oxalate is then excreted in greater amounts into the urine by the kidneys. In the urine, oxalate is a very strong promoter of calcium oxalate precipitation—about 15 times stronger than calcium.
A 2004 study found that diets low in calcium are associated with a higher overall risk for kidney stone formation. For most individuals, other risk factors for kidney stones, such as high intakes of dietary oxalates and low fluid intake, play a greater role than calcium intake.
=== Other electrolytes ===
Calcium is not the only electrolyte that influences the formation of kidney stones. For example, by increasing urinary calcium excretion, high dietary sodium may increase the risk of stone formation.
Drinking fluoridated tap water may increase the risk of kidney stone formation by a similar mechanism, though further epidemiologic studies are warranted to determine whether fluoride in drinking water is associated with an increased incidence of kidney stones. High dietary intake of potassium appears to reduce the risk of stone formation because potassium promotes the urinary excretion of citrate, an inhibitor of calcium crystal formation.
Kidney stones are more likely to develop, and to grow larger, if a person has low dietary magnesium. Magnesium inhibits stone formation.
=== Animal protein ===
Diets in Western nations typically contain a large proportion of animal protein. Eating animal protein creates an acid load that increases urinary excretion of calcium and uric acid and reduces citrate. Urinary excretion of excess sulfurous amino acids (e.g., cysteine and methionine), uric acid, and other acidic metabolites from animal protein acidifies the urine, which promotes the formation of kidney stones. Low urinary citrate excretion is also commonly found in those with a high dietary intake of animal protein, whereas vegetarians tend to have higher levels of citrate excretion. Low urinary citrate, too, promotes stone formation.
=== Vitamins ===
The evidence linking vitamin C supplements with an increased rate of kidney stones is inconclusive. The excess dietary intake of vitamin C might increase the risk of calcium-oxalate stone formation. The link between vitamin D intake and kidney stones is also tenuous.
Excessive vitamin D supplementation may increase the risk of stone formation by increasing the intestinal absorption of calcium; correction of a deficiency does not.
== Pathophysiology ==
=== Supersaturation of urine ===
Kidney stones are primarily composed of calcium salts, with the most common being calcium oxalate (70-80%), followed by calcium phosphate and uric acid. When urine contains high concentrations of these ions, they can form crystals and eventually stones.
The formation of kidney stones occurs in three main phases:
nucleation (initial crystal formation)
growth (expansion of single crystals)
aggregation (clumping together of multiple crystals)
When the urine becomes supersaturated (when the urine solvent contains more solutes than it can hold in solution) with one or more calculogenic (crystal-forming) substances, initial seed crystals may form through the process of nucleation. Heterogeneous nucleation (where there is a solid surface present on which a crystal can grow) proceeds more rapidly than homogeneous nucleation (where a crystal must grow in a liquid medium with no such surface), because it requires less energy. Adhering to cells on the surface of a renal papilla, a seed crystal can grow and aggregate into an organized mass. Depending on the chemical composition of the crystal, the stone-forming process may proceed more rapidly when the urine pH is unusually high or low.
Supersaturation of the urine with respect to a calculogenic compound is pH-dependent. For example, at a pH of 7.0, the solubility of uric acid in urine is 158 mg/100 mL. Reducing the pH to 5.0 decreases the solubility of uric acid to less than 8 mg/100 mL. The formation of uric-acid stones requires a combination of hyperuricosuria (high urine uric-acid levels) and low urine pH; hyperuricosuria alone is not associated with uric-acid stone formation if the urine pH is alkaline. Supersaturation of the urine is a necessary, but not a sufficient, condition for the development of any urinary calculus. Supersaturation is likely the underlying cause of uric acid and cystine stones, but calcium-based stones (especially calcium oxalate stones) may have a more complex cause.
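To make the pH effect concrete, the sketch below compares a fixed urinary uric-acid concentration against the two solubility figures quoted above. It is only an illustration of the supersaturation idea: the function name and the example concentration are hypothetical assumptions, and the two-point solubility table is not a clinical model.

```python
# Illustrative sketch: how urine pH changes uric acid supersaturation.
# Only the two solubility values quoted in the text are used
# (158 mg/100 mL at pH 7.0, ~8 mg/100 mL at pH 5.0); the function name
# and the example concentration are hypothetical.

URIC_ACID_SOLUBILITY_MG_PER_100ML = {7.0: 158.0, 5.0: 8.0}

def supersaturation_ratio(urine_conc_mg_per_100ml: float, ph: float) -> float:
    """Ratio of urinary uric acid concentration to its solubility at a given pH.

    A ratio above 1.0 means the urine is supersaturated with uric acid,
    a necessary (but not sufficient) condition for crystal nucleation.
    """
    solubility = URIC_ACID_SOLUBILITY_MG_PER_100ML[ph]
    return urine_conc_mg_per_100ml / solubility

if __name__ == "__main__":
    conc = 60.0  # hypothetical urinary uric acid concentration, mg/100 mL
    for ph in (7.0, 5.0):
        ratio = supersaturation_ratio(conc, ph)
        state = "supersaturated" if ratio > 1.0 else "undersaturated"
        print(f"pH {ph}: ratio {ratio:.2f} ({state})")
```

With the same uric-acid load, the urine in this example is comfortably undersaturated at pH 7.0 but several-fold supersaturated at pH 5.0, which is why low urine pH, rather than hyperuricosuria alone, drives uric-acid stone formation.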
=== Randall's plaque ===
While supersaturation of urine may lead to crystalluria, it does not necessarily promote the formation of a kidney stone because the particle may not reach the sufficient size needed for renal attachment. On the other hand, Randall's plaques, which were first identified by Alexander Randall in 1937, are calcium phosphate deposits that form in the papillary interstitium and are thought to be the nidus required for stone development. In addition to Randall's plugs, which form in the Duct of Bellini, these structures can generate reactive oxygen species that further enhance stone formation.
=== Pathogenic bacteria ===
Some bacteria have roles in promoting stone formation. Specifically, urease-positive bacteria, such as Proteus mirabilis, can produce the enzyme urease, which converts urea to ammonia and carbon dioxide. This increases the urinary pH and promotes struvite stone formation. Additionally, non-urease-producing bacteria can provide bacterial components that promote calcium oxalate crystallization, though this mechanism is poorly understood.
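The chemistry behind this alkalinization can be summarized by the following simplified net reactions (the carbamate intermediate of urease catalysis is omitted):

$$\mathrm{CO(NH_2)_2 + H_2O \ \xrightarrow{\text{urease}}\ 2\,NH_3 + CO_2}$$

$$\mathrm{NH_3 + H_2O \ \rightleftharpoons\ NH_4^+ + OH^-}$$

$$\mathrm{Mg^{2+} + NH_4^+ + PO_4^{3-} + 6\,H_2O \ \longrightarrow\ MgNH_4PO_4\cdot 6H_2O\ \text{(struvite)}}$$

The hydroxide generated by the ammonia equilibrium raises the urine pH, and in alkaline urine more phosphate is present in its fully deprotonated form, which precipitates with magnesium and the newly formed ammonium as struvite (discussed further in the struvite stone section below).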
=== Inhibitors of stone formation ===
Normal urine contains chelating agents, such as citrate, that inhibit the nucleation, growth, and aggregation of calcium-containing crystals. Other endogenous inhibitors include calgranulin (an S-100 calcium-binding protein), Tamm–Horsfall protein, glycosaminoglycans, uropontin (a form of osteopontin), nephrocalcin (an acidic glycoprotein), prothrombin F1 peptide, and bikunin (uronic acid-rich protein). The biochemical mechanisms of action of these substances have not yet been thoroughly elucidated. However, when these substances fall below their normal proportions, stones can form from an aggregation of crystals.
Sufficient dietary intake of magnesium and citrate inhibits the formation of calcium oxalate and calcium phosphate stones; in addition, magnesium and citrate operate synergistically to inhibit kidney stones. The efficacy of magnesium in subduing stone formation and growth is dose-dependent.
=== Hypocitraturia ===
Hypocitraturia or low urinary-citrate excretion (variably defined as less than 320 mg/day) can be a contributing cause of kidney stones in up to 2/3 of cases. The protective role of citrate is linked to several mechanisms; citrate reduces urinary supersaturation of calcium salts by forming soluble complexes with calcium ions and by inhibiting crystal growth and aggregation. Therapy with potassium citrate is commonly prescribed in clinical practice to increase urinary citrate and to reduce stone formation rates. Alkali citrate is also used to increase urine citrate levels. It can be prescribed or found over-the-counter in pill, liquid or powder form.
== Diagnosis ==
Diagnosis of kidney stones is made on the basis of information obtained from the history, physical examination, urinalysis, and radiographic studies. Clinical diagnosis is usually made on the basis of the location and severity of the pain, which is typically colicky in nature (comes and goes in spasmodic waves). Pain in the back occurs when calculi produce an obstruction in the kidney. Physical examination may reveal fever and tenderness at the costovertebral angle on the affected side.
=== Imaging studies ===
Calcium-containing stones are relatively radiodense (opaque to X-rays), and they can often be detected by a traditional radiography of the abdomen that includes the kidneys, ureters, and bladder (KUB film). KUB radiography, although useful in monitoring size of stone or passage of stone in stone formers, might not be useful in the acute setting due to low sensitivity. Some 60% of all renal stones are radiopaque. In general, calcium phosphate stones have the greatest density, followed by calcium oxalate and magnesium ammonium phosphate stones. Cystine calculi are only faintly radiodense, while uric acid stones are usually entirely radiolucent.
In people with a history of stones who are younger than 50 years of age and are presenting with the symptoms of stones without any concerning signs, helical CT scan imaging is not required. A computed tomography (CT) scan is also not typically recommended in children.
Otherwise, a noncontrast helical CT scan with 5-millimeter (0.2 in) sections is the diagnostic method of choice to detect kidney stones and confirm the diagnosis of kidney stone disease. Nearly all stones are detectable on CT scans, with the exception of those composed of certain drug residues in the urine, such as from indinavir.
Where a CT scan is unavailable, an intravenous pyelogram may be performed to help confirm the diagnosis of urolithiasis. This involves intravenous injection of a contrast agent followed by a KUB film. Uroliths present in the kidneys, ureters, or bladder may be better defined by the use of this contrast agent. Stones can also be detected by a retrograde pyelogram, where a similar contrast agent is injected directly into the distal ostium of the ureter (where the ureter terminates as it enters the bladder).
Renal ultrasonography can sometimes be useful, because it gives details about the presence of hydronephrosis, suggesting that the stone is blocking the outflow of urine. Radiolucent stones, which do not appear on KUB, may show up on ultrasound imaging studies. Other advantages of renal ultrasonography include its low cost and absence of radiation exposure. Ultrasound imaging is useful for detecting stones in situations where X-rays or CT scans are discouraged, such as in children or pregnant women. Despite these advantages, renal ultrasonography in 2009 was not considered a substitute for noncontrast helical CT scan in the initial diagnostic evaluation of urolithiasis. The main reason for this is that, compared with CT, renal ultrasonography more often fails to detect small stones (especially ureteral stones) and other serious disorders that could be causing the symptoms.
In contrast, a 2014 study suggested that ultrasonography should be used as the initial diagnostic imaging test, with further imaging studies performed at the discretion of the physician on the basis of clinical judgment, and that using ultrasonography rather than CT as an initial diagnostic test results in less radiation exposure and equally good outcomes.
=== Laboratory examination ===
Laboratory investigations typically carried out include:
microscopic examination of the urine, which may show red blood cells, bacteria, leukocytes, urinary casts, and crystals;
urine culture to identify any infecting organisms present in the urinary tract and sensitivity to determine the susceptibility of these organisms to specific antibiotics;
complete blood count, looking for neutrophilia (increased neutrophil granulocyte count) suggestive of bacterial infection, as seen in the setting of struvite stones;
renal function tests to look for abnormally high blood calcium levels (hypercalcemia);
24 hour urine collection to measure total daily urinary volume, magnesium, sodium, uric acid, calcium, citrate, oxalate, and phosphate;
collection of stones (by urinating through a StoneScreen kidney stone collection cup or a simple tea strainer) is useful. Chemical analysis of collected stones can establish their composition, which in turn can help to guide future preventive and therapeutic management.
=== Composition ===
==== Calcium-containing stones ====
By far, the most common type of kidney stones worldwide contains calcium. For example, calcium-containing stones represent about 80% of all cases in the United States; these typically contain calcium oxalate either alone or in combination with calcium phosphate in the form of apatite or brushite. Factors that promote the precipitation of oxalate crystals in the urine, such as primary hyperoxaluria, are associated with the development of calcium oxalate stones. The formation of calcium phosphate stones is associated with conditions such as hyperparathyroidism and renal tubular acidosis.
Oxaluria is increased in patients with certain gastrointestinal disorders including inflammatory bowel disease such as Crohn's disease or in patients who have undergone resection of the small bowel or small-bowel bypass procedures. Oxaluria is also increased in patients who consume increased amounts of oxalate (found in vegetables and nuts). Primary hyperoxaluria is a rare autosomal recessive condition that usually presents in childhood.
Calcium oxalate crystals come in two varieties. Calcium oxalate monohydrate crystals can appear as 'dumbbells' or as long ovals that resemble the individual posts in a picket fence. Calcium oxalate dihydrate crystals have a tetragonal "envelope" appearance.
==== Struvite stones ====
About 10–15% of urinary calculi are composed of struvite (hexa-hydrated ammonium magnesium phosphate, NH4MgPO4·6H2O). Struvite stones (also known as "infection stones," urease, or triple-phosphate stones) form most often in the presence of infection by urea-splitting bacteria. Using the enzyme urease, these organisms metabolize urea into ammonia and carbon dioxide. This alkalinizes the urine, resulting in favorable conditions for the formation of struvite stones. Proteus mirabilis, Proteus vulgaris, and Morganella morganii are the most common organisms isolated; less common organisms include Ureaplasma urealyticum and some species of Providencia, Klebsiella, Serratia, and Enterobacter. These infection stones are commonly observed in people who have factors that predispose them to urinary tract infections, such as those with spinal cord injury and other forms of neurogenic bladder, ileal conduit urinary diversion, vesicoureteral reflux, and obstructive uropathies. They are also commonly seen in people with underlying metabolic disorders, such as idiopathic hypercalciuria, hyperparathyroidism, and gout. Infection stones can grow rapidly, forming large calyceal staghorn (antler-shaped) calculi requiring invasive surgery such as percutaneous nephrolithotomy for definitive treatment.
Struvite stones (triple-phosphate/magnesium ammonium phosphate) have a 'coffin lid' morphology by microscopy.
==== Uric acid stones ====
About 5–10% of all stones are formed from uric acid. People with certain metabolic abnormalities, including obesity, may produce uric acid stones. They also may form in association with conditions that cause hyperuricosuria (an excessive amount of uric acid in the urine) with or without hyperuricemia (an excessive amount of uric acid in the serum). They may also form in association with disorders of acid/base metabolism where the urine is excessively acidic (low pH), resulting in precipitation of uric acid crystals. A diagnosis of uric acid urolithiasis is supported by the presence of a radiolucent stone in the face of persistent urine acidity, in conjunction with the finding of uric acid crystals in fresh urine samples.
As noted above (section on calcium oxalate stones), people with inflammatory bowel disease (Crohn's disease, ulcerative colitis) tend to have hyperoxaluria and form oxalate stones. They also have a tendency to form urate stones. Urate stones are especially common after colon resection.
Uric acid stones appear as pleomorphic crystals, usually diamond-shaped. They may also look like squares or rods which are polarizable.
==== Other types ====
People with certain rare inborn errors of metabolism have a propensity to accumulate crystal-forming substances in their urine. For example, those with cystinuria, cystinosis, and Fanconi syndrome may form stones composed of cystine. Cystine stone formation can be treated with urine alkalinization and dietary protein restriction. People affected by xanthinuria often produce stones composed of xanthine. People affected by adenine phosphoribosyltransferase deficiency may produce 2,8-dihydroxyadenine stones, alkaptonurics produce homogentisic acid stones, and iminoglycinurics produce stones of glycine, proline, and hydroxyproline. Urolithiasis has also been noted to occur in the setting of therapeutic drug use, with crystals of drug forming within the renal tract in some people currently being treated with agents such as indinavir, sulfadiazine, and triamterene.
=== Location ===
Urolithiasis refers to stones originating anywhere in the urinary system, including the kidneys and bladder. Nephrolithiasis refers to the presence of such stones in the kidneys. Calyceal calculi are aggregations in either the minor or major calyx, parts of the kidney that pass urine into the ureter (the tube connecting the kidneys to the urinary bladder). The condition is called ureterolithiasis when a calculus is located in the ureter. Stones may also form or pass into the bladder, a condition referred to as bladder stones.
=== Size ===
Stones less than 5 mm (0.2 in) in diameter pass spontaneously in up to 98% of cases, while those measuring 5 to 10 mm (0.2 to 0.4 in) in diameter pass spontaneously in less than 53% of cases.
Stones that are large enough to fill the renal calyces are called staghorn stones and are composed of struvite in the vast majority of cases, which forms only in the presence of urease-producing bacteria. Other stone types that can grow to become staghorn stones are those composed of cystine, calcium oxalate monohydrate, and uric acid.
== Prevention ==
Preventative measures depend on the type of stones. In those with calcium stones, drinking plenty of fluids, thiazide diuretics, and citrate are effective, as is allopurinol in those with high uric acid levels in the urine.
=== Dietary measures ===
Specific therapy should be tailored to the type of stones involved. Diet can have an effect on the development of kidney stones. Preventive strategies include some combination of dietary modifications and medications with the goal of reducing the excretory load of calculogenic compounds on the kidneys. Dietary recommendations to minimize the formation of kidney stones include:
increasing total fluid intake to achieve more than two liters per day of urine output;
limiting cola and other sugar-sweetened soft drinks to less than one liter per week;
limiting animal protein intake to no more than two meals daily (an association between animal protein and recurrence of kidney stones has been shown in men);
increasing citrate, including from lemon and lime juice; citric acid in its natural form, such as from citrus fruits, "prevents small stones from becoming 'problem stones' by coating them and preventing other material from attaching and building onto the stones"; citrate inhibits the formation of kidney stones at all phases (nucleation, growth, and aggregation) by raising the limit at which oxalate remains stable, slowing oxalate crystal growth, and, notably, reducing crystal aggregation within the kidney tubules;
increasing alkaline load by consuming more fruits and vegetables (because uric acid crystals form in an acidic environment);
reducing sodium intake is associated with a reduction in urine calcium excretion.
Maintenance of dilute urine by means of vigorous fluid therapy is beneficial in all forms of kidney stones, so increasing urine volume is a key principle for the prevention of kidney stones. Fluid intake should be sufficient to maintain a urine output of at least 2 litres (68 US fl oz) per day. A high fluid intake may reduce the likelihood of kidney stone recurrence or may increase the time between stone episodes, without unwanted effects.
Calcium binds with available oxalate in the gastrointestinal tract, thereby preventing its absorption into the bloodstream. Reducing oxalate absorption decreases kidney stone risk in susceptible people. Because of this, some doctors recommend increasing dairy intake so that its calcium content will serve as an oxalate binder. Taking calcium citrate tablets during or after meals containing high oxalate foods may be useful if dietary calcium cannot be increased by other means as in those with lactose intolerance. The preferred calcium supplement for people at risk of stone formation is calcium citrate, as opposed to calcium carbonate, because it helps to increase urinary citrate excretion.
Aside from vigorous oral hydration and eating more dietary calcium, other prevention strategies include avoidance of higher doses of supplemental vitamin C (since ascorbate is metabolized to oxalate) and restriction of oxalate-rich foods such as leaf vegetables, rhubarb, soy products and chocolate. However, no randomized, controlled trial of oxalate restriction has been performed to test the hypothesis that oxalate restriction reduces stone formation. Some evidence indicates magnesium intake decreases the risk of symptomatic kidney stones.
=== Urine alkalinization ===
The mainstay for medical management of uric acid stones is alkalinization (increasing the pH) of the urine. Uric acid stones are among the few types amenable to dissolution therapy, referred to as chemolysis. Chemolysis is usually achieved through the use of oral medications, although in some cases, intravenous agents or even instillation of certain irrigating agents directly onto the stone can be performed, using antegrade nephrostomy or retrograde ureteral catheters. Acetazolamide is a medication that alkalinizes the urine. In addition to acetazolamide or as an alternative, certain dietary supplements are available that produce a similar alkalinization of the urine. These include alkali citrate, sodium bicarbonate, potassium citrate, magnesium citrate, and bicitrate (a combination of citric acid monohydrate and sodium citrate dihydrate). Aside from alkalinization of the urine, these supplements have the added advantage of increasing the urinary citrate level, which helps to reduce the aggregation of calcium oxalate stones.
Increasing the urine pH to around 6.5 provides optimal conditions for dissolution of uric acid stones. Increasing the urine pH to a value higher than 7.0 may increase the risk of calcium phosphate stone formation, though this concept is controversial since citrate does inhibit calcium phosphate crystallization. Testing the urine periodically with nitrazine paper can help to ensure the urine pH remains in this optimal range. Using this approach, stone dissolution rate can be expected to be around 10 mm (0.4 in) of stone radius per month.
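As a rough illustration of what the quoted dissolution rate implies for treatment duration, the sketch below converts it into an approximate time to dissolve stones of a few sizes. It assumes, purely for illustration, that the stone shrinks uniformly at the quoted rate of about 10 mm of radius per month; the function name and example diameters are hypothetical.

```python
# Back-of-the-envelope estimate of chemolysis time for a uric acid stone,
# assuming the ~10 mm of stone radius dissolved per month quoted above.
# The function name and the example stone sizes are hypothetical.

DISSOLUTION_RATE_MM_PER_MONTH = 10.0  # reduction in stone radius per month

def months_to_dissolve(stone_diameter_mm: float,
                       rate_mm_per_month: float = DISSOLUTION_RATE_MM_PER_MONTH) -> float:
    """Estimated months of urine alkalinization needed to dissolve a stone completely."""
    radius_mm = stone_diameter_mm / 2.0
    return radius_mm / rate_mm_per_month

if __name__ == "__main__":
    for diameter in (5.0, 10.0, 20.0):  # stone diameters in mm
        print(f"{diameter:.0f} mm stone: about {months_to_dissolve(diameter):.2f} months")
```

Under these assumptions, even a 20 mm stone would be expected to dissolve within roughly a month, consistent with uric acid stones being among the few stone types amenable to dissolution therapy rather than surgery.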
==== Slaked lime ====
Calcium hydroxide decreases urinary calcium when combined with foods rich in oxalic acid, such as green leafy vegetables.
=== Diuretics ===
One of the recognized medical therapies for prevention of stones is the thiazide and thiazide-like diuretics, such as chlorthalidone or indapamide. These drugs inhibit the formation of calcium-containing stones by reducing urinary calcium excretion. Sodium restriction is necessary for clinical effect of thiazides, as sodium excess promotes calcium excretion. Thiazides work best for renal leak hypercalciuria (high urine calcium levels), a condition in which high urinary calcium levels are caused by a primary kidney defect. Thiazides are useful for treating absorptive hypercalciuria, a condition in which high urinary calcium is a result of excess absorption from the gastrointestinal tract.
=== Allopurinol ===
For people with hyperuricosuria and calcium stones, allopurinol is one of the few treatments that have been shown to reduce kidney stone recurrences. Allopurinol interferes with the production of uric acid in the liver. The drug is also used in people with gout or hyperuricemia (high serum uric acid levels). Dosage is adjusted to maintain a reduced urinary excretion of uric acid. Serum uric acid level at or below 6 mg/100 mL is often a therapeutic goal. Hyperuricemia is not necessary for the formation of uric acid stones; hyperuricosuria can occur in the presence of normal or even low serum uric acid. Some practitioners advocate adding allopurinol only in people in whom hyperuricosuria and hyperuricemia persist, despite the use of a urine-alkalinizing agent such as sodium bicarbonate or potassium citrate.
== Treatment ==
Stone size influences the rate of spontaneous stone passage. For example, up to 98% of small stones (less than 5 mm (0.2 in) in diameter) may pass spontaneously through urination within four weeks of the onset of symptoms, but for larger stones (5 to 10 mm (0.2 to 0.4 in) in diameter), the rate of spontaneous passage decreases to less than 53%. Initial stone location also influences the likelihood of spontaneous stone passage. Rates increase from 48% for stones located in the proximal ureter to 79% for stones located at the vesicoureteric junction, regardless of stone size. Assuming no high-grade obstruction or associated infection is found in the urinary tract, and symptoms are relatively mild, various nonsurgical measures can be used to encourage the passage of a stone. Repeat stone formers benefit from more intense management, including proper fluid intake and use of certain medications, as well as careful monitoring.
=== Pain management ===
Management of pain often requires intravenous administration of NSAIDs or opioids. NSAIDs appear somewhat better than opioids or paracetamol in those with normal kidney function. Medications by mouth are often effective for less severe discomfort. The use of antispasmodics does not have further benefit.
=== Medical expulsive therapy ===
The use of medications to speed the spontaneous passage of stones in the ureter is referred to as medical expulsive therapy. Several agents, including alpha adrenergic blockers (such as tamsulosin) and calcium channel blockers (such as nifedipine), may be effective. Alpha-blockers likely result in more people passing their stones, and they may pass their stones in a shorter time. People taking alpha-blockers may also use less pain medication and may not need to visit the hospital. Alpha-blockers appear to be more effective for larger stones (over 5 mm in size) than smaller stones. However, use of alpha-blockers may be associated with a slight increase in serious, unwanted effects from this medication. A combination of tamsulosin and a corticosteroid may be better than tamsulosin alone. These treatments also appear to be useful in addition to lithotripsy.
=== Lithotripsy ===
Extracorporeal shock wave lithotripsy (ESWL) is a noninvasive technique for the removal of kidney stones. Most ESWL is carried out when the stone is present near the renal pelvis. ESWL involves the use of a lithotriptor machine to deliver externally applied, focused, high-intensity pulses of ultrasonic energy to cause fragmentation of a stone over a period of around 30–60 minutes. Following its introduction in the United States in February 1984, ESWL was rapidly and widely accepted as a treatment alternative for renal and ureteral stones. It is currently used in the treatment of uncomplicated stones located in the kidney and upper ureter, provided the aggregate stone burden (stone size and number) is less than 20 mm (0.8 in) and the anatomy of the involved kidney is normal.
For a stone greater than 10 millimetres (0.39 in), ESWL may not help break the stone in one treatment; instead, two or three treatments may be needed. Some 80-85% of simple renal calculi can be effectively treated with ESWL. A number of factors can influence its efficacy, including chemical composition of the stone, presence of anomalous renal anatomy and the specific location of the stone within the kidney, presence of hydronephrosis, body mass index, and distance of the stone from the surface of the skin.
Common adverse effects of ESWL include acute trauma, such as bruising at the site of shock administration, and damage to blood vessels of the kidney. In fact, the vast majority of people who are treated with a typical dose of shock waves using currently accepted treatment settings are likely to experience some degree of acute kidney injury. ESWL-induced acute kidney injury is dose-dependent (increases with the total number of shock waves administered and with the power setting of the lithotriptor) and can be severe, including internal bleeding and subcapsular hematomas. On rare occasions, such cases may require blood transfusion and even lead to acute kidney failure. Hematoma rates may be related to the type of lithotriptor used; hematoma rates of less than 1% and up to 13% have been reported for different lithotriptor machines. Recent studies show reduced acute tissue injury when the treatment protocol includes a brief pause following the initiation of treatment, and both improved stone breakage and a reduction in injury when ESWL is carried out at slow shock wave rate.
In addition to the aforementioned potential for acute kidney injury, animal studies suggest these acute injuries may progress to scar formation, resulting in loss of functional renal volume. Recent prospective studies also indicate elderly people are at increased risk of developing new-onset hypertension following ESWL. In addition, a retrospective case-control study published by researchers from the Mayo Clinic in 2006 has found an increased risk of developing diabetes mellitus and hypertension in people who had undergone ESWL, compared with age and gender-matched people who had undergone nonsurgical treatment. Whether or not acute trauma progresses to long-term effects probably depends on multiple factors that include the shock wave dose (i.e., the number of shock waves delivered, rate of delivery, power setting, acoustic characteristics of the particular lithotriptor, and frequency of retreatment), as well as certain intrinsic predisposing pathophysiologic risk factors.
To address these concerns, the American Urological Association established the Shock Wave Lithotripsy Task Force to provide an expert opinion on the safety and risk-benefit ratio of ESWL. The task force published a white paper outlining their conclusions in 2009. They concluded the risk-benefit ratio remains favorable for many people. The advantages of ESWL include its noninvasive nature, the fact that it is technically easy to treat most upper urinary tract calculi, and that, at least acutely, it is a well-tolerated, low-morbidity treatment for the vast majority of people. However, they recommended slowing the shock wave firing rate from 120 pulses per minute to 60 pulses per minute to reduce the risk of renal injury and increase the degree of stone fragmentation.
Alpha-blockers are sometimes prescribed after shock wave lithotripsy to help the pieces of the stone leave the person's body. By relaxing muscles and helping to keep blood vessels open, alpha-blockers may relax the ureter muscles and allow the kidney stone fragments to pass. When compared to usual care or placebo treatment, alpha-blockers may lead to faster clearing of stones, a reduced need for additional treatment, and fewer unwanted effects. They may also clear kidney stones in more adults than the standard shock wave lithotripsy procedure alone. Unwanted outcomes, such as hospital emergency visits and readmission for stone-related problems, were more common in adults who did not receive alpha-blockers as a part of their treatment.
=== Surgery ===
Most stones under 5 mm (0.2 in) pass spontaneously. Prompt surgery may, nonetheless, be required in persons with only one working kidney, bilateral obstructing stones, a urinary tract infection and thus, it is presumed, an infected kidney, or intractable pain. Beginning in the mid-1980s, less invasive treatments such as extracorporeal shock wave lithotripsy, ureteroscopy, and percutaneous nephrolithotomy began to replace open surgery as the modalities of choice for the surgical management of urolithiasis. More recently, flexible ureteroscopy has been adapted to facilitate retrograde nephrostomy creation for percutaneous nephrolithotomy. This approach is still under investigation, though early results are favorable. Percutaneous nephrolithotomy or, rarely, anatrophic nephrolithotomy, is the treatment of choice for large or complicated stones (such as calyceal staghorn calculi) or stones that cannot be extracted using less invasive procedures.
==== Ureteroscopic surgery ====
Ureteroscopy has become increasingly popular as flexible and rigid fiberoptic ureteroscopes have become smaller. One ureteroscopic technique involves the placement of a ureteral stent (a small tube extending from the bladder, up the ureter and into the kidney) to provide immediate relief of an obstructed kidney. Stent placement can be useful for saving a kidney at risk for postrenal acute kidney failure due to the increased hydrostatic pressure, swelling and infection (pyelonephritis and pyonephrosis) caused by an obstructing stone. Ureteral stents vary in length from 24 to 30 cm (9.4 to 11.8 in) and most have a shape commonly referred to as a "double-J" or "double pigtail", because of the curl at both ends. They are designed to allow urine to flow past an obstruction in the ureter. They may be retained in the ureter for days to weeks as infections resolve and as stones are dissolved or fragmented by ESWL or by some other treatment. The stents dilate the ureters, which can facilitate instrumentation, and they also provide a clear landmark to aid in the visualization of the ureters and any associated stones on radiographic examinations. The presence of indwelling ureteral stents may cause minimal to moderate discomfort, frequency or urgency incontinence, and infection, which in general resolves on removal. Most ureteral stents can be removed cystoscopically during an office visit under topical anesthesia after resolution of urolithiasis. Research is currently uncertain if placing a temporary stent during ureteroscopy leads to different outcomes than not placing a stent in terms of number of hospital visits for post operative problems, short or long term pain, need for narcotic pain medication, risk of UTI, need for a repeat procedure or narrowing of the ureter from scarring.
More definitive ureteroscopic techniques for stone extraction (rather than simply bypassing the obstruction) include basket extraction and ultrasound ureterolithotripsy. Laser lithotripsy is another technique, which involves the use of a holmium:yttrium aluminium garnet (Ho:YAG) laser to fragment stones in the bladder, ureters, and kidneys.
Ureteroscopic techniques are generally more effective than ESWL for treating stones located in the lower ureter, with success rates of 93–100% using Ho:YAG laser lithotripsy. Although ESWL has been traditionally preferred by many practitioners for treating stones located in the upper ureter, more recent experience suggests ureteroscopic techniques offer distinct advantages in the treatment of upper ureteral stones. Specifically, the overall success rate is higher, fewer repeat interventions and postoperative visits are needed, and treatment costs are lower after ureteroscopic treatment when compared with ESWL. These advantages are especially apparent with stones greater than 10 mm (0.4 in) in diameter. However, because ureteroscopy of the upper ureter is much more challenging than ESWL, many urologists still prefer to use ESWL as a first-line treatment for stones of less than 10 mm, and ureteroscopy for those greater than 10 mm in diameter. Ureteroscopy is the preferred treatment in pregnant and morbidly obese people, as well as those with bleeding disorders.
== Epidemiology ==
Kidney stones affect all geographical, cultural, and racial groups. The lifetime risk is about 10–15% in the developed world, but can be as high as 20–25% in the Middle East. The increased risk of dehydration in hot climates, coupled with a diet 50% lower in calcium and 250% higher in oxalates compared to Western diets, accounts for the higher net risk in the Middle East. In the Middle East, uric acid stones are more common than calcium-containing stones. The number of deaths due to kidney stones is estimated at 19,000 per year, a figure that remained fairly consistent between 1990 and 2010.
In North America and Europe, the annual incidence of new kidney stones is roughly 0.5%. In the United States, the prevalence of urolithiasis increased from 3.2% to 5.2% between the mid-1970s and the mid-1990s, and about 9% of the population has had a kidney stone.
The total cost for treating urolithiasis was US$2 billion in 2003. About 65–80% of those with kidney stones are men; most stones in women are due to either metabolic defects (such as cystinuria) or infections (in the case of struvite stones). Urinary tract calculi are more common in men than in women. Men most commonly experience their first episode between 30 and 40 years of age, whereas for women, the age at first presentation is somewhat later. The age of onset shows a bimodal distribution in women, with episodes peaking at 35 and 55 years. Recurrence rates are estimated at 50% over a 10-year period and 75% over a 20-year period, with some people experiencing ten or more episodes over the course of a lifetime.
A 2010 review concluded that rates of disease are increasing.
== History ==
The existence of kidney stones was first recorded thousands of years ago, with various explanations given; Joseph Glanvill's Saducismus Triumphatus, for example, gives a detailed description of Abraham Mechelburg voiding small stones through his penis, attributing the affliction to witchcraft.
In 1901, a stone discovered in the pelvis of an ancient Egyptian mummy was dated to 4,800 BC.
Medical texts from ancient Mesopotamia, India, China, Persia, Greece, and Rome all mentioned calculous disease. Part of the Hippocratic Oath suggests there were practicing surgeons in ancient Greece to whom physicians were to defer for lithotomies, or the surgical removal of stones. The Roman medical treatise De Medicina by Aulus Cornelius Celsus contained a description of lithotomy, and this work served as the basis for this procedure until the 18th century.
Examples of people who had kidney stone disease include Napoleon I, Epicurus, Napoleon III, Peter the Great, Louis XIV, George IV, Oliver Cromwell, Lyndon B. Johnson, Benjamin Franklin, Michel de Montaigne, Francis Bacon, Isaac Newton, Samuel Pepys, William Harvey, Herman Boerhaave, and Antonio Scarpa.
New techniques in lithotomy began to emerge starting in 1520, but the operation remained risky. After Henry Jacob Bigelow popularized the technique of litholapaxy in 1878, the mortality rate dropped from about 24% to 2.4%. However, other treatment techniques continued to produce a high level of mortality, especially among inexperienced urologists. In 1980, Dornier MedTech introduced extracorporeal shock wave lithotripsy for breaking up stones via acoustical pulses, and this technique has since come into widespread use.
=== Etymology ===
The term renal calculus is from the Latin rēnēs, meaning "kidneys", and calculus, meaning "pebble". Lithiasis (stone formation) in the kidneys is called nephrolithiasis, from nephro-, meaning kidney, + -lith, meaning stone, and -iasis, meaning disorder. A distinction between nephrolithiasis and urolithiasis can be made because not all urinary stones (uroliths) form in the kidney; they can also form in the bladder. But the distinction is often clinically irrelevant (with similar disease process and treatment either way) and the words are thus often used loosely as synonyms.
== Children ==
Although kidney stones do not often occur in children, the incidence is increasing. These stones are in the kidney in two thirds of reported cases, and in the ureter in the remaining cases. Older children are at greater risk, independent of sex.
As with adults, most pediatric kidney stones are predominantly composed of calcium oxalate; struvite and calcium phosphate stones are less common. Calcium oxalate stones in children are associated with high amounts of calcium, oxalate, and magnesium in acidic urine.
Treatment of kidney stones in children is similar to treatment in adults, including shock wave lithotripsy, medication, and treatment using a scope passed through the bladder or through the skin into the kidney. Of these treatments, it is uncertain whether shock wave lithotripsy is more effective than medication or a scope passed through the bladder, but it is likely less successful than a scope passed through the skin into the kidney. When a scope is passed through the skin into the kidney, a regular and a mini-sized scope likely have similar success rates of stone removal. Alpha-blockers, a type of medication, may increase the successful removal of kidney stones when compared with a placebo and without ibuprofen.
== Research ==
Metabolic syndrome and its associated conditions, obesity and diabetes, are under research as general risk factors for kidney stone disease, including whether urinary excretion of calcium, oxalate, and urate is higher in affected people than in those of normal or low weight, and whether diet and physical activity play roles. Dietary, fluid-intake, and lifestyle factors remained major topics for research on the prevention of kidney stones as of 2017.
=== Gut microbiota ===
The gut microbiota has been explored as a contributing factor in stone disease, with evidence that its composition may differ in people who form kidney stones. One bacterium, Oxalobacter formigenes, is potentially beneficial for mitigating calcium oxalate stones because of its ability to metabolize oxalate as its sole carbon source, but research from 2018 suggests that it is instead part of a broader network of oxalate-degrading bacteria. Additionally, one study found that oral antibiotic use, which alters the gut microbiota, can increase the odds of a person developing a kidney stone.
== In other animals ==
Among ruminants, uroliths more commonly cause problems in males than in females; the sigmoid flexure of the male ruminant urinary tract makes obstruction of passage more likely. Early-castrated males are at greater risk because of their smaller urethral diameter.
Low Ca:P intake ratio is conducive to phosphatic (e.g. struvite) urolith formation. Incidence among wether lambs can be minimized by maintaining a dietary Ca:P intake ratio of 2:1.
Alkaline (higher) pH favors formation of carbonate and phosphate calculi. For domestic ruminants, dietary cation: anion balance is sometimes adjusted to assure a slightly acidic urine pH, for prevention of calculus formation.
Published generalizations regarding the effects of pH on the formation of silicate uroliths differ. In this connection, it may be noted that under some circumstances, calcium carbonate accompanies silica in siliceous uroliths.
Pelleted feeds may be conducive to formation of phosphate uroliths, because of increased urinary phosphorus excretion. This is attributable to lower saliva production where pelleted rations containing finely ground constituents are fed. With less blood phosphate partitioned into saliva, more tends to be excreted in urine. (Most saliva phosphate is fecally excreted.)
Oxalate uroliths can occur in ruminants, although such problems from oxalate ingestion may be relatively uncommon. Ruminant urolithiasis associated with oxalate ingestion has been reported. However, no renal tubular damage or visible deposition of calcium oxalate crystals in kidneys was found in yearling wether sheep fed diets containing soluble oxalate at 6.5 percent of dietary dry matter for about 100 days.
Conditions limiting water intake can result in stone formation.
Various surgical interventions, e.g. amputation of the urethral process at its base near the glans penis in male ruminants, perineal urethrostomy, or tube cystostomy may be considered for relief of obstructive urolithiasis.
== See also ==
Nephrocalcinosis
Kidney disease
Kidney stone formation in space
== References ==
=== Notes ===
== External links ==
Information from the European Urological Association
Kidney Stone Guide Book Archived 3 August 2020 at the Wayback Machine – University of Chicago Kidney Stone Program | Wikipedia/Kidney_stone_disease |
Sialolithiasis (also termed salivary calculi, or salivary stones) is a crystallopathy where a calcified mass or sialolith forms within a salivary gland, usually in the duct of the submandibular gland (also termed "Wharton's duct"). Less commonly the parotid gland or rarely the sublingual gland or a minor salivary gland may develop salivary stones.
The usual symptoms are pain and swelling of the affected salivary gland, both of which get worse when salivary flow is stimulated, e.g. with the sight, thought, smell or taste of food, or with hunger or chewing. This is often termed "mealtime syndrome." Inflammation or infection of the gland may develop as a result. Sialolithiasis may also develop because of the presence of existing chronic infection of the glands, dehydration (e.g. use of phenothiazines), Sjögren's syndrome and/or increased local levels of calcium, but in many instances the cause is idiopathic (unknown).
The condition is usually managed by removing the stone, and several different techniques are available. Rarely, removal of the submandibular gland may become necessary in cases of recurrent stone formation. Sialolithiasis is common, accounting for about 50% of all disease occurring in the major salivary glands and causing symptoms in about 0.45% of the general population. Persons aged 30–60 and males are more likely to develop sialolithiasis.
== Classification ==
The term is derived from the Greek words sialon (σίαλον, saliva) and lithos (stone), and the Greek -iasis meaning "process" or "morbid condition". A calculus (plural calculi) is a hard, stone-like concretion that forms within an organ or duct inside the body. They are usually made from mineral salts, and other types of calculi include tonsilloliths (tonsil stones) and renal calculi (kidney stones). Sialolithiasis refers to the formation of calculi within a salivary gland. If a calculus forms in the duct that drains the saliva from a salivary gland into the mouth, then saliva will be trapped in the gland. This may cause painful swelling and inflammation of the gland. Inflammation of a salivary gland is termed sialadenitis. Inflammation associated with blockage of the duct is sometimes termed "obstructive sialadenitis". Because saliva is stimulated to flow more with the thought, sight or smell of food, or with chewing, pain and swelling will often get suddenly worse just before and during a meal ("peri-prandial"), and then slowly decrease after eating; this is termed mealtime syndrome. However, calculi are not the only reason that a salivary gland may become blocked and give rise to mealtime syndrome. Obstructive salivary gland disease, or obstructive sialadenitis, may also occur due to fibromucinous plugs, duct stenosis, foreign bodies, anatomic variations, or malformations of the duct system leading to a mechanical obstruction associated with stasis of saliva in the duct.
Salivary stones may be divided according to which gland they form in. About 85% of stones occur in the submandibular gland, and 5–10% occur in the parotid gland. In about 0–5% of cases, the sublingual gland or a minor salivary gland is affected. In the rare cases where minor glands are involved, calculi are more likely in the minor glands of the buccal mucosa and the maxillary labial mucosa. Submandibular stones are further classified as anterior or posterior in relation to an imaginary transverse line drawn between the mandibular first molar teeth. Stones may be radiopaque, i.e. they will show up on conventional radiographs, or radiolucent, where they will not be visible on radiographs (although some of their effects on the gland may still be visible). They may also be symptomatic or asymptomatic, according to whether they cause any problems or not.
== Signs and symptoms ==
Signs and symptoms are variable and depend largely upon whether the obstruction of the duct is complete or partial, and how much resultant pressure is created within the gland. The development of infection in the gland also influences the signs and symptoms.
Pain, which is intermittent, and may suddenly get worse before mealtimes, and then slowly get better (partial obstruction).
Swelling of the gland, also usually intermittent, often suddenly appearing or increasing before mealtimes, and then slowly going down (partial obstruction).
Tenderness of the involved gland.
Palpable hard lump, if the stone is located near the end of the duct. If the stone is near the submandibular duct orifice, the lump may be felt under the tongue.
Lack of saliva coming from the duct (total obstruction).
Erythema (redness) of the floor of the mouth (infection).
Pus discharging from the duct (infection).
Cervical lymphadenitis (infection).
Bad breath.
Rarely, when stones form in the minor salivary glands, there is usually only slight local swelling in the form of a small nodule and tenderness.
== Causes ==
There are thought to be a series of stages that lead to the formation of a calculus (lithogenesis). Initially, factors such as abnormalities in calcium metabolism, dehydration, reduced salivary flow rate, altered acidity (pH) of saliva caused by oropharyngeal infections, and altered solubility of crystalloids, leading to precipitation of mineral salts, are involved. Other sources state that no systemic abnormality of calcium or phosphate metabolism is responsible.
The next stage involves the formation of a nidus which is successively layered with organic and inorganic material, eventually forming a calcified mass. In about 15–20% of cases the sialolith will not be sufficiently calcified to appear radiopaque on a radiograph, and will therefore be difficult to detect.
Other sources suggest a retrograde theory of lithogenesis, where food debris, bacteria or foreign bodies from the mouth enter the ducts of a salivary gland and are trapped by abnormalities in the sphincter mechanism of the duct opening (the papilla), which are reported in 90% of cases. Fragments of bacteria from salivary calculi were reported to be Streptococci species which are part of the normal oral microbiota and are present in dental plaque.
Stone formation occurs most commonly in the submandibular gland for several reasons. The concentration of calcium in saliva produced by the submandibular gland is twice that of the saliva produced by the parotid gland. The submandibular gland saliva is also relatively alkaline and mucous. The submandibular duct (Wharton's duct) is long, meaning that saliva secretions must travel further before being discharged into the mouth. The duct possesses two bends, the first at the posterior border of the mylohyoid muscle and the second near the duct orifice. The flow of saliva from the submandibular gland is often against gravity due to variations in the location of the duct orifice. The orifice itself is smaller than that of the parotid. These factors all promote slowing and stasis of saliva in the submandibular duct, making the formation of an obstruction with subsequent calcification more likely.
Salivary calculi sometimes are associated with other salivary diseases, e.g. sialoliths occur in two thirds of cases of chronic sialadenitis, although obstructive sialadenitis is often a consequence of sialolithiasis. Gout may also cause salivary stones, although in this case they are composed of uric acid crystals rather than the normal composition of salivary stones.
== Diagnosis ==
Diagnosis is usually made by characteristic history and physical examination. Diagnosis can be confirmed by x-ray (80% of salivary gland calculi are visible on x-ray), by sialogram, or by ultrasound.
== Treatment ==
Some current treatment options are:
Non-invasive:
For small stones, hydration, moist heat therapy, NSAIDs (nonsteroidal anti-inflammatory drugs) occasionally, and having the patient take any food or beverage that is bitter and/or sour. Sucking on citrus fruits, such as a lemon or orange, may increase salivation and promote spontaneous expulsion of stones within the size range of 2–10 mm.
Some stones may be massaged out by a specialist.
Shock wave therapy (Extracorporeal shock wave lithotripsy).
Minimally invasive:
Sialendoscopy
Surgical:
An ENT or oral/maxillofacial surgeon may cannulate the duct to remove the stone (sialolithotomy).
A surgeon may make a small incision near the stone to remove it.
In some cases, when stones continually recur, the offending salivary duct is removed.
Supporting treatment:
To prevent infection while the stone is lodged in the duct, antibiotics are sometimes used.
== Epidemiology ==
The prevalence of salivary stones in the general population is about 1.2% according to post mortem studies, but the prevalence of salivary stones which cause symptoms is about 0.45% in the general population. Sialolithiasis accounts for about 50% of all disease occurring in major salivary glands, and for about 66% of all obstructive salivary gland diseases. Salivary gland stones are twice as common in males as in females. The most common age range in which they occur is between 30 and 60, and they are uncommon in children.
== References ==
== External links == | Wikipedia/Salivary_duct_calculus |
Medical ultrasound includes diagnostic techniques (mainly imaging) using ultrasound, as well as therapeutic applications of ultrasound. In diagnosis, it is used to create an image of internal body structures such as tendons, muscles, joints, blood vessels, and internal organs, to measure some characteristics (e.g., distances and velocities) or to generate an informative audible sound. The usage of ultrasound to produce visual images for medicine is called medical ultrasonography or simply sonography, or echography. The practice of examining pregnant women using ultrasound is called obstetric ultrasonography, and was an early development of clinical ultrasonography. The machine used is called an ultrasound machine, a sonograph or an echograph. The visual image formed using this technique is called an ultrasonogram, a sonogram or an echogram.
Ultrasound is composed of sound waves with frequencies greater than 20,000 Hz, which is the approximate upper threshold of human hearing. Ultrasonic images, also known as sonograms, are created by sending pulses of ultrasound into tissue using a probe. The ultrasound pulses echo off tissues with different reflection properties and are returned to the probe which records and displays them as an image.
A general-purpose ultrasonic transducer may be used for most imaging purposes but some situations may require the use of a specialized transducer. Most ultrasound examination is done using a transducer on the surface of the body, but improved visualization is often possible if a transducer can be placed inside the body. For this purpose, special-use transducers, including transvaginal, endorectal, and transesophageal transducers are commonly employed. At the extreme, very small transducers can be mounted on small diameter catheters and placed within blood vessels to image the walls and disease of those vessels.
== Types ==
The imaging mode refers to probe and machine settings that result in specific dimensions of the ultrasound image.
Several modes of ultrasound are used in medical imaging:
A-mode: Amplitude mode refers to the mode in which the amplitude of the transducer voltage is recorded as a function of two-way travel time of an ultrasound pulse. A single pulse is transmitted through the body and scatters back to the same transducer element. The voltage amplitudes recorded correlate linearly to acoustic pressure amplitudes. A-mode is one-dimensional.
B-mode: In brightness mode, an array of transducer elements scans a plane through the body resulting in a two-dimensional image. Each pixel value of the image correlates to voltage amplitude registered from the backscattered signal. The dimensions of B-mode images are voltage as a function of angle and two-way time.
M-mode: In motion mode, A-mode pulses are emitted in succession. The backscattered signal is converted to lines of bright pixels, whose brightness linearly correlates to backscattered voltage amplitudes. Each next line is plotted adjacent to the previous, resulting in an image that looks like a B-mode image. The M-mode image dimensions are however voltage as a function of two-way time and recording time. This mode is an ultrasound analogy to streak video recording in high-speed photography. As moving tissue transitions produce backscattering, this can be used to determine the displacement of specific organ structures, most commonly the heart.
Most machines convert two-way time to imaging depth using an assumed speed of sound of 1540 m/s. As the actual speed of sound varies greatly in different tissue types, an ultrasound image is therefore not a true tomographic representation of the body.
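As a concrete illustration of that conversion, the following is a minimal sketch in Python (not taken from any scanner's software; the echo time and the alternative tissue speed are illustrative assumptions) showing how a round-trip echo time maps to depth under the 1540 m/s convention, and how the mapped depth would shift if the true speed of sound were different.

# Minimal sketch: converting the two-way (round-trip) travel time of an echo
# into an imaging depth using the assumed 1540 m/s speed of sound.
ASSUMED_SPEED_OF_SOUND = 1540.0  # m/s, conventional soft-tissue average


def echo_depth_m(two_way_time_s: float, c: float = ASSUMED_SPEED_OF_SOUND) -> float:
    """Depth of the reflector: the pulse travels down and back, so halve the path."""
    return c * two_way_time_s / 2.0


if __name__ == "__main__":
    t = 65e-6  # an echo returning after 65 microseconds
    print(f"{echo_depth_m(t) * 100:.1f} cm assumed depth")            # ~5.0 cm
    # If the true tissue speed were 1450 m/s (e.g. fat), the real depth differs:
    print(f"{echo_depth_m(t, c=1450.0) * 100:.1f} cm depth in slower tissue")  # ~4.7 cm

The mismatch in the second line is exactly the de-focusing and distortion effect described later in this article when the constant-speed assumption does not hold.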
Three-dimensional imaging is done by combining B-mode images, using dedicated rotating or stationary probes. This has also been referred to as C-mode.
An imaging technique refers to a method of signal generation and processing that results in a specific application.
Most imaging techniques operate in B-mode.
Doppler sonography: This imaging technique makes use of the Doppler effect in detection and measuring moving targets, typically blood.
Harmonic imaging: the backscattered signal from tissue is filtered to retain only frequency content of at least twice the centre frequency of the transmitted ultrasound. Harmonic imaging is used for perfusion detection when using ultrasound contrast agents and for the detection of tissue harmonics. Common pulse schemes for the creation of a harmonic response without the need for real-time Fourier analysis are pulse inversion and power modulation.
B-flow is an imaging technique that digitally highlights moving reflectors (mainly red blood cells) while suppressing the signals from the surrounding stationary tissue. It aims to visualize flowing blood and surrounding stationary tissues simultaneously. It is thus an alternative or complement to Doppler ultrasonography in visualizing blood flow.
Therapeutic ultrasound aimed at a specific tumor or calculus is not an imaging mode. However, for positioning a treatment probe to focus on a specific region of interest, A-mode and B-mode are typically used, often during treatment.
== Advantages and drawbacks ==
Compared to other medical imaging modalities, ultrasound has several advantages. It provides images in real-time, is portable, and can consequently be brought to the bedside. It is substantially lower in cost than other imaging strategies. Drawbacks include various limits on its field of view, the need for patient cooperation, dependence on patient physique, difficulty imaging structures obscured by bone, air or gases, and the necessity of a skilled operator, usually with professional training.
== Uses ==
Sonography (ultrasonography) is widely used in medicine. It is possible to perform both diagnosis and therapeutic procedures, using ultrasound to guide interventional procedures such as biopsies or to drain collections of fluid, which can be both diagnostic and therapeutic. Sonographers are medical professionals who perform scans which are traditionally interpreted by radiologists, physicians who specialize in the application and interpretation of medical imaging modalities, or by cardiologists in the case of cardiac ultrasonography (echocardiography). Sonography is effective for imaging soft tissues of the body. Superficial structures such as muscle, tendon, testis, breast, thyroid and parathyroid glands, and the neonatal brain are imaged at higher frequencies (7–18 MHz), which provide better linear (axial) and horizontal (lateral) resolution. Deeper structures such as liver and kidney are imaged at lower frequencies (1–6 MHz) with lower axial and lateral resolution as a price of deeper tissue penetration.
=== Anesthesiology ===
In anesthesiology, ultrasound is commonly used to guide the placement of needles when injecting local anesthetic solutions in the proximity of nerves identified within the ultrasound image (nerve block). It is also used for vascular access such as cannulation of large central veins and for difficult arterial cannulation. Transcranial Doppler is frequently used by neuro-anesthesiologists for obtaining information about flow-velocity in the basal cerebral vessels.
=== Angiology (vascular) ===
In angiology or vascular medicine, duplex ultrasound (B Mode imaging combined with Doppler flow measurement) is used to diagnose arterial and venous disease. This is particularly important in potential neurologic problems, where carotid ultrasound is commonly used for assessing blood flow and potential or suspected stenosis in the carotid arteries, while transcranial Doppler is used for imaging flow in the intracerebral arteries.
Intravascular ultrasound (IVUS) uses a specially designed catheter with a miniaturized ultrasound probe attached to its distal end, which is then threaded inside a blood vessel. The proximal end of the catheter is attached to computerized ultrasound equipment and allows the application of ultrasound technology, such as a piezoelectric transducer or capacitive micromachined ultrasonic transducer, to visualize the endothelium of blood vessels in living individuals.
In the case of the common and potentially serious problem of blood clots in the deep veins of the leg, ultrasound plays a key diagnostic role, while ultrasonography of chronic venous insufficiency of the legs focuses on more superficial veins to assist with planning of suitable interventions to relieve symptoms or improve cosmetics.
=== Cardiology (heart) ===
Echocardiography is an essential tool in cardiology, assisting in evaluation of heart valve function, such as stenosis or insufficiency, strength of cardiac muscle contraction, and hypertrophy or dilatation of the main chambers (ventricles and atria).
=== Emergency medicine ===
Point of care ultrasound has many applications in emergency medicine. These include differentiating cardiac from pulmonary causes of acute breathlessness, and the Focused Assessment with Sonography for Trauma (FAST) exam, extended to include assessment for significant hemoperitoneum or pericardial tamponade after trauma (EFAST). Other uses include assisting with differentiating causes of abdominal pain such as gallstones and kidney stones. Emergency Medicine Residency Programs have a substantial history of promoting the use of bedside ultrasound during physician training.
=== Gastroenterology/Colorectal surgery ===
Both abdominal and endoanal ultrasound are frequently used in gastroenterology and colorectal surgery. In abdominal sonography, the major organs of the abdomen such as the pancreas, aorta, inferior vena cava, liver, gall bladder, bile ducts, kidneys, and spleen may be imaged. However, sound waves may be blocked by gas in the bowel and attenuated to differing degrees by fat, sometimes limiting diagnostic capabilities. The appendix can sometimes be seen when inflamed (e.g.: appendicitis) and ultrasound is the initial imaging choice, avoiding radiation if possible, although it frequently needs to be followed by other imaging methods such as CT. Endoanal ultrasound is used particularly in the investigation of anorectal symptoms such as fecal incontinence or obstructed defecation. It images the immediate perianal anatomy and is able to detect occult defects such as tearing of the anal sphincter.
=== Hepatology ===
Ultrasonography of liver tumors allows for both detection and characterization.
Ultrasound imaging studies are often obtained during the evaluation process of Fatty liver disease. Ultrasonography reveals a "bright" liver with increased echogenicity. Pocket-sized ultrasound devices might be used as point-of-care screening tools to diagnose liver steatosis.
=== Gynecology and obstetrics ===
Gynecologic ultrasonography examines female pelvic organs (specifically the uterus, ovaries, and fallopian tubes) as well as the bladder, adnexa, and pouch of Douglas. It uses transducers designed for approaches through the lower abdominal wall, curvilinear and sector, and specialty transducers such as transvaginal ultrasound.
Obstetrical sonography was originally developed in the late 1950s and 1960s by Sir Ian Donald and is commonly used during pregnancy to check the development and presentation of the fetus. It can be used to identify many conditions potentially harmful to the mother or baby that might otherwise remain undiagnosed, or be diagnosed late, in the absence of sonography. It is currently believed that the risk of delayed diagnosis is greater than the small risk, if any, associated with undergoing an ultrasound scan. However, its use for non-medical purposes such as fetal "keepsake" videos and photos is discouraged.
Obstetric ultrasound is primarily used to:
Date the pregnancy (gestational age)
Confirm fetal viability
Determine location of fetus, intrauterine vs ectopic
Check the location of the placenta in relation to the cervix
Check for the number of fetuses (multiple pregnancy)
Check for major physical abnormalities.
Assess fetal growth (for evidence of intrauterine growth restriction (IUGR))
Check for fetal movement and heartbeat.
Determine the sex of the baby
According to the European Committee of Medical Ultrasound Safety (ECMUS)
Ultrasonic examinations should only be performed by competent personnel who are trained and updated in safety matters. Ultrasound produces heating, pressure changes and mechanical disturbances in tissue. Diagnostic levels of ultrasound can produce temperature rises that are hazardous to sensitive organs and the embryo/fetus. Biological effects of non-thermal origin have been reported in animals but, to date, no such effects have been demonstrated in humans, except when a micro-bubble contrast agent is present. Nonetheless, care should be taken to use low power settings and avoid pulsed wave scanning of the fetal brain unless specifically indicated in high risk pregnancies.
Figures released for the period 2005–2006 by the UK Government (Department of Health) show that non-obstetric ultrasound examinations constituted more than 65% of the total number of ultrasound scans conducted.
=== Hemodynamics (blood circulation) ===
Blood velocity can be measured in various blood vessels, such as middle cerebral artery or descending aorta, by relatively inexpensive and low risk ultrasound Doppler probes attached to portable monitors. These provide non-invasive or transcutaneous (non-piercing) minimal invasive blood flow assessment. Common examples are transcranial Doppler, esophageal Doppler and suprasternal Doppler.
=== Otolaryngology (head and neck) ===
Most structures of the neck, including the thyroid and parathyroid glands, lymph nodes, and salivary glands, are well-visualized by high-frequency ultrasound with exceptional anatomic detail. Ultrasound is the preferred imaging modality for thyroid tumors and lesions, and its use is important in the evaluation, preoperative planning, and postoperative surveillance of patients with thyroid cancer. Many other benign and malignant conditions in the head and neck can be differentiated, evaluated, and managed with the help of diagnostic ultrasound and ultrasound-guided procedures.
=== Neonatology ===
In neonatology, transcranial Doppler can be used for basic assessment of intracerebral structural abnormalities, suspected hemorrhage, ventriculomegaly or hydrocephalus and anoxic insults (periventricular leukomalacia). It can be performed through the soft spots in the skull of a newborn infant (Fontanelle) until these completely close at about 1 year of age by which time they have formed a virtually impenetrable acoustic barrier to ultrasound. The most common site for cranial ultrasound is the anterior fontanelle. The smaller the fontanelle, the more the image is compromised.
Lung ultrasound has been found to be useful in diagnosing common neonatal respiratory diseases such as transient tachypnea of the newborn, respiratory distress syndrome, congenital pneumonia, meconium aspiration syndrome, and pneumothorax. A neonatal lung ultrasound score, first described by Brat et al., has been found to highly correlate with oxygenation in the newborn.
=== Ophthalmology (eyes) ===
In ophthalmology and optometry, there are two major forms of eye exam using ultrasound:
A-scan ultrasound biometry is commonly referred to as an A-scan (amplitude scan). A-mode provides data on the length of the eye, which is a major determinant in common sight disorders, especially for determining the power of an intraocular lens after cataract extraction.
B-scan ultrasonography, or brightness scan, is a B-mode scan that produces a cross-sectional view of the eye and the orbit. It is an essential tool in ophthalmology for diagnosing and managing a wide array of conditions affecting the posterior segment of the eye. It is non-invasive and uses frequencies of 10–15 MHz. It is often used in conjunction with other imaging techniques (like OCT or fluorescein angiography) for a more comprehensive evaluation of ocular conditions.
=== Pulmonology (lungs) ===
Ultrasound is used to assess the lungs in a variety of settings including critical care, emergency medicine, trauma surgery, as well as general medicine. This imaging modality is used at the bedside or examination table to evaluate a number of different lung abnormalities as well as to guide procedures such as thoracentesis (drainage of pleural fluid (effusion)), needle aspiration biopsy, and catheter placement. Although air present in the lungs does not allow good penetration of ultrasound waves, interpretation of specific artifacts created on the lung surface can be used to detect abnormalities.
==== Lung ultrasound basics ====
The Normal Lung Surface: The lung surface is composed of visceral and parietal pleura. These two surfaces are typically pushed together and make up the pleural line, which is the basis of lung (or pleural) ultrasound. This line is visible less than a centimeter below the rib line in most adults. On ultrasound, it is visualized as a hyperechoic (bright white) horizontal line if the ultrasound probe is applied perpendicularly to the skin.
Artifacts: Lung ultrasound relies on artifacts, which would otherwise be considered a hindrance in imaging. Air blocks the ultrasound beam and thus visualizing healthy lung tissue itself with this mode of imaging is not practical. Consequently, physicians and sonographers have learned to recognize patterns that ultrasound beams create when imaging healthy versus diseased lung tissue. Three commonly seen and utilized artifacts in lung ultrasound include lung sliding, A-lines, and B-lines.
Lung sliding: The presence of lung sliding, which indicates the shimmering of the pleural line that occurs with movement of the visceral and parietal pleura against one another with respiration (sometimes described as 'ants marching'), is the most important finding in normal aerated lung. Lung sliding indicates both that the lung is present at the chest wall and that the lung is functioning.
A-lines: When the ultrasound beam makes contact with the pleural line, it is reflected back creating a bright white horizontal line. The subsequent reverberation artifacts that appear as equally spaced horizontal lines deep to the pleura are A-lines. Ultimately, A-lines are a reflection of the ultrasound beam from the pleura with the space between A-lines corresponding to the distance between the parietal pleura and the skin surface. A-lines indicate the presence of air, which means that these artifacts can be present in normal healthy lung (and also in patients with pneumothorax).
B-lines: B-lines are also reverberation artifacts. They are visualized as hyperechoic vertical lines extending from the pleura to the edge of the ultrasound screen. These lines are sharply defined and laser-like and typically do not fade as they progress down the screen. A few B-lines that move along with the sliding pleura can be seen in normal lung due to acoustic impedance differences between water and air. However, excessive B-lines (three or more) are abnormal and are typically indicative of underlying lung pathology.
==== Lung pathology assessed with ultrasound ====
Pulmonary edema: Lung ultrasound has been shown to be very sensitive for the detection of pulmonary edema. It allows for improvement in diagnosis and management of critically ill patients, particularly when used in combination with echocardiography. The sonographic feature that is present in pulmonary edema is multiple B-lines. B-lines can occur in a healthy lung; however, the presence of 3 or more in the anterior or lateral lung regions is always abnormal. In pulmonary edema, B-lines indicate an increase in the amount of water contained in the lungs outside of the pulmonary vasculature. B-lines can also be present in a number of other conditions including pneumonia, pulmonary contusion, and lung infarction. Additionally, it is important to note that there are multiple types of interactions between the pleural surface and the ultrasound wave that can generate artifacts with some similarity to B-lines but which do not have pathologic significance.
Pneumothorax: In clinical settings when pneumothorax is suspected, lung ultrasound can aid in diagnosis. In pneumothorax, air is present between the two layers of the pleura and lung sliding on ultrasound is therefore absent. The negative predictive value for lung sliding on ultrasound is reported as 99.2–100% – briefly, if lung sliding is present, a pneumothorax is effectively ruled out. The absence of lung sliding, however, is not necessarily specific for pneumothorax as there are other conditions that also cause this finding including acute respiratory distress syndrome, lung consolidations, pleural adhesions, and pulmonary fibrosis.
Pleural effusion: Lung ultrasound is a cost-effective, safe, and non-invasive imaging method that can aid in the prompt visualization and diagnosis of pleural effusions. Effusions can be diagnosed by a combination of physical exam, percussion, and auscultation of the chest. However, these exam techniques can be complicated by a variety of factors including the presence of mechanical ventilation, obesity, or patient positioning, all of which reduce the sensitivity of the physical exam. Consequently, lung ultrasound can be an additional tool to augment plain chest X-ray and chest CT. Pleural effusions on ultrasound appear as structural images within the thorax rather than as an artifact. They will typically have four distinct borders including the pleural line, two rib shadows, and a deep border. In critically ill patients with pleural effusion, ultrasound may guide procedures including needle insertion, thoracentesis, and chest-tube insertion.
Lung cancer staging: In pulmonology, endobronchial ultrasound (EBUS) probes are applied to standard flexible endoscopic probes and used by pulmonologists to allow for direct visualization of endobronchial lesions and lymph nodes prior to transbronchial needle aspiration. Among its many uses, EBUS aids in lung cancer staging by allowing for lymph node sampling without the need for major surgery.
COVID-19: Lung ultrasound has proved useful in the diagnosis of COVID-19 especially in cases where other investigations are not available.
=== Urinary tract ===
Ultrasound is routinely used in urology to determine the amount of fluid retained in a patient's bladder. In a pelvic sonogram, images include the uterus and ovaries or urinary bladder in females. In males, a sonogram will provide information about the bladder, prostate, or testicles (for example to urgently distinguish epididymitis from testicular torsion). In young males, it is used to distinguish more benign testicular masses (varicocele or hydrocele) from testicular cancer, which is curable but must be treated to preserve health and fertility. There are two methods of performing pelvic sonography – externally or internally. The internal pelvic sonogram is performed either transvaginally (in a woman) or transrectally (in a man). Sonographic imaging of the pelvic floor can produce important diagnostic information regarding the precise relationship of abnormal structures with other pelvic organs and it represents a useful hint to treat patients with symptoms related to pelvic prolapse, double incontinence and obstructed defecation. It is also used to diagnose and, at higher frequencies, to treat (break up) kidney stones or kidney crystals (nephrolithiasis).
=== Penis and scrotum ===
Scrotal ultrasonography is used in the evaluation of testicular pain, and can help identify solid masses.
Ultrasound is an excellent method for the study of the penis, such as indicated in trauma, priapism, erectile dysfunction or suspected Peyronie's disease.
=== Musculoskeletal ===
Musculoskeletal ultrasound is used to examine tendons, muscles, nerves, ligaments, soft tissue masses, and bone surfaces.
It is helpful in diagnosing ligament sprains, muscle strains and joint pathology. It is an alternative or supplement to X-ray imaging in detecting fractures of the wrist, elbow and shoulder for patients up to 12 years (fracture sonography).
Quantitative ultrasound is an adjunct musculoskeletal test for myopathic disease in children; estimates of lean body mass in adults; proxy measures of muscle quality (i.e., tissue composition) in older adults with sarcopenia
Ultrasound can also be used for needle guidance in muscle or joint injections, as in ultrasound-guided hip joint injection.
=== Kidneys ===
In nephrology, ultrasonography of the kidneys is essential in the diagnosis and management of kidney-related diseases. The kidneys are easily examined, and most pathological changes are distinguishable with ultrasound. It is an accessible, versatile, relatively economic, and fast aid for decision-making in patients with renal symptoms and for guidance in renal intervention. Using B-mode imaging, assessment of renal anatomy is easily performed, and US is often used as image guidance for renal interventions. Furthermore, novel applications in renal US have been introduced with contrast-enhanced ultrasound (CEUS), elastography and fusion imaging. However, renal US has certain limitations, and other modalities, such as CT (CECT) and MRI, should be considered for supplementary imaging in assessing renal disease.
=== Venous access ===
Intravenous access, for the collection of blood samples to assist in diagnosis or laboratory investigation including blood culture, or for administration of intravenous fluids for fluid maintenance or replacement or for blood transfusion in sicker patients, is a common medical procedure. The need for intravenous access occurs in the outpatient laboratory, in the inpatient hospital units, and most critically in the Emergency Room and Intensive Care Unit. In many situations, intravenous access may be required repeatedly or over a significant time period. In these latter circumstances, a needle with an overlying catheter is introduced into the vein and the catheter is then inserted securely into the vein while the needle is withdrawn.
The chosen veins are most frequently selected from the arm, but in challenging situations, a deeper vein from the neck (external jugular vein) or upper arm (subclavian vein) may need to be used.
There are many reasons why the selection of a suitable vein may be problematic.
These include, but are not limited to, obesity, previous injury to veins from inflammatory reaction to previous 'blood draws', previous injury to veins from recreational drug use.
In these challenging situations, the insertion of a catheter into a vein has been greatly assisted by the use of ultrasound. The ultrasound unit may be 'cart-based' or 'handheld', using a linear transducer with a frequency of 10 to 15 megahertz. In most circumstances, choice of vein will be limited by the requirement that the vein is within 1.5 cm of the skin surface.
The transducer may be placed longitudinally or transversely over the chosen vein.
Ultrasound training for intravenous cannulation is offered in most ultrasound training programs.
== Mechanism ==
The creation of an image from sound has three steps – transmitting a sound wave, receiving echoes, and interpreting those echoes.
=== Producing a sound wave ===
A sound wave is typically produced by a piezoelectric transducer encased in a plastic housing. Strong, short electrical pulses from the ultrasound machine drive the transducer at the desired frequency. The frequencies can vary between 1 and 18 MHz, though frequencies up to 50–100 megahertz have been used experimentally in a technique known as biomicroscopy in special regions, such as the anterior chamber of the eye.
Older technology transducers focused their beam with physical lenses. Contemporary technology transducers use digital antenna array techniques (piezoelectric elements in the transducer produce echoes at different times) to enable the ultrasound machine to change the direction and depth of focus. Near the transducer, the width of the ultrasound beam is almost equal to the width of the transducer; after reaching a certain distance from the transducer (the near zone length or Fresnel zone), the beam width narrows to half of the transducer width, and beyond that the width increases (the far zone or Fraunhofer zone), where the lateral resolution decreases. Therefore, the wider the transducer and the higher the frequency of ultrasound, the longer the Fresnel zone, and the lateral resolution can be maintained at a greater depth from the transducer. Ultrasound waves travel in pulses; a shorter pulse length therefore requires a higher bandwidth (a greater number of frequencies) to constitute the ultrasound pulse.
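A rough numeric sketch of the relationship just described follows. It uses the commonly quoted approximation for the near zone length of an unfocused transducer, N ≈ D²/(4λ); the formula, the 10 mm aperture and the example frequencies are assumptions introduced here for illustration, not values taken from this article.

# Sketch: near zone (Fresnel zone) length N ~ D^2 / (4 * wavelength),
# illustrating why a wider aperture and a higher frequency keep the
# beam narrow to a greater depth.
SPEED_OF_SOUND = 1540.0  # m/s in soft tissue (assumed)


def near_zone_length_m(aperture_m: float, frequency_hz: float,
                       c: float = SPEED_OF_SOUND) -> float:
    wavelength = c / frequency_hz
    return aperture_m ** 2 / (4.0 * wavelength)


if __name__ == "__main__":
    for f_mhz in (3.5, 7.5):
        n = near_zone_length_m(aperture_m=0.01, frequency_hz=f_mhz * 1e6)  # 10 mm aperture
        print(f"{f_mhz} MHz, 10 mm aperture: near zone ~ {n * 100:.1f} cm")
        # ~5.7 cm at 3.5 MHz versus ~12.2 cm at 7.5 MHz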
As stated, the sound is focused either by the shape of the transducer, a lens in front of the transducer, or a complex set of control pulses from the ultrasound scanner, in the beamforming or spatial filtering technique. This focusing produces an arc-shaped sound wave from the face of the transducer. The wave travels into the body and comes into focus at a desired depth.
Materials on the face of the transducer enable the sound to be transmitted efficiently into the body (often a rubbery coating, a form of impedance matching). In addition, a water-based gel is placed between the patient's skin and the probe to facilitate ultrasound transmission into the body. This is because air causes total reflection of ultrasound, impeding its transmission into the body.
The sound wave is partially reflected from the layers between different tissues or scattered from smaller structures. Specifically, sound is reflected anywhere where there are acoustic impedance changes in the body: e.g. blood cells in blood plasma, small structures in organs, etc. Some of the reflections return to the transducer.
=== Receiving the echoes ===
The return of the sound wave to the transducer results in the same process as sending the sound wave, in reverse. The returned sound wave vibrates the transducer and the transducer turns the vibrations into electrical pulses that travel to the ultrasonic scanner where they are processed and transformed into a digital image.
=== Forming the image ===
To make an image, the ultrasound scanner must determine two characteristics from each received echo:
How long it took the echo to be received from when the sound was transmitted. (Time and distance are equivalent.)
How strong the echo was.
Once the ultrasonic scanner determines these two, it can locate which pixel in the image to illuminate and with what intensity.
Transforming the received signal into a digital image may be explained by using a blank spreadsheet as an analogy. First picture a long, flat transducer at the top of the sheet. Send pulses down the 'columns' of the spreadsheet (A, B, C, etc.). Listen at each column for any return echoes. When an echo is heard, note how long it took for the echo to return. The longer the wait, the deeper the row (1,2,3, etc.). The strength of the echo determines the brightness setting for that cell (white for a strong echo, black for a weak echo, and varying shades of grey for everything in between.) When all the echoes are recorded on the sheet, a greyscale image has been accomplished.
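The spreadsheet analogy can be made concrete with a small toy script. The echo times, strengths and grid size below are invented example data, not a real scanner algorithm; the point is only to show how arrival time picks the row (depth) and echo strength sets the brightness.

# Toy illustration of the "spreadsheet" analogy: each column is one scan line,
# each echo's arrival time selects the row (depth) and its strength the brightness.
import numpy as np

SPEED_OF_SOUND = 1540.0   # m/s, assumed
ROW_SPACING_M = 0.001     # each "row" of the sheet represents 1 mm of depth


def paint_column(image: np.ndarray, column: int, echoes) -> None:
    """echoes: list of (two_way_time_s, strength_0_to_1) for one scan line."""
    for two_way_time, strength in echoes:
        depth_m = SPEED_OF_SOUND * two_way_time / 2.0
        row = int(depth_m / ROW_SPACING_M)
        if row < image.shape[0]:
            image[row, column] = strength   # white = strong echo, black = none


# Blank "spreadsheet": 60 rows (depths) x 3 columns (scan lines A, B, C).
image = np.zeros((60, 3))
paint_column(image, 0, [(26e-6, 0.9), (52e-6, 0.4)])   # echoes at ~2 cm and ~4 cm
paint_column(image, 1, [(39e-6, 0.7)])                  # one echo at ~3 cm
paint_column(image, 2, [(65e-6, 1.0)])                  # one echo at ~5 cm
print(np.argwhere(image > 0))   # rows/columns that were "brightened"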
In modern ultrasound systems, images are derived from the combined reception of echoes by multiple elements, rather than a single one. These elements in the transducer array work together to receive signals, a process essential for optimizing the ultrasonic beam's focus and producing detailed images. One predominant method for this is "delay-and-sum" beamforming. The time delay applied to each element is calculated based on the geometrical relationship between the imaging point, the transducer, and receiver positions. By integrating these time-adjusted signals, the system pinpoints focus onto specific tissue regions, enhancing image resolution and clarity. The utilization of multiple element reception combined with the delay-and-sum principles underpins the high-quality images characteristic of contemporary ultrasound scans.
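To make the delay-and-sum idea tangible, the following is a deliberately simplified sketch: real scanners add apodization, sample interpolation, dynamic receive focusing and transmit delays, all of which are omitted here. The element positions, sampling rate and focal point are illustrative assumptions.

# Simplified delay-and-sum beamforming for a single image point.
import numpy as np

C = 1540.0   # m/s, assumed speed of sound
FS = 40e6    # Hz, sampling rate of each element's recorded trace (assumed)


def delay_and_sum(rf_data: np.ndarray, element_x: np.ndarray, focus_xz) -> float:
    """Return one beamformed sample for the point focus_xz = (x, z).

    rf_data: (n_elements, n_samples) echo traces, one row per array element.
    element_x: x-position of each element in metres; the array sits at depth z = 0.
    """
    fx, fz = focus_xz
    out = 0.0
    for trace, ex in zip(rf_data, element_x):
        # One-way distance from the focal point back to this element; the
        # (common) transmit delay is ignored here for brevity.
        dist = np.hypot(fx - ex, fz)
        sample = int(round(dist / C * FS))
        if sample < trace.shape[0]:
            out += trace[sample]          # align in time, then sum coherently
    return out


# Tiny synthetic check: 8 elements, echoes placed so they align at the focus.
element_x = np.linspace(-3.5e-3, 3.5e-3, 8)
focus = (0.0, 0.03)                        # a point 3 cm deep, on axis
rf = np.zeros((8, 4096))
for i, ex in enumerate(element_x):
    s = int(round(np.hypot(focus[0] - ex, focus[1]) / C * FS))
    rf[i, s] = 1.0
print(delay_and_sum(rf, element_x, focus))  # ~8.0: all elements add in phase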
=== Displaying the image ===
Images from the ultrasound scanner are transferred and displayed using the DICOM standard. Normally, very little post processing is applied.
== Sound in the body ==
Ultrasonography (sonography) uses a probe containing multiple acoustic transducers to send pulses of sound into a material. Whenever a sound wave encounters a material with a different density (acoustical impedance), some of the sound wave is scattered but part is reflected back to the probe and is detected as an echo. The time it takes for the echo to travel back to the probe is measured and used to calculate the depth of the tissue interface causing the echo. The greater the difference between acoustic impedances, the larger the echo is. If the pulse hits gases or solids, the density difference is so great that most of the acoustic energy is reflected and it becomes impossible to progress further.
The frequencies used for medical imaging are generally in the range of 1 to 18 MHz. Higher frequencies have a correspondingly smaller wavelength, and can be used to make more detailed sonograms. However, the attenuation of the sound wave is increased at higher frequencies, so penetration of deeper tissues necessitates a lower frequency (3–5 MHz).
Penetrating deep into the body with sonography is difficult. Some acoustic energy is lost each time an echo is formed, but most of it (approximately 0.5 dB per centimetre of depth per megahertz, i.e. 0.5 dB/(cm·MHz)) is lost from acoustic absorption. (See Acoustic attenuation for further details on modeling of acoustic attenuation and absorption.)
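A back-of-the-envelope calculation using that figure shows why deeper structures are imaged at the lower frequencies mentioned above. The depths and frequencies below are illustrative assumptions, and the 0.5 dB/(cm·MHz) value is applied per centimetre of imaging depth as stated here; exact one-way versus round-trip conventions vary between texts.

# Sketch: approximate absorption loss from the ~0.5 dB/(cm*MHz) rule of thumb.
ATTENUATION_DB_PER_CM_PER_MHZ = 0.5


def absorption_loss_db(depth_cm: float, frequency_mhz: float) -> float:
    return ATTENUATION_DB_PER_CM_PER_MHZ * depth_cm * frequency_mhz


if __name__ == "__main__":
    for f in (3.5, 10.0):
        print(f"{f:>4} MHz at 10 cm depth: ~{absorption_loss_db(10, f):.0f} dB lost")
    # ~18 dB at 3.5 MHz versus ~50 dB at 10 MHz: high frequencies simply
    # cannot reach deep structures with a usable signal.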
The speed of sound varies as it travels through different materials, and is dependent on the acoustical impedance of the material. However, the sonographic instrument assumes that the acoustic velocity is constant at 1540 m/s. An effect of this assumption is that in a real body with non-uniform tissues, the beam becomes somewhat de-focused and image resolution is reduced.
To generate a 2-D image, the ultrasonic beam is swept. A transducer may be swept mechanically by rotating or swinging or a 1-D phased array transducer may be used to sweep the beam electronically. The received data is processed and used to construct the image. The image is then a 2-D representation of the slice into the body.
3-D images can be generated by acquiring a series of adjacent 2-D images. Commonly a specialized probe that mechanically scans a conventional 2-D image transducer is used. However, since the mechanical scanning is slow, it is difficult to make 3D images of moving tissues. Recently, 2-D phased array transducers that can sweep the beam in 3-D have been developed. These can image faster and can even be used to make live 3-D images of a beating heart.
Doppler ultrasonography is used to study blood flow and muscle motion. The different detected speeds are represented in color for ease of interpretation, for example leaky heart valves: the leak shows up as a flash of unique color. Colors may alternatively be used to represent the amplitudes of the received echoes.
== Expansions ==
An additional expansion of ultrasound is bi-planar ultrasound, in which the probe has two 2D planes perpendicular to each other, providing more efficient localization and detection. Furthermore, an omniplane probe can rotate 180° to obtain multiple images. In 3D ultrasound, many 2D planes are digitally added together to create a 3-dimensional image of the object.
=== Doppler ultrasonography ===
Doppler ultrasonography employs the Doppler effect to assess whether structures (usually blood) are moving towards or away from the probe, and their relative velocity. By calculating the frequency shift from a particular sample volume, for example flow in an artery or a jet of blood flow over a heart valve, its speed and direction can be determined and visualized (a worked sketch of this calculation follows the list below). Color Doppler is the measurement of velocity by color scale. Color Doppler images are generally combined with gray scale (B-mode) images to display duplex ultrasonography images. Uses include:
Doppler echocardiography is the use of Doppler ultrasonography to examine the heart. An echocardiogram can, within certain limits, produce accurate assessment of the direction of blood flow and the velocity of blood and cardiac tissue at any arbitrary point using the Doppler effect. Velocity measurements allow assessment of cardiac valve areas and function, abnormal communications between the left and right side of the heart, leaking of blood through the valves (valvular regurgitation), and calculation of the cardiac output and E/A ratio (a measure of diastolic dysfunction). Contrast-enhanced ultrasound using gas-filled microbubble contrast media can be used to improve velocity or other flow-related measurements of interest.
Transcranial Doppler (TCD) and transcranial color Doppler (TCCD), measure the velocity of blood flow through the brain's blood vessels through the cranium. They are useful in the diagnosis of emboli, stenosis, vasospasm from a subarachnoid hemorrhage (bleeding from a ruptured aneurysm), and other problems.
Doppler fetal monitors use the Doppler effect to detect the fetal heartbeat during prenatal care. These are hand-held, and some models also display the heart rate in beats per minute (BPM). Use of this monitor is sometimes known as Doppler auscultation. The Doppler fetal monitor is commonly referred to simply as a Doppler or fetal Doppler and provides information similar to that provided by a fetal stethoscope.
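The velocity estimate behind these applications comes from the standard Doppler relation v = c·Δf / (2·f0·cos θ). The sketch below applies it with invented example numbers (a 4 MHz transmit frequency, a 2.6 kHz shift, a 60° beam-to-flow angle); none of these are taken from a particular examination.

# Sketch: blood velocity from a measured Doppler frequency shift.
import math

C = 1540.0  # m/s, assumed speed of sound in soft tissue


def velocity_from_doppler_shift(shift_hz: float, transmit_hz: float,
                                angle_deg: float) -> float:
    """Velocity (m/s) of flow along the beam-to-flow angle for a given shift."""
    return C * shift_hz / (2.0 * transmit_hz * math.cos(math.radians(angle_deg)))


if __name__ == "__main__":
    v = velocity_from_doppler_shift(shift_hz=2600, transmit_hz=4e6, angle_deg=60)
    print(f"~{v:.2f} m/s")   # roughly 1 m/s for these example values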
=== Contrast ultrasonography (ultrasound contrast imaging) ===
A contrast medium for medical ultrasonography is a formulation of encapsulated gaseous microbubbles to increase echogenicity of blood, discovered by Dr. Raymond Gramiak in 1968 and named contrast-enhanced ultrasound. This contrast medical imaging modality is used throughout the world, for echocardiography in particular in the United States and for ultrasound radiology in Europe and Asia.
Microbubble-based contrast media are administered intravenously into the patient's bloodstream during the ultrasonography examination. Due to their size, the microbubbles remain confined in blood vessels without extravasating towards the interstitial fluid. An ultrasound contrast medium is therefore purely intravascular, making it an ideal agent to image organ microvasculature for diagnostic purposes. A typical clinical use of contrast ultrasonography is detection of a hypervascular metastatic tumor, which exhibits a contrast uptake (kinetics of microbubble concentration in the blood circulation) faster than that of the healthy biological tissue surrounding the tumor. Other clinical applications using contrast exist, as in echocardiography to improve delineation of the left ventricle for visualizing contractility of heart muscle after a myocardial infarction. Finally, applications in quantitative perfusion (relative measurement of blood flow) have emerged for identifying early patient response to anti-cancer drug treatment (methodology and clinical study by Dr. Nathalie Lassau in 2011), enabling the best oncological therapeutic options to be determined.
In oncological practice of medical contrast ultrasonography, clinicians use 'parametric imaging of vascular signatures' invented by Dr. Nicolas Rognin in 2010. This method is conceived as a cancer aided diagnostic tool, facilitating characterization of a suspicious tumor (malignant versus benign) in an organ. This method is based on medical computational science to analyze a time sequence of ultrasound contrast images, a digital video recorded in real-time during patient examination. Two consecutive signal processing steps are applied to each pixel of the tumor:
calculation of a vascular signature (contrast uptake difference with respect to healthy tissue surrounding the tumor);
automatic classification of the vascular signature into a unique parameter, the latter coded in one of the four following colors:
green for continuous hyper-enhancement (contrast uptake higher than healthy tissue one),
blue for continuous hypo-enhancement (contrast uptake lower than healthy tissue one),
red for fast hyper-enhancement (contrast uptake before healthy tissue one) or
yellow for fast hypo-enhancement (contrast uptake after healthy tissue one).
Once signal processing in each pixel is completed, a color spatial map of the parameter is displayed on a computer monitor, summarizing all vascular information of the tumor in a single image called a parametric image (see the last figure of the press article for clinical examples). This parametric image is interpreted by clinicians based on the predominant colorization of the tumor: red indicates a suspicion of malignancy (risk of cancer), while green or yellow indicates a high probability of benignity. In the first case (suspected malignant tumor), the clinician typically prescribes a biopsy to confirm the diagnosis, or a CT scan examination as a second opinion. In the second case (near-certain benign tumor), only follow-up is needed, with a contrast ultrasonography examination a few months later. The main clinical benefits are avoiding a biopsy of a benign tumor (with the inherent risks of an invasive procedure) or a CT scan examination exposing the patient to X-ray radiation. The parametric imaging of vascular signatures method proved to be effective in humans for characterization of tumors in the liver. In a cancer screening context, this method might be potentially applicable to other organs such as the breast or prostate.
=== Molecular ultrasonography (ultrasound molecular imaging) ===
The anticipated future of contrast ultrasonography lies in molecular imaging, with potential clinical applications expected in cancer screening to detect malignant tumors at their earliest stage. Molecular ultrasonography (or ultrasound molecular imaging) uses targeted microbubbles, originally designed by Dr. Alexander Klibanov in 1997; such targeted microbubbles specifically bind or adhere to tumoral microvessels by targeting biomolecular cancer expression (overexpression of certain biomolecules occurring during neo-angiogenesis or inflammation in malignant tumors). As a result, a few minutes after their injection into the blood circulation, the targeted microbubbles accumulate in the malignant tumor, facilitating its localization in a single ultrasound contrast image. In 2013, the first exploratory clinical trial in humans for prostate cancer was completed in Amsterdam, the Netherlands, by Dr. Hessel Wijkstra.
In molecular ultrasonography, the technique of acoustic radiation force (also used for shear wave elastography) is applied to push the targeted microbubbles towards the microvessel wall, as first demonstrated by Dr. Paul Dayton in 1999. This maximizes binding to the malignant tumor, since the targeted microbubbles come into more direct contact with the cancerous biomolecules expressed on the inner surface of tumoral microvessels. At the stage of preclinical research, the technique of acoustic radiation force was implemented as a prototype in clinical ultrasound systems and validated in vivo in 2D and 3D imaging modes.
=== Elastography (ultrasound elasticity imaging) ===
Ultrasound is also used for elastography, a relatively new imaging modality, developed over the last two decades, that maps the elastic properties of soft tissue. Elastography is useful in medical diagnosis because it can distinguish healthy from unhealthy tissue for specific organs and growths. For example, cancerous tumors are often harder than the surrounding tissue, and diseased livers are stiffer than healthy ones.
There are many ultrasound elastography techniques.
=== Interventional ultrasonography ===
Interventional ultrasonography involves procedures such as biopsy, drainage of fluid collections, and intrauterine blood transfusion (for example, in hemolytic disease of the newborn).
Thyroid cysts: High-frequency thyroid ultrasound (HFUS) can be used to treat several gland conditions. The recurrent thyroid cyst, usually treated in the past with surgery, can be treated effectively by a procedure called percutaneous ethanol injection (PEI). With ultrasound-guided placement of a 25-gauge needle within the cyst, and after evacuation of the cyst fluid, ethanol amounting to about 50% of the cyst volume is injected back into the cavity under strict operator visualization of the needle tip. The procedure is 80% successful in reducing the cyst to minute size.
Metastatic thyroid cancer neck lymph nodes: HFUS may also be used to treat metastatic thyroid cancer lymph nodes in the neck in patients who either refuse surgery or are no longer surgical candidates. Small amounts of ethanol are injected under ultrasound-guided needle placement. A power Doppler blood flow study is done prior to injection. The blood flow can be destroyed and the node rendered inactive. Power Doppler-visualized blood flow can be eradicated, and there may be a drop in the cancer blood marker test, thyroglobulin (Tg), as the node becomes non-functional. Another interventional use for HFUS is marking a cancerous node prior to surgery to help locate the node cluster during the operation. A minute amount of methylene blue dye is injected, under careful ultrasound-guided placement of the needle, onto the anterior surface of the node, but not into the node itself. The dye will be evident to the thyroid surgeon when opening the neck. A similar localization procedure with methylene blue can be done to locate parathyroid adenomas.
Joint injections can be guided by medical ultrasound, such as in ultrasound-guided hip joint injections.
=== Compression ultrasonography ===
Compression ultrasonography is performed by pressing the probe against the skin. This can bring the target structure closer to the probe, improving its spatial resolution. Comparing the shape of the target structure before and after compression can aid in diagnosis.
It is used in ultrasonography of deep venous thrombosis, wherein absence of vein compressibility is a strong indicator of thrombosis. Compression ultrasonography has both high sensitivity and specificity for detecting proximal deep vein thrombosis in symptomatic patients. Results are not reliable when the patient is asymptomatic, for example in high risk postoperative orthopedic patients.
=== Panoramic ultrasonography ===
Panoramic ultrasonography is the digital stitching of multiple ultrasound images into a broader one. It can display an entire abnormality and show its relationship to nearby structures on a single image.
=== Multiparametric ultrasonography ===
Multiparametric ultrasonography (mpUSS) combines multiple ultrasound techniques to produce a composite result. For example, one study combined B-mode, colour Doppler, real-time elastography, and contrast-enhanced ultrasound, achieving an accuracy similar to that of multiparametric MRI.
=== Speed-of-Sound Imaging ===
Speed-of-sound (SoS) imaging aims to find the spatial distribution of the SoS within the tissue. The idea is to find relative delay measurements for different transmission events and solve the limited-angle tomographic reconstruction problem using delay measurements and transmission geometry. Compared to shear-wave elastography, SoS imaging has better ex-vivo tissue differentiation for benign and malignant tumors.
== Attributes ==
As with all imaging modalities, ultrasonography has positive and negative attributes.
=== Strengths ===
Muscle, soft tissue, and bone surfaces are imaged very well including the delineation of interfaces between solid and fluid-filled spaces.
"Live" images can be dynamically selected, permitting diagnosis and documentation often rapidly. Live images also permit ultrasound-guided biopsies or injections, which can be cumbersome with other imaging modalities.
Organ structure can be demonstrated.
There are no known long-term side effects when used according to guidelines, and discomfort is minimal.
Ability to image local variations in the mechanical properties of soft tissue.
Equipment is widely available and comparatively flexible.
Small, easily carried scanners are available which permit bedside examinations.
Transducers have become relatively inexpensive compared to other modes of investigation, such as computed X-ray tomography, DEXA or magnetic resonance imaging.
Spatial resolution is better with high-frequency ultrasound transducers than with most other imaging modalities.
Use of an ultrasound research interface can offer a relatively inexpensive, real-time, and flexible method for capturing data required for specific research purposes of tissue characterization and development of new image processing techniques.
=== Weaknesses ===
Sonographic devices have trouble penetrating bone. For example, sonography of the adult brain is currently very limited.
Sonography performs very poorly when there is gas between the transducer and the organ of interest, due to the extreme differences in acoustic impedance. For example, overlying gas in the gastrointestinal tract often makes ultrasound scanning of the pancreas difficult. Lung ultrasound, however, can be useful in demarcating pleural effusions and in detecting heart failure and pneumonia.
Even in the absence of bone or air, the depth penetration of ultrasound may be limited depending on the frequency of imaging. Consequently, there might be difficulties imaging structures deep in the body, especially in obese patients.
Image quality and accuracy of diagnosis are limited in obese patients, as overlying subcutaneous fat attenuates the sound beam; a lower-frequency transducer, with consequently lower resolution, is then required.
The method is operator-dependent: skill and experience are needed to acquire good-quality images and make accurate diagnoses.
There is no scout image as there is with CT and MRI. Once an image has been acquired there is no exact way to tell which part of the body was imaged.
80% of sonographers experience repetitive strain injuries (RSI), also known as work-related musculoskeletal disorders (WMSD), because of poor ergonomic positioning.
== Risks and side-effects ==
Ultrasonography is generally considered a safe imaging modality, with the World Health Organization stating:
"Diagnostic ultrasound is recognized as a safe, effective, and highly flexible imaging modality capable of providing clinically relevant information about most parts of the body in a rapid and cost-effective fashion".
Diagnostic ultrasound studies of the fetus are generally considered to be safe during pregnancy. However, this diagnostic procedure should be performed only when there is a valid medical indication, and the lowest possible ultrasonic exposure setting should be used to gain the necessary diagnostic information under the "as low as reasonably practicable" or ALARP principle.
Although there is no evidence that ultrasound could be harmful to the fetus, medical authorities typically strongly discourage the promotion, selling, or leasing of ultrasound equipment for making "keepsake fetal videos".
=== Studies on the safety of ultrasound ===
A meta-analysis of several ultrasonography studies published in 2000 found no statistically significant harmful effects from ultrasonography. It was noted that there is a lack of data on long-term substantive outcomes such as neurodevelopment.
A study at the Yale School of Medicine published in 2006 found a small but significant correlation between prolonged and frequent use of ultrasound and abnormal neuronal migration in mice.
A study performed in Sweden in 2001 suggested subtle neurological effects linked to ultrasound exposure, indicated by an increased incidence of left-handedness in boys (a marker for brain problems when not hereditary) and speech delays.
The above findings, however, were not confirmed in a follow-up study.
A later study, however, performed on a larger sample of 8865 children, has established a statistically significant, albeit weak association of ultrasonography exposure and being non-right handed later in life.
== Regulation ==
Diagnostic and therapeutic ultrasound equipment is regulated in the US by the Food and Drug Administration, and worldwide by other national regulatory agencies. The FDA limits acoustic output using several metrics; generally, other agencies accept the FDA-established guidelines.
Currently, New Mexico, Oregon, and North Dakota are the only US states that regulate diagnostic medical sonographers. Certification examinations for sonographers are available in the US from three organizations: the American Registry for Diagnostic Medical Sonography, Cardiovascular Credentialing International and the American Registry of Radiologic Technologists.
The primary regulated metrics are the Mechanical Index (MI), a metric associated with the cavitation bio-effect, and the Thermal Index (TI), a metric associated with the tissue-heating bio-effect. The FDA requires that the machine not exceed established limits, which are reasonably conservative in an effort to maintain diagnostic ultrasound as a safe imaging modality. This requires self-regulation on the part of the manufacturer in terms of machine calibration.
Ultrasound-based pre-natal care and sex screening technologies were launched in India in the 1980s. With concerns about its misuse for sex-selective abortion, the Government of India passed the Pre-natal Diagnostic Techniques Act (PNDT) in 1994 to distinguish and regulate legal and illegal uses of ultrasound equipment. The law was further amended as the Pre-Conception and Pre-natal Diagnostic Techniques (Regulation and Prevention of Misuse) (PCPNDT) Act in 2004 to deter and punish prenatal sex screening and sex selective abortion. It is currently illegal and a punishable crime in India to determine or disclose the sex of a fetus using ultrasound equipment.
== Use in other animals ==
Ultrasound is also a valuable tool in veterinary medicine, offering the same non-invasive imaging that helps in the diagnosis and monitoring of conditions in animals.
== History ==
After the French physicist Pierre Curie's discovery of piezoelectricity in 1880, ultrasonic waves could be deliberately generated for industry. In 1940, the American acoustical physicist Floyd Firestone devised the first ultrasonic echo imaging device, the Supersonic Reflectoscope, to detect internal flaws in metal castings. In 1941, Austrian neurologist Karl Theo Dussik, in collaboration with his brother, Friedrich, a physicist, was likely the first person to image the human body ultrasonically, outlining the ventricles of a human brain. Ultrasonic energy was first applied to the human body for medical purposes by Dr George Ludwig at the Naval Medical Research Institute, Bethesda, Maryland, in the late 1940s. English-born physicist John Wild (1914–2009) first used ultrasound to assess the thickness of bowel tissue as early as 1949; he has been described as the "father of medical ultrasound". Subsequent advances took place concurrently in several countries but it was not until 1961 that David Robinson and George Kossoff's work at the Australian Department of Health resulted in the first commercially practical water bath ultrasonic scanner. In 1963 Meyerdirk & Wright launched production of the first commercial, hand-held, articulated arm, compound contact B-mode scanner, which made ultrasound generally available for medical use.
=== France ===
Léandre Pourcelot, a researcher and teacher at INSA (Institut National des Sciences Appliquées), Lyon, co-published a report in 1965 at the Académie des sciences, "Effet Doppler et mesure du débit sanguin" ("Doppler effect and measure of the blood flow"), the basis of his design of a Doppler flow meter in 1967.
=== Scotland ===
Parallel developments in Glasgow, Scotland by Professor Ian Donald and colleagues at the Glasgow Royal Maternity Hospital (GRMH) led to the first diagnostic applications of the technique. Donald was an obstetrician with a self-confessed "childish interest in machines, electronic and otherwise", who, having treated the wife of one of the company's directors, was invited to visit the Research Department of boilermakers Babcock & Wilcox at Renfrew. He adapted their industrial ultrasound equipment to conduct experiments on various anatomical specimens and assess their ultrasonic characteristics. Together with the medical physicist Tom Brown and fellow obstetrician John MacVicar, Donald refined the equipment to enable differentiation of pathology in live volunteer patients. These findings were reported in The Lancet on 7 June 1958 as "Investigation of Abdominal Masses by Pulsed Ultrasound" – possibly one of the most important papers published in the field of diagnostic medical imaging.
At GRMH, Professor Donald and James Willocks then refined their techniques to obstetric applications including fetal head measurement to assess the size and growth of the fetus. With the opening of the new Queen Mother's Hospital in Yorkhill in 1964, it became possible to improve these methods even further. Stuart Campbell's pioneering work on fetal cephalometry led to it acquiring long-term status as the definitive method of study of foetal growth. As the technical quality of the scans was further developed, it soon became possible to study pregnancy from start to finish and diagnose its many complications such as multiple pregnancy, fetal abnormality and placenta praevia. Diagnostic ultrasound has since been imported into practically every other area of medicine.
=== Sweden ===
Medical ultrasonography was used in 1953 at Lund University by cardiologist Inge Edler and Gustav Ludwig Hertz's son Carl Hellmuth Hertz, who was then a graduate student at the university's department of nuclear physics.
Edler had asked Hertz if it was possible to use radar to look into the body, but Hertz said this was impossible. However, he said, it might be possible to use ultrasonography. Hertz was familiar with using ultrasonic reflectoscopes of the American acoustical physicist Floyd Firestone's invention for nondestructive materials testing, and together Edler and Hertz developed the idea of applying this methodology in medicine.
The first successful measurement of heart activity was made on October 29, 1953, using a device borrowed from the ship construction company Kockums in Malmö. On December 16 the same year, the method was applied to generate an echo-encephalogram (ultrasonic probe of the brain). Edler and Hertz published their findings in 1954.
=== United States ===
In 1962, after about two years of work, Joseph Holmes, William Wright, and Ralph Meyerdirk developed the first compound contact B-mode scanner. Their work had been supported by U.S. Public Health Services and the University of Colorado. Wright and Meyerdirk left the university to form Physionic Engineering Inc., which launched the first commercial hand-held articulated arm compound contact B-mode scanner in 1963. This was the start of the most popular design in the history of ultrasound scanners.
In the late 1960s Gene Strandness and the bio-engineering group at the University of Washington conducted research on Doppler ultrasound as a diagnostic tool for vascular disease. Eventually, they developed technologies to use duplex imaging, or Doppler in conjunction with B-mode scanning, to view vascular structures in real time while also providing hemodynamic information.
The first demonstration of color Doppler was by Geoff Stevenson, who was involved in the early developments and medical use of Doppler shifted ultrasonic energy.
== Manufacturers ==
Major manufacturers of Medical Ultrasound Devices and Equipment are:
Canon Medical Systems Corporation
Esaote
GE Healthcare
Fujifilm
Mindray Medical International Limited
Koninklijke Philips N.V.
Samsung Medison
Siemens Healthineers
== Gallery ==
== See also ==
== Explanatory notes ==
== References ==
== External links ==
About the discovery of medical ultrasonography on ob-ultrasound.net
History of medical sonography (ultrasound) on ob-ultrasound.net | Wikipedia/Medical_ultrasonography |
In dentistry, calculus or tartar is a form of hardened dental plaque. It is caused by precipitation of minerals from saliva and gingival crevicular fluid (GCF) in plaque on the teeth. This process of precipitation kills the bacterial cells within dental plaque, but the rough and hardened surface that is formed provides an ideal surface for further plaque formation. This leads to calculus buildup, which compromises the health of the gingiva (gums). Calculus can form both along the gumline, where it is referred to as supragingival ('above the gum'), and within the narrow sulcus that exists between the teeth and the gingiva, where it is referred to as subgingival ('below the gum').
Calculus formation is associated with a number of clinical manifestations, including bad breath, receding gums and chronically inflamed gingiva. Brushing and flossing can remove plaque from which calculus forms; however, once formed, calculus is too hard (firmly attached) to be removed with a toothbrush. Calculus buildup can be removed with ultrasonic tools or dental hand instruments (such as a periodontal scaler).
== Etymology ==
The word comes from Latin calculus 'small stone', from calx 'limestone, lime', probably related to Greek χάλιξ chalix 'small stone, pebble, rubble', which many trace to a Proto-Indo-European root for 'split, break up'. Calculus was a term used for various kinds of stones. This spun off many modern words, including calculate ('use stones for mathematical purposes'), and calculus, which came to be used, in the 18th century, for accidental or incidental mineral buildups in human and animal bodies, like kidney stones and minerals on teeth.
Tartar, on the other hand, originates in Greek as well (tartaron), but as the term for the white encrustation inside casks (a.k.a. potassium bitartrate, commonly known as cream of tartar). This came to be a term used for calcium phosphate on teeth in the early 19th century.
== Chemical composition ==
Calculus is composed of both inorganic (mineral) and organic (cellular and extracellular matrix) components.
=== In supra-gingival calculus ===
The mineral proportion of supragingival calculus ranges from approximately 40–60%, depending on its location in the dentition, and consists primarily of calcium phosphate crystals organized into four principal mineral phases, listed here in order of increasing ratio of phosphate to calcium:
hydroxyapatite, Ca5(PO4)3OH
whitlockite, Ca9(Mg,Fe)(PO4)6(PO3OH)
octacalcium phosphate, Ca8H2(PO4)6 · 5 H2O
and brushite, CaHPO4 · 2 H2O
The organic component is approximately 85% cellular and 15% extracellular matrix. Cell density within dental plaque and calculus is very high, consisting of an estimated 200,000,000 cells per milligram. The cells within calculus are primarily bacterial, but also include at least one species of archaea (Methanobrevibacter oralis) and several species of yeast (e.g., Candida albicans). The organic extracellular matrix in calculus consists primarily of proteins and lipids (fatty acids, triglycerides, glycolipids, and phospholipids), as well as extracellular DNA. Trace amounts of host, dietary, and environmental microdebris are also found within calculus, including salivary proteins, plant DNA, milk proteins, starch granules, textile fibers, and smoke particles.
=== In sub-gingival calculus ===
Sub-gingival calculus is composed almost entirely of two components: fossilized anaerobic bacteria whose biological composition has been replaced by calcium phosphate salts, and calcium phosphate salts that have joined the fossilized bacteria in calculus formations.
The following minerals are detectable in calculus by X-ray diffraction:
brushite (CaHPO4 · 2 H2O)
octacalcium phosphate (Ca8H2(PO4)6 · 5 H2O)
magnesium-containing whitlockite (Ca9(Mg,Fe)(PO4)6(PO3OH))
carbonate-containing hydroxyapatite (approximately Ca5(PO4)3OH but containing some carbonate).
== Calculus formation ==
Dental calculus typically forms in incremental layers that are easily visible using both electron microscopy and light microscopy. These layers form during periodic calcification events of the dental plaque, but the timing and triggers of these events are not well understood. The formation of calculus varies widely among individuals and at different locations within the mouth. Many variables have been identified that influence the formation of dental calculus, including age, sex, ethnic background, diet, location in the oral cavity, oral hygiene, bacterial plaque composition, host genetics, access to professional dental care, physical disabilities, systemic diseases, tobacco use, and drugs and medications.
Supragingival calculus formation is most abundant on the buccal (cheek) surfaces of the maxillary (upper jaw) molars and on the lingual (tongue) surfaces of the mandibular (lower jaw) incisors. These areas experience high salivary flow because of their proximity to the parotid and sublingual salivary glands.
Subgingival calculus forms below the gumline and is typically darkened in color by the presence of black-pigmented bacteria, whose cells are coated in a layer of iron obtained from heme during gingival bleeding. The reason fossilized bacteria are initially attracted to one part of the subgingival tooth surface over another is not fully understood. However, once the first layer is attached, more calculus components are naturally attracted to the same places due to electrical charge. This is because the calcium phosphate salts contained in them exist as electrically unstable ions (unlike calcium phosphate, the primary component of teeth). The fossilized bacteria pile up rather haphazardly, while free-floating ionic components (calcium phosphate salts) fill in the gaps.
The resultant hardened structure can be compared to concrete, with the fossilized bacteria playing the role of aggregate, and the smaller calcium phosphate salts being the cement. The "hardened" calculus formations are at the heart of periodontal disease and treatment.
== Clinical significance ==
Plaque accumulation causes the gingiva to become irritated and inflamed, and this is referred to as gingivitis. When the gingiva become so irritated that there is a loss of the connective tissue fibers that attach the gums to the teeth and the bone that surrounds the tooth, this is known as periodontitis. Dental plaque is not the sole cause of periodontitis; however, it is often referred to as a primary aetiology. Plaque that remains in the oral cavity long enough will eventually calcify and become calculus. Calculus is detrimental to gingival health because it serves as a trap for increased plaque formation and retention; thus, calculus, along with other factors that cause a localized build-up of plaque, is referred to as a secondary aetiology of periodontitis.
When plaque is supragingival, the bacterial content contains a great proportion of aerobic bacteria and yeast, or those bacteria which utilize and can survive in an environment containing oxygen. Subgingival plaque contains a higher proportion of anaerobic bacteria, or those bacteria which cannot exist in an environment containing oxygen. Several anaerobic plaque bacteria, such as Porphyromonas gingivalis, secrete antigenic proteins that trigger a strong inflammatory response in the periodontium, the specialized tissues that surround and support the teeth. Prolonged inflammation of the periodontium leads to bone loss and weakening of the gingival fibers that attach the teeth to the gums, two major hallmarks of periodontitis. Supragingival calculus formation is nearly ubiquitous in humans, but to differing degrees. Almost all individuals with periodontitis exhibit considerable subgingival calculus deposits. Dental plaque bacteria have been linked to cardiovascular disease and mothers giving birth to pre-term low weight infants, but there is no conclusive evidence yet that periodontitis is a significant risk factor for either of these two conditions.
== Prevention ==
Toothpaste with pyrophosphates or zinc citrate has been shown to produce a statistically significant reduction in plaque accumulation, but the effect of zinc citrate is so modest that its clinical importance is questionable. Some calculus may form even without plaque deposits, by direct mineralisation of the pellicle.
== Calculus in other animals ==
Calculus formation in other animals is less well studied than in humans, but it is known to form in a wide range of species. Domestic pets, such as dogs and cats, frequently accumulate large calculus deposits. Animals with highly abrasive diets, such as ruminants and equids, rarely form thick deposits and instead tend to form thin calculus deposits that often have a metallic or opalescent sheen. In animals, calculus should not be confused with crown cementum, a layer of calcified dental tissue that encases the tooth root underneath the gingival margin and is gradually lost through periodontal disease.
== Archaeological significance ==
Dental calculus has been shown to contain well preserved microparticles, DNA and protein in archaeological samples. The information these molecules contain can reveal information about the oral microbiome of the host and the presence of pathogens. It is also possible to identify dietary sources as well as study dietary shifts and occasionally evidence of craft activities.
== Removal of calculus after formation ==
Plaque and calculus deposits are a major etiological factor in the development and progression of oral disease. An important part of the scope of practice of a dental hygienist is the removal of plaque and calculus deposits. This is achieved through the use of specifically designed instruments for debridement of tooth surfaces. Treatment with these types of instruments is necessary as calculus deposits cannot be removed by brushing or flossing alone. To effectively manage disease or maintain oral health, thorough removal of calculus deposits should be completed at frequent intervals. The recommended frequency of dental hygiene treatment can be made by a registered professional, and is dependent on individual patient needs. Factors that are taken into consideration include an individual's overall health status, tobacco use, amount of calculus present, and adherence to a professionally recommended home care routine.
Hand instruments are specially designed tools used by dental professionals to remove plaque and calculus deposits that have formed on the teeth. These tools include scalers, curettes, jaquettes, hoes, files and chisels. Each type of tool is designed to be used in specific areas of the mouth. Some commonly used instruments include sickle scalers which are designed with a pointed tip and are mainly used supragingivally. Curettes are mainly used to remove subgingival calculus, smooth root surfaces and to clean out periodontal pockets. Curettes can be divided into two subgroups: universals and area specific instruments. Universal curettes can be used in multiple areas, while area specific instruments are designed for select tooth surfaces. Gracey curettes are a popular type of area specific curettes. Due to their design, area specific curettes allow for better adaptation to the root surface and can be slightly more effective than universals. Hoes, chisels, and files are less widely used than scalers and curettes. These are beneficial when removing large amounts of calculus or tenacious calculus that cannot be removed with a curette or scaler alone. Chisels and hoes are used to remove bands of calculus, whereas files are used to crush burnished or tenacious calculus.
Ultrasonic scalers, also known as power scalers, are effective in removing calculus, stain, and plaque. These scalers are also useful for root planing, curettage, and surgical debridement. Not only are tenacious calculus and stain removed more effectively with ultrasonic scalers than with hand instrumentation alone; the most satisfactory clinical results are obtained when ultrasonics are used as an adjunct to hand instrumentation. There are two types of ultrasonic scalers: piezoelectric and magnetostrictive. Oscillating material in both of these handpieces causes the tip of the scaler to vibrate at high speeds, between 18,000 and 50,000 Hz. The tip of each scaler uses a different vibration pattern for removal of calculus. The magnetostrictive power scaler vibration is elliptical, activating all sides of the tip, whereas the piezoelectric vibration is linear and is more active on the two sides of the tip.
Special tips for ultrasonic scalers are designed to address different areas of the mouth and varying amounts of calculus buildup. Larger tips are used for heavy subgingival or supragingival calculus deposits, whereas thinner tips are designed more for definitive subgingival debridement. As the high frequency vibrations loosen calculus and plaque, heat is generated at the tip. A water spray is directed towards the end of the tip to cool it as well as irrigate the gingiva during debridement. Only the first 1–2 mm of the tip on the ultrasonic scaler is most effective for removal, and therefore needs to come into direct contact with the calculus to fracture the deposits. Small adaptations are needed in order to keep the tip of the scaler touching the surface of the tooth, while overlapping oblique, horizontal, or vertical strokes are used for adequate calculus removal.
Current research on potentially more effective methods of subgingival calculus removal focuses on the use of near-ultraviolet and near-infrared lasers, such as Er,Cr:YSGG lasers. The use of lasers in periodontal therapy offers a unique clinical advantage over conventional hand instrumentation, as the thin and flexible fibers can deliver laser energy into periodontal pockets that are otherwise difficult to access. Near-infrared lasers, such as the Er,CR:YSGG laser, have been proposed as an effective adjunct for calculus removal as the emission wavelength is highly absorbed by water, a large component of calculus deposits. An optimal output power setting of 1.0-W with the near-infrared Er,Cr:YSGG laser has been shown to be effective for root scaling. Near-ultraviolet lasers have also shown promise as they allow the dental professional to remove calculus deposits quickly, without removing underlying healthy tooth structure, which often occurs during hand instrumentation. Additionally, near-ultraviolet lasers are effective at various irradiation angles for calculus removal. Discrepancies in the efficiency of removal are due to the physical and optical properties of the calculus deposits, not to the angle of laser use. Dental hygienists must receive additional theoretical and clinical training on the use of lasers, where legislation permits.
== See also ==
Calculus (medicine)
Toothbrush
Tooth decay
Teeth cleaning
== References == | Wikipedia/Calculus_(dental) |
The following are important identities in vector algebra. Identities that only involve the magnitude of a vector {\displaystyle \|\mathbf {A} \|} and the dot product (scalar product) of two vectors A·B apply to vectors in any dimension, while identities that use the cross product (vector product) A×B only apply in three dimensions, since the cross product is only defined there.
Most of these relations can be dated to the founder of vector calculus, Josiah Willard Gibbs, if not earlier.
== Magnitudes ==
The magnitude of a vector A can be expressed using the dot product:
{\displaystyle \|\mathbf {A} \|^{2}=\mathbf {A\cdot A} }
In three-dimensional Euclidean space, the magnitude of a vector is determined from its three components using Pythagoras' theorem:
{\displaystyle \|\mathbf {A} \|^{2}=A_{1}^{2}+A_{2}^{2}+A_{3}^{2}}
== Inequalities ==
The Cauchy–Schwarz inequality:
{\displaystyle \mathbf {A} \cdot \mathbf {B} \leq \left\|\mathbf {A} \right\|\left\|\mathbf {B} \right\|}
The triangle inequality:
{\displaystyle \|\mathbf {A+B} \|\leq \|\mathbf {A} \|+\|\mathbf {B} \|}
The reverse triangle inequality:
{\displaystyle \|\mathbf {A-B} \|\geq {\Bigl |}\|\mathbf {A} \|-\|\mathbf {B} \|{\Bigr |}}
== Angles ==
The vector product and the scalar product of two vectors define the angle between them, say θ:
{\displaystyle \sin \theta ={\frac {\|\mathbf {A} \times \mathbf {B} \|}{\left\|\mathbf {A} \right\|\left\|\mathbf {B} \right\|}}\quad (-\pi <\theta \leq \pi )}
To satisfy the right-hand rule, for positive θ, vector B is counter-clockwise from A, and for negative θ it is clockwise.
{\displaystyle \cos \theta ={\frac {\mathbf {A} \cdot \mathbf {B} }{\left\|\mathbf {A} \right\|\left\|\mathbf {B} \right\|}}\quad (-\pi <\theta \leq \pi )}
The Pythagorean trigonometric identity then provides:
{\displaystyle \left\|\mathbf {A\times B} \right\|^{2}+(\mathbf {A} \cdot \mathbf {B} )^{2}=\left\|\mathbf {A} \right\|^{2}\left\|\mathbf {B} \right\|^{2}}
If a vector A = (Ax, Ay, Az) makes angles α, β, γ with an orthogonal set of x-, y- and z-axes, then:
{\displaystyle \cos \alpha ={\frac {A_{x}}{\sqrt {A_{x}^{2}+A_{y}^{2}+A_{z}^{2}}}}={\frac {A_{x}}{\|\mathbf {A} \|}}\ ,}
and analogously for angles β, γ. Consequently:
{\displaystyle \mathbf {A} =\left\|\mathbf {A} \right\|\left(\cos \alpha \ {\hat {\mathbf {i} }}+\cos \beta \ {\hat {\mathbf {j} }}+\cos \gamma \ {\hat {\mathbf {k} }}\right),}
with {\displaystyle {\hat {\mathbf {i} }},\ {\hat {\mathbf {j} }},\ {\hat {\mathbf {k} }}} unit vectors along the axis directions.
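A minimal NumPy sketch (with arbitrary example vectors, not taken from the article) illustrates the two angle formulas, the Pythagorean identity, and the direction-cosine decomposition above:

```python
# Numerical check of the angle formulas and the Pythagorean identity above.
import numpy as np

A = np.array([1.0, 2.0, 2.0])     # example vectors (arbitrary)
B = np.array([2.0, -1.0, 1.0])

nA, nB = np.linalg.norm(A), np.linalg.norm(B)
cos_theta = A @ B / (nA * nB)
sin_theta = np.linalg.norm(np.cross(A, B)) / (nA * nB)
print(np.degrees(np.arctan2(sin_theta, cos_theta)))     # angle between A and B, in degrees

# ||A x B||^2 + (A . B)^2 = ||A||^2 ||B||^2
print(np.isclose(np.linalg.norm(np.cross(A, B))**2 + (A @ B)**2, nA**2 * nB**2))   # True

# Direction cosines: A = ||A|| (cos(alpha) i + cos(beta) j + cos(gamma) k)
cosines = A / nA                                        # (cos alpha, cos beta, cos gamma)
print(np.allclose(nA * cosines, A))                     # True
```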
== Areas and volumes ==
The area Σ of a parallelogram with sides A and B containing the angle θ is:
{\displaystyle \Sigma =AB\sin \theta ,}
which will be recognized as the magnitude of the vector cross product of the vectors A and B lying along the sides of the parallelogram. That is:
{\displaystyle \Sigma =\left\|\mathbf {A} \times \mathbf {B} \right\|={\sqrt {\left\|\mathbf {A} \right\|^{2}\left\|\mathbf {B} \right\|^{2}-\left(\mathbf {A} \cdot \mathbf {B} \right)^{2}}}\ .}
(If A, B are two-dimensional vectors, this is equal to the determinant of the 2 × 2 matrix with rows A, B.) The square of this expression is:
{\displaystyle \Sigma ^{2}=(\mathbf {A\cdot A} )(\mathbf {B\cdot B} )-(\mathbf {A\cdot B} )(\mathbf {B\cdot A} )=\Gamma (\mathbf {A} ,\ \mathbf {B} )\ ,}
where Γ(A, B) is the Gram determinant of A and B defined by:
{\displaystyle \Gamma (\mathbf {A} ,\ \mathbf {B} )={\begin{vmatrix}\mathbf {A\cdot A} &\mathbf {A\cdot B} \\\mathbf {B\cdot A} &\mathbf {B\cdot B} \end{vmatrix}}\ .}
In a similar fashion, the squared volume V of a parallelepiped spanned by the three vectors A, B, C is given by the Gram determinant of the three vectors:
{\displaystyle V^{2}=\Gamma (\mathbf {A} ,\ \mathbf {B} ,\ \mathbf {C} )={\begin{vmatrix}\mathbf {A\cdot A} &\mathbf {A\cdot B} &\mathbf {A\cdot C} \\\mathbf {B\cdot A} &\mathbf {B\cdot B} &\mathbf {B\cdot C} \\\mathbf {C\cdot A} &\mathbf {C\cdot B} &\mathbf {C\cdot C} \end{vmatrix}}\ ,}
Since A, B, C are three-dimensional vectors, this is equal to the square of the scalar triple product {\displaystyle \det[\mathbf {A} ,\mathbf {B} ,\mathbf {C} ]=|\mathbf {A} ,\mathbf {B} ,\mathbf {C} |} below.
This process can be extended to n dimensions.
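These area and volume formulas can be checked numerically; the following sketch uses arbitrary example vectors and compares the cross-product and Gram-determinant forms:

```python
# Parallelogram area and parallelepiped volume vs. their Gram determinants.
import numpy as np

A = np.array([1.0, 0.5, -1.0])   # example vectors (arbitrary)
B = np.array([0.0, 2.0, 1.0])
C = np.array([1.0, 1.0, 3.0])

# Area: Sigma = ||A x B||, and Sigma^2 = Gamma(A, B)
area = np.linalg.norm(np.cross(A, B))
gram2 = np.linalg.det(np.array([[A @ A, A @ B],
                                [B @ A, B @ B]]))
print(np.isclose(area**2, gram2))          # True

# Volume: V = |A . (B x C)|, and V^2 = Gamma(A, B, C)
volume = abs(A @ np.cross(B, C))
gram3 = np.linalg.det(np.array([[A @ A, A @ B, A @ C],
                                [B @ A, B @ B, B @ C],
                                [C @ A, C @ B, C @ C]]))
print(np.isclose(volume**2, gram3))        # True
```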
== Addition and multiplication of vectors ==
Commutativity of addition:
{\displaystyle \mathbf {A} +\mathbf {B} =\mathbf {B} +\mathbf {A} }
Commutativity of scalar product:
{\displaystyle \mathbf {A} \cdot \mathbf {B} =\mathbf {B} \cdot \mathbf {A} }
Anticommutativity of cross product:
{\displaystyle \mathbf {A} \times \mathbf {B} =\mathbf {-} (\mathbf {B} \times \mathbf {A} )}
Distributivity of multiplication by a scalar over addition:
{\displaystyle c(\mathbf {A} +\mathbf {B} )=c\mathbf {A} +c\mathbf {B} }
Distributivity of scalar product over addition:
{\displaystyle \left(\mathbf {A} +\mathbf {B} \right)\cdot \mathbf {C} =\mathbf {A} \cdot \mathbf {C} +\mathbf {B} \cdot \mathbf {C} }
Distributivity of vector product over addition:
{\displaystyle (\mathbf {A} +\mathbf {B} )\times \mathbf {C} =\mathbf {A} \times \mathbf {C} +\mathbf {B} \times \mathbf {C} }
Scalar triple product:
{\displaystyle \mathbf {A} \cdot (\mathbf {B} \times \mathbf {C} )=\mathbf {B} \cdot (\mathbf {C} \times \mathbf {A} )=\mathbf {C} \cdot (\mathbf {A} \times \mathbf {B} )=|\mathbf {A} \,\mathbf {B} \,\mathbf {C} |={\begin{vmatrix}A_{x}&B_{x}&C_{x}\\A_{y}&B_{y}&C_{y}\\A_{z}&B_{z}&C_{z}\end{vmatrix}}.}
Vector triple product:
{\displaystyle \mathbf {A} \times (\mathbf {B} \times \mathbf {C} )=(\mathbf {A} \cdot \mathbf {C} )\mathbf {B} -(\mathbf {A} \cdot \mathbf {B} )\mathbf {C} }
Jacobi identity:
{\displaystyle \mathbf {A} \times (\mathbf {B} \times \mathbf {C} )+\mathbf {C} \times (\mathbf {A} \times \mathbf {B} )+\mathbf {B} \times (\mathbf {C} \times \mathbf {A} )=\mathbf {0} .}
Lagrange's identity:
{\displaystyle |\mathbf {A} \times \mathbf {B} |^{2}=(\mathbf {A} \cdot \mathbf {A} )(\mathbf {B} \cdot \mathbf {B} )-(\mathbf {A} \cdot \mathbf {B} )^{2}}
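These product identities lend themselves to quick numerical spot-checks; the sketch below uses random example vectors (the seed and values are arbitrary):

```python
# Numerical spot-check of the triple-product, Jacobi, and Lagrange identities above.
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3))   # three random 3-vectors

# Scalar triple product: A . (B x C) equals the determinant with columns A, B, C
print(np.isclose(A @ np.cross(B, C),
                 np.linalg.det(np.column_stack([A, B, C]))))          # True

# Vector triple product ("BAC-CAB" rule): A x (B x C) = (A.C)B - (A.B)C
print(np.allclose(np.cross(A, np.cross(B, C)),
                  (A @ C) * B - (A @ B) * C))                          # True

# Jacobi identity
jacobi = (np.cross(A, np.cross(B, C)) + np.cross(C, np.cross(A, B))
          + np.cross(B, np.cross(C, A)))
print(np.allclose(jacobi, 0))                                          # True

# Lagrange's identity: |A x B|^2 = (A.A)(B.B) - (A.B)^2
print(np.isclose(np.linalg.norm(np.cross(A, B))**2,
                 (A @ A) * (B @ B) - (A @ B)**2))                      # True
```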
=== Quadruple product ===
The name "quadruple product" is used for two different products, the scalar-valued scalar quadruple product and the vector-valued vector quadruple product or vector product of four vectors.
==== Scalar quadruple product ====
The scalar quadruple product is defined as the dot product of two cross products:
{\displaystyle (\mathbf {a\times b} )\cdot (\mathbf {c} \times \mathbf {d} )\ ,}
where a, b, c, d are vectors in three-dimensional Euclidean space. It can be evaluated using the Binet–Cauchy identity:
{\displaystyle (\mathbf {a\times b} )\cdot (\mathbf {c} \times \mathbf {d} )=(\mathbf {a\cdot c} )(\mathbf {b\cdot d} )-(\mathbf {a\cdot d} )(\mathbf {b\cdot c} )\ .}
or using the determinant:
{\displaystyle (\mathbf {a\times b} )\cdot (\mathbf {c} \times \mathbf {d} )={\begin{vmatrix}\mathbf {a\cdot c} &\mathbf {a\cdot d} \\\mathbf {b\cdot c} &\mathbf {b\cdot d} \end{vmatrix}}\ .}
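A short numerical check of the Binet–Cauchy evaluation, using arbitrary example vectors:

```python
# Check of the Binet-Cauchy and determinant forms of the scalar quadruple product.
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = rng.standard_normal((4, 3))   # four random 3-vectors

lhs = np.cross(a, b) @ np.cross(c, d)
rhs = (a @ c) * (b @ d) - (a @ d) * (b @ c)
det_form = np.linalg.det(np.array([[a @ c, a @ d],
                                   [b @ c, b @ d]]))
print(np.isclose(lhs, rhs), np.isclose(lhs, det_form))   # True True
```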
==== Vector quadruple product ====
The vector quadruple product is defined as the cross product of two cross products:
{\displaystyle (\mathbf {a\times b} )\mathbf {\times } (\mathbf {c} \times \mathbf {d} )\ ,}
where a, b, c, d are vectors in three-dimensional Euclidean space. It can be evaluated using the identity:
{\displaystyle (\mathbf {a\times b} )\mathbf {\times } (\mathbf {c} \times \mathbf {d} )=(\mathbf {a} \cdot (\mathbf {b} \times \mathbf {d} ))\mathbf {c} -(\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} ))\mathbf {d} \ .}
Equivalent forms can be obtained using the identity:
{\displaystyle (\mathbf {b} \cdot (\mathbf {c} \times \mathbf {d} ))\mathbf {a} -(\mathbf {c} \cdot (\mathbf {d} \times \mathbf {a} ))\mathbf {b} +(\mathbf {d} \cdot (\mathbf {a} \times \mathbf {b} ))\mathbf {c} -(\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} ))\mathbf {d} =0\ .}
This identity can also be written using tensor notation and the Einstein summation convention as follows:
{\displaystyle (\mathbf {a\times b} )\mathbf {\times } (\mathbf {c} \times \mathbf {d} )=\varepsilon _{ijk}a^{i}c^{j}d^{k}b^{l}-\varepsilon _{ijk}b^{i}c^{j}d^{k}a^{l}=\varepsilon _{ijk}a^{i}b^{j}d^{k}c^{l}-\varepsilon _{ijk}a^{i}b^{j}c^{k}d^{l}}
where εijk is the Levi-Civita symbol.
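The vector quadruple product identity and the four-term relation above can likewise be verified numerically on example vectors:

```python
# Check of the vector quadruple product identity and the four-term relation above.
import numpy as np

rng = np.random.default_rng(2)
a, b, c, d = rng.standard_normal((4, 3))   # four random 3-vectors

lhs = np.cross(np.cross(a, b), np.cross(c, d))
rhs = (a @ np.cross(b, d)) * c - (a @ np.cross(b, c)) * d
print(np.allclose(lhs, rhs))    # True

# The four-term relation used to derive equivalent forms sums to zero:
zero = ((b @ np.cross(c, d)) * a - (c @ np.cross(d, a)) * b
        + (d @ np.cross(a, b)) * c - (a @ np.cross(b, c)) * d)
print(np.allclose(zero, 0))     # True
```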
Related relationships:
A consequence of the previous equation:
{\displaystyle |\mathbf {A} \,\mathbf {B} \,\mathbf {C} |\,\mathbf {D} =(\mathbf {A} \cdot \mathbf {D} )\left(\mathbf {B} \times \mathbf {C} \right)+\left(\mathbf {B} \cdot \mathbf {D} \right)\left(\mathbf {C} \times \mathbf {A} \right)+\left(\mathbf {C} \cdot \mathbf {D} \right)\left(\mathbf {A} \times \mathbf {B} \right).}
In 3 dimensions, a vector D can be expressed in terms of basis vectors {A, B, C} as:
{\displaystyle \mathbf {D} \ =\ {\frac {\mathbf {D} \cdot (\mathbf {B} \times \mathbf {C} )}{|\mathbf {A} \,\mathbf {B} \,\mathbf {C} |}}\ \mathbf {A} +{\frac {\mathbf {D} \cdot (\mathbf {C} \times \mathbf {A} )}{|\mathbf {A} \,\mathbf {B} \,\mathbf {C} |}}\ \mathbf {B} +{\frac {\mathbf {D} \cdot (\mathbf {A} \times \mathbf {B} )}{|\mathbf {A} \,\mathbf {B} \,\mathbf {C} |}}\ \mathbf {C} .}
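The decomposition formula above amounts to solving for the components of D in a (generally non-orthogonal) basis; a minimal sketch with arbitrary example vectors:

```python
# Expressing D in the basis {A, B, C} using the scalar-triple-product coefficients above.
import numpy as np

A = np.array([1.0, 0.0, 1.0])   # example basis vectors (arbitrary, linearly independent)
B = np.array([0.0, 2.0, 1.0])
C = np.array([1.0, 1.0, 0.0])
D = np.array([3.0, -2.0, 5.0])  # example vector to decompose

box = A @ np.cross(B, C)        # scalar triple product |A B C| (must be nonzero)
coeffs = np.array([D @ np.cross(B, C),
                   D @ np.cross(C, A),
                   D @ np.cross(A, B)]) / box
reconstructed = coeffs[0] * A + coeffs[1] * B + coeffs[2] * C
print(np.allclose(reconstructed, D))   # True
```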
== Applications ==
These relations are useful for deriving various formulas in spherical and Euclidean geometry. For example, if four points are chosen on the unit sphere, A, B, C, D, and unit vectors drawn from the center of the sphere to the four points, a, b, c, d respectively, the identity:
{\displaystyle (\mathbf {a\times b} )\mathbf {\cdot } (\mathbf {c\times d} )=(\mathbf {a\cdot c} )(\mathbf {b\cdot d} )-(\mathbf {a\cdot d} )(\mathbf {b\cdot c} )\ ,}
in conjunction with the relation for the magnitude of the cross product:
{\displaystyle \|\mathbf {a\times b} \|=ab\sin \theta _{ab}\ ,}
and the dot product:
{\displaystyle \mathbf {a\cdot b} =ab\cos \theta _{ab}\ ,}
where a = b = 1 for the unit sphere, results in the identity among the angles attributed to Gauss:
{\displaystyle \sin \theta _{ab}\sin \theta _{cd}\cos x=\cos \theta _{ac}\cos \theta _{bd}-\cos \theta _{ad}\cos \theta _{bc}\ ,}
where x is the angle between a × b and c × d, or equivalently, between the planes defined by these vectors.
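A numerical check of the Gauss angle identity for four randomly chosen unit vectors (the random seed is an arbitrary choice for illustration):

```python
# Check of the Gauss angle identity for four unit vectors a, b, c, d on the unit sphere.
import numpy as np

rng = np.random.default_rng(3)
a, b, c, d = rng.standard_normal((4, 3))
a, b, c, d = (v / np.linalg.norm(v) for v in (a, b, c, d))   # normalize to unit length

def ang(u, v):
    """Angle between two unit vectors."""
    return np.arccos(np.clip(u @ v, -1.0, 1.0))

# x is the angle between the planes spanned by (a, b) and (c, d)
n1, n2 = np.cross(a, b), np.cross(c, d)
cos_x = (n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))

lhs = np.sin(ang(a, b)) * np.sin(ang(c, d)) * cos_x
rhs = np.cos(ang(a, c)) * np.cos(ang(b, d)) - np.cos(ang(a, d)) * np.cos(ang(b, c))
print(np.isclose(lhs, rhs))   # True
```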
== See also ==
Vector calculus identities
Vector space
Geometric algebra
== Notes ==
== References ==
== Further reading ==
Gibbs, Josiah Willard; Wilson, Edwin Bidwell (1901). Vector analysis: a text-book for the use of students of mathematics. Scribner. | Wikipedia/Vector_algebra_relations |
In mathematics, mathematical physics and the theory of stochastic processes, a harmonic function is a twice continuously differentiable function {\displaystyle f\colon U\to \mathbb {R} ,} where U is an open subset of {\displaystyle \mathbb {R} ^{n},} that satisfies Laplace's equation, that is,
{\displaystyle {\frac {\partial ^{2}f}{\partial x_{1}^{2}}}+{\frac {\partial ^{2}f}{\partial x_{2}^{2}}}+\cdots +{\frac {\partial ^{2}f}{\partial x_{n}^{2}}}=0}
everywhere on U. This is usually written as
{\displaystyle \nabla ^{2}f=0}
or
{\displaystyle \Delta f=0}
== Etymology of the term "harmonic" ==
The descriptor "harmonic" in the name "harmonic function" originates from a point on a taut string which is undergoing harmonic motion. The solution to the differential equation for this type of motion can be written in terms of sines and cosines, functions which are thus referred to as "harmonics." Fourier analysis involves expanding functions on the unit circle in terms of a series of these harmonics. Considering higher dimensional analogues of the harmonics on the unit n-sphere, one arrives at the spherical harmonics. These functions satisfy Laplace's equation and, over time, "harmonic" was used to refer to all functions satisfying Laplace's equation.
== Examples ==
Examples of harmonic functions of two variables are:
The real or imaginary part of any holomorphic function.
The function {\displaystyle \,\!f(x,y)=e^{x}\sin y;} this is a special case of the example above, as {\displaystyle f(x,y)=\operatorname {Im} \left(e^{x+iy}\right),} and {\displaystyle e^{x+iy}} is a holomorphic function. The second derivative with respect to x is {\displaystyle \,\!e^{x}\sin y,} while the second derivative with respect to y is {\displaystyle \,\!-e^{x}\sin y.}
The function {\displaystyle \,\!f(x,y)=\ln \left(x^{2}+y^{2}\right)} defined on {\displaystyle \mathbb {R} ^{2}\smallsetminus \lbrace 0\rbrace .} This can describe the electric potential due to a line charge or the gravity potential due to a long cylindrical mass.
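Both examples can be verified symbolically; a minimal SymPy sketch (the logarithmic example away from the origin):

```python
# Symbolic check that the two-variable examples above satisfy Laplace's equation.
import sympy as sp

x, y = sp.symbols('x y', real=True)

f1 = sp.exp(x) * sp.sin(y)
f2 = sp.log(x**2 + y**2)

for f in (f1, f2):
    laplacian = sp.diff(f, x, 2) + sp.diff(f, y, 2)
    print(sp.simplify(laplacian))   # 0 in both cases
```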
Examples of harmonic functions of three variables are given in the table below with {\displaystyle r^{2}=x^{2}+y^{2}+z^{2}:}
Harmonic functions that arise in physics are determined by their singularities and boundary conditions (such as Dirichlet boundary conditions or Neumann boundary conditions). On regions without boundaries, adding the real or imaginary part of any entire function will produce a harmonic function with the same singularity, so in this case the harmonic function is not determined by its singularities; however, we can make the solution unique in physical situations by requiring that the solution approaches 0 as r approaches infinity. In this case, uniqueness follows by Liouville's theorem.
The singular points of the harmonic functions above are expressed as "charges" and "charge densities" using the terminology of electrostatics, and so the corresponding harmonic function will be proportional to the electrostatic potential due to these charge distributions. Each function above will yield another harmonic function when multiplied by a constant, rotated, and/or has a constant added. The inversion of each function will yield another harmonic function which has singularities which are the images of the original singularities in a spherical "mirror". Also, the sum of any two harmonic functions will yield another harmonic function.
Finally, examples of harmonic functions of n variables are:
The constant, linear and affine functions on all of {\displaystyle \mathbb {R} ^{n}} (for example, the electric potential between the plates of a capacitor, and the gravity potential of a slab)
The function {\displaystyle f(x_{1},\dots ,x_{n})=\left({x_{1}}^{2}+\cdots +{x_{n}}^{2}\right)^{1-n/2}} on {\displaystyle \mathbb {R} ^{n}\smallsetminus \lbrace 0\rbrace } for n > 2.
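This family can also be checked symbolically; the following SymPy sketch verifies the cases n = 3 and n = 5 as representatives:

```python
# Check that (x1^2 + ... + xn^2)^(1 - n/2) is harmonic away from the origin, here for n = 3, 5.
import sympy as sp

for n in (3, 5):
    xs = sp.symbols(f'x1:{n + 1}', real=True)      # the symbols x1, ..., xn
    r2 = sum(xi**2 for xi in xs)
    f = r2**sp.Rational(2 - n, 2)                   # exponent 1 - n/2
    laplacian = sum(sp.diff(f, xi, 2) for xi in xs)
    print(sp.simplify(laplacian))                   # 0 for each n
```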
== Properties ==
The set of harmonic functions on a given open set U can be seen as the kernel of the Laplace operator Δ and is therefore a vector space over {\displaystyle \mathbb {R} \!:} linear combinations of harmonic functions are again harmonic.
If f is a harmonic function on U, then all partial derivatives of f are also harmonic functions on U. The Laplace operator Δ and the partial derivative operator will commute on this class of functions.
In several ways, the harmonic functions are real analogues to holomorphic functions. All harmonic functions are analytic, that is, they can be locally expressed as power series. This is a general fact about elliptic operators, of which the Laplacian is a major example.
The uniform limit of a convergent sequence of harmonic functions is still harmonic. This is true because every continuous function satisfying the mean value property is harmonic. Consider the sequence on {\displaystyle (-\infty ,0)\times \mathbb {R} } defined by {\textstyle f_{n}(x,y)={\frac {1}{n}}\exp(nx)\cos(ny);} this sequence is harmonic and converges uniformly to the zero function; however note that the partial derivatives are not uniformly convergent to the zero function (the derivative of the zero function). This example shows the importance of relying on the mean value property and continuity to argue that the limit is harmonic.
== Connections with complex function theory ==
The real and imaginary part of any holomorphic function yield harmonic functions on {\displaystyle \mathbb {R} ^{2}} (these are said to be a pair of harmonic conjugate functions). Conversely, any harmonic function u on an open subset Ω of {\displaystyle \mathbb {R} ^{2}} is locally the real part of a holomorphic function. This is immediately seen observing that, writing {\displaystyle z=x+iy,} the complex function {\displaystyle g(z):=u_{x}-iu_{y}} is holomorphic in Ω because it satisfies the Cauchy–Riemann equations. Therefore, g locally has a primitive f, and u is the real part of f up to a constant, as ux is the real part of {\displaystyle f'=g.}
Although the above correspondence with holomorphic functions only holds for functions of two real variables, harmonic functions in n variables still enjoy a number of properties typical of holomorphic functions. They are (real) analytic; they have a maximum principle and a mean-value principle; a theorem of removal of singularities as well as a Liouville theorem holds for them in analogy to the corresponding theorems in complex functions theory.
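The local construction of a harmonic conjugate described above can be carried out explicitly in a simple case; the following SymPy sketch takes u = e^x cos y as an illustrative example (so that u + iv is, up to a constant, e^{x+iy}):

```python
# Recovering the harmonic conjugate v of a harmonic u (here u = exp(x)*cos(y)),
# so that f = u + i*v is holomorphic; v is unique up to an additive constant.
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = sp.exp(x) * sp.cos(y)

v = sp.integrate(sp.diff(u, x), y)                     # enforce v_y = u_x
v += sp.integrate(-sp.diff(u, y) - sp.diff(v, x), x)   # adjust so that v_x = -u_y

print(v)                                               # exp(x)*sin(y)
print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))      # 0  (first Cauchy-Riemann equation)
print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))      # 0  (second Cauchy-Riemann equation)
```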
== Properties of harmonic functions ==
Some important properties of harmonic functions can be deduced from Laplace's equation.
=== Regularity theorem for harmonic functions ===
Harmonic functions are infinitely differentiable in open sets. In fact, harmonic functions are real analytic.
=== Maximum principle ===
Harmonic functions satisfy the following maximum principle: if K is a nonempty compact subset of U, then f restricted to K attains its maximum and minimum on the boundary of K. If U is connected, this means that f cannot have local maxima or minima, other than the exceptional case where f is constant. Similar properties can be shown for subharmonic functions.
=== The mean value property ===
If B(x, r) is a ball with center x and radius r which is completely contained in the open set {\displaystyle \Omega \subset \mathbb {R} ^{n},} then the value u(x) of a harmonic function {\displaystyle u:\Omega \to \mathbb {R} } at the center of the ball is given by the average value of u on the surface of the ball; this average value is also equal to the average value of u in the interior of the ball. In other words,
{\displaystyle u(x)={\frac {1}{n\omega _{n}r^{n-1}}}\int _{\partial B(x,r)}u\,d\sigma ={\frac {1}{\omega _{n}r^{n}}}\int _{B(x,r)}u\,dV}
where ωn is the volume of the unit ball in n dimensions and σ is the (n − 1)-dimensional surface measure.
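The surface mean value property can be illustrated numerically; the sketch below uses the harmonic polynomial u = x² − y² + 3xz and Monte Carlo sampling on a sphere (the center and radius are arbitrary example values):

```python
# Monte Carlo illustration of the surface mean value property in R^3.
import numpy as np

def u(p):
    x, y, z = p[..., 0], p[..., 1], p[..., 2]
    return x**2 - y**2 + 3 * x * z        # harmonic: u_xx + u_yy + u_zz = 2 - 2 + 0 = 0

rng = np.random.default_rng(0)
center = np.array([0.5, -1.0, 2.0])       # example center (arbitrary)
radius = 0.7                              # example radius (arbitrary)

# Uniformly distributed points on the sphere of the given radius about the center
pts = rng.standard_normal((200_000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts = center + radius * pts

print(u(center))                          # value at the center: 2.25
print(u(pts).mean())                      # spherical average; matches up to Monte Carlo error
```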
Conversely, all locally integrable functions satisfying the (volume) mean-value property are both infinitely differentiable and harmonic.
In terms of convolutions, if
χ
r
:=
1
|
B
(
0
,
r
)
|
χ
B
(
0
,
r
)
=
n
ω
n
r
n
χ
B
(
0
,
r
)
{\displaystyle \chi _{r}:={\frac {1}{|B(0,r)|}}\chi _{B(0,r)}={\frac {n}{\omega _{n}r^{n}}}\chi _{B(0,r)}}
denotes the characteristic function of the ball with radius r about the origin, normalized so that
∫
R
n
χ
r
d
x
=
1
,
{\textstyle \int _{\mathbb {R} ^{n}}\chi _{r}\,dx=1,}
the function u is harmonic on Ω if and only if
u
(
x
)
=
u
∗
χ
r
(
x
)
{\displaystyle u(x)=u*\chi _{r}(x)\;}
for all x and r such that
$B(x,r) \subset \Omega.$
Sketch of the proof. The proof of the mean-value property of harmonic functions and its converse follows immediately by observing that the non-homogeneous equation, for any 0 < s < r
$$\Delta w = \chi_r - \chi_s$$
admits an easy explicit solution $w_{r,s}$ of class $C^{1,1}$ with compact support in B(0, r). Thus, if u is harmonic in Ω
$$0 = \Delta u * w_{r,s} = u * \Delta w_{r,s} = u * \chi_r - u * \chi_s$$
holds in the set $\Omega_r$ of all points x in Ω with $\operatorname{dist}(x, \partial\Omega) > r.$
Since u is continuous in Ω,
$u * \chi_s$
converges to u as s → 0 showing the mean value property for u in Ω. Conversely, if u is any
$L^1_{\mathrm{loc}}$
function satisfying the mean-value property in Ω, that is,
$$u * \chi_r = u * \chi_s$$
holds in $\Omega_r$ for all 0 < s < r; then, iterating m times the convolution with $\chi_r$, one has:
$$u = u * \chi_r = u * \chi_r * \cdots * \chi_r, \qquad x \in \Omega_{mr},$$
so that u is
$C^{m-1}(\Omega_{mr})$ because the m-fold iterated convolution of $\chi_r$ is of class $C^{m-1}$
with support B(0, mr). Since r and m are arbitrary, u is
$C^\infty(\Omega)$
too. Moreover,
$$\Delta u * w_{r,s} = u * \Delta w_{r,s} = u * \chi_r - u * \chi_s = 0$$
for all 0 < s < r so that Δu = 0 in Ω by the fundamental theorem of the calculus of variations, proving the equivalence between harmonicity and mean-value property.
This statement of the mean value property can be generalized as follows: If h is any spherically symmetric function supported in B(x, r) such that
$\int h = 1,$
then
$$u(x) = h * u(x).$$
In other words, we can take the weighted average of u about a point and recover u(x). In particular, by taking h to be a C∞ function, we can recover the value of u at any point even if we only know how u acts as a distribution. See Weyl's lemma.
=== Harnack's inequality ===
Let $V \subset \overline{V} \subset \Omega$ be a connected set in a bounded domain Ω.
Then for every non-negative harmonic function u,
Harnack's inequality
$$\sup_V u \leq C \inf_V u$$
holds for some constant C that depends only on V and Ω.
=== Removal of singularities ===
The following principle of removal of singularities holds for harmonic functions. If f is a harmonic function defined on a punctured open subset $\Omega \smallsetminus \{x_0\}$ of $\mathbb{R}^n$, which is less singular at $x_0$ than the fundamental solution (for n > 2), that is
$$f(x) = o\left(|x - x_0|^{2-n}\right), \qquad \text{as } x \to x_0,$$
then f extends to a harmonic function on Ω (compare Riemann's theorem for functions of a complex variable).
=== Liouville's theorem ===
Theorem: If f is a harmonic function defined on all of
$\mathbb{R}^n$
which is bounded above or bounded below, then f is constant.
(Compare Liouville's theorem for functions of a complex variable).
Edward Nelson gave a particularly short proof of this theorem for the case of bounded functions, using the mean value property mentioned above:
Given two points, choose two balls with the given points as centers and of equal radius. If the radius is large enough, the two balls will coincide except for an arbitrarily small proportion of their volume. Since f is bounded, the averages of it over the two balls are arbitrarily close, and so f assumes the same value at any two points.
The proof can be adapted to the case where the harmonic function f is merely bounded above or below. By adding a constant and possibly multiplying by –1, we may assume that f is non-negative. Then for any two points x and y, and any positive number R, we let
$$r = R + d(x,y).$$
We then consider the balls $B_R(x)$ and $B_r(y)$, where by the triangle inequality, the first ball is contained in the second.
By the averaging property and the monotonicity of the integral, we have
$$f(x) = \frac{1}{\operatorname{vol}(B_R)} \int_{B_R(x)} f(z)\, dz \leq \frac{1}{\operatorname{vol}(B_R)} \int_{B_r(y)} f(z)\, dz.$$
(Note that since $\operatorname{vol} B_R(x)$ is independent of x, we denote it merely as $\operatorname{vol} B_R$.) In the last expression, we may multiply and divide by $\operatorname{vol} B_r$ and use the averaging property again, to obtain
$$f(x) \leq \frac{\operatorname{vol}(B_r)}{\operatorname{vol}(B_R)} f(y).$$
But as
$R \rightarrow \infty,$
the quantity
$$\frac{\operatorname{vol}(B_r)}{\operatorname{vol}(B_R)} = \frac{\left(R + d(x,y)\right)^n}{R^n}$$
tends to 1. Thus,
$$f(x) \leq f(y).$$
The same argument with the roles of x and y reversed shows that
$f(y) \leq f(x)$
, so that
$f(x) = f(y).$
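The decisive step is elementary arithmetic; the following tiny sketch (dimension and distance are invented for the illustration) shows the volume ratio used above approaching 1 as R grows:

```python
# Illustrative arithmetic only: vol(B_r) / vol(B_R) = ((R + d(x, y)) / R)^n -> 1 as R grows,
# which forces f(x) <= f(y) for a non-negative harmonic f. n and d are arbitrary sample values.
n, d = 3, 2.0
for R in (10.0, 100.0, 1_000.0, 10_000.0):
    print(R, ((R + d) / R) ** n)   # 1.728, 1.0612..., 1.0060..., 1.0006...
```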
Another proof uses the fact that given a Brownian motion $B_t$ in $\mathbb{R}^n,$
such that
$B_0 = x_0,$
we have
$$E[f(B_t)] = f(x_0)$$
for all t ≥ 0. In words, it says that a harmonic function defines a martingale for the Brownian motion. Then a probabilistic coupling argument finishes the proof.
== Generalizations ==
=== Weakly harmonic function ===
A function (or, more generally, a distribution) is weakly harmonic if it satisfies Laplace's equation
$$\Delta f = 0$$
in a weak sense (or, equivalently, in the sense of distributions). A weakly harmonic function coincides almost everywhere with a strongly harmonic function, and is in particular smooth. A weakly harmonic distribution is precisely the distribution associated to a strongly harmonic function, and so also is smooth. This is Weyl's lemma.
There are other weak formulations of Laplace's equation that are often useful. One of these is Dirichlet's principle, representing harmonic functions in the Sobolev space H1(Ω) as the minimizers of the Dirichlet energy integral
$$J(u) := \int_\Omega |\nabla u|^2 \, dx$$
with respect to local variations, that is, all functions
$u \in H^1(\Omega)$
such that
$$J(u) \leq J(u+v)$$
holds for all
$v \in C_c^\infty(\Omega),$
or equivalently, for all
$v \in H_0^1(\Omega).$
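As an informal illustration of Dirichlet's principle, one can look for the minimizer of a discretized Dirichlet energy on a grid with fixed boundary values; the sketch below uses finite differences and simple averaging sweeps (one possible choice, not prescribed by the article) and converges to the discrete harmonic function:

```python
# Minimal finite-difference sketch (illustrative assumptions: grid size, boundary data and
# the number of sweeps are arbitrary). The limit of the averaging sweeps is the discrete
# harmonic function, which minimizes the discrete Dirichlet energy for these boundary values.
import numpy as np

def dirichlet_energy(U):
    # sum of squared finite differences, a discrete analogue of the integral of |grad u|^2
    return np.sum(np.diff(U, axis=0) ** 2) + np.sum(np.diff(U, axis=1) ** 2)

N = 40
U = np.zeros((N, N))
U[0, :] = 1.0                        # boundary condition: u = 1 on the top edge, 0 elsewhere

for _ in range(5_000):               # Jacobi-style sweeps: replace interior values by neighbour averages
    U[1:-1, 1:-1] = 0.25 * (U[:-2, 1:-1] + U[2:, 1:-1] + U[1:-1, :-2] + U[1:-1, 2:])

print(dirichlet_energy(U))                            # energy of the (near-)minimizer
print(U[1:-1, 1:-1].min(), U[1:-1, 1:-1].max())       # interior values stay strictly between 0 and 1
```

The last line also hints at the maximum principle: the interior values of the limit are trapped between the boundary extremes.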
=== Harmonic functions on manifolds ===
Harmonic functions can be defined on an arbitrary Riemannian manifold, using the Laplace–Beltrami operator Δ. In this context, a function is called harmonic if
$$\Delta f = 0.$$
Many of the properties of harmonic functions on domains in Euclidean space carry over to this more general setting, including the mean value theorem (over geodesic balls), the maximum principle, and the Harnack inequality. With the exception of the mean value theorem, these are easy consequences of the corresponding results for general linear elliptic partial differential equations of the second order.
=== Subharmonic functions ===
A C2 function that satisfies Δf ≥ 0 is called subharmonic. This condition guarantees that the maximum principle will hold, although other properties of harmonic functions may fail. More generally, a function is subharmonic if and only if, in the interior of any ball in its domain, its graph lies below that of the harmonic function interpolating its boundary values on the ball.
=== Harmonic forms ===
One generalization of the study of harmonic functions is the study of harmonic forms on Riemannian manifolds, and it is related to the study of cohomology. Also, it is possible to define harmonic vector-valued functions, or harmonic maps of two Riemannian manifolds, which are critical points of a generalized Dirichlet energy functional (this includes harmonic functions as a special case, a result known as Dirichlet principle). This kind of harmonic map appears in the theory of minimal surfaces. For example, a curve, that is, a map from an interval in
$\mathbb{R}$
to a Riemannian manifold, is a harmonic map if and only if it is a geodesic.
=== Harmonic maps between manifolds ===
If M and N are two Riemannian manifolds, then a harmonic map
$u : M \to N$
is defined to be a critical point of the Dirichlet energy
$$D[u] = \frac{1}{2} \int_M \|du\|^2 \, d\operatorname{Vol}$$
in which
$du : TM \to TN$
is the differential of u, and the norm is that induced by the metric on M and that on N on the tensor product bundle
$T^{*}M \otimes u^{-1}TN.$
Important special cases of harmonic maps between manifolds include minimal surfaces, which are precisely the harmonic immersions of a surface into three-dimensional Euclidean space. More generally, minimal submanifolds are harmonic immersions of one manifold in another. Harmonic coordinates are a harmonic diffeomorphism from a manifold to an open subset of a Euclidean space of the same dimension.
== See also ==
== Notes ==
== References ==
== External links ==
"Harmonic function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Harmonic Function". MathWorld.
Harmonic Function Theory by S. Axler, Paul Bourdon, and Wade Ramey
In mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form
$(-\infty, a)$
is a convex set. For a function of a single variable, along any stretch of the curve the highest point is one of the endpoints. The negative of a quasiconvex function is said to be quasiconcave.
Quasiconvexity is a more general property than convexity in that all convex functions are also quasiconvex, but not all quasiconvex functions are convex. Univariate unimodal functions are quasiconvex or quasiconcave; however, this is not necessarily the case for functions with multiple arguments. For example, the 2-dimensional Rosenbrock function is unimodal but not quasiconvex, and functions with star-convex sublevel sets can be unimodal without being quasiconvex.
== Definition and properties ==
A function $f : S \to \mathbb{R}$ defined on a convex subset $S$ of a real vector space is quasiconvex if for all $x, y \in S$ and $\lambda \in [0,1]$ we have
$$f(\lambda x + (1-\lambda)y) \leq \max\{f(x), f(y)\}.$$
In words, if $f$ is such that it is always true that a point directly between two other points does not give a higher value of the function than both of the other points do, then $f$ is quasiconvex. Note that the points $x$ and $y$, and the point directly between them, can be points on a line or more generally points in n-dimensional space.
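A rough numerical check of this definition samples the inequality at random pairs of points; the functions below are illustrative choices, not taken from the article:

```python
# Sketch only: sample the inequality f(lam*x + (1-lam)*y) <= max(f(x), f(y)) at random points.
# The test domain, sample size and tolerance are arbitrary assumptions.
import numpy as np

def looks_quasiconvex(f, lo=-5.0, hi=5.0, trials=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, trials)
    y = rng.uniform(lo, hi, trials)
    lam = rng.uniform(0.0, 1.0, trials)
    return bool(np.all(f(lam * x + (1 - lam) * y) <= np.maximum(f(x), f(y)) + 1e-12))

print(looks_quasiconvex(np.abs))             # True: |x| is convex, hence quasiconvex
print(looks_quasiconvex(np.floor))           # True: floor is monotone, hence quasiconvex
print(looks_quasiconvex(lambda x: -x**2))    # False: -x^2 has an interior maximum
```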
An alternative way (see introduction) of defining a quasi-convex function $f(x)$ is to require that each sublevel set $S_\alpha(f) = \{x \mid f(x) \leq \alpha\}$ is a convex set.
If furthermore
$$f(\lambda x + (1-\lambda)y) < \max\{f(x), f(y)\}$$
for all $x \neq y$ and $\lambda \in (0,1)$, then $f$ is strictly quasiconvex. That is, strict quasiconvexity requires that a point directly between two other points must give a lower value of the function than one of the other points does.
A quasiconcave function is a function whose negative is quasiconvex, and a strictly quasiconcave function is a function whose negative is strictly quasiconvex. Equivalently, a function $f$ is quasiconcave if
$$f(\lambda x + (1-\lambda)y) \geq \min\{f(x), f(y)\},$$
and strictly quasiconcave if
$$f(\lambda x + (1-\lambda)y) > \min\{f(x), f(y)\}.$$
A (strictly) quasiconvex function has (strictly) convex lower contour sets, while a (strictly) quasiconcave function has (strictly) convex upper contour sets.
A function that is both quasiconvex and quasiconcave is quasilinear.
A particular case of quasi-concavity, if $S \subset \mathbb{R}$, is unimodality, in which there is a locally maximal value.
== Applications ==
Quasiconvex functions have applications in mathematical analysis, in mathematical optimization, and in game theory and economics.
=== Mathematical optimization ===
In nonlinear optimization, quasiconvex programming studies iterative methods that converge to a minimum (if one exists) for quasiconvex functions. Quasiconvex programming is a generalization of convex programming. Quasiconvex programming is used in the solution of "surrogate" dual problems, whose biduals provide quasiconvex closures of the primal problem, which therefore provide tighter bounds than do the convex closures provided by Lagrangian dual problems. In theory, quasiconvex programming and convex programming problems can be solved in reasonable amount of time, where the number of iterations grows like a polynomial in the dimension of the problem (and in the reciprocal of the approximation error tolerated); however, such theoretically "efficient" methods use "divergent-series" step size rules, which were first developed for classical subgradient methods. Classical subgradient methods using divergent-series rules are much slower than modern methods of convex minimization, such as subgradient projection methods, bundle methods of descent, and nonsmooth filter methods.
=== Economics and partial differential equations: Minimax theorems ===
In microeconomics, quasiconcave utility functions imply that consumers have convex preferences. Quasiconvex functions are important
also in game theory, industrial organization, and general equilibrium theory, particularly for applications of Sion's minimax theorem. Generalizing a minimax theorem of John von Neumann, Sion's theorem is also used in the theory of partial differential equations.
== Preservation of quasiconvexity ==
=== Operations preserving quasiconvexity ===
maximum of quasiconvex functions (i.e. $f = \max\{f_1, \ldots, f_n\}$) is quasiconvex. Similarly, the maximum of strictly quasiconvex functions is strictly quasiconvex, the minimum of quasiconcave functions is quasiconcave, and the minimum of strictly quasiconcave functions is strictly quasiconcave.
composition with a non-decreasing function: if $g : \mathbb{R}^n \rightarrow \mathbb{R}$ is quasiconvex and $h : \mathbb{R} \rightarrow \mathbb{R}$ is non-decreasing, then $f = h \circ g$ is quasiconvex. Similarly, if $g : \mathbb{R}^n \rightarrow \mathbb{R}$ is quasiconcave and $h : \mathbb{R} \rightarrow \mathbb{R}$ is non-decreasing, then $f = h \circ g$ is quasiconcave.
minimization (i.e. $f(x,y)$ quasiconvex, $C$ a convex set, then $h(x) = \inf_{y \in C} f(x,y)$ is quasiconvex)
=== Operations not preserving quasiconvexity ===
The sum of quasiconvex functions defined on the same domain need not be quasiconvex: in other words, if $f(x), g(x)$ are quasiconvex, then $(f+g)(x) = f(x) + g(x)$ need not be quasiconvex.
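A concrete counterexample (the particular functions are an illustrative choice): $f(x) = x^3$ and $g(x) = -x$ are monotone and therefore quasiconvex, but the defining inequality fails for their sum $h(x) = x^3 - x$:

```python
# Illustrative counterexample with hand-picked points: h = f + g with f(x) = x^3, g(x) = -x.
def h(x):
    return x**3 - x

x, y, lam = -1.5, 0.5, 0.5
mid = lam * x + (1 - lam) * y            # -0.5
print(h(mid))                            # 0.375
print(max(h(x), h(y)))                   # -0.375, so h(mid) > max(h(x), h(y)): h is not quasiconvex
```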
The sum of quasiconvex functions defined on different domains (i.e. if $f(x), g(y)$ are quasiconvex, then $h(x,y) = f(x) + g(y)$) need not be quasiconvex. Such functions are called "additively decomposed" in economics and "separable" in mathematical optimization.
== Examples ==
Every convex function is quasiconvex.
A concave function can be quasiconvex. For example, $x \mapsto \log(x)$ is both concave and quasiconvex.
Any monotonic function is both quasiconvex and quasiconcave. More generally, a function which decreases up to a point and increases from that point on is quasiconvex (compare unimodality).
The floor function $x \mapsto \lfloor x \rfloor$ is an example of a quasiconvex function that is neither convex nor continuous.
== See also ==
Convex function
Concave function
Logarithmically concave function
Pseudoconvexity in the sense of several complex variables (not generalized convexity)
Pseudoconvex function
Invex function
Concavification
== References ==
Avriel, M., Diewert, W.E., Schaible, S. and Zang, I., Generalized Concavity, Plenum Press, 1988.
Crouzeix, J.-P. (2008). "Quasi-concavity". In Durlauf, Steven N.; Blume, Lawrence E (eds.). The New Palgrave Dictionary of Economics (Second ed.). Palgrave Macmillan. pp. 815–816. doi:10.1057/9780230226203.1375. ISBN 978-0-333-78676-5.
Singer, Ivan Abstract convex analysis. Canadian Mathematical Society Series of Monographs and Advanced Texts. A Wiley-Interscience Publication. John Wiley & Sons, Inc., New York, 1997. xxii+491 pp. ISBN 0-471-16015-6
== External links ==
SION, M., "On general minimax theorems", Pacific J. Math. 8 (1958), 171-176.
Mathematical programming glossary
Concave and Quasi-Concave Functions - by Charles Wilson, NYU Department of Economics
Quasiconcavity and quasiconvexity - by Martin J. Osborne, University of Toronto Department of Economics
In convex analysis and the calculus of variations, both branches of mathematics, a pseudoconvex function is a function that behaves like a convex function with respect to finding its local minima, but need not actually be convex. Informally, a differentiable function is pseudoconvex if it is increasing in any direction where it has a positive directional derivative. The property must hold in all of the function domain, and not only for nearby points.
== Formal definition ==
Consider a differentiable function $f : X \subseteq \mathbb{R}^n \rightarrow \mathbb{R}$, defined on a (nonempty) convex open set $X$ of the finite-dimensional Euclidean space $\mathbb{R}^n$. This function is said to be pseudoconvex if the following property holds: for all $x, y \in X$,
$$\nabla f(x) \cdot (y - x) \geq 0 \implies f(y) \geq f(x).$$
Equivalently: for all $x, y \in X$,
$$f(y) < f(x) \implies \nabla f(x) \cdot (y - x) < 0.$$
Here $\nabla f$ is the gradient of $f$, defined by:
$$\nabla f = \left(\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n}\right).$$
Note that the definition may also be stated in terms of the directional derivative of $f$, in the direction given by the vector $v = y - x$. This is because, as $f$ is differentiable, this directional derivative is given by:
$$\nabla f(x) \cdot v.$$
== Properties ==
=== Relation to other types of "convexity" ===
Every convex function is pseudoconvex, but the converse is not true. For example, the function $f(x) = x + x^3$ is pseudoconvex but not convex. Similarly, any pseudoconvex function is quasiconvex; but the converse is not true, since the function $f(x) = x^3$ is quasiconvex but not pseudoconvex. This can be summarized schematically as:
convex $\implies$ pseudoconvex $\implies$ quasiconvex.
To see that $f(x) = x^3$ is not pseudoconvex, consider its derivative at $x = 0$: $f'(0) = 0$. Then, if $f(x) = x^3$ were pseudoconvex, we should have $f(y) \geq f(0) = 0$ for every $y$. In particular it should be true for $y = -1$. But it is not, as $f(-1) = (-1)^3 = -1 < f(0) = 0$.
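A rough grid search (illustrative only; the grid, tolerance and helper names are arbitrary assumptions) makes the same comparison numerically, finding no violation of the defining implication for $x + x^3$ and an immediate violation for $x^3$:

```python
# Sketch: search for a pair (x, y) with f'(x)(y - x) >= 0 but f(y) < f(x),
# which would contradict pseudoconvexity. The grid and tolerance are arbitrary.
import numpy as np

def find_violation(f, fprime, grid):
    for x in grid:
        for y in grid:
            if fprime(x) * (y - x) >= 0 and f(y) < f(x) - 1e-12:
                return (x, y)
    return None

grid = np.arange(-20, 21) / 10.0         # includes 0.0 exactly
print(find_violation(lambda x: x + x**3, lambda x: 1 + 3 * x**2, grid))  # None
print(find_violation(lambda x: x**3, lambda x: 3 * x**2, grid))          # a witness pair such as x = 0.0, y = -2.0
```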
=== Sufficient optimality condition ===
For any differentiable function, we have Fermat's theorem as a necessary condition of optimality, which states that: if $f$ has a local minimum at $x^*$ in an open domain, then $x^*$ must be a stationary point of $f$ (that is: $\nabla f(x^*) = 0$).
Pseudoconvexity is of great interest in the area of optimization, because the converse is also true for any pseudoconvex function. That is: if $x^*$ is a stationary point of a pseudoconvex function $f$, then $f$ has a global minimum at $x^*$. Note also that the result guarantees a global minimum (not only local).
This last result is also true for a convex function, but it is not true for a quasiconvex function. Consider for example the quasiconvex function $f(x) = x^3$ discussed above:
This function is not pseudoconvex, but it is quasiconvex. Also, the point $x = 0$ is a critical point of $f$, as $f'(0) = 0$. However, $f$ does not have a global minimum at $x = 0$ (not even a local minimum).
Finally, note that a pseudoconvex function may not have any critical point. Take for example the pseudoconvex function $f(x) = x^3 + x$, whose derivative is always positive: $f'(x) = 3x^2 + 1 > 0, \ \forall x \in \mathbb{R}$.
== Examples ==
An example of a function that is pseudoconvex, but not convex, is:
$$f(x) = \frac{x^2}{x^2 + k}, \quad k > 0.$$
The figure shows this function for the case where $k = 0.2$. This example may be generalized to two variables as:
The previous example may be modified to obtain a function that is not convex, nor pseudoconvex, but is quasiconvex:
The figure shows this function for the case where $k = 0.5,\ p = 0.6$. As can be seen, this function is not convex because of the concavity, and it is not pseudoconvex because it is not differentiable at $x = 0$.
== Generalization to nondifferentiable functions ==
The notion of pseudoconvexity can be generalized to nondifferentiable functions as follows. Given any function $f : X \rightarrow \mathbb{R}$, we can define the upper Dini derivative of $f$ by:
$$f^+(x, u) = \limsup_{h \to 0^+} \frac{f(x + hu) - f(x)}{h},$$
where u is any unit vector. The function is said to be pseudoconvex if it is increasing in any direction where the upper Dini derivative is positive. More precisely, this is characterized in terms of the subdifferential
$\partial f$
as follows:
where
$[x, y]$
denotes the line segment adjoining x and y.
== Related notions ==
A pseudoconcave function is a function whose negative is pseudoconvex. A pseudolinear function is a function that is both pseudoconvex and pseudoconcave. For example, linear–fractional programs have pseudolinear objective functions and linear–inequality constraints. These properties allow fractional-linear problems to be solved by a variant of the simplex algorithm (of George B. Dantzig).
Given a vector-valued function $\eta$, there is a more general notion of $\eta$-pseudoconvexity and $\eta$-pseudolinearity; wherein classical pseudoconvexity and pseudolinearity pertain to the case when $\eta(x, y) = y - x$.
== See also ==
Pseudoconvexity
Convex function
Quasiconvex function
Invex function
== Notes ==
== References ==
Floudas, Christodoulos A.; Pardalos, Panos M. (2001), "Generalized monotone multivalued maps", Encyclopedia of Optimization, Springer, p. 227, ISBN 978-0-7923-6932-5.
Mangasarian, O. L. (January 1965). "Pseudo-Convex Functions". Journal of the Society for Industrial and Applied Mathematics, Series A: Control. 3 (2): 281–290. doi:10.1137/0303020. ISSN 0363-0129..
Rapcsak, T. (1991-02-15). "On pseudolinear functions". European Journal of Operational Research. 50 (3): 353–360. doi:10.1016/0377-2217(91)90267-Y. ISSN 0377-2217.
In mathematics, the hypograph or subgraph of a function
$f : \mathbb{R}^n \rightarrow \mathbb{R}$
is the set of points lying on or below its graph.
A related definition is that of such a function's epigraph, which is the set of points on or above the function's graph.
The domain (rather than the codomain) of the function is not particularly important for this definition; it can be an arbitrary set instead of
$\mathbb{R}^n$.
== Definition ==
The definition of the hypograph was inspired by that of the graph of a function, where the graph of
$f : X \to Y$
is defined to be the set
$$\operatorname{graph} f := \{(x, y) \in X \times Y : y = f(x)\}.$$
The hypograph or subgraph of a function
$f : X \to [-\infty, \infty]$
valued in the extended real numbers
$[-\infty, \infty] = \mathbb{R} \cup \{\pm\infty\}$
is the set
$$\operatorname{hyp} f = \{(x, r) \in X \times \mathbb{R} : r \leq f(x)\} = \left[f^{-1}(\infty) \times \mathbb{R}\right] \cup \bigcup_{x \in f^{-1}(\mathbb{R})} \left(\{x\} \times (-\infty, f(x)]\right).$$
Similarly, the set of points on or above the function is its epigraph.
The strict hypograph is the hypograph with the graph removed:
$$\operatorname{hyp}_S f = \{(x, r) \in X \times \mathbb{R} : r < f(x)\} = \operatorname{hyp} f \setminus \operatorname{graph} f = \bigcup_{x \in X} \left(\{x\} \times (-\infty, f(x))\right).$$
Despite the fact that $f$ might take one (or both) of $\pm\infty$ as a value (in which case its graph would not be a subset of $X \times \mathbb{R}$), the hypograph of $f$ is nevertheless defined to be a subset of $X \times \mathbb{R}$ rather than of $X \times [-\infty, \infty].$
== Properties ==
The hypograph of a function $f$ is empty if and only if $f$ is identically equal to negative infinity.
A function is concave if and only if its hypograph is a convex set. The hypograph of a real affine function $g : \mathbb{R}^n \to \mathbb{R}$ is a halfspace in $\mathbb{R}^{n+1}.$
A function is upper semicontinuous if and only if its hypograph is closed.
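A small numerical illustration (the concave function and sampling ranges are arbitrary choices, not from the article) of the statement that a concave function has a convex hypograph:

```python
# Sketch: for a concave f, any convex combination of two points of hyp f stays in hyp f.
# The function, ranges and sample size are illustrative assumptions.
import numpy as np

def in_hypograph(f, x, r):
    return r <= f(x)

f = lambda x: 1.0 - x**2                 # concave on R

rng = np.random.default_rng(1)
ok = True
for _ in range(10_000):
    x1, x2 = rng.uniform(-3.0, 3.0, 2)
    r1 = f(x1) - rng.uniform(0.0, 2.0)   # a point on or below the graph
    r2 = f(x2) - rng.uniform(0.0, 2.0)
    t = rng.uniform()
    ok = ok and in_hypograph(f, t * x1 + (1 - t) * x2, t * r1 + (1 - t) * r2)
print(ok)                                # True for this concave f
```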
== See also ==
Effective domain
Epigraph (mathematics) – Region above a graph
Proper convex function
== Citations ==
== References ==
Rockafellar, R. Tyrrell; Wets, Roger J.-B. (26 June 2009). Variational Analysis. Grundlehren der mathematischen Wissenschaften. Vol. 317. Berlin New York: Springer Science & Business Media. ISBN 9783642024313. OCLC 883392544.
Microeconomics is a branch of economics that studies the behavior of individuals and firms in making decisions regarding the allocation of scarce resources and the interactions among these individuals and firms. Microeconomics focuses on the study of individual markets, sectors, or industries as opposed to the economy as a whole, which is studied in macroeconomics.
One goal of microeconomics is to analyze the market mechanisms that establish relative prices among goods and services and allocate limited resources among alternative uses. Microeconomics shows conditions under which free markets lead to desirable allocations. It also analyzes market failure, where markets fail to produce efficient results.
While microeconomics focuses on firms and individuals, macroeconomics focuses on the total of economic activity, dealing with the issues of growth, inflation, and unemployment—and with national policies relating to these issues. Microeconomics also deals with the effects of economic policies (such as changing taxation levels) on microeconomic behavior and thus on the aforementioned aspects of the economy. Particularly in the wake of the Lucas critique, much of modern macroeconomic theories has been built upon microfoundations—i.e., based upon basic assumptions about micro-level behavior.
== Assumptions and definitions ==
Microeconomic study historically has been performed according to general equilibrium theory, developed by Léon Walras in Elements of Pure Economics (1874) and partial equilibrium theory, introduced by Alfred Marshall in Principles of Economics (1890).
Microeconomic theory typically begins with the study of a single rational and utility maximizing individual. To economists, rationality means an individual possesses stable preferences that are both complete and transitive.
The technical assumption that preference relations are continuous is needed to ensure the existence of a utility function. Although microeconomic theory can continue without this assumption, it would make comparative statics impossible since there is no guarantee that the resulting utility function would be differentiable.
Microeconomic theory progresses by defining a competitive budget set which is a subset of the consumption set. It is at this point that economists make the technical assumption that preferences are locally non-satiated. Without the assumption of LNS (local non-satiation) there is no guarantee that a rational individual would exhaust the entire budget, since utility need not rise with additional consumption. With the necessary tools and assumptions in place, the utility maximization problem (UMP) is developed.
The utility maximization problem is the heart of consumer theory. The utility maximization problem attempts to explain the action axiom by imposing rationality axioms on consumer preferences and then mathematically modeling and analyzing the consequences. The utility maximization problem serves not only as the mathematical foundation of consumer theory but as a metaphysical explanation of it as well. That is, the utility maximization problem is used by economists to not only explain what or how individuals make choices but why individuals make choices as well.
The utility maximization problem is a constrained optimization problem in which an individual seeks to maximize utility subject to a budget constraint. Economists use the extreme value theorem to guarantee that a solution to the utility maximization problem exists. That is, since the budget constraint is both bounded and closed, a solution to the utility maximization problem exists. Economists call the solution to the utility maximization problem a Walrasian demand function or correspondence.
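As an illustrative sketch of such a problem (the Cobb–Douglas utility, prices and income below are invented for the example, not taken from the article), a numerical solver recovers the Walrasian demand, which in this case is also known in closed form:

```python
# Illustrative utility maximization: maximize u(x1, x2) = sqrt(x1 * x2) subject to p . x <= w.
# All numbers are assumed sample values. Closed-form demand here is x_i = w / (2 * p_i).
import numpy as np
from scipy.optimize import minimize

p = np.array([2.0, 3.0])     # prices
w = 12.0                     # income

def neg_utility(x):
    return -np.sqrt(x[0] * x[1])

budget = {'type': 'ineq', 'fun': lambda x: w - p @ x}   # feasible when p . x <= w
bounds = [(1e-9, None), (1e-9, None)]

res = minimize(neg_utility, x0=[1.0, 1.0], bounds=bounds, constraints=[budget])
print(res.x)                 # approximately [3.0, 2.0], the Walrasian demand at (p, w)
```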
The utility maximization problem has so far been developed by taking consumer tastes (i.e. consumer utility) as primitive. However, an alternative way to develop microeconomic theory is by taking consumer choice as primitive. This model of microeconomic theory is referred to as revealed preference theory.
The theory of supply and demand usually assumes that markets are perfectly competitive. This implies that there are many buyers and sellers in the market and none of them have the capacity to significantly influence prices of goods and services. In many real-life transactions, the assumption fails because some individual buyers or sellers have the ability to influence prices. Quite often, a sophisticated analysis is required to understand the demand-supply equation of a good model. However, the theory works well in situations meeting these assumptions.
Mainstream economics does not assume a priori that markets are preferable to other forms of social organization. In fact, much analysis is devoted to cases where market failures lead to resource allocation that is suboptimal and creates deadweight loss. A classic example of suboptimal resource allocation is that of a public good. In such cases, economists may attempt to find policies that avoid waste, either directly by government control, indirectly by regulation that induces market participants to act in a manner consistent with optimal welfare, or by creating "missing markets" to enable efficient trading where none had previously existed.
This is studied in the field of collective action and public choice theory. "Optimal welfare" usually takes on a Paretian norm, which is a mathematical application of the Kaldor–Hicks method. This can diverge from the Utilitarian goal of maximizing utility because it does not consider the distribution of goods between people. Market failure in positive economics (microeconomics) is limited in implications without mixing the belief of the economist and their theory.
The demand for various commodities by individuals is generally thought of as the outcome of a utility-maximizing process, with each individual trying to maximize their own utility under a budget constraint and a given consumption set.
=== Allocation of scarce resources ===
Individuals and firms need to allocate limited resources to ensure all agents in the economy are well off. Firms decide which goods and services to produce considering low costs involving labour, materials and capital as well as potential profit margins. Consumers choose the good and services they want that will maximize their happiness taking into account their limited wealth.
The government can make these allocation decisions or they can be independently made by the consumers and firms. For example, in the former Soviet Union, the government played a part in informing car manufacturers which cars to produce and which consumers would gain access to a car.
== History ==
Economists commonly consider themselves microeconomists or macroeconomists. The difference between microeconomics and macroeconomics likely was introduced in 1933 by the Norwegian economist Ragnar Frisch, the co-recipient of the first Nobel Memorial Prize in Economic Sciences in 1969. However, Frisch did not actually use the word "microeconomics", instead drawing distinctions between "micro-dynamic" and "macro-dynamic" analysis in a way similar to how the words "microeconomics" and "macroeconomics" are used today. The first known use of the term "microeconomics" in a published article was from Pieter de Wolff in 1941, who broadened the term "micro-dynamics" into "microeconomics".
== Microeconomic theory ==
=== Consumer demand theory ===
Consumer demand theory relates preferences for the consumption of both goods and services to the consumption expenditures; ultimately, this relationship between preferences and consumption expenditures is used to relate preferences to consumer demand curves. The link between personal preferences, consumption and the demand curve is one of the most closely studied relations in economics. It is a way of analyzing how consumers may achieve equilibrium between preferences and expenditures by maximizing utility subject to consumer budget constraints.
=== Production theory ===
Production theory is the study of production, or the economic process of converting inputs into outputs. Production uses resources to create a good or service that is suitable for use, gift-giving in a gift economy, or exchange in a market economy. This can include manufacturing, storing, shipping, and packaging. Some economists define production broadly as all economic activity other than consumption. They see every commercial activity other than the final purchase as some form of production.
=== Cost-of-production theory of value ===
The cost-of-production theory of value states that the price of an object or condition is determined by the sum of the cost of the resources that went into making it. The cost can comprise any of the factors of production (including labor, capital, or land) and taxation. Technology can be viewed either as a form of fixed capital (e.g. an industrial plant) or circulating capital (e.g. intermediate goods).
In the mathematical model for the cost of production, the short-run total cost is equal to fixed cost plus total variable cost. The fixed cost refers to the cost that is incurred regardless of how much the firm produces. The variable cost is a function of the quantity of an object being produced. The cost function can be used to characterize production through the duality theory in economics, developed mainly by Ronald Shephard (1953, 1970) and other scholars (Sickles & Zelenyuk, 2019, ch. 2).
=== Fixed and variable costs ===
Fixed cost (FC) – This cost does not change with output. It includes business expenses such as rent, salaries and utility bills.
Variable cost (VC) – This cost changes as output changes. This includes raw materials, delivery costs and production supplies.
Over a short time period (a few months), most costs are fixed costs as the firm will have to pay for salaries, contracted shipment and materials used to produce various goods. Over a longer time period (2-3 years), costs can become variable. Firms can decide to reduce output, purchase fewer materials and even sell some machinery. Over 10 years, most costs become variable as workers can be laid off or new machinery can be bought to replace the old machinery.
Sunk costs – This is a fixed cost that has already been incurred and cannot be recovered. An example of this can be in R&D development like in the pharmaceutical industry. Hundreds of millions of dollars are spent to achieve new drug breakthroughs, but this is challenging as it is increasingly harder to find new breakthroughs and meet tighter regulation standards. Thus many projects are written off, leading to losses of millions of dollars.
=== Opportunity cost ===
Opportunity cost is closely related to the idea of time constraints. One can do only one thing at a time, which means that, inevitably, one is always giving up other things. The opportunity cost of any activity is the value of the next-best alternative thing one may have done instead. Opportunity cost depends only on the value of the next-best alternative. It does not matter whether one has five alternatives or 5,000.
Opportunity costs can tell when not to do something as well as when to do something. For example, one may like waffles, but like chocolate even more. If someone offers only waffles, one would take it. But if offered waffles or chocolate, one would take the chocolate. The opportunity cost of eating waffles is sacrificing the chance to eat chocolate. Because the cost of not eating the chocolate is higher than the benefits of eating the waffles, it makes no sense to choose waffles. Of course, if one chooses chocolate, they are still faced with the opportunity cost of giving up having waffles. But one is willing to do that because the waffle's opportunity cost is lower than the benefits of the chocolate. Opportunity costs are unavoidable constraints on behavior because one has to decide what's best and give up the next-best alternative.
=== Price theory ===
Microeconomics is also known as price theory to highlight the significance of prices in relation to buyer and sellers as these agents determine prices due to their individual actions. Price theory is a field of economics that uses the supply and demand framework to explain and predict human behavior. It is associated with the Chicago School of Economics. Price theory studies competitive equilibrium in markets to yield testable hypotheses that can be rejected.
Price theory is not the same as microeconomics. Strategic behavior, such as the interactions among sellers in a market where they are few, is a significant part of microeconomics but is not emphasized in price theory. Price theorists focus on competition believing it to be a reasonable description of most markets that leaves room to study additional aspects of tastes and technology. As a result, price theory tends to use less game theory than microeconomics does.
Price theory focuses on how agents respond to prices, but its framework can be applied to a wide variety of socioeconomic issues that might not seem to involve prices at first glance. Price theorists have influenced several other fields including developing public choice theory and law and economics. Price theory has been applied to issues previously thought of as outside the purview of economics such as criminal justice, marriage, and addiction.
== Microeconomic models ==
=== Supply and demand ===
Supply and demand is an economic model of price determination in a perfectly competitive market. It concludes that in a perfectly competitive market with no externalities, per unit taxes, or price controls, the unit price for a particular good is the price at which the quantity demanded by consumers equals the quantity supplied by producers. This price results in a stable economic equilibrium.
Prices and quantities have been described as the most directly observable attributes of goods produced and exchanged in a market economy. The theory of supply and demand is an organizing principle for explaining how prices coordinate the amounts produced and consumed. In microeconomics, it applies to price and output determination for a market with perfect competition, which includes the condition of no buyers or sellers large enough to have price-setting power.
For a given market of a commodity, demand is the relation of the quantity that all buyers would be prepared to purchase at each unit price of the good. Demand is often represented by a table or a graph showing price and quantity demanded (as in the figure). Demand theory describes individual consumers as rationally choosing the most preferred quantity of each good, given income, prices, tastes, etc. A term for this is "constrained utility maximization" (with income and wealth as the constraints on demand). Here, utility refers to the hypothesized relation of each individual consumer for ranking different commodity bundles as more or less preferred.
The law of demand states that, in general, price and quantity demanded in a given market are inversely related. That is, the higher the price of a product, the less of it people would be prepared to buy (other things unchanged). As the price of a commodity falls, consumers move toward it from relatively more expensive goods (the substitution effect). In addition, purchasing power from the price decline increases ability to buy (the income effect). Other factors can change demand; for example an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. All determinants are predominantly taken as constant factors of demand and supply.
Supply is the relation between the price of a good and the quantity available for sale at that price. It may be represented as a table or graph relating price and quantity supplied. Producers, for example business firms, are hypothesized to be profit maximizers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. Supply is typically represented as a function relating price and quantity, if other factors are unchanged.
That is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. The higher price makes it profitable to increase production. Just as on the demand side, the position of the supply can shift, say from a change in the price of a productive input or a technical improvement. The "Law of Supply" states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. Here as well, the determinants of supply, such as price of substitutes, cost of production, technology applied and various factors of inputs of production are all taken to be constant for a specific time period of evaluation of supply.
Market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above. At a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. This is posited to bid the price up. At a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. This pushes the price down. The model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilize at the price that makes quantity supplied equal to quantity demanded. Similarly, demand-and-supply theory predicts a new price-quantity combination from a shift in demand (as to the figure), or in supply.
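A toy linear model (all coefficients are invented for illustration, not taken from the article) makes the equilibrium computation concrete:

```python
# Illustrative linear supply and demand: Qd = 100 - 2P, Qs = -20 + 4P (assumed numbers).
def demand(p):
    return 100 - 2 * p

def supply(p):
    return -20 + 4 * p

# Equilibrium: 100 - 2p = -20 + 4p  =>  p* = 20, q* = 60
p_star = 120 / 6
print(p_star, demand(p_star))    # 20.0 60.0
print(demand(15) - supply(15))   # 30: shortage (excess demand) below the equilibrium price
print(demand(25) - supply(25))   # -30: surplus (excess supply) above the equilibrium price
```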
For a given quantity of a consumer good, the point on the demand curve indicates the value, or marginal utility, to consumers for that unit. It measures what the consumer would be prepared to pay for that unit. The corresponding point on the supply curve measures marginal cost, the increase in total cost to the supplier for the corresponding unit of the good. The price in equilibrium is determined by supply and demand. In a perfectly competitive market, supply and demand equate marginal cost and marginal utility at equilibrium.
On the supply side of the market, some factors of production are described as (relatively) variable in the short run, which affects the cost of changing output levels. Their usage rates can be changed easily, such as electrical power, raw-material inputs, and over-time and temp work. Other inputs are relatively fixed, such as plant and equipment and key personnel. In the long run, all inputs may be adjusted by management. These distinctions translate to differences in the elasticity (responsiveness) of the supply curve in the short and long runs and corresponding differences in the price-quantity change from a shift on the supply or demand side of the market.
Marginalist theory, such as above, describes the consumers as attempting to reach most-preferred positions, subject to income and wealth constraints while producers attempt to maximize profits subject to their own constraints, including demand for goods produced, technology, and the price of inputs. For the consumer, that point comes where marginal utility of a good, net of price, reaches zero, leaving no net gain from further consumption increases. Analogously, the producer compares marginal revenue (identical to price for the perfect competitor) against the marginal cost of a good, with marginal profit the difference. At the point where marginal profit reaches zero, further increases in production of the good stop. For movement to market equilibrium and for changes in equilibrium, price and quantity also change "at the margin": more-or-less of something, rather than necessarily all-or-nothing.
Other applications of demand and supply include the distribution of income among the factors of production, including labor and capital, through factor markets. In a competitive labor market for example the quantity of labor employed and the price of labor (the wage rate) depends on the demand for labor (from employers for production) and supply of labor (from potential workers). Labor economics examines the interaction of workers and employers through such markets to explain patterns and changes of wages and other labor income, labor mobility, and (un)employment, productivity through human capital, and related public-policy issues.
Demand-and-supply analysis is used to explain the behavior of perfectly competitive markets, but as a standard of comparison it can be extended to any type of market. It can also be generalized to explain variables across the economy, for example, total output (estimated as real GDP) and the general price level, as studied in macroeconomics. Tracing the qualitative and quantitative effects of variables that change supply and demand, whether in the short or long run, is a standard exercise in applied economics. Economic theory may also specify conditions such that supply and demand through the market is an efficient mechanism for allocating resources.
== Market structure ==
Market structure refers to features of a market, including the number of firms in the market, the distribution of market shares between them, product uniformity across firms, how easy it is for firms to enter and exit the market, and forms of competition in the market. A market structure can have several types of interacting market systems.
Different forms of markets are a feature of capitalism and market socialism, with advocates of state socialism often criticizing markets and aiming to substitute or replace markets with varying degrees of government-directed economic planning.
Competition acts as a regulatory mechanism for market systems, with government providing regulations where the market cannot be expected to regulate itself. Regulations help to mitigate negative externalities of goods and services when the private equilibrium of the market does not match the social equilibrium. One example of this is with regards to building codes, which if absent in a purely competition regulated market system, might result in several horrific injuries or deaths to be required before companies would begin improving structural safety, as consumers may at first not be as concerned or aware of safety issues to begin putting pressure on companies to provide them, and companies would be motivated not to provide proper safety features due to how it would cut into their profits.
The concept of "market type" is different from the concept of "market structure". Nevertheless, there are a variety of types of markets.
The different market structures produce cost curves based on the type of structure present. The different curves are developed based on the costs of production, specifically the graph contains marginal cost, average total cost, average variable cost, average fixed cost, and marginal revenue, which is sometimes equal to the demand, average revenue, and price in a price-taking firm.
=== Perfect competition ===
Perfect competition is a situation in which numerous small firms producing identical products compete against each other in a given industry. Perfect competition leads to firms producing the socially optimal output level at the minimum possible cost per unit. Firms in perfect competition are "price takers" (they do not have enough market power to profitably increase the price of their goods or services). A good example would be that of digital marketplaces, such as eBay, on which many different sellers sell similar products to many different buyers. Consumers in a perfect competitive market have perfect knowledge about the products that are being sold in this market.
=== Imperfect competition ===
Imperfect competition is a type of market structure showing some but not all features of competitive markets. In perfect competition, market power is not achievable due to a high level of producers causing high levels of competition. Therefore, prices are brought down to a marginal cost level. In a monopoly, market power is achieved by one firm leading to prices being higher than the marginal cost level.
Between these two types of markets are firms that are neither perfectly competitive nor monopolistic. Firms such as Pepsi and Coke, and Sony, Nintendo and Microsoft, dominate the cola and video game industries respectively. These firms are in imperfect competition.
=== Monopolistic competition ===
Monopolistic competition is a situation in which many firms with slightly different products compete. Production costs are above what may be achieved by perfectly competitive firms, but society benefits from the product differentiation. Examples of industries with market structures similar to monopolistic competition include restaurants, cereal, clothing, shoes, and service industries in large cities.
=== Monopoly ===
A monopoly is a market structure in which a market or industry is dominated by a single supplier of a particular good or service. Because monopolies have no competition, they tend to sell goods and services at a higher price and produce below the socially optimal output level. However, not all monopolies are a bad thing, especially in industries where multiple firms would result in more costs than benefits (i.e. natural monopolies).
Natural monopoly: A monopoly in an industry where one producer can produce output at a lower cost than many small producers.
=== Oligopoly ===
An oligopoly is a market structure in which a market or industry is dominated by a small number of firms (oligopolists). Oligopolies can create the incentive for firms to engage in collusion and form cartels that reduce competition leading to higher prices for consumers and less overall market output. Alternatively, oligopolies can be fiercely competitive and engage in flamboyant advertising campaigns.
Duopoly: A special case of an oligopoly, with only two firms. Game theory can elucidate behavior in duopolies and oligopolies.
=== Monopsony ===
A monopsony is a market where there is only one buyer and many sellers.
=== Bilateral monopoly ===
A bilateral monopoly is a market consisting of both a monopoly (a single seller) and a monopsony (a single buyer).
=== Oligopsony ===
An oligopsony is a market where there are a few buyers and many sellers.
== Game theory ==
Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents. The term "game" here implies the study of any strategic interaction between people. Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers & acquisitions pricing, fair division, duopolies, oligopolies, social network formation, agent-based computational economics, general equilibrium, mechanism design, and voting systems, and across such broad areas as experimental economics, behavioral economics, information economics, industrial organization, and political economy.
== Information economics ==
Information economics is a branch of microeconomic theory that studies how information and information systems affect an economy and economic decisions. Information has special characteristics. It is easy to create but hard to trust. It is easy to spread but hard to control. It influences many decisions. These special characteristics (as compared with other types of goods) complicate many standard economic theories. The economics of information has recently become of great interest to many - possibly due to the rise of information-based companies inside the technology industry. From a game theory approach, the usual constraints that agents have complete information can be loosened to further examine the consequences of having incomplete information. This gives rise to many results which are applicable to real life situations. For example, if one does loosen this assumption, then it is possible to scrutinize the actions of agents in situations of uncertainty. It is also possible to more fully understand the impacts – both positive and negative – of agents seeking out or acquiring information.
== Applied ==
Applied microeconomics includes a range of specialized areas of study, many of which draw on methods from other fields.
Economic history examines the evolution of the economy and economic institutions, using methods and techniques from the fields of economics, history, geography, sociology, psychology, and political science.
Education economics examines the organization of education provision and its implication for efficiency and equity, including the effects of education on productivity.
Financial economics examines topics such as the structure of optimal portfolios, the rate of return to capital, econometric analysis of security returns, and corporate financial behavior.
Health economics examines the organization of health care systems, including the role of the health care workforce and health insurance programs.
Industrial organization examines topics such as the entry and exit of firms, innovation, and the role of trademarks.
Law and economics applies microeconomic principles to the selection and enforcement of competing legal regimes and their relative efficiencies.
Political economy examines the role of political institutions in determining policy outcomes.
Public economics examines the design of government tax and expenditure policies and economic effects of these policies (e.g., social insurance programs).
Urban economics, which examines the challenges faced by cities, such as sprawl, air and water pollution, traffic congestion, and poverty, draws on the fields of urban geography and sociology.
Labor economics examines primarily labor markets, but comprises a large range of public policy issues such as immigration, minimum wages, or inequality.
== See also ==
Macroeconomics
First-order approach
Critique of political economy
== References ==
== Further reading ==
== External links ==
X-Lab: A Collaborative Micro-Economics and Social Sciences Research Laboratory
Simulations in Microeconomics Archived 2010-10-31 at the Wayback Machine
A brief history of microeconomics | Wikipedia/Microeconomic_theory |
In convex analysis, a non-negative function f : Rn → R+ is logarithmically concave (or log-concave for short) if its domain is a convex set, and if it satisfies the inequality
{\displaystyle f(\theta x+(1-\theta )y)\geq f(x)^{\theta }f(y)^{1-\theta }}
for all x,y ∈ dom f and 0 < θ < 1. If f is strictly positive, this is equivalent to saying that the logarithm of the function, log ∘ f, is concave; that is,
{\displaystyle \log f(\theta x+(1-\theta )y)\geq \theta \log f(x)+(1-\theta )\log f(y)}
for all x,y ∈ dom f and 0 < θ < 1.
Examples of log-concave functions are the 0-1 indicator functions of convex sets (which requires the more flexible definition), and the Gaussian function.
Similarly, a function is log-convex if it satisfies the reverse inequality
{\displaystyle f(\theta x+(1-\theta )y)\leq f(x)^{\theta }f(y)^{1-\theta }}
for all x,y ∈ dom f and 0 < θ < 1.
== Properties ==
A log-concave function is also quasi-concave. This follows from the fact that the logarithm is monotone, implying that the superlevel sets of this function are convex.
Every concave function that is nonnegative on its domain is log-concave. However, the reverse does not necessarily hold. An example is the Gaussian function f(x) = exp(−x2/2) which is log-concave since log f(x) = −x2/2 is a concave function of x. But f is not concave since the second derivative is positive for |x| > 1:
{\displaystyle f''(x)=e^{-{\frac {x^{2}}{2}}}(x^{2}-1)\nleq 0}
From the above two points: concavity ⇒ log-concavity ⇒ quasiconcavity.
A twice differentiable, nonnegative function with a convex domain is log-concave if and only if for all x satisfying f(x) > 0,
{\displaystyle f(x)\nabla ^{2}f(x)\preceq \nabla f(x)\nabla f(x)^{T}},
i.e. {\displaystyle f(x)\nabla ^{2}f(x)-\nabla f(x)\nabla f(x)^{T}} is negative semi-definite. For functions of one variable, this condition simplifies to
{\displaystyle f(x)f''(x)\leq (f'(x))^{2}}
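As a rough numerical illustration of this one-variable test (a Python sketch; the grid spacing and tolerance are arbitrary choices), one can verify it for the Gaussian example above:
```python
import numpy as np

# Numerically check the one-variable log-concavity test
# f(x) f''(x) <= (f'(x))^2 for the Gaussian f(x) = exp(-x^2/2).
def f(x):
    return np.exp(-x**2 / 2)

xs = np.linspace(-3, 3, 601)
h = xs[1] - xs[0]
fx = f(xs)
f1 = np.gradient(fx, h)          # first derivative (central differences)
f2 = np.gradient(f1, h)          # second derivative

# The test should hold everywhere (up to small discretization error),
# even though f itself is not concave for |x| > 1 (where f'' > 0).
print(np.all(fx * f2 <= f1**2 + 1e-8))   # expected: True
print(np.any(f2 > 0))                    # expected: True (non-concavity)
```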
== Operations preserving log-concavity ==
Products: The product of log-concave functions is also log-concave. Indeed, if f and g are log-concave functions, then log f and log g are concave by definition. Therefore
{\displaystyle \log \,f(x)+\log \,g(x)=\log(f(x)g(x))}
is concave, and hence also f g is log-concave.
Marginals: if f(x,y) : Rn+m → R is log-concave, then
{\displaystyle g(x)=\int f(x,y)dy}
is log-concave (see Prékopa–Leindler inequality).
This implies that convolution preserves log-concavity, since h(x,y) = f(x-y) g(y) is log-concave if f and g are log-concave, and therefore
{\displaystyle (f*g)(x)=\int f(x-y)g(y)dy=\int h(x,y)dy}
is log-concave.
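The convolution property can be checked numerically; the following Python sketch (with an arbitrary grid and tolerance) discretizes two log-concave densities and tests the log of their discrete convolution for concavity via second differences:
```python
import numpy as np

# Sketch: the convolution of two log-concave densities is log-concave.
# Discretize a Gaussian and an exponential density on a grid and check
# that the log of their (discrete) convolution has non-positive second
# differences, up to numerical tolerance.
x = np.linspace(0, 10, 2001)
dx = x[1] - x[0]
gauss = np.exp(-(x - 5)**2 / 2)          # log-concave
expon = np.exp(-x)                        # log-concave
conv = np.convolve(gauss, expon) * dx     # discrete convolution

second_diff = np.diff(np.log(conv), n=2)
print(np.all(second_diff <= 1e-6))        # expected: True
```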
== Log-concave distributions ==
Log-concave distributions are necessary for a number of algorithms, e.g. adaptive rejection sampling. Every distribution with a log-concave density is a maximum entropy probability distribution with specified mean μ and deviation risk measure D.
As it happens, many common probability distributions are log-concave. Some examples:
the normal distribution and multivariate normal distributions,
the exponential distribution,
the uniform distribution over any convex set,
the logistic distribution,
the extreme value distribution,
the Laplace distribution,
the chi distribution,
the hyperbolic secant distribution,
the Wishart distribution, if n ≥ p + 1,
the Dirichlet distribution, if all parameters are ≥ 1,
the gamma distribution if the shape parameter is ≥ 1,
the chi-square distribution if the number of degrees of freedom is ≥ 2,
the beta distribution if both shape parameters are ≥ 1, and
the Weibull distribution if the shape parameter is ≥ 1.
Note that all of the parameter restrictions have the same basic source: the exponent of a non-negative quantity must be non-negative in order for the function to be log-concave.
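For instance, the restriction on the gamma distribution's shape parameter can be checked numerically; the sketch below (using SciPy's gamma density, with an arbitrary grid and tolerance) tests second differences of the log-density:
```python
import numpy as np
from scipy.stats import gamma

# Sketch: the gamma density is log-concave for shape >= 1 but not for
# shape < 1, matching the parameter restriction above.
x = np.linspace(0.05, 10, 1000)

def is_logconcave(shape):
    logpdf = gamma.logpdf(x, a=shape)
    return np.all(np.diff(logpdf, n=2) <= 1e-10)

print(is_logconcave(2.0))   # expected: True  (shape >= 1)
print(is_logconcave(0.5))   # expected: False (shape < 1)
```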
The following distributions are non-log-concave for all parameters:
the Student's t-distribution,
the Cauchy distribution,
the Pareto distribution,
the log-normal distribution, and
the F-distribution.
Note that the cumulative distribution function (CDF) of every log-concave distribution is also log-concave. However, some non-log-concave distributions also have log-concave CDFs:
the log-normal distribution,
the Pareto distribution,
the Weibull distribution when the shape parameter < 1, and
the gamma distribution when the shape parameter < 1.
The following are among the properties of log-concave distributions:
If a density is log-concave, so is its cumulative distribution function (CDF).
If a multivariate density is log-concave, so is the marginal density over any subset of variables.
The sum of two independent log-concave random variables is log-concave. This follows from the fact that the convolution of two log-concave functions is log-concave.
The product of two log-concave functions is log-concave. This means that joint densities formed by multiplying two probability densities (e.g. the normal-gamma distribution, which always has a shape parameter ≥ 1) will be log-concave. This property is heavily used in general-purpose Gibbs sampling programs such as BUGS and JAGS, which are thereby able to use adaptive rejection sampling over a wide variety of conditional distributions derived from the product of other distributions.
If a density is log-concave, so is its survival function.
If a density is log-concave, it has a monotone hazard rate (MHR), and is a regular distribution since the derivative of the logarithm of the survival function is the negative hazard rate, and by concavity is monotone i.e.
{\displaystyle {\frac {d}{dx}}\log \left(1-F(x)\right)=-{\frac {f(x)}{1-F(x)}}}
which is decreasing as it is the derivative of a concave function.
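A numerical illustration of the monotone hazard rate property for the (log-concave) normal density, as a Python sketch with an arbitrary grid:
```python
import numpy as np
from scipy.stats import norm

# Sketch: a log-concave density has a monotone (non-decreasing) hazard
# rate h(x) = f(x) / (1 - F(x)). Check numerically for the normal density.
x = np.linspace(-4, 4, 800)
hazard = norm.pdf(x) / norm.sf(x)          # sf(x) = 1 - F(x), the survival function
print(np.all(np.diff(hazard) >= -1e-12))   # expected: True
```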
== See also ==
logarithmically concave sequence
logarithmically concave measure
logarithmically convex function
convex function
== Notes ==
== References ==
Barndorff-Nielsen, Ole (1978). Information and exponential families in statistical theory. Wiley Series in Probability and Mathematical Statistics. Chichester: John Wiley \& Sons, Ltd. pp. ix+238 pp. ISBN 0-471-99545-2. MR 0489333.
Dharmadhikari, Sudhakar; Joag-Dev, Kumar (1988). Unimodality, convexity, and applications. Probability and Mathematical Statistics. Boston, MA: Academic Press, Inc. pp. xiv+278. ISBN 0-12-214690-5. MR 0954608.
Pfanzagl, Johann; with the assistance of R. Hamböker (1994). Parametric Statistical Theory. Walter de Gruyter. ISBN 3-11-013863-8. MR 1291393.
Pečarić, Josip E.; Proschan, Frank; Tong, Y. L. (1992). Convex functions, partial orderings, and statistical applications. Mathematics in Science and Engineering. Vol. 187. Boston, MA: Academic Press, Inc. pp. xiv+467 pp. ISBN 0-12-549250-2. MR 1162312. | Wikipedia/Logarithmically_concave_function |
K-convex functions, first introduced by Scarf, are a special weakening of the concept of convex function which is crucial in the proof of the optimality of the {\displaystyle (s,S)} policy in inventory control theory. The policy is characterized by two numbers s and S, {\displaystyle S\geq s}, such that when the inventory level falls below level s, an order is issued for a quantity that brings the inventory up to level S, and nothing is ordered otherwise. Gallego and Sethi have generalized the concept of K-convexity to higher dimensional Euclidean spaces.
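A minimal Python sketch of the (s, S) ordering rule just described; the threshold values s = 10 and S = 40 are arbitrary choices for illustration:
```python
# Sketch of the (s, S) ordering rule: order up to S when inventory
# falls below s; otherwise order nothing.
def order_quantity(inventory_level, s=10, S=40):
    if inventory_level < s:
        return S - inventory_level
    return 0

for level in (3, 9, 10, 25):
    print(level, "->", order_quantity(level))   # 3 -> 37, 9 -> 31, 10 -> 0, 25 -> 0
```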
== Definition ==
Two equivalent definitions are as follows:
=== Definition 1 (The original definition) ===
Let K be a non-negative real number. A function {\displaystyle g:\mathbb {R} \rightarrow \mathbb {R} } is K-convex if
{\displaystyle g(u)+z\left[{\frac {g(u)-g(u-b)}{b}}\right]\leq g(u+z)+K}
for any {\displaystyle u,z\geq 0,} and {\displaystyle b>0}.
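Definition 1 can be checked by brute force on a grid; in the following Python sketch the grid, tolerance, and test function are arbitrary illustration choices (a convex function such as x² should pass with K = 0, by Property 1 below):
```python
import numpy as np

# Brute-force check of Definition 1 on a grid:
# g(u) + z * (g(u) - g(u - b)) / b <= g(u + z) + K  for u, z >= 0 and b > 0.
def is_K_convex(g, K, grid):
    for u in grid:
        for z in grid[grid >= 0]:
            for b in grid[grid > 0]:
                lhs = g(u) + z * (g(u) - g(u - b)) / b
                if lhs > g(u + z) + K + 1e-9:
                    return False
    return True

grid = np.linspace(0.0, 3.0, 13)
quadratic = lambda x: x**2            # convex, hence K-convex for every K >= 0
print(is_K_convex(quadratic, K=0.0, grid=grid))   # expected: True
```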
=== Definition 2 (Definition with geometric interpretation) ===
A function {\displaystyle g:\mathbb {R} \rightarrow \mathbb {R} } is K-convex if
{\displaystyle g(\lambda x+{\bar {\lambda }}y)\leq \lambda g(x)+{\bar {\lambda }}[g(y)+K]}
for all {\displaystyle x\leq y,\lambda \in [0,1]}, where {\displaystyle {\bar {\lambda }}=1-\lambda }.
This definition admits a simple geometric interpretation related to the concept of visibility. Let {\displaystyle a\geq 0}. A point {\displaystyle (x,f(x))} is said to be visible from {\displaystyle (y,f(y)+a)} if all intermediate points {\displaystyle (\lambda x+{\bar {\lambda }}y,f(\lambda x+{\bar {\lambda }}y)),0\leq \lambda \leq 1} lie below the line segment joining these two points. Then the geometric characterization of K-convexity can be obtained as:
A function {\displaystyle g} is K-convex if and only if {\displaystyle (x,g(x))} is visible from {\displaystyle (y,g(y)+K)} for all {\displaystyle y\geq x}.
=== Proof of Equivalence ===
It is sufficient to prove that the above definitions can be transformed into each other. This can be seen by using the transformation
{\displaystyle \lambda =z/(b+z),\quad x=u-b,\quad y=u+z.}
== Properties ==
=== Property 1 ===
If {\displaystyle g:\mathbb {R} \rightarrow \mathbb {R} } is K-convex, then it is L-convex for any {\displaystyle L\geq K}. In particular, if {\displaystyle g} is convex, then it is also K-convex for any {\displaystyle K\geq 0}.
=== Property 2 ===
If {\displaystyle g_{1}} is K-convex and {\displaystyle g_{2}} is L-convex, then for {\displaystyle \alpha \geq 0,\beta \geq 0,\;g=\alpha g_{1}+\beta g_{2}} is {\displaystyle (\alpha K+\beta L)}-convex.
=== Property 3 ===
If {\displaystyle g} is K-convex and {\displaystyle \xi } is a random variable such that {\displaystyle E|g(x-\xi )|<\infty } for all {\displaystyle x}, then {\displaystyle Eg(x-\xi )} is also K-convex.
=== Property 4 ===
If {\displaystyle g:\mathbb {R} \rightarrow \mathbb {R} } is K-convex, then the restriction of {\displaystyle g} to any convex set {\displaystyle \mathbb {D} \subset \mathbb {R} } is K-convex.
=== Property 5 ===
If {\displaystyle g:\mathbb {R} \rightarrow \mathbb {R} } is a continuous K-convex function and {\displaystyle g(y)\rightarrow \infty } as {\displaystyle |y|\rightarrow \infty }, then there exist scalars {\displaystyle s} and {\displaystyle S} with {\displaystyle s\leq S} such that
{\displaystyle g(S)\leq g(y)}, for all {\displaystyle y\in \mathbb {R} };
{\displaystyle g(S)+K=g(s)<g(y)}, for all {\displaystyle y<s};
{\displaystyle g(y)} is a decreasing function on {\displaystyle (-\infty ,s)};
{\displaystyle g(y)\leq g(z)+K} for all {\displaystyle y,z} with {\displaystyle s\leq y\leq z}.
== References ==
== Further reading ==
Gallego, G.; Sethi, S. P. (2005). "{\displaystyle {\mathcal {K}}}-convexity in {\displaystyle {\mathfrak {R}}^{n}}" (PDF). Journal of Optimization Theory and Applications. 127 (1): 71–88. doi:10.1007/s10957-005-6393-4. MR 2174750. | Wikipedia/K-convex_function |
In mathematics, a function f is logarithmically convex or superconvex if
{\displaystyle {\log }\circ f}
, the composition of the logarithm with f, is itself a convex function.
== Definition ==
Let X be a convex subset of a real vector space, and let f : X → R be a function taking non-negative values. Then f is:
Logarithmically convex if {\displaystyle {\log }\circ f} is convex, and
Strictly logarithmically convex if {\displaystyle {\log }\circ f} is strictly convex.
Here we interpret {\displaystyle \log 0} as {\displaystyle -\infty }.
Explicitly, f is logarithmically convex if and only if, for all x1, x2 ∈ X and all t ∈ [0, 1], the two following equivalent conditions hold:
{\displaystyle {\begin{aligned}\log f(tx_{1}+(1-t)x_{2})&\leq t\log f(x_{1})+(1-t)\log f(x_{2}),\\f(tx_{1}+(1-t)x_{2})&\leq f(x_{1})^{t}f(x_{2})^{1-t}.\end{aligned}}}
Similarly, f is strictly logarithmically convex if and only if, in the above two expressions, strict inequality holds for all t ∈ (0, 1).
The above definition permits f to be zero, but if f is logarithmically convex and vanishes anywhere in X, then it vanishes everywhere in the interior of X.
=== Equivalent conditions ===
If f is a differentiable function defined on an interval I ⊆ R, then f is logarithmically convex if and only if the following condition holds for all x and y in I:
{\displaystyle \log f(x)\geq \log f(y)+{\frac {f'(y)}{f(y)}}(x-y).}
This is equivalent to the condition that, whenever x and y are in I and x > y,
{\displaystyle \left({\frac {f(x)}{f(y)}}\right)^{\frac {1}{x-y}}\geq \exp \left({\frac {f'(y)}{f(y)}}\right).}
Moreover, f is strictly logarithmically convex if and only if these inequalities are always strict.
If f is twice differentiable, then it is logarithmically convex if and only if, for all x in I,
{\displaystyle f''(x)f(x)\geq f'(x)^{2}.}
If the inequality is always strict, then f is strictly logarithmically convex. However, the converse is false: It is possible that f is strictly logarithmically convex and that, for some x, we have {\displaystyle f''(x)f(x)=f'(x)^{2}}. For example, if {\displaystyle f(x)=\exp(x^{4})}, then f is strictly logarithmically convex, but {\displaystyle f''(0)f(0)=0=f'(0)^{2}}.
Furthermore, {\displaystyle f\colon I\to (0,\infty )} is logarithmically convex if and only if {\displaystyle e^{\alpha x}f(x)} is convex for all {\displaystyle \alpha \in \mathbb {R} }.
== Sufficient conditions ==
If {\displaystyle f_{1},\ldots ,f_{n}} are logarithmically convex, and if {\displaystyle w_{1},\ldots ,w_{n}} are non-negative real numbers, then {\displaystyle f_{1}^{w_{1}}\cdots f_{n}^{w_{n}}} is logarithmically convex.
If {\displaystyle \{f_{i}\}_{i\in I}} is any family of logarithmically convex functions, then {\displaystyle g=\sup _{i\in I}f_{i}} is logarithmically convex.
If {\displaystyle f\colon X\to I\subseteq \mathbf {R} } is convex and {\displaystyle g\colon I\to \mathbf {R} _{\geq 0}} is logarithmically convex and non-decreasing, then {\displaystyle g\circ f} is logarithmically convex.
== Properties ==
A logarithmically convex function f is a convex function since it is the composite of the increasing convex function {\displaystyle \exp } and the function {\displaystyle \log \circ f}, which is by definition convex. However, being logarithmically convex is a strictly stronger property than being convex. For example, the squaring function {\displaystyle f(x)=x^{2}} is convex, but its logarithm {\displaystyle \log f(x)=2\log |x|} is not. Therefore the squaring function is not logarithmically convex.
== Examples ==
{\displaystyle f(x)=\exp(|x|^{p})} is logarithmically convex when {\displaystyle p\geq 1} and strictly logarithmically convex when {\displaystyle p>1}.
{\displaystyle f(x)={\frac {1}{x^{p}}}} is strictly logarithmically convex on {\displaystyle (0,\infty )} for all {\displaystyle p>0.}
Euler's gamma function is strictly logarithmically convex when restricted to the positive real numbers. In fact, by the Bohr–Mollerup theorem, this property can be used to characterize Euler's gamma function among the possible extensions of the factorial function to real arguments.
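A quick numerical check of this log-convexity (a Python sketch using SciPy's gammaln, with an arbitrary grid and tolerance):
```python
import numpy as np
from scipy.special import gammaln

# Sketch: log Gamma(x) is convex on (0, inf) (Bohr–Mollerup), i.e. the
# gamma function is logarithmically convex there. Check second differences.
x = np.linspace(0.1, 10, 1000)
print(np.all(np.diff(gammaln(x), n=2) >= -1e-10))   # expected: True
```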
== See also ==
Logarithmically concave function
== Notes ==
== References ==
John B. Conway. Functions of One Complex Variable I, second edition. Springer-Verlag, 1995. ISBN 0-387-90328-3.
"Convexity, logarithmic", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Niculescu, Constantin; Persson, Lars-Erik (2006), Convex Functions and their Applications - A Contemporary Approach (1st ed.), Springer, doi:10.1007/0-387-31077-0, ISBN 978-0-387-24300-9, ISSN 1613-5237.
Montel, Paul (1928), "Sur les fonctions convexes et les fonctions sousharmoniques", Journal de Mathématiques Pures et Appliquées (in French), 7: 29–60.
This article incorporates material from logarithmically convex function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. | Wikipedia/Logarithmically_convex_function |
In mathematics, Mazur's lemma is a result in the theory of normed vector spaces. It shows that any weakly convergent sequence in a normed space has a sequence of convex combinations of its members that converges strongly to the same limit, and is used in the proof of Tonelli's theorem.
== Statement of the lemma ==
== See also ==
Banach–Alaoglu theorem – Theorem in functional analysis
Bishop–Phelps theorem
Eberlein–Šmulian theorem – Relates three different kinds of weak compactness in a Banach space
James's theorem – theorem in mathematics
Goldstine theorem
== References ==
Renardy, Michael & Rogers, Robert C. (2004). An introduction to partial differential equations. Texts in Applied Mathematics 13 (Second ed.). New York: Springer-Verlag. p. 350. ISBN 0-387-00444-0.
Ekeland, Ivar & Temam, Roger (1976). Convex analysis and variational problems. Studies in Mathematics and its Applications, Vol. 1 (Second ed.). New York: North-Holland Publishing Co., Amsterdam-Oxford, American. p. 6. | Wikipedia/Mazur's_lemma |
In mathematics, a function {\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} } is said to be closed if for each {\displaystyle \alpha \in \mathbb {R} }, the sublevel set {\displaystyle \{x\in {\mbox{dom}}f\vert f(x)\leq \alpha \}} is a closed set.
Equivalently, if the epigraph defined by {\displaystyle {\mbox{epi}}f=\{(x,t)\in \mathbb {R} ^{n+1}\vert x\in {\mbox{dom}}f,\;f(x)\leq t\}} is closed, then the function {\displaystyle f} is closed.
This definition is valid for any function, but is most often used for convex functions. A proper convex function is closed if and only if it is lower semi-continuous.
== Properties ==
If {\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} } is a continuous function and {\displaystyle {\mbox{dom}}f} is closed, then {\displaystyle f} is closed.
If {\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} } is a continuous function and {\displaystyle {\mbox{dom}}f} is open, then {\displaystyle f} is closed if and only if it converges to {\displaystyle \infty } along every sequence converging to a boundary point of {\displaystyle {\mbox{dom}}f}.
A closed proper convex function f is the pointwise supremum of the collection of all affine functions h such that h ≤ f (called the affine minorants of f).
== References ==
Rockafellar, R. Tyrrell (1997) [1970]. Convex Analysis. Princeton, NJ: Princeton University Press. ISBN 978-0-691-01586-6. | Wikipedia/Closed_convex_function |
The SIAM Journal on Matrix Analysis and Applications is a peer-reviewed scientific journal covering matrix analysis and its applications. The relevant applications include signal processing, systems and control theory, statistics, Markov chains, mathematical biology, graph theory, and data science.
The journal was originally established as the SIAM Journal on Algebraic and Discrete Methods in 1980, until it split into SIAM Journal on Discrete Mathematics and the current title in 1988.
The journal is published by the Society for Industrial and Applied Mathematics. The founding editor-in-chief was Gene H. Golub, who established the journal in 1980. The current editor is Michele Benzi (Scuola Normale Superiore).
== References ==
== External links ==
Official website | Wikipedia/SIAM_Journal_on_Matrix_Analysis_and_Applications |
The Shapley–Folkman lemma is a result in convex geometry that describes the Minkowski addition of sets in a vector space. The lemma may be intuitively understood as saying that, if the number of summed sets exceeds the dimension of the vector space, then their Minkowski sum is approximately convex. It is named after mathematicians Lloyd Shapley and Jon Folkman, but was first published by the economist Ross M. Starr.
Related results provide more refined statements about how close the approximation is. For example, the Shapley–Folkman theorem provides an upper bound on the distance between any point in the Minkowski sum and its convex hull. This upper bound is sharpened by the Shapley–Folkman–Starr theorem (alternatively, Starr's corollary).
The Shapley–Folkman lemma has applications in economics, optimization and probability theory. In economics, it can be used to extend results proved for convex preferences to non-convex preferences. In optimization theory, it can be used to explain the successful solution of minimization problems that are sums of many functions. In probability, it can be used to prove a law of large numbers for random sets.
== Introductory example ==
A set is said to be convex if every line segment joining two of its points is a subset of the set. The solid disk {\displaystyle \bullet } is a convex set, but the circle {\displaystyle \circ } is not, because the line segment joining two distinct points {\displaystyle \oslash } is not a subset of the circle. The convex hull of a set {\displaystyle Q} is the smallest convex set that contains {\displaystyle Q}.
Minkowski addition is an operation on sets that forms the set of sums of members of the sets, with one member from each set. For example, adding the set consisting of the integers zero and one to itself yields the set consisting of zero, one, and two:
{\displaystyle \{0,1\}+\{0,1\}=\{0+0,0+1,1+0,1+1\}=\{0,1,2\}.}
This subset of the integers {\displaystyle \{0,1,2\}} is contained in the interval of real numbers {\displaystyle [0,2]}, which is its convex hull. The Shapley–Folkman lemma implies that every point in {\displaystyle [0,2]} is the sum of an integer from {\displaystyle \{0,1\}} and a real number from {\displaystyle [0,1]}: to get the convex hull of the Minkowski sum of {\displaystyle \{0,1\}} with itself, only one of the summands needs to be replaced by its convex hull.
The non-convex set {\displaystyle \{0,1\}} and its convex hull {\displaystyle [0,1]} are at Hausdorff distance {\displaystyle {\tfrac {1}{2}}} from each other, and this distance remains the same for the non-convex set {\displaystyle \{0,1,2\}} and its convex hull {\displaystyle [0,2]}. In both cases the convex hull contains points such as {\displaystyle {\tfrac {1}{2}}} that are at distance {\displaystyle {\tfrac {1}{2}}} from the members of the non-convex set. In this example, the Minkowski sum operation does not decrease the distance between the sum and its convex hull. But when summing is replaced by averaging, by scaling the sum by the number of terms in the sum, the distance between the scaled Minkowski sum and its convex hull does go down. The distance between the average Minkowski sum {\displaystyle {\frac {1}{2}}\left(\{0,1\}+\{0,1\}\right)=\left\{0,{\tfrac {1}{2}},1\right\}} and its convex hull {\displaystyle [0,1]} is only {\displaystyle {\tfrac {1}{4}}}, which is half the distance {\displaystyle {\tfrac {1}{2}}} between its summand {\displaystyle \{0,1\}} and its convex hull {\displaystyle [0,1]}. As more sets are added together, the average of their sum "fills out" its convex hull: The maximum distance between the average and its convex hull approaches zero as the average includes more summands. This reduction in distance can be stated more formally as the Shapley–Folkman theorem and Shapley–Folkman–Starr theorem, as consequences of the Shapley–Folkman lemma.
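This "filling out" can be illustrated numerically; the following Python sketch computes the gap between the averaged Minkowski sum of N copies of {0, 1} and its convex hull [0, 1], which equals 1/(2N):
```python
import numpy as np
from itertools import product

# Sketch of the introductory example: the averaged Minkowski sum of
# N copies of {0, 1} is {0, 1/N, ..., 1}; its Hausdorff distance to the
# convex hull [0, 1] is half the largest gap between consecutive points.
def max_gap_to_hull(N):
    points = sorted({sum(combo) / N for combo in product([0, 1], repeat=N)})
    return max(np.diff(points)) / 2

for N in (1, 2, 4, 8):
    print(N, max_gap_to_hull(N))   # 0.5, 0.25, 0.125, 0.0625
```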
== Preliminaries ==
The Shapley–Folkman lemma depends upon the following definitions and results from convex geometry.
=== Real vector spaces ===
A real vector space of two dimensions can be given a Cartesian coordinate system in which every point is identified by an ordered pair of real numbers, called "coordinates", which are conventionally denoted by {\displaystyle x} and {\displaystyle y}. Two points in the Cartesian plane can be added coordinate-wise:
{\displaystyle (x_{1},y_{1})+(x_{2},y_{2})=(x_{1}+x_{2},y_{1}+y_{2});}
further, a point can be multiplied by each real number {\displaystyle \lambda } coordinate-wise:
{\displaystyle \lambda (x,y)=(\lambda x,\lambda y).}
More generally, any real vector space of (finite) dimension {\displaystyle D} can be viewed as the set of all {\displaystyle D}-tuples of {\displaystyle D} real numbers {\displaystyle (x_{1},x_{2},\ldots ,x_{D})} on which two operations are defined: vector addition and multiplication by a real number. For finite-dimensional vector spaces, the operations of vector addition and real-number multiplication can each be defined coordinate-wise, following the example of the Cartesian plane.
=== Convex sets ===
In a real vector space, a non-empty set {\displaystyle Q} is defined to be convex if, for each pair of its points, every point on the line segment that joins them is still in {\displaystyle Q}. For example, a solid disk {\displaystyle \bullet } is convex but a circle {\displaystyle \circ } is not, because it does not contain a line segment joining opposite points. A solid cube is convex; however, anything that is hollow or dented, for example, a crescent shape, is non-convex. The empty set is convex, either by definition or vacuously.
More formally, a set {\displaystyle Q} is convex if, for all points {\displaystyle q_{1}} and {\displaystyle q_{2}} in {\displaystyle Q} and for every real number {\displaystyle \lambda } in the unit interval {\displaystyle [0,1]}, the point {\displaystyle (1-\lambda )q_{1}+\lambda q_{2}} is a member of {\displaystyle Q}. By mathematical induction, a set {\displaystyle Q} is convex if and only if every convex combination of members of {\displaystyle Q} also belongs to {\displaystyle Q}. By definition, a convex combination of indexed points {\displaystyle v_{1},v_{2},\ldots v_{D}} of a vector space is any weighted average {\displaystyle \lambda _{1}v_{1}+\lambda _{2}v_{2}+\cdots +\lambda _{D}v_{D}} for indexed real numbers {\displaystyle \lambda _{1},\lambda _{2},\ldots \lambda _{D}} satisfying the equation {\displaystyle \lambda _{1}+\lambda _{2}+\cdots +\lambda _{D}=1}.
The definition of a convex set implies that the intersection of two convex sets is a convex set. More generally, the intersection of a family of convex sets is a convex set. In particular, the intersection of two disjoint sets is the empty set, which is convex.
=== Convex hull ===
For every subset {\displaystyle Q} of a real vector space, its convex hull {\displaystyle \operatorname {conv} Q} is the minimal convex set that contains {\displaystyle Q}. Thus {\displaystyle \operatorname {conv} Q} is the intersection of all the convex sets that cover {\displaystyle Q}. The convex hull of a set can be equivalently defined to be the set of all convex combinations of points in {\displaystyle Q}. For example, the convex hull of the set of integers {\displaystyle \{0,1\}} is the closed interval of real numbers {\displaystyle [0,1]}, which has the maximum and minimum of the given set as its endpoints. The convex hull of the unit circle is the closed unit disk, which contains the unit circle and its interior.
=== Minkowski addition ===
In any vector space (or algebraic structure with addition), {\displaystyle X}, the Minkowski sum of two non-empty sets {\displaystyle A,B\subseteq X} is defined to be the element-wise operation
{\displaystyle A+B=\{x+y\mid x\in A,y\in B\}.}
For example,
{\displaystyle {\begin{aligned}\{0,1\}+\{0,1\}&=\{0+0,0+1,1+0,1+1\}\\&=\{0,1,2\}.\end{aligned}}}
This operation is clearly commutative and associative on the collection of non-empty sets. All such operations extend in a well-defined manner to recursive forms
{\displaystyle \sum _{n=1}^{N}Q_{n}=Q_{1}+Q_{2}+\ldots +Q_{N}.}
By the principle of induction,
{\displaystyle \sum _{n=1}^{N}Q_{n}=\left\{\sum _{n=1}^{N}q_{n}\mathrel {\Bigg |} q_{n}\in Q_{n},~1\leq n\leq N\right\}.}
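A direct Python sketch of the element-wise definition for finite point sets (the sets A and B here are arbitrary illustrations):
```python
# Sketch: element-wise Minkowski sum of finite point sets in R^D,
# following the definition above.
def minkowski_sum(A, B):
    return {tuple(x + y for x, y in zip(a, b)) for a in A for b in B}

A = {(0, 0), (1, 0)}
B = {(0, 0), (0, 1)}
print(sorted(minkowski_sum(A, B)))
# [(0, 0), (0, 1), (1, 0), (1, 1)]
```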
=== Convex hulls of Minkowski sums ===
Minkowski addition behaves well with respect to taking convex hulls. Specifically, for all subsets {\displaystyle A,B\subseteq X} of a real vector space, {\displaystyle X}, the convex hull of their Minkowski sum is the Minkowski sum of their convex hulls. That is,
{\displaystyle \operatorname {conv} (A+B)=(\operatorname {conv} A)+(\operatorname {conv} B).}
And by induction it follows that
{\displaystyle \operatorname {conv} \sum _{n=1}^{N}Q_{n}=\sum _{n=1}^{N}\operatorname {conv} Q_{n}}
for any {\displaystyle N\in \mathbb {N} } and non-empty subsets {\displaystyle Q_{n}\subseteq X,\ 1\leq n\leq N}.
== Statements of the three main results ==
=== Notation ===
In the following statements, {\displaystyle D} and {\displaystyle N} represent positive integers.
{\displaystyle D} is the dimension of the ambient space {\displaystyle \mathbb {R} ^{D}}.
{\displaystyle Q_{1},\dots ,Q_{N}} represent nonempty, bounded subsets of {\displaystyle \mathbb {R} ^{D}}. They are also called "summands".
{\displaystyle N} is the number of summands.
{\displaystyle Q=\sum _{n=1}^{N}Q_{n}} denotes the Minkowski sum of the summands.
The variable {\displaystyle x} is used to represent an arbitrary vector in {\displaystyle \operatorname {conv} Q}.
=== Shapley–Folkman lemma ===
Because the convex hull and Minkowski sum operations can always be interchanged, as {\displaystyle \operatorname {conv} Q=\sum _{n=1}^{N}\operatorname {conv} (Q_{n})}, it follows that for every {\displaystyle x\in \operatorname {conv} Q}, there exist elements {\displaystyle q_{n}\in \operatorname {conv} Q_{n}} such that {\displaystyle \sum _{n=1}^{N}q_{n}=x}. The Shapley–Folkman lemma refines this statement.
For example, every point in {\displaystyle [0,2]=[0,1]+[0,1]=\operatorname {conv} \{0,1\}+\operatorname {conv} \{0,1\}} is the sum of an element in {\displaystyle \{0,1\}} and an element in {\displaystyle [0,1]}.
Shuffling indices if necessary, this means that every point in {\displaystyle \operatorname {conv} Q} can be decomposed as
{\displaystyle x=\sum _{n=1}^{D}q_{n}+\sum _{n=D+1}^{N}q_{n}}
where {\displaystyle q_{n}\in \operatorname {conv} Q_{n}} for {\displaystyle 1\leq n\leq D} and {\displaystyle q_{n}\in Q_{n}} for {\displaystyle D+1\leq n\leq N}. Note that the reindexing depends on the point {\displaystyle x}.
The lemma may be stated succinctly as
{\displaystyle \operatorname {conv} \left(\sum _{n=1}^{N}Q_{n}\right)\subseteq \bigcup _{I\subseteq \{1,2,\ldots N\}:~|I|=D}\left(\sum _{n\in I}\operatorname {conv} Q_{n}+\sum _{n\notin I}Q_{n}\right).}
==== The converse of Shapley–Folkman lemma ====
In particular, the Shapley–Folkman lemma requires the vector space to be finite-dimensional.
=== Shapley–Folkman theorem ===
Shapley and Folkman used their lemma to prove the following theorem, which quantifies the difference between {\displaystyle Q} and {\displaystyle \operatorname {conv} Q} using Hausdorff distance. Hausdorff distances measure how close two sets are. For two sets {\displaystyle X} and {\displaystyle Y}, the Hausdorff distance is, intuitively, the smallest amount by which each must be expanded to cover the other. More formally, if the distance from a point {\displaystyle x} to a set {\displaystyle Y} is defined as the infimum of pairwise distances,
{\displaystyle d(x,Y)=\inf _{y\in Y}\|x-y\|,}
then let {\displaystyle X_{\varepsilon }} denote the set of all points within distance {\displaystyle \varepsilon } of {\displaystyle X}; equivalently this is the closure of the Minkowski sum of {\displaystyle X} with a ball of radius {\displaystyle \varepsilon }. Then for {\displaystyle X\subset Y}, the Hausdorff distance is
{\displaystyle d_{\mathrm {H} }(X,Y)=\inf\{\varepsilon \mid X_{\varepsilon }\supset Y\}.}
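For finite point sets, the (symmetric) Hausdorff distance can be computed directly; the following Python sketch approximates the distance between {0, 1, 2} and its convex hull [0, 2] from the introductory example (the grid resolution is an arbitrary choice):
```python
import numpy as np

# Sketch: Hausdorff distance between two finite point sets in R^D
# (how far each set must be expanded to cover the other).
def hausdorff(X, Y):
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# {0, 1, 2} vs. a fine grid approximating its convex hull [0, 2]
Q = [[0.0], [1.0], [2.0]]
hull = [[t] for t in np.linspace(0, 2, 401)]
print(hausdorff(Q, hull))   # approximately 0.5
```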
The Shapley–Folkman theorem quantifies how close to convexity {\displaystyle Q} is by upper-bounding its Hausdorff distance to {\displaystyle \operatorname {conv} Q}, using the circumradii of its constituent sets. For any bounded set {\displaystyle S\subset \mathbb {R} ^{D},} define its circumradius {\displaystyle \operatorname {rad} S} to be the smallest radius of a ball containing it. More formally, letting {\displaystyle x} denote the center of the smallest enclosing ball, it can be defined as
{\displaystyle \operatorname {rad} S=\inf\{\varepsilon \mid \exists x\in \mathbb {R} ^{N}:\{x\}_{\varepsilon }\supset S\}.}
With this notation in place, the Shapley–Folkman theorem can be stated as:
Here the notation {\textstyle \sum _{\max D}} means "the sum of the {\displaystyle D} largest terms". This upper bound depends on the dimension of ambient space and the shapes of the summands, but not on the number of summands.
=== Shapley–Folkman–Starr theorem ===
The Shapley–Folkman theorem can be strengthened by replacing the circumradius of the terms in a Minkowski by a smaller value, the inner radius, which intuitively measures the radius of the holes in a set rather than the radius of a set.
Define the inner radius {\displaystyle r(S)} of a bounded subset {\displaystyle S\subset \mathbb {R} ^{D}} to be the infimum of {\displaystyle r} such that, for any {\displaystyle x\in \operatorname {conv} S}, there exists a ball {\displaystyle B} of radius {\displaystyle r} such that {\displaystyle x\in \operatorname {conv} (S\cap B)}.
== Proofs ==
There have been many proofs of these results, from the original, to the later Arrow and Hahn, Cassels, Schneider, etc. An abstract and elegant proof by Ekeland has been extended by Artstein. Different proofs have also appeared in unpublished papers. An elementary proof of the Shapley–Folkman lemma can be found in the book by Bertsekas, together with applications in estimating the duality gap in separable optimization problems and zero-sum games.
Usual proofs of these results are nonconstructive: they establish only the existence of the representation, but do not provide an algorithm for computing the representation. In 1981, Starr published an iterative algorithm for a less sharp version of the Shapley–Folkman–Starr theorem.
=== Via Carathéodory's theorem ===
The following proof of the Shapley–Folkman lemma is from Zhou (1993). The proof idea is to lift the representation of {\displaystyle x} from {\displaystyle \mathbb {R} ^{D}} to {\displaystyle \mathbb {R} ^{D+N}}, use Carathéodory's theorem for conic hulls, then drop back to {\displaystyle \mathbb {R} ^{D}}.
=== Probabilistic ===
The following "probabilistic" proof of Shapley–Folkman–Starr theorem is from Cassels (1975).
We can interpret {\displaystyle \operatorname {conv} S} in probabilistic terms: for every {\displaystyle x\in \operatorname {conv} S}, since {\displaystyle x=\sum w_{n}q_{n}} for some {\displaystyle q_{n}\in S}, we can define a random vector {\displaystyle X}, finitely supported in {\displaystyle S}, such that {\displaystyle Pr(X=q_{n})=w_{n}}, and {\displaystyle x=\mathbb {E} [X]}.
Then, it is natural to consider the "variance" of a set {\displaystyle S} as
{\displaystyle Var(S):=\sup _{x\in \operatorname {conv} S}\inf _{\mathbb {E} [X]=x,X{\text{ is finitely supported in }}S}Var[X]}
With that,
{\displaystyle d(S,\operatorname {conv} (S))^{2}\leq Var(S)\leq r(S)\leq rad(S)}.
== History ==
The lemma of Lloyd Shapley and Jon Folkman was first published by the economist Ross M. Starr, who was investigating the existence of economic equilibria while studying with Kenneth Arrow. In his paper, Starr studied a convexified economy, in which non-convex sets were replaced by their convex hulls; Starr proved that the convexified economy has equilibria that are closely approximated by "quasi-equilibria" of the original economy; moreover, he proved that every quasi-equilibrium has many of the optimal properties of true equilibria, which are proved to exist for convex economies.
Following Starr's 1969 paper, the Shapley–Folkman–Starr results have been widely used to show that central results of (convex) economic theory are good approximations to large economies with non-convexities; for example, quasi-equilibria closely approximate equilibria of a convexified economy. "The derivation of these results in general form has been one of the major achievements of postwar economic theory", wrote Roger Guesnerie.
The topic of non-convex sets in economics has been studied by many Nobel laureates: Shapley himself, Arrow, Robert Aumann, Gérard Debreu, Tjalling Koopmans, Paul Krugman, and Paul Samuelson. The complementary topic of convex sets in economics has been emphasized by these laureates, along with Leonid Hurwicz, Leonid Kantorovich, and Robert Solow.
== Applications ==
The Shapley–Folkman lemma enables researchers to extend results for Minkowski sums of convex sets to sums of general sets, which need not be convex. Such sums of sets arise in economics, in mathematical optimization, and in probability theory; in each of these three mathematical sciences, non-convexity is an important feature of applications.
=== Economics ===
In economics, a consumer's preferences are defined over all "baskets" of goods. Each basket is represented as a non-negative vector, whose coordinates represent the quantities of the goods. On this set of baskets, an indifference curve is defined for each consumer; a consumer's indifference curve contains all the baskets of commodities that the consumer regards as equivalent: That is, for every pair of baskets on the same indifference curve, the consumer does not prefer one basket over another. Through each basket of commodities passes one indifference curve. A consumer's preference set (relative to an indifference curve) is the union of the indifference curve and all the commodity baskets that the consumer prefers over the indifference curve. A consumer's preferences are convex if all such preference sets are convex.
An optimal basket of goods occurs where the budget-line supports a consumer's preference set, as shown in the diagram. This means that an optimal basket is on the highest possible indifference curve given the budget-line, which is defined in terms of a price vector and the consumer's income (endowment vector). Thus, the set of optimal baskets is a function of the prices, and this function is called the consumer's demand. If the preference set is convex, then at every price the consumer's demand is a convex set, for example, a unique optimal basket or a line-segment of baskets.
==== Non-convex preferences ====
However, if a preference set is non-convex, then some prices determine a budget-line that supports two separate optimal-baskets. For example, we can imagine that, for zoos, a lion costs as much as an eagle, and further that a zoo's budget suffices for one eagle or one lion. We can suppose also that a zoo-keeper views either animal as equally valuable. In this case, the zoo would purchase either one lion or one eagle. Of course, a contemporary zoo-keeper does not want to purchase half of an eagle and half of a lion (or a griffin)! Thus, the zoo-keeper's preferences are non-convex: The zoo-keeper prefers having either animal to having any strictly convex combination of both.
When the consumer's preference set is non-convex, then (for some prices) the consumer's demand is not connected; a disconnected demand implies some discontinuous behavior by the consumer, as discussed by Harold Hotelling:
If indifference curves for purchases be thought of as possessing a wavy character, convex to the origin in some regions and concave in others, we are forced to the conclusion that it is only the portions convex to the origin that can be regarded as possessing any importance, since the others are essentially unobservable. They can be detected only by the discontinuities that may occur in demand with variation in price-ratios, leading to an abrupt jumping of a point of tangency across a chasm when the straight line is rotated. But, while such discontinuities may reveal the existence of chasms, they can never measure their depth. The concave portions of the indifference curves and their many-dimensional generalizations, if they exist, must forever remain in unmeasurable obscurity.
The difficulties of studying non-convex preferences were emphasized by Herman Wold and again by Paul Samuelson, who wrote that non-convexities are "shrouded in eternal darkness", according to Diewert.
Nonetheless, non-convex preferences were illuminated from 1959 to 1961 by a sequence of papers in The Journal of Political Economy (JPE). The main contributors were Farrell, Bator, Koopmans, and Rothenberg. In particular, Rothenberg's paper discussed the approximate convexity of sums of non-convex sets. These JPE-papers stimulated a paper by Lloyd Shapley and Martin Shubik, which considered convexified consumer-preferences and introduced the concept of an "approximate equilibrium". The JPE-papers and the Shapley–Shubik paper influenced another notion of "quasi-equilibria", due to Robert Aumann.
==== Starr's 1969 paper and contemporary economics ====
Previous publications on non-convexity and economics were collected in an annotated bibliography by Kenneth Arrow. He gave the bibliography to Starr, who was then an undergraduate enrolled in Arrow's (graduate) advanced mathematical-economics course. In his term-paper, Starr studied the general equilibria of an artificial economy in which non-convex preferences were replaced by their convex hulls. In the convexified economy, at each price, the aggregate demand was the sum of convex hulls of the consumers' demands. Starr's ideas interested the mathematicians Lloyd Shapley and Jon Folkman, who proved their eponymous lemma and theorem in "private correspondence", which was reported by Starr's published paper of 1969.
In his 1969 publication, Starr applied the Shapley–Folkman–Starr theorem. Starr proved that the "convexified" economy has general equilibria that can be closely approximated by "quasi-equilibria" of the original economy, when the number of agents exceeds the dimension of the goods: Concretely, Starr proved that there exists at least one quasi-equilibrium of prices
{\displaystyle p_{\mathrm {opt} }} with the following properties:
For each quasi-equilibrium's prices {\displaystyle p_{\mathrm {opt} }}, all consumers can choose optimal baskets (maximally preferred and meeting their budget constraints).
At quasi-equilibrium prices {\displaystyle p_{\mathrm {opt} }} in the convexified economy, every good's market is in equilibrium: Its supply equals its demand.
For each quasi-equilibrium, the prices "nearly clear" the markets for the original economy: an upper bound on the distance between the set of equilibria of the "convexified" economy and the set of quasi-equilibria of the original economy followed from Starr's corollary to the Shapley–Folkman theorem.
Starr established that
"in the aggregate, the discrepancy between an allocation in the fictitious economy generated by [taking the convex hulls of all of the consumption and production sets] and some allocation in the real economy is bounded in a way that is independent of the number of economic agents. Therefore, the average agent experiences a deviation from intended actions that vanishes in significance as the number of agents goes to infinity".
Following Starr's 1969 paper, the Shapley–Folkman–Starr results have been widely used in economic theory. Roger Guesnerie summarized their economic implications: "Some key results obtained under the convexity assumption remain (approximately) relevant in circumstances where convexity fails. For example, in economies with a large consumption side, preference nonconvexities do not destroy the standard results". "The derivation of these results in general form has been one of the major achievements of postwar economic theory", wrote Guesnerie. The topic of non-convex sets in economics has been studied by many Nobel laureates: Arrow (1972), Robert Aumann (2005), Gérard Debreu (1983), Tjalling Koopmans (1975), Paul Krugman (2008), and Paul Samuelson (1970); the complementary topic of convex sets in economics has been emphasized by these laureates, along with Leonid Hurwicz, Leonid Kantorovich (1975), and Robert Solow (1987). The Shapley–Folkman–Starr results have been featured in the economics literature: in microeconomics, in general-equilibrium theory, in public economics (including market failures), as well as in game theory, in mathematical economics, and in applied mathematics (for economists). The Shapley–Folkman–Starr results have also influenced economics research using measure and integration theory.
=== Mathematical optimization ===
The Shapley–Folkman lemma has been used to explain why large minimization problems with non-convexities can be nearly solved (with iterative methods whose convergence proofs are stated for only convex problems). The Shapley–Folkman lemma has encouraged the use of methods of convex minimization on other applications with sums of many functions.
==== Preliminaries of optimization theory ====
Nonlinear optimization relies on the following definitions for functions:
The graph of a function f is the set of the pairs of arguments x and function evaluations f(x)
Graph(f) = { (x, f(x) ) }
The epigraph of a real-valued function f is the set of points above the graph
Epi(f) = { (x, u) : f(x) ≤ u }.
A real-valued function is defined to be a convex function if its epigraph is a convex set.
For example, the quadratic function f(x) = x2 is convex, as is the absolute value function g(x) = |x|. However, the sine function (pictured) is non-convex on the interval (0, π).
==== Additive optimization problems ====
In many optimization problems, the objective function f is separable: that is, f is the sum of many summand-functions, each of which has its own argument:
{\displaystyle f(x)=f{\bigl (}(x_{1},\dots ,x_{N}){\bigr )}=\sum f_{n}(x_{n}).}
For example, problems of linear optimization are separable. Given a separable problem with an optimal solution, fix an optimal solution
{\displaystyle x_{\min }=(x_{1},\dots ,x_{N})_{\min }} with the minimum value {\displaystyle f(x_{\min })}. For this separable problem, consider an optimal solution {\displaystyle {\bigl (}x_{\min },f(x_{\min }){\bigr )}} to the convexified problem, where convex hulls are taken of the graphs of the summand functions. Such an optimal solution is the limit of a sequence of points in the convexified problem
{\displaystyle {\bigl (}x_{j},f(x_{j}){\bigr )}\in \sum \operatorname {conv} \operatorname {Graph} (f_{n}).}
The given optimal-point is a sum of points in the graphs of the original summands and of a small number of convexified summands, by the Shapley–Folkman lemma.
This analysis was published by Ivar Ekeland in 1974 to explain the apparent convexity of separable problems with many summands, despite the non-convexity of the summand problems. In 1973, the young mathematician Claude Lemaréchal was surprised by his success with convex minimization methods on problems that were known to be non-convex; for minimizing nonlinear problems, a solution of the dual problem need not provide useful information for solving the primal problem, unless the primal problem be convex and satisfy a constraint qualification. Lemaréchal's problem was additively separable, and each summand function was non-convex; nonetheless, a solution to the dual problem provided a close approximation to the primal problem's optimal value. Ekeland's analysis explained the success of methods of convex minimization on large and separable problems, despite the non-convexities of the summand functions. Ekeland and later authors argued that additive separability produced an approximately convex aggregate problem, even though the summand functions were non-convex. The crucial step in these publications is the use of the Shapley–Folkman lemma.
The Shapley–Folkman lemma has encouraged the use of methods of convex minimization on other applications with sums of many functions.
=== Probability and measure theory ===
Convex sets are often studied with probability theory. Each point in the convex hull of a (non-empty) subset Q of a finite-dimensional space is the expected value of a simple random vector that takes its values in Q, as a consequence of Carathéodory's lemma. Thus, for a non-empty set Q, the collection of the expected values of the simple, Q-valued random vectors equals Q's convex hull; this equality implies that the Shapley–Folkman–Starr results are useful in probability theory. In the other direction, probability theory provides tools to examine convex sets generally and the Shapley–Folkman–Starr results specifically. The Shapley–Folkman–Starr results have been widely used in the probabilistic theory of random sets, for example, to prove a law of large numbers, a central limit theorem, and a large-deviations principle. These proofs of probabilistic limit theorems used the Shapley–Folkman–Starr results to avoid the assumption that all the random sets be convex.
A probability measure is a finite measure, and the Shapley–Folkman lemma has applications in non-probabilistic measure theory, such as the theories of volume and of vector measures. The Shapley–Folkman lemma enables a refinement of the Brunn–Minkowski inequality, which bounds the volume of sums in terms of the volumes of their summand-sets. The volume of a set is defined in terms of the Lebesgue measure, which is defined on subsets of Euclidean space. In advanced measure-theory, the Shapley–Folkman lemma has been used to prove Lyapunov's theorem, which states that the range of a vector measure is convex. Here, the traditional term "range" (alternatively, "image") is the set of values produced by the function.
A vector measure is a vector-valued generalization of a measure; for example, if p1 and p2 are probability measures defined on the same measurable space, then the product function p1 p2 is a vector measure, where p1 p2 is defined for every event ω by (p1 p2)(ω) = (p1(ω), p2(ω)).
Lyapunov's theorem has been used in economics, in ("bang-bang") control theory, and in statistical theory. Lyapunov's theorem has been called a continuous counterpart of the Shapley–Folkman lemma, which has itself been called a discrete analogue of Lyapunov's theorem.
== Notes ==
== Footnotes ==
== References ==
== External links ==
Starr, Ross M. (September 2009). "8 Convex sets, separation theorems, and non-convex sets in RN (Section 8.2.3 Measuring non-convexity, the Shapley–Folkman theorem)" (PDF). General equilibrium theory: An introduction. pp. 3–6. doi:10.1017/CBO9781139174749. ISBN 9781139174749. MR 1462618. (Draft of second edition, from Starr's course at the Economics Department of the University of California, San Diego). Archived from the original (PDF) on 1 July 2010. Retrieved 15 January 2011.
Starr, Ross M. (May 2007). "Shapley–Folkman theorem" (PDF). pp. 1–3. (Draft of article for the second edition of New Palgrave Dictionary of Economics). Retrieved 15 January 2011. | Wikipedia/Shapley–Folkman_lemma |
In mathematical analysis, in particular the subfields of convex analysis and optimization, a proper convex function is an extended real-valued convex function with a non-empty domain, that never takes on the value {\displaystyle -\infty } and also is not identically equal to {\displaystyle +\infty .}
In convex analysis and variational analysis, a point (in the domain) at which some given function {\displaystyle f} is minimized is typically sought, where {\displaystyle f} is valued in the extended real number line {\displaystyle [-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \}.} Such a point, if it exists, is called a global minimum point of the function and its value at this point is called the global minimum (value) of the function. If the function takes {\displaystyle -\infty } as a value then {\displaystyle -\infty } is necessarily the global minimum value and the minimization problem can be answered; this is ultimately the reason why the definition of "proper" requires that the function never take {\displaystyle -\infty } as a value. Assuming this, if the function's domain is empty or if the function is identically equal to {\displaystyle +\infty } then the minimization problem once again has an immediate answer. Extended real-valued functions for which the minimization problem is not solved by any one of these three trivial cases are exactly those that are called proper. Many (although not all) results whose hypotheses require that the function be proper add this requirement specifically to exclude these trivial cases.
If the problem is instead a maximization problem (which would be clearly indicated, such as by the function being concave rather than convex) then the definition of "proper" is defined in an analogous (albeit technically different) manner but with the same goal: to exclude cases where the maximization problem can be answered immediately. Specifically, a concave function
g
{\displaystyle g}
is called proper if its negation
−
g
,
{\displaystyle -g,}
which is a convex function, is proper in the sense defined above.
== Definitions ==
Suppose that
f
:
X
→
[
−
∞
,
∞
]
{\displaystyle f:X\to [-\infty ,\infty ]}
is a function taking values in the extended real number line
[
−
∞
,
∞
]
=
R
∪
{
±
∞
}
.
{\displaystyle [-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \}.}
If
f
{\displaystyle f}
is a convex function or if a minimum point of
f
{\displaystyle f}
is being sought, then
f
{\displaystyle f}
is called proper if
f
(
x
)
>
−
∞
{\displaystyle f(x)>-\infty }
for every
x
∈
X
{\displaystyle x\in X}
and if there also exists some point
x
0
∈
X
{\displaystyle x_{0}\in X}
such that
f
(
x
0
)
<
+
∞
.
{\displaystyle f\left(x_{0}\right)<+\infty .}
That is, a function is proper if it never attains the value
−
∞
{\displaystyle -\infty }
and its effective domain is nonempty.
This means that there exists some
x
∈
X
{\displaystyle x\in X}
at which
f
(
x
)
∈
R
{\displaystyle f(x)\in \mathbb {R} }
and
f
{\displaystyle f}
is also never equal to
−
∞
.
{\displaystyle -\infty .}
Convex functions that are not proper are called improper convex functions.
A proper concave function is by definition, any function
g
:
X
→
[
−
∞
,
∞
]
{\displaystyle g:X\to [-\infty ,\infty ]}
such that
f
:=
−
g
{\displaystyle f:=-g}
is a proper convex function. Explicitly, if
g
:
X
→
[
−
∞
,
∞
]
{\displaystyle g:X\to [-\infty ,\infty ]}
is a concave function or if a maximum point of
g
{\displaystyle g}
is being sought, then
g
{\displaystyle g}
is called proper if its domain is not empty, it never takes on the value
+
∞
,
{\displaystyle +\infty ,}
and it is not identically equal to
−
∞
.
{\displaystyle -\infty .}
== Properties ==
For every proper convex function
f
:
R
n
→
[
−
∞
,
∞
]
,
{\displaystyle f:\mathbb {R} ^{n}\to [-\infty ,\infty ],}
there exist some
b
∈
R
n
{\displaystyle b\in \mathbb {R} ^{n}}
and
r
∈
R
{\displaystyle r\in \mathbb {R} }
such that
f
(
x
)
≥
x
⋅
b
−
r
{\displaystyle f(x)\geq x\cdot b-r}
for every
x
∈
R
n
.
{\displaystyle x\in \mathbb {R} ^{n}.}
The sum of two proper convex functions is convex, but not necessarily proper. For instance if the sets
A
⊂
X
{\displaystyle A\subset X}
and
B
⊂
X
{\displaystyle B\subset X}
are non-empty convex sets in the vector space
X
,
{\displaystyle X,}
then the characteristic functions
I
A
{\displaystyle I_{A}}
and
I
B
{\displaystyle I_{B}}
are proper convex functions, but if
A
∩
B
=
∅
{\displaystyle A\cap B=\varnothing }
then
I
A
+
I
B
{\displaystyle I_{A}+I_{B}}
is identically equal to
+
∞
.
{\displaystyle +\infty .}
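This failure of properness can be checked numerically. The sketch below (illustrative only; the intervals are arbitrary) uses the convex-analysis indicator functions of two disjoint intervals on the real line:

```python
# Sketch: indicator functions of the disjoint convex sets A = [0, 1] and
# B = [2, 3]. Each of I_A, I_B is proper convex, but their sum is +inf everywhere.
import math

def indicator(lo, hi):
    """Convex-analysis indicator: 0 on [lo, hi], +inf outside."""
    return lambda x: 0.0 if lo <= x <= hi else math.inf

I_A = indicator(0.0, 1.0)
I_B = indicator(2.0, 3.0)

def is_proper(f, sample_points):
    """Proper on the sampled points: never -inf, and finite somewhere."""
    values = [f(x) for x in sample_points]
    return all(v > -math.inf for v in values) and any(v < math.inf for v in values)

grid = [i / 10 for i in range(-20, 51)]
print(is_proper(I_A, grid))                        # True
print(is_proper(I_B, grid))                        # True
print(is_proper(lambda x: I_A(x) + I_B(x), grid))  # False: identically +inf
```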
The infimal convolution of two proper convex functions is convex but not necessarily proper convex.
== See also ==
Effective domain
== Citations ==
== References ==
Rockafellar, R. Tyrrell; Wets, Roger J.-B. (26 June 2009). Variational Analysis. Grundlehren der mathematischen Wissenschaften. Vol. 317. Berlin New York: Springer Science & Business Media. ISBN 9783642024313. OCLC 883392544. | Wikipedia/Proper_convex_function |
In mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form
(
−
∞
,
a
)
{\displaystyle (-\infty ,a)}
is a convex set. For a function of a single variable, along any stretch of the curve the highest point is one of the endpoints. The negative of a quasiconvex function is said to be quasiconcave.
Quasiconvexity is a more general property than convexity in that all convex functions are also quasiconvex, but not all quasiconvex functions are convex. Univariate unimodal functions are quasiconvex or quasiconcave, however this is not necessarily the case for functions with multiple arguments. For example, the 2-dimensional Rosenbrock function is unimodal but not quasiconvex and functions with star-convex sublevel sets can be unimodal without being quasiconvex.
== Definition and properties ==
A function
f
:
S
→
R
{\displaystyle f:S\to \mathbb {R} }
defined on a convex subset
S
{\displaystyle S}
of a real vector space is quasiconvex if for all
x
,
y
∈
S
{\displaystyle x,y\in S}
and
λ
∈
[
0
,
1
]
{\displaystyle \lambda \in [0,1]}
we have
f
(
λ
x
+
(
1
−
λ
)
y
)
≤
max
{
f
(
x
)
,
f
(
y
)
}
.
{\displaystyle f(\lambda x+(1-\lambda )y)\leq \max {\big \{}f(x),f(y){\big \}}.}
In words, if
f
{\displaystyle f}
is such that it is always true that a point directly between two other points does not give a higher value of the function than both of the other points do, then
f
{\displaystyle f}
is quasiconvex. Note that the points
x
{\displaystyle x}
and
y
{\displaystyle y}
, and the point directly between them, can be points on a line or more generally points in n-dimensional space.
An alternative way (see introduction) of defining a quasi-convex function
f
(
x
)
{\displaystyle f(x)}
is to require that each sublevel set
S
α
(
f
)
=
{
x
∣
f
(
x
)
≤
α
}
{\displaystyle S_{\alpha }(f)=\{x\mid f(x)\leq \alpha \}}
is a convex set.
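The two characterizations (the max-inequality and convex sublevel sets) can be compared on a one-dimensional example. The following sketch is illustrative only: it checks both conditions on a finite grid for f(x) = √|x|, which is quasiconvex but not convex.

```python
# Sketch: check quasiconvexity of f(x) = sqrt(|x|) on a grid, both via the
# defining max-inequality and via the sampled sublevel sets being intervals.
import math

f = lambda x: math.sqrt(abs(x))
grid = [i / 20 for i in range(-60, 61)]

def max_inequality_holds(f, xs, lambdas=(0.25, 0.5, 0.75)):
    for x in xs:
        for y in xs:
            for lam in lambdas:
                z = lam * x + (1 - lam) * y
                if f(z) > max(f(x), f(y)) + 1e-12:
                    return False
    return True

def sublevel_is_interval(f, xs, alpha):
    """On a 1-D grid, a convex sublevel set shows up as one contiguous run."""
    inside = [f(x) <= alpha for x in xs]
    runs = sum(1 for i in range(1, len(xs)) if inside[i] and not inside[i - 1])
    runs += 1 if inside and inside[0] else 0
    return runs <= 1

print(max_inequality_holds(f, grid))                                  # True
print(all(sublevel_is_interval(f, grid, a) for a in (0.5, 1.0, 1.5)))  # True
```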
If furthermore
f
(
λ
x
+
(
1
−
λ
)
y
)
<
max
{
f
(
x
)
,
f
(
y
)
}
{\displaystyle f(\lambda x+(1-\lambda )y)<\max {\big \{}f(x),f(y){\big \}}}
for all
x
≠
y
{\displaystyle x\neq y}
and
λ
∈
(
0
,
1
)
{\displaystyle \lambda \in (0,1)}
, then
f
{\displaystyle f}
is strictly quasiconvex. That is, strict quasiconvexity requires that a point directly between two other points must give a lower value of the function than one of the other points does.
A quasiconcave function is a function whose negative is quasiconvex, and a strictly quasiconcave function is a function whose negative is strictly quasiconvex. Equivalently a function
f
{\displaystyle f}
is quasiconcave if
f
(
λ
x
+
(
1
−
λ
)
y
)
≥
min
{
f
(
x
)
,
f
(
y
)
}
.
{\displaystyle f(\lambda x+(1-\lambda )y)\geq \min {\big \{}f(x),f(y){\big \}}.}
and strictly quasiconcave if
f
(
λ
x
+
(
1
−
λ
)
y
)
>
min
{
f
(
x
)
,
f
(
y
)
}
{\displaystyle f(\lambda x+(1-\lambda )y)>\min {\big \{}f(x),f(y){\big \}}}
A (strictly) quasiconvex function has (strictly) convex lower contour sets, while a (strictly) quasiconcave function has (strictly) convex upper contour sets.
A function that is both quasiconvex and quasiconcave is quasilinear.
A particular case of quasi-concavity, if
S
⊂
R
{\displaystyle S\subset \mathbb {R} }
, is unimodality, in which the function has a single locally maximal value.
== Applications ==
Quasiconvex functions have applications in mathematical analysis, in mathematical optimization, and in game theory and economics.
=== Mathematical optimization ===
In nonlinear optimization, quasiconvex programming studies iterative methods that converge to a minimum (if one exists) for quasiconvex functions. Quasiconvex programming is a generalization of convex programming. Quasiconvex programming is used in the solution of "surrogate" dual problems, whose biduals provide quasiconvex closures of the primal problem, which therefore provide tighter bounds than do the convex closures provided by Lagrangian dual problems. In theory, quasiconvex programming and convex programming problems can be solved in reasonable amount of time, where the number of iterations grows like a polynomial in the dimension of the problem (and in the reciprocal of the approximation error tolerated); however, such theoretically "efficient" methods use "divergent-series" step size rules, which were first developed for classical subgradient methods. Classical subgradient methods using divergent-series rules are much slower than modern methods of convex minimization, such as subgradient projection methods, bundle methods of descent, and nonsmooth filter methods.
=== Economics and partial differential equations: Minimax theorems ===
In microeconomics, quasiconcave utility functions imply that consumers have convex preferences. Quasiconvex functions are important
also in game theory, industrial organization, and general equilibrium theory, particularly for applications of Sion's minimax theorem. Generalizing a minimax theorem of John von Neumann, Sion's theorem is also used in the theory of partial differential equations.
== Preservation of quasiconvexity ==
=== Operations preserving quasiconvexity ===
maximum of quasiconvex functions (i.e.
f
=
max
{
f
1
,
…
,
f
n
}
{\displaystyle f=\max \left\lbrace f_{1},\ldots ,f_{n}\right\rbrace }
) is quasiconvex. Similarly, maximum of strict quasiconvex functions is strict quasiconvex. Similarly, the minimum of quasiconcave functions is quasiconcave, and the minimum of strictly-quasiconcave functions is strictly-quasiconcave.
composition with a non-decreasing function :
g
:
R
n
→
R
{\displaystyle g:\mathbb {R} ^{n}\rightarrow \mathbb {R} }
quasiconvex,
h
:
R
→
R
{\displaystyle h:\mathbb {R} \rightarrow \mathbb {R} }
non-decreasing, then
f
=
h
∘
g
{\displaystyle f=h\circ g}
is quasiconvex. Similarly, if
g
:
R
n
→
R
{\displaystyle g:\mathbb {R} ^{n}\rightarrow \mathbb {R} }
quasiconcave,
h
:
R
→
R
{\displaystyle h:\mathbb {R} \rightarrow \mathbb {R} }
non-decreasing, then
f
=
h
∘
g
{\displaystyle f=h\circ g}
is quasiconcave.
minimization (i.e.
f
(
x
,
y
)
{\displaystyle f(x,y)}
quasiconvex,
C
{\displaystyle C}
convex set, then
h
(
x
)
=
inf
y
∈
C
f
(
x
,
y
)
{\displaystyle h(x)=\inf _{y\in C}f(x,y)}
is quasiconvex)
=== Operations not preserving quasiconvexity ===
The sum of quasiconvex functions defined on the same domain need not be quasiconvex: In other words, if
f
(
x
)
,
g
(
x
)
{\displaystyle f(x),g(x)}
are quasiconvex, then
(
f
+
g
)
(
x
)
=
f
(
x
)
+
g
(
x
)
{\displaystyle (f+g)(x)=f(x)+g(x)}
need not be quasiconvex.
The sum of quasiconvex functions defined on different domains (i.e. if
f
(
x
)
,
g
(
y
)
{\displaystyle f(x),g(y)}
are quasiconvex,
h
(
x
,
y
)
=
f
(
x
)
+
g
(
y
)
{\displaystyle h(x,y)=f(x)+g(y)}
) need not be quasiconvex. Such functions are called "additively decomposed" in economics and "separable" in mathematical optimization.
== Examples ==
Every convex function is quasiconvex.
A concave function can be quasiconvex. For example,
x
↦
log
(
x
)
{\displaystyle x\mapsto \log(x)}
is both concave and quasiconvex.
Any monotonic function is both quasiconvex and quasiconcave. More generally, a function which decreases up to a point and increases from that point on is quasiconvex (compare unimodality).
The floor function
x
↦
⌊
x
⌋
{\displaystyle x\mapsto \lfloor x\rfloor }
is an example of a quasiconvex function that is neither convex nor continuous.
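A small numerical sketch (illustrative grid and convex weights only) makes the floor-function example concrete: the quasiconvexity max-inequality holds, while the convexity chord inequality fails.

```python
# Sketch: floor is quasiconvex (the max-inequality holds on the sampled grid)
# but not convex (the chord inequality fails at some sampled points).
import math

f = math.floor
xs = [i / 4 for i in range(-12, 13)]     # grid of quarter-integers in [-3, 3]
lams = (0.25, 0.5, 0.75)

quasi_ok = all(f(l * x + (1 - l) * y) <= max(f(x), f(y))
               for x in xs for y in xs for l in lams)

convex_ok = all(f(l * x + (1 - l) * y) <= l * f(x) + (1 - l) * f(y) + 1e-12
                for x in xs for y in xs for l in lams)

print(quasi_ok)   # True: floor is non-decreasing, hence quasiconvex
print(convex_ok)  # False: e.g. x=-0.25, y=0.25, lam=0.5 violates the chord test
```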
== See also ==
Convex function
Concave function
Logarithmically concave function
Pseudoconvexity in the sense of several complex variables (not generalized convexity)
Pseudoconvex function
Invex function
Concavification
== References ==
Avriel, M., Diewert, W.E., Schaible, S. and Zang, I., Generalized Concavity, Plenum Press, 1988.
Crouzeix, J.-P. (2008). "Quasi-concavity". In Durlauf, Steven N.; Blume, Lawrence E (eds.). The New Palgrave Dictionary of Economics (Second ed.). Palgrave Macmillan. pp. 815–816. doi:10.1057/9780230226203.1375. ISBN 978-0-333-78676-5.
Singer, Ivan Abstract convex analysis. Canadian Mathematical Society Series of Monographs and Advanced Texts. A Wiley-Interscience Publication. John Wiley & Sons, Inc., New York, 1997. xxii+491 pp. ISBN 0-471-16015-6
== External links ==
SION, M., "On general minimax theorems", Pacific J. Math. 8 (1958), 171-176.
Mathematical programming glossary
Concave and Quasi-Concave Functions - by Charles Wilson, NYU Department of Economics
Quasiconcavity and quasiconvexity - by Martin J. Osborne, University of Toronto Department of Economics | Wikipedia/Quasiconvex_function |
In functional analysis, a branch of mathematics, the algebraic interior or radial kernel of a subset of a vector space is a refinement of the concept of the interior.
== Definition ==
Assume that
A
{\displaystyle A}
is a subset of a vector space
X
.
{\displaystyle X.}
The algebraic interior (or radial kernel) of
A
{\displaystyle A}
with respect to
X
{\displaystyle X}
is the set of all points at which
A
{\displaystyle A}
is a radial set.
A point
a
0
∈
A
{\displaystyle a_{0}\in A}
is called an internal point of
A
{\displaystyle A}
and
A
{\displaystyle A}
is said to be radial at
a
0
{\displaystyle a_{0}}
if for every
x
∈
X
{\displaystyle x\in X}
there exists a real number
t
x
>
0
{\displaystyle t_{x}>0}
such that for every
t
∈
[
0
,
t
x
]
,
{\displaystyle t\in [0,t_{x}],}
a
0
+
t
x
∈
A
.
{\displaystyle a_{0}+tx\in A.}
This last condition can also be written as
a
0
+
[
0
,
t
x
]
x
⊆
A
{\displaystyle a_{0}+[0,t_{x}]x\subseteq A}
where the set
a
0
+
[
0
,
t
x
]
x
:=
{
a
0
+
t
x
:
t
∈
[
0
,
t
x
]
}
{\displaystyle a_{0}+[0,t_{x}]x~:=~\left\{a_{0}+tx:t\in [0,t_{x}]\right\}}
is the line segment (or closed interval) starting at
a
0
{\displaystyle a_{0}}
and ending at
a
0
+
t
x
x
;
{\displaystyle a_{0}+t_{x}x;}
this line segment is a subset of
a
0
+
[
0
,
∞
)
x
,
{\displaystyle a_{0}+[0,\infty )x,}
which is the ray emanating from
a
0
{\displaystyle a_{0}}
in the direction of
x
{\displaystyle x}
(that is, parallel to/a translation of
[
0
,
∞
)
x
{\displaystyle [0,\infty )x}
).
Thus geometrically, an interior point of a subset
A
{\displaystyle A}
is a point
a
0
∈
A
{\displaystyle a_{0}\in A}
with the property that in every possible direction (vector)
x
≠
0
,
{\displaystyle x\neq 0,}
A
{\displaystyle A}
contains some (non-degenerate) line segment starting at
a
0
{\displaystyle a_{0}}
and heading in that direction (i.e. a subset of the ray
a
0
+
[
0
,
∞
)
x
{\displaystyle a_{0}+[0,\infty )x}
).
The algebraic interior of
A
{\displaystyle A}
(with respect to
X
{\displaystyle X}
) is the set of all such points. That is to say, it is the set of points of the given set at which the set is radial.
If
M
{\displaystyle M}
is a linear subspace of
X
{\displaystyle X}
and
A
⊆
X
{\displaystyle A\subseteq X}
then this definition can be generalized to the algebraic interior of
A
{\displaystyle A}
with respect to
M
{\displaystyle M}
is:
aint
M
A
:=
{
a
∈
X
:
for all
m
∈
M
,
there exists some
t
m
>
0
such that
a
+
[
0
,
t
m
]
⋅
m
⊆
A
}
.
{\displaystyle \operatorname {aint} _{M}A:=\left\{a\in X:{\text{ for all }}m\in M,{\text{ there exists some }}t_{m}>0{\text{ such that }}a+\left[0,t_{m}\right]\cdot m\subseteq A\right\}.}
where
aint
M
A
⊆
A
{\displaystyle \operatorname {aint} _{M}A\subseteq A}
always holds and if
aint
M
A
≠
∅
{\displaystyle \operatorname {aint} _{M}A\neq \varnothing }
then
M
⊆
aff
(
A
−
A
)
,
{\displaystyle M\subseteq \operatorname {aff} (A-A),}
where
aff
(
A
−
A
)
{\displaystyle \operatorname {aff} (A-A)}
is the affine hull of
A
−
A
{\displaystyle A-A}
(which is equal to
span
(
A
−
A
)
{\displaystyle \operatorname {span} (A-A)}
).
Algebraic closure
A point
x
∈
X
{\displaystyle x\in X}
is said to be linearly accessible from a subset
A
⊆
X
{\displaystyle A\subseteq X}
if there exists some
a
∈
A
{\displaystyle a\in A}
such that the line segment
[
a
,
x
)
:=
a
+
[
0
,
1
)
(
x
−
a
)
{\displaystyle [a,x):=a+[0,1)(x-a)}
is contained in
A
.
{\displaystyle A.}
The algebraic closure of
A
{\displaystyle A}
with respect to
X
{\displaystyle X}
, denoted by
acl
X
A
,
{\displaystyle \operatorname {acl} _{X}A,}
consists of (
A
{\displaystyle A}
and) all points in
X
{\displaystyle X}
that are linearly accessible from
A
.
{\displaystyle A.}
== Algebraic Interior (Core) ==
In the special case where
M
:=
X
,
{\displaystyle M:=X,}
the set
aint
X
A
{\displaystyle \operatorname {aint} _{X}A}
is called the algebraic interior or core of
A
{\displaystyle A}
and it is denoted by
A
i
{\displaystyle A^{i}}
or
core
A
.
{\displaystyle \operatorname {core} A.}
Formally, if
X
{\displaystyle X}
is a vector space then the algebraic interior of
A
⊆
X
{\displaystyle A\subseteq X}
is
aint
X
A
:=
core
(
A
)
:=
{
a
∈
A
:
for all
x
∈
X
,
there exists some
t
x
>
0
,
such that for all
t
∈
[
0
,
t
x
]
,
a
+
t
x
∈
A
}
.
{\displaystyle \operatorname {aint} _{X}A:=\operatorname {core} (A):=\left\{a\in A:{\text{ for all }}x\in X,{\text{ there exists some }}t_{x}>0,{\text{ such that for all }}t\in \left[0,t_{x}\right],a+tx\in A\right\}.}
We call A algebraically open in X if
A
=
aint
X
A
{\displaystyle A=\operatorname {aint} _{X}A}
If
A
{\displaystyle A}
is non-empty, then these additional subsets are also useful for the statements of many theorems in convex functional analysis (such as the Ursescu theorem):
i
c
A
:=
{
i
A
if
aff
A
is a closed set,
∅
otherwise
{\displaystyle {}^{ic}A:={\begin{cases}{}^{i}A&{\text{ if }}\operatorname {aff} A{\text{ is a closed set,}}\\\varnothing &{\text{ otherwise}}\end{cases}}}
i
b
A
:=
{
i
A
if
span
(
A
−
a
)
is a barrelled linear subspace of
X
for any/all
a
∈
A
,
∅
otherwise
{\displaystyle {}^{ib}A:={\begin{cases}{}^{i}A&{\text{ if }}\operatorname {span} (A-a){\text{ is a barrelled linear subspace of }}X{\text{ for any/all }}a\in A{\text{,}}\\\varnothing &{\text{ otherwise}}\end{cases}}}
If
X
{\displaystyle X}
is a Fréchet space,
A
{\displaystyle A}
is convex, and
aff
A
{\displaystyle \operatorname {aff} A}
is closed in
X
{\displaystyle X}
then
i
c
A
=
i
b
A
{\displaystyle {}^{ic}A={}^{ib}A}
but in general it is possible to have
i
c
A
=
∅
{\displaystyle {}^{ic}A=\varnothing }
while
i
b
A
{\displaystyle {}^{ib}A}
is not empty.
=== Examples ===
If
A
=
{
x
∈
R
2
:
x
2
≥
x
1
2
or
x
2
≤
0
}
⊆
R
2
{\displaystyle A=\{x\in \mathbb {R} ^{2}:x_{2}\geq x_{1}^{2}{\text{ or }}x_{2}\leq 0\}\subseteq \mathbb {R} ^{2}}
then
0
∈
core
(
A
)
,
{\displaystyle 0\in \operatorname {core} (A),}
but
0
∉
int
(
A
)
{\displaystyle 0\not \in \operatorname {int} (A)}
and
0
∉
core
(
core
(
A
)
)
.
{\displaystyle 0\not \in \operatorname {core} (\operatorname {core} (A)).}
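The example can be probed numerically. The sketch below is illustrative only (a finite sample of directions and step sizes): it checks that along every sampled direction some short segment from the origin stays in A, while points arbitrarily close to the origin lie outside A.

```python
# Sketch for A = {x in R^2 : x2 >= x1^2 or x2 <= 0}: 0 lies in core(A) but not
# in int(A). Directions, step sizes and test points are a finite sample only.
def in_A(x1, x2):
    return x2 >= x1 ** 2 or x2 <= 0

def radial_at_origin(d1, d2, steps=(1e-1, 1e-2, 1e-3)):
    """Is there a sampled t_x > 0 such that the segment [0, t_x]*(d1, d2)
    stays in A?  (Checked at a few sample points along the segment.)"""
    for t_x in steps:
        if all(in_A(s * t_x * d1, s * t_x * d2) for s in (0.25, 0.5, 0.75, 1.0)):
            return True
    return False

directions = [(i, j) for i in range(-5, 6) for j in range(-5, 6) if (i, j) != (0, 0)]
print(all(radial_at_origin(d1, d2) for d1, d2 in directions))  # True: 0 acts as a core point

# But 0 is not an interior point: arbitrarily close to 0 there are points
# outside A, e.g. (t, t^3) with small t > 0 has 0 < x2 < x1^2.
print([in_A(t, t ** 3) for t in (0.1, 0.01, 0.001)])           # [False, False, False]
```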
=== Properties of core ===
Suppose
A
,
B
⊆
X
.
{\displaystyle A,B\subseteq X.}
In general,
core
A
≠
core
(
core
A
)
.
{\displaystyle \operatorname {core} A\neq \operatorname {core} (\operatorname {core} A).}
But if
A
{\displaystyle A}
is a convex set then:
core
A
=
core
(
core
A
)
,
{\displaystyle \operatorname {core} A=\operatorname {core} (\operatorname {core} A),}
and
for all
x
0
∈
core
A
,
y
∈
A
,
0
<
λ
≤
1
{\displaystyle x_{0}\in \operatorname {core} A,y\in A,0<\lambda \leq 1}
then
λ
x
0
+
(
1
−
λ
)
y
∈
core
A
.
{\displaystyle \lambda x_{0}+(1-\lambda )y\in \operatorname {core} A.}
A
{\displaystyle A}
is an absorbing subset of a real vector space if and only if
0
∈
core
(
A
)
.
{\displaystyle 0\in \operatorname {core} (A).}
A
+
core
B
⊆
core
(
A
+
B
)
{\displaystyle A+\operatorname {core} B\subseteq \operatorname {core} (A+B)}
A
+
core
B
=
core
(
A
+
B
)
{\displaystyle A+\operatorname {core} B=\operatorname {core} (A+B)}
if
B
=
core
B
.
{\displaystyle B=\operatorname {core} B.}
Both the core and the algebraic closure of a convex set are again convex.
If
C
{\displaystyle C}
is convex,
c
∈
core
C
,
{\displaystyle c\in \operatorname {core} C,}
and
b
∈
acl
X
C
{\displaystyle b\in \operatorname {acl} _{X}C}
then the line segment
[
c
,
b
)
:=
c
+
[
0
,
1
)
b
{\displaystyle [c,b):=c+[0,1)b}
is contained in
core
C
.
{\displaystyle \operatorname {core} C.}
=== Relation to topological interior ===
Let
X
{\displaystyle X}
be a topological vector space,
int
{\displaystyle \operatorname {int} }
denote the interior operator, and
A
⊆
X
{\displaystyle A\subseteq X}
then:
int
A
⊆
core
A
{\displaystyle \operatorname {int} A\subseteq \operatorname {core} A}
If
A
{\displaystyle A}
is nonempty convex and
X
{\displaystyle X}
is finite-dimensional, then
int
A
=
core
A
.
{\displaystyle \operatorname {int} A=\operatorname {core} A.}
If
A
{\displaystyle A}
is convex with non-empty interior, then
int
A
=
core
A
.
{\displaystyle \operatorname {int} A=\operatorname {core} A.}
If
A
{\displaystyle A}
is a closed convex set and
X
{\displaystyle X}
is a complete metric space, then
int
A
=
core
A
.
{\displaystyle \operatorname {int} A=\operatorname {core} A.}
== Relative algebraic interior ==
If
M
=
aff
(
A
−
A
)
{\displaystyle M=\operatorname {aff} (A-A)}
then the set
aint
M
A
{\displaystyle \operatorname {aint} _{M}A}
is denoted by
i
A
:=
aint
aff
(
A
−
A
)
A
{\displaystyle {}^{i}A:=\operatorname {aint} _{\operatorname {aff} (A-A)}A}
and it is called the relative algebraic interior of
A
.
{\displaystyle A.}
This name stems from the fact that
a
∈
A
i
{\displaystyle a\in A^{i}}
if and only if
aff
A
=
X
{\displaystyle \operatorname {aff} A=X}
and
a
∈
i
A
{\displaystyle a\in {}^{i}A}
(where
aff
A
=
X
{\displaystyle \operatorname {aff} A=X}
if and only if
aff
(
A
−
A
)
=
X
{\displaystyle \operatorname {aff} (A-A)=X}
).
== Relative interior ==
If
A
{\displaystyle A}
is a subset of a topological vector space
X
{\displaystyle X}
then the relative interior of
A
{\displaystyle A}
is the set
rint
A
:=
int
aff
A
A
.
{\displaystyle \operatorname {rint} A:=\operatorname {int} _{\operatorname {aff} A}A.}
That is, it is the topological interior of A in
aff
A
,
{\displaystyle \operatorname {aff} A,}
which is the smallest affine linear subspace of
X
{\displaystyle X}
containing
A
.
{\displaystyle A.}
The following set is also useful:
ri
A
:=
{
rint
A
if
aff
A
is a closed subspace of
X
,
∅
otherwise
{\displaystyle \operatorname {ri} A:={\begin{cases}\operatorname {rint} A&{\text{ if }}\operatorname {aff} A{\text{ is a closed subspace of }}X{\text{,}}\\\varnothing &{\text{ otherwise}}\end{cases}}}
== Quasi relative interior ==
If
A
{\displaystyle A}
is a subset of a topological vector space
X
{\displaystyle X}
then the quasi relative interior of
A
{\displaystyle A}
is the set
qri
A
:=
{
a
∈
A
:
cone
¯
(
A
−
a
)
is a linear subspace of
X
}
.
{\displaystyle \operatorname {qri} A:=\left\{a\in A:{\overline {\operatorname {cone} }}(A-a){\text{ is a linear subspace of }}X\right\}.}
In a Hausdorff finite dimensional topological vector space,
qri
A
=
i
A
=
i
c
A
=
i
b
A
.
{\displaystyle \operatorname {qri} A={}^{i}A={}^{ic}A={}^{ib}A.}
== See also ==
Algebraic closure (convex analysis)
Bounding point – Mathematical concept related to subsets of vector spaces
Interior (topology) – Largest open subset of some given set
Order unit – Element of an ordered vector space
Quasi-relative interior – Generalization of algebraic interior
Radial set – Topological set
Relative interior – Generalization of topological interior
Ursescu theorem – Generalization of closed graph, open mapping, and uniform boundedness theorem
== References ==
=== Bibliography ===
Aliprantis, Charalambos D.; Border, Kim C. (2006). Infinite Dimensional Analysis: A Hitchhiker's Guide (Third ed.). Berlin: Springer Science & Business Media. ISBN 978-3-540-29587-7. OCLC 262692874.
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365.
Zălinescu, Constantin (30 July 2002). Convex Analysis in General Vector Spaces. River Edge, N.J. London: World Scientific Publishing. ISBN 978-981-4488-15-0. MR 1921556. OCLC 285163112 – via Internet Archive. | Wikipedia/Algebraic_interior |
In vector calculus, an invex function is a differentiable function
f
{\displaystyle f}
from
R
n
{\displaystyle \mathbb {R} ^{n}}
to
R
{\displaystyle \mathbb {R} }
for which there exists a vector valued function
η
{\displaystyle \eta }
such that
f
(
x
)
−
f
(
u
)
≥
η
(
x
,
u
)
⋅
∇
f
(
u
)
,
{\displaystyle f(x)-f(u)\geq \eta (x,u)\cdot \nabla f(u),\,}
for all x and u.
Invex functions were introduced by Hanson as a generalization of convex functions. Ben-Israel and Mond provided a simple proof that a function is invex if and only if every stationary point is a global minimum, a theorem first stated by Craven and Glover.
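The Ben-Israel–Mond characterization suggests a simple numerical illustration: for a differentiable function whose only stationary point is a global minimizer, one admissible choice when ∇f(u) ≠ 0 is η(x, u) = (f(x) − f(u)) ∇f(u) / ‖∇f(u)‖², and η = 0 at stationary points. The sketch below (illustrative function and grid, not from the cited sources) checks the invexity inequality for f(x) = log(1 + x²), which is invex but not convex.

```python
# Sketch: f(x) = log(1 + x^2) is invex. Its only stationary point (x = 0) is
# the global minimum, and with the eta below the invexity inequality holds.
import math

f = lambda x: math.log(1.0 + x * x)
df = lambda x: 2.0 * x / (1.0 + x * x)

def eta(x, u, tol=1e-12):
    g = df(u)
    if abs(g) < tol:          # stationary u: eta = 0 works because u is a global min
        return 0.0
    return (f(x) - f(u)) * g / (g * g)

grid = [i / 10 for i in range(-50, 51)]
invex_ok = all(f(x) - f(u) >= eta(x, u) * df(u) - 1e-12 for x in grid for u in grid)
print(invex_ok)   # True

# f is not convex: the chord inequality fails, e.g. between x = 1 and x = 3.
x, y, lam = 1.0, 3.0, 0.5
print(f(lam * x + (1 - lam) * y) <= lam * f(x) + (1 - lam) * f(y))  # False
```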
Hanson also showed that if the objective and the constraints of an optimization problem are invex with respect to the same function
η
(
x
,
u
)
{\displaystyle \eta (x,u)}
, then the Karush–Kuhn–Tucker conditions are sufficient for a global minimum.
== Type I invex functions ==
A slight generalization of invex functions called Type I invex functions are the most general class of functions for which the Karush–Kuhn–Tucker conditions are necessary and sufficient for a global minimum. Consider a mathematical program of the form
min
f
(
x
)
s.t.
g
(
x
)
≤
0
{\displaystyle {\begin{array}{rl}\min &f(x)\\{\text{s.t.}}&g(x)\leq 0\end{array}}}
where
f
:
R
n
→
R
{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }
and
g
:
R
n
→
R
m
{\displaystyle g:\mathbb {R} ^{n}\to \mathbb {R} ^{m}}
are differentiable functions. Let
F
=
{
x
∈
R
n
|
g
(
x
)
≤
0
}
{\displaystyle F=\{x\in \mathbb {R} ^{n}\;|\;g(x)\leq 0\}}
denote the feasible region of this program. The function
f
{\displaystyle f}
is a Type I objective function and the function
g
{\displaystyle g}
is a Type I constraint function at
x
0
{\displaystyle x_{0}}
with respect to
η
{\displaystyle \eta }
if there exists a vector-valued function
η
{\displaystyle \eta }
defined on
F
{\displaystyle F}
such that
f
(
x
)
−
f
(
x
0
)
≥
η
(
x
)
⋅
∇
f
(
x
0
)
{\displaystyle f(x)-f(x_{0})\geq \eta (x)\cdot \nabla {f(x_{0})}}
and
−
g
(
x
0
)
≥
η
(
x
)
⋅
∇
g
(
x
0
)
{\displaystyle -g(x_{0})\geq \eta (x)\cdot \nabla {g(x_{0})}}
for all
x
∈
F
{\displaystyle x\in {F}}
. Note that, unlike invexity, Type I invexity is defined relative to a point
x
0
{\displaystyle x_{0}}
.
Theorem (Theorem 2.1 in): If
f
{\displaystyle f}
and
g
{\displaystyle g}
are Type I invex at a point
x
∗
{\displaystyle x^{*}}
with respect to
η
{\displaystyle \eta }
, and the Karush–Kuhn–Tucker conditions are satisfied at
x
∗
{\displaystyle x^{*}}
, then
x
∗
{\displaystyle x^{*}}
is a global minimizer of
f
{\displaystyle f}
over
F
{\displaystyle F}
.
== E-invex function ==
Let
E
{\displaystyle E}
from
R
n
{\displaystyle \mathbb {R} ^{n}}
to
R
n
{\displaystyle \mathbb {R} ^{n}}
and
f
{\displaystyle f}
from
M
{\displaystyle \mathbb {M} }
to
R
{\displaystyle \mathbb {R} }
be an
E
{\displaystyle E}
-differentiable function on a nonempty open set
M
⊂
R
n
{\displaystyle \mathbb {M} \subset \mathbb {R} ^{n}}
. Then
f
{\displaystyle f}
is said to be an E-invex function at
u
{\displaystyle u}
if there exists a vector valued function
η
{\displaystyle \eta }
such that
(
f
∘
E
)
(
x
)
−
(
f
∘
E
)
(
u
)
≥
∇
(
f
∘
E
)
(
u
)
⋅
η
(
E
(
x
)
,
E
(
u
)
)
,
{\displaystyle (f\circ E)(x)-(f\circ E)(u)\geq \nabla (f\circ E)(u)\cdot \eta (E(x),E(u)),\,}
for all
x
{\displaystyle x}
and
u
{\displaystyle u}
in
M
{\displaystyle \mathbb {M} }
.
E-invex functions were introduced by Abdulaleem as a generalization of differentiable convex functions.
== E-type I Functions ==
Let
E
:
R
n
→
R
n
{\displaystyle E:\mathbb {R} ^{n}\to \mathbb {R} ^{n}}
, and
M
⊂
R
n
{\displaystyle M\subset \mathbb {R} ^{n}}
be an open E-invex set. A vector-valued pair
(
f
,
g
)
{\displaystyle (f,g)}
, where
f
{\displaystyle f}
and
g
{\displaystyle g}
represent objective and constraint functions respectively, is said to be E-type I with respect to a vector-valued function
η
:
M
×
M
→
R
n
{\displaystyle \eta :M\times M\to \mathbb {R} ^{n}}
, at
u
∈
M
{\displaystyle u\in M}
, if the following inequalities hold for all
x
∈
F
E
=
{
x
∈
R
n
|
g
(
E
(
x
)
)
≤
0
}
{\displaystyle x\in F_{E}=\{x\in \mathbb {R} ^{n}\;|\;g(E(x))\leq 0\}}
:
f
i
(
E
(
x
)
)
−
f
i
(
E
(
u
)
)
≥
∇
f
i
(
E
(
u
)
)
⋅
η
(
E
(
x
)
,
E
(
u
)
)
,
{\displaystyle f_{i}(E(x))-f_{i}(E(u))\geq \nabla f_{i}(E(u))\cdot \eta (E(x),E(u)),}
−
g
j
(
E
(
u
)
)
≥
∇
g
j
(
E
(
u
)
)
⋅
η
(
E
(
x
)
,
E
(
u
)
)
.
{\displaystyle -g_{j}(E(u))\geq \nabla g_{j}(E(u))\cdot \eta (E(x),E(u)).}
=== Remark 1. ===
If
f
{\displaystyle f}
and
g
{\displaystyle g}
are differentiable functions and
E
(
x
)
=
x
{\displaystyle E(x)=x}
(
E
{\displaystyle E}
is an identity map), then the definition of E-type I functions reduces to the definition of type I functions introduced by Rueda and Hanson.
== See also ==
Convex function
Pseudoconvex function
Quasiconvex function
== References ==
== Further reading ==
S. K. Mishra and G. Giorgi, Invexity and optimization, Nonconvex Optimization and Its Applications, Vol. 88, Springer-Verlag, Berlin, 2008.
S. K. Mishra, S.-Y. Wang and K. K. Lai, Generalized Convexity and Vector Optimization, Springer, New York, 2009. | Wikipedia/Invex_function |
In economics, a production function gives the technological relation between quantities of physical inputs and quantities of output of goods. The production function is one of the key concepts of mainstream neoclassical theories, used to define marginal product and to distinguish allocative efficiency, a key focus of economics. One important purpose of the production function is to address allocative efficiency in the use of factor inputs in production and the resulting distribution of income to those factors, while abstracting away from the technological problems of achieving technical efficiency, as an engineer or professional manager might understand it.
For modelling the case of many outputs and many inputs, researchers often use the so-called Shephard's distance functions or, alternatively, directional distance functions, which are generalizations of the simple production function in economics.
In macroeconomics, aggregate production functions are estimated to create a framework in which to distinguish how much of economic growth to attribute to changes in factor allocation (e.g. the accumulation of physical capital) and how much to attribute to advancing technology. Some non-mainstream economists, however, reject the very concept of an aggregate production function.
== The theory of production functions ==
In general, economic output is not a (mathematical) function of input, because any given set of inputs can be used to produce a range of outputs. To satisfy the mathematical definition of a function, a production function is customarily assumed to specify the maximum output obtainable from a given set of inputs. The production function, therefore, describes a boundary or frontier representing the limit of output obtainable from each feasible combination of input. Alternatively, a production function can be defined as the specification of the minimum input requirements needed to produce designated quantities of output. Assuming that maximum output is obtained from given inputs allows economists to abstract away from technological and managerial problems associated with realizing such a technical maximum, and to focus exclusively on the problem of allocative efficiency, associated with the economic choice of how much of a factor input to use, or the degree to which one factor may be substituted for another. In the production function itself, the relationship of output to inputs is non-monetary; that is, a production function relates physical inputs to physical outputs, and prices and costs are not reflected in the function.
In the decision frame of a firm making economic choices regarding production—how much of each factor input to use to produce how much output—and facing market prices for output and inputs, the production function represents the possibilities afforded by an exogenous technology. Under certain assumptions, the production function can be used to derive a marginal product for each factor. The profit-maximizing firm in perfect competition (taking output and input prices as given) will choose to add input right up to the point where the marginal cost of additional input matches the marginal product in additional output. This implies an ideal division of the income generated from output into an income due to each input factor of production, equal to the marginal product of each input.
The inputs to the production function are commonly termed factors of production and may represent primary factors, which are stocks. Classically, the primary factors of production were land, labour and capital. Primary factors do not become part of the output product, nor are the primary factors, themselves, transformed in the production process. The production function, as a theoretical construct, may be abstracting away from the secondary factors and intermediate products consumed in a production process. The production function is not a full model of the production process: it deliberately abstracts from inherent aspects of physical production processes that some would argue are essential, including error, entropy or waste, and the consumption of energy or the co-production of pollution. Moreover, production functions do not ordinarily model the business processes, either, ignoring the role of strategic and operational business management. (For a primer on the fundamental elements of microeconomic production theory, see production theory basics).
The production function is central to the marginalist focus of neoclassical economics, its definition of efficiency as allocative efficiency, its analysis of how market prices can govern the achievement of allocative efficiency in a decentralized economy, and an analysis of the distribution of income, which attributes factor income to the marginal product of factor input.
=== Specifying the production function ===
A production function can be expressed in a functional form as the right side of
Q
=
f
(
X
1
,
X
2
,
X
3
,
…
,
X
n
)
{\displaystyle Q=f(X_{1},X_{2},X_{3},\dotsc ,X_{n})}
where
Q
{\displaystyle Q}
is the quantity of output and
X
1
,
X
2
,
X
3
,
…
,
X
n
{\displaystyle X_{1},X_{2},X_{3},\dotsc ,X_{n}}
are the quantities of factor inputs (such as capital, labour, land or raw materials). For
{\displaystyle X_{1}=X_{2}=\dotsb =X_{n}=0}
it must be
{\displaystyle Q=0}
since we cannot produce anything without inputs.
If
Q
{\displaystyle Q}
is a scalar, then this form does not encompass joint production, which is a production process that has multiple co-products. On the other hand, if
f
{\displaystyle f}
maps from
R
n
{\displaystyle \mathbb {R} ^{n}}
to
R
k
{\displaystyle \mathbb {R} ^{k}}
then it is a joint production function expressing the determination of
k
{\displaystyle k}
different types of output based on the joint usage of the specified quantities of the
n
{\displaystyle n}
inputs.
One formulation is as a linear function:
Q
=
a
1
X
1
+
a
2
X
2
+
a
3
X
3
+
⋯
+
a
n
X
n
{\displaystyle Q=a_{1}X_{1}+a_{2}X_{2}+a_{3}X_{3}+\dotsb +a_{n}X_{n}}
where
a
1
,
…
,
a
n
{\displaystyle a_{1},\dots ,a_{n}}
are parameters that are determined empirically. Linear functions imply that inputs are perfect substitutes in production.
Another is as a Cobb–Douglas production function:
Q
=
a
0
X
1
a
1
X
2
a
2
⋯
X
n
a
n
{\displaystyle Q=a_{0}X_{1}^{a_{1}}X_{2}^{a_{2}}\cdots X_{n}^{a_{n}}}
where
a
0
{\displaystyle a_{0}}
is the so-called total factor productivity.
The Leontief production function applies to situations in which inputs must be used in fixed proportions; starting from those proportions, if usage of one input is increased without another being increased, the output will not change. This production function is given by
Q
=
min
(
a
1
X
1
,
a
2
X
2
,
…
,
a
n
X
n
)
.
{\displaystyle Q=\min(a_{1}X_{1},a_{2}X_{2},\dotsc ,a_{n}X_{n}).}
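The three functional forms above can be compared on the same input bundle. The sketch below is purely illustrative; the parameter values are arbitrary and are not estimates of any actual technology.

```python
# Sketch: linear, Cobb-Douglas and Leontief production functions evaluated on
# the same two-input bundle (X1, X2). All parameter values are illustrative.
def linear(X, a):                       # perfect substitutes
    return sum(ai * xi for ai, xi in zip(a, X))

def cobb_douglas(X, a0, a):             # Q = a0 * X1^a1 * X2^a2 * ...
    q = a0
    for ai, xi in zip(a, X):
        q *= xi ** ai
    return q

def leontief(X, a):                     # fixed proportions
    return min(ai * xi for ai, xi in zip(a, X))

X = (4.0, 9.0)                          # e.g. capital and labour
print(linear(X, a=(2.0, 1.0)))                # 17.0
print(cobb_douglas(X, a0=1.0, a=(0.5, 0.5)))  # 6.0
print(leontief(X, a=(2.0, 1.0)))              # 8.0
```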
Other forms include the constant elasticity of substitution production function (CES), which is a generalized form of the Cobb–Douglas function, and the quadratic production function. The best form of the equation to use and the values of the parameters (
a
0
,
…
,
a
n
{\displaystyle a_{0},\dots ,a_{n}}
) vary from company to company and industry to industry. In the short-run production function, at least one of the X's (inputs) is fixed. In the long run, all factor inputs are variable at the discretion of management.
Moysan and Senouci (2016) provide an analytical formula for all 2-input, neoclassical production functions.
=== Production function as a graph ===
Any of these equations can be plotted on a graph. A typical (quadratic) production function is shown in the following diagram under the assumption of a single variable input (or fixed ratios of inputs so they can be treated as a single variable). All points above the production function are unobtainable with current technology, all points below are technically feasible, and all points on the function show the maximum quantity of output obtainable at the specified level of usage of the input. From point A to point C, the firm is experiencing positive but decreasing marginal returns to the variable input. As additional units of the input are employed, output increases but at a decreasing rate. Point B is the point beyond which there are diminishing average returns, as shown by the declining slope of the average physical product curve (APP) beyond point Y. Point B is just tangent to the steepest ray from the origin hence the average physical product is at a maximum. Beyond point B, mathematical necessity requires that the marginal curve must be below the average curve (See production theory basics for further explanation and Sickles and Zelenyuk (2019) for more extensive discussions of various production functions, their generalizations and estimations).
=== Stages of production ===
To simplify the interpretation of a production function, it is common to divide its range into 3 stages. In Stage 1 (from the origin to point B) the variable input is being used with increasing output per unit, the latter reaching a maximum at point B (since the average physical product is at its maximum at that point). Because the output per unit of the variable input is improving throughout stage 1, a price-taking firm will always operate beyond this stage.
In Stage 2, output increases at a decreasing rate, and the average and marginal physical product both decline. However, the average product of fixed inputs (not shown) is still rising, because output is rising while fixed input usage is constant. In this stage, the employment of additional variable inputs increases the output per unit of fixed input but decreases the output per unit of the variable input. The optimum input/output combination for the price-taking firm will be in stage 2, although a firm facing a downward-sloped demand curve might find it most profitable to operate in Stage 2.
In Stage 3, too much variable input is being used relative to the available fixed inputs: variable inputs are over-utilized in the sense that their presence on the margin obstructs the production process rather than enhancing it. The output per unit of both the fixed and the variable input declines throughout this stage. At the boundary between stage 2 and stage 3, the highest possible output is being obtained from the fixed input.
=== Shifting a production function ===
By definition, in the long run the firm can change its scale of operations by adjusting the level of inputs that are fixed in the short run, thereby shifting the production function upward as plotted against the variable input. If fixed inputs are lumpy, adjustments to the scale of operations may be more significant than what is required to merely balance production capacity with demand. For example, you may only need to increase production by a million units per year to keep up with demand, but the production equipment upgrades that are available may involve increasing productive capacity by 2 million units per year.
If a firm is operating at a profit-maximizing level in stage one, it might, in the long run, choose to reduce its scale of operations (by selling capital equipment). By reducing the amount of fixed capital inputs, the production function will shift down. The beginning of stage 2 shifts from B1 to B2. The (unchanged) profit-maximizing output level will now be in stage 2.
=== Homogeneous and homothetic production functions ===
There are two special classes of production functions that are often analyzed. The production function
Q
=
f
(
X
1
,
X
2
,
…
,
X
n
)
{\displaystyle Q=f(X_{1},X_{2},\dotsc ,X_{n})}
is said to be homogeneous of degree
m
{\displaystyle m}
, if given any positive constant
k
{\displaystyle k}
,
f
(
k
X
1
,
k
X
2
,
…
,
k
X
n
)
=
k
m
f
(
X
1
,
X
2
,
…
,
X
n
)
{\displaystyle f(kX_{1},kX_{2},\dotsc ,kX_{n})=k^{m}f(X_{1},X_{2},\dotsc ,X_{n})}
. If
m
>
1
{\displaystyle m>1}
, the function exhibits increasing returns to scale, and it exhibits decreasing returns to scale if
m
<
1
{\displaystyle m<1}
. If it is homogeneous of degree
1
{\displaystyle 1}
, it exhibits constant returns to scale. The presence of increasing returns means that a one percent increase in the usage levels of all inputs would result in a greater than one percent increase in output; the presence of decreasing returns means that it would result in a less than one percent increase in output. Constant returns to scale is the in-between case. In the Cobb–Douglas production function referred to above, returns to scale are increasing if
a
1
+
a
2
+
⋯
+
a
n
>
1
{\displaystyle a_{1}+a_{2}+\dotsb +a_{n}>1}
, decreasing if
a
1
+
a
2
+
⋯
+
a
n
<
1
{\displaystyle a_{1}+a_{2}+\dotsb +a_{n}<1}
, and constant if
a
1
+
a
2
+
⋯
+
a
n
=
1
{\displaystyle a_{1}+a_{2}+\dotsb +a_{n}=1}
.
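The homogeneity condition and the resulting returns to scale can be verified directly. The sketch below is illustrative: the exponent choices are arbitrary and are picked only to produce increasing, constant and decreasing returns to scale in turn.

```python
# Sketch: verify f(k*X) = k^m * f(X) for a Cobb-Douglas function with
# m = a1 + a2, and read off the returns to scale. Exponents are illustrative.
def cobb_douglas(X, a0, a):
    q = a0
    for ai, xi in zip(a, X):
        q *= xi ** ai
    return q

X, k = (3.0, 5.0), 2.0
for a in [(0.7, 0.6), (0.5, 0.5), (0.3, 0.4)]:   # m = 1.3, 1.0, 0.7
    m = sum(a)
    lhs = cobb_douglas([k * x for x in X], 1.0, a)
    rhs = k ** m * cobb_douglas(X, 1.0, a)
    scale = "increasing" if m > 1 else "constant" if m == 1 else "decreasing"
    print(f"m={m:.1f}  homogeneous: {abs(lhs - rhs) < 1e-9}  returns to scale: {scale}")
```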
If a production function is homogeneous of degree one, it is sometimes called "linearly homogeneous". A linearly homogeneous production function with inputs capital and labour has the properties that the marginal and average physical products of both capital and labour can be expressed as functions of the capital-labour ratio alone. Moreover, in this case, if each input is paid at a rate equal to its marginal product, the firm's revenues will be exactly exhausted and there will be no excess economic profit.: pp.412–414
Homothetic functions are functions whose marginal technical rate of substitution (the slope of the isoquant, a curve drawn through the set of points in say labour-capital space at which the same quantity of output is produced for varying combinations of the inputs) is homogeneous of degree zero. Due to this, along rays coming from the origin, the slopes of the isoquants will be the same. Homothetic functions are of the form
F
(
h
(
X
1
,
X
2
)
)
{\displaystyle F(h(X_{1},X_{2}))}
where
F
(
y
)
{\displaystyle F(y)}
is a monotonically increasing function (the derivative of
F
(
y
)
{\displaystyle F(y)}
is positive (
d
F
/
d
y
>
0
{\displaystyle \mathrm {d} F/\mathrm {d} y>0}
)), and the function
h
(
X
1
,
X
2
)
{\displaystyle h(X_{1},X_{2})}
is a homogeneous function of any degree.
=== Aggregate production functions ===
In macroeconomics, aggregate production functions for whole nations are sometimes constructed. In theory, they are the summation of all the production functions of individual producers; however there are methodological problems associated with aggregate production functions, and economists have debated extensively whether the concept is valid.
=== Criticisms of the production function theory ===
There are two major criticisms of the standard form of the production function.
==== On the concept of capital ====
During the 1950s, '60s, and '70s there was a lively debate about the theoretical soundness of production functions (see the Capital controversy). Although the criticism was directed primarily at aggregate production functions, microeconomic production functions were also put under scrutiny. The debate began in 1953 when Joan Robinson criticized the way the factor input capital was measured and how the notion of factor proportions had distracted economists. She wrote:
"The production function has been a powerful instrument of miseducation. The student of economic theory is taught to write
Q
=
f
(
L
,
K
)
{\displaystyle Q=f(L,K)}
where
L
{\displaystyle L}
is a quantity of labor,
K
{\displaystyle K}
a quantity of capital and
Q
{\displaystyle Q}
a rate of output of commodities. [They] are instructed to assume all workers alike, and to measure
L
{\displaystyle L}
in man-hours of labor; [they] are told something about the index-number problem in choosing a unit of output; and then [they] are hurried on to the next question, in the hope that [they] will forget to ask in what units K is measured. Before [they] ever do ask, [they] have become a professor, and so sloppy habits of thought are handed on from one generation to the next".
According to the argument, it is impossible to conceive of capital in such a way that its quantity is independent of the rates of interest and wages. The problem is that this independence is a precondition of constructing an isoquant. Further, the slope of the isoquant helps determine relative factor prices, but the curve cannot be constructed (and its slope measured) unless the prices are known beforehand.
==== On the empirical relevance ====
Despite the criticism of their weak theoretical grounds, it has been claimed that empirical results firmly support the use of neoclassical, well-behaved aggregate production functions. Nevertheless, Anwar Shaikh has demonstrated that they also have no empirical relevance, insofar as the alleged good fit comes from an accounting identity, not from any underlying laws of production/distribution.
==== Natural resources ====
Natural resources are usually absent in production functions. When Robert Solow and Joseph Stiglitz attempted to develop a more realistic production function by including natural resources, they did it in a manner economist Nicholas Georgescu-Roegen criticized as a "conjuring trick": Solow and Stiglitz had failed to take into account the laws of thermodynamics, since their variant allowed man-made capital to be a complete substitute for natural resources. Neither Solow nor Stiglitz reacted to Georgescu-Roegen's criticism, despite an invitation to do so in the September 1997 issue of the journal Ecological Economics.: 127–136
Georgescu-Roegen can be understood as criticizing Solow and Stiglitz's approach to mathematically modelling factors of production. We will use the example of energy to illustrate the strengths and weaknesses of the two approaches in question.
===== Independent factors of production =====
Robert Solow and Joseph Stiglitz describe an approach to modelling energy as a factor of production which assumes the following:
Labor, capital, energy input, and technical change (omitted below for brevity) are the only relevant factors of production,
The factors of production are independent of one another such that the production function takes the general form
Q
=
f
(
L
,
K
,
E
)
{\displaystyle Q=f(L,K,E)}
,
Labor, capital, and energy input only depend on time such that
K
=
K
(
t
)
,
L
=
L
(
t
)
,
E
=
E
(
t
)
{\displaystyle K=K(t),L=L(t),E=E(t)}
.
This approach yields an energy-dependent production function given as
Q
=
A
L
β
K
α
E
χ
{\displaystyle Q=AL^{\beta }K^{\alpha }E^{\chi }}
. However, as discussed in more-recent work, this approach does not accurately model the mechanism by which energy affects production processes. Consider the following cases which support the revision of the assumptions made by this model:
If workers at any stage of the production process rely on electricity to perform their jobs, a power outage would significantly reduce their maximum output, and a long-enough power outage would reduce their maximum output to zero. Therefore
L
{\displaystyle L}
should be modeled as depending directly on time-dependent energy input
E
(
t
)
{\displaystyle E(t)}
.
If there were a power outage, machines would not be able to run, and therefore their maximum output would be reduced to zero. Therefore
K
{\displaystyle K}
should be modeled as depending directly on time-dependent energy input
E
(
t
)
{\displaystyle E(t)}
.
This model has also been shown to predict a 28% decrease in output for a 99% decrease in energy, which further supports the revision of this model's assumptions. Note that, while inappropriate for energy, an "independent" modelling approach may be appropriate for modelling other natural resources such as land.
===== Inter-dependent factors of production =====
The "independent" energy-dependent production function can be revised by considering energy-dependent labor and capital input functions
L
=
L
(
E
(
t
)
)
{\displaystyle L=L(E(t))}
,
K
=
K
(
E
(
t
)
)
{\displaystyle K=K(E(t))}
. This approach yields an energy-dependent production function given generally as
Q
=
f
(
L
(
E
)
,
K
(
E
)
)
{\displaystyle Q=f(L(E),K(E))}
. Details related to the derivation of a specific functional form of this production function as well as empirical support for this form of the production function are discussed in more-recently published work. Note that similar arguments could be used to develop more-realistic production functions which consider other depletable natural resources beyond energy:
If a geographical region runs out of the natural resources required to produce a given machine or maintain existing machines and is unable to import more or recycle, the machines in that region will eventually fall into disrepair and the machines' maximum output would be reduced to near-zero. This should be modeled as significantly affecting the total output. Therefore,
K
{\displaystyle K}
should be modeled as depending directly on time-dependent natural resource input
N
(
t
)
{\displaystyle N(t)}
.
=== The practice of production functions ===
The theory of the production function depicts the relation between physical outputs of a production process and physical inputs, i.e. factors of production. The practical application of production functions is obtained by valuing the physical outputs and inputs by their prices. The economic value of physical outputs minus the economic value of physical inputs is the income generated by the production process. By keeping the prices fixed between two periods under review we get the income change generated by a change of the production function. This is the principle how the production function is made a practical concept, i.e. measureable and understandable in practical situations.
== See also ==
== References ==
=== Citations ===
=== Sources ===
== Further reading ==
Brems, Hans (1968). "The Production Function". Quantitative Economic Theory. New York: Wiley. pp. 62–74.
Craig, C.; Harris, R. (1973). "Total Productivity Measurement at the Firm Level". Sloan Management Review (Spring 1973): 13–28.
Guerrien B. and O. Gun (2015) "Putting an end to the aggregate function of production... forever?", Real World Economic Review N°73
Hulten, C. R. (January 2000). "Total Factor Productivity: A Short Biography". NBER Working Paper No. 7471. doi:10.3386/w7471.
Heathfield, D. F. (1971). Production Functions. Macmillan Studies in Economics. New York: Macmillan Press.
Intriligator, Michael D. (1971). Mathematical Optimalization and Economic Theory. Englewood Cliffs: Prentice-Hall. pp. 178–189. ISBN 0-13-561753-7.
Laidler, David (1981). Introduction to Microeconomics (Second ed.). Oxford: Philip Allan. pp. 124–137. ISBN 0-86003-131-4.
Maurice, S. Charles; Phillips, Owen R.; Ferguson, C. E. (1982). Economic Analysis: Theory and Application (Fourth ed.). Homewood: Irwin. pp. 169–222. ISBN 0-256-02614-9.
Moroney, J. R. (1967). "Cobb–Douglass production functions and returns to scale in US manufacturing industry". Western Economic Journal. 6 (1): 39–51. doi:10.1111/j.1465-7295.1967.tb01174.x.
Pearl, D.; Enos, J. (1975). "Engineering Production Functions and Technological Progress". Journal of Industrial Economics. 24 (1): 55–72. doi:10.2307/2098099. JSTOR 2098099.
Shephard, R. (1970). Theory of Cost and Production Functions. Princeton, NJ: Princeton University Press.
Thompson, A. (1981). Economics of the Firm: Theory and Practice (3rd ed.). Englewood Cliffs: Prentice Hall. ISBN 0-13-231423-1.
Sickles, R., & Zelenyuk, V. (2019). Measurement of Productivity and Efficiency: Theory and Practice. Cambridge: Cambridge University Press. https://assets.cambridge.org/97811070/36161/frontmatter/9781107036161_frontmatter.pdf
== External links ==
A further description of production functions
Anatomy of Cobb–Douglas Type Production Functions in 3D
Anatomy of CES Type Production Functions in 3D | Wikipedia/Production_function |
In calculus, Taylor's theorem gives an approximation of a
k
{\textstyle k}
-times differentiable function around a given point by a polynomial of degree
k
{\textstyle k}
, called the
k
{\textstyle k}
-th-order Taylor polynomial. For a smooth function, the Taylor polynomial is the truncation at the order
k
{\textstyle k}
of the Taylor series of the function. The first-order Taylor polynomial is the linear approximation of the function, and the second-order Taylor polynomial is often referred to as the quadratic approximation. There are several versions of Taylor's theorem, some giving explicit estimates of the approximation error of the function by its Taylor polynomial.
Taylor's theorem is named after the mathematician Brook Taylor, who stated a version of it in 1715, although an earlier version of the result was already mentioned in 1671 by James Gregory.
Taylor's theorem is taught in introductory-level calculus courses and is one of the central elementary tools in mathematical analysis. It gives simple arithmetic formulas to accurately compute values of many transcendental functions such as the exponential function and trigonometric functions.
It is the starting point of the study of analytic functions, and is fundamental in various areas of mathematics, as well as in numerical analysis and mathematical physics. Taylor's theorem also generalizes to multivariate and vector valued functions. It provided the mathematical basis for some landmark early computing machines: Charles Babbage's difference engine calculated sines, cosines, logarithms, and other transcendental functions by numerically integrating the first 7 terms of their Taylor series.
== Motivation ==
If a real-valued function
f
(
x
)
{\textstyle f(x)}
is differentiable at the point
x
=
a
{\textstyle x=a}
, then it has a linear approximation near this point. This means that there exists a function h1(x) such that
f
(
x
)
=
f
(
a
)
+
f
′
(
a
)
(
x
−
a
)
+
h
1
(
x
)
(
x
−
a
)
,
lim
x
→
a
h
1
(
x
)
=
0.
{\displaystyle f(x)=f(a)+f'(a)(x-a)+h_{1}(x)(x-a),\quad \lim _{x\to a}h_{1}(x)=0.}
Here
P
1
(
x
)
=
f
(
a
)
+
f
′
(
a
)
(
x
−
a
)
{\displaystyle P_{1}(x)=f(a)+f'(a)(x-a)}
is the linear approximation of
f
(
x
)
{\textstyle f(x)}
for x near the point a, whose graph
y
=
P
1
(
x
)
{\textstyle y=P_{1}(x)}
is the tangent line to the graph
y
=
f
(
x
)
{\textstyle y=f(x)}
at x = a. The error in the approximation is:
R
1
(
x
)
=
f
(
x
)
−
P
1
(
x
)
=
h
1
(
x
)
(
x
−
a
)
.
{\displaystyle R_{1}(x)=f(x)-P_{1}(x)=h_{1}(x)(x-a).}
As x tends to a, this error goes to zero much faster than
(
x
−
a
)
{\displaystyle (x-a)}
, making
f
(
x
)
≈
P
1
(
x
)
{\displaystyle f(x)\approx P_{1}(x)}
a useful approximation.
For a better approximation to
f
(
x
)
{\textstyle f(x)}
, we can fit a quadratic polynomial instead of a linear function:
P
2
(
x
)
=
f
(
a
)
+
f
′
(
a
)
(
x
−
a
)
+
f
″
(
a
)
2
(
x
−
a
)
2
.
{\displaystyle P_{2}(x)=f(a)+f'(a)(x-a)+{\frac {f''(a)}{2}}(x-a)^{2}.}
Instead of just matching one derivative of
f
(
x
)
{\textstyle f(x)}
at
x
=
a
{\textstyle x=a}
, this polynomial has the same first and second derivatives, as is evident upon differentiation.
Taylor's theorem ensures that the quadratic approximation is, in a sufficiently small neighborhood of
x
=
a
{\textstyle x=a}
, more accurate than the linear approximation. Specifically,
f
(
x
)
=
P
2
(
x
)
+
h
2
(
x
)
(
x
−
a
)
2
,
lim
x
→
a
h
2
(
x
)
=
0.
{\displaystyle f(x)=P_{2}(x)+h_{2}(x)(x-a)^{2},\quad \lim _{x\to a}h_{2}(x)=0.}
Here the error in the approximation is
R
2
(
x
)
=
f
(
x
)
−
P
2
(
x
)
=
h
2
(
x
)
(
x
−
a
)
2
,
{\displaystyle R_{2}(x)=f(x)-P_{2}(x)=h_{2}(x)(x-a)^{2},}
which, given the limiting behavior of
h
2
{\displaystyle h_{2}}
, goes to zero faster than
(
x
−
a
)
2
{\displaystyle (x-a)^{2}}
as x tends to a.
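The faster decay of the quadratic remainder can be seen numerically. The sketch below is an illustration for one particular function (the exponential, expanded around a = 0); the choice of function and evaluation points is arbitrary.

```python
# Sketch: for f = exp at a = 0, P1(x) = 1 + x and P2(x) = 1 + x + x^2/2.
# The errors R1 = f - P1 and R2 = f - P2 shrink like x^2 and x^3 respectively.
import math

f = math.exp
P1 = lambda x: 1.0 + x
P2 = lambda x: 1.0 + x + 0.5 * x * x

for x in (0.5, 0.1, 0.01):
    r1, r2 = f(x) - P1(x), f(x) - P2(x)
    print(f"x={x:<5} R1={r1:.2e} R1/x^2={r1 / x**2:.3f}   "
          f"R2={r2:.2e} R2/x^3={r2 / x**3:.3f}")
# R1/x^2 tends to f''(0)/2 = 0.5 and R2/x^3 tends to f'''(0)/6 ~ 0.167 as x -> 0.
```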
Similarly, we might get still better approximations to f if we use polynomials of higher degree, since then we can match even more derivatives with f at the selected base point.
In general, the error in approximating a function by a polynomial of degree k will go to zero much faster than
(
x
−
a
)
k
{\displaystyle (x-a)^{k}}
as x tends to a. However, there are functions, even infinitely differentiable ones, for which increasing the degree of the approximating polynomial does not increase the accuracy of approximation: we say such a function fails to be analytic at x = a: it is not (locally) determined by its derivatives at this point.
Taylor's theorem is of asymptotic nature: it only tells us that the error
R
k
{\textstyle R_{k}}
in an approximation by a
k
{\textstyle k}
-th order Taylor polynomial Pk tends to zero faster than any nonzero
k
{\textstyle k}
-th degree polynomial as
x
→
a
{\textstyle x\to a}
. It does not tell us how large the error is in any concrete neighborhood of the center of expansion, but for this purpose there are explicit formulas for the remainder term (given below) which are valid under some additional regularity assumptions on f. These enhanced versions of Taylor's theorem typically lead to uniform estimates for the approximation error in a small neighborhood of the center of expansion, but the estimates do not necessarily hold for neighborhoods which are too large, even if the function f is analytic. In that situation one may have to select several Taylor polynomials with different centers of expansion to have reliable Taylor-approximations of the original function.
There are several ways we might use the remainder term:
Estimate the error for a polynomial Pk(x) of degree k estimating
f
(
x
)
{\textstyle f(x)}
on a given interval (a – r, a + r). (Given the interval and degree, we find the error.)
Find the smallest degree k for which the polynomial Pk(x) approximates
f
(
x
)
{\textstyle f(x)}
to within a given error tolerance on a given interval (a − r, a + r) . (Given the interval and error tolerance, we find the degree.)
Find the largest interval (a − r, a + r) on which Pk(x) approximates
f
(
x
)
{\textstyle f(x)}
to within a given error tolerance. (Given the degree and error tolerance, we find the interval.)
== Taylor's theorem in one real variable ==
=== Statement of the theorem ===
The precise statement of the most basic version of Taylor's theorem is as follows: if k ≥ 1 is an integer and the function f : R → R is k times differentiable at the point a ∈ R, then there exists a function hk : R → R such that f(x) = Pk(x) + hk(x)(x − a)^k and lim x→a hk(x) = 0, where Pk is the Taylor polynomial defined next.
The polynomial appearing in Taylor's theorem is the
k
{\textstyle {\boldsymbol {k}}}
-th order Taylor polynomial
P
k
(
x
)
=
f
(
a
)
+
f
′
(
a
)
(
x
−
a
)
+
f
″
(
a
)
2
!
(
x
−
a
)
2
+
⋯
+
f
(
k
)
(
a
)
k
!
(
x
−
a
)
k
{\displaystyle P_{k}(x)=f(a)+f'(a)(x-a)+{\frac {f''(a)}{2!}}(x-a)^{2}+\cdots +{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}}
of the function f at the point a. The Taylor polynomial is the unique "asymptotic best fit" polynomial in the sense that if there exists a function hk : R → R and a
k
{\textstyle k}
-th order polynomial p such that
f
(
x
)
=
p
(
x
)
+
h
k
(
x
)
(
x
−
a
)
k
,
lim
x
→
a
h
k
(
x
)
=
0
,
{\displaystyle f(x)=p(x)+h_{k}(x)(x-a)^{k},\quad \lim _{x\to a}h_{k}(x)=0,}
then p = Pk. Taylor's theorem describes the asymptotic behavior of the remainder term
R
k
(
x
)
=
f
(
x
)
−
P
k
(
x
)
,
{\displaystyle R_{k}(x)=f(x)-P_{k}(x),}
which is the approximation error when approximating f with its Taylor polynomial. Using the little-o notation, the statement in Taylor's theorem reads as
R
k
(
x
)
=
o
(
|
x
−
a
|
k
)
,
x
→
a
.
{\displaystyle R_{k}(x)=o(|x-a|^{k}),\quad x\to a.}
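The little-o statement can be checked symbolically. The following sketch (illustrative only; sympy and the test function f(x) = e^(sin x) with a = 0 and k = 4 are assumptions of the example) builds Pk from the derivatives at a and confirms that Rk(x)/(x − a)^k tends to 0:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(sp.sin(x))          # any sufficiently smooth test function
a, k = 0, 4

# k-th order Taylor polynomial P_k(x) = sum_{j<=k} f^(j)(a)/j! * (x-a)^j
Pk = sum(f.diff(x, j).subs(x, a) / sp.factorial(j) * (x - a)**j for j in range(k + 1))
Rk = sp.simplify(f - Pk)

# The remainder is o(|x - a|^k):  R_k(x)/(x - a)^k -> 0 as x -> a
print(sp.limit(Rk / (x - a)**k, x, a))   # prints 0
```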
=== Explicit formulas for the remainder ===
Under stronger regularity assumptions on f there are several precise formulas for the remainder term Rk of the Taylor polynomial, the most common ones being the following.
The Lagrange form of the remainder states that Rk(x) = f^{(k+1)}(ξ)(x − a)^{k+1}/(k + 1)! for some ξ strictly between a and x; the Cauchy form states that Rk(x) = f^{(k+1)}(ξ)(x − ξ)^{k}(x − a)/k!; and the integral form states that Rk(x) = ∫_a^x f^{(k+1)}(t)(x − t)^{k}/k! dt, provided f^{(k)} is absolutely continuous between a and x. These refinements of Taylor's theorem are usually proved using the mean value theorem, whence the name mean-value forms of the remainder. Additionally, notice that this is precisely the mean value theorem when
k
=
0
{\textstyle k=0}
. Also other similar expressions can be found. For example, if G(t) is continuous on the closed interval and differentiable with a non-vanishing derivative on the open interval between
a
{\textstyle a}
and
x
{\textstyle x}
, then
R
k
(
x
)
=
f
(
k
+
1
)
(
ξ
)
k
!
(
x
−
ξ
)
k
G
(
x
)
−
G
(
a
)
G
′
(
ξ
)
{\displaystyle R_{k}(x)={\frac {f^{(k+1)}(\xi )}{k!}}(x-\xi )^{k}{\frac {G(x)-G(a)}{G'(\xi )}}}
for some number
ξ
{\textstyle \xi }
between
a
{\textstyle a}
and
x
{\textstyle x}
. This version covers the Lagrange and Cauchy forms of the remainder as special cases, and is proved below using Cauchy's mean value theorem. The Lagrange form is obtained by taking
G
(
t
)
=
(
x
−
t
)
k
+
1
{\displaystyle G(t)=(x-t)^{k+1}}
and the Cauchy form is obtained by taking
G
(
t
)
=
t
−
a
{\displaystyle G(t)=t-a}
.
The statement for the integral form of the remainder is more advanced than the previous ones, and requires understanding of Lebesgue integration theory for the full generality. However, it holds also in the sense of Riemann integral provided the (k + 1)th derivative of f is continuous on the closed interval [a,x].
Due to the absolute continuity of f(k) on the closed interval between
a
{\textstyle a}
and
x
{\textstyle x}
, its derivative f(k+1) exists as an L1-function, and the result can be proven by a formal calculation using the fundamental theorem of calculus and integration by parts.
=== Estimates for the remainder ===
It is often useful in practice to be able to estimate the remainder term appearing in the Taylor approximation, rather than having an exact formula for it. Suppose that f is (k + 1)-times continuously differentiable in an interval I containing a. Suppose that there are real constants q and Q such that
q
≤
f
(
k
+
1
)
(
x
)
≤
Q
{\displaystyle q\leq f^{(k+1)}(x)\leq Q}
throughout I. Then the remainder term satisfies the inequality
q
(
x
−
a
)
k
+
1
(
k
+
1
)
!
≤
R
k
(
x
)
≤
Q
(
x
−
a
)
k
+
1
(
k
+
1
)
!
,
{\displaystyle q{\frac {(x-a)^{k+1}}{(k+1)!}}\leq R_{k}(x)\leq Q{\frac {(x-a)^{k+1}}{(k+1)!}},}
if x > a, and a similar estimate if x < a. This is a simple consequence of the Lagrange form of the remainder. In particular, if
|
f
(
k
+
1
)
(
x
)
|
≤
M
{\displaystyle |f^{(k+1)}(x)|\leq M}
on an interval I = (a − r,a + r) with some
r
>
0
{\displaystyle r>0}
, then
|
R
k
(
x
)
|
≤
M
|
x
−
a
|
k
+
1
(
k
+
1
)
!
≤
M
r
k
+
1
(
k
+
1
)
!
{\displaystyle |R_{k}(x)|\leq M{\frac {|x-a|^{k+1}}{(k+1)!}}\leq M{\frac {r^{k+1}}{(k+1)!}}}
for all x∈(a − r,a + r). The second inequality is called a uniform estimate, because it holds uniformly for all x on the interval (a − r,a + r).
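For instance (a hedged numerical sketch; the choices f = sin, a = 0, k = 3, r = 1, and M = 1 are illustrative), the uniform bound M r^{k+1}/(k + 1)! can be compared with the actual worst-case error on the interval:

```python
import math

# f = sin, a = 0, k = 3:  |f^{(4)}| = |sin| <= 1, so M = 1 works on any interval
a, k, r, M = 0.0, 3, 1.0, 1.0
P3 = lambda x: x - x**3 / 6                      # degree-3 Taylor polynomial of sin at 0

uniform_bound = M * r**(k + 1) / math.factorial(k + 1)
worst_error = max(abs(math.sin(a + t) - P3(a + t))
                  for t in [i / 1000 * 2 * r - r for i in range(1001)])

print(uniform_bound)   # 1/24, about 0.0417
print(worst_error)     # about 0.0081, safely below the uniform bound
```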
=== Example ===
Suppose that we wish to find the approximate value of the function
f
(
x
)
=
e
x
{\textstyle f(x)=e^{x}}
on the interval
[
−
1
,
1
]
{\textstyle [-1,1]}
while ensuring that the error in the approximation is no more than 10−5. In this example we pretend that we only know the following properties of the exponential function: (★) e^0 = 1, (e^x)′ = e^x, and e^x > 0 for every real x; in particular, e^x is increasing.
From these properties it follows that
f
(
k
)
(
x
)
=
e
x
{\textstyle f^{(k)}(x)=e^{x}}
for all
k
{\textstyle k}
, and in particular,
f
(
k
)
(
0
)
=
1
{\textstyle f^{(k)}(0)=1}
. Hence the
k
{\textstyle k}
-th order Taylor polynomial of
f
{\textstyle f}
at
0
{\textstyle 0}
and its remainder term in the Lagrange form are given by
P
k
(
x
)
=
1
+
x
+
x
2
2
!
+
⋯
+
x
k
k
!
,
R
k
(
x
)
=
e
ξ
(
k
+
1
)
!
x
k
+
1
,
{\displaystyle P_{k}(x)=1+x+{\frac {x^{2}}{2!}}+\cdots +{\frac {x^{k}}{k!}},\qquad R_{k}(x)={\frac {e^{\xi }}{(k+1)!}}x^{k+1},}
where
ξ
{\textstyle \xi }
is some number between 0 and x. Since ex is increasing by (★), we can simply use
e
x
≤
1
{\textstyle e^{x}\leq 1}
for
x
∈
[
−
1
,
0
]
{\textstyle x\in [-1,0]}
to estimate the remainder on the subinterval
[
−
1
,
0
]
{\displaystyle [-1,0]}
. To obtain an upper bound for the remainder on
[
0
,
1
]
{\displaystyle [0,1]}
, we use the property
e
ξ
<
e
x
{\textstyle e^{\xi }<e^{x}}
for
0
<
ξ
<
x
{\textstyle 0<\xi <x}
to estimate
e
x
=
1
+
x
+
e
ξ
2
x
2
<
1
+
x
+
e
x
2
x
2
,
0
<
x
≤
1
{\displaystyle e^{x}=1+x+{\frac {e^{\xi }}{2}}x^{2}<1+x+{\frac {e^{x}}{2}}x^{2},\qquad 0<x\leq 1}
using the second order Taylor expansion. Then we solve for ex to deduce that
e
x
≤
1
+
x
1
−
x
2
2
=
2
1
+
x
2
−
x
2
≤
4
,
0
≤
x
≤
1
{\displaystyle e^{x}\leq {\frac {1+x}{1-{\frac {x^{2}}{2}}}}=2{\frac {1+x}{2-x^{2}}}\leq 4,\qquad 0\leq x\leq 1}
simply by maximizing the numerator and minimizing the denominator. Combining these estimates for ex we see that
|
R
k
(
x
)
|
≤
4
|
x
|
k
+
1
(
k
+
1
)
!
≤
4
(
k
+
1
)
!
,
−
1
≤
x
≤
1
,
{\displaystyle |R_{k}(x)|\leq {\frac {4|x|^{k+1}}{(k+1)!}}\leq {\frac {4}{(k+1)!}},\qquad -1\leq x\leq 1,}
so the required precision is certainly reached, when
4
(
k
+
1
)
!
<
10
−
5
⟺
4
⋅
10
5
<
(
k
+
1
)
!
⟺
k
≥
9.
{\displaystyle {\frac {4}{(k+1)!}}<10^{-5}\quad \Longleftrightarrow \quad 4\cdot 10^{5}<(k+1)!\quad \Longleftrightarrow \quad k\geq 9.}
(See factorial or compute by hand the values
9
!
=
362880
{\textstyle 9!=362880}
and
10
!
=
3628800
{\textstyle 10!=3628800}
.) As a conclusion, Taylor's theorem leads to the approximation
e
x
=
1
+
x
+
x
2
2
!
+
⋯
+
x
9
9
!
+
R
9
(
x
)
,
|
R
9
(
x
)
|
<
10
−
5
,
−
1
≤
x
≤
1.
{\displaystyle e^{x}=1+x+{\frac {x^{2}}{2!}}+\cdots +{\frac {x^{9}}{9!}}+R_{9}(x),\qquad |R_{9}(x)|<10^{-5},\qquad -1\leq x\leq 1.}
For instance, this approximation provides a decimal expression
e
≈
2.71828
{\displaystyle e\approx 2.71828}
, correct up to five decimal places.
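The calculation can be replayed numerically (a sketch, not part of the article; Python's math.exp is used only as the reference value for the error):

```python
import math

def P9(x):
    # degree-9 Taylor polynomial of exp at 0: 1 + x + x^2/2! + ... + x^9/9!
    return sum(x**k / math.factorial(k) for k in range(10))

worst = max(abs(math.exp(x) - P9(x))
            for x in [i / 1000 * 2 - 1 for i in range(1001)])
print(worst)            # roughly 3e-7, comfortably below the 1e-5 tolerance
print(P9(1.0))          # roughly 2.7182815, so e is recovered to five decimal places
```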
== Relationship to analyticity ==
=== Taylor expansions of real analytic functions ===
Let I ⊂ R be an open interval. By definition, a function f : I → R is real analytic if it is locally defined by a convergent power series. This means that for every a ∈ I there exists some r > 0 and a sequence of coefficients ck ∈ R such that (a − r, a + r) ⊂ I and
f
(
x
)
=
∑
k
=
0
∞
c
k
(
x
−
a
)
k
=
c
0
+
c
1
(
x
−
a
)
+
c
2
(
x
−
a
)
2
+
⋯
,
|
x
−
a
|
<
r
.
{\displaystyle f(x)=\sum _{k=0}^{\infty }c_{k}(x-a)^{k}=c_{0}+c_{1}(x-a)+c_{2}(x-a)^{2}+\cdots ,\qquad |x-a|<r.}
In general, the radius of convergence of a power series can be computed from the Cauchy–Hadamard formula
1
R
=
lim sup
k
→
∞
|
c
k
|
1
k
.
{\displaystyle {\frac {1}{R}}=\limsup _{k\to \infty }|c_{k}|^{\frac {1}{k}}.}
This result is based on comparison with a geometric series, and the same method shows that if the power series based on a converges for some b ∈ R, it must converge uniformly on the closed interval
[
a
−
r
b
,
a
+
r
b
]
{\textstyle [a-r_{b},a+r_{b}]}
, where
r
b
=
|
b
−
a
|
{\textstyle r_{b}=\left\vert b-a\right\vert }
. Here only the convergence of the power series is considered, and it might well be that (a − R,a + R) extends beyond the domain I of f.
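A small numerical sketch of the Cauchy–Hadamard formula (the coefficient sequence below is an invented example chosen so that the limit superior is easy to see):

```python
# Cauchy-Hadamard: 1/R = limsup_{k -> infinity} |c_k|^(1/k).
# Illustrative coefficients: c_k = 2**k + (-1)**k, so the limsup is 2 and R = 1/2.
coeffs = {k: 2**k + (-1)**k for k in range(1, 61)}
root_test = [abs(c)**(1.0 / k) for k, c in coeffs.items()]
print(max(root_test[-10:]))   # close to 2.0, hence radius of convergence R close to 0.5
```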
The Taylor polynomials of the real analytic function f at a are simply the finite truncations
P
k
(
x
)
=
∑
j
=
0
k
c
j
(
x
−
a
)
j
,
c
j
=
f
(
j
)
(
a
)
j
!
{\displaystyle P_{k}(x)=\sum _{j=0}^{k}c_{j}(x-a)^{j},\qquad c_{j}={\frac {f^{(j)}(a)}{j!}}}
of its locally defining power series, and the corresponding remainder terms are locally given by the analytic functions
R
k
(
x
)
=
∑
j
=
k
+
1
∞
c
j
(
x
−
a
)
j
=
(
x
−
a
)
k
h
k
(
x
)
,
|
x
−
a
|
<
r
.
{\displaystyle R_{k}(x)=\sum _{j=k+1}^{\infty }c_{j}(x-a)^{j}=(x-a)^{k}h_{k}(x),\qquad |x-a|<r.}
Here the functions
h
k
:
(
a
−
r
,
a
+
r
)
→
R
h
k
(
x
)
=
(
x
−
a
)
∑
j
=
0
∞
c
k
+
1
+
j
(
x
−
a
)
j
{\displaystyle {\begin{aligned}&h_{k}:(a-r,a+r)\to \mathbb {R} \\[1ex]&h_{k}(x)=(x-a)\sum _{j=0}^{\infty }c_{k+1+j}\left(x-a\right)^{j}\end{aligned}}}
are also analytic, since their defining power series have the same radius of convergence as the original series. Assuming that [a − r, a + r] ⊂ I and r < R, all these series converge uniformly on (a − r, a + r). Naturally, in the case of analytic functions one can estimate the remainder term
R
k
(
x
)
{\textstyle R_{k}(x)}
by the tail of the sequence of the derivatives f′(a) at the center of the expansion, but using complex analysis also another possibility arises, which is described below.
=== Taylor's theorem and convergence of Taylor series ===
The Taylor series of f will converge in some interval in which all its derivatives are bounded and do not grow too fast as k goes to infinity. (However, even if the Taylor series converges, it might not converge to f, as explained below; f is then said to be non-analytic.)
One might think of the Taylor series
f
(
x
)
≈
∑
k
=
0
∞
c
k
(
x
−
a
)
k
=
c
0
+
c
1
(
x
−
a
)
+
c
2
(
x
−
a
)
2
+
⋯
{\displaystyle f(x)\approx \sum _{k=0}^{\infty }c_{k}(x-a)^{k}=c_{0}+c_{1}(x-a)+c_{2}(x-a)^{2}+\cdots }
of an infinitely many times differentiable function f : R → R as its "infinite order Taylor polynomial" at a. Now the estimates for the remainder imply that if, for any r, the derivatives of f are known to be bounded over (a − r, a + r), then for any order k and for any r > 0 there exists a constant Mk,r > 0 such that (★★)   |Rk(x)| ≤ Mk,r r^{k+1} / (k + 1)!
for every x ∈ (a − r,a + r). Sometimes the constants Mk,r can be chosen in such a way that Mk,r is bounded above, for fixed r and all k. Then the Taylor series of f converges uniformly to some analytic function
T
f
:
(
a
−
r
,
a
+
r
)
→
R
T
f
(
x
)
=
∑
k
=
0
∞
f
(
k
)
(
a
)
k
!
(
x
−
a
)
k
{\displaystyle {\begin{aligned}&T_{f}:(a-r,a+r)\to \mathbb {R} \\&T_{f}(x)=\sum _{k=0}^{\infty }{\frac {f^{(k)}(a)}{k!}}\left(x-a\right)^{k}\end{aligned}}}
(One also gets convergence even if Mk,r is not bounded above as long as it grows slowly enough.)
The limit function Tf is by definition always analytic, but it is not necessarily equal to the original function f, even if f is infinitely differentiable. In this case, we say f is a non-analytic smooth function, for example a flat function:
f
:
R
→
R
f
(
x
)
=
{
e
−
1
x
2
x
>
0
0
x
≤
0.
{\displaystyle {\begin{aligned}&f:\mathbb {R} \to \mathbb {R} \\&f(x)={\begin{cases}e^{-{\frac {1}{x^{2}}}}&x>0\\0&x\leq 0.\end{cases}}\end{aligned}}}
Using the chain rule repeatedly by mathematical induction, one shows that for any order k,
f
(
k
)
(
x
)
=
{
p
k
(
x
)
x
3
k
⋅
e
−
1
x
2
x
>
0
0
x
≤
0
{\displaystyle f^{(k)}(x)={\begin{cases}{\frac {p_{k}(x)}{x^{3k}}}\cdot e^{-{\frac {1}{x^{2}}}}&x>0\\0&x\leq 0\end{cases}}}
for some polynomial pk of degree 2(k − 1). The function
e
−
1
x
2
{\displaystyle e^{-{\frac {1}{x^{2}}}}}
tends to zero faster than any polynomial as
x
→
0
{\textstyle x\to 0}
, so f is infinitely many times differentiable and f(k)(0) = 0 for every positive integer k. The above results all hold in this case:
The Taylor series of f converges uniformly to the zero function Tf(x) = 0, which is analytic with all coefficients equal to zero.
The function f is unequal to this Taylor series, and hence non-analytic.
For any order k ∈ N and radius r > 0 there exists Mk,r > 0 satisfying the remainder bound (★★) above.
However, as k increases for fixed r, the value of Mk,r grows more quickly than r^k, and the error does not go to zero.
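A brief sketch of this phenomenon (illustrative only): every Taylor polynomial of the flat function at 0 is identically zero, while the function itself is positive for x > 0:

```python
import math

def f(x):
    # the flat function: exp(-1/x**2) for x > 0 and 0 for x <= 0
    return math.exp(-1.0 / (x * x)) if x > 0 else 0.0

# Every derivative of f at 0 is 0, so each Taylor polynomial at 0 is P_k(x) = 0
# and the Taylor series T_f is the zero function, yet f itself is not:
for x in (0.1, 0.5, 1.0):
    print(x, f(x))   # nonzero for every x > 0, so f is not analytic at 0
```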
=== Taylor's theorem in complex analysis ===
Taylor's theorem generalizes to functions f : C → C which are complex differentiable in an open subset U ⊂ C of the complex plane. However, its usefulness is dwarfed by other general theorems in complex analysis. Namely, stronger versions of related results can be deduced for complex differentiable functions f : U → C using Cauchy's integral formula as follows.
Let r > 0 such that the closed disk B(z, r) ∪ S(z, r) is contained in U. Then Cauchy's integral formula with a positive parametrization γ(t) = z + reit of the circle S(z, r) with
t
∈
[
0
,
2
π
]
{\displaystyle t\in [0,2\pi ]}
gives
f
(
z
)
=
1
2
π
i
∫
γ
f
(
w
)
w
−
z
d
w
,
f
′
(
z
)
=
1
2
π
i
∫
γ
f
(
w
)
(
w
−
z
)
2
d
w
,
…
,
f
(
k
)
(
z
)
=
k
!
2
π
i
∫
γ
f
(
w
)
(
w
−
z
)
k
+
1
d
w
.
{\displaystyle f(z)={\frac {1}{2\pi i}}\int _{\gamma }{\frac {f(w)}{w-z}}\,dw,\quad f'(z)={\frac {1}{2\pi i}}\int _{\gamma }{\frac {f(w)}{(w-z)^{2}}}\,dw,\quad \ldots ,\quad f^{(k)}(z)={\frac {k!}{2\pi i}}\int _{\gamma }{\frac {f(w)}{(w-z)^{k+1}}}\,dw.}
Here all the integrands are continuous on the circle S(z, r), which justifies differentiation under the integral sign. In particular, if f is once complex differentiable on the open set U, then it is actually infinitely many times complex differentiable on U. One also obtains Cauchy's estimate
|
f
(
k
)
(
z
)
|
≤
k
!
2
π
∫
γ
M
r
|
w
−
z
|
k
+
1
d
w
=
k
!
M
r
r
k
,
M
r
=
max
|
w
−
c
|
=
r
|
f
(
w
)
|
{\displaystyle |f^{(k)}(z)|\leq {\frac {k!}{2\pi }}\int _{\gamma }{\frac {M_{r}}{|w-z|^{k+1}}}\,dw={\frac {k!M_{r}}{r^{k}}},\quad M_{r}=\max _{|w-c|=r}|f(w)|}
for any z ∈ U and r > 0 such that B(z, r) ∪ S(z, r) ⊂ U. The estimate implies that the complex Taylor series
T
f
(
z
)
=
∑
k
=
0
∞
f
(
k
)
(
c
)
k
!
(
z
−
c
)
k
{\displaystyle T_{f}(z)=\sum _{k=0}^{\infty }{\frac {f^{(k)}(c)}{k!}}(z-c)^{k}}
of f converges uniformly on any open disk
B
(
c
,
r
)
⊂
U
{\textstyle B(c,r)\subset U}
with
S
(
c
,
r
)
⊂
U
{\textstyle S(c,r)\subset U}
into some function Tf. Furthermore, using the contour integral formulas for the derivatives f(k)(c),
T
f
(
z
)
=
∑
k
=
0
∞
(
z
−
c
)
k
2
π
i
∫
γ
f
(
w
)
(
w
−
c
)
k
+
1
d
w
=
1
2
π
i
∫
γ
f
(
w
)
w
−
c
∑
k
=
0
∞
(
z
−
c
w
−
c
)
k
d
w
=
1
2
π
i
∫
γ
f
(
w
)
w
−
c
(
1
1
−
z
−
c
w
−
c
)
d
w
=
1
2
π
i
∫
γ
f
(
w
)
w
−
z
d
w
=
f
(
z
)
,
{\displaystyle {\begin{aligned}T_{f}(z)&=\sum _{k=0}^{\infty }{\frac {(z-c)^{k}}{2\pi i}}\int _{\gamma }{\frac {f(w)}{(w-c)^{k+1}}}\,dw\\&={\frac {1}{2\pi i}}\int _{\gamma }{\frac {f(w)}{w-c}}\sum _{k=0}^{\infty }\left({\frac {z-c}{w-c}}\right)^{k}\,dw\\&={\frac {1}{2\pi i}}\int _{\gamma }{\frac {f(w)}{w-c}}\left({\frac {1}{1-{\frac {z-c}{w-c}}}}\right)\,dw\\&={\frac {1}{2\pi i}}\int _{\gamma }{\frac {f(w)}{w-z}}\,dw\\&=f(z),\end{aligned}}}
so any complex differentiable function f in an open set U ⊂ C is in fact complex analytic. All that is said for real analytic functions here holds also for complex analytic functions with the open interval I replaced by an open subset U ⊂ C and a-centered intervals (a − r, a + r) replaced by c-centered disks B(c, r). In particular, the Taylor expansion holds in the form
f
(
z
)
=
P
k
(
z
)
+
R
k
(
z
)
,
P
k
(
z
)
=
∑
j
=
0
k
f
(
j
)
(
c
)
j
!
(
z
−
c
)
j
,
{\displaystyle f(z)=P_{k}(z)+R_{k}(z),\quad P_{k}(z)=\sum _{j=0}^{k}{\frac {f^{(j)}(c)}{j!}}(z-c)^{j},}
where the remainder term Rk is complex analytic. Methods of complex analysis provide some powerful results regarding Taylor expansions. For example, using Cauchy's integral formula for any positively oriented Jordan curve
γ
{\textstyle \gamma }
which parametrizes the boundary
∂
W
⊂
U
{\textstyle \partial W\subset U}
of a region
W
⊂
U
{\textstyle W\subset U}
, one obtains expressions for the derivatives f(j)(c) as above, and modifying slightly the computation for Tf(z) = f(z), one arrives at the exact formula
R
k
(
z
)
=
∑
j
=
k
+
1
∞
(
z
−
c
)
j
2
π
i
∫
γ
f
(
w
)
(
w
−
c
)
j
+
1
d
w
=
(
z
−
c
)
k
+
1
2
π
i
∫
γ
f
(
w
)
d
w
(
w
−
c
)
k
+
1
(
w
−
z
)
,
z
∈
W
.
{\displaystyle R_{k}(z)=\sum _{j=k+1}^{\infty }{\frac {(z-c)^{j}}{2\pi i}}\int _{\gamma }{\frac {f(w)}{(w-c)^{j+1}}}\,dw={\frac {(z-c)^{k+1}}{2\pi i}}\int _{\gamma }{\frac {f(w)\,dw}{(w-c)^{k+1}(w-z)}},\qquad z\in W.}
The important feature here is that the quality of the approximation by a Taylor polynomial on the region
W
⊂
U
{\textstyle W\subset U}
is dominated by the values of the function f itself on the boundary
∂
W
⊂
U
{\textstyle \partial W\subset U}
. Similarly, applying Cauchy's estimates to the series expression for the remainder, one obtains the uniform estimates
|
R
k
(
z
)
|
≤
∑
j
=
k
+
1
∞
M
r
|
z
−
c
|
j
r
j
=
M
r
r
k
+
1
|
z
−
c
|
k
+
1
1
−
|
z
−
c
|
r
≤
M
r
β
k
+
1
1
−
β
,
|
z
−
c
|
r
≤
β
<
1.
{\displaystyle |R_{k}(z)|\leq \sum _{j=k+1}^{\infty }{\frac {M_{r}|z-c|^{j}}{r^{j}}}={\frac {M_{r}}{r^{k+1}}}{\frac {|z-c|^{k+1}}{1-{\frac {|z-c|}{r}}}}\leq {\frac {M_{r}\beta ^{k+1}}{1-\beta }},\qquad {\frac {|z-c|}{r}}\leq \beta <1.}
=== Example ===
The function
f
:
R
→
R
f
(
x
)
=
1
1
+
x
2
{\displaystyle {\begin{aligned}&f:\mathbb {R} \to \mathbb {R} \\&f(x)={\frac {1}{1+x^{2}}}\end{aligned}}}
is real analytic, that is, locally determined by its Taylor series. This function illustrates the fact that some elementary functions cannot be approximated by Taylor polynomials in neighborhoods of the center of expansion which are too large. This kind of behavior is easily understood in the framework of complex analysis. Namely, the function f extends into a meromorphic function
f
:
C
∪
{
∞
}
→
C
∪
{
∞
}
f
(
z
)
=
1
1
+
z
2
{\displaystyle {\begin{aligned}&f:\mathbb {C} \cup \{\infty \}\to \mathbb {C} \cup \{\infty \}\\&f(z)={\frac {1}{1+z^{2}}}\end{aligned}}}
on the compactified complex plane. It has simple poles at
z
=
i
{\textstyle z=i}
and
z
=
−
i
{\textstyle z=-i}
, and it is analytic elsewhere. Now its Taylor series centered at z0 converges on the disc B(z0, r), where r = min(|z0 − i|, |z0 + i|) is the distance from z0 to the nearest pole, and it does not converge on any larger disc. Therefore, the Taylor series of f centered at 0 converges on B(0, 1) and does not converge for any z ∈ C with |z| > 1, due to the poles at i and −i. For the same reason the Taylor series of f centered at 1 converges on
B
(
1
,
2
)
{\textstyle B(1,{\sqrt {2}})}
and does not converge for any z ∈ C with
|
z
−
1
|
>
2
{\textstyle \left\vert z-1\right\vert >{\sqrt {2}}}
.
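A short numerical sketch of this convergence barrier (illustrative; the sample points are arbitrary): the Taylor series of 1/(1 + x²) at 0 is 1 − x² + x⁴ − ⋯, and its partial sums behave very differently on the two sides of |x| = 1, even though the real function is smooth everywhere:

```python
def partial_sum(x, n):
    # partial sums of the Taylor series of 1/(1 + x^2) at 0: sum of (-1)^k x^(2k)
    return sum((-1)**k * x**(2 * k) for k in range(n + 1))

for x in (0.5, 0.9, 1.1):
    exact = 1.0 / (1.0 + x * x)
    print(x, exact, [round(partial_sum(x, n), 4) for n in (5, 10, 20)])
# For x = 0.5 and 0.9 the partial sums approach 1/(1 + x^2);
# for x = 1.1 they oscillate with growing amplitude, the divergence caused by the poles at +-i.
```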
== Generalizations of Taylor's theorem ==
=== Higher-order differentiability ===
A function f: Rn → R is differentiable at a ∈ Rn if and only if there exists a linear functional L : Rn → R and a function h : Rn → R such that
f
(
x
)
=
f
(
a
)
+
L
(
x
−
a
)
+
h
(
x
)
‖
x
−
a
‖
,
lim
x
→
a
h
(
x
)
=
0.
{\displaystyle f({\boldsymbol {x}})=f({\boldsymbol {a}})+L({\boldsymbol {x}}-{\boldsymbol {a}})+h({\boldsymbol {x}})\lVert {\boldsymbol {x}}-{\boldsymbol {a}}\rVert ,\qquad \lim _{{\boldsymbol {x}}\to {\boldsymbol {a}}}h({\boldsymbol {x}})=0.}
If this is the case, then
L
=
d
f
(
a
)
{\textstyle L=df({\boldsymbol {a}})}
is the (uniquely defined) differential of f at the point a. Furthermore, then the partial derivatives of f exist at a and the differential of f at a is given by
d
f
(
a
)
(
v
)
=
∂
f
∂
x
1
(
a
)
v
1
+
⋯
+
∂
f
∂
x
n
(
a
)
v
n
.
{\displaystyle df({\boldsymbol {a}})({\boldsymbol {v}})={\frac {\partial f}{\partial x_{1}}}({\boldsymbol {a}})v_{1}+\cdots +{\frac {\partial f}{\partial x_{n}}}({\boldsymbol {a}})v_{n}.}
Introduce the multi-index notation
|
α
|
=
α
1
+
⋯
+
α
n
,
α
!
=
α
1
!
⋯
α
n
!
,
x
α
=
x
1
α
1
⋯
x
n
α
n
{\displaystyle |\alpha |=\alpha _{1}+\cdots +\alpha _{n},\quad \alpha !=\alpha _{1}!\cdots \alpha _{n}!,\quad {\boldsymbol {x}}^{\alpha }=x_{1}^{\alpha _{1}}\cdots x_{n}^{\alpha _{n}}}
for α ∈ Nn and x ∈ Rn. If all the
k
{\textstyle k}
-th order partial derivatives of f : Rn → R are continuous at a ∈ Rn, then by Clairaut's theorem, one can change the order of mixed derivatives at a, so the short-hand notation
D
α
f
=
∂
|
α
|
f
∂
x
α
=
∂
α
1
+
…
+
α
n
f
∂
x
1
α
1
⋯
∂
x
n
α
n
{\displaystyle D^{\alpha }f={\frac {\partial ^{|\alpha |}f}{\partial {\boldsymbol {x}}^{\alpha }}}={\frac {\partial ^{\alpha _{1}+\ldots +\alpha _{n}}f}{\partial x_{1}^{\alpha _{1}}\cdots \partial x_{n}^{\alpha _{n}}}}}
for the higher order partial derivatives is justified in this situation. The same is true if all the (k − 1)-th order partial derivatives of f exist in some neighborhood of a and are differentiable at a. Then we say that f is k times differentiable at the point a.
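A tiny sketch of the multi-index conventions |α|, α!, and x^α introduced above (illustrative; plain Python, with n = 3):

```python
import math

def multi_abs(alpha):       # |alpha| = alpha_1 + ... + alpha_n
    return sum(alpha)

def multi_fact(alpha):      # alpha! = alpha_1! * ... * alpha_n!
    out = 1
    for a in alpha:
        out *= math.factorial(a)
    return out

def multi_pow(x, alpha):    # x^alpha = x_1^alpha_1 * ... * x_n^alpha_n
    out = 1.0
    for xi, ai in zip(x, alpha):
        out *= xi**ai
    return out

alpha = (2, 0, 1)
print(multi_abs(alpha), multi_fact(alpha), multi_pow((1.0, 2.0, 3.0), alpha))   # 3 2 3.0
```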
=== Taylor's theorem for multivariate functions ===
Using notations of the preceding section, one has the following theorem.
If the function f : Rn → R is k + 1 times continuously differentiable in a closed ball
B
=
{
y
∈
R
n
:
‖
a
−
y
‖
≤
r
}
{\displaystyle B=\{\mathbf {y} \in \mathbb {R} ^{n}:\left\|\mathbf {a} -\mathbf {y} \right\|\leq r\}}
for some
r
>
0
{\displaystyle r>0}
, then one can derive an exact formula for the remainder in terms of (k+1)-th order partial derivatives of f in this neighborhood. Namely,
f
(
x
)
=
∑
|
α
|
≤
k
D
α
f
(
a
)
α
!
(
x
−
a
)
α
+
∑
|
β
|
=
k
+
1
R
β
(
x
)
(
x
−
a
)
β
,
R
β
(
x
)
=
|
β
|
β
!
∫
0
1
(
1
−
t
)
|
β
|
−
1
D
β
f
(
a
+
t
(
x
−
a
)
)
d
t
.
{\displaystyle {\begin{aligned}&f({\boldsymbol {x}})=\sum _{|\alpha |\leq k}{\frac {D^{\alpha }f({\boldsymbol {a}})}{\alpha !}}({\boldsymbol {x}}-{\boldsymbol {a}})^{\alpha }+\sum _{|\beta |=k+1}R_{\beta }({\boldsymbol {x}})({\boldsymbol {x}}-{\boldsymbol {a}})^{\beta },\\&R_{\beta }({\boldsymbol {x}})={\frac {|\beta |}{\beta !}}\int _{0}^{1}(1-t)^{|\beta |-1}D^{\beta }f{\big (}{\boldsymbol {a}}+t({\boldsymbol {x}}-{\boldsymbol {a}}){\big )}\,dt.\end{aligned}}}
In this case, due to the continuity of (k+1)-th order partial derivatives in the compact set B, one immediately obtains the uniform estimates
|
R
β
(
x
)
|
≤
1
β
!
max
|
α
|
=
|
β
|
max
y
∈
B
|
D
α
f
(
y
)
|
,
x
∈
B
.
{\displaystyle \left|R_{\beta }({\boldsymbol {x}})\right|\leq {\frac {1}{\beta !}}\max _{|\alpha |=|\beta |}\max _{{\boldsymbol {y}}\in B}|D^{\alpha }f({\boldsymbol {y}})|,\qquad {\boldsymbol {x}}\in B.}
=== Example in two dimensions ===
For example, the third-order Taylor polynomial of a smooth function
f
:
R
2
→
R
{\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} }
is, denoting
x
−
a
=
v
{\displaystyle {\boldsymbol {x}}-{\boldsymbol {a}}={\boldsymbol {v}}}
,
P
3
(
x
)
=
f
(
a
)
+
∂
f
∂
x
1
(
a
)
v
1
+
∂
f
∂
x
2
(
a
)
v
2
+
∂
2
f
∂
x
1
2
(
a
)
v
1
2
2
!
+
∂
2
f
∂
x
1
∂
x
2
(
a
)
v
1
v
2
+
∂
2
f
∂
x
2
2
(
a
)
v
2
2
2
!
+
∂
3
f
∂
x
1
3
(
a
)
v
1
3
3
!
+
∂
3
f
∂
x
1
2
∂
x
2
(
a
)
v
1
2
v
2
2
!
+
∂
3
f
∂
x
1
∂
x
2
2
(
a
)
v
1
v
2
2
2
!
+
∂
3
f
∂
x
2
3
(
a
)
v
2
3
3
!
{\displaystyle {\begin{aligned}P_{3}({\boldsymbol {x}})=f({\boldsymbol {a}})+{}&{\frac {\partial f}{\partial x_{1}}}({\boldsymbol {a}})v_{1}+{\frac {\partial f}{\partial x_{2}}}({\boldsymbol {a}})v_{2}+{\frac {\partial ^{2}f}{\partial x_{1}^{2}}}({\boldsymbol {a}}){\frac {v_{1}^{2}}{2!}}+{\frac {\partial ^{2}f}{\partial x_{1}\partial x_{2}}}({\boldsymbol {a}})v_{1}v_{2}+{\frac {\partial ^{2}f}{\partial x_{2}^{2}}}({\boldsymbol {a}}){\frac {v_{2}^{2}}{2!}}\\&+{\frac {\partial ^{3}f}{\partial x_{1}^{3}}}({\boldsymbol {a}}){\frac {v_{1}^{3}}{3!}}+{\frac {\partial ^{3}f}{\partial x_{1}^{2}\partial x_{2}}}({\boldsymbol {a}}){\frac {v_{1}^{2}v_{2}}{2!}}+{\frac {\partial ^{3}f}{\partial x_{1}\partial x_{2}^{2}}}({\boldsymbol {a}}){\frac {v_{1}v_{2}^{2}}{2!}}+{\frac {\partial ^{3}f}{\partial x_{2}^{3}}}({\boldsymbol {a}}){\frac {v_{2}^{3}}{3!}}\end{aligned}}}
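The same polynomial can be generated mechanically (a sketch; sympy and the test function e^{x1} cos x2 expanded at a = (0, 0) are assumptions of the example, not part of the article), by summing the terms D^α f(a) v^α / α! over all |α| ≤ 3; here v = x since a = 0:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.exp(x1) * sp.cos(x2)     # an arbitrary smooth test function, expanded at a = (0, 0)

# third-order Taylor polynomial: sum over i + j <= 3 of
#   d^{i+j} f / dx1^i dx2^j (0, 0) * x1^i * x2^j / (i! j!)
P3 = sp.S(0)
for i in range(4):
    for j in range(4 - i):
        d = f
        if i:
            d = sp.diff(d, x1, i)
        if j:
            d = sp.diff(d, x2, j)
        P3 += d.subs({x1: 0, x2: 0}) / (sp.factorial(i) * sp.factorial(j)) * x1**i * x2**j

print(sp.expand(P3))   # the terms 1 + x1 + x1**2/2 + x1**3/6 - x2**2/2 - x1*x2**2/2
```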
== Proofs ==
=== Proof for Taylor's theorem in one real variable ===
Let
h
k
(
x
)
=
{
f
(
x
)
−
P
(
x
)
(
x
−
a
)
k
x
≠
a
0
x
=
a
{\displaystyle h_{k}(x)={\begin{cases}{\frac {f(x)-P(x)}{(x-a)^{k}}}&x\not =a\\0&x=a\end{cases}}}
where, as in the statement of Taylor's theorem,
P
(
x
)
=
f
(
a
)
+
f
′
(
a
)
(
x
−
a
)
+
f
″
(
a
)
2
!
(
x
−
a
)
2
+
⋯
+
f
(
k
)
(
a
)
k
!
(
x
−
a
)
k
.
{\displaystyle P(x)=f(a)+f'(a)(x-a)+{\frac {f''(a)}{2!}}(x-a)^{2}+\cdots +{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}.}
It is sufficient to show that
lim
x
→
a
h
k
(
x
)
=
0.
{\displaystyle \lim _{x\to a}h_{k}(x)=0.}
The proof here is based on repeated application of L'Hôpital's rule. Note that, for each
j
=
0
,
1
,
.
.
.
,
k
−
1
{\textstyle j=0,1,...,k-1}
,
f
(
j
)
(
a
)
=
P
(
j
)
(
a
)
{\displaystyle f^{(j)}(a)=P^{(j)}(a)}
. Hence each of the first
k
−
1
{\textstyle k-1}
derivatives of the numerator in
h
k
(
x
)
{\displaystyle h_{k}(x)}
vanishes at
x
=
a
{\displaystyle x=a}
, and the same is true of the denominator. Also, since the condition that the function
f
{\textstyle f}
be
k
{\textstyle k}
times differentiable at a point requires differentiability up to order
k
−
1
{\textstyle k-1}
in a neighborhood of said point (this is true, because differentiability requires a function to be defined in a whole neighborhood of a point), the numerator and its
k
−
2
{\textstyle k-2}
derivatives are differentiable in a neighborhood of
a
{\textstyle a}
. Clearly, the denominator also satisfies said condition, and additionally, doesn't vanish unless
x
=
a
{\textstyle x=a}
, therefore all conditions necessary for L'Hôpital's rule are fulfilled, and its use is justified. So
lim
x
→
a
f
(
x
)
−
P
(
x
)
(
x
−
a
)
k
=
lim
x
→
a
d
d
x
(
f
(
x
)
−
P
(
x
)
)
d
d
x
(
x
−
a
)
k
=
⋯
=
lim
x
→
a
d
k
−
1
d
x
k
−
1
(
f
(
x
)
−
P
(
x
)
)
d
k
−
1
d
x
k
−
1
(
x
−
a
)
k
=
1
k
!
lim
x
→
a
f
(
k
−
1
)
(
x
)
−
P
(
k
−
1
)
(
x
)
x
−
a
=
1
k
!
(
f
(
k
)
(
a
)
−
P
(
k
)
(
a
)
)
=
0
{\displaystyle {\begin{aligned}\lim _{x\to a}{\frac {f(x)-P(x)}{(x-a)^{k}}}&=\lim _{x\to a}{\frac {{\frac {d}{dx}}(f(x)-P(x))}{{\frac {d}{dx}}(x-a)^{k}}}\\[1ex]&=\cdots \\[1ex]&=\lim _{x\to a}{\frac {{\frac {d^{k-1}}{dx^{k-1}}}(f(x)-P(x))}{{\frac {d^{k-1}}{dx^{k-1}}}(x-a)^{k}}}\\[1ex]&={\frac {1}{k!}}\lim _{x\to a}{\frac {f^{(k-1)}(x)-P^{(k-1)}(x)}{x-a}}\\[1ex]&={\frac {1}{k!}}(f^{(k)}(a)-P^{(k)}(a))=0\end{aligned}}}
where the second-to-last equality follows by the definition of the derivative at
x
=
a
{\textstyle x=a}
.
=== Alternate proof for Taylor's theorem in one real variable ===
Let
f
(
x
)
{\displaystyle f(x)}
be a real-valued function, n times differentiable on an interval containing the relevant points, that is to be approximated by the Taylor polynomial.
Step 1: Let
F
{\textstyle F}
and
G
{\textstyle G}
be functions. Set
F
{\textstyle F}
and
G
{\textstyle G}
to be
F
(
x
)
=
f
(
x
)
−
∑
k
=
0
n
−
1
f
(
k
)
(
a
)
k
!
(
x
−
a
)
k
{\displaystyle {\begin{aligned}F(x)=f(x)-\sum _{k=0}^{n-1}{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}\end{aligned}}}
G
(
x
)
=
(
x
−
a
)
n
{\displaystyle {\begin{aligned}G(x)=(x-a)^{n}\end{aligned}}}
Step 2: Properties of
F
{\textstyle F}
and
G
{\textstyle G}
:
F
(
a
)
=
f
(
a
)
−
f
(
a
)
−
f
′
(
a
)
(
a
−
a
)
−
.
.
.
−
f
(
n
−
1
)
(
a
)
(
n
−
1
)
!
(
a
−
a
)
n
−
1
=
0
G
(
a
)
=
(
a
−
a
)
n
=
0
{\displaystyle {\begin{aligned}F(a)&=f(a)-f(a)-f'(a)(a-a)-...-{\frac {f^{(n-1)}(a)}{(n-1)!}}(a-a)^{n-1}=0\\G(a)&=(a-a)^{n}=0\end{aligned}}}
Similarly,
F
′
(
a
)
=
f
′
(
a
)
−
f
′
(
a
)
−
f
″
(
a
)
(
2
−
1
)
!
(
a
−
a
)
(
2
−
1
)
−
.
.
.
−
f
(
n
−
1
)
(
a
)
(
n
−
2
)
!
(
a
−
a
)
n
−
2
=
0
{\displaystyle {\begin{aligned}F'(a)=f'(a)-f'(a)-{\frac {f''(a)}{(2-1)!}}(a-a)^{(2-1)}-...-{\frac {f^{(n-1)}(a)}{(n-2)!}}(a-a)^{n-2}=0\end{aligned}}}
G
′
(
a
)
=
n
(
a
−
a
)
n
−
1
=
0
⋮
G
(
n
−
1
)
(
a
)
=
F
(
n
−
1
)
(
a
)
=
0
{\displaystyle {\begin{aligned}G'(a)&=n(a-a)^{n-1}=0\\&\qquad \vdots \\G^{(n-1)}(a)&=F^{(n-1)}(a)=0\end{aligned}}}
Step 3: Use Cauchy Mean Value Theorem
Let
f
1
{\displaystyle f_{1}}
and
g
1
{\displaystyle g_{1}}
be continuous functions on
[
a
,
b
]
{\displaystyle [a,b]}
. Since
a
<
x
<
b
{\displaystyle a<x<b}
, we can work with the interval
[
a
,
x
]
{\displaystyle [a,x]}
. Let
f
1
{\displaystyle f_{1}}
and
g
1
{\displaystyle g_{1}}
be differentiable on
(
a
,
x
)
{\displaystyle (a,x)}
. Assume
g
1
′
(
x
)
≠
0
{\displaystyle g_{1}'(x)\neq 0}
for all
x
∈
(
a
,
b
)
{\displaystyle x\in (a,b)}
.
Then there exists
c
1
∈
(
a
,
x
)
{\displaystyle c_{1}\in (a,x)}
such that
f
1
(
x
)
−
f
1
(
a
)
g
1
(
x
)
−
g
1
(
a
)
=
f
1
′
(
c
1
)
g
1
′
(
c
1
)
{\displaystyle {\begin{aligned}{\frac {f_{1}(x)-f_{1}(a)}{g_{1}(x)-g_{1}(a)}}={\frac {f_{1}'(c_{1})}{g_{1}'(c_{1})}}\end{aligned}}}
Note:
G
′
(
x
)
≠
0
{\displaystyle G'(x)\neq 0}
in
(
a
,
b
)
{\displaystyle (a,b)}
and
F
(
a
)
,
G
(
a
)
=
0
{\displaystyle F(a),G(a)=0}
so
F
(
x
)
G
(
x
)
=
F
(
x
)
−
F
(
a
)
G
(
x
)
−
G
(
a
)
=
F
′
(
c
1
)
G
′
(
c
1
)
{\displaystyle {\begin{aligned}{\frac {F(x)}{G(x)}}={\frac {F(x)-F(a)}{G(x)-G(a)}}={\frac {F'(c_{1})}{G'(c_{1})}}\end{aligned}}}
for some
c
1
∈
(
a
,
x
)
{\displaystyle c_{1}\in (a,x)}
.
This can also be performed for
(
a
,
c
1
)
{\displaystyle (a,c_{1})}
:
F
′
(
c
1
)
G
′
(
c
1
)
=
F
′
(
c
1
)
−
F
′
(
a
)
G
′
(
c
1
)
−
G
′
(
a
)
=
F
″
(
c
2
)
G
″
(
c
2
)
{\displaystyle {\begin{aligned}{\frac {F'(c_{1})}{G'(c_{1})}}={\frac {F'(c_{1})-F'(a)}{G'(c_{1})-G'(a)}}={\frac {F''(c_{2})}{G''(c_{2})}}\end{aligned}}}
for some
c
2
∈
(
a
,
c
1
)
{\displaystyle c_{2}\in (a,c_{1})}
.
This can be continued to
c
n
{\displaystyle c_{n}}
.
This gives a partition in
(
a
,
b
)
{\displaystyle (a,b)}
:
a
<
c
n
<
c
n
−
1
<
⋯
<
c
1
<
x
{\displaystyle a<c_{n}<c_{n-1}<\dots <c_{1}<x}
with
F
(
x
)
G
(
x
)
=
F
′
(
c
1
)
G
′
(
c
1
)
=
⋯
=
F
(
n
)
(
c
n
)
G
(
n
)
(
c
n
)
.
{\displaystyle {\frac {F(x)}{G(x)}}={\frac {F'(c_{1})}{G'(c_{1})}}=\dots ={\frac {F^{(n)}(c_{n})}{G^{(n)}(c_{n})}}.}
Set
c
=
c
n
{\displaystyle c=c_{n}}
:
F
(
x
)
G
(
x
)
=
F
(
n
)
(
c
)
G
(
n
)
(
c
)
{\displaystyle {\frac {F(x)}{G(x)}}={\frac {F^{(n)}(c)}{G^{(n)}(c)}}}
Step 4: Substitute back
F
(
x
)
G
(
x
)
=
f
(
x
)
−
∑
k
=
0
n
−
1
f
(
k
)
(
a
)
k
!
(
x
−
a
)
k
(
x
−
a
)
n
=
F
(
n
)
(
c
)
G
(
n
)
(
c
)
{\displaystyle {\begin{aligned}{\frac {F(x)}{G(x)}}={\frac {f(x)-\sum _{k=0}^{n-1}{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}}{(x-a)^{n}}}={\frac {F^{(n)}(c)}{G^{(n)}(c)}}\end{aligned}}}
By the Power Rule, repeated derivatives of
(
x
−
a
)
n
{\displaystyle (x-a)^{n}}
,
G
(
n
)
(
c
)
=
n
(
n
−
1
)
.
.
.1
{\displaystyle G^{(n)}(c)=n(n-1)...1}
, so:
F
(
n
)
(
c
)
G
(
n
)
(
c
)
=
f
(
n
)
(
c
)
n
(
n
−
1
)
⋯
1
=
f
(
n
)
(
c
)
n
!
.
{\displaystyle {\frac {F^{(n)}(c)}{G^{(n)}(c)}}={\frac {f^{(n)}(c)}{n(n-1)\cdots 1}}={\frac {f^{(n)}(c)}{n!}}.}
This leads to:
f
(
x
)
−
∑
k
=
0
n
−
1
f
(
k
)
(
a
)
k
!
(
x
−
a
)
k
=
f
(
n
)
(
c
)
n
!
(
x
−
a
)
n
.
{\displaystyle {\begin{aligned}f(x)-\sum _{k=0}^{n-1}{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}={\frac {f^{(n)}(c)}{n!}}(x-a)^{n}\end{aligned}}.}
By rearranging, we get:
f
(
x
)
=
∑
k
=
0
n
−
1
f
(
k
)
(
a
)
k
!
(
x
−
a
)
k
+
f
(
n
)
(
c
)
n
!
(
x
−
a
)
n
,
{\displaystyle {\begin{aligned}f(x)=\sum _{k=0}^{n-1}{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}+{\frac {f^{(n)}(c)}{n!}}(x-a)^{n}\end{aligned}},}
or because
c
n
=
a
{\displaystyle c_{n}=a}
eventually:
f
(
x
)
=
∑
k
=
0
n
f
(
k
)
(
a
)
k
!
(
x
−
a
)
k
.
{\displaystyle f(x)=\sum _{k=0}^{n}{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}.}
=== Derivation for the mean value forms of the remainder ===
Let G be any real-valued function, continuous on the closed interval between
a
{\textstyle a}
and
x
{\textstyle x}
and differentiable with a non-vanishing derivative on the open interval between
a
{\textstyle a}
and
x
{\textstyle x}
, and define
F
(
t
)
=
f
(
t
)
+
f
′
(
t
)
(
x
−
t
)
+
f
″
(
t
)
2
!
(
x
−
t
)
2
+
⋯
+
f
(
k
)
(
t
)
k
!
(
x
−
t
)
k
.
{\displaystyle F(t)=f(t)+f'(t)(x-t)+{\frac {f''(t)}{2!}}(x-t)^{2}+\cdots +{\frac {f^{(k)}(t)}{k!}}(x-t)^{k}.}
For
t
∈
[
a
,
x
]
{\displaystyle t\in [a,x]}
. Then, by Cauchy's mean value theorem, (★★★)   F′(ξ) / G′(ξ) = (F(x) − F(a)) / (G(x) − G(a))
for some
ξ
{\textstyle \xi }
on the open interval between
a
{\textstyle a}
and
x
{\textstyle x}
. Note that here the numerator
F
(
x
)
−
F
(
a
)
=
R
k
(
x
)
{\textstyle F(x)-F(a)=R_{k}(x)}
is exactly the remainder of the Taylor polynomial for
y
=
f
(
x
)
{\textstyle y=f(x)}
. Compute
F
′
(
t
)
=
f
′
(
t
)
+
(
f
″
(
t
)
(
x
−
t
)
−
f
′
(
t
)
)
+
(
f
(
3
)
(
t
)
2
!
(
x
−
t
)
2
−
f
(
2
)
(
t
)
1
!
(
x
−
t
)
)
+
⋯
⋯
+
(
f
(
k
+
1
)
(
t
)
k
!
(
x
−
t
)
k
−
f
(
k
)
(
t
)
(
k
−
1
)
!
(
x
−
t
)
k
−
1
)
=
f
(
k
+
1
)
(
t
)
k
!
(
x
−
t
)
k
,
{\displaystyle {\begin{aligned}F'(t)={}&f'(t)+{\big (}f''(t)(x-t)-f'(t){\big )}+\left({\frac {f^{(3)}(t)}{2!}}(x-t)^{2}-{\frac {f^{(2)}(t)}{1!}}(x-t)\right)+\cdots \\&\cdots +\left({\frac {f^{(k+1)}(t)}{k!}}(x-t)^{k}-{\frac {f^{(k)}(t)}{(k-1)!}}(x-t)^{k-1}\right)={\frac {f^{(k+1)}(t)}{k!}}(x-t)^{k},\end{aligned}}}
plug it into (★★★) and rearrange terms to find that
R
k
(
x
)
=
f
(
k
+
1
)
(
ξ
)
k
!
(
x
−
ξ
)
k
G
(
x
)
−
G
(
a
)
G
′
(
ξ
)
.
{\displaystyle R_{k}(x)={\frac {f^{(k+1)}(\xi )}{k!}}(x-\xi )^{k}{\frac {G(x)-G(a)}{G'(\xi )}}.}
This is the form of the remainder term mentioned after the actual statement of Taylor's theorem with remainder in the mean value form.
The Lagrange form of the remainder is found by choosing
G
(
t
)
=
(
x
−
t
)
k
+
1
{\displaystyle G(t)=(x-t)^{k+1}}
and the Cauchy form by choosing
G
(
t
)
=
t
−
a
{\displaystyle G(t)=t-a}
.
Remark. Using this method one can also recover the integral form of the remainder by choosing
G
(
t
)
=
∫
a
t
f
(
k
+
1
)
(
s
)
k
!
(
x
−
s
)
k
d
s
,
{\displaystyle G(t)=\int _{a}^{t}{\frac {f^{(k+1)}(s)}{k!}}(x-s)^{k}\,ds,}
but the requirements for f needed for the use of mean value theorem are too strong, if one aims to prove the claim in the case that f(k) is only absolutely continuous. However, if one uses Riemann integral instead of Lebesgue integral, the assumptions cannot be weakened.
=== Derivation for the integral form of the remainder ===
Due to the absolute continuity of
f
(
k
)
{\displaystyle f^{(k)}}
on the closed interval between
a
{\textstyle a}
and
x
{\textstyle x}
, its derivative
f
(
k
+
1
)
{\displaystyle f^{(k+1)}}
exists as an
L
1
{\displaystyle L^{1}}
-function, and we can use the fundamental theorem of calculus and integration by parts. This same proof applies for the Riemann integral assuming that
f
(
k
)
{\displaystyle f^{(k)}}
is continuous on the closed interval and differentiable on the open interval between
a
{\textstyle a}
and
x
{\textstyle x}
, and this leads to the same result as using the mean value theorem.
The fundamental theorem of calculus states that
f
(
x
)
=
f
(
a
)
+
∫
a
x
f
′
(
t
)
d
t
.
{\displaystyle f(x)=f(a)+\int _{a}^{x}\,f'(t)\,dt.}
Now we can integrate by parts and use the fundamental theorem of calculus again to see that
f
(
x
)
=
f
(
a
)
+
(
x
f
′
(
x
)
−
a
f
′
(
a
)
)
−
∫
a
x
t
f
″
(
t
)
d
t
=
f
(
a
)
+
x
(
f
′
(
a
)
+
∫
a
x
f
″
(
t
)
d
t
)
−
a
f
′
(
a
)
−
∫
a
x
t
f
″
(
t
)
d
t
=
f
(
a
)
+
(
x
−
a
)
f
′
(
a
)
+
∫
a
x
(
x
−
t
)
f
″
(
t
)
d
t
,
{\displaystyle {\begin{aligned}f(x)&=f(a)+{\Big (}xf'(x)-af'(a){\Big )}-\int _{a}^{x}tf''(t)\,dt\\&=f(a)+x\left(f'(a)+\int _{a}^{x}f''(t)\,dt\right)-af'(a)-\int _{a}^{x}tf''(t)\,dt\\&=f(a)+(x-a)f'(a)+\int _{a}^{x}\,(x-t)f''(t)\,dt,\end{aligned}}}
which is exactly Taylor's theorem with remainder in the integral form in the case
k
=
1
{\displaystyle k=1}
. The general statement is proved using induction. Suppose that (eq1)   f(x) = f(a) + f′(a)(x − a) + ⋯ + f^{(k)}(a)(x − a)^k / k! + ∫_a^x f^{(k+1)}(t)(x − t)^k / k! dt.
Integrating the remainder term by parts we arrive at
∫
a
x
f
(
k
+
1
)
(
t
)
k
!
(
x
−
t
)
k
d
t
=
−
[
f
(
k
+
1
)
(
t
)
(
k
+
1
)
k
!
(
x
−
t
)
k
+
1
]
a
x
+
∫
a
x
f
(
k
+
2
)
(
t
)
(
k
+
1
)
k
!
(
x
−
t
)
k
+
1
d
t
=
f
(
k
+
1
)
(
a
)
(
k
+
1
)
!
(
x
−
a
)
k
+
1
+
∫
a
x
f
(
k
+
2
)
(
t
)
(
k
+
1
)
!
(
x
−
t
)
k
+
1
d
t
.
{\displaystyle {\begin{aligned}\int _{a}^{x}{\frac {f^{(k+1)}(t)}{k!}}(x-t)^{k}\,dt=&-\left[{\frac {f^{(k+1)}(t)}{(k+1)k!}}(x-t)^{k+1}\right]_{a}^{x}+\int _{a}^{x}{\frac {f^{(k+2)}(t)}{(k+1)k!}}(x-t)^{k+1}\,dt\\=&\ {\frac {f^{(k+1)}(a)}{(k+1)!}}(x-a)^{k+1}+\int _{a}^{x}{\frac {f^{(k+2)}(t)}{(k+1)!}}(x-t)^{k+1}\,dt.\end{aligned}}}
Substituting this into the formula in (eq1) shows that if it holds for the value
k
{\displaystyle k}
, it must also hold for the value
k
+
1
{\displaystyle k+1}
. Therefore, since it holds for
k
=
1
{\displaystyle k=1}
, it must hold for every positive integer
k
{\displaystyle k}
.
=== Derivation for the remainder of multivariate Taylor polynomials ===
We prove the special case, where
f
:
R
n
→
R
{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }
has continuous partial derivatives up to the order
k
+
1
{\displaystyle k+1}
in some closed ball
B
{\displaystyle B}
with center
a
{\displaystyle {\boldsymbol {a}}}
. The strategy of the proof is to apply the one-variable case of Taylor's theorem to the restriction of
f
{\displaystyle f}
to the line segment adjoining
x
{\displaystyle {\boldsymbol {x}}}
and
a
{\displaystyle {\boldsymbol {a}}}
. Parametrize the line segment between
a
{\displaystyle {\boldsymbol {a}}}
and
x
{\displaystyle {\boldsymbol {x}}}
by
u
(
t
)
=
a
+
t
(
x
−
a
)
{\displaystyle {\boldsymbol {u}}(t)={\boldsymbol {a}}+t({\boldsymbol {x}}-{\boldsymbol {a}})}
We apply the one-variable version of Taylor's theorem to the function
g
(
t
)
=
f
(
u
(
t
)
)
{\displaystyle g(t)=f({\boldsymbol {u}}(t))}
:
f
(
x
)
=
g
(
1
)
=
g
(
0
)
+
∑
j
=
1
k
1
j
!
g
(
j
)
(
0
)
+
∫
0
1
(
1
−
t
)
k
k
!
g
(
k
+
1
)
(
t
)
d
t
.
{\displaystyle f({\boldsymbol {x}})=g(1)=g(0)+\sum _{j=1}^{k}{\frac {1}{j!}}g^{(j)}(0)\ +\ \int _{0}^{1}{\frac {(1-t)^{k}}{k!}}g^{(k+1)}(t)\,dt.}
Applying the chain rule for several variables gives
g
(
j
)
(
t
)
=
d
j
d
t
j
f
(
u
(
t
)
)
=
d
j
d
t
j
f
(
a
+
t
(
x
−
a
)
)
=
∑
|
α
|
=
j
(
j
α
)
(
D
α
f
)
(
a
+
t
(
x
−
a
)
)
(
x
−
a
)
α
{\displaystyle {\begin{aligned}g^{(j)}(t)&={\frac {d^{j}}{dt^{j}}}f({\boldsymbol {u}}(t))\\&={\frac {d^{j}}{dt^{j}}}f({\boldsymbol {a}}+t({\boldsymbol {x}}-{\boldsymbol {a}}))\\&=\sum _{|\alpha |=j}\left({\begin{matrix}j\\\alpha \end{matrix}}\right)(D^{\alpha }f)({\boldsymbol {a}}+t({\boldsymbol {x}}-{\boldsymbol {a}}))({\boldsymbol {x}}-{\boldsymbol {a}})^{\alpha }\end{aligned}}}
where
(
j
α
)
{\displaystyle {\tbinom {j}{\alpha }}}
is the multinomial coefficient. Since
1
j
!
(
j
α
)
=
1
α
!
{\displaystyle {\tfrac {1}{j!}}{\tbinom {j}{\alpha }}={\tfrac {1}{\alpha !}}}
, we get:
f
(
x
)
=
f
(
a
)
+
∑
1
≤
|
α
|
≤
k
1
α
!
(
D
α
f
)
(
a
)
(
x
−
a
)
α
+
∑
|
α
|
=
k
+
1
k
+
1
α
!
(
x
−
a
)
α
∫
0
1
(
1
−
t
)
k
(
D
α
f
)
(
a
+
t
(
x
−
a
)
)
d
t
.
{\displaystyle f({\boldsymbol {x}})=f({\boldsymbol {a}})+\sum _{1\leq |\alpha |\leq k}{\frac {1}{\alpha !}}(D^{\alpha }f)({\boldsymbol {a}})({\boldsymbol {x}}-{\boldsymbol {a}})^{\alpha }+\sum _{|\alpha |=k+1}{\frac {k+1}{\alpha !}}({\boldsymbol {x}}-{\boldsymbol {a}})^{\alpha }\int _{0}^{1}(1-t)^{k}(D^{\alpha }f)({\boldsymbol {a}}+t({\boldsymbol {x}}-{\boldsymbol {a}}))\,dt.}
== See also ==
Hadamard's lemma
Laurent series – Power series with negative powers
Padé approximant – 'Best' approximation of a function by a rational function of given order
Newton series – Discrete analog of a derivative
Approximation theory – Theory of getting acceptably close inexact mathematical calculations
Function approximation – Approximating an arbitrary function with a well-behaved one
== Footnotes ==
== References ==
Apostol, Tom (1967), Calculus, Wiley, ISBN 0-471-00005-1.
Apostol, Tom (1974), Mathematical analysis, Addison–Wesley.
Bartle, Robert G.; Sherbert, Donald R. (2011), Introduction to Real Analysis (4th ed.), Wiley, ISBN 978-0-471-43331-6.
Hörmander, L. (1976), Linear Partial Differential Operators, Volume 1, Springer, ISBN 978-3-540-00662-6.
Kline, Morris (1972), Mathematical thought from ancient to modern times, Volume 2, Oxford University Press.
Kline, Morris (1998), Calculus: An Intuitive and Physical Approach, Dover, ISBN 0-486-40453-6.
Pedrick, George (1994), A First Course in Analysis, Springer, ISBN 0-387-94108-8.
Stromberg, Karl (1981), Introduction to classical real analysis, Wadsworth, ISBN 978-0-534-98012-2.
Rudin, Walter (1987), Real and complex analysis (3rd ed.), McGraw-Hill, ISBN 0-07-054234-1.
Tao, Terence (2014), Analysis, Volume I (3rd ed.), Hindustan Book Agency, ISBN 978-93-80250-64-9.
== External links ==
Taylor Series Approximation to Cosine at cut-the-knot
Trigonometric Taylor Expansion interactive demonstrative applet
Taylor Series Revisited at Holistic Numerical Methods Institute
In mathematics, and in particular measure theory, a measurable function is a function between the underlying sets of two measurable spaces that preserves the structure of the spaces: the preimage of any measurable set is measurable. This is in direct analogy to the definition that a continuous function between topological spaces preserves the topological structure: the preimage of any open set is open. In real analysis, measurable functions are used in the definition of the Lebesgue integral. In probability theory, a measurable function on a probability space is known as a random variable.
== Formal definition ==
Let
(
X
,
Σ
)
{\displaystyle (X,\Sigma )}
and
(
Y
,
T
)
{\displaystyle (Y,\mathrm {T} )}
be measurable spaces, meaning that
X
{\displaystyle X}
and
Y
{\displaystyle Y}
are sets equipped with respective
σ
{\displaystyle \sigma }
-algebras
Σ
{\displaystyle \Sigma }
and
T
.
{\displaystyle \mathrm {T} .}
A function
f
:
X
→
Y
{\displaystyle f:X\to Y}
is said to be measurable if for every
E
∈
T
{\displaystyle E\in \mathrm {T} }
the pre-image of
E
{\displaystyle E}
under
f
{\displaystyle f}
is in
Σ
{\displaystyle \Sigma }
; that is, for all
E
∈
T
{\displaystyle E\in \mathrm {T} }
f
−
1
(
E
)
:=
{
x
∈
X
∣
f
(
x
)
∈
E
}
∈
Σ
.
{\displaystyle f^{-1}(E):=\{x\in X\mid f(x)\in E\}\in \Sigma .}
That is,
σ
(
f
)
⊆
Σ
,
{\displaystyle \sigma (f)\subseteq \Sigma ,}
where
σ
(
f
)
{\displaystyle \sigma (f)}
is the σ-algebra generated by f. If
f
:
X
→
Y
{\displaystyle f:X\to Y}
is a measurable function, one writes
f
:
(
X
,
Σ
)
→
(
Y
,
T
)
.
{\displaystyle f\colon (X,\Sigma )\rightarrow (Y,\mathrm {T} ).}
to emphasize the dependency on the
σ
{\displaystyle \sigma }
-algebras
Σ
{\displaystyle \Sigma }
and
T
.
{\displaystyle \mathrm {T} .}
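A finite toy example (the sets, σ-algebras, and functions below are invented purely for illustration) of checking the definition directly, preimage by preimage:

```python
X = {1, 2, 3, 4}
Sigma = [set(), {1, 2}, {3, 4}, {1, 2, 3, 4}]     # a sigma-algebra on X
Y = {'a', 'b'}
T = [set(), {'a'}, {'b'}, {'a', 'b'}]             # a sigma-algebra on Y

def measurable(f):
    # f is measurable iff the preimage of every E in T lies in Sigma
    return all({x for x in X if f(x) in E} in Sigma for E in T)

f1 = lambda x: 'a' if x in (1, 2) else 'b'   # constant on the atoms {1, 2} and {3, 4}
f2 = lambda x: 'a' if x in (1, 3) else 'b'   # mixes the atoms

print(measurable(f1))   # True
print(measurable(f2))   # False: the preimage of {'a'} is {1, 3}, which is not in Sigma
```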
== Term usage variations ==
The choice of
σ
{\displaystyle \sigma }
-algebras in the definition above is sometimes implicit and left up to the context. For example, for
R
,
{\displaystyle \mathbb {R} ,}
C
,
{\displaystyle \mathbb {C} ,}
or other topological spaces, the Borel algebra (generated by all the open sets) is a common choice. Some authors define measurable functions as exclusively real-valued ones with respect to the Borel algebra.
If the values of the function lie in an infinite-dimensional vector space, other non-equivalent definitions of measurability, such as weak measurability and Bochner measurability, exist.
== Notable classes of measurable functions ==
Random variables are by definition measurable functions defined on probability spaces.
If
(
X
,
Σ
)
{\displaystyle (X,\Sigma )}
and
(
Y
,
T
)
{\displaystyle (Y,T)}
are Borel spaces, a measurable function
f
:
(
X
,
Σ
)
→
(
Y
,
T
)
{\displaystyle f:(X,\Sigma )\to (Y,T)}
is also called a Borel function. Continuous functions are Borel functions but not all Borel functions are continuous. However, a measurable function is nearly a continuous function; see Luzin's theorem. If a Borel function happens to be a section of a map
Y
→
π
X
,
{\displaystyle Y\xrightarrow {~\pi ~} X,}
it is called a Borel section.
A Lebesgue measurable function is a measurable function
f
:
(
R
,
L
)
→
(
C
,
B
C
)
,
{\displaystyle f:(\mathbb {R} ,{\mathcal {L}})\to (\mathbb {C} ,{\mathcal {B}}_{\mathbb {C} }),}
where
L
{\displaystyle {\mathcal {L}}}
is the
σ
{\displaystyle \sigma }
-algebra of Lebesgue measurable sets, and
B
C
{\displaystyle {\mathcal {B}}_{\mathbb {C} }}
is the Borel algebra on the complex numbers
C
.
{\displaystyle \mathbb {C} .}
Lebesgue measurable functions are of interest in mathematical analysis because they can be integrated. In the case
f
:
X
→
R
,
{\displaystyle f:X\to \mathbb {R} ,}
f
{\displaystyle f}
is Lebesgue measurable if and only if
{
f
>
α
}
=
{
x
∈
X
:
f
(
x
)
>
α
}
{\displaystyle \{f>\alpha \}=\{x\in X:f(x)>\alpha \}}
is measurable for all
α
∈
R
.
{\displaystyle \alpha \in \mathbb {R} .}
This is also equivalent to any of
{
f
≥
α
}
,
{
f
<
α
}
,
{
f
≤
α
}
{\displaystyle \{f\geq \alpha \},\{f<\alpha \},\{f\leq \alpha \}}
being measurable for all
α
,
{\displaystyle \alpha ,}
or the preimage of any open set being measurable. Continuous functions, monotone functions, step functions, semicontinuous functions, Riemann-integrable functions, and functions of bounded variation are all Lebesgue measurable. A function
f
:
X
→
C
{\displaystyle f:X\to \mathbb {C} }
is measurable if and only if the real and imaginary parts are measurable.
== Properties of measurable functions ==
The sum and product of two complex-valued measurable functions are measurable. So is the quotient, so long as there is no division by zero.
If
f
:
(
X
,
Σ
1
)
→
(
Y
,
Σ
2
)
{\displaystyle f:(X,\Sigma _{1})\to (Y,\Sigma _{2})}
and
g
:
(
Y
,
Σ
2
)
→
(
Z
,
Σ
3
)
{\displaystyle g:(Y,\Sigma _{2})\to (Z,\Sigma _{3})}
are measurable functions, then so is their composition
g
∘
f
:
(
X
,
Σ
1
)
→
(
Z
,
Σ
3
)
.
{\displaystyle g\circ f:(X,\Sigma _{1})\to (Z,\Sigma _{3}).}
If
f
:
(
X
,
Σ
1
)
→
(
Y
,
Σ
2
)
{\displaystyle f:(X,\Sigma _{1})\to (Y,\Sigma _{2})}
and
g
:
(
Y
,
Σ
3
)
→
(
Z
,
Σ
4
)
{\displaystyle g:(Y,\Sigma _{3})\to (Z,\Sigma _{4})}
are measurable functions, their composition
g
∘
f
:
X
→
Z
{\displaystyle g\circ f:X\to Z}
need not be
(
Σ
1
,
Σ
4
)
{\displaystyle (\Sigma _{1},\Sigma _{4})}
-measurable unless
Σ
3
⊆
Σ
2
.
{\displaystyle \Sigma _{3}\subseteq \Sigma _{2}.}
Indeed, two Lebesgue-measurable functions may be constructed in such a way as to make their composition non-Lebesgue-measurable.
The (pointwise) supremum, infimum, limit superior, and limit inferior of a sequence (viz., countably many) of real-valued measurable functions are all measurable as well.
The pointwise limit of a sequence of measurable functions
f
n
:
X
→
Y
{\displaystyle f_{n}:X\to Y}
is measurable, where
Y
{\displaystyle Y}
is a metric space (endowed with the Borel algebra). This is not true in general if
Y
{\displaystyle Y}
is non-metrizable. The corresponding statement for continuous functions requires stronger conditions than pointwise convergence, such as uniform convergence.
== Non-measurable functions ==
Real-valued functions encountered in applications tend to be measurable; however, it is not difficult to prove the existence of non-measurable functions. Such proofs rely on the axiom of choice in an essential way, in the sense that Zermelo–Fraenkel set theory without the axiom of choice does not prove the existence of such functions.
In any measure space
(
X
,
Σ
)
{\displaystyle (X,\Sigma )}
with a non-measurable set
A
⊂
X
,
{\displaystyle A\subset X,}
A
∉
Σ
,
{\displaystyle A\notin \Sigma ,}
one can construct a non-measurable indicator function:
1
A
:
(
X
,
Σ
)
→
R
,
1
A
(
x
)
=
{
1
if
x
∈
A
0
otherwise
,
{\displaystyle \mathbf {1} _{A}:(X,\Sigma )\to \mathbb {R} ,\quad \mathbf {1} _{A}(x)={\begin{cases}1&{\text{ if }}x\in A\\0&{\text{ otherwise}},\end{cases}}}
where
R
{\displaystyle \mathbb {R} }
is equipped with the usual Borel algebra. This is a non-measurable function since the preimage of the measurable set
{
1
}
{\displaystyle \{1\}}
is the non-measurable
A
.
{\displaystyle A.}
As another example, any non-constant function
f
:
X
→
R
{\displaystyle f:X\to \mathbb {R} }
is non-measurable with respect to the trivial
σ
{\displaystyle \sigma }
-algebra
Σ
=
{
∅
,
X
}
,
{\displaystyle \Sigma =\{\varnothing ,X\},}
since the preimage of any point in the range is some proper, nonempty subset of
X
,
{\displaystyle X,}
which is not an element of the trivial
Σ
.
{\displaystyle \Sigma .}
== See also ==
Bochner measurable function
Bochner space – Type of topological space
Lp space – Function spaces generalizing finite-dimensional p norm spaces - Vector spaces of measurable functions: the
L
p
{\displaystyle L^{p}}
spaces
Measure-preserving dynamical system – Subject of study in ergodic theory
Vector measure
Weakly measurable function
== Notes ==
== External links ==
Measurable function at Encyclopedia of Mathematics
Borel function at Encyclopedia of Mathematics
In mathematics, a complex differential form is a differential form on a manifold (usually a complex manifold) which is permitted to have complex coefficients.
Complex forms have broad applications in differential geometry. On complex manifolds, they are fundamental and serve as the basis for much of algebraic geometry, Kähler geometry, and Hodge theory. Over non-complex manifolds, they also play a role in the study of almost complex structures, the theory of spinors, and CR structures.
Typically, complex forms are considered because of some desirable decomposition that the forms admit. On a complex manifold, for instance, any complex k-form can be decomposed uniquely into a sum of so-called (p, q)-forms: roughly, wedges of p differentials of the holomorphic coordinates with q differentials of their complex conjugates. The ensemble of (p, q)-forms becomes the primitive object of study, and determines a finer geometrical structure on the manifold than the k-forms. Even finer structures exist, for example, in cases where Hodge theory applies.
== Differential forms on a complex manifold ==
Suppose that M is a complex manifold of complex dimension n. Then there is a local coordinate system consisting of n complex-valued functions z1, ..., zn such that the coordinate transitions from one patch to another are holomorphic functions of these variables. The space of complex forms carries a rich structure, depending fundamentally on the fact that these transition functions are holomorphic, rather than just smooth.
=== One-forms ===
We begin with the case of one-forms. First decompose the complex coordinates into their real and imaginary parts: zj = xj + iyj for each j. Letting
d
z
j
=
d
x
j
+
i
d
y
j
,
d
z
¯
j
=
d
x
j
−
i
d
y
j
,
{\displaystyle dz^{j}=dx^{j}+idy^{j},\quad d{\bar {z}}^{j}=dx^{j}-idy^{j},}
one sees that any differential form with complex coefficients can be written uniquely as a sum
∑
j
=
1
n
(
f
j
d
z
j
+
g
j
d
z
¯
j
)
.
{\displaystyle \sum _{j=1}^{n}\left(f_{j}dz^{j}+g_{j}d{\bar {z}}^{j}\right).}
Let Ω1,0 be the space of complex differential forms containing only
d
z
{\displaystyle dz}
's and Ω0,1 be the space of forms containing only
d
z
¯
{\displaystyle d{\bar {z}}}
's. One can show, by the Cauchy–Riemann equations, that the spaces Ω1,0 and Ω0,1 are stable under holomorphic coordinate changes. In other words, if one makes a different choice wi of holomorphic coordinate system, then elements of Ω1,0 transform tensorially, as do elements of Ω0,1. Thus the spaces Ω0,1 and Ω1,0 determine complex vector bundles on the complex manifold.
=== Higher-degree forms ===
The wedge product of complex differential forms is defined in the same way as with real forms. Let p and q be a pair of non-negative integers ≤ n. The space Ωp,q of (p, q)-forms is defined by taking linear combinations of the wedge products of p elements from Ω1,0 and q elements from Ω0,1. Symbolically,
Ω
p
,
q
=
Ω
1
,
0
∧
⋯
∧
Ω
1
,
0
⏟
p
times
∧
Ω
0
,
1
∧
⋯
∧
Ω
0
,
1
⏟
q
times
{\displaystyle \Omega ^{p,q}=\underbrace {\Omega ^{1,0}\wedge \dotsb \wedge \Omega ^{1,0}} _{p{\text{ times}}}\wedge \underbrace {\Omega ^{0,1}\wedge \dotsb \wedge \Omega ^{0,1}} _{q{\text{ times}}}}
where there are p factors of Ω1,0 and q factors of Ω0,1. Just as with the two spaces of 1-forms, these are stable under holomorphic changes of coordinates, and so determine vector bundles.
If Ek is the space of all complex differential forms of total degree k, then each element of Ek can be expressed in a unique way as a linear combination of elements from among the spaces Ωp,q with p + q = k. More succinctly, there is a direct sum decomposition
E
k
=
Ω
k
,
0
⊕
Ω
k
−
1
,
1
⊕
⋯
⊕
Ω
1
,
k
−
1
⊕
Ω
0
,
k
=
⨁
p
+
q
=
k
Ω
p
,
q
.
{\displaystyle E^{k}=\Omega ^{k,0}\oplus \Omega ^{k-1,1}\oplus \dotsb \oplus \Omega ^{1,k-1}\oplus \Omega ^{0,k}=\bigoplus _{p+q=k}\Omega ^{p,q}.}
Because this direct sum decomposition is stable under holomorphic coordinate changes, it also determines a vector bundle decomposition.
In particular, for each k and each p and q with p + q = k, there is a canonical projection of vector bundles
π
p
,
q
:
E
k
→
Ω
p
,
q
.
{\displaystyle \pi ^{p,q}:E^{k}\rightarrow \Omega ^{p,q}.}
=== The Dolbeault operators ===
The usual exterior derivative defines a mapping of sections
d
:
Ω
r
→
Ω
r
+
1
{\displaystyle d:\Omega ^{r}\to \Omega ^{r+1}}
via
d
(
Ω
p
,
q
)
⊆
⨁
r
+
s
=
p
+
q
+
1
Ω
r
,
s
{\displaystyle d(\Omega ^{p,q})\subseteq \bigoplus _{r+s=p+q+1}\Omega ^{r,s}}
The exterior derivative does not in itself reflect the more rigid complex structure of the manifold.
Using d and the projections defined in the previous subsection, it is possible to define the Dolbeault operators:
∂
=
π
p
+
1
,
q
∘
d
:
Ω
p
,
q
→
Ω
p
+
1
,
q
,
∂
¯
=
π
p
,
q
+
1
∘
d
:
Ω
p
,
q
→
Ω
p
,
q
+
1
{\displaystyle \partial =\pi ^{p+1,q}\circ d:\Omega ^{p,q}\rightarrow \Omega ^{p+1,q},\quad {\bar {\partial }}=\pi ^{p,q+1}\circ d:\Omega ^{p,q}\rightarrow \Omega ^{p,q+1}}
To describe these operators in local coordinates, let
α
=
∑
|
I
|
=
p
,
|
J
|
=
q
f
I
J
d
z
I
∧
d
z
¯
J
∈
Ω
p
,
q
{\displaystyle \alpha =\sum _{|I|=p,|J|=q}\ f_{IJ}\,dz^{I}\wedge d{\bar {z}}^{J}\in \Omega ^{p,q}}
where I and J are multi-indices. Then
∂
α
=
∑
|
I
|
,
|
J
|
∑
ℓ
∂
f
I
J
∂
z
ℓ
d
z
ℓ
∧
d
z
I
∧
d
z
¯
J
{\displaystyle \partial \alpha =\sum _{|I|,|J|}\sum _{\ell }{\frac {\partial f_{IJ}}{\partial z^{\ell }}}\,dz^{\ell }\wedge dz^{I}\wedge d{\bar {z}}^{J}}
∂
¯
α
=
∑
|
I
|
,
|
J
|
∑
ℓ
∂
f
I
J
∂
z
¯
ℓ
d
z
¯
ℓ
∧
d
z
I
∧
d
z
¯
J
.
{\displaystyle {\bar {\partial }}\alpha =\sum _{|I|,|J|}\sum _{\ell }{\frac {\partial f_{IJ}}{\partial {\bar {z}}^{\ell }}}d{\bar {z}}^{\ell }\wedge dz^{I}\wedge d{\bar {z}}^{J}.}
The following properties are seen to hold:
d
=
∂
+
∂
¯
{\displaystyle d=\partial +{\bar {\partial }}}
∂
2
=
∂
¯
2
=
∂
∂
¯
+
∂
¯
∂
=
0.
{\displaystyle \partial ^{2}={\bar {\partial }}^{2}=\partial {\bar {\partial }}+{\bar {\partial }}\partial =0.}
These operators and their properties form the basis for Dolbeault cohomology and many aspects of Hodge theory.
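A small symbolic sketch (illustrative; sympy and the Wirtinger-derivative formulas ∂/∂z = (∂x − i ∂y)/2 and ∂/∂z̄ = (∂x + i ∂y)/2 are assumptions of the sketch, not notation introduced in the article) of ∂ and ∂̄ acting on functions of one complex variable; for a holomorphic function, ∂̄f = 0:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

def dolbeault(f):
    # Wirtinger derivatives: df/dz = (f_x - i f_y)/2 and df/dzbar = (f_x + i f_y)/2;
    # for a function (0-form) f, del f = (df/dz) dz and del-bar f = (df/dzbar) dzbar.
    fz = sp.simplify((sp.diff(f, x) - sp.I * sp.diff(f, y)) / 2)
    fzbar = sp.simplify((sp.diff(f, x) + sp.I * sp.diff(f, y)) / 2)
    return fz, fzbar

f_holomorphic = z**3 + sp.exp(z)       # depends on z only
f_general = x**2 + y**2                # equals z * zbar, genuinely mixed

print(dolbeault(f_holomorphic)[1])     # 0: del-bar f = 0 is the Cauchy-Riemann condition
print(dolbeault(f_general))            # (x - I*y, x + I*y), i.e. (zbar, z)
```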
On a star-shaped domain of a complex manifold the Dolbeault operators have dual homotopy operators that result from splitting of the homotopy operator for
d
{\displaystyle d}
. This is the content of the Poincaré lemma on a complex manifold.
The Poincaré lemma for
∂
¯
{\displaystyle {\bar {\partial }}}
and
∂
{\displaystyle \partial }
can be improved further to the local
∂
∂
¯
{\displaystyle \partial {\bar {\partial }}}
-lemma, which shows that every
d
{\displaystyle d}
-exact complex differential form is actually
∂
∂
¯
{\displaystyle \partial {\bar {\partial }}}
-exact. On compact Kähler manifolds a global form of the local
∂
∂
¯
{\displaystyle \partial {\bar {\partial }}}
-lemma holds, known as the
∂
∂
¯
{\displaystyle \partial {\bar {\partial }}}
-lemma. It is a consequence of Hodge theory, and states that a complex differential form which is globally
d
{\displaystyle d}
-exact (in other words, whose class in de Rham cohomology is zero) is globally
∂
∂
¯
{\displaystyle \partial {\bar {\partial }}}
-exact.
=== Holomorphic forms ===
For each p, a holomorphic p-form is a holomorphic section of the bundle Ωp,0. In local coordinates, then, a holomorphic p-form can be written in the form
{\displaystyle \alpha =\sum _{|I|=p}f_{I}\,dz^{I}}
where the f_I are holomorphic functions. Equivalently, and due to the independence of the complex conjugate, the (p, 0)-form α is holomorphic if and only if
{\displaystyle {\bar {\partial }}\alpha =0.}
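For instance (again a one-variable sketch), a (1, 0)-form α = f dz on an open subset of the complex plane satisfies
{\displaystyle {\bar {\partial }}\alpha ={\frac {\partial f}{\partial {\bar {z}}}}\,d{\bar {z}}\wedge dz,}
which vanishes exactly when f satisfies the Cauchy–Riemann equations, that is, when f is holomorphic.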
The sheaf of holomorphic p-forms is often written Ωp, although this can sometimes lead to confusion, so many authors tend to adopt an alternative notation.
== See also ==
Dolbeault complex
Frölicher spectral sequence
Differential of the first kind
== References ==
P. Griffiths; J. Harris (1994). Principles of Algebraic Geometry. Wiley Classics Library. Wiley Interscience. pp. 23–25. ISBN 0-471-05059-8.
Wells, R. O. (1973). Differential analysis on complex manifolds. Springer-Verlag. ISBN 0-387-90419-0.
Voisin, Claire (2008). Hodge Theory and Complex Algebraic Geometry I. Cambridge University Press. ISBN 978-0521718011.
In mathematics, the Poincaré lemma gives a sufficient condition for a closed differential form to be exact (while an exact form is necessarily closed). Precisely, it states that every closed p-form on an open ball in Rn is exact for p with 1 ≤ p ≤ n. The lemma was introduced by Henri Poincaré in 1886.
== Informal Discussion ==
Especially in calculus, the Poincaré lemma also says that every closed 1-form on a simply connected open subset in ℝⁿ is exact.
In simpler terms, it means that if a differential form is closed in a region that can be shrunk to a point, then it can be written as the derivative of another form; i.e. if dα = 0 on a simply connected region, we can always find a form β with α = dβ; this is consistent with d(dβ) = 0, expressed simply as d² = 0. This concept is used in mathematical physics, particularly in the context of electromagnetism and differential geometry, where it relates to the fact that the boundary of a boundary is always empty; i.e. if you have a surface (a 2-form) and you take its boundary (a 1-form, a curve), then the boundary of that boundary (a 0-form, a point) is the empty set.
In electromagnetism, magnetic fields can be described using a vector potential, and the Poincaré lemma helps in finding such potentials when the magnetic field is "well-behaved" (i.e., when the magnetic field is not due to a monopole). Gauss's law for magnetism states that the total magnetic flux through a closed surface is always zero, which implies that magnetic monopoles, if they exist, cannot be isolated but must be accompanied by other magnetic charges.
In the language of cohomology, the Poincaré lemma says that the k-th de Rham cohomology group of a contractible open subset of a manifold M (e.g., M = ℝⁿ) vanishes for k ≥ 1. In particular, it implies that the de Rham complex yields a resolution of the constant sheaf ℝ_M on M. The singular cohomology of a contractible space vanishes in positive degree, but the Poincaré lemma does not follow from this, since the fact that the singular cohomology of a manifold can be computed as the de Rham cohomology of it, that is, the de Rham theorem, relies on the Poincaré lemma. It does, however, mean that it is enough to prove the Poincaré lemma for open balls; the version for contractible manifolds then follows from the topological consideration.
The Poincaré lemma is also a special case of the homotopy invariance of de Rham cohomology; in fact, it is common to establish the lemma by showing the homotopy invariance or at least a version of it.
== Proofs ==
A standard proof of the Poincaré lemma uses the homotopy invariance formula (cf. the proofs below, as well as Integration along fibers § Example). The local form of the homotopy operator is described in Edelen (2005) and the connection of the lemma with the Maurer–Cartan form is explained in Sharpe (1997).
=== Direct proof ===
The Poincaré lemma can be proved by means of integration along fibers. (This approach is a straightforward generalization of constructing a primitive function by means of integration in calculus.)
We shall prove the lemma for an open subset U ⊂ ℝⁿ that is star-shaped or a cone over [0, 1]; i.e., if x is in U, then tx is in U for 0 ≤ t ≤ 1. This case in particular covers the open ball case, since an open ball can be assumed to be centered at the origin without loss of generality.
The trick is to consider differential forms on U × [0, 1] ⊂ ℝⁿ⁺¹ (we use t for the coordinate on [0, 1]). First define the operator π_* (called the fiber integration) for k-forms on U × [0, 1] by
{\displaystyle \pi _{*}\left(\sum _{i_{1}<\cdots <i_{k-1}}f_{i}dt\wedge dx^{i}+\sum _{j_{1}<\cdots <j_{k}}g_{j}dx^{j}\right)=\sum _{i_{1}<\cdots <i_{k-1}}\left(\int _{0}^{1}f_{i}(\cdot ,t)\,dt\right)\,dx^{i}}
where dx^i = dx_{i_1} ∧ ⋯ ∧ dx_{i_{k−1}}, f_i = f_{i_1, …, i_{k−1}}, and similarly for dx^j and g_j. Now, for α = f dt ∧ dx^i, since
{\displaystyle d\alpha =-\sum _{l}{\frac {\partial f}{\partial x_{l}}}dt\wedge dx_{l}\wedge dx^{i}},
using the differentiation under the integral sign, we have:
{\displaystyle \pi _{*}(d\alpha )=\alpha _{1}-\alpha _{0}-d(\pi _{*}\alpha )}
where α₀, α₁ denote the restrictions of α to the hyperplanes t = 0, t = 1, and they are zero since dt is zero there. If α = g dx^j, then a similar computation gives
{\displaystyle \pi _{*}(d\alpha )=\alpha _{1}-\alpha _{0}-d(\pi _{*}\alpha )}.
Thus, the above formula holds for any k-form α on U × [0, 1]. (The formula is a special case of a formula sometimes called the relative Stokes formula.)
Finally, let h(x, t) = tx and then set J = π_* ∘ h^*. Then, with the notation h_t = h(·, t), we get: for any k-form ω on U,
{\displaystyle h_{1}^{*}\omega -h_{0}^{*}\omega =Jd\omega +dJ\omega ,}
the formula known as the homotopy formula. The operator J is called the homotopy operator (also called a chain homotopy). Now, if ω is closed, Jdω = 0. On the other hand, h₁*ω = ω and h₀*ω = 0, the latter because there is no nonzero higher form at a point. Hence,
{\displaystyle \omega =dJ\omega ,}
which proves the Poincaré lemma.
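The homotopy operator of this proof is easy to evaluate explicitly for 1-forms on a star-shaped domain about the origin: for ω = Σᵢ fᵢ dxᵢ it reduces to (Jω)(x) = ∫₀¹ Σᵢ fᵢ(tx) xᵢ dt. The following minimal sketch (assuming the SymPy library; the closed 1-form ω = 2xy dx + x² dy is just an illustrative choice) checks the conclusion ω = d(Jω) on ℝ²:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
p, q = 2*x*y, x**2                 # omega = p dx + q dy, a closed 1-form

# closedness: d(omega) = (q_x - p_y) dx ^ dy = 0
assert sp.simplify(sp.diff(q, x) - sp.diff(p, y)) == 0

# homotopy operator: (J omega)(x, y) = integral_0^1 [p(tx, ty) x + q(tx, ty) y] dt
integrand = (p.subs({x: t*x, y: t*y})*x
             + q.subs({x: t*x, y: t*y})*y)
J_omega = sp.integrate(integrand, (t, 0, 1))   # gives x**2 * y

# d(J omega) recovers omega, as the homotopy formula predicts for closed forms
assert sp.simplify(sp.diff(J_omega, x) - p) == 0
assert sp.simplify(sp.diff(J_omega, y) - q) == 0
print(J_omega)
```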
The same proof in fact shows the Poincaré lemma for any contractible open subset U of a manifold. Indeed, given such a U, we have the homotopy h_t with h₁ = the identity and h₀(U) = a point. Approximating such an h_t, we can assume h_t is in fact smooth. The fiber integration π_* is also defined for π : U × [0, 1] → U. Hence, the same argument goes through.
=== Proof using Lie derivatives ===
Cartan's magic formula for Lie derivatives can be used to give a short proof of the Poincaré lemma. The formula states that the Lie derivative along a vector field ξ is given as:
{\displaystyle L_{\xi }=d\,i(\xi )+i(\xi )d}
where i(ξ) denotes the interior product; i.e., {\displaystyle i(\xi )\omega =\omega (\xi ,\cdot )}.
Let f_t : U → U be a smooth family of smooth maps for some open subset U of ℝⁿ such that f_t is defined for t in some closed interval I and f_t is a diffeomorphism for t in the interior of I. Let ξ_t(x) denote the tangent vectors to the curve f_t(x); i.e., {\displaystyle {\frac {d}{dt}}f_{t}(x)=\xi _{t}(f_{t}(x))}. For a fixed t in the interior of I, let g_s = f_{t+s} ∘ f_t⁻¹. Then g₀ = id and (d/ds)g_s|_{s=0} = ξ_t. Thus, by the definition of a Lie derivative,
{\displaystyle (L_{\xi _{t}}\omega )(f_{t}(x))={\frac {d}{ds}}g_{s}^{*}\omega (f_{t}(x))|_{s=0}={\frac {d}{ds}}f_{t+s}^{*}\omega (x)|_{s=0}={\frac {d}{dt}}f_{t}^{*}\omega (x)}.
That is,
{\displaystyle {\frac {d}{dt}}f_{t}^{*}\omega =f_{t}^{*}L_{\xi _{t}}\omega .}
Assume I = [0, 1]. Then, integrating both sides of the above and then using Cartan's formula and the differentiation under the integral sign, we get: for 0 < t₀ < t₁ < 1,
{\displaystyle f_{t_{1}}^{*}\omega -f_{t_{0}}^{*}\omega =d\int _{t_{0}}^{t_{1}}f_{t}^{*}i(\xi _{t})\omega \,dt+\int _{t_{0}}^{t_{1}}f_{t}^{*}i(\xi _{t})d\omega \,dt}
where the integration means the integration of each coefficient in a differential form. Letting t₀, t₁ → 0, 1, we then have:
{\displaystyle f_{1}^{*}\omega -f_{0}^{*}\omega =dJ\omega +Jd\omega }
with the notation
{\displaystyle J\omega =\int _{0}^{1}f_{t}^{*}i(\xi _{t})\omega \,dt.}
Now, assume U is an open ball with center x₀; then we can take f_t(x) = t(x − x₀) + x₀. Then the above formula becomes:
{\displaystyle \omega =dJ\omega +Jd\omega },
which proves the Poincaré lemma when ω is closed.
=== Proof in the two-dimensional case ===
In two dimensions the Poincaré lemma can be proved directly for closed 1-forms and 2-forms as follows.
If ω = p dx + q dy is a closed 1-form on (a, b) × (c, d), then py = qx. If ω = df then p = fx and q = fy. Set
{\displaystyle g(x,y)=\int _{a}^{x}p(t,y)\,dt,}
so that gx = p. Then h = f − g must satisfy hx = 0 and hy = q − gy. The right hand side here is independent of x since its partial derivative with respect to x is 0. So
{\displaystyle h(x,y)=\int _{c}^{y}q(a,s)\,ds-g(a,y)=\int _{c}^{y}q(a,s)\,ds,}
and hence
{\displaystyle f(x,y)=\int _{a}^{x}p(t,y)\,dt+\int _{c}^{y}q(a,s)\,ds.}
Similarly, if Ω = r dx ∧ dy then Ω = d(a dx + b dy) with bx − ay = r. Thus a solution is given by a = 0 and
{\displaystyle b(x,y)=\int _{a}^{x}r(t,y)\,dt.}
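As a quick worked check of the 1-form case (on the square (0, 1) × (0, 1), so that a = c = 0): for the closed 1-form ω = y dx + x dy one has p = y and q = x, so the formula above gives
{\displaystyle f(x,y)=\int _{0}^{x}y\,dt+\int _{0}^{y}q(0,s)\,ds=xy,}
and indeed df = y dx + x dy = ω.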
=== Inductive proof ===
It is also possible to give an inductive proof of Poincaré's lemma which does not use homotopical arguments. Let X_m := I^m, where I = [0, 1], be the m-dimensional coordinate cube. For a differential k-form ω ∈ Ω^k(X_m), let its codegree be the integer m − k. The induction is performed over the codegree of the form. Since we are working over a coordinate domain, partial derivatives and also integrals with respect to the coordinates can be applied to a form itself, by applying them to the coefficients of the form with respect to the canonical coordinates.
First let ω ∈ Ω^m(X_m), i.e. the codegree is 0. It can be written as
{\displaystyle \omega =dx^{m}\wedge \omega _{0},\quad \omega _{0}=f(x^{1},\dots ,x^{m})dx^{1}\wedge \dots \wedge dx^{m-1}}
so if we define θ ∈ Ω^{m−1}(X_m) by
{\displaystyle \theta =\int _{0}^{x_{m}}\omega _{0}(x^{1},\dots ,x^{m-1},s)\,ds},
we have
{\displaystyle d\theta =dx^{m}\wedge \partial _{m}\theta =dx^{m}\wedge \omega _{0}=\omega };
hence θ is a primitive of ω.
Let now ω ∈ Ω^k(X_m), where 0 < k < m, i.e. ω has codegree m − k, and let us suppose that whenever a closed form has codegree less than m − k, the form is exact. The form ω can be decomposed as
{\displaystyle \omega =dx^{m}\wedge \omega _{0}+\omega _{1}}
where neither ω₀ nor ω₁ contains any factor of dx^m. Define
{\displaystyle \lambda :=\int _{0}^{x_{m}}\omega _{0}(x^{1},\dots ,x^{m-1},s)\,ds};
then {\displaystyle d\lambda =dx^{m}\wedge \omega _{0}+\lambda _{1}}, where λ₁ does not contain any factor of dx^m. Hence, defining ω′ := ω − dλ = ω₁ − λ₁, this form is also closed, but does not involve any factor of dx^m. Since this form is closed, we have
{\displaystyle 0=d\omega ^{\prime }=dx^{m}\wedge \partial _{m}\omega ^{\prime }+\omega ^{\prime \prime }}
where the last term does not contain a factor of dx^m. Due to linear independence of the coordinate differentials, this equation implies that
{\displaystyle \omega ^{\prime }=\sum _{1\leq i_{1}<\dots <i_{k}\leq m-1}\omega _{i_{1}...i_{k}}(x^{1},\dots ,x^{m-1})dx^{i_{1}}\wedge \dots \wedge dx^{i_{k}}}
i.e. the form ω′ is a differential form in the variables x¹, …, x^{m−1} only, hence can be interpreted as an element of Ω^k(X_{m−1}), and its codegree is thus m − k − 1. The induction hypothesis applies, thus ω′ = dθ′ for some θ′ ∈ Ω^{k−1}(X_{m−1}) ⊆ Ω^{k−1}(X_m), therefore
{\displaystyle \omega =d\theta ,\quad \theta =\theta ^{\prime }+\lambda }
concluding the proof for a coordinate cube. Since in any manifold every point has a neighborhood which is diffeomorphic to a coordinate cube, the proof also implies that on a manifold any closed k-form (for 0 < k ≤ m = dim M) is locally exact.
== Implication for de Rham cohomology ==
By definition, the k-th de Rham cohomology group H^k_dR(U) of an open subset U of a manifold M is defined as the quotient vector space
{\displaystyle \operatorname {H} _{dR}^{k}(U)=\{{\textrm {closed}}\,k{\text{-forms}}\,{\textrm {on}}\,U\}/\{{\textrm {exact}}\,k{\text{-forms}}\,{\textrm {on}}\,U\}.}
Hence, the conclusion of the Poincaré lemma is precisely that if U is an open ball, then H^k_dR(U) = 0 for k ≥ 1. Now, differential forms determine a cochain complex called the de Rham complex:
{\displaystyle \Omega ^{*}:0\to \Omega ^{0}{\overset {d^{0}}{\to }}\Omega ^{1}{\overset {d^{1}}{\to }}\cdots \to \Omega ^{n}\to 0}
where n = the dimension of M and Ω^k denotes the sheaf of differential k-forms; i.e., Ω^k(U) consists of k-forms on U for each open subset U of M. It then gives rise to the complex (the augmented complex)
{\displaystyle 0\to \mathbb {R} _{M}{\overset {\epsilon }{\to }}\Omega ^{0}{\overset {d^{0}}{\to }}\Omega ^{1}{\overset {d^{1}}{\to }}\cdots \to \Omega ^{n}\to 0}
where ℝ_M is the constant sheaf with values in ℝ; i.e., it is the sheaf of locally constant real-valued functions, and ε is the inclusion.
The kernel of d⁰ is ℝ_M, since the smooth functions with zero derivatives are locally constant. Also, a sequence of sheaves is exact if and only if it is so locally. The Poincaré lemma thus says the rest of the sequence is exact too (since a manifold is locally diffeomorphic to an open subset of ℝⁿ and then each point has an open ball as a neighborhood). In the language of homological algebra, it means that the de Rham complex determines a resolution of the constant sheaf ℝ_M. This then implies the de Rham theorem; i.e., the de Rham cohomology of a manifold coincides with the singular cohomology of it (in short, because the singular cohomology can be viewed as a sheaf cohomology).
Once one knows the de Rham theorem, the conclusion of the Poincaré lemma can then be obtained purely topologically. For example, it implies a version of the Poincaré lemma for contractible or simply connected open sets (see §Simply connected case).
== Simply connected case ==
Especially in calculus, the Poincaré lemma is stated for a simply connected open subset U ⊂ ℝⁿ. In that case, the lemma says that each closed 1-form on U is exact. This version can be seen using algebraic topology as follows. The rational Hurewicz theorem (or rather the real analog of that) says that H₁(U; ℝ) = 0 since U is simply connected. Since ℝ is a field, the k-th cohomology Hᵏ(U; ℝ) is the dual vector space of the k-th homology H_k(U; ℝ). In particular, H¹(U; ℝ) = 0. By the de Rham theorem (which follows from the Poincaré lemma for open balls), H¹(U; ℝ) is the same as the first de Rham cohomology group (see § Implication for de Rham cohomology). Hence, each closed 1-form on U is exact.
== Poincaré lemma with compact support ==
There is also a version of the Poincaré lemma for compactly supported differential forms. The usual proof in the non-compact case does not go through since the homotopy h is not proper; thus, a different argument is needed in the compactly supported case.
== Complex-geometry analog ==
On complex manifolds, the use of the Dolbeault operators ∂ and ∂̄ for complex differential forms, which refine the exterior derivative by the formula d = ∂ + ∂̄, leads to the notion of ∂̄-closed and ∂̄-exact differential forms. The local exactness result for such closed forms is known as the Dolbeault–Grothendieck lemma (or ∂̄-Poincaré lemma); cf. § On polynomial differential forms. Importantly, the geometry of the domain on which a ∂̄-closed differential form is ∂̄-exact is more restricted than for the Poincaré lemma, since the proof of the Dolbeault–Grothendieck lemma holds on a polydisk (a product of disks in the complex plane, on which the multidimensional Cauchy's integral formula may be applied) and there exist counterexamples to the lemma even on contractible domains. The ∂̄-Poincaré lemma holds in more generality for pseudoconvex domains.
Using both the Poincaré lemma and the ∂̄-Poincaré lemma, a refined local ∂∂̄-Poincaré lemma can be proven, which is valid on domains upon which both the aforementioned lemmas are applicable. This lemma states that d-closed complex differential forms are actually locally ∂∂̄-exact (rather than just d- or ∂̄-exact, as implied by the above lemmas).
== Relative Poincaré lemma ==
The relative Poincaré lemma generalizes the Poincaré lemma from a point to a submanifold (or some more general locally closed subset). It states: let V be a submanifold of a manifold M and U a tubular neighborhood of V. If σ is a closed k-form on U, k ≥ 1, that vanishes on V, then there exists a (k − 1)-form η on U such that dη = σ and η vanishes on V.
The relative Poincaré lemma can be proved in the same way the original Poincaré lemma is proved. Indeed, since U is a tubular neighborhood, there is a smooth strong deformation retract from U to V; i.e., there is a smooth homotopy h_t : U → U from the projection U → V to the identity such that h_t is the identity on V. Then we have the homotopy formula on U:
{\displaystyle h_{1}^{*}-h_{0}^{*}=dJ+Jd}
where J is the homotopy operator given by either Lie derivatives or integration along fibers. Now, h₀(U) ⊂ V and so h₀*σ = 0. Since dσ = 0 and h₁*σ = σ, we get σ = dJσ; take η = Jσ. That η vanishes on V follows from the definition of J and the fact h_t(V) ⊂ V. (So the proof actually goes through if U is not a tubular neighborhood but if U deformation-retracts to V with homotopy relative to V.) ◻
== On polynomial differential forms ==
In characteristic zero, the following Poincaré lemma holds for polynomial differential forms.
Let k be a field of characteristic zero, R = k[x₁, …, xₙ] the polynomial ring and Ω¹ the free R-module with a basis written as dx₁, …, dxₙ. Then let Ω^p = ∧^p Ω¹ be the p-th exterior power of Ω¹ over R. Then the sequence of vector spaces
{\displaystyle 0\to k\to \Omega ^{0}{\overset {d}{\to }}\Omega ^{1}{\overset {d}{\to }}\cdots \to 0}
is exact, where the differential d is defined in the usual way; i.e., by linearity and
{\displaystyle d(f\,dx_{i_{1}}\wedge \cdots \wedge dx_{i_{p}})=\sum _{j}{\frac {\partial f}{\partial x_{j}}}dx_{j}\wedge dx_{i_{1}}\wedge \cdots \wedge dx_{i_{p}}.}
This version of the lemma is seen by a calculus-like argument. First note that ker(d : R → Ω¹) = k, clearly. Thus, we only need to check the exactness at p > 0. Let ω be a p-form. Then we write ω = ω₀ ∧ dx₁ + ω₁ where the ωᵢ's do not involve dx₁. Define the integration in x₁ by linearity and
{\displaystyle \int x_{1}^{r}\,dx_{1}={\frac {x_{1}^{r+1}}{r+1}},}
which is well-defined by the char zero assumption. Then let η = ∫ ω₀ dx₁, where the integration is applied to each coefficient in ω₀. Clearly, the fundamental theorem of calculus holds in our formal setup and thus we get:
{\displaystyle d\eta =\omega _{0}\wedge \,dx_{1}+\sigma }
where σ does not involve dx₁. Hence, ω − dη does not involve dx₁. Replacing ω by ω − dη, we can thus assume ω does not involve dx₁. From the assumption dω = 0, it easily follows that each coefficient in ω is independent of x₁; i.e., ω is a polynomial differential form in the variables x₂, …, xₙ. Hence, we are done by induction. ◻
Remark: With the same proof, the same results hold when R = k[[x₁, …, xₙ]] is the ring of formal power series or the ring of germs of holomorphic functions. A suitably modified proof also shows the ∂̄-Poincaré lemma; namely, the use of the fundamental theorem of calculus is replaced by Cauchy's integral formula.
== On singular spaces ==
The Poincaré lemma generally fails for singular spaces. For example, if one considers algebraic differential forms on a complex algebraic variety (in the Zariski topology), the lemma is not true for those differential forms. One way to resolve this is to use formal forms, and the resulting algebraic de Rham cohomology can compute the singular cohomology.
However, variants of the lemma may still hold for some singular spaces (the precise formulation and proof depend on the definitions of such spaces and of non-smooth differential forms on them). For example, Kontsevich and Soibelman claim the lemma holds for certain variants of differential forms (called PA forms) on their piecewise algebraic spaces.
The homotopy invariance fails for intersection cohomology; in particular, the Poincaré lemma fails for such cohomology.
== Footnote ==
== Notes ==
== References ==
Hartshorne, Robin (1975). "On the De Rham cohomology of algebraic varieties". Publications Mathématiques de l'Institut des Hautes Études Scientifiques. 45 (1): 6–99. doi:10.1007/BF02684298. ISSN 1618-1913.
Illusie, Luc (2012), Around the Poincaré lemma, after Beilinson (PDF) (talk notes)
Napier, Terrence; Ramachandran, Mohan (2011), An introduction to Riemann surfaces, Birkhäuser, ISBN 978-0-8176-4693-6
Conlon, Lawrence (2001). Differentiable Manifolds (2nd ed.). Springer. doi:10.1007/978-0-8176-4767-4. ISBN 978-0-8176-4766-7.
Warner, Frank W. (1983), Foundations of differentiable manifolds and Lie groups, Graduate Texts in Mathematics, vol. 94, Springer, ISBN 0-387-90894-3
== Further reading ==
"Poincaré lemma", ncatlab.org
https://mathoverflow.net/questions/287385/p-adic-poincaré-lemma
In mathematics, the Kronecker delta (named after Leopold Kronecker) is a function of two variables, usually just non-negative integers. The function is 1 if the variables are equal, and 0 otherwise:
{\displaystyle \delta _{ij}={\begin{cases}0&{\text{if }}i\neq j,\\1&{\text{if }}i=j.\end{cases}}}
or with use of Iverson brackets:
{\displaystyle \delta _{ij}=[i=j]\,}
For example, δ₁₂ = 0 because 1 ≠ 2, whereas δ₃₃ = 1 because 3 = 3.
The Kronecker delta appears naturally in many areas of mathematics, physics, engineering and computer science, as a means of compactly expressing its definition above.
In linear algebra, the n × n identity matrix I has entries equal to the Kronecker delta: I_ij = δ_ij, where i and j take the values 1, 2, ⋯, n, and the inner product of vectors can be written as
{\displaystyle \mathbf {a} \cdot \mathbf {b} =\sum _{i,j=1}^{n}a_{i}\delta _{ij}b_{j}=\sum _{i=1}^{n}a_{i}b_{i}.}
Here the Euclidean vectors are defined as n-tuples: a = (a₁, a₂, …, aₙ) and b = (b₁, b₂, …, bₙ), and the last step is obtained by using the values of the Kronecker delta to reduce the summation over j.
It is common for i and j to be restricted to a set of the form {1, 2, ..., n} or {0, 1, ..., n − 1}, but the Kronecker delta can be defined on an arbitrary set.
== Properties ==
The following equations are satisfied:
{\displaystyle {\begin{aligned}\sum _{j}\delta _{ij}a_{j}&=a_{i},\\\sum _{i}a_{i}\delta _{ij}&=a_{j},\\\sum _{k}\delta _{ik}\delta _{kj}&=\delta _{ij}.\end{aligned}}}
Therefore, the matrix δ can be considered as an identity matrix.
Another useful representation is the following form:
{\displaystyle \delta _{nm}=\lim _{N\to \infty }{\frac {1}{N}}\sum _{k=1}^{N}e^{2\pi i{\frac {k}{N}}(n-m)}}
This can be derived using the formula for the geometric series.
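For a fixed finite N the sum above is in fact already exact whenever n − m is not a nonzero multiple of N, since it is then a full sum of N-th roots of unity; the limit only removes that caveat. A small numerical check (a sketch assuming NumPy; the value N = 64 is arbitrary):

```python
import numpy as np

def kronecker_by_roots_of_unity(n, m, N=64):
    # (1/N) * sum_{k=1}^{N} exp(2*pi*i*k*(n - m)/N); exact for finite N
    # as long as n - m is not a nonzero multiple of N
    k = np.arange(1, N + 1)
    return np.sum(np.exp(2j * np.pi * k * (n - m) / N)).real / N

print(round(kronecker_by_roots_of_unity(3, 3)))   # 1
print(round(kronecker_by_roots_of_unity(3, 5)))   # 0
```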
== Alternative notation ==
Using the Iverson bracket:
δ
i
j
=
[
i
=
j
]
.
{\displaystyle \delta _{ij}=[i=j].}
Often, a single-argument notation
δ
i
{\displaystyle \delta _{i}}
is used, which is equivalent to setting
j
=
0
{\displaystyle j=0}
:
δ
i
=
δ
i
0
=
{
0
,
if
i
≠
0
1
,
if
i
=
0
{\displaystyle \delta _{i}=\delta _{i0}={\begin{cases}0,&{\text{if }}i\neq 0\\1,&{\text{if }}i=0\end{cases}}}
In linear algebra, it can be thought of as a tensor, and is written
δ
j
i
{\displaystyle \delta _{j}^{i}}
. Sometimes the Kronecker delta is called the substitution tensor.
== Digital signal processing ==
In the study of digital signal processing (DSP), the Kronecker delta function sometimes means the unit sample function δ[n], which represents a special case of the 2-dimensional Kronecker delta function δ_ij where the Kronecker indices include the number zero, and where one of the indices is zero:
{\displaystyle \delta [n]\equiv \delta _{n0}\equiv \delta _{0n}\qquad {\text{where }}-\infty <n<\infty }
Or more generally, where:
{\displaystyle \delta [n-k]\equiv \delta [k-n]\equiv \delta _{nk}\equiv \delta _{kn}\qquad {\text{where }}-\infty <n<\infty ,\ -\infty <k<\infty }
For discrete-time signals, it is conventional to place a single integer index in square braces; in contrast, the Kronecker delta δ_ij can have any number of indices. In LTI system theory, the discrete unit sample function is typically used as an input to a discrete-time system for determining the impulse response function of the system, which characterizes the system for any general input. In contrast, the typical purpose of the Kronecker delta function is for filtering terms from an Einstein summation convention.
The discrete unit sample function is more simply defined as:
{\displaystyle \delta [n]={\begin{cases}1&n=0\\0&n{\text{ is another integer}}\end{cases}}}
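A minimal numerical illustration (a sketch assuming NumPy; the 3-tap moving average is just a stand-in example system): feeding the unit sample into a discrete-time LTI system returns the system's impulse response, as described above.

```python
import numpy as np

def unit_sample(n):
    # delta[n]: 1 at n == 0, 0 at every other integer
    return np.where(n == 0, 1.0, 0.0)

n = np.arange(-5, 6)
x = unit_sample(n)

# Passing delta[n] through an LTI system yields its impulse response h[n];
# a 3-tap moving-average filter is used here purely as an example system.
h = np.convolve(x, np.ones(3) / 3, mode='same')
print(x)
print(h)
```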
In comparison, in continuous-time systems the Dirac delta function is often confused for both the Kronecker delta function and the unit sample function. The Dirac delta is defined as:
{\displaystyle {\begin{cases}\int _{-\varepsilon }^{+\varepsilon }\delta (t)dt=1&\forall \varepsilon >0\\\delta (t)=0&\forall t\neq 0\end{cases}}}
Unlike the Kronecker delta function δ_ij and the unit sample function δ[n], the Dirac delta function δ(t) does not have an integer index; instead it has a single continuous argument t.
In continuous-time systems, the term "unit impulse function" is used to refer to the Dirac delta function δ(t) or, in discrete-time systems, the Kronecker delta function δ[n].
== Notable properties ==
The Kronecker delta has the so-called sifting property that for j ∈ ℤ:
{\displaystyle \sum _{i=-\infty }^{\infty }a_{i}\delta _{ij}=a_{j}.}
and if the integers are viewed as a measure space, endowed with the counting measure, then this property coincides with the defining property of the Dirac delta function
{\displaystyle \int _{-\infty }^{\infty }\delta (x-y)f(x)\,dx=f(y),}
and in fact Dirac's delta was named after the Kronecker delta because of this analogous property. In signal processing it is usually the context (discrete or continuous time) that distinguishes the Kronecker and Dirac "functions". By convention, δ(t) generally indicates continuous time (Dirac), whereas arguments like i, j, k, l, m, and n are usually reserved for discrete time (Kronecker). Another common practice is to represent discrete sequences with square brackets; thus: δ[n]. The Kronecker delta is not the result of directly sampling the Dirac delta function.
The Kronecker delta forms the multiplicative identity element of an incidence algebra.
== Relationship to the Dirac delta function ==
In probability theory and statistics, the Kronecker delta and Dirac delta function can both be used to represent a discrete distribution. If the support of a distribution consists of points x = {x₁, ⋯, xₙ}, with corresponding probabilities p₁, ⋯, pₙ, then the probability mass function p(x) of the distribution over x can be written, using the Kronecker delta, as
{\displaystyle p(x)=\sum _{i=1}^{n}p_{i}\delta _{xx_{i}}.}
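As a small illustration (a sketch assuming NumPy; the support points and probabilities are made up for the example), the Kronecker delta simply picks out the probability attached to a matching support point:

```python
import numpy as np

support = np.array([0.0, 2.0, 5.0])      # the points x_1, ..., x_n
probs = np.array([0.2, 0.5, 0.3])        # the probabilities p_1, ..., p_n

def pmf(x):
    # p(x) = sum_i p_i * delta_{x, x_i}: only a matching support point contributes
    return float(np.sum(probs * (support == x)))

print(pmf(2.0))   # 0.5
print(pmf(3.0))   # 0.0
```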
Equivalently, the probability density function f(x) of the distribution can be written using the Dirac delta function as
{\displaystyle f(x)=\sum _{i=1}^{n}p_{i}\delta (x-x_{i}).}
Under certain conditions, the Kronecker delta can arise from sampling a Dirac delta function. For example, if a Dirac delta impulse occurs exactly at a sampling point and is ideally lowpass-filtered (with cutoff at the critical frequency) per the Nyquist–Shannon sampling theorem, the resulting discrete-time signal will be a Kronecker delta function.
== Generalizations ==
If it is considered as a type
(
1
,
1
)
{\displaystyle (1,1)}
tensor, the Kronecker tensor can be written
δ
j
i
{\displaystyle \delta _{j}^{i}}
with a covariant index
j
{\displaystyle j}
and contravariant index
i
{\displaystyle i}
:
δ
j
i
=
{
0
(
i
≠
j
)
,
1
(
i
=
j
)
.
{\displaystyle \delta _{j}^{i}={\begin{cases}0&(i\neq j),\\1&(i=j).\end{cases}}}
This tensor represents:
The identity mapping (or identity matrix), considered as a linear mapping
V
→
V
{\displaystyle V\to V}
or
V
∗
→
V
∗
{\displaystyle V^{*}\to V^{*}}
The trace or tensor contraction, considered as a mapping
V
∗
⊗
V
→
K
{\displaystyle V^{*}\otimes V\to K}
The map
K
→
V
∗
⊗
V
{\displaystyle K\to V^{*}\otimes V}
, representing scalar multiplication as a sum of outer products.
The generalized Kronecker delta or multi-index Kronecker delta of order
2
p
{\displaystyle 2p}
is a type
(
p
,
p
)
{\displaystyle (p,p)}
tensor that is completely antisymmetric in its
p
{\displaystyle p}
upper indices, and also in its
p
{\displaystyle p}
lower indices.
Two definitions that differ by a factor of p! are in use. The version presented below has nonzero components scaled to be ±1. The second version has nonzero components that are ±1/p!, with consequent changes in scaling factors in formulae, such as the scaling factors of 1/p! in § Properties of the generalized Kronecker delta below disappearing.
=== Definitions of the generalized Kronecker delta ===
In terms of the indices, the generalized Kronecker delta is defined as:
{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}={\begin{cases}{\phantom {-}}1&\quad {\text{if }}\nu _{1}\dots \nu _{p}{\text{ are distinct integers and are an even permutation of }}\mu _{1}\dots \mu _{p}\\-1&\quad {\text{if }}\nu _{1}\dots \nu _{p}{\text{ are distinct integers and are an odd permutation of }}\mu _{1}\dots \mu _{p}\\{\phantom {-}}0&\quad {\text{in all other cases}}.\end{cases}}}
Let
S
p
{\displaystyle \mathrm {S} _{p}}
be the symmetric group of degree
p
{\displaystyle p}
, then:
δ
ν
1
…
ν
p
μ
1
…
μ
p
=
∑
σ
∈
S
p
sgn
(
σ
)
δ
ν
σ
(
1
)
μ
1
⋯
δ
ν
σ
(
p
)
μ
p
=
∑
σ
∈
S
p
sgn
(
σ
)
δ
ν
1
μ
σ
(
1
)
⋯
δ
ν
p
μ
σ
(
p
)
.
{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}=\sum _{\sigma \in \mathrm {S} _{p}}\operatorname {sgn}(\sigma )\,\delta _{\nu _{\sigma (1)}}^{\mu _{1}}\cdots \delta _{\nu _{\sigma (p)}}^{\mu _{p}}=\sum _{\sigma \in \mathrm {S} _{p}}\operatorname {sgn}(\sigma )\,\delta _{\nu _{1}}^{\mu _{\sigma (1)}}\cdots \delta _{\nu _{p}}^{\mu _{\sigma (p)}}.}
Using anti-symmetrization:
δ
ν
1
…
ν
p
μ
1
…
μ
p
=
p
!
δ
[
ν
1
μ
1
…
δ
ν
p
]
μ
p
=
p
!
δ
ν
1
[
μ
1
…
δ
ν
p
μ
p
]
.
{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}=p!\delta _{[\nu _{1}}^{\mu _{1}}\dots \delta _{\nu _{p}]}^{\mu _{p}}=p!\delta _{\nu _{1}}^{[\mu _{1}}\dots \delta _{\nu _{p}}^{\mu _{p}]}.}
In terms of a
p
×
p
{\displaystyle p\times p}
determinant:
δ
ν
1
…
ν
p
μ
1
…
μ
p
=
|
δ
ν
1
μ
1
⋯
δ
ν
p
μ
1
⋮
⋱
⋮
δ
ν
1
μ
p
⋯
δ
ν
p
μ
p
|
.
{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}={\begin{vmatrix}\delta _{\nu _{1}}^{\mu _{1}}&\cdots &\delta _{\nu _{p}}^{\mu _{1}}\\\vdots &\ddots &\vdots \\\delta _{\nu _{1}}^{\mu _{p}}&\cdots &\delta _{\nu _{p}}^{\mu _{p}}\end{vmatrix}}.}
Using the Laplace expansion (Laplace's formula) of determinant, it may be defined recursively:
δ
ν
1
…
ν
p
μ
1
…
μ
p
=
∑
k
=
1
p
(
−
1
)
p
+
k
δ
ν
k
μ
p
δ
ν
1
…
ν
ˇ
k
…
ν
p
μ
1
…
μ
k
…
μ
ˇ
p
=
δ
ν
p
μ
p
δ
ν
1
…
ν
p
−
1
μ
1
…
μ
p
−
1
−
∑
k
=
1
p
−
1
δ
ν
k
μ
p
δ
ν
1
…
ν
k
−
1
ν
p
ν
k
+
1
…
ν
p
−
1
μ
1
…
μ
k
−
1
μ
k
μ
k
+
1
…
μ
p
−
1
,
{\displaystyle {\begin{aligned}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}&=\sum _{k=1}^{p}(-1)^{p+k}\delta _{\nu _{k}}^{\mu _{p}}\delta _{\nu _{1}\dots {\check {\nu }}_{k}\dots \nu _{p}}^{\mu _{1}\dots \mu _{k}\dots {\check {\mu }}_{p}}\\&=\delta _{\nu _{p}}^{\mu _{p}}\delta _{\nu _{1}\dots \nu _{p-1}}^{\mu _{1}\dots \mu _{p-1}}-\sum _{k=1}^{p-1}\delta _{\nu _{k}}^{\mu _{p}}\delta _{\nu _{1}\dots \nu _{k-1}\,\nu _{p}\,\nu _{k+1}\dots \nu _{p-1}}^{\mu _{1}\dots \mu _{k-1}\,\mu _{k}\,\mu _{k+1}\dots \mu _{p-1}},\end{aligned}}}
where the caron,
ˇ
{\displaystyle {\check {}}}
, indicates an index that is omitted from the sequence.
When
p
=
n
{\displaystyle p=n}
(the dimension of the vector space), in terms of the Levi-Civita symbol:
δ
ν
1
…
ν
n
μ
1
…
μ
n
=
ε
μ
1
…
μ
n
ε
ν
1
…
ν
n
.
{\displaystyle \delta _{\nu _{1}\dots \nu _{n}}^{\mu _{1}\dots \mu _{n}}=\varepsilon ^{\mu _{1}\dots \mu _{n}}\varepsilon _{\nu _{1}\dots \nu _{n}}\,.}
More generally, for
m
=
n
−
p
{\displaystyle m=n-p}
, using the Einstein summation convention:
δ
ν
1
…
ν
p
μ
1
…
μ
p
=
1
m
!
ε
κ
1
…
κ
m
μ
1
…
μ
p
ε
κ
1
…
κ
m
ν
1
…
ν
p
.
{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}={\tfrac {1}{m!}}\varepsilon ^{\kappa _{1}\dots \kappa _{m}\mu _{1}\dots \mu _{p}}\varepsilon _{\kappa _{1}\dots \kappa _{m}\nu _{1}\dots \nu _{p}}\,.}
=== Contractions of the generalized Kronecker delta ===
Kronecker delta contractions depend on the dimension of the space. For example,
{\displaystyle \delta _{\mu _{1}}^{\nu _{1}}\delta _{\nu _{1}\nu _{2}}^{\mu _{1}\mu _{2}}=(d-1)\delta _{\nu _{2}}^{\mu _{2}},}
where d is the dimension of the space. From this relation the fully contracted delta is obtained as
{\displaystyle \delta _{\mu _{1}\mu _{2}}^{\nu _{1}\nu _{2}}\delta _{\nu _{1}\nu _{2}}^{\mu _{1}\mu _{2}}=2d(d-1).}
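A numerical sanity check of these two contractions (a sketch assuming NumPy, with d = 4 chosen arbitrarily), using the p = 2 determinant form δ^{μ₁μ₂}_{ν₁ν₂} = δ^{μ₁}_{ν₁}δ^{μ₂}_{ν₂} − δ^{μ₁}_{ν₂}δ^{μ₂}_{ν₁}:

```python
import numpy as np

d = 4                                      # dimension of the space (arbitrary)
I = np.eye(d)

# p = 2 generalized delta, indexed as g[mu1, mu2, nu1, nu2]
g = np.einsum('ac,bd->abcd', I, I) - np.einsum('ad,bc->abcd', I, I)

# one contraction: delta^{nu1}_{mu1} delta^{mu1 mu2}_{nu1 nu2} = (d - 1) delta^{mu2}_{nu2}
assert np.allclose(np.einsum('abad->bd', g), (d - 1) * I)

# full contraction: delta^{nu1 nu2}_{mu1 mu2} delta^{mu1 mu2}_{nu1 nu2} = 2 d (d - 1)
assert np.isclose(np.einsum('abcd,cdab->', g, g), 2 * d * (d - 1))
print("contraction identities verified for d =", d)
```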
The generalization of the preceding formulas is
δ
μ
1
…
μ
n
ν
1
…
ν
n
δ
ν
1
…
ν
p
μ
1
…
μ
p
=
n
!
(
d
−
p
+
n
)
!
(
d
−
p
)
!
δ
ν
n
+
1
…
ν
p
μ
n
+
1
…
μ
p
.
{\displaystyle \delta _{\mu _{1}\dots \mu _{n}}^{\nu _{1}\dots \nu _{n}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}=n!{\frac {(d-p+n)!}{(d-p)!}}\delta _{\nu _{n+1}\dots \nu _{p}}^{\mu _{n+1}\dots \mu _{p}}.}
=== Properties of the generalized Kronecker delta ===
The generalized Kronecker delta may be used for anti-symmetrization:
1
p
!
δ
ν
1
…
ν
p
μ
1
…
μ
p
a
ν
1
…
ν
p
=
a
[
μ
1
…
μ
p
]
,
1
p
!
δ
ν
1
…
ν
p
μ
1
…
μ
p
a
μ
1
…
μ
p
=
a
[
ν
1
…
ν
p
]
.
{\displaystyle {\begin{aligned}{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}a^{\nu _{1}\dots \nu _{p}}&=a^{[\mu _{1}\dots \mu _{p}]},\\{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}a_{\mu _{1}\dots \mu _{p}}&=a_{[\nu _{1}\dots \nu _{p}]}.\end{aligned}}}
From the above equations and the properties of anti-symmetric tensors, we can derive the properties of the generalized Kronecker delta:
1
p
!
δ
ν
1
…
ν
p
μ
1
…
μ
p
a
[
ν
1
…
ν
p
]
=
a
[
μ
1
…
μ
p
]
,
1
p
!
δ
ν
1
…
ν
p
μ
1
…
μ
p
a
[
μ
1
…
μ
p
]
=
a
[
ν
1
…
ν
p
]
,
1
p
!
δ
ν
1
…
ν
p
μ
1
…
μ
p
δ
κ
1
…
κ
p
ν
1
…
ν
p
=
δ
κ
1
…
κ
p
μ
1
…
μ
p
,
{\displaystyle {\begin{aligned}{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}a^{[\nu _{1}\dots \nu _{p}]}&=a^{[\mu _{1}\dots \mu _{p}]},\\{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}a_{[\mu _{1}\dots \mu _{p}]}&=a_{[\nu _{1}\dots \nu _{p}]},\\{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}\delta _{\kappa _{1}\dots \kappa _{p}}^{\nu _{1}\dots \nu _{p}}&=\delta _{\kappa _{1}\dots \kappa _{p}}^{\mu _{1}\dots \mu _{p}},\end{aligned}}}
which are the generalized version of formulae written in § Properties. The last formula is equivalent to the Cauchy–Binet formula.
Reducing the order via summation of the indices may be expressed by the identity
δ
ν
1
…
ν
s
μ
s
+
1
…
μ
p
μ
1
…
μ
s
μ
s
+
1
…
μ
p
=
(
n
−
s
)
!
(
n
−
p
)
!
δ
ν
1
…
ν
s
μ
1
…
μ
s
.
{\displaystyle \delta _{\nu _{1}\dots \nu _{s}\,\mu _{s+1}\dots \mu _{p}}^{\mu _{1}\dots \mu _{s}\,\mu _{s+1}\dots \mu _{p}}={\frac {(n-s)!}{(n-p)!}}\delta _{\nu _{1}\dots \nu _{s}}^{\mu _{1}\dots \mu _{s}}.}
Using both the summation rule for the case
p
=
n
{\displaystyle p=n}
and the relation with the Levi-Civita symbol, the summation rule of the Levi-Civita symbol is derived:
δ
ν
1
…
ν
p
μ
1
…
μ
p
=
1
(
n
−
p
)
!
ε
μ
1
…
μ
p
κ
p
+
1
…
κ
n
ε
ν
1
…
ν
p
κ
p
+
1
…
κ
n
.
{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}={\frac {1}{(n-p)!}}\varepsilon ^{\mu _{1}\dots \mu _{p}\,\kappa _{p+1}\dots \kappa _{n}}\varepsilon _{\nu _{1}\dots \nu _{p}\,\kappa _{p+1}\dots \kappa _{n}}.}
The 4D version of the last relation appears in Penrose's spinor approach to general relativity that he later generalized, while he was developing Aitken's diagrams, to become part of the technique of Penrose graphical notation. Also, this relation is extensively used in S-duality theories, especially when written in the language of differential forms and Hodge duals.
== Integral representations ==
For any integers j and k, the Kronecker delta can be written as a complex contour integral using a standard residue calculation. The integral is taken over the unit circle in the complex plane, oriented counterclockwise. An equivalent representation of the integral arises by parameterizing the contour by an angle around the origin.
{\displaystyle \delta _{jk}={\frac {1}{2\pi i}}\oint _{|z|=1}z^{j-k-1}\,dz={\frac {1}{2\pi }}\int _{0}^{2\pi }e^{i(j-k)\varphi }\,d\varphi }
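Both representations can be checked symbolically; for instance the angular form, for integers j ≠ k, integrates a nontrivial complex exponential over a full period and therefore vanishes (a sketch assuming SymPy; the values j = 5, k = 2 are arbitrary):

```python
import sympy as sp

phi = sp.symbols('phi', real=True)

def kronecker_by_integral(j, k):
    # (1 / (2*pi)) * integral_0^{2*pi} exp(i*(j - k)*phi) dphi
    val = sp.integrate(sp.exp(sp.I * (j - k) * phi), (phi, 0, 2 * sp.pi))
    return sp.simplify(val / (2 * sp.pi))

print(kronecker_by_integral(5, 2))   # 0
print(kronecker_by_integral(3, 3))   # 1
```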
== The Kronecker comb ==
The Kronecker comb function with period
N
{\displaystyle N}
is defined (using DSP notation) as:
Δ
N
[
n
]
=
∑
k
=
−
∞
∞
δ
[
n
−
k
N
]
,
{\displaystyle \Delta _{N}[n]=\sum _{k=-\infty }^{\infty }\delta [n-kN],}
where
N
≠
0
{\displaystyle N\neq 0}
and
n
{\displaystyle n}
are integers. The Kronecker comb thus consists of an infinite series of unit impulses that are N units apart, aligned so one of the impulses occurs at zero. It may be considered to be the discrete analog of the Dirac comb.
== See also ==
Dirac measure
Indicator function
Heaviside step function
Levi-Civita symbol
Minkowski metric
't Hooft symbol
Unit function
XNOR gate
== References ==
In differential geometry, an equivariant differential form on a manifold M acted upon by a Lie group G is a polynomial map α : 𝔤 → Ω*(M) from the Lie algebra 𝔤 = Lie(G) to the space of differential forms on M that are equivariant; i.e.,
{\displaystyle \alpha (\operatorname {Ad} (g)X)=g\alpha (X).}
In other words, an equivariant differential form is an invariant element of
{\displaystyle \mathbb {C} [{\mathfrak {g}}]\otimes \Omega ^{*}(M)=\operatorname {Sym} ({\mathfrak {g}}^{*})\otimes \Omega ^{*}(M).}
For an equivariant differential form α, the equivariant exterior derivative d_𝔤α of α is defined by
{\displaystyle (d_{\mathfrak {g}}\alpha )(X)=d(\alpha (X))-i_{X^{\#}}(\alpha (X))}
where d is the usual exterior derivative and i_{X^#} is the interior product by the fundamental vector field generated by X.
It is easy to see d_𝔤 ∘ d_𝔤 = 0 (use the fact that the Lie derivative of α(X) along X^# is zero) and one then puts
{\displaystyle H_{G}^{*}(X)=\ker d_{\mathfrak {g}}/\operatorname {im} d_{\mathfrak {g}},}
which is called the equivariant cohomology of M (which coincides with the ordinary equivariant cohomology defined in terms of the Borel construction). The definition is due to H. Cartan. The notion has an application to the equivariant index theory.
d_𝔤-closed or d_𝔤-exact forms are called equivariantly closed or equivariantly exact.
The integral of an equivariantly closed form may be evaluated from its restriction to the fixed point by means of the localization formula.
== References ==
Berline, Nicole; Getzler, E.; Vergne, Michèle (2004), Heat Kernels and Dirac Operators, Springer, ISBN 978-3-540-20062-8
In mathematics, a locally constant function is a function from a topological space into a set with the property that around every point of its domain, there exists some neighborhood of that point on which it restricts to a constant function.
== Definition ==
Let f : X → S be a function from a topological space X into a set S. If x ∈ X then f is said to be locally constant at x if there exists a neighborhood U ⊆ X of x such that f is constant on U, which by definition means that f(u) = f(v) for all u, v ∈ U. The function f : X → S is called locally constant if it is locally constant at every point x ∈ X in its domain.
== Examples ==
Every constant function is locally constant. The converse will hold if its domain is a connected space.
Every locally constant function from the real numbers ℝ to ℝ is constant, by the connectedness of ℝ. But the function f : ℚ → ℝ from the rationals ℚ to ℝ, defined by f(x) = 0 for x < π and f(x) = 1 for x > π, is locally constant (this uses the fact that π is irrational and that therefore the two sets {x ∈ ℚ : x < π} and {x ∈ ℚ : x > π} are both open in ℚ).
If f : A → B is locally constant, then it is constant on any connected component of A. The converse is true for locally connected spaces, which are spaces whose connected components are open subsets.
Further examples include the following:
Given a covering map p : C → X, then to each point x ∈ X we can assign the cardinality of the fiber p⁻¹(x) over x; this assignment is locally constant.
A map from a topological space A to a discrete space B is continuous if and only if it is locally constant.
== Connection with sheaf theory ==
There are sheaves of locally constant functions on X. To be more definite, the locally constant integer-valued functions on X form a sheaf in the sense that for each open set U of X we can form the functions of this kind, and then verify that the sheaf axioms hold for this construction, giving us a sheaf of abelian groups (even commutative rings). This sheaf could be written Z_X; described by means of stalks we have stalk Z_x, a copy of Z at x, for each x ∈ X. This can be referred to as a constant sheaf, meaning exactly a sheaf of locally constant functions taking their values in the (same) group. The typical sheaf of course is not constant in this way; but the construction is useful in linking up sheaf cohomology with homology theory, and in logical applications of sheaves. The idea of a local coefficient system is that we can have a theory of sheaves that locally look like such 'harmless' sheaves (near any x), but from a global point of view exhibit some 'twisting'.
== See also ==
Liouville's theorem (complex analysis) – Theorem in complex analysis
Locally constant sheaf
== References ==
In algebra, the ring of polynomial differential forms on the standard n-simplex is the differential graded algebra:
{\displaystyle \Omega _{\text{poly}}^{*}([n])=\mathbb {Q} [t_{0},...,t_{n},dt_{0},...,dt_{n}]/(\sum t_{i}-1,\sum dt_{i}).}
Varying n, it determines the simplicial commutative dg algebra Ω*_poly (each u : [n] → [m] induces the map {\displaystyle \Omega _{\text{poly}}^{*}([m])\to \Omega _{\text{poly}}^{*}([n]),\ t_{i}\mapsto \sum _{u(j)=i}t_{j}}).
== References ==
Aldridge Bousfield and V. K. A. M. Gugenheim, §1 and §2 of: On PL De Rham Theory and Rational Homotopy Type, Memoirs of the A. M. S., vol. 179, 1976.
Hinich, Vladimir (1997-02-11). "Homological algebra of homotopy algebras". arXiv:q-alg/9702015.
== External links ==
https://ncatlab.org/nlab/show/differential+forms+on+simplices
https://mathoverflow.net/questions/220532/polynomial-differential-forms-on-bg
In probability theory, the law of the iterated logarithm describes the magnitude of the fluctuations of a random walk. The original statement of the law of the iterated logarithm is due to A. Ya. Khinchin (1924). Another statement was given by A. N. Kolmogorov in 1929.
== Statement ==
Let {Yn} be independent, identically distributed random variables with zero means and unit variances. Let Sn = Y1 + ... + Yn. Then
{\displaystyle \limsup _{n\to \infty }{\frac {|S_{n}|}{\sqrt {2n\log \log n}}}=1\quad {\text{a.s.}},}
where "log" is the natural logarithm, "lim sup" denotes the limit superior, and "a.s." stands for "almost surely".
Another statement given by A. N. Kolmogorov in 1929 is as follows.
Let {Y_n} be independent random variables with zero means and finite variances. Let S_n = Y₁ + ⋯ + Y_n and B_n = Var(Y₁) + ⋯ + Var(Y_n). If B_n → ∞ and there exists a sequence of positive constants {M_n} such that |Y_n| ≤ M_n a.s. and
{\displaystyle M_{n}\;=\;o\left({\sqrt {\frac {B_{n}}{\log \log B_{n}}}}\right),}
then we have
{\displaystyle \limsup _{n\to \infty }{\frac {|S_{n}|}{\sqrt {2B_{n}\log \log B_{n}}}}=1\quad {\text{a.s.}}}
Note that the first statement covers the case of the standard normal distribution, but the second does not.
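A Monte Carlo illustration (not a proof; a sketch assuming NumPy, with Rademacher ±1 steps and N = 10⁶ chosen only for speed): the running maximum of |Sₙ|/√(2n log log n) over a long stretch of the walk typically lands in the rough vicinity of 1, in line with the first statement.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
steps = rng.choice([-1.0, 1.0], size=N)       # zero mean, unit variance
S = np.cumsum(steps)                          # S[k] corresponds to S_{k+1}

n = np.arange(1, N + 1)
tail = n >= 10_000                            # ignore the small-n transient
ratio = np.abs(S[tail]) / np.sqrt(2 * n[tail] * np.log(np.log(n[tail])))
print(ratio.max())                            # typically of order 1
```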
== Discussion ==
The law of the iterated logarithm operates "in between" the law of large numbers and the central limit theorem. There are two versions of the law of large numbers, the weak and the strong, and they both state that the sums Sn, scaled by n⁻¹, converge to zero, respectively in probability and almost surely:
{\displaystyle {\frac {S_{n}}{n}}\ {\xrightarrow {p}}\ 0,\qquad {\frac {S_{n}}{n}}\ {\xrightarrow {a.s.}}0,\qquad {\text{as}}\ \ n\to \infty .}
On the other hand, the central limit theorem states that the sums Sn scaled by the factor n−1/2 converge in distribution to a standard normal distribution. By Kolmogorov's zero–one law, for any fixed M, the probability that the event
{\displaystyle \limsup _{n}{\frac {S_{n}}{\sqrt {n}}}\geq M}
occurs is 0 or 1.
Then
{\displaystyle \Pr \left(\limsup _{n}{\frac {S_{n}}{\sqrt {n}}}\geq M\right)\geqslant \limsup _{n}\Pr \left({\frac {S_{n}}{\sqrt {n}}}\geq M\right)=\Pr \left({\mathcal {N}}(0,1)\geq M\right)>0}
so
{\displaystyle \limsup _{n}{\frac {S_{n}}{\sqrt {n}}}=\infty \qquad {\text{with probability 1.}}}
An identical argument shows that
{\displaystyle \liminf _{n}{\frac {S_{n}}{\sqrt {n}}}=-\infty \qquad {\text{with probability 1.}}}
This implies that these quantities cannot converge almost surely. In fact, they cannot even converge in probability, which follows from the equality
{\displaystyle {\frac {S_{2n}}{\sqrt {2n}}}-{\frac {S_{n}}{\sqrt {n}}}={\frac {1}{\sqrt {2}}}{\frac {S_{2n}-S_{n}}{\sqrt {n}}}-\left(1-{\frac {1}{\sqrt {2}}}\right){\frac {S_{n}}{\sqrt {n}}}}
and the fact that the random variables
{\displaystyle {\frac {S_{n}}{\sqrt {n}}}\quad {\text{and}}\quad {\frac {S_{2n}-S_{n}}{\sqrt {n}}}}
are independent and both converge in distribution to
{\displaystyle {\mathcal {N}}(0,1).}
The law of the iterated logarithm provides the scaling factor where the two limits become different:
{\displaystyle {\frac {S_{n}}{\sqrt {2n\log \log n}}}\ {\xrightarrow {p}}\ 0,\qquad {\frac {S_{n}}{\sqrt {2n\log \log n}}}\ {\stackrel {a.s.}{\nrightarrow }}\ 0,\qquad {\text{as}}\ \ n\to \infty .}
Thus, although the absolute value of the quantity
{\displaystyle S_{n}/{\sqrt {2n\log \log n}}}
is less than any predefined ε > 0 with probability approaching one, it will nevertheless almost surely be greater than ε infinitely often; in fact, the quantity almost surely visits the neighborhood of every point in the interval (−1,1).
== Generalizations and variants ==
The law of the iterated logarithm (LIL) for a sum of independent and identically distributed (i.i.d.) random variables with zero mean and bounded increment dates back to Khinchin and Kolmogorov in the 1920s.
Since then, there has been a tremendous amount of work on the LIL for various kinds of
dependent structures and for stochastic processes. The following is a small sample of notable developments.
Hartman–Wintner (1940) generalized LIL to random walks with increments with zero mean and finite variance. De Acosta (1983) gave a simple proof of the Hartman–Wintner version of the LIL.
Chung (1948) proved another version of the law of the iterated logarithm for the absolute value of a Brownian motion.
Strassen (1964) studied the LIL from the point of view of invariance principles.
Stout (1970) generalized the LIL to stationary ergodic martingales.
Wittmann (1985) generalized the Hartman–Wintner version of the LIL to random walks satisfying milder conditions.
Vovk (1987) derived a version of LIL valid for a single chaotic sequence (Kolmogorov random sequence). This is notable, as it is outside the realm of classical probability theory.
Yongge Wang (1996) showed that the law of the iterated logarithm holds for polynomial time pseudorandom sequences also. The Java-based software testing tool tests whether a pseudorandom generator outputs sequences that satisfy the LIL.
Balsubramani (2014) proved a non-asymptotic LIL that holds over finite-time martingale sample paths. This subsumes the martingale LIL as it provides matching finite-sample concentration and anti-concentration bounds, and enables sequential testing and other applications.
== See also ==
Iterated logarithm
Brownian motion
== Notes == | Wikipedia/Law_of_the_iterated_logarithm |
A Boolean network consists of a discrete set of Boolean variables, each of which is assigned a Boolean function (possibly different for each variable) that takes inputs from a subset of those variables and outputs a value determining the state of the variable it is assigned to. This set of functions in effect determines a topology (connectivity) on the set of variables, which then become nodes in a network. Usually, the dynamics of the system is taken as a discrete time series where the state of the entire network at time t+1 is determined by evaluating each variable's function on the state of the network at time t. This may be done synchronously or asynchronously.
Boolean networks have been used in biology to model regulatory networks. Although Boolean networks are a crude simplification of genetic reality, where genes are not simple binary switches, there are several cases where they correctly capture the pattern of expressed and suppressed genes.
The seemingly mathematically easy (synchronous) model was only fully understood in the mid-2000s.
== Classical model ==
A Boolean network is a particular kind of sequential dynamical system, where time and states are discrete, i.e. both the set of variables and the set of states in the time series each have a bijection onto an integer series.
A random Boolean network (RBN) is one that is randomly selected from the set of all possible Boolean networks of a particular size, N. One then can study statistically, how the expected properties of such networks depend on various statistical properties of the ensemble of all possible networks. For example, one may study how the RBN behavior changes as the average connectivity is changed.
The first Boolean networks were proposed by Stuart A. Kauffman in 1969, as random models of genetic regulatory networks but their mathematical understanding only started in the 2000s.
=== Attractors ===
Since a Boolean network has only 2^N possible states, a trajectory will sooner or later reach a previously visited state, and thus, since the dynamics are deterministic, the trajectory will fall into a steady state or cycle called an attractor (though in the broader field of dynamical systems a cycle is only an attractor if perturbations from it lead back to it). If the attractor has only a single state it is called a point attractor, and if the attractor consists of more than one state it is called a cycle attractor. The set of states that lead to an attractor is called the basin of the attractor. States which occur only at the beginning of trajectories (no trajectories lead to them) are called garden-of-Eden states, and the dynamics of the network flow from these states towards attractors. The time it takes to reach an attractor is called transient time.
With growing computer power and increasing understanding of the seemingly simple model, different authors have given different estimates for the mean number and length of the attractors.
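As a minimal illustration of synchronous updating, transients and attractors (not taken from any of the cited publications; the network size, wiring and random seed are arbitrary), the following sketch builds a random Boolean network and follows one trajectory until a state repeats.

```python
# A minimal sketch of a synchronously updated random Boolean network
# (N nodes, K inputs each); it reports the transient time and the
# length of the attractor reached from one random initial state.
import itertools
import random

random.seed(1)
N, K = 8, 2
inputs = [random.sample(range(N), K) for _ in range(N)]           # wiring
tables = [{bits: random.randint(0, 1)
           for bits in itertools.product((0, 1), repeat=K)}
          for _ in range(N)]                                      # random truth tables

def step(state):
    return tuple(tables[i][tuple(state[j] for j in inputs[i])] for i in range(N))

state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
t = 0
while state not in seen:
    seen[state] = t
    state = step(state)
    t += 1
print("transient time:", seen[state], "attractor length:", t - seen[state])
```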
== Stability ==
In dynamical systems theory, the structure and length of the attractors of a network correspond to the dynamic phase of the network. The stability of Boolean networks depends on the connections of their nodes. A Boolean network can exhibit stable, critical or chaotic behavior. This phenomenon is governed by a critical value of the average number of connections of nodes ({\displaystyle K_{c}}), and can be characterized by the Hamming distance as distance measure. In the unstable regime, the distance between two initially close states on average grows exponentially in time, while in the stable regime it decreases exponentially. Here, "initially close states" means that the Hamming distance is small compared with the number of nodes ({\displaystyle N}) in the network.
For the N–K model the network is stable if {\displaystyle K<K_{c}}, critical if {\displaystyle K=K_{c}}, and unstable if {\displaystyle K>K_{c}}.
The state of a given node {\displaystyle n_{i}} is updated according to its truth table, whose outputs are randomly populated. {\displaystyle p_{i}} denotes the probability of assigning an off output to a given series of input signals.
If {\displaystyle p_{i}=p=const.} for every node, the transition between the stable and chaotic range depends on {\displaystyle p}. According to Bernard Derrida and Yves Pomeau, the critical value of the average number of connections is {\displaystyle K_{c}=1/[2p(1-p)]}.
If {\displaystyle K} is not constant, and there is no correlation between the in-degrees and out-degrees, the condition of stability is determined by {\displaystyle \langle K^{in}\rangle }. The network is stable if {\displaystyle \langle K^{in}\rangle <K_{c}}, critical if {\displaystyle \langle K^{in}\rangle =K_{c}}, and unstable if {\displaystyle \langle K^{in}\rangle >K_{c}}.
The conditions of stability are the same in the case of networks with scale-free topology where the in- and out-degree distribution is a power-law distribution: {\displaystyle P(K)\propto K^{-\gamma }}, and {\displaystyle \langle K^{in}\rangle =\langle K^{out}\rangle }, since every out-link from a node is an in-link to another.
Sensitivity shows the probability that the output of the Boolean function of a given node changes if its input changes. For random Boolean networks,
{\displaystyle q_{i}=2p_{i}(1-p_{i})}. In the general case, stability of the network is governed by the largest eigenvalue {\displaystyle \lambda _{Q}} of the matrix {\displaystyle Q}, where {\displaystyle Q_{ij}=q_{i}A_{ij}} and {\displaystyle A} is the adjacency matrix of the network. The network is stable if {\displaystyle \lambda _{Q}<1}, critical if {\displaystyle \lambda _{Q}=1}, and unstable if {\displaystyle \lambda _{Q}>1}.
== Variations of the model ==
=== Other topologies ===
One theme is to study different underlying graph topologies.
The homogeneous case simply refers to a grid, which is the reduction to the famous Ising model.
Scale-free topologies may be chosen for Boolean networks. One can distinguish the case where only the in-degree distribution is power-law distributed, only the out-degree distribution is, or both.
=== Other updating schemes ===
Classical Boolean networks (sometimes called CRBN, i.e. Classic Random Boolean Network) are synchronously updated. Motivated by the fact that genes don't usually change their state simultaneously, different alternatives have been introduced. A common classification is the following:
Deterministic asynchronous updated Boolean networks (DRBNs) are not synchronously updated but a deterministic solution still exists. A node i will be updated when t ≡ Qi (mod Pi) where t is the time step.
The most general case is full stochastic updating (GARBN, general asynchronous random Boolean networks). Here, one (or more) node(s) are selected at each computational step to be updated.
The Partially-Observed Boolean Dynamical System (POBDS) signal model differs from all previous deterministic and stochastic Boolean network models by removing the assumption of direct observability of the Boolean state vector and allowing uncertainty in the observation process, addressing the scenario encountered in practice.
Autonomous Boolean networks (ABNs) are updated in continuous time (t is a real number, not an integer), which leads to race conditions and complex dynamical behavior such as deterministic chaos.
== Applications of Boolean networks ==
=== Classification ===
The scalable optimal Bayesian classification approach developed an optimal classification of trajectories that accounts for potential model uncertainty, and also proposed a particle-based trajectory classification that is highly scalable for large networks, with much lower complexity than the optimal solution.
== See also ==
NK model
== References ==
Dubrova, E., Teslenko, M., Martinelli, A. (2005). "Kauffman Networks: Analysis and Applications", in Proceedings of the International Conference on Computer-Aided Design, pages 479–484.
== External links ==
Analysis of Dynamic Algebraic Models (ADAM) v1.1
bioasp/bonesis: Synthesis of Most Permissive Boolean Networks from network architecture and dynamical properties
CoLoMoTo (Consortium for Logical Models and Tools)
DDLab
NetBuilder Boolean Networks Simulator
Open Source Boolean Network Simulator
JavaScript Kauffman Network
Probabilistic Boolean Networks (PBN)
RBNLab
A SAT-based tool for computing attractors in Boolean Networks | Wikipedia/Boolean_network |
The Wilkie investment model, often just called the Wilkie model, is a stochastic asset model developed by A. D. Wilkie that describes the behavior of various economic factors as stochastic time series. These time series are generated by autoregressive models. The main factor of the model, which influences all asset prices, is the consumer price index. The model is mainly in use for actuarial work and asset liability management. Because of its stochastic properties, the model is mainly combined with Monte Carlo methods.
Wilkie first proposed the model in 1986, in a paper published in the Transactions of the Faculty of Actuaries. It has since been the subject of extensive study and debate. Wilkie himself updated and expanded the model in a second paper published in 1995.
He advises using the model to determine the "funnel of doubt", which can be seen as an interval of minimum and maximum development of a corresponding economic factor.
== Components ==
price inflation
wage inflation
share yield
share dividend
consols yield (long-term interest rate)
bank rate (short-term interest rate)
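As a rough illustration of how the autoregressive components are simulated (the parameter values below are purely illustrative and are not Wilkie's published calibration), the following sketch generates the price-inflation series, the driver of the other factors, as a first-order autoregression on the force of inflation.

```python
# A minimal Monte Carlo sketch of the price-inflation component (illustrative parameters).
import numpy as np

rng = np.random.default_rng(42)
mu, a, sigma = 0.04, 0.6, 0.02         # long-run mean, mean-reversion weight, shock s.d.
years = 30
infl = np.empty(years)
infl[0] = mu
for t in range(1, years):
    # I(t) = mu + a * (I(t-1) - mu) + sigma * Z_t   (force of inflation)
    infl[t] = mu + a * (infl[t - 1] - mu) + sigma * rng.standard_normal()

cpi = np.exp(np.cumsum(infl))           # consumer price index path (base 1.0)
print("simulated CPI after", years, "years:", round(cpi[-1], 3))
```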
== References == | Wikipedia/Wilkie_investment_model |
In finance, a foreign exchange option (commonly shortened to just FX option or currency option) is a derivative financial instrument that gives the right but not the obligation to exchange money denominated in one currency into another currency at a pre-agreed exchange rate on a specified date. See Foreign exchange derivative.
== Valuation: the Garman–Kohlhagen model ==
As in the Black–Scholes model for stock options and the Black model for certain interest rate options, the value of a European option on an FX rate is typically calculated by assuming that the rate follows a log-normal process.
The earliest currency option pricing model was published by Biger and Hull (Financial Management, spring 1983); it preceded the Garman–Kohlhagen model. In 1983 Garman and Kohlhagen extended the Black–Scholes model to cope with the presence of two interest rates (one for each currency). Suppose that {\displaystyle r_{d}} is the risk-free interest rate to expiry of the domestic currency and {\displaystyle r_{f}} is the foreign currency risk-free interest rate (where domestic currency is the currency in which we obtain the value of the option; the formula also requires that FX rates, both strike and current spot, be quoted in terms of "units of domestic currency per unit of foreign currency"). The results are also in the same units and, to be meaningful, need to be converted into one of the currencies.
Then the domestic currency value of a call option into the foreign currency is
{\displaystyle c=S_{0}e^{-r_{f}T}{\mathcal {N}}(d_{1})-Ke^{-r_{d}T}{\mathcal {N}}(d_{2})}
The value of the corresponding put option is
{\displaystyle p=Ke^{-r_{d}T}{\mathcal {N}}(-d_{2})-S_{0}e^{-r_{f}T}{\mathcal {N}}(-d_{1})}
where:
{\displaystyle d_{1}={\frac {\ln(S_{0}/K)+(r_{d}-r_{f}+\sigma ^{2}/2)T}{\sigma {\sqrt {T}}}}}
{\displaystyle d_{2}=d_{1}-\sigma {\sqrt {T}}}
{\displaystyle S_{0}} is the current spot rate
{\displaystyle K} is the strike price
{\displaystyle {\mathcal {N}}(x)} is the cumulative normal distribution function
{\displaystyle r_{d}} is the domestic risk-free simple interest rate
{\displaystyle r_{f}} is the foreign risk-free simple interest rate
{\displaystyle T} is the time to maturity (calculated according to the appropriate day count convention)
and {\displaystyle \sigma } is the volatility of the FX rate.
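A minimal implementation sketch of the formulas above follows; the function name and the numbers in the example are illustrative only.

```python
# A minimal sketch of the Garman–Kohlhagen call and put values.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def garman_kohlhagen(S0, K, T, rd, rf, sigma, call=True):
    """European FX option value in domestic currency per unit of foreign currency."""
    d1 = (log(S0 / K) + (rd - rf + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if call:
        return S0 * exp(-rf * T) * N(d1) - K * exp(-rd * T) * N(d2)
    return K * exp(-rd * T) * N(-d2) - S0 * exp(-rf * T) * N(-d1)

# Example: spot 1.10, strike 1.15, 6 months, 3% domestic, 1% foreign, 10% vol.
print(round(garman_kohlhagen(1.10, 1.15, 0.5, 0.03, 0.01, 0.10, call=True), 5))
```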
== References == | Wikipedia/Garman–Kohlhagen_model |
The Korn–Kreer–Lenssen model (KKL model) is a discrete trinomial model proposed in 1998 by Ralf Korn, Markus Kreer and Mark Lenssen to model illiquid securities and to value financial derivatives on these.
It generalizes the binomial Cox–Ross–Rubinstein model in a natural way, as the stock in a given time interval can either rise one unit up, fall one unit down or remain unchanged. In contrast to the Black–Scholes or Cox–Ross–Rubinstein models, the market consisting of stock and cash alone is not yet complete. To value and replicate a financial derivative, an additional traded security related to the original security needs to be added. This might be a Low Exercise Price Option (or short LEPO). The mathematical proof of arbitrage-free pricing is based on martingale representations for point processes pioneered in the 1980s and 1990s by Albert Shiryaev, Robert Liptser and Marc Yor.
The dynamics are based on continuous-time linear birth–death processes, and analytic formulae for option prices and Greeks can be stated. Later work looks at market completion with general calls or puts. A comprehensive introduction may be found in the attached MSc thesis.
The model belongs to the class of trinomial models and the difference from the standard trinomial tree is the following: if {\displaystyle \Delta t} denotes the waiting time between two movements of the stock price, then in the KKL model {\displaystyle \Delta t} remains finite and exponentially distributed, whereas in trinomial trees time is discrete and the limit {\displaystyle \Delta t\rightarrow 0} is taken by numerical extrapolation afterwards.
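The following sketch simulates one path of a continuous-time linear birth–death price process with exponentially distributed waiting times, in the spirit of the model; the intensities, starting price and horizon are illustrative assumptions, not the calibration used in the cited papers.

```python
# A minimal sketch of a linear birth-death price path with exponential waiting times.
import random

random.seed(7)
lam, mu = 0.6, 0.5            # per-unit up (birth) and down (death) intensities
S, t, horizon = 100, 0.0, 1.0
while S > 0:
    wait = random.expovariate((lam + mu) * S)      # waiting time until the next move
    if t + wait > horizon:
        break
    t += wait
    S += 1 if random.random() < lam / (lam + mu) else -1
print("price at the horizon:", S)
```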
== See also ==
Binomial options pricing model
Trinomial tree
Valuation of options
Option: Model implementation
== References ==
== Literature ==
Ralf Korn, Markus Kreer and Mark Lenssen: "Pricing of european options when the underlying stock price follows a linear birth–death process", Stochastic Models Vol. 14(3), 1998, pp. 647–662
Xiong Chen: "The Korn–Kreer–Lenssen Model as an alternative for option pricing", Willmott Magazine June 2004, pp. 74–80 | Wikipedia/Korn–Kreer–Lenssen_model |
The Black–Scholes or Black–Scholes–Merton model is a mathematical model for the dynamics of a financial market containing derivative investment instruments. From the parabolic partial differential equation in the model, known as the Black–Scholes equation, one can deduce the Black–Scholes formula, which gives a theoretical estimate of the price of European-style options and shows that the option has a unique price given the risk of the security and its expected return (instead replacing the security's expected return with the risk-neutral rate). The equation and model are named after economists Fischer Black and Myron Scholes. Robert C. Merton, who first wrote an academic paper on the subject, is sometimes also credited.
The main principle behind the model is to hedge the option by buying and selling the underlying asset in a specific way to eliminate risk. This type of hedging is called "continuously revised delta hedging" and is the basis of more complicated hedging strategies such as those used by investment banks and hedge funds.
The model is widely used, although often with some adjustments, by options market participants.: 751 The model's assumptions have been relaxed and generalized in many directions, leading to a plethora of models that are currently used in derivative pricing and risk management. The insights of the model, as exemplified by the Black–Scholes formula, are frequently used by market participants, as distinguished from the actual prices. These insights include no-arbitrage bounds and risk-neutral pricing (thanks to continuous revision). Further, the Black–Scholes equation, a partial differential equation that governs the price of the option, enables pricing using numerical methods when an explicit formula is not possible.
The Black–Scholes formula has only one parameter that cannot be directly observed in the market: the average future volatility of the underlying asset, though it can be found from the price of other options. Since the option value (whether put or call) is increasing in this parameter, it can be inverted to produce a "volatility surface" that is then used to calibrate other models, e.g. for OTC derivatives.
== History ==
Louis Bachelier's thesis in 1900 was the earliest publication to apply Brownian motion to derivative pricing, though his work had little impact for many years and included important limitations for its application to modern markets. In the 1960s Case Sprenkle, James Boness, Paul Samuelson, and Samuelson's Ph.D. student at the time, Robert C. Merton, all made important improvements to the theory of options pricing.
Fischer Black and Myron Scholes demonstrated in 1968 that a dynamic revision of a portfolio removes the expected return of the security, thus inventing the risk neutral argument. They based their thinking on work previously done by market researchers and practitioners including the work mentioned above, as well as work by Sheen Kassouf and Edward O. Thorp. Black and Scholes then attempted to apply the formula to the markets, but incurred financial losses, due to a lack of risk management in their trades. In 1970, they decided to return to the academic environment. After three years of efforts, the formula—named in honor of them for making it public—was finally published in 1973 in an article titled "The Pricing of Options and Corporate Liabilities", in the Journal of Political Economy. Robert C. Merton was the first to publish a paper expanding the mathematical understanding of the options pricing model, and coined the term "Black–Scholes options pricing model".
The formula led to a boom in options trading and provided mathematical legitimacy to the activities of the Chicago Board Options Exchange and other options markets around the world.
Merton and Scholes received the 1997 Nobel Memorial Prize in Economic Sciences for their work, the committee citing their discovery of the risk neutral dynamic revision as a breakthrough that separates the option from the risk of the underlying security. Although ineligible for the prize because of his death in 1995, Black was mentioned as a contributor by the Swedish Academy.
== Fundamental hypotheses ==
The Black–Scholes model assumes that the market consists of at least one risky asset, usually called the stock, and one riskless asset, usually called the money market, cash, or bond.
The following assumptions are made about the assets (which relate to the names of the assets):
Risk-free rate: The rate of return on the riskless asset is constant and thus called the risk-free interest rate.
Random walk: The instantaneous log return of the stock price is an infinitesimal random walk with drift; more precisely, the stock price follows a geometric Brownian motion, and it is assumed that the drift and volatility of the motion are constant. If drift and volatility are time-varying, a suitably modified Black–Scholes formula can be deduced, as long as the volatility is not random.
The stock does not pay a dividend.
The assumptions about the market are:
No arbitrage opportunity (i.e., there is no way to make a riskless profit in excess of the risk-free rate).
Ability to borrow and lend any amount, even fractional, of cash at the riskless rate.
Ability to buy and sell any amount, even fractional, of the stock (this includes short selling).
The above transactions do not incur any fees or costs (i.e., frictionless market).
With these assumptions, suppose there is a derivative security also trading in this market. It is specified that this security will have a certain payoff at a specified date in the future, depending on the values taken by the stock up to that date. Even though the path the stock price will take in the future is unknown, the derivative's price can be determined at the current time. For the special case of a European call or put option, Black and Scholes showed that "it is possible to create a hedged position, consisting of a long position in the stock and a short position in the option, whose value will not depend on the price of the stock". Their dynamic hedging strategy led to a partial differential equation which governs the price of the option. Its solution is given by the Black–Scholes formula.
Several of these assumptions of the original model have been removed in subsequent extensions of the model. Modern versions account for dynamic interest rates (Merton, 1976), transaction costs and taxes (Ingersoll, 1976), and dividend payout.
== Notation ==
The notation used in the analysis of the Black-Scholes model is defined as follows (definitions grouped by subject):
General and market related:
{\displaystyle t} is a time in years; with {\displaystyle t=0} generally representing the present year.
{\displaystyle r} is the annualized risk-free interest rate, continuously compounded (also known as the force of interest).
Asset related:
{\displaystyle S(t)} is the price of the underlying asset at time t, also denoted as {\displaystyle S_{t}}.
{\displaystyle \mu } is the drift rate of {\displaystyle S}, annualized.
{\displaystyle \sigma } is the standard deviation of the stock's returns. This is the square root of the quadratic variation of the stock's log price process, a measure of its volatility.
Option related:
{\displaystyle V(S,t)} is the price of the option as a function of the underlying asset S at time t, in particular: {\displaystyle C(S,t)} is the price of a European call option and {\displaystyle P(S,t)} is the price of a European put option.
{\displaystyle T} is the time of option expiration.
{\displaystyle \tau } is the time until maturity: {\displaystyle \tau =T-t}.
{\displaystyle K} is the strike price of the option, also known as the exercise price.
{\displaystyle N(x)} denotes the standard normal cumulative distribution function: {\displaystyle N(x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x}e^{-z^{2}/2}\,dz.}
{\displaystyle N'(x)} denotes the standard normal probability density function: {\displaystyle N'(x)={\frac {dN(x)}{dx}}={\frac {1}{\sqrt {2\pi }}}e^{-x^{2}/2}.}
== Black–Scholes equation ==
The Black–Scholes equation is a parabolic partial differential equation that describes the price
{\displaystyle V(S,t)} of the option, where {\displaystyle S} is the price of the underlying asset and {\displaystyle t} is time:
{\displaystyle {\frac {\partial V}{\partial t}}+{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}+rS{\frac {\partial V}{\partial S}}-rV=0}
A key financial insight behind the equation is that one can perfectly hedge the option by buying and selling the underlying asset and the bank account asset (cash) in such a way as to "eliminate risk". This implies that there is a unique price for the option given by the Black–Scholes formula (see the next section).
== Black–Scholes formula ==
The Black–Scholes formula calculates the price of European put and call options. This price is consistent with the Black–Scholes equation. This follows since the formula can be obtained by solving the equation for the corresponding terminal and boundary conditions:
{\displaystyle {\begin{aligned}&C(0,t)=0{\text{ for all }}t\\&C(S,t)\rightarrow S-K{\text{ as }}S\rightarrow \infty \\&C(S,T)=\max\{S-K,0\}\end{aligned}}}
The value of a call option for a non-dividend-paying underlying stock in terms of the Black–Scholes parameters is:
{\displaystyle {\begin{aligned}C(S_{t},t)&=N(d_{+})S_{t}-N(d_{-})Ke^{-r(T-t)}\\d_{+}&={\frac {1}{\sigma {\sqrt {T-t}}}}\left[\ln \left({\frac {S_{t}}{K}}\right)+\left(r+{\frac {\sigma ^{2}}{2}}\right)(T-t)\right]\\d_{-}&=d_{+}-\sigma {\sqrt {T-t}}\\\end{aligned}}}
The price of a corresponding put option based on put–call parity with discount factor
{\displaystyle e^{-r(T-t)}} is:
{\displaystyle {\begin{aligned}P(S_{t},t)&=Ke^{-r(T-t)}-S_{t}+C(S_{t},t)\\&=N(-d_{-})Ke^{-r(T-t)}-N(-d_{+})S_{t}\end{aligned}}\,}
=== Alternative formulation ===
Introducing auxiliary variables allows for the formula to be simplified and reformulated in a form that can be more convenient (this is a special case of the Black '76 formula):
{\displaystyle {\begin{aligned}C(F,\tau )&=D\left[N(d_{+})F-N(d_{-})K\right]\\d_{+}&={\frac {1}{\sigma {\sqrt {\tau }}}}\left[\ln \left({\frac {F}{K}}\right)+{\frac {1}{2}}\sigma ^{2}\tau \right]\\d_{-}&=d_{+}-\sigma {\sqrt {\tau }}\end{aligned}}}
where:
{\displaystyle D=e^{-r\tau }} is the discount factor
{\displaystyle F=e^{r\tau }S={\frac {S}{D}}} is the forward price of the underlying asset, and {\displaystyle S=DF}
Given put–call parity, which is expressed in these terms as:
{\displaystyle C-P=D(F-K)=S-DK}
the price of a put option is:
{\displaystyle P(F,\tau )=D\left[N(-d_{-})K-N(-d_{+})F\right]}
=== Interpretation ===
It is possible to have intuitive interpretations of the Black–Scholes formula, with the main subtlety being the interpretation of
{\displaystyle d_{\pm }}
and why there are two different terms.
The formula can be interpreted by first decomposing a call option into the difference of two binary options: an asset-or-nothing call minus a cash-or-nothing call (long an asset-or-nothing call, short a cash-or-nothing call). A call option exchanges cash for an asset at expiry, while an asset-or-nothing call just yields the asset (with no cash in exchange) and a cash-or-nothing call just yields cash (with no asset in exchange). The Black–Scholes formula is a difference of two terms, and these two terms are equal to the values of the binary call options. These binary options are less frequently traded than vanilla call options, but are easier to analyze.
Thus the formula:
{\displaystyle C=D\left[N(d_{+})F-N(d_{-})K\right]}
breaks up as:
{\displaystyle C=DN(d_{+})F-DN(d_{-})K,}
where
{\displaystyle DN(d_{+})F} is the present value of an asset-or-nothing call and {\displaystyle DN(d_{-})K} is the present value of a cash-or-nothing call. The D factor is for discounting, because the expiration date is in the future, and removing it changes present value to future value (value at expiry). Thus {\displaystyle N(d_{+})F} is the future value of an asset-or-nothing call and {\displaystyle N(d_{-})K} is the future value of a cash-or-nothing call. In risk-neutral terms, these are the expected value of the asset and the expected value of the cash in the risk-neutral measure.
A naive, and slightly incorrect, interpretation of these terms is that
{\displaystyle N(d_{+})F} is the probability of the option expiring in the money {\displaystyle N(d_{+})}, multiplied by the value of the underlying at expiry F, while {\displaystyle N(d_{-})K} is the probability of the option expiring in the money {\displaystyle N(d_{-})}, multiplied by the value of the cash at expiry K. This interpretation is incorrect because either both binaries expire in the money or both expire out of the money (either cash is exchanged for the asset or it is not), but the probabilities {\displaystyle N(d_{+})} and {\displaystyle N(d_{-})} are not equal. In fact, {\displaystyle d_{\pm }} can be interpreted as measures of moneyness (in standard deviations) and {\displaystyle N(d_{\pm })} as probabilities of expiring ITM (percent moneyness), in the respective numéraire, as discussed below. Simply put, the interpretation of the cash option, {\displaystyle N(d_{-})K}, is correct, as the value of the cash is independent of movements of the underlying asset, and thus can be interpreted as a simple product of "probability times value", while the {\displaystyle N(d_{+})F} is more complicated, as the probability of expiring in the money and the value of the asset at expiry are not independent. More precisely, the value of the asset at expiry is variable in terms of cash, but is constant in terms of the asset itself (a fixed quantity of the asset), and thus these quantities are independent if one changes numéraire to the asset rather than cash.
If one uses spot S instead of forward F, in {\displaystyle d_{\pm }} instead of the {\textstyle {\frac {1}{2}}\sigma ^{2}} term there is {\textstyle \left(r\pm {\frac {1}{2}}\sigma ^{2}\right)\tau ,} which can be interpreted as a drift factor (in the risk-neutral measure for appropriate numéraire). The use of d− for moneyness rather than the standardized moneyness {\textstyle m={\frac {1}{\sigma {\sqrt {\tau }}}}\ln \left({\frac {F}{K}}\right)} – in other words, the reason for the {\textstyle {\frac {1}{2}}\sigma ^{2}} factor – is due to the difference between the median and mean of the log-normal distribution; it is the same factor as in Itō's lemma applied to geometric Brownian motion. In addition, another way to see that the naive interpretation is incorrect is that replacing {\displaystyle N(d_{+})} by {\displaystyle N(d_{-})} in the formula yields a negative value for out-of-the-money call options.: 6
In detail, the terms
{\displaystyle N(d_{+}),N(d_{-})} are the probabilities of the option expiring in-the-money under the equivalent exponential martingale probability measure (numéraire = stock) and the equivalent martingale probability measure (numéraire = risk-free asset), respectively. The risk-neutral probability density for the stock price {\displaystyle S_{T}\in (0,\infty )} is
{\displaystyle p(S,T)={\frac {N^{\prime }[d_{-}(S_{T})]}{S_{T}\sigma {\sqrt {T}}}}}
where {\displaystyle d_{-}=d_{-}(K)}
is defined as above.
Specifically,
{\displaystyle N(d_{-})} is the probability that the call will be exercised provided one assumes that the asset drift is the risk-free rate. {\displaystyle N(d_{+})}, however, does not lend itself to a simple probability interpretation. {\displaystyle SN(d_{+})}
is correctly interpreted as the present value, using the risk-free interest rate, of the expected asset price at expiration, given that the asset price at expiration is above the exercise price. For related discussion – and graphical representation – see Datar–Mathews method for real option valuation.
The equivalent martingale probability measure is also called the risk-neutral probability measure. Note that both of these are probabilities in a measure theoretic sense, and neither of these is the true probability of expiring in-the-money under the real probability measure. To calculate the probability under the real ("physical") probability measure, additional information is required—the drift term in the physical measure, or equivalently, the market price of risk.
==== Derivations ====
A standard derivation for solving the Black–Scholes PDE is given in the article Black–Scholes equation.
The Feynman–Kac formula says that the solution to this type of PDE, when discounted appropriately, is actually a martingale. Thus the option price is the expected value of the discounted payoff of the option. Computing the option price via this expectation is the risk neutrality approach and can be done without knowledge of PDEs. Note the expectation of the option payoff is not done under the real world probability measure, but an artificial risk-neutral measure, which differs from the real world measure. For the underlying logic see section "risk neutral valuation" under Rational pricing as well as section "Derivatives pricing: the Q world" under Mathematical finance; for details, once again, see Hull.: 307–309
== The Options Greeks ==
"The Greeks" measure the sensitivity of the value of a derivative product or a financial portfolio to changes in parameter values while holding the other parameters fixed. They are partial derivatives of the price with respect to the parameter values. One Greek, "gamma" (as well as others not listed here) is a partial derivative of another Greek, "delta" in this case.
The Greeks are important not only in the mathematical theory of finance, but also for those actively trading. Financial institutions will typically set (risk) limit values for each of the Greeks that their traders must not exceed.
Delta is the most important Greek since this usually confers the largest risk. Many traders will zero their delta at the end of the day if they are not speculating on the direction of the market and following a delta-neutral hedging approach as defined by Black–Scholes. When a trader seeks to establish an effective delta-hedge for a portfolio, the trader may also seek to neutralize the portfolio's gamma, as this will ensure that the hedge will be effective over a wider range of underlying price movements.
The Greeks for Black–Scholes are given in closed form below. They can be obtained by differentiation of the Black–Scholes formula.
Note that the gamma and vega are the same value for calls and puts. This can be seen directly from put–call parity, since the difference of a put and a call is a forward, which is linear in S and independent of σ (so a forward has zero gamma and zero vega).
In practice, some sensitivities are usually quoted in scaled-down terms, to match the scale of likely changes in the parameters. For example, rho is often reported divided by 10,000 (1 basis point rate change), vega by 100 (1 vol point change), and theta by 365 or 252 (1 day decay based on either calendar days or trading days per year).
Note that "Vega" is not a letter in the Greek alphabet; the name arises from misreading the Greek letter nu (variously rendered as
ν
{\displaystyle \nu }
, ν, and ν) as a V.
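As an illustration, the following sketch evaluates the closed-form delta, gamma and vega of a European call under the same non-dividend assumptions as the formula above; it is not the article's table of Greeks, and the example inputs are arbitrary.

```python
# A minimal sketch of three closed-form Black–Scholes call Greeks.
from math import log, sqrt, exp, pi
from statistics import NormalDist

N = NormalDist().cdf

def n_pdf(x):
    return exp(-x * x / 2) / sqrt(2 * pi)

def call_greeks(S, K, tau, r, sigma):
    d_plus = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    delta = N(d_plus)                                   # dC/dS
    gamma = n_pdf(d_plus) / (S * sigma * sqrt(tau))     # d2C/dS2, same for puts
    vega = S * n_pdf(d_plus) * sqrt(tau)                # dC/dsigma, same for puts
    return delta, gamma, vega

print(call_greeks(S=100, K=105, tau=1.0, r=0.05, sigma=0.2))
```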
== Extensions of the model ==
The above model can be extended for variable (but deterministic) rates and volatilities. The model may also be used to value European options on instruments paying dividends. In this case, closed-form solutions are available if the dividend is a known proportion of the stock price. American options and options on stocks paying a known cash dividend (in the short term, more realistic than a proportional dividend) are more difficult to value, and a choice of solution techniques is available (for example lattices and grids).
=== Instruments paying continuous yield dividends ===
For options on indices, it is reasonable to make the simplifying assumption that dividends are paid continuously, and that the dividend amount is proportional to the level of the index.
The dividend payment paid over the time period
{\displaystyle [t,t+dt]} is then modelled as:
{\displaystyle qS_{t}\,dt}
for some constant {\displaystyle q} (the dividend yield).
Under this formulation the arbitrage-free price implied by the Black–Scholes model can be shown to be:
{\displaystyle C(S_{t},t)=e^{-r(T-t)}[FN(d_{1})-KN(d_{2})]\,}
and
{\displaystyle P(S_{t},t)=e^{-r(T-t)}[KN(-d_{2})-FN(-d_{1})]\,}
where now
{\displaystyle F=S_{t}e^{(r-q)(T-t)}\,} is the modified forward price that occurs in the terms {\displaystyle d_{1},d_{2}}:
{\displaystyle d_{1}={\frac {1}{\sigma {\sqrt {T-t}}}}\left[\ln \left({\frac {S_{t}}{K}}\right)+\left(r-q+{\frac {1}{2}}\sigma ^{2}\right)(T-t)\right]}
and
{\displaystyle d_{2}=d_{1}-\sigma {\sqrt {T-t}}={\frac {1}{\sigma {\sqrt {T-t}}}}\left[\ln \left({\frac {S_{t}}{K}}\right)+\left(r-q-{\frac {1}{2}}\sigma ^{2}\right)(T-t)\right]}.
=== Instruments paying discrete proportional dividends ===
It is also possible to extend the Black–Scholes framework to options on instruments paying discrete proportional dividends. This is useful when the option is struck on a single stock.
A typical model is to assume that a proportion
{\displaystyle \delta } of the stock price is paid out at pre-determined times {\displaystyle t_{1},t_{2},\ldots ,t_{n}}. The price of the stock is then modelled as:
{\displaystyle S_{t}=S_{0}(1-\delta )^{n(t)}e^{ut+\sigma W_{t}}}
where {\displaystyle n(t)} is the number of dividends that have been paid by time {\displaystyle t}.
The price of a call option on such a stock is again:
{\displaystyle C(S_{0},T)=e^{-rT}[FN(d_{1})-KN(d_{2})]\,}
where now
{\displaystyle F=S_{0}(1-\delta )^{n(T)}e^{rT}\,}
is the forward price for the dividend paying stock.
=== American options ===
The problem of finding the price of an American option is related to the optimal stopping problem of finding the time to execute the option. Since the American option can be exercised at any time before the expiration date, the Black–Scholes equation becomes a variational inequality of the form:
{\displaystyle {\frac {\partial V}{\partial t}}+{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}+rS{\frac {\partial V}{\partial S}}-rV\leq 0}
together with
{\displaystyle V(S,t)\geq H(S)}
where {\displaystyle H(S)} denotes the payoff at stock price {\displaystyle S} and the terminal condition: {\displaystyle V(S,T)=H(S)}.
In general this inequality does not have a closed form solution, though an American call with no dividends is equal to a European call and the Roll–Geske–Whaley method provides a solution for an American call with one dividend; see also Black's approximation.
Barone-Adesi and Whaley is a further approximation formula. Here, the stochastic differential equation (which is valid for the value of any derivative) is split into two components: the European option value and the early exercise premium. With some assumptions, a quadratic equation that approximates the solution for the latter is then obtained. This solution involves finding the critical value,
{\displaystyle s*}
, such that one is indifferent between early exercise and holding to maturity.
Bjerksund and Stensland provide an approximation based on an exercise strategy corresponding to a trigger price. Here, if the underlying asset price is greater than or equal to the trigger price it is optimal to exercise, and the value must equal
{\displaystyle S-X}
, otherwise the option "boils down to: (i) a European up-and-out call option… and (ii) a rebate that is received at the knock-out date if the option is knocked out prior to the maturity date". The formula is readily modified for the valuation of a put option, using put–call parity. This approximation is computationally inexpensive and the method is fast, with evidence indicating that the approximation may be more accurate in pricing long dated options than Barone-Adesi and Whaley.
==== Perpetual put ====
Despite the lack of a general analytical solution for American put options, it is possible to derive such a formula for the case of a perpetual option – meaning that the option never expires (i.e.,
{\displaystyle T\rightarrow \infty }
). In this case, the time decay of the option is equal to zero, which leads to the Black–Scholes PDE becoming an ODE:
{\displaystyle {1 \over {2}}\sigma ^{2}S^{2}{d^{2}V \over {dS^{2}}}+(r-q)S{dV \over {dS}}-rV=0}
Let
{\displaystyle S_{-}}
denote the lower exercise boundary, below which it is optimal to exercise the option. The boundary conditions are:
{\displaystyle V(S_{-})=K-S_{-},\quad {dV \over {dS}}(S_{-})=-1,\quad V(S)\leq K}
The solutions to the ODE are a linear combination of any two linearly independent solutions:
{\displaystyle V(S)=A_{1}S^{\lambda _{1}}+A_{2}S^{\lambda _{2}}}
For
{\displaystyle S_{-}\leq S}, substitution of this solution into the ODE for {\displaystyle i={1,2}}
yields:
{\displaystyle \left[{1 \over {2}}\sigma ^{2}\lambda _{i}(\lambda _{i}-1)+(r-q)\lambda _{i}-r\right]S^{\lambda _{i}}=0}
Rearranging the terms gives:
{\displaystyle {1 \over {2}}\sigma ^{2}\lambda _{i}^{2}+\left(r-q-{1 \over {2}}\sigma ^{2}\right)\lambda _{i}-r=0}
Using the quadratic formula, the solutions for
{\displaystyle \lambda _{i}}
are:
{\displaystyle {\begin{aligned}\lambda _{1}&={-\left(r-q-{1 \over {2}}\sigma ^{2}\right)+{\sqrt {\left(r-q-{1 \over {2}}\sigma ^{2}\right)^{2}+2\sigma ^{2}r}} \over {\sigma ^{2}}}\\\lambda _{2}&={-\left(r-q-{1 \over {2}}\sigma ^{2}\right)-{\sqrt {\left(r-q-{1 \over {2}}\sigma ^{2}\right)^{2}+2\sigma ^{2}r}} \over {\sigma ^{2}}}\end{aligned}}}
In order to have a finite solution for the perpetual put, since the boundary conditions imply upper and lower finite bounds on the value of the put, it is necessary to set
{\displaystyle A_{1}=0}, leading to the solution {\displaystyle V(S)=A_{2}S^{\lambda _{2}}}
. From the first boundary condition, it is known that:
{\displaystyle V(S_{-})=A_{2}(S_{-})^{\lambda _{2}}=K-S_{-}\implies A_{2}={K-S_{-} \over {(S_{-})^{\lambda _{2}}}}}
Therefore, the value of the perpetual put becomes:
{\displaystyle V(S)=(K-S_{-})\left({S \over {S_{-}}}\right)^{\lambda _{2}}}
The second boundary condition yields the location of the lower exercise boundary:
{\displaystyle {dV \over {dS}}(S_{-})=\lambda _{2}{K-S_{-} \over {S_{-}}}=-1\implies S_{-}={\lambda _{2}K \over {\lambda _{2}-1}}}
To conclude, for
{\textstyle S\geq S_{-}={\lambda _{2}K \over {\lambda _{2}-1}}}
, the perpetual American put option is worth:
{\displaystyle V(S)={K \over {1-\lambda _{2}}}\left({\lambda _{2}-1 \over {\lambda _{2}}}\right)^{\lambda _{2}}\left({S \over {K}}\right)^{\lambda _{2}}}
=== Binary options ===
By solving the Black–Scholes differential equation with the Heaviside function as a boundary condition, one ends up with the pricing of options that pay one unit above some predefined strike price and nothing below.
In fact, the Black–Scholes formula for the price of a vanilla call option (or put option) can be interpreted by decomposing a call option into an asset-or-nothing call option minus a cash-or-nothing call option, and similarly for a put—the binary options are easier to analyze, and correspond to the two terms in the Black–Scholes formula.
==== Cash-or-nothing call ====
This pays out one unit of cash if the spot is above the strike at maturity. Its value is given by:
{\displaystyle C=e^{-r(T-t)}N(d_{2}).\,}
==== Cash-or-nothing put ====
This pays out one unit of cash if the spot is below the strike at maturity. Its value is given by:
{\displaystyle P=e^{-r(T-t)}N(-d_{2}).\,}
==== Asset-or-nothing call ====
This pays out one unit of asset if the spot is above the strike at maturity. Its value is given by:
{\displaystyle C=Se^{-q(T-t)}N(d_{1}).\,}
==== Asset-or-nothing put ====
This pays out one unit of asset if the spot is below the strike at maturity. Its value is given by:
{\displaystyle P=Se^{-q(T-t)}N(-d_{1}).}
==== Foreign Exchange (FX) ====
Denoting by S the FOR/DOM exchange rate (i.e., 1 unit of foreign currency is worth S units of domestic currency) one can observe that paying out 1 unit of the domestic currency if the spot at maturity is above or below the strike is exactly like a cash-or nothing call and put respectively. Similarly, paying out 1 unit of the foreign currency if the spot at maturity is above or below the strike is exactly like an asset-or nothing call and put respectively.
Hence by taking
{\displaystyle r_{f}}, the foreign interest rate, {\displaystyle r_{d}}
, the domestic interest rate, and the rest as above, the following results can be obtained:
In the case of a digital call (this is a call FOR/put DOM) paying out one unit of the domestic currency, the present value is:
{\displaystyle C=e^{-r_{d}T}N(d_{2})\,}
In the case of a digital put (this is a put FOR/call DOM) paying out one unit of the domestic currency, the present value is:
{\displaystyle P=e^{-r_{d}T}N(-d_{2})\,}
In the case of a digital call (this is a call FOR/put DOM) paying out one unit of the foreign currency, the present value is:
{\displaystyle C=Se^{-r_{f}T}N(d_{1})\,}
In the case of a digital put (this is a put FOR/call DOM) paying out one unit of the foreign currency, the present value is:
{\displaystyle P=Se^{-r_{f}T}N(-d_{1})\,}
==== Skew ====
In the standard Black–Scholes model, one can interpret the premium of the binary option in the risk-neutral world as the expected value = probability of being in-the-money * unit, discounted to the present value. The Black–Scholes model relies on symmetry of distribution and ignores the skewness of the distribution of the asset. Market makers adjust for such skewness by, instead of using a single standard deviation for the underlying asset
{\displaystyle \sigma } across all strikes, incorporating a variable one {\displaystyle \sigma (K)} where volatility depends on strike price, thus taking the volatility skew into account. The skew matters because it affects the binary considerably more than the regular options.
A binary call option is, at long expirations, similar to a tight call spread using two vanilla options. One can model the value of a binary cash-or-nothing option, C, at strike K, as an infinitesimally tight spread, where
{\displaystyle C_{v}} is a vanilla European call:
{\displaystyle C=\lim _{\epsilon \to 0}{\frac {C_{v}(K-\epsilon )-C_{v}(K)}{\epsilon }}}
Thus, the value of a binary call is the negative of the derivative of the price of a vanilla call with respect to strike price:
{\displaystyle C=-{\frac {dC_{v}}{dK}}}
When one takes volatility skew into account,
{\displaystyle \sigma } is a function of {\displaystyle K}:
{\displaystyle C=-{\frac {dC_{v}(K,\sigma (K))}{dK}}=-{\frac {\partial C_{v}}{\partial K}}-{\frac {\partial C_{v}}{\partial \sigma }}{\frac {\partial \sigma }{\partial K}}}
The first term is equal to the premium of the binary option ignoring skew:
{\displaystyle -{\frac {\partial C_{v}}{\partial K}}=-{\frac {\partial (SN(d_{1})-Ke^{-r(T-t)}N(d_{2}))}{\partial K}}=e^{-r(T-t)}N(d_{2})=C_{\text{no skew}}}
{\displaystyle {\frac {\partial C_{v}}{\partial \sigma }}} is the Vega of the vanilla call; {\displaystyle {\frac {\partial \sigma }{\partial K}}} is sometimes called the "skew slope" or just "skew". The skew is typically negative, so the value of a binary call will be higher when taking skew into account.
{\displaystyle C=C_{\text{no skew}}-{\text{Vega}}_{v}\cdot {\text{Skew}}}
==== Relationship to vanilla options' Greeks ====
Since a binary call is a mathematical derivative of a vanilla call with respect to strike, the price of a binary call has the same shape as the delta of a vanilla call, and the delta of a binary call has the same shape as the gamma of a vanilla call.
== Black–Scholes in practice ==
The assumptions of the Black–Scholes model are not all empirically valid. The model is widely employed as a useful approximation to reality, but proper application requires understanding its limitations – blindly following the model exposes the user to unexpected risk. Among the most significant limitations are:
the underestimation of extreme moves, yielding tail risk, which can be hedged with out-of-the-money options;
the assumption of instant, cost-less trading, yielding liquidity risk, which is difficult to hedge;
the assumption of a stationary process, yielding volatility risk, which can be hedged with volatility hedging;
the assumption of continuous time and continuous trading, yielding gap risk, which can be hedged with Gamma hedging;
the model tends to underprice deep out-of-the-money options and overprice deep in-the-money options.
In short, while in the Black–Scholes model one can perfectly hedge options by simply Delta hedging, in practice there are many other sources of risk.
Results using the Black–Scholes model differ from real world prices because of simplifying assumptions of the model. One significant limitation is that in reality security prices do not follow a strict stationary log-normal process, nor is the risk-free interest actually known (and is not constant over time). The variance has been observed to be non-constant leading to models such as GARCH to model volatility changes. Pricing discrepancies between empirical and the Black–Scholes model have long been observed in options that are far out-of-the-money, corresponding to extreme price changes; such events would be very rare if returns were lognormally distributed, but are observed much more often in practice.
Nevertheless, Black–Scholes pricing is widely used in practice,: 751 because it is:
easy to calculate
a useful approximation, particularly when analyzing the direction in which prices move when crossing critical points
a robust basis for more refined models
reversible, as the model's original output, price, can be used as an input and one of the other variables solved for; the implied volatility calculated in this way is often used to quote option prices (that is, as a quoting convention).
The first point is self-evidently useful. The others can be further discussed:
Useful approximation: although volatility is not constant, results from the model are often helpful in setting up hedges in the correct proportions to minimize risk. Even when the results are not completely accurate, they serve as a first approximation to which adjustments can be made.
Basis for more refined models: The Black–Scholes model is robust in that it can be adjusted to deal with some of its failures. Rather than considering some parameters (such as volatility or interest rates) as constant, one considers them as variables, and thus added sources of risk. This is reflected in the Greeks (the change in option value for a change in these parameters, or equivalently the partial derivatives with respect to these variables), and hedging these Greeks mitigates the risk caused by the non-constant nature of these parameters. Other defects cannot be mitigated by modifying the model, however, notably tail risk and liquidity risk, and these are instead managed outside the model, chiefly by minimizing these risks and by stress testing.
Explicit modeling: this feature means that, rather than assuming a volatility a priori and computing prices from it, one can use the model to solve for volatility, which gives the implied volatility of an option at given prices, durations and exercise prices. Solving for volatility over a given set of durations and strike prices, one can construct an implied volatility surface. In this application of the Black–Scholes model, a coordinate transformation from the price domain to the volatility domain is obtained. Rather than quoting option prices in terms of dollars per unit (which are hard to compare across strikes, durations and coupon frequencies), option prices can thus be quoted in terms of implied volatility, which leads to trading of volatility in option markets.
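The "reversible" point above can be illustrated with a simple root-finding sketch: given an observed call price, solve for the volatility that reproduces it. The quoted price and other inputs below are hypothetical; bisection works because the call price is increasing in volatility.

```python
# A minimal sketch recovering implied volatility from an observed call price.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(S, K, tau, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    return S * N(d1) - K * exp(-r * tau) * N(d1 - sigma * sqrt(tau))

def implied_vol(price, S, K, tau, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Bisection on volatility; valid because the call price is monotone in sigma."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, tau, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

market_price = 10.45                 # hypothetical quoted option price
print(round(implied_vol(market_price, S=100, K=100, tau=1.0, r=0.05), 4))
```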
=== The volatility smile ===
One of the attractive features of the Black–Scholes model is that the parameters in the model other than the volatility (the time to maturity, the strike, the risk-free interest rate, and the current underlying price) are unequivocally observable. All other things being equal, an option's theoretical value is a monotonic increasing function of implied volatility.
By computing the implied volatility for traded options with different strikes and maturities, the Black–Scholes model can be tested. If the Black–Scholes model held, then the implied volatility for a particular stock would be the same for all strikes and maturities. In practice, the volatility surface (the 3D graph of implied volatility against strike and maturity) is not flat.
The typical shape of the implied volatility curve for a given maturity depends on the underlying instrument. Equities tend to have skewed curves: compared to at-the-money, implied volatility is substantially higher for low strikes, and slightly lower for high strikes. Currencies tend to have more symmetrical curves, with implied volatility lowest at-the-money, and higher volatilities in both wings. Commodities often have the reverse behavior to equities, with higher implied volatility for higher strikes.
Despite the existence of the volatility smile (and the violation of all the other assumptions of the Black–Scholes model), the Black–Scholes PDE and Black–Scholes formula are still used extensively in practice. A typical approach is to regard the volatility surface as a fact about the market, and use an implied volatility from it in a Black–Scholes valuation model. This has been described as using "the wrong number in the wrong formula to get the right price". This approach also gives usable values for the hedge ratios (the Greeks). Even when more advanced models are used, traders prefer to think in terms of Black–Scholes implied volatility as it allows them to evaluate and compare options of different maturities, strikes, and so on. For a discussion as to the various alternative approaches developed here, see Financial economics § Challenges and criticism.
=== Valuing bond options ===
Black–Scholes cannot be applied directly to bond securities because of pull-to-par. As the bond reaches its maturity date, all of the prices involved with the bond become known, thereby decreasing its volatility, and the simple Black–Scholes model does not reflect this process. A large number of extensions to Black–Scholes, beginning with the Black model, have been used to deal with this phenomenon. See Bond option § Valuation.
=== Interest rate curve ===
In practice, interest rates are not constant: they vary by tenor (coupon frequency), giving an interest rate curve which may be interpolated to pick an appropriate rate to use in the Black–Scholes formula. Another consideration is that interest rates themselves vary over time. This volatility may make a significant contribution to the price, especially of long-dated options, much as bond prices are sensitive to interest rates through their inverse relationship.
=== Short stock rate ===
Taking a short stock position, as inherent in the derivation, is not typically free of cost; equivalently, it is possible to lend out a long stock position for a small fee. In either case, this can be treated as a continuous dividend for the purposes of a Black–Scholes valuation, provided that there is no glaring asymmetry between the short stock borrowing cost and the long stock lending income.
== Criticism and comments ==
Espen Gaarder Haug and Nassim Nicholas Taleb argue that the Black–Scholes model merely recasts existing widely used models in terms of practically impossible "dynamic hedging" rather than "risk", to make them more compatible with mainstream neoclassical economic theory. They also assert that Boness in 1964 had already published a formula that is "actually identical" to the Black–Scholes call option pricing equation. Edward Thorp also claims to have guessed the Black–Scholes formula in 1967 but kept it to himself to make money for his investors. Emanuel Derman and Taleb have also criticized dynamic hedging and state that a number of researchers had put forth similar models prior to Black and Scholes. In response, Paul Wilmott has defended the model.
In his 2008 letter to the shareholders of Berkshire Hathaway, Warren Buffett wrote: "I believe the Black–Scholes formula, even though it is the standard for establishing the dollar liability for options, produces strange results when the long-term variety are being valued... The Black–Scholes formula has approached the status of holy writ in finance ... If the formula is applied to extended time periods, however, it can produce absurd results. In fairness, Black and Scholes almost certainly understood this point well. But their devoted followers may be ignoring whatever caveats the two men attached when they first unveiled the formula."
British mathematician Ian Stewart, author of the 2012 book entitled In Pursuit of the Unknown: 17 Equations That Changed the World, said that Black–Scholes had "underpinned massive economic growth" and the "international financial system was trading derivatives valued at one quadrillion dollars per year" by 2007. He said that the Black–Scholes equation was the "mathematical justification for the trading"—and therefore—"one ingredient in a rich stew of financial irresponsibility, political ineptitude, perverse incentives and lax regulation" that contributed to the 2008 financial crisis. He clarified that "the equation itself wasn't the real problem", but its abuse in the financial industry.
The Black–Scholes model assumes positive underlying prices; if the underlying has a negative price, the model does not work directly. When dealing with options whose underlying can go negative, practitioners may use a different model such as the Bachelier model or simply add a constant offset to the prices.
== See also ==
Binomial options model, a discrete numerical method for calculating option prices
Black model, a variant of the Black–Scholes option pricing model
Black Shoals, a financial art piece
Brownian model of financial markets
Datar–Mathews method for real option valuation
Financial mathematics (contains a list of related articles)
Fuzzy pay-off method for real option valuation
Heat equation, to which the Black–Scholes PDE can be transformed
Jump diffusion
Monte Carlo option model, using simulation in the valuation of options with complicated features
Real options analysis
Stochastic volatility
== Notes ==
== References ==
=== Primary references ===
Black, Fischer; Scholes, Myron (1973). "The Pricing of Options and Corporate Liabilities". Journal of Political Economy. 81 (3): 637–654. doi:10.1086/260062. S2CID 154552078. [1] (Black and Scholes' original paper.)
Merton, Robert C. (1973). "Theory of Rational Option Pricing". Bell Journal of Economics and Management Science. 4 (1). The RAND Corporation: 141–183. doi:10.2307/3003143. hdl:10338.dmlcz/135817. JSTOR 3003143. [2]
Hull, John C. (1997). Options, Futures, and Other Derivatives. Prentice Hall. ISBN 0-13-601589-1.
=== Historical and sociological aspects ===
Bernstein, Peter (1992). Capital Ideas: The Improbable Origins of Modern Wall Street. The Free Press. ISBN 0-02-903012-9.
Derman, Emanuel. "My Life as a Quant" John Wiley & Sons, Inc. 2004. ISBN 0-471-39420-3
MacKenzie, Donald (2003). "An Equation and its Worlds: Bricolage, Exemplars, Disunity and Performativity in Financial Economics" (PDF). Social Studies of Science. 33 (6): 831–868. doi:10.1177/0306312703336002. hdl:20.500.11820/835ab5da-2504-4152-ae5b-139da39595b8. S2CID 15524084. [3]
MacKenzie, Donald; Yuval Millo (2003). "Constructing a Market, Performing Theory: The Historical Sociology of a Financial Derivatives Exchange". American Journal of Sociology. 109 (1): 107–145. CiteSeerX 10.1.1.461.4099. doi:10.1086/374404. S2CID 145805302. [4]
MacKenzie, Donald (2006). An Engine, not a Camera: How Financial Models Shape Markets. MIT Press. ISBN 0-262-13460-8.
Mandelbrot & Hudson, "The (Mis)Behavior of Markets" Basic Books, 2006. ISBN 978-0-465-04355-2
Szpiro, George G., Pricing the Future: Finance, Physics, and the 300-Year Journey to the Black–Scholes Equation; A Story of Genius and Discovery (New York: Basic, 2011) 298 pp.
Taleb, Nassim. "Dynamic Hedging" John Wiley & Sons, Inc. 1997. ISBN 0-471-15280-3
Thorp, Ed. "A Man for all Markets" Random House, 2017. ISBN 978-1-4000-6796-1
=== Further reading ===
Haug, E. G (2007). "Option Pricing and Hedging from Theory to Practice". Derivatives: Models on Models. Wiley. ISBN 978-0-470-01322-9. The book gives a series of historical references supporting the theory that option traders use much more robust hedging and pricing principles than the Black, Scholes and Merton model.
Triana, Pablo (2009). Lecturing Birds on Flying: Can Mathematical Theories Destroy the Financial Markets?. Wiley. ISBN 978-0-470-40675-5. The book takes a critical look at the Black, Scholes and Merton model.
== External links ==
=== Discussion of the model ===
Ajay Shah. Black, Merton and Scholes: Their work and its consequences. Economic and Political Weekly, XXXII(52):3337–3342, December 1997
The mathematical equation that caused the banks to crash by Ian Stewart in The Observer, February 12, 2012
When You Cannot Hedge Continuously: The Corrections to Black–Scholes, Emanuel Derman
=== Derivation and solution ===
Solution of the Black–Scholes Equation Using the Green's Function, Prof. Dennis Silverman
The Black–Scholes Equation Expository article by mathematician Terence Tao.
=== Computer implementations ===
Black–Scholes in Multiple Languages
Black–Scholes in Java
Chicago Option Pricing Model (Graphing Version)
Black–Scholes–Merton Implied Volatility Surface Model (Java)
Online Black–Scholes Calculator
=== Historical ===
Trillion Dollar Bet—Companion Web site to a Nova episode originally broadcast on February 8, 2000. "The film tells the fascinating story of the invention of the Black–Scholes Formula, a mathematical Holy Grail that forever altered the world of finance and earned its creators the 1997 Nobel Prize in Economics."
BBC Horizon A TV-programme on the so-called Midas formula and the bankruptcy of Long-Term Capital Management (LTCM)
BBC News Magazine Black–Scholes: The maths formula linked to the financial crash (April 27, 2012 article) | Wikipedia/Black–Scholes_model |
In finance, the binomial options pricing model (BOPM) provides a generalizable numerical method for the valuation of options. Essentially, the model uses a "discrete-time" (lattice based) model of the varying price over time of the underlying financial instrument, addressing cases where the closed-form Black–Scholes formula is wanting; unlike that formula, the BOPM does not in general admit a closed-form expression.
The binomial model was first proposed by William Sharpe in the 1978 edition of Investments (ISBN 013504605X), and formalized by Cox, Ross and Rubinstein in 1979 and by Rendleman and Bartter in that same year.
For binomial trees as applied to fixed income and interest rate derivatives see Lattice model (finance) § Interest rate derivatives.
== Use of the model ==
The Binomial options pricing model approach has been widely used since it is able to handle a variety of conditions for which other models cannot easily be applied. This is largely because the BOPM is based on the description of an underlying instrument over a period of time rather than a single point. As a consequence, it is used to value American options that are exercisable at any time in a given interval as well as Bermudan options that are exercisable at specific instances of time. Being relatively simple, the model is readily implementable in computer software (including a spreadsheet).
Although higher in computational complexity and computationally slower than the Black–Scholes formula, it is more accurate, particularly for longer-dated options on securities with dividend payments. For these reasons, various versions of the binomial model are widely used by practitioners in the options markets.
For options with several sources of uncertainty (e.g., real options) and for options with complicated features (e.g., Asian options), binomial methods are less practical due to several difficulties, and Monte Carlo option models are commonly used instead. When simulating a small number of time steps, Monte Carlo simulation will be more computationally time-consuming than BOPM (cf. Monte Carlo methods in finance). However, the worst-case runtime of BOPM will be O(2^n), where n is the number of time steps in the simulation. Monte Carlo simulations will generally have a polynomial time complexity, and will be faster for large numbers of simulation steps. Monte Carlo simulations are also less susceptible to sampling errors, since binomial techniques use discrete time units; this becomes more pronounced the smaller the discrete units become.
== Method ==
The binomial pricing model traces the evolution of the option's key underlying variables in discrete-time. This is done by means of a binomial lattice (Tree), for a number of time steps between the valuation and expiration dates. Each node in the lattice represents a possible price of the underlying at a given point in time.
Valuation is performed iteratively, starting at each of the final nodes (those that may be reached at the time of expiration), and then working backwards through the tree towards the first node (valuation date). The value computed at each stage is the value of the option at that point in time.
Option valuation using this method is, as described, a three-step process:
Price tree generation,
Calculation of option value at each final node,
Sequential calculation of the option value at each preceding node.
=== Step 1: Create the binomial price tree ===
The tree of prices is produced by working forward from valuation date to expiration.
At each step, it is assumed that the underlying instrument will move up or down by a specific factor (u or d) per step of the tree (where, by definition, u ≥ 1 and 0 < d ≤ 1). So, if S is the current price, then in the next period the price will either be S_up = S·u or S_down = S·d.
The up and down factors are calculated using the underlying (fixed) volatility, σ, and the time duration of a step, Δt, measured in years (using the day count convention of the underlying instrument). From the condition that the variance of the log of the price over a step is σ²Δt, we have:
{\displaystyle u=e^{\sigma {\sqrt {\Delta t}}}}
{\displaystyle d=e^{-\sigma {\sqrt {\Delta t}}}={\frac {1}{u}}.}
Above is the original Cox, Ross, & Rubinstein (CRR) method; there are various other techniques for generating the lattice, such as the "equal probabilities" tree.
The CRR method ensures that the tree is recombinant, i.e. if the underlying asset moves up and then down (u,d), the price will be the same as if it had moved down and then up (d,u)—here the two paths merge or recombine. This property reduces the number of tree nodes, and thus accelerates the computation of the option price.
This property also allows the value of the underlying asset at each node to be calculated directly via formula, and does not require that the tree be built first. The node-value will be:
{\displaystyle S_{n}=S_{0}\times u^{N_{u}-N_{d}},}
where N_u is the number of up ticks and N_d is the number of down ticks.
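As a small illustration of this direct node-value formula (a sketch with hypothetical variable names):

def node_price(S0, u, n_up, n_down):
    # Underlying value at a node reached by n_up up ticks and n_down down ticks
    # on a recombining tree with d = 1/u.
    return S0 * u ** (n_up - n_down)

print(node_price(100.0, 1.1, 3, 1))  # 100 * 1.1**2, i.e. 121 (up to floating point)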
=== Step 2: Find option value at each final node ===
At each final node of the tree—i.e. at expiration of the option—the option value is simply its intrinsic, or exercise, value:
Max [ (Sn − K), 0 ], for a call option
Max [ (K − Sn), 0 ], for a put option,
where K is the strike price and S_n is the spot price of the underlying asset at the nth period.
=== Step 3: Find option value at earlier nodes ===
Once the above step is complete, the option value is then found for each node, starting at the penultimate time step, and working back to the first node of the tree (the valuation date) where the calculated result is the value of the option.
In overview: the "binomial value" is found at each node, using the risk-neutrality assumption; see Risk neutral valuation. If exercise is permitted at the node, then the model takes the greater of the binomial value and the exercise value at the node.
The steps are as follows:
The binomial value at a node is the expected value of the option at its two successor nodes ("Option up" and "Option down"), weighted by the risk-neutral probability of an up move and discounted back one step at the risk-free rate.
If early exercise is permitted at the node, the option value at the node is the greater of this binomial value and the exercise (intrinsic) value.
In calculating the value at the next time step, i.e. one step closer to the valuation date, the model must use the value selected here, for "Option up"/"Option down" as appropriate, in the formula at that node.
The approach is demonstrated below for computing the price of an American put option, although it is easily generalized for calls and for European and Bermudan options.
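A minimal Python sketch of this backward-induction procedure (an illustration only, under the CRR factors above and the standard risk-neutral up-move probability p = (e^{rΔt} − d)/(u − d) for a non-dividend-paying underlying; the function and parameter names are hypothetical, not the article's own reference implementation):

import math

def american_put_crr(S0, K, T, r, sigma, n):
    # CRR binomial valuation of an American put (no dividends), n time steps.
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral probability of an up move
    disc = math.exp(-r * dt)

    # Steps 1 and 2: underlying prices and intrinsic values at the final nodes.
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]

    # Step 3: walk backwards, taking the greater of continuation and exercise value.
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            continuation = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(K - S0 * u**j * d**(i - j), 0.0)
            values[j] = max(continuation, exercise)
    return values[0]

print(american_put_crr(S0=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2, n=500))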
== Relationship with Black–Scholes ==
Similar assumptions underpin both the binomial model and the Black–Scholes model, and the binomial model thus provides a discrete time approximation to the continuous process underlying the Black–Scholes model. The binomial model assumes that movements in the price follow a binomial distribution; for many trials, this binomial distribution approaches the log-normal distribution assumed by Black–Scholes. In this case then, for European options without dividends, the binomial model value converges on the Black–Scholes formula value as the number of time steps increases.
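As a rough numerical check of this convergence (a hedged sketch reusing the CRR lattice of the previous example, without the early-exercise step; the comparison value of about 10.45 is the Black–Scholes call price for these illustrative inputs):

import math

def european_call_crr(S0, K, T, r, sigma, n):
    # European call on a CRR lattice: same tree as above, but no early-exercise check.
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for i in range(n - 1, -1, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j]) for j in range(i + 1)]
    return values[0]

for n in (10, 100, 1000):
    print(n, european_call_crr(100.0, 100.0, 1.0, 0.05, 0.2, n))
# The printed values approach the Black–Scholes price (about 10.45) as n grows.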
In addition, when analyzed as a numerical procedure, the CRR binomial method can be viewed as a special case of the explicit finite difference method for the Black–Scholes PDE; see finite difference methods for option pricing.
== See also ==
Trinomial tree, a similar model with three possible paths per node.
Tree (data structure)
Lattice model (finance), for more general discussion and application to other underlyings
Black–Scholes: binomial lattices are able to handle a variety of conditions for which Black–Scholes cannot be applied.
Monte Carlo option model, used in the valuation of options with complicated features that make them difficult to value through other methods.
Real options analysis, where the BOPM is widely used.
Quantum finance, quantum binomial pricing model.
Mathematical finance, which has a list of related articles.
Employee stock option § Valuation, where the BOPM is widely used.
Implied binomial tree
Edgeworth binomial tree
== References ==
== External links ==
The Binomial Model for Pricing Options, Prof. Thayer Watkins
Binomial Option Pricing (PDF), Prof. Robert M. Conroy
Binomial Option Pricing Model by Fiona Maclachlan, The Wolfram Demonstrations Project
On the Irrelevance of Expected Stock Returns in the Pricing of Options in the Binomial Model: A Pedagogical Note by Valeri Zakamouline
A Simple Derivation of Risk-Neutral Probability in the Binomial Option Pricing Model by Greg Orosi | Wikipedia/Binomial_options_pricing_model |
In mathematical analysis and in probability theory, a σ-algebra ("sigma algebra") is part of the formalism for defining sets that can be measured. In calculus and analysis, for example, σ-algebras are used to define the concept of sets with area or volume. In probability theory, they are used to define events with a well-defined probability. In this way, σ-algebras help to formalize the notion of size.
In formal terms, a σ-algebra (also σ-field, where the σ comes from the German "Summe", meaning "sum") on a set X is a nonempty collection Σ of subsets of X closed under complement, countable unions, and countable intersections. The ordered pair (X, Σ) is called a measurable space.
The set X is understood to be an ambient space (such as the 2D plane or the set of outcomes when rolling a six-sided die {1,2,3,4,5,6}), and the collection Σ is a choice of subsets declared to have a well-defined size. The closure requirements for σ-algebras are designed to capture our intuitive ideas about how sizes combine: if there is a well-defined probability that an event occurs, there should be a well-defined probability that it does not occur (closure under complements); if several sets have a well-defined size, so should their combination (countable unions); if several events have a well-defined probability of occurring, so should the event where they all occur simultaneously (countable intersections).
The definition of σ-algebra resembles other mathematical structures such as a topology (which is required to be closed under all unions but only finite intersections, and which doesn't necessarily contain all complements of its sets) or a set algebra (which is closed only under finite unions and intersections).
== Examples of σ-algebras ==
If X = {a, b, c, d}, one possible σ-algebra on X is Σ = {∅, {a, b}, {c, d}, {a, b, c, d}}, where ∅ is the empty set. In general, a finite algebra is always a σ-algebra.
If {A₁, A₂, A₃, …} is a countable partition of X, then the collection of all unions of sets in the partition (including the empty set) is a σ-algebra.
A more useful example is the set of subsets of the real line formed by starting with all open intervals and adding in all countable unions, countable intersections, and relative complements and continuing this process (by transfinite iteration through all countable ordinals) until the relevant closure properties are achieved (a construction known as the Borel hierarchy).
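For a finite ambient set, the closure requirements can be checked mechanically; for a finite set, countable unions reduce to finite unions. The following sketch (an illustrative helper, not part of the article) verifies the conditions for the first example above:

def is_sigma_algebra(X, collection):
    # Check that `collection` contains X and is closed under complement and union.
    X = frozenset(X)
    sigma = {frozenset(A) for A in collection}
    return (
        X in sigma
        and all(X - A in sigma for A in sigma)
        and all(A | B in sigma for A in sigma for B in sigma)
    )

example = [set(), {"a", "b"}, {"c", "d"}, {"a", "b", "c", "d"}]
print(is_sigma_algebra({"a", "b", "c", "d"}, example))  # True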
== Motivation ==
There are at least three key motivators for σ-algebras: defining measures, manipulating limits of sets, and managing partial information characterized by sets.
=== Measure ===
A measure on
X
{\displaystyle X}
is a function that assigns a non-negative real number to subsets of
X
;
{\displaystyle X;}
this can be thought of as making precise a notion of "size" or "volume" for sets. We want the size of the union of disjoint sets to be the sum of their individual sizes, even for an infinite sequence of disjoint sets.
One would like to assign a size to every subset of
X
,
{\displaystyle X,}
but in many natural settings, this is not possible. For example, the axiom of choice implies that when the size under consideration is the ordinary notion of length for subsets of the real line, then there exist sets for which no size exists, for example, the Vitali sets. For this reason, one considers instead a smaller collection of privileged subsets of
X
.
{\displaystyle X.}
These subsets will be called the measurable sets. They are closed under operations that one would expect for measurable sets, that is, the complement of a measurable set is a measurable set and the countable union of measurable sets is a measurable set. Non-empty collections of sets with these properties are called σ-algebras.
=== Limits of sets ===
Many uses of measure, such as the probability concept of almost sure convergence, involve limits of sequences of sets. For this, closure under countable unions and intersections is paramount. Set limits are defined as follows on σ-algebras.
The limit supremum or outer limit of a sequence A₁, A₂, A₃, … of subsets of X is
{\displaystyle \limsup _{n\to \infty }A_{n}=\bigcap _{n=1}^{\infty }\bigcup _{m=n}^{\infty }A_{m}=\bigcap _{n=1}^{\infty }\left(A_{n}\cup A_{n+1}\cup \cdots \right).}
It consists of all points x that are in infinitely many of these sets (or equivalently, that are in cofinally many of them). That is, x ∈ lim sup A_n if and only if there exists an infinite subsequence A_{n₁}, A_{n₂}, … (where n₁ < n₂ < ⋯) of sets that all contain x; that is, such that x ∈ A_{n₁} ∩ A_{n₂} ∩ ⋯.
The limit infimum or inner limit of a sequence A₁, A₂, A₃, … of subsets of X is
{\displaystyle \liminf _{n\to \infty }A_{n}=\bigcup _{n=1}^{\infty }\bigcap _{m=n}^{\infty }A_{m}=\bigcup _{n=1}^{\infty }\left(A_{n}\cap A_{n+1}\cap \cdots \right).}
It consists of all points that are in all but finitely many of these sets (or equivalently, that are eventually in all of them). That is, x ∈ lim inf A_n if and only if there exists an index N ∈ ℕ such that A_N, A_{N+1}, … all contain x; that is, such that x ∈ A_N ∩ A_{N+1} ∩ ⋯.
The inner limit is always a subset of the outer limit:
{\displaystyle \liminf _{n\to \infty }A_{n}~\subseteq ~\limsup _{n\to \infty }A_{n}.}
If these two sets are equal then their limit lim A_n exists and is equal to this common set:
{\displaystyle \lim _{n\to \infty }A_{n}:=\liminf _{n\to \infty }A_{n}=\limsup _{n\to \infty }A_{n}.}
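As a finite illustration of these definitions (a sketch only: the true limits quantify over the entire infinite sequence, so a long truncation is used here as an approximation), consider the alternating sequence A_n = {0} for even n and {0, 1} for odd n, whose outer limit is {0, 1} and whose inner limit is {0}:

def truncated_limits(sets):
    # Approximate lim sup / lim inf from a long finite prefix of the sequence:
    # lim sup ~ points appearing somewhere in the tail, lim inf ~ points in every tail set.
    tail = sets[len(sets) // 2:]
    universe = set().union(*sets)
    outer = {x for x in universe if any(x in A for A in tail)}
    inner = {x for x in universe if all(x in A for A in tail)}
    return outer, inner

A = [{0} if n % 2 == 0 else {0, 1} for n in range(1000)]
print(truncated_limits(A))  # ({0, 1}, {0})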
=== Sub σ-algebras ===
In much of probability, especially when conditional expectation is involved, one is concerned with sets that represent only part of all the possible information that can be observed. This partial information can be characterized with a smaller σ-algebra which is a subset of the principal σ-algebra; it consists of the collection of subsets relevant only to and determined only by the partial information. Formally, if
Σ
,
Σ
′
{\displaystyle \Sigma ,\Sigma '}
are σ-algebras on
X
{\displaystyle X}
, then
Σ
′
{\displaystyle \Sigma '}
is a sub σ-algebra of
Σ
{\displaystyle \Sigma }
if
Σ
′
⊆
Σ
{\displaystyle \Sigma '\subseteq \Sigma }
.
The Bernoulli process provides a simple example. This consists of a sequence of random coin flips, coming up Heads (
H
{\displaystyle H}
) or Tails (
T
{\displaystyle T}
), of unbounded length. The sample space Ω consists of all possible infinite sequences of
H
{\displaystyle H}
or
T
:
{\displaystyle T:}
Ω
=
{
H
,
T
}
∞
=
{
(
x
1
,
x
2
,
x
3
,
…
)
:
x
i
∈
{
H
,
T
}
,
i
≥
1
}
.
{\displaystyle \Omega =\{H,T\}^{\infty }=\{(x_{1},x_{2},x_{3},\dots ):x_{i}\in \{H,T\},i\geq 1\}.}
The full sigma algebra can be generated from an ascending sequence of subalgebras, by considering the information that might be obtained after observing some or all of the first
n
{\displaystyle n}
coin flips. This sequence of subalgebras is given by
G
n
=
{
A
×
{
Ω
}
:
A
⊆
{
H
,
T
}
n
}
{\displaystyle {\mathcal {G}}_{n}=\{A\times \{\Omega \}:A\subseteq \{H,T\}^{n}\}}
Each of these is finer than the last, and so can be ordered as a filtration
G
0
⊆
G
1
⊆
G
2
⊆
⋯
⊆
G
∞
{\displaystyle {\mathcal {G}}_{0}\subseteq {\mathcal {G}}_{1}\subseteq {\mathcal {G}}_{2}\subseteq \cdots \subseteq {\mathcal {G}}_{\infty }}
The first subalgebra
G
0
=
{
∅
,
Ω
}
{\displaystyle {\mathcal {G}}_{0}=\{\varnothing ,\Omega \}}
is the trivial algebra: it has only two elements in it, the empty set and the total space. The second subalgebra
G
1
{\displaystyle {\mathcal {G}}_{1}}
has four elements: the two in
G
0
{\displaystyle {\mathcal {G}}_{0}}
plus two more: sequences that start with
H
{\displaystyle H}
and sequences that start with
T
{\displaystyle T}
. Each subalgebra is finer than the last. The
n
{\displaystyle n}
'th subalgebra contains
2
n
+
1
{\displaystyle 2^{n+1}}
elements: it divides the total space
Ω
{\displaystyle \Omega }
into all of the possible sequences that might have been observed after
n
{\displaystyle n}
flips, including the possible non-observation of some of the flips.
The limiting algebra
G
∞
{\displaystyle {\mathcal {G}}_{\infty }}
is the smallest σ-algebra containing all the others. It is the algebra generated by the product topology or weak topology on the product space
{
H
,
T
}
∞
.
{\displaystyle \{H,T\}^{\infty }.}
== Definition and properties ==
=== Definition ===
Let X be some set, and let P(X) represent its power set, the set of all subsets of X. Then a subset Σ ⊆ P(X) is called a σ-algebra if and only if it satisfies the following three properties:
X is in Σ.
Σ is closed under complementation: If some set A is in Σ, then so is its complement, X ∖ A.
Σ is closed under countable unions: If A₁, A₂, A₃, … are in Σ, then so is A = A₁ ∪ A₂ ∪ A₃ ∪ ⋯.
From these properties, it follows that the σ-algebra is also closed under countable intersections (by applying De Morgan's laws).
It also follows that the empty set ∅ is in Σ, since by (1) X is in Σ and (2) asserts that its complement, the empty set, is also in Σ.
Moreover, since {X, ∅} satisfies all 3 conditions, it follows that {X, ∅} is the smallest possible σ-algebra on X. The largest possible σ-algebra on X is P(X).
Elements of the σ-algebra are called measurable sets. An ordered pair (X, Σ), where X is a set and Σ is a σ-algebra over X, is called a measurable space. A function between two measurable spaces is called a measurable function if the preimage of every measurable set is measurable. The collection of measurable spaces forms a category, with the measurable functions as morphisms. Measures are defined as certain types of functions from a σ-algebra to [0, ∞].
A σ-algebra is both a π-system and a Dynkin system (λ-system). The converse is true as well, by Dynkin's theorem (see below).
=== Dynkin's π-λ theorem ===
This theorem (or the related monotone class theorem) is an essential tool for proving many results about properties of specific σ-algebras. It capitalizes on the nature of two simpler classes of sets, namely the following.
A π-system
P
{\displaystyle P}
is a collection of subsets of
X
{\displaystyle X}
that is closed under finitely many intersections, and
A Dynkin system (or λ-system)
D
{\displaystyle D}
is a collection of subsets of
X
{\displaystyle X}
that contains
X
{\displaystyle X}
and is closed under complement and under countable unions of disjoint subsets.
Dynkin's π-λ theorem says, if
P
{\displaystyle P}
is a π-system and
D
{\displaystyle D}
is a Dynkin system that contains
P
,
{\displaystyle P,}
then the σ-algebra
σ
(
P
)
{\displaystyle \sigma (P)}
generated by
P
{\displaystyle P}
is contained in
D
.
{\displaystyle D.}
Since certain π-systems are relatively simple classes, it may not be hard to verify that all sets in
P
{\displaystyle P}
enjoy the property under consideration while, on the other hand, showing that the collection
D
{\displaystyle D}
of all subsets with the property is a Dynkin system can also be straightforward. Dynkin's π-λ Theorem then implies that all sets in
σ
(
P
)
{\displaystyle \sigma (P)}
enjoy the property, avoiding the task of checking it for an arbitrary set in
σ
(
P
)
.
{\displaystyle \sigma (P).}
One of the most fundamental uses of the π-λ theorem is to show equivalence of separately defined measures or integrals. For example, it is used to equate a probability for a random variable
X
{\displaystyle X}
with the Lebesgue-Stieltjes integral typically associated with computing the probability:
P
(
X
∈
A
)
=
∫
A
F
(
d
x
)
{\displaystyle \mathbb {P} (X\in A)=\int _{A}\,F(dx)}
for all
A
{\displaystyle A}
in the Borel σ-algebra on
R
,
{\displaystyle \mathbb {R} ,}
where
F
(
x
)
{\displaystyle F(x)}
is the cumulative distribution function for
X
,
{\displaystyle X,}
defined on
R
,
{\displaystyle \mathbb {R} ,}
while
P
{\displaystyle \mathbb {P} }
is a probability measure, defined on a σ-algebra
Σ
{\displaystyle \Sigma }
of subsets of some sample space
Ω
.
{\displaystyle \Omega .}
=== Combining σ-algebras ===
Suppose
{
Σ
α
:
α
∈
A
}
{\displaystyle \textstyle \left\{\Sigma _{\alpha }:\alpha \in {\mathcal {A}}\right\}}
is a collection of σ-algebras on a space
X
.
{\displaystyle X.}
Meet
The intersection of a collection of σ-algebras is a σ-algebra. To emphasize its character as a σ-algebra, it often is denoted by:
⋀
α
∈
A
Σ
α
.
{\displaystyle \bigwedge _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }.}
Sketch of Proof: Let
Σ
∗
{\displaystyle \Sigma ^{*}}
denote the intersection. Since
X
{\displaystyle X}
is in every
Σ
α
,
Σ
∗
{\displaystyle \Sigma _{\alpha },\Sigma ^{*}}
is not empty. Closure under complement and countable unions for every
Σ
α
{\displaystyle \Sigma _{\alpha }}
implies the same must be true for
Σ
∗
.
{\displaystyle \Sigma ^{*}.}
Therefore,
Σ
∗
{\displaystyle \Sigma ^{*}}
is a σ-algebra.
Join
The union of a collection of σ-algebras is not generally a σ-algebra, or even an algebra, but it generates a σ-algebra known as the join which typically is denoted
⋁
α
∈
A
Σ
α
=
σ
(
⋃
α
∈
A
Σ
α
)
.
{\displaystyle \bigvee _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }=\sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right).}
A π-system that generates the join is
P
=
{
⋂
i
=
1
n
A
i
:
A
i
∈
Σ
α
i
,
α
i
∈
A
,
n
≥
1
}
.
{\displaystyle {\mathcal {P}}=\left\{\bigcap _{i=1}^{n}A_{i}:A_{i}\in \Sigma _{\alpha _{i}},\alpha _{i}\in {\mathcal {A}},\ n\geq 1\right\}.}
Sketch of Proof: By the case
n
=
1
,
{\displaystyle n=1,}
it is seen that each
Σ
α
⊂
P
,
{\displaystyle \Sigma _{\alpha }\subset {\mathcal {P}},}
so
⋃
α
∈
A
Σ
α
⊆
P
.
{\displaystyle \bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\subseteq {\mathcal {P}}.}
This implies
σ
(
⋃
α
∈
A
Σ
α
)
⊆
σ
(
P
)
{\displaystyle \sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right)\subseteq \sigma ({\mathcal {P}})}
by the definition of a σ-algebra generated by a collection of subsets. On the other hand,
P
⊆
σ
(
⋃
α
∈
A
Σ
α
)
{\displaystyle {\mathcal {P}}\subseteq \sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right)}
which, by Dynkin's π-λ theorem, implies
σ
(
P
)
⊆
σ
(
⋃
α
∈
A
Σ
α
)
.
{\displaystyle \sigma ({\mathcal {P}})\subseteq \sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right).}
=== σ-algebras for subspaces ===
Suppose
Y
{\displaystyle Y}
is a subset of
X
{\displaystyle X}
and let
(
X
,
Σ
)
{\displaystyle (X,\Sigma )}
be a measurable space.
The collection
{
Y
∩
B
:
B
∈
Σ
}
{\displaystyle \{Y\cap B:B\in \Sigma \}}
is a σ-algebra of subsets of
Y
.
{\displaystyle Y.}
Suppose
(
Y
,
Λ
)
{\displaystyle (Y,\Lambda )}
is a measurable space. The collection
{
A
⊆
X
:
A
∩
Y
∈
Λ
}
{\displaystyle \{A\subseteq X:A\cap Y\in \Lambda \}}
is a σ-algebra of subsets of
X
.
{\displaystyle X.}
=== Relation to σ-ring ===
A σ-algebra
Σ
{\displaystyle \Sigma }
is just a σ-ring that contains the universal set
X
.
{\displaystyle X.}
A σ-ring need not be a σ-algebra, as for example measurable subsets of zero Lebesgue measure in the real line are a σ-ring, but not a σ-algebra since the real line has infinite measure and thus cannot be obtained by their countable union. If, instead of zero measure, one takes measurable subsets of finite Lebesgue measure, those are a ring but not a σ-ring, since the real line can be obtained by their countable union yet its measure is not finite.
=== Typographic note ===
σ-algebras are sometimes denoted using calligraphic capital letters, or the Fraktur typeface. Thus
(
X
,
Σ
)
{\displaystyle (X,\Sigma )}
may be denoted as
(
X
,
F
)
{\displaystyle \scriptstyle (X,\,{\mathcal {F}})}
or
(
X
,
F
)
.
{\displaystyle \scriptstyle (X,\,{\mathfrak {F}}).}
== Particular cases and examples ==
=== Separable σ-algebras ===
A separable
σ
{\displaystyle \sigma }
-algebra (or separable
σ
{\displaystyle \sigma }
-field) is a
σ
{\displaystyle \sigma }
-algebra
F
{\displaystyle {\mathcal {F}}}
that is a separable space when considered as a metric space with metric
ρ
(
A
,
B
)
=
μ
(
A
△
B
)
{\displaystyle \rho (A,B)=\mu (A{\mathbin {\triangle }}B)}
for
A
,
B
∈
F
{\displaystyle A,B\in {\mathcal {F}}}
and a given finite measure
μ
{\displaystyle \mu }
(and with
△
{\displaystyle \triangle }
being the symmetric difference operator). Any
σ
{\displaystyle \sigma }
-algebra generated by a countable collection of sets is separable, but the converse need not hold. For example, the Lebesgue
σ
{\displaystyle \sigma }
-algebra is separable (since every Lebesgue measurable set is equivalent to some Borel set) but not countably generated (since its cardinality is higher than continuum).
A separable measure space has a natural pseudometric that renders it separable as a pseudometric space. The distance between two sets is defined as the measure of the symmetric difference of the two sets. The symmetric difference of two distinct sets can have measure zero; hence the pseudometric as defined above need not be a true metric. However, if sets whose symmetric difference has measure zero are identified into a single equivalence class, the resulting quotient set can be properly metrized by the induced metric. If the measure space is separable, it can be shown that the corresponding metric space is, too.
=== Simple set-based examples ===
Let
X
{\displaystyle X}
be any set.
The family consisting only of the empty set and the set
X
,
{\displaystyle X,}
called the minimal or trivial σ-algebra over
X
.
{\displaystyle X.}
The power set of
X
,
{\displaystyle X,}
called the discrete σ-algebra.
The collection
{
∅
,
A
,
X
∖
A
,
X
}
{\displaystyle \{\varnothing ,A,X\setminus A,X\}}
is a simple σ-algebra generated by the subset
A
.
{\displaystyle A.}
The collection of subsets of
X
{\displaystyle X}
which are countable or whose complements are countable is a σ-algebra (which is distinct from the power set of
X
{\displaystyle X}
if and only if
X
{\displaystyle X}
is uncountable). This is the σ-algebra generated by the singletons of
X
.
{\displaystyle X.}
Note: "countable" includes finite or empty.
The collection of all unions of sets in a countable partition of
X
{\displaystyle X}
is a σ-algebra.
=== Stopping time sigma-algebras ===
A stopping time
τ
{\displaystyle \tau }
can define a
σ
{\displaystyle \sigma }
-algebra
F
τ
,
{\displaystyle {\mathcal {F}}_{\tau },}
the
so-called stopping time sigma-algebra, which in a filtered probability space describes the information up to the random time
τ
{\displaystyle \tau }
in the sense that, if the filtered probability space is interpreted as a random experiment, the maximum information that can be found out about the experiment from arbitrarily often repeating it until the time
τ
{\displaystyle \tau }
is
F
τ
.
{\displaystyle {\mathcal {F}}_{\tau }.}
== σ-algebras generated by families of sets ==
=== σ-algebra generated by an arbitrary family ===
Let
F
{\displaystyle F}
be an arbitrary family of subsets of
X
.
{\displaystyle X.}
Then there exists a unique smallest σ-algebra which contains every set in
F
{\displaystyle F}
(even though
F
{\displaystyle F}
may or may not itself be a σ-algebra). It is, in fact, the intersection of all σ-algebras containing
F
.
{\displaystyle F.}
(See intersections of σ-algebras above.) This σ-algebra is denoted
σ
(
F
)
{\displaystyle \sigma (F)}
and is called the σ-algebra generated by
F
.
{\displaystyle F.}
If
F
{\displaystyle F}
is empty, then
σ
(
∅
)
=
{
∅
,
X
}
.
{\displaystyle \sigma (\varnothing )=\{\varnothing ,X\}.}
Otherwise
σ
(
F
)
{\displaystyle \sigma (F)}
consists of all the subsets of
X
{\displaystyle X}
that can be made from elements of
F
{\displaystyle F}
by a countable number of complement, union and intersection operations.
For a simple example, consider the set X = {1, 2, 3}. Then the σ-algebra generated by the single subset {1} is σ({1}) = {∅, {1}, {2, 3}, {1, 2, 3}}.
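For a finite ambient set, the generated σ-algebra can be computed by brute-force closure under complements and unions (for a finite set this suffices in place of countable unions). The sketch below, an illustrative helper rather than part of the article, reproduces the example above:

def generated_sigma_algebra(X, family):
    # Close `family` under complement and (finite) union; for a finite X
    # this yields the sigma-algebra generated by the family.
    X = frozenset(X)
    sigma = {frozenset(), X} | {frozenset(A) for A in family}
    changed = True
    while changed:
        changed = False
        new = {X - A for A in sigma} | {A | B for A in sigma for B in sigma}
        if not new <= sigma:
            sigma |= new
            changed = True
    return sigma

print(sorted((sorted(A) for A in generated_sigma_algebra({1, 2, 3}, [{1}])), key=len))
# [[], [1], [2, 3], [1, 2, 3]]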
By an abuse of notation, when a collection of subsets contains only one element,
A
,
{\displaystyle A,}
σ
(
A
)
{\displaystyle \sigma (A)}
may be written instead of
σ
(
{
A
}
)
;
{\displaystyle \sigma (\{A\});}
in the prior example
σ
(
{
1
}
)
{\displaystyle \sigma (\{1\})}
instead of
σ
(
{
{
1
}
}
)
.
{\displaystyle \sigma (\{\{1\}\}).}
Indeed, using
σ
(
A
1
,
A
2
,
…
)
{\displaystyle \sigma \left(A_{1},A_{2},\ldots \right)}
to mean
σ
(
{
A
1
,
A
2
,
…
}
)
{\displaystyle \sigma \left(\left\{A_{1},A_{2},\ldots \right\}\right)}
is also quite common.
There are many families of subsets that generate useful σ-algebras. Some of these are presented here.
=== σ-algebra generated by a function ===
If
f
{\displaystyle f}
is a function from a set
X
{\displaystyle X}
to a set
Y
{\displaystyle Y}
and
B
{\displaystyle B}
is a
σ
{\displaystyle \sigma }
-algebra of subsets of
Y
,
{\displaystyle Y,}
then the
σ
{\displaystyle \sigma }
-algebra generated by the function
f
,
{\displaystyle f,}
denoted by
σ
(
f
)
,
{\displaystyle \sigma (f),}
is the collection of all inverse images
f
−
1
(
S
)
{\displaystyle f^{-1}(S)}
of the sets
S
{\displaystyle S}
in
B
.
{\displaystyle B.}
That is,
σ
(
f
)
=
{
f
−
1
(
S
)
:
S
∈
B
}
.
{\displaystyle \sigma (f)=\left\{f^{-1}(S)\,:\,S\in B\right\}.}
A function
f
{\displaystyle f}
from a set
X
{\displaystyle X}
to a set
Y
{\displaystyle Y}
is measurable with respect to a σ-algebra
Σ
{\displaystyle \Sigma }
of subsets of
X
{\displaystyle X}
if and only if
σ
(
f
)
{\displaystyle \sigma (f)}
is a subset of
Σ
.
{\displaystyle \Sigma .}
One common situation, and understood by default if
B
{\displaystyle B}
is not specified explicitly, is when
Y
{\displaystyle Y}
is a metric or topological space and
B
{\displaystyle B}
is the collection of Borel sets on
Y
.
{\displaystyle Y.}
If
f
{\displaystyle f}
is a function from
X
{\displaystyle X}
to
R
n
{\displaystyle \mathbb {R} ^{n}}
then
σ
(
f
)
{\displaystyle \sigma (f)}
is generated by the family of subsets which are inverse images of intervals/rectangles in
R
n
:
{\displaystyle \mathbb {R} ^{n}:}
σ
(
f
)
=
σ
(
{
f
−
1
(
[
a
1
,
b
1
]
×
⋯
×
[
a
n
,
b
n
]
)
:
a
i
,
b
i
∈
R
}
)
.
{\displaystyle \sigma (f)=\sigma \left(\left\{f^{-1}(\left[a_{1},b_{1}\right]\times \cdots \times \left[a_{n},b_{n}\right]):a_{i},b_{i}\in \mathbb {R} \right\}\right).}
A useful property is the following. Assume
f
{\displaystyle f}
is a measurable map from
(
X
,
Σ
X
)
{\displaystyle \left(X,\Sigma _{X}\right)}
to
(
S
,
Σ
S
)
{\displaystyle \left(S,\Sigma _{S}\right)}
and
g
{\displaystyle g}
is a measurable map from
(
X
,
Σ
X
)
{\displaystyle \left(X,\Sigma _{X}\right)}
to
(
T
,
Σ
T
)
.
{\displaystyle \left(T,\Sigma _{T}\right).}
If there exists a measurable map
h
{\displaystyle h}
from
(
T
,
Σ
T
)
{\displaystyle \left(T,\Sigma _{T}\right)}
to
(
S
,
Σ
S
)
{\displaystyle \left(S,\Sigma _{S}\right)}
such that
f
(
x
)
=
h
(
g
(
x
)
)
{\displaystyle f(x)=h(g(x))}
for all
x
,
{\displaystyle x,}
then
σ
(
f
)
⊆
σ
(
g
)
.
{\displaystyle \sigma (f)\subseteq \sigma (g).}
If
S
{\displaystyle S}
is finite or countably infinite or, more generally,
(
S
,
Σ
S
)
{\displaystyle \left(S,\Sigma _{S}\right)}
is a standard Borel space (for example, a separable complete metric space with its associated Borel sets), then the converse is also true. Examples of standard Borel spaces include
R
n
{\displaystyle \mathbb {R} ^{n}}
with its Borel sets and
R
∞
{\displaystyle \mathbb {R} ^{\infty }}
with the cylinder σ-algebra described below.
=== Borel and Lebesgue σ-algebras ===
An important example is the Borel algebra over any topological space: the σ-algebra generated by the open sets (or, equivalently, by the closed sets). This σ-algebra is not, in general, the whole power set. For a non-trivial example that is not a Borel set, see the Vitali set or Non-Borel sets.
On the Euclidean space
R
n
,
{\displaystyle \mathbb {R} ^{n},}
another σ-algebra is of importance: that of all Lebesgue measurable sets. This σ-algebra contains more sets than the Borel σ-algebra on
R
n
{\displaystyle \mathbb {R} ^{n}}
and is preferred in integration theory, as it gives a complete measure space.
=== Product σ-algebra ===
Let
(
X
1
,
Σ
1
)
{\displaystyle \left(X_{1},\Sigma _{1}\right)}
and
(
X
2
,
Σ
2
)
{\displaystyle \left(X_{2},\Sigma _{2}\right)}
be two measurable spaces. The σ-algebra for the corresponding product space
X
1
×
X
2
{\displaystyle X_{1}\times X_{2}}
is called the product σ-algebra and is defined by
Σ
1
×
Σ
2
=
σ
(
{
B
1
×
B
2
:
B
1
∈
Σ
1
,
B
2
∈
Σ
2
}
)
.
{\displaystyle \Sigma _{1}\times \Sigma _{2}=\sigma \left(\left\{B_{1}\times B_{2}:B_{1}\in \Sigma _{1},B_{2}\in \Sigma _{2}\right\}\right).}
Observe that
{
B
1
×
B
2
:
B
1
∈
Σ
1
,
B
2
∈
Σ
2
}
{\displaystyle \{B_{1}\times B_{2}:B_{1}\in \Sigma _{1},B_{2}\in \Sigma _{2}\}}
is a π-system.
The Borel σ-algebra for
R
n
{\displaystyle \mathbb {R} ^{n}}
is generated by half-infinite rectangles and by finite rectangles. For example,
B
(
R
n
)
=
σ
(
{
(
−
∞
,
b
1
]
×
⋯
×
(
−
∞
,
b
n
]
:
b
i
∈
R
}
)
=
σ
(
{
(
a
1
,
b
1
]
×
⋯
×
(
a
n
,
b
n
]
:
a
i
,
b
i
∈
R
}
)
.
{\displaystyle {\mathcal {B}}(\mathbb {R} ^{n})=\sigma \left(\left\{(-\infty ,b_{1}]\times \cdots \times (-\infty ,b_{n}]:b_{i}\in \mathbb {R} \right\}\right)=\sigma \left(\left\{\left(a_{1},b_{1}\right]\times \cdots \times \left(a_{n},b_{n}\right]:a_{i},b_{i}\in \mathbb {R} \right\}\right).}
For each of these two examples, the generating family is a π-system.
=== σ-algebra generated by cylinder sets ===
Suppose
X
⊆
R
T
=
{
f
:
f
(
t
)
∈
R
,
t
∈
T
}
{\displaystyle X\subseteq \mathbb {R} ^{\mathbb {T} }=\{f:f(t)\in \mathbb {R} ,\ t\in \mathbb {T} \}}
is a set of real-valued functions. Let
B
(
R
)
{\displaystyle {\mathcal {B}}(\mathbb {R} )}
denote the Borel subsets of
R
.
{\displaystyle \mathbb {R} .}
A cylinder subset of
X
{\displaystyle X}
is a finitely restricted set defined as
C
t
1
,
…
,
t
n
(
B
1
,
…
,
B
n
)
=
{
f
∈
X
:
f
(
t
i
)
∈
B
i
,
1
≤
i
≤
n
}
.
{\displaystyle C_{t_{1},\dots ,t_{n}}(B_{1},\dots ,B_{n})=\left\{f\in X:f(t_{i})\in B_{i},1\leq i\leq n\right\}.}
Each
{
C
t
1
,
…
,
t
n
(
B
1
,
…
,
B
n
)
:
B
i
∈
B
(
R
)
,
1
≤
i
≤
n
}
{\displaystyle \left\{C_{t_{1},\dots ,t_{n}}\left(B_{1},\dots ,B_{n}\right):B_{i}\in {\mathcal {B}}(\mathbb {R} ),1\leq i\leq n\right\}}
is a π-system that generates a σ-algebra
Σ
t
1
,
…
,
t
n
.
{\displaystyle \textstyle \Sigma _{t_{1},\dots ,t_{n}}.}
Then the family of subsets
F
X
=
⋃
n
=
1
∞
⋃
t
i
∈
T
,
i
≤
n
Σ
t
1
,
…
,
t
n
{\displaystyle {\mathcal {F}}_{X}=\bigcup _{n=1}^{\infty }\bigcup _{t_{i}\in \mathbb {T} ,i\leq n}\Sigma _{t_{1},\dots ,t_{n}}}
is an algebra that generates the cylinder σ-algebra for
X
.
{\displaystyle X.}
This σ-algebra is a subalgebra of the Borel σ-algebra determined by the product topology of
R
T
{\displaystyle \mathbb {R} ^{\mathbb {T} }}
restricted to
X
.
{\displaystyle X.}
An important special case is when
T
{\displaystyle \mathbb {T} }
is the set of natural numbers and
X
{\displaystyle X}
is a set of real-valued sequences. In this case, it suffices to consider the cylinder sets
C
n
(
B
1
,
…
,
B
n
)
=
(
B
1
×
⋯
×
B
n
×
R
∞
)
∩
X
=
{
(
x
1
,
x
2
,
…
,
x
n
,
x
n
+
1
,
…
)
∈
X
:
x
i
∈
B
i
,
1
≤
i
≤
n
}
,
{\displaystyle C_{n}\left(B_{1},\dots ,B_{n}\right)=\left(B_{1}\times \cdots \times B_{n}\times \mathbb {R} ^{\infty }\right)\cap X=\left\{\left(x_{1},x_{2},\ldots ,x_{n},x_{n+1},\ldots \right)\in X:x_{i}\in B_{i},1\leq i\leq n\right\},}
for which
Σ
n
=
σ
(
{
C
n
(
B
1
,
…
,
B
n
)
:
B
i
∈
B
(
R
)
,
1
≤
i
≤
n
}
)
{\displaystyle \Sigma _{n}=\sigma \left(\{C_{n}\left(B_{1},\dots ,B_{n}\right):B_{i}\in {\mathcal {B}}(\mathbb {R} ),1\leq i\leq n\}\right)}
is a non-decreasing sequence of σ-algebras.
=== Ball σ-algebra ===
The ball σ-algebra is the smallest σ-algebra containing all the open (and/or closed) balls. It is never larger than the Borel σ-algebra, and the two σ-algebras coincide for separable spaces. For some nonseparable spaces, some maps are ball measurable even though they are not Borel measurable, which makes the ball σ-algebra useful in the analysis of such maps.
=== σ-algebra generated by random variable or vector ===
Suppose
(
Ω
,
Σ
,
P
)
{\displaystyle (\Omega ,\Sigma ,\mathbb {P} )}
is a probability space. If
Y
:
Ω
→
R
n
{\displaystyle \textstyle Y:\Omega \to \mathbb {R} ^{n}}
is measurable with respect to the Borel σ-algebra on
R
n
{\displaystyle \mathbb {R} ^{n}}
then
Y
{\displaystyle Y}
is called a random variable (
n
=
1
{\displaystyle n=1}
) or random vector (
n
>
1
{\displaystyle n>1}
). The σ-algebra generated by
Y
{\displaystyle Y}
is
σ
(
Y
)
=
{
Y
−
1
(
A
)
:
A
∈
B
(
R
n
)
}
.
{\displaystyle \sigma (Y)=\left\{Y^{-1}(A):A\in {\mathcal {B}}\left(\mathbb {R} ^{n}\right)\right\}.}
=== σ-algebra generated by a stochastic process ===
Suppose
(
Ω
,
Σ
,
P
)
{\displaystyle (\Omega ,\Sigma ,\mathbb {P} )}
is a probability space and
R
T
{\displaystyle \mathbb {R} ^{\mathbb {T} }}
is the set of real-valued functions on
T
.
{\displaystyle \mathbb {T} .}
If
Y
:
Ω
→
X
⊆
R
T
{\displaystyle \textstyle Y:\Omega \to X\subseteq \mathbb {R} ^{\mathbb {T} }}
is measurable with respect to the cylinder σ-algebra
σ
(
F
X
)
{\displaystyle \sigma \left({\mathcal {F}}_{X}\right)}
(see above) for
X
{\displaystyle X}
then
Y
{\displaystyle Y}
is called a stochastic process or random process. The σ-algebra generated by
Y
{\displaystyle Y}
is
σ
(
Y
)
=
{
Y
−
1
(
A
)
:
A
∈
σ
(
F
X
)
}
=
σ
(
{
Y
−
1
(
A
)
:
A
∈
F
X
}
)
,
{\displaystyle \sigma (Y)=\left\{Y^{-1}(A):A\in \sigma \left({\mathcal {F}}_{X}\right)\right\}=\sigma \left(\left\{Y^{-1}(A):A\in {\mathcal {F}}_{X}\right\}\right),}
the σ-algebra generated by the inverse images of cylinder sets.
== See also ==
Measurable function – Kind of mathematical function
Sample space – Set of all possible outcomes or results of a statistical trial or experiment
Sigma-additive set function – Mapping function
Sigma-ring – Family of sets closed under countable unions
== References ==
== External links ==
"Algebra of sets", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Sigma Algebra from PlanetMath. | Wikipedia/Σ-algebra |
A hidden Markov model (HMM) is a Markov model in which the observations are dependent on a latent (or hidden) Markov process (referred to as X). An HMM requires that there be an observable process Y whose outcomes depend on the outcomes of X in a known way. Since X cannot be observed directly, the goal is to learn about the state of X by observing Y. By definition of being a Markov model, an HMM has an additional requirement that the outcome of Y at time t = t₀ must be "influenced" exclusively by the outcome of X at t = t₀, and that the outcomes of X and Y at t < t₀ must be conditionally independent of Y at t = t₀ given X at time t = t₀. Estimation of the parameters in an HMM can be performed using maximum likelihood estimation. For linear chain HMMs, the Baum–Welch algorithm can be used to estimate parameters.
Hidden Markov models are known for their applications to thermodynamics, statistical mechanics, physics, chemistry, economics, finance, signal processing, information theory, pattern recognition—such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges and bioinformatics.
== Definition ==
Let X_n and Y_n be discrete-time stochastic processes with n ≥ 1. The pair (X_n, Y_n) is a hidden Markov model if
X_n is a Markov process whose behavior is not directly observable ("hidden");
{\displaystyle \operatorname {\mathbf {P} } {\bigl (}Y_{n}\in A\ {\bigl |}\ X_{1}=x_{1},\ldots ,X_{n}=x_{n}{\bigr )}=\operatorname {\mathbf {P} } {\bigl (}Y_{n}\in A\ {\bigl |}\ X_{n}=x_{n}{\bigr )}}
for every n ≥ 1, x₁, …, x_n, and every Borel set A.
Let X_t and Y_t be continuous-time stochastic processes. The pair (X_t, Y_t) is a hidden Markov model if
X_t is a Markov process whose behavior is not directly observable ("hidden");
{\displaystyle \operatorname {\mathbf {P} } (Y_{t_{0}}\in A\mid \{X_{t}\in B_{t}\}_{t\leq t_{0}})=\operatorname {\mathbf {P} } (Y_{t_{0}}\in A\mid X_{t_{0}}\in B_{t_{0}})}
for every t₀, every Borel set A, and every family of Borel sets {B_t}_{t ≤ t₀}.
=== Terminology ===
The states of the process X_n (resp. X_t) are called hidden states, and P(Y_n ∈ A ∣ X_n = x_n) (resp. P(Y_t ∈ A ∣ X_t ∈ B_t)) is called the emission probability or output probability.
== Examples ==
=== Drawing balls from hidden urns ===
In its discrete form, a hidden Markov process can be visualized as a generalization of the urn problem with replacement (where each item from the urn is returned to the original urn before the next step). Consider this example: in a room that is not visible to an observer there is a genie. The room contains urns X1, X2, X3, ... each of which contains a known mix of balls, with each ball having a unique label y1, y2, y3, ... . The genie chooses an urn in that room and randomly draws a ball from that urn. It then puts the ball onto a conveyor belt, where the observer can observe the sequence of the balls but not the sequence of urns from which they were drawn. The genie has some procedure to choose urns; the choice of the urn for the n-th ball depends only upon a random number and the choice of the urn for the (n − 1)-th ball. The choice of urn does not directly depend on the urns chosen before this single previous urn; therefore, this is called a Markov process. It can be described by the upper part of Figure 1.
The Markov process cannot be observed, only the sequence of labeled balls, thus this arrangement is called a hidden Markov process. This is illustrated by the lower part of the diagram shown in Figure 1, where one can see that balls y1, y2, y3, y4 can be drawn at each state. Even if the observer knows the composition of the urns and has just observed a sequence of three balls, e.g. y1, y2 and y3 on the conveyor belt, the observer still cannot be sure which urn (i.e., at which state) the genie has drawn the third ball from. However, the observer can work out other information, such as the likelihood that the third ball came from each of the urns.
=== Weather guessing game ===
Consider two friends, Alice and Bob, who live far apart from each other and who talk together daily over the telephone about what they did that day. Bob is only interested in three activities: walking in the park, shopping, and cleaning his apartment. The choice of what to do is determined exclusively by the weather on a given day. Alice has no definite information about the weather, but she knows general trends. Based on what Bob tells her he did each day, Alice tries to guess what the weather must have been like.
Alice believes that the weather operates as a discrete Markov chain. There are two states, "Rainy" and "Sunny", but she cannot observe them directly, that is, they are hidden from her. On each day, there is a certain chance that Bob will perform one of the following activities, depending on the weather: "walk", "shop", or "clean". Since Bob tells Alice about his activities, those are the observations. The entire system is that of a hidden Markov model (HMM).
Alice knows the general weather trends in the area, and what Bob likes to do on average. In other words, the parameters of the HMM are known. They can be represented as follows in Python:
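The original code listing is not reproduced in this copy. The following is a plausible reconstruction consistent with the probabilities quoted in the next paragraph (the roughly 0.57/0.43 equilibrium, the 30% rainy-to-sunny transition, the 50% cleaning emission when rainy and the 60% walking emission when sunny); the remaining values, including the 0.6/0.4 start distribution, are assumptions.

# Hypothetical reconstruction of the parameters described in the text.
states = ("Rainy", "Sunny")
observations = ("walk", "shop", "clean")

start_probability = {"Rainy": 0.6, "Sunny": 0.4}          # assumed; rainy-leaning, not the equilibrium

transition_probability = {
    "Rainy": {"Rainy": 0.7, "Sunny": 0.3},                 # 30% chance of sunny after rainy, per the text
    "Sunny": {"Rainy": 0.4, "Sunny": 0.6},                 # assumed; gives the quoted 4/7, 3/7 equilibrium
}

emission_probability = {
    "Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},     # 50% cleaning when rainy, per the text
    "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},     # 60% walking when sunny, per the text
}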
In this piece of code, start_probability represents Alice's belief about which state the HMM is in when Bob first calls her (all she knows is that it tends to be rainy on average). The particular probability distribution used here is not the equilibrium one, which is (given the transition probabilities) approximately {'Rainy': 0.57, 'Sunny': 0.43}. The transition_probability represents the change of the weather in the underlying Markov chain. In this example, there is only a 30% chance that tomorrow will be sunny if today is rainy. The emission_probability represents how likely Bob is to perform a certain activity on each day. If it is rainy, there is a 50% chance that he is cleaning his apartment; if it is sunny, there is a 60% chance that he is outside for a walk.
A similar example is further elaborated in the Viterbi algorithm page.
== Structural architecture ==
The diagram below shows the general architecture of an instantiated HMM. Each oval shape represents a random variable that can adopt any of a number of values. The random variable x(t) is the hidden state at time t (with the model from the above diagram, x(t) ∈ { x1, x2, x3 }). The random variable y(t) is the observation at time t (with y(t) ∈ { y1, y2, y3, y4 }). The arrows in the diagram (often called a trellis diagram) denote conditional dependencies.
From the diagram, it is clear that the conditional probability distribution of the hidden variable x(t) at time t, given the values of the hidden variable x at all times, depends only on the value of the hidden variable x(t − 1); the values at time t − 2 and before have no influence. This is called the Markov property. Similarly, the value of the observed variable y(t) depends on only the value of the hidden variable x(t) (both at time t).
In the standard type of hidden Markov model considered here, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). The parameters of a hidden Markov model are of two types, transition probabilities and emission probabilities (also known as output probabilities). The transition probabilities control the way the hidden state at time t is chosen given the hidden state at time
t − 1.
The hidden state space is assumed to consist of one of N possible values, modelled as a categorical distribution. (See the section below on extensions for other possibilities.) This means that for each of the N possible states that a hidden variable at time t can be in, there is a transition probability from this state to each of the N possible states of the hidden variable at time
t + 1, for a total of N^2 transition probabilities. The set of transition probabilities for transitions from any given state must sum to 1. Thus, the N × N matrix of transition probabilities is a Markov matrix. Because any transition probability can be determined once the others are known, there are a total of N(N − 1) transition parameters.
In addition, for each of the N possible states, there is a set of emission probabilities governing the distribution of the observed variable at a particular time given the state of the hidden variable at that time. The size of this set depends on the nature of the observed variable. For example, if the observed variable is discrete with M possible values, governed by a categorical distribution, there will be
M − 1 separate parameters, for a total of N(M − 1) emission parameters over all hidden states. On the other hand, if the observed variable is an M-dimensional vector distributed according to an arbitrary multivariate Gaussian distribution, there will be M parameters controlling the means and M(M + 1)/2 parameters controlling the covariance matrix, for a total of
{\displaystyle N\left(M+{\frac {M(M+1)}{2}}\right)={\frac {NM(M+3)}{2}}=O(NM^{2})}
emission parameters. (In such a case, unless the value of M is small, it may be more practical to restrict the nature of the covariances between individual elements of the observation vector, e.g. by assuming that the elements are independent of each other, or less restrictively, are independent of all but a fixed number of adjacent elements.)
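As a small worked check of these counting formulas (not part of the original article; the values N = 3, M = 4 are hypothetical):

def gaussian_hmm_param_count(N, M):
    # Transition and emission parameter counts for an HMM with N hidden states
    # and M-dimensional Gaussian emissions with full covariance matrices.
    transition = N * (N - 1)
    emission = N * (M + M * (M + 1) // 2)   # means plus covariance entries per state
    return transition, emission

print(gaussian_hmm_param_count(N=3, M=4))   # (6, 42), matching N(N-1) and NM(M+3)/2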
== Inference ==
Several inference problems are associated with hidden Markov models, as outlined below.
=== Probability of an observed sequence ===
The task is to compute, given the parameters of the model, the probability of a particular output sequence as efficiently as possible. This requires summation over all possible state sequences:
The probability of observing a sequence
{\displaystyle Y=y(0),y(1),\dots ,y(L-1),}
of length L is given by
{\displaystyle P(Y)=\sum _{X}P(Y\mid X)P(X),}
where the sum runs over all possible hidden-node sequences
{\displaystyle X=x(0),x(1),\dots ,x(L-1).}
Applying the principle of dynamic programming, this problem, too, can be handled efficiently using the forward algorithm.
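As an illustrative sketch (not taken from the article), the forward recursion can be written in a few lines of NumPy; the function name and the small example parameters are assumptions.

import numpy as np

def forward_probability(obs, start_p, trans_p, emit_p):
    # Return P(Y) for an observation sequence in O(L * N^2) time,
    # instead of summing explicitly over all N^L hidden-state sequences.
    alpha = start_p * emit_p[:, obs[0]]              # alpha_0(i) = pi_i * b_i(y_0)
    for y in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, y]     # alpha_t(j) = sum_i alpha_{t-1}(i) a_ij * b_j(y_t)
    return alpha.sum()                               # P(Y) = sum_i alpha_{L-1}(i)

# Hypothetical 2-state, 3-symbol HMM.
start_p = np.array([0.6, 0.4])
trans_p = np.array([[0.7, 0.3],
                    [0.4, 0.6]])
emit_p = np.array([[0.1, 0.4, 0.5],
                   [0.6, 0.3, 0.1]])
print(forward_probability([0, 1, 2], start_p, trans_p, emit_p))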
=== Probability of the latent variables ===
A number of related tasks ask about the probability of one or more of the latent variables, given the model's parameters and a sequence of observations
y(1), …, y(t).
==== Filtering ====
The task is to compute, given the model's parameters and a sequence of observations, the distribution over hidden states of the last latent variable at the end of the sequence, i.e. to compute
P(x(t) | y(1), …, y(t)). This task is used when the sequence of latent variables is thought of as the underlying states that a process moves through at a sequence of points in time, with corresponding observations at each point. Then, it is natural to ask about the state of the process at the end.
This problem can be handled efficiently using the forward algorithm. An example is when the algorithm is applied to a Hidden Markov Network to determine
P(h_t | v_{1:t}).
==== Smoothing ====
This is similar to filtering but asks about the distribution of a latent variable somewhere in the middle of a sequence, i.e. to compute
P(x(k) | y(1), …, y(t)) for some k < t. From the perspective described above, this can be thought of as the probability distribution over hidden states for a point in time k in the past, relative to time t.
The forward-backward algorithm is a good method for computing the smoothed values for all hidden state variables.
==== Most likely explanation ====
The task, unlike the previous two, asks about the joint probability of the entire sequence of hidden states that generated a particular sequence of observations (see illustration on the right). This task is generally applicable when HMM's are applied to different sorts of problems from those for which the tasks of filtering and smoothing are applicable. An example is part-of-speech tagging, where the hidden states represent the underlying parts of speech corresponding to an observed sequence of words. In this case, what is of interest is the entire sequence of parts of speech, rather than simply the part of speech for a single word, as filtering or smoothing would compute.
This task requires finding a maximum over all possible state sequences, and can be solved efficiently by the Viterbi algorithm.
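As an illustrative sketch (not from the article), the Viterbi recursion can also be written compactly in NumPy; the log-space formulation and all names are assumptions.

import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    # Most likely hidden-state sequence, computed in log space for numerical stability.
    log_t = np.log(trans_p)
    log_e = np.log(emit_p)
    delta = np.log(start_p) + log_e[:, obs[0]]       # best log-score ending in each state at t = 0
    back = np.zeros((len(obs), len(start_p)), dtype=int)
    for t in range(1, len(obs)):
        scores = delta[:, None] + log_t              # score of moving from state i to state j
        back[t] = scores.argmax(axis=0)              # best predecessor for each state j
        delta = scores.max(axis=0) + log_e[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(len(obs) - 1, 0, -1):             # trace the back-pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1], float(delta.max())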
=== Statistical significance ===
For some of the above problems, it may also be interesting to ask about statistical significance. What is the probability that a sequence drawn from some null distribution will have an HMM probability (in the case of the forward algorithm) or a maximum state sequence probability (in the case of the Viterbi algorithm) at least as large as that of a particular output sequence? When an HMM is used to evaluate the relevance of a hypothesis for a particular output sequence, the statistical significance indicates the false positive rate associated with failing to reject the hypothesis for the output sequence.
== Learning ==
The parameter learning task in HMMs is to find, given an output sequence or a set of such sequences, the best set of state transition and emission probabilities. The task is usually to derive the maximum likelihood estimate of the parameters of the HMM given the set of output sequences. No tractable algorithm is known for solving this problem exactly, but a local maximum likelihood can be derived efficiently using the Baum–Welch algorithm or the Baldi–Chauvin algorithm. The Baum–Welch algorithm is a special case of the expectation-maximization algorithm.
If HMMs are used for time series prediction, more sophisticated Bayesian inference methods, such as Markov chain Monte Carlo (MCMC) sampling, have proven favorable over finding a single maximum likelihood model, both in terms of accuracy and stability. Since MCMC imposes a significant computational burden, in cases where computational scalability is also of interest, one may alternatively resort to variational approximations to Bayesian inference. Indeed, approximate variational inference offers computational efficiency comparable to expectation-maximization, while yielding an accuracy profile only slightly inferior to exact MCMC-type Bayesian inference.
== Applications ==
HMMs can be applied in many fields where the goal is to recover a data sequence that is not immediately observable (but other data that depend on the sequence are). Applications include:
Computational finance
Single-molecule kinetic analysis
Neuroscience
Cryptanalysis
Speech recognition, including Siri
Speech synthesis
Part-of-speech tagging
Document separation in scanning solutions
Machine translation
Partial discharge
Gene prediction
Handwriting recognition
Alignment of bio-sequences
Time series analysis
Activity recognition
Protein folding
Sequence classification
Metamorphic virus detection
Sequence motif discovery (DNA and proteins)
DNA hybridization kinetics
Chromatin state discovery
Transportation forecasting
Solar irradiance variability
== History ==
Hidden Markov models were described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was speech recognition, starting in the mid-1970s. From the linguistics point of view, hidden Markov models are equivalent to stochastic regular grammars.
In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics.
== Extensions ==
=== General state spaces ===
In the hidden Markov models considered above, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). Hidden Markov models can also be generalized to allow continuous state spaces. Examples of such models are those where the Markov process over hidden variables is a linear dynamical system, with a linear relationship among related variables and where all hidden and observed variables follow a Gaussian distribution. In simple cases, such as the linear dynamical system just mentioned, exact inference is tractable (in this case, using the Kalman filter); however, in general, exact inference in HMMs with continuous latent variables is infeasible, and approximate methods must be used, such as the extended Kalman filter or the particle filter.
More recently, inference in hidden Markov models has also been performed in nonparametric settings, where the dependency structure enables identifiability of the model, while the limits of learnability are still under exploration.
=== Bayesian modeling of the transitions probabilities ===
Hidden Markov models are generative models, in which the joint distribution of observations and hidden states, or equivalently both the prior distribution of hidden states (the transition probabilities) and conditional distribution of observations given states (the emission probabilities), is modeled. The above algorithms implicitly assume a uniform prior distribution over the transition probabilities. However, it is also possible to create hidden Markov models with other types of prior distributions. An obvious candidate, given the categorical distribution of the transition probabilities, is the Dirichlet distribution, which is the conjugate prior distribution of the categorical distribution. Typically, a symmetric Dirichlet distribution is chosen, reflecting ignorance about which states are inherently more likely than others. The single parameter of this distribution (termed the concentration parameter) controls the relative density or sparseness of the resulting transition matrix. A choice of 1 yields a uniform distribution. Values greater than 1 produce a dense matrix, in which the transition probabilities between pairs of states are likely to be nearly equal. Values less than 1 result in a sparse matrix in which, for each given source state, only a small number of destination states have non-negligible transition probabilities. It is also possible to use a two-level prior Dirichlet distribution, in which one Dirichlet distribution (the upper distribution) governs the parameters of another Dirichlet distribution (the lower distribution), which in turn governs the transition probabilities. The upper distribution governs the overall distribution of states, determining how likely each state is to occur; its concentration parameter determines the density or sparseness of states. Such a two-level prior distribution, where both concentration parameters are set to produce sparse distributions, might be useful for example in unsupervised part-of-speech tagging, where some parts of speech occur much more commonly than others; learning algorithms that assume a uniform prior distribution generally perform poorly on this task. The parameters of models of this sort, with non-uniform prior distributions, can be learned using Gibbs sampling or extended versions of the expectation-maximization algorithm.
An extension of the previously described hidden Markov models with Dirichlet priors uses a Dirichlet process in place of a Dirichlet distribution. This type of model allows for an unknown and potentially infinite number of states. It is common to use a two-level Dirichlet process, similar to the previously described model with two levels of Dirichlet distributions. Such a model is called a hierarchical Dirichlet process hidden Markov model, or HDP-HMM for short. It was originally described under the name "Infinite Hidden Markov Model" and was further formalized in "Hierarchical Dirichlet Processes".
=== Discriminative approach ===
A different type of extension uses a discriminative model in place of the generative model of standard HMMs. This type of model directly models the conditional distribution of the hidden states given the observations, rather than modeling the joint distribution. An example of this model is the so-called maximum entropy Markov model (MEMM), which models the conditional distribution of the states using logistic regression (also known as a "maximum entropy model"). The advantage of this type of model is that arbitrary features (i.e. functions) of the observations can be modeled, allowing domain-specific knowledge of the problem at hand to be injected into the model. Models of this sort are not limited to modeling direct dependencies between a hidden state and its associated observation; rather, features of nearby observations, of combinations of the associated observation and nearby observations, or in fact of arbitrary observations at any distance from a given hidden state can be included in the process used to determine the value of a hidden state. Furthermore, there is no need for these features to be statistically independent of each other, as would be the case if such features were used in a generative model. Finally, arbitrary features over pairs of adjacent hidden states can be used rather than simple transition probabilities. The disadvantages of such models are: (1) The types of prior distributions that can be placed on hidden states are severely limited; (2) It is not possible to predict the probability of seeing an arbitrary observation. This second limitation is often not an issue in practice, since many common usages of HMM's do not require such predictive probabilities.
A variant of the previously described discriminative model is the linear-chain conditional random field. This uses an undirected graphical model (aka Markov random field) rather than the directed graphical models of MEMM's and similar models. The advantage of this type of model is that it does not suffer from the so-called label bias problem of MEMM's, and thus may make more accurate predictions. The disadvantage is that training can be slower than for MEMM's.
=== Other extensions ===
Yet another variant is the factorial hidden Markov model, which allows for a single observation to be conditioned on the corresponding hidden variables of a set of
K independent Markov chains, rather than a single Markov chain. It is equivalent to a single HMM, with N^K states (assuming there are N states for each chain), and therefore, learning in such a model is difficult: for a sequence of length T, a straightforward Viterbi algorithm has complexity O(N^{2K} T). To find an exact solution, a junction tree algorithm could be used, but it results in an O(N^{K+1} K T) complexity. In practice, approximate techniques, such as variational approaches, could be used.
All of the above models can be extended to allow for more distant dependencies among hidden states, e.g. allowing for a given state to be dependent on the previous two or three states rather than a single previous state; i.e. the transition probabilities are extended to encompass sets of three or four adjacent states (or in general
K adjacent states). The disadvantage of such models is that dynamic-programming algorithms for training them have an O(N^K T) running time, for K adjacent states and T total observations (i.e. a length-T Markov chain). This extension has been widely used in bioinformatics, in the modeling of DNA sequences.
Another recent extension is the triplet Markov model, in which an auxiliary underlying process is added to model some data specificities. Many variants of this model have been proposed. An interesting link has also been established between the theory of evidence and triplet Markov models, which allows data to be fused in a Markovian context and nonstationary data to be modeled. Alternative multi-stream data fusion strategies have also been proposed in the recent literature.
Finally, a different rationale for addressing the problem of modeling nonstationary data by means of hidden Markov models was suggested in 2012. It consists of employing a small recurrent neural network (RNN), specifically a reservoir network, to capture the evolution of the temporal dynamics in the observed data. This information, encoded in the form of a high-dimensional vector, is used as a conditioning variable of the HMM state transition probabilities. Under such a setup, one eventually obtains a nonstationary HMM whose transition probabilities evolve over time in a manner inferred from the data itself, rather than from an unrealistic ad-hoc model of temporal evolution.
In 2023, two innovative algorithms were introduced for the Hidden Markov Model. These algorithms enable the computation of the posterior distribution of the HMM without the necessity of explicitly modeling the joint distribution, utilizing only the conditional distributions. Unlike traditional methods such as the Forward-Backward and Viterbi algorithms, which require knowledge of the joint law of the HMM and can be computationally intensive to learn, the Discriminative Forward-Backward and Discriminative Viterbi algorithms circumvent the need for the observation's law. This breakthrough allows the HMM to be applied as a discriminative model, offering a more efficient and versatile approach to leveraging Hidden Markov Models in various applications.
The model suitable in the context of longitudinal data is named the latent Markov model. The basic version of this model has been extended to include individual covariates and random effects, and to model more complex data structures such as multilevel data. A complete overview of latent Markov models, with special attention to the model assumptions and to their practical use, is provided in the literature.
== Measure theory ==
Given a Markov transition matrix and an invariant distribution on the states, a probability measure can be imposed on the set of subshifts. For example, consider a Markov chain on the states A, B1, B2, with invariant distribution π = (2/7, 4/7, 1/7). By ignoring the distinction between B1 and B2, the space of subshifts on A, B1, B2 is projected into another space of subshifts on A, B, and this projection also projects the probability measure down to a probability measure on the subshifts on A, B.
The curious thing is that the probability measure on the subshifts on A, B is not generated by a Markov chain on A, B, not even one of multiple orders. Intuitively, this is because if one observes a long sequence B^n, then one becomes increasingly sure that Pr(A | B^n) → 2/3, meaning that the observable part of the system can be affected by events infinitely far in the past.
Conversely, there exists a space of subshifts on 6 symbols, projected to subshifts on 2 symbols, such that any Markov measure on the smaller subshift has a preimage measure that is not Markov of any order (example 2.6).
== See also ==
== References ==
== External links ==
=== Concepts ===
Teif, V. B.; Rippe, K. (2010). "Statistical–mechanical lattice models for protein–DNA binding in chromatin". J. Phys.: Condens. Matter. 22 (41): 414105. arXiv:1004.5514. Bibcode:2010JPCM...22O4105T. doi:10.1088/0953-8984/22/41/414105. PMID 21386588. S2CID 103345.
A Revealing Introduction to Hidden Markov Models by Mark Stamp, San Jose State University.
Fitting HMM's with expectation-maximization – complete derivation
A step-by-step tutorial on HMMs Archived 2017-08-13 at the Wayback Machine (University of Leeds)
Hidden Markov Models (an exposition using basic mathematics)
Hidden Markov Models (by Narada Warakagoda)
Hidden Markov Models: Fundamentals and Applications Part 1, Part 2 (by V. Petrushin)
Lecture on a Spreadsheet by Jason Eisner, Video and interactive spreadsheet | Wikipedia/Hidden_Markov_model |
In statistical mechanics, the Potts model, a generalization of the Ising model, is a model of interacting spins on a crystalline lattice. By studying the Potts model, one may gain insight into the behaviour of ferromagnets and certain other phenomena of solid-state physics. The strength of the Potts model is not so much that it models these physical systems well; it is rather that the one-dimensional case is exactly solvable, and that it has a rich mathematical formulation that has been studied extensively.
The model is named after Renfrey Potts, who described the model near the end of his 1951 Ph.D. thesis. The model was related to the "planar Potts" or "clock model", which was suggested to him by his advisor, Cyril Domb. The four-state Potts model is sometimes known as the Ashkin–Teller model, after Julius Ashkin and Edward Teller, who considered an equivalent model in 1943.
The Potts model is related to, and generalized by, several other models, including the XY model, the Heisenberg model and the N-vector model. The infinite-range Potts model is known as the Kac model. When the spins are taken to interact in a non-Abelian manner, the model is related to the flux tube model, which is used to discuss confinement in quantum chromodynamics. Generalizations of the Potts model have also been used to model grain growth in metals, coarsening in foams, and statistical properties of proteins. A further generalization of these methods by James Glazier and Francois Graner, known as the cellular Potts model, has been used to simulate static and kinetic phenomena in foam and biological morphogenesis.
== Definition ==
=== Vector Potts model ===
The Potts model consists of spins that are placed on a lattice; the lattice is usually taken to be a two-dimensional rectangular Euclidean lattice, but is often generalized to other dimensions and lattice structures.
Originally, Domb suggested that the spin takes one of q possible values, distributed uniformly about the circle, at angles
{\displaystyle \theta _{s}={\frac {2\pi s}{q}},}
where s = 0, 1, ..., q − 1, and that the interaction Hamiltonian is given by
{\displaystyle H_{c}=J_{c}\sum _{\langle i,j\rangle }\cos \left(\theta _{s_{i}}-\theta _{s_{j}}\right)}
with the sum running over the nearest neighbor pairs ⟨i, j⟩ over all lattice sites, and J_c is a coupling constant, determining the interaction strength. This model is now known as the vector Potts model or the clock model. Potts provided the location in two dimensions of the phase transition for q = 3, 4. In the limit q → ∞, this becomes the XY model.
=== Standard Potts model ===
What is now known as the standard Potts model was suggested by Potts in the course of his study of the model above and is defined by a simpler Hamiltonian:
{\displaystyle H_{p}=-J_{p}\sum _{(i,j)}\delta (s_{i},s_{j})}
where δ(s_i, s_j) is the Kronecker delta, which equals one whenever s_i = s_j and zero otherwise.
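As a small illustration of this Hamiltonian (not part of the article), the following NumPy sketch evaluates H_p on a two-dimensional square lattice; the periodic boundary conditions, lattice size and q = 3 are assumptions.

import numpy as np

def potts_energy(s, J=1.0):
    # H_p = -J * (number of nearest-neighbour pairs with equal spin values),
    # counting right and down neighbours with periodic (wrap-around) boundaries.
    right = (s == np.roll(s, -1, axis=1)).sum()
    down = (s == np.roll(s, -1, axis=0)).sum()
    return -J * (right + down)

rng = np.random.default_rng(0)
spins = rng.integers(0, 3, size=(8, 8))   # q = 3 states on a hypothetical 8x8 lattice
print(potts_energy(spins))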
The q = 2 standard Potts model is equivalent to the Ising model and the 2-state vector Potts model, with J_p = −2J_c. The q = 3 standard Potts model is equivalent to the three-state vector Potts model, with J_p = −(3/2)J_c.
=== Generalized Potts model ===
A generalization of the Potts model is often used in statistical inference and biophysics, particularly for modelling proteins through direct coupling analysis. This generalized Potts model consists of 'spins' that each may take on
q states: s_i ∈ {1, …, q} (with no particular ordering). The Hamiltonian is
{\displaystyle H=\sum _{i<j}J_{ij}(s_{i},s_{j})+\sum _{i}h_{i}(s_{i}),}
where J_ij(k, k′) is the energetic cost of spin i being in state k while spin j is in state k′, and h_i(k) is the energetic cost of spin i being in state k. Note: J_ij(k, k′) = J_ji(k′, k). This model resembles the Sherrington–Kirkpatrick model in that couplings can be heterogeneous and non-local. There is no explicit lattice structure in this model.
== Physical properties ==
=== Phase transitions ===
Despite its simplicity as a model of a physical system, the Potts model is useful as a model system for the study of phase transitions. For example, for the standard ferromagnetic Potts model in
2d, a phase transition exists for all real values q ≥ 1, with the critical point at βJ = log(1 + √q). The phase transition is continuous (second order) for 1 ≤ q ≤ 4 and discontinuous (first order) for q > 4.
For the clock model, there is evidence that the corresponding phase transitions are infinite order BKT transitions, and a continuous phase transition is observed when q ≤ 4. Further use is found through the model's relation to percolation problems and the Tutte and chromatic polynomials found in combinatorics. For integer values of q ≥ 3, the model displays the phenomenon of 'interfacial adsorption', with intriguing critical wetting properties when fixing opposite boundaries in two different states.
=== Relation with the random cluster model ===
The Potts model has a close relation to the Fortuin–Kasteleyn random cluster model, another model in statistical mechanics. Understanding this relationship has helped develop efficient Markov chain Monte Carlo methods for numerical exploration of the model at small q, and led to the rigorous proof of the critical temperature of the model.
At the level of the partition function
{\displaystyle Z_{p}=\sum _{\{s_{i}\}}e^{-H_{p}}}, the relation amounts to transforming the sum over spin configurations {s_i} into a sum over edge configurations
{\displaystyle \omega ={\Big \{}(i,j){\Big |}s_{i}=s_{j}{\Big \}}}
i.e. sets of nearest neighbor pairs of the same color. The transformation is done using the identity
{\displaystyle e^{J_{p}\delta (s_{i},s_{j})}=1+v\delta (s_{i},s_{j})\qquad {\text{ with }}\qquad v=e^{J_{p}}-1\ .}
This leads to rewriting the partition function as
{\displaystyle Z_{p}=\sum _{\omega }v^{\#{\text{edges}}(\omega )}q^{\#{\text{clusters}}(\omega )}}
where the FK clusters are the connected components of the union of closed segments
{\displaystyle \cup _{(i,j)\in \omega }[i,j]}. This is proportional to the partition function of the random cluster model with the open edge probability p = v/(1 + v) = 1 − e^{−J_p}. An advantage of the random cluster formulation is that q can be an arbitrary complex number, rather than a natural integer.
Alternatively, instead of FK clusters, the model can be formulated in terms of spin clusters, using the identity
{\displaystyle e^{J_{p}\delta (s_{i},s_{j})}=(1-\delta (s_{i},s_{j}))+e^{J_{p}}\delta (s_{i},s_{j})\ .}
A spin cluster is the union of neighbouring FK clusters with the same color: two neighbouring spin clusters have different colors, while two neighbouring FK clusters are colored independently.
== Measure-theoretic description ==
The one dimensional Potts model may be expressed in terms of a subshift of finite type, and thus gains access to all of the mathematical techniques associated with this formalism. In particular, it can be solved exactly using the techniques of transfer operators. (However, Ernst Ising used combinatorial methods to solve the Ising model, which is the "ancestor" of the Potts model, in his 1924 PhD thesis). This section develops the mathematical formalism, based on measure theory, behind this solution.
While the example below is developed for the one-dimensional case, many of the arguments, and almost all of the notation, generalizes easily to any number of dimensions. Some of the formalism is also broad enough to handle related models, such as the XY model, the Heisenberg model and the N-vector model.
=== Topology of the space of states ===
Let Q = {1, ..., q} be a finite set of symbols, and let
{\displaystyle Q^{\mathbf {Z} }=\{s=(\ldots ,s_{-1},s_{0},s_{1},\ldots ):s_{k}\in Q\;\forall k\in \mathbf {Z} \}}
be the set of all bi-infinite strings of values from the set Q. This set is called a full shift. For defining the Potts model, either this whole space, or a certain subset of it, a subshift of finite type, may be used. Shifts get this name because there exists a natural operator on this space, the shift operator τ : QZ → QZ, acting as
{\displaystyle \tau (s)_{k}=s_{k+1}}
This set has a natural product topology; the base for this topology are the cylinder sets
{\displaystyle C_{m}[\xi _{0},\ldots ,\xi _{k}]=\{s\in Q^{\mathbf {Z} }:s_{m}=\xi _{0},\ldots ,s_{m+k}=\xi _{k}\}}
that is, the set of all possible strings where k+1 spins match up exactly with a given, specific set of values ξ0, ..., ξk. Explicit representations for the cylinder sets can be obtained by noting that the string of values corresponds to a q-adic number; however, the natural topology of the q-adic numbers is finer than the above product topology.
=== Interaction energy ===
The interaction between the spins is then given by a continuous function V : QZ → R on this topology. Any continuous function will do; for example
{\displaystyle V(s)=-J\delta (s_{0},s_{1})}
will be seen to describe the interaction between nearest neighbors. Of course, different functions give different interactions; so a function of s0, s1 and s2 will describe a next-nearest neighbor interaction. A function V gives interaction energy between a set of spins; it is not the Hamiltonian, but is used to build it. The argument to the function V is an element s ∈ QZ, that is, an infinite string of spins. In the above example, the function V just picked out two spins out of the infinite string: the values s0 and s1. In general, the function V may depend on some or all of the spins; currently, only those that depend on a finite number are exactly solvable.
Define the function Hn : QZ → R as
{\displaystyle H_{n}(s)=\sum _{k=0}^{n}V(\tau ^{k}s)}
This function can be seen to consist of two parts: the self-energy of a configuration [s0, s1, ..., sn] of spins, plus the interaction energy of this set and all the other spins in the lattice. The n → ∞ limit of this function is the Hamiltonian of the system; for finite n, these are sometimes called the finite state Hamiltonians.
=== Partition function and measure ===
The corresponding finite-state partition function is given by
{\displaystyle Z_{n}(V)=\sum _{s_{0},\ldots ,s_{n}\in Q}\exp(-\beta H_{n}(C_{0}[s_{0},s_{1},\ldots ,s_{n}]))}
with C0 being the cylinder sets defined above. Here, β = 1/kT, where k is the Boltzmann constant, and T is the temperature. It is very common in mathematical treatments to set β = 1, as it is easily regained by rescaling the interaction energy. This partition function is written as a function of the interaction V to emphasize that it is only a function of the interaction, and not of any specific configuration of spins. The partition function, together with the Hamiltonian, are used to define a measure on the Borel σ-algebra in the following way: The measure of a cylinder set, i.e. an element of the base, is given by
{\displaystyle \mu (C_{k}[s_{0},s_{1},\ldots ,s_{n}])={\frac {1}{Z_{n}(V)}}\exp(-\beta H_{n}(C_{k}[s_{0},s_{1},\ldots ,s_{n}]))}
One can then extend by countable additivity to the full σ-algebra. This measure is a probability measure; it gives the likelihood of a given configuration occurring in the configuration space QZ. By endowing the configuration space with a probability measure built from a Hamiltonian in this way, the configuration space turns into a canonical ensemble.
Most thermodynamic properties can be expressed directly in terms of the partition function. Thus, for example, the Helmholtz free energy is given by
{\displaystyle A_{n}(V)=-kT\log Z_{n}(V)}
Another important related quantity is the topological pressure, defined as
{\displaystyle P(V)=\lim _{n\to \infty }{\frac {1}{n}}\log Z_{n}(V)}
which will show up as the logarithm of the leading eigenvalue of the transfer operator of the solution.
=== Free field solution ===
The simplest model is the model where there is no interaction at all, and so V = c and Hn = c (with c constant and independent of any spin configuration). The partition function becomes
{\displaystyle Z_{n}(c)=e^{-c\beta }\sum _{s_{0},\ldots ,s_{n}\in Q}1}
If all states are allowed, that is, the underlying set of states is given by a full shift, then the sum may be trivially evaluated as
{\displaystyle Z_{n}(c)=e^{-c\beta }q^{n+1}}
If neighboring spins are only allowed in certain specific configurations, then the state space is given by a subshift of finite type. The partition function may then be written as
{\displaystyle Z_{n}(c)=e^{-c\beta }|{\mbox{Fix}}\,\tau ^{n}|=e^{-c\beta }{\mbox{Tr}}A^{n}}
where |⋅| denotes the cardinality (count) of a set, and Fix is the set of fixed points of the iterated shift function:
{\displaystyle {\mbox{Fix}}\,\tau ^{n}=\{s\in Q^{\mathbf {Z} }:\tau ^{n}s=s\}}
The q × q matrix A is the adjacency matrix specifying which neighboring spin values are allowed.
=== Interacting model ===
The simplest case of the interacting model is the Ising model, where the spin can only take on one of two values, sn ∈ {−1, 1} and only nearest neighbor spins interact. The interaction potential is given by
{\displaystyle V(\sigma )=-J_{p}s_{0}s_{1}}
This potential can be captured in a 2 × 2 matrix with matrix elements
{\displaystyle M_{\sigma \sigma '}=\exp \left(\beta J_{p}\sigma \sigma '\right)}
with the index σ, σ′ ∈ {−1, 1}. The partition function is then given by
{\displaystyle Z_{n}(V)={\mbox{Tr}}\,M^{n}}
The general solution for an arbitrary number of spins, and an arbitrary finite-range interaction, is given by the same general form. In this case, the precise expression for the matrix M is a bit more complex.
The goal of solving a model such as the Potts model is to give an exact closed-form expression for the partition function and an expression for the Gibbs states or equilibrium states in the limit of n → ∞, the thermodynamic limit.
== Applications ==
=== Signal and image processing ===
The Potts model has applications in signal reconstruction. Assume that we are given a noisy observation of a piecewise constant signal g in R^n. To recover g from the noisy observation vector f in R^n, one seeks a minimizer of the corresponding inverse problem, the Lp-Potts functional Pγ(u), which is defined by
{\displaystyle P_{\gamma }(u)=\gamma \|\nabla u\|_{0}+\|u-f\|_{p}^{p}=\gamma \#\{i:u_{i}\neq u_{i+1}\}+\sum _{i=1}^{n}|u_{i}-f_{i}|^{p}}
The jump penalty ‖∇u‖_0 forces piecewise constant solutions, and the data term ‖u − f‖_p^p couples the minimizing candidate u to the data f. The parameter γ > 0 controls the tradeoff between regularity and data fidelity. There are fast algorithms for the exact minimization of the L1 and the L2-Potts functionals.
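For illustration (not from the article), the functional itself is straightforward to evaluate on a one-dimensional signal; the function name and p = 2 default are assumptions, and this sketch only evaluates the functional, it does not minimize it.

import numpy as np

def potts_functional(u, f, gamma, p=2):
    # P_gamma(u) = gamma * #{i : u_i != u_{i+1}} + sum_i |u_i - f_i|^p
    jumps = np.count_nonzero(np.diff(u))
    return gamma * jumps + np.sum(np.abs(u - f) ** p)

f = np.array([0.1, 0.0, 0.2, 1.1, 0.9, 1.0])   # hypothetical noisy observation
u = np.array([0.1, 0.1, 0.1, 1.0, 1.0, 1.0])   # piecewise constant candidate
print(potts_functional(u, f, gamma=0.5))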
In image processing, the Potts functional is related to the segmentation problem. However, in two dimensions the problem is NP-hard.
== See also ==
Random cluster model
Critical three-state Potts model
Chiral Potts model
Square-lattice Ising model
Minimal models
Z N model
Cellular Potts model
== References ==
== External links ==
Haggard, Gary; Pearce, David J.; Royle, Gordon. "Code for efficiently computing Tutte, Chromatic and Flow Polynomials". | Wikipedia/Potts_model |
The Rendleman–Bartter model (Richard J. Rendleman, Jr. and Brit J. Bartter) in finance is a short-rate model describing the evolution of interest rates. It is a "one factor model" as it describes interest rate movements as driven by only one source of market risk. It can be used in the valuation of interest rate derivatives. It is a stochastic asset model.
The model specifies that the instantaneous interest rate follows a geometric Brownian motion:
{\displaystyle dr_{t}=\theta r_{t}\,dt+\sigma r_{t}\,dW_{t}}
where W_t is a Wiener process modelling the random market risk factor. The drift parameter, θ, represents a constant expected instantaneous rate of change in the interest rate, while the standard deviation parameter, σ, determines the volatility of the interest rate.
This is one of the early models of the short-term interest rates, using the same stochastic process as the one already used to describe the dynamics of the underlying price in stock options. Its main disadvantage is that it does not capture the mean reversion of interest rates (their tendency to revert toward some value or range of values rather than wander without bounds in either direction).
Note that in 1979 Rendleman and Bartter also published a version of the binomial options pricing model for equity underlyings. ("Two-State Option Pricing". Journal of Finance 24: 1093–1110.)
== References ==
Hull, John C. (2003). Options, Futures and Other Derivatives. Upper Saddle River, NJ: Prentice Hall. ISBN 0-13-009056-5.
Rendleman, R. and B. Bartter (1980). "The Pricing of Options on Debt Securities". Journal of Financial and Quantitative Analysis. 15 (1): 11–24. doi:10.2307/2979016. JSTOR 2979016. S2CID 154495945. | Wikipedia/Rendleman–Bartter_model |
In measure theory and probability, the monotone class theorem connects monotone classes and 𝜎-algebras. The theorem says that the smallest monotone class containing an algebra of sets
G is precisely the smallest 𝜎-algebra containing G.
It is used as a type of transfinite induction to prove many other theorems, such as Fubini's theorem.
== Definition of a monotone class ==
A monotone class is a family (i.e. class)
M of sets that is closed under countable monotone unions and also under countable monotone intersections. Explicitly, this means M has the following properties:
if A_1, A_2, … ∈ M and A_1 ⊆ A_2 ⊆ ⋯, then ⋃_{i=1}^∞ A_i ∈ M, and
if B_1, B_2, … ∈ M and B_1 ⊇ B_2 ⊇ ⋯, then ⋂_{i=1}^∞ B_i ∈ M.
== Monotone class theorem for sets ==
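The body of this statement is collapsed in this copy; restated from the lead (the notation m(G) for the smallest monotone class containing G is an assumption): if G is an algebra of sets, then m(G) = σ(G), the smallest 𝜎-algebra containing G.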
== Monotone class theorem for functions ==
=== Proof ===
The following argument originates in Rick Durrett's Probability: Theory and Examples.
== Results and applications ==
As a corollary, if
G is a ring of sets, then the smallest monotone class containing it coincides with the 𝜎-ring of G.
By invoking this theorem, one can use monotone classes to help verify that a certain collection of subsets is a 𝜎-algebra.
The monotone class theorem for functions can be a powerful tool that allows statements about particularly simple classes of functions to be generalized to arbitrary bounded and measurable functions.
== See also ==
Dynkin system – Family closed under complements and countable disjoint unions
π-𝜆 theorem – Family closed under complements and countable disjoint unions
π-system – Family of sets closed under intersection
σ-algebra – Algebraic structure of set algebra
== Citations ==
== References ==
Durrett, Richard (2019). Probability: Theory and Examples (PDF). Cambridge Series in Statistical and Probabilistic Mathematics. Vol. 49 (5th ed.). Cambridge New York, NY: Cambridge University Press. ISBN 978-1-108-47368-2. OCLC 1100115281. Retrieved November 5, 2020. | Wikipedia/Monotone_class_lemma |
In the statistical analysis of time series, autoregressive–moving-average (ARMA) models are a way to describe a (weakly) stationary stochastic process using autoregression (AR) and a moving average (MA), each with a polynomial. They are a tool for understanding a series and predicting future values. AR involves regressing the variable on its own lagged (i.e., past) values. MA involves modeling the error as a linear combination of error terms occurring contemporaneously and at various times in the past. The model is usually denoted ARMA(p, q), where p is the order of AR and q is the order of MA.
The general ARMA model was described in the 1951 thesis of Peter Whittle, Hypothesis testing in time series analysis, and it was popularized in the 1970 book by George E. P. Box and Gwilym Jenkins.
ARMA models can be estimated by using the Box–Jenkins method.
== Mathematical formulation ==
=== Autoregressive model ===
The notation AR(p) refers to the autoregressive model of order p. The AR(p) model is written as
{\displaystyle X_{t}=\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\varepsilon _{t}}
where φ_1, …, φ_p are parameters and the random variable ε_t is white noise, usually a sequence of independent and identically distributed (i.i.d.) normal random variables.
In order for the model to remain stationary, the roots of its characteristic polynomial must lie outside the unit circle. For example, processes in the AR(1) model with
|φ_1| ≥ 1 are not stationary because the root of 1 − φ_1 B = 0 lies on or within the unit circle.
The augmented Dickey–Fuller test can assess the stability of an intrinsic mode function and trend components. For stationary time series, ARMA models can be used, while for non-stationary series, long short-term memory (LSTM) models can be used to derive abstract features. The final value is obtained by reconstructing the predicted outcomes of each time series.
=== Moving average model ===
The notation MA(q) refers to the moving average model of order q:
{\displaystyle X_{t}=\mu +\varepsilon _{t}+\sum _{i=1}^{q}\theta _{i}\varepsilon _{t-i}}
where θ_1, …, θ_q are the parameters of the model, μ is the expectation of X_t (often assumed to equal 0), and ε_1, …, ε_t are i.i.d. white noise error terms that are commonly normal random variables.
=== ARMA model ===
The notation ARMA(p, q) refers to the model with p autoregressive terms and q moving-average terms. This model contains the AR(p) and MA(q) models,
{\displaystyle X_{t}=\varepsilon _{t}+\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\sum _{i=1}^{q}\theta _{i}\varepsilon _{t-i}.}
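As an illustrative sketch (not part of the article), an ARMA(p, q) path can be simulated directly from this recursion; the burn-in length, the Gaussian noise, and the example coefficients are assumptions.

import numpy as np

def simulate_arma(phi, theta, n, sigma=1.0, burn=500, seed=0):
    # X_t = eps_t + sum_i phi_i * X_{t-i} + sum_i theta_i * eps_{t-i}
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    eps = rng.normal(0.0, sigma, n + burn)
    x = np.zeros(n + burn)
    for t in range(max(p, q), n + burn):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p))
        ma = sum(theta[i] * eps[t - 1 - i] for i in range(q))
        x[t] = eps[t] + ar + ma
    return x[burn:]                         # drop the burn-in to reduce initialization effects

series = simulate_arma(phi=[0.6], theta=[0.4], n=1000)   # a hypothetical ARMA(1,1)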
=== In terms of lag operator ===
In some texts, the model is specified using the lag operator L. In these terms, the AR(p) model is given by
{\displaystyle \varepsilon _{t}=\left(1-\sum _{i=1}^{p}\varphi _{i}L^{i}\right)X_{t}=\varphi (L)X_{t}}
where φ represents the polynomial
{\displaystyle \varphi (L)=1-\sum _{i=1}^{p}\varphi _{i}L^{i}.}
The MA(q) model is given by
{\displaystyle X_{t}-\mu =\left(1+\sum _{i=1}^{q}\theta _{i}L^{i}\right)\varepsilon _{t}=\theta (L)\varepsilon _{t},}
where θ represents the polynomial
{\displaystyle \theta (L)=1+\sum _{i=1}^{q}\theta _{i}L^{i}.}
Finally, the combined ARMA(p, q) model is given by
{\displaystyle \left(1-\sum _{i=1}^{p}\varphi _{i}L^{i}\right)X_{t}=\left(1+\sum _{i=1}^{q}\theta _{i}L^{i}\right)\varepsilon _{t},}
or more concisely,
{\displaystyle \varphi (L)X_{t}=\theta (L)\varepsilon _{t}}
or
{\displaystyle {\frac {\varphi (L)}{\theta (L)}}X_{t}=\varepsilon _{t}.}
This is the form used in Box, Jenkins & Reinsel.
Moreover, starting summations from
i = 0 and setting φ_0 = −1 and θ_0 = 1, we get an even more elegant formulation:
{\displaystyle -\sum _{i=0}^{p}\phi _{i}L^{i}\;X_{t}=\sum _{i=0}^{q}\theta _{i}L^{i}\;\varepsilon _{t}.}
== Spectrum ==
The spectral density of an ARMA process is
{\displaystyle S(f)={\frac {\sigma ^{2}}{2\pi }}\left\vert {\frac {\theta (e^{-if})}{\phi (e^{-if})}}\right\vert ^{2}}
where σ^2 is the variance of the white noise, θ is the characteristic polynomial of the moving average part of the ARMA model, and φ is the characteristic polynomial of the autoregressive part of the ARMA model.
== Fitting models ==
=== Choosing p and q ===
An appropriate value of p in the ARMA(p, q) model can be found by plotting the partial autocorrelation functions. Similarly, q can be estimated by using the autocorrelation functions. Both p and q can be determined simultaneously using extended autocorrelation functions (EACF). Further information can be gleaned by considering the same functions for the residuals of a model fitted with an initial selection of p and q.
Brockwell & Davis recommend using Akaike information criterion (AIC) for finding p and q. Another option is the Bayesian information criterion (BIC).
=== Estimating coefficients ===
After choosing p and q, ARMA models can be fitted by least squares regression to find the values of the parameters which minimize the error term. It is good practice to find the smallest values of p and q which provide an acceptable fit to the data. For a pure AR model, the Yule-Walker equations may be used to provide a fit.
ARMA outputs are used primarily to forecast (predict), and not to infer causation as in other areas of econometrics and regression methods such as OLS and 2SLS.
=== Software implementations ===
In R, the standard package stats has the function arima, documented in ARIMA Modelling of Time Series. Package astsa has an improved script called sarima for fitting ARMA models (seasonal and nonseasonal) and sarima.sim to simulate data from these models. Extension packages contain related and extended functionality: package tseries includes the function arma(), documented in "Fit ARMA Models to Time Series"; package fracdiff contains fracdiff() for fractionally integrated ARMA processes; and package forecast includes auto.arima for selecting a parsimonious set of p, q. The CRAN task view on Time Series contains links to most of these.
Mathematica has a complete library of time series functions including ARMA.
MATLAB includes functions such as arma, ar and arx to estimate autoregressive, exogenous autoregressive and ARMAX models. See System Identification Toolbox and Econometrics Toolbox for details.
Julia has community-driven packages that implement fitting with an ARMA model such as arma.jl.
Python has the statsmodels package, which includes many models and functions for time series analysis, including ARMA. Formerly part of the scikit-learn library, it is now stand-alone and integrates well with Pandas.
PyFlux has a Python-based implementation of ARIMAX models, including Bayesian ARIMAX models.
IMSL Numerical Libraries are libraries of numerical analysis functionality including ARMA and ARIMA procedures implemented in standard programming languages like C, Java, C# .NET, and Fortran.
gretl can estimate ARMA models.
GNU Octave extra package octave-forge supports AR models.
Stata includes the function arima for ARMA and ARIMA models.
SuanShu is a Java library of numerical methods that implements univariate/multivariate ARMA, ARIMA, ARMAX, etc models, documented in "SuanShu, a Java numerical and statistical library".
SAS has an econometric package, ETS, that estimates ARIMA models.
== History and interpretations ==
The general ARMA model was described in the 1951 thesis of Peter Whittle, who used mathematical analysis (Laurent series and Fourier analysis) and statistical inference. ARMA models were popularized by a 1970 book by George E. P. Box and Jenkins, who expounded an iterative (Box–Jenkins) method for choosing and estimating them. This method was useful for low-order polynomials (of degree three or less).
ARMA is essentially an infinite impulse response filter applied to white noise, with some additional interpretation placed on it.
In digital signal processing, ARMA is represented as a digital filter with white noise at the input and the ARMA process at the output.
== Applications ==
ARMA is appropriate when a system is a function of a series of unobserved shocks (the MA or moving average part) as well as its own behavior. For example, stock prices may be shocked by fundamental information as well as exhibiting technical trending and mean-reversion effects due to market participants.
== Generalizations ==
There are various generalizations of ARMA. Nonlinear AR (NAR), nonlinear MA (NMA) and nonlinear ARMA (NARMA) model nonlinear dependence on past values and error terms. Vector AR (VAR) and vector ARMA (VARMA) model multivariate time series. Autoregressive integrated moving average (ARIMA) models non-stationary time series (that is, whose mean changes over time). Autoregressive conditional heteroskedasticity (ARCH) models time series where the variance changes. Seasonal ARIMA (SARIMA or periodic ARMA) models periodic variation. Autoregressive fractionally integrated moving average (ARFIMA, or Fractional ARIMA, FARIMA) model time-series that exhibits long memory. Multiscale AR (MAR) is indexed by the nodes of a tree instead of integers.
=== Autoregressive–moving-average model with exogenous inputs (ARMAX) ===
The notation ARMAX(p, q, b) refers to a model with p autoregressive terms, q moving average terms and b exogenous input terms. The last term is a linear combination of the last b terms of a known and external time series d_t. It is given by:
{\displaystyle X_{t}=\varepsilon _{t}+\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\sum _{i=1}^{q}\theta _{i}\varepsilon _{t-i}+\sum _{i=1}^{b}\eta _{i}d_{t-i}.}
where η_1, …, η_b are the parameters of the exogenous input d_t.
Some nonlinear variants of models with exogenous variables have been defined: see for example Nonlinear autoregressive exogenous model.
Statistical packages implement the ARMAX model through the use of "exogenous" (that is, independent) variables. Care must be taken when interpreting the output of those packages, because the estimated parameters usually (for example, in R and gretl) refer to the regression:
{\displaystyle X_{t}-m_{t}=\varepsilon _{t}+\sum _{i=1}^{p}\varphi _{i}(X_{t-i}-m_{t-i})+\sum _{i=1}^{q}\theta _{i}\varepsilon _{t-i},}
where m_t incorporates all exogenous (or independent) variables:
{\displaystyle m_{t}=c+\sum _{i=0}^{b}\eta _{i}d_{t-i}.}
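As a minimal illustrative sketch (not tied to any particular statistical package; the function name, coefficient values and the sinusoidal exogenous series are my own choices), the ARMAX recursion defined above can be simulated directly with NumPy:

```python
import numpy as np

def simulate_armax(n, phi, theta, eta, d, sigma=1.0, seed=0):
    """Simulate X_t = eps_t + sum_i phi_i X_{t-i} + sum_i theta_i eps_{t-i} + sum_i eta_i d_{t-i}.

    phi, theta, eta are coefficient sequences; d is the exogenous series (length >= n).
    """
    rng = np.random.default_rng(seed)
    p, q, b = len(phi), len(theta), len(eta)
    eps = rng.normal(0.0, sigma, size=n)
    x = np.zeros(n)
    for t in range(n):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        ma = sum(theta[i] * eps[t - 1 - i] for i in range(q) if t - 1 - i >= 0)
        ex = sum(eta[i] * d[t - 1 - i] for i in range(b) if t - 1 - i >= 0)
        x[t] = eps[t] + ar + ma + ex
    return x

# Example: an ARMAX(1, 1, 1) process driven by a slow sinusoidal exogenous input.
n = 500
d = np.sin(np.linspace(0, 10, n))
x = simulate_armax(n, phi=[0.7], theta=[0.3], eta=[1.5], d=d)
```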
== See also ==
Autoregressive integrated moving average (ARIMA)
Exponential smoothing
Linear predictive coding
Predictive analytics
Infinite impulse response
Finite impulse response
== References ==
== Further reading ==
Mills, Terence C. (1990). Time Series Techniques for Economists. Cambridge University Press. ISBN 0521343399.
Percival, Donald B.; Walden, Andrew T. (1993). Spectral Analysis for Physical Applications. Cambridge University Press. ISBN 052135532X.
Francq, C.; Zakoïan, J.-M. (2005), "Recent results for linear time series models with non independent innovations", in Duchesne, P.; Remillard, B. (eds.), Statistical Modeling and Analysis for Complex Data Problems, Springer, pp. 241–265, CiteSeerX 10.1.1.721.1754.
Shumway, R.H. and Stoffer, D.S. (2017). Time Series Analysis and Its Applications with R Examples. Springer. DOI: 10.1007/978-3-319-52452-8
In mathematical finance, the Cox–Ingersoll–Ross (CIR) model describes the evolution of interest rates. It is a type of "one factor model" (short-rate model) as it describes interest rate movements as driven by only one source of market risk. The model can be used in the valuation of interest rate derivatives. It was introduced in 1985 by John C. Cox, Jonathan E. Ingersoll and Stephen A. Ross as an extension of the Vasicek model, itself an Ornstein–Uhlenbeck process.
== The model ==
The CIR model describes the instantaneous interest rate r_t with a Feller square-root process, whose stochastic differential equation is
{\displaystyle dr_{t}=a(b-r_{t})\,dt+\sigma {\sqrt {r_{t}}}\,dW_{t},}
where W_t is a Wiener process (modelling the random market risk factor) and a, b, and σ are the parameters. The parameter a corresponds to the speed of adjustment to the mean b, and σ to volatility. The drift factor, a(b − r_t), is exactly the same as in the Vasicek model. It ensures mean reversion of the interest rate towards the long run value b, with speed of adjustment governed by the strictly positive parameter a.
The standard deviation factor, σ√(r_t), avoids the possibility of negative interest rates for all positive values of a and b.
An interest rate of zero is also precluded if the condition 2ab ≥ σ² is met. More generally, when the rate r_t is close to zero, the standard deviation σ√(r_t) also becomes very small, which dampens the effect of the random shock on the rate. Consequently, when the rate gets close to zero, its evolution becomes dominated by the drift factor, which pushes the rate upwards (towards equilibrium).
In the case 4ab = σ², the Feller square-root process can be obtained from the square of an Ornstein–Uhlenbeck process. It is ergodic and possesses a stationary distribution. It is used in the Heston model to model stochastic volatility.
=== Distribution ===
Future distribution
The distribution of future values of a CIR process can be computed in closed form:
{\displaystyle r_{t+T}={\frac {Y}{2c}},}
where c = 2a/((1 − e^{−aT})σ²), and Y follows a non-central chi-squared distribution with 4ab/σ² degrees of freedom and non-centrality parameter 2c r_t e^{−aT}. Formally the probability density function is:
{\displaystyle f(r_{t+T};r_{t},a,b,\sigma )=c\,e^{-u-v}\left({\frac {v}{u}}\right)^{q/2}I_{q}(2{\sqrt {uv}}),}
where q = 2ab/σ² − 1, u = c r_t e^{−aT}, v = c r_{t+T}, and I_q(2√(uv)) is a modified Bessel function of the first kind of order q.
Asymptotic distribution
Due to mean reversion, as time becomes large, the distribution of r_∞ will approach a gamma distribution with probability density
{\displaystyle f(r_{\infty };a,b,\sigma )={\frac {\beta ^{\alpha }}{\Gamma (\alpha )}}r_{\infty }^{\alpha -1}e^{-\beta r_{\infty }},}
where β = 2a/σ² and α = 2ab/σ².
=== Properties ===
Mean reversion,
Level-dependent volatility (σ√(r_t)),
For given positive r_0 the process will never touch zero if 2ab ≥ σ²; otherwise it can occasionally touch the zero point,
E[r_t | r_0] = r_0 e^{−at} + b(1 − e^{−at}), so the long-term mean is b,
Var[r_t | r_0] = r_0 (σ²/a)(e^{−at} − e^{−2at}) + (bσ²/(2a))(1 − e^{−at})².
=== Calibration ===
Ordinary least squares
The continuous SDE can be discretized as
{\displaystyle r_{t+\Delta t}-r_{t}=a(b-r_{t})\,\Delta t+\sigma \,{\sqrt {r_{t}\Delta t}}\,\varepsilon _{t},}
which is equivalent to
{\displaystyle {\frac {r_{t+\Delta t}-r_{t}}{\sqrt {r_{t}}}}={\frac {ab\,\Delta t}{\sqrt {r_{t}}}}-a{\sqrt {r_{t}}}\,\Delta t+\sigma \,{\sqrt {\Delta t}}\,\varepsilon _{t},}
provided ε_t is i.i.d. standard normal. This equation can be used for a linear regression.
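As an illustrative sketch of that regression (the function name and the use of NumPy's least-squares routine are my own choices, not part of the original text), the transformed equation can be fitted with ordinary least squares to recover a, b and σ from an observed path r sampled at spacing Δt:

```python
import numpy as np

def cir_ols_calibration(r, dt):
    """Estimate (a, b, sigma) from a CIR sample path via the transformed regression above."""
    r0, r1 = r[:-1], r[1:]
    y = (r1 - r0) / np.sqrt(r0)                              # response
    X = np.column_stack([1.0 / np.sqrt(r0), np.sqrt(r0)])    # regressors, no intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    a = -beta[1] / dt                 # beta1 = -a*dt
    b = -beta[0] / beta[1]            # beta0 = a*b*dt  =>  b = -beta0/beta1
    resid = y - X @ beta
    sigma = resid.std(ddof=2) / np.sqrt(dt)
    return a, b, sigma
```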
Martingale estimation
Maximum likelihood
=== Simulation ===
Stochastic simulation of the CIR process can be achieved using two variants (a sketch of the discretization variant follows the list):
Discretization
Exact
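A minimal sketch of the discretization variant, assuming a plain Euler scheme with the square-root argument truncated at zero (the function name and parameter values are illustrative):

```python
import numpy as np

def simulate_cir_euler(r0, a, b, sigma, dt, n_steps, seed=0):
    """Euler discretization of dr = a(b - r) dt + sigma*sqrt(r) dW, truncating sqrt at zero."""
    rng = np.random.default_rng(seed)
    r = np.empty(n_steps + 1)
    r[0] = r0
    for i in range(n_steps):
        z = rng.standard_normal()
        r_pos = max(r[i], 0.0)   # guard against negative values caused by discretization error
        r[i + 1] = r[i] + a * (b - r[i]) * dt + sigma * np.sqrt(r_pos * dt) * z
    return r

path = simulate_cir_euler(r0=0.03, a=0.5, b=0.04, sigma=0.1, dt=1/252, n_steps=2520)
```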
== Bond pricing ==
Under the no-arbitrage assumption, a bond may be priced using this interest rate process. The bond price is exponential affine in the interest rate:
{\displaystyle P(t,T)=A(t,T)e^{-B(t,T)r_{t}},}
where
{\displaystyle A(t,T)=\left({\frac {2he^{(a+h)(T-t)/2}}{2h+(a+h)(e^{h(T-t)}-1)}}\right)^{2ab/\sigma ^{2}}}
{\displaystyle B(t,T)={\frac {2(e^{h(T-t)}-1)}{2h+(a+h)(e^{h(T-t)}-1)}}}
{\displaystyle h={\sqrt {a^{2}+2\sigma ^{2}}}}
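These three formulas translate directly into code. A minimal sketch (the function name and example parameters are illustrative):

```python
import numpy as np

def cir_bond_price(r_t, t, T, a, b, sigma):
    """Zero-coupon bond price P(t, T) = A(t, T) * exp(-B(t, T) * r_t) under the CIR model."""
    h = np.sqrt(a**2 + 2 * sigma**2)
    tau = T - t
    denom = 2 * h + (a + h) * (np.exp(h * tau) - 1)
    A = (2 * h * np.exp((a + h) * tau / 2) / denom) ** (2 * a * b / sigma**2)
    B = 2 * (np.exp(h * tau) - 1) / denom
    return A * np.exp(-B * r_t)

print(cir_bond_price(r_t=0.03, t=0.0, T=5.0, a=0.5, b=0.04, sigma=0.1))
```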
== Extensions ==
The CIR model uses a special case of a basic affine jump diffusion, which still permits a closed-form expression for bond prices. Time varying functions replacing coefficients can be introduced in the model in order to make it consistent with a pre-assigned term structure of interest rates and possibly volatilities. The most general approach is in Maghsoodi (1996). A more tractable approach is in Brigo and Mercurio (2001b) where an external time-dependent shift is added to the model for consistency with an input term structure of rates.
A significant extension of the CIR model to the case of stochastic mean and stochastic volatility is given by Lin Chen (1996) and is known as Chen model. A more recent extension for handling cluster volatility, negative interest rates and different distributions is the so-called "CIR #" by Orlando, Mininni and Bufalo (2018, 2019, 2020, 2021, 2023) and a simpler extension focussing on negative interest rates was proposed by Di Francesco and Kamm (2021, 2022), which are referred to as the CIR- and CIR-- models.
== See also ==
Hull–White model
Vasicek model
Chen model
== References ==
== Further References ==
Hull, John C. (2003). Options, Futures and Other Derivatives. Upper Saddle River, NJ: Prentice Hall. ISBN 0-13-009056-5.
Cox, J.C.; Ingersoll, J.E.; Ross, S.A. (1985). "A Theory of the Term Structure of Interest Rates". Econometrica. 53 (2): 385–407. doi:10.2307/1911242. JSTOR 1911242.
Maghsoodi, Y. (1996). "Solution of the extended CIR Term Structure and Bond Option Valuation". Mathematical Finance. 6 (6): 89–109. doi:10.1111/j.1467-9965.1996.tb00113.x.
Damiano Brigo; Fabio Mercurio (2001). Interest Rate Models — Theory and Practice with Smile, Inflation and Credit (2nd ed. 2006 ed.). Springer Verlag. ISBN 978-3-540-22149-4.
Brigo, Damiano; Fabio Mercurio (2001b). "A deterministic-shift extension of analytically tractable and time-homogeneous short rate models". Finance & Stochastics. 5 (3): 369–388. doi:10.1007/PL00013541. S2CID 35316609.
Open Source library implementing the CIR process in python
Orlando, Giuseppe; Mininni, Rosa Maria; Bufalo, Michele (2020). "Forecasting interest rates through Vasicek and CIR models: A partitioning approach". Journal of Forecasting. 39 (4): 569–579. arXiv:1901.02246. doi:10.1002/for.2642. ISSN 1099-131X. S2CID 126507446.
In physics, a Langevin equation (named after Paul Langevin) is a stochastic differential equation describing how a system evolves when subjected to a combination of deterministic and fluctuating ("random") forces. The dependent variables in a Langevin equation typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation. One application is to Brownian motion, which models the fluctuating motion of a small particle in a fluid.
== Brownian motion as a prototype ==
The original Langevin equation describes Brownian motion, the apparently random movement of a particle in a fluid due to collisions with the molecules of the fluid,
{\displaystyle m{\frac {\mathrm {d} \mathbf {v} }{\mathrm {d} t}}=-\lambda \mathbf {v} +{\boldsymbol {\eta }}\left(t\right).}
Here, v is the velocity of the particle, λ is its damping coefficient, and m is its mass. The force acting on the particle is written as a sum of a viscous force proportional to the particle's velocity (Stokes' law), and a noise term η(t) representing the effect of the collisions with the molecules of the fluid. The force η(t) has a Gaussian probability distribution with correlation function
{\displaystyle \left\langle \eta _{i}\left(t\right)\eta _{j}\left(t'\right)\right\rangle =2\lambda k_{\text{B}}T\delta _{i,j}\delta \left(t-t'\right),}
where k_B is the Boltzmann constant, T is the temperature and η_i(t) is the i-th component of the vector η(t). The δ-function form of the time correlation means that the force at a time t is uncorrelated with the force at any other time. This is an approximation: the actual random force has a nonzero correlation time corresponding to the collision time of the molecules. However, the Langevin equation is used to describe the motion of a "macroscopic" particle at a much longer time scale, and in this limit the δ-correlation holds and the Langevin equation becomes virtually exact.
Another common feature of the Langevin equation is the occurrence of the damping coefficient λ in the correlation function of the random force, which in an equilibrium system is an expression of the Einstein relation.
== Mathematical aspects ==
A strictly δ-correlated fluctuating force η(t) is not a function in the usual mathematical sense and even the derivative dv/dt is not defined in this limit. This problem disappears when the Langevin equation is written in integral form
{\displaystyle m\mathbf {v} =\int ^{t}\left(-\lambda \mathbf {v} +{\boldsymbol {\eta }}\left(t\right)\right)\mathrm {d} t.}
Therefore, the differential form is only an abbreviation for its time integral. The general mathematical term for equations of this type is "stochastic differential equation".
Another mathematical ambiguity occurs for Langevin equations with multiplicative noise, which refers to noise terms that are multiplied by a non-constant function of the dependent variables, e.g., |v(t)| η(t). If a multiplicative noise is intrinsic to the system, its definition is ambiguous, as it is equally valid to interpret it according to the Stratonovich or Itô scheme (see Itô calculus). Nevertheless, physical observables are independent of the interpretation, provided the latter is applied consistently when manipulating the equation. This is necessary because the symbolic rules of calculus differ depending on the interpretation scheme. If the noise is external to the system, the appropriate interpretation is the Stratonovich one.
== Generic Langevin equation ==
There is a formal derivation of a generic Langevin equation from classical mechanics. This generic equation plays a central role in the theory of critical dynamics, and other areas of nonequilibrium statistical mechanics. The equation for Brownian motion above is a special case.
An essential step in the derivation is the division of the degrees of freedom into the categories slow and fast. For example, local thermodynamic equilibrium in a liquid is reached within a few collision times, but it takes much longer for densities of conserved quantities like mass and energy to relax to equilibrium. Thus, densities of conserved quantities, and in particular their long wavelength components, are slow variable candidates. This division can be expressed formally with the Zwanzig projection operator. Nevertheless, the derivation is not completely rigorous from a mathematical physics perspective because it relies on assumptions that lack rigorous proof, and instead are justified only as plausible approximations of physical systems.
Let A = {A_i} denote the slow variables. The generic Langevin equation then reads
{\displaystyle {\frac {\mathrm {d} A_{i}}{\mathrm {d} t}}=k_{\text{B}}T\sum \limits _{j}{\left[{A_{i},A_{j}}\right]{\frac {{\mathrm {d} }{\mathcal {H}}}{\mathrm {d} A_{j}}}}-\sum \limits _{j}{\lambda _{i,j}\left(A\right){\frac {\mathrm {d} {\mathcal {H}}}{\mathrm {d} A_{j}}}+}\sum \limits _{j}{\frac {\mathrm {d} {\lambda _{i,j}\left(A\right)}}{\mathrm {d} A_{j}}}+\eta _{i}\left(t\right).}
The fluctuating force η_i(t) obeys a Gaussian probability distribution with correlation function
{\displaystyle \left\langle {\eta _{i}\left(t\right)\eta _{j}\left(t'\right)}\right\rangle =2\lambda _{i,j}\left(A\right)\delta \left(t-t'\right).}
This implies the Onsager reciprocity relation λ_{i,j} = λ_{j,i} for the damping coefficients λ. The dependence dλ_{i,j}/dA_j of λ on A is negligible in most cases. The symbol H = −ln(p_0) denotes the Hamiltonian of the system, where p_0(A) is the equilibrium probability distribution of the variables A. Finally, [A_i, A_j] is the projection of the Poisson bracket of the slow variables A_i and A_j onto the space of slow variables.
In the Brownian motion case one would have H = p²/(2mk_BT), A = {p} or A = {x, p}, and [x_i, p_j] = δ_{i,j}. The equation of motion dx/dt = p/m for x is exact: there is no fluctuating force η_x and no damping coefficient λ_{x,p}.
== Examples ==
=== Thermal noise in an electrical resistor ===
There is a close analogy between the paradigmatic Brownian particle discussed above and Johnson noise, the electric voltage generated by thermal fluctuations in a resistor. The diagram at the right shows an electric circuit consisting of a resistance R and a capacitance C. The slow variable is the voltage U between the ends of the resistor. The Hamiltonian reads
H = E/(k_BT) = CU²/(2k_BT), and the Langevin equation becomes
{\displaystyle {\frac {\mathrm {d} U}{\mathrm {d} t}}=-{\frac {U}{RC}}+\eta \left(t\right),\;\;\left\langle \eta \left(t\right)\eta \left(t'\right)\right\rangle ={\frac {2k_{\text{B}}T}{RC^{2}}}\delta \left(t-t'\right).}
This equation may be used to determine the correlation function
{\displaystyle \left\langle U\left(t\right)U\left(t'\right)\right\rangle ={\frac {k_{\text{B}}T}{C}}\exp \left(-{\frac {\left|t-t'\right|}{RC}}\right)\approx 2Rk_{\text{B}}T\delta \left(t-t'\right),}
which becomes white noise (Johnson noise) when the capacitance C becomes negligibly small.
=== Critical dynamics ===
The dynamics of the order parameter φ of a second-order phase transition slows down near the critical point and can be described with a Langevin equation. The simplest case is the universality class "model A" with a non-conserved scalar order parameter, realized for instance in axial ferromagnets,
{\displaystyle {\begin{aligned}{\frac {\partial }{\partial t}}\varphi {\left(\mathbf {x} ,t\right)}&=-\lambda {\frac {\delta {\mathcal {H}}}{\delta \varphi }}+\eta {\left(\mathbf {x} ,t\right)},\\[2ex]{\mathcal {H}}&=\int d^{d}x\left[{\frac {1}{2}}r_{0}\varphi ^{2}+u\varphi ^{4}+{\frac {1}{2}}\left(\nabla \varphi \right)^{2}\right],\\[2ex]\left\langle \eta {\left(\mathbf {x} ,t\right)}\,\eta {\left(\mathbf {x} ',t'\right)}\right\rangle &=2\lambda \,\delta {\left(\mathbf {x} -\mathbf {x} '\right)}\;\delta {\left(t-t'\right)}.\end{aligned}}}
Other universality classes (the nomenclature is "model A",..., "model J") contain a diffusing order parameter, order parameters with several components, other critical variables and/or contributions from Poisson brackets.
=== Harmonic oscillator in a fluid ===
{\displaystyle m{\frac {dv}{dt}}=-\lambda v+\eta (t)-kx}
A particle in a fluid is described by a Langevin equation with a potential energy function, a damping force, and thermal fluctuations given by the fluctuation–dissipation theorem. If the potential is quadratic then the constant energy curves are ellipses, as shown in the figure. If there is dissipation but no thermal noise, a particle continually loses energy to the environment, and its time-dependent phase portrait (velocity vs position) corresponds to an inward spiral toward 0 velocity. By contrast, thermal fluctuations continually add energy to the particle and prevent it from reaching exactly 0 velocity. Rather, the initial ensemble of stochastic oscillators approaches a steady state in which the velocity and position are distributed according to the Maxwell–Boltzmann distribution. In the plot below (figure 2), the long-time velocity distribution (blue) and position distribution (orange) in a harmonic potential (U = kx²/2) are plotted together with the Boltzmann probabilities for velocity (green) and position (red). In particular, the late-time behavior depicts thermal equilibrium.
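A minimal numerical sketch of this system, assuming a plain Euler–Maruyama scheme and illustrative parameter values (the function name is my own); at long times the sample variances should approach the equipartition values k_BT/m and k_BT/k:

```python
import numpy as np

def langevin_oscillator(m, lam, k, kBT, dt, n_steps, seed=0):
    """Euler-Maruyama integration of m dv = (-lam*v - k*x) dt + sqrt(2*lam*kBT) dW, dx = v dt."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps + 1)
    v = np.zeros(n_steps + 1)
    noise_amp = np.sqrt(2.0 * lam * kBT * dt) / m
    for i in range(n_steps):
        v[i + 1] = v[i] + (-(lam / m) * v[i] - (k / m) * x[i]) * dt + noise_amp * rng.standard_normal()
        x[i + 1] = x[i] + v[i + 1] * dt
    return x, v

x, v = langevin_oscillator(m=1.0, lam=1.0, k=1.0, kBT=1.0, dt=1e-3, n_steps=500_000)
print(v.var(), "~ kBT/m;", x.var(), "~ kBT/k")   # equipartition check at long times
```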
=== Trajectories of free Brownian particles ===
Consider a free particle of mass m with equation of motion described by
{\displaystyle m{\frac {d\mathbf {v} }{dt}}=-{\frac {\mathbf {v} }{\mu }}+{\boldsymbol {\eta }}(t),}
where v = dr/dt is the particle velocity, μ is the particle mobility, and η(t) = m a(t) is a rapidly fluctuating force whose time-average vanishes over a characteristic timescale t_c of particle collisions. The general solution to the equation of motion is
{\displaystyle \mathbf {v} (t)=\mathbf {v} (0)e^{-t/\tau }+\int _{0}^{t}\mathbf {a} (t')e^{-(t-t')/\tau }dt',}
where τ = mμ is the relaxation time of the particle. It can also be shown that the autocorrelation function of the particle velocity v is given by
{\displaystyle {\begin{aligned}R_{vv}(t_{1},t_{2})&\equiv \langle \mathbf {v} (t_{1})\cdot \mathbf {v} (t_{2})\rangle \\&=v^{2}(0)e^{-(t_{1}+t_{2})/\tau }+\int _{0}^{t_{1}}\int _{0}^{t_{2}}R_{aa}(t_{1}',t_{2}')e^{-(t_{1}+t_{2}-t_{1}'-t_{2}')/\tau }dt_{1}'dt_{2}'\\&\simeq v^{2}(0)e^{-|t_{2}-t_{1}|/\tau }+\left[{\frac {3k_{\text{B}}T}{m}}-v^{2}(0)\right]{\Big [}e^{-|t_{2}-t_{1}|/\tau }-e^{-(t_{1}+t_{2})/\tau }{\Big ]},\end{aligned}}}
where we have used the property that the variables a(t_1') and a(t_2') become uncorrelated for time separations t_2' − t_1' ≫ t_c. Besides, the value of lim_{t→∞} ⟨v²(t)⟩ = lim_{t→∞} R_vv(t, t) is set equal to 3k_BT/m so that it obeys the equipartition theorem. If the system is initially at thermal equilibrium already, with v²(0) = 3k_BT/m, then ⟨v²(t)⟩ = 3k_BT/m for all t, meaning that the system remains at equilibrium at all times.
The velocity v(t) of the Brownian particle can be integrated to yield its trajectory r(t). If it is initially located at the origin with probability 1, then the result is
{\displaystyle \mathbf {r} (t)=\mathbf {v} (0)\tau \left(1-e^{-t/\tau }\right)+\tau \int _{0}^{t}\mathbf {a} (t')\left[1-e^{-(t-t')/\tau }\right]dt'.}
Hence, the average displacement ⟨r(t)⟩ = v(0)τ(1 − e^{−t/τ}) asymptotes to v(0)τ as the system relaxes. The mean squared displacement can be determined similarly:
{\displaystyle \langle r^{2}(t)\rangle =v^{2}(0)\tau ^{2}\left(1-e^{-t/\tau }\right)^{2}-{\frac {3k_{\text{B}}T}{m}}\tau ^{2}\left(1-e^{-t/\tau }\right)\left(3-e^{-t/\tau }\right)+{\frac {6k_{\text{B}}T}{m}}\tau t.}
This expression implies that ⟨r²(t ≪ τ)⟩ ≃ v²(0)t², indicating that the motion of Brownian particles at timescales much shorter than the relaxation time τ of the system is (approximately) time-reversal invariant. On the other hand, ⟨r²(t ≫ τ)⟩ ≃ 6k_BTτt/m = 6μk_BTt = 6Dt, which indicates an irreversible, dissipative process.
=== Recovering Boltzmann statistics ===
If the external potential is conservative and the noise term derives from a reservoir in thermal equilibrium, then the long-time solution to the Langevin equation must reduce to the Boltzmann distribution, which is the probability distribution function for particles in thermal equilibrium. In the special case of overdamped dynamics, the inertia of the particle is negligible in comparison to the damping force, and the trajectory
x(t) is described by the overdamped Langevin equation
{\displaystyle \lambda {\frac {dx}{dt}}=-{\frac {\partial V(x)}{\partial x}}+\eta (t)\equiv -{\frac {\partial V(x)}{\partial x}}+{\sqrt {2\lambda k_{\text{B}}T}}{\frac {dB_{t}}{dt}},}
where λ is the damping constant. The term η(t) is white noise, characterized by ⟨η(t)η(t′)⟩ = 2k_BTλ δ(t − t′) (formally, the Wiener process). One way to solve this equation is to introduce a test function f and calculate its average. The average of f(x(t)) should be time-independent for finite x(t), leading to
{\displaystyle {\frac {d}{dt}}\left\langle f(x(t))\right\rangle =0.}
Itô's lemma for the Itô drift-diffusion process
{\displaystyle dX_{t}=\mu _{t}\,dt+\sigma _{t}\,dB_{t}}
says that the differential of a twice-differentiable function f(t, x) is given by
{\displaystyle df=\left({\frac {\partial f}{\partial t}}+\mu _{t}{\frac {\partial f}{\partial x}}+{\frac {\sigma _{t}^{2}}{2}}{\frac {\partial ^{2}f}{\partial x^{2}}}\right)dt+\sigma _{t}{\frac {\partial f}{\partial x}}\,dB_{t}.}
Applying this to the calculation of ⟨f(x(t))⟩ gives
{\displaystyle \left\langle -f'(x){\frac {\partial V}{\partial x}}+k_{\text{B}}Tf''(x)\right\rangle =0.}
This average can be written using the probability density function p(x):
{\displaystyle {\begin{aligned}&\int \left(-f'(x){\frac {\partial V}{\partial x}}p(x)+{k_{\text{B}}T}f''(x)p(x)\right)dx\\=&\int \left(-f'(x){\frac {\partial V}{\partial x}}p(x)-{k_{\text{B}}T}f'(x)p'(x)\right)dx\\=&\;0\end{aligned}}}
where the second term was integrated by parts (hence the negative sign). Since this is true for arbitrary functions f, it follows that
{\displaystyle {\frac {\partial V}{\partial x}}p(x)+{k_{\text{B}}T}p'(x)=0,}
thus recovering the Boltzmann distribution
{\displaystyle p(x)\propto \exp \left({-{\frac {V(x)}{k_{\text{B}}T}}}\right).}
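This limit can also be checked numerically. A minimal sketch, assuming an Euler–Maruyama discretization of the overdamped equation with a quadratic potential V(x) = kx²/2 (the function name and parameter values are illustrative); the long-run sample variance should approach k_BT/k, as the Boltzmann distribution predicts:

```python
import numpy as np

def overdamped_langevin(V_prime, lam, kBT, x0, dt, n_steps, seed=0):
    """Euler-Maruyama for lam*dx = -V'(x) dt + sqrt(2*lam*kBT) dB."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    amp = np.sqrt(2.0 * kBT * dt / lam)
    for i in range(n_steps):
        x[i + 1] = x[i] - V_prime(x[i]) * dt / lam + amp * rng.standard_normal()
    return x

k = 2.0
xs = overdamped_langevin(lambda x: k * x, lam=1.0, kBT=1.0, x0=0.0, dt=1e-3, n_steps=1_000_000)
print(xs.var(), "~ kBT/k =", 1.0 / k)   # Boltzmann: p(x) proportional to exp(-k x^2 / (2 kBT))
```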
== Equivalent techniques ==
In some situations, one is primarily interested in the noise-averaged behavior of the Langevin equation, as opposed to the solution for particular realizations of the noise. This section describes techniques for obtaining this averaged behavior that are distinct from—but also equivalent to—the stochastic calculus inherent in the Langevin equation.
=== Fokker–Planck equation ===
A Fokker–Planck equation is a deterministic equation for the time-dependent probability density P(A, t) of the stochastic variables A. The Fokker–Planck equation corresponding to the generic Langevin equation described in this article is the following:
{\displaystyle {\frac {\partial P\left(A,t\right)}{\partial t}}=\sum _{i,j}{\frac {\partial }{\partial A_{i}}}\left(-k_{\text{B}}T\left[A_{i},A_{j}\right]{\frac {\partial {\mathcal {H}}}{\partial A_{j}}}+\lambda _{i,j}{\frac {\partial {\mathcal {H}}}{\partial A_{j}}}+\lambda _{i,j}{\frac {\partial }{\partial A_{j}}}\right)P\left(A,t\right).}
The equilibrium distribution P(A) = p_0(A) = const × exp(−H) is a stationary solution.
==== Klein–Kramers equation ====
The Fokker–Planck equation for an underdamped Brownian particle is called the Klein–Kramers equation. If the Langevin equations are written as
{\displaystyle {\begin{aligned}{\dot {\mathbf {r} }}&={\frac {\mathbf {p} }{m}}\\{\dot {\mathbf {p} }}&=-\xi \,\mathbf {p} -\nabla V(\mathbf {r} )+{\sqrt {2m\xi k_{\mathrm {B} }T}}{\boldsymbol {\eta }}(t),\qquad \langle {\boldsymbol {\eta }}^{\mathrm {T} }(t){\boldsymbol {\eta }}(t')\rangle =\mathbf {I} \delta (t-t')\end{aligned}}}
where p is the momentum, then the corresponding Fokker–Planck equation is
{\displaystyle {\frac {\partial f}{\partial t}}+{\frac {1}{m}}\mathbf {p} \cdot \nabla _{\mathbf {r} }f=\xi \nabla _{\mathbf {p} }\cdot \left(\mathbf {p} \,f\right)+\nabla _{\mathbf {p} }\cdot \left(\nabla V(\mathbf {r} )\,f\right)+m\xi k_{\mathrm {B} }T\,\nabla _{\mathbf {p} }^{2}f}
Here ∇_r and ∇_p are the gradient operators with respect to r and p, and ∇_p² is the Laplacian with respect to p.
In d-dimensional free space, corresponding to V(r) = constant on R^d, this equation can be solved using Fourier transforms. If the particle is initialized at t = 0 with position r′ and momentum p′, corresponding to the initial condition f(r, p, 0) = δ(r − r′)δ(p − p′), then the solution is
{\displaystyle {\begin{aligned}f(\mathbf {r} ,\mathbf {p} ,t)=&{\frac {1}{\left(2\pi \sigma _{X}\sigma _{P}{\sqrt {1-\beta ^{2}}}\right)^{d}}}\times \\&\quad \exp \left[-{\frac {1}{2(1-\beta ^{2})}}\left({\frac {|\mathbf {r} -{\boldsymbol {\mu }}_{X}|^{2}}{\sigma _{X}^{2}}}+{\frac {|\mathbf {p} -{\boldsymbol {\mu }}_{P}|^{2}}{\sigma _{P}^{2}}}-{\frac {2\beta (\mathbf {r} -{\boldsymbol {\mu }}_{X})\cdot (\mathbf {p} -{\boldsymbol {\mu }}_{P})}{\sigma _{X}\sigma _{P}}}\right)\right]\end{aligned}}}
where
{\displaystyle {\begin{aligned}&\sigma _{X}^{2}={\frac {k_{\mathrm {B} }T}{m\xi ^{2}}}\left[1+2\xi t-\left(2-e^{-\xi t}\right)^{2}\right];\qquad \sigma _{P}^{2}=mk_{\mathrm {B} }T\left(1-e^{-2\xi t}\right)\\&\beta ={\frac {k_{\text{B}}T}{\xi \sigma _{X}\sigma _{P}}}\left(1-e^{-\xi t}\right)^{2}\\&{\boldsymbol {\mu }}_{X}=\mathbf {r} '+(m\xi )^{-1}\left(1-e^{-\xi t}\right)\mathbf {p} ';\qquad {\boldsymbol {\mu }}_{P}=\mathbf {p} 'e^{-\xi t}.\end{aligned}}}
In three spatial dimensions, the mean squared displacement is
⟨
r
(
t
)
2
⟩
=
∫
f
(
r
,
p
,
t
)
r
2
d
r
d
p
=
μ
X
2
+
3
σ
X
2
{\displaystyle \langle \mathbf {r} (t)^{2}\rangle =\int f(\mathbf {r} ,\mathbf {p} ,t)\mathbf {r} ^{2}\,d\mathbf {r} d\mathbf {p} ={\boldsymbol {\mu }}_{X}^{2}+3\sigma _{X}^{2}}
=== Path integral ===
A path integral equivalent to a Langevin equation can be obtained from the corresponding Fokker–Planck equation or by transforming the Gaussian probability distribution P^(η)(η)dη of the fluctuating force η to a probability distribution of the slow variables, schematically P(A)dA = P^(η)(η(A)) det(dη/dA) dA.
The functional determinant and associated mathematical subtleties drop out if the Langevin equation is discretized in the natural (causal) way, where A(t + Δt) − A(t) depends on A(t) but not on A(t + Δt). It turns out to be convenient to introduce auxiliary response variables Ã. The path integral equivalent to the generic Langevin equation then reads
{\displaystyle \int P(A,{\tilde {A}})\,\mathrm {d} A\,\mathrm {d} {\tilde {A}}=N\int \exp \left(L(A,{\tilde {A}})\right)\mathrm {d} A\,\mathrm {d} {\tilde {A}},}
where N is a normalization factor and
{\displaystyle L(A,{\tilde {A}})=\int \sum _{i,j}\left\{{\tilde {A}}_{i}\lambda _{i,j}{\tilde {A}}_{j}-{\widetilde {A}}_{i}\left\{\delta _{i,j}{\frac {\mathrm {d} A_{j}}{\mathrm {d} t}}-k_{\text{B}}T\left[A_{i},A_{j}\right]{\frac {\mathrm {d} {\mathcal {H}}}{\mathrm {d} A_{j}}}+\lambda _{i,j}{\frac {\mathrm {d} {\mathcal {H}}}{\mathrm {d} A_{j}}}-{\frac {\mathrm {d} \lambda _{i,j}}{\mathrm {d} A_{j}}}\right\}\right\}\mathrm {d} t.}
The path integral formulation allows for the use of tools from quantum field theory, such as perturbation and renormalization group methods. This formulation is typically referred to as either the Martin-Siggia-Rose formalism or the Janssen-De Dominicis formalism after its developers. The mathematical formalism for this representation can be developed on abstract Wiener space.
== See also ==
Grote–Hynes theory
Langevin dynamics
Stochastic thermodynamics
== References ==
== Further reading ==
W. T. Coffey (Trinity College, Dublin, Ireland) and Yu P. Kalmykov (Université de Perpignan, France), The Langevin Equation: With Applications to Stochastic Problems in Physics, Chemistry and Electrical Engineering (Third edition), World Scientific Series in Contemporary Chemical Physics – Vol 27.
Reif, F. Fundamentals of Statistical and Thermal Physics, McGraw Hill New York, 1965. See section 15.5 Langevin Equation
R. Friedrich, J. Peinke and Ch. Renner. How to Quantify Deterministic and Random Influences on the Statistics of the Foreign Exchange Market, Phys. Rev. Lett. 84, 5224–5227 (2000)
L.C.G. Rogers and D. Williams. Diffusions, Markov Processes, and Martingales, Cambridge Mathematical Library, Cambridge University Press, Cambridge, reprint of 2nd (1994) edition, 2000.
In financial mathematics, the Hull–White model is a model of future interest rates. In its most generic formulation, it belongs to the class of no-arbitrage models that are able to fit today's term structure of interest rates. It is relatively straightforward to translate the mathematical description of the evolution of future interest rates onto a tree or lattice and so interest rate derivatives such as bermudan swaptions can be valued in the model.
The first Hull–White model was described by John C. Hull and Alan White in 1990. The model is still popular in the market today.
== The model ==
=== One-factor model ===
The model is a short-rate model. In general, it has the following dynamics:
{\displaystyle dr(t)=\left[\theta (t)-\alpha (t)r(t)\right]\,dt+\sigma (t)\,dW(t).}
There is a degree of ambiguity among practitioners about exactly which parameters in the model are time-dependent or what name to apply to the model in each case. The most commonly accepted naming convention is the following:
θ has t (time) dependence — the Hull–White model.
θ and α are both time-dependent — the extended Vasicek model.
=== Two-factor model ===
The two-factor Hull–White model (Hull 2006:657–658) contains an additional disturbance term whose mean reverts to zero, and is of the form:
{\displaystyle d\,f(r(t))=\left[\theta (t)+u-\alpha (t)\,f(r(t))\right]dt+\sigma _{1}(t)\,dW_{1}(t),}
where f is a deterministic function, typically the identity function (extension of the one-factor version, analytically tractable, and with potentially negative rates), the natural logarithm (extension of the Black–Karasinski model, not analytically tractable, and with positive interest rates), or combinations (proportional to the natural logarithm on small rates and proportional to the identity function on large rates); and u has an initial value of 0 and follows the process:
{\displaystyle du=-bu\,dt+\sigma _{2}\,dW_{2}(t)}
== Analysis of the one-factor model ==
For the rest of this article we assume only θ has t-dependence.
Neglecting the stochastic term for a moment, notice that for α > 0 the change in r is negative if r is currently "large" (greater than θ(t)/α) and positive if the current value is small. That is, the stochastic process is a mean-reverting Ornstein–Uhlenbeck process.
θ is calculated from the initial yield curve describing the current term structure of interest rates. Typically α is left as a user input (for example it may be estimated from historical data). σ is determined via calibration to a set of caplets and swaptions readily tradeable in the market.
When α, θ, and σ are constant, Itô's lemma can be used to prove that
{\displaystyle r(t)=e^{-\alpha t}r(0)+{\frac {\theta }{\alpha }}\left(1-e^{-\alpha t}\right)+\sigma e^{-\alpha t}\int _{0}^{t}e^{\alpha u}\,dW(u),}
which has distribution
{\displaystyle r(t)\sim {\mathcal {N}}\left(e^{-\alpha t}r(0)+{\frac {\theta }{\alpha }}\left(1-e^{-\alpha t}\right),{\frac {\sigma ^{2}}{2\alpha }}\left(1-e^{-2\alpha t}\right)\right),}
where N(μ, σ²) denotes the normal distribution with mean μ and variance σ².
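Because the transition density is Gaussian with the mean and variance given above, a constant-parameter path can be sampled exactly rather than by Euler stepping. A minimal sketch (the function name and parameter values are illustrative):

```python
import numpy as np

def hull_white_exact_path(r0, alpha, theta, sigma, dt, n_steps, seed=0):
    """Exact transition sampling for dr = (theta - alpha*r) dt + sigma dW with constant parameters."""
    rng = np.random.default_rng(seed)
    r = np.empty(n_steps + 1)
    r[0] = r0
    decay = np.exp(-alpha * dt)
    std = sigma * np.sqrt((1 - np.exp(-2 * alpha * dt)) / (2 * alpha))
    for i in range(n_steps):
        mean = decay * r[i] + (theta / alpha) * (1 - decay)
        r[i + 1] = mean + std * rng.standard_normal()
    return r

path = hull_white_exact_path(r0=0.02, alpha=0.1, theta=0.004, sigma=0.01, dt=1/12, n_steps=120)
```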
When θ(t) is time-dependent,
{\displaystyle r(t)=e^{-\alpha t}r(0)+\int _{0}^{t}e^{\alpha (s-t)}\theta (s)ds+\sigma e^{-\alpha t}\int _{0}^{t}e^{\alpha u}\,dW(u),}
which has distribution
{\displaystyle r(t)\sim {\mathcal {N}}\left(e^{-\alpha t}r(0)+\int _{0}^{t}e^{\alpha (s-t)}\theta (s)ds,{\frac {\sigma ^{2}}{2\alpha }}\left(1-e^{-2\alpha t}\right)\right).}
== Bond pricing using the Hull–White model ==
It turns out that the time-S value of the T-maturity discount bond has distribution (note the affine term structure here!)
{\displaystyle P(S,T)=A(S,T)\exp(-B(S,T)r(S)),}
where
{\displaystyle B(S,T)={\frac {1-\exp(-\alpha (T-S))}{\alpha }},}
{\displaystyle A(S,T)={\frac {P(0,T)}{P(0,S)}}\exp \left(\,-B(S,T){\frac {\partial \log(P(0,S))}{\partial S}}-{\frac {\sigma ^{2}(\exp(-\alpha T)-\exp(-\alpha S))^{2}(\exp(2\alpha S)-1)}{4\alpha ^{3}}}\right).}
Note that the terminal distribution of P(S, T) is log-normal.
== Derivative pricing ==
By selecting as numeraire the time-S bond (which corresponds to switching to the S-forward measure), we have from the fundamental theorem of arbitrage-free pricing, the value at time t of a derivative which has payoff at time S.
{\displaystyle V(t)=P(t,S)\mathbb {E} _{S}[V(S)\mid {\mathcal {F}}(t)].}
Here, E_S is the expectation taken with respect to the S-forward measure. Moreover, standard arbitrage arguments show that the time-T forward price F_V(t, T) for a payoff at time T given by V(T) must satisfy F_V(t, T) = V(t)/P(t, T), thus
{\displaystyle F_{V}(t,T)=\mathbb {E} _{T}[V(T)\mid {\mathcal {F}}(t)].}
Thus it is possible to value many derivatives V dependent solely on a single bond P(S, T) analytically when working in the Hull–White model. For example, in the case of a bond put
{\displaystyle V(S)=(K-P(S,T))^{+}.}
Because P(S, T) is lognormally distributed, the general calculation used for the Black–Scholes model shows that
{\displaystyle {E}_{S}[(K-P(S,T))^{+}]=KN(-d_{2})-F(t,S,T)N(-d_{1}),}
where
{\displaystyle d_{1}={\frac {\log(F/K)+\sigma _{P}^{2}S/2}{\sigma _{P}{\sqrt {S}}}}}
and
{\displaystyle d_{2}=d_{1}-\sigma _{P}{\sqrt {S}}.}
Thus today's value (with the P(0,S) multiplied back in and t set to 0) is:
{\displaystyle P(0,S)KN(-d_{2})-P(0,T)N(-d_{1}).}
Here σ_P is the standard deviation (relative volatility) of the log-normal distribution for P(S, T). A fairly substantial amount of algebra shows that it is related to the original parameters via
{\displaystyle {\sqrt {S}}\sigma _{P}={\frac {\sigma }{\alpha }}(1-\exp(-\alpha (T-S))){\sqrt {\frac {1-\exp(-2\alpha S)}{2\alpha }}}.}
Note that this expectation was done in the S-bond measure, whereas we did not specify a measure at all for the original Hull–White process. This does not matter — the volatility is all that matters and is measure-independent.
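Putting the last few formulas together, a minimal sketch of the bond-put price in terms of today's discount factors (the function name and example inputs are illustrative, not from the original text):

```python
from math import erf, log, sqrt, exp

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def hw_bond_put(P0S, P0T, K, S, T, alpha, sigma):
    """Hull-White price today of a put, struck at K and expiring at S, on a bond maturing at T.

    P0S and P0T are today's discount factors P(0, S) and P(0, T).
    """
    F = P0T / P0S                                   # forward bond price
    sig = (sigma / alpha) * (1 - exp(-alpha * (T - S))) * sqrt((1 - exp(-2 * alpha * S)) / (2 * alpha))
    d1 = (log(F / K) + 0.5 * sig**2) / sig          # sig plays the role of sigma_P * sqrt(S)
    d2 = d1 - sig
    return P0S * K * norm_cdf(-d2) - P0T * norm_cdf(-d1)

print(hw_bond_put(P0S=0.97, P0T=0.92, K=0.95, S=1.0, T=3.0, alpha=0.1, sigma=0.01))
```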
Because interest rate caps/floors are equivalent to bond puts and calls respectively, the above analysis shows that caps and floors can be priced analytically in the Hull–White model. Jamshidian's trick applies to Hull–White (as today's value of a swaption in the Hull–White model is a monotonic function of today's short rate). Thus knowing how to price caps is also sufficient for pricing swaptions. In the event that the underlying is a compounded backward-looking rate rather than a (forward-looking) LIBOR term rate, Turfus (2020) shows how this formula can be straightforwardly modified to take into account the additional convexity.
Swaptions can also be priced directly as described in Henrard (2003). Direct implementations are usually more efficient.
== Monte-Carlo simulation, trees and lattices ==
However, valuing vanilla instruments such as caps and swaptions is useful primarily for calibration. The real use of the model is to value somewhat more exotic derivatives such as bermudan swaptions on a lattice, or other derivatives in a multi-currency context such as Quanto Constant Maturity Swaps, as explained for example in Brigo and Mercurio (2001). The efficient and exact Monte-Carlo simulation of the Hull–White model with time dependent parameters can be easily performed, see Ostrovski (2013) and (2016). An open-source implementation of the exact Monte-Carlo simulation following Fries (2016) can be found in finmath lib.
== Forecasting ==
Even though single-factor models such as the Vasicek, CIR and Hull–White models were devised for pricing, recent research has shown their potential with regard to forecasting. In Orlando et al. (2018, 2019) a new methodology to forecast future interest rates, called CIR#, was provided.
The idea, apart from turning a short-rate model used for pricing into a forecasting tool, lies in an appropriate partitioning of the dataset into subgroups according to a given distribution.
There it was shown how the said partitioning enables capturing statistically significant time changes in the volatility of interest rates. Following the said approach, Orlando et al. (2021) compare the Hull–White model with the CIR model in terms of forecasting and prediction of interest rate directionality.
== See also ==
Vasicek model
Cox–Ingersoll–Ross model
Black–Karasinski model
== References ==
Primary references
John Hull and Alan White, "Using Hull–White interest rate trees," Journal of Derivatives, Vol. 3, No. 3 (Spring 1996), pp. 26–36
John Hull and Alan White, "Numerical procedures for implementing term structure models I," Journal of Derivatives, Fall 1994, pp. 7–16.
John Hull and Alan White, "Numerical procedures for implementing term structure models II," Journal of Derivatives, Winter 1994, pp. 37–48.
John Hull and Alan White, "The pricing of options on interest rate caps and floors using the Hull–White model" in Advanced Strategies in Financial Risk Management, Chapter 4, pp. 59–67.
John Hull and Alan White, "One factor interest rate models and the valuation of interest rate derivative securities," Journal of Financial and Quantitative Analysis, Vol 28, No 2, (June 1993) pp. 235–254.
John Hull and Alan White, "Pricing interest-rate derivative securities", The Review of Financial Studies, Vol 3, No. 4 (1990) pp. 573–592.
Other references
Hull, John C. (2006). "Interest Rate Derivatives: Models of the Short Rate". Options, Futures, and Other Derivatives (6th ed.). Upper Saddle River, N.J: Prentice Hall. pp. 657–658. ISBN 0-13-149908-4. LCCN 2005047692. OCLC 60321487.
Damiano Brigo, Fabio Mercurio (2001). Interest Rate Models — Theory and Practice with Smile, Inflation and Credit (2nd ed. 2006 ed.). Springer Verlag. ISBN 978-3-540-22149-4.
Henrard, Marc (2003). "Explicit Bond Option and Swaption Formula in Heath–Jarrow–Morton One Factor Model," International Journal of Theoretical and Applied Finance, 6(1), 57–72. Preprint SSRN.
Henrard, Marc (2009). Efficient swaptions price in Hull–White one factor model, arXiv, 0901.1776v1. Preprint arXiv.
Ostrovski, Vladimir (2013). Efficient and Exact Simulation of the Hull–White Model, Preprint SSRN.
Ostrovski, Vladimir (2016). Efficient and Exact Simulation of the Gaussian Affine Interest Rate Models., International Journal of Financial Engineering, Vol. 3, No. 02.,Preprint SSRN.
Puschkarski, Eugen. Implementation of Hull–White's No-Arbitrage Term Structure Model, Diploma Thesis, Center for Central European Financial Markets
Turfus, Colin (2020). Caplet Pricing with Backward-Looking Rates., Preprint SSRN.
Letian Wang, Hull–White Model, Fixed Income Quant Group, DTCC (detailed numeric example and derivation)
In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it can be used to describe certain time-varying processes in nature, economics, behavior, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term (an imperfectly predictable term); thus the model is in the form of a stochastic difference equation (or recurrence relation) which should not be confused with a differential equation. Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average (ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable.
Unlike the moving-average (MA) model, the autoregressive model is not always stationary, because it may contain a unit root.
Large language models are called autoregressive, but they are not a classical autoregressive model in this sense because they are not linear.
== Definition ==
The notation AR(p) indicates an autoregressive model of order p. The AR(p) model is defined as
{\displaystyle X_{t}=\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\varepsilon _{t}}
where φ_1, …, φ_p are the parameters of the model, and ε_t is white noise. This can be equivalently written using the backshift operator B as
{\displaystyle X_{t}=\sum _{i=1}^{p}\varphi _{i}B^{i}X_{t}+\varepsilon _{t}}
so that, moving the summation term to the left side and using polynomial notation, we have
{\displaystyle \phi [B]X_{t}=\varepsilon _{t}}
An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response filter whose input is white noise.
Some parameter constraints are necessary for the model to remain weak-sense stationary. For example, processes in the AR(1) model with |φ_1| ≥ 1 are not stationary. More generally, for an AR(p) model to be weak-sense stationary, the roots of the polynomial Φ(z) := 1 − Σ_{i=1}^{p} φ_i z^i must lie outside the unit circle, i.e., each (complex) root z_i must satisfy |z_i| > 1 (see pages 89, 92).
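This root condition is easy to check numerically. A minimal sketch (the function name is my own):

```python
import numpy as np

def is_stationary(phi):
    """Check weak-sense stationarity of an AR(p) model with coefficients phi = [phi_1, ..., phi_p].

    The roots of Phi(z) = 1 - phi_1 z - ... - phi_p z^p must all lie outside the unit circle.
    """
    coeffs = np.r_[-np.asarray(phi, dtype=float)[::-1], 1.0]   # highest-degree coefficient first
    roots = np.roots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0))

print(is_stationary([0.5, 0.3]))    # True: stationary AR(2)
print(is_stationary([1.1]))         # False: |phi_1| >= 1
```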
== Intertemporal effect of shocks ==
In an AR process, a one-time shock affects values of the evolving variable infinitely far into the future. For example, consider the AR(1) model
X_t = φ_1 X_{t−1} + ε_t. A non-zero value for ε_t at, say, time t = 1 affects X_1 by the amount ε_1. Then by the AR equation for X_2 in terms of X_1, this affects X_2 by the amount φ_1 ε_1. Then by the AR equation for X_3 in terms of X_2, this affects X_3 by the amount φ_1² ε_1. Continuing this process shows that the effect of ε_1 never ends, although if the process is stationary then the effect diminishes toward zero in the limit.
Because each shock affects X values infinitely far into the future from when they occur, any given value Xt is affected by shocks occurring infinitely far into the past. This can also be seen by rewriting the autoregression
{\displaystyle \phi (B)X_{t}=\varepsilon _{t}\,}
(where the constant term has been suppressed by assuming that the variable has been measured as deviations from its mean) as
{\displaystyle X_{t}={\frac {1}{\phi (B)}}\varepsilon _{t}\,.}
When the polynomial division on the right side is carried out, the polynomial in the backshift operator applied to ε_t has infinite order; that is, an infinite number of lagged values of ε_t appear on the right side of the equation.
== Characteristic polynomial ==
The autocorrelation function of an AR(p) process can be expressed as
{\displaystyle \rho (\tau )=\sum _{k=1}^{p}a_{k}y_{k}^{-|\tau |},}
where y_k are the roots of the polynomial
{\displaystyle \phi (B)=1-\sum _{k=1}^{p}\varphi _{k}B^{k}}
where B is the backshift operator, φ(·) is the function defining the autoregression, and φ_k are the coefficients in the autoregression. The formula is valid only if all the roots have multiplicity 1.
The autocorrelation function of an AR(p) process is a sum of decaying exponentials.
Each real root contributes a component to the autocorrelation function that decays exponentially.
Similarly, each pair of complex conjugate roots contributes an exponentially damped oscillation.
== Graphs of AR(p) processes ==
The simplest AR process is AR(0), which has no dependence between the terms. Only the error/innovation/noise term contributes to the output of the process, so in the figure, AR(0) corresponds to white noise.
For an AR(1) process with a positive φ, only the previous term in the process and the noise term contribute to the output. If φ is close to 0, then the process still looks like white noise, but as φ approaches 1, the output gets a larger contribution from the previous term relative to the noise. This results in a "smoothing" or integration of the output, similar to a low-pass filter.
For an AR(2) process, the previous two terms and the noise term contribute to the output. If both φ_1 and φ_2 are positive, the output will resemble a low-pass filter, with the high-frequency part of the noise decreased. If φ_1 is positive while φ_2 is negative, then the process favors changes in sign between terms of the process. The output oscillates. This can be linked to edge detection or detection of change in direction.
== Example: An AR(1) process ==
An AR(1) process is given by:
{\displaystyle X_{t}=\varphi X_{t-1}+\varepsilon _{t}\,}
where ε_t is a white noise process with zero mean and constant variance σ_ε².
(Note: The subscript on φ_1 has been dropped.) The process is weak-sense stationary if |φ| < 1 since it is obtained as the output of a stable filter whose input is white noise. (If φ = 1 then the variance of X_t depends on the time t, so that the variance of the series diverges to infinity as t goes to infinity, and the process is therefore not weak-sense stationary.) Assuming |φ| < 1, the mean E(X_t) is identical for all values of t by the very definition of weak-sense stationarity. If the mean is denoted by μ, it follows from
E
(
X
t
)
=
φ
E
(
X
t
−
1
)
+
E
(
ε
t
)
,
{\displaystyle \operatorname {E} (X_{t})=\varphi \operatorname {E} (X_{t-1})+\operatorname {E} (\varepsilon _{t}),}
that
μ
=
φ
μ
+
0
,
{\displaystyle \mu =\varphi \mu +0,}
and hence
μ
=
0.
{\displaystyle \mu =0.}
The variance is
{\displaystyle {\textrm {var}}(X_{t})=\operatorname {E} (X_{t}^{2})-\mu ^{2}={\frac {\sigma _{\varepsilon }^{2}}{1-\varphi ^{2}}},}
where σ_ε is the standard deviation of ε_t. This can be shown by noting that
{\displaystyle {\textrm {var}}(X_{t})=\varphi ^{2}{\textrm {var}}(X_{t-1})+\sigma _{\varepsilon }^{2},}
and then by noticing that the quantity above is a stable fixed point of this relation.
The autocovariance is given by
{\displaystyle B_{n}=\operatorname {E} (X_{t+n}X_{t})-\mu ^{2}={\frac {\sigma _{\varepsilon }^{2}}{1-\varphi ^{2}}}\,\,\varphi ^{|n|}.}
It can be seen that the autocovariance function decays with a decay time (also called time constant) of τ = −1/ln(φ), which is approximately 1/(1 − φ) when φ is close to 1.
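A minimal sketch (not part of the article) of how these closed forms can be checked numerically: simulate a long AR(1) path and compare the sample variance and lag-n autocovariances with σ_ε²/(1 − φ²) and its φ^|n| decay. The parameter values below are illustrative.

```python
import numpy as np

# Simulate an AR(1) path and compare sample moments with the closed forms
# var = sigma_eps^2 / (1 - phi^2) and B_n = var * phi^|n|.
rng = np.random.default_rng(0)
phi, sigma_eps, T = 0.8, 1.0, 200_000

x = np.zeros(T)
eps = rng.normal(0.0, sigma_eps, size=T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + eps[t]

var_theory = sigma_eps**2 / (1 - phi**2)
print("variance:", x.var(), "theory:", var_theory)
for n in (1, 2, 5):
    # sample autocovariance at lag n
    b_n = np.mean((x[n:] - x.mean()) * (x[:-n] - x.mean()))
    print(f"lag {n}: sample {b_n:.3f}  theory {var_theory * phi**n:.3f}")
```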
The spectral density function is the Fourier transform of the autocovariance function. In discrete terms this will be the discrete-time Fourier transform:
{\displaystyle \Phi (\omega )={\frac {1}{\sqrt {2\pi }}}\,\sum _{n=-\infty }^{\infty }B_{n}e^{-i\omega n}={\frac {1}{\sqrt {2\pi }}}\,\left({\frac {\sigma _{\varepsilon }^{2}}{1+\varphi ^{2}-2\varphi \cos(\omega )}}\right).}
This expression is periodic due to the discrete nature of the X_j, which is manifested as the cosine term in the denominator. If we assume that the sampling time (Δt = 1) is much smaller than the decay time (τ), then we can use a continuum approximation to B_n:
{\displaystyle B(t)\approx {\frac {\sigma _{\varepsilon }^{2}}{1-\varphi ^{2}}}\,\,\varphi ^{|t|}}
which yields a Lorentzian profile for the spectral density:
{\displaystyle \Phi (\omega )={\frac {1}{\sqrt {2\pi }}}\,{\frac {\sigma _{\varepsilon }^{2}}{1-\varphi ^{2}}}\,{\frac {\gamma }{\pi (\gamma ^{2}+\omega ^{2})}}}
where γ = 1/τ is the angular frequency associated with the decay time τ.
An alternative expression for X_t can be derived by first substituting φX_{t−2} + ε_{t−1} for X_{t−1} in the defining equation. Continuing this process N times yields
{\displaystyle X_{t}=\varphi ^{N}X_{t-N}+\sum _{k=0}^{N-1}\varphi ^{k}\varepsilon _{t-k}.}
For N approaching infinity, φ^N will approach zero and:
{\displaystyle X_{t}=\sum _{k=0}^{\infty }\varphi ^{k}\varepsilon _{t-k}.}
It is seen that X_t is white noise convolved with the φ^k kernel plus the constant mean. If the white noise ε_t is a Gaussian process then X_t is also a Gaussian process. In other cases, the central limit theorem indicates that X_t will be approximately normally distributed when φ is close to one.
For ε_t = 0, the process X_t = φX_{t−1} will be a geometric progression (exponential growth or decay). In this case, the solution can be found analytically:
{\displaystyle X_{t}=a\varphi ^{t}}
whereby a is an unknown constant (initial condition).
=== Explicit mean/difference form of AR(1) process ===
The AR(1) model is the discrete-time analogue of the continuous Ornstein–Uhlenbeck process. It is therefore sometimes useful to understand the properties of the AR(1) model cast in an equivalent form. In this form, the AR(1) model, with process parameter θ ∈ ℝ, is given by
{\displaystyle X_{t+1}=X_{t}+(1-\theta )(\mu -X_{t})+\varepsilon _{t+1},}
where |θ| < 1, μ := E(X) is the model mean, and {ε_t} is a white-noise process with zero mean and constant variance σ².
By rewriting this as
{\displaystyle X_{t+1}=\theta X_{t}+(1-\theta )\mu +\varepsilon _{t+1}}
and then deriving (by induction)
{\displaystyle X_{t+n}=\theta ^{n}X_{t}+(1-\theta ^{n})\mu +\sum _{i=1}^{n}\theta ^{n-i}\epsilon _{t+i},}
one can show that
{\displaystyle \operatorname {E} (X_{t+n}|X_{t})=\mu \left[1-\theta ^{n}\right]+X_{t}\theta ^{n}}
and
{\displaystyle \operatorname {Var} (X_{t+n}|X_{t})=\sigma ^{2}{\frac {1-\theta ^{2n}}{1-\theta ^{2}}}.}
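A minimal sketch (parameter values are illustrative, not from the article) checking the conditional-mean and conditional-variance formulas above by Monte Carlo simulation of the mean/difference form:

```python
import numpy as np

# Check E(X_{t+n}|X_t) and Var(X_{t+n}|X_t) for
# X_{t+1} = theta*X_t + (1-theta)*mu + eps_{t+1} by simulation.
rng = np.random.default_rng(1)
theta, mu, sigma = 0.9, 5.0, 0.5
x0, n, n_paths = 2.0, 10, 100_000

x = np.full(n_paths, x0)
for _ in range(n):
    x = theta * x + (1 - theta) * mu + rng.normal(0.0, sigma, size=n_paths)

mean_theory = mu * (1 - theta**n) + x0 * theta**n
var_theory = sigma**2 * (1 - theta**(2 * n)) / (1 - theta**2)
print("mean:", x.mean(), "theory:", mean_theory)
print("var: ", x.var(), "theory:", var_theory)
```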
== Choosing the maximum lag ==
The partial autocorrelation of an AR(p) process equals zero at lags larger than p, so the appropriate maximum lag p is the one after which the partial autocorrelations are all zero.
== Calculation of the AR parameters ==
There are many ways to estimate the coefficients, such as the ordinary least squares procedure or method of moments (through Yule–Walker equations).
The AR(p) model is given by the equation
{\displaystyle X_{t}=\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\varepsilon _{t}.\,}
It is based on parameters φ_i, where i = 1, ..., p. There is a direct correspondence between these parameters and the covariance function of the process, and this correspondence can be inverted to determine the parameters from the autocorrelation function (which is itself obtained from the covariances). This is done using the Yule–Walker equations.
=== Yule–Walker equations ===
The Yule–Walker equations, named for Udny Yule and Gilbert Walker, are the following set of equations.
{\displaystyle \gamma _{m}=\sum _{k=1}^{p}\varphi _{k}\gamma _{m-k}+\sigma _{\varepsilon }^{2}\delta _{m,0},}
where m = 0, ..., p, yielding p + 1 equations. Here γ_m is the autocovariance function of X_t, σ_ε is the standard deviation of the input noise process, and δ_{m,0} is the Kronecker delta function.
Because the last part of an individual equation is non-zero only if m = 0, the set of equations can be solved by representing the equations for m > 0 in matrix form, thus getting the equation
{\displaystyle {\begin{bmatrix}\gamma _{1}\\\gamma _{2}\\\gamma _{3}\\\vdots \\\gamma _{p}\\\end{bmatrix}}={\begin{bmatrix}\gamma _{0}&\gamma _{-1}&\gamma _{-2}&\cdots \\\gamma _{1}&\gamma _{0}&\gamma _{-1}&\cdots \\\gamma _{2}&\gamma _{1}&\gamma _{0}&\cdots \\\vdots &\vdots &\vdots &\ddots \\\gamma _{p-1}&\gamma _{p-2}&\gamma _{p-3}&\cdots \\\end{bmatrix}}{\begin{bmatrix}\varphi _{1}\\\varphi _{2}\\\varphi _{3}\\\vdots \\\varphi _{p}\\\end{bmatrix}}}
which can be solved for all {φ_m; m = 1, 2, ..., p}.
The remaining equation for m = 0 is
{\displaystyle \gamma _{0}=\sum _{k=1}^{p}\varphi _{k}\gamma _{-k}+\sigma _{\varepsilon }^{2},}
which, once {φ_m; m = 1, 2, ..., p} are known, can be solved for σ_ε².
An alternative formulation is in terms of the autocorrelation function. The AR parameters are determined by the first p + 1 elements ρ(τ) of the autocorrelation function. The full autocorrelation function can then be derived by recursively calculating
{\displaystyle \rho (\tau )=\sum _{k=1}^{p}\varphi _{k}\rho (k-\tau )}
Examples for some low-order AR(p) processes:
p = 1
{\displaystyle \gamma _{1}=\varphi _{1}\gamma _{0}}
Hence
{\displaystyle \rho _{1}=\gamma _{1}/\gamma _{0}=\varphi _{1}}
p = 2
The Yule–Walker equations for an AR(2) process are
{\displaystyle \gamma _{1}=\varphi _{1}\gamma _{0}+\varphi _{2}\gamma _{-1}}
{\displaystyle \gamma _{2}=\varphi _{1}\gamma _{1}+\varphi _{2}\gamma _{0}}
Remember that {\displaystyle \gamma _{-k}=\gamma _{k}.}
Using the first equation yields
{\displaystyle \rho _{1}=\gamma _{1}/\gamma _{0}={\frac {\varphi _{1}}{1-\varphi _{2}}}}
Using the recursion formula yields
{\displaystyle \rho _{2}=\gamma _{2}/\gamma _{0}={\frac {\varphi _{1}^{2}-\varphi _{2}^{2}+\varphi _{2}}{1-\varphi _{2}}}}
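A minimal sketch (illustrative coefficients, not from the article) of the Yule–Walker idea in practice: estimate the AR(2) coefficients from sample autocovariances of a simulated series by solving the 2×2 system γ_1 = φ_1γ_0 + φ_2γ_1, γ_2 = φ_1γ_1 + φ_2γ_0.

```python
import numpy as np

rng = np.random.default_rng(2)
phi1, phi2, T = 0.5, -0.3, 100_000

# Simulate an AR(2) series with unit-variance white noise.
x = np.zeros(T)
for t in range(2, T):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + rng.normal()

xc = x - x.mean()
gamma = [np.mean(xc[k:] * xc[:T - k]) for k in range(3)]   # gamma_0, gamma_1, gamma_2

# Solve the Yule-Walker system for (phi_1, phi_2).
A = np.array([[gamma[0], gamma[1]],
              [gamma[1], gamma[0]]])
b = np.array([gamma[1], gamma[2]])
print("estimated:", np.linalg.solve(A, b), "true:", (phi1, phi2))
```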
=== Estimation of AR parameters ===
The above equations (the Yule–Walker equations) provide several routes to estimating the parameters of an AR(p) model, by replacing the theoretical covariances with estimated values. Some of these variants can be described as follows:
Estimation of autocovariances or autocorrelations. Here each of these terms is estimated separately, using conventional estimates. There are different ways of doing this and the choice between these affects the properties of the estimation scheme. For example, negative estimates of the variance can be produced by some choices.
Formulation as a least squares regression problem in which an ordinary least squares prediction problem is constructed, basing prediction of values of Xt on the p previous values of the same series. This can be thought of as a forward-prediction scheme. The normal equations for this problem can be seen to correspond to an approximation of the matrix form of the Yule–Walker equations in which each appearance of an autocovariance of the same lag is replaced by a slightly different estimate.
Formulation as an extended form of ordinary least squares prediction problem. Here two sets of prediction equations are combined into a single estimation scheme and a single set of normal equations. One set is the set of forward-prediction equations and the other is a corresponding set of backward prediction equations, relating to the backward representation of the AR model:
{\displaystyle X_{t}=\sum _{i=1}^{p}\varphi _{i}X_{t+i}+\varepsilon _{t}^{*}\,.}
Here predicted values of Xt would be based on the p future values of the same series. This way of estimating the AR parameters is due to John Parker Burg, and is called the Burg method: Burg and later authors called these particular estimates "maximum entropy estimates", but the reasoning behind this applies to the use of any set of estimated AR parameters. Compared to the estimation scheme using only the forward prediction equations, different estimates of the autocovariances are produced, and the estimates have different stability properties. Burg estimates are particularly associated with maximum entropy spectral estimation.
Other possible approaches to estimation include maximum likelihood estimation. Two distinct variants of maximum likelihood are available: in one (broadly equivalent to the forward prediction least squares scheme) the likelihood function considered is that corresponding to the conditional distribution of later values in the series given the initial p values in the series; in the second, the likelihood function considered is that corresponding to the unconditional joint distribution of all the values in the observed series. Substantial differences in the results of these approaches can occur if the observed series is short, or if the process is close to non-stationarity.
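A minimal sketch of the forward-prediction least-squares scheme described above (a rough stand-in for the conditional-likelihood approach, not the article's own code): regress X_t on its p previous values, with centred data and no intercept. Parameter values are illustrative.

```python
import numpy as np

def fit_ar_ols(x, p):
    """Least-squares estimates of (phi_1, ..., phi_p) from a centred series."""
    # Design matrix of lagged values: row t is (x_{t-1}, ..., x_{t-p}).
    X = np.column_stack([x[p - i - 1 : len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

rng = np.random.default_rng(3)
true = np.array([0.6, -0.2])
x = np.zeros(50_000)
for t in range(2, len(x)):
    x[t] = true @ x[t - 2 : t][::-1] + rng.normal()
print("OLS estimate:", fit_ar_ols(x, p=2), "true:", true)
```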
== Spectrum ==
The power spectral density (PSD) of an AR(p) process with noise variance
{\displaystyle \mathrm {Var} (Z_{t})=\sigma _{Z}^{2}} is
{\displaystyle S(f)={\frac {\sigma _{Z}^{2}}{|1-\sum _{k=1}^{p}\varphi _{k}e^{-i2\pi fk}|^{2}}}.}
=== AR(0) ===
For white noise (AR(0))
{\displaystyle S(f)=\sigma _{Z}^{2}.}
=== AR(1) ===
For AR(1)
{\displaystyle S(f)={\frac {\sigma _{Z}^{2}}{|1-\varphi _{1}e^{-2\pi if}|^{2}}}={\frac {\sigma _{Z}^{2}}{1+\varphi _{1}^{2}-2\varphi _{1}\cos 2\pi f}}}
If φ_1 > 0 there is a single spectral peak at f = 0, often referred to as red noise. As φ_1 becomes nearer 1, there is stronger power at low frequencies, i.e. larger time lags. This is then a low-pass filter; when applied to full-spectrum light, everything except the red light would be filtered out.
If φ_1 < 0 there is a minimum at f = 0, often referred to as blue noise. This similarly acts as a high-pass filter; everything except the blue light would be filtered out.
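A minimal sketch (not from the article) evaluating the AR(1) spectral density above for a positive and a negative coefficient, confirming where the peak sits in each case:

```python
import numpy as np

# S(f) = sigma_Z^2 / (1 + phi_1^2 - 2*phi_1*cos(2*pi*f)) on 0 <= f <= 1/2.
sigma2 = 1.0
f = np.linspace(0.0, 0.5, 501)

def psd_ar1(phi1):
    return sigma2 / (1.0 + phi1**2 - 2.0 * phi1 * np.cos(2.0 * np.pi * f))

for phi1 in (0.8, -0.8):
    S = psd_ar1(phi1)
    print(f"phi1 = {phi1:+.1f}: peak at f = {f[np.argmax(S)]:.2f}")
# Expected: peak at f = 0.00 ("red") for phi1 = +0.8 and at f = 0.50 ("blue") for phi1 = -0.8.
```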
=== AR(2) ===
The behavior of an AR(2) process is determined entirely by the roots of its characteristic equation, which is expressed in terms of the lag operator as:
{\displaystyle 1-\varphi _{1}B-\varphi _{2}B^{2}=0,}
or equivalently by the poles of its transfer function, which is defined in the Z domain by:
{\displaystyle H_{z}=(1-\varphi _{1}z^{-1}-\varphi _{2}z^{-2})^{-1}.}
It follows that the poles are values of z satisfying:
{\displaystyle 1-\varphi _{1}z^{-1}-\varphi _{2}z^{-2}=0,}
which yields:
{\displaystyle z_{1},z_{2}={\frac {1}{2\varphi _{2}}}\left(\varphi _{1}\pm {\sqrt {\varphi _{1}^{2}+4\varphi _{2}}}\right).}
z_1 and z_2 are the reciprocals of the characteristic roots, as well as the eigenvalues of the temporal update matrix:
{\displaystyle {\begin{bmatrix}\varphi _{1}&\varphi _{2}\\1&0\end{bmatrix}}}
AR(2) processes can be split into three groups depending on the characteristics of their roots/poles:
When φ_1² + 4φ_2 < 0, the process has a pair of complex-conjugate poles, creating a mid-frequency peak at:
{\displaystyle f^{*}={\frac {1}{2\pi }}\cos ^{-1}\left({\frac {\varphi _{1}}{2{\sqrt {-\varphi _{2}}}}}\right),}
with bandwidth about the peak inversely proportional to the moduli of the poles:
{\displaystyle |z_{1}|=|z_{2}|={\sqrt {-\varphi _{2}}}.}
The terms involving square roots are all real in the case of complex poles since they exist only when φ_2 < 0.
Otherwise the process has real roots, and:
When φ_1 > 0 it acts as a low-pass filter on the white noise with a spectral peak at f = 0.
When φ_1 < 0 it acts as a high-pass filter on the white noise with a spectral peak at f = 1/2.
The process is non-stationary when the poles are on or outside the unit circle, or equivalently when the characteristic roots are on or inside the unit circle.
The process is stable when the poles are strictly within the unit circle (roots strictly outside the unit circle), or equivalently when the coefficients are in the triangle −1 ≤ φ_2 ≤ 1 − |φ_1|.
The full PSD function can be expressed in real form as:
{\displaystyle S(f)={\frac {\sigma _{Z}^{2}}{1+\varphi _{1}^{2}+\varphi _{2}^{2}-2\varphi _{1}(1-\varphi _{2})\cos(2\pi f)-2\varphi _{2}\cos(4\pi f)}}}
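A minimal sketch (illustrative coefficients, not from the article) for the complex-pole case: compute the pole-angle frequency f* from the formula above and compare it with the location of the maximum of the real-form PSD.

```python
import numpy as np

phi1, phi2, sigma2 = 0.9, -0.8, 1.0
assert phi1**2 + 4 * phi2 < 0  # complex-conjugate poles

# Pole-angle frequency f* = arccos(phi_1 / (2*sqrt(-phi_2))) / (2*pi)
f_star = np.arccos(phi1 / (2.0 * np.sqrt(-phi2))) / (2.0 * np.pi)

f = np.linspace(0.0, 0.5, 5001)
S = sigma2 / (1 + phi1**2 + phi2**2
              - 2 * phi1 * (1 - phi2) * np.cos(2 * np.pi * f)
              - 2 * phi2 * np.cos(4 * np.pi * f))
print("pole-angle frequency f* =", f_star, " PSD argmax =", f[np.argmax(S)])
```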
== Implementations in statistics packages ==
R – the stats package includes ar function; the astsa package includes sarima function to fit various models including AR.
MATLAB – the Econometrics Toolbox and System Identification Toolbox include AR models.
MATLAB and Octave – the TSA toolbox contains several estimation functions for uni-variate, multivariate, and adaptive AR models.
PyMC3 – the Bayesian statistics and probabilistic programming framework supports AR models with p lags.
bayesloop – supports parameter inference and model selection for the AR-1 process with time-varying parameters.
Python – statsmodels.org hosts an AR model.
== Impulse response ==
The impulse response of a system is the change in an evolving variable in response to a change in the value of a shock term k periods earlier, as a function of k. Since the AR model is a special case of the vector autoregressive model, the computation of the impulse response for vector autoregressions applies here.
== n-step-ahead forecasting ==
Once the parameters of the autoregression
{\displaystyle X_{t}=\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\varepsilon _{t}\,}
have been estimated, the autoregression can be used to forecast an arbitrary number of periods into the future. First use t to refer to the first period for which data is not yet available; substitute the known preceding values Xt-i for i=1, ..., p into the autoregressive equation while setting the error term
ε_t
equal to zero (because we forecast Xt to equal its expected value, and the expected value of the unobserved error term is zero). The output of the autoregressive equation is the forecast for the first unobserved period. Next, use t to refer to the next period for which data is not yet available; again the autoregressive equation is used to make the forecast, with one difference: the value of X one period prior to the one now being forecast is not known, so its expected value—the predicted value arising from the previous forecasting step—is used instead. Then for future periods the same procedure is used, each time using one more forecast value on the right side of the predictive equation until, after p predictions, all p right-side values are predicted values from preceding steps.
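A minimal sketch (not from the article; the coefficient and data values are made up) of the recursive scheme just described: each forecast is fed back in as a lagged value and future error terms are set to zero.

```python
import numpy as np

def forecast_ar(phi, history, n_steps):
    """Iterative n-step-ahead forecasts for an AR(p) with coefficients phi[0..p-1]."""
    phi = np.asarray(phi)
    buf = list(history[-len(phi):])           # most recent p observations, oldest first
    out = []
    for _ in range(n_steps):
        x_hat = sum(phi[i] * buf[-1 - i] for i in range(len(phi)))  # error term set to zero
        out.append(x_hat)
        buf.append(x_hat)                      # forecast becomes a "lagged value"
    return np.array(out)

# Illustrative use:
print(forecast_ar(phi=[0.6, -0.2], history=[0.5, 1.2, 0.9], n_steps=5))
```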
There are four sources of uncertainty regarding predictions obtained in this manner: (1) uncertainty as to whether the autoregressive model is the correct model; (2) uncertainty about the accuracy of the forecasted values that are used as lagged values in the right side of the autoregressive equation; (3) uncertainty about the true values of the autoregressive coefficients; and (4) uncertainty about the value of the error term
ε_t
for the period being predicted. Each of the last three can be quantified and combined to give a confidence interval for the n-step-ahead predictions; the confidence interval will become wider as n increases because of the use of an increasing number of estimated values for the right-side variables.
== See also ==
Moving average model
Linear difference equation
Predictive analytics
Linear predictive coding
Resonance
Levinson recursion
Ornstein–Uhlenbeck process
Infinite impulse response
== Notes ==
== References ==
Mills, Terence C. (1990). Time Series Techniques for Economists. Cambridge University Press. ISBN 9780521343398.
Percival, Donald B.; Walden, Andrew T. (1993). Spectral Analysis for Physical Applications. Cambridge University Press. Bibcode:1993sapa.book.....P.
Pandit, Sudhakar M.; Wu, Shien-Ming (1983). Time Series and System Analysis with Applications. John Wiley & Sons.
== External links ==
AutoRegression Analysis (AR) by Paul Bourke
Econometrics lecture (topic: Autoregressive models) on YouTube by Mark Thoma | Wikipedia/Autoregressive_model |
In financial mathematics, the Ho-Lee model is a short-rate model widely used in the pricing of bond options, swaptions and other interest rate derivatives, and in modeling future interest rates. It was developed in 1986 by Thomas Ho and Sang Bin Lee.
Under this model, the short rate follows a normal process:
{\displaystyle dr_{t}=\theta _{t}\,dt+\sigma \,dW_{t}}
The model can be calibrated to market data by implying the form of θ_t from market prices, meaning that it can exactly return the price of bonds comprising the yield curve. This calibration, and subsequent valuation of bond options, swaptions and other interest rate derivatives, is typically performed via a binomial lattice based model. Closed form valuations of bonds, and "Black-like" bond option formulae are also available.
As the model generates a symmetric ("bell shaped") distribution of rates in the future, negative rates are possible. Further, it does not incorporate mean reversion. For both of these reasons, models such as Black–Derman–Toy (lognormal and mean reverting) and Hull–White (mean reverting with lognormal variant available) are often preferred. The Kalotay–Williams–Fabozzi model is a lognormal analogue to the Ho–Lee model, although it is less widely used than the latter two.
== References ==
Notes
Primary references
T.S.Y. Ho, S.B. Lee, Term structure movements and pricing interest rate contingent claims, Journal of Finance 41, 1986. doi:10.2307/2328161
John C. Hull, Options, futures, and other derivatives, 5th edition, Prentice Hall, ISBN 0-13-009056-5
== External links ==
Valuation and Hedging of Interest Rates Derivatives with the Ho-Lee Model, Markus Leippold and Zvi Wiener, Wharton School
Term Structure Lattice Models Archived 2012-01-23 at the Wayback Machine, Martin Haugh, Columbia University
Online tools
Binomial Tree – Excel implementation, thomasho.com | Wikipedia/Ho–Lee_model |
In mathematics, Itô's lemma or Itô's formula (also called the Itô–Döblin formula) is an identity used in Itô calculus to find the differential of a time-dependent function of a stochastic process. It serves as the stochastic calculus counterpart of the chain rule. It can be heuristically derived by forming the Taylor series expansion of the function up to its second derivatives and retaining terms up to first order in the time increment and second order in the Wiener process increment. The lemma is widely employed in mathematical finance, and its best known application is in the derivation of the Black–Scholes equation for option values.
This result was discovered by Japanese mathematician Kiyoshi Itô in 1951.
== Motivation ==
Suppose we are given the stochastic differential equation
d
X
t
=
μ
t
d
t
+
σ
t
d
B
t
,
{\displaystyle dX_{t}=\mu _{t}\ dt+\sigma _{t}\ dB_{t},}
where Bt is a Wiener process and the functions
μ
t
,
σ
t
{\displaystyle \mu _{t},\sigma _{t}}
are deterministic (not stochastic) functions of time. In general, it's not possible to write a solution
X
t
{\displaystyle X_{t}}
directly in terms of
B
t
.
{\displaystyle B_{t}.}
However, we can formally write an integral solution
X
t
=
∫
0
t
μ
s
d
s
+
∫
0
t
σ
s
d
B
s
.
{\displaystyle X_{t}=\int _{0}^{t}\mu _{s}\ ds+\int _{0}^{t}\sigma _{s}\ dB_{s}.}
This expression lets us easily read off the mean and variance of
X
t
{\displaystyle X_{t}}
(which has no higher moments). First, notice that every
d
B
t
{\displaystyle \mathrm {d} B_{t}}
individually has mean 0, so the expected value of
X
t
{\displaystyle X_{t}}
is simply the integral of the drift function:
E
[
X
t
]
=
∫
0
t
μ
s
d
s
.
{\displaystyle \mathrm {E} [X_{t}]=\int _{0}^{t}\mu _{s}\ ds.}
Similarly, because the
d
B
{\displaystyle dB}
terms have variance 1 and no correlation with one another, the variance of
X
t
{\displaystyle X_{t}}
is simply the integral of the variance of each infinitesimal step in the random walk:
V
a
r
[
X
t
]
=
∫
0
t
σ
s
2
d
s
.
{\displaystyle \mathrm {Var} [X_{t}]=\int _{0}^{t}\sigma _{s}^{2}\ ds.}
However, sometimes we are faced with a stochastic differential equation for a more complex process
Y
t
,
{\displaystyle Y_{t},}
in which the process appears on both sides of the differential equation. That is, say
d
Y
t
=
a
1
(
Y
t
,
t
)
d
t
+
a
2
(
Y
t
,
t
)
d
B
t
,
{\displaystyle dY_{t}=a_{1}(Y_{t},t)\ dt+a_{2}(Y_{t},t)\ dB_{t},}
for some functions
a
1
{\displaystyle a_{1}}
and
a
2
.
{\displaystyle a_{2}.}
In this case, we cannot immediately write a formal solution as we did for the simpler case above. Instead, we hope to write the process
Y
t
{\displaystyle Y_{t}}
as a function of a simpler process
X
t
{\displaystyle X_{t}}
taking the form above. That is, we want to identify three functions
f
(
t
,
x
)
,
μ
t
,
{\displaystyle f(t,x),\mu _{t},}
and
σ
t
,
{\displaystyle \sigma _{t},}
such that
Y
t
=
f
(
t
,
X
t
)
{\displaystyle Y_{t}=f(t,X_{t})}
and
d
X
t
=
μ
t
d
t
+
σ
t
d
B
t
.
{\displaystyle dX_{t}=\mu _{t}\ dt+\sigma _{t}\ dB_{t}.}
In practice, Ito's lemma is used in order to find this transformation. Finally, once we have transformed the problem into the simpler type of problem, we can determine the mean and higher moments of the process.
== Derivation ==
We derive Itô's lemma by expanding a Taylor series and applying the rules of stochastic calculus.
Suppose
X
t
{\displaystyle X_{t}}
is an Itô drift-diffusion process that satisfies the stochastic differential equation
d
X
t
=
μ
t
d
t
+
σ
t
d
B
t
,
{\displaystyle dX_{t}=\mu _{t}\,dt+\sigma _{t}\,dB_{t},}
where Bt is a Wiener process.
If f(t,x) is a twice-differentiable scalar function, its expansion in a Taylor series is
Δ
f
(
t
)
d
t
d
t
=
f
(
t
+
d
t
,
x
)
−
f
(
t
,
x
)
=
∂
f
∂
t
d
t
+
1
2
∂
2
f
∂
t
2
(
d
t
)
2
+
⋯
Δ
f
(
x
)
d
x
d
x
=
f
(
t
,
x
+
d
x
)
−
f
(
t
,
x
)
=
∂
f
∂
x
d
x
+
1
2
∂
2
f
∂
x
2
(
d
x
)
2
+
⋯
{\displaystyle {\begin{aligned}{\frac {\Delta f(t)}{dt}}dt&=f(t+dt,x)-f(t,x)\\&={\frac {\partial f}{\partial t}}\,dt+{\frac {1}{2}}{\frac {\partial ^{2}f}{\partial t^{2}}}\,(dt)^{2}+\cdots \\[1ex]{\frac {\Delta f(x)}{dx}}dx&=f(t,x+dx)-f(t,x)\\&={\frac {\partial f}{\partial x}}\,dx+{\frac {1}{2}}{\frac {\partial ^{2}f}{\partial x^{2}}}\,(dx)^{2}+\cdots \end{aligned}}}
Then use the total derivative and the definition of the partial derivative
f
y
=
lim
d
y
→
0
Δ
f
(
y
)
d
y
{\displaystyle f_{y}=\lim _{dy\to 0}{\frac {\Delta f(y)}{dy}}}
:
d
f
=
f
t
d
t
+
f
x
d
x
=
lim
d
x
→
0
d
t
→
0
∂
f
∂
t
d
t
+
∂
f
∂
x
d
x
+
1
2
(
∂
2
f
∂
t
2
(
d
t
)
2
+
∂
2
f
∂
x
2
(
d
x
)
2
)
+
⋯
.
{\displaystyle {\begin{aligned}df&=f_{t}dt+f_{x}dx\\[1ex]&=\lim _{dx\to 0 \atop dt\to 0}{\frac {\partial f}{\partial t}}\,dt+{\frac {\partial f}{\partial x}}\,dx+{\frac {1}{2}}\left({\frac {\partial ^{2}f}{\partial t^{2}}}\,(dt)^{2}+{\frac {\partial ^{2}f}{\partial x^{2}}}\,(dx)^{2}\right)+\cdots .\end{aligned}}}
Substituting
x
=
X
t
{\displaystyle x=X_{t}}
and therefore
d
x
=
d
X
t
=
μ
t
d
t
+
σ
t
d
B
t
{\displaystyle dx=dX_{t}=\mu _{t}\,dt+\sigma _{t}\,dB_{t}}
, we get
d
f
=
lim
d
B
t
→
0
d
t
→
0
∂
f
∂
t
d
t
+
∂
f
∂
x
(
μ
t
d
t
+
σ
t
d
B
t
)
+
1
2
[
∂
2
f
∂
t
2
(
d
t
)
2
+
∂
2
f
∂
x
2
(
μ
t
2
(
d
t
)
2
+
2
μ
t
σ
t
d
t
d
B
t
+
σ
t
2
(
d
B
t
)
2
)
]
+
⋯
.
{\displaystyle {\begin{aligned}df=\lim _{dB_{t}\to 0 \atop dt\to 0}\;&{\frac {\partial f}{\partial t}}\,dt+{\frac {\partial f}{\partial x}}\left(\mu _{t}\,dt+\sigma _{t}\,dB_{t}\right)\\&+{\frac {1}{2}}\left[{\frac {\partial ^{2}f}{\partial t^{2}}}\,{\left(dt\right)}^{2}+{\frac {\partial ^{2}f}{\partial x^{2}}}\left(\mu _{t}^{2}\,{\left(dt\right)}^{2}+2\mu _{t}\sigma _{t}\,dt\,dB_{t}+\sigma _{t}^{2}\,{\left(dB_{t}\right)}^{2}\right)\right]+\cdots .\end{aligned}}}
In the limit
d
t
→
0
{\displaystyle dt\to 0}
, the terms
(
d
t
)
2
{\displaystyle (dt)^{2}}
and
d
t
d
B
t
{\displaystyle dt\,dB_{t}}
tend to zero faster than
d
t
{\displaystyle dt}
.
(
d
B
t
)
2
{\displaystyle (dB_{t})^{2}}
is
O
(
d
t
)
{\displaystyle O(dt)}
(due to the quadratic variation of a Wiener process which says
B
t
2
=
O
(
t
)
{\displaystyle B_{t}^{2}=O(t)}
), so setting
(
d
t
)
2
,
d
t
d
B
t
{\displaystyle (dt)^{2},dt\,dB_{t}}
and
(
d
x
)
3
{\displaystyle (dx)^{3}}
terms to zero and substituting
d
t
{\displaystyle dt}
for
(
d
B
t
)
2
{\displaystyle (dB_{t})^{2}}
, and then collecting the
d
t
{\displaystyle dt}
terms, we obtain
d
f
=
lim
d
t
→
0
(
∂
f
∂
t
+
μ
t
∂
f
∂
x
+
σ
t
2
2
∂
2
f
∂
x
2
)
d
t
+
σ
t
∂
f
∂
x
d
B
t
{\displaystyle df=\lim _{dt\to 0}\left({\frac {\partial f}{\partial t}}+\mu _{t}{\frac {\partial f}{\partial x}}+{\frac {\sigma _{t}^{2}}{2}}{\frac {\partial ^{2}f}{\partial x^{2}}}\right)dt+\sigma _{t}{\frac {\partial f}{\partial x}}\,dB_{t}}
as required.
Alternatively,
d
f
=
lim
d
t
→
0
(
∂
f
∂
t
+
σ
t
2
2
∂
2
f
∂
x
2
)
d
t
+
∂
f
∂
x
d
X
t
{\displaystyle df=\lim _{dt\to 0}\left({\frac {\partial f}{\partial t}}+{\frac {\sigma _{t}^{2}}{2}}{\frac {\partial ^{2}f}{\partial x^{2}}}\right)dt+{\frac {\partial f}{\partial x}}\,dX_{t}}
== Geometric intuition ==
Suppose we know that
X
t
,
X
t
+
d
t
{\displaystyle X_{t},X_{t+dt}}
are two jointly-Gaussian distributed random variables, and
f
{\displaystyle f}
is nonlinear but has continuous second derivative, then in general, neither of
f
(
X
t
)
,
f
(
X
t
+
d
t
)
{\displaystyle f(X_{t}),f(X_{t+dt})}
is Gaussian, and their joint distribution is also not Gaussian. However, since
X
t
+
d
t
∣
X
t
{\displaystyle X_{t+dt}\mid X_{t}}
is Gaussian, we might still find
f
(
X
t
+
d
t
)
∣
f
(
X
t
)
{\displaystyle f(X_{t+dt})\mid f(X_{t})}
is Gaussian. This is not true when
d
t
{\displaystyle dt}
is finite, but when
d
t
{\displaystyle dt}
becomes infinitesimal, this becomes true.
The key idea is that
X
t
+
d
t
=
X
t
+
μ
t
d
t
+
d
W
t
{\displaystyle X_{t+dt}=X_{t}+\mu _{t}\,dt+dW_{t}}
has a deterministic part and a noisy part. When
f
{\displaystyle f}
is nonlinear, the noisy part has a deterministic contribution. If
f
{\displaystyle f}
is convex, then the deterministic contribution is positive (by Jensen's inequality).
To find out how large the contribution is, we write
X
t
+
d
t
=
X
t
+
μ
t
d
t
+
σ
t
d
t
z
{\displaystyle X_{t+dt}=X_{t}+\mu _{t}\,dt+\sigma _{t}{\sqrt {dt}}\,z}
, where
z
{\displaystyle z}
is a standard Gaussian, then perform Taylor expansion.
f
(
X
t
+
d
t
)
=
f
(
X
t
)
+
f
′
(
X
t
)
μ
t
d
t
+
f
′
(
X
t
)
σ
t
d
t
z
+
1
2
f
″
(
X
t
)
(
σ
t
2
z
2
d
t
+
2
μ
t
σ
t
z
d
t
3
/
2
+
μ
t
2
d
t
2
)
+
o
(
d
t
)
=
[
f
(
X
t
)
+
f
′
(
X
t
)
μ
t
d
t
+
1
2
f
″
(
X
t
)
σ
t
2
d
t
+
o
(
d
t
)
]
+
[
f
′
(
X
t
)
σ
t
d
t
z
+
1
2
f
″
(
X
t
)
σ
t
2
(
z
2
−
1
)
d
t
+
o
(
d
t
)
]
{\displaystyle {\begin{aligned}f(X_{t+dt})={}&f(X_{t})+f'(X_{t})\mu _{t}\,dt+f'(X_{t})\sigma _{t}{\sqrt {dt}}\,z\\[1ex]&+{\frac {1}{2}}f''(X_{t})\left(\sigma _{t}^{2}z^{2}\,dt+2\mu _{t}\sigma _{t}z\,dt^{3/2}+\mu _{t}^{2}dt^{2}\right)+o(dt)\\[2ex]={}&\left[f(X_{t})+f'(X_{t})\mu _{t}\,dt+{\frac {1}{2}}f''(X_{t})\sigma _{t}^{2}\,dt+o(dt)\right]\\[1ex]&+\left[f'(X_{t})\sigma _{t}{\sqrt {dt}}\,z+{\frac {1}{2}}f''(X_{t})\sigma _{t}^{2}\left(z^{2}-1\right)\,dt+o(dt)\right]\end{aligned}}}
We have split it into two parts, a deterministic part, and a random part with mean zero. The random part is non-Gaussian, but the non-Gaussian parts decay faster than the Gaussian part, and at the
d
t
→
0
{\displaystyle dt\to 0}
limit, only the Gaussian part remains. The deterministic part has the expected
f
(
X
t
)
+
f
′
(
X
t
)
μ
t
d
t
{\displaystyle f(X_{t})+f'(X_{t})\mu _{t}\,dt}
, but also a part contributed by the convexity:
1
2
f
″
(
X
t
)
σ
t
2
d
t
{\textstyle {\frac {1}{2}}f''(X_{t})\sigma _{t}^{2}\,dt}
.
To understand why there should be a contribution due to convexity, consider the simplest case of geometric Brownian walk (of the stock market):
S
t
+
d
t
=
S
t
(
1
+
d
B
t
)
{\displaystyle S_{t+dt}=S_{t}(1+dB_{t})}
. In other words,
d
(
ln
S
t
)
=
d
B
t
{\displaystyle d(\ln S_{t})=dB_{t}}
. Let
X
t
=
ln
S
t
{\displaystyle X_{t}=\ln S_{t}}
, then
S
t
=
e
X
t
{\displaystyle S_{t}=e^{X_{t}}}
, and
X
t
{\displaystyle X_{t}}
is a Brownian walk. However, although the expectation of
X
t
{\displaystyle X_{t}}
remains constant, the expectation of
S
t
{\displaystyle S_{t}}
grows. Intuitively it is because the downside is limited at zero, but the upside is unlimited. That is, while
X
t
{\displaystyle X_{t}}
is normally distributed,
S
t
{\displaystyle S_{t}}
is log-normally distributed.
== Mathematical formulation of Itô's lemma ==
In the following subsections we discuss versions of Itô's lemma for different types of stochastic processes.
=== Itô drift-diffusion processes (due to: Kunita–Watanabe) ===
In its simplest form, Itô's lemma states the following: for an Itô drift-diffusion process
d
X
t
=
μ
t
d
t
+
σ
t
d
B
t
{\displaystyle dX_{t}=\mu _{t}\,dt+\sigma _{t}\,dB_{t}}
and any twice differentiable scalar function f(t,x) of two real variables t and x, one has
d
f
(
t
,
X
t
)
=
(
∂
f
∂
t
+
μ
t
∂
f
∂
x
+
σ
t
2
2
∂
2
f
∂
x
2
)
d
t
+
σ
t
∂
f
∂
x
d
B
t
.
{\displaystyle df(t,X_{t})=\left({\frac {\partial f}{\partial t}}+\mu _{t}{\frac {\partial f}{\partial x}}+{\frac {\sigma _{t}^{2}}{2}}{\frac {\partial ^{2}f}{\partial x^{2}}}\right)dt+\sigma _{t}{\frac {\partial f}{\partial x}}\,dB_{t}.}
This immediately implies that f(t,Xt) is itself an Itô drift-diffusion process.
In higher dimensions, if
X
t
=
(
X
t
1
,
X
t
2
,
…
,
X
t
n
)
T
{\displaystyle \mathbf {X} _{t}=(X_{t}^{1},X_{t}^{2},\ldots ,X_{t}^{n})^{T}}
is a vector of Itô processes such that
d
X
t
=
μ
t
d
t
+
G
t
d
B
t
{\displaystyle d\mathbf {X} _{t}={\boldsymbol {\mu }}_{t}\,dt+\mathbf {G} _{t}\,d\mathbf {B} _{t}}
for a vector
μ
t
{\displaystyle {\boldsymbol {\mu }}_{t}}
and matrix
G
t
{\displaystyle \mathbf {G} _{t}}
, Itô's lemma then states that
d
f
(
t
,
X
t
)
=
∂
f
∂
t
d
t
+
(
∇
X
f
)
T
d
X
t
+
1
2
(
d
X
t
)
T
(
H
X
f
)
d
X
t
,
=
{
∂
f
∂
t
+
(
∇
X
f
)
T
μ
t
+
1
2
Tr
[
G
t
T
(
H
X
f
)
G
t
]
}
d
t
+
(
∇
X
f
)
T
G
t
d
B
t
{\displaystyle {\begin{aligned}df(t,\mathbf {X} _{t})&={\frac {\partial f}{\partial t}}\,dt+\left(\nabla _{\mathbf {X} }f\right)^{T}\,d\mathbf {X} _{t}+{\frac {1}{2}}\left(d\mathbf {X} _{t}\right)^{T}\left(H_{\mathbf {X} }f\right)\,d\mathbf {X} _{t},\\[4pt]&=\left\{{\frac {\partial f}{\partial t}}+\left(\nabla _{\mathbf {X} }f\right)^{T}{\boldsymbol {\mu }}_{t}+{\frac {1}{2}}\operatorname {Tr} \left[\mathbf {G} _{t}^{T}\left(H_{\mathbf {X} }f\right)\mathbf {G} _{t}\right]\right\}\,dt+\left(\nabla _{\mathbf {X} }f\right)^{T}\mathbf {G} _{t}\,d\mathbf {B} _{t}\end{aligned}}}
where
∇
X
f
{\displaystyle \nabla _{\mathbf {X} }f}
is the gradient of f w.r.t. X, HX f is the Hessian matrix of f w.r.t. X, and Tr is the trace operator.
=== Poisson jump processes ===
We may also define functions on discontinuous stochastic processes.
Let h be the jump intensity. The Poisson process model for jumps is that the probability of one jump in the interval [t, t + Δt] is hΔt plus higher order terms. h could be a constant, a deterministic function of time, or a stochastic process. The survival probability ps(t) is the probability that no jump has occurred in the interval [0, t]. The change in the survival probability is
d
p
s
(
t
)
=
−
p
s
(
t
)
h
(
t
)
d
t
.
{\displaystyle dp_{s}(t)=-p_{s}(t)h(t)\,dt.}
So
p
s
(
t
)
=
exp
(
−
∫
0
t
h
(
u
)
d
u
)
.
{\displaystyle p_{s}(t)=\exp \left(-\int _{0}^{t}h(u)\,du\right).}
Let S(t) be a discontinuous stochastic process. Write
S
(
t
−
)
{\displaystyle S(t^{-})}
for the value of S as we approach t from the left. Write
d
j
S
(
t
)
{\displaystyle d_{j}S(t)}
for the non-infinitesimal change in S(t) as a result of a jump. Then
d
j
S
(
t
)
=
lim
Δ
t
→
0
[
S
(
t
+
Δ
t
)
−
S
(
t
−
)
]
{\displaystyle d_{j}S(t)=\lim _{\Delta t\to 0}\left[S(t+\Delta t)-S(t^{-})\right]}
Let z be the magnitude of the jump and let
η
(
S
(
t
−
)
,
z
)
{\displaystyle \eta (S(t^{-}),z)}
be the distribution of z. The expected magnitude of the jump is
E
[
d
j
S
(
t
)
]
=
h
(
S
(
t
−
)
)
d
t
∫
z
z
η
(
S
(
t
−
)
,
z
)
d
z
.
{\displaystyle \operatorname {E} [d_{j}S(t)]=h(S(t^{-}))\,dt\int _{z}z\eta (S(t^{-}),z)\,dz.}
Define
d
J
S
(
t
)
{\displaystyle dJ_{S}(t)}
, a compensated process and martingale, as
d
J
S
(
t
)
=
d
j
S
(
t
)
−
E
[
d
j
S
(
t
)
]
=
S
(
t
)
−
S
(
t
−
)
−
(
h
(
S
(
t
−
)
)
∫
z
z
η
(
S
(
t
−
)
,
z
)
d
z
)
d
t
.
{\displaystyle {\begin{aligned}dJ_{S}(t)&=d_{j}S(t)-\operatorname {E} [d_{j}S(t)]\\[1ex]&=S(t)-S(t^{-})-\left(h(S(t^{-}))\int _{z}z\eta \left(S(t^{-}),z\right)\,dz\right)\,dt.\end{aligned}}}
Then
d
j
S
(
t
)
=
E
[
d
j
S
(
t
)
]
+
d
J
S
(
t
)
=
h
(
S
(
t
−
)
)
(
∫
z
z
η
(
S
(
t
−
)
,
z
)
d
z
)
d
t
+
d
J
S
(
t
)
.
{\displaystyle {\begin{aligned}d_{j}S(t)&=E[d_{j}S(t)]+dJ_{S}(t)\\[1ex]&=h(S(t^{-}))\left(\int _{z}z\eta (S(t^{-}),z)\,dz\right)dt+dJ_{S}(t).\end{aligned}}}
Consider a function
g
(
S
(
t
)
,
t
)
{\displaystyle g(S(t),t)}
of the jump process dS(t). If S(t) jumps by Δs then g(t) jumps by Δg. Δg is drawn from distribution
η
g
(
)
{\displaystyle \eta _{g}()}
which may depend on
g
(
t
−
)
{\displaystyle g(t^{-})}
, dg and
S
(
t
−
)
{\displaystyle S(t^{-})}
. The jump part of
g
{\displaystyle g}
is
g
(
t
)
−
g
(
t
−
)
=
h
(
t
)
d
t
∫
Δ
g
Δ
g
η
g
(
⋅
)
d
Δ
g
+
d
J
g
(
t
)
.
{\displaystyle g(t)-g(t^{-})=h(t)\,dt\int _{\Delta g}\,\Delta g\eta _{g}(\cdot )\,d\Delta g+dJ_{g}(t).}
If
S
{\displaystyle S}
contains drift, diffusion and jump parts, then Itô's Lemma for
g
(
S
(
t
)
,
t
)
{\displaystyle g(S(t),t)}
is
d
g
(
t
)
=
(
∂
g
∂
t
+
μ
∂
g
∂
S
+
σ
2
2
∂
2
g
∂
S
2
+
h
(
t
)
∫
Δ
g
(
Δ
g
η
g
(
⋅
)
d
Δ
g
)
)
d
t
+
∂
g
∂
S
σ
d
W
(
t
)
+
d
J
g
(
t
)
.
{\displaystyle {\begin{aligned}dg(t)={}&\left({\frac {\partial g}{\partial t}}+\mu {\frac {\partial g}{\partial S}}+{\frac {\sigma ^{2}}{2}}{\frac {\partial ^{2}g}{\partial S^{2}}}+h(t)\int _{\Delta g}\left(\Delta g\eta _{g}(\cdot )\,d{\Delta }g\right)\,\right)dt\\&+{\frac {\partial g}{\partial S}}\sigma \,dW(t)+dJ_{g}(t).\end{aligned}}}
Itô's lemma for a process which is the sum of a drift-diffusion process and a jump process is just the sum of the Itô's lemma for the individual parts.
=== Discontinuous semimartingales ===
Itô's lemma can also be applied to general d-dimensional semimartingales, which need not be continuous. In general, a semimartingale is a càdlàg process, and an additional jump term needs to be added to the Itô's formula.
For any cadlag process Yt, the left limit in t is denoted by Yt−, which is a left-continuous process. The jumps are written as ΔYt = Yt − Yt−. Then, Itô's lemma states that if X = (X1, X2, ..., Xd) is a d-dimensional semimartingale and f is a twice continuously differentiable real valued function on Rd then f(X) is a semimartingale, and
f
(
X
t
)
=
f
(
X
0
)
+
∑
i
=
1
d
∫
0
t
f
i
(
X
s
−
)
d
X
s
i
+
1
2
∑
i
,
j
=
1
d
∫
0
t
f
i
,
j
(
X
s
−
)
d
[
X
i
,
X
j
]
s
+
∑
s
≤
t
(
Δ
f
(
X
s
)
−
∑
i
=
1
d
f
i
(
X
s
−
)
Δ
X
s
i
−
1
2
∑
i
,
j
=
1
d
f
i
,
j
(
X
s
−
)
Δ
X
s
i
Δ
X
s
j
)
.
{\displaystyle {\begin{aligned}f(X_{t})=f(X_{0})&+\sum _{i=1}^{d}\int _{0}^{t}f_{i}(X_{s-})\,dX_{s}^{i}+{\frac {1}{2}}\sum _{i,j=1}^{d}\int _{0}^{t}f_{i,j}(X_{s-})\,d[X^{i},X^{j}]_{s}\\&+\sum _{s\leq t}\left(\Delta f(X_{s})-\sum _{i=1}^{d}f_{i}(X_{s-})\,\Delta X_{s}^{i}-{\frac {1}{2}}\sum _{i,j=1}^{d}f_{i,j}(X_{s-})\,\Delta X_{s}^{i}\,\Delta X_{s}^{j}\right).\end{aligned}}}
This differs from the formula for continuous semi-martingales by the last term summing over the jumps of X, which ensures that the jump of the right hand side at time t is Δf(Xt).
== Examples ==
=== Geometric Brownian motion ===
A process S is said to follow a geometric Brownian motion with constant volatility σ and constant drift μ if it satisfies the stochastic differential equation
{\displaystyle dS_{t}=\sigma S_{t}\,dB_{t}+\mu S_{t}\,dt,}
for a Brownian motion B. Applying Itô's lemma with f(S_t) = log(S_t) gives
{\displaystyle {\begin{aligned}df&=f'(S_{t})\,dS_{t}+{\frac {1}{2}}f''(S_{t})\,{\left(dS_{t}\right)}^{2}\\[4pt]&={\frac {1}{S_{t}}}\,dS_{t}+{\frac {1}{2}}\left(-S_{t}^{-2}\right)\left(S_{t}^{2}\sigma ^{2}\,dt\right)\\[4pt]&={\frac {1}{S_{t}}}\left(\sigma S_{t}\,dB_{t}+\mu S_{t}\,dt\right)-{\frac {\sigma ^{2}}{2}}\,dt\\[4pt]&=\sigma \,dB_{t}+\left(\mu -{\tfrac {\sigma ^{2}}{2}}\right)dt.\end{aligned}}}
It follows that
{\displaystyle \log(S_{t})=\log(S_{0})+\sigma B_{t}+\left(\mu -{\tfrac {\sigma ^{2}}{2}}\right)t,}
exponentiating gives the expression for S,
{\displaystyle S_{t}=S_{0}\exp \left(\sigma B_{t}+\left(\mu -{\tfrac {\sigma ^{2}}{2}}\right)t\right).}
The correction term of − σ2/2 corresponds to the difference between the median and mean of the log-normal distribution, or equivalently for this distribution, the geometric mean and arithmetic mean, with the median (geometric mean) being lower. This is due to the AM–GM inequality, and corresponds to the logarithm being concave (or convex upwards), so the correction term can accordingly be interpreted as a convexity correction. This is an infinitesimal version of the fact that the annualized return is less than the average return, with the difference proportional to the variance. See geometric moments of the log-normal distribution for further discussion.
The same factor of σ2/2 appears in the d1 and d2 auxiliary variables of the Black–Scholes formula, and can be interpreted as a consequence of Itô's lemma.
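A minimal sketch (not from the article; parameter values are illustrative) of the convexity correction in action: simulate S_t from the closed form above and check that the mean of log(S_t) carries the drift (μ − σ²/2)t while E[S_t] grows like S_0 e^{μt}.

```python
import numpy as np

rng = np.random.default_rng(5)
S0, mu, sigma, t, n = 1.0, 0.05, 0.4, 2.0, 1_000_000

B_t = np.sqrt(t) * rng.normal(size=n)                      # Brownian motion at time t
S_t = S0 * np.exp(sigma * B_t + (mu - 0.5 * sigma**2) * t)  # geometric Brownian motion

print("mean log S_t:", np.log(S_t).mean(),
      "theory:", np.log(S0) + (mu - 0.5 * sigma**2) * t)
print("mean S_t:    ", S_t.mean(), "theory:", S0 * np.exp(mu * t))
```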
=== Doléans-Dade exponential ===
The Doléans-Dade exponential (or stochastic exponential) of a continuous semimartingale X can be defined as the solution to the SDE dY = Y dX with initial condition Y0 = 1. It is sometimes denoted by Ɛ(X).
Applying Itô's lemma with f(Y) = log(Y) gives
d
log
(
Y
)
=
1
Y
d
Y
−
1
2
Y
2
d
[
Y
]
=
d
X
−
1
2
d
[
X
]
.
{\displaystyle {\begin{aligned}d\log(Y)&={\frac {1}{Y}}\,dY-{\frac {1}{2Y^{2}}}\,d[Y]\\[6pt]&=dX-{\tfrac {1}{2}}\,d[X].\end{aligned}}}
Exponentiating gives the solution
Y
t
=
exp
(
X
t
−
X
0
−
1
2
[
X
]
t
)
.
{\displaystyle Y_{t}=\exp \left(X_{t}-X_{0}-{\tfrac {1}{2}}[X]_{t}\right).}
=== Black–Scholes formula ===
Itô's lemma can be used to derive the Black–Scholes equation for an option. Suppose a stock price follows a geometric Brownian motion given by the stochastic differential equation dS = S(σ dB + μ dt). Then, if the value of an option at time t is f(t, St), Itô's lemma gives
d
f
(
t
,
S
t
)
=
(
∂
f
∂
t
+
1
2
(
S
t
σ
)
2
∂
2
f
∂
S
2
)
d
t
+
∂
f
∂
S
d
S
t
.
{\displaystyle df(t,S_{t})=\left({\frac {\partial f}{\partial t}}+{\frac {1}{2}}\left(S_{t}\sigma \right)^{2}{\frac {\partial ^{2}f}{\partial S^{2}}}\right)\,dt+{\frac {\partial f}{\partial S}}\,dS_{t}.}
The term ∂f/∂S dS represents the change in value in time dt of the trading strategy consisting of holding an amount ∂ f/∂S of the stock. If this trading strategy is followed, and any cash held is assumed to grow at the risk free rate r, then the total value V of this portfolio satisfies the SDE
d
V
t
=
r
(
V
t
−
∂
f
∂
S
S
t
)
d
t
+
∂
f
∂
S
d
S
t
.
{\displaystyle dV_{t}=r\left(V_{t}-{\frac {\partial f}{\partial S}}S_{t}\right)\,dt+{\frac {\partial f}{\partial S}}\,dS_{t}.}
This strategy replicates the option if V = f(t,S). Combining these equations gives the celebrated Black–Scholes equation
∂
f
∂
t
+
σ
2
S
2
2
∂
2
f
∂
S
2
+
r
S
∂
f
∂
S
−
r
f
=
0.
{\displaystyle {\frac {\partial f}{\partial t}}+{\frac {\sigma ^{2}S^{2}}{2}}{\frac {\partial ^{2}f}{\partial S^{2}}}+rS{\frac {\partial f}{\partial S}}-rf=0.}
=== Product rule for Itô processes ===
Let
X
t
{\displaystyle \mathbf {X} _{t}}
be a two-dimensional Ito process with SDE:
d
X
t
=
d
(
X
t
1
X
t
2
)
=
(
μ
t
1
μ
t
2
)
d
t
+
(
σ
t
1
σ
t
2
)
d
B
t
{\displaystyle d\mathbf {X} _{t}=d{\begin{pmatrix}X_{t}^{1}\\X_{t}^{2}\end{pmatrix}}={\begin{pmatrix}\mu _{t}^{1}\\\mu _{t}^{2}\end{pmatrix}}dt+{\begin{pmatrix}\sigma _{t}^{1}\\\sigma _{t}^{2}\end{pmatrix}}\,dB_{t}}
Then we can use the multi-dimensional form of Ito's lemma to find an expression for
d
(
X
t
1
X
t
2
)
{\displaystyle d(X_{t}^{1}X_{t}^{2})}
.
We have
μ
t
=
(
μ
t
1
μ
t
2
)
{\displaystyle \mu _{t}={\begin{pmatrix}\mu _{t}^{1}\\\mu _{t}^{2}\end{pmatrix}}}
and
G
=
(
σ
t
1
σ
t
2
)
{\displaystyle \mathbf {G} ={\begin{pmatrix}\sigma _{t}^{1}\\\sigma _{t}^{2}\end{pmatrix}}}
.
We set
f
(
t
,
X
t
)
=
X
t
1
X
t
2
{\displaystyle f(t,\mathbf {X} _{t})=X_{t}^{1}X_{t}^{2}}
and observe that
∂
f
∂
t
=
0
{\displaystyle {\frac {\partial f}{\partial t}}=0}
,
(
∇
X
f
)
T
=
(
X
t
2
X
t
1
)
{\displaystyle (\nabla _{\mathbf {X} }f)^{T}={\begin{pmatrix}X_{t}^{2}&X_{t}^{1}\end{pmatrix}}}
, and
H
X
f
=
(
0
1
1
0
)
{\displaystyle H_{\mathbf {X} }f={\begin{pmatrix}0&1\\1&0\end{pmatrix}}}
Substituting these values in the multi-dimensional version of the lemma gives us:
d
(
X
t
1
X
t
2
)
=
d
f
(
t
,
X
t
)
=
0
⋅
d
t
+
(
X
t
2
X
t
1
)
d
X
t
+
1
2
(
d
X
t
1
d
X
t
2
)
(
0
1
1
0
)
(
d
X
t
1
d
X
t
2
)
=
X
t
2
d
X
t
1
+
X
t
1
d
X
t
2
+
d
X
t
1
d
X
t
2
{\displaystyle {\begin{aligned}d(X_{t}^{1}X_{t}^{2})&=df(t,\mathbf {X} _{t})\\&=0\cdot dt+{\begin{pmatrix}X_{t}^{2}&X_{t}^{1}\end{pmatrix}}\,d\mathbf {X} _{t}+{\frac {1}{2}}{\begin{pmatrix}dX_{t}^{1}&dX_{t}^{2}\end{pmatrix}}{\begin{pmatrix}0&1\\1&0\end{pmatrix}}{\begin{pmatrix}dX_{t}^{1}\\dX_{t}^{2}\end{pmatrix}}\\[1ex]&=X_{t}^{2}\,dX_{t}^{1}+X_{t}^{1}\,dX_{t}^{2}+dX_{t}^{1}\,dX_{t}^{2}\end{aligned}}}
This is a generalisation of Leibniz's product rule to Ito processes, which are non-differentiable.
Further, using the second form of the multidimensional version above gives us
d
(
X
t
1
X
t
2
)
=
{
0
+
(
X
t
2
X
t
1
)
(
μ
t
1
μ
t
2
)
+
1
2
Tr
[
(
σ
t
1
σ
t
2
)
(
0
1
1
0
)
(
σ
t
1
σ
t
2
)
]
}
d
t
+
(
X
t
2
σ
t
1
+
X
t
1
σ
t
2
)
d
B
t
=
(
X
t
2
μ
t
1
+
X
t
1
μ
t
2
+
σ
t
1
σ
t
2
)
d
t
+
(
X
t
2
σ
t
1
+
X
t
1
σ
t
2
)
d
B
t
{\displaystyle {\begin{aligned}d(X_{t}^{1}X_{t}^{2})&=\left\{0+{\begin{pmatrix}X_{t}^{2}&X_{t}^{1}\end{pmatrix}}{\begin{pmatrix}\mu _{t}^{1}\\\mu _{t}^{2}\end{pmatrix}}+{\frac {1}{2}}\operatorname {Tr} \left[{\begin{pmatrix}\sigma _{t}^{1}&\sigma _{t}^{2}\end{pmatrix}}{\begin{pmatrix}0&1\\1&0\end{pmatrix}}{\begin{pmatrix}\sigma _{t}^{1}\\\sigma _{t}^{2}\end{pmatrix}}\right]\right\}dt\\[1ex]&\qquad +\left(X_{t}^{2}\sigma _{t}^{1}+X_{t}^{1}\sigma _{t}^{2}\right)dB_{t}\\[2ex]&=\left(X_{t}^{2}\mu _{t}^{1}+X_{t}^{1}\mu _{t}^{2}+\sigma _{t}^{1}\sigma _{t}^{2}\right)dt+\left(X_{t}^{2}\sigma _{t}^{1}+X_{t}^{1}\sigma _{t}^{2}\right)dB_{t}\end{aligned}}}
so we see that the product
X
t
1
X
t
2
{\displaystyle X_{t}^{1}X_{t}^{2}}
is itself an Itô drift-diffusion process.
== Itô's formula for functions with finite quadratic variation ==
Hans Föllmer provided a non-probabilistic proof of the Itô formula and showed that it holds for all functions with finite quadratic variation.
Let
f
∈
C
2
{\displaystyle f\in C^{2}}
be a real-valued function and
x
:
[
0
,
∞
]
→
R
{\displaystyle x:[0,\infty ]\to \mathbb {R} }
a right-continuous function with left limits and finite quadratic variation
[
x
]
{\displaystyle [x]}
. Then
f
(
x
t
)
=
f
(
x
0
)
+
∫
0
t
f
′
(
x
s
−
)
d
x
s
+
1
2
∫
]
0
,
t
]
f
″
(
x
s
−
)
d
[
x
]
s
+
∑
0
≤
s
≤
t
[
f
(
x
s
)
−
f
(
x
s
−
)
−
f
′
(
x
s
−
)
Δ
x
s
−
1
2
f
″
(
x
s
−
)
(
Δ
x
s
)
2
]
.
{\displaystyle {\begin{aligned}f(x_{t})=f(x_{0})&+\int _{0}^{t}f'(x_{s-})\,\mathrm {d} x_{s}+{\frac {1}{2}}\int _{]0,t]}f''(x_{s-})\,d[x]_{s}\\&+\sum _{0\leq s\leq t}\left[f(x_{s})-f(x_{s-})-f'(x_{s-})\Delta x_{s}-{\frac {1}{2}}f''(x_{s-})(\Delta x_{s})^{2}\right].\end{aligned}}}
where the quadratic variation of
x
{\displaystyle x}
is defined as a limit along a sequence of partitions
D
n
{\displaystyle D_{n}}
of
[
0
,
t
]
{\displaystyle [0,t]}
with step decreasing to zero:
[
x
]
(
t
)
=
lim
n
→
∞
∑
t
k
n
∈
D
n
(
x
t
k
+
1
n
−
x
t
k
n
)
2
.
{\displaystyle [x](t)=\lim _{n\to \infty }\sum _{t_{k}^{n}\in D_{n}}\left(x_{t_{k+1}^{n}}-x_{t_{k}^{n}}\right)^{2}.}
== Higher-order Itô formula ==
Rama Cont and Nicholas Perkowski extended the Ito formula to functions with finite p-th variation where
p
≥
2
{\displaystyle p\geq 2}
is an arbitrarily large integer.
Given a continuous function with finite p-th variation
[
x
]
p
(
t
)
=
lim
n
→
∞
∑
t
k
n
∈
D
n
(
x
t
k
+
1
n
−
x
t
k
n
)
p
,
{\displaystyle [x]^{p}(t)=\lim _{n\to \infty }\sum _{t_{k}^{n}\in D_{n}}{\left(x_{t_{k+1}^{n}}-x_{t_{k}^{n}}\right)}^{p},}
Cont and Perkowski's change of variable formula states that for any
f
∈
C
p
(
R
d
,
R
)
{\displaystyle f\in C^{p}(\mathbb {R} ^{d},\mathbb {R} )}
:
f
(
x
t
)
=
f
(
x
0
)
+
∫
0
t
∇
p
−
1
f
(
x
s
−
)
d
x
s
+
1
p
!
∫
]
0
,
t
]
f
p
(
x
s
−
)
d
[
x
]
s
p
{\displaystyle {\begin{aligned}f(x_{t})={}&f(x_{0})+\int _{0}^{t}\nabla _{p-1}f(x_{s-})\,\mathrm {d} x_{s}+{\frac {1}{p!}}\int _{]0,t]}f^{p}(x_{s-})\,d[x]_{s}^{p}\end{aligned}}}
where the first integral is defined as a limit of compensated left Riemann sums along a sequence of partitions
D
n
{\displaystyle D_{n}}
:
∫
0
t
∇
p
−
1
f
(
x
s
−
)
d
x
s
:=
∑
t
k
n
∈
D
n
∑
k
=
1
p
−
1
f
k
(
x
t
k
n
)
k
!
(
x
t
k
+
1
n
−
x
t
k
n
)
k
.
{\displaystyle {\begin{aligned}\int _{0}^{t}\nabla _{p-1}f(x_{s-})\,\mathrm {d} x_{s}:={}&\sum _{t_{k}^{n}\in D_{n}}\sum _{k=1}^{p-1}{\frac {f^{k}(x_{t_{k}^{n}})}{k!}}\left(x_{t_{k+1}^{n}}-x_{t_{k}^{n}}\right)^{k}.\end{aligned}}}
An extension to the case of fractional regularity (non-integer
p
{\displaystyle p}
) was obtained by Cont and Jin.
== Infinite-dimensional formulas ==
There exist some extensions to infinite-dimensional spaces (e.g. Pardoux, Gyöngy-Krylov, Brzezniak-van Neerven-Veraar-Weis).
== See also ==
Wiener process
Itô calculus
Feynman–Kac formula
Euler–Maruyama method
== Notes ==
== References ==
== External links ==
Derivation, Prof. Thayer Watkins
Informal proof, optiontutor | Wikipedia/Itô's_lemma |
In actuarial science and applied probability, ruin theory (sometimes risk theory or collective risk theory) uses mathematical models to describe an insurer's vulnerability to insolvency/ruin. In such models key quantities of interest are the probability of ruin, distribution of surplus immediately prior to ruin and deficit at time of ruin.
== Classical model ==
The theoretical foundation of ruin theory, known as the Cramér–Lundberg model (or classical compound-Poisson risk model, classical risk process or Poisson risk process) was introduced in 1903 by the Swedish actuary Filip Lundberg. Lundberg's work was republished in the 1930s by Harald Cramér.
The model describes an insurance company that experiences two opposing cash flows: incoming cash premiums and outgoing claims. Premiums arrive at a constant rate c > 0 from customers and claims arrive according to a Poisson process N_t with intensity λ and are independent and identically distributed non-negative random variables ξ_i with distribution F and mean μ (they form a compound Poisson process). So for an insurer who starts with initial surplus x, the aggregate assets X_t are given by:
{\displaystyle X_{t}=x+ct-\sum _{i=1}^{N_{t}}\xi _{i}\quad {\text{for }}t\geq 0.}
The central object of the model is to investigate the probability that the insurer's surplus level eventually falls below zero (making the firm bankrupt). This quantity, called the probability of ultimate ruin, is defined as
{\displaystyle \psi (x)=\mathbb {P} ^{x}\{\tau <\infty \},}
where the time of ruin is
{\displaystyle \tau =\inf\{t>0\,:\,X(t)<0\}}
with the convention that inf ∅ = ∞. This can be computed exactly using the Pollaczek–Khinchine formula as (the ruin function here is equivalent to the tail function of the stationary distribution of waiting time in an M/G/1 queue)
{\displaystyle \psi (x)=\left(1-{\frac {\lambda \mu }{c}}\right)\sum _{n=0}^{\infty }\left({\frac {\lambda \mu }{c}}\right)^{n}(1-F_{l}^{\ast n}(x))}
where F_l is the integrated tail (equilibrium) distribution of F,
{\displaystyle F_{l}(x)={\frac {1}{\mu }}\int _{0}^{x}\left(1-F(u)\right){\text{d}}u}
and ⋅^{∗n} denotes the n-fold convolution.
In the case where the claim sizes are exponentially distributed, this simplifies to
{\displaystyle \psi (x)={\frac {\lambda \mu }{c}}e^{-\left({\frac {1}{\mu }}-{\frac {\lambda }{c}}\right)x}.}
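A minimal sketch (not from the article; parameter values are illustrative) comparing a Monte Carlo estimate of the ruin probability with the closed form for exponential claims. Ruin is only checked over a long but finite horizon, so the simulation slightly underestimates ψ(x).

```python
import numpy as np

rng = np.random.default_rng(6)
lam, mu, c, x = 1.0, 1.0, 1.25, 3.0   # claim rate, mean claim, premium rate, initial surplus
horizon, n_paths = 200.0, 20_000

ruined = 0
for _ in range(n_paths):
    t, claims = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)        # next claim arrival time
        if t > horizon:
            break
        claims += rng.exponential(mu)          # exponential claim size with mean mu
        if x + c * t - claims < 0:             # surplus can only drop below 0 at claim times
            ruined += 1
            break

psi_theory = (lam * mu / c) * np.exp(-(1.0 / mu - lam / c) * x)
print("simulated:", ruined / n_paths, "closed form:", psi_theory)
```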
== Sparre Andersen model ==
E. Sparre Andersen extended the classical model in 1957 by allowing claim inter-arrival times to have arbitrary distribution functions.
{\displaystyle X_{t}=x+ct-\sum _{i=1}^{N_{t}}\xi _{i}\quad {\text{ for }}t\geq 0,}
where the claim number process (N_t)_{t≥0} is a renewal process and (ξ_i)_{i∈ℕ} are independent and identically distributed random variables.
The model furthermore assumes that ξ_i > 0 almost surely and that (N_t)_{t≥0} and (ξ_i)_{i∈ℕ} are independent. The model is also known as the renewal risk model.
== Expected discounted penalty function ==
Michael R. Powers and Gerber and Shiu analyzed the behavior of the insurer's surplus through the expected discounted penalty function, which is commonly referred to as Gerber-Shiu function in the ruin literature and named after actuarial scientists Elias S.W. Shiu and Hans-Ulrich Gerber. It is arguable whether the function should have been called Powers-Gerber-Shiu function due to the contribution of Powers.
In Powers' notation, this is defined as
m
(
x
)
=
E
x
[
e
−
δ
τ
K
τ
]
{\displaystyle m(x)=\mathbb {E} ^{x}[e^{-\delta \tau }K_{\tau }]}
,
where
δ
{\displaystyle \delta }
is the discounting force of interest,
K
τ
{\displaystyle K_{\tau }}
is a general penalty function reflecting the economic costs to the insurer at the time of ruin, and the expectation
E
x
{\displaystyle \mathbb {E} ^{x}}
corresponds to the probability measure
P
x
{\displaystyle \mathbb {P} ^{x}}
. The function is called expected discounted cost of insolvency by Powers.
In Gerber and Shiu's notation, it is given as
m
(
x
)
=
E
x
[
e
−
δ
τ
w
(
X
τ
−
,
X
τ
)
I
(
τ
<
∞
)
]
{\displaystyle m(x)=\mathbb {E} ^{x}[e^{-\delta \tau }w(X_{\tau -},X_{\tau })\mathbb {I} (\tau <\infty )]}
,
where
δ
{\displaystyle \delta }
is the discounting force of interest and
w
(
X
τ
−
,
X
τ
)
{\displaystyle w(X_{\tau -},X_{\tau })}
is a penalty function capturing the economic costs to the insurer at the time of ruin (assumed to depend on the surplus prior to ruin
X
τ
−
{\displaystyle X_{\tau -}}
and the deficit at ruin
X
τ
{\displaystyle X_{\tau }}
), and the expectation
E
x
{\displaystyle \mathbb {E} ^{x}}
corresponds to the probability measure
P
x
{\displaystyle \mathbb {P} ^{x}}
. Here the indicator function
I
(
τ
<
∞
)
{\displaystyle \mathbb {I} (\tau <\infty )}
emphasizes that the penalty is exercised only when ruin occurs.
It is quite intuitive to interpret the expected discounted penalty function. Since the function measures the actuarial present value of the penalty that occurs at
τ
{\displaystyle \tau }
, the penalty function is multiplied by the discounting factor
e
−
δ
τ
{\displaystyle e^{-\delta \tau }}
, and then averaged over the probability distribution of the waiting time to
τ
{\displaystyle \tau }
. While Gerber and Shiu applied this function to the classical compound-Poisson model, Powers argued that an insurer's surplus is better modeled by a family of diffusion processes.
There are a great variety of ruin-related quantities that fall into the category of the expected discounted penalty function.
Other finance-related quantities belonging to the class of the expected discounted penalty function include the perpetual American put option, the contingent claim at optimal exercise time, and more.
== Recent developments ==
Compound-Poisson risk model with constant interest
Compound-Poisson risk model with stochastic interest
Brownian-motion risk model
General diffusion-process model
Markov-modulated risk model
Accident probability factor (APF) calculator – risk analysis model (@SBH)
== See also ==
Financial risk
Volterra integral equation § Application: Ruin theory
Chance-constrained portfolio selection
== References ==
== Further reading ==
Gerber, H.U. (1979). An Introduction to Mathematical Risk Theory. Philadelphia: S.S. Heubner Foundation Monograph Series 8.
Asmussen S., Albrecher H. (2010). Ruin Probabilities, 2nd Edition. Singapore: World Scientific Publishing Co. | Wikipedia/Sparre–Anderson_model |
In mathematics, Tanaka's equation is an example of a stochastic differential equation which admits a weak solution but has no strong solution. It is named after the Japanese mathematician Hiroshi Tanaka.
Tanaka's equation is the one-dimensional stochastic differential equation
{\displaystyle \mathrm {d} X_{t}=\operatorname {sgn}(X_{t})\,\mathrm {d} B_{t},}
driven by canonical Brownian motion B, with initial condition X0 = 0, where sgn denotes the sign function
{\displaystyle \operatorname {sgn}(x)={\begin{cases}+1,&x\geq 0;\\-1,&x<0.\end{cases}}}
(Note the unconventional value for sgn(0).) The signum function does not satisfy the Lipschitz continuity condition required for the usual theorems guaranteeing existence and uniqueness of strong solutions. The Tanaka equation has no strong solution, i.e. one for which the version B of Brownian motion is given in advance and the solution X is adapted to the filtration generated by B and the initial conditions. However, the Tanaka equation does have a weak solution, one for which the process X and version of Brownian motion are both specified as part of the solution, rather than the Brownian motion being given a priori. In this case, simply choose X to be any Brownian motion and define
{\displaystyle {\tilde {B}}} by
{\displaystyle {\tilde {B}}_{t}=\int _{0}^{t}\operatorname {sgn} {\big (}X_{s}{\big )}\,\mathrm {d} X_{s},}
i.e.
{\displaystyle \mathrm {d} {\tilde {B}}_{t}=\operatorname {sgn}(X_{t})\,\mathrm {d} X_{t}.}
Hence,
{\displaystyle \mathrm {d} X_{t}=\operatorname {sgn}(X_{t})\,\mathrm {d} {\tilde {B}}_{t},}
and, since {\displaystyle {\tilde {B}}} is a continuous local martingale with quadratic variation t, it is itself a Brownian motion by Lévy's characterization; so X is a weak solution of the Tanaka equation. Furthermore, this solution is weakly unique, i.e. any other weak solution must have the same law.
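The weak-solution construction above can be mimicked numerically: simulate a Brownian path for X and build the driving noise from the discretized stochastic integral. This is only an illustrative sketch; the step size, horizon and the quadratic-variation check are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 100_000, 1.0
dt = T / n

# Choose X to be a Brownian motion started at 0 (the weak solution).
dX = rng.normal(0.0, np.sqrt(dt), n)
X = np.concatenate(([0.0], np.cumsum(dX)))

# Discretize  dB~_t = sgn(X_t) dX_t  with the article's convention sgn(0) = +1.
sgn = np.where(X[:-1] >= 0, 1.0, -1.0)
B_tilde = np.concatenate(([0.0], np.cumsum(sgn * dX)))

# B~ should again look like a Brownian motion: its quadratic variation is ~ T,
# and X satisfies dX_t = sgn(X_t) dB~_t by construction.
print("quadratic variation of B~:", np.sum(np.diff(B_tilde) ** 2))  # ≈ 1.0
```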
Another counterexample of this type is Tsirelson's stochastic differential equation.
== References ==
Øksendal, Bernt K. (2003). Stochastic Differential Equations: An Introduction with Applications (Sixth ed.). Berlin: Springer. ISBN 3-540-04758-1. (Example 5.3.2) | Wikipedia/Tanaka_equation |
In credibility theory, a branch of study in actuarial science, the Bühlmann model is a random effects model (or "variance components model" or hierarchical linear model) used to determine the appropriate premium for a group of insurance contracts. The model is named after Hans Bühlmann who first published a description in 1967.
== Model description ==
Consider i risks which generate random losses for which historical data of m recent claims are available (indexed by j). A premium for the ith risk is to be determined based on the expected value of claims. A linear estimator which minimizes the mean square error is sought. Write
Xij for the j-th claim on the i-th risk (we assume that all claims for i-th risk are independent and identically distributed)
{\displaystyle {\bar {X}}_{i}={\frac {1}{m}}\sum _{j=1}^{m}X_{ij}} for the average value.
{\displaystyle \Theta _{i}} – the parameter for the distribution of the i-th risk
{\displaystyle m(\vartheta )=\operatorname {E} \left[X_{ij}|\Theta _{i}=\vartheta \right]}
{\displaystyle \Pi =\operatorname {E} (m(\vartheta )|X_{i1},X_{i2},...X_{im})} – premium for the i-th risk
{\displaystyle \mu =\operatorname {E} (m(\vartheta ))}
{\displaystyle s^{2}(\vartheta )=\operatorname {Var} \left[X_{ij}|\Theta _{i}=\vartheta \right]}
{\displaystyle \sigma ^{2}=\operatorname {E} \left[s^{2}(\vartheta )\right]}
{\displaystyle v^{2}=\operatorname {Var} \left[m(\vartheta )\right]}
Note: {\displaystyle m(\vartheta )} and {\displaystyle s^{2}(\vartheta )} are functions of the random parameter {\displaystyle \vartheta }.
The Bühlmann model is the solution for the problem:
{\displaystyle {\underset {a_{i0},a_{i1},...,a_{im}}{\operatorname {arg\,min} }}\operatorname {E} \left[\left(a_{i0}+\sum _{j=1}^{m}a_{ij}X_{ij}-\Pi \right)^{2}\right]}
where {\displaystyle a_{i0}+\sum _{j=1}^{m}a_{ij}X_{ij}} is the estimator of the premium {\displaystyle \Pi } and arg min represents the parameter values which minimize the expression.
== Model solution ==
The solution for the problem is:
{\displaystyle Z{\bar {X}}_{i}+(1-Z)\mu }
where:
{\displaystyle Z={\frac {1}{1+{\frac {\sigma ^{2}}{v^{2}m}}}}}
This result can be interpreted as follows: a fraction Z of the premium is based on the information that we have about the specific risk, and the remaining fraction (1 − Z) is based on the information that we have about the whole population.
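As a minimal numerical sketch of this solution, the credibility weight Z and the resulting premium can be computed directly once the structural parameters μ, σ² and v² are given; here they are simply assumed known, whereas in practice they would have to be estimated from portfolio data.

```python
def buhlmann_premium(x_bar_i, m, mu, sigma2, v2):
    """Bühlmann credibility premium Z * x_bar_i + (1 - Z) * mu."""
    z = 1.0 / (1.0 + sigma2 / (v2 * m))   # equivalently m*v2 / (sigma2 + m*v2)
    return z * x_bar_i + (1.0 - z) * mu, z

# Hypothetical example: 5 observed years with average claim 120, collective mean 100.
premium, z = buhlmann_premium(x_bar_i=120.0, m=5, mu=100.0, sigma2=400.0, v2=25.0)
print(z, premium)   # Z ≈ 0.238, premium ≈ 104.8
```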
=== Proof ===
The following proof is slightly different from the one in the original paper. It is also more general, because it considers all linear estimators, while the original proof considers only estimators based on the average claim.
Lemma. The problem can be stated alternatively as:
{\displaystyle f=\mathbb {E} \left[\left(a_{i0}+\sum _{j=1}^{m}a_{ij}X_{ij}-m(\vartheta )\right)^{2}\right]\to \min }
Proof:
{\displaystyle {\begin{aligned}\mathbb {E} \left[\left(a_{i0}+\sum _{j=1}^{m}a_{ij}X_{ij}-m(\vartheta )\right)^{2}\right]&=\mathbb {E} \left[\left(a_{i0}+\sum _{j=1}^{m}a_{ij}X_{ij}-\Pi \right)^{2}\right]+\mathbb {E} \left[\left(m(\vartheta )-\Pi \right)^{2}\right]-2\mathbb {E} \left[\left(a_{i0}+\sum _{j=1}^{m}a_{ij}X_{ij}-\Pi \right)\left(m(\vartheta )-\Pi \right)\right]\\&=\mathbb {E} \left[\left(a_{i0}+\sum _{j=1}^{m}a_{ij}X_{ij}-\Pi \right)^{2}\right]+\mathbb {E} \left[\left(m(\vartheta )-\Pi \right)^{2}\right]\end{aligned}}}
The last equation follows from the fact that
{\displaystyle {\begin{aligned}\mathbb {E} \left[\left(a_{i0}+\sum _{j=1}^{m}a_{ij}X_{ij}-\Pi \right)\left(m(\vartheta )-\Pi \right)\right]&=\mathbb {E} _{\Theta }\left[\mathbb {E} _{X}\left.\left[\left(a_{i0}+\sum _{j=1}^{m}a_{ij}X_{ij}-\Pi \right)(m(\vartheta )-\Pi )\right|X_{i1},\ldots ,X_{im}\right]\right]\\&=\mathbb {E} _{\Theta }\left[\left(a_{i0}+\sum _{j=1}^{m}a_{ij}X_{ij}-\Pi \right)\left[\mathbb {E} _{X}\left[(m(\vartheta )-\Pi )|X_{i1},\ldots ,X_{im}\right]\right]\right]\\&=0\end{aligned}}}
Here we use the law of total expectation and the fact that {\displaystyle \Pi =\mathbb {E} [m(\vartheta )|X_{i1},\ldots ,X_{im}].}
In the previous equation, we decomposed the minimized function into the sum of two expressions. The second expression does not depend on the parameters used in the minimization; therefore, minimizing the function is the same as minimizing the first part of the sum.
Let us find the critical points of the function:
{\displaystyle {\frac {1}{2}}{\frac {\partial f}{\partial a_{i0}}}=\mathbb {E} \left[a_{i0}+\sum _{j=1}^{m}a_{ij}X_{ij}-m(\vartheta )\right]=a_{i0}+\sum _{j=1}^{m}a_{ij}\mathbb {E} (X_{ij})-\mathbb {E} (m(\vartheta ))=a_{i0}+\left(\sum _{j=1}^{m}a_{ij}-1\right)\mu }
{\displaystyle a_{i0}=\left(1-\sum _{j=1}^{m}a_{ij}\right)\mu }
For {\displaystyle k\neq 0} we have:
{\displaystyle {\frac {1}{2}}{\frac {\partial f}{\partial a_{ik}}}=\mathbb {E} \left[X_{ik}\left(a_{i0}+\sum _{j=1}^{m}a_{ij}X_{ij}-m(\vartheta )\right)\right]=\mathbb {E} \left[X_{ik}\right]a_{i0}+\sum _{j=1,j\neq k}^{m}a_{ij}\mathbb {E} [X_{ik}X_{ij}]+a_{ik}\mathbb {E} [X_{ik}^{2}]-\mathbb {E} [X_{ik}m(\vartheta )]=0}
We can simplify the derivative by noting that:
{\displaystyle {\begin{aligned}\mathbb {E} [X_{ij}X_{ik}]&=\mathbb {E} \left[\mathbb {E} [X_{ij}X_{ik}|\vartheta ]\right]=\mathbb {E} [{\text{cov}}(X_{ij}X_{ik}|\vartheta )+\mathbb {E} (X_{ij}|\vartheta )\mathbb {E} (X_{ik}|\vartheta )]=\mathbb {E} [(m(\vartheta ))^{2}]=v^{2}+\mu ^{2}\\\mathbb {E} [X_{ik}^{2}]&=\mathbb {E} \left[\mathbb {E} [X_{ik}^{2}|\vartheta ]\right]=\mathbb {E} [s^{2}(\vartheta )+(m(\vartheta ))^{2}]=\sigma ^{2}+v^{2}+\mu ^{2}\\\mathbb {E} [X_{ik}m(\vartheta )]&=\mathbb {E} \left[\mathbb {E} [X_{ik}m(\vartheta )|\Theta _{i}]\right]=\mathbb {E} [(m(\vartheta ))^{2}]=v^{2}+\mu ^{2}\end{aligned}}}
Substituting the above equations into the derivative, we have:
{\displaystyle {\frac {1}{2}}{\frac {\partial f}{\partial a_{ik}}}=\left(1-\sum _{j=1}^{m}a_{ij}\right)\mu ^{2}+\sum _{j=1,j\neq k}^{m}a_{ij}(v^{2}+\mu ^{2})+a_{ik}(\sigma ^{2}+v^{2}+\mu ^{2})-(v^{2}+\mu ^{2})=a_{ik}\sigma ^{2}-\left(1-\sum _{j=1}^{m}a_{ij}\right)v^{2}=0}
{\displaystyle \sigma ^{2}a_{ik}=v^{2}\left(1-\sum _{j=1}^{m}a_{ij}\right)}
The right-hand side does not depend on k. Therefore, all {\displaystyle a_{ik}} are equal:
{\displaystyle a_{i1}=\cdots =a_{im}={\frac {v^{2}}{\sigma ^{2}+mv^{2}}}}
From the solution for {\displaystyle a_{i0}} we have
{\displaystyle a_{i0}=(1-ma_{ik})\mu =\left(1-{\frac {mv^{2}}{\sigma ^{2}+mv^{2}}}\right)\mu }
Finally, the best estimator is
{\displaystyle a_{i0}+\sum _{j=1}^{m}a_{ij}X_{ij}={\frac {mv^{2}}{\sigma ^{2}+mv^{2}}}{\bar {X_{i}}}+\left(1-{\frac {mv^{2}}{\sigma ^{2}+mv^{2}}}\right)\mu =Z{\bar {X_{i}}}+(1-Z)\mu }
== References ==
=== Citations ===
=== Sources === | Wikipedia/Bühlmann_model |
In queueing theory, a discipline within the mathematical theory of probability, a G-network (generalized queueing network, often called a Gelenbe network) is an open network of G-queues first introduced by Erol Gelenbe as a model for queueing systems with specific control functions, such as traffic re-routing or traffic destruction, as well as a model for neural networks. A G-queue is a network of queues with several types of novel and useful customers:
positive customers, which arrive from other queues or arrive externally as Poisson arrivals, and obey standard service and routing disciplines as in conventional network models,
negative customers, which arrive from another queue, or which arrive externally as Poisson arrivals, and remove (or 'kill') customers in a non-empty queue, representing the need to remove traffic when the network is congested, including the removal of "batches" of customers
"triggers", which arrive from other queues or from outside the network, and which displace customers and move them to other queues
A product-form solution superficially similar in form to Jackson's theorem, but which requires the solution of a system of non-linear equations for the traffic flows, exists for the stationary distribution of G-networks while the traffic equations of a G-network are in fact surprisingly non-linear, and the model does not obey partial balance. This broke previous assumptions that partial balance was a necessary condition for a product-form solution. A powerful property of G-networks is that they are universal approximators for continuous and bounded functions, so that they can be used to approximate quite general input-output behaviours.
== Definition ==
A network of m interconnected queues is a G-network if
each queue has one server, who serves at rate μi,
external arrivals of positive customers or of triggers or resets form Poisson processes: of rate {\displaystyle \Lambda _{i}} for positive customers, while triggers and resets, including negative customers, form a Poisson process of rate {\displaystyle \lambda _{i}},
on completing service a customer moves from queue i to queue j as a positive customer with probability {\displaystyle p_{ij}^{+}}, as a trigger or reset with probability {\displaystyle p_{ij}^{-}}, and departs the network with probability {\displaystyle d_{i}},
on arrival to a queue, a positive customer acts as usual and increases the queue length by 1,
on arrival to a queue, the negative customer reduces the length of the queue by some random number (if there is at least one positive customer present at the queue), while a trigger moves a customer probabilistically to another queue and a reset sets the state of the queue to its steady-state if the queue is empty when the reset arrives. All triggers, negative customers and resets disappear after they have taken their action, so that they are in fact "control" signals in the network,
note that normal customers leaving a queue can become triggers or resets and negative customers when they visit the next queue.
A queue in such a network is known as a G-queue.
== Stationary distribution ==
Define the utilization at each node,
{\displaystyle \rho _{i}={\frac {\lambda _{i}^{+}}{\mu _{i}+\lambda _{i}^{-}}}}
where the {\displaystyle \lambda _{i}^{+},\lambda _{i}^{-}} for {\displaystyle i=1,\ldots ,m} satisfy
Then, writing (n1, ..., nm) for the state of the network (with queue length ni at node i), if a unique non-negative solution {\displaystyle (\lambda _{i}^{+},\lambda _{i}^{-})} exists to the above equations (1) and (2) such that {\displaystyle \rho _{i}<1} for all i, then the stationary probability distribution π exists and is given by
{\displaystyle \pi (n_{1},n_{2},\ldots ,n_{m})=\prod _{i=1}^{m}(1-\rho _{i})\rho _{i}^{n_{i}}.}
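The product form above can be evaluated numerically once the traffic rates are known. The sketch below does this for a hypothetical two-queue example, using the usual G-network traffic equations λ⁺ᵢ = Λᵢ + Σⱼ ρⱼ μⱼ p⁺ⱼᵢ and λ⁻ᵢ = λᵢ + Σⱼ ρⱼ μⱼ p⁻ⱼᵢ as a stand-in for the equations (1) and (2) referred to above, which are not reproduced in this article; all numerical values are made up for the illustration.

```python
import numpy as np

# Two queues: service rates, external positive/negative arrival rates,
# and routing matrices for positive customers (Pp) and negative signals (Pm).
mu = np.array([3.0, 4.0])
Lam = np.array([1.0, 0.5])        # external positive arrivals
lam = np.array([0.2, 0.1])        # external negative arrivals
Pp = np.array([[0.0, 0.4],        # positive routing i -> j
               [0.3, 0.0]])
Pm = np.array([[0.0, 0.1],        # negative routing i -> j
               [0.1, 0.0]])

rho = np.zeros(2)
for _ in range(1000):             # fixed-point iteration of the traffic equations
    lam_plus = Lam + (rho * mu) @ Pp
    lam_minus = lam + (rho * mu) @ Pm
    rho = lam_plus / (mu + lam_minus)

def pi(n):
    """Stationary probability of state n = (n1, n2), valid when rho_i < 1 for all i."""
    return np.prod((1 - rho) * rho ** np.array(n))

print(rho, pi((0, 0)), pi((2, 1)))
```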
=== Proof ===
It is sufficient to show that {\displaystyle \pi } satisfies the global balance equations, which, quite differently from those of Jackson networks, are non-linear. We note that the model also allows for multiple classes.
G-networks have been used in a wide range of applications, including to represent Gene Regulatory Networks, the mix of control and payload in packet networks, neural networks, and the representation of colour images and medical images such as Magnetic Resonance Images.
== Response time distribution ==
The response time is the length of time a customer spends in the system. The response time distribution for a single G-queue is known where customers are served using a FCFS discipline at rate μ, with positive arrivals at rate λ+ and negative arrivals at rate λ− which kill customers from the end of the queue. The Laplace transform of response time distribution in this situation is
{\displaystyle W^{\ast }(s)={\frac {\mu (1-\rho )}{\lambda ^{+}}}{\frac {s+\lambda +\mu (1-\rho )-{\sqrt {[s+\lambda +\mu (1-\rho )]^{2}-4\lambda ^{+}\lambda ^{-}}}}{\lambda ^{-}-\lambda ^{+}-\mu (1-\rho )-s+{\sqrt {[s+\lambda +\mu (1-\rho )]^{2}-4\lambda ^{+}\lambda ^{-}}}}}}
where λ = λ+ + λ− and ρ = λ+/(λ− + μ), requiring ρ < 1 for stability.
The response time for a tandem pair of G-queues (where customers who finish service at the first node immediately move to the second, then leave the network) is also known, and it is thought extensions to larger networks will be intractable.
== References == | Wikipedia/G-network |
In probability theory, a martingale is a stochastic process in which the expected value of the next observation, given all prior observations, is equal to the most recent value. In other words, the conditional expectation of the next value, given the past, is equal to the present value. Martingales are used to model fair games, where future expected winnings are equal to the current amount regardless of past outcomes.
== History ==
Originally, martingale referred to a class of betting strategies that was popular in 18th-century France. The simplest of these strategies was designed for a game in which the gambler wins their stake if a coin comes up heads and loses it if the coin comes up tails. The strategy had the gambler double their bet after every loss so that the first win would recover all previous losses plus win a profit equal to the original stake. As the gambler's wealth and available time jointly approach infinity, their probability of eventually flipping heads approaches 1, which makes the martingale betting strategy seem like a sure thing. However, the exponential growth of the bets eventually bankrupts its users due to finite bankrolls. Stopped Brownian motion, which is a martingale process, can be used to model the trajectory of such games.
The concept of martingale in probability theory was introduced by Paul Lévy in 1934, though he did not name it. The term "martingale" was introduced later by Ville (1939), who also extended the definition to continuous martingales. Much of the original development of the theory was done by Joseph Leo Doob among others. Part of the motivation for that work was to show the impossibility of successful betting strategies in games of chance.
== Definitions ==
A basic definition of a discrete-time martingale is a discrete-time stochastic process (i.e., a sequence of random variables) X1, X2, X3, ... that satisfies for any time n,
{\displaystyle \mathbf {E} (\vert X_{n}\vert )<\infty }
{\displaystyle \mathbf {E} (X_{n+1}\mid X_{1},\ldots ,X_{n})=X_{n}.}
That is, the conditional expected value of the next observation, given all the past observations, is equal to the most recent observation.
=== Martingale sequences with respect to another sequence ===
More generally, a sequence Y1, Y2, Y3 ... is said to be a martingale with respect to another sequence X1, X2, X3 ... if for all n
{\displaystyle \mathbf {E} (\vert Y_{n}\vert )<\infty }
{\displaystyle \mathbf {E} (Y_{n+1}\mid X_{1},\ldots ,X_{n})=Y_{n}.}
Similarly, a continuous-time martingale with respect to the stochastic process Xt is a stochastic process Yt such that for all t
{\displaystyle \mathbf {E} (\vert Y_{t}\vert )<\infty }
{\displaystyle \mathbf {E} (Y_{t}\mid \{X_{\tau },\tau \leq s\})=Y_{s}\quad \forall s\leq t.}
This expresses the property that the conditional expectation of an observation at time t, given all the observations up to time {\displaystyle s}, is equal to the observation at time s (of course, provided that s ≤ t). The second property implies that {\displaystyle Y_{n}} is measurable with respect to {\displaystyle X_{1}\dots X_{n}}.
=== General definition ===
In full generality, a stochastic process {\displaystyle Y:T\times \Omega \to S} taking values in a Banach space {\displaystyle S} with norm {\displaystyle \lVert \cdot \rVert _{S}} is a martingale with respect to a filtration {\displaystyle \Sigma _{*}} and probability measure {\displaystyle \mathbb {P} } if
Σ∗ is a filtration of the underlying probability space (Ω, Σ, {\displaystyle \mathbb {P} });
Y is adapted to the filtration Σ∗, i.e., for each t in the index set T, the random variable Yt is a Σt-measurable function;
for each t, Yt lies in the Lp space L1(Ω, Σt, {\displaystyle \mathbb {P} }; S), i.e. {\displaystyle \mathbf {E} _{\mathbb {P} }(\lVert Y_{t}\rVert _{S})<+\infty ;}
for all s and t with s < t and all F ∈ Σs,
{\displaystyle \mathbf {E} _{\mathbb {P} }\left([Y_{t}-Y_{s}]\chi _{F}\right)=0,}
where χF denotes the indicator function of the event F. In Grimmett and Stirzaker's Probability and Random Processes, this last condition is denoted as
{\displaystyle Y_{s}=\mathbf {E} _{\mathbb {P} }(Y_{t}\mid \Sigma _{s}),}
which is a general form of conditional expectation.
It is important to note that the property of being a martingale involves both the filtration and the probability measure (with respect to which the expectations are taken). It is possible that Y could be a martingale with respect to one measure but not another one; the Girsanov theorem offers a way to find a measure with respect to which an Itō process is a martingale.
In the Banach space setting the conditional expectation is also denoted in operator notation as {\displaystyle \mathbf {E} ^{\Sigma _{s}}Y_{t}}.
== Examples of martingales ==
An unbiased random walk, in any number of dimensions, is an example of a martingale. For example, consider a 1-dimensional random walk where at each time step a move to the right or left is equally likely.
A gambler's fortune (capital) is a martingale if all the betting games which the gambler plays are fair. The gambler is playing a game of coin flipping. Suppose Xn is the gambler's fortune after n tosses of a fair coin, such that the gambler wins $1 if the coin toss outcome is heads and loses $1 if the coin toss outcome is tails. The gambler's conditional expected fortune after the next game, given the history, is equal to his present fortune. This sequence is thus a martingale.
Let Yn = Xn2 − n where Xn is the gambler's fortune from the prior example. Then the sequence {Yn : n = 1, 2, 3, ... } is a martingale. This can be used to show that the gambler's total gain or loss varies roughly between plus or minus the square root of the number of games of coin flipping played.
de Moivre's martingale: Suppose the coin toss outcomes are unfair, i.e., biased, with probability p of coming up heads and probability q = 1 − p of tails. Let
{\displaystyle X_{n+1}=X_{n}\pm 1} with "+" in case of "heads" and "−" in case of "tails". Let
{\displaystyle Y_{n}=(q/p)^{X_{n}}}
Then {Yn : n = 1, 2, 3, ... } is a martingale with respect to {Xn : n = 1, 2, 3, ... }. To show this
{\displaystyle {\begin{aligned}E[Y_{n+1}\mid X_{1},\dots ,X_{n}]&=p(q/p)^{X_{n}+1}+q(q/p)^{X_{n}-1}\\[6pt]&=p(q/p)(q/p)^{X_{n}}+q(p/q)(q/p)^{X_{n}}\\[6pt]&=q(q/p)^{X_{n}}+p(q/p)^{X_{n}}=(q/p)^{X_{n}}=Y_{n}.\end{aligned}}}
Pólya's urn contains a number of different-coloured marbles; at each iteration a marble is randomly selected from the urn and replaced with several more of that same colour. For any given colour, the fraction of marbles in the urn with that colour is a martingale. For example, if currently 95% of the marbles are red then, though the next iteration is more likely to add red marbles than another color, this bias is exactly balanced out by the fact that adding more red marbles alters the fraction much less significantly than adding the same number of non-red marbles would.
Likelihood-ratio testing in statistics: A random variable X is thought to be distributed according either to probability density f or to a different probability density g. A random sample X1, ..., Xn is taken. Let Yn be the "likelihood ratio"
{\displaystyle Y_{n}=\prod _{i=1}^{n}{\frac {g(X_{i})}{f(X_{i})}}}
If X is actually distributed according to the density f rather than according to g, then {Yn :n=1, 2, 3,...} is a martingale with respect to {Xn :n=1, 2, 3, ...}
In an ecological community, i.e. a group of species that are in a particular trophic level, competing for similar resources in a local area, the number of individuals of any particular species of fixed size is a function of (discrete) time, and may be viewed as a sequence of random variables. This sequence is a martingale under the unified neutral theory of biodiversity and biogeography.
If { Nt : t ≥ 0 } is a Poisson process with intensity λ, then the compensated Poisson process { Nt − λt : t ≥ 0 } is a continuous-time martingale with right-continuous/left-limit sample paths.
Wald's martingale
A {\displaystyle d}-dimensional process {\displaystyle M=(M^{(1)},\dots ,M^{(d)})} in some space {\displaystyle S^{d}} is a martingale in {\displaystyle S^{d}} if each component {\displaystyle T_{i}(M)=M^{(i)}} is a one-dimensional martingale in {\displaystyle S}.
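The de Moivre martingale calculation in the examples above can also be checked by simulation: by the tower property, the unconditional mean E[Yn] must stay equal to Y0 = 1 for every n. The bias p = 0.6 and the sample sizes below are arbitrary choices for this illustrative sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
p = 0.6                        # probability of heads (a +1 step)
q = 1.0 - p
n_paths, n_steps = 200_000, 10

steps = np.where(rng.random((n_paths, n_steps)) < p, 1, -1)
X = np.cumsum(steps, axis=1)   # biased random walks X_1, ..., X_n (X_0 = 0)
Y = (q / p) ** X               # de Moivre's martingale Y_n = (q/p)^{X_n}

# Since E[Y_{n+1} | history] = Y_n, every unconditional mean equals (q/p)^{X_0} = 1.
print(Y.mean(axis=0))          # each entry ≈ 1
```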
== Submartingales, supermartingales, and relationship to harmonic functions ==
There are two generalizations of a martingale that also include cases when the current observation Xn is not necessarily equal to the future conditional expectation E[Xn+1 | X1,...,Xn] but instead an upper or lower bound on the conditional expectation. These generalizations reflect the relationship between martingale theory and potential theory, that is, the study of harmonic functions. Just as a continuous-time martingale satisfies E[Xt | {Xτ : τ ≤ s}] − Xs = 0 ∀s ≤ t, a harmonic function f satisfies the partial differential equation Δf = 0 where Δ is the Laplacian operator. Given a Brownian motion process Wt and a harmonic function f, the resulting process f(Wt) is also a martingale.
A discrete-time submartingale is a sequence {\displaystyle X_{1},X_{2},X_{3},\ldots } of integrable random variables satisfying
{\displaystyle \operatorname {E} [X_{n+1}\mid X_{1},\ldots ,X_{n}]\geq X_{n}.}
Likewise, a continuous-time submartingale satisfies
{\displaystyle \operatorname {E} [X_{t}\mid \{X_{\tau }:\tau \leq s\}]\geq X_{s}\quad \forall s\leq t.}
In potential theory, a subharmonic function f satisfies Δf ≥ 0. Any subharmonic function that is bounded above by a harmonic function for all points on the boundary of a ball is bounded above by the harmonic function for all points inside the ball. Similarly, if a submartingale and a martingale have equivalent expectations for a given time, the history of the submartingale tends to be bounded above by the history of the martingale. Roughly speaking, the prefix "sub-" is consistent because the current observation Xn is less than (or equal to) the conditional expectation E[Xn+1 | X1,...,Xn]. Consequently, the current observation provides support from below the future conditional expectation, and the process tends to increase in future time.
Analogously, a discrete-time supermartingale satisfies
{\displaystyle \operatorname {E} [X_{n+1}\mid X_{1},\ldots ,X_{n}]\leq X_{n}.}
Likewise, a continuous-time supermartingale satisfies
{\displaystyle \operatorname {E} [X_{t}\mid \{X_{\tau }:\tau \leq s\}]\leq X_{s}\quad \forall s\leq t.}
In potential theory, a superharmonic function f satisfies Δf ≤ 0. Any superharmonic function that is bounded below by a harmonic function for all points on the boundary of a ball is bounded below by the harmonic function for all points inside the ball. Similarly, if a supermartingale and a martingale have equivalent expectations for a given time, the history of the supermartingale tends to be bounded below by the history of the martingale. Roughly speaking, the prefix "super-" is consistent because the current observation Xn is greater than (or equal to) the conditional expectation E[Xn+1 | X1,...,Xn]. Consequently, the current observation provides support from above the future conditional expectation, and the process tends to decrease in future time.
=== Examples of submartingales and supermartingales ===
Every martingale is also a submartingale and a supermartingale. Conversely, any stochastic process that is both a submartingale and a supermartingale is a martingale.
Consider again the gambler who wins $1 when a coin comes up heads and loses $1 when the coin comes up tails. Suppose now that the coin may be biased, so that it comes up heads with probability p.
If p is equal to 1/2, the gambler on average neither wins nor loses money, and the gambler's fortune over time is a martingale.
If p is less than 1/2, the gambler loses money on average, and the gambler's fortune over time is a supermartingale.
If p is greater than 1/2, the gambler wins money on average, and the gambler's fortune over time is a submartingale.
A convex function of a martingale is a submartingale, by Jensen's inequality. For example, the square of the gambler's fortune in the fair coin game is a submartingale (which also follows from the fact that Xn2 − n is a martingale). Similarly, a concave function of a martingale is a supermartingale.
== Martingales and stopping times ==
A stopping time with respect to a sequence of random variables X1, X2, X3, ... is a random variable τ with the property that for each t, the occurrence or non-occurrence of the event τ = t depends only on the values of X1, X2, X3, ..., Xt. The intuition behind the definition is that at any particular time t, you can look at the sequence so far and tell if it is time to stop. An example in real life might be the time at which a gambler leaves the gambling table, which might be a function of their previous winnings (for example, he might leave only when he goes broke), but he can't choose to go or stay based on the outcome of games that haven't been played yet.
In some contexts the concept of stopping time is defined by requiring only that the occurrence or non-occurrence of the event τ = t is probabilistically independent of Xt + 1, Xt + 2, ... but not that it is completely determined by the history of the process up to time t. That is a weaker condition than the one appearing in the paragraph above, but is strong enough to serve in some of the proofs in which stopping times are used.
One of the basic properties of martingales is that, if {\displaystyle (X_{t})_{t>0}} is a (sub-/super-) martingale and {\displaystyle \tau } is a stopping time, then the corresponding stopped process {\displaystyle (X_{t}^{\tau })_{t>0}} defined by {\displaystyle X_{t}^{\tau }:=X_{\min\{\tau ,t\}}} is also a (sub-/super-) martingale.
The concept of a stopped martingale leads to a series of important theorems, including, for example, the optional stopping theorem which states that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial value.
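As a rough illustration of stopped martingales and the optional stopping theorem, the sketch below simulates the fair coin-flipping fortune from the examples above, stops each path when it first leaves the interval (−10, 20) (with an arbitrary time cap as a safeguard), and checks that the mean stopped value stays near the initial fortune of 0. The barriers, cap and sample size are assumptions made only for the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, t_max = 20_000, 5000
lower, upper = -10, 20

stopped_values = np.zeros(n_paths)
for i in range(n_paths):
    x = 0
    for _ in range(t_max):                    # fair game: +1 or -1 with probability 1/2
        x += 1 if rng.random() < 0.5 else -1
        if x <= lower or x >= upper:          # stopping time: first exit from (lower, upper)
            break
    stopped_values[i] = x

# Optional stopping: the barriers keep the stopped martingale bounded, so the
# theorem applies and E[X_tau] = E[X_0] = 0.
print(stopped_values.mean())                  # ≈ 0
```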
== Martingale problem ==
The martingale problem is a framework in stochastic analysis for characterizing solutions to stochastic differential equations (SDEs) through martingale conditions.
=== General Martingale Problem (A, μ) ===
Let {\displaystyle E} be a Polish space with Borel {\displaystyle \sigma }-algebra {\displaystyle {\mathcal {E}}}, and let {\displaystyle {\mathcal {P}}(E)} be the set of probability measures on {\displaystyle E}. Suppose {\displaystyle A:{\mathcal {D}}(A)\to C(E)} is a Markov pregenerator, where {\displaystyle {\mathcal {D}}(A)} is a dense subspace of {\displaystyle C(E)}. A probability measure {\displaystyle \mathbb {P} } on the Skorokhod space {\displaystyle D_{E}[0,\infty )} solves the martingale problem {\displaystyle (A,\mu )} for {\displaystyle \mu \in {\mathcal {P}}(E)} if:
For every {\displaystyle \Gamma \in {\mathcal {E}}}, {\displaystyle \mathbb {P} \{\zeta :\zeta _{0}\in \Gamma \}=\mu (\Gamma ).}
For every {\displaystyle f\in {\mathcal {D}}(A)}, the process {\displaystyle f(\zeta _{t})-\int _{0}^{t}Af(\zeta _{s})\,ds} is a local martingale under {\displaystyle \mathbb {P} } with respect to its natural filtration.
If {\displaystyle \mu =\delta _{\eta }} (the Dirac measure at {\displaystyle \eta }), then {\displaystyle \mathbb {P} } is said to solve the martingale problem for {\displaystyle A} with initial point {\displaystyle \eta }.
=== Martingale Problem for Diffusions M(a, b) ===
A process {\displaystyle X=(X_{t})_{t\geq 0}} on a filtered probability space {\displaystyle (\Omega ,{\mathcal {F}},({\mathcal {F}}_{t}),\mathbb {P} )} solves the martingale problem {\displaystyle M(a,b)} for measurable functions {\displaystyle a:\mathbb {R} ^{d}\to \mathbb {S} _{+}^{d}} and {\displaystyle b:\mathbb {R} ^{d}\to \mathbb {R} ^{d}} if:
For each {\displaystyle 1\leq i\leq d}, {\displaystyle M_{t}^{i}=X_{t}^{i}-\int _{0}^{t}b_{i}(X_{s})\,ds} is a local martingale.
For each {\displaystyle 1\leq i,j\leq d}, {\displaystyle M_{t}^{i}M_{t}^{j}-\int _{0}^{t}a_{ij}(X_{s})\,ds} is a local martingale.
=== Connection to Stochastic Differential Equations ===
Solutions to {\displaystyle M(a,b)} correspond (in a weak sense) to solutions of the SDE {\displaystyle dX_{t}=b(X_{t})\,dt+\sigma (X_{t})\,dB_{t}}, where {\displaystyle \sigma \sigma ^{\top }=a}. One sees this by applying the generator {\displaystyle A} to simple functions such as {\displaystyle x_{i}} or {\displaystyle x_{i}x_{j}}, thereby recovering the drift {\displaystyle b} and the diffusion matrix {\displaystyle a}.
== See also ==
== Notes ==
== References ==
"Martingale", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"The Splendors and Miseries of Martingales". Electronic Journal for History of Probability and Statistics. 5 (1). June 2009. Entire issue dedicated to Martingale probability theory (Laurent Mazliak and Glenn Shafer, Editors).
Baldi, Paolo; Mazliak, Laurent; Priouret, Pierre (1991). Martingales and Markov Chains. Chapman and Hall. ISBN 978-1-584-88329-6.
Williams, David (1991). Probability with Martingales. Cambridge University Press. ISBN 978-0-521-40605-5.
Kleinert, Hagen (2004). Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets (4th ed.). Singapore: World Scientific. ISBN 981-238-107-4.
Richard, Mark; Vecer, Jan (2021). "Efficiency Testing of Prediction Markets: Martingale Approach, Likelihood Ratio and Bayes Factor Analysis". Risks. 9 (2): 31. doi:10.3390/risks9020031. hdl:10419/258120.
Siminelakis, Paris (2010). "Martingales and Stopping Times: Use of martingales in obtaining bounds and analyzing algorithms" (PDF). University of Athens. Archived from the original (PDF) on 2018-02-19. Retrieved 2010-06-18.
Ville, Jean (1939). "Étude critique de la notion de collectif". Bulletin of the American Mathematical Society. Monographies des Probabilités (in French). 3 (11). Paris: 824–825. doi:10.1090/S0002-9904-1939-07089-4. Zbl 0021.14601. Review by Doob.
Stroock, D. W. and Varadhan, S. R. S. (1979). Multidimensional Diffusion Processes. Springer.
Ethier, S. N. and Kurtz, T. G. (1986). Markov Processes: Characterization and Convergence. Wiley. | Wikipedia/Martingale_(probability_theory) |
In time series analysis, the moving-average model (MA model), also known as moving-average process, is a common approach for modeling univariate time series. The moving-average model specifies that the output variable depends linearly on the current and various past values of a stochastic (imperfectly predictable) error term.
Together with the autoregressive (AR) model, the moving-average model is a special case and key component of the more general ARMA and ARIMA models of time series, which have a more complicated stochastic structure. Contrary to the AR model, the finite MA model is always stationary.
The moving-average model should not be confused with the moving average, a distinct concept despite some similarities.
== Definition ==
The notation MA(q) refers to the moving average model of order q:
{\displaystyle X_{t}=\mu +\varepsilon _{t}+\theta _{1}\varepsilon _{t-1}+\cdots +\theta _{q}\varepsilon _{t-q}=\mu +\sum _{i=1}^{q}\theta _{i}\varepsilon _{t-i}+\varepsilon _{t},}
where {\displaystyle \mu } is the mean of the series, the {\displaystyle \theta _{1},...,\theta _{q}} are the coefficients of the model, and {\displaystyle \varepsilon _{t},\varepsilon _{t-1},...,\varepsilon _{t-q}} are the error terms. The value of q is called the order of the MA model. This can be equivalently written in terms of the backshift operator B as
{\displaystyle X_{t}=\mu +(1+\theta _{1}B+\cdots +\theta _{q}B^{q})\varepsilon _{t}.}
Thus, a moving-average model is conceptually a linear regression of the current value of the series against current and previous (observed) white noise error terms or random shocks. The random shocks at each point are assumed to be mutually independent and to come from the same distribution, typically a normal distribution, with location at zero and constant scale.
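A minimal simulation written directly from the definition above may make this concrete; the MA(2) coefficients and the Gaussian choice for the shocks are assumptions made only for the illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_ma(n, mu, thetas, sigma=1.0):
    """Simulate X_t = mu + eps_t + sum_i theta_i * eps_{t-i} (an MA(q) process)."""
    q = len(thetas)
    eps = rng.normal(0.0, sigma, n + q)       # white-noise shocks, with q warm-up values
    x = np.empty(n)
    for t in range(n):
        # eps[t + q] plays the role of eps_t; walk back i = 1..q for the lagged shocks
        x[t] = mu + eps[t + q] + sum(thetas[i - 1] * eps[t + q - i] for i in range(1, q + 1))
    return x

x = simulate_ma(n=10_000, mu=0.0, thetas=[0.6, -0.3])

# Sample autocorrelations: for an MA(2) process they should be
# noticeably nonzero at lags 1 and 2 and negligible at lags 3 and beyond.
xc = x - x.mean()
acf = [np.dot(xc[:-k], xc[k:]) / np.dot(xc, xc) for k in range(1, 6)]
print(np.round(acf, 3))
```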
== Interpretation ==
The moving-average model is essentially a finite impulse response filter applied to white noise, with some additional interpretation placed on it. The role of the random shocks in the MA model differs from their role in the autoregressive (AR) model in two ways. First, they are propagated to future values of the time series directly: for example, {\displaystyle \varepsilon _{t-1}} appears directly on the right side of the equation for {\displaystyle X_{t}}. In contrast, in an AR model {\displaystyle \varepsilon _{t-1}} does not appear on the right side of the {\displaystyle X_{t}} equation, but it does appear on the right side of the {\displaystyle X_{t-1}} equation, and {\displaystyle X_{t-1}} appears on the right side of the {\displaystyle X_{t}} equation, giving only an indirect effect of {\displaystyle \varepsilon _{t-1}} on {\displaystyle X_{t}}. Second, in the MA model a shock affects {\displaystyle X} values only for the current period and q periods into the future; in contrast, in the AR model a shock affects {\displaystyle X} values infinitely far into the future, because {\displaystyle \varepsilon _{t}} affects {\displaystyle X_{t}}, which affects {\displaystyle X_{t+1}}, which affects {\displaystyle X_{t+2}}, and so on forever (see Impulse response).
== Fitting the model ==
Fitting a moving-average model is generally more complicated than fitting an autoregressive model. This is because the lagged error terms are not observable. This means that iterative non-linear fitting procedures need to be used in place of linear least squares. Moving average models are linear combinations of past white noise terms, while autoregressive models are linear combinations of past time series values. ARMA models are more complicated than pure AR and MA models, as they combine both autoregressive and moving average components.
The autocorrelation function (ACF) of an MA(q) process is zero at lag q + 1 and greater. Therefore, we determine the appropriate maximum lag for the estimation by examining the sample autocorrelation function to see where it becomes insignificantly different from zero for all lags beyond a certain lag, which is designated as the maximum lag q.
Sometimes the ACF and partial autocorrelation function (PACF) will suggest that an MA model would be a better model choice and sometimes both AR and MA terms should be used in the same model (see Box–Jenkins method).
Autoregressive Integrated Moving Average (ARIMA) models are an alternative to segmented regression that can also be used for fitting a moving-average model.
== See also ==
Autoregressive–moving-average model
Autoregressive integrated moving average
Autoregressive model
Finite impulse response
Infinite impulse response
== References ==
== Further reading ==
Enders, Walter (2004). "Stationary Time-Series Models". Applied Econometric Time Series (Second ed.). New York: Wiley. pp. 48–107. ISBN 0-471-45173-8.
== External links ==
Common approaches to univariate time series
This article incorporates public domain material from the National Institute of Standards and Technology | Wikipedia/Moving-average_model |
A maximal entropy random walk (MERW) is a popular type of biased random walk on a graph, in which transition probabilities are chosen accordingly to the principle of maximum entropy, which says that the probability distribution which best represents the current state of knowledge is the one with largest entropy. While a standard random walk samples for every vertex a uniform probability distribution of outgoing edges, locally maximizing entropy rate, MERW maximizes it globally (average entropy production) by sampling a uniform probability distribution among all paths in a given graph.
MERW is used in various fields of science. A direct application is choosing probabilities to maximize transmission rate through a constrained channel, analogously to Fibonacci coding. Its properties also made it useful for example in analysis of complex networks, like link prediction, community detection,
robust transport over networks and centrality measures. It is also used in image analysis, for example for detecting visual saliency regions, object localization, tampering detection or tractography problem.
Additionally, it recreates some properties of quantum mechanics, suggesting a way to repair the discrepancy between diffusion models and quantum predictions, like Anderson localization.
== Basic model ==
Consider a graph with {\displaystyle n} vertices, defined by an adjacency matrix {\displaystyle A\in \left\{0,1\right\}^{n\times n}}: {\displaystyle A_{ij}=1} if there is an edge from vertex {\displaystyle i} to {\displaystyle j}, 0 otherwise. For simplicity, assume it is an undirected graph, which corresponds to a symmetric {\displaystyle A}; however, MERWs can also be generalized for directed and weighted graphs (for example using a Boltzmann distribution among paths instead of the uniform one).
We would like to choose a random walk as a Markov process on this graph: for every vertex {\displaystyle i} and its outgoing edge to {\displaystyle j}, choose probability {\displaystyle S_{ij}} of the walker randomly using this edge after visiting {\displaystyle i}. Formally, find a stochastic matrix {\displaystyle S} (containing the transition probabilities of a Markov chain) such that
{\displaystyle 0\leq S_{ij}\leq A_{ij}} for all {\displaystyle i,j} and
{\displaystyle \sum _{j=1}^{n}S_{ij}=1} for all {\displaystyle i}.
Assuming this graph is connected and not periodic, ergodic theory says that evolution of this stochastic process leads to some stationary probability distribution {\displaystyle \rho } such that {\displaystyle \rho S=\rho }.
Using Shannon entropy for every vertex and averaging over probability of visiting this vertex (to be able to use its entropy), we get the following formula for average entropy production (entropy rate) of the stochastic process:
{\displaystyle H(S)=\sum _{i=1}^{n}\rho _{i}\sum _{j=1}^{n}S_{ij}\log(1/S_{ij})}
This definition turns out to be equivalent to the asymptotic average entropy (per length) of the probability distribution in the space of paths for this stochastic process.
In the standard random walk, referred to here as generic random walk (GRW), we naturally choose that each outgoing edge is equally probable:
{\displaystyle S_{ij}={\frac {A_{ij}}{\sum \limits _{k=1}^{n}A_{ik}}}.}
For a symmetric {\displaystyle A} it leads to a stationary probability distribution {\displaystyle \rho } with
{\displaystyle \rho _{i}={\frac {\sum \limits _{j=1}^{n}A_{ij}}{\sum \limits _{i=1}^{n}\sum \limits _{j=1}^{n}A_{ij}}}.}
It locally maximizes entropy production (uncertainty) for every vertex, but usually leads to a suboptimal averaged global entropy rate {\displaystyle H(S)}.
MERW chooses the stochastic matrix which maximizes {\displaystyle H(S)}, or equivalently assumes a uniform probability distribution among all paths in a given graph. Its formula is obtained by first calculating the dominant eigenvalue {\displaystyle \lambda } and corresponding eigenvector {\displaystyle \psi } of the adjacency matrix, i.e. the largest {\displaystyle \lambda \in \mathbb {R} } with corresponding {\displaystyle \psi \in \mathbb {R} ^{n}} such that {\displaystyle \psi A=\lambda \psi }. Then the stochastic matrix and stationary probability distribution are given by
{\displaystyle S_{ij}={\frac {A_{ij}}{\lambda }}{\frac {\psi _{j}}{\psi _{i}}}}
for which every possible path of length {\displaystyle l} from the {\displaystyle i}-th to {\displaystyle j}-th vertex has probability
{\displaystyle {\frac {1}{\lambda ^{l}}}{\frac {\psi _{j}}{\psi _{i}}}.}
Its entropy rate is {\displaystyle \log(\lambda )} and the stationary probability distribution {\displaystyle \rho } is
{\displaystyle \rho _{i}={\frac {\psi _{i}^{2}}{\|\psi \|_{2}^{2}}}.}
In contrast to GRW, the MERW transition probabilities generally depend on the structure of the entire graph, making it nonlocal. Hence, they should not be imagined as directly applied by the walker – if random-looking decisions are made based on the local situation, like a person would make, the GRW approach is more appropriate. MERW is based on the principle of maximum entropy, making it the safest assumption when we do not have any additional knowledge about the system. For example, it would be appropriate for modelling our knowledge about an object performing some complex dynamics – not necessarily random, like a particle.
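In practice the MERW transition matrix can be obtained directly from the formulas above by computing the dominant eigenpair of the adjacency matrix. A minimal NumPy sketch, for a small undirected graph chosen arbitrarily for the example:

```python
import numpy as np

# Adjacency matrix of a small undirected graph (a 4-cycle with one chord).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)

# Dominant eigenpair: A is symmetric, so eigh returns real eigenvalues in ascending
# order; by Perron-Frobenius the top eigenvector can be taken elementwise positive.
w, v = np.linalg.eigh(A)
lam, psi = w[-1], np.abs(v[:, -1])

S = A / lam * (psi[None, :] / psi[:, None])   # S_ij = A_ij * psi_j / (lam * psi_i)
rho = psi**2 / np.sum(psi**2)                 # stationary distribution rho_i = psi_i^2 / ||psi||^2

print(S.sum(axis=1))        # each row sums to 1 (stochastic matrix)
print(rho @ S - rho)        # ≈ 0, confirming rho is stationary
```

Since A here is dense and symmetric, `numpy.linalg.eigh` is a convenient choice; for large sparse graphs a power iteration or `scipy.sparse.linalg.eigsh` would typically be used instead.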
=== Sketch of derivation ===
Assume for simplicity that the considered graph is undirected, connected and aperiodic, allowing one to conclude from the Perron–Frobenius theorem that the dominant eigenvector is unique. Hence {\displaystyle A^{l}} can be asymptotically ({\displaystyle l\rightarrow \infty }) approximated by {\displaystyle \lambda ^{l}\psi \psi ^{T}} (or {\displaystyle \lambda ^{l}|\psi \rangle \langle \psi |} in bra–ket notation).
MERW requires a uniform distribution along paths. The number {\displaystyle m_{il}} of paths with length {\displaystyle 2l} and vertex {\displaystyle i} in the center is
{\displaystyle m_{il}=\sum _{j=1}^{n}\sum _{k=1}^{n}\left(A^{l}\right)_{ji}\left(A^{l}\right)_{ik}\approx \sum _{j=1}^{n}\sum _{k=1}^{n}\left(\lambda ^{l}\psi \psi ^{\top }\right)_{ji}\left(\lambda ^{l}\psi \psi ^{\top }\right)_{ik}=\sum _{j=1}^{n}\sum _{k=1}^{n}\lambda ^{2l}\psi _{j}\psi _{i}\psi _{i}\psi _{k}=\lambda ^{2l}\psi _{i}^{2}\underbrace {\sum _{j=1}^{n}\psi _{j}\sum _{k=1}^{n}\psi _{k}} _{=:b},}
hence for all {\displaystyle i},
{\displaystyle \rho _{i}=\lim _{l\rightarrow \infty }{\frac {m_{il}}{\sum \limits _{k=1}^{n}m_{kl}}}=\lim _{l\rightarrow \infty }{\frac {\lambda ^{2l}\psi _{i}^{2}b}{\sum \limits _{k=1}^{n}\lambda ^{2l}\psi _{k}^{2}b}}={\frac {\psi _{i}^{2}}{\sum \limits _{k=1}^{n}\psi _{k}^{2}}}={\frac {\psi _{i}^{2}}{\|\psi \|_{2}^{2}}}.}
Analogously calculating the probability distribution for two succeeding vertices, one obtains that the probability of being at the {\displaystyle i}-th vertex and next at the {\displaystyle j}-th vertex is
{\displaystyle {\frac {\psi _{i}A_{ij}\psi _{j}}{\sum \limits _{i'=1}^{n}\sum \limits _{j'=1}^{n}\psi _{i'}A_{i'j'}\psi _{j'}}}={\frac {\psi _{i}A_{ij}\psi _{j}}{\psi A\psi ^{\top }}}={\frac {\psi _{i}A_{ij}\psi _{j}}{\lambda \|\psi \|_{2}^{2}}}.}
Dividing by the probability of being at the {\displaystyle i}-th vertex, i.e. {\displaystyle \rho _{i}}, gives the conditional probability {\displaystyle S_{ij}} of the {\displaystyle j}-th vertex being next after the {\displaystyle i}-th vertex:
{\displaystyle S_{ij}={\frac {A_{ij}}{\lambda }}{\frac {\psi _{j}}{\psi _{i}}}.}
=== Weighted MERW: Boltzmann path ensemble ===
We have assumed that {\displaystyle A_{ij}\in \{0,1\}}, yielding a MERW corresponding to the uniform ensemble among paths. However, the above derivation works for any real nonnegative {\displaystyle A} for which the Perron–Frobenius theorem applies. Given {\displaystyle A_{ij}=\exp(-E_{ij})}, the probability of a particular length-{\displaystyle l} path {\displaystyle (\gamma _{0},\ldots ,\gamma _{l})} is as follows:
{\displaystyle {\textrm {Pr}}(\gamma _{0},\ldots ,\gamma _{l})=\rho _{\gamma _{0}}S_{\gamma _{0}\gamma _{1}}\ldots S_{\gamma _{l-1}\gamma _{l}}=\psi _{\gamma _{0}}{\frac {A_{\gamma _{0}\gamma _{1}}\ldots A_{\gamma _{l-1}\gamma _{l}}}{\lambda ^{l}}}\psi _{\gamma _{l}}=\psi _{\gamma _{0}}{\frac {\exp(-(E_{\gamma _{0}\gamma _{1}}+\ldots +E_{\gamma _{l-1}\gamma _{l}}))}{\lambda ^{l}}}\psi _{\gamma _{l}},}
which is the same as the Boltzmann distribution of paths with energy defined as the sum of {\displaystyle E_{ij}} over the edges of the path. For example, this can be used with the transfer matrix to calculate the probability distribution of patterns in the Ising model.
== Examples ==
Let us first look at a simple nontrivial situation: Fibonacci coding, where we want to transmit a message as a sequence of 0s and 1s, but not using two successive 1s: after a 1 there has to be a 0. To maximize the amount of information transmitted in such sequence, we should assume a uniform probability distribution in the space of all possible sequences fulfilling this constraint.
To practically use such long sequences, after 1 we have to use 0, but there remains the freedom of choosing the probability of 0 after 0. Let us denote this probability by {\displaystyle q}. Entropy coding allows encoding a message using this chosen probability distribution. The stationary probability distribution of symbols for a given {\displaystyle q} turns out to be {\displaystyle \rho =(1/(2-q),1-1/(2-q))}. Hence, the entropy produced is
{\displaystyle H(S)=\rho _{0}\left(q\log(1/q)+(1-q)\log(1/(1-q))\right),}
which is maximized for {\displaystyle q=({\sqrt {5}}-1)/2\approx 0.618}, known as the golden ratio. In contrast, a standard random walk would choose the suboptimal {\displaystyle q=0.5}. While choosing a larger {\displaystyle q} reduces the amount of information produced after 0, it also reduces the frequency of 1, after which we cannot write any information.
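A quick numerical check of this maximization (purely illustrative):

```python
import numpy as np

def entropy_rate(q):
    """Entropy rate of the Fibonacci-coding walk: rho_0 * H(q) with rho_0 = 1/(2 - q)."""
    return (q * np.log2(1 / q) + (1 - q) * np.log2(1 / (1 - q))) / (2 - q)

qs = np.linspace(0.01, 0.99, 9999)
best = qs[np.argmax(entropy_rate(qs))]
print(best, (np.sqrt(5) - 1) / 2)        # both ≈ 0.618
print(entropy_rate(best))                # ≈ log2(golden ratio) ≈ 0.694 bits per symbol
```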
A more complex case is the defected one-dimensional cyclic lattice, for example, a ring with 1000 connected nodes, for which all nodes but the defects have a self-loop (edge to itself). In a standard random walk (GRW), the stationary probability distribution would have the defect probability be 2/3 of probability of the non-defect vertices – there is nearly no localization, also analogously for standard diffusion, which is the infinitesimal limit of a GRW. For a MERW, we have to first find the dominant eigenvector of the adjacency matrix – maximizing
{\displaystyle \lambda } in:
{\displaystyle (\lambda \psi )_{x}=(A\psi )_{x}=\psi _{x-1}+(1-V_{x})\psi _{x}+\psi _{x+1}}
for all positions {\displaystyle x}, where {\displaystyle V_{x}=1} for defects, 0 otherwise. Subtracting {\displaystyle 3\psi _{x}} from both sides and multiplying the equation by −1, we get:
{\displaystyle E\psi _{x}=-(\psi _{x-1}-2\psi _{x}+\psi _{x+1})+V_{x}\psi _{x}}
where {\displaystyle E=3-\lambda } is minimized now, becoming the analog of energy. The formula inside the bracket is the discrete Laplace operator, making this equation a discrete analogue of the stationary Schrödinger equation. As in quantum mechanics, MERW predicts that the probability distribution is that of the quantum ground state: {\displaystyle \rho _{x}\propto \psi _{x}^{2}} with its strongly localized density (in contrast to standard diffusion). Taking the infinitesimal limit, we can get the standard continuous stationary (time-independent) Schrödinger equation ({\displaystyle E\psi =-C\psi _{xx}+V\psi } for {\displaystyle C=\hbar ^{2}/2m}) here.
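The localization described in this example can be reproduced numerically by comparing the two stationary distributions; the ring size and defect positions below are arbitrary illustrative choices.

```python
import numpy as np

n = 200
A = np.zeros((n, n))
for x in range(n):
    A[x, (x + 1) % n] = A[(x + 1) % n, x] = 1.0   # ring edges
    A[x, x] = 1.0                                  # self-loops ...
for d in (50, 150):
    A[d, d] = 0.0                                  # ... removed at the defects

# GRW stationary distribution: proportional to the row sums of A.
deg = A.sum(axis=1)
rho_grw = deg / deg.sum()

# MERW stationary distribution: square of the dominant eigenvector.
w, v = np.linalg.eigh(A)
psi = np.abs(v[:, -1])
rho_merw = psi**2 / np.sum(psi**2)

# GRW stays almost uniform, while MERW concentrates in defect-free regions.
print(rho_grw.min() / rho_grw.max())    # = 2/3 (defect vs non-defect vertex)
print(rho_merw.max() / rho_merw.min())  # >> 1: strong localization
```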
== See also ==
Principle of maximum entropy
Eigenvector centrality
Markov chain
Anderson localization
== References ==
== External links ==
Gábor Simonyi, Y. Lin, Z. Zhang, "Mean first-passage time for maximal-entropy random walks in complex networks". Scientific Reports, 2014.
Electron Conductance Models Using Maximal Entropy Random Walks Wolfram Demonstration Project | Wikipedia/Maximal_entropy_random_walk |
The Ising model (or Lenz–Ising model), named after the physicists Ernst Ising and Wilhelm Lenz, is a mathematical model of ferromagnetism in statistical mechanics. The model consists of discrete variables that represent magnetic dipole moments of atomic "spins" that can be in one of two states (+1 or −1). The spins are arranged in a graph, usually a lattice (where the local structure repeats periodically in all directions), allowing each spin to interact with its neighbors. Neighboring spins that agree have a lower energy than those that disagree; the system tends to the lowest energy but heat disturbs this tendency, thus creating the possibility of different structural phases. The model allows the identification of phase transitions as a simplified model of reality. The two-dimensional square-lattice Ising model is one of the simplest statistical models to show a phase transition.
The Ising model was invented by the physicist Wilhelm Lenz (1920), who gave it as a problem to his student Ernst Ising. The one-dimensional Ising model was solved by Ising (1925) alone in his 1924 thesis; it has no phase transition. The two-dimensional square-lattice Ising model is much harder and was only given an analytic description much later, by Lars Onsager (1944). It is usually solved by a transfer-matrix method, although there exists a very simple approach relating the model to a non-interacting fermionic quantum field theory.
In dimensions greater than four, the phase transition of the Ising model is described by mean-field theory. The Ising model for greater dimensions was also explored with respect to various tree topologies in the late 1970s, culminating in an exact solution of the zero-field, time-independent Barth (1981) model for closed Cayley trees of arbitrary branching ratio, and thereby, arbitrarily large dimensionality within tree branches. The solution to this model exhibited a new, unusual phase transition behavior, along with non-vanishing long-range and nearest-neighbor spin-spin correlations, deemed relevant to large neural networks as one of its possible applications.
The Ising problem without an external field can be equivalently formulated as a graph maximum cut (Max-Cut) problem that can be solved via combinatorial optimization.
== Definition ==
Consider a set {\displaystyle \Lambda } of lattice sites, each with a set of adjacent sites (e.g. a graph) forming a {\displaystyle d}-dimensional lattice. For each lattice site {\displaystyle k\in \Lambda } there is a discrete variable {\displaystyle \sigma _{k}} such that {\displaystyle \sigma _{k}\in \{-1,+1\}}, representing the site's spin. A spin configuration, {\displaystyle {\sigma }=\{\sigma _{k}\}_{k\in \Lambda }}, is an assignment of spin value to each lattice site.
For any two adjacent sites {\displaystyle i,j\in \Lambda } there is an interaction {\displaystyle J_{ij}}. Also a site {\displaystyle j\in \Lambda } has an external magnetic field {\displaystyle h_{j}} interacting with it. The energy of a configuration {\displaystyle {\sigma }} is given by the Hamiltonian function
{\displaystyle H(\sigma )=-\sum _{\langle ij\rangle }J_{ij}\sigma _{i}\sigma _{j}-\mu \sum _{j}h_{j}\sigma _{j},}
where the first sum is over pairs of adjacent spins (every pair is counted once). The notation {\displaystyle \langle ij\rangle } indicates that sites {\displaystyle i} and {\displaystyle j} are nearest neighbors. The magnetic moment is given by {\displaystyle \mu }
. Note that the sign in the second term of the Hamiltonian above should actually be positive because the electron's magnetic moment is antiparallel to its spin, but the negative term is used conventionally. The Ising Hamiltonian is an example of a pseudo-Boolean function; tools from the analysis of Boolean functions can be applied to describe and study it.
The configuration probability is given by the Boltzmann distribution with inverse temperature {\displaystyle \beta \geq 0}:
{\displaystyle P_{\beta }(\sigma )={\frac {e^{-\beta H(\sigma )}}{Z_{\beta }}},}
where {\displaystyle \beta =1/(k_{\text{B}}T)}, and the normalization constant
{\displaystyle Z_{\beta }=\sum _{\sigma }e^{-\beta H(\sigma )}}
is the partition function. For a function {\displaystyle f} of the spins ("observable"), one denotes by
{\displaystyle \langle f\rangle _{\beta }=\sum _{\sigma }f(\sigma )P_{\beta }(\sigma )}
the expectation (mean) value of {\displaystyle f}.
The configuration probabilities {\displaystyle P_{\beta }(\sigma )} represent the probability that (in equilibrium) the system is in a state with configuration {\displaystyle \sigma }.
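For a small system these definitions can be evaluated by direct enumeration. The Python sketch below (a toy example; the couplings, field, inverse temperature, and lattice are arbitrary choices, not from the article) computes the partition function, the Boltzmann probabilities, and the expectation of the magnetization for four spins on a 2×2 open square.

import itertools
import numpy as np

# Toy system (illustrative): 4 spins, uniform coupling J, uniform field h, mu = 1.
J, h, beta = 1.0, 0.2, 0.7
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]

def H(sigma):
    # H(sigma) = -sum_<ij> J sigma_i sigma_j - sum_j h sigma_j
    pair = -J * sum(sigma[i] * sigma[j] for i, j in edges)
    field = -h * sum(sigma)
    return pair + field

configs = list(itertools.product([-1, +1], repeat=4))
weights = np.array([np.exp(-beta * H(s)) for s in configs])
Z = weights.sum()                      # partition function Z_beta
P = weights / Z                        # Boltzmann probabilities P_beta(sigma)

# Expectation of an observable f(sigma), here the magnetization per spin.
f = np.array([np.mean(s) for s in configs])
print("Z =", Z, " <m> =", float(P @ f))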
=== Discussion ===
The minus sign on each term of the Hamiltonian function {\displaystyle H(\sigma )} is conventional. Using this sign convention, Ising models can be classified according to the sign of the interaction: if, for a pair i, j, the interaction {\displaystyle J_{ij}>0}, it is called ferromagnetic; if {\displaystyle J_{ij}<0}, it is called antiferromagnetic; and if {\displaystyle J_{ij}=0}, the spins are noninteracting.
The system is called ferromagnetic or antiferromagnetic if all interactions are ferromagnetic or all are antiferromagnetic. The original Ising models were ferromagnetic, and it is still often assumed that "Ising model" means a ferromagnetic Ising model.
In a ferromagnetic Ising model, spins desire to be aligned: the configurations in which adjacent spins are of the same sign have higher probability. In an antiferromagnetic model, adjacent spins tend to have opposite signs.
The sign convention of H(σ) also explains how a spin site j interacts with the external field: the spin site wants to line up with the external field. If {\displaystyle h_{j}>0}, the spin site j tends toward the +1 direction; if {\displaystyle h_{j}<0}, it tends toward the −1 direction.
=== Simplifications ===
Ising models are often examined without an external field interacting with the lattice, that is, h = 0 for all j in the lattice Λ. Using this simplification, the Hamiltonian becomes
{\displaystyle H(\sigma )=-\sum _{\langle i~j\rangle }J_{ij}\sigma _{i}\sigma _{j}.}
When the external field is zero everywhere, h = 0, the Ising model is symmetric under switching the value of the spin in all the lattice sites; a nonzero field breaks this symmetry.
Another common simplification is to assume that all of the nearest neighbors ⟨ij⟩ have the same interaction strength. Then we can set Jij = J for all pairs i, j in Λ. In this case the Hamiltonian is further simplified to
{\displaystyle H(\sigma )=-J\sum _{\langle i~j\rangle }\sigma _{i}\sigma _{j}.}
=== Connection to graph maximum cut ===
A subset S of the vertex set V(G) of a weighted undirected graph G determines a cut of the graph G into S and its complementary subset G\S. The size of the cut is the sum of the weights of the edges between S and G\S. A maximum cut is a cut whose size is at least the size of every other cut, over all choices of S.
For the Ising model without an external field on a graph G, the Hamiltonian becomes the following sum over the graph edges E(G)
{\displaystyle H(\sigma )=-\sum _{ij\in E(G)}J_{ij}\sigma _{i}\sigma _{j}}.
Here each vertex i of the graph is a spin site that takes a spin value {\displaystyle \sigma _{i}=\pm 1}. A given spin configuration {\displaystyle \sigma } partitions the set of vertices {\displaystyle V(G)} into two {\displaystyle \sigma }-dependent subsets, those with spin up {\displaystyle V^{+}} and those with spin down {\displaystyle V^{-}}. We denote by {\displaystyle \delta (V^{+})} the {\displaystyle \sigma }-dependent set of edges that connects the two complementary vertex subsets {\displaystyle V^{+}} and {\displaystyle V^{-}}. The size {\displaystyle \left|\delta (V^{+})\right|} of the cut {\displaystyle \delta (V^{+})}, which bipartitions the weighted undirected graph G, can be defined as
{\displaystyle \left|\delta (V^{+})\right|={\frac {1}{2}}\sum _{ij\in \delta (V^{+})}W_{ij},}
where {\displaystyle W_{ij}} denotes a weight of the edge {\displaystyle ij} and the scaling 1/2 is introduced to compensate for double counting the same weights {\displaystyle W_{ij}=W_{ji}}.
The identities
{\displaystyle {\begin{aligned}H(\sigma )&=-\sum _{ij\in E(V^{+})}J_{ij}-\sum _{ij\in E(V^{-})}J_{ij}+\sum _{ij\in \delta (V^{+})}J_{ij}\\&=-\sum _{ij\in E(G)}J_{ij}+2\sum _{ij\in \delta (V^{+})}J_{ij},\end{aligned}}}
where the total sum in the first term does not depend on {\displaystyle \sigma }, imply that minimizing {\displaystyle H(\sigma )} in {\displaystyle \sigma } is equivalent to minimizing {\displaystyle \sum _{ij\in \delta (V^{+})}J_{ij}}
. Defining the edge weight {\displaystyle W_{ij}=-J_{ij}} thus turns the Ising problem without an external field into a graph Max-Cut problem maximizing the cut size {\displaystyle \left|\delta (V^{+})\right|}, which is related to the Ising Hamiltonian as follows,
{\displaystyle H(\sigma )=\sum _{ij\in E(G)}W_{ij}-4\left|\delta (V^{+})\right|.}
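A brute-force check of the reduction on a toy graph is sketched below in Python (the graph and couplings are arbitrary illustrations). Rather than relying on a particular factor convention for the cut size, it verifies the convention-independent content of the identities: with W_ij = −J_ij, the quantity H(σ) plus twice the total weight of the edges crossing the partition equals the constant ∑W_ij for every configuration, so minimizing H is the same as maximizing the cut.

import itertools

# Toy graph (illustrative): 5 vertices, edges with mixed-sign couplings J_ij.
J = {(0, 1): 1.0, (1, 2): -0.5, (2, 3): 2.0, (3, 4): -1.5, (0, 4): 0.8, (1, 3): 0.3}
W = {e: -j for e, j in J.items()}          # edge weights of the Max-Cut instance

def H(sigma):                              # zero-field Ising energy
    return -sum(j * sigma[i] * sigma[k] for (i, k), j in J.items())

def cut(sigma):                            # total weight of edges crossing the partition
    return sum(w for (i, k), w in W.items() if sigma[i] != sigma[k])

total_W = sum(W.values())
for sigma in itertools.product([-1, +1], repeat=5):
    assert abs(H(sigma) + 2 * cut(sigma) - total_W) < 1e-12

best_ising = min(itertools.product([-1, +1], repeat=5), key=H)
best_cut = max(itertools.product([-1, +1], repeat=5), key=cut)
print("ground state:", best_ising, " max-cut partition:", best_cut)  # same split, up to a global flip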
=== Questions ===
A significant number of statistical questions about this model concern the limit of a large number of spins:
In a typical configuration, are most of the spins +1 or −1, or are they split equally?
If a spin at any given position i is 1, what is the probability that the spin at position j is also 1?
If β is changed, is there a phase transition?
On a lattice Λ, what is the fractal dimension of the shape of a large cluster of +1 spins?
== Basic properties and history ==
The most studied case of the Ising model is the translation-invariant ferromagnetic zero-field model on a d-dimensional lattice, namely, Λ = Zd, Jij = 1, h = 0.
=== No phase transition in one dimension ===
In his 1924 PhD thesis, Ising solved the model for the d = 1 case, which can be thought of as a linear horizontal lattice where each site only interacts with its left and right neighbor. In one dimension, the solution admits no phase transition. Namely, for any positive β, the correlations ⟨σiσj⟩ decay exponentially in |i − j|:
{\displaystyle \langle \sigma _{i}\sigma _{j}\rangle _{\beta }\leq C\exp \left(-c(\beta )|i-j|\right),}
and the system is disordered. On the basis of this result, he incorrectly concluded that this model does not exhibit phase behaviour in any dimension.
=== Phase transition and exact solution in two dimensions ===
The Ising model undergoes a phase transition between an ordered and a disordered phase in 2 dimensions or more. Namely, the system is disordered for small β, whereas for large β the system exhibits ferromagnetic order:
{\displaystyle \langle \sigma _{i}\sigma _{j}\rangle _{\beta }\geq c(\beta )>0.}
This was first proven by Rudolf Peierls in 1936, using what is now called a Peierls argument.
The Ising model on a two-dimensional square lattice with no magnetic field was analytically solved by Lars Onsager (1944). Onsager obtained the correlation functions and free energy of the Ising model and announced the formula for the spontaneous magnetization for the 2-dimensional model in 1949 but did not give a derivation. Yang (1952) gave the first published proof of this formula, using a limit formula for Fredholm determinants, proved in 1951 by Szegő in direct response to Onsager's work.
=== Correlation inequalities ===
A number of correlation inequalities have been derived rigorously for the Ising spin correlations (for general lattice structures), which have enabled mathematicians to study the Ising model both on and off criticality.
==== Griffiths inequality ====
Given any subset of spins {\displaystyle \sigma _{A}} and {\displaystyle \sigma _{B}} on the lattice, the following inequality holds,
{\displaystyle \langle \sigma _{A}\sigma _{B}\rangle \geq \langle \sigma _{A}\rangle \langle \sigma _{B}\rangle ,}
where {\displaystyle \langle \sigma _{A}\rangle =\langle \prod _{j\in A}\sigma _{j}\rangle }.
With {\displaystyle B=\emptyset }, the special case {\displaystyle \langle \sigma _{A}\rangle \geq 0} results.
This means that spins are positively correlated on the Ising ferromagnet. An immediate application of this is that the magnetization of any set of spins {\displaystyle \langle \sigma _{A}\rangle } is increasing with respect to any set of coupling constants {\displaystyle J_{B}}.
==== Simon-Lieb inequality ====
The Simon-Lieb inequality states that for any set {\displaystyle S} disconnecting {\displaystyle x} from {\displaystyle y} (e.g. the boundary of a box with {\displaystyle x} being inside the box and {\displaystyle y} being outside),
{\displaystyle \langle \sigma _{x}\sigma _{y}\rangle \leq \sum _{z\in S}\langle \sigma _{x}\sigma _{z}\rangle \langle \sigma _{z}\sigma _{y}\rangle .}
This inequality can be used to establish the sharpness of phase transition for the Ising model.
==== FKG inequality ====
This inequality was first proven for a type of positively-correlated percolation model, which includes a representation of the Ising model. It is used to determine the critical temperatures of the planar Potts model using percolation arguments (the Potts model includes the Ising model as a special case).
== Historical significance ==
One of Democritus' arguments in support of atomism was that atoms naturally explain the sharp phase boundaries observed in materials, as when ice melts to water or water turns to steam. His idea was that small changes in atomic-scale properties would lead to big changes in the aggregate behavior. Others believed that matter is inherently continuous, not atomic, and that the large-scale properties of matter are not reducible to basic atomic properties.
While the laws of chemical bonding made it clear to nineteenth century chemists that atoms were real, among physicists the debate continued well into the early twentieth century. Atomists, notably James Clerk Maxwell and Ludwig Boltzmann, applied Hamilton's formulation of Newton's laws to large systems, and found that the statistical behavior of the atoms correctly describes room temperature gases. But classical statistical mechanics did not account for all of the properties of liquids and solids, nor of gases at low temperature.
Once modern quantum mechanics was formulated, atomism was no longer in conflict with experiment, but this did not lead to a universal acceptance of statistical mechanics, which went beyond atomism. Josiah Willard Gibbs had given a complete formalism to reproduce the laws of thermodynamics from the laws of mechanics. But many faulty arguments survived from the 19th century, when statistical mechanics was considered dubious. The lapses in intuition mostly stemmed from the fact that the limit of an infinite statistical system has many zero-one laws which are absent in finite systems: an infinitesimal change in a parameter can lead to big differences in the overall, aggregate behavior, as Democritus expected.
=== No phase transitions in finite volume ===
In the early part of the twentieth century, some believed that the partition function could never describe a phase transition, based on the following argument:
The partition function is a sum of e−βE over all configurations.
The exponential function is everywhere analytic as a function of β.
The sum of analytic functions is an analytic function.
This argument works for a finite sum of exponentials, and correctly establishes that there are no singularities in the free energy of a system of a finite size. For systems which are in the thermodynamic limit (that is, for infinite systems) the infinite sum can lead to singularities. The convergence to the thermodynamic limit is fast, so that the phase behavior is apparent already on a relatively small lattice, even though the singularities are smoothed out by the system's finite size.
This was first established by Rudolf Peierls in the Ising model.
=== Peierls droplets ===
Shortly after Lenz and Ising constructed the Ising model, Peierls was able to explicitly show that a phase transition occurs in two dimensions.
To do this, he compared the high-temperature and low-temperature limits. At infinite temperature (β = 0) all configurations have equal probability. Each spin is completely independent of any other, and if typical configurations at infinite temperature are plotted so that plus/minus are represented by black and white, they look like television snow. For high, but not infinite temperature, there are small correlations between neighboring positions, the snow tends to clump a little bit, but the screen stays randomly looking, and there is no net excess of black or white.
A quantitative measure of the excess is the magnetization, which is the average value of the spin:
{\displaystyle M={\frac {1}{N}}\sum _{i=1}^{N}\sigma _{i}.}
A bogus argument analogous to the argument in the last section now establishes that the average magnetization in the Ising model is always zero.
Every configuration of spins has equal energy to the configuration with all spins flipped.
So for every configuration with magnetization M there is a configuration with magnetization −M with equal probability.
The system should therefore spend equal amounts of time in the configuration with magnetization M as with magnetization −M.
So the average magnetization (over all time) is zero.
As before, this only proves that the average magnetization is zero at any finite volume. For an infinite system, fluctuations might not be able to push the system from a mostly plus state to a mostly minus with a nonzero probability.
For very high temperatures, the magnetization is zero, as it is at infinite temperature. To see this, note that if spin A has only a small correlation ε with spin B, and B is only weakly correlated with C, but C is otherwise independent of A, the amount of correlation of A and C goes like ε2. For two spins separated by distance L, the amount of correlation goes as εL, but if there is more than one path by which the correlations can travel, this amount is enhanced by the number of paths.
The number of paths of length L on a square lattice in d dimensions is
{\displaystyle N(L)=(2d)^{L},}
since there are 2d choices for where to go at each step.
A bound on the total correlation is given by the contribution to the correlation from summing over all paths linking two points, which is bounded above by the sum over all paths of length L,
{\displaystyle \sum _{L}(2d)^{L}\varepsilon ^{L},}
which goes to zero when ε is small.
At low temperatures (β ≫ 1) the configurations are near the lowest-energy configuration, the one where all the spins are plus or all the spins are minus. Peierls asked whether it is statistically possible at low temperature, starting with all the spins minus, to fluctuate to a state where most of the spins are plus. For this to happen, droplets of plus spin must be able to congeal to make the plus state.
The energy of a droplet of plus spins in a minus background is proportional to the perimeter of the droplet L, where plus spins and minus spins neighbor each other. For a droplet with perimeter L, the area is somewhere between (L − 2)/2 (the straight line) and (L/4)2 (the square box). The probability cost for introducing a droplet has the factor e−βL, but this contributes to the partition function multiplied by the total number of droplets with perimeter L, which is less than the total number of paths of length L:
{\displaystyle N(L)<4^{2L}.}
So that the total spin contribution from droplets, even overcounting by allowing each site to have a separate droplet, is bounded above by
{\displaystyle \sum _{L}L^{2}4^{2L}e^{-4\beta L},}
which goes to zero at large β. For β sufficiently large, this exponentially suppresses long loops, so that they cannot occur, and the magnetization never fluctuates too far from −1.
So Peierls established that the magnetization in the Ising model eventually defines superselection sectors, separated domains not linked by finite fluctuations.
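The droplet bound can be evaluated numerically. The Python sketch below (the perimeter cutoff and the β values are arbitrary choices for illustration) sums the series ∑_L L² 4^{2L} e^{−4βL} over even perimeters and shows that it collapses as β grows, which is the quantitative content of the Peierls argument.

import math

def droplet_bound(beta, L_max=400):
    # sum_L L^2 * 16^L * exp(-4*beta*L) over even perimeters L >= 4, computed in
    # exponential form to avoid overflow; converges once 16*exp(-4*beta) < 1.
    return sum(L**2 * math.exp(L * math.log(16.0) - 4.0 * beta * L)
               for L in range(4, L_max + 1, 2))

for beta in (0.75, 1.0, 1.5, 2.0):
    print(f"beta = {beta:4.2f}   bound = {droplet_bound(beta):.3e}")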
=== Kramers–Wannier duality ===
Kramers and Wannier were able to show that the high-temperature expansion and the low-temperature expansion of the model are equal up to an overall rescaling of the free energy. This allowed the phase-transition point in the two-dimensional model to be determined exactly (under the assumption that there is a unique critical point).
=== Yang–Lee zeros ===
After Onsager's solution, Yang and Lee investigated the way in which the partition function becomes singular as the temperature approaches the critical temperature.
== Applications ==
=== Magnetism ===
The original motivation for the model was the phenomenon of ferromagnetism. Iron is magnetic; once it is magnetized it stays magnetized for a long time compared to any atomic time.
In the 19th century, it was thought that magnetic fields are due to currents in matter, and Ampère postulated that permanent magnets are caused by permanent atomic currents. The motion of classical charged particles could not explain permanent currents though, as shown by Larmor. In order to have ferromagnetism, the atoms must have permanent magnetic moments which are not due to the motion of classical charges.
Once the electron's spin was discovered, it was clear that the magnetism should be due to a large number of electron spins all oriented in the same direction. It was natural to ask how the electrons' spins all know which direction to point in, because the electrons on one side of a magnet don't directly interact with the electrons on the other side. They can only influence their neighbors. The Ising model was designed to investigate whether a large fraction of the electron spins could be oriented in the same direction using only local forces.
=== Lattice gas ===
The Ising model can be reinterpreted as a statistical model for the motion of atoms. Since the kinetic energy depends only on momentum and not on position, while the statistics of the positions only depends on the potential energy, the thermodynamics of the gas only depends on the potential energy for each configuration of atoms.
A coarse model is to make space-time a lattice and imagine that each position either contains an atom or it doesn't. The space of configuration is that of independent bits Bi, where each bit is either 0 or 1 depending on whether the position is occupied or not. An attractive interaction reduces the energy of two nearby atoms. If the attraction is only between nearest neighbors, the energy is reduced by −4JBiBj for each occupied neighboring pair.
The density of the atoms can be controlled by adding a chemical potential, which is a multiplicative probability cost for adding one more atom. A multiplicative factor in probability can be reinterpreted as an additive term in the logarithm – the energy. The extra energy of a configuration with N atoms is changed by μN. The probability cost of one more atom is a factor of exp(−βμ).
So the energy of the lattice gas is:
{\displaystyle E=-{\frac {1}{2}}\sum _{\langle i,j\rangle }4JB_{i}B_{j}+\sum _{i}\mu B_{i}.}
Rewriting the bits in terms of spins,
{\displaystyle B_{i}=(S_{i}+1)/2.}
{\displaystyle E=-{\frac {1}{2}}\sum _{\langle i,j\rangle }JS_{i}S_{j}-{\frac {1}{2}}\sum _{i}(4J-\mu )S_{i}.}
For lattices where every site has an equal number of neighbors, this is the Ising model with a magnetic field h = (zJ − μ)/2, where z is the number of neighbors.
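The change of variables B_i = (S_i + 1)/2 can be checked by brute force: on a lattice with z = 4 the lattice-gas energy and the Ising form above differ only by a configuration-independent constant. The Python sketch below does this on a small periodic square lattice (an illustration; the lattice size, J, and μ are arbitrary, and each bond is counted once in the sums).

import itertools
import numpy as np

L_side, J, mu = 3, 1.0, 0.7          # toy parameters, periodic 3x3 lattice (z = 4)
sites = [(i, j) for i in range(L_side) for j in range(L_side)]
idx = {s: n for n, s in enumerate(sites)}
# nearest-neighbour bonds, each counted once (periodic boundary conditions)
bonds = [(idx[(i, j)], idx[((i + 1) % L_side, j)]) for (i, j) in sites] + \
        [(idx[(i, j)], idx[(i, (j + 1) % L_side)]) for (i, j) in sites]

def E_gas(B):        # E = -1/2 sum 4 J B_i B_j + mu sum B_i
    return -0.5 * sum(4 * J * B[i] * B[j] for i, j in bonds) + mu * sum(B)

def E_ising(S):      # E = -1/2 sum J S_i S_j - 1/2 sum (4J - mu) S_i   (z = 4)
    return -0.5 * sum(J * S[i] * S[j] for i, j in bonds) - 0.5 * (4 * J - mu) * sum(S)

rng = np.random.default_rng(0)
diffs = []
for _ in range(20):                   # random spin configurations
    S = rng.choice([-1, 1], size=len(sites))
    B = (S + 1) // 2
    diffs.append(E_gas(B) - E_ising(S))
print("spread of E_gas - E_ising over configurations:", max(diffs) - min(diffs))  # ~0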
In biological systems, modified versions of the lattice gas model have been used to understand a range of binding behaviors. These include the binding of ligands to receptors in the cell surface, the binding of chemotaxis proteins to the flagellar motor, and the condensation of DNA.
=== Neuroscience ===
The activity of neurons in the brain can be modelled statistically. Each neuron at any time is either active + or inactive −. The active neurons are those that send an action potential down the axon in any given time window, and the inactive ones are those that do not.
Following the general approach of Jaynes, a later interpretation of Schneidman, Berry, Segev and Bialek,
is that the Ising model is useful for any model of neural function, because a statistical model for neural activity should be chosen using the principle of maximum entropy. Given a collection of neurons, a statistical model which can reproduce the average firing rate for each neuron introduces a Lagrange multiplier for each neuron:
{\displaystyle E=-\sum _{i}h_{i}S_{i}}
But the activity of each neuron in this model is statistically independent. To allow for pair correlations, when one neuron tends to fire (or not to fire) along with another, introduce pairwise Lagrange multipliers:
{\displaystyle E=-{\tfrac {1}{2}}\sum _{ij}J_{ij}S_{i}S_{j}-\sum _{i}h_{i}S_{i}}
where {\displaystyle J_{ij}} are not restricted to neighbors. Note that this generalization of the Ising model is sometimes called the quadratic exponential binary distribution in statistics.
This energy function only introduces probability biases for a spin having a value and for a pair of spins having the same value. Higher order correlations are unconstrained by the multipliers. An activity pattern sampled from this distribution requires the largest number of bits to store in a computer, in the most efficient coding scheme imaginable, as compared with any other distribution with the same average activity and pairwise correlations. This means that Ising models are relevant to any system which is described by bits which are as random as possible, with constraints on the pairwise correlations and the average number of 1s, which frequently occurs in both the physical and social sciences.
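A minimal sketch of such a pairwise maximum-entropy ("Ising-like") model for a handful of neurons is shown below (Python; the h_i and J_ij values are arbitrary illustrations rather than fitted parameters). It enumerates all activity patterns, forms the Boltzmann distribution for the energy above, and reads off exactly the statistics the Lagrange multipliers control: the mean activities and the pairwise correlations.

import itertools
import numpy as np

n = 4                                           # number of neurons (toy example)
rng = np.random.default_rng(1)
h = rng.normal(0, 0.5, size=n)                  # one Lagrange multiplier per neuron
J = rng.normal(0, 0.5, size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)                        # pairwise multipliers J_ij

def energy(S):
    # E = -1/2 sum_ij J_ij S_i S_j - sum_i h_i S_i
    return -0.5 * S @ J @ S - h @ S

states = np.array(list(itertools.product([-1, 1], repeat=n)), dtype=float)
w = np.exp([-energy(S) for S in states])
P = w / w.sum()

mean_activity = P @ states                       # <S_i>, fixed by the h_i
pair_corr = states.T @ (P[:, None] * states)     # <S_i S_j>, fixed by the J_ij
print("mean activities:", np.round(mean_activity, 3))
print("pairwise correlations:\n", np.round(pair_corr, 3))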
=== Spin glasses ===
With the Ising model the so-called spin glasses can also be described, by the usual Hamiltonian
{\textstyle H=-{\frac {1}{2}}\,\sum J_{i,k}\,S_{i}\,S_{k},}
where the S-variables describe the Ising spins, while the Ji,k are taken from a random distribution. For spin glasses a typical distribution chooses antiferromagnetic bonds with probability p and ferromagnetic bonds with probability 1 − p (also known as the random-bond Ising model). These bonds stay fixed or "quenched" even in the presence of thermal fluctuations. When p = 0 we have the original Ising model. This system deserves interest in its own right; in particular it has "non-ergodic" properties leading to strange relaxation behaviour. Much attention has also been attracted by the related bond- and site-diluted Ising model, especially in two dimensions, which leads to intriguing critical behavior.
=== Artificial neural network ===
The Ising model was instrumental in the development of the Hopfield network. The original Ising model is a model of equilibrium. Roy J. Glauber in 1963 studied the Ising model evolving in time, as a process towards thermal equilibrium (Glauber dynamics), adding in the component of time. Kaoru Nakano (1971) and Shun'ichi Amari (1972) proposed to modify the weights of an Ising model by the Hebbian learning rule as a model of associative memory. The same idea was published by William A. Little (1974), who was cited by Hopfield in his 1982 paper.
The Sherrington–Kirkpatrick model of spin glass, published in 1975, is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In the 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions. In a 1984 paper he extended this to continuous activation functions. It became a standard model for the study of neural networks through statistical mechanics.
=== Sea ice ===
The melt pond can be modelled by the Ising model; sea ice topography data bears rather heavily on the results. The state variable is binary for a simple 2D approximation, being either water or ice.
=== Cayley tree topologies and large neural networks ===
In order to investigate an Ising model with potential relevance for large (e.g. with {\displaystyle 10^{4}} or {\displaystyle 10^{5}} interactions per node) neural nets, at the suggestion of Krizan in 1979, Barth (1981) obtained the exact analytical expression for the free energy of the Ising model on the closed Cayley tree (with an arbitrarily large branching ratio) for a zero external magnetic field (in the thermodynamic limit) by applying the methodologies of Glasser (1970) and Jellito (1979)
{\displaystyle -\beta f=\ln 2+{\frac {2\gamma }{(\gamma +1)}}\ln(\cosh J)+{\frac {\gamma (\gamma -1)}{(\gamma +1)}}\sum _{i=2}^{z}{\frac {1}{\gamma ^{i}}}\ln J_{i}(\tau )}
where {\displaystyle \gamma } is an arbitrary branching ratio (greater than or equal to 2), {\displaystyle t\equiv \tanh J}, {\displaystyle \tau \equiv t^{2}}, and {\displaystyle J\equiv \beta \epsilon } (with {\displaystyle \epsilon } representing the nearest-neighbor interaction energy), and there are k (→ ∞ in the thermodynamic limit) generations in each of the tree branches (forming the closed tree architecture as shown in the given closed Cayley tree diagram). The sum in the last term can be shown to converge uniformly and rapidly (i.e. for z → ∞, it remains finite), yielding a continuous and monotonic function, establishing that, for {\displaystyle \gamma } greater than or equal to 2, the free energy is a continuous function of temperature T. Further analysis of the free energy indicates that it exhibits an unusual discontinuous first derivative at the critical temperature (Krizan, Barth & Glasser (1983), Glasser & Goldberg (1983)).
The spin-spin correlation between sites (in general, m and n) on the tree was found to have a transition point when considered at the vertices (e.g. A and Ā, its reflection), their respective neighboring sites (such as B and its reflection), and between sites adjacent to the top and bottom extreme vertices of the two trees (e.g. A and B), as may be determined from
{\displaystyle \langle s_{m}s_{n}\rangle ={Z_{N}}^{-1}(0,T)[\cosh J]^{N_{b}}2^{N}\sum _{l=1}^{z}g_{mn}(l)t^{l}}
where {\displaystyle N_{b}} is equal to the number of bonds, {\displaystyle g_{mn}(l)t^{l}} is the number of graphs counted for odd vertices with even intermediate sites (see cited methodologies and references for detailed calculations), {\displaystyle 2^{N}} is the multiplicity resulting from two-valued spin possibilities and the partition function {\displaystyle {Z_{N}}} is derived from {\displaystyle \sum _{\{s\}}e^{-\beta H}}. (Note: {\displaystyle s_{i}} is consistent with the referenced literature in this section and is equivalent to {\displaystyle S_{i}} or {\displaystyle \sigma _{i}} utilized above and in earlier sections; it is valued at {\displaystyle \pm 1}.) The critical temperature {\displaystyle T_{C}} is given by
{\displaystyle T_{C}={\frac {2\epsilon }{k_{\text{B}}[\ln({\sqrt {\gamma }}+1)-\ln({\sqrt {\gamma }}-1)]}}.}
The critical temperature for this model is only determined by the branching ratio {\displaystyle \gamma } and the site-to-site interaction energy {\displaystyle \epsilon }, a fact which may have direct implications associated with neural structure vs. its function (in that it relates the energies of interaction and branching ratio to its transitional behavior). For example, a relationship between the transition behavior of activities of neural networks between sleeping and wakeful states (which may correlate with a spin-spin type of phase transition) in terms of changes in neural interconnectivity ({\displaystyle \gamma }) and/or neighbor-to-neighbor interactions ({\displaystyle \epsilon }), over time, is just one possible avenue suggested for further experimental investigation into such a phenomenon. In any case, for this Ising model it was established that “the stability of the long-range correlation increases with increasing {\displaystyle \gamma } or increasing {\displaystyle \epsilon }.”
For this topology, the spin-spin correlation was found to be zero between the extreme vertices and the central sites at which the two trees (or branches) are joined (i.e. between A and individually C, D, or E.) This behavior is explained to be due to the fact that, as k increases, the number of links increases exponentially (between the extreme vertices) and so even though the contribution to spin correlations decrease exponentially, the correlation between sites such as the extreme vertex (A) in one tree and the extreme vertex in the joined tree (Ā) remains finite (above the critical temperature.) In addition, A and B also exhibit a non-vanishing correlation (as do their reflections) thus lending itself to, for B level sites (with A level), being considered “clusters” which tend to exhibit synchronization of firing.
Based upon a review of other classical network models as a comparison, the Ising model on a closed Cayley tree was determined to be the first classical statistical mechanical model to demonstrate both local and long-range sites with non-vanishing spin-spin correlations, while at the same time exhibiting intermediate sites with zero correlation, which indeed was a relevant matter for large neural networks at the time of its consideration. The model's behavior is also of relevance for any other divergent-convergent tree physical (or biological) system exhibiting a closed Cayley tree topology with an Ising-type of interaction. This topology should not be ignored since its behavior for Ising models has been solved exactly, and presumably nature will have found a way of taking advantage of such simple symmetries at many levels of its designs.
Barth (1981) early on noted the possibility of interrelationships between (1) the classical large neural network model (with similar coupled divergent-convergent topologies) with (2) an underlying statistical quantum mechanical model (independent of topology and with persistence in fundamental quantum states):
The most significant result obtained from the closed Cayley tree model involves the occurrence of long-range correlation in the absence of intermediate-range correlation. This result has not been demonstrated by other classical models. The failure of the classical view of impulse transmission to account for this phenomenon has been cited by numerous investigators (Ricciiardi and Umezawa, 1967, Hokkyo 1972, Stuart, Takahashi and Umezawa 1978, 1979) as significant enough to warrant radically new assumptions on a very fundamental level and have suggested the existence of quantum cooperative modes within the brain…In addition, it is interesting to note that the (modeling) of…Goldstone particles or bosons (as per Umezawa, et al)…within the brain, demonstrates the long-range correlation of quantum numbers preserved in the ground state…In the closed Cayley tree model ground states of pairs of sites, as well as the state variable of individual sites, (can) exhibit long-range correlation.
It was a natural and common belief among early neurophysicists (e.g. Umezawa, Krizan, Barth, etc.) that classical neural models (including those with statistical mechanical aspects) will one day have to be integrated with quantum physics (with quantum statistical aspects), similar perhaps to how the domain of chemistry has historically integrated itself into quantum physics via quantum chemistry.
Several additional statistical mechanical problems of interest remain to be solved for the closed Cayley tree, including the time-dependent case and the external field situation, as well as theoretical efforts aimed at understanding interrelationships with underlying quantum constituents and their physics.
== Numerical simulation ==
The Ising model can often be difficult to evaluate numerically if there are many states in the system. Consider an Ising model with
L = |Λ|: the total number of sites on the lattice,
σj ∈ {−1, +1}: an individual spin site on the lattice, j = 1, ..., L,
S ∈ {−1, +1}L: state of the system.
Since every spin site has ±1 spin, there are 2L different states that are possible. This motivates the reason for the Ising model to be simulated using Monte Carlo methods.
The Hamiltonian that is commonly used to represent the energy of the model when using Monte Carlo methods is:
{\displaystyle H(\sigma )=-J\sum _{\langle i~j\rangle }\sigma _{i}\sigma _{j}-h\sum _{j}\sigma _{j}.}
Furthermore, the Hamiltonian is further simplified by assuming zero external field h, since many questions that are posed to be solved using the model can be answered in absence of an external field. This leads us to the following energy equation for state σ:
{\displaystyle H(\sigma )=-J\sum _{\langle i~j\rangle }\sigma _{i}\sigma _{j}.}
Given this Hamiltonian, quantities of interest such as the specific heat or the magnetization of the magnet at a given temperature can be calculated.
=== Metropolis algorithm ===
The Metropolis–Hastings algorithm is the most commonly used Monte Carlo algorithm to calculate Ising model estimations. The algorithm first chooses selection probabilities g(μ, ν), which represent the probability that state ν is selected by the algorithm out of all states, given that one is in state μ. It then uses acceptance probabilities A(μ, ν) so that detailed balance is satisfied. If the new state ν is accepted, then we move to that state and repeat with selecting a new state and deciding to accept it. If ν is not accepted then we stay in μ. This process is repeated until some stopping criterion is met, which for the Ising model is often when the lattice becomes ferromagnetic, meaning all of the sites point in the same direction.
When implementing the algorithm, one must ensure that g(μ, ν) is selected such that ergodicity is met. In thermal equilibrium a system's energy only fluctuates within a small range. This is the motivation behind the concept of single-spin-flip dynamics, which states that in each transition, we will only change one of the spin sites on the lattice. Furthermore, by using single-spin-flip dynamics, one can get from any state to any other state by flipping each site that differs between the two states one at a time. The maximum amount of change between the energy of the present state, Hμ, and any possible new state's energy Hν (using single-spin-flip dynamics) is 2J between the spin we choose to "flip" and that spin's neighbor. Thus, in a 1D Ising model, where each site has two neighbors (left and right), the maximum difference in energy would be 4J. Let c represent the lattice coordination number: the number of nearest neighbors that any lattice site has. We assume that all sites have the same number of neighbors due to periodic boundary conditions. It is important to note that the Metropolis–Hastings algorithm does not perform well around the critical point due to critical slowing down. Other techniques such as multigrid methods, Niedermayer's algorithm, the Swendsen–Wang algorithm, or the Wolff algorithm are required in order to resolve the model near the critical point, which is a requirement for determining the critical exponents of the system.
Specifically for the Ising model and using single-spin-flip dynamics, one can establish the following. Since there are L total sites on the lattice, using single-spin-flip as the only way we transition to another state, we can see that there are a total of L new states ν reachable from our present state μ. The algorithm assumes that the selection probabilities are equal for the L states: g(μ, ν) = 1/L. Detailed balance tells us that the following equation must hold:
{\displaystyle {\frac {P(\mu ,\nu )}{P(\nu ,\mu )}}={\frac {g(\mu ,\nu )A(\mu ,\nu )}{g(\nu ,\mu )A(\nu ,\mu )}}={\frac {A(\mu ,\nu )}{A(\nu ,\mu )}}={\frac {P_{\beta }(\nu )}{P_{\beta }(\mu )}}={\frac {{\frac {1}{Z}}e^{-\beta (H_{\nu })}}{{\frac {1}{Z}}e^{-\beta (H_{\mu })}}}=e^{-\beta (H_{\nu }-H_{\mu })}.}
Thus, we want to select the acceptance probability for our algorithm to satisfy
{\displaystyle {\frac {A(\mu ,\nu )}{A(\nu ,\mu )}}=e^{-\beta (H_{\nu }-H_{\mu })}.}
If Hν > Hμ, then A(ν, μ) > A(μ, ν). Metropolis sets the larger of A(μ, ν) or A(ν, μ) to be 1. By this reasoning the acceptance algorithm is:
{\displaystyle A(\mu ,\nu )={\begin{cases}e^{-\beta (H_{\nu }-H_{\mu })},&{\text{if }}H_{\nu }-H_{\mu }>0,\\1&{\text{otherwise}}.\end{cases}}}
The basic form of the algorithm is as follows:
Pick a spin site using selection probability g(μ, ν) and calculate the contribution to the energy involving this spin.
Flip the value of the spin and calculate the new contribution.
If the new energy is less, keep the flipped value.
If the new energy is more, only keep with probability {\displaystyle e^{-\beta (H_{\nu }-H_{\mu })}.}
Repeat.
The change in energy Hν − Hμ only depends on the value of the spin and its nearest graph neighbors. So if the graph is not too connected, the algorithm is fast. This process will eventually produce a pick from the distribution.
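A compact Python implementation of the single-spin-flip Metropolis scheme described above is sketched below (assuming NumPy; the lattice size, temperatures, and sweep counts are arbitrary illustration choices). It uses the zero-field Hamiltonian, periodic boundary conditions, and the acceptance rule A(μ, ν) from the previous section.

import numpy as np

def metropolis_ising(L=24, beta=0.5, J=1.0, sweeps=200, seed=0):
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))            # random initial state
    for _ in range(sweeps):
        for _ in range(L * L):                          # one sweep = L*L attempted flips
            i, j = rng.integers(0, L, size=2)
            # sum of the four nearest neighbours (periodic boundary conditions)
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * J * spins[i, j] * nn               # H_nu - H_mu for flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] = -spins[i, j]              # accept the flip
    return spins

for beta in (0.3, 0.6):                                  # below / above the critical coupling
    spins = metropolis_ising(beta=beta)
    print(f"beta = {beta}:  |magnetization per spin| = {abs(spins.mean()):.3f}")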
=== As a Markov chain ===
It is possible to view the Ising model as a Markov chain, as the immediate probability Pβ(ν) of transitioning to a future state ν only depends on the present state μ. The Metropolis algorithm is actually a version of a Markov chain Monte Carlo simulation, and since we use single-spin-flip dynamics in the Metropolis algorithm, every state can be viewed as having links to exactly L other states, where each transition corresponds to flipping a single spin site to the opposite value. Furthermore, since the change in the energy Hσ only depends on the nearest-neighbor interaction strength J, the Ising model and its variants such as the Sznajd model can be seen as a form of a voter model for opinion dynamics.
== Solutions ==
=== One dimension ===
The thermodynamic limit exists as long as the interaction decay is {\displaystyle J_{ij}\sim |i-j|^{-\alpha }} with α > 1.
In the case of ferromagnetic interaction {\displaystyle J_{ij}\sim |i-j|^{-\alpha }} with 1 < α < 2, Dyson proved, by comparison with the hierarchical case, that there is a phase transition at small enough temperature.
In the case of ferromagnetic interaction {\displaystyle J_{ij}\sim |i-j|^{-2}}, Fröhlich and Spencer proved that there is a phase transition at small enough temperature (in contrast with the hierarchical case).
In the case of interaction {\displaystyle J_{ij}\sim |i-j|^{-\alpha }} with α > 2 (which includes the case of finite-range interactions), there is no phase transition at any positive temperature (i.e. finite β), since the free energy is analytic in the thermodynamic parameters.
In the case of nearest neighbor interactions, E. Ising provided an exact solution of the model. At any positive temperature (i.e. finite β) the free energy is analytic in the thermodynamics parameters, and the truncated two-point spin correlation decays exponentially fast. At zero temperature (i.e. infinite β), there is a second-order phase transition: the free energy is infinite, and the truncated two-point spin correlation does not decay (remains constant). Therefore, T = 0 is the critical temperature of this case. Scaling formulas are satisfied.
==== Ising's exact solution ====
In the nearest neighbor case (with periodic or free boundary conditions) an exact solution is available. The Hamiltonian of the one-dimensional Ising model on a lattice of L sites with free boundary conditions is
{\displaystyle H(\sigma )=-J\sum _{i=1,\ldots ,L-1}\sigma _{i}\sigma _{i+1}-h\sum _{i}\sigma _{i},}
where J and h can be any number, since in this simplified case J is a constant representing the interaction strength between the nearest neighbors and h is the constant external magnetic field applied to lattice sites. Then the
free energy is
{\displaystyle f(\beta ,h)=-\lim _{L\to \infty }{\frac {1}{\beta L}}\ln Z(\beta )=-{\frac {1}{\beta }}\ln \left(e^{\beta J}\cosh \beta h+{\sqrt {e^{2\beta J}(\sinh \beta h)^{2}+e^{-2\beta J}}}\right),}
and the spin-spin correlation (i.e. the covariance) is
{\displaystyle \langle \sigma _{i}\sigma _{j}\rangle -\langle \sigma _{i}\rangle \langle \sigma _{j}\rangle =C(\beta )e^{-c(\beta )|i-j|},}
where C(β) and c(β) are positive functions for T > 0. For T → 0, though, the inverse correlation length c(β) vanishes.
===== Proof =====
The proof of this result is a simple computation.
If h = 0, it is very easy to obtain the free energy in the case of free boundary condition, i.e. when
{\displaystyle H(\sigma )=-J\left(\sigma _{1}\sigma _{2}+\cdots +\sigma _{L-1}\sigma _{L}\right).}
Then the model factorizes under the change of variables
{\displaystyle \sigma '_{j}=\sigma _{j}\sigma _{j-1},\quad j\geq 2.}
This gives
{\displaystyle Z(\beta )=\sum _{\sigma _{1},\ldots ,\sigma _{L}}e^{\beta J\sigma _{1}\sigma _{2}}e^{\beta J\sigma _{2}\sigma _{3}}\cdots e^{\beta J\sigma _{L-1}\sigma _{L}}=2\prod _{j=2}^{L}\sum _{\sigma '_{j}}e^{\beta J\sigma '_{j}}=2\left[e^{\beta J}+e^{-\beta J}\right]^{L-1}.}
Therefore, the free energy is
{\displaystyle f(\beta ,0)=-{\frac {1}{\beta }}\ln \left[e^{\beta J}+e^{-\beta J}\right].}
With the same change of variables
{\displaystyle \langle \sigma _{j}\sigma _{j+N}\rangle =\left[{\frac {e^{\beta J}-e^{-\beta J}}{e^{\beta J}+e^{-\beta J}}}\right]^{N},}
hence it decays exponentially as soon as T ≠ 0; but for T = 0, i.e. in the limit β → ∞ there is no decay.
If h ≠ 0 we need the transfer matrix method. For periodic boundary conditions it works as follows. The partition function is
{\displaystyle Z(\beta )=\sum _{\sigma _{1},\ldots ,\sigma _{L}}e^{\beta h\sigma _{1}}e^{\beta J\sigma _{1}\sigma _{2}}e^{\beta h\sigma _{2}}e^{\beta J\sigma _{2}\sigma _{3}}\cdots e^{\beta h\sigma _{L}}e^{\beta J\sigma _{L}\sigma _{1}}=\sum _{\sigma _{1},\ldots ,\sigma _{L}}V_{\sigma _{1},\sigma _{2}}V_{\sigma _{2},\sigma _{3}}\cdots V_{\sigma _{L},\sigma _{1}}.}
The coefficients {\displaystyle V_{\sigma ,\sigma '}} can be seen as the entries of a matrix. There are different possible choices: a convenient one (because the matrix is symmetric) is
{\displaystyle V_{\sigma ,\sigma '}=e^{{\frac {\beta h}{2}}\sigma }e^{\beta J\sigma \sigma '}e^{{\frac {\beta h}{2}}\sigma '}}
or
{\displaystyle V={\begin{bmatrix}e^{\beta (h+J)}&e^{-\beta J}\\e^{-\beta J}&e^{-\beta (h-J)}\end{bmatrix}}.}
In matrix formalism
{\displaystyle Z(\beta )=\operatorname {Tr} \left(V^{L}\right)=\lambda _{1}^{L}+\lambda _{2}^{L}=\lambda _{1}^{L}\left[1+\left({\frac {\lambda _{2}}{\lambda _{1}}}\right)^{L}\right],}
where λ1 is the highest eigenvalue of V, while λ2 is the other eigenvalue:
{\displaystyle \lambda _{1}=e^{\beta J}\cosh \beta h+{\sqrt {e^{2\beta J}(\cosh \beta h)^{2}-2\sinh 2\beta J}}=e^{\beta J}\cosh \beta h+{\sqrt {e^{2\beta J}(\sinh \beta h)^{2}+e^{-2\beta J}}},}
and λ2 < λ1. This gives the formula for the free energy above. In the thermodynamic limit for the non-interacting case (J = 0), we get
{\displaystyle Z_{N}\to (\lambda _{1})^{N}=(2\cosh \beta h)^{N},}
as the answer for the open-boundary Ising model.
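The transfer-matrix result can be checked against direct enumeration for a small periodic chain. The Python sketch below (parameter values are arbitrary illustrations) builds the symmetric matrix V given above, computes Z = Tr(V^L), and compares it with the brute-force sum over all 2^L configurations.

import itertools
import numpy as np

beta, J, h, L = 0.7, 1.0, 0.3, 10       # toy parameters, periodic chain of L sites

# Transfer matrix V_{sigma, sigma'} in the symmetric form given above.
V = np.array([[np.exp(beta * (h + J)), np.exp(-beta * J)],
              [np.exp(-beta * J),      np.exp(-beta * (h - J))]])
Z_transfer = np.trace(np.linalg.matrix_power(V, L))

# Brute force: Z = sum_sigma exp(-beta * H) with periodic boundary conditions.
Z_brute = 0.0
for sigma in itertools.product([-1, 1], repeat=L):
    H = -J * sum(sigma[i] * sigma[(i + 1) % L] for i in range(L)) - h * sum(sigma)
    Z_brute += np.exp(-beta * H)

lam1, lam2 = sorted(np.linalg.eigvalsh(V), reverse=True)
print("Tr(V^L) =", Z_transfer, "  brute force =", Z_brute)
print("largest eigenvalue lambda_1 =", lam1, " (dominates Z for large L)")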
===== Comments =====
The energy of the lowest state is −JL, when all the spins are the same. For any other configuration, the extra energy is equal to 2J times the number of sign changes that are encountered when scanning the configuration from left to right.
If we designate the number of sign changes in a configuration as k, the difference in energy from the lowest energy state is 2k. Since the energy is additive in the number of flips, the probability p of having a spin-flip at each position is independent. The ratio of the probability of finding a flip to the probability of not finding one is the Boltzmann factor:
{\displaystyle {\frac {p}{1-p}}=e^{-2\beta J}.}
The problem is reduced to independent biased coin tosses. This essentially completes the mathematical description.
From the description in terms of independent tosses, the statistics of the model for long lines can be understood. The line splits into domains. Each domain is of average length exp(2β). The length of a domain is distributed exponentially, since there is a constant probability at any step of encountering a flip. The domains never become infinite, so a long system is never magnetized. Each step reduces the correlation between a spin and its neighbor by an amount proportional to p, so the correlations fall off exponentially.
{\displaystyle \langle S_{i}S_{j}\rangle \propto e^{-p|i-j|}.}
The partition function is the volume of configurations, each configuration weighted by its Boltzmann weight. Since each configuration is described by the sign-changes, the partition function factorizes:
{\displaystyle Z=\sum _{\text{configs}}e^{\sum _{k}S_{k}}=\prod _{k}(1+p)=(1+p)^{L}.}
The logarithm divided by L is the free energy density:
{\displaystyle \beta f=\log(1+p)=\log \left(1+{\frac {e^{-2\beta J}}{1+e^{-2\beta J}}}\right),}
which is analytic away from β = ∞. A sign of a phase transition is a non-analytic free energy, so the one-dimensional model does not have a phase transition.
==== One-dimensional solution with transverse field ====
To express the Ising Hamiltonian using a quantum mechanical description of spins, we replace the spin variables with their respective Pauli matrices. However, depending on the direction of the magnetic field, we can create a transverse-field or longitudinal-field Hamiltonian. The transverse-field Hamiltonian is given by
{\displaystyle H(\sigma )=-J\sum _{i=1,\ldots ,L}\sigma _{i}^{z}\sigma _{i+1}^{z}-h\sum _{i}\sigma _{i}^{x}.}
The transverse-field model experiences a phase transition between an ordered and disordered regime at J ~ h. This can be shown by a mapping of Pauli matrices
{\displaystyle \sigma _{n}^{z}=\prod _{i=1}^{n}T_{i}^{x},}
{\displaystyle \sigma _{n}^{x}=T_{n}^{z}T_{n+1}^{z}.}
Upon rewriting the Hamiltonian in terms of these change-of-basis matrices, we obtain
{\displaystyle H(\sigma )=-h\sum _{i=1,\ldots ,L}T_{i}^{z}T_{i+1}^{z}-J\sum _{i}T_{i}^{x}.}
Since the roles of h and J are switched, the Hamiltonian undergoes a transition at J = h.
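The transition can be seen in exact diagonalization of small chains. The following Python sketch (NumPy only; the chain length and coupling values are arbitrary illustrations) builds the transverse-field Hamiltonian with periodic boundary conditions from Pauli matrices via Kronecker products and tracks the gap between the two lowest energies, which is large in the disordered phase (J < h) and collapses as the ordered phase (J > h) is entered, signalling the transition near J = h.

import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(op, i, L):
    # Operator acting with `op` on site i of an L-site chain (identity elsewhere).
    mats = [I2] * L
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def tfim_hamiltonian(L, J, h):
    # H = -J sum_i sigma^z_i sigma^z_{i+1} - h sum_i sigma^x_i   (periodic chain)
    H = np.zeros((2 ** L, 2 ** L))
    for i in range(L):
        H -= J * site_op(sz, i, L) @ site_op(sz, (i + 1) % L, L)
        H -= h * site_op(sx, i, L)
    return H

L, h = 8, 1.0
for J in (0.5, 1.0, 2.0):
    E = np.linalg.eigvalsh(tfim_hamiltonian(L, J, h))
    print(f"J/h = {J:.1f}:  gap E1 - E0 = {E[1] - E[0]:.4f}")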
==== Renormalization ====
When there is no external field, we can derive a functional equation that {\displaystyle f(\beta ,0)=f(\beta )} satisfies using renormalization. Specifically, let {\displaystyle Z_{N}(\beta ,J)} be the partition function with {\displaystyle N} sites. Now we have:
{\displaystyle Z_{N}(\beta ,J)=\sum _{\sigma }e^{K\sigma _{2}(\sigma _{1}+\sigma _{3})}e^{K\sigma _{4}(\sigma _{3}+\sigma _{5})}\cdots }
where {\displaystyle K:=\beta J}. We sum over each of {\displaystyle \sigma _{2},\sigma _{4},\cdots }, to obtain
{\displaystyle Z_{N}(\beta ,J)=\sum _{\sigma }(2\cosh(K(\sigma _{1}+\sigma _{3})))\cdot (2\cosh(K(\sigma _{3}+\sigma _{5})))\cdots }
Now, since the cosh function is even, we can solve
{\displaystyle Ae^{K'\sigma _{1}\sigma _{3}}=2\cosh(K(\sigma _{1}+\sigma _{3}))}
as {\textstyle A=2{\sqrt {\cosh(2K)}},K'={\frac {1}{2}}\ln \cosh(2K)}
. Now we have a self-similarity relation:
{\displaystyle {\frac {1}{N}}\ln Z_{N}(K)={\frac {1}{2}}\ln \left(2{\sqrt {\cosh(2K)}}\right)+{\frac {1}{2}}{\frac {1}{N/2}}\ln Z_{N/2}(K')}
Taking the limit, we obtain
{\displaystyle f(\beta )={\frac {1}{2}}\ln \left(2{\sqrt {\cosh(2K)}}\right)+{\frac {1}{2}}f(\beta ')}
where {\displaystyle \beta 'J={\frac {1}{2}}\ln \cosh(2\beta J)}.
When {\displaystyle \beta } is small, we have {\displaystyle f(\beta )\approx \ln 2}, so we can numerically evaluate {\displaystyle f(\beta )} by iterating the functional equation until {\displaystyle K} is small.
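A direct numerical check of this recursion is sketched below (Python; the K values are arbitrary illustrations). Here f(K) is taken as the dimensionless log-partition function per site, lim (1/N) ln Z_N, which for the one-dimensional chain equals ln(2 cosh K); iterating the functional equation reproduces that value.

import numpy as np

def f_renormalized(K, tol=1e-12):
    # f(K) = 1/2 ln(2 sqrt(cosh 2K)) + 1/2 f(K'),   K' = 1/2 ln cosh 2K.
    # Iterate until K is small, where f(K) ~ ln 2, accumulating weights 1/2, 1/4, ...
    total, weight = 0.0, 1.0
    while K > tol:
        total += weight * 0.5 * np.log(2.0 * np.sqrt(np.cosh(2.0 * K)))
        K = 0.5 * np.log(np.cosh(2.0 * K))
        weight *= 0.5
    return total + weight * np.log(2.0)

for K in (0.2, 0.5, 1.0, 2.0):
    exact = np.log(2.0 * np.cosh(K))          # (1/N) ln Z_N for the 1D chain, N -> infinity
    print(f"K = {K}:  renormalized = {f_renormalized(K):.10f}   exact = {exact:.10f}")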
=== Two dimensions ===
In the ferromagnetic case there is a phase transition. At low temperature, the Peierls argument proves positive magnetization for the nearest neighbor case and then, by the Griffiths inequality, also when longer range interactions are added. Meanwhile, at high temperature, the cluster expansion gives analyticity of the thermodynamic functions.
In the nearest-neighbor case, the free energy was exactly computed by Onsager. The spin-spin correlation functions were computed by McCoy and Wu.
==== Onsager's exact solution ====
Onsager (1944) obtained the following analytical expression for the free energy of the Ising model on the anisotropic square lattice when the magnetic field {\displaystyle h=0} in the thermodynamic limit as a function of temperature and the horizontal and vertical interaction energies {\displaystyle J_{1}} and {\displaystyle J_{2}}, respectively
{\displaystyle -\beta f=\ln 2+{\frac {1}{8\pi ^{2}}}\int _{0}^{2\pi }d\theta _{1}\int _{0}^{2\pi }d\theta _{2}\ln[\cosh(2\beta J_{1})\cosh(2\beta J_{2})-\sinh(2\beta J_{1})\cos(\theta _{1})-\sinh(2\beta J_{2})\cos(\theta _{2})].}
From this expression for the free energy, all thermodynamic functions of the model can be calculated by using an appropriate derivative. The 2D Ising model was the first model to exhibit a continuous phase transition at a positive temperature. It occurs at the temperature {\displaystyle T_{c}} which solves the equation
{\displaystyle \sinh \left({\frac {2J_{1}}{kT_{c}}}\right)\sinh \left({\frac {2J_{2}}{kT_{c}}}\right)=1.}
In the isotropic case when the horizontal and vertical interaction energies are equal {\displaystyle J_{1}=J_{2}=J}, the critical temperature {\displaystyle T_{c}} occurs at the following point
{\displaystyle T_{c}={\frac {2J}{k\ln(1+{\sqrt {2}})}}=(2.269185\cdots ){\frac {J}{k}}}
When the interaction energies {\displaystyle J_{1}}, {\displaystyle J_{2}} are both negative, the Ising model becomes an antiferromagnet. Since the square lattice is bipartite, it is invariant under this change when the magnetic field {\displaystyle h=0}, so the free energy and critical temperature are the same for the antiferromagnetic case. For the triangular lattice, which is not bipartite, the ferromagnetic and antiferromagnetic Ising model behave notably differently. Specifically, around a triangle, it is impossible to make all three spin pairs antiparallel, so the antiferromagnetic Ising model cannot reach the minimal energy state. This is an example of geometric frustration.
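Both the critical-point condition and the free-energy integral can be evaluated numerically. The Python sketch below (using NumPy only; the grid size and temperature value are arbitrary illustrations, with k_B = 1) verifies the isotropic critical temperature and evaluates −βf by a simple midpoint rule on the double integral.

import numpy as np

J = 1.0                                    # isotropic couplings J1 = J2 = J, k_B = 1

# Critical point: sinh(2 J1 / (k T_c)) * sinh(2 J2 / (k T_c)) = 1.
Tc_closed_form = 2 * J / np.log(1 + np.sqrt(2))
print("closed form    T_c =", Tc_closed_form)                         # ~2.269185 J/k
print("check sinh^2(2J/Tc) =", np.sinh(2 * J / Tc_closed_form) ** 2)  # ~1

# Onsager free energy per site at temperature T (midpoint rule on [0, 2pi]^2).
def minus_beta_f(T, n=400):
    beta = 1.0 / T
    theta = (np.arange(n) + 0.5) * 2 * np.pi / n
    t1, t2 = np.meshgrid(theta, theta)
    integrand = np.log(np.cosh(2 * beta * J) ** 2
                       - np.sinh(2 * beta * J) * (np.cos(t1) + np.cos(t2)))
    return np.log(2) + integrand.mean() / 2.0

print("-beta*f at T = 2.0:", minus_beta_f(2.0))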
===== Transfer matrix =====
Start with an analogy with quantum mechanics. The Ising model on a long periodic lattice has a partition function
{\displaystyle \sum _{\{S\}}\exp {\biggl (}\sum _{ij}S_{i,j}\left(S_{i,j+1}+S_{i+1,j}\right){\biggr )}.}
Think of the i direction as space, and the j direction as time. This is an independent sum over all the values that the spins can take at each time slice. This is a type of path integral; it is the sum over all spin histories.
A path integral can be rewritten as a Hamiltonian evolution. The Hamiltonian steps through time by performing a unitary rotation between time t and time t + Δt:
{\displaystyle U=e^{iH\Delta t}}
The product of the U matrices, one after the other, is the total time evolution operator, which is the path integral we started with.
{\displaystyle U^{N}=(e^{iH\Delta t})^{N}=\int DXe^{iL}}
where N is the number of time slices. The sum over all paths is given by a product of matrices; each matrix element is the transition probability from one slice to the next.
Similarly, one can divide the sum over all partition function configurations into slices, where each slice is the one-dimensional configuration at a single time. This defines the transfer matrix:
{\displaystyle T_{C_{1}C_{2}}.}
The configuration in each slice is a one-dimensional collection of spins. At each time slice, T has matrix elements between two configurations of spins, one in the immediate future and one in the immediate past. These two configurations are C1 and C2, and they are both one-dimensional spin configurations. We can think of the vector space that T acts on as all complex linear combinations of these. Using quantum mechanical notation:
{\displaystyle |A\rangle =\sum _{S}A(S)|S\rangle }
where each basis vector |S⟩ is a spin configuration of a one-dimensional Ising model.
Like the Hamiltonian, the transfer matrix acts on all linear combinations of states. The partition function is a matrix function of T, which is defined by the sum over all histories which come back to the original configuration after N steps:
{\displaystyle Z=\mathrm {tr} (T^{N}).}
Since this is a matrix equation, it can be evaluated in any basis. So if we can diagonalize the matrix T, we can find Z.
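A minimal sketch of this trace formula for the simplest concrete case, the one-dimensional zero-field Ising chain with periodic boundaries, where the transfer matrix is 2 × 2; the values of β, J, and N below are arbitrary.

```python
import numpy as np

def partition_function_1d(beta, J, N):
    """Z = tr(T^N) for the 1D zero-field Ising chain with periodic boundaries."""
    T = np.array([[np.exp(beta * J), np.exp(-beta * J)],
                  [np.exp(-beta * J), np.exp(beta * J)]])
    eigenvalues = np.linalg.eigvalsh(T)   # diagonalize the (symmetric) transfer matrix
    return np.sum(eigenvalues ** N)       # trace evaluated in the eigenbasis

beta, J, N = 0.5, 1.0, 20
Z = partition_function_1d(beta, J, N)
# Closed form for comparison: (2 cosh(beta J))^N + (2 sinh(beta J))^N
print(Z, (2 * np.cosh(beta * J)) ** N + (2 * np.sinh(beta * J)) ** N)
```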
===== T in terms of Pauli matrices =====
The contribution to the partition function for each past/future pair of configurations on a slice is the sum of two terms. There is the number of spin flips in the past slice and there is the number of spin flips between the past and future slice. Define an operator on configurations which flips the spin at site i:
{\displaystyle \sigma _{i}^{x}.}
In the usual Ising basis, acting on any linear combination of past configurations, it produces the same linear combination but with the spin at position i of each basis vector flipped.
Define a second operator which multiplies the basis vector by +1 and −1 according to the spin at position i:
{\displaystyle \sigma _{i}^{z}.}
T can be written in terms of these:
{\displaystyle \sum _{i}A\sigma _{i}^{x}+B\sigma _{i}^{z}\sigma _{i+1}^{z}}
where A and B are constants which are to be determined so as to reproduce the partition function. The interpretation is that the statistical configuration at this slice contributes according to both the number of spin flips in the slice, and whether or not the spin at position i has flipped.
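The operator above can be built explicitly for a short chain by taking tensor products of Pauli matrices. The sketch below only illustrates the construction; the chain length and the values chosen for the undetermined constants A and B are arbitrary.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
I2 = np.eye(2)

def site_operator(op, i, n):
    """Tensor product placing `op` at site i of an n-site chain."""
    factors = [op if j == i else I2 for j in range(n)]
    return reduce(np.kron, factors)

def build_operator(A, B, n):
    """Sum_i (A sigma^x_i + B sigma^z_i sigma^z_{i+1}) with periodic boundaries."""
    H = np.zeros((2 ** n, 2 ** n))
    for i in range(n):
        H += A * site_operator(sx, i, n)
        H += B * site_operator(sz, i, n) @ site_operator(sz, (i + 1) % n, n)
    return H

H = build_operator(A=1.0, B=0.7, n=6)
print(np.allclose(H, H.T))           # symmetric, so it can be diagonalized
print(np.linalg.eigvalsh(H)[:3])     # a few eigenvalues of the 64 x 64 matrix
```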
===== Spin flip creation and annihilation operators =====
Just as in the one-dimensional case, we will shift attention from the spins to the spin-flips. The σz term in T counts the number of spin flips, which we can write in terms of spin-flip creation and annihilation operators:
{\displaystyle \sum C\psi _{i}^{\dagger }\psi _{i}.\,}
The first term flips a spin, so depending on the basis state it either:
moves a spin-flip one unit to the right
moves a spin-flip one unit to the left
produces two spin-flips on neighboring sites
destroys two spin-flips on neighboring sites.
Writing this out in terms of creation and annihilation operators:
{\displaystyle \sigma _{i}^{x}=D{\psi ^{\dagger }}_{i}\psi _{i+1}+D^{*}{\psi ^{\dagger }}_{i}\psi _{i-1}+C\psi _{i}\psi _{i+1}+C^{*}{\psi ^{\dagger }}_{i}{\psi ^{\dagger }}_{i+1}.}
Ignore the constant coefficients, and focus attention on the form. They are all quadratic. Since the coefficients are constant, this means that the T matrix can be diagonalized by Fourier transforms.
Carrying out the diagonalization produces the Onsager free energy.
===== Onsager's formula for spontaneous magnetization =====
Onsager famously announced the following expression for the spontaneous magnetization M of a two-dimensional Ising ferromagnet on the square lattice at two different conferences in 1948, though without proof
{\displaystyle M=\left(1-\left[\sinh 2\beta J_{1}\sinh 2\beta J_{2}\right]^{-2}\right)^{\frac {1}{8}}}
where J1 and J2 are the horizontal and vertical interaction energies.
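For illustration, Onsager's expression can be evaluated directly; it is nonzero only below the critical temperature, where sinh(2βJ1) sinh(2βJ2) > 1. The couplings and temperatures in the sketch below are arbitrary choices (with k = 1).

```python
import numpy as np

def onsager_magnetization(T, J1=1.0, J2=1.0):
    """Spontaneous magnetization of the square-lattice Ising model (k = 1)."""
    beta = 1.0 / T
    s = np.sinh(2 * beta * J1) * np.sinh(2 * beta * J2)
    if s <= 1.0:                    # at or above the critical temperature
        return 0.0
    return (1.0 - s ** -2) ** 0.125

Tc = 2.0 / np.log(1 + np.sqrt(2))   # isotropic case J1 = J2 = 1
for T in (1.0, 2.0, 0.99 * Tc, 1.01 * Tc):
    print(T, onsager_magnetization(T))
```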
A complete derivation was only given in 1951 by Yang (1952) using a limiting process of transfer matrix eigenvalues. The proof was subsequently greatly simplified in 1963 by Montroll, Potts, and Ward using Szegő's limit formula for Toeplitz determinants by treating the magnetization as the limit of correlation functions.
==== Minimal model ====
At the critical point, the two-dimensional Ising model is a two-dimensional conformal field theory. The spin and energy correlation functions are described by a minimal model, which has been exactly solved.
=== Three dimensions ===
In three as in two dimensions, the most studied case of the Ising model is the translation-invariant model on a cubic lattice with nearest-neighbor coupling in zero magnetic field. For many decades, theoreticians searched for an analytical three-dimensional solution analogous to Onsager's solution in the two-dimensional case. No such solution has been found so far, although there is no proof that one cannot exist. In three dimensions, the Ising model was shown to have a representation in terms of non-interacting fermionic strings by Alexander Polyakov and Vladimir Dotsenko. This construction has been carried out on the lattice, and the continuum limit, conjecturally describing the critical point, is unknown.
In three as in two dimensions, Peierls' argument shows that there is a phase transition. This phase transition is rigorously known to be continuous (in the sense that correlation length diverges and the magnetization goes to zero), and is called the critical point. It is believed that the critical point can be described by a renormalization group fixed point of the Wilson-Kadanoff renormalization group transformation. It is also believed that the phase transition can be described by a three-dimensional unitary conformal field theory, as evidenced by Monte Carlo simulations, exact diagonalization results in quantum models, and quantum field theoretical arguments. Although it is an open problem to establish rigorously the renormalization group picture or the conformal field theory picture, theoretical physicists have used these two methods to compute the critical exponents of the phase transition, which agree with the experiments and with the Monte Carlo simulations. This conformal field theory describing the three-dimensional Ising critical point is under active investigation using the method of the conformal bootstrap. This method currently yields the most precise information about the structure of the critical theory (see Ising critical exponents).
In 2000, Sorin Istrail of Sandia National Laboratories proved that the spin glass Ising model on a nonplanar lattice is NP-complete. That is, assuming P ≠ NP, the general spin glass Ising model is exactly solvable only in planar cases, so solutions for dimensions higher than two are also intractable. Istrail's result only concerns the spin glass model with spatially varying couplings, and tells nothing about Ising's original ferromagnetic model with equal couplings.
=== Four dimensions and above ===
In any dimension, the Ising model can be productively described by a locally varying mean field. The field is defined as the average spin value over a large region, but not so large as to include the entire system. The field still has slow variations from point to point, as the averaging volume moves. These fluctuations in the field are described by a continuum field theory in the infinite system limit.
==== Local field ====
The field H is defined as the long wavelength Fourier components of the spin variable, in the limit that the wavelengths are long. There are many ways to take the long wavelength average, depending on the details of how the short-wavelength modes are cut off. The details are not too important, since the goal is to find the statistics of H and not the spins. Once the correlations in H are known, the long-distance correlations between the spins will be proportional to the long-distance correlations in H.
For any value of the slowly varying field H, the free energy (log-probability) is a local analytic function of H and its gradients. The free energy F(H) is defined to be the sum over all Ising configurations which are consistent with the long wavelength field. Since H is a coarse description, there are many Ising configurations consistent with each value of H, so long as not too much exactness is required for the match.
Since the allowed range of values of the spin in any region only depends on the values of H within one averaging volume from that region, the free energy contribution from each region only depends on the value of H there and in the neighboring regions. So F is a sum over all regions of a local contribution, which only depends on H and its derivatives.
By symmetry in H, only even powers contribute. By reflection symmetry on a square lattice, only even powers of gradients contribute. Writing out the first few terms in the free energy:
{\displaystyle \beta F=\int d^{d}x\left[AH^{2}+\sum _{i=1}^{d}Z_{i}(\partial _{i}H)^{2}+\lambda H^{4}+\cdots \right].}
On a square lattice, symmetries guarantee that the coefficients Zi of the derivative terms are all equal. But even for an anisotropic Ising model, where the Zi's in different directions are different, the fluctuations in H are isotropic in a coordinate system where the different directions of space are rescaled.
On any lattice, the derivative term
Z
i
j
∂
i
H
∂
j
H
{\displaystyle Z_{ij}\,\partial _{i}H\,\partial _{j}H}
is a positive definite quadratic form, and can be used to define the metric for space. So any translationally invariant Ising model is rotationally invariant at long distances, in coordinates that make Zij = δij. Rotational symmetry emerges spontaneously at large distances just because there aren't very many low order terms. At higher order multicritical points, this accidental symmetry is lost.
Since βF is a function of a slowly spatially varying field, the probability of any field configuration is (omitting higher-order terms):
{\displaystyle P(H)\propto e^{-\int d^{d}x\left[AH^{2}+Z|\nabla H|^{2}+\lambda H^{4}\right]}=e^{-\beta F[H]}.}
The statistical average of any product of H terms is equal to:
{\displaystyle \langle H(x_{1})H(x_{2})\cdots H(x_{n})\rangle ={\int DH\,e^{-\int d^{d}x\left[AH^{2}+Z|\nabla H|^{2}+\lambda H^{4}\right]}H(x_{1})H(x_{2})\cdots H(x_{n}) \over \int DH\,e^{-\int d^{d}x\left[AH^{2}+Z|\nabla H|^{2}+\lambda H^{4}\right]}}.}
The denominator in this expression is called the partition function:
{\displaystyle Z=\int DH\,e^{-\int d^{d}x\left[AH^{2}+Z|\nabla H|^{2}+\lambda H^{4}\right]}}
and the integral over all possible values of H is a statistical path integral. It integrates exp(−βF) over all values of H, that is, over all the long wavelength Fourier components of the spins. F is a "Euclidean" Lagrangian for the field H. It is similar to the Lagrangian of a scalar field in quantum field theory, the difference being that all the derivative terms enter with a positive sign, and there is no overall factor of i (thus "Euclidean").
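A minimal sketch of how the functional βF can be discretized on a finite periodic grid, which is the starting point for evaluating or sampling such a path integral numerically; the grid size, the couplings A, Z, λ, and the test field are arbitrary choices, and the forward-difference gradient is just one possible discretization.

```python
import numpy as np

def beta_F(H, A, Z, lam, dx=1.0):
    """Discretized beta*F = sum_x [A H^2 + Z |grad H|^2 + lam H^4] dx^d
    for a periodic field H on a square grid (forward differences)."""
    grad_sq = sum(((np.roll(H, -1, axis=ax) - H) / dx) ** 2
                  for ax in range(H.ndim))
    density = A * H ** 2 + Z * grad_sq + lam * H ** 4
    return density.sum() * dx ** H.ndim

rng = np.random.default_rng(1)
H = 0.1 * rng.standard_normal((64, 64))      # a sample slowly varying test field
print(beta_F(H, A=0.5, Z=1.0, lam=0.25))
```

A Monte Carlo sampler for the probability e^{−βF[H]} would accept or reject local updates of H according to the change in this quantity.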
==== Dimensional analysis ====
The form of F can be used to predict which terms are most important by dimensional analysis. Dimensional analysis is not completely straightforward, because the scaling of H needs to be determined.
In the generic case, choosing the scaling law for H is easy, since the only term that contributes is the first one,
{\displaystyle F=\int d^{d}x\,AH^{2}.}
This term is the most significant, but it gives trivial behavior. This form of the free energy is ultralocal, meaning that it is a sum of an independent contribution from each point. This is like the spin-flips in the one-dimensional Ising model. Every value of H at any point fluctuates completely independently of the value at any other point.
The scale of the field can be redefined to absorb the coefficient A, and then it is clear that A only determines the overall scale of fluctuations. The ultralocal model describes the long wavelength high temperature behavior of the Ising model, since in this limit the fluctuation averages are independent from point to point.
To find the critical point, lower the temperature. As the temperature goes down, the fluctuations in H go up because the fluctuations are more correlated. This means that the average of a large number of spins does not become small as quickly as if they were uncorrelated, because they tend to be the same. This corresponds to decreasing A in the system of units where H does not absorb A. The phase transition can only happen when the subleading terms in F can contribute, but since the first term dominates at long distances, the coefficient A must be tuned to zero. This is the location of the critical point:
{\displaystyle F=\int d^{d}x\left[tH^{2}+\lambda H^{4}+Z(\nabla H)^{2}\right],}
where t is a parameter which goes through zero at the transition.
Since t is vanishing, fixing the scale of the field using this term makes the other terms blow up. Once t is small, the scale of the field can instead be set to fix the coefficient of either the H4 term or the (∇H)2 term at 1.
==== Magnetization ====
To find the magnetization, fix the scaling of H so that λ is one. Now the field H has dimension −d/4, so that H4ddx (the quartic term integrated over space) is dimensionless, and Z has dimension 2 − d/2. In this scaling, the gradient term is only important at long distances for d ≤ 4. Above four dimensions, at long wavelengths, the overall magnetization is only affected by the ultralocal terms.
There is one subtle point. The field H is fluctuating statistically, and the fluctuations can shift the zero point of t. To see how, consider H4 split in the following way:
{\displaystyle H(x)^{4}=-\langle H(x)^{2}\rangle ^{2}+2\langle H(x)^{2}\rangle H(x)^{2}+\left(H(x)^{2}-\langle H(x)^{2}\rangle \right)^{2}}
The first term is a constant contribution to the free energy, and can be ignored. The second term is a finite shift in t. The third term is a quantity that scales to zero at long distances. This means that when analyzing the scaling of t by dimensional analysis, it is the shifted t that is important. This was historically very confusing, because the shift in t at any finite λ is finite, but near the transition t is very small. The fractional change in t is very large, and in units where t is fixed the shift looks infinite.
The magnetization is at the minimum of the free energy, and this is an analytic equation. In terms of the shifted t,
{\displaystyle {\partial \over \partial H}\left(tH^{2}+\lambda H^{4}\right)=2tH+4\lambda H^{3}=0}
For t < 0, the minima are at H proportional to the square root of −t. So Landau's catastrophe argument is correct in dimensions 5 and higher. The magnetization exponent in these dimensions is equal to the mean-field value.
When t is negative, the fluctuations about the new minimum are described by a new positive quadratic coefficient. Since this term always dominates, at temperatures below the transition the fluctuations again become ultralocal at long distances.
==== Fluctuations ====
To find the behavior of fluctuations, rescale the field to fix the gradient term. Then the length scaling dimension of the field is 1 − d/2. Now the field has constant quadratic spatial fluctuations at all temperatures. The scale dimension of the H2 term is 2, while the scale dimension of the H4 term is 4 − d. For d < 4, the H4 term has positive scale dimension. In dimensions higher than 4 it has negative scale dimensions.
This is an essential difference. In dimensions higher than 4, fixing the scale of the gradient term means that the coefficient of the H4 term is less and less important at longer and longer wavelengths. The dimension at which nonquadratic contributions begin to contribute is known as the critical dimension. In the Ising model, the critical dimension is 4.
In dimensions above 4, the critical fluctuations are described by a purely quadratic free energy at long wavelengths. This means that the correlation functions are all computable as Gaussian averages:
{\displaystyle \langle S(x)S(y)\rangle \propto \langle H(x)H(y)\rangle =G(x-y)=\int {dk \over (2\pi )^{d}}{e^{ik(x-y)} \over k^{2}+t}}
valid when x − y is large. The function G(x − y) is the analytic continuation to imaginary time of the Feynman propagator, since the free energy is the analytic continuation of the quantum field action for a free scalar field. For dimensions 5 and higher, all the other correlation functions at long distances are then determined by Wick's theorem. All the odd moments are zero, by ± symmetry. The even moments are the sum over all partitions into pairs of the product of G(x − y) for each pair.
{\displaystyle \langle S(x_{1})S(x_{2})\cdots S(x_{2n})\rangle =C^{n}\sum G(x_{i1},x_{j1})G(x_{i2},x_{j2})\ldots G(x_{in},x_{jn})}
where C is the proportionality constant. So knowing G is enough. It determines all the multipoint correlations of the field.
==== The critical two-point function ====
To determine the form of G, consider that the fields in a path integral obey the classical equations of motion derived by varying the free energy:
{\displaystyle {\begin{aligned}&&\left(-\nabla _{x}^{2}+t\right)\langle H(x)H(y)\rangle &=0\\\rightarrow {}&&\nabla ^{2}G(x)-tG(x)&=0\end{aligned}}}
This is valid at noncoincident points only, since the correlations of H are singular when points collide. H obeys classical equations of motion for the same reason that quantum mechanical operators obey them—its fluctuations are defined by a path integral.
At the critical point t = 0, this is Laplace's equation, which can be solved by Gauss's method from electrostatics. Define an electric field analog by
E
=
∇
G
{\displaystyle E=\nabla G}
Away from the origin:
{\displaystyle \nabla \cdot E=0}
since G is spherically symmetric in d dimensions, and E is the radial gradient of G. Integrating over a large d − 1 dimensional sphere,
{\displaystyle \int d^{d-1}SE_{r}=\mathrm {constant} }
This gives:
{\displaystyle E={C \over r^{d-1}}}
and G can be found by integrating with respect to r.
{\displaystyle G(r)={C \over r^{d-2}}}
The constant C fixes the overall normalization of the field.
==== G(r) away from the critical point ====
When t does not equal zero, so that H is fluctuating at a temperature slightly away from critical, the two point function decays at long distances. The equation it obeys is altered:
{\displaystyle \nabla ^{2}G-tG=0\to {1 \over r^{d-1}}{d \over dr}\left(r^{d-1}{dG \over dr}\right)-tG(r)=0}
For r small compared with 1/√t, the solution diverges exactly the same way as in the critical case, but the long distance behavior is modified.
To see how, it is convenient to represent the two point function as an integral, introduced by Schwinger in the quantum field theory context:
{\displaystyle G(x)=\int d\tau {1 \over \left({\sqrt {2\pi \tau }}\right)^{d}}e^{-{x^{2} \over 4\tau }-t\tau }}
This is G, since the Fourier transform of this integral is easy. Each fixed τ contribution is a Gaussian in x, whose Fourier transform is another Gaussian of reciprocal width in k.
{\displaystyle G(k)=\int d\tau e^{-(k^{2}+t)\tau }={1 \over k^{2}+t}}
This is the inverse of the operator −∇2 + t in k-space, acting on the unit function in k-space, which is the Fourier transform of a delta function source localized at the origin. So it satisfies the same equation as G with the same boundary conditions that determine the strength of the divergence at 0.
The interpretation of the integral representation over the proper time τ is that the two point function is the sum over all random walk paths that link position 0 to position x over time τ. The density of these paths at time τ at position x is Gaussian, but the random walkers disappear at a steady rate proportional to t so that the Gaussian at time τ is diminished in height by a factor that decreases steadily exponentially. In the quantum field theory context, these are the paths of relativistically localized quanta in a formalism that follows the paths of individual particles. In the pure statistical context, these paths still appear by the mathematical correspondence with quantum fields, but their interpretation is less directly physical.
The integral representation immediately shows that G(r) is positive, since it is represented as a weighted sum of positive Gaussians. It also gives the rate of decay at large r, since the proper time τ for a random walk to reach position r is of order r2, and in this time the Gaussian height has decayed by
{\displaystyle e^{-t\tau }=e^{-tr^{2}}}.
The decay factor appropriate for position r is therefore {\displaystyle e^{-{\sqrt {t}}r}}.
A heuristic approximation for G(r) is:
{\displaystyle G(r)\approx {e^{-{\sqrt {t}}r} \over r^{d-2}}}
This is not an exact form, except in three dimensions, where interactions between paths become important. The exact forms in high dimensions are variants of Bessel functions.
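The three-dimensional statement can be checked numerically. The sketch below uses the standard normalization (4πτ)^(−d/2) for the Gaussian, which fixes the overall constant (a choice not made explicit above); with that convention the proper-time integral in d = 3 reproduces e^(−√t r)/(4πr) exactly. The values of r and t are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

def G_schwinger(r, t, d=3):
    """Proper-time (Schwinger) representation of the two-point function,
    with the Gaussian normalized as (4*pi*tau)^(-d/2)."""
    integrand = lambda tau: (4 * np.pi * tau) ** (-d / 2) * np.exp(-r ** 2 / (4 * tau) - t * tau)
    value, _ = quad(integrand, 0, np.inf)
    return value

r, t = 2.5, 0.3
print(G_schwinger(r, t))                              # numerical proper-time integral
print(np.exp(-np.sqrt(t) * r) / (4 * np.pi * r))      # exact d = 3 (Yukawa) form
```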
==== Symanzik polymer interpretation ====
The interpretation of the correlations as fixed size quanta travelling along random walks gives a way of understanding why the critical dimension of the H4 interaction is 4. The term H4 can be thought of as the square of the density of the random walkers at any point. In order for such a term to alter the finite order correlation functions, which only introduce a few new random walks into the fluctuating environment, the new paths must intersect. Otherwise, the square of the density is just proportional to the density and only shifts the H2 coefficient by a constant. But the intersection probability of random walks depends on the dimension, and random walks in dimension higher than 4 do not intersect.
The fractal dimension of an ordinary random walk is 2. The number of balls of size ε required to cover the path increases as ε−2. Two objects of fractal dimension 2 will intersect with reasonable probability only in a space of dimension 4 or less, the same condition as for a generic pair of planes. Kurt Symanzik argued that this implies that the critical Ising fluctuations in dimensions higher than 4 should be described by a free field. This argument eventually became a mathematical proof.
==== 4 − ε dimensions – renormalization group ====
The Ising model in four dimensions is described by a fluctuating field, but now the fluctuations are interacting. In the polymer representation, intersections of random walks are marginally possible. In the quantum field continuation, the quanta interact.
The negative logarithm of the probability of any field configuration H is the free energy function
{\displaystyle F=\int d^{4}x\left[{Z \over 2}|\nabla H|^{2}+{t \over 2}H^{2}+{\lambda \over 4!}H^{4}\right]\,}
The numerical factors are there to simplify the equations of motion. The goal is to understand the statistical fluctuations. Like any other non-quadratic path integral, the correlation functions have a Feynman expansion as particles travelling along random walks, splitting and rejoining at vertices. The interaction strength is parametrized by the classically dimensionless quantity λ.
Although dimensional analysis shows that both λ and Z are dimensionless, this is misleading. The long wavelength statistical fluctuations are not exactly scale invariant, and only become scale invariant when the interaction strength vanishes.
The reason is that there is a cutoff used to define H, and the cutoff defines the shortest wavelength. Fluctuations of H at wavelengths near the cutoff can affect the longer-wavelength fluctuations. If the system is scaled along with the cutoff, the parameters will scale by dimensional analysis, but then comparing parameters doesn't compare behavior because the rescaled system has more modes. If the system is rescaled in such a way that the short wavelength cutoff remains fixed, the long-wavelength fluctuations are modified.
===== Wilson renormalization =====
A quick heuristic way of studying the scaling is to cut off the H wavenumbers at a point Λ. Fourier modes of H with wavenumbers larger than Λ are not allowed to fluctuate. A rescaling of length that makes the whole system smaller increases all wavenumbers, and moves some fluctuations above the cutoff.
To restore the old cutoff, perform a partial integration over all the wavenumbers which used to be forbidden, but are now fluctuating. In Feynman diagrams, integrating over a fluctuating mode at wavenumber k links up lines carrying momentum k in a correlation function in pairs, with a factor of the inverse propagator.
Under rescaling, when the system is shrunk by a factor of (1+b), the t coefficient scales up by a factor (1+b)2 by dimensional analysis. The change in t for infinitesimal b is 2bt. The other two coefficients are dimensionless and do not change at all.
The lowest order effect of integrating out can be calculated from the equations of motion:
{\displaystyle -\nabla ^{2}H+tH=-{\lambda \over 6}H^{3}.}
This equation is an identity inside any correlation function away from other insertions. After integrating out the modes with Λ < k < (1+b)Λ, it will be a slightly different identity.
Since the form of the equation will be preserved, to find the change in coefficients it is sufficient to analyze the change in the H3 term. In a Feynman diagram expansion, the H3 term inside a correlation function has three dangling lines. Joining two of them at large wavenumber k gives a term with one dangling line, proportional to H:
{\displaystyle \delta H^{3}=3H\int _{\Lambda <|k|<(1+b)\Lambda }{d^{4}k \over (2\pi )^{4}}{1 \over (k^{2}+t)}}
The factor of 3 comes from the fact that the loop can be closed in three different ways.
The integral should be split into two parts:
{\displaystyle \int dk{1 \over k^{2}}-t\int dk{1 \over k^{2}(k^{2}+t)}=A\Lambda ^{2}b+Bbt}
The first part is not proportional to t, and in the equation of motion it can be absorbed by a constant shift in t. It is caused by the fact that the H3 term has a linear part. Only the second term, which is proportional to t, contributes to the critical scaling.
This new linear term adds to the first term on the left hand side, changing t by an amount proportional to t. The total change in t is the sum of the term from dimensional analysis and this second term from operator products:
{\displaystyle \delta t=\left(2-{B\lambda \over 2}\right)bt}
So t is rescaled, but its dimension is anomalous: it is changed by an amount proportional to the value of λ.
But λ also changes. The change in λ requires considering the lines splitting and then quickly rejoining. The lowest order process is one where one of the three lines from H3 splits into three, which quickly joins with one of the other lines from the same vertex. The correction to the vertex is
{\displaystyle \delta \lambda =-{3\lambda ^{2} \over 2}\int _{k}dk{1 \over (k^{2}+t)^{2}}=-{3\lambda ^{2} \over 2}b}
The numerical factor is three times bigger because there is an extra factor of three in choosing which of the three new lines to contract. So
{\displaystyle \delta \lambda =-3B\lambda ^{2}b}
These two equations together define the renormalization group equations in four dimensions:
{\displaystyle {\begin{aligned}{dt \over t}&=\left(2-{B\lambda \over 2}\right)b\\{d\lambda \over \lambda }&={-3B\lambda \over 2}b\end{aligned}}}
The coefficient B is determined by the formula
{\displaystyle Bb=\int _{\Lambda <|k|<(1+b)\Lambda }{d^{4}k \over (2\pi )^{4}}{1 \over k^{4}}}
and is proportional to the area of a three-dimensional sphere of radius Λ, times the width of the integration region bΛ, divided by Λ4:
{\displaystyle B=(2\pi ^{2}\Lambda ^{3}){1 \over (2\pi )^{4}}{b\Lambda }{1 \over b\Lambda ^{4}}={1 \over 8\pi ^{2}}}
In other dimensions, the constant B changes, but the same constant appears both in the t flow and in the coupling flow. The reason is that the derivative with respect to t of the closed loop with a single vertex is a closed loop with two vertices. This means that the only difference between the scaling of the coupling and the t is the combinatorial factors from joining and splitting.
===== Wilson–Fisher fixed point =====
It should be possible to investigate three dimensions starting from the four-dimensional theory, because the intersection probabilities of random walks depend continuously on the dimensionality of the space. In the language of Feynman graphs, the coupling does not change very much when the dimension is changed.
The process of continuing away from dimension 4 is not completely well defined without a prescription for how to do it. The prescription is only well defined on diagrams. It replaces the Schwinger representation in dimension 4 with the Schwinger representation in dimension 4 − ε defined by:
{\displaystyle G(x-y)=\int d\tau {1 \over \left({\sqrt {2\pi \tau }}\right)^{d}}e^{-{x^{2} \over 4\tau }-t\tau }}
In dimension 4 − ε, the coupling λ has positive scale dimension ε, and this must be added to the flow.
{\displaystyle {\begin{aligned}{d\lambda \over \lambda }&=\varepsilon -3B\lambda \\{dt \over t}&=2-\lambda B\end{aligned}}}
The coefficient B is dimension dependent, but it will cancel. The fixed point for λ is no longer zero, but at:
{\displaystyle \lambda ={\varepsilon \over 3B}}
where the scale dimension of t is altered by an amount λB = ε/3.
The magnetization exponent is altered proportionately to:
{\displaystyle {\tfrac {1}{2}}\left(1-{\varepsilon \over 3}\right)}
which is .333 in 3 dimensions (ε = 1) and .166 in 2 dimensions (ε = 2). This is not so far off from the measured exponent .308 and the Onsager two dimensional exponent .125.
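For illustration, the one-loop flow for λ can be integrated numerically to show the approach to the fixed-point value ε/(3B); here B = 1/(8π²) as computed above, and the initial coupling and step size are arbitrary choices.

```python
import numpy as np

B = 1 / (8 * np.pi ** 2)    # shell-integration constant computed above

def flow_to_fixed_point(eps, lam0=0.01, db=1e-3, steps=200_000):
    """Euler-integrate d(lambda)/db = lambda * (eps - 3*B*lambda)."""
    lam = lam0
    for _ in range(steps):
        lam += lam * (eps - 3 * B * lam) * db
    return lam

for eps in (1.0, 2.0):
    print(eps, flow_to_fixed_point(eps), eps / (3 * B))     # converges to eps/(3B)
    print("  magnetization exponent:", 0.5 * (1 - eps / 3))  # 0.333... and 0.166...
```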
==== Infinite dimensions – mean field ====
The behavior of an Ising model on a fully connected graph may be completely understood by mean-field theory. This type of description is appropriate to very-high-dimensional square lattices, because then each site has a very large number of neighbors.
The idea is that if each spin is connected to a large number of spins, only the average ratio of + spins to − spins is important, since the fluctuations about this mean will be small. The mean field H is the average fraction of spins which are + minus the average fraction of spins which are −. The energy cost of flipping a single spin in the mean field H is ±2JNH. It is convenient to redefine J to absorb the factor N, so that the limit N → ∞ is smooth. In terms of the new J, the energy cost for flipping a spin is ±2JH.
This energy cost gives the ratio of probability p that the spin is + to the probability 1−p that the spin is −. This ratio is the Boltzmann factor:
{\displaystyle {p \over 1-p}=e^{2\beta JH}}
so that
{\displaystyle p={1 \over 1+e^{-2\beta JH}}}
The mean value of the spin is given by averaging 1 and −1 with the weights p and 1 − p, so the mean value is 2p − 1. But this average is the same for all spins, and is therefore equal to H.
{\displaystyle H=2p-1={1-e^{-2\beta JH} \over 1+e^{-2\beta JH}}=\tanh(\beta JH)}
The solutions to this equation are the possible consistent mean fields. For βJ < 1 there is only the one solution at H = 0. For bigger values of β there are three solutions, and the solution at H = 0 is unstable.
The instability means that increasing the mean field above zero a little bit produces a statistical fraction of + spins which is bigger than the value of the mean field. So a mean field which fluctuates above zero will produce an even greater mean field, and will eventually settle at the stable solution. This means that below the critical temperature, defined by βJ = 1, the mean-field Ising model undergoes a phase transition in the limit of large N.
Above the critical temperature, fluctuations in H are damped because the mean field restores the fluctuation to zero field. Below the critical temperature, the mean field is driven to a new equilibrium value, which is either the positive H or negative H solution to the equation.
For βJ = 1 + ε, just below the critical temperature, the value of H can be calculated from the Taylor expansion of the hyperbolic tangent:
{\displaystyle H=\tanh(\beta JH)\approx (1+\varepsilon )H-{(1+\varepsilon )^{3}H^{3} \over 3}}
Dividing by H to discard the unstable solution at H = 0, the stable solutions are:
{\displaystyle H={\sqrt {3\varepsilon }}}
The spontaneous magnetization H grows near the critical point as the square root of the change in temperature. This is true whenever H can be calculated from the solution of an analytic equation which is symmetric between positive and negative values, which led Landau to suspect that all Ising type phase transitions in all dimensions should follow this law.
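The self-consistency equation is easy to solve by simple fixed-point iteration. The sketch below is only an illustration; the values of βJ, the seed, and the iteration count are arbitrary, and close to the transition the result approaches the √(3ε) behavior derived above.

```python
import numpy as np

def mean_field_magnetization(beta_J, H0=0.5, iterations=10_000):
    """Iterate the self-consistency equation H = tanh(beta*J*H)."""
    H = H0
    for _ in range(iterations):
        H = np.tanh(beta_J * H)
    return H

print(mean_field_magnetization(0.8))      # above T_c (beta*J < 1): flows to 0
print(mean_field_magnetization(1.5))      # below T_c (beta*J > 1): nonzero magnetization
eps = 0.01                                # just below the transition
print(mean_field_magnetization(1 + eps), np.sqrt(3 * eps))   # close to sqrt(3*eps)
```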
The mean-field exponent is universal because changes in the character of solutions of analytic equations are always described by catastrophes in the Taylor series, which is a polynomial equation. By symmetry, the equation for H must only have odd powers of H on the right hand side. Changing β should only smoothly change the coefficients. The transition happens when the coefficient of H on the right hand side is 1. Near the transition:
{\displaystyle H={\partial (\beta F) \over \partial h}=(1+A\varepsilon )H+BH^{3}+\cdots }
Whatever A and B are, so long as neither of them is tuned to zero, the spontaneous magnetization will grow as the square root of ε. This argument can only fail if the free energy βF is either non-analytic or non-generic at the exact β where the transition occurs.
But the spontaneous magnetization in magnetic systems and the density in gases near the critical point are measured very accurately. The density and the magnetization in three dimensions have the same power-law dependence on the temperature near the critical point, but the behavior from experiments is:
{\displaystyle H\propto \varepsilon ^{0.308}}
The exponent is also universal, since it is the same in the Ising model as in the experimental magnet and gas, but it is not equal to the mean-field value. This was a great surprise.
This is also true in two dimensions, where
{\displaystyle H\propto \varepsilon ^{0.125}}
But there it was not a surprise, because it was predicted by Onsager.
==== Low dimensions – block spins ====
In three dimensions, the perturbative series from the field theory is an expansion in a coupling constant λ which is not particularly small. The effective size of the coupling at the fixed point is one over the branching factor of the particle paths, so the expansion parameter is about 1/3. In two dimensions, the perturbative expansion parameter is 2/3.
But renormalization can also be productively applied to the spins directly, without passing to an average field. Historically, this approach is due to Leo Kadanoff and predated the perturbative ε expansion.
The idea is to integrate out lattice spins iteratively, generating a flow in couplings. But now the couplings are lattice energy coefficients. The fact that a continuum description exists guarantees that this iteration will converge to a fixed point when the temperature is tuned to criticality.
===== Migdal–Kadanoff renormalization =====
Write the two-dimensional Ising model with an infinite number of possible higher order interactions. To keep spin reflection symmetry, only even powers contribute:
{\displaystyle E=\sum _{ij}J_{ij}S_{i}S_{j}+\sum J_{ijkl}S_{i}S_{j}S_{k}S_{l}\ldots .}
By translation invariance, Jij is only a function of i-j. By the accidental rotational symmetry, at large i and j its size only depends on the magnitude of the two-dimensional vector i − j. The higher order coefficients are also similarly restricted.
The renormalization iteration divides the lattice into two parts – even spins and odd spins. The odd spins live on the odd-checkerboard lattice positions, and the even ones on the even-checkerboard. When the spins are indexed by the position (i,j), the odd sites are those with i + j odd and the even sites those with i + j even, and even sites are only connected to odd sites.
The two possible values of the odd spins will be integrated out, by summing over both possible values. This will produce a new free energy function for the remaining even spins, with new adjusted couplings. The even spins are again in a lattice, with axes tilted at 45 degrees to the old ones. Unrotating the system restores the old configuration, but with new parameters. These parameters describe the interaction between spins at distances
{\textstyle {\sqrt {2}}} larger.
Starting from the Ising model and repeating this iteration eventually changes all the couplings. When the temperature is higher than the critical temperature, the couplings will converge to zero, since the spins at large distances are uncorrelated. But when the temperature is critical, there will be nonzero coefficients linking spins at all orders. The flow can be approximated by only considering the first few terms. This truncated flow will produce better and better approximations to the critical exponents when more terms are included.
The simplest approximation is to keep only the usual J term, and discard everything else. This will generate a flow in J, analogous to the flow in t at the fixed point of λ in the ε expansion.
To find the change in J, consider the four neighbors of an odd site. These are the only spins which interact with it. The multiplicative contribution to the partition function from the sum over the two values of the spin at the odd site is:
{\displaystyle e^{J(N_{+}-N_{-})}+e^{J(N_{-}-N_{+})}=2\cosh(J[N_{+}-N_{-}])}
where N± is the number of neighbors which are ±. Ignoring the factor of 2, the free energy contribution from this odd site is:
{\displaystyle F=\log(\cosh[J(N_{+}-N_{-})]).}
This includes nearest neighbor and next-nearest neighbor interactions, as expected, but also a four-spin interaction which is to be discarded. To truncate to nearest neighbor interactions, consider that the difference in energy between all spins the same and equal numbers + and – is:
{\displaystyle \Delta F=\ln(\cosh[4J]).}
From nearest neighbor couplings, the difference in energy between all spins equal and staggered spins is 8J. The difference in energy between all spins equal and nonstaggered but net zero spin is 4J. Ignoring four-spin interactions, a reasonable truncation is the average of these two energies or 6J. Since each link will contribute to two odd spins, the right value to compare with the previous one is half that:
{\displaystyle 3J'=\ln(\cosh[4J]).}
For small J, this quickly flows to zero coupling. Large J's flow to large couplings. The magnetization exponent is determined from the slope of the equation at the fixed point.
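A minimal numerical sketch of this truncated flow: locate the nontrivial fixed point of 3J′ = ln(cosh 4J), estimate the slope there, and watch couplings on either side flow away from it. The bracketing interval, step counts, and starting couplings are arbitrary choices.

```python
import numpy as np
from scipy.optimize import brentq

def J_prime(J):
    """One truncated Migdal-Kadanoff step: 3*J' = ln(cosh(4*J))."""
    return np.log(np.cosh(4 * J)) / 3.0

# Nontrivial fixed point J* of the truncated recursion (J' = J).
J_star = brentq(lambda J: J_prime(J) - J, 0.1, 2.0)
print("fixed point J* =", J_star)

# Slope of the recursion at the fixed point, which controls the exponents.
dJ = 1e-6
slope = (J_prime(J_star + dJ) - J_prime(J_star - dJ)) / (2 * dJ)
print("slope at J* =", slope)

# Couplings below J* flow towards zero (disordered); above J* they grow (ordered).
for J0 in (0.5, 0.9):
    J = J0
    for _ in range(10):
        J = J_prime(J)
    print(J0, "->", J)
```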
Variants of this method produce good numerical approximations for the critical exponents when many terms are included, in both two and three dimensions.
== External links ==
Ising model at The Net Advance of Physics
Barry Arthur Cipra, "The Ising model is NP-complete", SIAM News, Vol. 33, No. 6; online edition (.pdf)
Science World article on the Ising Model
A dynamical 2D Ising java applet by UCSC
A dynamical 2D Ising java applet
A larger/more complicated 2D Ising java applet Archived 2020-11-25 at the Wayback Machine
“I sing well-tempered” The Ising Model: A simple model for critical behavior in a system of spins by Dirk Brockman, is an interactive simulation that allows users to export the working code to a presentation slide
Ising Model simulation by Enrique Zeleny, the Wolfram Demonstrations Project
Phase transitions on lattices
Three-dimensional proof for Ising Model impossible, Sandia researcher claims
Interactive Monte Carlo simulation of the Ising, XY and Heisenberg models with 3D graphics (requires WebGL compatible browser)
Ising Model code , image denoising example with Ising Model
David Tong's Lecture Notes provide a good introduction
The Cartoon Picture of Magnets That Has Transformed Science - Quanta Magazine article about Ising model
Simulation of the 2-dimensional Ising model in Julia: https://github.com/cossio/SquareIsingModel.jl
Queueing theory is the mathematical study of waiting lines, or queues. A queueing model is constructed so that queue lengths and waiting time can be predicted. Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service.
Queueing theory has its origins in research by Agner Krarup Erlang, who created models to describe the system of incoming calls at the Copenhagen Telephone Exchange Company. These ideas were seminal to the field of teletraffic engineering and have since seen applications in telecommunications, traffic engineering, computing, project management, and particularly industrial engineering, where they are applied in the design of factories, shops, offices, and hospitals.
== Spelling ==
The spelling "queueing" over "queuing" is typically encountered in the academic research field. In fact, one of the flagship journals of the field is Queueing Systems.
== Description ==
Queueing theory is one of the major areas of study in the discipline of management science. Through management science, businesses are able to solve a variety of problems using different scientific and mathematical approaches. Queueing analysis is the probabilistic analysis of waiting lines, and thus the results, also referred to as the operating characteristics, are probabilistic rather than deterministic. The probability that n customers are in the queueing system, the average number of customers in the queueing system, the average number of customers in the waiting line, the average time spent by a customer in the total queuing system, the average time spent by a customer in the waiting line, and finally the probability that the server is busy or idle are all of the different operating characteristics that these queueing models compute. The overall goal of queueing analysis is to compute these characteristics for the current system and then test several alternatives that could lead to improvement. Computing the operating characteristics for the current system and comparing the values to the characteristics of the alternative systems allows managers to see the pros and cons of each potential option. These systems help in the final decision making process by showing ways to increase savings, reduce waiting time, improve efficiency, etc. The main queueing models that can be used are the single-server waiting line system and the multiple-server waiting line system, which are discussed further below. These models can be further differentiated depending on whether service times are constant or undefined, the queue length is finite, the calling population is finite, etc.
== Single queueing nodes ==
A queue or queueing node can be thought of as nearly a black box. Jobs (also called customers or requests, depending on the field) arrive to the queue, possibly wait some time, take some time being processed, and then depart from the queue.
However, the queueing node is not quite a pure black box since some information is needed about the inside of the queueing node. The queue has one or more servers which can each be paired with an arriving job. When the job is completed and departs, that server will again be free to be paired with another arriving job.
An analogy often used is that of the cashier at a supermarket. Customers arrive, are processed by the cashier, and depart. Each cashier processes one customer at a time, and hence this is a queueing node with only one server. A setting where a customer will leave immediately if the cashier is busy when the customer arrives is referred to as a queue with no buffer (or no waiting area). A setting with a waiting zone for up to n customers is called a queue with a buffer of size n.
=== Birth-death process ===
The behaviour of a single queue (also called a queueing node) can be described by a birth–death process, which describes the arrivals and departures from the queue, along with the number of jobs currently in the system. If k denotes the number of jobs in the system (either being serviced or waiting if the queue has a buffer of waiting jobs), then an arrival increases k by 1 and a departure decreases k by 1.
The system transitions between values of k by "births" and "deaths", which occur at the arrival rates
λi and the departure rates μi for each job i. For a queue, these rates are generally considered not to vary with the number of jobs in the queue, so a single average rate of arrivals/departures per unit time is assumed. Under this assumption, this process has an arrival rate of λ = avg(λ1, λ2, …, λk) and a departure rate of μ = avg(μ1, μ2, …, μk).
==== Balance equations ====
The steady state equations for the birth-and-death process, known as the balance equations, are as follows. Here
Pn denotes the steady state probability to be in state n.
{\displaystyle \mu _{1}P_{1}=\lambda _{0}P_{0}}
{\displaystyle \lambda _{0}P_{0}+\mu _{2}P_{2}=(\lambda _{1}+\mu _{1})P_{1}}
{\displaystyle \lambda _{n-1}P_{n-1}+\mu _{n+1}P_{n+1}=(\lambda _{n}+\mu _{n})P_{n}}
The first two equations imply
{\displaystyle P_{1}={\frac {\lambda _{0}}{\mu _{1}}}P_{0}}
and
{\displaystyle P_{2}={\frac {\lambda _{1}}{\mu _{2}}}P_{1}+{\frac {1}{\mu _{2}}}(\mu _{1}P_{1}-\lambda _{0}P_{0})={\frac {\lambda _{1}}{\mu _{2}}}P_{1}={\frac {\lambda _{1}\lambda _{0}}{\mu _{2}\mu _{1}}}P_{0}}.
By mathematical induction,
{\displaystyle P_{n}={\frac {\lambda _{n-1}\lambda _{n-2}\cdots \lambda _{0}}{\mu _{n}\mu _{n-1}\cdots \mu _{1}}}P_{0}=P_{0}\prod _{i=0}^{n-1}{\frac {\lambda _{i}}{\mu _{i+1}}}}.
The condition
{\displaystyle \sum _{n=0}^{\infty }P_{n}=P_{0}+P_{0}\sum _{n=1}^{\infty }\prod _{i=0}^{n-1}{\frac {\lambda _{i}}{\mu _{i+1}}}=1}
leads to
{\displaystyle P_{0}={\frac {1}{1+\sum _{n=1}^{\infty }\prod _{i=0}^{n-1}{\frac {\lambda _{i}}{\mu _{i+1}}}}}}
which, together with the equation for Pn (n ≥ 1), fully describes the required steady state probabilities.
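A short sketch of the product formula above, truncated at a finite number of states so that the normalizing sum is finite; the truncation level and the constant example rates are arbitrary choices (constant rates reproduce the geometric M/M/1 distribution derived in a later section).

```python
import numpy as np

def steady_state_probabilities(arrival_rates, service_rates):
    """Steady-state probabilities of a birth-death queue from the product formula.
    arrival_rates[i] = lambda_i (i = 0..N-1), service_rates[i] = mu_{i+1}."""
    ratios = np.asarray(arrival_rates, dtype=float) / np.asarray(service_rates, dtype=float)
    unnormalized = np.concatenate(([1.0], np.cumprod(ratios)))   # prod_{i<n} lambda_i / mu_{i+1}
    return unnormalized / unnormalized.sum()

# Example: constant rates lambda = 3, mu = 5, truncated at 50 waiting places.
N = 50
P = steady_state_probabilities([3.0] * N, [5.0] * N)
print(P[:4])                          # P_0, P_1, P_2, ...
print((np.arange(N + 1) * P).sum())   # mean number of jobs in the (truncated) system
```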
=== Kendall's notation ===
Single queueing nodes are usually described using Kendall's notation in the form A/S/c where A describes the distribution of durations between each arrival to the queue, S the distribution of service times for jobs, and c the number of servers at the node. For an example of the notation, the M/M/1 queue is a simple model where a single server serves jobs that arrive according to a Poisson process (where inter-arrival durations are exponentially distributed) and have exponentially distributed service times (the M denotes a Markov process). In an M/G/1 queue, the G stands for "general" and indicates an arbitrary probability distribution for service times.
=== Example analysis of an M/M/1 queue ===
Consider a queue with one server and the following characteristics:
λ: the arrival rate (the reciprocal of the expected time between each customer arriving, e.g. 10 customers per second)
μ: the reciprocal of the mean service time (the expected number of consecutive service completions per the same unit time, e.g. per 30 seconds)
n: the parameter characterizing the number of customers in the system
Pn: the probability of there being n customers in the system in steady state
Further, let En represent the number of times the system enters state n, and Ln represent the number of times the system leaves state n. Then |En − Ln| ∈ {0, 1} for all n. That is, the number of times the system leaves a state differs by at most 1 from the number of times it enters that state, since it will either return into that state at some time in the future (En = Ln) or not (|En − Ln| = 1).
When the system arrives at a steady state, the arrival rate should be equal to the departure rate.
Thus the balance equations
{\displaystyle \mu P_{1}=\lambda P_{0}}
{\displaystyle \lambda P_{0}+\mu P_{2}=(\lambda +\mu )P_{1}}
{\displaystyle \lambda P_{n-1}+\mu P_{n+1}=(\lambda +\mu )P_{n}}
imply
{\displaystyle P_{n}={\frac {\lambda }{\mu }}P_{n-1},\ n=1,2,\ldots }
The fact that P0 + P1 + ⋯ = 1 leads to the geometric distribution formula
{\displaystyle P_{n}=(1-\rho )\rho ^{n}}
where ρ = λ/μ < 1.
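From the geometric distribution, the usual operating characteristics of the M/M/1 queue follow directly; the sketch below uses arbitrary example rates and the standard closed forms L = ρ/(1 − ρ) and W = 1/(μ − λ).

```python
lam, mu = 3.0, 5.0            # arrival and service rates; rho < 1 is required for stability
rho = lam / mu

def p_n(n):
    """Steady-state probability of n customers in an M/M/1 queue."""
    return (1 - rho) * rho ** n

mean_in_system = rho / (1 - rho)   # L = sum over n of n * P_n
mean_time_in_system = 1 / (mu - lam)   # W, consistent with Little's law L = lam * W
print(p_n(0), p_n(1))
print(mean_in_system, mean_time_in_system)
```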
=== Simple two-equation queue ===
A common basic queueing system is attributed to Erlang and is a modification of Little's Law. Given an arrival rate λ, a dropout rate σ, and a departure rate μ, length of the queue L is defined as:
{\displaystyle L={\frac {\lambda -\sigma }{\mu }}}.
Assuming an exponential distribution for the rates, the waiting time W can be defined as the proportion of arrivals that are served. This is equal to the exponential survival rate of those who do not drop out over the waiting period, giving:
{\displaystyle {\frac {\mu }{\lambda }}=e^{-W{\mu }}}
The second equation is commonly rewritten as:
{\displaystyle W={\frac {1}{\mu }}\mathrm {ln} {\frac {\lambda }{\mu }}}
The two-stage one-box model is common in epidemiology.
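A small sketch evaluating the two equations above; the rates are arbitrary illustrative values in consistent units.

```python
import math

def queue_length(lam, sigma, mu):
    """L = (lambda - sigma) / mu : arrivals minus dropouts, scaled by the departure rate."""
    return (lam - sigma) / mu

def waiting_time(lam, mu):
    """W = (1/mu) * ln(lambda/mu), rearranged from mu/lambda = exp(-W*mu)."""
    return math.log(lam / mu) / mu

# Illustrative rates (per hour): 20 arrivals, 2 dropouts, 15 departures.
print(queue_length(20.0, 2.0, 15.0))
print(waiting_time(20.0, 15.0))
```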
== History ==
In 1909, Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on what would now be called queueing theory. He modeled the number of telephone calls arriving at an exchange by a Poisson process and solved the M/D/1 queue in 1917 and M/D/k queueing model in 1920. In Kendall's notation:
M stands for "Markov" or "memoryless", and means arrivals occur according to a Poisson process
D stands for "deterministic", and means jobs arriving at the queue require a fixed amount of service
k describes the number of servers at the queueing node (k = 1, 2, 3, ...)
If the node has more jobs than servers, then jobs will queue and wait for service.
The M/G/1 queue was solved by Felix Pollaczek in 1930, a solution later recast in probabilistic terms by Aleksandr Khinchin and now known as the Pollaczek–Khinchine formula.
After the 1940s, queueing theory became an area of research interest to mathematicians. In 1953, David George Kendall solved the GI/M/k queue and introduced the modern notation for queues, now known as Kendall's notation. In 1957, Pollaczek studied the GI/G/1 using an integral equation. John Kingman gave a formula for the mean waiting time in a G/G/1 queue, now known as Kingman's formula.
Leonard Kleinrock worked on the application of queueing theory to message switching in the early 1960s and packet switching in the early 1970s. His initial contribution to this field was his doctoral thesis at the Massachusetts Institute of Technology in 1962, published in book form in 1964. His theoretical work published in the early 1970s underpinned the use of packet switching in the ARPANET, a forerunner to the Internet.
The matrix geometric method and matrix analytic methods have allowed queues with phase-type distributed inter-arrival and service time distributions to be considered.
Systems with coupled orbits are an important part in queueing theory in the application to wireless networks and signal processing.
Modern day application of queueing theory concerns among other things product development where (material) products have a spatiotemporal existence, in the sense that products have a certain volume and a certain duration.
Problems such as performance metrics for the M/G/k queue remain an open problem.
== Service disciplines ==
Various scheduling policies can be used at queueing nodes:
First in, first out
Also called first-come, first-served (FCFS), this principle states that customers are served one at a time and that the customer that has been waiting the longest is served first.
Last in, first out
This principle also serves customers one at a time, but the customer with the shortest waiting time will be served first. Also known as a stack.
Processor sharing
Service capacity is shared equally between customers.
Priority
Customers with high priority are served first. Priority queues can be of two types: non-preemptive (where a job in service cannot be interrupted) and preemptive (where a job in service can be interrupted by a higher-priority job). No work is lost in either model.
Shortest job first
The next job to be served is the one with the smallest size.
Preemptive shortest job first
The next job to be served is the one with the smallest original size.
Shortest remaining processing time
The next job to serve is the one with the smallest remaining processing requirement.
Service facility
Single server: customers line up and there is only one server
Several parallel servers (single queue): customers line up and there are several servers
Several parallel servers (several queues): there are many counters and customers can decide for which to queue
Unreliable server
Server failures occur according to a stochastic (random) process (usually Poisson) and are followed by setup periods during which the server is unavailable. The interrupted customer remains in the service area until server is fixed.
Customer waiting behavior
Balking: customers decide not to join the queue if it is too long
Jockeying: customers switch between queues if they think they will get served faster by doing so
Reneging: customers leave the queue if they have waited too long for service
Arriving customers not served (either due to the queue having no buffer, or due to balking or reneging by the customer) are also known as dropouts. The average rate of dropouts is a significant parameter describing a queue.
== Queueing networks ==
Queue networks are systems in which multiple queues are connected by customer routing. When a customer is serviced at one node, it can join another node and queue for service, or leave the network.
For networks of m nodes, the state of the system can be described by an m–dimensional vector (x1, x2, ..., xm) where xi represents the number of customers at each node.
The simplest non-trivial networks of queues are called tandem queues. The first significant results in this area were Jackson networks, for which an efficient product-form stationary distribution exists and for which mean value analysis (which allows average metrics such as throughput and sojourn times to be computed) applies. If the total number of customers in the network remains constant, the network is called a closed network and has been shown to also have a product–form stationary distribution by the Gordon–Newell theorem. This result was extended to the BCMP network, where a network with very general service times, regimes, and customer routing is shown to also exhibit a product–form stationary distribution. The normalizing constant can be calculated with Buzen's algorithm, proposed in 1973 (sketched below).
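As a concrete illustration of the product-form machinery, the normalizing constant of a closed Gordon–Newell network can be computed with the convolution recursion underlying Buzen's algorithm. The sketch below assumes single-server nodes and uses made-up relative utilizations; the function name and example values are illustrative only, not taken from any source cited here.

```python
import numpy as np

# Minimal sketch of Buzen's convolution algorithm for the normalizing constant
# G(N) of a closed Gordon-Newell network with single-server nodes.
# x[i] is the relative utilization of node i (visit ratio / service rate).
def buzen(x, N):
    M = len(x)
    g = np.zeros((N + 1, M))
    g[0, :] = 1.0
    for n in range(1, N + 1):
        g[n, 0] = x[0] ** n
        for m in range(1, M):
            g[n, m] = g[n, m - 1] + x[m] * g[n - 1, m]
    return g                      # g[n, M-1] is the normalizing constant G(n)

x = [0.5, 0.8, 1.2]               # assumed relative utilizations
N = 4                             # customers circulating in the closed network
g = buzen(x, N)
G = g[N, -1]
# Stationary probability of a state (n1, ..., nM) summing to N is prod(x_i^n_i) / G(N);
# for a reference node with unit visit ratio, throughput is G(N-1) / G(N).
print(f"G({N}) = {G:.4f}, reference throughput = {g[N - 1, -1] / G:.4f}")
```

The same table g also yields marginal queue-length distributions, which is part of what makes the convolution approach attractive in practice.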
Networks of customers have also been investigated, such as Kelly networks, where customers of different classes experience different priority levels at different service nodes. Another type of network are G-networks, first proposed by Erol Gelenbe in 1993: these networks do not assume exponential time distributions like the classic Jackson network.
=== Routing algorithms ===
In discrete-time networks where there is a constraint on which service nodes can be active at any time, the max-weight scheduling algorithm chooses a service policy to give optimal throughput in the case that each job visits only a single service node. In the more general case where jobs can visit more than one node, backpressure routing gives optimal throughput. A network scheduler must choose a queueing algorithm, which affects the characteristics of the larger network.
=== Mean-field limits ===
Mean-field models consider the limiting behaviour of the empirical measure (proportion of queues in different states) as the number of queues m approaches infinity. The impact of other queues on any given queue in the network is approximated by a differential equation. The deterministic model converges to the same stationary distribution as the original model.
=== Heavy traffic/diffusion approximations ===
In a system with high occupancy rates (utilisation near 1), a heavy traffic approximation can be used to approximate the queueing length process by a reflected Brownian motion, Ornstein–Uhlenbeck process, or more general diffusion process. The number of dimensions of the Brownian process is equal to the number of queueing nodes, with the diffusion restricted to the non-negative orthant.
=== Fluid limits ===
Fluid models are continuous deterministic analogs of queueing networks obtained by taking the limit when the process is scaled in time and space, allowing heterogeneous objects. This scaled trajectory converges to a deterministic equation which allows the stability of the system to be proven. It is known that a queueing network can be stable but have an unstable fluid limit.
=== Queueing Applications ===
Queueing theory finds widespread application in computer science and information technology. In networking, for instance, queues are integral to routers and switches, where packets queue up for transmission. By applying queueing theory principles, designers can optimize these systems, ensuring responsive performance and efficient resource utilization.
Beyond the technological realm, queueing theory is relevant to everyday experiences. Whether waiting in line at a supermarket or for public transportation, understanding the principles of queueing theory provides valuable insights into optimizing these systems for enhanced user satisfaction. At some point, nearly everyone takes part in some form of queueing, and what may seem an inconvenience to the individual is often the most effective way to organize the underlying service.
Queueing theory, a discipline rooted in applied mathematics and computer science, is a field dedicated to the study and analysis of queues, or waiting lines, and their implications across a diverse range of applications. This theoretical framework has proven instrumental in understanding and optimizing the efficiency of systems characterized by the presence of queues. The study of queues is essential in contexts such as traffic systems, computer networks, telecommunications, and service operations.
Queueing theory delves into various foundational concepts, with the arrival process and service process being central. The arrival process describes the manner in which entities join the queue over time, often modeled using stochastic processes like Poisson processes. The efficiency of queueing systems is gauged through key performance metrics. These include the average queue length, average wait time, and system throughput. These metrics provide insights into the system's functionality, guiding decisions aimed at enhancing performance and reducing wait times.
== See also ==
== References ==
== Further reading ==
Gross, Donald; Carl M. Harris (1998). Fundamentals of Queueing Theory. Wiley. ISBN 978-0-471-32812-4. Online
Zukerman, Moshe (2013). Introduction to Queueing Theory and Stochastic Teletraffic Models (PDF). arXiv:1307.2968.
Deitel, Harvey M. (1984) [1982]. An introduction to operating systems (revisited first ed.). Addison-Wesley. p. 673. ISBN 978-0-201-14502-1. chap.15, pp. 380–412
Gelenbe, Erol; Isi Mitrani (2010). Analysis and Synthesis of Computer Systems. World Scientific 2nd Edition. ISBN 978-1-908978-42-4.
Newell, Gordon F. (1 June 1971). Applications of Queueing Theory. Chapman and Hall.
Leonard Kleinrock, Information Flow in Large Communication Nets, (MIT, Cambridge, May 31, 1961) Proposal for a Ph.D. Thesis
Leonard Kleinrock. Information Flow in Large Communication Nets (RLE Quarterly Progress Report, July 1961)
Leonard Kleinrock. Communication Nets: Stochastic Message Flow and Delay (McGraw-Hill, New York, 1964)
Kleinrock, Leonard (2 January 1975). Queueing Systems: Volume I – Theory. New York: Wiley Interscience. pp. 417. ISBN 978-0-471-49110-1.
Kleinrock, Leonard (22 April 1976). Queueing Systems: Volume II – Computer Applications. New York: Wiley Interscience. pp. 576. ISBN 978-0-471-49111-8.
Lazowska, Edward D.; John Zahorjan; G. Scott Graham; Kenneth C. Sevcik (1984). Quantitative System Performance: Computer System Analysis Using Queueing Network Models. Prentice-Hall, Inc. ISBN 978-0-13-746975-8.
Jon Kleinberg; Éva Tardos (30 June 2013). Algorithm Design. Pearson. ISBN 978-1-292-02394-6.
== External links ==
Teknomo's Queueing theory tutorial and calculators
Virtamo's Queueing Theory Course
Myron Hlynka's Queueing Theory Page
LINE: a general-purpose engine to solve queueing models | Wikipedia/Queueing_theory |
In actuarial science and applied probability, ruin theory (sometimes risk theory or collective risk theory) uses mathematical models to describe an insurer's vulnerability to insolvency/ruin. In such models key quantities of interest are the probability of ruin, distribution of surplus immediately prior to ruin and deficit at time of ruin.
== Classical model ==
The theoretical foundation of ruin theory, known as the Cramér–Lundberg model (or classical compound-Poisson risk model, classical risk process or Poisson risk process) was introduced in 1903 by the Swedish actuary Filip Lundberg. Lundberg's work was republished in the 1930s by Harald Cramér.
The model describes an insurance company that experiences two opposing cash flows: incoming cash premiums and outgoing claims. Premiums arrive at a constant rate {\textstyle c>0} from customers, and claims arrive according to a Poisson process {\displaystyle N_{t}} with intensity {\textstyle \lambda }; the claim sizes are independent and identically distributed non-negative random variables {\displaystyle \xi _{i}} with distribution {\textstyle F} and mean {\textstyle \mu } (they form a compound Poisson process). So for an insurer who starts with initial surplus {\textstyle x}, the aggregate assets {\displaystyle X_{t}} are given by:

{\displaystyle X_{t}=x+ct-\sum _{i=1}^{N_{t}}\xi _{i}\quad {\text{ for }}t\geq 0.}
The central object of the model is to investigate the probability that the insurer's surplus level eventually falls below zero (making the firm bankrupt). This quantity, called the probability of ultimate ruin, is defined as

{\displaystyle \psi (x)=\mathbb {P} ^{x}\{\tau <\infty \},}

where the time of ruin is {\displaystyle \tau =\inf\{t>0\,:\,X(t)<0\}} with the convention that {\displaystyle \inf \varnothing =\infty }. This can be computed exactly using the Pollaczek–Khinchine formula as (the ruin function here is equivalent to the tail function of the stationary distribution of waiting time in an M/G/1 queue)

{\displaystyle \psi (x)=\left(1-{\frac {\lambda \mu }{c}}\right)\sum _{n=0}^{\infty }\left({\frac {\lambda \mu }{c}}\right)^{n}(1-F_{l}^{\ast n}(x)),}

where {\displaystyle F_{l}} is the transform of the tail distribution of {\displaystyle F},

{\displaystyle F_{l}(x)={\frac {1}{\mu }}\int _{0}^{x}\left(1-F(u)\right){\text{d}}u,}

and {\displaystyle \cdot ^{\ast n}} denotes the {\displaystyle n}-fold convolution. In the case where the claim sizes are exponentially distributed, this simplifies to

{\displaystyle \psi (x)={\frac {\lambda \mu }{c}}e^{-\left({\frac {1}{\mu }}-{\frac {\lambda }{c}}\right)x}.}
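As a quick numerical check of the exponential-claims formula above, the closed form can be compared with a crude Monte Carlo simulation of the surplus process. This is a minimal sketch with made-up parameter values; the finite simulation horizon is an approximation to "ultimate" ruin and is not part of the model itself.

```python
import numpy as np

# Assumed parameters for illustration; the net profit condition c > lam * mu holds.
lam, mu, c, x0 = 1.0, 1.0, 1.5, 2.0

def psi_exact(x):
    # psi(x) = (lam*mu/c) * exp(-(1/mu - lam/c) * x) for Exp(mean mu) claim sizes
    return (lam * mu / c) * np.exp(-(1.0 / mu - lam / c) * x)

def psi_mc(x, horizon=200.0, n_paths=20_000, seed=1):
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, x
        while t < horizon:
            dt = rng.exponential(1.0 / lam)   # time until the next claim
            t += dt
            surplus += c * dt                 # premiums collected since the last claim
            surplus -= rng.exponential(mu)    # claim size with mean mu
            if surplus < 0:                   # ruin: surplus falls below zero
                ruined += 1
                break
    return ruined / n_paths

print(f"exact psi({x0}) = {psi_exact(x0):.4f}, Monte Carlo estimate = {psi_mc(x0):.4f}")
```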
== Sparre Andersen model ==
E. Sparre Andersen extended the classical model in 1957 by allowing claim inter-arrival times to have arbitrary distribution functions.
{\displaystyle X_{t}=x+ct-\sum _{i=1}^{N_{t}}\xi _{i}\quad {\text{ for }}t\geq 0,}

where the claim number process {\displaystyle (N_{t})_{t\geq 0}} is a renewal process and {\displaystyle (\xi _{i})_{i\in \mathbb {N} }} are independent and identically distributed random variables. The model furthermore assumes that {\displaystyle \xi _{i}>0} almost surely and that {\displaystyle (N_{t})_{t\geq 0}} and {\displaystyle (\xi _{i})_{i\in \mathbb {N} }} are independent. The model is also known as the renewal risk model.
== Expected discounted penalty function ==
Michael R. Powers and Gerber and Shiu analyzed the behavior of the insurer's surplus through the expected discounted penalty function, which is commonly referred to as the Gerber–Shiu function in the ruin literature and named after the actuarial scientists Elias S. W. Shiu and Hans-Ulrich Gerber. It is arguable whether the function should have been called the Powers–Gerber–Shiu function due to the contribution of Powers.
In Powers' notation, this is defined as

{\displaystyle m(x)=\mathbb {E} ^{x}[e^{-\delta \tau }K_{\tau }],}

where {\displaystyle \delta } is the discounting force of interest, {\displaystyle K_{\tau }} is a general penalty function reflecting the economic costs to the insurer at the time of ruin, and the expectation {\displaystyle \mathbb {E} ^{x}} corresponds to the probability measure {\displaystyle \mathbb {P} ^{x}}. The function is called the expected discounted cost of insolvency by Powers.
In Gerber and Shiu's notation, it is given as

{\displaystyle m(x)=\mathbb {E} ^{x}[e^{-\delta \tau }w(X_{\tau -},X_{\tau })\mathbb {I} (\tau <\infty )],}

where {\displaystyle \delta } is the discounting force of interest, {\displaystyle w(X_{\tau -},X_{\tau })} is a penalty function capturing the economic costs to the insurer at the time of ruin (assumed to depend on the surplus prior to ruin {\displaystyle X_{\tau -}} and the deficit at ruin {\displaystyle X_{\tau }}), and the expectation {\displaystyle \mathbb {E} ^{x}} corresponds to the probability measure {\displaystyle \mathbb {P} ^{x}}. Here the indicator function {\displaystyle \mathbb {I} (\tau <\infty )} emphasizes that the penalty is exercised only when ruin occurs.
It is quite intuitive to interpret the expected discounted penalty function. Since the function measures the actuarial present value of the penalty that occurs at {\displaystyle \tau }, the penalty function is multiplied by the discounting factor {\displaystyle e^{-\delta \tau }} and then averaged over the probability distribution of the waiting time to {\displaystyle \tau }. While Gerber and Shiu applied this function to the classical compound-Poisson model, Powers argued that an insurer's surplus is better modeled by a family of diffusion processes.
There are a great variety of ruin-related quantities that fall into the category of the expected discounted penalty function.
Other finance-related quantities belonging to the class of the expected discounted penalty function include the perpetual American put option, the contingent claim at optimal exercise time, and more.
== Recent developments ==
Compound-Poisson risk model with constant interest
Compound-Poisson risk model with stochastic interest
Brownian-motion risk model
General diffusion-process model
Markov-modulated risk model
Accident probability factor (APF) calculator – risk analysis model
== See also ==
Financial risk
Volterra integral equation § Application: Ruin theory
Chance-constrained portfolio selection
== References ==
== Further reading ==
Gerber, H.U. (1979). An Introduction to Mathematical Risk Theory. Philadelphia: S.S. Heubner Foundation Monograph Series 8.
Asmussen S., Albrecher H. (2010). Ruin Probabilities, 2nd Edition. Singapore: World Scientific Publishing Co. | Wikipedia/Cramér–Lundberg_model |
In finance, the Chen model is a mathematical model describing the evolution of interest rates. It is a type of "three-factor model" (short-rate model) as it describes interest rate movements as driven by three sources of market risk. It was the first stochastic mean and stochastic volatility model and it was published in 1994 by Lin Chen, an economist, theoretical physicist and former lecturer/professor at Beijing Institute of Technology, American University of Beirut, Yonsei University of Korea, and Sun Yat-sen University.
The dynamics of the instantaneous interest rate are specified by the stochastic differential equations:
{\displaystyle dr_{t}=\kappa (\theta _{t}-r_{t})\,dt+{\sqrt {r_{t}}}\,{\sqrt {\sigma _{t}}}\,dW_{1},}

{\displaystyle d\theta _{t}=\nu (\zeta -\theta _{t})\,dt+\alpha \,{\sqrt {\theta _{t}}}\,dW_{2},}

{\displaystyle d\sigma _{t}=\mu (\beta -\sigma _{t})\,dt+\eta \,{\sqrt {\sigma _{t}}}\,dW_{3}.}
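A simple Euler–Maruyama discretisation shows how the three equations interact. It is only a sketch: the parameter values and initial conditions are made up, and flooring the square-root arguments at zero is a numerical convenience, not something prescribed by the model.

```python
import numpy as np

kappa, nu, mu_ = 0.5, 0.3, 0.4          # mean-reversion speeds (assumed)
zeta, beta = 0.05, 0.02                 # long-run levels of theta_t and sigma_t (assumed)
alpha, eta = 0.05, 0.05                 # volatilities of theta_t and sigma_t (assumed)
r, theta, sigma = 0.04, 0.05, 0.02      # initial short rate, stochastic mean, stochastic volatility

dt, n_steps = 1 / 252, 252 * 5          # daily steps over five years
rng = np.random.default_rng(0)
path = np.empty(n_steps)
for k in range(n_steps):
    dw1, dw2, dw3 = rng.normal(0.0, np.sqrt(dt), size=3)
    r     += kappa * (theta - r)    * dt + np.sqrt(max(r, 0.0) * max(sigma, 0.0)) * dw1
    theta += nu    * (zeta - theta) * dt + alpha * np.sqrt(max(theta, 0.0)) * dw2
    sigma += mu_   * (beta - sigma) * dt + eta   * np.sqrt(max(sigma, 0.0)) * dw3
    path[k] = r
print(f"short rate after 5 years: {path[-1]:.4f}; average over the path: {path.mean():.4f}")
```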
In an authoritative review of modern finance (Continuous-Time Methods in Finance: A Review and an Assessment), the Chen model is listed along with the models of Robert C. Merton, Oldrich Vasicek, John C. Cox, Stephen A. Ross, Darrell Duffie, John Hull, Robert A. Jarrow, and Emanuel Derman as a major term structure model.
Different variants of the Chen model are still being used in financial institutions worldwide. James and Webber devote a section to discussing the Chen model in their book; Gibson et al. devote a section to it in their review article. Andersen et al. devote a paper to studying and extending the Chen model. Gallant et al. devote a paper to testing the Chen model and other models; Wibowo and Cai, among others, devote their PhD dissertations to testing the Chen model and other competing interest rate models.
== References ==
Lin Chen (1996). "Stochastic Mean and Stochastic Volatility — A Three-Factor Model of the Term Structure of Interest Rates and Its Application to the Pricing of Interest Rate Derivatives". Financial Markets, Institutions & Instruments. 5: 1–88.
Lin Chen (1996). Interest Rate Dynamics, Derivatives Pricing, and Risk Management. Lecture Notes in Economics and Mathematical Systems, 435. Springer. ISBN 978-3-540-60814-1.
Jessica James; Nick Webber (2000). Interest Rate Modelling. Wiley Finance. ISBN 978-0-471-97523-6.
Rajna Gibson, François-Serge Lhabitant and Denis Talay (2001). Modeling the Term Structure of Interest Rates: A Review of the Literature. RiskLab, ETH.
Frank J. Fabozzi and Moorad Choudhry (2007). The Handbook of European Fixed Income Securities. Wiley Finance. ISBN 978-0-471-43039-1.
Sanjay K. Nawalkha; Gloria M. Soto; Natalia A. Beliaeva (2007). Dynamic Term Structure Modeling: The Fixed Income Valuation Course. Wiley Finance. ISBN 978-0-471-73714-8.
Sundaresan, Suresh M. (2000). "Continuous-Time Methods in Finance: A Review and an Assessment". The Journal of Finance. 55 (54, number 4): 1569–1622. CiteSeerX 10.1.1.194.3963. doi:10.1111/0022-1082.00261.
Andersen, T.G. & L. Benzoni, J. Lund (2004). Stochastic Volatility, Mean Drift, and Jumps in the Short-Term Interest Rate. Working Paper, Northwestern University.
Gallant, A.R.; G. Tauchen (1997). Estimation of Continuous Time Models for Stock Returns and Interest Rates. Macroeconomic Dynamics 1, 135-168.
Cai, L. (2008). Specification Testing for Multifactor Diffusion Processes:An Empirical and Methodological Analysis of Model Stability Across Different Historical Episodes (PDF). Rutgers University.
Wibowo A. (2006). Continuous-time identification of exponential-affine term structure models. Twente University. | Wikipedia/Chen_model |
In mathematical finance, the Black–Derman–Toy model (BDT) is a popular short-rate model used in the pricing of bond options, swaptions and other interest rate derivatives; see Lattice model (finance) § Interest rate derivatives. It is a one-factor model; that is, a single stochastic factor—the short rate—determines the future evolution of all interest rates. It was the first model to combine the mean-reverting behaviour of the short rate with the log-normal distribution, and is still widely used.
== History ==
The model was introduced by Fischer Black, Emanuel Derman, and Bill Toy. It was first developed for in-house use by Goldman Sachs in the 1980s and was published in the Financial Analysts Journal in 1990. A personal account of the development of the model is provided in Emanuel Derman's memoir My Life as a Quant.
== Formulae ==
Under BDT, using a binomial lattice, one calibrates the model parameters to fit both the current term structure of interest rates (yield curve), and the volatility structure for interest rate caps (usually as implied by the Black-76-prices for each component caplet); see aside. Using the calibrated lattice one can then value a variety of more complex interest-rate sensitive securities and interest rate derivatives.
Although initially developed for a lattice-based environment, the model has been shown to imply the following continuous stochastic differential equation:
{\displaystyle d\ln(r)=\left[\theta _{t}+{\frac {\sigma '_{t}}{\sigma _{t}}}\ln(r)\right]dt+\sigma _{t}\,dW_{t}}

where,

{\displaystyle r\,} = the instantaneous short rate at time t
{\displaystyle \theta _{t}\,} = the time-dependent drift term, determined during calibration so that the model fits the initial term structure
{\displaystyle \sigma _{t}\,} = instantaneous short-rate volatility
{\displaystyle W_{t}\,} = a standard Brownian motion under a risk-neutral probability measure; {\displaystyle dW_{t}\,} its differential.
For constant (time-independent) short-rate volatility, {\displaystyle \sigma \,}, the model is:

{\displaystyle d\ln(r)=\theta _{t}\,dt+\sigma \,dW_{t}}
One reason that the model remains popular is that "standard" root-finding algorithms, such as Newton's method (or the secant method) and bisection, are very easily applied to the calibration. Relatedly, the model was originally described in algorithmic language, and not using stochastic calculus or martingales.
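The sketch below illustrates that calibration loop for the constant-volatility case on a binomial lattice: short rates at step i take the form a_i·exp(2σj√Δt) over nodes j, and each a_i is found by root-finding so that the lattice reproduces the corresponding market discount factor. The flat example curve, the bracketing interval and the function names are assumptions made for illustration, not part of the published model.

```python
import numpy as np
from scipy.optimize import brentq

def calibrate_bdt(discounts, sigma, dt):
    """discounts[i] = market price of a zero-coupon bond maturing at (i+1)*dt."""
    n = len(discounts)
    rates = []                    # rates[i][j] = short rate at step i, node j
    q = np.array([1.0])           # Arrow-Debreu state prices at step 0
    for i in range(n):
        j = np.arange(i + 1)
        def model_error(a):       # lattice bond price minus market price
            r = a * np.exp(2.0 * sigma * j * np.sqrt(dt))
            return np.sum(q * np.exp(-r * dt)) - discounts[i]
        a_i = brentq(model_error, 1e-10, 5.0)      # root-finding, as noted above
        r_i = a_i * np.exp(2.0 * sigma * j * np.sqrt(dt))
        rates.append(r_i)
        disc = np.exp(-r_i * dt)                   # one-period discounting at each node
        q_next = np.zeros(i + 2)                   # roll state prices forward, prob. 1/2 up/down
        q_next[:-1] += 0.5 * q * disc
        q_next[1:]  += 0.5 * q * disc
        q = q_next
    return rates

dt = 1.0
discounts = np.exp(-0.05 * dt * np.arange(1, 6))   # toy flat 5% curve
tree = calibrate_bdt(discounts, sigma=0.2, dt=dt)
print([np.round(r, 4) for r in tree])
```

In a full calibration the volatility would also be time-dependent and fitted to cap or caplet prices, which is the second half of the procedure described above.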
== References ==
Notes
== External links ==
R function for computing the Black–Derman–Toy short rate tree, Andrea Ruberto
Excel BDT calculator and tree generator, Serkan Gur | Wikipedia/Black–Derman–Toy_model |
In network science, a biased random walk on a graph is a time path process in which an evolving variable jumps from its current state to one of various potential new states; unlike in a pure random walk, the probabilities of the potential new states are unequal.
Biased random walks on a graph provide an approach for the structural analysis of undirected graphs in order to extract their symmetries when the network is too complex or when it is not large enough to be analyzed by statistical methods. The concept of biased random walks on a graph has attracted the attention of many researchers and data companies over the past decade, especially in transportation and social networks.
== Model ==
Many different representations of biased random walks on graphs have been written down, depending on the particular purpose of the analysis. A common representation of the mechanism for undirected graphs is as follows:
On an undirected graph, a walker takes a step from the current node, {\displaystyle j,} to node {\displaystyle i.} Assuming that each node has an attribute {\displaystyle \alpha _{i},} the probability of jumping from node {\displaystyle j} to {\displaystyle i} is given by:

{\displaystyle T_{ij}^{\alpha }={\frac {\alpha _{i}A_{ij}}{\sum _{k}\alpha _{k}A_{kj}}},}

where {\displaystyle A_{ij}} represents the topological weight of the edge going from {\displaystyle j} to {\displaystyle i}. In fact, the steps of the walker are biased by the factor {\displaystyle \alpha }, which may differ from one node to another.
Depending on the network, the attribute {\displaystyle \alpha } can be interpreted differently. It might represent the attractiveness of a person in a social network, it might be betweenness centrality, or it might even be an intrinsic characteristic of a node. In the case of a fair random walk on a graph, {\displaystyle \alpha } is one for all the nodes. In the case of shortest-path random walks, {\displaystyle \alpha _{i}} is the total number of shortest paths between all pairs of nodes that pass through the node {\displaystyle i}; in effect, the walker prefers nodes with higher betweenness centrality, which is defined as below:

{\displaystyle C(i)={\tfrac {{\text{Total number of shortest paths through }}i}{\text{Total number of shortest paths}}}}
Based on the above equation, the recurrence time to a node in the biased walk is given by:

{\displaystyle r_{i}={\frac {1}{C(i)}}}
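A small numerical sketch of the transition rule above: build the column-stochastic matrix T from an adjacency matrix and an attribute vector, then simulate the walk. The graph, the attribute values and the number of steps are made-up examples.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # symmetric adjacency (topological weights)
alpha = np.array([1.0, 2.0, 1.0, 3.0])      # node attributes biasing the walk

# T[i, j] = alpha_i * A_ij / sum_k alpha_k * A_kj  (probability of stepping from j to i)
T = (alpha[:, None] * A) / (alpha[:, None] * A).sum(axis=0, keepdims=True)
assert np.allclose(T.sum(axis=0), 1.0)      # each column is a probability distribution

rng = np.random.default_rng(0)
node, visits = 0, np.zeros(len(alpha))
for _ in range(100_000):
    node = rng.choice(len(alpha), p=T[:, node])
    visits[node] += 1
print(visits / visits.sum())                # empirical occupation frequencies
```

With alpha set to all ones, the matrix reduces to the fair random walk on the same graph.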
== Applications ==
There are a variety of applications using biased random walks on graphs. Such applications include control of diffusion, advertisement of products on social networks, explaining dispersal and population redistribution of animals and micro-organisms, community detections, wireless networks, and search engines.
== See also ==
Betweenness centrality
Community structure
Kullback–Leibler divergence
Markov chain
Maximal entropy random walk
Random walk closeness centrality
Social network analysis
Travelling salesman problem
== References ==
== External links ==
Gábor Simonyi, "Graph Entropy: A Survey". In Combinatorial Optimization (ed. W. Cook, L. Lovász, and P. Seymour). Providence, RI: Amer. Math. Soc., pp. 399–441, 1995.
Anne-Marie Kermarrec, Erwan Le Merrer, Bruno Sericola, Gilles Trédan, "Evaluating the Quality of a Network Topology through Random Walks" in Gadi Taubenfeld (ed.) Distributed Computing | Wikipedia/Biased_random_walk_on_a_graph |
In finance, the Vasicek model is a mathematical model describing the evolution of interest rates. It is a type of one-factor short-rate model as it describes interest rate movements as driven by only one source of market risk. The model can be used in the valuation of interest rate derivatives, and has also been adapted for credit markets. It was introduced in 1977 by Oldřich Vašíček, and can be also seen as a stochastic investment model.
== Details ==
The model specifies that the instantaneous interest rate follows the stochastic differential equation:
{\displaystyle dr_{t}=a(b-r_{t})\,dt+\sigma \,dW_{t}}

where Wt is a Wiener process under the risk neutral framework modelling the random market risk factor, in that it models the continuous inflow of randomness into the system. The standard deviation parameter, {\displaystyle \sigma }, determines the volatility of the interest rate and in a way characterizes the amplitude of the instantaneous randomness inflow. The typical parameters {\displaystyle b,a} and {\displaystyle \sigma }, together with the initial condition {\displaystyle r_{0}}, completely characterize the dynamics, and can be quickly characterized as follows, assuming {\displaystyle a} to be non-negative:

{\displaystyle b}: "long term mean level". All future trajectories of {\displaystyle r} will evolve around a mean level b in the long run;
{\displaystyle a}: "speed of reversion". {\displaystyle a} characterizes the velocity at which such trajectories will regroup around {\displaystyle b} in time;
{\displaystyle \sigma }: "instantaneous volatility", measures instant by instant the amplitude of randomness entering the system. Higher {\displaystyle \sigma } implies more randomness.

The following derived quantity is also of interest:

{\displaystyle {\sigma ^{2}}/(2a)}: "long term variance". All future trajectories of {\displaystyle r} will regroup around the long term mean with such variance after a long time.

{\displaystyle a} and {\displaystyle \sigma } tend to oppose each other: increasing {\displaystyle \sigma } increases the amount of randomness entering the system, but at the same time increasing {\displaystyle a} amounts to increasing the speed at which the system will stabilize statistically around the long term mean {\displaystyle b} with a corridor of variance determined also by {\displaystyle a}. This is clear when looking at the long term variance, {\displaystyle {\frac {\sigma ^{2}}{2a}}}, which increases with {\displaystyle \sigma } but decreases with {\displaystyle a}.
This model is an Ornstein–Uhlenbeck stochastic process.
== Discussion ==
Vasicek's model was the first one to capture mean reversion, an essential characteristic of the interest rate that sets it apart from other financial prices. Thus, as opposed to stock prices for instance, interest rates cannot rise indefinitely. This is because at very high levels they would hamper economic activity, prompting a decrease in interest rates. Similarly, interest rates do not usually decrease much below 0. As a result, interest rates move in a limited range, showing a tendency to revert to a long run value.
The drift factor {\displaystyle a(b-r_{t})} represents the expected instantaneous change in the interest rate at time t. The parameter b represents the long-run equilibrium value towards which the interest rate reverts. Indeed, in the absence of shocks ({\displaystyle dW_{t}=0}), the interest rate remains constant when rt = b. The parameter a, governing the speed of adjustment, needs to be positive to ensure stability around the long term value. For example, when rt is below b, the drift term {\displaystyle a(b-r_{t})} becomes positive for positive a, generating a tendency for the interest rate to move upwards (toward equilibrium).
The main disadvantage is that, under Vasicek's model, it is theoretically possible for the interest rate to become negative, an undesirable feature under pre-crisis assumptions. This shortcoming was fixed in the Cox–Ingersoll–Ross model, exponential Vasicek model, Black–Derman–Toy model and Black–Karasinski model, among many others. The Vasicek model was further extended in the Hull–White model. The Vasicek model is also a canonical example of the affine term structure model, along with the Cox–Ingersoll–Ross model. In recent research both models were used for data partitioning and forecasting.
== Asymptotic mean and variance ==
We solve the stochastic differential equation to obtain

{\displaystyle r_{t}=r_{0}e^{-at}+b\left(1-e^{-at}\right)+\sigma e^{-at}\int _{0}^{t}e^{as}\,dW_{s}.}

Using techniques similar to those applied to the Ornstein–Uhlenbeck stochastic process, we find that the state variable is distributed normally with mean

{\displaystyle \mathrm {E} [r_{t}]=r_{0}e^{-at}+b(1-e^{-at})}

and variance

{\displaystyle \mathrm {Var} [r_{t}]={\frac {\sigma ^{2}}{2a}}(1-e^{-2at}).}

Consequently, we have

{\displaystyle \lim _{t\to \infty }\mathrm {E} [r_{t}]=b}

and

{\displaystyle \lim _{t\to \infty }\mathrm {Var} [r_{t}]={\frac {\sigma ^{2}}{2a}}.}
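These limits are easy to check numerically, because the Vasicek transition over a step of length dt is Gaussian with the conditional mean and variance given above (with t replaced by dt), so paths can be simulated without discretisation bias. The following is a minimal sketch with made-up parameters.

```python
import numpy as np

a, b, sigma, r0 = 0.8, 0.05, 0.02, 0.01     # assumed parameters
dt, n_steps, n_paths = 0.01, 2000, 50_000   # simulate up to t = 20

rng = np.random.default_rng(0)
decay = np.exp(-a * dt)
cond_sd = sigma * np.sqrt((1 - np.exp(-2 * a * dt)) / (2 * a))
r = np.full(n_paths, r0)
for _ in range(n_steps):                    # exact Gaussian transition at each step
    r = r * decay + b * (1 - decay) + cond_sd * rng.standard_normal(n_paths)

print(f"sample mean {r.mean():.5f} vs b = {b:.5f}")
print(f"sample variance {r.var():.3e} vs sigma^2/(2a) = {sigma**2 / (2 * a):.3e}")
```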
== Bond pricing ==
Under the no-arbitrage assumption, a discount bond may be priced in the Vasicek model. The time {\displaystyle t} value of a discount bond with maturity date {\displaystyle T} is exponential affine in the interest rate:

{\displaystyle P(t,T)=A(t,T)e^{-B(t,T)r(t)}}

where

{\displaystyle B(t,T)={\frac {1-e^{-a(T-t)}}{a}}}

{\displaystyle A(t,T)=\exp \left\{\left(b-{\frac {\sigma ^{2}}{2a^{2}}}\right)\left[B(t,T)-(T-t)\right]-{\frac {\sigma ^{2}}{4a}}B^{2}(t,T)\right\}}
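A direct transcription of the affine formula into code, with made-up parameter values, can serve as a sanity check or as a building block for calibration:

```python
import numpy as np

def vasicek_bond_price(r_t, t, T, a, b, sigma):
    # P(t,T) = A(t,T) * exp(-B(t,T) * r_t) with A and B as defined above
    B = (1 - np.exp(-a * (T - t))) / a
    A = np.exp((b - sigma**2 / (2 * a**2)) * (B - (T - t)) - sigma**2 * B**2 / (4 * a))
    return A * np.exp(-B * r_t)

price = vasicek_bond_price(r_t=0.03, t=0.0, T=5.0, a=0.8, b=0.05, sigma=0.02)
print(f"5-year discount bond price: {price:.4f}, implied yield: {-np.log(price) / 5:.4%}")
```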
== See also ==
Ornstein–Uhlenbeck process.
Hull–White model
Cox–Ingersoll–Ross model
== References ==
Hull, John C. (2003). Options, Futures and Other Derivatives. Upper Saddle River, NJ: Prentice Hall. ISBN 978-0-13-009056-0.
Damiano Brigo, Fabio Mercurio (2001). Interest Rate Models – Theory and Practice with Smile, Inflation and Credit (2nd ed. 2006 ed.). Springer Verlag. ISBN 978-3-540-22149-4.
Jessica James, Nick Webber (2000). Interest Rate Modelling. Wiley. ISBN 978-0-471-97523-6.
== External links ==
The Vasicek Model, Bjørn Eraker, Wisconsin School of Business
Yield Curve Estimation and Prediction with the Vasicek Model, D. Bayazit, Middle East Technical University | Wikipedia/Vasicek_model |
Extreme value theory or extreme value analysis (EVA) is the study of extremes in statistical distributions.
It is widely used in many disciplines, such as structural engineering, finance, economics, earth sciences, traffic prediction, and geological engineering. For example, EVA might be used in the field of hydrology to estimate the probability of an unusually large flooding event, such as the 100-year flood. Similarly, for the design of a breakwater, a coastal engineer would seek to estimate the 50 year wave and design the structure accordingly.
== Data analysis ==
Two main approaches exist for practical extreme value analysis.
The first method relies on deriving block maxima (minima) series as a preliminary step. In many situations it is customary and convenient to extract the annual maxima (minima), generating an annual maxima series (AMS).
The second method relies on extracting, from a continuous record, the peak values reached for any period during which values exceed a certain threshold (falls below a certain threshold). This method is generally referred to as the peak over threshold method (POT).
For AMS data, the analysis may partly rely on the results of the Fisher–Tippett–Gnedenko theorem, leading to the generalized extreme value distribution being selected for fitting. However, in practice, various procedures are applied to select between a wider range of distributions. The theorem here relates to the limiting distributions for the minimum or the maximum of a very large collection of independent random variables from the same distribution. Given that the number of relevant random events within a year may be rather limited, it is unsurprising that analyses of observed AMS data often lead to distributions other than the generalized extreme value distribution (GEVD) being selected.
For POT data, the analysis may involve fitting two distributions: One for the number of events in a time period considered and a second for the size of the exceedances.
A common assumption for the first is the Poisson distribution, with the generalized Pareto distribution being used for the exceedances.
A tail-fitting can be based on the Pickands–Balkema–de Haan theorem.
Novak (2011) reserves the term "POT method" to the case where the threshold is non-random, and distinguishes it from the case where one deals with exceedances of a random threshold.
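As a minimal illustration of the block-maxima route described above, the sketch below simulates synthetic "daily" data, extracts annual maxima, fits a GEV distribution by maximum likelihood and reads off a 100-year return level (the 1 − 1/100 quantile of the fitted annual-maximum distribution). The data, the parameter values and the choice of SciPy's genextreme are assumptions made for the example.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
daily = rng.gumbel(loc=10.0, scale=2.0, size=(50, 365))   # 50 synthetic "years" of daily values
annual_maxima = daily.max(axis=1)                         # block maxima (annual maxima series)

shape, loc, scale = genextreme.fit(annual_maxima)         # maximum-likelihood GEV fit
level_100yr = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
print(f"fitted GEV: shape={shape:.3f}, loc={loc:.2f}, scale={scale:.2f}")
print(f"estimated 100-year return level: {level_100yr:.2f}")
```

A peaks-over-threshold analysis would instead fit a generalized Pareto distribution to the exceedances over a chosen threshold.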
== Applications ==
Applications of extreme value theory include predicting the probability distribution of:
== History ==
The field of extreme value theory was pioneered by L. Tippett (1902–1985). Tippett was employed by the British Cotton Industry Research Association, where he worked to make cotton thread stronger. In his studies, he realized that the strength of a thread was controlled by the strength of its weakest fibres. With the help of R.A. Fisher, Tippett obtained three asymptotic limits describing the distributions of extremes assuming independent variables. E.J. Gumbel (1958) codified this theory. These results can be extended to allow for slight correlations between variables, but the classical theory does not extend to strong correlations of the order of the variance. One universality class of particular interest is that of log-correlated fields, where the correlations decay logarithmically with the distance.
== Univariate theory ==
The theory for extreme values of a single variable is governed by the extreme value theorem, also called the Fisher–Tippett–Gnedenko theorem, which describes which of the three possible distributions for extreme values applies for a particular statistical variable {\displaystyle X}.
== Multivariate theory ==
Extreme value theory in more than one variable introduces additional issues that have to be addressed. One problem that arises is that one must specify what constitutes an extreme event.
Although this is straightforward in the univariate case, there is no unambiguous way to do this in the multivariate case. The fundamental problem is that although it is possible to order a set of real-valued numbers, there is no natural way to order a set of vectors.
As an example, in the univariate case, given a set of observations {\displaystyle x_{i}} it is straightforward to find the most extreme event simply by taking the maximum (or minimum) of the observations. However, in the bivariate case, given a set of observations {\displaystyle (x_{i},y_{i})}, it is not immediately clear how to find the most extreme event. Suppose that one has measured the values {\displaystyle (3,4)} at a specific time and the values {\displaystyle (5,2)} at a later time. Which of these events would be considered more extreme? There is no universal answer to this question.
Another issue in the multivariate case is that the limiting model is not as fully prescribed as in the univariate case. In the univariate case, the model (GEV distribution) contains three parameters whose values are not predicted by the theory and must be obtained by fitting the distribution to the data. In the multivariate case, the model not only contains unknown parameters, but also a function whose exact form is not prescribed by the theory. However, this function must obey certain constraints.
It is not straightforward to devise estimators that obey such constraints though some have been recently constructed.
As an example of an application, bivariate extreme value theory has been applied to ocean research.
== Non-stationary extremes ==
Statistical modeling for nonstationary time series was developed in the 1990s. Methods for nonstationary multivariate extremes have been introduced more recently.
The latter can be used for tracking how the dependence between extreme values changes over time, or over another covariate.
== See also ==
== References ==
== Sources ==
== Software ==
Belzile, L.R.; Dutang, C.; Northrop, P.J.; Opitz, T. (2023). "A modeler's guide to extreme value software". Extremes. 26 (4): 595–638. arXiv:2205.07714. doi:10.1007/s10687-023-00475-9.
"Extreme Value Statistics in R". cran.r-project.org (software). 4 November 2023. — Package for extreme value statistics in R.
"Extremes.jl". github.com (software). — Package for extreme value statistics in Julia.
"Source code for stationary and non-stationary extreme value analysis". amir.eng.uci.edu (software). Irvine, CA: University of California, Irvine.
== External links == | Wikipedia/Extreme_value_theory |
A Hopfield network (or associative memory) is a form of recurrent neural network, or a spin glass system, that can serve as a content-addressable memory. The Hopfield network, named for John Hopfield, consists of a single layer of neurons, where each neuron is connected to every other neuron except itself. These connections are bidirectional and symmetric, meaning the weight of the connection from neuron i to neuron j is the same as the weight from neuron j to neuron i. Patterns are associatively recalled by fixing certain inputs and dynamically evolving the network to minimize an energy function, towards local energy minimum states that correspond to stored patterns. Patterns are associatively learned (or "stored") by a Hebbian learning algorithm.
One of the key features of Hopfield networks is their ability to recover complete patterns from partial or noisy inputs, making them robust in the face of incomplete or corrupted data. Their connection to statistical mechanics, recurrent networks, and human cognitive psychology has led to their application in various fields, including physics, psychology, neuroscience, and machine learning theory and practice.
== History ==
One origin of associative memory is human cognitive psychology, specifically the associative memory. Frank Rosenblatt studied "close-loop cross-coupled perceptrons", which are 3-layered perceptron networks whose middle layer contains recurrent connections that change by a Hebbian learning rule.: 73–75 : Chapter 19, 21
Another model of associative memory is where the output does not loop back to the input. W. K. Taylor proposed such a model trained by Hebbian learning in 1956. Karl Steinbuch, who wanted to understand learning, and inspired by watching his children learn, published the Lernmatrix in 1961. It was translated to English in 1963. Similar research was done with the correlogram of D. J. Willshaw et al. in 1969. Teuvo Kohonen trained an associative memory by gradient descent in 1974.
Another origin of associative memory was statistical mechanics. The Ising model was published in 1920s as a model of magnetism, however it studied the thermal equilibrium, which does not change with time. Roy J. Glauber in 1963 studied the Ising model evolving in time, as a process towards thermal equilibrium (Glauber dynamics), adding in the component of time.
The second component to be added was adaptation to stimulus. Kaoru Nakano in 1971 and Shun'ichi Amari in 1972 independently proposed modifying the weights of an Ising model by the Hebbian learning rule as a model of associative memory. The same idea was published by William A. Little in 1974, who was acknowledged by Hopfield in his 1982 paper.
See Carpenter (1989) and Cowan (1990) for a technical description of some of these early works in associative memory.
The Sherrington–Kirkpatrick model of spin glass, published in 1975, is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In the 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions. In a 1984 paper he extended this to continuous activation functions. It became a standard model for the study of neural networks through statistical mechanics.
A major advance in memory storage capacity was developed by Dimitry Krotov and Hopfield in 2016 through a change in network dynamics and energy function. This idea was further extended by Demircigil and collaborators in 2017. The continuous dynamics of large memory capacity models was developed in a series of papers between 2016 and 2020. Large memory storage capacity Hopfield Networks are now called Dense Associative Memories or modern Hopfield networks.
In 2024, John J. Hopfield and Geoffrey E. Hinton were awarded the Nobel Prize in Physics for their foundational contributions to machine learning, such as the Hopfield network.
== Structure ==
The units in Hopfield nets are binary threshold units, i.e. the units only take on two different values for their states, and the value is determined by whether or not the unit's input exceeds its threshold {\displaystyle U_{i}}. Discrete Hopfield nets describe relationships between binary (firing or not-firing) neurons {\displaystyle 1,2,\ldots ,i,j,\ldots ,N}. At a certain time, the state of the neural net is described by a vector {\displaystyle V}, which records which neurons are firing in a binary word of {\displaystyle N} bits.
The interactions {\displaystyle w_{ij}} between neurons have units that usually take on values of 1 or −1, and this convention will be used throughout this article. However, other literature might use units that take values of 0 and 1. These interactions are "learned" via Hebb's law of association, such that, for a certain state {\displaystyle V^{s}} and distinct nodes {\displaystyle i,j}

{\displaystyle w_{ij}=V_{i}^{s}V_{j}^{s}}

but {\displaystyle w_{ii}=0}.

(Note that the Hebbian learning rule takes the form {\displaystyle w_{ij}=(2V_{i}^{s}-1)(2V_{j}^{s}-1)} when the units assume values in {\displaystyle \{0,1\}}.)
Once the network is trained, {\displaystyle w_{ij}} no longer evolve. If a new state of neurons {\displaystyle V^{s'}} is introduced to the neural network, the net acts on neurons such that

{\displaystyle V_{i}^{s'}\rightarrow 1} if {\displaystyle \sum _{j}w_{ij}V_{j}^{s'}\geq U_{i}}
{\displaystyle V_{i}^{s'}\rightarrow -1} if {\displaystyle \sum _{j}w_{ij}V_{j}^{s'}<U_{i}}

where {\displaystyle U_{i}} is the threshold value of the i'th neuron (often taken to be 0). In this way, Hopfield networks have the ability to "remember" states stored in the interaction matrix, because if a new state {\displaystyle V^{s'}} is subjected to the interaction matrix, each neuron will change until it matches the original state {\displaystyle V^{s}} (see the Updates section below).
The connections in a Hopfield net typically have the following restrictions:

{\displaystyle w_{ii}=0,\forall i} (no unit has a connection with itself)
{\displaystyle w_{ij}=w_{ji},\forall i,j} (connections are symmetric)
The constraint that weights are symmetric guarantees that the energy function decreases monotonically while following the activation rules. A network with asymmetric weights may exhibit some periodic or chaotic behaviour; however, Hopfield found that this behavior is confined to relatively small parts of the phase space and does not impair the network's ability to act as a content-addressable associative memory system.
Hopfield also modeled neural nets for continuous values, in which the electric output of each neuron is not binary but some value between 0 and 1. He found that this type of network was also able to store and reproduce memorized states.
Notice that every pair of units i and j in a Hopfield network has a connection that is described by the connectivity weight {\displaystyle w_{ij}}. In this sense, the Hopfield network can be formally described as a complete undirected graph {\displaystyle G=\langle V,f\rangle }, where {\displaystyle V} is a set of McCulloch–Pitts neurons and {\displaystyle f:V^{2}\rightarrow \mathbb {R} } is a function that links pairs of units to a real value, the connectivity weight.
== Updating ==
Updating one unit (node in the graph simulating the artificial neuron) in the Hopfield network is performed using the following rule:

{\displaystyle s_{i}\leftarrow \left\{{\begin{array}{ll}+1&{\text{if }}\sum _{j}{w_{ij}s_{j}}\geq \theta _{i},\\-1&{\text{otherwise.}}\end{array}}\right.}

where:

{\displaystyle w_{ij}} is the strength of the connection weight from unit j to unit i (the weight of the connection).
{\displaystyle s_{i}} is the state of unit i.
{\displaystyle \theta _{i}} is the threshold of unit i.
Updates in the Hopfield network can be performed in two different ways:
Asynchronous: Only one unit is updated at a time. This unit can be picked at random, or a pre-defined order can be imposed from the very beginning.
Synchronous: All units are updated at the same time. This requires a central clock to the system in order to maintain synchronization. This method is viewed by some as less realistic, based on an absence of observed global clock influencing analogous biological or physical systems of interest.
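Putting the update rule together with Hebbian storage (described in the Structure section), a few lines of code suffice to demonstrate recall from a corrupted cue. This is a minimal sketch with made-up patterns, zero thresholds, and asynchronous updates applied in random order.

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1, -1, -1]])   # two +/-1 patterns to store
n = patterns.shape[1]

W = patterns.T @ patterns / n        # Hebbian storage: w_ij = (1/n) sum_mu eps_i eps_j
np.fill_diagonal(W, 0)               # no self-connections (w_ii = 0)

def recall(state, max_sweeps=20):
    s = state.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(n):             # asynchronous updates, random order
            new = 1 if W[i] @ s >= 0 else -1     # threshold theta_i = 0
            if new != s[i]:
                s[i], changed = new, True
        if not changed:                          # stable state reached
            break
    return s

noisy = patterns[0].copy()
noisy[:2] *= -1                                  # corrupt two bits of the first pattern
print("recovered stored pattern:", np.array_equal(recall(noisy), patterns[0]))
```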
=== Neurons "attract or repel each other" in state space ===
The weight between two units has a powerful impact upon the values of the neurons. Consider the connection weight {\displaystyle w_{ij}} between two neurons i and j. If {\displaystyle w_{ij}>0}, the updating rule implies that:

when {\displaystyle s_{j}=1}, the contribution of j in the weighted sum is positive. Thus, {\displaystyle s_{i}} is pulled by j towards its value {\displaystyle s_{i}=1};
when {\displaystyle s_{j}=-1}, the contribution of j in the weighted sum is negative. Then again, {\displaystyle s_{i}} is pushed by j towards its value {\displaystyle s_{i}=-1}.
Thus, the values of neurons i and j will converge if the weight between them is positive. Similarly, they will diverge if the weight is negative.
== Convergence properties of discrete and continuous Hopfield networks ==
Bruck in his paper in 1990 studied discrete Hopfield networks and proved a generalized convergence theorem that is based on the connection between the network's dynamics and cuts in the associated graph. This generalization covered both asynchronous as well as synchronous dynamics and presented elementary proofs based on greedy algorithms for max-cut in graphs. A subsequent paper further investigated the behavior of any neuron in both discrete-time and continuous-time Hopfield networks when the corresponding energy function is minimized during an optimization process. Bruck showed that neuron j changes its state if and only if it further decreases the following biased pseudo-cut. The discrete Hopfield network minimizes the following biased pseudo-cut for the synaptic weight matrix of the Hopfield net.
{\displaystyle J_{pseudo-cut}(k)=\sum _{i\in C_{1}(k)}\sum _{j\in C_{2}(k)}w_{ij}+\sum _{j\in C_{1}(k)}{\theta _{j}}}

where {\displaystyle C_{1}(k)} and {\displaystyle C_{2}(k)} represent the sets of neurons which are −1 and +1, respectively, at time {\displaystyle k}. For further details, see the recent paper.
The discrete-time Hopfield network always minimizes exactly the following pseudo-cut

{\displaystyle U(k)=\sum _{i=1}^{N}\sum _{j=1}^{N}w_{ij}(s_{i}(k)-s_{j}(k))^{2}+2\sum _{j=1}^{N}\theta _{j}s_{j}(k)}
The continuous-time Hopfield network always minimizes an upper bound to the following weighted cut

{\displaystyle V(t)=\sum _{i=1}^{N}\sum _{j=1}^{N}w_{ij}(f(s_{i}(t))-f(s_{j}(t)))^{2}+2\sum _{j=1}^{N}\theta _{j}f(s_{j}(t))}

where {\displaystyle f(\cdot )} is a zero-centered sigmoid function.
The complex Hopfield network, on the other hand, generally tends to minimize the so-called shadow-cut of the complex weight matrix of the net.
== Energy ==
Hopfield nets have a scalar value associated with each state of the network, referred to as the "energy", E, of the network, where:

{\displaystyle E=-{\frac {1}{2}}\sum _{i,j}w_{ij}s_{i}s_{j}-\sum _{i}\theta _{i}s_{i}}
This quantity is called "energy" because it either decreases or stays the same upon network units being updated. Furthermore, under repeated updating the network will eventually converge to a state which is a local minimum in the energy function (which is considered to be a Lyapunov function). Thus, if a state is a local minimum in the energy function it is a stable state for the network. Note that this energy function belongs to a general class of models in physics under the name of Ising models; these in turn are a special case of Markov networks, since the associated probability measure, the Gibbs measure, has the Markov property.
== Hopfield network in optimization ==
Hopfield and Tank presented the Hopfield network application in solving the classical traveling-salesman problem in 1985. Since then, the Hopfield network has been widely used for optimization. The idea of using the Hopfield network in optimization problems is straightforward: If a constrained/unconstrained cost function can be written in the form of the Hopfield energy function E, then there exists a Hopfield network whose equilibrium points represent solutions to the constrained/unconstrained optimization problem. Minimizing the Hopfield energy function both minimizes the objective function and satisfies the constraints, since the constraints are "embedded" into the synaptic weights of the network. Although including the optimization constraints into the synaptic weights in the best possible way is a challenging task, many difficult optimization problems with constraints in different disciplines have been converted to the Hopfield energy function: associative memory systems, analog-to-digital conversion, the job-shop scheduling problem, quadratic assignment and other related NP-complete problems, the channel allocation problem in wireless networks, the mobile ad-hoc network routing problem, image restoration, system identification, combinatorial optimization, and so on. However, while it is possible to convert hard optimization problems to Hopfield energy functions, this does not guarantee convergence to a solution (even in exponential time).
== Initialization and running ==
Initialization of the Hopfield networks is done by setting the values of the units to the desired start pattern. Repeated updates are then performed until the network converges to an attractor pattern. Convergence is generally assured, as Hopfield proved that the attractors of this nonlinear dynamical system are stable, not periodic or chaotic as in some other systems. Therefore, in the context of Hopfield networks, an attractor pattern is a final stable state, a pattern that cannot change any value within it under updating.
== Training ==
Training a Hopfield net involves lowering the energy of states that the net should "remember". This allows the net to serve as a content addressable memory system, that is to say, the network will converge to a "remembered" state if it is given only part of the state. The net can be used to recover from a distorted input to the trained state that is most similar to that input. This is called associative memory because it recovers memories on the basis of similarity. For example, if we train a Hopfield net with five units so that the state (1, −1, 1, −1, 1) is an energy minimum, and we give the network the state (1, −1, −1, −1, 1) it will converge to (1, −1, 1, −1, 1). Thus, the network is properly trained when the energy of states which the network should remember are local minima. Note that, in contrast to Perceptron training, the thresholds of the neurons are never updated.
=== Learning rules ===
There are various different learning rules that can be used to store information in the memory of the Hopfield network. It is desirable for a learning rule to have both of the following two properties:
Local: A learning rule is local if each weight is updated using information available to neurons on either side of the connection that is associated with that particular weight.
Incremental: New patterns can be learned without using information from the old patterns that have been also used for training. That is, when a new pattern is used for training, the new values for the weights only depend on the old values and on the new pattern.
These properties are desirable, since a learning rule satisfying them is more biologically plausible. For example, since the human brain is always learning new concepts, one can reason that human learning is incremental. A learning system that was not incremental would generally be trained only once, with a huge batch of training data.
=== Hebbian learning rule for Hopfield networks ===
Hebbian theory was introduced by Donald Hebb in 1949 in order to explain "associative learning", in which simultaneous activation of neuron cells leads to pronounced increases in synaptic strength between those cells. It is often summarized as "Neurons that fire together wire together. Neurons that fire out of sync fail to link".
The Hebbian rule is both local and incremental. For the Hopfield networks, it is implemented in the following manner when learning {\displaystyle n} binary patterns:

{\displaystyle w_{ij}={\frac {1}{n}}\sum _{\mu =1}^{n}\epsilon _{i}^{\mu }\epsilon _{j}^{\mu }}

where {\displaystyle \epsilon _{i}^{\mu }} represents bit i from pattern {\displaystyle \mu }.
If the bits corresponding to neurons i and j are equal in pattern {\displaystyle \mu }, then the product {\displaystyle \epsilon _{i}^{\mu }\epsilon _{j}^{\mu }} will be positive. This would, in turn, have a positive effect on the weight {\displaystyle w_{ij}} and the values of i and j will tend to become equal. The opposite happens if the bits corresponding to neurons i and j are different.
=== Storkey learning rule ===
This rule was introduced by Amos Storkey in 1997 and is both local and incremental. Storkey also showed that a Hopfield network trained using this rule has a greater capacity than a corresponding network trained using the Hebbian rule. The weight matrix of an attractor neural network is said to follow the Storkey learning rule if it obeys:
{\displaystyle w_{ij}^{\nu }=w_{ij}^{\nu -1}+{\frac {1}{n}}\epsilon _{i}^{\nu }\epsilon _{j}^{\nu }-{\frac {1}{n}}\epsilon _{i}^{\nu }h_{ji}^{\nu }-{\frac {1}{n}}\epsilon _{j}^{\nu }h_{ij}^{\nu }}

where

{\displaystyle h_{ij}^{\nu }=\sum _{k=1~:~i\neq k\neq j}^{n}w_{ik}^{\nu -1}\epsilon _{k}^{\nu }}

is a form of local field at neuron i.
This learning rule is local, since the synapses take into account only neurons at their sides. The rule makes use of more information from the patterns and weights than the generalized Hebbian rule, due to the effect of the local field.
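A direct, unoptimised transcription of the update above is sketched below; the example patterns are made up, and the diagonal weights are simply left at zero.

```python
import numpy as np

def storkey_train(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for eps in patterns:                      # present the patterns one at a time
        h = W @ eps                           # fields under the previous weights
        dW = np.zeros_like(W)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                # h_ij excludes the contributions of neurons i and j themselves
                h_ij = h[i] - W[i, i] * eps[i] - W[i, j] * eps[j]
                h_ji = h[j] - W[j, j] * eps[j] - W[j, i] * eps[i]
                dW[i, j] = (eps[i] * eps[j] - eps[i] * h_ji - eps[j] * h_ij) / n
        W += dW                               # apply the increment for this pattern
    return W

patterns = np.array([[1, -1, 1, -1, 1],
                     [1,  1, -1, -1, 1]])
print(np.round(storkey_train(patterns), 3))
```

The increment is computed from the weights as they were before the pattern was presented, which keeps the resulting matrix symmetric.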
== Spurious patterns ==
Patterns that the network uses for training (called retrieval states) become attractors of the system. Repeated updates would eventually lead to convergence to one of the retrieval states. However, sometimes the network will converge to spurious patterns (different from the training patterns). In fact, the number of spurious patterns can be exponential in the number of stored patterns, even if the stored patterns are orthogonal. The energy in these spurious patterns is also a local minimum. For each stored pattern x, the negation -x is also a spurious pattern.
A spurious state can also be a linear combination of an odd number of retrieval states. For example, when using 3 patterns {\displaystyle \mu _{1},\mu _{2},\mu _{3}}, one can get the following spurious state:
{\displaystyle \epsilon _{i}^{\rm {mix}}=\pm \operatorname {sgn}(\pm \epsilon _{i}^{\mu _{1}}\pm \epsilon _{i}^{\mu _{2}}\pm \epsilon _{i}^{\mu _{3}})}
Spurious patterns formed from an even number of retrieval states cannot exist, since the sum of an even number of ±1 terms can be zero, leaving the sign function undefined.
== Capacity ==
The network capacity of the Hopfield network model is determined by the number of neurons and the connections within a given network. Therefore, the number of memories that can be stored depends on the numbers of neurons and connections. Furthermore, it was shown that the recall accuracy between vectors and nodes was 0.138 (approximately 138 vectors can be recalled from storage for every 1000 nodes) (Hertz et al., 1991). Therefore, it is evident that many mistakes will occur if one tries to store a large number of vectors. When the Hopfield model does not recall the right pattern, it is possible that an intrusion has taken place, since semantically related items tend to confuse the individual, and recollection of the wrong pattern occurs. Therefore, the Hopfield network model is shown to confuse one stored item with that of another upon retrieval. Perfect recall and a capacity above the 0.14 threshold can be achieved with the Storkey learning method and with ETAM-type training. Later models inspired by the Hopfield network were devised to raise the storage limit and reduce the retrieval error rate, with some being capable of one-shot learning.
The storage capacity can be given as
{\displaystyle C\cong {\frac {n}{2\log _{2}n}}}
where {\displaystyle n} is the number of neurons in the net.
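As a rough numerical illustration (not from the original source): with {\displaystyle n=1000} neurons, {\displaystyle \log _{2}1000\approx 9.97}, so {\displaystyle C\approx 1000/(2\times 9.97)\approx 50} patterns can be stored and retrieved error-free, noticeably fewer than the roughly 138 patterns quoted above for recall that tolerates some errors.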
== Human memory ==
The Hopfield network is a model for human associative learning and recall. It accounts for associative memory through the incorporation of memory vectors. A partially correct or degraded memory vector can be used as a cue, sparking the retrieval of the most similar vector stored in the network. However, due to this retrieval process, intrusions can occur. In associative memory for the Hopfield network, there are two types of operations: auto-association and hetero-association. The first is when a vector is associated with itself, and the latter is when two different vectors are associated in storage. Both types of operations can be stored within a single memory matrix, provided that the representation matrix is not restricted to one operation or the other, but is instead the combination (auto-associative and hetero-associative) of the two.
Hopfield's network model uses Hebb's (1949) learning rule, which characterises learning as the result of a strengthening of the weights in cases of correlated neuronal activity.
Rizzuto and Kahana (2001) were able to show that the neural network model can account for repetition on recall accuracy by incorporating a probabilistic-learning algorithm. During the retrieval process, no learning occurs. As a result, the weights of the network remain fixed, showing that the model is able to switch from a learning stage to a recall stage. By adding contextual drift they were able to show the rapid forgetting that occurs in a Hopfield model during a cued-recall task. The entire network contributes to the change in the activation of any single node.
McCulloch and Pitts' (1943) dynamical rule, which describes the behavior of neurons, shows how the activations of multiple neurons combine to determine a new neuron's firing, and how the weights strengthen the synaptic connections between the newly activated neuron and those that activated it. Hopfield used the McCulloch–Pitts dynamical rule to show how retrieval is possible in the Hopfield network, but applied it iteratively and with a nonlinear rather than a linear activation function. This yields the Hopfield dynamical rule, and with it Hopfield showed that, with the nonlinear activation function, the dynamical rule will always modify the values of the state vector in the direction of one of the stored patterns.
== Dense associative memory or modern Hopfield network ==
Hopfield networks are recurrent neural networks with dynamical trajectories converging to fixed point attractor states and described by an energy function. The state of each model neuron {\textstyle i} is defined by a time-dependent variable {\displaystyle V_{i}}, which can be chosen to be either discrete or continuous. A complete model describes the mathematics of how the future state of activity of each neuron depends on the known present or previous activity of all the neurons.
In the original Hopfield model of associative memory, the variables were binary, and the dynamics were described by a one-at-a-time update of the state of the neurons. An energy function quadratic in the {\displaystyle V_{i}} was defined, and the dynamics consisted of changing the activity of each single neuron {\displaystyle i} only if doing so would lower the total energy of the system. This same idea was extended to the case of {\displaystyle V_{i}} being a continuous variable representing the output of neuron {\displaystyle i}, and {\displaystyle V_{i}} being a monotonic function of an input current. The dynamics became expressed as a set of first-order differential equations for which the "energy" of the system always decreased. The energy in the continuous case has one term which is quadratic in the {\displaystyle V_{i}} (as in the binary model), and a second term which depends on the gain function (neuron's activation function). While having many desirable properties of associative memory, both of these classical systems suffer from a small memory storage capacity, which scales linearly with the number of input features. In contrast, by increasing the number of parameters in the model so that there are not just pair-wise but also higher-order interactions between the neurons, one can increase the memory storage capacity.
Dense Associative Memories (also known as the modern Hopfield networks) are generalizations of the classical Hopfield Networks that break the linear scaling relationship between the number of input features and the number of stored memories. This is achieved by introducing stronger non-linearities (either in the energy function or neurons' activation functions) leading to super-linear (even an exponential) memory storage capacity as a function of the number of feature neurons, in effect increasing the order of interactions between the neurons. The network still requires a sufficient number of hidden neurons.
The key theoretical idea behind dense associative memory networks is to use an energy function and an update rule that is more sharply peaked around the stored memories in the space of neuron's configurations compared to the classical model, as demonstrated when the higher-order interactions and subsequent energy landscapes are explicitly modelled.
=== Discrete variables ===
A simple example of the modern Hopfield network can be written in terms of binary variables {\displaystyle V_{i}} that represent the active ({\displaystyle V_{i}=+1}) and inactive ({\displaystyle V_{i}=-1}) states of the model neuron {\displaystyle i}.
{\displaystyle E=-\sum \limits _{\mu =1}^{N_{\text{mem}}}F{\Big (}\sum \limits _{i=1}^{N_{f}}\xi _{\mu i}V_{i}{\Big )}}
In this formula the weights {\textstyle \xi _{\mu i}} represent the matrix of memory vectors (index {\displaystyle \mu =1...N_{\text{mem}}} enumerates different memories, and index {\displaystyle i=1...N_{f}} enumerates the content of each memory corresponding to the {\displaystyle i}-th feature neuron), and the function {\displaystyle F(x)} is a rapidly growing non-linear function. The update rule for individual neurons (in the asynchronous case) can be written in the following form
{\displaystyle V_{i}^{(t+1)}=Sign{\bigg [}\sum \limits _{\mu =1}^{N_{\text{mem}}}{\bigg (}F{\Big (}\xi _{\mu i}+\sum \limits _{j\neq i}\xi _{\mu j}V_{j}^{(t)}{\Big )}-F{\Big (}-\xi _{\mu i}+\sum \limits _{j\neq i}\xi _{\mu j}V_{j}^{(t)}{\Big )}{\bigg )}{\bigg ]}}
which states that in order to calculate the updated state of the {\textstyle i}-th neuron the network compares two energies: the energy of the network with the {\displaystyle i}-th neuron in the ON state and the energy of the network with the {\displaystyle i}-th neuron in the OFF state, given the states of the remaining neurons. The updated state of the {\displaystyle i}-th neuron selects the state that has the lower of the two energies.
In the limiting case when the non-linear energy function is quadratic, {\displaystyle F(x)=x^{2}}, these equations reduce to the familiar energy function and the update rule for the classical binary Hopfield network.
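The asynchronous update rule above translates almost directly into code. The following is a minimal NumPy sketch under the assumption of bipolar states and a quadratic F by default; the function and variable names are illustrative and do not come from any particular library.

```python
import numpy as np

def dam_update(V, xi, F=lambda x: x**2):
    """One asynchronous sweep of the dense associative memory update rule.

    V  : current state vector of shape (N_f,), entries +1/-1.
    xi : memory matrix of shape (N_mem, N_f).
    F  : rapidly growing non-linear function (quadratic F recovers the
         classical binary Hopfield network).
    """
    V = V.copy()
    for i in range(V.size):
        rest = xi @ V - xi[:, i] * V[i]          # sum over j != i of xi_mu_j * V_j
        on  = F(+xi[:, i] + rest).sum()          # energy-comparison term, neuron ON
        off = F(-xi[:, i] + rest).sum()          # neuron OFF
        V[i] = 1 if on - off >= 0 else -1        # Sign[...], tie broken to +1
    return V
```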
The memory storage capacity of these networks can be calculated for random binary patterns. For the power energy function {\displaystyle F(x)=x^{n}} the maximal number of memories that can be stored and retrieved from this network without errors is given by
{\displaystyle N_{\text{mem}}^{max}\approx {\frac {1}{2(2n-3)!!}}{\frac {N_{f}^{n-1}}{\ln(N_{f})}}}
For an exponential energy function {\textstyle F(x)=e^{x}} the memory storage capacity is exponential in the number of feature neurons:
{\displaystyle N_{\text{mem}}^{max}\approx 2^{N_{f}/2}}
=== Continuous variables ===
Modern Hopfield networks or dense associative memories can be best understood in continuous variables and continuous time. Consider the network architecture, shown in Fig.1, and the equations for the evolution of the neurons' states, where the currents of the feature neurons are denoted by {\textstyle x_{i}}, and the currents of the memory neurons are denoted by {\displaystyle h_{\mu }} ({\displaystyle h} stands for hidden neurons). There are no synaptic connections among the feature neurons or the memory neurons. A matrix {\displaystyle \xi _{\mu i}} denotes the strength of synapses from a feature neuron {\displaystyle i} to the memory neuron {\displaystyle \mu }. The synapses are assumed to be symmetric, so that the same value characterizes a different physical synapse from the memory neuron {\displaystyle \mu } to the feature neuron {\displaystyle i}. The outputs of the memory neurons and the feature neurons are denoted by {\displaystyle f_{\mu }} and {\displaystyle g_{i}}, which are non-linear functions of the corresponding currents. In general these outputs can depend on the currents of all the neurons in that layer, so that {\displaystyle f_{\mu }=f(\{h_{\mu }\})} and {\textstyle g_{i}=g(\{x_{i}\})}. It is convenient to define these activation functions as derivatives of the Lagrangian functions for the two groups of neurons. This way the specific form of the equations for the neurons' states is completely defined once the Lagrangian functions are specified. Finally, the time constants for the two groups of neurons are denoted by {\displaystyle \tau _{f}} and {\displaystyle \tau _{h}}, and {\displaystyle I_{i}} is the input current to the network that can be driven by the presented data.
General systems of non-linear differential equations can have many complicated behaviors that can depend on the choice of the non-linearities and the initial conditions. For Hopfield networks, however, this is not the case: the dynamical trajectories always converge to a fixed point attractor state. This property is achieved because these equations are specifically engineered so that they have an underlying energy function. The terms grouped into square brackets represent a Legendre transform of the Lagrangian function with respect to the states of the neurons. If the Hessian matrices of the Lagrangian functions are positive semi-definite, the energy function is guaranteed to decrease on the dynamical trajectory. This property makes it possible to prove that the system of dynamical equations describing temporal evolution of neurons' activities will eventually reach a fixed point attractor state.
In certain situations one can assume that the dynamics of hidden neurons equilibrates at a much faster time scale compared to the feature neurons, {\textstyle \tau _{h}\ll \tau _{f}}. In this case the steady state solution of the second equation in the system (1) can be used to express the currents of the hidden units through the outputs of the feature neurons. This makes it possible to reduce the general theory (1) to an effective theory for feature neurons only. The resulting effective update rules and the energies for various common choices of the Lagrangian functions are shown in Fig.2. In the case of the log-sum-exponential Lagrangian function the update rule (if applied once) for the states of the feature neurons is the attention mechanism commonly used in many modern AI systems (see Ref. for the derivation of this result from the continuous time formulation).
=== Relationship to classical Hopfield network with continuous variables ===
Classical formulation of continuous Hopfield networks can be understood as a special limiting case of the modern Hopfield networks with one hidden layer. Continuous Hopfield networks for neurons with graded response are typically described by the dynamical equations and the energy function, where {\textstyle V_{i}=g(x_{i})}, and {\displaystyle g^{-1}(z)} is the inverse of the activation function {\displaystyle g(x)}. This model is a special limit of the class of models called models A, with a particular choice of the Lagrangian functions that, according to the definition (2), leads to the corresponding activation functions. If we integrate out the hidden neurons, the system of equations (1) reduces to the equations on the feature neurons (5) with
{\displaystyle T_{ij}=\sum \limits _{\mu =1}^{N_{h}}\xi _{\mu i}\xi _{\mu j}}
, and the general expression for the energy (3) reduces to the effective energy. While the first two terms in equation (6) are the same as those in equation (9), the third terms look superficially different. In equation (9) it is a Legendre transform of the Lagrangian for the feature neurons, while in (6) the third term is an integral of the inverse activation function. Nevertheless, these two expressions are in fact equivalent, since the derivatives of a function and its Legendre transform are inverse functions of each other. The easiest way to see that these two terms are equal explicitly is to differentiate each one with respect to {\displaystyle x_{i}}. The results of these differentiations for both expressions are equal to {\displaystyle x_{i}g(x_{i})'}. Thus, the two expressions are equal up to an additive constant. This completes the proof that the classical Hopfield network with continuous states is a special limiting case of the modern Hopfield network (1) with energy (3).
=== General formulation of the modern Hopfield network ===
Biological neural networks have a large degree of heterogeneity in terms of different cell types. This section describes a mathematical model of a fully connected modern Hopfield network assuming the extreme degree of heterogeneity: every single neuron is different. Specifically, an energy function and the corresponding dynamical equations are described assuming that each neuron has its own activation function and kinetic time scale. The network is assumed to be fully connected, so that every neuron is connected to every other neuron using a symmetric matrix of weights {\displaystyle W_{IJ}}, where indices {\displaystyle I} and {\displaystyle J} enumerate different neurons in the network, see Fig.3. The easiest way to mathematically formulate this problem is to define the architecture through a Lagrangian function {\displaystyle L(\{x_{I}\})} that depends on the activities of all the neurons in the network. The activation function for each neuron is defined as a partial derivative of the Lagrangian with respect to that neuron's activity. From the biological perspective one can think about {\displaystyle g_{I}} as an axonal output of the neuron {\displaystyle I}. In the simplest case, when the Lagrangian is additive for different neurons, this definition results in an activation that is a non-linear function of that neuron's activity. For non-additive Lagrangians this activation function can depend on the activities of a group of neurons. For instance, it can contain contrastive (softmax) or divisive normalization. The dynamical equations describing the temporal evolution of a given neuron belong to the class of models called firing rate models in neuroscience. Each neuron {\displaystyle I} collects the axonal outputs {\displaystyle g_{J}} from all the neurons, weights them with the synaptic coefficients {\displaystyle W_{IJ}} and produces its own time-dependent activity {\displaystyle x_{I}}. The temporal evolution has a time constant {\displaystyle \tau _{I}}, which in general can be different for every neuron. This network has a global energy function, in which the first two terms represent the Legendre transform of the Lagrangian function with respect to the neurons' currents {\displaystyle x_{I}}. The temporal derivative of this energy function can be computed on the dynamical trajectories (see the references for details). The last inequality sign holds provided that the matrix {\displaystyle M_{IK}} (or its symmetric part) is positive semi-definite. If, in addition to this, the energy function is bounded from below, the non-linear dynamical equations are guaranteed to converge to a fixed point attractor state. The advantage of formulating this network in terms of the Lagrangian functions is that it makes it possible to easily experiment with different choices of the activation functions and different architectural arrangements of neurons. For all those flexible choices the conditions of convergence are determined by the properties of the matrix {\displaystyle M_{IJ}} and the existence of the lower bound on the energy function.
=== Hierarchical associative memory network ===
The neurons can be organized in layers so that every neuron in a given layer has the same activation function and the same dynamic time scale. If we assume that there are no horizontal connections between the neurons within the layer (lateral connections) and there are no skip-layer connections, the general fully connected network (11), (12) reduces to the architecture shown in Fig.4. It has {\displaystyle N_{\text{layer}}} layers of recurrently connected neurons with the states described by continuous variables {\displaystyle x_{i}^{A}} and the activation functions {\displaystyle g_{i}^{A}}, where index {\displaystyle A} enumerates the layers of the network, and index {\displaystyle i} enumerates individual neurons in that layer. The activation functions can depend on the activities of all the neurons in the layer. Every layer can have a different number of neurons {\displaystyle N_{A}}. These neurons are recurrently connected with the neurons in the preceding and the subsequent layers. The matrices of weights that connect neurons in layers {\displaystyle A} and {\displaystyle B} are denoted by {\displaystyle \xi _{ij}^{(A,B)}} (the order of the upper indices for weights is the same as the order of the lower indices; in the example above this means that the index {\displaystyle i} enumerates neurons in the layer {\displaystyle A}, and index {\displaystyle j} enumerates neurons in the layer {\displaystyle B}). The feedforward weights and the feedback weights are equal. The dynamical equations for the neurons' states can be written down together with appropriate boundary conditions. The main difference between these equations and those from the conventional feedforward networks is the presence of the second term, which is responsible for the feedback from higher layers. These top-down signals help neurons in lower layers to decide on their response to the presented stimuli. Following the general recipe it is convenient to introduce a Lagrangian function {\displaystyle L^{A}(\{x_{i}^{A}\})} for the {\displaystyle A}-th hidden layer, which depends on the activities of all the neurons in that layer. The activation functions in that layer can be defined as partial derivatives of the Lagrangian. With these definitions the energy (Lyapunov) function follows. If the Lagrangian functions, or equivalently the activation functions, are chosen in such a way that the Hessians for each layer are positive semi-definite and the overall energy is bounded from below, this system is guaranteed to converge to a fixed point attractor state. The temporal derivative of this energy function can likewise be computed on the trajectories. Thus, the hierarchical layered network is indeed an attractor network with the global energy function. This network is described by a hierarchical set of synaptic weights that can be learned for each specific problem.
== See also ==
Associative memory (disambiguation)
Autoassociative memory
Boltzmann machine – like a Hopfield net but uses annealed Gibbs sampling instead of gradient descent
Dynamical systems model of cognition
Ising model
Hebbian theory
== References ==
== External links ==
Rojas, Raul (12 July 1996). "13. The Hopfield model" (PDF). Neural Networks – A Systematic Introduction. Springer. ISBN 978-3-540-60505-8.
Hopfield Network Javascript
The Travelling Salesman Problem Archived 2015-05-30 at the Wayback Machine – Hopfield Neural Network JAVA Applet
Hopfield, John (2007). "Hopfield network". Scholarpedia. 2 (5): 1977. Bibcode:2007SchpJ...2.1977H. doi:10.4249/scholarpedia.1977.
"Don't Forget About Associative Memories". The Gradient. November 7, 2020. Retrieved September 27, 2024.
Fletcher, Tristan. "Hopfield Network Learning Using Deterministic Latent Variables" (PDF) (Tutorial). Archived from the original (PDF) on 2011-10-05. | Wikipedia/Hopfield_model |
In financial economics, asset pricing refers to a formal treatment and development of two interrelated pricing principles, outlined below, together with the resultant models. There have been many models developed for different situations, but correspondingly, these stem from either general equilibrium asset pricing or rational asset pricing, the latter corresponding to risk neutral pricing.
Investment theory, which is near synonymous, encompasses the body of knowledge used to support the decision-making process of choosing investments, and the asset pricing models are then applied in determining the asset-specific required rate of return on the investment in question, and for hedging.
== General equilibrium asset pricing ==
Under general equilibrium theory prices are determined through market pricing by supply and demand.
Here asset prices jointly satisfy the requirement that the quantities of each asset supplied and the quantities demanded must be equal at that price - so-called market clearing. These models are born out of modern portfolio theory, with the capital asset pricing model (CAPM) as the prototypical result. Prices here are determined with reference to macroeconomic variables - for the CAPM, the "overall market"; for the CCAPM, overall wealth - such that individual preferences are subsumed.
These models aim at modeling the statistically derived probability distribution of the market prices of "all" securities at a given future investment horizon; they are thus of "large dimension". See § Risk and portfolio management: the P world under Mathematical finance. General equilibrium pricing is then used when evaluating diverse portfolios, creating one asset price for many assets.
Calculating an investment or share value here, entails:
(i) a financial forecast for the business or project in question;
(ii) where the output cashflows are then discounted at the rate returned by the model selected; this rate in turn reflecting the "riskiness" - i.e. the systematic, or undiversifiable, risk - of these cashflows;
(iii) these present values are then aggregated, returning the value in question; a minimal numerical sketch of these steps is given below.
See: Financial modeling § Accounting, and Valuation using discounted cash flows.
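As a hedged illustration of steps (i)-(iii), the following sketch discounts a short forecast of cashflows at a CAPM-style required return. The figures and parameter choices are invented for the example and carry no significance.

```python
# Minimal discounted-cash-flow sketch (illustrative figures only).
forecast_cashflows = [100.0, 110.0, 121.0]             # step (i): financial forecast
risk_free, beta, market_premium = 0.03, 1.2, 0.05
required_return = risk_free + beta * market_premium     # rate returned by the model (CAPM)

present_value = sum(                                    # steps (ii) and (iii)
    cf / (1 + required_return) ** (t + 1)
    for t, cf in enumerate(forecast_cashflows)
)
print(round(present_value, 2))
```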
(Note that an alternate, although less common approach, is to apply a "fundamental valuation" method, such as the T-model, which instead relies on accounting information, attempting to model return based on the company's expected financial performance.)
== Rational pricing ==
Under Rational pricing, derivative prices are calculated such that they are arbitrage-free with respect to more fundamental (equilibrium determined) securities prices;
for an overview of the logic see Rational pricing § Pricing derivatives.
In general this approach does not group assets but rather creates a unique risk price for each asset; these models are then of "low dimension".
For further discussion, see § Derivatives pricing: the Q world under Mathematical finance.
Calculating option prices, and their "Greeks", i.e. sensitivities, combines:
(i) a model of the underlying price behavior, or "process" - i.e. the asset pricing model selected, with its parameters having been calibrated to observed prices;
and
(ii) a mathematical method which returns the premium (or sensitivity) as the expected value of option payoffs over the range of prices of the underlying.
See Valuation of options § Pricing models.
The classical model here is Black–Scholes which describes the dynamics of a market including derivatives (with its option pricing formula); leading more generally to martingale pricing, as well as the above listed models. Black–Scholes assumes a log-normal process; the other models will, for example, incorporate features such as mean reversion, or will be "volatility surface aware", applying local volatility or stochastic volatility.
Rational pricing is also applied to fixed income instruments such as bonds (that consist of just one asset), as well as to interest rate modeling in general, where yield curves must be arbitrage free with respect to the prices of individual instruments.
See Rational pricing § Fixed income securities, Bootstrapping (finance), and Multi-curve framework.
For discussion as to how the models listed above are applied to options on these instruments, and other interest rate derivatives, see short-rate model and Heath–Jarrow–Morton framework.
== Interrelationship ==
These principles are interrelated
through the fundamental theorem of asset pricing.
Here, "in the absence of arbitrage, the market imposes a probability distribution, called a risk-neutral or equilibrium measure, on the set of possible market scenarios, and... this probability measure determines market prices via discounted expectation".
Correspondingly, this essentially means that one may make financial decisions using the risk neutral probability distribution consistent with (i.e. solved for) observed equilibrium prices. See Financial economics § Arbitrage-free pricing and equilibrium.
Relatedly, both approaches are consistent with what is called the Arrow–Debreu theory.
Here models can be derived as a function of "state prices" - contracts that pay one unit of a numeraire (a currency or a commodity) if a particular state occurs at a particular time, and zero otherwise. The approach taken is to recognize that since the price of a security can be returned as a linear combination of its state prices (contingent claim analysis) so, conversely, pricing- or return-models can be backed-out, given state prices.
The CAPM, for example, can be derived by linking risk aversion to overall market return, and restating for price. Black-Scholes can be derived by attaching a binomial probability to each of numerous possible spot-prices (i.e. states) and then rearranging for the terms in its formula.
See Financial economics § Uncertainty.
== See also ==
List of financial economics articles
Outline of finance § Asset pricing theory
Outline of finance § Portfolio theory
== References == | Wikipedia/Asset_pricing_model |
In probability theory, the telegraph process is a memoryless continuous-time stochastic process that shows two distinct values. It models burst noise (also called popcorn noise or random telegraph signal). If the two possible values that a random variable can take are
{\displaystyle c_{1}} and {\displaystyle c_{2}}, then the process can be described by the following master equations:
{\displaystyle \partial _{t}P(c_{1},t|x,t_{0})=-\lambda _{1}P(c_{1},t|x,t_{0})+\lambda _{2}P(c_{2},t|x,t_{0})}
and
{\displaystyle \partial _{t}P(c_{2},t|x,t_{0})=\lambda _{1}P(c_{1},t|x,t_{0})-\lambda _{2}P(c_{2},t|x,t_{0}).}
where {\displaystyle \lambda _{1}} is the transition rate for going from state {\displaystyle c_{1}} to state {\displaystyle c_{2}} and {\displaystyle \lambda _{2}} is the transition rate for going from state {\displaystyle c_{2}} to state {\displaystyle c_{1}}. The process is also known under the names Kac process (after mathematician Mark Kac), and dichotomous random process.
== Solution ==
The master equation is compactly written in a matrix form by introducing a vector {\displaystyle \mathbf {P} =[P(c_{1},t|x,t_{0}),P(c_{2},t|x,t_{0})]},
{\displaystyle {\frac {d\mathbf {P} }{dt}}=W\mathbf {P} }
where
{\displaystyle W={\begin{pmatrix}-\lambda _{1}&\lambda _{2}\\\lambda _{1}&-\lambda _{2}\end{pmatrix}}}
is the transition rate matrix. The formal solution is constructed from the initial condition {\displaystyle \mathbf {P} (0)} (that defines that at {\displaystyle t=t_{0}}, the state is {\displaystyle x}) by
{\displaystyle \mathbf {P} (t)=e^{Wt}\mathbf {P} (0)}.
It can be shown that
{\displaystyle e^{Wt}=I+W{\frac {(1-e^{-2\lambda t})}{2\lambda }}}
where {\displaystyle I} is the identity matrix and {\displaystyle \lambda =(\lambda _{1}+\lambda _{2})/2} is the average transition rate. As {\displaystyle t\rightarrow \infty }, the solution approaches a stationary distribution {\displaystyle \mathbf {P} (t\rightarrow \infty )=\mathbf {P} _{s}} given by
{\displaystyle \mathbf {P} _{s}={\frac {1}{2\lambda }}{\begin{pmatrix}\lambda _{2}\\\lambda _{1}\end{pmatrix}}}
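The closed form for the matrix exponential can be checked numerically against a general-purpose routine. The sketch below does this for arbitrarily chosen rates; it is a verification aid rather than part of the original derivation.

```python
import numpy as np
from scipy.linalg import expm

lam1, lam2, t = 0.7, 0.3, 2.0
lam = (lam1 + lam2) / 2                      # average transition rate
W = np.array([[-lam1,  lam2],
              [ lam1, -lam2]])

closed_form = np.eye(2) + W * (1 - np.exp(-2 * lam * t)) / (2 * lam)
assert np.allclose(closed_form, expm(W * t))   # matches the general matrix exponential

P0 = np.array([1.0, 0.0])                      # start in state c1
print(closed_form @ P0)                        # [P(c1, t), P(c2, t)]
```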
== Properties ==
Knowledge of an initial state decays exponentially. Therefore, for a time {\displaystyle t\gg (2\lambda )^{-1}}, the process will reach the following stationary values, denoted by subscript s:
Mean:
{\displaystyle \langle X\rangle _{s}={\frac {c_{1}\lambda _{2}+c_{2}\lambda _{1}}{\lambda _{1}+\lambda _{2}}}.}
Variance:
{\displaystyle \operatorname {var} \{X\}_{s}={\frac {(c_{1}-c_{2})^{2}\lambda _{1}\lambda _{2}}{(\lambda _{1}+\lambda _{2})^{2}}}.}
One can also calculate a correlation function:
{\displaystyle \langle X(t),X(u)\rangle _{s}=e^{-2\lambda |t-u|}\operatorname {var} \{X\}_{s}.}
== Application ==
This random process finds wide application in model building:
In physics, spin systems and fluorescence intermittency show dichotomous properties. But especially in single molecule experiments probability distributions featuring algebraic tails are used instead of the exponential distribution implied in all formulas above.
In finance for describing stock prices
In biology for describing transcription factor binding and unbinding.
== See also ==
Markov chain
List of stochastic processes topics
Random telegraph signal
== References == | Wikipedia/Telegraph_process |
In financial mathematics, the Black–Karasinski model is a mathematical model of the term structure of interest rates; see short-rate model. It is a one-factor model as it describes interest rate movements as driven by a single source of randomness. It belongs to the class of no-arbitrage models, i.e. it can fit today's zero-coupon bond prices, and in its most general form, today's prices for a set of caps, floors or European swaptions. The model was introduced by Fischer Black and Piotr Karasinski in 1991.
== Model ==
The main state variable of the model is the short rate, which is assumed to follow the stochastic differential equation (under the risk-neutral measure):
{\displaystyle d\ln(r)=[\theta _{t}-\phi _{t}\ln(r)]\,dt+\sigma _{t}\,dW_{t}}
where dWt is a standard Brownian motion. The model implies a log-normal distribution for the short rate and therefore the expected value of the money-market account is infinite for any maturity.
In the original article by Fischer Black and Piotr Karasinski the model was implemented using a binomial tree with variable spacing, but a trinomial tree implementation is more common in practice, typically a log-normal application of the Hull–White lattice.
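Although lattice methods are the standard pricing approach, the log-rate SDE itself is easy to simulate with an Euler step, which can help build intuition for the model's behaviour. The sketch below assumes constant parameters theta, phi and sigma purely for illustration; the parameter values are invented.

```python
import numpy as np

def simulate_black_karasinski(r0, theta, phi, sigma, T=1.0, steps=252, seed=0):
    """Euler-Maruyama simulation of d ln(r) = [theta - phi*ln(r)] dt + sigma dW."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = np.empty(steps + 1)
    x[0] = np.log(r0)
    for k in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + (theta - phi * x[k]) * dt + sigma * dW
    return np.exp(x)            # short-rate path, positive by construction

path = simulate_black_karasinski(r0=0.03, theta=-3.5 * 0.05, phi=0.05, sigma=0.1)
```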
== Applications ==
The model is used mainly for the pricing of exotic interest rate derivatives such as American and Bermudan bond options and swaptions, once its parameters have been calibrated to the current term structure of interest rates and to the prices or implied volatilities of caps, floors or European swaptions. Numerical methods (usually trees) are used in the calibration stage as well as for pricing. It can also be used in modeling credit default risk, where the Black–Karasinski short rate expresses the (stochastic) intensity of default events driven by a Cox process; the guaranteed positive rates are an important feature of the model here. Recent work on Perturbation Methods in Credit Derivatives has shown how analytic prices can be conveniently deduced in many such circumstances, as well as for interest rate options.
== References ==
== External links ==
Simon Benninga and Zvi Wiener (1998). Binomial Term Structure Models, Mathematica in Education and Research, Vol. 7 No. 3 1998
Blanka Horvath, Antoine Jacquier and Colin Turfus (2017). Analytic Option Prices for the Black–Karasinski Short Rate Model
Colin Turfus (2018). Analytic Swaption Pricing in the Black–Karasinski Model
Colin Turfus (2018). Exact Arrow-Debreu Pricing for the Black–Karasinski Short Rate Model
Colin Turfus (2019). Perturbation Expansion for Arrow–Debreu Pricing with Hull-White Interest Rates and Black–Karasinski Credit Intensity
Colin Turfus and Piotr Karasinski (2021). The Black-Karasinski Model: Thirty Years On | Wikipedia/Black–Karasinski_model |
In the theory of stochastic processes, a subdiscipline of probability theory, filtrations are totally ordered collections of subsets that are used to model the information that is available at a given point and therefore play an important role in the formalization of random (stochastic) processes.
== Definition ==
Let {\displaystyle (\Omega ,{\mathcal {A}},P)} be a probability space and let {\displaystyle I} be an index set with a total order {\displaystyle \leq } (often {\displaystyle \mathbb {N} }, {\displaystyle \mathbb {R} ^{+}}, or a subset of {\displaystyle \mathbb {R} ^{+}}).
For every {\displaystyle i\in I} let {\displaystyle {\mathcal {F}}_{i}} be a sub-σ-algebra of {\displaystyle {\mathcal {A}}}. Then
{\displaystyle \mathbb {F} :=({\mathcal {F}}_{i})_{i\in I}}
is called a filtration, if {\displaystyle {\mathcal {F}}_{k}\subseteq {\mathcal {F}}_{\ell }} for all {\displaystyle k\leq \ell }. So filtrations are families of σ-algebras that are ordered non-decreasingly. If {\displaystyle \mathbb {F} } is a filtration, then {\displaystyle (\Omega ,{\mathcal {A}},\mathbb {F} ,P)} is called a filtered probability space.
== Example ==
Let {\displaystyle (X_{n})_{n\in \mathbb {N} }} be a stochastic process on the probability space {\displaystyle (\Omega ,{\mathcal {A}},P)}.
Let {\displaystyle \sigma (X_{k}\mid k\leq n)} denote the σ-algebra generated by the random variables {\displaystyle X_{1},X_{2},\dots ,X_{n}}. Then
{\displaystyle {\mathcal {F}}_{n}:=\sigma (X_{k}\mid k\leq n)}
is a σ-algebra and
{\displaystyle \mathbb {F} =({\mathcal {F}}_{n})_{n\in \mathbb {N} }}
is a filtration.
{\displaystyle \mathbb {F} } really is a filtration, since by definition all {\displaystyle {\mathcal {F}}_{n}} are σ-algebras and
{\displaystyle \sigma (X_{k}\mid k\leq n)\subseteq \sigma (X_{k}\mid k\leq n+1).}
This is known as the natural filtration of {\displaystyle {\mathcal {A}}} with respect to {\displaystyle (X_{n})_{n\in \mathbb {N} }}.
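For a concrete finite illustration (not taken from the article itself), consider three fair coin tosses with sample space {0,1}^3. The natural filtration can be represented by the partition of the sample space into atoms determined by the first n tosses; the sketch below lists those atoms and shows the partitions refining as n grows.

```python
from itertools import product

omega = list(product([0, 1], repeat=3))       # sample space of three coin tosses

def atoms(n):
    """Atoms of F_n = sigma(X_1, ..., X_n): outcomes grouped by the first n tosses."""
    groups = {}
    for w in omega:
        groups.setdefault(w[:n], []).append(w)
    return list(groups.values())

for n in range(4):
    print(n, len(atoms(n)), "atoms")           # 1, 2, 4, 8 atoms: a refining partition
```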
== Types of filtrations ==
=== Right-continuous filtration ===
If {\displaystyle \mathbb {F} =({\mathcal {F}}_{i})_{i\in I}} is a filtration, then the corresponding right-continuous filtration is defined as
{\displaystyle \mathbb {F} ^{+}:=({\mathcal {F}}_{i}^{+})_{i\in I},}
with
{\displaystyle {\mathcal {F}}_{i}^{+}:=\bigcap _{z>i}{\mathcal {F}}_{z}.}
The filtration {\displaystyle \mathbb {F} } itself is called right-continuous if {\displaystyle \mathbb {F} ^{+}=\mathbb {F} }.
=== Complete filtration ===
Let {\displaystyle (\Omega ,{\mathcal {F}},P)} be a probability space, and let
{\displaystyle {\mathcal {N}}_{P}:=\{A\subseteq \Omega \mid A\subseteq B{\text{ for some }}B\in {\mathcal {F}}{\text{ with }}P(B)=0\}}
be the set of all sets that are contained within a {\displaystyle P}-null set.
A filtration {\displaystyle \mathbb {F} =({\mathcal {F}}_{i})_{i\in I}} is called a complete filtration, if every {\displaystyle {\mathcal {F}}_{i}} contains {\displaystyle {\mathcal {N}}_{P}}. This implies {\displaystyle (\Omega ,{\mathcal {F}}_{i},P)} is a complete measure space for every {\displaystyle i\in I.} (The converse is not necessarily true.)
=== Augmented filtration ===
A filtration is called an augmented filtration if it is complete and right continuous. For every filtration {\displaystyle \mathbb {F} } there exists a smallest augmented filtration {\displaystyle {\tilde {\mathbb {F} }}} refining {\displaystyle \mathbb {F} }.
If a filtration is an augmented filtration, it is said to satisfy the usual hypotheses or the usual conditions.
== See also ==
Natural filtration
Filtration (mathematics)
Filter (mathematics)
== References == | Wikipedia/Filtration_(probability_theory) |
The LIBOR market model, also known as the BGM Model (Brace Gatarek Musiela Model, in reference to the names of some of the inventors) is a financial model of interest rates. It is used for pricing interest rate derivatives, especially exotic derivatives like Bermudan swaptions, ratchet caps and floors, target redemption notes, autocaps, zero coupon swaptions, constant maturity swaps and spread options, among many others. The quantities that are modeled, rather than the short rate or instantaneous forward rates (like in the Heath–Jarrow–Morton framework) are a set of forward rates (also called forward LIBORs), which have the advantage of being directly observable in the market, and whose volatilities are naturally linked to traded contracts. Each forward rate is modeled by a lognormal process under its forward measure, i.e. a Black model leading to a Black formula for interest rate caps. This formula is the market standard to quote cap prices in terms of implied volatilities, hence the term "market model". The LIBOR market model may be interpreted as a collection of forward LIBOR dynamics for different forward rates with spanning tenors and maturities, each forward rate being consistent with a Black interest rate caplet formula for its canonical maturity. One can write the different rates' dynamics under a common pricing measure, for example the forward measure for a preferred single maturity, and in this case forward rates will not be lognormal under the unique measure in general, leading to the need for numerical methods such as Monte Carlo simulation or approximations like the frozen drift assumption.
== Model dynamic ==
The LIBOR market model models a set of {\displaystyle n} forward rates {\displaystyle L_{j}}, {\displaystyle j=1,\ldots ,n}, as lognormal processes. Under the respective {\displaystyle T_{j}}-forward measure {\displaystyle Q_{T_{j+1}}},
{\displaystyle dL_{j}(t)=\mu _{j}(t)L_{j}(t)dt+\sigma _{j}(t)L_{j}(t)dW^{Q_{T_{j+1}}}(t).}
Here we can consider that {\displaystyle \mu _{j}(t)=0,\forall t} (centered process). Here, {\displaystyle L_{j}} is the forward rate for the period {\displaystyle [T_{j},T_{j+1}]}. For each single forward rate the model corresponds to the Black model.
The novelty is that, in contrast to the Black model, the LIBOR market model describes the dynamic of a whole family of forward rates under a common measure. The question now is how to switch between the different {\displaystyle T}-forward measures.
By means of the multivariate Girsanov's theorem one can show that
{\displaystyle dW^{Q_{T_{j}}}(t)={\begin{cases}dW^{Q_{T_{p}}}(t)-\sum \limits _{k=j+1}^{p}{\frac {\delta L_{k}(t)}{1+\delta L_{k}(t)}}{\sigma }_{k}(t){\rho }_{jk}dt&j<p\\dW^{Q_{T_{p}}}(t)&j=p\\dW^{Q_{T_{p}}}(t)+\sum \limits _{k=p+1}^{j}{\frac {\delta L_{k}(t)}{1+\delta L_{k}(t)}}{\sigma }_{k}(t){\rho }_{jk}dt&j>p\end{cases}}}
and
{\displaystyle dL_{j}(t)={\begin{cases}L_{j}(t){\sigma }_{j}(t)dW^{Q_{T_{p}}}(t)-L_{j}(t)\sum \limits _{k=j+1}^{p}{\frac {\delta L_{k}(t)}{1+\delta L_{k}(t)}}{\sigma }_{j}(t){\sigma }_{k}(t){\rho }_{jk}dt&j<p\\L_{j}(t){\sigma }_{j}(t)dW^{Q_{T_{p}}}(t)&j=p\\L_{j}(t){\sigma }_{j}(t)dW^{Q_{T_{p}}}(t)+L_{j}(t)\sum \limits _{k=p+1}^{j}{\frac {\delta L_{k}(t)}{1+\delta L_{k}(t)}}{\sigma }_{j}(t){\sigma }_{k}(t){\rho }_{jk}dt&j>p\\\end{cases}}}
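In practice these dynamics are typically handled by a log-Euler Monte Carlo simulation under a single terminal measure. The sketch below is a heavily simplified illustration of that idea (one driving factor, constant volatilities, all correlations set to one, equal accrual fractions); it is not a production LMM implementation, and the function name and parameter values are invented for the example.

```python
import numpy as np

def simulate_lmm(L0, sigma, delta, T_p_index, n_steps=50, n_paths=10_000, seed=1):
    """Log-Euler simulation of forward LIBORs under the terminal measure Q_{T_p}.

    L0    : initial forward rates L_j(0), shape (n,)
    sigma : constant lognormal volatilities, shape (n,)
    delta : accrual fraction (assumed equal for all periods)
    """
    rng = np.random.default_rng(seed)
    n, p = len(L0), T_p_index
    dt = 1.0 / n_steps
    L = np.tile(L0, (n_paths, 1))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, 1))   # single driving factor
        for j in range(n):
            # drift under Q_{T_p}: minus-sum for j < p, plus-sum for j > p, zero for j = p
            ks = range(j + 1, p + 1) if j < p else range(p + 1, j + 1)
            drift = sum(delta * L[:, k] / (1 + delta * L[:, k]) * sigma[j] * sigma[k]
                        for k in ks) * (-1 if j < p else 1)
            L[:, j] *= np.exp((drift - 0.5 * sigma[j] ** 2) * dt + sigma[j] * dW[:, 0])
    return L

L_T = simulate_lmm(L0=np.array([0.02, 0.022, 0.025]),
                   sigma=np.array([0.2, 0.2, 0.2]), delta=0.5, T_p_index=2)
```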
== References ==
== Literature ==
Brace, A., Gatarek, D. et Musiela, M. (1997): “The Market Model of Interest Rate Dynamics”, Mathematical Finance, 7(2), 127-154.
Miltersen, K., Sandmann, K. et Sondermann, D., (1997): “Closed Form Solutions for Term Structure Derivates with Log-Normal Interest Rates”, Journal of Finance, 52(1), 409-430.
Wernz, J. (2020): “Bank Management and Control”, Springer Nature, 85-88
== External links ==
Java applets for pricing under a LIBOR market model and Monte-Carlo methods
Jave source code and spreadsheet of a LIBOR market model, including calibration to swaption and product valuation
Damiano Brigo's lecture notes on the LIBOR market model for the Bocconi University fixed income course | Wikipedia/LIBOR_market_model |
In probability theory, the Borel–Cantelli lemma is a theorem about sequences of events. In general, it is a result in measure theory. It is named after Émile Borel and Francesco Paolo Cantelli, who gave statement to the lemma in the first decades of the 20th century. A related result, sometimes called the second Borel–Cantelli lemma, is a partial converse of the first Borel–Cantelli lemma. The lemma states that, under certain conditions, an event will have probability of either zero or one. Accordingly, it is the best-known of a class of similar theorems, known as zero-one laws. Other examples include Kolmogorov's zero–one law and the Hewitt–Savage zero–one law.
== Statement of lemma for probability spaces ==
Let E1, E2, ... be a sequence of events in some probability space.
The Borel–Cantelli lemma states:
Here, "lim sup" denotes limit supremum of the sequence of events. That is, lim sup En is the outcome that infinitely many of the infinite sequence of events (En) actually occur. Explicitly,
lim sup
n
→
∞
E
n
=
⋂
n
=
1
∞
⋃
k
=
n
∞
E
k
.
{\displaystyle \limsup _{n\to \infty }E_{n}=\bigcap _{n=1}^{\infty }\bigcup _{k=n}^{\infty }E_{k}.}
The set lim sup En is sometimes denoted {En i.o.}, where "i.o." stands for "infinitely often". The theorem therefore asserts that if the sum of the probabilities of the events En is finite, then the set of all outcomes that contain infinitely many events must have probability zero. Note that no assumption of independence is required.
=== Example ===
Suppose (Xn) is a sequence of random variables with Pr(Xn = 0) = 1/n² for each n. The probability that Xn = 0 occurs for infinitely many n is equivalent to the probability of the intersection of infinitely many [Xn = 0] events. The intersection of infinitely many such events is a set of outcomes common to all of them. However, the sum ΣPr(Xn = 0) converges to π²/6 ≈ 1.645 < ∞, and so the Borel–Cantelli lemma states that the set of outcomes that are common to infinitely many such events occurs with probability zero. Hence, the probability of Xn = 0 occurring for infinitely many n is 0. Almost surely (i.e., with probability 1), Xn is nonzero for all but finitely many n.
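A quick simulation can make the example concrete. The sketch below samples independent indicator events with Pr(Xn = 0) = 1/n² and counts how many occur along each path; consistent with the lemma, the count stays small even for long horizons. It is an illustration only, with an arbitrarily chosen horizon and number of paths.

```python
import numpy as np

rng = np.random.default_rng(0)
N, paths = 10_000, 1_000
n = np.arange(1, N + 1)

# Indicator of the event {X_n = 0}, drawn independently with probability 1/n^2
hits = rng.random((paths, N)) < 1.0 / n**2
counts = hits.sum(axis=1)

print(counts.mean(), counts.max())   # mean near pi^2/6 ~ 1.64, and always finite
```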
== Proof ==
Let (En) be a sequence of events in some probability space.
The sequence of events {\textstyle \left\{\bigcup _{n=N}^{\infty }E_{n}\right\}_{N=1}^{\infty }} is non-increasing:
{\displaystyle \bigcup _{n=1}^{\infty }E_{n}\supseteq \bigcup _{n=2}^{\infty }E_{n}\supseteq \cdots \supseteq \bigcup _{n=N}^{\infty }E_{n}\supseteq \bigcup _{n=N+1}^{\infty }E_{n}\supseteq \cdots \supseteq \limsup _{n\to \infty }E_{n}.}
By continuity from above,
{\displaystyle \Pr(\limsup _{n\to \infty }E_{n})=\lim _{N\to \infty }\Pr \left(\bigcup _{n=N}^{\infty }E_{n}\right).}
By subadditivity,
{\displaystyle \Pr \left(\bigcup _{n=N}^{\infty }E_{n}\right)\leq \sum _{n=N}^{\infty }\Pr(E_{n}).}
By the original assumption, {\textstyle \sum _{n=1}^{\infty }\Pr(E_{n})<\infty .} As the series {\textstyle \sum _{n=1}^{\infty }\Pr(E_{n})} converges,
{\displaystyle \lim _{N\to \infty }\sum _{n=N}^{\infty }\Pr(E_{n})=0,}
as required.
== General measure spaces ==
For general measure spaces, the Borel–Cantelli lemma takes the following form:
== Converse result ==
A related result, sometimes called the second Borel–Cantelli lemma, is a partial converse of the first Borel–Cantelli lemma. The lemma states: If the events En are independent and the sum of the probabilities of the En diverges to infinity, then the probability that infinitely many of them occur is 1. That is:
The assumption of independence can be weakened to pairwise independence, but in that case the proof is more difficult.
The infinite monkey theorem follows from this second lemma.
=== Example ===
The lemma can be applied to give a covering theorem in Rn. Specifically (Stein 1993, Lemma X.2.1), if Ej is a collection of Lebesgue measurable subsets of a compact set in Rn such that
{\displaystyle \sum _{j}\mu (E_{j})=\infty ,}
then there is a sequence Fj of translates
{\displaystyle F_{j}=E_{j}+x_{j}}
such that
{\displaystyle \lim \sup F_{j}=\bigcap _{n=1}^{\infty }\bigcup _{k=n}^{\infty }F_{k}=\mathbb {R} ^{n}}
apart from a set of measure zero.
=== Proof ===
Suppose that {\textstyle \sum _{n=1}^{\infty }\Pr(E_{n})=\infty } and the events {\displaystyle (E_{n})_{n=1}^{\infty }} are independent. It is sufficient to show the event that the En's did not occur for infinitely many values of n has probability 0. This is just to say that it is sufficient to show that
{\displaystyle 1-\Pr(\limsup _{n\to \infty }E_{n})=0.}
Noting that:
{\displaystyle {\begin{aligned}1-\Pr(\limsup _{n\to \infty }E_{n})&=1-\Pr \left(\{E_{n}{\text{ i.o.}}\}\right)=\Pr \left(\{E_{n}{\text{ i.o.}}\}^{c}\right)\\&=\Pr \left(\left(\bigcap _{N=1}^{\infty }\bigcup _{n=N}^{\infty }E_{n}\right)^{c}\right)=\Pr \left(\bigcup _{N=1}^{\infty }\bigcap _{n=N}^{\infty }E_{n}^{c}\right)\\&=\Pr \left(\liminf _{n\to \infty }E_{n}^{c}\right)=\lim _{N\to \infty }\Pr \left(\bigcap _{n=N}^{\infty }E_{n}^{c}\right),\end{aligned}}}
it is enough to show {\textstyle \Pr \left(\bigcap _{n=N}^{\infty }E_{n}^{c}\right)=0}. Since the {\displaystyle (E_{n})_{n=1}^{\infty }} are independent:
{\displaystyle {\begin{aligned}\Pr \left(\bigcap _{n=N}^{\infty }E_{n}^{c}\right)&=\prod _{n=N}^{\infty }\Pr(E_{n}^{c})\\&=\prod _{n=N}^{\infty }(1-\Pr(E_{n})).\end{aligned}}}
The convergence test for infinite products guarantees that the product above is 0, if {\textstyle \sum _{n=N}^{\infty }\Pr(E_{n})} diverges. This completes the proof.
== Counterpart ==
Another related result is the so-called counterpart of the Borel–Cantelli lemma. It is a counterpart of the lemma in the sense that it gives a necessary and sufficient condition for the limsup to be 1 by replacing the independence assumption by the completely different assumption that {\displaystyle (A_{n})} is monotone increasing for sufficiently large indices. This lemma says:
Let {\displaystyle (A_{n})} be such that {\displaystyle A_{k}\subseteq A_{k+1}}, and let {\displaystyle {\bar {A}}} denote the complement of {\displaystyle A}. Then the probability that infinitely many {\displaystyle A_{k}} occur (that is, at least one {\displaystyle A_{k}} occurs) is one if and only if there exists a strictly increasing sequence of positive integers {\displaystyle (t_{k})} such that
{\displaystyle \sum _{k}\Pr(A_{t_{k+1}}\mid {\bar {A}}_{t_{k}})=\infty .}
This simple result can be useful in problems involving, for instance, hitting probabilities for stochastic processes, with the choice of the sequence {\displaystyle (t_{k})} usually being the essence.
== Kochen–Stone ==
Let {\displaystyle (A_{n})} be a sequence of events with {\textstyle \sum \Pr(A_{n})=\infty } and
{\textstyle \limsup _{k\to \infty }{\frac {\left(\sum _{n=1}^{k}\Pr(A_{n})\right)^{2}}{\sum _{1\leq m,n\leq k}\Pr(A_{m}\cap A_{n})}}>0.}
Then there is a positive probability that {\displaystyle A_{n}} occur infinitely often.
=== Proof ===
Let {\displaystyle S_{m,n}=\sum _{i=m}^{n}\mathbf {1} _{A_{i}}}. Then, note that
{\displaystyle E[S_{m,n}]^{2}=\left(\sum _{i=m}^{n}\Pr(A_{i})\right)^{2}}
and
{\displaystyle E[S_{m,n}^{2}]=\sum _{m\leq i,j\leq n}\Pr(A_{i}\cap A_{j}).}
Hence, we know that
{\displaystyle \limsup _{n\to \infty }{\frac {\mathbb {E} [S_{1,n}]^{2}}{\mathbb {E} [S_{1,n}^{2}]}}>0.}
We have that
{\displaystyle \Pr \left(\bigcup _{i=m}^{n}A_{i}\right)=\Pr(S_{m,n}>0).}
Now, notice that by the Cauchy–Schwarz inequality, for any random variable {\displaystyle X\geq 0}:
{\displaystyle \mathbb {E} [X]^{2}\leq \mathbb {E} [X\mathbf {1} _{\{X>0\}}]^{2}\leq \mathbb {E} [X^{2}]\Pr(X>0),}
therefore,
{\displaystyle \Pr(S_{m,n}>0)\geq {\frac {\mathbb {E} [S_{m,n}]^{2}}{\mathbb {E} [S_{m,n}^{2}]}}.}
We then have
{\displaystyle {\frac {\mathbb {E} [S_{m,n}]^{2}}{\mathbb {E} [S_{m,n}^{2}]}}\geq {\frac {E[S_{1,n}-S_{1,m-1}]^{2}}{E[S_{1,n}^{2}]}}.}
Given {\displaystyle m}, since {\displaystyle \lim _{n\to \infty }\mathbb {E} [S_{1,n}]=\infty }, we can find {\displaystyle n} large enough so that
{\displaystyle {\biggr |}{\frac {\mathbb {E} [S_{1,n}]-\mathbb {E} [S_{1,m-1}]}{\mathbb {E} [S_{1,n}]}}-1{\biggr |}<\epsilon ,}
for any given {\displaystyle \epsilon >0}. Therefore,
{\displaystyle \lim _{m\to \infty }\sup _{n\geq m}\Pr \left(\bigcup _{i=m}^{n}A_{i}\right)\geq \lim _{m\to \infty }\sup _{n\geq m}{\frac {E[S_{1,n}]^{2}}{E[S_{1,n}^{2}]}}>0.}
But the left side is precisely the probability that the {\displaystyle A_{n}} occur infinitely often, since
{\displaystyle \{A_{k}{\text{ i.o.}}\}=\{\omega \in \Omega :\forall m,\exists n\geq m{\text{ s.t. }}\omega \in A_{n}\}.}
We are done now, since we have shown that {\displaystyle P(A_{k}{\text{ i.o.}})>0.}
== See also ==
Lévy's zero–one law
Kuratowski convergence
Infinite monkey theorem
== References ==
Prokhorov, A.V. (2001) [1994], "Borel–Cantelli lemma", Encyclopedia of Mathematics, EMS Press
Feller, William (1961), An Introduction to Probability Theory and Its Application, John Wiley & Sons.
Stein, Elias (1993), Harmonic analysis: Real-variable methods, orthogonality, and oscillatory integrals, Princeton University Press.
Bruss, F. Thomas (1980), "A counterpart of the Borel Cantelli Lemma", J. Appl. Probab., 17 (4): 1094–1101, doi:10.2307/3213220, JSTOR 3213220, S2CID 250344204.
Durrett, Rick. "Probability: Theory and Examples." Duxbury advanced series, Third Edition, Thomson Brooks/Cole, 2005.
== External links ==
Planet Math Proof Refer for a simple proof of the Borel Cantelli Lemma | Wikipedia/Borel–Cantelli_lemma |
In time series analysis used in statistics and econometrics, autoregressive integrated moving average (ARIMA) and seasonal ARIMA (SARIMA) models are generalizations of the autoregressive moving average (ARMA) model to non-stationary series and periodic variation, respectively. All these models are fitted to time series in order to better understand it and predict future values. The purpose of these generalizations is to fit the data as well as possible. Specifically, ARMA assumes that the series is stationary, that is, its expected value is constant in time. If instead the series has a trend (but a constant variance/autocovariance), the trend is removed by "differencing", leaving a stationary series. This operation generalizes ARMA and corresponds to the "integrated" part of ARIMA. Analogously, periodic variation is removed by "seasonal differencing".
== Components ==
As in ARMA, the "autoregressive" (AR) part of ARIMA indicates that the evolving variable of interest is regressed on its prior values. The "moving average" (MA) part indicates that the regression error is a linear combination of error terms whose values occurred contemporaneously and at various times in the past. The "integrated" (I) part indicates that the data values have been replaced with the difference between each value and the previous value.
According to Wold's decomposition theorem, the ARMA model is sufficient to describe a regular (a.k.a. purely nondeterministic) wide-sense stationary time series. This motivates making a non-stationary time series stationary, e.g. by differencing, before ARMA can be used.
If the time series contains a predictable sub-process (a.k.a. pure sine or complex-valued exponential process), the predictable component is treated as a non-zero-mean but periodic (i.e., seasonal) component in the ARIMA framework, which is then eliminated by seasonal differencing.
== Mathematical formulation ==
Non-seasonal ARIMA models are usually denoted ARIMA(p, d, q) where parameters p, d, q are non-negative integers: p is the order (number of time lags) of the autoregressive model, d is the degree of differencing (the number of times the data have had past values subtracted), and q is the order of the moving-average model. Seasonal ARIMA models are usually denoted ARIMA(p, d, q)(P, D, Q)m, where the uppercase P, D, Q are the autoregressive, differencing, and moving average terms for the seasonal part of the ARIMA model and m is the number of periods in each season. When two of the parameters are 0, the model may be referred to based on the non-zero parameter, dropping "AR", "I" or "MA" from the acronym. For example,
{\displaystyle {\text{ARIMA}}(1,0,0)} is AR(1), {\displaystyle {\text{ARIMA}}(0,1,0)} is I(1), and {\displaystyle {\text{ARIMA}}(0,0,1)} is MA(1).
Given time series data Xt where t is an integer index and the Xt are real numbers, an {\displaystyle {\text{ARMA}}(p',q)} model is given by
{\displaystyle X_{t}-\alpha _{1}X_{t-1}-\dots -\alpha _{p'}X_{t-p'}=\varepsilon _{t}+\theta _{1}\varepsilon _{t-1}+\cdots +\theta _{q}\varepsilon _{t-q},}
or equivalently by
{\displaystyle \left(1-\sum _{i=1}^{p'}\alpha _{i}L^{i}\right)X_{t}=\left(1+\sum _{i=1}^{q}\theta _{i}L^{i}\right)\varepsilon _{t}}
where {\displaystyle L} is the lag operator, the {\displaystyle \alpha _{i}} are the parameters of the autoregressive part of the model, the {\displaystyle \theta _{i}} are the parameters of the moving average part and the {\displaystyle \varepsilon _{t}} are error terms. The error terms {\displaystyle \varepsilon _{t}} are generally assumed to be independent, identically distributed variables sampled from a normal distribution with zero mean.
If the polynomial {\displaystyle \textstyle \left(1-\sum _{i=1}^{p'}\alpha _{i}L^{i}\right)} has a unit root (a factor {\displaystyle (1-L)}) of multiplicity d, then it can be rewritten as:
{\displaystyle \left(1-\sum _{i=1}^{p'}\alpha _{i}L^{i}\right)=\left(1-\sum _{i=1}^{p'-d}\varphi _{i}L^{i}\right)\left(1-L\right)^{d}.}
An ARIMA(p, d, q) process expresses this polynomial factorisation property with p = p'−d, and is given by:
{\displaystyle \left(1-\sum _{i=1}^{p}\varphi _{i}L^{i}\right)(1-L)^{d}X_{t}=\left(1+\sum _{i=1}^{q}\theta _{i}L^{i}\right)\varepsilon _{t}}
and so is a special case of an ARMA(p+d, q) process with an autoregressive polynomial having d unit roots. (This is why no process that is accurately described by an ARIMA model with d > 0 is wide-sense stationary.)
The above can be generalized as follows.
{\displaystyle \left(1-\sum _{i=1}^{p}\varphi _{i}L^{i}\right)(1-L)^{d}X_{t}=\delta +\left(1+\sum _{i=1}^{q}\theta _{i}L^{i}\right)\varepsilon _{t}.}
This defines an ARIMA(p, d, q) process with drift {\displaystyle {\frac {\delta }{1-\sum \varphi _{i}}}}.
== Other special forms ==
The explicit identification of the factorization of the autoregression polynomial into factors as above can be extended to other cases, firstly to apply to the moving average polynomial and secondly to include other special factors. For example, having a factor
{\displaystyle (1-L^{s})}
in a model is one way of including a non-stationary seasonality of period s into the model; this factor has the effect of re-expressing the data as changes from s periods ago. Another example is the factor
{\displaystyle \left(1-{\sqrt {3}}L+L^{2}\right)}
, which includes a (non-stationary) seasonality of period 12. The effect of the first type of factor is to allow each season's value to drift separately over time, whereas with the second type values for adjacent seasons move together.
Identification and specification of appropriate factors in an ARIMA model can be an important step in modeling as it can allow a reduction in the overall number of parameters to be estimated while allowing the imposition on the model of types of behavior that logic and experience suggest should be there.
== Differencing ==
A stationary time series's properties do not change. Specifically, for a wide-sense stationary time series, the mean and the variance/autocovariance are constant over time. Differencing in statistics is a transformation applied to a non-stationary time-series in order to make it trend stationary (i.e., stationary in the mean sense), by removing or subtracting the trend or non-constant mean. However, it does not affect the non-stationarity of the variance or autocovariance. Likewise, seasonal differencing or deseasonalization is applied to a time-series to remove the seasonal component.
From the perspective of signal processing, especially the Fourier spectral analysis theory, the trend is a low-frequency part in the spectrum of a series, while the season is a periodic-frequency part. Therefore, differencing is a high-pass (that is, low-stop) filter and the seasonal-differencing is a comb filter to suppress respectively the low-frequency trend and the periodic-frequency season in the spectrum domain (rather than directly in the time domain).
To difference the data, we compute the difference between consecutive observations. Mathematically, this is shown as
{\displaystyle y_{t}'=y_{t}-y_{t-1}}
It may be necessary to difference the data a second time to obtain a stationary time series, which is referred to as second-order differencing:
{\displaystyle {\begin{aligned}y_{t}^{*}&=y_{t}'-y_{t-1}'\\&=(y_{t}-y_{t-1})-(y_{t-1}-y_{t-2})\\&=y_{t}-2y_{t-1}+y_{t-2}\end{aligned}}}
Seasonal differencing involves computing the difference between an observation and the corresponding observation in the previous season, e.g. a year earlier. This is shown as:
{\displaystyle y_{t}'=y_{t}-y_{t-m}\quad {\text{where }}m={\text{duration of season}}.}
The differenced data are then used for the estimation of an ARMA model.
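As an illustration of the differencing operations above, the following minimal Python sketch (assuming pandas is available; the series values are invented for the example) computes the first, second-order and seasonal differences:

```python
import pandas as pd

# Hypothetical monthly series; any numeric sequence indexed by time works.
y = pd.Series([112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
               115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140])

d1 = y.diff()          # first difference: y'_t = y_t - y_{t-1}
d2 = y.diff().diff()   # second-order difference: y_t - 2*y_{t-1} + y_{t-2}
ds = y.diff(12)        # seasonal difference with season length m = 12

print(d1.dropna().head())
print(d2.dropna().head())
print(ds.dropna().head())
```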
== Examples ==
Some well-known special cases arise naturally or are mathematically equivalent to other popular forecasting models. For example:
ARIMA(0, 0, 0) models white noise.
An ARIMA(0, 1, 0) model is a random walk.
An ARIMA(0, 1, 2) model is a Damped Holt's model.
An ARIMA(0, 1, 1) model without constant is a basic exponential smoothing model.
An ARIMA(0, 2, 2) model is given by
{\displaystyle X_{t}=2X_{t-1}-X_{t-2}+(\alpha +\beta -2)\varepsilon _{t-1}+(1-\alpha )\varepsilon _{t-2}+\varepsilon _{t}}
— which is equivalent to Holt's linear method with additive errors, or double exponential smoothing.
== Choosing the order ==
The order p and q can be determined using the sample autocorrelation function (ACF), partial autocorrelation function (PACF), and/or extended autocorrelation function (EACF) method.
Other alternative methods include AIC, BIC, etc. To determine the order of a non-seasonal ARIMA model, a useful criterion is the Akaike information criterion (AIC). It is written as
{\displaystyle {\text{AIC}}=-2\log(L)+2(p+q+k),}
where L is the likelihood of the data, p is the order of the autoregressive part and q is the order of the moving average part. The k represents the intercept of the ARIMA model. For AIC, if k = 1 then there is an intercept in the ARIMA model (c ≠ 0) and if k = 0 then there is no intercept in the ARIMA model (c = 0).
The corrected AIC for ARIMA models can be written as
{\displaystyle {\text{AICc}}={\text{AIC}}+{\frac {2(p+q+k)(p+q+k+1)}{T-p-q-k-1}}.}
The Bayesian Information Criterion (BIC) can be written as
{\displaystyle {\text{BIC}}={\text{AIC}}+((\log T)-2)(p+q+k).}
The objective is to minimize the AIC, AICc or BIC value for a good model: the lower the value of one of these criteria across the range of models being investigated, the better the model suits the data. The AIC and the BIC serve different purposes: the AIC tries to select the model that best approximates the unknown data-generating process, while the BIC tries to identify the true model among the candidates. The BIC approach is sometimes criticized on the grounds that no candidate model is ever a perfect description of real-life complex data; it nevertheless remains a useful selection method, since it penalizes additional parameters more heavily than the AIC does.
AICc can only be used to compare ARIMA models with the same orders of differencing. For ARIMAs with different orders of differencing, RMSE can be used for model comparison.
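As a rough illustration of order selection by information criterion, the sketch below (assuming the statsmodels package and a synthetic series; the candidate orders and the fixed differencing order d = 1 are arbitrary choices) fits several ARIMA models and keeps the one with the lowest AIC. Keeping d fixed is consistent with the comparability caveat above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = pd.Series(np.cumsum(rng.normal(size=300)))  # toy non-stationary series

best = None
for p in range(3):
    for q in range(3):
        try:
            res = ARIMA(y, order=(p, 1, q)).fit()
        except Exception:
            continue  # skip orders that fail to converge
        if best is None or res.aic < best[0]:
            best = (res.aic, (p, 1, q))

print("lowest AIC and its order:", best)
```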
== Estimation of coefficients ==
== Forecasts using ARIMA models ==
The ARIMA model can be viewed as a "cascade" of two models. The first is non-stationary:
{\displaystyle Y_{t}=(1-L)^{d}X_{t}}
while the second is wide-sense stationary:
{\displaystyle \left(1-\sum _{i=1}^{p}\varphi _{i}L^{i}\right)Y_{t}=\left(1+\sum _{i=1}^{q}\theta _{i}L^{i}\right)\varepsilon _{t}.}
Now forecasts can be made for the process {\displaystyle Y_{t}}, using a generalization of the method of autoregressive forecasting.
=== Forecast intervals ===
The forecast intervals (confidence intervals for forecasts) for ARIMA models are based on assumptions that the residuals are uncorrelated and normally distributed. If either of these assumptions does not hold, then the forecast intervals may be incorrect. For this reason, researchers plot the ACF and histogram of the residuals to check the assumptions before producing forecast intervals.
95% forecast interval:
{\displaystyle {\hat {y}}_{T+h\,\mid \,T}\pm 1.96{\sqrt {v_{T+h\,\mid \,T}}}}, where {\displaystyle v_{T+h\mid T}} is the variance of {\displaystyle y_{T+h}\mid y_{1},\dots ,y_{T}}.
For {\displaystyle h=1}, {\displaystyle v_{T+h\,\mid \,T}={\hat {\sigma }}^{2}} for all ARIMA models regardless of parameters and orders.
For ARIMA(0,0,q),
{\displaystyle y_{t}=e_{t}+\sum _{i=1}^{q}\theta _{i}e_{t-i}.}
{\displaystyle v_{T+h\,\mid \,T}={\hat {\sigma }}^{2}\left[1+\sum _{i=1}^{h-1}\theta _{i}^{2}\right],{\text{ for }}h=2,3,\ldots }
In general, forecast intervals from ARIMA models will increase as the forecast horizon increases.
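A minimal sketch of how such intervals might be produced in practice, assuming statsmodels and a synthetic series (the order (1, 1, 1) is an arbitrary choice), is:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
y = pd.Series(np.cumsum(rng.normal(size=200)))

res = ARIMA(y, order=(1, 1, 1)).fit()
fc = res.get_forecast(steps=10)
mean = fc.predicted_mean
ci = fc.conf_int(alpha=0.05)   # 95% interval: mean +/- 1.96 * sqrt(forecast variance)
print(pd.concat([mean, ci], axis=1))

# The residuals (res.resid) can be inspected for autocorrelation and normality
# before trusting these intervals, as discussed above.
```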
== Variations and extensions ==
A number of variations on the ARIMA model are commonly employed. If multiple time series are used then the
{\displaystyle X_{t}}
can be thought of as vectors and a VARIMA model may be appropriate. Sometimes a seasonal effect is suspected in the model; in that case, it is generally considered better to use a SARIMA (seasonal ARIMA) model than to increase the order of the AR or MA parts of the model. If the time-series is suspected to exhibit long-range dependence, then the d parameter may be allowed to have non-integer values in an autoregressive fractionally integrated moving average model, which is also called a Fractional ARIMA (FARIMA or ARFIMA) model.
== Software implementations ==
Various packages that apply methodology like Box–Jenkins parameter optimization are available to find the right parameters for the ARIMA model.
EViews: has extensive ARIMA and SARIMA capabilities.
Julia: contains an ARIMA implementation in the TimeModels package
Mathematica: includes ARIMAProcess function.
MATLAB: the Econometrics Toolbox includes ARIMA models and regression with ARIMA errors
NCSS: includes several procedures for ARIMA fitting and forecasting.
Python: the "statsmodels" package includes models for time series analysis – univariate time series analysis: AR, ARIMA – vector autoregressive models, VAR and structural VAR – descriptive statistics and process models for time series analysis.
R: the standard R stats package includes an arima function, which is documented in "ARIMA Modelling of Time Series". Besides the
{\displaystyle {\text{ARIMA}}(p,d,q)}
part, the function also includes seasonal factors, an intercept term, and exogenous variables (xreg, called "external regressors"). The package astsa has scripts such as sarima to estimate seasonal or nonseasonal models and sarima.sim to simulate from these models. The CRAN task view on Time Series is the reference with many more links. The "forecast" package in R can automatically select an ARIMA model for a given time series with the auto.arima() function [that can often give questionable results][1] and can also simulate seasonal and non-seasonal ARIMA models with its simulate.Arima() function.
Ruby: the "statsample-timeseries" gem is used for time series analysis, including ARIMA models and Kalman Filtering.
JavaScript: the "arima" package includes models for time series analysis and forecasting (ARIMA, SARIMA, SARIMAX, AutoARIMA)
C: the "ctsa" package includes ARIMA, SARIMA, SARIMAX, AutoARIMA and multiple methods for time series analysis.
SAFE TOOLBOXES: includes ARIMA modelling and regression with ARIMA errors.
SAS: includes extensive ARIMA processing in its Econometric and Time Series Analysis system: SAS/ETS.
IBM SPSS: includes ARIMA modeling in the Professional and Premium editions of its Statistics package as well as its Modeler package. The default Expert Modeler feature evaluates a range of seasonal and non-seasonal autoregressive (p), integrated (d), and moving average (q) settings and seven exponential smoothing models. The Expert Modeler can also transform the target time-series data into its square root or natural log. The user also has the option to restrict the Expert Modeler to ARIMA models, or to manually enter ARIMA nonseasonal and seasonal p, d, and q settings without Expert Modeler. Automatic outlier detection is available for seven types of outliers, and the detected outliers will be accommodated in the time-series model if this feature is selected.
SAP: the APO-FCS package in SAP ERP from SAP allows creation and fitting of ARIMA models using the Box–Jenkins methodology.
SQL Server Analysis Services: from Microsoft includes ARIMA as a Data Mining algorithm.
Stata includes ARIMA modelling (using its arima command) as of Stata 9.
StatSim: includes ARIMA models in the Forecast web app.
Teradata Vantage has the ARIMA function as part of its machine learning engine.
TOL (Time Oriented Language) is designed to model ARIMA models (including SARIMA, ARIMAX and DSARIMAX variants) [2].
Scala: spark-timeseries library contains ARIMA implementation for Scala, Java and Python. Implementation is designed to run on Apache Spark.
PostgreSQL/MadLib: Time Series Analysis/ARIMA.
X-12-ARIMA: from the US Bureau of the Census
== See also ==
Autocorrelation
ARMA
Finite impulse response
Infinite impulse response
Partial autocorrelation
X-13ARIMA-SEATS
== References ==
== Further reading ==
Asteriou, Dimitros; Hall, Stephen G. (2011). "ARIMA Models and the Box–Jenkins Methodology". Applied Econometrics (Second ed.). Palgrave MacMillan. pp. 265–286. ISBN 978-0-230-27182-1.
Mills, Terence C. (1990). Time Series Techniques for Economists. Cambridge University Press. ISBN 978-0-521-34339-8.
Percival, Donald B.; Walden, Andrew T. (1993). Spectral Analysis for Physical Applications. Cambridge University Press. ISBN 978-0-521-35532-2.
Shumway R.H. and Stoffer, D.S. (2017). Time Series Analysis and Its Applications: With R Examples. Springer. DOI: 10.1007/978-3-319-52452-8
ARIMA Models in R. Become an expert in fitting ARIMA (autoregressive integrated moving average) models to time series data using R.
== External links ==
Lecture notes on ARIMA models by Robert Nau at Duke University | Wikipedia/Autoregressive_integrated_moving_average |
In mathematical finance, the CEV or constant elasticity of variance model is a stochastic volatility model, although technically it would be classed more precisely as a local volatility model, that attempts to capture stochastic volatility and the leverage effect. The model is widely used by practitioners in the financial industry, especially for modelling equities and commodities. It was developed by John Cox in 1975.
== Dynamic ==
The CEV model is a stochastic process which evolves according to the following stochastic differential equation:
{\displaystyle \mathrm {d} S_{t}=\mu S_{t}\mathrm {d} t+\sigma S_{t}^{\gamma }\mathrm {d} W_{t}}
in which S is the spot price, t is time, and μ is a parameter characterising the drift, σ and γ are volatility parameters, and W is a Brownian motion.
It is a special case of a general local volatility model, written as
{\displaystyle \mathrm {d} S_{t}=\mu S_{t}\mathrm {d} t+v(t,S_{t})S_{t}\mathrm {d} W_{t}}
where the price return volatility is
{\displaystyle v(t,S_{t})=\sigma S_{t}^{\gamma -1}}
The constant parameters {\displaystyle \sigma ,\;\gamma } satisfy the conditions {\displaystyle \sigma \geq 0,\;\gamma \geq 0}.
The parameter {\displaystyle \gamma } controls the relationship between volatility and price, and is the central feature of the model. When {\displaystyle \gamma <1} we see an effect, commonly observed in equity markets, where the volatility of a stock increases as its price falls and the leverage ratio increases. Conversely, in commodity markets we often observe {\displaystyle \gamma >1}, whereby the volatility of the price of a commodity tends to increase as its price increases and the leverage ratio decreases. If {\displaystyle \gamma =1} the model becomes a geometric Brownian motion, as in the Black–Scholes model, whereas if {\displaystyle \gamma =0} and either {\displaystyle \mu =0} or the drift {\displaystyle \mu S} is replaced by {\displaystyle \mu }, the model becomes an arithmetic Brownian motion, the model proposed by Louis Bachelier in his PhD thesis "The Theory of Speculation", known as the Bachelier model.
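A minimal Euler–Maruyama sketch of the CEV SDE above in Python (the parameter values are arbitrary illustrations, and the flooring at zero is a simulation convenience rather than part of the model) might look like:

```python
import numpy as np

def simulate_cev(s0, mu, sigma, gamma, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama simulation of dS = mu*S dt + sigma*S**gamma dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, float(s0))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        s = s + mu * s * dt + sigma * np.maximum(s, 0.0) ** gamma * dw
        s = np.maximum(s, 0.0)  # keep paths non-negative at the boundary
    return s

paths = simulate_cev(s0=100.0, mu=0.02, sigma=1.5, gamma=0.7, T=1.0,
                     n_steps=252, n_paths=10_000)
print(paths.mean(), paths.std())
```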
== See also ==
Volatility (finance)
Stochastic volatility
Local volatility
SABR volatility model
CKLS process
== References ==
== External links ==
Asymptotic Approximations to CEV and SABR Models
Price and implied volatility under CEV model with closed formulas, Monte-Carlo and Finite Difference Method
Price and implied volatility of European options in CEV Model delamotte-b.fr | Wikipedia/Constant_elasticity_of_variance_model |
In mathematical finance, the SABR model is a stochastic volatility model, which attempts to capture the volatility smile in derivatives markets. The name stands for "stochastic alpha, beta, rho", referring to the parameters of the model. The SABR model is widely used by practitioners in the financial industry, especially in the interest rate derivative markets. It was developed by Patrick S. Hagan, Deep Kumar, Andrew Lesniewski, and Diana Woodward.
== Dynamics ==
The SABR model describes a single forward {\displaystyle F}, such as a LIBOR forward rate, a forward swap rate, or a forward stock price. Quoting volatilities against such forwards is one of the market standards used by participants. The volatility of the forward {\displaystyle F} is described by a parameter {\displaystyle \sigma }. SABR is a dynamic model in which both {\displaystyle F} and {\displaystyle \sigma } are represented by stochastic state variables whose time evolution is given by the following system of stochastic differential equations:
{\displaystyle dF_{t}=\sigma _{t}\left(F_{t}\right)^{\beta }\,dW_{t},}
{\displaystyle d\sigma _{t}=\alpha \sigma _{t}\,dZ_{t},}
with the prescribed time zero (currently observed) values {\displaystyle F_{0}} and {\displaystyle \sigma _{0}}. Here, {\displaystyle W_{t}} and {\displaystyle Z_{t}} are two correlated Wiener processes with correlation coefficient {\displaystyle -1<\rho <1}:
{\displaystyle dW_{t}\,dZ_{t}=\rho \,dt}
The constant parameters {\displaystyle \beta ,\;\alpha } satisfy the conditions {\displaystyle 0\leq \beta \leq 1,\;\alpha \geq 0}.
{\displaystyle \alpha } is a volatility-like parameter for the volatility. {\displaystyle \rho } is the instantaneous correlation between the underlying and its volatility. The initial volatility {\displaystyle \sigma _{0}} controls the height of the ATM implied volatility level. Both the correlation {\displaystyle \rho } and {\displaystyle \beta } control the slope of the implied skew. The volatility of volatility {\displaystyle \alpha } controls its curvature.
The above dynamics is a stochastic version of the CEV model with the skewness parameter {\displaystyle \beta }: in fact, it reduces to the CEV model if {\displaystyle \alpha =0}. The parameter {\displaystyle \alpha } is often referred to as the volvol, and its meaning is that of the lognormal volatility of the volatility parameter {\displaystyle \sigma }.
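A minimal Monte Carlo sketch of these dynamics in Python (parameter values are arbitrary; the forward is advanced with an Euler step while the lognormal volatility process is advanced exactly) could read:

```python
import numpy as np

def simulate_sabr(f0, sigma0, alpha, beta, rho, T, n_steps, n_paths, seed=0):
    """Simulate dF = sigma*F**beta dW, dsigma = alpha*sigma dZ with corr(dW, dZ) = rho."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    f = np.full(n_paths, float(f0))
    sig = np.full(n_paths, float(sigma0))
    for _ in range(n_steps):
        z1 = rng.normal(size=n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.normal(size=n_paths)
        f = np.maximum(f + sig * np.maximum(f, 0.0) ** beta * np.sqrt(dt) * z1, 0.0)
        # exact update of the lognormal volatility process
        sig = sig * np.exp(-0.5 * alpha**2 * dt + alpha * np.sqrt(dt) * z2)
    return f

f_T = simulate_sabr(f0=0.03, sigma0=0.2, alpha=0.4, beta=0.5, rho=-0.3,
                    T=1.0, n_steps=252, n_paths=50_000)
strike = 0.03
print("MC call payoff estimate:", np.mean(np.maximum(f_T - strike, 0.0)))
```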
== Asymptotic solution ==
We consider a European option (say, a call) on the forward {\displaystyle F} struck at {\displaystyle K}, which expires {\displaystyle T} years from now. The value of this option is equal to the suitably discounted expected value of the payoff {\displaystyle \max(F_{T}-K,\;0)} under the probability distribution of the process {\displaystyle F_{t}}.
Except for the special cases of {\displaystyle \beta =0} and {\displaystyle \beta =1}, no closed form expression for this probability distribution is known. The general case can be solved approximately by means of an asymptotic expansion in the parameter {\displaystyle \varepsilon =T\alpha ^{2}}. Under typical market conditions, this parameter is small and the approximate solution is actually quite accurate. Also significantly, this solution has a rather simple functional form, is very easy to implement in computer code, and lends itself well to risk management of large portfolios of options in real time.
It is convenient to express the solution in terms of the implied volatility {\displaystyle \sigma _{\textrm {impl}}} of the option. Namely, we force the SABR model price of the option into the form of the Black model valuation formula. Then the implied volatility, which is the value of the lognormal volatility parameter in Black's model that forces it to match the SABR price, is approximately given by:
{\displaystyle \sigma _{\text{impl}}=\alpha \;{\frac {\log(F_{0}/K)}{D(\zeta )}}\;\left\{1+\left[{\frac {2\gamma _{2}-\gamma _{1}^{2}+1/\left(F_{\text{mid}}\right)^{2}}{24}}\;\left({\frac {\sigma _{0}C(F_{\text{mid}})}{\alpha }}\right)^{2}+{\frac {\rho \gamma _{1}}{4}}\;{\frac {\sigma _{0}C(F_{\text{mid}})}{\alpha }}+{\frac {2-3\rho ^{2}}{24}}\right]\varepsilon \right\},}
where, for clarity, we have set {\displaystyle C\left(F\right)=F^{\beta }}. The formula is undefined when {\displaystyle K=F_{0}}, so it is replaced by its limit as {\displaystyle K\to F_{0}}, which amounts to replacing the factor {\displaystyle {\frac {\log(F_{0}/K)}{D(\zeta )}}} by its limiting value {\displaystyle \sigma _{0}C(F_{0})/(\alpha F_{0})}.
The value {\displaystyle F_{\text{mid}}} denotes a conveniently chosen midpoint between {\displaystyle F_{0}} and {\displaystyle K} (such as the geometric average {\displaystyle {\sqrt {F_{0}K}}} or the arithmetic average {\displaystyle \left(F_{0}+K\right)/2}). We have also set
{\displaystyle \zeta ={\frac {\alpha }{\sigma _{0}}}\;\int _{K}^{F_{0}}{\frac {dx}{C(x)}}={\frac {\alpha }{\sigma _{0}(1-\beta )}}\;\left(F_{0}{}^{1-\beta }-K^{1-\beta }\right),}
and
{\displaystyle \gamma _{1}={\frac {C'(F_{\text{mid}})}{C(F_{\text{mid}})}}={\frac {\beta }{F_{\text{mid}}}}\;,}
{\displaystyle \gamma _{2}={\frac {C''(F_{\text{mid}})}{C(F_{\text{mid}})}}=-{\frac {\beta (1-\beta )}{\left(F_{\text{mid}}\right)^{2}}}\;.}
The function {\displaystyle D\left(\zeta \right)} entering the formula above is given by
{\displaystyle D(\zeta )=\log \left({\frac {{\sqrt {1-2\rho \zeta +\zeta ^{2}}}+\zeta -\rho }{1-\rho }}\right).}
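A direct transcription of the formulas above into Python (using the geometric-average midpoint; assuming β < 1 so that ζ is well defined, and using the limiting value of the leading factor at the money) might look like the following sketch:

```python
import numpy as np

def sabr_lognormal_vol(f0, k, t, sigma0, alpha, beta, rho):
    """Lognormal (Black) implied volatility from the asymptotic formula quoted above."""
    f_mid = np.sqrt(f0 * k)                 # geometric-average midpoint
    c = f_mid ** beta                       # C(F_mid) = F_mid**beta
    gamma1 = beta / f_mid
    gamma2 = -beta * (1.0 - beta) / f_mid**2
    eps = t * alpha**2
    correction = ((2*gamma2 - gamma1**2 + 1.0/f_mid**2) / 24.0 * (sigma0*c/alpha)**2
                  + rho*gamma1/4.0 * sigma0*c/alpha
                  + (2.0 - 3.0*rho**2) / 24.0)
    if np.isclose(f0, k):
        ratio = sigma0 * f0**(beta - 1.0) / alpha   # limit of log(F0/K)/D(zeta) at the money
    else:
        zeta = alpha / (sigma0 * (1.0 - beta)) * (f0**(1.0-beta) - k**(1.0-beta))
        d = np.log((np.sqrt(1.0 - 2.0*rho*zeta + zeta**2) + zeta - rho) / (1.0 - rho))
        ratio = np.log(f0 / k) / d
    return alpha * ratio * (1.0 + correction * eps)

print(sabr_lognormal_vol(f0=0.03, k=0.025, t=1.0, sigma0=0.2,
                         alpha=0.4, beta=0.5, rho=-0.3))
```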
Alternatively, one can express the SABR price in terms of the Bachelier's model. Then the implied normal volatility can be asymptotically computed by means of the following expression:
{\displaystyle \sigma _{\text{impl}}^{\text{n}}=\alpha \;{\frac {F_{0}-K}{D(\zeta )}}\;\left\{1+\left[{\frac {2\gamma _{2}-\gamma _{1}^{2}}{24}}\;\left({\frac {\sigma _{0}C(F_{\text{mid}})}{\alpha }}\right)^{2}+{\frac {\rho \gamma _{1}}{4}}\;{\frac {\sigma _{0}C(F_{\text{mid}})}{\alpha }}+{\frac {2-3\rho ^{2}}{24}}\right]\varepsilon \right\}.}
It is worth noting that the normal SABR implied volatility is generally somewhat more accurate than the lognormal implied volatility.
The approximation accuracy can be further improved, and the degree of arbitrage reduced, if the equivalent volatility under the CEV model with the same {\displaystyle \beta } is used for pricing options.
== SABR for the negative rates ==
A SABR model extension for negative interest rates that has gained popularity in recent years is the shifted SABR model, where the shifted forward rate is assumed to follow a SABR process
{\displaystyle dF_{t}=\sigma _{t}(F_{t}+s)^{\beta }\,dW_{t},}
{\displaystyle d\sigma _{t}=\alpha \sigma _{t}\,dZ_{t},}
for some positive shift {\displaystyle s}.
Since shifts are included in market quotes, and there is an intuitive soft boundary for how negative rates can become, shifted SABR has become market best practice to accommodate negative rates.
The SABR model can also be modified to cover negative interest rates by:
{\displaystyle dF_{t}=\sigma _{t}|F_{t}|^{\beta }\,dW_{t},}
{\displaystyle d\sigma _{t}=\alpha \sigma _{t}\,dZ_{t},}
for {\displaystyle 0\leq \beta \leq 1/2} and a free boundary condition for {\displaystyle F=0}. Its exact solution for the zero-correlation case, as well as an efficient approximation for the general case, are available. An obvious drawback of this approach is the a priori assumption of potentially highly negative interest rates via the free boundary.
== Arbitrage problem in the implied volatility formula ==
Although the asymptotic solution is very easy to implement, the density implied by the approximation is not always arbitrage-free, especially not for very low strikes (it becomes negative or the density does not integrate to one).
One possibility to "fix" the formula is to use the stochastic collocation method and to project the corresponding implied, ill-posed, model onto a polynomial of an arbitrage-free variable, e.g. a normal one. This guarantees equality in probability at the collocation points, while the generated density is arbitrage-free. Using the projection method, analytic European option prices are available and the implied volatilities stay very close to those initially obtained by the asymptotic formula.
Another possibility is to rely on a fast and robust PDE solver on an equivalent expansion of the forward PDE, that preserves numerically the zero-th and first moment, thus guaranteeing the absence of arbitrage.
== Extensions ==
The SABR model can be extended by assuming its parameters to be time-dependent. This however complicates the calibration procedure. An advanced calibration method of the time-dependent SABR model is based on so-called "effective parameters".
Alternatively, Guerrero and Orlando show that a time-dependent local stochastic volatility (SLV) model can be reduced to a system of autonomous PDEs that can be solved using the heat kernel, by means of the Wei-Norman factorization method and Lie algebraic techniques. Explicit solutions obtained by said techniques are comparable to traditional Monte Carlo simulations allowing for shorter time in numerical computations.
== Simulation ==
As the stochastic volatility process follows a geometric Brownian motion, its exact simulation is straightforward. However, the simulation of the forward asset process is not a trivial task. Taylor-based simulation schemes are typically considered, like Euler–Maruyama or Milstein. Recently, novel methods have been proposed for the almost exact Monte Carlo simulation of the SABR model. Extensive studies for SABR model have recently been considered.
For the normal SABR model ({\displaystyle \beta =0} with no boundary condition at {\displaystyle F=0}), a closed-form simulation method is known.
== See also ==
Volatility (finance)
Stochastic volatility
Risk-neutral measure
== References ==
== Further reading ==
Hagan, Patrick; Lesniewski, Andrew; Woodward, Diana (2005-03-22). "Probability Distribution in the SABR Model of Stochastic Volatility" (PDF). Archived from the original (PDF) on 2021-03-08. Retrieved 2022-04-30.
Bartlett, Bruce (February 2006). "Hedging under SABR Model" (PDF). Wilmott. Archived from the original (PDF) on 2020-12-30. Retrieved 2022-04-30.
Hagan, Patrick; Lesniewski, Andrew (2008-04-30). "LIBOR market model with SABR style stochastic volatility" (PDF). Archived from the original (PDF) on 2022-03-03. Retrieved 2022-04-30.
Hagan, Patrick S.; Kumar, Deep; Lesniewski, Andrew S.; Woodward, Diana E. (2014-01-29). "Arbitrage Free SABR". Wilmott. Vol. 2014, no. 69. pp. 60–75. doi:10.1002/wilm.10290. Retrieved 2022-04-30.
Obloj, Jan (2008-03-18). "Fine Tune Your Smile – Correction to Hagan et al". arXiv:0708.0998 [q-fin.CP].
West, Graeme. "A summary of the approaches to the SABR model for equity derivatives smile". Riskworx. Archived from the original on 2015-09-14. Retrieved 2022-04-30.
Henry-Labordere, Pierre (2005-02-15). "Unifying the BGM and SABR models: a short ride in hyperbolic geometry". arXiv:physics/0602102.
Jordan, Richard; Tier, Charles (2011-05-17). "Asymptotic Approximations to CEV and SABR Models". SSRN 1850709.
"SABR calibration". 2012-12-26. Archived from the original on 2016-05-27. Retrieved 2022-04-30.
Antonov, Alexandre; Spector, Michael (2012-03-23). "Advanced Analytics for the SABR Model". SSRN 2026350.
Antonov, Alexandre; Konikov, Michael; Spector, Michael (2019-05-02). Modern SABR Analytics: Formulas and Insights for Quants, Former Physicists and Mathematicians (Springer Briefs in Quantitative Finance) 1st ed. doi:10.1007/978-3-030-10656-0. ISBN 978-3-030-10655-3. ISSN 2192-7014. S2CID 182484805. / Press Release, New York, NY – June 24, 2019
Gulisashvili, Archil; Horvath, Blanka; Jacquier, Antoine (Jack) (2016-11-22). "Mass at Zero in the Uncorrelated SABR Model and Implied Volatility Asymptotics". SSRN 2563510. | Wikipedia/SABR_volatility_model |
In finance, the Heston model, named after Steven L. Heston, is a mathematical model that describes the evolution of the volatility of an underlying asset. It is a stochastic volatility model: such a model assumes that the volatility of the asset is not constant, nor even deterministic, but follows a random process.
== Mathematical formulation ==
The Heston model assumes that St, the price of the asset, is determined by a stochastic process,
{\displaystyle dS_{t}=\mu S_{t}\,dt+{\sqrt {\nu _{t}}}S_{t}\,dW_{t}^{S},}
where the volatility {\displaystyle {\sqrt {\nu _{t}}}} is given by a Feller square-root or CIR process,
{\displaystyle d\nu _{t}=\kappa (\theta -\nu _{t})\,dt+\xi {\sqrt {\nu _{t}}}\,dW_{t}^{\nu },}
and {\displaystyle W_{t}^{S},W_{t}^{\nu }} are Wiener processes (i.e., continuous random walks) with correlation ρ. The value {\displaystyle \nu _{t}}, being the square of the volatility, is called the instantaneous variance.
The model has five parameters:
{\displaystyle \nu _{0}}, the initial variance.
{\displaystyle \theta }, the long variance, or long-run average variance of the price; as t tends to infinity, the expected value of νt tends to θ.
{\displaystyle \rho }, the correlation of the two Wiener processes.
{\displaystyle \kappa }, the rate at which νt reverts to θ.
{\displaystyle \xi }, the volatility of the volatility, or 'vol of vol', which determines the variance of νt.
If the parameters obey the following condition (known as the Feller condition) then the process {\displaystyle \nu _{t}} is strictly positive:
{\displaystyle 2\kappa \theta >\xi ^{2}.}
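A minimal Monte Carlo sketch of these dynamics (an Euler scheme with full truncation of the variance, which is one common discretisation choice; the parameter values are arbitrary and satisfy the Feller condition) might be:

```python
import numpy as np

def simulate_heston(s0, v0, mu, kappa, theta, xi, rho, T, n_steps, n_paths, seed=0):
    """Euler full-truncation simulation of the Heston SDEs quoted above."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, float(s0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.normal(size=n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.normal(size=n_paths)
        v_pos = np.maximum(v, 0.0)  # full truncation keeps the variance usable even if v dips below 0
        s = s * np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v = v + kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    return s, np.maximum(v, 0.0)

s_T, v_T = simulate_heston(s0=100.0, v0=0.04, mu=0.0, kappa=1.5, theta=0.04,
                           xi=0.3, rho=-0.7, T=1.0, n_steps=252, n_paths=50_000)
print("mean terminal price:", s_T.mean())
```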
== Risk-neutral measure ==
See Risk-neutral measure for the complete article
A fundamental concept in derivatives pricing is the risk-neutral measure; this is explained in further depth in the above article. For our purposes, it is sufficient to note the following:
To price a derivative whose payoff is a function of one or more underlying assets, we evaluate the expected value of its discounted payoff under a risk-neutral measure.
A risk-neutral measure, also known as an equivalent martingale measure, is one which is equivalent to the real-world measure, and which is arbitrage-free: under such a measure, the discounted price of each of the underlying assets is a martingale. See Girsanov's theorem.
In the Black-Scholes and Heston frameworks (where filtrations are generated from a linearly independent set of Wiener processes alone), any equivalent measure can be described in a very loose sense by adding a drift to each of the Wiener processes.
By selecting certain values for the drifts described above, we may obtain an equivalent measure which fulfills the arbitrage-free condition.
Consider a general situation where we have {\displaystyle n} underlying assets and a linearly independent set of {\displaystyle m} Wiener processes. The set of equivalent measures is isomorphic to Rm, the space of possible drifts. Consider the set of equivalent martingale measures to be isomorphic to a manifold {\displaystyle M} embedded in Rm; initially, consider the situation where we have no assets and {\displaystyle M} is isomorphic to Rm.
Now consider each of the underlying assets as providing a constraint on the set of equivalent measures, as its expected discount process must be equal to a constant (namely, its initial value). By adding one asset at a time, we may consider each additional constraint as reducing the dimension of {\displaystyle M} by one. Hence we can see that in the general situation described above, the dimension of the set of equivalent martingale measures is {\displaystyle m-n}.
In the Black-Scholes model, we have one asset and one Wiener process. The dimension of the set of equivalent martingale measures is zero; hence it can be shown that there is a single value for the drift, and thus a single risk-neutral measure, under which the discounted asset {\displaystyle e^{-\rho t}S_{t}} will be a martingale.
In the Heston model, we still have one asset (volatility is not considered to be directly observable or tradeable in the market) but we now have two Wiener processes - the first in the Stochastic Differential Equation (SDE) for the stock price and the second in the SDE for the variance of the stock price. Here, the dimension of the set of equivalent martingale measures is one; there is no unique risk-free measure.
This is of course problematic; while any of the risk-free measures may theoretically be used to price a derivative, it is likely that each of them will give a different price. In theory, however, only one of these risk-free measures would be compatible with the market prices of volatility-dependent options (for example, European calls, or more explicitly, variance swaps). Hence we could add a volatility-dependent asset; by doing so, we add an additional constraint, and thus choose a single risk-free measure which is compatible with the market. This measure may be used for pricing.
== Implementation ==
The use of the Fourier transform to value options was shown by Carr and Madan.
A discussion of the implementation of the Heston model was given by Kahl and Jäckel.
A derivation of closed-form option prices for the time-dependent Heston model was presented by Benhamou et al.
A derivation of closed-form option prices for the double Heston model was given by Christoffersen et al. and by Gauthier and Possamai.
An extension of the Heston model with stochastic interest rates was given by Grzelak and Oosterlee.
An expression of the characteristic function of the Heston model that is both numerically continuous and easily differentiable with respect to the parameters was introduced by Cui et al.
The use of the model in a local stochastic volatility context was given by Van Der Weijst.
An explicit solution of the Heston price equation in terms of the volatility was developed by Kouritzin. This can be combined with known weak solutions for the volatility equation and Girsanov's theorem to produce explicit weak solutions of the Heston model. Such solutions are useful for efficient simulation.
High precision reference prices are available in a blog post by Alan Lewis.
There are few known parameterisations of the volatility surface based on the Heston model (Schonbusher, SVI and gSVI).
== Calibration ==
The calibration of the Heston model is often formulated as a least squares problem, with the objective function minimizing the squared difference between the prices observed in the market and those calculated from the model.
The prices are typically those of vanilla options. Sometimes the model is also calibrated to the variance swap term-structure as in Guillaume and Schoutens. Yet another approach is to include forward start options, or barrier options as well, in order to capture the forward smile.
Under the Heston model, the price of vanilla options is given analytically, but requires a numerical method to compute the integral. Le Floc'h summarized the various quadratures applied and proposed an efficient adaptive Filon quadrature.
Calibration usually requires the gradient of the objective function with respect to the model parameters. This gradient was traditionally computed with a finite difference approximation, which is less accurate, less efficient and less elegant than an analytical gradient; an insightful expression for the analytical gradient became available only when a new representation of the characteristic function was introduced by Cui et al. in 2017. Another possibility is to resort to automatic differentiation. For example, the tangent mode of algorithmic differentiation may be applied using dual numbers in a straightforward manner.
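A structural sketch of such a least-squares calibration in Python is given below; heston_call_price is a hypothetical stand-in for any Heston vanilla pricer (for example, one based on the characteristic function), so the optimiser call is left commented out and the quoted prices are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def heston_call_price(strike, maturity, params):
    """Hypothetical placeholder for a Heston vanilla pricer; not implemented here."""
    raise NotImplementedError

def residuals(x, quotes):
    # x = (v0, theta, kappa, xi, rho); quotes = [(strike, maturity, market_price), ...]
    return np.array([heston_call_price(k, t, x) - p for k, t, p in quotes])

quotes = [(90.0, 1.0, 14.2), (100.0, 1.0, 8.1), (110.0, 1.0, 4.0)]  # made-up market prices
x0 = [0.04, 0.04, 1.5, 0.3, -0.7]                                   # initial parameter guess
# With a real pricer in place, the calibration would be run as:
# fit = least_squares(residuals, x0, args=(quotes,),
#                     bounds=([0, 0, 0, 0, -1], [1, 1, 10, 5, 1]))
```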
== See also ==
Stochastic volatility
Risk-neutral measure (another name for the equivalent martingale measure)
Girsanov's theorem
Martingale (probability theory)
SABR volatility model
MATLAB code for implementation by Kahl, Jäckel and Lord
== References ==
Damghani, Babak Mahdavi; Kos, Andrew (2013). "De-arbitraging with a weak smile: Application to skew risk". Wilmott. 2013 (1): 40–49. doi:10.1002/wilm.10201. S2CID 154646708.
Dell'Era, Mario (2014). "Closed Form Solution for Heston PDE by Geometrical Transformations". 4 (6): 793–807.
Queueing theory is the mathematical study of waiting lines, or queues. A queueing model is constructed so that queue lengths and waiting time can be predicted. Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service.
Queueing theory has its origins in research by Agner Krarup Erlang, who created models to describe the system of incoming calls at the Copenhagen Telephone Exchange Company. These ideas were seminal to the field of teletraffic engineering and have since seen applications in telecommunications, traffic engineering, computing, project management, and particularly industrial engineering, where they are applied in the design of factories, shops, offices, and hospitals.
== Spelling ==
The spelling "queueing" over "queuing" is typically encountered in the academic research field. In fact, one of the flagship journals of the field is Queueing Systems.
== Description ==
Queueing theory is one of the major areas of study in the discipline of management science. Through management science, businesses are able to solve a variety of problems using different scientific and mathematical approaches. Queueing analysis is the probabilistic analysis of waiting lines, and thus the results, also referred to as the operating characteristics, are probabilistic rather than deterministic. The probability that n customers are in the queueing system, the average number of customers in the queueing system, the average number of customers in the waiting line, the average time spent by a customer in the total queuing system, the average time spent by a customer in the waiting line, and finally the probability that the server is busy or idle are all of the different operating characteristics that these queueing models compute. The overall goal of queueing analysis is to compute these characteristics for the current system and then test several alternatives that could lead to improvement. Computing the operating characteristics for the current system and comparing the values to the characteristics of the alternative systems allows managers to see the pros and cons of each potential option. These systems help in the final decision making process by showing ways to increase savings, reduce waiting time, improve efficiency, etc. The main queueing models that can be used are the single-server waiting line system and the multiple-server waiting line system, which are discussed further below. These models can be further differentiated depending on whether service times are constant or undefined, the queue length is finite, the calling population is finite, etc.
== Single queueing nodes ==
A queue or queueing node can be thought of as nearly a black box. Jobs (also called customers or requests, depending on the field) arrive to the queue, possibly wait some time, take some time being processed, and then depart from the queue.
However, the queueing node is not quite a pure black box since some information is needed about the inside of the queueing node. The queue has one or more servers which can each be paired with an arriving job. When the job is completed and departs, that server will again be free to be paired with another arriving job.
An analogy often used is that of the cashier at a supermarket. Customers arrive, are processed by the cashier, and depart. Each cashier processes one customer at a time, and hence this is a queueing node with only one server. A setting where a customer will leave immediately if the cashier is busy when the customer arrives, is referred to as a queue with no buffer (or no waiting area). A setting with a waiting zone for up to n customers is called a queue with a buffer of size n.
=== Birth-death process ===
The behaviour of a single queue (also called a queueing node) can be described by a birth–death process, which describes the arrivals and departures from the queue, along with the number of jobs currently in the system. If k denotes the number of jobs in the system (either being serviced or waiting if the queue has a buffer of waiting jobs), then an arrival increases k by 1 and a departure decreases k by 1.
The system transitions between values of k by "births" and "deaths", which occur at the arrival rates {\displaystyle \lambda _{i}} and the departure rates {\displaystyle \mu _{i}} for each job {\displaystyle i}. For a queue, these rates are generally considered not to vary with the number of jobs in the queue, so a single average rate of arrivals/departures per unit time is assumed. Under this assumption, this process has an arrival rate of {\displaystyle \lambda ={\text{avg}}(\lambda _{1},\lambda _{2},\dots ,\lambda _{k})} and a departure rate of {\displaystyle \mu ={\text{avg}}(\mu _{1},\mu _{2},\dots ,\mu _{k})}.
==== Balance equations ====
The steady state equations for the birth-and-death process, known as the balance equations, are as follows. Here {\displaystyle P_{n}} denotes the steady state probability to be in state n.
{\displaystyle \mu _{1}P_{1}=\lambda _{0}P_{0}}
{\displaystyle \lambda _{0}P_{0}+\mu _{2}P_{2}=(\lambda _{1}+\mu _{1})P_{1}}
{\displaystyle \lambda _{n-1}P_{n-1}+\mu _{n+1}P_{n+1}=(\lambda _{n}+\mu _{n})P_{n}}
The first two equations imply
{\displaystyle P_{1}={\frac {\lambda _{0}}{\mu _{1}}}P_{0}}
and
{\displaystyle P_{2}={\frac {\lambda _{1}}{\mu _{2}}}P_{1}+{\frac {1}{\mu _{2}}}(\mu _{1}P_{1}-\lambda _{0}P_{0})={\frac {\lambda _{1}}{\mu _{2}}}P_{1}={\frac {\lambda _{1}\lambda _{0}}{\mu _{2}\mu _{1}}}P_{0}.}
By mathematical induction,
{\displaystyle P_{n}={\frac {\lambda _{n-1}\lambda _{n-2}\cdots \lambda _{0}}{\mu _{n}\mu _{n-1}\cdots \mu _{1}}}P_{0}=P_{0}\prod _{i=0}^{n-1}{\frac {\lambda _{i}}{\mu _{i+1}}}.}
The condition
{\displaystyle \sum _{n=0}^{\infty }P_{n}=P_{0}+P_{0}\sum _{n=1}^{\infty }\prod _{i=0}^{n-1}{\frac {\lambda _{i}}{\mu _{i+1}}}=1}
leads to
{\displaystyle P_{0}={\frac {1}{1+\sum _{n=1}^{\infty }\prod _{i=0}^{n-1}{\frac {\lambda _{i}}{\mu _{i+1}}}}}}
which, together with the equation for {\displaystyle P_{n}} ({\displaystyle n\geq 1}), fully describes the required steady state probabilities.
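A minimal numerical sketch of these steady-state formulas (truncating the state space at a finite n, which is an approximation when the chain is infinite; the rate values are arbitrary) is:

```python
import numpy as np

def birth_death_stationary(lam, mu, n_max):
    """Stationary distribution of a birth-death chain truncated at n_max.

    lam[i] is the birth rate out of state i (i = 0..n_max-1) and mu[i] is the
    death rate out of state i+1, matching P_n = P_0 * prod_{i<n} lam[i]/mu[i+1]."""
    p = np.ones(n_max + 1)
    for n in range(1, n_max + 1):
        p[n] = p[n - 1] * lam[n - 1] / mu[n - 1]
    return p / p.sum()   # normalise so the probabilities sum to one

# Example: constant rates, i.e. an M/M/1 queue truncated at 50 jobs
lam = np.full(50, 0.8)
mu = np.full(50, 1.0)
p = birth_death_stationary(lam, mu, 50)
print(p[:5])
```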
=== Kendall's notation ===
Single queueing nodes are usually described using Kendall's notation in the form A/S/c where A describes the distribution of durations between each arrival to the queue, S the distribution of service times for jobs, and c the number of servers at the node. For an example of the notation, the M/M/1 queue is a simple model where a single server serves jobs that arrive according to a Poisson process (where inter-arrival durations are exponentially distributed) and have exponentially distributed service times (the M denotes a Markov process). In an M/G/1 queue, the G stands for "general" and indicates an arbitrary probability distribution for service times.
=== Example analysis of an M/M/1 queue ===
Consider a queue with one server and the following characteristics:
{\displaystyle \lambda }: the arrival rate (the reciprocal of the expected time between each customer arriving, e.g. 10 customers per second)
{\displaystyle \mu }: the reciprocal of the mean service time (the expected number of consecutive service completions per the same unit time, e.g. per 30 seconds)
n: the parameter characterizing the number of customers in the system
{\displaystyle P_{n}}: the probability of there being n customers in the system in steady state
Further, let {\displaystyle E_{n}} represent the number of times the system enters state n, and {\displaystyle L_{n}} represent the number of times the system leaves state n. Then {\displaystyle \left\vert E_{n}-L_{n}\right\vert \in \{0,1\}} for all n. That is, the number of times the system leaves a state differs by at most 1 from the number of times it enters that state, since it will either return into that state at some time in the future ({\displaystyle E_{n}=L_{n}}) or not ({\displaystyle \left\vert E_{n}-L_{n}\right\vert =1}).
When the system arrives at a steady state, the arrival rate should be equal to the departure rate.
Thus the balance equations
{\displaystyle \mu P_{1}=\lambda P_{0}}
{\displaystyle \lambda P_{0}+\mu P_{2}=(\lambda +\mu )P_{1}}
{\displaystyle \lambda P_{n-1}+\mu P_{n+1}=(\lambda +\mu )P_{n}}
imply
{\displaystyle P_{n}={\frac {\lambda }{\mu }}P_{n-1},\ n=1,2,\ldots }
The fact that
{\displaystyle P_{0}+P_{1}+\cdots =1}
leads to the geometric distribution formula
{\displaystyle P_{n}=(1-\rho )\rho ^{n}}
where {\displaystyle \rho ={\frac {\lambda }{\mu }}<1}.
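A small numeric illustration of this result (the utilisation value is arbitrary; the mean number in system, ρ/(1 − ρ), follows from the geometric distribution above) is:

```python
rho = 0.8                                   # utilisation lambda/mu, must be < 1
p = [(1 - rho) * rho**n for n in range(6)]  # P_0 .. P_5
print("P_0..P_5:", [round(x, 4) for x in p])

L = rho / (1 - rho)                         # mean number of customers in the system
print("mean number in system:", L)
```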
=== Simple two-equation queue ===
A common basic queueing system is attributed to Erlang and is a modification of Little's Law. Given an arrival rate λ, a dropout rate σ, and a departure rate μ, length of the queue L is defined as:
{\displaystyle L={\frac {\lambda -\sigma }{\mu }}.}
Assuming an exponential distribution for the rates, the waiting time W can be defined as the proportion of arrivals that are served. This is equal to the exponential survival rate of those who do not drop out over the waiting period, giving:
{\displaystyle {\frac {\mu }{\lambda }}=e^{-W{\mu }}}
The second equation is commonly rewritten as:
{\displaystyle W={\frac {1}{\mu }}\ln {\frac {\lambda }{\mu }}}
The two-stage one-box model is common in epidemiology.
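A small worked example of these two equations (the rate values are arbitrary) is:

```python
import math

lam, sigma, mu = 30.0, 5.0, 10.0   # arrival, dropout and departure rates per unit time

L = (lam - sigma) / mu             # queue length
W = math.log(lam / mu) / mu        # waiting time, from mu/lam = exp(-W*mu)
print("queue length:", L)
print("waiting time:", W)
```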
== History ==
In 1909, Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on what would now be called queueing theory. He modeled the number of telephone calls arriving at an exchange by a Poisson process and solved the M/D/1 queue in 1917 and M/D/k queueing model in 1920. In Kendall's notation:
M stands for "Markov" or "memoryless", and means arrivals occur according to a Poisson process
D stands for "deterministic", and means jobs arriving at the queue require a fixed amount of service
k describes the number of servers at the queueing node (k = 1, 2, 3, ...)
If the node has more jobs than servers, then jobs will queue and wait for service.
The M/G/1 queue was solved by Felix Pollaczek in 1930, a solution later recast in probabilistic terms by Aleksandr Khinchin and now known as the Pollaczek–Khinchine formula.
After the 1940s, queueing theory became an area of research interest to mathematicians. In 1953, David George Kendall solved the GI/M/k queue and introduced the modern notation for queues, now known as Kendall's notation. In 1957, Pollaczek studied the GI/G/1 using an integral equation. John Kingman gave a formula for the mean waiting time in a G/G/1 queue, now known as Kingman's formula.
Leonard Kleinrock worked on the application of queueing theory to message switching in the early 1960s and packet switching in the early 1970s. His initial contribution to this field was his doctoral thesis at the Massachusetts Institute of Technology in 1962, published in book form in 1964. His theoretical work published in the early 1970s underpinned the use of packet switching in the ARPANET, a forerunner to the Internet.
The matrix geometric method and matrix analytic methods have allowed queues with phase-type distributed inter-arrival and service time distributions to be considered.
Systems with coupled orbits are an important part in queueing theory in the application to wireless networks and signal processing.
Modern day application of queueing theory concerns among other things product development where (material) products have a spatiotemporal existence, in the sense that products have a certain volume and a certain duration.
Problems such as performance metrics for the M/G/k queue remain an open problem.
== Service disciplines ==
Various scheduling policies can be used at queueing nodes:
First in, first out
Also called first-come, first-served (FCFS), this principle states that customers are served one at a time and that the customer that has been waiting the longest is served first.
Last in, first out
This principle also serves customers one at a time, but the customer with the shortest waiting time will be served first. Also known as a stack.
Processor sharing
Service capacity is shared equally between customers.
Priority
Customers with high priority are served first. Priority queues can be of two types: non-preemptive (where a job in service cannot be interrupted) and preemptive (where a job in service can be interrupted by a higher-priority job). No work is lost in either model.
Shortest job first
The next job to be served is the one with the smallest size.
Preemptive shortest job first
The next job to be served is the one with the smallest original size.
Shortest remaining processing time
The next job to serve is the one with the smallest remaining processing requirement.
Service facility
Single server: customers line up and there is only one server
Several parallel servers (single queue): customers line up and there are several servers
Several parallel servers (several queues): there are many counters and customers can decide for which to queue
Unreliable server
Server failures occur according to a stochastic (random) process (usually Poisson) and are followed by setup periods during which the server is unavailable. The interrupted customer remains in the service area until server is fixed.
Customer waiting behavior
Balking: customers decide not to join the queue if it is too long
Jockeying: customers switch between queues if they think they will get served faster by doing so
Reneging: customers leave the queue if they have waited too long for service
Arriving customers not served (either due to the queue having no buffer, or due to balking or reneging by the customer) are also known as dropouts. The average rate of dropouts is a significant parameter describing a queue.
== Queueing networks ==
Queue networks are systems in which multiple queues are connected by customer routing. When a customer is serviced at one node, it can join another node and queue for service, or leave the network.
For networks of m nodes, the state of the system can be described by an m–dimensional vector (x1, x2, ..., xm) where xi represents the number of customers at each node.
The simplest non-trivial networks of queues are called tandem queues. The first significant results in this area were Jackson networks, for which an efficient product-form stationary distribution exists and for which mean value analysis (which allows average metrics such as throughput and sojourn times to be computed) applies. If the total number of customers in the network remains constant, the network is called a closed network and has been shown to also have a product–form stationary distribution by the Gordon–Newell theorem. This result was extended to the BCMP network, where a network with very general service times, regimes, and customer routing is shown to also exhibit a product–form stationary distribution. The normalizing constant can be calculated with Buzen's algorithm, proposed in 1973.
Networks of customers have also been investigated, such as Kelly networks, where customers of different classes experience different priority levels at different service nodes. Another type of network is the G-network, first proposed by Erol Gelenbe in 1993; these networks do not assume exponential time distributions like the classic Jackson network.
=== Routing algorithms ===
In discrete-time networks where there is a constraint on which service nodes can be active at any time, the max-weight scheduling algorithm chooses a service policy to give optimal throughput in the case that each job visits only a single service node. In the more general case where jobs can visit more than one node, backpressure routing gives optimal throughput. A network scheduler must choose a queueing algorithm, which affects the characteristics of the larger network.
=== Mean-field limits ===
Mean-field models consider the limiting behaviour of the empirical measure (proportion of queues in different states) as the number of queues m approaches infinity. The impact of other queues on any given queue in the network is approximated by a differential equation. The deterministic model converges to the same stationary distribution as the original model.
=== Heavy traffic/diffusion approximations ===
In a system with high occupancy rates (utilisation near 1), a heavy traffic approximation can be used to approximate the queueing length process by a reflected Brownian motion, Ornstein–Uhlenbeck process, or more general diffusion process. The number of dimensions of the Brownian process is equal to the number of queueing nodes, with the diffusion restricted to the non-negative orthant.
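As a rough illustration of the diffusion approximation, the queue-length process of a heavily loaded single-server queue behaves like a one-dimensional Brownian motion reflected at zero. The sketch below uses illustrative parameters (not from the source) and a standard Euler discretization with per-step reflection, with the drift and variance that a heavy-traffic M/M/1 approximation would suggest.

```python
import random

def reflected_brownian_path(lam=0.95, mu=1.0, horizon=1000.0, dt=0.01, seed=7):
    """Euler scheme for a Brownian motion with drift lam - mu and variance lam + mu,
    reflected at zero -- a heavy-traffic approximation of the M/M/1 queue length."""
    rng = random.Random(seed)
    drift, sigma = lam - mu, (lam + mu) ** 0.5
    x, path = 0.0, []
    for _ in range(int(horizon / dt)):
        x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        x = max(x, 0.0)            # reflect at the boundary of the non-negative orthant
        path.append(x)
    return path

path = reflected_brownian_path()
print("time-average of the approximated queue length:", sum(path) / len(path))
```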
=== Fluid limits ===
Fluid models are continuous deterministic analogs of queueing networks obtained by taking the limit when the process is scaled in time and space, allowing heterogeneous objects. This scaled trajectory converges to a deterministic equation which allows the stability of the system to be proven. It is known that a queueing network can be stable but have an unstable fluid limit.
=== Queueing Applications ===
Queueing theory finds widespread application in computer science and information technology. In networking, for instance, queues are integral to routers and switches, where packets queue up for transmission. By applying queueing theory principles, designers can optimize these systems, ensuring responsive performance and efficient resource utilization.
Beyond the technological realm, queueing theory is relevant to everyday experiences. Whether waiting in line at a supermarket or for public transportation, understanding the principles of queueing theory provides valuable insights into optimizing these systems for enhanced user satisfaction. Nearly everyone encounters queues at some point, and what an individual customer experiences as an inconvenience is often the arrangement that serves the system as a whole most efficiently.
Queueing theory, a discipline rooted in applied mathematics and computer science, is a field dedicated to the study and analysis of queues, or waiting lines, and their implications across a diverse range of applications. This theoretical framework has proven instrumental in understanding and optimizing the efficiency of systems characterized by the presence of queues. The study of queues is essential in contexts such as traffic systems, computer networks, telecommunications, and service operations.
Queueing theory delves into various foundational concepts, with the arrival process and service process being central. The arrival process describes the manner in which entities join the queue over time, often modeled using stochastic processes such as the Poisson process. The efficiency of queueing systems is gauged through key performance metrics, including the average queue length, average wait time, and system throughput; these metrics provide insight into the system's functionality and guide decisions aimed at enhancing performance and reducing wait times.
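For the simplest case of Poisson arrivals and exponential service at a single server (the M/M/1 queue), the metrics mentioned above have closed-form expressions. The short sketch below uses illustrative rates (not from the source) and relies on Little's law to relate the mean number in the system to the mean time spent in it.

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics for arrival rate lam and service rate mu (requires lam < mu)."""
    rho = lam / mu                    # server utilization
    L = rho / (1 - rho)               # mean number of customers in the system
    W = L / lam                       # mean time in system (Little's law: L = lam * W)
    Lq = rho ** 2 / (1 - rho)         # mean number waiting in the queue
    Wq = Lq / lam                     # mean waiting time in the queue
    return {"utilization": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq, "throughput": lam}

# Example: 4 arrivals per minute served at a rate of 5 per minute (hypothetical values).
print(mm1_metrics(lam=4.0, mu=5.0))
```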
== See also ==
== References ==
== Further reading ==
Gross, Donald; Carl M. Harris (1998). Fundamentals of Queueing Theory. Wiley. ISBN 978-0-471-32812-4. Online
Zukerman, Moshe (2013). Introduction to Queueing Theory and Stochastic Teletraffic Models (PDF). arXiv:1307.2968.
Deitel, Harvey M. (1984) [1982]. An introduction to operating systems (revisited first ed.). Addison-Wesley. p. 673. ISBN 978-0-201-14502-1. chap.15, pp. 380–412
Gelenbe, Erol; Isi Mitrani (2010). Analysis and Synthesis of Computer Systems. World Scientific 2nd Edition. ISBN 978-1-908978-42-4.
Newell, Gordon F. (1 June 1971). Applications of Queueing Theory. Chapman and Hall.
Leonard Kleinrock, Information Flow in Large Communication Nets, (MIT, Cambridge, May 31, 1961) Proposal for a Ph.D. Thesis
Leonard Kleinrock. Information Flow in Large Communication Nets (RLE Quarterly Progress Report, July 1961)
Leonard Kleinrock. Communication Nets: Stochastic Message Flow and Delay (McGraw-Hill, New York, 1964)
Kleinrock, Leonard (2 January 1975). Queueing Systems: Volume I – Theory. New York: Wiley Interscience. pp. 417. ISBN 978-0-471-49110-1.
Kleinrock, Leonard (22 April 1976). Queueing Systems: Volume II – Computer Applications. New York: Wiley Interscience. pp. 576. ISBN 978-0-471-49111-8.
Lazowska, Edward D.; John Zahorjan; G. Scott Graham; Kenneth C. Sevcik (1984). Quantitative System Performance: Computer System Analysis Using Queueing Network Models. Prentice-Hall, Inc. ISBN 978-0-13-746975-8.
Jon Kleinberg; Éva Tardos (30 June 2013). Algorithm Design. Pearson. ISBN 978-1-292-02394-6.
== External links ==
Teknomo's Queueing theory tutorial and calculators
Virtamo's Queueing Theory Course
Myron Hlynka's Queueing Theory Page
LINE: a general-purpose engine to solve queueing models | Wikipedia/Queueing_model |
Earth has a human population of over 8.2 billion as of 2024, with an overall population density of 50 people per km2 (130 per sq. mile). Nearly 60% of the world's population lives in Asia, with more than 2.8 billion in the countries of India and China combined. The percentage shares of China, India and the rest of South Asia in the world population have remained at similar levels for the last few thousand years of recorded history.
The world's population is predominantly urban and suburban, and there has been significant migration toward cities and urban centers. The urban population jumped from 29% in 1950 to 55.3% in 2018. Interpolating from the United Nations prediction that the world would be 51.3% urban by 2010, Ron Wimberley, Libby Morris and Gregory Fulkerson estimated that 23 May 2007 was the first day in history on which the urban population outnumbered the rural population. India and China are the most populous countries, as the birth rate has consistently dropped in wealthy countries and until recently remained high in poorer countries. Tokyo is the largest urban agglomeration in the world.
As of 2024, the total fertility rate of the world is estimated at 2.25 children per woman, which is slightly below the global average for the replacement fertility rate of approximately 2.33 (as of 2003). However, world population growth is unevenly distributed, with the total fertility rate ranging from the world's lowest of 0.8 in South Korea, to the highest of 6.7 in Niger. The United Nations estimated an annual population increase of 1.14% for the year of 2000.
The current world population growth is approximately 1.09%. People under 15 years of age made up over a quarter of the world population (25.18%), and people age 65 and over made up nearly ten percent (9.69%) in 2021. The world's literacy rate has increased dramatically in the last 40 years, from 66.7% in 1979 to 86.3% today. Lower literacy levels are mostly attributable to poverty and are found mostly in South Asia and Sub-Saharan Africa.
The world population more than tripled during the 20th century from about 1.65 billion in 1900 to 5.97 billion in 1999. It reached the 2 billion mark in 1927, the 3 billion mark in 1960, 4 billion in 1974, and 5 billion in 1987. The overall population of the world is approximately 8 billion as of November 2022. Currently, population growth is fastest among low wealth, least developed countries. The UN projects a world population of 9.15 billion in 2050, a 32.7% increase from 6.89 billion in 2010.
== History ==
Historical migration of human populations begins with the movement of Homo erectus out of Africa across Eurasia about a million years ago. Homo sapiens appear to have occupied all of Africa about 300,000 years ago, moved out of Africa 50,000 – 60,000 years ago, and had spread across Australia, Asia and Europe by 30,000 years BC. Migration to the Americas took place 20,000 to 15,000 years ago, and by 2,000 years ago, most of the Pacific Islands were colonized.
Until c. 10,000 years ago, humans lived as hunter-gatherers. They generally lived in small nomadic groups known as band societies. The advent of agriculture prompted the Neolithic Revolution, when access to food surplus led to the formation of permanent human settlements. About 6,000 years ago, the first proto-states developed in Mesopotamia, Egypt's Nile Valley and the Indus Valley. Early human settlements were dependent on proximity to water and, depending on the lifestyle, other natural resources used for subsistence. But humans have a great capacity for altering their habitats by means of technology.
Since 1800, the human population has increased from one billion to over eight billion. In 2004, some 2.5 billion out of 6.3 billion people (39.7%) lived in urban areas. In February 2008, the U.N. estimated that half the world's population would live in urban areas by the end of the year. Problems for humans living in cities include various forms of pollution and crime, especially in inner city and suburban slums. Both overall population numbers and the proportion residing in cities are expected to increase significantly in the coming decades.
=== World Population, AD 1–1998 (in thousands) ===
Source: Maddison and others. (University of Groningen).
=== Shares of world population, AD 1–1998 (% of world total) ===
Source: Maddison and others. (University of Groningen).
== Vital statistics ==
The following estimates of global trends in various demographic indicators from 1950 to 2021 are from UN DESA's World Population Prospects 2022. In July 2022, UN DESA published its 2022 World Population Prospects, a biennially-updated database where key demographic indicators are estimated and projected worldwide and on the country and regional level.
Notable events in World demography:
1958–1961 – Great Chinese Famine
1989 – Fall of the Berlin Wall, Revolutions of 1989
2020–2022 – COVID-19
== Current world population and latest projection ==
== Major cities ==
The world contains hundreds of large cities, most of them located in coastal regions. According to the latest official data, the world population is 8,209,580,000 people.[3]
As of 2010, about 3 billion people live in or around urban areas.
The following table shows the populations of the top thirteen urban agglomerations.
== Population density ==
The world's population is over 8 billion and Earth's total surface area (including land and water) is 510 million square kilometres (197 million square miles). Therefore, the worldwide human population density is 8 billion ÷ 510 million km2 (197 million sq mi) = 15.7 people/km2 (41 people/sq mi). If only the Earth's land area of 150 million km2 (58 million sq mi) is taken into account, then human population density increases to 53.3 people/km2 (138 people/sq mi).
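The density arithmetic above can be reproduced directly from the round figures quoted in the text; a short sketch:

```python
population = 8.0e9            # people (approximate figure quoted above)
total_surface_km2 = 510e6     # total Earth surface, land and water
land_area_km2 = 150e6         # land only

print(f"density over total surface: {population / total_surface_km2:.1f} people/km2")  # ~15.7
print(f"density over land only:     {population / land_area_km2:.1f} people/km2")      # ~53.3
```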
Several of the most densely populated territories in the world are city-states, microstates or dependencies. These territories share a relatively small area and a high urbanization level, with an economically specialized city population drawing also on rural resources outside the area, illustrating the difference between high population density and overpopulation.
== Religion ==
The table below lists religions classified by philosophy; however, religious philosophy is not always the determining factor in local practice. Please note that this table includes heterodox movements as adherents to their larger philosophical category, although this may be disputed by others within that category. For example, Cao Đài is listed because it claims to be a separate category from Buddhism, while Hòa Hảo is not, even though they are similar new religious movements.
The population numbers below are computed by a combination of census reports, random surveys (in countries where religion data is not collected in census, for example United States or France), and self-reported attendance numbers, but results can vary widely depending on the way questions are phrased, the definitions of religion used and the bias of the agencies or organizations conducting the survey. Informal or unorganized religions are especially difficult to count. Some organizations may wildly inflate their numbers.
Since the late 19th century, the demographics of religion have changed a great deal. Some countries with a historically large Christian population have experienced a significant decline in the numbers of professed active Christians: see demographics of atheism. Symptoms of the decline in active participation in Christian religious life include declining recruitment for the priesthood and monastic life, as well as diminishing attendance at church. On the other hand, since the 19th century, large areas of sub-Saharan Africa have been converted to Christianity, and this area of the world has the highest population growth rate. In the realm of Western civilization, there has been an increase in the number of people who identify themselves as secular humanists. Despite the decline, Christianity remains the dominant religion in the Western world, where 70% of the population is Christian. In many countries, such as the People's Republic of China, communist governments have discouraged religion, making it difficult to count the actual number of believers. However, after the collapse of communism in numerous countries of Eastern Europe and the former Soviet Union, religious life has been experiencing a resurgence there, in the form of traditional Eastern Christianity. Islam, meanwhile, has gained considerably in the former Soviet republics of Central Asia.
Following is some available data based on the work of the World Christian Encyclopedia:
=== Growth rate of adherents ===
The annual growth in the world population over the same period is 1.41%.
Studies conducted by the Pew Research Center have found that, generally, poorer nations had a larger proportion of citizens who found religion to be very important than richer nations, with the exceptions of the United States and Kuwait.
In the book Shall the Religious Inherit the Earth?, Eric Kaufmann argues that demographic trends point to religious fundamentalists greatly increasing as a share of the population over the next century. Other scholars have argued that this may be a form of "cultural selection" that will affect future demographics due to certain religious groups having high fertility that is unexplained by other factors such as income.
== Marriage ==
The average age at marriage varies greatly from country to country and has varied through time. Women tend to marry earlier than men; their average age at marriage currently ranges from 17.6 years in Niger to 32.4 in Denmark, while for men it ranges from 22.6 in Mozambique to 35.1 in Sweden.
In 2021, 13.3 million babies, or about 10 per cent of the total worldwide, were born to mothers under 20 years old.
== Age structure ==
According to the 2021 CIA World Factbook, around 25% of the world's population is below 15 years of age.
0–14 years: 25.2% (male 1,010,373,278/female 946,624,579)
15–64 years: 65.1% (male 2,562,946,384/female 2,498,562,457)
65 years and over: 9.7% (male 337,244,947/female 415,884,753) (2021 est.)
Median Age – 31 years (male: 30.3 years, female: 31.8 years, 2021 est.)
According to a report by the Global Social Change Research Project, worldwide, the percent of the population age 0–14 declined from 34% in 1950 to 27% in 2010. The elderly population (60+) increased during the same period from 8% to 11%.
== Population growth rate ==
Globally, the growth rate of the human population has been declining since peaking in 1962 and 1963 at 2.20% per annum. In 2009, the estimated annual growth rate was 1.1%. The CIA World Factbook gives the world annual birthrate, mortality rate, and growth rate as 1.915%, 0.812%, and 1.092% respectively. The last one hundred years have seen a rapid increase in population due to medical advances and a massive increase in agricultural productivity made possible by the Green Revolution.
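The figures quoted above are roughly consistent with the crude rate of natural increase, which is simply the crude birth rate minus the crude death rate (migration nets to zero at the global level). A quick check using the percentages in the text:

```python
birth_rate = 1.915    # per cent per year (equivalently 19.15 per 1,000)
death_rate = 0.812    # per cent per year
natural_increase = birth_rate - death_rate
print(f"natural increase: {natural_increase:.3f}% per year")  # ~1.10%, close to the quoted 1.092%
```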
The actual annual growth in the number of humans fell from its peak of 88.0 million in 1989, to a low of 73.9 million in 2003, after which it rose again to 75.2 million in 2006. Since then, annual growth has declined. In 2009, the human population increased by 74.6 million, which is projected to fall steadily to about 41 million per annum in 2050, at which time the population will have increased to about 9.2 billion. Each region of the globe has seen great reductions in growth rate in recent decades, though growth rates remain above 2% in some countries of the Middle East and Sub-Saharan Africa, and also in South Asia, Southeast Asia, and Latin America.
Some countries experienced negative population growth, especially in Eastern Europe, mainly due to low fertility rates, high death rates and emigration. In Southern Africa, growth is slowing due to the high number of HIV-related deaths. Some Western European countries might also encounter negative population growth. Japan's population began decreasing in 2005.
The world population increased by 1.423 billion, or 27%, between 1990 and 2008. In absolute terms, the increase was largest in India (290 million) and China (192 million); in relative terms, population growth was highest in Qatar (174%) and the United Arab Emirates (140%).
In 2022, the world population reached 8 billion. The latest projections by the United Nations suggest that the global population could grow to around 8.5 billion in 2030, 9.7 billion in 2050 and 10.4 billion in 2100.
More than half of the projected increase in global population up to 2050 will be concentrated in just eight countries: the Democratic Republic of the Congo, Egypt, Ethiopia, India, Nigeria, Pakistan, the Philippines and Tanzania.
== Births ==
In 2021, most births worldwide occurred in two regions: sub-Saharan Africa (29 per cent of global births), the region with the highest fertility level, and Central and Southern Asia (28 per cent of global births); Eastern and South-Eastern Asia accounted for a further 18 per cent.
== Birth count ==
The 10 countries with the highest estimated number of births in 2021 according to the World Population Prospects 2022 of the United Nations Department of Economic and Social Affairs.
== Birth rate ==
As of 2009, the average birth rate (unclear whether this is the unweighted average of per-country rates or the rate for the world population as a whole) for the whole world is 19.95 per year per 1,000 total population, a decline of 0.48 births per 1,000 from 2003's world birth rate of 20.43 per 1,000 total population.
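The parenthetical caveat matters because the two averages can differ substantially. A toy illustration with two entirely hypothetical countries (values invented for the example):

```python
# Two hypothetical countries: (population, crude birth rate per 1,000)
countries = [(1_000_000_000, 10.0), (50_000_000, 40.0)]

unweighted = sum(rate for _, rate in countries) / len(countries)
weighted = sum(pop * rate for pop, rate in countries) / sum(pop for pop, _ in countries)

print(f"unweighted mean of country rates: {unweighted:.2f} per 1,000")   # 25.00
print(f"population-weighted world rate:   {weighted:.2f} per 1,000")     # ~11.43
```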
According to the CIA – The World Factbook, the country with the highest birth rate currently is Niger at 51.26 births per 1,000 people, and the country with the lowest birth rate is Japan at 7.64 births per 1,000 people. Hong Kong, a Special Administrative Region of China, is at 7.42 births per 1,000 people. Compared with the 1950s, when the world birth rate was about 36 births per 1,000 people, the birth rate has declined by 16 births per 1,000 people. In July 2011, the U.S. National Institutes of Health announced that the adolescent birth rate continues to decline.
Birth rates vary even within the same geographic areas. In Europe, as of July 2011, Ireland's birth rate is 16.5 per 1,000, which is 3.5 per 1,000 higher than the next-ranked country, the UK. France has a birth rate of 12.8 per 1,000 while Sweden is at 12.3 per 1,000. In July 2011, the UK's Office for National Statistics (ONS) announced a 2.4% increase in live births in the UK in 2010 alone, the highest in the UK in 40 years. By contrast, the birth rate in Germany is only 8.3 per 1,000, which is so low that both the UK and France, which have significantly smaller populations, produced more births in 2010. Birth rates also vary within the same geographic area, based on different demographic groups. For example, in April 2011, the U.S. CDC announced that the birth rate for women over the age of 40 in the U.S. rose between 2007 and 2009, while it fell among every other age group during the same time span. In August 2011, Taiwan's government announced that its birth rate declined in the previous year, despite the fact that it implemented a host of approaches to encourage its citizens to have babies.
Birth rates ranging from 10 to 20 births per 1,000 are considered low, while rates from 40 to 50 births per 1,000 are considered high. There are problems associated with both an extremely high birth rate and an extremely low birth rate. High birth rates can put stress on government welfare and family programs that support a youthful population. Additional problems faced by a country with a high birth rate include educating a growing number of children, creating jobs for these children when they enter the workforce, and dealing with the environmental effects that a large population can produce. Low birth rates can put stress on the government to provide adequate senior welfare systems, and also on families who must support the elders themselves; there will be fewer children and a smaller working-age population to support the constantly growing aging population.
=== Highest and lowest birth rates (annual births per 1000 persons) ===
The ten countries with the highest and lowest crude birth rate, according to the 2018 and 2022 CIA World Factbook estimates, are:
(These lists include independent countries only, not regions.)
== Death rate ==
The ten countries with the highest and lowest crude death rate, according to the 2018 and 2022 CIA World Factbook estimates, are:
See list of countries by mortality rate for worldwide statistics.
According to the World Health Organization, the 10 leading causes of death in 2002 were:
12.6% Ischemic heart disease
9.7% Cerebrovascular disease
6.8% Lower respiratory infections
4.9% HIV/AIDS
4.8% Chronic obstructive pulmonary disease
3.2% Diarrhoeal diseases
2.7% Tuberculosis
2.2% Trachea/bronchus/lung cancers
2.2% Malaria
2.1% Road traffic accidents
Causes of death vary greatly between first and third world countries.
According to Jean Ziegler (the United Nations Special Rapporteur on the Right to Food for 2000 to March 2008), mortality due to malnutrition accounted for 58% of the total mortality in 2006: "In the world, approximately 62 millions people, all causes of death combined, die each year. In 2006, more than 36 millions died of hunger or diseases due to deficiencies in micronutrients".
Of the roughly 150,000 people who died each day across the globe, about two-thirds—100,000 per day—died of age-related causes in 2001, according to an article which counts all deaths "due to causes that kill hardly anyone under the age of 40" as age-related. In industrialized nations, the proportion was even higher according to that article, reaching 90%.
== Total fertility rate ==
The total fertility rate is the average number of children born per woman. In 2021, high fertility levels were found in sub-Saharan Africa (4.6 births per woman), Oceania excluding Australia and New Zealand (3.1), Northern Africa and Western Asia (2.8), and Central and Southern Asia (2.3).
There is an inverse correlation between income and fertility, wherein developed countries usually have a much lower fertility rate. Various fertility factors may be involved, such as education and urbanization. In developed countries, mortality rates are low, birth control is understood and easily accessible, and the costs of raising children (education, clothing, feeding, and social amenities) are often deemed very high. With wealth, contraception becomes affordable. However, in countries like Iran, where contraception was made artificially affordable before the economy accelerated, the birth rate also declined rapidly. Further, longer periods of time spent in higher education often mean women have children later in life. The female labor participation rate also has a substantial negative impact on fertility, although this effect is neutralized among Nordic and liberal countries.
In undeveloped countries on the other hand, families desire children for their labour and as caregivers for their parents in old age. Fertility rates are also higher due to the lack of access to contraceptives, generally lower levels of female education, patriarchal culture and lower rates of female employment in industry.
Total fertility rates by region, 2010–2015
Total fertility rate is the number of children born per woman.
== Health ==
The average number of hospital beds per 1,000 population is 2.94. It is highest in Switzerland (18.3) and lowest in Mexico (1.1).
96% of the urban population has access to improved drinking water, while only 78% of rural inhabitants do; overall, an average of 87% of the world population has access to improved drinking water.
76% of the urban population has access to sanitation facilities, while only 45% of the rural population does; worldwide, an average of 39% of people do not have access to sanitation facilities.
As of 2009, there are an estimated 33.3 million people living with HIV/AIDS, which is approximately 0.8% of the world population, and there have been an estimated 1.8 million deaths attributed to HIV/AIDS.
As of 2010, 925 million people are undernourished.
Life Expectancy at Birth:
total population: 71.4 years
male: 69.1 years
female: 73.8 years (2015 est.)
== Sex ratio ==
The value for the entire world population is 1.02 males/female, with 1.07 at birth, 1.06 for those under 15, 1.02 for those between 15 and 64, and 0.78 for those over 65.
The Northern Mariana Islands have the highest female ratio with 0.77 males per female. Qatar has the highest male ratio, with 2.87 males/female. For the group aged below 15, Sierra Leone has the highest female ratio with 0.96 males/female, and Georgia and China are tied for the highest male ratio with 1.13 males/female (according to the 2006 CIA World Factbook).
The "First World" G7 members all have a gender ratio in the range of 0.95–0.98 for the total population, of 1.05–1.07 at birth, of 1.05–1.06 for the group below 15, of 1.00–1.04 for the group aged 15–64, and of 0.70–0.75 for those over 65.
Countries on the Arabian Peninsula tend to have a "natural" ratio of about 1.05 at birth but a very high ratio of males for those over 65 (Saudi Arabia 1.13, United Arab Emirates 2.73, Qatar 2.84), indicating either an above-average mortality rate for females or a below-average mortality for males, or, more likely in this case, a large population of aging male guest workers. Conversely, countries of Eastern Europe (the Baltic states, Belarus, Ukraine, Russia) tend to have a "normal" ratio at birth but a very low ratio of males among those over 65 (Russia 0.46, Latvia 0.48, Ukraine 0.52); similarly, Armenia has a far above average male ratio at birth (1.17), and a below-average male ratio above 65 (0.67). This effect may be caused by emigration and higher male mortality as result of higher post-Soviet era deaths; it may also be related to the enormous (by western standards) rate of alcoholism in the former Soviet states. Another possible contributory factor is an aging population, with a higher than normal proportion of relatively elderly people: we recall that due to higher differential mortality rates the ratio of males to females reduces for each year of age.
== Unemployment rate ==
8.7% (2010 est.)
8.2% (2009 est.)
note: 30% combined unemployment and underemployment in many non-industrialized countries; developed countries typically 4%–12% unemployment (2007 est.)
== Languages ==
Worldwide, English is used widely as a lingua franca and is currently the dominant international language. The world's largest language by native speakers is Mandarin Chinese, a first language of around 1,100 million people, or 12.44% of the world population, predominantly in Greater China. Spanish is spoken by around 330 to 400 million people, predominantly in the Americas and Spain. Hindustani (Hindi-Urdu) is spoken by about 370 to 420 million speakers, mostly in India and Pakistan.
Arabic is spoken by around 350 million people predominantly in Arab world. Bengali is spoken by around 250 million people worldwide, predominantly in Bangladesh and India. Portuguese is spoken by about 230 million speakers in Portugal, Brazil, East Timor, and Southern Africa.
There are numerous other languages, grouped into nine major families:
Indo-European languages 46% (Europe, Western Asia, South Asia, North Asia, North America, South America, and Oceania)
Sino-Tibetan languages 21% (East Asia, Mainland Southeast Asia, and South Asia)
Niger–Congo languages 6.4% (Sub-Saharan Africa)
Afroasiatic languages 6.0% (North Africa to Horn of Africa, and Western Asia)
Austronesian languages 5.9% (Oceania, Madagascar, and Maritime Southeast Asia)
Dravidian languages 3.7% (South Asia)
Altaic languages (controversial combination of Turkic, Mongolic, and Tungusic families) 2.3% (Central Asia, North Asia (Siberia), and Anatolia)
Austroasiatic languages 1.7% (Mainland Southeast Asia)
Kra–Dai languages 1.3% (Southeast Asia)
There are also hundreds of non-verbal sign languages.
== Education ==
Total population: 83.7% over the age of 15 can read and write, 88.3% male and 79.2% female
note: over two-thirds of the world's 793 million illiterate adults are found in only eight countries (Bangladesh, China, Egypt, Ethiopia, India, Indonesia, Nigeria, and Pakistan); of all the illiterate adults in the world, two-thirds are women; extremely low literacy rates are concentrated in three regions, the Arab states, South and West Asia, and Sub-Saharan Africa, where around one-third of the men and half of all women are illiterate (2005–09 est.)
As of 2008, the school life expectancy (primary to tertiary education) for a man or woman is 11 years.
== See also ==
== Notes ==
== References == | Wikipedia/Demographics_of_the_world |
Infant mortality is the death of an infant before the infant's first birthday. The occurrence of infant mortality in a population can be described by the infant mortality rate (IMR), which is the number of deaths of infants under one year of age per 1,000 live births. Similarly, the child mortality rate, also known as the under-five mortality rate, compares the death rate of children up to the age of five.
In 2013, the leading cause of infant mortality in the United States was birth defects. Other leading causes of infant mortality include birth asphyxia, pneumonia, neonatal infection, diarrhea, malaria, measles, malnutrition, congenital malformations, term birth complications such as abnormal presentation of the fetus, umbilical cord prolapse, or prolonged labor. One of the most common preventable causes of infant mortality is smoking during pregnancy. Lack of prenatal care, alcohol consumption during pregnancy, and drug use also cause complications that may result in infant mortality. Many situational factors contribute to the infant mortality rate, such as the pregnant woman's level of education, environmental conditions, political infrastructure, and level of medical support. Improving sanitation, access to clean drinking water, immunization against infectious diseases, and other public health measures can help reduce rates of infant mortality.
In 1990, 8.8 million infants younger than one-year-old died globally out of 12.6 million child deaths under the age of five. More than 60% of the deaths of children under-five are seen as avoidable with low-cost measures such as continuous breastfeeding, vaccinations, and improved nutrition. The global under-five mortality rate in 1950 was 22.5%, which dropped to 4.5% in 2015. Over the same period, the infant mortality rate declined from 65 deaths per 1,000 live births to 29 deaths per 1,000. Globally, 5.4 million children died before their fifth birthday in 2017; by 2021 that number had dropped to 5 million children.
The child mortality rate (not the infant mortality rate) was an indicator used to monitor progress towards the Fourth Goal of the Millennium Development Goals of the United Nations for the year 2015. A reduction in child mortality was established as a target in the Sustainable Development Goals—Goal Number 3: Ensure healthy lives and promote well-being for all at all ages. As of January 2022, an analysis of 200 countries found 133 already meeting the SDG target, with 13 others trending towards meeting the target by 2030. Throughout the world, the infant mortality rate (IMR) fluctuates drastically, and according to Biotechnology and Health Sciences, education and life expectancy in a country are the leading indicators of IMR. This study was conducted across 135 countries over the course of 11 years, with the continent of Africa having the highest infant mortality rate of any region studied, with 68 deaths per 1,000 live births.
== Classification ==
Infant mortality rate (IMR) is the number of deaths per 1,000 live births of children under one year of age. The rate for a given region is the number of children dying under one year of age, divided by the number of live births during the year, multiplied by 1,000.
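As a concrete example of the rate defined above, a region with a given number of live births and infant deaths in a year would compute its IMR as follows; the figures are purely illustrative and not from the source.

```python
def infant_mortality_rate(infant_deaths, live_births):
    """Deaths of children under one year of age per 1,000 live births."""
    return infant_deaths / live_births * 1000

# Illustrative figures only: 3,200 infant deaths out of 450,000 live births in a year.
print(f"IMR: {infant_mortality_rate(3_200, 450_000):.1f} per 1,000 live births")  # ~7.1
```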
Forms of infant mortality:
Perinatal mortality is late fetal death (22 weeks gestation to birth) or death of a newborn up to one week postpartum.
Neonatal mortality is death occurring within 28 days postpartum. Neonatal death is often attributed to inadequate access to basic medical care, during pregnancy and after delivery. This accounts for 40–60% of infant mortality in developing countries.
Postneonatal mortality is the death of children aged 29 days to one year. The major contributors to postneonatal death are malnutrition, infectious disease, pregnancy complications, sudden infant death syndrome, and problems in the home environment.
== Causes ==
Causes of infant mortality, or direct causes of death, differ from contributions to the IMR, as contributing factors raise the risk of death but do not directly cause death. Environmental and social barriers that prevent access to basic medical resources contribute to an increased infant mortality rate; 86% of infant deaths are caused by infections, premature births, complications during delivery, perinatal asphyxia, and birth injuries. Many of these common causes are preventable with low-cost measures. While 99% of infant deaths occur in developing countries, the greatest percentage reduction in infant mortality occurs in countries that already have low rates of infant mortality. In the United States, a primary source of infant mortality risk is infant birth weight, with lower birth weights increasing the risk; the causes of low birth weight include socioeconomic, psychological, behavioral, and environmental factors.
=== Main causes ===
There are three main leading causes of infant mortality: conditions related to preterm birth, congenital anomalies, and SIDS (sudden infant death syndrome). In North Carolina between 1980 and 1984, 37.5% of infant deaths were due to prematurity, while congenital anomalies accounted for 17.4% and SIDS for 12.9%.
==== Premature birth ====
Premature, or preterm birth (PTB), is defined as birth before a gestational age of 37 weeks, as opposed to full term birth at 40 weeks. This can be further sub-divided in various ways, one being: "mild preterm (32–36 weeks), very preterm (28–31 weeks) and extremely preterm (<28 weeks)". A lower gestational age increases the risk of infant mortality.
Between 1990 and 2010, prematurity was the second leading cause of worldwide mortality for neonates and children under the age of five. The overall PTB rate in 2010 was 11.1% (about 15 million preterm births) worldwide and was highest in low- to middle-income countries in sub-Saharan Africa and south Asia (60% of all PTBs), compared with high-income countries in Europe or the United States. Low-income countries also have limited resources to care for the needs of preterm infants, which increases the risk of infant mortality. The survival rate in these countries for infants born before 28 weeks of gestation is 10%, compared with a 90% survival rate in high-income countries. In the United States, the period from 1980 to 2000 saw a decrease in the total number of infant mortality cases, despite a significant increase in premature births.
Based on distinct clinical presentations, there are three main subgroups of preterm births: those that occur due to spontaneous premature labor, those that occur due to spontaneous membrane (amniotic sac) rupture, and those that are medically induced. Both spontaneous factors are viewed to be a result of similar causes; hence, two main classifications remain: spontaneous and medically induced causes. The risk of spontaneous PTB increases with "extremes of maternal age (both young and old), short inter-pregnancy intervals, multiple gestations, assisted reproductive technology, prior PTB, family history, substance abuse, cigarette use, low maternal socioeconomic status, late or no prenatal care, low maternal prepregnancy weight, bacterial vaginosis, periodontal disease, and poor pregnancy weight gain." Medically induced preterm birth is often conducted when continuing pregnancy poses significant risks to the pregnant parent or fetus; the most common causes include preeclampsia, diabetes, maternal medical conditions, fetal distress, or developmental problems. Despite these risk factors, the underlying causes of premature infant death are often unknown, and approximately 65% of all cases are not associated with any known risk factor.
Infant mortality caused by premature birth is mainly attributed to developmental immaturity, which impacts multiple organ systems in the infant's body. The main body systems affected include the respiratory system, which may result in pulmonary hypoplasia, respiratory distress syndrome, bronchopulmonary dysplasia (a chronic lung disease), and apnea. Other body systems that fully develop at a later gestational age include the gastrointestinal system, the skin, the immune system, the cardiovascular system, and the hematologic system. Poor development of these systems increases the risk of infant mortality.
Understanding the biological causes and predictors of PTB is important for identifying and preventing premature birth and infant mortality. While the exact mechanisms responsible for inducing premature birth are often unknown, many of the underlying risk factors are associated with inflammation. Approximately "80% of preterm births that occur at <1,000 g or at <28 to 30 weeks of gestation" have been associated with inflammation. Biomarkers of inflammation, including C-reactive protein, ferritin, various interleukins, chemokines, cytokines, defensins, and bacteria, have been shown to be associated with increased risks of infection or inflammation-related preterm birth. Biological fluids have been utilized to analyze these markers in hopes of understanding the pathology of preterm birth, but they are not always useful if not acquired at the appropriate gestational time-frame. For example, biomarkers such as fibronectin are accurate predictors of premature birth at over 24 weeks of gestation but have poor predictive values before then. Additionally, understanding the risks associated with different gestational ages is a helpful determiner of Gestational age-specific mortality.
==== Sudden infant death syndrome (SIDS) ====
Sudden infant death syndrome (SIDS) is defined as the sudden death of an infant less than one year of age with no cause detected after a thorough investigation. SIDS is more common in Western countries. The United States Centers for Disease Control and Prevention report SIDS to be the leading cause of death in infants aged one month to one year of life. Even though researchers are not sure what causes SIDS, they have found that putting babies to sleep on their backs, instead of their stomachs, lowers the risk. Campaigns like Back to Sleep have used this research to lower the SIDS death rate by 50%. Though the exact cause is unknown, the "triple-risk model" presents three factors that together may contribute to SIDS: smoking while pregnant, the age of the infant, and stress from conditions such as prone sleeping, co-sleeping, overheating, and covering of the face or head. In the early 1990s, it was argued that immunizations could contribute to an increased risk of SIDS; however, more recent evidence supports the idea that vaccinations reduce the risk of SIDS.
In the United States, approximately 3,500 infant deaths per year are sleep-related, a category that includes SIDS. To reduce sleep-related infant deaths, the American Academy of Pediatrics recommends providing infants with safe sleeping environments, breastfeeding, and immunizing according to the recommended immunization schedule. It also recommends offering a pacifier at sleep times and avoiding exposure to smoke, alcohol, and illicit drugs during and after pregnancy.
==== Congenital malformations ====
Congenital malformations are present at birth and include conditions such as cleft lip and palate, Down Syndrome, and heart defects. Some congenital malformations may be more likely when the mother consumes alcohol, but they can also be caused by genetics or unknown factors.
Congenital malformations have had a significant impact on infant mortality, but malnutrition and infectious diseases remain the main causes of death in less developed countries. For example, in the Caribbean and Latin America in the 1980s, congenital malformations accounted for only 5% of infant deaths, while malnutrition and infectious diseases accounted for 7% to 27% of infant deaths. In more developed countries, such as the United States, there was a rise in infant deaths due to congenital malformations, mostly heart and central nervous system problems. In the late 20th century, infant deaths from heart conditions decreased; from 1979 to 1997, there was a 39% decline.
=== Medicine and biology ===
Causes of infant mortality and deaths that are related to medical conditions include: low birth weight, sudden infant death syndrome, malnutrition, congenital malformations, infectious diseases, and low income for health care, including neglected tropical diseases.
The American Academy of Pediatrics recommends that infants receive multiple doses of vaccines such as diphtheria–tetanus–acellular pertussis vaccine, Haemophilus influenzae type b (Hib) vaccine, hepatitis B (HepB) vaccine, inactivated polio vaccine (IPV), and pneumococcal vaccine (PCV). Research conducted by the Institute of Medicine's Immunization Safety Review Committee concluded that there is no relationship between these vaccines and the risk of SIDS in infants (pp. 77–78).
==== Low birth weight ====
Low birth weight makes up 60–80% of the infant mortality rate in developing countries. The New England Journal of Medicine stated that "The lowest mortality rates occur among infants weighing 3,000 to 3,500 g (6.6 to 7.7 lb). For infants born weighing 2,500 g (5.5 lb) or less, the mortality rate rapidly increases with decreasing weight, and most of the infants weighing 1,000 g (2.2 lb) or less die. As compared with normal-birth-weight infants, those with low weight at birth are almost 40 times more likely to die in the neonatal period; for infants with very low weight at birth the relative risk of neonatal death is almost 200 times greater." Infant mortality due to low birth weight is usually a direct cause stemming from other medical complications such as preterm birth, poor maternal nutritional status, a lack of prenatal care, maternal sickness during pregnancy, and unhygienic home environments. Birth weight and the length of gestation are the two most important predictors of an infant's chances of survival and their overall health.
According to the New England Journal of Medicine, "in the past two decades, the infant mortality rate (deaths under one year of age per thousand live births) in the United States has declined sharply." The rate of low birth weights among African Americans remains twice as high as the rate for white people. Low birth weight, the leading cause of infant deaths, is largely preventable; effective programs to help prevent it combine health care, education, environmental modification, and public policy. Preterm birth is the leading cause of newborn deaths worldwide. Even though America has a higher survival rate for premature infants, the percentage of Americans who deliver prematurely is comparable to that in developing countries. Reasons for this include teenage pregnancy, an increase in pregnancy after the age of 35, an increase in the use of in vitro fertilisation (which increases the risk of multiple births), obesity, and diabetes. Also, pregnant people who do not have access to health care are less likely to visit a doctor, therefore increasing their risk of delivering prematurely.
==== Malnutrition ====
Malnutrition or undernutrition is defined as inadequate intake of nourishment, such as proteins and vitamins, which adversely affects the growth, energy, and development of people all over the world. It is especially prevalent during pregnancy and in infants and children under 5 who live in developing countries within the poorer regions of Africa, Asia, and Latin America. Children are especially vulnerable as they have yet to fully develop a strong immune system and are dependent on their parents to provide the necessary food and nutritional intake. It is estimated that about 3.5 million children die each year as a result of childhood or maternal malnutrition, with stunted growth, low body weight, and low birth weight accounting for about 2.2 million associated deaths. Socioeconomic and environmental factors contribute to malnutrition, as do gender, location, and cultural practices surrounding breastfeeding. It is difficult to assess the most pressing factor as they can intertwine and vary among regions.
Children suffering from malnutrition can become underweight, and experience stunting or wasting. In Africa, the number of stunted children has risen, while Asia has the most children under 5 suffering from wasting. Inadequate nutrients adversely affect physical and cognitive development, increasing susceptibility to severe health problems. Micronutrient deficiency has been linked to anemia, fatigue, blindness, goiter, poor brain development, and death. Malnutrition also decreases the immune system's ability to fight infections, resulting in higher rates of death from diseases such as malaria, respiratory disease, and diarrhea.
Folic acid and iron supplementation during pregnancy are among the ways to combat maternal anemia, including iron-deficiency anemia. Other public health measures used to lower micronutrient deficiencies include adding iodine to salt or drinking water and including vitamin A and multivitamin supplements in the diet. A deficiency of folic acid also causes certain types of anemia (low red blood cell count).
==== Infectious diseases ====
Babies born in low- to middle-income countries in sub-Saharan Africa and southern Asia are at the highest risk of neonatal death. Bacterial infections of the bloodstream, lungs, and the brain's covering (meningitis) are responsible for 25% of neonatal deaths worldwide. Newborns can acquire infections during birth from bacteria present in the birth canal; the person may not be aware of the infection, or they may have an untreated pelvic inflammatory disease or a sexually transmitted disease. These bacteria can also move up the vaginal canal into the amniotic sac surrounding the baby, causing in utero transmission. Maternal blood-borne infection is another route of bacterial infection. Neonatal infection is more likely with the premature rupture of the membranes (PROM) of the amniotic sac.
Seven out of ten childhood deaths are due to infectious diseases like acute respiratory infection, diarrhea, measles, and malaria. Acute respiratory infections such as pneumonia, bronchitis, and bronchiolitis account for 30% of childhood deaths; 95% of pneumonia cases occur in the developing world. Diarrhea is the second-largest cause of childhood mortality in the world, while malaria causes 11% of childhood deaths. Measles is the fifth-largest cause of childhood mortality.
=== Environmental ===
The infant mortality rate is one measure of a nation's health and social conditions. It is a composite of a number of component rates that each have their own separate relationships with each other and with various other social factors. As such, IMR can often be seen as an indicator of the level of socioeconomic disparity within a country.
Organic water pollution is a better indicator of infant mortality than health expenditures per capita. Water contaminated by animal waste houses various pathogens including a host of parasitic and microbial infections. Areas of low socioeconomic status are more prone to inadequate plumbing infrastructure and poorly maintained facilities. Climate and geography often play a role in sanitation conditions. For example, the inaccessibility of clean water exacerbates poor sanitation conditions.
The burning of inefficient fuels doubles the rate of acute respiratory tract infections in children under 5 years old. People who live in areas where particulate matter air pollution is higher tend to have more health problems regardless of age. The short and long-term effects of air pollution are associated with an increased mortality rate, including infant mortality. Air pollution is consistently associated with postnatal mortality due to respiratory effects and sudden infant death syndrome (SIDS). Specifically, air pollution is highly associated with SIDS in the United States during the post-neonatal stage. High infant mortality is exacerbated because newborns are a vulnerable subgroup that is affected by air pollution. Newborns who were born into these environments are no exception, and pregnant women exposed to greater air pollution on a daily basis should be closely watched by their doctors, including after the baby is born. Babies who live in areas with less air pollution have a greater chance of living until their first birthday, meaning babies who live in environments with more air pollution are at greater risk for infant mortality. Areas that have higher air pollution also have a greater chance of having a higher population density, higher crime rates, and lower income levels, all of which can lead to higher infant mortality rates.
A key pollutant in infant mortality rates is carbon monoxide. Carbon monoxide is a colorless, odorless gas that can kill, and is especially dangerous to infants because of their immature respiratory systems.
Another major pollutant that can have detrimental effects on a fetus is second-hand smoke.
[I]n 2006, more than 42,000 Americans died of secondhand smoke-attributable diseases, including more than 41,000 adults and nearly 900 infants. Fully 36% of the infants who died of low birth weight caused by exposure to maternal smoking in utero were black, as were 28% of those dying of respiratory distress syndrome, 25% dying of other respiratory conditions, and 24% dying of sudden infant death syndrome.
Compared with nonsmoking women having their first birth, women who smoked less than one pack of cigarettes per day had a 25% greater risk of mortality, and those who smoked one or more packs per day had a 56% greater risk. Among women having their second or higher birth, smokers experienced 30% greater mortality than nonsmokers.
Modern research in the United States into racial disparities in infant mortality suggests a link between institutionalized racism and high rates of African American infant mortality. In synthesis of this research, it has been observed that "African American infant mortality remains elevated due to the social arrangements that exist between groups and the lifelong experiences responding to the resultant power dynamics of these arrangements."
Notably, infant mortality rates among African Americans do not decline even when their socio-economic status improves. Parker Dominguez at the University of Southern California has made some headway in determining the reasons behind this, claiming black women in the US are more prone to psychological stress than women of other races. Stress is a leading factor in the onset of labor, and therefore high levels of stress during pregnancy could lead to premature births that have the potential to be fatal for the infant.
==== Early childhood trauma ====
Early childhood trauma includes physical, sexual, and psychological abuse of a child from birth to five years old. Trauma in early childhood has an extreme impact over the course of a lifetime and is a significant contributor to infant mortality. Developing organs are fragile; when an infant is shaken, beaten, strangled, or raped, the impact is far more destructive than when the same abuse occurs in a fully developed body. Studies estimate that 1–2 per 100,000 U.S. children are fatally injured annually, and it is reasonable to assume that these statistics underrepresent actual mortality. Almost three-quarters (70.6%) of child fatalities in federal fiscal year 2018 involved children younger than 3 years, and children younger than 1 year accounted for half (49.4%) of all fatalities. In particular, correctly identifying deaths due to neglect is problematic, and children with sudden, unexpected deaths or deaths from apparently unintentional causes often have preventable risk factors that are substantially similar to those in families with maltreatment.
There is a direct relationship between the age at which maltreatment or injury occurs and the risk of death. The younger an infant is, the more dangerous the maltreatment.
Family configuration, child gender, social isolation, lack of support, maternal youth, marital status, poverty, parental adverse childhood experiences, and parenting practices are all thought to contribute to increased risk.
==== Socio-economic factors ====
Social class is a major factor in infant mortality, both historically and today. Between 1912 and 1915, the Children's Bureau in the United States examined data across eight cities and nearly 23,000 live births. They discovered that lower incomes tended to correlate with higher infant mortality: in cases where the father had no income, the rate of infant mortality was 357% higher than that for the highest income earners ($1,250+) (p. 5). Differences between races were also apparent: African-American mothers experience infant mortality at a rate 44% higher than average; however, research indicates that socio-economic factors do not totally account for the racial disparities in infant mortality.
While infant mortality is normally negatively correlated with GDP, there may be some beneficial short-term effects from a recession. A 2009 study in The Economist showed that economic slowdowns reduce air pollution, which results in a lower infant mortality rate. In the late 1970s and early 1980s, the recession's impact on air quality was estimated to have saved around 1,300 US babies. It is only during deep recessions that infant mortality increases. According to Norbert Schady and Marc-François Smitz, recessions when per capita GDP drops by 15% or more increase IMR.
Social class dictates which medical services are available to an individual. Disparities due to socioeconomic factors have been highlighted by advances in medical technology. Developed countries, most notably the United States, have seen a divergence in IMR between those living in poverty who cannot afford medically advanced resources, and those who can.
Developing nations with democratic governments tend to be more responsive to public opinion, social movements, and special interest groups on issues like infant mortality. In contrast, non-democratic governments are more interested in corporate issues than in health issues. Democratic status affects the dependency a nation has on its economic state via exports, investments from multinational corporations, and international lending institutions.
Levels of socioeconomic development and global integration are inversely related to a nation's infant mortality rate, meaning that as they increase, IMR decreases. A nation's internal impact is highly influenced by its position in the global economy, which has adverse effects on the survival of children in developing countries. Countries can experience disproportionate effects from trade and stratification within the global system, which contributes to the global division of labor, and distorts the domestic economies of developing nations. The dependency of developing nations can reduce the rate of economic growth, increase income inequality inter- and intra-nationally, and adversely affect the wellbeing of a nation's population. Collective cooperation between countries plays a role in development policies in the poorer countries of the world.
These economic factors present challenges to governments' public health policies. If the nation's ability to raise its own revenues is compromised, governments will lose funding for their health service programs, including those that aim to decrease infant mortality rates. Less developed countries face higher levels of vulnerability to the possible negative effects of globalization and trade in relation to more developed countries.
Even with a strong economy and economic growth (measured by a country's gross national product), the advances of medical technologies may not be felt by everyone, increasing social disparities. In England, from 2014 to 2017, a rise in infant mortality was disproportionately experienced by the poorest regions, where the previously declining trend was reversed and an additional 24 infant deaths per 100,000 live births occurred annually.
==== War ====
Infant mortality rates correlate with war, political unrest, and government corruption. In most cases, war-affected areas experience a significant increase in infant mortality rates. War during pregnancy, or while a pregnancy is being planned, is not only stressful for the mother and fetus but also has several other detrimental effects.
Many other significant factors influence infant mortality rates in war-torn areas. Health care systems in developing countries in the midst of war often collapse, and obtaining basic medical supplies and care becomes increasingly difficult. During the Yugoslav Wars in the 1990s, Bosnia experienced a 60% decrease in child immunizations. Preventable diseases can quickly become epidemics during war.
Many developing countries rely on foreign aid for basic nutrition, and transport of aid becomes significantly more difficult in times of war. In most situations, the average weight of a population will drop substantially. Expectant mothers are affected even more by a lack of access to food and water. During the Yugoslav Wars in Bosnia, the number of premature babies born increased and the average birth weight decreased.
There have been several instances in recent years of systematic rape as a weapon of war. People who become pregnant as a result of war rape face even more significant challenges in bearing a healthy child. Studies suggest that people who experience sexual violence before or during pregnancy are more likely to experience infant death. Causes of infant mortality after abuse during pregnancy range from physical side effects of the initial trauma to psychological effects that lead to poor adjustment to society. Many people who became pregnant by rape in Bosnia were isolated from their hometowns, making life after childbirth exponentially more difficult.
=== Culture ===
High rates of infant mortality occur in developing countries where financial and material resources are scarce and where there is a high tolerance for infant deaths. In a number of developing countries, certain cultural practices, such as favoring male babies over female babies, are the norm. In developing countries such as Brazil, infant deaths commonly go unrecorded because death certificates are never registered. In Ghana, another cultural barrier has been reported: "besides the obvious, like rutted roads, there are prejudices against wives or newborns leaving the house." This makes it even more difficult for pregnant women and newborns to obtain the treatment that is available to them.
In the United States cultural influences and lifestyle habits can account for some infant deaths. Examples include teenage pregnancy, obesity, diabetes, and smoking. All are possible causes of premature births, which constitute the second-highest cause of infant mortality. According to the Journal of the American Medical Association, "the post neonatal mortality risk (28 to 364 days) was highest among continental Puerto Ricans" compared to non-Hispanic babies. Ethnic differences are accompanied by a higher prevalence of behavioral risk factors and sociodemographic challenges that each ethnic group faces.
==== Male sex favoritism ====
Historically, males have had higher infant mortality rates than females, with the difference depending on environmental, social, and economic conditions. More specifically, males are biologically more vulnerable to infections and to conditions associated with prematurity and development. Before 1970, the main causes of excess male infant mortality were infections and chronic degenerative diseases. However, since 1970, male sex favoritism in certain cultures has led to a decrease in the infant mortality gap between males and females. Also, medical advances have had a greater effect on the survival rate of male infants than of female infants, owing to the initially high infant mortality rate of males.
Genetic components give newborn females a biological advantage in surviving their first year, whereas newborn males have lower chances of surviving infancy. As infant mortality rates decreased globally, the gender ratios changed from males being at a biological disadvantage to females facing a societal disadvantage. Some developing nations have social and cultural patterns that favor boys over girls for their future earning potential. A country's ethnic composition, homogeneous or heterogeneous, can help explain social attitudes and practices, and higher levels of ethnic heterogeneity are a strong predictor of higher infant mortality.
==== Birth spacing ====
Birth spacing is the time between births. Births spaced at least three years apart are associated with the lowest rate of mortality. The longer the interval between births, the lower the risk of complications at birth and of infant, childhood, or maternal mortality. Conception less than six months after a birth, abortion, or miscarriage is associated with higher rates of preterm birth and low birth weight, and also increases the chances of chronic and general undernutrition. In 55 developing countries, 57% of reported pregnancies had birth spaces of less than three years, and 26% of less than two years. Although only 20% of new parents report wanting another birth within two years, only 40% are taking steps, such as family planning, to achieve the birth spacing they want.
Unplanned pregnancies and birth intervals of less than twenty-four months are known to correlate with low birth weights and delivery complications. Also, mothers who are already small in stature tend to deliver smaller than average babies, perpetuating a cycle of being underweight.
== Prevention and outcomes ==
To reduce infant mortality rates across the world, health practitioners, governments, and non-governmental organizations have worked to create institutions, programs, and policies to generate better health outcomes. Current efforts focus on the development of human resources, strengthening health information systems, health service delivery, etc. Improvements in such areas aim to increase regional health systems and aid efforts to reduce mortality rates.
=== Policy ===
Reductions in infant mortality are possible at any stage of a country's development. Rate reductions are evidence that a country is advancing in human knowledge, social institutions, and physical capital. Governments can reduce mortality rates by addressing the combined need for education (such as universal primary education), nutrition, and access to basic maternal and infant health services. Focused policies have the potential to aid those most at risk for infant and childhood mortality, including rural, poor, and migrant populations.
Reducing the chances of babies being born at low birth weights and contracting pneumonia can be accomplished by improving air quality. Improving hygiene can prevent infant mortality. Home-based technology to chlorinate, filter, and use solar disinfection for organic water pollution could reduce cases of diarrhea in children by up to 48%. Improvements in food supplies and sanitation have been shown to work for the most vulnerable populations in the US, including among African Americans.
Promoting behavioral changes, such as hand washing with soap, can significantly reduce the rate of infant mortality from respiratory and diarrheal diseases. According to UNICEF, hand washing with soap before eating and after using the toilet can save children's lives by reducing deaths from diarrhea and acute respiratory infections.
Focusing on preventing preterm and low birth weight deliveries throughout all populations can help eliminate cases of infant mortality and decrease health care disparities within communities. In the United States, these two goals have decreased regional infant mortality rates, but there has yet to be further progress on a national level.
Increasing human resources such as physicians, nurses, and other health professionals will increase the number of skilled attendants and the number of people able to give out immunizations against diseases such as measles. Increasing the number of skilled professionals is correlated with lower maternal, infant, and childhood mortality. With the addition of one physician per 10,000 people, there is a potential for 7.08 fewer infant deaths per 10,000.
In certain parts of the US, specific programs aim to reduce levels of infant mortality. One such program is the "Best Babies Zone" (BBZ), based at the University of California, Berkeley. The BBZ uses the life course approach to address the structural causes of poor birth outcomes and toxic stress in three US neighborhoods. By employing community-generated solutions, the Best Babies Zone's ultimate goal is to achieve health equity in communities that are disproportionately impacted by infant mortality.
=== Prenatal care and maternal health ===
Certain steps can help to reduce the chance of complications during pregnancy. Attending regular prenatal care check-ups will help improve the baby's chances of being delivered in safer conditions and surviving. Additionally, taking supplementation, including folic acid, can help reduce the chances of birth defects, a leading cause of infant mortality. Many countries have instituted mandatory folic acid supplementation in their food supply, which has significantly reduced the occurrence of a birth defect known as spina bifida in newborns. Similarly, the fortification of salt with iodine, called salt iodization, has helped reduce negative birth outcomes associated with low iodine levels during pregnancy.
Abstinence from alcohol can also decrease the chances of harm to the fetus, as drinking any amount of alcohol during pregnancy may lead to fetal alcohol spectrum disorders (FASD) or other alcohol-related birth defects. Tobacco use during pregnancy has also been shown to significantly increase the risk of a preterm or low birth weight birth, both of which are leading causes of infant mortality. Pregnant women should consult with their doctors to best manage any pre-existing health conditions and avoid complications to both their own health and the fetus's. Obese people are at an increased risk of developing complications during pregnancy, including gestational diabetes or pre-eclampsia, and are more likely to experience a preterm birth or have a child with birth defects.
=== Nutrition ===
Appropriate nutrition for newborns and infants can help keep them healthy, and can help avoid health complications during early childhood. The American Academy of Pediatrics recommends exclusively breastfeeding infants for the first 6 months of life, and continuing breastfeeding as other sources of food are introduced through the next 6 months of life (up to 1 year of age). Infants under 6 months of age who are exclusively breastfed have a lower risk of mortality compared to infants who are breastfed part of the time or not at all. For this reason, breast feeding is favored over formula feeding by healthcare professionals.
=== Vaccinations ===
The Centers for Disease Control and Prevention (CDC) defines infants as those 1 month of age to 1 year of age. For these infants, the CDC recommends the following vaccinations: Hepatitis B (HepB), Rotavirus (RV), Haemophilus influenzae type B (Hib), Pneumococcal Conjugate (PCV13), Inactivated Poliovirus (IPV, < 18 yrs), Influenza, Varicella, Measles, Mumps, Rubella (MMR), and Diphtheria, tetanus, acellular pertussis (DTaP, < 7 yrs). Each of these vaccinations is given at a particular age range and in a series of 1 to 3 doses over time, depending on the vaccination.
The efficacy of these vaccinations can be seen soon after their introduction. Following the introduction of the pneumococcal conjugate vaccine in the United States in 2000, the World Health Organization (WHO) reports that studies done in 2004 showed a 57% decline in invasive penicillin-resistant strains of the disease and a 59% reduction in multidrug-resistant strains. This reduction was even greater for children under 2 years of age, with studies finding an 81% reduction in those same strains.
As mentioned in a previous section, sudden infant death syndrome (SIDS) is the leading cause of infant mortality between 1 month and 1 year of age. Immunizations, when given in accordance with proper guidelines, have been shown to reduce the risk of SIDS by 50%. For this reason, the American Academy of Pediatrics (AAP) and the Centers for Disease Control and Prevention (CDC) both recommend immunization in accordance with their guidelines.
=== Socio-economic factors ===
It has been well documented that increased education among mothers, communities, and local health workers results in better family planning, improvements in children's health, and lower rates of children's deaths. High-risk areas, such as Sub-Saharan Africa, have demonstrated that an increase in women's educational attainment leads to a reduction in infant mortality of about 35%. Similarly, coordinated efforts to train community health workers in diagnosis, treatment, malnutrition prevention, reporting, and referral services have reduced mortality in children under 5 by as much as 38%. Public health campaigns centered around the first 1,000 days of life have been successful in providing cost-effective supplemental nutrition programs, as well as assisting young mothers in sanitation, hygiene, and breastfeeding. Increased intake of nutrients and better sanitation habits have a positive impact on health, especially for developing children. Educational attainment and public health campaigns provide the knowledge and means to practice better habits and lead to lower infant mortality rates.
A decrease in GDP results in increased rates of infant mortality. A reduction in household income reduces the amount being spent on food and healthcare, affecting the quality of life, and reduces access to medical services that ensure full development and survival. Likewise, increased household income translates to more access to nutrients and healthcare, reducing the risks associated with malnutrition and infant mortality. Moreover, increased aggregate household incomes will produce better health facilities, water and sewer infrastructures for the entire community.
== Differences in measurement ==
The infant mortality rate correlates very strongly with the likelihood of state failure, and is among the best predictors thereof. IMR is therefore also a useful indicator of a country's level of health (development), and is a component of the physical quality of life index.
The method of calculating IMR often varies widely between countries, as it is based on how they define a live birth and how many premature infants are born in the country. Depending on a nation's live birth criterion, vital registration system, and reporting practices, reporting may be inconsistent or understated. The reported IMR provides one statistic which reflects the standard of living in each nation. Changes in the infant mortality rate "reflect enduring social and technical capacities that become attached to a population". The World Health Organization (WHO) defines a live birth as any infant born demonstrating independent signs of life, including breathing, heartbeat, umbilical cord pulsation or definite movement of voluntary muscles. This definition is used in Austria, and is also used in Germany, but with one slight modification: muscle movement is not considered to be a sign of life. Many countries, including certain European states (e.g. France) and Japan, only count cases where an infant breathes at birth as a live birth, which makes their reported IMR numbers somewhat lower and increases their rates of perinatal mortality. In other countries, the Czech Republic and Bulgaria, for instance, requirements for live birth are even higher.
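For reference, the conventional formula for the infant mortality rate over a given year (a standard demographic definition, stated here for clarity rather than drawn from the sources above) is
{\displaystyle {\text{IMR}}={\frac {\text{deaths of infants under one year of age}}{\text{live births in the same period}}}\times 1000,}
so any change in which births are counted as "live" alters both the numerator and the denominator and shifts the reported rate.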
Although many countries have vital registration systems and specific reporting practices, there are often inaccuracies in the statistics, particularly in rural communities in developing countries. In those communities, alternative methods for calculating the infant mortality rate are used, such as popular death reporting and household surveys. Studies comparing three information sources—official registries, household surveys, and popular reporters—have shown that popular death reporters are the most accurate; they include midwives, gravediggers, coffin builders, priests, and others, essentially the people who knew the most about the child's death. In developing nations, access to vital registries and other government-run systems that record births and deaths is difficult for poor families for several reasons. These struggles force families to take drastic measures, like holding unofficial death ceremonies for their deceased infants. As a result, government statistics inaccurately reflect a nation's infant mortality rate. Popular death reporters have first-hand information, and, provided this information can be collected and collated, can provide reliable, accurate death counts for a nation, as well as meaningful causes of death that can be measured and studied.
UNICEF uses a statistical methodology to account for reporting differences among countries:
UNICEF compiles infant mortality country estimates derived from all sources and methods of estimation obtained either from standard reports, direct estimation from micro data sets, or from UNICEF's yearly exercise. In order to sort out differences between estimates produced from different sources, with different methods, UNICEF developed, in coordination with WHO, the WB and UNSD, an estimation methodology that minimizes the errors embodied in each estimate and harmonize trends along time. Since the estimates are not necessarily the exact values used as input for the model, they are often not recognized as the official IMR estimates used at the country level. However, as mentioned before, these estimates minimize errors and maximize the consistency of trends along time.
Another challenge in comparing infant mortality rates is the practice of counting frail or premature infants who die before the normal due date as miscarriages, or counting those who die during or immediately after childbirth as stillbirths. Therefore, the quality of a country's documentation of perinatal mortality can greatly affect the accuracy of its infant mortality statistics. This point is reinforced by the demographer Ansley Coale, who finds the high ratios of reported stillbirths to infant deaths in the first 24 hours after birth in Hong Kong and Japan dubious. As this pattern is consistent with the high male-to-female sex ratios recorded at birth in those countries, it suggests two things: that many female infants who die in the first 24 hours are misreported as stillbirths rather than infant deaths, and that those countries do not follow WHO recommendations for the reporting of live births versus infant deaths.
Another seemingly paradoxical finding is that when countries with poor medical services introduce new medical centers and services, instead of declining, the reported IMRs often increase for a time. This is mainly because improvement in access to medical care is often accompanied by improvement in the registration of births and deaths. Deaths that might have occurred in a remote or rural area, and not been reported to the government, might now be reported by the new medical personnel or facilities. Thus, even if the new health services reduce the actual IMR, the reported IMR may increase.
The country-to-country variation in child mortality rates is huge, and growing wider despite progress in decreasing the overall IMR. Among the world's roughly 200 nations, only Somalia showed no decrease in the under-5 mortality rate over the past two decades. In 2011 the global rate of under-5 deaths was 51 deaths per 1,000 births. Singapore had the lowest rate at 2.6, while Sierra Leone had the highest at 185 child deaths per 1,000 births. In the U.S., the rate was 8 under-5 deaths per 1,000 births.
The infant mortality rate (IMR) is not only a statistic but also a reflection of socioeconomic development; as such, it effectively represents the presence of medical services in a country. IMR is an effective resource for health departments making decisions on medical resource allocation, and it also informs global health strategies and helps evaluate their success. The use of IMR helps compensate for the inadequacies of other vital statistics systems for global health, as most neglect infant mortality rates among the poor. A certain amount of infant death remains unrecorded in rural areas, where families may have no practice of reporting early infant deaths or may not know about the importance of the IMR.
=== Europe and US ===
The inclusion or exclusion of high-risk neonates from the reported IMRs can cause problems in making comparisons. Many countries, including the United States, Sweden and Germany, count any birth exhibiting any sign of life as alive, no matter the month of gestation or neonatal size. All of the countries named in the table adopted the WHO definitions in the late 1980s or early 1990s, and they are used throughout the European Union. However, in 2009, the US CDC issued a report that stated that the American rates of infant mortality were affected by the high rates of premature babies in the United States compared to European countries. It also outlined the differences in reporting requirements between the United States and Europe, noting that France, the Czech Republic, Ireland, the Netherlands, and Poland do not report all live births under 500 g and/or 22 weeks of gestation. However, differences in reporting are unlikely to be the primary explanation for the high rate of infant mortality in the United States compared to countries at a similar level of economic development. Rather, the report concluded that the primary reason for the higher infant mortality rate in the US compared to Europe was the much higher number of preterm births.
Until the 1990s, Russia and the Soviet Union did not count, either as a live birth or as an infant death, extremely premature infants that were born alive but failed to survive for at least seven days (infants born weighing less than 1,000 g, of less than 28 weeks gestational age, or less than 35 cm in length, who breathed, had a heartbeat, or exhibited voluntary muscle movement). Although such extremely premature infants typically accounted for only about 0.5% of all live-born children, their exclusion led to an estimated 22%–25% lower reported IMR. In some cases, too, perhaps because hospitals or regional health departments were held accountable for lowering the IMR in their catchment area, infant deaths that occurred in the 12th month were "transferred" statistically to the 13th month (i.e., the second year of life), and thus no longer classified as an infant death.
=== Brazil ===
In certain rural developing areas, such as northeastern Brazil, infant births are often not recorded, resulting in discrepancies between the infant mortality rate (IMR) and the actual number of infant deaths. Access to vital registry systems for infant births and deaths is extremely difficult and expensive for poor parents living in rural areas. Governments and bureaucracies tend to be insensitive to these parents and to issue broad disclaimers in IMR reports stating that the information has not been properly reported, resulting in discrepancies. Little has been done to address the underlying structural problems of the vital registry systems with respect to the lack of reporting in rural areas, which has created a gap between the official and popular meanings of child death.
It is also argued that the bureaucratic separation of vital death recording from cultural death rituals is to blame for the inaccuracy of the infant mortality rate (IMR). Vital death registries often fail to recognize the cultural implications and importance of infant deaths. Such systems can accurately represent a region's socio-economic situation only if their statistics are valid, which is unfortunately not always the case. An alternative method of collecting and processing statistics on infant and child mortality is via "popular death reporters", who are culturally linked to infants and may be able to provide more accurate statistics. According to ethnographic data, "popular death reporters" refers to people who had inside knowledge of anjinhos, including the grave-digger, gatekeeper, midwife, and popular healers, all key participants in mortuary rituals. Combining household surveys, vital registries, and the accounts of "popular death reporters" can increase the validity of child mortality rates. However, barriers remain that affect the validity of infant mortality statistics, including political and economic decisions: numbers are exaggerated when international funds are being doled out and underestimated during reelection campaigns.
The bureaucratic separation of vital death reporting and cultural death rituals stems, in part, from structural violence. Individuals living in rural areas of Brazil need funds for lodging and travel in order to report births to a Brazilian Assistance League office. This deters registration, as these individuals are often of lower income and cannot afford such expenses; similar barriers exist when deciding whether to report infant deaths. Financial constraints, such as reliance on food supplementation, may also lead to skewed infant mortality data.
In developing countries such as Brazil, the deaths of impoverished infants regularly go unrecorded in the country's vital registration system, which skews statistics. Cultural validity and contextual soundness can be used to ground the meaning of mortality from a statistical standpoint. In northeast Brazil, this has been accomplished by combining an ethnographic study with an alternative method of surveying infant mortality. Such techniques can produce quality data that lead to a better portrayal of the IMR of a region.
Political and economic motives have also been seen to skew infant mortality data, as when the governor of Ceará built his presidential campaign on having reduced the infant mortality rate during his term in office. By using this new way of surveying, such distortions can be minimized or removed, producing more accurate and sound data.
== Epidemiology ==
Global IMR, as well as the IMR for both less developed countries (LDCs) and more developed countries (MDCs), declined significantly between 1960 and 2001. According to the State of the World's Mothers report by Save the Children, the world IMR declined from 126 in 1960 to 57 in 2001. The global neonatal mortality rate, NMR, decreased from 36.6 in 1990 to 18.0 in 2017.
However, IMR was, and remains, higher in LDCs. In 2001, the IMR for 91 LDCs was about 10 times as large as it was for 8 MDCs. On average, for LDCs, the IMR is 17 times higher than that of MDCs. Also, while both LDCs and MDCs made significant reductions in IMR, the reduction rate has been lower in less developed countries than among the more developed countries. Among many low- and middle-income countries, there is also substantial variation in infant mortality rate at a subnational level.
As the lowest rate, in Monaco, is 1.80, and the highest IMR, in Afghanistan, is 121.63, a factor of about 67 separates them.
=== United Kingdom ===
A study published in the British Medical Journal in 2019 found that the rate of infant mortality in England had increased by an additional 24 infant deaths per 100,000 live births per year. There was no significant change from the pre-existing trend in the most affluent areas; the rise therefore disproportionately affected the poorest areas of the country, and was attributed largely to rising child poverty resulting from sustained reductions in the welfare benefits available to families with children.
=== United States ===
Of the 27 most developed countries, the U.S. has the highest infant mortality rate, despite spending more on health care per capita than any other country. Significant racial and socio-economic differences in the United States affect the IMR, in contrast with other developed countries with more homogeneous populations. In particular, IMR varies greatly by race in the US. The average IMR for the country as a whole is therefore not a fair representation of the wide variations that exist between segments of the population. Many theories have been explored as to why these racial differences exist, with socio-economic factors usually emerging as a reasonable explanation. However, more studies have been conducted on this matter, and the largest advancement involves the idea of stress and how it affects pregnancy.
In the 1850s, the infant mortality rate in the United States was estimated at 216.8 per 1,000 white babies and 340.0 per 1,000 African American babies, but rates have significantly declined in modern times. This declining rate has been mainly due to modern improvements in basic health care and technology, as well as medical advances. In the last century, the infant mortality rate has decreased by 93%. Overall, the rates per 1,000 births decreased drastically from 20 deaths in 1970 to 6.9 deaths in 2003. In 2003, the leading causes of infant mortality in the United States were congenital anomalies, disorders related to immaturity, SIDS, and maternal complications. Smoking during pregnancy declined to 10.2%, with 12.4% of these births being low birth weight, compared with 7.7% of births being low birth weight for non-smokers. Overall, the share of babies born with low birth weight increased to 8.1% between 2003 and 2004. According to the New York Times, "the main reason for the high rate is preterm delivery, and there was a 10% increase in such births from 2000 to 2006." Between 2007 and 2011, however, the preterm birth rate decreased every year. In 2011, 11.73% of babies were born before the 37th week of gestation, down from a high of 12.80% in 2006.
Economic expenditures on labor and delivery and neonatal care are relatively high in the United States. A conventional birth averages US$9,775 with a C-section costing US$15,041. Preterm births in the US have been estimated to cost $51,600 per child, with a total yearly cost of $26.2 billion. Despite this spending, several reports state that infant mortality rate in the United States is significantly higher than in other developed nations. Estimates vary; the CIA's World Factbook ranks the US 55th internationally in 2014, with a rate of 6.17, while the UN figures from 2005 to 2010 place the US 34th.
Differences in measurement could play a substantial role in the disparity between the US and other nations. A non-viable birth in the US could be registered as a stillbirth in similarly developed nations like Japan, Sweden, Norway, Ireland, the Netherlands, and France, thereby reducing their IMR. Neonatal intensive care is also more likely to be applied in the US to marginally viable infants, although such interventions have been found to increase both costs and disability. A study following the implementation of the Born Alive Infant Protection Act of 2002 found universal resuscitation of infants born between 20 and 23 weeks increased the neonatal spending burden by $313.3 million while simultaneously decreasing quality-adjusted life years by 329.3.
The vast majority of research conducted in the late twentieth and early twenty-first century indicates that African-American infants are more than twice as likely to die in their first year of life as white infants. Although a decline occurred from 13.63 deaths in 2005 to 11.46 deaths per 1,000 live births in 2010, non-Hispanic black parents continued to report a rate 2.2 times as high as that for non-Hispanic white parents.
Contemporary research findings have demonstrated that nationwide racial disparities in infant mortality are linked to the experiences of the postpartum parent and that these disparities cannot be totally accounted for by socio-economic, behavioral or genetic factors. The Hispanic paradox, an effect observed in other health indicators, appears in the infant mortality rate, as well. Hispanic postpartum parents see an IMR comparable to non-Hispanic white postpartum parents, even with lower educational attainment and economic status. According to Mustillo's CARDIA (Coronary Artery Risk Development in Young Adults) study, "self reported experiences of racial discrimination were associated with pre-term and low-birthweight deliveries, and such experiences may contribute to black-white disparities in prenatal outcomes." A study in North Carolina, for example, concluded that "white women who did not complete high school have a lower infant mortality rate than black college graduates." Likewise, dozens of population-based studies indicate that "the subjective, or perceived experience of racial discrimination is strongly associated with an increased risk of infant death and with poor health prospects for future generations of African Americans."
==== African American ====
While earlier parts of this article have addressed racial differences in the infant death rate, a closer look into the effects of racial differences within the country is necessary to view discrepancies. Non-Hispanic Black women have the highest infant mortality rate with a rate of 11.3, while the IMR among white women is 5.1.
A popular argument holds that, because black women are more likely to be of lower socio-economic status, their children face an increased likelihood of poor outcomes; while this does correlate, the theory is not congruent with the data on Latino IMR in the United States. Latino people are almost as likely to experience poverty as black people in the U.S.; however, the infant mortality rate of Latinos is much closer to that of white women than to that of black women. The poverty rate for blacks is 24.1% and for Latinos it is 21.4%; if there were a direct correlation, the IMR of these two groups should be rather similar, yet blacks have an IMR double that of Latinos. Also, for black women who move out of poverty, or who never experienced it in the first place, the IMR is not much lower than that of their counterparts experiencing higher levels of poverty.
Tyan Parker Dominguez at the University of Southern California offers a theory to explain the disproportionally high IMR among black women in the United States. She says African American women experience stress at much higher rates than any other group in the country. Stress produces particular hormones that can induce labor and contribute to other pregnancy problems. Considering premature birth is one of the leading causes of death of infants under the age of one, early labor is a legitimate concern. The idea of stress as a factor in IMR spans socio-economic status as Parker Dominguez says that for lower-class women stress comes from an unstable family life and chronic worry over poverty, while for middle-class women, battling racism, real or perceived, can be an extreme stressor.
Others believe black women are predisposed to a higher IMR, meaning that, ancestrally speaking, all women of African descent should experience an elevated rate. This theory is quickly disproven by looking at foreign-born African immigrants: these women come from a completely different social context and are not prone to the higher IMR experienced by American-born black women.
Arline Geronimus, a professor at the University of Michigan School of Public Health, calls the phenomenon "weathering". She claims that constantly dealing with disadvantage and racial prejudice causes black women's birth outcomes to deteriorate with age. Therefore, younger black women may experience stress with pregnancy due to social and economic factors, but older women experience stress at a compounding rate and therefore have pregnancy complications beyond those attributable to economic factors.
Mary O. Hearst, a professor in the Department of Public Health at Saint Catherine University, researched the effects of segregation on the African American community to see if it contributed to the high IMR in black children. Hearst claims that residential segregation contributes to the high rates because of the political, economic, and health implications it poses on black mothers regardless of their socioeconomic status. Racism, economic disparities, and sexism in segregated communities are all examples of the daily stressors that pregnant black women face, and are risk factors for conditions that can affect their pregnancies such as pre-eclampsia and hypertension.
Studies have also shown that the high IMR is due to the inadequate care that pregnant African Americans receive compared to other women in the country. One study showed that Black patients were more likely to receive ibuprofen after surgery instead of oxycodone. This unequal treatment stems from the idea that there are racial medical differences and is also rooted in racial biases and controlling images of black women. Because of this unequal treatment, research on maternal and prenatal care received by African American women and their infants finds that black women do not receive the same urgency in medical care and are not taken as seriously regarding the pain they feel or the complications they think they are having, as exemplified by the complications tennis star Serena Williams faced during her delivery.
Several peer-reviewed articles have documented a difference in the levels of care a black patient receives regardless of whether they have insurance. For white women, IMR drops after age 20 and remains stable until they are in their 40s; for black women, IMR does not decrease when accounting for higher education, nor does it change based on age, suggesting that there is a racial element. There is another element that must be considered: the effect of the intersection of race and gender. Misogynoir is a commonly cited yet overlooked issue. Black feminists have often been cited as the backbone of numerous civil rights events, but they feel overlooked when it comes to meaningful change that positively affects the lives of Black women specifically. During the June 2020 Black Lives Matter protests, many black feminists criticized the movement for excluding them. When examined through this lens, the increased IMR of African American women becomes a matter of equity and an issue of social justice.
Strides have been made, however, to combat this epidemic. In Los Angeles County, health officials have partnered with non-profits around the city to help black women after the delivery of their child. One non-profit that has made a large impact on many lives is Great Beginnings For Black Babies in Inglewood. The non-profit centers around helping women deal with stress by forming support networks, keeping an open dialogue around race and family life, and also finding these women a secure place in the workforce.
Some research argues that to end the high infant mortality rate of black children, the country needs to fix the social and societal issues that plague African Americans, such as institutional racism, mass incarceration, poverty, and health care disparities that are present amongst the African American population. Following this theory, if institutional inequalities are addressed and repaired by the United States Government, this will reduce daily stressors for African Americans, and African American women in particular, and lessen the risk of complications in pregnancy and infant mortality. Others argue that increasing diversity in the health care industry can help reduce the IMR as more representation can tackle deep-rooted racial biases and stereotypes that exist towards African American women. Another attempt to reduce high IMR among black children is the use of doulas throughout pregnancy.
== History ==
It was in the early 1900s that countries around the world started to notice the need for better child health care services, first in Europe and then in the United States, which created a campaign to decrease the infant mortality rate. With this program, the United States was able to lower the IMR from 100 deaths to 10 deaths per 1,000 births. When infant mortality began to be noticed as a national problem, it was viewed as a social problem, and middle-class American women with an educational background started a movement to provide housing for families of a lower social class. Through this movement they were able to establish public health care and government agencies, which in turn created more sanitary and healthier environments for infants. Medical professionals helped further the cause of infant health by creating the field of pediatrics, which is devoted to the medical care of children.
=== United States ===
In the 20th century decreases in infant mortality around the world were linked to several common trends, including social programs, improved sanitation, improved access to healthcare, and improved education, as well as scientific advancements like the discovery of penicillin and the development of safer blood transfusions.
In the United States, improving infant mortality in the early half of the 20th century meant tackling environmental factors. By improving sanitation, especially access to safe drinking water, the United States dramatically decreased infant mortality, which had been a growing concern in the United States since the 1850s. During this time the United States also endeavored to increase education and awareness regarding infant mortality. Pasteurization of milk also helped the United States combat infant mortality in the early 1900s, as it helped curb disease in infants. These factors, on top of a general increase in the standard of living in urban areas, helped the United States make dramatic improvements to their rates of infant mortality in the early 20th century.
Although the overall infant mortality rate was dropping sharply during this time, infant mortality within the United States varied greatly among racial and socio-economic groups. Between 1915 and 1933, infant mortality fell from 98.6 to 52.8 per 1,000 births for the white population and from 181.2 to 94.4 per 1,000 births for the black population; studies imply that this has a direct correlation with relative economic conditions between these populations. Additionally, infant mortality in southern states was consistently 2% higher than in other US regions over a 20-year period starting in 1985. Southern states also tend to perform worse on predictors of higher infant mortality, such as per capita income and poverty rate.
In the latter half of the 20th century, a focus on greater access to medical care for women spurred declines in infant mortality in the United States. The implementation of Medicaid, granting wider access to healthcare, contributed to a dramatic decrease in infant mortality, as did greater access to legal abortion and family-planning care, such as the IUD and the birth control pill.
By 1984, the United States' decreasing infant mortality slowed. Funding for the federally subsidized Medicaid and Maternal and Infant Care programs was reduced, and availability of prenatal care decreased for low-income parents.
=== China ===
The growth of medical resources in the People's Republic of China during the latter half of the 20th century partly explains its dramatic improvement in infant mortality during this time. The Rural Cooperative Medical System, founded in the 1950s, granted healthcare access to previously underserved rural populations and is estimated to have covered 90% of China's rural population throughout the 1960s. The Cooperative Medical System achieved an infant mortality rate of 25.09 per 1,000; although it was later defunded, leaving many rural populations to rely on an expensive fee-for-service system, the rate continued to decline. As the Cooperative Medical System was replaced, the change created a socio-economic gap in access to medical care in China; however, this was not reflected in the declining infant mortality rate, as prenatal care was increasingly used and delivery assistance remained accessible.
China's one-child policy, adopted in the 1980s, negatively impacted its infant mortality. Women carrying unapproved pregnancies faced state consequences and social stigma and were thus less likely to use prenatal care. Additionally, economic realities and long-held cultural factors incentivized male offspring, leading some families who already had sons to avoid prenatal care or professional delivery services, and causing China to have unusually high female infant mortality rates during this time.
== See also ==
Maternal mortality
Miscarriage
Stillbirth
List of countries by infant mortality rate
List of countries by maternal mortality ratio
History of public health in the United Kingdom
Related statistical categories:
Perinatal mortality only includes deaths between fetal viability (22 weeks gestation) and the end of the 7th day after delivery.
Neonatal mortality only includes deaths in the first 28 days of life.
Postneonatal mortality only includes deaths after 28 days of life but before one year.
Child mortality includes deaths before the age of 5.
== Notes ==
== References ==
== Further reading ==
Abouharb, M. Rodwan, and Anessa L. Kimball. "A new dataset on infant mortality rates, 1816–2002." Journal of Peace Research 44.6 (2007): 743–754.
Armstrong, David. "The invention of infant mortality." Sociology of Health & Illness 8.3 (1986): 211–232.
Chen, Alice, Emily Oster, and Heidi Williams. "Why is infant mortality higher in the United States than in Europe?" American Economic Journal: Economic Policy 8.2 (2016): 89–124.
Conley, Dalton, and Kristen W. Springer. "Welfare state and infant mortality." American Journal of Sociology 107.3 (2001): 768–807.
Galley, Chris. Infant Mortality in England, 1538–2000 (Local Population Studies Society, 2023), pp. 8–10.
MacDorman, Marian F., et al. "International comparisons of infant mortality and related factors: United States and Europe, 2010." (2014).
Nathanson, Constance A. Disease Prevention and Social Change: The State, Society, and Public Health in the United States, France, Great Britain and Canada (Russell Sage Foundation, 2007), pp. 49–79.
Reidpath, Daniel D., and Pascale Allotey. "Infant mortality rate as an indicator of population health." Journal of Epidemiology & Community Health 57.5 (2003): 344–346.
Singh, Gopal K., and Stella M. Yu. "Infant mortality in the United States: trends, differentials, and projections, 1950 through 2010." American Journal of Public Health 85.7 (1995): 957–964.
== External links ==
Child and infant mortality estimates for all countries – website by UNICEF
Birth rate, also known as natality, is the total number of live human births per 1,000 population for a given period, divided by the length of the period in years. The number of live births is normally taken from a universal birth registration system; population counts come from a census or are estimated through specialized demographic techniques such as population pyramids. The birth rate (along with mortality and migration rates) is used to calculate population growth. The estimated average population may be taken as the mid-year population.
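Expressed as a formula (a standard demographic definition, written out here for clarity rather than quoted from a particular source), the crude birth rate for a one-year period is
{\displaystyle {\text{CBR}}={\frac {\text{live births during the year}}{\text{mid-year population}}}\times 1000.}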
When the crude death rate is subtracted from the crude birth rate (CBR), the result is the rate of natural increase (RNI). This is equal to the rate of population change (excluding migration).
The total (crude) birth rate (which includes all births)—typically indicated as births per 1,000 population—is distinguished from a set of age-specific rates (the number of births per 1,000 persons, or more usually 1,000 females, in each age group). The first known use of the term "birth rate" in English was in 1856.
The average global birth rate was 17 births per 1,000 total population in 2024. The death rate was 7.9 per 1,000. The RNI was thus 0.91 percent.
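Using these figures, the rate of natural increase follows directly from subtracting the crude death rate from the crude birth rate:
{\displaystyle {\text{RNI}}={\text{CBR}}-{\text{CDR}}=17-7.9=9.1{\text{ per 1,000}}\approx 0.91\%.}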
In 2012, the average global birth rate was 19.611 per 1,000 according to the World Bank and 19.15 births per 1,000 total population according to the CIA, compared to 20.09 per 1,000 total population in 2007.
The 2024 average of 17 births per 1,000 total population equates to approximately 4.3 births per second or about 260 births per minute for the world. On average, two people in the world die every second or about 121 per minute.
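As an illustrative back-of-the-envelope conversion (assuming a world population of roughly 8.1 billion, a figure not stated above), 17 births per 1,000 corresponds to about
{\displaystyle {\frac {17}{1000}}\times 8.1\times 10^{9}\approx 1.38\times 10^{8}{\text{ births per year}},\qquad {\frac {1.38\times 10^{8}}{3.15\times 10^{7}{\text{ s}}}}\approx 4.4{\text{ births per second}},}
consistent with the approximately 4.3 births per second and 260 births per minute quoted above.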
== In politics ==
The birth rate is an issue of concern and policy for national governments. Some (including those of Italy and Malaysia) seek to increase the birth rate with financial incentives or provision of support services to new mothers. Conversely, other countries have policies to reduce the birth rate (for example, China's one-child policy which was in effect from 1978 to 2015). Policies to increase the crude birth rate are known as pro-natalist policies, and policies to reduce the crude birth rate are known as anti-natalist policies. Non-coercive measures such as improved information on birth control and its availability have achieved good results in countries such as Iran and Bangladesh.
There has also been discussion on whether bringing women into the forefront of development initiatives will lead to a decline in birth rates. In some countries, government policies have focused on reducing birth rates by improving women's rights, sexual and reproductive health. Typically, high birth rates are associated with health problems, low life expectancy, low living standards, low social status for women and low educational levels. Demographic transition theory postulates that as a country undergoes economic development and social change its population growth declines, with birth rates serving as an indicator.
At the 1974 World Population Conference in Bucharest, Romania, women's issues gained considerable attention. Family programs were discussed, and 137 countries drafted a World Population Plan of Action. As part of the discussion, many countries accepted modern birth control methods such as the birth control pill and the condom while opposing abortion. Population concerns, as well as the desire to include women in the discourse, were discussed; it was agreed that improvements in women's status and initiatives in defense of reproductive health and freedom, the environment, and sustainable socioeconomic development were needed.
Birth rates ranging from 10 to 20 births per 1,000 are considered low, while rates from 40 to 50 births per 1,000 are considered high. There are problems associated with high birth rates, and there may be problems associated with low birth rates. High birth rates may contribute to malnutrition and starvation, stress government welfare and family programs, store up overpopulation for the future, and increase human damage to other species, habitats, and the environment. Additional problems faced by a country with a high birth rate include educating a growing number of children, creating jobs for these children when they enter the workforce, and dealing with the environmental impact of a large population. Low birth rates may strain governments' ability to provide adequate senior welfare systems and burden families who must support the elderly themselves. There will be fewer younger able-bodied people to support an ageing population if a high proportion of older people become disabled and unable to care for themselves.
== Population control ==
In the 20th century, several authoritarian governments sought either to increase or to decrease birth rates, sometimes through forceful intervention. One of the most notorious natalist policies was that of communist Romania in 1967–1990, under communist leader Nicolae Ceaușescu, who adopted a very aggressive natalist policy that included outlawing abortion and contraception, routine pregnancy tests for women, taxes on childlessness, and legal discrimination against childless people. This policy has been depicted in movies and documentaries (such as 4 Months, 3 Weeks and 2 Days, and Children of the Decree). These policies temporarily increased birth rates for a few years, but this was followed by a decline due to the increased use of illegal abortion. Ceaușescu's policy resulted in over 9,000 deaths of women due to illegal abortions, large numbers of children put into Romanian orphanages by parents who could not cope with raising them, street children in the 1990s (when many orphanages were closed and the children ended up on the streets), and overcrowding in homes and schools. Ultimately, this aggressive natalist policy produced a generation that eventually led the Romanian Revolution, which overthrew and executed him.
In stark contrast to Ceaușescu's natalist policy was China's one child policy, in effect from 1978 to 2015, which included abuses such as forced abortions. This policy has also been deemed responsible for the common practice of sex-selective abortion which led to an imbalanced sex ratio in the country. Given strict family size limitations and a preference for sons, girls became unwanted in China because they were considered as depriving the parents of the chance of having a son. With the progress of prenatal sex-determination technologies and induced abortion, the one-child policy gradually turned into a one-son policy.
In many countries, the steady decline in birth rates over the past decades can largely be attributed to the significant gains in women's freedoms, such as tackling forced marriage and child marriage, access to contraception, equal access to education, and increased socioeconomic opportunities. Women of all economic, social, religious and educational persuasions are choosing to have fewer children as they are gaining more control over their own reproductive rights. Apart from more children living into their adult years, women are often more ambitious to take up education and paid work outside the home, and to live their own lives rather than just a life of reproduction and unpaid domestic work. Birth rates have fallen due to the introduction of family planning clinics and other access to contraception.
In Bangladesh, one of the poorest countries in the world, women are less likely to have two or more children than they were before 1999, according to Australian demographer Jack Caldwell. Bangladeshi women eagerly took up contraceptives, such as condoms and the pill, according to a 1994 World Bank study, which showed that family planning could be carried out and accepted practically anywhere. Caldwell also believes that agricultural improvements reduced the need for labour: children no longer needed to plough the fields became surplus and required some education, so families became smaller and women were able to work and pursue greater ambitions. Other examples of non-coercive family planning policies are Ethiopia, Thailand and Indonesia.
Myanmar was controlled until 2011 by an austere military junta, intent on controlling every aspect of people's lives. The generals wanted the country's population doubled. In their view, women's job was to produce babies to power the country's labour force, so family planning was vehemently opposed. The women of Burma opposed this policy, and Peter McDonald of the Australian National University argues that this gave rise to a black market trade in contraceptives, smuggled in from neighbouring Thailand.
In 1990, two years after the Iran–Iraq War ended, Iran saw the fastest recorded fall in fertility in world history. Revolution gave way to consumerism and westernization. With TVs and cars came condoms and birth control pills. A generation of women had been expected to produce soldiers to fight Iraq, but the next generation of women could choose to enjoy some newfound luxuries. During the war, the women of Iran averaged about 8 children each, a rate the hard-line Islamic President Mahmoud Ahmadinejad wanted to revive. As of 2010, the fertility rate of Iran was 1.7 babies per woman. Some observers claim this to be a triumph of Western values of freedom for women against states with Islamic values.
Islamic clerics are also having less influence over women in other Muslim countries. In the past 30 years, Turkey's fertility rate has dropped from 4.07 to 2.08 children per woman. Tunisia's has dropped from 4.82 to 2.14 and Morocco's from 5.4 to 2.52 children per woman.
Latin America, which is predominantly Catholic, has seen the same trend of falling fertility rates. Brazilian women are having half as many children as 25 years ago: a rate of 1.7 children per woman. The Vatican now has less influence over women in other hard-line Catholic countries. Mexico, El Salvador, Ecuador, Nicaragua, Colombia, Venezuela and Peru have all seen significant drops in fertility in the same period, all going from over six to less than three children per woman. Forty percent of married Brazilian women are choosing to get sterilised after having children. Some observers claim this to be a triumph of modern Western values of freedom for women against states with Catholic values.
== National birth rates ==
According to the CIA's The World Factbook, which presumably gets its figures from the World Health Organization, the country with the highest birth rate is Niger at 6.49 children born per woman, and the country with the lowest birth rate is Taiwan, at 1.13 children born per woman. However, although there are no official records, it can be presumed for obvious reasons (only men are allowed to be Catholic priests) that the Holy See has the lowest birth rate of any sovereign state.
Compared with the 1950s (when the birth rate was 36 per thousand), as of 2011, the world birth rate has declined by 16 per thousand.
As of 2017, Niger has had 49.443 births per thousand people.
Japan has one of the lowest birth rates in the world with 8 per thousand people.
While in Japan there are 126 million people and in Niger 21 million, both countries had around 1 million babies born in 2016.
=== Sub-Saharan Africa ===
The region of Sub-Saharan Africa has the highest birth rate in the world. As of 2016, Niger, Mali, Uganda, Zambia, and Burundi have the highest birth rates in the world. This is part of the fertility-income paradox, as these countries are very poor, and it may seem counter-intuitive for families there to have so many children. The inverse relationship between income and fertility has been termed a demographic-economic "paradox" because, following the reasoning of the influential Thomas Malthus, greater means would be expected to enable the production of more offspring.
=== Afghanistan ===
Afghanistan has the 11th highest birth rate in the world, and also the highest birth rate of any non-African country (as of 2016). The rapid population growth of Afghanistan is considered a problem when it prevents population stabilization and affects maternal and infant health. Reasons for large families include tradition, religion, the different roles of men and women, and the cultural desire to have several sons.
=== Australia ===
Historically, Australia has had a relatively low fertility rate, reaching a high of 3.14 births per woman in 1960. This was followed by a decline which continued until the mid-2000s, when a one-off cash incentive was introduced to reverse the decline. In 2004, the then Howard government introduced a non-means-tested 'Maternity Payment' to parents of every newborn as a substitute for maternity leave. The payment, known as the 'Baby Bonus', was A$3000 per child. This later rose to A$5000, paid in 13 installments.
At a time when Australia's unemployment was at a 28-year low of 5.2%, the then Treasurer Peter Costello stated there was opportunity to go lower. With a good economic outlook for Australia, Costello held the view that it was a good time to expand the population, famously remarking that every family should have three children: "one for mum, one for dad and one for the country". Australia's fertility rate reached a peak of 1.95 children per woman in 2010, a 30-year high, although still below replacement rate.
Phil Ruthven of the business information firm IBISWorld believes the spike in fertility was more about timing and less about monetary incentives. Generation X was now aged 25 to 45 years old. With numerous women putting pregnancies off for a few years for the sake of a career, many felt the years closing in and their biological clocks ticking.
On 1 March 2014, the baby bonus was replaced with Family Tax Benefit A. By then the baby bonus had left its legacy on Australia.
By 2016, Australia's fertility rate had decreased only slightly, to 1.91 children per woman.
=== France ===
France has been successful in increasing fertility rates from the low levels seen in the late 1980s, after a continuous fall in the birth rate. In 1994, the total fertility rate was as low as 1.66, but perhaps due to the active family policy of the government in the mid-1990s, it has increased, and maintained an average of 2.0 from 2008 until 2015.
France has embarked on a strong incentive policy based on two key measures to restore the birth rate: family benefits (les allocations familiales) and a family coefficient in the income tax (le quotient familial). Since the end of World War II, French family policy has been rooted in a tradition of supporting families with several children, so that a third child entitles a family to additional family allowances and income tax exemptions. This is intended to allow families with three children to enjoy the same living standards as households without children.
In particular, the French income taxation system is structured so that families with children receive tax breaks greater than single adults without children. This system is known as the family coefficient of income tax. A characteristic of the family coefficient is that households with a larger number of children receive greater tax exemptions, even at the same standard of living.
Since the 1970s, the focus has been on supporting families who are vulnerable such as single parent families and the children of a poor family in order to ensure equality of opportunity. In addition, as many women began to participate in the labor market, the government introduced policies of financial support for childcare leave as well as childcare facilities. In 1994, the government expanded the parent education allowance (l'allocation parentale d'éducation) for women with two children to ensure freedom of choice and reduce formal unemployment in order to promote family well-being and women's labor participation.
There are also:
an infant child care allowance, a family allowance, an additional family allowance for multi-child families, and a multi-element family pension scheme.
a medical insurance system, within the national social security system, under which national health insurance covers 100% of medical expenses, hospitalization costs, and medical expenses incurred after the sixth month of pregnancy, together with a statutory leave system during pregnancy.
=== Germany ===
The birth rate in Germany is only 8.3 per thousand, lower than the UK and France.
=== Ireland ===
In Europe as of July 2011, Ireland's birth rate was 16.5 per 1000 (3.5 percent higher than the next-ranked country, the UK).
=== Japan ===
As of 2016, Japan has the third lowest crude birth rate (i.e. not allowing for the population's age distribution) in the world, with only Saint Pierre and Miquelon and Monaco having lower crude birth rates. Japan has an unbalanced population with many elderly but few young people, and this is projected to be more extreme in the future, unless there are major changes. An increasing number of Japanese people are staying unmarried: between 1980 and 2010, the percentage of the population who had never married increased from 22% to almost 30%, even as the population continued to age, and by 2035 one in four people will not marry during their childbearing years. The Japanese sociologist Masahiro Yamada coined the term "parasite singles" for unmarried adults in their late 20s and 30s who continue to live with their parents.
=== South Korea ===
Since joining the Organization for Economic Cooperation & Development (OECD) in 1996, South Korea's fertility rate has been on the decline. It recorded the lowest fertility rate among OECD countries in 2017, with just 1.1 children per woman being born. Subsequent studies indicate that Korea has broken its own record and that the fertility rate has fallen to below one child per woman. The total fertility rate in South Korea sharply declined from 4.53 in 1970 to 2.06 in 1983, falling below the replacement level of 2.10. The low birth rate accelerated in the 2000s, with the fertility rate dropping to 1.48 in 2000, 1.23 in 2010, and reaching 0.72 in 2023.
One example of Korea's economic crisis is the housing market. Tenants may choose to buy, rent, or use the Jeonse system of renting, in which landlords require renters to pay upfront as much as 70% of the property value as a type of security deposit; the renters then live rent-free for the duration of the contract, usually two years. At the end of the contract, the deposit is refunded in full to the renter. Historically, landlords have invested the security deposit and banked on rising property values, but as inflation rose above interest rates, property values plummeted. Recent government caps, aimed at protecting renters from price gouging, restrict the profit a landlord can make when renewing a contract.
The Korean government offers a wide range of financial incentives to parents; however, many new parents, both mother and father, refuse to take full advantage of postpartum parental leave. Some fathers fear being ridiculed for taking "mom leave" while both working parents fear the stigma of "falling behind" in their professional careers. The South Korean corporate world is very unsympathetic to family needs.
Abortion and divorce are other contributing factors to Korea's low birth rate. In the twentieth century, due mainly to Confucian beliefs and a strong desire to sire a son, female fetuses were often aborted. This strong desire to have a son as a first child has had a somewhat paradoxical effect on today's low birth rate, as many women do not want to marry an eldest son, aware of his financial obligation to feed, clothe and shelter his aging parents. As a result, in 1988 the government banned doctors from telling expectant parents the sex of the fetus. Effective 1 January 2021, abortion has been decriminalized. Divorce is another deterrent to childbirth. Although divorce has been on the rise over the last 50 years, it hit families especially hard after the economic crisis in 1997, with fathers abandoning their families because they could not support them financially. In addition, the abortion of female fetuses led to a relative shortage of women, resulting in an overall lower birth rate in the country.
=== Taiwan ===
In August 2011, Taiwan's government announced that its birth rate declined in the previous year, despite the fact that the government implemented approaches to encourage fertility.
=== United Kingdom ===
In July 2011, the UK's Office for National Statistics (ONS) announced a 2.4 percent increase in live births in the United Kingdom in 2010. This is the highest birth rate in the UK in 40 years. However, the UK record year for births and birth rate remains 1920 (when the ONS reported over 957,000 births to a population of "around 40 million").
=== United States ===
There has been a dramatic decline in birth rates in the U.S. between 2007 and 2020. The Great Recession appears to have contributed to the decline in the early period. A 2022 study did not identify any other economic, policy, or social factor that contributed to the decline. The decline may be due to shifting life priorities of recent cohorts that go through childbearing age, as there have been "changes in preferences for having children, aspirations for life, and parenting norms."
A Pew Research Center study found evidence of a correlation between economic difficulties and fertility decline by race and ethnicity. Hispanics (particularly affected by the recession) have experienced the largest fertility decline, particularly compared to Caucasians. In 2008–2009 the birth rate declined 5.9 percent for Hispanic women, 2.4 percent for African American women and 1.6 percent for white women. The relatively large birth rate declines among Hispanics mirror their relatively large economic declines, in terms of jobs and wealth. According to statistics based on data from the National Center for Health Statistics and the U.S. Census Bureau, from 2007 to 2008 the employment rate among Hispanics declined by 1.6 percentage points, compared with a decline of 0.7 points for whites. The unemployment rate shows a similar pattern: unemployment among Hispanics increased 2.0 percentage points from 2007 to 2008, while for whites the increase was 0.9 percentage points. A recent report from the Pew Hispanic Center revealed that Hispanics have also been the biggest losers in terms of wealth since the beginning of the recession, with Hispanic households losing 66% of their median wealth from 2005 to 2009. In comparison, black households lost 53% of their median wealth and white households lost only 16%.
Other factors (such as women's labor-force participation, contraceptive technology and public policy) make it difficult to determine how much economic change affects fertility. Research suggests that much of the fertility decline during an economic downturn is a postponement of childbearing, not a decision to have fewer (or no) children; people plan to "catch up" to their plans of bearing children when economic conditions improve. Younger women are more likely than older women to postpone pregnancy due to economic factors, since they have more years of fertility remaining.
In July 2011, the U.S. National Institutes of Health announced that the adolescent birth rate continues to decline. In 2013, teenage birth rates in the U.S. were at the lowest level in U.S. history. Teen birth rates in the U.S. have decreased from 1991 through 2012 (except for an increase from 2005 to 2007). The other aberration from this otherwise-steady decline in teen birth rates is the six percent decrease in birth rates for 15- to 19-year-olds between 2008 and 2009. Despite the decrease, U.S. teen birth rates remain higher than those in other developed nations. Racial differences affect teen birth and pregnancy rates: American Indian/Alaska Native, Hispanic, and non-Hispanic black teen pregnancy rates are more than double the non-Hispanic white teenage birth rate.
Researchers have found that states strict in enforcing child support have up to 20 percent fewer unmarried births than states that are lax about getting unmarried fathers to pay. Moreover, according to the results, if all 50 states in the United States had done at least as well in their enforcement efforts as the state ranked fifth from the top, that would have led to a 20 percent reduction in out-of-wedlock births.
The United States population growth is at a historically low level, mainly because United States birth rates in the 2010s and 2020s are the lowest ever recorded. The low birth rates in the United States after 2010 may be attributable to the recession that started in 2008, which led families to postpone having children, and to fewer immigrants coming to the US. US birth rates in 2010–2014 were not high enough to maintain the size of the U.S. population, according to The Economist. Since that period, the birth rate (births per 1,000 inhabitants) has further declined from roughly 12 to roughly 10.
== Factors affecting birth rate ==
There are many factors that interact in complex ways, influencing the birth rates of a population.
Developed countries have a lower birth rate than underdeveloped countries (see Income and fertility). A parent's number of children strongly correlates with the number of children that each person in the next generation will eventually have. Factors generally associated with increased fertility include religiosity, intention to have children, and maternal support. Factors generally associated with decreased fertility include wealth, education, female labor participation, urban residence, intelligence, increased female age, women's rights, access to family planning services and (to a lesser degree) increased male age. Many of these factors however are not universal, and differ by region and social class. For instance, at a global level, religion is correlated with increased fertility.
Reproductive health can also affect the birth rate, as untreated infections can lead to fertility problems, as can be seen in the "infertility belt" - a region that stretches across central Africa from the United Republic of Tanzania in the east to Gabon in the west, and which has a lower fertility than other African regions.
Child custody laws, affecting fathers' parental rights over their children from birth until child custody ends at age 18, may have an effect on the birth rate. Researchers have found that U.S. states strict in enforcing child support have up to 20 percent fewer unmarried births than states that are lax about getting unmarried fathers to pay. Moreover, according to the results, if all 50 states in the United States had done at least as well in their enforcement efforts as the state ranked fifth from the top, that would have led to a 20 percent reduction in out-of-wedlock births.
Some scholars believe there exists a form of "cultural selection" that will significantly affect future demographics due to significant differences in birth rates between cultures, such as within certain religious groups, that cannot be explained by factors such as income. In the book Shall the Religious Inherit the Earth?, Eric Kaufmann argues that demographic trends point to religious fundamentalists greatly increasing as a share of the population over the next century. From the perspective of evolutionary psychology, it is expected that selection pressure should occur for whatever psychological or cultural traits maximize fertility.
== Crude birth rate ==
Crude birth rate is a measure of the number of live births occurring during the year, per 1,000 people in the population. It is normally used to predict population growth.
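As a rough illustration (not part of the source article), the crude birth rate can be computed in a few lines of Python; the birth count and population figures below are invented for the example.

def crude_birth_rate(live_births, mid_year_population):
    """Live births per 1,000 people in the population during one year."""
    return live_births / mid_year_population * 1000

# Hypothetical figures: 1.05 million births in a population of 126 million.
print(round(crude_birth_rate(1_050_000, 126_000_000), 1))  # -> 8.3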
== See also ==
Human population planning – Practice of controlling rate of growth
Population ageing – Increasing median age in a population
Population decline – Concept in human demographics
Case studies
Ageing of Europe – Overview of ageing in Europe
Aging of Japan
Aging of South Korea – Overview of aging in South Korea
Aging of the United States
Demographic crisis of Russia – Aging population of Russia
Lists
List of sovereign states and dependent territories by birth rate
List of sovereign states and dependencies by total fertility rate
== Notes ==
== References ==
United Nations World Population Prospects: The 2008 Revision Population Database
Audrey, Clark (1985). Longman Dictionary of Geography, Human and Physical. New York: Longman.
Douglas, Ian; Richard Huggett (2007). Companion Encyclopedia of Geography. New York: Routledge.
Norwood, Carolette (2009). "Re-thinking the integration of women in population development initiatives". Development in Practice. 19 (7): 906–911. doi:10.1080/09614520903122352. S2CID 162368226.
World Birth rate by IndexMundi
http://www.childtrends.org/?indicators=fertility-and-birth-rates Archived 26 May 2016 at the Wayback Machine
== External links ==
Media related to Birth rate at Wikimedia Commons
CIA World Factbook Birth Rate List by Rank | Wikipedia/Birth_rate |
Survey methodology is "the study of survey methods".
As a field of applied statistics concentrating on human-research surveys, survey methodology studies the sampling of individual units from a population and associated techniques of survey data collection, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys. Survey methodology targets instruments or procedures that ask one or more questions that may or may not be answered.
Researchers carry out statistical surveys with a view towards making statistical inferences about the population being studied; such inferences depend strongly on the survey questions used. Polls about public opinion, public-health surveys, market-research surveys, government surveys and censuses all exemplify quantitative research that uses survey methodology to answer questions about a population. Although censuses do not include a "sample", they do include other aspects of survey methodology, like questionnaires, interviewers, and non-response follow-up techniques. Surveys provide important information for all kinds of public-information and research fields, such as marketing research, psychology, health-care provision and sociology.
== Overview ==
A single survey is made of at least a sample (or full population in the case of a census), a method of data collection (e.g., a questionnaire) and individual questions or items that become data that can be analyzed statistically. A single survey may focus on different types of topics such as preferences (e.g., for a presidential candidate), opinions (e.g., should abortion be legal?), behavior (smoking and alcohol use), or factual information (e.g., income), depending on its purpose. Since survey research is almost always based on a sample of the population, the success of the research is dependent on the representativeness of the sample with respect to a target population of interest to the researcher. That target population can range from the general population of a given country to specific groups of people within that country, to a membership list of a professional organization, or list of students enrolled in a school system (see also sampling (statistics) and survey sampling). The persons replying to a survey are called respondents, and depending on the questions asked their answers may represent themselves as individuals, their households, employers, or other organization they represent.
Survey methodology as a scientific field seeks to identify principles about the sample design, data collection instruments, statistical adjustment of data, data processing, and final data analysis that can create systematic and random survey errors. Survey errors are sometimes analyzed in connection with survey cost. These constraints are sometimes framed as a matter of improving quality within a fixed budget, or alternatively reducing costs for a fixed level of quality. Survey methodology is both a scientific field and a profession, meaning that some professionals in the field focus on survey errors empirically and others design surveys to reduce them. For survey designers, the task involves making a large set of decisions about thousands of individual features of a survey in order to improve it.
The most important methodological challenges of a survey methodologist include making decisions on how to:
Identify and select potential sample members.
Contact sampled individuals and collect data from those who are hard to reach (or reluctant to respond)
Evaluate and test questions.
Select the mode for posing questions and collecting responses.
Train and supervise interviewers (if they are involved).
Check data files for accuracy and internal consistency.
Adjust survey estimates to correct for identified errors.
Complement survey data with new data sources (if appropriate)
== Selecting samples ==
The sample is chosen from the sampling frame, which consists of a list of all members of the population of interest. The goal of a survey is not to describe the sample, but the larger population. This generalizing ability is dependent on the representativeness of the sample, as stated above. Each member of the population is termed an element. There are frequent difficulties one encounters while choosing a representative sample. One common error that results is selection bias. Selection bias results when the procedures used to select a sample result in over representation or under representation of some significant aspect of the population. For instance, if the population of interest consists of 75% females, and 25% males, and the sample consists of 40% females and 60% males, females are under represented while males are overrepresented. In order to minimize selection biases, stratified random sampling is often used. This is when the population is divided into sub-populations called strata, and random samples are drawn from each of the strata, or elements are drawn for the sample on a proportional basis.
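As a minimal sketch (not from the source), proportional stratified sampling can be expressed as drawing from each stratum in proportion to its share of the sampling frame; the 75%/25% split mirrors the example above, and the field names and sample size are hypothetical.

import random

def proportional_stratified_sample(frame, strata_key, sample_size, seed=0):
    """Draw from each stratum in proportion to its share of the frame."""
    random.seed(seed)
    strata = {}
    for unit in frame:
        strata.setdefault(unit[strata_key], []).append(unit)
    sample = []
    for members in strata.values():
        # Proportional allocation; rounding may shift the total slightly.
        n = round(sample_size * len(members) / len(frame))
        sample.extend(random.sample(members, n))
    return sample

# Hypothetical frame: 75% female, 25% male, as in the example above.
frame = ([{"id": i, "sex": "F"} for i in range(750)]
         + [{"id": i, "sex": "M"} for i in range(750, 1000)])
sample = proportional_stratified_sample(frame, "sex", 100)
print(sum(1 for u in sample if u["sex"] == "F"))  # -> 75 of the 100 drawn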
== Modes of data collection ==
There are several ways of administering a survey. The choice between administration modes is influenced by several factors, including
costs,
coverage of the target population,
flexibility of asking questions,
respondents' willingness to participate and
response accuracy.
Different methods create mode effects that change how respondents answer, and different methods have different advantages. The most common modes of administration can be summarized as:
Telephone
Mail (post)
Online surveys
Mobile surveys
Personal in-home surveys
Personal mall or street intercept survey
Mixed modes
== Research designs ==
There are several different designs, or overall structures, that can be used in survey research. The three general types are cross-sectional, successive independent samples, and longitudinal studies.
=== Cross-sectional studies ===
In cross-sectional studies, a sample (or samples) is drawn from the relevant population and studied once. A cross-sectional study describes characteristics of that population at one time, but cannot give any insight as to the causes of population characteristics because it is a predictive, correlational design.
=== Successive independent samples studies ===
A successive independent samples design draws multiple random samples from a population at one or more times. This design can study changes within a population, but not changes within individuals, because the same individuals are not surveyed more than once. Such studies cannot, therefore, necessarily identify the causes of change over time. For successive independent samples designs to be effective, the samples must be drawn from the same population and must be equally representative of it. If the samples are not comparable, the changes between samples may be due to demographic characteristics rather than time. In addition, the questions must be asked in the same way so that responses can be compared directly.
=== Longitudinal studies ===
Longitudinal studies take measure of the same random sample at multiple time points. Unlike with a successive independent samples design, this design measures the differences in individual participants' responses over time. This means that a researcher can potentially assess the reasons for response changes by assessing the differences in respondents' experiences. Longitudinal studies are the easiest way to assess the effect of a naturally occurring event, such as divorce that cannot be tested experimentally.
However, longitudinal studies are both expensive and difficult to do. It is harder to find a sample that will commit to a months- or years-long study than a 15-minute interview, and participants frequently leave the study before the final assessment. In addition, such studies sometimes require data collection to be confidential or anonymous, which creates additional difficulty in linking participants' responses over time. One potential solution is the use of a self-generated identification code (SGIC). These codes usually are created from elements like 'month of birth' and 'first letter of the mother's middle name'. Some recent anonymous SGIC approaches have also attempted to minimize use of personalized data even further, instead using questions like 'name of your first pet'. Depending on the approach used, the ability to match some portion of the sample can be lost.
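As an illustration only (the elements used vary between studies, and the ones chosen here are hypothetical), a self-generated identification code can be assembled from a few stable, non-identifying answers so that a respondent's waves can be linked without storing personal data.

def sgic(month_of_birth, mother_middle_initial, first_pet_name):
    """Build a self-generated identification code from respondent-supplied
    elements; the particular elements chosen here are hypothetical."""
    return (
        f"{int(month_of_birth):02d}"
        + mother_middle_initial.strip().upper()[:1]
        + first_pet_name.strip().upper()[:2]
    )

# The same answers given at wave 1 and wave 2 yield the same code.
print(sgic(7, "anne", "Rex"))     # -> 07ARE
print(sgic("07", "Anne", "rex"))  # -> 07ARE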
In addition, the overall attrition of participants is not random, so samples can become less representative with successive assessments. To account for this, a researcher can compare the respondents who left the survey to those that did not, to see if they are statistically different populations. Respondents may also try to be self-consistent in spite of changes to survey answers.
== Questionnaires ==
Questionnaires are the most commonly used tool in survey research. However, the results of a particular survey are worthless if the questionnaire is written inadequately. Questionnaires should produce valid and reliable measures of demographic variables and should yield valid and reliable measures of the individual differences that self-report scales capture.
=== Questionnaires as tools ===
One category of variables often measured in survey research is demographic variables, which are used to depict the characteristics of the people surveyed in the sample. Demographic variables include such measures as ethnicity, socioeconomic status, race, and age. Surveys often assess the preferences and attitudes of individuals, and many employ self-report scales to measure people's opinions and judgements about different items presented on a scale. Self-report scales are also used to examine the disparities among people on scale items. These self-report scales, which are usually presented in questionnaire form, are one of the most used instruments in psychology, and thus it is important that the measures be constructed carefully, while also being reliable and valid.
=== Reliability and validity of self-report measures ===
Reliable measures of self-report are defined by their consistency. Thus, a reliable self-report measure produces consistent results every time it is executed. A test's reliability can be measured a few ways. First, one can calculate a test-retest reliability. A test-retest reliability entails administering the same questionnaire to a large sample at two different times. For the questionnaire to be considered reliable, people in the sample do not have to score identically on each test, but rather their position in the score distribution should be similar for both the test and the retest. Self-report measures will generally be more reliable when they have many items measuring a construct. Furthermore, measurements will be more reliable when the factor being measured has greater variability among the individuals in the sample that are being tested. Finally, there will be greater reliability when instructions for the completion of the questionnaire are clear and when there are limited distractions in the testing environment. In contrast, a questionnaire is valid if it measures what it was originally intended to measure. Construct validity of a measure is the degree to which it measures the theoretical construct that it was originally supposed to measure.
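A minimal sketch of a test-retest check, assuming paired scores from the same respondents on two administrations; the Pearson correlation used here is one common reliability coefficient (not the only one), and the scores are invented.

from statistics import correlation  # available in Python 3.10+

# Hypothetical scale scores for eight respondents on the test and the retest.
test   = [12, 18, 25, 31, 33, 40, 44, 50]
retest = [14, 17, 27, 29, 35, 38, 46, 49]

# Respondents need not score identically; what matters is that their relative
# positions in the score distribution are similar across the two administrations.
print(round(correlation(test, retest), 3))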
=== Composing a questionnaire ===
Six steps can be employed to construct a questionnaire that will produce reliable and valid results. First, one must decide what kind of information should be collected. Second, one must decide how to conduct the questionnaire. Thirdly, one must construct a first draft of the questionnaire. Fourth, the questionnaire should be revised. Next, the questionnaire should be pretested. Finally, the questionnaire should be edited and the procedures for its use should be specified.
=== Guidelines for the effective wording of questions ===
The way that a question is phrased can have a large impact on how a research participant will answer the question. Thus, survey researchers must be conscious of their wording when writing survey questions. It is important for researchers to keep in mind that different individuals, cultures, and subcultures can interpret certain words and phrases differently from one another. There are two different types of questions that survey researchers use when writing a questionnaire: free-response questions and closed questions. Free-response questions are open-ended, whereas closed questions are usually multiple choice. Free-response questions are beneficial because they allow the responder greater flexibility, but they are also very difficult to record and score, requiring extensive coding. Contrastingly, closed questions can be scored and coded more easily, but they diminish expressivity and spontaneity of the responder. In general, the vocabulary of the questions should be very simple and direct, and most should be less than twenty words. Each question should be edited for "readability" and should avoid leading or loaded questions. Finally, if multiple items are being used to measure one construct, some of the items should be worded in the opposite direction to avoid response bias.
A respondent's answer to an open-ended question can be coded into a response scale afterwards, or analysed using more qualitative methods.
=== Order of questions ===
Survey researchers should carefully construct the order of questions in a questionnaire. For questionnaires that are self-administered, the most interesting questions should be at the beginning of the questionnaire to catch the respondent's attention, while demographic questions should be near the end. Contrastingly, if a survey is being administered over the telephone or in person, demographic questions should be administered at the beginning of the interview to boost the respondent's confidence. Another reason to be mindful of question order is that it may cause a survey response effect, in which one question can affect how people respond to subsequent questions as a result of priming.
=== Translating a questionnaire ===
Translation is crucial to collecting comparable survey data. Questionnaires are translated from a source language into one or more target languages, such as translating from English into Spanish and German. A team approach is recommended in the translation process to include translators, subject-matter experts and persons helpful to the process.
Survey translation best practice includes parallel translation, team discussions, and pretesting with real-life people. It is not a mechanical word placement process. The model TRAPD - Translation, Review, Adjudication, Pretest, and Documentation - originally developed for the European Social Surveys, is now "widely used in the global survey research community, although not always labeled as such or implemented in its complete form". For example, sociolinguistics provides a theoretical framework for questionnaire translation and complements TRAPD. This approach states that for the questionnaire translation to achieve the equivalent communicative effect as the source language, the translation must be linguistically appropriate while incorporating the social practices and cultural norms of the target language.
== Nonresponse reduction ==
The following ways have been recommended for reducing nonresponse in telephone and face-to-face surveys:
Advance letter. A short letter is sent in advance to inform the sampled respondents about the upcoming survey. The style of the letter should be personalized but not overdone. First, it announces that a phone call will be made, or an interviewer wants to make an appointment to do the survey face-to-face. Second, the research topic will be described. Last, it allows both an expression of the surveyor's appreciation of cooperation and an opening to ask questions on the survey.
Training. The interviewers are thoroughly trained in how to ask respondents questions, how to work with computers and making schedules for callbacks to respondents who were not reached.
Short introduction. Interviewers should always start with a short introduction about themselves, giving their name, the institute they work for, the length of the interview, and the goal of the interview. It can also be useful to make clear that they are not selling anything: this has been shown to lead to a slightly higher response rate.
Respondent-friendly survey questionnaire. The questions asked must be clear, non-offensive and easy to respond to for the subjects under study.
Brevity is also often cited as increasing response rate. A 1996 literature review found mixed evidence to support this claim for both written and verbal surveys, concluding that other factors may often be more important.
A 2010 study looking at 100,000 online surveys found response rate dropped by about 3% at 10 questions and about 6% at 20 questions, with drop-off slowing (for example, only 10% reduction at 40 questions).
Other studies showed that quality of response degraded toward the end of long surveys.
Some researchers have also discussed the recipient's role or profession as a potential factor affecting how nonresponse is managed. For example, faxes are not commonly used to distribute surveys, but in a recent study were sometimes preferred by pharmacists, since they frequently receive faxed prescriptions at work but may not always have access to a generally-addressed piece of mail.
== Interviewer effects ==
Survey methodologists have devoted much effort to determining the extent to which interviewee responses are affected by physical characteristics of the interviewer. The main interviewer traits that have been demonstrated to influence survey responses are race, gender, and relative body weight (BMI). These interviewer effects are particularly operant when questions are related to the interviewer trait. Hence, race of interviewer has been shown to affect responses to measures regarding racial attitudes, interviewer sex to affect responses to questions involving gender issues, and interviewer BMI to affect answers to eating and dieting-related questions.
While interviewer effects have been investigated mainly for face-to-face surveys, they have also been shown to exist for interview modes with no visual contact, such as telephone surveys and in video-enhanced web surveys. The explanation typically provided for interviewer effects is social desirability bias: survey participants may attempt to project a positive self-image in an effort to conform to the norms they attribute to the interviewer asking questions. Interviewer effects are one example of survey response effects.
== The role of big data ==
Since 2018, survey methodologists have started to examine how big data can complement survey methodology to allow researchers and practitioners to improve the production of survey statistics and their quality. Big data has a low cost per data point, applies analysis techniques via machine learning and data mining, and includes diverse and new data sources, e.g., registers, social media, apps, and other forms of digital data. There have been three Big Data Meets Survey Science (BigSurv) conferences, in 2018, 2020, and 2023, with a further conference planned for 2025, as well as special issues in the Social Science Computer Review, the Journal of the Royal Statistical Society, and EPJ Data Science, and a book called Big Data Meets Social Sciences edited by Craig A. Hill and five other Fellows of the American Statistical Association.
== See also ==
Survey data collection
Data Documentation Initiative
Enterprise feedback management (EFM)
Likert scale
Official statistics
Paid survey
Quantitative marketing research
Questionnaire construction
Ratio estimator
Social research
Total survey error
== References ==
== Further reading ==
Abramson, J. J. and Abramson, Z. H. (1999). Survey Methods in Community Medicine: Epidemiological Research, Programme Evaluation, Clinical Trials (5th edition). London: Churchill Livingstone/Elsevier Health Sciences ISBN 0-443-06163-7
Adèr, H. J., Mellenbergh, G. J., and Hand, D. J. (2008). Advising on research methods: A consultant's companion. Huizen, The Netherlands: Johannes van Kessel Publishing.
Dillman, D.A. (1978) Mail and telephone surveys: The total design method. New York: Wiley. ISBN 0-471-21555-4
Engel. U., Jann, B., Lynn, P., Scherpenzeel, A. and Sturgis, P. (2014). Improving Survey Methods: Lessons from Recent Research. New York: Routledge. ISBN 978-0-415-81762-2
Groves, R.M. (1989). Survey Errors and Survey Costs Wiley. ISBN 0-471-61171-9
Griffith, James (2014). "Survey Research in Military Settings." In Routledge Handbook of Research Methods in Military Studies, edited by Joseph Soeters, Patricia Shields and Sebastiaan Rietjens, pp. 179–193. New York: Routledge.
Leung, Wai-Ching (2001) "Conducting a Survey", in Student BMJ, (British Medical Journal, Student Edition), May 2001
Ornstein, M.D. (1998). "Survey Research." Current Sociology 46(4): iii-136.
Prince, S. A., Adamo, K. B., Hamel, M., Hardt, J., Connor Gorber, S., & Tremblay, M. (2008). A comparison of direct versus self-report measures for assessing physical activity in adults: a systematic review. International Journal of Behavioral Nutrition and Physical Activity, 5(1), 56. http://doi.org/10.1186/1479-5868-5-56
Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2006). Research Methods in Psychology (Seventh Edition ed.). McGraw–Hill Higher Education. ISBN 0-07-111655-9 (pp. 143–192)
Singh, S. (2003). Advanced Sampling Theory with Applications: How Michael Selected Amy. Kluwer Academic Publishers, The Netherlands.
Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan.(2014). Routledge Handbook of Research Methods in Military Studies New York: Routledge.
Shackman, G. What is Program Evaluation? A Beginners Guide 2018
== External links ==
Media related to Survey methodology at Wikimedia Commons | Wikipedia/Survey_methodology |
The net migration rate is the difference between the number of immigrants (people coming into an area) and the number of emigrants (people leaving an area) per year divided by the population. When the number of immigrants is larger than the number of emigrants, a positive net migration rate occurs. A positive net migration rate indicates that there are more people entering than leaving an area. When more emigrate from a country, the result is a negative net migration rate, meaning that more people are leaving than entering the area. When there is an equal number of immigrants and emigrants, the net migration rate is balanced.
The net migration rate is calculated over a one-year period using the mid-year population, and is expressed as a ratio (typically per 1,000 people).
== Factors ==
Migration occurs because of a range of push and pull factors, which can be social, political, economic, or environmental, according to Migration Trends.
Social migration is when an individual migrates to reunite with family members, or to live in an area or country with which they identify more (i.e., moving to an area where one's ethnic group is the majority).
Political migration is when a person moves as a refugee to escape war or political persecution. In many cases, this form of migration can also be considered forced migration. This happens when refugees move to neighboring countries or to more developed countries.
Economic migration is when an individual migrates to attain a higher standard of living by having access to better economic opportunities.
Lastly, environmental migration is when natural disasters force one to move into a new area.
If a country has more immigrants than emigrants, it is often a relatively wealthier country that has comparatively more economic opportunities and a higher standard of living. On the other hand, if few people are coming in and many more are leaving, there is a higher possibility of violence, lower economic opportunities, or not enough resources to support the existing population in the country.
== Formula ==
The net migration rate is calculated as

N = [(I − E) / M] × 1,000

where:

N = net migration rate
I = number of immigrants entering the area
E = number of emigrants leaving the area
M = mid-year population
=== Example ===
At the start of the year, country A had a population of 1,000,000. Throughout the year there was a total of 200,000 people that immigrated to (entered) country A, and 100,000 people that emigrated from (left) country A. Throughout the year there was a total of 100,000 births and 100,000 deaths. What is the net migration rate?
==== Step 1 ====
First, find the mid-year population for country A.
M = [ Population at Start of Year + Population at End of Year ] / 2
M = [ 1,000,000 + (1,000,000 + 200,000 - 100,000) ] / 2
M = [ 1,000,000 + 1,100,000 ] / 2
M = 2,100,000 / 2
M = 1,050,000
The mid-year population for country A is 1,050,000.
==== Step 2 ====
Second, find the net migration for country A. Note that this is simply the number of immigrants minus the number of emigrants, not the actual rate.
I - E = 200,000 - 100,000
I - E = 100,000
The net migration for country A is 100,000.
==== Step 3 ====
Third, substitute the result from step 2 into the formula to find the net migration rate for country A.
N = [(I − E) / M] × 1,000
N = (100,000 / 1,050,000) × 1,000
N = 95.23809523809524
N = 95.2
==== Result ====
The net migration rate for country A is 95.2 per 1,000 people. This means that, per 1,000 people in the mid-year population, country A gained 95.2 people over the year as a result of migration. In essence this shows the change of population due to all migration flows, both in and out. This is helpful because the immigration rate shows growth only and the emigration rate shows decline only; combined, net migration shows the impact of these two flows on the total population. This number shows the impact of migration on the country's population and allows country A's net migration rate to be compared with other countries' net migration rates.
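A short sketch (not part of the source) that reproduces the worked example above; the function simply encodes N = [(I − E) / M] × 1,000 with the mid-year population computed as in step 1.

def net_migration_rate(immigrants, emigrants, start_pop, births, deaths):
    """Net migration rate per 1,000 people, using the mid-year population."""
    end_pop = start_pop + immigrants - emigrants + births - deaths
    mid_year_pop = (start_pop + end_pop) / 2
    return (immigrants - emigrants) / mid_year_pop * 1000

# Country A from the example; note that the births and deaths cancel out here.
print(round(net_migration_rate(200_000, 100_000, 1_000_000, 100_000, 100_000), 1))
# -> 95.2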
== Issues ==
If a country has a high net migration rate, it is generally relatively wealthier and more developed. In contrast, a country with a low rate is seen as undeveloped, having political problems, and lacking resources its citizens need.
Every country needs a stable number of people going in and out of its territory in order to have a stable economy. If the number of people coming in is greater than the number of people leaving, there will be a greater demand for resources and a tighter yet growing economy. On the other hand, a country with a lower migration rate will most likely lose many of its available resources due to a lack of consumerism and production.
Conflicts can arise due to migration, yet people find it easier than ever to move to a different place, thanks to more advanced communication technology and more efficient forms of transportation. All of this creates more opportunities, which in turn increases net migration. The United States is an example of a country with growing opportunities as migration increases.
Other problems caused by net migration include a rise in the dependency ratio, higher demand on government resources, and public congestion. A high dependency ratio can be a consequence of net migration: the ratio can increase as the elderly population grows proportionally while the fertility rate decreases. This results in a smaller labor force, which can hurt a country's economy by causing it to slow down. To slow this process, countries use various strategies, such as increasing the retirement age in order to keep the elderly in the workforce as long as possible.
== See also ==
Demography
Human migration
List of sovereign states by net migration rate
Population dynamics
== References ==
== External links ==
World Net migration rate Map
Planet Wire - Glossary of Terms
Census Bureau - Glossary | Wikipedia/Net_migration_rate |
Demography (from Ancient Greek δῆμος (dêmos) 'people, society' and -γραφία (-graphía) 'writing, drawing, description') is the statistical study of human populations: their size, composition (e.g., ethnic group, age), and how they change through the interplay of fertility (births), mortality (deaths), and migration.
Demographic analysis examines and measures the dimensions and dynamics of populations; it can cover whole societies or groups defined by criteria such as education, nationality, religion, and ethnicity. Educational institutions usually treat demography as a field of sociology, though there are a number of independent demography departments. These methods have primarily been developed to study human populations, but are extended to a variety of areas where researchers want to know how populations of social actors can change across time through processes of birth, death, and migration. In the context of human biological populations, demographic analysis uses administrative records to develop an independent estimate of the population.
Demographic analysis estimates are often considered a reliable standard for judging the accuracy of the census information gathered at any time. In the labor force, demographic analysis is used to estimate sizes and flows of populations of workers. In population ecology the focus is on the birth, death, migration and immigration of individuals in a population of living organisms; in the social sciences it can alternatively involve the movement of firms and institutional forms. Demographic analysis is used in a wide variety of contexts. For example, it is often used in business plans to describe the population connected to the geographic location of the business. Demographic analysis is usually abbreviated as DA. For the 2010 U.S. Census, the U.S. Census Bureau expanded its DA categories. Also as part of the 2010 U.S. Census, DA now includes comparative analysis between independent housing estimates and census address lists at different key time points.
Patient demographics form the core of the data for any medical institution, such as patient and emergency contact information and patient medical record data. They allow for the identification of a patient and their categorization into categories for the purpose of statistical analysis. Patient demographics include: date of birth, gender, date of death, postal code, ethnicity, blood type, emergency contact information, family doctor, insurance provider data, allergies, major diagnoses and major medical history.
Formal demography limits its object of study to the measurement of population processes, while the broader field of social demography or population studies also analyses the relationships between economic, social, institutional, cultural, and biological processes influencing a population.
== History ==
Demographic thought can be traced back to antiquity, and was present in many civilisations and cultures, like Ancient Greece, Ancient Rome, China and India. Made up of the prefix demo- and the suffix -graphy, the term demography refers to the overall study of population.
In ancient Greece, this can be found in the writings of Herodotus, Thucydides, Hippocrates, Epicurus, Protagoras, Polus, Plato and Aristotle. In Rome, writers and philosophers like Cicero, Seneca, Pliny the Elder, Marcus Aurelius, Epictetus, Cato, and Columella also expressed important ideas on this ground.
In the Middle Ages, Christian thinkers devoted much time to refuting the Classical ideas on demography. Important contributors to the field were William of Conches, Bartholomew of Lucca, William of Auvergne, William of Pagula, and Muslim sociologists like Ibn Khaldun.
One of the earliest demographic studies in the modern period was Natural and Political Observations Made upon the Bills of Mortality (1662) by John Graunt, which contains a primitive form of life table. Among the study's findings were that one-third of the children in London died before their sixteenth birthday. Mathematicians, such as Edmond Halley, developed the life table as the basis for life insurance mathematics. Richard Price was credited with the first textbook on life contingencies published in 1771, followed later by Augustus De Morgan, On the Application of Probabilities to Life Contingencies (1838).
In 1755, Benjamin Franklin published his essay Observations Concerning the Increase of Mankind, Peopling of Countries, etc., projecting exponential growth in British colonies. His work influenced Thomas Robert Malthus, who, writing at the end of the 18th century, feared that, if unchecked, population growth would tend to outstrip growth in food production, leading to ever-increasing famine and poverty (see Malthusian catastrophe). Malthus is seen as the intellectual father of ideas of overpopulation and the limits to growth. Later, more sophisticated and realistic models were presented by Benjamin Gompertz and Verhulst.
In 1855, the Belgian scholar Achille Guillard defined demography as the natural and social history of the human species, or the mathematical knowledge of populations, of their general changes, and of their physical, civil, intellectual, and moral condition.
The period 1860–1910 can be characterized as a period of transition wherein demography emerged from statistics as a separate field of interest. This period included a panoply of international 'great demographers', such as Adolphe Quetelet (1796–1874), William Farr (1807–1883), Louis-Adolphe Bertillon (1821–1883) and his son Jacques (1851–1922), Joseph Körösi (1844–1906), Anders Nicolas Kaier (1838–1919), Richard Böckh (1824–1907), Émile Durkheim (1858–1917), Wilhelm Lexis (1837–1914), and Luigi Bodio (1840–1920), who contributed to the development of demography and to the toolkit of methods and techniques of demographic analysis.
== Methods ==
Demography is the statistical and mathematical study of the size, composition, and spatial distribution of human populations and how these features change over time. Data are obtained from a census of the population and from registries: records of events like births, deaths, migrations, marriages, divorces, diseases, and employment. Using these data requires an understanding of how the measures are calculated and which questions they answer, which is captured in four concepts: population change, standardization of population numbers, the demographic bookkeeping equation, and population composition.
There are two types of data collection—direct and indirect—with several methods of each type.
=== Direct methods ===
Direct data comes from vital statistics registries that track all births and deaths as well as certain changes in legal status such as marriage, divorce, and migration (registration of place of residence). In developed countries with good registration systems (such as the United States and much of Europe), registry statistics are the best method for estimating the number of births and deaths.
A census is the other common direct method of collecting demographic data. A census is usually conducted by a national government and attempts to enumerate every person in a country. In contrast to vital statistics data, which are typically collected continuously and summarized on an annual basis, censuses typically occur only every 10 years or so, and thus are not usually the best source of data on births and deaths. Analyses are conducted after a census to estimate how much over or undercounting took place. These compare the sex ratios from the census data to those estimated from natural values and mortality data.
Censuses do more than just count people. They typically collect information about families or households in addition to individual characteristics such as age, sex, marital status, literacy/education, employment status, and occupation, and geographical location. They may also collect data on migration (or place of birth or of previous residence), language, religion, nationality (or ethnicity or race), and citizenship. In countries in which the vital registration system may be incomplete, the censuses are also used as a direct source of information about fertility and mortality; for example, the censuses of the People's Republic of China gather information on births and deaths that occurred in the 18 months immediately preceding the census.
=== Indirect methods ===
Indirect methods of collecting data are required in countries and periods where full data are not available, such as is the case in much of the developing world, and most of historical demography. One of these techniques in contemporary demography is the sister method, where survey researchers ask women how many of their sisters have died or had children and at what age. With these surveys, researchers can then indirectly estimate birth or death rates for the entire population. Other indirect methods in contemporary demography include asking people about siblings, parents, and children. Other indirect methods are necessary in historical demography.
There are a variety of demographic methods for modelling population processes. They include models of mortality (including the life table, Gompertz models, hazards models, Cox proportional hazards models, multiple decrement life tables, Brass relational logits), fertility (Hermes model, Coale-Trussell models, parity progression ratios), marriage (Singulate Mean at Marriage, Page model), disability (Sullivan's method, multistate life tables), population projections (Lee-Carter model, the Leslie Matrix), and population momentum (Keyfitz).
The United Kingdom has a series of four national birth cohort studies, the first three spaced apart by 12 years: the 1946 National Survey of Health and Development, the 1958 National Child Development Study, the 1970 British Cohort Study, and the Millennium Cohort Study, begun much more recently in 2000. These have followed the lives of samples of people (typically beginning with around 17,000 in each study) for many years, and are still continuing. As the samples have been drawn in a nationally representative way, inferences can be drawn from these studies about the differences between four distinct generations of British people in terms of their health, education, attitudes, childbearing and employment patterns.
Indirect standardization is used when a population is small enough that the number of events (births, deaths, etc.) are also small. In this case, methods must be used to produce a standardized mortality rate (SMR) or standardized incidence rate (SIR).
== Population change ==
Population change is analyzed by measuring the change from one population size to another. Global population continues to rise, which makes population change an essential component of demographics. It is calculated by subtracting the population size recorded in an earlier census from the current population size. The best way of measuring population change is the intercensal percentage change: the absolute change in population between two censuses, divided by the population size in the earlier census, and then multiplied by 100 to give a percentage. With this statistic, population growth can be compared accurately between two or more nations that differ in size.
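As an illustrative sketch (the census figures below are invented, not taken from any real census), the intercensal percentage change can be computed as follows:

```python
def intercensal_percentage_change(earlier_count, later_count):
    """Absolute change between two censuses, expressed as a percentage
    of the population counted at the earlier census."""
    absolute_change = later_count - earlier_count
    return 100.0 * absolute_change / earlier_count

# Hypothetical example: 2,500,000 people at the earlier census,
# 2,750,000 at the later one.
print(intercensal_percentage_change(2_500_000, 2_750_000))  # 10.0 (percent growth)
```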
== Standardization of population numbers ==
For a comparison to be meaningful, numbers must be adjusted for the size of the population under study. For example, the fertility rate is calculated as the ratio of the number of births to women of childbearing age to the total number of women in this age range. If such adjustments were not made, we could not tell whether a nation with a higher rate of births or deaths simply has more women of childbearing age, or actually has more births per eligible woman.
Within the category of standardization, there are two major approaches: direct standardization and indirect standardization.
== Common rates and ratios ==
The crude birth rate, the annual number of live births per 1,000 people.
The general fertility rate, the annual number of live births per 1,000 women of childbearing age (often taken to be from 15 to 49 years old, but sometimes from 15 to 44).
The age-specific fertility rates, the annual number of live births per 1,000 women in particular age groups (usually age 15–19, 20–24 etc.)
The crude death rate, the annual number of deaths per 1,000 people.
The infant mortality rate, the annual number of deaths of children less than 1 year old per 1,000 live births.
The expectation of life (or life expectancy), the number of years that an individual at a given age could expect to live at present mortality levels.
The total fertility rate, the number of live births per woman completing her reproductive life, if her childbearing at each age reflected current age-specific fertility rates.
The replacement level fertility, the average number of children women must have in order to replace the population for the next generation. For example, the replacement level fertility in the US is 2.11.
The gross reproduction rate, the number of daughters who would be born to a woman completing her reproductive life at current age-specific fertility rates.
The net reproduction ratio is the expected number of daughters, per newborn prospective mother, who may or may not survive to and through the ages of childbearing.
A stable population, one that has had constant crude birth and death rates for such a long period of time that the percentage of people in every age class remains constant, or equivalently, the population pyramid has an unchanging structure.
A stationary population, one that is both stable and unchanging in size (the difference between crude birth rate and crude death rate is zero).
Measures of centralisation are concerned with the extent to which an area's population is concentrated in its urban centres.
A stable population does not necessarily remain fixed in size. It can be expanding or shrinking.
The crude death rate as defined above and applied to a whole population can give a misleading impression. For example, the number of deaths per 1,000 people can be higher in developed nations than in less-developed countries, despite standards of health being better in developed countries. This is because developed countries have proportionally more older people, who are more likely to die in a given year, so that the overall mortality rate can be higher even if the mortality rate at any given age is lower. A more complete picture of mortality is given by a life table, which summarizes mortality separately at each age. A life table is necessary to give a good estimate of life expectancy.
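The following sketch illustrates direct standardization with entirely hypothetical age-specific death rates and age structures (none of the figures are real national statistics): the "developed" population has lower mortality at every age but a higher crude death rate because of its older age structure, and standardizing both populations to a common age structure reverses the comparison.

```python
# All numbers below are invented for illustration; they are not real statistics.
age_groups = ["0-14", "15-64", "65+"]

# Deaths per 1,000 people per year, by age group (lower at every age in
# the "developed" population).
rates_developed = {"0-14": 0.5, "15-64": 2.0, "65+": 50.0}
rates_developing = {"0-14": 5.0, "15-64": 4.0, "65+": 60.0}

# Population shares by age group (the "developed" population is much older).
shares_developed = {"0-14": 0.15, "15-64": 0.60, "65+": 0.25}
shares_developing = {"0-14": 0.40, "15-64": 0.55, "65+": 0.05}

# A common "standard" age structure applied to both populations.
standard_shares = {"0-14": 0.30, "15-64": 0.60, "65+": 0.10}

def weighted_rate(rates, shares):
    """Deaths per 1,000: age-specific rates weighted by an age structure."""
    return sum(rates[a] * shares[a] for a in age_groups)

print(weighted_rate(rates_developed, shares_developed))    # ~13.8 crude rate
print(weighted_rate(rates_developing, shares_developing))  # ~7.2 crude rate
print(weighted_rate(rates_developed, standard_shares))     # ~6.4 standardized
print(weighted_rate(rates_developing, standard_shares))    # ~9.9 standardized
```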
== Basic equations for regional populations ==
Suppose that a country (or other entity) contains {\displaystyle {\text{Population}}_{t}} persons at time t.
What is the size of the population at time t + 1 ?
{\displaystyle {\text{Population}}_{t+1}={\text{Population}}_{t}+{\text{Natural Increase}}_{t}+{\text{Net Migration}}_{t}}
Natural increase from time t to t + 1:
{\displaystyle {\text{Natural Increase}}_{t}={\text{Births}}_{t}-{\text{Deaths}}_{t}}
Net migration from time t to t + 1:
{\displaystyle {\text{Net Migration}}_{t}={\text{Immigration}}_{t}-{\text{Emigration}}_{t}}
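A minimal sketch of the bookkeeping (balancing) equation, using invented component counts for a single projection step:

```python
def project_population(pop_t, births, deaths, immigration, emigration):
    """One step of the balancing equation:
    Population[t+1] = Population[t] + Natural Increase[t] + Net Migration[t]."""
    natural_increase = births - deaths
    net_migration = immigration - emigration
    return pop_t + natural_increase + net_migration

# Hypothetical yearly components for a small region.
print(project_population(pop_t=1_000_000, births=14_000, deaths=9_000,
                         immigration=5_000, emigration=3_000))  # 1007000
```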
These basic equations can also be applied to subpopulations. For example, the population size of ethnic groups or nationalities within a given society or country is subject to the same sources of change. When dealing with ethnic groups, however, "net migration" might have to be subdivided into physical migration and ethnic reidentification (assimilation). Individuals who change their ethnic self-labels or whose ethnic classification in government statistics changes over time may be thought of as migrating or moving from one population subcategory to another.
More generally, while the basic demographic equation holds true by definition, in practice the recording and counting of events (births, deaths, immigration, emigration) and the enumeration of the total population size are subject to error. So allowance needs to be made for error in the underlying statistics when any accounting of population size or change is made.
The figure in this section shows the latest (2004) UN (United Nations) WHO projections of world population out to the year 2150 (red = high, orange = medium, green = low). The UN "medium" projection shows world population reaching an approximate equilibrium at 9 billion by 2075. Working independently, demographers at the International Institute for Applied Systems Analysis in Austria expect world population to peak at 9 billion by 2070. Throughout the 21st century, the average age of the population is likely to continue to rise.
== The doomsday equation for the Earth's population ==
A 1960 issue of Science magazine included an article by Heinz von Foerster and his colleagues, P. M. Mora and L. W. Amiot, proposing an equation representing the best fit to the historical data on the Earth's population available in 1958:
Fifty years ago, Science published a study with the provocative title “Doomsday: Friday, 13 November, A.D. 2026”. It fitted world population during the previous two millennia with P = 179 × 10^9/(2026.9 − t)^0.99. This “quasi-hyperbolic” equation (hyperbolic having exponent 1.00 in the denominator) projected to infinite population in 2026—and to an imaginary one thereafter.
—Taagepera, Rein. A world population growth model: Interaction with Earth's carrying capacity and technology in limited space Technological Forecasting and Social Change, vol. 82, February 2014, pp. 34–41
In 1975, von Hoerner suggested that von Foerster's doomsday equation can be written, without a significant loss of accuracy, in a simplified hyperbolic form (i.e. with the exponent in the denominator assumed to be 1.00):
{\displaystyle {\text{Global population}}={\frac {179000000000}{2026.9-t}},}
where
2026.9 corresponds to 13 November 2026 AD, the date of the so-called "demographic singularity" and the 115th anniversary of von Foerster's birth;
t is the number of a year of the Gregorian calendar.
Despite its simplicity, von Foerster's equation is very accurate in the range from 4,000,000 BP to 1997 AD. For example, the doomsday equation (developed in 1958, when the Earth's population was 2,911,249,671) predicts a population of 5,986,622,074 for the beginning of the year 1997:
{\displaystyle {\frac {179000000000}{2026.9-1997}}=5986622074.}
The actual figure was 5,924,787,816.
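The worked example above can be checked directly; the short sketch below simply evaluates von Foerster's simplified hyperbolic formula:

```python
def doomsday_population(year):
    """Von Foerster's simplified hyperbolic doomsday equation."""
    return 179e9 / (2026.9 - year)

print(round(doomsday_population(1997)))  # 5986622074 (actual 1997 figure: 5924787816)
print(round(doomsday_population(2025)))  # ~94 billion: divergence near the 2026.9 singularity
```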
The doomsday equation is so called because it predicts that the number of people living on the planet Earth grows without bound as 13 November 2026 approaches, and turns negative immediately afterwards. Said otherwise, taken literally the equation predicts that on 13 November 2026 the human population ceases to be a meaningful quantity.
== Science of population ==
Populations can change through three processes: fertility, mortality, and migration. Fertility involves the number of children that women have and is to be contrasted with fecundity (a woman's childbearing potential). Mortality is the study of the causes, consequences, and measurement of the processes surrounding deaths among members of the population. Demographers most commonly study mortality using the life table, a statistical device that provides information about the mortality conditions (most notably the life expectancy) in the population.
Migration refers to the movement of persons from a locality of origin to a destination place across some predefined, political boundary. Migration researchers do not designate movements 'migrations' unless they are somewhat permanent. Thus, demographers do not consider tourists and travellers to be migrating. While demographers who study migration typically do so through census data on place of residence, indirect sources of data including tax forms and labour force surveys are also important.
Demography is today widely taught in many universities across the world, attracting students with initial training in social sciences, statistics or health studies. Being at the crossroads of several disciplines such as sociology, economics, epidemiology, geography, anthropology and history, demography offers tools to approach a large range of population issues by combining a more technical quantitative approach that represents the core of the discipline with many other methods borrowed from social or other sciences. Demographic research is conducted in universities, in research institutes, as well as in statistical departments and in several international agencies. Population institutions are part of the CICRED (International Committee for Coordination of Demographic Research) network while most individual scientists engaged in demographic research are members of the International Union for the Scientific Study of Population, or a national association such as the Population Association of America in the United States, or affiliates of the Federation of Canadian Demographers in Canada.
== Population composition ==
Population composition is the description of population defined by characteristics such as age, race, sex or marital status. These descriptions can be necessary for understanding the social dynamics from historical and comparative research. This data is often compared using a population pyramid.
Population composition is also a very important part of historical research. Information reaching back hundreds of years is not always worthwhile, because the number of people for whom data are available may not yield the information that is important (such as population size). Lack of information on the original data-collection procedures may also prevent an accurate evaluation of data quality.
== Demographic analysis in institutions and organizations ==
=== Labor market ===
The demographic analysis of labor markets can be used to show slow population growth, population ageing, and the increased importance of immigration. The U.S. Census Bureau projects that in the next 100 years, the United States will face some dramatic demographic changes. The population is expected to grow more slowly and age more rapidly than ever before and the nation will become a nation of immigrants. This influx is projected to rise over the next century as new immigrants and their children will account for over half the U.S. population. These demographic shifts could ignite major adjustments in the economy, more specifically, in labor markets.
=== Turnover in internal labor markets ===
People decide to exit organizations for many reasons, such as better jobs, dissatisfaction, or concerns within the family. The causes of turnover can be split into two separate sets of factors, one linked to the culture of the organization and the other covering everything else. People who do not fully accept a culture might leave voluntarily; others might leave because they fail to fit in or to adapt within a particular organization.
=== Population ecology of organizations ===
A basic definition of population ecology is the study of the distribution and abundance of organisms. As it relates to organizations and demography, organizations face various liabilities to their continued survival. Hospitals, like all other large and complex organizations, are affected by the environment in which they operate. For example, one study examined the closure of acute-care hospitals in Florida over a particular period, looking at the effects of the size, age, and niche density of those hospitals. Population ecology theory holds that organizational outcomes are mostly determined by environmental factors. Among the several factors of the theory, four apply to the hospital-closure example: size, age, the density of the niches in which organizations operate, and the density of the niches in which organizations are established.
==== Business organizations ====
Demographers may be called upon to assist business organizations with problems such as determining the best prospective location for a branch store or service outlet, predicting the demand for a new product, or analyzing the dynamics of a company's workforce. Typical tasks include choosing a new location for a branch of a bank, choosing the area in which to open a new supermarket, advising a bank loan officer on whether a particular location would be a profitable site for a car wash, and determining which shopping area in a metropolitan area would be the best to purchase and redevelop.
Standardization is a useful demographic technique used in the analysis of a business. It can be used as an interpretive and analytic tool for the comparison of different markets.
==== Nonprofit organizations ====
These organizations are interested in the number and characteristics of their clients so that they can maximize the uptake of their products, the scope of their influence, and the reach of their services and charitable works.
== See also ==
== References ==
== Further reading ==
Josef Ehmer, Jens Ehrhardt, Martin Kohli (Eds.): Fertility in the History of the 20th Century: Trends, Theories, Policies, Discourses. Historical Social Research 36 (2), 2011.
Glad, John. 2008. Future Human Evolution: Eugenics in the Twenty-First Century. Hermitage Publishers, ISBN 1-55779-154-6
Gavrilova N.S., Gavrilov L.A. 2011. Ageing and Longevity: Mortality Laws and Mortality Forecasts for Ageing Populations [In Czech: Stárnutí a dlouhověkost: Zákony a prognózy úmrtnosti pro stárnoucí populace]. Demografie, 53(2): 109–128.
Preston, Samuel, Patrick Heuveline, and Michel Guillot. 2000. Demography: Measuring and Modeling Population Processes. Blackwell Publishing.
Gavrilov L.A., Gavrilova N.S. 2010. Demographic Consequences of Defeating Aging. Rejuvenation Research, 13(2-3): 329–334.
Paul R. Ehrlich (1968), The Population Bomb Controversial Neo-Malthusianist pamphlet
Leonid A. Gavrilov & Natalia S. Gavrilova (1991), The Biology of Life Span: A Quantitative Approach. New York: Harwood Academic Publisher, ISBN 3-7186-4983-7
Andrey Korotayev & Daria Khaltourina (2006). Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow: URSS ISBN 5-484-00414-4 [2]
Uhlenberg P. (Editor), (2009) International Handbook of the Demography of Aging, New York: Springer-Verlag, pp. 113–131.
Paul Demeny and Geoffrey McNicoll (Eds.). 2003. The Encyclopedia of Population. New York, Macmillan Reference USA, vol.1, 32-37
Phillip Longman (2004), The Empty Cradle: how falling birth rates threaten global prosperity and what to do about it
Sven Kunisch, Stephan A. Boehm, Michael Boppel (eds) (2011). From Grey to Silver: Managing the Demographic Change Successfully, Springer-Verlag, Berlin Heidelberg, ISBN 978-3-642-15593-2
Joe McFalls (2007), Population: A Lively Introduction, Population Reference Bureau [3] Archived 1 June 2013 at the Wayback Machine
Ben J. Wattenberg (2004), How the New Demography of Depopulation Will Shape Our Future. Chicago: R. Dee, ISBN 1-56663-606-X
Perry, Marc J. & Mackun, Paul J. Population Change & Distribution: Census 2000 Brief. (2001)
Schutt, Russell K. 2006. "Investigating the Social World: The Process and Practice of Research". SAGE Publications.
Siegal, Jacob S. (2002), Applied Demography: Applications to Business, Government, Law, and Public Policy. San Diego: Academic Press.
== External links ==
Quick demography data lookup (archived 4 March 2016)
Historicalstatistics.org Links to historical demographic and economic statistics
United Nations Population Division: Homepage
World Population Prospects, the 2012 Revision, Population estimates and projections for 230 countries and areas (archived 6 May 2011)
World Urbanization Prospects, the 2011 Revision, Estimates and projections of urban and rural populations and urban agglomerations
Probabilistic Population Projections, the 2nd Revision, Probabilistic Population Projections, based on the 2010 Revision of the World Population Prospects (archived 13 December 2012)
Java Simulation of Population Dynamics.
Basic Guide to the World: Population changes and trends, 1960–2003
Brief review of world basic demographic trends
Family and Fertility Surveys (FFS) | Wikipedia/Demographic |
The world's principal religions and spiritual traditions may be classified into a small number of major groups, though this is not a uniform practice. This theory began in the 18th century with the goal of recognizing the relative degrees of civility in different societies, but this concept of a ranking order has since fallen into disrepute in many contemporary cultures.
== Religious demographics ==
One way to define a major religion is by the number of current adherents. The population numbers by religion are computed by a combination of census reports and population surveys (in countries where religion data are not collected in the census, for example the United States or France), but results can vary widely depending on the way questions are phrased, the definitions of religion used, and the bias of the agencies or organizations conducting the survey. Informal or unorganized religions are especially difficult to count.
There is no consensus among researchers as to the best methodology for determining the religiosity profile of the world's population. A number of fundamental aspects are unresolved:
Whether to count "historically predominant religious culture[s]"
Whether to count only those who actively "practice" a particular religion
Whether to count based on a concept of "self-identification as adherents"
Whether to count only those who expressly self-identify with a particular denomination
Whether to count only adults, or to include children as well
Whether to rely on official government-provided statistics
Whether to use multiple sources and ranges or single "best source(s)"
=== Largest religious groups ===
=== Medium-sized religions ===
== By region ==
Religions by country according to The World Factbook – CIA
Religion by region
Religion in Africa
Religion in Antarctica
Religion in Asia
Religion in the Middle East
Muslim world (SW Asia and N Africa)
Religion in Europe
Religion in the European Union
Christian world
Religion in North America
Religion in Oceania
Religion in South America
== Trends in adherence ==
== Maps of self-reported adherence ==
== Classification ==
Religious traditions fall into super-groups in comparative religion, arranged by historical origin and mutual influence. Abrahamic religions originate in the Middle East, Indian religions in the Indian subcontinent (South Asia) and East Asian religions in East Asia. Another group with supra-regional influence is the Afro-American religions, which have their origins in Central and West Africa.
Middle Eastern religions:
Abrahamic religions are the largest group, and these consist mainly of Judaism, Christianity, Islam, and the Baháʼí Faith. They are named for the Hebrew patriarch Abraham, and are unified by the practice of monotheism. Today, at least 3.8 billion people are followers of Abrahamic religions and are spread widely around the world apart from the regions around East and Southeast Asia. Several Abrahamic organizations are vigorous proselytizers. Abrahamic religions with fewer adherents include the Baháʼí Faith, the Druze faith, Samaritanism, and Rastafari.
Iranian religions, partly of Indo-European origins, include Zoroastrianism, Yazdânism, Uatsdin, Yarsanism, Manichaeism, and Yazidism.
Gnosticism, including historical traditions of Mandaeism, which is still alive in the Middle East and diaspora.
Eastern religions:
Indian religions originated in Greater India and tend to share a number of key concepts, such as dharma, karma, and reincarnation, among others. They are most influential across the Indian subcontinent, East Asia, Southeast Asia, as well as isolated parts of Russia. The main Indian religions are Hinduism, Jainism, Buddhism and Sikhism.
East Asian religions consist of several East Asian religions which make use of the concept of Tao (in Chinese), Đạo (in Vietnamese) or Dō (in Japanese or Korean). They include many Chinese folk religions, Taoism and Confucianism, as well as Vietnamese, Korean and Japanese religions, which are influenced by Chinese religious thought.
Indigenous ethnic religions, found on every continent, now marginalized by the major organized faiths in many parts of the world or persisting as undercurrents (folk religions) of major religions. Includes traditional African religions, Asian shamanism, Native American religions, Austronesian and Australian Aboriginal traditions, Chinese folk religions, and postwar Shinto. Under more traditional listings, this has been referred to as "paganism" along with historical polytheism.
African religions:
The religions of the tribal peoples of Sub-Saharan Africa, but excluding ancient Egyptian religion, which is considered to belong to the ancient Middle East;
African diasporic religions practiced in the Americas, imported as a result of the Atlantic slave trade of the 16th to 18th centuries, building on traditional religions of Central and West Africa.
New religious movement is the term applied to any religious faith which has emerged since the 19th century, often syncretizing, re-interpreting or reviving aspects of older traditions such as Ayyavazhi, Mormonism, Ahmadiyya, Jehovah's Witnesses, polytheistic reconstructionism, and so forth.
== History of religious categories ==
=== Christian categorizations ===
Initially, Christians had a simple dichotomy of world beliefs: Christian civility versus foreign heresy or barbarity. In the 18th century, "heresy" was clarified to mean Judaism and Islam; along with paganism, this created a fourfold classification which spawned such works as John Toland's Nazarenus, or Jewish, Gentile, and Mahometan Christianity, which represented the three Abrahamic religions as different "nations" or sects within religion itself, the "true monotheism."
Daniel Defoe described the original definition as follows: "Religion is properly the Worship given to God, but 'tis also applied to the Worship of Idols and false Deities." At the turn of the 19th century, in between 1780 and 1810, the language dramatically changed: instead of "religion" being synonymous with spirituality, authors began using the plural, "religions", to refer to both Christianity and other forms of worship. Therefore, Hannah Adams's early encyclopedia, for example, had its name changed from An Alphabetical Compendium of the Various Sects... to A Dictionary of All Religions and Religious Denominations.
In 1838, the four-way division of Christianity, Judaism, Mahommedanism (archaic terminology for Islam) and paganism was multiplied considerably by Josiah Conder's Analytical and Comparative View of All Religions Now Extant among Mankind. Conder's work still adhered to the four-way classification, but in his eye for detail he puts together much historical work to create something resembling the modern Western image: he includes Druze, Yazidis, Mandaeans, and Elamites under a list of possibly monotheistic groups, and under the final category, of "polytheism and pantheism", he listed Zoroastrianism, "Vedas, Puranas, Tantras, Reformed sects" of India as well as "Brahminical idolatry", Buddhism, Jainism, Sikhism, Lamaism, "religion of China and Japan", and "illiterate superstitions" as others.
The modern meaning of the phrase "world religion", putting non-Christians at the same level as Christians, began with the 1893 Parliament of the World's Religions in Chicago. The Parliament spurred the creation of a dozen privately funded lectures with the intent of informing people of the diversity of religious experience: these lectures funded researchers such as William James, D. T. Suzuki, and Alan Watts, who greatly influenced the public conception of world religions.
In the latter half of the 20th century, the category of "world religion" fell into serious question, especially for drawing parallels between vastly different cultures, and thereby creating an arbitrary separation between the religious and the secular.
=== Islam categorizations ===
In Islam, the Quran mentions three categories: Muslims, the People of the Book, and idol worshipers.
== See also ==
Irreligion
List of religions and spiritual traditions
List of religious populations
World religions
Numinous
Religious conversion
State religion
== Notes ==
== References ==
== Sources ==
== Further reading ==
== External links ==
Animated history of World Religions—from the "Religion & Ethics" part of the BBC website, interactive animated view of the spread of world religions (requires Flash plug-in).
BBC A-Z of Religions and Beliefs
Major World Religions
International Council for Inter-Religious Cooperation | Wikipedia/Religious_demography |
The gross reproduction rate (GRR) is the average number of daughters a woman would have if she survived all of her childbearing years, roughly up to age 45, subject to the age-specific fertility rate and sex ratio at birth throughout that period. This rate is a measure of replacement fertility when mortality is left out of the equation. It is often regarded as the extent to which the generation of daughters replaces the preceding generation of women. A value equal to one indicates that women will exactly replace themselves; a value greater than one indicates that the next generation of women will outnumber the current one; and a value less than one indicates that the next generation of women will be less numerous than the current one.
The gross reproduction rate is similar to the net reproduction rate (NRR), the average number of daughters a woman would have if she survived her lifetime subject to the age-specific fertility rate and mortality rate throughout that period.
== Formulas ==
{\displaystyle GRR=\Sigma ASFR_{f'}\times 5}
Note that the result is not multiplied by 1,000, because the GRR is expressed per individual woman, not per 1,000 women.
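As an illustrative sketch, the GRR can be computed from daughter-only age-specific fertility rates in five-year age groups; the rates below are invented for the example:

```python
# Hypothetical age-specific fertility rates for DAUGHTERS only (girls born
# per woman per year), one value per five-year age group 15-19 ... 45-49.
asfr_daughters = [0.012, 0.042, 0.056, 0.046, 0.028, 0.012, 0.004]

# GRR = sum of the daughter-only rates x 5, since each rate applies for
# the five years a woman spends in that age group.
grr = 5 * sum(asfr_daughters)
print(grr)  # ≈ 1.0 daughter per woman (roughly self-replacement, ignoring mortality)
```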
== See also ==
Net reproduction rate
Sub-replacement fertility
Total fertility rate
== References == | Wikipedia/Gross_reproduction_rate |
The total fertility rate (TFR) of a population is the average number of children that are born to a woman over her lifetime, if they were to experience the exact current age-specific fertility rates (ASFRs) through their lifetime, and they were to live from birth until the end of their reproductive life.
As of 2023, the total fertility rate varied widely across the world, from 0.7 in South Korea, to 6.1 in Niger. Among sovereign countries that were not city states or had a very small number of inhabitants, in 2024 the following countries had a TFR of 1.0 or lower: South Korea, Taiwan, and Ukraine; the following countries had a TFR of 1.2 or lower: Chile, China, Japan, Malta, Poland, and Spain.
Fertility tends to be inversely correlated with levels of economic development. Historically, developed countries have significantly lower fertility rates, generally correlated with greater wealth, education, urbanization, and other factors. Conversely, in least developed countries, fertility rates tend to be higher. Families desire children for their labor and as caregivers for their parents in old age. Fertility rates are also higher due to the lack of access to contraceptives, generally lower levels of female education, and lower rates of female employment.
From antiquity to the beginning of the industrial revolution, around the year 1800, total fertility rates of 4.5 to 7.5 were common around the world. After that, the TFR declined only slightly, and up until the 1960s the global average TFR was still about 5. Since then, the global average TFR has dropped steadily to less than half that number, 2.3 births per woman in 2023.
The United Nations predicts that global fertility will continue to decline for the remainder of this century and reach a below-replacement level of 1.8 by 2100, and that world population will peak in 2084.
== Parameter characteristics ==
The Total Fertility Rate (TFR) is not based on the actual fertility of a specific group of women, as that would require waiting until they have completed childbearing. It also does not involve counting the total number of children born over their lifetime. Instead, the TFR is based on the age-specific fertility rates of women in their "child-bearing years," typically considered to be ages 15–44 in international statistical usage.
The TFR is a measure of the fertility of an imaginary woman who experiences the age-specific fertility rates for ages 15–49 that were recorded for a specific population in a given year. It represents the average number of children a woman would potentially have if she were to go through all her childbearing years in a single year, subject to the age-specific fertility rates for that year. In simpler terms, the TFR is the number of children a woman would have if she were to experience the prevailing fertility rates at all ages from a single given year and survived throughout her childbearing years.
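As an illustrative sketch of this synthetic-cohort construction, the TFR can be computed by summing age-specific fertility rates over five-year age groups and multiplying by the group width; the rates below are invented for the example:

```python
# Hypothetical age-specific fertility rates (births per woman per year),
# one value per five-year age group from 15-19 through 45-49.
asfr = [0.030, 0.090, 0.110, 0.090, 0.050, 0.020, 0.004]

# TFR: the births a synthetic woman would accumulate if she experienced
# this year's rates at every age, i.e. the rates summed over the five-year
# width of each age group.
tfr = 5 * sum(asfr)
print(tfr)  # ≈ 1.97 children per woman
```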
== Related parameters ==
=== Net reproduction rate ===
An alternative measure of fertility is the net reproduction rate (NRR), which calculates the number of daughters a female would have in her lifetime if she were subject to prevailing age-specific fertility and mortality rates in a given year. When the NRR is exactly 1, each generation of females is precisely replacing itself.
The NRR is not as commonly used as the TFR, but it is particularly relevant in cases where the number of male babies born is very high due to gender imbalance and sex selection. This is a significant consideration in world population dynamics, especially given the high level of gender imbalance in the heavily populated nations of China and India. The gross reproduction rate (GRR) is the same as the NRR, except that, like the TFR, it disregards life expectancy.
=== Total period fertility rate ===
The TFR, sometimes called TPFR—total period fertility rate, is a better index of fertility than the crude birth rate (annual number of births per thousand population) because it is independent of the age structure of the population, but it is a poorer estimate of actual completed family size than the total cohort fertility rate, which is obtained by summing the age-specific fertility rates that actually applied to each cohort as they aged through time.
In particular, the TFR does not necessarily predict how many children young women now will eventually have, as their fertility rates in years to come may change from those of older women now. However, the TFR is a reasonable summary of current fertility levels. TFR and long term population growth rate, g, are closely related. For a population structure in a steady state, growth rate equals
{\displaystyle \log(\mathrm {TFR} /2)/X_{m}}, where {\displaystyle X_{m}} is the mean age for childbearing women.
==== Tempo effect ====
The TPFR (total period fertility rate) is affected by a tempo effect: if the age of childbearing increases while life-cycle fertility is unchanged, the TPFR will be lower while the age of childbearing is rising, because the births are occurring later; once the age of childbearing stops increasing, the TPFR will rise again, because the deferred births occur in the later period, even though life-cycle fertility has not changed. In other words, the TPFR is a misleading measure of life-cycle fertility when the childbearing age is changing, owing to this statistical artifact. This was a significant factor in some countries, such as the Czech Republic and Spain in the 1990s. Some measures seek to adjust for this timing effect to give a better measure of life-cycle fertility.
=== Replacement rates ===
Replacement fertility is the total fertility rate at which women give birth to enough babies to sustain population levels, assuming that mortality rates remain constant and net migration is zero. If replacement level fertility is sustained over a sufficiently long period, each generation will exactly replace itself. In 2003, the replacement fertility rate was 2.1 births per female for most developed countries (2.1 in the UK, for example), but could be as high as 3.5 in undeveloped countries because of higher mortality rates, especially child mortality. The global average for the replacement total fertility rate, eventually leading to a stable global population, for 2010–2015, was 2.3 children per female.
== Lowest-low fertility ==
The term lowest-low fertility is defined as a TFR at or below 1.3. Lowest-low fertility is found almost exclusively within East Asian countries and Southern European countries. The East Asian American community in the United States also exhibits lowest-low fertility. At one point in the late 20th century and early 21st century this was also observed in Eastern and Southern Europe. However, the fertility rate then began to rise in most countries of Europe. Since the 2020s, however, TFRs have been falling again: in 2023, Spain's TFR fell to 1.19, and Italy's TFR fell to 1.2 children per woman. In Canada, the TFR in 2023 fell to its lowest ever recorded level, at 1.26 children per woman, with Statistics Canada reporting that Canada "has now joined the group of ‘lowest-low’ fertility countries".
The lowest TFR recorded anywhere in the world in recorded history is for the Xiangyang district of Jiamusi city (Heilongjiang, China), which had a TFR of 0.41 in 2000. In 2023, South Korea's TFR was 0.72, the world's lowest for that year.
Outside Asia, the lowest TFR ever recorded was 0.80 for Eastern Germany in 1994. The low Eastern German value was influenced by a change to higher maternal age at birth, with the consequence that neither older cohorts (e.g. women born until the late 1960s), who often already had children, nor younger cohorts, who were postponing childbirth, had many children during that time. The total cohort fertility rate of each age cohort of women in East Germany did not drop as significantly.
== Population-lag effect ==
A population that maintained a TFR of 3.8 over an extended period, without a correspondingly high death or emigration rate, would increase rapidly, with a doubling period of roughly 32 years. A population that maintained a TFR of 2.0 over a long time would decrease, unless it had large enough immigration.
It may take several generations for a change in the total fertility rate to be reflected in birth rate, because the age distribution must reach equilibrium. For example, a population that has recently dropped below replacement-level fertility will continue to grow, because the recent high fertility produced large numbers of young couples, who would now be in their childbearing years.
This phenomenon carries forward for several generations and is called population momentum, population inertia, or population-lag effect. This time-lag effect is of great importance to the growth rates of human populations.
TFR (net) and long-term population growth rate, g, are closely related. For a population structure in a steady state and with zero migration,
{\textstyle g={\tfrac {\log({\text{TFR}}/2)}{{\text{X}}_{m}}}}, where {\displaystyle {\text{X}}_{m}} is the mean age for childbearing women, and thus {\textstyle P(t)=P(0)e^{gt}}. The accompanying figure shows the empirical relation between the two variables in a cross-section of countries with the most recent year-on-year growth rates. The fitted parameter {\textstyle {\tfrac {1}{b}}} should be an estimate of {\displaystyle {\text{X}}_{m}}; here it equals {\textstyle {\tfrac {1}{0.02}}=50} years, well off the mark because of population momentum. For example, for {\textstyle \log({\tfrac {\text{TFR}}{2}})=0}, g should be exactly zero, which is seen not to be the case.
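The doubling-period claim above, and the steady-state relation between TFR and g, can be checked with a short sketch; the mean age at childbearing of 30 years is an assumption made for the example, and the logarithm is taken to be natural:

```python
import math

def steady_state_growth_rate(tfr, mean_childbearing_age):
    """Long-run growth rate g = ln(TFR/2) / X_m for a steady-state age
    structure with zero migration (mortality before the end of childbearing
    and the sex ratio at birth are ignored for simplicity)."""
    return math.log(tfr / 2) / mean_childbearing_age

g = steady_state_growth_rate(tfr=3.8, mean_childbearing_age=30)
print(round(g, 4))             # ≈ 0.0214 per year
print(round(math.log(2) / g))  # ≈ 32 years to double, matching the text
```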
== Influencing factors ==
Fertility factors are determinants of the number of children that an individual is likely to have. Fertility factors are mostly positive or negative correlations without certain causations.
Factors generally associated with increased fertility include the intention to have children, very high levels of gender inequality, inter-generational transmission of values, marriage and cohabitation, maternal and social support, rural residence, pro-family government programs, low IQ and increased food production.
Factors generally associated with decreased fertility include rising income, value and attitude changes, education, female labor participation, population control, age, contraception, partner reluctance to having children, a low level of gender inequality, and infertility. The effect of all these factors can be summarized with a plot of total fertility rate against Human Development Index (HDI) for a sample of countries. The chart shows that the two factors are inversely correlated, that is, in general, the lower a country's HDI the higher its fertility.
Another common way of summarizing the relationship between economic development and fertility is a plot of TFR against per capita GDP, a proxy for standard of living. This chart shows that per capita GDP is also inversely correlated with fertility.
The impact of human development on TFR can best be summarized by a quote from Karan Singh, a former minister of population in India. At a 1974 United Nations population conference in Bucharest, he said "Development is the best contraceptive."
Wealthy countries, those with high per capita GDP, usually have a lower fertility rate than poor countries, those with low per capita GDP. This may seem counter-intuitive. The inverse relationship between income and fertility has been termed a demographic-economic paradox because evolutionary biology suggests that greater means should enable the production of more offspring, not fewer.
Many of these factors may differ by region and social class. For instance, Scandinavian countries and France are among the least religious in the EU, but have the highest TFR, while the opposite is true about Portugal, Greece, Cyprus, Poland and Spain.
== National efforts to increase or decrease fertility ==
Governments have often set population targets, to either increase or decrease the total fertility rate, or to have certain ethnic or socioeconomic groups have a lower or higher fertility rate. Often such policies have been interventionist, and abusive. The most notorious natalist policies of the 20th century include those in communist Romania and communist Albania, under Nicolae Ceaușescu and Enver Hoxha respectively.
The natalist policy in Romania between 1967 and 1989 was very aggressive, including outlawing abortion and contraception, routine pregnancy tests for women, taxes on childlessness, and legal discrimination against childless people. It resulted in large numbers of children put into Romanian orphanages by parents who could not cope with raising them, street children in the 1990s, when many orphanages were closed and the children ended up on the streets, overcrowding in homes and schools, and over 9,000 women who died due to illegal abortions.
Conversely, in China the government sought to lower the fertility rate and, as such, enacted the one-child policy (1978–2015), which included abuses such as forced abortions. During the national emergency of 1975, a massive compulsory sterilization drive was carried out in India, but it is considered to have been a failure and is criticized as an abuse of power.
Some governments have sought to regulate which groups of society could reproduce through eugenic policies, including forced sterilizations of population groups they considered undesirable. Such policies were carried out against ethnic minorities in Europe and North America in the first half of the 20th century, and more recently in Latin America against the Indigenous population in the 1990s; in Peru, former President Alberto Fujimori has been accused of genocide and crimes against humanity as a result of a sterilization program put in place by his administration targeting indigenous people (mainly the Quechua and Aymara people).
Within these historical contexts, the notion of reproductive rights has developed. Such rights are based on the concept that each person freely decides if, when, and how many children to have; the decision belongs to the individual, not to the state or to religion. According to the Office of the United Nations High Commissioner for Human Rights, reproductive rights "rest on the recognition of the basic rights of all couples and individuals to decide freely and responsibly the number, spacing and timing of their children and to have the information and means to do so, and the right to attain the highest standard of sexual and reproductive health. It also includes the right to make decisions concerning reproduction free of discrimination, coercion and violence, as expressed in human rights documents".
== History and future projections ==
From around 10,000 BC to the beginning of the Industrial Revolution, fertility rates around the world were high by 21st-century standards, ranging from 4.5 to 7.5 children per woman. The onset of the Industrial Revolution around the year 1800 brought about what has come to be called the demographic transition. This eventually led to a long-term decline in TFR in every region of the world that has continued in the 21st century.
=== Before 1800 ===
During this period, fertility rates of 4.5 to 7.5 were common around the world. Child mortality could reach 50%, and that, plus the need to produce workers, male heirs, and old-age caregivers, required a high fertility rate by 21st-century standards. To produce two adult children in this high-mortality environment required at least four or more births. For example, fertility rates in Western Europe before 1800 ranged from 4.5 in Scandinavia to 6.2 in Belgium. In 1800, the TFR in the United States was 7.0. Fertility rates in East Asia during this period were similar to those in Europe. Fertility rates in Roman Egypt were about 7.4.
Despite these high fertility rates, the number of surviving children per woman was always around two because of high mortality rates. As a result, global population growth was still very slow, about 0.04% per year.
=== 1800 to 1950 ===
After 1800, the Industrial Revolution began in some places, particularly Great Britain, continental Europe, and the United States, and they underwent the beginnings of what is now called the demographic transition. Stage two of this process fueled a steady reduction in mortality rates due to improvements in public sanitation, personal hygiene and the food supply, which reduced the number of famines.
These reductions in mortality rates, particularly reductions in child mortality that increased the fraction of children surviving, plus other major societal changes such as urbanization and the increased social status of women, led to stage three of the demographic transition: a reduction in fertility rates, because there was simply no longer a need to bear so many children.
The example from the US of the correlation between child mortality and the fertility rate is illustrative. In 1800, child mortality in the US was 33%, meaning that one third of all children born would die before their fifth birthday. The TFR in 1800 was 7.0, meaning that the average female would bear seven children during their lifetime. In 1900, child mortality in the US had declined to 23%, a reduction of almost one third, and the TFR had declined to 3.9, a reduction of 44%. By 1950, child mortality had declined dramatically to 4%, a reduction of 84%, and the TFR declined to 3.2. By 2018, child mortality had declined further to 0.6% and the TFR declined to 1.9, below replacement level.
The chart shows that the decline in the TFR since the 1960s has occurred in every region of the world. The global TFR is projected to continue declining for the remainder of the century, and reach a below-replacement level of 1.8 by 2100.
In 2022, the global TFR was 2.3. Because the global fertility replacement rate for 2010–2015 was estimated to be 2.3, humanity has achieved or is approaching a significant milestone where the global fertility rate is equal to the global replacement rate.
The global fertility rate may have fallen below the global replacement level of 2.2 children per woman as early as 2023. Numerous developing countries have experienced an accelerated fertility decline in the 2010s and early 2020s. The average fertility rate in countries such as Thailand or Chile approached the mark of one child per woman, which triggered concerns about the rapid aging of populations worldwide.
==== Total fertility rates in 2050 and 2100 ====
The table shows that after 1965, the demographic transition spread around the world, and the global TFR began a long decline that continues in the 21st century.
== By region ==
The United Nations Population Division divides the world into six geographical regions. The table below shows the estimated TFR for each region.
In 2013, the TFRs of Europe, Latin America and the Caribbean, and Northern America were below the global replacement-level fertility rate of 2.1 children per female.
=== Africa ===
Africa has a TFR of 4.1, the highest in the world. Angola, Benin, DR Congo, Mali, and the Niger have the highest TFR. In 2023, the most populous country in Africa, Nigeria, had an estimated TFR of 4.57. In 2023, the second most populous African country, Ethiopia, had an estimated TFR of 3.92.
The poverty of Africa, and the high maternal mortality and infant mortality had led to calls from WHO for family planning, and the encouragement of smaller families.
=== Asia ===
==== Eastern Asia ====
Hong Kong, Macau, Singapore, South Korea, and Taiwan have the lowest-low fertility, defined as TFR at or below 1.3, and are among the lowest in the world. In 2004, Macau had a TFR below 1.0. In 2018, North Korea had the highest TFR in East Asia, at 1.95.
===== China =====
In 2022, China's TFR was 1.09. China implemented the one-child policy in January 1979 as a drastic population planning measure to control the ever-growing population at the time. In January 2016, the policy was replaced with the two-child policy. In July 2021, a three-child policy was introduced, as China's population is aging faster than almost any other country in modern history.
===== Japan =====
In 2022, Japan had a TFR of 1.26. Japan's population is rapidly aging due to both a long life expectancy and a low birth rate. The total population is shrinking, losing 430,000 in 2018, to a total of 126.4 million. Hong Kong and Singapore mitigate this through immigrant workers. In Japan, a serious demographic imbalance has developed, partly due to limited immigration to Japan.
===== South Korea =====
In South Korea, a low birthrate is one of its most urgent socio-economic challenges. Rising housing expenses, shrinking job opportunities for younger generations, and insufficient support for families with newborns, whether from the government or from employers, are among the major explanations for its falling TFR, which dropped to 0.92 in 2019. Koreans have yet to find viable solutions to make the birthrate rebound, even after trying out dozens of programs over a decade, including subsidizing child-rearing expenses, giving priority for public rental housing to couples with multiple children, funding day care centers, reserving seats on public transportation for pregnant women, and so on.
In the past 20 years, South Korea has recorded some of the lowest fertility and marriage levels in the world. As of 2022, South Korea is the country with the world's lowest total fertility rate, at 0.78. In 2022, the TFR of the capital Seoul was 0.57.
==== Southern Asia ====
===== Bangladesh =====
The fertility rate fell from 6.8 in 1970–1975, to 2.0 in 2020, an interval of about 47 years, or a little less than two generations.
===== India =====
The Indian fertility rate has declined significantly over the early 21st century. The Indian TFR declined from 5.2 in 1971 to 2.2 in 2018. The TFR in India declined to 2.0 in 2019–2020, marking the first time it has gone below replacement level.
===== Iran =====
In the Iranian calendar year (March 2019 – March 2020), Iran's total fertility rate fell to 1.8.
==== Western Asia ====
In 2023, the TFR of Turkey reached 1.51.
=== Europe ===
The average total fertility rate in the European Union (EU-27) was calculated at 1.53 children per female in 2021. In 2021, France had the highest TFR among EU countries at 1.84, followed by Czechia (1.83), Romania (1.81), Ireland (1.78) and Denmark (1.72). In 2021, Malta had the lowest TFR among the EU countries, at 1.13. Other southern European countries also had very low TFR (Portugal 1.35, Cyprus 1.39, Greece 1.43, Spain 1.19, and Italy 1.25).
In 2021, the United Kingdom had a TFR of 1.53. In 2021 estimates for the non-EU European post-Soviet states group, Russia had a TFR of 1.60, Moldova 1.59, Ukraine 1.57, and Belarus 1.52.
Emigration of young adults from Eastern Europe to the West aggravates the demographic problems of those countries. People from countries such as Bulgaria, Moldova, Romania, and Ukraine are particularly moving abroad.
=== Latin America and the Caribbean ===
In 2023, the TFR of Brazil, the most populous country in the region, was estimated at 1.75. In 2021, the second most populous country, Mexico, had an estimated TFR of 1.73. The next most populous four countries in the region had estimated TFRs of between 1.9 and 2.2 in 2023, including Colombia (1.94), Argentina (2.17), Peru (2.18), and Venezuela (2.20). Belize had the highest estimated TFR in the region at 2.59 in 2023. In 2021, Puerto Rico had the lowest, at 1.25.
=== Northern America ===
==== Canada ====
In 2023, the TFR of Canada was 1.26.
==== United States ====
The total fertility rate in the United States after World War II peaked at about 3.8 children per female in the late 1950s, dropped to below replacement in the early 1970s, and by 1999 was at 2 children. Currently, fertility is below replacement among the native born, and above replacement among immigrant families, most of whom come to the US from countries with higher fertility. However, the fertility rate of immigrants to the US has been found to decrease sharply in the second generation, correlating with improved education and income. In 2021, the US TFR was 1.664, ranging between over 2 in some states and under 1.6 in others.
=== Oceania ===
==== Australia ====
After World War II, Australia's TFR was approximately 3.0. In 2017, Australia's TFR was 1.74, i.e. below replacement.
== See also ==
List of countries by total fertility rate
Birth rate
Fertility and intelligence
Income and fertility
List of countries by past fertility rate
Sub-replacement fertility
Zero population growth
== References ==
== Further reading ==
== External links ==
CIA World Factbook - Total Fertility Rate by country
eurostat - Your key to European statistics
Population Reference Bureau Glossary of Population Terms
Java Simulation of Total Fertility.
Java Simulation of Population Dynamics.
How Fertility Changes Across Immigrant Generations.
Fertility Trends, Marriage Patterns and Savant Typologies.
Human Fertility Database: Collection of age specific fertility rates for some developed countries. | Wikipedia/Total_fertility_rate |
In statistics, model specification is part of the process of building a statistical model: specification consists of selecting an appropriate functional form for the model and choosing which variables to include. For example, given personal income {\displaystyle y} together with years of schooling {\displaystyle s} and on-the-job experience {\displaystyle x}, we might specify a functional relationship {\displaystyle y=f(s,x)} as follows:
{\displaystyle \ln y=\ln y_{0}+\rho s+\beta _{1}x+\beta _{2}x^{2}+\varepsilon }
where {\displaystyle \varepsilon } is the unexplained error term that is supposed to comprise independent and identically distributed Gaussian variables.
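As an illustrative sketch of estimating such a specification, the following generates synthetic data from the model above (with invented parameter values) and recovers the coefficients by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic data generated from the specified model; the parameter values
# are invented purely for illustration.
s = rng.uniform(8, 20, n)    # years of schooling
x = rng.uniform(0, 40, n)    # years of on-the-job experience
ln_y0, rho, beta1, beta2 = 9.0, 0.08, 0.05, -0.001
ln_y = ln_y0 + rho * s + beta1 * x + beta2 * x**2 + rng.normal(0, 0.3, n)

# Ordinary least squares on the specified functional form.
X = np.column_stack([np.ones(n), s, x, x**2])
coef, *_ = np.linalg.lstsq(X, ln_y, rcond=None)
print(coef)  # estimates of [ln y0, rho, beta1, beta2]; should be close to the values above
```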
The statistician Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".
== Specification error and bias ==
Specification error occurs when the functional form or the choice of independent variables poorly represent relevant aspects of the true data-generating process. In particular, bias (the expected value of the difference of an estimated parameter and the true underlying value) occurs if an independent variable is correlated with the errors inherent in the underlying process. There are several different possible causes of specification error; some are listed below.
An inappropriate functional form could be employed.
A variable omitted from the model may have a relationship with both the dependent variable and one or more of the independent variables (causing omitted-variable bias).
An irrelevant variable may be included in the model (although this does not create bias, it involves overfitting and so can lead to poor predictive performance).
The dependent variable may be part of a system of simultaneous equations (giving simultaneity bias).
Additionally, measurement errors may affect the independent variables: while this is not a specification error, it can create statistical bias.
Note that all models will have some specification error. Indeed, in statistics there is a common aphorism that "all models are wrong". In the words of Burnham & Anderson, "Modeling is an art as well as a science and is directed toward finding a good approximating model ... as the basis for statistical inference".
=== Detection of misspecification ===
The Ramsey RESET test can help test for specification error in regression analysis.
In the example given above relating personal income to schooling and job experience, if the assumptions of the model are correct, then the least squares estimates of the parameters ρ and β will be efficient and unbiased. Hence specification diagnostics usually involve testing the first to fourth moments of the residuals.
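As a rough sketch of the idea behind such diagnostics, the regression can be re-estimated with powers of its fitted values added as regressors, and a joint test on the added terms used as a misspecification signal (the logic of the RESET test). The simulated data, variable names, and deliberately omitted quadratic term below are assumptions made for illustration; recent versions of statsmodels also ship a ready-made RESET test.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1_000
s = rng.uniform(8, 20, n)
x = rng.uniform(0, 40, n)
ln_y = 10.0 + 0.08 * s + 0.04 * x - 0.0006 * x**2 + rng.normal(0, 0.3, n)

# Deliberately misspecified model: the x^2 term is omitted.
X = sm.add_constant(np.column_stack([s, x]))
fit = sm.OLS(ln_y, X).fit()

# RESET idea: add powers of the fitted values and test them jointly.
fitted = fit.fittedvalues
X_aug = np.column_stack([X, fitted**2, fitted**3])
fit_aug = sm.OLS(ln_y, X_aug).fit()

k = X.shape[1]
R = np.zeros((2, X_aug.shape[1]))
R[0, k], R[1, k + 1] = 1.0, 1.0
print(fit_aug.f_test(R))   # rejection hints at an omitted nonlinearity
```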
== Model building ==
Building a model involves finding a set of relationships to represent the process that is generating the data. This requires avoiding all the sources of misspecification mentioned above.
One approach is to start with a model in general form that relies on a theoretical understanding of the data-generating process. Then the model can be fit to the data and checked for the various sources of misspecification, in a task called statistical model validation. Theoretical understanding can then guide the modification of the model in such a way as to retain theoretical validity while removing the sources of misspecification. But if it proves impossible to find a theoretically acceptable specification that fits the data, the theoretical model may have to be rejected and replaced with another one.
A quotation from Karl Popper is apposite here: "Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve".
Another approach to model building is to specify several different models as candidates, and then compare those candidate models to each other. The purpose of the comparison is to determine which candidate model is most appropriate for statistical inference. Common criteria for comparing models include the following: R2, Bayes factor, and the likelihood-ratio test together with its generalization relative likelihood. For more on this topic, see statistical model selection.
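A minimal sketch of this comparison-based approach follows, fitting two candidate specifications to the same simulated data and ranking them by AIC and a likelihood-ratio test; the data and the particular criteria shown are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1_000
s = rng.uniform(8, 20, n)
x = rng.uniform(0, 40, n)
ln_y = 10.0 + 0.08 * s + 0.04 * x - 0.0006 * x**2 + rng.normal(0, 0.3, n)

# Candidate 1 is linear in experience; candidate 2 adds the quadratic term.
m1 = sm.OLS(ln_y, sm.add_constant(np.column_stack([s, x]))).fit()
m2 = sm.OLS(ln_y, sm.add_constant(np.column_stack([s, x, x**2]))).fit()

print("AIC:", m1.aic, m2.aic)              # lower AIC is preferred
print("LR test:", m2.compare_lr_test(m1))  # (statistic, p-value, df difference)
```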
== See also ==
== Notes ==
== Further reading ==
Akaike, Hirotugu (1994), "Implications of informational point of view on the development of statistical science", in Bozdogan, H. (ed.), Proceedings of the First US/JAPAN Conference on The Frontiers of Statistical Modeling: An Informational Approach—Volume 3, Kluwer Academic Publishers, pp. 27–38.
Asteriou, Dimitrios; Hall, Stephen G. (2011). "Misspecification: Wrong regressors, measurement errors and wrong functional forms". Applied Econometrics (Second ed.). Palgrave Macmillan. pp. 172–197.
Colegrave, N.; Ruxton, G. D. (2017). "Statistical model specification and power: recommendations on the use of test-qualified pooling in analysis of experimental data". Proceedings of the Royal Society B. 284 (1851): 20161850. doi:10.1098/rspb.2016.1850. PMC 5378071. PMID 28330912.
Gujarati, Damodar N.; Porter, Dawn C. (2009). "Econometric modeling: Model specification and diagnostic testing". Basic Econometrics (Fifth ed.). McGraw-Hill/Irwin. pp. 467–522. ISBN 978-0-07-337577-9.
Harrell, Frank (2001), Regression Modeling Strategies, Springer.
Kmenta, Jan (1986). Elements of Econometrics (Second ed.). New York: Macmillan Publishers. pp. 442–455. ISBN 0-02-365070-2.
Lehmann, E. L. (1990). "Model specification: The views of Fisher and Neyman, and later developments". Statistical Science. 5 (2): 160–168. doi:10.1214/ss/1177012164.
MacKinnon, James G. (1992). "Model specification tests and artificial regressions". Journal of Economic Literature. 30 (1): 102–146. JSTOR 2727880.
Maddala, G. S.; Lahiri, Kajal (2009). "Diagnostic checking, model selection, and specification testing". Introduction to Econometrics (Fourth ed.). Wiley. pp. 401–449. ISBN 978-0-470-01512-4.
Sapra, Sunil (2005). "A regression error specification test (RESET) for generalized linear models" (PDF). Economics Bulletin. 3 (1): 1–6. | Wikipedia/Statistical_model_specification |
In macroeconomics, the workforce or labour force is the sum of people either working (i.e., the employed) or looking for work (i.e., the unemployed):
{\displaystyle {\text{Labour force}}={\text{Employed}}+{\text{Unemployed}}}
Those neither working in the marketplace nor looking for work are out of the labour force.
The sum of the labour force and out of the labour force results in the noninstitutional civilian population, that is, the number of people who (1) work (i.e., the employed), (2) can work but don't, although they are looking for a job (i.e., the unemployed), or (3) can work but don't, and are not looking for a job (i.e., out of the labour force). Stated otherwise, the noninstitutional civilian population is the total population minus people who cannot or choose not to work (children, retirees, soldiers, and incarcerated people). The noninstitutional civilian population is the number of people potentially available for civilian employment.
{\displaystyle {\begin{aligned}{\text{Noninstitutional civilian population}}&={\text{Labour force}}+{\text{Out of the labour force}}\\&={\text{Employed}}+{\text{Unemployed}}+{\text{Out of the labour force}}\\&={\text{Total Population}}-{\text{People who can not work}}\end{aligned}}}
The labour force participation rate is defined as the ratio of the civilian labour force to the noninstitutional civilian population.
{\displaystyle {\text{Labour force participation rate}}={\dfrac {\text{Labour force}}{\text{Noninstitutional civilian population}}}}
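A short numerical sketch of these identities follows; the head-counts are invented purely for illustration.

```python
# Hypothetical head-counts (in thousands); the figures are illustrative only.
employed = 155_000
unemployed = 6_000
out_of_labour_force = 99_000

labour_force = employed + unemployed
noninstitutional_civilian_population = labour_force + out_of_labour_force

participation_rate = labour_force / noninstitutional_civilian_population
print(f"Labour force participation rate: {participation_rate:.1%}")  # about 61.9%
```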
== Formal and informal ==
Formal labour is any sort of employment that is structured and paid in a formal way, for example through payroll records, electronic cards, and the like. Unlike the informal sector of the economy, formal labour within a country contributes to that country's gross national product. Informal labour is labour that falls short of being a formal arrangement in law or in practice. Work may also be passed on within a family, whether formally or informally, as when an employee of working age but below the retirement bracket hands an occupation on to their children. Informal labour can be paid or unpaid and it is always unstructured and unregulated. Formal employment is more reliable than informal employment. Generally, the former yields higher income and greater benefits and securities for both men and women.
=== Informal labour ===
The contribution of informal labourers is immense. Informal labour is expanding globally, most significantly in developing countries. According to a study done by Jacques Charmes, in the year 2000 informal labour made up 57% of non-agricultural employment, 40% of urban employment, and 83% of the new jobs in Latin America. That same year, informal labour made up 78% of non-agricultural employment, 61% of urban employment, and 93% of the new jobs in Africa. Particularly after an economic crisis, labourers tend to shift from the formal sector to the informal sector. This trend was seen after the Asian economic crisis which began in 1997.
=== Informal labour and gender ===
Gender is frequently associated with informal labour. Women are employed more often informally than they are formally, and informal labour is an overall larger source of employment for females than it is for males. Women frequent the informal sector of the economy through occupations like home-based workers and street vendors. The Penguin Atlas of Women in the World shows that in the 1990s, 81% of women in Benin were street vendors, 55% in Guatemala, 44% in Mexico, 33% in Kenya, and 14% in India. Overall, 60% of women workers in the developing world are employed in the informal sector.
The specific percentages are 84% and 58% for women in Sub-Saharan Africa and Latin America respectively. The percentages for men in both of these areas of the world are lower, amounting to 63% and 48% respectively. In Asia, 65% of women workers and 65% of men workers are employed in the informal sector. Globally, a large percentage of women that are formally employed also work in the informal sector behind the scenes. These women make up the hidden work force.
According to a 2021 FAO study, currently, 85 per cent of economic activity in Africa is conducted in the informal sector where women account for nearly 90 per cent of the informal labour force. According to the ILO's 2016 employment analysis, 64 per cent of informal employment is in agriculture (relative to industry and services) in sub-Saharan Africa. Women have higher rates of informal employment than men with 92 per cent of women workers in informal employment versus 86 per cent of men.
Formal and informal labour can be divided into the subcategories of agricultural work and non-agricultural work. Martha Chen et al. believe these four categories of labour are closely related to one another. A majority of agricultural work is informal, which the Penguin Atlas for Women in the World defines as unregistered or unstructured. Non-agricultural work can also be informal. According to Martha Chen et al., informal labour makes up 48% of non-agricultural work in North Africa, 51% in Latin America, 65% in Asia, and 72% in Sub-Saharan Africa.
Agriculture and informal economic activity are among some of the most important sources of livelihood for women. Women are estimated to account for approximately 70 per cent of informal cross-border traders and are also prevalent among owners of micro, small, or medium-sized enterprises (MSMEs). MSMEs are often more vulnerable to market shocks and market disruptions. For women-owned MSMEs, this is often compounded by their lack of access to credit and financial liquidity compared to larger businesses.
== Agricultural work ==
== Paid and unpaid ==
Paid and unpaid work are also closely related to formal and informal labour. Some informal work is unpaid, or paid under the table. Unpaid work can be work that is done at home to sustain a family, like child care work, or actual habitual daily labour that is not monetarily rewarded, like working the fields. Unpaid workers have zero earnings, and although their work is valuable, it is hard to estimate its value. Men and women tend to work in different areas of the economy, regardless of whether their work is paid or unpaid. Women focus on the service sector, while men focus on the industrial sector.
=== Unpaid work and gender ===
Women usually work fewer hours in income generating jobs than men do. Often it is housework that is unpaid. Worldwide, women and girls are responsible for a great amount of household work.
The Penguin Atlas of Women in the World, published in 2008, stated that in Madagascar, women spend 20 hours per week on housework, while men spend only two. In Mexico, women spend 33 hours and men spend 5 hours. In Mongolia the housework hours amount to 27 and 12 for women and men respectively. In Spain, women spend 26 hours on housework and men spend 4 hours. Only in the Netherlands do men spend 10% more time than women do on activities within the home or for the household.
The Penguin Atlas of Women in the World also stated that in developing countries, women and girls spend a significant amount of time fetching water for the week, while men do not. For example, in Malawi women spend 6.3 hours per week fetching water, while men spend 43 minutes. Girls in Malawi spend 3.3 hours per week fetching water, and boys spend 1.1 hours. Even if women and men both spend time on household work and other unpaid activities, this work is also gendered.
=== Sick leave and gender ===
In the United Kingdom in 2014, two-thirds of workers on long-term sick leave were women, despite women only constituting half of the workforce, even after excluding maternity leave.
== Globalisation of the labour market ==
The global supply of labour almost doubled in absolute numbers between the 1980s and early 2000s, with half of that growth coming from Asia. At the same time, the rate at which new workers entered the workforce in the Western world began to decline. The growing pool of global labour is accessed by employers in more advanced economies through various methods, including imports of goods, offshoring of production, and immigration. Global labor arbitrage, the practice of accessing the lowest-cost workers from all parts of the world, is partly a result of this enormous growth in the workforce. While most of the absolute increase in this global labour supply consisted of less-educated workers (those without higher education), the relative supply of workers with higher education increased by about 50 percent during the same period. From 1980 to 2010, the global workforce grew from 1.2 to 2.9 billion people. According to a 2012 report by the McKinsey Global Institute, this was caused mostly by developing nations, where there was a "farm to factory" transition. Non-farming jobs grew from 54 percent in 1980 to almost 73 percent in 2010. This industrialization took an estimated 620 million people out of poverty and contributed to the economic development of China, India and others.
Under the "old" international division of labor, until around 1970, underdeveloped areas were incorporated into the world economy principally as suppliers of minerals and agricultural commodities. However, as developing economies are merged into the world economy, more production takes place in these economies. This has led to a trend of transference, or what is also known as the "global industrial shift ", in which production processes are relocated from developed countries (such as the US, European countries, and Japan) to developing countries in Asia (such as China, Vietnam, and India), Mexico and Central America. This is because companies search for the cheapest locations to manufacture and assemble components, so low-cost labor-intensive parts of the manufacturing process are shifted to the developing world where costs are substantially lower.
It is not only manufacturing processes that are shifted to the developing world. The growth of offshore outsourcing of IT-enabled services (such as offshore custom software development and business process outsourcing) is linked to the availability of large amounts of reliable and affordable communication infrastructure following the telecommunication and Internet expansion of the late 1990s.
== See also ==
== References ==
== Sources ==
This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 (license statement/permission). Text taken from Seizing the opportunities of the African Continental Free Trade Area for the economic empowerment of women in agriculture, FAO, FAO.
== External links ==
Media related to Workforce at Wikimedia Commons
About the difference, in English, between the use/meaning of workforce/work force and labor/labour/labo(u)r pool | Wikipedia/Labor_force |
Human science (or human sciences in the plural) studies the philosophical, biological, social, justice, and cultural aspects of human life. Human science aims to expand the understanding of the human world through a broad interdisciplinary approach. It encompasses a wide range of fields, including history, philosophy, sociology, psychology, justice studies, evolutionary biology, biochemistry, neurosciences, folkloristics, and anthropology. It is the study and interpretation of the experiences, activities, constructs, and artifacts associated with human beings. The study of human sciences attempts to expand and enlighten the human being's knowledge of its existence, its interrelationship with other species and systems, and the development of artifacts to perpetuate the human expression and thought. It is the study of human phenomena. The study of the human experience is historical and current in nature. It requires the evaluation and interpretation of the historic human experience and the analysis of current human activity to gain an understanding of human phenomena and to project the outlines of human evolution. Human science is an objective, informed critique of human existence and how it relates to reality.
Underlying human science is the relationship between various humanistic modes of inquiry within fields such as history, sociology, folkloristics, anthropology, and economics, and advances in such things as genetics, evolutionary biology, and the social sciences, for the purpose of understanding our lives in a rapidly changing world. Its use of an empirical methodology that encompasses psychological experience contrasts with the purely positivistic approach typical of the natural sciences, which excludes all methods not based solely on sensory observations. Modern approaches in the human sciences integrate an understanding of human structure, function and adaptation with a broader exploration of what it means to be human. The term is also used to distinguish not only the content of a field of study from that of the natural science, but also its methodology.
== Meaning of 'science' ==
Ambiguity and confusion regarding the usage of the terms 'science', 'empirical science', and 'scientific method' have complicated the usage of the term 'human science' with respect to human activities. The term 'science' is derived from the Latin scientia, meaning 'knowledge'. 'Science' may be appropriately used to refer to any branch of knowledge or study dealing with a body of facts or truths systematically arranged to show the operation of general laws.
However, according to positivists, the only authentic knowledge is scientific knowledge, which comes from the positive affirmation of theories through strict scientific methods, the application of knowledge, or mathematics. As a result of the positivist influence, the term science is frequently employed as a synonym for empirical science. Empirical science is knowledge based on the scientific method, a systematic approach to verification of knowledge first developed for dealing with natural physical phenomena and emphasizing the importance of experience based on sensory observation. However, even with regard to the natural sciences, significant differences exist among scientists and philosophers of science with regard to what constitutes valid scientific method—for example, evolutionary biology, geology and astronomy, studying events that cannot be repeated, can use the method of historical narratives. More recently, usage of the term has been extended to the study of human social phenomena. Thus, natural and social sciences are commonly classified as science, whereas the study of classics, languages, literature, music, philosophy, history, religion, and the visual and performing arts are referred to as the humanities. Ambiguity with respect to the meaning of the term science is aggravated by the widespread use of the term formal science with reference to any one of several sciences that is predominantly concerned with abstract form that cannot be validated by physical experience through the senses, such as logic, mathematics, and the theoretical branches of computer science, information theory, and statistics.
== History ==
The phrase 'human science' in English was used during the 17th-century scientific revolution, for example by Theophilus Gale, to draw a distinction between supernatural knowledge (divine science) and study by humans (human science). John Locke also uses 'human science' to mean knowledge produced by people, but without the distinction. By the 20th century, this latter meaning was used at the same time as 'sciences that make human beings the topic of research'.
=== Early development ===
The term "moral science" was used by David Hume (1711–1776) in his Enquiry concerning the Principles of Morals to refer to the systematic study of human nature and relationships. Hume wished to establish a "science of human nature" based upon empirical phenomena, and excluding all that does not arise from observation. Rejecting teleological, theological and metaphysical explanations, Hume sought to develop an essentially descriptive methodology; phenomena were to be precisely characterized. He emphasized the necessity of carefully explicating the cognitive content of ideas and vocabulary, relating these to their empirical roots and real-world significance.
A variety of early thinkers in the humanistic sciences took up Hume's direction. Adam Smith, for example, conceived of economics as a moral science in the Humean sense.
=== Later development ===
Partly in reaction to the establishment of positivist philosophy and the latter's Comtean intrusions into traditionally humanistic areas such as sociology, non-positivistic researchers in the humanistic sciences began to carefully but emphatically distinguish the methodological approach appropriate to these areas of study, for which the unique and distinguishing characteristics of phenomena are in the forefront (e.g., for the biographer), from that appropriate to the natural sciences, for which the ability to link phenomena into generalized groups is foremost. In this sense, Johann Gustav Droysen contrasted the humanistic science's need to comprehend the phenomena under consideration with natural science's need to explain phenomena, while Windelband coined the terms idiographic for a descriptive study of the individual nature of phenomena, and nomothetic for sciences that aim to define the generalizing laws.
Wilhelm Dilthey brought nineteenth-century attempts to formulate a methodology appropriate to the humanistic sciences together with Hume's term "moral science", which he translated as Geisteswissenschaft - a term with no exact English equivalent. Dilthey attempted to articulate the entire range of the moral sciences in a comprehensive and systematic way.: Chap. I Meanwhile, his conception of "Geisteswissenschaften" also encompasses the abovementioned study of classics, languages, literature, music, philosophy, history, religion, and the visual and performing arts. He characterized the scientific nature of a study as depending upon the following (Chapter XI):
The conviction that perception gives access to reality
The self-evident nature of logical reasoning
The principle of sufficient reason
But the specific nature of the Geisteswissenschaften is based on the "inner" experience (Erleben), the "comprehension" (Verstehen) of the meaning of expressions and "understanding" in terms of the relations of the part and the whole – in contrast to the Naturwissenschaften, the "explanation" of phenomena by hypothetical laws in the "natural sciences".: p. 86
Edmund Husserl, a student of Franz Brentano, articulated his phenomenological philosophy in a way that could be thought of as a basis of Dilthey's attempt. Dilthey appreciated Husserl's Logische Untersuchungen (1900/1901, the first draft of Husserl's Phenomenology) as an "epoch-making" epistemological foundation for his conception of Geisteswissenschaften.: p. 14
In recent years, 'human science' has been used to refer to "a philosophy and approach to science that seeks to understand human experience in deeply subjective, personal, historical, contextual, cross-cultural, political, and spiritual terms. Human science is the science of qualities rather than of quantities and closes the subject-object split in science. In particular, it addresses the ways in which self-reflection, art, music, poetry, drama, language and imagery reveal the human condition. By being interpretive, reflective, and appreciative, human science re-opens the conversation among science, art, and philosophy."
== Objective vs. subjective experiences ==
Since Auguste Comte, the positivistic social sciences have sought to imitate the approach of the natural sciences by emphasizing the importance of objective external observations and searching for universal laws whose operation is predicated on external initial conditions that do not take into account differences in subjective human perception and attitude. Critics argue that subjective human experience and intention plays such a central role in determining human social behavior that an objective approach to the social sciences is too confining. Rejecting the positivist influence, they argue that the scientific method can rightly be applied to subjective, as well as objective, experience. The term subjective is used in this context to refer to inner psychological experience rather than outer sensory experience. It is not used in the sense of being prejudiced by personal motives or beliefs.
== Human science in universities ==
Since 1878, the University of Cambridge has been home to the Moral Sciences Club, with strong ties to analytic philosophy.
The Human Science degree is relatively young. It has been a degree subject at Oxford since 1969. At University College London, it was proposed in 1973 by Professor J. Z. Young and implemented two years later. His aim was to train general science graduates who would be scientifically literate, numerate and easily able to communicate across a wide range of disciplines, replacing the traditional classical training for higher-level government and management careers. Central topics include the evolution of humans, their behavior, molecular and population genetics, population growth and aging, ethnic and cultural diversity, and human interaction with the environment, including conservation, disease, and nutrition. The study of both biological and social disciplines, integrated within a framework of human diversity and sustainability, should enable the human scientist to develop professional competencies suited to address such multidimensional human problems.
In the United Kingdom, Human Science is offered at the degree level at several institutions which include:
University of Oxford
University College London (as Human Sciences and as Human Sciences and Evolution)
King's College London (as Anatomy, Developmental & Human Biology)
University of Exeter
Durham University (as Health and Human Sciences)
Cardiff University (as Human and Social Sciences)
In other countries:
Osaka University
Waseda University
Tokiwa University
Senshu University
Aoyama Gakuin University (As College of Community Studies)
Kobe University
Kanagawa University
Bunkyo University
Sophia University
Ghent University (in the narrow sense, as Moral sciences, "an integrated empirical and philosophical study of values, norms and world views")
== See also ==
History of the Human Sciences (journal)
Social science
Humanism
Humanities
== References ==
== Bibliography ==
Flew, A. (1986). David Hume: Philosopher of Moral Science, Basil Blackwell, Oxford
Hume, David, An Enquiry Concerning the Principles of Morals
== External links ==
Institute for Comparative Research in Human and Social Sciences (ICR) -Japan
Human Science Lab -London
Human Science(s) across Global Academies
Marxism philosophy | Wikipedia/Human_science |
Linguistic demography is the statistical study of languages among all populations. Estimating the number of speakers of a given language is not straightforward, and various estimates may diverge considerably. This is first of all due to the question of defining "language" vs. "dialect". Identification of varieties as a single language or as distinct languages is often based on ethnic, cultural, or political considerations rather than mutual intelligibility. The second difficulty is multilingualism, complicating the definition of "native language". Finally, in many countries, insufficient census data add to the difficulties.
Demolinguistics is a branch of the sociology of language that observes linguistic trends as affected by population distribution and redistribution and by the status of societies.
== Most spoken languages ==
The following table compares the estimates of Comrie (1998) and Weber (1997) (number of native speakers in millions). Also given are the estimates of SIL Ethnologue (2005).
Comparing estimates that do not date to the same year is problematic due to the 1.14% per year growth of world population (with significant regional differences).
This table shows that for the world's largest languages, it is impossible to give an estimate of the number of native speakers with a certainty better than maybe 10% or 20% or so.
== See also ==
List of languages by number of native speakers
List of languages by total number of speakers
Abstand and ausbau languages
Autonomous language
Language geography
Languages in censuses
Case studies:
Language demographics of Quebec
Language Spoken at Home
== Notes ==
== Literature ==
Johanna Nichols, Linguistic Diversity in Space and Time, University of Chicago Press (1992), ISBN 978-0-226-58056-2.
David I. Kertzer and Dominique Arel (eds.), Census and Identity: The Politics of Race, Ethnicity, and Language in National Censuses, ISBN 978-0-521-80823-1.
Jacques Pohl, Demolinguistics and Language Problems (1972).
H. Kloss, G. McConnell (eds.), Linguistic Composition of the Nations of the World vol. 2, North America, Quebec (1974–1984).
== External links ==
CIA - The World Factbook
M. Turner compares five language surveys - first language vs total speakers, degree of influence, etc. Plus graphs and charts.
Top 100 languages
Ethnologue
Unicode.org Top Languages by GDP Graphs | Wikipedia/Linguistic_demography |
The Lee–Carter model is a numerical algorithm used in mortality forecasting and life expectancy forecasting. The input to the model is a matrix of age-specific mortality rates ordered monotonically by time, usually with ages in columns and years in rows. The output is a forecasted matrix of mortality rates in the same format as the input.
The model uses singular value decomposition (SVD) to find:
A univariate time series vector k_t that captures 80–90% of the mortality trend (here the subscript t refers to time),
A vector b_x that describes the relative mortality at each age (here the subscript x refers to age), and
A scaling constant (referred to here as s_1 but unnamed in the literature).
k_t is usually linear, implying that gains to life expectancy are fairly constant year after year in most populations. Prior to computing SVD, age-specific mortality rates are first transformed into A_{x,t} by taking their logarithms, and then centering them by subtracting their age-specific means over time. The age-specific mean over time is denoted by a_x. The subscript x,t refers to the fact that A_{x,t} spans both age and time.
Many researchers adjust the k_t vector by fitting it to empirical life expectancies for each year, using the a_x and b_x generated with SVD. When adjusted using this approach, changes to k_t are usually small.
To forecast mortality, k_t (either adjusted or not) is projected into n future years using an ARIMA model. The corresponding forecasted A_{x,t+n} is recovered by multiplying k_{t+n} by b_x and the first diagonal element of S (when U S V* = svd(A_{x,t})). The actual mortality rates are recovered by taking exponentials of this vector.
Because of the linearity of k_t, it is generally modeled as a random walk with trend. Life expectancy and other life table measures can be calculated from this forecasted matrix after adding back the means and taking exponentials to yield regular mortality rates.
In most implementations, confidence intervals for the forecasts are generated by simulating multiple mortality forecasts using Monte Carlo methods. A band of mortality between the 5th and 95th percentiles of the simulated results is considered to be a valid forecast. These simulations are done by extending k_t into the future using randomization based on the standard error of k_t derived from the input data.
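A minimal sketch of this simulation step follows, treating k_t as a random walk with drift and taking the 5th and 95th percentiles across simulated paths; the series, horizon, and use of numpy are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative historical k_t (a roughly linear decline), e.g. taken from a prior SVD step.
kt = np.linspace(10.0, -10.0, 41) + rng.normal(0, 0.3, 41)

drift = np.mean(np.diff(kt))          # random-walk-with-drift estimate
sigma = np.std(np.diff(kt), ddof=1)   # spread of the one-step changes

horizon, n_sims = 20, 10_000
shocks = rng.normal(0, sigma, (n_sims, horizon))
paths = kt[-1] + drift * np.arange(1, horizon + 1) + np.cumsum(shocks, axis=1)

band = np.percentile(paths, [5, 95], axis=0)   # 5th-95th percentile forecast band
print(band[:, :5])
```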
== Algorithm ==
The algorithm seeks to find the least squares solution to the equation:
{\displaystyle \ln {(\mathbf {m} _{x,t})}=\mathbf {a} _{x}+\mathbf {b} _{x}\mathbf {k} _{t}+\epsilon _{x,t}}
where m_{x,t} is a matrix of mortality rates for each age x in each year t.
Compute a_x, which is the average over time of ln(m_{x,t}) for each age:
{\displaystyle \mathbf {a} _{x}={\frac {\sum _{t=1}^{T}{\ln {(\mathbf {m} _{x,t})}}}{T}}}
Compute A_{x,t}, which will be used in SVD:
{\displaystyle \mathbf {A} _{x,t}=\ln {(\mathbf {m} _{x,t})}-\mathbf {a} _{x}}
Compute the singular value decomposition of A_{x,t}:
{\displaystyle \mathbf {U} \mathbf {S} \mathbf {V^{*}} ={\text{svd}}(\mathbf {A} _{x,t})}
Derive b_x, s_1 (the scaling eigenvalue), and k_t from U, S, and V*:
{\displaystyle \mathbf {b} _{x}=(u_{1,1},u_{2,1},...,u_{x,1})}
{\displaystyle \mathbf {k} _{t}=(v_{1,1},v_{1,2},...,v_{1,t})}
Forecast k_t using a standard univariate ARIMA model to n additional years:
{\displaystyle \mathbf {k} _{t+n}={\text{ARIMA}}(\mathbf {k} _{t},n)}
Use the forecasted k_{t+n}, with the original b_x and a_x, to calculate the forecasted mortality rate for each age:
{\displaystyle \mathbf {m} _{x,t+n}=\exp(\mathbf {a} _{x}+s_{1}\mathbf {k} _{t+n}\mathbf {b} _{x})}
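A compact sketch of these steps follows, using numpy on a synthetic mortality matrix. The synthetic data, the use of a simple random walk with drift in place of a general ARIMA model, and the absence of the life-expectancy adjustment of k_t are all simplifying assumptions made for illustration; this is not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic age-by-year mortality rates m[x, t]: ages 0-99, years 0-39 (illustrative only).
ages, years = 100, 40
base = np.exp(-4.5 + 0.07 * np.arange(ages))       # rough age profile
trend = np.exp(-0.01 * np.arange(years))           # overall improvement over time
m = np.outer(base, trend) * np.exp(rng.normal(0, 0.02, (ages, years)))

log_m = np.log(m)
a_x = log_m.mean(axis=1)                            # step 1: age-specific means over time
A = log_m - a_x[:, None]                            # step 2: centred matrix A[x, t]

U, S, Vt = np.linalg.svd(A, full_matrices=False)    # step 3: SVD
b_x = U[:, 0]                                       # step 4: first left singular vector
k_t = Vt[0, :]                                      #         first right singular vector
s1 = S[0]                                           #         scaling constant

# Step 5: forecast k_t with a random walk with drift (stand-in for a general ARIMA model).
n = 10
drift = np.mean(np.diff(k_t))
k_future = k_t[-1] + drift * np.arange(1, n + 1)

# Step 6: reconstruct forecasted mortality rates.
m_future = np.exp(a_x[:, None] + s1 * np.outer(b_x, k_future))
print(m_future.shape)   # (100, 10): ages by forecast years
```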
== Discussion ==
Without applying SVD or some other method of dimension reduction, the table of mortality data is a highly correlated multivariate data series, and the complexity of these multidimensional time series makes them difficult to forecast. SVD has become widely used as a method of dimension reduction in many different fields, including by Google in their PageRank algorithm.
The Lee–Carter model was introduced by Ronald D. Lee and Lawrence Carter in 1992 with the article "Modeling and Forecasting U.S. Mortality". The model grew out of their work in the late 1980s and early 1990s attempting to use inverse projection to infer rates in historical demography. The model has been used by the United States Social Security Administration, the US Census Bureau, and the United Nations. It has become the most widely used mortality forecasting technique in the world today.
There have been extensions to the Lee–Carter model, most notably to account for missing years, correlated male and female populations, and large scale coherency in populations that share a mortality regime (western Europe, for example). Many related papers can be found on Professor Ronald Lee's website.
== Implementations ==
There are a few software packages for forecasting with the Lee–Carter model.
LCFIT is a web-based package with interactive forms.
Professor Rob J. Hyndman provides an R package for demography that includes routines for creating and forecasting a Lee–Carter model.
Alternatives in R include the StMoMo package of Villegas, Millossovich and Kaishev (2015).
Professor German Rodriguez provides code for the Lee–Carter Model using Stata.
Using Matlab, Professor Eric Jondeau and Professor Michael Rockinger have put together the Longevity Toolbox for parameter estimation.
== References == | Wikipedia/Lee–Carter_model |
The total fertility rate (TFR) of a population is the average number of children that would be born to a woman over her lifetime if she were to experience the exact current age-specific fertility rates (ASFRs) through her lifetime and live from birth until the end of her reproductive life.
As of 2023, the total fertility rate varied widely across the world, from 0.7 in South Korea, to 6.1 in Niger. Among sovereign countries that were not city states or had a very small number of inhabitants, in 2024 the following countries had a TFR of 1.0 or lower: South Korea, Taiwan, and Ukraine; the following countries had a TFR of 1.2 or lower: Chile, China, Japan, Malta, Poland, and Spain.
Fertility tends to be inversely correlated with levels of economic development. Historically, developed countries have significantly lower fertility rates, generally correlated with greater wealth, education, urbanization, and other factors. Conversely, in least developed countries, fertility rates tend to be higher. Families desire children for their labor and as caregivers for their parents in old age. Fertility rates are also higher due to the lack of access to contraceptives, generally lower levels of female education, and lower rates of female employment.
From antiquity to the beginning of the Industrial Revolution, around the year 1800, total fertility rates of 4.5 to 7.5 were common around the world.: 76-77 After this, the TFR declined only slightly, and up until the 1960s the global average TFR was still 5. Since then, the global average TFR has dropped steadily to less than half that number, 2.3 births per woman in 2023.
The United Nations predicts that global fertility will continue to decline for the remainder of this century and reach a below-replacement level of 1.8 by 2100, and that world population will peak in 2084.
== Parameter characteristics ==
The Total Fertility Rate (TFR) is not based on the actual fertility of a specific group of women, as that would require waiting until they have completed childbearing. It also does not involve counting the total number of children born over their lifetime. Instead, the TFR is based on the age-specific fertility rates of women in their "child-bearing years," typically considered to be ages 15–44 in international statistical usage.
The TFR is a measure of the fertility of an imaginary woman who experiences the age-specific fertility rates for ages 15–49 that were recorded for a specific population in a given year. It represents the average number of children a woman would potentially have if she were to go through all her childbearing years in a single year, subject to the age-specific fertility rates for that year. In simpler terms, the TFR is the number of children a woman would have if she were to experience the prevailing fertility rates at all ages from a single given year and survived throughout her childbearing years.
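As a small worked illustration, the TFR can be computed by summing age-specific fertility rates over the childbearing ages and multiplying by the width of each age group; the rates below are invented purely for illustration.

```python
# Hypothetical age-specific fertility rates (births per woman per year)
# for the 5-year age groups 15-19, 20-24, ..., 45-49; figures are illustrative only.
asfr = [0.020, 0.080, 0.110, 0.090, 0.045, 0.010, 0.001]
age_group_width = 5

tfr = age_group_width * sum(asfr)
print(f"TFR = {tfr:.2f} children per woman")   # 5 * 0.356 = 1.78
```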
== Related parameters ==
=== Net reproduction rate ===
An alternative measure of fertility is the net reproduction rate (NRR), which calculates the number of daughters a female would have in her lifetime if she were subject to prevailing age-specific fertility and mortality rates in a given year. When the NRR is exactly 1, each generation of females is precisely replacing itself.
The NRR is not as commonly used as the TFR, but it is particularly relevant in cases where the number of male babies born is very high due to gender imbalance and sex selection. This is a significant consideration in world population dynamics, especially given the high level of gender imbalance in the heavily populated nations of China and India. The gross reproduction rate (GRR) is the same as the NRR, except that, like the TFR, it disregards life expectancy.
=== Total period fertility rate ===
The TFR, sometimes called TPFR—total period fertility rate, is a better index of fertility than the crude birth rate (annual number of births per thousand population) because it is independent of the age structure of the population, but it is a poorer estimate of actual completed family size than the total cohort fertility rate, which is obtained by summing the age-specific fertility rates that actually applied to each cohort as they aged through time.
In particular, the TFR does not necessarily predict how many children young women now will eventually have, as their fertility rates in years to come may change from those of older women now. However, the TFR is a reasonable summary of current fertility levels. TFR and long term population growth rate, g, are closely related. For a population structure in a steady state, growth rate equals
log(TFR/2)/X_m, where X_m is the mean age for childbearing women.
==== Tempo effect ====
The TPFR (total period fertility rate) is affected by a tempo effect: if the age of childbearing increases and life cycle fertility is unchanged, then while the age of childbearing is increasing, the TPFR will be lower, because the births are occurring later; when the age of childbearing stops increasing, the TPFR will increase, due to the deferred births occurring in the later period, even though life cycle fertility has been unchanged. In other words, the TPFR is a misleading measure of life cycle fertility when childbearing age is changing, due to this statistical artifact. This is a significant factor in some countries, such as the Czech Republic and Spain in the 1990s. Some measures seek to adjust for this timing effect to gain a better measure of life-cycle fertility.
=== Replacement rates ===
Replacement fertility is the total fertility rate at which women give birth to enough babies to sustain population levels, assuming that mortality rates remain constant and net migration is zero. If replacement level fertility is sustained over a sufficiently long period, each generation will exactly replace itself. In 2003, the replacement fertility rate was 2.1 births per female for most developed countries (2.1 in the UK, for example), but could be as high as 3.5 in undeveloped countries because of higher mortality rates, especially child mortality. The global average for the replacement total fertility rate, eventually leading to a stable global population, for 2010–2015, was 2.3 children per female.
== Lowest-low fertility ==
The term lowest-low fertility is defined as a TFR at or below 1.3. Lowest-low fertility is found almost exclusively within East Asian countries and Southern European countries. The East Asian American community in the United States also exhibits lowest-low fertility. At one point in the late 20th century and early 21st century this was also observed in Eastern and Southern Europe. However, the fertility rate then began to rise in most countries of Europe. Since the 2020s, however, TFRs have been falling again: in 2023, Spain's TFR fell to 1.19, and Italy's TFR fell to 1.2 children per woman. In Canada, the TFR in 2023 fell to its lowest ever recorded level, at 1.26 children per woman, with Statistics Canada reporting that Canada "has now joined the group of ‘lowest-low’ fertility countries".
The lowest TFR recorded anywhere in the world in recorded history is for the Xiangyang district of Jiamusi city (Heilongjiang, China), which had a TFR of 0.41 in 2000. In 2023, South Korea's TFR was 0.72, the world's lowest for that year.
Outside Asia, the lowest TFR ever recorded was 0.80 for Eastern Germany in 1994. The low Eastern German value was influenced by a change to higher maternal age at birth, with the consequence that neither older cohorts (e.g. women born until the late 1960s), who often already had children, nor younger cohorts, who were postponing childbirth, had many children during that time. The total cohort fertility rate of each age cohort of women in East Germany did not drop as significantly.
== Population-lag effect ==
A population that maintained a TFR of 3.8 over an extended period, without a correspondingly high death or emigration rate, would increase rapidly, with a doubling period of roughly 32 years. A population that maintained a TFR of 2.0 over a long time would decrease, unless it had large enough immigration.
It may take several generations for a change in the total fertility rate to be reflected in birth rate, because the age distribution must reach equilibrium. For example, a population that has recently dropped below replacement-level fertility will continue to grow, because the recent high fertility produced large numbers of young couples, who would now be in their childbearing years.
This phenomenon carries forward for several generations and is called population momentum, population inertia, or population-lag effect. This time-lag effect is of great importance to the growth rates of human populations.
TFR (net) and long-term population growth rate, g, are closely related. For a population structure in a steady state and with zero migration,
g = log(TFR/2)/X_m, where X_m is the mean age for childbearing women, and thus P(t) = P(0)e^(gt). The accompanying chart shows the empirical relation between the two variables in a cross-section of countries with the most recent year-on-year growth rate.
The parameter 1/b should be an estimate of X_m; here it is equal to 1/0.02 = 50 years, way off the mark because of population momentum. For example, for log(TFR/2) = 0, g should be exactly zero, which is seen not to be the case.
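A short calculation illustrates this relation and reproduces the roughly 32-year doubling period quoted earlier for a TFR of 3.8; the assumed mean age of childbearing of 30 years is an illustrative value.

```python
import math

def growth_rate(tfr: float, mean_age_childbearing: float) -> float:
    """Steady-state growth rate g = ln(TFR / 2) / X_m, assuming zero migration."""
    return math.log(tfr / 2) / mean_age_childbearing

g = growth_rate(3.8, 30.0)
doubling_time = math.log(2) / g
print(f"g = {g:.4f} per year, doubling time = {doubling_time:.0f} years")  # about 32 years
```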
== Influencing factors ==
Fertility factors are determinants of the number of children that an individual is likely to have. Fertility factors are mostly positive or negative correlations without certain causations.
Factors generally associated with increased fertility include the intention to have children, very high level of gender inequality, inter-generational transmission of values, marriage and cohabitation, maternal and social support, rural residence, pro family government programs, low IQ and increased food production.
Factors generally associated with decreased fertility include rising income, value and attitude changes, education, female labor participation, population control, age, contraception, partner reluctance to having children, a low level of gender inequality, and infertility. The effect of all these factors can be summarized with a plot of total fertility rate against Human Development Index (HDI) for a sample of countries. The chart shows that the two factors are inversely correlated, that is, in general, the lower a country's HDI the higher its fertility.
Another common way of summarizing the relationship between economic development and fertility is a plot of TFR against per capita GDP, a proxy for standard of living. This chart shows that per capita GDP is also inversely correlated with fertility.
The impact of human development on TFR can best be summarized by a quote from Karan Singh, a former minister of population in India. At a 1974 United Nations population conference in Bucharest, he said "Development is the best contraceptive."
Wealthy countries, those with high per capita GDP, usually have a lower fertility rate than poor countries, those with low per capita GDP. This may seem counter-intuitive. The inverse relationship between income and fertility has been termed a demographic-economic paradox because evolutionary biology suggests that greater means should enable the production of more offspring, not fewer.
Many of these factors may differ by region and social class. For instance, Scandinavian countries and France are among the least religious in the EU, but have the highest TFR, while the opposite is true about Portugal, Greece, Cyprus, Poland and Spain.
== National efforts to increase or decrease fertility ==
Governments have often set population targets, to either increase or decrease the total fertility rate, or to have certain ethnic or socioeconomic groups have a lower or higher fertility rate. Often such policies have been interventionist, and abusive. The most notorious natalist policies of the 20th century include those in communist Romania and communist Albania, under Nicolae Ceaușescu and Enver Hoxha respectively.
The natalist policy in Romania between 1967 and 1989 was very aggressive, including outlawing abortion and contraception, routine pregnancy tests for women, taxes on childlessness, and legal discrimination against childless people. It resulted in large numbers of children put into Romanian orphanages by parents who could not cope with raising them, street children in the 1990s, when many orphanages were closed and the children ended up on the streets, overcrowding in homes and schools, and over 9,000 women who died due to illegal abortions.
Conversely, in China the government sought to lower the fertility rate, and, as such, enacted the one-child policy (1978–2015), which included abuses such as forced abortions. In India, during the national emergency of 1975, a massive compulsory sterilization drive was carried out in India, but it is considered to be a failure and is criticized for being an abuse of power.
Some governments have sought to regulate which groups of society could reproduce through eugenic policies, including forced sterilizations of population groups they considered undesirable. Such policies were carried out against ethnic minorities in Europe and North America in the first half of the 20th century, and more recently in Latin America against the Indigenous population in the 1990s; in Peru, former President Alberto Fujimori has been accused of genocide and crimes against humanity as a result of a sterilization program put in place by his administration targeting indigenous people (mainly the Quechua and Aymara people).
Within these historical contexts, the notion of reproductive rights has developed. Such rights are based on the concept that each person freely decides if, when, and how many children to have - not the state or religion. According to the Office of the United Nations High Commissioner for Human Rights, reproductive rights "rest on the recognition of the basic rights of all couples and individuals to decide freely and responsibly the number, spacing and timing of their children and to have the information and means to do so, and the right to attain the highest standard of sexual and reproductive health. It also includes the right to make decisions concerning reproduction free of discrimination, coercion and violence, as expressed in human rights documents".
== History and future projections ==
From around 10,000 BC to the beginning of the Industrial Revolution, fertility rates around the world were high by 21st-century standards, ranging from 4.5 to 7.5 children per woman.: 76-77 The onset of the Industrial Revolution around the year 1800 brought about what has come to be called the demographic transition. This eventually led to a long-term decline in TFR in every region of the world that has continued in the 21st century.
=== Before 1800 ===
During this period fertility rates of 4.5 to 7.5 were common around the world.: 76-77 Child mortality could reach 50%, and that, plus the need to produce workers, male heirs, and old-age caregivers, required a high fertility rate by 21st-century standards. To produce two adult children in this high-mortality environment required at least four or more births. For example, fertility rates in Western Europe before 1800 ranged from 4.5 in Scandinavia to 6.2 in Belgium.: 76 In 1800, the TFR in the United States was 7.0. Fertility rates in East Asia during this period were similar to those in Europe.: 74 Fertility rates in Roman Egypt were 7.4.: p. 77
Despite these high fertility rates, the number of surviving children per woman was always around two because of high mortality rates. As a result, global population growth was still very slow, about 0.04% per year.
=== 1800 to 1950 ===
After 1800, the Industrial Revolution began in some places, particularly Great Britain, continental Europe, and the United States, and they underwent the beginnings of what is now called the demographic transition. Stage two of this process fueled a steady reduction in mortality rates due to improvements in public sanitation, personal hygiene and the food supply, which reduced the number of famines.
These reductions in mortality rates, particularly reductions in child mortality, that increased the fraction of children surviving, plus other major societal changes such as urbanization, and the increased social status of women, led to stage three of the demographic transition. There was a reduction in fertility rates, because there was simply no longer a need to birth so many children.: 294
The example from the US of the correlation between child mortality and the fertility rate is illustrative. In 1800, child mortality in the US was 33%, meaning that one third of all children born would die before their fifth birthday. The TFR in 1800 was 7.0, meaning that the average female would bear seven children during their lifetime. In 1900, child mortality in the US had declined to 23%, a reduction of almost one third, and the TFR had declined to 3.9, a reduction of 44%. By 1950, child mortality had declined dramatically to 4%, a reduction of 84%, and the TFR declined to 3.2. By 2018, child mortality had declined further to 0.6% and the TFR declined to 1.9, below replacement level.
The chart shows that the decline in the TFR since the 1960s has occurred in every region of the world. The global TFR is projected to continue declining for the remainder of the century, and reach a below-replacement level of 1.8 by 2100.
In 2022, the global TFR was 2.3. Because the global fertility replacement rate for 2010–2015 was estimated to be 2.3, humanity has achieved or is approaching a significant milestone where the global fertility rate is equal to the global replacement rate.
The global fertility rate may have fallen below the global replacement level of 2.2 children per woman as early as 2023. Numerous developing countries have experienced an accelerated fertility decline in the 2010s and early 2020s. The average fertility rate in countries such as Thailand or Chile approached the mark of one child per woman, which triggered concerns about the rapid aging of populations worldwide.
==== Total fertility rates in 2050 and 2100 ====
The table shows that after 1965, the demographic transition spread around the world, and the global TFR began a long decline that continues in the 21st century.
== By region ==
The United Nations Population Division divides the world into six geographical regions. The table below shows the estimated TFR for each region.
In 2013, the TFR of Europe, Latin America and the Caribbean, and Northern America were below the global replacement-level fertility rate of 2.1 children per female.
=== Africa ===
Africa has a TFR of 4.1, the highest in the world. Angola, Benin, DR Congo, Mali, and the Niger have the highest TFR. In 2023, the most populous country in Africa, Nigeria, had an estimated TFR of 4.57. In 2023, the second most populous African country, Ethiopia, had an estimated TFR of 3.92.
The poverty of Africa and its high maternal and infant mortality have led to calls from the WHO for family planning and the encouragement of smaller families.
=== Asia ===
==== Eastern Asia ====
Hong Kong, Macau, Singapore, South Korea, and Taiwan have the lowest-low fertility, defined as TFR at or below 1.3, and are among the lowest in the world. In 2004, Macau had a TFR below 1.0. In 2018, North Korea had the highest TFR in East Asia, at 1.95.
===== China =====
In 2022, China's TFR was 1.09. China implemented the one-child policy in January 1979 as a drastic population planning measure to control the ever-growing population at the time. In January 2016, the policy was replaced with the two-child policy. In July 2021, a three-child policy was introduced, as China's population is aging faster than almost any other country in modern history.
===== Japan =====
In 2022, Japan had a TFR of 1.26. Japan's population is rapidly aging due to both a long life expectancy and a low birth rate. The total population is shrinking, losing 430,000 in 2018, to a total of 126.4 million. Hong Kong and Singapore mitigate this through immigrant workers. In Japan, a serious demographic imbalance has developed, partly due to limited immigration to Japan.
===== South Korea =====
In South Korea, a low birthrate is one of the country's most urgent socio-economic challenges. Rising housing expenses, shrinking job opportunities for younger generations, and insufficient support for families with newborns from either the government or employers are among the major explanations for its declining TFR, which fell to 0.92 in 2019. Koreans have yet to find viable solutions to make the birthrate rebound, even after trying dozens of programs over a decade, including subsidizing child-rearing expenses, giving priority for public rental housing to couples with multiple children, funding day care centers, and reserving seats in public transportation for pregnant women.
In the past 20 years, South Korea has recorded some of the lowest fertility and marriage levels in the world. As of 2022, South Korea is the country with the world's lowest total fertility rate, at 0.78. In 2022, the TFR of the capital Seoul was 0.57.
==== Southern Asia ====
===== Bangladesh =====
The fertility rate fell from 6.8 in 1970–1975, to 2.0 in 2020, an interval of about 47 years, or a little less than two generations.
===== India =====
The Indian fertility rate has declined significantly over the early 21st century. The Indian TFR declined from 5.2 in 1971 to 2.2 in 2018. The TFR in India declined to 2.0 in 2019–2020, marking the first time it has gone below replacement level.
===== Iran =====
In the Iranian calendar year spanning March 2019 to March 2020, Iran's total fertility rate fell to 1.8.
==== Western Asia ====
In 2023, the TFR of Turkey reached 1.51.
=== Europe ===
The average total fertility rate in the European Union (EU-27) was calculated at 1.53 children per female in 2021. In 2021, France had the highest TFR among EU countries at 1.84, followed by Czechia (1.83), Romania (1.81), Ireland (1.78) and Denmark (1.72). In 2021, Malta had the lowest TFR among the EU countries, at 1.13. Other southern European countries also had very low TFR (Portugal 1.35, Cyprus 1.39, Greece 1.43, Spain 1.19, and Italy 1.25).
In 2021, the United Kingdom had a TFR of 1.53. In 2021 estimates for the non-EU European post-Soviet states group, Russia had a TFR of 1.60, Moldova 1.59, Ukraine 1.57, and Belarus 1.52.
Emigration of young adults from Eastern Europe to the West aggravates the demographic problems of those countries. Emigration is particularly pronounced from countries such as Bulgaria, Moldova, Romania, and Ukraine.
=== Latin America and the Caribbean ===
In 2023, the TFR of Brazil, the most populous country in the region, was estimated at 1.75. In 2021, the second most populous country, Mexico, had an estimated TFR of 1.73. The next most populous four countries in the region had estimated TFRs of between 1.9 and 2.2 in 2023, including Colombia (1.94), Argentina (2.17), Peru (2.18), and Venezuela (2.20). Belize had the highest estimated TFR in the region at 2.59 in 2023. In 2021, Puerto Rico had the lowest, at 1.25.
=== Northern America ===
==== Canada ====
In 2023, the TFR of Canada was 1.26.
==== United States ====
The total fertility rate in the United States after World War II peaked at about 3.8 children per female in the late 1950s, dropped below replacement in the early 1970s, and by 1999 was at 2 children. Currently, fertility is below replacement among the native-born population and above replacement among immigrant families, most of whom come to the US from countries with higher fertility. However, the fertility rate of immigrants to the US has been found to decrease sharply in the second generation, correlating with improved education and income. In 2021, the US TFR was 1.664, ranging from over 2 in some states to under 1.6 in others.
=== Oceania ===
==== Australia ====
After World War II, Australia's TFR was approximately 3.0. In 2017, Australia's TFR was 1.74, i.e. below replacement.
== See also ==
List of countries by total fertility rate
Birth rate
Fertility and intelligence
Income and fertility
List of countries by past fertility rate
Sub-replacement fertility
Zero population growth
== References ==
== Further reading ==
== External links ==
CIA World Factbook - Total Fertility Rate by country
eurostat - Your key to European statistics
Population Reference Bureau Glossary of Population Terms
Java Simulation of Total Fertility.
Java Simulation of Population Dynamics.
How Fertility Changes Across Immigrant Generations.
Fertility Trends, Marriage Patterns and Savant Typologies.
Human Fertility Database: Collection of age specific fertility rates for some developed countries.
Biodemography is a multidisciplinary approach, integrating biological knowledge (studies on human biology and animal models) with demographic research on human longevity and survival. Biodemographic studies are important for understanding the driving forces of the current longevity revolution (dramatic increase in human life expectancy), forecasting the future of human longevity, and identification of new strategies for further increase in healthy and productive life span.
== Theory ==
Biodemographic studies have found a remarkable similarity in survival dynamics between humans and laboratory animals. Specifically, three general biodemographic laws of survival are found:
Gompertz–Makeham law of mortality
Compensation law of mortality
Late-life mortality deceleration (now disputed)
The Gompertz–Makeham law states that death rate is a sum of an age-independent component (Makeham term) and an age-dependent component (Gompertz function), which increases exponentially with age.
The compensation law of mortality (late-life mortality convergence) states that the relative differences in death rates between different populations of the same biological species are decreasing with age, because the higher initial death rates are compensated by lower pace of their increase with age.
The disputed late-life mortality deceleration law states that death rates stop increasing exponentially at advanced ages and level off to a late-life mortality plateau. A consequence of this deceleration is that there would be no fixed upper limit to human longevity, that is, no fixed number separating possible from impossible values of lifespan. If true, this would challenge the common belief in the existence of a fixed maximal human life span.
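The three laws can be illustrated numerically. The following sketch is not from the source; all parameter values are invented for illustration. It evaluates a Gompertz–Makeham hazard of the form A + B·exp(γ·age) for two hypothetical populations and shows their death rates converging at old ages, as the compensation law describes.

```python
import math

def gompertz_makeham_hazard(age, A, B, gamma):
    """Death rate as the sum of an age-independent Makeham term (A)
    and an exponentially increasing Gompertz term (B * exp(gamma * age))."""
    return A + B * math.exp(gamma * age)

# Two illustrative populations: the one with the higher initial death rate (B)
# has a slower pace of increase (gamma), so their hazards converge at old ages
# (compensation law of mortality). All parameter values are made up.
pop_high_start = dict(A=0.0005, B=0.00030, gamma=0.085)
pop_low_start  = dict(A=0.0005, B=0.00005, gamma=0.105)

for age in (30, 50, 70, 90):
    h1 = gompertz_makeham_hazard(age, **pop_high_start)
    h2 = gompertz_makeham_hazard(age, **pop_low_start)
    print(f"age {age}: hazard ratio = {h1 / h2:.2f}")
```

With these made-up parameters the hazard ratio between the two populations shrinks from roughly 2.6 at age 30 toward 1 at age 90, which is the convergence the compensation law refers to.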
Biodemographic studies have found that even genetically identical laboratory animals kept in constant environment have very different lengths of life, suggesting a crucial role of chance and early-life developmental noise in longevity determination. This leads to new approaches in understanding causes of exceptional human longevity.
As for the future of human longevity, biodemographic studies have found that the evolution of human lifespan has had two distinct stages: an initial stage of mortality decline at younger ages, now replaced by a new trend of preferential improvement in survival of the oldest old. This phenomenon invalidates methods of longevity forecasting based on extrapolation of long-term historical trends.
A general explanation of these biodemographic laws of aging and longevity has been suggested based on system reliability theory.
== See also ==
Stress Modeling
Demography
Biodemography
Longevity
Life extension
List of life extension-related topics
Reliability theory of aging and longevity
== References ==
== Further reading ==
Leonid A. Gavrilov & Natalia S. Gavrilova (1991). The Biology of Life Span: A Quantitative Approach. New York: Harwood Academic Publisher. ISBN 3-7186-4983-7.
Gavrilov LA, Gavrilova NS, Olshansky SJ, Carnes BA (2002). "Genealogical data and biodemography of human longevity". Social Biology. 49 (3–4): 160–173. doi:10.1080/19485565.2002.9989056. PMID 14652915. S2CID 1898725.
Gavrilov LA, Gavrilova NS (2001). "Biodemographic study of familial determinants of human longevity". Population: An English Selection. 13 (1): 197–222.
== External links ==
Biodemography of Human Longevity — abstract of keynote lecture, p. 42. In: Inaugural International Conference on Longevity. Final Programme and Abstracts. Sydney Convention & Exhibition Centre. Sydney, Australia, March 5–7, 2004, 94 pp
Behavioural science is the branch of science concerned with human behaviour. While the term can technically be applied to the study of behaviour amongst all living organisms, it is nearly always used with reference to humans as the primary target of investigation (though animals may be studied in some instances, e.g. with invasive techniques). In terms of scientific rigor, the behavioural sciences sit between the conventional natural sciences and the social sciences. They encompass fields such as psychology, neuroscience, linguistics, and economics.
== Scope ==
The behavioural sciences encompass both natural and social scientific disciplines, including various branches of psychology, neuroscience and biobehavioural sciences, behavioural economics and certain branches of criminology, sociology and political science. This interdisciplinary nature allows behavioural scientists to coordinate findings from psychological experiments, genetics and neuroimaging, self-report studies, interspecies and cross-cultural comparisons, and correlational and longitudinal designs to understand the nature, frequency, mechanisms, causes and consequences of given behaviours.
With respect to applied behavioural science and behavioural insights, the focus is usually narrower, tending to encompass cognitive psychology, social psychology and behavioural economics generally, and invoking other more specific fields (e.g. health psychology) where needed. In applied settings, behavioural scientists use their knowledge of cognitive biases, heuristics, and the ways in which decision-making is affected by various factors to develop behaviour-change interventions or policies which 'nudge' people toward more beneficial choices (see Applications below).
=== Future and emerging techniques ===
Robila explains how modern technologies such as artificial intelligence, machine learning, and large-scale data could be used to study behavioural patterns at greater scale and thereby strengthen future behavioural-science research. Immersive technologies such as virtual reality, combined with AI, could also support the development of new therapies and interventions. These concepts are only a hint of the many paths behavioural science may take in the future.
== Applications ==
Insights from several pure disciplines across behavioural sciences are explored by various applied disciplines and practiced in the context of everyday life and business.
Consumer behaviour, for instance, is the study of the decision making process consumers make when purchasing goods or services. It studies the way consumers recognise problems and discover solutions. Behavioural science is applied in this study by examining the patterns consumers make when making purchases, the factors that influenced those decisions, and how to take advantage of these patterns.
Organisational behaviour is the application of behavioural science in a business setting. It studies what motivates employees, how to make them work more effectively, what influences this behaviour, and how to use these patterns in order to achieve the company's goals. Managers often use organisational behaviour to better lead their employees.
Using insights from psychology and economics, behavioural science can be leveraged to understand how individuals make decisions regarding their health and ultimately reduce disease burden through interventions such as loss aversion, framing, defaults, nudges, and more.
Other applied disciplines of behavioural science include operations research and media psychology.
== Differentiation from social sciences ==
The terms behavioural sciences and social sciences are interconnected fields that both study systematic processes of behaviour, but they differ on their level of scientific analysis for various dimensions of behaviour.
Behavioural sciences abstract empirical data to investigate the decision process and communication strategies within and between organisms in a social system. This characteristically involves fields like psychology, social neuroscience, ethology, and cognitive science. In contrast, social sciences provide a perceptive framework to study the processes of a social system through impacts of a social organisation on the structural adjustment of the individual and of groups. They typically include fields like sociology, economics, public health, anthropology, demography, and political science.
Many subfields of these disciplines test the boundaries between behavioural and social sciences. For example, political psychology and behavioural economics use behavioural approaches, despite the predominant focus on systemic and institutional factors in the broader fields of political science and economics.
== See also ==
Behaviour
Human behaviour
Loss aversion
List of academic disciplines
Science
Fields of science
Natural sciences
Social sciences
History of science
History of technology
== References ==
== Selected bibliography ==
George Devereux: From anxiety to method in the behavioral sciences, The Hague, Paris. Mouton & Co, 1967
Fred N. Kerlinger (1979). Behavioural Research: A Conceptual Approach. New York: Holt, Rinehart & Winston. ISBN 0-03-013331-9.
E.D. Klemke, R. Hollinger & A.D. Kline, (eds.) (1980). Introductory Readings in the Philosophy of Science. Prometheus Books, New York.
Neil J. Smelser & Paul B. Baltes, eds. (2001). International Encyclopedia of the Social & Behavioral Sciences, 26 v. Oxford: Elsevier. ISBN 978-0-08-043076-8
Mills, J. A. (1998). Control a history of behavioral psychology. New York University Press.
== External links ==
Media related to Behavioral sciences at Wikimedia Commons
A graphical model or probabilistic graphical model (PGM) or structured probabilistic model is a probabilistic model for which a graph expresses the conditional dependence structure between random variables. Graphical models are commonly used in probability theory, statistics—particularly Bayesian statistics—and machine learning.
== Types of graphical models ==
Generally, probabilistic graphical models use a graph-based representation as the foundation for encoding a distribution over a multi-dimensional space and a graph that is a compact or factorized representation of a set of independences that hold in the specific distribution. Two branches of graphical representations of distributions are commonly used, namely, Bayesian networks and Markov random fields. Both families encompass the properties of factorization and independences, but they differ in the set of independences they can encode and the factorization of the distribution that they induce.
=== Undirected Graphical Model ===
The undirected graph shown may have one of several interpretations; the common feature is that the presence of an edge implies some sort of dependence between the corresponding random variables. From this graph, we might deduce that B, C, and D are all conditionally independent given A. This means that if the value of A is known, then the values of B, C, and D provide no further information about each other. Equivalently (in this case), the joint probability distribution can be factorized as:
{\displaystyle P[A,B,C,D]=f_{AB}[A,B]\cdot f_{AC}[A,C]\cdot f_{AD}[A,D]}
for some non-negative functions {\displaystyle f_{AB},f_{AC},f_{AD}}.
=== Bayesian network ===
If the network structure of the model is a directed acyclic graph, the model represents a factorization of the joint probability of all random variables. More precisely, if the events are
{\displaystyle X_{1},\ldots ,X_{n}} then the joint probability satisfies
{\displaystyle P[X_{1},\ldots ,X_{n}]=\prod _{i=1}^{n}P[X_{i}|{\text{pa}}(X_{i})]}
where {\displaystyle {\text{pa}}(X_{i})} is the set of parents of node {\displaystyle X_{i}} (nodes with edges directed towards {\displaystyle X_{i}}). In other words, the joint distribution factors into a product of conditional distributions. For example, in the directed acyclic graph shown in the Figure this factorization would be
{\displaystyle P[A,B,C,D]=P[A]\cdot P[B|A]\cdot P[C|A]\cdot P[D|A,C]}.
Any two nodes are conditionally independent given the values of their parents. In general, any two sets of nodes are conditionally independent given a third set if a criterion called d-separation holds in the graph. Local independences and global independences are equivalent in Bayesian networks.
This type of graphical model is known as a directed graphical model, Bayesian network, or belief network. Classic machine learning models like hidden Markov models, neural networks and newer models such as variable-order Markov models can be considered special cases of Bayesian networks.
One of the simplest Bayesian Networks is the Naive Bayes classifier.
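The following sketch (not part of the article) makes the DAG factorization above concrete for binary variables with made-up conditional probability tables; it checks that the factored joint sums to one and computes one marginal probability.

```python
import itertools

# Made-up conditional probability tables for binary variables (illustrative only).
P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}   # key: (b, a)
P_C_given_A = {(0, 0): 0.5, (1, 0): 0.5, (0, 1): 0.9, (1, 1): 0.1}   # key: (c, a)
P_D_given_AC = {                                                     # key: (d, a, c)
    (0, 0, 0): 0.9, (1, 0, 0): 0.1,
    (0, 0, 1): 0.4, (1, 0, 1): 0.6,
    (0, 1, 0): 0.3, (1, 1, 0): 0.7,
    (0, 1, 1): 0.2, (1, 1, 1): 0.8,
}

def joint(a, b, c, d):
    """P[A,B,C,D] = P[A] * P[B|A] * P[C|A] * P[D|A,C], as in the DAG factorization."""
    return P_A[a] * P_B_given_A[(b, a)] * P_C_given_A[(c, a)] * P_D_given_AC[(d, a, c)]

total = sum(joint(a, b, c, d) for a, b, c, d in itertools.product((0, 1), repeat=4))
p_d1 = sum(joint(a, b, c, 1) for a, b, c in itertools.product((0, 1), repeat=3))
print(f"joint sums to {total:.4f}; P(D=1) = {p_d1:.4f}")
```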
=== Cyclic Directed Graphical Models ===
The next figure depicts a graphical model with a cycle. This may be interpreted in terms of each variable 'depending' on the values of its parents in some manner.
The particular graph shown suggests a joint probability density that factors as
{\displaystyle P[A,B,C,D]=P[A]\cdot P[B]\cdot P[C,D|A,B]},
but other interpretations are possible.
=== Other types ===
Dependency network where cycles are allowed
Tree-augmented classifier or TAN model
Targeted Bayesian network learning (TBNL)
A factor graph is an undirected bipartite graph connecting variables and factors. Each factor represents a function over the variables it is connected to. This is a helpful representation for understanding and implementing belief propagation.
A clique tree or junction tree is a tree of cliques, used in the junction tree algorithm.
A chain graph is a graph which may have both directed and undirected edges, but without any directed cycles (i.e. if we start at any vertex and move along the graph respecting the directions of any arrows, we cannot return to the vertex we started from if we have passed an arrow). Both directed acyclic graphs and undirected graphs are special cases of chain graphs, which can therefore provide a way of unifying and generalizing Bayesian and Markov networks.
An ancestral graph is a further extension, having directed, bidirected and undirected edges.
Random field techniques
A Markov random field, also known as a Markov network, is a model over an undirected graph. A graphical model with many repeated subunits can be represented with plate notation.
A conditional random field is a discriminative model specified over an undirected graph.
A restricted Boltzmann machine is a bipartite generative model specified over an undirected graph.
== Applications ==
The framework of graphical models, which provides algorithms for discovering and analyzing structure in complex distributions and for describing them succinctly and extracting their unstructured information, allows them to be constructed and utilized effectively. Applications of graphical models include causal inference, information extraction, speech recognition, computer vision, decoding of low-density parity-check codes, modeling of gene regulatory networks, gene finding and diagnosis of diseases, and graphical models for protein structure.
== See also ==
Belief propagation
Structural equation model
== Notes ==
== Further reading ==
=== Books and book chapters ===
Barber, David (2012). Bayesian Reasoning and Machine Learning. Cambridge University Press. ISBN 978-0-521-51814-7.
Bishop, Christopher M. (2006). "Chapter 8. Graphical Models" (PDF). Pattern Recognition and Machine Learning. Springer. pp. 359–422. ISBN 978-0-387-31073-2. MR 2247587.
Cowell, Robert G.; Dawid, A. Philip; Lauritzen, Steffen L.; Spiegelhalter, David J. (1999). Probabilistic networks and expert systems. Berlin: Springer. ISBN 978-0-387-98767-5. MR 1697175. A more advanced and statistically oriented book
Jensen, Finn (1996). An introduction to Bayesian networks. Berlin: Springer. ISBN 978-0-387-91502-9.
Pearl, Judea (1988). Probabilistic Reasoning in Intelligent Systems (2nd revised ed.). San Mateo, CA: Morgan Kaufmann. ISBN 978-1-55860-479-7. MR 0965765. A computational reasoning approach, where the relationships between graphs and probabilities were formally introduced.
=== Journal articles ===
Edoardo M. Airoldi (2007). "Getting Started in Probabilistic Graphical Models". PLOS Computational Biology. 3 (12): e252. arXiv:0706.2040. Bibcode:2007PLSCB...3..252A. doi:10.1371/journal.pcbi.0030252. PMC 2134967. PMID 18069887.
Jordan, M. I. (2004). "Graphical Models". Statistical Science. 19: 140–155. doi:10.1214/088342304000000026.
Ghahramani, Zoubin (May 2015). "Probabilistic machine learning and artificial intelligence". Nature. 521 (7553): 452–459. Bibcode:2015Natur.521..452G. doi:10.1038/nature14541. PMID 26017444. S2CID 216356.
=== Other ===
Heckerman's Bayes Net Learning Tutorial
A Brief Introduction to Graphical Models and Bayesian Networks
Sargur Srihari's lecture slides on probabilistic graphical models
== External links ==
Graphical models and Conditional Random Fields
Probabilistic Graphical Models taught by Eric Xing at CMU
Medieval demography is the study of human demography in Europe and the Mediterranean during the Middle Ages. It estimates and seeks to explain the number of people who were alive during the Medieval period, population trends, life expectancy, family structure, and related issues. Demography is considered a crucial element of historical change throughout the Middle Ages.
The population of Europe remained at a low level in the Early Middle Ages, boomed during the High Middle Ages and reached a peak around 1300, then a number of calamities caused a steep decline, the nature of which historians have debated. Population levels began to recover around the late 15th century, gaining momentum in the early 16th century.
The science of medieval demography relies on various lines of evidence, such as administrative records, wills and other types of records, archaeological field data, economic data, and written histories. Because the data are often incomplete and/or ambiguous, there can be significant disagreement among medieval demographers.
== Demographic history of Europe ==
The population levels of Europe during the Middle Ages can be roughly categorized:
400–600 (Late Antiquity): population decline
600–1000 (Early Middle Ages): stable at a low level, with intermittent growth.
1000–1250 (High Middle Ages): population boom and expansion.
1250–1348 (Late Middle Ages): stable or intermittently rising at a high level, with fall in 1315–17 in most of Europe.
1348–1420 (Late Middle Ages): steep decline in England and France, growth in East Central Europe.
1420–1470 (Late Middle Ages): stable or intermittently falling to a low level in Western Europe, growth in East Central Europe.
1470–onward: slow expansion gaining momentum in the early 16th century.
=== Late Antiquity ===
Late Antiquity saw various indicators of Roman civilization beginning to decline, including urbanization, seaborne commerce, and total population. Only 40% as many Mediterranean shipwrecks have been found for the 3rd century as for the 1st. During the period from 150 to 400, with the intermittent appearance of plague, the population of the Roman Empire ranged from a high of 70 million to a low of 50 million, followed by a fair recovery, though not to the previous highs of the Early Empire. Serious gradual depopulation began in the West only in the 5th century, and in the East with the appearance of bubonic plague in 541, after 250 years of economic growth that followed the troubles afflicting the empire from the 250s to the 270s. Proximate causes of the population decrease include the Antonine Plague (165–180), the Plague of Cyprian (250 to c. 260), and the Crisis of the Third Century. European population probably reached a minimum during the extreme weather events of 535–536 and the ensuing Plague of Justinian (541–542). Some have connected this demographic transition to the Migration Period Pessimum, when a decrease in global temperatures impaired agricultural yields.
=== Early Middle Ages ===
A major plague epidemic struck the Mediterranean, and much of Europe, in the 6th century.
The Early Middle Ages saw relatively little population growth with urbanization well below its Roman peak, reflecting a low technological level, limited trade and political, social and economic dislocation exacerbated by the impact of Viking expansion in the north, Arab expansion in the south and the movement of Slavs and Bulgarians, and later the Magyars in the east. This rural, uncertain life spurred the development of feudalism and the Christianization of Europe. Estimates of the total population of Europe are speculative, but at the time of Charlemagne it is thought to have been between 25 and 30 million, of which perhaps half were in the Carolingian Empire that covered modern France, the Low Countries, western Germany, Austria, Slovenia, northern Italy and part of northern Spain. Most medieval settlements remained small, with agricultural land and large zones of unpopulated and lawless wilderness in between (Geographers estimate that around 800, as much as three-quarters of Europe was still forested).
Population density was only two to five persons per square kilometer in Britain, similarly in Germany, and somewhat higher in France.
Manorial surveys and some allusions to provincial hearth taxes suggest a population of 5 million for Carolingian France. Presumed densities of settlement support estimates of 4 million for Italy and a similar number for Iberia, as well as German lands (including Scandinavia); 6 million for Slavic lands and perhaps 2 million for Greece and southern Balkans; 1.5 million people for the entire British Isles.
=== High Middle Ages ===
In the 10th–13th centuries, agriculture expanded into the wilderness, in what has been termed the "great clearances". During the High Middle Ages, many forests and marshes were cleared and cultivated. At the same time, during the Ostsiedlung, Germans resettled east of the Elbe and Saale rivers, in regions previously only sparsely populated by Polabian Slavs. Crusaders expanded to the Crusader states, parts of the Iberian Peninsula were reconquered from the Moors, and the Normans colonized England and southern Italy. These movements and conquests are part of a larger pattern of population expansion and resettlement that occurred in Europe at this time.
Reasons for this expansion and colonization include an improving climate known as the Medieval warm period, which resulted in longer and more productive growing seasons; the end of the raids by Vikings, Arabs, and Magyars, resulting in greater political stability; advancements in medieval technology allowing more land to be farmed; 11th-century reforms of the Church that further increased social stability; and the rise of Feudalism, which also brought a measure of social stability. Towns and trade revived, and the rise of a money economy began to weaken the bonds of serfdom that tied peasants to the land. Land was at first plentiful while labour to clear and work the land was scarce; lords who owned the land found new ways to attract and keep labour. Urban centres were able to attract serfs with the promise of freedom. As new regions were settled, both internally and externally, population naturally increased.
Overall, the European population tripled between the years 1000 and 1348 and is estimated to have reached a peak of 73.5 million to as high as 100 million.
England – The population of England, between 1.25 and 2 million in 1086, is estimated to have grown to somewhere between 3.7 million and 5–7 million, although the 14th-century estimates derive from sources after the first plague epidemics, and the estimates for the pre-plague population depend on assumed plague mortality, the proportion of children, and the rate of omissions in returns of the taxable population.
Germany/Scandinavia – The population in Germany and Scandinavia rose from 4 million in 1000, to 11.5 million by the 1340.
Italy – Italy's population around 1300 has been variously estimated at between 10 and 13 million. The two largest cities in Italy, Venice and Florence, had about 100,000 people each. The larger cities together accounted for as much as twenty percent of Italy's population. By 1300, the population of the entire province of Tuscany may have surpassed 2 million people, a level the region would not reach again until after 1850.
Denmark – The Danish population reached a peak of 1 million by the 13th century, estimated from a survey partially preserved in Waldemar's Land Book (1231).
France – In 1328, France is believed to have supported between 13.4 million people (in a smaller geographical area than today's) and 18 to 20 million people (in the present-day area), the latter not reached again until the early modern period.
Kingdom of Hungary – The population of the Carpathian Basin probably did not exceed 1 million at the beginning of the 12th century, and it may have been between one and two million before the Mongol (Tatar) invasion of 1240. The extent of destruction is reflected in low population growth in the subsequent period. Even in the early 14th century the population was only slightly higher, between 1.4 and 2.3 million. In the fourteenth century, under the Angevin dynasty (1308–1386), the population of the Kingdom reached around 3 million before the Plague. Transylvania, in the eastern part of the Kingdom, had around 550,000 people by 1300.
Wallachia – The region in the southern part of modern Romania had a population of around 400,000 in the fifteenth century.
Bulgaria – The population of the territories forming modern Bulgaria grew from around 1.1 million in the year 700 to 2.6 million in 1365.
Constantinople – In 1203 the population of Constantinople stood at 400,000 to 500,000; when the Byzantines reclaimed the city in 1261, there were only about 35,000 inhabitants left. The population of the city stood between 40,000 and 50,000 by the 1450s. The number of people captured by the Ottomans after the fall of the city was around 33,000.
Kievan Rus – the population of Kievan Rus is estimated to be between 4.5 million and 8 million, in the absence of historical sources these estimates are based on the assumed population density.
=== Late Middle Ages ===
By the 14th century, the frontiers of settled cultivation had ceased to expand and internal colonization was coming to an end, but population levels remained high. Then a series of events—sometimes called the Crisis of the Late Middle Ages—collectively killed millions. Starting with the Great Famine in 1315 and the Black Death from 1348, the population of Europe fell abruptly. The period between 1348 and 1420 saw the heaviest loss. In parts of Germany, about 40% of the named inhabitants disappeared. The population of Provence was reportedly halved and in some parts of Tuscany, 70% were lost during this period.
Historians have struggled to explain why so many died. Some have questioned the long-standing theory that the decline in population was caused only by infectious disease (see further discussions at Black Death) and so historians have examined other social factors, as follows.
A classic Malthusian argument has been put forward that Europe was overpopulated: even in good times it was barely able to feed its population. Grain yields in the 14th century were between 2:1 and 7:1 (2:1 means for every seed planted, 2 are harvested. Modern grain yields are 30:1 or more.) Malnutrition developed gradually over decades, lowering resistance to disease, and competition for resources meant more warfare, and then finally crop yields were pushed down by the Little Ice Age.
An alternative theory is that competition for resources exacerbated the imbalance between property-owners and workers, and that the money supply (being commodity money, based principally on silver) ceased to keep up with increased economic activity, so that wages sank while rents rose, leading to demographic stagnation. The economic conditions of the poor also aggravated the calamities of the plague because they had no recourse, such as fleeing to a villa in the country in the manner of the nobles in the Decameron. The poor lived in crowded conditions and could not isolate the sick, and had weaker immunities from a deficient diet, difficult living and working conditions, and poor sanitation. After the plague and other exogenous causes of population decline lowered the labor supply, wages increased. This increased the mobility of labour and led to a redistribution of wealth, although property-owners' attempts to resist change through wage freezes and price controls contributed to popular uprisings such as the Peasants' Revolt of 1381. By 1450, the total population of Europe was substantially below that of 150 years earlier, but all classes overall had a higher standard of living.
=== The Brenner Debate ===
Still yet another theory, as introduced by Robert Paul Brenner in a 1976 paper, is that the economic system of the High Middle Ages limited population growth. Feudal lords and landlords controlled most of Europe's land; they could charge high enough rents or demand a large enough percentage of peasants' profit that peasants on these lands were forced to survive at subsistence levels. With any surplus of food, labor, and income absorbed by the landowners, the peasants did not have enough capital to invest in their farms or enough incentive to increase the productivity of their land.
In addition, the small size of most peasants' farms inhibited centralized and more efficient cultivation of land on larger fields. In regions of Europe where primogeniture was less widely practiced, peasant lands were subdivided and re-subdivided with each generation of heirs; Brenner writes that consequently: "This too naturally reduced the general level of peasant income, the surplus available for potential investment in agriculture, and the slim hope of agricultural innovation."
As a result, on account of the social and economic system, the size of Europe's population was limited; the existing agricultural system and technology could not support a population beyond a certain size. When the population of Europe surpassed the threshold that the existing economic structure permitted, population loss, social instability, and famine could result. Only through modifying the existing social structure of land ownership and distribution could Europe's population surpass early 14th century levels.
The above paragraphs are a synopsis of Brenner's argument. The 1976 article has the full text of his original argument. Later, in 1985, T.H. Ashton and C.H.E. Philpin compiled a larger volume containing Brenner's original article and several scholarly responses to it.
Regardless of the cause, populations continued to fall into the 15th century and remained low into the 16th because the plague returned in cycles over the course of the fourteenth and fifteenth centuries, although subsequent plagues, such as the "children's plagues" of the 1360s were less virulent than the Great Plague of 1347–1348.
== Science and art of medieval demography ==
Sources traditionally used by modern demographers, such as marriage, birth and death records, are often not available for this period, so scholars rely on other sources, such as archaeological surveys, and written records when available.
Examples of field data include the physical size of a settlement, and how it grows over time, and the appearance, or disappearance, of settlements. For example, after the Black Death the archaeological record shows the abandonment of upwards of 25% of all villages in Spain. However, archaeological data are often difficult to interpret. It is often difficult to assign a precise age to discoveries. Also, some of the largest and most important sites are still occupied and cannot be investigated. Available archaeological records may be concentrated on the more peripheral regions, for example early Middle Ages Anglo–Saxon burials at Sutton Hoo, in East Anglia in England, for which otherwise no records exist.
Because of these limitations, much of our knowledge comes from written records: descriptive and administrative accounts. Descriptive accounts include those of chroniclers who wrote about the size of armies, victims of war or famine, participants in an oath. However these cannot be relied on as accurate, and are most useful as supporting evidence rather than being taken factually on their own.
The most important written accounts are those contained in administrative records. These accounts are more objective and accurate because the motivation for writing them was not to influence others. These records can be divided into two categories: surveys and serial documents. Surveys cover an estate or region on a particular date, rather like a modern inventory. Manorial surveys were very common throughout the Middle Ages, in particular in France and England, but faded as serfdom gave way to a money economy. Fiscal surveys came with the rise of the money economy, the most famous and earliest being the Domesday Book in 1086. The Book of Hearths from Italy in 1244 is another example. The largest fiscal survey was of France in 1328. As kings continued to look for new ways to raise money, these fiscal surveys increased in number and scope over time. Surveys have limitations, because they are only a snapshot in time; they do not show long-term trends, and they tend to exclude elements of society.
Serial records come in different forms. The earliest are from the 8th century and are land conveyances, such as sales, exchanges, donations, and leases. Other types of serial records include death records from religious institutions and baptismal registrations. Other helpful records include heriots, court records, food prices and rent prices, from which inferences can be made.
== Demographic tables of Europe's population ==
The tables below are based on estimates from Urlanis 1941, pp. 91, 414.
== Major scholars on medieval demography ==
Thomas Robert Malthus – founder of demography, on whose work the Malthusian model of economic history is centered.
Michael Postan – prominent scholar of the Malthusian model of medieval demographics.
Robert Brenner – prominent scholar of the Marxist model of medieval demographics, centered on social class and economic structure instead of population growth alone.
Karl Julius Beloch
Fernand Braudel
== See also ==
Historical demography
Classical demography
Early modern demography
Crisis of the Late Middle Ages
Dark Ages (historiography)
Life expectancy
List of famines
List of disasters
Little Ice Age
Medieval household
Migration Period
Slavery in medieval Europe
== References ==
== Bibliography ==
Herlihy, David (1989), "Medieval Demography", in Strayer, Joseph R. (ed.), Dictionary of the Middle Ages, vol. 4, New York: Scribner, ISBN 0-684-17024-8.
Urlanis, B T︠S︡ (1941). Rost naselenii︠a︡ v Evrope : opyt ischislenii︠a︡ [Population growth in Europe] (in Russian). Moskva: OGIZ-Gospolitizdat. OCLC 42379320.
== Further reading ==
Biller, Peter (2001), The Measure of Multitude: Population in Medieval Thought, New York: Oxford University Press, ISBN 0-19-820632-1.
Hollingsworth, Thomas (1969), Historical Demography, Ithaca, NY: Cornell University Press, ISBN 0-8014-0497-5.
Russell, Josiah (1987), Medieval Demography: Essays, AMS Studies in the Middle Ages, vol. 12, New York: AMS Press, ISBN 0-404-61442-6.
Historical demography is the quantitative study of human population in the past. It is concerned with population size, with the three basic components of population change (fertility, mortality, and migration), and with population characteristics related to those components, such as marriage, socioeconomic status, and the configuration of families.
== Sources ==
The sources of historical demography vary according to the period and topics of the study.
For the recent period — beginning in the early nineteenth century in most European countries, and later in the rest of the world — historical demographers make use of data collected by governments, including censuses and vital statistics.
In the early modern period, historical demographers rely heavily on ecclesiastical records of baptisms, marriages, and burials, using methods developed by French historian Louis Henry, as well as hearth and poll tax records. In 1749, the Kingdom of Sweden (which then included today's Finland) conducted the first population census covering an entire country.
For population size, sources can also include the size of cities and towns, the size and density of smaller settlements, relying on field survey techniques, the presence or absence of agriculture on marginal land, and inferences from historical records. For population health and life expectancy, paleodemography, based on the study of skeletal remains, is another important approach for populations that precede the modern era, as is the study of ages of death recorded on funerary monuments.
The PUMS (Public User Microdata Samples) data set allows researchers to analyze contemporary and historical data sets.
== Development of techniques ==
Historical analysis has played a central role in the study of population, from Thomas Malthus in the eighteenth century to major twentieth-century demographers such as Ansley Coale and Samuel H. Preston. The French historian Louis Henry (1911-1991) was chiefly responsible for the development of historical demography as a distinct subfield of demography. In recent years, new research in historical demography has proliferated owing to the development of massive new population data collections, including the Demographic Data Base in Umeå, Sweden, the Historical Sample of the Netherlands, and the Integrated Public Use Microdata Series (IPUMS).
According to Willigan and Lynch, the main sources used by demographic historians include archaeological methods, parish registers starting about 1500 in Europe, civil registration records, enumerations, national census beginning about 1800, genealogies and family reconstruction studies, population registers, and organizational and institutional records. Statistical methods have included model life tables, time series analysis, event history analysis, causal model building and hypothesis testing, as well as theories of the demographic transition and the epidemiological transition.
== References ==
== Further reading ==
Alter, George C. "Generation to Generation Life Course, Family, and Community." Social Science History (2013) 37#1 pp: 1-26. abstract
Alter, George C., et al. "Introduction: Longitudinal analysis of historical-demographic data." Journal of Interdisciplinary History (2012) 42#4 pp: 503-517. Online
Alter, George C., et al. "Completing Life Histories with Imputed Exit Dates: A Method for Historical Data from Passive Registration Systems," Population (2009) 64:293–318.
Arriaga, Eduardo E. "A New Approach to the Measurements of Urbanization" Economic Development & Cultural Change (1970) 18#2 pp 206–18 in JSTOR
Coale, Ansley J. Regional Model Life Tables and Stable Populations (2nd ed. 1983)
Fauve-Chamoux, Antoinette. "A Personal Account of the History of Historical Demography in Europe at the End of the Glorious Thirty (1967-1975)." Essays in Economic & Business History 35.1 (2017): 175-205.
Gutmann, Myron P. et al. eds. Navigating Time and Space in Population Studies (2012) excerpt and text search
Henry, Louis. Population: analysis and models (London: Edward Arnold, 1976)
Henry, Louis. On the measurement of human fertility: selected writings of Louis Henry (Elsevier Pub. Co, 1972)
Henry, Louis. "The verification of data in historical demography." Population studies 22.1 (1968): 61-81.
Nusteling, Hubert. "Fertility in historical demography and a homeostatic method for reconstituting populations in pre-statistical periods." Historical Methods: A Journal of Quantitative and Interdisciplinary History (2005) 38#3 pp: 126-142. DOI:10.3200/HMTS.38.3.126-142
Smith, Daniel Scott. "A perspective on demographic methods and effects in social history." William and Mary Quarterly (1982 ): 442-468. in JSTOR
Reher, David S., and Roger Schofield. Old and new methods in historical demography (Clarendon Press 1993), 426 pp.
Swanson, David A. and Jacob S. Siegel. The Methods and Materials of Demography (2nd ed. 2004); rewritten version of Henry S. Shryock and Jacob S. Siegel, The Methods and Materials of Demography (1976); compendium of techniques
Swedlund, Alan C. "Historical demography as population ecology." Annual Review of Anthropology (1978) pp: 137-173.
van de Walle, Etienne. "Historical Demography" in Dudley L. Poston and Michael Micklin, eds. Handbook of Population (Springer US, 2005) pp 577–600
Watkins, Susan Cotts, and Myron P. Gutmann. "Methodological issues in the use of population registers for fertility analysis." Historical Methods: A Journal of Quantitative and Interdisciplinary History (1983) 16#3: 109-120.
Weiss, Volkmar. Local Population Studies in Central Europe. A Review of Historical Demography and Social History. (KDP, 2020), 253 pp.
Willigan, J. Dennis, and Katherine A. Lynch, Sources and Methods of Historical Demography, (New York: Academic Press, 1982) 505 p. Abstract
Wrigley, E. A., ed. An Introduction to English Historical Demography, London: Weidenfeld & Nicolson, 1966.
== External links ==
International Commission for Historical Demography Archived 2007-05-24 at the Wayback Machine
H-Demog, an international scholarly online discussion list on demographic history
POPULATION STATISTICS in historical perspective
Proportional hazards models are a class of survival models in statistics. Survival models relate the time that passes, before some event occurs, to one or more covariates that may be associated with that quantity of time. In a proportional hazards model, the unique effect of a unit increase in a covariate is multiplicative with respect to the hazard rate. The hazard rate at time
t is the probability per short time dt that an event will occur between t and t + dt given that up to time t no event has occurred yet.
For example, taking a drug may halve one's hazard rate for a stroke occurring, or changing the material from which a manufactured component is constructed may double its hazard rate for failure. Other types of survival models such as accelerated failure time models do not exhibit proportional hazards. The accelerated failure time model describes a situation where the biological or mechanical life history of an event is accelerated (or decelerated).
== Background ==
Survival models can be viewed as consisting of two parts: the underlying baseline hazard function, often denoted
λ₀(t), describing how the risk of event per time unit changes over time at baseline levels of covariates; and the effect parameters, describing how the hazard varies in response to explanatory covariates. A typical medical example would include covariates such as treatment assignment, as well as patient characteristics such as age at start of study, gender, and the presence of other diseases at start of study, in order to reduce variability and/or control for confounding.
The proportional hazards condition states that covariates are multiplicatively related to the hazard. In the simplest case of stationary coefficients, for example, a treatment with a drug may, say, halve a subject's hazard at any given time
t, while the baseline hazard may vary. Note however, that this does not double the lifetime of the subject; the precise effect of the covariates on the lifetime depends on the type of λ₀(t). The covariate is not restricted to binary predictors; in the case of a continuous covariate x, it is typically assumed that the hazard responds exponentially; each unit increase in x results in proportional scaling of the hazard.
== The Cox model ==
=== Introduction ===
Sir David Cox observed that if the proportional hazards assumption holds (or, is assumed to hold) then it is possible to estimate the effect parameter(s), denoted
βi
below, without any consideration of the full hazard function. This approach to survival data is called application of the Cox proportional hazards model, sometimes abbreviated to Cox model or to proportional hazards model. However, Cox also noted that biological interpretation of the proportional hazards assumption can be quite tricky.
Let Xi = (Xi1, … , Xip) be the realized values of the p covariates for subject i. The hazard function for the Cox proportional hazards model has the form
{\displaystyle {\begin{aligned}\lambda (t|X_{i})&=\lambda _{0}(t)\exp(\beta _{1}X_{i1}+\cdots +\beta _{p}X_{ip})\\&=\lambda _{0}(t)\exp(X_{i}\cdot \beta )\end{aligned}}}
This expression gives the hazard function at time t for subject i with covariate vector (explanatory variables) Xi. Note that between subjects, the baseline hazard λ₀(t) is identical (has no dependency on i). The only difference between subjects' hazards comes from the baseline scaling factor exp(Xi ⋅ β).
=== Why it is called "proportional" ===
To start, suppose we only have a single covariate, x, and therefore a single coefficient, β1. Our model looks like:
{\displaystyle \lambda (t|x)=\lambda _{0}(t)\exp(\beta _{1}x)}
Consider the effect of increasing x by 1:
{\displaystyle {\begin{aligned}\lambda (t|x+1)&=\lambda _{0}(t)\exp(\beta _{1}(x+1))\\&=\lambda _{0}(t)\exp(\beta _{1}x+\beta _{1})\\&={\Bigl (}\lambda _{0}(t)\exp(\beta _{1}x){\Bigr )}\exp(\beta _{1})\\&=\lambda (t|x)\exp(\beta _{1})\end{aligned}}}
We can see that increasing a covariate by 1 scales the original hazard by the constant exp(β1). Rearranging things slightly, we see that:
{\displaystyle {\frac {\lambda (t|x+1)}{\lambda (t|x)}}=\exp(\beta _{1})}
The right-hand-side is constant over time (no term has a t in it). This relationship, x/y = constant, is called a proportional relationship.
More generally, consider two subjects, i and j, with covariates Xi and Xj respectively. Consider the ratio of their hazards:
{\displaystyle {\begin{aligned}{\frac {\lambda (t|X_{i})}{\lambda (t|X_{j})}}&={\frac {\lambda _{0}(t)\exp(X_{i}\cdot \beta )}{\lambda _{0}(t)\exp(X_{j}\cdot \beta )}}\\&={\frac {{\cancel {\lambda _{0}(t)}}\exp(X_{i}\cdot \beta )}{{\cancel {\lambda _{0}(t)}}\exp(X_{j}\cdot \beta )}}\\&=\exp((X_{i}-X_{j})\cdot \beta )\end{aligned}}}
The right-hand-side isn't dependent on time, as the only time-dependent factor, λ₀(t), was cancelled out. Thus the ratio of hazards of two subjects is a constant, i.e. the hazards are proportional.
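A short numerical sketch (with invented coefficients and an arbitrary baseline hazard, none of which come from the article) shows that this ratio is the same at every time point and equals exp((Xi − Xj)·β).

```python
import numpy as np

beta = np.array([0.7, -0.3])          # made-up regression coefficients
X_i = np.array([1.0, 2.0])            # covariates of subject i (illustrative)
X_j = np.array([0.0, 1.0])            # covariates of subject j (illustrative)

def baseline_hazard(t):
    """Arbitrary positive baseline hazard; its exact form never enters the ratio."""
    return 0.01 * (1.0 + 0.5 * np.sin(t))

def hazard(t, X):
    return baseline_hazard(t) * np.exp(X @ beta)

for t in (0.5, 1.0, 5.0, 10.0):
    ratio = hazard(t, X_i) / hazard(t, X_j)
    print(f"t = {t:4.1f}: hazard ratio = {ratio:.4f}")

print(f"exp((X_i - X_j) . beta) = {np.exp((X_i - X_j) @ beta):.4f}")
```

Every printed ratio equals exp(0.4), regardless of t, because the baseline hazard cancels.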
=== Absence of an intercept term ===
Often there is an intercept term (also called a constant term or bias term) used in regression models. The Cox model lacks one because the baseline hazard, λ₀(t), takes the place of it. Let's see what would happen if we did include an intercept term anyways, denoted β0:
{\displaystyle {\begin{aligned}\lambda (t|X_{i})&=\lambda _{0}(t)\exp(\beta _{1}X_{i1}+\cdots +\beta _{p}X_{ip}+\beta _{0})\\&=\lambda _{0}(t)\exp(X_{i}\cdot \beta )\exp(\beta _{0})\\&=\left(\exp(\beta _{0})\lambda _{0}(t)\right)\exp(X_{i}\cdot \beta )\\&=\lambda _{0}^{*}(t)\exp(X_{i}\cdot \beta )\end{aligned}}}
where we've redefined exp(β0)λ₀(t) to be a new baseline hazard, λ₀*(t). Thus, the baseline hazard incorporates all parts of the hazard that are not dependent on the subjects' covariates, which includes any intercept term (which is constant for all subjects, by definition). In other words, adding an intercept term would make the model unidentifiable.
=== Likelihood for unique times ===
The Cox partial likelihood, shown below, is obtained by using Breslow's estimate of the baseline hazard function, plugging it into the full likelihood and then observing that the result is a product of two factors. The first factor is the partial likelihood shown below, in which the baseline hazard has "canceled out". It is simply the probability for subjects to have experienced events in the order that they actually have occurred, given the set of times of occurrences and given the subjects' covariates.
The second factor is free of the regression coefficients and depends on the data only through the censoring pattern. The effect of covariates estimated by any proportional hazards model can thus be reported as hazard ratios.
To calculate the partial likelihood, the probability for the order of events, let us index the M samples for which events have already occurred by increasing time of occurrence, Y1 < Y2 < ... < YM. Covariates of all other subjects for which no event has occurred get indices M+1,.., N. The partial likelihood can be factorized into one factor for each event that has occurred. The i 'th factor is the probability that out of all subjects (i,i+1,..., N) for which no event has occurred before time Yi, the one that actually occurred at time Yi is the event for subject i:
{\displaystyle L_{i}(\beta )={\frac {\lambda (Y_{i}\mid X_{i})}{\sum _{j=i}^{N}\lambda (Y_{i}\mid X_{j})}}={\frac {\lambda _{0}(Y_{i})\theta _{i}}{\sum _{j=i}^{N}\lambda _{0}(Y_{i})\theta _{j}}}={\frac {\theta _{i}}{\sum _{j=i}^{N}\theta _{j}}},}
where θj = exp(Xj ⋅ β) and the summation is over the set of subjects j where the event has not occurred before time Yi (including subject i itself). Obviously 0 < Li(β) ≤ 1.
Treating the subjects as statistically independent of each other, the partial likelihood for the order of events is
{\displaystyle L(\beta )=\prod _{i=1}^{M}L_{i}(\beta )=\prod _{i:C_{i}=1}L_{i}(\beta ),}
where the subjects for which an event has occurred are indicated by Ci = 1 and all others by Ci = 0. The corresponding log partial likelihood is
{\displaystyle \ell (\beta )=\sum _{i:C_{i}=1}\left(X_{i}\cdot \beta -\log \sum _{j:Y_{j}\geq Y_{i}}\theta _{j}\right),}
where we have written {\displaystyle \sum _{j=i}^{N}} using the indexing introduced above in a more general way, as {\displaystyle \sum _{j:Y_{j}\geq Y_{i}}}.
Crucially, the effect of the covariates can be estimated without the need to specify the hazard function λ₀(t) over time. The partial likelihood can be maximized over β to produce maximum partial likelihood estimates of the model parameters.
The partial score function is
{\displaystyle \ell ^{\prime }(\beta )=\sum _{i:C_{i}=1}\left(X_{i}-{\frac {\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}}{\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}}}\right),}
and the Hessian matrix of the partial log likelihood is
{\displaystyle \ell ^{\prime \prime }(\beta )=-\sum _{i:C_{i}=1}\left({\frac {\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}X_{j}^{\prime }}{\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}}}-{\frac {\left[\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}\right]\left[\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}^{\prime }\right]}{\left[\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}\right]^{2}}}\right).}
Using this score function and Hessian matrix, the partial likelihood can be maximized using the Newton-Raphson algorithm. The inverse of the Hessian matrix, evaluated at the estimate of β, can be used as an approximate variance-covariance matrix for the estimate, and used to produce approximate standard errors for the regression coefficients.
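To make the maximization concrete, the following is a minimal NumPy sketch of Newton-Raphson on the partial likelihood for the no-ties case. It is not taken from any package; the function name, data layout, and convergence tolerance are illustrative assumptions. Sorting by time makes each risk set {j : Yj ≥ Yi} a suffix of the arrays, so the risk-set sums become reversed cumulative sums.

import numpy as np

def cox_newton_raphson(times, events, X, n_iter=25, tol=1e-9):
    # Minimal sketch: maximize the Cox partial log likelihood (unique event
    # times assumed) using the score and Hessian given above.
    n, p = X.shape
    order = np.argsort(times)                 # sort so each risk set is a suffix
    X = np.asarray(X, dtype=float)[order]
    events = np.asarray(events)[order].astype(bool)
    beta = np.zeros(p)
    for _ in range(n_iter):
        theta = np.exp(X @ beta)                                      # theta_j = exp(X_j . beta)
        s0 = np.cumsum(theta[::-1])[::-1]                             # sum_{j: Y_j >= Y_i} theta_j
        s1 = np.cumsum((theta[:, None] * X)[::-1], axis=0)[::-1]      # sum theta_j X_j
        s2 = np.cumsum((theta[:, None, None] * X[:, :, None] * X[:, None, :])[::-1],
                       axis=0)[::-1]                                  # sum theta_j X_j X_j'
        score = (X[events] - s1[events] / s0[events, None]).sum(axis=0)
        hessian = -(s2[events] / s0[events, None, None]
                    - s1[events, :, None] * s1[events, None, :]
                    / s0[events, None, None] ** 2).sum(axis=0)
        step = np.linalg.solve(hessian, score)                        # Newton step: H^{-1} score
        beta = beta - step
        if np.max(np.abs(step)) < tol:
            break
    return beta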
=== Likelihood when there exist tied times ===
Several approaches have been proposed to handle situations in which there are ties in the time data. Breslow's method describes the approach in which the procedure described above is used unmodified, even when ties are present. An alternative approach that is considered to give better results is Efron's method. Let tj denote the unique times, let Hj denote the set of indices i such that Yi = tj and Ci = 1, and let mj = |Hj|. Efron's approach maximizes the following partial likelihood.
{\displaystyle L(\beta )=\prod _{j}{\frac {\prod _{i\in H_{j}}\theta _{i}}{\prod _{\ell =0}^{m_{j}-1}\left[\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}\right]}}.}
The corresponding log partial likelihood is
{\displaystyle \ell (\beta )=\sum _{j}\left(\sum _{i\in H_{j}}X_{i}\cdot \beta -\sum _{\ell =0}^{m_{j}-1}\log \left(\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}\right)\right),}
the score function is
{\displaystyle \ell ^{\prime }(\beta )=\sum _{j}\left(\sum _{i\in H_{j}}X_{i}-\sum _{\ell =0}^{m_{j}-1}{\frac {\sum _{i:Y_{i}\geq t_{j}}\theta _{i}X_{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}X_{i}}{\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}}}\right),}
and the Hessian matrix is
{\displaystyle \ell ^{\prime \prime }(\beta )=-\sum _{j}\sum _{\ell =0}^{m_{j}-1}\left({\frac {\sum _{i:Y_{i}\geq t_{j}}\theta _{i}X_{i}X_{i}^{\prime }-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}X_{i}X_{i}^{\prime }}{\phi _{j,\ell ,m_{j}}}}-{\frac {Z_{j,\ell ,m_{j}}Z_{j,\ell ,m_{j}}^{\prime }}{\phi _{j,\ell ,m_{j}}^{2}}}\right),}
where
{\displaystyle \phi _{j,\ell ,m_{j}}=\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}}
{\displaystyle Z_{j,\ell ,m_{j}}=\sum _{i:Y_{i}\geq t_{j}}\theta _{i}X_{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}X_{i}.}
Note that when Hj is empty (all observations with time tj are censored), the summands in these expressions are treated as zero.
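As an illustration, the following short Python sketch evaluates the Efron log partial likelihood given above for a fixed β. It is a direct transcription of the formula, not code from any particular package, and the function and variable names are illustrative.

import numpy as np

def efron_log_partial_likelihood(beta, times, events, X):
    # Evaluate the Efron tie-corrected log partial likelihood at beta.
    times, events, X = np.asarray(times), np.asarray(events), np.asarray(X, dtype=float)
    theta = np.exp(X @ beta)
    loglik = 0.0
    for t in np.unique(times[events == 1]):       # unique event times t_j
        H = (times == t) & (events == 1)          # H_j: subjects tied at t_j
        risk = times >= t                         # risk set {i : Y_i >= t_j}
        m = H.sum()                               # m_j = |H_j|
        loglik += float(np.sum(X[H] @ beta))
        for l in range(m):
            loglik -= np.log(theta[risk].sum() - (l / m) * theta[H].sum())
    return loglik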
=== Examples ===
Below are some worked examples of the Cox model in practice.
==== A single binary covariate ====
Suppose the endpoint we are interested in is patient survival during a 5-year observation period after a surgery. Patients can die within the 5-year period, and we record when they died, or patients can live past 5 years, and we only record that they lived past 5 years. The surgery was performed at one of two hospitals, A or B, and we would like to know if the hospital location is associated with 5-year survival. Specifically, we would like to know the relative increase (or decrease) in hazard from a surgery performed at hospital A compared to hospital B. Provided is some (fake) data, where each row represents a patient: T is how long the patient was observed for before death or 5 years (measured in months), and C denotes if the patient died in the 5-year period. We have encoded the hospital as a binary variable denoted X: 1 if from hospital A, 0 from hospital B.
Our single-covariate Cox proportional hazards model looks like the following, with {\displaystyle \beta _{1}} representing the hospital's effect, and i indexing each patient:
{\displaystyle \overbrace {\lambda (t|X_{i})} ^{\text{hazard for i}}=\underbrace {\lambda _{0}(t)} _{{\text{baseline}} \atop {\text{hazard}}}\cdot \overbrace {\exp(\beta _{1}X_{i})} ^{\text{scaling factor for i}}}
Using statistical software, we can estimate {\displaystyle \beta _{1}} to be 2.12. The hazard ratio is the exponential of this value, {\displaystyle \exp(\beta _{1})=\exp(2.12)}. To see why, consider the ratio of hazards, specifically:
{\displaystyle {\frac {\lambda (t|X=1)}{\lambda (t|X=0)}}={\frac {{\cancel {\lambda _{0}(t)}}\exp(\beta _{1}\cdot 1)}{{\cancel {\lambda _{0}(t)}}\exp(\beta _{1}\cdot 0)}}=\exp(\beta _{1})}
Thus, the hazard ratio of hospital A to hospital B is {\displaystyle \exp(2.12)=8.32}. Putting aside statistical significance for a moment, we can say that patients in hospital A are associated with an 8.3x higher risk of death occurring in any short period of time compared to hospital B.
There are important caveats to mention about the interpretation:
an 8.3x higher risk of death does not mean that 8.3x more patients will die in hospital A: survival analysis examines how quickly events occur, not simply whether they occur.
More specifically, "risk of death" is a measure of a rate. A rate has units, like meters per second. However, a relative rate does not: a bicycle can go two times faster than another bicycle (the reference bicycle), without specifying any units. Likewise, the risk of death (comparable to the speed of a bike) in hospital A is 8.3 times higher (faster) than the risk of death in hospital B (the reference group).
the inverse quantity, {\displaystyle 1/8.32={\frac {1}{\exp(2.12)}}=\exp(-2.12)=0.12}, is the hazard ratio of hospital B relative to hospital A.
We haven't made any inferences about probabilities of survival between the hospitals. This is because we would need an estimate of the baseline hazard rate, {\displaystyle \lambda _{0}(t)}, as well as our {\displaystyle \beta _{1}} estimate. However, standard estimation of the Cox proportional hazards model does not directly estimate the baseline hazard rate.
Because we have ignored the only time-varying component of the model, the baseline hazard rate, our estimate is timescale-invariant. For example, if we had measured time in years instead of months, we would get the same estimate.
It is tempting to say that the hospital caused the difference in hazards between the two groups, but since our study is not causal (that is, we do not know how the data was generated), we stick with terminology like "associated".
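For readers who want to reproduce this kind of fit, the following is a hedged sketch using the Python lifelines package. The small dataframe below is invented purely for illustration (it is not the fake dataset referred to above); only the column layout, T, C, and X, follows the text.

import pandas as pd
from lifelines import CoxPHFitter

# Invented toy data in the shape described above: T in months, C = 1 if the
# patient died within the 5-year window, X = 1 for hospital A, 0 for hospital B.
df = pd.DataFrame({
    "T": [12, 60, 33, 20, 7, 28, 60, 45],
    "C": [1, 0, 1, 1, 1, 1, 0, 0],
    "X": [1, 0, 1, 0, 1, 1, 0, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="C")
cph.print_summary()   # reports the estimate of beta_1 and exp(beta_1), the hazard ratio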
==== A single continuous covariate ====
To demonstrate a less traditional use case of survival analysis, the next example will be an economics question: what is the relationship between a company's price-to-earnings ratio (P/E) on its first IPO anniversary and its future survival? More specifically, if we consider a company's "birth event" to be its first IPO anniversary, and any bankruptcy, sale, going private, etc. as a "death" event for the company, we would like to know the influence of the company's P/E ratio at its "birth" (first IPO anniversary) on its survival.
Provided is a (fake) dataset with survival data from 12 companies: T represents the number of days between the first IPO anniversary and death (or an end date of 2022-01-01, if the company did not die). C represents whether the company died before 2022-01-01. P/E represents the company's price-to-earnings ratio at its first IPO anniversary.
Unlike the previous example where there was a binary variable, this dataset has a continuous variable, P/E; however, the model looks similar:
{\displaystyle \lambda (t|P_{i})=\lambda _{0}(t)\cdot \exp(\beta _{1}P_{i})}
where {\displaystyle P_{i}} represents a company's P/E ratio. Running this dataset through a Cox model produces an estimate of the value of the unknown {\displaystyle \beta _{1}}, which is -0.34. Therefore, an estimate of the entire hazard is:
{\displaystyle \lambda (t|P_{i})=\lambda _{0}(t)\cdot \exp(-0.34P_{i})}
Since the baseline hazard, {\displaystyle \lambda _{0}(t)}, was not estimated, the entire hazard cannot be calculated. However, consider the ratio of the hazards of companies i and j:
{\displaystyle {\begin{aligned}{\frac {\lambda (t|P_{i})}{\lambda (t|P_{j})}}&={\frac {{\cancel {\lambda _{0}(t)}}\cdot \exp(-0.34P_{i})}{{\cancel {\lambda _{0}(t)}}\cdot \exp(-0.34P_{j})}}\\&=\exp(-0.34(P_{i}-P_{j}))\end{aligned}}}
All terms on the right are known, so calculating the ratio of hazards between companies is possible. Since there is no time-dependent term on the right (all terms are constant), the hazards are proportional to each other. For example, the hazard ratio of company 5 to company 2 is
{\displaystyle \exp(-0.34(6.3-3.0))=0.33}. This means that, within the interval of study, company 5's risk of "death" is 0.33 ≈ 1/3 as large as company 2's risk of death.
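These hazard-ratio calculations are plain arithmetic once the coefficient is known; for instance, in Python, using the values quoted in the text:

import math

beta_1 = -0.34                                        # estimated coefficient from the text
hr_per_unit = math.exp(beta_1)                        # ~0.71, hazard ratio per one-unit P/E increase
hr_company5_vs_2 = math.exp(beta_1 * (6.3 - 3.0))     # ~0.33, company 5 relative to company 2
print(hr_per_unit, hr_company5_vs_2)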
There are important caveats to mention about the interpretation:
The hazard ratio is the quantity {\displaystyle \exp(\beta _{1})}, which is {\displaystyle \exp(-0.34)=0.71} in the above example. From the last calculation above, this can be interpreted as the ratio of hazards between two "subjects" whose variables differ by one unit: if {\displaystyle P_{i}=P_{j}+1}, then {\displaystyle \exp(\beta _{1}(P_{i}-P_{j}))=\exp(\beta _{1}(1))=\exp(\beta _{1})}. The choice of "differ by one unit" is for convenience, as it communicates precisely the value of {\displaystyle \beta _{1}}.
The baseline hazard corresponds to the case where the scaling factor equals 1, i.e. {\displaystyle P=0}:
{\displaystyle \lambda (t|P_{i}=0)=\lambda _{0}(t)\cdot \exp(-0.34\cdot 0)=\lambda _{0}(t)}
Can we interpret the baseline hazard as the hazard of a "baseline" company whose P/E happens to be 0? This interpretation of the baseline hazard as "hazard of a baseline subject" is imperfect, as the covariate being 0 is impossible in this application: a P/E of 0 is meaningless (it means the company's stock price is 0, i.e., they are "dead"). A more appropriate interpretation would be "the hazard when all variables are nil".
It is tempting to interpret a value like {\displaystyle \exp(\beta _{1}P_{i})} as the hazard of a company. However, consider what this actually represents:
{\displaystyle \exp(\beta _{1}P_{i})=\exp(\beta _{1}(P_{i}-0))={\frac {\exp(\beta _{1}P_{i})}{\exp(\beta _{1}0)}}={\frac {\lambda (t|P_{i})}{\lambda (t|0)}}}
There is implicitly a ratio of hazards here, comparing company i's hazard to that of an imaginary baseline company with a P/E of 0. However, as explained above, a P/E of 0 is impossible in this application, so {\displaystyle \exp(\beta _{1}P_{i})} is meaningless in this example. Ratios between plausible hazards are meaningful, however.
== Time-varying predictors and coefficients ==
Extensions to time-dependent variables, time-dependent strata, and multiple events per subject can be incorporated by the counting process formulation of Andersen and Gill. One example of the use of hazard models with time-varying regressors is estimating the effect of unemployment insurance on unemployment spells.
In addition to allowing time-varying covariates (i.e., predictors), the Cox model may be generalized to time-varying coefficients as well. That is, the proportional effect of a treatment may vary with time; e.g. a drug may be very effective if administered within one month of morbidity, and become less effective as time goes on. The hypothesis of no change with time (stationarity) of the coefficient may then be tested. Details and software (R package) are available in Martinussen and Scheike (2006).
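As a hedged illustration (not the Andersen and Gill formulation itself, and not from Martinussen and Scheike), time-varying covariates are commonly fitted from data in "long" format, one row per subject per interval over which the covariates are constant. The following sketch uses the Python lifelines package; the column names and toy values are invented for illustration.

import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long-format toy data: each subject contributes one row per interval during
# which its covariates are constant; the event indicator refers to the end of
# the interval.
long_df = pd.DataFrame({
    "id":         [1, 1, 2, 2, 3, 4],
    "start":      [0, 6, 0, 3, 0, 0],
    "stop":       [6, 10, 3, 8, 5, 7],
    "event":      [0, 1, 0, 1, 1, 0],
    "on_benefit": [1, 0, 1, 1, 0, 0],   # example time-varying regressor
})

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()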
In this context, it is also theoretically possible to specify the effect of covariates by using additive hazards, i.e. specifying
{\displaystyle \lambda (t|X_{i})=\lambda _{0}(t)+\beta _{1}X_{i1}+\cdots +\beta _{p}X_{ip}=\lambda _{0}(t)+X_{i}\cdot \beta .}
If such additive hazards models are used in situations where (log-)likelihood maximization is the objective, care must be taken to restrict {\displaystyle \lambda (t\mid X_{i})} to non-negative values. Perhaps as a result of this complication, such models are seldom seen. If the objective is instead least squares, the non-negativity restriction is not strictly required.
== Specifying the baseline hazard function ==
The Cox model may be specialized if a reason exists to assume that the baseline hazard follows a particular form. In this case, the baseline hazard {\displaystyle \lambda _{0}(t)} is replaced by a given function. For example, assuming the hazard function to be the Weibull hazard function gives the Weibull proportional hazards model.
Incidentally, using the Weibull baseline hazard is the only circumstance under which the model satisfies both the proportional hazards and the accelerated failure time assumptions.
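For concreteness, a small Python sketch of the hazard under a Weibull baseline, lambda(t|x) = lambda_0(t) * exp(x . beta); the shape and scale parameter names (k, rho) and their default values are illustrative assumptions.

import numpy as np

def weibull_ph_hazard(t, x, beta, k=1.5, rho=10.0):
    # Weibull baseline hazard lambda_0(t) = (k / rho) * (t / rho) ** (k - 1),
    # scaled by the usual proportional hazards factor exp(x . beta).
    baseline = (k / rho) * (t / rho) ** (k - 1)
    return baseline * np.exp(np.dot(x, beta))

# Example: hazard at t = 2 for a subject with covariates [1.0, 0.5]
print(weibull_ph_hazard(2.0, np.array([1.0, 0.5]), np.array([0.3, -0.2])))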
The generic term parametric proportional hazards models can be used to describe proportional hazards models in which the hazard function is specified. The Cox proportional hazards model is sometimes called a semiparametric model by contrast.
Some authors use the term Cox proportional hazards model even when specifying the underlying hazard function, to acknowledge the debt of the entire field to David Cox.
The term Cox regression model (omitting proportional hazards) is sometimes used to describe the extension of the Cox model to include time-dependent factors. However, this usage is potentially ambiguous since the Cox proportional hazards model can itself be described as a regression model.
== Relationship to Poisson models ==
There is a relationship between proportional hazards models and Poisson regression models which is sometimes used to fit approximate proportional hazards models in software for Poisson regression. The usual reason for doing this is that calculation is much quicker. This was more important in the days of slower computers but can still be useful for particularly large data sets or complex problems. Laird and Olivier (1981) provide the mathematical details. They note, "we do not assume [the Poisson model] is true, but simply use it as a device for deriving the likelihood." McCullagh and Nelder's book on generalized linear models has a chapter on converting proportional hazards models to generalized linear models.
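A hedged sketch of this device in one common form: approximate the proportional hazards fit with a piecewise-exponential model, i.e. a Poisson GLM on per-interval event counts with log person-time as an offset. The column names, interval split, and toy data below are illustrative assumptions, not the construction of any particular reference.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per subject-interval: d = events in the interval, exposure = time at
# risk in the interval, x = covariate, interval = which piece of the timescale.
long_df = pd.DataFrame({
    "d":        [1, 1, 0, 0, 1, 0, 1, 0],
    "exposure": [5.0, 2.3, 5.0, 5.0, 1.1, 4.0, 3.0, 2.0],
    "x":        [1, 1, 0, 0, 0, 1, 0, 1],
    "interval": ["a", "b", "a", "b", "c", "c", "a", "b"],
})

design = pd.get_dummies(long_df[["x", "interval"]], columns=["interval"],
                        drop_first=True).astype(float)
design = sm.add_constant(design)
poisson_fit = sm.GLM(long_df["d"], design,
                     family=sm.families.Poisson(),
                     offset=np.log(long_df["exposure"])).fit()
print(poisson_fit.summary())   # the coefficient on x approximates the Cox log hazard ratio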
== Under high-dimensional setup ==
In the high-dimensional setting, where the number of covariates p is large compared to the sample size n, the LASSO method is one of the classical model-selection strategies. Tibshirani (1997) proposed a Lasso procedure for the proportional hazards regression parameter. The Lasso estimator of the regression parameter β is defined as the minimizer of the opposite of the Cox partial log-likelihood under an L1-norm type constraint.
{\displaystyle \ell (\beta )=\sum _{j}\left(\sum _{i\in H_{j}}X_{i}\cdot \beta -\sum _{\ell =0}^{m_{j}-1}\log \left(\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}\right)\right)+\lambda \|\beta \|_{1},}
There has been theoretical progress on this topic recently.
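A hedged sketch of such an L1-penalized fit using the Python lifelines package, whose CoxPHFitter exposes a penalizer and an l1_ratio (with l1_ratio=1.0 the penalty is pure L1, the penalizer playing the role of λ above). The synthetic data, column names, and penalty strength are illustrative assumptions.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, p = 50, 20                                    # many covariates relative to n
df = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"x{k}" for k in range(p)])
df["T"] = rng.exponential(scale=10.0, size=n)    # synthetic durations
df["C"] = rng.integers(0, 2, size=n)             # synthetic event indicators

lasso_cox = CoxPHFitter(penalizer=0.5, l1_ratio=1.0)
lasso_cox.fit(df, duration_col="T", event_col="C")
print(lasso_cox.params_)                         # many coefficients shrink toward zero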
== Software implementations ==
Mathematica: CoxModelFit function.
R: coxph() function, located in the survival package.
SAS: phreg procedure
Stata: stcox command
Python: CoxPHFitter located in the lifelines library. phreg in the statsmodels library.
SPSS: Available under Cox Regression.
MATLAB: fitcox or coxphfit function
Julia: Available in the Survival.jl library.
JMP: Available in Fit Proportional Hazards platform.
Prism: Available in Survival Analyses and Multiple Variable Analyses
== See also ==
Accelerated failure time model
One in ten rule
Weibull distribution
Hypertabastic distribution
== Notes ==
== References ==