This is a glossary of graph theory. Graph theory is the study of graphs, systems of nodes or vertices connected in pairs by lines or edges.
== Symbols ==
Square brackets [ ]
G[S] is the induced subgraph of a graph G for vertex subset S.
Prime symbol '
The prime symbol is often used to modify notation for graph invariants so that it applies to the line graph instead of the given graph. For instance, α(G) is the independence number of a graph; α′(G) is the matching number of the graph, which equals the independence number of its line graph. Similarly, χ(G) is the chromatic number of a graph; χ′(G) is the chromatic index of the graph, which equals the chromatic number of its line graph.
== A ==
absorbing
An absorbing set A of a directed graph G is a set of vertices such that for every vertex v ∉ A, there is an edge from v to a vertex of A.
achromatic
The achromatic number of a graph is the maximum number of colors in a complete coloring.
acyclic
1. A graph is acyclic if it has no cycles. An undirected acyclic graph is the same thing as a forest. An acyclic directed graph, which is a digraph without directed cycles, is often called a directed acyclic graph, especially in computer science.
2. An acyclic coloring of an undirected graph is a proper coloring in which every two color classes induce a forest.
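The directed case in sense 1 can be tested with Kahn's algorithm: a directed graph is acyclic if and only if repeatedly removing vertices of in-degree zero eventually removes every vertex. A minimal Python sketch (function and variable names are illustrative, not from the source):

```python
from collections import deque

def is_dag(n, edges):
    """Kahn's algorithm: a directed graph on vertices 0..n-1 is acyclic
    iff every vertex can be deleted in topological order."""
    indeg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in range(n) if indeg[v] == 0)
    removed = 0
    while queue:
        u = queue.popleft()
        removed += 1
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return removed == n  # all vertices removed means no directed cycle

print(is_dag(3, [(0, 1), (1, 2)]))          # True: a directed path
print(is_dag(3, [(0, 1), (1, 2), (2, 0)]))  # False: a directed triangle
```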
adjacency matrix
The adjacency matrix of a graph is a matrix whose rows and columns are both indexed by vertices of the graph, with a one in the cell for row i and column j when vertices i and j are adjacent, and a zero otherwise.
adjacent
1. The relation between two vertices that are both endpoints of the same edge.
2. The relation between two distinct edges that share an end vertex.
α
For a graph G, α(G) (using the Greek letter alpha) is its independence number (see independent), and α′(G) is its matching number (see matching).
alternating
In a graph with a matching, an alternating path is a path whose edges alternate between matched and unmatched edges. An alternating cycle is, similarly, a cycle whose edges alternate between matched and unmatched edges. An augmenting path is an alternating path that starts and ends at unsaturated vertices. A larger matching can be found as the symmetric difference of the matching and the augmenting path; a matching is maximum if and only if it has no augmenting path.
antichain
In a directed acyclic graph, a subset S of vertices that are pairwise incomparable; that is, for any two vertices x and y in S, there is no directed path from x to y or from y to x. Inspired by the notion of antichains in partially ordered sets.
anti-edge
Synonym for non-edge, a pair of non-adjacent vertices.
anti-triangle
A three-vertex independent set, the complement of a triangle.
apex
1. An apex graph is a graph in which one vertex can be removed, leaving a planar subgraph. The removed vertex is called the apex. A k-apex graph is a graph that can be made planar by the removal of k vertices.
2. Synonym for universal vertex, a vertex adjacent to all other vertices.
arborescence
Synonym for a rooted and directed tree; see tree.
arc
See edge.
arrow
An ordered pair of vertices, such as an edge in a directed graph. An arrow (x, y) has a tail x, a head y, and a direction from x to y; y is said to be the direct successor to x and x the direct predecessor to y. The arrow (y, x) is the inverted arrow of the arrow (x, y).
articulation point
A vertex in a connected graph whose removal would disconnect the graph. More generally, a vertex whose removal increases the number of components.
-ary
A k-ary tree is a rooted tree in which every internal vertex has no more than k children. A 1-ary tree is just a path. A 2-ary tree is also called a binary tree, although that term more properly refers to 2-ary trees in which the children of each node are distinguished as being left or right children (with at most one of each type). A k-ary tree is said to be complete if every internal vertex has exactly k children.
augmenting
A special type of alternating path; see alternating.
automorphism
A graph automorphism is a symmetry of a graph, an isomorphism from the graph to itself.
== B ==
bag
One of the sets of vertices in a tree decomposition.
balanced
A bipartite or multipartite graph is balanced if each two subsets of its vertex partition have sizes within one of each other.
ball
A ball (also known as a neighborhood ball or distance ball) is the set of all vertices that are at most distance r from a vertex. More formally, for a given vertex v and radius r, the ball B(v,r) consists of all vertices whose shortest path distance to v is less than or equal to r.
bandwidth
The bandwidth of a graph G is the minimum, over all orderings of vertices of G, of the length of the longest edge (the number of steps in the ordering between its two endpoints). It is also one less than the size of the maximum clique in a proper interval completion of G, chosen to minimize the clique size.
biclique
Synonym for complete bipartite graph or complete bipartite subgraph; see complete.
biconnected
Usually a synonym for 2-vertex-connected, but sometimes includes K2 though it is not 2-connected. See connected; for biconnected components, see component.
binding number
The smallest possible ratio of the number of neighbors of a proper subset of vertices to the size of the subset.
bipartite
A bipartite graph is a graph whose vertices can be divided into two disjoint sets such that the vertices in one set are not connected to each other, but may be connected to vertices in the other set. Put another way, a bipartite graph is a graph with no odd cycles; equivalently, it is a graph that may be properly colored with two colors. Bipartite graphs are often written G = (U,V,E) where U and V are the subsets of vertices of each color. However, unless the graph is connected, it may not have a unique 2-coloring.
biregular
A biregular graph is a bipartite graph in which there are only two different vertex degrees, one for each set of the vertex bipartition.
block
1. A block of a graph G is a maximal subgraph which is either an isolated vertex, a bridge edge, or a 2-connected subgraph. If a block is 2-connected, every pair of vertices in it belong to a common cycle. Every edge of a graph belongs in exactly one block.
2. The block graph of a graph G is another graph whose vertices are the blocks of G, with an edge connecting two vertices when the corresponding blocks share an articulation point; that is, it is the intersection graph of the blocks of G. The block graph of any graph is a forest.
3. The block-cut (or block-cutpoint) graph of a graph G is a bipartite graph where one partite set consists of the cut-vertices of G, and the other has a vertex b_i for each block B_i of G. When G is connected, its block-cutpoint graph is a tree.
4. A block graph (also called a clique tree if connected, and sometimes erroneously called a Husimi tree) is a graph all of whose blocks are complete graphs. A forest is a block graph; so in particular the block graph of any graph is a block graph, and every block graph may be constructed as the block graph of a graph.
bond
A minimal cut-set: a set of edges whose removal disconnects the graph, for which no proper subset has the same property.
book
1. A book, book graph, or triangular book is a complete tripartite graph K1,1,n; a collection of n triangles joined at a shared edge.
2. Another type of graph, also called a book, or a quadrilateral book, is a collection of 4-cycles joined at a shared edge; the Cartesian product of a star with an edge.
3. A book embedding is an embedding of a graph onto a topological book, a space formed by joining a collection of half-planes along a shared line. Usually, the vertices of the embedding are required to be on the line, which is called the spine of the embedding, and the edges of the embedding are required to lie within a single half-plane, one of the pages of the book.
boundary
1. In a graph embedding, a boundary walk is the subgraph containing all edges and vertices incident to a face.
bramble
A bramble is a collection of mutually touching connected subgraphs, where two subgraphs touch if they share a vertex or each includes one endpoint of an edge. The order of a bramble is the smallest size of a set of vertices that has a nonempty intersection with all of the subgraphs. The treewidth of a graph is the maximum order of any of its brambles.
branch
A path of degree-two vertices, ending at vertices whose degree is unequal to two.
branch-decomposition
A branch-decomposition of G is a hierarchical clustering of the edges of G, represented by an unrooted binary tree with its leaves labeled by the edges of G. The width of a branch-decomposition is the maximum, over edges e of this binary tree, of the number of shared vertices between the subgraphs determined by the edges of G in the two subtrees separated by e. The branchwidth of G is the minimum width of any branch-decomposition of G.
branchwidth
See branch-decomposition.
bridge
1. A bridge, isthmus, or cut edge is an edge whose removal would disconnect the graph. A bridgeless graph is one that has no bridges; equivalently, a 2-edge-connected graph.
2. A bridge of a subgraph H is a maximal connected subgraph separated from the rest of the graph by H. That is, it is a maximal subgraph that is edge-disjoint from H and in which each two vertices and edges belong to a path that is internally disjoint from H. H may be a set of vertices. A chord is a one-edge bridge.
In planarity testing, H is a cycle and a peripheral cycle is a cycle with at most one bridge; it must be a face boundary in any planar embedding of its graph.
3. A bridge of a cycle can also mean a path that connects two vertices of a cycle but is shorter than either of the paths in the cycle connecting the same two vertices. A bridged graph is a graph in which every cycle of four or more vertices has a bridge.
bridgeless
A bridgeless or isthmus-free graph is a graph that has no bridge edges (i.e., isthmi); that is, each connected component is a 2-edge-connected graph.
butterfly
1. The butterfly graph has five vertices and six edges; it is formed by two triangles that share a vertex.
2. The butterfly network is a graph used as a network architecture in distributed computing, closely related to the cube-connected cycles.
== C ==
C
Cn is an n-vertex cycle graph; see cycle.
cactus
A cactus graph, cactus tree, cactus, or Husimi tree is a connected graph in which each edge belongs to at most one cycle. Its blocks are cycles or single edges. If, in addition, each vertex belongs to at most two blocks, then it is called a Christmas cactus.
cage
A cage is a regular graph with the smallest possible order for its girth.
canonical
canonization
A canonical form of a graph is an invariant such that two graphs have equal invariants if and only if they are isomorphic. Canonical forms may also be called canonical invariants or complete invariants, and are sometimes defined only for the graphs within a particular family of graphs. Graph canonization is the process of computing a canonical form.
card
A graph formed from a given graph by deleting one vertex, especially in the context of the reconstruction conjecture. See also deck, the multiset of all cards of a graph.
carving width
Carving width is a notion of graph width analogous to branchwidth, but using hierarchical clusterings of vertices instead of hierarchical clusterings of edges.
caterpillar
A caterpillar tree or caterpillar is a tree in which the internal nodes induce a path.
center
The center of a graph is the set of vertices of minimum eccentricity.
centroid
A centroid of a tree is a vertex v such that if rooted at v, no other vertex has subtree size greater than half the size of the tree.
chain
1. Synonym for walk.
2. When applying methods from algebraic topology to graphs, an element of a chain complex, namely a set of vertices or a set of edges.
Cheeger constant
See expansion.
cherry
A cherry is a path on three vertices.
χ
χ(G) (using the Greek letter chi) is the chromatic number of G and χ′(G) is its chromatic index; see chromatic and coloring.
child
In a rooted tree, a child of a vertex v is a neighbor of v along an outgoing edge, one that is directed away from the root.
chord
chordal
1. A chord of a cycle is an edge that does not belong to the cycle, for which both endpoints belong to the cycle.
2. A chordal graph is a graph in which every cycle of four or more vertices has a chord, so the only induced cycles are triangles.
3. A strongly chordal graph is a chordal graph in which every cycle of length six or more has an odd chord.
4. A chordal bipartite graph is not chordal (unless it is a forest); it is a bipartite graph in which every cycle of six or more vertices has a chord, so the only induced cycles are 4-cycles.
5. A chord of a circle is a line segment connecting two points on the circle; the intersection graph of a collection of chords is called a circle graph.
chromatic
Having to do with coloring; see color. Chromatic graph theory is the theory of graph coloring. The chromatic number χ(G) is the minimum number of colors needed in a proper coloring of G. χ′(G) is the chromatic index of G, the minimum number of colors needed in a proper edge coloring of G.
choosable
choosability
A graph is k-choosable if it has a list coloring whenever each vertex has a list of k available colors. The choosability of the graph is the smallest k for which it is k-choosable.
circle
A circle graph is the intersection graph of chords of a circle.
circuit
A circuit may refer to a closed trail or an element of the cycle space (an Eulerian spanning subgraph). The circuit rank of a graph is the dimension of its cycle space.
circumference
The circumference of a graph is the length of its longest simple cycle. The graph is Hamiltonian if and only if its circumference equals its order.
class
1. A class of graphs or family of graphs is a (usually infinite) collection of graphs, often defined as the graphs having some specific property. The word "class" is used rather than "set" because, unless special restrictions are made (such as restricting the vertices to be drawn from a particular set, and defining edges to be sets of two vertices) classes of graphs are usually not sets when formalized using set theory.
2. A color class of a colored graph is the set of vertices or edges having one particular color.
3. In the context of Vizing's theorem, on edge coloring simple graphs, a graph is said to be of class one if its chromatic index equals its maximum degree, and class two if its chromatic index equals one plus the maximum degree. According to Vizing's theorem, all simple graphs are either of class one or class two.
claw
A claw is a tree with one internal vertex and three leaves, or equivalently the complete bipartite graph K1,3. A claw-free graph is a graph that does not have an induced subgraph that is a claw.
clique
A clique is a set of mutually adjacent vertices (or the complete subgraph induced by that set). Sometimes a clique is defined as a maximal set of mutually adjacent vertices (or maximal complete subgraph), one that is not part of any larger such set (or subgraph). A k-clique is a clique of order k. The clique number ω(G) of a graph G is the order of its largest clique. The clique graph of a graph G is the intersection graph of the maximal cliques in G. See also biclique, a complete bipartite subgraph.
clique tree
A synonym for a block graph.
clique-width
The clique-width of a graph G is the minimum number of distinct labels needed to construct G by operations that create a labeled vertex, form the disjoint union of two labeled graphs, add an edge connecting all pairs of vertices with given labels, or relabel all vertices with a given label. The graphs of clique-width at most 2 are exactly the cographs.
closed
1. A closed neighborhood is one that includes its central vertex; see neighbourhood.
2. A closed walk is one that starts and ends at the same vertex; see walk.
3. A graph is transitively closed if it equals its own transitive closure; see transitive.
4. A graph property is closed under some operation on graphs if, whenever the argument or arguments to the operation have the property, then so does the result. For instance, hereditary properties are closed under induced subgraphs; monotone properties are closed under subgraphs; and minor-closed properties are closed under minors.
closure
1. For the transitive closure of a directed graph, see transitive.
2. A closure of a directed graph is a set of vertices that have no outgoing edges to vertices outside the closure. For instance, a sink is a one-vertex closure. The closure problem is the problem of finding a closure of minimum or maximum weight.
co-
This prefix has various meanings usually involving complement graphs. For instance, a cograph is a graph produced by operations that include complementation; a cocoloring is a coloring in which each vertex induces either an independent set (as in proper coloring) or a clique (as in a coloring of the complement).
color
coloring
1. A graph coloring is a labeling of the vertices of a graph by elements from a given set of colors, or equivalently a partition of the vertices into subsets, called "color classes", each of which is associated with one of the colors.
2. Some authors use "coloring", without qualification, to mean a proper coloring, one that assigns different colors to the endpoints of each edge. In graph coloring, the goal is to find a proper coloring that uses as few colors as possible; for instance, bipartite graphs are the graphs that have colorings with only two colors, and the four color theorem states that every planar graph can be colored with at most four colors. A graph is said to be k-colored if it has been (properly) colored with k colors, and k-colorable or k-chromatic if this is possible.
3. Many variations of coloring have been studied, including edge coloring (coloring edges so that no two edges with the same endpoint share a color), list coloring (proper coloring with each vertex restricted to a subset of the available colors), acyclic coloring (every 2-colored subgraph is acyclic), co-coloring (every color class induces an independent set or a clique), complete coloring (every two color classes share an edge), and total coloring (both edges and vertices are colored).
4. The coloring number of a graph is one plus the degeneracy. It is so called because applying a greedy coloring algorithm to a degeneracy ordering of the graph uses at most this many colors.
comparability
An undirected graph is a comparability graph if its vertices are the elements of a partially ordered set and two vertices are adjacent when they are comparable in the partial order. Equivalently, a comparability graph is a graph that has a transitive orientation. Many other classes of graphs can be defined as the comparability graphs of special types of partial order.
complement
The complement graph G̅ of a simple graph G is another graph on the same vertex set as G, with an edge for each two vertices that are not adjacent in G.
complete
1. A complete graph is one in which every two vertices are adjacent: all edges that could exist are present. A complete graph with n vertices is often denoted Kn. A complete bipartite graph is one in which every two vertices on opposite sides of the partition of vertices are adjacent. A complete bipartite graph with a vertices on one side of the partition and b vertices on the other side is often denoted Ka,b. The same terminology and notation have also been extended to complete multipartite graphs, graphs in which the vertices are divided into more than two subsets and every pair of vertices in different subsets are adjacent; if the numbers of vertices in the subsets are a, b, c, ... then this graph is denoted Ka, b, c, ....
2. A completion of a given graph is a supergraph that has some desired property. For instance, a chordal completion is a supergraph that is a chordal graph.
3. A complete matching is a synonym for a perfect matching; see matching.
4. A complete coloring is a proper coloring in which each pair of colors is used for the endpoints of at least one edge. Every coloring with a minimum number of colors is complete, but there may exist complete colorings with larger numbers of colors. The achromatic number of a graph is the maximum number of colors in a complete coloring.
5. A complete invariant of a graph is a synonym for a canonical form, an invariant that has different values for non-isomorphic graphs.
component
A connected component of a graph is a maximal connected subgraph. The term is also used for maximal subgraphs or subsets of a graph's vertices that have some higher order of connectivity, including biconnected components, triconnected components, and strongly connected components.
condensation
The condensation of a directed graph G is a directed acyclic graph with one vertex for each strongly connected component of G, and an edge connecting pairs of components that contain the two endpoints of at least one edge in G.
cone
A graph that contains a universal vertex.
connect
Cause to be connected.
connected
A connected graph is one in which each pair of vertices forms the endpoints of a path. Higher forms of connectivity include strong connectivity in directed graphs (for each two vertices there are paths from one to the other in both directions), k-vertex-connected graphs (removing fewer than k vertices cannot disconnect the graph), and k-edge-connected graphs (removing fewer than k edges cannot disconnect the graph).
connected component
Synonym for component.
contraction
Edge contraction is an elementary operation that removes an edge from a graph while merging the two vertices that it previously joined. Vertex contraction (sometimes called vertex identification) is similar, but the two vertices are not necessarily connected by an edge. Path contraction occurs upon the set of edges in a path that contract to form a single edge between the endpoints of the path. The inverse of edge contraction is vertex splitting.
converse
The converse graph is a synonym for the transpose graph; see transpose.
core
1. A k-core is the induced subgraph formed by removing all vertices of degree less than k, and all vertices whose degree becomes less than k after earlier removals. See degeneracy.
2. A core is a graph G such that every graph homomorphism from G to itself is an isomorphism.
3. The core of a graph G is a minimal graph H such that there exist homomorphisms from G to H and vice versa. H is unique up to isomorphism. It can be represented as an induced subgraph of G, and is a core in the sense that all of its self-homomorphisms are isomorphisms.
4. In the theory of graph matchings, the core of a graph is an aspect of its Dulmage–Mendelsohn decomposition, formed as the union of all maximum matchings.
cotree
1. The complement of a spanning tree.
2. A rooted tree structure used to describe a cograph, in which each cograph vertex is a leaf of the tree, each internal node of the tree is labeled with 0 or 1, and two cograph vertices are adjacent if and only if their lowest common ancestor in the tree is labeled 1.
cover
A vertex cover is a set of vertices incident to every edge in a graph. An edge cover is a set of edges incident to every vertex in a graph. A set of subgraphs of a graph covers that graph if its union – taken vertex-wise and edge-wise – is equal to the graph.
critical
A critical graph for a given property is a graph that has the property but such that every subgraph formed by deleting a single vertex does not have the property. For instance, a factor-critical graph is one that has a perfect matching (a 1-factor) for every vertex deletion, but (because it has an odd number of vertices) has no perfect matching itself. Compare hypo-, used for graphs which do not have a property but for which every one-vertex deletion does.
cube
cubic
1. Cube graph, the eight-vertex graph of the vertices and edges of a cube.
2. Hypercube graph, a higher-dimensional generalization of the cube graph.
3. Folded cube graph, formed from a hypercube by adding a matching connecting opposite vertices.
4. Halved cube graph, the half-square of a hypercube graph.
5. Partial cube, a distance-preserving subgraph of a hypercube.
6. The cube of a graph G is the graph power G3.
7. Cubic graph, another name for a 3-regular graph, one in which each vertex has three incident edges.
8. Cube-connected cycles, a cubic graph formed by replacing each vertex of a hypercube by a cycle.
cut
cut-set
A cut is a partition of the vertices of a graph into two subsets, or the set (also known as a cut-set) of edges that span such a partition, if that set is non-empty. An edge is said to span the partition if it has endpoints in both subsets. Thus, the removal of a cut-set from a connected graph disconnects it.
cut point
See articulation point.
cut space
The cut space of a graph is a GF(2)-vector space having the cut-sets of the graph as its elements and symmetric difference of sets as its vector addition operation.
cycle
1. A cycle may be either a kind of graph or a kind of walk. As a walk it may either be a closed walk (also called a tour) or, more usually, a closed walk without repeated vertices (and consequently without repeated edges), also called a simple cycle. In the latter case it is usually regarded as a graph, i.e., the choices of first vertex and direction are usually considered unimportant; that is, cyclic permutations and reversals of the walk produce the same cycle. Important special types of cycle include Hamiltonian cycles, induced cycles, peripheral cycles, and the shortest cycle, which defines the girth of a graph. A k-cycle is a cycle of length k; for instance a 2-cycle is a digon and a 3-cycle is a triangle. A cycle graph is a graph that is itself a simple cycle; a cycle graph with n vertices is commonly denoted Cn.
2. The cycle space is a vector space generated by the simple cycles in a graph, often over the field of 2 elements but also over other fields.
== D ==
DAG
Abbreviation for directed acyclic graph, a directed graph without any directed cycles.
deck
The multiset of graphs formed from a single graph G by deleting a single vertex in all possible ways, especially in the context of the reconstruction conjecture. An edge-deck is formed in the same way by deleting a single edge in all possible ways. The graphs in a deck are also called cards. See also critical (graphs that have a property that is not held by any card) and hypo- (graphs that do not have a property that is held by all cards).
decomposition
See tree decomposition, path decomposition, or branch-decomposition.
degenerate
degeneracy
A k-degenerate graph is an undirected graph in which every induced subgraph has minimum degree at most k. The degeneracy of a graph is the smallest k for which it is k-degenerate. A degeneracy ordering is an ordering of the vertices such that each vertex has minimum degree in the induced subgraph of it and all later vertices; in a degeneracy ordering of a k-degenerate graph, every vertex has at most k later neighbours. Degeneracy is also known as the k-core number, width, and linkage, and one plus the degeneracy is also called the coloring number or Szekeres–Wilf number. k-degenerate graphs have also been called k-inductive graphs.
degree
1. The degree of a vertex in a graph is its number of incident edges. The degree of a graph G (or its maximum degree) is the maximum of the degrees of its vertices, often denoted Δ(G); the minimum degree of G is the minimum of its vertex degrees, often denoted δ(G). Degree is sometimes called valency; the degree of v in G may be denoted dG(v), d(v), or deg(v). The total degree is the sum of the degrees of all vertices; by the handshaking lemma it is an even number. The degree sequence is the collection of degrees of all vertices, in sorted order from largest to smallest. In a directed graph, one may distinguish the in-degree (number of incoming edges) and out-degree (number of outgoing edges).
2. The homomorphism degree of a graph is a synonym for its Hadwiger number, the order of the largest clique minor.
Δ, δ
Δ(G) (using the Greek letter delta) is the maximum degree of a vertex in G, and δ(G) is the minimum degree; see degree.
density
In a graph of n nodes, the density is the ratio of the number of edges of the graph to the number of edges in a complete graph on n nodes. See dense graph.
depth
The depth of a node in a rooted tree is the number of edges in the path from the root to the node. For instance, the depth of the root is 0 and the depth of any one of its adjacent nodes is 1. It is the level of a node minus one. Note, however, that some authors instead use depth as a synonym for the level of a node.
diameter
The diameter of a connected graph is the maximum length of a shortest path. That is, it is the maximum of the distances between pairs of vertices in the graph. If the graph has weights on its edges, then its weighted diameter measures path length by the sum of the edge weights along a path, while the unweighted diameter measures path length by the number of edges.
For disconnected graphs, definitions vary: the diameter may be defined as infinite, or as the largest diameter of a connected component, or it may be undefined.
diamond
The diamond graph is an undirected graph with four vertices and five edges.
diconnected
Strongly connected. (Not to be confused with disconnected)
digon
A digon is a simple cycle of length two in a directed graph or a multigraph. Digons cannot occur in simple undirected graphs as they require repeating the same edge twice, which violates the definition of simple.
digraph
Synonym for directed graph.
dipath
See directed path.
direct predecessor
The tail of a directed edge whose head is the given vertex.
direct successor
The head of a directed edge whose tail is the given vertex.
directed
A directed graph is one in which the edges have a distinguished direction, from one vertex to another. In a mixed graph, a directed edge is again one that has a distinguished direction; directed edges may also be called arcs or arrows.
directed arc
See arrow.
directed edge
See arrow.
directed line
See arrow.
directed path
A path in which all the edges have the same direction. If a directed path leads from vertex x to vertex y, x is a predecessor of y, y is a successor of x, and y is said to be reachable from x.
direction
1. The asymmetric relation between two adjacent vertices in a graph, represented as an arrow.
2. The asymmetric relation between two vertices in a directed path.
disconnect
Cause to be disconnected.
disconnected
Not connected.
disjoint
1. Two subgraphs are edge disjoint if they share no edges, and vertex disjoint if they share no vertices.
2. The disjoint union of two or more graphs is a graph whose vertex and edge sets are the disjoint unions of the corresponding sets.
dissociation number
A subset of vertices in a graph G is called a dissociation set if it induces a subgraph with maximum degree 1.
distance
The distance between any two vertices in a graph is the length of the shortest path having the two vertices as its endpoints.
domatic
A domatic partition of a graph is a partition of the vertices into dominating sets. The domatic number of the graph is the maximum number of dominating sets in such a partition.
dominating
A dominating set is a set of vertices that includes or is adjacent to every vertex in the graph; not to be confused with a vertex cover, a vertex set that is incident to all edges in the graph. Important special types of dominating sets include independent dominating sets (dominating sets that are also independent sets) and connected dominating sets (dominating sets that induce connected subgraphs). A single-vertex dominating set may also be called a universal vertex. The domination number of a graph is the number of vertices in the smallest dominating set.
dual
A dual graph of a plane graph G is a graph that has a vertex for each face of G.
== E ==
E
E(G) is the edge set of G; see edge set.
ear
An ear of a graph is a path whose endpoints may coincide but in which otherwise there are no repetitions of vertices or edges.
ear decomposition
An ear decomposition is a partition of the edges of a graph into a sequence of ears, each of whose endpoints (after the first one) belong to a previous ear and each of whose interior points do not belong to any previous ear. An open ear is a simple path (an ear without repeated vertices), and an open ear decomposition is an ear decomposition in which each ear after the first is open; a graph has an open ear decomposition if and only if it is biconnected. An ear is odd if it has an odd number of edges, and an odd ear decomposition is an ear decomposition in which each ear is odd; a graph has an odd ear decomposition if and only if it is factor-critical.
eccentricity
The eccentricity of a vertex is the maximum distance from it to any other vertex.
edge
An edge is (together with vertices) one of the two basic units out of which graphs are constructed. Each edge has two (or in hypergraphs, more) vertices to which it is attached, called its endpoints. Edges may be directed or undirected; undirected edges are also called lines and directed edges are also called arcs or arrows. In an undirected simple graph, an edge may be represented as the set of its vertices, and in a directed simple graph it may be represented as an ordered pair of its vertices. An edge that connects vertices x and y is sometimes written xy.
edge cut
A set of edges whose removal disconnects the graph. A one-edge cut is called a bridge, isthmus, or cut edge.
edge set
The set of edges of a given graph G, sometimes denoted by E(G).
edgeless graph
The edgeless graph or totally disconnected graph on a given set of vertices is the graph that has no edges. It is sometimes called the empty graph, but this term can also refer to a graph with no vertices.
embedding
A graph embedding is a topological representation of a graph as a subset of a topological space with each vertex represented as a point, each edge represented as a curve having the endpoints of the edge as endpoints of the curve, and no other intersections between vertices or edges. A planar graph is a graph that has such an embedding onto the Euclidean plane, and a toroidal graph is a graph that has such an embedding onto a torus. The genus of a graph is the minimum possible genus of a two-dimensional manifold onto which it can be embedded.
empty graph
1. An edgeless graph on a nonempty set of vertices.
2. The order-zero graph, a graph with no vertices and no edges.
end
An end of an infinite graph is an equivalence class of rays, where two rays are equivalent if there is a third ray that includes infinitely many vertices from both of them.
endpoint
One of the two vertices joined by a given edge, or the first or last vertex of a walk, trail, or path. The first endpoint of a given directed edge is called the tail and the second endpoint is called the head.
enumeration
Graph enumeration is the problem of counting the graphs in a given class of graphs, as a function of their order. More generally, enumeration problems can refer either to problems of counting a certain class of combinatorial objects (such as cliques, independent sets, colorings, or spanning trees), or of algorithmically listing all such objects.
Eulerian
An Eulerian path is a walk that uses every edge of a graph exactly once. An Eulerian circuit (also called an Eulerian cycle or an Euler tour) is a closed walk that uses every edge exactly once. An Eulerian graph is a graph that has an Eulerian circuit. For an undirected graph, this means that the graph is connected and every vertex has even degree. For a directed graph, this means that the graph is strongly connected and every vertex has in-degree equal to the out-degree. In some cases, the connectivity requirement is loosened, and a graph meeting only the degree requirements is called Eulerian.
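For an undirected graph the two conditions above (connectivity and all-even degrees) are easy to test directly. A sketch, assuming the graph is a simple adjacency dict with no isolated vertices (an illustrative representation, not one fixed by the glossary):

```python
def is_eulerian(adj):
    """True if the undirected graph has an Eulerian circuit: it must be
    connected and every vertex must have even degree."""
    # Degree condition: every vertex has an even number of neighbours.
    if any(len(neighbors) % 2 for neighbors in adj.values()):
        return False
    # Connectivity: depth-first search from an arbitrary vertex.
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == len(adj)

# A 4-cycle is Eulerian; a 3-vertex path is not (its two ends have odd degree).
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
path3 = {0: [1], 1: [0, 2], 2: [1]}
```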
even
Divisible by two; for instance, an even cycle is a cycle whose length is even.
expander
An expander graph is a graph whose edge expansion, vertex expansion, or spectral expansion is bounded away from zero.
expansion
1. The edge expansion, isoperimetric number, or Cheeger constant of a graph G is the minimum ratio, over subsets S of at most half of the vertices of G, of the number of edges leaving S to the number of vertices in S.
2. The vertex expansion, vertex isoperimetric number, or magnification of a graph G is the minimum ratio, over subsets S of at most half of the vertices of G, of the number of vertices outside but adjacent to S to the number of vertices in S.
3. The unique neighbor expansion of a graph G is the minimum ratio, over subsets S of at most half of the vertices of G, of the number of vertices outside S but adjacent to a unique vertex in S to the number of vertices in S.
4. The spectral expansion of a d-regular graph G is the spectral gap between the largest eigenvalue d of its adjacency matrix and the second-largest eigenvalue.
5. A family of graphs has bounded expansion if all its r-shallow minors have a ratio of edges to vertices bounded by a function of r, and polynomial expansion if the function of r is a polynomial.
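Definition 1 (edge expansion) can be evaluated by brute force over all small vertex subsets. This sketch takes exponential time and is for illustration only; it assumes an adjacency-dict input:

```python
from itertools import combinations

def edge_expansion(adj):
    """Minimum, over nonempty subsets S of at most half the vertices, of
    (number of edges leaving S) / |S|.  Exponential time."""
    vertices = list(adj)
    best = float("inf")
    for k in range(1, len(vertices) // 2 + 1):
        for S in combinations(vertices, k):
            inside = set(S)
            # Each edge leaving S is counted once, at its endpoint in S.
            leaving = sum(1 for u in S for v in adj[u] if v not in inside)
            best = min(best, leaving / k)
    return best

# On a 4-cycle the best cut takes two adjacent vertices: 2 edges leave,
# giving ratio 1.  On a triangle the only subsets are singletons: ratio 2.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```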
== F ==
face
In a plane graph or graph embedding, a connected component of the subset of the plane or surface of the embedding that is disjoint from the graph. For an embedding in the plane, all but one face will be bounded; the one exceptional face that extends to infinity is called the outer (or infinite) face.
factor
A factor of a graph is a spanning subgraph: a subgraph that includes all of the vertices of the graph. The term is primarily used in the context of regular subgraphs: a k-factor is a factor that is k-regular. In particular, a 1-factor is the same thing as a perfect matching. A factor-critical graph is a graph for which deleting any one vertex produces a graph with a 1-factor.
factorization
A graph factorization is a partition of the edges of the graph into factors; a k-factorization is a partition into k-factors. For instance a 1-factorization is an edge coloring with the additional property that each vertex is incident to an edge of each color.
family
A synonym for class.
finite
A graph is finite if it has a finite number of vertices and a finite number of edges. Many sources assume that all graphs are finite without explicitly saying so. A graph is locally finite if each vertex has a finite number of incident edges. An infinite graph is a graph that is not finite: it has infinitely many vertices, infinitely many edges, or both.
first order
The first order logic of graphs is a form of logic in which variables represent vertices of a graph, and there exists a binary predicate to test whether two vertices are adjacent. To be distinguished from second order logic, in which variables can also represent sets of vertices or edges.
-flap
For a set of vertices X, an X-flap is a connected component of the induced subgraph formed by deleting X. The flap terminology is commonly used in the context of havens, functions that map small sets of vertices to their flaps. See also the bridge of a cycle, which is either a flap of the cycle vertices or a chord of the cycle.
forbidden
A forbidden graph characterization is a characterization of a family of graphs as being the graphs that do not have certain other graphs as subgraphs, induced subgraphs, or minors. If H is one of the graphs that does not occur as a subgraph, induced subgraph, or minor, then H is said to be forbidden.
forcing graph
A forcing graph is a graph H such that evaluating the subgraph density of H in the graphs of a graph sequence G(n) is sufficient to test whether that sequence is quasi-random.
forest
A forest is an undirected graph without cycles (a disjoint union of unrooted trees), or a directed graph formed as a disjoint union of rooted trees.
free edge
An edge which is not in a matching.
free vertex
A vertex that is not an endpoint of any edge in a given matching; an unmatched vertex.
Frucht
1. Robert Frucht
2. The Frucht graph, one of the two smallest cubic graphs with no nontrivial symmetries.
3. Frucht's theorem that every finite group is the group of symmetries of a finite graph.
full
Synonym for induced.
functional graph
A functional graph is a directed graph where every vertex has out-degree one. Equivalently, a functional graph is a maximal directed pseudoforest.
== G ==
G
A variable often used to denote a graph.
genus
The genus of a graph is the minimum genus of a surface onto which it can be embedded; see embedding.
geodesic
As a noun, a geodesic is a synonym for a shortest path. When used as an adjective, it means related to shortest paths or shortest path distances.
giant
In the theory of random graphs, a giant component is a connected component that contains a constant fraction of the vertices of the graph. In standard models of random graphs, there is typically at most one giant component.
girth
The girth of a graph is the length of its shortest cycle.
graph
The fundamental object of study in graph theory, a system of vertices connected in pairs by edges. Often subdivided into directed graphs or undirected graphs according to whether the edges have an orientation or not. Mixed graphs include both types of edges.
greedy
Produced by a greedy algorithm. For instance, a greedy coloring of a graph is a coloring produced by considering the vertices in some sequence and assigning each vertex the first available color.
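The greedy coloring just described can be sketched in a few lines, assuming (for illustration) a graph given as an adjacency dict and an explicit vertex ordering:

```python
def greedy_coloring(adj, order):
    """Color vertices in the given order, assigning each the smallest
    color index not already used by one of its colored neighbours."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# On the path a-b-c with the order [a, c, b], both ends get color 0 and
# the middle vertex gets color 1.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
```

The number of colors used depends on the ordering, which is the idea behind the Grundy number (the worst case over all orderings).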
Grötzsch
1. Herbert Grötzsch
2. The Grötzsch graph, the smallest triangle-free graph requiring four colors in any proper coloring.
3. Grötzsch's theorem that triangle-free planar graphs can always be colored with at most three colors.
Grundy number
The Grundy number of a graph is the maximum number of colors that a greedy coloring can produce, over all choices of vertex ordering.
== H ==
H
A variable often used to denote a graph, especially when another graph has already been denoted by G.
H-coloring
An H-coloring of a graph G (where H is also a graph) is a homomorphism from G to H.
H-free
A graph is H-free if it does not have an induced subgraph isomorphic to H, that is, if H is a forbidden induced subgraph. The H-free graphs are the family of all graphs (or, often, all finite graphs) that are H-free. For instance the triangle-free graphs are the graphs that do not have a triangle graph as a subgraph. The property of being H-free is always hereditary. A graph is H-minor-free if it does not have a minor isomorphic to H.
Hadwiger
1. Hugo Hadwiger
2. The Hadwiger number of a graph is the order of the largest complete minor of the graph. It is also called the contraction clique number or the homomorphism degree.
3. The Hadwiger conjecture is the conjecture that the Hadwiger number is never less than the chromatic number.
Hamiltonian
A Hamiltonian path or Hamiltonian cycle is a simple spanning path or simple spanning cycle: it covers all of the vertices in the graph exactly once. A graph is Hamiltonian if it contains a Hamiltonian cycle, and traceable if it contains a Hamiltonian path.
haven
A k-haven is a function that maps every set X of fewer than k vertices to one of its flaps, often satisfying additional consistency conditions. The order of a haven is the number k. Havens can be used to characterize the treewidth of finite graphs and the ends and Hadwiger numbers of infinite graphs.
height
1. The height of a node in a rooted tree is the number of edges in a longest path, going away from the root (i.e. its nodes have strictly increasing depth), that starts at that node and ends at a leaf.
2. The height of a rooted tree is the height of its root. That is, the height of a tree is the number of edges in a longest possible path, going away from the root, that starts at the root and ends at a leaf.
3. The height of a directed acyclic graph is the maximum length of a directed path in this graph.
hereditary
A hereditary property of graphs is a property that is closed under induced subgraphs: if G has a hereditary property, then so must every induced subgraph of G. Compare monotone (closed under all subgraphs) or minor-closed (closed under minors).
hexagon
A simple cycle consisting of exactly six edges and six vertices.
hole
A hole is an induced cycle of length four or more. An odd hole is a hole of odd length. An anti-hole is an induced subgraph of order at least four whose complement is a cycle; equivalently, it is a hole in the complement graph. This terminology is mainly used in the context of perfect graphs, which are characterized by the strong perfect graph theorem as being the graphs with no odd holes or odd anti-holes. The hole-free graphs are the same as the chordal graphs.
homomorphic equivalence
Two graphs are homomorphically equivalent if there exist two homomorphisms, one from each graph to the other graph.
homomorphism
1. A graph homomorphism is a mapping from the vertex set of one graph to the vertex set of another graph that maps adjacent vertices to adjacent vertices. This type of mapping between graphs is the one that is most commonly used in category-theoretic approaches to graph theory. A proper graph coloring can equivalently be described as a homomorphism to a complete graph.
2. The homomorphism degree of a graph is a synonym for its Hadwiger number, the order of the largest clique minor.
hyperarc
A directed hyperedge, having a source set and a target set.
hyperedge
An edge in a hypergraph, having any number of endpoints, in contrast to the requirement that edges of graphs have exactly two endpoints.
hypercube
A hypercube graph is a graph formed from the vertices and edges of a geometric hypercube.
hypergraph
A hypergraph is a generalization of a graph in which each edge (called a hyperedge in this context) may have more than two endpoints.
hypo-
This prefix, in combination with a graph property, indicates a graph that does not have the property but such that every subgraph formed by deleting a single vertex does have the property. For instance, a hypohamiltonian graph is one that does not have a Hamiltonian cycle, but for which every one-vertex deletion produces a Hamiltonian subgraph. Compare critical, used for graphs which have a property but for which every one-vertex deletion does not.
== I ==
in-degree
The number of incoming edges in a directed graph; see degree.
incidence
An incidence in a graph is a vertex-edge pair such that the vertex is an endpoint of the edge.
incidence matrix
The incidence matrix of a graph is a matrix whose rows are indexed by vertices of the graph, and whose columns are indexed by edges, with a one in the cell for row i and column j when vertex i and edge j are incident, and a zero otherwise.
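For an undirected simple graph the incidence matrix can be built directly from the definition. A sketch, with vertices and edges passed in explicit lists (an illustrative interface, not a standard one):

```python
def incidence_matrix(vertices, edges):
    """Rows indexed by vertices, columns by edges; entry 1 exactly when
    the vertex is an endpoint of the edge (undirected simple graph)."""
    return [[1 if v in e else 0 for e in edges] for v in vertices]

# Triangle on {a, b, c}: each column has exactly two 1s, one per endpoint.
M = incidence_matrix(["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")])
```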
incident
The relation between an edge and one of its endpoints.
incomparability
An incomparability graph is the complement of a comparability graph; see comparability.
independent
1. An independent set is a set of vertices that induces an edgeless subgraph. It may also be called a stable set or a coclique. The independence number α(G) is the size of the maximum independent set.
2. In the graphic matroid of a graph, a subset of edges is independent if the corresponding subgraph is a tree or forest. In the bicircular matroid, a subset of edges is independent if the corresponding subgraph is a pseudoforest.
indifference
An indifference graph is another name for a proper interval graph or unit interval graph; see proper.
induced
An induced subgraph or full subgraph of a graph is a subgraph formed from a subset of vertices and from all of the edges that have both endpoints in the subset. Special cases include induced paths and induced cycles, induced subgraphs that are paths or cycles.
inductive
Synonym for degenerate.
infinite
An infinite graph is one that is not finite; see finite.
internal
A vertex of a path or tree is internal if it is not a leaf; that is, if its degree is greater than one. Two paths are internally disjoint (some people call them independent) if they do not have any vertex in common, except the first and last ones.
intersection
1. The intersection of two graphs is their largest common subgraph, the graph formed by the vertices and edges that belong to both graphs.
2. An intersection graph is a graph whose vertices correspond to sets or geometric objects, with an edge between two vertices exactly when the corresponding two sets or objects have a nonempty intersection. Several classes of graphs may be defined as the intersection graphs of certain types of objects, for instance chordal graphs (intersection graphs of subtrees of a tree), circle graphs (intersection graphs of chords of a circle), interval graphs (intersection graphs of intervals of a line), line graphs (intersection graphs of the edges of a graph), and clique graphs (intersection graphs of the maximal cliques of a graph). Every graph is an intersection graph for some family of sets, and this family is called an intersection representation of the graph. The intersection number of a graph G is the minimum total number of elements in any intersection representation of G.
interval
1. An interval graph is an intersection graph of intervals of a line.
2. The interval [u, v] in a graph is the union of all shortest paths from u to v.
3. Interval thickness is a synonym for pathwidth.
invariant
A synonym of property.
inverted arrow
An arrow with an opposite direction compared to another arrow. The arrow (y, x) is the inverted arrow of the arrow (x, y).
isolated
An isolated vertex of a graph is a vertex whose degree is zero, that is, a vertex with no incident edges.
isomorphic
Two graphs are isomorphic if there is an isomorphism between them; see isomorphism.
isomorphism
A graph isomorphism is a one-to-one incidence preserving correspondence of the vertices and edges of one graph to the vertices and edges of another graph. Two graphs related in this way are said to be isomorphic.
isoperimetric
See expansion.
isthmus
Synonym for bridge, in the sense of an edge whose removal disconnects the graph.
== J ==
join
The join of two graphs is formed from their disjoint union by adding an edge from each vertex of one graph to each vertex of the other. Equivalently, it is the complement of the disjoint union of the complements.
== K ==
K
For the notation for complete graphs, complete bipartite graphs, and complete multipartite graphs, see complete.
κ
κ(G) (using the Greek letter kappa) can refer to the vertex connectivity of G or to the clique number of G.
kernel
A kernel of a directed graph is a set of vertices which is both stable and absorbing.
knot
An inescapable section of a directed graph. See knot (mathematics) and knot theory.
== L ==
L
L(G) is the line graph of G; see line.
label
1. Information associated with a vertex or edge of a graph. A labeled graph is a graph whose vertices or edges have labels. The terms vertex-labeled or edge-labeled may be used to specify which objects of a graph have labels. Graph labeling refers to several different problems of assigning labels to graphs subject to certain constraints. See also graph coloring, in which the labels are interpreted as colors.
2. In the context of graph enumeration, the vertices of a graph are said to be labeled if they are all distinguishable from each other. For instance, this can be made to be true by fixing a one-to-one correspondence between the vertices and the integers from 1 to the order of the graph. When vertices are labeled, graphs that are isomorphic to each other (but with different vertex orderings) are counted as separate objects. In contrast, when the vertices are unlabeled, graphs that are isomorphic to each other are not counted separately.
leaf
1. A leaf vertex or pendant vertex (especially in a tree) is a vertex whose degree is 1. A leaf edge or pendant edge is the edge connecting a leaf vertex to its single neighbour.
2. A leaf power of a tree is a graph whose vertices are the leaves of the tree and whose edges connect leaves whose distance in the tree is at most a given threshold.
length
In an unweighted graph, the length of a cycle, path, or walk is the number of edges it uses. In a weighted graph, it may instead be the sum of the weights of the edges that it uses. Length is used to define the shortest path, girth (shortest cycle length), and longest path between two vertices in a graph.
level
1. The depth of a node plus one, although some define it instead as a synonym of depth. A node's level in a rooted tree is the number of nodes on the path from the root to the node. For instance, the root has level 1 and any of its adjacent nodes has level 2.
2. The set of all nodes having the same level or depth.
line
A synonym for an undirected edge. The line graph L(G) of a graph G is a graph with a vertex for each edge of G and an edge for each pair of edges that share an endpoint in G.
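Constructing L(G) is a direct translation of this definition. A sketch in which G is given as an edge list of vertex pairs (an illustrative representation):

```python
from itertools import combinations

def line_graph(edges):
    """L(G): one vertex per edge of G; two such vertices are adjacent
    exactly when the corresponding edges share an endpoint."""
    L = {e: set() for e in edges}
    for e, f in combinations(edges, 2):
        if set(e) & set(f):  # the two edges share an endpoint
            L[e].add(f)
            L[f].add(e)
    return L

# For the path a-b-c-d, L(G) is a path on its three edges: only the
# consecutive edges share endpoints.
LG = line_graph([("a", "b"), ("b", "c"), ("c", "d")])
```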
linkage
A synonym for degeneracy.
list
1. An adjacency list is a computer representation of a graph that stores, for each vertex, a list of its adjacent vertices, for use in graph algorithms.
2. List coloring is a variation of graph coloring in which each vertex has a list of available colors.
local
A local property of a graph is a property that is determined only by the neighbourhoods of the vertices in the graph. For instance, a graph is locally finite if all of its neighborhoods are finite.
loop
A loop or self-loop is an edge both of whose endpoints are the same vertex. It forms a cycle of length 1. These are not allowed in simple graphs.
== M ==
magnification
Synonym for vertex expansion.
matching
A matching is a set of edges in which no two share any vertex. A vertex is matched or saturated if it is one of the endpoints of an edge in the matching. A perfect matching or complete matching is a matching that matches every vertex; it may also be called a 1-factor, and can only exist when the order is even. A near-perfect matching, in a graph with odd order, is one that saturates all but one vertex. A maximum matching is a matching that uses as many edges as possible; the matching number α′(G) of a graph G is the number of edges in a maximum matching. A maximal matching is a matching to which no additional edges can be added.
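A maximal (not necessarily maximum) matching can be found by a single greedy scan over the edges. A sketch assuming the graph is given as an edge list:

```python
def maximal_matching(edges):
    """Greedily build a maximal matching: keep each scanned edge whose
    endpoints are both still unmatched.  The result is maximal but may
    be smaller than a maximum matching."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# On the path a-b-c-d the scan keeps (a, b), skips (b, c), keeps (c, d);
# here the greedy result happens to be a perfect matching.
```

In general the greedy result can be half the size of a maximum matching, which is why "maximal" and "maximum" must be kept distinct.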
maximal
1. A subgraph of a given graph G is maximal for a particular property if it has that property but no other subgraph of G that properly contains it also has the property. That is, it is a maximal element of the set of subgraphs with the property. For instance, a maximal clique is a complete subgraph that cannot be expanded to a larger complete subgraph. The word "maximal" should be distinguished from "maximum": a maximum subgraph is always maximal, but not necessarily vice versa.
2. A simple graph with a given property is maximal for that property if it is not possible to add any more edges to it (keeping the vertex set unchanged) while preserving both the simplicity of the graph and the property. Thus, for instance, a maximal planar graph is a planar graph such that adding any more edges to it would create a non-planar graph.
maximum
A subgraph of a given graph G is maximum for a particular property if it is the largest subgraph (by order or size) among all subgraphs with that property. For instance, a maximum clique is any of the largest cliques in a given graph.
median
1. A median of a triple of vertices is a vertex that belongs to shortest paths between all pairs of the triple, especially in median graphs and modular graphs.
2. A median graph is a graph in which every three vertices have a unique median.
Meyniel
1. Henri Meyniel, French graph theorist.
2. A Meyniel graph is a graph in which every odd cycle of length five or more has at least two chords.
minimal
A subgraph of a given graph is minimal for a particular property if it has that property but no proper subgraph of it also has the same property. That is, it is a minimal element of the subgraphs with the property.
minimum cut
A cut whose cut-set has minimum total weight, possibly restricted to cuts that separate a designated pair of vertices; they are characterized by the max-flow min-cut theorem.
minor
A graph H is a minor of another graph G if H can be obtained by deleting edges or vertices from G and contracting edges in G. It is a shallow minor if it can be formed as a minor in such a way that the subgraphs of G that were contracted to form vertices of H all have small diameter. H is a topological minor of G if G has a subgraph that is a subdivision of H. A graph is H-minor-free if it does not have H as a minor. A family of graphs is minor-closed if it is closed under minors; the Robertson–Seymour theorem characterizes minor-closed families as having a finite set of forbidden minors.
mixed
A mixed graph is a graph that may include both directed and undirected edges.
modular
1. Modular graph, a graph in which each triple of vertices has at least one median vertex that belongs to shortest paths between all pairs of the triple.
2. Modular decomposition, a decomposition of a graph into subgraphs within which all vertices connect to the rest of the graph in the same way.
3. Modularity of a graph clustering, the difference of the number of cross-cluster edges from its expected value.
monotone
A monotone property of graphs is a property that is closed under subgraphs: if G has a monotone property, then so must every subgraph of G. Compare hereditary (closed under induced subgraphs) or minor-closed (closed under minors).
Moore graph
A Moore graph is a regular graph for which the Moore bound is met exactly. The Moore bound is an inequality relating the degree, diameter, and order of a graph, proved by Edward F. Moore. Every Moore graph is a cage.
multigraph
A multigraph is a graph that allows multiple adjacencies (and, often, self-loops); a graph that is not required to be simple.
multiple adjacency
A multiple adjacency or multiple edge is a set of more than one edge that all have the same endpoints (in the same direction, in the case of directed graphs). A graph with multiple edges is often called a multigraph.
multiplicity
The multiplicity of an edge is the number of edges in a multiple adjacency. The multiplicity of a graph is the maximum multiplicity of any of its edges.
== N ==
N
1. For the notation for open and closed neighborhoods, see neighbourhood.
2. A lower-case n is often used (especially in computer science) to denote the number of vertices in a given graph.
neighbor
neighbour
A vertex that is adjacent to a given vertex.
neighborhood
neighbourhood
The open neighbourhood (or neighborhood) of a vertex v is the subgraph induced by all vertices that are adjacent to v. The closed neighbourhood is defined in the same way but also includes v itself. The open neighborhood of v in G may be denoted NG(v) or N(v), and the closed neighborhood may be denoted NG[v] or N[v]. When the openness or closedness of a neighborhood is not specified, it is assumed to be open.
network
A graph in which attributes (e.g. names) are associated with the nodes and/or edges.
node
A synonym for vertex.
non-edge
A non-edge or anti-edge is a pair of vertices that are not adjacent; the non-edges of a graph are the edges of its complement graph.
null graph
See empty graph.
== O ==
odd
1. An odd cycle is a cycle whose length is odd. The odd girth of a non-bipartite graph is the length of its shortest odd cycle. An odd hole is a special case of an odd cycle: one that is induced and has four or more vertices.
2. An odd vertex is a vertex whose degree is odd. By the handshaking lemma every finite undirected graph has an even number of odd vertices.
3. An odd ear is a simple path or simple cycle with an odd number of edges, used in odd ear decompositions of factor-critical graphs; see ear.
4. An odd chord is an edge connecting two vertices that are an odd distance apart in an even cycle. Odd chords are used to define strongly chordal graphs.
5. An odd graph is a special case of a Kneser graph, having one vertex for each (n − 1)-element subset of a (2n − 1)-element set, and an edge connecting two subsets when their corresponding sets are disjoint.
open
1. See neighbourhood.
2. See walk.
order
1. The order of a graph G is the number of its vertices, |V(G)|. The variable n is often used for this quantity. See also size, the number of edges.
2. A type of logic of graphs; see first order and second order.
3. An order or ordering of a graph is an arrangement of its vertices into a sequence, especially in the context of topological ordering (an order of a directed acyclic graph in which every edge goes from an earlier vertex to a later vertex in the order) and degeneracy ordering (an order in which each vertex has minimum degree in the induced subgraph of it and all later vertices).
4. For the order of a haven or bramble, see haven and bramble.
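A topological ordering as in sense 3 can be computed by Kahn's algorithm: repeatedly output a vertex whose remaining in-degree is zero. A sketch, assuming the DAG is given as an adjacency dict of out-neighbors:

```python
from collections import deque

def topological_order(adj):
    """Kahn's algorithm.  adj maps each vertex of a directed acyclic
    graph to its out-neighbors; returns the vertices in an order where
    every edge goes from an earlier vertex to a later one."""
    indeg = {v: 0 for v in adj}
    for u in adj:
        for v in adj[u]:
            indeg[v] += 1
    queue = deque(v for v in adj if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != len(adj):  # some vertices never reached in-degree 0
        raise ValueError("graph has a directed cycle")
    return order

# A diamond-shaped DAG: a before b and c, both before d.
dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
```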
orientation
oriented
1. An orientation of an undirected graph is an assignment of directions to its edges, making it into a directed graph. An oriented graph is one that has been assigned an orientation. So, for instance, a polytree is an oriented tree; it differs from a directed tree (an arborescence) in that there is no requirement of consistency in the directions of its edges. Other special types of orientation include tournaments, orientations of complete graphs; strong orientations, orientations that are strongly connected; acyclic orientations, orientations that are acyclic; Eulerian orientations, orientations that are Eulerian; and transitive orientations, orientations that are transitively closed.
2. Oriented graph, used by some authors as a synonym for a directed graph.
out-degree
See degree.
outer
See face.
outerplanar
An outerplanar graph is a graph that can be embedded in the plane (without crossings) so that all vertices are on the outer face of the graph.
== P ==
parent
In a rooted tree, the parent of a vertex v is the neighbor of v along the path toward the root; every vertex except the root has a unique parent.
path
Depending on the source, a path may be either a walk, or a walk without repeated vertices (and consequently without repeated edges), also called a simple path. Important special cases include induced paths and shortest paths.
path decomposition
A path decomposition of a graph G is a tree decomposition whose underlying tree is a path. Its width is defined in the same way as for tree decompositions, as one less than the size of the largest bag. The minimum width of any path decomposition of G is the pathwidth of G.
pathwidth
The pathwidth of a graph G is the minimum width of a path decomposition of G. It may also be defined in terms of the clique number of an interval completion of G. It is always between the bandwidth and the treewidth of G. It is also known as interval thickness, vertex separation number, or node searching number.
pendant
See leaf.
perfect
1. A perfect graph is a graph in which, in every induced subgraph, the chromatic number equals the clique number. The perfect graph theorem and strong perfect graph theorem are two theorems about perfect graphs, the former proving that their complements are also perfect and the latter proving that they are exactly the graphs with no odd holes or anti-holes.
2. A perfectly orderable graph is a graph whose vertices can be ordered in such a way that a greedy coloring algorithm with this ordering optimally colors every induced subgraph. The perfectly orderable graphs are a subclass of the perfect graphs.
3. A perfect matching is a matching that saturates every vertex; see matching.
4. A perfect 1-factorization is a partition of the edges of a graph into perfect matchings so that each pair of matchings forms a Hamiltonian cycle.
peripheral
1. A peripheral cycle or non-separating cycle is a cycle with at most one bridge.
2. A peripheral vertex is a vertex whose eccentricity is maximum. In a tree, this must be a leaf.
Petersen
1. Julius Petersen (1839–1910), Danish graph theorist.
2. The Petersen graph, a 10-vertex 15-edge graph frequently used as a counterexample.
3. Petersen's theorem that every bridgeless cubic graph has a perfect matching.
planar
A planar graph is a graph that can be embedded in the Euclidean plane. A plane graph is a planar graph for which a particular embedding has already been fixed. A k-planar graph is one that can be drawn in the plane with at most k crossings per edge.
polytree
A polytree is an oriented tree; equivalently, a directed acyclic graph whose underlying undirected graph is a tree.
power
1. A graph power Gk of a graph G is another graph on the same vertex set; two vertices are adjacent in Gk when they are at distance at most k in G. A leaf power is a closely related concept, derived from a power of a tree by taking the subgraph induced by the tree's leaves.
2. Power graph analysis is a method for analyzing complex networks by identifying cliques, bicliques, and stars within the network.
3. Power laws in the degree distributions of scale-free networks are a phenomenon in which the number of vertices of a given degree is proportional to a power of the degree.
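The graph power in sense 1 above can be sketched directly from the definition: compute distances in G by breadth-first search and connect every pair at distance at most k. This is an illustrative sketch (the function name and adjacency-dict representation are choices made here, not standard API):

```python
def graph_power(adj, k):
    """k-th power of a graph given as {vertex: set of neighbors}: two
    vertices are adjacent in G^k when their distance in G is at most k."""
    def bfs(src):
        # Unweighted shortest-path distances from src by BFS.
        dist, frontier = {src: 0}, [src]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        return dist

    return {u: {v for v, d in bfs(u).items() if v != u and d <= k}
            for u in adj}

# The square of the path a-b-c-d joins vertices at distance two.
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
square = graph_power(path, 2)
```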
predecessor
A vertex coming before a given vertex in a directed path.
prime
1. A prime graph is defined from an algebraic group, with a vertex for each prime number that divides the order of the group.
2. In the theory of modular decomposition, a prime graph is a graph without any nontrivial modules.
3. In the theory of splits, cuts whose cut-set is a complete bipartite graph, a prime graph is a graph without any splits. Every quotient graph of a maximal decomposition by splits is a prime graph, a star, or a complete graph.
4. A prime graph for the Cartesian product of graphs is a connected graph that is not itself a product. Every connected graph can be uniquely factored into a Cartesian product of prime graphs.
proper
1. A proper subgraph is a subgraph that removes at least one vertex or edge relative to the whole graph; for finite graphs, proper subgraphs are never isomorphic to the whole graph, but for infinite graphs they can be.
2. A proper coloring is an assignment of colors to the vertices of a graph (a coloring) that assigns different colors to the endpoints of each edge; see color.
3. A proper interval graph or proper circular arc graph is an intersection graph of a collection of intervals or circular arcs (respectively) such that no interval or arc contains another interval or arc. Proper interval graphs are also called unit interval graphs (because they can always be represented by unit intervals) or indifference graphs.
property
A graph property is something that can be true of some graphs and false of others, and that depends only on the graph structure and not on incidental information such as labels. Graph properties may equivalently be described in terms of classes of graphs (the graphs that have a given property). More generally, a graph property may also be a function of graphs that is again independent of incidental information, such as the size, order, or degree sequence of a graph; this more general definition of a property is also called an invariant of the graph.
pseudoforest
A pseudoforest is an undirected graph in which each connected component has at most one cycle, or a directed graph in which each vertex has at most one outgoing edge.
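For the undirected case, "each component has at most one cycle" is equivalent to each component having no more edges than vertices, which gives a simple test (a sketch for simple undirected graphs; the function name is illustrative):

```python
def is_pseudoforest(adj):
    """True when every connected component of the simple undirected graph
    {vertex: set of neighbors} has at most as many edges as vertices,
    i.e. at most one cycle per component."""
    seen = set()
    for start in adj:
        if start in seen:
            continue
        # Collect the component containing `start` by depth-first search.
        comp, stack = set(), [start]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u])
        seen |= comp
        # Each undirected edge is counted from both endpoints.
        edges = sum(len(adj[u] & comp) for u in comp) // 2
        if edges > len(comp):
            return False
    return True

# A triangle with a pendant vertex: one cycle in its only component.
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
```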
pseudograph
A pseudograph is a graph or multigraph that allows self-loops.
== Q ==
quasi-line graph
A quasi-line graph or locally co-bipartite graph is a graph in which the open neighborhood of every vertex can be partitioned into two cliques. These graphs are always claw-free and they include as a special case the line graphs. They are used in the structure theory of claw-free graphs.
quasi-random graph sequence
A quasi-random graph sequence is a sequence of graphs that shares several properties with a sequence of random graphs generated according to the Erdős–Rényi random graph model.
quiver
A quiver is a directed multigraph, as used in category theory. The edges of a quiver are called arrows.
== R ==
radius
The radius of a graph is the minimum eccentricity of any vertex.
Ramanujan
A Ramanujan graph is a graph whose spectral expansion is as large as possible. That is, it is a d-regular graph such that the second-largest eigenvalue of its adjacency matrix is at most 2√(d − 1).
ray
A ray, in an infinite graph, is an infinite simple path with exactly one endpoint. The ends of a graph are equivalence classes of rays.
reachability
The ability to get from one vertex to another within a graph.
reachable
A vertex y is said to be reachable from a vertex x if there exists a path from x to y.
recognizable
In the context of the reconstruction conjecture, a graph property is recognizable if its truth can be determined from the deck of the graph. Many graph properties are known to be recognizable. If the reconstruction conjecture is true, all graph properties are recognizable.
reconstruction
The reconstruction conjecture states that each undirected graph G is uniquely determined by its deck, a multiset of graphs formed by removing one vertex from G in all possible ways. In this context, reconstruction is the formation of a graph from its deck.
rectangle
A simple cycle consisting of exactly four edges and four vertices.
regular
A graph is d-regular when all of its vertices have degree d. A regular graph is a graph that is d-regular for some d.
regular tournament
A regular tournament is a tournament where in-degree equals out-degree for all vertices.
reverse
See transpose.
root
1. A designated vertex in a graph, particularly in directed trees and rooted graphs.
2. The inverse operation to a graph power: a kth root of a graph G is another graph on the same vertex set such that two vertices are adjacent in G if and only if they have distance at most k in the root.
== S ==
saturated
See matching.
searching number
Node searching number is a synonym for pathwidth.
second order
The second order logic of graphs is a form of logic in which variables may represent vertices, edges, sets of vertices, and (sometimes) sets of edges. This logic includes predicates for testing whether a vertex and edge are incident, as well as whether a vertex or edge belongs to a set. To be distinguished from first order logic, in which variables can only represent vertices.
self-loop
Synonym for loop.
separating vertex
See articulation point.
separation number
Vertex separation number is a synonym for pathwidth.
sibling
In a rooted tree, a sibling of a vertex v is a vertex which has the same parent vertex as v.
simple
1. A simple graph is a graph without loops and without multiple adjacencies. That is, each edge connects two distinct endpoints and no two edges have the same endpoints. A simple edge is an edge that is not part of a multiple adjacency. In many cases, graphs are assumed to be simple unless specified otherwise.
2. A simple path or a simple cycle is a path or cycle that has no repeated vertices and consequently no repeated edges.
sink
A sink, in a directed graph, is a vertex with no outgoing edges (out-degree equals 0).
size
The size of a graph G is the number of its edges, |E(G)|. The variable m is often used for this quantity. See also order, the number of vertices.
small-world network
A small-world network is a graph in which most nodes are not neighbors of one another, but most nodes can be reached from every other node by a small number of hops or steps. Specifically, a small-world network is defined to be a graph where the typical distance L between two randomly chosen nodes (the number of steps required) grows proportionally to the logarithm of the number of nodes N in the network.
snark
A snark is a simple, connected, bridgeless cubic graph with chromatic index equal to 4.
source
A source, in a directed graph, is a vertex with no incoming edges (in-degree equals 0).
space
In algebraic graph theory, several vector spaces over the binary field may be associated with a graph. Each has sets of edges or vertices for its vectors, and symmetric difference of sets as its vector sum operation. The edge space is the space of all sets of edges, and the vertex space is the space of all sets of vertices. The cut space is a subspace of the edge space that has the cut-sets of the graph as its elements. The cycle space has the Eulerian spanning subgraphs as its elements.
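The vector-sum operation of these spaces can be illustrated concretely: adding two cycles by symmetric difference of their edge sets yields another member of the cycle space, i.e. a subgraph in which every vertex has even degree. This is an illustrative sketch (the helper name is a choice made here):

```python
from collections import Counter

def is_even_subgraph(edge_set):
    """Membership test for the cycle space: every vertex touched by the
    edge set must have even degree."""
    deg = Counter()
    for u, v in edge_set:
        deg[u] += 1
        deg[v] += 1
    return all(d % 2 == 0 for d in deg.values())

# Two triangles sharing the edge (1, 2).
c1 = {(1, 2), (2, 3), (1, 3)}
c2 = {(1, 2), (2, 4), (1, 4)}
# Vector addition over the binary field is symmetric difference of edge
# sets; the shared edge cancels, leaving the 4-cycle 3-1-4-2.
c_sum = c1 ^ c2
```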
spanner
A spanner is a (usually sparse) graph whose shortest path distances approximate those in a dense graph or other metric space. Variations include geometric spanners, graphs whose vertices are points in a geometric space; tree spanners, spanning trees of a graph whose distances approximate the graph distances, and graph spanners, sparse subgraphs of a dense graph whose distances approximate the original graph's distances. A greedy spanner is a graph spanner constructed by a greedy algorithm, generally one that considers all edges from shortest to longest and keeps the ones that are needed to preserve the distance approximation.
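The greedy spanner construction described above can be sketched as follows: consider edges from shortest to longest, and keep an edge only if the spanner built so far does not already connect its endpoints within the stretch bound. This is a sketch, not an optimized implementation, and assumes positive edge weights:

```python
import heapq

def greedy_spanner(vertices, edges, t):
    """Greedy t-spanner: edges are (weight, u, v) triples; an edge is kept
    only if the current spanner distance between its endpoints exceeds
    t times the edge's weight."""
    adj = {v: [] for v in vertices}

    def spanner_dist(src, dst, bound):
        # Dijkstra's algorithm, abandoned once distances exceed the bound.
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                return d
            if d > dist.get(u, float('inf')) or d > bound:
                continue
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return float('inf')

    kept = []
    for w, u, v in sorted(edges):
        if spanner_dist(u, v, t * w) > t * w:
            adj[u].append((v, w))
            adj[v].append((u, w))
            kept.append((w, u, v))
    return kept

# In a weighted triangle, the longest edge is redundant for stretch 2:
# the two shorter edges already give a path within twice its length.
kept = greedy_spanner(['a', 'b', 'c'],
                      [(1.0, 'a', 'b'), (1.0, 'b', 'c'), (1.5, 'a', 'c')],
                      2.0)
```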
spanning
A subgraph is spanning when it includes all of the vertices of the given graph.
Important cases include spanning trees, spanning subgraphs that are trees, and perfect matchings, spanning subgraphs that are matchings. A spanning subgraph may also be called a factor, especially (but not only) when it is regular.
sparse
A sparse graph is one that has few edges relative to its number of vertices. In some definitions the same property should also be true for all subgraphs of the given graph.
spectral
spectrum
The spectrum of a graph is the collection of eigenvalues of its adjacency matrix. Spectral graph theory is the branch of graph theory that uses spectra to analyze graphs. See also spectral expansion.
split
1. A split graph is a graph whose vertices can be partitioned into a clique and an independent set. A related class of graphs, the double split graphs, are used in the proof of the strong perfect graph theorem.
2. A split of an arbitrary graph is a partition of its vertices into two nonempty subsets, such that the edges spanning this cut form a complete bipartite subgraph. The splits of a graph can be represented by a tree structure called its split decomposition. A split is called a strong split when it is not crossed by any other split. A split is called nontrivial when both of its sides have more than one vertex. A graph is called prime when it has no nontrivial splits.
3. Vertex splitting (sometimes called vertex cleaving) is an elementary graph operation that splits a vertex into two, where these two new vertices are adjacent to the vertices that the original vertex was adjacent to. The inverse of vertex splitting is vertex contraction.
square
1. The square of a graph G is the graph power G2; in the other direction, G is the square root of G2. The half-square of a bipartite graph is the subgraph of its square induced by one side of the bipartition.
2. A squaregraph is a planar graph that can be drawn so that all bounded faces are 4-cycles and all vertices of degree ≤ 3 belong to the outer face.
3. A square grid graph is a lattice graph defined from points in the plane with integer coordinates connected by unit-length edges.
stable
A stable set is a synonym for an independent set.
star
A star is a tree with one internal vertex; equivalently, it is a complete bipartite graph K1,n for some n ≥ 2. The special case of a star with three leaves is called a claw.
strength
The strength of a graph is the minimum ratio of the number of edges removed from the graph to components created, over all possible removals; it is analogous to toughness, based on vertex removals.
strong
1. For strong connectivity and strongly connected components of directed graphs, see connected and component. A strong orientation is an orientation that is strongly connected; see orientation.
2. For the strong perfect graph theorem, see perfect.
3. A strongly regular graph is a regular graph in which every two adjacent vertices have the same number of shared neighbours and every two non-adjacent vertices have the same number of shared neighbours.
4. A strongly chordal graph is a chordal graph in which every even cycle of length six or more has an odd chord.
5. A strongly perfect graph is a graph in which every induced subgraph has an independent set meeting all maximal cliques. The Meyniel graphs are also called "very strongly perfect graphs" because in them, every vertex belongs to such an independent set.
subforest
A subgraph of a forest.
subgraph
A subgraph of a graph G is another graph formed from a subset of the vertices and edges of G. The vertex subset must include all endpoints of the edge subset, but may also include additional vertices. A spanning subgraph is one that includes all vertices of the graph; an induced subgraph is one that includes all the edges whose endpoints belong to the vertex subset.
subtree
A subtree is a connected subgraph of a tree. Sometimes, for rooted trees, subtrees are defined to be a special type of connected subgraph, formed by all vertices and edges reachable from a chosen vertex.
successor
A vertex coming after a given vertex in a directed path.
superconcentrator
A superconcentrator is a graph with two designated and equal-sized subsets of vertices I and O, such that for every two equal-sized subsets S of I and T of O there exists a family of disjoint paths connecting every vertex in S to a vertex in T. Some sources require in addition that a superconcentrator be a directed acyclic graph, with I as its sources and O as its sinks.
supergraph
A graph formed by adding vertices, edges, or both to a given graph. If H is a subgraph of G, then G is a supergraph of H.
== T ==
theta
1. A theta graph is the union of three internally disjoint (simple) paths that have the same two distinct end vertices.
2. The theta graph of a collection of points in the Euclidean plane is constructed by constructing a system of cones surrounding each point and adding one edge per cone, to the point whose projection onto a central ray of the cone is smallest.
3. The Lovász number or Lovász theta function of a graph is a graph invariant related to the clique number and chromatic number that can be computed in polynomial time by semidefinite programming.
Thomsen graph
The Thomsen graph is a name for the complete bipartite graph K3,3.
topological
1. A topological graph is a representation of the vertices and edges of a graph by points and curves in the plane (not necessarily avoiding crossings).
2. Topological graph theory is the study of graph embeddings.
3. Topological sorting is the algorithmic problem of arranging a directed acyclic graph into a topological order, a vertex sequence such that each edge goes from an earlier vertex to a later vertex in the sequence.
totally disconnected
Synonym for edgeless.
tour
A closed trail, a walk that starts and ends at the same vertex and has no repeated edges. Euler tours are tours that use all of the graph edges; see Eulerian.
tournament
A tournament is an orientation of a complete graph; that is, it is a directed graph such that every two vertices are connected by exactly one directed edge (going in only one of the two directions between the two vertices).
traceable
A traceable graph is a graph that contains a Hamiltonian path.
trail
A walk without repeated edges.
transitive
Having to do with the transitive property. The transitive closure of a given directed graph is a graph on the same vertex set that has an edge from one vertex to another whenever the original graph has a path connecting the same two vertices. A transitive reduction of a graph is a minimal graph having the same transitive closure; directed acyclic graphs have a unique transitive reduction. A transitive orientation is an orientation of a graph that is its own transitive closure; it exists only for comparability graphs.
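The transitive closure can be sketched by repeatedly propagating reachability until a fixed point is reached (an illustrative sketch; a production implementation would typically use Warshall's algorithm or DFS from each vertex):

```python
def transitive_closure(succ):
    """Transitive closure of a digraph {vertex: set of successors}:
    repeatedly add an edge u -> w whenever edges u -> v and v -> w are
    present, until nothing changes."""
    closure = {u: set(vs) for u, vs in succ.items()}
    changed = True
    while changed:
        changed = False
        for u in closure:
            for v in list(closure[u]):
                new = closure.get(v, set()) - closure[u]
                if new:
                    closure[u] |= new
                    changed = True
    return closure

# 'a' reaches 'c' through 'b', so the closure gains the edge a -> c.
dag = {'a': {'b'}, 'b': {'c'}, 'c': set()}
closure = transitive_closure(dag)
```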
transpose
The transpose graph of a given directed graph is a graph on the same vertices, with each edge reversed in direction. It may also be called the converse or reverse of the graph.
tree
1. A tree is an undirected graph that is both connected and acyclic, or a directed graph in which there exists a unique walk from one vertex (the root of the tree) to all remaining vertices.
2. A k-tree is a graph formed by gluing (k + 1)-cliques together on shared k-cliques. A tree in the ordinary sense is a 1-tree according to this definition.
tree decomposition
A tree decomposition of a graph G is a tree whose nodes are labeled with sets of vertices of G; these sets are called bags. For each vertex v, the bags that contain v must induce a subtree of the tree, and for each edge uv there must exist a bag that contains both u and v. The width of a tree decomposition is one less than the maximum number of vertices in any of its bags; the treewidth of G is the minimum width of any tree decomposition of G.
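The two defining conditions, and the width, can be checked mechanically for a candidate decomposition. The following sketch (function name and bag representation are choices made here) verifies both conditions and returns the width:

```python
from collections import defaultdict, deque

def check_tree_decomposition(bags, tree_edges, graph_edges):
    """Verify the tree-decomposition conditions and return the width
    (one less than the largest bag). `bags` maps tree node -> vertex set."""
    # Condition 1: every edge of the graph lies inside some bag.
    for u, v in graph_edges:
        assert any({u, v} <= b for b in bags.values()), (u, v)
    # Condition 2: for each vertex, the bags containing it must induce a
    # connected subtree, checked here by BFS restricted to those nodes.
    nbrs = defaultdict(set)
    for x, y in tree_edges:
        nbrs[x].add(y)
        nbrs[y].add(x)
    for v in set().union(*bags.values()):
        nodes = {n for n, b in bags.items() if v in b}
        start = next(iter(nodes))
        seen, queue = {start}, deque([start])
        while queue:
            n = queue.popleft()
            for m in nbrs[n] & nodes:
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
        assert seen == nodes, v
    return max(len(b) for b in bags.values()) - 1

# The path a-b-c-d has a path decomposition {a,b}, {b,c}, {c,d}: width 1.
bags = {0: {'a', 'b'}, 1: {'b', 'c'}, 2: {'c', 'd'}}
width = check_tree_decomposition(bags, [(0, 1), (1, 2)],
                                 [('a', 'b'), ('b', 'c'), ('c', 'd')])
```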
treewidth
The treewidth of a graph G is the minimum width of a tree decomposition of G. It can also be defined in terms of the clique number of a chordal completion of G, the order of a haven of G, or the order of a bramble of G.
triangle
A cycle of length three in a graph. A triangle-free graph is an undirected graph that does not have any triangle subgraphs.
trivial
A trivial graph is a graph with 0 or 1 vertices. A graph with 0 vertices is also called a null graph.
Turán
1. Pál Turán
2. A Turán graph is a balanced complete multipartite graph.
3. Turán's theorem states that, among all graphs of a given order containing no clique of a given size, Turán graphs have the maximum number of edges.
4. Turán's brick factory problem asks for the minimum number of crossings in a drawing of a complete bipartite graph.
twin
Two vertices u,v are true twins if they have the same closed neighborhood: NG[u] = NG[v] (this implies u and v are neighbors), and they are false twins if they have the same open neighborhood: NG(u) = NG(v) (this implies u and v are not neighbors).
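The two twin conditions translate directly into set comparisons on neighborhoods (an illustrative sketch; the function name is a choice made here):

```python
def twin_type(adj, u, v):
    """Classify u, v in the simple graph {vertex: set of neighbors} as
    'true' twins (equal closed neighborhoods), 'false' twins (equal open
    neighborhoods), or neither (None)."""
    if adj[u] | {u} == adj[v] | {v}:
        return 'true'
    if adj[u] == adj[v]:
        return 'false'
    return None

# Opposite vertices of a 4-cycle are false twins; any two vertices of a
# triangle are true twins.
cycle4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
```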
== U ==
unary vertex
In a rooted tree, a unary vertex is a vertex which has exactly one child vertex.
undirected
An undirected graph is a graph in which the two endpoints of each edge are not distinguished from each other. See also directed and mixed. In a mixed graph, an undirected edge is again one in which the endpoints are not distinguished from each other.
uniform
A hypergraph is k-uniform when all its edges have k endpoints, and uniform when it is k-uniform for some k. For instance, ordinary graphs are the same as 2-uniform hypergraphs.
universal
1. A universal graph is a graph that contains as subgraphs all graphs in a given family of graphs, or all graphs of a given size or order within a given family of graphs.
2. A universal vertex (also called an apex or dominating vertex) is a vertex that is adjacent to every other vertex in the graph. For instance, wheel graphs and connected threshold graphs always have a universal vertex.
3. In the logic of graphs, a vertex that is universally quantified in a formula may be called a universal vertex for that formula.
unweighted graph
A graph whose vertices and edges have not been assigned weights; the opposite of a weighted graph.
utility graph
The utility graph is a name for the complete bipartite graph K3,3.
== V ==
V
See vertex set.
valency
Synonym for degree.
vertex
A vertex (plural vertices) is (together with edges) one of the two basic units out of which graphs are constructed. Vertices of graphs are often considered to be atomic objects, with no internal structure.
vertex cut
separating set
A set of vertices whose removal disconnects the graph. A one-vertex cut is called an articulation point or cut vertex.
vertex set
The set of vertices of a given graph G, sometimes denoted by V(G).
vertices
See vertex.
Vizing
1. Vadim G. Vizing
2. Vizing's theorem that the chromatic index is at most one more than the maximum degree.
3. Vizing's conjecture on the domination number of Cartesian products of graphs.
volume
The sum of the degrees of a set of vertices.
== W ==
W
The letter W is used in notation for wheel graphs and windmill graphs. The notation is not standardized.
Wagner
1. Klaus Wagner
2. The Wagner graph, an eight-vertex Möbius ladder.
3. Wagner's theorem characterizing planar graphs by their forbidden minors.
4. Wagner's theorem characterizing the K5-minor-free graphs.
walk
A walk is a finite or infinite sequence of edges which joins a sequence of vertices. Walks are also sometimes called chains. A walk is open if its first and last vertices are distinct, and closed if its first and last vertices coincide.
weakly connected
A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected (undirected) graph.
weight
A numerical value, assigned as a label to a vertex or edge of a graph. The weight of a subgraph is the sum of the weights of the vertices or edges within that subgraph.
weighted graph
A graph whose vertices or edges have been assigned weights. A vertex-weighted graph has weights on its vertices and an edge-weighted graph has weights on its edges.
well-colored
A well-colored graph is a graph all of whose greedy colorings use the same number of colors.
well-covered
A well-covered graph is a graph all of whose maximal independent sets are the same size.
wheel
A wheel graph is a graph formed by adding a universal vertex to a simple cycle.
width
1. A synonym for degeneracy.
2. For other graph invariants known as width, see bandwidth, branchwidth, clique-width, pathwidth, and treewidth.
3. The width of a tree decomposition or path decomposition is one less than the maximum size of one of its bags, and may be used to define treewidth and pathwidth.
4. The width of a directed acyclic graph is the maximum cardinality of an antichain.
windmill
A windmill graph is the union of a collection of cliques, all of the same order as each other, with one shared vertex belonging to all the cliques and all other vertices and edges distinct.
== See also ==
List of graph theory topics
Gallery of named graphs
Graph algorithms
Glossary of areas of mathematics
== References ==
In graph theory, a perfect graph is a graph in which the chromatic number equals the size of the maximum clique, both in the graph itself and in every induced subgraph. In all graphs, the chromatic number is greater than or equal to the size of the maximum clique, but they can be far apart. A graph is perfect when these numbers are equal, and remain equal after the deletion of arbitrary subsets of vertices.
The perfect graphs include many important families of graphs and serve to unify results relating colorings and cliques in those families. For instance, in all perfect graphs, the graph coloring problem, maximum clique problem, and maximum independent set problem can all be solved in polynomial time, despite their greater complexity for non-perfect graphs. In addition, several important minimax theorems in combinatorics, including Dilworth's theorem and Mirsky's theorem on partially ordered sets, Kőnig's theorem on matchings, and the Erdős–Szekeres theorem on monotonic sequences, can be expressed in terms of the perfection of certain associated graphs.
The perfect graph theorem states that the complement graph of a perfect graph is also perfect. The strong perfect graph theorem characterizes the perfect graphs in terms of certain forbidden induced subgraphs, leading to a polynomial time algorithm for testing whether a graph is perfect.
== Definitions and characterizations ==
A clique in an undirected graph is a subset of its vertices that are all adjacent to each other, such as the subsets of vertices connected by heavy edges in the illustration. The clique number is the number of vertices in the largest clique: two in the illustrated seven-vertex cycle, and three in the other graph shown. A graph coloring assigns a color to each vertex so that each two adjacent vertices have different colors, also shown in the illustration. The chromatic number of a graph is the minimum number of colors in any coloring. The colorings shown are optimal, so the chromatic number is three for the 7-cycle and four for the other graph shown. The vertices of any clique must have different colors, so the chromatic number is always greater than or equal to the clique number. For some graphs, they are equal; for others, such as the ones shown, they are unequal. The perfect graphs are defined as the graphs for which these two numbers are equal, not just in the graph itself, but in every induced subgraph obtained by deleting some of its vertices.
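The two quantities compared here can both be computed by brute force on small graphs, which suffices to check the 7-cycle example from the illustration (a sketch; these exhaustive searches are exponential and only practical for tiny graphs):

```python
from itertools import combinations, product

def clique_number(vertices, adj):
    """Largest k such that some k vertices are pairwise adjacent."""
    for k in range(len(vertices), 0, -1):
        if any(all(v in adj[u] for u, v in combinations(sub, 2))
               for sub in combinations(vertices, k)):
            return k
    return 0

def chromatic_number(vertices, adj):
    """Smallest k admitting a proper k-coloring, by exhaustive search."""
    for k in range(1, len(vertices) + 1):
        for colors in product(range(k), repeat=len(vertices)):
            c = dict(zip(vertices, colors))
            if all(c[u] != c[v] for u in vertices for v in adj[u]):
                return k

# The 7-cycle: clique number 2 but chromatic number 3, witnessing that
# odd cycles longer than a triangle are not perfect.
n = 7
verts = list(range(n))
c7 = {i: {(i - 1) % n, (i + 1) % n} for i in verts}
```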
The perfect graph theorem asserts that the complement graph of a perfect graph is itself perfect. The complement graph has an edge between two vertices if and only if the given graph does not. A clique, in the complement graph, corresponds to an independent set in the given graph. A coloring of the complement graph corresponds to a clique cover, a partition of the vertices of the given graph into cliques. The fact that the complement of a perfect graph G is also perfect implies that, in G itself, the independence number (the size of its maximum independent set) equals its clique cover number (the fewest cliques needed in a clique cover). More strongly, the same thing is true in every induced subgraph of the complement graph. This provides an alternative and equivalent definition of the perfect graphs: they are the graphs for which, in each induced subgraph, the independence number equals the clique cover number.
The strong perfect graph theorem gives a different way of defining perfect graphs, by their structure instead of by their properties.
It is based on the existence of cycle graphs and their complements within a given graph. A cycle of odd length, greater than three, is not perfect: its clique number is two, but its chromatic number is three. By the perfect graph theorem, the complement of an odd cycle of length greater than three is also not perfect. The complement of a length-5 cycle is another length-5 cycle, but for larger odd lengths the complement is not a cycle; it is called an anticycle. The strong perfect graph theorem asserts that these are the only forbidden induced subgraphs for the perfect graphs: a graph is perfect if and only if its induced subgraphs include neither an odd cycle nor an odd anticycle of five or more vertices. In this context, induced cycles that are not triangles are called "holes", and their complements are called "antiholes", so the strong perfect graph theorem can be stated more succinctly: a graph is perfect if and only if it has neither an odd hole nor an odd antihole.
These results can be combined in another characterization of perfect graphs: they are the graphs for which the product of the clique number and independence number is greater than or equal to the number of vertices, and for which the same is true for all induced subgraphs. Because the statement of this characterization remains invariant under complementation of graphs, it implies the perfect graph theorem. One direction of this characterization follows easily from the original definition of perfect: the number of vertices in any graph equals the sum of the sizes of the color classes in an optimal coloring, and is less than or equal to the number of colors multiplied by the independence number. In a perfect graph, the number of colors equals the clique number, and can be replaced by the clique number in this inequality. The other direction can be proved directly, but it also follows from the strong perfect graph theorem: if a graph is not perfect, it contains an odd cycle or its complement, and in these subgraphs the product of the clique number and independence number is one less than the number of vertices.
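The failure of this product inequality on the smallest odd hole can be verified by brute force (an illustrative sketch; the helper takes a pairwise predicate so the same search computes both the clique number and the independence number):

```python
from itertools import combinations

def largest_subset(vertices, good):
    """Size of the largest vertex subset whose pairs all satisfy `good`."""
    for k in range(len(vertices), 0, -1):
        if any(all(good(u, v) for u, v in combinations(sub, 2))
               for sub in combinations(vertices, k)):
            return k
    return 0

n = 5
c5 = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
omega = largest_subset(range(n), lambda u, v: v in c5[u])      # clique number
alpha = largest_subset(range(n), lambda u, v: v not in c5[u])  # independence number
# For the imperfect 5-cycle, the product is one less than the vertex count.
product = omega * alpha
```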
== History ==
The theory of perfect graphs developed from a 1958 result of Tibor Gallai that in modern language can be interpreted as stating that the complement of a bipartite graph is perfect; this result can also be viewed as a simple equivalent of Kőnig's theorem, a much earlier result relating matchings and vertex covers in bipartite graphs. The first formulation of the concept of perfect graphs more generally was in a 1961 paper by Claude Berge, in German, and the first use of the phrase "perfect graph" appears to be in a 1963 paper of Berge. In these works he unified Gallai's result with several similar results by defining perfect graphs, and he conjectured both the perfect graph theorem and the strong perfect graph theorem. In formulating these concepts, Berge was motivated by the concept of the Shannon capacity of a graph, by the fact that for (co-)perfect graphs it equals the independence number, and by the search for minimal examples of graphs for which this is not the case. Until the strong perfect graph theorem was proven, the graphs described by it (that is, the graphs with no odd hole and no odd antihole) were called Berge graphs.
The perfect graph theorem was proven by László Lovász in 1972, who in the same year proved the stronger inequality between the number of vertices and the product of the clique number and independence number, without benefit of the strong perfect graph theorem. In 1991, Alfred Lehman won the Fulkerson Prize, sponsored jointly by the Mathematical Optimization Society and American Mathematical Society, for his work on generalizations of the theory of perfect graphs to logical matrices. The conjectured strong perfect graph theorem became the focus of research in the theory of perfect graphs for many years, until its proof was announced in 2002 by Maria Chudnovsky, Neil Robertson, Paul Seymour, and Robin Thomas, and published by them in 2006. This work won its authors the 2009 Fulkerson Prize. The perfect graph theorem has a short proof, but the proof of the strong perfect graph theorem is long and technical, based on a deep structural decomposition of Berge graphs. Related decomposition techniques have also borne fruit in the study of other graph classes, and in particular for the claw-free graphs. The symmetric characterization of perfect graphs in terms of the product of clique number and independence number was originally suggested by Hajnal and proven by Lovász.
== Families of graphs ==
Many well-studied families of graphs are perfect, and in many cases the fact that these graphs are perfect corresponds to a minimax theorem for some kinds of combinatorial structure defined by these graphs. Examples of this phenomenon include the perfection of bipartite graphs and their line graphs, associated with Kőnig's theorem relating maximum matchings and vertex covers in bipartite graphs, and the perfection of comparability graphs, associated with Dilworth's theorem and Mirsky's theorem on chains and antichains in partially ordered sets. Other important classes of graphs, defined by having a structure related to the holes and antiholes of the strong perfect graph theorem, include the chordal graphs, Meyniel graphs, and their subclasses.
=== Bipartite graphs and line graphs ===
In bipartite graphs (with at least one edge) the chromatic number and clique number both equal two. Their induced subgraphs remain bipartite, so bipartite graphs are perfect. Other important families of graphs are bipartite, and therefore also perfect, including for instance the trees and median graphs. By the perfect graph theorem, maximum independent sets in bipartite graphs have the same size as their minimum clique covers. The maximum independent set is complementary to a minimum vertex cover, a set of vertices that touches all edges. A minimum clique cover consists of a maximum matching (as many disjoint edges as possible) together with one-vertex cliques for all remaining vertices, and its size is the number of vertices minus the number of matching edges. Therefore, this equality can be expressed equivalently as an equality between the size of the maximum matching and the minimum vertex cover in bipartite graphs, the usual formulation of Kőnig's theorem.
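The equality in Kőnig's theorem can be checked by brute force on a small bipartite graph (a sketch; exhaustive search, practical only for tiny inputs, rather than a real matching algorithm such as Hopcroft–Karp):

```python
from itertools import combinations

def max_matching_size(edges):
    """Brute force: the largest number of pairwise disjoint edges."""
    for k in range(len(edges), 0, -1):
        for sub in combinations(edges, k):
            if len({x for e in sub for x in e}) == 2 * k:
                return k
    return 0

def min_vertex_cover_size(edges):
    """Brute force: the smallest vertex set touching every edge."""
    verts = sorted({x for e in edges for x in e})
    for k in range(len(verts) + 1):
        for sub in combinations(verts, k):
            if all(u in sub or v in sub for u, v in edges):
                return k

# A small bipartite graph between {a, b, c} and {x, y, z}: by Kőnig's
# theorem the two quantities below must coincide.
bip = [('a', 'x'), ('a', 'y'), ('b', 'x'), ('c', 'z')]
```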
A matching, in any graph G, is the same thing as an independent set in the line graph L(G), a graph that has a vertex for each edge in G and an edge between two vertices in L(G) for each pair of edges in G that share an endpoint. Line graphs have two kinds of cliques: sets of edges in G with a common endpoint, and triangles in G. In bipartite graphs, there are no triangles, so a clique cover in L(G) corresponds to a vertex cover in G. Therefore, in line graphs of bipartite graphs, the independence number and clique cover number are equal. Induced subgraphs of line graphs of bipartite graphs are line graphs of subgraphs, so the line graphs of bipartite graphs are perfect. Examples include the rook's graphs, the line graphs of complete bipartite graphs. Every line graph of a bipartite graph is an induced subgraph of a rook's graph.
Because line graphs of bipartite graphs are perfect, their clique number equals their chromatic number. The clique number of the line graph of a bipartite graph is the maximum degree of any vertex of the underlying bipartite graph. The chromatic number of the line graph of a bipartite graph is the chromatic index of the underlying bipartite graph, the minimum number of colors needed to color the edges so that touching edges have different colors. Each color class forms a matching, and the chromatic index is the minimum number of matchings needed to cover all edges. The equality of maximum degree and chromatic index, in bipartite graphs, is another theorem of Dénes Kőnig. In arbitrary simple graphs, they can differ by one; this is Vizing's theorem.
The underlying graph G of a perfect line graph L(G) is a line perfect graph. These are the graphs whose biconnected components are bipartite graphs, the complete graph K4, and triangular books, sets of triangles sharing an edge. These components are perfect, and their combination preserves perfection, so every line perfect graph is perfect.
The bipartite graphs, their complements, and the line graphs of bipartite graphs and their complements form four basic classes of perfect graphs that play a key role in the proof of the strong perfect graph theorem. According to the structural decomposition of perfect graphs used as part of this proof, every perfect graph that is not already in one of these four classes can be decomposed by partitioning its vertices into subsets, in one of four ways, called a 2-join, the complement of a 2-join, a homogeneous pair, or a skew partition.
=== Comparability graphs ===
A partially ordered set is defined by its set of elements, and a comparison relation ≤ that is reflexive (for all elements x, x ≤ x), antisymmetric (if x ≤ y and y ≤ x, then x = y), and transitive (if x ≤ y and y ≤ z, then x ≤ z). Elements x and y are comparable if x ≤ y or y ≤ x, and incomparable otherwise. For instance, set inclusion (⊆) partially orders any family of sets. The comparability graph of a partially ordered set has the set elements as its vertices, with an edge connecting any two comparable elements. Its complement is called an incomparability graph. Different partial orders may have the same comparability graph; for instance, reversing all comparisons changes the order but not the graph.
Finite comparability graphs (and their complementary incomparability graphs) are always perfect. A clique, in a comparability graph, comes from a subset of elements that are all pairwise comparable; such a subset is called a chain, and it is linearly ordered by the given partial order. An independent set comes from a subset of elements no two of which are comparable; such a subset is called an antichain. For instance, in the illustrated partial order and comparability graph, {A, B, C} is a chain in the order and a clique in the graph, while {C, D} is an antichain in the order and an independent set in the graph. Thus, a coloring of a comparability graph is a partition of its elements into antichains, and a clique cover is a partition of its elements into chains. Dilworth's theorem, in the theory of partial orders, states that for every finite partial order, the size of the largest antichain equals the minimum number of chains into which the elements can be partitioned. In the language of graphs, this can be stated as: every finite comparability graph is perfect. Similarly, Mirsky's theorem states that for every finite partial order, the size of the largest chain equals the minimum number of antichains into which the elements can be partitioned, or that every finite incomparability graph is perfect. These two theorems are equivalent via the perfect graph theorem, but Mirsky's theorem is easier to prove directly than Dilworth's theorem: if each element is labeled by the size of the largest chain in which it is maximal, then the subsets with equal labels form a partition into antichains, with the number of antichains equal to the size of the largest chain overall. Every bipartite graph is a comparability graph. Thus, Kőnig's theorem can be seen as a special case of Dilworth's theorem, connected through the theory of perfect graphs.
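Mirsky's labeling argument translates directly into code. This sketch uses divisibility on a small hypothetical set as the partial order; elements with equal labels form the antichain partition:

```python
# Mirsky's construction: label each element with the length of the longest
# chain in which it is the largest element; equal labels form antichains.
# Hypothetical partial order: divisibility on {1, 2, 3, 4, 6, 12}.
elements = [1, 2, 3, 4, 6, 12]

def less(x, y):
    return x != y and y % x == 0  # x < y in the divisibility order

height = {}
for x in sorted(elements):  # numeric order is a linear extension here
    below = [height[y] for y in elements if y in height and less(y, x)]
    height[x] = 1 + max(below, default=0)

antichains = {}
for x, h in height.items():
    antichains.setdefault(h, []).append(x)

longest_chain = max(height.values())
print(longest_chain, antichains)
```

The longest chain 1 | 2 | 4 | 12 has length 4, and the four label classes {1}, {2, 3}, {4, 6}, {12} partition the elements into exactly four antichains, as Mirsky's theorem predicts.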
A permutation graph is defined from a permutation on a totally ordered sequence of elements (conventionally, the integers from 1 to n), which form the vertices of the graph. The edges of a permutation graph connect pairs of elements whose ordering is reversed by the given permutation. These are naturally incomparability graphs, for a partial order in which x ≤ y whenever x occurs before y in both the given sequence and its permutation. The complement of a permutation graph is another permutation graph, for the reverse of the given permutation. Therefore, as well as being incomparability graphs, permutation graphs are comparability graphs. In fact, the permutation graphs are exactly the graphs that are both comparability and incomparability graphs. A clique, in a permutation graph, is a subsequence of elements that appear in increasing order in the given permutation, and an independent set is a subsequence of elements that appear in decreasing order. In any perfect graph, the product of the clique number and independence number is at least the number of vertices; the special case of this inequality for permutation graphs is the Erdős–Szekeres theorem.
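The Erdős–Szekeres inequality can be illustrated by computing the longest increasing and decreasing subsequences of a hypothetical permutation, which are the clique number and independence number of its permutation graph:

```python
# Erdős–Szekeres via the permutation-graph view: the clique number is the
# longest increasing subsequence, the independence number the longest
# decreasing subsequence, and their product is at least n.
perm = [4, 7, 2, 8, 1, 6, 3, 5]  # hypothetical permutation of 1..8

def longest_monotone(seq, cmp):
    # lengths[i] = longest monotone subsequence ending at position i
    lengths = []
    for i, x in enumerate(seq):
        prev = [lengths[j] for j in range(i) if cmp(seq[j], x)]
        lengths.append(1 + max(prev, default=0))
    return max(lengths)

inc = longest_monotone(perm, lambda a, b: a < b)
dec = longest_monotone(perm, lambda a, b: a > b)
print(inc, dec, inc * dec >= len(perm))
```

For this permutation both subsequences have length 3, and 3 × 3 = 9 ≥ 8 vertices, as the inequality requires.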
The interval graphs are the incomparability graphs of interval orders, orderings defined by sets of intervals on the real line with x ≤ y whenever interval x is completely to the left of interval y. In the corresponding interval graph, there is an edge from x to y whenever the two intervals have a point in common. Coloring these graphs can be used to model problems of assigning resources to tasks (such as classrooms to classes) with intervals describing the scheduled time of each task. Both interval graphs and permutation graphs are generalized by the trapezoid graphs. Systems of intervals in which no two are nested produce a more restricted class of graphs, the indifference graphs, the incomparability graphs of semiorders. These have been used to model human preferences under the assumption that, when items have utilities that are very close to each other, they will be incomparable. Intervals where every pair is nested or disjoint produce trivially perfect graphs, the comparability graphs of ordered trees. In them, the independence number equals the number of maximal cliques.
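The scheduling interpretation can be sketched with a greedy sweep over a hypothetical set of task intervals; for interval graphs this greedy coloring is optimal:

```python
# Greedy coloring of an interval graph, sweeping intervals by left endpoint.
# Hypothetical scheduling instance: (start, end) times of five tasks,
# treated as open intervals (touching endpoints do not conflict).
intervals = [(0, 4), (1, 3), (2, 6), (5, 8), (7, 9)]

colors = {}
for iv in sorted(intervals):
    taken = {colors[jv] for jv in colors
             if jv[0] < iv[1] and iv[0] < jv[1]}  # jv overlaps iv
    colors[iv] = min(c for c in range(len(intervals)) if c not in taken)

num_colors = len(set(colors.values()))
# For interval graphs the greedy sweep is optimal: the number of colors
# equals the maximum number of intervals overlapping a single point.
print(num_colors)
```

Here three tasks overlap just after time 2, so three resources are both necessary and, by the sweep, sufficient.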
=== Split graphs and random perfect graphs ===
A split graph is a graph that can be partitioned into a clique and an independent set. It can be colored by assigning a separate color to each vertex of a maximal clique, and then coloring each remaining vertex the same as a non-adjacent clique vertex. Therefore, these graphs have equal clique numbers and chromatic numbers, and are perfect. A broader class of graphs, the unipolar graphs, can be partitioned into a clique and a cluster graph, a disjoint union of cliques. These include also the bipartite graphs, for which the cluster graph is just a single clique. The unipolar graphs and their complements together form the class of generalized split graphs. Almost all perfect graphs are generalized split graphs, in the sense that the fraction of perfect n-vertex graphs that are generalized split graphs goes to one in the limit as n grows arbitrarily large.
Other limiting properties of almost all perfect graphs can be determined by studying the generalized split graphs. In this way, it has been shown that almost all perfect graphs contain a Hamiltonian cycle. If H is an arbitrary graph, the limiting probability that H occurs as an induced subgraph of a large random perfect graph is 0, 1/2, or 1, respectively as H is not a generalized split graph, is unipolar or co-unipolar but not both, or is both unipolar and co-unipolar.
=== Incremental constructions ===
Several families of perfect graphs can be characterized by an incremental construction in which the graphs in the family are built up by adding one vertex at a time, according to certain rules, which guarantee that after each vertex is added the graph remains perfect.
The chordal graphs are the graphs formed by a construction of this type in which, at the time a vertex is added, its neighbors form a clique. Chordal graphs may also be characterized as the graphs that have no holes (even or odd). They include as special cases the forests, the interval graphs, and the maximal outerplanar graphs. The split graphs are exactly the graphs that are chordal and have a chordal complement. The k-trees, central to the definition of treewidth, are chordal graphs formed by starting with a (k + 1)-vertex clique and repeatedly adding a vertex so that it and its neighbors form a clique of the same size.
The distance-hereditary graphs are formed, starting from a single-vertex graph, by repeatedly adding degree-one vertices ("pendant vertices") or copies of existing vertices (with the same neighbors). Each vertex and its copy may be adjacent (true twins) or non-adjacent (false twins). In every connected induced subgraph of these graphs, the distances between vertices are the same as in the whole graph. If only the twin operations are used, the result is a cograph. The cographs are the comparability graphs of series-parallel partial orders and can also be formed by a different construction process combining complementation and the disjoint union of graphs.
The graphs that are both chordal and distance-hereditary are called Ptolemaic graphs, because their distances obey Ptolemy's inequality. They have a restricted form of the distance-hereditary construction sequence, in which a false twin can only be added when its neighbors would form a clique. They include as special cases the windmill graphs consisting of cliques joined at a single vertex, and the block graphs in which each biconnected component is a clique.
The threshold graphs are formed from an empty graph by repeatedly adding either an isolated vertex (connected to nothing else) or a universal vertex (connected to all other vertices). They are special cases of the split graphs and the trivially perfect graphs. They are exactly the graphs that are both trivially perfect and the complement of a trivially perfect graph; they are also exactly the graphs that are both cographs and split graphs.
If the vertices of a chordal graph are colored in the order of an incremental construction sequence using a greedy coloring algorithm, the result will be an optimal coloring. The reverse of the vertex ordering used in this construction is called an elimination order. Similarly, if the vertices of a distance-hereditary graph are colored in the order of an incremental construction sequence, the resulting coloring will be optimal. If the vertices of a comparability graph are colored in the order of a linear extension of its underlying partial order, the resulting coloring will be optimal. This property is generalized in the family of perfectly orderable graphs, the graphs for which there exists an ordering that, when restricted to any induced subgraph, causes greedy coloring to be optimal. The cographs are exactly the graphs for which all vertex orderings have this property. Another subclass of perfectly orderable graphs are the complements of tolerance graphs, a generalization of interval graphs.
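The greedy-coloring property can be sketched for a chordal graph given by its construction order. In this hypothetical example each new vertex's earlier neighbours form a clique, so the graph is chordal by construction:

```python
# Greedy coloring along an incremental construction order of a chordal graph.
# Hypothetical graph: vertices added left to right; `earlier` lists each
# vertex's already-added neighbours, which form a clique.
order = ["a", "b", "c", "d", "e"]
earlier = {"a": [], "b": ["a"], "c": ["a", "b"],
           "d": ["b", "c"], "e": ["c", "d"]}

color = {}
for v in order:
    taken = {color[u] for u in earlier[v]}
    color[v] = min(c for c in range(len(order)) if c not in taken)

num_colors = len(set(color.values()))
# In this construction, every maximal clique is some vertex together with
# its earlier neighbours, so the clique number is easy to read off.
clique_number = 1 + max(len(ns) for ns in earlier.values())
print(num_colors, clique_number)  # equal: the greedy coloring is optimal
```

The graph has maximal cliques {a,b,c}, {b,c,d}, {c,d,e}; the greedy sweep uses exactly three colors, matching the clique number.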
=== Strong perfection ===
The strongly perfect graphs are graphs in which, in every induced subgraph, there exists an independent set that intersects all maximal cliques. In the Meyniel graphs or very strongly perfect graphs, every vertex belongs to such an independent set. The Meyniel graphs can also be characterized as the graphs in which every odd cycle of length five or more has at least two chords.
A parity graph is defined by the property that between every two vertices, all induced paths have equal parity: either they are all even in length, or they are all odd in length. These include the distance-hereditary graphs, in which all induced paths between two vertices have the same length, and bipartite graphs, for which all paths (not just induced paths) between any two vertices have equal parity. Parity graphs are Meyniel graphs, and therefore perfect: if a long odd cycle had only one chord, the two parts of the cycle between the endpoints of the chord would be induced paths of different parity. The prism over any parity graph (its Cartesian product with a single edge) is another parity graph, and the parity graphs are the only graphs whose prisms are perfect.
== Matrices, polyhedra, and integer programming ==
Perfect graphs are closely connected to the theory of linear programming and integer programming. Both linear programs and integer programs are expressed in canonical form as seeking a vector x that maximizes a linear objective function c ⋅ x, subject to the linear constraints x ≥ 0 and Ax ≤ b. Here, A is given as a matrix, and b and c are given as two vectors. Although linear programs and integer programs are specified in this same way, they differ in that, in a linear program, the solution vector x is allowed to have arbitrary real numbers as its coefficients, whereas in an integer program these unknown coefficients must be integers. This makes a very big difference in the computational complexity of these problems: linear programming can be solved in polynomial time, but integer programming is NP-hard.
When the same given values A, b, and c are used to define both a linear program and an integer program, they commonly have different optimal solutions. The linear program is called an integral linear program if an optimal solution to the integer program is also optimal for the linear program. (Otherwise, the ratio between the two solution values is called the integrality gap, and is important in analyzing approximation algorithms for the integer program.) Perfect graphs may be used to characterize the (0, 1) matrices A (that is, matrices where all coefficients are 0 or 1) with the following property: if b is the all-ones vector, then for all choices of c the resulting linear program is integral.
As Václav Chvátal proved, every matrix A with this property is (up to removal of irrelevant "dominated" rows) the maximal clique versus vertex incidence matrix of a perfect graph. This matrix has a column for each vertex of the graph, and a row for each maximal clique, with a coefficient that is one in the columns of vertices that belong to the clique and zero in the remaining columns. The integral linear programs encoded by this matrix seek the maximum-weight independent set of the given graph, with weights given by the vector c.
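The maximal-clique versus vertex incidence matrix can be built by brute force for a small perfect graph. This sketch uses a hypothetical 4-cycle, which is bipartite and hence perfect:

```python
from itertools import combinations

# Hypothetical perfect graph: the 4-cycle a-b-c-d.
vertices = ["a", "b", "c", "d"]
edges = {frozenset(p) for p in [("a", "b"), ("b", "c"),
                                ("c", "d"), ("d", "a")]}

def is_clique(vs):
    return all(frozenset(p) in edges for p in combinations(vs, 2))

# Enumerate all cliques, then keep only the maximal ones.
cliques = [set(vs) for k in range(1, len(vertices) + 1)
           for vs in combinations(vertices, k) if is_clique(vs)]
maximal = [c for c in cliques if not any(c < d for d in cliques)]

# One row per maximal clique, one column per vertex.
A = [[1 if v in c else 0 for v in vertices] for c in maximal]
for row in A:
    print(row)
```

Since the 4-cycle is triangle-free, its maximal cliques are its four edges, so A is a 4 × 4 matrix with exactly two ones in each row (the edge–vertex incidence matrix of the cycle).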
For a matrix A defined in this way from a perfect graph, the vectors x satisfying the system of inequalities x ≥ 0, Ax ≤ 1 form an integral polytope. It is the convex hull of the indicator vectors of independent sets in the graph, with facets corresponding to the maximal cliques in the graph. The perfect graphs are the only graphs for which the two polytopes defined in this way from independent sets and from maximal cliques coincide.
== Algorithms ==
In all perfect graphs, the graph coloring problem, maximum clique problem, and maximum independent set problem can all be solved in polynomial time. The algorithm for the general case involves the Lovász number of these graphs. The Lovász number of any graph can be determined by labeling its vertices by high dimensional unit vectors, so that each two non-adjacent vertices have perpendicular labels, and so that all of the vectors lie in a cone with as small an opening angle as possible. Then, the Lovász number is 1/cos²θ, where θ is the half-angle of this cone. Despite this complicated definition, an accurate numerical value of the Lovász number can be computed using semidefinite programming, and for any graph the Lovász number is sandwiched between the chromatic number and clique number. Because these two numbers equal each other in perfect graphs, they also equal the Lovász number. Thus, they can be computed by approximating the Lovász number accurately enough and rounding the result to the nearest integer.
The solution method for semidefinite programs, used by this algorithm, is based on the ellipsoid method for linear programming. It leads to a polynomial time algorithm for computing the chromatic number and clique number in perfect graphs. However, solving these problems using the Lovász number and the ellipsoid method is complicated and has a high polynomial exponent. More efficient combinatorial algorithms are known for many special cases.
This method can also be generalized to find the maximum weight of a clique, in a weighted graph, instead of the clique number. A maximum or maximum weight clique itself, and an optimal coloring of the graph, can also be found by these methods, and a maximum independent set can be found by applying the same approach to the complement of the graph. For instance, a maximum clique can be found by the following algorithm:
Loop through the vertices of the graph. For each vertex v, perform the following steps:
Tentatively remove v from the graph.
Use semidefinite programming to determine the clique number of the resulting induced subgraph.
If this clique number is the same as for the whole graph, permanently remove v; otherwise, restore v to the graph.
Return the subgraph that remains after all the permanent removals.
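The removal loop can be sketched in code. As an assumption, a brute-force enumeration stands in for the semidefinite-programming oracle, and the graph is a small hypothetical example:

```python
from itertools import combinations

# Sketch of the vertex-removal loop. A brute-force clique-number oracle
# replaces the semidefinite-programming step (assumption: the graph is
# small enough to enumerate). Hypothetical graph: a triangle a-b-c plus
# a pendant vertex d attached to c.
edges = {frozenset(p) for p in [("a", "b"), ("b", "c"),
                                ("a", "c"), ("c", "d")]}
vertices = {"a", "b", "c", "d"}

def clique_number(vs):
    return max((k for k in range(len(vs) + 1)
                for c in combinations(sorted(vs), k)
                if all(frozenset(p) in edges for p in combinations(c, 2))),
               default=0)

omega = clique_number(vertices)
remaining = set(vertices)
for v in sorted(vertices):
    if clique_number(remaining - {v}) == omega:
        remaining.discard(v)  # some maximum clique avoids v: drop it

print(sorted(remaining))  # a maximum clique
```

Removing d leaves the clique number at 3, so d is discarded; removing any of a, b, c would drop it to 2, so the triangle survives as the returned maximum clique.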
The algorithm for finding an optimal coloring is more complicated, and depends on the duality theory of linear programs, using this clique-finding algorithm as a separation oracle.
Beyond solving these problems, another important computational problem concerning perfect graphs is their recognition, the problem of testing whether a given graph is perfect. For many years the complexity of recognizing Berge graphs and perfect graphs were considered separately (as they were not yet known to be equivalent) and both remained open. They were both known to be in co-NP; for Berge graphs, this follows from the definition, while for perfect graphs it follows from the characterization using the product of the clique number and independence number. After the strong perfect graph theorem was proved, Chudnovsky, Cornuéjols, Liu, Seymour, and Vušković discovered a polynomial time algorithm for testing the existence of odd holes or anti-holes. By the strong perfect graph theorem, this can be used to test whether a given graph is perfect, in polynomial time.
== Related concepts ==
Generalizing the perfect graphs, a graph class is said to be χ-bounded if the chromatic number of the graphs in the class can be bounded by a function of their clique number. The perfect graphs are exactly the graphs for which this function is the identity, both for the graph itself and for all its induced subgraphs.
The equality of the clique number and chromatic number in perfect graphs has motivated the definition of other graph classes, in which other graph invariants are set equal to each other. For instance, the domination perfect graphs are defined as graphs in which, in every induced subgraph, the size of the smallest dominating set (a set of vertices adjacent to all remaining vertices) equals the size of the smallest independent set that is a dominating set. These include, for instance, the claw-free graphs.
== References ==
== External links ==
The Strong Perfect Graph Theorem by Václav Chvátal.
Open problems on perfect graphs, maintained by the American Institute of Mathematics.
Perfect Problems, maintained by Václav Chvátal.
Information System on Graph Class Inclusions: perfect graph
In graph theory, a planar graph is a graph that can be embedded in the plane, i.e., it can be drawn on the plane in such a way that its edges intersect only at their endpoints. In other words, it can be drawn in such a way that no edges cross each other. Such a drawing is called a plane graph, or a planar embedding of the graph. A plane graph can be defined as a planar graph with a mapping from every node to a point on a plane, and from every edge to a plane curve on that plane, such that the extreme points of each curve are the points mapped from its end nodes, and all curves are disjoint except on their extreme points.
Every graph that can be drawn on a plane can be drawn on the sphere as well, and vice versa, by means of stereographic projection.
Plane graphs can be encoded by combinatorial maps or rotation systems.
An equivalence class of topologically equivalent drawings on the sphere, usually with additional assumptions such as the absence of isthmuses, is called a planar map. Although a plane graph has an external or unbounded face, none of the faces of a planar map has a particular status.
Planar graphs generalize to graphs drawable on a surface of a given genus. In this terminology, planar graphs have genus 0, since the plane (and the sphere) are surfaces of genus 0. See "graph embedding" for other related topics.
== Planarity criteria ==
=== Kuratowski's and Wagner's theorems ===
The Polish mathematician Kazimierz Kuratowski provided a characterization of planar graphs in terms of forbidden graphs, now known as Kuratowski's theorem:
A finite graph is planar if and only if it does not contain a subgraph that is a subdivision of the complete graph K5 or the complete bipartite graph K3,3 (utility graph).
A subdivision of a graph results from inserting vertices into edges (for example, changing an edge • —— • to • — • — • ) zero or more times.
Instead of considering subdivisions, Wagner's theorem deals with minors:
A finite graph is planar if and only if it does not have K5 or K3,3 as a minor.
A minor of a graph results from taking a subgraph and repeatedly contracting an edge into a vertex, with each neighbor of the original end-vertices becoming a neighbor of the new vertex.
Klaus Wagner asked more generally whether any minor-closed class of graphs is determined by a finite set of "forbidden minors". This is now the Robertson–Seymour theorem, proved in a long series of papers. In the language of this theorem, K5 and K3,3 are the forbidden minors for the class of finite planar graphs.
=== Other criteria ===
In practice, it is difficult to use Kuratowski's criterion to quickly decide whether a given graph is planar. However, there exist fast algorithms for this problem: for a graph with n vertices, it is possible to determine in time O(n) (linear time) whether the graph may be planar or not (see planarity testing).
For a simple, connected, planar graph with v vertices and e edges and f faces, the following simple conditions hold for v ≥ 3:
Theorem 1. e ≤ 3v − 6.
Theorem 2. If there are no cycles of length 3, then e ≤ 2v − 4.
Theorem 3. f ≤ 2v − 4.
In this sense, planar graphs are sparse graphs, in that they have only O(v) edges, asymptotically smaller than the maximum O(v2). The graph K3,3, for example, has 6 vertices, 9 edges, and no cycles of length 3. Therefore, by Theorem 2, it cannot be planar. These theorems provide necessary conditions for planarity that are not sufficient conditions, and therefore can only be used to prove a graph is not planar, not that it is planar. If neither Theorem 1 nor Theorem 2 rules out planarity, other methods must be used.
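Theorems 1 and 2 can be packaged as a quick necessary-condition test. This is only a sketch: a True result is inconclusive, since the conditions are not sufficient for planarity:

```python
# Edge-count tests from Theorems 1 and 2: they can prove a graph
# non-planar, never that it is planar.
def maybe_planar(v, e, triangle_free):
    if v < 3:
        return True
    if e > 3 * v - 6:
        return False          # violates Theorem 1
    if triangle_free and e > 2 * v - 4:
        return False          # violates Theorem 2
    return True               # inconclusive: may still be non-planar

k5 = maybe_planar(5, 10, triangle_free=False)   # 10 > 3*5 - 6 = 9
k33 = maybe_planar(6, 9, triangle_free=True)    # 9 > 2*6 - 4 = 8
print(k5, k33)  # both False: K5 and K3,3 fail the edge-count tests
```

Note that K3,3 passes Theorem 1 (9 ≤ 12), which is why the triangle-free refinement of Theorem 2 is needed to rule it out.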
Whitney's planarity criterion gives a characterization based on the existence of an algebraic dual;
Mac Lane's planarity criterion gives an algebraic characterization of finite planar graphs, via their cycle spaces;
The Fraysseix–Rosenstiehl planarity criterion gives a characterization based on the existence of a bipartition of the cotree edges of a depth-first search tree. It is central to the left-right planarity testing algorithm;
Schnyder's theorem gives a characterization of planarity in terms of partial order dimension;
Colin de Verdière's planarity criterion gives a characterization based on the maximum multiplicity of the second eigenvalue of certain Schrödinger operators defined by the graph.
The Hanani–Tutte theorem states that a graph is planar if and only if it has a drawing in which each independent pair of edges crosses an even number of times; it can be used to characterize the planar graphs via a system of equations modulo 2.
== Properties ==
=== Euler's formula ===
Euler's formula states that if a finite, connected, planar graph is drawn in the plane without any edge intersections, and v is the number of vertices, e is the number of edges and f is the number of faces (regions bounded by edges, including the outer, infinitely large region), then
v − e + f = 2.
As an illustration, in the butterfly graph given above, v = 5, e = 6 and f = 3.
In general, if the property holds for all planar graphs of f faces, any change to the graph that creates an additional face while keeping the graph planar would keep v − e + f invariant. Since the property holds for all graphs with f = 2, by mathematical induction it holds for all cases. Euler's formula can also be proved as follows: if the graph is not a tree, then remove an edge which completes a cycle. This lowers both e and f by one, leaving v − e + f constant. Repeat until the remaining graph is a tree; trees have v = e + 1 and f = 1, yielding v − e + f = 2, i.e., the Euler characteristic is 2.
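Euler's formula can also be checked computationally from a rotation system, the combinatorial encoding of a plane embedding mentioned earlier. This sketch traces the face orbits of a hypothetical embedding of K4, drawn with vertex 3 inside the outer triangle 0-1-2:

```python
# Count faces of a plane embedding given as a rotation system (the
# counterclockwise cyclic order of neighbours around each vertex),
# then check Euler's formula v - e + f = 2.
rotation = {
    0: [1, 3, 2],
    1: [2, 3, 0],
    2: [0, 3, 1],
    3: [0, 1, 2],
}

darts = {(u, v) for u in rotation for v in rotation[u]}  # directed edges
faces = 0
unvisited = set(darts)
while unvisited:
    faces += 1
    u, v = start = next(iter(unvisited))
    while True:
        unvisited.discard((u, v))
        # Next dart of the same face: leave v along the neighbour that
        # follows u in v's rotation.
        ring = rotation[v]
        u, v = v, ring[(ring.index(u) + 1) % len(ring)]
        if (u, v) == start:
            break

V = len(rotation)
E = len(darts) // 2
print(V, E, faces, V - E + faces)  # Euler's formula gives 2
```

The trace finds the four triangular faces of this K4 embedding, and 4 − 6 + 4 = 2 as expected.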
In a finite, connected, simple, planar graph, any face (except possibly the outer one) is bounded by at least three edges and every edge touches at most two faces, so 3f ≤ 2e; using Euler's formula, one can then show that these graphs are sparse in the sense that if v ≥ 3:
e ≤ 3v − 6.
Euler's formula is also valid for convex polyhedra. This is no coincidence: every convex polyhedron can be turned into a connected, simple, planar graph by using the Schlegel diagram of the polyhedron, a perspective projection of the polyhedron onto a plane with the center of perspective chosen near the center of one of the polyhedron's faces. Not every planar graph corresponds to a convex polyhedron in this way: the trees do not, for example. Steinitz's theorem says that the polyhedral graphs formed from convex polyhedra are precisely the finite 3-connected simple planar graphs. More generally, Euler's formula applies to any polyhedron whose faces are simple polygons that form a surface topologically equivalent to a sphere, regardless of its convexity.
=== Average degree ===
Connected planar graphs with more than one edge obey the inequality 2e ≥ 3f, because each face has at least three face-edge incidences and each edge contributes exactly two incidences. It follows via algebraic transformations of this inequality with Euler's formula v − e + f = 2 that for finite planar graphs the average degree is strictly less than 6. Graphs with higher average degree cannot be planar.
=== Coin graphs ===
We say that two circles drawn in a plane kiss (or osculate) whenever they intersect in exactly one point. A "coin graph" is a graph formed by a set of circles, no two of which have overlapping interiors, by making a vertex for each circle and an edge for each pair of circles that kiss. The circle packing theorem, first proved by Paul Koebe in 1936, states that a graph is planar if and only if it is a coin graph.
This result provides an easy proof of Fáry's theorem, that every simple planar graph can be embedded in the plane in such a way that its edges are straight line segments that do not cross each other. If one places each vertex of the graph at the center of the corresponding circle in a coin graph representation, then the line segments between centers of kissing circles do not cross any of the other edges.
=== Planar graph density ===
The meshedness coefficient or density D of a planar graph, or network, is the ratio of the number f − 1 of bounded faces (the same as the circuit rank of the graph, by Mac Lane's planarity criterion) to its maximal possible value 2v − 5 for a graph with v vertices:
D = (f − 1)/(2v − 5)
The density obeys 0 ≤ D ≤ 1, with D = 0 for a completely sparse planar graph (a tree), and D = 1 for a completely dense (maximal) planar graph.
=== Dual graph ===
Given an embedding G of a (not necessarily simple) connected graph in the plane without edge intersections, we construct the dual graph G* as follows: we choose one vertex in each face of G (including the outer face) and for each edge e in G we introduce a new edge in G* connecting the two vertices in G* corresponding to the two faces in G that meet at e. Furthermore, this edge is drawn so that it crosses e exactly once and that no other edge of G or G* is intersected. Then G* is again the embedding of a (not necessarily simple) planar graph; it has as many edges as G, as many vertices as G has faces and as many faces as G has vertices. The term "dual" is justified by the fact that G** = G; here the equality is the equivalence of embeddings on the sphere. If G is the planar graph corresponding to a convex polyhedron, then G* is the planar graph corresponding to the dual polyhedron.
Duals are useful because many properties of the dual graph are related in simple ways to properties of the original graph, enabling results to be proven about graphs by examining their dual graphs.
While the dual constructed for a particular embedding is unique (up to isomorphism), graphs may have different (i.e. non-isomorphic) duals, obtained from different (i.e. non-homeomorphic) embeddings.
== Families of planar graphs ==
=== Maximal planar graphs ===
A simple graph is called maximal planar if it is planar but adding any edge (on the given vertex set) would destroy that property. All faces (including the outer one) are then bounded by three edges, explaining the alternative term plane triangulation (which technically means a plane drawing of the graph). The alternative names "triangular graph" or "triangulated graph" have also been used, but are ambiguous, as they more commonly refer to the line graph of a complete graph and to the chordal graphs respectively. Every maximal planar graph on more than 3 vertices is at least 3-connected.
If a maximal planar graph has v vertices with v > 2, then it has precisely 3v − 6 edges and 2v − 4 faces.
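These edge and face counts can be checked numerically via Euler's formula. A minimal Python sketch (the octahedron example and variable names are illustrative, not from the article):

```python
# The octahedron is a maximal planar graph on 6 vertices:
# every face, including the outer one, is a triangle.
# Vertices 0 and 5 are "poles", 1-4 form the "equator".
octahedron_edges = {
    (0, 1), (0, 2), (0, 3), (0, 4),   # top pole to equator
    (5, 1), (5, 2), (5, 3), (5, 4),   # bottom pole to equator
    (1, 2), (2, 3), (3, 4), (4, 1),   # equatorial cycle
}

v = 6
e = len(octahedron_edges)
f = 2 - v + e          # Euler's formula: v - e + f = 2

assert e == 3 * v - 6  # 12 edges
assert f == 2 * v - 4  # 8 faces
```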
Apollonian networks are the maximal planar graphs formed by repeatedly splitting triangular faces into triples of smaller triangles. Equivalently, they are the planar 3-trees.
Strangulated graphs are the graphs in which every peripheral cycle is a triangle. In a maximal planar graph (or more generally a polyhedral graph) the peripheral cycles are the faces, so maximal planar graphs are strangulated. The strangulated graphs also include the chordal graphs, and are exactly the graphs that can be formed by clique-sums (without deleting edges) of complete graphs and maximal planar graphs.
=== Outerplanar graphs ===
Outerplanar graphs are graphs with an embedding in the plane such that all vertices belong to the unbounded face of the embedding. Every outerplanar graph is planar, but the converse is not true: K4 is planar but not outerplanar. A theorem similar to Kuratowski's states that a finite graph is outerplanar if and only if it does not contain a subdivision of K4 or of K2,3. The above is a direct corollary of the fact that a graph G is outerplanar if and only if the graph formed from G by adding a new vertex, with edges connecting it to all the other vertices, is a planar graph.
A 1-outerplanar embedding of a graph is the same as an outerplanar embedding. For k > 1 a planar embedding is k-outerplanar if removing the vertices on the outer face results in a (k − 1)-outerplanar embedding. A graph is k-outerplanar if it has a k-outerplanar embedding.
=== Halin graphs ===
A Halin graph is a graph formed from an undirected plane tree (with no degree-two nodes) by connecting its leaves into a cycle, in the order given by the plane embedding of the tree. Equivalently, it is a polyhedral graph in which one face is adjacent to all the others. Every Halin graph is planar. Like outerplanar graphs, Halin graphs have low treewidth, making many algorithmic problems on them more easily solved than in unrestricted planar graphs.
=== Upward planar graphs ===
An upward planar graph is a directed acyclic graph that can be drawn in the plane with its edges as non-crossing curves that are consistently oriented in an upward direction. Not every planar directed acyclic graph is upward planar, and it is NP-complete to test whether a given graph is upward planar.
=== Convex planar graphs ===
A planar graph is said to be convex if all of its faces (including the outer face) are convex polygons. Not all planar graphs have a convex embedding (e.g. the complete bipartite graph K2,4). A sufficient condition for a graph to have a convex drawing is that it is a subdivision of a 3-vertex-connected planar graph. Tutte's spring theorem even states that for simple 3-vertex-connected planar graphs the positions of the inner vertices can be chosen so that each is the average of the positions of its neighbors.
=== Word-representable planar graphs ===
Word-representable planar graphs include triangle-free planar graphs and, more generally, 3-colourable planar graphs, as well as certain face subdivisions of triangular grid graphs, and certain triangulations of grid-covered cylinder graphs.
== Theorems ==
=== Enumeration of planar graphs ===
The asymptotic for the number of (labeled) planar graphs on n vertices is g · n^(−7/2) · γ^n · n!, where γ ≈ 27.22687 and g ≈ 0.43 × 10^(−5).
Almost all planar graphs have an exponential number of automorphisms.
The number of unlabeled (non-isomorphic) planar graphs on n vertices is between 27.2^n and 30.06^n.
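As a rough illustration of how the asymptotic formula for labeled planar graphs is evaluated (the function name is my own, and since the formula is only asymptotic, values at small n are indicative at best):

```python
import math

gamma = 27.22687   # growth constant gamma from the enumeration result
g = 0.43e-5        # leading constant g from the enumeration result

def approx_labeled_planar(n):
    """Asymptotic estimate g * n^(-7/2) * gamma^n * n! of the number
    of labeled planar graphs on n vertices (illustrative only)."""
    return g * n ** -3.5 * gamma ** n * math.factorial(n)

# The count grows superexponentially because of the n! factor.
assert approx_labeled_planar(20) > approx_labeled_planar(10) > 0
```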
=== Other results ===
The four color theorem states that every planar graph is 4-colorable (i.e., 4-partite).
Fáry's theorem states that every simple planar graph admits a representation as a planar straight-line graph. A universal point set is a set of points such that every planar graph with n vertices has such an embedding with all vertices in the point set; there exist universal point sets of quadratic size, formed by taking a rectangular subset of the integer lattice. Every simple outerplanar graph admits an embedding in the plane such that all vertices lie on a fixed circle and all edges are straight line segments that lie inside the disk and don't intersect, so n-vertex regular polygons are universal for outerplanar graphs.
Scheinerman's conjecture (now a theorem) states that every planar graph can be represented as an intersection graph of line segments in the plane.
The planar separator theorem states that every n-vertex planar graph can be partitioned into two subgraphs of size at most 2n/3 by the removal of O(√n) vertices. As a consequence, planar graphs also have treewidth and branch-width O(√n).
The planar product structure theorem states that every planar graph is a subgraph of the strong graph product of a graph of treewidth at most 8 and a path.
This result has been used to show that planar graphs have bounded queue number, bounded non-repetitive chromatic number, and universal graphs of near-linear size. It also has applications to vertex ranking and p-centered colouring of planar graphs.
For two planar graphs with v vertices, it is possible to determine in time O(v) whether they are isomorphic or not (see also graph isomorphism problem).
Any planar graph on n nodes has at most 8(n-2) maximal cliques, which implies that the class of planar graphs is a class with few cliques.
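The clique bound can be spot-checked by brute force on a small planar graph. A hedged sketch (exponential-time enumeration, for demonstration only; function and variable names are my own):

```python
from itertools import combinations

def maximal_cliques(n, edges):
    """Brute-force all maximal cliques of a small graph on vertices
    0..n-1 (exponential time; suitable only for tiny examples)."""
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    cliques = [set(c) for k in range(1, n + 1)
               for c in combinations(range(n), k)
               if all((u, v) in adj for u, v in combinations(c, 2))]
    # keep only cliques not strictly contained in a larger clique
    return [c for c in cliques if not any(c < d for d in cliques)]

# K4 is planar and has a single maximal clique,
# well under the 8(n - 2) = 16 bound.
k4_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
cliques = maximal_cliques(4, k4_edges)
assert len(cliques) == 1 and len(cliques[0]) == 4
assert len(cliques) <= 8 * (4 - 2)
```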
== Generalizations ==
An apex graph is a graph that may be made planar by the removal of one vertex, and a k-apex graph is a graph that may be made planar by the removal of at most k vertices.
A 1-planar graph is a graph that may be drawn in the plane with at most one simple crossing per edge, and a k-planar graph is a graph that may be drawn with at most k simple crossings per edge.
A map graph is a graph formed from a set of finitely many simply-connected interior-disjoint regions in the plane by connecting two regions when they share at least one boundary point. When at most three regions meet at a point, the result is a planar graph, but when four or more regions meet at a point, the result can be nonplanar (for example, if one thinks of a circle divided into sectors, with the sectors being the regions, then the corresponding map graph is the complete graph as all the sectors have a common boundary point - the centre point).
A toroidal graph is a graph that can be embedded without crossings on the torus. More generally, the genus of a graph is the minimum genus of a two-dimensional surface into which the graph may be embedded; planar graphs have genus zero and nonplanar toroidal graphs have genus one. Every graph can be embedded without crossings into some (orientable, connected) closed two-dimensional surface (sphere with handles) and thus the genus of a graph is well defined. Obviously, if the graph can be embedded without crossings into a (orientable, connected, closed) surface with genus g, it can be embedded without crossings into all (orientable, connected, closed) surfaces with greater or equal genus. There are also other concepts in graph theory that are called "X genus" with "X" some qualifier; in general these differ from the above defined concept of "genus" without any qualifier. Especially the non-orientable genus of a graph (using non-orientable surfaces in its definition) is different for a general graph from the genus of that graph (using orientable surfaces in its definition).
Any graph may be embedded into three-dimensional space without crossings. In fact, any graph can be drawn without crossings in a two plane setup, where two planes are placed on top of each other and the edges are allowed to "jump up" and "drop down" from one plane to the other at any place (not just at the graph vertices) so that the edges can avoid intersections with other edges. This can be interpreted as saying that it is possible to make any electrical conductor network with a two-sided circuit board where electrical connection between the sides of the board can be made (as is possible with typical real life circuit boards, with the electrical connections on the top side of the board achieved through pieces of wire and at the bottom side by tracks of copper constructed on to the board itself and electrical connection between the sides of the board achieved through drilling holes, passing the wires through the holes and soldering them into the tracks); one can also interpret this as saying that in order to build any road network, one only needs just bridges or just tunnels, not both (2 levels is enough, 3 is not needed). Also, in three dimensions the question about drawing the graph without crossings is trivial. However, a three-dimensional analogue of the planar graphs is provided by the linklessly embeddable graphs, graphs that can be embedded into three-dimensional space in such a way that no two cycles are topologically linked with each other. In analogy to Kuratowski's and Wagner's characterizations of the planar graphs as being the graphs that do not contain K5 or K3,3 as a minor, the linklessly embeddable graphs may be characterized as the graphs that do not contain as a minor any of the seven graphs in the Petersen family. 
In analogy to the characterizations of the outerplanar and planar graphs as being the graphs with Colin de Verdière graph invariant at most two or three, the linklessly embeddable graphs are the graphs that have Colin de Verdière invariant at most four.
== See also ==
Combinatorial map, a combinatorial object that can encode plane graphs
Planarization, a planar graph formed from a drawing with crossings by replacing each crossing point by a new vertex
Thickness (graph theory), the smallest number of planar graphs into which the edges of a given graph may be partitioned
Planarity, a puzzle computer game in which the objective is to embed a planar graph onto a plane
Sprouts (game), a pencil-and-paper game where a planar graph subject to certain constraints is constructed as part of the game play
Three utilities problem, a popular puzzle
== Notes ==
== References ==
Kuratowski, Kazimierz (1930), "Sur le problème des courbes gauches en topologie" (PDF), Fundamenta Mathematicae (in French), 15: 271–283, doi:10.4064/fm-15-1-271-283.
Wagner, K. (1937), "Über eine Eigenschaft der ebenen Komplexe", Mathematische Annalen (in German), 114: 570–590, doi:10.1007/BF01594196, S2CID 123534907.
Boyer, John M.; Myrvold, Wendy J. (2005), "On the cutting edge: Simplified O(n) planarity by edge addition" (PDF), Journal of Graph Algorithms and Applications, 8 (3): 241–273, doi:10.7155/jgaa.00091.
McKay, Brendan; Brinkmann, Gunnar, A useful planar graph generator.
de Fraysseix, H.; Ossona de Mendez, P.; Rosenstiehl, P. (2006), "Trémaux trees and planarity", International Journal of Foundations of Computer Science, 17 (5): 1017–1029, arXiv:math/0610935, doi:10.1142/S0129054106004248, S2CID 40107560. Special Issue on Graph Drawing.
Bader, D.A.; Sreshta, S. (October 1, 2003), A New Parallel Algorithm for Planarity Testing (Technical report), UNM-ECE Technical Report 03-002, archived from the original on 2016-03-16
Fisk, Steve (1978), "A short proof of Chvátal's watchman theorem", Journal of Combinatorial Theory, Series B, 24 (3): 374, doi:10.1016/0095-8956(78)90059-X.
== External links ==
Edge Addition Planarity Algorithm Source Code, version 1.0 — Free C source code for reference implementation of Boyer–Myrvold planarity algorithm, which provides both a combinatorial planar embedder and Kuratowski subgraph isolator. An open source project with free licensing provides the Edge Addition Planarity Algorithms, current version.
Public Implementation of a Graph Algorithm Library and Editor — GPL graph algorithm library including planarity testing, planarity embedder and Kuratowski subgraph exhibition in linear time.
Boost Graph Library tools for planar graphs, including linear time planarity testing, embedding, Kuratowski subgraph isolation, and straight-line drawing.
3 Utilities Puzzle and Planar Graphs
NetLogo Planarity model — NetLogo version of John Tantalo's game | Wikipedia/Planar_graph |
In the mathematical field of graph theory, the term "null graph" may refer either to the order-zero graph, or alternatively, to any edgeless graph (the latter is sometimes called an "empty graph").
== Order-zero graph ==
The order-zero graph, K0, is the unique graph having no vertices (hence its order is zero). It follows that K0 also has no edges. Thus the null graph is a regular graph of degree zero. Some authors exclude K0 from consideration as a graph (either by definition, or more simply as a matter of convenience). Whether including K0 as a valid graph is useful depends on context. On the positive side, K0 follows naturally from the usual set-theoretic definitions of a graph (it is the ordered pair (V, E) for which the vertex and edge sets, V and E, are both empty), in proofs it serves as a natural base case for mathematical induction, and similarly, in recursively defined data structures K0 is useful for defining the base case for recursion (by treating the null tree as the child of missing edges in any non-null binary tree, every non-null binary tree has exactly two children). On the negative side, including K0 as a graph requires that many well-defined formulas for graph properties include exceptions for it (for example, either "counting all strongly connected components of a graph" becomes "counting all non-null strongly connected components of a graph", or the definition of connected graphs has to be modified not to include K0). To avoid the need for such exceptions, it is often assumed in literature that the term graph implies "graph with at least one vertex" unless context suggests otherwise.
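The claim that K0 falls naturally out of the set-theoretic definition can be illustrated with a small sketch: a component-counting routine needs no special case for the order-zero graph. (The representation and names below are my own, not from the article.)

```python
def components(vertices, edges):
    """Count connected components of an undirected graph given as the
    ordered pair (V, E); the order-zero graph gives 0 with no special case."""
    adj = {v: set() for v in vertices}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    seen, count = set(), 0
    for v in vertices:
        if v not in seen:
            count += 1
            stack = [v]
            while stack:           # depth-first traversal of one component
                x = stack.pop()
                if x not in seen:
                    seen.add(x)
                    stack.extend(adj[x] - seen)
    return count

K0 = (set(), set())          # the ordered pair (V, E), both sets empty
assert components(*K0) == 0  # base case falls out naturally
assert components({0, 1, 2}, {(0, 1)}) == 2
```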
In category theory, the order-zero graph is, according to some definitions of "category of graphs," the initial object in the category.
K0 does fulfill (vacuously) most of the same basic graph properties as does K1 (the graph with one vertex and no edges). As some examples, K0 is of size zero, it is equal to its complement graph K0, a forest, and a planar graph. It may be considered undirected, directed, or even both; when considered as directed, it is a directed acyclic graph. And it is both a complete graph and an edgeless graph. However, definitions for each of these graph properties will vary depending on whether context allows for K0.
== Edgeless graph ==
For each natural number n, the edgeless graph (or empty graph) K̄n of order n is the graph with n vertices and zero edges. An edgeless graph is occasionally referred to as a null graph in contexts where the order-zero graph is not permitted.
It is a 0-regular graph. The notation K̄n arises from the fact that the n-vertex edgeless graph is the complement of the complete graph Kn.
== See also ==
Glossary of graph theory
Cycle graph
Path graph
== Notes ==
== References ==
== External links ==
Media related to Null graphs at Wikimedia Commons | Wikipedia/Null_graph |
In the mathematical field of graph theory, a distance-transitive graph is a graph such that, given any two vertices v and w at any distance i, and any other two vertices x and y at the same distance, there is an automorphism of the graph that carries v to x and w to y. Distance-transitive graphs were first defined in 1971 by Norman L. Biggs and D. H. Smith.
A distance-transitive graph is interesting partly because it has a large automorphism group. Some interesting finite groups are the automorphism groups of distance-transitive graphs, especially of those whose diameter is 2.
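For a graph as small as the 5-cycle, the definition can be verified exhaustively: enumerate all automorphisms by brute force over vertex permutations, then check every pair of vertex pairs at equal distance. A Python sketch (feasible only for tiny graphs; names are my own):

```python
from itertools import permutations

# The 5-cycle C5 is small enough to brute-force.
n = 5
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}

def dist(u, v):
    """Graph distance on the cycle."""
    d = abs(u - v)
    return min(d, n - d)

# An automorphism is a vertex permutation mapping edges to edges.
automorphisms = [p for p in permutations(range(n))
                 if all(frozenset((p[u], p[v])) in edges
                        for u, v in map(tuple, edges))]

# Distance-transitivity: for any (v, w) and (x, y) at equal distance,
# some automorphism carries v to x and w to y.
pairs = [(v, w) for v in range(n) for w in range(n)]
dt = all(any(p[v] == x and p[w] == y for p in automorphisms)
         for v, w in pairs for x, y in pairs
         if dist(v, w) == dist(x, y))
assert dt                        # C5 is distance-transitive
assert len(automorphisms) == 10  # the dihedral group of the pentagon
```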
== Examples ==
Some first examples of families of distance-transitive graphs include:
The Johnson graphs.
The Grassmann graphs.
The Hamming graphs (including hypercube graphs).
The folded cube graphs.
The square rook's graphs.
The Livingstone graph.
== Classification of cubic distance-transitive graphs ==
After introducing them in 1971, Biggs and Smith showed that there are only 12 finite connected trivalent distance-transitive graphs. These are:
== Relation to distance-regular graphs ==
Every distance-transitive graph is distance-regular, but the converse is not necessarily true.
In 1969, before publication of the Biggs–Smith definition, a Russian group led by Georgy Adelson-Velsky showed that there exist graphs that are distance-regular but not distance-transitive. The smallest distance-regular graph that is not distance-transitive is the Shrikhande graph, with 16 vertices and degree 6. The only graph of this type with degree three is the 126-vertex Tutte 12-cage. Complete lists of distance-transitive graphs are known for some degrees larger than three, but the classification of distance-transitive graphs with arbitrarily large vertex degree remains open.
== References ==
Early works
Adel'son-Vel'skii, G. M.; Veĭsfeĭler, B. Ju.; Leman, A. A.; Faradžev, I. A. (1969), "An example of a graph which has no transitive group of automorphisms", Doklady Akademii Nauk SSSR, 185: 975–976, MR 0244107.
Biggs, Norman (1971), "Intersection matrices for linear graphs", Combinatorial Mathematics and its Applications (Proc. Conf., Oxford, 1969), London: Academic Press, pp. 15–23, MR 0285421.
Biggs, Norman (1971), Finite Groups of Automorphisms, London Mathematical Society Lecture Note Series, vol. 6, London & New York: Cambridge University Press, MR 0327563.
Biggs, N. L.; Smith, D. H. (1971), "On trivalent graphs", Bulletin of the London Mathematical Society, 3 (2): 155–158, doi:10.1112/blms/3.2.155, MR 0286693.
Smith, D. H. (1971), "Primitive and imprimitive graphs", The Quarterly Journal of Mathematics, Second Series, 22 (4): 551–557, doi:10.1093/qmath/22.4.551, MR 0327584.
Surveys
Biggs, N. L. (1993), "Distance-Transitive Graphs", Algebraic Graph Theory (2nd ed.), Cambridge University Press, pp. 155–163, chapter 20.
Van Bon, John (2007), "Finite primitive distance-transitive graphs", European Journal of Combinatorics, 28 (2): 517–532, doi:10.1016/j.ejc.2005.04.014, MR 2287450.
Brouwer, A. E.; Cohen, A. M.; Neumaier, A. (1989), "Distance-Transitive Graphs", Distance-Regular Graphs, New York: Springer-Verlag, pp. 214–234, chapter 7.
Cohen, A. M. Cohen (2004), "Distance-transitive graphs", in Beineke, L. W.; Wilson, R. J. (eds.), Topics in Algebraic Graph Theory, Encyclopedia of Mathematics and its Applications, vol. 102, Cambridge University Press, pp. 222–249.
Godsil, C.; Royle, G. (2001), "Distance-Transitive Graphs", Algebraic Graph Theory, New York: Springer-Verlag, pp. 66–69, section 4.5.
Ivanov, A. A. (1992), "Distance-transitive graphs and their classification", in Faradžev, I. A.; Ivanov, A. A.; Klin, M.; et al. (eds.), The Algebraic Theory of Combinatorial Objects, Math. Appl. (Soviet Series), vol. 84, Dordrecht: Kluwer, pp. 283–378, MR 1321634.
== External links ==
Weisstein, Eric W. "Distance-Transitive Graph". MathWorld. | Wikipedia/Distance-transitive_graph |
In the mathematical field of graph theory, the complement or inverse of a graph G is a graph H on the same vertices such that two distinct vertices of H are adjacent if and only if they are not adjacent in G. That is, to generate the complement of a graph, one fills in all the missing edges required to form a complete graph, and removes all the edges that were previously there.
The complement is not the set complement of the graph; only the edges are complemented.
== Definition ==
Let G = (V, E) be a simple graph and let K consist of all 2-element subsets of V. Then H = (V, K \ E) is the complement of G, where K \ E is the relative complement of E in K. For directed graphs, the complement can be defined in the same way, as a directed graph on the same vertex set, using the set of all ordered pairs of distinct elements of V in place of the set K in the formula above. In terms of the adjacency matrix A of the graph, if Q is the adjacency matrix of the complete graph on the same number of vertices (i.e. all entries are unity except the diagonal entries, which are zero), then the adjacency matrix of the complement of G is Q − A.
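The definition H = (V, K \ E) translates directly into set operations. A small Python sketch using frozensets for the 2-element subsets (the helper names are illustrative; here the complement of the four-vertex path is computed):

```python
from itertools import combinations

def complement(vertices, edges):
    """Return (V, K \\ E), where K is the set of all 2-element
    subsets of V and E is the given edge set."""
    K = {frozenset(p) for p in combinations(vertices, 2)}
    E = {frozenset(e) for e in edges}
    return vertices, K - E

V = {0, 1, 2, 3}
path_edges = [(0, 1), (1, 2), (2, 3)]   # the path 0-1-2-3
_, comp = complement(V, path_edges)
# All 6 pairs minus the 3 path edges leaves the 3 complement edges:
assert comp == {frozenset({0, 2}), frozenset({0, 3}), frozenset({1, 3})}
```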
The complement is not defined for multigraphs. In graphs that allow self-loops (but not multiple adjacencies) the complement of G may be defined by adding a self-loop to every vertex that does not have one in G, and otherwise using the same formula as above. This operation is, however, different from the one for simple graphs, since applying it to a graph with no self-loops would result in a graph with self-loops on all vertices.
== Applications and examples ==
Several graph-theoretic concepts are related to each other via complementation:
The complement of an edgeless graph is a complete graph and vice versa.
Any induced subgraph of the complement graph of a graph G is the complement of the corresponding induced subgraph in G.
An independent set in a graph is a clique in the complement graph and vice versa. This is a special case of the previous two properties, as an independent set is an edgeless induced subgraph and a clique is a complete induced subgraph.
The automorphism group of a graph is the automorphism group of its complement.
The complement of every triangle-free graph is a claw-free graph, although the converse is not true.
== Self-complementary graphs and graph classes ==
A self-complementary graph is a graph that is isomorphic to its own complement. Examples include the four-vertex path graph and five-vertex cycle graph. There is no known characterization of self-complementary graphs.
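That the five-vertex cycle is isomorphic to its own complement can be verified by brute force over all vertex permutations. A hedged Python sketch (feasible only for very small graphs; names are my own):

```python
from itertools import combinations, permutations

def edge_set(edges):
    return {frozenset(e) for e in edges}

def is_isomorphic(n, E1, E2):
    """Brute-force isomorphism test for graphs on vertices 0..n-1:
    search for a permutation mapping E1 exactly onto E2."""
    return any(
        all((frozenset((p[u], p[v])) in E2) == (frozenset((u, v)) in E1)
            for u, v in combinations(range(n), 2))
        for p in permutations(range(n)))

c5 = edge_set((i, (i + 1) % 5) for i in range(5))
all_pairs = edge_set(combinations(range(5), 2))
c5_complement = all_pairs - c5   # the "pentagram", another 5-cycle
assert is_isomorphic(5, c5, c5_complement)   # C5 is self-complementary
```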
Several classes of graphs are self-complementary, in the sense that the complement of any graph in one of these classes is another graph in the same class.
Perfect graphs are the graphs in which, for every induced subgraph, the chromatic number equals the size of the maximum clique. The fact that the complement of a perfect graph is also perfect is the perfect graph theorem of László Lovász.
Cographs are defined as the graphs that can be built up from single vertices by disjoint union and complementation operations. They form a self-complementary family of graphs: the complement of any cograph is another cograph. For cographs of more than one vertex, exactly one graph in each complementary pair is connected, and one equivalent definition of cographs is that each of their connected induced subgraphs has a disconnected complement. Another, self-complementary definition is that they are the graphs with no induced subgraph in the form of a four-vertex path.
Another self-complementary class of graphs is the class of split graphs, the graphs in which the vertices can be partitioned into a clique and an independent set. The same partition gives an independent set and a clique in the complement graph.
The threshold graphs are the graphs formed by repeatedly adding either an independent vertex (one with no neighbors) or a universal vertex (adjacent to all previously-added vertices). These two operations are complementary and they generate a self-complementary class of graphs.
== Algorithmic aspects ==
In the analysis of algorithms on graphs, the distinction between a graph and its complement is an important one, because a sparse graph (one with a small number of edges compared to the number of pairs of vertices) will in general not have a sparse complement, and so an algorithm that takes time proportional to the number of edges on a given graph may take a much larger amount of time if the same algorithm is run on an explicit representation of the complement graph. Therefore, researchers have studied algorithms that perform standard graph computations on the complement of an input graph, using an implicit graph representation that does not require the explicit construction of the complement graph. In particular, it is possible to simulate either depth-first search or breadth-first search on the complement graph, in an amount of time that is linear in the size of the given graph, even when the complement graph may have a much larger size. It is also possible to use these simulations to compute other properties concerning the connectivity of the complement graph.
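The complement-search idea can be sketched as follows: keep the set of still-unvisited vertices and, when processing a vertex v, move every unvisited vertex not adjacent to v in G into the current component. Each vertex leaves the unvisited set exactly once, which is the source of the near-linear bound. (A hedged Python sketch; names are my own, and Python sets only approximate the data structure needed for the formal time bound.)

```python
from collections import deque

def complement_components(n, adj):
    """Connected components of the complement of G, computed without
    ever materializing the complement graph.  adj[v] is the neighbour
    set of v in G, on vertices 0..n-1."""
    unvisited = set(range(n))
    comps = []
    while unvisited:
        start = unvisited.pop()
        comp, queue = [start], deque([start])
        while queue:
            v = queue.popleft()
            # complement-neighbours of v among the unvisited vertices
            nxt = unvisited - adj[v]
            unvisited -= nxt
            comp.extend(nxt)
            queue.extend(nxt)
        comps.append(sorted(comp))
    return comps

# G = two disjoint triangles; its complement is K3,3, which is connected.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},
       3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
assert len(complement_components(6, adj)) == 1
```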
== References == | Wikipedia/Complement_graph |
In graph theory, a cycle in a graph is a non-empty trail in which only the first and last vertices are equal. A directed cycle in a directed graph is a non-empty directed trail in which only the first and last vertices are equal.
A graph without cycles is called an acyclic graph. A directed graph without directed cycles is called a directed acyclic graph. A connected graph without cycles is called a tree.
== Definitions ==
=== Circuit and cycle ===
A circuit is a non-empty trail in which the first and last vertices are equal (closed trail).
Let G = (V, E, Φ) be a graph. A circuit is a non-empty trail (e1, e2, ..., en) with a vertex sequence (v1, v2, ..., vn, v1).
A cycle or simple circuit is a circuit in which only the first and last vertices are equal.
The number n is called the length of the circuit (respectively, of the cycle).
=== Directed circuit and directed cycle ===
A directed circuit is a non-empty directed trail in which the first and last vertices are equal (closed directed trail).
Let G = (V, E, Φ) be a directed graph. A directed circuit is a non-empty directed trail (e1, e2, ..., en) with a vertex sequence (v1, v2, ..., vn, v1).
A directed cycle or simple directed circuit is a directed circuit in which only the first and last vertices are equal.
The number n is called the length of the directed circuit (respectively, of the directed cycle).
== Chordless cycle ==
A chordless cycle in a graph, also called a hole or an induced cycle, is a cycle such that no two vertices of the cycle are connected by an edge that does not itself belong to the cycle. An antihole is the complement of a graph hole. Chordless cycles may be used to characterize perfect graphs: by the strong perfect graph theorem, a graph is perfect if and only if none of its holes or antiholes have an odd number of vertices that is greater than three. A chordal graph, a special type of perfect graph, has no holes of any size greater than three.
The girth of a graph is the length of its shortest cycle; this cycle is necessarily chordless. Cages are defined as the smallest regular graphs with given combinations of degree and girth.
A peripheral cycle is a cycle in a graph with the property that every two edges not on the cycle can be connected by a path whose interior vertices avoid the cycle. In a graph that is not formed by adding one edge to a cycle, a peripheral cycle must be an induced cycle.
== Cycle space ==
The term cycle may also refer to an element of the cycle space of a graph. There are many cycle spaces, one for each coefficient field or ring. The most common is the binary cycle space (usually called simply the cycle space), which consists of the edge sets that have even degree at every vertex; it forms a vector space over the two-element field. By Veblen's theorem, every element of the cycle space may be formed as an edge-disjoint union of simple cycles. A cycle basis of the graph is a set of simple cycles that forms a basis of the cycle space.
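The binary cycle space can be illustrated concretely: elements are edge sets of even degree everywhere, and vector addition is symmetric difference, which keeps the even-degree property. A Python sketch (illustrative names, not from the article):

```python
from collections import Counter

def degrees(edge_set):
    """Degree of each vertex within the given edge set."""
    c = Counter()
    for e in edge_set:
        for v in e:
            c[v] += 1
    return c

def is_cycle_space_element(edge_set):
    """Binary cycle space membership: even degree at every vertex."""
    return all(d % 2 == 0 for d in degrees(edge_set).values())

tri1 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 0)]}
tri2 = {frozenset(e) for e in [(1, 2), (2, 3), (3, 1)]}
assert is_cycle_space_element(tri1) and is_cycle_space_element(tri2)
# Addition is symmetric difference, and the result stays in the space:
assert is_cycle_space_element(tri1 ^ tri2)   # the 4-cycle 0-1-3-2-0
```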
Using ideas from algebraic topology, the binary cycle space generalizes to vector spaces or modules over other rings such as the integers, rational or real numbers, etc.
== Cycle detection ==
The existence of a cycle in directed and undirected graphs can be determined by whether a depth-first search (DFS) finds an edge that points to an ancestor of the current vertex (i.e., it contains a back edge). All the back edges which DFS skips over are part of cycles. In an undirected graph, the edge to the parent of a node should not be counted as a back edge, but finding any other already visited vertex will indicate a back edge. In the case of undirected graphs, only O(n) time is required to find a cycle in an n-vertex graph, since at most n − 1 edges can be tree edges.
Many topological sorting algorithms will detect cycles too, since those are obstacles for topological order to exist. Also, if a directed graph has been divided into strongly connected components, cycles only exist within the components and not between them, since cycles are strongly connected.
For directed graphs, distributed message-based algorithms can be used. These algorithms rely on the idea that a message sent by a vertex in a cycle will come back to itself.
Distributed cycle detection algorithms are useful for processing large-scale graphs using a distributed graph processing system on a computer cluster (or supercomputer).
Applications of cycle detection include the use of wait-for graphs to detect deadlocks in concurrent systems.
=== Algorithm ===
The aforementioned use of depth-first search to find a cycle can be described as follows:
For every vertex v: visited(v) = finished(v) = false
For every vertex v: DFS(v)
where
DFS(v) =
    if finished(v): return
    if visited(v):
        "Cycle found"
        return
    visited(v) = true
    for every neighbour w: DFS(w)
    finished(v) = true
For undirected graphs, "neighbour" means all vertices connected to v, except for the one that recursively called DFS(v). This omission prevents the algorithm from finding a trivial cycle of the form v→w→v; these exist in every undirected graph with at least one edge.
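The pseudocode above translates directly into Python. A hedged sketch (assumes simple graphs given as adjacency dictionaries; the parent argument implements the undirected-graph omission just described):

```python
def has_cycle(adj, directed=True):
    """DFS cycle detection.  adj maps each vertex to its (out-)neighbours.
    For undirected simple graphs, the edge back to the parent is skipped."""
    visited, finished = set(), set()

    def dfs(v, parent):
        if v in finished:
            return False
        if v in visited:
            return True               # back edge found: cycle
        visited.add(v)
        for w in adj[v]:
            if not directed and w == parent:
                continue              # skip the trivial cycle v-w-v
            if dfs(w, v):
                return True
        finished.add(v)
        return False

    return any(dfs(v, None) for v in adj)

assert has_cycle({0: [1], 1: [2], 2: [0]})        # directed triangle
assert not has_cycle({0: [1], 1: [2], 2: []})     # directed path
assert not has_cycle({0: [1], 1: [0, 2], 2: [1]},
                     directed=False)              # undirected path
```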
A variant using breadth-first search instead will find a cycle of the smallest possible length.
== Covering graphs by cycle ==
In his 1736 paper on the Seven Bridges of Königsberg, widely considered to be the birth of graph theory, Leonhard Euler proved that, for a finite undirected graph to have a closed walk that visits each edge exactly once (making it a closed trail), it is necessary and sufficient that it be connected except for isolated vertices (that is, all edges are contained in one component) and have even degree at each vertex. The corresponding characterization for the existence of a closed walk visiting each edge exactly once in a directed graph is that the graph be strongly connected and have equal numbers of incoming and outgoing edges at each vertex. In either case, the resulting closed trail is known as an Eulerian trail. If a finite undirected graph has even degree at each of its vertices, regardless of whether it is connected, then it is possible to find a set of simple cycles that together cover each edge exactly once: this is Veblen's theorem. When a connected graph does not meet the conditions of Euler's theorem, a closed walk of minimum length covering each edge at least once can nevertheless be found in polynomial time by solving the route inspection problem.
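Euler's condition is straightforward to check mechanically: all degrees even, and all edges in a single component. A Python sketch (assumes an undirected graph given as vertex and edge collections; names are my own):

```python
from collections import Counter

def has_eulerian_circuit(vertices, edges):
    """Euler's condition: every vertex has even degree, and all edges
    lie in one connected component."""
    deg = Counter()
    adj = {v: set() for v in vertices}
    for u, w in edges:
        deg[u] += 1
        deg[w] += 1
        adj[u].add(w)
        adj[w].add(u)
    if any(d % 2 for d in deg.values()):
        return False
    touched = {v for v in vertices if deg[v]}   # non-isolated vertices
    if not touched:
        return True
    seen, stack = set(), [next(iter(touched))]
    while stack:                                # one DFS from any edge end
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return touched <= seen

# A triangle has an Eulerian circuit; a path does not (odd end degrees).
assert has_eulerian_circuit({0, 1, 2}, [(0, 1), (1, 2), (2, 0)])
assert not has_eulerian_circuit({0, 1, 2}, [(0, 1), (1, 2)])
```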
The problem of finding a single simple cycle that covers each vertex exactly once, rather than covering the edges, is much harder. Such a cycle is known as a Hamiltonian cycle, and determining whether it exists is NP-complete. Much research has been published concerning classes of graphs that can be guaranteed to contain Hamiltonian cycles; one example is Ore's theorem that a Hamiltonian cycle can always be found in a graph for which every non-adjacent pair of vertices have degrees summing to at least the total number of vertices in the graph.
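While finding a Hamiltonian cycle is NP-complete, Ore's sufficient condition is cheap to test directly: check that every non-adjacent pair of vertices has degree sum at least n. A hedged Python sketch (the example graphs and names are my own):

```python
from itertools import combinations

def satisfies_ore(n, edges):
    """Ore's condition: deg(u) + deg(v) >= n for every
    non-adjacent pair u, v on vertices 0..n-1."""
    E = {frozenset(e) for e in edges}
    deg = {v: sum(1 for e in E if v in e) for v in range(n)}
    return all(deg[u] + deg[v] >= n
               for u, v in combinations(range(n), 2)
               if frozenset((u, v)) not in E)

# K2,3 fails the condition (and indeed has no Hamiltonian cycle);
# the complete graph K4 satisfies it vacuously.
k23 = [(0, 3), (0, 4), (1, 3), (1, 4), (2, 3), (2, 4)]
assert not satisfies_ore(5, k23)
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
assert satisfies_ore(4, k4)
```

Note that the condition is sufficient but not necessary: a plain cycle graph is Hamiltonian yet fails it for n > 4.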
The cycle double cover conjecture states that, for every bridgeless graph, there exists a multiset of simple cycles that covers each edge of the graph exactly twice. Proving that this is true (or finding a counterexample) remains an open problem.
== Graph classes defined by cycle ==
Several important classes of graphs can be defined by or characterized by their cycles. These include:
Bipartite graph, a graph without odd cycles (cycles with an odd number of vertices)
Cactus graph, a graph in which every nontrivial biconnected component is a cycle
Cycle graph, a graph that consists of a single cycle
Chordal graph, a graph in which every induced cycle is a triangle
Directed acyclic graph, a directed graph with no directed cycles
Forest, a cycle-free graph
Line perfect graph, a graph in which every odd cycle is a triangle
Perfect graph, a graph with no induced cycles or their complements of odd length greater than three
Pseudoforest, a graph in which each connected component has at most one cycle
Strangulated graph, a graph in which every peripheral cycle is a triangle
Strongly connected graph, a directed graph in which every edge is part of a cycle
Triangle-free graph, a graph without three-vertex cycles
Even-cycle-free graph, a graph without even cycles
Even-hole-free graph, a graph without induced even cycles of length at least 6
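The first entry above, bipartite graphs, illustrates how such a cycle condition is tested in practice: a graph has no odd cycle exactly when a BFS can 2-color it without conflict. A minimal sketch (the adjacency-dict encoding is an assumption):

```python
from collections import deque

def is_bipartite(adj):
    """Return True iff the graph (dict: vertex -> neighbor list) has
    no odd cycle, by 2-coloring each component with BFS."""
    color = {}
    for s in adj:
        if s in color:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]  # alternate colors
                    q.append(w)
                elif color[w] == color[u]:
                    return False  # an odd cycle forces a conflict
    return True
```

A 4-cycle passes the test, while a triangle (a 3-vertex cycle) fails it.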
== See also ==
Cycle space
Cycle basis
Cycle detection in a sequence of iterated function values
Minimum mean weight cycle
== References ==
Balakrishnan, V. K. (2005). Schaum's outline of theory and problems of graph theory ([Nachdr.] ed.). McGraw–Hill. ISBN 978-0070054899.
Bender, Edward A.; Williamson, S. Gill (2010). Lists, Decisions and Graphs. With an Introduction to Probability. | Wikipedia/Cycle_(graph_theory) |
In mathematics, and more specifically in graph theory, a multigraph is a graph which is permitted to have multiple edges (also called parallel edges), that is, edges that have the same end nodes. Thus two vertices may be connected by more than one edge.
There are two distinct notions of multiple edges:
Edges without own identity: The identity of an edge is defined solely by the two nodes it connects. In this case, the term "multiple edges" means that the same edge can occur several times between these two nodes.
Edges with own identity: Edges are primitive entities just like nodes. When multiple edges connect two nodes, these are different edges.
A multigraph is different from a hypergraph, which is a graph in which an edge can connect any number of nodes, not just two.
For some authors, the terms pseudograph and multigraph are synonymous. For others, a pseudograph is a multigraph that is permitted to have loops.
== Undirected multigraph (edges without own identity) ==
A multigraph G is an ordered pair G := (V, E) with
V a set of vertices or nodes,
E a multiset of unordered pairs of vertices, called edges or lines.
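The edges-without-identity formalization maps naturally onto a multiset in code; here E is a Counter over unordered endpoint pairs, so parallel edges are just multiplicities (all names are illustrative, not from the source):

```python
from collections import Counter

# Edges without their own identity: E is a multiset of unordered pairs,
# so an edge is determined entirely by its two endpoints.
V = {"a", "b", "c"}
E = Counter()

def add_edge(u, v):
    assert u in V and v in V
    E[frozenset((u, v))] += 1  # a parallel edge just raises the count

add_edge("a", "b")
add_edge("a", "b")   # a second, indistinguishable parallel edge
add_edge("b", "c")

m = E[frozenset(("a", "b"))]  # multiplicity of the a-b edge
```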
== Undirected multigraph (edges with own identity) ==
A multigraph G is an ordered triple G := (V, E, r) with
V a set of vertices or nodes,
E a set of edges or lines,
r : E → {{x,y} : x, y ∈ V}, assigning to each edge an unordered pair of endpoint nodes.
Some authors allow multigraphs to have loops, that is, an edge that connects a vertex to itself, while others call these pseudographs, reserving the term multigraph for the case with no loops.
== Directed multigraph (edges without own identity) ==
A multidigraph is a directed graph which is permitted to have multiple arcs, i.e., arcs with the same source and target nodes. A multidigraph G is an ordered pair G := (V, A) with
V a set of vertices or nodes,
A a multiset of ordered pairs of vertices called directed edges, arcs or arrows.
A mixed multigraph G := (V, E, A) may be defined in the same way as a mixed graph.
== Directed multigraph (edges with own identity) ==
A multidigraph or quiver G is an ordered 4-tuple G := (V, A, s, t) with
V a set of vertices or nodes,
A a set of edges or lines,
s : A → V, assigning to each edge its source node,
t : A → V, assigning to each edge its target node.
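By contrast, when edges have their own identity, each arc is a primitive object and the maps s and t become explicit lookups keyed by arc name. A hypothetical sketch:

```python
# A quiver G = (V, A, s, t): arcs are primitive entities, so two
# parallel arcs are distinguishable by their identifiers.
V = {"a", "b"}
A = {"e1", "e2", "e3"}
s = {"e1": "a", "e2": "a", "e3": "b"}  # source map s : A -> V
t = {"e1": "b", "e2": "b", "e3": "a"}  # target map t : A -> V

def parallel_arcs(u, v):
    """Arcs from u to v; e1 and e2 are distinct parallel arcs."""
    return {e for e in A if s[e] == u and t[e] == v}
```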
This notion might be used to model the possible flight connections offered by an airline. In this case the multigraph would be a directed graph with pairs of directed parallel edges connecting cities to show that it is possible to fly both to and from these locations.
In category theory a small category can be defined as a multidigraph (with edges having their own identity) equipped with an associative composition law and a distinguished self-loop at each vertex serving as the left and right identity for composition. For this reason, in category theory the term graph is standardly taken to mean "multidigraph", and the underlying multidigraph of a category is called its underlying digraph.
== Labeling ==
Multigraphs and multidigraphs also support the notion of graph labeling, in a similar way. However, there is no unified terminology in this case.
The definitions of labeled multigraphs and labeled multidigraphs are similar, and we define only the latter ones here.
Definition 1: A labeled multidigraph is a labeled graph with labeled arcs.
Formally: A labeled multidigraph G is a multigraph with labeled vertices and arcs, that is, an 8-tuple G = (ΣV, ΣA, V, A, s, t, ℓV, ℓA) where
V is a set of vertices and A is a set of arcs,
ΣV and ΣA are finite alphabets of the available vertex and arc labels,
s : A → V and t : A → V are two maps indicating the source and target vertex of an arc,
ℓV : V → ΣV and ℓA : A → ΣA are two maps describing the labeling of the vertices and arcs.
Definition 2: A labeled multidigraph is a labeled graph with multiple labeled arcs, i.e. arcs with the same end vertices and the same arc label (note that this notion of a labeled graph is different from the notion given by the article graph labeling).
== See also ==
Multidimensional network
Glossary of graph theory terms
Graph theory
== Notes ==
== References ==
Balakrishnan, V. K. (1997). Graph Theory. McGraw-Hill. ISBN 0-07-005489-4.
Bollobás, Béla (2002). Modern Graph Theory. Graduate Texts in Mathematics. Vol. 184. Springer. ISBN 0-387-98488-7.
Chartrand, Gary; Zhang, Ping (2012). A First Course in Graph Theory. Dover. ISBN 978-0-486-48368-9.
Diestel, Reinhard (2010). Graph Theory. Graduate Texts in Mathematics. Vol. 173 (4th ed.). Springer. ISBN 978-3-642-14278-9.
Gross, Jonathan L.; Yellen, Jay (1998). Graph Theory and Its Applications. CRC Press. ISBN 0-8493-3982-0.
Gross, Jonathan L.; Yellen, Jay, eds. (2003). Handbook of Graph Theory. CRC. ISBN 1-58488-090-2.
Harary, Frank (1995). Graph Theory. Addison Wesley. ISBN 0-201-41033-8.
Janson, Svante; Knuth, Donald E.; Luczak, Tomasz; Pittel, Boris (1993). "The birth of the giant component". Random Structures and Algorithms. 4 (3): 231–358. arXiv:math/9310236. Bibcode:1993math.....10236J. doi:10.1002/rsa.3240040303. ISSN 1042-9832. MR 1220220. S2CID 206454812.
Wilson, Robert A. (2002). Graphs, Colourings and the Four-Colour Theorem. Oxford Science Publ. ISBN 0-19-851062-4.
Zwillinger, Daniel (2002). CRC Standard Mathematical Tables and Formulae (31st ed.). Chapman & Hall/CRC. ISBN 1-58488-291-3.
== External links ==
This article incorporates public domain material from Paul E. Black. "Multigraph". Dictionary of Algorithms and Data Structures. NIST. | Wikipedia/Multigraph |
In graph theory, a connected graph G is said to be k-vertex-connected (or k-connected) if it has more than k vertices and remains connected whenever fewer than k vertices are removed.
The vertex-connectivity, or just connectivity, of a graph is the largest k for which the graph is k-vertex-connected.
== Definitions ==
A graph (other than a complete graph) has connectivity k if k is the size of the smallest subset of vertices such that the graph becomes disconnected if you delete them. In complete graphs, there is no subset whose removal would disconnect the graph. Some sources modify the definition of connectivity to handle this case, by defining it as the size of the smallest subset of vertices whose deletion results in either a disconnected graph or a single vertex. For this variation, the connectivity of a complete graph
Kn is n − 1.
An equivalent definition is that a graph with at least two vertices is k-connected if, for every pair of its vertices, it is possible to find k vertex-independent paths connecting these vertices; see Menger's theorem (Diestel 2005, p. 55). This definition produces the same answer, n − 1, for the connectivity of the complete graph Kn.
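For small graphs, the first definition can be verified by exhaustive deletion: the graph is k-connected when it has more than k vertices and no set of fewer than k vertices disconnects it. A brute-force sketch (exponential in k, for illustration only; names and encoding are assumptions):

```python
from itertools import combinations

def is_connected(adj, vertices):
    """Connectivity of the subgraph induced on `vertices`."""
    vs = set(vertices)
    if not vs:
        return True
    start = next(iter(vs))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w in vs and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == vs

def is_k_connected(adj, k):
    """True iff the graph has more than k vertices and stays connected
    after deleting any set of fewer than k vertices."""
    vertices = list(adj)
    if len(vertices) <= k:
        return False
    for r in range(k):
        for removed in combinations(vertices, r):
            if not is_connected(adj, set(vertices) - set(removed)):
                return False
    return True
```

For example, the 4-cycle is 2-connected (biconnected) but not 3-connected, since removing two opposite vertices disconnects it.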
A k-connected graph is by definition connected; it is called biconnected for k ≥ 2 and triconnected for k ≥ 3.
== Applications ==
=== Components ===
Every graph decomposes into a disjoint union of 1-connected components. 1-connected graphs decompose into a tree of biconnected components. 2-connected graphs decompose into a tree of triconnected components.
=== Polyhedral combinatorics ===
The 1-skeleton of any k-dimensional convex polytope forms a k-vertex-connected graph (Balinski's theorem). As a partial converse, Steinitz's theorem states that any 3-vertex-connected planar graph forms the skeleton of a convex polyhedron.
== Computational complexity ==
The vertex-connectivity of an input graph G can be computed in polynomial time in the following way: consider all possible pairs (s, t) of nonadjacent nodes to disconnect, using Menger's theorem to justify that the minimal-size separator for (s, t) is the number of pairwise vertex-independent paths between them; encode the input by doubling each vertex as an edge, reducing to a computation of the number of pairwise edge-independent paths; and compute the maximum number of such paths by computing the maximum flow in the graph between s and t with capacity 1 on each edge, noting that a flow of k in this graph corresponds, by the integral flow theorem, to k pairwise edge-independent paths from s to t.
== Properties ==
Let k ≥ 2.
Every k-connected graph of order at least 2k contains a cycle of length at least 2k.
In a k-connected graph, any k vertices lie on a common cycle.
The cycle space of a 3-connected graph is generated by its non-separating induced cycles.
== k-linked graph ==
A graph with at least 2k vertices is called k-linked if, for any sequences a1, …, ak and b1, …, bk of 2k distinct vertices, there exist k pairwise disjoint paths, the i-th joining ai to bi. Every k-linked graph is (2k − 1)-connected, but not necessarily 2k-connected.
If a graph is 2k-connected and has average degree at least 16k, then it is k-linked.
== See also ==
k-edge-connected graph
Connectivity (graph theory)
Menger's theorem
Structural cohesion
Tutte embedding
Vertex separator
== Notes ==
== References ==
Diestel, Reinhard (2005), Graph Theory (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-26183-4
Diestel, Reinhard (2012), Graph Theory (corrected 4th electronic ed.)
Diestel, Reinhard (2016), Graph Theory (5th ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-662-53621-6 | Wikipedia/K-vertex-connected_graph |
A geometric network is an object commonly used in geographic information systems to model a series of interconnected features. A geometric network is similar to a graph in mathematics and computer science, and can be described and analyzed using theories and concepts similar to graph theory. Geometric networks are often used to model road networks and public utility networks (such as electric, gas, and water utilities). In recent years, geometric networks have often been called spatial networks.
== Composition of a Geometric Network ==
A geometric network is composed of edges that are connected. Connectivity rules for the network specify which edges are connected and at what points they are connected, commonly referred to as junction or intersection points. These edges can have weights or flow direction assigned to them, which dictate certain properties of these edges that affect analysis results. In the case of certain types of networks, source points (points where flow originates) and sink points (points where flow terminates) may also exist. In the case of utility networks, a source point may correlate with an electric substation or a water pumping station, and a sink point may correlate with a service connection at a residential household.
== Functions ==
Networks define the interconnectedness of features. Through analyzing this connectivity, paths from one point to another on the network can be traced and calculated. Through optimization algorithms and utilizing network weights and flow, these paths can also be optimized to show specialized paths, such as the shortest path between two points on the network, as is commonly done in the calculation of driving directions. Networks can also be used to perform spatial analysis to determine points or edges that are encompassed in a certain area or within a certain distance of a specified point. This has applications in hydrology and urban planning, among other fields.
== Applications ==
Routing: for calculating driving directions, paths from one point of interest to another, locating nearby points of interest
Urban Planning: for site suitability studies, and traffic and congestion studies.
Electric Utility Industry: for modeling an electrical grid in GIS, tracing from a generation source
Other Public Utilities: for modeling water distribution flow and natural gas distribution
== See also ==
Graphs
Graph theory
Geographic Information Systems
== References == | Wikipedia/Geometric_networks |
In graph theory, the strong product is a way of combining two graphs to make a larger graph. Two vertices are adjacent in the strong product when they come from pairs of vertices in the factor graphs that are either adjacent or identical. The strong product is one of several different graph product operations that have been studied in graph theory. The strong product of any two graphs can be constructed as the union of two other products of the same two graphs, the Cartesian product of graphs and the tensor product of graphs.
An example of a strong product is the king's graph, the graph of moves of a chess king on a chessboard, which can be constructed as a strong product of path graphs. Decompositions of planar graphs and related graph classes into strong products have been used as a central tool to prove many other results about these graphs.
Care should be exercised when encountering the term strong product in the literature, since it has also been used to denote the tensor product of graphs.
== Definition and example ==
The strong product G ⊠ H of graphs G and H is a graph such that
the vertex set of G ⊠ H is the Cartesian product V(G) × V(H); and
distinct vertices (u,u' ) and (v,v' ) are adjacent in G ⊠ H if and only if:
u = v and u' is adjacent to v', or
u' = v' and u is adjacent to v, or
u is adjacent to v and u' is adjacent to v'.
It is the union of the Cartesian product and the tensor product.
For example, the king's graph, a graph whose vertices are squares of a chessboard and whose edges represent possible moves of a chess king, is a strong product of two path graphs. Its horizontal edges come from the Cartesian product, and its diagonal edges come from the tensor product of the same two paths. Together, these two kinds of edges make up the entire strong product.
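The three adjacency cases translate directly into code, and taking both factors to be 3-vertex paths produces the 3 × 3 king's graph. A sketch assuming graphs are given as dicts of neighbor sets (names illustrative):

```python
from itertools import product

def strong_product(g, h):
    """Strong product of graphs given as dicts: vertex -> neighbor set."""
    verts = list(product(g, h))
    adj = {x: set() for x in verts}
    for (u, a), (v, b) in product(verts, verts):
        if (u, a) == (v, b):
            continue
        same_u, same_a = u == v, a == b
        adj_u, adj_a = v in g[u], b in h[a]
        # Cartesian-product edges (one coordinate fixed) plus
        # tensor-product edges (both coordinates move).
        if (same_u and adj_a) or (same_a and adj_u) or (adj_u and adj_a):
            adj[(u, a)].add((v, b))
    return adj

path3 = {0: {1}, 1: {0, 2}, 2: {1}}
king3 = strong_product(path3, path3)  # the 3x3 king's graph
```

On this 3 × 3 board, the center square is adjacent to all 8 other squares, while a corner square is adjacent to 3, exactly the legal king moves.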
== Properties and applications ==
Every planar graph is a subgraph of a strong product of a path and a graph of treewidth at most six. This result has been used to prove that planar graphs have bounded queue number, small universal graphs and concise adjacency labeling schemes, and bounded nonrepetitive chromatic number and centered chromatic number. This product structure can be found in linear time. Beyond planar graphs, extensions of these results have been proven for graphs of bounded genus, graphs with a forbidden minor that is an apex graph, bounded-degree graphs with any forbidden minor, and k-planar graphs.
The clique number of the strong product of any two graphs equals the product of the clique numbers of the two graphs. If two graphs both have bounded twin-width, and in addition one of them has bounded degree, then their strong product also has bounded twin-width.
A leaf power is a graph formed from the leaves of a tree by making two leaves adjacent when their distance in the tree is below some threshold k. If G is a k-leaf power of a tree T, then T can be found as a subgraph of a strong product of G with a k-vertex cycle. This embedding has been used in recognition algorithms for leaf powers.
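The leaf-power construction itself can be sketched directly: compute leaf-to-leaf distances in the tree by BFS and join leaves within the threshold (read here as distance at most k, the usual k-leaf-power convention; names are illustrative):

```python
from collections import deque

def leaf_power(tree_adj, k):
    """Graph on the leaves of a tree, joining two leaves when their
    tree distance is at most k."""
    leaves = [v for v, nbrs in tree_adj.items() if len(nbrs) == 1]

    def dist_from(s):
        # BFS distances from s in the (unweighted) tree
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in tree_adj[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        return d

    edges = set()
    for i, u in enumerate(leaves):
        d = dist_from(u)
        for v in leaves[i + 1:]:
            if d[v] <= k:
                edges.add(frozenset((u, v)))
    return set(leaves), edges
```

For a star with three leaves, every pair of leaves is at distance 2, so the 2-leaf power is a triangle while the 1-leaf power has no edges.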
The strong product of a 7-vertex cycle graph and a 4-vertex complete graph, C7 ⊠ K4, has been suggested as a possibility for a 10-chromatic biplanar graph that would improve the known bounds on the Earth–Moon problem; another suggested example is the graph obtained by removing any vertex from C5 ⊠ K4. In both cases, the number of vertices in these graphs is more than 9 times the size of their largest independent set, implying that their chromatic number is at least 10. However, it is not known whether these graphs are biplanar.
== References == | Wikipedia/Strong_product_of_graphs |
In mathematics, computer science and network science, network theory is a part of graph theory. It defines networks as graphs where the vertices or edges possess attributes. Network theory analyses these networks over the symmetric relations or asymmetric relations between their (discrete) components.
Network theory has applications in many disciplines, including statistical physics, particle physics, computer science, electrical engineering, biology, archaeology, linguistics, economics, finance, operations research, climatology, ecology, public health, sociology, psychology, and neuroscience. Applications of network theory include logistical networks, the World Wide Web, Internet, gene regulatory networks, metabolic networks, social networks, epistemological networks, etc.; see List of network theory topics for more examples.
Euler's solution of the Seven Bridges of Königsberg problem is considered to be the first true proof in the theory of networks.
== Network optimization ==
Network problems that involve finding an optimal way of doing something are studied as combinatorial optimization. Examples include network flow, shortest path problem, transport problem, transshipment problem, location problem, matching problem, assignment problem, packing problem, routing problem, critical path analysis, and program evaluation and review technique.
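Among these, the shortest path problem has a classic polynomial-time solution, Dijkstra's algorithm, for graphs with nonnegative edge weights. A compact sketch (the adjacency encoding is an assumption for illustration):

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances from source in a graph given as
    dict: node -> list of (neighbor, nonnegative weight)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already improved
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

For example, with edges a→b of weight 1, b→c of weight 2, and a→c of weight 4, the shortest distance from a to c is 3, via b.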
== Network analysis ==
=== Electric network analysis ===
The analysis of electric power systems could be conducted using network theory from two main points of view:
An abstract perspective (i.e., as a graph consisting of nodes and edges), regardless of the electric power aspects (e.g., transmission line impedances). Most of these studies focus only on the abstract structure of the power grid using node degree distribution and betweenness distribution, which introduces substantial insight regarding the vulnerability assessment of the grid. Through these types of studies, the category of the grid structure could be identified from the complex network perspective (e.g., single-scale, scale-free). This classification might help the electric power system engineers in the planning stage or while upgrading the infrastructure (e.g., add a new transmission line) to maintain a proper redundancy level in the transmission system.
Weighted graphs that blend an abstract understanding of complex network theories and electric power systems properties.
=== Social network analysis ===
Social network analysis examines the structure of relationships between social entities. These entities are often persons, but may also be groups, organizations, nation states, web sites, or scholarly publications.
Since the 1970s, the empirical study of networks has played a central role in social science, and many of the mathematical and statistical tools used for studying networks have been first developed in sociology. Amongst many other applications, social network analysis has been used to understand the diffusion of innovations, news and rumors. Similarly, it has been used to examine the spread of both diseases and health-related behaviors. It has also been applied to the study of markets, where it has been used to examine the role of trust in exchange relationships and of social mechanisms in setting prices. It has been used to study recruitment into political movements, armed groups, and other social organizations. It has also been used to conceptualize scientific disagreements as well as academic prestige. More recently, network analysis (and its close cousin traffic analysis) has gained a significant use in military intelligence, for uncovering insurgent networks of both hierarchical and leaderless nature.
=== Biological network analysis ===
With the recent explosion of publicly available high throughput biological data, the analysis of molecular networks has gained significant interest. The type of analysis in this context is closely related to social network analysis, but often focusing on local patterns in the network. For example, network motifs are small subgraphs that are over-represented in the network. Similarly, activity motifs are patterns in the attributes of nodes and edges in the network that are over-represented given the network structure. Using networks to analyze patterns in biological systems, such as food-webs, allows us to visualize the nature and strength of interactions between species. The analysis of biological networks with respect to diseases has led to the development of the field of network medicine. Recent examples of application of network theory in biology include applications to understanding the cell cycle as well as a quantitative framework for developmental processes.
=== Narrative network analysis ===
The automatic parsing of textual corpora has enabled the extraction of actors and their relational networks on a vast scale. The resulting narrative networks, which can contain thousands of nodes, are then analyzed by using tools from Network theory to identify the key actors, the key communities or parties, and general properties such as robustness or structural stability of the overall network, or centrality of certain nodes. This automates the approach introduced by Quantitative Narrative Analysis, whereby subject-verb-object triplets are identified with pairs of actors linked by an action, or pairs formed by actor-object.
=== Link analysis ===
Link analysis is a subset of network analysis, exploring associations between objects. An example may be examining the addresses of suspects and victims, the telephone numbers they have dialed, and financial transactions that they have partaken in during a given timeframe, and the familial relationships between these subjects as a part of police investigation. Link analysis here provides the crucial relationships and associations between very many objects of different types that are not apparent from isolated pieces of information. Computer-assisted or fully automatic computer-based link analysis is increasingly employed by banks and insurance agencies in fraud detection, by telecommunication operators in telecommunication network analysis, by medical sector in epidemiology and pharmacology, in law enforcement investigations, by search engines for relevance rating (and conversely by the spammers for spamdexing and by business owners for search engine optimization), and everywhere else where relationships between many objects have to be analyzed. Links are also derived from similarity of time behavior in both nodes. Examples include climate networks where the links between two locations (nodes) are determined, for example, by the similarity of the rainfall or temperature fluctuations in both sites.
==== Web link analysis ====
Several Web search ranking algorithms use link-based centrality metrics, including Google's PageRank, Kleinberg's HITS algorithm, the CheiRank and TrustRank algorithms. Link analysis is also conducted in information science and communication science in order to understand and extract information from the structure of collections of web pages. For example, the analysis might be of the interlinking between politicians' websites or blogs. Another use is for classifying pages according to their mention in other pages.
=== Centrality measures ===
Information about the relative importance of nodes and edges in a graph can be obtained through centrality measures, widely used in disciplines like sociology. For example, eigenvector centrality uses the eigenvectors of the adjacency matrix corresponding to a network, to determine nodes that tend to be frequently visited. Formally established measures of centrality are degree centrality, closeness centrality, betweenness centrality, eigenvector centrality, subgraph centrality, and Katz centrality. The purpose or objective of analysis generally determines the type of centrality measure to be used. For example, if one is interested in dynamics on networks or the robustness of a network to node/link removal, often the dynamical importance of a node is the most relevant centrality measure.
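One of these measures, eigenvector centrality, can be approximated without any linear-algebra library by power iteration: repeatedly replace each node's score with the sum of its neighbors' scores and renormalize. A dependency-free sketch (function name and graph encoding are assumptions):

```python
def eigenvector_centrality(adj, iters=100):
    """Approximate eigenvector centrality by power iteration on a
    graph given as dict: node -> list of neighbors (undirected)."""
    score = {v: 1.0 for v in adj}
    for _ in range(iters):
        # Iterate with A + I (add the node's own score): this keeps
        # the same leading eigenvector as A but guarantees convergence
        # even on bipartite graphs, where plain iteration can oscillate.
        new = {v: score[v] + sum(score[w] for w in adj[v]) for v in adj}
        norm = sum(x * x for x in new.values()) ** 0.5
        score = {v: x / norm for v, x in new.items()}
    return score
```

On a star graph, the hub ends up with the highest score, reflecting that it is the most frequently visited node in a random traversal.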
=== Assortative and disassortative mixing ===
These concepts are used to characterize the linking preferences of hubs in a network. Hubs are nodes which have a large number of links. Some hubs tend to link to other hubs while others avoid connecting to hubs and prefer to connect to nodes with low connectivity. We say a hub is assortative when it tends to connect to other hubs. A disassortative hub avoids connecting to other hubs. If hubs have connections with the expected random probabilities, they are said to be neutral. There are three methods to quantify degree correlations.
=== Recurrence networks ===
The recurrence matrix of a recurrence plot can be considered as the adjacency matrix of an undirected and unweighted network. This allows for the analysis of time series by network measures. Applications range from detection of regime changes over characterizing dynamics to synchronization analysis.
== Spatial networks ==
Many real networks are embedded in space. Examples include transportation and other infrastructure networks, and brain neural networks. Several models for spatial networks have been developed.
== Temporal networks ==
Other networks emphasise the evolution over time of systems of nodes and their interconnections. Temporal networks are used for example to study how financial risk has spread across countries. In this study, temporal networks are used to also visually trace the intricate dynamics of financial contagion during crises. Unlike traditional network approaches that aggregate or analyze static snapshots, the study uses a time-respecting path methodology to preserve the sequence and timing of financial crises contagion events. This enables the identification of nodes as sources, transmitters, or receivers of financial stress, avoiding mischaracterizations inherent in static or aggregated methods. Following this approach, banks are found to serve as key intermediaries in contagion paths, and temporal analysis pinpoints smaller countries like Greece and Italy as significant origins of shocks during crises—insights obscured by static approaches that overemphasize large economies like the US or Japan.
Temporal networks can also be used to explore how cooperation evolves in dynamic, real-world population structures where interactions are time-dependent. Here the authors find that network temporality enhances cooperation compared to static networks, even though "bursty" interaction patterns typically hinder it. This finding also shows how cooperation and other emergent behaviours can thrive in realistic, time-varying population structures, challenging conventional assumptions rooted in static models.
In psychology, temporal networks enable the understanding of psychological disorders by framing them as dynamic systems of interconnected symptoms rather than outcomes of a single underlying cause. Using "nodes" to represent symptoms and "edges" to signify their direct interactions, symptoms like insomnia and fatigue can be shown to influence each other over time; likewise, disorders such as depression appear not as fixed entities but as evolving networks, where identifying "bridge symptoms" such as concentration difficulties can explain comorbidity between conditions like depression and anxiety.
Lastly, temporal networks enable a better understanding and controlling of the spread of infectious diseases. Unlike traditional static networks, which assume continuous, unchanging connections, temporal networks account for the precise timing and duration of interactions between individuals. This dynamic approach reveals critical nuances, such as how diseases can spread via time-sensitive pathways that static models miss. Temporal data, such as interactions captured through Bluetooth sensors or in hospital wards, can improve predictions of outbreak speed and extent. Overlooking temporal correlations can lead to significant errors in estimating epidemic dynamics, emphasizing the need for a temporal framework to develop more accurate strategies for disease control.
== Spread ==
Content in a complex network can spread via two major methods: conserved spread and non-conserved spread. In conserved spread, the total amount of content that enters a complex network remains constant as it passes through. The model of conserved spread can best be represented by a pitcher containing a fixed amount of water being poured into a series of funnels connected by tubes. Here, the pitcher represents the original source and the water is the content being spread. The funnels and connecting tubing represent the nodes and the connections between nodes, respectively. As the water passes from one funnel into another, the water disappears instantly from the funnel that was previously exposed to the water. In non-conserved spread, the amount of content changes as it enters and passes through a complex network. The model of non-conserved spread can best be represented by a continuously running faucet running through a series of funnels connected by tubes. Here, the amount of water from the original source is infinite. Also, any funnels that have been exposed to the water continue to experience the water even as it passes into successive funnels. The non-conserved model is the most suitable for explaining the transmission of most infectious diseases, neural excitation, information and rumors, etc.
=== Network immunization ===
The question of how to efficiently immunize scale-free networks, which represent realistic networks such as the Internet and social networks, has been studied extensively. One such strategy is to immunize the largest-degree nodes, i.e., targeted (intentional) attacks, since in this case pc is relatively high and fewer nodes need to be immunized.
However, in most realistic networks the global structure is not available and the largest degree nodes are unknown.
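The targeted strategy just described can be sketched as: sort nodes by degree, immunize (remove) the top fraction, and measure how much of the network remains connected. An illustrative sketch, not a specific published algorithm:

```python
def immunize_top_degree(adj, fraction):
    """Remove the top `fraction` of nodes by degree and return the
    size of the largest remaining connected component."""
    n_remove = int(len(adj) * fraction)
    by_degree = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    alive = set(adj) - set(by_degree[:n_remove])
    best, seen = 0, set()
    for s in alive:
        if s in seen:
            continue
        comp, stack = {s}, [s]  # flood-fill one surviving component
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w in alive and w not in comp:
                    comp.add(w)
                    stack.append(w)
        seen |= comp
        best = max(best, len(comp))
    return best
```

On a star (one hub, many leaves), immunizing just the hub shatters the network into isolated nodes, which is why targeting high-degree nodes is so effective in scale-free networks.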
== See also ==
== References ==
== Books ==
== External links ==
netwiki Scientific wiki dedicated to network theory
New Network Theory International Conference on 'New Network Theory'
Network Workbench: A Large-Scale Network Analysis, Modeling and Visualization Toolkit
Optimization of the Large Network doi:10.13140/RG.2.2.20183.06565/6
Network analysis of computer networks
Network analysis of organizational networks
Network analysis of terrorist networks
Network analysis of a disease outbreak
Link Analysis: An Information Science Approach (book)
Connected: The Power of Six Degrees (documentary)
A short course on complex networks
A course on complex network analysis by Albert-László Barabási
The Journal of Network Theory in Finance
Network theory in Operations Research from the Institute for Operations Research and the Management Sciences (INFORMS)
In universal algebra and in model theory, a structure consists of a set along with a collection of finitary operations and relations that are defined on it.
Universal algebra studies structures that generalize algebraic structures such as groups, rings, fields and vector spaces. The term universal algebra is used for structures of first-order theories with no relation symbols. Model theory has a different scope, encompassing more arbitrary first-order theories, including foundational structures such as models of set theory.
From the model-theoretic point of view, structures are the objects used to define the semantics of first-order logic, cf. also Tarski's theory of truth or Tarskian semantics.
For a given theory in model theory, a structure is called a model if it satisfies the defining axioms of that theory, although it is sometimes disambiguated as a semantic model when one discusses the notion in the more general setting of mathematical models. Logicians sometimes refer to structures as "interpretations", whereas the term "interpretation" generally has a different (although related) meaning in model theory; see interpretation (model theory).
In database theory, structures with no functions are studied as models for relational databases, in the form of relational models.
== History ==
In the context of mathematical logic, the term "model" was first applied in 1940 by the philosopher Willard Van Orman Quine, in a reference to mathematician Richard Dedekind (1831–1916), a pioneer in the development of set theory. Since the 19th century, one main method for proving the consistency of a set of axioms has been to provide a model for it.
== Definition ==
Formally, a structure can be defined as a triple $\mathcal{A} = (A, \sigma, I)$ consisting of a domain $A$, a signature $\sigma$, and an interpretation function $I$ that indicates how the signature is to be interpreted on the domain. To indicate that a structure has a particular signature $\sigma$ one can refer to it as a $\sigma$-structure.
=== Domain ===
The domain of a structure is an arbitrary set; it is also called the underlying set of the structure, its carrier (especially in universal algebra), its universe (especially in model theory, cf. universe), or its domain of discourse. In classical first-order logic, the definition of a structure prohibits the empty domain.
Sometimes the notation $\operatorname{dom}(\mathcal{A})$ or $|\mathcal{A}|$ is used for the domain of $\mathcal{A}$, but often no notational distinction is made between a structure and its domain (that is, the same symbol $\mathcal{A}$ refers both to the structure and its domain).
=== Signature ===
The signature $\sigma = (S, \operatorname{ar})$ of a structure consists of:
a set $S$ of function symbols and relation symbols, along with
a function $\operatorname{ar} : S \to \mathbb{N}_0$ that ascribes to each symbol $s$ a natural number $n = \operatorname{ar}(s)$.
The natural number $n = \operatorname{ar}(s)$ of a symbol $s$ is called the arity of $s$ because it is the arity of the interpretation of $s$.
Since the signatures that arise in algebra often contain only function symbols, a signature with no relation symbols is called an algebraic signature. A structure with such a signature is also called an algebra; this should not be confused with the notion of an algebra over a field.
=== Interpretation function ===
The interpretation function $I$ of $\mathcal{A}$ assigns functions and relations to the symbols of the signature. To each function symbol $f$ of arity $n$ is assigned an $n$-ary function $f^{\mathcal{A}} = I(f)$ on the domain. Each relation symbol $R$ of arity $n$ is assigned an $n$-ary relation $R^{\mathcal{A}} = I(R) \subseteq A^{\operatorname{ar}(R)}$ on the domain. A nullary ($0$-ary) function symbol $c$ is called a constant symbol, because its interpretation $I(c)$ can be identified with a constant element of the domain.
When a structure (and hence an interpretation function) is given by context, no notational distinction is made between a symbol $s$ and its interpretation $I(s)$. For example, if $f$ is a binary function symbol of $\mathcal{A}$, one simply writes $f : \mathcal{A}^2 \to \mathcal{A}$ rather than $f^{\mathcal{A}} : |\mathcal{A}|^2 \to |\mathcal{A}|$.
=== Examples ===
The standard signature $\sigma_f$ for fields consists of two binary function symbols $+$ and $\times$, where additional symbols can be derived, such as a unary function symbol $-$ (uniquely determined by $+$) and the two constant symbols $0$ and $1$ (uniquely determined by $+$ and $\times$ respectively).
Thus a structure (algebra) for this signature consists of a set of elements $A$ together with two binary functions, that can be enhanced with a unary function, and two distinguished elements; but there is no requirement that it satisfy any of the field axioms. The rational numbers $\mathbb{Q}$, the real numbers $\mathbb{R}$, and the complex numbers $\mathbb{C}$, like any other field, can be regarded as $\sigma$-structures in an obvious way:
$\mathcal{Q} = (\mathbb{Q}, \sigma_f, I_{\mathcal{Q}})$, $\mathcal{R} = (\mathbb{R}, \sigma_f, I_{\mathcal{R}})$, $\mathcal{C} = (\mathbb{C}, \sigma_f, I_{\mathcal{C}})$.
In all three cases we have the standard signature given by $\sigma_f = (S_f, \operatorname{ar}_f)$ with $S_f = \{+, \times, -, 0, 1\}$ and $\operatorname{ar}_f(+) = 2$, $\operatorname{ar}_f(\times) = 2$, $\operatorname{ar}_f(-) = 1$, $\operatorname{ar}_f(0) = 0$, $\operatorname{ar}_f(1) = 0$.
The interpretation function $I_{\mathcal{Q}}$ is:
$I_{\mathcal{Q}}(+) : \mathbb{Q} \times \mathbb{Q} \to \mathbb{Q}$ is addition of rational numbers,
$I_{\mathcal{Q}}(\times) : \mathbb{Q} \times \mathbb{Q} \to \mathbb{Q}$ is multiplication of rational numbers,
$I_{\mathcal{Q}}(-) : \mathbb{Q} \to \mathbb{Q}$ is the function that takes each rational number $x$ to $-x$,
$I_{\mathcal{Q}}(0) \in \mathbb{Q}$ is the number $0$, and
$I_{\mathcal{Q}}(1) \in \mathbb{Q}$ is the number $1$;
and $I_{\mathcal{R}}$ and $I_{\mathcal{C}}$ are similarly defined.
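As a sketch (the encoding below is our own, not a standard one), the triple of domain, signature, and interpretation function can be written out directly in code, using Python's `fractions.Fraction` for $\mathbb{Q}$:

```python
from fractions import Fraction

# Signature: symbol -> arity (the standard field signature from the text).
sigma_f = {"+": 2, "*": 2, "-": 1, "0": 0, "1": 0}

# Interpretation function I_Q for the sigma_f-structure on the rationals.
I_Q = {
    "+": lambda x, y: x + y,
    "*": lambda x, y: x * y,
    "-": lambda x: -x,
    "0": lambda: Fraction(0),
    "1": lambda: Fraction(1),
}

class Structure:
    """A structure as a triple: domain (membership test), signature, interpretation."""
    def __init__(self, in_domain, signature, interp):
        self.in_domain, self.signature, self.interp = in_domain, signature, interp

    def apply(self, symbol, *args):
        assert len(args) == self.signature[symbol], "arity mismatch"
        return self.interp[symbol](*args)

Q = Structure(lambda x: isinstance(x, Fraction), sigma_f, I_Q)
print(Q.apply("+", Fraction(1, 2), Fraction(1, 3)))  # 5/6
print(Q.apply("1"))                                  # 1
```

Note that nothing in this encoding enforces the field axioms, mirroring the point made below: a $\sigma_f$-structure need not be a field.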
But the ring $\mathbb{Z}$ of integers, which is not a field, is also a $\sigma_f$-structure in the same way. In fact, there is no requirement that any of the field axioms hold in a $\sigma_f$-structure.
A signature for ordered fields needs an additional binary relation such as $<$ or $\leq$, and therefore structures for such a signature are not algebras, even though they are of course algebraic structures in the usual, loose sense of the word.
The ordinary signature for set theory includes a single binary relation $\in$. A structure for this signature consists of a set of elements and an interpretation of the $\in$ relation as a binary relation on these elements.
== Induced substructures and closed subsets ==
$\mathcal{A}$ is called an (induced) substructure of $\mathcal{B}$ if
$\mathcal{A}$ and $\mathcal{B}$ have the same signature $\sigma(\mathcal{A}) = \sigma(\mathcal{B})$;
the domain of $\mathcal{A}$ is contained in the domain of $\mathcal{B}$: $|\mathcal{A}| \subseteq |\mathcal{B}|$; and
the interpretations of all function and relation symbols agree on $|\mathcal{A}|$.
The usual notation for this relation is $\mathcal{A} \subseteq \mathcal{B}$.
A subset $B \subseteq |\mathcal{A}|$ of the domain of a structure $\mathcal{A}$ is called closed if it is closed under the functions of $\mathcal{A}$, that is, if the following condition is satisfied: for every natural number $n$, every $n$-ary function symbol $f$ (in the signature of $\mathcal{A}$) and all elements $b_1, b_2, \dots, b_n \in B$, the result of applying $f$ to the $n$-tuple $b_1 b_2 \dots b_n$ is again an element of $B$: $f(b_1, b_2, \dots, b_n) \in B$.
For every subset $B \subseteq |\mathcal{A}|$ there is a smallest closed subset of $|\mathcal{A}|$ that contains $B$. It is called the closed subset generated by $B$, or the hull of $B$, and denoted by $\langle B \rangle$ or $\langle B \rangle_{\mathcal{A}}$. The operator $\langle \rangle$ is a finitary closure operator on the set of subsets of $|\mathcal{A}|$.
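For finite cases the hull can be computed by fixed-point iteration; a sketch (our own encoding of the structure's functions as arity/callable pairs):

```python
from itertools import product

def hull(B, functions):
    """Smallest subset containing B that is closed under the given functions.

    functions: list of (arity, callable) pairs. This only terminates when the
    hull is finite, so it is a sketch for small examples, not a general tool.
    """
    closed = set(B)
    while True:
        new = set()
        for arity, f in functions:
            for args in product(closed, repeat=arity):
                v = f(*args)
                if v not in closed:
                    new.add(v)
        if not new:          # fixed point reached: closed under all functions
            return closed
        closed |= new

# Hull of {2} in Z mod 10 under addition: exactly the even residues.
print(sorted(hull({2}, [(2, lambda x, y: (x + y) % 10)])))  # [0, 2, 4, 6, 8]
```

Iterating until no new elements appear is precisely what makes $\langle\,\rangle$ a finitary closure operator: every element of the hull is produced by finitely many function applications to generators.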
If $\mathcal{A} = (A, \sigma, I)$ and $B \subseteq A$ is a closed subset, then $(B, \sigma, I')$ is an induced substructure of $\mathcal{A}$, where $I'$ assigns to every symbol of $\sigma$ the restriction to $B$ of its interpretation in $\mathcal{A}$. Conversely, the domain of an induced substructure is a closed subset.
The closed subsets (or induced substructures) of a structure form a lattice. The meet of two subsets is their intersection. The join of two subsets is the closed subset generated by their union. Universal algebra studies the lattice of substructures of a structure in detail.
=== Examples ===
Let $\sigma = \{+, \times, -, 0, 1\}$ be again the standard signature for fields. When regarded as $\sigma$-structures in the natural way, the rational numbers form a substructure of the real numbers, and the real numbers form a substructure of the complex numbers. The rational numbers are the smallest substructure of the real (or complex) numbers that also satisfies the field axioms.
The set of integers gives an even smaller substructure of the real numbers which is not a field. Indeed, the integers are the substructure of the real numbers generated by the empty set, using this signature. The notion in abstract algebra that corresponds to a substructure of a field, in this signature, is that of a subring, rather than that of a subfield.
The most obvious way to define a graph is a structure with a signature $\sigma$ consisting of a single binary relation symbol $E$. The vertices of the graph form the domain of the structure, and for two vertices $a$ and $b$, $(a, b) \in E$ means that $a$ and $b$ are connected by an edge. In this encoding, the notion of induced substructure is more restrictive than the notion of subgraph. For example, let $G$ be a graph consisting of two vertices connected by an edge, and let $H$ be the graph consisting of the same vertices but no edges. $H$ is a subgraph of $G$, but not an induced substructure. The notion in graph theory that corresponds to induced substructures is that of induced subgraphs.
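This distinction is easy to check mechanically; a sketch, encoding a graph as a pair of a vertex set and an edge relation (the encoding is ours):

```python
# Graphs as relational structures: (domain, E) with E a set of ordered pairs.
# G: two vertices joined by an edge; H: the same vertices, no edges.
G = ({"a", "b"}, {("a", "b"), ("b", "a")})
H = ({"a", "b"}, set())

def is_induced_substructure(S, T):
    """S is an induced substructure of T iff its domain is contained in T's
    and its relation is exactly T's relation restricted to that domain."""
    (dom_s, E_s), (dom_t, E_t) = S, T
    restricted = {(x, y) for (x, y) in E_t if x in dom_s and y in dom_s}
    return dom_s <= dom_t and E_s == restricted

print(is_induced_substructure(H, G))  # False: H drops an edge on kept vertices
print(is_induced_substructure(G, G))  # True
```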
== Homomorphisms and embeddings ==
=== Homomorphisms ===
Given two structures $\mathcal{A}$ and $\mathcal{B}$ of the same signature $\sigma$, a ($\sigma$-)homomorphism from $\mathcal{A}$ to $\mathcal{B}$ is a map $h : |\mathcal{A}| \to |\mathcal{B}|$ that preserves the functions and relations. More precisely:
For every $n$-ary function symbol $f$ of $\sigma$ and any elements $a_1, a_2, \dots, a_n \in |\mathcal{A}|$, the following equation holds:
$h(f(a_1, a_2, \dots, a_n)) = f(h(a_1), h(a_2), \dots, h(a_n))$.
For every $n$-ary relation symbol $R$ of $\sigma$ and any elements $a_1, a_2, \dots, a_n \in |\mathcal{A}|$, the following implication holds:
$(a_1, a_2, \dots, a_n) \in R^{\mathcal{A}} \implies (h(a_1), h(a_2), \dots, h(a_n)) \in R^{\mathcal{B}}$,
where $R^{\mathcal{A}}$, $R^{\mathcal{B}}$ is the interpretation of the relation symbol $R$ of the object theory in the structure $\mathcal{A}$, $\mathcal{B}$ respectively.
A homomorphism $h$ from $\mathcal{A}$ to $\mathcal{B}$ is typically denoted as $h : \mathcal{A} \to \mathcal{B}$, although technically the function $h$ is between the domains $|\mathcal{A}|$, $|\mathcal{B}|$ of the two structures $\mathcal{A}$, $\mathcal{B}$.
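For finite structures the two homomorphism conditions can be verified by brute force; a sketch under our own encoding of structures as (domain, functions, relations) triples:

```python
from itertools import product

def is_homomorphism(h, A, B):
    """Check the two homomorphism conditions on finite structures.

    A structure here is (domain, funcs, rels): funcs maps a symbol to a Python
    function on the domain, rels maps a symbol to a set of tuples.
    """
    dom_a, funcs_a, rels_a = A
    dom_b, funcs_b, rels_b = B
    for name, f in funcs_a.items():
        g, n = funcs_b[name], f.__code__.co_argcount
        for args in product(dom_a, repeat=n):
            if h[f(*args)] != g(*(h[a] for a in args)):
                return False              # h(f(a...)) must equal f(h(a)...)
    for name, rel in rels_a.items():
        for tup in rel:
            if tuple(h[a] for a in tup) not in rels_b[name]:
                return False              # relations must be preserved
    return True

# Z mod 4 -> Z mod 2 under x |-> x mod 2 is a homomorphism for addition.
A = ({0, 1, 2, 3}, {"+": lambda x, y: (x + y) % 4}, {})
B = ({0, 1},       {"+": lambda x, y: (x + y) % 2}, {})
h = {x: x % 2 for x in range(4)}
print(is_homomorphism(h, A, B))  # True
```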
For every signature σ there is a concrete category σ-Hom which has σ-structures as objects and σ-homomorphisms as morphisms.
A homomorphism $h : \mathcal{A} \to \mathcal{B}$ is sometimes called strong if:
For every $n$-ary relation symbol $R$ of the object theory and any elements $b_1, b_2, \dots, b_n \in |\mathcal{B}|$ such that $(b_1, b_2, \dots, b_n) \in R^{\mathcal{B}}$, there are $a_1, a_2, \dots, a_n \in |\mathcal{A}|$ such that $(a_1, a_2, \dots, a_n) \in R^{\mathcal{A}}$ and $b_1 = h(a_1),\ b_2 = h(a_2),\ \dots,\ b_n = h(a_n)$.
The strong homomorphisms give rise to a subcategory of the category σ-Hom that was defined above.
=== Embeddings ===
A ($\sigma$-)homomorphism $h : \mathcal{A} \to \mathcal{B}$ is called a ($\sigma$-)embedding if it is one-to-one and
for every $n$-ary relation symbol $R$ of $\sigma$ and any elements $a_1, a_2, \dots, a_n$, the following equivalence holds:
$(a_1, a_2, \dots, a_n) \in R^{\mathcal{A}} \iff (h(a_1), h(a_2), \dots, h(a_n)) \in R^{\mathcal{B}}$
(where as before $R^{\mathcal{A}}$, $R^{\mathcal{B}}$ refers to the interpretation of the relation symbol $R$ of the object theory $\sigma$ in the structure $\mathcal{A}$, $\mathcal{B}$ respectively).
Thus an embedding is the same thing as a strong homomorphism which is one-to-one.
The category σ-Emb of σ-structures and σ-embeddings is a concrete subcategory of σ-Hom.
Induced substructures correspond to subobjects in σ-Emb. If σ has only function symbols, σ-Emb is the subcategory of monomorphisms of σ-Hom. In this case induced substructures also correspond to subobjects in σ-Hom.
=== Example ===
As seen above, in the standard encoding of graphs as structures the induced substructures are precisely the induced subgraphs. However, a homomorphism between graphs is the same thing as a homomorphism between the two structures coding the graph. In the example of the previous section, even though the subgraph H of G is not induced, the identity map id: H → G is a homomorphism. This map is in fact a monomorphism in the category σ-Hom, and therefore H is a subobject of G which is not an induced substructure.
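The embedding condition can likewise be checked mechanically on finite relational structures; a sketch (the `(domain, {name: (arity, tuples)})` encoding is ours), reusing the $H$ and $G$ of the example:

```python
from itertools import product

def is_embedding(h, A, B):
    """Embedding check: h must be one-to-one, and membership in each relation
    must be preserved in both directions (the 'if and only if' above)."""
    (dom_a, rels_a), (dom_b, rels_b) = A, B
    if len({h[a] for a in dom_a}) != len(dom_a):
        return False                                   # not one-to-one
    for name, (arity, rel_a) in rels_a.items():
        _, rel_b = rels_b[name]
        for args in product(dom_a, repeat=arity):
            if (args in rel_a) != (tuple(h[a] for a in args) in rel_b):
                return False                           # equivalence fails
    return True

# The identity map from the edgeless graph H into G is a one-to-one
# homomorphism but not an embedding, matching the example above.
G = ({"a", "b"}, {"E": (2, {("a", "b"), ("b", "a")})})
H = ({"a", "b"}, {"E": (2, set())})
ident = {"a": "a", "b": "b"}
print(is_embedding(ident, H, G))  # False
print(is_embedding(ident, G, G))  # True
```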
=== Homomorphism problem ===
The following problem is known as the homomorphism problem:
Given two finite structures $\mathcal{A}$ and $\mathcal{B}$ of a finite relational signature, find a homomorphism $h : \mathcal{A} \to \mathcal{B}$ or show that no such homomorphism exists.
Every constraint satisfaction problem (CSP) has a translation into the homomorphism problem. Therefore, the complexity of CSP can be studied using the methods of finite model theory.
Another application is in database theory, where a relational model of a database is essentially the same thing as a relational structure. It turns out that a conjunctive query on a database can be described by another structure in the same signature as the database model. A homomorphism from the relational model to the structure representing the query is the same thing as a solution to the query. This shows that the conjunctive query problem is also equivalent to the homomorphism problem.
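For tiny instances the homomorphism problem can be solved by exhaustive search; a sketch (exponential time, purely illustrative) that also shows the CSP flavour via graph 2-colourability:

```python
from itertools import product

def find_homomorphism(A, B):
    """Brute-force solver for the homomorphism problem on finite relational
    structures. A structure is (domain, relations) with relations a dict
    mapping a symbol name to a set of tuples."""
    (dom_a, rels_a), (dom_b, rels_b) = A, B
    dom_a = sorted(dom_a)
    for image in product(sorted(dom_b), repeat=len(dom_a)):
        h = dict(zip(dom_a, image))
        if all(tuple(h[x] for x in t) in rels_b[name]
               for name, rel in rels_a.items() for t in rel):
            return h
    return None

# Symmetric edge relation helper for undirected graphs.
edge = lambda pairs: {(a, b) for a, b in pairs} | {(b, a) for a, b in pairs}

# A 4-cycle maps homomorphically onto a single edge (it is 2-colourable) ...
C4 = ({0, 1, 2, 3}, {"E": edge([(0, 1), (1, 2), (2, 3), (3, 0)])})
K2 = ({"u", "v"}, {"E": edge([("u", "v")])})
print(find_homomorphism(C4, K2) is not None)  # True

# ... but a triangle does not (it is not 2-colourable).
C3 = ({0, 1, 2}, {"E": edge([(0, 1), (1, 2), (2, 0)])})
print(find_homomorphism(C3, K2))  # None
```

Finding a homomorphism into $K_2$ is exactly the 2-colouring CSP, illustrating the translation between the two problems mentioned above.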
== Structures and first-order logic ==
Structures are sometimes referred to as "first-order structures". This is misleading, as nothing in their definition ties them to any specific logic, and in fact they are suitable as semantic objects both for very restricted fragments of first-order logic such as that used in universal algebra, and for second-order logic. In connection with first-order logic and model theory, structures are often called models, even when the question "models of what?" has no obvious answer.
=== Satisfaction relation ===
Each first-order structure $\mathcal{M} = (M, \sigma, I)$ has a satisfaction relation $\mathcal{M} \vDash \phi$ defined for all formulas $\phi$ in the language consisting of the language of $\mathcal{M}$ together with a constant symbol for each element of $M$, which is interpreted as that element. This relation is defined inductively using Tarski's T-schema.
A structure $\mathcal{M}$ is said to be a model of a theory $T$ if the language of $\mathcal{M}$ is the same as the language of $T$ and every sentence in $T$ is satisfied by $\mathcal{M}$. Thus, for example, a "ring" is a structure for the language of rings that satisfies each of the ring axioms, and a model of ZFC set theory is a structure in the language of set theory that satisfies each of the ZFC axioms.
=== Definable relations ===
An $n$-ary relation $R$ on the universe (i.e. domain) $M$ of the structure $\mathcal{M}$ is said to be definable (or explicitly definable, cf. Beth definability, or $\emptyset$-definable, or definable with parameters from $\emptyset$, cf. below) if there is a formula $\varphi(x_1, \ldots, x_n)$ such that
$R = \{(a_1, \ldots, a_n) \in M^n : \mathcal{M} \vDash \varphi(a_1, \ldots, a_n)\}.$
In other words, $R$ is definable if and only if there is a formula $\varphi$ such that
$(a_1, \ldots, a_n) \in R \Leftrightarrow \mathcal{M} \vDash \varphi(a_1, \ldots, a_n)$
holds.
An important special case is the definability of specific elements. An element $m$ of $M$ is definable in $\mathcal{M}$ if and only if there is a formula $\varphi(x)$ such that $\mathcal{M} \vDash \forall x\,(x = m \leftrightarrow \varphi(x)).$
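Over a finite structure, the set defined by a formula can be computed directly from the definition above; a sketch in which a Python predicate stands in for the first-order formula $\varphi$ (an illustrative encoding, not part of the article's formalism):

```python
from itertools import product

def defined_set(M, arity, phi):
    """The relation defined by a formula over a finite structure, following
    R = {(a_1, ..., a_n) in M^n : M |= phi(a_1, ..., a_n)}."""
    return {t for t in product(M, repeat=arity) if phi(*t)}

# In the structure (Z mod 4, +), the formula  x + x = 0  defines {0, 2}:
M = range(4)
R = defined_set(M, 1, lambda x: (x + x) % 4 == 0)
print(sorted(t[0] for t in R))  # [0, 2]

# The element 0 is definable: it is the unique x with  x + x = x.
zero = defined_set(M, 1, lambda x: (x + x) % 4 == x)
print(zero)  # {(0,)}
```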
==== Definability with parameters ====
A relation $R$ is said to be definable with parameters (or $|\mathcal{M}|$-definable) if there is a formula $\varphi$ with parameters from $\mathcal{M}$ such that $R$ is definable using $\varphi$. Every element of a structure is definable using the element itself as a parameter.
Some authors use definable to mean definable without parameters, while other authors mean definable with parameters. Broadly speaking, the convention that definable means definable without parameters is more common amongst set theorists, while the opposite convention is more common amongst model theorists.
==== Implicit definability ====
Recall from above that an $n$-ary relation $R$ on the universe $M$ of $\mathcal{M}$ is explicitly definable if there is a formula $\varphi(x_1, \ldots, x_n)$ such that
$R = \{(a_1, \ldots, a_n) \in M^n : \mathcal{M} \vDash \varphi(a_1, \ldots, a_n)\}.$
Here the formula $\varphi$ used to define a relation $R$ must be over the signature of $\mathcal{M}$, and so $\varphi$ may not mention $R$ itself, since $R$ is not in the signature of $\mathcal{M}$.
If there is a formula $\varphi$ in the extended language containing the language of $\mathcal{M}$ and a new symbol $R$, and the relation $R$ is the only relation on $\mathcal{M}$ such that $\mathcal{M} \vDash \varphi$, then $R$ is said to be implicitly definable over $\mathcal{M}$.
By Beth's theorem, every implicitly definable relation is explicitly definable.
== Many-sorted structures ==
Structures as defined above are sometimes called one-sorted structures to distinguish them from the more general many-sorted structures. A many-sorted structure can have an arbitrary number of domains. The sorts are part of the signature, and they play the role of names for the different domains. Many-sorted signatures also prescribe which sorts the functions and relations of a many-sorted structure are defined on. Therefore, the arities of function symbols or relation symbols must be more complicated objects such as tuples of sorts rather than natural numbers.
Vector spaces, for example, can be regarded as two-sorted structures in the following way. The two-sorted signature of vector spaces consists of two sorts V (for vectors) and S (for scalars), together with function symbols for the vector space operations, such as vector addition, the zero vector, the field operations on scalars, and the scalar multiplication of a vector.
If $V$ is a vector space over a field $F$, the corresponding two-sorted structure $\mathcal{V}$ consists of the vector domain $|\mathcal{V}|_V = V$, the scalar domain $|\mathcal{V}|_S = F$, and the obvious functions, such as the vector zero $0_V^{\mathcal{V}} = 0 \in |\mathcal{V}|_V$, the scalar zero $0_S^{\mathcal{V}} = 0 \in |\mathcal{V}|_S$, or scalar multiplication $\times^{\mathcal{V}} : |\mathcal{V}|_S \times |\mathcal{V}|_V \to |\mathcal{V}|_V$.
Many-sorted structures are often used as a convenient tool even when they could be avoided with a little effort. But they are rarely defined in a rigorous way, because it is straightforward and tedious (hence unrewarding) to carry out the generalization explicitly.
In most mathematical endeavours, not much attention is paid to the sorts. A many-sorted logic however naturally leads to a type theory. As Bart Jacobs puts it: "A logic is always a logic over a type theory." This emphasis in turn leads to categorical logic because a logic over a type theory categorically corresponds to one ("total") category, capturing the logic, being fibred over another ("base") category, capturing the type theory.
== Other generalizations ==
=== Partial algebras ===
Both universal algebra and model theory study classes of (structures or) algebras that are defined by a signature and a set of axioms. In the case of model theory these axioms have the form of first-order sentences. The formalism of universal algebra is much more restrictive; essentially it only allows first-order sentences that have the form of universally quantified equations between terms, e.g.
$\forall x\,\forall y\,(x + y = y + x)$. One consequence is that the choice of a signature is more significant in universal algebra than it is in model theory. For example, the class of groups, in the signature consisting of the binary function symbol $\times$ and the constant symbol $1$, is an elementary class, but it is not a variety. Universal algebra solves this problem by adding a unary function symbol ${}^{-1}$.
In the case of fields this strategy works only for addition. For multiplication it fails because 0 does not have a multiplicative inverse. An ad hoc attempt to deal with this would be to define $0^{-1} = 0$. (This attempt fails, essentially because with this definition $0 \times 0^{-1} = 1$ is not true.) Therefore, one is naturally led to allow partial functions, i.e., functions that are defined only on a subset of their domain. However, there are several obvious ways to generalize notions such as substructure, homomorphism and identity.
=== Structures for typed languages ===
In type theory, there are many sorts of variables, each of which has a type. Types are inductively defined; given two types δ and σ there is also a type σ → δ that represents functions from objects of type σ to objects of type δ. A structure for a typed language (in the ordinary first-order semantics) must include a separate set of objects of each type, and for a function type the structure must have complete information about the function represented by each object of that type.
=== Higher-order languages ===
There is more than one possible semantics for higher-order logic, as discussed in the article on second-order logic. When using full higher-order semantics, a structure need only have a universe for objects of type 0, and the T-schema is extended so that a quantifier over a higher-order type is satisfied by the model if and only if it is disquotationally true. When using first-order semantics, an additional sort is added for each higher-order type, as in the case of a many sorted first order language.
=== Structures that are proper classes ===
In the study of set theory and category theory, it is sometimes useful to consider structures in which the domain of discourse is a proper class instead of a set. These structures are sometimes called class models to distinguish them from the "set models" discussed above. When the domain is a proper class, each function and relation symbol may also be represented by a proper class.
In Bertrand Russell's Principia Mathematica, structures were also allowed to have a proper class as their domain.
== See also ==
Mathematical structure – Additional mathematical object
== References ==
Burris, Stanley N.; Sankappanavar, H. P. (1981), A Course in Universal Algebra, Berlin, New York: Springer-Verlag
Chang, Chen Chung; Keisler, H. Jerome (1989) [1973], Model Theory, Elsevier, ISBN 978-0-7204-0692-4
Diestel, Reinhard (2005) [1997], Graph Theory, Graduate Texts in Mathematics, vol. 173 (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-26183-4
Ebbinghaus, Heinz-Dieter; Flum, Jörg; Thomas, Wolfgang (1994), Mathematical Logic (2nd ed.), New York: Springer, ISBN 978-0-387-94258-2
Hinman, P. (2005), Fundamentals of Mathematical Logic, A K Peters, ISBN 978-1-56881-262-5
Hodges, Wilfrid (1993), Model theory, Cambridge: Cambridge University Press, ISBN 978-0-521-30442-9
Hodges, Wilfrid (1997), A shorter model theory, Cambridge: Cambridge University Press, ISBN 978-0-521-58713-6
Marker, David (2002), Model Theory: An Introduction, Berlin, New York: Springer-Verlag, ISBN 978-0-387-98760-6
Poizat, Bruno (2000), A Course in Model Theory: An Introduction to Contemporary Mathematical Logic, Berlin, New York: Springer-Verlag, ISBN 978-0-387-98655-5
Rautenberg, Wolfgang (2010), A Concise Introduction to Mathematical Logic (3rd ed.), New York: Springer Science+Business Media, doi:10.1007/978-1-4419-1221-3, ISBN 978-1-4419-1220-6
Rothmaler, Philipp (2000), Introduction to Model Theory, London: CRC Press, ISBN 978-90-5699-313-9
== External links ==
Semantics section in Classical Logic (an entry of Stanford Encyclopedia of Philosophy)
In the mathematical field of graph theory, the Petersen graph is an undirected graph with 10 vertices and 15 edges. It is a small graph that serves as a useful example and counterexample for many problems in graph theory. The Petersen graph is named after Julius Petersen, who in 1898 constructed it to be the smallest bridgeless cubic graph with no three-edge-coloring.
Although the graph is generally credited to Petersen, it had in fact first appeared 12 years earlier, in a paper by A. B. Kempe (1886). Kempe observed that its vertices can represent the ten lines of the Desargues configuration, and its edges represent pairs of lines that do not meet at one of the ten points of the configuration.
Donald Knuth states that the Petersen graph is "a remarkable configuration that serves as a counterexample to many optimistic predictions about what might be true for graphs in general."
The Petersen graph also makes an appearance in tropical geometry. The cone over the Petersen graph is naturally identified with the moduli space of five-pointed rational tropical curves.
== Constructions ==
The Petersen graph is the complement of the line graph of K5. It is also the Kneser graph KG5,2; this means that it has one vertex for each 2-element subset of a 5-element set, and two vertices are connected by an edge if and only if the corresponding 2-element subsets are disjoint from each other. As a Kneser graph of the form KG2n−1,n−1 it is an example of an odd graph.
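This Kneser-graph description is easy to verify computationally; a minimal pure-Python sketch:

```python
from itertools import combinations

# Build the Kneser graph KG(5,2): one vertex for each 2-element subset
# of a 5-element set, an edge between each pair of disjoint subsets.
vertices = list(combinations(range(5), 2))
edges = [(u, v) for u, v in combinations(vertices, 2)
         if not set(u) & set(v)]

# The result has the defining parameters of the Petersen graph:
# 10 vertices, 15 edges, every vertex of degree 3.
degree = {v: sum(v in e for e in edges) for v in vertices}
print(len(vertices), len(edges), sorted(set(degree.values())))  # 10 15 [3]
```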
Geometrically, the Petersen graph is the graph formed by the vertices and edges of the hemi-dodecahedron, that is, a dodecahedron with opposite points, lines and faces identified together.
== Embeddings ==
The Petersen graph is nonplanar. Any nonplanar graph has as minors either the complete graph K5, or the complete bipartite graph K3,3, but the Petersen graph has both as minors. The K5 minor can be formed by contracting the edges of a perfect matching, for instance the five short edges in the first picture. The K3,3 minor can be formed by deleting one vertex (for instance the central vertex of the 3-symmetric drawing) and contracting an edge incident to each neighbor of the deleted vertex.
The most common and symmetric plane drawing of the Petersen graph, as a pentagram within a pentagon, has five crossings. However, this is not the best drawing for minimizing crossings; there exists another drawing (shown in the figure) with only two crossings. Because it is nonplanar, it has at least one crossing in any drawing, and if a crossing edge is removed from any drawing it remains nonplanar and has another crossing; therefore, its crossing number is exactly 2. Each edge in this drawing is crossed at most once, so the Petersen graph is 1-planar. On a torus the Petersen graph can be drawn without edge crossings; it therefore has orientable genus 1.
The Petersen graph can also be drawn (with crossings) in the plane in such a way that all the edges have equal length. That is, it is a unit distance graph.
The simplest non-orientable surface on which the Petersen graph can be embedded without crossings is the projective plane. This is the embedding given by the hemi-dodecahedron construction of the Petersen graph (shown in the figure). The projective plane embedding can also be formed from the standard pentagonal drawing of the Petersen graph by placing a cross-cap within the five-point star at the center of the drawing, and routing the star edges through this cross-cap; the resulting drawing has six pentagonal faces. This construction forms a regular map and shows that the Petersen graph has non-orientable genus 1.
== Symmetries ==
The Petersen graph is strongly regular (with signature srg(10,3,0,1)). It is also symmetric, meaning that it is edge transitive and vertex transitive. More strongly, it is 3-arc-transitive: every directed three-edge path in the Petersen graph can be transformed into every other such path by a symmetry of the graph.
It is one of only 13 cubic distance-regular graphs.
The automorphism group of the Petersen graph is the symmetric group S5; the action of S5 on the Petersen graph follows from its construction as a Kneser graph. The Petersen graph is a core: every homomorphism of the Petersen graph to itself is an automorphism. As shown in the figures, the drawings of the Petersen graph may exhibit five-way or three-way symmetry, but it is not possible to draw the Petersen graph in the plane in such a way that the drawing exhibits the full symmetry group of the graph.
Despite its high degree of symmetry, the Petersen graph is not a Cayley graph. It is the smallest vertex-transitive graph that is not a Cayley graph.
== Hamiltonian paths and cycles ==
The Petersen graph has a Hamiltonian path but no Hamiltonian cycle. It is the smallest bridgeless cubic graph with no Hamiltonian cycle. It is hypohamiltonian, meaning that although it has no Hamiltonian cycle, deleting any vertex makes it Hamiltonian, and is the smallest hypohamiltonian graph.
As a finite connected vertex-transitive graph that does not have a Hamiltonian cycle, the Petersen graph is a counterexample to a variant of the Lovász conjecture, but the canonical formulation of the conjecture asks for a Hamiltonian path and is verified by the Petersen graph.
Only five connected vertex-transitive graphs with no Hamiltonian cycles are known: the complete graph K2, the Petersen graph, the Coxeter graph and two graphs derived from the Petersen and Coxeter graphs by replacing each vertex with a triangle. If G is a 2-connected, r-regular graph with at most 3r + 1 vertices, then G is Hamiltonian or G is the Petersen graph.
To see that the Petersen graph has no Hamiltonian cycle, consider the edges in the cut disconnecting the inner 5-cycle from the outer one. If there is a Hamiltonian cycle C, it must contain an even number of these edges. If it contains only two of them, their end-vertices must be adjacent in the two 5-cycles, which is not possible. Hence, it contains exactly four of them. Assume that the top edge of the cut is not contained in C (all the other cases are the same by symmetry). Of the five edges in the outer cycle, the two top edges must be in C, the two side edges must not be in C, and hence the bottom edge must be in C. The top two edges in the inner cycle must be in C, but this completes a non-spanning cycle, which cannot be part of a Hamiltonian cycle.
Alternatively, we can also describe the ten-vertex 3-regular graphs that do have a Hamiltonian cycle and show that none of them is the Petersen graph, by finding a cycle in each of them that is shorter than any cycle in the Petersen graph. Any ten-vertex Hamiltonian 3-regular graph consists of a ten-vertex cycle C plus five chords. If any chord connects two vertices at distance two or three along C from each other, the graph has a 3-cycle or 4-cycle, and therefore cannot be the Petersen graph. If two chords connect opposite vertices of C to vertices at distance four along C, there is again a 4-cycle. The only remaining case is a Möbius ladder formed by connecting each pair of opposite vertices by a chord, which again has a 4-cycle. Since the Petersen graph has girth five, it cannot be formed in this way and has no Hamiltonian cycle.
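Both claims can also be verified mechanically. The following pure-Python sketch (vertex labeling chosen here for illustration) performs an exhaustive backtracking search for Hamiltonian cycles and paths:

```python
# Exhaustive backtracking check, using the standard labeling:
# outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5.
adj = {v: set() for v in range(10)}
for i in range(5):
    for u, v in ((i, (i + 1) % 5), (i, i + 5), (5 + i, 5 + (i + 2) % 5)):
        adj[u].add(v)
        adj[v].add(u)

def extends(path, closed):
    """Can `path` be extended to a Hamiltonian path (or cycle)?"""
    if len(path) == 10:
        return path[0] in adj[path[-1]] if closed else True
    return any(extends(path + [w], closed)
               for w in adj[path[-1]] if w not in path)

# By vertex-transitivity it suffices to start both searches at vertex 0.
print(extends([0], closed=True), extends([0], closed=False))  # False True
```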
== Coloring ==
The Petersen graph has chromatic number 3, meaning that its vertices can be colored with three colors — but not with two — such that no edge connects vertices of the same color. It has a list coloring with 3 colors, by Brooks' theorem for list colorings.
The Petersen graph has chromatic index 4; coloring the edges requires four colors. As a connected bridgeless cubic graph with chromatic index four, the Petersen graph is a snark. It is the smallest possible snark, and was the only known snark from 1898 until 1946. The snark theorem, a result conjectured by W. T. Tutte and announced in 2001 by Robertson, Sanders, Seymour, and Thomas, states that every snark has the Petersen graph as a minor.
Additionally, the graph has fractional chromatic index 3, proving that the difference between the chromatic index and fractional chromatic index can be as large as 1. The long-standing Goldberg-Seymour Conjecture proposes that this is the largest gap possible.
The Thue number (a variant of the chromatic index) of the Petersen graph is 5.
The Petersen graph requires at least three colors in any (possibly improper) coloring that breaks all of its symmetries; that is, its distinguishing number is three. Except for the complete graphs, it is the only Kneser graph whose distinguishing number is not two.
== Other properties ==
The Petersen graph:
is 3-connected and hence 3-edge-connected and bridgeless. See the glossary.
has independence number 4 and is 3-partite. See the glossary.
is cubic, has domination number 3, and has a perfect matching and a 2-factor.
has 6 distinct perfect matchings.
is the smallest cubic graph of girth 5. (It is the unique (3,5)-cage. In fact, since it has only 10 vertices, it is the unique (3,5)-Moore graph.)
every cubic bridgeless graph without Petersen minor has a cycle double cover.
is the smallest cubic graph with Colin de Verdière graph invariant μ = 5.
is the smallest graph of cop number 3.
has radius 2 and diameter 2. It is the largest cubic graph with diameter 2.
has 2000 spanning trees, the most of any 10-vertex cubic graph.
has chromatic polynomial t(t − 1)(t − 2)(t^7 − 12t^6 + 67t^5 − 230t^4 + 529t^3 − 814t^2 + 775t − 352).
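One way to sanity-check this polynomial is to evaluate it at t = 3 and compare with a brute-force count of proper 3-colorings over all 3^10 color assignments; a minimal sketch (edge list chosen here for illustration):

```python
from itertools import product

# Petersen graph edges: outer 5-cycle, spokes, inner pentagram.
edges = [(i, (i + 1) % 5) for i in range(5)] + \
        [(i, i + 5) for i in range(5)] + \
        [(5 + i, 5 + (i + 2) % 5) for i in range(5)]

def P(t):
    """The chromatic polynomial quoted above."""
    return t * (t - 1) * (t - 2) * (t**7 - 12*t**6 + 67*t**5 - 230*t**4
                                    + 529*t**3 - 814*t**2 + 775*t - 352)

# Brute-force count of proper 3-colorings over all 3**10 assignments.
proper = sum(all(c[u] != c[v] for u, v in edges)
             for c in product(range(3), repeat=10))
print(P(3), proper)  # 120 120
```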
has characteristic polynomial
(
t
−
1
)
5
(
t
+
2
)
4
(
t
−
3
)
{\displaystyle (t-1)^{5}(t+2)^{4}(t-3)}
, making it an integral graph—a graph whose spectrum consists entirely of integers.
== Petersen coloring conjecture ==
An Eulerian subgraph of a graph G is a subgraph consisting of a subset of the edges of G, touching every vertex of G an even number of times. These subgraphs are the elements of the cycle space of G and are sometimes called cycles. If G and H are any two graphs, a function from the edges of G to the edges of H is defined to be cycle-continuous if the pre-image of every cycle of H is a cycle of G. A conjecture of Jaeger asserts that every bridgeless graph has a cycle-continuous mapping to the Petersen graph. Jaeger showed this conjecture implies the 5-cycle-double-cover conjecture and the Berge-Fulkerson conjecture.
== Related graphs ==
The generalized Petersen graph G(n, k) is formed by connecting the vertices of a regular n-gon to the corresponding vertices of a star polygon with Schläfli symbol {n/k}. For instance, in this notation, the Petersen graph is G(5, 2): it can be formed by connecting corresponding vertices of a pentagon and five-point star, and the edges in the star connect every second vertex. The generalized Petersen graphs also include the n-prism G(n, 1), the Dürer graph G(6, 2), the Möbius-Kantor graph G(8, 3), the dodecahedron G(10, 2), the Desargues graph G(10, 3) and the Nauru graph G(12, 5).
The Petersen family consists of the seven graphs that can be formed from the Petersen graph by zero or more applications of Δ-Y or Y-Δ transforms. The complete graph K6 is also in the Petersen family. These graphs form the forbidden minors for linklessly embeddable graphs, graphs that can be embedded into three-dimensional space in such a way that no two cycles in the graph are linked.
The Clebsch graph contains many copies of the Petersen graph as induced subgraphs: for each vertex v of the Clebsch graph, the ten non-neighbors of v induce a copy of the Petersen graph.
== Notes ==
== References ==
== Further reading ==
Exoo, Geoffrey; Harary, Frank; Kabell, Jerald (1981), "The crossing numbers of some generalized Petersen graphs", Mathematica Scandinavica, 48: 184–188, doi:10.7146/math.scand.a-11910.
Lovász, László (1993), Combinatorial Problems and Exercises (2nd ed.), North-Holland, ISBN 0-444-81504-X.
Schwenk, A. J. (1989), "Enumeration of Hamiltonian cycles in certain generalized Petersen graphs", Journal of Combinatorial Theory, Series B, 47 (1): 53–59, doi:10.1016/0095-8956(89)90064-6
Zhang, Cun-Quan (1997), Integer Flows and Cycle Covers of Graphs, CRC Press, ISBN 978-0-8247-9790-4.
Zhang, Cun-Quan (2012), Circuit Double Cover of Graphs, Cambridge University Press, ISBN 978-0-5212-8235-2.
== External links ==
Weisstein, Eric W., "Petersen Graph", MathWorld
Petersen Graph in the On-Line Encyclopedia of Integer Sequences | Wikipedia/Petersen_graph |
In graph theory, a tree is an undirected graph in which any two vertices are connected by exactly one path, or equivalently a connected acyclic undirected graph. A forest is an undirected graph in which any two vertices are connected by at most one path, or equivalently an acyclic undirected graph, or equivalently a disjoint union of trees.
A directed tree, oriented tree, polytree, or singly connected network is a directed acyclic graph (DAG) whose underlying undirected graph is a tree. A polyforest (or directed forest or oriented forest) is a directed acyclic graph whose underlying undirected graph is a forest.
The various kinds of data structures referred to as trees in computer science have underlying graphs that are trees in graph theory, although such data structures are generally rooted trees. A rooted tree may be directed, called a directed rooted tree, either making all its edges point away from the root—in which case it is called an arborescence or out-tree—or making all its edges point towards the root—in which case it is called an anti-arborescence or in-tree. A rooted tree itself has been defined by some authors as a directed graph. A rooted forest is a disjoint union of rooted trees. A rooted forest may be directed, called a directed rooted forest, either making all its edges point away from the root in each rooted tree—in which case it is called a branching or out-forest—or making all its edges point towards the root in each rooted tree—in which case it is called an anti-branching or in-forest.
The term tree was coined in 1857 by the British mathematician Arthur Cayley.
== Definitions ==
=== Tree ===
A tree is an undirected graph G that satisfies any of the following equivalent conditions:
G is connected and acyclic (contains no cycles).
G is acyclic, and a simple cycle is formed if any edge is added to G.
G is connected, but would become disconnected if any single edge is removed from G.
G is connected and the complete graph K3 is not a minor of G.
Any two vertices in G can be connected by a unique simple path.
If G has finitely many vertices, say n of them, then the above statements are also equivalent to any of the following conditions:
G is connected and has n − 1 edges.
G is connected, and every subgraph of G includes at least one vertex with zero or one incident edges. (That is, G is connected and 1-degenerate.)
G has no simple cycles and has n − 1 edges.
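For finite graphs, these equivalences give a simple test: a graph is a tree exactly when it is connected and has n − 1 edges. A minimal sketch in Python (the graph encoding is assumed here, not part of the definition):

```python
def is_tree(n, edges):
    """Check whether an undirected graph on vertices 0..n-1 is a tree,
    using the equivalence: a tree is a connected graph with n - 1 edges."""
    if n == 0 or len(edges) != n - 1:
        return False
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]            # depth-first search from vertex 0
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n             # connected?

print(is_tree(4, [(0, 1), (1, 2), (1, 3)]))  # True (a star)
print(is_tree(4, [(0, 1), (1, 2), (2, 0)]))  # False (cycle + isolated vertex)
```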
As elsewhere in graph theory, the order-zero graph (graph with no vertices) is generally not considered to be a tree: while it is vacuously connected as a graph (any two vertices can be connected by a path), it is not 0-connected (or even (−1)-connected) in algebraic topology, unlike non-empty trees, and violates the "one more vertex than edges" relation. It may, however, be considered as a forest consisting of zero trees.
An internal vertex (or inner vertex) is a vertex of degree at least 2. Similarly, an external vertex (or outer vertex, terminal vertex or leaf) is a vertex of degree 1. A branch vertex in a tree is a vertex of degree at least 3.
An irreducible tree (or series-reduced tree) is a tree in which there is no vertex of degree 2 (enumerated at sequence A000014 in the OEIS).
=== Forest ===
A forest is an undirected acyclic graph or equivalently a disjoint union of trees. Trivially so, each connected component of a forest is a tree. As special cases, the order-zero graph (a forest consisting of zero trees), a single tree, and an edgeless graph, are examples of forests.
Since every tree satisfies V − E = 1, the number of trees in a forest can be counted by subtracting the number of edges from the number of vertices: V − E = number of trees in a forest.
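This identity can be checked against a direct count of connected components; a small illustrative sketch:

```python
# Count the trees in a forest two ways: V - E, and a traversal that
# counts connected components directly.
def components(n, edges):
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, count = set(), 0
    for s in range(n):
        if s not in seen:             # new component found
            count += 1
            seen.add(s)
            stack = [s]
            while stack:
                for w in adj[stack.pop()]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
    return count

# A forest with V = 7 vertices and E = 4 edges: V - E = 3 trees.
forest = [(0, 1), (1, 2), (3, 4), (5, 6)]
print(7 - len(forest), components(7, forest))  # 3 3
```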
=== Polytree ===
A polytree (or directed tree or oriented tree or singly connected network) is a directed acyclic graph (DAG) whose underlying undirected graph is a tree. In other words, if we replace its directed edges with undirected edges, we obtain an undirected graph that is both connected and acyclic.
Some authors restrict the phrase "directed tree" to the case where the edges are all directed towards a particular vertex, or all directed away from a particular vertex (see arborescence).
=== Polyforest ===
A polyforest (or directed forest or oriented forest) is a directed acyclic graph whose underlying undirected graph is a forest. In other words, if we replace its directed edges with undirected edges, we obtain an undirected graph that is acyclic.
As with directed trees, some authors restrict the phrase "directed forest" to the case where the edges of each connected component are all directed towards a particular vertex, or all directed away from a particular vertex (see branching).
=== Rooted tree ===
A rooted tree is a tree in which one vertex has been designated the root. The edges of a rooted tree can be assigned a natural orientation, either away from or towards the root, in which case the structure becomes a directed rooted tree. When a directed rooted tree has an orientation away from the root, it is called an arborescence or out-tree; when it has an orientation towards the root, it is called an anti-arborescence or in-tree. The tree-order is the partial ordering on the vertices of a tree with u < v if and only if the unique path from the root to v passes through u. A rooted tree T that is a subgraph of some graph G is a normal tree if the ends of every T-path in G are comparable in this tree-order (Diestel 2005, p. 15). Rooted trees, often with an additional structure such as an ordering of the neighbors at each vertex, are a key data structure in computer science; see tree data structure.
In a context where trees typically have a root, a tree without any designated root is called a free tree.
A labeled tree is a tree in which each vertex is given a unique label. The vertices of a labeled tree on n vertices (for nonnegative integers n) are typically given the labels 1, 2, …, n. A recursive tree is a labeled rooted tree where the vertex labels respect the tree order (i.e., if u < v for two vertices u and v, then the label of u is smaller than the label of v).
In a rooted tree, the parent of a vertex v is the vertex connected to v on the path to the root; every vertex has a unique parent, except the root has no parent. A child of a vertex v is a vertex of which v is the parent. An ascendant of a vertex v is any vertex that is either the parent of v or is (recursively) an ascendant of a parent of v. A descendant of a vertex v is any vertex that is either a child of v or is (recursively) a descendant of a child of v. A sibling to a vertex v is any other vertex on the tree that shares a parent with v. A leaf is a vertex with no children. An internal vertex is a vertex that is not a leaf.
The height of a vertex in a rooted tree is the length of the longest downward path to a leaf from that vertex. The height of the tree is the height of the root. The depth of a vertex is the length of the path to its root (root path). The depth of a tree is the maximum depth of any vertex. Depth is commonly needed in the manipulation of the various self-balancing trees, AVL trees in particular. The root has depth zero, leaves have height zero, and a tree with only a single vertex (hence both a root and leaf) has depth and height zero. Conventionally, an empty tree (a tree with no vertices, if such are allowed) has depth and height −1.
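These height and depth conventions can be stated as two short recursive functions; a sketch over a hypothetical four-vertex rooted tree (the `children` encoding is chosen here for illustration):

```python
# A small rooted tree: root "r" with children "a" and "b"; "a" has child "c".
children = {"r": ["a", "b"], "a": ["c"], "b": [], "c": []}

def height(v):
    """Length of the longest downward path from v to a leaf."""
    return 1 + max(map(height, children[v])) if children[v] else 0

def depth(v, root="r"):
    """Length of the path from the root down to v."""
    if v == root:
        return 0
    parent = next(p for p, cs in children.items() if v in cs)
    return 1 + depth(parent)

# The root has depth 0, leaves have height 0.
print(height("r"), depth("c"), height("b"))  # 2 2 0
```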
A k-ary tree (for nonnegative integers k) is a rooted tree in which each vertex has at most k children. 2-ary trees are often called binary trees, while 3-ary trees are sometimes called ternary trees.
=== Ordered tree ===
An ordered tree (alternatively, plane tree or positional tree) is a rooted tree in which an ordering is specified for the children of each vertex. This is called a "plane tree" because an ordering of the children is equivalent to an embedding of the tree in the plane, with the root at the top and the children of each vertex lower than that vertex. Given an embedding of a rooted tree in the plane, if one fixes a direction of children, say left to right, then an embedding gives an ordering of the children. Conversely, given an ordered tree, and conventionally drawing the root at the top, then the child vertices in an ordered tree can be drawn left-to-right, yielding an essentially unique planar embedding.
== Properties ==
Every tree is a bipartite graph. A graph is bipartite if and only if it contains no cycles of odd length. Since a tree contains no cycles at all, it is bipartite.
Every tree with only countably many vertices is a planar graph.
Every connected graph G admits a spanning tree, which is a tree that contains every vertex of G and whose edges are edges of G. More specific types of spanning trees, existing in every connected finite graph, include depth-first search trees and breadth-first search trees. Generalizing the existence of depth-first-search trees, every connected graph with only countably many vertices has a Trémaux tree. However, some uncountable-order graphs do not have such a tree.
Every finite tree with n vertices, with n > 1, has at least two terminal vertices (leaves). This minimal number of leaves is characteristic of path graphs; the maximal number, n − 1, is attained only by star graphs. The number of leaves is at least the maximum vertex degree.
For any three vertices in a tree, the three paths between them have exactly one vertex in common. More generally, a vertex in a graph that belongs to three shortest paths among three vertices is called a median of these vertices. Because every three vertices in a tree have a unique median, every tree is a median graph.
Every tree has a center consisting of one vertex or two adjacent vertices. The center is the middle vertex or middle two vertices in every longest path. Similarly, every n-vertex tree has a centroid consisting of one vertex or two adjacent vertices. In the first case removal of the vertex splits the tree into subtrees of fewer than n/2 vertices. In the second case, removal of the edge between the two centroidal vertices splits the tree into two subtrees of exactly n/2 vertices.
The maximal cliques of a tree are precisely its edges, implying that the class of trees has few cliques.
== Enumeration ==
=== Labeled trees ===
Cayley's formula states that there are n^(n−2) trees on n labeled vertices. A classic proof uses Prüfer sequences, which naturally show a stronger result: the number of trees with vertices 1, 2, …, n of degrees d1, d2, …, dn respectively, is the multinomial coefficient (n − 2)! / ((d1 − 1)! (d2 − 1)! ⋯ (dn − 1)!).
A more general problem is to count spanning trees in an undirected graph, which is addressed by the matrix tree theorem. (Cayley's formula is the special case of spanning trees in a complete graph.) The similar problem of counting all the subtrees regardless of size is #P-complete in the general case (Jerrum (1994)).
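The Prüfer correspondence can be illustrated by decoding: each of the n^(n−2) sequences over {1, …, n} of length n − 2 yields a distinct labeled tree. A sketch of the standard decoding algorithm:

```python
from itertools import product

def prufer_to_tree(seq):
    """Decode a Prüfer sequence over labels 1..n into the edge set of a
    labeled tree on n = len(seq) + 2 vertices (standard algorithm)."""
    n = len(seq) + 2
    degree = {v: 1 for v in range(1, n + 1)}
    for v in seq:                     # degree = 1 + multiplicity in seq
        degree[v] += 1
    edges = []
    for v in seq:
        leaf = min(u for u in degree if degree[u] == 1)
        edges.append((leaf, v))
        degree[v] -= 1
        del degree[leaf]
    edges.append(tuple(degree))       # join the two remaining vertices
    return edges

# All 4**2 = 16 sequences for n = 4 give 16 distinct labeled trees,
# matching Cayley's formula n**(n-2).
n = 4
trees = {frozenset(map(frozenset, prufer_to_tree(s)))
         for s in product(range(1, n + 1), repeat=n - 2)}
print(len(trees), n ** (n - 2))  # 16 16
```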
=== Unlabeled trees ===
Counting the number of unlabeled free trees is a harder problem. No closed formula for the number t(n) of trees with n vertices up to graph isomorphism is known. The first few values of t(n) are
1, 1, 1, 1, 2, 3, 6, 11, 23, 47, 106, 235, 551, 1301, 3159, … (sequence A000055 in the OEIS).
Otter (1948) proved the asymptotic estimate
t(n) ~ C α^n n^(−5/2) as n → ∞,
with C ≈ 0.534949606... and α ≈ 2.95576528565... (sequence A051491 in the OEIS). Here, the ~ symbol means that
lim(n→∞) t(n) / (C α^n n^(−5/2)) = 1.
This is a consequence of his asymptotic estimate for the number r(n) of unlabeled rooted trees with n vertices:
r(n) ~ D α^n n^(−3/2) as n → ∞,
with D ≈ 0.43992401257... and the same α as above (cf. Knuth (1997), chap. 2.3.4.4 and Flajolet & Sedgewick (2009), chap. VII.5, p. 475).
The first few values of r(n) are
1, 1, 2, 4, 9, 20, 48, 115, 286, 719, 1842, 4766, 12486, 32973, … (sequence A000081 in the OEIS).
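These values of r(n) can be reproduced with the classical recurrence for A000081 (an Euler-transform-style recurrence; the code below is an illustrative sketch, not taken from the sources cited here):

```python
def rooted_trees(nmax):
    """Number of unlabeled rooted trees r(1..nmax) via the classical
    recurrence: r(n+1) = (1/n) * sum_{k=1..n} (sum_{d|k} d*r(d)) * r(n-k+1)."""
    r = [0, 1] + [0] * (nmax - 1)     # r[0] unused; r[1] = 1
    for n in range(1, nmax):
        s = 0
        for k in range(1, n + 1):
            dsum = sum(d * r[d] for d in range(1, k + 1) if k % d == 0)
            s += dsum * r[n - k + 1]
        r[n + 1] = s // n             # the sum is always divisible by n
    return r[1:]

print(rooted_trees(10))  # [1, 1, 2, 4, 9, 20, 48, 115, 286, 719]
```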
== Types of trees ==
A path graph (or linear graph) consists of n vertices arranged in a line, so that vertices i and i + 1 are connected by an edge for i = 1, …, n – 1.
A starlike tree consists of a central vertex called root and several path graphs attached to it. More formally, a tree is starlike if it has exactly one vertex of degree greater than 2.
A star tree is a tree which consists of a single internal vertex (and n – 1 leaves). In other words, a star tree of order n is a tree of order n with as many leaves as possible.
A caterpillar tree is a tree in which all vertices are within distance 1 of a central path subgraph.
A lobster tree is a tree in which all vertices are within distance 2 of a central path subgraph.
A regular tree of degree d is the infinite tree with d edges at each vertex. These arise as the Cayley graphs of free groups, and in the theory of Tits buildings. In statistical mechanics they are known as Bethe lattices.
== See also ==
Decision tree
Hypertree
Multitree
Pseudoforest
Tree structure (general)
Tree (data structure)
Unrooted binary tree
== Notes ==
== References ==
Bender, Edward A.; Williamson, S. Gill (2010), Lists, Decisions and Graphs. With an Introduction to Probability
Dasgupta, Sanjoy (1999), "Learning polytrees", Proc. 15th Conference on Uncertainty in Artificial Intelligence (UAI 1999), Stockholm, Sweden, July–August 1999 (PDF), pp. 134–141.
Deo, Narsingh (1974), Graph Theory with Applications to Engineering and Computer Science (PDF), Englewood, New Jersey: Prentice-Hall, ISBN 0-13-363473-6, archived (PDF) from the original on 2019-05-17
Harary, Frank; Prins, Geert (1959), "The number of homeomorphically irreducible trees, and other species", Acta Mathematica, 101 (1–2): 141–162, doi:10.1007/BF02559543, ISSN 0001-5962
Harary, Frank; Sumner, David (1980), "The dichromatic number of an oriented tree", Journal of Combinatorics, Information & System Sciences, 5 (3): 184–187, MR 0603363.
Kim, Jin H.; Pearl, Judea (1983), "A computational model for causal and diagnostic reasoning in inference engines", Proc. 8th International Joint Conference on Artificial Intelligence (IJCAI 1983), Karlsruhe, Germany, August 1983 (PDF), pp. 190–193.
Li, Gang (1996), "Generation of Rooted Trees and Free Trees", M.S. Thesis, Dept. of Computer Science, University of Victoria, BC, Canada (PDF), p. 9.
Simion, Rodica (1991), "Trees with 1-factors and oriented trees", Discrete Mathematics, 88 (1): 93–104, doi:10.1016/0012-365X(91)90061-6, MR 1099270.
== Further reading ==
Diestel, Reinhard (2005), Graph Theory (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-26183-4.
Flajolet, Philippe; Sedgewick, Robert (2009), Analytic Combinatorics, Cambridge University Press, ISBN 978-0-521-89806-5
"Tree", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Knuth, Donald E. (November 14, 1997), The Art of Computer Programming Volume 1: Fundamental Algorithms (3rd ed.), Addison-Wesley Professional
Jerrum, Mark (1994), "Counting trees in a graph is #P-complete", Information Processing Letters, 51 (3): 111–116, doi:10.1016/0020-0190(94)00085-9, ISSN 0020-0190.
Otter, Richard (1948), "The Number of Trees", Annals of Mathematics, Second Series, 49 (3): 583–599, doi:10.2307/1969046, JSTOR 1969046. | Wikipedia/Tree_(graph_theory) |
In the mathematical field of graph theory, a complete bipartite graph or biclique is a special kind of bipartite graph where every vertex of the first set is connected to every vertex of the second set.
Graph theory itself is typically dated as beginning with Leonhard Euler's 1736 work on the Seven Bridges of Königsberg. However, drawings of complete bipartite graphs were already printed as early as 1669, in connection with an edition of the works of Ramon Llull edited by Athanasius Kircher. Llull himself had made similar drawings of complete graphs three centuries earlier.
== Definition ==
A complete bipartite graph is a graph whose vertices can be partitioned into two subsets V1 and V2 such that no edge has both endpoints in the same subset, and every possible edge that could connect vertices in different subsets is part of the graph. That is, it is a bipartite graph (V1, V2, E) such that for every two vertices v1 ∈ V1 and v2 ∈ V2, v1v2 is an edge in E. A complete bipartite graph with partitions of size |V1| = m and |V2| = n, is denoted Km,n; every two graphs with the same notation are isomorphic.
== Examples ==
For any k, K1,k is called a star. All complete bipartite graphs which are trees are stars.
The graph K1,3 is called a claw, and is used to define the claw-free graphs.
The graph K3,3 is called the utility graph. This usage comes from a standard mathematical puzzle in which three utilities must each be connected to three buildings; it is impossible to solve without crossings due to the nonplanarity of K3,3.
The maximal bicliques found as subgraphs of the digraph of a relation are called concepts. When a lattice is formed by taking meets and joins of these subgraphs, the relation has an induced concept lattice. This type of analysis of relations is called formal concept analysis.
== Properties ==
Given a bipartite graph, testing whether it contains a complete bipartite subgraph Ki,i for a parameter i is an NP-complete problem.
A planar graph cannot contain K3,3 as a minor; an outerplanar graph cannot contain K3,2 as a minor (these are necessary but not sufficient conditions for planarity and outerplanarity). Conversely, every nonplanar graph contains either K3,3 or the complete graph K5 as a minor; this is Wagner's theorem.
Every complete bipartite graph Kn,n is a Moore graph and an (n,4)-cage.
The complete bipartite graphs Kn,n and Kn,n+1 have the maximum possible number of edges among all triangle-free graphs with the same number of vertices; this is Mantel's theorem. Mantel's result was generalized to k-partite graphs and graphs that avoid larger cliques as subgraphs in Turán's theorem, and these two complete bipartite graphs are examples of Turán graphs, the extremal graphs for this more general problem.
The complete bipartite graph Km,n has a vertex covering number of min{m, n} and an edge covering number of max{m, n}.
The complete bipartite graph Km,n has a maximum independent set of size max{m, n}.
The adjacency matrix of a complete bipartite graph Km,n has eigenvalues √nm, −√nm and 0; with multiplicity 1, 1 and n + m − 2 respectively.
The Laplacian matrix of a complete bipartite graph Km,n has eigenvalues n + m, n, m, and 0; with multiplicity 1, m − 1, n − 1 and 1 respectively.
A complete bipartite graph Km,n has m^(n−1) n^(m−1) spanning trees.
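The spanning-tree formula can be checked by brute force for small cases, e.g. K2,3, where it predicts 2^2 · 3^1 = 12 spanning trees; an illustrative sketch:

```python
from itertools import combinations

def spanning_tree_count(vertices, edges):
    """Brute force: count edge subsets of size |V| - 1 that connect all
    vertices (such a subset is necessarily a spanning tree)."""
    count = 0
    for subset in combinations(edges, len(vertices) - 1):
        adj = {v: [] for v in vertices}
        for u, v in subset:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {vertices[0]}, [vertices[0]]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        count += len(seen) == len(vertices)
    return count

# K_{2,3} with parts {0, 1} and {2, 3, 4}: the formula gives 2**2 * 3**1 = 12.
edges = [(u, v) for u in (0, 1) for v in (2, 3, 4)]
print(spanning_tree_count([0, 1, 2, 3, 4], edges))  # 12
```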
A complete bipartite graph Km,n has a maximum matching of size min{m,n}.
A complete bipartite graph Kn,n has a proper n-edge-coloring corresponding to a Latin square.
Every complete bipartite graph is a modular graph: every triple of vertices has a median that belongs to shortest paths between each pair of vertices.
== See also ==
Biclique-free graph, a class of sparse graphs defined by avoidance of complete bipartite subgraphs
Crown graph, a graph formed by removing a perfect matching from a complete bipartite graph
Complete multipartite graph, a generalization of complete bipartite graphs to more than two sets of vertices
Biclique attack
== References == | Wikipedia/Complete_bipartite_graph |
In graph theory, a path in a graph is a finite or infinite sequence of edges which joins a sequence of vertices which, by most definitions, are all distinct (and since the vertices are distinct, so are the edges). A directed path (sometimes called dipath) in a directed graph is a finite or infinite sequence of edges which joins a sequence of distinct vertices, but with the added restriction that the edges be all directed in the same direction.
Paths are fundamental concepts of graph theory, described in the introductory sections of most graph theory texts. See e.g. Bondy & Murty (1976), Gibbons (1985), or Diestel (2005). Korte et al. (1990) cover more advanced algorithmic topics concerning paths in graphs.
== Definitions ==
=== Walk, trail, and path ===
A walk is a finite or infinite sequence of edges which joins a sequence of vertices.
Let G = (V, E, Φ) be a graph. A finite walk is a sequence of edges (e1, e2, ..., en − 1) for which there is a sequence of vertices (v1, v2, ..., vn) such that Φ(ei) = {vi, vi + 1} for i = 1, 2, ..., n − 1. (v1, v2, ..., vn) is the vertex sequence of the walk. The walk is closed if v1 = vn, and it is open otherwise. An infinite walk is a sequence of edges of the same type described here, but with no first or last vertex, and a semi-infinite walk (or ray) has a first vertex but no last vertex.
A trail is a walk in which all edges are distinct.
A path is a trail in which all vertices (and therefore also all edges) are distinct.
If w = (e1, e2, ..., en − 1) is a finite walk with vertex sequence (v1, v2, ..., vn) then w is said to be a walk from v1 to vn. Similarly for a trail or a path. If there is a finite walk between two distinct vertices then there is also a finite trail and a finite path between them.
Some authors do not require that all vertices of a path be distinct and instead use the term simple path to refer to such a path where all vertices are distinct.
A weighted graph associates a value (weight) with every edge in the graph. The weight of a walk (or trail or path) in a weighted graph is the sum of the weights of the traversed edges. Sometimes the words cost or length are used instead of weight.
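The walk/trail/path hierarchy above can be illustrated with a small classifier. This is a sketch (the function and graph encoding are our own choices): an undirected graph is given as a set of frozenset edges, and a vertex sequence is labeled with the most specific term that applies.

```python
def classify(vertex_seq, edges):
    """Classify a vertex sequence in an undirected graph (a set of frozenset
    edges) as a path, trail, walk, or not a walk at all."""
    steps = [frozenset(p) for p in zip(vertex_seq, vertex_seq[1:])]
    if not all(e in edges for e in steps):
        return "not a walk"
    if len(set(steps)) < len(steps):
        return "walk"    # repeats an edge, so not a trail
    if len(set(vertex_seq)) < len(vertex_seq):
        return "trail"   # repeats a vertex but no edge, so not a path
    return "path"        # all vertices (hence all edges) distinct

E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]}
print(classify([1, 2, 3, 4], E))     # path
print(classify([1, 2, 4, 3, 2], E))  # trail (vertex 2 repeats, edges distinct)
print(classify([1, 2, 3, 2], E))     # walk (edge {2,3} is traversed twice)
```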
=== Directed walk, directed trail, and directed path ===
A directed walk is a finite or infinite sequence of edges directed in the same direction which joins a sequence of vertices.
Let G = (V, E, Φ) be a directed graph. A finite directed walk is a sequence of edges (e1, e2, ..., en − 1) for which there is a sequence of vertices (v1, v2, ..., vn) such that Φ(ei) = (vi, vi + 1) for i = 1, 2, ..., n − 1. (v1, v2, ..., vn) is the vertex sequence of the directed walk. The directed walk is closed if v1 = vn, and it is open otherwise. An infinite directed walk is a sequence of edges of the same type described here, but with no first or last vertex, and a semi-infinite directed walk (or ray) has a first vertex but no last vertex.
A directed trail is a directed walk in which all edges are distinct.
A directed path is a directed trail in which all vertices are distinct.
If w = (e1, e2, ..., en − 1) is a finite directed walk with vertex sequence (v1, v2, ..., vn) then w is said to be a directed walk from v1 to vn. Similarly for a directed trail or a directed path. If there is a finite directed walk between two distinct vertices then there is also a finite directed trail and a finite directed path between them.
Some authors use the term simple directed path to refer to a directed path in which all vertices are distinct.
A weighted directed graph associates a value (weight) with every edge in the directed graph. The weight of a directed walk (or trail or path) in a weighted directed graph is the sum of the weights of the traversed edges. Sometimes the words cost or length are used instead of weight.
== Examples ==
A graph is connected if there are paths containing each pair of vertices.
A directed graph is strongly connected if there are oppositely oriented directed paths containing each pair of vertices.
A path such that no graph edges connect two nonconsecutive path vertices is called an induced path.
A path that includes every vertex of the graph without repeats is known as a Hamiltonian path.
Two paths are vertex-independent (alternatively, internally disjoint or internally vertex-disjoint) if they do not have any internal vertex or edge in common. Similarly, two paths are edge-independent (or edge-disjoint) if they do not have any edge in common. Two internally disjoint paths are edge-disjoint, but the converse is not necessarily true.
The distance between two vertices in a graph is the length of a shortest path between them, if one exists, and otherwise the distance is infinity.
The diameter of a connected graph is the largest distance (defined above) between pairs of vertices of the graph.
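For unweighted graphs, distances (and hence the diameter) can be computed with breadth-first search. The following sketch, using the 5-cycle C5 as an example, is our own illustration:

```python
from collections import deque

def bfs_distances(adj, source):
    """Single-source shortest-path (hop-count) distances via breadth-first search."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Cycle C5: every vertex is within 2 steps of every other, so the diameter is 2.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
diameter = max(max(bfs_distances(adj, s).values()) for s in adj)
print(diameter)  # 2
```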
== Finding paths ==
Several algorithms exist to find shortest and longest paths in graphs, with the important distinction that the former problem is computationally much easier than the latter.
Dijkstra's algorithm produces a list of shortest paths from a source vertex to every other vertex in directed and undirected graphs with non-negative edge weights (or no edge weights), whilst the Bellman–Ford algorithm can be applied to directed graphs with negative edge weights. The Floyd–Warshall algorithm can be used to find the shortest paths between all pairs of vertices in weighted directed graphs.
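A minimal sketch of Dijkstra's algorithm with a binary heap follows (the adjacency-list encoding and example weights are ours; stale heap entries are skipped rather than decreased in place):

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances from `source` in a graph with non-negative
    edge weights, given as adj[u] = list of (v, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry for an already-settled vertex
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {
    "a": [("b", 7), ("c", 9), ("f", 14)],
    "b": [("a", 7), ("c", 10), ("d", 15)],
    "c": [("a", 9), ("b", 10), ("d", 11), ("f", 2)],
    "d": [("b", 15), ("c", 11), ("e", 6)],
    "e": [("d", 6), ("f", 9)],
    "f": [("a", 14), ("c", 2), ("e", 9)],
}
print(dijkstra(adj, "a")["e"])  # 20, along a -> c -> f -> e (9 + 2 + 9)
```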
== The path partition problem ==
The k-path partition problem is the problem of partitioning a given graph into a smallest collection of vertex-disjoint paths of length at most k.
== See also ==
Glossary of graph theory
Path graph
Polygonal chain
Shortest path problem
Longest path problem
Dijkstra's algorithm
Bellman–Ford algorithm
Floyd–Warshall algorithm
Self-avoiding walk
Shortest-path graph
== Notes ==
== References == | Wikipedia/Path_(graph_theory) |
In probability theory, a probability density function (PDF), density function, or density of an absolutely continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would be equal to that sample. In other words, probability density is the probability per unit length: while the absolute likelihood of a continuous random variable taking on any particular value is 0 (since there is an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would be close to one sample than to the other.
More precisely, the PDF is used to specify the probability of the random variable falling within a particular range of values, as opposed to taking on any one value. This probability is given by the integral of this variable's PDF over that range—that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range. The probability density function is nonnegative everywhere, and the area under the entire curve is equal to 1.
The terms probability distribution function and probability function have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, "probability distribution function" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. "Density function" itself is also used for the probability mass function, leading to further confusion. In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables.
== Example ==
Suppose bacteria of a certain species typically live 20 to 30 hours. The probability that a bacterium lives exactly 5 hours is equal to zero. A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.00... hours. However, the probability that the bacterium dies between 5 hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then, the probability that the bacterium dies between 5 hours and 5.001 hours should be about 0.002, since this time interval is one-tenth as long as the previous. The probability that the bacterium dies between 5 hours and 5.0001 hours should be about 0.0002, and so on.
In this example, the ratio (probability of living during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour (or 2 hour−1). For example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour−1. This quantity 2 hour−1 is called the probability density for dying at around 5 hours. Therefore, the probability that the bacterium dies at 5 hours can be written as (2 hour−1) dt. This is the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where dt is the duration of this window. For example, the probability that it lives longer than 5 hours, but shorter than (5 hours + 1 nanosecond), is (2 hour−1)×(1 nanosecond) ≈ 6×10−13 (using the unit conversion 3.6×1012 nanoseconds = 1 hour).
There is a probability density function f with f(5 hours) = 2 hour−1. The integral of f over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window.
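The arithmetic of this example can be spelled out directly. The sketch below simply multiplies the density 2 hour−1 by windows of various lengths, reproducing the probabilities quoted above:

```python
# Probability density 2 per hour around t = 5 hours, as in the example above.
density = 2.0         # hour^-1
ns_per_hour = 3.6e12  # nanoseconds in one hour

# Probability of dying within windows of various lengths starting at 5 hours:
print(density * 0.01)               # 0.01-hour window  -> 0.02
print(density * 0.001)              # 0.001-hour window -> 0.002
print(density * (1 / ns_per_hour))  # 1-nanosecond window -> about 6e-13
```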
== Absolutely continuous univariate distributions ==
A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable
{\displaystyle X} has density {\displaystyle f_{X}}, where {\displaystyle f_{X}} is a non-negative Lebesgue-integrable function, if:
{\displaystyle \Pr[a\leq X\leq b]=\int _{a}^{b}f_{X}(x)\,dx.}
Hence, if {\displaystyle F_{X}} is the cumulative distribution function of {\displaystyle X}, then:
{\displaystyle F_{X}(x)=\int _{-\infty }^{x}f_{X}(u)\,du,}
and (if {\displaystyle f_{X}} is continuous at {\displaystyle x})
{\displaystyle f_{X}(x)={\frac {d}{dx}}F_{X}(x).}
Intuitively, one can think of {\displaystyle f_{X}(x)\,dx} as being the probability of {\displaystyle X} falling within the infinitesimal interval {\displaystyle [x,x+dx]}.
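The relation between the CDF and the density can be checked numerically. In this sketch (our own illustration, for the standard normal, whose CDF is expressible via the error function), a central difference of the CDF recovers the density:

```python
import math

def normal_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def normal_pdf(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# A central difference of F_X approximates its derivative, which is f_X.
x, h = 0.7, 1e-5
derivative = (normal_cdf(x + h) - normal_cdf(x - h)) / (2 * h)
print(abs(derivative - normal_pdf(x)) < 1e-8)  # True
```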
== Formal definition ==
(This definition may be extended to any probability distribution using the measure-theoretic definition of probability.)
A random variable {\displaystyle X} with values in a measurable space {\displaystyle ({\mathcal {X}},{\mathcal {A}})} (usually {\displaystyle \mathbb {R} ^{n}} with the Borel sets as measurable subsets) has as probability distribution the pushforward measure X∗P on {\displaystyle ({\mathcal {X}},{\mathcal {A}})}: the density of {\displaystyle X} with respect to a reference measure {\displaystyle \mu } on {\displaystyle ({\mathcal {X}},{\mathcal {A}})} is the Radon–Nikodym derivative:
{\displaystyle f={\frac {dX_{*}P}{d\mu }}.}
That is, f is any measurable function with the property that:
{\displaystyle \Pr[X\in A]=\int _{X^{-1}A}\,dP=\int _{A}f\,d\mu }
for any measurable set {\displaystyle A\in {\mathcal {A}}.}
=== Discussion ===
In the continuous univariate case above, the reference measure is the Lebesgue measure. The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof).
It is not possible to define a density with reference to an arbitrary measure (e.g. one can not choose the counting measure as a reference for a continuous random variable). Furthermore, when it does exist, the density is almost unique, meaning that any two such densities coincide almost everywhere.
== Further details ==
Unlike a probability, a probability density function can take on values greater than one; for example, the continuous uniform distribution on the interval [0, 1/2] has probability density f(x) = 2 for 0 ≤ x ≤ 1/2 and f(x) = 0 elsewhere.
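The uniform example can be verified directly: the density takes the value 2, yet it still integrates to 1. A midpoint-rule sketch (our own illustration):

```python
# Density of the uniform distribution on [0, 1/2]: f(x) = 2 there, 0 elsewhere.
def f(x):
    return 2.0 if 0 <= x <= 0.5 else 0.0

# The density exceeds 1 pointwise, but its integral (a Riemann sum here) is 1.
n = 100_000
total = sum(f((k + 0.5) / n) / n for k in range(n))
print(f(0.25), round(total, 6))  # 2.0 1.0
```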
The standard normal distribution has probability density
{\displaystyle f(x)={\frac {1}{\sqrt {2\pi }}}\,e^{-x^{2}/2}.}
If a random variable X is given and its distribution admits a probability density function f, then the expected value of X (if the expected value exists) can be calculated as
{\displaystyle \operatorname {E} [X]=\int _{-\infty }^{\infty }x\,f(x)\,dx.}
Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., does not assign positive probability to any individual point.
A distribution has a density function if its cumulative distribution function F(x) is absolutely continuous. In this case: F is almost everywhere differentiable, and its derivative can be used as probability density:
{\displaystyle {\frac {d}{dx}}F(x)=f(x).}
If a probability distribution admits a density, then the probability of every one-point set {a} is zero; the same holds for finite and countable sets.
Two probability densities f and g represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero.
In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:
If dt is an infinitely small number, the probability that X is included within the interval (t, t + dt) is equal to f(t) dt, or:
{\displaystyle \Pr(t<X<t+dt)=f(t)\,dt.}
== Link between discrete and continuous distributions ==
It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function using the Dirac delta function. (This is not possible with a probability density function in the sense defined above, it may be done with a distribution.) For example, consider a binary discrete random variable having the Rademacher distribution—that is, taking −1 or 1 for values, with probability 1⁄2 each. The density of probability associated with this variable is:
{\displaystyle f(t)={\frac {1}{2}}(\delta (t+1)+\delta (t-1)).}
More generally, if a discrete variable can take n different values among real numbers, then the associated probability density function is:
{\displaystyle f(t)=\sum _{i=1}^{n}p_{i}\,\delta (t-x_{i}),}
where {\displaystyle x_{1},\ldots ,x_{n}} are the discrete values accessible to the variable and {\displaystyle p_{1},\ldots ,p_{n}} are the probabilities associated with these values.
This substantially unifies the treatment of discrete and continuous probability distributions. The above expression allows for determining statistical characteristics of such a discrete variable (such as the mean, variance, and kurtosis), starting from the formulas given for a continuous distribution of the probability.
== Families of densities ==
It is common for probability density functions (and probability mass functions) to be parametrized—that is, to be characterized by unspecified parameters. For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by
{\displaystyle \mu } and {\displaystyle \sigma ^{2}} respectively, giving the family of densities
{\displaystyle f(x;\mu ,\sigma ^{2})={\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {1}{2}}\left({\frac {x-\mu }{\sigma }}\right)^{2}}.}
Different values of the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density. From the perspective of a given distribution, the parameters are constants, and terms in a density function that contain only parameters, but not variables, are part of the normalization factor of a distribution (the multiplicative factor that ensures that the area under the density—the probability of something in the domain occurring— equals 1). This normalization factor is outside the kernel of the distribution.
Since the parameters are constants, reparametrizing a density in terms of different parameters to give a characterization of a different random variable in the family, means simply substituting the new parameter values into the formula in place of the old ones.
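Parametrization is just substitution into one functional form, as the following sketch shows (our own illustration of the normal family; each parameter choice selects a different member, with peak height 1/(σ√(2π)) at its own mean):

```python
import math

def normal_density(x, mu, sigma2):
    """Density of the N(mu, sigma2) family, parametrized by mean and variance."""
    sigma = math.sqrt(sigma2)
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Different parameter values pick out different members of the same family;
# each density peaks at its own mean, with height 1 / (sigma * sqrt(2*pi)).
for mu, sigma2 in [(0, 1), (3, 1), (0, 4)]:
    print(round(normal_density(mu, mu, sigma2), 4))  # 0.3989, 0.3989, 0.1995
```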
== Densities associated with multiple variables ==
For continuous random variables X1, ..., Xn, it is also possible to define a probability density function associated to the set as a whole, often called joint probability density function. This density function is defined as a function of the n variables, such that, for any domain D in the n-dimensional space of the values of the variables X1, ..., Xn, the probability that a realisation of the set variables falls inside the domain D is
{\displaystyle \Pr \left(X_{1},\ldots ,X_{n}\in D\right)=\int _{D}f_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})\,dx_{1}\cdots dx_{n}.}
If F(x1, ..., xn) = Pr(X1 ≤ x1, ..., Xn ≤ xn) is the cumulative distribution function of the vector (X1, ..., Xn), then the joint probability density function can be computed as a partial derivative
{\displaystyle f(x)=\left.{\frac {\partial ^{n}F}{\partial x_{1}\cdots \partial x_{n}}}\right|_{x}}
=== Marginal densities ===
For i = 1, 2, ..., n, let fXi(xi) be the probability density function associated with variable Xi alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables X1, ..., Xn by integrating over all values of the other n − 1 variables:
{\displaystyle f_{X_{i}}(x_{i})=\int f(x_{1},\ldots ,x_{n})\,dx_{1}\cdots dx_{i-1}\,dx_{i+1}\cdots dx_{n}.}
=== Independence ===
Continuous random variables X1, ..., Xn admitting a joint density are all independent from each other if
{\displaystyle f_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})=f_{X_{1}}(x_{1})\cdots f_{X_{n}}(x_{n}).}
=== Corollary ===
If the joint probability density function of a vector of n random variables can be factored into a product of n functions of one variable
{\displaystyle f_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})=f_{1}(x_{1})\cdots f_{n}(x_{n}),}
(where each fi is not necessarily a density) then the n variables in the set are all independent from each other, and the marginal probability density function of each of them is given by
{\displaystyle f_{X_{i}}(x_{i})={\frac {f_{i}(x_{i})}{\int f_{i}(x)\,dx}}.}
=== Example ===
This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call
{\displaystyle {\vec {R}}} a 2-dimensional random vector of coordinates (X, Y): the probability to obtain {\displaystyle {\vec {R}}} in the quarter plane of positive x and y is
{\displaystyle \Pr \left(X>0,Y>0\right)=\int _{0}^{\infty }\int _{0}^{\infty }f_{X,Y}(x,y)\,dx\,dy.}
== Function of random variables and change of variables in the probability density function ==
If the probability density function of a random variable (or vector) X is given as fX(x), it is possible (but often not necessary; see below) to calculate the probability density function of some variable Y = g(X). This is also called a "change of variable" and is in practice used to generate a random variable of arbitrary shape fg(X) = fY using a known (for instance, uniform) random number generator.
It is tempting to think that in order to find the expected value E(g(X)), one must first find the probability density fg(X) of the new random variable Y = g(X). However, rather than computing
{\displaystyle \operatorname {E} {\big (}g(X){\big )}=\int _{-\infty }^{\infty }yf_{g(X)}(y)\,dy,}
one may find instead
{\displaystyle \operatorname {E} {\big (}g(X){\big )}=\int _{-\infty }^{\infty }g(x)f_{X}(x)\,dx.}
The values of the two integrals are the same in all cases in which both X and g(X) actually have probability density functions. It is not necessary that g be a one-to-one function. In some cases the latter integral is computed much more easily than the former. See Law of the unconscious statistician.
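Both integrals can be computed side by side for a concrete case. This sketch (our own example: X uniform on (0, 1), g(x) = x², so Y has density 1/(2√y) on (0, 1] and E[g(X)] = 1/3) uses midpoint sums for both:

```python
# X uniform on (0, 1), g(x) = x^2.  Y = g(X) has density 1/(2*sqrt(y)) on (0, 1].
n = 200_000
mids = [(k + 0.5) / n for k in range(n)]

# E[g(X)] computed via the density of Y ...
e_via_y = sum(y * (1 / (2 * y ** 0.5)) / n for y in mids)
# ... and directly via the density of X (law of the unconscious statistician).
e_via_x = sum(x ** 2 / n for x in mids)

print(round(e_via_y, 4), round(e_via_x, 4))  # both approximate 1/3
```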
=== Scalar to scalar ===
Let {\displaystyle g:\mathbb {R} \to \mathbb {R} } be a monotonic function; then the resulting density function is
{\displaystyle f_{Y}(y)=f_{X}{\big (}g^{-1}(y){\big )}\left|{\frac {d}{dy}}{\big (}g^{-1}(y){\big )}\right|.}
Here g−1 denotes the inverse function.
This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is,
{\displaystyle \left|f_{Y}(y)\,dy\right|=\left|f_{X}(x)\,dx\right|,}
or
{\displaystyle f_{Y}(y)=\left|{\frac {dx}{dy}}\right|f_{X}(x)=\left|{\frac {d}{dy}}(x)\right|f_{X}(x)=\left|{\frac {d}{dy}}{\big (}g^{-1}(y){\big )}\right|f_{X}{\big (}g^{-1}(y){\big )}={\left|\left(g^{-1}\right)'(y)\right|}\cdot f_{X}{\big (}g^{-1}(y){\big )}.}
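The monotone change-of-variables formula can be checked by Monte Carlo. In this sketch (our own example: X uniform on (0, 1) and g(x) = x², monotonic there, giving f_Y(y) = 1/(2√y)), the fraction of samples of Y in an interval is compared with the integral of f_Y over that interval:

```python
import random

random.seed(0)

# X uniform on (0,1); g(x) = x**2 is monotonic there, with inverse sqrt(y).
# The formula gives f_Y(y) = f_X(sqrt(y)) * |d/dy sqrt(y)| = 1 / (2*sqrt(y)).
def f_Y(y):
    return 1 / (2 * y ** 0.5)

# The fraction of samples of Y in [a, b] should match the integral of f_Y
# over [a, b], which is sqrt(b) - sqrt(a).
samples = [random.random() ** 2 for _ in range(200_000)]
a, b = 0.25, 0.64
frac = sum(a <= y <= b for y in samples) / len(samples)
print(round(frac, 2), b ** 0.5 - a ** 0.5)  # about 0.3 vs 0.3
```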
For functions that are not monotonic, the probability density function for y is
{\displaystyle \sum _{k=1}^{n(y)}\left|{\frac {d}{dy}}g_{k}^{-1}(y)\right|\cdot f_{X}{\big (}g_{k}^{-1}(y){\big )},}
where n(y) is the number of solutions in x for the equation
{\displaystyle g(x)=y}, and {\displaystyle g_{k}^{-1}(y)} are these solutions.
=== Vector to vector ===
Suppose x is an n-dimensional random variable with joint density f. If y = G(x), where G is a bijective, differentiable function, then y has density pY:
{\displaystyle p_{Y}(\mathbf {y} )=f{\Bigl (}G^{-1}(\mathbf {y} ){\Bigr )}\left|\det \left[\left.{\frac {dG^{-1}(\mathbf {z} )}{d\mathbf {z} }}\right|_{\mathbf {z} =\mathbf {y} }\right]\right|}
with the differential regarded as the Jacobian of the inverse of G(⋅), evaluated at y.
For example, in the 2-dimensional case x = (x1, x2), suppose the transform G is given as y1 = G1(x1, x2), y2 = G2(x1, x2) with inverses x1 = G1−1(y1, y2), x2 = G2−1(y1, y2). The joint distribution for y = (y1, y2) has density
{\displaystyle p_{Y_{1},Y_{2}}(y_{1},y_{2})=f_{X_{1},X_{2}}{\big (}G_{1}^{-1}(y_{1},y_{2}),G_{2}^{-1}(y_{1},y_{2}){\big )}\left\vert {\frac {\partial G_{1}^{-1}}{\partial y_{1}}}{\frac {\partial G_{2}^{-1}}{\partial y_{2}}}-{\frac {\partial G_{1}^{-1}}{\partial y_{2}}}{\frac {\partial G_{2}^{-1}}{\partial y_{1}}}\right\vert .}
=== Vector to scalar ===
Let {\displaystyle V:\mathbb {R} ^{n}\to \mathbb {R} } be a differentiable function and {\displaystyle X} be a random vector taking values in {\displaystyle \mathbb {R} ^{n}}, {\displaystyle f_{X}} be the probability density function of {\displaystyle X} and {\displaystyle \delta (\cdot )} be the Dirac delta function. It is possible to use the formulas above to determine {\displaystyle f_{Y}}, the probability density function of {\displaystyle Y=V(X)}, which will be given by
{\displaystyle f_{Y}(y)=\int _{\mathbb {R} ^{n}}f_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )}\,d\mathbf {x} .}
This result leads to the law of the unconscious statistician:
{\displaystyle {\begin{aligned}\operatorname {E} _{Y}[Y]&=\int _{\mathbb {R} }yf_{Y}(y)\,dy\\&=\int _{\mathbb {R} }y\int _{\mathbb {R} ^{n}}f_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )}\,d\mathbf {x} \,dy\\&=\int _{{\mathbb {R} }^{n}}\int _{\mathbb {R} }yf_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )}\,dy\,d\mathbf {x} \\&=\int _{\mathbb {R} ^{n}}V(\mathbf {x} )f_{X}(\mathbf {x} )\,d\mathbf {x} =\operatorname {E} _{X}[V(X)].\end{aligned}}}
Proof:
Let {\displaystyle Z} be a collapsed random variable with probability density function {\displaystyle p_{Z}(z)=\delta (z)} (i.e., a constant equal to zero). Let the random vector {\displaystyle {\tilde {X}}} and the transform {\displaystyle H} be defined as
{\displaystyle H(Z,X)={\begin{bmatrix}Z+V(X)\\X\end{bmatrix}}={\begin{bmatrix}Y\\{\tilde {X}}\end{bmatrix}}.}
It is clear that {\displaystyle H} is a bijective mapping, and the Jacobian of {\displaystyle H^{-1}} is given by:
{\displaystyle {\frac {dH^{-1}(y,{\tilde {\mathbf {x} }})}{dy\,d{\tilde {\mathbf {x} }}}}={\begin{bmatrix}1&-{\frac {dV({\tilde {\mathbf {x} }})}{d{\tilde {\mathbf {x} }}}}\\\mathbf {0} _{n\times 1}&\mathbf {I} _{n\times n}\end{bmatrix}},}
which is an upper triangular matrix with ones on the main diagonal, therefore its determinant is 1. Applying the change of variable theorem from the previous section we obtain that
{\displaystyle f_{Y,X}(y,x)=f_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )},}
which if marginalized over {\displaystyle x} leads to the desired probability density function.
== Sums of independent random variables ==
The probability density function of the sum of two independent random variables U and V, each of which has a probability density function, is the convolution of their separate density functions:
{\displaystyle f_{U+V}(x)=\int _{-\infty }^{\infty }f_{U}(y)f_{V}(x-y)\,dy=\left(f_{U}*f_{V}\right)(x)}
It is possible to generalize the previous relation to a sum of N independent random variables U1, ..., UN, with densities fU1, ..., fUN:
{\displaystyle f_{U_{1}+\cdots +U_{N}}(x)=\left(f_{U_{1}}*\cdots *f_{U_{N}}\right)(x)}
This can be derived from a two-way change of variables involving Y = U + V and Z = V, similarly to the example below for the quotient of independent random variables.
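The convolution formula can be checked numerically. In this sketch (our own example: two uniform(0, 1) densities, whose sum has the triangular density x on [0, 1] and 2 − x on [1, 2]), the convolution integral is evaluated with a midpoint rule:

```python
# Density of a uniform(0,1) variable; the sum of two such independent
# variables has the triangular density f(x) = x on [0,1], 2 - x on [1,2].
def f_U(x):
    return 1.0 if 0 <= x <= 1 else 0.0

def convolve_at(x, n=20_000):
    """(f_U * f_U)(x) by a midpoint-rule integral over y."""
    return sum(f_U((k + 0.5) / n) * f_U(x - (k + 0.5) / n) / n for k in range(n))

for x in (0.5, 1.0, 1.5):
    print(round(convolve_at(x), 3))  # 0.5, 1.0, 0.5
```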
== Products and quotients of independent random variables ==
Given two independent random variables U and V, each of which has a probability density function, the density of the product Y = UV and quotient Y = U/V can be computed by a change of variables.
=== Example: Quotient distribution ===
To compute the quotient Y = U/V of two independent random variables U and V, define the following transformation:
{\displaystyle {\begin{aligned}Y&=U/V\\[1ex]Z&=V\end{aligned}}}
Then, the joint density p(y,z) can be computed by a change of variables from U,V to Y,Z, and Y can be derived by marginalizing out Z from the joint density.
The inverse transformation is
{\displaystyle {\begin{aligned}U&=YZ\\V&=Z\end{aligned}}}
The absolute value of the Jacobian matrix determinant
{\displaystyle J(U,V\mid Y,Z)} of this transformation is:
{\displaystyle \left|\det {\begin{bmatrix}{\frac {\partial u}{\partial y}}&{\frac {\partial u}{\partial z}}\\{\frac {\partial v}{\partial y}}&{\frac {\partial v}{\partial z}}\end{bmatrix}}\right|=\left|\det {\begin{bmatrix}z&y\\0&1\end{bmatrix}}\right|=|z|.}
Thus:
{\displaystyle p(y,z)=p(u,v)\,J(u,v\mid y,z)=p(u)\,p(v)\,J(u,v\mid y,z)=p_{U}(yz)\,p_{V}(z)\,|z|.}
And the distribution of Y can be computed by marginalizing out Z:
{\displaystyle p(y)=\int _{-\infty }^{\infty }p_{U}(yz)\,p_{V}(z)\,|z|\,dz}
This method crucially requires that the transformation from U,V to Y,Z be bijective. The above transformation meets this because Z can be mapped directly back to V, and for a given V the quotient U/V is monotonic. This is similarly the case for the sum U + V, difference U − V and product UV.
Exactly the same method can be used to compute the distribution of other functions of multiple independent random variables.
=== Example: Quotient of two standard normals ===
Given two standard normal variables U and V, the quotient can be computed as follows. First, the variables have the following density functions:
{\displaystyle {\begin{aligned}p(u)&={\frac {1}{\sqrt {2\pi }}}e^{-{u^{2}}/{2}}\\[1ex]p(v)&={\frac {1}{\sqrt {2\pi }}}e^{-{v^{2}}/{2}}\end{aligned}}}
We transform as described above:
{\displaystyle {\begin{aligned}Y&=U/V\\[1ex]Z&=V\end{aligned}}}
This leads to:
{\displaystyle {\begin{aligned}p(y)&=\int _{-\infty }^{\infty }p_{U}(yz)\,p_{V}(z)\,|z|\,dz\\[5pt]&=\int _{-\infty }^{\infty }{\frac {1}{\sqrt {2\pi }}}e^{-{\frac {1}{2}}y^{2}z^{2}}{\frac {1}{\sqrt {2\pi }}}e^{-{\frac {1}{2}}z^{2}}|z|\,dz\\[5pt]&=\int _{-\infty }^{\infty }{\frac {1}{2\pi }}e^{-{\frac {1}{2}}\left(y^{2}+1\right)z^{2}}|z|\,dz\\[5pt]&=2\int _{0}^{\infty }{\frac {1}{2\pi }}e^{-{\frac {1}{2}}\left(y^{2}+1\right)z^{2}}z\,dz\\[5pt]&=\int _{0}^{\infty }{\frac {1}{\pi }}e^{-\left(y^{2}+1\right)u}\,du&&u={\tfrac {1}{2}}z^{2}\\[5pt]&=\left.-{\frac {1}{\pi \left(y^{2}+1\right)}}e^{-\left(y^{2}+1\right)u}\right|_{u=0}^{\infty }\\[5pt]&={\frac {1}{\pi \left(y^{2}+1\right)}}\end{aligned}}}
This is the density of a standard Cauchy distribution.
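This conclusion can be checked by simulation. The sketch below (our own illustration) draws ratios of independent standard normals and compares the fraction falling in (−1, 1) with the Cauchy value P(−1 < Y < 1) = 1/2:

```python
import random

random.seed(1)

# Ratio of two independent standard normals; the standard Cauchy has
# CDF 1/2 + arctan(y)/pi, so P(-1 < Y < 1) = 1/2.
n = 200_000
ratios = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n)]
frac = sum(-1 < r < 1 for r in ratios) / n
print(round(frac, 2))  # about 0.5
```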
== See also ==
Density estimation – Estimate of an unobservable underlying probability density function
Kernel density estimation – Estimator
Likelihood function – Function related to statistics and probability theory
List of probability distributions
Probability amplitude – Complex number whose squared absolute value is a probability
Probability mass function – Discrete-variable probability distribution
Secondary measure – Concept in mathematics
Merging independent probability density functions
Uses as position probability density:
Atomic orbital – Function describing an electron in an atom
Home range – The area in which an animal lives and moves on a periodic basis
== References ==
== Further reading ==
Billingsley, Patrick (1979). Probability and Measure. New York, Toronto, London: John Wiley and Sons. ISBN 0-471-00710-2.
Casella, George; Berger, Roger L. (2002). Statistical Inference (Second ed.). Thomson Learning. pp. 34–37. ISBN 0-534-24312-6.
Stirzaker, David (2003). Elementary Probability. Cambridge University Press. ISBN 0-521-42028-8. Chapters 7 to 9 are about continuous variables.
== External links ==
Ushakov, N.G. (2001) [1994], "Density of a probability distribution", Encyclopedia of Mathematics, EMS Press
Weisstein, Eric W. "Probability density function". MathWorld. | Wikipedia/Probability_density_function |
In mathematics, the characteristic equation (or auxiliary equation) is an algebraic equation of degree n upon which depends the solution of a given nth-order differential equation or difference equation. The characteristic equation can only be formed when the differential equation is linear and homogeneous, and has constant coefficients. Such a differential equation, with y as the dependent variable, superscript (n) denoting nth-derivative, and an, an − 1, ..., a1, a0 as constants,
{\displaystyle a_{n}y^{(n)}+a_{n-1}y^{(n-1)}+\cdots +a_{1}y'+a_{0}y=0,}
will have a characteristic equation of the form
{\displaystyle a_{n}r^{n}+a_{n-1}r^{n-1}+\cdots +a_{1}r+a_{0}=0}
whose solutions r1, r2, ..., rn are the roots from which the general solution can be formed. Analogously, a linear difference equation of the form
{\displaystyle y_{t+n}=b_{1}y_{t+n-1}+\cdots +b_{n}y_{t}}
has characteristic equation
{\displaystyle r^{n}-b_{1}r^{n-1}-\cdots -b_{n}=0,}
discussed in more detail at Linear recurrence with constant coefficients.
The characteristic roots (roots of the characteristic equation) also provide qualitative information about the behavior of the variable whose evolution is described by the dynamic equation. For a differential equation parameterized on time, the variable's evolution is stable if and only if the real part of each root is negative. For difference equations, there is stability if and only if the modulus of each root is less than 1. For both types of equation, persistent fluctuations occur if there is at least one pair of complex roots.
The method of integrating linear ordinary differential equations with constant coefficients was discovered by Leonhard Euler, who found that the solutions depended on an algebraic 'characteristic' equation. The qualities of the Euler's characteristic equation were later considered in greater detail by French mathematicians Augustin-Louis Cauchy and Gaspard Monge.
== Derivation ==
Starting with a linear homogeneous differential equation with constant coefficients an, an − 1, ..., a1, a0,
{\displaystyle a_{n}y^{(n)}+a_{n-1}y^{(n-1)}+\cdots +a_{1}y^{\prime }+a_{0}y=0,}
it can be seen that if y(x) = e rx, each term would be a constant multiple of e rx. This results from the fact that the derivative of the exponential function e rx is a multiple of itself. Therefore, y′ = re rx, y″ = r2e rx, and y(n) = rne rx are all multiples. This suggests that certain values of r will allow multiples of e rx to sum to zero, thus solving the homogeneous differential equation. In order to solve for r, one can substitute y = e rx and its derivatives into the differential equation to get
{\displaystyle a_{n}r^{n}e^{rx}+a_{n-1}r^{n-1}e^{rx}+\cdots +a_{1}re^{rx}+a_{0}e^{rx}=0}
Since e rx can never equal zero, it can be divided out, giving the characteristic equation
{\displaystyle a_{n}r^{n}+a_{n-1}r^{n-1}+\cdots +a_{1}r+a_{0}=0.}
By solving for the roots, r, in this characteristic equation, one can find the general solution to the differential equation. For example, if the characteristic equation has roots equal to 3, 11, and 40, then the general solution will be
{\displaystyle y(x)=c_{1}e^{3x}+c_{2}e^{11x}+c_{3}e^{40x}}
, where {\displaystyle c_{1}}, {\displaystyle c_{2}}, and {\displaystyle c_{3}} are arbitrary constants which need to be determined by the boundary and/or initial conditions.
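The substitution-and-divide procedure above is easy to check symbolically. The following sketch (using SymPy) builds a hypothetical third-order equation whose characteristic roots are the 3, 11, and 40 of the example, substitutes y = e^{rx}, divides out the exponential, and recovers those roots:

```python
import sympy as sp

x, r = sp.symbols('x r')
y = sp.exp(r * x)

# Hypothetical example: (r-3)(r-11)(r-40) expands to r^3 - 54r^2 + 593r - 1320,
# so the corresponding ODE is y''' - 54y'' + 593y' - 1320y = 0.
ode = sp.diff(y, x, 3) - 54*sp.diff(y, x, 2) + 593*sp.diff(y, x) - 1320*y

# Dividing out e^{rx} leaves the characteristic polynomial in r.
char_poly = sp.simplify(ode / y)
roots = sp.solve(sp.Eq(char_poly, 0), r)
print(sorted(roots))  # [3, 11, 40]
```

The general solution is then c1*exp(3*x) + c2*exp(11*x) + c3*exp(40*x), as stated.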
== Formation of the general solution ==
Solving the characteristic equation for its roots, r1, ..., rn, allows one to find the general solution of the differential equation. The roots may be real or complex, as well as distinct or repeated. If a characteristic equation has parts with distinct real roots, h repeated roots, or k complex roots corresponding to general solutions of yD(x), yR1(x), ..., yRh(x), and yC1(x), ..., yCk(x), respectively, then the general solution to the differential equation is
{\displaystyle y(x)=y_{\mathrm {D} }(x)+y_{\mathrm {R} _{1}}(x)+\cdots +y_{\mathrm {R} _{h}}(x)+y_{\mathrm {C} _{1}}(x)+\cdots +y_{\mathrm {C} _{k}}(x)}
=== Example ===
The linear homogeneous differential equation with constant coefficients
{\displaystyle y^{(5)}+y^{(4)}-4y^{(3)}-16y''-20y'-12y=0}
has the characteristic equation
{\displaystyle r^{5}+r^{4}-4r^{3}-16r^{2}-20r-12=0}
By factoring the characteristic equation into
{\displaystyle (r-3)(r^{2}+2r+2)^{2}=0}
one can see that the solutions for r are the distinct single root r1 = 3 and the double complex roots r2,3,4,5 = −1 ± i. This corresponds to the real-valued general solution
{\displaystyle y(x)=c_{1}e^{3x}+e^{-x}(c_{2}\cos x+c_{3}\sin x)+xe^{-x}(c_{4}\cos x+c_{5}\sin x)}
with constants c1, ..., c5.
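The factorization quoted above can be verified numerically; a minimal sketch with NumPy finds the roots of the quintic characteristic polynomial and confirms the single root 3 and the double pair −1 ± i:

```python
import numpy as np

# Coefficients of r^5 + r^4 - 4r^3 - 16r^2 - 20r - 12
coeffs = [1, 1, -4, -16, -20, -12]
roots = np.roots(coeffs)

# One real root at 3, and each of -1+i and -1-i appearing twice
assert any(abs(z - 3) < 1e-6 for z in roots)
assert sum(abs(z - (-1 + 1j)) < 1e-4 for z in roots) == 2
assert sum(abs(z - (-1 - 1j)) < 1e-4 for z in roots) == 2
print(np.sort_complex(np.round(roots, 6)))
```

A looser tolerance is used for the repeated roots, since double roots are found with reduced numerical accuracy.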
=== Distinct real roots ===
The superposition principle for linear homogeneous differential equations says that if u1, ..., un are n linearly independent solutions to a particular differential equation, then c1u1 + ⋯ + cnun is also a solution for all values c1, ..., cn. Therefore, if the characteristic equation has distinct real roots r1, ..., rn, then a general solution will be of the form
{\displaystyle y_{\mathrm {D} }(x)=c_{1}e^{r_{1}x}+c_{2}e^{r_{2}x}+\cdots +c_{n}e^{r_{n}x}}
=== Repeated real roots ===
If the characteristic equation has a root r1 that is repeated k times, then it is clear that yp(x) = c1e r1x is at least one solution. However, this solution lacks linearly independent solutions from the other k − 1 roots. Since r1 has multiplicity k, the differential equation can be factored into
{\displaystyle \left({\frac {d}{dx}}-r_{1}\right)^{k}y=0.}
The fact that yp(x) = c1e r1x is one solution allows one to presume that the general solution may be of the form y(x) = u(x)e r1x, where u(x) is a function to be determined. Substituting ue r1x gives
{\displaystyle \left({\frac {d}{dx}}-r_{1}\right)\!ue^{r_{1}x}={\frac {d}{dx}}\left(ue^{r_{1}x}\right)-r_{1}ue^{r_{1}x}={\frac {d}{dx}}(u)e^{r_{1}x}+r_{1}ue^{r_{1}x}-r_{1}ue^{r_{1}x}={\frac {d}{dx}}(u)e^{r_{1}x}}
when k = 1. By applying this fact k times, it follows that
{\displaystyle \left({\frac {d}{dx}}-r_{1}\right)^{k}ue^{r_{1}x}={\frac {d^{k}}{dx^{k}}}(u)e^{r_{1}x}=0.}
By dividing out e r1x, it can be seen that
{\displaystyle {\frac {d^{k}}{dx^{k}}}(u)=u^{(k)}=0.}
Therefore, the general case for u(x) is a polynomial of degree k − 1, so that u(x) = c1 + c2x + c3x2 + ⋯ + ckxk −1. Since y(x) = ue r1x, the part of the general solution corresponding to r1 is
{\displaystyle y_{\mathrm {R} }(x)=e^{r_{1}x}\!\left(c_{1}+c_{2}x+\cdots +c_{k}x^{k-1}\right).}
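The repeated-root argument can be checked directly: applying the factored operator (d/dx − r1) k times to e^{r1 x} times a polynomial of degree k − 1 should annihilate it. A short SymPy sketch, with a hypothetical root r1 = 2 of multiplicity k = 3:

```python
import sympy as sp

x = sp.symbols('x')
c1, c2, c3 = sp.symbols('c1 c2 c3')
r1, k = 2, 3  # hypothetical repeated root and its multiplicity

# Candidate solution: e^{r1 x} times a degree-(k-1) polynomial
y = sp.exp(r1*x) * (c1 + c2*x + c3*x**2)

# Apply (d/dx - r1) a total of k times
residual = y
for _ in range(k):
    residual = sp.diff(residual, x) - r1*residual

print(sp.simplify(residual))  # 0
```

Each application strips one derivative order from the polynomial factor, exactly as in the derivation above.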
=== Complex roots ===
If a second-order differential equation has a characteristic equation with complex conjugate roots of the form r1 = a + bi and r2 = a − bi, then the general solution is accordingly y(x) = c1e(a + bi )x + c2e(a − bi )x. By Euler's formula, which states that eiθ = cos θ + i sin θ, this solution can be rewritten as follows:
{\displaystyle {\begin{aligned}y(x)&=c_{1}e^{(a+bi)x}+c_{2}e^{(a-bi)x}\\&=c_{1}e^{ax}(\cos bx+i\sin bx)+c_{2}e^{ax}(\cos bx-i\sin bx)\\&=\left(c_{1}+c_{2}\right)e^{ax}\cos bx+i(c_{1}-c_{2})e^{ax}\sin bx\end{aligned}}}
where c1 and c2 are constants that can be non-real and which depend on the initial conditions. (Indeed, since y(x) is real, c1 − c2 must be imaginary or zero and c1 + c2 must be real, in order for both terms after the last equals sign to be real.)
For example, if c1 = c2 = 1/2, then the particular solution y1(x) = e ax cos bx is formed. Similarly, if c1 = 1/2i and c2 = −1/2i, then the independent solution formed is y2(x) = e ax sin bx. Thus by the superposition principle for linear homogeneous differential equations, a second-order differential equation having complex roots r = a ± bi will result in the following general solution:
{\displaystyle y_{\mathrm {C} }(x)=e^{ax}(C_{1}\cos bx+C_{2}\sin bx)}
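Both real solutions e^{ax} cos bx and e^{ax} sin bx can be verified against the second-order equation whose characteristic roots are a ± bi, namely y″ − 2a y′ + (a² + b²) y = 0. A quick SymPy check with hypothetical values a = 1, b = 2:

```python
import sympy as sp

x = sp.symbols('x')
a, b = 1, 2  # hypothetical complex-conjugate roots a ± bi = 1 ± 2i

# ODE with characteristic polynomial (r - (a+bi))(r - (a-bi)) = r^2 - 2a r + a^2 + b^2
for y in (sp.exp(a*x)*sp.cos(b*x), sp.exp(a*x)*sp.sin(b*x)):
    residual = sp.diff(y, x, 2) - 2*a*sp.diff(y, x) + (a**2 + b**2)*y
    assert sp.simplify(residual) == 0

print("both real solutions verified")
```

Since the two solutions are linearly independent, their combination with constants C1 and C2 is the general solution above.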
This analysis also applies to the parts of the solutions of a higher-order differential equation whose characteristic equation involves non-real complex conjugate roots.
== See also ==
Characteristic polynomial
== References == | Wikipedia/Characteristic_equation_(calculus) |
In quantum mechanics, the Hamiltonian of a system is an operator corresponding to the total energy of that system, including both kinetic energy and potential energy. Its spectrum, the system's energy spectrum or its set of energy eigenvalues, is the set of possible outcomes obtainable from a measurement of the system's total energy. Due to its close relation to the energy spectrum and time-evolution of a system, it is of fundamental importance in most formulations of quantum theory.
The Hamiltonian is named after William Rowan Hamilton, who developed a revolutionary reformulation of Newtonian mechanics, known as Hamiltonian mechanics, which was historically important to the development of quantum physics. Similar to vector notation, it is typically denoted by
{\displaystyle {\hat {H}}}, where the hat indicates that it is an operator. It can also be written as {\displaystyle H} or {\displaystyle {\check {H}}}.
== Introduction ==
The Hamiltonian of a system represents the total energy of the system; that is, the sum of the kinetic and potential energies of all particles associated with the system. The Hamiltonian takes different forms and can be simplified in some cases by taking into account the concrete characteristics of the system under analysis, such as single or several particles in the system, interaction between particles, kind of potential energy, time varying potential or time independent one.
== Schrödinger Hamiltonian ==
=== One particle ===
By analogy with classical mechanics, the Hamiltonian is commonly expressed as the sum of operators corresponding to the kinetic and potential energies of a system in the form
{\displaystyle {\hat {H}}={\hat {T}}+{\hat {V}},}
where
{\displaystyle {\hat {V}}=V=V(\mathbf {r} ,t),}
is the potential energy operator and
{\displaystyle {\hat {T}}={\frac {\mathbf {\hat {p}} \cdot \mathbf {\hat {p}} }{2m}}={\frac {{\hat {p}}^{2}}{2m}}=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2},}
is the kinetic energy operator in which
{\displaystyle m} is the mass of the particle, the dot denotes the dot product of vectors, and
{\displaystyle {\hat {p}}=-i\hbar \nabla ,}
is the momentum operator, where {\displaystyle \nabla } is the del operator. The dot product of {\displaystyle \nabla } with itself is the Laplacian {\displaystyle \nabla ^{2}}. In three dimensions using Cartesian coordinates the Laplace operator is
{\displaystyle \nabla ^{2}={\frac {\partial ^{2}}{{\partial x}^{2}}}+{\frac {\partial ^{2}}{{\partial y}^{2}}}+{\frac {\partial ^{2}}{{\partial z}^{2}}}}
Although this is not the technical definition of the Hamiltonian in classical mechanics, it is the form it most commonly takes. Combining these yields the form used in the Schrödinger equation:
{\displaystyle {\begin{aligned}{\hat {H}}&={\hat {T}}+{\hat {V}}\\[6pt]&={\frac {\mathbf {\hat {p}} \cdot \mathbf {\hat {p}} }{2m}}+V(\mathbf {r} ,t)\\[6pt]&=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V(\mathbf {r} ,t)\end{aligned}}}
which allows one to apply the Hamiltonian to systems described by a wave function
{\displaystyle \Psi (\mathbf {r} ,t)}
. This is the approach commonly taken in introductory treatments of quantum mechanics, using the formalism of Schrödinger's wave mechanics.
One can also make substitutions to certain variables to fit specific cases, such as some involving electromagnetic fields.
==== Expectation value ====
It can be shown that the expectation value of the Hamiltonian which gives the energy expectation value will always be greater than or equal to the minimum potential of the system.
Consider computing the expectation value of kinetic energy:
{\displaystyle {\begin{aligned}T&=-{\frac {\hbar ^{2}}{2m}}\int _{-\infty }^{+\infty }\psi ^{*}{\frac {d^{2}\psi }{dx^{2}}}\,dx\\[1ex]&=-{\frac {\hbar ^{2}}{2m}}\left({\left[\psi '(x)\psi ^{*}(x)\right]}_{-\infty }^{+\infty }-\int _{-\infty }^{+\infty }{\frac {d\psi }{dx}}{\frac {d\psi ^{*}}{dx}}\,dx\right)\\[1ex]&={\frac {\hbar ^{2}}{2m}}\int _{-\infty }^{+\infty }\left|{\frac {d\psi }{dx}}\right|^{2}\,dx\geq 0\end{aligned}}}
Hence the expectation value of kinetic energy is always non-negative. This result can be used to calculate the expectation value of the total energy which is given for a normalized wavefunction as:
{\displaystyle E=T+\langle V(x)\rangle =T+\int _{-\infty }^{+\infty }V(x)|\psi (x)|^{2}\,dx\geq V_{\text{min}}(x)\int _{-\infty }^{+\infty }|\psi (x)|^{2}\,dx\geq V_{\text{min}}(x)}
which completes the proof. Similarly, the condition can be generalized to any higher dimensions using the divergence theorem.
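The bound can be illustrated numerically: for any normalized trial wavefunction, the kinetic expectation computed from |dψ/dx|² is non-negative, and the total energy expectation sits above the minimum of the potential. A sketch in hypothetical natural units (ħ = m = 1), using a Gaussian trial state in a quartic well:

```python
import numpy as np

hbar = m = 1.0  # hypothetical natural units
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# Normalized Gaussian trial wavefunction and a potential with V_min = 0
psi = np.exp(-x**2 / 2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
V = 0.25 * x**4

dpsi = np.gradient(psi, dx)
T = hbar**2 / (2*m) * np.sum(np.abs(dpsi)**2) * dx  # kinetic expectation >= 0
E = T + np.sum(V * np.abs(psi)**2) * dx             # total energy expectation

assert T >= 0 and E >= V.min()
print(T, E)
```

The integrals are approximated by Riemann sums on a grid wide enough that the wavefunction has decayed to negligible values at the boundary.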
=== Many particles ===
The formalism can be extended to {\displaystyle N} particles:
{\displaystyle {\hat {H}}=\sum _{n=1}^{N}{\hat {T}}_{n}+{\hat {V}}}
where
{\displaystyle {\hat {V}}=V(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,\mathbf {r} _{N},t),}
is the potential energy function, now a function of the spatial configuration of the system and time (a particular set of spatial positions at some instant of time defines a configuration) and
{\displaystyle {\hat {T}}_{n}={\frac {\mathbf {\hat {p}} _{n}\cdot \mathbf {\hat {p}} _{n}}{2m_{n}}}=-{\frac {\hbar ^{2}}{2m_{n}}}\nabla _{n}^{2}}
is the kinetic energy operator of particle {\displaystyle n}, {\displaystyle \nabla _{n}} is the gradient for particle {\displaystyle n}, and {\displaystyle \nabla _{n}^{2}} is the Laplacian for particle n:
{\displaystyle \nabla _{n}^{2}={\frac {\partial ^{2}}{\partial x_{n}^{2}}}+{\frac {\partial ^{2}}{\partial y_{n}^{2}}}+{\frac {\partial ^{2}}{\partial z_{n}^{2}}},}
Combining these yields the Schrödinger Hamiltonian for the {\displaystyle N}-particle case:
{\displaystyle {\begin{aligned}{\hat {H}}&=\sum _{n=1}^{N}{\hat {T}}_{n}+{\hat {V}}\\[6pt]&=\sum _{n=1}^{N}{\frac {\mathbf {\hat {p}} _{n}\cdot \mathbf {\hat {p}} _{n}}{2m_{n}}}+V(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,\mathbf {r} _{N},t)\\[6pt]&=-{\frac {\hbar ^{2}}{2}}\sum _{n=1}^{N}{\frac {1}{m_{n}}}\nabla _{n}^{2}+V(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,\mathbf {r} _{N},t)\end{aligned}}}
However, complications can arise in the many-body problem. Since the potential energy depends on the spatial arrangement of the particles, the kinetic energy will also depend on the spatial configuration to conserve energy. The motion due to any one particle will vary due to the motion of all the other particles in the system. For this reason cross terms for kinetic energy may appear in the Hamiltonian; a mix of the gradients for two particles:
{\displaystyle -{\frac {\hbar ^{2}}{2M}}\nabla _{i}\cdot \nabla _{j}}
where {\displaystyle M} denotes the mass of the collection of particles resulting in this extra kinetic energy. Terms of this form are known as mass polarization terms, and appear in the Hamiltonian of many-electron atoms (see below).
For {\displaystyle N} interacting particles, i.e. particles which interact mutually and constitute a many-body situation, the potential energy function {\displaystyle V} is not simply a sum of the separate potentials (and certainly not a product, as this is dimensionally incorrect). The potential energy function can only be written as above: a function of all the spatial positions of each particle.
For non-interacting particles, i.e. particles which do not interact mutually and move independently, the potential of the system is the sum of the separate potential energy for each particle, that is
{\displaystyle V=\sum _{i=1}^{N}V(\mathbf {r} _{i},t)=V(\mathbf {r} _{1},t)+V(\mathbf {r} _{2},t)+\cdots +V(\mathbf {r} _{N},t)}
The general form of the Hamiltonian in this case is:
{\displaystyle {\begin{aligned}{\hat {H}}&=-{\frac {\hbar ^{2}}{2}}\sum _{i=1}^{N}{\frac {1}{m_{i}}}\nabla _{i}^{2}+\sum _{i=1}^{N}V_{i}\\[6pt]&=\sum _{i=1}^{N}\left(-{\frac {\hbar ^{2}}{2m_{i}}}\nabla _{i}^{2}+V_{i}\right)\\[6pt]&=\sum _{i=1}^{N}{\hat {H}}_{i}\end{aligned}}}
where the sum is taken over all particles and their corresponding potentials; the result is that the Hamiltonian of the system is the sum of the separate Hamiltonians for each particle. This is an idealized situation—in practice the particles are almost always influenced by some potential, and there are many-body interactions. One illustrative example of a two-body interaction where this form would not apply is for electrostatic potentials due to charged particles, because they interact with each other by Coulomb interaction (electrostatic force), as shown below.
== Schrödinger equation ==
The Hamiltonian generates the time evolution of quantum states. If {\displaystyle \left|\psi (t)\right\rangle } is the state of the system at time {\displaystyle t}, then
{\displaystyle H\left|\psi (t)\right\rangle =i\hbar {d \over \ dt}\left|\psi (t)\right\rangle .}
This equation is the Schrödinger equation. It takes the same form as the Hamilton–Jacobi equation, which is one of the reasons {\displaystyle H} is also called the Hamiltonian. Given the state at some initial time ({\displaystyle t=0}), we can solve it to obtain the state at any subsequent time. In particular, if {\displaystyle H} is independent of time, then
{\displaystyle \left|\psi (t)\right\rangle =e^{-iHt/\hbar }\left|\psi (0)\right\rangle .}
The exponential operator on the right hand side of the Schrödinger equation is usually defined by the corresponding power series in {\displaystyle H}. One might notice that taking polynomials or power series of unbounded operators that are not defined everywhere may not make mathematical sense. Rigorously, to take functions of unbounded operators, a functional calculus is required. In the case of the exponential function, the continuous, or just the holomorphic functional calculus suffices. We note again, however, that for common calculations the physicists' formulation is quite sufficient.
By the *-homomorphism property of the functional calculus, the operator
{\displaystyle U=e^{-iHt/\hbar }}
is a unitary operator. It is the time evolution operator or propagator of a closed quantum system. If the Hamiltonian is time-independent, {\displaystyle \{U(t)\}} form a one parameter unitary group (more than a semigroup); this gives rise to the physical principle of detailed balance.
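For a finite-dimensional Hilbert space these properties can be checked directly. A minimal sketch, building the propagator from the spectral decomposition of a random Hermitian matrix standing in for the Hamiltonian (all values hypothetical, ħ = 1):

```python
import numpy as np

hbar = 1.0  # hypothetical units
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2  # a random Hermitian "Hamiltonian"

w, V = np.linalg.eigh(H)  # eigenvalues and orthonormal eigenvectors

def U(t):
    # e^{-iHt/hbar} built from the spectral decomposition of H
    return V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T

assert np.allclose(U(0.7).conj().T @ U(0.7), np.eye(4))  # unitarity
assert np.allclose(U(0.3) @ U(0.4), U(0.7))              # one-parameter group law
print("propagator is unitary and forms a group")
```

The group law U(t1)U(t2) = U(t1 + t2) holds because a time-independent Hamiltonian commutes with itself at all times.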
== Dirac formalism ==
However, in the more general formalism of Dirac, the Hamiltonian is typically implemented as an operator on a Hilbert space in the following way:
The eigenkets of {\displaystyle H}, denoted {\displaystyle \left|a\right\rangle }, provide an orthonormal basis for the Hilbert space. The spectrum of allowed energy levels of the system is given by the set of eigenvalues, denoted {\displaystyle \{E_{a}\}}, solving the equation:
{\displaystyle H\left|a\right\rangle =E_{a}\left|a\right\rangle .}
Since {\displaystyle H} is a Hermitian operator, the energy is always a real number.
From a mathematically rigorous point of view, care must be taken with the above assumptions. Operators on infinite-dimensional Hilbert spaces need not have eigenvalues (the set of eigenvalues does not necessarily coincide with the spectrum of an operator). However, all routine quantum mechanical calculations can be done using the physical formulation.
== Expressions for the Hamiltonian ==
Following are expressions for the Hamiltonian in a number of situations. Typical ways to classify the expressions are the number of particles, number of dimensions, and the nature of the potential energy function—importantly space and time dependence. Masses are denoted by {\displaystyle m}, and charges by {\displaystyle q}.
=== Free particle ===
The particle is not bound by any potential energy, so the potential is zero and this Hamiltonian is the simplest. For one dimension:
{\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}}
and in higher dimensions:
{\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}}
=== Constant-potential well ===
For a particle in a region of constant potential {\displaystyle V=V_{0}} (no dependence on space or time), in one dimension, the Hamiltonian is:
{\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+V_{0}}
in three dimensions
{\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V_{0}}
This applies to the elementary "particle in a box" problem, and step potentials.
=== Simple harmonic oscillator ===
For a simple harmonic oscillator in one dimension, the potential varies with position (but not time), according to:
{\displaystyle V={\frac {k}{2}}x^{2}={\frac {m\omega ^{2}}{2}}x^{2}}
where the angular frequency {\displaystyle \omega }, effective spring constant {\displaystyle k}, and mass {\displaystyle m} of the oscillator satisfy:
{\displaystyle \omega ^{2}={\frac {k}{m}}}
so the Hamiltonian is:
{\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {m\omega ^{2}}{2}}x^{2}}
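This one-dimensional Hamiltonian can be diagonalized numerically by discretizing the second derivative with finite differences on a grid; the lowest eigenvalues should reproduce the known spectrum (n + 1/2)ħω. A sketch in hypothetical units with ħ = m = ω = 1:

```python
import numpy as np

hbar = m = omega = 1.0  # hypothetical natural units
N, L = 1500, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

# Tridiagonal Hamiltonian: -(hbar^2/2m) d^2/dx^2 via central differences,
# plus the (m omega^2 / 2) x^2 potential on the diagonal
main = hbar**2 / (m * dx**2) + 0.5 * m * omega**2 * x**2
off = -hbar**2 / (2 * m * dx**2) * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)
print(energies[:3])  # close to [0.5, 1.5, 2.5] = (n + 1/2) hbar*omega
```

The discretization error scales as O(dx²), so the agreement improves as the grid is refined.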
For three dimensions, this becomes
{\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+{\frac {m\omega ^{2}}{2}}r^{2}}
where the three-dimensional position vector {\displaystyle \mathbf {r} } using Cartesian coordinates is {\displaystyle (x,y,z)}, its magnitude is
{\displaystyle r^{2}=\mathbf {r} \cdot \mathbf {r} =|\mathbf {r} |^{2}=x^{2}+y^{2}+z^{2}}
Writing the Hamiltonian out in full shows it is simply the sum of the one-dimensional Hamiltonians in each direction:
{\displaystyle {\begin{aligned}{\hat {H}}&=-{\frac {\hbar ^{2}}{2m}}\left({\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}+{\frac {\partial ^{2}}{\partial z^{2}}}\right)+{\frac {m\omega ^{2}}{2}}\left(x^{2}+y^{2}+z^{2}\right)\\[6pt]&=\left(-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {m\omega ^{2}}{2}}x^{2}\right)+\left(-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial y^{2}}}+{\frac {m\omega ^{2}}{2}}y^{2}\right)+\left(-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial z^{2}}}+{\frac {m\omega ^{2}}{2}}z^{2}\right)\end{aligned}}}
=== Rigid rotor ===
For a rigid rotor—i.e., system of particles which can rotate freely about any axes, not bound in any potential (such as free molecules with negligible vibrational degrees of freedom, say due to double or triple chemical bonds), the Hamiltonian is:
{\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2I_{xx}}}{\hat {J}}_{x}^{2}-{\frac {\hbar ^{2}}{2I_{yy}}}{\hat {J}}_{y}^{2}-{\frac {\hbar ^{2}}{2I_{zz}}}{\hat {J}}_{z}^{2}}
where {\displaystyle I_{xx}}, {\displaystyle I_{yy}}, and {\displaystyle I_{zz}} are the moment of inertia components (technically the diagonal elements of the moment of inertia tensor), and {\displaystyle {\hat {J}}_{x}}, {\displaystyle {\hat {J}}_{y}}, and {\displaystyle {\hat {J}}_{z}} are the total angular momentum operators (components), about the {\displaystyle x}, {\displaystyle y}, and {\displaystyle z} axes respectively.
=== Electrostatic (Coulomb) potential ===
The Coulomb potential energy for two point charges {\displaystyle q_{1}} and {\displaystyle q_{2}} (i.e., those that have no spatial extent independently), in three dimensions, is (in SI units—rather than Gaussian units which are frequently used in electromagnetism):
{\displaystyle V={\frac {q_{1}q_{2}}{4\pi \varepsilon _{0}|\mathbf {r} |}}}
However, this is only the potential for one point charge due to another. If there are many charged particles, each charge has a potential energy due to every other point charge (except itself). For {\displaystyle N} charges, the potential energy of charge {\displaystyle q_{j}} due to all other charges is (see also Electrostatic potential energy stored in a configuration of discrete point charges):
{\displaystyle V_{j}={\frac {1}{2}}\sum _{i\neq j}q_{i}\phi (\mathbf {r} _{i})={\frac {1}{8\pi \varepsilon _{0}}}\sum _{i\neq j}{\frac {q_{i}q_{j}}{|\mathbf {r} _{i}-\mathbf {r} _{j}|}}}
where {\displaystyle \phi (\mathbf {r} _{i})} is the electrostatic potential of charge {\displaystyle q_{j}} at {\displaystyle \mathbf {r} _{i}}. The total potential of the system is then the sum over {\displaystyle j}:
{\displaystyle V={\frac {1}{8\pi \varepsilon _{0}}}\sum _{j=1}^{N}\sum _{i\neq j}{\frac {q_{i}q_{j}}{|\mathbf {r} _{i}-\mathbf {r} _{j}|}}}
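The 1/(8πε₀) prefactor encodes the factor of 1/2 that prevents double counting: summing over all ordered pairs (i, j) with i ≠ j counts every interaction twice, so halving it must equal the usual 1/(4πε₀) sum over unordered pairs. A small numerical sketch with three hypothetical charges (ε₀ set to 1 for illustration):

```python
import numpy as np
from itertools import combinations

eps0 = 1.0  # hypothetical unit system
q = np.array([1.0, -2.0, 3.0])
r = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])

# Double sum over ordered pairs with the 1/(8 pi eps0) prefactor
V_double = sum(q[i]*q[j] / np.linalg.norm(r[i] - r[j])
               for i in range(3) for j in range(3) if i != j) / (8*np.pi*eps0)

# Equivalent sum over unordered pairs with 1/(4 pi eps0)
V_pairs = sum(q[i]*q[j] / np.linalg.norm(r[i] - r[j])
              for i, j in combinations(range(3), 2)) / (4*np.pi*eps0)

assert np.isclose(V_double, V_pairs)
print(V_double)
```

Either form can be plugged into the Hamiltonian below; they are the same quantity.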
so the Hamiltonian is:
{\displaystyle {\begin{aligned}{\hat {H}}&=-{\frac {\hbar ^{2}}{2}}\sum _{j=1}^{N}{\frac {1}{m_{j}}}\nabla _{j}^{2}+{\frac {1}{8\pi \varepsilon _{0}}}\sum _{j=1}^{N}\sum _{i\neq j}{\frac {q_{i}q_{j}}{|\mathbf {r} _{i}-\mathbf {r} _{j}|}}\\&=\sum _{j=1}^{N}\left(-{\frac {\hbar ^{2}}{2m_{j}}}\nabla _{j}^{2}+{\frac {1}{8\pi \varepsilon _{0}}}\sum _{i\neq j}{\frac {q_{i}q_{j}}{|\mathbf {r} _{i}-\mathbf {r} _{j}|}}\right)\\\end{aligned}}}
=== Electric dipole in an electric field ===
For an electric dipole moment {\displaystyle \mathbf {d} } constituting charges of magnitude {\displaystyle q}, in a uniform, electrostatic field (time-independent) {\displaystyle \mathbf {E} }, positioned in one place, the potential is:
{\displaystyle V=-\mathbf {\hat {d}} \cdot \mathbf {E} }
the dipole moment itself is the operator
{\displaystyle \mathbf {\hat {d}} =q\mathbf {\hat {r}} }
Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:
{\displaystyle {\hat {H}}=-\mathbf {\hat {d}} \cdot \mathbf {E} =-q\mathbf {\hat {r}} \cdot \mathbf {E} }
=== Magnetic dipole in a magnetic field ===
For a magnetic dipole moment {\displaystyle {\boldsymbol {\mu }}} in a uniform, magnetostatic field (time-independent) {\displaystyle \mathbf {B} }, positioned in one place, the potential is:
{\displaystyle V=-{\boldsymbol {\mu }}\cdot \mathbf {B} }
Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:
{\displaystyle {\hat {H}}=-{\boldsymbol {\mu }}\cdot \mathbf {B} }
For a spin-1⁄2 particle, the corresponding spin magnetic moment is:
{\displaystyle {\boldsymbol {\mu }}_{S}={\frac {g_{s}e}{2m}}\mathbf {S} }
where {\displaystyle g_{s}} is the "spin g-factor" (not to be confused with the gyromagnetic ratio), {\displaystyle e} is the electron charge, {\displaystyle \mathbf {S} } is the spin operator vector, whose components are the Pauli matrices, hence
{\displaystyle {\hat {H}}={\frac {g_{s}e}{2m}}\mathbf {S} \cdot \mathbf {B} }
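For a spin-1/2 particle this Hamiltonian is a 2×2 matrix, and its two eigenvalues are the Zeeman levels ±g_s e ħ|B|/(4m). A sketch with hypothetical parameter values (ħ = e = m = 1, g_s = 2, field along z):

```python
import numpy as np

hbar = 1.0
gs, e, m = 2.0, 1.0, 1.0           # hypothetical values
B = np.array([0.0, 0.0, 1.0])      # uniform field along z

# Spin operators S = (hbar/2) * (Pauli matrices)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
S = (hbar / 2) * np.array([sx, sy, sz])

# H = (gs e / 2m) S . B
H = gs * e / (2 * m) * np.einsum('i,ijk->jk', B, S)
levels = np.linalg.eigvalsh(H)
print(levels)  # two Zeeman levels, here -0.5 and +0.5
```

With these values the splitting is g_s e ħ|B|/(2m) = 1 between the two levels.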
=== Charged particle in an electromagnetic field ===
For a particle with mass {\displaystyle m} and charge {\displaystyle q} in an electromagnetic field, described by the scalar potential {\displaystyle \phi } and vector potential {\displaystyle \mathbf {A} }, there are two parts to the Hamiltonian to substitute for. The canonical momentum operator {\displaystyle \mathbf {\hat {p}} }, which includes a contribution from the {\displaystyle \mathbf {A} } field and fulfils the canonical commutation relation, must be quantized;
{\displaystyle \mathbf {\hat {p}} =m{\dot {\mathbf {r} }}+q\mathbf {A} ,}
where {\displaystyle m{\dot {\mathbf {r} }}} is the kinetic momentum. The quantization prescription reads
{\displaystyle \mathbf {\hat {p}} =-i\hbar \nabla ,}
so the corresponding kinetic energy operator is
{\displaystyle {\hat {T}}={\frac {1}{2}}m{\dot {\mathbf {r} }}\cdot {\dot {\mathbf {r} }}={\frac {1}{2m}}\left(\mathbf {\hat {p}} -q\mathbf {A} \right)^{2}}
and the potential energy, which is due to the {\displaystyle \phi } field, is given by
{\displaystyle {\hat {V}}=q\phi .}
Casting all of these into the Hamiltonian gives
{\displaystyle {\hat {H}}={\frac {1}{2m}}\left(-i\hbar \nabla -q\mathbf {A} \right)^{2}+q\phi .}
== Energy eigenket degeneracy, symmetry, and conservation laws ==
In many systems, two or more energy eigenstates have the same energy. A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. The energy of each of these plane waves is inversely proportional to the square of its wavelength. A wave propagating in the
{\displaystyle x} direction is a different state from one propagating in the {\displaystyle y}
direction, but if they have the same wavelength, then their energies will be the same. When this happens, the states are said to be degenerate.
It turns out that degeneracy occurs whenever a nontrivial unitary operator
{\displaystyle U} commutes with the Hamiltonian. To see this, suppose that {\displaystyle |a\rangle } is an energy eigenket. Then {\displaystyle U|a\rangle }
is an energy eigenket with the same eigenvalue, since
{\displaystyle UH|a\rangle =UE_{a}|a\rangle =E_{a}(U|a\rangle )=H\;(U|a\rangle ).}
Since
{\displaystyle U} is nontrivial, at least one pair of {\displaystyle |a\rangle } and {\displaystyle U|a\rangle }
must represent distinct states. Therefore,
{\displaystyle H}
has at least one pair of degenerate energy eigenkets. In the case of the free particle, the unitary operator which produces the symmetry is the rotation operator, which rotates the wavefunctions by some angle while otherwise preserving their shape.
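This mechanism is easy to verify numerically in a finite-dimensional toy model (a sketch, not from the article; the matrices are chosen purely for illustration):

```python
import numpy as np

# A toy Hamiltonian with a twofold degenerate level: E = 1 for the first
# two basis states, E = 2 for the third.
H = np.diag([1.0, 1.0, 2.0])

# A nontrivial unitary that commutes with H: swap the two degenerate states.
U = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

assert np.allclose(U @ H, H @ U)          # [U, H] = 0

a = np.array([1.0, 0.0, 0.0])             # energy eigenket |a> with E_a = 1
Ua = U @ a                                # U|a> is a distinct state ...
assert np.allclose(H @ Ua, 1.0 * Ua)      # ... with the same eigenvalue
assert not np.allclose(Ua, a)             # so the level is degenerate
```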
The existence of a symmetry operator implies the existence of a conserved observable. Let
{\displaystyle G} be the Hermitian generator of {\displaystyle U}:
{\displaystyle U=I-i\varepsilon G+O(\varepsilon ^{2})}
It is straightforward to show that if
{\displaystyle U} commutes with {\displaystyle H}, then so does {\displaystyle G}:
{\displaystyle [H,G]=0}
Therefore,
{\displaystyle {\frac {\partial }{\partial t}}\langle \psi (t)|G|\psi (t)\rangle ={\frac {1}{i\hbar }}\langle \psi (t)|[G,H]|\psi (t)\rangle =0.}
In obtaining this result, we have used the Schrödinger equation, as well as its dual,
{\displaystyle \langle \psi (t)|H=-i\hbar {d \over dt}\langle \psi (t)|.}
Thus, the expected value of the observable
{\displaystyle G}
is conserved for any state of the system. In the case of the free particle, the conserved quantity is the angular momentum.
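The conservation law can be checked numerically (a sketch under illustrative assumptions: a random Hermitian H, and G chosen as a polynomial in H so that [H, G] = 0 automatically):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
hbar = 1.0

# Random Hermitian Hamiltonian; any polynomial in H commutes with H,
# so G = H^2 - 3H is a conserved Hermitian observable.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2
G = H @ H - 3 * H
assert np.allclose(H @ G - G @ H, 0)         # [H, G] = 0

psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)

def expect_G(t):
    """<psi(t)|G|psi(t)> under Schrödinger evolution exp(-iHt/hbar)."""
    psi_t = expm(-1j * H * t / hbar) @ psi0
    return (psi_t.conj() @ G @ psi_t).real

# The expectation value of G is the same at every time.
assert np.allclose([expect_G(t) for t in (0.0, 0.7, 3.1)], expect_G(0.0))
```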
== Hamilton's equations ==
Hamilton's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics. Suppose we have a set of basis states
{\displaystyle \left\{\left|n\right\rangle \right\}}
, which need not necessarily be eigenstates of the energy. For simplicity, we assume that they are discrete, and that they are orthonormal, i.e.,
{\displaystyle \langle n'|n\rangle =\delta _{nn'}}
Note that these basis states are assumed to be independent of time. We will assume that the Hamiltonian is also independent of time.
The instantaneous state of the system at time
{\displaystyle t}, {\displaystyle \left|\psi \left(t\right)\right\rangle }, can be expanded in terms of these basis states:
{\displaystyle |\psi (t)\rangle =\sum _{n}a_{n}(t)|n\rangle }
where
{\displaystyle a_{n}(t)=\langle n|\psi (t)\rangle .}
The coefficients
{\displaystyle a_{n}(t)}
are complex variables. We can treat them as coordinates which specify the state of the system, like the position and momentum coordinates which specify a classical system. Like classical coordinates, they are generally not constant in time, and their time dependence gives rise to the time dependence of the system as a whole.
The expectation value of the Hamiltonian of this state, which is also the mean energy, is
{\displaystyle \langle H(t)\rangle \mathrel {\stackrel {\mathrm {def} }{=}} \langle \psi (t)|H|\psi (t)\rangle =\sum _{nn'}a_{n'}^{*}a_{n}\langle n'|H|n\rangle }
where the last step was obtained by expanding
{\displaystyle \left|\psi \left(t\right)\right\rangle }
in terms of the basis states.
Each
{\displaystyle a_{n}(t)}
actually corresponds to two independent degrees of freedom, since the variable has a real part and an imaginary part. We now perform the following trick: instead of using the real and imaginary parts as the independent variables, we use
{\displaystyle a_{n}(t)} and its complex conjugate {\displaystyle a_{n}^{*}(t)}
. With this choice of independent variables, we can calculate the partial derivative
{\displaystyle {\frac {\partial \langle H\rangle }{\partial a_{n'}^{*}}}=\sum _{n}a_{n}\langle n'|H|n\rangle =\langle n'|H|\psi \rangle }
By applying the Schrödinger equation and using the orthonormality of the basis states, this further reduces to
{\displaystyle {\frac {\partial \langle H\rangle }{\partial a_{n'}^{*}}}=i\hbar {\frac {\partial a_{n'}}{\partial t}}}
Similarly, one can show that
{\displaystyle {\frac {\partial \langle H\rangle }{\partial a_{n}}}=-i\hbar {\frac {\partial a_{n}^{*}}{\partial t}}}
If we define "conjugate momentum" variables
{\displaystyle \pi _{n}}
by
{\displaystyle \pi _{n}(t)=i\hbar a_{n}^{*}(t)}
then the above equations become
{\displaystyle {\frac {\partial \langle H\rangle }{\partial \pi _{n}}}={\frac {\partial a_{n}}{\partial t}},\quad {\frac {\partial \langle H\rangle }{\partial a_{n}}}=-{\frac {\partial \pi _{n}}{\partial t}}}
which is precisely the form of Hamilton's equations, with the
{\displaystyle a_{n}}s as the generalized coordinates, the {\displaystyle \pi _{n}}s as the conjugate momenta, and {\displaystyle \langle H\rangle }
taking the place of the classical Hamiltonian.
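The key identity above, ∂⟨H⟩/∂aₙ′* = iħ ∂aₙ′/∂t (i.e. the Schrödinger equation iħ ȧ = H a written in components), can be spot-checked with a random Hamiltonian matrix (a sketch for illustration, not part of the original derivation):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
rng = np.random.default_rng(1)

# Hamiltonian matrix elements <n'|H|n> in a discrete orthonormal basis.
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (M + M.conj().T) / 2

a0 = rng.normal(size=3) + 1j * rng.normal(size=3)
a0 /= np.linalg.norm(a0)

def a(t):
    """Coefficient vector a_n(t) = <n|psi(t)> under exact time evolution."""
    return expm(-1j * H * t / hbar) @ a0

# d<H>/da_n'^* = sum_n <n'|H|n> a_n should equal i*hbar * da_n'/dt.
t, dt = 0.4, 1e-6
dadt = (a(t + dt) - a(t - dt)) / (2 * dt)   # central-difference derivative
assert np.allclose(H @ a(t), 1j * hbar * dadt, atol=1e-6)
```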
== Further reading ==
Schrödinger, Erwin (1926). "Quantisierung als Eigenwertproblem" [Quantization as an Eigenvalue Problem]. Annalen der Physik (in German). 79 (4): 361–376. Bibcode:1926AnP...384..361S. doi:10.1002/andp.19263840404. This paper is foundational in quantum mechanics, introducing the Schrödinger equation and its application to the Hamiltonian operator.
Dirac, Paul A. M. (1928). "The Quantum Theory of the Electron". Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character. 117 (778): 610–624. Bibcode:1928RSPSA.117..610D. doi:10.1098/rspa.1928.0023. This paper introduced the Dirac equation, which unified quantum mechanics with special relativity and accounted for the electron's spin.
Von Neumann, John (1932). Mathematische Grundlagen der Quantenmechanik [Mathematical Foundations of Quantum Mechanics]. Berlin: Springer. Translated into English in 1955, Von Neumann's work formalized quantum mechanics using Hilbert spaces and linear operators. It remains a cornerstone in the field.
== External links ==
Quotations related to Hamiltonian (quantum mechanics) at Wikiquote
In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers. A field is thus a fundamental algebraic structure which is widely used in algebra, number theory, and many other areas of mathematics.
The best known fields are the field of rational numbers, the field of real numbers and the field of complex numbers. Many other fields, such as fields of rational functions, algebraic function fields, algebraic number fields, and p-adic fields are commonly used and studied in mathematics, particularly in number theory and algebraic geometry. Most cryptographic protocols rely on finite fields, i.e., fields with finitely many elements.
The theory of fields proves that angle trisection and squaring the circle cannot be done with a compass and straightedge. Galois theory, devoted to understanding the symmetries of field extensions, provides an elegant proof of the Abel–Ruffini theorem that general quintic equations cannot be solved in radicals.
Fields serve as foundational notions in several mathematical domains. This includes different branches of mathematical analysis, which are based on fields with additional structure. Basic theorems in analysis hinge on the structural properties of the field of real numbers. Most importantly for algebraic purposes, any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. Number fields, the siblings of the field of rational numbers, are studied in depth in number theory. Function fields can help describe properties of geometric objects.
== Definition ==
Informally, a field is a set, along with two operations defined on that set: an addition operation a + b and a multiplication operation a ⋅ b, both of which behave as the corresponding operations behave for rational numbers and real numbers. This includes the existence of an additive inverse −a for all elements a and of a multiplicative inverse b−1 for every nonzero element b. This allows the definition of the so-called inverse operations, subtraction a − b and division a / b, as a − b = a + (−b) and a / b = a ⋅ b−1.
Often the product a ⋅ b is represented by juxtaposition, as ab.
=== Classic definition ===
Formally, a field is a set F together with two binary operations on F called addition and multiplication. A binary operation on F is a mapping F × F → F, that is, a correspondence that associates with each ordered pair of elements of F a uniquely determined element of F. The result of the addition of a and b is called the sum of a and b, and is denoted a + b. Similarly, the result of the multiplication of a and b is called the product of a and b, and is denoted a ⋅ b. These operations are required to satisfy the following properties, referred to as field axioms.
These axioms are required to hold for all elements a, b, c of the field F:
Associativity of addition and multiplication: a + (b + c) = (a + b) + c, and a ⋅ (b ⋅ c) = (a ⋅ b) ⋅ c.
Commutativity of addition and multiplication: a + b = b + a, and a ⋅ b = b ⋅ a.
Additive and multiplicative identity: there exist two distinct elements 0 and 1 in F such that a + 0 = a and a ⋅ 1 = a.
Additive inverses: for every a in F, there exists an element in F, denoted −a, called the additive inverse of a, such that a + (−a) = 0.
Multiplicative inverses: for every a ≠ 0 in F, there exists an element in F, denoted by a−1 or 1/a, called the multiplicative inverse of a, such that a ⋅ a−1 = 1.
Distributivity of multiplication over addition: a ⋅ (b + c) = (a ⋅ b) + (a ⋅ c).
An equivalent, and more succinct, definition is: a field has two commutative operations, called addition and multiplication; it is a group under addition with 0 as the additive identity; the nonzero elements form a group under multiplication with 1 as the multiplicative identity; and multiplication distributes over addition.
Even more succinctly: a field is a commutative ring where 0 ≠ 1 and all nonzero elements are invertible under multiplication.
=== Alternative definition ===
Fields can also be defined in different, but equivalent ways. One can alternatively define a field by four binary operations (addition, subtraction, multiplication, and division) and their required properties. Division by zero is, by definition, excluded. In order to avoid existential quantifiers, fields can be defined by two binary operations (addition and multiplication), two unary operations (yielding the additive and multiplicative inverses respectively), and two nullary operations (the constants 0 and 1). These operations are then subject to the conditions above. Avoiding existential quantifiers is important in constructive mathematics and computing. One may equivalently define a field by the same two binary operations, one unary operation (the multiplicative inverse), and two (not necessarily distinct) constants 1 and −1, since 0 = 1 + (−1) and −a = (−1)a.
== Examples ==
=== Rational numbers ===
Rational numbers had been widely used for a long time before the elaboration of the concept of a field.
They are numbers that can be written as fractions
a/b, where a and b are integers, and b ≠ 0. The additive inverse of such a fraction is −a/b, and the multiplicative inverse (provided that a ≠ 0) is b/a, which can be seen as follows:
{\displaystyle {\frac {b}{a}}\cdot {\frac {a}{b}}={\frac {ba}{ab}}=1.}
The abstractly required field axioms reduce to standard properties of rational numbers. For example, the law of distributivity can be proven as follows:
{\displaystyle {\begin{aligned}&{\frac {a}{b}}\cdot \left({\frac {c}{d}}+{\frac {e}{f}}\right)\\[6pt]={}&{\frac {a}{b}}\cdot \left({\frac {c}{d}}\cdot {\frac {f}{f}}+{\frac {e}{f}}\cdot {\frac {d}{d}}\right)\\[6pt]={}&{\frac {a}{b}}\cdot \left({\frac {cf}{df}}+{\frac {ed}{fd}}\right)={\frac {a}{b}}\cdot {\frac {cf+ed}{df}}\\[6pt]={}&{\frac {a(cf+ed)}{bdf}}={\frac {acf}{bdf}}+{\frac {aed}{bdf}}={\frac {ac}{bd}}+{\frac {ae}{bf}}\\[6pt]={}&{\frac {a}{b}}\cdot {\frac {c}{d}}+{\frac {a}{b}}\cdot {\frac {e}{f}}.\end{aligned}}}
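The field axioms for Q can also be spot-checked with exact rational arithmetic; Python's `fractions.Fraction` implements precisely the fractions a/b described above (a quick sanity check, not part of the article):

```python
from fractions import Fraction

# Spot-check the distributive law for a few rationals.
a, b, c = Fraction(3, 7), Fraction(-5, 2), Fraction(9, 4)
assert a * (b + c) == a * b + a * c

# Additive and multiplicative inverses behave as the axioms demand.
assert b + (-b) == 0            # additive inverse of -5/2 is 5/2
assert c * (1 / c) == 1         # multiplicative inverse of 9/4 is 4/9
```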
=== Real and complex numbers ===
The real numbers R, with the usual operations of addition and multiplication, also form a field. The complex numbers C consist of expressions
a + bi, with a, b real,
where i is the imaginary unit, i.e., a (non-real) number satisfying i2 = −1.
Addition and multiplication of complex numbers are defined in such a way that expressions of this type satisfy all field axioms; thus they hold for C. For example, the distributive law enforces
(a + bi)(c + di) = ac + bci + adi + bdi2 = (ac − bd) + (bc + ad)i.
It is immediate that this is again an expression of the above type, and so the complex numbers form a field. Complex numbers can be geometrically represented as points in the plane, with Cartesian coordinates given by the real numbers of their describing expression, or as the arrows from the origin to these points, specified by their length and an angle enclosed with some distinct direction. Addition then corresponds to combining the arrows to the intuitive parallelogram (adding the Cartesian coordinates), and the multiplication is – less intuitively – combining rotating and scaling of the arrows (adding the angles and multiplying the lengths). The fields of real and complex numbers are used throughout mathematics, physics, engineering, statistics, and many other scientific disciplines.
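The product formula above, and the polar picture of multiplication, can be verified with Python's built-in complex type (an illustrative check, not from the article):

```python
import math

# Check (a + bi)(c + di) = (ac - bd) + (bc + ad)i for sample values.
a, b, c, d = 2.0, 3.0, -1.0, 4.0
lhs = complex(a, b) * complex(c, d)
rhs = complex(a * c - b * d, b * c + a * d)
assert lhs == rhs

# Polar picture: multiplication multiplies the lengths of the arrows.
z, w = complex(a, b), complex(c, d)
assert math.isclose(abs(z * w), abs(z) * abs(w))
```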
=== Constructible numbers ===
In antiquity, several geometric problems concerned the (in)feasibility of constructing certain numbers with compass and straightedge. For example, it was unknown to the Greeks that it is, in general, impossible to trisect a given angle in this way. These problems can be settled using the field of constructible numbers. Real constructible numbers are, by definition, lengths of line segments that can be constructed from the points 0 and 1 in finitely many steps using only compass and straightedge. These numbers, endowed with the field operations of real numbers, restricted to the constructible numbers, form a field, which properly includes the field Q of rational numbers. The illustration shows the construction of square roots of constructible numbers, not necessarily contained within Q. Using the labeling in the illustration, construct the segments AB, BD, and a semicircle over AD (center at the midpoint C), which intersects the perpendicular line through B in a point F, at a distance of exactly
{\displaystyle h={\sqrt {p}}}
from B when BD has length one.
Not all real numbers are constructible. It can be shown that
{\displaystyle {\sqrt[{3}]{2}}}
is not a constructible number, which implies that it is impossible to construct with compass and straightedge the length of the side of a cube with volume 2, another problem posed by the ancient Greeks.
=== A field with four elements ===
In addition to familiar number systems such as the rationals, there are other, less immediate examples of fields. The following example is a field consisting of four elements called O, I, A, and B. The notation is chosen such that O plays the role of the additive identity element (denoted 0 in the axioms above), and I is the multiplicative identity (denoted 1 in the axioms above). The field axioms can be verified by using some more field theory, or by direct computation. For example,
A ⋅ (B + A) = A ⋅ I = A, which equals A ⋅ B + A ⋅ A = I + B = A, as required by the distributivity.
This field is called a finite field or Galois field with four elements, and is denoted F4 or GF(4). The subset consisting of O and I (highlighted in red in the tables at the right) is also a field, known as the binary field F2 or GF(2).
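A compact way to realize GF(4) concretely is as polynomials over F2 modulo the irreducible polynomial x² + x + 1, encoding each element in two bits (a sketch; the encoding O = 00, I = 01, A = 10, B = 11 is an assumption chosen to match the names in the text):

```python
# GF(4) as bit patterns: O=0b00, I=0b01, A=0b10 (x), B=0b11 (x+1),
# with arithmetic modulo x^2 + x + 1 over GF(2).
O, I, A, B = 0b00, 0b01, 0b10, 0b11

def add(u, v):
    return u ^ v                      # addition is bitwise XOR (characteristic 2)

def mul(u, v):
    r = 0
    for shift in range(2):            # carry-less schoolbook multiplication
        if (v >> shift) & 1:
            r ^= u << shift
    if r & 0b100:                     # reduce: x^2 -> x + 1
        r ^= 0b111
    return r

# The distributivity example from the text: A*(B + A) = A*B + A*A = A.
assert mul(A, add(B, A)) == A
assert add(mul(A, B), mul(A, A)) == A
# Every element is a zero of f(x) = x^4 - x (see the splitting-field discussion).
assert all(mul(mul(mul(x, x), x), x) == x for x in (O, I, A, B))
```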
== Elementary notions ==
In this section, F denotes an arbitrary field and a and b are arbitrary elements of F.
=== Consequences of the definition ===
One has a ⋅ 0 = 0 and −a = (−1) ⋅ a. In particular, one may deduce the additive inverse of every element as soon as one knows −1.
If ab = 0 then a or b must be 0, since, if a ≠ 0, then
b = (a−1a)b = a−1(ab) = a−1 ⋅ 0 = 0. This means that every field is an integral domain.
In addition, the following properties are true for any elements a and b:
−0 = 0
1−1 = 1
(−(−a)) = a
(−a) ⋅ b = a ⋅ (−b) = −(a ⋅ b)
(a−1)−1 = a if a ≠ 0
=== Additive and multiplicative groups of a field ===
The axioms of a field F imply that it is an abelian group under addition. This group is called the additive group of the field, and is sometimes denoted by (F, +) when denoting it simply as F could be confusing.
Similarly, the nonzero elements of F form an abelian group under multiplication, called the multiplicative group, and denoted by
{\displaystyle (F\smallsetminus \{0\},\cdot )} or just {\displaystyle F\smallsetminus \{0\}}, or F×.
A field may thus be defined as set F equipped with two operations denoted as an addition and a multiplication such that F is an abelian group under addition,
{\displaystyle F\smallsetminus \{0\}}
is an abelian group under multiplication (where 0 is the identity element of the addition), and multiplication is distributive over addition. Some elementary statements about fields can therefore be obtained by applying general facts of groups. For example, the additive and multiplicative inverses −a and a−1 are uniquely determined by a.
The requirement 1 ≠ 0 is imposed by convention to exclude the trivial ring, which consists of a single element; this guides any choice of the axioms that define fields.
Every finite subgroup of the multiplicative group of a field is cyclic (see Root of unity § Cyclic groups).
=== Characteristic ===
In addition to the multiplication of two elements of F, it is possible to define the product n ⋅ a of an arbitrary element a of F by a positive integer n to be the n-fold sum
a + a + ... + a (which is an element of F).
If there is no positive integer such that
n ⋅ 1 = 0,
then F is said to have characteristic 0. For example, the field of rational numbers Q has characteristic 0, since n ⋅ 1 (the sum of n copies of 1) is never zero for a positive integer n. Otherwise, if there is a positive integer n satisfying this equation, the smallest such positive integer can be shown to be a prime number. It is usually denoted by p, and the field is then said to have characteristic p.
For example, the field F4 has characteristic 2 since (in the notation of the above addition table) I + I = O.
If F has characteristic p, then p ⋅ a = 0 for all a in F. This implies that
(a + b)p = ap + bp,
since all other binomial coefficients appearing in the binomial formula are divisible by p. Here, ap := a ⋅ a ⋅ ⋯ ⋅ a (p factors) is the pth power, i.e., the p-fold product of the element a. Therefore, the Frobenius map
F → F : x ↦ xp
is compatible with the addition in F (and also with the multiplication), and is therefore a field homomorphism. The existence of this homomorphism makes fields in characteristic p quite different from fields of characteristic 0.
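Both the divisibility of the binomial coefficients and the resulting "freshman's dream" identity can be checked directly in a prime field (an illustrative check with p = 7; any prime works):

```python
from math import comb

p = 7  # any prime characteristic

# All intermediate binomial coefficients C(p, k), 0 < k < p, are divisible
# by p, which is why (a + b)^p = a^p + b^p in characteristic p.
assert all(comb(p, k) % p == 0 for k in range(1, p))

# Check the Frobenius identity directly in F_p = Z/pZ.
for a in range(p):
    for b in range(p):
        assert pow(a + b, p, p) == (pow(a, p, p) + pow(b, p, p)) % p
```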
=== Subfields and prime fields ===
A subfield E of a field F is a subset of F that is a field with respect to the field operations of F. Equivalently E is a subset of F that contains 1, and is closed under addition, multiplication, additive inverse and multiplicative inverse of a nonzero element. This means that 1 ∊ E, that for all a, b ∊ E both a + b and a ⋅ b are in E, and that for all a ≠ 0 in E, both −a and 1/a are in E.
Field homomorphisms are maps φ: E → F between two fields such that φ(e1 + e2) = φ(e1) + φ(e2), φ(e1e2) = φ(e1) φ(e2), and φ(1E) = 1F, where e1 and e2 are arbitrary elements of E. All field homomorphisms are injective. If φ is also surjective, it is called an isomorphism (or the fields E and F are called isomorphic).
A field is called a prime field if it has no proper (i.e., strictly smaller) subfields. Any field F contains a prime field. If the characteristic of F is p (a prime number), the prime field is isomorphic to the finite field Fp introduced below. Otherwise the prime field is isomorphic to Q.
== Finite fields ==
Finite fields (also called Galois fields) are fields with finitely many elements, whose number is also referred to as the order of the field. The above introductory example F4 is a field with four elements. Its subfield F2 is the smallest field, because by definition a field has at least two distinct elements, 0 and 1.
The simplest finite fields, with prime order, are most directly accessible using modular arithmetic. For a fixed positive integer n, arithmetic "modulo n" means to work with the numbers
Z/nZ = {0, 1, ..., n − 1}.
The addition and multiplication on this set are done by performing the operation in question in the set Z of integers, dividing by n and taking the remainder as result. This construction yields a field precisely if n is a prime number. For example, taking the prime n = 2 results in the above-mentioned field F2. For n = 4 and more generally, for any composite number (i.e., any number n which can be expressed as a product n = r ⋅ s of two strictly smaller natural numbers), Z/nZ is not a field: the product of two non-zero elements is zero since r ⋅ s = 0 in Z/nZ, which, as was explained above, prevents Z/nZ from being a field. The field Z/pZ with p elements (p being prime) constructed in this way is usually denoted by Fp.
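The criterion "Z/nZ is a field precisely if n is prime" amounts to every nonzero element having a multiplicative inverse, which a brute-force search can confirm for small n (a sketch; `is_field` is a hypothetical helper name):

```python
def is_field(n):
    """Check whether Z/nZ is a field: every nonzero element invertible."""
    return all(any(a * b % n == 1 for b in range(1, n))
               for a in range(1, n))

assert is_field(2) and is_field(5) and is_field(7)      # prime moduli
assert not is_field(4) and not is_field(6)              # composite moduli
# In Z/4Z the product of the nonzero elements 2 and 2 is zero,
# so 2 cannot have an inverse.
assert (2 * 2) % 4 == 0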
Every finite field F has q = pn elements, where p is prime and n ≥ 1. This statement holds since F may be viewed as a vector space over its prime field. The dimension of this vector space is necessarily finite, say n, which implies the asserted statement.
A field with q = pn elements can be constructed as the splitting field of the polynomial
f(x) = xq − x.
Such a splitting field is an extension of Fp in which the polynomial f has q zeros. This means f has as many zeros as possible since the degree of f is q. For q = 22 = 4, it can be checked case by case using the above multiplication table that all four elements of F4 satisfy the equation x4 = x, so they are zeros of f. By contrast, in F2, f has only two zeros (namely 0 and 1), so f does not split into linear factors in this smaller field. Elaborating further on basic field-theoretic notions, it can be shown that two finite fields with the same order are isomorphic. It is thus customary to speak of the finite field with q elements, denoted by Fq or GF(q).
== History ==
Historically, three algebraic disciplines led to the concept of a field: the question of solving polynomial equations, algebraic number theory, and algebraic geometry. A first step towards the notion of a field was made in 1770 by Joseph-Louis Lagrange, who observed that permuting the zeros x1, x2, x3 of a cubic polynomial in the expression
(x1 + ωx2 + ω2x3)3
(with ω being a third root of unity) only yields two values. This way, Lagrange conceptually explained the classical solution method of Scipione del Ferro and François Viète, which proceeds by reducing a cubic equation for an unknown x to a quadratic equation for x3. Together with a similar observation for equations of degree 4, Lagrange thus linked what eventually became the concept of fields and the concept of groups. Vandermonde, also in 1770, and to a fuller extent, Carl Friedrich Gauss, in his Disquisitiones Arithmeticae (1801), studied the equation
x^p = 1
for a prime p and, again using modern language, the resulting cyclic Galois group. Gauss deduced that a regular p-gon can be constructed if p = 2^(2^k) + 1 (a Fermat prime). Building on Lagrange's work, Paolo Ruffini claimed (1799) that quintic equations (polynomial equations of degree 5) cannot be solved algebraically; however, his arguments were flawed. These gaps were filled by Niels Henrik Abel in 1824. Évariste Galois, in 1832, devised necessary and sufficient criteria for a polynomial equation to be algebraically solvable, thus establishing in effect what is known as Galois theory today. Both Abel and Galois worked with what is today called an algebraic number field, but conceived neither an explicit notion of a field, nor of a group.
In 1871 Richard Dedekind introduced, for a set of real or complex numbers that is closed under the four arithmetic operations, the German word Körper, which means "body" or "corpus" (to suggest an organically closed entity). The English term "field" was introduced by Moore (1893).
By a field we will mean every infinite system of real or complex numbers so closed in itself and perfect that addition, subtraction, multiplication, and division of any two of these numbers again yields a number of the system.
In 1881 Leopold Kronecker defined what he called a domain of rationality, which is a field of rational fractions in modern terms. Kronecker's notion did not cover the field of all algebraic numbers (which is a field in Dedekind's sense), but on the other hand was more abstract than Dedekind's in that it made no specific assumption on the nature of the elements of a field. Kronecker interpreted a field such as Q(π) abstractly as the rational function field Q(X). Prior to this, examples of transcendental numbers were known since Joseph Liouville's work in 1844, until Charles Hermite (1873) and Ferdinand von Lindemann (1882) proved the transcendence of e and π, respectively.
The first clear definition of an abstract field is due to Weber (1893). In particular, Heinrich Martin Weber's notion included the field Fp. Giuseppe Veronese (1891) studied the field of formal power series, which led Hensel (1904) to introduce the field of p-adic numbers. Steinitz (1910) synthesized the knowledge of abstract field theory accumulated so far. He axiomatically studied the properties of fields and defined many important field-theoretic concepts. The majority of the theorems mentioned in the sections Galois theory, Constructing fields and Elementary notions can be found in Steinitz's work. Artin & Schreier (1927) linked the notion of orderings in a field, and thus the area of analysis, to purely algebraic properties. Emil Artin redeveloped Galois theory from 1928 through 1942, eliminating the dependency on the primitive element theorem.
== Constructing fields ==
=== Constructing fields from rings ===
A commutative ring is a set that is equipped with an addition and multiplication operation and satisfies all the axioms of a field, except for the existence of multiplicative inverses a−1. For example, the integers Z form a commutative ring, but not a field: the reciprocal of an integer n is not itself an integer, unless n = ±1.
In the hierarchy of algebraic structures fields can be characterized as the commutative rings R in which every nonzero element is a unit (which means every element is invertible). Similarly, fields are the commutative rings with precisely two distinct ideals, (0) and R. Fields are also precisely the commutative rings in which (0) is the only prime ideal.
Given a commutative ring R, there are two ways to construct a field related to R, i.e., two ways of modifying R such that all nonzero elements become invertible: forming the field of fractions, and forming residue fields. The field of fractions of Z is Q, the rationals, while the residue fields of Z are the finite fields Fp.
==== Field of fractions ====
Given an integral domain R, its field of fractions Q(R) is built with the fractions of two elements of R exactly as Q is constructed from the integers. More precisely, the elements of Q(R) are the fractions a/b where a and b are in R, and b ≠ 0. Two fractions a/b and c/d are equal if and only if ad = bc. The operation on the fractions work exactly as for rational numbers. For example,
{\displaystyle {\frac {a}{b}}+{\frac {c}{d}}={\frac {ad+bc}{bd}}.}
It is straightforward to show that, if the ring is an integral domain, the set of fractions forms a field.
The field F(x) of the rational fractions over a field (or an integral domain) F is the field of fractions of the polynomial ring F[x]. The field F((x)) of Laurent series
{\displaystyle \sum _{i=k}^{\infty }a_{i}x^{i}\ (k\in \mathbb {Z} ,a_{i}\in F)}
over a field F is the field of fractions of the ring F[[x]] of formal power series (in which k ≥ 0). Since any Laurent series is a fraction of a power series divided by a power of x (as opposed to an arbitrary power series), the representation of fractions is less important in this situation.
==== Residue fields ====
In addition to the field of fractions, which embeds R injectively into a field, a field can be obtained from a commutative ring R by means of a surjective map onto a field F. Any field obtained in this way is a quotient R / m, where m is a maximal ideal of R. If R has only one maximal ideal m, this field is called the residue field of R.
The ideal generated by a single polynomial f in the polynomial ring R = E[X] (over a field E) is maximal if and only if f is irreducible over E, i.e., if f cannot be expressed as the product of two polynomials in E[X] of smaller degree. This yields a field
F = E[X] / (f(X)).
This field F contains an element x (namely the residue class of X) which satisfies the equation
f(x) = 0.
For example, C is obtained from R by adjoining the imaginary unit symbol i, which satisfies f(i) = 0, where f(X) = X2 + 1. Moreover, f is irreducible over R, which implies that the map that sends a polynomial f(X) ∊ R[X] to f(i) yields an isomorphism
{\displaystyle \mathbf {R} [X]/\left(X^{2}+1\right)\ {\stackrel {\cong }{\longrightarrow }}\ \mathbf {C} .}
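This isomorphism can be made tangible by multiplying degree-one residues a0 + a1·X modulo X² + 1 and comparing with complex multiplication (a sketch; `mul_mod` is a hypothetical helper, with residues stored as coefficient pairs):

```python
def mul_mod(p, q):
    """Multiply a0 + a1*X and b0 + b1*X modulo X^2 + 1."""
    a0, a1 = p
    b0, b1 = q
    # (a0 + a1 X)(b0 + b1 X) = a0 b0 + (a0 b1 + a1 b0) X + a1 b1 X^2,
    # and X^2 ≡ -1 modulo X^2 + 1.
    return (a0 * b0 - a1 * b1, a0 * b1 + a1 * b0)

# The residue class of X behaves exactly like the imaginary unit i.
assert mul_mod((0.0, 1.0), (0.0, 1.0)) == (-1.0, 0.0)   # X * X ≡ -1

p, q = (2.0, 3.0), (-1.0, 4.0)        # 2 + 3X and -1 + 4X
z = complex(*p) * complex(*q)         # same product computed in C
assert mul_mod(p, q) == (z.real, z.imag)
```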
=== Constructing fields within a bigger field ===
Fields can be constructed inside a given bigger container field. Suppose we are given a field E and a field F containing E as a subfield. For any element x of F, there is a smallest subfield of F containing E and x, called the subfield of F generated by x and denoted E(x). The passage from E to E(x) is referred to as adjoining an element x to E. More generally, for a subset S ⊂ F, there is a minimal subfield of F containing E and S, denoted by E(S).
The compositum of two subfields E and E′ of some field F is the smallest subfield of F containing both E and E′. The compositum can be used to construct the biggest subfield of F satisfying a certain property, for example the biggest subfield of F, which is, in the language introduced below, algebraic over E.
=== Field extensions ===
The notion of a subfield E ⊂ F can also be regarded from the opposite point of view, by referring to F being a field extension (or just extension) of E, denoted by
F / E,
and read "F over E".
A basic datum of a field extension is its degree [F : E], i.e., the dimension of F as an E-vector space. For a tower of fields E ⊂ F ⊂ G, the degree satisfies the multiplicativity formula
[G : E] = [G : F] [F : E].
Extensions whose degree is finite are referred to as finite extensions. The extensions C / R and F4 / F2 are of degree 2, whereas R / Q is an infinite extension.
==== Algebraic extensions ====
A pivotal notion in the study of field extensions F / E is that of algebraic elements. An element x ∈ F is algebraic over E if it is a root of a polynomial with coefficients in E, that is, if it satisfies a polynomial equation
en xn + en−1xn−1 + ⋯ + e1x + e0 = 0,
with en, ..., e0 in E, and en ≠ 0.
For example, the imaginary unit i in C is algebraic over R, and even over Q, since it satisfies the equation
i2 + 1 = 0.
A field extension in which every element of F is algebraic over E is called an algebraic extension. Any finite extension is necessarily algebraic, as can be deduced from the above multiplicativity formula.
The subfield E(x) generated by an element x, as above, is an algebraic extension of E if and only if x is an algebraic element. That is to say, if x is algebraic, all other elements of E(x) are necessarily algebraic as well. Moreover, the degree of the extension E(x) / E, i.e., the dimension of E(x) as an E-vector space, equals the minimal degree n such that there is a polynomial equation involving x, as above. If this degree is n, then the elements of E(x) have the form
{\displaystyle \sum _{k=0}^{n-1}a_{k}x^{k},\ \ a_{k}\in E.}
For example, the field Q(i) of Gaussian rationals is the subfield of C consisting of all numbers of the form a + bi where both a and b are rational numbers: summands of the form i2 (and similarly for higher exponents) do not have to be considered here, since a + bi + ci2 can be simplified to a − c + bi.
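The simplification rule just described can be exercised with exact rational arithmetic; the sketch below (the helper name is illustrative) multiplies elements of Q(i) stored as pairs (a, b) for a + bi:

```python
from fractions import Fraction

# Sketch: multiplication in Q(i), the Gaussian rationals. Elements are pairs
# (a, b) standing for a + b*i with a, b rational; the reduction i^2 = -1
# keeps every product in the same two-term form a + b*i.

def gauss_mul(u, v):
    a, b = u
    c, d = v
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    return (a * c - b * d, a * d + b * c)

z = (Fraction(1, 2), Fraction(1))          # 1/2 + i
sq = gauss_mul(z, z)                       # (1/2 + i)^2 = 1/4 - 1 + i
assert sq == (Fraction(-3, 4), Fraction(1))
```

This is exactly the degree-2 picture above: products never need an i2 term, since it is rewritten immediately.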
==== Transcendence bases ====
The above-mentioned field of rational fractions E(X), where X is an indeterminate, is not an algebraic extension of E since there is no polynomial equation with coefficients in E whose zero is X. Elements, such as X, which are not algebraic are called transcendental. Informally speaking, the indeterminate X and its powers do not interact with elements of E. A similar construction can be carried out with a set of indeterminates, instead of just one.
Once again, the field extension E(x) / E discussed above is a key example: if x is not algebraic (i.e., x is not a root of a polynomial with coefficients in E), then E(x) is isomorphic to E(X). This isomorphism is obtained by substituting x for X in rational fractions.
A subset S of a field F is a transcendence basis if it is algebraically independent over E (its elements do not satisfy any nontrivial polynomial relation with coefficients in E) and if F is an algebraic extension of E(S). Any field extension F / E has a transcendence basis. Thus, field extensions can be split into ones of the form E(S) / E (purely transcendental extensions) and algebraic extensions.
=== Closure operations ===
A field is algebraically closed if it does not have any strictly bigger algebraic extensions or, equivalently, if any polynomial equation
fn xn + fn−1xn−1 + ⋯ + f1x + f0 = 0, with coefficients fn, ..., f0 ∈ F, n > 0,
has a solution x ∊ F. By the fundamental theorem of algebra, C is algebraically closed, i.e., any polynomial equation with complex coefficients has a complex solution. The rational and the real numbers are not algebraically closed since the equation
x2 + 1 = 0
does not have any rational or real solution. A field containing F is called an algebraic closure of F if it is algebraic over F (roughly speaking, not too big compared to F) and is algebraically closed (big enough to contain solutions of all polynomial equations).
By the above, C is an algebraic closure of R. The situation that the algebraic closure is a finite extension of the field F is quite special: by the Artin–Schreier theorem, the degree of this extension is necessarily 2, and F is elementarily equivalent to R. Such fields are also known as real closed fields.
Any field F has an algebraic closure, which is moreover unique up to (non-unique) isomorphism. It is commonly referred to as the algebraic closure and denoted F̄. For example, the algebraic closure Q̄ of Q is called the field of algebraic numbers. The field F̄ is usually rather implicit since its construction requires the ultrafilter lemma, a set-theoretic axiom that is weaker than the axiom of choice. In this regard, the algebraic closure of Fq is exceptionally simple. It is the union of the finite fields containing Fq (the ones of order qn). For any algebraically closed field F of characteristic 0, the algebraic closure of the field F((t)) of Laurent series is the field of Puiseux series, obtained by adjoining roots of t.
== Fields with additional structure ==
Since fields are ubiquitous in mathematics and beyond, several refinements of the concept have been adapted to the needs of particular mathematical areas.
=== Ordered fields ===
A field F is called an ordered field if any two elements can be compared, so that x + y ≥ 0 and xy ≥ 0 whenever x ≥ 0 and y ≥ 0. For example, the real numbers form an ordered field, with the usual ordering ≥. The Artin–Schreier theorem states that a field can be ordered if and only if it is a formally real field, which means that any quadratic equation
{\displaystyle x_{1}^{2}+x_{2}^{2}+\dots +x_{n}^{2}=0}
only has the solution x1 = x2 = ⋯ = xn = 0. The set of all possible orders on a fixed field F is in bijection with the set of ring homomorphisms from the Witt ring W(F) of quadratic forms over F, to Z.
An Archimedean field is an ordered field such that for each element there exists a finite expression
1 + 1 + ⋯ + 1
whose value is greater than that element; that is, there are no infinite elements. Equivalently, the field contains no infinitesimals (nonzero elements smaller than all positive rational numbers); or, yet equivalently, the field is isomorphic to a subfield of R.
An ordered field is Dedekind-complete if all upper bounds, lower bounds (see Dedekind cut) and limits that should exist do exist. More formally, each bounded subset of F is required to have a least upper bound. Any complete field is necessarily Archimedean, since in any non-Archimedean field there is neither a greatest infinitesimal nor a least positive rational, whence the sequence 1/2, 1/3, 1/4, ..., every element of which is greater than every infinitesimal, has no limit.
Since every proper subfield of the reals also contains such gaps, R is the unique complete ordered field, up to isomorphism. Several foundational results in calculus follow directly from this characterization of the reals.
The hyperreals R* form an ordered field that is not Archimedean. It is an extension of the reals obtained by including infinite and infinitesimal numbers. These are larger, respectively smaller than any real number. The hyperreals form the foundational basis of non-standard analysis.
=== Topological fields ===
Another refinement of the notion of a field is a topological field, in which the set F is a topological space, such that all operations of the field (addition, multiplication, the maps a ↦ −a and a ↦ a−1) are continuous maps with respect to the topology of the space.
The topology of all the fields discussed below is induced from a metric, i.e., a function
d : F × F → R,
that measures a distance between any two elements of F.
The completion of F is another field in which, informally speaking, the "gaps" in the original field F are filled, if there are any. For example, any irrational number x, such as x = √2, is a "gap" in the rationals Q: it is a real number that can be approximated arbitrarily closely by rational numbers p/q, in the sense that the distance between x and p/q, given by the absolute value |x − p/q|, is as small as desired.
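The approximation of such a "gap" can be made concrete; the sketch below uses the classical recurrence p/q → (p + 2q)/(p + q), which generates continued-fraction convergents of √2 (an illustrative device, not the completion construction itself):

```python
from fractions import Fraction

# Sketch: successive rational approximations to the "gap" sqrt(2) in Q.
# The recurrence p/q -> (p + 2q)/(p + q) produces the continued-fraction
# convergents 1, 3/2, 7/5, 17/12, ...

def convergents(n):
    p, q = 1, 1
    out = []
    for _ in range(n):
        out.append(Fraction(p, q))
        p, q = p + 2 * q, p + q
    return out

# |c^2 - 2| shrinks at every step, so the convergents approach sqrt(2)
errs = [abs(float(c) ** 2 - 2) for c in convergents(6)]
assert all(errs[i + 1] < errs[i] for i in range(len(errs) - 1))
```

No single rational ever equals √2, which is exactly why the completion R contains elements that Q lacks.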
The following table lists some examples of this construction. The fourth column shows an example of a zero sequence, i.e., a sequence whose limit (for n → ∞) is zero.
The field Qp is used in number theory and p-adic analysis. The algebraic closure Q̄p carries a unique norm extending the one on Qp, but is not complete. The completion of this algebraic closure, however, is algebraically closed. Because of its rough analogy to the complex numbers, it is sometimes called the field of complex p-adic numbers and is denoted by Cp.
==== Local fields ====
The following topological fields are called local fields:
finite extensions of Qp (local fields of characteristic zero)
finite extensions of Fp((t)), the field of Laurent series over Fp (local fields of characteristic p).
These two types of local fields share some fundamental similarities. In this relation, the elements p ∈ Qp and t ∈ Fp((t)) (referred to as uniformizer) correspond to each other. The first manifestation of this is at an elementary level: the elements of both fields can be expressed as power series in the uniformizer, with coefficients in Fp. (However, since the addition in Qp is done using carrying, which is not the case in Fp((t)), these fields are not isomorphic.) The following facts show that this superficial similarity goes much deeper:
Any first-order statement that is true for almost all Qp is also true for almost all Fp((t)). An application of this is the Ax–Kochen theorem describing zeros of homogeneous polynomials in Qp.
Tamely ramified extensions of both fields are in bijection to one another.
Adjoining arbitrary p-power roots of p (in Qp), respectively of t (in Fp((t))), yields (infinite) extensions of these fields known as perfectoid fields. Strikingly, the Galois groups of these two fields are isomorphic, which is the first glimpse of a remarkable parallel between these two fields:
{\displaystyle \operatorname {Gal} \left(\mathbf {Q} _{p}\left(p^{1/p^{\infty }}\right)\right)\cong \operatorname {Gal} \left(\mathbf {F} _{p}((t))\left(t^{1/p^{\infty }}\right)\right).}
=== Differential fields ===
Differential fields are fields equipped with a derivation, i.e., they allow one to take derivatives of elements of the field. For example, the field R(X), together with the standard derivative of polynomials, forms a differential field. These fields are central to differential Galois theory, a variant of Galois theory dealing with linear differential equations.
== Galois theory ==
Galois theory studies algebraic extensions of a field by studying the symmetry in the arithmetic operations of addition and multiplication. An important notion in this area is that of finite Galois extensions F / E, which are, by definition, those that are separable and normal. The primitive element theorem shows that finite separable extensions are necessarily simple, i.e., of the form
F = E[X] / (f(X)),
where f is an irreducible polynomial (as above). For such an extension, being normal and separable means that all zeros of f are contained in F and that f has only simple zeros. The latter condition is always satisfied if E has characteristic 0.
For a finite Galois extension, the Galois group Gal(F/E) is the group of field automorphisms of F that are trivial on E (i.e., the bijections σ : F → F that preserve addition and multiplication and that send elements of E to themselves). The importance of this group stems from the fundamental theorem of Galois theory, which constructs an explicit one-to-one correspondence between the set of subgroups of Gal(F/E) and the set of intermediate extensions of the extension F/E. By means of this correspondence, group-theoretic properties translate into facts about fields. For example, if the Galois group of a Galois extension as above is not solvable (cannot be built from abelian groups), then the zeros of f cannot be expressed in terms of addition, multiplication, and radicals, i.e., expressions involving
{\displaystyle {\sqrt[{n}]{~}}}
. For example, the symmetric group Sn is not solvable for n ≥ 5. Consequently, as can be shown, the zeros of the following polynomials are not expressible by sums, products, and radicals. For the latter polynomial, this fact is known as the Abel–Ruffini theorem:
f(X) = X5 − 4X + 2 (and E = Q),
f(X) = Xn + an−1Xn−1 + ⋯ + a0 (where f is regarded as a polynomial in E(a0, ..., an−1), for some indeterminates ai, E is any field, and n ≥ 5).
The tensor product of fields is not usually a field. For example, a finite extension F / E of degree n is a Galois extension if and only if there is an isomorphism of F-algebras
F ⊗E F ≅ Fn.
This fact is the beginning of Grothendieck's Galois theory, a far-reaching extension of Galois theory applicable to algebro-geometric objects.
== Invariants of fields ==
Basic invariants of a field F include the characteristic and the transcendence degree of F over its prime field. The latter is defined as the maximal number of elements in F that are algebraically independent over the prime field. Two algebraically closed fields E and F are isomorphic precisely if these two data agree. This implies that any two uncountable algebraically closed fields of the same cardinality and the same characteristic are isomorphic. For example, Q̄p, Cp and C are isomorphic (but not isomorphic as topological fields).
=== Model theory of fields ===
In model theory, a branch of mathematical logic, two fields E and F are called elementarily equivalent if every mathematical statement that is true for E is also true for F and conversely. The mathematical statements in question are required to be first-order sentences (involving 0, 1, the addition and multiplication). A typical example, for n > 0, n an integer, is
φ(E) = "any polynomial of degree n in E has a zero in E"
The set of such formulas for all n expresses that E is algebraically closed.
The Lefschetz principle states that C is elementarily equivalent to any algebraically closed field F of characteristic zero. Moreover, any fixed statement φ holds in C if and only if it holds in any algebraically closed field of sufficiently high characteristic.
If U is an ultrafilter on a set I, and Fi is a field for every i in I, the ultraproduct of the Fi with respect to U is a field. It is denoted by
ulimi→∞ Fi,
since it behaves in several ways as a limit of the fields Fi: Łoś's theorem states that any first order statement that holds for all but finitely many Fi, also holds for the ultraproduct. Applied to the above sentence φ, this shows that there is an isomorphism
{\displaystyle \operatorname {ulim} _{p\to \infty }{\overline {\mathbf {F} }}_{p}\cong \mathbf {C} .}
The Ax–Kochen theorem mentioned above also follows from this and an isomorphism of the ultraproducts (in both cases over all primes p)
ulimp Qp ≅ ulimp Fp((t)).
In addition, model theory also studies the logical properties of various other types of fields, such as real closed fields or exponential fields (which are equipped with an exponential function exp : F → F×).
=== Absolute Galois group ===
For fields that are not algebraically closed (or not separably closed), the absolute Galois group Gal(F) is fundamentally important: extending the case of finite Galois extensions outlined above, this group governs all finite separable extensions of F. By elementary means, the group Gal(Fq) can be shown to be the Prüfer group, the profinite completion of Z. This statement subsumes the fact that the only algebraic extensions of Fq are the fields Fqn for n > 0, and that the Galois groups of these finite extensions are given by
Gal(Fqn / Fq) = Z/nZ.
A description in terms of generators and relations is also known for the Galois groups of p-adic number fields (finite extensions of Qp).
Representations of Galois groups and of related groups such as the Weil group are fundamental in many branches of arithmetic, such as the Langlands program. The cohomological study of such representations is done using Galois cohomology. For example, the Brauer group, which is classically defined as the group of central simple F-algebras, can be reinterpreted as a Galois cohomology group, namely
Br(F) = H2(F, Gm).
=== K-theory ===
Milnor K-theory is defined as
{\displaystyle K_{n}^{M}(F)=F^{\times }\otimes \cdots \otimes F^{\times }/\left\langle x\otimes (1-x)\mid x\in F\smallsetminus \{0,1\}\right\rangle .}
The norm residue isomorphism theorem, proved around 2000 by Vladimir Voevodsky, relates this to Galois cohomology by means of an isomorphism
{\displaystyle K_{n}^{M}(F)/p=H^{n}(F,\mu _{p}^{\otimes n}).}
Algebraic K-theory is related to the group of invertible matrices with coefficients in the given field. For example, the process of taking the determinant of an invertible matrix leads to an isomorphism K1(F) = F×. Matsumoto's theorem shows that K2(F) agrees with K2M(F). In higher degrees, K-theory diverges from Milnor K-theory and remains hard to compute in general.
== Applications ==
=== Linear algebra and commutative algebra ===
If a ≠ 0, then the equation
ax = b
has a unique solution x in a field F, namely
{\displaystyle x=a^{-1}b.}
This immediate consequence of the definition of a field is fundamental in linear algebra. For example, it is an essential ingredient of Gaussian elimination and of the proof that any vector space has a basis.
The theory of modules (the analogue of vector spaces over rings instead of fields) is much more complicated, because the above equation may have several or no solutions. In particular, systems of linear equations over a ring are much more difficult to solve than in the case of fields, even in the especially simple case of the ring Z of the integers.
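The contrast can be seen in a few lines; the helper names below are illustrative:

```python
from fractions import Fraction

# Sketch: in the field Q, a*x = b (a != 0) always has the unique solution
# x = a^{-1} * b; in the ring Z the same equation may have no solution at all.

def solve_in_Q(a, b):
    # Division always works in a field.
    return Fraction(b) / Fraction(a)

def solve_in_Z(a, b):
    # In Z, division must be exact; otherwise there is no solution.
    return b // a if b % a == 0 else None

assert solve_in_Q(2, 3) == Fraction(3, 2)   # x = 3/2 exists in Q
assert solve_in_Z(2, 3) is None             # 2x = 3 has no integer solution
assert solve_in_Z(2, 6) == 3                # but 2x = 6 does
```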
=== Finite fields: cryptography and coding theory ===
A widely applied cryptographic routine uses the fact that discrete exponentiation, i.e., computing
an = a ⋅ a ⋅ ⋯ ⋅ a (n factors, for an integer n ≥ 1)
in a (large) finite field Fq can be performed much more efficiently than the discrete logarithm, which is the inverse operation, i.e., determining the solution n to an equation
an = b.
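The asymmetry between the two directions can be sketched as follows; the prime, base, and exponent are illustrative toy values, far smaller than anything used in practice:

```python
# Sketch: discrete exponentiation in the prime field F_p is fast
# (square-and-multiply, built into Python's three-argument pow), while the
# discrete logarithm below is a brute-force search whose running time grows
# exponentially in the bit size of p.

p = 1009          # a small prime; real systems use far larger fields
g = 11            # an element of F_p^x, chosen here purely for illustration

def discrete_log(b, g, p):
    # Naive search for the smallest n with g^n = b (mod p).
    x = 1
    for n in range(p - 1):
        if x == b:
            return n
        x = (x * g) % p
    return None

b = pow(g, 345, p)            # fast, even for huge exponents
n = discrete_log(b, g, p)     # slow: may have to try up to p - 1 exponents
assert pow(g, n, p) == b
```

Cryptographic schemes rely on the search step becoming computationally infeasible once p has hundreds of digits.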
In elliptic curve cryptography, the multiplication in a finite field is replaced by the operation of adding points on an elliptic curve, i.e., the solutions of an equation of the form
y2 = x3 + ax + b.
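A minimal sketch of the point-addition law on such a curve over a small prime field follows; the curve parameters are illustrative and not a standardized cryptographic curve:

```python
# Sketch: the affine group law on the elliptic curve y^2 = x^3 + a*x + b
# over F_p. None encodes the point at infinity (the group identity).
# Requires Python 3.8+ for pow(x, -1, p) modular inverses.

p, a, b = 97, 2, 3

def ec_add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def on_curve(P):
    return P is None or (P[1] ** 2 - P[0] ** 3 - a * P[0] - b) % p == 0

P = (3, 6)     # 6^2 = 36 and 3^3 + 2*3 + 3 = 36, so P lies on the curve
assert on_curve(P)
assert on_curve(ec_add(P, P))
```

The sum of two curve points is again a curve point, which is what makes the solutions into a group usable in place of F_p^x.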
Finite fields are also used in coding theory and combinatorics.
=== Geometry: field of functions ===
Functions from a suitable topological space X into a field F can be added and multiplied pointwise, e.g., the product of two functions is defined by the product of their values within the domain:
(f ⋅ g)(x) = f(x) ⋅ g(x).
This makes these functions a commutative F-algebra.
To obtain a field of functions, one must consider algebras of functions that are integral domains. In this case the ratios of two functions, i.e., expressions of the form
{\displaystyle {\frac {f(x)}{g(x)}},}
form a field, called field of functions.
This occurs in two main cases. When X is a complex manifold, one considers the algebra of holomorphic functions, i.e., complex differentiable functions. Their ratios form the field of meromorphic functions on X.
The function field of an algebraic variety X (a geometric object defined as the common zeros of polynomial equations) consists of ratios of regular functions, i.e., ratios of polynomial functions on the variety. The function field of the n-dimensional space over a field F is F(x1, ..., xn), i.e., the field consisting of ratios of polynomials in n indeterminates. The function field of X is the same as the one of any open dense subvariety. In other words, the function field is insensitive to replacing X by a (slightly) smaller subvariety.
The function field is invariant under isomorphism and birational equivalence of varieties. It is therefore an important tool for the study of abstract algebraic varieties and for the classification of algebraic varieties. For example, the dimension, which equals the transcendence degree of F(X), is invariant under birational equivalence. For curves (i.e., the dimension is one), the function field F(X) is very close to X: if X is smooth and proper (the analogue of being compact), X can be reconstructed, up to isomorphism, from its field of functions. In higher dimension the function field remembers less, but still decisive, information about X. The study of function fields and their geometric meaning in higher dimensions is referred to as birational geometry. The minimal model program attempts to identify the simplest (in a certain precise sense) algebraic varieties with a prescribed function field.
=== Number theory: global fields ===
Global fields are in the limelight in algebraic number theory and arithmetic geometry.
They are, by definition, number fields (finite extensions of Q) or function fields over Fq (finite extensions of Fq(t)). As for local fields, these two types of fields share several similar features, even though they are of characteristic 0 and positive characteristic, respectively. This function field analogy can help to shape mathematical expectations, often first by understanding questions about function fields, and later treating the number field case. The latter is often more difficult. For example, the Riemann hypothesis concerning the zeros of the Riemann zeta function (open as of 2017) can be regarded as being parallel to the Weil conjectures (proven in 1974 by Pierre Deligne).
Cyclotomic fields are among the most intensely studied number fields. They are of the form Q(ζn), where ζn is a primitive nth root of unity, i.e., a complex number ζ that satisfies ζn = 1 and ζm ≠ 1 for all 0 < m < n. For n being a regular prime, Kummer used cyclotomic fields to prove Fermat's Last Theorem, which asserts the non-existence of rational nonzero solutions to the equation
xn + yn = zn.
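The defining property of a primitive nth root of unity can be checked numerically; the sketch below uses floating-point complex arithmetic, so the checks are approximate:

```python
import cmath

# Sketch: zeta_n = exp(2*pi*i/n) is a primitive nth root of unity, the
# number adjoined to Q to form the cyclotomic field Q(zeta_n).

def primitive_root(n):
    return cmath.exp(2j * cmath.pi / n)

z = primitive_root(5)
assert abs(z ** 5 - 1) < 1e-9                            # zeta^5 = 1
assert all(abs(z ** m - 1) > 0.5 for m in range(1, 5))   # zeta^m != 1 for 0 < m < 5
```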
Local fields are completions of global fields. Ostrowski's theorem asserts that the only completions of Q, a global field, are the local fields Qp and R. Studying arithmetic questions in global fields may sometimes be done by looking at the corresponding questions locally. This technique is called the local–global principle. For example, the Hasse–Minkowski theorem reduces the problem of finding rational solutions of quadratic equations to solving these equations in R and Qp, whose solutions can easily be described.
Unlike for local fields, the Galois groups of global fields are not known. Inverse Galois theory studies the (unsolved) problem whether any finite group is the Galois group Gal(F/Q) for some number field F. Class field theory describes the abelian extensions, i.e., ones with abelian Galois group, or equivalently the abelianized Galois groups of global fields. A classical statement, the Kronecker–Weber theorem, describes the maximal abelian extension Qab of Q: it is the field
Q(ζn, n ≥ 2)
obtained by adjoining all primitive nth roots of unity. Kronecker's Jugendtraum asks for a similarly explicit description of Fab of general number fields F. For imaginary quadratic fields,
{\displaystyle F=\mathbf {Q} ({\sqrt {-d}})}
, d > 0, the theory of complex multiplication describes Fab using elliptic curves. For general number fields, no such explicit description is known.
== Related notions ==
Beyond the additional structure that fields may enjoy, fields admit various other related notions. Since in any field 0 ≠ 1, any field has at least two elements. Nonetheless, there is a concept of field with one element, which is suggested to be a limit of the finite fields Fp, as p tends to 1. In addition to division rings, there are various other weaker algebraic structures related to fields such as quasifields, near-fields and semifields.
There are also proper classes with field structure, which are sometimes called Fields, with a capital 'F'. The surreal numbers form a Field containing the reals, and would be a field except for the fact that they are a proper class, not a set. The nimbers, a concept from game theory, form such a Field as well.
=== Division rings ===
Dropping one or several axioms in the definition of a field leads to other algebraic structures. As was mentioned above, commutative rings satisfy all field axioms except for the existence of multiplicative inverses. Dropping instead commutativity of multiplication leads to the concept of a division ring or skew field; sometimes associativity is weakened as well. Historically, division rings were sometimes referred to as fields, while fields were called "commutative fields". The only division rings that are finite-dimensional R-vector spaces are R itself, C (which is a field), and the quaternions H (in which multiplication is non-commutative). This result is known as the Frobenius theorem. The octonions O, for which multiplication is neither commutative nor associative, are a normed alternative division algebra, but are not a division ring. This fact was proved using methods of algebraic topology in 1958 by Michel Kervaire, Raoul Bott, and John Milnor.
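Non-commutativity of the quaternions H can be verified directly from the multiplication rules i² = j² = k² = ijk = −1; the tuple encoding below is an illustrative sketch:

```python
# Sketch: quaternion multiplication. Elements are 4-tuples (a, b, c, d)
# standing for a + b*i + c*j + d*k; the formula below expands the product
# using i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j.

def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)
assert qmul(i, j) == k                 # ij = k ...
assert qmul(j, i) == (0, 0, 0, -1)     # ... but ji = -k: multiplication is not commutative
```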
Wedderburn's little theorem states that all finite division rings are fields.
== Notes ==
== Citations ==
== References ==
== External links ==
In mathematics and its applications, a Sturm–Liouville problem is a second-order linear ordinary differential equation of the form
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} x}}\left[p(x){\frac {\mathrm {d} y}{\mathrm {d} x}}\right]+q(x)y=-\lambda w(x)y}
for given functions p(x), q(x) and w(x), together with some boundary conditions at extreme values of x. The goals of a given Sturm–Liouville problem are:
To find the λ for which there exists a non-trivial solution to the problem. Such values λ are called the eigenvalues of the problem.
For each eigenvalue λ, to find the corresponding solution y = y(x) of the problem. Such functions y are called the eigenfunctions associated to each λ.
Sturm–Liouville theory is the general study of Sturm–Liouville problems. In particular, for a "regular" Sturm–Liouville problem, it can be shown that there are an infinite number of eigenvalues each with a unique eigenfunction, and that these eigenfunctions form an orthonormal basis of a certain Hilbert space of functions.
This theory is important in applied mathematics, where Sturm–Liouville problems occur very frequently, particularly when dealing with separable linear partial differential equations. For example, in quantum mechanics, the one-dimensional time-independent Schrödinger equation is a Sturm–Liouville problem.
Sturm–Liouville theory is named after Jacques Charles François Sturm (1803–1855) and Joseph Liouville (1809–1882), who developed the theory.
== Main results ==
The main results in Sturm–Liouville theory apply to a Sturm–Liouville problem
on a finite interval [a, b] that is "regular". The problem is said to be regular if:
the coefficient functions p, q, w and the derivative p′ are all continuous on [a, b];
p(x) > 0 and w(x) > 0 for all x ∈ [a, b];
the problem has separated boundary conditions of the form
The function w = w(x), sometimes denoted r = r(x), is called the weight or density function.
The goals of a Sturm–Liouville problem are:
to find the eigenvalues: those λ for which there exists a non-trivial solution;
for each eigenvalue λ, to find the corresponding eigenfunction y = y(x).
For a regular Sturm–Liouville problem, a function y = y(x) is called a solution if it is continuously differentiable and satisfies the equation (1) at every x ∈ (a, b). In the case of more general p, q, w, the solutions must be understood in a weak sense.
The terms eigenvalue and eigenvector are used because the solutions correspond to the eigenvalues and eigenfunctions of a Hermitian differential operator in an appropriate Hilbert space of functions with inner product defined using the weight function. Sturm–Liouville theory studies the existence and asymptotic behavior of the eigenvalues, the corresponding qualitative theory of the eigenfunctions and their completeness in the function space.
The main result of Sturm–Liouville theory states that, for any regular Sturm–Liouville problem:
The eigenvalues λ1, λ2, … are real and can be numbered so that
{\displaystyle \lambda _{1}<\lambda _{2}<\cdots <\lambda _{n}<\cdots \to \infty .}
Corresponding to each eigenvalue λn is a unique (up to constant multiple) eigenfunction yn = yn(x) with exactly n − 1 zeros in (a, b), called the nth fundamental solution.
The normalized eigenfunctions yn form an orthonormal basis under the w-weighted inner product in the Hilbert space L2([a, b], w(x) dx); that is,
{\displaystyle \langle y_{n},y_{m}\rangle =\int _{a}^{b}y_{n}(x)y_{m}(x)w(x)\,\mathrm {d} x=\delta _{nm},}
where δnm is the Kronecker delta.
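For the simplest regular problem, −y″ = λy with y(0) = y(π) = 0 (so p = w = 1 and q = 0), the eigenfunctions are sin(nx) with eigenvalues n², and the orthogonality statement above can be verified numerically; the quadrature below is a plain trapezoid rule:

```python
import math

# Sketch: numeric check of eigenfunction orthogonality for -y'' = lambda*y,
# y(0) = y(pi) = 0, whose eigenfunctions are sin(n*x). The weight is w = 1,
# so the inner product is a plain L^2 integral over [0, pi].

def inner(f, g, a=0.0, b=math.pi, steps=10000):
    # Trapezoid-rule approximation of the integral of f(x)*g(x) over [a, b].
    h = (b - a) / steps
    s = 0.5 * (f(a) * g(a) + f(b) * g(b))
    for i in range(1, steps):
        x = a + i * h
        s += f(x) * g(x)
    return s * h

y1 = lambda x: math.sin(x)
y2 = lambda x: math.sin(2 * x)
assert abs(inner(y1, y2)) < 1e-6                  # distinct eigenfunctions are orthogonal
assert abs(inner(y1, y1) - math.pi / 2) < 1e-6    # ||sin||^2 = pi/2, so sqrt(2/pi)*sin(x) is normalized
```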
== Reduction to Sturm–Liouville form ==
The differential equation (1) is said to be in Sturm–Liouville form or self-adjoint form. All second-order linear homogeneous ordinary differential equations can be recast in the form on the left-hand side of (1) by multiplying both sides of the equation by an appropriate integrating factor (although the same is not true of second-order partial differential equations, or if y is a vector). Some examples are below.
=== Bessel equation ===
{\displaystyle x^{2}y''+xy'+\left(x^{2}-\nu ^{2}\right)y=0}
which can be written in Sturm–Liouville form (first by dividing through by x, then by collapsing the first two terms on the left into one term) as
{\displaystyle \left(xy'\right)'+\left(x-{\frac {\nu ^{2}}{x}}\right)y=0.}
=== Legendre equation ===
{\displaystyle \left(1-x^{2}\right)y''-2xy'+\nu (\nu +1)y=0}
which can be put into Sturm–Liouville form, since d/dx(1 − x2) = −2x, so the Legendre equation is equivalent to
{\displaystyle \left(\left(1-x^{2}\right)y'\right)'+\nu (\nu +1)y=0}
=== Example using an integrating factor ===
{\displaystyle x^{3}y''-xy'+2y=0}
Divide throughout by x3:
{\displaystyle y''-{\frac {1}{x^{2}}}y'+{\frac {2}{x^{3}}}y=0}
Multiplying throughout by an integrating factor of
{\displaystyle \mu (x)=\exp \left(\int -{\frac {dx}{x^{2}}}\right)=e^{{1}/{x}},}
gives
{\displaystyle e^{{1}/{x}}y''-{\frac {e^{{1}/{x}}}{x^{2}}}y'+{\frac {2e^{{1}/{x}}}{x^{3}}}y=0}
which can be put into Sturm–Liouville form since
{\displaystyle {\frac {d}{dx}}e^{{1}/{x}}=-{\frac {e^{{1}/{x}}}{x^{2}}}}
so the differential equation is equivalent to
{\displaystyle \left(e^{{1}/{x}}y'\right)'+{\frac {2e^{{1}/{x}}}{x^{3}}}y=0.}
=== Integrating factor for general second-order homogeneous equation ===
{\displaystyle P(x)y''+Q(x)y'+R(x)y=0}
Multiplying through by the integrating factor
{\displaystyle \mu (x)={\frac {1}{P(x)}}\exp \left(\int {\frac {Q(x)}{P(x)}}\,dx\right),}
and then collecting gives the Sturm–Liouville form:
{\displaystyle {\frac {d}{dx}}\left(\mu (x)P(x)y'\right)+\mu (x)R(x)y=0,}
or, explicitly:
{\displaystyle {\frac {d}{dx}}\left(\exp \left(\int {\frac {Q(x)}{P(x)}}\,dx\right)y'\right)+{\frac {R(x)}{P(x)}}\exp \left(\int {\frac {Q(x)}{P(x)}}\,dx\right)y=0.}
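What makes this μ work is that (μP)′ = μQ, so the first two terms collapse into a single derivative. A small SymPy sketch with generic symbolic P and Q (names chosen here purely for illustration) checks that condition:

```python
import sympy as sp

x = sp.symbols('x')
P, Q = sp.Function('P'), sp.Function('Q')

# mu(x) = (1/P) exp(int Q/P dx); keep the antiderivative unevaluated
antideriv = sp.Integral(Q(x)/P(x), x)
mu = sp.exp(antideriv)/P(x)

# The first two terms collapse into (mu P y')' exactly when (mu P)' = mu Q
lhs = sp.diff(mu*P(x), x)
rhs = mu*Q(x)
residual = sp.simplify(lhs - rhs)
print(residual)  # 0
```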
== Sturm–Liouville equations as self-adjoint differential operators ==
The mapping defined by:
{\displaystyle Lu=-{\frac {1}{w(x)}}\left({\frac {d}{dx}}\left[p(x)\,{\frac {du}{dx}}\right]+q(x)u\right)}
can be viewed as a linear operator L mapping a function u to another function Lu, and it can be studied in the context of functional analysis. In fact, equation (1) can be written as
{\displaystyle Lu=\lambda u.}
This is precisely the eigenvalue problem; that is, one seeks eigenvalues λ1, λ2, λ3,... and the corresponding eigenvectors u1, u2, u3,... of the L operator. The proper setting for this problem is the Hilbert space
{\displaystyle L^{2}([a,b],w(x)\,dx)}
with scalar product
{\displaystyle \langle f,g\rangle =\int _{a}^{b}{\overline {f(x)}}g(x)w(x)\,dx.}
In this space L is defined on sufficiently smooth functions which satisfy the above regular boundary conditions. Moreover, L is a self-adjoint operator:
{\displaystyle \langle Lf,g\rangle =\langle f,Lg\rangle .}
This can be seen formally by using integration by parts twice, where the boundary terms vanish by virtue of the boundary conditions. It then follows that the eigenvalues of a Sturm–Liouville operator are real and that eigenfunctions of L corresponding to different eigenvalues are orthogonal. However, this operator is unbounded and hence existence of an orthonormal basis of eigenfunctions is not evident. To overcome this problem, one looks at the resolvent
{\displaystyle \left(L-z\right)^{-1},\qquad z\in \mathbb {R} ,}
where z is not an eigenvalue. Then, computing the resolvent amounts to solving a nonhomogeneous equation, which can be done using the variation of parameters formula. This shows that the resolvent is an integral operator with a continuous symmetric kernel (the Green's function of the problem). As a consequence of the Arzelà–Ascoli theorem, this integral operator is compact and existence of a sequence of eigenvalues αn which converge to 0 and eigenfunctions which form an orthonormal basis follows from the spectral theorem for compact operators. Finally, note that
{\displaystyle \left(L-z\right)^{-1}u=\alpha u,\qquad Lu=\left(z+\alpha ^{-1}\right)u,}
are equivalent, so we may take
{\displaystyle \lambda =z+\alpha ^{-1}}
with the same eigenfunctions.
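The self-adjointness identity ⟨Lf, g⟩ = ⟨f, Lg⟩ can be illustrated in the simplest regular case. The following SymPy sketch takes p = w = 1, q = 0 on [0, π] with Dirichlet boundary conditions (all of these choices are made here for illustration, not taken from the text):

```python
import sympy as sp

x = sp.symbols('x')

def L(u):
    # L u = -u'' corresponds to p = w = 1, q = 0
    return -sp.diff(u, x, 2)

inner = lambda u, v: sp.integrate(u*v, (x, 0, sp.pi))

# Both functions vanish at 0 and pi, so the boundary terms drop out
f = sp.sin(x)
g = sp.sin(x) + sp.sin(2*x)

left, right = inner(L(f), g), inner(f, L(g))
print(left, right)  # both equal pi/2
```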
If the interval is unbounded, or if the coefficients have singularities at the boundary points, one calls L singular. In this case, the spectrum no longer consists of eigenvalues alone and can contain a continuous component. There is still an associated eigenfunction expansion (similar to Fourier series versus Fourier transform). This is important in quantum mechanics, since the one-dimensional time-independent Schrödinger equation is a special case of a Sturm–Liouville equation.
== Application to inhomogeneous second-order boundary value problems ==
Consider a general inhomogeneous second-order linear differential equation
{\displaystyle P(x)y''+Q(x)y'+R(x)y=f(x)}
for given functions P(x), Q(x), R(x), f(x). As before, this can be reduced to the Sturm–Liouville form Ly = f by writing a general Sturm–Liouville operator as:
{\displaystyle Lu={\frac {p}{w(x)}}u''+{\frac {p'}{w(x)}}u'+{\frac {q}{w(x)}}u,}
one solves the system:
{\displaystyle p=Pw,\quad p'=Qw,\quad q=Rw.}
It suffices to solve the first two equations, which amounts to solving (Pw)′ = Qw, or
{\displaystyle w'={\frac {Q-P'}{P}}w:=\alpha w.}
A solution is:
{\displaystyle w=\exp \left(\int \alpha \,dx\right),\quad p=P\exp \left(\int \alpha \,dx\right),\quad q=R\exp \left(\int \alpha \,dx\right).}
Given this transformation, one is left to solve:
{\displaystyle Ly=f.}
In general, if initial conditions at some point are specified, for example y(a) = 0 and y′(a) = 0, a second order differential equation can be solved using ordinary methods and the Picard–Lindelöf theorem ensures that the differential equation has a unique solution in a neighbourhood of the point where the initial conditions have been specified.
But if, in place of specifying initial values at a single point, it is desired to specify values at two different points (so-called boundary values), e.g. y(a) = 0 and y(b) = 1, the problem turns out to be much more difficult. Notice that by adding a suitable known differentiable function to y, whose values at a and b satisfy the desired boundary conditions, and substituting it into the proposed differential equation, it can be assumed without loss of generality that the boundary conditions are of the form y(a) = 0 and y(b) = 0.
Here, Sturm–Liouville theory comes into play: indeed, a large class of functions f can be expanded in terms of a series of orthonormal eigenfunctions ui of the associated Liouville operator with corresponding eigenvalues λi:
{\displaystyle f(x)=\sum _{i}\alpha _{i}u_{i}(x),\quad \alpha _{i}\in {\mathbb {R} }.}
Then a solution to the proposed equation is evidently:
{\displaystyle y=\sum _{i}{\frac {\alpha _{i}}{\lambda _{i}}}u_{i}.}
This solution will be valid only over the open interval a < x < b, and may fail at the boundaries.
=== Example: Fourier series ===
Consider the Sturm–Liouville problem:
{\displaystyle Lu=-{\frac {d^{2}u}{dx^{2}}}=\lambda u}
where the unknowns are λ and u(x). For boundary conditions, we take for example:
{\displaystyle u(0)=u(\pi )=0.}
Observe that if k is any integer, then the function
{\displaystyle u_{k}(x)=\sin kx}
is a solution with eigenvalue λ = k2. We know that the solutions of a Sturm–Liouville problem form an orthogonal basis, and we know from Fourier series that this set of sinusoidal functions is an orthogonal basis. Since orthogonal bases are always maximal (by definition) we conclude that the Sturm–Liouville problem in this case has no other eigenvectors.
Given the preceding, let us now solve the inhomogeneous problem
{\displaystyle Ly=x,\qquad x\in (0,\pi )}
with the same boundary conditions y(0) = y(π) = 0. In this case, we must expand f(x) = x as a Fourier series. The reader may check, either by integrating ∫ e^{ikx}x dx or by consulting a table of Fourier transforms, that we thus obtain
{\displaystyle Ly=\sum _{k=1}^{\infty }-2{\frac {\left(-1\right)^{k}}{k}}\sin kx.}
This particular Fourier series is troublesome because of its poor convergence properties. It is not clear a priori whether the series converges pointwise. However, since the Fourier coefficients are "square-summable", the Fourier series converges in L2, which is all we need for this particular theory to function. We mention for the interested reader that in this case we may rely on a result which says that Fourier series converge at every point of differentiability, and at jump points (the function x, considered as a periodic function, has a jump at π) converge to the average of the left and right limits (see convergence of Fourier series).
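The coefficient check suggested above is also easy to do numerically. This NumPy sketch (an independent verification) approximates b_k = (2/π)∫₀^π x sin(kx) dx with the trapezoid rule and compares it with −2(−1)^k/k:

```python
import numpy as np

def trapezoid(vals, grid):
    # basic trapezoid rule (kept local to avoid NumPy version differences)
    return float(np.sum((vals[1:] + vals[:-1])*np.diff(grid))/2)

xs = np.linspace(0, np.pi, 200001)
coeffs = {}
for k in range(1, 6):
    # b_k = (2/pi) * integral_0^pi x sin(kx) dx
    coeffs[k] = (2/np.pi)*trapezoid(xs*np.sin(k*xs), xs)
    assert abs(coeffs[k] - (-2*(-1)**k/k)) < 1e-6

print(round(coeffs[1], 6), round(coeffs[2], 6))  # 2.0 -1.0
```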
Therefore, by using formula (4), we obtain the solution:
{\displaystyle y=\sum _{k=1}^{\infty }-2{\frac {(-1)^{k}}{k^{3}}}\sin kx={\tfrac {1}{6}}\left(\pi ^{2}x-x^{3}\right).}
In this case, we could have found the answer using antidifferentiation, but this is no longer useful in most cases when the differential equation is in many variables.
== Application to partial differential equations ==
=== Normal modes ===
Certain partial differential equations can be solved with the help of Sturm–Liouville theory. Suppose we are interested in the vibrational modes of a thin membrane, held in a rectangular frame, 0 ≤ x ≤ L1, 0 ≤ y ≤ L2. The equation of motion for the vertical membrane's displacement, W(x,y,t) is given by the wave equation:
{\displaystyle {\frac {\partial ^{2}W}{\partial x^{2}}}+{\frac {\partial ^{2}W}{\partial y^{2}}}={\frac {1}{c^{2}}}{\frac {\partial ^{2}W}{\partial t^{2}}}.}
The method of separation of variables suggests looking first for solutions of the simple form W = X(x) × Y(y) × T(t). For such a function W the partial differential equation becomes X″/X + Y″/Y = 1/c2 T″/T. Since the three terms of this equation are functions of x, y, t separately, they must be constants. For example, the first term gives X″ = λX for a constant λ. The boundary conditions ("held in a rectangular frame") are W = 0 when x = 0, L1 or y = 0, L2 and define the simplest possible Sturm–Liouville eigenvalue problems as in the example, yielding the "normal mode solutions" for W with harmonic time dependence,
{\displaystyle W_{mn}(x,y,t)=A_{mn}\sin \left({\frac {m\pi x}{L_{1}}}\right)\sin \left({\frac {n\pi y}{L_{2}}}\right)\cos \left(\omega _{mn}t\right)}
where m and n are non-zero integers, Amn are arbitrary constants, and
{\displaystyle \omega _{mn}^{2}=c^{2}\left({\frac {m^{2}\pi ^{2}}{L_{1}^{2}}}+{\frac {n^{2}\pi ^{2}}{L_{2}^{2}}}\right).}
The functions Wmn form a basis for the Hilbert space of (generalized) solutions of the wave equation; that is, an arbitrary solution W can be decomposed into a sum of these modes, which vibrate at their individual frequencies ωmn. This representation may require a convergent infinite sum.
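That each W_mn really solves the wave equation with the stated frequency ω_mn can be verified symbolically. A SymPy sketch (the symbol names are chosen here, not taken from the text):

```python
import sympy as sp

x, y, t, c, L1, L2 = sp.symbols('x y t c L1 L2', positive=True)
m, n = sp.symbols('m n', positive=True)

omega = c*sp.sqrt(m**2*sp.pi**2/L1**2 + n**2*sp.pi**2/L2**2)
W = sp.sin(m*sp.pi*x/L1)*sp.sin(n*sp.pi*y/L2)*sp.cos(omega*t)

# residual of W_xx + W_yy - (1/c^2) W_tt, which should vanish identically
residual = sp.simplify(sp.diff(W, x, 2) + sp.diff(W, y, 2) - sp.diff(W, t, 2)/c**2)
print(residual)  # 0
```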
=== Second-order linear equation ===
Consider a linear second-order differential equation in one spatial dimension and first-order in time of the form:
{\displaystyle f(x){\frac {\partial ^{2}u}{\partial x^{2}}}+g(x){\frac {\partial u}{\partial x}}+h(x)u={\frac {\partial u}{\partial t}}+k(t)u,}
{\displaystyle u(a,t)=u(b,t)=0,\qquad u(x,0)=s(x).}
Separating variables, we assume that
{\displaystyle u(x,t)=X(x)T(t).}
Then our above partial differential equation may be written as:
{\displaystyle {\frac {{\hat {L}}X(x)}{X(x)}}={\frac {{\hat {M}}T(t)}{T(t)}}}
where
{\displaystyle {\hat {L}}=f(x){\frac {d^{2}}{dx^{2}}}+g(x){\frac {d}{dx}}+h(x),\qquad {\hat {M}}={\frac {d}{dt}}+k(t).}
Since, by definition, L̂ and X(x) are independent of time t and M̂ and T(t) are independent of position x, then both sides of the above equation must be equal to a constant:
{\displaystyle {\hat {L}}X(x)=\lambda X(x),\qquad X(a)=X(b)=0,\qquad {\hat {M}}T(t)=\lambda T(t).}
The first of these equations must be solved as a Sturm–Liouville problem in terms of the eigenfunctions Xn(x) and eigenvalues λn. The second of these equations can be analytically solved once the eigenvalues are known.
{\displaystyle {\frac {d}{dt}}T_{n}(t)={\bigl (}\lambda _{n}-k(t){\bigr )}T_{n}(t)}
{\displaystyle T_{n}(t)=a_{n}\exp \left(\lambda _{n}t-\int _{0}^{t}k(\tau )\,d\tau \right)}
{\displaystyle u(x,t)=\sum _{n}a_{n}X_{n}(x)\exp \left(\lambda _{n}t-\int _{0}^{t}k(\tau )\,d\tau \right)}
{\displaystyle a_{n}={\frac {{\bigl \langle }X_{n}(x),s(x){\bigr \rangle }}{{\bigl \langle }X_{n}(x),X_{n}(x){\bigr \rangle }}}}
where
{\displaystyle {\bigl \langle }y(x),z(x){\bigr \rangle }=\int _{a}^{b}y(x)z(x)w(x)\,dx,}
{\displaystyle w(x)={\frac {\exp \left(\int {\frac {g(x)}{f(x)}}\,dx\right)}{f(x)}}.}
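As a concrete special case (chosen here for illustration), take f = 1, g = h = 0 and k = 0 on (0, π): the equation becomes the heat equation u_t = u_xx, with eigenfunctions X_n(x) = sin nx, eigenvalues λ_n = −n², and weight w = 1. The sketch below recovers the coefficients a_n numerically for a simple, made-up initial profile s(x):

```python
import numpy as np

xs = np.linspace(0, np.pi, 20001)

def inner(u, v):
    # <u, v> = int_0^pi u v dx with weight w = 1, via the trapezoid rule
    prod = u*v
    return float(np.sum((prod[1:] + prod[:-1])*np.diff(xs))/2)

def coefficient(s_vals, n):
    # a_n = <X_n, s> / <X_n, X_n> with X_n = sin(n x)
    Xn = np.sin(n*xs)
    return inner(Xn, s_vals)/inner(Xn, Xn)

# initial data s(x) = sin x + 0.5 sin 3x; the projection recovers 1 and 0.5
s = np.sin(xs) + 0.5*np.sin(3*xs)
a1, a3 = coefficient(s, 1), coefficient(s, 3)
print(round(a1, 6), round(a3, 6))  # 1.0 0.5

# the series solution is then u(x, t) = a1*sin(x)*exp(-t) + a3*sin(3x)*exp(-9t)
```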
== Representation of solutions and numerical calculation ==
The Sturm–Liouville differential equation (1) with boundary conditions may be solved analytically, either exactly or to yield an approximation, by the Rayleigh–Ritz method or by the matrix-variational method of Gerck et al.
Numerically, a variety of methods are also available. In difficult cases, one may need to carry out the intermediate calculations to several hundred decimal places of accuracy in order to obtain the eigenvalues correctly to a few decimal places.
Shooting methods
Finite difference method
Spectral parameter power series method
=== Shooting methods ===
Shooting methods proceed by guessing a value of λ, solving an initial value problem defined by the boundary conditions at one endpoint, say, a, of the interval [a,b], comparing the value this solution takes at the other endpoint b with the other desired boundary condition, and finally increasing or decreasing λ as necessary to correct the original value. This strategy is not applicable for locating complex eigenvalues.
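As a minimal illustration of this idea (all numerical choices below are toy values, not taken from the text), the sketch applies shooting with bisection to −u″ = λu, u(0) = u(π) = 0, whose smallest eigenvalue is λ = 1:

```python
import numpy as np

def endpoint(lam, n=2000):
    # integrate u' = v, v' = -lam*u from x = 0 to pi with RK4,
    # starting from u(0) = 0, u'(0) = 1, and return u(pi; lam)
    h = np.pi/n
    u, v = 0.0, 1.0
    for _ in range(n):
        k1u, k1v = v, -lam*u
        k2u, k2v = v + h*k1v/2, -lam*(u + h*k1u/2)
        k3u, k3v = v + h*k2v/2, -lam*(u + h*k2u/2)
        k4u, k4v = v + h*k3v, -lam*(u + h*k3u)
        u += h*(k1u + 2*k2u + 2*k3u + k4u)/6
        v += h*(k1v + 2*k2v + 2*k3v + k4v)/6
    return u

# u(pi; lam) changes sign between these two guesses, so bisect on lam
lo, hi = 0.5, 2.5
for _ in range(50):
    mid = (lo + hi)/2
    if endpoint(lo)*endpoint(mid) <= 0:
        hi = mid
    else:
        lo = mid

est = (lo + hi)/2
print(round(est, 6))  # 1.0
```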
=== Spectral parameter power series method ===
The spectral parameter power series (SPPS) method makes use of a generalization of the following fact about homogeneous second-order linear ordinary differential equations: if y is a solution of equation (1) that does not vanish at any point of [a,b], then the function
{\displaystyle y(x)\int _{a}^{x}{\frac {dt}{p(t)y(t)^{2}}}}
is a solution of the same equation and is linearly independent from y. Further, all solutions are linear combinations of these two solutions. In the SPPS algorithm, one must begin with an arbitrary value λ∗0 (often λ∗0 = 0; it does not need to be an eigenvalue) and any solution y0 of (1) with λ = λ∗0 which does not vanish on [a,b]. (Ways to find an appropriate y0 and λ∗0 are discussed below.) Two sequences of functions X(n)(t), X̃(n)(t) on [a,b], referred to as iterated integrals, are defined recursively as follows. First, when n = 0, they are taken to be identically equal to 1 on [a,b]. To obtain the next functions they are multiplied alternately by 1/(py0²) and wy0² and integrated; specifically, for n > 0:
The resulting iterated integrals are now applied as coefficients in the following two power series in λ:
{\displaystyle u_{0}=y_{0}\sum _{k=0}^{\infty }\left(\lambda -\lambda _{0}^{*}\right)^{k}{\tilde {X}}^{(2k)},}
{\displaystyle u_{1}=y_{0}\sum _{k=0}^{\infty }\left(\lambda -\lambda _{0}^{*}\right)^{k}X^{(2k+1)}.}
Then for any λ (real or complex), u0 and u1 are linearly independent solutions of the corresponding equation (1). (The functions p(x) and q(x) take part in this construction through their influence on the choice of y0.)
Next one chooses coefficients c0 and c1 so that the combination y = c0u0 + c1u1 satisfies the first boundary condition (2). This is simple to do since X(n)(a) = 0 and X̃(n)(a) = 0, for n > 0. The values of X(n)(b) and X̃(n)(b) provide the values of u0(b) and u1(b) and the derivatives u′0(b) and u′1(b), so the second boundary condition (3) becomes an equation in a power series in λ. For numerical work one may truncate this series to a finite number of terms, producing a calculable polynomial in λ whose roots are approximations of the sought-after eigenvalues.
When λ = λ∗0, this reduces to the original construction described above for a solution linearly independent from a given one. The representations (5) and (6) also have theoretical applications in Sturm–Liouville theory.
=== Construction of a nonvanishing solution ===
The SPPS method can, itself, be used to find a starting solution y0. Consider the equation (py′)′ = μqy; i.e., q, w, and λ are replaced in (1) by 0, −q, and μ respectively. Then the constant function 1 is a nonvanishing solution corresponding to the eigenvalue μ0 = 0. While there is no guarantee that u0 or u1 will not vanish, the complex function y0 = u0 + iu1 will never vanish because two linearly-independent solutions of a regular Sturm–Liouville equation cannot vanish simultaneously as a consequence of the Sturm separation theorem. This trick gives a solution y0 of (1) for the value λ0 = 0. In practice if (1) has real coefficients, the solutions based on y0 will have very small imaginary parts which must be discarded.
== See also ==
Normal mode
Oscillation theory
Self-adjoint
Variation of parameters
Spectral theory of ordinary differential equations
Atkinson–Mingarelli theorem
== References ==
== Further reading ==
Gesztesy, Fritz; Nichols, Roger; Zinchenko, Maxim (2024-09-24). Sturm–Liouville Operators, Their Spectral Theory, and Some Applications. Providence, Rhode Island: American Mathematical Society. ISBN 978-1-4704-7666-3.
"Sturm–Liouville theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Hartman, Philip (2002). Ordinary Differential Equations (2 ed.). Philadelphia: SIAM. ISBN 978-0-89871-510-1.
Polyanin, A. D. & Zaitsev, V. F. (2003). Handbook of Exact Solutions for Ordinary Differential Equations (2 ed.). Boca Raton: Chapman & Hall/CRC Press. ISBN 1-58488-297-2.
Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0. (Chapter 5)
Teschl, Gerald (2009). Mathematical Methods in Quantum Mechanics; With Applications to Schrödinger Operators. Providence: American Mathematical Society. ISBN 978-0-8218-4660-5. (see Chapter 9 for singular Sturm–Liouville operators and connections with quantum mechanics)
Zettl, Anton (2005). Sturm–Liouville Theory. Providence: American Mathematical Society. ISBN 0-8218-3905-5.
Birkhoff, Garrett (1973). A source book in classical analysis. Cambridge, Massachusetts: Harvard University Press. ISBN 0-674-82245-5. (See Chapter 8, part B, for excerpts from the works of Sturm and Liouville and commentary on them.)
Kravchenko, Vladislav (2020). Direct and Inverse Sturm-Liouville Problems: A Method of Solution. Cham: Birkhäuser. ISBN 978-3-030-47848-3.
Solid mechanics (also known as mechanics of solids) is the branch of continuum mechanics that studies the behavior of solid materials, especially their motion and deformation under the action of forces, temperature changes, phase changes, and other external or internal agents.
Solid mechanics is fundamental for civil, aerospace, nuclear, biomedical and mechanical engineering, for geology, and for many branches of physics and chemistry such as materials science. It has specific applications in many other areas, such as understanding the anatomy of living beings, and the design of dental prostheses and surgical implants. One of the most common practical applications of solid mechanics is the Euler–Bernoulli beam equation. Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them.
Solid mechanics is a vast subject because of the wide range of solid materials available, such as steel, wood, concrete, biological materials, textiles, geological materials, and plastics.
== Fundamental aspects ==
A solid is a material that can support a substantial amount of shearing force over a given time scale during a natural or industrial process or action. This is what distinguishes solids from fluids: fluids, too, support normal forces (forces directed perpendicular to the material plane across which they act, the normal stress being the normal force per unit area of that plane), but a fluid cannot sustain a shearing force. Shearing forces, in contrast with normal forces, act parallel rather than perpendicular to the material plane, and the shearing force per unit area is called the shear stress.
Therefore, solid mechanics examines the shear stress, deformation and the failure of solid materials and structures.
The most common topics covered in solid mechanics include:
stability of structures - examining whether structures can return to a given equilibrium after disturbance or partial/complete failure, see Structure mechanics
dynamical systems and chaos - dealing with mechanical systems highly sensitive to their given initial position
thermomechanics - analyzing materials with models derived from principles of thermodynamics
biomechanics - solid mechanics applied to biological materials e.g. bones, heart tissue
geomechanics - solid mechanics applied to geological materials e.g. ice, soil, rock
vibrations of solids and structures - examining vibration and wave propagation from vibrating particles and structures i.e. vital in mechanical, civil, mining, aeronautical, maritime/marine, aerospace engineering
fracture and damage mechanics - dealing with crack-growth mechanics in solid materials
composite materials - solid mechanics applied to materials made up of more than one compound e.g. reinforced plastics, reinforced concrete, fiber glass
variational formulations and computational mechanics - numerical solutions to mathematical equations arising from various branches of solid mechanics e.g. finite element method (FEM)
experimental mechanics - design and analysis of experimental methods to examine the behavior of solid materials and structures
== Relationship to continuum mechanics ==
As shown in the following table, solid mechanics inhabits a central place within continuum mechanics. The field of rheology presents an overlap between solid and fluid mechanics.
== Response models ==
A material has a rest shape, and its shape departs from the rest shape due to stress. The amount of departure from the rest shape is called deformation, and the ratio of deformation to original size is called strain. If the applied stress is sufficiently low (or the imposed strain is small enough), almost all solid materials behave in such a way that the strain is directly proportional to the stress; the coefficient of proportionality is called the modulus of elasticity. This region of deformation is known as the linearly elastic region.
It is most common for analysts in solid mechanics to use linear material models, due to ease of computation. However, real materials often exhibit non-linear behavior. As new materials are used and old ones are pushed to their limits, non-linear material models are becoming more common.
These are basic models that describe how a solid responds to an applied stress:
Elasticity – When an applied stress is removed, the material returns to its undeformed state. Linearly elastic materials, those that deform proportionally to the applied load, can be described by the linear elasticity equations such as Hooke's law.
Viscoelasticity – These are materials that behave elastically but also have damping: when stress is applied and removed, work has to be done against the damping effects and is converted into heat within the material, resulting in a hysteresis loop in the stress–strain curve. This implies that the material response is time-dependent.
Plasticity – Materials that behave elastically generally do so when the applied stress is less than a yield value. When the stress is greater than the yield stress, the material behaves plastically and does not return to its previous state. That is, deformation that occurs after yield is permanent.
Viscoplasticity - Combines theories of viscoelasticity and plasticity and applies to materials like gels and mud.
Thermoelasticity - There is coupling of mechanical with thermal responses. In general, thermoelasticity is concerned with elastic solids under conditions that are neither isothermal nor adiabatic. The simplest theory involves the Fourier's law of heat conduction, as opposed to advanced theories with physically more realistic models.
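The distinction between linear elasticity and plasticity above can be sketched in a few lines of code (the modulus and yield stress below are made-up example numbers, not data from the text): stress follows Hooke's law σ = Eε up to the yield stress, after which an elastic-perfectly-plastic model caps it.

```python
def stress_elastic(strain, E=200e9):
    """Hooke's law for a linearly elastic material (E in Pa)."""
    return E*strain

def stress_elastic_plastic(strain, E=200e9, sigma_y=250e6):
    """Elastic up to the yield stress, then perfectly plastic (no hardening)."""
    trial = E*strain
    return max(-sigma_y, min(sigma_y, trial))

print(stress_elastic(1e-4))          # about 20 MPa: inside the linear region
print(stress_elastic_plastic(5e-3))  # capped at the 250 MPa yield value
```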
== Timeline ==
1452–1519 Leonardo da Vinci made many contributions
1638: Galileo Galilei published the book "Two New Sciences" in which he examined the failure of simple structures
1660: Hooke's law by Robert Hooke
1687: Isaac Newton published "Philosophiae Naturalis Principia Mathematica" which contains Newton's laws of motion
1750: Euler–Bernoulli beam equation
1700–1782: Daniel Bernoulli introduced the principle of virtual work
1707–1783: Leonhard Euler developed the theory of buckling of columns
1826: Claude-Louis Navier published a treatise on the elastic behaviors of structures
1873: Carlo Alberto Castigliano presented his dissertation "Intorno ai sistemi elastici", which contains his theorem for computing displacement as partial derivative of the strain energy. This theorem includes the method of least work as a special case
1874: Otto Mohr formalized the idea of a statically indeterminate structure.
1922: Timoshenko corrected the Euler–Bernoulli beam equation
1936: Hardy Cross' publication of the moment distribution method, an important innovation in the design of continuous frames.
1941: Alexander Hrennikoff solved the discretization of plane elasticity problems using a lattice framework
1942: R. Courant divided a domain into finite subregions
1956: J. Turner, R. W. Clough, H. C. Martin, and L. J. Topp's paper on the "Stiffness and Deflection of Complex Structures" introduces the name "finite-element method" and is widely recognized as the first comprehensive treatment of the method as it is known today
== See also ==
Strength of materials - Specific definitions and the relationships between stress and strain.
Applied mechanics
Materials science
Continuum mechanics
Fracture mechanics
Impact (mechanics)
Solid-state physics
Rigid body
== References ==
=== Notes ===
=== Bibliography ===
L.D. Landau, E.M. Lifshitz, Course of Theoretical Physics: Theory of Elasticity Butterworth-Heinemann, ISBN 0-7506-2633-X
J.E. Marsden, T.J. Hughes, Mathematical Foundations of Elasticity, Dover, ISBN 0-486-67865-2
P.C. Chou, N. J. Pagano, Elasticity: Tensor, Dyadic, and Engineering Approaches, Dover, ISBN 0-486-66958-0
R.W. Ogden, Non-linear Elastic Deformation, Dover, ISBN 0-486-69648-0
S. Timoshenko and J.N. Goodier, Theory of Elasticity, 3rd ed., New York: McGraw-Hill, 1970.
G.A. Holzapfel, Nonlinear Solid Mechanics: A Continuum Approach for Engineering, Wiley, 2000
A.I. Lurie, Theory of Elasticity, Springer, 1999.
L.B. Freund, Dynamic Fracture Mechanics, Cambridge University Press, 1990.
R. Hill, The Mathematical Theory of Plasticity, Oxford University, 1950.
J. Lubliner, Plasticity Theory, Macmillan Publishing Company, 1990.
J. Ignaczak, M. Ostoja-Starzewski, Thermoelasticity with Finite Wave Speeds, Oxford University Press, 2010.
D. Bigoni, Nonlinear Solid Mechanics: Bifurcation Theory and Material Instability, Cambridge University Press, 2012.
Y. C. Fung, Pin Tong and Xiaohong Chen, Classical and Computational Solid Mechanics, 2nd Edition, World Scientific Publishing, 2017, ISBN 978-981-4713-64-1.
In mathematics, a square-integrable function, also called a quadratically integrable function, L^2 function, or square-summable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. Thus, square-integrability on the real line (−∞, +∞) is defined as follows.
One may also speak of quadratic integrability over bounded intervals such as [a, b] for a ≤ b.
An equivalent definition is to say that the square of the function itself (rather than of its absolute value) is Lebesgue integrable. For this to be true, the integrals of the positive and negative portions of the real part must both be finite, as well as those for the imaginary part.
The vector space of (equivalence classes of) square integrable functions (with respect to Lebesgue measure) forms the L^p space with p = 2. Among the L^p spaces, the class of square integrable functions is unique in being compatible with an inner product, which allows notions like angle and orthogonality to be defined. Along with this inner product, the square integrable functions form a Hilbert space, since all of the L^p spaces are complete under their respective p-norms.
Often the term is used not to refer to a specific function, but to equivalence classes of functions that are equal almost everywhere.
== Properties ==
The square integrable functions (in the sense mentioned in which a "function" actually means an equivalence class of functions that are equal almost everywhere) form an inner product space with inner product given by
{\displaystyle \langle f,g\rangle =\int _{A}f(x){\overline {g(x)}}\,\mathrm {d} x,}
where f and g are square integrable functions, {\overline {g(x)}} is the complex conjugate of g(x), and A is the set over which one integrates—in the first definition (given in the introduction above), A is (−∞, +∞); in the second, A is [a, b].
Since |a|² = a·ā, square integrability is the same as saying ⟨f, f⟩ < ∞.
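A quick symbolic illustration of this inner product (the functions and interval are example choices, not from the text): sin x has finite ⟨f, f⟩ on [0, 2π], and is orthogonal to cos x there.

```python
import sympy as sp

x = sp.symbols('x', real=True)

# inner product <f, g> = int f * conj(g) dx over A = [0, 2*pi]
inner = lambda f, g: sp.integrate(f*sp.conjugate(g), (x, 0, 2*sp.pi))

f, g = sp.sin(x), sp.cos(x)
norm_sq = inner(f, f)   # finite, so f is square integrable on [0, 2*pi]
cross = inner(f, g)     # zero, so f and g are orthogonal
print(norm_sq, cross)   # pi 0
```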
It can be shown that square integrable functions form a complete metric space under the metric induced by the inner product defined above.
A complete metric space is also called a Cauchy space, because sequences in such metric spaces converge if and only if they are Cauchy.
A space that is complete under the metric induced by a norm is a Banach space.
Therefore, the space of square integrable functions is a Banach space, under the metric induced by the norm, which in turn is induced by the inner product.
As we have the additional property of the inner product, this is specifically a Hilbert space, because the space is complete under the metric induced by the inner product.
This inner product space is conventionally denoted by (L₂, ⟨·,·⟩₂) and many times abbreviated as L₂.
Note that L₂ denotes the set of square integrable functions, but no selection of metric, norm or inner product is specified by this notation. The set, together with the specific inner product ⟨·,·⟩₂, specifies the inner product space.
The space of square integrable functions is the L^p space in which p = 2.
== Examples ==
The function 1/x^n, defined on (0, 1), is in L^2 for n < 1/2 but not for n = 1/2.
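This threshold behaviour at n = 1/2 can be confirmed with a computer algebra system. A SymPy sketch (an independent verification, taking n = 1/4 as a sample value below the threshold):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# n = 1/4 < 1/2: the integral of (x^-n)^2 over (0, 1) converges
finite = sp.integrate((x**sp.Rational(-1, 4))**2, (x, 0, 1))
print(finite)  # 2

# n = 1/2: the integrand becomes 1/x and the integral diverges
divergent = sp.integrate((x**sp.Rational(-1, 2))**2, (x, 0, 1))
print(divergent)  # oo
```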
The function 1/x, defined on [1, ∞), is square-integrable.
Bounded functions, defined on [0, 1], are square-integrable. These functions are also in L^p, for any value of p.
=== Non-examples ===
The function 1/x, defined on [0, 1] (where the value at 0 is arbitrary), is not square-integrable. Furthermore, this function is not in L^p for any value of p in [1, ∞).
== See also ==
Inner product space
{\displaystyle L^{p}} space – Function spaces generalizing finite-dimensional p-norm spaces
== References ==
Q methodology is a research method used in psychology and in social sciences to study people's "subjectivity"—that is, their viewpoint. Q was developed by psychologist William Stephenson. It has been used both in clinical settings for assessing a patient's progress over time (intra-rater comparison), as well as in research settings to examine how people think about a specific topic (inter-rater comparisons).
== Technical overview ==
The name "Q" comes from the form of factor analysis that is used to analyze the data. Normal factor analysis, called "R method," involves finding correlations between variables (say, height and age) across a sample of subjects. Q, on the other hand, looks for correlations between subjects across a sample of variables. Q factor analysis reduces the many individual viewpoints of the subjects down to a few "factors," which are claimed to represent shared ways of thinking. It is sometimes said that Q factor analysis is R factor analysis with the data table turned sideways. While helpful as a heuristic for understanding Q, this explanation may be misleading, as most Q methodologists argue that for mathematical reasons no one data matrix would be suitable for analysis with both Q and R.
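The transposed correlation step can be sketched in a few lines (the subjects, statements, and scores below are invented purely for illustration): instead of correlating variables across people, we correlate people across their rankings of the same statements.

```python
def pearson(xs, ys):
    # Plain Pearson correlation between two equal-length score lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Rows are subjects, columns are statement rankings -- a Q-style matrix,
# transposed relative to R method: we correlate people, not variables.
sorts = {
    "A": [3, 2, -1, -3, 0],
    "B": [3, 1, -1, -3, 0],    # a viewpoint similar to A's
    "C": [-3, -2, 1, 3, 0],    # roughly the mirror image of A's
}
r_ab = pearson(sorts["A"], sorts["B"])
r_ac = pearson(sorts["A"], sorts["C"])
```

Subjects A and B would load on the same factor; C's near-perfect negative correlation marks an opposing viewpoint.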
The data for Q factor analysis come from a series of "Q sorts" performed by one or more subjects. A Q sort is a ranking of variables—typically presented as statements printed on small cards—according to some "condition of instruction." For example, in a Q study of people's views of a celebrity, a subject might be given statements like "He is a deeply religious man" and "He is a liar," and asked to sort them from "most like how I think about this celebrity" to "least like how I think about this celebrity." The use of ranking, rather than asking subjects to rate their agreement with statements individually, is meant to capture the idea that people think about ideas in relation to other ideas, rather than in isolation. Usually, this ranking is done on a score sheet with the most salient options at the extreme ends of the sheet, such as "most agree" and "most disagree". This score sheet is usually in the form of a bell curve where a respondent can place most Q sorts at the middle and the fewest Q sorts at the far ends of the score sheets.
The sample of statements for a Q sort is drawn from and claimed by the researcher to be representative of a "concourse"—the sum of all things people say or think about the issue being investigated. Commonly, Q methodologists use a structured sampling approach in order to try and represent the full breadth of the concourse.
One salient difference between Q and other social science research methodologies, such as surveys, is that it typically uses many fewer subjects. This can be a strength, as Q is sometimes used with a single subject, and it makes research far less expensive. In such cases, a person will rank the same set of statements under different conditions of instruction. For example, someone might be given a set of statements about personality traits and then asked to rank them according to how well they describe herself, her ideal self, her father, her mother, etc. Working with a single individual is particularly relevant in the study of how an individual's rankings change over time and this was the first use of Q methodology. As Q methodology works with a small non-representative sample, conclusions are limited to those who participated in the study.
In studies of intelligence, Q factor analysis can generate consensus-based assessment (CBA) scores as direct measures. Alternatively, the unit of measurement of a person in this context is their factor loading for a Q-sort they perform. Factors represent norms with respect to schemata. The individual who gains the highest factor loading on an operant factor is the person most able to conceive the norm for the factor. What the norm means is a matter, always, for conjecture and refutation (Popper). It may be indicative of the wisest solution, or the most responsible, the most important, or an optimized-balanced solution. These are all untested hypotheses that require future study.
An alternative method that determines the similarity among subjects somewhat like Q methodology, as well as the cultural "truth" of the statements used in the test, is Cultural Consensus Theory.
The "Q sort" data collection procedure is traditionally done using a paper template and the sample of statements or other stimuli printed on individual cards. However, there are also computer software applications for conducting online Q sorts. For example, UC Riverside's Riverside Situational Q-sort (RSQ) claims to measure the psychological properties of situations. Their International Situations Project uses this university-developed web-based application to explore the psychologically salient aspects of situations and how those aspects may differ across cultures. To date there has been no study of differences in sorts produced by use of computer-based vs. physical sorting.
One Q-sort should produce two sets of data. The first is the physical distribution of sorted objects. The second is either an ongoing 'think-out-loud' narrative or a discussion that immediately follows the sorting exercise. The purpose of these narratives was, in the first instance, to elicit discussion of the reasons for particular placements. While the relevance of this qualitative data is often suppressed in current uses of Q-methodology, the modes of reasoning behind placement of an item can be more analytically relevant than the absolute placement of cards.
== Application ==
Q-methodology has been used as a research tool in a wide variety of disciplines including nursing, veterinary medicine, public health, transportation, education, rural sociology, hydrology, mobile communication, and even robotics.
The methodology is particularly useful when researchers wish to understand and describe the variety of subjective viewpoints on an issue.
== Validation ==
Some information on validation of the method is available. Furthermore, the issue of validity concerning the Q-method has been discussed variously. However, Lundberg et al. point out that "[s]ince participants’ Q sorts are neither right nor wrong, but constructed through respondents’ rank-ordering of self-referent items, validity in line with quantitative tenets of research is of no concern in Q".
== Criticism of Q methodology ==
In 2013, an article was published under the title "Overly ambitious: contributions and current status of Q methodology" written by Jarl K. Kampen & Peter Tamás. Kampen & Tamás state that "Q methodology neither delivers its promised insight into human subjectivity nor accounts adequately for threats to the validity of the claims it can legitimately make". This in turn makes the method, according to the authors, "inappropriate for its declared purpose".
In response to Kampen & Tamás's criticism, Steven R. Brown, Stentor Danielson & Job van Exel published the response article "Overly ambitious critics and the Medici Effect: a reply to Kampen and Tamás". Brown et al. state that since its inception, Q methodology "has been a recurring target of hastily assembled critiques" which have served no other purpose than to misinform other researchers and readers. Due to the amount of criticism towards Q methodology, Brown et al. gather these criticisms under the term the Medici Effect, named after the famed family that denied Galileo Galilei's evidence while refusing to look through his telescope.
Brown et al. continue by responding to certain points of Kampen & Tamás's criticisms:
On the nature of subjectivity
Concourse and Q samples
Factor analysis and interpretation
The forced Q-sort distribution
Items:persons ratio
Researcher bias
Miscellany
One point which the authors point out in section 3 is that Kampen & Tamás seek to claim that "the limits in the number of factors that Q can produce" defies logic because "Q can identify no more factors than there are Q statements". This argument, however, Brown et al. "have never before encountered". The authors continue on this argument by stating that "the data of Q methodology are not responses to individual statements alone, but more importantly in their relationships, as when they are rank-ordered".
In their conclusion, Brown et al. point out that, much like Medici's refusal to look through Galileo's telescope, "these critics have failed to engage personally with Q in order to see if their abstract critiques hold up in practice".
== See also ==
Card sorting
Factor analysis
Group concept mapping
Validation and verification
Varimax rotation
== References ==
In numerical analysis, one of the most important problems is designing efficient and stable algorithms for finding the eigenvalues of a matrix. These eigenvalue algorithms may also find eigenvectors.
== Eigenvalues and eigenvectors ==
Given an n × n square matrix A of real or complex numbers, an eigenvalue λ and its associated generalized eigenvector v are a pair obeying the relation
{\displaystyle \left(A-\lambda I\right)^{k}{\mathbf {v} }=0,}
where v is a nonzero n × 1 column vector, I is the n × n identity matrix, k is a positive integer, and both λ and v are allowed to be complex even when A is real. When k = 1, the vector is called simply an eigenvector, and the pair is called an eigenpair. In this case, Av = λv. Any eigenvalue λ of A has ordinary eigenvectors associated to it, for if k is the smallest integer such that (A − λI)k v = 0 for a generalized eigenvector v, then (A − λI)k−1 v is an ordinary eigenvector. The value k can always be taken as less than or equal to n. In particular, (A − λI)n v = 0 for all generalized eigenvectors v associated with λ.
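A minimal sketch of the distinction (the matrix is chosen for illustration): for the defective matrix below, v = (0, 1) satisfies (A − 5I)²v = 0 but (A − 5I)v ≠ 0, and the nonzero image (A − 5I)v is an ordinary eigenvector, exactly as the argument above says.

```python
# lambda = 5 has algebraic multiplicity 2 here, but only a one-dimensional
# eigenspace, so a k = 2 generalized eigenvector is needed.
A = [[5.0, 1.0], [0.0, 5.0]]
lam = 5.0
B = [[A[i][j] - (lam if i == j else 0.0) for j in range(2)] for i in range(2)]

def apply(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

v = [0.0, 1.0]        # generalized eigenvector: (A - 5I)^2 v = 0
w = apply(B, v)       # (A - 5I) v -- a nonzero ordinary eigenvector
```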
For each eigenvalue λ of A, the kernel ker(A − λI) consists of all eigenvectors associated with λ (along with 0), called the eigenspace of λ, while the vector space ker((A − λI)n) consists of all generalized eigenvectors, and is called the generalized eigenspace. The geometric multiplicity of λ is the dimension of its eigenspace. The algebraic multiplicity of λ is the dimension of its generalized eigenspace. The latter terminology is justified by the equation
{\displaystyle p_{A}\left(z\right)=\det \left(zI-A\right)=\prod _{i=1}^{k}(z-\lambda _{i})^{\alpha _{i}},}
where det is the determinant function, the λi are all the distinct eigenvalues of A and the αi are the corresponding algebraic multiplicities. The function pA(z) is the characteristic polynomial of A. So the algebraic multiplicity is the multiplicity of the eigenvalue as a zero of the characteristic polynomial. Since any eigenvector is also a generalized eigenvector, the geometric multiplicity is less than or equal to the algebraic multiplicity. The algebraic multiplicities sum up to n, the degree of the characteristic polynomial. The equation pA(z) = 0 is called the characteristic equation, as its roots are exactly the eigenvalues of A. By the Cayley–Hamilton theorem, A itself obeys the same equation: pA(A) = 0. As a consequence, the columns of the matrix
{\textstyle \prod _{i\neq j}(A-\lambda _{i}I)^{\alpha _{i}}} must be either 0 or generalized eigenvectors of the eigenvalue λj, since they are annihilated by {\displaystyle (A-\lambda _{j}I)^{\alpha _{j}}}. In fact, the column space is the generalized eigenspace of λj.
Any collection of generalized eigenvectors of distinct eigenvalues is linearly independent, so a basis for all of Cn can be chosen consisting of generalized eigenvectors. More particularly, this basis {vi}ni=1 can be chosen and organized so that
if vi and vj have the same eigenvalue, then so does vk for each k between i and j, and
if vi is not an ordinary eigenvector, and if λi is its eigenvalue, then (A − λiI)vi = vi−1 (in particular, v1 must be an ordinary eigenvector).
If these basis vectors are placed as the column vectors of a matrix V = [v1 v2 ⋯ vn], then V can be used to convert A to its Jordan normal form:
{\displaystyle V^{-1}AV={\begin{bmatrix}\lambda _{1}&\beta _{1}&0&\ldots &0\\0&\lambda _{2}&\beta _{2}&\ldots &0\\0&0&\lambda _{3}&\ldots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\ldots &\lambda _{n}\end{bmatrix}},}
where the λi are the eigenvalues, βi = 1 if (A − λi+1I)vi+1 = vi and βi = 0 otherwise.
More generally, if W is any invertible matrix, and λ is an eigenvalue of A with generalized eigenvector v, then (W−1AW − λI)k W−kv = 0. Thus λ is an eigenvalue of W−1AW with generalized eigenvector W−kv. That is, similar matrices have the same eigenvalues.
=== Normal, Hermitian, and real-symmetric matrices ===
The adjoint M* of a complex matrix M is the transpose of the conjugate of M. A square matrix A is called normal if it commutes with its adjoint: A*A = AA*. It is called Hermitian if it is equal to its adjoint: A* = A. All Hermitian matrices are normal. If A has only real elements, then the adjoint is just the transpose, and A is Hermitian if and only if it is symmetric. When applied to column vectors, the adjoint can be used to define the canonical inner product on Cn: w ⋅ v = w* v. Normal, Hermitian, and real-symmetric matrices have several useful properties:
Every generalized eigenvector of a normal matrix is an ordinary eigenvector.
Any normal matrix is similar to a diagonal matrix, since its Jordan normal form is diagonal.
Eigenvectors of distinct eigenvalues of a normal matrix are orthogonal.
The null space and the image (or column space) of a normal matrix are orthogonal to each other.
For any normal matrix A, Cn has an orthonormal basis consisting of eigenvectors of A. The corresponding matrix of eigenvectors is unitary.
The eigenvalues of a Hermitian matrix are real, since (λ − λ)v = (A* − A)v = (A − A)v = 0 for a non-zero eigenvector v.
If A is real, there is an orthonormal basis for Rn consisting of eigenvectors of A if and only if A is symmetric.
It is possible for a real or complex matrix to have all real eigenvalues without being Hermitian. For example, a real triangular matrix has its eigenvalues along its diagonal, but in general is not symmetric.
== Condition number ==
Any problem of numeric calculation can be viewed as the evaluation of some function f for some input x. The condition number κ(f, x) of the problem is the ratio of the relative error in the function's output to the relative error in the input, and varies with both the function and the input. The condition number describes how error grows during the calculation. Its base-10 logarithm tells how many fewer digits of accuracy exist in the result than existed in the input. The condition number is a best-case scenario. It reflects the instability built into the problem, regardless of how it is solved. No algorithm can ever produce more accurate results than indicated by the condition number, except by chance. However, a poorly designed algorithm may produce significantly worse results. For example, as mentioned below, the problem of finding eigenvalues for normal matrices is always well-conditioned. However, the problem of finding the roots of a polynomial can be very ill-conditioned. Thus eigenvalue algorithms that work by finding the roots of the characteristic polynomial can be ill-conditioned even when the problem is not.
For the problem of solving the linear equation Av = b where A is invertible, the matrix condition number κ(A−1, b) is given by ||A||op||A−1||op, where || ||op is the operator norm subordinate to the normal Euclidean norm on Cn. Since this number is independent of b and is the same for A and A−1, it is usually just called the condition number κ(A) of the matrix A. This value κ(A) is also the absolute value of the ratio of the largest singular value of A to its smallest. If A is unitary, then ||A||op = ||A−1||op = 1, so κ(A) = 1. For general matrices, the operator norm is often difficult to calculate. For this reason, other matrix norms are commonly used to estimate the condition number.
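For a 2×2 matrix this can be made concrete: the singular values of A are the square roots of the eigenvalues of AᵀA, which the quadratic formula gives in closed form, so κ(A) is directly computable (a sketch; the helper name is ours).

```python
import math

def cond_2x2(a, b, c, d):
    """kappa(A) = sigma_max / sigma_min for A = [[a, b], [c, d]],
    via the eigenvalues of A^T A computed with the quadratic formula."""
    t = a*a + b*b + c*c + d*d          # trace of A^T A
    det = (a*d - b*c) ** 2             # det(A^T A) = det(A)^2
    disc = math.sqrt(max(t*t - 4*det, 0.0))
    s_max = math.sqrt((t + disc) / 2)
    s_min = math.sqrt((t - disc) / 2)
    return s_max / s_min
```

A rotation matrix is unitary, so its condition number is exactly 1, while diag(2, 1) has κ = 2.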
For the eigenvalue problem, Bauer and Fike proved that if λ is an eigenvalue for a diagonalizable n × n matrix A with eigenvector matrix V, then the absolute error in calculating λ is bounded by the product of κ(V) and the absolute error in A. As a result, the condition number for finding λ is κ(λ, A) = κ(V) = ||V ||op ||V −1||op. If A is normal, then V is unitary, and κ(λ, A) = 1. Thus the eigenvalue problem for all normal matrices is well-conditioned.
The condition number for the problem of finding the eigenspace of a normal matrix A corresponding to an eigenvalue λ has been shown to be inversely proportional to the minimum distance between λ and the other distinct eigenvalues of A. In particular, the eigenspace problem for normal matrices is well-conditioned for isolated eigenvalues. When eigenvalues are not isolated, the best that can be hoped for is to identify the span of all eigenvectors of nearby eigenvalues.
== Algorithms ==
The most reliable and most widely used algorithm for computing eigenvalues is John G. F. Francis' and Vera N. Kublanovskaya's QR algorithm, considered one of the top ten algorithms of the 20th century.
Any monic polynomial is the characteristic polynomial of its companion matrix. Therefore, a general algorithm for finding eigenvalues could also be used to find the roots of polynomials. The Abel–Ruffini theorem shows that any such algorithm for dimensions greater than 4 must either be infinite, or involve functions of greater complexity than elementary arithmetic operations and fractional powers. For this reason algorithms that exactly calculate eigenvalues in a finite number of steps only exist for a few special classes of matrices. For general matrices, algorithms are iterative, producing better approximate solutions with each iteration.
Some algorithms produce every eigenvalue, others will produce a few, or only one. However, even the latter algorithms can be used to find all eigenvalues. Once an eigenvalue λ of a matrix A has been identified, it can be used to either direct the algorithm towards a different solution next time, or to reduce the problem to one that no longer has λ as a solution.
Redirection is usually accomplished by shifting: replacing A with A − μI for some constant μ. The eigenvalue found for A − μI must have μ added back in to get an eigenvalue for A. For example, for power iteration, μ = λ. Power iteration finds the largest eigenvalue in absolute value, so even when λ is only an approximate eigenvalue, power iteration is unlikely to find it a second time. Conversely, inverse iteration based methods find the lowest eigenvalue, so μ is chosen well away from λ and hopefully closer to some other eigenvalue.
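A bare-bones power iteration illustrates the idea (pure-Python sketch; the step count and max-norm normalization are our choices). Run on the 2×2 matrix used as an example later in this article, whose eigenvalues are 3 and −2, it locks onto 3, the eigenvalue of largest magnitude.

```python
def power_iteration(A, steps=200):
    """Estimate the largest-|lambda| eigenpair of a small square matrix
    (given as a list of rows) by repeated multiplication."""
    n = len(A)
    v = [1.0] + [0.0] * (n - 1)
    lam = 0.0
    for _ in range(steps):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        scale = max(abs(x) for x in w)
        v = [x / scale for x in w]
        # Rayleigh-quotient estimate of the eigenvalue
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = (sum(v[i] * Av[i] for i in range(n)) /
               sum(v[i] * v[i] for i in range(n)))
    return lam, v

lam, v = power_iteration([[4.0, 3.0], [-2.0, -3.0]])
```

The convergence rate is governed by the ratio |λ₂/λ₁| = 2/3 per step, so 200 iterations are far more than enough here.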
Reduction can be accomplished by restricting A to the column space of the matrix A − λI, which A carries to itself. Since A - λI is singular, the column space is of lesser dimension. The eigenvalue algorithm can then be applied to the restricted matrix. This process can be repeated until all eigenvalues are found.
If an eigenvalue algorithm does not produce eigenvectors, a common practice is to use an inverse iteration based algorithm with μ set to a close approximation to the eigenvalue. This will quickly converge to the eigenvector of the closest eigenvalue to μ. For small matrices, an alternative is to look at the column space of the product of A − λ'I for each of the other eigenvalues λ'.
A formula for the norm of unit eigenvector components of normal matrices was discovered by Robert Thompson in 1966 and rediscovered independently by several others.
If A is an {\textstyle n\times n} normal matrix with eigenvalues λi(A) and corresponding unit eigenvectors vi whose component entries are vi,j, let Aj be the {\textstyle (n-1)\times (n-1)} matrix obtained by removing the j-th row and column from A, and let λk(Aj) be its k-th eigenvalue. Then
{\displaystyle |v_{i,j}|^{2}\prod _{k=1,k\neq i}^{n}(\lambda _{i}(A)-\lambda _{k}(A))=\prod _{k=1}^{n-1}(\lambda _{i}(A)-\lambda _{k}(A_{j}))}
If {\displaystyle p,p_{j}} are the characteristic polynomials of {\displaystyle A} and {\displaystyle A_{j}}, the formula can be re-written as
{\displaystyle |v_{i,j}|^{2}={\frac {p_{j}(\lambda _{i}(A))}{p'(\lambda _{i}(A))}}} assuming the derivative {\displaystyle p'} is not zero at {\displaystyle \lambda _{i}(A)}.
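The formula is easy to check by hand in the smallest nontrivial case (the matrix here is our illustrative choice): for the symmetric 2×2 matrix A = [[2, 1], [1, 2]], deleting either row/column leaves the 1×1 matrix [2], and the identity forces every squared eigenvector component to 1/2, matching the known eigenvectors (1, 1)/√2 and (1, −1)/√2.

```python
# Eigenvector-component formula for A = [[2, 1], [1, 2]]:
# |v_{i,j}|^2 * (lam_i - lam_k) = lam_i - lam(A_j), with k != i.
lam = [3.0, 1.0]        # eigenvalues of A (tr = 4, det = 3)
minor_eig = [2.0, 2.0]  # deleting row/column j leaves the 1x1 matrix [2]

v_sq = [[(lam[i] - minor_eig[j]) / (lam[i] - lam[1 - i]) for j in range(2)]
        for i in range(2)]
# Every squared component should equal 1/2.
```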
== Hessenberg and tridiagonal matrices ==
Because the eigenvalues of a triangular matrix are its diagonal elements, for general matrices there is no finite method like Gaussian elimination to convert a matrix to triangular form while preserving eigenvalues. But it is possible to reach something close to triangular. An upper Hessenberg matrix is a square matrix for which all entries below the subdiagonal are zero. A lower Hessenberg matrix is one for which all entries above the superdiagonal are zero. Matrices that are both upper and lower Hessenberg are tridiagonal. Hessenberg and tridiagonal matrices are the starting points for many eigenvalue algorithms because the zero entries reduce the complexity of the problem. Several methods are commonly used to convert a general matrix into a Hessenberg matrix with the same eigenvalues. If the original matrix was symmetric or Hermitian, then the resulting matrix will be tridiagonal.
When only eigenvalues are needed, there is no need to calculate the similarity matrix, as the transformed matrix has the same eigenvalues. If eigenvectors are needed as well, the similarity matrix may be needed to transform the eigenvectors of the Hessenberg matrix back into eigenvectors of the original matrix.
For symmetric tridiagonal eigenvalue problems all eigenvalues (without eigenvectors) can be computed numerically in time O(n log(n)), using bisection on the characteristic polynomial.
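One way to realize such a bisection is through the classic Sturm-sequence recurrence, sketched below for a symmetric tridiagonal matrix given by its diagonal and off-diagonal entries (function names, tolerance, and the Gershgorin bracketing interval are our choices, not a fixed API).

```python
def count_below(diag, off, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix
    (diagonal `diag`, off-diagonal `off`) that are less than x,
    via the Sturm-sequence recurrence d_i = (a_i - x) - b_{i-1}^2 / d_{i-1}."""
    count, d = 0, 1.0
    for i in range(len(diag)):
        b2 = off[i - 1] ** 2 if i > 0 else 0.0
        d = diag[i] - x - b2 / (d if d != 0.0 else 1e-300)
        if d < 0.0:
            count += 1
    return count

def kth_eigenvalue(diag, off, k, tol=1e-12):
    """k-th smallest eigenvalue (k = 0, 1, ...) by bisection, starting
    from a Gershgorin interval containing the whole spectrum."""
    n = len(diag)
    radius = [(abs(off[i - 1]) if i > 0 else 0.0) +
              (abs(off[i]) if i < n - 1 else 0.0) for i in range(n)]
    lo = min(diag[i] - radius[i] for i in range(n))
    hi = max(diag[i] + radius[i] for i in range(n))
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if count_below(diag, off, mid) <= k:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For the tridiagonal matrix with diagonal (2, 2, 2) and off-diagonal (1, 1), this recovers the known spectrum 2 − √2, 2, 2 + √2.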
== Iterative algorithms ==
Iterative algorithms solve the eigenvalue problem by producing sequences that converge to the eigenvalues. Some algorithms also produce sequences of vectors that converge to the eigenvectors. Most commonly, the eigenvalue sequences are expressed as sequences of similar matrices which converge to a triangular or diagonal form, allowing the eigenvalues to be read easily. The eigenvector sequences are expressed as the corresponding similarity matrices.
== Direct calculation ==
While there is no simple algorithm to directly calculate eigenvalues for general matrices, there are numerous special classes of matrices where eigenvalues can be directly calculated. These include:
=== Triangular matrices ===
Since the determinant of a triangular matrix is the product of its diagonal entries, if T is triangular, then {\textstyle \det(\lambda I-T)=\prod _{i}(\lambda -T_{ii})}. Thus the eigenvalues of T are its diagonal entries.
=== Factorable polynomial equations ===
If p is any polynomial and p(A) = 0, then the eigenvalues of A also satisfy the same equation. If p happens to have a known factorization, then the eigenvalues of A lie among its roots.
For example, a projection is a square matrix P satisfying P2 = P. The roots of the corresponding scalar polynomial equation, λ2 = λ, are 0 and 1. Thus any projection has 0 and 1 for its eigenvalues. The multiplicity of 0 as an eigenvalue is the nullity of P, while the multiplicity of 1 is the rank of P.
Another example is a matrix A that satisfies A2 = α2I for some scalar α. The eigenvalues must be ±α. The projection operators
{\displaystyle P_{+}={\frac {1}{2}}\left(I+{\frac {A}{\alpha }}\right)}
{\displaystyle P_{-}={\frac {1}{2}}\left(I-{\frac {A}{\alpha }}\right)}
satisfy
{\displaystyle AP_{+}=\alpha P_{+}\quad AP_{-}=-\alpha P_{-}}
and
{\displaystyle P_{+}P_{+}=P_{+}\quad P_{-}P_{-}=P_{-}\quad P_{+}P_{-}=P_{-}P_{+}=0.}
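As a concrete check (the matrix is our illustrative choice), the reflection A = [[0, 1], [1, 0]] satisfies A² = I, so with α = 1 both projectors can be built and the identities verified directly.

```python
def matmul(X, Y):
    # 2x2 matrix product.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.0, 1.0], [1.0, 0.0]]   # A*A = I, so alpha = 1
I2 = [[1.0, 0.0], [0.0, 1.0]]
P_plus = [[(I2[i][j] + A[i][j]) / 2.0 for j in range(2)] for i in range(2)]
P_minus = [[(I2[i][j] - A[i][j]) / 2.0 for j in range(2)] for i in range(2)]
```

The columns of P₊ span the +1 eigenspace (the line x = y) and those of P₋ span the −1 eigenspace (x = −y).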
=== 2×2 matrices ===
For dimensions 2 through 4, formulas involving radicals exist that can be used to find the eigenvalues. While a common practice for 2×2 and 3×3 matrices, for 4×4 matrices the increasing complexity of the root formulas makes this approach less attractive.
For the 2×2 matrix
{\displaystyle A={\begin{bmatrix}a&b\\c&d\end{bmatrix}},}
the characteristic polynomial is
{\displaystyle \det {\begin{bmatrix}\lambda -a&-b\\-c&\lambda -d\end{bmatrix}}=\lambda ^{2}\,-\,\left(a+d\right)\lambda \,+\,\left(ad-bc\right)=\lambda ^{2}\,-\,\lambda \,{\rm {tr}}(A)\,+\,\det(A).}
Thus the eigenvalues can be found by using the quadratic formula:
{\displaystyle \lambda ={\frac {{\rm {tr}}(A)\pm {\sqrt {{\rm {tr}}^{2}(A)-4\det(A)}}}{2}}.}
Defining {\textstyle {\rm {gap}}\left(A\right)={\sqrt {{\rm {tr}}^{2}(A)-4\det(A)}}} to be the distance between the two eigenvalues, it is straightforward to calculate
{\displaystyle {\frac {\partial \lambda }{\partial a}}={\frac {1}{2}}\left(1\pm {\frac {a-d}{{\rm {gap}}(A)}}\right),\qquad {\frac {\partial \lambda }{\partial b}}={\frac {\pm c}{{\rm {gap}}(A)}}}
with similar formulas for c and d. From this it follows that the calculation is well-conditioned if the eigenvalues are isolated.
Eigenvectors can be found by exploiting the Cayley–Hamilton theorem. If λ1, λ2 are the eigenvalues, then (A − λ1I)(A − λ2I) = (A − λ2I)(A − λ1I) = 0, so the columns of (A − λ2I) are annihilated by (A − λ1I) and vice versa. Assuming neither matrix is zero, the columns of each must include eigenvectors for the other eigenvalue. (If either matrix is zero, then A is a multiple of the identity and any non-zero vector is an eigenvector.)
For example, suppose
{\displaystyle A={\begin{bmatrix}4&3\\-2&-3\end{bmatrix}},}
then tr(A) = 4 − 3 = 1 and det(A) = 4(−3) − 3(−2) = −6, so the characteristic equation is
{\displaystyle 0=\lambda ^{2}-\lambda -6=(\lambda -3)(\lambda +2),}
and the eigenvalues are 3 and -2. Now,
{\displaystyle A-3I={\begin{bmatrix}1&3\\-2&-6\end{bmatrix}},\qquad A+2I={\begin{bmatrix}6&3\\-2&-1\end{bmatrix}}.}
In both matrices, the columns are multiples of each other, so either column can be used. Thus, (1, −2) can be taken as an eigenvector associated with the eigenvalue -2, and (3, −1) as an eigenvector associated with the eigenvalue 3, as can be verified by multiplying them by A.
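The whole worked example fits in a few lines of code (a sketch reproducing the computation above; the helper names are ours): the quadratic formula gives the eigenvalues, and a column of each shifted matrix gives an eigenvector for the other eigenvalue.

```python
A = [[4.0, 3.0], [-2.0, -3.0]]
tr = A[0][0] + A[1][1]                          # 1
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]     # -6
disc = (tr * tr - 4 * det) ** 0.5               # 5
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2   # 3 and -2

def column_of_shift(A, mu, col):
    # A column of (A - mu*I); by Cayley-Hamilton, a nonzero column of
    # (A - lam2*I) is an eigenvector for lam1, and vice versa.
    return [A[0][col] - (mu if col == 0 else 0.0),
            A[1][col] - (mu if col == 1 else 0.0)]

v1 = column_of_shift(A, lam2, 0)   # column of A + 2I: (6, -2), eigenvector for 3
v2 = column_of_shift(A, lam1, 0)   # column of A - 3I: (1, -2), eigenvector for -2

def apply(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]
```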
=== Symmetric 3×3 matrices ===
The characteristic equation of a symmetric 3×3 matrix A is:
{\displaystyle \det \left(\alpha I-A\right)=\alpha ^{3}-\alpha ^{2}{\rm {tr}}(A)-\alpha {\frac {1}{2}}\left({\rm {tr}}(A^{2})-{\rm {tr}}^{2}(A)\right)-\det(A)=0.}
This equation may be solved using the methods of Cardano or Lagrange, but an affine change to A will simplify the expression considerably, and lead directly to a trigonometric solution. If A = pB + qI, then A and B have the same eigenvectors, and β is an eigenvalue of B if and only if α = pβ + q is an eigenvalue of A. Letting
{\textstyle q={\rm {tr}}(A)/3} and {\textstyle p=\left({\rm {tr}}\left((A-qI)^{2}\right)/6\right)^{1/2}}, gives
{\displaystyle \det \left(\beta I-B\right)=\beta ^{3}-3\beta -\det(B)=0.}
The substitution β = 2cos θ and some simplification using the identity cos 3θ = 4cos3 θ − 3cos θ reduces the equation to cos 3θ = det(B) / 2. Thus
{\displaystyle \beta =2{\cos }\left({\frac {1}{3}}{\arccos }\left(\det(B)/2\right)+{\frac {2k\pi }{3}}\right),\quad k=0,1,2.}
If det(B) is complex or is greater than 2 in absolute value, the arccosine should be taken along the same branch for all three values of k. This issue doesn't arise when A is real and symmetric, resulting in a simple algorithm:
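A pure-Python sketch of that algorithm (the function names are ours; the arccos argument is clamped to guard against roundoff):

```python
import math

def det3(M):
    # Determinant of a 3x3 matrix by cofactor expansion.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def symmetric_3x3_eigenvalues(A):
    """Eigenvalues of a real symmetric 3x3 matrix A, in ascending order."""
    q = (A[0][0] + A[1][1] + A[2][2]) / 3.0
    C = [[A[i][j] - (q if i == j else 0.0) for j in range(3)] for i in range(3)]
    # p = (tr((A - qI)^2) / 6)^(1/2); for symmetric C, tr(C^2) is the
    # sum of squared entries.
    p = math.sqrt(sum(C[i][j] ** 2 for i in range(3) for j in range(3)) / 6.0)
    if p == 0.0:
        return [q, q, q]              # A is a multiple of the identity
    B = [[C[i][j] / p for j in range(3)] for i in range(3)]
    # For real symmetric A, det(B)/2 lies in [-1, 1]; clamp for roundoff.
    phi = math.acos(min(1.0, max(-1.0, det3(B) / 2.0))) / 3.0
    betas = [2.0 * math.cos(phi + 2.0 * k * math.pi / 3.0) for k in range(3)]
    return sorted(p * b + q for b in betas)
```

For the tridiagonal test matrix [[2, 1, 0], [1, 2, 1], [0, 1, 2]], this returns 2 − √2, 2, 2 + √2.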
Once again, the eigenvectors of A can be obtained by recourse to the Cayley–Hamilton theorem. If α1, α2, α3 are distinct eigenvalues of A, then (A − α1I)(A − α2I)(A − α3I) = 0. Thus the columns of the product of any two of these matrices will contain an eigenvector for the third eigenvalue. However, if α3 = α1, then (A − α1I)2(A − α2I) = 0 and (A − α2I)(A − α1I)2 = 0. Thus the generalized eigenspace of α1 is spanned by the columns of A − α2I while the ordinary eigenspace is spanned by the columns of (A − α1I)(A − α2I). The ordinary eigenspace of α2 is spanned by the columns of (A − α1I)2.
For example, let
{\displaystyle A={\begin{bmatrix}3&2&6\\2&2&5\\-2&-1&-4\end{bmatrix}}.}
The characteristic equation is
{\displaystyle 0=\lambda ^{3}-\lambda ^{2}-\lambda +1=(\lambda -1)^{2}(\lambda +1),}
with eigenvalues 1 (of multiplicity 2) and -1. Calculating,
{\displaystyle A-I={\begin{bmatrix}2&2&6\\2&1&5\\-2&-1&-5\end{bmatrix}},\qquad A+I={\begin{bmatrix}4&2&6\\2&3&5\\-2&-1&-3\end{bmatrix}}}
and
{\displaystyle (A-I)^{2}={\begin{bmatrix}-4&0&-8\\-4&0&-8\\4&0&8\end{bmatrix}},\qquad (A-I)(A+I)={\begin{bmatrix}0&4&4\\0&2&2\\0&-2&-2\end{bmatrix}}}
Thus (−4, −4, 4) is an eigenvector for −1, and (4, 2, −2) is an eigenvector for 1. (2, 3, −1) and (6, 5, −3) are both generalized eigenvectors associated with 1, either one of which could be combined with (−4, −4, 4) and (4, 2, −2) to form a basis of generalized eigenvectors of A. Once found, the eigenvectors can be normalized if needed.
==== Eigenvectors of normal 3×3 matrices ====
If a 3×3 matrix {\displaystyle A} is normal, then the cross-product can be used to find eigenvectors. If {\displaystyle \lambda } is an eigenvalue of {\displaystyle A}, then the null space of {\displaystyle A-\lambda I} is perpendicular to its column space. The cross product of two independent columns of {\displaystyle A-\lambda I} will be in the null space. That is, it will be an eigenvector associated with {\displaystyle \lambda }. Since the column space is two dimensional in this case, the eigenspace must be one dimensional, so any other eigenvector will be parallel to it.
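A short sketch of the cross-product trick (the symmetric matrix and eigenvalue below are our illustrative choice): two independent columns of A − λI are crossed, and the result is an eigenvector for λ.

```python
def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def apply3(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

# Symmetric (hence normal) matrix with known eigenvalue lambda = 2.
A = [[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]]
lam = 2.0
B = [[A[i][j] - (lam if i == j else 0.0) for j in range(3)] for i in range(3)]
c0 = [B[i][0] for i in range(3)]   # first column of A - lam*I
c1 = [B[i][1] for i in range(3)]   # second column, independent of c0
v = cross(c0, c1)                  # lies in the null space: an eigenvector
```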
If {\displaystyle A-\lambda I} does not contain two independent columns but is not 0, the cross-product can still be used. In this case {\displaystyle \lambda } is an eigenvalue of multiplicity 2, so any vector perpendicular to the column space will be an eigenvector. Suppose {\displaystyle \mathbf {v} } is a non-zero column of {\displaystyle A-\lambda I}. Choose an arbitrary vector {\displaystyle \mathbf {u} } not parallel to {\displaystyle \mathbf {v} }. Then {\displaystyle \mathbf {v} \times \mathbf {u} } and {\displaystyle (\mathbf {v} \times \mathbf {u} )\times \mathbf {v} } will be perpendicular to {\displaystyle \mathbf {v} } and thus will be eigenvectors of {\displaystyle \lambda }.
This does not work when {\displaystyle A} is not normal, as the null space and column space do not need to be perpendicular for such matrices.
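A minimal numerical sketch of the cross-product trick (the symmetric matrix and eigenvalue below are illustrative choices, not taken from the text):

```python
import numpy as np

# A symmetric (hence normal) matrix that has eigenvalue 1
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
lam = 1.0

B = A - lam * np.eye(3)
# Cross two independent columns of B; the result lies in B's null space,
# i.e. it is an eigenvector of A for the eigenvalue lam.
v = np.cross(B[:, 0], B[:, 2])
assert np.linalg.norm(v) > 0
assert np.allclose(A @ v, lam * v)
```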
== See also ==
List of eigenvalue algorithms
== Further reading ==
Bojanczyk, Adam W.; Adam Lutoborski (Jan 1991). "Computation of the Euler angles of a symmetric 3X3 matrix". SIAM Journal on Matrix Analysis and Applications. 12 (1): 41–48. doi:10.1137/0612005.
The Schrödinger equation is a partial differential equation that governs the wave function of a non-relativistic quantum-mechanical system.: 1–2 Its discovery was a significant landmark in the development of quantum mechanics. It is named after Erwin Schrödinger, an Austrian physicist, who postulated the equation in 1925 and published it in 1926, forming the basis for the work that resulted in his Nobel Prize in Physics in 1933.
Conceptually, the Schrödinger equation is the quantum counterpart of Newton's second law in classical mechanics. Given a set of known initial conditions, Newton's second law makes a mathematical prediction as to what path a given physical system will take over time. The Schrödinger equation gives the evolution over time of the wave function, the quantum-mechanical characterization of an isolated physical system. The equation was postulated by Schrödinger based on a postulate of Louis de Broglie that all matter has an associated matter wave. The equation predicted bound states of the atom in agreement with experimental observations.: II:268
The Schrödinger equation is not the only way to study quantum mechanical systems and make predictions. Other formulations of quantum mechanics include matrix mechanics, introduced by Werner Heisenberg, and the path integral formulation, developed chiefly by Richard Feynman. When these approaches are compared, the use of the Schrödinger equation is sometimes called "wave mechanics".
The equation given by Schrödinger is nonrelativistic because it contains a first derivative in time and a second derivative in space, and therefore space and time are not on equal footing. Paul Dirac incorporated special relativity and quantum mechanics into a single formulation that simplifies to the Schrödinger equation in the non-relativistic limit. This is the Dirac equation, which contains a single derivative in both space and time. Another partial differential equation, the Klein–Gordon equation, led to a problem with probability density even though it was a relativistic wave equation. The probability density could be negative, which is physically unviable. This was fixed by Dirac by taking the so-called square root of the Klein–Gordon operator and in turn introducing Dirac matrices. In a modern context, the Klein–Gordon equation describes spin-less particles, while the Dirac equation describes spin-1/2 particles.
== Definition ==
=== Preliminaries ===
Introductory courses on physics or chemistry typically introduce the Schrödinger equation in a way that can be appreciated knowing only the concepts and notations of basic calculus, particularly derivatives with respect to space and time. A special case of the Schrödinger equation that admits a statement in those terms is the position-space Schrödinger equation for a single nonrelativistic particle in one dimension:
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (x,t)=\left[-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+V(x,t)\right]\Psi (x,t).}
Here, {\displaystyle \Psi (x,t)} is a wave function, a function that assigns a complex number to each point {\displaystyle x} at each time {\displaystyle t}. The parameter {\displaystyle m} is the mass of the particle, and {\displaystyle V(x,t)} is the potential that represents the environment in which the particle exists.: 74 The constant {\displaystyle i} is the imaginary unit, and {\displaystyle \hbar } is the reduced Planck constant, which has units of action (energy multiplied by time).: 10
Broadening beyond this simple case, the mathematical formulation of quantum mechanics developed by Paul Dirac, David Hilbert, John von Neumann, and Hermann Weyl defines the state of a quantum mechanical system to be a vector {\displaystyle |\psi \rangle } belonging to a separable complex Hilbert space {\displaystyle {\mathcal {H}}}. This vector is postulated to be normalized under the Hilbert space's inner product; that is, in Dirac notation it obeys {\displaystyle \langle \psi |\psi \rangle =1}. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of square-integrable functions {\displaystyle L^{2}}, while the Hilbert space for the spin of a single proton is the two-dimensional complex vector space {\displaystyle \mathbb {C} ^{2}} with the usual inner product.: 322
Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are self-adjoint operators acting on the Hilbert space. A wave function can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue {\displaystyle \lambda } is non-degenerate and the probability is given by {\displaystyle |\langle \lambda |\psi \rangle |^{2}}, where {\displaystyle |\lambda \rangle } is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by {\displaystyle \langle \psi |P_{\lambda }|\psi \rangle }, where {\displaystyle P_{\lambda }} is the projector onto its associated eigenspace.
A momentum eigenstate would be a perfectly monochromatic wave of infinite extent, which is not square-integrable. Likewise a position eigenstate would be a Dirac delta distribution, not square-integrable and technically not a function at all. Consequently, neither can belong to the particle's Hilbert space. Physicists sometimes regard these eigenstates, composed of elements outside the Hilbert space, as "generalized eigenvectors". These are used for calculational convenience and do not represent physical states.: 100–105 Thus, a position-space wave function {\displaystyle \Psi (x,t)} as used above can be written as the inner product of a time-dependent state vector {\displaystyle |\Psi (t)\rangle } with unphysical but convenient "position eigenstates" {\displaystyle |x\rangle }:
{\displaystyle \Psi (x,t)=\langle x|\Psi (t)\rangle .}
=== Time-dependent equation ===
The form of the Schrödinger equation depends on the physical situation. The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time:: 143
{\displaystyle i\hbar {\frac {d}{dt}}\vert \Psi (t)\rangle ={\hat {H}}\vert \Psi (t)\rangle }
where {\displaystyle t} is time, {\displaystyle \vert \Psi (t)\rangle } is the state vector of the quantum system ({\displaystyle \Psi } being the Greek letter psi), and {\displaystyle {\hat {H}}} is an observable, the Hamiltonian operator.
The term "Schrödinger equation" can refer to both the general equation and the specific nonrelativistic version. The general equation is indeed quite general, used throughout quantum mechanics, for everything from the Dirac equation to quantum field theory, by plugging in diverse expressions for the Hamiltonian. The specific nonrelativistic version is an approximation that yields accurate results in many situations, but only to a certain extent (see relativistic quantum mechanics and relativistic quantum field theory).
To apply the Schrödinger equation, write down the Hamiltonian for the system, accounting for the kinetic and potential energies of the particles constituting the system, then insert it into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system. In practice, the square of the absolute value of the wave function at each point is taken to define a probability density function.: 78 For example, given a wave function in position space {\displaystyle \Psi (x,t)} as above, we have
{\displaystyle \Pr(x,t)=|\Psi (x,t)|^{2}.}
=== Time-independent equation ===
The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states. These states are particularly important as their individual study later simplifies the task of solving the time-dependent Schrödinger equation for any state. Stationary states can also be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation.
{\displaystyle {\hat {H}}|\Psi \rangle =E|\Psi \rangle }
where {\displaystyle E} is the energy of the system.: 134 This is only used when the Hamiltonian itself is not dependent on time explicitly. However, even in this case the total wave function is dependent on time as explained in the section on linearity below. In the language of linear algebra, this equation is an eigenvalue equation. Therefore, the wave function is an eigenfunction of the Hamiltonian operator with corresponding eigenvalue(s) {\displaystyle E}.
== Properties ==
=== Linearity ===
The Schrödinger equation is a linear differential equation, meaning that if two state vectors {\displaystyle |\psi _{1}\rangle } and {\displaystyle |\psi _{2}\rangle } are solutions, then so is any linear combination {\displaystyle |\psi \rangle =a|\psi _{1}\rangle +b|\psi _{2}\rangle } of the two state vectors, where a and b are any complex numbers.: 25 Moreover, the sum can be extended for any number of state vectors. This property allows superpositions of quantum states to be solutions of the Schrödinger equation. Even more generally, it holds that a general solution to the Schrödinger equation can be found by taking a weighted sum over a basis of states. A choice often employed is the basis of energy eigenstates, which are solutions of the time-independent Schrödinger equation. In this basis, a time-dependent state vector {\displaystyle |\Psi (t)\rangle } can be written as the linear combination
{\displaystyle |\Psi (t)\rangle =\sum _{n}A_{n}e^{{-iE_{n}t}/\hbar }|\psi _{E_{n}}\rangle ,}
where {\displaystyle A_{n}} are complex numbers and the vectors {\displaystyle |\psi _{E_{n}}\rangle } are solutions of the time-independent equation {\displaystyle {\hat {H}}|\psi _{E_{n}}\rangle =E_{n}|\psi _{E_{n}}\rangle }.
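This expansion can be checked numerically in finite dimensions: diagonalize a Hermitian matrix standing in for the Hamiltonian, attach the phase factors e^(-iE_n t/ħ) to the expansion coefficients, and compare with the exact propagator. A sketch with ħ = 1 and an arbitrary Hermitian matrix (not from the text):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2          # Hermitian "Hamiltonian", hbar = 1

E, V = np.linalg.eigh(H)          # energies E_n; eigenstates as columns of V
psi0 = np.zeros(4, complex)
psi0[0] = 1.0                     # normalized initial state

t = 0.7
A_n = V.conj().T @ psi0           # coefficients A_n = <psi_En | Psi(0)>
# |Psi(t)> = sum_n A_n e^{-i E_n t} |psi_En>
psi_t = V @ (np.exp(-1j * E * t) * A_n)

# Agrees with direct application of the propagator e^{-iHt}
assert np.allclose(psi_t, expm(-1j * H * t) @ psi0)
```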
=== Unitarity ===
Holding the Hamiltonian {\displaystyle {\hat {H}}} constant, the Schrödinger equation has the solution
{\displaystyle |\Psi (t)\rangle =e^{-i{\hat {H}}t/\hbar }|\Psi (0)\rangle .}
The operator {\displaystyle {\hat {U}}(t)=e^{-i{\hat {H}}t/\hbar }} is known as the time-evolution operator, and it is unitary: it preserves the inner product between vectors in the Hilbert space. Unitarity is a general feature of time evolution under the Schrödinger equation. If the initial state is {\displaystyle |\Psi (0)\rangle }, then the state at a later time {\displaystyle t} will be given by {\displaystyle |\Psi (t)\rangle ={\hat {U}}(t)|\Psi (0)\rangle } for some unitary operator {\displaystyle {\hat {U}}(t)}. Conversely, suppose that {\displaystyle {\hat {U}}(t)} is a continuous family of unitary operators parameterized by {\displaystyle t}. Without loss of generality, the parameterization can be chosen so that {\displaystyle {\hat {U}}(0)} is the identity operator and that {\displaystyle {\hat {U}}(t/N)^{N}={\hat {U}}(t)} for any {\displaystyle N>0}. Then {\displaystyle {\hat {U}}(t)} depends upon the parameter {\displaystyle t} in such a way that {\displaystyle {\hat {U}}(t)=e^{-i{\hat {G}}t}} for some self-adjoint operator {\displaystyle {\hat {G}}}, called the generator of the family {\displaystyle {\hat {U}}(t)}. A Hamiltonian is just such a generator (up to the factor of the Planck constant that would be set to 1 in natural units).
To see that the generator is Hermitian, note that with {\displaystyle {\hat {U}}(\delta t)\approx {\hat {U}}(0)-i{\hat {G}}\delta t}, we have
{\displaystyle {\hat {U}}(\delta t)^{\dagger }{\hat {U}}(\delta t)\approx ({\hat {U}}(0)^{\dagger }+i{\hat {G}}^{\dagger }\delta t)({\hat {U}}(0)-i{\hat {G}}\delta t)=I+i\delta t({\hat {G}}^{\dagger }-{\hat {G}})+O(\delta t^{2}),}
so {\displaystyle {\hat {U}}(t)} is unitary only if, to first order, its derivative is Hermitian.
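Numerically, the propagator built from a Hermitian generator is unitary to machine precision and preserves the norm of any state. A sketch with ħ = 1 and an arbitrary Hermitian matrix (an illustrative stand-in):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (M + M.conj().T) / 2                 # Hermitian generator, hbar = 1

U = expm(-1j * H * 2.5)                  # U(t) = e^{-iHt} at t = 2.5
assert np.allclose(U.conj().T @ U, np.eye(3))    # unitarity: U†U = I

psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)               # normalized state
assert np.isclose(np.linalg.norm(U @ psi), 1.0)  # norm is preserved
```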
=== Changes of basis ===
The Schrödinger equation is often presented using quantities varying as functions of position, but as a vector-operator equation it has a valid representation in any arbitrary complete basis of kets in Hilbert space. As mentioned above, "bases" that lie outside the physical Hilbert space are also employed for calculational purposes. This is illustrated by the position-space and momentum-space Schrödinger equations for a nonrelativistic, spinless particle.: 182 The Hilbert space for such a particle is the space of complex square-integrable functions on three-dimensional Euclidean space, and its Hamiltonian is the sum of a kinetic-energy term that is quadratic in the momentum operator and a potential-energy term:
{\displaystyle i\hbar {\frac {d}{dt}}|\Psi (t)\rangle =\left({\frac {1}{2m}}{\hat {p}}^{2}+{\hat {V}}\right)|\Psi (t)\rangle .}
Writing {\displaystyle \mathbf {r} } for a three-dimensional position vector and {\displaystyle \mathbf {p} } for a three-dimensional momentum vector, the position-space Schrödinger equation is
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\Psi (\mathbf {r} ,t)+V(\mathbf {r} )\Psi (\mathbf {r} ,t).}
The momentum-space counterpart involves the Fourier transforms of the wave function and the potential:
{\displaystyle i\hbar {\frac {\partial }{\partial t}}{\tilde {\Psi }}(\mathbf {p} ,t)={\frac {\mathbf {p} ^{2}}{2m}}{\tilde {\Psi }}(\mathbf {p} ,t)+(2\pi \hbar )^{-3/2}\int d^{3}\mathbf {p} '\,{\tilde {V}}(\mathbf {p} -\mathbf {p} '){\tilde {\Psi }}(\mathbf {p} ',t).}
The functions {\displaystyle \Psi (\mathbf {r} ,t)} and {\displaystyle {\tilde {\Psi }}(\mathbf {p} ,t)} are derived from {\displaystyle |\Psi (t)\rangle } by
{\displaystyle \Psi (\mathbf {r} ,t)=\langle \mathbf {r} |\Psi (t)\rangle ,}
{\displaystyle {\tilde {\Psi }}(\mathbf {p} ,t)=\langle \mathbf {p} |\Psi (t)\rangle ,}
where {\displaystyle |\mathbf {r} \rangle } and {\displaystyle |\mathbf {p} \rangle } do not belong to the Hilbert space itself, but have well-defined inner products with all elements of that space.
When restricted from three dimensions to one, the position-space equation is just the first form of the Schrödinger equation given above. The relation between position and momentum in quantum mechanics can be appreciated in a single dimension. In canonical quantization, the classical variables {\displaystyle x} and {\displaystyle p} are promoted to self-adjoint operators {\displaystyle {\hat {x}}} and {\displaystyle {\hat {p}}} that satisfy the canonical commutation relation
{\displaystyle [{\hat {x}},{\hat {p}}]=i\hbar .}
This implies that: 190
{\displaystyle \langle x|{\hat {p}}|\Psi \rangle =-i\hbar {\frac {d}{dx}}\Psi (x),}
so the action of the momentum operator {\displaystyle {\hat {p}}} in the position-space representation is {\textstyle -i\hbar {\frac {d}{dx}}}. Thus, {\displaystyle {\hat {p}}^{2}} becomes a second derivative, and in three dimensions, the second derivative becomes the Laplacian {\displaystyle \nabla ^{2}}.
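The action of −iħ d/dx can be checked on a plane wave e^(ikx), which is a momentum eigenfunction with eigenvalue ħk. A finite-difference sketch with ħ = 1 (grid size and k are illustrative choices):

```python
import numpy as np

hbar, k = 1.0, 3.0
x = np.linspace(0, 2 * np.pi, 2001)
psi = np.exp(1j * k * x)                  # plane wave e^{ikx}

# Momentum operator -i hbar d/dx applied via central differences
p_psi = -1j * hbar * np.gradient(psi, x)

# Away from the grid boundary, p_psi ≈ (hbar k) psi
assert np.allclose(p_psi[1:-1], hbar * k * psi[1:-1], rtol=1e-3)
```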
The canonical commutation relation also implies that the position and momentum operators are Fourier conjugates of each other. Consequently, functions originally defined in terms of their position dependence can be converted to functions of momentum using the Fourier transform.: 103–104 In solid-state physics, the Schrödinger equation is often written for functions of momentum, as Bloch's theorem ensures the periodic crystal lattice potential couples {\displaystyle {\tilde {\Psi }}(p)} with {\displaystyle {\tilde {\Psi }}(p+\hbar K)} for only discrete reciprocal lattice vectors {\displaystyle K}. This makes it convenient to solve the momentum-space Schrödinger equation at each point in the Brillouin zone independently of the other points in the Brillouin zone.: 138
=== Probability current ===
The Schrödinger equation is consistent with local probability conservation.: 238 It also ensures that a normalized wavefunction remains normalized after time evolution. In matrix mechanics, this means that the time evolution operator is a unitary operator. This is in contrast to, for example, the Klein–Gordon equation, for which a redefined inner product of a wavefunction can be time independent while the total volume integral of the modulus squared of the wavefunction need not be.
The continuity equation for probability in non-relativistic quantum mechanics is stated as:
{\displaystyle {\frac {\partial }{\partial t}}\rho \left(\mathbf {r} ,t\right)+\nabla \cdot \mathbf {j} =0,}
where
{\displaystyle \mathbf {j} ={\frac {1}{2m}}\left(\Psi ^{*}{\hat {\mathbf {p} }}\Psi -\Psi {\hat {\mathbf {p} }}\Psi ^{*}\right)=-{\frac {i\hbar }{2m}}(\psi ^{*}\nabla \psi -\psi \nabla \psi ^{*})={\frac {\hbar }{m}}\operatorname {Im} (\psi ^{*}\nabla \psi )}
is the probability current or probability flux (flow per unit area).
If the wavefunction is represented as
{\textstyle \psi ({\bf {x}},t)={\sqrt {\rho ({\bf {x}},t)}}\exp \left({\frac {iS({\bf {x}},t)}{\hbar }}\right),}
where {\displaystyle S(\mathbf {x} ,t)} is a real function which represents the complex phase of the wavefunction, then the probability flux is calculated as:
{\displaystyle \mathbf {j} ={\frac {\rho \nabla S}{m}}}
Hence, the spatial variation of the phase of a wavefunction is said to characterize the probability flux of the wavefunction. Although the {\textstyle {\frac {\nabla S}{m}}} term appears to play the role of velocity, it does not represent velocity at a point, since simultaneous measurement of position and velocity violates the uncertainty principle.
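For a free plane wave ψ = √ρ e^(ikx) the phase is S = ħkx, so the flux formula gives j = ρħk/m, density times ħk/m. A one-dimensional numerical sketch with ħ = m = 1 (the values of k and ρ are illustrative):

```python
import numpy as np

hbar, m, k, rho = 1.0, 1.0, 2.0, 0.5
x = np.linspace(0, 10, 4001)
psi = np.sqrt(rho) * np.exp(1j * k * x)   # sqrt(rho) e^{iS/hbar}, S = hbar k x

# j = (hbar/m) Im(psi* dpsi/dx), evaluated with central differences
j = (hbar / m) * np.imag(psi.conj() * np.gradient(psi, x))

# Away from the boundary this matches rho * (dS/dx) / m = rho * hbar * k / m
assert np.allclose(j[1:-1], rho * hbar * k / m, rtol=1e-3)
```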
=== Separation of variables ===
If the Hamiltonian is not an explicit function of time, Schrödinger's equation reads:
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=\left[-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V(\mathbf {r} )\right]\Psi (\mathbf {r} ,t).}
The operator on the left side depends only on time; the one on the right side depends only on space.
Solving the equation by separation of variables means seeking a solution of the form of a product of spatial and temporal parts
{\displaystyle \Psi (\mathbf {r} ,t)=\psi (\mathbf {r} )\tau (t),}
where {\displaystyle \psi (\mathbf {r} )} is a function only of the spatial coordinate(s) of the particle(s) constituting the system, and {\displaystyle \tau (t)} is a function of time only. Substituting this expression for {\displaystyle \Psi } into the time-dependent left hand side shows that {\displaystyle \tau (t)} is a phase factor:
{\displaystyle \Psi (\mathbf {r} ,t)=\psi (\mathbf {r} )e^{-i{Et/\hbar }}.}
A solution of this type is called stationary, since the only time dependence is a phase factor that cancels when the probability density is calculated via the Born rule.: 143ff
The spatial part of the full wave function solves the equation
{\displaystyle \nabla ^{2}\psi (\mathbf {r} )+{\frac {2m}{\hbar ^{2}}}\left[E-V(\mathbf {r} )\right]\psi (\mathbf {r} )=0,}
where the energy {\displaystyle E} appears in the phase factor.
This generalizes to any number of particles in any number of dimensions (in a time-independent potential): the standing wave solutions of the time-independent equation are the states with definite energy, instead of a probability distribution of different energies. In physics, these standing waves are called "stationary states" or "energy eigenstates"; in chemistry they are called "atomic orbitals" or "molecular orbitals". Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels. The energy eigenstates form a basis: any wave function may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is an example of the spectral theorem, and in a finite-dimensional state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix.
Separation of variables can also be a useful method for the time-independent Schrödinger equation. For example, depending on the symmetry of the problem, the Cartesian axes might be separated, as in
{\displaystyle \psi (\mathbf {r} )=\psi _{x}(x)\psi _{y}(y)\psi _{z}(z),}
or radial and angular coordinates might be separated:
{\displaystyle \psi (\mathbf {r} )=\psi _{r}(r)\psi _{\theta }(\theta )\psi _{\phi }(\phi ).}
== Examples ==
=== Particle in a box ===
The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy inside a certain region and infinite potential energy outside.: 77–78 For the one-dimensional case in the
{\displaystyle x} direction, the time-independent Schrödinger equation may be written
{\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}=E\psi .}
With the differential operator defined by
{\displaystyle {\hat {p}}_{x}=-i\hbar {\frac {d}{dx}}}
the previous equation is evocative of the classic kinetic energy analogue,
{\displaystyle {\frac {1}{2m}}{\hat {p}}_{x}^{2}=E,}
with state {\displaystyle \psi } in this case having energy {\displaystyle E} coincident with the kinetic energy of the particle.
The general solutions of the Schrödinger equation for the particle in a box are
{\displaystyle \psi (x)=Ae^{ikx}+Be^{-ikx}\qquad \qquad E={\frac {\hbar ^{2}k^{2}}{2m}}}
or, from Euler's formula,
{\displaystyle \psi (x)=C\sin(kx)+D\cos(kx).}
The infinite potential walls of the box determine the values of {\displaystyle C,D,} and {\displaystyle k} at {\displaystyle x=0} and {\displaystyle x=L} where {\displaystyle \psi } must be zero. Thus, at {\displaystyle x=0},
{\displaystyle \psi (0)=0=C\sin(0)+D\cos(0)=D}
and {\displaystyle D=0}. At {\displaystyle x=L},
{\displaystyle \psi (L)=0=C\sin(kL),}
in which {\displaystyle C} cannot be zero as this would conflict with the postulate that {\displaystyle \psi } has norm 1. Therefore, since {\displaystyle \sin(kL)=0}, {\displaystyle kL} must be an integer multiple of {\displaystyle \pi },
{\displaystyle k={\frac {n\pi }{L}}\qquad \qquad n=1,2,3,\ldots .}
This constraint on {\displaystyle k} implies a constraint on the energy levels, yielding
{\displaystyle E_{n}={\frac {\hbar ^{2}\pi ^{2}n^{2}}{2mL^{2}}}={\frac {n^{2}h^{2}}{8mL^{2}}}.}
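For a concrete sense of scale, the formula E_n = n²h²/(8mL²) can be evaluated for an electron in a 1 nm box (the box width is an illustrative choice; constants are SI values):

```python
# Physical constants (SI)
h = 6.62607015e-34        # Planck constant, J s
m_e = 9.1093837015e-31    # electron mass, kg
eV = 1.602176634e-19      # joules per electron volt
L = 1e-9                  # box width: 1 nm (illustrative)

# E_n = n^2 h^2 / (8 m L^2); energies scale as n^2
E = {n: n**2 * h**2 / (8 * m_e * L**2) for n in (1, 2, 3)}
print({n: round(val / eV, 3) for n, val in E.items()})  # E_1 ≈ 0.376 eV
```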
A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy.
=== Harmonic oscillator ===
The Schrödinger equation for this situation is
{\displaystyle E\psi =-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}\psi +{\frac {1}{2}}m\omega ^{2}x^{2}\psi ,}
where {\displaystyle x} is the displacement and {\displaystyle \omega } the angular frequency. Furthermore, it can be used to describe approximately a wide variety of other systems, including vibrating atoms, molecules, and atoms or ions in lattices, and approximating other potentials near equilibrium points. It is also the basis of perturbation methods in quantum mechanics.
The solutions in position space are
{\displaystyle \psi _{n}(x)={\sqrt {\frac {1}{2^{n}\,n!}}}\ \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\ e^{-{\frac {m\omega x^{2}}{2\hbar }}}\ {\mathcal {H}}_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right),}
where {\displaystyle n\in \{0,1,2,\ldots \}}, and the functions {\displaystyle {\mathcal {H}}_{n}} are the Hermite polynomials of order {\displaystyle n}. The solution set may be generated by
{\displaystyle \psi _{n}(x)={\frac {1}{\sqrt {n!}}}\left({\sqrt {\frac {m\omega }{2\hbar }}}\right)^{n}\left(x-{\frac {\hbar }{m\omega }}{\frac {d}{dx}}\right)^{n}\left({\frac {m\omega }{\pi \hbar }}\right)^{\frac {1}{4}}e^{\frac {-m\omega x^{2}}{2\hbar }}.}
The eigenvalues are
{\displaystyle E_{n}=\left(n+{\frac {1}{2}}\right)\hbar \omega .}
The case {\displaystyle n=0} is called the ground state, its energy is called the zero-point energy, and the wave function is a Gaussian.
The harmonic oscillator, like the particle in a box, illustrates the generic feature of the Schrödinger equation that the energies of bound eigenstates are discretized.: 352
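This discreteness can be seen numerically: diagonalizing a finite-difference discretization of the oscillator Hamiltonian reproduces E_n = (n + 1/2)ħω. A sketch in units ħ = m = ω = 1 (grid extent and spacing are illustrative choices):

```python
import numpy as np

# Discretize H = -1/2 d^2/dx^2 + x^2/2 on a grid (hbar = m = omega = 1)
N = 1200
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]

main = 1.0 / dx**2 + 0.5 * x**2          # diagonal: kinetic + potential
off = -0.5 / dx**2 * np.ones(N - 1)      # off-diagonal of -1/2 d^2/dx^2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:4]
# Lowest eigenvalues approximate n + 1/2: 0.5, 1.5, 2.5, 3.5
assert np.allclose(E, [0.5, 1.5, 2.5, 3.5], atol=1e-3)
```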
=== Hydrogen atom ===
The Schrödinger equation for the electron in a hydrogen atom (or a hydrogen-like atom) is
{\displaystyle E\psi =-{\frac {\hbar ^{2}}{2\mu }}\nabla ^{2}\psi -{\frac {q^{2}}{4\pi \varepsilon _{0}r}}\psi }
where {\displaystyle q} is the electron charge, {\displaystyle \mathbf {r} } is the position of the electron relative to the nucleus, {\displaystyle r=|\mathbf {r} |} is the magnitude of the relative position, the potential term is due to the Coulomb interaction, wherein {\displaystyle \varepsilon _{0}} is the permittivity of free space, and
{\displaystyle \mu ={\frac {m_{q}m_{p}}{m_{q}+m_{p}}}}
is the 2-body reduced mass of the hydrogen nucleus (just a proton) of mass {\displaystyle m_{p}} and the electron of mass {\displaystyle m_{q}}. The negative sign arises in the potential term since the proton and electron are oppositely charged. The reduced mass is used in place of the electron mass since the electron and proton together orbit each other about a common center of mass, and constitute a two-body problem to solve. The motion of the electron is of principal interest here, so the equivalent one-body problem is the motion of the electron using the reduced mass.
The Schrödinger equation for a hydrogen atom can be solved by separation of variables. In this case, spherical polar coordinates are the most convenient. Thus,
{\displaystyle \psi (r,\theta ,\varphi )=R(r)Y_{\ell }^{m}(\theta ,\varphi )=R(r)\Theta (\theta )\Phi (\varphi ),}
where R are radial functions and {\displaystyle Y_{l}^{m}(\theta ,\varphi )} are spherical harmonics of degree {\displaystyle \ell } and order {\displaystyle m}. This is the only atom for which the Schrödinger equation has been solved exactly. Multi-electron atoms require approximate methods. The family of solutions is:
{\displaystyle \psi _{n\ell m}(r,\theta ,\varphi )={\sqrt {\left({\frac {2}{na_{0}}}\right)^{3}{\frac {(n-\ell -1)!}{2n[(n+\ell )!]}}}}e^{-r/na_{0}}\left({\frac {2r}{na_{0}}}\right)^{\ell }L_{n-\ell -1}^{2\ell +1}\left({\frac {2r}{na_{0}}}\right)\cdot Y_{\ell }^{m}(\theta ,\varphi )}
where {\displaystyle a_{0}={\frac {4\pi \varepsilon _{0}\hbar ^{2}}{m_{q}q^{2}}}} is the Bohr radius, {\displaystyle L_{n-\ell -1}^{2\ell +1}(\cdots )} are the generalized Laguerre polynomials of degree {\displaystyle n-\ell -1}, and {\displaystyle n,\ell ,m} are the principal, azimuthal, and magnetic quantum numbers respectively, which take the values
{\displaystyle n=1,2,3,\dots ,}
{\displaystyle \ell =0,1,2,\dots ,n-1,}
{\displaystyle m=-\ell ,\dots ,\ell .}
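For n = 1 and ℓ = m = 0 the formula above reduces to the ground state ψ₁₀₀ = e^(−r/a₀)/√(πa₀³); its normalization can be checked by integrating |ψ|² over all space. A sketch in units a₀ = 1:

```python
import numpy as np
from scipy.integrate import quad

a0 = 1.0  # Bohr radius (atomic units)

def psi_100(r):
    # Hydrogen ground-state wave function, n=1, l=m=0
    return np.exp(-r / a0) / np.sqrt(np.pi * a0**3)

# Integrate |psi|^2 over all space using spherical shells of area 4*pi*r^2
norm, _ = quad(lambda r: 4 * np.pi * r**2 * psi_100(r)**2, 0, np.inf)
assert np.isclose(norm, 1.0)
```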
=== Approximate solutions ===
It is typically not possible to solve the Schrödinger equation exactly for situations of physical interest. Accordingly, approximate solutions are obtained using techniques like variational methods and the WKB approximation. It is also common to treat a problem of interest as a small modification to a problem that can be solved exactly, a method known as perturbation theory.
== Semiclassical limit ==
One simple way to compare classical to quantum mechanics is to consider the time-evolution of the expected position and expected momentum, which can then be compared to the time-evolution of the ordinary position and momentum in classical mechanics.: 302 The quantum expectation values satisfy the Ehrenfest theorem. For a one-dimensional quantum particle moving in a potential
{\displaystyle V}, the Ehrenfest theorem says
{\displaystyle m{\frac {d}{dt}}\langle x\rangle =\langle p\rangle ;\quad {\frac {d}{dt}}\langle p\rangle =-\left\langle V'(X)\right\rangle .}
Although the first of these equations is consistent with the classical behavior, the second is not: If the pair
{\displaystyle (\langle X\rangle ,\langle P\rangle )}
were to satisfy Newton's second law, the right-hand side of the second equation would have to be
{\displaystyle -V'\left(\left\langle X\right\rangle \right)}
which is typically not the same as
{\displaystyle -\left\langle V'(X)\right\rangle }
. For a general
{\displaystyle V'}
, therefore, quantum mechanics can lead to predictions where expectation values do not mimic the classical behavior. In the case of the quantum harmonic oscillator, however,
{\displaystyle V'}
is linear and this distinction disappears, so that in this very special case, the expected position and expected momentum do exactly follow the classical trajectories.
For general systems, the best we can hope for is that the expected position and momentum will approximately follow the classical trajectories. If the wave function is highly concentrated around a point
{\displaystyle x_{0}}
, then
{\displaystyle V'\left(\left\langle X\right\rangle \right)}
and
{\displaystyle \left\langle V'(X)\right\rangle }
will be almost the same, since both will be approximately equal to
{\displaystyle V'(x_{0})}
. In that case, the expected position and expected momentum will remain very close to the classical trajectories, at least for as long as the wave function remains highly localized in position.
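The difference between ⟨V′(X)⟩ and V′(⟨X⟩), and its disappearance for narrow wave packets, can be illustrated numerically. The sketch below (not from the article) uses an assumed quartic potential V(x) = x⁴, so V′(x) = 4x³, and a Gaussian probability density of adjustable width:

```python
import numpy as np

# Compare <V'(X)> with V'(<X>) for a Gaussian density centered at x0 with
# width sigma. For the anharmonic V(x) = x^4 the two differ in general,
# but the gap shrinks as the packet narrows.
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]

def compare(x0, sigma):
    p = np.exp(-(x - x0) ** 2 / (2.0 * sigma ** 2))
    p /= np.sum(p) * dx                      # normalize the density
    mean_Vp = np.sum(4.0 * x ** 3 * p) * dx  # <V'(X)>
    mean_x = np.sum(x * p) * dx              # <X>
    return mean_Vp, 4.0 * mean_x ** 3        # vs. V'(<X>)

for sigma in (1.0, 0.1, 0.01):
    lhs, rhs = compare(1.0, sigma)
    print(sigma, abs(lhs - rhs))             # gap shrinks with sigma
```

For this potential the gap is exactly 12·x₀·σ², which vanishes as the packet localizes, in line with the text.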
The Schrödinger equation in its general form
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi \left(\mathbf {r} ,t\right)={\hat {H}}\Psi \left(\mathbf {r} ,t\right)}
is closely related to the Hamilton–Jacobi equation (HJE)
{\displaystyle -{\frac {\partial }{\partial t}}S(q_{i},t)=H\left(q_{i},{\frac {\partial S}{\partial q_{i}}},t\right)}
where
{\displaystyle S}
is the classical action and
{\displaystyle H}
is the Hamiltonian function (not operator). Here the generalized coordinates
{\displaystyle q_{i}}
for
{\displaystyle i=1,2,3}
(used in the context of the HJE) can be set to the position in Cartesian coordinates as
{\displaystyle \mathbf {r} =(q_{1},q_{2},q_{3})=(x,y,z)}.
Substituting
{\displaystyle \Psi ={\sqrt {\rho (\mathbf {r} ,t)}}e^{iS(\mathbf {r} ,t)/\hbar }}
where
{\displaystyle \rho }
is the probability density, into the Schrödinger equation and then taking the limit
{\displaystyle \hbar \to 0}
in the resulting equation yields the Hamilton–Jacobi equation.
== Density matrices ==
Wave functions are not always the most convenient way to describe quantum systems and their behavior. When the preparation of a system is only imperfectly known, or when the system under investigation is a part of a larger whole, density matrices may be used instead. A density matrix is a positive semi-definite operator whose trace is equal to 1. (The term "density operator" is also used, particularly when the underlying Hilbert space is infinite-dimensional.) The set of all density matrices is convex, and the extreme points are the operators that project onto vectors in the Hilbert space. These are the density-matrix representations of wave functions; in Dirac notation, they are written
{\displaystyle {\hat {\rho }}=|\Psi \rangle \langle \Psi |.}
The density-matrix analogue of the Schrödinger equation for wave functions is
{\displaystyle i\hbar {\frac {\partial {\hat {\rho }}}{\partial t}}=[{\hat {H}},{\hat {\rho }}],}
where the brackets denote a commutator. This is variously known as the von Neumann equation, the Liouville–von Neumann equation, or just the Schrödinger equation for density matrices. If the Hamiltonian is time-independent, this equation can be easily solved to yield
{\displaystyle {\hat {\rho }}(t)=e^{-i{\hat {H}}t/\hbar }{\hat {\rho }}(0)e^{i{\hat {H}}t/\hbar }.}
More generally, if the unitary operator
{\displaystyle {\hat {U}}(t)}
describes wave function evolution over some time interval, then the time evolution of a density matrix over that same interval is given by
{\displaystyle {\hat {\rho }}(t)={\hat {U}}(t){\hat {\rho }}(0){\hat {U}}(t)^{\dagger }.}
Unitary evolution of a density matrix conserves its von Neumann entropy.
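These statements can be checked concretely. The sketch below (with ℏ = 1 and example matrices chosen arbitrarily) builds the propagator from the eigendecomposition of a Hermitian Hamiltonian, evolves a mixed qubit state, and confirms that the trace and the von Neumann entropy are conserved:

```python
import numpy as np

# Solution of the von Neumann equation for time-independent H:
# rho(t) = exp(-iHt) rho(0) exp(+iHt), with hbar = 1.
def evolve(rho0, H, t):
    E, V = np.linalg.eigh(H)                          # H = V diag(E) V^dagger
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
    return U @ rho0 @ U.conj().T

def von_neumann_entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                                  # drop zero eigenvalues
    return float(-np.sum(p * np.log(p)))

H = np.array([[1.0, 0.5], [0.5, -1.0]])               # an arbitrary Hermitian H
rho0 = np.array([[0.75, 0.1], [0.1, 0.25]])           # a mixed state, trace 1
rho_t = evolve(rho0, H, t=2.0)

print(round(np.trace(rho_t).real, 10))                # trace preserved: 1.0
print(von_neumann_entropy(rho0) - von_neumann_entropy(rho_t))  # ~ 0
```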
== Relativistic quantum physics and quantum field theory ==
The one-particle Schrödinger equation described above is valid essentially in the nonrelativistic domain. For one reason, it is essentially invariant under Galilean transformations, which form the symmetry group of Newtonian dynamics. Moreover, processes that change particle number are natural in relativity, and so an equation for one particle (or any fixed number thereof) can only be of limited use. A more general form of the Schrödinger equation that also applies in relativistic situations can be formulated within quantum field theory (QFT), a framework that allows the combination of quantum mechanics with special relativity. The region in which both simultaneously apply may be described by relativistic quantum mechanics. Such descriptions may use time evolution generated by a Hamiltonian operator, as in the Schrödinger functional method.
=== Klein–Gordon and Dirac equations ===
Attempts to combine quantum physics with special relativity began with building relativistic wave equations from the relativistic energy–momentum relation
{\displaystyle E^{2}=(pc)^{2}+\left(m_{0}c^{2}\right)^{2},}
instead of nonrelativistic energy equations. The Klein–Gordon equation and the Dirac equation are two such equations. The Klein–Gordon equation,
{\displaystyle -{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\psi +\nabla ^{2}\psi ={\frac {m^{2}c^{2}}{\hbar ^{2}}}\psi ,}
was the first such equation to be obtained, even before the nonrelativistic one-particle Schrödinger equation, and applies to massive spinless particles. Historically, Dirac obtained the Dirac equation by seeking a differential equation that would be first-order in both time and space, a desirable property for a relativistic theory. Taking the "square root" of the left-hand side of the Klein–Gordon equation in this way required factorizing it into a product of two operators, which Dirac wrote using 4 × 4 matrices
{\displaystyle \alpha _{1},\alpha _{2},\alpha _{3},\beta }
. Consequently, the wave function also became a four-component function, governed by the Dirac equation that, in free space, read
{\displaystyle \left(\beta mc^{2}+c\left(\sum _{n\mathop {=} 1}^{3}\alpha _{n}p_{n}\right)\right)\psi =i\hbar {\frac {\partial \psi }{\partial t}}.}
This has again the form of the Schrödinger equation, with the time derivative of the wave function being given by a Hamiltonian operator acting upon the wave function. Including influences upon the particle requires modifying the Hamiltonian operator. For example, the Dirac Hamiltonian for a particle of mass m and electric charge q in an electromagnetic field (described by the electromagnetic potentials φ and A) is:
{\displaystyle {\hat {H}}_{\text{Dirac}}=\gamma ^{0}\left[c{\boldsymbol {\gamma }}\cdot \left({\hat {\mathbf {p} }}-q\mathbf {A} \right)+mc^{2}+\gamma ^{0}q\varphi \right],}
in which the γ = (γ1, γ2, γ3) and γ0 are the Dirac gamma matrices related to the spin of the particle. The Dirac equation is true for all spin-1⁄2 particles, and the solutions to the equation are 4-component spinor fields with two components corresponding to the particle and the other two for the antiparticle.
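The algebraic requirements Dirac imposed on the matrices can be verified numerically. The sketch below uses the standard Dirac representation with ℏ = c = m = 1 and an arbitrary momentum vector (values chosen for illustration), and checks that squaring the free Hamiltonian reproduces the energy–momentum relation:

```python
import numpy as np

# Check the Dirac algebra: {alpha_i, alpha_j} = 2 delta_ij I,
# {alpha_i, beta} = 0, beta^2 = I, which makes H^2 = p^2 + m^2 (c = m = 1).
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

alphas = [block(Z, s, s, Z) for s in (sx, sy, sz)]   # Dirac representation
beta = block(I2, Z, Z, -I2)

for i, ai in enumerate(alphas):
    assert np.allclose(ai @ beta + beta @ ai, 0)     # alpha_i anticommutes with beta
    for j, aj in enumerate(alphas):
        target = 2 * np.eye(4) if i == j else np.zeros((4, 4))
        assert np.allclose(ai @ aj + aj @ ai, target)

# Free-particle Hamiltonian H = beta m + sum_n alpha_n p_n (m = 1 here)
p = np.array([0.3, -0.5, 0.7])
H = beta + sum(pn * an for pn, an in zip(p, alphas))
print(np.allclose(H @ H, (1 + p @ p) * np.eye(4)))   # True: H^2 = p^2 + m^2
```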
For the Klein–Gordon equation, the general form of the Schrödinger equation is inconvenient to use, and in practice the Hamiltonian is not expressed in an analogous way to the Dirac Hamiltonian. The equations for relativistic quantum fields, of which the Klein–Gordon and Dirac equations are two examples, can be obtained in other ways, such as starting from a Lagrangian density and using the Euler–Lagrange equations for fields, or using the representation theory of the Lorentz group in which certain representations can be used to fix the equation for a free particle of given spin (and mass).
In general, the Hamiltonian to be substituted in the general Schrödinger equation is not just a function of the position and momentum operators (and possibly time), but also of spin matrices. Also, the solutions to a relativistic wave equation, for a massive particle of spin s, are complex-valued 2(2s + 1)-component spinor fields.
=== Fock space ===
As originally formulated, the Dirac equation is an equation for a single quantum particle, just like the single-particle Schrödinger equation with wave function
{\displaystyle \Psi (x,t)}
. This is of limited use in relativistic quantum mechanics, where particle number is not fixed. Heuristically, this complication can be motivated by noting that mass–energy equivalence implies material particles can be created from energy. A common way to address this in QFT is to introduce a Hilbert space where the basis states are labeled by particle number, a so-called Fock space. The Schrödinger equation can then be formulated for quantum states on this Hilbert space. However, because the Schrödinger equation picks out a preferred time axis, the Lorentz invariance of the theory is no longer manifest, and accordingly, the theory is often formulated in other ways.
== History ==
Following Max Planck's quantization of light (see black-body radiation), Albert Einstein interpreted Planck's quanta to be photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, one of the first signs of wave–particle duality. Since energy and momentum are related in the same way as frequency and wave number in special relativity, it followed that the momentum
{\displaystyle p}
of a photon is inversely proportional to its wavelength
{\displaystyle \lambda }
, or proportional to its wave number
{\displaystyle k}
:
{\displaystyle p={\frac {h}{\lambda }}=\hbar k,}
where
{\displaystyle h}
is the Planck constant and
{\displaystyle \hbar ={h}/{2\pi }}
is the reduced Planck constant. Louis de Broglie hypothesized that this is true for all particles, even particles which have mass such as electrons. He showed that, assuming that the matter waves propagate along with their particle counterparts, electrons form standing waves, meaning that only certain discrete rotational frequencies about the nucleus of an atom are allowed.
These quantized orbits correspond to discrete energy levels, and de Broglie reproduced the Bohr model formula for the energy levels. The Bohr model was based on the assumed quantization of angular momentum
{\displaystyle L}
according to
{\displaystyle L=n{\frac {h}{2\pi }}=n\hbar .}
According to de Broglie, the electron is described by a wave, and a whole number of wavelengths must fit along the circumference of the electron's orbit:
{\displaystyle n\lambda =2\pi r.}
This approach essentially confined the electron wave in one dimension, along a circular orbit of radius
{\displaystyle r}.
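The chain of reasoning above, combining p = h/λ with the standing-wave condition nλ = 2πr, gives L = pr = nℏ, which is exactly Bohr's quantization rule. A few lines make the check explicit (units with ℏ = 1; the orbit radius is arbitrary):

```python
import numpy as np

# De Broglie's standing-wave condition reproduces Bohr's L = n*hbar:
# from n*lambda = 2*pi*r and p = h/lambda, L = p*r = n*hbar for any r.
hbar = 1.0
h = 2 * np.pi * hbar
r = 1.7                          # any orbit radius (arbitrary choice)
for n in (1, 2, 3):
    lam = 2 * np.pi * r / n      # wavelength fixed by n*lambda = 2*pi*r
    p = h / lam                  # de Broglie relation
    L = p * r                    # angular momentum of a circular orbit
    print(n, L)                  # L = n * hbar, independent of r
```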
In 1921, prior to de Broglie, Arthur C. Lunn at the University of Chicago had used the same argument based on the completion of the relativistic energy–momentum 4-vector to derive what we now call the de Broglie relation. Unlike de Broglie, Lunn went on to formulate the differential equation now known as the Schrödinger equation and solve for its energy eigenvalues for the hydrogen atom; the paper was rejected by the Physical Review, according to Kamen.
Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Schrödinger decided to find a proper 3-dimensional wave equation for the electron. He was guided by William Rowan Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system—the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action.
The equation he found is
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\Psi (\mathbf {r} ,t)+V(\mathbf {r} )\Psi (\mathbf {r} ,t).}
By that time Arnold Sommerfeld had refined the Bohr model with relativistic corrections. Schrödinger used the relativistic energy–momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (in natural units):
{\displaystyle \left(E+{\frac {e^{2}}{r}}\right)^{2}\psi (x)=-\nabla ^{2}\psi (x)+m^{2}\psi (x).}
He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself with a mistress in a mountain cabin in December 1925.
While at the cabin, Schrödinger decided that his earlier nonrelativistic calculations were novel enough to publish on their own, and chose to leave the problem of relativistic corrections for the future. Despite the difficulties in solving the differential equation for hydrogen (he had sought help from his friend the mathematician Hermann Weyl), Schrödinger showed that his nonrelativistic version of the wave equation produced the correct spectral energies of hydrogen in a paper published in 1926. Schrödinger computed the hydrogen spectral series by treating a hydrogen atom's electron as a wave
{\displaystyle \Psi (\mathbf {x} ,t)}
, moving in a potential well
{\displaystyle V}
, created by the proton. This computation accurately reproduced the energy levels of the Bohr model.
The Schrödinger equation details the behavior of
{\displaystyle \Psi }
but says nothing of its nature. Schrödinger tried to interpret the real part of
{\displaystyle \Psi {\frac {\partial \Psi ^{*}}{\partial t}}}
as a charge density, and then revised this proposal, saying in his next paper that the modulus squared of
{\displaystyle \Psi }
is a charge density. This approach was, however, unsuccessful. In 1926, just a few days after this paper was published, Max Born successfully interpreted
{\displaystyle \Psi }
as the probability amplitude, whose modulus squared is equal to probability density. Later, Schrödinger himself explained this interpretation as follows:
The already ... mentioned psi-function.... is now the means for predicting probability of measurement results. In it is embodied the momentarily attained sum of theoretically based future expectation, somewhat as laid down in a catalog.
== Interpretation ==
The Schrödinger equation provides a way to calculate the wave function of a system and how it changes dynamically in time. However, the Schrödinger equation does not directly say what, exactly, the wave function is. The meaning of the Schrödinger equation and how the mathematical entities in it relate to physical reality depends upon the interpretation of quantum mechanics that one adopts.
In the views often grouped together as the Copenhagen interpretation, a system's wave function is a collection of statistical information about that system. The Schrödinger equation relates information about the system at one time to information about it at another. While the time-evolution process represented by the Schrödinger equation is continuous and deterministic, in that knowing the wave function at one instant is in principle sufficient to calculate it for all future times, wave functions can also change discontinuously and stochastically during a measurement. The wave function changes, according to this school of thought, because new information is available. The post-measurement wave function generally cannot be known prior to the measurement, but the probabilities for the different possibilities can be calculated using the Born rule. Other, more recent interpretations of quantum mechanics, such as relational quantum mechanics and QBism also give the Schrödinger equation a status of this sort.
Schrödinger himself suggested in 1952 that the different terms of a superposition evolving under the Schrödinger equation are "not alternatives but all really happen simultaneously". This has been interpreted as an early version of Everett's many-worlds interpretation. This interpretation, formulated independently in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This interpretation removes the axiom of wave function collapse, leaving only continuous evolution under the Schrödinger equation, and so all possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Why should we assign probabilities at all to outcomes that are certain to occur in some worlds, and why should the probabilities be given by the Born rule? Several ways to answer these questions in the many-worlds framework have been proposed, but there is no consensus on whether they are successful.
Bohmian mechanics reformulates quantum mechanics to make it deterministic, at the price of adding a force due to a "quantum potential". It attributes to each physical system not only a wave function but in addition a real position that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation.
== See also ==
== Notes ==
== References ==
== External links ==
"Schrödinger equation". Encyclopedia of Mathematics. EMS Press. 2001 [1994].
Quantum Cook Book (PDF) and PHYS 201: Fundamentals of Physics II by Ramamurti Shankar, Yale OpenCourseware
The Modern Revolution in Physics – an online textbook.
Quantum Physics I at MIT OpenCourseWare | Wikipedia/Schrödinger_equation |
Molecular physics is the study of the physical properties of molecules and molecular dynamics. The field overlaps significantly with physical chemistry, chemical physics, and quantum chemistry. It is often considered as a sub-field of atomic, molecular, and optical physics. Research groups studying molecular physics are typically designated as one of these other fields. Molecular physics addresses phenomena due to both molecular structure and individual atomic processes within molecules. Like atomic physics, it relies on a combination of classical and quantum mechanics to describe interactions between electromagnetic radiation and matter. Experiments in the field often rely heavily on techniques borrowed from atomic physics, such as spectroscopy and scattering.
== Molecular structure ==
In a molecule, both the electrons and nuclei experience similar-scale forces from the Coulomb interaction. However, the nuclei remain at nearly fixed locations in the molecule while the electrons move significantly. This picture of a molecule is based on the idea that nucleons are much heavier than electrons, so will move much less in response to the same force. Neutron scattering experiments on molecules have been used to verify this description.
=== Molecular energy levels and spectra ===
When atoms join into molecules, their inner electrons remain bound to their original nucleus while the outer valence electrons are distributed around the molecule. The charge distribution of these valence electrons determines the electronic energy level of a molecule, and can be described by molecular orbital theory, which closely follows the atomic orbital theory used for single atoms. Assuming that the momenta of the electrons are on the order of ħ/a (where ħ is the reduced Planck constant and a is the average internuclear distance within a molecule, ~ 1 Å), the magnitude of the energy spacing for electronic states can be estimated at a few electron volts. This is the case for most low-lying molecular energy states, and corresponds to transitions in the visible and ultraviolet regions of the electromagnetic spectrum.
In addition to the electronic energy levels shared with atoms, molecules have additional quantized energy levels corresponding to vibrational and rotational states. Vibrational energy levels refer to motion of the nuclei about their equilibrium positions in the molecule. The approximate energy spacing of these levels can be estimated by treating each nucleus as a quantum harmonic oscillator in the potential produced by the molecule, and comparing its associated frequency to that of an electron experiencing the same potential. The result is an energy spacing about 100× smaller than that for electronic levels. In agreement with this estimate, vibrational spectra show transitions in the near infrared (about 1–5 μm). Finally, rotational energy states describe semi-rigid rotation of the entire molecule and produce transition wavelengths in the far infrared and microwave regions (about 100-10,000 μm in wavelength). These are the smallest energy spacings, and their size can be understood by comparing the energy of a diatomic molecule with internuclear spacing ~ 1 Å to the energy of a valence electron (estimated above as ~ ħ/a).
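These order-of-magnitude estimates can be reproduced numerically. The sketch below uses SI constants, with the nuclear mass M set to an assumed light-nucleus value of 10 atomic mass units (an illustrative choice, not a value from the text):

```python
import numpy as np

# Order-of-magnitude molecular energy scales, following the text's estimates:
# electronic spacing ~ hbar^2 / (m_e a^2) with a ~ 1 angstrom, and
# vibrational ~ sqrt(m_e / M) times the electronic scale.
hbar = 1.054571817e-34       # J s
m_e = 9.1093837015e-31       # kg
eV = 1.602176634e-19         # J
a = 1e-10                    # typical internuclear distance, 1 angstrom
M = 10 * 1.66053906660e-27   # assumed light nucleus, ~10 atomic mass units

E_elec = hbar**2 / (m_e * a**2)
E_vib = np.sqrt(m_e / M) * E_elec
print(E_elec / eV)           # a few eV: visible/ultraviolet transitions
print(E_vib / eV)            # roughly two orders of magnitude smaller
```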
Actual molecular spectra also show transitions which simultaneously couple electronic, vibrational, and rotational states. For example, transitions involving both rotational and vibrational states are often referred to as rotational-vibrational or rovibrational transitions. Vibronic transitions combine electronic and vibrational transitions, and rovibronic transitions combine electronic, rotational, and vibrational transitions. Due to the very different frequencies associated with each type of transition, the wavelengths associated with these mixed transitions vary across the electromagnetic spectrum.
== Experiments ==
In general, the goals of molecular physics experiments are to characterize shape and size, electric and magnetic properties, internal energy levels, and ionization and dissociation energies for molecules. In terms of shape and size, rotational spectra and vibrational spectra allow for the determination of molecular moments of inertia, which allows for calculations of internuclear distances in molecules. X-ray diffraction allows determination of internuclear spacing directly, especially for molecules containing heavy elements. All branches of spectroscopy contribute to determination of molecular energy levels due to the wide range of applicable energies (ultraviolet to microwave regimes).
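As a sketch of how a rotational spectrum yields an internuclear distance, the following applies the standard rigid-rotor relation I = h/(8π²cB) to a diatomic molecule. The rotational constant used is a textbook literature value for CO (an assumption for illustration, not a number from this article):

```python
import numpy as np

# From a rotational constant B (in cm^-1) to a bond length:
# I = h / (8 pi^2 c B) and I = mu r^2 for a diatomic rigid rotor.
h = 6.62607015e-34        # J s
c = 2.99792458e10         # speed of light in cm/s, since B is in cm^-1
u = 1.66053906660e-27     # kg per atomic mass unit

B = 1.9225                # rotational constant of CO, cm^-1 (literature value)
mu = (12.0 * 15.995) / (12.0 + 15.995) * u   # reduced mass of 12C 16O

I = h / (8 * np.pi**2 * c * B)               # moment of inertia, kg m^2
r = np.sqrt(I / mu)                          # internuclear distance, m
print(r * 1e10)                              # bond length ~ 1.13 angstrom
```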
=== Current research ===
Within atomic, molecular, and optical physics, there are numerous studies using molecules to verify fundamental constants and probe for physics beyond the Standard Model. Certain molecular structures are predicted to be sensitive to new physics phenomena, such as parity and time-reversal violation. Molecules are also considered a potential future platform for trapped ion quantum computing, as their more complex energy level structure could facilitate higher efficiency encoding of quantum information than individual atoms. From a chemical physics perspective, intramolecular vibrational energy redistribution experiments use vibrational spectra to determine how energy is redistributed between different quantum states of a vibrationally excited molecule.
== See also ==
== Sources ==
ATOMIC, MOLECULAR AND OPTICAL PHYSICS: NEW RESEARCH by L.T. Chen; Nova Science Publishers, Inc. New York
== References == | Wikipedia/Molecular_physics |
In quantum physics, a measurement is the testing or manipulation of a physical system to yield a numerical result. A fundamental feature of quantum theory is that the predictions it makes are probabilistic. The procedure for finding a probability involves combining a quantum state, which mathematically describes a quantum system, with a mathematical representation of the measurement to be performed on that system. The formula for this calculation is known as the Born rule. For example, a quantum particle like an electron can be described by a quantum state that associates to each point in space a complex number called a probability amplitude. Applying the Born rule to these amplitudes gives the probabilities that the electron will be found in one region or another when an experiment is performed to locate it. This is the best the theory can do; it cannot say for certain where the electron will be found. The same quantum state can also be used to make a prediction of how the electron will be moving, if an experiment is performed to measure its momentum instead of its position. The uncertainty principle implies that, whatever the quantum state, the range of predictions for the electron's position and the range of predictions for its momentum cannot both be narrow. Some quantum states imply a near-certain prediction of the result of a position measurement, but the result of a momentum measurement will be highly unpredictable, and vice versa. Furthermore, the fact that nature violates the statistical conditions known as Bell inequalities indicates that the unpredictability of quantum measurement results cannot be explained away as due to ignorance about "local hidden variables" within quantum systems.
Measuring a quantum system generally changes the quantum state that describes that system. This is a central feature of quantum mechanics, one that is both mathematically intricate and conceptually subtle. The mathematical tools for making predictions about what measurement outcomes may occur, and how quantum states can change, were developed during the 20th century and make use of linear algebra and functional analysis. Quantum physics has proven to be an empirical success and to have wide-ranging applicability. However, on a more philosophical level, debates continue about the meaning of the measurement concept.
== Mathematical formalism ==
=== "Observables" as self-adjoint operators ===
In quantum mechanics, each physical system is associated with a Hilbert space, each element of which represents a possible state of the physical system. The approach codified by John von Neumann represents a measurement upon a physical system by a self-adjoint operator on that Hilbert space termed an "observable". These observables play the role of measurable quantities familiar from classical physics: position, momentum, energy, angular momentum and so on. The dimension of the Hilbert space may be infinite, as it is for the space of square-integrable functions on a line, which is used to define the quantum physics of a continuous degree of freedom. Alternatively, the Hilbert space may be finite-dimensional, as occurs for spin degrees of freedom. Many treatments of the theory focus on the finite-dimensional case, as the mathematics involved is somewhat less demanding. Indeed, introductory physics texts on quantum mechanics often gloss over mathematical technicalities that arise for continuous-valued observables and infinite-dimensional Hilbert spaces, such as the distinction between bounded and unbounded operators; questions of convergence (whether the limit of a sequence of Hilbert-space elements also belongs to the Hilbert space); exotic possibilities for sets of eigenvalues, like Cantor sets; and so forth. These issues can be satisfactorily resolved using spectral theory; the present article will avoid them whenever possible.
=== Projective measurement ===
The eigenvectors of a von Neumann observable form an orthonormal basis for the Hilbert space, and each possible outcome of that measurement corresponds to one of the vectors comprising the basis. A density operator is a positive-semidefinite operator on the Hilbert space whose trace is equal to 1. For each measurement that can be defined, the probability distribution over the outcomes of that measurement can be computed from the density operator. The procedure for doing so is the Born rule, which states that
{\displaystyle P(x_{i})=\operatorname {tr} (\Pi _{i}\rho ),}
where
{\displaystyle \rho }
is the density operator, and
{\displaystyle \Pi _{i}}
is the projection operator onto the basis vector corresponding to the measurement outcome
{\displaystyle x_{i}}
. The average of the eigenvalues of a von Neumann observable, weighted by the Born rule probabilities, is the expectation value of that observable. For an observable
{\displaystyle A}
, the expectation value given a quantum state
{\displaystyle \rho }
is
{\displaystyle \langle A\rangle =\operatorname {tr} (A\rho ).}
A density operator that is a rank-1 projection is known as a pure quantum state, and all quantum states that are not pure are designated mixed. Pure states are also known as wavefunctions. Assigning a pure state to a quantum system implies certainty about the outcome of some measurement on that system (i.e.,
{\displaystyle P(x)=1}
for some outcome
{\displaystyle x}
). Any mixed state can be written as a convex combination of pure states, though not in a unique way. The state space of a quantum system is the set of all states, pure and mixed, that can be assigned to it.
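A minimal numerical illustration of the Born rule and the expectation-value formula for a qubit (the state and observable below are arbitrary examples):

```python
import numpy as np

# Born rule for a projective measurement: P(x_i) = tr(Pi_i rho),
# and the expectation value <A> = tr(A rho).
rho = np.array([[0.7, 0.2], [0.2, 0.3]])        # a mixed qubit state, trace 1

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
projectors = [np.outer(v, v) for v in basis]    # Pi_i = |i><i|

probs = [np.trace(P @ rho).real for P in projectors]
print(probs)                                    # [0.7, 0.3]; they sum to 1

A = np.array([[1.0, 0.0], [0.0, -1.0]])         # observable (Pauli Z)
print(np.trace(A @ rho).real)                   # expectation value, ~0.4
```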
The Born rule associates a probability with each unit vector in the Hilbert space, in such a way that these probabilities sum to 1 for any set of unit vectors comprising an orthonormal basis. Moreover, the probability associated with a unit vector is a function of the density operator and the unit vector, and not of additional information like a choice of basis for that vector to be embedded in. Gleason's theorem establishes the converse: all assignments of probabilities to unit vectors (or, equivalently, to the operators that project onto them) that satisfy these conditions take the form of applying the Born rule to some density operator.
=== Generalized measurement (POVM) ===
In functional analysis and quantum measurement theory, a positive-operator-valued measure (POVM) is a measure whose values are positive semi-definite operators on a Hilbert space. POVMs are a generalisation of projection-valued measures (PVMs) and, correspondingly, quantum measurements described by POVMs are a generalisation of quantum measurement described by PVMs. In rough analogy, a POVM is to a PVM what a mixed state is to a pure state. Mixed states are needed to specify the state of a subsystem of a larger system (see Schrödinger–HJW theorem); analogously, POVMs are necessary to describe the effect on a subsystem of a projective measurement performed on a larger system. POVMs are the most general kind of measurement in quantum mechanics, and can also be used in quantum field theory. They are extensively used in the field of quantum information.
In the simplest case, that of a POVM with a finite number of elements acting on a finite-dimensional Hilbert space, a POVM is a set of positive semi-definite matrices
{\displaystyle \{F_{i}\}}
on a Hilbert space
{\displaystyle {\mathcal {H}}}
that sum to the identity matrix,
{\displaystyle \sum _{i=1}^{n}F_{i}=\operatorname {I} .}
In quantum mechanics, the POVM element
{\displaystyle F_{i}}
is associated with the measurement outcome
i
{\displaystyle i}
, such that the probability of obtaining it when making a measurement on the quantum state
ρ
{\displaystyle \rho }
is given by
Prob
(
i
)
=
tr
(
ρ
F
i
)
{\displaystyle {\text{Prob}}(i)=\operatorname {tr} (\rho F_{i})}
,
where
tr
{\displaystyle \operatorname {tr} }
is the trace operator. When the quantum state being measured is a pure state
|
ψ
⟩
{\displaystyle |\psi \rangle }
this formula reduces to
Prob
(
i
)
=
tr
(
|
ψ
⟩
⟨
ψ
|
F
i
)
=
⟨
ψ
|
F
i
|
ψ
⟩
{\displaystyle {\text{Prob}}(i)=\operatorname {tr} (|\psi \rangle \langle \psi |F_{i})=\langle \psi |F_{i}|\psi \rangle }
.
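As a concrete illustration, the rule Prob(i) = tr(ρF_i) can be evaluated directly with NumPy. This is a sketch, not part of the article; the three-outcome "trine" POVM used here is an assumed example choice, and the function name `povm_probs` is illustrative.

```python
import numpy as np

def povm_probs(rho, elements):
    """Return the outcome probabilities tr(rho F_i) for each POVM element."""
    return np.array([np.trace(rho @ F).real for F in elements])

# Three "trine" states |phi_k> at 120-degree intervals on the Bloch equator;
# the elements F_k = (2/3)|phi_k><phi_k| sum to the identity.
trine = []
for k in range(3):
    theta = 2 * np.pi * k / 3
    phi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    trine.append((2 / 3) * np.outer(phi, phi.conj()))

assert np.allclose(sum(trine), np.eye(2))  # POVM completeness

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # pure state |0><0|
p = povm_probs(rho, trine)
print(p, p.sum())  # non-negative probabilities summing to 1
```

For the state |0⟩⟨0| the first outcome carries probability 2/3 and the other two 1/6 each, as tr(ρF_i) picks out the (0,0) entry of each element.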
=== State change due to measurement ===
A measurement upon a quantum system will generally bring about a change of the quantum state of that system. Writing a POVM does not provide the complete information necessary to describe this state-change process. To remedy this, further information is specified by decomposing each POVM element into a product:
{\displaystyle E_{i}=A_{i}^{\dagger }A_{i}.}
The Kraus operators
{\displaystyle A_{i}}
, named for Karl Kraus, provide a specification of the state-change process. They are not necessarily self-adjoint, but the products
{\displaystyle A_{i}^{\dagger }A_{i}}
are. If upon performing the measurement the outcome
{\displaystyle E_{i}}
is obtained, then the initial state
{\displaystyle \rho }
is updated to
{\displaystyle \rho \to \rho '={\frac {A_{i}\rho A_{i}^{\dagger }}{\mathrm {Prob} (i)}}={\frac {A_{i}\rho A_{i}^{\dagger }}{\operatorname {tr} (\rho E_{i})}}.}
An important special case is the Lüders rule, named for Gerhart Lüders. If the POVM is itself a PVM, then the Kraus operators can be taken to be the projectors onto the eigenspaces of the von Neumann observable:
{\displaystyle \rho \to \rho '={\frac {\Pi _{i}\rho \Pi _{i}}{\operatorname {tr} (\rho \Pi _{i})}}.}
If the initial state
{\displaystyle \rho }
is pure, and the projectors
{\displaystyle \Pi _{i}}
have rank 1, they can be written as projectors onto the vectors
{\displaystyle |\psi \rangle }
and
{\displaystyle |i\rangle }
, respectively. The formula thus simplifies to
{\displaystyle \rho =|\psi \rangle \langle \psi |\to \rho '={\frac {|i\rangle \langle i|\psi \rangle \langle \psi |i\rangle \langle i|}{|\langle i|\psi \rangle |^{2}}}=|i\rangle \langle i|.}
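The projective state-update rule can be sketched numerically as follows. This NumPy fragment is illustrative only (the helper name `luders_update` is not standard); it applies the Lüders rule Πρ Π/tr(ρΠ) to a pure qubit state.

```python
import numpy as np

def luders_update(rho, proj):
    """Post-measurement state Pi rho Pi / tr(rho Pi) for outcome `proj`."""
    prob = np.trace(rho @ proj).real
    return (proj @ rho @ proj) / prob, prob

# Pure qubit state |psi> = alpha|0> + beta|1>
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)
psi = np.array([alpha, beta], dtype=complex)
rho = np.outer(psi, psi.conj())

proj0 = np.diag([1.0, 0.0]).astype(complex)  # projector onto |0>
rho_after, prob = luders_update(rho, proj0)

print(prob)       # |alpha|^2 = 0.3
print(rho_after)  # the state collapses to |0><0|
```

The outcome probability is |α|² and the post-measurement state is the rank-1 projector |0⟩⟨0|, matching the simplified formula above.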
The Lüders rule has historically been known as the "reduction of the wave packet" or the "collapse of the wavefunction". The pure state
{\displaystyle |i\rangle }
implies a probability-one prediction for any von Neumann observable that has
{\displaystyle |i\rangle }
as an eigenvector. Introductory texts on quantum theory often express this by saying that if a quantum measurement is repeated in quick succession, the same outcome will occur both times. This is an oversimplification, since the physical implementation of a quantum measurement may involve a process like the absorption of a photon; after the measurement, the photon does not exist to be measured again.
We can define a linear, trace-preserving, completely positive map, by summing over all the possible post-measurement states of a POVM without the normalisation:
{\displaystyle \rho \to \sum _{i}A_{i}\rho A_{i}^{\dagger }.}
It is an example of a quantum channel, and can be interpreted as expressing how a quantum state changes if a measurement is performed but the result of that measurement is lost.
=== Examples ===
The prototypical example of a finite-dimensional Hilbert space is a qubit, a quantum system whose Hilbert space is 2-dimensional. A pure state for a qubit can be written as a linear combination of two orthogonal basis states
{\displaystyle |0\rangle }
and
{\displaystyle |1\rangle }
with complex coefficients:
{\displaystyle |\psi \rangle =\alpha |0\rangle +\beta |1\rangle }
A measurement in the
{\displaystyle (|0\rangle ,|1\rangle )}
basis will yield outcome
{\displaystyle |0\rangle }
with probability
{\displaystyle |\alpha |^{2}}
and outcome
{\displaystyle |1\rangle }
with probability
{\displaystyle |\beta |^{2}}
, so by normalization,
{\displaystyle |\alpha |^{2}+|\beta |^{2}=1.}
An arbitrary state for a qubit can be written as a linear combination of the Pauli matrices, which provide a basis for
{\displaystyle 2\times 2}
self-adjoint matrices:
{\displaystyle \rho ={\tfrac {1}{2}}\left(I+r_{x}\sigma _{x}+r_{y}\sigma _{y}+r_{z}\sigma _{z}\right),}
where the real numbers
{\displaystyle (r_{x},r_{y},r_{z})}
are the coordinates of a point within the unit ball and
{\displaystyle \sigma _{x}={\begin{pmatrix}0&1\\1&0\end{pmatrix}},\quad \sigma _{y}={\begin{pmatrix}0&-i\\i&0\end{pmatrix}},\quad \sigma _{z}={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}.}
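The Bloch parametrization above translates directly into code. The following NumPy sketch (illustrative, not from the article) builds ρ from Bloch coordinates and checks that it is a valid density matrix when the coordinates lie inside the unit ball:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_to_rho(r):
    """Qubit density matrix (I + r.sigma)/2 for Bloch vector r."""
    rx, ry, rz = r
    return 0.5 * (np.eye(2) + rx * sx + ry * sy + rz * sz)

rho = bloch_to_rho((0.3, -0.4, 0.5))   # |r| = sqrt(0.5) <= 1
eigvals = np.linalg.eigvalsh(rho)

print(np.trace(rho).real)  # unit trace
print(eigvals)             # eigenvalues (1 +/- |r|)/2, both non-negative
```

Since the Pauli matrices are traceless, tr ρ = 1 automatically; positivity holds exactly when |r| ≤ 1, with pure states on the surface of the ball.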
POVM elements can be represented likewise, though the trace of a POVM element is not fixed to equal 1. The Pauli matrices are traceless and orthogonal to one another with respect to the Hilbert–Schmidt inner product, and so the coordinates
{\displaystyle (r_{x},r_{y},r_{z})}
of the state
{\displaystyle \rho }
are the expectation values of the three von Neumann measurements defined by the Pauli matrices. If such a measurement is applied to a qubit, then by the Lüders rule, the state will update to the eigenvector of that Pauli matrix corresponding to the measurement outcome. The eigenvectors of
{\displaystyle \sigma _{z}}
are the basis states
{\displaystyle |0\rangle }
and
{\displaystyle |1\rangle }
, and a measurement of
{\displaystyle \sigma _{z}}
is often called a measurement in the "computational basis." After a measurement in the computational basis, the outcome of a
{\displaystyle \sigma _{x}}
or
{\displaystyle \sigma _{y}}
measurement is maximally uncertain.
A pair of qubits together form a system whose Hilbert space is 4-dimensional. One significant von Neumann measurement on this system is that defined by the Bell basis, a set of four maximally entangled states:
{\displaystyle {\begin{aligned}|\Phi ^{+}\rangle &={\frac {1}{\sqrt {2}}}(|0\rangle _{A}\otimes |0\rangle _{B}+|1\rangle _{A}\otimes |1\rangle _{B})\\|\Phi ^{-}\rangle &={\frac {1}{\sqrt {2}}}(|0\rangle _{A}\otimes |0\rangle _{B}-|1\rangle _{A}\otimes |1\rangle _{B})\\|\Psi ^{+}\rangle &={\frac {1}{\sqrt {2}}}(|0\rangle _{A}\otimes |1\rangle _{B}+|1\rangle _{A}\otimes |0\rangle _{B})\\|\Psi ^{-}\rangle &={\frac {1}{\sqrt {2}}}(|0\rangle _{A}\otimes |1\rangle _{B}-|1\rangle _{A}\otimes |0\rangle _{B})\end{aligned}}}
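The four Bell states can be constructed explicitly as vectors in the 4-dimensional two-qubit space. This NumPy sketch (an illustration, not from the article) builds them with tensor products and verifies that they form an orthonormal basis:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
s = 1 / np.sqrt(2)

# Rows are |Phi+>, |Phi->, |Psi+>, |Psi-> in the computational basis.
bell = np.array([
    s * (np.kron(ket0, ket0) + np.kron(ket1, ket1)),  # |Phi+>
    s * (np.kron(ket0, ket0) - np.kron(ket1, ket1)),  # |Phi->
    s * (np.kron(ket0, ket1) + np.kron(ket1, ket0)),  # |Psi+>
    s * (np.kron(ket0, ket1) - np.kron(ket1, ket0)),  # |Psi->
])

gram = bell @ bell.T  # Gram matrix of pairwise inner products
print(np.allclose(gram, np.eye(4)))  # True: an orthonormal basis
```

Because the Gram matrix is the identity, a von Neumann measurement in this basis is well defined, with the four projectors |Φ±⟩⟨Φ±| and |Ψ±⟩⟨Ψ±| summing to the identity.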
A common and useful example of quantum mechanics applied to a continuous degree of freedom is the quantum harmonic oscillator. This system is defined by the Hamiltonian
{\displaystyle {H}={\frac {{p}^{2}}{2m}}+{\frac {1}{2}}m\omega ^{2}{x}^{2},}
where
{\displaystyle {H}}
, the momentum operator
{\displaystyle {p}}
and the position operator
{\displaystyle {x}}
are self-adjoint operators on the Hilbert space of square-integrable functions on the real line. The energy eigenstates solve the time-independent Schrödinger equation:
{\displaystyle {H}|n\rangle =E_{n}|n\rangle .}
These eigenvalues can be shown to be given by
{\displaystyle E_{n}=\hbar \omega \left(n+{\tfrac {1}{2}}\right),}
and these values give the possible numerical outcomes of an energy measurement upon the oscillator. The set of possible outcomes of a position measurement on a harmonic oscillator is continuous, and so predictions are stated in terms of a probability density function
{\displaystyle P(x)}
that gives the probability of the measurement outcome lying in the infinitesimal interval from
{\displaystyle x}
to
{\displaystyle x+dx}.
== History of the measurement concept ==
=== The "old quantum theory" ===
The old quantum theory is a collection of results from the years 1900–1925 which predate modern quantum mechanics. The theory was never complete or self-consistent, but was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include Planck's calculation of the blackbody radiation spectrum, Einstein's explanation of the photoelectric effect, Einstein and Debye's work on the specific heat of solids, Bohr and van Leeuwen's proof that classical physics cannot account for diamagnetism, Bohr's model of the hydrogen atom and Arnold Sommerfeld's extension of the Bohr model to include relativistic effects.
The Stern–Gerlach experiment, proposed in 1921 and implemented in 1922, became a prototypical example of a quantum measurement having a discrete set of possible outcomes. In the original experiment, silver atoms were sent through a spatially varying magnetic field, which deflected them before they struck a detector screen, such as a glass slide. Particles with non-zero magnetic moment are deflected, due to the magnetic field gradient, from a straight path. The screen reveals discrete points of accumulation, rather than a continuous distribution, owing to the particles' quantized spin.
=== Transition to the "new" quantum theory ===
A 1925 paper by Heisenberg, known in English as "Quantum theoretical re-interpretation of kinematic and mechanical relations", marked a pivotal moment in the maturation of quantum physics. Heisenberg sought to develop a theory of atomic phenomena that relied only on "observable" quantities. At the time, and in contrast with the later standard presentation of quantum mechanics, Heisenberg did not regard the position of an electron bound within an atom as "observable". Instead, his principal quantities of interest were the frequencies of light emitted or absorbed by atoms.
The uncertainty principle dates to this period. It is frequently attributed to Heisenberg, who introduced the concept in analyzing a thought experiment where one attempts to measure an electron's position and momentum simultaneously. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position-momentum uncertainty principle is due to Kennard, Pauli, and Weyl, and its generalization to arbitrary pairs of noncommuting observables is due to Robertson and Schrödinger.
Writing
{\displaystyle {x}}
and
{\displaystyle {p}}
for the self-adjoint operators representing position and momentum respectively, a standard deviation of position can be defined as
{\displaystyle \sigma _{x}={\sqrt {\langle {x}^{2}\rangle -\langle {x}\rangle ^{2}}},}
and likewise for the momentum:
{\displaystyle \sigma _{p}={\sqrt {\langle {p}^{2}\rangle -\langle {p}\rangle ^{2}}}.}
The Kennard–Pauli–Weyl uncertainty relation is
{\displaystyle \sigma _{x}\sigma _{p}\geq {\frac {\hbar }{2}}.}
This inequality means that no preparation of a quantum particle can imply simultaneously precise predictions for a measurement of position and for a measurement of momentum. The Robertson inequality generalizes this to the case of an arbitrary pair of self-adjoint operators
{\displaystyle A}
and
{\displaystyle B}
. The commutator of these two operators is
{\displaystyle [A,B]=AB-BA,}
and this provides the lower bound on the product of standard deviations:
{\displaystyle \sigma _{A}\sigma _{B}\geq \left|{\frac {1}{2i}}\langle [A,B]\rangle \right|={\frac {1}{2}}\left|\langle [A,B]\rangle \right|.}
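The Robertson bound is easy to check numerically in a finite-dimensional setting. The following NumPy sketch (illustrative, not from the text) samples random pure qubit states and verifies σ_A σ_B ≥ ½|⟨[A,B]⟩| for the non-commuting observables σ_x and σ_y, whose commutator is 2iσ_z:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

rng = np.random.default_rng(0)
violations = 0
for _ in range(100):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = v / np.linalg.norm(v)  # random pure qubit state

    def expect(op):
        return (psi.conj() @ op @ psi).real

    # Standard deviations; the max() guards against tiny negative round-off.
    sig_x = np.sqrt(max(expect(sx @ sx) - expect(sx) ** 2, 0.0))
    sig_y = np.sqrt(max(expect(sy @ sy) - expect(sy) ** 2, 0.0))

    comm = sx @ sy - sy @ sx  # equals 2i sigma_z
    bound = 0.5 * abs(psi.conj() @ comm @ psi)
    if sig_x * sig_y < bound - 1e-9:
        violations += 1

print(violations)  # 0: the bound holds on every sampled state
```

For pure qubit states the check reduces to (1 − ⟨σ_x⟩²)(1 − ⟨σ_y⟩²) ≥ ⟨σ_z⟩², which follows from ⟨σ_x⟩² + ⟨σ_y⟩² + ⟨σ_z⟩² = 1.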
Substituting in the canonical commutation relation
{\displaystyle [{x},{p}]=i\hbar }
, an expression first postulated by Max Born in 1925, recovers the Kennard–Pauli–Weyl statement of the uncertainty principle.
=== From uncertainty to no-hidden-variables ===
The existence of the uncertainty principle naturally raises the question of whether quantum mechanics can be understood as an approximation to a more exact theory. Do there exist "hidden variables", more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory can provide? A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics.
Bell published the theorem now known by his name in 1964, investigating more deeply a thought experiment originally proposed in 1935 by Einstein, Podolsky and Rosen. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. If a Bell test is performed in a laboratory and the results are not thus constrained, then they are inconsistent with the hypothesis that local hidden variables exist. Such results would support the position that there is no way to explain the phenomena of quantum mechanics in terms of a more fundamental description of nature that is more in line with the rules of classical physics. Many types of Bell test have been performed in physics laboratories, often with the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell tests". To date, Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems behave.
=== Quantum systems as measuring devices ===
The Robertson–Schrödinger uncertainty principle establishes that when two observables do not commute, there is a tradeoff in predictability between them. The Wigner–Araki–Yanase theorem demonstrates another consequence of non-commutativity: the presence of a conservation law limits the accuracy with which observables that fail to commute with the conserved quantity can be measured. Further investigation in this line led to the formulation of the Wigner–Yanase skew information.
Historically, experiments in quantum physics have often been described in semiclassical terms. For example, the spin of an atom in a Stern–Gerlach experiment might be treated as a quantum degree of freedom, while the atom is regarded as moving through a magnetic field described by the classical theory of Maxwell's equations. But the devices used to build the experimental apparatus are themselves physical systems, and so quantum mechanics should be applicable to them as well. Beginning in the 1950s, Rosenfeld, von Weizsäcker and others tried to develop consistency conditions that expressed when a quantum-mechanical system could be treated as a measuring apparatus. One proposal for a criterion regarding when a system used as part of a measuring device can be modeled semiclassically relies on the Wigner function, a quasiprobability distribution that can be treated as a probability distribution on phase space in those cases where it is everywhere non-negative.
=== Decoherence ===
A quantum state for an imperfectly isolated system will generally evolve to be entangled with the quantum state for the environment. Consequently, even if the system's initial state is pure, the state at a later time, found by taking the partial trace of the joint system-environment state, will be mixed. This phenomenon of entanglement produced by system-environment interactions tends to obscure the more exotic features of quantum mechanics that the system could in principle manifest. Quantum decoherence, as this effect is known, was first studied in detail during the 1970s. (Earlier investigations into how classical physics might be obtained as a limit of quantum mechanics had explored the subject of imperfectly isolated systems, but the role of entanglement was not fully appreciated.) A significant portion of the effort involved in quantum computing is to avoid the deleterious effects of decoherence.
To illustrate, let
{\displaystyle \rho _{S}}
denote the initial state of the system,
{\displaystyle \rho _{E}}
the initial state of the environment and
{\displaystyle H}
the Hamiltonian specifying the system-environment interaction. The density operator
{\displaystyle \rho _{E}}
can be diagonalized and written as a linear combination of the projectors onto its eigenvectors:
{\displaystyle \rho _{E}=\sum _{i}p_{i}|\psi _{i}\rangle \langle \psi _{i}|.}
Expressing time evolution for a duration
{\displaystyle t}
by the unitary operator
{\displaystyle U=e^{-iHt/\hbar }}
, the state for the system after this evolution is
{\displaystyle \rho _{S}'={\rm {tr}}_{E}U\left[\rho _{S}\otimes \left(\sum _{i}p_{i}|\psi _{i}\rangle \langle \psi _{i}|\right)\right]U^{\dagger },}
which evaluates to
{\displaystyle \rho _{S}'=\sum _{ij}{\sqrt {p_{i}}}\langle \psi _{j}|U|\psi _{i}\rangle \rho _{S}{\sqrt {p_{i}}}\langle \psi _{i}|U^{\dagger }|\psi _{j}\rangle .}
The quantities surrounding
{\displaystyle \rho _{S}}
can be identified as Kraus operators, and so this defines a quantum channel.
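The partial-trace recipe above can be sketched in a few lines of NumPy. This is an illustration, not from the text: the system-environment interaction chosen here (a CNOT-like "copying" coupling, with the environment starting in a pure state) is an assumed example, picked because it destroys the system's coherences completely.

```python
import numpy as np

dS = dE = 2
# System starts in a pure superposition; environment in |0>.
psi_S = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_S = np.outer(psi_S, psi_S.conj())
rho_E = np.diag([1.0, 0.0]).astype(complex)

# "Environment copies the system" unitary: CNOT with the system as control,
# written as a permutation of the identity's rows.
U = np.eye(4, dtype=complex)[[0, 1, 3, 2]]

joint = U @ np.kron(rho_S, rho_E) @ U.conj().T

# Partial trace over the environment indices.
rho_S_out = joint.reshape(dS, dE, dS, dE).trace(axis1=1, axis2=3)
print(rho_S_out)  # off-diagonal coherences are destroyed: diag(1/2, 1/2)
```

The joint state after the interaction is the entangled state (|00⟩+|11⟩)/√2, so tracing out the environment leaves the system maximally mixed even though both initial states were pure.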
Specifying a form of interaction between system and environment can establish a set of "pointer states," states for the system that are (approximately) stable, apart from overall phase factors, with respect to environmental fluctuations. A set of pointer states defines a preferred orthonormal basis for the system's Hilbert space.
== Quantum information and computation ==
Quantum information science studies how information science and its application as technology depend on quantum-mechanical phenomena. Understanding measurement in quantum physics is important for this field in many ways, some of which are briefly surveyed here.
=== Measurement, entropy, and distinguishability ===
The von Neumann entropy is a measure of the statistical uncertainty represented by a quantum state. For a density matrix
{\displaystyle \rho }
, the von Neumann entropy is
{\displaystyle S(\rho )=-{\rm {tr}}(\rho \log \rho );}
writing
{\displaystyle \rho }
in terms of its basis of eigenvectors,
{\displaystyle \rho =\sum _{i}\lambda _{i}|i\rangle \langle i|,}
the von Neumann entropy is
{\displaystyle S(\rho )=-\sum _{i}\lambda _{i}\log \lambda _{i}.}
This is the Shannon entropy of the set of eigenvalues interpreted as a probability distribution, and so the von Neumann entropy is the Shannon entropy of the random variable defined by measuring in the eigenbasis of
{\displaystyle \rho }
. Consequently, the von Neumann entropy vanishes when
{\displaystyle \rho }
is pure. The von Neumann entropy of
{\displaystyle \rho }
can equivalently be characterized as the minimum Shannon entropy for a measurement given the quantum state
{\displaystyle \rho }
, with the minimization over all POVMs with rank-1 elements.
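The eigenvalue formula for the von Neumann entropy translates directly into code. The sketch below (illustrative, not from the text) uses log base 2, a common convention in quantum information, and adopts the convention 0 log 0 = 0:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum_i lambda_i log2(lambda_i) over nonzero eigenvalues."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]  # drop zero eigenvalues: 0 log 0 = 0
    return float(-np.sum(lam * np.log2(lam)) + 0.0)

pure = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
maximally_mixed = np.eye(2) / 2

print(von_neumann_entropy(pure))             # 0.0 for a pure state
print(von_neumann_entropy(maximally_mixed))  # 1.0 bit for I/2
```

As the text notes, the entropy vanishes exactly for pure states; the maximally mixed qubit state attains the maximum of one bit.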
Many other quantities used in quantum information theory also find motivation and justification in terms of measurements. For example, the trace distance between quantum states is equal to the largest difference in probability that those two quantum states can imply for a measurement outcome:
{\displaystyle {\frac {1}{2}}||\rho -\sigma ||=\max _{0\leq E\leq I}[{\rm {tr}}(E\rho )-{\rm {tr}}(E\sigma )].}
Similarly, the fidelity of two quantum states, defined by
{\displaystyle F(\rho ,\sigma )=\left(\operatorname {Tr} {\sqrt {{\sqrt {\rho }}\sigma {\sqrt {\rho }}}}\right)^{2},}
expresses the probability that one state will pass a test for identifying a successful preparation of the other. The trace distance provides bounds on the fidelity via the Fuchs–van de Graaf inequalities:
{\displaystyle 1-{\sqrt {F(\rho ,\sigma )}}\leq {\frac {1}{2}}||\rho -\sigma ||\leq {\sqrt {1-F(\rho ,\sigma )}}.}
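Both quantities, and the inequalities relating them, can be checked numerically. The NumPy sketch below is illustrative (the example states are arbitrary choices); the matrix square root is computed by eigendecomposition, which is valid for positive semi-definite Hermitian matrices:

```python
import numpy as np

def herm_sqrt(m):
    """Square root of a positive semi-definite Hermitian matrix."""
    lam, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(lam, 0, None))) @ v.conj().T

def trace_distance(rho, sigma):
    """(1/2)||rho - sigma||_1, via the eigenvalues of the difference."""
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def fidelity(rho, sigma):
    """F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = herm_sqrt(rho)
    return np.trace(herm_sqrt(s @ sigma @ s)).real ** 2

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
sigma = 0.5 * np.eye(2, dtype=complex)  # maximally mixed qubit state

T = trace_distance(rho, sigma)
F = fidelity(rho, sigma)
print(1 - np.sqrt(F) <= T <= np.sqrt(1 - F))  # True: the bounds hold
```

For these particular states the trace distance is √0.08 ≈ 0.283, comfortably inside the Fuchs–van de Graaf bounds.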
=== Quantum circuits ===
Quantum circuits are a model for quantum computation in which a computation is a sequence of quantum gates followed by measurements. The gates are reversible transformations on a quantum mechanical analog of an n-bit register. This analogous structure is referred to as an n-qubit register. Measurements, drawn on a circuit diagram as stylized pointer dials, indicate where and how a result is obtained from the quantum computer after the steps of the computation are executed. Without loss of generality, one can work with the standard circuit model, in which the set of gates are single-qubit unitary transformations and controlled NOT gates on pairs of qubits, and all measurements are in the computational basis.
=== Measurement-based quantum computation ===
Measurement-based quantum computation (MBQC) is a model of quantum computing in which the answer to a question is, informally speaking, created in the act of measuring the physical system that serves as the computer.
=== Quantum tomography ===
Quantum state tomography is a process by which, given a set of data representing the results of quantum measurements, a quantum state consistent with those measurement results is computed. It is named by analogy with tomography, the reconstruction of three-dimensional images from slices taken through them, as in a CT scan. Tomography of quantum states can be extended to tomography of quantum channels and even of measurements.
=== Quantum metrology ===
Quantum metrology is the use of quantum physics to aid the measurement of quantities that, generally, had meaning in classical physics, such as exploiting quantum effects to increase the precision with which a length can be measured. A celebrated example is the introduction of squeezed light into the LIGO experiment, which increased its sensitivity to gravitational waves.
== Laboratory implementations ==
The range of physical procedures to which the mathematics of quantum measurement can be applied is very broad. In the early years of the subject, laboratory procedures involved the recording of spectral lines, the darkening of photographic film, the observation of scintillations, finding tracks in cloud chambers, and hearing clicks from Geiger counters. Language from this era persists, such as the description of measurement outcomes in the abstract as "detector clicks".
The double-slit experiment is a prototypical illustration of quantum interference, typically described using electrons or photons. The first interference experiment to be carried out in a regime where both wave-like and particle-like aspects of photon behavior are significant was G. I. Taylor's test in 1909. Taylor used screens of smoked glass to attenuate the light passing through his apparatus, to the extent that, in modern language, only one photon would be illuminating the interferometer slits at a time. He recorded the interference patterns on photographic plates; for the dimmest light, the exposure time required was roughly three months. In 1974, the Italian physicists Pier Giorgio Merli, Gian Franco Missiroli, and Giulio Pozzi implemented the double-slit experiment using single electrons and a television tube. A quarter-century later, a team at the University of Vienna performed an interference experiment with buckyballs, in which the buckyballs that passed through the interferometer were ionized by a laser, and the ions then induced the emission of electrons, emissions which were in turn amplified and detected by an electron multiplier.
Modern quantum optics experiments can employ single-photon detectors. For example, in the "BIG Bell test" of 2018, several of the laboratory setups used single-photon avalanche diodes. Another laboratory setup used superconducting qubits. The standard method for performing measurements upon superconducting qubits is to couple a qubit with a resonator in such a way that the characteristic frequency of the resonator shifts according to the state for the qubit, and detecting this shift by observing how the resonator reacts to a probe signal.
== Interpretations of quantum mechanics ==
Despite the consensus among scientists that quantum physics is in practice a successful theory, disagreements persist on a more philosophical level. Many debates in the area known as quantum foundations concern the role of measurement in quantum mechanics. Recurring questions include which interpretation of probability theory is best suited for the probabilities calculated from the Born rule; and whether the apparent randomness of quantum measurement outcomes is fundamental, or a consequence of a deeper deterministic process. Worldviews that present answers to questions like these are known as "interpretations" of quantum mechanics; as the physicist N. David Mermin once quipped, "New interpretations appear every year. None ever disappear."
A central concern within quantum foundations is the "quantum measurement problem," though how this problem is delimited, and whether it should be counted as one question or multiple separate issues, are contested topics. Of primary interest is the seeming disparity between apparently distinct types of time evolution. Von Neumann declared that quantum mechanics contains "two fundamentally different types" of quantum-state change. First, there are those changes involving a measurement process, and second, there is unitary time evolution in the absence of measurement. The former is stochastic and discontinuous, writes von Neumann, and the latter deterministic and continuous. This dichotomy has set the tone for much later debate. Some interpretations of quantum mechanics find the reliance upon two different types of time evolution distasteful and regard the ambiguity of when to invoke one or the other as a deficiency of the way quantum theory was historically presented. To bolster these interpretations, their proponents have worked to derive ways of regarding "measurement" as a secondary concept and deducing the seemingly stochastic effect of measurement processes as approximations to more fundamental deterministic dynamics. However, consensus has not been achieved among proponents on the correct way to implement this program, and in particular on how to justify the use of the Born rule to calculate probabilities. Other interpretations regard quantum states as statistical information about quantum systems, thus asserting that abrupt and discontinuous changes of quantum states are not problematic, simply reflecting updates of the available information. Of this line of thought, Bell asked, "Whose information? Information about what?" Answers to these questions vary among proponents of the informationally-oriented interpretations.
== See also ==
== Notes ==
== References ==
== Further reading ==
Wheeler, John A.; Zurek, Wojciech H., eds. (1983). Quantum Theory and Measurement. Princeton University Press. ISBN 978-0-691-08316-2.
Braginsky, Vladimir B.; Khalili, Farid Ya. (1992). Quantum Measurement. Cambridge University Press. ISBN 978-0-521-41928-4.
Greenstein, George S.; Zajonc, Arthur G. (2006). The Quantum Challenge: Modern Research On The Foundations Of Quantum Mechanics (2nd ed.). ISBN 978-0763724702.
Alter, Orly; Yamamoto, Yoshihisa (2001). Quantum Measurement of a Single System. New York: Wiley. doi:10.1002/9783527617128. ISBN 9780471283089.
Jordan, Andrew N.; Siddiqi, Irfan A. (2024). Quantum Measurement: Theory and Practice. Cambridge University Press. ISBN 978-1009100069. | Wikipedia/Measurement_in_quantum_mechanics |
In mathematics, specifically in spectral theory, an eigenvalue of a closed linear operator is called normal if the space admits a decomposition into a direct sum of a finite-dimensional generalized eigenspace and an invariant subspace where
{\displaystyle A-\lambda I}
has a bounded inverse.
The set of normal eigenvalues coincides with the discrete spectrum.
== Root lineal ==
Let
{\displaystyle {\mathfrak {B}}}
be a Banach space. The root lineal
{\displaystyle {\mathfrak {L}}_{\lambda }(A)}
of a linear operator
{\displaystyle A:\,{\mathfrak {B}}\to {\mathfrak {B}}}
with domain
{\displaystyle {\mathfrak {D}}(A)}
corresponding to the eigenvalue
{\displaystyle \lambda \in \sigma _{p}(A)}
is defined as
{\displaystyle {\mathfrak {L}}_{\lambda }(A)=\bigcup _{k\in \mathbb {N} }\{x\in {\mathfrak {D}}(A):\,(A-\lambda I_{\mathfrak {B}})^{j}x\in {\mathfrak {D}}(A)\,\forall j\in \mathbb {N} ,\,j\leq k;\,(A-\lambda I_{\mathfrak {B}})^{k}x=0\}\subset {\mathfrak {B}},}
where
{\displaystyle I_{\mathfrak {B}}}
is the identity operator in
{\displaystyle {\mathfrak {B}}}
.
This set is a linear manifold but not necessarily a vector space, since it is not necessarily closed in
{\displaystyle {\mathfrak {B}}}
. If this set is closed (for example, when it is finite-dimensional), it is called the generalized eigenspace of
{\displaystyle A}
corresponding to the eigenvalue
{\displaystyle \lambda }
.
== Definition of a normal eigenvalue ==
An eigenvalue
λ
∈
σ
p
(
A
)
{\displaystyle \lambda \in \sigma _{p}(A)}
of a closed linear operator
A
:
B
→
B
{\displaystyle A:\,{\mathfrak {B}}\to {\mathfrak {B}}}
in the Banach space
B
{\displaystyle {\mathfrak {B}}}
with domain
D
(
A
)
⊂
B
{\displaystyle {\mathfrak {D}}(A)\subset {\mathfrak {B}}}
is called normal (in the original terminology,
λ
{\displaystyle \lambda }
corresponds to a normally splitting finite-dimensional root subspace), if the following two conditions are satisfied:
The algebraic multiplicity of
{\displaystyle \lambda }
is finite:
{\displaystyle \nu =\dim {\mathfrak {L}}_{\lambda }(A)<\infty }
, where
{\displaystyle {\mathfrak {L}}_{\lambda }(A)}
is the root lineal of
{\displaystyle A}
corresponding to the eigenvalue
{\displaystyle \lambda }
;
The space
{\displaystyle {\mathfrak {B}}}
can be decomposed into a direct sum
{\displaystyle {\mathfrak {B}}={\mathfrak {L}}_{\lambda }(A)\oplus {\mathfrak {N}}_{\lambda }}
, where
{\displaystyle {\mathfrak {N}}_{\lambda }}
is an invariant subspace of
{\displaystyle A}
in which
{\displaystyle A-\lambda I_{\mathfrak {B}}}
has a bounded inverse.
That is, the restriction
{\displaystyle A_{2}}
of
{\displaystyle A}
onto
{\displaystyle {\mathfrak {N}}_{\lambda }}
is an operator with domain
{\displaystyle {\mathfrak {D}}(A_{2})={\mathfrak {N}}_{\lambda }\cap {\mathfrak {D}}(A)}
and with the range
{\displaystyle {\mathfrak {R}}(A_{2}-\lambda I)\subset {\mathfrak {N}}_{\lambda }}
which has a bounded inverse.
== Equivalent characterizations of normal eigenvalues ==
Let
{\displaystyle A:\,{\mathfrak {B}}\to {\mathfrak {B}}}
be a closed linear densely defined operator in the Banach space
{\displaystyle {\mathfrak {B}}}
. The following statements are equivalent (Theorem III.88):
{\displaystyle \lambda \in \sigma (A)}
is a normal eigenvalue;
λ
∈
σ
(
A
)
{\displaystyle \lambda \in \sigma (A)}
is an isolated point in
σ
(
A
)
{\displaystyle \sigma (A)}
and
A
−
λ
I
B
{\displaystyle A-\lambda I_{\mathfrak {B}}}
is semi-Fredholm;
λ
∈
σ
(
A
)
{\displaystyle \lambda \in \sigma (A)}
is an isolated point in
σ
(
A
)
{\displaystyle \sigma (A)}
and
A
−
λ
I
B
{\displaystyle A-\lambda I_{\mathfrak {B}}}
is Fredholm;
λ
∈
σ
(
A
)
{\displaystyle \lambda \in \sigma (A)}
is an isolated point in
σ
(
A
)
{\displaystyle \sigma (A)}
and
A
−
λ
I
B
{\displaystyle A-\lambda I_{\mathfrak {B}}}
is Fredholm of index zero;
λ
∈
σ
(
A
)
{\displaystyle \lambda \in \sigma (A)}
is an isolated point in
σ
(
A
)
{\displaystyle \sigma (A)}
and the rank of the corresponding Riesz projector
P
λ
{\displaystyle P_{\lambda }}
is finite;
λ
∈
σ
(
A
)
{\displaystyle \lambda \in \sigma (A)}
is an isolated point in
σ
(
A
)
{\displaystyle \sigma (A)}
, its algebraic multiplicity
ν
=
dim
L
λ
(
A
)
{\displaystyle \nu =\dim {\mathfrak {L}}_{\lambda }(A)}
is finite, and the range of
A
−
λ
I
B
{\displaystyle A-\lambda I_{\mathfrak {B}}}
is closed.
If $\lambda$ is a normal eigenvalue, then the root lineal $\mathfrak{L}_\lambda(A)$ coincides with the range of the Riesz projector, $\mathfrak{R}(P_\lambda)$.
== Relation to the discrete spectrum ==
The above equivalence shows that the set of normal eigenvalues coincides with the discrete spectrum, defined as the set of isolated points of the spectrum with finite rank of the corresponding Riesz projector.
== Decomposition of the spectrum of nonselfadjoint operators ==
The spectrum of a closed operator $A:\,\mathfrak{B}\to \mathfrak{B}$ in the Banach space $\mathfrak{B}$ can be decomposed into the union of two disjoint sets, the set of normal eigenvalues and the fifth type of the essential spectrum:

$$\sigma(A)=\{\text{normal eigenvalues of }A\}\cup \sigma_{\mathrm{ess},5}(A).$$
== See also ==
Decomposition of spectrum (functional analysis)
Discrete spectrum (mathematics)
Essential spectrum
Fredholm operator
Operator theory
Resolvent formalism
Riesz projector
Spectrum (functional analysis)
Spectrum of an operator
== References == | Wikipedia/Normal_eigenvalue |
The moment of inertia, otherwise known as the mass moment of inertia, angular/rotational mass, second moment of mass, or most accurately, rotational inertia, of a rigid body is defined relative to a rotational axis. It is the ratio between the torque applied and the resulting angular acceleration about that axis. It plays the same role in rotational motion as mass does in linear motion. A body's moment of inertia about a particular axis depends both on the mass and its distribution relative to the axis, increasing with mass and with distance from the axis.
It is an extensive (additive) property: for a point mass the moment of inertia is simply the mass times the square of the perpendicular distance to the axis of rotation. The moment of inertia of a rigid composite system is the sum of the moments of inertia of its component subsystems (all taken about the same axis). Its simplest definition is the second moment of mass with respect to distance from an axis.
For bodies constrained to rotate in a plane, only their moment of inertia about an axis perpendicular to the plane, a scalar value, matters. For bodies free to rotate in three dimensions, their moments can be described by a symmetric 3-by-3 matrix, with a set of mutually perpendicular principal axes for which this matrix is diagonal and torques around the axes act independently of each other.
== Introduction ==
When a body is free to rotate around an axis, torque must be applied to change its angular momentum. The amount of torque needed to cause any given angular acceleration (the rate of change in angular velocity) is proportional to the moment of inertia of the body. Moments of inertia may be expressed in units of kilogram metre squared (kg·m²) in SI units and pound-foot-second squared (lbf·ft·s²) in imperial or US units.
The moment of inertia plays the role in rotational kinetics that mass (inertia) plays in linear kinetics: both characterize the resistance of a body to changes in its motion. The moment of inertia depends on how mass is distributed around an axis of rotation, and will vary depending on the chosen axis. For a point-like mass, the moment of inertia about some axis is given by $mr^{2}$, where $r$ is the distance of the point from the axis and $m$ is the mass. For an extended rigid body, the moment of inertia is simply the sum of all the small pieces of mass multiplied by the square of their distances from the axis of rotation. For an extended body of a regular shape and uniform density, this summation sometimes produces a simple expression that depends on the dimensions, shape and total mass of the object.
In 1673, Christiaan Huygens introduced this parameter in his study of the oscillation of a body hanging from a pivot, known as a compound pendulum. The term moment of inertia ("momentum inertiae" in Latin) was introduced by Leonhard Euler in his book Theoria motus corporum solidorum seu rigidorum in 1765, and it is incorporated into Euler's second law.
The natural frequency of oscillation of a compound pendulum is obtained from the ratio of the torque imposed by gravity on the mass of the pendulum to the resistance to acceleration defined by the moment of inertia. Comparison of this natural frequency to that of a simple pendulum consisting of a single point of mass provides a mathematical formulation for moment of inertia of an extended body.
The moment of inertia also appears in momentum, kinetic energy, and in Newton's laws of motion for a rigid body as a physical parameter that combines its shape and mass. There is an interesting difference in the way moment of inertia appears in planar and spatial movement. Planar movement has a single scalar that defines the moment of inertia, while for spatial movement the same calculations yield a 3 × 3 matrix of moments of inertia, called the inertia matrix or inertia tensor.
The moment of inertia of a rotating flywheel is used in a machine to resist variations in applied torque to smooth its rotational output. The moments of inertia of an airplane about its longitudinal, horizontal and vertical axes determine how steering forces on the control surfaces of its wings, elevators and rudder(s) affect the plane's motions in roll, pitch and yaw.
== Definition ==
The moment of inertia is defined as the product of the mass of a section and the square of the distance between the reference axis and the centroid of the section.
The moment of inertia I is also defined as the ratio of the net angular momentum L of a system to its angular velocity ω around a principal axis, that is

$$I=\frac{L}{\omega}.$$
If the angular momentum of a system is constant, then as the moment of inertia gets smaller, the angular velocity must increase. This occurs when spinning figure skaters pull in their outstretched arms or divers curl their bodies into a tuck position during a dive, to spin faster.
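The conservation relation behind the skater effect, $I_1\omega_1 = I_2\omega_2$, can be checked directly; all numbers below are assumed for illustration:

```python
# Conservation of angular momentum L = I*w: when a skater reduces her
# moment of inertia, the angular velocity rises so that L is unchanged.
I1, omega1 = 4.0, 2.0      # arms out: kg*m^2, rad/s (assumed values)
L = I1 * omega1            # angular momentum, conserved
I2 = 1.6                   # arms pulled in (assumed)
omega2 = L / I2            # new, faster spin rate
print(omega2)              # 5.0 rad/s
```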
If the shape of the body does not change, then its moment of inertia appears in Newton's law of motion as the ratio of an applied torque τ on a body to the angular acceleration α around a principal axis, that is

$$\tau =I\alpha.$$
For a simple pendulum, this definition yields a formula for the moment of inertia I in terms of the mass m of the pendulum and its distance r from the pivot point as

$$I=mr^{2}.$$
Thus, the moment of inertia of the pendulum depends on both the mass m of a body and its geometry, or shape, as defined by the distance r to the axis of rotation.
This simple formula generalizes to define moment of inertia for an arbitrarily shaped body as the sum of all the elemental point masses dm each multiplied by the square of its perpendicular distance r to an axis k. An arbitrary object's moment of inertia thus depends on the spatial distribution of its mass.
In general, given an object of mass m, an effective radius k can be defined, dependent on a particular axis of rotation, with such a value that its moment of inertia around the axis is

$$I=mk^{2},$$

where k is known as the radius of gyration around the axis.
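For a single point mass the radius of gyration equals its distance from the axis, since $I = mr^2$ gives $k=\sqrt{I/m}=r$. A short check with assumed example values:

```python
import math

# Radius of gyration k = sqrt(I/m); for a point mass I = m r^2, so k = r.
m, r = 2.0, 0.5            # mass (kg) and distance to axis (m), assumed
I = m * r**2               # moment of inertia of the point mass
k = math.sqrt(I / m)       # radius of gyration equals r here
print(I, k)                # 0.5 0.5
```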
== Examples ==
=== Simple pendulum ===
Mathematically, the moment of inertia of a simple pendulum is the ratio of the torque due to gravity about the pivot of a pendulum to its angular acceleration about that pivot point. For a simple pendulum, this is found to be the product of the mass of the particle $m$ with the square of its distance $r$ to the pivot, that is

$$I=mr^{2}.$$
This can be shown as follows:
The force of gravity on the mass of a simple pendulum generates a torque $\boldsymbol{\tau}=\mathbf{r}\times \mathbf{F}$ around the axis perpendicular to the plane of the pendulum movement. Here $\mathbf{r}$ is the distance vector from the torque axis to the pendulum center of mass, and $\mathbf{F}$ is the net force on the mass. Associated with this torque is an angular acceleration, $\boldsymbol{\alpha}$, of the string and mass around this axis. Since the mass is constrained to a circle, the tangential acceleration of the mass is $\mathbf{a}=\boldsymbol{\alpha}\times \mathbf{r}$. Since $\mathbf{F}=m\mathbf{a}$, the torque equation becomes:

$$\begin{aligned}\boldsymbol{\tau}&=\mathbf{r}\times \mathbf{F}=\mathbf{r}\times (m\boldsymbol{\alpha}\times \mathbf{r})\\&=m\left(\left(\mathbf{r}\cdot \mathbf{r}\right)\boldsymbol{\alpha}-\left(\mathbf{r}\cdot \boldsymbol{\alpha}\right)\mathbf{r}\right)\\&=mr^{2}\boldsymbol{\alpha}=I\alpha \mathbf{\hat{k}},\end{aligned}$$

where $\mathbf{\hat{k}}$ is a unit vector perpendicular to the plane of the pendulum. (The second-to-last step uses the vector triple product expansion together with the perpendicularity of $\boldsymbol{\alpha}$ and $\mathbf{r}$.) The quantity $I=mr^{2}$ is the moment of inertia of this single mass around the pivot point.
The quantity $I=mr^{2}$ also appears in the angular momentum of a simple pendulum, which is calculated from the velocity $\mathbf{v}=\boldsymbol{\omega}\times \mathbf{r}$ of the pendulum mass around the pivot, where $\boldsymbol{\omega}$ is the angular velocity of the mass about the pivot point. This angular momentum is given by

$$\begin{aligned}\mathbf{L}&=\mathbf{r}\times \mathbf{p}=\mathbf{r}\times \left(m\boldsymbol{\omega}\times \mathbf{r}\right)\\&=m\left(\left(\mathbf{r}\cdot \mathbf{r}\right)\boldsymbol{\omega}-\left(\mathbf{r}\cdot \boldsymbol{\omega}\right)\mathbf{r}\right)\\&=mr^{2}\boldsymbol{\omega}=I\omega \mathbf{\hat{k}},\end{aligned}$$

using a similar derivation to the previous equation.
Similarly, the kinetic energy of the pendulum mass is defined by the velocity of the pendulum around the pivot to yield

$$E_{\text{K}}=\frac{1}{2}m\mathbf{v}\cdot \mathbf{v}=\frac{1}{2}\left(mr^{2}\right)\omega^{2}=\frac{1}{2}I\omega^{2}.$$

This shows that the quantity $I=mr^{2}$ is how mass combines with the shape of a body to define rotational inertia. The moment of inertia of an arbitrarily shaped body is the sum of the values $mr^{2}$ for all of the elements of mass in the body.
=== Compound pendulums ===
A compound pendulum is a body formed from an assembly of particles of continuous shape that rotates rigidly around a pivot. Its moment of inertia is the sum of the moments of inertia of each of the particles that it is composed of. The natural frequency ($\omega_{\text{n}}$) of a compound pendulum depends on its moment of inertia, $I_{P}$,

$$\omega_{\text{n}}=\sqrt{\frac{mgr}{I_{P}}},$$

where $m$ is the mass of the object, $g$ is the local acceleration of gravity, and $r$ is the distance from the pivot point to the center of mass of the object. Measuring this frequency of oscillation over small angular displacements provides an effective way of measuring the moment of inertia of a body.
Thus, to determine the moment of inertia of the body, simply suspend it from a convenient pivot point $P$ so that it swings freely in a plane perpendicular to the direction of the desired moment of inertia, then measure its natural frequency or period of oscillation ($t$), to obtain

$$I_{P}=\frac{mgr}{\omega_{\text{n}}^{2}}=\frac{mgrt^{2}}{4\pi^{2}},$$

where $t$ is the period (duration) of oscillation (usually averaged over multiple periods).
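This formula turns a stopwatch measurement into a moment of inertia. A sketch with assumed values for the body being measured, including a round-trip check that the natural frequency formula recovers the measured period:

```python
import math

# I_P = m*g*r*t^2 / (4*pi^2): moment of inertia from the measured period.
m = 3.0        # mass in kg (assumed)
g = 9.81       # local gravity, m/s^2
r = 0.25       # pivot-to-center-of-mass distance in m (assumed)
t = 1.2        # measured period of small oscillations in s (assumed)
I_P = m * g * r * t**2 / (4 * math.pi**2)
print(I_P)

# Round trip: omega_n = sqrt(m g r / I_P) must reproduce the period t.
omega_n = math.sqrt(m * g * r / I_P)
print(2 * math.pi / omega_n)   # recovers t
```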
==== Center of oscillation ====
A simple pendulum that has the same natural frequency as a compound pendulum defines the length $L$ from the pivot to a point called the center of oscillation of the compound pendulum. This point also corresponds to the center of percussion. The length $L$ is determined from the formula,

$$\omega_{\text{n}}=\sqrt{\frac{g}{L}}=\sqrt{\frac{mgr}{I_{P}}},$$

or

$$L=\frac{g}{\omega_{\text{n}}^{2}}=\frac{I_{P}}{mr}.$$
The seconds pendulum, which provides the "tick" and "tock" of a grandfather clock, takes one second to swing from side to side. This is a period of two seconds, or a natural frequency of $\pi\ \mathrm{rad/s}$ for the pendulum. In this case, the distance to the center of oscillation, $L$, can be computed to be

$$L=\frac{g}{\omega_{\text{n}}^{2}}\approx \frac{9.81\ \mathrm{m/s^{2}}}{(3.14\ \mathrm{rad/s})^{2}}\approx 0.99\ \mathrm{m}.$$
Notice that the distance to the center of oscillation of the seconds pendulum must be adjusted to accommodate different values for the local acceleration of gravity. Kater's pendulum is a compound pendulum that uses this property to measure the local acceleration of gravity, and is called a gravimeter.
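The relation $L = g/\omega_{\text{n}}^{2}$ can also be run in reverse, which is the principle of the gravimeter: measuring $L$ and the frequency gives the local $g$. A sketch using the seconds-pendulum numbers from the text:

```python
import math

# Center of oscillation of a seconds pendulum: L = g / omega_n^2,
# with omega_n = pi rad/s (a 2 s period) and g = 9.81 m/s^2.
g = 9.81
omega_n = math.pi
L = g / omega_n**2
print(L)                     # ~0.994 m, matching the text

# Inverted, as in Kater's gravimeter: recover g from L and omega_n.
print(L * omega_n**2)        # 9.81
```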
== Measuring moment of inertia ==
The moment of inertia of a complex system such as a vehicle or airplane around its vertical axis can be measured by suspending the system from three points to form a trifilar pendulum. A trifilar pendulum is a platform supported by three wires designed to oscillate in torsion around its vertical centroidal axis. The period of oscillation of the trifilar pendulum yields the moment of inertia of the system.
== Moment of inertia of area ==
Moment of inertia of area is also known as the second moment of area, and its physical meaning is completely different from that of the mass moment of inertia. These calculations are commonly used in civil engineering for the structural design of beams and columns. For a cross-sectional area, the second moment about the horizontal x-axis is denoted $I_{xx}$ and the second moment about the vertical y-axis is denoted $I_{yy}$. Height ($h$) and breadth ($b$) are the linear measures, except for circles, which are described by the radius $r$ (half the breadth).
=== Sectional area moments ===
Square: $I_{xx}=I_{yy}=\frac{b^{4}}{12}$

Rectangular: $I_{xx}=\frac{bh^{3}}{12}$ and $I_{yy}=\frac{hb^{3}}{12}$

Triangular: $I_{xx}=\frac{bh^{3}}{36}$

Circular: $I_{xx}=I_{yy}=\frac{1}{4}\pi r^{4}=\frac{1}{64}\pi d^{4}$
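These closed forms can be collected into small helper functions (the function names are my own) and cross-checked against each other, e.g. a square is simply a rectangle with b = h, and the two circle formulas agree since d = 2r:

```python
import math

# Second moments of area about the centroidal x-axis (units: length^4).
def rect_Ixx(b, h):        # rectangle of breadth b and height h
    return b * h**3 / 12

def square_Ixx(b):         # square of side b
    return b**4 / 12

def tri_Ixx(b, h):         # triangle, about its centroidal axis
    return b * h**3 / 36

def circ_Ixx(r):           # circle of radius r
    return math.pi * r**4 / 4

print(rect_Ixx(2, 3))                    # 4.5
print(square_Ixx(2) == rect_Ixx(2, 2))   # a square is a b = h rectangle
```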
== Motion in a fixed plane ==
=== Point mass ===
The moment of inertia about an axis of a body is calculated by summing $mr^{2}$ for every particle in the body, where $r$ is the perpendicular distance to the specified axis. To see how moment of inertia arises in the study of the movement of an extended body, it is convenient to consider a rigid assembly of point masses. (This equation can be used for axes that are not principal axes provided that it is understood that this does not fully describe the moment of inertia.)
Consider the kinetic energy of an assembly of $N$ masses $m_{i}$ that lie at the distances $r_{i}$ from the pivot point $P$, which is the nearest point on the axis of rotation. It is the sum of the kinetic energy of the individual masses,

$$E_{\text{K}}=\sum_{i=1}^{N}\frac{1}{2}\,m_{i}\mathbf{v}_{i}\cdot \mathbf{v}_{i}=\sum_{i=1}^{N}\frac{1}{2}\,m_{i}\left(\omega r_{i}\right)^{2}=\frac{1}{2}\,\omega^{2}\sum_{i=1}^{N}m_{i}r_{i}^{2}.$$
This shows that the moment of inertia of the body is the sum of each of the $mr^{2}$ terms, that is

$$I_{P}=\sum_{i=1}^{N}m_{i}r_{i}^{2}.$$

Thus, moment of inertia is a physical property that combines the mass and distribution of the particles around the rotation axis. Notice that rotation about different axes of the same body yields different moments of inertia.
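The sum $I_{P}=\sum_{i}m_{i}r_{i}^{2}$ is directly computable for any rigid assembly of point masses; the masses and distances below are assumed examples:

```python
# Moment of inertia of a rigid assembly of point masses about one axis:
# I_P = sum of m_i * r_i^2, r_i the perpendicular distance to the axis.
masses    = [1.0, 2.0, 0.5]      # kg (assumed example values)
distances = [0.2, 0.5, 1.0]      # m  (assumed example values)
I_P = sum(m * r**2 for m, r in zip(masses, distances))
print(I_P)   # 0.04 + 0.5 + 0.5 = 1.04 kg*m^2 (up to rounding)
```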
The moment of inertia of a continuous body rotating about a specified axis is calculated in the same way, except with infinitely many point particles. Thus the limits of summation are removed, and the sum is written as follows:

$$I_{P}=\sum_{i}m_{i}r_{i}^{2}$$
Another expression replaces the summation with an integral,

$$I_{P}=\iiint_{Q}\rho(x,y,z)\left\|\mathbf{r}\right\|^{2}dV$$

Here, the function $\rho$ gives the mass density at each point $(x,y,z)$, $\mathbf{r}$ is a vector perpendicular to the axis of rotation and extending from a point on the rotation axis to a point $(x,y,z)$ in the solid, and the integration is evaluated over the volume $V$ of the body $Q$. The moment of inertia of a flat surface is similar, with the mass density replaced by its areal mass density and the integral evaluated over its area.
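The volume integral can be evaluated numerically and checked against a known closed form. For a uniform solid cylinder spinning about its own axis the result should be $\tfrac{1}{2}mR^{2}$; a midpoint-rule sketch over thin cylindrical shells (ρ, R, h are assumed values):

```python
import math

# I = triple integral of rho * ||r||^2 dV for a uniform cylinder about
# its axis, summed over thin shells of radius r, thickness dr, height h.
rho, R, h = 1.0, 1.0, 1.0     # assumed density, radius, height
n = 4000
dr = R / n
I = sum(rho * ((i + 0.5) * dr)**2                # ||r||^2 at shell midpoint
        * 2 * math.pi * ((i + 0.5) * dr) * h * dr  # shell volume element
        for i in range(n))
m = rho * math.pi * R**2 * h
print(I, 0.5 * m * R**2)      # both ~ pi/2
```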
Note on second moment of area: The moment of inertia of a body moving in a plane and the second moment of area of a beam's cross-section are often confused. The moment of inertia of a body with the shape of the cross-section is the second moment of this area about the $z$-axis perpendicular to the cross-section, weighted by its density. This is also called the polar moment of the area, and is the sum of the second moments about the $x$- and $y$-axes. The stresses in a beam are calculated using the second moment of the cross-sectional area around either the $x$-axis or $y$-axis depending on the load.
==== Examples ====
The moment of inertia of a compound pendulum constructed from a thin disc mounted at the end of a thin rod that oscillates around a pivot at the other end of the rod, begins with the calculation of the moment of inertia of the thin rod and thin disc about their respective centers of mass.
The moment of inertia of a thin rod with constant cross-section $s$ and density $\rho$ and with length $\ell$ about a perpendicular axis through its center of mass is determined by integration. Align the $x$-axis with the rod and locate the origin at the center of the rod (its center of mass), then

$$I_{C,\text{rod}}=\iiint_{Q}\rho\,x^{2}\,dV=\int_{-\frac{\ell}{2}}^{\frac{\ell}{2}}\rho\,x^{2}s\,dx=\left.\rho s\frac{x^{3}}{3}\right|_{-\frac{\ell}{2}}^{\frac{\ell}{2}}=\frac{\rho s}{3}\left(\frac{\ell^{3}}{8}+\frac{\ell^{3}}{8}\right)=\frac{m\ell^{2}}{12},$$

where $m=\rho s\ell$ is the mass of the rod.
The moment of inertia of a thin disc of constant thickness $s$, radius $R$, and density $\rho$ about an axis through its center and perpendicular to its face (parallel to its axis of rotational symmetry) is determined by integration. Align the $z$-axis with the axis of the disc and define a volume element as $dV=sr\,dr\,d\theta$, then

$$I_{C,\text{disc}}=\iiint_{Q}\rho\,r^{2}\,dV=\int_{0}^{2\pi}\int_{0}^{R}\rho r^{2}sr\,dr\,d\theta =2\pi \rho s\frac{R^{4}}{4}=\frac{1}{2}mR^{2},$$

where $m=\pi R^{2}\rho s$ is its mass.
The moment of inertia of the compound pendulum is now obtained by adding the moment of inertia of the rod and the disc around the pivot point $P$ as,

$$I_{P}=I_{C,\text{rod}}+M_{\text{rod}}\left(\frac{L}{2}\right)^{2}+I_{C,\text{disc}}+M_{\text{disc}}(L+R)^{2},$$

where $L$ is the length of the pendulum. Notice that the parallel axis theorem is used to shift the moment of inertia from the center of mass to the pivot point of the pendulum.
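A numerical sketch of this rod-plus-disc assembly, applying the parallel axis theorem $I_P = I_C + Md^2$ to each part; all dimensions and masses below are assumed example values:

```python
# Compound pendulum: thin rod pivoted at one end plus a thin disc whose
# center sits a distance L + R from the pivot P. Each center-of-mass
# moment is shifted to P with the parallel axis theorem I_P = I_C + M d^2.
M_rod, M_disc = 1.0, 0.5      # kg (assumed)
L, R = 1.0, 0.1               # rod length and disc radius in m (assumed)

I_rod  = M_rod * L**2 / 12 + M_rod * (L / 2)**2     # = M_rod * L^2 / 3
I_disc = 0.5 * M_disc * R**2 + M_disc * (L + R)**2
I_P = I_rod + I_disc
print(I_P)
```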
A list of moments of inertia formulas for standard body shapes provides a way to obtain the moment of inertia of a complex body as an assembly of simpler shaped bodies. The parallel axis theorem is used to shift the reference point of the individual bodies to the reference point of the assembly.
As one more example, consider the moment of inertia of a solid sphere of constant density about an axis through its center of mass. This is determined by summing the moments of inertia of the thin discs that can form the sphere whose centers are along the axis chosen for consideration. If the surface of the sphere is defined by the equation

$$x^{2}+y^{2}+z^{2}=R^{2},$$
then the square of the radius $r$ of the disc at the cross-section $z$ along the $z$-axis is

$$r(z)^{2}=x^{2}+y^{2}=R^{2}-z^{2}.$$
Therefore, the moment of inertia of the sphere is the sum of the moments of inertia of the discs along the $z$-axis,

$$\begin{aligned}I_{C,\text{sphere}}&=\int_{-R}^{R}\tfrac{1}{2}\pi \rho r(z)^{4}\,dz=\int_{-R}^{R}\tfrac{1}{2}\pi \rho \left(R^{2}-z^{2}\right)^{2}\,dz\\&=\tfrac{1}{2}\pi \rho \left[R^{4}z-\tfrac{2}{3}R^{2}z^{3}+\tfrac{1}{5}z^{5}\right]_{-R}^{R}\\&=\pi \rho \left(1-\tfrac{2}{3}+\tfrac{1}{5}\right)R^{5}\\&=\tfrac{2}{5}mR^{2},\end{aligned}$$

where $m=\frac{4}{3}\pi R^{3}\rho$ is the mass of the sphere.
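The disc-stacking integral above can be verified numerically against the closed form $\tfrac{2}{5}mR^{2}$; ρ = R = 1 are assumed for simplicity:

```python
import math

# Sphere as a stack of thin discs: I = integral over [-R, R] of
# (1/2) pi rho (R^2 - z^2)^2 dz, here via the midpoint rule.
rho, R = 1.0, 1.0
n = 100000
dz = 2 * R / n
I = sum(0.5 * math.pi * rho * (R**2 - (-R + (i + 0.5) * dz)**2)**2 * dz
        for i in range(n))
m = 4 / 3 * math.pi * R**3 * rho
print(I, 0.4 * m * R**2)      # both ~ 8*pi/15 ~ 1.6755
```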
=== Rigid body ===
If a mechanical system is constrained to move parallel to a fixed plane, then the rotation of a body in the system occurs around an axis $\mathbf{\hat{k}}$ parallel to this plane. In this case, the moment of inertia of the mass in this system is a scalar known as the polar moment of inertia. The definition of the polar moment of inertia can be obtained by considering momentum, kinetic energy and Newton's laws for the planar movement of a rigid system of particles.
If a system of $n$ particles, $P_{i},\ i=1,\dots,n$, are assembled into a rigid body, then the momentum of the system can be written in terms of positions relative to a reference point $\mathbf{R}$, and absolute velocities $\mathbf{v}_{i}$:

$$\begin{aligned}\Delta \mathbf{r}_{i}&=\mathbf{r}_{i}-\mathbf{R},\\\mathbf{v}_{i}&=\boldsymbol{\omega}\times \left(\mathbf{r}_{i}-\mathbf{R}\right)+\mathbf{V}=\boldsymbol{\omega}\times \Delta \mathbf{r}_{i}+\mathbf{V},\end{aligned}$$

where $\boldsymbol{\omega}$ is the angular velocity of the system and $\mathbf{V}$ is the velocity of $\mathbf{R}$.
For planar movement the angular velocity vector is directed along the unit vector $\mathbf{k}$ which is perpendicular to the plane of movement. Introduce the unit vectors $\mathbf{\hat{e}}_{i}$ from the reference point $\mathbf{R}$ to a point $\mathbf{r}_{i}$, and the unit vector $\mathbf{\hat{t}}_{i}=\mathbf{\hat{k}}\times \mathbf{\hat{e}}_{i}$, so

$$\begin{aligned}\mathbf{\hat{e}}_{i}&=\frac{\Delta \mathbf{r}_{i}}{\Delta r_{i}},\quad \mathbf{\hat{k}}=\frac{\boldsymbol{\omega}}{\omega},\quad \mathbf{\hat{t}}_{i}=\mathbf{\hat{k}}\times \mathbf{\hat{e}}_{i},\\\mathbf{v}_{i}&=\boldsymbol{\omega}\times \Delta \mathbf{r}_{i}+\mathbf{V}=\omega \mathbf{\hat{k}}\times \Delta r_{i}\mathbf{\hat{e}}_{i}+\mathbf{V}=\omega\,\Delta r_{i}\mathbf{\hat{t}}_{i}+\mathbf{V}\end{aligned}$$
This defines the relative position vector and the velocity vector for the rigid system of the particles moving in a plane.
Note on the cross product: When a body moves parallel to a ground plane, the trajectories of all the points in the body lie in planes parallel to this ground plane. This means that any rotation that the body undergoes must be around an axis perpendicular to this plane. Planar movement is often presented as projected onto this ground plane so that the axis of rotation appears as a point. In this case, the angular velocity and angular acceleration of the body are scalars and the fact that they are vectors along the rotation axis is ignored. This is usually preferred for introductions to the topic. But in the case of moment of inertia, the combination of mass and geometry benefits from the geometric properties of the cross product. For this reason, in this section on planar movement the angular velocity and accelerations of the body are vectors perpendicular to the ground plane, and the cross product operations are the same as used for the study of spatial rigid body movement.
==== Angular momentum ====
The angular momentum vector for the planar movement of a rigid system of particles is given by

$$\begin{aligned}\mathbf{L}&=\sum_{i=1}^{n}m_{i}\Delta \mathbf{r}_{i}\times \mathbf{v}_{i}\\&=\sum_{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf{\hat{e}}_{i}\times \left(\omega\,\Delta r_{i}\mathbf{\hat{t}}_{i}+\mathbf{V}\right)\\&=\left(\sum_{i=1}^{n}m_{i}\,\Delta r_{i}^{2}\right)\omega \mathbf{\hat{k}}+\left(\sum_{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf{\hat{e}}_{i}\right)\times \mathbf{V}.\end{aligned}$$
Use the center of mass $\mathbf{C}$ as the reference point so

$$\begin{aligned}\Delta r_{i}\mathbf{\hat{e}}_{i}&=\mathbf{r}_{i}-\mathbf{C},\\\sum_{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf{\hat{e}}_{i}&=0,\end{aligned}$$

and define the moment of inertia relative to the center of mass $I_{\mathbf{C}}$ as

$$I_{\mathbf{C}}=\sum_{i}m_{i}\,\Delta r_{i}^{2},$$

then the equation for angular momentum simplifies to

$$\mathbf{L}=I_{\mathbf{C}}\omega \mathbf{\hat{k}}.$$
The moment of inertia $I_{\mathbf{C}}$ about an axis perpendicular to the movement of the rigid system and through the center of mass is known as the polar moment of inertia. Specifically, it is the second moment of mass with respect to the orthogonal distance from an axis (or pole).
For a given amount of angular momentum, a decrease in the moment of inertia results in an increase in the angular velocity. Figure skaters can change their moment of inertia by pulling in their arms. Thus, a skater who begins a spin with outstretched arms rotates faster once the arms are pulled in, because of the reduced moment of inertia. A figure skater is not, however, a rigid body.
==== Kinetic energy ====
The kinetic energy of a rigid system of particles moving in the plane is given by

$$\begin{aligned}E_{\text{K}}&=\frac{1}{2}\sum_{i=1}^{n}m_{i}\mathbf{v}_{i}\cdot \mathbf{v}_{i},\\&=\frac{1}{2}\sum_{i=1}^{n}m_{i}\left(\omega\,\Delta r_{i}\mathbf{\hat{t}}_{i}+\mathbf{V}\right)\cdot \left(\omega\,\Delta r_{i}\mathbf{\hat{t}}_{i}+\mathbf{V}\right),\\&=\frac{1}{2}\omega^{2}\left(\sum_{i=1}^{n}m_{i}\,\Delta r_{i}^{2}\mathbf{\hat{t}}_{i}\cdot \mathbf{\hat{t}}_{i}\right)+\omega \mathbf{V}\cdot \left(\sum_{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf{\hat{t}}_{i}\right)+\frac{1}{2}\left(\sum_{i=1}^{n}m_{i}\right)\mathbf{V}\cdot \mathbf{V}.\end{aligned}$$
Let the reference point be the center of mass $\mathbf{C}$ of the system so the second term becomes zero, and introduce the moment of inertia $I_{\mathbf{C}}$ so the kinetic energy is given by

$$E_{\text{K}}=\frac{1}{2}I_{\mathbf{C}}\omega^{2}+\frac{1}{2}M\mathbf{V}\cdot \mathbf{V}.$$

The moment of inertia $I_{\mathbf{C}}$ is the polar moment of inertia of the body.
==== Newton's laws ====
Newton's laws for a rigid system of $n$ particles, $P_{i},\ i=1,\dots,n$, can be written in terms of a resultant force and torque at a reference point $\mathbf{R}$, to yield

$$\begin{aligned}\mathbf{F}&=\sum_{i=1}^{n}m_{i}\mathbf{A}_{i},\\\boldsymbol{\tau}&=\sum_{i=1}^{n}\Delta \mathbf{r}_{i}\times m_{i}\mathbf{A}_{i},\end{aligned}$$

where $\mathbf{r}_{i}$ denotes the trajectory of each particle.
The kinematics of a rigid body yields the formula for the acceleration of the particle $P_{i}$ in terms of the position $\mathbf{R}$ and acceleration $\mathbf{A}$ of the reference particle as well as the angular velocity vector $\boldsymbol{\omega}$ and angular acceleration vector $\boldsymbol{\alpha}$ of the rigid system of particles as,

$$\mathbf{A}_{i}=\boldsymbol{\alpha}\times \Delta \mathbf{r}_{i}+\boldsymbol{\omega}\times \boldsymbol{\omega}\times \Delta \mathbf{r}_{i}+\mathbf{A}.$$
For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along $\mathbf{\hat{k}}$ perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors $\mathbf{\hat{e}}_{i}$ from the reference point $\mathbf{R}$ to a point $\mathbf{r}_{i}$ and the unit vectors $\mathbf{\hat{t}}_{i}=\mathbf{\hat{k}}\times\mathbf{\hat{e}}_{i}$, so

$$\begin{aligned}\mathbf{A}_{i}&=\alpha\mathbf{\hat{k}}\times\Delta r_{i}\mathbf{\hat{e}}_{i}+\omega\mathbf{\hat{k}}\times\left(\omega\mathbf{\hat{k}}\times\Delta r_{i}\mathbf{\hat{e}}_{i}\right)+\mathbf{A}\\&=\alpha\,\Delta r_{i}\mathbf{\hat{t}}_{i}-\omega^{2}\,\Delta r_{i}\mathbf{\hat{e}}_{i}+\mathbf{A}.\end{aligned}$$
This yields the resultant torque on the system as
$$\begin{aligned}\boldsymbol{\tau}&=\sum_{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf{\hat{e}}_{i}\times\left(\alpha\,\Delta r_{i}\mathbf{\hat{t}}_{i}-\omega^{2}\,\Delta r_{i}\mathbf{\hat{e}}_{i}+\mathbf{A}\right)\\&=\left(\sum_{i=1}^{n}m_{i}\,\Delta r_{i}^{2}\right)\alpha\mathbf{\hat{k}}+\left(\sum_{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf{\hat{e}}_{i}\right)\times\mathbf{A},\end{aligned}$$
where $\mathbf{\hat{e}}_{i}\times\mathbf{\hat{e}}_{i}=\mathbf{0}$, and $\mathbf{\hat{e}}_{i}\times\mathbf{\hat{t}}_{i}=\mathbf{\hat{k}}$ is the unit vector perpendicular to the plane for all of the particles $P_i$.

Use the center of mass $\mathbf{C}$ as the reference point and define the moment of inertia relative to the center of mass $I_{\mathbf{C}}$; the equation for the resultant torque then simplifies to

$$\boldsymbol{\tau}=I_{\mathbf{C}}\,\alpha\,\mathbf{\hat{k}}.$$
== Motion in space of a rigid body, and the inertia matrix ==
The scalar moments of inertia appear as elements in a matrix when a system of particles is assembled into a rigid body that moves in three-dimensional space. This inertia matrix appears in the calculation of the angular momentum, kinetic energy and resultant torque of the rigid system of particles.
Let the system of $n$ particles, $P_i,\ i=1,\dots,n$, be located at the coordinates $\mathbf{r}_{i}$ with velocities $\mathbf{v}_{i}$ relative to a fixed reference frame. For a (possibly moving) reference point $\mathbf{R}$, the relative positions are

$$\Delta\mathbf{r}_{i}=\mathbf{r}_{i}-\mathbf{R}$$

and the (absolute) velocities are

$$\mathbf{v}_{i}=\boldsymbol{\omega}\times\Delta\mathbf{r}_{i}+\mathbf{V}_{\mathbf{R}},$$

where $\boldsymbol{\omega}$ is the angular velocity of the system, and $\mathbf{V}_{\mathbf{R}}$ is the velocity of $\mathbf{R}$.
=== Angular momentum ===
Note that the cross product can be equivalently written as matrix multiplication by combining the first operand and the operator into a skew-symmetric matrix, $\left[\mathbf{b}\right]$, constructed from the components of $\mathbf{b}=(b_{x},b_{y},b_{z})$:

$$\begin{aligned}\mathbf{b}\times\mathbf{y}&\equiv\left[\mathbf{b}\right]\mathbf{y},\\\left[\mathbf{b}\right]&\equiv\begin{bmatrix}0&-b_{z}&b_{y}\\b_{z}&0&-b_{x}\\-b_{y}&b_{x}&0\end{bmatrix}.\end{aligned}$$
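The skew-symmetric form is easy to check numerically. A minimal sketch in plain Python (the helper names `skew`, `matvec`, and `cross` are illustrative, not from any library):

```python
def skew(b):
    """Skew-symmetric matrix [b] with the property [b] y = b x y."""
    bx, by, bz = b
    return [[0.0, -bz,  by],
            [ bz, 0.0, -bx],
            [-by,  bx, 0.0]]

def matvec(M, y):
    """3x3 matrix times 3-vector."""
    return [sum(M[i][j] * y[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    """Direct cross product a x b."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

b = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
print(matvec(skew(b), y))  # [-3.0, 6.0, -3.0]
print(cross(b, y))         # [-3.0, 6.0, -3.0]
```

Because $[\mathbf{b}]$ is skew-symmetric, $[\mathbf{b}]^{\mathsf{T}}=-[\mathbf{b}]$, which is the property used repeatedly below.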
The inertia matrix is constructed by considering the angular momentum, with the reference point $\mathbf{R}$ of the body chosen to be the center of mass $\mathbf{C}$:

$$\begin{aligned}\mathbf{L}&=\sum_{i=1}^{n}m_{i}\,\Delta\mathbf{r}_{i}\times\mathbf{v}_{i}\\&=\sum_{i=1}^{n}m_{i}\,\Delta\mathbf{r}_{i}\times\left(\boldsymbol{\omega}\times\Delta\mathbf{r}_{i}+\mathbf{V}_{\mathbf{R}}\right)\\&=\left(-\sum_{i=1}^{n}m_{i}\,\Delta\mathbf{r}_{i}\times\left(\Delta\mathbf{r}_{i}\times\boldsymbol{\omega}\right)\right)+\left(\sum_{i=1}^{n}m_{i}\,\Delta\mathbf{r}_{i}\times\mathbf{V}_{\mathbf{R}}\right),\end{aligned}$$
where the terms containing $\mathbf{V}_{\mathbf{R}}$ (with $\mathbf{R}=\mathbf{C}$) sum to zero by the definition of center of mass.

Then, the skew-symmetric matrix $[\Delta\mathbf{r}_{i}]$ obtained from the relative position vector $\Delta\mathbf{r}_{i}=\mathbf{r}_{i}-\mathbf{C}$ can be used to define

$$\mathbf{L}=\left(-\sum_{i=1}^{n}m_{i}\left[\Delta\mathbf{r}_{i}\right]^{2}\right)\boldsymbol{\omega}=\mathbf{I}_{\mathbf{C}}\,\boldsymbol{\omega},$$

where $\mathbf{I}_{\mathbf{C}}$, defined by

$$\mathbf{I}_{\mathbf{C}}=-\sum_{i=1}^{n}m_{i}\left[\Delta\mathbf{r}_{i}\right]^{2},$$

is the symmetric inertia matrix of the rigid system of particles measured relative to the center of mass $\mathbf{C}$.
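As a sanity check, the construction $\mathbf{I}_{\mathbf{C}}=-\sum_i m_i[\Delta\mathbf{r}_i]^2$ can be evaluated directly for a small system. A sketch in plain Python (the helper names are illustrative, not a library API):

```python
def skew(v):
    x, y, z = v
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def inertia_matrix(masses, positions):
    """I_C = -sum_i m_i [dr_i]^2, with dr_i measured from the center of mass."""
    M = sum(masses)
    C = [sum(m * p[k] for m, p in zip(masses, positions)) / M for k in range(3)]
    I = [[0.0] * 3 for _ in range(3)]
    for m, p in zip(masses, positions):
        S = skew([p[k] - C[k] for k in range(3)])
        S2 = matmul(S, S)
        for i in range(3):
            for j in range(3):
                I[i][j] -= m * S2[i][j]
    return I

# Two unit masses at x = +1 and x = -1: zero inertia about the x-axis,
# and 2 m r^2 = 2 about the y- and z-axes.
I_C = inertia_matrix([1.0, 1.0], [[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
print(I_C)
```

The result is diagonal, $\operatorname{diag}(0, 2, 2)$, as expected for masses on the $x$-axis, and the matrix comes out symmetric, as the text states.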
=== Kinetic energy ===
The kinetic energy of a rigid system of particles can be formulated in terms of the center of mass and a matrix of mass moments of inertia of the system. Let the system of $n$ particles, $P_i,\ i=1,\dots,n$, be located at the coordinates $\mathbf{r}_{i}$ with velocities $\mathbf{v}_{i}$; then the kinetic energy is

$$E_{\text{K}}=\frac{1}{2}\sum_{i=1}^{n}m_{i}\,\mathbf{v}_{i}\cdot\mathbf{v}_{i}=\frac{1}{2}\sum_{i=1}^{n}m_{i}\left(\boldsymbol{\omega}\times\Delta\mathbf{r}_{i}+\mathbf{V}_{\mathbf{C}}\right)\cdot\left(\boldsymbol{\omega}\times\Delta\mathbf{r}_{i}+\mathbf{V}_{\mathbf{C}}\right),$$
where $\Delta\mathbf{r}_{i}=\mathbf{r}_{i}-\mathbf{C}$ is the position vector of a particle relative to the center of mass.
This equation expands to yield three terms
$$E_{\text{K}}=\frac{1}{2}\left(\sum_{i=1}^{n}m_{i}\left(\boldsymbol{\omega}\times\Delta\mathbf{r}_{i}\right)\cdot\left(\boldsymbol{\omega}\times\Delta\mathbf{r}_{i}\right)\right)+\left(\sum_{i=1}^{n}m_{i}\,\mathbf{V}_{\mathbf{C}}\cdot\left(\boldsymbol{\omega}\times\Delta\mathbf{r}_{i}\right)\right)+\frac{1}{2}\left(\sum_{i=1}^{n}m_{i}\,\mathbf{V}_{\mathbf{C}}\cdot\mathbf{V}_{\mathbf{C}}\right).$$
Since the center of mass is defined by $\sum_{i=1}^{n}m_{i}\,\Delta\mathbf{r}_{i}=0$, the second term in this equation is zero. Introduce the skew-symmetric matrix $[\Delta\mathbf{r}_{i}]$ so the kinetic energy becomes

$$\begin{aligned}E_{\text{K}}&=\frac{1}{2}\left(\sum_{i=1}^{n}m_{i}\left(\left[\Delta\mathbf{r}_{i}\right]\boldsymbol{\omega}\right)\cdot\left(\left[\Delta\mathbf{r}_{i}\right]\boldsymbol{\omega}\right)\right)+\frac{1}{2}\left(\sum_{i=1}^{n}m_{i}\right)\mathbf{V}_{\mathbf{C}}\cdot\mathbf{V}_{\mathbf{C}}\\&=\frac{1}{2}\left(\sum_{i=1}^{n}m_{i}\left(\boldsymbol{\omega}^{\mathsf{T}}\left[\Delta\mathbf{r}_{i}\right]^{\mathsf{T}}\left[\Delta\mathbf{r}_{i}\right]\boldsymbol{\omega}\right)\right)+\frac{1}{2}\left(\sum_{i=1}^{n}m_{i}\right)\mathbf{V}_{\mathbf{C}}\cdot\mathbf{V}_{\mathbf{C}}\\&=\frac{1}{2}\boldsymbol{\omega}\cdot\left(-\sum_{i=1}^{n}m_{i}\left[\Delta\mathbf{r}_{i}\right]^{2}\right)\boldsymbol{\omega}+\frac{1}{2}\left(\sum_{i=1}^{n}m_{i}\right)\mathbf{V}_{\mathbf{C}}\cdot\mathbf{V}_{\mathbf{C}}.\end{aligned}$$
Thus, the kinetic energy of the rigid system of particles is given by
$$E_{\text{K}}=\frac{1}{2}\boldsymbol{\omega}\cdot\mathbf{I}_{\mathbf{C}}\,\boldsymbol{\omega}+\frac{1}{2}M\,\mathbf{V}_{\mathbf{C}}^{2},$$

where $\mathbf{I}_{\mathbf{C}}$ is the inertia matrix relative to the center of mass and $M$ is the total mass.
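This identity can be checked numerically: summing $\frac{1}{2}m_i|\mathbf{v}_i|^2$ over the particles should match $\frac{1}{2}\boldsymbol{\omega}\cdot\mathbf{I}_{\mathbf{C}}\boldsymbol{\omega}+\frac{1}{2}M|\mathbf{V}_{\mathbf{C}}|^2$. A sketch in plain Python (illustrative helper names; the particle data is arbitrary test data):

```python
def skew(v):
    x, y, z = v
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def matvec(M, y):
    return [sum(M[i][j] * y[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

masses = [1.0, 2.0, 1.5]
positions = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.5], [-0.5, 0.3, -1.0]]
omega = [0.3, -0.2, 0.5]          # angular velocity
V_C = [1.0, 2.0, 3.0]             # velocity of the center of mass

M = sum(masses)
C = [sum(m * p[k] for m, p in zip(masses, positions)) / M for k in range(3)]

# Direct sum over particles, with v_i = omega x dr_i + V_C,
# accumulating I_C = -sum m [dr]^2 along the way.
E_direct = 0.0
I_C = [[0.0] * 3 for _ in range(3)]
for m, p in zip(masses, positions):
    dr = [p[k] - C[k] for k in range(3)]
    v = [cross(omega, dr)[k] + V_C[k] for k in range(3)]
    E_direct += 0.5 * m * dot(v, v)
    S2 = matmul(skew(dr), skew(dr))
    for i in range(3):
        for j in range(3):
            I_C[i][j] -= m * S2[i][j]

# Inertia-matrix form: 1/2 w . I_C w + 1/2 M |V_C|^2.
E_matrix = 0.5 * dot(omega, matvec(I_C, omega)) + 0.5 * M * dot(V_C, V_C)
print(abs(E_direct - E_matrix) < 1e-9)  # True
```

The agreement relies on the cross term vanishing, which in turn relies on measuring $\Delta\mathbf{r}_i$ from the center of mass.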
=== Resultant torque ===
The inertia matrix appears in the application of Newton's second law to a rigid assembly of particles. The resultant torque on this system is,
$$\boldsymbol{\tau}=\sum_{i=1}^{n}\left(\mathbf{r}_{i}-\mathbf{R}\right)\times m_{i}\mathbf{a}_{i},$$

where $\mathbf{a}_{i}$ is the acceleration of the particle $P_i$. The kinematics of a rigid body yields the formula for the acceleration of the particle $P_i$ in terms of the position $\mathbf{R}$ and acceleration $\mathbf{A}_{\mathbf{R}}$ of the reference point, as well as the angular velocity vector $\boldsymbol{\omega}$ and angular acceleration vector $\boldsymbol{\alpha}$ of the rigid system as

$$\mathbf{a}_{i}=\boldsymbol{\alpha}\times\left(\mathbf{r}_{i}-\mathbf{R}\right)+\boldsymbol{\omega}\times\left(\boldsymbol{\omega}\times\left(\mathbf{r}_{i}-\mathbf{R}\right)\right)+\mathbf{A}_{\mathbf{R}}.$$
Use the center of mass $\mathbf{C}$ as the reference point, and introduce the skew-symmetric matrix $\left[\Delta\mathbf{r}_{i}\right]=\left[\mathbf{r}_{i}-\mathbf{C}\right]$ to represent the cross product $(\mathbf{r}_{i}-\mathbf{C})\times$, to obtain

$$\boldsymbol{\tau}=\left(-\sum_{i=1}^{n}m_{i}\left[\Delta\mathbf{r}_{i}\right]^{2}\right)\boldsymbol{\alpha}+\boldsymbol{\omega}\times\left(-\sum_{i=1}^{n}m_{i}\left[\Delta\mathbf{r}_{i}\right]^{2}\right)\boldsymbol{\omega}.$$
The calculation uses the identity

$$\Delta\mathbf{r}_{i}\times\left(\boldsymbol{\omega}\times\left(\boldsymbol{\omega}\times\Delta\mathbf{r}_{i}\right)\right)+\boldsymbol{\omega}\times\left(\left(\boldsymbol{\omega}\times\Delta\mathbf{r}_{i}\right)\times\Delta\mathbf{r}_{i}\right)=0,$$

obtained from the Jacobi identity for the triple cross product.
Thus, the resultant torque on the rigid system of particles is given by
$$\boldsymbol{\tau}=\mathbf{I}_{\mathbf{C}}\,\boldsymbol{\alpha}+\boldsymbol{\omega}\times\mathbf{I}_{\mathbf{C}}\,\boldsymbol{\omega},$$

where $\mathbf{I}_{\mathbf{C}}$ is the inertia matrix relative to the center of mass.
=== Parallel axis theorem ===
The inertia matrix of a body depends on the choice of the reference point. There is a useful relationship between the inertia matrix relative to the center of mass $\mathbf{C}$ and the inertia matrix relative to another point $\mathbf{R}$. This relationship is called the parallel axis theorem.

Consider the inertia matrix $\mathbf{I}_{\mathbf{R}}$ obtained for a rigid system of particles measured relative to a reference point $\mathbf{R}$, given by

$$\mathbf{I}_{\mathbf{R}}=-\sum_{i=1}^{n}m_{i}\left[\mathbf{r}_{i}-\mathbf{R}\right]^{2}.$$

Let $\mathbf{C}$ be the center of mass of the rigid system; then

$$\mathbf{R}=(\mathbf{R}-\mathbf{C})+\mathbf{C}=\mathbf{d}+\mathbf{C},$$

where $\mathbf{d}$ is the vector from the center of mass $\mathbf{C}$ to the reference point $\mathbf{R}$. Use this equation to compute the inertia matrix,
$$\mathbf{I}_{\mathbf{R}}=-\sum_{i=1}^{n}m_{i}\left[\mathbf{r}_{i}-\left(\mathbf{C}+\mathbf{d}\right)\right]^{2}=-\sum_{i=1}^{n}m_{i}\left[\left(\mathbf{r}_{i}-\mathbf{C}\right)-\mathbf{d}\right]^{2}.$$
Distribute over the cross product to obtain
$$\mathbf{I}_{\mathbf{R}}=-\left(\sum_{i=1}^{n}m_{i}\left[\mathbf{r}_{i}-\mathbf{C}\right]^{2}\right)+\left(\sum_{i=1}^{n}m_{i}\left[\mathbf{r}_{i}-\mathbf{C}\right]\right)[\mathbf{d}]+[\mathbf{d}]\left(\sum_{i=1}^{n}m_{i}\left[\mathbf{r}_{i}-\mathbf{C}\right]\right)-\left(\sum_{i=1}^{n}m_{i}\right)[\mathbf{d}]^{2}.$$
The first term is the inertia matrix $\mathbf{I}_{\mathbf{C}}$ relative to the center of mass. The second and third terms are zero by the definition of the center of mass $\mathbf{C}$. The last term is the total mass of the system multiplied by the square of the skew-symmetric matrix $[\mathbf{d}]$ constructed from $\mathbf{d}$.
The result is the parallel axis theorem,
$$\mathbf{I}_{\mathbf{R}}=\mathbf{I}_{\mathbf{C}}-M[\mathbf{d}]^{2},$$

where $\mathbf{d}$ is the vector from the center of mass $\mathbf{C}$ to the reference point $\mathbf{R}$.
Note on the minus sign: By using the skew-symmetric matrix of position vectors relative to the reference point, the inertia matrix of each particle has the form $-m\left[\mathbf{r}\right]^{2}$, which is similar to the $mr^{2}$ that appears in planar movement. However, to make this work out correctly a minus sign is needed. This minus sign can be absorbed into the term $m\left[\mathbf{r}\right]^{\mathsf{T}}\left[\mathbf{r}\right]$, if desired, by using the skew-symmetry property of $[\mathbf{r}]$.
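The parallel axis theorem is easy to verify numerically: compute $\mathbf{I}_{\mathbf{R}}$ directly from the particle positions and compare it with $\mathbf{I}_{\mathbf{C}}-M[\mathbf{d}]^2$. A sketch in plain Python (illustrative names, arbitrary test data):

```python
def skew(v):
    x, y, z = v
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def inertia_about(masses, positions, ref):
    """I = -sum_i m_i [r_i - ref]^2."""
    I = [[0.0] * 3 for _ in range(3)]
    for m, p in zip(masses, positions):
        S = skew([p[k] - ref[k] for k in range(3)])
        S2 = matmul(S, S)
        for i in range(3):
            for j in range(3):
                I[i][j] -= m * S2[i][j]
    return I

masses = [1.0, 2.0, 3.0]
positions = [[1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [2.0, 0.0, 1.0]]
M = sum(masses)
C = [sum(m * p[k] for m, p in zip(masses, positions)) / M for k in range(3)]
R = [0.5, -1.0, 2.0]
d = [R[k] - C[k] for k in range(3)]

I_R_direct = inertia_about(masses, positions, R)

# Parallel axis theorem: I_R = I_C - M [d]^2.
I_C = inertia_about(masses, positions, C)
Sd2 = matmul(skew(d), skew(d))
I_R_theorem = [[I_C[i][j] - M * Sd2[i][j] for j in range(3)] for i in range(3)]

ok = all(abs(I_R_direct[i][j] - I_R_theorem[i][j]) < 1e-9
         for i in range(3) for j in range(3))
print(ok)  # True
```

Note the sign: because $[\mathbf{d}]^2$ is negative semi-definite, the term $-M[\mathbf{d}]^2$ adds inertia, as expected when moving the reference point away from the center of mass.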
=== Scalar moment of inertia in a plane ===
The scalar moment of inertia, $I_L$, of a body about a specified axis, whose direction is specified by the unit vector $\mathbf{\hat{k}}$ and which passes through the body at a point $\mathbf{R}$, is as follows:

$$I_{L}=\mathbf{\hat{k}}\cdot\left(-\sum_{i=1}^{N}m_{i}\left[\Delta\mathbf{r}_{i}\right]^{2}\right)\mathbf{\hat{k}}=\mathbf{\hat{k}}\cdot\mathbf{I}_{\mathbf{R}}\,\mathbf{\hat{k}}=\mathbf{\hat{k}}^{\mathsf{T}}\mathbf{I}_{\mathbf{R}}\,\mathbf{\hat{k}},$$

where $\mathbf{I}_{\mathbf{R}}$ is the moment of inertia matrix of the system relative to the reference point $\mathbf{R}$, and $[\Delta\mathbf{r}_{i}]$ is the skew-symmetric matrix obtained from the vector $\Delta\mathbf{r}_{i}=\mathbf{r}_{i}-\mathbf{R}$.
This is derived as follows. Let a rigid assembly of $n$ particles, $P_i,\ i=1,\dots,n$, have coordinates $\mathbf{r}_{i}$. Choose $\mathbf{R}$ as a reference point and compute the moment of inertia around a line $L$ defined by the unit vector $\mathbf{\hat{k}}$ through the reference point $\mathbf{R}$, $\mathbf{L}(t)=\mathbf{R}+t\mathbf{\hat{k}}$. The perpendicular vector from this line to the particle $P_i$ is obtained from $\Delta\mathbf{r}_{i}$ by removing the component that projects onto $\mathbf{\hat{k}}$:

$$\Delta\mathbf{r}_{i}^{\perp}=\Delta\mathbf{r}_{i}-\left(\mathbf{\hat{k}}\cdot\Delta\mathbf{r}_{i}\right)\mathbf{\hat{k}}=\left(\mathbf{E}-\mathbf{\hat{k}}\mathbf{\hat{k}}^{\mathsf{T}}\right)\Delta\mathbf{r}_{i},$$
where $\mathbf{E}$ is the identity matrix (used here so as to avoid confusion with the inertia matrix), and $\mathbf{\hat{k}}\mathbf{\hat{k}}^{\mathsf{T}}$ is the outer product matrix formed from the unit vector $\mathbf{\hat{k}}$ along the line $L$.
To relate this scalar moment of inertia to the inertia matrix of the body, introduce the skew-symmetric matrix $\left[\mathbf{\hat{k}}\right]$ such that $\left[\mathbf{\hat{k}}\right]\mathbf{y}=\mathbf{\hat{k}}\times\mathbf{y}$; then we have the identity

$$-\left[\mathbf{\hat{k}}\right]^{2}\equiv\left|\mathbf{\hat{k}}\right|^{2}\left(\mathbf{E}-\mathbf{\hat{k}}\mathbf{\hat{k}}^{\mathsf{T}}\right)=\mathbf{E}-\mathbf{\hat{k}}\mathbf{\hat{k}}^{\mathsf{T}},$$
noting that $\mathbf{\hat{k}}$ is a unit vector.
The magnitude squared of the perpendicular vector is
$$\begin{aligned}\left|\Delta\mathbf{r}_{i}^{\perp}\right|^{2}&=\left(-\left[\mathbf{\hat{k}}\right]^{2}\Delta\mathbf{r}_{i}\right)\cdot\left(-\left[\mathbf{\hat{k}}\right]^{2}\Delta\mathbf{r}_{i}\right)\\&=\left(\mathbf{\hat{k}}\times\left(\mathbf{\hat{k}}\times\Delta\mathbf{r}_{i}\right)\right)\cdot\left(\mathbf{\hat{k}}\times\left(\mathbf{\hat{k}}\times\Delta\mathbf{r}_{i}\right)\right)\end{aligned}$$
The simplification of this equation uses the triple scalar product identity
$$\left(\mathbf{\hat{k}}\times\left(\mathbf{\hat{k}}\times\Delta\mathbf{r}_{i}\right)\right)\cdot\left(\mathbf{\hat{k}}\times\left(\mathbf{\hat{k}}\times\Delta\mathbf{r}_{i}\right)\right)\equiv\left(\left(\mathbf{\hat{k}}\times\left(\mathbf{\hat{k}}\times\Delta\mathbf{r}_{i}\right)\right)\times\mathbf{\hat{k}}\right)\cdot\left(\mathbf{\hat{k}}\times\Delta\mathbf{r}_{i}\right),$$
where the dot and the cross products have been interchanged. Exchanging products, and simplifying by using $\mathbf{\hat{k}}\times\mathbf{\hat{k}}=\mathbf{0}$:
$$\begin{aligned}&\left(\mathbf{\hat{k}}\times\left(\mathbf{\hat{k}}\times\Delta\mathbf{r}_{i}\right)\right)\cdot\left(\mathbf{\hat{k}}\times\left(\mathbf{\hat{k}}\times\Delta\mathbf{r}_{i}\right)\right)\\={}&\left(\left(\mathbf{\hat{k}}\times\left(\mathbf{\hat{k}}\times\Delta\mathbf{r}_{i}\right)\right)\times\mathbf{\hat{k}}\right)\cdot\left(\mathbf{\hat{k}}\times\Delta\mathbf{r}_{i}\right)\\={}&\left(\mathbf{\hat{k}}\times\Delta\mathbf{r}_{i}\right)\cdot\left(-\Delta\mathbf{r}_{i}\times\mathbf{\hat{k}}\right)\\={}&-\mathbf{\hat{k}}\cdot\left(\Delta\mathbf{r}_{i}\times\left(\Delta\mathbf{r}_{i}\times\mathbf{\hat{k}}\right)\right)\\={}&-\mathbf{\hat{k}}\cdot\left[\Delta\mathbf{r}_{i}\right]^{2}\mathbf{\hat{k}}.\end{aligned}$$
Thus, the moment of inertia around the line $L$ through $\mathbf{R}$ in the direction $\mathbf{\hat{k}}$ is obtained from the calculation

$$\begin{aligned}I_{L}&=\sum_{i=1}^{N}m_{i}\left|\Delta\mathbf{r}_{i}^{\perp}\right|^{2}\\&=-\sum_{i=1}^{N}m_{i}\,\mathbf{\hat{k}}\cdot\left[\Delta\mathbf{r}_{i}\right]^{2}\mathbf{\hat{k}}=\mathbf{\hat{k}}\cdot\left(-\sum_{i=1}^{N}m_{i}\left[\Delta\mathbf{r}_{i}\right]^{2}\right)\mathbf{\hat{k}}\\&=\mathbf{\hat{k}}\cdot\mathbf{I}_{\mathbf{R}}\,\mathbf{\hat{k}}=\mathbf{\hat{k}}^{\mathsf{T}}\mathbf{I}_{\mathbf{R}}\,\mathbf{\hat{k}},\end{aligned}$$
where $\mathbf{I}_{\mathbf{R}}$ is the moment of inertia matrix of the system relative to the reference point $\mathbf{R}$.
This shows that the inertia matrix can be used to calculate the moment of inertia of a body around any specified rotation axis in the body.
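A short numerical sketch of this projection formula, in plain Python: the quadratic form $\hat{\mathbf{k}}^{\mathsf{T}}\mathbf{I}_{\mathbf{R}}\hat{\mathbf{k}}$ should agree with the direct sum $\sum_i m_i|\Delta\mathbf{r}_i^{\perp}|^2$, using $|\Delta\mathbf{r}_i^{\perp}|^2=|\Delta\mathbf{r}_i|^2-(\hat{\mathbf{k}}\cdot\Delta\mathbf{r}_i)^2$ (illustrative names, arbitrary test data):

```python
import math

def skew(v):
    x, y, z = v
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def matvec(M, y):
    return [sum(M[i][j] * y[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

masses = [1.0, 2.0, 0.5]
positions = [[1.0, 0.0, 2.0], [0.0, 1.0, -1.0], [3.0, 1.0, 0.0]]
R = [0.0, 0.0, 0.0]                # reference point on the axis
k = [1.0 / math.sqrt(3.0)] * 3     # unit vector along the axis

# Inertia matrix relative to R: I_R = -sum m [dr]^2.
I_R = [[0.0] * 3 for _ in range(3)]
for m, p in zip(masses, positions):
    S = skew([p[i] - R[i] for i in range(3)])
    S2 = matmul(S, S)
    for i in range(3):
        for j in range(3):
            I_R[i][j] -= m * S2[i][j]

I_L_matrix = dot(k, matvec(I_R, k))    # k^T I_R k

# Direct sum of m |dr_perp|^2 = m (|dr|^2 - (k . dr)^2).
I_L_direct = 0.0
for m, p in zip(masses, positions):
    dr = [p[i] - R[i] for i in range(3)]
    I_L_direct += m * (dot(dr, dr) - dot(k, dr) ** 2)

print(abs(I_L_matrix - I_L_direct) < 1e-9)  # True
```

The same `I_R` can be reused for any axis direction through $\mathbf{R}$, which is the practical point of the matrix form.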
== Inertia tensor ==
For the same object, different axes of rotation will have different moments of inertia about those axes. In general, the moments of inertia are not equal unless the object is symmetric about all axes. The moment of inertia tensor is a convenient way to summarize all moments of inertia of an object with one quantity. It may be calculated with respect to any point in space, although for practical purposes the center of mass is most commonly used.
=== Definition ===
For a rigid object of $N$ point masses $m_k$, the moment of inertia tensor is given by

$$\mathbf{I}=\begin{bmatrix}I_{11}&I_{12}&I_{13}\\I_{21}&I_{22}&I_{23}\\I_{31}&I_{32}&I_{33}\end{bmatrix}.$$
Its components are defined as
$$I_{ij}\ \stackrel{\mathrm{def}}{=}\ \sum_{k=1}^{N}m_{k}\left(\left\|\mathbf{r}_{k}\right\|^{2}\delta_{ij}-x_{i}^{(k)}x_{j}^{(k)}\right),$$
where $i$, $j$ equal 1, 2, or 3 for $x$, $y$, and $z$, respectively; $\mathbf{r}_{k}=\left(x_{1}^{(k)},x_{2}^{(k)},x_{3}^{(k)}\right)$ is the vector to the point mass $m_k$ from the point about which the tensor is calculated; and $\delta_{ij}$ is the Kronecker delta.

Note that, by this definition, $\mathbf{I}$ is a symmetric tensor.
The diagonal elements are more succinctly written as
$$\begin{aligned}I_{xx}\ &\stackrel{\mathrm{def}}{=}\ \sum_{k=1}^{N}m_{k}\left(y_{k}^{2}+z_{k}^{2}\right),\\I_{yy}\ &\stackrel{\mathrm{def}}{=}\ \sum_{k=1}^{N}m_{k}\left(x_{k}^{2}+z_{k}^{2}\right),\\I_{zz}\ &\stackrel{\mathrm{def}}{=}\ \sum_{k=1}^{N}m_{k}\left(x_{k}^{2}+y_{k}^{2}\right),\end{aligned}$$
while the off-diagonal elements, also called the products of inertia, are
$$\begin{aligned}I_{xy}=I_{yx}\ &\stackrel{\mathrm{def}}{=}\ -\sum_{k=1}^{N}m_{k}x_{k}y_{k},\\I_{xz}=I_{zx}\ &\stackrel{\mathrm{def}}{=}\ -\sum_{k=1}^{N}m_{k}x_{k}z_{k},\\I_{yz}=I_{zy}\ &\stackrel{\mathrm{def}}{=}\ -\sum_{k=1}^{N}m_{k}y_{k}z_{k}.\end{aligned}$$
Here $I_{xx}$ denotes the moment of inertia around the $x$-axis when the objects are rotated around the $x$-axis, $I_{xy}$ denotes the moment of inertia around the $y$-axis when the objects are rotated around the $x$-axis, and so on.
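The component definitions above translate directly into code. A sketch in plain Python of the Kronecker-delta form $I_{ij}=\sum_k m_k(\|\mathbf{r}_k\|^2\delta_{ij}-x_i^{(k)}x_j^{(k)})$, checked against the explicit $I_{xx}$ and product-of-inertia formulas for a single point mass (the function name is illustrative):

```python
def inertia_tensor(masses, positions):
    """I_ij = sum_k m_k (|r_k|^2 delta_ij - x_i^(k) x_j^(k))."""
    I = [[0.0] * 3 for _ in range(3)]
    for m, r in zip(masses, positions):
        r2 = sum(c * c for c in r)
        for i in range(3):
            for j in range(3):
                I[i][j] += m * ((r2 if i == j else 0.0) - r[i] * r[j])
    return I

# Single point mass m = 2 at (x, y, z) = (1, 2, 3).
I = inertia_tensor([2.0], [[1.0, 2.0, 3.0]])

print(I[0][0])  # I_xx = m (y^2 + z^2) = 2 * 13 = 26.0
print(I[0][1])  # I_xy = -m x y = -4.0
print(I[1][2])  # I_yz = -m y z = -12.0
```

The matrix is symmetric by construction, matching the note above that $\mathbf{I}$ is a symmetric tensor.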
These quantities can be generalized to an object with distributed mass, described by a mass density function, in a similar fashion to the scalar moment of inertia. One then has
$$\mathbf{I}=\iiint_{V}\rho(x,y,z)\left(\|\mathbf{r}\|^{2}\mathbf{E}_{3}-\mathbf{r}\otimes\mathbf{r}\right)\,dx\,dy\,dz,$$

where $\mathbf{r}\otimes\mathbf{r}$ is the outer product, $\mathbf{E}_{3}$ is the 3×3 identity matrix, and $V$ is a region of space completely containing the object.
Alternatively, it can also be written in terms of the angular momentum operator $[\mathbf{r}]\mathbf{x}=\mathbf{r}\times\mathbf{x}$:

$$\mathbf{I}=\iiint_{V}\rho(\mathbf{r})[\mathbf{r}]^{\mathsf{T}}[\mathbf{r}]\,dV=-\iiint_{V}\rho(\mathbf{r})[\mathbf{r}]^{2}\,dV.$$
The inertia tensor can be used in the same way as the inertia matrix to compute the scalar moment of inertia about an arbitrary axis in the direction $\mathbf{n}$,

$$I_{n}=\mathbf{n}\cdot\mathbf{I}\cdot\mathbf{n},$$

where the dot product is taken with the corresponding elements in the component tensors. A product of inertia term such as $I_{12}$ is obtained by the computation

$$I_{12}=\mathbf{e}_{1}\cdot\mathbf{I}\cdot\mathbf{e}_{2},$$

and can be interpreted as the moment of inertia around the $x$-axis when the object rotates around the $y$-axis.
The components of tensors of degree two can be assembled into a matrix. For the inertia tensor this matrix is given by,
$$\begin{aligned}\mathbf{I}&=\begin{bmatrix}I_{11}&I_{12}&I_{13}\\I_{21}&I_{22}&I_{23}\\I_{31}&I_{32}&I_{33}\end{bmatrix}=\begin{bmatrix}I_{xx}&I_{xy}&I_{xz}\\I_{yx}&I_{yy}&I_{yz}\\I_{zx}&I_{zy}&I_{zz}\end{bmatrix}\\&=\sum_{k=1}^{N}\begin{bmatrix}m_{k}\left(y_{k}^{2}+z_{k}^{2}\right)&-m_{k}x_{k}y_{k}&-m_{k}x_{k}z_{k}\\-m_{k}x_{k}y_{k}&m_{k}\left(x_{k}^{2}+z_{k}^{2}\right)&-m_{k}y_{k}z_{k}\\-m_{k}x_{k}z_{k}&-m_{k}y_{k}z_{k}&m_{k}\left(x_{k}^{2}+y_{k}^{2}\right)\end{bmatrix}.\end{aligned}$$
It is common in rigid body mechanics to use notation that explicitly identifies the $x$-, $y$-, and $z$-axes, such as $I_{xx}$ and $I_{xy}$, for the components of the inertia tensor.
=== Alternate inertia convention ===
There are some CAD and CAE applications such as SolidWorks, Unigraphics NX/Siemens NX and MSC Adams that use an alternate convention for the products of inertia. According to this convention, the minus sign is removed from the product of inertia formulas and instead inserted in the inertia matrix:
$$\begin{aligned}I_{xy}=I_{yx}\ &\stackrel{\mathrm{def}}{=}\ \sum_{k=1}^{N}m_{k}x_{k}y_{k},\\I_{xz}=I_{zx}\ &\stackrel{\mathrm{def}}{=}\ \sum_{k=1}^{N}m_{k}x_{k}z_{k},\\I_{yz}=I_{zy}\ &\stackrel{\mathrm{def}}{=}\ \sum_{k=1}^{N}m_{k}y_{k}z_{k},\end{aligned}$$

$$\begin{aligned}\mathbf{I}=\begin{bmatrix}I_{11}&I_{12}&I_{13}\\I_{21}&I_{22}&I_{23}\\I_{31}&I_{32}&I_{33}\end{bmatrix}&=\begin{bmatrix}I_{xx}&-I_{xy}&-I_{xz}\\-I_{yx}&I_{yy}&-I_{yz}\\-I_{zx}&-I_{zy}&I_{zz}\end{bmatrix}\\&=\sum_{k=1}^{N}\begin{bmatrix}m_{k}\left(y_{k}^{2}+z_{k}^{2}\right)&-m_{k}x_{k}y_{k}&-m_{k}x_{k}z_{k}\\-m_{k}x_{k}y_{k}&m_{k}\left(x_{k}^{2}+z_{k}^{2}\right)&-m_{k}y_{k}z_{k}\\-m_{k}x_{k}z_{k}&-m_{k}y_{k}z_{k}&m_{k}\left(x_{k}^{2}+y_{k}^{2}\right)\end{bmatrix}.\end{aligned}$$
==== Determine inertia convention (principal axes method) ====
If one has the inertia data $(I_{xx},I_{yy},I_{zz},I_{xy},I_{xz},I_{yz})$ without knowing which inertia convention has been used, it can be determined if one also has the principal axes. With the principal axes method, one makes inertia matrices from the following two assumptions:

1. The standard inertia convention has been used: $(I_{12}=I_{xy},\ I_{13}=I_{xz},\ I_{23}=I_{yz})$.
2. The alternate inertia convention has been used: $(I_{12}=-I_{xy},\ I_{13}=-I_{xz},\ I_{23}=-I_{yz})$.
Next, one calculates the eigenvectors for the two matrices. The matrix whose eigenvectors are parallel to the principal axes corresponds to the inertia convention that has been used.
=== Derivation of the tensor components ===
The distance $r$ of a particle at $\mathbf{x}$ from the axis of rotation passing through the origin in the $\mathbf{\hat{n}}$ direction is $\left|\mathbf{x}-\left(\mathbf{x}\cdot\mathbf{\hat{n}}\right)\mathbf{\hat{n}}\right|$, where $\mathbf{\hat{n}}$ is a unit vector. The moment of inertia about the axis is
$$I=mr^{2}=m\left(\mathbf{x}-\left(\mathbf{x}\cdot\mathbf{\hat{n}}\right)\mathbf{\hat{n}}\right)\cdot\left(\mathbf{x}-\left(\mathbf{x}\cdot\mathbf{\hat{n}}\right)\mathbf{\hat{n}}\right)=m\left(\mathbf{x}^{2}-2\,\mathbf{x}\cdot\left(\mathbf{x}\cdot\mathbf{\hat{n}}\right)\mathbf{\hat{n}}+\left(\mathbf{x}\cdot\mathbf{\hat{n}}\right)^{2}\mathbf{\hat{n}}^{2}\right)=m\left(\mathbf{x}^{2}-\left(\mathbf{x}\cdot\mathbf{\hat{n}}\right)^{2}\right).$$
Rewrite the equation using the matrix transpose:
{\displaystyle I=m\left(\mathbf {x} ^{\textsf {T}}\mathbf {x} -\mathbf {\hat {n}} ^{\textsf {T}}\mathbf {x} \mathbf {x} ^{\textsf {T}}\mathbf {\hat {n}} \right)=m\cdot \mathbf {\hat {n}} ^{\textsf {T}}\left(\mathbf {x} ^{\textsf {T}}\mathbf {x} \cdot \mathbf {E_{3}} -\mathbf {x} \mathbf {x} ^{\textsf {T}}\right)\mathbf {\hat {n}} ,}
where E3 is the 3×3 identity matrix.
This leads to a tensor formula for the moment of inertia
{\displaystyle I=m{\begin{bmatrix}n_{1}&n_{2}&n_{3}\end{bmatrix}}{\begin{bmatrix}y^{2}+z^{2}&-xy&-xz\\[0.5ex]-yx&x^{2}+z^{2}&-yz\\[0.5ex]-zx&-zy&x^{2}+y^{2}\end{bmatrix}}{\begin{bmatrix}n_{1}\\[0.7ex]n_{2}\\[0.7ex]n_{3}\end{bmatrix}}.}
For multiple particles, we need only recall that the moment of inertia is additive in order to see that this formula is correct.
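Since the tensor is additive over particles, the quadratic-form expression above translates directly into code — a minimal sketch assuming NumPy; the function names are ours:

```python
import numpy as np

def inertia_tensor(masses, positions):
    """Sum m * ((x . x) E3 - x x^T) over all particles, using the
    additivity of the moment of inertia."""
    I = np.zeros((3, 3))
    for m, x in zip(masses, np.asarray(positions, dtype=float)):
        I += m * (np.dot(x, x) * np.eye(3) - np.outer(x, x))
    return I

def moment_about_axis(I, n):
    """Scalar moment of inertia about the axis n: n^T I n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return float(n @ I @ n)
```

For a unit mass at (0, 1, 0) this gives I = diag(1, 0, 1), and the moment about the z-axis is m(x² + y²) = 1, as expected.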
=== Inertia tensor of translation ===
Let {\displaystyle \mathbf {I} _{0}} be the inertia tensor of a body calculated at its center of mass, and {\displaystyle \mathbf {R} } be the displacement vector of the body. The inertia tensor of the translated body with respect to its original center of mass is given by:
{\displaystyle \mathbf {I} =\mathbf {I} _{0}+m[(\mathbf {R} \cdot \mathbf {R} )\mathbf {E} _{3}-\mathbf {R} \otimes \mathbf {R} ]}
where {\displaystyle m} is the body's mass, E3 is the 3 × 3 identity matrix, and {\displaystyle \otimes } is the outer product.
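This is the tensor form of the parallel axis theorem; a minimal sketch assuming NumPy (helper name ours):

```python
import numpy as np

def translate_inertia(I0, m, R):
    """Shift a center-of-mass inertia tensor I0 by displacement R for a
    body of mass m: I = I0 + m * ((R . R) E3 - outer(R, R))."""
    R = np.asarray(R, dtype=float)
    return I0 + m * (np.dot(R, R) * np.eye(3) - np.outer(R, R))
```

Displacing a point mass (I0 = 0) of mass 2 by (0, 0, 3) yields diag(18, 18, 0): the moment grows by m·d² about the two axes perpendicular to the displacement and is unchanged along it.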
=== Inertia tensor of rotation ===
Let {\displaystyle \mathbf {R} } be the matrix that represents a body's rotation. The inertia tensor of the rotated body is given by:
{\displaystyle \mathbf {I} =\mathbf {R} \mathbf {I_{0}} \mathbf {R} ^{\textsf {T}}}
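This is a similarity transform, so the principal moments (the eigenvalues) are unchanged; only the axes rotate. A minimal sketch assuming NumPy:

```python
import numpy as np

def rotate_inertia(I0, R):
    """Inertia tensor of the rotated body: I = R I0 R^T."""
    return R @ I0 @ R.T
```

For example, a 90° rotation about z turns diag(1, 2, 3) into diag(2, 1, 3): the x and y moments swap, and the eigenvalues are preserved.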
== Inertia matrix in different reference frames ==
The use of the inertia matrix in Newton's second law assumes its components are computed relative to axes parallel to the inertial frame and not relative to a body-fixed reference frame. This means that as the body moves the components of the inertia matrix change with time. In contrast, the components of the inertia matrix measured in a body-fixed frame are constant.
=== Body frame ===
Let the body frame inertia matrix relative to the center of mass be denoted {\displaystyle \mathbf {I} _{\mathbf {C} }^{B}}, and define the orientation of the body frame relative to the inertial frame by the rotation matrix {\displaystyle \mathbf {A} }, such that
{\displaystyle \mathbf {x} =\mathbf {A} \mathbf {y} ,}
where vectors {\displaystyle \mathbf {y} } in the body-fixed coordinate frame have coordinates {\displaystyle \mathbf {x} } in the inertial frame. Then, the inertia matrix of the body measured in the inertial frame is given by
{\displaystyle \mathbf {I} _{\mathbf {C} }=\mathbf {A} \mathbf {I} _{\mathbf {C} }^{B}\mathbf {A} ^{\mathsf {T}}.}
Notice that {\displaystyle \mathbf {A} } changes as the body moves, while {\displaystyle \mathbf {I} _{\mathbf {C} }^{B}} remains constant.
=== Principal axes ===
Measured in the body frame, the inertia matrix is a constant real symmetric matrix. A real symmetric matrix has the eigendecomposition into the product of a rotation matrix {\displaystyle \mathbf {Q} } and a diagonal matrix {\displaystyle {\boldsymbol {\Lambda }}}, given by
{\displaystyle \mathbf {I} _{\mathbf {C} }^{B}=\mathbf {Q} {\boldsymbol {\Lambda }}\mathbf {Q} ^{\mathsf {T}},}
where
{\displaystyle {\boldsymbol {\Lambda }}={\begin{bmatrix}I_{1}&0&0\\0&I_{2}&0\\0&0&I_{3}\end{bmatrix}}.}
The columns of the rotation matrix {\displaystyle \mathbf {Q} } define the directions of the principal axes of the body, and the constants {\displaystyle I_{1}}, {\displaystyle I_{2}}, and {\displaystyle I_{3}} are called the principal moments of inertia. This result was first shown by J. J. Sylvester (1852), and is a form of Sylvester's law of inertia. When the body has an axis of symmetry (sometimes called the figure axis or axis of figure), the other two moments of inertia will be identical and any axis perpendicular to the axis of symmetry will be a principal axis.
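In practice the decomposition is computed with a symmetric eigensolver; a minimal sketch assuming NumPy (`numpy.linalg.eigh` is appropriate because the inertia matrix is real symmetric):

```python
import numpy as np

def principal_axes(I):
    """Diagonalize a symmetric inertia matrix as I = Q diag(moments) Q^T.
    eigh returns the principal moments in ascending order; the columns
    of Q are the corresponding principal axes."""
    moments, Q = np.linalg.eigh(I)
    return moments, Q
```

Reconstructing Q diag(moments) Qᵀ recovers the original matrix, which also serves as a sanity check on the decomposition.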
A toy top is an example of a rotating rigid body, and the word top is used in the names of types of rigid bodies. When all principal moments of inertia are distinct, the principal axes through center of mass are uniquely specified and the rigid body is called an asymmetric top. If two principal moments are the same, the rigid body is called a symmetric top and there is no unique choice for the two corresponding principal axes. If all three principal moments are the same, the rigid body is called a spherical top (although it need not be spherical) and any axis can be considered a principal axis, meaning that the moment of inertia is the same about any axis.
The principal axes are often aligned with the object's symmetry axes. If a rigid body has an axis of symmetry of order {\displaystyle m}, meaning it is symmetrical under rotations of 360°/m about the given axis, that axis is a principal axis. When {\displaystyle m>2}, the rigid body is a symmetric top. If a rigid body has at least two symmetry axes that are not parallel or perpendicular to each other, it is a spherical top, for example, a cube or any other Platonic solid.
The motion of vehicles is often described in terms of yaw, pitch, and roll which usually correspond approximately to rotations about the three principal axes. If the vehicle has bilateral symmetry then one of the principal axes will correspond exactly to the transverse (pitch) axis.
A practical example of this mathematical phenomenon is the routine automotive task of balancing a tire, which basically means adjusting the distribution of mass of a car wheel such that its principal axis of inertia is aligned with the axle so the wheel does not wobble.
Rotating molecules are also classified as asymmetric, symmetric, or spherical tops, and the structure of their rotational spectra is different for each type.
=== Ellipsoid ===
The moment of inertia matrix in body-frame coordinates is a quadratic form that defines a surface in the body called Poinsot's ellipsoid. Let {\displaystyle {\boldsymbol {\Lambda }}} be the inertia matrix relative to the center of mass aligned with the principal axes, then the surface
{\displaystyle \mathbf {x} ^{\mathsf {T}}{\boldsymbol {\Lambda }}\mathbf {x} =1,}
or
{\displaystyle I_{1}x^{2}+I_{2}y^{2}+I_{3}z^{2}=1,}
defines an ellipsoid in the body frame. Write this equation in the form
{\displaystyle \left({\frac {x}{1/{\sqrt {I_{1}}}}}\right)^{2}+\left({\frac {y}{1/{\sqrt {I_{2}}}}}\right)^{2}+\left({\frac {z}{1/{\sqrt {I_{3}}}}}\right)^{2}=1,}
to see that the semi-principal diameters of this ellipsoid are given by
{\displaystyle a={\frac {1}{\sqrt {I_{1}}}},\quad b={\frac {1}{\sqrt {I_{2}}}},\quad c={\frac {1}{\sqrt {I_{3}}}}.}
Let a point {\displaystyle \mathbf {x} } on this ellipsoid be defined in terms of its magnitude and direction, {\displaystyle \mathbf {x} =\|\mathbf {x} \|\mathbf {n} }, where {\displaystyle \mathbf {n} } is a unit vector. Then the relationship presented above, between the inertia matrix and the scalar moment of inertia {\displaystyle I_{\mathbf {n} }} around an axis in the direction {\displaystyle \mathbf {n} }, yields
{\displaystyle \mathbf {x} ^{\mathsf {T}}{\boldsymbol {\Lambda }}\mathbf {x} =\|\mathbf {x} \|^{2}\mathbf {n} ^{\mathsf {T}}{\boldsymbol {\Lambda }}\mathbf {n} =\|\mathbf {x} \|^{2}I_{\mathbf {n} }=1.}
Thus, the magnitude of a point {\displaystyle \mathbf {x} } in the direction {\displaystyle \mathbf {n} } on the inertia ellipsoid is
{\displaystyle \|\mathbf {x} \|={\frac {1}{\sqrt {I_{\mathbf {n} }}}}.}
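Numerically, the radius of the ellipsoid in a given direction follows directly from the scalar moment about that axis — a minimal sketch; the function name is ours:

```python
import math

def ellipsoid_radius(I1, I2, I3, n):
    """Distance from the center of Poinsot's ellipsoid to its surface
    along the unit vector n = (nx, ny, nz): ||x|| = 1 / sqrt(I_n),
    where I_n = n^T Lambda n = I1*nx^2 + I2*ny^2 + I3*nz^2."""
    nx, ny, nz = n
    I_n = I1 * nx**2 + I2 * ny**2 + I3 * nz**2
    return 1.0 / math.sqrt(I_n)
```

Along a principal axis this reduces to the semi-principal diameter 1/√I₁ given above, and every returned point satisfies xᵀΛx = 1.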
== See also ==
Central moment
List of moments of inertia
Moment of inertia factor
Planar lamina
Rotational energy
== References ==
== External links ==
Angular momentum and rigid-body rotation in two and three dimensions
Lecture notes on rigid-body rotation and moments of inertia
The moment of inertia tensor
An introductory lesson on moment of inertia: keeping a vertical pole not falling down (Java simulation)
Tutorial on finding moments of inertia, with problems and solutions on various basic shapes
Notes on mechanics of manipulation: the angular inertia tensor
Easy to use and Free Moment of Inertia Calculator online | Wikipedia/Principal_axis_(mechanics) |
In numerical analysis, a numerical method is a mathematical tool designed to solve numerical problems. The implementation of a numerical method with an appropriate convergence check in a programming language is called a numerical algorithm.
== Mathematical definition ==
Let {\displaystyle F(x,y)=0} be a well-posed problem, i.e. {\displaystyle F:X\times Y\rightarrow \mathbb {R} } is a real or complex functional relationship, defined on the Cartesian product of an input data set {\displaystyle X} and an output data set {\displaystyle Y}, such that there exists a locally Lipschitz function {\displaystyle g:X\rightarrow Y}, called the resolvent, which has the property that for every root {\displaystyle (x,y)} of {\displaystyle F}, {\displaystyle y=g(x)}
. We define a numerical method for the approximation of {\displaystyle F(x,y)=0} as the sequence of problems
{\displaystyle \left\{M_{n}\right\}_{n\in \mathbb {N} }=\left\{F_{n}(x_{n},y_{n})=0\right\}_{n\in \mathbb {N} },}
with {\displaystyle F_{n}:X_{n}\times Y_{n}\rightarrow \mathbb {R} }, {\displaystyle x_{n}\in X_{n}} and {\displaystyle y_{n}\in Y_{n}} for every {\displaystyle n\in \mathbb {N} }
. The problems of which the method consists need not be well-posed. If they are, the method is said to be stable or well-posed.
== Consistency ==
Necessary conditions for a numerical method to effectively approximate {\displaystyle F(x,y)=0} are that {\displaystyle x_{n}\rightarrow x} and that {\displaystyle F_{n}} behaves like {\displaystyle F} when {\displaystyle n\rightarrow \infty }. So, a numerical method is called consistent if and only if the sequence of functions {\displaystyle \left\{F_{n}\right\}_{n\in \mathbb {N} }} pointwise converges to {\displaystyle F} on the set {\displaystyle S} of its solutions:
{\displaystyle \lim F_{n}(x,y+t)=F(x,y,t)=0,\quad \quad \forall (x,y,t)\in S.}
When {\displaystyle F_{n}=F,\forall n\in \mathbb {N} } on {\displaystyle S}, the method is said to be strictly consistent.
== Convergence ==
Denote by {\displaystyle \ell _{n}} a sequence of admissible perturbations of {\displaystyle x\in X} for some numerical method {\displaystyle M} (i.e. {\displaystyle x+\ell _{n}\in X_{n}\ \forall n\in \mathbb {N} }) and by {\displaystyle y_{n}(x+\ell _{n})\in Y_{n}} the value such that {\displaystyle F_{n}(x+\ell _{n},y_{n}(x+\ell _{n}))=0}
. A condition which the method has to satisfy to be a meaningful tool for solving the problem {\displaystyle F(x,y)=0} is convergence:
{\displaystyle {\begin{aligned}&\forall \varepsilon >0,\exists n_{0}(\varepsilon )>0,\exists \delta _{\varepsilon ,n_{0}}{\text{ such that}}\\&\forall n>n_{0},\forall \ell _{n}:\|\ell _{n}\|<\delta _{\varepsilon ,n_{0}}\Rightarrow \|y_{n}(x+\ell _{n})-y\|\leq \varepsilon .\end{aligned}}}
One can easily prove that the pointwise convergence of {\displaystyle \{y_{n}\}_{n\in \mathbb {N} }} to {\displaystyle y} implies the convergence of the associated method.
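As a concrete (hypothetical) illustration of these definitions, take F(x, y) = y² − x = 0 with resolvent g(x) = √x, and let the n-th problem be solved by n steps of Heron's iteration; the approximations y_n then converge to y = g(x):

```python
def heron(x, n):
    """y_n for the n-th problem of a numerical method approximating
    F(x, y) = y**2 - x = 0: n steps of Heron's (Babylonian) iteration,
    so y_n -> g(x) = sqrt(x) as n -> infinity."""
    y = max(x, 1.0)        # crude starting guess
    for _ in range(n):
        y = 0.5 * (y + x / y)
    return y
```

The error |y_n − √x| shrinks with every step, which is the convergence property above in miniature.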
== See also ==
Numerical methods for ordinary differential equations
Numerical methods for partial differential equations
== References == | Wikipedia/Numerical_method |
In mathematics, an integral is the continuous analog of a sum, which is used to calculate areas, volumes, and their generalizations. Integration, the process of computing an integral, is one of the two fundamental operations of calculus, the other being differentiation. Integration was initially used to solve problems in mathematics and physics, such as finding the area under a curve, or determining displacement from velocity. Usage of integration expanded to a wide variety of scientific fields thereafter.
A definite integral computes the signed area of the region in the plane that is bounded by the graph of a given function between two points in the real line. Conventionally, areas above the horizontal axis of the plane are positive while areas below are negative. Integrals also refer to the concept of an antiderivative, a function whose derivative is the given function; in this case, they are also called indefinite integrals. The fundamental theorem of calculus relates definite integration to differentiation and provides a method to compute the definite integral of a function when its antiderivative is known; differentiation and integration are inverse operations.
Although methods of calculating areas and volumes dated from ancient Greek mathematics, the principles of integration were formulated independently by Isaac Newton and Gottfried Wilhelm Leibniz in the late 17th century, who thought of the area under a curve as an infinite sum of rectangles of infinitesimal width. Bernhard Riemann later gave a rigorous definition of integrals, which is based on a limiting procedure that approximates the area of a curvilinear region by breaking the region into infinitesimally thin vertical slabs. In the early 20th century, Henri Lebesgue generalized Riemann's formulation by introducing what is now referred to as the Lebesgue integral; it is more general than Riemann's in the sense that a wider class of functions are Lebesgue-integrable.
Integrals may be generalized depending on the type of the function as well as the domain over which the integration is performed. For example, a line integral is defined for functions of two or more variables, and the interval of integration is replaced by a curve connecting two points in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space.
== History ==
=== Pre-calculus integration ===
The first documented systematic technique capable of determining integrals is the method of exhaustion of the ancient Greek astronomer Eudoxus and philosopher Democritus (ca. 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of divisions for which the area or volume was known. This method was further developed and employed by Archimedes in the 3rd century BC and used to calculate the area of a circle, the surface area and volume of a sphere, area of an ellipse, the area under a parabola, the volume of a segment of a paraboloid of revolution, the volume of a segment of a hyperboloid of revolution, and the area of a spiral.
A similar method was independently developed in China around the 3rd century AD by Liu Hui, who used it to find the area of the circle. This method was later used in the 5th century by Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere.
In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 AD) derived a formula for the sum of fourth powers. Alhazen determined the equations to calculate the area enclosed by the curve represented by
{\displaystyle y=x^{k}} (which translates to the integral {\displaystyle \int x^{k}\,dx} in contemporary notation), for any given non-negative integer value of {\displaystyle k}. He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid.
The next significant advances in integral calculus did not begin to appear until the 17th century. At this time, the work of Cavalieri with his method of indivisibles, and work by Fermat, began to lay the foundations of modern calculus, with Cavalieri computing the integrals of xn up to degree n = 9 in Cavalieri's quadrature formula. The case n = −1 required the invention of a function, the hyperbolic logarithm, achieved by quadrature of the hyperbola in 1647.
Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers and fractional powers.
=== Leibniz and Newton ===
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Leibniz and Newton. The theorem demonstrates a connection between integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Leibniz and Newton developed. Given the name infinitesimal calculus, it allowed for precise analysis of functions with continuous domains. This framework eventually became modern calculus, whose notation for integrals is drawn directly from the work of Leibniz.
=== Formalization ===
While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigour. Bishop Berkeley memorably attacked the vanishing increments used by Newton, calling them "ghosts of departed quantities". Calculus acquired a firmer footing with the development of limits. Integration was first rigorously formalized, using limits, by Riemann. Although all bounded piecewise continuous functions are Riemann-integrable on a bounded interval, subsequently more general functions were considered—particularly in the context of Fourier analysis—to which Riemann's definition does not apply, and Lebesgue formulated a different definition of integral, founded in measure theory (a subfield of real analysis). Other definitions of integral, extending Riemann's and Lebesgue's approaches, were proposed. These approaches based on the real number system are the ones most common today, but alternative approaches exist, such as a definition of integral as the standard part of an infinite Riemann sum, based on the hyperreal number system.
=== Historical notation ===
The notation for the indefinite integral was introduced by Gottfried Wilhelm Leibniz in 1675. He adapted the integral symbol, ∫, from the letter ſ (long s), standing for summa (written as ſumma; Latin for "sum" or "total"). The modern notation for the definite integral, with limits above and below the integral sign, was first used by Joseph Fourier in Mémoires of the French Academy around 1819–1820, reprinted in his book of 1822.
Isaac Newton used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with ẋ or x′, which are used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted.
=== First use of the term ===
The term was first printed in Latin by Jacob Bernoulli in 1690: "Ergo et horum Integralia aequantur".
== Terminology and notation ==
In general, the integral of a real-valued function f(x) with respect to a real variable x on an interval [a, b] is written as
{\displaystyle \int _{a}^{b}f(x)\,dx.}
The integral sign ∫ represents integration. The symbol dx, called the differential of the variable x, indicates that the variable of integration is x. The function f(x) is called the integrand, the points a and b are called the limits (or bounds) of integration, and the integral is said to be over the interval [a, b], called the interval of integration.
A function is said to be integrable if its integral over its domain is finite. If limits are specified, the integral is called a definite integral.
When the limits are omitted, as in
{\displaystyle \int f(x)\,dx,}
the integral is called an indefinite integral, which represents a class of functions (the antiderivative) whose derivative is the integrand. The fundamental theorem of calculus relates the evaluation of definite integrals to indefinite integrals. There are several extensions of the notation for integrals to encompass integration on unbounded domains and/or in multiple dimensions (see later sections of this article).
In advanced settings, it is not uncommon to leave out dx when only the simple Riemann integral is being used, or the exact type of integral is immaterial. For instance, one might write
{\textstyle \int _{a}^{b}(c_{1}f+c_{2}g)=c_{1}\int _{a}^{b}f+c_{2}\int _{a}^{b}g}
to express the linearity of the integral, a property shared by the Riemann integral and all generalizations thereof.
== Interpretations ==
Integrals appear in many practical situations. For instance, from the length, width and depth of a swimming pool which is rectangular with a flat bottom, one can determine the volume of water it can contain, the area of its surface, and the length of its edge. But if it is oval with a rounded bottom, integrals are required to find exact and rigorous values for these quantities. In each case, one may divide the sought quantity into infinitely many infinitesimal pieces, then sum the pieces to achieve an accurate approximation.
As another example, to find the area of the region bounded by the graph of the function f(x) = {\textstyle {\sqrt {x}}} between x = 0 and x = 1, one can divide the interval into five pieces (0, 1/5, 2/5, ..., 1), then construct rectangles using the right end height of each piece (thus √0, √1/5, √2/5, ..., √1) and sum their areas to get the approximation
{\displaystyle \textstyle {\sqrt {\frac {1}{5}}}\left({\frac {1}{5}}-0\right)+{\sqrt {\frac {2}{5}}}\left({\frac {2}{5}}-{\frac {1}{5}}\right)+\cdots +{\sqrt {\frac {5}{5}}}\left({\frac {5}{5}}-{\frac {4}{5}}\right)\approx 0.7497,}
which is larger than the exact value. Alternatively, when replacing these subintervals by ones with the left end height of each piece, the approximation one gets is too low: with twelve such subintervals the approximated area is only 0.6203. However, when the number of pieces increases to infinity, it will reach a limit which is the exact value of the area sought (in this case, 2/3). One writes
{\displaystyle \int _{0}^{1}{\sqrt {x}}\,dx={\frac {2}{3}},}
which means 2/3 is the result of a weighted sum of function values, √x, multiplied by the infinitesimal step widths, denoted by dx, on the interval [0, 1].
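The two approximations in this example are easy to reproduce — a minimal sketch; the helper is ours:

```python
from math import sqrt

def riemann_sum(f, a, b, n, rule="right"):
    """Approximate the integral of f over [a, b] with n equal
    subintervals, sampling f at the right or left endpoint of each."""
    h = (b - a) / n
    if rule == "right":
        return h * sum(f(a + i * h) for i in range(1, n + 1))
    return h * sum(f(a + i * h) for i in range(n))
```

With five right-endpoint pieces this gives ≈ 0.7497 and with twelve left-endpoint pieces ≈ 0.6203, bracketing the exact value 2/3; increasing n drives both toward 2/3.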
== Formal definitions ==
There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other definitions, but are also occasionally for pedagogical reasons. The most commonly used definitions are Riemann integrals and Lebesgue integrals.
=== Riemann integral ===
The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. A tagged partition of a closed interval [a, b] on the real line is a finite sequence
{\displaystyle a=x_{0}\leq t_{1}\leq x_{1}\leq t_{2}\leq x_{2}\leq \cdots \leq x_{n-1}\leq t_{n}\leq x_{n}=b.}
This partitions the interval [a, b] into n sub-intervals [xi−1, xi] indexed by i, each of which is "tagged" with a specific point ti ∈ [xi−1, xi]. A Riemann sum of a function f with respect to such a tagged partition is defined as
{\displaystyle \sum _{i=1}^{n}f(t_{i})\,\Delta _{i};}
thus each term of the sum is the area of a rectangle with height equal to the function value at the chosen point of the given sub-interval, and width the same as the width of the sub-interval, Δi = xi−xi−1. The mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, maxi=1...n Δi. The Riemann integral of a function f over the interval [a, b] is equal to S if:
For all {\displaystyle \varepsilon >0} there exists {\displaystyle \delta >0} such that, for any tagged partition of {\displaystyle [a,b]} with mesh less than {\displaystyle \delta },
{\displaystyle \left|S-\sum _{i=1}^{n}f(t_{i})\,\Delta _{i}\right|<\varepsilon .}
When the chosen tags are the maximum (respectively, minimum) value of the function in each interval, the Riemann sum becomes an upper (respectively, lower) Darboux sum, suggesting the close connection between the Riemann integral and the Darboux integral.
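For a monotone increasing function the supremum and infimum on each subinterval sit at its endpoints, so the upper and lower Darboux sums are easy to sketch (helper name ours):

```python
def darboux_sums(f, a, b, n):
    """Upper and lower Darboux sums for a monotone increasing f on
    [a, b]: the infimum on each subinterval is the left endpoint value,
    the supremum the right endpoint value."""
    h = (b - a) / n
    lower = h * sum(f(a + i * h) for i in range(n))
    upper = h * sum(f(a + i * h) for i in range(1, n + 1))
    return lower, upper
```

For f(x) = x² on [0, 1] the two sums bracket the exact value 1/3 and differ by h·(f(b) − f(a)), which shrinks as n grows.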
=== Lebesgue integral ===
It is often of interest, both in theory and applications, to be able to pass to the limit under the integral. For instance, a sequence of functions can frequently be constructed that approximate, in a suitable sense, the solution to a problem. Then the integral of the solution function should be the limit of the integrals of the approximations. However, many functions that can be obtained as limits are not Riemann-integrable, and so such limit theorems do not hold with the Riemann integral. Therefore, it is of great importance to have a definition of the integral that allows a wider class of functions to be integrated.
Such an integral is the Lebesgue integral, that exploits the following fact to enlarge the class of integrable functions: if the values of a function are rearranged over the domain, the integral of a function should remain the same. Thus Henri Lebesgue introduced the integral bearing his name, explaining this integral thus in a letter to Paul Montel:
I have to pay a certain sum, which I have collected in my pocket. I take the bills and coins out of my pocket and give them to the creditor in the order I find them until I have reached the total sum. This is the Riemann integral. But I can proceed differently. After I have taken all the money out of my pocket I order the bills and coins according to identical values and then I pay the several heaps one after the other to the creditor. This is my integral.
As Folland puts it, "To compute the Riemann integral of f, one partitions the domain [a, b] into subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of f ". The definition of the Lebesgue integral thus begins with a measure, μ. In the simplest case, the Lebesgue measure μ(A) of an interval A = [a, b] is its width, b − a, so that the Lebesgue integral agrees with the (proper) Riemann integral when both exist. In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals.
Using the "partitioning the range of f " philosophy, the integral of a non-negative function f : R → R should be the sum over t of the areas between a thin horizontal strip between y = t and y = t + dt. This area is just μ{ x : f(x) > t} dt. Let f∗(t) = μ{ x : f(x) > t }. The Lebesgue integral of f is then defined by
{\displaystyle \int f=\int _{0}^{\infty }f^{*}(t)\,dt}
where the integral on the right is an ordinary improper Riemann integral (f∗ is a strictly decreasing positive function, and therefore has a well-defined improper Riemann integral). For a suitable class of functions (the measurable functions) this defines the Lebesgue integral.
A general measurable function f is Lebesgue-integrable if the sum of the absolute values of the areas of the regions between the graph of f and the x-axis is finite:
{\displaystyle \int _{E}|f|\,d\mu <+\infty .}
In that case, the integral is, as in the Riemannian case, the difference between the area above the x-axis and the area below the x-axis:
{\displaystyle \int _{E}f\,d\mu =\int _{E}f^{+}\,d\mu -\int _{E}f^{-}\,d\mu }
where
{\displaystyle {\begin{alignedat}{3}&f^{+}(x)&&{}={}\max\{f(x),0\}&&{}={}{\begin{cases}f(x),&{\text{if }}f(x)>0,\\0,&{\text{otherwise,}}\end{cases}}\\&f^{-}(x)&&{}={}\max\{-f(x),0\}&&{}={}{\begin{cases}-f(x),&{\text{if }}f(x)<0,\\0,&{\text{otherwise.}}\end{cases}}\end{alignedat}}}
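The "partitioning the range" picture can be sketched numerically: approximate f*(t) = μ{x : f(x) > t} on a grid and integrate it over t. This is a rough midpoint discretization, with names of our choosing, and it assumes f is non-negative and bounded by t_max:

```python
def layer_cake_integral(f, n_x=1000, n_t=1000, t_max=1.0):
    """Approximate the Lebesgue integral of a non-negative f on [0, 1]
    via the layer-cake formula: integrate the level-set measure
    f*(t) = mu{x : f(x) > t} over t in [0, t_max]."""
    dx = 1.0 / n_x
    dt = t_max / n_t
    xs = [(i + 0.5) * dx for i in range(n_x)]
    total = 0.0
    for j in range(n_t):
        t = (j + 0.5) * dt
        mu = dx * sum(1 for x in xs if f(x) > t)  # measure of {f > t}
        total += mu * dt
    return total
```

For f(x) = x the level set {f > t} has measure 1 − t, so the result approaches ∫₀¹ (1 − t) dt = 1/2 as the grids are refined.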
=== Other integrals ===
Although the Riemann and Lebesgue integrals are the most widely used definitions of the integral, a number of others exist, including:
The Darboux integral, which is defined by Darboux sums (restricted Riemann sums) yet is equivalent to the Riemann integral. A function is Darboux-integrable if and only if it is Riemann-integrable. Darboux integrals have the advantage of being easier to define than Riemann integrals.
The Riemann–Stieltjes integral, an extension of the Riemann integral which integrates with respect to a function as opposed to a variable.
The Lebesgue–Stieltjes integral, further developed by Johann Radon, which generalizes both the Riemann–Stieltjes and Lebesgue integrals.
The Daniell integral, which subsumes the Lebesgue integral and Lebesgue–Stieltjes integral without depending on measures.
The Haar integral, used for integration on locally compact topological groups, introduced by Alfréd Haar in 1933.
The Henstock–Kurzweil integral, variously defined by Arnaud Denjoy, Oskar Perron, and (most elegantly, as the gauge integral) Jaroslav Kurzweil, and developed by Ralph Henstock.
The Khinchin integral, named after Aleksandr Khinchin.
The Itô integral and Stratonovich integral, which define integration with respect to semimartingales such as Brownian motion.
The Young integral, which is a kind of Riemann–Stieltjes integral with respect to certain functions of unbounded variation.
The rough path integral, which is defined for functions equipped with some additional "rough path" structure and generalizes stochastic integration against both semimartingales and processes such as the fractional Brownian motion.
The Choquet integral, a subadditive or superadditive integral created by the French mathematician Gustave Choquet in 1953.
The Bochner integral, a generalization of the Lebesgue integral to functions that take values in a Banach space.
== Properties ==
=== Linearity ===
The collection of Riemann-integrable functions on a closed interval [a, b] forms a vector space under the operations of pointwise addition and multiplication by a scalar, and the operation of integration
{\displaystyle f\mapsto \int _{a}^{b}f(x)\;dx}
is a linear functional on this vector space. Thus, the collection of integrable functions is closed under taking linear combinations, and the integral of a linear combination is the linear combination of the integrals:
{\displaystyle \int _{a}^{b}(\alpha f+\beta g)(x)\,dx=\alpha \int _{a}^{b}f(x)\,dx+\beta \int _{a}^{b}g(x)\,dx.}
Similarly, the set of real-valued Lebesgue-integrable functions on a given measure space E with measure μ is closed under taking linear combinations and hence forms a vector space, and the Lebesgue integral
{\displaystyle f\mapsto \int _{E}f\,d\mu }
is a linear functional on this vector space, so that:
{\displaystyle \int _{E}(\alpha f+\beta g)\,d\mu =\alpha \int _{E}f\,d\mu +\beta \int _{E}g\,d\mu .}
More generally, consider the vector space of all measurable functions on a measure space (E,μ), taking values in a locally compact complete topological vector space V over a locally compact topological field K, f : E → V. Then one may define an abstract integration map assigning to each function f an element of V or the symbol ∞,
{\displaystyle f\mapsto \int _{E}f\,d\mu ,}
that is compatible with linear combinations. In this situation, the linearity holds for the subspace of functions whose integral is an element of V (i.e. "finite"). The most important special cases arise when K is R, C, or a finite extension of the field Qp of p-adic numbers, and V is a finite-dimensional vector space over K, and when K = C and V is a complex Hilbert space.
Linearity, together with some natural continuity properties and normalization for a certain class of "simple" functions, may be used to give an alternative definition of the integral. This is the approach of Daniell for the case of real-valued functions on a set X, generalized by Nicolas Bourbaki to functions with values in a locally compact topological vector space. See Hildebrandt 1953 for an axiomatic characterization of the integral.
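As a quick numerical sanity check of this linearity (a sketch only; the grid, functions, and coefficients below are arbitrary choices, and the `trapezoid` helper is hand-rolled rather than any standard API):

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal approximation of the integral of sampled values y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

x = np.linspace(0.0, 1.0, 10_001)
f, g = np.sin(x), np.exp(x)
alpha, beta = 2.0, -3.0

lhs = trapezoid(alpha * f + beta * g, x)                # ∫(αf + βg)
rhs = alpha * trapezoid(f, x) + beta * trapezoid(g, x)  # α∫f + β∫g
# The discrete approximation is itself a linear functional, so the two
# agree up to floating-point rounding.
```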
=== Inequalities ===
A number of general inequalities hold for Riemann-integrable functions defined on a closed and bounded interval [a, b] and can be generalized to other notions of integral (Lebesgue and Daniell).
Upper and lower bounds. An integrable function f on [a, b] is necessarily bounded on that interval. Thus there are real numbers m and M so that m ≤ f (x) ≤ M for all x in [a, b]. Since the lower and upper sums of f over [a, b] are therefore bounded by, respectively, m(b − a) and M(b − a), it follows that
{\displaystyle m(b-a)\leq \int _{a}^{b}f(x)\,dx\leq M(b-a).}
Inequalities between functions. If f(x) ≤ g(x) for each x in [a, b] then each of the upper and lower sums of f is bounded above by the upper and lower sums, respectively, of g. Thus
{\displaystyle \int _{a}^{b}f(x)\,dx\leq \int _{a}^{b}g(x)\,dx.}
This is a generalization of the above inequalities, as M(b − a) is the integral of the constant function with value M over [a, b]. In addition, if the inequality between functions is strict, then the inequality between integrals is also strict. That is, if f(x) < g(x) for each x in [a, b], then
{\displaystyle \int _{a}^{b}f(x)\,dx<\int _{a}^{b}g(x)\,dx.}
Subintervals. If [c, d] is a subinterval of [a, b] and f (x) is non-negative for all x, then
{\displaystyle \int _{c}^{d}f(x)\,dx\leq \int _{a}^{b}f(x)\,dx.}
Products and absolute values of functions. If f and g are two functions, then we may consider their pointwise products and powers, and absolute values:
{\displaystyle (fg)(x)=f(x)g(x),\;f^{2}(x)=(f(x))^{2},\;|f|(x)=|f(x)|.}
If f is Riemann-integrable on [a, b] then the same is true for |f|, and
{\displaystyle \left|\int _{a}^{b}f(x)\,dx\right|\leq \int _{a}^{b}|f(x)|\,dx.}
Moreover, if f and g are both Riemann-integrable then fg is also Riemann-integrable, and
{\displaystyle \left(\int _{a}^{b}(fg)(x)\,dx\right)^{2}\leq \left(\int _{a}^{b}f(x)^{2}\,dx\right)\left(\int _{a}^{b}g(x)^{2}\,dx\right).}
This inequality, known as the Cauchy–Schwarz inequality, plays a prominent role in Hilbert space theory, where the left hand side is interpreted as the inner product of two square-integrable functions f and g on the interval [a, b].
Hölder's inequality. Suppose that p and q are two real numbers, 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1, and f and g are two Riemann-integrable functions. Then the functions |f|p and |g|q are also integrable and the following Hölder's inequality holds:
{\displaystyle \left|\int f(x)g(x)\,dx\right|\leq \left(\int \left|f(x)\right|^{p}\,dx\right)^{1/p}\left(\int \left|g(x)\right|^{q}\,dx\right)^{1/q}.}
For p = q = 2, Hölder's inequality becomes the Cauchy–Schwarz inequality.
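A numerical spot-check (not a proof) of Hölder's inequality for one arbitrary choice of f, g, and p; the discrete trapezoidal sums satisfy a weighted Hölder inequality of their own, so the inequality holds up to rounding:

```python
import numpy as np

def trapezoid(y, x):
    # hand-rolled trapezoidal rule over grid x
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

x = np.linspace(0.0, 1.0, 20_001)
f = np.sin(3 * x) + 0.5
g = np.cos(2 * x) + 1.0
p = 3.0
q = p / (p - 1.0)  # conjugate exponent: 1/p + 1/q = 1

lhs = abs(trapezoid(f * g, x))
rhs = trapezoid(np.abs(f) ** p, x) ** (1 / p) * trapezoid(np.abs(g) ** q, x) ** (1 / q)
# lhs <= rhs, as Hölder's inequality requires
```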
Minkowski inequality. Suppose that p ≥ 1 is a real number and f and g are Riemann-integrable functions. Then | f |p, | g |p and | f + g |p are also Riemann-integrable and the following Minkowski inequality holds:
{\displaystyle \left(\int \left|f(x)+g(x)\right|^{p}\,dx\right)^{1/p}\leq \left(\int \left|f(x)\right|^{p}\,dx\right)^{1/p}+\left(\int \left|g(x)\right|^{p}\,dx\right)^{1/p}.}
An analogue of this inequality for Lebesgue integral is used in construction of Lp spaces.
=== Conventions ===
In this section, f is a real-valued Riemann-integrable function. The integral
{\displaystyle \int _{a}^{b}f(x)\,dx}
over an interval [a, b] is defined if a < b. This means that the upper and lower sums of the function f are evaluated on a partition a = x0 ≤ x1 ≤ ... ≤ xn = b whose values xi are increasing. Geometrically, this signifies that integration takes place "left to right", evaluating f within intervals [xi, xi+1] where an interval with a higher index lies to the right of one with a lower index. The values a and b, the end-points of the interval, are called the limits of integration of f. Integrals can also be defined if a > b:
{\displaystyle \int _{a}^{b}f(x)\,dx=-\int _{b}^{a}f(x)\,dx.}
With a = b, this implies:
{\displaystyle \int _{a}^{a}f(x)\,dx=0.}
The first convention is necessary in consideration of taking integrals over subintervals of [a, b]; the second says that an integral taken over a degenerate interval, or a point, should be zero. One reason for the first convention is that the integrability of f on an interval [a, b] implies that f is integrable on any subinterval [c, d], but in particular integrals have the property that if c is any element of [a, b], then:
{\displaystyle \int _{a}^{b}f(x)\,dx=\int _{a}^{c}f(x)\,dx+\int _{c}^{b}f(x)\,dx.}
With the first convention, the resulting relation
{\displaystyle {\begin{aligned}\int _{a}^{c}f(x)\,dx&{}=\int _{a}^{b}f(x)\,dx-\int _{c}^{b}f(x)\,dx\\&{}=\int _{a}^{b}f(x)\,dx+\int _{b}^{c}f(x)\,dx\end{aligned}}}
is then well-defined for any cyclic permutation of a, b, and c.
== Fundamental theorem of calculus ==
The fundamental theorem of calculus is the statement that differentiation and integration are inverse operations: if a continuous function is first integrated and then differentiated, the original function is retrieved. An important consequence, sometimes called the second fundamental theorem of calculus, allows one to compute integrals by using an antiderivative of the function to be integrated.
=== First theorem ===
Let f be a continuous real-valued function defined on a closed interval [a, b]. Let F be the function defined, for all x in [a, b], by
{\displaystyle F(x)=\int _{a}^{x}f(t)\,dt.}
Then, F is continuous on [a, b], differentiable on the open interval (a, b), and
{\displaystyle F'(x)=f(x)}
for all x in (a, b).
=== Second theorem ===
Let f be a real-valued function defined on a closed interval [a, b] that admits an antiderivative F on [a, b]. That is, f and F are functions such that for all x in [a, b],
{\displaystyle f(x)=F'(x).}
If f is integrable on [a, b] then
{\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a).}
== Extensions ==
=== Improper integrals ===
A "proper" Riemann integral assumes the integrand is defined and finite on a closed and bounded interval, bracketed by the limits of integration. An improper integral occurs when one or more of these conditions is not satisfied. In some cases such integrals may be defined by considering the limit of a sequence of proper Riemann integrals on progressively larger intervals.
If the interval is unbounded, for instance at its upper end, then the improper integral is the limit as that endpoint goes to infinity:
{\displaystyle \int _{a}^{\infty }f(x)\,dx=\lim _{b\to \infty }\int _{a}^{b}f(x)\,dx.}
If the integrand is only defined or finite on a half-open interval, for instance (a, b], then again a limit may provide a finite result:
{\displaystyle \int _{a}^{b}f(x)\,dx=\lim _{\varepsilon \to 0}\int _{a+\varepsilon }^{b}f(x)\,dx.}
That is, the improper integral is the limit of proper integrals as one endpoint of the interval of integration approaches either a specified real number, or ∞, or −∞. In more complicated cases, limits are required at both endpoints, or at interior points.
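For instance, ∫₁^∞ x⁻² dx = 1 can be approached through proper integrals on growing intervals (a sketch; `proper_integral` is a hand-rolled trapezoidal helper, not a library routine):

```python
import numpy as np

def proper_integral(f, a, b, n=100_000):
    """Trapezoidal approximation of a proper Riemann integral on [a, b]."""
    x = np.linspace(a, b, n)
    y = f(x)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# lim_{b→∞} ∫_1^b x^{-2} dx = 1; the truncation error is the tail 1/b.
for b in (10.0, 100.0, 1000.0):
    print(b, proper_integral(lambda x: x**-2, 1.0, b))
```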
=== Multiple integration ===
Just as the definite integral of a positive function of one variable represents the area of the region between the graph of the function and the x-axis, the double integral of a positive function of two variables represents the volume of the region between the surface defined by the function and the plane that contains its domain. For example, a function in two dimensions depends on two real variables, x and y, and the integral of a function f over the rectangle R given as the Cartesian product of two intervals
{\displaystyle R=[a,b]\times [c,d]}
can be written
{\displaystyle \int _{R}f(x,y)\,dA}
where the differential dA indicates that integration is taken with respect to area. This double integral can be defined using Riemann sums, and represents the (signed) volume under the graph of z = f(x,y) over the domain R. Under suitable conditions (e.g., if f is continuous), Fubini's theorem states that this integral can be expressed as an equivalent iterated integral
{\displaystyle \int _{a}^{b}\left[\int _{c}^{d}f(x,y)\,dy\right]\,dx.}
This reduces the problem of computing a double integral to computing one-dimensional integrals. Because of this, another notation for the integral over R uses a double integral sign:
{\displaystyle \iint _{R}f(x,y)\,dA.}
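Fubini's theorem can be illustrated numerically: integrating f(x, y) = xy over [0, 1] × [0, 2] in either order gives the same value, ∫₀¹ x dx · ∫₀² y dy = 1 (a sketch with a hand-rolled trapezoidal helper):

```python
import numpy as np

def trapezoid(y, x):
    # trapezoidal rule over grid x; accepts lists or arrays
    y = np.asarray(y)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

f = lambda x, y: x * y
x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 2.0, 201)

# Inner integral in y, outer in x — and the reverse order.
dy_then_dx = trapezoid([trapezoid(f(xi, y), y) for xi in x], x)
dx_then_dy = trapezoid([trapezoid(f(x, yj), x) for yj in y], y)
# Both equal ∫₀¹ x dx · ∫₀² y dy = 0.5 · 2 = 1 (exactly, since f is bilinear).
```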
Integration over more general domains is possible. The integral of a function f, with respect to volume, over an n-dimensional region D of
{\displaystyle \mathbb {R} ^{n}}
is denoted by symbols such as:
{\displaystyle \int _{D}f(\mathbf {x} )\,d^{n}\mathbf {x} =\int _{D}f\,dV.}
=== Line integrals and surface integrals ===
The concept of an integral can be extended to more general domains of integration, such as curved lines and surfaces inside higher-dimensional spaces. Such integrals are known as line integrals and surface integrals respectively. These have important applications in physics, as when dealing with vector fields.
A line integral (sometimes called a path integral) is an integral where the function to be integrated is evaluated along a curve. Various different line integrals are in use. In the case of a closed curve it is also called a contour integral.
The function to be integrated may be a scalar field or a vector field. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on the curve (commonly arc length or, for a vector field, the scalar product of the vector field with a differential vector in the curve). This weighting distinguishes the line integral from simpler integrals defined on intervals. Many simple formulas in physics have natural continuous analogs in terms of line integrals; for example, the fact that work is equal to force, F, multiplied by displacement, s, may be expressed (in terms of vector quantities) as:
{\displaystyle W=\mathbf {F} \cdot \mathbf {s} .}
For an object moving along a path C in a vector field F such as an electric field or gravitational field, the total work done by the field on the object is obtained by summing up the differential work done in moving from s to s + ds. This gives the line integral
{\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {s} .}
A surface integral generalizes double integrals to integration over a surface (which may be a curved set in space); it can be thought of as the double integral analog of the line integral. The function to be integrated may be a scalar field or a vector field. The value of the surface integral is the sum of the field at all points on the surface. This can be achieved by splitting the surface into surface elements, which provide the partitioning for Riemann sums.
For an example of applications of surface integrals, consider a vector field v on a surface S; that is, for each point x in S, v(x) is a vector. Imagine that a fluid flows through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S in unit amount of time. To find the flux, one needs to take the dot product of v with the unit surface normal to S at each point, which gives a scalar field that is integrated over the surface:
{\displaystyle \int _{S}{\mathbf {v} }\cdot \,d{\mathbf {S} }.}
The fluid flux in this example may be from a physical fluid such as water or air, or from electrical or magnetic flux. Thus surface integrals have applications in physics, particularly with the classical theory of electromagnetism.
=== Contour integrals ===
In complex analysis, the integrand is a complex-valued function of a complex variable z instead of a real function of a real variable x. When a complex function is integrated along a curve
{\displaystyle \gamma }
in the complex plane, the integral is denoted as follows
{\displaystyle \int _{\gamma }f(z)\,dz.}
This is known as a contour integral.
=== Integrals of differential forms ===
A differential form is a mathematical concept in the fields of multivariable calculus, differential topology, and tensors. Differential forms are organized by degree. For example, a one-form is a weighted sum of the differentials of the coordinates, such as:
{\displaystyle E(x,y,z)\,dx+F(x,y,z)\,dy+G(x,y,z)\,dz}
where E, F, G are functions in three dimensions. A differential one-form can be integrated over an oriented path, and the resulting integral is just another way of writing a line integral. Here the basic differentials dx, dy, dz measure infinitesimal oriented lengths parallel to the three coordinate axes.
A differential two-form is a sum of the form
{\displaystyle G(x,y,z)\,dx\wedge dy+E(x,y,z)\,dy\wedge dz+F(x,y,z)\,dz\wedge dx.}
Here the basic two-forms
{\displaystyle dx\wedge dy,dz\wedge dx,dy\wedge dz}
measure oriented areas parallel to the coordinate two-planes. The symbol
{\displaystyle \wedge }
denotes the wedge product, which is similar to the cross product in the sense that the wedge product of two forms representing oriented lengths represents an oriented area. A two-form can be integrated over an oriented surface, and the resulting integral is equivalent to the surface integral giving the flux of
{\displaystyle E\mathbf {i} +F\mathbf {j} +G\mathbf {k} }.
Unlike the cross product and three-dimensional vector calculus, the wedge product and the calculus of differential forms make sense in arbitrary dimension and on more general manifolds (curves, surfaces, and their higher-dimensional analogs). The exterior derivative plays the role of the gradient and curl of vector calculus, and Stokes' theorem simultaneously generalizes the three theorems of vector calculus: the divergence theorem, Green's theorem, and the Kelvin-Stokes theorem.
=== Summations ===
The discrete equivalent of integration is summation. Summations and integrals can be put on the same foundations using the theory of Lebesgue integrals or time-scale calculus.
=== Functional integrals ===
An integration that is performed not over a variable (or, in physics, over a space or time dimension), but over a space of functions, is referred to as a functional integral.
== Applications ==
Integrals are used extensively in many areas. For example, in probability theory, integrals are used to determine the probability of some random variable falling within a certain range. Moreover, the integral under an entire probability density function must equal 1, which provides a test of whether a function with no negative values could be a density function or not.
Integrals can be used for computing the area of a two-dimensional region that has a curved boundary, as well as computing the volume of a three-dimensional object that has a curved boundary. The area of a two-dimensional region can be calculated using the aforementioned definite integral. The volume of a three-dimensional object such as a disc or washer can be computed by disc integration using the equation for the volume of a cylinder,
{\displaystyle \pi r^{2}h}, where {\displaystyle r}
is the radius. In the case of a simple disc created by rotating a curve about the x-axis, the radius is given by f(x), and its height is the differential dx. Using an integral with bounds a and b, the volume of the disc is equal to:
{\displaystyle \pi \int _{a}^{b}f^{2}(x)\,dx.}
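For example, rotating f(x) = √(r² − x²) about the x-axis sweeps out a sphere, and the disc-integration formula recovers its volume 4πr³/3 (a numerical sketch):

```python
import numpy as np

r = 2.0
x = np.linspace(-r, r, 100_001)
f_squared = r**2 - x**2          # f(x)^2 for f(x) = sqrt(r^2 - x^2)

# V = π ∫_{-r}^{r} f(x)^2 dx, approximated by the trapezoidal rule
volume = np.pi * float(np.sum((f_squared[1:] + f_squared[:-1]) * np.diff(x)) / 2)
exact = 4.0 / 3.0 * np.pi * r**3
```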
Integrals are also used in physics, in areas like kinematics to find quantities like displacement, time, and velocity. For example, in rectilinear motion, the displacement of an object over the time interval
{\displaystyle [a,b]}
is given by
{\displaystyle x(b)-x(a)=\int _{a}^{b}v(t)\,dt,}
where
{\displaystyle v(t)}
is the velocity expressed as a function of time. The work done by a force
{\displaystyle F(x)}
(given as a function of position) from an initial position
{\displaystyle A}
to a final position
{\displaystyle B}
is:
{\displaystyle W_{A\rightarrow B}=\int _{A}^{B}F(x)\,dx.}
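As a concrete instance (the linear spring force below is an illustrative assumption, not from the text): for F(x) = kx the integral reproduces the closed form W = ½k(B² − A²):

```python
import numpy as np

k = 3.0           # spring constant (arbitrary choice)
A, B = 0.0, 2.0   # initial and final positions
x = np.linspace(A, B, 10_001)
force = k * x

# W = ∫_A^B F(x) dx via the trapezoidal rule (exact here, since F is linear)
work = float(np.sum((force[1:] + force[:-1]) * np.diff(x)) / 2)
closed_form = 0.5 * k * (B**2 - A**2)
```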
Integrals are also used in thermodynamics, where thermodynamic integration is used to calculate the difference in free energy between two given states.
== Computation ==
=== Analytical ===
The most basic technique for computing definite integrals of one real variable is based on the fundamental theorem of calculus. Let f(x) be the function of x to be integrated over a given interval [a, b]. Then, find an antiderivative of f; that is, a function F such that F′ = f on the interval. Provided the integrand and integral have no singularities on the path of integration, by the fundamental theorem of calculus,
{\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a).}
Sometimes it is necessary to use one of the many techniques that have been developed to evaluate integrals. Most of these techniques rewrite one integral as a different one which is hopefully more tractable. Techniques include integration by substitution, integration by parts, integration by trigonometric substitution, and integration by partial fractions.
Alternative methods exist to compute more complex integrals. Many nonelementary integrals can be expanded in a Taylor series and integrated term by term. Occasionally, the resulting infinite series can be summed analytically. The method of convolution using Meijer G-functions can also be used, assuming that the integrand can be written as a product of Meijer G-functions. There are also many less common ways of calculating definite integrals; for instance, Parseval's identity can be used to transform an integral over a rectangular region into an infinite sum. Occasionally, an integral can be evaluated by a trick; for an example of this, see Gaussian integral.
Computations of volumes of solids of revolution can usually be done with disk integration or shell integration.
Specific results which have been worked out by various techniques are collected in the list of integrals.
=== Symbolic ===
Many problems in mathematics, physics, and engineering involve integration where an explicit formula for the integral is desired. Extensive tables of integrals have been compiled and published over the years for this purpose. With the spread of computers, many professionals, educators, and students have turned to computer algebra systems that are specifically designed to perform difficult or tedious tasks, including integration. Symbolic integration has been one of the motivations for the development of the first such systems, like Macsyma and Maple.
A major mathematical difficulty in symbolic integration is that in many cases, a relatively simple function does not have an integral that can be expressed in closed form involving only elementary functions, including rational and exponential functions, logarithm, trigonometric functions and inverse trigonometric functions, and the operations of multiplication and composition. The Risch algorithm provides a general criterion to determine whether the antiderivative of an elementary function is elementary and to compute the integral if it is elementary. However, functions with closed expressions of antiderivatives are the exception, and consequently, computerized algebra systems have no hope of being able to find an antiderivative for a randomly constructed elementary function. On the positive side, if the 'building blocks' for antiderivatives are fixed in advance, it may still be possible to decide whether the antiderivative of a given function can be expressed using these blocks and operations of multiplication and composition and to find the symbolic answer whenever it exists. The Risch algorithm, implemented in Mathematica, Maple and other computer algebra systems, does just that for functions and antiderivatives built from rational functions, radicals, logarithm, and exponential functions.
Some special integrands occur often enough to warrant special study. In particular, it may be useful to have, in the set of antiderivatives, the special functions (like the Legendre functions, the hypergeometric function, the gamma function, the incomplete gamma function and so on). Extending Risch's algorithm to include such functions is possible but challenging and has been an active research subject.
More recently a new approach has emerged, using D-finite functions, which are the solutions of linear differential equations with polynomial coefficients. Most of the elementary and special functions are D-finite, and the integral of a D-finite function is also a D-finite function. This provides an algorithm to express the antiderivative of a D-finite function as the solution of a differential equation. This theory also allows one to compute the definite integral of a D-function as the sum of a series given by the first coefficients and provides an algorithm to compute any coefficient.
Rule-based integration systems facilitate integration. Rubi, a computer algebra system rule-based integrator, pattern matches an extensive system of symbolic integration rules to integrate a wide variety of integrands. This system uses over 6600 integration rules to compute integrals. The method of brackets is a generalization of Ramanujan's master theorem that can be applied to a wide range of univariate and multivariate integrals. A set of rules are applied to the coefficients and exponential terms of the integrand's power series expansion to determine the integral. The method is closely related to the Mellin transform.
=== Numerical ===
Definite integrals may be approximated using several methods of numerical integration. The rectangle method relies on dividing the region under the function into a series of rectangles corresponding to function values and multiplies by the step width to find the sum. A better approach, the trapezoidal rule, replaces the rectangles used in a Riemann sum with trapezoids. The trapezoidal rule weights the first and last values by one half, then multiplies by the step width to obtain a better approximation. The idea behind the trapezoidal rule, that more accurate approximations to the function yield better approximations to the integral, can be carried further: Simpson's rule approximates the integrand by a piecewise quadratic function.
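The three rules can be compared directly on ∫₀¹ eˣ dx = e − 1 (a sketch; with the same function values, each rule in turn is markedly more accurate):

```python
import numpy as np

f = np.exp
a, b, n = 0.0, 1.0, 10            # n subintervals; n must be even for Simpson's rule
x = np.linspace(a, b, n + 1)
h = (b - a) / n

rect = h * np.sum(f(x[:-1]))                                   # left-endpoint rectangles
trap = h * (f(x[0]) / 2 + np.sum(f(x[1:-1])) + f(x[-1]) / 2)   # trapezoidal rule
simp = h / 3 * (f(x[0]) + 4 * np.sum(f(x[1:-1:2]))
                + 2 * np.sum(f(x[2:-1:2])) + f(x[-1]))         # composite Simpson's rule

exact = np.e - 1
for name, val in (("rectangle", rect), ("trapezoid", trap), ("Simpson", simp)):
    print(f"{name:9s} error = {abs(val - exact):.2e}")
```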
Riemann sums, the trapezoidal rule, and Simpson's rule are examples of a family of quadrature rules called the Newton–Cotes formulas. The degree n Newton–Cotes quadrature rule approximates the polynomial on each subinterval by a degree n polynomial. This polynomial is chosen to interpolate the values of the function on the interval. Higher degree Newton–Cotes approximations can be more accurate, but they require more function evaluations, and they can suffer from numerical inaccuracy due to Runge's phenomenon. One solution to this problem is Clenshaw–Curtis quadrature, in which the integrand is approximated by expanding it in terms of Chebyshev polynomials.
Romberg's method halves the step widths incrementally, giving trapezoid approximations denoted by T(h0), T(h1), and so on, where hk+1 is half of hk. For each new step size, only half the new function values need to be computed; the others carry over from the previous size. It then interpolates a polynomial through the approximations and extrapolates to T(0). Gaussian quadrature evaluates the function at the roots of a set of orthogonal polynomials. An n-point Gaussian method is exact for polynomials of degree up to 2n − 1.
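The 2n − 1 exactness claim is easy to verify with NumPy's Gauss–Legendre nodes (`numpy.polynomial.legendre.leggauss`); here a 3-point rule integrates a degree-5 polynomial on [−1, 1] exactly:

```python
import numpy as np

n = 3
nodes, weights = np.polynomial.legendre.leggauss(n)   # 3-point Gauss–Legendre rule

poly = lambda t: 5 * t**5 - t**3 + 2 * t**2 + 1       # degree 5 = 2n − 1
approx = float(np.sum(weights * poly(nodes)))
exact = 4.0 / 3.0 + 2.0   # odd powers integrate to 0; ∫ 2t² dt = 4/3, ∫ 1 dt = 2
```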
The computation of higher-dimensional integrals (for example, volume calculations) makes important use of such alternatives as Monte Carlo integration.
=== Mechanical ===
The area of an arbitrary two-dimensional shape can be determined using a measuring instrument called a planimeter. The volume of irregular objects can be measured with precision by the fluid displaced as the object is submerged.
=== Geometrical ===
Area can sometimes be found via geometrical compass-and-straightedge constructions of an equivalent square.
=== Integration by differentiation ===
Kempf, Jackson and Morales demonstrated mathematical relations that allow an integral to be calculated by means of differentiation. Their calculus involves the Dirac delta function and the partial derivative operator
{\displaystyle \partial _{x}}
. This can also be applied to functional integrals, allowing them to be computed by functional differentiation.
== Examples ==
=== Using the fundamental theorem of calculus ===
The fundamental theorem of calculus allows straightforward calculations of basic functions:
{\displaystyle \int _{0}^{\pi }\sin(x)\,dx=-\cos(x){\big |}_{x=0}^{x=\pi }=-\cos(\pi )-{\big (}-\cos(0){\big )}=2.}
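The same value can be cross-checked numerically against a Riemann-sum approximation (a sketch):

```python
import numpy as np

x = np.linspace(0.0, np.pi, 100_001)
y = np.sin(x)

numeric = float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)  # trapezoidal rule
by_ftc = -np.cos(np.pi) - (-np.cos(0.0))                    # antiderivative −cos(x)
```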
== See also ==
Integral equation – Equations with an unknown function under an integral sign
Integral symbol – Mathematical symbol used to denote integrals and antiderivatives
Lists of integrals
== Notes ==
== References ==
== Bibliography ==
== External links ==
"Integral", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Online Integral Calculator, Wolfram Alpha.
=== Online books ===
Keisler, H. Jerome, Elementary Calculus: An Approach Using Infinitesimals, University of Wisconsin
Stroyan, K. D., A Brief Introduction to Infinitesimal Calculus, University of Iowa
Mauch, Sean, Sean's Applied Math Book, CIT, an online textbook that includes a complete introduction to calculus
Crowell, Benjamin, Calculus, Fullerton College, an online textbook
Garrett, Paul, Notes on First-Year Calculus
Hussain, Faraz, Understanding Calculus, an online textbook
Johnson, William Woolsey (1909) Elementary Treatise on Integral Calculus, link from HathiTrust.
Kowalk, W. P., Integration Theory, University of Oldenburg. A new concept to an old problem. Online textbook
Sloughter, Dan, Difference Equations to Differential Equations, an introduction to calculus
Numerical Methods of Integration at Holistic Numerical Methods Institute
P. S. Wang, Evaluation of Definite Integrals by Symbolic Manipulation (1972) — a cookbook of definite integral techniques
In mathematics, power iteration (also known as the power method) is an eigenvalue algorithm: given a diagonalizable matrix {\displaystyle A}, the algorithm will produce a number {\displaystyle \lambda }, which is the greatest (in absolute value) eigenvalue of {\displaystyle A}, and a nonzero vector {\displaystyle v}, which is a corresponding eigenvector of {\displaystyle \lambda }, that is, {\displaystyle Av=\lambda v}.
The algorithm is also known as the Von Mises iteration.
Power iteration is a very simple algorithm, but it may converge slowly. The most time-consuming operation of the algorithm is the multiplication of matrix {\displaystyle A} by a vector, so it is effective for a very large sparse matrix with appropriate implementation. The speed of convergence is like {\displaystyle (\lambda _{2}/\lambda _{1})^{k}} (see a later section). In words, convergence is exponential with base being the spectral gap.
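The rate can be seen empirically on a small diagonal matrix (an illustrative example, not from the text) with λ₁ = 2 and λ₂ = 1, so the error should shrink by roughly λ₂/λ₁ = 1/2 per step:

```python
import numpy as np

A = np.diag([2.0, 1.0, 0.5])            # eigenvalues 2, 1, 0.5; eigenvectors = axes
b = np.ones(3)
dominant = np.array([1.0, 0.0, 0.0])    # eigenvector for λ1 = 2

errs = []
for k in range(1, 8):
    b = A @ b
    b = b / np.linalg.norm(b)
    errs.append(np.linalg.norm(b - dominant))

ratios = [errs[i + 1] / errs[i] for i in range(len(errs) - 1)]
# the ratios approach λ2/λ1 = 0.5
```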
== The method ==
The power iteration algorithm starts with a vector {\displaystyle b_{0}}, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the recurrence relation
{\displaystyle b_{k+1}={\frac {Ab_{k}}{\|Ab_{k}\|}}}
So, at every iteration, the vector {\displaystyle b_{k}} is multiplied by the matrix {\displaystyle A} and normalized.
If we assume {\displaystyle A} has an eigenvalue that is strictly greater in magnitude than its other eigenvalues and the starting vector {\displaystyle b_{0}} has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence {\displaystyle \left(b_{k}\right)} converges to an eigenvector associated with the dominant eigenvalue.
Without the two assumptions above, the sequence {\displaystyle \left(b_{k}\right)} does not necessarily converge. In this sequence,
{\displaystyle b_{k}=e^{i\phi _{k}}v_{1}+r_{k}},
where {\displaystyle v_{1}} is an eigenvector associated with the dominant eigenvalue, and {\displaystyle \|r_{k}\|\rightarrow 0}. The presence of the term {\displaystyle e^{i\phi _{k}}} implies that {\displaystyle \left(b_{k}\right)} does not converge unless {\displaystyle e^{i\phi _{k}}=1}. Under the two assumptions listed above, the sequence {\displaystyle \left(\mu _{k}\right)} defined by
{\displaystyle \mu _{k}={\frac {b_{k}^{*}Ab_{k}}{b_{k}^{*}b_{k}}}}
converges to the dominant eigenvalue (each {\displaystyle \mu _{k}} is the Rayleigh quotient of {\displaystyle b_{k}}).
One may compute this with the following algorithm (shown in Python with NumPy):
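The code block referenced here is missing from this extract; the following is a reconstruction of the standard NumPy implementation (names such as `power_iteration` and the 2 × 2 example matrix are conventional choices, not guaranteed to match the omitted original):

```python
import numpy as np

def power_iteration(A, num_iterations: int):
    # Choose a random starting vector; with probability 1 it has a nonzero
    # component in the direction of the dominant eigenvector.
    b_k = np.random.rand(A.shape[1])

    for _ in range(num_iterations):
        # calculate the matrix-by-vector product A b_k
        b_k1 = A @ b_k
        # re-normalize the vector
        b_k = b_k1 / np.linalg.norm(b_k1)

    return b_k

v = power_iteration(np.array([[0.5, 0.5], [0.2, 0.8]]), 100)
```

For this example matrix the dominant eigenvalue is 1 with eigenvector proportional to (1, 1), so `v` converges to approximately (0.707, 0.707).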
The vector {\displaystyle b_{k}} converges to an associated eigenvector. Ideally, one should use the Rayleigh quotient in order to get the associated eigenvalue.
This algorithm is used to calculate the Google PageRank.
The method can also be used to calculate the spectral radius (the eigenvalue with the largest magnitude, for a square matrix) by computing the Rayleigh quotient
{\displaystyle \rho (A)=\max \left\{|\lambda _{1}|,\dotsc ,|\lambda _{n}|\right\}={\frac {b_{k}^{\top }Ab_{k}}{b_{k}^{\top }b_{k}}}.}
== Analysis ==
Let {\displaystyle A} be decomposed into its Jordan canonical form: {\displaystyle A=VJV^{-1}}, where the first column of {\displaystyle V} is an eigenvector of {\displaystyle A} corresponding to the dominant eigenvalue {\displaystyle \lambda _{1}}. Since generically the dominant eigenvalue of {\displaystyle A} is unique, the first Jordan block of {\displaystyle J} is the {\displaystyle 1\times 1} matrix {\displaystyle [\lambda _{1}],} where {\displaystyle \lambda _{1}} is the largest eigenvalue of A in magnitude. The starting vector {\displaystyle b_{0}} can be written as a linear combination of the columns of V:
{\displaystyle b_{0}=c_{1}v_{1}+c_{2}v_{2}+\cdots +c_{n}v_{n}.}
By assumption, {\displaystyle b_{0}} has a nonzero component in the direction of the dominant eigenvalue, so {\displaystyle c_{1}\neq 0}.
The computationally useful recurrence relation for {\displaystyle b_{k+1}} can be rewritten as:
{\displaystyle b_{k+1}={\frac {Ab_{k}}{\|Ab_{k}\|}}={\frac {A^{k+1}b_{0}}{\|A^{k+1}b_{0}\|}},}
where the expression {\displaystyle {\frac {A^{k+1}b_{0}}{\|A^{k+1}b_{0}\|}}} is more amenable to the following analysis.
{\displaystyle {\begin{aligned}b_{k}&={\frac {A^{k}b_{0}}{\|A^{k}b_{0}\|}}\\&={\frac {\left(VJV^{-1}\right)^{k}b_{0}}{\|\left(VJV^{-1}\right)^{k}b_{0}\|}}\\&={\frac {VJ^{k}V^{-1}b_{0}}{\|VJ^{k}V^{-1}b_{0}\|}}\\&={\frac {VJ^{k}V^{-1}\left(c_{1}v_{1}+c_{2}v_{2}+\cdots +c_{n}v_{n}\right)}{\|VJ^{k}V^{-1}\left(c_{1}v_{1}+c_{2}v_{2}+\cdots +c_{n}v_{n}\right)\|}}\\&={\frac {VJ^{k}\left(c_{1}e_{1}+c_{2}e_{2}+\cdots +c_{n}e_{n}\right)}{\|VJ^{k}\left(c_{1}e_{1}+c_{2}e_{2}+\cdots +c_{n}e_{n}\right)\|}}\\&=\left({\frac {\lambda _{1}}{|\lambda _{1}|}}\right)^{k}{\frac {c_{1}}{|c_{1}|}}{\frac {v_{1}+{\frac {1}{c_{1}}}V\left({\frac {1}{\lambda _{1}}}J\right)^{k}\left(c_{2}e_{2}+\cdots +c_{n}e_{n}\right)}{\left\|v_{1}+{\frac {1}{c_{1}}}V\left({\frac {1}{\lambda _{1}}}J\right)^{k}\left(c_{2}e_{2}+\cdots +c_{n}e_{n}\right)\right\|}}\end{aligned}}}
The expression above simplifies as {\displaystyle k\to \infty }:
{\displaystyle \left({\frac {1}{\lambda _{1}}}J\right)^{k}={\begin{bmatrix}[1]&&&&\\&\left({\frac {1}{\lambda _{1}}}J_{2}\right)^{k}&&&\\&&\ddots &\\&&&\left({\frac {1}{\lambda _{1}}}J_{m}\right)^{k}\\\end{bmatrix}}\rightarrow {\begin{bmatrix}1&&&&\\&0&&&\\&&\ddots &\\&&&0\\\end{bmatrix}}\quad {\text{as}}\quad k\to \infty .}
The limit follows from the fact that the eigenvalue of {\displaystyle {\frac {1}{\lambda _{1}}}J_{i}} is less than 1 in magnitude for {\displaystyle i>1}, so
{\displaystyle \left({\frac {1}{\lambda _{1}}}J_{i}\right)^{k}\to 0\quad {\text{as}}\quad k\to \infty .}
It follows that:
{\displaystyle {\frac {1}{c_{1}}}V\left({\frac {1}{\lambda _{1}}}J\right)^{k}\left(c_{2}e_{2}+\cdots +c_{n}e_{n}\right)\to 0\quad {\text{as}}\quad k\to \infty }
Using this fact, {\displaystyle b_{k}} can be written in a form that emphasizes its relationship with {\displaystyle v_{1}} when k is large:
{\displaystyle {\begin{aligned}b_{k}&=\left({\frac {\lambda _{1}}{|\lambda _{1}|}}\right)^{k}{\frac {c_{1}}{|c_{1}|}}{\frac {v_{1}+{\frac {1}{c_{1}}}V\left({\frac {1}{\lambda _{1}}}J\right)^{k}\left(c_{2}e_{2}+\cdots +c_{n}e_{n}\right)}{\left\|v_{1}+{\frac {1}{c_{1}}}V\left({\frac {1}{\lambda _{1}}}J\right)^{k}\left(c_{2}e_{2}+\cdots +c_{n}e_{n}\right)\right\|}}\\[6pt]&=e^{i\phi _{k}}{\frac {c_{1}}{|c_{1}|}}{\frac {v_{1}}{\|v_{1}\|}}+r_{k}\end{aligned}}}
where {\displaystyle e^{i\phi _{k}}=\left(\lambda _{1}/|\lambda _{1}|\right)^{k}} and {\displaystyle \|r_{k}\|\to 0} as {\displaystyle k\to \infty }.
The sequence {\displaystyle \left(b_{k}\right)} is bounded, so it contains a convergent subsequence. Note that the eigenvector corresponding to the dominant eigenvalue is only unique up to a scalar, so although the sequence {\displaystyle \left(b_{k}\right)} may not converge, {\displaystyle b_{k}} is nearly an eigenvector of A for large k.
Alternatively, if A is diagonalizable, then the following proof yields the same result.
Let λ1, λ2, ..., λm be the m eigenvalues (counted with multiplicity) of A and let v1, v2, ..., vm be the corresponding eigenvectors. Suppose that
{\displaystyle \lambda _{1}} is the dominant eigenvalue, so that {\displaystyle |\lambda _{1}|>|\lambda _{j}|} for {\displaystyle j>1}.
The initial vector {\displaystyle b_{0}} can be written:
{\displaystyle b_{0}=c_{1}v_{1}+c_{2}v_{2}+\cdots +c_{m}v_{m}.}
If {\displaystyle b_{0}} is chosen randomly (with uniform probability), then c1 ≠ 0 with probability 1. Now,
{\displaystyle {\begin{aligned}A^{k}b_{0}&=c_{1}A^{k}v_{1}+c_{2}A^{k}v_{2}+\cdots +c_{m}A^{k}v_{m}\\&=c_{1}\lambda _{1}^{k}v_{1}+c_{2}\lambda _{2}^{k}v_{2}+\cdots +c_{m}\lambda _{m}^{k}v_{m}\\&=c_{1}\lambda _{1}^{k}\left(v_{1}+{\frac {c_{2}}{c_{1}}}\left({\frac {\lambda _{2}}{\lambda _{1}}}\right)^{k}v_{2}+\cdots +{\frac {c_{m}}{c_{1}}}\left({\frac {\lambda _{m}}{\lambda _{1}}}\right)^{k}v_{m}\right)\\&\to c_{1}\lambda _{1}^{k}v_{1}&&\left|{\frac {\lambda _{j}}{\lambda _{1}}}\right|<1{\text{ for }}j>1\end{aligned}}}
On the other hand:
{\displaystyle b_{k}={\frac {A^{k}b_{0}}{\|A^{k}b_{0}\|}}.}
Therefore, {\displaystyle b_{k}} converges to (a multiple of) the eigenvector {\displaystyle v_{1}}. The convergence is geometric, with ratio
{\displaystyle \left|{\frac {\lambda _{2}}{\lambda _{1}}}\right|,}
where {\displaystyle \lambda _{2}} denotes the second dominant eigenvalue. Thus, the method converges slowly if there is an eigenvalue close in magnitude to the dominant eigenvalue.
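The geometric rate can be observed numerically. The sketch below (an illustrative check, not from the article) runs power iteration on a diagonal matrix with known ratio |λ2/λ1| = 0.5 and tracks the error, whose successive ratios approach 0.5:

```python
import numpy as np

A = np.diag([2.0, 1.0])          # lambda_1 = 2, lambda_2 = 1, ratio 0.5
b = np.array([1.0, 1.0])
b /= np.linalg.norm(b)
errors = []
for _ in range(20):
    b = A @ b
    b /= np.linalg.norm(b)
    errors.append(abs(b[1]))     # distance of b from the eigenvector e_1
# Successive error ratios approach |lambda_2 / lambda_1| = 0.5.
ratios = [errors[k + 1] / errors[k] for k in range(len(errors) - 1)]
```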
== Applications ==
Although the power iteration method approximates only one eigenvalue of a matrix, it remains useful for certain computational problems. For instance, Google uses it to calculate the PageRank of documents in their search engine, and Twitter uses it to show users recommendations of whom to follow. The power iteration method is especially suitable for sparse matrices, such as the web matrix, or as a matrix-free method that does not require storing the coefficient matrix {\displaystyle A} explicitly, but can instead access a function evaluating matrix-vector products {\displaystyle Ax}. For non-symmetric matrices that are well-conditioned, the power iteration method can outperform the more complex Arnoldi iteration. For symmetric matrices, the power iteration method is rarely used, since its convergence speed can be easily increased without sacrificing the small cost per iteration; see, e.g., Lanczos iteration and LOBPCG.
Some of the more advanced eigenvalue algorithms can be understood as variations of the power iteration. For instance, the inverse iteration method applies power iteration to the matrix {\displaystyle A^{-1}}. Other algorithms look at the whole subspace generated by the vectors {\displaystyle b_{k}}. This subspace is known as the Krylov subspace. It can be computed by Arnoldi iteration or Lanczos iteration.
Gram iteration is a super-linear and deterministic method to compute the largest eigenpair.
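Inverse iteration, mentioned above, can be sketched directly in this framework. The code below is an illustrative sketch (the shifted variant with shift mu is assumed; a shift of 0 recovers plain power iteration on the inverse); in practice one solves a linear system each step rather than forming the inverse:

```python
import numpy as np

def inverse_iteration(A, mu, num_iterations=50):
    # Power iteration applied to (A - mu*I)^{-1}: converges to the
    # eigenvector whose eigenvalue is closest to the shift mu.
    n = A.shape[0]
    b = np.random.rand(n)
    shifted = A - mu * np.eye(n)
    for _ in range(num_iterations):
        b = np.linalg.solve(shifted, b)   # avoids forming the inverse
        b /= np.linalg.norm(b)
    return b
```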
== See also ==
Rayleigh quotient iteration
Inverse iteration
== References == | Wikipedia/Power_method |
In mathematics, spectral theory is an inclusive term for theories extending the eigenvector and eigenvalue theory of a single square matrix to a much broader theory of the structure of operators in a variety of mathematical spaces. It is a result of studies of linear algebra and the solutions of systems of linear equations and their generalizations. The theory is connected to that of analytic functions because the spectral properties of an operator are related to analytic functions of the spectral parameter.
== Mathematical background ==
The name spectral theory was introduced by David Hilbert in his original formulation of Hilbert space theory, which was cast in terms of quadratic forms in infinitely many variables. The original spectral theorem was therefore conceived as a version of the theorem on principal axes of an ellipsoid, in an infinite-dimensional setting. The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous. Hilbert himself was surprised by the unexpected application of this theory, noting that "I developed my theory of infinitely many variables from purely mathematical interests, and even called it 'spectral analysis' without any presentiment that it would later find application to the actual spectrum of physics."
There have been three main ways to formulate spectral theory, each of which finds use in different domains. After Hilbert's initial formulation, the later development of abstract Hilbert spaces and the spectral theory of single normal operators on them were well suited to the requirements of physics, exemplified by the work of von Neumann. The further theory built on this to address Banach algebras in general. This development leads to the Gelfand representation, which covers the commutative case, and further into non-commutative harmonic analysis.
The difference can be seen in making the connection with Fourier analysis. The Fourier transform on the real line is in one sense the spectral theory of differentiation as a differential operator. But for that to cover the phenomena one has already to deal with generalized eigenfunctions (for example, by means of a rigged Hilbert space). On the other hand, it is simple to construct a group algebra, the spectrum of which captures the Fourier transform's basic properties, and this is carried out by means of Pontryagin duality.
One can also study the spectral properties of operators on Banach spaces. For example, compact operators on Banach spaces have many spectral properties similar to that of matrices.
== Physical background ==
The background in the physics of vibrations has been explained in this way:
Spectral theory is connected with the investigation of localized vibrations of a variety of different objects, from atoms and molecules in chemistry to obstacles in acoustic waveguides. These vibrations have frequencies, and the issue is to decide when such localized vibrations occur, and how to go about computing the frequencies. This is a very complicated problem since every object has not only a fundamental tone but also a complicated series of overtones, which vary radically from one body to another.
Such physical ideas have nothing to do with the mathematical theory on a technical level, but there are examples of indirect involvement (see for example Mark Kac's question Can you hear the shape of a drum?). Hilbert's adoption of the term "spectrum" has been attributed to an 1897 paper of Wilhelm Wirtinger on Hill differential equation (by Jean Dieudonné), and it was taken up by his students during the first decade of the twentieth century, among them Erhard Schmidt and Hermann Weyl. The conceptual basis for Hilbert space was developed from Hilbert's ideas by Erhard Schmidt and Frigyes Riesz. It was almost twenty years later, when quantum mechanics was formulated in terms of the Schrödinger equation, that the connection was made to atomic spectra; a connection with the mathematical physics of vibration had been suspected before, as remarked by Henri Poincaré, but rejected for simple quantitative reasons, absent an explanation of the Balmer series. The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous, rather than being an object of Hilbert's spectral theory.
== A definition of spectrum ==
Consider a bounded linear transformation T defined everywhere over a general Banach space. We form the transformation:
{\displaystyle R_{\zeta }=\left(\zeta I-T\right)^{-1}.}
Here I is the identity operator and ζ is a complex number. The inverse of an operator T, that is T−1, is defined by:
{\displaystyle TT^{-1}=T^{-1}T=I.}
If the inverse exists, T is called regular. If it does not exist, T is called singular.
With these definitions, the resolvent set of T is the set of all complex numbers ζ such that Rζ exists and is bounded. This set often is denoted as ρ(T). The spectrum of T is the set of all complex numbers ζ such that Rζ fails to exist or is unbounded. Often the spectrum of T is denoted by σ(T). The function Rζ for all ζ in ρ(T) (that is, wherever Rζ exists as a bounded operator) is called the resolvent of T. The spectrum of T is therefore the complement of the resolvent set of T in the complex plane. Every eigenvalue of T belongs to σ(T), but σ(T) may contain non-eigenvalues.
This definition applies to a Banach space, but of course other types of space exist as well; for example, topological vector spaces include Banach spaces, but can be more general. On the other hand, Banach spaces include Hilbert spaces, and it is these spaces that find the greatest application and the richest theoretical results. With suitable restrictions, much can be said about the structure of the spectra of transformations in a Hilbert space. In particular, for self-adjoint operators, the spectrum lies on the real line and (in general) is a spectral combination of a point spectrum of discrete eigenvalues and a continuous spectrum.
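In finite dimensions these definitions become concrete: for a matrix acting on a finite-dimensional space, the spectrum is exactly the set of eigenvalues, and the resolvent exists (and is automatically bounded) for every other ζ. A sketch, with an illustrative 2×2 matrix:

```python
import numpy as np

# For a matrix T, (zeta*I - T) fails to be invertible exactly when zeta
# is an eigenvalue, so the spectrum is the eigenvalue set {2, 3} here.
T = np.array([[2.0, 1.0], [0.0, 3.0]])
spectrum = np.linalg.eigvals(T)

zeta = 5.0                                # a point in the resolvent set
R = np.linalg.inv(zeta * np.eye(2) - T)   # the resolvent R_zeta exists
```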
== Spectral theory briefly ==
In functional analysis and linear algebra the spectral theorem establishes conditions under which an operator can be expressed in simple form as a sum of simpler operators. As a full rigorous presentation is not appropriate for this article, we take an approach that avoids much of the rigor and satisfaction of a formal treatment with the aim of being more comprehensible to a non-specialist.
This topic is easiest to describe by introducing the bra–ket notation of Dirac for operators. As an example, a very particular linear operator L might be written as a dyadic product:
{\displaystyle L=|k_{1}\rangle \langle b_{1}|,}
in terms of the "bra" ⟨b1| and the "ket" |k1⟩. A function f is described by a ket as |f ⟩. The function f(x) defined on the coordinates
{\displaystyle (x_{1},x_{2},x_{3},\dots )} is denoted as {\displaystyle f(x)=\langle x|f\rangle } and the magnitude of f by
{\displaystyle \|f\|^{2}=\langle f|f\rangle =\int \langle f|x\rangle \langle x|f\rangle \,dx=\int f^{*}(x)f(x)\,dx}
where the notation (*) denotes a complex conjugate. This inner product choice defines a very specific inner product space, restricting the generality of the arguments that follow.
The effect of L upon a function f is then described as:
{\displaystyle L|f\rangle =|k_{1}\rangle \langle b_{1}|f\rangle }
expressing the result that the effect of L on f is to produce a new function {\displaystyle |k_{1}\rangle } multiplied by the inner product represented by {\displaystyle \langle b_{1}|f\rangle }.
A more general linear operator L might be expressed as:
{\displaystyle L=\lambda _{1}|e_{1}\rangle \langle f_{1}|+\lambda _{2}|e_{2}\rangle \langle f_{2}|+\lambda _{3}|e_{3}\rangle \langle f_{3}|+\dots ,}
where the {\displaystyle \{\,\lambda _{i}\,\}} are scalars, the {\displaystyle \{\,|e_{i}\rangle \,\}} are a basis, and the {\displaystyle \{\,\langle f_{i}|\,\}} a reciprocal basis for the space. The relation between the basis and the reciprocal basis is described, in part, by:
{\displaystyle \langle f_{i}|e_{j}\rangle =\delta _{ij}}
If such a formalism applies, the {\displaystyle \{\,\lambda _{i}\,\}} are eigenvalues of L and the functions {\displaystyle \{\,|e_{i}\rangle \,\}} are eigenfunctions of L. The eigenvalues are in the spectrum of L.
Some natural questions are: under what circumstances does this formalism work, and for what operators L are expansions in series of other operators like this possible? Can any function f be expressed in terms of the eigenfunctions (are they a Schauder basis) and under what circumstances does a point spectrum or a continuous spectrum arise? How do the formalisms for infinite-dimensional spaces and finite-dimensional spaces differ, or do they differ? Can these ideas be extended to a broader class of spaces? Answering such questions is the realm of spectral theory and requires considerable background in functional analysis and matrix algebra.
== Resolution of the identity ==
This section continues in the rough and ready manner of the above section using the bra–ket notation, and glossing over the many important details of a rigorous treatment. A rigorous mathematical treatment may be found in various references. In particular, the dimension n of the space will be finite.
Using the bra–ket notation of the above section, the identity operator may be written as:
{\displaystyle I=\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|}
where it is supposed as above that {\displaystyle \{|e_{i}\rangle \}} are a basis and the {\displaystyle \{\langle f_{i}|\}} a reciprocal basis for the space satisfying the relation:
{\displaystyle \langle f_{i}|e_{j}\rangle =\delta _{ij}.}
This expression of the identity operation is called a representation or a resolution of the identity. This formal representation satisfies the basic property of the identity:
{\displaystyle I^{k}=I}
valid for every positive integer k.
Applying the resolution of the identity to any function in the space {\displaystyle |\psi \rangle }, one obtains:
{\displaystyle I|\psi \rangle =|\psi \rangle =\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|\psi \rangle =\sum _{i=1}^{n}c_{i}|e_{i}\rangle }
which is the generalized Fourier expansion of ψ in terms of the basis functions { ei }. Here {\displaystyle c_{i}=\langle f_{i}|\psi \rangle }.
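The interplay of basis, reciprocal basis, and generalized Fourier coefficients can be checked numerically in finite dimensions. The sketch below (an illustrative 2D example, not from the article) builds a non-orthogonal basis, constructs its reciprocal basis from a matrix inverse, and verifies the resolution of the identity and the expansion:

```python
import numpy as np

# Columns of E are the basis vectors e_1, e_2 (deliberately non-orthogonal);
# the rows of E^{-1}, i.e. the columns of F, give the reciprocal basis f_i
# with <f_i, e_j> = delta_ij.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
F = np.linalg.inv(E).T

# Resolution of the identity: I = sum_i |e_i><f_i|.
identity = sum(np.outer(E[:, i], F[:, i]) for i in range(2))

# Generalized Fourier expansion: psi = sum_i c_i e_i with c_i = <f_i, psi>.
psi = np.array([3.0, -2.0])
c = F.T @ psi
reconstructed = E @ c
```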
Given some operator equation of the form:
{\displaystyle O|\psi \rangle =|h\rangle }
with h in the space, this equation can be solved in the above basis through the formal manipulations:
{\displaystyle O|\psi \rangle =\sum _{i=1}^{n}c_{i}\left(O|e_{i}\rangle \right)=\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|h\rangle ,}
{\displaystyle \langle f_{j}|O|\psi \rangle =\sum _{i=1}^{n}c_{i}\langle f_{j}|O|e_{i}\rangle =\sum _{i=1}^{n}\langle f_{j}|e_{i}\rangle \langle f_{i}|h\rangle =\langle f_{j}|h\rangle ,\quad \forall j}
which converts the operator equation to a matrix equation determining the unknown coefficients cj in terms of the generalized Fourier coefficients {\displaystyle \langle f_{j}|h\rangle } of h and the matrix elements {\displaystyle O_{ji}=\langle f_{j}|O|e_{i}\rangle } of the operator O.
The role of spectral theory arises in establishing the nature and existence of the basis and the reciprocal basis. In particular, the basis might consist of the eigenfunctions of some linear operator L:
{\displaystyle L|e_{i}\rangle =\lambda _{i}|e_{i}\rangle \,;}
with the { λi } the eigenvalues of L from the spectrum of L. Then the resolution of the identity above provides the dyad expansion of L:
{\displaystyle LI=L=\sum _{i=1}^{n}L|e_{i}\rangle \langle f_{i}|=\sum _{i=1}^{n}\lambda _{i}|e_{i}\rangle \langle f_{i}|.}
== Resolvent operator ==
Using spectral theory, the resolvent operator R:
{\displaystyle R=(\lambda I-L)^{-1},\,}
can be evaluated in terms of the eigenfunctions and eigenvalues of L, and the Green's function corresponding to L can be found.
Applying R to some arbitrary function in the space, say
{\displaystyle \varphi },
{\displaystyle R|\varphi \rangle =(\lambda I-L)^{-1}|\varphi \rangle =\sum _{i=1}^{n}{\frac {1}{\lambda -\lambda _{i}}}|e_{i}\rangle \langle f_{i}|\varphi \rangle .}
This function has poles in the complex λ-plane at each eigenvalue of L. Thus, using the calculus of residues:
{\displaystyle {\frac {1}{2\pi i}}\oint _{C}R|\varphi \rangle d\lambda =-\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|\varphi \rangle =-|\varphi \rangle ,}
where the line integral is over a contour C that includes all the eigenvalues of L.
Suppose our functions are defined over some coordinates {xj}, that is:
{\displaystyle \langle x,\varphi \rangle =\varphi (x_{1},x_{2},...).}
Introducing the notation
{\displaystyle \langle x,y\rangle =\delta (x-y),}
where δ(x − y) = δ(x1 − y1, x2 − y2, x3 − y3, ...) is the Dirac delta function,
we can write
{\displaystyle \langle x,\varphi \rangle =\int \langle x,y\rangle \langle y,\varphi \rangle dy.}
Then:
{\displaystyle {\begin{aligned}\left\langle x,{\frac {1}{2\pi i}}\oint _{C}{\frac {\varphi }{\lambda I-L}}d\lambda \right\rangle &={\frac {1}{2\pi i}}\oint _{C}d\lambda \left\langle x,{\frac {\varphi }{\lambda I-L}}\right\rangle \\&={\frac {1}{2\pi i}}\oint _{C}d\lambda \int dy\left\langle x,{\frac {y}{\lambda I-L}}\right\rangle \langle y,\varphi \rangle \end{aligned}}}
The function G(x, y; λ) defined by:
{\displaystyle {\begin{aligned}G(x,y;\lambda )&=\left\langle x,{\frac {y}{\lambda I-L}}\right\rangle \\&=\sum _{i=1}^{n}\sum _{j=1}^{n}\langle x,e_{i}\rangle \left\langle f_{i},{\frac {e_{j}}{\lambda I-L}}\right\rangle \langle f_{j},y\rangle \\&=\sum _{i=1}^{n}{\frac {\langle x,e_{i}\rangle \langle f_{i},y\rangle }{\lambda -\lambda _{i}}}\\&=\sum _{i=1}^{n}{\frac {e_{i}(x)f_{i}^{*}(y)}{\lambda -\lambda _{i}}},\end{aligned}}}
is called the Green's function for operator L, and satisfies:
{\displaystyle {\frac {1}{2\pi i}}\oint _{C}G(x,y;\lambda )\,d\lambda =-\sum _{i=1}^{n}\langle x,e_{i}\rangle \langle f_{i},y\rangle =-\langle x,y\rangle =-\delta (x-y).}
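The eigen-expansion of the resolvent underlying these formulas can be verified numerically in the finite-dimensional case. The sketch below (an illustrative symmetric 2×2 operator, not from the article) checks that the sum of dyads over eigenpairs reproduces the resolvent matrix:

```python
import numpy as np

# For a symmetric operator, G(lambda) = sum_i v_i v_i^T / (lambda - lambda_i)
# equals the resolvent (lambda*I - L)^{-1} at any lambda off the spectrum.
L = np.array([[2.0, 1.0], [1.0, 2.0]])
lam_i, V = np.linalg.eigh(L)              # eigenvalues 1, 3; orthonormal v_i
lam = 5.0                                  # a point in the resolvent set
G = sum(np.outer(V[:, i], V[:, i]) / (lam - lam_i[i]) for i in range(2))
```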
== Operator equations ==
Consider the operator equation:
{\displaystyle (O-\lambda I)|\psi \rangle =|h\rangle ;}
in terms of coordinates:
{\displaystyle \int \langle x,(O-\lambda I)y\rangle \langle y,\psi \rangle \,dy=h(x).}
A particular case is λ = 0.
The Green's function of the previous section is:
{\displaystyle \langle y,G(\lambda )z\rangle =\left\langle y,(O-\lambda I)^{-1}z\right\rangle =G(y,z;\lambda ),}
and satisfies:
{\displaystyle \int \langle x,(O-\lambda I)y\rangle \langle y,G(\lambda )z\rangle \,dy=\int \langle x,(O-\lambda I)y\rangle \left\langle y,(O-\lambda I)^{-1}z\right\rangle \,dy=\langle x,z\rangle =\delta (x-z).}
Using this Green's function property:
{\displaystyle \int \langle x,(O-\lambda I)y\rangle G(y,z;\lambda )\,dy=\delta (x-z).}
Then, multiplying both sides of this equation by h(z) and integrating:
{\displaystyle \int dz\,h(z)\int dy\,\langle x,(O-\lambda I)y\rangle G(y,z;\lambda )=\int dy\,\langle x,(O-\lambda I)y\rangle \int dz\,h(z)G(y,z;\lambda )=h(x),}
which suggests the solution is:
{\displaystyle \psi (x)=\int h(z)G(x,z;\lambda )\,dz.}
That is, the function ψ(x) satisfying the operator equation is found if we can find the spectrum of O, and construct G, for example by using:
{\displaystyle G(x,z;\lambda )=\sum _{i=1}^{n}{\frac {e_{i}(x)f_{i}^{*}(z)}{\lambda -\lambda _{i}}}.}
There are many other ways to find G, of course. See the articles on Green's functions and on Fredholm integral equations. It must be kept in mind that the above mathematics is purely formal, and a rigorous treatment involves some pretty sophisticated mathematics, including a good background knowledge of functional analysis, Hilbert spaces, distributions and so forth. Consult these articles and the references for more detail.
== Spectral theorem and Rayleigh quotient ==
Optimization problems may be the most useful examples about the combinatorial significance of the eigenvalues and eigenvectors in symmetric matrices, especially for the Rayleigh quotient with respect to a matrix M.
Theorem Let M be a symmetric matrix and let x be the non-zero vector that maximizes the Rayleigh quotient with respect to M. Then, x is an eigenvector of M with eigenvalue equal to the Rayleigh quotient. Moreover, this eigenvalue is the largest eigenvalue of M.
Proof Assume the spectral theorem. Let the eigenvalues of M be {\displaystyle \lambda _{1}\leq \lambda _{2}\leq \cdots \leq \lambda _{n}}, with corresponding orthonormal eigenvectors {\displaystyle v_{1},\dots ,v_{n}}. Since the {\displaystyle \{v_{i}\}} form an orthonormal basis, any vector x can be expressed in this basis as
{\displaystyle x=\sum _{i}v_{i}^{T}xv_{i}}
To verify this formula, take the inner product with any basis vector {\displaystyle v_{j}}:
{\displaystyle {\begin{aligned}v_{j}^{T}\sum _{i}v_{i}^{T}xv_{i}={}&\sum _{i}v_{i}^{T}xv_{j}^{T}v_{i}\\[4pt]={}&(v_{j}^{T}x)v_{j}^{T}v_{j}\\[4pt]={}&v_{j}^{T}x\end{aligned}}}
Now evaluate the Rayleigh quotient with respect to x:
{\displaystyle {\begin{aligned}x^{T}Mx={}&\left(\sum _{i}(v_{i}^{T}x)v_{i}\right)^{T}M\left(\sum _{j}(v_{j}^{T}x)v_{j}\right)\\[4pt]={}&\left(\sum _{i}(v_{i}^{T}x)v_{i}^{T}\right)\left(\sum _{j}(v_{j}^{T}x)v_{j}\lambda _{j}\right)\\[4pt]={}&\sum _{i,j}(v_{i}^{T}x)v_{i}^{T}(v_{j}^{T}x)v_{j}\lambda _{j}\\[4pt]={}&\sum _{j}(v_{j}^{T}x)(v_{j}^{T}x)\lambda _{j}\\[4pt]={}&\sum _{j}(v_{j}^{T}x)^{2}\lambda _{j}\leq \lambda _{n}\sum _{j}(v_{j}^{T}x)^{2}\\[4pt]={}&\lambda _{n}x^{T}x,\end{aligned}}}
where we used Parseval's identity in the last line. Finally we obtain that
{\displaystyle {\frac {x^{T}Mx}{x^{T}x}}\leq \lambda _{n}}
so the Rayleigh quotient is always at most {\displaystyle \lambda _{n}}, with equality when x is an eigenvector for {\displaystyle \lambda _{n}}.
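This bound is easy to check numerically. The sketch below (an illustrative symmetric matrix with eigenvalues 1 and 3, not from the article) samples random vectors, confirms the Rayleigh quotient never exceeds the largest eigenvalue, and confirms the top eigenvector attains it:

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 2.0]])    # eigenvalues 1 and 3
eigvals, eigvecs = np.linalg.eigh(M)
lam_max = eigvals[-1]

def rayleigh(M, x):
    # Rayleigh quotient x^T M x / x^T x for a nonzero vector x.
    return (x @ M @ x) / (x @ x)

rng = np.random.default_rng(0)
samples = [rayleigh(M, rng.standard_normal(2)) for _ in range(100)]
top = rayleigh(M, eigvecs[:, -1])         # attains lam_max
```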
== See also ==
Functions of operators, Operator theory
Lax pairs
Least-squares spectral analysis
Riesz projector
Self-adjoint operator
Spectrum (functional analysis), Resolvent formalism, Decomposition of spectrum (functional analysis)
Spectral radius, Spectrum of an operator, Spectral theorem
Spectral theory of compact operators
Spectral theory of normal C*-algebras
Sturm–Liouville theory, Integral equations, Fredholm theory
Compact operators, Isospectral operators, Completeness
Spectral geometry
Spectral graph theory
List of functional analysis topics
== Notes ==
== References ==
Edward Brian Davies (1996). Spectral Theory and Differential Operators; Volume 42 in the Cambridge Studies in Advanced Mathematics. Cambridge University Press. ISBN 0-521-58710-7.
Dunford, Nelson; Schwartz, Jacob T (1988). Linear Operators, Spectral Theory, Self Adjoint Operators in Hilbert Space (Part 2) (Paperback reprint of 1967 ed.). Wiley. ISBN 0-471-60847-5.
Dunford, Nelson; Schwartz, Jacob T (1988). Linear Operators, Spectral Operators (Part 3) (Paperback reprint of 1971 ed.). Wiley. ISBN 0-471-60846-7.
Sadri Hassani (1999). "Chapter 4: Spectral decomposition". Mathematical Physics: a Modern Introduction to its Foundations. Springer. ISBN 0-387-98579-4.
"Spectral theory of linear operators", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Shmuel Kantorovitz (1983). Spectral Theory of Banach Space Operators;. Springer.
Arch W. Naylor, George R. Sell (2000). "Chapter 5, Part B: The Spectrum". Linear Operator Theory in Engineering and Science; Volume 40 of Applied mathematical sciences. Springer. p. 411. ISBN 0-387-95001-X.
Gerald Teschl (2009). Mathematical Methods in Quantum Mechanics; With Applications to Schrödinger Operators. American Mathematical Society. ISBN 978-0-8218-4660-5.
Valter Moretti (2017). Spectral Theory and Quantum Mechanics; Mathematical Foundations of Quantum Theories, Symmetries and Introduction to the Algebraic Formulation 2nd Edition. Springer. ISBN 978-3-319-70705-1.
== External links ==
Evans M. Harrell II: A Short History of Operator Theory
Gregory H. Moore (1995). "The axiomatization of linear algebra: 1875-1940". Historia Mathematica. 22 (3): 262–303. doi:10.1006/hmat.1995.1025.
Steen, L. A. (April 1973). "Highlights in the History of Spectral Theory". The American Mathematical Monthly. 80 (4): 359–381. doi:10.2307/2319079. JSTOR 2319079. | Wikipedia/Spectral_theory |
In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the i-th approximation (called an "iterate") is derived from the previous ones.
A specific implementation with termination criteria for a given iterative method like gradient descent, hill climbing, Newton's method, or quasi-Newton methods like BFGS, is an algorithm of an iterative method or a method of successive approximation. An iterative method is called convergent if the corresponding sequence converges for given initial approximations. A mathematically rigorous convergence analysis of an iterative method is usually performed; however, heuristic-based iterative methods are also common.
In contrast, direct methods attempt to solve the problem by a finite sequence of operations. In the absence of rounding errors, direct methods would deliver an exact solution (for example, solving a linear system of equations {\displaystyle A\mathbf {x} =\mathbf {b} } by Gaussian elimination). Iterative methods are often the only choice for nonlinear equations. However, iterative methods are often useful even for linear problems involving many variables (sometimes on the order of millions), where direct methods would be prohibitively expensive (and in some cases impossible) even with the best available computing power.
== Attractive fixed points ==
If an equation can be put into the form f(x) = x, and a solution x is an attractive fixed point of the function f, then one may begin with a point x1 in the basin of attraction of x, and let xn+1 = f(xn) for n ≥ 1, and the sequence {xn}n ≥ 1 will converge to the solution x. Here xn is the nth approximation or iteration of x and xn+1 is the next or n + 1 iteration of x. Alternately, superscripts in parentheses are often used in numerical methods, so as not to interfere with subscripts with other meanings. (For example, x(n+1) = f(x(n)).) If the function f is continuously differentiable, a sufficient condition for convergence is that the spectral radius of the derivative is strictly bounded by one in a neighborhood of the fixed point. If this condition holds at the fixed point, then a sufficiently small neighborhood (basin of attraction) must exist.
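The scheme above can be sketched in a few lines of Python (the function and starting point are chosen for illustration only). The equation x = cos(x) has an attractive fixed point, since |f′(x)| = |sin(x)| < 1 near the solution:

```python
import math

def fixed_point_iterate(f, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# Starting anywhere in the basin of attraction, the sequence converges
# to the unique solution of x = cos(x) (approximately 0.739085).
root = fixed_point_iterate(math.cos, 1.0)
```

The convergence is linear here, with rate roughly |sin(root)| per step, which is why many iterations are needed for high accuracy.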
== Linear systems ==
In the case of a system of linear equations, the two main classes of iterative methods are the stationary iterative methods, and the more general Krylov subspace methods.
=== Stationary iterative methods ===
==== Introduction ====
Stationary iterative methods solve a linear system with an operator approximating the original one; based on a measurement of the error in the result (the residual), they form a "correction equation", and this process is repeated. While these methods are simple to derive, implement, and analyze, convergence is only guaranteed for a limited class of matrices.
==== Definition ====
An iterative method is defined by
{\displaystyle \mathbf {x} ^{k+1}:=\Psi (\mathbf {x} ^{k}),\quad k\geq 0}
and for a given linear system {\displaystyle A\mathbf {x} =\mathbf {b} } with exact solution {\displaystyle \mathbf {x} ^{*}} the error by
{\displaystyle \mathbf {e} ^{k}:=\mathbf {x} ^{k}-\mathbf {x} ^{*},\quad k\geq 0.}
An iterative method is called linear if there exists a matrix {\displaystyle C\in \mathbb {R} ^{n\times n}} such that
{\displaystyle \mathbf {e} ^{k+1}=C\mathbf {e} ^{k}\quad \forall k\geq 0}
and this matrix is called the iteration matrix.
An iterative method with a given iteration matrix {\displaystyle C} is called convergent if the following holds
{\displaystyle \lim _{k\rightarrow \infty }C^{k}=0.}
An important theorem states that a given iterative method with iteration matrix {\displaystyle C} is convergent if and only if its spectral radius {\displaystyle \rho (C)} is smaller than unity, that is,
{\displaystyle \rho (C)<1.}
The basic iterative methods work by splitting the matrix {\displaystyle A} into
{\displaystyle A=M-N}
where the matrix {\displaystyle M} should be easily invertible. The iterative methods are then defined as
{\displaystyle M\mathbf {x} ^{k+1}=N\mathbf {x} ^{k}+b,\quad k\geq 0,}
or, equivalently,
{\displaystyle \mathbf {x} ^{k+1}=\mathbf {x} ^{k}+M^{-1}(b-A\mathbf {x} ^{k}),\quad k\geq 0.}
From this it follows that the iteration matrix is given by
{\displaystyle C=I-M^{-1}A=M^{-1}N.}
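The correction form x^{k+1} = x^k + M^{-1}(b − Ax^k) translates directly into code. A minimal sketch (the matrix, right-hand side, and iteration count are illustrative choices; taking M as the diagonal of A gives the Jacobi splitting):

```python
import numpy as np

def stationary_iteration(A, b, M, x0, num_iters=100):
    """Generic splitting iteration x_{k+1} = x_k + M^{-1}(b - A x_k)."""
    x = x0.astype(float)
    for _ in range(num_iters):
        x = x + np.linalg.solve(M, b - A @ x)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M = np.diag(np.diag(A))          # Jacobi choice: M = D
x = stationary_iteration(A, b, M, np.zeros(2))
```

For this diagonally dominant matrix the spectral radius of C = I − M^{-1}A is well below one, so the iterates converge rapidly to the exact solution.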
==== Examples ====
Basic examples of stationary iterative methods use a splitting of the matrix {\displaystyle A} such as
{\displaystyle A=D+L+U\,,\quad D:={\text{diag}}((a_{ii})_{i})}
where {\displaystyle D} is the diagonal part of {\displaystyle A}, {\displaystyle L} is its strict lower triangular part, and {\displaystyle U} is its strict upper triangular part.
Richardson method: {\displaystyle M:={\frac {1}{\omega }}I\quad (\omega \neq 0)}
Jacobi method: {\displaystyle M:=D}
Damped Jacobi method: {\displaystyle M:={\frac {1}{\omega }}D\quad (\omega \neq 0)}
Gauss–Seidel method: {\displaystyle M:=D+L}
Successive over-relaxation method (SOR): {\displaystyle M:={\frac {1}{\omega }}D+L\quad (\omega \neq 0)}
Symmetric successive over-relaxation (SSOR): {\displaystyle M:={\frac {1}{\omega (2-\omega )}}(D+\omega L)D^{-1}(D+\omega U)\quad (\omega \not \in \{0,2\})}
Linear stationary iterative methods are also called relaxation methods.
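The convergence criterion ρ(C) < 1 can be checked numerically for these splittings. A sketch (the tridiagonal test matrix is an arbitrary choice) comparing the Jacobi and Gauss–Seidel iteration matrices:

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
D = np.diag(np.diag(A))
L = np.tril(A, k=-1)

# Iteration matrices C = I - M^{-1} A for the Jacobi (M = D)
# and Gauss-Seidel (M = D + L) splittings.
C_jacobi = np.eye(3) - np.linalg.solve(D, A)
C_gs = np.eye(3) - np.linalg.solve(D + L, A)

def rho(C):
    """Spectral radius: largest eigenvalue magnitude."""
    return max(abs(np.linalg.eigvals(C)))
```

Both spectral radii are below one here, so both methods converge for this matrix; for tridiagonal matrices of this kind Gauss–Seidel's radius is the square of Jacobi's, i.e. it converges roughly twice as fast per digit.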
=== Krylov subspace methods ===
Krylov subspace methods work by forming a basis of the sequence of successive matrix powers times the initial residual (the Krylov sequence).
The approximations to the solution are then formed by minimizing the residual over the subspace formed.
The prototypical method in this class is the conjugate gradient method (CG) which assumes that the system matrix
A
{\displaystyle A}
is symmetric positive-definite.
For symmetric (and possibly indefinite)
A
{\displaystyle A}
one works with the minimal residual method (MINRES).
In the case of non-symmetric matrices, methods such as the generalized minimal residual method (GMRES) and the biconjugate gradient method (BiCG) have been derived.
==== Convergence of Krylov subspace methods ====
Since these methods form a basis, it is evident that the method converges in N iterations, where N is the system size. However, in the presence of rounding errors this statement does not hold; moreover, in practice N can be very large, and the iterative process often reaches sufficient accuracy much earlier. The analysis of these methods is hard, depending on a complicated function of the spectrum of the operator.
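The prototypical Krylov method, conjugate gradient, can be sketched in a few lines (a textbook version without preconditioning, not tied to any particular library). It assumes A is symmetric positive-definite and, in exact arithmetic, terminates in at most N steps as noted above:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Plain CG for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x                    # initial residual
    p = r.copy()                     # initial search direction
    rs_old = r @ r
    for _ in range(len(b)):          # at most n steps in exact arithmetic
        Ap = A @ p
        alpha = rs_old / (p @ Ap)    # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # new A-conjugate direction
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

Each iteration needs only one matrix–vector product, which is what makes Krylov methods attractive for large sparse systems.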
=== Preconditioners ===
The approximating operator that appears in stationary iterative methods can also be incorporated in Krylov subspace methods such as GMRES (alternatively, preconditioned Krylov methods can be considered as accelerations of stationary iterative methods), where they become transformations of the original operator to a presumably better conditioned one. The construction of preconditioners is a large research area.
== Methods of successive approximation ==
Mathematical methods relating to successive approximation include:
Babylonian method, for finding square roots of numbers
Fixed-point iteration
Means of finding zeros of functions:
Halley's method
Newton's method
Differential-equation matters:
Picard–Lindelöf theorem, on existence of solutions of differential equations
Runge–Kutta methods, for numerical solution of differential equations
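The Babylonian method above is one of the oldest examples of successive approximation. A minimal sketch (starting guess and iteration count are arbitrary; the iteration converges quadratically for any positive start):

```python
def babylonian_sqrt(a, x0=1.0, num_iters=20):
    """Approximate sqrt(a) by iterating x_{n+1} = (x_n + a/x_n) / 2."""
    x = x0
    for _ in range(num_iters):
        x = 0.5 * (x + a / x)
    return x
```

This is exactly Newton's method applied to f(x) = x² − a, so the number of correct digits roughly doubles with each step.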
=== History ===
Jamshīd al-Kāshī used iterative methods to calculate the sine of 1° and π in The Treatise of Chord and Sine to high precision.
An early iterative method for solving a linear system appeared in a letter of Gauss to a student of his. He proposed solving a 4-by-4 system of equations by repeatedly solving the component in which the residual was the largest.
The theory of stationary iterative methods was solidly established with the work of D.M. Young starting in the 1950s. The conjugate gradient method was also invented in the 1950s, with independent developments by Cornelius Lanczos, Magnus Hestenes and Eduard Stiefel, but its nature and applicability were misunderstood at the time. Only in the 1970s was it realized that conjugacy-based methods work very well for partial differential equations, especially the elliptic type.
== See also ==
Closed-form expression
Iterative refinement
Kaczmarz method
Non-linear least squares
Numerical analysis
Root-finding algorithm
== References ==
== External links ==
Templates for the Solution of Linear Systems
Y. Saad: Iterative Methods for Sparse Linear Systems, 1st edition, PWS 1996 | Wikipedia/Iterative_method |
The moment of inertia, otherwise known as the mass moment of inertia, angular/rotational mass, second moment of mass, or most accurately, rotational inertia, of a rigid body is defined relatively to a rotational axis. It is the ratio between the torque applied and the resulting angular acceleration about that axis.: 279 : 261 It plays the same role in rotational motion as mass does in linear motion. A body's moment of inertia about a particular axis depends both on the mass and its distribution relative to the axis, increasing with mass and distance from the axis.
It is an extensive (additive) property: for a point mass the moment of inertia is simply the mass times the square of the perpendicular distance to the axis of rotation. The moment of inertia of a rigid composite system is the sum of the moments of inertia of its component subsystems (all taken about the same axis). Its simplest definition is the second moment of mass with respect to distance from an axis.
For bodies constrained to rotate in a plane, only their moment of inertia about an axis perpendicular to the plane, a scalar value, matters. For bodies free to rotate in three dimensions, their moments can be described by a symmetric 3-by-3 matrix, with a set of mutually perpendicular principal axes for which this matrix is diagonal and torques around the axes act independently of each other.
== Introduction ==
When a body is free to rotate around an axis, torque must be applied to change its angular momentum. The amount of torque needed to cause any given angular acceleration (the rate of change in angular velocity) is proportional to the moment of inertia of the body. Moments of inertia may be expressed in units of kilogram metre squared (kg·m2) in SI units and pound-foot-second squared (lbf·ft·s2) in imperial or US units.
The moment of inertia plays the role in rotational kinetics that mass (inertia) plays in linear kinetics—both characterize the resistance of a body to changes in its motion. The moment of inertia depends on how mass is distributed around an axis of rotation, and will vary depending on the chosen axis. For a point-like mass, the moment of inertia about some axis is given by {\displaystyle mr^{2}}, where {\displaystyle r} is the distance of the point from the axis, and {\displaystyle m} is the mass. For an extended rigid body, the moment of inertia is the sum of all the small pieces of mass multiplied by the square of their distances from the axis of rotation. For an extended body of a regular shape and uniform density, this summation sometimes produces a simple expression that depends on the dimensions, shape and total mass of the object.
In 1673, Christiaan Huygens introduced this parameter in his study of the oscillation of a body hanging from a pivot, known as a compound pendulum. The term moment of inertia ("momentum inertiae" in Latin) was introduced by Leonhard Euler in his book Theoria motus corporum solidorum seu rigidorum in 1765, and it is incorporated into Euler's second law.
The natural frequency of oscillation of a compound pendulum is obtained from the ratio of the torque imposed by gravity on the mass of the pendulum to the resistance to acceleration defined by the moment of inertia. Comparison of this natural frequency to that of a simple pendulum consisting of a single point of mass provides a mathematical formulation for moment of inertia of an extended body.
The moment of inertia also appears in momentum, kinetic energy, and in Newton's laws of motion for a rigid body as a physical parameter that combines its shape and mass. There is an interesting difference in the way moment of inertia appears in planar and spatial movement. Planar movement has a single scalar that defines the moment of inertia, while for spatial movement the same calculations yield a 3 × 3 matrix of moments of inertia, called the inertia matrix or inertia tensor.
The moment of inertia of a rotating flywheel is used in a machine to resist variations in applied torque to smooth its rotational output. The moment of inertia of an airplane about its longitudinal, horizontal and vertical axes determine how steering forces on the control surfaces of its wings, elevators and rudder(s) affect the plane's motions in roll, pitch and yaw.
== Definition ==
The moment of inertia is defined as the product of mass of section and the square of the distance between the reference axis and the centroid of the section.
The moment of inertia I is also defined as the ratio of the net angular momentum L of a system to its angular velocity ω around a principal axis, that is
{\displaystyle I={\frac {L}{\omega }}.}
If the angular momentum of a system is constant, then as the moment of inertia gets smaller, the angular velocity must increase. This occurs when spinning figure skaters pull in their outstretched arms or divers curl their bodies into a tuck position during a dive, to spin faster.
If the shape of the body does not change, then its moment of inertia appears in Newton's law of motion as the ratio of an applied torque τ on a body to the angular acceleration α around a principal axis, that is: 279 : 261, eq.9-19
{\displaystyle \tau =I\alpha .}
For a simple pendulum, this definition yields a formula for the moment of inertia I in terms of the mass m of the pendulum and its distance r from the pivot point as,
{\displaystyle I=mr^{2}.}
Thus, the moment of inertia of the pendulum depends on both the mass m of a body and its geometry, or shape, as defined by the distance r to the axis of rotation.
This simple formula generalizes to define moment of inertia for an arbitrarily shaped body as the sum of all the elemental point masses dm each multiplied by the square of its perpendicular distance r to an axis k. An arbitrary object's moment of inertia thus depends on the spatial distribution of its mass.
In general, given an object of mass m, an effective radius k can be defined, dependent on a particular axis of rotation, with such a value that its moment of inertia around the axis is
{\displaystyle I=mk^{2},}
where k is known as the radius of gyration around the axis.
== Examples ==
=== Simple pendulum ===
Mathematically, the moment of inertia of a simple pendulum is the ratio of the torque due to gravity about the pivot of a pendulum to its angular acceleration about that pivot point. For a simple pendulum, this is found to be the product of the mass of the particle {\displaystyle m} with the square of its distance {\displaystyle r} to the pivot, that is
{\displaystyle I=mr^{2}.}
This can be shown as follows: The force of gravity on the mass of a simple pendulum generates a torque
{\displaystyle {\boldsymbol {\tau }}=\mathbf {r} \times \mathbf {F} }
around the axis perpendicular to the plane of the pendulum movement. Here {\displaystyle \mathbf {r} } is the distance vector from the torque axis to the pendulum center of mass, and {\displaystyle \mathbf {F} } is the net force on the mass. Associated with this torque is an angular acceleration, {\displaystyle {\boldsymbol {\alpha }}}, of the string and mass around this axis. Since the mass is constrained to a circle, the tangential acceleration of the mass is {\displaystyle \mathbf {a} ={\boldsymbol {\alpha }}\times \mathbf {r} }. Since {\displaystyle \mathbf {F} =m\mathbf {a} } the torque equation becomes:
{\displaystyle {\begin{aligned}{\boldsymbol {\tau }}&=\mathbf {r} \times \mathbf {F} =\mathbf {r} \times (m{\boldsymbol {\alpha }}\times \mathbf {r} )\\&=m\left(\left(\mathbf {r} \cdot \mathbf {r} \right){\boldsymbol {\alpha }}-\left(\mathbf {r} \cdot {\boldsymbol {\alpha }}\right)\mathbf {r} \right)\\&=mr^{2}{\boldsymbol {\alpha }}=I\alpha \mathbf {\hat {k}} ,\end{aligned}}}
where {\displaystyle \mathbf {\hat {k}} } is a unit vector perpendicular to the plane of the pendulum. (The second to last step uses the vector triple product expansion with the perpendicularity of {\displaystyle {\boldsymbol {\alpha }}} and {\displaystyle \mathbf {r} }.) The quantity {\displaystyle I=mr^{2}} is the moment of inertia of this single mass around the pivot point.
The quantity {\displaystyle I=mr^{2}} also appears in the angular momentum of a simple pendulum, which is calculated from the velocity {\displaystyle \mathbf {v} ={\boldsymbol {\omega }}\times \mathbf {r} } of the pendulum mass around the pivot, where {\displaystyle {\boldsymbol {\omega }}} is the angular velocity of the mass about the pivot point. This angular momentum is given by
{\displaystyle {\begin{aligned}\mathbf {L} &=\mathbf {r} \times \mathbf {p} =\mathbf {r} \times \left(m{\boldsymbol {\omega }}\times \mathbf {r} \right)\\&=m\left(\left(\mathbf {r} \cdot \mathbf {r} \right){\boldsymbol {\omega }}-\left(\mathbf {r} \cdot {\boldsymbol {\omega }}\right)\mathbf {r} \right)\\&=mr^{2}{\boldsymbol {\omega }}=I\omega \mathbf {\hat {k}} ,\end{aligned}}}
using a similar derivation to the previous equation.
Similarly, the kinetic energy of the pendulum mass is defined by the velocity of the pendulum around the pivot to yield
{\displaystyle E_{\text{K}}={\frac {1}{2}}m\mathbf {v} \cdot \mathbf {v} ={\frac {1}{2}}\left(mr^{2}\right)\omega ^{2}={\frac {1}{2}}I\omega ^{2}.}
This shows that the quantity {\displaystyle I=mr^{2}} is how mass combines with the shape of a body to define rotational inertia. The moment of inertia of an arbitrarily shaped body is the sum of the values {\displaystyle mr^{2}} for all of the elements of mass in the body.
=== Compound pendulums ===
A compound pendulum is a body formed from an assembly of particles of continuous shape that rotates rigidly around a pivot. Its moment of inertia is the sum of the moments of inertia of each of the particles that it is composed of.: 395–396 : 51–53 The natural frequency ({\displaystyle \omega _{\text{n}}}) of a compound pendulum depends on its moment of inertia, {\displaystyle I_{P}},
{\displaystyle \omega _{\text{n}}={\sqrt {\frac {mgr}{I_{P}}}},}
where {\displaystyle m} is the mass of the object, {\displaystyle g} is local acceleration of gravity, and {\displaystyle r} is the distance from the pivot point to the center of mass of the object. Measuring this frequency of oscillation over small angular displacements provides an effective way of measuring moment of inertia of a body.: 516–517
Thus, to determine the moment of inertia of the body, simply suspend it from a convenient pivot point {\displaystyle P} so that it swings freely in a plane perpendicular to the direction of the desired moment of inertia, then measure its natural frequency or period of oscillation ({\displaystyle t}), to obtain
{\displaystyle I_{P}={\frac {mgr}{\omega _{\text{n}}^{2}}}={\frac {mgrt^{2}}{4\pi ^{2}}},}
where {\displaystyle t} is the period (duration) of oscillation (usually averaged over multiple periods).
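The pendulum formula I_P = mgrt²/(4π²) makes this measurement a one-line computation. A sketch (the mass, distance, and period values are hypothetical measurements chosen for illustration):

```python
import math

def moment_from_period(mass, r, period, g=9.81):
    """Moment of inertia about the pivot: I_P = m g r t^2 / (4 pi^2)."""
    return mass * g * r * period**2 / (4.0 * math.pi**2)

# Hypothetical measurement: a 2.0 kg body, pivot-to-center-of-mass
# distance 0.25 m, measured small-oscillation period 1.2 s.
I_P = moment_from_period(2.0, 0.25, 1.2)
```

In practice the period would be averaged over many swings, and the small-angle assumption must hold for the formula to apply.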
==== Center of oscillation ====
A simple pendulum that has the same natural frequency as a compound pendulum defines the length {\displaystyle L} from the pivot to a point called the center of oscillation of the compound pendulum. This point also corresponds to the center of percussion. The length {\displaystyle L} is determined from the formula,
{\displaystyle \omega _{\text{n}}={\sqrt {\frac {g}{L}}}={\sqrt {\frac {mgr}{I_{P}}}},}
or
{\displaystyle L={\frac {g}{\omega _{\text{n}}^{2}}}={\frac {I_{P}}{mr}}.}
The seconds pendulum, which provides the "tick" and "tock" of a grandfather clock, takes one second to swing from side-to-side. This is a period of two seconds, or a natural frequency of {\displaystyle \pi \ \mathrm {rad/s} } for the pendulum. In this case, the distance to the center of oscillation, {\displaystyle L}, can be computed to be
{\displaystyle L={\frac {g}{\omega _{\text{n}}^{2}}}\approx {\frac {9.81\ \mathrm {m/s^{2}} }{(3.14\ \mathrm {rad/s} )^{2}}}\approx 0.99\ \mathrm {m} .}
Notice that the distance to the center of oscillation of the seconds pendulum must be adjusted to accommodate different values for the local acceleration of gravity. Kater's pendulum is a compound pendulum that uses this property to measure the local acceleration of gravity, and is called a gravimeter.
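The seconds-pendulum calculation above is a direct evaluation of L = g/ω_n²; as a sketch (using 9.81 m/s² as a typical local value for gravity):

```python
import math

g = 9.81                 # m/s^2, a typical local value of gravitational acceleration
omega_n = math.pi        # rad/s: natural frequency of a two-second-period pendulum
L = g / omega_n**2       # distance from pivot to the center of oscillation, metres
```

Substituting a different local value of g changes L proportionally, which is the adjustment the text describes.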
== Measuring moment of inertia ==
The moment of inertia of a complex system such as a vehicle or airplane around its vertical axis can be measured by suspending the system from three points to form a trifilar pendulum. A trifilar pendulum is a platform supported by three wires designed to oscillate in torsion around its vertical centroidal axis. The period of oscillation of the trifilar pendulum yields the moment of inertia of the system.
== Moment of inertia of area ==
Moment of inertia of area is also known as the second moment of area and its physical meaning is completely different from the mass moment of inertia.
These calculations are commonly used in civil engineering for structural design of beams and columns. Cross-sectional areas are calculated for the vertical moment about the x-axis, {\displaystyle I_{xx}}, and the horizontal moment about the y-axis, {\displaystyle I_{yy}}. Height (h) and breadth (b) are the linear measures, except for circles, which are described by the radius {\displaystyle r} (effectively half the breadth).
=== Sectional areas moment calculated thus ===
Square: {\displaystyle I_{xx}=I_{yy}={\frac {b^{4}}{12}}}
Rectangular: {\displaystyle I_{xx}={\frac {bh^{3}}{12}}} and {\displaystyle I_{yy}={\frac {hb^{3}}{12}}}
Triangular: {\displaystyle I_{xx}={\frac {bh^{3}}{36}}}
Circular: {\displaystyle I_{xx}=I_{yy}={\frac {1}{4}}{\pi }r^{4}={\frac {1}{64}}{\pi }d^{4}}
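These sectional formulas are simple enough to encode directly; a minimal sketch (function names are illustrative, not from any particular library):

```python
import math

def rect_Ixx(b, h):
    """Second moment of a rectangular section about its horizontal centroidal axis."""
    return b * h**3 / 12.0

def rect_Iyy(b, h):
    """Second moment of the same rectangle about its vertical centroidal axis."""
    return h * b**3 / 12.0

def circle_I(r):
    """Second moment of a circular section, equal about any centroidal axis."""
    return math.pi * r**4 / 4.0
```

A square is the b == h special case of the rectangle, giving b⁴/12 about either axis.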
== Motion in a fixed plane ==
=== Point mass ===
The moment of inertia about an axis of a body is calculated by summing {\displaystyle mr^{2}} for every particle in the body, where {\displaystyle r} is the perpendicular distance to the specified axis. To see how moment of inertia arises in the study of the movement of an extended body, it is convenient to consider a rigid assembly of point masses. (This equation can be used for axes that are not principal axes provided that it is understood that this does not fully describe the moment of inertia.)
Consider the kinetic energy of an assembly of {\displaystyle N} masses {\displaystyle m_{i}} that lie at the distances {\displaystyle r_{i}} from the pivot point {\displaystyle P}, which is the nearest point on the axis of rotation. It is the sum of the kinetic energy of the individual masses,: 516–517 : 1084–1085 : 1296–1300
{\displaystyle E_{\text{K}}=\sum _{i=1}^{N}{\frac {1}{2}}\,m_{i}\mathbf {v} _{i}\cdot \mathbf {v} _{i}=\sum _{i=1}^{N}{\frac {1}{2}}\,m_{i}\left(\omega r_{i}\right)^{2}={\frac {1}{2}}\,\omega ^{2}\sum _{i=1}^{N}m_{i}r_{i}^{2}.}
This shows that the moment of inertia of the body is the sum of each of the {\displaystyle mr^{2}} terms, that is
{\displaystyle I_{P}=\sum _{i=1}^{N}m_{i}r_{i}^{2}.}
Thus, moment of inertia is a physical property that combines the mass and distribution of the particles around the rotation axis. Notice that rotation about different axes of the same body yield different moments of inertia.
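The sum I_P = Σ mᵢrᵢ² is a one-liner in code; a sketch (the two masses and distances are an arbitrary example):

```python
def moment_of_inertia(masses, distances):
    """I_P = sum of m_i * r_i^2, with r_i the perpendicular distance to the axis."""
    return sum(m * r**2 for m, r in zip(masses, distances))

# Two 1 kg point masses at 1 m and 2 m from the axis:
# I = 1*1^2 + 1*2^2 = 5 kg·m^2.
I = moment_of_inertia([1.0, 1.0], [1.0, 2.0])
```

Re-running the same sum with distances measured from a different axis illustrates how the moment of inertia changes with the chosen axis.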
The moment of inertia of a continuous body rotating about a specified axis is calculated in the same way, except with infinitely many point particles. Thus the limits of summation are removed, and the sum is written as follows:
{\displaystyle I_{P}=\sum _{i}m_{i}r_{i}^{2}}
Another expression replaces the summation with an integral,
{\displaystyle I_{P}=\iiint _{Q}\rho (x,y,z)\left\|\mathbf {r} \right\|^{2}dV}
Here, the function {\displaystyle \rho } gives the mass density at each point {\displaystyle (x,y,z)}, {\displaystyle \mathbf {r} } is a vector perpendicular to the axis of rotation and extending from a point on the rotation axis to a point {\displaystyle (x,y,z)} in the solid, and the integration is evaluated over the volume {\displaystyle V} of the body {\displaystyle Q}. The moment of inertia of a flat surface is similar, with the mass density being replaced by its areal mass density and the integral evaluated over its area.
Note on second moment of area: The moment of inertia of a body moving in a plane and the second moment of area of a beam's cross-section are often confused. The moment of inertia of a body with the shape of the cross-section is the second moment of this area about the {\displaystyle z}-axis perpendicular to the cross-section, weighted by its density. This is also called the polar moment of the area, and is the sum of the second moments about the {\displaystyle x}- and {\displaystyle y}-axes. The stresses in a beam are calculated using the second moment of the cross-sectional area around either the {\displaystyle x}-axis or {\displaystyle y}-axis depending on the load.
==== Examples ====
The moment of inertia of a compound pendulum constructed from a thin disc mounted at the end of a thin rod that oscillates around a pivot at the other end of the rod, begins with the calculation of the moment of inertia of the thin rod and thin disc about their respective centers of mass.
The moment of inertia of a thin rod with constant cross-section {\displaystyle s} and density {\displaystyle \rho } and with length {\displaystyle \ell } about a perpendicular axis through its center of mass is determined by integration.: 1301 Align the {\displaystyle x}-axis with the rod and locate the origin at its center of mass at the center of the rod, then
{\displaystyle I_{C,{\text{rod}}}=\iiint _{Q}\rho \,x^{2}\,dV=\int _{-{\frac {\ell }{2}}}^{\frac {\ell }{2}}\rho \,x^{2}s\,dx=\left.\rho s{\frac {x^{3}}{3}}\right|_{-{\frac {\ell }{2}}}^{\frac {\ell }{2}}={\frac {\rho s}{3}}\left({\frac {\ell ^{3}}{8}}+{\frac {\ell ^{3}}{8}}\right)={\frac {m\ell ^{2}}{12}},}
where {\displaystyle m=\rho s\ell } is the mass of the rod.
The moment of inertia of a thin disc of constant thickness {\displaystyle s}, radius {\displaystyle R}, and density {\displaystyle \rho } about an axis through its center and perpendicular to its face (parallel to its axis of rotational symmetry) is determined by integration.: 1301 Align the {\displaystyle z}-axis with the axis of the disc and define a volume element as {\displaystyle dV=sr\,dr\,d\theta }, then
{\displaystyle I_{C,{\text{disc}}}=\iiint _{Q}\rho \,r^{2}\,dV=\int _{0}^{2\pi }\int _{0}^{R}\rho r^{2}sr\,dr\,d\theta =2\pi \rho s{\frac {R^{4}}{4}}={\frac {1}{2}}mR^{2},}
where {\displaystyle m=\pi R^{2}\rho s} is its mass.
The moment of inertia of the compound pendulum is now obtained by adding the moment of inertia of the rod and the disc around the pivot point {\displaystyle P} as,
{\displaystyle I_{P}=I_{C,{\text{rod}}}+M_{\text{rod}}\left({\frac {L}{2}}\right)^{2}+I_{C,{\text{disc}}}+M_{\text{disc}}(L+R)^{2},}
where {\displaystyle L} is the length of the pendulum. Notice that the parallel axis theorem is used to shift the moment of inertia from the center of mass to the pivot point of the pendulum.
A list of moments of inertia formulas for standard body shapes provides a way to obtain the moment of inertia of a complex body as an assembly of simpler shaped bodies. The parallel axis theorem is used to shift the reference point of the individual bodies to the reference point of the assembly.
As one more example, consider the moment of inertia of a solid sphere of constant density about an axis through its center of mass. This is determined by summing the moments of inertia of the thin discs that can form the sphere whose centers are along the axis chosen for consideration. If the surface of the sphere is defined by the equation: 1301
{\displaystyle x^{2}+y^{2}+z^{2}=R^{2},}
then the square of the radius {\displaystyle r} of the disc at the cross-section {\displaystyle z} along the {\displaystyle z}-axis is
{\displaystyle r(z)^{2}=x^{2}+y^{2}=R^{2}-z^{2}.}
Therefore, the moment of inertia of the sphere is the sum of the moments of inertia of the discs along the {\displaystyle z}-axis,
{\displaystyle {\begin{aligned}I_{C,{\text{sphere}}}&=\int _{-R}^{R}{\tfrac {1}{2}}\pi \rho r(z)^{4}\,dz=\int _{-R}^{R}{\tfrac {1}{2}}\pi \rho \left(R^{2}-z^{2}\right)^{2}\,dz\\[1ex]&={\tfrac {1}{2}}\pi \rho \left[R^{4}z-{\tfrac {2}{3}}R^{2}z^{3}+{\tfrac {1}{5}}z^{5}\right]_{-R}^{R}\\[1ex]&=\pi \rho \left(1-{\tfrac {2}{3}}+{\tfrac {1}{5}}\right)R^{5}\\[1ex]&={\tfrac {2}{5}}mR^{2},\end{aligned}}}
where {\textstyle m={\frac {4}{3}}\pi R^{3}\rho } is the mass of the sphere.
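The disc-summation argument above can be checked numerically by replacing the integral with a finite sum of thin discs (a sketch; slice count and midpoint-rule choice are illustrative):

```python
import math

def sphere_moment_by_discs(R, rho, num_slices=100000):
    """Sum thin-disc moments (1/2) * pi * rho * r(z)^4 * dz over -R <= z <= R."""
    dz = 2.0 * R / num_slices
    I = 0.0
    for i in range(num_slices):
        z = -R + (i + 0.5) * dz          # midpoint of each slice
        r2 = R**2 - z**2                 # squared disc radius at height z
        I += 0.5 * math.pi * rho * r2**2 * dz
    return I

R, rho = 1.0, 1.0
m = 4.0 / 3.0 * math.pi * R**3 * rho
I_numeric = sphere_moment_by_discs(R, rho)
# The sum approaches the closed form (2/5) m R^2 as the slices get thinner.
```

Agreement with (2/5)mR² to several decimal places confirms the integration carried out symbolically above.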
=== Rigid body ===
If a mechanical system is constrained to move parallel to a fixed plane, then the rotation of a body in the system occurs around an axis {\displaystyle \mathbf {\hat {k}} } parallel to this plane. In this case, the moment of inertia of the mass in this system is a scalar known as the polar moment of inertia. The definition of the polar moment of inertia can be obtained by considering momentum, kinetic energy and Newton's laws for the planar movement of a rigid system of particles.
If a system of {\displaystyle n} particles, {\displaystyle P_{i},i=1,\dots ,n}, are assembled into a rigid body, then the momentum of the system can be written in terms of positions relative to a reference point {\displaystyle \mathbf {R} }, and absolute velocities {\displaystyle \mathbf {v} _{i}}:
{\displaystyle {\begin{aligned}\Delta \mathbf {r} _{i}&=\mathbf {r} _{i}-\mathbf {R} ,\\\mathbf {v} _{i}&={\boldsymbol {\omega }}\times \left(\mathbf {r} _{i}-\mathbf {R} \right)+\mathbf {V} ={\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}+\mathbf {V} ,\end{aligned}}}
where {\displaystyle {\boldsymbol {\omega }}} is the angular velocity of the system and {\displaystyle \mathbf {V} } is the velocity of {\displaystyle \mathbf {R} }.
For planar movement the angular velocity vector is directed along the unit vector {\displaystyle \mathbf {k} } which is perpendicular to the plane of movement. Introduce the unit vectors {\displaystyle \mathbf {e} _{i}} from the reference point {\displaystyle \mathbf {R} } to a point {\displaystyle \mathbf {r} _{i}}, and the unit vector {\displaystyle \mathbf {\hat {t}} _{i}=\mathbf {\hat {k}} \times \mathbf {\hat {e}} _{i}}, so
{\displaystyle {\begin{aligned}\mathbf {\hat {e}} _{i}&={\frac {\Delta \mathbf {r} _{i}}{\Delta r_{i}}},\quad \mathbf {\hat {k}} ={\frac {\boldsymbol {\omega }}{\omega }},\quad \mathbf {\hat {t}} _{i}=\mathbf {\hat {k}} \times \mathbf {\hat {e}} _{i},\\\mathbf {v} _{i}&={\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}+\mathbf {V} =\omega \mathbf {\hat {k}} \times \Delta r_{i}\mathbf {\hat {e}} _{i}+\mathbf {V} =\omega \,\Delta r_{i}\mathbf {\hat {t}} _{i}+\mathbf {V} \end{aligned}}}
This defines the relative position vector and the velocity vector for the rigid system of the particles moving in a plane.
Note on the cross product: When a body moves parallel to a ground plane, the trajectories of all the points in the body lie in planes parallel to this ground plane. This means that any rotation that the body undergoes must be around an axis perpendicular to this plane. Planar movement is often presented as projected onto this ground plane so that the axis of rotation appears as a point. In this case, the angular velocity and angular acceleration of the body are scalars and the fact that they are vectors along the rotation axis is ignored. This is usually preferred for introductions to the topic. But in the case of moment of inertia, the combination of mass and geometry benefits from the geometric properties of the cross product. For this reason, in this section on planar movement the angular velocity and accelerations of the body are vectors perpendicular to the ground plane, and the cross product operations are the same as used for the study of spatial rigid body movement.
==== Angular momentum ====
The angular momentum vector for the planar movement of a rigid system of particles is given by
{\displaystyle {\begin{aligned}\mathbf {L} &=\sum _{i=1}^{n}m_{i}\Delta \mathbf {r} _{i}\times \mathbf {v} _{i}\\&=\sum _{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf {\hat {e}} _{i}\times \left(\omega \,\Delta r_{i}\mathbf {\hat {t}} _{i}+\mathbf {V} \right)\\&=\left(\sum _{i=1}^{n}m_{i}\,\Delta r_{i}^{2}\right)\omega \mathbf {\hat {k}} +\left(\sum _{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf {\hat {e}} _{i}\right)\times \mathbf {V} .\end{aligned}}}
Use the center of mass {\displaystyle \mathbf {C} } as the reference point so
{\displaystyle {\begin{aligned}\Delta r_{i}\mathbf {\hat {e}} _{i}&=\mathbf {r} _{i}-\mathbf {C} ,\\\sum _{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf {\hat {e}} _{i}&=0,\end{aligned}}}
and define the moment of inertia relative to the center of mass {\displaystyle I_{\mathbf {C} }} as
{\displaystyle I_{\mathbf {C} }=\sum _{i}m_{i}\,\Delta r_{i}^{2},}
then the equation for angular momentum simplifies to
{\displaystyle \mathbf {L} =I_{\mathbf {C} }\omega \mathbf {\hat {k}} .}
The moment of inertia {\displaystyle I_{\mathbf {C} }} about an axis perpendicular to the movement of the rigid system and through the center of mass is known as the polar moment of inertia. Specifically, it is the second moment of mass with respect to the orthogonal distance from an axis (or pole).
For a given amount of angular momentum, a decrease in the moment of inertia results in an increase in the angular velocity. Figure skaters can change their moment of inertia by pulling in their arms: a skater who begins a spin with outstretched arms rotates faster once the arms are pulled in, because of the reduced moment of inertia. A figure skater is not, however, a rigid body.
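The spin-up can be sketched directly from conservation of angular momentum, L = I_C ω. The moment-of-inertia values below are illustrative assumptions, not measured skater data.

```python
# Angular momentum L = I_C * omega is conserved in the absence of external torque.
I_arms_out = 3.0     # kg*m^2, arms outstretched (assumed value)
I_arms_in = 1.2      # kg*m^2, arms pulled in (assumed value)
omega_out = 2.0      # rad/s, initial spin rate

L = I_arms_out * omega_out   # conserved angular momentum
omega_in = L / I_arms_in     # smaller inertia forces a faster spin
print(omega_in)              # 5.0 rad/s
```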
==== Kinetic energy ====
The kinetic energy of a rigid system of particles moving in the plane is given by
{\displaystyle {\begin{aligned}E_{\text{K}}&={\frac {1}{2}}\sum _{i=1}^{n}m_{i}\mathbf {v} _{i}\cdot \mathbf {v} _{i},\\&={\frac {1}{2}}\sum _{i=1}^{n}m_{i}\left(\omega \,\Delta r_{i}\mathbf {\hat {t}} _{i}+\mathbf {V} \right)\cdot \left(\omega \,\Delta r_{i}\mathbf {\hat {t}} _{i}+\mathbf {V} \right),\\&={\frac {1}{2}}\omega ^{2}\left(\sum _{i=1}^{n}m_{i}\,\Delta r_{i}^{2}\mathbf {\hat {t}} _{i}\cdot \mathbf {\hat {t}} _{i}\right)+\omega \mathbf {V} \cdot \left(\sum _{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf {\hat {t}} _{i}\right)+{\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} \cdot \mathbf {V} .\end{aligned}}}
Let the reference point be the center of mass {\displaystyle \mathbf {C} } of the system so the second term becomes zero, and introduce the moment of inertia {\displaystyle I_{\mathbf {C} }} so the kinetic energy is given by
{\displaystyle E_{\text{K}}={\frac {1}{2}}I_{\mathbf {C} }\omega ^{2}+{\frac {1}{2}}M\mathbf {V} \cdot \mathbf {V} .}
The moment of inertia {\displaystyle I_{\mathbf {C} }} is the polar moment of inertia of the body.
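The split into a rotational term and a translational term can be verified numerically for a small planar system. The masses, positions, angular velocity and center-of-mass velocity below are arbitrary assumptions.

```python
# Planar rigid system: E_K summed particle-by-particle should equal
# (1/2) I_C omega^2 + (1/2) M V.V when the reference is the center of mass.
masses = [1.0, 2.0, 3.0]
pos = [(1.0, 0.0), (0.0, 2.0), (-1.0, -1.0)]   # assumed positions in the plane
omega = 1.5                                     # scalar angular velocity
V = (0.3, -0.4)                                 # velocity of the center of mass

M = sum(masses)
C = (sum(m * p[0] for m, p in zip(masses, pos)) / M,
     sum(m * p[1] for m, p in zip(masses, pos)) / M)

def vel(p):
    # v = omega k-hat x dr + V, with k-hat x (dx, dy) = (-dy, dx) in the plane
    dx, dy = p[0] - C[0], p[1] - C[1]
    return (-omega * dy + V[0], omega * dx + V[1])

E_direct = 0.0
for m, p in zip(masses, pos):
    vx, vy = vel(p)
    E_direct += 0.5 * m * (vx * vx + vy * vy)

I_C = sum(m * ((p[0] - C[0])**2 + (p[1] - C[1])**2) for m, p in zip(masses, pos))
E_split = 0.5 * I_C * omega**2 + 0.5 * M * (V[0]**2 + V[1]**2)
print(abs(E_direct - E_split) < 1e-9)   # True
```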
==== Newton's laws ====
Newton's laws for a rigid system of {\displaystyle n} particles, {\displaystyle P_{i},i=1,\dots ,n}, can be written in terms of a resultant force and torque at a reference point {\displaystyle \mathbf {R} }, to yield
{\displaystyle {\begin{aligned}\mathbf {F} &=\sum _{i=1}^{n}m_{i}\mathbf {A} _{i},\\{\boldsymbol {\tau }}&=\sum _{i=1}^{n}\Delta \mathbf {r} _{i}\times m_{i}\mathbf {A} _{i},\end{aligned}}}
where {\displaystyle \mathbf {r} _{i}} denotes the trajectory of each particle.
The kinematics of a rigid body yields the formula for the acceleration of the particle {\displaystyle P_{i}} in terms of the position {\displaystyle \mathbf {R} } and acceleration {\displaystyle \mathbf {A} } of the reference particle as well as the angular velocity vector {\displaystyle {\boldsymbol {\omega }}} and angular acceleration vector {\displaystyle {\boldsymbol {\alpha }}} of the rigid system of particles as
{\displaystyle \mathbf {A} _{i}={\boldsymbol {\alpha }}\times \Delta \mathbf {r} _{i}+{\boldsymbol {\omega }}\times {\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}+\mathbf {A} .}
For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along {\displaystyle \mathbf {\hat {k}} } perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors {\displaystyle \mathbf {\hat {e}} _{i}} from the reference point {\displaystyle \mathbf {R} } to a point {\displaystyle \mathbf {r} _{i}} and the unit vectors {\displaystyle \mathbf {\hat {t}} _{i}=\mathbf {\hat {k}} \times \mathbf {\hat {e}} _{i}}, so
{\displaystyle {\begin{aligned}\mathbf {A} _{i}&=\alpha \mathbf {\hat {k}} \times \Delta r_{i}\mathbf {\hat {e}} _{i}-\omega \mathbf {\hat {k}} \times \omega \mathbf {\hat {k}} \times \Delta r_{i}\mathbf {\hat {e}} _{i}+\mathbf {A} \\&=\alpha \Delta r_{i}\mathbf {\hat {t}} _{i}-\omega ^{2}\Delta r_{i}\mathbf {\hat {e}} _{i}+\mathbf {A} .\end{aligned}}}
This yields the resultant torque on the system as
{\displaystyle {\begin{aligned}{\boldsymbol {\tau }}&=\sum _{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf {\hat {e}} _{i}\times \left(\alpha \Delta r_{i}\mathbf {\hat {t}} _{i}-\omega ^{2}\Delta r_{i}\mathbf {\hat {e}} _{i}+\mathbf {A} \right)\\&=\left(\sum _{i=1}^{n}m_{i}\,\Delta r_{i}^{2}\right)\alpha \mathbf {\hat {k}} +\left(\sum _{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf {\hat {e}} _{i}\right)\times \mathbf {A} ,\end{aligned}}}
where {\displaystyle \mathbf {\hat {e}} _{i}\times \mathbf {\hat {e}} _{i}=\mathbf {0} }, and {\displaystyle \mathbf {\hat {e}} _{i}\times \mathbf {\hat {t}} _{i}=\mathbf {\hat {k}} } is the unit vector perpendicular to the plane for all of the particles {\displaystyle P_{i}}.
Use the center of mass {\displaystyle \mathbf {C} } as the reference point and define the moment of inertia relative to the center of mass {\displaystyle I_{\mathbf {C} }}, then the equation for the resultant torque simplifies to
{\displaystyle {\boldsymbol {\tau }}=I_{\mathbf {C} }\alpha \mathbf {\hat {k}} .}
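This planar Newton-Euler relation is the rotational analogue of F = ma: a torque divided by the polar moment of inertia gives the angular acceleration. The values below are illustrative assumptions.

```python
# tau = I_C * alpha about an axis through the center of mass (planar case).
I_C = 0.8            # kg*m^2, assumed polar moment of inertia about the center of mass
tau = 2.0            # N*m, assumed resultant torque
alpha = tau / I_C    # rad/s^2, resulting angular acceleration
print(alpha)
```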
== Motion in space of a rigid body, and the inertia matrix ==
The scalar moments of inertia appear as elements in a matrix when a system of particles is assembled into a rigid body that moves in three-dimensional space. This inertia matrix appears in the calculation of the angular momentum, kinetic energy and resultant torque of the rigid system of particles.
Let the system of {\displaystyle n} particles, {\displaystyle P_{i},i=1,\dots ,n}, be located at the coordinates {\displaystyle \mathbf {r} _{i}} with velocities {\displaystyle \mathbf {v} _{i}} relative to a fixed reference frame. For a (possibly moving) reference point {\displaystyle \mathbf {R} }, the relative positions are
{\displaystyle \Delta \mathbf {r} _{i}=\mathbf {r} _{i}-\mathbf {R} }
and the (absolute) velocities are
{\displaystyle \mathbf {v} _{i}={\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}+\mathbf {V} _{\mathbf {R} }}
where {\displaystyle {\boldsymbol {\omega }}} is the angular velocity of the system, and {\displaystyle \mathbf {V_{R}} } is the velocity of {\displaystyle \mathbf {R} }.
=== Angular momentum ===
Note that the cross product can be equivalently written as matrix multiplication by combining the first operand and the operator into a skew-symmetric matrix, {\displaystyle \left[\mathbf {b} \right]}, constructed from the components of {\displaystyle \mathbf {b} =(b_{x},b_{y},b_{z})}:
{\displaystyle {\begin{aligned}\mathbf {b} \times \mathbf {y} &\equiv \left[\mathbf {b} \right]\mathbf {y} \\\left[\mathbf {b} \right]&\equiv {\begin{bmatrix}0&-b_{z}&b_{y}\\b_{z}&0&-b_{x}\\-b_{y}&b_{x}&0\end{bmatrix}}.\end{aligned}}}
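A quick way to convince oneself of this equivalence is to compare [b]y with b × y numerically, here as a minimal pure-Python sketch with arbitrary test vectors.

```python
# The skew-symmetric matrix [b] reproduces the cross product: [b] y == b x y.
def skew(b):
    bx, by, bz = b
    return [[0.0, -bz,  by],
            [ bz, 0.0, -bx],
            [-by,  bx, 0.0]]

def matvec(A, y):
    return [sum(A[i][j] * y[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

b, y = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]   # arbitrary test vectors
print(matvec(skew(b), y) == cross(b, y))   # True
```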
The inertia matrix is constructed by considering the angular momentum, with the reference point {\displaystyle \mathbf {R} } of the body chosen to be the center of mass {\displaystyle \mathbf {C} }:
{\displaystyle {\begin{aligned}\mathbf {L} &=\sum _{i=1}^{n}m_{i}\,\Delta \mathbf {r} _{i}\times \mathbf {v} _{i}\\&=\sum _{i=1}^{n}m_{i}\,\Delta \mathbf {r} _{i}\times \left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}+\mathbf {V} _{\mathbf {R} }\right)\\&=\left(-\sum _{i=1}^{n}m_{i}\,\Delta \mathbf {r} _{i}\times \left(\Delta \mathbf {r} _{i}\times {\boldsymbol {\omega }}\right)\right)+\left(\sum _{i=1}^{n}m_{i}\,\Delta \mathbf {r} _{i}\times \mathbf {V} _{\mathbf {R} }\right),\end{aligned}}}
where the terms containing {\displaystyle \mathbf {V_{R}} } (with {\displaystyle \mathbf {R} =\mathbf {C} }) sum to zero by the definition of the center of mass.
Then, the skew-symmetric matrix {\displaystyle [\Delta \mathbf {r} _{i}]} obtained from the relative position vector {\displaystyle \Delta \mathbf {r} _{i}=\mathbf {r} _{i}-\mathbf {C} } can be used to define
{\displaystyle \mathbf {L} =\left(-\sum _{i=1}^{n}m_{i}\left[\Delta \mathbf {r} _{i}\right]^{2}\right){\boldsymbol {\omega }}=\mathbf {I} _{\mathbf {C} }{\boldsymbol {\omega }},}
where {\displaystyle \mathbf {I_{C}} }, defined by
{\displaystyle \mathbf {I} _{\mathbf {C} }=-\sum _{i=1}^{n}m_{i}\left[\Delta \mathbf {r} _{i}\right]^{2},}
is the symmetric inertia matrix of the rigid system of particles measured relative to the center of mass {\displaystyle \mathbf {C} }.
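This construction can be sketched directly for point masses; the masses and positions below are arbitrary assumptions.

```python
# Inertia matrix I_C = -sum_i m_i [dr_i]^2 built from point masses.
def skew(v):
    x, y, z = v
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

masses = [1.0, 2.0, 3.0]
pos = [(1.0, 0.0, 0.0), (0.0, 1.0, 1.0), (-1.0, 2.0, 0.0)]   # assumed data
M = sum(masses)
C = [sum(m * p[a] for m, p in zip(masses, pos)) / M for a in range(3)]

I_C = [[0.0] * 3 for _ in range(3)]
for m, p in zip(masses, pos):
    S = skew([p[a] - C[a] for a in range(3)])
    S2 = matmul(S, S)
    for i in range(3):
        for j in range(3):
            I_C[i][j] -= m * S2[i][j]

# The result is a symmetric 3x3 matrix
print(all(abs(I_C[i][j] - I_C[j][i]) < 1e-12 for i in range(3) for j in range(3)))
```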
=== Kinetic energy ===
The kinetic energy of a rigid system of particles can be formulated in terms of the center of mass and a matrix of mass moments of inertia of the system. Let the system of {\displaystyle n} particles {\displaystyle P_{i},i=1,\dots ,n} be located at the coordinates {\displaystyle \mathbf {r} _{i}} with velocities {\displaystyle \mathbf {v} _{i}}, then the kinetic energy is
{\displaystyle E_{\text{K}}={\frac {1}{2}}\sum _{i=1}^{n}m_{i}\mathbf {v} _{i}\cdot \mathbf {v} _{i}={\frac {1}{2}}\sum _{i=1}^{n}m_{i}\left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}+\mathbf {V} _{\mathbf {C} }\right)\cdot \left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}+\mathbf {V} _{\mathbf {C} }\right),}
where {\displaystyle \Delta \mathbf {r} _{i}=\mathbf {r} _{i}-\mathbf {C} } is the position vector of a particle relative to the center of mass.
This equation expands to yield three terms
{\displaystyle E_{\text{K}}={\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}\right)\cdot \left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}\right)\right)+\left(\sum _{i=1}^{n}m_{i}\mathbf {V} _{\mathbf {C} }\cdot \left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}\right)\right)+{\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\mathbf {V} _{\mathbf {C} }\cdot \mathbf {V} _{\mathbf {C} }\right).}
Since the center of mass is defined by {\displaystyle \sum _{i=1}^{n}m_{i}\Delta \mathbf {r} _{i}=0}, the second term in this equation is zero. Introduce the skew-symmetric matrix {\displaystyle [\Delta \mathbf {r} _{i}]} so the kinetic energy becomes
{\displaystyle {\begin{aligned}E_{\text{K}}&={\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\left(\left[\Delta \mathbf {r} _{i}\right]{\boldsymbol {\omega }}\right)\cdot \left(\left[\Delta \mathbf {r} _{i}\right]{\boldsymbol {\omega }}\right)\right)+{\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} _{\mathbf {C} }\cdot \mathbf {V} _{\mathbf {C} }\\&={\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\left({\boldsymbol {\omega }}^{\mathsf {T}}\left[\Delta \mathbf {r} _{i}\right]^{\mathsf {T}}\left[\Delta \mathbf {r} _{i}\right]{\boldsymbol {\omega }}\right)\right)+{\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} _{\mathbf {C} }\cdot \mathbf {V} _{\mathbf {C} }\\&={\frac {1}{2}}{\boldsymbol {\omega }}\cdot \left(-\sum _{i=1}^{n}m_{i}\left[\Delta \mathbf {r} _{i}\right]^{2}\right){\boldsymbol {\omega }}+{\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} _{\mathbf {C} }\cdot \mathbf {V} _{\mathbf {C} }.\end{aligned}}}
Thus, the kinetic energy of the rigid system of particles is given by
{\displaystyle E_{\text{K}}={\frac {1}{2}}{\boldsymbol {\omega }}\cdot \mathbf {I} _{\mathbf {C} }{\boldsymbol {\omega }}+{\frac {1}{2}}M\mathbf {V} _{\mathbf {C} }^{2}.}
where {\displaystyle \mathbf {I_{C}} } is the inertia matrix relative to the center of mass and {\displaystyle M} is the total mass.
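The spatial kinetic-energy split can also be checked numerically against the direct particle sum, using the identity −[r]²w = |r|²w − (r·w)r to apply I_C to ω. All numerical values below are arbitrary assumptions.

```python
# E_K = (1/2) omega . I_C omega + (1/2) M |V_C|^2 versus the direct sum.
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

masses = [1.0, 2.0, 3.0]
pos = [(1.0, 0.0, 0.0), (0.0, 1.0, 1.0), (-1.0, 2.0, 0.0)]  # assumed data
omega = [0.5, -1.0, 2.0]
V_C = [0.1, 0.2, -0.3]

M = sum(masses)
C = [sum(m * p[a] for m, p in zip(masses, pos)) / M for a in range(3)]
dr = [[p[a] - C[a] for a in range(3)] for p in pos]

# Direct sum over particles with v_i = omega x dr_i + V_C
E_direct = 0.0
for m, d in zip(masses, dr):
    v = [c + V_C[a] for a, c in enumerate(cross(omega, d))]
    E_direct += 0.5 * m * dot(v, v)

# I_C omega via the identity -[dr]^2 w = |dr|^2 w - (dr.w) dr
I_C_omega = [sum(m * (dot(d, d) * omega[a] - dot(d, omega) * d[a])
                 for m, d in zip(masses, dr)) for a in range(3)]
E_split = 0.5 * dot(omega, I_C_omega) + 0.5 * M * dot(V_C, V_C)
print(abs(E_direct - E_split) < 1e-9)   # True
```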
=== Resultant torque ===
The inertia matrix appears in the application of Newton's second law to a rigid assembly of particles. The resultant torque on this system is,
{\displaystyle {\boldsymbol {\tau }}=\sum _{i=1}^{n}\left(\mathbf {r_{i}} -\mathbf {R} \right)\times m_{i}\mathbf {a} _{i},}
where {\displaystyle \mathbf {a} _{i}} is the acceleration of the particle {\displaystyle P_{i}}. The kinematics of a rigid body yields the formula for the acceleration of the particle {\displaystyle P_{i}} in terms of the position {\displaystyle \mathbf {R} } and acceleration {\displaystyle \mathbf {A} _{\mathbf {R} }} of the reference point, as well as the angular velocity vector {\displaystyle {\boldsymbol {\omega }}} and angular acceleration vector {\displaystyle {\boldsymbol {\alpha }}} of the rigid system as
{\displaystyle \mathbf {a} _{i}={\boldsymbol {\alpha }}\times \left(\mathbf {r} _{i}-\mathbf {R} \right)+{\boldsymbol {\omega }}\times \left({\boldsymbol {\omega }}\times \left(\mathbf {r} _{i}-\mathbf {R} \right)\right)+\mathbf {A} _{\mathbf {R} }.}
Use the center of mass {\displaystyle \mathbf {C} } as the reference point, and introduce the skew-symmetric matrix {\displaystyle \left[\Delta \mathbf {r} _{i}\right]=\left[\mathbf {r} _{i}-\mathbf {C} \right]} to represent the cross product {\displaystyle (\mathbf {r} _{i}-\mathbf {C} )\times }, to obtain
{\displaystyle {\boldsymbol {\tau }}=\left(-\sum _{i=1}^{n}m_{i}\left[\Delta \mathbf {r} _{i}\right]^{2}\right){\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times \left(-\sum _{i=1}^{n}m_{i}\left[\Delta \mathbf {r} _{i}\right]^{2}\right){\boldsymbol {\omega }}}
The calculation uses the identity
{\displaystyle \Delta \mathbf {r} _{i}\times \left({\boldsymbol {\omega }}\times \left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}\right)\right)+{\boldsymbol {\omega }}\times \left(\left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}\right)\times \Delta \mathbf {r} _{i}\right)=0,}
obtained from the Jacobi identity for the triple cross product.
Thus, the resultant torque on the rigid system of particles is given by
{\displaystyle {\boldsymbol {\tau }}=\mathbf {I} _{\mathbf {C} }{\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times \mathbf {I} _{\mathbf {C} }{\boldsymbol {\omega }},}
where {\displaystyle \mathbf {I_{C}} } is the inertia matrix relative to the center of mass.
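This torque equation, including the gyroscopic term ω × I_C ω, can be evaluated numerically. The diagonal inertia matrix and the spin state below are illustrative assumptions.

```python
# tau = I_C alpha + omega x (I_C omega) for an assumed diagonal inertia matrix.
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

I_C = [[2.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 4.0]]  # assumed values
omega = [1.0, 2.0, 3.0]     # assumed angular velocity
alpha = [0.1, 0.0, -0.2]    # assumed angular acceleration

gyro = cross(omega, matvec(I_C, omega))   # gyroscopic term, nonzero even if alpha = 0
tau = [a + b for a, b in zip(matvec(I_C, alpha), gyro)]
print(tau)
```

Note that the gyroscopic term is nonzero whenever ω is not aligned with a principal axis, so a torque is required just to hold the angular velocity constant.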
=== Parallel axis theorem ===
The inertia matrix of a body depends on the choice of the reference point. There is a useful relationship between the inertia matrix relative to the center of mass {\displaystyle \mathbf {C} } and the inertia matrix relative to another point {\displaystyle \mathbf {R} }. This relationship is called the parallel axis theorem.
Consider the inertia matrix {\displaystyle \mathbf {I_{R}} } obtained for a rigid system of particles measured relative to a reference point {\displaystyle \mathbf {R} }, given by
{\displaystyle \mathbf {I} _{\mathbf {R} }=-\sum _{i=1}^{n}m_{i}\left[\mathbf {r} _{i}-\mathbf {R} \right]^{2}.}
Let {\displaystyle \mathbf {C} } be the center of mass of the rigid system, then
{\displaystyle \mathbf {R} =(\mathbf {R} -\mathbf {C} )+\mathbf {C} =\mathbf {d} +\mathbf {C} ,}
where {\displaystyle \mathbf {d} } is the vector from the center of mass {\displaystyle \mathbf {C} } to the reference point {\displaystyle \mathbf {R} }. Use this equation to compute the inertia matrix,
{\displaystyle \mathbf {I} _{\mathbf {R} }=-\sum _{i=1}^{n}m_{i}[\mathbf {r} _{i}-\left(\mathbf {C} +\mathbf {d} \right)]^{2}=-\sum _{i=1}^{n}m_{i}[\left(\mathbf {r} _{i}-\mathbf {C} \right)-\mathbf {d} ]^{2}.}
Distribute over the cross product to obtain
{\displaystyle \mathbf {I} _{\mathbf {R} }=-\left(\sum _{i=1}^{n}m_{i}[\mathbf {r} _{i}-\mathbf {C} ]^{2}\right)+\left(\sum _{i=1}^{n}m_{i}[\mathbf {r} _{i}-\mathbf {C} ]\right)[\mathbf {d} ]+[\mathbf {d} ]\left(\sum _{i=1}^{n}m_{i}[\mathbf {r} _{i}-\mathbf {C} ]\right)-\left(\sum _{i=1}^{n}m_{i}\right)[\mathbf {d} ]^{2}.}
The first term is the inertia matrix {\displaystyle \mathbf {I_{C}} } relative to the center of mass. The second and third terms are zero by definition of the center of mass {\displaystyle \mathbf {C} }. And the last term is the total mass of the system multiplied by the square of the skew-symmetric matrix {\displaystyle [\mathbf {d} ]} constructed from {\displaystyle \mathbf {d} }.
The result is the parallel axis theorem,
{\displaystyle \mathbf {I} _{\mathbf {R} }=\mathbf {I} _{\mathbf {C} }-M[\mathbf {d} ]^{2},}
where {\displaystyle \mathbf {d} } is the vector from the center of mass {\displaystyle \mathbf {C} } to the reference point {\displaystyle \mathbf {R} }.
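The theorem can be checked numerically by computing I_R directly and via I_C − M[d]² for the same point masses; all numerical values below are arbitrary assumptions.

```python
# Parallel axis theorem: I_R = I_C - M [d]^2, checked against direct computation.
def skew(v):
    x, y, z = v
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inertia(masses, pos, ref):
    """Inertia matrix -sum_i m_i [r_i - ref]^2."""
    I = [[0.0] * 3 for _ in range(3)]
    for m, p in zip(masses, pos):
        S = skew([p[a] - ref[a] for a in range(3)])
        S2 = matmul(S, S)
        for i in range(3):
            for j in range(3):
                I[i][j] -= m * S2[i][j]
    return I

masses = [1.0, 2.0, 3.0]
pos = [(1.0, 0.0, 0.0), (0.0, 1.0, 1.0), (-1.0, 2.0, 0.0)]  # assumed data
M = sum(masses)
C = [sum(m * p[a] for m, p in zip(masses, pos)) / M for a in range(3)]
R = (0.5, -0.5, 1.0)                     # assumed reference point
d = [R[a] - C[a] for a in range(3)]      # vector from C to R

I_R_direct = inertia(masses, pos, R)
I_C = inertia(masses, pos, C)
D2 = matmul(skew(d), skew(d))
I_R_theorem = [[I_C[i][j] - M * D2[i][j] for j in range(3)] for i in range(3)]
print(all(abs(I_R_direct[i][j] - I_R_theorem[i][j]) < 1e-9
          for i in range(3) for j in range(3)))   # True
```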
Note on the minus sign: By using the skew symmetric matrix of position vectors relative to the reference point, the inertia matrix of each particle has the form {\displaystyle -m\left[\mathbf {r} \right]^{2}}, which is similar to the {\displaystyle mr^{2}} that appears in planar movement. However, to make this work out correctly a minus sign is needed. This minus sign can be absorbed into the term {\displaystyle m\left[\mathbf {r} \right]^{\mathsf {T}}\left[\mathbf {r} \right]}, if desired, by using the skew-symmetry property of {\displaystyle [\mathbf {r} ]}.
=== Scalar moment of inertia in a plane ===
The scalar moment of inertia, {\displaystyle I_{L}}, of a body about a specified axis whose direction is specified by the unit vector {\displaystyle \mathbf {\hat {k}} } and passes through the body at a point {\displaystyle \mathbf {R} } is as follows:
{\displaystyle I_{L}=\mathbf {\hat {k}} \cdot \left(-\sum _{i=1}^{N}m_{i}\left[\Delta \mathbf {r} _{i}\right]^{2}\right)\mathbf {\hat {k}} =\mathbf {\hat {k}} \cdot \mathbf {I} _{\mathbf {R} }\mathbf {\hat {k}} =\mathbf {\hat {k}} ^{\mathsf {T}}\mathbf {I} _{\mathbf {R} }\mathbf {\hat {k}} ,}
where {\displaystyle \mathbf {I_{R}} } is the moment of inertia matrix of the system relative to the reference point {\displaystyle \mathbf {R} }, and {\displaystyle [\Delta \mathbf {r} _{i}]} is the skew symmetric matrix obtained from the vector {\displaystyle \Delta \mathbf {r} _{i}=\mathbf {r} _{i}-\mathbf {R} }.
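The formula can be verified numerically: build I_R from assumed point masses, then compare k·I_R k with the direct sum of m_i times squared perpendicular distances to the axis. The masses, positions, and axis direction below are arbitrary assumptions.

```python
import math

def skew(v):
    x, y, z = v
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][n] * B[n][j] for n in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

masses = [1.0, 2.0]
pos = [(1.0, 2.0, 0.0), (0.0, -1.0, 3.0)]   # assumed point masses
R = (0.0, 0.0, 0.0)                          # assumed reference point
k = [1.0 / math.sqrt(3)] * 3                 # assumed axis direction (unit vector)

# Inertia matrix relative to R: -sum_i m_i [dr_i]^2
I_R = [[0.0] * 3 for _ in range(3)]
for m, p in zip(masses, pos):
    S = skew([p[a] - R[a] for a in range(3)])
    S2 = matmul(S, S)
    for i in range(3):
        for j in range(3):
            I_R[i][j] -= m * S2[i][j]

I_L = sum(ki * x for ki, x in zip(k, matvec(I_R, k)))   # k . I_R k

# Direct computation from perpendicular distances to the line through R along k
I_direct = 0.0
for m, p in zip(masses, pos):
    dr = [p[a] - R[a] for a in range(3)]
    proj = sum(dr[a] * k[a] for a in range(3))
    perp2 = sum((dr[a] - proj * k[a]) ** 2 for a in range(3))
    I_direct += m * perp2
print(abs(I_L - I_direct) < 1e-9)   # True
```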
This is derived as follows. Let a rigid assembly of {\displaystyle n} particles, {\displaystyle P_{i},i=1,\dots ,n}, have coordinates {\displaystyle \mathbf {r} _{i}}. Choose {\displaystyle \mathbf {R} } as a reference point and compute the moment of inertia around a line L defined by the unit vector {\displaystyle \mathbf {\hat {k}} } through the reference point {\displaystyle \mathbf {R} }, {\displaystyle \mathbf {L} (t)=\mathbf {R} +t\mathbf {\hat {k}} }. The perpendicular vector from this line to the particle {\displaystyle P_{i}} is obtained from {\displaystyle \Delta \mathbf {r} _{i}} by removing the component that projects onto {\displaystyle \mathbf {\hat {k}} }.
{\displaystyle \Delta \mathbf {r} _{i}^{\perp }=\Delta \mathbf {r} _{i}-\left(\mathbf {\hat {k}} \cdot \Delta \mathbf {r} _{i}\right)\mathbf {\hat {k}} =\left(\mathbf {E} -\mathbf {\hat {k}} \mathbf {\hat {k}} ^{\mathsf {T}}\right)\Delta \mathbf {r} _{i},}
where {\displaystyle \mathbf {E} } is the identity matrix, so as to avoid confusion with the inertia matrix, and {\displaystyle \mathbf {\hat {k}} \mathbf {\hat {k}} ^{\mathsf {T}}} is the outer product matrix formed from the unit vector {\displaystyle \mathbf {\hat {k}} } along the line {\displaystyle L}.
To relate this scalar moment of inertia to the inertia matrix of the body, introduce the skew-symmetric matrix {\displaystyle \left[\mathbf {\hat {k}} \right]} such that {\displaystyle \left[\mathbf {\hat {k}} \right]\mathbf {y} =\mathbf {\hat {k}} \times \mathbf {y} }, then we have the identity
{\displaystyle -\left[\mathbf {\hat {k}} \right]^{2}\equiv \left|\mathbf {\hat {k}} \right|^{2}\left(\mathbf {E} -\mathbf {\hat {k}} \mathbf {\hat {k}} ^{\mathsf {T}}\right)=\mathbf {E} -\mathbf {\hat {k}} \mathbf {\hat {k}} ^{\mathsf {T}},}
noting that {\displaystyle \mathbf {\hat {k}} } is a unit vector.
The magnitude squared of the perpendicular vector is
{\displaystyle {\begin{aligned}\left|\Delta \mathbf {r} _{i}^{\perp }\right|^{2}&=\left(-\left[\mathbf {\hat {k}} \right]^{2}\Delta \mathbf {r} _{i}\right)\cdot \left(-\left[\mathbf {\hat {k}} \right]^{2}\Delta \mathbf {r} _{i}\right)\\&=\left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\cdot \left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\end{aligned}}}
The simplification of this equation uses the triple scalar product identity
{\displaystyle \left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\cdot \left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\equiv \left(\left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\times \mathbf {\hat {k}} \right)\cdot \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right),}
where the dot and the cross products have been interchanged. Exchanging products, and simplifying by noting that {\displaystyle \Delta \mathbf {r} _{i}} and {\displaystyle \mathbf {\hat {k}} } are orthogonal:
{\displaystyle {\begin{aligned}&\left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\cdot \left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\\={}&\left(\left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\times \mathbf {\hat {k}} \right)\cdot \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\\={}&\left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\cdot \left(-\Delta \mathbf {r} _{i}\times \mathbf {\hat {k}} \right)\\={}&-\mathbf {\hat {k}} \cdot \left(\Delta \mathbf {r} _{i}\times \Delta \mathbf {r} _{i}\times \mathbf {\hat {k}} \right)\\={}&-\mathbf {\hat {k}} \cdot \left[\Delta \mathbf {r} _{i}\right]^{2}\mathbf {\hat {k}} .\end{aligned}}}
Thus, the moment of inertia around the line {\displaystyle L} through {\displaystyle \mathbf {R} } in the direction {\displaystyle \mathbf {\hat {k}} } is obtained from the calculation
{\displaystyle {\begin{aligned}I_{L}&=\sum _{i=1}^{N}m_{i}\left|\Delta \mathbf {r} _{i}^{\perp }\right|^{2}\\&=-\sum _{i=1}^{N}m_{i}\mathbf {\hat {k}} \cdot \left[\Delta \mathbf {r} _{i}\right]^{2}\mathbf {\hat {k}} =\mathbf {\hat {k}} \cdot \left(-\sum _{i=1}^{N}m_{i}\left[\Delta \mathbf {r} _{i}\right]^{2}\right)\mathbf {\hat {k}} \\&=\mathbf {\hat {k}} \cdot \mathbf {I} _{\mathbf {R} }\mathbf {\hat {k}} =\mathbf {\hat {k}} ^{\mathsf {T}}\mathbf {I} _{\mathbf {R} }\mathbf {\hat {k}} ,\end{aligned}}}
where {\displaystyle \mathbf {I_{R}} } is the moment of inertia matrix of the system relative to the reference point {\displaystyle \mathbf {R} }.
This shows that the inertia matrix can be used to calculate the moment of inertia of a body around any specified rotation axis in the body.
== Inertia tensor ==
For the same object, different axes of rotation will have different moments of inertia about those axes. In general, the moments of inertia are not equal unless the object is symmetric about all axes. The moment of inertia tensor is a convenient way to summarize all moments of inertia of an object with one quantity. It may be calculated with respect to any point in space, although for practical purposes the center of mass is most commonly used.
=== Definition ===
For a rigid object of {\displaystyle N} point masses {\displaystyle m_{k}}, the moment of inertia tensor is given by
{\displaystyle \mathbf {I} ={\begin{bmatrix}I_{11}&I_{12}&I_{13}\\I_{21}&I_{22}&I_{23}\\I_{31}&I_{32}&I_{33}\end{bmatrix}}.}
Its components are defined as
{\displaystyle I_{ij}\ {\stackrel {\mathrm {def} }{=}}\ \sum _{k=1}^{N}m_{k}\left(\left\|\mathbf {r} _{k}\right\|^{2}\delta _{ij}-x_{i}^{(k)}x_{j}^{(k)}\right)}
where {\displaystyle i}, {\displaystyle j} is equal to 1, 2, or 3 for {\displaystyle x}, {\displaystyle y}, and {\displaystyle z}, respectively, {\displaystyle \mathbf {r} _{k}=\left(x_{1}^{(k)},x_{2}^{(k)},x_{3}^{(k)}\right)} is the vector to the point mass {\displaystyle m_{k}} from the point about which the tensor is calculated, and {\displaystyle \delta _{ij}} is the Kronecker delta.
Note that, by the definition, {\displaystyle \mathbf {I} } is a symmetric tensor.
The diagonal elements are more succinctly written as
{\displaystyle {\begin{aligned}I_{xx}\ &{\stackrel {\mathrm {def} }{=}}\ \sum _{k=1}^{N}m_{k}\left(y_{k}^{2}+z_{k}^{2}\right),\\I_{yy}\ &{\stackrel {\mathrm {def} }{=}}\ \sum _{k=1}^{N}m_{k}\left(x_{k}^{2}+z_{k}^{2}\right),\\I_{zz}\ &{\stackrel {\mathrm {def} }{=}}\ \sum _{k=1}^{N}m_{k}\left(x_{k}^{2}+y_{k}^{2}\right),\end{aligned}}}
while the off-diagonal elements, also called the products of inertia, are
{\displaystyle {\begin{aligned}I_{xy}=I_{yx}\ &{\stackrel {\mathrm {def} }{=}}\ -\sum _{k=1}^{N}m_{k}x_{k}y_{k},\\I_{xz}=I_{zx}\ &{\stackrel {\mathrm {def} }{=}}\ -\sum _{k=1}^{N}m_{k}x_{k}z_{k},\\I_{yz}=I_{zy}\ &{\stackrel {\mathrm {def} }{=}}\ -\sum _{k=1}^{N}m_{k}y_{k}z_{k}.\end{aligned}}}
Here {\displaystyle I_{xx}} denotes the moment of inertia around the {\displaystyle x}-axis when the objects are rotated around the {\displaystyle x}-axis, {\displaystyle I_{xy}} denotes the moment of inertia around the {\displaystyle y}-axis when the objects are rotated around the {\displaystyle x}-axis, and so on.
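The component formula I_ij = Σ m_k (‖r_k‖² δ_ij − x_i x_j) can be evaluated directly for point masses; the masses and positions below are arbitrary assumptions.

```python
# Inertia tensor components for a set of point masses about the origin.
masses = [1.0, 2.0, 3.0]
pos = [(1.0, 0.0, 0.0), (0.0, 1.0, 1.0), (-1.0, 2.0, 0.0)]  # assumed data

def I_ij(i, j):
    total = 0.0
    for m, r in zip(masses, pos):
        r2 = r[0]**2 + r[1]**2 + r[2]**2
        total += m * ((r2 if i == j else 0.0) - r[i] * r[j])
    return total

I = [[I_ij(i, j) for j in range(3)] for i in range(3)]
# Diagonal: I_xx = sum m (y^2 + z^2); off-diagonals are the products of inertia
print(I[0][0], I[0][1])   # 16.0 6.0
```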
These quantities can be generalized to an object with distributed mass, described by a mass density function, in a similar fashion to the scalar moment of inertia. One then has
{\displaystyle \mathbf {I} =\iiint _{V}\rho (x,y,z)\left(\|\mathbf {r} \|^{2}\mathbf {E} _{3}-\mathbf {r} \otimes \mathbf {r} \right)\,dx\,dy\,dz,}
where r ⊗ r is the outer product, E_3 is the 3×3 identity matrix, and V is a region of space completely containing the object.
Alternatively, it can be written in terms of the angular momentum operator [r]x = r × x:
{\displaystyle \mathbf {I} =\iiint _{V}\rho (\mathbf {r} )[\mathbf {r} ]^{\textsf {T}}[\mathbf {r} ]\,dV=-\iiint _{V}\rho (\mathbf {r} )[\mathbf {r} ]^{2}\,dV}
The inertia tensor can be used in the same way as the inertia matrix to compute the scalar moment of inertia about an arbitrary axis in the direction n,
{\displaystyle I_{n}=\mathbf {n} \cdot \mathbf {I} \cdot \mathbf {n} ,}
where the dot product is taken with the corresponding elements in the component tensors. A product of inertia term such as I_12 is obtained by the computation
{\displaystyle I_{12}=\mathbf {e} _{1}\cdot \mathbf {I} \cdot \mathbf {e} _{2},}
and can be interpreted as the moment of inertia around the x-axis when the object rotates around the y-axis.
The components of tensors of degree two can be assembled into a matrix. For the inertia tensor this matrix is given by
{\displaystyle {\begin{aligned}\mathbf {I} &={\begin{bmatrix}I_{11}&I_{12}&I_{13}\\[1.8ex]I_{21}&I_{22}&I_{23}\\[1.8ex]I_{31}&I_{32}&I_{33}\end{bmatrix}}={\begin{bmatrix}I_{xx}&I_{xy}&I_{xz}\\[1.8ex]I_{yx}&I_{yy}&I_{yz}\\[1.8ex]I_{zx}&I_{zy}&I_{zz}\end{bmatrix}}\\[2ex]&=\sum _{k=1}^{N}{\begin{bmatrix}m_{k}\left(y_{k}^{2}+z_{k}^{2}\right)&-m_{k}x_{k}y_{k}&-m_{k}x_{k}z_{k}\\[1ex]-m_{k}x_{k}y_{k}&m_{k}\left(x_{k}^{2}+z_{k}^{2}\right)&-m_{k}y_{k}z_{k}\\[1ex]-m_{k}x_{k}z_{k}&-m_{k}y_{k}z_{k}&m_{k}\left(x_{k}^{2}+y_{k}^{2}\right)\end{bmatrix}}.\end{aligned}}}
It is common in rigid body mechanics to use notation that explicitly identifies the x-, y-, and z-axes, such as I_xx and I_xy, for the components of the inertia tensor.
=== Alternate inertia convention ===
There are some CAD and CAE applications such as SolidWorks, Unigraphics NX/Siemens NX and MSC Adams that use an alternate convention for the products of inertia. According to this convention, the minus sign is removed from the product of inertia formulas and instead inserted in the inertia matrix:
{\displaystyle {\begin{aligned}I_{xy}=I_{yx}\ &{\stackrel {\mathrm {def} }{=}}\ \sum _{k=1}^{N}m_{k}x_{k}y_{k},\\I_{xz}=I_{zx}\ &{\stackrel {\mathrm {def} }{=}}\ \sum _{k=1}^{N}m_{k}x_{k}z_{k},\\I_{yz}=I_{zy}\ &{\stackrel {\mathrm {def} }{=}}\ \sum _{k=1}^{N}m_{k}y_{k}z_{k},\\[3pt]\mathbf {I} ={\begin{bmatrix}I_{11}&I_{12}&I_{13}\\[1.8ex]I_{21}&I_{22}&I_{23}\\[1.8ex]I_{31}&I_{32}&I_{33}\end{bmatrix}}&={\begin{bmatrix}I_{xx}&-I_{xy}&-I_{xz}\\[1.8ex]-I_{yx}&I_{yy}&-I_{yz}\\[1.8ex]-I_{zx}&-I_{zy}&I_{zz}\end{bmatrix}}\\[1ex]&=\sum _{k=1}^{N}{\begin{bmatrix}m_{k}\left(y_{k}^{2}+z_{k}^{2}\right)&-m_{k}x_{k}y_{k}&-m_{k}x_{k}z_{k}\\[1ex]-m_{k}x_{k}y_{k}&m_{k}\left(x_{k}^{2}+z_{k}^{2}\right)&-m_{k}y_{k}z_{k}\\[1ex]-m_{k}x_{k}z_{k}&-m_{k}y_{k}z_{k}&m_{k}\left(x_{k}^{2}+y_{k}^{2}\right)\end{bmatrix}}.\end{aligned}}}
==== Determine inertia convention (principal axes method) ====
If one has the inertia data (I_xx, I_yy, I_zz, I_xy, I_xz, I_yz) without knowing which inertia convention has been used, the convention can be determined if one also has the principal axes. With the principal axes method, one makes inertia matrices from the following two assumptions:
The standard inertia convention has been used (I_12 = I_xy, I_13 = I_xz, I_23 = I_yz).
The alternate inertia convention has been used (I_12 = −I_xy, I_13 = −I_xz, I_23 = −I_yz).
Next, one calculates the eigenvectors for the two matrices. The matrix whose eigenvectors are parallel to the principal axes corresponds to the inertia convention that has been used.
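The principal axes method can be sketched numerically. The following NumPy example is illustrative only (the function names and tolerance are my own, not from the article): it builds the two candidate matrices and tests whose eigenvectors are parallel to the known principal axes.

```python
import numpy as np

def candidate_matrices(Ixx, Iyy, Izz, Ixy, Ixz, Iyz):
    """Inertia matrices implied by the standard and alternate conventions."""
    standard = np.array([[Ixx,  Ixy,  Ixz],
                         [Ixy,  Iyy,  Iyz],
                         [Ixz,  Iyz,  Izz]])
    alternate = np.array([[ Ixx, -Ixy, -Ixz],
                          [-Ixy,  Iyy, -Iyz],
                          [-Ixz, -Iyz,  Izz]])
    return standard, alternate

def matches_principal_axes(I, axes, tol=1e-8):
    """True if every eigenvector of I is (anti)parallel to one of the
    given principal axes (unit column vectors of `axes`)."""
    _, vecs = np.linalg.eigh(I)
    return all(any(abs(abs(v @ a) - 1.0) < tol for a in axes.T)
               for v in vecs.T)
```

For a generic (asymmetric) body, only the matrix assembled under the correct convention reproduces the principal axes; the other candidate has visibly rotated eigenvectors.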
=== Derivation of the tensor components ===
The distance r of a particle at x from the axis of rotation passing through the origin in the n̂ direction is |x − (x · n̂)n̂|, where n̂ is a unit vector. The moment of inertia about the axis is
{\displaystyle I=mr^{2}=m\left(\mathbf {x} -\left(\mathbf {x} \cdot \mathbf {\hat {n}} \right)\mathbf {\hat {n}} \right)\cdot \left(\mathbf {x} -\left(\mathbf {x} \cdot \mathbf {\hat {n}} \right)\mathbf {\hat {n}} \right)=m\left(\mathbf {x} ^{2}-2\mathbf {x} \left(\mathbf {x} \cdot \mathbf {\hat {n}} \right)\mathbf {\hat {n}} +\left(\mathbf {x} \cdot \mathbf {\hat {n}} \right)^{2}\mathbf {\hat {n}} ^{2}\right)=m\left(\mathbf {x} ^{2}-\left(\mathbf {x} \cdot \mathbf {\hat {n}} \right)^{2}\right).}
Rewrite the equation using the matrix transpose:
{\displaystyle I=m\left(\mathbf {x} ^{\textsf {T}}\mathbf {x} -\mathbf {\hat {n}} ^{\textsf {T}}\mathbf {x} \mathbf {x} ^{\textsf {T}}\mathbf {\hat {n}} \right)=m\cdot \mathbf {\hat {n}} ^{\textsf {T}}\left(\mathbf {x} ^{\textsf {T}}\mathbf {x} \cdot \mathbf {E_{3}} -\mathbf {x} \mathbf {x} ^{\textsf {T}}\right)\mathbf {\hat {n}} ,}
where E3 is the 3×3 identity matrix.
This leads to a tensor formula for the moment of inertia:
{\displaystyle I=m{\begin{bmatrix}n_{1}&n_{2}&n_{3}\end{bmatrix}}{\begin{bmatrix}y^{2}+z^{2}&-xy&-xz\\[0.5ex]-yx&x^{2}+z^{2}&-yz\\[0.5ex]-zx&-zy&x^{2}+y^{2}\end{bmatrix}}{\begin{bmatrix}n_{1}\\[0.7ex]n_{2}\\[0.7ex]n_{3}\end{bmatrix}}.}
For multiple particles, we need only recall that the moment of inertia is additive in order to see that this formula is correct.
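The additivity over particles can be checked numerically. This NumPy sketch (helper names and sample data are my own, not from the article) assembles the tensor for a set of point masses and verifies that nᵀ I n reproduces the direct sum of m_k times squared perpendicular distances:

```python
import numpy as np

def inertia_tensor(masses, positions):
    """I = sum_k m_k (|r_k|^2 E3 - r_k r_k^T), taken about the origin."""
    I = np.zeros((3, 3))
    for m, r in zip(masses, np.asarray(positions, dtype=float)):
        I += m * ((r @ r) * np.eye(3) - np.outer(r, r))
    return I

def moment_about_axis(I, n):
    """Scalar moment of inertia about the axis n: I_n = n . I . n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)  # normalize, since the formula assumes a unit vector
    return n @ I @ n
```

The symmetry of the result (I = Iᵀ) and agreement with the per-particle definition m|x − (x·n̂)n̂|² follow directly from the construction.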
=== Inertia tensor of translation ===
Let I_0 be the inertia tensor of a body calculated at its center of mass, and R be the displacement vector of the body. The inertia tensor of the translated body with respect to its original center of mass is given by:
{\displaystyle \mathbf {I} =\mathbf {I} _{0}+m[(\mathbf {R} \cdot \mathbf {R} )\mathbf {E} _{3}-\mathbf {R} \otimes \mathbf {R} ]}
where m is the body's mass, E_3 is the 3×3 identity matrix, and ⊗ is the outer product.
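This generalized parallel-axis theorem can be verified against a direct computation. A minimal NumPy sketch, with illustrative names of my own choosing:

```python
import numpy as np

def translate_inertia(I0, m, R):
    """Generalized parallel-axis theorem:
    I = I0 + m * ((R . R) E3 - R (outer) R)."""
    R = np.asarray(R, dtype=float)
    return I0 + m * ((R @ R) * np.eye(3) - np.outer(R, R))

def point_mass_tensor(masses, positions):
    """Inertia tensor of point masses about the origin."""
    return sum(m * ((r @ r) * np.eye(3) - np.outer(r, r))
               for m, r in zip(masses, np.asarray(positions, dtype=float)))
```

Shifting every particle by R (with the center of mass initially at the origin) and recomputing the tensor about the origin gives the same result as applying `translate_inertia` to I_0 with the total mass.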
=== Inertia tensor of rotation ===
Let R be the matrix that represents a body's rotation. The inertia tensor of the rotated body is given by:
{\displaystyle \mathbf {I} =\mathbf {R} \mathbf {I_{0}} \mathbf {R} ^{\textsf {T}}}
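The similarity-transform rule can likewise be checked by rotating the particle positions directly. A hedged sketch (the choice of a z-axis rotation and all names are illustrative):

```python
import numpy as np

def rotation_z(theta):
    """Rotation matrix about the z-axis (an illustrative choice of R)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def point_mass_tensor(masses, positions):
    """Inertia tensor of point masses about the origin."""
    return sum(m * ((r @ r) * np.eye(3) - np.outer(r, r))
               for m, r in zip(masses, np.asarray(positions, dtype=float)))
```

Because rotation preserves |r| and R E_3 Rᵀ = E_3, the tensor of the rotated points equals R I_0 Rᵀ.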
== Inertia matrix in different reference frames ==
The use of the inertia matrix in Newton's second law assumes its components are computed relative to axes parallel to the inertial frame and not relative to a body-fixed reference frame. This means that as the body moves the components of the inertia matrix change with time. In contrast, the components of the inertia matrix measured in a body-fixed frame are constant.
=== Body frame ===
Let the body frame inertia matrix relative to the center of mass be denoted I_C^B, and define the orientation of the body frame relative to the inertial frame by the rotation matrix A, such that
{\displaystyle \mathbf {x} =\mathbf {A} \mathbf {y} ,}
where vectors y in the body-fixed coordinate frame have coordinates x in the inertial frame. Then the inertia matrix of the body measured in the inertial frame is given by
{\displaystyle \mathbf {I} _{\mathbf {C} }=\mathbf {A} \mathbf {I} _{\mathbf {C} }^{B}\mathbf {A} ^{\mathsf {T}}.}
Notice that A changes as the body moves, while I_C^B remains constant.
=== Principal axes ===
Measured in the body frame, the inertia matrix is a constant real symmetric matrix. A real symmetric matrix has an eigendecomposition into the product of a rotation matrix Q and a diagonal matrix Λ, given by
{\displaystyle \mathbf {I} _{\mathbf {C} }^{B}=\mathbf {Q} {\boldsymbol {\Lambda }}\mathbf {Q} ^{\mathsf {T}},}
where
{\displaystyle {\boldsymbol {\Lambda }}={\begin{bmatrix}I_{1}&0&0\\0&I_{2}&0\\0&0&I_{3}\end{bmatrix}}.}
The columns of the rotation matrix Q define the directions of the principal axes of the body, and the constants I_1, I_2, and I_3 are called the principal moments of inertia. This result was first shown by J. J. Sylvester (1852), and is a form of Sylvester's law of inertia. When the body has an axis of symmetry (sometimes called the figure axis or axis of figure), the other two moments of inertia will be identical and any axis perpendicular to the axis of symmetry will be a principal axis.
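In practice the eigendecomposition is a one-liner. This sketch uses a made-up body-frame matrix (not from the article); `np.linalg.eigh` returns the principal moments in ascending order and the columns of Q as orthonormal eigenvectors:

```python
import numpy as np

# Principal moments and axes via the eigendecomposition I = Q Λ Q^T.
# The example inertia matrix below is illustrative only.
I_body = np.array([[ 3.0, -1.0, 0.0],
                   [-1.0,  3.0, 0.0],
                   [ 0.0,  0.0, 5.0]])
principal_moments, Q = np.linalg.eigh(I_body)
```

Reassembling Q Λ Qᵀ recovers the original matrix, which is a quick sanity check on any principal-axis computation.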
A toy top is an example of a rotating rigid body, and the word top is used in the names of types of rigid bodies. When all principal moments of inertia are distinct, the principal axes through center of mass are uniquely specified and the rigid body is called an asymmetric top. If two principal moments are the same, the rigid body is called a symmetric top and there is no unique choice for the two corresponding principal axes. If all three principal moments are the same, the rigid body is called a spherical top (although it need not be spherical) and any axis can be considered a principal axis, meaning that the moment of inertia is the same about any axis.
The principal axes are often aligned with the object's symmetry axes. If a rigid body has an axis of symmetry of order m, meaning it is symmetrical under rotations of 360°/m about the given axis, that axis is a principal axis. When m > 2, the rigid body is a symmetric top. If a rigid body has at least two symmetry axes that are not parallel or perpendicular to each other, it is a spherical top, for example, a cube or any other Platonic solid.
The motion of vehicles is often described in terms of yaw, pitch, and roll which usually correspond approximately to rotations about the three principal axes. If the vehicle has bilateral symmetry then one of the principal axes will correspond exactly to the transverse (pitch) axis.
A practical example of this mathematical phenomenon is the routine automotive task of balancing a tire, which basically means adjusting the distribution of mass of a car wheel such that its principal axis of inertia is aligned with the axle so the wheel does not wobble.
Rotating molecules are also classified as asymmetric, symmetric, or spherical tops, and the structure of their rotational spectra is different for each type.
=== Ellipsoid ===
The moment of inertia matrix in body-frame coordinates is a quadratic form that defines a surface in the body called Poinsot's ellipsoid. Let Λ be the inertia matrix relative to the center of mass aligned with the principal axes; then the surface
{\displaystyle \mathbf {x} ^{\mathsf {T}}{\boldsymbol {\Lambda }}\mathbf {x} =1,}
or
{\displaystyle I_{1}x^{2}+I_{2}y^{2}+I_{3}z^{2}=1,}
defines an ellipsoid in the body frame. Write this equation in the form,
{\displaystyle \left({\frac {x}{1/{\sqrt {I_{1}}}}}\right)^{2}+\left({\frac {y}{1/{\sqrt {I_{2}}}}}\right)^{2}+\left({\frac {z}{1/{\sqrt {I_{3}}}}}\right)^{2}=1,}
to see that the semi-principal diameters of this ellipsoid are given by
{\displaystyle a={\frac {1}{\sqrt {I_{1}}}},\quad b={\frac {1}{\sqrt {I_{2}}}},\quad c={\frac {1}{\sqrt {I_{3}}}}.}
Let a point x on this ellipsoid be defined in terms of its magnitude and direction, x = ‖x‖n, where n is a unit vector. Then the relationship presented above, between the inertia matrix and the scalar moment of inertia I_n around an axis in the direction n, yields
{\displaystyle \mathbf {x} ^{\mathsf {T}}{\boldsymbol {\Lambda }}\mathbf {x} =\|\mathbf {x} \|^{2}\mathbf {n} ^{\mathsf {T}}{\boldsymbol {\Lambda }}\mathbf {n} =\|\mathbf {x} \|^{2}I_{\mathbf {n} }=1.}
Thus, the magnitude of a point x in the direction n on the inertia ellipsoid is
{\displaystyle \|\mathbf {x} \|={\frac {1}{\sqrt {I_{\mathbf {n} }}}}.}
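This relation is easy to verify numerically. A minimal sketch, with an illustrative diagonal Λ of my own choosing:

```python
import numpy as np

def ellipsoid_radius(Lmbda, n):
    """Distance from the center of Poinsot's ellipsoid to its surface
    along the unit direction n: ||x|| = 1 / sqrt(I_n)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    I_n = n @ Lmbda @ n
    return 1.0 / np.sqrt(I_n)
```

The point x = ‖x‖ n then satisfies xᵀ Λ x = 1, and along a principal axis the radius reduces to the semi-principal diameter 1/√I_1.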
== See also ==
Central moment
List of moments of inertia
Moment of inertia factor
Planar lamina
Rotational energy
== References ==
== External links ==
Angular momentum and rigid-body rotation in two and three dimensions
Lecture notes on rigid-body rotation and moments of inertia
The moment of inertia tensor
An introductory lesson on moment of inertia: keeping a vertical pole not falling down (Java simulation)
Tutorial on finding moments of inertia, with problems and solutions on various basic shapes
Notes on mechanics of manipulation: the angular inertia tensor
Easy to use and Free Moment of Inertia Calculator online | Wikipedia/Inertia_tensor |
Mechanics (from Ancient Greek μηχανική (mēkhanikḗ) 'of machines') is the area of physics concerned with the relationships between force, matter, and motion among physical objects. Forces applied to objects may result in displacements, which are changes of an object's position relative to its environment.
Theoretical expositions of this branch of physics have their origins in Ancient Greece, for instance, in the writings of Aristotle and Archimedes (see History of classical mechanics and Timeline of classical mechanics). During the early modern period, scientists such as Galileo Galilei, Johannes Kepler, Christiaan Huygens, and Isaac Newton laid the foundation for what is now known as classical mechanics.
As a branch of classical physics, mechanics deals with bodies that are either at rest or are moving with velocities significantly less than the speed of light. It can also be defined as the physical science that deals with the motion of and forces on bodies not in the quantum realm.
== History ==
=== Antiquity ===
The ancient Greek philosophers were among the first to propose that abstract principles govern nature. The main theory of mechanics in antiquity was Aristotelian mechanics, though an alternative theory is exposed in the pseudo-Aristotelian Mechanical Problems, often attributed to one of his successors.
There is another tradition that goes back to the ancient Greeks where mathematics is used more extensively to analyze bodies statically or dynamically, an approach that may have been stimulated by prior work of the Pythagorean Archytas. Examples of this tradition include pseudo-Euclid (On the Balance), Archimedes (On the Equilibrium of Planes, On Floating Bodies), Hero (Mechanica), and Pappus (Collection, Book VIII).
=== Medieval age ===
In the Middle Ages, Aristotle's theories were criticized and modified by a number of figures, beginning with John Philoponus in the 6th century. A central problem was that of projectile motion, which was discussed by Hipparchus and Philoponus.
The Persian Islamic polymath Ibn Sīnā published his theory of motion in The Book of Healing (1020). He said that an impetus is imparted to a projectile by the thrower, and viewed it as persistent, requiring external forces such as air resistance to dissipate it. Ibn Sīnā made a distinction between 'force' and 'inclination' (called "mayl"), and argued that an object gains mayl when it moves in opposition to its natural motion. He concluded that the continuation of motion is attributed to the inclination transferred to the object, and that the object remains in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless acted upon, consistent with Newton's first law of motion.
On the question of a body subject to a constant (uniform) force, the 12th-century Jewish-Arab scholar Hibat Allah Abu'l-Barakat al-Baghdaadi (born Nathanel, Iraqi, of Baghdad) stated that constant force imparts constant acceleration. According to Shlomo Pines, al-Baghdaadi's theory of motion was "the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]."
Influenced by earlier writers such as Ibn Sina and al-Baghdaadi, the 14th-century French priest Jean Buridan developed the theory of impetus, which later developed into the modern theories of inertia, velocity, acceleration and momentum. This work and others were developed in 14th-century England by the Oxford Calculators such as Thomas Bradwardine, who studied and formulated various laws regarding falling bodies. The concept that falling bodies undergo uniformly accelerated motion was also worked out by the 14th-century Oxford Calculators.
=== Early modern age ===
Two central figures in the early modern age are Galileo Galilei and Isaac Newton. Galileo's final statement of his mechanics, particularly of falling bodies, is his Two New Sciences (1638). Newton's 1687 Philosophiæ Naturalis Principia Mathematica provided a detailed mathematical account of mechanics, using the newly developed mathematics of calculus and providing the basis of Newtonian mechanics.
There is some dispute over priority of various ideas: Newton's Principia is certainly the seminal work and has been tremendously influential, and many of the mathematical results therein could not have been stated earlier without the development of the calculus. However, many of the ideas, particularly as they pertain to inertia and falling bodies, had been developed by prior scholars such as Christiaan Huygens and the less-known medieval predecessors. Precise credit is at times difficult or contentious because scientific language and standards of proof changed; whether medieval statements are equivalent to modern statements, or merely similar to them, and whether their arguments constitute sufficient proof, is often debatable.
=== Modern age ===
Two main modern developments in mechanics are the general relativity of Einstein and quantum mechanics, both developed in the 20th century and based in part on earlier 19th-century ideas. The development of modern continuum mechanics, particularly in the areas of elasticity, plasticity, fluid dynamics, electrodynamics, and thermodynamics of deformable media, started in the second half of the 20th century.
== Types of mechanical bodies ==
The often-used term body needs to stand for a wide assortment of objects, including particles, projectiles, spacecraft, stars, parts of machinery, parts of solids, parts of fluids (gases and liquids), etc.
Other distinctions between the various sub-disciplines of mechanics concern the nature of the bodies being described. Particles are bodies with little (known) internal structure, treated as mathematical points in classical mechanics. Rigid bodies have size and shape, but retain a simplicity close to that of the particle, adding just a few so-called degrees of freedom, such as orientation in space.
Otherwise, bodies may be semi-rigid, i.e. elastic, or non-rigid, i.e. fluid. These subjects have both classical and quantum divisions of study.
For instance, the motion of a spacecraft, regarding its orbit and attitude (rotation), is described by the relativistic theory of classical mechanics, while the analogous movements of an atomic nucleus are described by quantum mechanics.
== Sub-disciplines ==
The following are the three main designations, each consisting of various subjects that are studied in mechanics.
Note that there is also the "theory of fields" which constitutes a separate discipline in physics, formally treated as distinct from mechanics, whether it be classical fields or quantum fields. But in actual practice, subjects belonging to mechanics and fields are closely interwoven. Thus, for instance, forces that act on particles are frequently derived from fields (electromagnetic or gravitational), and particles generate fields by acting as sources. In fact, in quantum mechanics, particles themselves are fields, as described theoretically by the wave function.
=== Classical ===
The following are described as forming classical mechanics:
Newtonian mechanics, the original theory of motion (kinematics) and forces (dynamics)
Analytical mechanics is a reformulation of Newtonian mechanics with an emphasis on system energy, rather than on forces. There are two main branches of analytical mechanics:
Hamiltonian mechanics, a theoretical formalism, based on the principle of conservation of energy
Lagrangian mechanics, another theoretical formalism, based on the principle of the least action
Classical statistical mechanics generalizes ordinary classical mechanics to consider systems in an unknown state; often used to derive thermodynamic properties.
Celestial mechanics, the motion of bodies in space: planets, comets, stars, galaxies, etc.
Astrodynamics, spacecraft navigation, etc.
Solid mechanics, elasticity, plasticity, or viscoelasticity exhibited by deformable solids
Fracture mechanics
Acoustics, sound (density, variation, propagation) in solids, fluids and gases
Statics, semi-rigid bodies in mechanical equilibrium
Fluid mechanics, the motion of fluids
Soil mechanics, mechanical behavior of soils
Continuum mechanics, mechanics of continua (both solid and fluid)
Hydraulics, mechanical properties of liquids
Fluid statics, liquids in equilibrium
Applied mechanics (also known as engineering mechanics)
Biomechanics, solids, fluids, etc. in biology
Biophysics, physical processes in living organisms
Relativistic or Einsteinian mechanics
=== Quantum ===
The following are categorized as being part of quantum mechanics:
Schrödinger wave mechanics, used to describe the movements of the wavefunction of a single particle.
Matrix mechanics is an alternative formulation that allows considering systems with a finite-dimensional state space.
Quantum statistical mechanics generalizes ordinary quantum mechanics to consider systems in an unknown state; often used to derive thermodynamic properties.
Particle physics, the motion, structure, and behavior of fundamental particles
Nuclear physics, the motion, structure, and reactions of nuclei
Condensed matter physics, quantum gases, solids, liquids, etc.
Historically, classical mechanics had been around for nearly a quarter millennium before quantum mechanics developed. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica, developed over the seventeenth century. Quantum mechanics developed later, over the early twentieth century, precipitated by Planck's postulate and Albert Einstein's explanation of the photoelectric effect. Both fields are commonly held to constitute the most certain knowledge that exists about physical nature.
Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Essential in this respect is the extensive use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them.
Quantum mechanics is of broader scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers: if quantum mechanics is applied to large systems (e.g., a baseball), the result would be almost the same as if classical mechanics had been applied. Quantum mechanics has superseded classical mechanics at the foundation level and is indispensable for the explanation and prediction of processes at the molecular, atomic, and sub-atomic level. However, for macroscopic processes classical mechanics is able to solve problems which are unmanageably difficult (mainly due to computational limits) in quantum mechanics and hence remains useful and widely used.
Modern descriptions of such behavior begin with a careful definition of such quantities as displacement (distance moved), time, velocity, acceleration, mass, and force. Until about 400 years ago, however, motion was explained from a very different point of view. For example, following the ideas of Greek philosopher and scientist Aristotle, scientists reasoned that a cannonball falls down because its natural position is in the Earth; the Sun, the Moon, and the stars travel in circles around the Earth because it is the nature of heavenly objects to travel in perfect circles.
Often cited as the father of modern science, Galileo brought together the ideas of other great thinkers of his time and began to calculate motion in terms of distance travelled from some starting position and the time that it took. He showed that the speed of falling objects increases steadily during the time of their fall. This acceleration is the same for heavy objects as for light ones, provided air friction (air resistance) is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass and relating these to acceleration. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity. For atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion.
=== Relativistic ===
Akin to the distinction between quantum and classical mechanics, Albert Einstein's general and special theories of relativity have expanded the scope of Newton and Galileo's formulation of mechanics. The differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a body approaches the speed of light. For instance, in Newtonian mechanics, the kinetic energy of a free particle is E = (1/2)mv², whereas in relativistic mechanics, it is E = (γ − 1)mc² (where γ is the Lorentz factor); this formula reduces to the Newtonian expression in the low-energy limit.
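The two kinetic-energy formulas can be compared directly. A small illustrative sketch (function names and the chosen test speeds are my own):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def newtonian_ke(m, v):
    """Newtonian kinetic energy: E = (1/2) m v^2."""
    return 0.5 * m * v**2

def relativistic_ke(m, v):
    """Relativistic kinetic energy: E = (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2
```

At everyday speeds the two expressions agree to many digits, while near the speed of light the relativistic value grows without bound.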
For high-energy processes, quantum mechanics must be adjusted to account for special relativity; this has led to the development of quantum field theory.
== Professional organizations ==
Applied Mechanics Division, American Society of Mechanical Engineers
Fluid Dynamics Division, American Physical Society
Society for Experimental Mechanics
International Union of Theoretical and Applied Mechanics
== See also ==
Action principles
Applied mechanics
Computational mechanics
Dynamics
Engineering
Index of engineering science and mechanics articles
Kinematics
Kinetics
Non-autonomous mechanics
Statics
Wiesen Test of Mechanical Aptitude (WTMA)
== References ==
== Further reading ==
Salma Alrasheed (2019). Principles of Mechanics. Springer Nature. ISBN 978-3-030-15195-9.
Landau, L. D.; Lifshitz, E. M. (1972). Mechanics and Electrodynamics, Vol. 1. Franklin Book Company, Inc. ISBN 978-0-08-016739-8.
Practical Mechanics for Boys (1914) by James Slough Zerbe.
== External links ==
Physclips: Mechanics with animations and video clips from the University of New South Wales
The Archimedes Project | Wikipedia/Mechanics |
In mathematics, a stereographic projection is a perspective projection of the sphere, through a specific point on the sphere (the pole or center of projection), onto a plane (the projection plane) perpendicular to the diameter through the point. It is a smooth, bijective function from the entire sphere except the center of projection to the entire plane. It maps circles on the sphere to circles or lines on the plane, and is conformal, meaning that it preserves angles at which curves meet and thus locally approximately preserves shapes. It is neither isometric (distance preserving) nor equiareal (area preserving).
The stereographic projection gives a way to represent a sphere by a plane. The metric induced by the inverse stereographic projection from the plane to the sphere defines a geodesic distance between points in the plane equal to the spherical distance between the spherical points they represent. A two-dimensional coordinate system on the stereographic plane is an alternative setting for spherical analytic geometry instead of spherical polar coordinates or three-dimensional cartesian coordinates. This is the spherical analog of the Poincaré disk model of the hyperbolic plane.
Intuitively, the stereographic projection is a way of picturing the sphere as the plane, with some inevitable compromises. Because the sphere and the plane appear in many areas of mathematics and its applications, so does the stereographic projection; it finds use in diverse fields including complex analysis, cartography, geology, and photography. Sometimes stereographic computations are done graphically using a special kind of graph paper called a stereographic net, shortened to stereonet, or Wulff net.
== History ==
The origin of the stereographic projection is not known, but it is believed to have been discovered by Ancient Greek astronomers and used for projecting the celestial sphere to the plane so that the motions of stars and planets could be analyzed using plane geometry. Its earliest extant description is found in Ptolemy's Planisphere (2nd century AD), but it was ambiguously attributed to Hipparchus (2nd century BC) by Synesius (c. 400 AD), and Apollonius's Conics (c. 200 BC) contains a theorem which is crucial in proving the property that the stereographic projection maps circles to circles. Hipparchus, Apollonius, Archimedes, and even Eudoxus (4th century BC) have sometimes been speculatively credited with inventing or knowing of the stereographic projection, but some experts consider these attributions unjustified. Ptolemy refers to the use of the stereographic projection in a "horoscopic instrument", perhaps the anaphoric clock described by Vitruvius (1st century BC).
By the time of Theon of Alexandria (4th century), the planisphere had been combined with a dioptra to form the planispheric astrolabe ("star taker"), a capable portable device which could be used for measuring star positions and performing a wide variety of astronomical calculations. The astrolabe was in continuous use by Byzantine astronomers, and was significantly further developed by medieval Islamic astronomers. It was transmitted to Western Europe during the 11th–12th century, with Arabic texts translated into Latin.
In the 16th and 17th century, the equatorial aspect of the stereographic projection was commonly used for maps of the Eastern and Western Hemispheres. It is believed that already the map created in 1507 by Gualterius Lud was in stereographic projection, as were later the maps of Jean Rotz (1542), Rumold Mercator (1595), and many others. Even in star charts, this equatorial aspect had already been used by ancient astronomers such as Ptolemy.
François d'Aguilon gave the stereographic projection its current name in his 1613 work Opticorum libri sex philosophis juxta ac mathematicis utiles (Six Books of Optics, useful for philosophers and mathematicians alike).
In the late 16th century, Thomas Harriot proved that the stereographic projection is conformal; however, this proof was never published and sat among his papers in a box for more than three centuries. In 1695, Edmond Halley, motivated by his interest in star charts, was the first to publish a proof. He used the recently established tools of calculus, invented by his friend Isaac Newton.
== Definition ==
=== First formulation ===
The unit sphere S² in three-dimensional space R³ is the set of points (x, y, z) such that x² + y² + z² = 1. Let N = (0, 0, 1) be the "north pole", and let M be the rest of the sphere. The plane z = 0 runs through the center of the sphere; the "equator" is the intersection of the sphere with this plane.
For any point P on M, there is a unique line through N and P, and this line intersects the plane z = 0 in exactly one point P′, known as the stereographic projection of P onto the plane.
In Cartesian coordinates (x, y, z) on the sphere and (X, Y) on the plane, the projection and its inverse are given by the formulas
{\displaystyle {\begin{aligned}(X,Y)&=\left({\frac {x}{1-z}},{\frac {y}{1-z}}\right),\\(x,y,z)&=\left({\frac {2X}{1+X^{2}+Y^{2}}},{\frac {2Y}{1+X^{2}+Y^{2}}},{\frac {-1+X^{2}+Y^{2}}{1+X^{2}+Y^{2}}}\right).\end{aligned}}}
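These Cartesian formulas translate directly into code. A minimal Python sketch (the function names are illustrative) projects a point and verifies the round trip:

```python
import math

def to_plane(x, y, z):
    """Stereographic projection from the north pole N = (0, 0, 1) onto z = 0."""
    return x / (1 - z), y / (1 - z)

def to_sphere(X, Y):
    """Inverse projection, from the plane back to the unit sphere."""
    d = 1 + X**2 + Y**2
    return 2 * X / d, 2 * Y / d, (-1 + X**2 + Y**2) / d

# Round trip: every point of the sphere except the north pole survives.
p = (0.6, 0.48, 0.64)            # 0.36 + 0.2304 + 0.4096 = 1
X, Y = to_plane(*p)              # (5/3, 4/3)
q = to_sphere(X, Y)
assert all(math.isclose(a, b, abs_tol=1e-12) for a, b in zip(p, q))
```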
In spherical coordinates (φ, θ) on the sphere (with φ the zenith angle, 0 ≤ φ ≤ π, and θ the azimuth, 0 ≤ θ ≤ 2π) and polar coordinates (R, Θ) on the plane, the projection and its inverse are
{\displaystyle {\begin{aligned}(R,\Theta )&=\left({\frac {\sin \varphi }{1-\cos \varphi }},\theta \right)=\left(\cot {\frac {\varphi }{2}},\theta \right),\\(\varphi ,\theta )&=\left(2\arctan {\frac {1}{R}},\Theta \right).\end{aligned}}}
Here, φ is understood to have value π when R = 0. Also, there are many ways to rewrite these formulas using trigonometric identities. In cylindrical coordinates (r, θ, z) on the sphere and polar coordinates (R, Θ) on the plane, the projection and its inverse are
{\displaystyle {\begin{aligned}(R,\Theta )&=\left({\frac {r}{1-z}},\theta \right),\\(r,\theta ,z)&=\left({\frac {2R}{1+R^{2}}},\Theta ,{\frac {R^{2}-1}{R^{2}+1}}\right).\end{aligned}}}
=== Other conventions ===
Some authors define stereographic projection from the north pole (0, 0, 1) onto the plane z = −1, which is tangent to the unit sphere at the south pole (0, 0, −1). This can be described as a composition of a projection onto the equatorial plane described above, and a homothety from it to the polar plane. The homothety scales the image by a factor of 2 (a ratio of a diameter to a radius of the sphere), hence the values X and Y produced by this projection are exactly twice those produced by the equatorial projection described in the preceding section. For example, this projection sends the equator to the circle of radius 2 centered at the origin. While the equatorial projection produces no infinitesimal area distortion along the equator, this pole-tangent projection instead produces no infinitesimal area distortion at the south pole.
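The factor-of-two relationship between the two conventions can be checked directly. A short sketch (function names are ours) compares the equatorial-plane and pole-tangent-plane projections:

```python
def equatorial(x, y, z):
    # projection from N = (0, 0, 1) onto the equatorial plane z = 0
    return x / (1 - z), y / (1 - z)

def pole_tangent(x, y, z):
    # projection from N onto the plane z = -1: the line from N to (x, y, z)
    # reaches z = -1 at parameter 2/(1 - z), hence the extra factor of 2
    return 2 * x / (1 - z), 2 * y / (1 - z)

p = (0.6, 0.48, 0.64)
X, Y = equatorial(*p)
U, V = pole_tangent(*p)
assert (U, V) == (2 * X, 2 * Y)            # exactly twice the equatorial values
assert equatorial(1, 0, 0) == (1.0, 0.0)   # equator -> unit circle
assert pole_tangent(1, 0, 0) == (2.0, 0.0) # equator -> circle of radius 2
```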
Other authors use a sphere of radius 1/2 and the plane z = −1/2. In this case the formulae become
{\displaystyle {\begin{aligned}(x,y,z)\rightarrow (\xi ,\eta )&=\left({\frac {x}{{\frac {1}{2}}-z}},{\frac {y}{{\frac {1}{2}}-z}}\right),\\(\xi ,\eta )\rightarrow (x,y,z)&=\left({\frac {\xi }{1+\xi ^{2}+\eta ^{2}}},{\frac {\eta }{1+\xi ^{2}+\eta ^{2}}},{\frac {-1+\xi ^{2}+\eta ^{2}}{2+2\xi ^{2}+2\eta ^{2}}}\right).\end{aligned}}}
In general, one can define a stereographic projection from any point Q on the sphere onto any plane E such that
E is perpendicular to the diameter through Q, and
E does not contain Q.
As long as E meets these conditions, then for any point P other than Q the line through P and Q meets E in exactly one point P′, which is defined to be the stereographic projection of P onto E.
=== Generalizations ===
More generally, stereographic projection may be applied to the unit n-sphere Sn in (n + 1)-dimensional Euclidean space En+1. If Q is a point of Sn and E a hyperplane in En+1, then the stereographic projection of a point P ∈ Sn − {Q} is the point P′ of intersection of the line QP with E. In Cartesian coordinates (xi, i from 0 to n) on Sn and (Xi, i from 1 to n) on E, the projection from Q = (1, 0, 0, ..., 0) ∈ Sn is given by
{\displaystyle X_{i}={\frac {x_{i}}{1-x_{0}}}\quad (i=1,\dots ,n).}
Defining
{\displaystyle s^{2}=\sum _{j=1}^{n}X_{j}^{2}={\frac {1+x_{0}}{1-x_{0}}},}
the inverse is given by
{\displaystyle x_{0}={\frac {s^{2}-1}{s^{2}+1}}\quad {\text{and}}\quad x_{i}={\frac {2X_{i}}{s^{2}+1}}\quad (i=1,\dots ,n).}
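The n-dimensional formulas are just as easy to implement as the two-dimensional ones. A minimal sketch (function names are illustrative) for the projection from Q = (1, 0, ..., 0) and its inverse:

```python
import math

def project(p):
    """Projection of a point of S^n from Q = (1, 0, ..., 0) onto the hyperplane x0 = 0."""
    return [xi / (1 - p[0]) for xi in p[1:]]

def unproject(X):
    """Inverse: a point of the hyperplane back to the unit n-sphere."""
    s2 = sum(Xi**2 for Xi in X)
    return [(s2 - 1) / (s2 + 1)] + [2 * Xi / (s2 + 1) for Xi in X]

p = [0.0, 0.6, 0.0, 0.8]         # a point of S^3 in R^4
X = project(p)                   # [0.6, 0.0, 0.8]
assert all(math.isclose(a, b, abs_tol=1e-12) for a, b in zip(p, unproject(X)))
```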
Still more generally, suppose that S is a (nonsingular) quadric hypersurface in the projective space Pn+1. In other words, S is the locus of zeros of a non-singular quadratic form f(x0, ..., xn+1) in the homogeneous coordinates xi. Fix any point Q on S and a hyperplane E in Pn+1 not containing Q. Then the stereographic projection of a point P in S − {Q} is the unique point of intersection of QP with E. As before, the stereographic projection is conformal and invertible on a non-empty Zariski open set. The stereographic projection presents the quadric hypersurface as a rational hypersurface. This construction plays a role in algebraic geometry and conformal geometry.
== Properties ==
The first stereographic projection defined in the preceding section sends the "south pole" (0, 0, −1) of the unit sphere to (0, 0), the equator to the unit circle, the southern hemisphere to the region inside the circle, and the northern hemisphere to the region outside the circle.
The projection is not defined at the projection point N = (0, 0, 1). Small neighborhoods of this point are sent to subsets of the plane far away from (0, 0). The closer P is to (0, 0, 1), the more distant its image is from (0, 0) in the plane. For this reason it is common to speak of (0, 0, 1) as mapping to "infinity" in the plane, and of the sphere as completing the plane by adding a point at infinity. This notion finds utility in projective geometry and complex analysis. On a merely topological level, it illustrates how the sphere is homeomorphic to the one-point compactification of the plane.
In Cartesian coordinates, a point P(x, y, z) on the sphere and its image P′(X, Y) on the plane are either both rational points or neither is:
{\displaystyle P\in \mathbb {Q} ^{3}\iff P'\in \mathbb {Q} ^{2}}
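This equivalence can be demonstrated with exact rational arithmetic; the sample point below is an arbitrary choice:

```python
from fractions import Fraction as F

# A rational point on the unit sphere (3^2 + 4^2 + 12^2 = 13^2) ...
x, y, z = F(3, 13), F(4, 13), F(12, 13)
assert x**2 + y**2 + z**2 == 1

# ... projects to a rational point of the plane, computed exactly.
X, Y = x / (1 - z), y / (1 - z)
assert (X, Y) == (3, 4)
```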
Stereographic projection is conformal, meaning that it preserves the angles at which curves cross each other (see figures). On the other hand, stereographic projection does not preserve area; in general, the area of a region of the sphere does not equal the area of its projection onto the plane. The area element is given in (X, Y) coordinates by
{\displaystyle dA={\frac {4}{(1+X^{2}+Y^{2})^{2}}}\;dX\;dY.}
Along the unit circle, where X² + Y² = 1, the area scale factor is 1. Near (0, 0) areas are inflated by a factor of 4, and near infinity areas are scaled by arbitrarily small factors.
The metric is given in (X, Y) coordinates by
{\displaystyle {\frac {4}{(1+X^{2}+Y^{2})^{2}}}\;(dX^{2}+dY^{2}),}
and is the unique formula found in Bernhard Riemann's Habilitationsschrift on the foundations of geometry, delivered at Göttingen in 1854, and entitled Über die Hypothesen welche der Geometrie zu Grunde liegen.
No map from the sphere to the plane can be both conformal and area-preserving. If it were, then it would be a local isometry and would preserve Gaussian curvature. The sphere and the plane have different Gaussian curvatures, so this is impossible.
Circles on the sphere that do not pass through the point of projection are projected to circles on the plane. Circles on the sphere that do pass through the point of projection are projected to straight lines on the plane. These lines are sometimes thought of as circles through the point at infinity, or circles of infinite radius. These properties can be verified by using the expressions of
x, y, z in terms of X and Y given in § First formulation: using these expressions for a substitution in the equation ax + by + cz − d = 0 of the plane containing a circle on the sphere, and clearing denominators, one gets the equation of a circle, that is, a second-degree equation with (c − d)(X² + Y²) as its quadratic part. The equation becomes linear if c = d, that is, if the plane passes through the point of projection.
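The circle-preservation property can also be checked numerically. The following sketch (the plane, basis vectors, and sample count are arbitrary choices) projects a tilted circle on the sphere and verifies that its image lies on a single circle in the plane:

```python
import math

def project(x, y, z):
    return x / (1 - z), y / (1 - z)

# A tilted circle on the sphere: intersection with the plane n . p = d.
n = (0.6, 0.0, 0.8)                      # unit normal
d = 0.5                                  # plane offset, |d| < 1
u = (0.0, 1.0, 0.0)                      # orthonormal basis of the plane ...
v = (-0.8, 0.0, 0.6)                     # ... with v = n x u
center = tuple(d * ni for ni in n)
rho = math.sqrt(1 - d * d)

pts = []
for k in range(12):
    t = 2 * math.pi * k / 12
    p = tuple(center[i] + rho * (math.cos(t) * u[i] + math.sin(t) * v[i])
              for i in range(3))
    pts.append(project(*p))

# Circumcircle through the first three image points ...
(ax, ay), (bx, by), (cx, cy) = pts[:3]
d2 = 2 * ((bx - ax) * (cy - ay) - (by - ay) * (cx - ax))
ox = ((cy - ay) * (bx**2 + by**2 - ax**2 - ay**2)
      - (by - ay) * (cx**2 + cy**2 - ax**2 - ay**2)) / d2
oy = ((bx - ax) * (cx**2 + cy**2 - ax**2 - ay**2)
      - (cx - ax) * (bx**2 + by**2 - ax**2 - ay**2)) / d2
r = math.hypot(ax - ox, ay - oy)

# ... also passes through every other projected point.
assert all(math.isclose(math.hypot(X - ox, Y - oy), r) for X, Y in pts)
```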
All lines in the plane, when transformed to circles on the sphere by the inverse of stereographic projection, meet at the projection point. Parallel lines, which do not intersect in the plane, are transformed to circles tangent at the projection point. Intersecting lines are transformed to circles that intersect transversally at two points on the sphere, one of which is the projection point. (Similar remarks hold about the real projective plane, but the intersection relationships are different there.)
The loxodromes of the sphere map to curves on the plane of the form
{\displaystyle R=e^{\Theta /a},}
where the parameter a measures the "tightness" of the loxodrome. Thus loxodromes correspond to logarithmic spirals. These spirals intersect radial lines in the plane at equal angles, just as the loxodromes intersect meridians on the sphere at equal angles.
The stereographic projection relates to the plane inversion in a simple way. Let P and Q be two points on the sphere with projections P′ and Q′ on the plane. Then P′ and Q′ are inversive images of each other in the image of the equatorial circle if and only if P and Q are reflections of each other in the equatorial plane.
In other words, if:
P is a point on the sphere, but not a 'north pole' N and not its antipode, the 'south pole' S,
P′ is the image of P in a stereographic projection with the projection point N and
P″ is the image of P in a stereographic projection with the projection point S,
then P′ and P″ are inversive images of each other in the unit circle.
{\displaystyle \triangle NOP^{\prime }\sim \triangle P^{\prime \prime }OS\implies OP^{\prime }:ON=OS:OP^{\prime \prime }\implies OP^{\prime }\cdot OP^{\prime \prime }=r^{2}}
== Wulff net ==
Stereographic projection plots can be carried out by a computer using the explicit formulas given above. However, for graphing by hand these formulas are unwieldy. Instead, it is common to use graph paper designed specifically for the task. This special graph paper is called a stereonet or Wulff net, after the Russian mineralogist George (Yuri Viktorovich) Wulff.
The Wulff net shown here is the stereographic projection of the grid of parallels and meridians of a hemisphere centred at a point on the equator (such as the Eastern or Western hemisphere of a planet).
In the figure, the area-distorting property of the stereographic projection can be seen by comparing a grid sector near the center of the net with one at the far right or left. The two sectors have equal areas on the sphere. On the disk, the latter has nearly four times the area of the former. If the grid is made finer, this ratio approaches exactly 4.
On the Wulff net, the images of the parallels and meridians intersect at right angles. This orthogonality property is a consequence of the angle-preserving property of the stereographic projection. (However, the angle-preserving property is stronger than this property. Not all projections that preserve the orthogonality of parallels and meridians are angle-preserving.)
For an example of the use of the Wulff net, imagine two copies of it on thin paper, one atop the other, aligned and tacked at their mutual center. Let P be the point on the lower unit hemisphere whose spherical coordinates are (140°, 60°) and whose Cartesian coordinates are (0.321, 0.557, −0.766). This point lies on a line oriented 60° counterclockwise from the positive x-axis (or 30° clockwise from the positive y-axis) and 50° below the horizontal plane z = 0. Once these angles are known, there are four steps to plotting P:
Using the grid lines, which are spaced 10° apart in the figures here, mark the point on the edge of the net that is 60° counterclockwise from the point (1, 0) (or 30° clockwise from the point (0, 1)).
Rotate the top net until this point is aligned with (1, 0) on the bottom net.
Using the grid lines on the bottom net, mark the point that is 50° toward the center from that point.
Rotate the top net oppositely to how it was oriented before, to bring it back into alignment with the bottom net. The point marked in step 3 is then the projection that we wanted.
To plot other points, whose angles are not such round numbers as 60° and 50°, one must visually interpolate between the nearest grid lines. It is helpful to have a net with finer spacing than 10°. Spacings of 2° are common.
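The worked example above can be checked numerically (a sketch; angle conventions as in § First formulation):

```python
import math

# Spherical coordinates of P from the worked example: zenith angle 140°,
# azimuth 60° (measured counterclockwise from the positive x-axis).
phi, theta = math.radians(140), math.radians(60)

x = math.sin(phi) * math.cos(theta)
y = math.sin(phi) * math.sin(theta)
z = math.cos(phi)
assert (round(x, 3), round(y, 3), round(z, 3)) == (0.321, 0.557, -0.766)

# Its stereographic image in polar coordinates: R = cot(phi/2), Theta = theta,
# so P plots at distance cot(70°) ≈ 0.364 from the center, at 60°.
R = 1 / math.tan(phi / 2)
assert round(R, 3) == 0.364
```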
To find the central angle between two points on the sphere based on their stereographic plot, overlay the plot on a Wulff net and rotate the plot about the center until the two points lie on or near a meridian. Then measure the angle between them by counting grid lines along that meridian.
== Applications within mathematics ==
=== Complex analysis ===
Although any stereographic projection misses one point on the sphere (the projection point), the entire sphere can be mapped using two projections from distinct projection points. In other words, the sphere can be covered by two stereographic parametrizations (the inverses of the projections) from the plane. The parametrizations can be chosen to induce the same orientation on the sphere. Together, they describe the sphere as an oriented surface (or two-dimensional manifold).
This construction has special significance in complex analysis. The point (X, Y) in the real plane can be identified with the complex number ζ = X + iY. The stereographic projection from the north pole onto the equatorial plane is then
{\displaystyle {\begin{aligned}\zeta &={\frac {x+iy}{1-z}},\\\\(x,y,z)&=\left({\frac {2\operatorname {Re} \zeta }{1+{\bar {\zeta }}\zeta }},{\frac {2\operatorname {Im} \zeta }{1+{\bar {\zeta }}\zeta }},{\frac {-1+{\bar {\zeta }}\zeta }{1+{\bar {\zeta }}\zeta }}\right).\end{aligned}}}
Similarly, letting ξ = X − iY be another complex coordinate, the functions
{\displaystyle {\begin{aligned}\xi &={\frac {x-iy}{1+z}},\\(x,y,z)&=\left({\frac {2\operatorname {Re} \xi }{1+{\bar {\xi }}\xi }},{\frac {-2\operatorname {Im} \xi }{1+{\bar {\xi }}\xi }},{\frac {1-{\bar {\xi }}\xi }{1+{\bar {\xi }}\xi }}\right)\end{aligned}}}
define a stereographic projection from the south pole onto the equatorial plane. The transition maps between the ζ- and ξ-coordinates are then ζ = 1/ξ and ξ = 1/ζ, with ζ approaching 0 as ξ goes to infinity, and vice versa. This facilitates an elegant and useful notion of infinity for the complex numbers and indeed an entire theory of meromorphic functions mapping to the Riemann sphere. The standard metric on the unit sphere agrees with the Fubini–Study metric on the Riemann sphere.
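The inversion relation between the two charts can be verified with ordinary complex arithmetic (the sample point is an arbitrary choice):

```python
# A sample point on the sphere, away from both poles: x^2 + y^2 + z^2 = 1.
x, y, z = 0.6, 0.48, 0.64

zeta = complex(x, y) / (1 - z)        # chart projecting from the north pole
xi = complex(x, -y) / (1 + z)         # chart projecting from the south pole

# On the overlap of the two charts the transition map is inversion:
assert abs(zeta * xi - 1) < 1e-12     # zeta = 1/xi and xi = 1/zeta
```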
=== Visualization of lines and planes ===
The set of all lines through the origin in three-dimensional space forms a space called the real projective plane. This plane is difficult to visualize, because it cannot be embedded in three-dimensional space.
However, one can visualize it as a disk, as follows. Any line through the origin intersects the southern hemisphere z ≤ 0 in a point, which can then be stereographically projected to a point on a disk in the XY plane. Horizontal lines through the origin intersect the southern hemisphere in two antipodal points along the equator, which project to the boundary of the disk. Either of the two projected points can be considered part of the disk; it is understood that antipodal points on the equator represent a single line in 3 space and a single point on the boundary of the projected disk (see quotient topology). So any set of lines through the origin can be pictured as a set of points in the projected disk. But the boundary points behave differently from the boundary points of an ordinary 2-dimensional disk, in that any one of them is simultaneously close to interior points on opposite sides of the disk (just as two nearly horizontal lines through the origin can project to points on opposite sides of the disk).
Also, every plane through the origin intersects the unit sphere in a great circle, called the trace of the plane. This circle maps to a circle under stereographic projection. So the projection lets us visualize planes as circular arcs in the disk. Prior to the availability of computers, stereographic projections with great circles often involved drawing large-radius arcs that required use of a beam compass. Computers now make this task much easier.
Further associated with each plane is a unique line, called the plane's pole, that passes through the origin and is perpendicular to the plane. This line can be plotted as a point on the disk just as any line through the origin can. So the stereographic projection also lets us visualize planes as points in the disk. For plots involving many planes, plotting their poles produces a less-cluttered picture than plotting their traces.
This construction is used to visualize directional data in crystallography and geology, as described below.
=== Other visualization ===
Stereographic projection is also applied to the visualization of polytopes. In a Schlegel diagram, an n-dimensional polytope in Rn+1 is projected onto an n-dimensional sphere, which is then stereographically projected onto Rn. The reduction from Rn+1 to Rn can make the polytope easier to visualize and understand.
=== Arithmetic geometry ===
In elementary arithmetic geometry, stereographic projection from the unit circle provides a means to describe all primitive Pythagorean triples. Specifically, stereographic projection from the north pole (0,1) onto the x-axis gives a one-to-one correspondence between the rational number points (x, y) on the unit circle (with y ≠ 1) and the rational points of the x-axis. If (m/n, 0) is a rational point on the x-axis, then its inverse stereographic projection is the point
{\displaystyle \left({\frac {2mn}{m^{2}+n^{2}}},{\frac {m^{2}-n^{2}}{m^{2}+n^{2}}}\right)}
which gives Euclid's formula for a Pythagorean triple.
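Clearing the common denominator m² + n² turns the projected point into Euclid's formula. A small sketch (the helper name and parameter constraints for primitivity are ours):

```python
from math import gcd

def triple(m, n):
    """Primitive Pythagorean triple from the rational point m/n on the x-axis
    (m > n > 0, coprime, of opposite parity), via inverse stereographic projection."""
    assert m > n > 0 and gcd(m, n) == 1 and (m - n) % 2 == 1
    return (2 * m * n, m * m - n * n, m * m + n * n)

a, b, c = triple(2, 1)
assert (a, b, c) == (4, 3, 5) and a * a + b * b == c * c
assert triple(3, 2) == (12, 5, 13)
assert triple(4, 1) == (8, 15, 17)
```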
=== Tangent half-angle substitution ===
The pair of trigonometric functions (sin x, cos x) can be thought of as parametrizing the unit circle. The stereographic projection gives an alternative parametrization of the unit circle:
{\displaystyle \cos x={\frac {1-t^{2}}{1+t^{2}}},\quad \sin x={\frac {2t}{t^{2}+1}}.}
Under this reparametrization, the length element dx of the unit circle goes over to
{\displaystyle dx={\frac {2\,dt}{t^{2}+1}}.}
This substitution can sometimes simplify integrals involving trigonometric functions.
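A quick numerical sanity check of the parametrization and of one resulting simplification (the integrand 1/(1 + cos x) is our arbitrary example):

```python
import math

# Check the parametrization at a sample value t = tan(x/2).
x = 1.1
t = math.tan(x / 2)
assert math.isclose(math.cos(x), (1 - t * t) / (1 + t * t))
assert math.isclose(math.sin(x), 2 * t / (1 + t * t))

# Example integral: with t = tan(x/2), dx/(1 + cos x) becomes simply dt,
# so the integral from 0 to pi/2 equals tan(pi/4) = 1.
n, h = 10000, (math.pi / 2) / 10000
val = sum(h / (1 + math.cos((k + 0.5) * h)) for k in range(n))  # midpoint rule
assert math.isclose(val, 1.0, rel_tol=1e-6)
```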
== Applications to other disciplines ==
=== Cartography ===
The fundamental problem of cartography is that no map from the sphere to the plane can accurately represent both angles and areas. In general, area-preserving map projections are preferred for statistical applications, while angle-preserving (conformal) map projections are preferred for navigation.
Stereographic projection falls into the second category. When the projection is centered at the Earth's north or south pole, it has additional desirable properties: It sends meridians to rays emanating from the origin and parallels to circles centered at the origin.
=== Planetary science ===
The stereographic projection is the only projection that maps all circles on a sphere to circles on a plane. This property is valuable in planetary mapping, where craters are typical features. Circles passing through the point of projection are mapped to curves of unbounded radius, that is, they degenerate into straight lines.
=== Crystallography ===
In crystallography, the orientations of crystal axes and faces in three-dimensional space are a central geometric concern, for example in the interpretation of X-ray and electron diffraction patterns. These orientations can be visualized as in the section Visualization of lines and planes above. That is, crystal axes and poles to crystal planes are intersected with the northern hemisphere and then plotted using stereographic projection. A plot of poles is called a pole figure.
In electron diffraction, Kikuchi line pairs appear as bands decorating the intersection between lattice plane traces and the Ewald sphere thus providing experimental access to a crystal's stereographic projection. Model Kikuchi maps in reciprocal space, and fringe visibility maps for use with bend contours in direct space, thus act as road maps for exploring orientation space with crystals in the transmission electron microscope.
=== Geology ===
Researchers in structural geology are concerned with the orientations of planes and lines for a number of reasons. The foliation of a rock is a planar feature that often contains a linear feature called lineation. Similarly, a fault plane is a planar feature that may contain linear features such as slickensides.
These orientations of lines and planes at various scales can be plotted using the methods of the Visualization of lines and planes section above. As in crystallography, planes are typically plotted by their poles. Unlike crystallography, the southern hemisphere is used instead of the northern one (because the geological features in question lie below the Earth's surface). In this context the stereographic projection is often referred to as the equal-angle lower-hemisphere projection. The equal-area lower-hemisphere projection defined by the Lambert azimuthal equal-area projection is also used, especially when the plot is to be subjected to subsequent statistical analysis such as density contouring.
=== Rock mechanics ===
The stereographic projection is one of the most widely used methods for evaluating rock slope stability. It allows for the representation and analysis of three-dimensional orientation data in two dimensions. Kinematic analysis within stereographic projection is used to assess the potential for various modes of rock slope failures—such as plane, wedge, and toppling failures—which occur due to the presence of unfavorably oriented discontinuities. This technique is particularly useful for visualizing the orientation of rock slopes in relation to discontinuity sets, facilitating the assessment of the most likely failure type. For instance, plane failure is more likely when the strike of a discontinuity set is parallel to the slope, and the discontinuities dip towards the slope at an angle steep enough to allow sliding, but not steeper than the slope itself.
Additionally, some authors have developed graphical methods based on stereographic projection to easily calculate geometrical correction parameters—such as those related to the parallelism between the slope and discontinuities, the dip of the discontinuity, and the relative angle between the discontinuity and the slope—for rock mass classifications in slopes, including slope mass rating (SMR) and rock mass rating.
=== Photography ===
Some fisheye lenses use a stereographic projection to capture a wide-angle view. Compared to more traditional fisheye lenses which use an equal-area projection, areas close to the edge retain their shape, and straight lines are less curved. However, stereographic fisheye lenses are typically more expensive to manufacture. Image remapping software, such as Panotools, allows the automatic remapping of photos from an equal-area fisheye to a stereographic projection.
The stereographic projection has been used to map spherical panoramas, starting with Horace Bénédict de Saussure's in 1779. This results in effects known as a little planet (when the center of projection is the nadir) and a tube (when the center of projection is the zenith).
The popularity of using stereographic projections to map panoramas over other azimuthal projections is attributed to the shape preservation that results from the conformality of the projection.
== See also ==
List of map projections
Astrolabe
Astronomical clock
Poincaré disk model, the analogous mapping of the hyperbolic plane
Stereographic projection in cartography
Curvilinear perspective
Fisheye lens
== References ==
=== Sources ===
== External links ==
Stereographic Projection and Inversion from Cut-the-Knot
DoITPoMS Teaching and Learning Package - "The Stereographic Projection"
=== Videos ===
Proof about Stereographic Projection taking circles in the sphere to circles in the plane
Time Lapse Stereographic Projection on Vimeo
=== Software ===
Stereonet, a software tool for structural geology by Rick Allmendinger.
PTCLab, the phase transformation crystallography lab
Sphaerica, software tool for straightedge and compass construction on the sphere, including a stereographic projection display option
Estereografica Web, a web application for stereographic projection in structural geology and fault kinematics by Ernesto Cristallini. | Wikipedia/Stereographic_projection |
In mathematics, a homothety (or homothecy, or homogeneous dilation) is a transformation of an affine space determined by a point S called its center and a nonzero number k called its ratio, which sends point X to a point X′ by the rule,
{\displaystyle {\overrightarrow {SX'}}=k{\overrightarrow {SX}}}
for a fixed number k ≠ 0.
Using position vectors:
{\displaystyle \mathbf {x} '=\mathbf {s} +k(\mathbf {x} -\mathbf {s} )}.
In case of S = O (the origin):
{\displaystyle \mathbf {x} '=k\mathbf {x} },
which is a uniform scaling and shows the meaning of special choices for k:
for k = 1 one gets the identity mapping,
for k = −1 one gets the reflection at the center,
for the ratio 1/k one gets the inverse mapping defined by k.
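These special cases can be checked with a few lines of Python (a sketch; the helper name is ours):

```python
def homothety(s, k, x):
    """Image of the point x under the homothety with center s and ratio k."""
    return tuple(si + k * (xi - si) for si, xi in zip(s, x))

S, P = (1.0, 2.0), (4.0, 6.0)
assert homothety(S, 1, P) == P                      # k = 1: identity
assert homothety(S, -1, P) == (-2.0, -2.0)          # k = -1: reflection in S
Q = homothety(S, 2, P)                              # Q = (7.0, 10.0)
assert homothety(S, 1/2, Q) == P                    # ratio 1/k gives the inverse
```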
In Euclidean geometry homotheties are the similarities that fix a point and either preserve (if k > 0) or reverse (if k < 0) the direction of all vectors. Together with the translations, all homotheties of an affine (or Euclidean) space form a group, the group of dilations or homothety-translations. These are precisely the affine transformations with the property that the image of every line g is a line parallel to g.
In projective geometry, a homothetic transformation is a similarity transformation (i.e., fixes a given elliptic involution) that leaves the line at infinity pointwise invariant.
In Euclidean geometry, a homothety of ratio k multiplies distances between points by |k|, areas by k², and volumes by |k|³. Here k is the ratio of magnification or dilation factor or scale factor or similitude ratio. Such a transformation can be called an enlargement if the scale factor exceeds 1. The above-mentioned fixed point S is called homothetic center or center of similarity or center of similitude.
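The scaling of distances by |k| and of areas by k² can be verified directly; the center, ratio, and triangle below are arbitrary choices:

```python
import math

def homothety(s, k, x):
    return tuple(si + k * (xi - si) for si, xi in zip(s, x))

def area(p, q, r):
    # triangle area via the cross product
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])) / 2

S, k = (2.0, 1.0), -3.0
A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)
A2, B2, C2 = (homothety(S, k, P) for P in (A, B, C))

assert math.isclose(math.dist(A2, B2), abs(k) * math.dist(A, B))  # |k| = 3
assert math.isclose(area(A2, B2, C2), k**2 * area(A, B, C))       # k^2 = 9
```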
The term, coined by French mathematician Michel Chasles, is derived from two Greek elements: the prefix homo- (ὁμός 'similar') and thesis (θέσις 'position'). It describes the relationship between two figures of the same shape and orientation. For example, two Russian dolls looking in the same direction can be considered homothetic.
Homotheties are used to scale the contents of computer screens; for example, smartphones, notebooks, and laptops.
== Properties ==
The following properties hold in any dimension.
=== Mapping lines, line segments and angles ===
A homothety has the following properties:
A line is mapped onto a parallel line. Hence: angles remain unchanged.
The ratio of two line segments is preserved.
Both properties show:
A homothety is a similarity.
Derivation of the properties:
In order to make calculations easy it is assumed that the center S is the origin: x → kx. A line g with parametric representation x = p + tv is mapped onto the point set g′ with equation x = k(p + tv) = kp + tkv, which is a line parallel to g.
The distance of two points P: p, Q: q is |p − q|, and |kp − kq| = |k| |p − q| is the distance between their images. Hence, the ratio (quotient) of two line segments remains unchanged.
In case of S ≠ O the calculation is analogous but a little more extensive.
Consequences: A triangle is mapped onto a similar one. The homothetic image of a circle is a circle. The image of an ellipse is a similar one, i.e. the ratio of its two axes is unchanged.
=== Graphical constructions ===
==== using the intercept theorem ====
If for a homothety with center {\displaystyle S} the image {\displaystyle Q_{1}} of a point {\displaystyle P_{1}} is given (see diagram), then the image {\displaystyle Q_{2}} of a second point {\displaystyle P_{2}}, which does not lie on line {\displaystyle SP_{1}}, can be constructed graphically using the intercept theorem: {\displaystyle Q_{2}} is the common point of the two lines {\displaystyle {\overline {P_{1}P_{2}}}} and {\displaystyle {\overline {SP_{2}}}}. The image of a point collinear with {\displaystyle P_{1},Q_{1}} can be determined using {\displaystyle P_{2},Q_{2}}.
==== using a pantograph ====
Before computers became ubiquitous, scalings of drawings were done by using a pantograph, a tool similar to a compass.
Construction and geometrical background:
Take 4 rods and assemble a mobile parallelogram with vertices {\displaystyle P_{0},Q_{0},H,P} such that the two rods meeting at {\displaystyle Q_{0}} are prolonged at the other end as shown in the diagram. Choose the ratio {\displaystyle k}.
On the prolonged rods mark the two points {\displaystyle S,Q} such that {\displaystyle |SQ_{0}|=k|SP_{0}|} and {\displaystyle |QQ_{0}|=k|HQ_{0}|}. This is the case if {\displaystyle |SQ_{0}|={\tfrac {k}{k-1}}|P_{0}Q_{0}|.} (Instead of {\displaystyle k} the location of the center {\displaystyle S} can be prescribed. In this case the ratio is {\displaystyle k=|SQ_{0}|/|SP_{0}|}.)
Attach the mobile rods rotatable at point {\displaystyle S}.
Vary the location of point {\displaystyle P} and mark at each time point {\displaystyle Q}.
Because of {\displaystyle |SQ_{0}|/|SP_{0}|=|Q_{0}Q|/|PP_{0}|} (see diagram) one gets from the intercept theorem that the points {\displaystyle S,P,Q} are collinear (lie on a line) and the equation {\displaystyle |SQ|=k|SP|} holds. That shows: the mapping {\displaystyle P\to Q} is a homothety with center {\displaystyle S} and ratio {\displaystyle k}.
=== Composition ===
The composition of two homotheties with the same center {\displaystyle S} is again a homothety with center {\displaystyle S}. The homotheties with center {\displaystyle S} form a group.
The composition of two homotheties with different centers {\displaystyle S_{1},S_{2}} and ratios {\displaystyle k_{1},k_{2}} is
in case of {\displaystyle k_{1}k_{2}\neq 1} a homothety with its center on line {\displaystyle {\overline {S_{1}S_{2}}}} and ratio {\displaystyle k_{1}k_{2}}, or
in case of {\displaystyle k_{1}k_{2}=1} a translation in direction {\displaystyle {\overrightarrow {S_{1}S_{2}}}}. This holds especially if {\displaystyle k_{1}=k_{2}=-1} (point reflections).
Derivation:
For the composition {\displaystyle \sigma _{2}\sigma _{1}} of the two homotheties {\displaystyle \sigma _{1},\sigma _{2}} with centers {\displaystyle S_{1},S_{2}} with
{\displaystyle \sigma _{1}:\mathbf {x} \to \mathbf {s} _{1}+k_{1}(\mathbf {x} -\mathbf {s} _{1}),}
{\displaystyle \sigma _{2}:\mathbf {x} \to \mathbf {s} _{2}+k_{2}(\mathbf {x} -\mathbf {s} _{2})\ }
one gets by calculation for the image of point {\displaystyle X:\mathbf {x} }:
{\displaystyle (\sigma _{2}\sigma _{1})(\mathbf {x} )=\mathbf {s} _{2}+k_{2}{\big (}\mathbf {s} _{1}+k_{1}(\mathbf {x} -\mathbf {s} _{1})-\mathbf {s} _{2}{\big )}}
{\displaystyle \qquad \qquad \ =(1-k_{1})k_{2}\mathbf {s} _{1}+(1-k_{2})\mathbf {s} _{2}+k_{1}k_{2}\mathbf {x} }.
Hence, the composition is
in case of {\displaystyle k_{1}k_{2}=1} a translation in direction {\displaystyle {\overrightarrow {S_{1}S_{2}}}} by vector {\displaystyle \ (1-k_{2})(\mathbf {s} _{2}-\mathbf {s} _{1})};
in case of {\displaystyle k_{1}k_{2}\neq 1} a homothety: the point
{\displaystyle S_{3}:\mathbf {s} _{3}={\frac {(1-k_{1})k_{2}\mathbf {s} _{1}+(1-k_{2})\mathbf {s} _{2}}{1-k_{1}k_{2}}}=\mathbf {s} _{1}+{\frac {1-k_{2}}{1-k_{1}k_{2}}}(\mathbf {s} _{2}-\mathbf {s} _{1})}
is a fixed point (it is not moved) and the composition
{\displaystyle \sigma _{2}\sigma _{1}:\ \mathbf {x} \to \mathbf {s} _{3}+k_{1}k_{2}(\mathbf {x} -\mathbf {s} _{3})\quad }
is a homothety with center {\displaystyle S_{3}} and ratio {\displaystyle k_{1}k_{2}}. {\displaystyle S_{3}} lies on line {\displaystyle {\overline {S_{1}S_{2}}}}.
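The composition formulas can be checked numerically. A quick sketch with arbitrarily chosen centers and ratios (here k1·k2 ≠ 1, so the result is a homothety with the predicted fixed point):

```python
import numpy as np

def homothety(s, k):
    """Return the map x -> s + k*(x - s) as a function."""
    return lambda x: s + k * (x - s)

s1, k1 = np.array([1.0, 0.0]), 2.0   # centers and ratios chosen for illustration
s2, k2 = np.array([0.0, 3.0]), 1.5
sigma1, sigma2 = homothety(s1, k1), homothety(s2, k2)

# Predicted center of the composition (valid because k1*k2 != 1):
s3 = s1 + (1 - k2) / (1 - k1 * k2) * (s2 - s1)
composed = homothety(s3, k1 * k2)

x = np.array([2.0, -1.0])
assert np.allclose(sigma2(sigma1(x)), composed(x))   # same map
assert np.allclose(sigma2(sigma1(s3)), s3)           # s3 is indeed fixed
```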
The composition of a homothety and a translation is a homothety.
Derivation:
The composition of the homothety {\displaystyle \sigma :\mathbf {x} \to \mathbf {s} +k(\mathbf {x} -\mathbf {s} ),\;k\neq 1,\;} and the translation {\displaystyle \tau :\mathbf {x} \to \mathbf {x} +\mathbf {v} } is
{\displaystyle \tau \sigma :\mathbf {x} \to \mathbf {s} +\mathbf {v} +k(\mathbf {x} -\mathbf {s} )}
{\displaystyle =\mathbf {s} +{\frac {\mathbf {v} }{1-k}}+k\left(\mathbf {x} -(\mathbf {s} +{\frac {\mathbf {v} }{1-k}})\right)}
which is a homothety with center {\displaystyle \mathbf {s} '=\mathbf {s} +{\frac {\mathbf {v} }{1-k}}} and ratio {\displaystyle k}.
=== In homogeneous coordinates ===
The homothety {\displaystyle \sigma :\mathbf {x} \to \mathbf {s} +k(\mathbf {x} -\mathbf {s} )} with center {\displaystyle S=(u,v)} can be written as the composition of a homothety with center {\displaystyle O} and a translation:
{\displaystyle \mathbf {x} \to k\mathbf {x} +(1-k)\mathbf {s} }.
Hence {\displaystyle \sigma } can be represented in homogeneous coordinates by the matrix:
{\displaystyle {\begin{pmatrix}k&0&(1-k)u\\0&k&(1-k)v\\0&0&1\end{pmatrix}}}
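The homogeneous-coordinate matrix can be verified against the direct formula. A sketch with assumed values k = 3 and center (u, v) = (2, 1):

```python
import numpy as np

k, u, v = 3.0, 2.0, 1.0                     # assumed ratio and center
M = np.array([[k, 0, (1 - k) * u],
              [0, k, (1 - k) * v],
              [0, 0, 1.0]])                 # homothety in homogeneous coordinates

x = np.array([4.0, 5.0])
s = np.array([u, v])

direct = s + k * (x - s)                    # sigma(x) = s + k(x - s)
homogeneous = (M @ np.append(x, 1.0))[:2]   # apply M to (x, y, 1), drop the 1
assert np.allclose(direct, homogeneous)
```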
A pure homothety is also a conformal transformation, because it is the composition of a translation and a uniform scaling.
== See also ==
Scaling (geometry) a similar notion in vector spaces
Homothetic center, the center of a homothetic transformation taking one of a pair of shapes into the other
The Hadwiger conjecture on the number of strictly smaller homothetic copies of a convex body that may be needed to cover it
Homothetic function (economics), a function of the form f(U(y)) in which U is a homogeneous function and f is a monotonically increasing function.
== Notes ==
== References ==
Coxeter, H. S. M. (1961), "Introduction to geometry", Wiley, p. 94
Hadamard, J. (1906), "V: Homothétie et Similitude" [V: Homothety and Similarity], Leçons de Géométrie élémentaire. I: Géométrie plane [Lessons in Elementary Geometry. I: Plane Geometry] (in French) (2nd ed.), Paris: Armand Colin
Meserve, Bruce E. (1955), "Homothetic transformations", Fundamental Concepts of Geometry, Addison-Wesley, pp. 166–169
Tuller, Annita (1967), A Modern Introduction to Geometries, University Series in Undergraduate Mathematics, Princeton, New Jersey: D. Van Nostrand Co.
== External links ==
Homothety, interactive applet from Cut-the-Knot.
The Roothaan equations are a representation of the Hartree–Fock equation in a non-orthonormal basis set, which can be of Gaussian-type or Slater-type. They apply to closed-shell molecules or atoms where all molecular orbitals or atomic orbitals, respectively, are doubly occupied. This is generally called Restricted Hartree–Fock theory.
The method was developed independently by Clemens C. J. Roothaan and George G. Hall in 1951, and is thus sometimes called the Roothaan–Hall equations. The Roothaan equations can be written in a form resembling a generalized eigenvalue problem, although they are not a standard eigenvalue problem because they are nonlinear:
{\displaystyle \mathbf {F} \mathbf {C} =\mathbf {S} \mathbf {C} \mathbf {\epsilon } }
where F is the Fock matrix (which depends on the coefficients C due to electron-electron interactions), C is a matrix of coefficients, S is the overlap matrix of the basis functions, and {\displaystyle \epsilon } is the (diagonal, by convention) matrix of orbital energies. In the case of an orthonormalised basis set the overlap matrix, S, reduces to the identity matrix. These equations are essentially a special case of a Galerkin method applied to the Hartree–Fock equation using a particular basis set.
In contrast to the Hartree–Fock equations, which are integro-differential equations, the Roothaan–Hall equations have a matrix form and can therefore be solved using standard linear-algebra techniques.
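One standard technique is symmetric (Löwdin) orthogonalization, which turns FC = SCε into an ordinary eigenvalue problem. A minimal sketch with a toy fixed Fock matrix (the values are assumptions for illustration; a real Fock matrix depends on C and must be rebuilt in a self-consistent loop):

```python
import numpy as np

# Toy 2x2 Fock and overlap matrices (illustrative values only).
F = np.array([[-1.0, -0.5],
              [-0.5, -0.8]])
S = np.array([[1.0, 0.4],
              [0.4, 1.0]])

# Symmetric (Loewdin) orthogonalization: X = S^(-1/2)
w, U = np.linalg.eigh(S)
X = U @ np.diag(w ** -0.5) @ U.T

# Transform FC = SC eps into the standard problem (X F X) C' = C' eps
eps, Cprime = np.linalg.eigh(X @ F @ X)
C = X @ Cprime                     # back-transform the coefficients

# Check the Roothaan equation F C = S C eps and the normalization C^T S C = I
assert np.allclose(F @ C, S @ C @ np.diag(eps))
assert np.allclose(C.T @ S @ C, np.eye(2))
```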
== See also ==
Hartree–Fock method
== References ==
In physics, the number of degrees of freedom (DOF) of a mechanical system is the number of independent parameters required to completely specify its configuration or state. That number is an important property in the analysis of systems of bodies in mechanical engineering, structural engineering, aerospace engineering, robotics, and other fields.
As an example, the position of a single railcar (engine) moving along a track has one degree of freedom because the position of the car can be completely specified by a single number expressing its distance along the track from some chosen origin. A train of rigid cars connected by hinges to an engine still has only one degree of freedom because the positions of the cars behind the engine are constrained by the shape of the track.
For a second example, an automobile with a very stiff suspension can be considered to be a rigid body traveling on a plane (a flat, two-dimensional space). This body has three independent degrees of freedom consisting of two components of translation (which together specify its position) and one angle of rotation (which specifies its orientation). Skidding or drifting is a good example of an automobile's three independent degrees of freedom.
The position and orientation of a rigid body in space are defined by three components of translation and three components of rotation, which means that the body has six degrees of freedom.
To ensure that a mechanical device's degrees of freedom neither underconstrain nor overconstrain it, its design can be managed using the exact constraint method.
== Motions and dimensions ==
The position of an n-dimensional rigid body is defined by the rigid transformation, [T] = [A, d], where d is an n-dimensional translation and A is an n × n rotation matrix, which has n translational degrees of freedom and n(n − 1)/2 rotational degrees of freedom. The number of rotational degrees of freedom comes from the dimension of the rotation group SO(n).
A non-rigid or deformable body may be thought of as a collection of many minute particles (infinite number of DOFs), this is often approximated by a finite DOF system. When motion involving large displacements is the main objective of study (e.g. for analyzing the motion of satellites), a deformable body may be approximated as a rigid body (or even a particle) in order to simplify the analysis.
The degree of freedom of a system can be viewed as the minimum number of coordinates required to specify a configuration. Applying this definition, we have:
For a single particle in a plane two coordinates define its location so it has two degrees of freedom;
A single particle in space requires three coordinates so it has three degrees of freedom;
Two particles in space have a combined six degrees of freedom;
If two particles in space are constrained to maintain a constant distance from each other, such as in the case of a diatomic molecule, then the six coordinates must satisfy a single constraint equation defined by the distance formula. This reduces the degree of freedom of the system to five, because the distance formula can be used to solve for the remaining coordinate once the other five are specified.
== Rigid bodies ==
A single rigid body has at most six degrees of freedom (6 DOF) 3T3R consisting of three translations 3T and three rotations 3R.
See also Euler angles.
For example, the motion of a ship at sea has the six degrees of freedom of a rigid body, and is described as:
Translation and rotation:
Walking (or surging): Moving forward and backward;
Strafing (or swaying): Moving left and right;
Elevating (or heaving): Moving up and down;
Roll rotation: Pivots side to side;
Pitch rotation: Tilts forward and backward;
Yaw rotation: Swivels left and right;
For example, the trajectory of an airplane in flight has three degrees of freedom and its attitude along the trajectory has three degrees of freedom, for a total of six degrees of freedom.
For rolling in flight and ship dynamics, see roll (aviation) and roll (ship motion), respectively.
An important derivative is the roll rate (or roll velocity), which is the angular speed at which an aircraft can change its roll attitude, and is typically expressed in degrees per second.
For pitching in flight and ship dynamics, see pitch (aviation) and pitch (ship motion), respectively.
For yawing in flight and ship dynamics, see yaw (aviation) and yaw (ship motion), respectively.
One important derivative is the yaw rate (or yaw velocity), the angular speed of yaw rotation, measured with a yaw rate sensor.
Another important derivative is the yawing moment, the angular momentum of a yaw rotation, which is important for adverse yaw in aircraft dynamics.
=== Lower mobility ===
Physical constraints may limit the number of degrees of freedom of a single rigid body. For example, a block sliding around on a flat table has 3 DOF 2T1R consisting of two translations 2T and 1 rotation 1R. An XYZ positioning robot like SCARA has 3 DOF 3T lower mobility.
== Mobility formula ==
The mobility formula counts the number of parameters that define the configuration of a set of rigid bodies that are constrained by joints connecting these bodies.
Consider a system of n rigid bodies moving in space; it has 6n degrees of freedom measured relative to a fixed frame. In order to count the degrees of freedom of this system, include the fixed body in the count of bodies, so that mobility is independent of the choice of the body that forms the fixed frame. Then the degree of freedom of the unconstrained system of N = n + 1 bodies is
{\displaystyle M=6n=6(N-1),\!}
because the fixed body has zero degrees of freedom relative to itself.
Joints that connect bodies in this system remove degrees of freedom and reduce mobility. Specifically, hinges and sliders each impose five constraints and therefore remove five degrees of freedom. It is convenient to define the number of constraints c that a joint imposes in terms of the joint's freedom f, where c = 6 − f. A hinge or slider, which is a one-degree-of-freedom joint, has f = 1 and therefore c = 6 − 1 = 5.
The result is that the mobility of a system formed from n moving links and j joints each with freedom fi, i = 1, ..., j, is given by
{\displaystyle M=6n-\sum _{i=1}^{j}\ (6-f_{i})=6(N-1-j)+\sum _{i=1}^{j}\ f_{i}}
Recall that N includes the fixed link.
There are two important special cases: (i) a simple open chain, and (ii) a simple closed chain.
A single open chain consists of n moving links connected end to end by n joints, with one end connected to a ground link. Thus, in this case N = j + 1 and the mobility of the chain is
{\displaystyle M=\sum _{i=1}^{j}\ f_{i}}
For a simple closed chain, n moving links are connected end-to-end by n + 1 joints such that the two ends are connected to the ground link forming a loop. In this case, we have N = j and the mobility of the chain is
{\displaystyle M=\sum _{i=1}^{j}\ f_{i}-6}
An example of a simple open chain is a serial robot manipulator. These robotic systems are constructed from a series of links connected by six one degree-of-freedom revolute or prismatic joints, so the system has six degrees of freedom.
An example of a simple closed chain is the RSSR spatial four-bar linkage. The sum of the freedom of these joints is eight, so the mobility of the linkage is two, where one of the degrees of freedom is the rotation of the coupler around the line joining the two S joints.
=== Planar and spherical movement ===
It is common practice to design the linkage system so that the movement of all of the bodies are constrained to lie on parallel planes, to form what is known as a planar linkage. It is also possible to construct the linkage system so that all of the bodies move on concentric spheres, forming a spherical linkage. In both cases, the degrees of freedom of the links in each system is now three rather than six, and the constraints imposed by joints are now c = 3 − f.
In this case, the mobility formula is given by
{\displaystyle M=3(N-1-j)+\sum _{i=1}^{j}\ f_{i},}
and the special cases become
planar or spherical simple open chain,
{\displaystyle M=\sum _{i=1}^{j}\ f_{i},}
planar or spherical simple closed chain,
{\displaystyle M=\sum _{i=1}^{j}\ f_{i}-3.}
An example of a planar simple closed chain is the planar four-bar linkage, which is a four-bar loop with four one degree-of-freedom joints and therefore has mobility M = 1.
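The mobility formulas above are straightforward to encode. A sketch, assuming spatial systems use λ = 6 and planar or spherical systems λ = 3:

```python
def mobility(n_links, joint_freedoms, spatial=True):
    """Mobility formula: N includes the fixed link, j joints with freedoms f_i."""
    lam = 6 if spatial else 3          # 6 for spatial, 3 for planar/spherical
    N = n_links + 1                    # moving links plus the fixed frame
    j = len(joint_freedoms)
    return lam * (N - 1 - j) + sum(joint_freedoms)

# Planar four-bar linkage: 3 moving links, 4 revolute joints (f = 1 each)
assert mobility(3, [1, 1, 1, 1], spatial=False) == 1

# Spatial RSSR four-bar: joints R(1) + S(3) + S(3) + R(1) give mobility 2
assert mobility(3, [1, 3, 3, 1], spatial=True) == 2

# Serial robot arm: open chain of 6 one-DOF joints gives 6 DOF
assert mobility(6, [1] * 6, spatial=True) == 6
```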
=== Systems of bodies ===
A system with several bodies would have a combined DOF that is the sum of the DOFs of the bodies, less the internal constraints they may have on relative motion. A mechanism or linkage containing a number of connected rigid bodies may have more than the degrees of freedom for a single rigid body. Here the term degrees of freedom is used to describe the number of parameters needed to specify the spatial pose of a linkage. It is also defined in context of the configuration space, task space and workspace of a robot.
A specific type of linkage is the open kinematic chain, where a set of rigid links are connected at joints; a joint may provide one DOF (hinge/sliding), or two (cylindrical). Such chains occur commonly in robotics, biomechanics, and for satellites and other space structures. A human arm is considered to have seven DOFs. A shoulder gives pitch, yaw, and roll, an elbow allows for pitch, and a wrist allows for pitch, yaw and roll. Only 3 of those movements would be necessary to move the hand to any point in space, but people would lack the ability to grasp things from different angles or directions. A robot (or object) that has mechanisms to control all 6 physical DOF is said to be holonomic. An object with fewer controllable DOFs than total DOFs is said to be non-holonomic, and an object with more controllable DOFs than total DOFs (such as the human arm) is said to be redundant. Note, however, that the human arm is not fully redundant in practice: the wrist and shoulder DOFs that represent the same movement (roll) complement each other, since neither can rotate a full 360°.
Each degree of freedom corresponds to a distinct independent movement that the system can make.
In mobile robotics, a car-like robot can reach any position and orientation in 2-D space, so it needs 3 DOFs to describe its pose, but at any point, you can move it only by a forward motion and a steering angle. So it has two control DOFs and three representational DOFs; i.e. it is non-holonomic. A fixed-wing aircraft, with 3–4 control DOFs (forward motion, roll, pitch, and to a limited extent, yaw) in a 3-D space, is also non-holonomic, as it cannot move directly up/down or left/right.
A summary of formulas and methods for computing the degrees-of-freedom in mechanical systems has been given by Pennestri, Cavacece, and Vita.
== Electrical engineering ==
In electrical engineering, degrees of freedom is often used to describe the number of directions in which a phased array antenna can form either beams or nulls. It is equal to one less than the number of elements contained in the array, as one element is used as a reference against which either constructive or destructive interference may be applied using each of the remaining antenna elements. Usage differs between radar practice and communication-link practice: beam steering is more prevalent for radar applications, while null steering is more prevalent for interference suppression in communication links.
== See also ==
Gimbal lock – Loss of one degree of freedom in a three-dimensional, three-gimbal mechanism
Kinematics – Branch of physics describing the motion of objects without considering forces
Kinematic pair – Connection between two physical objects which constrains their relative movement
XR-2 – Educational robot
== References ==
In linear algebra, a Householder transformation (also known as a Householder reflection or elementary reflector) is a linear transformation that describes a reflection about a plane or hyperplane containing the origin. The Householder transformation was used in a 1958 paper by Alston Scott Householder.
== Definition ==
=== Operator and transformation ===
The Householder operator may be defined over any finite-dimensional inner product space {\displaystyle V} with inner product {\displaystyle \langle \cdot ,\cdot \rangle } and unit vector {\displaystyle u\in V} as
{\displaystyle H_{u}(x):=x-2\,\langle x,u\rangle \,u\,.}
It is also common to choose a non-unit vector {\displaystyle q\in V}, and normalize it directly in the Householder operator's expression:
{\displaystyle H_{q}\left(x\right)=x-2\,{\frac {\langle x,q\rangle }{\langle q,q\rangle }}\,q\,.}
Such an operator is linear and self-adjoint.
If {\displaystyle V=\mathbb {C} ^{n}}, note that the reflection hyperplane can be defined by its normal vector, a unit vector {\textstyle {\vec {v}}\in V} (a vector with length {\textstyle 1}) that is orthogonal to the hyperplane. The reflection of a point {\textstyle x} about this hyperplane is the Householder transformation:
{\displaystyle {\vec {x}}-2\langle {\vec {x}},{\vec {v}}\rangle {\vec {v}}={\vec {x}}-2{\vec {v}}\left({\vec {v}}^{*}{\vec {x}}\right),}
where {\displaystyle {\vec {x}}} is the vector from the origin to the point {\displaystyle x}, and {\textstyle {\vec {v}}^{*}} is the conjugate transpose of {\textstyle {\vec {v}}}.
=== Householder matrix ===
The matrix constructed from this transformation can be expressed in terms of an outer product as:
{\displaystyle P=I-2{\vec {v}}{\vec {v}}^{*}}
This is known as the Householder matrix, where {\textstyle I} is the identity matrix.
==== Properties ====
The Householder matrix has the following properties:
it is Hermitian: {\textstyle P=P^{*}},
it is unitary: {\textstyle P^{-1}=P^{*}} (via the Sherman–Morrison formula),
hence it is involutory: {\textstyle P=P^{-1}}.
A Householder matrix has eigenvalues {\textstyle \pm 1}. To see this, notice that if {\textstyle {\vec {x}}} is orthogonal to the vector {\textstyle {\vec {v}}} which was used to create the reflector, then {\textstyle P_{v}{\vec {x}}=(I-2{\vec {v}}{\vec {v}}^{*}){\vec {x}}={\vec {x}}-2\langle {\vec {v}},{\vec {x}}\rangle {\vec {v}}={\vec {x}}}, i.e., {\textstyle 1} is an eigenvalue of multiplicity {\textstyle n-1}, since there are {\textstyle n-1} independent vectors orthogonal to {\textstyle {\vec {v}}}. Also, notice {\textstyle P_{v}{\vec {v}}=(I-2{\vec {v}}{\vec {v}}^{*}){\vec {v}}={\vec {v}}-2\langle {\vec {v}},{\vec {v}}\rangle {\vec {v}}=-{\vec {v}}} (since {\displaystyle {\vec {v}}} is by definition a unit vector), and so {\textstyle -1} is an eigenvalue with multiplicity {\textstyle 1}.
The determinant of a Householder reflector is {\textstyle -1}, since the determinant of a matrix is the product of its eigenvalues, in this case one of which is {\textstyle -1} with the remainder being {\textstyle 1} (as in the previous point), or via the matrix determinant lemma.
==== Example ====
Consider the normalization of a vector of 1's:
{\displaystyle {\vec {v}}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\1\end{bmatrix}}}
Then the Householder matrix corresponding to this vector is
{\displaystyle P_{v}={\begin{bmatrix}1&0\\0&1\end{bmatrix}}-2\left({\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\1\end{bmatrix}}\right)\left({\frac {1}{\sqrt {2}}}{\begin{bmatrix}1&1\end{bmatrix}}\right)={\begin{bmatrix}1&0\\0&1\end{bmatrix}}-{\begin{bmatrix}1&1\\1&1\end{bmatrix}}={\begin{bmatrix}0&-1\\-1&0\end{bmatrix}}}
Note that if we have a vector representing a coordinate in the 2D plane
{\displaystyle {\begin{bmatrix}x\\y\end{bmatrix}}}
then {\displaystyle P_{v}} swaps and negates the x and y coordinates; in other words
{\displaystyle P_{v}{\begin{bmatrix}x\\y\end{bmatrix}}={\begin{bmatrix}-y\\-x\end{bmatrix}}}
This corresponds to reflecting the vector across the line {\displaystyle y=-x}, to which our original vector {\displaystyle v} is normal.
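The example can be reproduced numerically. A minimal sketch:

```python
import numpy as np

v = np.array([1.0, 1.0]) / np.sqrt(2)      # unit normal vector
P = np.eye(2) - 2.0 * np.outer(v, v)       # Householder matrix I - 2 v v^T

assert np.allclose(P, [[0, -1], [-1, 0]])

# P is symmetric, orthogonal and involutory, with eigenvalues +1 and -1
assert np.allclose(P, P.T)
assert np.allclose(P @ P, np.eye(2))
assert np.isclose(np.linalg.det(P), -1.0)
assert np.allclose(np.linalg.eigvalsh(P), [-1.0, 1.0])

# Reflection across the line y = -x: (x, y) -> (-y, -x)
assert np.allclose(P @ [3.0, 5.0], [-5.0, -3.0])
```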
== Applications ==
=== Geometric optics ===
In geometric optics, specular reflection can be expressed in terms of the Householder matrix (see Specular reflection § Vector formulation).
=== Numerical linear algebra ===
Householder transformations are widely used in numerical linear algebra, for example, to annihilate the entries below the main diagonal of a matrix, to perform QR decompositions and in the first step of the QR algorithm. They are also widely used for transforming to a Hessenberg form. For symmetric or Hermitian matrices, the symmetry can be preserved, resulting in tridiagonalization. Because they involve only a rank-one update and make use of low-level BLAS-1 operations, they can be quite efficient.
==== QR decomposition ====
Householder transformations can be used to calculate a QR decomposition. Consider a matrix already triangularized up to column {\displaystyle i}; our goal is then to construct Householder matrices that act upon the principal submatrices of a given matrix
{\displaystyle {\begin{bmatrix}a_{11}&a_{12}&\cdots &&&a_{1n}\\0&a_{22}&\cdots &&&a_{2n}\\\vdots &&\ddots &&&\vdots \\0&\cdots &0&x_{1}=a_{ii}&\cdots &a_{in}\\0&\cdots &0&\vdots &&\vdots \\0&\cdots &0&x_{n}=a_{ni}&\cdots &a_{nn}\end{bmatrix}}}
via the matrix
{\displaystyle {\begin{bmatrix}I_{i-1}&0\\0&P_{v}\end{bmatrix}}}.
(Note that we already established that Householder transformations are unitary matrices; since a product of unitary matrices is itself unitary, this gives us the unitary matrix of the QR decomposition.)
If we can find a {\displaystyle {\vec {v}}} so that {\displaystyle P_{v}} maps {\displaystyle {\vec {x}}} onto a multiple of {\displaystyle {\vec {e}}_{1}}, we could accomplish this. Thinking geometrically, we are looking for a plane so that the reflection about this plane happens to land directly on the basis vector. In other words,
{\displaystyle P_{v}{\vec {x}}=\alpha {\vec {e}}_{1}\qquad (1)}
for some constant {\displaystyle \alpha }. However, for this to happen, we must have {\displaystyle {\vec {v}}\propto {\vec {x}}-\alpha {\vec {e}}_{1}}. And since {\displaystyle {\vec {v}}} is a unit vector, this means that we must have
{\displaystyle {\vec {v}}={\frac {{\vec {x}}-\alpha {\vec {e}}_{1}}{\|{\vec {x}}-\alpha {\vec {e}}_{1}\|_{2}}}\qquad (2)}
Now if we apply equation (2) back into equation (1), we get
{\displaystyle {\vec {x}}-\alpha {\vec {e}}_{1}=2\left\langle {\vec {x}},{\frac {{\vec {x}}-\alpha {\vec {e}}_{1}}{\|{\vec {x}}-\alpha {\vec {e}}_{1}\|_{2}}}\right\rangle {\frac {{\vec {x}}-\alpha {\vec {e}}_{1}}{\|{\vec {x}}-\alpha {\vec {e}}_{1}\|_{2}}}}
Or, in other words, by comparing the scalars in front of the vector {\displaystyle {\vec {x}}-\alpha {\vec {e}}_{1}} we must have
{\displaystyle \|{\vec {x}}-\alpha {\vec {e}}_{1}\|_{2}^{2}=2\langle {\vec {x}},{\vec {x}}-\alpha {\vec {e}}_{1}\rangle },
or
{\displaystyle 2(\|{\vec {x}}\|_{2}^{2}-\alpha x_{1})=\|{\vec {x}}\|_{2}^{2}-2\alpha x_{1}+\alpha ^{2}}
which means that we can solve for {\displaystyle \alpha } as
{\displaystyle \alpha =\pm \|{\vec {x}}\|_{2}}
This completes the construction; however, in practice we want to avoid catastrophic cancellation in equation (2). To do so, we choose the sign of {\displaystyle \alpha } as
{\displaystyle \alpha =-\operatorname {sign} (\operatorname {Re} (x_{1}))\,\|{\vec {x}}\|_{2}}
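The construction above translates directly into code. A compact sketch of Householder QR for a real square matrix, using the sign choice above to avoid cancellation:

```python
import numpy as np

def householder_qr(A):
    """QR decomposition by successive Householder reflections."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    R = A.copy()
    for i in range(min(m - 1, n)):
        x = R[i:, i]
        # alpha = -sign(x1) * ||x||, with sign(0) treated as +1
        alpha = -np.linalg.norm(x) if x[0] >= 0 else np.linalg.norm(x)
        v = x - alpha * np.eye(len(x))[0]   # v proportional to x - alpha*e1
        norm_v = np.linalg.norm(v)
        if norm_v == 0:
            continue                        # column already reduced
        v = v / norm_v
        P = np.eye(m)
        P[i:, i:] -= 2.0 * np.outer(v, v)   # embed the reflector in the identity
        R = P @ R
        Q = Q @ P                           # P is symmetric, so P^T = P
    return Q, R

A = np.array([[4.0, 1.0], [3.0, 2.0]])
Q, R = householder_qr(A)
assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(np.tril(R, -1), 0.0)     # R is upper triangular
```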
==== Tridiagonalization (Hessenberg) ====
This procedure is presented in Numerical Analysis by Burden and Faires, and works when the matrix is symmetric. In the non-symmetric case, it is still useful as a similar procedure can result in a Hessenberg matrix.
It uses a slightly altered {\displaystyle \operatorname {sgn} } function with {\displaystyle \operatorname {sgn} (0)=1}.
In the first step, to form the Householder matrix in each step we need to determine {\textstyle \alpha } and {\textstyle r}, which are:
{\displaystyle {\begin{aligned}\alpha &=-\operatorname {sgn} \left(a_{21}\right){\sqrt {\sum _{j=2}^{n}a_{j1}^{2}}};\\r&={\sqrt {{\frac {1}{2}}\left(\alpha ^{2}-a_{21}\alpha \right)}};\end{aligned}}}
From {\textstyle \alpha } and {\textstyle r}, construct the vector {\textstyle v}:
{\displaystyle {\vec {v}}^{(1)}={\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n}\end{bmatrix}},}
where {\textstyle v_{1}=0}, {\textstyle v_{2}={\frac {a_{21}-\alpha }{2r}}}, and {\displaystyle v_{k}={\frac {a_{k1}}{2r}}} for each {\displaystyle k=3,4,\ldots ,n}.
Then compute:
{\displaystyle {\begin{aligned}P^{1}&=I-2{\vec {v}}^{(1)}\left({\vec {v}}^{(1)}\right)^{\textsf {T}}\\A^{(2)}&=P^{1}AP^{1}\end{aligned}}}
Having found {\textstyle P^{1}} and computed {\textstyle A^{(2)}}, the process is repeated for {\textstyle k=2,3,\ldots ,n-2} as follows:
{\displaystyle {\begin{aligned}\alpha &=-\operatorname {sgn} \left(a_{k+1,k}^{k}\right){\sqrt {\sum _{j=k+1}^{n}\left(a_{jk}^{k}\right)^{2}}}\\[2pt]r&={\sqrt {{\frac {1}{2}}\left(\alpha ^{2}-a_{k+1,k}^{k}\alpha \right)}}\\[2pt]v_{1}^{k}&=v_{2}^{k}=\cdots =v_{k}^{k}=0\\[2pt]v_{k+1}^{k}&={\frac {a_{k+1,k}^{k}-\alpha }{2r}}\\v_{j}^{k}&={\frac {a_{jk}^{k}}{2r}}{\text{ for }}j=k+2,\ k+3,\ \ldots ,\ n\\P^{k}&=I-2{\vec {v}}^{(k)}\left({\vec {v}}^{(k)}\right)^{\textsf {T}}\\A^{(k+1)}&=P^{k}A^{(k)}P^{k}\end{aligned}}}
Continuing in this manner, the tridiagonal and symmetric matrix is formed.
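The per-step formulas above can be collected into a short NumPy sketch. This is a teaching illustration, not a production routine (in practice one would call a tuned library function such as `scipy.linalg.hessenberg`); the treatment of sgn(0) as +1 is a convention chosen here.

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a real symmetric matrix to a similar tridiagonal matrix
    using the Householder formulas above (teaching sketch)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n - 2):
        col = A[k + 1:, k]
        s = np.sqrt(np.sum(col ** 2))
        if s == 0.0:
            continue  # column already zero below the subdiagonal
        # alpha = -sgn(a_{k+1,k}) * sqrt(sum_j a_{jk}^2); sgn(0) taken as +1
        alpha = -s if A[k + 1, k] >= 0 else s
        r = np.sqrt(0.5 * (alpha ** 2 - A[k + 1, k] * alpha))
        v = np.zeros(n)
        v[k + 1] = (A[k + 1, k] - alpha) / (2 * r)
        v[k + 2:] = A[k + 2:, k] / (2 * r)       # v is a unit vector by construction
        P = np.eye(n) - 2.0 * np.outer(v, v)     # Householder reflector P^k
        A = P @ A @ P                            # similarity transform A^(k+1)
    return A
```

Applied to the Burden and Faires matrix of the example below, this reproduces the tridiagonal matrix A3.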
==== Examples ====
In this example, also from Burden and Faires, the given matrix is transformed to the similar tridiagonal matrix A3 by using the Householder method.
{\displaystyle \mathbf {A} ={\begin{bmatrix}4&1&-2&2\\1&2&0&1\\-2&0&3&-2\\2&1&-2&-1\end{bmatrix}},}
Following those steps in the Householder method, we have:
The first Householder matrix:
{\displaystyle {\begin{aligned}Q_{1}&={\begin{bmatrix}1&0&0&0\\0&-{\frac {1}{3}}&{\frac {2}{3}}&-{\frac {2}{3}}\\0&{\frac {2}{3}}&{\frac {2}{3}}&{\frac {1}{3}}\\0&-{\frac {2}{3}}&{\frac {1}{3}}&{\frac {2}{3}}\end{bmatrix}},\\A_{2}=Q_{1}AQ_{1}&={\begin{bmatrix}4&-3&0&0\\-3&{\frac {10}{3}}&1&{\frac {4}{3}}\\0&1&{\frac {5}{3}}&-{\frac {4}{3}}\\0&{\frac {4}{3}}&-{\frac {4}{3}}&-1\end{bmatrix}},\end{aligned}}}
Using A₂, we form:
{\displaystyle {\begin{aligned}Q_{2}&={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&-{\frac {3}{5}}&-{\frac {4}{5}}\\0&0&-{\frac {4}{5}}&{\frac {3}{5}}\end{bmatrix}},\\A_{3}=Q_{2}A_{2}Q_{2}&={\begin{bmatrix}4&-3&0&0\\-3&{\frac {10}{3}}&-{\frac {5}{3}}&0\\0&-{\frac {5}{3}}&-{\frac {33}{25}}&{\frac {68}{75}}\\0&0&{\frac {68}{75}}&{\frac {149}{75}}\end{bmatrix}},\end{aligned}}}
As we can see, the final result is a tridiagonal symmetric matrix which is similar to the original one. The process is finished after two steps.
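As a quick numerical check of this example, one can rebuild Q1 from the fractions above and verify with NumPy that it is symmetric orthogonal and reproduces A2:

```python
import numpy as np

# The original matrix A and the first Householder matrix Q1 from the example.
A = np.array([[4, 1, -2, 2],
              [1, 2, 0, 1],
              [-2, 0, 3, -2],
              [2, 1, -2, -1]], dtype=float)
Q1 = np.array([[1, 0, 0, 0],
               [0, -1/3, 2/3, -2/3],
               [0, 2/3, 2/3, 1/3],
               [0, -2/3, 1/3, 2/3]])
A2 = Q1 @ A @ Q1  # should match the A2 printed above
A2_expected = np.array([[4, -3, 0, 0],
                        [-3, 10/3, 1, 4/3],
                        [0, 1, 5/3, -4/3],
                        [0, 4/3, -4/3, -1]])
```

Since Q1 is orthogonal, A2 is similar to A and has the same eigenvalues.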
==== Quantum computation ====
As unitary matrices are useful in quantum computation, and Householder transformations are unitary, they are very useful in quantum computing. One of the central algorithms where they are useful is Grover's algorithm, in which we are trying to solve for a representation of an oracle function, which turns out to be a Householder transformation:
{\displaystyle {\begin{cases}U_{\omega }|x\rangle =-|x\rangle &{\text{for }}x=\omega {\text{, that is, }}f(x)=1,\\U_{\omega }|x\rangle =|x\rangle &{\text{for }}x\neq \omega {\text{, that is, }}f(x)=0.\end{cases}}}
(Here |x⟩ is part of the bra–ket notation and is analogous to the vector x⃗ used previously.)
This is done via an algorithm that iterates the oracle function Uω and another operator Us, known as the Grover diffusion operator, defined by
{\displaystyle |s\rangle ={\frac {1}{\sqrt {N}}}\sum _{x=0}^{N-1}|x\rangle }
and
{\displaystyle U_{s}=2\left|s\right\rangle \!\!\left\langle s\right|-I}.
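A minimal numerical sketch of these two operators, assuming an illustrative N = 4 (two-qubit) search space with marked item ω = 2 (both values chosen for the sketch, not taken from the text). Us is a reflection about the uniform superposition |s⟩, and Uω is a reflection that flips the sign of the marked basis state:

```python
import numpy as np

N = 4
omega = 2                                # assumed marked item
s = np.full(N, 1 / np.sqrt(N))           # |s> = (1/sqrt(N)) * sum_x |x>
U_s = 2 * np.outer(s, s) - np.eye(N)     # Grover diffusion operator 2|s><s| - I
U_w = np.eye(N)
U_w[omega, omega] = -1                   # oracle: U_w|x> = -|x> iff x = omega

state = s.copy()
state = U_s @ (U_w @ state)              # one Grover iteration
```

Both operators are reflections (their own inverses), and for N = 4 a single Grover iteration concentrates all amplitude on the marked item.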
== Computational and theoretical relationship to other unitary transformations ==
The Householder transformation is a reflection about a hyperplane with unit normal vector v, as stated earlier. An N-by-N unitary transformation U satisfies UU* = I. Taking the determinant (the N-th power of the geometric mean) and trace (proportional to the arithmetic mean) of a unitary matrix reveals that its eigenvalues λᵢ have unit modulus. This can be seen directly and swiftly:
{\displaystyle {\begin{aligned}{\frac {\operatorname {Trace} \left(UU^{*}\right)}{N}}&={\frac {\sum _{j=1}^{N}\left|\lambda _{j}\right|^{2}}{N}}=1,&\operatorname {det} \left(UU^{*}\right)&=\prod _{j=1}^{N}\left|\lambda _{j}\right|^{2}=1.\end{aligned}}}
Since arithmetic and geometric means are equal if the variables are constant (see inequality of arithmetic and geometric means), we establish the claim of unit modulus.
For the case of real-valued unitary matrices we obtain orthogonal matrices, UUᵀ = I. It follows rather readily (see orthogonal matrix) that any orthogonal matrix can be decomposed into a product of 2-by-2 rotations, called Givens rotations, and Householder reflections. This is appealing intuitively, since multiplication of a vector by an orthogonal matrix preserves the length of that vector, and rotations and reflections exhaust the set of (real-valued) geometric operations that leave a vector's length invariant.
The Householder transformation was shown to have a one-to-one relationship with the canonical coset decomposition of unitary matrices defined in group theory, which can be used to parametrize unitary operators in a very efficient manner.
Finally, we note that a single Householder transform, unlike a solitary Givens transform, can act on all columns of a matrix, and as such exhibits the lowest computational cost for QR decomposition and tridiagonalization. The penalty for this "computational optimality" is, of course, that Householder operations cannot be as deeply or efficiently parallelized. As such, Householder is preferred for dense matrices on sequential machines, whilst Givens is preferred for sparse matrices and/or parallel machines.
== See also ==
Block reflector
Givens rotation
Jacobi rotation
== Notes ==
== References ==
LaBudde, C.D. (1963). "The reduction of an arbitrary real square matrix to tridiagonal form using similarity transformations". Mathematics of Computation. 17 (84). American Mathematical Society: 433–437. doi:10.2307/2004005. JSTOR 2004005. MR 0156455.
Morrison, D.D. (1960). "Remarks on the Unitary Triangularization of a Nonsymmetric Matrix". Journal of the ACM. 7 (2): 185–186. doi:10.1145/321021.321030. MR 0114291. S2CID 23361868.
Cipra, Barry A. (2000). "The Best of the 20th Century: Editors Name Top 10 Algorithms". SIAM News. 33 (4): 1. (Herein Householder Transformation is cited as a top 10 algorithm of this century)
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 11.3.2. Householder Method". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Archived from the original on 2011-08-11. Retrieved 2011-08-13.
Roman, Stephen (2008), Advanced Linear Algebra, Graduate Texts in Mathematics (Third ed.), Springer, ISBN 978-0-387-72828-5
In physics, scattering is a wide range of physical processes where moving particles or radiation of some form, such as light or sound, are forced to deviate from a straight trajectory by localized non-uniformities (including particles and radiation) in the medium through which they pass. In conventional use, this also includes deviation of reflected radiation from the angle predicted by the law of reflection. Reflections of radiation that undergo scattering are often called diffuse reflections and unscattered reflections are called specular (mirror-like) reflections. Originally, the term was confined to light scattering (going back at least as far as Isaac Newton in the 17th century). As more "ray"-like phenomena were discovered, the idea of scattering was extended to them, so that William Herschel could refer to the scattering of "heat rays" (not then recognized as electromagnetic in nature) in 1800. John Tyndall, a pioneer in light scattering research, noted the connection between light scattering and acoustic scattering in the 1870s. Near the end of the 19th century, the scattering of cathode rays (electron beams) and X-rays was observed and discussed. With the discovery of subatomic particles (e.g. Ernest Rutherford in 1911) and the development of quantum theory in the 20th century, the sense of the term became broader as it was recognized that the same mathematical frameworks used in light scattering could be applied to many other phenomena.
Scattering can refer to the consequences of particle-particle collisions between molecules, atoms, electrons, photons and other particles. Examples include: cosmic ray scattering in the Earth's upper atmosphere; particle collisions inside particle accelerators; electron scattering by gas atoms in fluorescent lamps; and neutron scattering inside nuclear reactors.
The types of non-uniformities which can cause scattering, sometimes known as scatterers or scattering centers, are too numerous to list, but a small sample includes particles, bubbles, droplets, density fluctuations in fluids, crystallites in polycrystalline solids, defects in monocrystalline solids, surface roughness, cells in organisms, and textile fibers in clothing. The effects of such features on the path of almost any type of propagating wave or moving particle can be described in the framework of scattering theory.
Some areas where scattering and scattering theory are significant include radar sensing, medical ultrasound, semiconductor wafer inspection, polymerization process monitoring, acoustic tiling, free-space communications and computer-generated imagery. Particle-particle scattering theory is important in areas such as particle physics, atomic, molecular, and optical physics, nuclear physics and astrophysics. In particle physics the quantum interaction and scattering of fundamental particles is described by the Scattering Matrix or S-Matrix, introduced and developed by John Archibald Wheeler and Werner Heisenberg.
Scattering is quantified using many different concepts, including scattering cross section (σ), attenuation coefficients, the bidirectional scattering distribution function (BSDF), S-matrices, and mean free path.
== Single and multiple scattering ==
When radiation is only scattered by one localized scattering center, this is called single scattering. It is more common that scattering centers are grouped together; in such cases, radiation may scatter many times, in what is known as multiple scattering. The main difference between the effects of single and multiple scattering is that single scattering can usually be treated as a random phenomenon, whereas multiple scattering, somewhat counterintuitively, can be modeled as a more deterministic process because the combined results of a large number of scattering events tend to average out. Multiple scattering can thus often be modeled well with diffusion theory.
Because the location of a single scattering center is not usually well known relative to the path of the radiation, the outcome, which tends to depend strongly on the exact incoming trajectory, appears random to an observer. This type of scattering would be exemplified by an electron being fired at an atomic nucleus. In this case, the atom's exact position relative to the path of the electron is unknown and would be unmeasurable, so the exact trajectory of the electron after the collision cannot be predicted. Single scattering is therefore often described by probability distributions.
With multiple scattering, the randomness of the interaction tends to be averaged out by a large number of scattering events, so that the final path of the radiation appears to be a deterministic distribution of intensity. This is exemplified by a light beam passing through thick fog. Multiple scattering is highly analogous to diffusion, and the terms multiple scattering and diffusion are interchangeable in many contexts. Optical elements designed to produce multiple scattering are thus known as diffusers. Coherent backscattering, an enhancement of backscattering that occurs when coherent radiation is multiply scattered by a random medium, is usually attributed to weak localization.
Not all single scattering is random, however. A well-controlled laser beam can be exactly positioned to scatter off a microscopic particle with a deterministic outcome, for instance. Such situations are encountered in radar scattering as well, where the targets tend to be macroscopic objects such as people or aircraft.
Similarly, multiple scattering can sometimes have somewhat random outcomes, particularly with coherent radiation. The random fluctuations in the multiply scattered intensity of coherent radiation are called speckles. Speckle also occurs if multiple parts of a coherent wave scatter from different centers. In certain rare circumstances, multiple scattering may only involve a small number of interactions such that the randomness is not completely averaged out. These systems are considered to be some of the most difficult to model accurately.
The description of scattering and the distinction between single and multiple scattering are tightly related to wave–particle duality.
== Theory ==
Scattering theory is a framework for studying and understanding the scattering of waves and particles. Wave scattering corresponds to the collision and scattering of a wave with some material object, for instance sunlight scattered by raindrops to form a rainbow. Scattering also includes the interaction of billiard balls on a table, the Rutherford scattering (or angle change) of alpha particles by gold nuclei, the Bragg scattering (or diffraction) of electrons and X-rays by a cluster of atoms, and the inelastic scattering of a fission fragment as it traverses a thin foil. More precisely, scattering consists of the study of how solutions of partial differential equations, propagating freely "in the distant past", come together and interact with one another or with a boundary condition, and then propagate away "to the distant future".
The direct scattering problem is the problem of determining the distribution of scattered radiation/particle flux based on the characteristics of the scatterer. The inverse scattering problem is the problem of determining the characteristics of an object (e.g., its shape, internal constitution) from measurement data of radiation or particles scattered from the object.
=== Attenuation due to scattering ===
When the target is a set of many scattering centers whose relative position varies unpredictably, it is customary to think of a range equation whose arguments take different forms in different application areas. In the simplest case, consider an interaction that removes particles from the "unscattered beam" at a uniform rate that is proportional to the incident number of particles per unit area per unit time I, i.e.
{\displaystyle {\frac {dI}{dx}}=-QI}
where Q is an interaction coefficient and x is the distance traveled in the target.
The above first-order ordinary differential equation has solutions of the form:
{\displaystyle I=I_{o}e^{-Q\Delta x}=I_{o}e^{-{\frac {\Delta x}{\lambda }}}=I_{o}e^{-\sigma (\eta \Delta x)}=I_{o}e^{-{\frac {\rho \Delta x}{\tau }}},}
where Io is the initial flux, path length Δx ≡ x − xo, the second equality defines an interaction mean free path λ, the third uses the number of targets per unit volume η to define an area cross-section σ, and the last uses the target mass density ρ to define a density mean free path τ. Hence one converts between these quantities via Q = 1/λ = ησ = ρ/τ, as shown in the figure at left.
In electromagnetic absorption spectroscopy, for example, interaction coefficient (e.g. Q in cm−1) is variously called opacity, absorption coefficient, and attenuation coefficient. In nuclear physics, area cross-sections (e.g. σ in barns or units of 10−24 cm2), density mean free path (e.g. τ in grams/cm2), and its reciprocal the mass attenuation coefficient (e.g. in cm2/gram) or area per nucleon are all popular, while in electron microscopy the inelastic mean free path (e.g. λ in nanometers) is often discussed instead.
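The conversions Q = 1/λ = ησ = ρ/τ and the exponential attenuation law can be illustrated with assumed numbers (the values of Q, I0 and Δx below are made up for this sketch, not taken from the text):

```python
import math

Q = 0.5                     # assumed interaction coefficient, 1/cm
lam = 1.0 / Q               # mean free path: lambda = 1/Q = 2 cm
I0 = 1000.0                 # assumed incident flux (particles/area/time)
dx = 3.0                    # assumed path length in the target, cm

# Unscattered flux after traversing dx; both forms are equivalent.
I = I0 * math.exp(-Q * dx)
I_alt = I0 * math.exp(-dx / lam)
```

About 22% of the beam survives unscattered in this example, since Q·Δx = 1.5 mean free paths are traversed.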
=== Elastic and inelastic scattering ===
The term "elastic scattering" implies that the internal states of the scattering particles do not change, and hence they emerge unchanged from the scattering process. In inelastic scattering, by contrast, the particles' internal state is changed, which may amount to exciting some of the electrons of a scattering atom, or the complete annihilation of a scattering particle and the creation of entirely new particles.
The example of scattering in quantum chemistry is particularly instructive, as the theory is reasonably complex while still having a good foundation on which to build an intuitive understanding. When two atoms are scattered off one another, one can understand them as being the bound state solutions of some differential equation. Thus, for example, the hydrogen atom corresponds to a solution to the Schrödinger equation with a negative inverse-power (i.e., attractive Coulombic) central potential. The scattering of two hydrogen atoms will disturb the state of each atom, resulting in one or both becoming excited, or even ionized, representing an inelastic scattering process.
The term "deep inelastic scattering" refers to a special kind of scattering experiment in particle physics.
=== Mathematical framework ===
In mathematics, scattering theory deals with a more abstract formulation of the same set of concepts. For example, if a differential equation is known to have some simple, localized solutions, and the solutions are a function of a single parameter, that parameter can take the conceptual role of time. One then asks what might happen if two such solutions are set up far away from each other, in the "distant past", and are made to move towards each other, interact (under the constraint of the differential equation) and then move apart in the "future". The scattering matrix then pairs solutions in the "distant past" to those in the "distant future".
Solutions to differential equations are often posed on manifolds. Frequently, the means to the solution requires the study of the spectrum of an operator on the manifold. As a result, the solutions often have a spectrum that can be identified with a Hilbert space, and scattering is described by a certain map, the S matrix, on Hilbert spaces. Solutions with a discrete spectrum correspond to bound states in quantum mechanics, while a continuous spectrum is associated with scattering states. The study of inelastic scattering then asks how discrete and continuous spectra are mixed together.
An important, notable development is the inverse scattering transform, central to the solution of many exactly solvable models.
== Theoretical physics ==
In mathematical physics, scattering theory is a framework for studying and understanding the interaction or scattering of solutions to partial differential equations. In acoustics, the differential equation is the wave equation, and scattering studies how its solutions, the sound waves, scatter from solid objects or propagate through non-uniform media (such as sound waves, in sea water, coming from a submarine). In the case of classical electrodynamics, the differential equation is again the wave equation, and the scattering of light or radio waves is studied. In particle physics, the equations are those of Quantum electrodynamics, Quantum chromodynamics and the Standard Model, the solutions of which correspond to fundamental particles.
In regular quantum mechanics, which includes quantum chemistry, the relevant equation is the Schrödinger equation, although equivalent formulations, such as the Lippmann-Schwinger equation and the Faddeev equations, are also largely used. The solutions of interest describe the long-term motion of free atoms, molecules, photons, electrons, and protons. The scenario is that several particles come together from an infinite distance away. These reagents then collide, optionally reacting, getting destroyed or creating new particles. The products and unused reagents then fly away to infinity again. (The atoms and molecules are effectively particles for our purposes. Also, under everyday circumstances, only photons are being created and destroyed.) The solutions reveal which directions the products are most likely to fly off to and how quickly. They also reveal the probability of various reactions, creations, and decays occurring. There are two predominant techniques of finding solutions to scattering problems: partial wave analysis, and the Born approximation.
== Electromagnetics ==
Electromagnetic waves are one of the best known and most commonly encountered forms of radiation that undergo scattering. Scattering of light and radio waves (especially in radar) is particularly important. Several different aspects of electromagnetic scattering are distinct enough to have conventional names. Major forms of elastic light scattering (involving negligible energy transfer) are Rayleigh scattering and Mie scattering. Inelastic scattering includes Brillouin scattering, Raman scattering, inelastic X-ray scattering and Compton scattering.
Light scattering is one of the two major physical processes that contribute to the visible appearance of most objects, the other being absorption. Surfaces described as white owe their appearance to multiple scattering of light by internal or surface inhomogeneities in the object, for example by the boundaries of transparent microscopic crystals that make up a stone or by the microscopic fibers in a sheet of paper. More generally, the gloss (or lustre or sheen) of the surface is determined by scattering. Highly scattering surfaces are described as being dull or having a matte finish, while the absence of surface scattering leads to a glossy appearance, as with polished metal or stone.
Spectral absorption, the selective absorption of certain colors, determines the color of most objects with some modification by elastic scattering. The apparent blue color of veins in skin is a common example where both spectral absorption and scattering play important and complex roles in the coloration. Light scattering can also create color without absorption, often shades of blue, as with the sky (Rayleigh scattering), the human blue iris, and the feathers of some birds (Prum et al. 1998). However, resonant light scattering in nanoparticles can produce many different highly saturated and vibrant hues, especially when surface plasmon resonance is involved (Roqué et al. 2006).
Models of light scattering can be divided into three domains based on a dimensionless size parameter α, which is defined as:
{\displaystyle \alpha =\pi D_{\text{p}}/\lambda ,}
where πDp is the circumference of a particle and λ is the wavelength of incident radiation in the medium. Based on the value of α, these domains are:
α ≪ 1: Rayleigh scattering (small particle compared to wavelength of light);
α ≈ 1: Mie scattering (particle about the same size as wavelength of light, valid only for spheres);
α ≫ 1: geometric scattering (particle much larger than wavelength of light).
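These domains can be sketched as a small classifier. The cutoffs 0.1 and 10 used below are illustrative conventions for "much smaller/larger than", not sharp physical boundaries:

```python
import math

def scattering_regime(d_p, wavelength):
    """Classify by the size parameter alpha = pi * D_p / lambda.
    d_p and wavelength must be in the same units."""
    alpha = math.pi * d_p / wavelength
    if alpha < 0.1:
        return alpha, "Rayleigh"    # particle much smaller than wavelength
    if alpha > 10:
        return alpha, "geometric"   # particle much larger than wavelength
    return alpha, "Mie"             # particle comparable to wavelength
```

For example, a ~0.3 nm gas molecule in 500 nm visible light falls in the Rayleigh domain, while a 10 μm fog droplet falls in the geometric domain.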
Rayleigh scattering is a process in which electromagnetic radiation (including light) is scattered by a small spherical volume of variant refractive indexes, such as a particle, bubble, droplet, or even a density fluctuation. This effect was first modeled successfully by Lord Rayleigh, from whom it gets its name. In order for Rayleigh's model to apply, the sphere must be much smaller in diameter than the wavelength (λ) of the scattered wave; typically the upper limit is taken to be about 1/10 the wavelength. In this size regime, the exact shape of the scattering center is usually not very significant and can often be treated as a sphere of equivalent volume. The inherent scattering that radiation undergoes passing through a pure gas is due to microscopic density fluctuations as the gas molecules move around, which are normally small enough in scale for Rayleigh's model to apply. This scattering mechanism is the primary cause of the blue color of the Earth's sky on a clear day, as the shorter blue wavelengths of sunlight passing overhead are more strongly scattered than the longer red wavelengths according to Rayleigh's famous 1/λ4 relation. Along with absorption, such scattering is a major cause of the attenuation of radiation by the atmosphere. The degree of scattering varies as a function of the ratio of the particle diameter to the wavelength of the radiation, along with many other factors including polarization, angle, and coherence.
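A one-line consequence of the 1/λ⁴ relation, using illustrative wavelengths of 450 nm for blue and 700 nm for red light:

```python
# Rayleigh's 1/lambda^4 law: relative scattering strength of blue
# versus red sunlight (wavelengths in nm; the ratio is dimensionless).
blue, red = 450.0, 700.0
ratio = (red / blue) ** 4  # blue is scattered roughly 6 times more strongly
```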
For larger diameters, the problem of electromagnetic scattering by spheres was first solved by Gustav Mie, and scattering by spheres larger than the Rayleigh range is therefore usually known as Mie scattering. In the Mie regime, the shape of the scattering center becomes much more significant and the theory only applies well to spheres and, with some modification, spheroids and ellipsoids. Closed-form solutions for scattering by certain other simple shapes exist, but no general closed-form solution is known for arbitrary shapes.
Both Mie and Rayleigh scattering are considered elastic scattering processes, in which the energy (and thus wavelength and frequency) of the light is not substantially changed. However, electromagnetic radiation scattered by moving scattering centers does undergo a Doppler shift, which can be detected and used to measure the velocity of the scattering center/s in forms of techniques such as lidar and radar. This shift involves a slight change in energy.
At values of the ratio of particle diameter to wavelength more than about 10, the laws of geometric optics are mostly sufficient to describe the interaction of light with the particle. Mie theory can still be used for these larger spheres, but the solution often becomes numerically unwieldy.
For modeling of scattering in cases where the Rayleigh and Mie models do not apply such as larger, irregularly shaped particles, there are many numerical methods that can be used. The most common are finite-element methods which solve Maxwell's equations to find the distribution of the scattered electromagnetic field. Sophisticated software packages exist which allow the user to specify the refractive index or indices of the scattering feature in space, creating a 2- or sometimes 3-dimensional model of the structure. For relatively large and complex structures, these models usually require substantial execution times on a computer.
Electrophoresis involves the migration of macromolecules under the influence of an electric field. Electrophoretic light scattering involves passing an electric field through a liquid which makes particles move. The bigger the charge is on the particles, the faster they are able to move.
== See also ==
== References ==
== External links ==
Research group on light scattering and diffusion in complex systems
Multiple light scattering from a photonic science point of view
Neutron Scattering Web
Neutron and X-Ray Scattering
World directory of neutron scattering instruments
Scattering and diffraction
Optics Classification and Indexing Scheme (OCIS), Optical Society of America, 1997
Lectures of the European school on theoretical methods for electron and positron induced chemistry, Prague, Feb. 2005
E. Koelink, Lectures on scattering theory, Delft, the Netherlands, 2006
In mathematics, a recurrence relation is an equation according to which the nth term of a sequence of numbers is equal to some combination of the previous terms. Often, only k previous terms of the sequence appear in the equation, for a parameter k that is independent of n; this number k is called the order of the relation. If the values of the first k numbers in the sequence have been given, the rest of the sequence can be calculated by repeatedly applying the equation.
In linear recurrences, the nth term is equated to a linear function of the k previous terms. A famous example is the recurrence for the Fibonacci numbers,
{\displaystyle F_{n}=F_{n-1}+F_{n-2}}
where the order k is two and the linear function merely adds the two previous terms. This example is a linear recurrence with constant coefficients, because the coefficients of the linear function (1 and 1) are constants that do not depend on n. For these recurrences, one can express the general term of the sequence as a closed-form expression of n. Linear recurrences with polynomial coefficients depending on n are also important, because many common elementary functions and special functions have a Taylor series whose coefficients satisfy such a recurrence relation (see holonomic function).
Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of n.
The concept of a recurrence relation can be extended to multidimensional arrays, that is, indexed families that are indexed by tuples of natural numbers.
== Definition ==
A recurrence relation is an equation that expresses each element of a sequence as a function of the preceding ones. More precisely, in the case where only the immediately preceding element is involved, a recurrence relation has the form
{\displaystyle u_{n}=\varphi (n,u_{n-1})\quad {\text{for}}\quad n>0,}
where {\displaystyle \varphi :\mathbb {N} \times X\to X} is a function and X is a set to which the elements of the sequence must belong. For any u₀ ∈ X, this defines a unique sequence with u₀ as its first element, called the initial value.
It is easy to modify the definition to obtain sequences starting from the term of index 1 or higher.
This defines a recurrence relation of first order. A recurrence relation of order k has the form
{\displaystyle u_{n}=\varphi (n,u_{n-1},u_{n-2},\ldots ,u_{n-k})\quad {\text{for}}\quad n\geq k,}
where {\displaystyle \varphi :\mathbb {N} \times X^{k}\to X} is a function that involves k consecutive elements of the sequence.
In this case, k initial values are needed for defining a sequence.
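The definition translates directly into a small evaluator; `iterate_recurrence` and its argument order are names invented for this sketch:

```python
def iterate_recurrence(phi, initial, n_terms):
    """Evaluate u_n = phi(n, u_{n-1}, ..., u_{n-k}) for an order-k
    recurrence, given the k initial values u_0, ..., u_{k-1}."""
    seq = list(initial)
    k = len(seq)
    for n in range(k, n_terms):
        # pass the k preceding terms, most recent first
        seq.append(phi(n, *seq[-1:-k - 1:-1]))
    return seq
```

For example, the Fibonacci recurrence is the order-2 instance `phi = lambda n, a, b: a + b` with initial values `[0, 1]`, and the factorial recurrence is the order-1 instance `phi = lambda n, a: n * a` with initial value `[1]`.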
== Examples ==
=== Factorial ===
The factorial is defined by the recurrence relation
{\displaystyle n!=n\cdot (n-1)!\quad {\text{for}}\quad n>0,}
and the initial condition
{\displaystyle 0!=1.}
This is an example of a linear recurrence with polynomial coefficients of order 1, with the simple polynomial (in n) n as its only coefficient.
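As a sketch, this order-1 recurrence unrolls into a simple loop:

```python
def factorial(n):
    """n! from the recurrence n! = n * (n-1)! with initial condition 0! = 1."""
    acc = 1                       # 0! = 1
    for i in range(1, n + 1):
        acc = i * acc             # i! = i * (i-1)!
    return acc
```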
=== Logistic map ===
An example of a recurrence relation is the logistic map defined by
{\displaystyle x_{n+1}=rx_{n}(1-x_{n}),}
for a given constant r. The behavior of the sequence depends dramatically on r, but is stable when the initial condition x₀ varies.
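A minimal sketch of iterating the logistic map; the sample values of r and x₀ used below are illustrative:

```python
def logistic_orbit(r, x0, n):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) for n steps, returning the orbit."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs
```

For r = 2, orbits starting anywhere in (0, 1) converge to the fixed point 1 − 1/r = 1/2, illustrating the stability with respect to the initial condition; for r near 4 the orbit is chaotic but remains in [0, 1].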
=== Fibonacci numbers ===
The recurrence of order two satisfied by the Fibonacci numbers is the canonical example of a homogeneous linear recurrence relation with constant coefficients (see below). The Fibonacci sequence is defined using the recurrence
{\displaystyle F_{n}=F_{n-1}+F_{n-2}}
with initial conditions
{\displaystyle F_{0}=0,\quad F_{1}=1.}
Explicitly, the recurrence yields the equations
{\displaystyle F_{2}=F_{1}+F_{0},\quad F_{3}=F_{2}+F_{1},\quad F_{4}=F_{3}+F_{2},}
etc.
We obtain the sequence of Fibonacci numbers, which begins
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
The recurrence can be solved by methods described below, yielding Binet's formula, which involves powers of the two roots of the characteristic polynomial {\displaystyle t^{2}=t+1}; the generating function of the sequence is the rational function
{\displaystyle {\frac {t}{1-t-t^{2}}}.}
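Both the recurrence and Binet's formula are easy to sketch and cross-check (the floating-point form of Binet's formula is reliable only for moderate n):

```python
from math import sqrt

def fib(n):
    """F_n by direct iteration of F_n = F_{n-1} + F_{n-2}, F_0 = 0, F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Binet's formula: powers of the two roots of t^2 = t + 1.
phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2
def binet(n):
    return round((phi ** n - psi ** n) / sqrt(5))
```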
=== Binomial coefficients ===
A simple example of a multidimensional recurrence relation is given by the binomial coefficients
{\displaystyle {\tbinom {n}{k}}}, which count the ways of selecting {\displaystyle k} elements out of a set of {\displaystyle n} elements.
They can be computed by the recurrence relation
{\displaystyle {\binom {n}{k}}={\binom {n-1}{k-1}}+{\binom {n-1}{k}},}
with the base cases {\displaystyle {\tbinom {n}{0}}={\tbinom {n}{n}}=1}. Using this formula to compute the values of all binomial coefficients generates an infinite array called Pascal's triangle. The same values can also be computed directly by a different formula that is not a recurrence but uses factorials, multiplication and division, not just additions:
{\displaystyle {\binom {n}{k}}={\frac {n!}{k!(n-k)!}}.}
The binomial coefficients can also be computed with a uni-dimensional recurrence:
{\displaystyle {\binom {n}{k}}={\binom {n}{k-1}}(n-k+1)/k,}
with the initial value {\textstyle {\binom {n}{0}}=1} (the division is not written as a fraction, to emphasize that it must be computed after the multiplication so that no fractional numbers are introduced).
This recurrence is widely used in computers because it does not require building a table, as the bi-dimensional recurrence does, and does not involve very large integers, as the formula with factorials does (if one uses {\textstyle {\binom {n}{k}}={\binom {n}{n-k}},} all involved integers are smaller than the final result).
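A sketch of this uni-dimensional recurrence (our own helper function, also using the symmetry noted above to keep the intermediate integers small):

```python
def binomial(n: int, k: int) -> int:
    """Compute C(n, k) with the one-dimensional recurrence
    C(n, k) = C(n, k-1) * (n - k + 1) / k, starting from C(n, 0) = 1."""
    k = min(k, n - k)  # symmetry C(n, k) = C(n, n-k) keeps intermediates small
    c = 1  # initial value C(n, 0) = 1
    for j in range(1, k + 1):
        c = c * (n - j + 1) // j  # multiply first, then divide: stays integral
    return c

print(binomial(10, 3))  # → 120
```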
== Difference operator and difference equations ==
The difference operator is an operator that maps sequences to sequences, and, more generally, functions to functions. It is commonly denoted
{\displaystyle \Delta ,} and is defined, in functional notation, as
{\displaystyle (\Delta f)(x)=f(x+1)-f(x).}
It is thus a special case of finite difference.
When using the index notation for sequences, the definition becomes
{\displaystyle (\Delta a)_{n}=a_{n+1}-a_{n}.}
The parentheses around {\displaystyle \Delta f} and {\displaystyle \Delta a} are generally omitted, and {\displaystyle \Delta a_{n}} must be understood as the term of index n in the sequence {\displaystyle \Delta a,} and not {\displaystyle \Delta } applied to the element {\displaystyle a_{n}.}
Given a sequence {\displaystyle a=(a_{n})_{n\in \mathbb {N} },} the first difference of a is {\displaystyle \Delta a.}
The second difference is {\displaystyle \Delta ^{2}a=(\Delta \circ \Delta )a=\Delta (\Delta a).}
A simple computation shows that
{\displaystyle \Delta ^{2}a_{n}=a_{n+2}-2a_{n+1}+a_{n}.}
More generally: the kth difference is defined recursively as
{\displaystyle \Delta ^{k}=\Delta \circ \Delta ^{k-1},}
and one has
{\displaystyle \Delta ^{k}a_{n}=\sum _{t=0}^{k}(-1)^{t}{\binom {k}{t}}a_{n+k-t}.}
This relation can be inverted, giving
{\displaystyle a_{n+k}=a_{n}+{k \choose 1}\Delta a_{n}+\cdots +{k \choose k}\Delta ^{k}(a_{n}).}
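The k-th difference and the inversion formula can be verified numerically; the following is our own sketch (helper names `delta` and `kth_difference` are not standard library functions):

```python
from math import comb

def delta(seq):
    """First difference: (Δa)_n = a_{n+1} - a_n."""
    return [b - a for a, b in zip(seq, seq[1:])]

def kth_difference(seq, k):
    """k-th difference, defined recursively as Δ^k = Δ ∘ Δ^(k-1)."""
    for _ in range(k):
        seq = delta(seq)
    return seq

a = [m ** 3 for m in range(10)]  # sample sequence a_m = m^3

# Check the inversion a_{n+k} = sum_j C(k, j) Δ^j a_n at n = 0, k = 4.
n, k = 0, 4
recovered = sum(comb(k, j) * kth_difference(a, j)[n] for j in range(k + 1))
print(recovered, a[n + k])  # → 64 64
```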
A difference equation of order k is an equation that involves the first k differences of a sequence or a function, in the same way as a differential equation of order k relates the first k derivatives of a function.
The two relations above allow transforming a recurrence relation of order k into a difference equation of order k and, conversely, a difference equation of order k into a recurrence relation of order k. Each transformation is the inverse of the other, and the sequences that are solutions of the difference equation are exactly those that satisfy the recurrence relation.
For example, the difference equation
{\displaystyle 3\Delta ^{2}a_{n}+2\Delta a_{n}+7a_{n}=0}
is equivalent to the recurrence relation
{\displaystyle 3a_{n+2}=4a_{n+1}-8a_{n},}
in the sense that the two equations are satisfied by the same sequences.
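This equivalence can be checked numerically; in the sketch below (our own code) a sequence is generated from the recurrence and then tested against the difference equation, using the expansions of Δ and Δ² given above:

```python
# Generate a sequence from the recurrence 3 a_{n+2} = 4 a_{n+1} - 8 a_n
# with arbitrary initial values, then check the difference equation on it.
a = [1.0, 2.0]
for _ in range(20):
    a.append((4 * a[-1] - 8 * a[-2]) / 3)

for n in range(20):
    d1 = a[n + 1] - a[n]                 # Δa_n
    d2 = a[n + 2] - 2 * a[n + 1] + a[n]  # Δ²a_n
    # 3Δ²a_n + 2Δa_n + 7a_n should vanish (up to rounding) for every n
    assert abs(3 * d2 + 2 * d1 + 7 * a[n]) < 1e-6 * max(1.0, abs(a[n]))
print("difference equation satisfied")
```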
As it is equivalent for a sequence to satisfy a recurrence relation or to be the solution of a difference equation, the two terms "recurrence relation" and "difference equation" are sometimes used interchangeably. See Rational difference equation and Matrix difference equation for examples of the use of "difference equation" instead of "recurrence relation".
Difference equations resemble differential equations, and this resemblance is often used to adapt methods for solving differential equations to difference equations, and therefore to recurrence relations.
Summation equations relate to difference equations as integral equations relate to differential equations. See time scale calculus for a unification of the theory of difference equations with that of differential equations.
=== From sequences to grids ===
Single-variable or one-dimensional recurrence relations are about sequences (i.e. functions defined on one-dimensional grids). Multi-variable or n-dimensional recurrence relations are about {\displaystyle n}-dimensional grids. Functions defined on {\displaystyle n}-grids can also be studied with partial difference equations.
== Solving ==
=== Solving linear recurrence relations with constant coefficients ===
=== Solving first-order non-homogeneous recurrence relations with variable coefficients ===
Moreover, for the general first-order non-homogeneous linear recurrence relation with variable coefficients:
{\displaystyle a_{n+1}=f_{n}a_{n}+g_{n},\qquad f_{n}\neq 0,}
there is also a systematic method for solving it:
{\displaystyle a_{n+1}-f_{n}a_{n}=g_{n}}
{\displaystyle {\frac {a_{n+1}}{\prod _{k=0}^{n}f_{k}}}-{\frac {f_{n}a_{n}}{\prod _{k=0}^{n}f_{k}}}={\frac {g_{n}}{\prod _{k=0}^{n}f_{k}}}}
{\displaystyle {\frac {a_{n+1}}{\prod _{k=0}^{n}f_{k}}}-{\frac {a_{n}}{\prod _{k=0}^{n-1}f_{k}}}={\frac {g_{n}}{\prod _{k=0}^{n}f_{k}}}}
Let
{\displaystyle A_{n}={\frac {a_{n}}{\prod _{k=0}^{n-1}f_{k}}},}
Then
{\displaystyle A_{n+1}-A_{n}={\frac {g_{n}}{\prod _{k=0}^{n}f_{k}}}}
{\displaystyle \sum _{m=0}^{n-1}(A_{m+1}-A_{m})=A_{n}-A_{0}=\sum _{m=0}^{n-1}{\frac {g_{m}}{\prod _{k=0}^{m}f_{k}}}}
{\displaystyle {\frac {a_{n}}{\prod _{k=0}^{n-1}f_{k}}}=A_{0}+\sum _{m=0}^{n-1}{\frac {g_{m}}{\prod _{k=0}^{m}f_{k}}}}
{\displaystyle a_{n}=\left(\prod _{k=0}^{n-1}f_{k}\right)\left(A_{0}+\sum _{m=0}^{n-1}{\frac {g_{m}}{\prod _{k=0}^{m}f_{k}}}\right)}
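The closed form can be checked against direct iteration; a sketch with arbitrary example coefficients (all names are ours, and A_0 = a_0 since the empty product is 1):

```python
from math import prod

def solve_first_order(f, g, a0, n):
    """Closed form a_n = (prod_{k<n} f_k) * (a_0 + sum_{m<n} g_m / prod_{k<=m} f_k)."""
    P = prod(f(k) for k in range(n))
    S = sum(g(m) / prod(f(k) for k in range(m + 1)) for m in range(n))
    return P * (a0 + S)

f = lambda n: n + 2  # example variable coefficients (arbitrary, nonzero)
g = lambda n: 3.0

# Direct iteration of a_{n+1} = f_n a_n + g_n from a_0 = 1
a = 1.0
for n in range(8):
    a = f(n) * a + g(n)
print(a, solve_first_order(f, g, 1.0, 8))  # the two values agree
```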
If we apply the formula to
{\displaystyle a_{n+1}=(1+hf_{nh})a_{n}+hg_{nh}}
and take the limit {\displaystyle h\to 0}, we get the formula for first-order linear differential equations with variable coefficients; the sum becomes an integral, and the product becomes the exponential function of an integral.
=== Solving general homogeneous linear recurrence relations ===
Many homogeneous linear recurrence relations may be solved by means of the generalized hypergeometric series. Special cases of these lead to recurrence relations for the orthogonal polynomials, and many special functions. For example, the solution to
{\displaystyle J_{n+1}={\frac {2n}{z}}J_{n}-J_{n-1}}
is given by
{\displaystyle J_{n}=J_{n}(z),}
the Bessel function, while
{\displaystyle (b-n)M_{n-1}+(2n-b+z)M_{n}-nM_{n+1}=0}
is solved by
{\displaystyle M_{n}=M(n,b;z)}
the confluent hypergeometric series. Sequences which are the solutions of linear difference equations with polynomial coefficients are called P-recursive. For these specific recurrence equations algorithms are known which find polynomial, rational or hypergeometric solutions.
=== Solving general non-homogeneous linear recurrence relations with constant coefficients ===
Furthermore, the general non-homogeneous linear recurrence relation with constant coefficients can be solved by variation of parameters.
=== Solving first-order rational difference equations ===
A first order rational difference equation has the form
{\displaystyle w_{t+1}={\tfrac {aw_{t}+b}{cw_{t}+d}}.}
Such an equation can be solved by writing {\displaystyle w_{t}} as a nonlinear transformation of another variable {\displaystyle x_{t}} which itself evolves linearly. Then standard methods can be used to solve the linear difference equation in {\displaystyle x_{t}}.
== Stability ==
=== Stability of linear higher-order recurrences ===
The linear recurrence of order {\displaystyle d},
{\displaystyle a_{n}=c_{1}a_{n-1}+c_{2}a_{n-2}+\cdots +c_{d}a_{n-d},}
has the characteristic equation
{\displaystyle \lambda ^{d}-c_{1}\lambda ^{d-1}-c_{2}\lambda ^{d-2}-\cdots -c_{d}\lambda ^{0}=0.}
The recurrence is stable, meaning that the iterates converge asymptotically to a fixed value, if and only if the eigenvalues (i.e., the roots of the characteristic equation), whether real or complex, are all less than unity in absolute value.
=== Stability of linear first-order matrix recurrences ===
In the first-order matrix difference equation
{\displaystyle [x_{t}-x^{*}]=A[x_{t-1}-x^{*}]}
with state vector {\displaystyle x} and transition matrix {\displaystyle A}, {\displaystyle x} converges asymptotically to the steady state vector {\displaystyle x^{*}} if and only if all eigenvalues of the transition matrix {\displaystyle A} (whether real or complex) have an absolute value which is less than 1.
=== Stability of nonlinear first-order recurrences ===
Consider the nonlinear first-order recurrence
{\displaystyle x_{n}=f(x_{n-1}).}
This recurrence is locally stable, meaning that it converges to a fixed point {\displaystyle x^{*}} from points sufficiently close to {\displaystyle x^{*}}, if the slope of {\displaystyle f} in the neighborhood of {\displaystyle x^{*}} is smaller than unity in absolute value: that is,
{\displaystyle |f'(x^{*})|<1.}
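As a concrete check (our own sketch, using the logistic map from the examples above, for which the nonzero fixed point and the slope there are known in closed form):

```python
def iterate(f, x, n):
    """Apply x ← f(x) n times."""
    for _ in range(n):
        x = f(x)
    return x

r = 2.8  # any r with 1 < r < 3 gives a stable nonzero fixed point
f = lambda x: r * x * (1 - x)
x_star = 1 - 1 / r  # nonzero fixed point of the logistic map
slope = 2 - r       # f'(x*) = r - 2 r x* = 2 - r
print(abs(slope) < 1, abs(iterate(f, 0.3, 200) - x_star) < 1e-9)  # → True True
```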
A nonlinear recurrence could have multiple fixed points, in which case some fixed points may be locally stable and others locally unstable; for continuous f two adjacent fixed points cannot both be locally stable.
A nonlinear recurrence relation could also have a cycle of period {\displaystyle k} for {\displaystyle k>1}. Such a cycle is stable, meaning that it attracts a set of initial conditions of positive measure, if the composite function
{\displaystyle g(x):=f\circ f\circ \cdots \circ f(x)}
with {\displaystyle f} appearing {\displaystyle k} times is locally stable according to the same criterion:
{\displaystyle |g'(x^{*})|<1,}
where {\displaystyle x^{*}} is any point on the cycle.
In a chaotic recurrence relation, the variable {\displaystyle x} stays in a bounded region but never converges to a fixed point or an attracting cycle; any fixed points or cycles of the equation are unstable. See also logistic map, dyadic transformation, and tent map.
== Relationship to differential equations ==
When solving an ordinary differential equation numerically, one typically encounters a recurrence relation. For example, when solving the initial value problem
{\displaystyle y'(t)=f(t,y(t)),\ \ y(t_{0})=y_{0},}
with Euler's method and a step size {\displaystyle h}, one calculates the values
{\displaystyle y_{0}=y(t_{0}),\ \ y_{1}=y(t_{0}+h),\ \ y_{2}=y(t_{0}+2h),\ \dots }
by the recurrence
{\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n}),\quad t_{n}=t_{0}+nh.}
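A sketch of this discretization for y' = y, whose exact solution is e^t (the function name `euler` is ours):

```python
from math import exp

def euler(f, t0, y0, h, steps):
    """Euler's method: y_{n+1} = y_n + h f(t_n, y_n), t_n = t_0 + n h."""
    t, y = t0, y0
    values = [y0]
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
        values.append(y)
    return values

# Solve y' = y, y(0) = 1 up to t = 1; the result approximates e ≈ 2.71828.
ys = euler(lambda t, y: y, 0.0, 1.0, h=0.001, steps=1000)
print(ys[-1], exp(1.0))
```

Shrinking the step size h brings the final value closer to e, at the cost of more iterations of the recurrence.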
Systems of linear first order differential equations can be discretized exactly analytically using the methods shown in the discretization article.
== Applications ==
=== Mathematical biology ===
Some of the best-known difference equations have their origins in the attempt to model population dynamics. For example, the Fibonacci numbers were once used as a model for the growth of a rabbit population.
The logistic map is used either directly to model population growth, or as a starting point for more detailed models of population dynamics. In this context, coupled difference equations are often used to model the interaction of two or more populations. For example, the Nicholson–Bailey model for a host-parasite interaction is given by
{\displaystyle N_{t+1}=\lambda N_{t}e^{-aP_{t}}}
{\displaystyle P_{t+1}=N_{t}(1-e^{-aP_{t}}),}
with {\displaystyle N_{t}} representing the hosts and {\displaystyle P_{t}} the parasites at time {\displaystyle t}.
Integrodifference equations are a form of recurrence relation important to spatial ecology. These and other difference equations are particularly suited to modeling univoltine populations.
=== Computer science ===
Recurrence relations are also of fundamental importance in analysis of algorithms. If an algorithm is designed so that it will break a problem into smaller subproblems (divide and conquer), its running time is described by a recurrence relation.
A simple example is the time an algorithm takes to find an element in an ordered vector with {\displaystyle n} elements, in the worst case.
A naive algorithm will search from left to right, one element at a time. The worst possible scenario is when the required element is the last, so the number of comparisons is {\displaystyle n}.
A better algorithm is called binary search. However, it requires a sorted vector. It will first check if the element is at the middle of the vector. If not, then it will check if the middle element is greater or lesser than the sought element. At this point, half of the vector can be discarded, and the algorithm can be run again on the other half. The number of comparisons will be given by
{\displaystyle c_{1}=1}
{\displaystyle c_{n}=1+c_{n/2},}
the time complexity of which will be {\displaystyle O(\log _{2}(n))}.
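The recurrence for the comparison count can be unrolled directly for n a power of two (a sketch; the function name is ours):

```python
def binary_search_comparisons(n: int) -> int:
    """Unroll c_1 = 1, c_n = 1 + c_{n/2} for n a power of two."""
    if n == 1:
        return 1
    return 1 + binary_search_comparisons(n // 2)

# c_n = 1 + log2(n), which is the O(log2(n)) complexity of binary search.
print([binary_search_comparisons(2 ** k) for k in range(5)])  # → [1, 2, 3, 4, 5]
```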
=== Digital signal processing ===
In digital signal processing, recurrence relations can model feedback in a system, where outputs at one time become inputs for future time. They thus arise in infinite impulse response (IIR) digital filters.
For example, the equation for a "feedforward" IIR comb filter of delay {\displaystyle T} is:
{\displaystyle y_{t}=(1-\alpha )x_{t}+\alpha y_{t-T},}
where {\displaystyle x_{t}} is the input at time {\displaystyle t}, {\displaystyle y_{t}} is the output at time {\displaystyle t}, and {\displaystyle \alpha } controls how much of the delayed signal is fed back into the output. From this we can see that
{\displaystyle y_{t}=(1-\alpha )x_{t}+\alpha ((1-\alpha )x_{t-T}+\alpha y_{t-2T})}
{\displaystyle y_{t}=(1-\alpha )x_{t}+(\alpha -\alpha ^{2})x_{t-T}+\alpha ^{2}y_{t-2T}}
etc.
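A sketch of this filter acting on an impulse (our own code; output samples before t = 0 are taken as zero), which makes the geometric attenuation of the fed-back signal visible:

```python
def comb_filter(x, alpha, T):
    """y_t = (1 - alpha) * x_t + alpha * y_{t-T}; y is 0 before t = 0."""
    y = []
    for t, xt in enumerate(x):
        delayed = y[t - T] if t >= T else 0.0
        y.append((1 - alpha) * xt + alpha * delayed)
    return y

# An impulse at t = 0 re-emerges every T samples, attenuated by alpha each time.
out = comb_filter([1.0] + [0.0] * 9, alpha=0.5, T=4)
print(out)  # → [0.5, 0.0, 0.0, 0.0, 0.25, 0.0, 0.0, 0.0, 0.125, 0.0]
```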
=== Economics ===
Recurrence relations, especially linear recurrence relations, are used extensively in both theoretical and empirical economics. In particular, in macroeconomics one might develop a model of various broad sectors of the economy (the financial sector, the goods sector, the labor market, etc.) in which some agents' actions depend on lagged variables. The model would then be solved for current values of key variables (interest rate, real GDP, etc.) in terms of past and current values of other variables.
== See also ==
== References ==
=== Footnotes ===
=== Bibliography ===
Batchelder, Paul M. (1967). An introduction to linear difference equations. Dover Publications.
Miller, Kenneth S. (1968). Linear difference equations. W. A. Benjamin.
Fillmore, Jay P.; Marx, Morris L. (1968). "Linear recursive sequences". SIAM Rev. Vol. 10, no. 3. pp. 324–353. JSTOR 2027658.
Brousseau, Alfred (1971). Linear Recursion and Fibonacci Sequences. Fibonacci Association.
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 1990. ISBN 0-262-03293-7. Chapter 4: Recurrences, pp. 62–90.
Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). Concrete Mathematics: A Foundation for Computer Science (2 ed.). Addison-Wesley. ISBN 0-201-55802-5.
Enders, Walter (2010). Applied Econometric Times Series (3 ed.). Archived from the original on 2014-11-10.
Cull, Paul; Flahive, Mary; Robson, Robbie (2005). Difference Equations: From Rabbits to Chaos. Springer. ISBN 0-387-23234-6. chapter 7.
Jacques, Ian (2006). Mathematics for Economics and Business (Fifth ed.). Prentice Hall. pp. 551–568. ISBN 0-273-70195-9. Chapter 9.1: Difference Equations.
Minh, Tang; Van To, Tan (2006). "Using generating functions to solve linear inhomogeneous recurrence equations" (PDF). Proc. Int. Conf. Simulation, Modelling and Optimization, SMO'06. pp. 399–404. Archived from the original (PDF) on 2016-03-04. Retrieved 2014-08-07.
Polyanin, Andrei D. "Difference and Functional Equations: Exact Solutions". at EqWorld - The World of Mathematical Equations.
Polyanin, Andrei D. "Difference and Functional Equations: Methods". at EqWorld - The World of Mathematical Equations.
Wang, Xiang-Sheng; Wong, Roderick (2012). "Asymptotics of orthogonal polynomials via recurrence relations". Anal. Appl. 10 (2): 215–235. arXiv:1101.4371. doi:10.1142/S0219530512500108. S2CID 28828175.
== External links ==
"Recurrence relation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Recurrence Equation". MathWorld.
"OEIS Index Rec". OEIS index to a few thousand examples of linear recurrences, sorted by order (number of terms) and signature (vector of values of the constant coefficients) | Wikipedia/Difference_equation |
In applied mathematics, antieigenvalue theory was developed by Karl Gustafson from 1966 to 1968. The theory is applicable to numerical analysis, wavelets, statistics, quantum mechanics, finance and optimization.
The antieigenvectors {\displaystyle x} are the vectors most turned by a matrix or operator {\displaystyle A}, that is to say those for which the angle between the original vector and its transformed image is greatest. The corresponding antieigenvalue {\displaystyle \mu } is the cosine of the maximal turning angle. The maximal turning angle is {\displaystyle \phi (A)} and is called the angle of the operator. Just as the eigenvalues may be ordered as a spectrum from smallest to largest, the theory orders the antieigenvalues of an operator A from the smallest to the largest turning angles.
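As a numerical sketch (not from the references; it assumes Gustafson's closed form μ = 2√(λmin·λmax)/(λmin + λmax) for the first antieigenvalue of a symmetric positive definite matrix), the smallest cosine found by brute force over directions matches the closed form:

```python
from math import sqrt, sin, cos

# Extreme eigenvalues of the example operator A = diag(1, 9).
l_min, l_max = 1.0, 9.0

def cos_angle(v):
    """Cosine of the angle between v and A v for A = diag(l_min, l_max)."""
    av = (l_min * v[0], l_max * v[1])
    dot = v[0] * av[0] + v[1] * av[1]
    return dot / (sqrt(v[0] ** 2 + v[1] ** 2) * sqrt(av[0] ** 2 + av[1] ** 2))

# Brute-force search for the most-turned direction over the unit circle.
best = min(cos_angle((cos(t), sin(t))) for t in (i * 1e-4 for i in range(1, 15707)))

# Assumed closed form for the first antieigenvalue of an SPD operator.
mu = 2 * sqrt(l_min * l_max) / (l_min + l_max)
print(round(best, 3), round(mu, 3))  # → 0.6 0.6
```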
== References ==
Gustafson, Karl (1968), "The angle of an operator and positive operator products", Bulletin of the American Mathematical Society, 74 (3): 488–492, doi:10.1090/S0002-9904-1968-11974-3, ISSN 0002-9904, MR 0222668, Zbl 0172.40702
Gustafson, Karl (2012), Antieigenvalue Analysis, World Scientific, ISBN 978-981-4366-28-1, archived from the original on 2012-05-19, retrieved 2012-01-31.
Structural equation modeling (SEM) is a diverse set of methods used by scientists for both observational and experimental research. SEM is used mostly in the social and behavioral science fields, but it is also used in epidemiology, business, and other fields. A common definition of SEM is "a class of methodologies that seeks to represent hypotheses about the means, variances, and covariances of observed data in terms of a smaller number of 'structural' parameters defined by a hypothesized underlying conceptual or theoretical model".
SEM involves a model representing how various aspects of some phenomenon are thought to causally connect to one another. Structural equation models often contain postulated causal connections among some latent variables (variables thought to exist but which can't be directly observed). Additional causal connections link those latent variables to observed variables whose values appear in a data set. The causal connections are represented using equations but the postulated structuring can also be presented using diagrams containing arrows as in Figures 1 and 2. The causal structures imply that specific patterns should appear among the values of the observed variables. This makes it possible to use the connections between the observed variables' values to estimate the magnitudes of the postulated effects, and to test whether or not the observed data are consistent with the requirements of the hypothesized causal structures.
The boundary between what is and is not a structural equation model is not always clear but SE models often contain postulated causal connections among a set of latent variables (variables thought to exist but which can't be directly observed, like an attitude, intelligence or mental illness) and causal connections linking the postulated latent variables to variables that can be observed and whose values are available in some data set. Variations among the styles of latent causal connections, variations among the observed variables measuring the latent variables, and variations in the statistical estimation strategies result in the SEM toolkit including confirmatory factor analysis (CFA), confirmatory composite analysis, path analysis, multi-group modeling, longitudinal modeling, partial least squares path modeling, latent growth modeling and hierarchical or multilevel modeling.
SEM researchers use computer programs to estimate the strength and sign of the coefficients corresponding to the modeled structural connections, for example the numbers connected to the arrows in Figure 1. Because a postulated model such as Figure 1 may not correspond to the worldly forces controlling the observed data measurements, the programs also provide model tests and diagnostic clues suggesting which indicators, or which model components, might introduce inconsistency between the model and observed data. Criticisms of SEM methods hint at: disregard of available model tests, problems in the model's specification, a tendency to accept models without considering external validity, and potential philosophical biases.
A great advantage of SEM is that all of these measurements and tests occur simultaneously in one statistical estimation procedure, where all the model coefficients are calculated using all information from the observed variables. This means the estimates are more accurate than if a researcher were to calculate each part of the model separately.
== History ==
Structural equation modeling (SEM) began differentiating itself from correlation and regression when Sewall Wright provided explicit causal interpretations for a set of regression-style equations based on a solid understanding of the physical and physiological mechanisms producing direct and indirect effects among his observed variables. The equations were estimated like ordinary regression equations but the substantive context for the measured variables permitted clear causal, not merely predictive, understandings. O. D. Duncan introduced SEM to the social sciences in his 1975 book and SEM blossomed in the late 1970s and 1980s when increasing computing power permitted practical model estimation. In 1987 Hayduk provided the first book-length introduction to structural equation modeling with latent variables, and this was soon followed by Bollen's popular text (1989).
Different yet mathematically related modeling approaches developed in psychology, sociology, and economics. Early Cowles Commission work on simultaneous equations estimation centered on Koopman and Hood's (1953) algorithms from transport economics and optimal routing, with maximum likelihood estimation, and closed form algebraic calculations, as iterative solution search techniques were limited in the days before computers. The convergence of two of these developmental streams (factor analysis from psychology, and path analysis from sociology via Duncan) produced the current core of SEM. One of several programs Karl Jöreskog developed at Educational Testing Services, LISREL embedded latent variables (which psychologists knew as the latent factors from factor analysis) within path-analysis-style equations (which sociologists inherited from Wright and Duncan). The factor-structured portion of the model incorporated measurement errors which permitted measurement-error-adjustment, though not necessarily error-free estimation, of effects connecting different postulated latent variables.
Traces of the historical convergence of the factor analytic and path analytic traditions persist as the distinction between the measurement and structural portions of models; and as continuing disagreements over model testing, and whether measurement should precede or accompany structural estimates. Viewing factor analysis as a data-reduction technique deemphasizes testing, which contrasts with path analytic appreciation for testing postulated causal connections – where the test result might signal model misspecification. The friction between the factor analytic and path analytic traditions continues to surface in the literature.
Wright's path analysis influenced Hermann Wold, Wold's student Karl Jöreskog, and Jöreskog's student Claes Fornell, but SEM never gained a large following among U.S. econometricians, possibly due to fundamental differences in modeling objectives and typical data structures. The prolonged separation of SEM's economic branch led to procedural and terminological differences, though deep mathematical and statistical connections remain. Disciplinary differences in approaches can be seen in SEMNET discussions of endogeneity, and in discussions on causality via directed acyclic graphs (DAGs). Discussions comparing and contrasting various SEM approaches are available highlighting disciplinary differences in data structures and the concerns motivating economic models.
Judea Pearl extended SEM from linear to nonparametric models, and proposed causal and counterfactual interpretations of the equations. Nonparametric SEMs permit estimating total, direct and indirect effects without making any commitment to linearity of effects or assumptions about the distributions of the error terms.
SEM analyses are popular in the social sciences because these analytic techniques help us break down complex concepts and understand causal processes, but the complexity of the models can introduce substantial variability in the results depending on the presence or absence of conventional control variables, the sample size, and the variables of interest. The use of experimental designs may address some of these doubts.
Today, SEM forms the basis of machine learning and (interpretable) neural networks. Exploratory and confirmatory factor analyses in classical statistics mirror unsupervised and supervised machine learning.
== General steps and considerations ==
The following considerations apply to the construction and assessment of many structural equation models.
=== Model specification ===
Building or specifying a model requires attending to:
the set of variables to be employed,
what is known about the variables,
what is theorized or hypothesized about the variables' causal connections and disconnections,
what the researcher seeks to learn from the modeling, and
the instances of missing values and/or the need for imputation.
Structural equation models attempt to mirror the worldly forces operative for causally homogeneous cases – namely cases enmeshed in the same worldly causal structures but whose values on the causes differ and who therefore possess different values on the outcome variables. Causal homogeneity can be facilitated by case selection, or by segregating cases in a multi-group model. A model's specification is not complete until the researcher specifies:
which effects and/or correlations/covariances are to be included and estimated,
which effects and other coefficients are forbidden or presumed unnecessary,
and which coefficients will be given fixed/unchanging values (e.g. to provide measurement scales for latent variables as in Figure 2).
The latent level of a model is composed of endogenous and exogenous variables. The endogenous latent variables are the true-score variables postulated as receiving effects from at least one other modeled variable. Each endogenous variable is modeled as the dependent variable in a regression-style equation. The exogenous latent variables are background variables postulated as causing one or more of the endogenous variables and are modeled like the predictor variables in regression-style equations. Causal connections among the exogenous variables are not explicitly modeled but are usually acknowledged by modeling the exogenous variables as freely correlating with one another. The model may include intervening variables – variables receiving effects from some variables but also sending effects to other variables. As in regression, each endogenous variable is assigned a residual or error variable encapsulating the effects of unavailable and usually unknown causes. Each latent variable, whether exogenous or endogenous, is thought of as containing the cases' true-scores on that variable, and these true-scores causally contribute valid/genuine variations into one or more of the observed/reported indicator variables.
The LISREL program assigned Greek names to the elements in a set of matrices to keep track of the various model components. These names became relatively standard notation, though the notation has been extended and altered to accommodate a variety of statistical considerations. Texts and programs "simplifying" model specification via diagrams or by using equations permitting user-selected variable names, re-convert the user's model into some standard matrix-algebra form in the background. The "simplifications" are achieved by implicitly introducing default program "assumptions" about model features with which users supposedly need not concern themselves. Unfortunately, these default assumptions easily obscure model components that leave unrecognized issues lurking within the model's structure, and underlying matrices.
Two main components of models are distinguished in SEM: the structural model showing potential causal dependencies between endogenous and exogenous latent variables, and the measurement model showing the causal connections between the latent variables and the indicators. Exploratory and confirmatory factor analysis models, for example, focus on the causal measurement connections, while path models more closely correspond to SEMs latent structural connections.
Modelers specify each coefficient in a model as being free to be estimated, or fixed at some value. The free coefficients may be postulated effects the researcher wishes to test, background correlations among the exogenous variables, or the variances of the residual or error variables providing additional variations in the endogenous latent variables. The fixed coefficients may be values like the 1.0 values in Figure 2 that provide scales for the latent variables, or values of 0.0 which assert causal disconnections such as the assertion of no-direct-effects (no arrows) pointing from Academic Achievement to any of the four scales in Figure 1. SEM programs provide estimates and tests of the free coefficients, while the fixed coefficients contribute importantly to testing the overall model structure. Various kinds of constraints between coefficients can also be used. The model specification depends on what is known from the literature, the researcher's experience with the modeled indicator variables, and the features being investigated by using the specific model structure.
There is a limit to how many coefficients can be estimated in a model. If there are fewer data points (observed variances and covariances) than the number of estimated coefficients, the resulting model is said to be "unidentified" and no coefficient estimates can be obtained. Reciprocal effects, and other causal loops, may also interfere with estimation.
=== Estimation of free model coefficients ===
Model coefficients fixed at zero, 1.0, or other values do not require estimation because they already have specified values. Estimated values for free model coefficients are obtained by maximizing fit to, or minimizing difference from, the data, relative to what the data's features would be if the free model coefficients took on the estimated values. The model's implications for what the data should look like for a specific set of coefficient values depend on:
a) the coefficients' locations in the model (e.g. which variables are connected/disconnected),
b) the nature of the connections between the variables (covariances or effects; with effects often assumed to be linear),
c) the nature of the error or residual variables (often assumed to be independent of, or causally-disconnected from, many variables),
and d) the measurement scales appropriate for the variables (interval level measurement is often assumed).
A stronger effect connecting two latent variables implies the indicators of those latents should be more strongly correlated. Hence, a reasonable estimate of a latent's effect will be whatever value best matches the correlations between the indicators of the corresponding latent variables – namely the estimate-value maximizing the match with the data, or minimizing the differences from the data. With maximum likelihood estimation, the numerical values of all the free model coefficients are individually adjusted (progressively increased or decreased from initial start values) until they maximize the likelihood of observing the sample data – whether the data are the variables' covariances/correlations, or the cases' actual values on the indicator variables. Ordinary least squares estimates are the coefficient values that minimize the squared differences between the data and what the data would look like if the model were correctly specified, namely if all the model's estimated features correspond to real worldly features.
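As an illustrative sketch (not any particular SEM program's algorithm), the following hypothetical example estimates a single path coefficient by minimizing an unweighted least-squares discrepancy between the observed covariance matrix and the covariance matrix implied by a simple x → y model; all variable names and values here are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# simulate hypothetical data from a known single-effect model: y = 0.7*x + e
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 0.7 * x + rng.normal(scale=0.5, size=n)
S = np.cov(np.vstack([x, y]))          # observed covariance matrix (2 x 2)

def implied_cov(theta):
    # covariance matrix implied by the path model x -> y
    b, var_x, var_e = theta
    return np.array([[var_x,      b * var_x],
                     [b * var_x,  b**2 * var_x + var_e]])

def uls_discrepancy(theta):
    # unweighted least squares fit function: sum of squared residual moments
    diff = S - implied_cov(theta)
    return np.sum(diff ** 2)

res = minimize(uls_discrepancy, x0=[0.1, 1.0, 1.0])
b_hat = res.x[0]   # estimate of the effect; should be near the true 0.7
```

Because this tiny model is saturated (three free coefficients, three distinct observed moments), the minimization can drive the discrepancy essentially to zero; the estimate then differs from 0.7 only by sampling variation.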
The appropriate statistical feature to maximize or minimize to obtain estimates depends on the variables' levels of measurement (estimation is generally easier with interval level measurements than with nominal or ordinal measures), and where a specific variable appears in the model (e.g. endogenous dichotomous variables create more estimation difficulties than exogenous dichotomous variables). Most SEM programs provide several options for what is to be maximized or minimized to obtain estimates of the model's coefficients. The choices often include maximum likelihood estimation (MLE), full information maximum likelihood (FIML), ordinary least squares (OLS), weighted least squares (WLS), diagonally weighted least squares (DWLS), and two stage least squares.
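For covariance data, the maximum-likelihood discrepancy that many SEM programs minimize is the standard fit function F_ML = ln|Σ| + tr(SΣ⁻¹) − ln|S| − p, where S is the observed and Σ the model-implied covariance matrix of p variables. A minimal numeric sketch:

```python
import numpy as np

def f_ml(S, Sigma):
    """Standard ML discrepancy between the observed covariance matrix S
    and a model-implied covariance matrix Sigma (both p x p)."""
    p = S.shape[0]
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_S = np.linalg.slogdet(S)
    return logdet_Sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p

# hypothetical observed covariances for two standardized variables
S = np.array([[1.0, 0.5],
              [0.5, 1.0]])

perfect = f_ml(S, S)          # ≈ 0: a perfectly fitting model
misfit = f_ml(S, np.eye(2))   # > 0: a model implying zero covariance
```

The discrepancy is zero only when the implied matrix reproduces the observed matrix exactly, which is why minimizing F_ML over the free coefficients yields the maximum-likelihood estimates.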
One common problem is that a coefficient's estimated value may be underidentified because it is insufficiently constrained by the model and data. No unique best-estimate exists unless the model and data together sufficiently constrain or restrict a coefficient's value. For example, the magnitude of a single data correlation between two variables is insufficient to provide estimates of a reciprocal pair of modeled effects between those variables. The correlation might be accounted for by one of the reciprocal effects being stronger than the other, by the reverse, or by effects of equal magnitude. Underidentified effect estimates can be rendered identified by introducing additional model and/or data constraints. For example, reciprocal effects can be rendered identified by constraining one effect estimate to be double, triple, or equivalent to the other effect estimate, but the resultant estimates will only be trustworthy if the additional model constraint corresponds to the world's structure. Data on a third variable that directly causes only one of a pair of reciprocally causally connected variables can also assist identification. Constraining a third variable to not directly cause one of the reciprocally-causal variables breaks the symmetry otherwise plaguing the reciprocal effect estimates because that third variable must be more strongly correlated with the variable it causes directly than with the variable at the "other" end of the reciprocal which it impacts only indirectly. Notice that this again presumes the properness of the model's causal specification – namely that there really is a direct effect leading from the third variable to the variable at this end of the reciprocal effects and no direct effect on the variable at the "other end" of the reciprocally connected pair of variables.
Theoretical demands for null/zero effects provide helpful constraints assisting estimation, though theories often fail to clearly report which effects are allegedly nonexistent.
=== Model assessment ===
Model assessment depends on the theory, the data, the model, and the estimation strategy. Hence model assessments consider:
whether the data contain reasonable measurements of appropriate variables,
whether the modeled cases are causally homogeneous, (It makes no sense to estimate one model if the data cases reflect two or more different causal networks.)
whether the model appropriately represents the theory or features of interest, (Models are unpersuasive if they omit features required by a theory, or contain coefficients inconsistent with that theory.)
whether the estimates are statistically justifiable, (Substantive assessments may be devastated by violated assumptions, by use of an inappropriate estimator, and/or by non-convergence of an iterative estimator.)
the substantive reasonableness of the estimates, (Negative variances, and correlations exceeding 1.0 or -1.0, are impossible. Statistically possible estimates that are inconsistent with theory may also challenge theory, and our understanding.)
the remaining consistency, or inconsistency, between the model and data. (The estimation process minimizes the differences between the model and data but important and informative differences may remain.)
Research claiming to test or "investigate" a theory requires attending to beyond-chance model-data inconsistency. Estimation adjusts the model's free coefficients to provide the best possible fit to the data. The output from SEM programs includes a matrix reporting the relationships among the observed variables that would be observed if the estimated model effects actually controlled the observed variables' values. The "fit" of a model reports match or mismatch between the model-implied relationships (often covariances) and the corresponding observed relationships among the variables. Large and significant differences between the data and the model's implications signal problems. The probability accompanying a χ2 (chi-squared) test is the probability that the data could arise by random sampling variations if the estimated model constituted the real underlying population forces. A small χ2 probability reports it would be unlikely for the current data to have arisen if the modeled structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations.
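As a rough sketch with hypothetical numbers, the χ2 test statistic is commonly computed from the minimized ML discrepancy as T = (N − 1)·F_ML, with degrees of freedom equal to the number of distinct observed moments minus the number of free coefficients; exact conventions vary slightly across programs:

```python
from scipy.stats import chi2

N = 300          # sample size (hypothetical)
F_ml = 0.12      # minimized ML discrepancy (hypothetical value)
p_obs = 6        # number of observed variables
q = 13           # number of free coefficients estimated

T = (N - 1) * F_ml                  # chi-square test statistic: 35.88
df = p_obs * (p_obs + 1) // 2 - q   # 21 distinct moments - 13 free = 8 df
p_value = chi2.sf(T, df)
# a small p_value reports it would be unlikely for these data to arise
# if the estimated model constituted the real population forces
```

Here the p-value is well below conventional thresholds, so this hypothetical model would be rejected as significantly inconsistent with its data.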
If a model remains inconsistent with the data despite selecting optimal coefficient estimates, an honest research response reports and attends to this evidence (often a significant model χ2 test). Beyond-chance model-data inconsistency challenges both the coefficient estimates and the model's capacity for adjudicating the model's structure, irrespective of whether the inconsistency originates in problematic data, inappropriate statistical estimation, or incorrect model specification.
Coefficient estimates in data-inconsistent ("failing") models are interpretable, as reports of how the world would appear to someone believing a model that conflicts with the available data. The estimates in data-inconsistent models do not necessarily become "obviously wrong" by becoming statistically strange, or wrongly signed according to theory. The estimates may even closely match a theory's requirements but the remaining data inconsistency renders the match between the estimates and theory unable to provide succor. Failing models remain interpretable, but only as interpretations that conflict with available evidence.
Replication is unlikely to detect misspecified models which inappropriately-fit the data. If the replicate data are within random variations of the original data, the same incorrect coefficient placements that provided inappropriate-fit to the original data will likely also inappropriately-fit the replicate data. Replication helps detect issues such as data mistakes (made by different research groups), but is especially weak at detecting misspecifications after exploratory model modification – as when confirmatory factor analysis is applied to a random second-half of data following exploratory factor analysis (EFA) of first-half data.
A modification index is an estimate of how much a model's fit to the data would "improve" (but not necessarily how much the model's structure would improve) if a specific currently-fixed model coefficient were freed for estimation. Researchers confronting data-inconsistent models can easily free coefficients the modification indices report as likely to produce substantial improvements in fit. This simultaneously introduces a substantial risk of moving from a causally-wrong-and-failing model to a causally-wrong-but-fitting model because improved data-fit does not provide assurance that the freed coefficients are substantively reasonable or world-matching. The original model may contain causal misspecifications such as incorrectly directed effects, or incorrect assumptions about unavailable variables, and such problems cannot be corrected by adding coefficients to the current model. Consequently, such models remain misspecified despite the closer fit provided by additional coefficients. Fitting yet worldly-inconsistent models are especially likely to arise if a researcher committed to a particular model (for example a factor model having a desired number of factors) gets an initially-failing model to fit by inserting measurement error covariances "suggested" by modification indices. MacCallum (1986) demonstrated that "even under favorable conditions, models arising from specification searches must be viewed with caution." Model misspecification may sometimes be corrected by insertion of coefficients suggested by the modification indices, but many more corrective possibilities are raised by employing a few indicators of similar-yet-importantly-different latent variables.
"Accepting" failing models as "close enough" is also not a reasonable alternative. A cautionary instance was provided by Browne, MacCallum, Kim, Anderson, and Glaser who addressed the mathematics behind why the χ2 test can have (though it does not always have) considerable power to detect model misspecification. The probability accompanying a χ2 test is the probability that the data could arise by random sampling variations if the current model, with its optimal estimates, constituted the real underlying population forces. A small χ2 probability reports it would be unlikely for the current data to have arisen if the current model structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations. Browne, McCallum, Kim, Andersen, and Glaser presented a factor model they viewed as acceptable despite the model being significantly inconsistent with their data according to χ2. The fallaciousness of their claim that close-fit should be treated as good enough was demonstrated by Hayduk, Pazkerka-Robinson, Cummings, Levers and Beres who demonstrated a fitting model for Browne, et al.'s own data by incorporating an experimental feature Browne, et al. overlooked. The fault was not in the math of the indices or in the over-sensitivity of χ2 testing. The fault was in Browne, MacCallum, and the other authors forgetting, neglecting, or overlooking, that the amount of ill fit cannot be trusted to correspond to the nature, location, or seriousness of problems in a model's specification.
Many researchers tried to justify switching to fit-indices, rather than testing their models, by claiming that χ2 increases (and hence χ2 probability decreases) with increasing sample size (N). There are two mistakes in discounting χ2 on this basis. First, for proper models, χ2 does not increase with increasing N, so if χ2 increases with N, that itself is a sign that something is detectably problematic. Second, for models that are detectably misspecified, increasing N provides the good news of increasing statistical power to detect model misspecification (namely, power to avoid a Type II error). Some kinds of important misspecifications cannot be detected by χ2, so any amount of ill fit beyond what might be reasonably produced by random variations warrants report and consideration. The χ2 model test, possibly adjusted, is the strongest available structural equation model test.
Numerous fit indices quantify how closely a model fits the data, but all fit indices suffer from the logical difficulty that the size or amount of ill fit is not trustably coordinated with the severity or nature of the issues producing the data inconsistency. Models with different causal structures that fit the data identically well have been called equivalent models. Such models are data-fit-equivalent though not causally equivalent, so at least one of the so-called equivalent models must be inconsistent with the world's structure. If there is a perfect 1.0 correlation between X and Y and we model this as X causes Y, there will be perfect fit and zero residual error. But the model may not match the world because Y may actually cause X, or both X and Y may be responding to a common cause Z, or the world may contain a mixture of these effects (e.g. a common cause plus an effect of Y on X), or other causal structures. The perfect fit does not tell us the model's structure corresponds to the world's structure, and this in turn implies that getting closer to perfect fit does not necessarily correspond to getting closer to the world's structure – maybe it does, maybe it doesn't. This makes it incorrect for a researcher to claim that even perfect model fit implies the model is correctly causally specified. For even moderately complex models, precisely equivalently-fitting models are rare. Models almost-fitting the data, according to any index, unavoidably introduce additional potentially-important yet unknown model misspecifications, and therefore constitute an even greater research impediment.
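The equivalence of differently-directed causal structures can be shown numerically. In this hypothetical two-variable sketch, a model with x causing y and a model with y causing x (with suitably chosen error variances) imply exactly the same covariance matrix, so no amount of covariance fit can distinguish them:

```python
import numpy as np

def implied_xy(b, var_x, var_e):
    # model A, x -> y: y = b*x + e
    return np.array([[var_x,      b * var_x],
                     [b * var_x,  b**2 * var_x + var_e]])

def implied_yx(c, var_y, var_u):
    # model B, y -> x: x = c*y + u
    return np.array([[c**2 * var_y + var_u, c * var_y],
                     [c * var_y,            var_y]])

A = implied_xy(0.6, 1.0, 0.64)   # x causes y, effect 0.6
B = implied_yx(0.6, 1.0, 0.64)   # y causes x, effect 0.6
# both models imply the same matrix [[1.0, 0.6], [0.6, 1.0]]
```

At most one of these causal stories can match the world, yet their fit to any covariance data is identical, which is the logical gap fit indices cannot close.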
This logical weakness renders all fit indices "unhelpful" whenever a structural equation model is significantly inconsistent with the data, but several forces continue to propagate fit-index use. For example, Dag Sorbom reported that when someone asked Karl Joreskog, the developer of the first structural equation modeling program, why he had added GFI to his LISREL program, Joreskog replied: "Well, users threaten us saying they would stop using LISREL if it always produces such large chi-squares. So we had to invent something to make people happy. GFI serves that purpose." The χ2 evidence of model-data inconsistency was too statistically solid to be dislodged or discarded, but people could at least be provided a way to distract from the "disturbing" evidence. Career-profits can still be accrued by developing additional indices, reporting investigations of index behavior, and publishing models intentionally burying evidence of model-data inconsistency under an MDI (a mound of distracting indices). There seems no general justification for why a researcher should "accept" a causally wrong model, rather than attempting to correct detected misspecifications. And some portions of the literature seem not to have noticed that "accepting a model" (on the basis of "satisfying" an index value) suffers from an intensified version of the criticism applied to "acceptance" of a null-hypothesis. Introductory statistics texts usually recommend replacing the term "accept" with "failed to reject the null hypothesis" to acknowledge the possibility of Type II error. A Type III error arises from "accepting" a model hypothesis when the current data are sufficient to reject the model.
Whether or not researchers are committed to seeking the world’s structure is a fundamental concern. Displacing test evidence of model-data inconsistency by hiding it behind index claims of acceptable-fit, introduces the discipline-wide cost of diverting attention away from whatever the discipline might have done to attain a structurally-improved understanding of the discipline’s substance. The discipline ends up paying a real cost for index-based displacement of evidence of model misspecification. The frictions created by disagreements over the necessity of correcting model misspecifications will likely increase with increasing use of non-factor-structured models, and with use of fewer, more-precise, indicators of similar yet importantly-different latent variables.
The considerations relevant to using fit indices include checking:
whether data concerns have been addressed (to ensure data mistakes are not driving model-data inconsistency);
whether criterion values for the index have been investigated for models structured like the researcher's model (e.g. index criteria based on factor-structured models are only appropriate if the researcher's model actually is factor structured);
whether the kinds of potential misspecifications in the current model correspond to the kinds of misspecifications on which the index criteria are based (e.g. criteria based on simulation of omitted factor loadings may not be appropriate for misspecification resulting from failure to include appropriate control variables);
whether the researcher knowingly agrees to disregard evidence pointing to the kinds of misspecifications on which the index criteria were based. (If the index criterion is based on simulating a missing factor loading or two, using that criterion acknowledges the researcher's willingness to accept a model missing a factor loading or two.);
whether the latest, not outdated, index criteria are being used (because the criteria for some indices tightened over time);
whether satisfying criterion values on pairs of indices are required (e.g. Hu and Bentler report that some common indices function inappropriately unless they are assessed together.);
whether a model test is, or is not, available. (A χ2 value, degrees of freedom, and probability will be available for models reporting indices based on χ2.)
and whether the researcher has considered both alpha (Type I) and beta (Type II) errors in making their index-based decisions (e.g. if the model is significantly data-inconsistent, the "tolerable" amount of inconsistency is likely to differ across medical, business, social, and psychological contexts).
Some of the more commonly used fit statistics include:
Chi-square
A fundamental test of fit used in the calculation of many other fit measures. It is a function of the discrepancy between the observed covariance matrix and the model-implied covariance matrix. Chi-square increases with sample size only if the model is detectably misspecified.
Akaike information criterion (AIC)
An index of relative model fit: The preferred model is the one with the lowest AIC value.
AIC = 2k − 2 ln(L)
where k is the number of parameters in the statistical model, and L is the maximized value of the likelihood of the model.
Root Mean Square Error of Approximation (RMSEA)
Fit index where a value of zero indicates the best fit. Guidelines for determining a "close fit" using RMSEA are highly contested.
Standardized Root Mean Squared Residual (SRMR)
The SRMR is a popular absolute fit indicator. Hu and Bentler (1999) suggested .08 or smaller as a guideline for good fit.
Comparative Fit Index (CFI)
In examining baseline comparisons, the CFI depends in large part on the average size of the correlations in the data. If the average correlation between variables is not high, then the CFI will not be very high. A CFI value of .95 or higher is desirable.
The following table provides references documenting these, and other, features for some common indices: the RMSEA (Root Mean Square Error of Approximation), SRMR (Standardized Root Mean Squared Residual), CFI (Comparative Fit Index), and the TLI (the Tucker-Lewis Index). Additional indices such as the AIC (Akaike Information Criterion) can be found in most SEM introductions. For each measure of fit, a decision as to what represents a good-enough fit between the model and the data reflects the researcher's modeling objective (perhaps challenging someone else's model, or improving measurement); whether or not the model is to be claimed as having been "tested"; and whether the researcher is comfortable "disregarding" evidence of the index-documented degree of ill fit.
=== Sample size, power, and estimation ===
Researchers agree samples should be large enough to provide stable coefficient estimates and reasonable testing power but there is no general consensus regarding specific required sample sizes, or even how to determine appropriate sample sizes. Recommendations have been based on the number of coefficients to be estimated, the number of modeled variables, and Monte Carlo simulations addressing specific model coefficients. Sample size recommendations based on the ratio of the number of indicators to latents are factor oriented and do not apply to models employing single indicators having fixed nonzero measurement error variances. Overall, for moderate sized models without statistically difficult-to-estimate coefficients, the required sample sizes (N’s) seem roughly comparable to the N’s required for a regression employing all the indicators.
The larger the sample size, the greater the likelihood of including cases that are not causally homogeneous. Consequently, increasing N to improve the likelihood of being able to report a desired coefficient as statistically significant, simultaneously increases the risk of model misspecification, and the power to detect the misspecification. Researchers seeking to learn from their modeling (including potentially learning their model requires adjustment or replacement) will strive for as large a sample size as permitted by funding and by their assessment of likely population-based causal heterogeneity/homogeneity. If the available N is huge, modeling sub-sets of cases can control for variables that might otherwise disrupt causal homogeneity. Researchers fearing they might have to report their model’s deficiencies are torn between wanting a larger N to provide sufficient power to detect structural coefficients of interest, and wanting to avoid the power capable of signaling model-data inconsistency. The huge variation in model structures and data characteristics suggests adequate sample sizes might be usefully located by considering other researchers’ experiences (both good and bad) with models of comparable size and complexity that have been estimated with similar data.
=== Interpretation ===
Causal interpretations of SE models are the clearest and most understandable but those interpretations will be fallacious/wrong if the model’s structure does not correspond to the world’s causal structure. Consequently, interpretation should address the overall status and structure of the model, not merely the model’s estimated coefficients. Whether a model fits the data, and/or how a model came to fit the data, are paramount for interpretation. Data fit obtained by exploring, or by following successive modification indices, does not guarantee the model is wrong but raises serious doubts because these approaches are prone to incorrectly modeling data features. For example, exploring to see how many factors are required preempts finding the data are not factor structured, especially if the factor model has been “persuaded” to fit via inclusion of measurement error covariances. Data’s ability to speak against a postulated model is progressively eroded with each unwarranted inclusion of a “modification index suggested” effect or error covariance. It becomes exceedingly difficult to recover a proper model if the initial/base model contains several misspecifications.
Direct-effect estimates are interpreted in parallel to the interpretation of coefficients in regression equations but with causal commitment. Each unit increase in a causal variable’s value is viewed as producing a change of the estimated magnitude in the dependent variable’s value given control or adjustment for all the other operative/modeled causal mechanisms. Indirect effects are interpreted similarly, with the magnitude of a specific indirect effect equaling the product of the series of direct effects comprising that indirect effect. The units involved are the real scales of observed variables’ values, and the assigned scale values for latent variables. A specified/fixed 1.0 effect of a latent on a specific indicator coordinates that indicator’s scale with the latent variable’s scale. The presumption that the remainder of the model remains constant or unchanging may require discounting indirect effects that might, in the real world, be simultaneously prompted by a real unit increase. And the unit increase itself might be inconsistent with what is possible in the real world because there may be no known way to change the causal variable’s value. If a model adjusts for measurement errors, the adjustment permits interpreting latent-level effects as referring to variations in true scores.
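The product rule for indirect effects can be sketched with hypothetical coefficients for a simple mediation structure, X → M → Y plus a direct X → Y path (all names and magnitudes invented for illustration):

```python
# hypothetical direct-effect estimates for the path model X -> M -> Y, X -> Y
b_xm = 0.5   # direct effect of X on M
b_my = 0.4   # direct effect of M on Y
b_xy = 0.2   # direct effect of X on Y

# an indirect effect equals the product of the direct effects along its path
indirect_xy = b_xm * b_my          # 0.20: effect of X on Y carried through M
# the total effect sums the direct effect and all indirect effects
total_xy = b_xy + indirect_xy      # 0.40
```

A unit increase in X is thus interpreted as producing a 0.4-unit change in Y overall, with half of that change arriving through the intervening variable M.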
SEM interpretations depart most radically from regression interpretations when a network of causal coefficients connects the latent variables because regressions do not contain estimates of indirect effects. SEM interpretations should convey the consequences of the patterns of indirect effects that carry effects from background variables through intervening variables to the downstream dependent variables. SEM interpretations encourage understanding how multiple worldly causal pathways can work in coordination, or independently, or even counteract one another. Direct effects may be counteracted (or reinforced) by indirect effects, or have their correlational implications counteracted (or reinforced) by the effects of common causes. The meaning and interpretation of specific estimates should be contextualized in the full model.
SE model interpretation should connect specific model causal segments to their variance and covariance implications. A single direct effect reports that the variance in the independent variable produces a specific amount of variation in the dependent variable’s values, but the causal details of precisely what makes this happen remain unspecified because a single effect coefficient does not contain sub-components available for integration into a structured story of how that effect arises. A more fine-grained SE model incorporating variables intervening between the cause and effect would be required to provide features constituting a story about how any one effect functions. Until such a model arrives, each estimated direct effect retains a tinge of the unknown, thereby invoking the essence of a theory. A parallel essential unknownness would accompany each estimated coefficient in even the more fine-grained model, so the sense of fundamental mystery is never fully eradicated from SE models.
Even if each modeled effect is unknown beyond the identity of the variables involved and the estimated magnitude of the effect, the structures linking multiple modeled effects provide opportunities to express how things function to coordinate the observed variables – thereby providing useful interpretation possibilities. For example, a common cause contributes to the covariance or correlation between two affected variables, because if the value of the cause goes up, the values of both effects should also go up (assuming positive effects) even if we do not know the full story underlying each cause. (A correlation is the covariance between two variables that have both been standardized to have variance 1.0). Another interpretive contribution might be made by expressing how two causal variables can both explain variance in a dependent variable, as well as how covariance between two such causes can increase or decrease explained variance in the dependent variable. That is, interpretation may involve explaining how a pattern of effects and covariances can contribute to decreasing a dependent variable’s variance. Understanding causal implications implicitly connects to understanding “controlling”, and potentially explaining why some variables, but not others, should be controlled. As models become more complex these fundamental components can combine in non-intuitive ways, such as explaining how there can be no correlation (zero covariance) between two variables despite the variables being connected by a direct non-zero causal effect.
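The common-cause implication above can be verified numerically: with effects a and b of a common cause Z on X and Y, path-tracing implies Cov(X, Y) = a·b·Var(Z). This hypothetical simulation (all coefficients invented) checks that implication:

```python
import numpy as np

# hypothetical common-cause structure: Z -> X (effect a) and Z -> Y (effect b)
a, b = 0.7, 0.5
var_z = 1.0

cov_xy = a * b * var_z   # path-tracing implication: 0.35

# simulate many cases to confirm the implied covariance
rng = np.random.default_rng(1)
z = rng.normal(scale=np.sqrt(var_z), size=200_000)
x = a * z + rng.normal(scale=0.6, size=z.size)   # independent error on X
y = b * z + rng.normal(scale=0.8, size=z.size)   # independent error on Y
sim_cov = np.cov(x, y)[0, 1]   # close to 0.35 despite no X-Y effect
```

The simulation shows a substantial X–Y covariance even though neither variable causes the other, which is exactly why modeling this covariance as a direct effect would fit yet mislead.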
The statistical insignificance of an effect estimate indicates the estimate could rather easily arise as a random sampling variation around a null/zero effect, so interpreting the estimate as a real effect becomes equivocal. As in regression, the proportion of each dependent variable’s variance explained by variations in the modeled causes are provided by R2, though the Blocked-Error R2 should be used if the dependent variable is involved in reciprocal or looped effects, or if it has an error variable correlated with any predictor’s error variable.
The caution appearing in the Model Assessment section warrants repeating. Interpretation should be possible whether a model is or is not consistent with the data. The estimates report how the world would appear to someone believing the model – even if that belief is unfounded because the model happens to be wrong. Interpretation should acknowledge that the model coefficients may or may not correspond to “parameters” – because the model’s coefficients may not have corresponding worldly structural features.
Adding new latent variables entering or exiting the original model at a few clear causal locations/variables contributes to detecting model misspecifications which could otherwise ruin coefficient interpretations. The correlations between the new latent’s indicators and all the original indicators contribute to testing the original model’s structure because the few new and focused effect coefficients must work in coordination with the model’s original direct and indirect effects to coordinate the new indicators with the original indicators. If the original model’s structure was problematic, the sparse new causal connections will be insufficient to coordinate the new indicators with the original indicators, thereby signaling the inappropriateness of the original model’s coefficients through model-data inconsistency. The correlational constraints grounded in null/zero effect coefficients, and coefficients assigned fixed nonzero values, contribute to both model testing and coefficient estimation, and hence deserve acknowledgment as the scaffolding supporting the estimates and their interpretation.
Interpretations become progressively more complex for models containing interactions, nonlinearities, multiple groups, multiple levels, and categorical variables. Effects touching causal loops, reciprocal effects, or correlated residuals also require slightly revised interpretations.
Careful interpretation of both failing and fitting models can provide research advancement. To be dependable, the model should investigate academically informative causal structures, fit applicable data with understandable estimates, and not include vacuous coefficients. Dependable fitting models are rarer than failing models or models inappropriately bludgeoned into fitting, but appropriately-fitting models are possible.
The multiple ways of conceptualizing PLS models complicate interpretation of PLS models. Many of the above comments are applicable if a PLS modeler adopts a realist perspective by striving to ensure their modeled indicators combine in a way that matches some existing but unavailable latent variable. Non-causal PLS models, such as those focusing primarily on R2 or out-of-sample predictive power, change the interpretation criteria by diminishing concern for whether or not the model’s coefficients have worldly counterparts. The fundamental features differentiating the five PLS modeling perspectives discussed by Rigdon, Sarstedt and Ringle point to differences in PLS modelers’ objectives, and corresponding differences in model features warranting interpretation.
Caution should be taken when making claims of causality even when experiments or time-ordered investigations have been undertaken. The term causal model must be understood to mean "a model that conveys causal assumptions", not necessarily a model that produces validated causal conclusions – maybe it does, maybe it does not. Collecting data at multiple time points and using an experimental or quasi-experimental design can help rule out certain rival hypotheses, but even a randomized experiment cannot fully rule out threats to causal claims. No research design can fully guarantee causal structures.
=== Controversies and movements ===
Structural equation modeling is fraught with controversies. Researchers from the factor analytic tradition commonly attempt to reduce sets of multiple indicators to fewer, more manageable, scales or factor-scores for later use in path-structured models. This constitutes a stepwise process in which an initial measurement step provides scales or factor-scores that are used later in a path-structured model. This stepwise approach seems obvious but actually confronts severe underlying deficiencies. The segmentation into steps interferes with thorough checking of whether the scales or factor-scores validly represent the indicators, and/or validly report on latent-level effects. A structural equation model that simultaneously incorporates both the measurement and latent-level structures not only checks whether a latent factor appropriately coordinates its indicators, it also checks whether that same latent simultaneously and appropriately coordinates its indicators with the indicators of theorized causes and/or consequences of that latent. If a latent is unable to do both these styles of coordination, the validity of that latent is questioned, as is any scale or set of factor-scores purporting to measure it. The disagreements swirled around respect for, or disrespect of, evidence challenging the validity of postulated latent factors. The simmering, sometimes boiling, discussions resulted in a special issue of the journal Structural Equation Modeling focused on a target article by Hayduk and Glaser, followed by several comments and a rejoinder, all made freely available thanks to the efforts of George Marcoulides.
These discussions fueled disagreement over whether or not structural equation models should be tested for consistency with the data, and model testing became the next focus of discussions. Scholars having path-modeling histories tended to defend careful model testing while those with factor-histories tended to defend fit-indexing rather than fit-testing. These discussions led to a target article in Personality and Individual Differences by Paul Barrett who said: “In fact, I would now recommend banning ALL such indices from ever appearing in any paper as indicative of model “acceptability” or “degree of misfit”.” (page 821). Barrett’s article was also accompanied by commentary from both perspectives.
The controversy over model testing declined as clear reporting of significant model-data inconsistency became mandatory. Scientists do not get to ignore, or fail to report, evidence just because they do not like what the evidence reports. The requirement of attending to evidence pointing toward model mis-specification underpins more recent concern for addressing "endogeneity" – a style of model mis-specification that interferes with estimation due to lack of independence of error/residual variables. In general, the controversy over the causal nature of structural equation models, including factor-models, has also been declining. Stan Mulaik, a factor-analysis stalwart, has acknowledged the causal basis of factor models. The comments by Bollen and Pearl regarding myths about causality in the context of SEM reinforced the centrality of causal thinking in the context of SEM.
A briefer controversy focused on competing models. Comparing competing models can be very helpful but there are fundamental issues that cannot be resolved by creating two models and retaining the better fitting model. The statistical sophistication of presentations like Levy and Hancock (2007), for example, makes it easy to overlook that a researcher might begin with one terrible model and one atrocious model, and end by retaining the structurally terrible model because some index reports it as better fitting than the atrocious model. It is unfortunate that even otherwise strong SEM texts like Kline (2016) remain disturbingly weak in their presentation of model testing. Overall, the contributions that can be made by structural equation modeling depend on careful and detailed model assessment, even if a failing model happens to be the best available.
An additional controversy that touched the fringes of the previous controversies awaits ignition. Factor models and theory-embedded factor structures having multiple indicators tend to fail, and dropping weak indicators tends to reduce the model-data inconsistency. Reducing the number of indicators leads to concern for, and controversy over, the minimum number of indicators required to support a latent variable in a structural equation model. Researchers tied to factor tradition can be persuaded to reduce the number of indicators to three per latent variable, but three or even two indicators may still be inconsistent with a proposed underlying factor common cause. Hayduk and Littvay (2012) discussed how to think about, defend, and adjust for measurement error, when using only a single indicator for each modeled latent variable. Single indicators have been used effectively in SE models for a long time, but controversy remains only as far away as a reviewer who has considered measurement from only the factor analytic perspective.
Though declining, traces of these controversies are scattered throughout the SEM literature, and you can easily incite disagreement by asking: What should be done with models that are significantly inconsistent with the data? Or by asking: Does model simplicity override respect for evidence of data inconsistency? Or, what weight should be given to indexes which show close or not-so-close data fit for some models? Or, should we be especially lenient toward, and “reward”, parsimonious models that are inconsistent with the data? Or, given that the RMSEA condones disregarding some real ill fit for each model degree of freedom, doesn’t that mean that people testing models with null-hypotheses of non-zero RMSEA are doing deficient model testing? Considerable variation in statistical sophistication is required to cogently address such questions, though responses will likely center on the non-technical matter of whether or not researchers are required to report and respect evidence.
== Extensions, modeling alternatives, and statistical kin ==
Categorical dependent variables
Categorical intervening variables
Copulas
Deep Path Modelling
Exploratory Structural Equation Modeling
Fusion validity models
Item response theory models
Latent class models
Latent growth modeling
Link functions
Longitudinal models
Measurement invariance models
Mixture model
Multilevel models, hierarchical models (e.g. people nested in groups)
Multiple group modelling with or without constraints between groups (genders, cultures, test forms, languages, etc.)
Multi-method multi-trait models
Random intercepts models
Structural Equation Model Trees
Structural Equation Multidimensional scaling
== Software ==
Structural equation modeling programs differ widely in their capabilities and user requirements. Below is a table of available software.
== See also ==
Causal model – Conceptual model in philosophy of science
Graphical model – Probabilistic model
Judea Pearl
Multivariate statistics – Simultaneous observation and analysis of more than one outcome variable
Partial least squares path modeling – Method for structural equation modeling
Partial least squares regression – Statistical method
Simultaneous equations model – Type of statistical model
Causal map – A network consisting of links or arcs between nodes or factors
Bayesian Network – Statistical model
== References ==
== Bibliography ==
Hu, Li-tze; Bentler, Peter M (1999). "Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives". Structural Equation Modeling. 6: 1–55. doi:10.1080/10705519909540118. hdl:2027.42/139911.
Kaplan, D. (2008). Structural Equation Modeling: Foundations and Extensions (2nd ed.). SAGE. ISBN 978-1412916240.
Kline, Rex (2011). Principles and Practice of Structural Equation Modeling (Third ed.). Guilford. ISBN 978-1-60623-876-9.
MacCallum, Robert; Austin, James (2000). "Applications of Structural Equation Modeling in Psychological Research" (PDF). Annual Review of Psychology. 51: 201–226. doi:10.1146/annurev.psych.51.1.201. PMID 10751970. Archived from the original (PDF) on 28 January 2015. Retrieved 25 January 2015.
Quintana, Stephen M.; Maxwell, Scott E. (1999). "Implications of Recent Developments in Structural Equation Modeling for Counseling Psychology". The Counseling Psychologist. 27 (4): 485–527. doi:10.1177/0011000099274002. S2CID 145586057.
== Further reading ==
Bagozzi, Richard P; Yi, Youjae (2011). "Specification, evaluation, and interpretation of structural equation models". Journal of the Academy of Marketing Science. 40 (1): 8–34. doi:10.1007/s11747-011-0278-x. S2CID 167896719.
Bartholomew, D. J., and Knott, M. (1999) Latent Variable Models and Factor Analysis Kendall's Library of Statistics, vol. 7, Edward Arnold Publishers, ISBN 0-340-69243-X
Bentler, P.M. & Bonett, D.G. (1980), "Significance tests and goodness of fit in the analysis of covariance structures", Psychological Bulletin, 88, 588–606.
Bollen, K. A. (1989). Structural Equations with Latent Variables. Wiley, ISBN 0-471-01171-1
Byrne, B. M. (2001) Structural Equation Modeling with AMOS – Basic Concepts, Applications, and Programming. LEA, ISBN 0-8058-4104-0
Goldberger, A. S. (1972). Structural equation models in the social sciences. Econometrica 40, 979–1001.
Haavelmo, Trygve (January 1943). "The Statistical Implications of a System of Simultaneous Equations". Econometrica. 11 (1): 1–12. doi:10.2307/1905714. JSTOR 1905714.
Hoyle, R H (ed) (1995) Structural Equation Modeling: Concepts, Issues, and Applications. SAGE, ISBN 0-8039-5318-6
Jöreskog, Karl G.; Yang, Fan (1996). "Non-linear structural equation models: The Kenny-Judd model with interaction effects". In Marcoulides, George A.; Schumacker, Randall E. (eds.). Advanced structural equation modeling: Concepts, issues, and applications. Thousand Oaks, CA: Sage Publications. pp. 57–88. ISBN 978-1-317-84380-1.
Lewis-Beck, Michael; Bryman, Alan E.; Bryman, Emeritus Professor Alan; Liao, Tim Futing (2004). "Structural Equation Modeling". The SAGE Encyclopedia of Social Science Research Methods. doi:10.4135/9781412950589.n979. hdl:2022/21973. ISBN 978-0-7619-2363-3.
Schermelleh-Engel, K.; Moosbrugger, H.; Müller, H. (2003), "Evaluating the fit of structural equation models" (PDF), Methods of Psychological Research, 8 (2): 23–74.
== External links ==
Structural equation modeling page under David Garson's StatNotes, NCSU
Issues and Opinion on Structural Equation Modeling, SEM in IS Research
The causal interpretation of structural equations (or SEM survival kit) by Judea Pearl 2000.
Structural Equation Modeling Reference List by Jason Newsom: journal articles and book chapters on structural equation models
Handbook of Management Scales, a collection of previously used multi-item scales to measure constructs for SEM
In mathematics, particularly in functional analysis, the spectrum of a bounded linear operator (or, more generally, an unbounded linear operator) is a generalisation of the set of eigenvalues of a matrix. Specifically, a complex number λ is said to be in the spectrum of a bounded linear operator T if T − λI either has no set-theoretic inverse, or the set-theoretic inverse is either unbounded or defined on a non-dense subset. Here, I is the identity operator.
By the closed graph theorem, λ is in the spectrum if and only if the bounded operator T − λI : V → V is non-bijective on V.
The study of spectra and related properties is known as spectral theory, which has numerous applications, most notably the mathematical formulation of quantum mechanics.
The spectrum of an operator on a finite-dimensional vector space is precisely the set of eigenvalues. However an operator on an infinite-dimensional space may have additional elements in its spectrum, and may have no eigenvalues. For example, consider the right shift operator R on the Hilbert space ℓ2,
(x₁, x₂, …) ↦ (0, x₁, x₂, …).
This has no eigenvalues, since if Rx = λx then by expanding this expression we see that x₁ = 0, x₂ = 0, etc. On the other hand, 0 is in the spectrum: although the operator R − 0 (i.e. R itself) is injective and so has a set-theoretic inverse, that inverse is defined only on the range of R, which is not dense in ℓ2. In fact every bounded linear operator on a complex Banach space must have a non-empty spectrum.
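The two claims above can be checked numerically on finite truncations of R. This is an illustrative sketch, under the assumption that vectors are zero-padded so that the truncated shift agrees with R on their support:

```python
import numpy as np

def right_shift(x):
    """Truncated right shift on l2: (x1, x2, ...) -> (0, x1, x2, ...)."""
    return np.concatenate(([0.0], x[:-1]))

rng = np.random.default_rng(0)
x = rng.standard_normal(50)
x[-1] = 0.0  # keep the tail zero so the truncation acts exactly like R

# R is an isometry: ||Rx|| = ||x||, so it is injective and has a
# set-theoretic inverse on its range.
assert np.isclose(np.linalg.norm(right_shift(x)), np.linalg.norm(x))

# Every vector in the range of R has first coordinate 0, so the basis
# vector e1 = (1, 0, 0, ...) is at distance at least 1 from the range:
# the range is not dense, which is why 0 belongs to the spectrum.
e1 = np.zeros(50)
e1[0] = 1.0
y = right_shift(x)
assert y[0] == 0.0
assert np.linalg.norm(e1 - y) >= 1.0
```

The same distance-one argument applies to every vector in the range, not just this random sample, since the first coordinate of Rx is always zero.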
The notion of spectrum extends to unbounded (i.e. not necessarily bounded) operators. A complex number λ is said to be in the spectrum of an unbounded operator T : X → X defined on domain D(T) ⊆ X if there is no bounded inverse (T − λI)⁻¹ : X → D(T) defined on the whole of X.
If T is closed (which includes the case when T is bounded), boundedness of (T − λI)⁻¹ follows automatically from its existence.
The space of bounded linear operators B(X) on a Banach space X is an example of a unital Banach algebra. Since the definition of the spectrum does not mention any properties of B(X) except those that any such algebra has, the notion of a spectrum may be generalised to this context by using the same definition verbatim.
== Spectrum of a bounded operator ==
=== Definition ===
Let T be a bounded linear operator acting on a Banach space X over the complex scalar field ℂ, and I be the identity operator on X. The spectrum of T is the set of all λ ∈ ℂ for which the operator T − λI does not have an inverse that is a bounded linear operator.
Since T − λI is a linear operator, the inverse is linear if it exists; and, by the bounded inverse theorem, it is bounded. Therefore, the spectrum consists precisely of those scalars λ for which T − λI is not bijective.
The spectrum of a given operator T is often denoted σ(T), and its complement, the resolvent set, is denoted ρ(T) = ℂ ∖ σ(T). (ρ(T) is sometimes used to denote the spectral radius of T.)
=== Relation to eigenvalues ===
If λ is an eigenvalue of T, then the operator T − λI is not one-to-one, and therefore its inverse (T − λI)⁻¹ is not defined. However, the converse statement is not true: the operator T − λI may not have an inverse, even if λ is not an eigenvalue. Thus the spectrum of an operator always contains all its eigenvalues, but is not limited to them.
For example, consider the Hilbert space ℓ²(ℤ), which consists of all bi-infinite sequences of real numbers v = (…, v₋₂, v₋₁, v₀, v₁, v₂, …) that have a finite sum of squares ∑ᵢ vᵢ². The bilateral shift operator T simply displaces every element of the sequence by one position; namely, if u = T(v) then uᵢ = vᵢ₋₁ for every integer i. The eigenvalue equation T(v) = λv has no nonzero solution in this space, since it implies that all the values vᵢ have the same absolute value (if |λ| = 1) or form a geometric progression (if |λ| ≠ 1); either way, the sum of their squares would not be finite. However, the operator T − λI is not invertible if |λ| = 1. For example, the sequence u such that uᵢ = 1/(|i| + 1) is in ℓ²(ℤ); but there is no sequence v in ℓ²(ℤ) such that (T − I)v = u (that is, vᵢ₋₁ = uᵢ + vᵢ for all i).
=== Basic properties ===
The spectrum of a bounded operator T is always a closed, bounded subset of the complex plane.
If the spectrum were empty, then the resolvent function R(λ) = (T − λI)⁻¹, λ ∈ ℂ,
would be defined everywhere on the complex plane and bounded. But it can be shown that the resolvent function R is holomorphic on its domain. By the vector-valued version of Liouville's theorem, this function is constant, thus everywhere zero as it is zero at infinity. This would be a contradiction.
The boundedness of the spectrum follows from the Neumann series expansion in λ; the spectrum σ(T) is bounded by ||T||. A similar result shows the closedness of the spectrum.
The bound ||T|| on the spectrum can be refined somewhat. The spectral radius, r(T), of T is the radius of the smallest circle in the complex plane which is centered at the origin and contains the spectrum σ(T) inside of it, i.e.
r(T) = sup{ |λ| : λ ∈ σ(T) }.
The spectral radius formula says that for any element T of a Banach algebra, r(T) = lim_{n→∞} ‖Tⁿ‖^{1/n}.
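For a matrix, the spectral radius is the largest eigenvalue modulus, and the limit above can be watched converging numerically. A minimal sketch (the non-normal matrix below is an arbitrary example chosen so that ‖T‖ strictly exceeds r(T)):

```python
import numpy as np

# A non-normal matrix: its operator norm ||T|| ≈ 1.207 overestimates the
# spectral radius r(T) = 0.5 (both eigenvalues are 0.5).
T = np.array([[0.5, 1.0],
              [0.0, 0.5]])

r_true = max(abs(np.linalg.eigvals(T)))  # 0.5

# ||T^n||^(1/n), using the operator 2-norm (largest singular value).
estimates = [np.linalg.norm(np.linalg.matrix_power(T, n), 2) ** (1.0 / n)
             for n in (1, 10, 100, 1000)]

# The estimates shrink from ||T|| toward r(T) as n grows.
assert estimates[0] > estimates[-1]
assert abs(estimates[-1] - r_true) < 0.01
```

The gap between ‖T‖ and r(T) is exactly what the formula quantifies: powers of T eventually grow like r(T)ⁿ, even when a single application of T can stretch vectors by more.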
== Spectrum of an unbounded operator ==
One can extend the definition of spectrum to unbounded operators on a Banach space X. These operators are no longer elements in the Banach algebra B(X).
=== Definition ===
Let X be a Banach space and T : D(T) → X be a linear operator defined on domain D(T) ⊆ X.
A complex number λ is said to be in the resolvent set (also called regular set) of T if the operator T − λI : D(T) → X has a bounded everywhere-defined inverse, i.e. if there exists a bounded operator S : X → D(T) such that S(T − λI) = I_{D(T)} and (T − λI)S = I_X.
A complex number λ is then in the spectrum if λ is not in the resolvent set.
For λ to be in the resolvent (i.e. not in the spectrum), just like in the bounded case, T − λI must be bijective, since it must have a two-sided inverse. As before, if an inverse exists, then its linearity is immediate, but in general it may not be bounded, so this condition must be checked separately.
By the closed graph theorem, boundedness of (T − λI)⁻¹ does follow directly from its existence when T is closed. Then, just as in the bounded case, a complex number λ lies in the spectrum of a closed operator T if and only if T − λI is not bijective. Note that the class of closed operators includes all bounded operators.
=== Basic properties ===
The spectrum of an unbounded operator is in general a closed, possibly empty, subset of the complex plane.
If the operator T is not closed, then σ(T) = ℂ.
The following example indicates that closed operators may have empty spectrum (a non-closed operator cannot, since its spectrum is all of ℂ). Let T denote the differentiation operator on L²([0,1]), whose domain is defined to be the closure of C_c^∞((0,1]) with respect to the H¹-Sobolev space norm. This space can be characterized as all functions in H¹([0,1]) that are zero at t = 0. Then T − z has trivial kernel on this domain, as any H¹([0,1])-function in its kernel is a constant multiple of e^{zt}, which is zero at t = 0 if and only if it is identically zero. Moreover, T − z is surjective, with bounded inverse given by the integral operator ((T − z)⁻¹f)(t) = ∫₀ᵗ e^{z(t−s)} f(s) ds. Therefore, the complement of the spectrum is all of ℂ.
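A quick numerical sanity check of this example (an illustrative sketch, not part of the argument): the integral u(t) = ∫₀ᵗ e^{z(t−s)} f(s) ds should invert T − z, which can be verified on a grid for the particular choice f ≡ 1, z = 2, where the integral has a closed form.

```python
import numpy as np

# For T = d/dt with boundary condition u(0) = 0, the candidate inverse of
# T - z is u(t) = ∫_0^t e^{z(t-s)} f(s) ds.  With f ≡ 1 and z = 2 this
# integral evaluates to u(t) = (e^{2t} - 1)/2, and u' - 2u should give back f.
z = 2.0
t = np.linspace(0.0, 1.0, 2001)
u = (np.exp(z * t) - 1.0) / z      # closed form of the integral above
du = np.gradient(u, t)             # second-order numerical derivative
residual = du - z * u              # should equal f ≡ 1 up to grid error

assert abs(u[0]) < 1e-12                          # boundary condition u(0) = 0
assert np.max(np.abs(residual[5:-5] - 1.0)) < 1e-3  # (T - z)u = f on the interior
```

The edge points are trimmed because the finite-difference stencil is less accurate at the boundary of the grid.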
== Classification of points in the spectrum ==
A bounded operator T on a Banach space is invertible, i.e. has a bounded inverse, if and only if T is bounded below, i.e. ‖Tx‖ ≥ c‖x‖ for some c > 0, and has dense range. Accordingly, the spectrum of T can be divided into the following parts:
λ ∈ σ(T) if T − λI is not bounded below. In particular, this is the case if T − λI is not injective, that is, if λ is an eigenvalue. The set of eigenvalues is called the point spectrum of T and denoted by σp(T). Alternatively, T − λI could be one-to-one but still not bounded below. Such λ is not an eigenvalue but still an approximate eigenvalue of T (eigenvalues themselves are also approximate eigenvalues). The set of approximate eigenvalues (which includes the point spectrum) is called the approximate point spectrum of T, denoted by σap(T).
λ ∈ σ(T) if T − λI does not have dense range. The set of such λ is called the compression spectrum of T, denoted by σcp(T). If T − λI does not have dense range but is injective, λ is said to be in the residual spectrum of T, denoted by σr(T).
Note that the approximate point spectrum and residual spectrum are not necessarily disjoint (however, the point spectrum and the residual spectrum are).
The following subsections provide more details on the three parts of σ(T) sketched above.
=== Point spectrum ===
If an operator is not injective (so there is some nonzero x with T(x) = 0), then it is clearly not invertible. So if λ is an eigenvalue of T, one necessarily has λ ∈ σ(T). The set of eigenvalues of T is also called the point spectrum of T, denoted by σp(T). Some authors refer to the closure of the point spectrum as the pure point spectrum
σpp(T), while others simply consider σpp(T) := σp(T).
=== Approximate point spectrum ===
More generally, by the bounded inverse theorem, T is not invertible if it is not bounded below; that is, if there is no c > 0 such that ||Tx|| ≥ c||x|| for all x ∈ X. So the spectrum includes the set of approximate eigenvalues, which are those λ such that T - λI is not bounded below; equivalently, it is the set of λ for which there is a sequence of unit vectors x1, x2, ... for which
lim_{n→∞} ‖Txₙ − λxₙ‖ = 0.
The set of approximate eigenvalues is known as the approximate point spectrum, denoted by σap(T).
It is easy to see that the eigenvalues lie in the approximate point spectrum.
For example, consider the right shift R on l²(ℤ) defined by R : eⱼ ↦ eⱼ₊₁, j ∈ ℤ, where (eⱼ), j ∈ ℤ, is the standard orthonormal basis in l²(ℤ). Direct calculation shows R has no eigenvalues, but every λ with |λ| = 1 is an approximate eigenvalue; letting xₙ be the vector
(1/√n)(…, 0, 1, λ⁻¹, λ⁻², …, λ^{1−n}, 0, …)
one can see that ‖xₙ‖ = 1 for all n, but ‖Rxₙ − λxₙ‖ = √(2/n) → 0.
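This computation can be reproduced numerically. A sketch, under the assumption that each xₙ is represented on a finite zero-padded window on which the truncated shift agrees with the bilateral shift:

```python
import numpy as np

def shift(x):
    """Right shift on a zero-padded window: (Rx)_i = x_{i-1}."""
    return np.concatenate(([0.0 + 0.0j], x[:-1]))

lam = np.exp(0.7j)                  # any λ with |λ| = 1 will do
for n in (10, 100, 1000):
    # x_n = n^{-1/2} (..., 0, 1, λ^{-1}, ..., λ^{1-n}, 0, ...), padded with
    # zeros so the truncated shift acts exactly like R on its support.
    x = np.zeros(n + 2, dtype=complex)
    x[1:n + 1] = lam ** (-np.arange(n)) / np.sqrt(n)
    assert np.isclose(np.linalg.norm(x), 1.0)
    # ||R x_n - λ x_n|| equals sqrt(2/n) exactly, so it tends to 0.
    assert np.isclose(np.linalg.norm(shift(x) - lam * x), np.sqrt(2.0 / n))
```

Only the two "ends" of the window contribute to the residual, each with weight 1/√n, which is where the exact value √(2/n) comes from.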
Since R is a unitary operator, its spectrum lies on the unit circle. Therefore, the approximate point spectrum of R is its entire spectrum.
This conclusion is also true for a more general class of operators.
A unitary operator is normal. By the spectral theorem, a bounded operator on a Hilbert space H is normal if and only if it is equivalent (after identification of H with an L² space) to a multiplication operator. It can be shown that the approximate point spectrum of a bounded multiplication operator equals its spectrum.
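The last claim can be made concrete with a discretized multiplication operator. A sketch, with the assumption that multiplication by t on L²([0,1]) is approximated by a diagonal matrix over a uniform grid (the grid and the sample point λ = 0.37 are illustrative choices, not part of the theory):

```python
import numpy as np

# Discretize multiplication by t on L^2([0,1]) as diag(t_1, ..., t_n).
n = 10_000
t = (np.arange(n) + 0.5) / n
lam = 0.37  # any point of [0,1]; here it is NOT one of the grid values t_i

# A normalized spike supported where t_i ≈ λ is an approximate eigenvector.
x = np.where(np.abs(t - lam) < 0.005, 1.0, 0.0)
x /= np.linalg.norm(x)

residual = np.linalg.norm(t * x - lam * x)   # ||(M - λI) x||
assert residual < 0.005                      # λ is an approximate eigenvalue
assert np.all(np.abs(t - lam) > 1e-12)       # yet λ is not an exact eigenvalue
```

Narrowing the support of the spike makes the residual as small as desired, mirroring how, in the continuum, every point of the essential range of the multiplier is an approximate eigenvalue even when no eigenfunction exists.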
=== Discrete spectrum ===
The discrete spectrum is defined as the set of normal eigenvalues or, equivalently, as the set of isolated points of the spectrum such that the corresponding Riesz projector is of finite rank. As such, the discrete spectrum is a strict subset of the point spectrum, i.e.,
σd(T) ⊂ σp(T).
=== Continuous spectrum ===
The set of all λ for which T − λI is injective and has dense range, but is not surjective, is called the continuous spectrum of T, denoted by σc(T). The continuous spectrum therefore consists of those approximate eigenvalues which are not eigenvalues and do not lie in the residual spectrum. That is, σc(T) = σap(T) ∖ (σr(T) ∪ σp(T)).
For example, A : l²(ℕ) → l²(ℕ), eⱼ ↦ eⱼ/j, j ∈ ℕ, is injective and has dense range, yet Ran(A) ⊊ l²(ℕ).
Indeed, if x = ∑_{j∈ℕ} cⱼeⱼ ∈ l²(ℕ) with cⱼ ∈ ℂ such that ∑_{j∈ℕ} |cⱼ|² < ∞, one does not necessarily have ∑_{j∈ℕ} |jcⱼ|² < ∞, and then ∑_{j∈ℕ} jcⱼeⱼ ∉ l²(ℕ).
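The failure of boundedness below behind this example shows up already in the finite sections of A. A sketch (the n×n truncations of A in the standard basis are the diagonal matrices diag(1, 1/2, …, 1/n)):

```python
import numpy as np

# Finite sections of A : e_j -> e_j / j are diag(1, 1/2, ..., 1/n).
for n in (10, 100, 1000):
    A_n = np.diag(1.0 / np.arange(1, n + 1))
    smin = np.linalg.svd(A_n, compute_uv=False).min()
    assert smin > 0.0                 # each section is injective...
    assert np.isclose(smin, 1.0 / n)  # ...but the lower bound 1/n tends to 0
```

Since no uniform c > 0 satisfies ‖Ax‖ ≥ c‖x‖, the operator A is not bounded below, so 0 lies in its spectrum (in the continuous spectrum, as A is injective with dense range).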
=== Compression spectrum ===
The set of λ ∈ ℂ for which T − λI does not have dense range is known as the compression spectrum of T and is denoted by σcp(T).
=== Residual spectrum ===
The set of λ ∈ ℂ for which T − λI is injective but does not have dense range is known as the residual spectrum of T and is denoted by σr(T): σr(T) = σcp(T) ∖ σp(T).
An operator may be injective, even bounded below, but still not invertible. The right shift on l²(ℕ), R : l²(ℕ) → l²(ℕ), R : eⱼ ↦ eⱼ₊₁, j ∈ ℕ, is such an example. This shift operator is an isometry, therefore bounded below by 1. But it is not invertible, as it is not surjective (e₁ ∉ Ran(R)), and moreover Ran(R) is not dense in l²(ℕ) (e₁ is not in the closure of Ran(R)).
=== Peripheral spectrum ===
The peripheral spectrum of an operator is defined as the set of points in its spectrum which have modulus equal to its spectral radius.
=== Essential spectrum ===
There are five similar definitions of the essential spectrum of a closed densely defined linear operator A : X → X, which satisfy
σess,1(A) ⊂ σess,2(A) ⊂ σess,3(A) ⊂ σess,4(A) ⊂ σess,5(A) ⊂ σ(A).
All these spectra σess,k(A), 1 ≤ k ≤ 5, coincide in the case of self-adjoint operators.
The essential spectrum {\displaystyle \sigma _{\mathrm {ess} ,1}(A)} is defined as the set of points {\displaystyle \lambda } of the spectrum such that {\displaystyle A-\lambda I} is not semi-Fredholm. (An operator is semi-Fredholm if its range is closed and either its kernel or its cokernel (or both) is finite-dimensional.)
Example 1: {\displaystyle \lambda =0\in \sigma _{\mathrm {ess} ,1}(A)} for the operator {\displaystyle A:\,l^{2}(\mathbb {N} )\to l^{2}(\mathbb {N} )}, {\displaystyle A:\,e_{j}\mapsto e_{j}/j,~j\in \mathbb {N} }, because the range of this operator is not closed: the range does not include all of {\displaystyle l^{2}(\mathbb {N} )}, although its closure does.
Example 2: {\displaystyle \lambda =0\in \sigma _{\mathrm {ess} ,1}(N)} for {\displaystyle N:\,l^{2}(\mathbb {N} )\to l^{2}(\mathbb {N} )}, {\displaystyle N:\,v\mapsto 0} for any {\displaystyle v\in l^{2}(\mathbb {N} )}, because both the kernel and the cokernel of this operator are infinite-dimensional.
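Example 1 can be illustrated with finite sections of the diagonal operator. The sketch below, with arbitrarily chosen truncation sizes, shows why the range is not closed: each truncated matrix is invertible, yet the smallest singular value tends to 0, so the full operator is injective but not bounded below.

```python
import numpy as np

# Finite sections of A : e_j -> e_j / j on l^2(N).  Each section is
# invertible, but its smallest singular value 1/n shrinks to 0 as n grows,
# so the infinite-dimensional operator has a non-closed range.
for n in (5, 50, 500):
    A = np.diag(1.0 / np.arange(1, n + 1))
    smin = np.linalg.svd(A, compute_uv=False).min()
    assert np.isclose(smin, 1.0 / n)  # bounded-below constant decays to 0
```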
The essential spectrum {\displaystyle \sigma _{\mathrm {ess} ,2}(A)} is defined as the set of points {\displaystyle \lambda } of the spectrum such that the operator {\displaystyle A-\lambda I} either has an infinite-dimensional kernel or has a range which is not closed. It can also be characterized in terms of Weyl's criterion: there exists a sequence {\displaystyle (x_{j})_{j\in \mathbb {N} }} in the space X such that {\displaystyle \Vert x_{j}\Vert =1}, {\textstyle \lim _{j\to \infty }\left\|(A-\lambda I)x_{j}\right\|=0,} and such that {\displaystyle (x_{j})_{j\in \mathbb {N} }} contains no convergent subsequence. Such a sequence is called a singular sequence (or a singular Weyl sequence).
Example: {\displaystyle \lambda =0\in \sigma _{\mathrm {ess} ,2}(B)} for the operator {\displaystyle B:\,l^{2}(\mathbb {N} )\to l^{2}(\mathbb {N} )}, {\displaystyle B:\,e_{j}\mapsto e_{j/2}} if j is even and {\displaystyle e_{j}\mapsto 0} if j is odd (its kernel is infinite-dimensional; its cokernel is zero-dimensional). Note that {\displaystyle \lambda =0\not \in \sigma _{\mathrm {ess} ,1}(B)}.
The essential spectrum {\displaystyle \sigma _{\mathrm {ess} ,3}(A)} is defined as the set of points {\displaystyle \lambda } of the spectrum such that {\displaystyle A-\lambda I} is not Fredholm. (An operator is Fredholm if its range is closed and both its kernel and its cokernel are finite-dimensional.)
Example: {\displaystyle \lambda =0\in \sigma _{\mathrm {ess} ,3}(J)} for the operator {\displaystyle J:\,l^{2}(\mathbb {N} )\to l^{2}(\mathbb {N} )}, {\displaystyle J:\,e_{j}\mapsto e_{2j}} (its kernel is zero-dimensional, its cokernel is infinite-dimensional). Note that {\displaystyle \lambda =0\not \in \sigma _{\mathrm {ess} ,2}(J)}.
The essential spectrum {\displaystyle \sigma _{\mathrm {ess} ,4}(A)} is defined as the set of points {\displaystyle \lambda } of the spectrum such that {\displaystyle A-\lambda I} is not Fredholm of index zero. It can also be characterized as the largest part of the spectrum of A that is preserved by compact perturbations; in other words, {\textstyle \sigma _{\mathrm {ess} ,4}(A)=\bigcap _{K\in B_{0}(X)}\sigma (A+K)}, where {\displaystyle B_{0}(X)} denotes the set of all compact operators on X.
Example: {\displaystyle \lambda =0\in \sigma _{\mathrm {ess} ,4}(R)}, where {\displaystyle R:\,l^{2}(\mathbb {N} )\to l^{2}(\mathbb {N} )} is the right shift operator, {\displaystyle R:\,e_{j}\mapsto e_{j+1}} for {\displaystyle j\in \mathbb {N} } (its kernel is zero, its cokernel is one-dimensional). Note that {\displaystyle \lambda =0\not \in \sigma _{\mathrm {ess} ,3}(R)}.
The essential spectrum {\displaystyle \sigma _{\mathrm {ess} ,5}(A)} is the union of {\displaystyle \sigma _{\mathrm {ess} ,1}(A)} with all components of {\displaystyle \mathbb {C} \setminus \sigma _{\mathrm {ess} ,1}(A)} that do not intersect the resolvent set {\displaystyle \mathbb {C} \setminus \sigma (A)}. It can also be characterized as {\displaystyle \sigma (A)\setminus \sigma _{\mathrm {d} }(A)}.
Example: consider the operator {\displaystyle T:\,l^{2}(\mathbb {Z} )\to l^{2}(\mathbb {Z} )}, {\displaystyle T:\,e_{j}\mapsto e_{j-1}} for {\displaystyle j\neq 0}, {\displaystyle T:\,e_{0}\mapsto 0}. Since {\displaystyle \Vert T\Vert =1}, one has {\displaystyle \sigma (T)\subset {\overline {\mathbb {D} _{1}}}}. For any {\displaystyle z\in \mathbb {C} } with {\displaystyle |z|=1}, the range of {\displaystyle T-zI} is dense but not closed, hence the boundary of the unit disc lies in the first type of essential spectrum: {\displaystyle \partial \mathbb {D} _{1}\subset \sigma _{\mathrm {ess} ,1}(T)}. For any {\displaystyle z\in \mathbb {C} } with {\displaystyle |z|<1}, {\displaystyle T-zI} has a closed range, a one-dimensional kernel, and a one-dimensional cokernel, so {\displaystyle z\in \sigma (T)} although {\displaystyle z\not \in \sigma _{\mathrm {ess} ,k}(T)} for {\displaystyle 1\leq k\leq 4}; thus {\displaystyle \sigma _{\mathrm {ess} ,k}(T)=\partial \mathbb {D} _{1}} for {\displaystyle 1\leq k\leq 4}. There are two components of {\displaystyle \mathbb {C} \setminus \sigma _{\mathrm {ess} ,1}(T)}: {\displaystyle \{z\in \mathbb {C} :\,|z|>1\}} and {\displaystyle \{z\in \mathbb {C} :\,|z|<1\}}. The component {\displaystyle \{|z|<1\}} has no intersection with the resolvent set; by definition, {\displaystyle \sigma _{\mathrm {ess} ,5}(T)=\sigma _{\mathrm {ess} ,1}(T)\cup \{z\in \mathbb {C} :\,|z|<1\}=\{z\in \mathbb {C} :\,|z|\leq 1\}}.
== Example: Hydrogen atom ==
The hydrogen atom provides an example of different types of spectra. The hydrogen atom Hamiltonian {\displaystyle H=-\Delta -{\frac {Z}{|x|}}}, {\displaystyle Z>0}, with domain {\displaystyle D(H)=H^{1}(\mathbb {R} ^{3})}, has a discrete set of eigenvalues (the discrete spectrum {\displaystyle \sigma _{\mathrm {d} }(H)}, which in this case coincides with the point spectrum {\displaystyle \sigma _{\mathrm {p} }(H)}, since there are no eigenvalues embedded in the continuous spectrum) that can be computed by the Rydberg formula. The corresponding eigenfunctions are called eigenstates, or bound states. The result of the ionization process is described by the continuous part of the spectrum (the energy of the collision/ionization is not "quantized"), represented by {\displaystyle \sigma _{\mathrm {cont} }(H)=[0,+\infty )} (which also coincides with the essential spectrum, {\displaystyle \sigma _{\mathrm {ess} }(H)=[0,+\infty )}).
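The Rydberg formula mentioned above relates the discrete eigenvalues to observable spectral lines. A minimal sketch (the constant and transition choices are standard physics, not taken from this article):

```python
# Wavelengths of hydrogen spectral lines from the Rydberg formula
# 1/lambda = R_H (1/n1^2 - 1/n2^2), reflecting the discrete spectrum of H.
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in 1/m

def wavelength(n1: int, n2: int) -> float:
    """Wavelength (in metres) of the transition n2 -> n1, with n2 > n1 >= 1."""
    return 1.0 / (R_H * (1.0 / n1**2 - 1.0 / n2**2))

lyman_alpha = wavelength(1, 2)   # ultraviolet, about 121.5 nm
balmer_alpha = wavelength(2, 3)  # the visible red H-alpha line, about 656 nm
assert 121e-9 < lyman_alpha < 122e-9
assert 655e-9 < balmer_alpha < 657e-9
```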
== Spectrum of the adjoint operator ==
Let X be a Banach space and {\displaystyle T:\,X\to X} a closed linear operator with dense domain {\displaystyle D(T)\subset X}. If X* is the dual space of X, and {\displaystyle T^{*}:\,X^{*}\to X^{*}} is the Hermitian adjoint of T, then
{\displaystyle \sigma (T^{*})={\overline {\sigma (T)}}:=\{z\in \mathbb {C} :{\bar {z}}\in \sigma (T)\}.}
We also get {\displaystyle \sigma _{\mathrm {p} }(T)\subset {\overline {\sigma _{\mathrm {r} }(T^{*})\cup \sigma _{\mathrm {p} }(T^{*})}}} by the following argument: X embeds isometrically into X**. Therefore, for every non-zero element in the kernel of {\displaystyle T-\lambda I} there exists a non-zero element in X** which vanishes on {\displaystyle \mathrm {Ran} (T^{*}-{\bar {\lambda }}I)}. Thus {\displaystyle \mathrm {Ran} (T^{*}-{\bar {\lambda }}I)} cannot be dense.
Furthermore, if X is reflexive, we have {\displaystyle {\overline {\sigma _{\mathrm {r} }(T^{*})}}\subset \sigma _{\mathrm {p} }(T)}.
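The relation {\displaystyle \sigma (T^{*})={\overline {\sigma (T)}}} can be checked directly in the finite-dimensional Hilbert-space case, where the adjoint is the conjugate transpose and the spectrum is the set of eigenvalues. A sketch with an arbitrary random matrix:

```python
import numpy as np

# For a matrix (a bounded operator on C^n), the spectrum of the Hermitian
# adjoint T* = conj(T).T is the complex conjugate of the spectrum of T.
rng = np.random.default_rng(1)
T = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

spec_Tstar = np.sort_complex(np.linalg.eigvals(T.conj().T))
spec_conj = np.sort_complex(np.linalg.eigvals(T).conj())
assert np.allclose(spec_Tstar, spec_conj)  # sigma(T*) = conjugate of sigma(T)
```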
== Spectra of particular classes of operators ==
=== Compact operators ===
If T is a compact operator, or, more generally, an inessential operator, then it can be shown that the spectrum is countable, that zero is the only possible accumulation point, and that any nonzero λ in the spectrum is an eigenvalue.
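These properties can be seen in finite sections of a concrete compact operator. The sketch below uses the diagonal operator {\displaystyle K:\,e_{j}\mapsto e_{j}/j} (an illustrative choice, not from the text), whose spectrum {1, 1/2, 1/3, ...} ∪ {0} is countable with zero as its only accumulation point:

```python
import numpy as np

# Finite section of the compact diagonal operator K : e_j -> e_j / j.
# Its nonzero spectrum consists of eigenvalues 1/j, which crowd toward 0.
n = 1000
eigs = np.sort(np.linalg.eigvals(np.diag(1.0 / np.arange(1.0, n + 1))).real)[::-1]

assert np.isclose(eigs[0], 1.0)           # largest eigenvalue is 1
assert 0 < eigs[-1] < 1e-2                # eigenvalues approach 0 ...
gaps = -np.diff(eigs)
assert gaps[-1] < gaps[0]                 # ... and their spacing shrinks near 0
```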
=== Quasinilpotent operators ===
A bounded operator {\displaystyle A:\,X\to X} is quasinilpotent if {\displaystyle \lVert A^{n}\rVert ^{1/n}\to 0} as {\displaystyle n\to \infty } (in other words, if the spectral radius of A equals zero). Such operators can equivalently be characterized by the condition
{\displaystyle \sigma (A)=\{0\}.}
An example of such an operator is {\displaystyle A:\,l^{2}(\mathbb {N} )\to l^{2}(\mathbb {N} )}, {\displaystyle e_{j}\mapsto e_{j+1}/2^{j}} for {\displaystyle j\in \mathbb {N} }.
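For this weighted shift the quantity {\displaystyle \lVert A^{n}\rVert ^{1/n}} can be computed explicitly on a finite section: the norm of {\displaystyle A^{n}} is the largest product of n consecutive weights, which gives {\displaystyle \lVert A^{n}\rVert ^{1/n}=2^{-(n+1)/2}\to 0}. A sketch (truncation size chosen so the relevant paths fit inside the matrix):

```python
import numpy as np

# Finite section of the quasinilpotent weighted shift A : e_j -> e_{j+1} / 2^j.
# The spectral radius formula gives ||A^n||^{1/n} = 2^{-(n+1)/2} -> 0.
N = 16
A = np.zeros((N, N))
for j in range(1, N):
    A[j, j - 1] = 2.0 ** (-j)  # weight 1/2^j on the step into position j

r = [np.linalg.norm(np.linalg.matrix_power(A, n), 2) ** (1.0 / n)
     for n in range(1, 11)]
assert all(a > b for a, b in zip(r, r[1:]))           # strictly decreasing to 0
assert np.isclose(r[9], 2.0 ** (-11 / 2), rtol=1e-6)  # matches 2^{-(n+1)/2} at n=10
```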
=== Self-adjoint operators ===
If X is a Hilbert space and T is a self-adjoint operator (or, more generally, a normal operator), then a remarkable result known as the spectral theorem gives an analogue of the diagonalisation theorem for normal finite-dimensional operators (Hermitian matrices, for example).
For self-adjoint operators, one can use spectral measures to define a decomposition of the spectrum into absolutely continuous, pure point, and singular parts.
== Spectrum of a real operator ==
The definitions of the resolvent and spectrum can be extended to any continuous linear operator {\displaystyle T} acting on a Banach space {\displaystyle X} over the real field {\displaystyle \mathbb {R} } (instead of the complex field {\displaystyle \mathbb {C} }) via its complexification {\displaystyle T_{\mathbb {C} }}. In this case we define the resolvent set {\displaystyle \rho (T)} as the set of all {\displaystyle \lambda \in \mathbb {C} } such that {\displaystyle T_{\mathbb {C} }-\lambda I} is invertible as an operator acting on the complexified space {\displaystyle X_{\mathbb {C} }}; then we define {\displaystyle \sigma (T)=\mathbb {C} \setminus \rho (T)}.
=== Real spectrum ===
The real spectrum of a continuous linear operator {\displaystyle T} acting on a real Banach space {\displaystyle X}, denoted {\displaystyle \sigma _{\mathbb {R} }(T)}, is defined as the set of all {\displaystyle \lambda \in \mathbb {R} } for which {\displaystyle T-\lambda I} fails to be invertible in the real algebra of bounded linear operators acting on {\displaystyle X}. In this case we have {\displaystyle \sigma (T)\cap \mathbb {R} =\sigma _{\mathbb {R} }(T)}. The real spectrum may or may not coincide with the complex spectrum; in particular, the real spectrum can be empty.
== Spectrum of a unital Banach algebra ==
Let B be a complex Banach algebra containing a unit e. Then we define the spectrum σ(x) (or more explicitly σB(x)) of an element x of B to be the set of those complex numbers λ for which λe − x is not invertible in B. This extends the definition for bounded linear operators B(X) on a Banach space X, since B(X) is a unital Banach algebra.
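The matrix algebra case of this definition is easy to compute with: in B = M_n(C), λe − x fails to be invertible exactly when det(λe − x) = 0, so σ(x) is the set of eigenvalues of x. A sketch with an arbitrary 2×2 example:

```python
import numpy as np

# In the unital Banach algebra B = M_2(C), lambda is in sigma(x) iff
# lambda*e - x is not invertible, i.e. det(lambda*e - x) = 0.
x = np.array([[2.0, 1.0],
              [0.0, 3.0]])
e = np.eye(2)  # the unit of the algebra

def in_spectrum(lam: complex, x: np.ndarray, tol: float = 1e-12) -> bool:
    """Test membership in sigma(x) via non-invertibility of lam*e - x."""
    return abs(np.linalg.det(lam * e - x)) < tol

assert in_spectrum(2.0, x) and in_spectrum(3.0, x)  # the eigenvalues of x
assert not in_spectrum(2.5, x)                      # resolvent point
```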
== See also ==
Essential spectrum
Discrete spectrum (mathematics)
Self-adjoint operator
Pseudospectrum
Resolvent set
== Notes ==
== References ==
Dales et al., Introduction to Banach Algebras, Operators, and Harmonic Analysis, ISBN 0-521-53584-0
"Spectrum of an operator", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Simon, Barry (2005). Orthogonal polynomials on the unit circle. Part 1. Classical theory. American Mathematical Society Colloquium Publications. Vol. 54. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-3446-6. MR 2105088.
Teschl, G. (2014). Mathematical Methods in Quantum Mechanics. Providence (R.I): American Mathematical Soc. ISBN 978-1-4704-1704-8.
In continuum mechanics, stress is a physical quantity that describes forces present during deformation. For example, an object being pulled apart, such as a stretched elastic band, is subject to tensile stress and may undergo elongation. An object being pushed together, such as a crumpled sponge, is subject to compressive stress and may undergo shortening. The greater the force and the smaller the cross-sectional area of the body on which it acts, the greater the stress. Stress has dimension of force per area, with SI units of newtons per square meter (N/m2) or pascal (Pa).
Stress expresses the internal forces that neighbouring particles of a continuous material exert on each other, while strain is the measure of the relative deformation of the material. For example, when a solid vertical bar is supporting an overhead weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a closed container under pressure, each particle gets pushed against by all the surrounding particles. The container walls and the pressure-inducing surface (such as a piston) push against them in (Newtonian) reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Stress is frequently represented by a lowercase Greek letter sigma (σ).
Strain inside a material may arise by various mechanisms, such as stress applied by external forces to the bulk material (like gravity) or to its surface (like contact forces, external pressure, or friction). Any strain (deformation) of a solid material generates an internal elastic stress, analogous to the reaction force of a spring, that tends to restore the material to its original non-deformed state. In liquids and gases, only deformations that change the volume generate persistent elastic stress. If the deformation changes gradually with time, even in fluids there will usually be some viscous stress, opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress.
Significant stress may exist even when deformation is negligible or non-existent (a common assumption when modeling the flow of water). Stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition, or by external electromagnetic fields (as in piezoelectric and magnetostrictive materials).
The relation between mechanical stress, strain, and the strain rate can be quite complicated, although a linear approximation may be adequate in practice if the quantities are sufficiently small. Stress that exceeds certain strength limits of the material will result in permanent deformation (such as plastic flow, fracture, cavitation) or even change its crystal structure and chemical composition.
== History ==
Humans have known about stress inside materials since ancient times. Until the 17th century, this understanding was largely intuitive and empirical, though this did not prevent the development of relatively advanced technologies like the composite bow and glass blowing.
Over several millennia, architects and builders in particular learned how to put together carefully shaped wood beams and stone blocks to withstand, transmit, and distribute stress in the most effective manner, with ingenious devices such as the capitals, arches, cupolas, trusses and flying buttresses of Gothic cathedrals.
Ancient and medieval architects did develop some geometrical methods and simple formulas to compute the proper sizes of pillars and beams, but the scientific understanding of stress became possible only after the necessary tools were invented in the 17th and 18th centuries: Galileo Galilei's rigorous experimental method, René Descartes's coordinates and analytic geometry, and Newton's laws of motion and equilibrium and calculus of infinitesimals. With those tools, Augustin-Louis Cauchy was able to give the first rigorous and general mathematical model of a deformed elastic body by introducing the notions of stress and strain. Cauchy observed that the force across an imaginary surface was a linear function of its normal vector; and, moreover, that it must be a symmetric function (with zero total momentum).
The understanding of stress in liquids started with Newton, who provided a differential formula for friction forces (shear stress) in parallel laminar flow.
== Definition ==
Stress is defined as the force across a small boundary per unit area of that boundary, for all orientations of the boundary. Derived from a physical quantity (force) and a purely geometrical quantity (area), stress is also a physical quantity, like velocity, torque or energy, that can be quantified and analyzed without explicit consideration of the nature of the material or of its physical causes.
Following the basic premises of continuum mechanics, stress is a macroscopic concept. Namely, the particles considered in its definition and analysis should be just small enough to be treated as homogeneous in composition and state, but still large enough to ignore quantum effects and the detailed motions of molecules. Thus, the force between two particles is actually the average of a very large number of atomic forces between their molecules; and physical quantities like mass, velocity, and forces that act through the bulk of three-dimensional bodies, like gravity, are assumed to be smoothly distributed over them.: 90–106 Depending on the context, one may also assume that the particles are large enough to allow the averaging out of other microscopic features, like the grains of a metal rod or the fibers of a piece of wood.
Quantitatively, the stress is expressed by the Cauchy traction vector T defined as the traction force F between adjacent parts of the material across an imaginary separating surface S, divided by the area of S.: 41–50 In a fluid at rest the force is perpendicular to the surface, and is the familiar pressure. In a solid, or in a flow of viscous liquid, the force F may not be perpendicular to S; hence the stress across a surface must be regarded a vector quantity, not a scalar. Moreover, the direction and magnitude generally depend on the orientation of S. Thus the stress state of the material must be described by a tensor, called the (Cauchy) stress tensor; which is a linear function that relates the normal vector n of a surface S to the traction vector T across S. With respect to any chosen coordinate system, the Cauchy stress tensor can be represented as a symmetric matrix of 3×3 real numbers. Even within a homogeneous body, the stress tensor may vary from place to place, and may change over time; therefore, the stress within a material is, in general, a time-varying tensor field.
=== Normal and shear ===
In general, the stress T that a particle P applies on another particle Q across a surface S can have any direction relative to S. The vector T may be regarded as the sum of two components: the normal stress (compression or tension) perpendicular to the surface, and the shear stress that is parallel to the surface.
If the normal unit vector n of the surface (pointing from Q towards P) is assumed fixed, the normal component can be expressed by a single number, the dot product T · n. This number will be positive if P is "pulling" on Q (tensile stress), and negative if P is "pushing" against Q (compressive stress). The shear component is then the vector T − (T · n)n.
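The decomposition above is a one-line computation. A minimal sketch with hypothetical numbers (the traction vector, units, and normal are arbitrary choices for illustration):

```python
import numpy as np

# Decompose a traction vector T into its normal component (T . n) n and its
# shear component T - (T . n) n, for a unit surface normal n.
T = np.array([3.0, 4.0, 0.0])   # hypothetical traction, in MPa
n = np.array([1.0, 0.0, 0.0])   # unit normal pointing from Q towards P

sigma_n = T @ n                 # signed normal stress: positive means tension
shear = T - sigma_n * n         # in-plane shear part

assert sigma_n == 3.0                        # P is "pulling" on Q (tensile)
assert np.allclose(shear, [0.0, 4.0, 0.0])   # shear magnitude 4 MPa
assert np.isclose(shear @ n, 0.0)            # shear is tangent to the surface
```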
== Units ==
The dimension of stress is that of pressure, and therefore its coordinates are measured in the same units as pressure: namely, pascals (Pa, that is, newtons per square metre) in the International System, or pounds per square inch (psi) in the Imperial system. Because mechanical stresses easily exceed a million Pascals, MPa, which stands for megapascal, is a common unit of stress.
== Causes and effects ==
Stress in a material body may be due to multiple physical causes, including external influences and internal physical processes. Some of these agents (like gravity, changes in temperature and phase, and electromagnetic fields) act on the bulk of the material, varying continuously with position and time. Other agents (like external loads and friction, ambient pressure, and contact forces) may create stresses and forces that are concentrated on certain surfaces, lines or points; and possibly also on very short time intervals (as in the impulses due to collisions). In active matter, self-propulsion of microscopic particles generates macroscopic stress profiles. In general, the stress distribution in a body is expressed as a piecewise continuous function of space and time.
Conversely, stress is usually correlated with various effects on the material, possibly including changes in physical properties like birefringence, polarization, and permeability. The imposition of stress by an external agent usually creates some strain (deformation) in the material, even if it is too small to be detected. In a solid material, such strain will in turn generate an internal elastic stress, analogous to the reaction force of a stretched spring, tending to restore the material to its original undeformed state. Fluid materials (liquids, gases and plasmas) by definition can only oppose deformations that would change their volume. If the deformation changes with time, even in fluids there will usually be some viscous stress, opposing that change. Such stresses can be either shear or normal in nature. The molecular origin of shear stresses in fluids is given in the article on viscosity; that of normal viscous stresses can be found in Sharma (2019).
The relation between stress and its effects and causes, including deformation and rate of change of deformation, can be quite complicated (although a linear approximation may be adequate in practice if the quantities are small enough). Stress that exceeds certain strength limits of the material will result in permanent deformation (such as plastic flow, fracture, cavitation) or even change its crystal structure and chemical composition.
== Simple types ==
In some situations, the stress within a body may adequately be described by a single number, or by a single vector (a number and a direction). Three such simple stress situations, that are often encountered in engineering design, are the uniaxial normal stress, the simple shear stress, and the isotropic normal stress.
=== Uniaxial normal ===
A common situation with a simple stress pattern is when a straight rod, with uniform material and cross section, is subjected to tension by opposite forces of magnitude {\displaystyle F} along its axis. If the system is in equilibrium and not changing with time, and the weight of the bar can be neglected, then through each transversal section of the bar the top part must pull on the bottom part with the same force F, acting with continuity through the full cross-sectional area A. Therefore, the stress throughout the bar, across any horizontal surface, can be expressed by the single number σ, calculated from the magnitude of those forces, F, and the cross-sectional area, A:
{\displaystyle \sigma ={\frac {F}{A}}}
On the other hand, if one imagines the bar being cut along its length, parallel to the axis, there will be no force (hence no stress) between the two halves across the cut.
This type of stress may be called (simple) normal stress or uniaxial stress; specifically, (uniaxial, simple, etc.) tensile stress. If the load is compression on the bar, rather than stretching it, the analysis is the same except that the force F and the stress {\displaystyle \sigma } change sign, and the stress is called compressive stress.
This analysis assumes the stress is evenly distributed over the entire cross-section. In practice, depending on how the bar is attached at the ends and how it was manufactured, this assumption may not be valid. In that case, the value {\displaystyle \sigma =F/A} will be only the average stress, called engineering stress or nominal stress. If the bar's length L is many times its diameter D, and it has no gross defects or built-in stress, then the stress can be assumed to be uniformly distributed over any cross-section that is more than a few times D from both ends. (This observation is known as Saint-Venant's principle.)
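The engineering-stress formula is a direct calculation. A sketch with hypothetical numbers (rod diameter and load are invented for illustration):

```python
import math

# Engineering (nominal) stress in a rod under axial tension: sigma = F / A.
# Hypothetical numbers: a 10 mm diameter rod carrying 10 kN.
F = 10_000.0                  # axial force, N
d = 0.010                     # rod diameter, m
A = math.pi * d**2 / 4        # cross-sectional area, m^2

sigma = F / A                 # average stress, Pa
assert 127e6 < sigma < 128e6  # about 127 MPa
```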
Normal stress occurs in many other situations besides axial tension and compression. If an elastic bar with uniform and symmetric cross-section is bent in one of its planes of symmetry, the resulting bending stress will still be normal (perpendicular to the cross-section), but will vary over the cross section: the outer part will be under tensile stress, while the inner part will be compressed. Another variant of normal stress is the hoop stress that occurs on the walls of a cylindrical pipe or vessel filled with pressurized fluid.
=== Shear ===
Another simple type of stress occurs when a uniformly thick layer of elastic material like glue or rubber is firmly attached to two stiff bodies that are pulled in opposite directions by forces parallel to the layer; or when a section of a soft metal bar is being cut by the jaws of a scissors-like tool. Let F be the magnitude of those forces, and M the midplane of that layer. Just as in the normal stress case, the part of the layer on one side of M must pull the other part with the same force F. Assuming that the direction of the forces is known, the stress across M can be expressed by the single number {\displaystyle \tau }, calculated from the magnitude of those forces, F, and the cross-sectional area, A:
{\displaystyle \tau ={\frac {F}{A}}}
Unlike normal stress, this simple shear stress is directed parallel to the cross-section considered, rather than perpendicular to it. For any plane S that is perpendicular to the layer, the net internal force across S, and hence the stress, will be zero.
As in the case of an axially loaded bar, in practice the shear stress may not be uniformly distributed over the layer; so, as before, the ratio F/A will only be an average ("nominal", "engineering") stress. That average is often sufficient for practical purposes.: 292 Shear stress is observed also when a cylindrical bar such as a shaft is subjected to opposite torques at its ends. In that case, the shear stress on each cross-section is parallel to the cross-section, but oriented tangentially relative to the axis, and increases with distance from the axis. Significant shear stress occurs in the middle plate (the "web") of I-beams under bending loads, due to the web constraining the end plates ("flanges").
=== Isotropic ===
Another simple type of stress occurs when the material body is under equal compression or tension in all directions. This is the case, for example, in a portion of liquid or gas at rest, whether enclosed in some container or as part of a larger mass of fluid; or inside a cube of elastic material that is being pressed or pulled on all six faces by equal perpendicular forces — provided, in both cases, that the material is homogeneous, without built-in stress, and that the effect of gravity and other external forces can be neglected.
In these situations, the stress across any imaginary internal surface turns out to be equal in magnitude and always directed perpendicularly to the surface, independently of the surface's orientation. This type of stress may be called isotropic normal or just isotropic; if it is compressive, it is called hydrostatic pressure or just pressure. Gases by definition cannot withstand tensile stresses, but some liquids may withstand very large amounts of isotropic tensile stress under some circumstances (see Z-tube).
=== Cylinder ===
Parts with rotational symmetry, such as wheels, axles, pipes, and pillars, are very common in engineering. Often the stress patterns that occur in such parts have rotational or even cylindrical symmetry. The analysis of such cylinder stresses can take advantage of the symmetry to reduce the dimension of the domain and/or of the stress tensor.
== General types ==
Often, mechanical bodies experience more than one type of stress at the same time; this is called combined stress. In normal and shear stress, the magnitude of the stress is maximal for surfaces that are perpendicular to a certain direction {\displaystyle d}, and zero across any surfaces that are parallel to {\displaystyle d}. When the shear stress is zero only across surfaces that are perpendicular to one particular direction, the stress is called biaxial, and can be viewed as the sum of two normal or shear stresses. In the most general case, called triaxial stress, the stress is nonzero across every surface element.
== Cauchy tensor ==
Combined stresses cannot be described by a single vector. Even if the material is stressed in the same way throughout the volume of the body, the stress across any imaginary surface will depend on the orientation of that surface, in a non-trivial way.
Cauchy observed that the stress vector {\displaystyle T} across a surface will always be a linear function of the surface's normal vector {\displaystyle n}, the unit-length vector that is perpendicular to it. That is, {\displaystyle T={\boldsymbol {\sigma }}(n)}, where the function {\displaystyle {\boldsymbol {\sigma }}} satisfies
{\displaystyle {\boldsymbol {\sigma }}(\alpha u+\beta v)=\alpha {\boldsymbol {\sigma }}(u)+\beta {\boldsymbol {\sigma }}(v)}
for any vectors {\displaystyle u,v} and any real numbers {\displaystyle \alpha ,\beta }.
The function {\displaystyle {\boldsymbol {\sigma }}}, now called the (Cauchy) stress tensor, completely describes the stress state of a uniformly stressed body. (Today, any linear connection between two physical vector quantities is called a tensor, reflecting Cauchy's original use of the term to describe the "tensions" (stresses) in a material.) In tensor calculus, {\displaystyle {\boldsymbol {\sigma }}} is classified as a second-order tensor of type (0,2) or (1,1), depending on convention.
Like any linear map between vectors, the stress tensor can be represented in any chosen Cartesian coordinate system by a 3×3 matrix of real numbers. Depending on whether the coordinates are numbered {\displaystyle x_{1},x_{2},x_{3}} or named {\displaystyle x,y,z}, the matrix may be written as
{\displaystyle {\begin{bmatrix}\sigma _{11}&\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}&\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}\end{bmatrix}}}
or
{\displaystyle {\begin{bmatrix}\sigma _{xx}&\sigma _{xy}&\sigma _{xz}\\\sigma _{yx}&\sigma _{yy}&\sigma _{yz}\\\sigma _{zx}&\sigma _{zy}&\sigma _{zz}\\\end{bmatrix}}}
The stress vector {\displaystyle T={\boldsymbol {\sigma }}(n)} across a surface with normal vector {\displaystyle n} (a covariant, or "row", vector) with coordinates {\displaystyle n_{1},n_{2},n_{3}} is then the matrix product {\displaystyle T=n\cdot {\boldsymbol {\sigma }}}, yielding another covariant (row) vector (see Cauchy stress tensor); that is,
{\displaystyle {\begin{bmatrix}T_{1}&T_{2}&T_{3}\end{bmatrix}}={\begin{bmatrix}n_{1}&n_{2}&n_{3}\end{bmatrix}}\cdot {\begin{bmatrix}\sigma _{11}&\sigma _{21}&\sigma _{31}\\\sigma _{12}&\sigma _{22}&\sigma _{32}\\\sigma _{13}&\sigma _{23}&\sigma _{33}\end{bmatrix}}}
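This matrix product is straightforward to evaluate numerically. A sketch with a hypothetical symmetric stress state (the component values and units are invented for illustration):

```python
import numpy as np

# Traction vector from the Cauchy stress tensor: T = n . sigma
# (row vector times matrix), for a hypothetical stress state in MPa.
sigma = np.array([[50.0, 30.0,  0.0],
                  [30.0, -20.0, 0.0],
                  [ 0.0,  0.0, 10.0]])
assert np.allclose(sigma, sigma.T)  # symmetry (angular-momentum conservation)

n = np.array([1.0, 0.0, 0.0])       # unit normal of the cut surface
T = n @ sigma                       # traction across that surface

assert np.allclose(T, [50.0, 30.0, 0.0])
assert np.isclose(T @ n, 50.0)      # normal stress: 50 MPa of tension
```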
The linear relation between $T$ and $n$ follows from the fundamental laws of conservation of linear momentum and static equilibrium of forces, and is therefore mathematically exact, for any material and any stress situation. The components of the Cauchy stress tensor at every point in a material satisfy the equilibrium equations (Cauchy's equations of motion for zero acceleration). Moreover, the principle of conservation of angular momentum implies that the stress tensor is symmetric, that is
$\sigma_{12} = \sigma_{21}$, $\sigma_{13} = \sigma_{31}$, and $\sigma_{23} = \sigma_{32}$. Therefore, the stress state of the medium at any point and instant can be specified by only six independent parameters, rather than nine. These may be written
$$\begin{bmatrix}\sigma_x&\tau_{xy}&\tau_{xz}\\\tau_{xy}&\sigma_y&\tau_{yz}\\\tau_{xz}&\tau_{yz}&\sigma_z\end{bmatrix}$$
where the elements $\sigma_x, \sigma_y, \sigma_z$ are called the orthogonal normal stresses (relative to the chosen coordinate system), and $\tau_{xy}, \tau_{xz}, \tau_{yz}$ the orthogonal shear stresses.
=== Change of coordinates ===
The Cauchy stress tensor obeys the tensor transformation law under a change in the system of coordinates. A graphical representation of this transformation law is the Mohr's circle of stress distribution.
As a symmetric 3×3 real matrix, the stress tensor $\boldsymbol{\sigma}$ has three mutually orthogonal unit-length eigenvectors $e_1, e_2, e_3$ and three real eigenvalues $\lambda_1, \lambda_2, \lambda_3$, such that $\boldsymbol{\sigma} e_i = \lambda_i e_i$. Therefore, in a coordinate system with axes $e_1, e_2, e_3$, the stress tensor is a diagonal matrix, and has only the three normal components $\lambda_1, \lambda_2, \lambda_3$, called the principal stresses. If the three eigenvalues are equal, the stress is an isotropic compression or tension, always perpendicular to any surface; there is no shear stress, and the tensor is a diagonal matrix in any coordinate frame.
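Numerically, finding the principal stresses and directions is an ordinary symmetric eigenvalue problem; a short sketch (assuming NumPy, with made-up stress values):

```python
import numpy as np

# Symmetric Cauchy stress tensor (values in MPa, made up for illustration)
sigma = np.array([
    [50.0, 10.0,  0.0],
    [10.0, 20.0,  5.0],
    [ 0.0,  5.0, 30.0],
])

# eigh is for symmetric/Hermitian matrices: eigenvalues come back real and
# ascending, eigenvectors (the principal directions) orthonormal as columns.
principal_stresses, principal_directions = np.linalg.eigh(sigma)
print(principal_stresses)            # the three principal stresses
print(principal_directions[:, -1])   # direction of the largest one
```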
=== Tensor field ===
In general, stress is not uniformly distributed over a material body, and may vary with time. Therefore, the stress tensor must be defined for each point and each moment, by considering an infinitesimal particle of the medium surrounding that point, and taking the average stresses in that particle as being the stresses at the point.
=== Thin plates ===
Human-made objects are often made from stock plates of various materials by operations that do not change their essentially two-dimensional character, like cutting, drilling, gentle bending and welding along the edges. The description of stress in such bodies can be simplified by modeling those parts as two-dimensional surfaces rather than three-dimensional bodies.
In that view, one redefines a "particle" as being an infinitesimal patch of the plate's surface, so that the boundary between adjacent particles becomes an infinitesimal line element; both are implicitly extended in the third dimension, normal to (straight through) the plate. "Stress" is then redefined as being a measure of the internal forces between two adjacent "particles" across their common line element, divided by the length of that line. Some components of the stress tensor can be ignored, but since particles are not infinitesimal in the third dimension one can no longer ignore the torque that a particle applies on its neighbors. That torque is modeled as a bending stress that tends to change the curvature of the plate. These simplifications may not hold at welds, at sharp bends and creases (where the radius of curvature is comparable to the thickness of the plate).
=== Thin beams ===
The analysis of stress can be considerably simplified also for thin bars, beams or wires of uniform (or smoothly varying) composition and cross-section that are subjected to moderate bending and twisting. For those bodies, one may consider only cross-sections that are perpendicular to the bar's axis, and redefine a "particle" as being a piece of wire with infinitesimal length between two such cross sections. The ordinary stress is then reduced to a scalar (tension or compression of the bar), but one must take into account also a bending stress (that tries to change the bar's curvature, in some direction perpendicular to the axis) and a torsional stress (that tries to twist or un-twist it about its axis).
== Analysis ==
Stress analysis is a branch of applied physics that covers the determination of the distribution of internal forces in solid objects. It is an essential tool in engineering for the study and design of structures such as tunnels, dams, mechanical parts, and structural frames, under prescribed or expected loads. It is also important in many other disciplines; for example, in geology, to study phenomena like plate tectonics, volcanism and avalanches; and in biology, to understand the anatomy of living beings.
=== Goals and assumptions ===
Stress analysis is generally concerned with objects and structures that can be assumed to be in macroscopic static equilibrium. By Newton's laws of motion, any external forces applied to such a system must be balanced by internal reaction forces, which are almost always surface contact forces between adjacent particles, that is, stress. Since every particle needs to be in equilibrium, this reaction stress will generally propagate from particle to particle, creating a stress distribution throughout the body.
The typical problem in stress analysis is to determine these internal stresses, given the external forces that are acting on the system. The latter may be body forces (such as gravity or magnetic attraction), which act throughout the volume of a material, or concentrated loads (such as friction between an axle and a bearing, or the weight of a train wheel on a rail), which are imagined to act over a two-dimensional area, or along a line, or at a single point.
In stress analysis one normally disregards the physical causes of the forces or the precise nature of the materials. Instead, one assumes that the stresses are related to deformation (and, in non-static problems, to the rate of deformation) of the material by known constitutive equations.
=== Methods ===
Stress analysis may be carried out experimentally, by applying loads to the actual artifact or to a scale model, and measuring the resulting stresses, by any of several available methods. This approach is often used for safety certification and monitoring. However, most stress analysis is done by mathematical methods, especially during design.
The basic stress analysis problem can be formulated by Euler's equations of motion for continuous bodies (which are consequences of Newton's laws for conservation of linear momentum and angular momentum) and the Euler-Cauchy stress principle, together with the appropriate constitutive equations. Thus one obtains a system of partial differential equations involving the stress tensor field and the strain tensor field, as unknown functions to be determined. The external body forces appear as the independent ("right-hand side") term in the differential equations, while the concentrated forces appear as boundary conditions. The basic stress analysis problem is therefore a boundary-value problem.
Stress analysis for elastic structures is based on the theory of elasticity and infinitesimal strain theory. When the applied loads cause permanent deformation, one must use more complicated constitutive equations, that can account for the physical processes involved (plastic flow, fracture, phase change, etc.). Engineered structures are usually designed so the maximum expected stresses are well within the range of linear elasticity (the generalization of Hooke's law for continuous media); that is, the deformations caused by internal stresses are linearly related to them. In this case the differential equations that define the stress tensor are linear, and the problem becomes much easier. For one thing, the stress at any point will be a linear function of the loads, too. For small enough stresses, even non-linear systems can usually be assumed to be linear.
Stress analysis is simplified when the physical dimensions and the distribution of loads allow the structure to be treated as one- or two-dimensional. In the analysis of trusses, for example, the stress field may be assumed to be uniform and uniaxial over each member. Then the differential equations reduce to a finite set of equations (usually linear) with finitely many unknowns. In other contexts one may be able to reduce the three-dimensional problem to a two-dimensional one, and/or replace the general stress and strain tensors by simpler models like uniaxial tension/compression, simple shear, etc.
Still, for two- or three-dimensional cases one must solve a partial differential equation problem.
Analytical or closed-form solutions to the differential equations can be obtained when the geometry, constitutive relations, and boundary conditions are simple enough. Otherwise one must generally resort to numerical approximations such as the finite element method, the finite difference method, and the boundary element method.
== Measures ==
Other useful stress measures include the first and second Piola–Kirchhoff stress tensors, the Biot stress tensor, and the Kirchhoff stress tensor.
== See also ==
== References ==
== Further reading ==
The Lanczos algorithm is an iterative method devised by Cornelius Lanczos that is an adaptation of power methods to find the $m$ "most useful" (tending towards extreme highest/lowest) eigenvalues and eigenvectors of an $n \times n$ Hermitian matrix, where $m$ is often but not necessarily much smaller than $n$. Although computationally efficient in principle, the method as initially formulated was not useful, due to its numerical instability.
In 1970, Ojalvo and Newman showed how to make the method numerically stable and applied it to the solution of very large engineering structures subjected to dynamic loading. This was achieved using a method for purifying the Lanczos vectors (i.e. by repeatedly reorthogonalizing each newly generated vector with all previously generated ones) to any degree of accuracy; when this purification was not performed, the iteration produced a series of vectors that were highly contaminated by those associated with the lowest natural frequencies.
In their original work, these authors also suggested how to select a starting vector (i.e. use a random-number generator to select each element of the starting vector) and suggested an empirically determined method for determining $m$, the reduced number of vectors (i.e. it should be selected to be approximately 1.5 times the number of accurate eigenvalues desired). Soon thereafter their work was followed by Paige, who also provided an error analysis. In 1988, Ojalvo produced a more detailed history of this algorithm and an efficient eigenvalue error test.
== The algorithm ==
Input a Hermitian matrix $A$ of size $n \times n$, and optionally a number of iterations $m$ (as default, let $m = n$).
Strictly speaking, the algorithm does not need access to the explicit matrix, but only a function $v \mapsto Av$ that computes the product of the matrix by an arbitrary vector. This function is called at most $m$ times.
Output an $n \times m$ matrix $V$ with orthonormal columns and a tridiagonal real symmetric matrix $T = V^* A V$ of size $m \times m$. If $m = n$, then $V$ is unitary, and $A = V T V^*$.
Warning: The Lanczos iteration is prone to numerical instability. When executed in non-exact arithmetic, additional measures (as outlined in later sections) should be taken to ensure validity of the results.
Let $v_1 \in \mathbb{C}^n$ be an arbitrary vector with Euclidean norm $1$.
Abbreviated initial iteration step:
Let $w_1' = A v_1$.
Let $\alpha_1 = w_1'^* v_1$.
Let $w_1 = w_1' - \alpha_1 v_1$.
For $j = 2, \dots, m$ do:
Let $\beta_j = \|w_{j-1}\|$ (also the Euclidean norm).
If $\beta_j \neq 0$, then let $v_j = w_{j-1}/\beta_j$,
else pick as $v_j$ an arbitrary vector with Euclidean norm $1$ that is orthogonal to all of $v_1, \dots, v_{j-1}$.
Let $w_j' = A v_j - \beta_j v_{j-1}$.
Let $\alpha_j = w_j'^* v_j$.
Let $w_j = w_j' - \alpha_j v_j$.
Let $V$ be the matrix with columns $v_1, \dots, v_m$. Let
$$T = \begin{pmatrix}\alpha_1&\beta_2&&&&0\\\beta_2&\alpha_2&\beta_3&&&\\&\beta_3&\alpha_3&\ddots&&\\&&\ddots&\ddots&\beta_{m-1}&\\&&&\beta_{m-1}&\alpha_{m-1}&\beta_m\\0&&&&\beta_m&\alpha_m\end{pmatrix}.$$
Note: $A v_j = \beta_{j+1} v_{j+1} + \alpha_j v_j + \beta_j v_{j-1}$ for $2 < j < m$.
There are in principle four ways to write the iteration procedure. Paige and subsequent works show that the above order of operations is the most numerically stable.
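The iteration above can be sketched as follows (a minimal real-symmetric version, assuming NumPy, without the stability measures discussed in later sections):

```python
import numpy as np

def lanczos(A, m=None, seed=0):
    """Plain Lanczos iteration for a real symmetric A (a sketch: no
    reorthogonalization, so it is only reliable for small examples)."""
    n = A.shape[0]
    m = n if m is None else m
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)                 # v_1 with Euclidean norm 1
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)                     # beta[0] stays unused
    V[:, 0] = v
    w = A @ v                              # w'_1 = A v_1
    alpha[0] = w @ v
    w = w - alpha[0] * v                   # w_1 = w'_1 - alpha_1 v_1
    for j in range(1, m):
        beta[j] = np.linalg.norm(w)
        if beta[j] == 0:
            break                          # would need a fresh orthogonal vector
        V[:, j] = w / beta[j]
        w = A @ V[:, j] - beta[j] * V[:, j - 1]
        alpha[j] = w @ V[:, j]
        w = w - alpha[j] * V[:, j]
    return V, alpha, beta

A = np.diag([1.0, 2.0, 3.0, 10.0])
V, alpha, beta = lanczos(A)
T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(beta[1:], -1)
# With m = n, T has the same eigenvalues as A (up to rounding).
print(np.linalg.eigvalsh(T))
```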
In practice the initial vector $v_1$ may be taken as another argument of the procedure, with $\beta_j = 0$ and indicators of numerical imprecision being included as additional loop termination conditions.
Not counting the matrix–vector multiplication, each iteration does $O(n)$ arithmetical operations. The matrix–vector multiplication can be done in $O(dn)$ arithmetical operations, where $d$ is the average number of nonzero elements in a row. The total complexity is thus $O(dmn)$, or $O(dn^2)$ if $m = n$; the Lanczos algorithm can be very fast for sparse matrices. Schemes for improving numerical stability are typically judged against this high performance.
The vectors $v_j$ are called Lanczos vectors. The vector $w_j'$ is not used after $w_j$ is computed, and the vector $w_j$ is not used after $v_{j+1}$ is computed. Hence one may use the same storage for all three. Likewise, if only the tridiagonal matrix $T$ is sought, then the raw iteration does not need $v_{j-1}$ after having computed $w_j$, although some schemes for improving the numerical stability would need it later on. Sometimes the subsequent Lanczos vectors are recomputed from $v_1$ when needed.
=== Application to the eigenproblem ===
The Lanczos algorithm is most often brought up in the context of finding the eigenvalues and eigenvectors of a matrix, but whereas an ordinary diagonalization of a matrix would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition.
If $\lambda$ is an eigenvalue of $T$, and $x$ its eigenvector ($Tx = \lambda x$), then $y = Vx$ is a corresponding eigenvector of $A$ with the same eigenvalue:
$$\begin{aligned}Ay&=AVx\\&=VTV^{*}Vx\\&=VTIx\\&=VTx\\&=V(\lambda x)\\&=\lambda Vx\\&=\lambda y.\end{aligned}$$
Thus the Lanczos algorithm transforms the eigendecomposition problem for $A$ into the eigendecomposition problem for $T$.
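This transfer of eigenpairs can be checked numerically; the small sketch below (assuming NumPy, with a hand-made tridiagonal $T$ and a random orthogonal $V$ standing in for the Lanczos output in the $m = n$ case) verifies that $y = Vx$ is an eigenvector of $A = VTV^*$:

```python
import numpy as np

rng = np.random.default_rng(1)

# A small tridiagonal symmetric T, of the kind the Lanczos iteration produces
alpha = np.array([2.0, 3.0, 4.0])              # diagonal
beta = np.array([1.0, 0.5])                    # off-diagonals
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# Any V with orthonormal columns such that A = V T V* (the m = n case)
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = V @ T @ V.T

lam, X = np.linalg.eigh(T)                     # eigenpairs of the small T
y = V @ X[:, 0]                                # lift an eigenvector of T to A
assert np.allclose(A @ y, lam[0] * y)          # y is an eigenvector of A
```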
For tridiagonal matrices, there exist a number of specialised algorithms, often with better computational complexity than general-purpose algorithms. For example, if $T$ is an $m \times m$ tridiagonal symmetric matrix then:
The continuant recursion allows computing the characteristic polynomial in $O(m^2)$ operations, and evaluating it at a point in $O(m)$ operations.
The divide-and-conquer eigenvalue algorithm can be used to compute the entire eigendecomposition of $T$ in $O(m^2)$ operations.
The Fast Multipole Method can compute all eigenvalues in just $O(m \log m)$ operations.
Some general eigendecomposition algorithms, notably the QR algorithm, are known to converge faster for tridiagonal matrices than for general matrices. The asymptotic complexity of tridiagonal QR is $O(m^2)$, just as for the divide-and-conquer algorithm (though the constant factor may be different); since the eigenvectors together have $m^2$ elements, this is asymptotically optimal.
Even algorithms whose convergence rates are unaffected by unitary transformations, such as the power method and inverse iteration, may enjoy low-level performance benefits from being applied to the tridiagonal matrix $T$ rather than the original matrix $A$. Since $T$ is very sparse with all nonzero elements in highly predictable positions, it permits compact storage with excellent performance vis-à-vis caching. Likewise, $T$ is a real matrix with all eigenvectors and eigenvalues real, whereas $A$ in general may have complex elements and eigenvectors, so real arithmetic is sufficient for finding the eigenvectors and eigenvalues of $T$.
If $n$ is very large, then reducing $m$ so that $T$ is of a manageable size will still allow finding the more extreme eigenvalues and eigenvectors of $A$; in the $m \ll n$ region, the Lanczos algorithm can be viewed as a lossy compression scheme for Hermitian matrices that emphasises preserving the extreme eigenvalues.
The combination of good performance for sparse matrices and the ability to compute several (without computing all) eigenvalues are the main reasons for choosing to use the Lanczos algorithm.
=== Application to tridiagonalization ===
Though the eigenproblem is often the motivation for applying the Lanczos algorithm, the operation the algorithm primarily performs is tridiagonalization of a matrix, for which numerically stable Householder transformations have been favoured since the 1950s. During the 1960s the Lanczos algorithm was disregarded. Interest in it was rejuvenated by the Kaniel–Paige convergence theory and the development of methods to prevent numerical instability, but the Lanczos algorithm remains the alternative algorithm that one tries only if Householder is not satisfactory.
Aspects in which the two algorithms differ include:
Lanczos takes advantage of $A$ being a sparse matrix, whereas Householder does not, and will generate fill-in.
Lanczos works throughout with the original matrix $A$ (and has no problem with it being known only implicitly), whereas raw Householder wants to modify the matrix during the computation (although that can be avoided).
Each iteration of the Lanczos algorithm produces another column of the final transformation matrix $V$, whereas an iteration of Householder produces another factor in a unitary factorisation $Q_1 Q_2 \dots Q_n$ of $V$. Each factor is however determined by a single vector, so the storage requirements are the same for both algorithms, and $V = Q_1 Q_2 \dots Q_n$ can be computed in $O(n^3)$ time.
Householder is numerically stable, whereas raw Lanczos is not.
Lanczos is highly parallel, with only $O(n)$ points of synchronisation (the computations of $\alpha_j$ and $\beta_j$). Householder is less parallel, having a sequence of $O(n^2)$ scalar quantities computed, each of which depends on the previous quantity in the sequence.
== Derivation of the algorithm ==
There are several lines of reasoning which lead to the Lanczos algorithm.
=== A more provident power method ===
The power method for finding the eigenvalue of largest magnitude and a corresponding eigenvector of a matrix $A$ is roughly:
Pick a random vector $u_1 \neq 0$.
For $j \geqslant 1$ (until the direction of $u_j$ has converged) do:
Let $u_{j+1}' = A u_j$.
Let $u_{j+1} = u_{j+1}' / \|u_{j+1}'\|$.
In the large $j$ limit, $u_j$ approaches the normed eigenvector corresponding to the largest magnitude eigenvalue.
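A minimal sketch of this iteration (assuming NumPy, using the Rayleigh quotient as the eigenvalue estimate):

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    """Plain power iteration: returns an approximate dominant eigenpair."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(A.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        w = A @ u                 # u'_{j+1} = A u_j
        u = w / np.linalg.norm(w) # renormalise
    return u @ A @ u, u           # Rayleigh quotient, eigenvector estimate

A = np.diag([1.0, 2.0, 9.0])      # made-up symmetric test matrix
lam, u = power_method(A)
print(round(lam, 6))  # -> 9.0
```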
A critique that can be raised against this method is that it is wasteful: it spends a lot of work (the matrix–vector products in step 2.1) extracting information from the matrix $A$, but pays attention only to the very last result; implementations typically use the same variable for all the vectors $u_j$, having each new iteration overwrite the results from the previous one. It may be desirable instead to keep all the intermediate results and organise the data.
One piece of information that is trivially available from the vectors $u_j$ is a chain of Krylov subspaces. One way of stating that without introducing sets into the algorithm is to claim that it computes
a subset $\{v_j\}_{j=1}^m$ of a basis of $\mathbb{C}^n$ such that $Ax \in \operatorname{span}(v_1, \dotsc, v_{j+1})$ for every $x \in \operatorname{span}(v_1, \dotsc, v_j)$ and all $1 \leqslant j < m$;
this is trivially satisfied by $v_j = u_j$ as long as $u_j$ is linearly independent of $u_1, \dotsc, u_{j-1}$ (and in the case that there is such a dependence then one may continue the sequence by picking as $v_j$ an arbitrary vector linearly independent of $u_1, \dotsc, u_{j-1}$). A basis containing the $u_j$ vectors is however likely to be numerically ill-conditioned, since this sequence of vectors is by design meant to converge to an eigenvector of $A$. To avoid that, one can combine the power iteration with a Gram–Schmidt process, to instead produce an orthonormal basis of these Krylov subspaces.
Pick a random vector $u_1$ of Euclidean norm $1$. Let $v_1 = u_1$.
For $j = 1, \dotsc, m-1$ do:
Let $u_{j+1}' = A u_j$.
For all $k = 1, \dotsc, j$ let $g_{k,j} = v_k^* u_{j+1}'$. (These are the coordinates of $A u_j = u_{j+1}'$ with respect to the basis vectors $v_1, \dotsc, v_j$.)
Let $w_{j+1} = u_{j+1}' - \sum_{k=1}^j g_{k,j} v_k$. (Cancel the component of $u_{j+1}'$ that is in $\operatorname{span}(v_1, \dotsc, v_j)$.)
If $w_{j+1} \neq 0$ then let $u_{j+1} = u_{j+1}' / \|u_{j+1}'\|$ and $v_{j+1} = w_{j+1} / \|w_{j+1}\|$,
otherwise pick as $u_{j+1} = v_{j+1}$ an arbitrary vector of Euclidean norm $1$ that is orthogonal to all of $v_1, \dotsc, v_j$.
The relation between the power iteration vectors $u_j$ and the orthogonal vectors $v_j$ is that
$$A u_j = \|u_{j+1}'\| u_{j+1} = u_{j+1}' = w_{j+1} + \sum_{k=1}^j g_{k,j} v_k = \|w_{j+1}\| v_{j+1} + \sum_{k=1}^j g_{k,j} v_k.$$
Here it may be observed that we do not actually need the $u_j$ vectors to compute these $v_j$, because $u_j - v_j \in \operatorname{span}(v_1, \dotsc, v_{j-1})$ and therefore the difference between $u_{j+1}' = A u_j$ and $w_{j+1}' = A v_j$ is in $\operatorname{span}(v_1, \dotsc, v_j)$, which is cancelled out by the orthogonalisation process. Thus the same basis for the chain of Krylov subspaces is computed by
Pick a random vector $v_1$ of Euclidean norm $1$.
For $j = 1, \dotsc, m-1$ do:
Let $w_{j+1}' = A v_j$.
For all $k = 1, \dotsc, j$ let $h_{k,j} = v_k^* w_{j+1}'$.
Let $w_{j+1} = w_{j+1}' - \sum_{k=1}^j h_{k,j} v_k$.
Let $h_{j+1,j} = \|w_{j+1}\|$.
If $h_{j+1,j} \neq 0$ then let $v_{j+1} = w_{j+1} / h_{j+1,j}$,
otherwise pick as $v_{j+1}$ an arbitrary vector of Euclidean norm $1$ that is orthogonal to all of $v_1, \dotsc, v_j$.
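This orthogonalising procedure can be sketched compactly (assuming NumPy; the modified Gram–Schmidt update used below is equivalent to the text's formulas in exact arithmetic):

```python
import numpy as np

def orthonormal_krylov(A, m, seed=0):
    """Builds an orthonormal Krylov basis V and the coefficients
    h[k, j] = v_k* (A v_j), upper Hessenberg by construction."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    V = np.zeros((n, m))
    H = np.zeros((m, m))
    v1 = rng.standard_normal(n)
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(m - 1):
        w = A @ V[:, j]                      # w'_{j+1} = A v_j
        for k in range(j + 1):               # Gram-Schmidt against v_1..v_j
            H[k, j] = V[:, k] @ w
            w = w - H[k, j] * V[:, k]
        H[j + 1, j] = np.linalg.norm(w)      # h_{j+1,j} = ||w_{j+1}||
        if H[j + 1, j] == 0:
            break                            # invariant subspace reached
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

A = np.diag([1.0, 2.0, 3.0, 4.0])            # Hermitian test matrix
V, H = orthonormal_krylov(A, 4)
# Columns of V are orthonormal; for Hermitian A the coefficients also
# vanish above the first superdiagonal, as derived below.
assert np.allclose(V.T @ V, np.eye(4), atol=1e-10)
```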
A priori the coefficients $h_{k,j}$ satisfy $A v_j = \sum_{k=1}^{j+1} h_{k,j} v_k$ for all $j < m$;
the definition $h_{j+1,j} = \|w_{j+1}\|$ may seem a bit odd, but fits the general pattern $h_{k,j} = v_k^* w_{j+1}'$ since
$$v_{j+1}^* w_{j+1}' = v_{j+1}^* w_{j+1} = \|w_{j+1}\| v_{j+1}^* v_{j+1} = \|w_{j+1}\|.$$
Because the power iteration vectors $u_j$ that were eliminated from this recursion satisfy $u_j \in \operatorname{span}(v_1, \ldots, v_j)$, the vectors $\{v_j\}_{j=1}^m$ and coefficients $h_{k,j}$ contain enough information from $A$ that all of $u_1, \ldots, u_m$ can be computed, so nothing was lost by switching vectors. (Indeed, it turns out that the data collected here give significantly better approximations of the largest eigenvalue than one gets from an equal number of iterations in the power method, although that is not necessarily obvious at this point.)
This last procedure is the Arnoldi iteration. The Lanczos algorithm then arises as the simplification one gets from eliminating calculation steps that turn out to be trivial when $A$ is Hermitian; in particular, most of the $h_{k,j}$ coefficients turn out to be zero.
Elementarily, if $A$ is Hermitian then
$$h_{k,j} = v_k^* w_{j+1}' = v_k^* A v_j = v_k^* A^* v_j = (A v_k)^* v_j.$$
For $k < j - 1$ we know that $A v_k \in \operatorname{span}(v_1, \ldots, v_{j-1})$, and since $v_j$ by construction is orthogonal to this subspace, this inner product must be zero. (This is essentially also the reason why sequences of orthogonal polynomials can always be given a three-term recurrence relation.) For $k = j - 1$ one gets
$$h_{j-1,j} = (A v_{j-1})^* v_j = \overline{v_j^* A v_{j-1}} = \overline{h_{j,j-1}} = h_{j,j-1}$$
since the latter is real on account of being the norm of a vector. For $k = j$ one gets
$$h_{j,j} = (A v_j)^* v_j = \overline{v_j^* A v_j} = \overline{h_{j,j}},$$
meaning this is real too.
More abstractly, if $V$ is the matrix with columns $v_1, \ldots, v_m$ then the numbers $h_{k,j}$ can be identified as elements of the matrix $H = V^* A V$, and $h_{k,j} = 0$ for $k > j + 1$; the matrix $H$ is upper Hessenberg. Since
$$H^* = (V^* A V)^* = V^* A^* V = V^* A V = H,$$
the matrix $H$ is Hermitian. This implies that $H$ is also lower Hessenberg, so it must in fact be tridiagonal. Being Hermitian, its main diagonal is real, and since its first subdiagonal is real by construction, the same is true for its first superdiagonal. Therefore, $H$ is a real, symmetric matrix: the matrix $T$ of the Lanczos algorithm specification.
=== Simultaneous approximation of extreme eigenvalues ===
One way of characterising the eigenvectors of a Hermitian matrix $A$ is as stationary points of the Rayleigh quotient
$$r(x) = \frac{x^* A x}{x^* x}, \qquad x \in \mathbb{C}^n.$$
In particular, the largest eigenvalue $\lambda_{\max}$ is the global maximum of $r$ and the smallest eigenvalue $\lambda_{\min}$ is the global minimum of $r$.
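A quick numerical illustration of the Rayleigh quotient and its extremal property (assuming NumPy, with a made-up diagonal matrix):

```python
import numpy as np

def rayleigh(A, x):
    """Rayleigh quotient r(x) = x* A x / x* x."""
    return (np.vdot(x, A @ x) / np.vdot(x, x)).real

A = np.diag([1.0, 4.0, 7.0])   # made-up Hermitian matrix
# At an eigenvector the quotient equals the corresponding eigenvalue...
assert np.isclose(rayleigh(A, np.array([0.0, 0.0, 1.0])), 7.0)
# ...and for any nonzero x it lies between lambda_min and lambda_max.
x = np.array([1.0, 1.0, 1.0])
print(rayleigh(A, x))  # -> 4.0
```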
Within a low-dimensional subspace $\mathcal{L}$ of $\mathbb{C}^n$ it can be feasible to locate the maximum $x$ and minimum $y$ of $r$. Repeating that for an increasing chain $\mathcal{L}_1 \subset \mathcal{L}_2 \subset \cdots$ produces two sequences of vectors $x_1, x_2, \ldots$ and $y_1, y_2, \dotsc$ such that $x_j, y_j \in \mathcal{L}_j$ and
$$\begin{aligned}r(x_1)&\leqslant r(x_2)\leqslant \cdots \leqslant \lambda_{\max}\\r(y_1)&\geqslant r(y_2)\geqslant \cdots \geqslant \lambda_{\min}\end{aligned}$$
The question then arises how to choose the subspaces so that these sequences converge at optimal rate.
From $x_{j}$, the optimal direction in which to seek larger values of $r$ is that of the gradient $\nabla r(x_{j})$, and likewise from $y_{j}$ the optimal direction in which to seek smaller values of $r$ is that of the negative gradient $-\nabla r(y_{j})$. In general

$$\nabla r(x)=\frac{2}{x^{*}x}\left(Ax-r(x)\,x\right),$$

so the directions of interest are easy enough to compute in matrix arithmetic, but if one wishes to improve on both $x_{j}$ and $y_{j}$ then there are two new directions to take into account: $Ax_{j}$ and $Ay_{j}$; since $x_{j}$ and $y_{j}$ can be linearly independent vectors (indeed, are close to orthogonal), one cannot in general expect $Ax_{j}$ and $Ay_{j}$ to be parallel. It is not necessary to increase the dimension of $\mathcal{L}_{j}$ by $2$ on every step if $\{\mathcal{L}_{j}\}_{j=1}^{m}$ are taken to be Krylov subspaces, because then $Az\in \mathcal{L}_{j+1}$ for all $z\in \mathcal{L}_{j}$, thus in particular for both $z=x_{j}$ and $z=y_{j}$.
In other words, we can start with some arbitrary initial vector $x_{1}=y_{1}$, construct the vector spaces

$$\mathcal{L}_{j}=\operatorname{span}(x_{1},Ax_{1},\ldots ,A^{j-1}x_{1})$$

and then seek $x_{j},y_{j}\in \mathcal{L}_{j}$ such that

$$r(x_{j})=\max_{z\in \mathcal{L}_{j}}r(z)\qquad \text{and}\qquad r(y_{j})=\min_{z\in \mathcal{L}_{j}}r(z).$$

Since the $j$th power method iterate $u_{j}$ belongs to $\mathcal{L}_{j}$, it follows that an iteration to produce the $x_{j}$ and $y_{j}$ cannot converge slower than that of the power method, and will achieve more by approximating both eigenvalue extremes. For the subproblem of optimising $r$ on some $\mathcal{L}_{j}$, it is convenient to have an orthonormal basis $\{v_{1},\ldots ,v_{j}\}$ for this vector space. Thus we are again led to the problem of iteratively computing such a basis for the sequence of Krylov subspaces.
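As a concrete sketch of this idea (not an efficient Lanczos implementation; the test matrix, its size, and the starting vector are arbitrary choices), one can build the orthonormal Krylov basis column by column and read off the extreme Rayleigh-quotient values from the projected matrix:

```python
import numpy as np

def krylov_extremes(A, x1, m):
    """Extreme Rayleigh-quotient values of A over the Krylov subspaces K_1..K_m.

    Builds an orthonormal basis of span{x1, A x1, ..., A^(j-1) x1} column by
    column and returns, for each j, the (min, max) eigenvalue of the projected
    matrix V* A V, i.e. the extreme values of r on K_j.
    """
    n = A.shape[0]
    V = np.zeros((n, m))
    V[:, 0] = x1 / np.linalg.norm(x1)
    out = []
    for j in range(1, m + 1):
        theta = np.linalg.eigvalsh(V[:, :j].T @ A @ V[:, :j])
        out.append((theta[0], theta[-1]))
        if j < m:
            w = A @ V[:, j - 1]
            for _ in range(2):  # two Gram-Schmidt passes for numerical safety
                w -= V[:, :j] @ (V[:, :j].T @ w)
            V[:, j] = w / np.linalg.norm(w)
    return out

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 2                         # symmetric test matrix (arbitrary)
ext = krylov_extremes(A, rng.standard_normal(50), 40)
lo, hi = np.linalg.eigvalsh(A)[[0, -1]]
print(ext[-1])  # approaches (lo, hi), the extreme eigenvalues of A
```

The two sequences of extreme values are monotone in $j$, as the nesting of the subspaces guarantees.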
== Convergence and other dynamics ==
When analysing the dynamics of the algorithm, it is convenient to take the eigenvalues and eigenvectors of $A$ as given, even though they are not explicitly known to the user. To fix notation, let $\lambda_{1}\geqslant \lambda_{2}\geqslant \dotsb \geqslant \lambda_{n}$ be the eigenvalues (these are known to all be real, and thus possible to order) and let $z_{1},\dotsc ,z_{n}$ be an orthonormal set of eigenvectors such that $Az_{k}=\lambda_{k}z_{k}$ for all $k=1,\dotsc ,n$.
It is also convenient to fix a notation for the coefficients of the initial Lanczos vector $v_{1}$ with respect to this eigenbasis; let $d_{k}=z_{k}^{*}v_{1}$ for all $k=1,\dotsc ,n$, so that $v_{1}=\sum_{k=1}^{n}d_{k}z_{k}$. A starting vector $v_{1}$ depleted of some eigencomponent will delay convergence to the corresponding eigenvalue, and even though this just comes out as a constant factor in the error bounds, depletion remains undesirable. One common technique for avoiding being consistently hit by it is to pick $v_{1}$ by first drawing the elements randomly according to the same normal distribution with mean $0$ and then rescaling the vector to norm $1$. Prior to the rescaling, this causes the coefficients $d_{k}$ to also be independent normally distributed stochastic variables from the same normal distribution (since the change of coordinates is unitary), and after rescaling the vector $(d_{1},\dotsc ,d_{n})$ will have a uniform distribution on the unit sphere in $\mathbb{C}^{n}$. This makes it possible to bound the probability that for example $|d_{1}|<\varepsilon$.
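A quick numerical check of this (the dimension, sample count, and threshold below are arbitrary, and the real case is used for simplicity): drawing i.i.d. normal entries and rescaling gives a coefficient vector that is uniform on the unit sphere, so the probability of a depleted $|d_{1}|$ can be estimated by simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials, eps = 100, 20000, 0.01

# Draw each v1 with i.i.d. N(0,1) entries and rescale to unit norm; by unitary
# invariance the coefficient d_1 in any orthonormal eigenbasis is distributed
# like the first coordinate of a uniform point on the unit sphere.
samples = rng.standard_normal((trials, n))
d1 = samples[:, 0] / np.linalg.norm(samples, axis=1)
print(np.mean(np.abs(d1) < eps))  # probability of |d_1| < 0.01; roughly 0.08 for n = 100
```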
The fact that the Lanczos algorithm is coordinate-agnostic – operations only look at inner products of vectors, never at individual elements of vectors – makes it easy to construct examples with known eigenstructure to run the algorithm on: make $A$ a diagonal matrix with the desired eigenvalues on the diagonal; as long as the starting vector $v_{1}$ has enough nonzero elements, the algorithm will output a general tridiagonal symmetric matrix as $T$.
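A minimal sketch of such a test (the spectrum, sizes, and seed are arbitrary choices), using the standard three-term Lanczos recurrence:

```python
import numpy as np

def lanczos(A, v1, m):
    """Plain Lanczos (no reorthogonalization): returns the m x m tridiagonal T."""
    n = A.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w          # diagonal entry of T
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)  # off-diagonal entry of T
            V[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

A = np.diag(np.linspace(1.0, 10.0, 40))  # prescribed spectrum on the diagonal
rng = np.random.default_rng(1)
T = lanczos(A, rng.standard_normal(40), 20)
theta = np.linalg.eigvalsh(T)
print(theta[-1])  # largest Ritz value, close to the prescribed maximum 10.0
```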
=== Kaniel–Paige convergence theory ===
After $m$ iteration steps of the Lanczos algorithm, $T$ is an $m\times m$ real symmetric matrix, which similarly to the above has $m$ eigenvalues $\theta_{1}\geqslant \theta_{2}\geqslant \dots \geqslant \theta_{m}$. By convergence is primarily understood the convergence of $\theta_{1}$ to $\lambda_{1}$ (and the symmetrical convergence of $\theta_{m}$ to $\lambda_{n}$) as $m$ grows, and secondarily the convergence of some range $\theta_{1},\ldots ,\theta_{k}$ of eigenvalues of $T$ to their counterparts $\lambda_{1},\ldots ,\lambda_{k}$ of $A$. The convergence for the Lanczos algorithm is often orders of magnitude faster than that for the power iteration algorithm.
The bounds for $\theta_{1}$ come from the above interpretation of eigenvalues as extreme values of the Rayleigh quotient $r(x)$. Since $\lambda_{1}$ is a priori the maximum of $r$ on the whole of $\mathbb{C}^{n}$, whereas $\theta_{1}$ is merely the maximum on an $m$-dimensional Krylov subspace, we trivially get $\lambda_{1}\geqslant \theta_{1}$. Conversely, any point $x$ in that Krylov subspace provides a lower bound $r(x)$ for $\theta_{1}$, so if a point can be exhibited for which $\lambda_{1}-r(x)$ is small then this provides a tight bound on $\theta_{1}$.
The dimension-$m$ Krylov subspace is

$$\operatorname{span}\left\{v_{1},Av_{1},A^{2}v_{1},\ldots ,A^{m-1}v_{1}\right\},$$

so any element of it can be expressed as $p(A)v_{1}$ for some polynomial $p$ of degree at most $m-1$; the coefficients of that polynomial are simply the coefficients in the linear combination of the vectors $v_{1},Av_{1},A^{2}v_{1},\ldots ,A^{m-1}v_{1}$. The polynomial we want will turn out to have real coefficients, but for the moment we should allow also for complex coefficients, and we will write $p^{*}$ for the polynomial obtained by complex conjugating all coefficients of $p$. In this parametrisation of the Krylov subspace, we have

$$r(p(A)v_{1})={\frac {(p(A)v_{1})^{*}Ap(A)v_{1}}{(p(A)v_{1})^{*}p(A)v_{1}}}={\frac {v_{1}^{*}p(A)^{*}Ap(A)v_{1}}{v_{1}^{*}p(A)^{*}p(A)v_{1}}}={\frac {v_{1}^{*}p^{*}(A^{*})Ap(A)v_{1}}{v_{1}^{*}p^{*}(A^{*})p(A)v_{1}}}={\frac {v_{1}^{*}p^{*}(A)Ap(A)v_{1}}{v_{1}^{*}p^{*}(A)p(A)v_{1}}}.$$
Using now the expression for $v_{1}$ as a linear combination of eigenvectors, we get

$$Av_{1}=A\sum_{k=1}^{n}d_{k}z_{k}=\sum_{k=1}^{n}d_{k}\lambda_{k}z_{k}$$

and more generally

$$q(A)v_{1}=\sum_{k=1}^{n}d_{k}q(\lambda_{k})z_{k}$$

for any polynomial $q$.
Thus

$$\lambda_{1}-r(p(A)v_{1})=\lambda_{1}-{\frac {v_{1}^{*}\sum_{k=1}^{n}d_{k}p^{*}(\lambda_{k})\lambda_{k}p(\lambda_{k})z_{k}}{v_{1}^{*}\sum_{k=1}^{n}d_{k}p^{*}(\lambda_{k})p(\lambda_{k})z_{k}}}=\lambda_{1}-{\frac {\sum_{k=1}^{n}|d_{k}|^{2}\lambda_{k}p(\lambda_{k})^{*}p(\lambda_{k})}{\sum_{k=1}^{n}|d_{k}|^{2}p(\lambda_{k})^{*}p(\lambda_{k})}}={\frac {\sum_{k=1}^{n}|d_{k}|^{2}(\lambda_{1}-\lambda_{k})\left|p(\lambda_{k})\right|^{2}}{\sum_{k=1}^{n}|d_{k}|^{2}\left|p(\lambda_{k})\right|^{2}}}.$$

A key difference between numerator and denominator here is that the $k=1$ term vanishes in the numerator, but not in the denominator. Thus if one can pick $p$ to be large at $\lambda_{1}$ but small at all other eigenvalues, one will get a tight bound on the error $\lambda_{1}-\theta_{1}$.
Since $A$ has many more eigenvalues than $p$ has coefficients, this may seem a tall order, but one way to meet it is to use Chebyshev polynomials. Writing $c_{k}$ for the degree-$k$ Chebyshev polynomial of the first kind (that which satisfies $c_{k}(\cos x)=\cos(kx)$ for all $x$), we have a polynomial which stays in the range $[-1,1]$ on the known interval $[-1,1]$ but grows rapidly outside it. With some scaling of the argument, we can have it map all eigenvalues except $\lambda_{1}$ into $[-1,1]$. Let

$$p(x)=c_{m-1}\left({\frac {2x-\lambda_{2}-\lambda_{n}}{\lambda_{2}-\lambda_{n}}}\right)$$

(in case $\lambda_{2}=\lambda_{1}$, use instead the largest eigenvalue strictly less than $\lambda_{1}$), then the maximal value of $|p(\lambda_{k})|^{2}$ for $k\geqslant 2$ is $1$ and the minimal value is $0$, so

$$\lambda_{1}-\theta_{1}\leqslant \lambda_{1}-r(p(A)v_{1})={\frac {\sum_{k=2}^{n}|d_{k}|^{2}(\lambda_{1}-\lambda_{k})|p(\lambda_{k})|^{2}}{\sum_{k=1}^{n}|d_{k}|^{2}|p(\lambda_{k})|^{2}}}\leqslant {\frac {\sum_{k=2}^{n}|d_{k}|^{2}(\lambda_{1}-\lambda_{k})}{|d_{1}|^{2}|p(\lambda_{1})|^{2}}}\leqslant {\frac {(\lambda_{1}-\lambda_{n})\sum_{k=2}^{n}|d_{k}|^{2}}{|p(\lambda_{1})|^{2}|d_{1}|^{2}}}.$$
Furthermore

$$p(\lambda_{1})=c_{m-1}\left({\frac {2\lambda_{1}-\lambda_{2}-\lambda_{n}}{\lambda_{2}-\lambda_{n}}}\right)=c_{m-1}\left(2{\frac {\lambda_{1}-\lambda_{2}}{\lambda_{2}-\lambda_{n}}}+1\right);$$

the quantity

$$\rho ={\frac {\lambda_{1}-\lambda_{2}}{\lambda_{2}-\lambda_{n}}}$$

(i.e., the ratio of the first eigengap to the diameter of the rest of the spectrum) is thus of key importance for the convergence rate here. Also writing

$$R=e^{\operatorname{arcosh}(1+2\rho )}=1+2\rho +2{\sqrt {\rho ^{2}+\rho }},$$
we may conclude that

$$\begin{aligned}\lambda_{1}-\theta_{1}&\leqslant {\frac {(\lambda_{1}-\lambda_{n})\left(1-|d_{1}|^{2}\right)}{c_{m-1}(2\rho +1)^{2}|d_{1}|^{2}}}\\[6pt]&={\frac {1-|d_{1}|^{2}}{|d_{1}|^{2}}}(\lambda_{1}-\lambda_{n}){\frac {1}{\cosh ^{2}((m-1)\operatorname{arcosh}(1+2\rho ))}}\\[6pt]&={\frac {1-|d_{1}|^{2}}{|d_{1}|^{2}}}(\lambda_{1}-\lambda_{n}){\frac {4}{\left(R^{m-1}+R^{-(m-1)}\right)^{2}}}\\[6pt]&\leqslant 4{\frac {1-|d_{1}|^{2}}{|d_{1}|^{2}}}(\lambda_{1}-\lambda_{n})R^{-2(m-1)}.\end{aligned}$$
The convergence rate is thus controlled chiefly by $R$, since this bound shrinks by a factor $R^{-2}$ for each extra iteration.
For comparison, one may consider how the convergence rate of the power method depends on $\rho$, but since the power method primarily is sensitive to the quotient between absolute values of the eigenvalues, we need $|\lambda_{n}|\leqslant |\lambda_{2}|$ for the eigengap between $\lambda_{1}$ and $\lambda_{2}$ to be the dominant one. Under that constraint, the case that most favours the power method is that $\lambda_{n}=-\lambda_{2}$, so consider that. Late in the power method, the iteration vector is

$$u=(1-t^{2})^{1/2}z_{1}+tz_{2}\approx z_{1}+tz_{2},$$

where each new iteration effectively multiplies the $z_{2}$-amplitude $t$ by

$${\frac {\lambda_{2}}{\lambda_{1}}}={\frac {\lambda_{2}}{\lambda_{2}+(\lambda_{1}-\lambda_{2})}}={\frac {1}{1+{\frac {\lambda_{1}-\lambda_{2}}{\lambda_{2}}}}}={\frac {1}{1+2\rho }}.$$
The estimate of the largest eigenvalue is then

$$u^{*}Au=(1-t^{2})\lambda_{1}+t^{2}\lambda_{2},$$

so the above bound for the Lanczos algorithm convergence rate should be compared to

$$\lambda_{1}-u^{*}Au=(\lambda_{1}-\lambda_{2})t^{2},$$

which shrinks by a factor of $(1+2\rho )^{-2}$ for each iteration. The difference thus boils down to that between $1+2\rho$ and $R=1+2\rho +2{\sqrt {\rho ^{2}+\rho }}$. In the $\rho \gg 1$ region, the latter is more like $1+4\rho$, and performs like the power method would with an eigengap twice as large, a notable improvement. The more challenging case is however that of $\rho \ll 1$, in which $R\approx 1+2{\sqrt {\rho }}$ is an even larger improvement on the eigengap; the $\rho \gg 1$ region is where the Lanczos algorithm convergence-wise makes the smallest improvement on the power method.
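For illustration, these two per-iteration shrink factors can be tabulated side by side for a few (arbitrary) values of $\rho$:

```python
import math

def lanczos_R(rho):
    """R = exp(arcosh(1 + 2*rho)); the Lanczos error bound shrinks by R**-2 per step."""
    return 1 + 2 * rho + 2 * math.sqrt(rho**2 + rho)

for rho in (0.01, 0.1, 1.0, 10.0):
    # Power method error shrinks by (1 + 2*rho)**-2 per step in the same setting.
    print(f"rho={rho:5.2f}  power factor={1 + 2 * rho:8.3f}  Lanczos R={lanczos_R(rho):8.3f}")
```

The gap between the two columns is largest in relative terms for small $\rho$, matching the discussion above.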
== Numerical stability ==
Stability refers to how much the algorithm is affected (i.e., whether it still produces a result close to the exact one) when small numerical errors are introduced and accumulate. Numerical stability is the central criterion for judging the usefulness of implementing an algorithm on a computer with roundoff.
For the Lanczos algorithm, it can be proved that with exact arithmetic the set of vectors $v_{1},v_{2},\cdots ,v_{m+1}$ constitutes an orthonormal basis, and the eigenvalues/vectors solved are good approximations to those of the original matrix. However, in practice (as the calculations are performed in floating point arithmetic, where inaccuracy is inevitable), the orthogonality is quickly lost, and in some cases the new vector may even be linearly dependent on the set already constructed. As a result, some of the eigenvalues of the resultant tridiagonal matrix may not be approximations to those of the original matrix. Therefore, the Lanczos algorithm is not very stable.
Users of this algorithm must be able to find and remove those "spurious" eigenvalues. Practical implementations of the Lanczos algorithm go in three directions to fight this stability issue:
Prevent the loss of orthogonality,
Recover the orthogonality after the basis is generated.
After the good and "spurious" eigenvalues are all identified, remove the spurious ones.
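The loss of orthogonality, and the effect of one countermeasure (full reorthogonalization, one way to prevent the loss at the cost of extra work per step), can be demonstrated with a small sketch; the test matrix, sizes, and seed are arbitrary:

```python
import numpy as np

def lanczos_basis(A, v1, m, reorthogonalize=False):
    """Lanczos basis V; optionally re-project w against all earlier vectors."""
    n = A.shape[0]
    V = np.zeros((n, m))
    V[:, 0] = v1 / np.linalg.norm(v1)
    beta, v_prev = 0.0, np.zeros(n)
    for j in range(m - 1):
        w = A @ V[:, j] - beta * v_prev
        w -= (V[:, j] @ w) * V[:, j]
        if reorthogonalize:  # full reorthogonalization: costly but effective
            w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        beta = np.linalg.norm(w)
        v_prev = V[:, j]
        V[:, j + 1] = w / beta
    return V

# One dominant eigenvalue makes the extreme Ritz value converge fast, which
# is exactly what triggers the loss of orthogonality in plain Lanczos.
A = np.diag(np.concatenate(([1e6], np.linspace(1.0, 100.0, 199))))
v1 = np.random.default_rng(2).standard_normal(200)
for flag in (False, True):
    V = lanczos_basis(A, v1, 50, reorthogonalize=flag)
    err = np.max(np.abs(V.T @ V - np.eye(50)))
    print(f"reorthogonalize={flag}: max deviation from orthonormality = {err:.1e}")
```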
== Variations ==
Variations on the Lanczos algorithm exist where the vectors involved are tall, narrow matrices instead of vectors and the normalizing constants are small square matrices. These are called "block" Lanczos algorithms and can be much faster on computers with large numbers of registers and long memory-fetch times.
Many implementations of the Lanczos algorithm restart after a certain number of iterations. One of the most influential restarted variations is the implicitly restarted Lanczos method, which is implemented in ARPACK. This has led into a number of other restarted variations such as restarted Lanczos bidiagonalization. Another successful restarted variation is the Thick-Restart Lanczos method, which has been implemented in a software package called TRLan.
=== Nullspace over a finite field ===
In 1995, Peter Montgomery published an algorithm, based on the Lanczos algorithm, for finding elements of the nullspace of a large sparse matrix over GF(2); since the set of people interested in large sparse matrices over finite fields and the set of people interested in large eigenvalue problems scarcely overlap, this is often also called the block Lanczos algorithm without causing unreasonable confusion.
== Applications ==
Lanczos algorithms are very attractive because the multiplication by
A
{\displaystyle A\,}
is the only large-scale linear operation. Since weighted-term text retrieval engines implement just this operation, the Lanczos algorithm can be applied efficiently to text documents (see latent semantic indexing). Eigenvectors are also important for large-scale ranking methods such as the HITS algorithm developed by Jon Kleinberg, or the PageRank algorithm used by Google.
Lanczos algorithms are also used in condensed matter physics as a method for solving Hamiltonians of strongly correlated electron systems, as well as in shell model codes in nuclear physics.
== Implementations ==
The NAG Library contains several routines for the solution of large scale linear systems and eigenproblems which use the Lanczos algorithm.
MATLAB and GNU Octave come with ARPACK built-in. Both stored and implicit matrices can be analyzed through the eigs() function (Matlab/Octave).
Similarly, in Python, the SciPy package has scipy.sparse.linalg.eigsh, which is a wrapper for the SSEUPD and DSEUPD functions from ARPACK, which use the Implicitly Restarted Lanczos Method.
A Matlab implementation of the Lanczos algorithm (note precision issues) is available as a part of the Gaussian Belief Propagation Matlab Package. The GraphLab collaborative filtering library incorporates a large scale parallel implementation of the Lanczos algorithm (in C++) for multicore.
The PRIMME library also implements a Lanczos-like algorithm.
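As a usage sketch of the SciPy wrapper mentioned above (the test matrix is an arbitrary example with a known spectrum):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 100
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # 1-D Laplacian

# Six largest eigenvalues via ARPACK's implicitly restarted Lanczos method.
vals = eigsh(A, k=6, which="LM", return_eigenvectors=False)

exact = 2 - 2 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))  # known spectrum
print(np.allclose(np.sort(vals), np.sort(exact)[-6:]))
```

Only matrix-vector products with `A` are ever needed, so `A` may equally well be a `LinearOperator`.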
== Notes ==
== References ==
== Further reading ==
Golub, Gene H.; Van Loan, Charles F. (1996). "Lanczos Methods". Matrix Computations. Baltimore: Johns Hopkins University Press. pp. 470–507. ISBN 0-8018-5414-8.
Ng, Andrew Y.; Zheng, Alice X.; Jordan, Michael I. (2001). "Link Analysis, Eigenvectors and Stability" (PDF). IJCAI'01 Proceedings of the 17th International Joint Conference on Artificial Intelligence. 2: 903–910.
Erik Koch (2019). "Exact Diagonalization and Lanczos Method" (PDF). In E. Pavarini; E. Koch; S. Zhang (eds.). Many-Body Methods for Real Materials. Jülich. ISBN 978-3-95806-400-3. | Wikipedia/Lanczos_algorithm |
In mathematics, stability theory addresses the stability of solutions of differential equations and of trajectories of dynamical systems under small perturbations of initial conditions. The heat equation, for example, is a stable partial differential equation because small perturbations of initial data lead to small variations in temperature at a later time as a result of the maximum principle. In partial differential equations one may measure the distances between functions using Lp norms or the sup norm, while in differential geometry one may measure the distance between spaces using the Gromov–Hausdorff distance.
In dynamical systems, an orbit is called Lyapunov stable if the forward orbit of any point that starts in a small enough neighborhood of it stays in a small (but perhaps larger) neighborhood. Various criteria have been developed to prove stability or instability of an orbit. Under favorable circumstances, the question may be reduced to a well-studied problem involving eigenvalues of matrices. A more general method involves Lyapunov functions. In practice, any one of a number of different stability criteria are applied.
== Overview in dynamical systems ==
Many parts of the qualitative theory of differential equations and dynamical systems deal with asymptotic properties of solutions and the trajectories—what happens with the system after a long period of time. The simplest kind of behavior is exhibited by equilibrium points, or fixed points, and by periodic orbits. If a particular orbit is well understood, it is natural to ask next whether a small change in the initial condition will lead to similar behavior. Stability theory addresses the following questions: Will a nearby orbit indefinitely stay close to a given orbit? Will it converge to the given orbit? In the former case, the orbit is called stable; in the latter case, it is called asymptotically stable and the given orbit is said to be attracting.
An equilibrium solution $f_{e}$ to an autonomous system of first order ordinary differential equations is called:

stable if for every (small) $\epsilon >0$, there exists a $\delta >0$ such that every solution $f(t)$ having initial conditions within distance $\delta$ of the equilibrium, i.e. $\|f(t_{0})-f_{e}\|<\delta$, remains within distance $\epsilon$, i.e. $\|f(t)-f_{e}\|<\epsilon$, for all $t\geq t_{0}$.

asymptotically stable if it is stable and, in addition, there exists $\delta _{0}>0$ such that whenever $\|f(t_{0})-f_{e}\|<\delta _{0}$ then $f(t)\rightarrow f_{e}$ as $t\rightarrow \infty$.
Stability means that the trajectories do not change too much under small perturbations. The opposite situation, where a nearby orbit is getting repelled from the given orbit, is also of interest. In general, perturbing the initial state in some directions results in the trajectory asymptotically approaching the given one and in other directions to the trajectory getting away from it. There may also be directions for which the behavior of the perturbed orbit is more complicated (neither converging nor escaping completely), and then stability theory does not give sufficient information about the dynamics.
One of the key ideas in stability theory is that the qualitative behavior of an orbit under perturbations can be analyzed using the linearization of the system near the orbit. In particular, at each equilibrium of a smooth dynamical system with an n-dimensional phase space, there is a certain n×n matrix A whose eigenvalues characterize the behavior of the nearby points (Hartman–Grobman theorem). More precisely, if all eigenvalues are negative real numbers or complex numbers with negative real parts then the point is a stable attracting fixed point, and the nearby points converge to it at an exponential rate, cf Lyapunov stability and exponential stability. If none of the eigenvalues are purely imaginary (or zero) then the attracting and repelling directions are related to the eigenspaces of the matrix A with eigenvalues whose real part is negative and, respectively, positive. Analogous statements are known for perturbations of more complicated orbits.
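A minimal sketch of this eigenvalue test for an equilibrium (the example matrix is hypothetical, a damped oscillator $x''+0.5x'+x=0$ written as a first-order system):

```python
import numpy as np

def classify_equilibrium(A, tol=1e-12):
    """Classify the origin of x' = A x by the real parts of the eigenvalues of A."""
    re = np.linalg.eigvals(A).real
    if np.all(re < -tol):
        return "asymptotically stable"   # all eigenvalues in the open left half-plane
    if np.any(re > tol):
        return "unstable"                # some eigenvalue with positive real part
    return "inconclusive (non-hyperbolic)"  # linearization does not decide

# Damped oscillator as a 2-by-2 system; eigenvalues have real part -0.25.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
print(classify_equilibrium(A))  # asymptotically stable
```

For a nonlinear system, the same test is applied to the Jacobian at the equilibrium, and by the Hartman–Grobman theorem it decides stability whenever no eigenvalue sits on the imaginary axis.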
== Stability of fixed points in 2D ==
The paradigmatic case is the stability of the origin under the linear autonomous differential equation $\dot{X}=AX$ where $X={\begin{bmatrix}x\\y\end{bmatrix}}$ and $A$ is a 2-by-2 matrix.

We would sometimes perform a change of basis by $X'=CX$ for some invertible matrix $C$, which gives $\dot{X}'=C^{-1}ACX'$. We say $C^{-1}AC$ is "$A$ in the new basis". Since $\det A=\det C^{-1}AC$ and $\operatorname{tr} A=\operatorname{tr} C^{-1}AC$, we can classify the stability of the origin using $\det A$ and $\operatorname{tr} A$, while freely using change of basis.
=== Classification of stability types ===
If $\det A=0$, then the rank of $A$ is zero or one.

If the rank is zero, then $A=0$, and there is no flow.

If the rank is one, then $\ker A$ and $\operatorname{im} A$ are both one-dimensional.

If $\ker A=\operatorname{im} A$, then let $v$ span $\ker A$, and let $w$ be a preimage of $v$; then in the $\{v,w\}$ basis, $A={\begin{bmatrix}0&1\\0&0\end{bmatrix}}$, and so the flow is a shearing along the $v$ direction. In this case, $\operatorname{tr} A=0$.

If $\ker A\neq \operatorname{im} A$, then let $v$ span $\ker A$ and let $w$ span $\operatorname{im} A$; then in the $\{v,w\}$ basis, $A={\begin{bmatrix}0&0\\0&a\end{bmatrix}}$ for some nonzero real number $a$.

If $\operatorname{tr} A>0$, then it is unstable, diverging at a rate of $a$ from $\ker A$ along parallel translates of $\operatorname{im} A$.

If $\operatorname{tr} A<0$, then it is stable, converging at a rate of $a$ to $\ker A$ along parallel translates of $\operatorname{im} A$.
If $\det A\neq 0$, we first find the Jordan normal form of the matrix, to obtain a basis $\{v,w\}$ in which $A$ is one of three possible forms:

${\begin{bmatrix}a&0\\0&b\end{bmatrix}}$ where $a,b\neq 0$.

If $a,b>0$, then $4\det A-(\operatorname{tr} A)^{2}=-(a-b)^{2}\leq 0$ and $\det A=ab>0$. The origin is a source, with integral curves of the form $y=cx^{b/a}$. Similarly for $a,b<0$: the origin is a sink.

If $a>0>b$ or $a<0<b$, then $\det A<0$, and the origin is a saddle point, with integral curves of the form $y=cx^{-|b/a|}$.
${\begin{bmatrix}a&1\\0&a\end{bmatrix}}$ where $a\neq 0$. This can be further simplified by a change of basis with $C={\begin{bmatrix}1/a&0\\0&1\end{bmatrix}}$, after which $A=a{\begin{bmatrix}1&1\\0&1\end{bmatrix}}$. We can explicitly solve $\dot{X}=AX$ in this case: the solution is $X(t)=e^{At}X(0)$ with $e^{At}=e^{at}{\begin{bmatrix}1&at\\0&1\end{bmatrix}}$. This case is called the "degenerate node". The integral curves in this basis are central dilations of $x=y\ln y$, plus the x-axis.

If $\operatorname{tr} A>0$, then the origin is a degenerate source; otherwise it is a degenerate sink. In both cases, $4\det A-(\operatorname{tr} A)^{2}=0$.
$a{\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta &\cos \theta \end{bmatrix}}$ where $a>0,\ \theta \in (-\pi ,\pi ]$. In this case, $4\det A-(\operatorname{tr} A)^{2}=(2a\sin \theta )^{2}\geq 0$.

If $\theta \in (-\pi ,-\pi /2)\cup (\pi /2,\pi ]$, then this is a spiral sink. In this case, $4\det A-(\operatorname{tr} A)^{2}>0$ and $\operatorname{tr} A<0$. The integral curves are logarithmic spirals.

If $\theta \in (-\pi /2,\pi /2)$, then this is a spiral source. In this case, $4\det A-(\operatorname{tr} A)^{2}>0$ and $\operatorname{tr} A>0$. The integral curves are logarithmic spirals.

If $\theta =\pm \pi /2$, then this is a rotation ("neutral stability") at a rate of $a$, moving neither towards nor away from the origin. In this case, $\operatorname{tr} A=0$. The integral curves are circles.
The summary is shown in the stability diagram on the right. In each case, except the case of
4
det
A
−
(
tr
A
)
2
=
0
{\displaystyle 4\det A-(\operatorname {tr} A)^{2}=0}
, the values
(
tr
A
,
det
A
)
{\displaystyle (\operatorname {tr} A,\det A)}
allows unique classification of the type of flow.
For the special case of 4 det A − (tr A)^2 = 0, there are two cases that cannot be distinguished by (tr A, det A). In both cases, A has only one eigenvalue, with algebraic multiplicity 2.
If the eigenvalue has a two-dimensional eigenspace (geometric multiplicity 2), then the system is a central node (sometimes called a "star" or "dicritical node"), which is either a source (when tr A > 0) or a sink (when tr A < 0).
If it has a one-dimensional eigenspace (geometric multiplicity 1), then the system is a degenerate node (if det A > 0) or a shearing flow (if det A = 0).
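The classification above can be collapsed into a small decision procedure. The following sketch (illustrative function names, not a standard API) classifies the flow of x′ = Ax for a real 2×2 matrix A from tr A and det A alone, flagging the repeated-eigenvalue case as ambiguous, exactly as discussed above:

```python
# Classify the origin of x' = Ax for a real 2x2 matrix A, using only
# tr A and det A plus the discriminant (tr A)^2 - 4 det A.

def classify_flow(tr, det):
    """Return a label for the flow type of x' = Ax."""
    disc = tr * tr - 4.0 * det          # (tr A)^2 - 4 det A
    if det < 0:
        return "saddle"
    if det == 0:                        # exact comparison: illustrative only
        return "degenerate (shear / line of fixed points)"
    if disc > 0:                        # real, distinct eigenvalues
        return "nodal source" if tr > 0 else "nodal sink"
    if disc < 0:                        # complex conjugate eigenvalues
        if tr > 0:
            return "spiral source"
        if tr < 0:
            return "spiral sink"
        return "center (rotation)"
    # disc == 0: repeated eigenvalue; (tr A, det A) alone cannot
    # distinguish a central node from a degenerate node.
    return "degenerate or central node (source)" if tr > 0 else \
           "degenerate or central node (sink)"

print(classify_flow(-3, 2))   # eigenvalues -1 and -2
print(classify_flow(0, 1))    # purely imaginary eigenvalues
```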
=== Area-preserving flow ===
When tr A = 0, we have det e^{At} = e^{(tr A) t} = 1, so the flow is area-preserving. In this case, the type of flow is classified by det A.
If det A > 0, then it is a rotation ("neutral stability") around the origin.
If det A = 0, then it is a shearing flow.
If det A < 0, then the origin is a saddle point.
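As a numerical sanity check of det e^{At} = e^{(tr A)t} = 1, the sketch below exponentiates a trace-free 2×2 matrix with a plain truncated Taylor series. This is illustrative only; production code would use a library routine such as a scaled Padé approximation.

```python
# Check that the flow of x' = Ax preserves area when tr A = 0:
# det(e^{At}) = e^{(tr A) t} = 1. Here t = 1 is baked into A.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm2(A, terms=30):
    """2x2 matrix exponential via truncated Taylor series (small ||A||)."""
    S = [[1.0, 0.0], [0.0, 1.0]]      # running sum, starts at I
    P = [[1.0, 0.0], [0.0, 1.0]]      # running term A^k / k!
    for k in range(1, terms):
        P = mat_mul(P, A)
        P = [[P[i][j] / k for j in range(2)] for i in range(2)]
        S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return S

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[0.3, 1.0], [-0.7, -0.3]]        # tr A = 0, det A = 0.61 > 0: rotation
E = expm2(A)
print(abs(det2(E) - 1.0) < 1e-9)      # area preserved
```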
== Stability of fixed points ==
The simplest kind of orbit is a fixed point, or an equilibrium. If a mechanical system is in a stable equilibrium state, then a small push will result in a localized motion, for example small oscillations as in the case of a pendulum. In a system with damping, a stable equilibrium state is moreover asymptotically stable. On the other hand, for an unstable equilibrium, such as a ball resting on top of a hill, certain small pushes will result in a motion with a large amplitude that may or may not converge to the original state.
There are useful tests of stability for the case of a linear system. Stability of a nonlinear system can often be inferred from the stability of its linearization.
=== Maps ===
Let f: R → R be a continuously differentiable function with a fixed point a, f(a) = a. Consider the dynamical system obtained by iterating the function f:
x_{n+1} = f(x_n),  n = 0, 1, 2, ….
The fixed point a is stable if the absolute value of the derivative of f at a is strictly less than 1, and unstable if it is strictly greater than 1. This is because near the point a, the function f has a linear approximation with slope f'(a):
f(x) ≈ f(a) + f′(a)(x − a).
Thus
x_{n+1} − a = f(x_n) − a ≈ f(a) + f′(a)(x_n − a) − a = a + f′(a)(x_n − a) − a = f′(a)(x_n − a),
and hence
f′(a) ≈ (x_{n+1} − a) / (x_n − a),
which means that the derivative measures the rate at which the successive iterates approach the fixed point a or diverge from it. If the derivative at a is exactly 1 or −1, then more information is needed in order to decide stability.
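For a concrete instance of this criterion, the map f(x) = cos x has a fixed point a ≈ 0.739 (the Dottie number) with |f′(a)| = |−sin a| < 1, so iterates converge to it from nearby starting points. A quick check:

```python
# Derivative test for the map x_{n+1} = cos(x_n): its fixed point is
# stable because |f'(a)| = |sin a| < 1, so repeated iteration converges.

import math

x = 1.0
for _ in range(100):
    x = math.cos(x)

a = x                                  # numerical fixed point
print(abs(math.cos(a) - a) < 1e-8)     # f(a) = a holds to high accuracy
print(abs(-math.sin(a)) < 1.0)         # |f'(a)| < 1  ->  stable
```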
There is an analogous criterion for a continuously differentiable map f: Rn → Rn with a fixed point a, expressed in terms of its Jacobian matrix at a, Ja(f). If all eigenvalues of J are real or complex numbers with absolute value strictly less than 1 then a is a stable fixed point; if at least one of them has absolute value strictly greater than 1 then a is unstable. Just as for n=1, the case of the largest absolute value being 1 needs to be investigated further — the Jacobian matrix test is inconclusive. The same criterion holds more generally for diffeomorphisms of a smooth manifold.
=== Linear autonomous systems ===
The stability of fixed points of a system of constant coefficient linear differential equations of first order can be analyzed using the eigenvalues of the corresponding matrix.
An autonomous system
x′ = Ax,
where x(t) ∈ Rn and A is an n×n matrix with real entries, has a constant solution
x(t) = 0.
(In a different language, the origin 0 ∈ Rn is an equilibrium point of the corresponding dynamical system.) This solution is asymptotically stable as t → ∞ ("in the future") if and only if for all eigenvalues λ of A, Re(λ) < 0. Similarly, it is asymptotically stable as t → −∞ ("in the past") if and only if for all eigenvalues λ of A, Re(λ) > 0. If there exists an eigenvalue λ of A with Re(λ) > 0 then the solution is unstable for t → ∞.
Application of this result in practice, in order to decide the stability of the origin for a linear system, is facilitated by the Routh–Hurwitz stability criterion. The eigenvalues of a matrix are the roots of its characteristic polynomial. A polynomial in one variable with real coefficients is called a Hurwitz polynomial if the real parts of all roots are strictly negative. The Routh–Hurwitz theorem implies a characterization of Hurwitz polynomials by means of an algorithm that avoids computing the roots.
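As an illustration, for a cubic characteristic polynomial p(s) = s^3 + a2 s^2 + a1 s + a0 the Routh–Hurwitz conditions reduce to a2 > 0, a0 > 0, and a2·a1 > a0. A minimal sketch (the general algorithm builds the full Routh array; this shortcut is specific to the cubic case):

```python
# Routh-Hurwitz test for a monic cubic p(s) = s^3 + a2 s^2 + a1 s + a0.
# All roots have strictly negative real parts iff the three inequalities
# below hold, so no root-finding is needed.

def hurwitz_cubic(a2, a1, a0):
    return a2 > 0 and a0 > 0 and a2 * a1 > a0

# (s+1)(s+2)(s+3) = s^3 + 6 s^2 + 11 s + 6: all roots negative, stable
print(hurwitz_cubic(6, 11, 6))    # True
# (s-1)(s+2)(s+3) = s^3 + 4 s^2 + s - 6: root at s = 1, unstable
print(hurwitz_cubic(4, 1, -6))    # False
```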
=== Non-linear autonomous systems ===
Asymptotic stability of fixed points of a non-linear system can often be established using the Hartman–Grobman theorem.
Suppose that v is a C1-vector field in Rn which vanishes at a point p, v(p) = 0. Then the corresponding autonomous system
x′ = v(x)
has a constant solution
x(t) = p.
Let Jp(v) be the n×n Jacobian matrix of the vector field v at the point p. If all eigenvalues of J have strictly negative real part then the solution is asymptotically stable. This condition can be tested using the Routh–Hurwitz criterion.
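A worked sketch, using a damped pendulum x1′ = x2, x2′ = −sin x1 − γx2 as the vector field v (an illustrative choice): the Jacobian at the fixed point p = (0, 0) is [[0, 1], [−1, −γ]], and for damping γ > 0 both of its eigenvalues have negative real part, so the origin is asymptotically stable. The eigenvalue real parts of a 2×2 matrix are computed here from its trace and determinant:

```python
# Linearization check for the damped pendulum at the fixed point (0, 0).
# J is the Jacobian of v(x) = (x2, -sin(x1) - g*x2) evaluated at the origin.

def eig_real_parts_2x2(J):
    """Real parts of the eigenvalues of a real 2x2 matrix via tr/det."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    disc = tr * tr - 4 * det
    if disc >= 0:                      # two real eigenvalues
        r = disc ** 0.5
        return ((tr + r) / 2, (tr - r) / 2)
    return (tr / 2, tr / 2)            # complex pair: shared real part tr/2

g = 0.5                                # damping coefficient, g > 0
J = [[0.0, 1.0], [-1.0, -g]]           # Jacobian of v at p = (0, 0)
print(all(re < 0 for re in eig_real_parts_2x2(J)))   # True: stable
```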
== Lyapunov function for general dynamical systems ==
A general way to establish Lyapunov stability or asymptotic stability of a dynamical system is by means of Lyapunov functions.
== See also ==
Chaos theory
Ship stability
Lyapunov stability
Hyperstability
Linear stability
Orbital stability
Stability criterion
Stability radius
Structural stability
von Neumann stability analysis
== References ==
Philip Holmes and Eric T. Shea-Brown (ed.). "Stability". Scholarpedia.
== External links ==
Stable Equilibria by Michael Schreiber, The Wolfram Demonstrations Project.
In mathematics, an eigenfunction of a linear operator D defined on some function space is any non-zero function f in that space that, when acted upon by D, is only multiplied by some scaling factor called an eigenvalue. As an equation, this condition can be written as Df = λf for some scalar eigenvalue λ.
The solutions to this equation may also be subject to boundary conditions that limit the allowable eigenvalues and eigenfunctions.
An eigenfunction is a type of eigenvector.
== Eigenfunctions ==
In general, an eigenvector of a linear operator D defined on some vector space is a nonzero vector in the domain of D that, when D acts upon it, is simply scaled by some scalar value called an eigenvalue. In the special case where D is defined on a function space, the eigenvectors are referred to as eigenfunctions. That is, a function f is an eigenfunction of D if it satisfies the equation
Df = λf,  (1)
where λ is a scalar. The solutions to Equation (1) may also be subject to boundary conditions. Because of the boundary conditions, the possible values of λ are generally limited, for example to a discrete set λ1, λ2, … or to a continuous set over some range. The set of all possible eigenvalues of D is sometimes called its spectrum, which may be discrete, continuous, or a combination of both.
Each value of λ corresponds to one or more eigenfunctions. If multiple linearly independent eigenfunctions have the same eigenvalue, the eigenvalue is said to be degenerate and the maximum number of linearly independent eigenfunctions associated with the same eigenvalue is the eigenvalue's degree of degeneracy or geometric multiplicity.
=== Derivative example ===
A widely used class of linear operators acting on infinite-dimensional spaces is formed by the differential operators on the space C∞ of infinitely differentiable real or complex functions of a real or complex argument t. For example, consider the derivative operator d/dt with the eigenvalue equation
d/dt f(t) = λf(t).
This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function
f(t) = f0 e^{λt},
is the eigenfunction of the derivative operator, where f0 is a parameter that depends on the boundary conditions. Note that in this case the eigenfunction is itself a function of its associated eigenvalue λ, which can take any real or complex value. In particular, note that for λ = 0 the eigenfunction f(t) is a constant.
Suppose in the example that f(t) is subject to the boundary conditions f(0) = 1 and
df/dt|_{t=0} = 2. We then find that
f(t) = e^{2t},
where λ = 2 is the only eigenvalue of the differential equation that also satisfies the boundary condition.
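This solution is easy to verify numerically. The sketch below checks the two boundary conditions and the constancy of f′/f = λ using a central finite difference (the step size h is an arbitrary illustrative choice):

```python
# Check that f(t) = e^{2t} solves f'(t) = λ f(t) with λ = 2 and meets the
# boundary conditions f(0) = 1 and f'(0) = 2.

import math

def f(t):
    return math.exp(2.0 * t)

h = 1e-6                               # finite-difference step
fprime0 = (f(h) - f(-h)) / (2 * h)     # central difference at t = 0

print(abs(f(0.0) - 1.0) < 1e-12)       # boundary condition f(0) = 1
print(abs(fprime0 - 2.0) < 1e-6)       # boundary condition f'(0) = 2

# f'/f equals the eigenvalue λ = 2 at any t, not just t = 0:
t = 0.7
fp = (f(t + h) - f(t - h)) / (2 * h)
print(abs(fp / f(t) - 2.0) < 1e-6)
```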
=== Link to eigenvalues and eigenvectors of matrices ===
Eigenfunctions can be expressed as column vectors and linear operators can be expressed as matrices, although they may have infinite dimensions. As a result, many of the concepts related to eigenvectors of matrices carry over to the study of eigenfunctions.
Define the inner product in the function space on which D is defined as
⟨f, g⟩ = ∫_Ω f*(t) g(t) dt,
integrated over some range of interest for t called Ω. The * denotes the complex conjugate.
Suppose the function space has an orthonormal basis given by the set of functions {u1(t), u2(t), …, un(t)}, where n may be infinite. For the orthonormal basis,
⟨u_i, u_j⟩ = ∫_Ω u_i*(t) u_j(t) dt = δ_ij = 1 if i = j, and 0 if i ≠ j,
where δij is the Kronecker delta and can be thought of as the elements of the identity matrix.
Functions can be written as a linear combination of the basis functions,
f(t) = Σ_{j=1}^{n} b_j u_j(t),
for example through a Fourier expansion of f(t). The coefficients bj can be stacked into an n by 1 column vector b = [b1 b2 … bn]T. In some special cases, such as the coefficients of the Fourier series of a sinusoidal function, this column vector has finite dimension.
Additionally, define a matrix representation of the linear operator D with elements
A_ij = ⟨u_i, Du_j⟩ = ∫_Ω u_i*(t) Du_j(t) dt.
We can write the function Df(t) either as a linear combination of the basis functions or as D acting upon the expansion of f(t),
Df(t) = Σ_{j=1}^{n} c_j u_j(t) = Σ_{j=1}^{n} b_j Du_j(t).
Taking the inner product of each side of this equation with an arbitrary basis function ui(t),
Σ_{j=1}^{n} c_j ∫_Ω u_i*(t) u_j(t) dt = Σ_{j=1}^{n} b_j ∫_Ω u_i*(t) Du_j(t) dt,
c_i = Σ_{j=1}^{n} b_j A_ij.
This is the matrix multiplication Ab = c written in summation notation and is a matrix equivalent of the operator D acting upon the function f(t) expressed in the orthonormal basis. If f(t) is an eigenfunction of D with eigenvalue λ, then Ab = λb.
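A concrete sketch of these matrix elements: for D = d/dt on Ω = [0, 2π] with the orthonormal Fourier basis u_k(t) = e^{ikt}/√(2π), the matrix A is diagonal with entries ik, because each u_k is an eigenfunction of D. The inner products are approximated by a Riemann sum; the truncation to three basis functions and the quadrature size are illustrative choices:

```python
# Matrix elements A_ij = <u_i, D u_j> of D = d/dt in the Fourier basis
# u_k(t) = e^{ikt}/sqrt(2*pi) on [0, 2*pi]. Since D u_k = ik u_k, the
# matrix should be diagonal with entries ik.

import cmath, math

N = 2000                                # quadrature points
dt = 2 * math.pi / N

def u(k, t):
    return cmath.exp(1j * k * t) / math.sqrt(2 * math.pi)

def Du(k, t):                           # derivative, known analytically
    return 1j * k * u(k, t)

def A(i, j):                            # Riemann-sum inner product
    return sum(u(i, n * dt).conjugate() * Du(j, n * dt)
               for n in range(N)) * dt

print(abs(A(1, 1) - 1j) < 1e-9)         # diagonal entry i*1
print(abs(A(1, -1)) < 1e-9)             # off-diagonal entry vanishes
print(abs(A(0, 0)) < 1e-9)              # constant function: eigenvalue 0
```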
=== Eigenvalues and eigenfunctions of Hermitian operators ===
Many of the operators encountered in physics are Hermitian. Suppose the linear operator D acts on a function space that is a Hilbert space with an orthonormal basis given by the set of functions {u1(t), u2(t), …, un(t)}, where n may be infinite. In this basis, the operator D has a matrix representation A with elements
A_ij = ⟨u_i, Du_j⟩ = ∫_Ω dt u_i*(t) Du_j(t),
integrated over some range of interest for t denoted Ω.
By analogy with Hermitian matrices, D is a Hermitian operator if Aij = Aji*, or:
⟨u_i, Du_j⟩ = ⟨Du_i, u_j⟩,
∫_Ω dt u_i*(t) Du_j(t) = ∫_Ω dt u_j(t) [Du_i(t)]*.
Consider the Hermitian operator D with eigenvalues λ1, λ2, … and corresponding eigenfunctions f1(t), f2(t), …. This Hermitian operator has the following properties:
Its eigenvalues are real, λi = λi*
Its eigenfunctions obey an orthogonality condition,
⟨f_i, f_j⟩ = 0 if i ≠ j
The second condition always holds for λi ≠ λj. For degenerate eigenfunctions with the same eigenvalue λi, orthogonal eigenfunctions can always be chosen that span the eigenspace associated with λi, for example by using the Gram-Schmidt process. Depending on whether the spectrum is discrete or continuous, the eigenfunctions can be normalized by setting the inner product of the eigenfunctions equal to either a Kronecker delta or a Dirac delta function, respectively.
For many Hermitian operators, notably Sturm–Liouville operators, a third property is
Its eigenfunctions form a basis of the function space on which the operator is defined
As a consequence, in many important cases, the eigenfunctions of the Hermitian operator form an orthonormal basis. In these cases, an arbitrary function can be expressed as a linear combination of the eigenfunctions of the Hermitian operator.
== Applications ==
=== Vibrating strings ===
Let h(x, t) denote the transverse displacement of a stressed elastic chord, such as the vibrating strings of a string instrument, as a function of the position x along the string and of time t. Applying the laws of mechanics to infinitesimal portions of the string, the function h satisfies the partial differential equation
∂²h/∂t² = c² ∂²h/∂x²,
which is called the (one-dimensional) wave equation. Here c is a constant speed that depends on the tension and mass of the string.
This problem is amenable to the method of separation of variables. If we assume that h(x, t) can be written as the product of the form X(x)T(t), we can form a pair of ordinary differential equations:
d²X/dx² = −(ω²/c²) X,  d²T/dt² = −ω² T.
Each of these is an eigenvalue equation with eigenvalues −ω²/c² and −ω², respectively. For any values of ω and c, the equations are satisfied by the functions
X(x) = sin(ωx/c + φ),  T(t) = sin(ωt + ψ),
where the phase angles φ and ψ are arbitrary real constants.
If we impose boundary conditions, for example that the ends of the string are fixed at x = 0 and x = L, namely X(0) = X(L) = 0, and that T(0) = 0, we constrain the eigenvalues. For these boundary conditions, sin(φ) = 0 and sin(ψ) = 0, so the phase angles φ = ψ = 0, and
sin(ωL/c) = 0.
This last boundary condition constrains ω to take a value ωn = ncπ/L, where n is any integer. Thus, the clamped string supports a family of standing waves of the form
h(x, t) = sin(nπx/L) sin(ω_n t).
In the example of a string instrument, the frequency ωn is the frequency of the n-th harmonic, which is called the (n − 1)-th overtone.
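The quantization of ω can be checked directly. In the sketch below the string length, wave speed, and harmonic number are arbitrary illustrative values:

```python
# Clamped-string eigenvalues: the boundary condition sin(ω L / c) = 0
# forces ω_n = n c π / L, and the standing wave
# h(x, t) = sin(n π x / L) sin(ω_n t) vanishes at both fixed ends.

import math

L, c, n = 2.0, 340.0, 3                 # illustrative length, speed, harmonic
omega_n = n * c * math.pi / L

print(abs(math.sin(omega_n * L / c)) < 1e-9)     # boundary condition met

def h(x, t):
    return math.sin(n * math.pi * x / L) * math.sin(omega_n * t)

print(abs(h(0.0, 0.1)) < 1e-9)          # fixed end at x = 0
print(abs(h(L, 0.1)) < 1e-9)            # fixed end at x = L
```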
=== Schrödinger equation ===
In quantum mechanics, the Schrödinger equation
iħ ∂/∂t Ψ(r, t) = H Ψ(r, t)
with the Hamiltonian operator
H = −(ħ²/2m) ∇² + V(r, t)
can be solved by separation of variables if the Hamiltonian does not depend explicitly on time. In that case, the wave function Ψ(r, t) = φ(r)T(t) leads to the two differential equations
Hφ(r) = Eφ(r),  (2)
iħ dT(t)/dt = E T(t).  (3)
Both of these differential equations are eigenvalue equations with eigenvalue E. As shown in an earlier example, the solution of Equation (3) is the exponential
T(t) = e^{−iEt/ħ}.
Equation (2) is the time-independent Schrödinger equation. The eigenfunctions φk of the Hamiltonian operator are stationary states of the quantum mechanical system, each with a corresponding energy Ek. They represent allowable energy states of the system and may be constrained by boundary conditions.
The Hamiltonian operator H is an example of a Hermitian operator whose eigenfunctions form an orthonormal basis. When the Hamiltonian does not depend explicitly on time, general solutions of the Schrödinger equation are linear combinations of the stationary states multiplied by the oscillatory T(t),
Ψ(r, t) = Σ_k c_k φ_k(r) e^{−iE_k t/ħ}
or, for a system with a continuous spectrum,
Ψ(r, t) = ∫ dE c_E φ_E(r) e^{−iEt/ħ}.
The success of the Schrödinger equation in explaining the spectral characteristics of hydrogen is considered one of the greatest triumphs of 20th century physics.
=== Signals and systems ===
In the study of signals and systems, an eigenfunction of a system is a signal f(t) that, when input into the system, produces a response y(t) = λf(t), where λ is a complex scalar eigenvalue.
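A discrete-time sketch of this property: for a 3-tap moving-average system (an illustrative choice of LTI system), the complex exponential x[n] = e^{jωn} is returned scaled by the eigenvalue H(ω) = (1 + e^{−jω} + e^{−2jω})/3, exactly as the definition requires:

```python
# Complex exponentials are eigenfunctions of linear time-invariant systems.
# For the moving average y[n] = (x[n] + x[n-1] + x[n-2]) / 3, the input
# x[n] = e^{jωn} comes out scaled by the frequency response H(ω).

import cmath

omega = 0.8                            # illustrative frequency
H = (1 + cmath.exp(-1j * omega) + cmath.exp(-2j * omega)) / 3

def x(n):
    return cmath.exp(1j * omega * n)

# The system output equals λ x[n] with λ = H(ω) at every sample time:
ok = all(
    abs((x(n) + x(n - 1) + x(n - 2)) / 3 - H * x(n)) < 1e-12
    for n in range(5)
)
print(ok)   # the exponential is an eigenfunction with eigenvalue H(ω)
```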
== See also ==
Eigenvalues and eigenvectors
Hilbert–Schmidt theorem
Spectral theory of ordinary differential equations
Fixed point combinator
Fourier transform eigenfunctions
== Notes ==
=== Citations ===
== Works cited ==
== External links ==
More images (non-GPL) at Atom in a Box
In mathematics, the quadratic eigenvalue problem (QEP) is to find scalar eigenvalues λ, left eigenvectors y, and right eigenvectors x such that
Q(λ)x = 0  and  y*Q(λ) = 0,
where Q(λ) = λ²M + λC + K, with matrix coefficients M, C, K ∈ C^{n×n}, and we require that M ≠ 0 (so that we have a nonzero leading coefficient). There are 2n eigenvalues, which may be infinite or finite, and possibly zero. This is a special case of a nonlinear eigenproblem. Q(λ) is also known as a quadratic polynomial matrix.
== Spectral theory ==
A QEP is said to be regular if det(Q(λ)) ≢ 0 identically. The coefficient of the λ^{2n} term in det(Q(λ)) is det(M), implying that the QEP is regular if M is nonsingular.
Eigenvalues at infinity and eigenvalues at 0 may be exchanged by considering the reversed polynomial, λ²Q(λ⁻¹) = λ²K + λC + M. As there are 2n eigenvectors in an n-dimensional space, the eigenvectors cannot all be orthogonal. It is possible to have the same eigenvector attached to different eigenvalues.
== Applications ==
=== Systems of differential equations ===
Quadratic eigenvalue problems arise naturally in the solution of systems of second order linear differential equations without forcing:
Mq″(t) + Cq′(t) + Kq(t) = 0
where q(t) ∈ R^n and M, C, K ∈ R^{n×n}. If all quadratic eigenvalues of Q(λ) = λ²M + λC + K are distinct, then the solution can be written in terms of the quadratic eigenvalues and right quadratic eigenvectors as
q(t) = Σ_{j=1}^{2n} α_j x_j e^{λ_j t} = X e^{Λt} α
where Λ = Diag([λ_1, …, λ_{2n}]) ∈ R^{2n×2n} contains the quadratic eigenvalues, X = [x_1, …, x_{2n}] ∈ R^{n×2n} contains the 2n right quadratic eigenvectors, and α = [α_1, …, α_{2n}]^T ∈ R^{2n} is a parameter vector determined from the initial conditions on q and q′.
Stability theory for linear systems can now be applied, as the behavior of a solution depends explicitly on the (quadratic) eigenvalues.
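A minimal sketch for the scalar case n = 1, where Q(λ) is an ordinary quadratic: for a damped oscillator m q″ + c q′ + k q = 0 with m, c, k > 0, both quadratic eigenvalues have negative real part, so every solution q(t) = α1 e^{λ1 t} + α2 e^{λ2 t} decays. The coefficient values are illustrative:

```python
# Scalar (n = 1) quadratic eigenvalue problem for m q'' + c q' + k q = 0:
# Q(λ) = m λ^2 + c λ + k, with roots given by the quadratic formula.

import cmath

def quadratic_eigenvalues(m, c, k):
    d = cmath.sqrt(c * c - 4 * m * k)
    return ((-c + d) / (2 * m), (-c - d) / (2 * m))

l1, l2 = quadratic_eigenvalues(1.0, 0.4, 2.0)     # underdamped oscillator
print(l1.real < 0 and l2.real < 0)                # both stable

# Residual check: Q(λ) = 0 at each quadratic eigenvalue.
for lam in (l1, l2):
    print(abs(1.0 * lam**2 + 0.4 * lam + 2.0) < 1e-12)
```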
=== Finite element methods ===
A QEP can arise as part of the dynamic analysis of structures discretized by the finite element method. In this case the quadratic Q(λ) has the form Q(λ) = λ²M + λC + K, where M is the mass matrix, C is the damping matrix, and K is the stiffness matrix.
Other applications include vibro-acoustics and fluid dynamics.
== Methods of solution ==
Direct methods for solving the standard or generalized eigenvalue problems Ax = λx and Ax = λBx are based on transforming the problem to Schur or generalized Schur form. However, there is no analogous form for quadratic matrix polynomials.
One approach is to transform the quadratic matrix polynomial to a linear matrix pencil (A − λB) and solve a generalized eigenvalue problem. Once eigenvalues and eigenvectors of the linear problem have been determined, eigenvectors and eigenvalues of the quadratic can be determined.
The most common linearization is the first companion linearization
L1(λ) = [0, N; −K, −C] − λ [N, 0; 0, M],
with corresponding eigenvector
z = [x; λx].
For convenience, one often takes N to be the n×n identity matrix. We solve L(λ)z = 0 for λ and z, for example by computing the generalized Schur form. We can then take the first n components of z as the eigenvector x of the original quadratic Q(λ).
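For the scalar case n = 1 with N = 1, the companion pencil can be checked by hand. The sketch below verifies that det(L1(λ)) reproduces the original quadratic and that z = [x, λx]ᵀ is annihilated at each quadratic eigenvalue (the coefficients are illustrative):

```python
# First companion linearization for a scalar QEP Q(λ) = m λ^2 + c λ + k,
# with N = 1:  A = [[0, 1], [-k, -c]],  B = [[1, 0], [0, m]].
# det(A - λB) expands to m λ^2 + c λ + k, the original quadratic.

import cmath

m, c, k = 2.0, 3.0, 5.0

def pencil_det(lam):
    # det(A - λB) = (-λ)(-c - λ m) - (1)(-k) = m λ^2 + c λ + k
    return (0 - lam) * (-c - lam * m) - 1.0 * (-k)

d = cmath.sqrt(c * c - 4 * m * k)
roots = ((-c + d) / (2 * m), (-c - d) / (2 * m))   # quadratic eigenvalues

for lam in roots:
    print(abs(pencil_det(lam)) < 1e-12)            # pencil singular there
    # eigenvector z = [x, λx] with x = 1 satisfies (A - λB) z = 0:
    z = (1.0, lam)
    r1 = (0 - lam) * z[0] + 1.0 * z[1]             # first row residual
    r2 = -k * z[0] + (-c - lam * m) * z[1]         # second row residual
    print(abs(r1) < 1e-12 and abs(r2) < 1e-12)
```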
Another common linearization is given by
L2(λ) = [−K, 0; 0, N] − λ [C, M; N, 0].
In the case when either A or B is a Hamiltonian matrix and the other is a skew-Hamiltonian matrix, the following linearizations can be used:
L3(λ) = [K, 0; C, K] − λ [0, K; −M, 0].
L4(λ) = [0, −K; M, 0] − λ [M, C; 0, M].
== References ==
In quantum physics, a wave function (or wavefunction) is a mathematical description of the quantum state of an isolated quantum system. The most common symbols for a wave function are the Greek letters ψ and Ψ (lower-case and capital psi, respectively). Wave functions are complex-valued. For example, a wave function might assign a complex number to each point in a region of space. The Born rule provides the means to turn these complex probability amplitudes into actual probabilities. In one common form, it says that the squared modulus of a wave function that depends upon position is the probability density of measuring a particle as being at a given place. The integral of a wavefunction's squared modulus over all the system's degrees of freedom must be equal to 1, a condition called normalization. Since the wave function is complex-valued, only its relative phase and relative magnitude can be measured; its value does not, in isolation, tell anything about the magnitudes or directions of measurable observables. One has to apply quantum operators, whose eigenvalues correspond to sets of possible results of measurements, to the wave function ψ and calculate the statistical distributions for measurable quantities.
Wave functions can be functions of variables other than position, such as momentum. The information represented by a wave function that is dependent upon position can be converted into a wave function dependent upon momentum and vice versa, by means of a Fourier transform. Some particles, like electrons and photons, have nonzero spin, and the wave function for such particles includes spin as an intrinsic, discrete degree of freedom; other discrete variables can also be included, such as isospin. When a system has internal degrees of freedom, the wave function at each point in the continuous degrees of freedom (e.g., a point in space) assigns a complex number for each possible value of the discrete degrees of freedom (e.g., z-component of spin). These values are often displayed in a column matrix (e.g., a 2 × 1 column vector for a non-relativistic electron with spin 1⁄2).
According to the superposition principle of quantum mechanics, wave functions can be added together and multiplied by complex numbers to form new wave functions and form a Hilbert space. The inner product of two wave functions is a measure of the overlap between the corresponding physical states and is used in the foundational probabilistic interpretation of quantum mechanics, the Born rule, relating transition probabilities to inner products. The Schrödinger equation determines how wave functions evolve over time, and a wave function behaves qualitatively like other waves, such as water waves or waves on a string, because the Schrödinger equation is mathematically a type of wave equation. This explains the name "wave function", and gives rise to wave–particle duality. However, the wave function in quantum mechanics describes a kind of physical phenomenon, as of 2023 still open to different interpretations, which fundamentally differs from that of classic mechanical waves.
== Historical background ==
In 1900, Max Planck postulated the proportionality between the frequency f of a photon and its energy E, E = hf, and in 1916 the corresponding relation between a photon's momentum p and wavelength λ, λ = h/p, where h is the Planck constant. In 1923, De Broglie was the first to suggest that the relation λ = h/p, now called the De Broglie relation, holds for massive particles, the chief clue being Lorentz invariance, and this can be viewed as the starting point for the modern development of quantum mechanics. The equations represent wave–particle duality for both massless and massive particles.
In the 1920s and 1930s, quantum mechanics was developed using calculus and linear algebra. Those who used the techniques of calculus included Louis de Broglie, Erwin Schrödinger, and others, developing "wave mechanics". Those who applied the methods of linear algebra included Werner Heisenberg, Max Born, and others, developing "matrix mechanics". Schrödinger subsequently showed that the two approaches were equivalent.
In 1926, Schrödinger published the famous wave equation now named after him, the Schrödinger equation. This equation was based on classical conservation of energy using quantum operators and the de Broglie relations and the solutions of the equation are the wave functions for the quantum system. However, no one was clear on how to interpret it. At first, Schrödinger and others thought that wave functions represent particles that are spread out with most of the particle being where the wave function is large. This was shown to be incompatible with the elastic scattering of a wave packet (representing a particle) off a target; it spreads out in all directions.
While a scattered particle may scatter in any direction, it does not break up and take off in all directions. In 1926, Born provided the perspective of probability amplitude. This relates calculations of quantum mechanics directly to probabilistic experimental observations. It is accepted as part of the Copenhagen interpretation of quantum mechanics. There are many other interpretations of quantum mechanics. In 1927, Hartree and Fock made the first step in an attempt to solve the N-body wave function, and developed the self-consistency cycle: an iterative algorithm to approximate the solution. Now it is also known as the Hartree–Fock method. The Slater determinant and permanent (of a matrix) was part of the method, provided by John C. Slater.
Schrödinger did encounter an equation for the wave function that satisfied relativistic energy conservation before he published the non-relativistic one, but discarded it as it predicted negative probabilities and negative energies. In 1927, Klein, Gordon and Fock also found it, but incorporated the electromagnetic interaction and proved that it was Lorentz invariant. De Broglie also arrived at the same equation in 1928. This relativistic wave equation is now most commonly known as the Klein–Gordon equation.
In 1927, Pauli phenomenologically found a non-relativistic equation to describe spin-1/2 particles in electromagnetic fields, now called the Pauli equation. Pauli found the wave function was not described by a single complex function of space and time, but needed two complex numbers, which respectively correspond to the spin +1/2 and −1/2 states of the fermion. Soon after in 1928, Dirac found an equation from the first successful unification of special relativity and quantum mechanics applied to the electron, now called the Dirac equation. In this, the wave function is a spinor represented by four complex-valued components: two for the electron and two for the electron's antiparticle, the positron. In the non-relativistic limit, the Dirac wave function resembles the Pauli wave function for the electron. Later, other relativistic wave equations were found.
=== Wave functions and wave equations in modern theories ===
All these wave equations are of enduring importance. The Schrödinger equation and the Pauli equation are under many circumstances excellent approximations of the relativistic variants. They are considerably easier to solve in practical problems than the relativistic counterparts.
The Klein–Gordon equation and the Dirac equation, while being relativistic, do not represent full reconciliation of quantum mechanics and special relativity. The branch of quantum mechanics where these equations are studied the same way as the Schrödinger equation, often called relativistic quantum mechanics, while very successful, has its limitations (see e.g. Lamb shift) and conceptual problems (see e.g. Dirac sea).
Relativity makes it inevitable that the number of particles in a system is not constant. For full reconciliation, quantum field theory is needed.
In this theory, the wave equations and the wave functions have their place, but in a somewhat different guise. The main objects of interest are not the wave functions, but rather operators, so called field operators (or just fields where "operator" is understood) on the Hilbert space of states (to be described in the next section). It turns out that the original relativistic wave equations and their solutions are still needed to build the Hilbert space. Moreover, the free field operators, i.e. when interactions are assumed not to exist, turn out to (formally) satisfy the same equation as do the fields (wave functions) in many cases.
Thus the Klein–Gordon equation (spin 0) and the Dirac equation (spin 1⁄2) in this guise remain in the theory. Higher spin analogues include the Proca equation (spin 1), Rarita–Schwinger equation (spin 3⁄2), and, more generally, the Bargmann–Wigner equations. For massless free fields two examples are the free field Maxwell equation (spin 1) and the free field Einstein equation (spin 2) for the field operators.
All of them are essentially a direct consequence of the requirement of Lorentz invariance. Their solutions must transform under a Lorentz transformation in a prescribed way, i.e. under a particular representation of the Lorentz group; this, together with a few other reasonable demands, e.g. the cluster decomposition property (with implications for causality), is enough to fix the equations.
This applies to free field equations; interactions are not included. If a Lagrangian density (including interactions) is available, then the Lagrangian formalism will yield an equation of motion at the classical level. This equation may be very complex and not amenable to solution. Any solution would refer to a fixed number of particles and would not account for the term "interaction" as referred to in these theories, which involves the creation and annihilation of particles and not external potentials as in ordinary "first quantized" quantum theory.
In string theory, the situation remains analogous. For instance, a wave function in momentum space has the role of Fourier expansion coefficient in a general state of a particle (string) with momentum that is not sharply defined.
== Definition (one spinless particle in one dimension) ==
For now, consider the simple case of a non-relativistic single particle, without spin, in one spatial dimension. More general cases are discussed below.
According to the postulates of quantum mechanics, the state of a physical system, at fixed time t, is given by the wave function belonging to a separable complex Hilbert space. As such, the inner product of two wave functions Ψ1 and Ψ2 can be defined as the complex number (at time t)
{\displaystyle (\Psi _{1},\Psi _{2})=\int _{-\infty }^{\infty }\,\Psi _{1}^{*}(x,t)\Psi _{2}(x,t)\,dx<\infty }.
More details are given below. However, the inner product of a wave function Ψ with itself,
{\displaystyle (\Psi ,\Psi )=\|\Psi \|^{2}},
is always a positive real number. The number ‖Ψ‖ (not ‖Ψ‖2) is called the norm of the wave function Ψ.
The separable Hilbert space being considered is infinite-dimensional, which means there is no finite set of square integrable functions which can be added together in various combinations to create every possible square integrable function.
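As a concrete illustration, the inner product and norm can be approximated numerically by sampling wave functions on a finite grid. This is an illustrative sketch, not part of the formalism; the Gaussian profiles, phase factor, and grid bounds below are arbitrary choices:

```python
# Numerical sketch of (Psi1, Psi2) = ∫ Psi1* Psi2 dx and the norm ||Psi||,
# approximated by a Riemann sum on a grid standing in for (-inf, inf).
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi1 = np.exp(-x**2 / 2.0)                            # unnormalized Gaussian
psi2 = np.exp(-(x - 1.0)**2 / 2.0) * np.exp(1j * x)   # shifted, with a phase

inner = np.sum(np.conj(psi1) * psi2) * dx             # (Psi1, Psi2), a complex number
norm1 = np.sqrt(np.sum(np.abs(psi1)**2) * dx)         # ||Psi1||, a positive real number

print(inner, norm1)
```

Both quantities are finite because the Gaussians are square integrable; for this choice norm1 equals π^{1/4} up to discretization error.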
=== Position-space wave functions ===
The state of such a particle is completely described by its wave function,
{\displaystyle \Psi (x,t)\,,}
where x is position and t is time. This is a complex-valued function of two real variables x and t.
For one spinless particle in one dimension, if the wave function is interpreted as a probability amplitude, the square modulus of the wave function, the positive real number
{\displaystyle \left|\Psi (x,t)\right|^{2}=\Psi ^{*}(x,t)\Psi (x,t)=\rho (x),}
is interpreted as the probability density for a measurement of the particle's position at a given time t. The asterisk indicates the complex conjugate. If the particle's position is measured, its location cannot be determined in advance from the wave function; only the probability distribution of the possible outcomes is given.
==== Normalization condition ====
The probability that its position x will be in the interval a ≤ x ≤ b is the integral of the density over this interval:
{\displaystyle P_{a\leq x\leq b}(t)=\int _{a}^{b}\,|\Psi (x,t)|^{2}dx}
where t is the time at which the particle was measured. This leads to the normalization condition:
{\displaystyle \int _{-\infty }^{\infty }\,|\Psi (x,t)|^{2}dx=1\,,}
because if the particle is measured, there is 100% probability that it will be somewhere.
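The normalization condition and the interval probability above can be checked numerically for a sample wave packet. The Gaussian profile and the interval [−1, 1] below are arbitrary illustrative choices:

```python
# Sketch: normalize a Gaussian wave packet on a grid so that ∫|Psi|^2 dx = 1,
# then compute P(a <= x <= b) by integrating the density over [a, b].
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2.0)                          # unnormalized Gaussian
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # enforce normalization

total = np.sum(np.abs(psi)**2) * dx                # ≈ 1: the particle is somewhere
mask = np.abs(x) <= 1.0
p_ab = np.sum(np.abs(psi[mask])**2) * dx           # probability of -1 <= x <= 1

print(total, p_ab)
```

For this density (a normal distribution of variance 1/2), p_ab approximates erf(1) ≈ 0.843.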
For a given system, the set of all possible normalizable wave functions (at any given time) forms an abstract mathematical vector space, meaning that it is possible to add together different wave functions, and multiply wave functions by complex numbers. Technically, wave functions form a ray in a projective Hilbert space rather than an ordinary vector space.
==== Quantum states as vectors ====
At a particular instant of time, all values of the wave function Ψ(x, t) are components of a vector. There are uncountably infinitely many of them and integration is used in place of summation. In Bra–ket notation, this vector is written
{\displaystyle |\Psi (t)\rangle =\int \Psi (x,t)|x\rangle dx}
and is referred to as a "quantum state vector", or simply "quantum state". There are several advantages to understanding wave functions as representing elements of an abstract vector space:
All the powerful tools of linear algebra can be used to manipulate and understand wave functions. For example:
Linear algebra explains how a vector space can be given a basis, and then any vector in the vector space can be expressed in this basis. This explains the relationship between a wave function in position space and a wave function in momentum space and suggests that there are other possibilities too.
Bra–ket notation can be used to manipulate wave functions.
The idea that quantum states are vectors in an abstract vector space is completely general in all aspects of quantum mechanics and quantum field theory, whereas the idea that quantum states are complex-valued "wave" functions of space is only true in certain situations.
The time parameter is often suppressed, and will be in the following. The x coordinate is a continuous index. The |x⟩ are called improper vectors which, unlike proper vectors that are normalizable to unity, can only be normalized to a Dirac delta function.
{\displaystyle \langle x'|x\rangle =\delta (x'-x)}
thus
{\displaystyle \langle x'|\Psi \rangle =\int \Psi (x)\langle x'|x\rangle dx=\Psi (x')}
and
{\displaystyle |\Psi \rangle =\int |x\rangle \langle x|\Psi \rangle dx=\left(\int |x\rangle \langle x|\Psi \rangle dx\right)|\Psi \rangle }
{\displaystyle |\Psi \rangle =\int |x\rangle \langle x|\Psi \rangle dx=\left(\int |x\rangle \langle x|dx\right)|\Psi \rangle }
which reveals the identity operator
{\displaystyle I=\int |x\rangle \langle x|dx\,.}
This is analogous to the completeness relation of an orthonormal basis in an N-dimensional Hilbert space.
Finding the identity operator in a basis allows the abstract state to be expressed explicitly in a basis, and more (the inner product between two state vectors, and other operators for observables, can be expressed in the basis).
=== Momentum-space wave functions ===
The particle also has a wave function in momentum space:
{\displaystyle \Phi (p,t)}
where p is the momentum in one dimension, which can be any value from −∞ to +∞, and t is time.
Analogous to the position case, the inner product of two wave functions Φ1(p, t) and Φ2(p, t) can be defined as:
{\displaystyle (\Phi _{1},\Phi _{2})=\int _{-\infty }^{\infty }\,\Phi _{1}^{*}(p,t)\Phi _{2}(p,t)dp\,.}
One particular solution to the time-independent Schrödinger equation is
{\displaystyle \Psi _{p}(x)=e^{ipx/\hbar },}
a plane wave, which can be used in the description of a particle with momentum exactly p, since it is an eigenfunction of the momentum operator. These functions are not normalizable to unity (they are not square-integrable), so they are not really elements of physical Hilbert space. The set
{\displaystyle \{\Psi _{p}(x,t),-\infty \leq p\leq \infty \}}
forms what is called the momentum basis. This "basis" is not a basis in the usual mathematical sense. For one thing, since the functions are not normalizable, they are instead normalized to a delta function,
{\displaystyle (\Psi _{p},\Psi _{p'})=\delta (p-p').}
For another thing, though they are linearly independent, there are too many of them (they form an uncountable set) for a basis for physical Hilbert space. They can still be used to express all functions in it using Fourier transforms as described next.
=== Relations between position and momentum representations ===
The x and p representations are
{\displaystyle {\begin{aligned}|\Psi \rangle =I|\Psi \rangle &=\int |x\rangle \langle x|\Psi \rangle dx=\int \Psi (x)|x\rangle dx,\\|\Psi \rangle =I|\Psi \rangle &=\int |p\rangle \langle p|\Psi \rangle dp=\int \Phi (p)|p\rangle dp.\end{aligned}}}
Now take the projection of the state Ψ onto eigenfunctions of momentum using the last expression in the two equations,
{\displaystyle \int \Psi (x)\langle p|x\rangle dx=\int \Phi (p')\langle p|p'\rangle dp'=\int \Phi (p')\delta (p-p')dp'=\Phi (p).}
Then, using the known expression for suitably normalized eigenstates of momentum in the position representation (solutions of the free Schrödinger equation),
{\displaystyle \langle x|p\rangle =p(x)={\frac {1}{\sqrt {2\pi \hbar }}}e^{{\frac {i}{\hbar }}px}\Rightarrow \langle p|x\rangle ={\frac {1}{\sqrt {2\pi \hbar }}}e^{-{\frac {i}{\hbar }}px},}
one obtains
{\displaystyle \Phi (p)={\frac {1}{\sqrt {2\pi \hbar }}}\int \Psi (x)e^{-{\frac {i}{\hbar }}px}dx\,.}
Likewise, using eigenfunctions of position,
{\displaystyle \Psi (x)={\frac {1}{\sqrt {2\pi \hbar }}}\int \Phi (p)e^{{\frac {i}{\hbar }}px}dp\,.}
The position-space and momentum-space wave functions are thus found to be Fourier transforms of each other. They are two representations of the same state, containing the same information, and either one is sufficient to calculate any property of the particle.
In practice, the position-space wave function is used much more often than the momentum-space wave function. The potential entering the relevant equation (Schrödinger, Dirac, etc.) determines in which basis the description is easiest. For the harmonic oscillator, x and p enter symmetrically, so there it does not matter which description one uses. The same equation (modulo constants) results. From this, with a little thought, it follows that solutions to the wave equation of the harmonic oscillator are eigenfunctions of the Fourier transform in L2.
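The Fourier-transform relation between Ψ(x) and Φ(p) can be verified numerically (here with ħ = 1). A normalized Gaussian is its own Fourier transform in these units, so the computed Φ(p) should reproduce the same Gaussian; the grids and ranges are illustrative choices:

```python
# Sketch: Phi(p) = (2*pi*hbar)^(-1/2) * ∫ Psi(x) exp(-i p x / hbar) dx
# evaluated by direct quadrature for a standard Gaussian Psi(x).
import numpy as np

hbar = 1.0
x = np.linspace(-15.0, 15.0, 6001)
dx = x[1] - x[0]
psi = np.pi**-0.25 * np.exp(-x**2 / 2.0)       # normalized Gaussian Psi(x)

p = np.linspace(-5.0, 5.0, 201)
phi = np.array([np.sum(psi * np.exp(-1j * pk * x / hbar)) for pk in p]) \
      * dx / np.sqrt(2.0 * np.pi * hbar)       # momentum-space wave function

expected = np.pi**-0.25 * np.exp(-p**2 / 2.0)  # the Gaussian is its own transform
print(np.max(np.abs(phi - expected)))          # small numerical error
```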
== Definitions (other cases) ==
Following are the general forms of the wave function for systems in higher dimensions and more particles, as well as including other degrees of freedom than position coordinates or momentum components.
=== Finite dimensional Hilbert space ===
While Hilbert spaces originally referred to infinite dimensional complete inner product spaces, they, by definition, include finite dimensional complete inner product spaces as well.
In physics, they are often referred to as finite dimensional Hilbert spaces. For every finite dimensional Hilbert space there exist orthonormal basis kets that span the entire Hilbert space.
If the N-dimensional set {\textstyle \{|\phi _{i}\rangle \}} is orthonormal, then the projection operator for the space spanned by these states is given by:
{\displaystyle P=\sum _{i}|\phi _{i}\rangle \langle \phi _{i}|=I}
where the projection is equivalent to the identity operator, since {\textstyle \{|\phi _{i}\rangle \}} spans the entire Hilbert space and thus leaves any vector from the Hilbert space unchanged. This is also known as the completeness relation of a finite dimensional Hilbert space.
The wavefunction is instead given by:
{\displaystyle |\psi \rangle =I|\psi \rangle =\sum _{i}|\phi _{i}\rangle \langle \phi _{i}|\psi \rangle }
where {\textstyle \{\langle \phi _{i}|\psi \rangle \}} is a set of complex numbers which can be used to construct a wavefunction using the above formula.
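The completeness relation and the expansion of |ψ⟩ in an orthonormal basis can be illustrated with a small numerical sketch in C³; the particular basis (columns of a unitary matrix obtained from a QR decomposition) and the state are arbitrary choices:

```python
# Sketch: sum_i |phi_i><phi_i| = I for an orthonormal basis of C^3,
# and reconstruction of |psi> from its components <phi_i|psi>.
import numpy as np

# The columns of the unitary Q from a QR decomposition form an orthonormal basis.
Q, _ = np.linalg.qr(np.array([[1, 2, 0],
                              [0, 1, 3],
                              [1, 0, 1]], dtype=complex))
phis = [Q[:, i] for i in range(3)]

P = sum(np.outer(phi, np.conj(phi)) for phi in phis)   # sum_i |phi_i><phi_i|
print(np.allclose(P, np.eye(3)))                       # True: P equals I

psi = np.array([1.0, 1.0j, -1.0]) / np.sqrt(3.0)
coeffs = [np.vdot(phi, psi) for phi in phis]           # <phi_i|psi> (vdot conjugates its first arg)
reconstructed = sum(c * phi for c, phi in zip(coeffs, phis))
print(np.allclose(reconstructed, psi))                 # True: |psi> = sum_i <phi_i|psi> |phi_i>
```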
==== Probability interpretation of inner product ====
If the set {\textstyle \{|\phi _{i}\rangle \}} are eigenkets of a non-degenerate observable with eigenvalues {\textstyle \lambda _{i}}, then by the postulates of quantum mechanics the probability of measuring the observable to be {\textstyle \lambda _{i}} is given according to the Born rule as:
{\displaystyle P_{\psi }(\lambda _{i})=|\langle \phi _{i}|\psi \rangle |^{2}}
If an eigenvalue {\textstyle \lambda } of some observable has a subset of eigenvectors labelled as {\textstyle \{|\lambda ^{(j)}\rangle \}} (i.e. the eigenvalue is degenerate), then by the postulates of quantum mechanics the probability of measuring the observable to be {\textstyle \lambda } is given by:
{\displaystyle P_{\psi }(\lambda )=\sum _{j}|\langle \lambda ^{(j)}|\psi \rangle |^{2}=|{\widehat {P}}_{\lambda }|\psi \rangle |^{2}}
where {\textstyle {\widehat {P}}_{\lambda }=\sum _{j}|\lambda ^{(j)}\rangle \langle \lambda ^{(j)}|} is a projection operator of states onto the subspace spanned by {\textstyle \{|\lambda ^{(j)}\rangle \}}. The last equality follows from the orthonormal nature of {\textstyle \{|\phi _{i}\rangle \}}.
Hence, the components {\textstyle \{\langle \phi _{i}|\psi \rangle \}}, which specify the state of the quantum mechanical system, have magnitudes whose square gives the probability of measuring the respective {\textstyle |\phi _{i}\rangle } state.
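A minimal numerical sketch of the Born rule for a two-level system, with the standard basis of C² playing the role of the eigenkets |φ_i⟩ and an arbitrary normalized superposition as the state:

```python
# Sketch: Born-rule probabilities |<phi_i|psi>|^2 for a normalized state;
# the probabilities over a complete set of outcomes sum to 1.
import numpy as np

phi0 = np.array([1.0, 0.0], dtype=complex)     # eigenket for outcome 0
phi1 = np.array([0.0, 1.0], dtype=complex)     # eigenket for outcome 1

psi = np.array([1.0, 1.0j]) / np.sqrt(2.0)     # normalized superposition

p0 = abs(np.vdot(phi0, psi))**2                # probability of outcome 0 (= 1/2)
p1 = abs(np.vdot(phi1, psi))**2                # probability of outcome 1 (= 1/2)
print(p0, p1, p0 + p1)
```

Note that the relative phase factor i does not change these probabilities, consistent with the discussion of phase above.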
==== Physical significance of relative phase ====
While the relative phase has observable effects in experiments, the global phase of the system is experimentally indistinguishable. For example, for a particle in a superposition of two states, the global phase cannot be distinguished by finding the expectation value of an observable or the probabilities of observing different states, but relative phases can affect the expectation values of observables.
While the overall phase of the system is considered to be arbitrary, the relative phase for each state {\textstyle |\phi _{i}\rangle } of a prepared state in superposition can be determined based on the physical meaning of the prepared state and its symmetry. For example, the construction of spin states along the x direction as a superposition of spin states along the z direction can be done by applying an appropriate rotation transformation to the spin-along-z states, which provides the appropriate phase of the states relative to each other.
==== Application to include spin ====
An example of a finite dimensional Hilbert space can be constructed using the spin eigenkets of a spin-{\textstyle s} particle, which form a {\textstyle 2s+1} dimensional Hilbert space. However, the general wavefunction of a particle that fully describes its state always belongs to an infinite dimensional Hilbert space, since it involves a tensor product with the Hilbert space relating to the position or momentum of the particle. Nonetheless, the techniques developed for finite dimensional Hilbert spaces are useful, since they can either be treated independently or treated in consideration of the linearity of the tensor product.
Since the spin operator for a given spin-{\textstyle s} particle can be represented as a finite {\textstyle (2s+1)\times (2s+1)} matrix which acts on {\textstyle 2s+1} independent spin vector components, it is usually preferable to denote spin components using matrix/column/row notation as applicable.
For example, each |sz⟩ is usually identified as a column vector:
{\displaystyle |s\rangle \leftrightarrow {\begin{bmatrix}1\\0\\\vdots \\0\\0\\\end{bmatrix}}\,,\quad |s-1\rangle \leftrightarrow {\begin{bmatrix}0\\1\\\vdots \\0\\0\\\end{bmatrix}}\,,\ldots \,,\quad |-(s-1)\rangle \leftrightarrow {\begin{bmatrix}0\\0\\\vdots \\1\\0\\\end{bmatrix}}\,,\quad |-s\rangle \leftrightarrow {\begin{bmatrix}0\\0\\\vdots \\0\\1\\\end{bmatrix}}}
but it is a common abuse of notation, because the kets |sz⟩ are not synonymous or equal to the column vectors. Column vectors simply provide a convenient way to express the spin components.
Corresponding to the notation, the z-component spin operator can be written as:
{\displaystyle {\frac {1}{\hbar }}{\hat {S}}_{z}={\begin{bmatrix}s&0&\cdots &0&0\\0&s-1&\cdots &0&0\\\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&\cdots &-(s-1)&0\\0&0&\cdots &0&-s\end{bmatrix}}}
since the eigenvectors of z-component spin operator are the above column vectors, with eigenvalues being the corresponding spin quantum numbers.
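As an illustration, the matrix representation of S_z/ħ and its eigenvectors can be constructed directly; the choice s = 1 below is arbitrary:

```python
# Sketch: the z-component spin operator S_z / hbar as a diagonal
# (2s+1) x (2s+1) matrix, here for s = 1, with eigenvalues s, s-1, ..., -s.
import numpy as np

s = 1
m = np.arange(s, -s - 1, -1)            # spin quantum numbers: 1, 0, -1
Sz = np.diag(m.astype(float))           # S_z / hbar in the |s_z> basis
print(Sz)

# The basis column vectors are eigenvectors with eigenvalue m:
e0 = np.array([1.0, 0.0, 0.0])          # column vector identified with |s_z = +1>
print(np.allclose(Sz @ e0, 1.0 * e0))   # True
```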
Corresponding to the notation, a vector from such a finite dimensional Hilbert space is hence represented as:
{\displaystyle |\phi \rangle ={\begin{bmatrix}\langle s|\phi \rangle \\\langle s-1|\phi \rangle \\\vdots \\\langle -(s-1)|\phi \rangle \\\langle -s|\phi \rangle \\\end{bmatrix}}={\begin{bmatrix}\varepsilon _{s}\\\varepsilon _{s-1}\\\vdots \\\varepsilon _{-s+1}\\\varepsilon _{-s}\\\end{bmatrix}}}
where {\textstyle \{\varepsilon _{i}\}} are the corresponding complex numbers.
In the following discussion involving spin, the complete wavefunction is considered as a tensor product of spin states from finite dimensional Hilbert spaces and the wavefunction which was previously developed. The basis kets for this Hilbert space are hence taken to be:
{\displaystyle |\mathbf {r} ,s_{z}\rangle =|\mathbf {r} \rangle |s_{z}\rangle }.
=== One-particle states in 3d position space ===
The position-space wave function of a single particle without spin in three spatial dimensions is similar to the case of one spatial dimension above:
{\displaystyle \Psi (\mathbf {r} ,t)}
where r is the position vector in three-dimensional space, and t is time. As always Ψ(r, t) is a complex-valued function of real variables. As a single vector in Dirac notation
{\displaystyle |\Psi (t)\rangle =\int d^{3}\!\mathbf {r} \,\Psi (\mathbf {r} ,t)\,|\mathbf {r} \rangle }
All the previous remarks on inner products, momentum space wave functions, Fourier transforms, and so on extend to higher dimensions.
For a particle with spin, ignoring the position degrees of freedom, the wave function is a function of spin only (time is a parameter);
{\displaystyle \xi (s_{z},t)}
where sz is the spin projection quantum number along the z axis. (The z axis is an arbitrary choice; other axes can be used instead if the wave function is transformed appropriately, see below.) The sz parameter, unlike r and t, is a discrete variable. For example, for a spin-1/2 particle, sz can only be +1/2 or −1/2, and not any other value. (In general, for spin s, sz can be s, s − 1, ..., −s + 1, −s.) Inserting each quantum number gives a complex valued function of space and time; there are 2s + 1 of them. These can be arranged into a column vector
{\displaystyle \xi ={\begin{bmatrix}\xi (s,t)\\\xi (s-1,t)\\\vdots \\\xi (-(s-1),t)\\\xi (-s,t)\\\end{bmatrix}}=\xi (s,t){\begin{bmatrix}1\\0\\\vdots \\0\\0\\\end{bmatrix}}+\xi (s-1,t){\begin{bmatrix}0\\1\\\vdots \\0\\0\\\end{bmatrix}}+\cdots +\xi (-(s-1),t){\begin{bmatrix}0\\0\\\vdots \\1\\0\\\end{bmatrix}}+\xi (-s,t){\begin{bmatrix}0\\0\\\vdots \\0\\1\\\end{bmatrix}}}
In bra–ket notation, these easily arrange into the components of a vector:
{\displaystyle |\xi (t)\rangle =\sum _{s_{z}=-s}^{s}\xi (s_{z},t)\,|s_{z}\rangle }
The entire vector ξ is a solution of the Schrödinger equation (with a suitable Hamiltonian), which unfolds to a coupled system of 2s + 1 ordinary differential equations with solutions ξ(s, t), ξ(s − 1, t), ..., ξ(−s, t). The term "spin function" instead of "wave function" is used by some authors. This contrasts with the solutions for position-space wave functions, the position coordinates being continuous degrees of freedom, because then the Schrödinger equation does take the form of a wave equation.
More generally, for a particle in 3d with any spin, the wave function can be written in "position–spin space" as:
{\displaystyle \Psi (\mathbf {r} ,s_{z},t)}
and these can also be arranged into a column vector
{\displaystyle \Psi (\mathbf {r} ,t)={\begin{bmatrix}\Psi (\mathbf {r} ,s,t)\\\Psi (\mathbf {r} ,s-1,t)\\\vdots \\\Psi (\mathbf {r} ,-(s-1),t)\\\Psi (\mathbf {r} ,-s,t)\\\end{bmatrix}}}
in which the spin dependence is placed in indexing the entries, and the wave function is a complex vector-valued function of space and time only.
All values of the wave function, for continuous as well as discrete variables, collect into a single vector
{\displaystyle |\Psi (t)\rangle =\sum _{s_{z}}\int d^{3}\!\mathbf {r} \,\Psi (\mathbf {r} ,s_{z},t)\,|\mathbf {r} ,s_{z}\rangle }
For a single particle, the tensor product ⊗ of its position state vector |ψ⟩ and spin state vector |ξ⟩ gives the composite position-spin state vector
{\displaystyle |\psi (t)\rangle \!\otimes \!|\xi (t)\rangle =\sum _{s_{z}}\int d^{3}\!\mathbf {r} \,\psi (\mathbf {r} ,t)\,\xi (s_{z},t)\,|\mathbf {r} \rangle \!\otimes \!|s_{z}\rangle }
with the identifications
{\displaystyle |\Psi (t)\rangle =|\psi (t)\rangle \!\otimes \!|\xi (t)\rangle }
{\displaystyle \Psi (\mathbf {r} ,s_{z},t)=\psi (\mathbf {r} ,t)\,\xi (s_{z},t)}
{\displaystyle |\mathbf {r} ,s_{z}\rangle =|\mathbf {r} \rangle \!\otimes \!|s_{z}\rangle }
The tensor product factorization of energy eigenstates is always possible if the orbital and spin angular momenta of the particle are separable in the Hamiltonian operator underlying the system's dynamics (in other words, the Hamiltonian can be split into the sum of orbital and spin terms). The time dependence can be placed in either factor, and the time evolution of each can be studied separately. Under such Hamiltonians, any tensor product state evolves into another tensor product state, which essentially means any unentangled state remains unentangled under time evolution. This is said to happen when there is no physical interaction between the states of the tensor products. In the case of non-separable Hamiltonians, energy eigenstates are said to be some linear combination of such states, which need not be factorizable; examples include a particle in a magnetic field, and spin–orbit coupling.
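The tensor product structure can be illustrated numerically by combining a discretized position amplitude with a spinor via a Kronecker product; the Gaussian profile and spinor below are arbitrary illustrative choices:

```python
# Sketch: a composite position-spin product state on a grid. For a tensor
# product state the components factor as Psi(x_k, s_z) = psi(x_k) * xi(s_z).
import numpy as np

x = np.linspace(-5.0, 5.0, 101)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2.0)
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # discretized position amplitudes

xi = np.array([1.0, 1.0j]) / np.sqrt(2.0)          # normalized spin-1/2 spinor

Psi = np.kron(psi, xi)                             # composite state, shape (202,)
# Component for grid point k and spin index sz is psi[k] * xi[sz]:
k, sz = 50, 1
print(np.isclose(Psi[2 * k + sz], psi[k] * xi[sz]))   # True
```

The composite state stays normalized because the norms of the two factors multiply.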
The preceding discussion is not limited to spin as a discrete variable; the total angular momentum J may also be used. Other discrete degrees of freedom, like isospin, can be expressed similarly to the case of spin above.
=== Many-particle states in 3d position space ===
If there are many particles, in general there is only one wave function, not a separate wave function for each particle. The fact that one wave function describes many particles is what makes quantum entanglement and the EPR paradox possible. The position-space wave function for N particles is written:
{\displaystyle \Psi (\mathbf {r} _{1},\mathbf {r} _{2}\cdots \mathbf {r} _{N},t)}
where ri is the position of the i-th particle in three-dimensional space, and t is time. Altogether, this is a complex-valued function of 3N + 1 real variables.
In quantum mechanics there is a fundamental distinction between identical particles and distinguishable particles. For example, any two electrons are identical and fundamentally indistinguishable from each other; the laws of physics make it impossible to "stamp an identification number" on a certain electron to keep track of it. This translates to a requirement on the wave function for a system of identical particles:
{\displaystyle \Psi \left(\ldots \mathbf {r} _{a},\ldots ,\mathbf {r} _{b},\ldots \right)=\pm \Psi \left(\ldots \mathbf {r} _{b},\ldots ,\mathbf {r} _{a},\ldots \right)}
where the + sign occurs if the particles are all bosons and − sign if they are all fermions. In other words, the wave function is either totally symmetric in the positions of bosons, or totally antisymmetric in the positions of fermions. The physical interchange of particles corresponds to mathematically switching arguments in the wave function. The antisymmetry feature of fermionic wave functions leads to the Pauli principle. Generally, bosonic and fermionic symmetry requirements are the manifestation of particle statistics and are present in other quantum state formalisms.
For N distinguishable particles (no two being identical, i.e. no two having the same set of quantum numbers), there is no requirement for the wave function to be either symmetric or antisymmetric.
For a collection of particles, some identical with coordinates r1, r2, ... and others distinguishable x1, x2, ... (not identical with each other, and not identical to the aforementioned identical particles), the wave function is symmetric or antisymmetric in the identical particle coordinates ri only:
{\displaystyle \Psi \left(\ldots \mathbf {r} _{a},\ldots ,\mathbf {r} _{b},\ldots ,\mathbf {x} _{1},\mathbf {x} _{2},\ldots \right)=\pm \Psi \left(\ldots \mathbf {r} _{b},\ldots ,\mathbf {r} _{a},\ldots ,\mathbf {x} _{1},\mathbf {x} _{2},\ldots \right)}
Again, there is no symmetry requirement for the distinguishable particle coordinates xi.
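The fermionic antisymmetry requirement can be illustrated with a two-particle wave function built as a 2×2 Slater determinant of two single-particle orbitals; the orbitals below (the lowest two harmonic-oscillator eigenfunctions) are an arbitrary illustrative choice:

```python
# Sketch: an antisymmetric two-fermion wave function,
# Psi(x1, x2) = (1/sqrt(2)) * det [[a(x1), b(x1)], [a(x2), b(x2)]],
# which changes sign under exchange of the two coordinates.
import numpy as np

def orbital_a(x):
    # harmonic-oscillator ground state (illustrative choice)
    return np.pi**-0.25 * np.exp(-x**2 / 2.0)

def orbital_b(x):
    # harmonic-oscillator first excited state (illustrative choice)
    return np.pi**-0.25 * np.sqrt(2.0) * x * np.exp(-x**2 / 2.0)

def psi_fermions(x1, x2):
    return (orbital_a(x1) * orbital_b(x2)
            - orbital_b(x1) * orbital_a(x2)) / np.sqrt(2.0)

print(psi_fermions(0.3, 1.1))
print(psi_fermions(1.1, 0.3))    # equal magnitude, opposite sign
print(psi_fermions(0.7, 0.7))    # 0: two fermions cannot occupy the same state (Pauli)
```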
The wave function for N particles each with spin is the complex-valued function
{\displaystyle \Psi (\mathbf {r} _{1},\mathbf {r} _{2}\cdots \mathbf {r} _{N},s_{z\,1},s_{z\,2}\cdots s_{z\,N},t)}
Accumulating all these components into a single vector,
{\displaystyle |\Psi \rangle =\overbrace {\sum _{s_{z\,1},\ldots ,s_{z\,N}}} ^{\text{discrete labels}}\overbrace {\int _{R_{N}}d^{3}\mathbf {r} _{N}\cdots \int _{R_{1}}d^{3}\mathbf {r} _{1}} ^{\text{continuous labels}}\;\underbrace {{\Psi }(\mathbf {r} _{1},\ldots ,\mathbf {r} _{N},s_{z\,1},\ldots ,s_{z\,N})} _{\begin{array}{c}{\text{wave function (component of }}\\{\text{ state vector along basis state)}}\end{array}}\;\underbrace {|\mathbf {r} _{1},\ldots ,\mathbf {r} _{N},s_{z\,1},\ldots ,s_{z\,N}\rangle } _{\text{basis state (basis ket)}}\,.}
For identical particles, symmetry requirements apply to both position and spin arguments of the wave function so it has the overall correct symmetry.
The formulae for the inner products are integrals over all coordinates or momenta and sums over all spin quantum numbers. For the general case of N particles with spin in 3-d,
{\displaystyle (\Psi _{1},\Psi _{2})=\sum _{s_{z\,N}}\cdots \sum _{s_{z\,2}}\sum _{s_{z\,1}}\int \limits _{\mathrm {all\,space} }d^{3}\mathbf {r} _{1}\int \limits _{\mathrm {all\,space} }d^{3}\mathbf {r} _{2}\cdots \int \limits _{\mathrm {all\,space} }d^{3}\mathbf {r} _{N}\Psi _{1}^{*}\left(\mathbf {r} _{1}\cdots \mathbf {r} _{N},s_{z\,1}\cdots s_{z\,N},t\right)\Psi _{2}\left(\mathbf {r} _{1}\cdots \mathbf {r} _{N},s_{z\,1}\cdots s_{z\,N},t\right)}
this is altogether N three-dimensional volume integrals and N sums over the spins. The differential volume elements d3ri are also written "dVi" or "dxi dyi dzi".
The multidimensional Fourier transforms of the position or position–spin space wave functions yield momentum or momentum–spin space wave functions.
==== Probability interpretation ====
For the general case of N particles with spin in 3d, if Ψ is interpreted as a probability amplitude, the probability density is
{\displaystyle \rho \left(\mathbf {r} _{1}\cdots \mathbf {r} _{N},s_{z\,1}\cdots s_{z\,N},t\right)=\left|\Psi \left(\mathbf {r} _{1}\cdots \mathbf {r} _{N},s_{z\,1}\cdots s_{z\,N},t\right)\right|^{2}}
and the probability that particle 1 is in region R1 with spin sz1 = m1 and particle 2 is in region R2 with spin sz2 = m2 etc. at time t is the integral of the probability density over these regions and evaluated at these spin numbers:
{\displaystyle P_{\mathbf {r} _{1}\in R_{1},s_{z\,1}=m_{1},\ldots ,\mathbf {r} _{N}\in R_{N},s_{z\,N}=m_{N}}(t)=\int _{R_{1}}d^{3}\mathbf {r} _{1}\int _{R_{2}}d^{3}\mathbf {r} _{2}\cdots \int _{R_{N}}d^{3}\mathbf {r} _{N}\left|\Psi \left(\mathbf {r} _{1}\cdots \mathbf {r} _{N},m_{1}\cdots m_{N},t\right)\right|^{2}}
==== Physical significance of phase ====
In non-relativistic quantum mechanics, it can be shown using Schrödinger's time-dependent wave equation that the equation
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {J} =0}
is satisfied, where
ρ
(
x
,
t
)
=
|
ψ
(
x
,
t
)
|
2
{\textstyle \rho (\mathbf {x} ,t)=|\psi (\mathbf {x} ,t)|^{2}}
is the probability density and
{\textstyle \mathbf {J} (\mathbf {x} ,t)={\frac {\hbar }{2im}}(\psi ^{*}\nabla \psi -\psi \nabla \psi ^{*})={\frac {\hbar }{m}}{\text{Im}}(\psi ^{*}\nabla \psi )}
is known as the probability flux, in accordance with the continuity-equation form of the above equation.
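The flux formula can be sketched numerically. Below is a minimal check (assuming ħ = m = 1 and a plane wave ψ = e^{ikx}, for which J should equal ħk/m = k):

```python
import numpy as np

# Sketch (hbar = m = 1 assumed): the probability flux
# J = (hbar/m) * Im(psi* dpsi/dx), evaluated with finite differences
# for a plane wave psi = exp(i k x). The expected value is k.
k = 1.5
x = np.linspace(0, 10, 10001)
psi = np.exp(1j * k * x)

dpsi_dx = np.gradient(psi, x)            # central finite differences
J = np.imag(np.conj(psi) * dpsi_dx)      # hbar/m = 1

print(round(J[5000], 4))                 # ~1.5 = k, a uniform rightward flux
```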
Using the following expression for wavefunction:
{\displaystyle \psi (\mathbf {x} ,t)={\sqrt {\rho (\mathbf {x} ,t)}}\exp {\frac {iS(\mathbf {x} ,t)}{\hbar }}}
where
{\textstyle \rho (\mathbf {x} ,t)=|\psi (\mathbf {x} ,t)|^{2}}
is the probability density and
{\textstyle S(\mathbf {x} ,t)}
is the phase of the wavefunction, it can be shown that:
{\displaystyle \mathbf {J} (\mathbf {x} ,t)={\frac {\rho \nabla S}{m}}}
Hence the spatial variation of the phase characterizes the probability flux.
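A quick numerical sketch of this relation (assuming ħ = m = 1 and an arbitrary smooth density and phase, chosen for illustration) compares the two expressions for the flux:

```python
import numpy as np

# Sketch (hbar = m = 1 assumed): for psi = sqrt(rho) * exp(i S), the two
# flux expressions Im(psi* dpsi/dx) and rho * dS/dx should agree.
# rho and S below are arbitrary illustrative choices.
x = np.linspace(-5, 5, 10001)
rho = np.exp(-x ** 2) / np.sqrt(np.pi)   # a normalized density
S = 0.3 * x ** 2                         # an arbitrary smooth phase
psi = np.sqrt(rho) * np.exp(1j * S)

J_direct = np.imag(np.conj(psi) * np.gradient(psi, x))
J_phase = rho * np.gradient(S, x)

print(np.max(np.abs(J_direct - J_phase)) < 1e-4)   # True
```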
In classical analogy, for
{\textstyle \mathbf {J} =\rho \mathbf {v} }
, the quantity
{\textstyle {\frac {\nabla S}{m}}}
is analogous to velocity. Note that this does not imply a literal interpretation of
{\textstyle {\frac {\nabla S}{m}}}
as velocity, since velocity and position cannot be simultaneously determined as per the uncertainty principle. Substituting this form of the wavefunction in Schrödinger's time-dependent wave equation, and taking the classical limit
{\textstyle \hbar |\nabla ^{2}S|\ll |\nabla S|^{2}}
:
{\displaystyle {\frac {1}{2m}}|\nabla S(\mathbf {x} ,t)|^{2}+V(\mathbf {x} )+{\frac {\partial S}{\partial t}}=0}
which is analogous to the Hamilton–Jacobi equation from classical mechanics. This interpretation fits with Hamilton–Jacobi theory, in which
{\textstyle \mathbf {P} _{\text{class.}}=\nabla S}
, where S is Hamilton's principal function.
== Time dependence ==
For systems in time-independent potentials, the wave function can always be written as a function of the degrees of freedom multiplied by a time-dependent phase factor, the form of which is given by the Schrödinger equation. For N particles, considering their positions only and suppressing other degrees of freedom,
{\displaystyle \Psi (\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,\mathbf {r} _{N},t)=e^{-iEt/\hbar }\,\psi (\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,\mathbf {r} _{N})\,,}
where E is the energy eigenvalue of the system corresponding to the eigenstate Ψ. Wave functions of this form are called stationary states.
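A minimal sketch of why such states are "stationary" (assuming ħ = 1 and an arbitrary spatial profile): the phase factor has unit modulus, so the density is time-independent.

```python
import numpy as np

# Sketch (hbar = 1 assumed): for a stationary state
# Psi(x, t) = exp(-i E t) psi(x), the density |Psi|^2 does not
# depend on t, because |exp(-i E t)| = 1.
x = np.linspace(-5, 5, 101)
psi = np.exp(-x ** 2 / 2)        # any spatial profile
E = 2.7
for t in (0.0, 1.0, 10.0):
    Psi_t = np.exp(-1j * E * t) * psi
    assert np.allclose(np.abs(Psi_t) ** 2, np.abs(psi) ** 2)
print("density is time-independent")
```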
The time dependence of the quantum state and the operators can be placed according to unitary transformations on the operators and states. For any quantum state |Ψ⟩ and operator O, in the Schrödinger picture |Ψ(t)⟩ changes with time according to the Schrödinger equation while O is constant. In the Heisenberg picture it is the other way round: |Ψ⟩ is constant while O(t) evolves with time according to the Heisenberg equation of motion. The Dirac (or interaction) picture is intermediate: time dependence is placed in both operators and states, which evolve according to equations of motion. It is useful primarily in computing S-matrix elements.
== Non-relativistic examples ==
The following are solutions to the Schrödinger equation for one non-relativistic spinless particle.
=== Finite potential barrier ===
One of the most prominent features of wave mechanics is the possibility for a particle to reach a location with a prohibitive (in classical mechanics) force potential. A common model is the "potential barrier", the one-dimensional case has the potential
{\displaystyle V(x)={\begin{cases}V_{0}&|x|<a\\0&|x|\geq a\end{cases}}}
and the steady-state solutions to the wave equation have the form (for some constants k, κ)
{\displaystyle \Psi (x)={\begin{cases}A_{\mathrm {r} }e^{ikx}+A_{\mathrm {l} }e^{-ikx}&x<-a,\\B_{\mathrm {r} }e^{\kappa x}+B_{\mathrm {l} }e^{-\kappa x}&|x|\leq a,\\C_{\mathrm {r} }e^{ikx}+C_{\mathrm {l} }e^{-ikx}&x>a.\end{cases}}}
Note that these wave functions are not normalized; see scattering theory for discussion.
The standard interpretation of this is as a stream of particles being fired at the step from the left (the direction of negative x): setting Ar = 1 corresponds to firing particles singly; the terms containing Ar and Cr signify motion to the right, while Al and Cl – to the left. Under this beam interpretation, put Cl = 0 since no particles are coming from the right. By applying the continuity of wave functions and their derivatives at the boundaries, it is hence possible to determine the constants above.
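This matching procedure can be sketched as a small linear solve. The example below assumes ħ = m = 1 and illustrative values V₀ = 2, a = 1, E = 1 (below the barrier); it determines the remaining constants and checks that reflection and transmission probabilities sum to one.

```python
import numpy as np

# Sketch (hbar = m = 1; assumed values V0 = 2, a = 1, E = 1 < V0):
# impose continuity of Psi and Psi' at x = -a and x = +a with A_r = 1
# (unit incident amplitude) and C_l = 0 (nothing incoming from the
# right), then solve the 4x4 system for A_l, B_r, B_l, C_r.
V0, a, E = 2.0, 1.0, 1.0
k = np.sqrt(2 * E)
kappa = np.sqrt(2 * (V0 - E))

# Unknown vector: [A_l, B_r, B_l, C_r]
M = np.array([
    [np.exp(1j * k * a), -np.exp(-kappa * a), -np.exp(kappa * a), 0],
    [-1j * k * np.exp(1j * k * a), -kappa * np.exp(-kappa * a),
     kappa * np.exp(kappa * a), 0],
    [0, np.exp(kappa * a), np.exp(-kappa * a), -np.exp(1j * k * a)],
    [0, kappa * np.exp(kappa * a), -kappa * np.exp(-kappa * a),
     -1j * k * np.exp(1j * k * a)],
], dtype=complex)
b = np.array([-np.exp(-1j * k * a),
              -1j * k * np.exp(-1j * k * a), 0, 0], dtype=complex)

A_l, B_r, B_l, C_r = np.linalg.solve(M, b)
R, T = abs(A_l) ** 2, abs(C_r) ** 2
print(round(R + T, 6))   # 1.0: probability is conserved
```

The nonzero transmission T for E < V₀ is exactly the tunnelling effect described above.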
In a semiconductor crystallite whose radius is smaller than the size of its exciton Bohr radius, the excitons are squeezed, leading to quantum confinement. The energy levels can then be modeled using the particle in a box model in which the energy of different states is dependent on the length of the box.
=== Quantum harmonic oscillator ===
The wave functions for the quantum harmonic oscillator can be expressed in terms of Hermite polynomials Hn, they are
{\displaystyle \Psi _{n}(x)={\sqrt {\frac {1}{2^{n}\,n!}}}\cdot \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\cdot e^{-{\frac {m\omega x^{2}}{2\hbar }}}\cdot H_{n}{\left({\sqrt {\frac {m\omega }{\hbar }}}x\right)}}
where n = 0, 1, 2, ....
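These functions can be evaluated with the physicists' Hermite polynomials available in NumPy; the sketch below (assuming m = ω = ħ = 1) checks orthonormality of the first few states numerically.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

# Sketch (m = omega = hbar = 1 assumed): build Psi_n from the
# physicists' Hermite polynomials H_n and check orthonormality on a grid.
def ho_wavefunction(n, x):
    coefs = np.zeros(n + 1)
    coefs[n] = 1.0                       # select H_n in the Hermite basis
    norm = np.pi ** -0.25 / math.sqrt(2.0 ** n * math.factorial(n))
    return norm * np.exp(-x ** 2 / 2) * hermval(x, coefs)

x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
for i in range(3):
    for j in range(3):
        overlap = np.sum(ho_wavefunction(i, x) * ho_wavefunction(j, x)) * dx
        assert abs(overlap - (1.0 if i == j else 0.0)) < 1e-6
print("first three oscillator states are orthonormal")
```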
=== Hydrogen atom ===
The wave functions of an electron in a hydrogen atom are expressed in terms of spherical harmonics and generalized Laguerre polynomials (these are defined differently by different authors—see main article on them and the hydrogen atom).
It is convenient to use spherical coordinates, and the wave function can be separated into functions of each coordinate,
{\displaystyle \Psi _{n\ell m}(r,\theta ,\phi )=R(r)\,\,Y_{\ell }^{m}\!(\theta ,\phi )}
where R are radial functions and Y_ℓ^m(θ, φ) are spherical harmonics of degree ℓ and order m. This is the only atom for which the Schrödinger equation has been solved exactly; multi-electron atoms require approximate methods. The family of solutions is:
{\displaystyle \Psi _{n\ell m}(r,\theta ,\phi )={\sqrt {{\left({\frac {2}{na_{0}}}\right)}^{3}{\frac {(n-\ell -1)!}{2n[(n+\ell )!]}}}}e^{-r/na_{0}}\left({\frac {2r}{na_{0}}}\right)^{\ell }L_{n-\ell -1}^{2\ell +1}\left({\frac {2r}{na_{0}}}\right)\cdot Y_{\ell }^{m}(\theta ,\phi )}
where a0 = 4πε0ħ2/mee2 is the Bohr radius,
L_{n − ℓ − 1}^{2ℓ + 1} are the generalized Laguerre polynomials of degree n − ℓ − 1, n = 1, 2, ... is the principal quantum number, ℓ = 0, 1, ..., n − 1 the azimuthal quantum number, m = −ℓ, −ℓ + 1, ..., ℓ − 1, ℓ the magnetic quantum number. Hydrogen-like atoms have very similar solutions.
This solution does not take into account the spin of the electron.
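As a sketch, the n = 1, ℓ = m = 0 member of this family reduces (with a₀ = 1) to Ψ₁₀₀ = e^{−r}/√π, and its normalization can be verified numerically with the spherical volume element:

```python
import numpy as np

# Sketch (a0 = 1 assumed): the hydrogen ground state
# Psi_100 = exp(-r) / sqrt(pi), integrated over all space with the
# spherical volume element 4*pi*r^2 dr, has total probability 1.
r = np.linspace(0, 40, 40001)
dr = r[1] - r[0]
psi = np.exp(-r) / np.sqrt(np.pi)

total = np.sum(4 * np.pi * r ** 2 * np.abs(psi) ** 2) * dr
print(round(total, 4))   # ~1.0
```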
In the figure of the hydrogen orbitals, the 19 sub-images are images of wave functions in position space (their norm squared). The wave functions represent the abstract state characterized by the triple of quantum numbers (n, ℓ, m), in the lower right of each image. These are the principal quantum number, the orbital angular momentum quantum number, and the magnetic quantum number. Together with one spin-projection quantum number of the electron, this is a complete set of observables.
The figure can serve to illustrate some further properties of the function spaces of wave functions.
In this case, the wave functions are square integrable. One can initially take the function space as the space of square integrable functions, usually denoted L2.
The displayed functions are solutions to the Schrödinger equation. Obviously, not every function in L2 satisfies the Schrödinger equation for the hydrogen atom. The function space is thus a subspace of L2.
The displayed functions form part of a basis for the function space. To each triple (n, ℓ, m), there corresponds a basis wave function. If spin is taken into account, there are two basis functions for each triple. The function space thus has a countable basis.
The basis functions are mutually orthonormal.
== Wave functions and function spaces ==
The concept of function spaces enters naturally in the discussion about wave functions. A function space is a set of functions, usually with some defining requirements on the functions (in the present case that they are square integrable), sometimes with an algebraic structure on the set (in the present case a vector space structure with an inner product), together with a topology on the set. The latter will be used only sparsely here; it is needed only to obtain a precise definition of what it means for a subset of a function space to be closed. It will be concluded below that the function space of wave functions is a Hilbert space. This observation is the foundation of the predominant mathematical formulation of quantum mechanics.
=== Vector space structure ===
A wave function is an element of a function space partly characterized by the following concrete and abstract descriptions.
The Schrödinger equation is linear. This means that the solutions to it, wave functions, can be added and multiplied by scalars to form a new solution. The set of solutions to the Schrödinger equation is a vector space.
The superposition principle of quantum mechanics. If Ψ and Φ are two states in the abstract space of states of a quantum mechanical system, and a and b are any two complex numbers, then aΨ + bΦ is a valid state as well. (Whether the null vector counts as a valid state ("no system present") is a matter of definition. The null vector does not at any rate describe the vacuum state in quantum field theory.) The set of allowable states is a vector space.
This similarity is of course not accidental. There are also distinctions between the spaces to keep in mind.
=== Representations ===
Basic states are characterized by a set of quantum numbers. This is a set of eigenvalues of a maximal set of commuting observables. Physical observables are represented by linear operators, also called observables, on the vector space. Maximality means that no further algebraically independent observables that commute with the ones already present can be added to the set. A choice of such a set may be called a choice of representation.
It is a postulate of quantum mechanics that a physically observable quantity of a system, such as position, momentum, or spin, is represented by a linear Hermitian operator on the state space. The possible outcomes of measurement of the quantity are the eigenvalues of the operator. At a deeper level, most observables, perhaps all, arise as generators of symmetries.
The physical interpretation is that such a set represents what can – in theory – simultaneously be measured with arbitrary precision. The Heisenberg uncertainty relation prohibits simultaneous exact measurements of two non-commuting observables.
The set is non-unique. It may for a one-particle system, for example, be position and spin z-projection, (x, Sz), or it may be momentum and spin y-projection, (p, Sy). In this case, the operator corresponding to position (a multiplication operator in the position representation) and the operator corresponding to momentum (a differential operator in the position representation) do not commute.
Once a representation is chosen, there is still arbitrariness. It remains to choose a coordinate system. This may, for example, correspond to a choice of x, y- and z-axis, or a choice of curvilinear coordinates as exemplified by the spherical coordinates used for the Hydrogen atomic wave functions. This final choice also fixes a basis in abstract Hilbert space. The basic states are labeled by the quantum numbers corresponding to the maximal set of commuting observables and an appropriate coordinate system.
The abstract states are "abstract" only in that an arbitrary choice necessary for a particular explicit description of it is not given. This is the same as saying that no choice of maximal set of commuting observables has been given. This is analogous to a vector space without a specified basis. Wave functions corresponding to a state are accordingly not unique. This non-uniqueness reflects the non-uniqueness in the choice of a maximal set of commuting observables. For one spin particle in one dimension, to a particular state there corresponds two wave functions, Ψ(x, Sz) and Ψ(p, Sy), both describing the same state.
For each choice of maximal commuting sets of observables for the abstract state space, there is a corresponding representation that is associated to a function space of wave functions.
Between all these different function spaces and the abstract state space, there are one-to-one correspondences (here disregarding normalization and unobservable phase factors), the common denominator here being a particular abstract state. The relationship between the momentum and position space wave functions, for instance, describing the same state is the Fourier transform.
Each choice of representation should be thought of as specifying a unique function space in which wave functions corresponding to that choice of representation live. This distinction is best kept, even if one could argue that two such function spaces are mathematically equal, e.g. being the set of square integrable functions. One can then think of the function spaces as two distinct copies of that set.
=== Inner product ===
There is an additional algebraic structure on the vector spaces of wave functions and the abstract state space.
Physically, different wave functions are interpreted to overlap to some degree. A system in a state Ψ that does not overlap with a state Φ cannot be found to be in the state Φ upon measurement. But if Φ1, Φ2, … overlap Ψ to some degree, there is a chance that measurement of a system described by Ψ will find it in states Φ1, Φ2, …. Selection rules are also observed to apply. These are usually formulated in terms of the preservation of some quantum numbers. This means that certain processes allowable from some perspectives (e.g. energy and momentum conservation) do not occur because the initial and final total wave functions do not overlap.
Mathematically, it turns out that solutions to the Schrödinger equation for particular potentials are orthogonal in some manner, this is usually described by an integral
{\displaystyle \int \Psi _{m}^{*}\Psi _{n}w\,dV=\delta _{nm},}
where m, n are (sets of) indices (quantum numbers) labeling different solutions, the strictly positive function w is called a weight function, and δmn is the Kronecker delta. The integration is taken over all of the relevant space.
This motivates the introduction of an inner product on the vector space of abstract quantum states, compatible with the mathematical observations above when passing to a representation. It is denoted (Ψ, Φ), or in the Bra–ket notation ⟨Ψ|Φ⟩. It yields a complex number. With the inner product, the function space is an inner product space. The explicit appearance of the inner product (usually an integral or a sum of integrals) depends on the choice of representation, but the complex number (Ψ, Φ) does not. Much of the physical interpretation of quantum mechanics stems from the Born rule. It states that the probability p of finding upon measurement the state Φ given the system is in the state Ψ is
{\displaystyle p=|(\Phi ,\Psi )|^{2},}
where Φ and Ψ are assumed normalized. Consider a scattering experiment. In quantum field theory, if Φout describes a state in the "distant future" (an "out state") after interactions between scattering particles have ceased, and Ψin an "in state" in the "distant past", then the quantities (Φout, Ψin), with Φout and Ψin varying over a complete set of in states and out states respectively, is called the S-matrix or scattering matrix. Knowledge of it is, effectively, having solved the theory at hand, at least as far as predictions go. Measurable quantities such as decay rates and scattering cross sections are calculable from the S-matrix.
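The Born rule can be sketched for discretized wave functions, where the inner product becomes a weighted sum. The example below is an assumed toy case with two displaced Gaussians, not a specific physical system.

```python
import numpy as np

# Sketch: the Born rule p = |(Phi, Psi)|^2 for two normalized states,
# discretized on a grid so the inner product is a weighted sum.
x = np.linspace(-8, 8, 16001)
dx = x[1] - x[0]

def normalize(f):
    return f / np.sqrt(np.sum(np.abs(f) ** 2) * dx)

psi = normalize(np.exp(-x ** 2 / 2))            # state the system is in
phi = normalize(np.exp(-(x - 1) ** 2 / 2))      # state tested for

p = abs(np.sum(np.conj(phi) * psi) * dx) ** 2
print(round(p, 4))   # ~0.6065 = exp(-1/2) for unit displacement
```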
=== Hilbert space ===
The above observations encapsulate the essence of the function spaces of which wave functions are elements. However, the description is not yet complete. There is a further technical requirement on the function space, that of completeness, that allows one to take limits of sequences in the function space, and be ensured that, if the limit exists, it is an element of the function space. A complete inner product space is called a Hilbert space. The property of completeness is crucial in advanced treatments and applications of quantum mechanics. For instance, the existence of projection operators or orthogonal projections relies on the completeness of the space. These projection operators, in turn, are essential for the statement and proof of many useful theorems, e.g. the spectral theorem. It is not very important in introductory quantum mechanics, and technical details and links may be found in footnotes like the one that follows.
The space L2 is a Hilbert space, with inner product presented later. The function space of the example of the figure is a subspace of L2. A subspace of a Hilbert space is a Hilbert space if it is closed.
In summary, the set of all possible normalizable wave functions for a system with a particular choice of basis, together with the null vector, constitute a Hilbert space.
Not all functions of interest are elements of some Hilbert space, say L2. The most glaring example is the set of functions e^{2πip · x/h}. These are plane wave solutions of the Schrödinger equation for a free particle that are not normalizable, hence not in L2. But they are nonetheless fundamental for the description. One can, using them, express functions that are normalizable using wave packets. They are, in a sense, a basis (but not a Hilbert space basis, nor a Hamel basis) in which wave functions of interest can be expressed. There is also the artifact "normalization to a delta function" that is frequently employed for notational convenience, see further down. The delta functions themselves are not square integrable either.
The above description of the function space containing the wave functions is mostly mathematically motivated. The function spaces are, due to completeness, very large in a certain sense. Not all functions are realistic descriptions of any physical system. For instance, in the function space L2 one can find the function that takes on the value 0 for all rational numbers and -i for the irrationals in the interval [0, 1]. This is square integrable,
but can hardly represent a physical state.
=== Common Hilbert spaces ===
While the space of solutions as a whole is a Hilbert space there are many other Hilbert spaces that commonly occur as ingredients.
Square integrable complex-valued functions on the interval [0, 2π]. The set {e^{int}/√(2π), n ∈ Z} is a Hilbert space basis, i.e. a maximal orthonormal set.
The Fourier transform takes functions in the above space to elements of l2(Z), the space of square summable functions Z → C. The latter space is a Hilbert space and the Fourier transform is an isomorphism of Hilbert spaces. Its basis is {ei, i ∈ Z} with ei(j) = δij, i, j ∈ Z.
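This correspondence can be sketched numerically: expand a trigonometric polynomial in the orthonormal basis and check Parseval's identity, which expresses that the Fourier map preserves the Hilbert-space norm.

```python
import numpy as np

# Sketch: expand a function on [0, 2*pi] in the orthonormal basis
# e^{int}/sqrt(2*pi) and check Parseval's identity: the sum of squared
# coefficient moduli equals the squared L2 norm of the function.
N = 4096
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
dt = t[1] - t[0]
f = 1.0 + 2.0 * np.cos(t) + 0.5 * np.sin(3 * t)   # a trig polynomial

norm_sq = np.sum(np.abs(f) ** 2) * dt
coef_sq = sum(
    abs(np.sum(f * np.exp(-1j * n * t)) * dt / np.sqrt(2 * np.pi)) ** 2
    for n in range(-5, 6))
print(abs(norm_sq - coef_sq) < 1e-8)   # True
```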
The most basic example of spanning polynomials is in the space of square integrable functions on the interval [–1, 1], for which the Legendre polynomials are a Hilbert space basis (complete orthonormal set).
The square integrable functions on the unit sphere S2 form a Hilbert space. The basis functions in this case are the spherical harmonics. The Legendre polynomials are ingredients in the spherical harmonics. Most problems with rotational symmetry will have "the same" (known) solution with respect to that symmetry, so the original problem is reduced to a problem of lower dimensionality.
The associated Laguerre polynomials appear in the hydrogenic wave function problem after factoring out the spherical harmonics. These span the Hilbert space of square integrable functions on the semi-infinite interval [0, ∞).
More generally, one may consider a unified treatment of all second order polynomial solutions to the Sturm–Liouville equations in the setting of Hilbert space. These include the Legendre and Laguerre polynomials as well as Chebyshev polynomials, Jacobi polynomials and Hermite polynomials. All of these actually appear in physical problems, the latter ones in the harmonic oscillator, and what is otherwise a bewildering maze of properties of special functions becomes an organized body of facts. For this, see Byron & Fuller (1992, Chapter 5).
Finite-dimensional Hilbert spaces also occur. The space Cn is a Hilbert space of dimension n. The inner product is the standard inner product on these spaces. In it, the "spin part" of a single particle wave function resides.
In the non-relativistic description of an electron one has n = 2 and the total wave function is a solution of the Pauli equation.
In the corresponding relativistic treatment, n = 4 and the wave function solves the Dirac equation.
With more particles, the situation is more complicated. One has to employ tensor products and use representation theory of the symmetry groups involved (the rotation group and the Lorentz group respectively) to extract from the tensor product the spaces in which the (total) spin wave functions reside. (Further problems arise in the relativistic case unless the particles are free. See the Bethe–Salpeter equation.) Corresponding remarks apply to the concept of isospin, for which the symmetry group is SU(2). The models of the nuclear forces of the sixties (still useful today, see nuclear force) used the symmetry group SU(3). In this case, as well, the part of the wave functions corresponding to the inner symmetries resides in some Cn or subspaces of tensor products of such spaces.
In quantum field theory the underlying Hilbert space is Fock space. It is built from free single-particle states, i.e. wave functions when a representation is chosen, and can accommodate any finite, not necessarily constant in time, number of particles. The interesting (or rather the tractable) dynamics lies not in the wave functions but in the field operators that are operators acting on Fock space. Thus the Heisenberg picture is the most common choice (constant states, time varying operators).
Due to the infinite-dimensional nature of the system, the appropriate mathematical tools are objects of study in functional analysis.
=== Simplified description ===
Not all introductory textbooks take the long route and introduce the full Hilbert space machinery; the focus is instead on the non-relativistic Schrödinger equation in the position representation for certain standard potentials. The following constraints on the wave function are sometimes explicitly formulated for the calculations and physical interpretation to make sense:
The wave function must be square integrable. This is motivated by the Copenhagen interpretation of the wave function as a probability amplitude.
It must be everywhere continuous and everywhere continuously differentiable. This is motivated by the appearance of the Schrödinger equation for most physically reasonable potentials.
It is possible to relax these conditions somewhat for special purposes.
If these requirements are not met, it is not possible to interpret the wave function as a probability amplitude. Exceptions to the continuity-of-derivatives rule can arise at points where the potential is infinitely discontinuous; for example, in the particle in a box, the derivative of the wavefunction can be discontinuous at the boundary of the box, where the potential has an infinite discontinuity.
This does not alter the structure of the Hilbert space that these particular wave functions inhabit, but the subspace of the square-integrable functions L2, which is a Hilbert space, satisfying the second requirement is not closed in L2, hence not a Hilbert space in itself.
The functions that do not meet the requirements are still needed for both technical and practical reasons.
== More on wave functions and abstract state space ==
As has been demonstrated, the set of all possible wave functions in some representation for a system constitute an in general infinite-dimensional Hilbert space. Due to the multiple possible choices of representation basis, these Hilbert spaces are not unique. One therefore talks about an abstract Hilbert space, state space, where the choice of representation and basis is left undetermined. Specifically, each state is represented as an abstract vector in state space. A quantum state |Ψ⟩ in any representation is generally expressed as a vector
{\displaystyle |\Psi \rangle =\sum _{\boldsymbol {\alpha }}\int d^{m}\!{\boldsymbol {\omega }}\,\,\Psi _{t}({\boldsymbol {\alpha }},{\boldsymbol {\omega }})\,|{\boldsymbol {\alpha }},{\boldsymbol {\omega }}\rangle }
where
|α, ω⟩ the basis vectors of the chosen representation
dmω = dω1dω2...dωm a differential volume element in the continuous degrees of freedom
{\displaystyle {\boldsymbol {\Psi }}_{t}({\boldsymbol {\alpha }},{\boldsymbol {\omega }})} a component of the vector {\displaystyle |\Psi \rangle }, called the wave function of the system
α = (α1, α2, ..., αn) dimensionless discrete quantum numbers
ω = (ω1, ω2, ..., ωm) continuous variables (not necessarily dimensionless)
These quantum numbers index the components of the state vector. Moreover, all α are in an n-dimensional set A = A1 × A2 × ... × An where each Ai is the set of allowed values for αi; all ω are in an m-dimensional "volume" Ω ⊆ Rm where Ω = Ω1 × Ω2 × ... × Ωm and each Ωi ⊆ R is the set of allowed values for ωi, a subset of the real numbers R. For generality n and m are not necessarily equal.
Example:
The probability density of finding the system at time t in state |α, ω⟩ is
{\displaystyle \rho _{\alpha ,\omega }(t)=|\Psi ({\boldsymbol {\alpha }},{\boldsymbol {\omega }},t)|^{2}}
The probability of finding system with α in some or all possible discrete-variable configurations, D ⊆ A, and ω in some or all possible continuous-variable configurations, C ⊆ Ω, is the sum and integral over the density,
{\displaystyle P(t)=\sum _{{\boldsymbol {\alpha }}\in D}\int _{C}d^{m}\!{\boldsymbol {\omega }}\,\,\rho _{\alpha ,\omega }(t)}
Since the sum of all probabilities must be 1, the normalization condition
{\displaystyle 1=\sum _{{\boldsymbol {\alpha }}\in A}\int _{\Omega }d^{m}\!{\boldsymbol {\omega }}\,\,\rho _{\alpha ,\omega }(t)}
must hold at all times during the evolution of the system.
The normalization condition requires ρ dmω to be dimensionless; by dimensional analysis, Ψ must have the same units as (ω1ω2...ωm)−1/2.
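A toy check of this condition can be made for an assumed example: a spin-1/2 particle in 1D, so the normalization is a spin sum plus a position integral.

```python
import numpy as np

# Assumed toy state: a spin-1/2 particle in 1D, with components
# Psi(x, up) and Psi(x, down) weighted so the total probability
# (a discrete spin sum plus a continuous position integral) is 1.
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
g = np.pi ** -0.25 * np.exp(-x ** 2 / 2)   # unit-norm spatial profile

psi_up = np.sqrt(0.7) * g       # 70% spin up
psi_dn = np.sqrt(0.3) * g       # 30% spin down

total = sum(np.sum(np.abs(c) ** 2) * dx for c in (psi_up, psi_dn))
print(round(total, 4))   # ~1.0
```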
== Ontology ==
Whether the wave function exists in reality, and what it represents, are major questions in the interpretation of quantum mechanics. Many famous physicists of a previous generation puzzled over this problem, such as Erwin Schrödinger, Albert Einstein and Niels Bohr. Some advocate formulations or variants of the Copenhagen interpretation (e.g. Bohr, Eugene Wigner and John von Neumann) while others, such as John Archibald Wheeler or Edwin Thompson Jaynes, take the more classical approach and regard the wave function as representing information in the mind of the observer, i.e. a measure of our knowledge of reality. Some, including Schrödinger, David Bohm and Hugh Everett III and others, argued that the wave function must have an objective, physical existence. Einstein thought that a complete description of physical reality should refer directly to physical space and time, as distinct from the wave function, which refers to an abstract mathematical space.
== See also ==
== Notes ==
=== Remarks ===
=== Citations ===
== References ==
== Further reading ==
== External links ==
Quantum Mechanics for Engineers
Spin wave functions NYU
Identical Particles Revisited, Michael Fowler
The Nature of Many-Electron Wavefunctions
Quantum Mechanics and Quantum Computation at BerkeleyX Archived 2013-05-13 at the Wayback Machine
Einstein, The quantum theory of radiation
Poisson's equation is an elliptic partial differential equation of broad utility in theoretical physics. For example, the solution to Poisson's equation is the potential field caused by a given electric charge or mass density distribution; with the potential field known, one can then calculate the corresponding electrostatic or gravitational (force) field. It is a generalization of Laplace's equation, which is also frequently seen in physics. The equation is named after French mathematician and physicist Siméon Denis Poisson who published it in 1823.
== Statement of the equation ==
Poisson's equation is
{\displaystyle \Delta \varphi =f,}
where {\displaystyle \Delta } is the Laplace operator, and {\displaystyle f} and {\displaystyle \varphi } are real or complex-valued functions on a manifold. Usually, {\displaystyle f} is given, and {\displaystyle \varphi } is sought. When the manifold is Euclidean space, the Laplace operator is often denoted as ∇2, and so Poisson's equation is frequently written as
{\displaystyle \nabla ^{2}\varphi =f.}
In three-dimensional Cartesian coordinates, it takes the form
{\displaystyle \left({\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}+{\frac {\partial ^{2}}{\partial z^{2}}}\right)\varphi (x,y,z)=f(x,y,z).}
When {\displaystyle f=0} identically, we obtain Laplace's equation.
Poisson's equation may be solved using a Green's function:
{\displaystyle \varphi (\mathbf {r} )=-\iiint {\frac {f(\mathbf {r} ')}{4\pi |\mathbf {r} -\mathbf {r} '|}}\,\mathrm {d} ^{3}r',}
where the integral is over all of space. A general exposition of the Green's function for Poisson's equation is given in the article on the screened Poisson equation. There are various methods for numerical solution, such as the relaxation method, an iterative algorithm.
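The relaxation method mentioned above can be sketched in a few lines. The example below is an assumed 2D toy problem with a point-like source and zero boundary values.

```python
import numpy as np

# Sketch: the relaxation (Jacobi) method for nabla^2(phi) = f on a 2D
# grid with phi = 0 on the boundary. Each sweep replaces phi at a point
# by the average of its neighbours minus a source term. Grid spacing h
# and the point-like source are assumed illustrative choices.
n = 41
h = 1.0 / (n - 1)
f = np.zeros((n, n))
f[n // 2, n // 2] = -1.0 / h ** 2        # a point-like (negative) source

phi = np.zeros((n, n))
for _ in range(2000):
    # NumPy evaluates the right-hand side from the old array first,
    # so this vectorized update is a Jacobi sweep.
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                              + phi[1:-1, 2:] + phi[1:-1, :-2]
                              - h ** 2 * f[1:-1, 1:-1])

print(phi[n // 2, n // 2] > phi[n // 2 + 10, n // 2] > 0)   # True: peaked at the source
```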
== Applications in physics and engineering ==
=== Newtonian gravity ===
In the case of a gravitational field g due to an attracting massive object of density ρ, Gauss's law for gravity in differential form can be used to obtain the corresponding Poisson equation for gravity. Gauss's law for gravity is
{\displaystyle \nabla \cdot \mathbf {g} =-4\pi G\rho .}
Since the gravitational field is conservative (and irrotational), it can be expressed in terms of a scalar potential ϕ:
{\displaystyle \mathbf {g} =-\nabla \phi .}
Substituting this into Gauss's law,
{\displaystyle \nabla \cdot (-\nabla \phi )=-4\pi G\rho ,}
yields Poisson's equation for gravity:
{\displaystyle \nabla ^{2}\phi =4\pi G\rho .}
If the mass density is zero, Poisson's equation reduces to Laplace's equation. The corresponding Green's function can be used to calculate the potential at distance r from a central point mass m (i.e., the fundamental solution). In three dimensions the potential is
{\displaystyle \phi (r)={\frac {-Gm}{r}},}
which is equivalent to Newton's law of universal gravitation.
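As a sketch (with G = m = 1 assumed), one can check numerically that this potential satisfies Laplace's equation away from the source, as the zero-density case requires:

```python
import numpy as np

# Sketch (G = m = 1 assumed): away from the mass, phi = -1/r should
# satisfy Laplace's equation. Check with a 7-point finite-difference
# Laplacian at an arbitrary off-origin point.
h = 1e-3
p = np.array([1.0, 0.7, -0.4])           # evaluation point, r != 0

def phi(v):
    return -1.0 / np.linalg.norm(v)

lap = sum((phi(p + h * e) - 2 * phi(p) + phi(p - h * e)) / h ** 2
          for e in np.eye(3))
print(abs(lap) < 1e-4)   # True: the Laplacian vanishes off the source
```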
=== Electrostatics ===
Many problems in electrostatics are governed by the Poisson equation, which relates the electric potential φ to the free charge density {\displaystyle \rho _{f}}, such as those found in conductors.
The mathematical details of Poisson's equation, commonly expressed in SI units (as opposed to Gaussian units), describe how the distribution of free charges generates the electrostatic potential in a given region.
Starting with Gauss's law for electricity (also one of Maxwell's equations) in differential form, one has
{\displaystyle \mathbf {\nabla } \cdot \mathbf {D} =\rho _{f},}
where ∇⋅ is the divergence operator, D is the electric displacement field, and ρf is the free-charge density (describing charges brought from outside).
Assuming the medium is linear, isotropic, and homogeneous (see polarization density), we have the constitutive equation
{\displaystyle \mathbf {D} =\varepsilon \mathbf {E} ,}
where ε is the permittivity of the medium, and E is the electric field.
Substituting this into Gauss's law and assuming that ε is spatially constant in the region of interest yields
{\displaystyle \mathbf {\nabla } \cdot \mathbf {E} ={\frac {\rho _{f}}{\varepsilon }}.}
In electrostatics, we assume that there is no magnetic field (the argument that follows also holds in the presence of a constant magnetic field).
Then, we have that
{\displaystyle \nabla \times \mathbf {E} =0,}
where ∇× is the curl operator. This equation means that we can write the electric field as the gradient of a scalar function φ (called the electric potential), since the curl of any gradient is zero. Thus we can write
{\displaystyle \mathbf {E} =-\nabla \varphi ,}
where the minus sign is introduced so that φ is identified as the electric potential energy per unit charge.
The derivation of Poisson's equation under these circumstances is straightforward. Substituting the potential gradient for the electric field,
{\displaystyle \nabla \cdot \mathbf {E} =\nabla \cdot (-\nabla \varphi )=-\nabla ^{2}\varphi ={\frac {\rho _{f}}{\varepsilon }},}
directly produces Poisson's equation for electrostatics, which is
{\displaystyle \nabla ^{2}\varphi =-{\frac {\rho _{f}}{\varepsilon }}.}
Specifying Poisson's equation for the potential requires knowing the charge density distribution. If the charge density is zero, then Laplace's equation results. If the charge density follows a Boltzmann distribution, then the Poisson–Boltzmann equation results. The Poisson–Boltzmann equation plays a role in the development of the Debye–Hückel theory of dilute electrolyte solutions.
Using a Green's function, the potential at distance r from a central point charge Q (i.e., the fundamental solution) is
{\displaystyle \varphi (r)={\frac {Q}{4\pi \varepsilon r}},}
which is Coulomb's law of electrostatics. (For historical reasons, and unlike gravity's model above, the 4π factor appears here and not in Gauss's law.)
The above discussion assumes that the magnetic field is not varying in time. The same Poisson equation arises even if it does vary in time, as long as the Coulomb gauge is used. In this more general class of cases, computing φ is no longer sufficient to calculate E, since E also depends on the magnetic vector potential A, which must be independently computed. See Maxwell's equation in potential formulation for more on φ and A in Maxwell's equations and how an appropriate Poisson's equation is obtained in this case.
==== Potential of a Gaussian charge density ====
If there is a static spherically symmetric Gaussian charge density
{\displaystyle \rho _{f}(r)={\frac {Q}{\sigma ^{3}{\sqrt {2\pi }}^{3}}}\,e^{-r^{2}/(2\sigma ^{2})},}
where Q is the total charge, then the solution φ(r) of Poisson's equation
{\displaystyle \nabla ^{2}\varphi =-{\frac {\rho _{f}}{\varepsilon }}}
is given by
{\displaystyle \varphi (r)={\frac {1}{4\pi \varepsilon }}{\frac {Q}{r}}\operatorname {erf} \left({\frac {r}{{\sqrt {2}}\sigma }}\right),}
where erf(x) is the error function. This solution can be checked explicitly by evaluating ∇2φ.
Note that for r much greater than σ, erf(r/(√2 σ)) approaches unity, and the potential φ(r) approaches the point-charge potential,
{\displaystyle \varphi \approx {\frac {1}{4\pi \varepsilon }}{\frac {Q}{r}},}
as one would expect. Furthermore, the error function approaches 1 extremely quickly as its argument increases; in practice, for r > 3σ the relative error is about three parts in a thousand (1 − erf(3/√2) ≈ 0.0027) and falls off rapidly beyond that.
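A quick numerical check (with Q, ε, and σ set to 1, purely for illustration) shows how fast the smoothed potential approaches the point-charge form:

```python
from math import erf, pi, sqrt

eps = Q = sigma = 1.0                    # illustrative values
phi_gauss = lambda r: Q / (4 * pi * eps * r) * erf(r / (sqrt(2) * sigma))
phi_point = lambda r: Q / (4 * pi * eps * r)

for r in (1.0, 3.0, 5.0):
    rel = 1 - phi_gauss(r) / phi_point(r)     # equals 1 - erf(r/(sqrt(2)*sigma))
    print(r, rel)            # at r = 3*sigma this is 1 - erf(3/sqrt(2)) ~ 2.7e-3
```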
=== Surface reconstruction ===
Surface reconstruction is an inverse problem. The goal is to digitally reconstruct a smooth surface based on a large number of points pi (a point cloud) where each point also carries an estimate of the local surface normal ni. Poisson's equation can be utilized to solve this problem with a technique called Poisson surface reconstruction.
The goal of this technique is to reconstruct an implicit function f whose value is zero at the points pi and whose gradient at the points pi equals the normal vectors ni. The set of (pi, ni) is thus modeled as a continuous vector field V. The implicit function f is found by integrating the vector field V. Since not every vector field is the gradient of a function, the problem may or may not have a solution: the necessary and sufficient condition for a smooth vector field V to be the gradient of a function f is that the curl of V must be identically zero. In case this condition is difficult to impose, it is still possible to perform a least-squares fit to minimize the difference between V and the gradient of f.
In order to effectively apply Poisson's equation to the problem of surface reconstruction, it is necessary to find a good discretization of the vector field V. The basic approach is to bound the data with a finite-difference grid. For a function valued at the nodes of such a grid, its gradient can be represented as valued on staggered grids, i.e. on grids whose nodes lie in between the nodes of the original grid. It is convenient to define three staggered grids, each shifted in one and only one direction corresponding to the components of the normal data. On each staggered grid we perform trilinear interpolation on the set of points. The interpolation weights are then used to distribute the magnitude of the associated component of ni onto the nodes of the particular staggered grid cell containing pi. Kazhdan and coauthors give a more accurate method of discretization using an adaptive finite-difference grid, i.e. the cells of the grid are smaller (the grid is more finely divided) where there are more data points. They suggest implementing this technique with an adaptive octree.
=== Fluid dynamics ===
For the incompressible Navier–Stokes equations, given by
{\displaystyle {\begin{aligned}{\frac {\partial \mathbf {v} }{\partial t}}+(\mathbf {v} \cdot \nabla )\mathbf {v} &=-{\frac {1}{\rho }}\nabla p+\nu \Delta \mathbf {v} +\mathbf {g} ,\\\nabla \cdot \mathbf {v} &=0.\end{aligned}}}
The equation for the pressure field p is an example of a nonlinear Poisson equation:
{\displaystyle {\begin{aligned}\Delta p&=-\rho \nabla \cdot (\mathbf {v} \cdot \nabla \mathbf {v} )\\&=-\rho \operatorname {Tr} {\big (}(\nabla \mathbf {v} )(\nabla \mathbf {v} ){\big )}.\end{aligned}}}
Notice that the above trace is not sign-definite.
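The identity ∇⋅((v⋅∇)v) = Tr((∇v)(∇v)), which holds whenever ∇⋅v = 0, can be checked numerically on a sample divergence-free field; the field below is an arbitrary illustrative choice, not drawn from the text.

```python
from math import sin, cos

# Divergence-free sample field: v = (sin x cos y, -cos x sin y)
v1 = lambda x, y: sin(x) * cos(y)
v2 = lambda x, y: -cos(x) * sin(y)

def adv(x, y):
    """Components of the advection term (v . grad) v, using exact partials."""
    d1x, d1y = cos(x) * cos(y), -sin(x) * sin(y)   # dv1/dx, dv1/dy
    d2x, d2y = sin(x) * sin(y), -cos(x) * cos(y)   # dv2/dx, dv2/dy
    return (v1(x, y) * d1x + v2(x, y) * d1y,
            v1(x, y) * d2x + v2(x, y) * d2y)

x, y, h = 0.7, 0.3, 1e-5
# left side: divergence of (v . grad) v, by central differences
lhs = ((adv(x + h, y)[0] - adv(x - h, y)[0]) / (2 * h)
       + (adv(x, y + h)[1] - adv(x, y - h)[1]) / (2 * h))
# right side: Tr((grad v)(grad v)) = sum_ij (d_i v_j)(d_j v_i)
rhs = ((cos(x) * cos(y)) ** 2 + (-cos(x) * cos(y)) ** 2
       + 2 * (sin(x) * sin(y)) * (-sin(x) * sin(y)))
print(abs(lhs - rhs))    # close to 0
```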
== See also ==
Discrete Poisson equation
Poisson–Boltzmann equation
Helmholtz equation
Uniqueness theorem for Poisson's equation
Weak formulation
Harmonic function
Heat equation
Potential theory
== References ==
== Further reading ==
Evans, Lawrence C. (1998). Partial Differential Equations. Providence (RI): American Mathematical Society. ISBN 0-8218-0772-2.
Mathews, Jon; Walker, Robert L. (1970). Mathematical Methods of Physics (2nd ed.). New York: W. A. Benjamin. ISBN 0-8053-7002-1.
Polyanin, Andrei D. (2002). Handbook of Linear Partial Differential Equations for Engineers and Scientists. Boca Raton (FL): Chapman & Hall/CRC Press. ISBN 1-58488-299-9.
== External links ==
"Poisson equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Poisson Equation at EqWorld: The World of Mathematical Equations
In mathematics and physics (more specifically thermodynamics), the heat equation is a parabolic partial differential equation. The theory of the heat equation was first developed by Joseph Fourier in 1822 for the purpose of modeling how a quantity such as heat diffuses through a given region. Since then, the heat equation and its variants have been found to be fundamental in many parts of both pure and applied mathematics.
== Definition ==
Given an open subset U of Rn and a subinterval I of R, one says that a function u : U × I → R is a solution of the heat equation if
{\displaystyle {\frac {\partial u}{\partial t}}={\frac {\partial ^{2}u}{\partial x_{1}^{2}}}+\cdots +{\frac {\partial ^{2}u}{\partial x_{n}^{2}}},}
where (x1, ..., xn, t) denotes a general point of the domain. It is typical to refer to t as time and x1, ..., xn as spatial variables, even in abstract contexts where these phrases fail to have their intuitive meaning. The collection of spatial variables is often referred to simply as x. For any given value of t, the right-hand side of the equation is the Laplacian of the function u(⋅, t) : U → R. As such, the heat equation is often written more compactly as
In physics and engineering contexts, especially in the context of diffusion through a medium, it is more common to fix a Cartesian coordinate system and then to consider the specific case of a function u(x, y, z, t) of three spatial variables (x, y, z) and time variable t. One then says that u is a solution of the heat equation if
{\displaystyle {\frac {\partial u}{\partial t}}=\alpha \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right)}
in which α is a positive coefficient called the thermal diffusivity of the medium. In addition to other physical phenomena, this equation describes the flow of heat in a homogeneous and isotropic medium, with u(x, y, z, t) being the temperature at the point (x, y, z) and time t. If the medium is not homogeneous and isotropic, then α would not be a fixed coefficient, and would instead depend on (x, y, z); the equation would also have a slightly different form. In the physics and engineering literature, it is common to use ∇2 to denote the Laplacian, rather than ∆.
In mathematics as well as in physics and engineering, it is common to use Newton's notation for time derivatives, so that u̇ denotes ∂u/∂t, and the equation can be written
Note also that the ability to use either ∆ or ∇2 to denote the Laplacian, without explicit reference to the spatial variables, is a reflection of the fact that the Laplacian is independent of the choice of coordinate system. In mathematical terms, one would say that the Laplacian is translationally and rotationally invariant. In fact, it is (loosely speaking) the simplest differential operator which has these symmetries. This can be taken as a significant (and purely mathematical) justification of the use of the Laplacian and of the heat equation in modeling any physical phenomena which are homogeneous and isotropic, of which heat diffusion is a principal example.
=== Diffusivity constant ===
The diffusivity constant α is often not present in mathematical studies of the heat equation, while its value can be very important in engineering. This is not a major difference, for the following reason. Let u be a function with
{\displaystyle {\frac {\partial u}{\partial t}}=\alpha \Delta u.}
Define a new function v(t, x) = u(t/α, x). Then, according to the chain rule, one has
Thus, there is a straightforward way of translating between solutions of the heat equation with a general value of α and solutions of the heat equation with α = 1. As such, for the sake of mathematical analysis, it is often sufficient to only consider the case α = 1.
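This rescaling can be checked on a concrete solution. The one-dimensional heat kernel with diffusivity α solves ∂u/∂t = α ∂²u/∂x², and the time-rescaled function v(t, x) = u(t/α, x) then satisfies the α = 1 equation; the sketch below verifies this by finite differences at an arbitrary sample point.

```python
from math import exp, pi, sqrt

alpha = 2.0
# fundamental solution of  u_t = alpha * u_xx  in one dimension
u = lambda x, t: exp(-x * x / (4 * alpha * t)) / sqrt(4 * pi * alpha * t)
v = lambda t, x: u(x, t / alpha)        # rescale time as in the text

x, t, h = 0.5, 1.0, 1e-3
vt = (v(t + h, x) - v(t - h, x)) / (2 * h)
vxx = (v(t, x + h) - 2 * v(t, x) + v(t, x - h)) / h ** 2
print(abs(vt - vxx))                    # close to 0: v solves the alpha = 1 equation
```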
Since α > 0, there is another option: define a v satisfying ∂v/∂t = Δv as in (⁎) above by setting v(t, x) = u(t, √α x). Note that the two possible means of defining the new function v discussed here amount, in physical terms, to changing the unit of measure of time or the unit of measure of length.
=== Nonhomogeneous heat equation ===
The nonhomogeneous heat equation is
{\displaystyle {\frac {\partial u}{\partial t}}=\Delta u+f}
for a given function f = f(x, t) which is allowed to depend on both x and t. The inhomogeneous heat equation models thermal problems in which a heat source modeled by f is switched on. For example, it can be used to model the temperature throughout a room with a heater switched on. If S ⊂ U is the region of the room where the heater is and the heater is constantly generating q units of heat per unit of volume, then f would be given by f(x, t) = q 1S(x).
=== Steady-state equation ===
A solution to the heat equation ∂u/∂t = Δu is said to be a steady-state solution if it does not vary with respect to time:
{\displaystyle 0={\frac {\partial u}{\partial t}}=\Delta u.}
Flowing u via the heat equation brings it closer and closer to a steady-state solution as time increases; for very large times, u is closely approximated by one. A steady-state solution of the heat equation is equivalently a solution of Laplace's equation.
Similarly, a solution to the nonhomogeneous heat equation ∂u/∂t = Δu + f is said to be a steady-state solution if it does not vary with respect to time:
{\displaystyle 0={\frac {\partial u}{\partial t}}=\Delta u+f.}
This is equivalently a solution of Poisson's equation.
In the steady-state case, a nonzero spatial thermal gradient ∇u may (or may not) be present, but if it is, it does not change in time. The steady-state equation describes the end result of all thermal problems in which a source is switched on (for example, an engine started in an automobile) and enough time has passed for all permanent temperature gradients to establish themselves in space, after which these spatial gradients no longer change in time (as, again, with an automobile whose engine has been running long enough). The other (trivial) possibility is for all spatial temperature gradients to disappear as well, in which case the temperature becomes uniform in space too. The steady-state equations are simpler and can help in understanding the physics of a material without focusing on the dynamics of heat transport. They are widely used for simple engineering problems in which the temperature field and heat transport are assumed to be in equilibrium over time.
== Interpretation ==
Informally, the Laplacian operator ∆ gives the difference between the average value of a function in the neighborhood of a point, and its value at that point. Thus, if u is the temperature, ∆u conveys if (and by how much) the material surrounding each point is hotter or colder, on the average, than the material at that point.
By the second law of thermodynamics, heat will flow from hotter bodies to adjacent colder bodies, in proportion to the difference of temperature and of the thermal conductivity of the material between them. When heat flows into (respectively, out of) a material, its temperature increases (respectively, decreases), in proportion to the amount of heat divided by the amount (mass) of material, with a proportionality factor called the specific heat capacity of the material.
By the combination of these observations, the heat equation says the rate u̇ at which the material at a point will heat up (or cool down) is proportional to how much hotter (or cooler) the surrounding material is. The coefficient α in the equation takes into account the thermal conductivity, specific heat, and density of the material.
=== Interpretation of the equation ===
The first half of the above physical thinking can be put into a mathematical form. The key is that, for any fixed x, one has
{\displaystyle {\begin{aligned}u_{(x)}(0)&=u(x)\\u_{(x)}'(0)&=0\\u_{(x)}''(0)&={\frac {1}{n}}\Delta u(x)\end{aligned}}}
where u(x)(r) is the single-variable function denoting the average value of u over the surface of the sphere of radius r centered at x; it can be defined by
{\displaystyle u_{(x)}(r)={\frac {1}{\omega _{n-1}r^{n-1}}}\int _{\{y:|x-y|=r\}}u\,d{\mathcal {H}}^{n-1},}
in which ωn − 1 denotes the surface area of the unit ball in n-dimensional Euclidean space. This formalizes the above statement that the value of ∆u at a point x measures the difference between the value of u(x) and the value of u at points nearby to x, in the sense that the latter is encoded by the values of u(x)(r) for small positive values of r.
Following this observation, one may interpret the heat equation as imposing an infinitesimal averaging of a function. Given a solution of the heat equation, the value of u(x, t + τ) for a small positive value of τ may be approximated as the average value of the function u(⋅, t) over a sphere of very small radius √(2nτ) centered at x.
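In one dimension this averaging property can be illustrated with the heat kernel: matching the Taylor expansions above gives u(x, t + τ) ≈ the average of u(⋅, t) over the sphere of radius r = √(2nτ), and for n = 1 the "sphere" is just the two points x ± r. The sample point and radius below are arbitrary choices for the check.

```python
from math import exp, pi, sqrt

# heat kernel: an exact solution of  u_t = u_xx  in one dimension
u = lambda x, t: exp(-x * x / (4 * t)) / sqrt(4 * pi * t)

x, t, r = 0.3, 1.0, 0.05
tau = r * r / 2                           # tau = r^2 / (2n) with n = 1
avg = 0.5 * (u(x - r, t) + u(x + r, t))   # spherical average in 1-D
# the mismatch is O(r^4), far smaller than the O(r^2) change in u itself
print(abs(u(x, t + tau) - avg))
```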
=== Character of the solutions ===
The heat equation implies that peaks (local maxima) of u will be gradually eroded down, while depressions (local minima) will be filled in. The value at some point will remain stable only as long as it is equal to the average value in its immediate surroundings. In particular, if the values in a neighborhood are very close to a linear function Ax + By + Cz + D, then the value at the center of that neighborhood will not be changing at that time (that is, the derivative u̇ will be zero).
A more subtle consequence is the maximum principle, which says that the maximum value of u in any region R of the medium will not exceed the maximum value that previously occurred in R, unless it is on the boundary of R. That is, the maximum temperature in a region R can increase only if heat comes in from outside R. This is a property of parabolic partial differential equations and is not difficult to prove mathematically (see below).
Another interesting property is that even if u initially has a sharp jump (discontinuity) of value across some surface inside the medium, the jump is immediately smoothed out by a momentary, infinitesimally short but infinitely large rate of flow of heat through that surface. For example, if two isolated bodies, initially at uniform but different temperatures u0 and u1, are made to touch each other, the temperature at the point of contact will immediately assume some intermediate value, and a zone will develop around that point where u will gradually vary between u0 and u1.
If a certain amount of heat is suddenly applied to a point in the medium, it will spread out in all directions in the form of a diffusion wave. Unlike the elastic and electromagnetic waves, the speed of a diffusion wave drops with time: as it spreads over a larger region, the temperature gradient decreases, and therefore the heat flow decreases too.
== Specific examples ==
=== Heat flow in a uniform rod ===
For heat flow, the heat equation follows from the physical laws of conduction of heat and conservation of energy (Cannon 1984).
By Fourier's law for an isotropic medium, the rate of flow of heat energy per unit area through a surface is proportional to the negative temperature gradient across it:
{\displaystyle \mathbf {q} =-k\,\nabla u}
where k is the thermal conductivity of the material, u = u(x, t) is the temperature, and q = q(x, t) is a vector field that represents the magnitude and direction of the heat flow at the point x of space and time t.
If the medium is a thin rod of uniform section and material, the position x is a single coordinate and the heat flow q = q(t, x) towards increasing x is a scalar field. The equation becomes
{\displaystyle q=-k\,{\frac {\partial u}{\partial x}}}
Let Q = Q(x, t) be the internal energy (heat) per unit volume of the bar at each point and time. The rate of change in heat per unit volume in the material, ∂Q/∂t, is proportional to the rate of change of its temperature, ∂u/∂t. That is,
{\displaystyle {\frac {\partial Q}{\partial t}}=c\,\rho \,{\frac {\partial u}{\partial t}}}
where c is the specific heat capacity (at constant pressure, in case of a gas) and ρ is the density (mass per unit volume) of the material. This derivation assumes that the material has constant mass density and heat capacity through space as well as time.
Applying the law of conservation of energy to a small element of the medium centred at x, one concludes that the rate at which heat changes at a given point x is equal to the derivative of the heat flow at that point (the difference between the heat flows either side of the particle). That is,
{\displaystyle {\frac {\partial Q}{\partial t}}=-{\frac {\partial q}{\partial x}}}
From the above equations it follows that
{\displaystyle {\frac {\partial u}{\partial t}}\;=\;-{\frac {1}{c\rho }}{\frac {\partial q}{\partial x}}\;=\;-{\frac {1}{c\rho }}{\frac {\partial }{\partial x}}\left(-k\,{\frac {\partial u}{\partial x}}\right)\;=\;{\frac {k}{c\rho }}{\frac {\partial ^{2}u}{\partial x^{2}}}}
which is the heat equation in one dimension, with diffusivity coefficient
{\displaystyle \alpha ={\frac {k}{c\rho }}}
This quantity is called the thermal diffusivity of the medium.
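A minimal explicit finite-difference sketch of this one-dimensional equation follows; all parameters are illustrative, and the time step is chosen to respect the stability bound α Δt/Δx² ≤ 1/2 for the explicit scheme.

```python
# March  u_t = alpha * u_xx  on a rod with fixed end temperatures until
# the profile settles to its linear steady state.
N, alpha, L = 51, 1e-4, 1.0
dx = L / (N - 1)
dt = 0.4 * dx ** 2 / alpha            # alpha*dt/dx^2 = 0.4 <= 0.5 (stable)
u = [0.0] * N
u[0], u[-1] = 100.0, 0.0              # hot left end, cold right end

for step in range(20000):
    u = ([u[0]]
         + [u[i] + alpha * dt / dx ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
            for i in range(1, N - 1)]
         + [u[-1]])

# after long times u approaches the linear profile 100 * (1 - x/L)
print(u[N // 2])                      # close to 50 at the midpoint
```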
==== Accounting for radiative loss ====
An additional term may be introduced into the equation to account for radiative loss of heat. According to the Stefan–Boltzmann law, this term is
{\displaystyle \mu \left(u^{4}-v^{4}\right)}, where v = v(x, t) is the temperature of the surroundings, and μ is a coefficient that depends on the Stefan–Boltzmann constant, the emissivity of the material, and the geometry. The rate of change in internal energy becomes
{\displaystyle {\frac {\partial Q}{\partial t}}=-{\frac {\partial q}{\partial x}}-\mu \left(u^{4}-v^{4}\right)}
and the equation for the evolution of u becomes
{\displaystyle {\frac {\partial u}{\partial t}}={\frac {k}{c\rho }}{\frac {\partial ^{2}u}{\partial x^{2}}}-{\frac {\mu }{c\rho }}\left(u^{4}-v^{4}\right).}
==== Non-uniform isotropic medium ====
Note that the state equation, given by the first law of thermodynamics (i.e. conservation of energy), is written in the following form (assuming no mass transfer or radiation). This form is more general and particularly useful to recognize which property (e.g. cp or ρ) influences which term.
{\displaystyle \rho c_{p}{\frac {\partial T}{\partial t}}-\nabla \cdot \left(k\nabla T\right)={\dot {q}}_{V}}
where q̇V is the volumetric heat source.
=== Heat flow in non-homogeneous anisotropic media ===
In general, the study of heat conduction is based on several principles. Heat flow is a form of energy flow, and as such it is meaningful to speak of the time rate of flow of heat into a region of space.
The time rate of heat flow into a region V is given by a time-dependent quantity qt(V). We assume q has a density Q, so that
{\displaystyle q_{t}(V)=\int _{V}Q(x,t)\,dx}
Heat flow is a time-dependent vector function H(x) characterized as follows: the time rate of heat flowing through an infinitesimal surface element with area dS and with unit normal vector n is
{\displaystyle \mathbf {H} (x)\cdot \mathbf {n} (x)\,dS.}
Thus the rate of heat flow into V is also given by the surface integral
{\displaystyle q_{t}(V)=-\int _{\partial V}\mathbf {H} (x)\cdot \mathbf {n} (x)\,dS}
where n(x) is the outward pointing normal vector at x.
The Fourier law states that heat energy flow has the following linear dependence on the temperature gradient
{\displaystyle \mathbf {H} (x)=-\mathbf {A} (x)\cdot \nabla u(x)}
where A(x) is a 3 × 3 real matrix that is symmetric and positive definite.
By the divergence theorem, the previous surface integral for heat flow into V can be transformed into the volume integral
{\displaystyle {\begin{aligned}q_{t}(V)&=-\int _{\partial V}\mathbf {H} (x)\cdot \mathbf {n} (x)\,dS\\&=\int _{\partial V}\mathbf {A} (x)\cdot \nabla u(x)\cdot \mathbf {n} (x)\,dS\\&=\int _{V}\sum _{i,j}\partial _{x_{i}}{\bigl (}a_{ij}(x)\partial _{x_{j}}u(x,t){\bigr )}\,dx\end{aligned}}}
The time rate of temperature change at x is proportional to the heat flowing into an infinitesimal volume element, where the constant of proportionality is dependent on a constant κ
{\displaystyle \partial _{t}u(x,t)=\kappa (x)Q(x,t)}
Putting these equations together gives the general equation of heat flow:
{\displaystyle \partial _{t}u(x,t)=\kappa (x)\sum _{i,j}\partial _{x_{i}}{\bigl (}a_{ij}(x)\partial _{x_{j}}u(x,t){\bigr )}}
Remarks
The coefficient κ(x) is the inverse of the product of the specific heat of the substance at x and its density: κ = 1/(ρcp).
In the case of an isotropic medium, the matrix A is a scalar matrix equal to thermal conductivity k.
In the anisotropic case where the coefficient matrix A is not scalar and/or if it depends on x, then an explicit formula for the solution of the heat equation can seldom be written down, though it is usually possible to consider the associated abstract Cauchy problem and show that it is a well-posed problem and/or to show some qualitative properties (like preservation of positive initial data, infinite speed of propagation, convergence toward an equilibrium, smoothing properties). This is usually done by one-parameter semigroups theory: for instance, if A is a symmetric matrix, then the elliptic operator defined by
{\displaystyle Au(x):=\sum _{i,j}\partial _{x_{i}}a_{ij}(x)\partial _{x_{j}}u(x)}
is self-adjoint and dissipative, thus by the spectral theorem it generates a one-parameter semigroup.
=== Three-dimensional problem ===
In the special cases of propagation of heat in an isotropic and homogeneous medium in a 3-dimensional space, this equation is
{\displaystyle {\frac {\partial u}{\partial t}}=\alpha \nabla ^{2}u=\alpha \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right)=\alpha \left(u_{xx}+u_{yy}+u_{zz}\right)}
where:
u = u(x, y, z, t) is temperature as a function of space and time;
∂u/∂t is the rate of change of temperature at a point over time;
uxx, uyy, and uzz are the second spatial derivatives (thermal conductions) of temperature in the x, y, and z directions, respectively;
α ≡ k/(cpρ) is the thermal diffusivity, a material-specific quantity depending on the thermal conductivity k, the specific heat capacity cp, and the mass density ρ.
The heat equation is a consequence of Fourier's law of conduction (see heat conduction).
If the medium is not the whole space, in order to solve the heat equation uniquely we also need to specify boundary conditions for u. To determine uniqueness of solutions in the whole space it is necessary to assume additional conditions, for example an exponential bound on the growth of solutions or a sign condition (nonnegative solutions are unique by a result of David Widder).
Solutions of the heat equation are characterized by a gradual smoothing of the initial temperature distribution by the flow of heat from warmer to colder areas of an object. Generally, many different states and starting conditions will tend toward the same stable equilibrium. As a consequence, to reverse the solution and conclude something about earlier times or initial conditions from the present heat distribution is very inaccurate except over the shortest of time periods.
The heat equation is the prototypical example of a parabolic partial differential equation.
Using the Laplace operator, the heat equation can be simplified, and generalized to similar equations over spaces of arbitrary number of dimensions, as
{\displaystyle u_{t}=\alpha \nabla ^{2}u=\alpha \Delta u,}
where the Laplace operator, denoted as either Δ or as ∇2 (the divergence of the gradient), is taken in the spatial variables.
The heat equation governs heat diffusion, as well as other diffusive processes, such as particle diffusion or the propagation of action potential in nerve cells. Although they are not diffusive in nature, some quantum mechanics problems are also governed by a mathematical analog of the heat equation (see below). It also can be used to model some phenomena arising in finance, like the Black–Scholes or the Ornstein-Uhlenbeck processes. The equation, and various non-linear analogues, has also been used in image analysis.
The heat equation is, technically, in violation of special relativity, because its solutions involve instantaneous propagation of a disturbance. The part of the disturbance outside the forward light cone can usually be safely neglected, but if it is necessary to develop a reasonable speed for the transmission of heat, a hyperbolic problem should be considered instead – like a partial differential equation involving a second-order time derivative. Some models of nonlinear heat conduction (which are also parabolic equations) have solutions with finite heat transmission speed.
=== Internal heat generation ===
The function u above represents temperature of a body. Alternatively, it is sometimes convenient to change units and represent u as the heat density of a medium. Since heat density is proportional to temperature in a homogeneous medium, the heat equation is still obeyed in the new units.
Suppose that a body obeys the heat equation and, in addition, generates its own heat per unit volume (e.g., in watts per litre, W/L) at a rate given by a known function q varying in space and time. Then the heat per unit volume u satisfies the equation
{\displaystyle {\frac {1}{\alpha }}{\frac {\partial u}{\partial t}}=\left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right)+{\frac {1}{k}}q.}
For example, a tungsten light bulb filament generates heat, so it would have a positive nonzero value for q when turned on. While the light is turned off, the value of q for the tungsten filament would be zero.
== Solving the heat equation using Fourier series ==
The following solution technique for the heat equation was proposed by Joseph Fourier in his treatise Théorie analytique de la chaleur, published in 1822. Consider the heat equation for one space variable. This could be used to model heat conduction in a rod. The equation is
{\displaystyle u_{t}=\alpha u_{xx}} (1)
where u = u(x, t) is a function of two variables x and t. Here
x is the space variable, so x ∈ [0, L], where L is the length of the rod.
t is the time variable, so t ≥ 0.
We assume the initial condition
{\displaystyle u(x,0)=f(x)\qquad \forall x\in [0,L]} (2)
where the function f is given, and the boundary conditions
{\displaystyle u(0,t)=0=u(L,t)\qquad \forall t>0.} (3)
Let us attempt to find a solution of (1) that is not identically zero satisfying the boundary conditions (3) but with the following property: u is a product in which the dependence of u on x, t is separated, that is:
{\displaystyle u(x,t)=X(x)T(t).} (4)
This solution technique is called separation of variables. Substituting u back into equation (1),
{\displaystyle {\frac {T'(t)}{\alpha T(t)}}={\frac {X''(x)}{X(x)}}.}
Since the right hand side depends only on x and the left hand side only on t, both sides are equal to some constant value −λ. Thus:
{\displaystyle T'(t)=-\lambda \alpha T(t)} (5)
and
{\displaystyle X''(x)=-\lambda X(x).} (6)
We will now show that nontrivial solutions for (6) for values of λ ≤ 0 cannot occur:
Suppose that λ < 0. Then there exist real numbers B, C such that
{\displaystyle X(x)=Be^{{\sqrt {-\lambda }}\,x}+Ce^{-{\sqrt {-\lambda }}\,x}.}
From (3) we get X(0) = 0 = X(L) and therefore B = 0 = C which implies u is identically 0.
Suppose that λ = 0. Then there exist real numbers B, C such that X(x) = Bx + C. From equation (3) we conclude in the same manner as in 1 that u is identically 0.
Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that
{\displaystyle T(t)=Ae^{-\lambda \alpha t}}
and
{\displaystyle X(x)=B\sin \left({\sqrt {\lambda }}\,x\right)+C\cos \left({\sqrt {\lambda }}\,x\right).}
From (3) we get C = 0 and that for some positive integer n,
{\displaystyle {\sqrt {\lambda }}=n{\frac {\pi }{L}}.}
This solves the heat equation in the special case that the dependence of u has the special form (4).
In general, the sum of solutions to (1) that satisfy the boundary conditions (3) also satisfies (1) and (3). We can show that the solution to (1), (2) and (3) is given by
{\displaystyle u(x,t)=\sum _{n=1}^{\infty }D_{n}\sin \left({\frac {n\pi x}{L}}\right)e^{-{\frac {n^{2}\pi ^{2}\alpha t}{L^{2}}}}}
where
{\displaystyle D_{n}={\frac {2}{L}}\int _{0}^{L}f(x)\sin \left({\frac {n\pi x}{L}}\right)\,dx.}
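The coefficients and the series can be checked numerically. The sketch below (Python with NumPy; the rod length, diffusivity, and initial profile are illustrative choices, not from the text) uses f(x) = sin(πx/L), for which D₁ = 1 and all other coefficients vanish, so the series reduces to a single exponentially decaying mode:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule for the integral of samples y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def fourier_coefficients(f, L, n_terms, n_quad=2001):
    """D_n = (2/L) * integral_0^L f(x) sin(n pi x / L) dx."""
    x = np.linspace(0.0, L, n_quad)
    return np.array([(2.0 / L) * trapezoid(f(x) * np.sin(n * np.pi * x / L), x)
                     for n in range(1, n_terms + 1)])

def heat_solution(x, t, D, L, alpha):
    """u(x,t) = sum_n D_n sin(n pi x / L) exp(-n^2 pi^2 alpha t / L^2)."""
    u = np.zeros_like(x, dtype=float)
    for n, Dn in enumerate(D, start=1):
        u += Dn * np.sin(n * np.pi * x / L) * np.exp(-(n * np.pi / L) ** 2 * alpha * t)
    return u

L_rod, alpha = 1.0, 0.5                          # illustrative rod length and diffusivity
f = lambda x: np.sin(np.pi * x / L_rod)          # initial temperature profile
D = fourier_coefficients(f, L_rod, n_terms=20)   # D_1 ~ 1, the rest ~ 0

x = np.linspace(0.0, L_rod, 101)
u = heat_solution(x, 0.1, D, L_rod, alpha)
# For this f, the exact solution is the single decaying mode below.
exact = np.sin(np.pi * x / L_rod) * np.exp(-np.pi**2 * alpha * 0.1 / L_rod**2)
```

Note that u vanishes at both ends of the rod for every t, as required by the boundary conditions.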
=== Generalizing the solution technique ===
The solution technique used above can be greatly extended to many other types of equations. The idea is that the operator uxx with the zero boundary conditions can be represented in terms of its eigenfunctions. This leads naturally to one of the basic ideas of the spectral theory of linear self-adjoint operators.
Consider the linear operator Δu = uxx. The infinite sequence of functions
{\displaystyle e_{n}(x)={\sqrt {\frac {2}{L}}}\sin \left({\frac {n\pi x}{L}}\right)}
for n ≥ 1 are eigenfunctions of Δ. Indeed,
{\displaystyle \Delta e_{n}=-{\frac {n^{2}\pi ^{2}}{L^{2}}}e_{n}.}
Moreover, any eigenfunction f of Δ with the boundary conditions f(0) = f(L) = 0 is of the form en for some n ≥ 1. The functions en for n ≥ 1 form an orthonormal sequence with respect to a certain inner product on the space of real-valued functions on [0, L]. This means
{\displaystyle \langle e_{n},e_{m}\rangle =\int _{0}^{L}e_{n}(x)e_{m}^{*}(x)dx=\delta _{mn}}
Finally, the sequence {en}n ∈ N spans a dense linear subspace of L2((0, L)). This shows that in effect we have diagonalized the operator Δ.
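The orthonormality relation is easy to verify numerically; a minimal sketch (Python with NumPy, trapezoidal quadrature; the value of L is arbitrary) computes the Gram matrix of the first few eigenfunctions and checks that it is approximately the identity:

```python
import numpy as np

L = 2.0
x = np.linspace(0.0, L, 4001)

def e(n, x):
    """Eigenfunction e_n(x) = sqrt(2/L) sin(n pi x / L) on [0, L]."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def inner(f, g, x):
    """<f, g> = integral_0^L f(x) g(x) dx (real case), trapezoidal rule."""
    y = f * g
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Gram matrix of e_1, ..., e_4: should be approximately the 4x4 identity,
# i.e. <e_n, e_m> = delta_mn.
gram = np.array([[inner(e(n, x), e(m, x), x) for m in range(1, 5)]
                 for n in range(1, 5)])
```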
=== Mean-value property ===
Solutions of the heat equations
{\displaystyle (\partial _{t}-\Delta )u=0}
satisfy a mean-value property analogous to the mean-value properties of harmonic functions, solutions of
{\displaystyle \Delta u=0,}
though a bit more complicated. Precisely, if u solves
{\displaystyle (\partial _{t}-\Delta )u=0}
and
{\displaystyle (x,t)+E_{\lambda }\subset \mathrm {dom} (u)}
then
{\displaystyle u(x,t)={\frac {\lambda }{4}}\int _{E_{\lambda }}u(x-y,t-s){\frac {|y|^{2}}{s^{2}}}ds\,dy,}
where Eλ is a heat-ball, that is a super-level set of the fundamental solution of the heat equation:
{\displaystyle E_{\lambda }:=\{(y,s):\Phi (y,s)>\lambda \},}
{\displaystyle \Phi (x,t):=(4t\pi )^{-{\frac {n}{2}}}\exp \left(-{\frac {|x|^{2}}{4t}}\right).}
Notice that
{\displaystyle \mathrm {diam} (E_{\lambda })=o(1)}
as λ → ∞ so the above formula holds for any (x, t) in the (open) set dom(u) for λ large enough.
== Fundamental solutions ==
A fundamental solution of the heat equation is a solution that corresponds to the initial condition of an initial point source of heat at a known position. These can be used to find a general solution of the heat equation over certain domains (see, for instance, Evans 2010).
In one variable, the Green's function is a solution of the initial value problem (by Duhamel's principle, equivalent to the definition of Green's function as one with a delta function as solution to the first equation)
{\displaystyle {\begin{cases}u_{t}(x,t)-ku_{xx}(x,t)=0&(x,t)\in \mathbb {R} \times (0,\infty )\\u(x,0)=\delta (x)&\end{cases}}}
where δ is the Dirac delta function. The fundamental solution to this problem is given by the heat kernel
{\displaystyle \Phi (x,t)={\frac {1}{\sqrt {4\pi kt}}}\exp \left(-{\frac {x^{2}}{4kt}}\right).}
One can obtain the general solution of the one variable heat equation with initial condition u(x, 0) = g(x) for −∞ < x < ∞ and 0 < t < ∞ by applying a convolution:
{\displaystyle u(x,t)=\int \Phi (x-y,t)g(y)dy.}
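This convolution formula lends itself to direct numerical evaluation. The sketch below (Python with NumPy; the values of k, t, σ and the quadrature grid are illustrative assumptions) uses Gaussian initial data, for which the exact solution is again a Gaussian whose variance grows from σ² to σ² + 2kt:

```python
import numpy as np

def heat_kernel(x, t, k):
    """Fundamental solution Phi(x,t) = exp(-x^2/(4kt)) / sqrt(4 pi k t)."""
    return np.exp(-x**2 / (4.0 * k * t)) / np.sqrt(4.0 * np.pi * k * t)

def solve_ivp_line(g, x, t, k, half_width=30.0, n_quad=20001):
    """u(x,t) = (Phi(.,t) * g)(x), with the integral approximated on a wide grid."""
    y = np.linspace(-half_width, half_width, n_quad)
    dy = y[1] - y[0]
    # One convolution integral per evaluation point x_i.
    return np.array([np.sum(heat_kernel(xi - y, t, k) * g(y)) * dy for xi in x])

k, t, sigma = 0.25, 1.0, 1.0
g = lambda y: np.exp(-y**2 / (2.0 * sigma**2))   # Gaussian initial temperature

x = np.linspace(-3.0, 3.0, 61)
u = solve_ivp_line(g, x, t, k)
# Gaussian data stays Gaussian: variance grows from sigma^2 to sigma^2 + 2kt.
var = sigma**2 + 2.0 * k * t
exact = sigma / np.sqrt(var) * np.exp(-x**2 / (2.0 * var))
```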
In several spatial variables, the fundamental solution solves the analogous problem
{\displaystyle {\begin{cases}u_{t}(\mathbf {x} ,t)-k\sum _{i=1}^{n}u_{x_{i}x_{i}}(\mathbf {x} ,t)=0&(\mathbf {x} ,t)\in \mathbb {R} ^{n}\times (0,\infty )\\u(\mathbf {x} ,0)=\delta (\mathbf {x} )\end{cases}}}
The n-variable fundamental solution is the product of the fundamental solutions in each variable; i.e.,
{\displaystyle \Phi (\mathbf {x} ,t)=\Phi (x_{1},t)\Phi (x_{2},t)\cdots \Phi (x_{n},t)={\frac {1}{\sqrt {(4\pi kt)^{n}}}}\exp \left(-{\frac {\mathbf {x} \cdot \mathbf {x} }{4kt}}\right).}
The general solution of the heat equation on Rn is then obtained by a convolution, so that to solve the initial value problem with u(x, 0) = g(x), one has
{\displaystyle u(\mathbf {x} ,t)=\int _{\mathbb {R} ^{n}}\Phi (\mathbf {x} -\mathbf {y} ,t)g(\mathbf {y} )d\mathbf {y} .}
The general problem on a domain Ω in Rn is
{\displaystyle {\begin{cases}u_{t}(\mathbf {x} ,t)-k\sum _{i=1}^{n}u_{x_{i}x_{i}}(\mathbf {x} ,t)=0&(\mathbf {x} ,t)\in \Omega \times (0,\infty )\\u(\mathbf {x} ,0)=g(\mathbf {x} )&\mathbf {x} \in \Omega \end{cases}}}
with either Dirichlet or Neumann boundary data. A Green's function always exists, but unless the domain Ω can be readily decomposed into one-variable problems (see below), it may not be possible to write it down explicitly. Other methods for obtaining Green's functions include the method of images, separation of variables, and Laplace transforms (Cole, 2011).
=== Some Green's function solutions in 1D ===
A variety of elementary Green's function solutions in one-dimension are recorded here; many others are available elsewhere. In some of these, the spatial domain is (−∞,∞). In others, it is the semi-infinite interval (0,∞) with either Neumann or Dirichlet boundary conditions. One further variation is that some of these solve the inhomogeneous equation
{\displaystyle u_{t}=ku_{xx}+f.}
where f is some given function of x and t.
==== Homogeneous heat equation ====
Initial value problem on (−∞,∞)
{\displaystyle {\begin{cases}u_{t}=ku_{xx}&(x,t)\in \mathbb {R} \times (0,\infty )\\u(x,0)=g(x)&{\text{Initial condition}}\end{cases}}}
{\displaystyle u(x,t)={\frac {1}{\sqrt {4\pi kt}}}\int _{-\infty }^{\infty }\exp \left(-{\frac {(x-y)^{2}}{4kt}}\right)g(y)\,dy}
Comment. This solution is the convolution with respect to the variable x of the fundamental solution
{\displaystyle \Phi (x,t):={\frac {1}{\sqrt {4\pi kt}}}\exp \left(-{\frac {x^{2}}{4kt}}\right),}
and the function g(x). (The Green's function number of the fundamental solution is X00.)
Therefore, according to the general properties of the convolution with respect to differentiation, u = g ∗ Φ is a solution of the same heat equation, for
{\displaystyle \left(\partial _{t}-k\partial _{x}^{2}\right)(\Phi *g)=\left[\left(\partial _{t}-k\partial _{x}^{2}\right)\Phi \right]*g=0.}
Moreover,
{\displaystyle \Phi (x,t)={\frac {1}{\sqrt {t}}}\,\Phi \left({\frac {x}{\sqrt {t}}},1\right)}
{\displaystyle \int _{-\infty }^{\infty }\Phi (x,t)\,dx=1,}
so that, by general facts about approximation to the identity, Φ(⋅, t) ∗ g → g as t → 0 in various senses, according to the specific g. For instance, if g is assumed bounded and continuous on R then Φ(⋅, t) ∗ g converges uniformly to g as t → 0, meaning that u(x, t) is continuous on R × [0, ∞) with u(x, 0) = g(x).
Initial value problem on (0,∞) with homogeneous Dirichlet boundary conditions
{\displaystyle {\begin{cases}u_{t}=ku_{xx}&(x,t)\in [0,\infty )\times (0,\infty )\\u(x,0)=g(x)&{\text{IC}}\\u(0,t)=0&{\text{BC}}\end{cases}}}
{\displaystyle u(x,t)={\frac {1}{\sqrt {4\pi kt}}}\int _{0}^{\infty }\left[\exp \left(-{\frac {(x-y)^{2}}{4kt}}\right)-\exp \left(-{\frac {(x+y)^{2}}{4kt}}\right)\right]g(y)\,dy}
Comment. This solution is obtained from the preceding formula as applied to the data g(x) suitably extended to R, so as to be an odd function, that is, letting g(−x) := −g(x) for all x. Correspondingly, the solution of the initial value problem on (−∞,∞) is an odd function with respect to the variable x for all values of t, and in particular it satisfies the homogeneous Dirichlet boundary conditions u(0, t) = 0.
The Green's function number of this solution is X10.
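The cancellation that enforces u(0, t) = 0 can be seen directly by evaluating this image-method formula numerically; a small sketch (Python with NumPy; the data g, the constants, and the quadrature grid are illustrative assumptions):

```python
import numpy as np

def dirichlet_half_line(g, x, t, k, y_max=30.0, n_quad=20001):
    """u(x,t) on (0,inf) with u(0,t)=0, via the odd-extension (image) kernel."""
    y = np.linspace(0.0, y_max, n_quad)
    dy = y[1] - y[0]
    pref = 1.0 / np.sqrt(4.0 * np.pi * k * t)
    out = []
    for xi in np.atleast_1d(x):
        # Direct source at y minus image source at -y.
        kern = (np.exp(-(xi - y)**2 / (4.0 * k * t))
                - np.exp(-(xi + y)**2 / (4.0 * k * t)))
        out.append(pref * np.sum(kern * g(y)) * dy)
    return np.array(out)

k, t = 0.5, 0.2
g = lambda y: y * np.exp(-y)          # illustrative initial data on (0, inf)
u = dirichlet_half_line(g, np.array([0.0, 0.5, 1.0]), t, k)
# At x = 0 the two image terms cancel identically, so u[0] is exactly zero,
# while u stays positive in the interior for positive data.
```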
Initial value problem on (0,∞) with homogeneous Neumann boundary conditions
{\displaystyle {\begin{cases}u_{t}=ku_{xx}&(x,t)\in [0,\infty )\times (0,\infty )\\u(x,0)=g(x)&{\text{IC}}\\u_{x}(0,t)=0&{\text{BC}}\end{cases}}}
{\displaystyle u(x,t)={\frac {1}{\sqrt {4\pi kt}}}\int _{0}^{\infty }\left[\exp \left(-{\frac {(x-y)^{2}}{4kt}}\right)+\exp \left(-{\frac {(x+y)^{2}}{4kt}}\right)\right]g(y)\,dy}
Comment. This solution is obtained from the first solution formula as applied to the data g(x) suitably extended to R so as to be an even function, that is, letting g(−x) := g(x) for all x. Correspondingly, the solution of the initial value problem on R is an even function with respect to the variable x for all values of t > 0, and in particular, being smooth, it satisfies the homogeneous Neumann boundary conditions ux(0, t) = 0. The Green's function number of this solution is X20.
Problem on (0,∞) with homogeneous initial conditions and non-homogeneous Dirichlet boundary conditions
{\displaystyle {\begin{cases}u_{t}=ku_{xx}&(x,t)\in [0,\infty )\times (0,\infty )\\u(x,0)=0&{\text{IC}}\\u(0,t)=h(t)&{\text{BC}}\end{cases}}}
{\displaystyle u(x,t)=\int _{0}^{t}{\frac {x}{\sqrt {4\pi k(t-s)^{3}}}}\exp \left(-{\frac {x^{2}}{4k(t-s)}}\right)h(s)\,ds,\qquad \forall x>0}
Comment. This solution is the convolution with respect to the variable t of
{\displaystyle \psi (x,t):=-2k\partial _{x}\Phi (x,t)={\frac {x}{\sqrt {4\pi kt^{3}}}}\exp \left(-{\frac {x^{2}}{4kt}}\right)}
and the function h(t). Since Φ(x, t) is the fundamental solution of
{\displaystyle \partial _{t}-k\partial _{x}^{2},}
the function ψ(x, t) is also a solution of the same heat equation, and so is u := ψ ∗ h, thanks to general properties of the convolution with respect to differentiation. Moreover,
{\displaystyle \psi (x,t)={\frac {1}{x^{2}}}\,\psi \left(1,{\frac {t}{x^{2}}}\right)}
{\displaystyle \int _{0}^{\infty }\psi (x,t)\,dt=1,}
so that, by general facts about approximation to the identity, ψ(x, ⋅) ∗ h → h as x → 0 in various senses, according to the specific h. For instance, if h is assumed continuous on R with support in [0, ∞) then ψ(x, ⋅) ∗ h converges uniformly on compacta to h as x → 0, meaning that u(x, t) is continuous on [0, ∞) × [0, ∞) with u(0, t) = h(t).
==== Inhomogeneous heat equation ====
Problem on (−∞,∞) with homogeneous initial conditions
{\displaystyle {\begin{cases}u_{t}=ku_{xx}+f(x,t)&(x,t)\in \mathbb {R} \times (0,\infty )\\u(x,0)=0&{\text{IC}}\end{cases}}}
{\displaystyle u(x,t)=\int _{0}^{t}\int _{-\infty }^{\infty }{\frac {1}{\sqrt {4\pi k(t-s)}}}\exp \left(-{\frac {(x-y)^{2}}{4k(t-s)}}\right)f(y,s)\,dy\,ds}
Comment. This solution is the convolution in R2, that is with respect to both the variables x and t, of the fundamental solution
{\displaystyle \Phi (x,t):={\frac {1}{\sqrt {4\pi kt}}}\exp \left(-{\frac {x^{2}}{4kt}}\right)}
and the function f(x, t), both meant as defined on the whole R2 and identically 0 for all t < 0. One verifies that
{\displaystyle \left(\partial _{t}-k\partial _{x}^{2}\right)(\Phi *f)=f,}
which expressed in the language of distributions becomes
{\displaystyle \left(\partial _{t}-k\partial _{x}^{2}\right)\Phi =\delta ,}
where the distribution δ is the Dirac delta function, that is, the evaluation at 0.
Problem on (0,∞) with homogeneous Dirichlet boundary conditions and initial conditions
{\displaystyle {\begin{cases}u_{t}=ku_{xx}+f(x,t)&(x,t)\in [0,\infty )\times (0,\infty )\\u(x,0)=0&{\text{IC}}\\u(0,t)=0&{\text{BC}}\end{cases}}}
{\displaystyle u(x,t)=\int _{0}^{t}\int _{0}^{\infty }{\frac {1}{\sqrt {4\pi k(t-s)}}}\left(\exp \left(-{\frac {(x-y)^{2}}{4k(t-s)}}\right)-\exp \left(-{\frac {(x+y)^{2}}{4k(t-s)}}\right)\right)f(y,s)\,dy\,ds}
Comment. This solution is obtained from the preceding formula as applied to the data f(x, t) suitably extended to R × [0,∞), so as to be an odd function of the variable x, that is, letting f(−x, t) := −f(x, t) for all x and t. Correspondingly, the solution of the inhomogeneous problem on (−∞,∞) is an odd function with respect to the variable x for all values of t, and in particular it satisfies the homogeneous Dirichlet boundary conditions u(0, t) = 0.
Problem on (0,∞) with homogeneous Neumann boundary conditions and initial conditions
{\displaystyle {\begin{cases}u_{t}=ku_{xx}+f(x,t)&(x,t)\in [0,\infty )\times (0,\infty )\\u(x,0)=0&{\text{IC}}\\u_{x}(0,t)=0&{\text{BC}}\end{cases}}}
{\displaystyle u(x,t)=\int _{0}^{t}\int _{0}^{\infty }{\frac {1}{\sqrt {4\pi k(t-s)}}}\left(\exp \left(-{\frac {(x-y)^{2}}{4k(t-s)}}\right)+\exp \left(-{\frac {(x+y)^{2}}{4k(t-s)}}\right)\right)f(y,s)\,dy\,ds}
Comment. This solution is obtained from the first formula as applied to the data f(x, t) suitably extended to R × [0,∞), so as to be an even function of the variable x, that is, letting f(−x, t) := f(x, t) for all x and t. Correspondingly, the solution of the inhomogeneous problem on (−∞,∞) is an even function with respect to the variable x for all values of t, and in particular, being a smooth function, it satisfies the homogeneous Neumann boundary conditions ux(0, t) = 0.
==== Examples ====
Since the heat equation is linear, solutions of other combinations of boundary conditions, inhomogeneous term, and initial conditions can be found by taking an appropriate linear combination of the above Green's function solutions.
For example, to solve
{\displaystyle {\begin{cases}u_{t}=ku_{xx}+f&(x,t)\in \mathbb {R} \times (0,\infty )\\u(x,0)=g(x)&{\text{IC}}\end{cases}}}
let u = w + v where w and v solve the problems
{\displaystyle {\begin{cases}v_{t}=kv_{xx}+f,\,w_{t}=kw_{xx}\,&(x,t)\in \mathbb {R} \times (0,\infty )\\v(x,0)=0,\,w(x,0)=g(x)\,&{\text{IC}}\end{cases}}}
Similarly, to solve
{\displaystyle {\begin{cases}u_{t}=ku_{xx}+f&(x,t)\in [0,\infty )\times (0,\infty )\\u(x,0)=g(x)&{\text{IC}}\\u(0,t)=h(t)&{\text{BC}}\end{cases}}}
let u = w + v + r where w, v, and r solve the problems
{\displaystyle {\begin{cases}v_{t}=kv_{xx}+f,\,w_{t}=kw_{xx},\,r_{t}=kr_{xx}&(x,t)\in [0,\infty )\times (0,\infty )\\v(x,0)=0,\;w(x,0)=g(x),\;r(x,0)=0&{\text{IC}}\\v(0,t)=0,\;w(0,t)=0,\;r(0,t)=h(t)&{\text{BC}}\end{cases}}}
== Applications ==
As the prototypical parabolic partial differential equation, the heat equation is among the most widely studied topics in pure mathematics, and its analysis is regarded as fundamental to the broader field of partial differential equations. The heat equation can also be considered on Riemannian manifolds, leading to many geometric applications. Following work of Subbaramiah Minakshisundaram and Åke Pleijel, the heat equation is closely related with spectral geometry. A seminal nonlinear variant of the heat equation was introduced to differential geometry by James Eells and Joseph Sampson in 1964, inspiring the introduction of the Ricci flow by Richard Hamilton in 1982 and culminating in the proof of the Poincaré conjecture by Grigori Perelman in 2003. Certain solutions of the heat equation known as heat kernels provide subtle information about the region on which they are defined, as exemplified through their application to the Atiyah–Singer index theorem.
The heat equation, along with variants thereof, is also important in many fields of science and applied mathematics. In probability theory, the heat equation is connected with the study of random walks and Brownian motion via the Fokker–Planck equation. The Black–Scholes equation of financial mathematics is a small variant of the heat equation, and the Schrödinger equation of quantum mechanics can be regarded as a heat equation in imaginary time. In image analysis, the heat equation is sometimes used to resolve pixelation and to identify edges. Following Robert Richtmyer and John von Neumann's introduction of artificial viscosity methods, solutions of heat equations have been useful in the mathematical formulation of hydrodynamical shocks. Solutions of the heat equation have also been given much attention in the numerical analysis literature, beginning in the 1950s with work of Jim Douglas, D.W. Peaceman, and Henry Rachford Jr.
=== Particle diffusion ===
One can model particle diffusion by an equation involving either:
the volumetric concentration of particles, denoted c, in the case of collective diffusion of a large number of particles, or
the probability density function associated with the position of a single particle, denoted P.
In either case, one uses the heat equation
{\displaystyle c_{t}=D\Delta c,}
or
{\displaystyle P_{t}=D\Delta P.}
Both c and P are functions of position and time. D is the diffusion coefficient that controls the speed of the diffusive process, and is typically expressed in square metres per second (m²/s). If the diffusion coefficient D is not constant, but depends on the concentration c (or P in the second case), then one gets the nonlinear diffusion equation.
=== Brownian motion ===
Let the stochastic process X be the solution to the stochastic differential equation
{\displaystyle {\begin{cases}\mathrm {d} X_{t}={\sqrt {2k}}\;\mathrm {d} B_{t}\\X_{0}=0\end{cases}}}
where B is the Wiener process (standard Brownian motion). The probability density function of X is given at any time t by
{\displaystyle {\frac {1}{\sqrt {4\pi kt}}}\exp \left(-{\frac {x^{2}}{4kt}}\right)}
which is the solution to the initial value problem
{\displaystyle {\begin{cases}u_{t}(x,t)-ku_{xx}(x,t)=0,&(x,t)\in \mathbb {R} \times (0,+\infty )\\u(x,0)=\delta (x)\end{cases}}}
where δ is the Dirac delta function.
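This correspondence can be checked by simulation; a short sketch (Python with NumPy; the constants, step counts, and random seed are illustrative assumptions) integrates the SDE by Euler–Maruyama and compares the empirical mean and variance of X_t with the N(0, 2kt) law predicted by the heat kernel:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_X(k, t, n_steps, n_paths):
    """Euler-Maruyama for dX = sqrt(2k) dB, X_0 = 0 (exact here: no drift term)."""
    dt = t / n_steps
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        x += np.sqrt(2.0 * k) * rng.normal(0.0, np.sqrt(dt), size=n_paths)
    return x

k, t = 0.8, 1.5
X = simulate_X(k, t, n_steps=50, n_paths=100_000)
# X_t should be N(0, 2kt), i.e. have the density exp(-x^2/(4kt)) / sqrt(4 pi k t).
var_theory = 2.0 * k * t
```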
=== Schrödinger equation for a free particle ===
With a simple division, the Schrödinger equation for a single particle of mass m in the absence of any applied force field can be rewritten in the following way:
{\displaystyle \psi _{t}={\frac {i\hbar }{2m}}\Delta \psi },
where i is the imaginary unit, ħ is the reduced Planck constant, and ψ is the wave function of the particle.
This equation is formally similar to the particle diffusion equation, which one obtains through the following transformation:
{\displaystyle {\begin{aligned}c(\mathbf {R} ,t)&\to \psi (\mathbf {R} ,t)\\D&\to {\frac {i\hbar }{2m}}\end{aligned}}}
Applying this transformation to the expressions of the Green functions determined in the case of particle diffusion yields the Green functions of the Schrödinger equation, which in turn can be used to obtain the wave function at any time through an integral on the wave function at t = 0:
{\displaystyle \psi (\mathbf {R} ,t)=\int \psi \left(\mathbf {R} ^{0},t=0\right)G\left(\mathbf {R} -\mathbf {R} ^{0},t\right)dR_{x}^{0}\,dR_{y}^{0}\,dR_{z}^{0},}
with
{\displaystyle G(\mathbf {R} ,t)=\left({\frac {m}{2\pi i\hbar t}}\right)^{3/2}e^{-{\frac {\mathbf {R} ^{2}m}{2i\hbar t}}}.}
Remark: this analogy between quantum mechanics and diffusion is a purely formal one. Physically, the evolution of the wave function satisfying the Schrödinger equation might have an origin other than diffusion.
=== Thermal diffusivity in polymers ===
A direct practical application of the heat equation, in conjunction with Fourier theory, in spherical coordinates, is the prediction of thermal transfer profiles and the measurement of the thermal diffusivity in polymers (Unsworth and Duarte). This dual theoretical-experimental method is applicable to rubber, various other polymeric materials of practical interest, and microfluids. These authors derived an expression for the temperature at the center of a sphere TC
{\displaystyle {\frac {T_{C}-T_{S}}{T_{0}-T_{S}}}=2\sum _{n=1}^{\infty }(-1)^{n+1}\exp \left({-{\frac {n^{2}\pi ^{2}\alpha t}{L^{2}}}}\right)}
where T0 is the initial temperature of the sphere and TS the temperature at the surface of the sphere, of radius L. This equation has also found applications in protein energy transfer and thermal modeling in biophysics.
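The series converges quickly and is easy to evaluate; a small sketch (Python; the diffusivity and radius below are hypothetical values for illustration, not taken from Unsworth and Duarte) shows the center ratio starting near 1 at early times and decaying toward 0 as the sphere equilibrates with its surface:

```python
import math

def center_ratio(alpha, t, L, n_terms=200):
    """(T_C - T_S)/(T_0 - T_S) = 2 sum_{n>=1} (-1)^{n+1} exp(-n^2 pi^2 alpha t / L^2)."""
    return 2.0 * sum(
        (-1) ** (n + 1) * math.exp(-n**2 * math.pi**2 * alpha * t / L**2)
        for n in range(1, n_terms + 1)
    )

# Hypothetical sphere: alpha in mm^2/s, radius L in mm.
alpha, L = 0.1, 10.0
r_early = center_ratio(alpha, t=1.0, L=L)    # ~1: the center has not yet felt the surface
r_late = center_ratio(alpha, t=500.0, L=L)   # ~0: the center has equilibrated to T_S
```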
=== Financial Mathematics ===
The heat equation arises in a number of phenomena and is often used in financial mathematics in the modeling of options. The Black–Scholes option pricing model's differential equation can be transformed into the heat equation allowing relatively easy solutions from a familiar body of mathematics. Many of the extensions to the simple option models do not have closed form solutions and thus must be solved numerically to obtain a modeled option price. The equation describing pressure diffusion in a porous medium is identical in form with the heat equation. Diffusion problems dealing with Dirichlet, Neumann and Robin boundary conditions have closed form analytic solutions (Thambynayagam 2011).
=== Image Analysis ===
The heat equation is also widely used in image analysis (Perona & Malik 1990) and in machine learning as the driving theory behind scale-space or graph Laplacian methods. The heat equation can be efficiently solved numerically using the implicit Crank–Nicolson method of (Crank & Nicolson 1947). This method can be extended to many of the models with no closed form solution, see for instance (Wilmott, Howison & Dewynne 1995).
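A minimal Crank–Nicolson solver for the one-dimensional equation u_t = k u_xx with zero Dirichlet boundaries can be sketched as follows (Python with NumPy; a dense linear solve is used for brevity, where a production code would exploit the tridiagonal structure). The initial data is the first eigenmode, so the result can be compared against the exact exponential decay:

```python
import numpy as np

def crank_nicolson(u0, k, dx, dt, n_steps):
    """Advance u_t = k u_xx (u = 0 at both ends) by n_steps of Crank-Nicolson."""
    n = len(u0)
    r = k * dt / (2.0 * dx**2)
    # Second-difference matrix on the interior points (Dirichlet boundaries).
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    I = np.eye(n)
    lhs = I - r * A          # implicit half of the time step
    rhs = I + r * A          # explicit half of the time step
    u = u0.copy()
    for _ in range(n_steps):
        u = np.linalg.solve(lhs, rhs @ u)
    return u

L, k = 1.0, 1.0
nx = 99                      # interior points; dx = L / (nx + 1)
dx = L / (nx + 1)
x = np.linspace(dx, L - dx, nx)
u0 = np.sin(np.pi * x)       # eigenmode: exact solution is sin(pi x) e^{-pi^2 k t}
dt, n_steps = 1e-3, 100      # integrate to t = 0.1
u = crank_nicolson(u0, k, dx, dt, n_steps)
exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * k * 0.1)
```

Crank–Nicolson is unconditionally stable and second-order accurate in both dx and dt, which is why it remains the standard implicit scheme for parabolic problems of this type.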
=== Riemannian geometry ===
An abstract form of heat equation on manifolds provides a major approach to the Atiyah–Singer index theorem, and has led to much further work on heat equations in Riemannian geometry.
== See also ==
Caloric polynomial
Curve-shortening flow
Diffusion equation
Parabolic partial differential equation
Relativistic heat conduction
Schrödinger equation
Weierstrass transform
== Notes ==
== References ==
Cannon, John Rozier (1984), The one–dimensional heat equation, Encyclopedia of Mathematics and its Applications, vol. 23, Reading, MA: Addison-Wesley Publishing Company, Advanced Book Program, ISBN 0-201-13522-1, MR 0747979, Zbl 0567.35001
Crank, J.; Nicolson, P. (1947), "A Practical Method for Numerical Evaluation of Solutions of Partial Differential Equations of the Heat-Conduction Type", Proceedings of the Cambridge Philosophical Society, 43 (1): 50–67, Bibcode:1947PCPS...43...50C, doi:10.1017/S0305004100023197, S2CID 16676040
Evans, Lawrence C. (2010), Partial Differential Equations, Graduate Studies in Mathematics, vol. 19 (2nd ed.), Providence, RI: American Mathematical Society, ISBN 978-0-8218-4974-3
Perona, P; Malik, J. (1990), "Scale-Space and Edge Detection Using Anisotropic Diffusion" (PDF), IEEE Transactions on Pattern Analysis and Machine Intelligence, 12 (7): 629–639, doi:10.1109/34.56205, S2CID 14502908
Thambynayagam, R. K. M. (2011), The Diffusion Handbook: Applied Solutions for Engineers, McGraw-Hill Professional, ISBN 978-0-07-175184-1
Wilmott, Paul; Howison, Sam; Dewynne, Jeff (1995), The mathematics of financial derivatives. A student introduction, Cambridge: Cambridge University Press, ISBN 0-521-49699-3
== Further reading ==
Carslaw, H.S.; Jaeger, J.C. (1988), Conduction of heat in solids, Oxford Science Publications (2nd ed.), New York: The Clarendon Press, Oxford University Press, ISBN 978-0-19-853368-9
Cole, Kevin D.; Beck, James V.; Haji-Sheikh, A.; Litkouhi, Bahan (2011), Heat conduction using Green's functions, Series in Computational and Physical Processes in Mechanics and Thermal Sciences (2nd ed.), Boca Raton, FL: CRC Press, ISBN 978-1-43-981354-6
Einstein, Albert (1905), "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen" (PDF), Annalen der Physik, 322 (8): 549–560, Bibcode:1905AnP...322..549E, doi:10.1002/andp.19053220806
Friedman, Avner (1964), Partial differential equations of parabolic type, Englewood Cliffs, N.J.: Prentice-Hall
Unsworth, J.; Duarte, F. J. (1979), "Heat diffusion in a solid sphere and Fourier Theory", Am. J. Phys., 47 (11): 891–893, Bibcode:1979AmJPh..47..981U, doi:10.1119/1.11601
Jili, Latif M. (2009), Heat Conduction, Springer (3rd ed.), Berlin-Heidelberg: Springer-Verlag, ISBN 978-3-642-01266-2
Widder, D.V. (1975), The heat equation, Pure and Applied Mathematics, vol. 67, New York-London: Academic Press [Harcourt Brace Jovanovich, Publishers]
== External links ==
Derivation of the heat equation
Linear heat equations: Particular solutions and boundary value problems - from EqWorld
"The Heat Equation". PBS Infinite Series. November 17, 2017. Archived from the original on 2021-12-11 – via YouTube. | Wikipedia/Heat_equation |
Energy (from Ancient Greek ἐνέργεια (enérgeia) 'activity') is the quantitative property that is transferred to a body or to a physical system, recognizable in the performance of work and in the form of heat and light. Energy is a conserved quantity—the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The unit of measurement for energy in the International System of Units (SI) is the joule (J).
Forms of energy include the kinetic energy of a moving object, the potential energy stored by an object (for instance due to its position in a field), the elastic energy stored in a solid object, chemical energy associated with chemical reactions, the radiant energy carried by electromagnetic radiation, the internal energy contained within a thermodynamic system, and rest energy associated with an object's rest mass. These are not mutually exclusive.
All living organisms constantly take in and release energy. The Earth's climate and ecosystems processes are driven primarily by radiant energy from the sun. The energy industry provides the energy required for human civilization to function, which it obtains from energy resources such as fossil fuels, nuclear fuel, and renewable energy.
== Forms ==
The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the object's components – while potential energy reflects the potential of an object to have motion, generally being based upon the object's position within a field or what is stored within the field itself.
While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as forms in their own right. For example, the sum of translational and rotational kinetic and potential energy within a system is referred to as mechanical energy, whereas nuclear energy refers to the combined potentials within an atomic nucleus from either the nuclear force or the weak force, among other examples.
== History ==
The word energy derives from the Ancient Greek: ἐνέργεια, romanized: energeia, lit. 'activity, operation', which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure.
In the late 17th century, Gottfried Leibniz proposed the idea of the Latin vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the motions of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. Writing in the early 18th century, Émilie du Châtelet proposed the concept of conservation of energy in the marginalia of her French language translation of Newton's Principia Mathematica, which represented the first formulation of a conserved measurable quantity that was distinct from momentum, and which would later be called "energy".
In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat.
These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.
== Units of measure ==
In the International System of Units (SI), the unit of energy is the joule. It is a derived unit that is equal to the energy expended, or work done, in applying a force of one newton through a distance of one metre. However, energy can also be expressed in many other units not part of the SI, such as ergs, calories, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units.
The SI unit of power, defined as energy per unit of time, is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce.
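As a quick sketch of the conversions above, the table below collects the standard factors relating common energy units to the joule (the function and table names are illustrative, not part of any standard API):

```python
# Conversion factors from common energy units to joules (SI).
# Values are the standard definitions (thermochemical calorie, ISO BTU).
TO_JOULES = {
    "J": 1.0,
    "erg": 1e-7,            # CGS unit
    "cal": 4.184,           # thermochemical calorie
    "kWh": 3.6e6,           # kilowatt-hour
    "BTU": 1055.06,         # British thermal unit (ISO)
    "eV": 1.602176634e-19,  # electronvolt
}

def to_joules(value, unit):
    """Convert an energy value in the given unit to joules."""
    return value * TO_JOULES[unit]

# One watt-hour is 3600 joules, as stated in the text.
print(to_joules(1, "kWh") / 1000)  # → 3600.0
```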
In 1843, English physicist James Prescott Joule, namesake of the unit of measure, discovered that the gravitational potential energy lost by a descending weight, attached via a string to a paddle wheel immersed in water, was equal to the internal energy gained by the water through friction with the paddle.
== Scientific use ==
=== Classical mechanics ===
In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept.
Work, a function of energy, is force times distance.

{\displaystyle W=\int _{C}\mathbf {F} \cdot \mathrm {d} \mathbf {s} }

This says that the work (W) is equal to the line integral of the force F along a path C; for details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball.
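A minimal numerical sketch of this line integral, approximating W = ∫ F · ds as a sum of F · Δs over small straight segments (the force field, path, and helper name are illustrative assumptions):

```python
# Approximate the line integral W = ∫_C F · ds via the midpoint rule.
def work(force, path):
    """`force` maps a position (x, y) to a force vector (fx, fy);
    `path` is a list of (x, y) points tracing the curve C."""
    W = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2   # segment midpoint
        fx, fy = force(mx, my)
        W += fx * (x1 - x0) + fy * (y1 - y0)    # F · Δs
    return W

# Constant 3 N force along x, straight 2 m path along x: W = 3 * 2 = 6 J.
path = [(i * 0.02, 0.0) for i in range(101)]
print(work(lambda x, y: (3.0, 0.0), path))  # ≈ 6.0
```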
The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have direct analogs in nonrelativistic quantum mechanics.
Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).
Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law.
=== Chemistry ===
In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular, or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is usually accompanied by a decrease, and sometimes an increase, of the total energy of the substances involved. Some energy may be transferred between the surroundings and the reactants in the form of heat or light; thus the products of a reaction have sometimes more but usually less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the less common case of endothermic reactions the situation is the reverse.
Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann's population factor e−E/kT; that is, the probability of a molecule to have energy greater than or equal to E at a given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy.
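The Boltzmann population factor described above can be computed directly; this sketch shows how a modest temperature rise sharply increases the fraction of molecules above the activation energy (the activation energy chosen is an illustrative assumption):

```python
import math

# Boltzmann population factor e^(-E/kT): probability of a molecule having
# energy greater than or equal to the activation energy E at temperature T.
k = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_factor(E_act, T):
    return math.exp(-E_act / (k * T))

# Arrhenius behaviour: a 50 K rise greatly increases the factor.
E = 8e-20  # assumed activation energy per molecule, joules (~50 kJ/mol)
print(boltzmann_factor(E, 300))  # at room temperature
print(boltzmann_factor(E, 350))  # markedly larger at 350 K
```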
=== Biology ===
In biology, energy is an attribute of all biological systems, from the biosphere to the smallest living organism. Within an organism it is responsible for the growth and development of a biological cell or organelle. Energy used in respiration is stored in substances such as carbohydrates (including sugars), lipids, and proteins, which are stored by cells. In human terms, the human equivalent (H-e) (Human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, using as a standard an average human energy expenditure of 6,900 kJ per day and a basal metabolic rate of 80 watts.
For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80) i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300; for an activity kept up all day, 150 watts is about the maximum. The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy.
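The human-equivalent arithmetic above is a single division by the article's assumed 80 W metabolic rate; a minimal sketch (function name is illustrative):

```python
# Human equivalent (H-e): power of a device relative to the average
# human metabolic rate of 80 watts assumed in the text.
BASAL_WATTS = 80.0

def human_equivalent(watts):
    return watts / BASAL_WATTS

print(human_equivalent(100))  # a 100 W light bulb → 1.25 H-e
print(human_equivalent(746))  # one horsepower → 9.325 H-e
```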
Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, proteins and oxygen. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark in a forest fire, or it may be made available more slowly for animal or human metabolism when organic molecules are ingested and catabolism is triggered by enzyme action.
All living creatures rely on an external source of energy to be able to grow and reproduce – radiant energy from the Sun in the case of green plants and chemical energy (in some form) in the case of animals. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as food molecules, mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidized to carbon dioxide and water in the mitochondria
{\displaystyle {\ce {C6H12O6 + 6O2 -> 6CO2 + 6H2O}}}
{\displaystyle {\ce {C57H110O6 + (81 1/2) O2 -> 57CO2 + 55H2O}}}
and some of the energy is used to convert ADP into ATP.
The rest of the chemical energy of the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:
gain in kinetic energy of a sprinter during a 100 m race: 4 kJ
gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
daily food intake of a normal adult: 6–8 MJ
It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy); most machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings"). Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology. As an example, to take just the first step in the food chain: of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat.
=== Earth sciences ===
In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations in our atmosphere brought about by solar energy.
Sunlight is the main input to Earth's energy budget which accounts for its temperature and climate stability. Sunlight may be stored as gravitational potential energy after it strikes the Earth, as, for example, when water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives most weather phenomena, save a few exceptions, like those generated by volcanic events for example. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, suddenly give up some of their thermal energy to power a few days of violent air movement.
In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may later be transformed into active kinetic energy during landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars (which created these atoms).
=== Cosmology ===
In cosmology and astronomy the phenomena of stars, nova, supernova, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen).
The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight.
=== Quantum mechanics ===
In quantum mechanics, energy is defined in terms of the energy operator
(Hamiltonian) as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation:
{\displaystyle E=h\nu }

(where h is the Planck constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.
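Planck's relation is a one-line computation; as an illustrative example (the chosen frequency, roughly that of green light, is an assumption, not from the article):

```python
# Photon energy from Planck's relation E = hν.
h = 6.62607015e-34  # Planck constant, J·s

def photon_energy(frequency_hz):
    return h * frequency_hz

E = photon_energy(5.6e14)            # ~green light
print(E)                             # ≈ 3.7e-19 J per photon
print(E / 1.602176634e-19)           # ≈ 2.3 eV
```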
=== Relativity ===
When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body:
{\displaystyle E_{0}=m_{0}c^{2},}

where

m0 is the rest mass of the body,
c is the speed of light in vacuum,
E0 is the rest energy.
For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons.
In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.
Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws.
In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of spacetime (= boosts).
== Transformation ==
Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery (from chemical energy to electric energy), a dam (from gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator), and a heat engine (from heat to work).
Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. The Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that itself (since it still contains the same total energy even in different forms) but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy.
There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.
Energy transformations in the universe over time are characterized by various kinds of potential energy, that has been available since the Big Bang, being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae to "store" energy in the creation of heavy isotopes (such as uranium and thorium), and nuclear decay, a process in which energy is released that was originally stored in these heavy elements, before they were incorporated into the Solar System and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic and thermal energy in a very short time.
Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at its maximum. At its lowest point the kinetic energy is at its maximum and is equal to the decrease in potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever.
Energy is also transferred from potential energy (Ep) to kinetic energy (Ek) and then back to potential energy constantly. This is referred to as conservation of energy. In this isolated system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:

{\displaystyle E_{p,{\text{initial}}}+E_{k,{\text{initial}}}=E_{p,{\text{final}}}+E_{k,{\text{final}}}}

The equation can then be simplified further since {\displaystyle E_{p}=mgh} (mass times acceleration due to gravity times the height) and {\textstyle E_{k}={\frac {1}{2}}mv^{2}} (half mass times velocity squared). Then the total amount of energy can be found by adding {\displaystyle E_{p}+E_{k}=E_{\text{total}}}.
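The pendulum's energy bookkeeping can be sketched directly: fixing the total energy at the release point, the speed at any height follows from Ek = Etotal − Ep (the mass, release height, and g value are illustrative assumptions):

```python
import math

# Conservation sketch for an idealized (frictionless) pendulum:
# total E = Ep + Ek stays constant as the bob trades height for speed.
g = 9.81   # gravitational acceleration, m/s^2
m = 1.0    # bob mass, kg
h0 = 0.5   # release height above the lowest point, metres

E_total = m * g * h0  # all potential energy at the highest point

for h in (0.5, 0.25, 0.0):
    Ep = m * g * h
    Ek = E_total - Ep
    v = math.sqrt(2 * Ek / m)   # from Ek = (1/2) m v^2
    print(f"h={h:.2f} m  Ep={Ep:.3f} J  Ek={Ek:.3f} J  v={v:.3f} m/s")
```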
=== Conservation of energy and mass in transformation ===
Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass–energy equivalence. The formula E = mc2, derived by Albert Einstein (1905) quantifies the relationship between relativistic mass and energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass–energy equivalence#History for further information).
Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since c² is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~9×10^16 joules, equivalent to 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons.
Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics. Often, however, the complete conversion of matter (such as atoms) to non-matter (such as photons) is forbidden by conservation laws.
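Both directions of the conversion scale can be checked with E = mc²: 1 kg of rest mass corresponds to the ~9×10^16 J cited in the text, while an everyday quantity of energy such as one kilowatt-hour corresponds to an unweighably small mass (function names are illustrative):

```python
# Scale of mass–energy equivalence, E = m c^2, in both directions.
c = 299792458.0  # speed of light, m/s

def mass_to_energy(m_kg):
    return m_kg * c**2

def energy_to_mass(E_joules):
    return E_joules / c**2

print(mass_to_energy(1.0))    # ≈ 8.99e16 J, roughly 21 megatons of TNT
print(energy_to_mass(3.6e6))  # 1 kWh ↔ ≈ 4e-11 kg: far too small to weigh
```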
=== Reversible and non-reversible transformations ===
Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above.
In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as thermal energy and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states, in the universe (such as an expansion of matter, or a randomization in a crystal).
As the universe evolves with time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or as other kinds of increases in disorder). This has led to the hypothesis of the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), continues to decrease.
== Conservation of energy ==
The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out as work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.
While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations. The total energy of a system can be calculated by adding up all forms of energy in the system.
Richard Feynman said during a 1961 lecture:
There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law – it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same.
Most kinds of energy (with gravitational energy being a notable exception) are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.
This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonical conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle – it is impossible to define the exact amount of energy during any definite time interval (though this is practically significant only for very short time intervals). The uncertainty principle should not be confused with energy conservation – rather it provides mathematical limits to which energy can in principle be defined and measured.
Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass, whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it.
In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by

{\displaystyle \Delta E\Delta t\geq {\frac {\hbar }{2}}}

which is similar in form to the Heisenberg uncertainty principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).
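Rearranging the inequality gives the minimum energy uncertainty for a given time scale, ΔE ≥ ħ/(2Δt); a short sketch with an illustrative lifetime:

```python
# Energy–time uncertainty: minimum ΔE for a given time scale Δt,
# from ΔE Δt ≥ ħ/2.
hbar = 1.054571817e-34  # reduced Planck constant, J·s

def min_energy_uncertainty(dt_seconds):
    return hbar / (2 * dt_seconds)

# A state that exists for 1 ns has an energy width of at least ~5e-26 J:
print(min_energy_uncertainty(1e-9))
```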
In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum. The exchange of virtual particles with real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for the Van der Waals force and some other observable phenomena.
== Energy transfer ==
=== Closed systems ===
Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat. Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy, tidal interactions, and the conductive transfer of thermal energy.
Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law:
{\displaystyle \Delta E=W+Q}
where ΔE is the amount of energy transferred, W represents the work done on or by the system, and Q represents the heat flow into or out of the system. As a simplification, the heat term, Q, can sometimes be ignored, especially for fast processes involving gases, which are poor conductors of heat, or when the thermal efficiency of the transfer is high. For such adiabatic processes,
{\displaystyle \Delta E=W}
This simplified equation is the one used to define the joule, for example.
=== Open systems ===
Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (this process is illustrated by injection of an air-fuel mixture into a car engine, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by
{\displaystyle E_{\text{matter}}}, one may write
{\displaystyle \Delta E=W+Q+E_{\text{matter}}.}
== Thermodynamics ==
=== Internal energy ===
Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.
=== First law of thermodynamics ===
The first law of thermodynamics asserts that the total energy of a system and its surroundings (but not necessarily thermodynamic free energy) is always conserved and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as
{\displaystyle \mathrm {d} E=T\mathrm {d} S-P\mathrm {d} V\,,}
where the first term on the right is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and its change dS is positive when heat is added to the system), and the last term on the right hand side is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system).
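As a numerical sanity check of this relation, one can pick a concrete model, here a monatomic ideal gas (the model and the state values are our own choices for this sketch, not part of the article), and verify dE = T dS − P dV by finite differences:

```python
import numpy as np

# Model assumptions (not from the text): one mole of a monatomic ideal gas,
# with E = (3/2) n R T, P V = n R T, and S = n R (ln V + (3/2) ln T) + const.
R, n = 8.314, 1.0

def state(T, V):
    E = 1.5 * n * R * T
    S = n * R * (np.log(V) + 1.5 * np.log(T))   # entropy up to a constant
    P = n * R * T / V
    return E, S, P

T, V, dT, dV = 300.0, 0.02, 1e-6, 1e-9
E0, S0, P0 = state(T, V)
E1, S1, _ = state(T + dT, V + dV)
# The change in E should match T dS - P dV to first order:
print(np.isclose(E1 - E0, T * (S1 - S0) - P0 * dV, rtol=1e-4))  # True
```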
This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, as well as effects such as the advection of any form of energy other than heat and PV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by
{\displaystyle \mathrm {d} E=\delta Q+\delta W}
where δQ is the heat supplied to the system and δW is the work applied to the system.
=== Equipartition of energy ===
The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over a whole cycle, or over many cycles, average energy is equally split between kinetic and potential. This is an example of the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom, on average.
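A quick numerical illustration (the mass, stiffness, and amplitude below are arbitrary choices for this sketch): sampling one full period of a mass-spring oscillator shows the time-averaged kinetic and potential energies coming out equal.

```python
import numpy as np

# Mass-spring oscillator x(t) = A cos(wt): over one full period the average
# kinetic energy equals the average potential energy (both equal k*A^2/4).
m, k, amp = 2.0, 8.0, 0.5           # hypothetical parameters
w = np.sqrt(k / m)
t = np.linspace(0.0, 2 * np.pi / w, 100000, endpoint=False)
x = amp * np.cos(w * t)
v = -amp * w * np.sin(w * t)
mean_ke = (0.5 * m * v**2).mean()   # time-averaged kinetic energy
mean_pe = (0.5 * k * x**2).mean()   # time-averaged potential energy
print(mean_ke, mean_pe)             # both 0.5, i.e. k*amp**2/4
```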
This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is part of the second law of thermodynamics. The second law of thermodynamics is simple only for systems which are near or in a physical equilibrium state. For non-equilibrium systems, the laws governing the systems' behavior are still debatable. One of the guiding principles for these systems is the principle of maximum entropy production. It states that nonequilibrium systems behave in such a way as to maximize their entropy production.
== Further reading ==
=== Journals ===
The Journal of Energy History / Revue d'histoire de l'énergie (JEHRHE), 2018–
== External links ==
Differences between Heat and Thermal energy (Archived 2016-08-27 at the Wayback Machine) – BioCab
In mathematics, spectral graph theory is the study of the properties of a graph in relationship to the characteristic polynomial, eigenvalues, and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix.
The adjacency matrix of a simple undirected graph is a real symmetric matrix and is therefore orthogonally diagonalizable; its eigenvalues are real algebraic integers.
While the adjacency matrix depends on the vertex labeling, its spectrum is a graph invariant, although not a complete one.
Spectral graph theory is also concerned with graph parameters that are defined via multiplicities of eigenvalues of matrices associated to the graph, such as the Colin de Verdière number.
== Cospectral graphs ==
Two graphs are called cospectral or isospectral if the adjacency matrices of the graphs are isospectral, that is, if the adjacency matrices have equal multisets of eigenvalues.
Cospectral graphs need not be isomorphic, but isomorphic graphs are always cospectral.
=== Graphs determined by their spectrum ===
A graph G is said to be determined by its spectrum if any other graph with the same spectrum as G is isomorphic to G.
Some first examples of families of graphs that are determined by their spectrum include:
The complete graphs.
The finite starlike trees.
=== Cospectral mates ===
A pair of graphs are said to be cospectral mates if they have the same spectrum, but are non-isomorphic.
The smallest pair of cospectral mates is {K1,4, C4 ∪ K1}, comprising the 5-vertex star and the graph union of the 4-vertex cycle and the single-vertex graph. The first example of cospectral graphs was reported by Collatz and Sinogowitz in 1957.
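This smallest pair is easy to verify numerically; the sketch below (assuming NumPy is available) builds both adjacency matrices and compares their adjacency spectra.

```python
import numpy as np

# K_{1,4}: the 5-vertex star, center 0 joined to vertices 1..4.
star = np.zeros((5, 5))
star[0, 1:] = star[1:, 0] = 1

# C_4 union K_1: a 4-cycle on vertices 0..3 plus the isolated vertex 4.
c4_k1 = np.zeros((5, 5))
for i in range(4):
    c4_k1[i, (i + 1) % 4] = c4_k1[(i + 1) % 4, i] = 1

spectrum = lambda a: np.sort(np.linalg.eigvalsh(a))
print(spectrum(star))   # [-2.  0.  0.  0.  2.]
print(spectrum(c4_k1))  # [-2.  0.  0.  0.  2.]
```

The two graphs are clearly non-isomorphic (one is connected, the other is not), yet both have spectrum {−2, 0, 0, 0, 2}.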
The smallest pair of polyhedral cospectral mates are enneahedra with eight vertices each.
=== Finding cospectral graphs ===
Almost all trees are cospectral, i.e., as the number of vertices grows, the fraction of trees for which there exists a cospectral tree goes to 1.
A pair of regular graphs are cospectral if and only if their complements are cospectral.
A pair of distance-regular graphs are cospectral if and only if they have the same intersection array.
Cospectral graphs can also be constructed by means of the Sunada method.
Other important sources of cospectral graphs are the point-collinearity graphs and the line-intersection graphs of point-line geometries. These graphs are always cospectral but are often non-isomorphic.
== Cheeger inequality ==
The famous Cheeger's inequality from Riemannian geometry has a discrete analogue involving the Laplacian matrix; this is perhaps the most important theorem in spectral graph theory and one of the most useful facts in algorithmic applications. It approximates the sparsest cut of a graph through the second eigenvalue of its Laplacian.
=== Cheeger constant ===
The Cheeger constant (also Cheeger number or isoperimetric number) of a graph is a numerical measure of whether or not a graph has a "bottleneck". The Cheeger constant as a measure of "bottleneckedness" is of great interest in many areas: for example, constructing well-connected networks of computers, card shuffling, and low-dimensional topology (in particular, the study of hyperbolic 3-manifolds).
More formally, the Cheeger constant h(G) of a graph G on n vertices is defined as
{\displaystyle h(G)=\min _{0<|S|\leq {\frac {n}{2}}}{\frac {|\partial (S)|}{|S|}},}
where the minimum is over all nonempty sets S of at most n/2 vertices and ∂(S) is the edge boundary of S, i.e., the set of edges with exactly one endpoint in S.
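For small graphs, h(G) can be computed directly from this definition by enumerating all candidate sets S. The brute-force sketch below (exponential in the number of vertices, for illustration only) does this for a path graph, whose bottleneck is its middle edge.

```python
import itertools

def cheeger_constant(adj):
    """Brute-force h(G) from the definition: minimize |boundary(S)| / |S|
    over all vertex sets S with 0 < |S| <= n/2."""
    n = len(adj)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for s in itertools.combinations(range(n), k):
            inside = set(s)
            # Count edges with exactly one endpoint in S.
            boundary = sum(
                adj[i][j] for i in inside for j in range(n) if j not in inside
            )
            best = min(best, boundary / len(inside))
    return best

# Path graph 0-1-2-3: cutting the middle edge isolates half the vertices.
p4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(cheeger_constant(p4))  # 0.5
```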
=== Cheeger inequality ===
When the graph G is d-regular, there is a relationship between h(G) and the spectral gap d − λ2 of G. An inequality due to Dodziuk and independently Alon and Milman states that
{\displaystyle {\frac {1}{2}}(d-\lambda _{2})\leq h(G)\leq {\sqrt {2d(d-\lambda _{2})}}.}
This inequality is closely related to the Cheeger bound for Markov chains and can be seen as a discrete version of Cheeger's inequality in Riemannian geometry.
For general connected graphs that are not necessarily regular, an alternative inequality is given by Chung (p. 35):
{\displaystyle {\frac {1}{2}}{\lambda }\leq {\mathbf {h} }(G)\leq {\sqrt {2\lambda }},}
where λ is the least nontrivial eigenvalue of the normalized Laplacian, and h(G) is the (normalized) Cheeger constant
{\displaystyle {\mathbf {h} }(G)=\min _{\emptyset \not =S\subset V(G)}{\frac {|\partial (S)|}{\min({\mathrm {vol} }(S),{\mathrm {vol} }({\bar {S}}))}}}
where vol(Y) is the sum of the degrees of the vertices in Y.
== Hoffman–Delsarte inequality ==
There is an eigenvalue bound for independent sets in regular graphs, originally due to Alan J. Hoffman and Philippe Delsarte.
Suppose that G is a k-regular graph on n vertices with least eigenvalue λ_min. Then:
{\displaystyle \alpha (G)\leq {\frac {n}{1-{\frac {k}{\lambda _{\mathrm {min} }}}}}}
where α(G) denotes its independence number.
This bound has been applied to establish e.g. algebraic proofs of the Erdős–Ko–Rado theorem and its analogue for intersecting families of subspaces over finite fields.
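As an illustration (the Petersen graph is our choice of example, not the article's), the bound is tight for the Petersen graph: it is 3-regular on 10 vertices with least eigenvalue −2, and the bound evaluates to 10/(1 + 3/2) = 4, which equals its independence number.

```python
import itertools
import numpy as np

# The Petersen graph as the Kneser graph K(5,2): vertices are the 2-element
# subsets of {0,...,4}, adjacent exactly when the subsets are disjoint.
verts = list(itertools.combinations(range(5), 2))
n = len(verts)                                  # 10 vertices
adj = np.array([[1.0 if not set(u) & set(v) else 0.0 for v in verts]
                for u in verts])

k = int(adj[0].sum())                           # the graph is 3-regular
lam_min = np.linalg.eigvalsh(adj)[0]            # least eigenvalue, here -2
bound = n / (1 - k / lam_min)                   # Hoffman-Delsarte bound
print(k, round(bound, 6))  # 3 4.0
```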
For general graphs which are not necessarily regular, a similar upper bound for the independence number can be derived using the maximum eigenvalue λ′_max of the normalized Laplacian of G:
{\displaystyle \alpha (G)\leq n(1-{\frac {1}{\lambda '_{\mathrm {max} }}}){\frac {\mathrm {maxdeg} }{\mathrm {mindeg} }}}
where
m
a
x
d
e
g
{\displaystyle {\mathrm {maxdeg} }}
and
m
i
n
d
e
g
{\displaystyle {\mathrm {mindeg} }}
denote the maximum and minimum degree in
G
{\displaystyle G}
, respectively. This a consequence of a more general inequality (pp. 109 in
):
{\displaystyle {\mathrm {vol} }(X)\leq (1-{\frac {1}{\lambda '_{\mathrm {max} }}}){\mathrm {vol} }(V(G))}
where X is an independent set of vertices and vol(Y) denotes the sum of the degrees of the vertices in Y.
== Historical outline ==
Spectral graph theory emerged in the 1950s and 1960s. Besides graph-theoretic research on the relationship between structural and spectral properties of graphs, another major source was research in quantum chemistry, but the connections between these two lines of work were not discovered until much later. The 1980 monograph Spectra of Graphs by Cvetković, Doob, and Sachs summarised nearly all research to date in the area. In 1988 it was updated by the survey Recent Results in the Theory of Graph Spectra. The 3rd edition of Spectra of Graphs (1995) contains a summary of the further recent contributions to the subject. Discrete geometric analysis, created and developed by Toshikazu Sunada in the 2000s, deals with spectral graph theory in terms of discrete Laplacians associated with weighted graphs, and finds application in various fields, including shape analysis. In recent years, spectral graph theory has expanded to vertex-varying graphs, which are often encountered in many real-life applications.
== See also ==
Strongly regular graph
Algebraic connectivity
Algebraic graph theory
Spectral clustering
Spectral shape analysis
Estrada index
Lovász theta
Expander graph
== References ==
Alon; Spencer (2011), The probabilistic method, Wiley.
Brouwer, Andries; Haemers, Willem H. (2011), Spectra of Graphs (PDF), Springer
Hoory; Linial; Wigderson (2006), Expander graphs and their applications (PDF)
Chung, Fan (1997). Spectral Graph Theory. Providence, RI: American Mathematical Society. ISBN 0821803158. MR 1421568. (The first four chapters are available on the author's website.)
Schwenk, A. J. (1973). "Almost All Trees are Cospectral". In Harary, Frank (ed.). New Directions in the Theory of Graphs. New York: Academic Press. ISBN 012324255X. OCLC 890297242.
Nica, Bogdan (2018). A Brief Introduction to Spectral Graph Theory. Zurich: EMS Press. ISBN 978-3-03719-188-0.
Kurasov, Pavel (2024). Spectral Geometry of Graphs. Springer (Birkhäuser). Open access (CC BY 4.0).
== External links ==
Spielman, Daniel (2011). "Spectral Graph Theory" (PDF). [chapter from Combinatorial Scientific Computing]
Spielman, Daniel (2007). "Spectral Graph Theory and its Applications". [presented at FOCS 2007 Conference]
Spielman, Daniel (2004). "Spectral Graph Theory and its Applications". [course page and lecture notes]
In numerical linear algebra, the QR algorithm or QR iteration is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.
== The practical QR algorithm ==
Formally, let A be a real matrix of which we want to compute the eigenvalues, and let A0 := A. At the k-th step (starting with k = 0), we compute the QR decomposition Ak = Qk Rk where Qk is an orthogonal matrix (i.e., QT = Q−1) and Rk is an upper triangular matrix. We then form Ak+1 = Rk Qk. Note that
{\displaystyle A_{k+1}=R_{k}Q_{k}=Q_{k}^{-1}Q_{k}R_{k}Q_{k}=Q_{k}^{-1}A_{k}Q_{k}=Q_{k}^{\mathsf {T}}A_{k}Q_{k},}
so all the Ak are similar and hence they have the same eigenvalues. The algorithm is numerically stable because it proceeds by orthogonal similarity transforms.
Under certain conditions, the matrices Ak converge to a triangular matrix, the Schur form of A. The eigenvalues of a triangular matrix are listed on the diagonal, and the eigenvalue problem is solved. In testing for convergence it is impractical to require exact zeros, but the Gershgorin circle theorem provides a bound on the error.
If the matrices converge, then the eigenvalues along the diagonal will appear according to their geometric multiplicity. To guarantee convergence, A must be a symmetric matrix, and for each nonzero eigenvalue λ there must not be a corresponding eigenvalue −λ. Because a single QR iteration costs {\displaystyle {\mathcal {O}}(n^{3})} arithmetic operations and the convergence is only linear, the standard QR algorithm is extremely expensive to compute, especially considering it is not guaranteed to converge.
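A minimal sketch of this basic iteration, relying on NumPy's QR factorization (the symmetric test matrix is our own example, not the article's): the iterates approach a diagonal matrix whose entries are the eigenvalues.

```python
import numpy as np

def qr_algorithm(a, iters=200):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k, an orthogonal similarity
    transform. For a symmetric matrix with eigenvalues of distinct magnitude,
    the iterates converge (slowly) to a diagonal matrix of eigenvalues."""
    a = np.array(a, dtype=float)
    for _ in range(iters):
        q, r = np.linalg.qr(a)
        a = r @ q
    return a

a = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
approx = np.sort(np.diag(qr_algorithm(a)))  # diagonal of the converged iterate
exact = np.sort(np.linalg.eigvalsh(a))      # reference eigenvalues
print(np.allclose(approx, exact, atol=1e-8))  # True
```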
=== Using Hessenberg form ===
In the above crude form the iterations are relatively expensive. This can be mitigated by first bringing the matrix A to upper Hessenberg form (which costs {\textstyle {\tfrac {10}{3}}n^{3}+{\mathcal {O}}(n^{2})} arithmetic operations using a technique based on Householder reduction), with a finite sequence of orthogonal similarity transforms, somewhat like a two-sided QR decomposition. (For QR decomposition, the Householder reflectors are multiplied only on the left, but for the Hessenberg case they are multiplied on both left and right.) Determining the QR decomposition of an upper Hessenberg matrix costs {\textstyle 6n^{2}+{\mathcal {O}}(n)} arithmetic operations. Moreover, because the Hessenberg form is already nearly upper-triangular (it has just one nonzero entry below each diagonal), using it as a starting point reduces the number of steps required for convergence of the QR algorithm.
If the original matrix is symmetric, then the upper Hessenberg matrix is also symmetric and thus tridiagonal, and so are all the Ak. In this case reaching Hessenberg form costs {\textstyle {\tfrac {4}{3}}n^{3}+{\mathcal {O}}(n^{2})} arithmetic operations using a technique based on Householder reduction. Determining the QR decomposition of a symmetric tridiagonal matrix costs {\displaystyle {\mathcal {O}}(n)} operations.
=== Iteration phase ===
If a Hessenberg matrix A has element a_{k,k−1} = 0 for some k, i.e., if one of the elements just below the diagonal is in fact zero, then it decomposes into blocks whose eigenproblems may be solved separately; an eigenvalue is either an eigenvalue of the submatrix of the first k − 1 rows and columns, or an eigenvalue of the submatrix of the remaining rows and columns. The purpose of the QR iteration step is to shrink one of these a_{k,k−1} elements so that effectively a small block along the diagonal is split off from the bulk of the matrix. In the case of a real eigenvalue that is usually the 1 × 1 block in the lower right corner (in which case element a_{nn} holds that eigenvalue), whereas in the case of a pair of conjugate complex eigenvalues it is the 2 × 2 block in the lower right corner.
The rate of convergence depends on the separation between eigenvalues, so a practical algorithm will use shifts, either explicit or implicit, to increase separation and accelerate convergence. A typical symmetric QR algorithm isolates each eigenvalue (then reduces the size of the matrix) with only one or two iterations, making it efficient as well as robust.
==== A single iteration with explicit shift ====
The steps of a QR iteration with explicit shift on a real Hessenberg matrix A are:
Step 1. Pick a shift μ and subtract it from all diagonal elements, producing the matrix A − μI. A basic strategy is to use μ = a_{n,n}, but there are more refined strategies that would further accelerate convergence. The idea is that μ should be close to an eigenvalue, since making this shift will accelerate convergence to that eigenvalue.
Step 2. Perform a sequence of Givens rotations G_1, G_2, …, G_{n−1} on A − μI, where G_i acts on rows i and i + 1, and G_i is chosen to zero out position (i + 1, i) of G_{i−1} ⋯ G_1(A − μI). This produces the upper triangular matrix R = G_{n−1} ⋯ G_1(A − μI). The orthogonal factor Q would be G_1^T G_2^T ⋯ G_{n−1}^T, but it is neither necessary nor efficient to produce it explicitly.
Step 3. Now multiply R by the Givens matrices G_1^T, G_2^T, …, G_{n−1}^T on the right, where G_i^T instead acts on columns i and i + 1. This produces the matrix RQ = R G_1^T G_2^T ⋯ G_{n−1}^T, which is again in Hessenberg form.
Step 4. Finally undo the shift by adding μ to all diagonal entries. The result is A′ = RQ + μI. Since Q commutes with I, we have A′ = Q^T(A − μI)Q + μI = Q^T A Q.
The purpose of the shift is to change which Givens rotations are chosen.
In more detail, the structure of one of these G_i matrices is
{\displaystyle G_{i}={\begin{bmatrix}I&0&0&0\\0&c&-s&0\\0&s&c&0\\0&0&0&I\end{bmatrix}}}
where the I in the upper left corner is an (i − 1) × (i − 1) identity matrix, and the two scalars c = cos θ and s = sin θ are determined by what rotation angle θ is appropriate for zeroing out position (i + 1, i). It is not necessary to exhibit θ; the factors c and s can be determined directly from the elements in the matrix that G_i should act on. Nor is it necessary to produce the whole matrix; multiplication (from the left) by G_i only affects rows i and i + 1, so it is easier to just update those two rows in place. Likewise, for the Step 3 multiplication by G_i^T from the right, it is sufficient to remember i, c, and s.
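The procedure above can be sketched as follows (a simplified illustration rather than a production implementation; the 3 × 3 test matrix is an assumption of this example):

```python
import numpy as np

def shifted_qr_step(a):
    """One explicit-shift QR step on an upper Hessenberg matrix via Givens
    rotations: returns Q^T (A - mu*I) Q + mu*I, which equals Q^T A Q."""
    h = np.array(a, dtype=float)
    n = h.shape[0]
    mu = h[-1, -1]                      # basic shift strategy: mu = a_{n,n}
    h -= mu * np.eye(n)
    rots = []
    for i in range(n - 1):              # left rotations zero the subdiagonal
        x, y = h[i, i], h[i + 1, i]
        r = np.hypot(x, y)
        c, s = (1.0, 0.0) if r == 0 else (x / r, -y / r)
        g = np.array([[c, -s], [s, c]])  # G_i, acting on rows i and i+1
        h[i:i + 2, :] = g @ h[i:i + 2, :]
        rots.append((i, g))
    for i, g in rots:                   # right multiplications by G_i^T
        h[:, i:i + 2] = h[:, i:i + 2] @ g.T
    return h + mu * np.eye(n)           # undo the shift

a = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
a1 = shifted_qr_step(a)
# Eigenvalues are preserved, since the step is a similarity transform:
print(np.allclose(np.sort(np.linalg.eigvals(a1).real),
                  np.sort(np.linalg.eigvalsh(a))))  # True
```

Only the two affected rows (or columns) are updated per rotation, matching the in-place update described in the text.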
If using the simple μ = a_{n,n} strategy, then at the beginning of Step 2 we have a matrix
{\displaystyle A-a_{n,n}I={\begin{pmatrix}\times &\times &\times &\times &\times \\\times &\times &\times &\times &\times \\0&\times &\times &\times &\times \\0&0&\times &\times &\times \\0&0&0&\times &0\end{pmatrix}}}
where the × denotes "could be whatever". The first Givens rotation G_1 zeroes out the (2, 1) position, producing
{\displaystyle G_{1}(A-a_{n,n}I)={\begin{pmatrix}\times &\times &\times &\times &\times \\0&\times &\times &\times &\times \\0&\times &\times &\times &\times \\0&0&\times &\times &\times \\0&0&0&\times &0\end{pmatrix}}{\text{.}}}
Each new rotation zeroes out another subdiagonal element, thus increasing the number of known zeroes until we are at
{\displaystyle H=G_{n-2}\dotsb G_{1}(A-a_{n,n}I)={\begin{pmatrix}\times &\times &\times &\times &\times \\0&\times &\times &\times &\times \\0&0&\times &\times &\times \\0&0&0&h_{n-1,n-1}&h_{n-1,n}\\0&0&0&h_{n,n-1}&0\end{pmatrix}}{\text{.}}}
The final rotation G_{n−1} has (c, s) chosen so that s·h_{n−1,n−1} + c·h_{n,n−1} = 0. If |h_{n−1,n−1}| ≫ |h_{n,n−1}|, as is typically the case when we approach convergence, then c ≈ 1 and |s| ≪ 1. Making this rotation produces
{\displaystyle R=G_{n-1}G_{n-2}\dotsb G_{1}(A-a_{n,n}I)={\begin{pmatrix}\times &\times &\times &\times &\times \\0&\times &\times &\times &\times \\0&0&\times &\times &\times \\0&0&0&\times &ch_{n-1,n}\\0&0&0&0&sh_{n-1,n}\end{pmatrix}}{\text{,}}}
which is our upper triangular matrix. But now we reach Step 3, and need to start rotating data between columns. The first rotation acts on columns 1 and 2, producing
{\displaystyle RG_{1}^{\mathrm {T} }={\begin{pmatrix}\times &\times &\times &\times &\times \\\times &\times &\times &\times &\times \\0&0&\times &\times &\times \\0&0&0&\times &ch_{n-1,n}\\0&0&0&0&sh_{n-1,n}\end{pmatrix}}{\text{.}}}
The expected pattern is that each rotation moves some nonzero value from the diagonal out to the subdiagonal, returning the matrix to Hessenberg form. This ends at
{\displaystyle RG_{1}^{\mathrm {T} }\dotsb G_{n-1}^{\mathrm {T} }={\begin{pmatrix}\times &\times &\times &\times &\times \\\times &\times &\times &\times &\times \\0&\times &\times &\times &\times \\0&0&\times &\times &\times \\0&0&0&-s^{2}h_{n-1,n}&csh_{n-1,n}\end{pmatrix}}{\text{.}}}
Algebraically the form is unchanged, but numerically the element in position (n, n−1) has gotten a lot closer to zero: there used to be a factor s gap between it and the diagonal element above, but now the gap is more like a factor s^2, and another iteration would make it a factor s^4; we have quadratic convergence. Practically that means O(1) iterations per eigenvalue suffice for convergence, so overall we can complete the computation in O(n) QR steps, each of which requires a mere O(n^2) arithmetic operations (or as little as O(n) operations when A is symmetric).
== Visualization ==
The basic QR algorithm can be visualized in the case where A is a positive-definite symmetric matrix. In that case, A can be depicted as an ellipse in 2 dimensions or an ellipsoid in higher dimensions. The relationship between the input to the algorithm and a single iteration can then be depicted as in Figure 1 (click to see an animation). Note that the LR algorithm is depicted alongside the QR algorithm.
A single iteration causes the ellipse to tilt or "fall" towards the x-axis. In the event where the large semi-axis of the ellipse is parallel to the x-axis, one iteration of QR does nothing. Another situation where the algorithm "does nothing" is when the large semi-axis is parallel to the y-axis instead of the x-axis. In that event, the ellipse can be thought of as balancing precariously without being able to fall in either direction. In both situations, the matrix is diagonal. A situation where an iteration of the algorithm "does nothing" is called a fixed point. The strategy employed by the algorithm is iteration towards a fixed point. Observe that one fixed point is stable while the other is unstable. If the ellipse were tilted away from the unstable fixed point by a very small amount, one iteration of QR would cause the ellipse to tilt away from the fixed point instead of towards it. Eventually though, the algorithm would converge to a different fixed point, but it would take a long time.
=== Finding eigenvalues versus finding eigenvectors ===
It's worth pointing out that finding even a single eigenvector of a symmetric matrix is not computable (in exact real arithmetic according to the definitions in computable analysis). This difficulty exists whenever the multiplicities of a matrix's eigenvalues are not knowable. On the other hand, the same problem does not exist for finding eigenvalues. The eigenvalues of a matrix are always computable.
We will now discuss how these difficulties manifest in the basic QR algorithm. This is illustrated in Figure 2. Recall that the ellipses represent positive-definite symmetric matrices. As the two eigenvalues of the input matrix approach each other, the input ellipse changes into a circle. A circle corresponds to a multiple of the identity matrix. A near-circle corresponds to a near-multiple of the identity matrix whose eigenvalues are nearly equal to the diagonal entries of the matrix. Therefore, the problem of approximately finding the eigenvalues is shown to be easy in that case. But notice what happens to the semi-axes of the ellipses. An iteration of QR (or LR) tilts the semi-axes less and less as the input ellipse gets closer to being a circle. The eigenvectors can only be known when the semi-axes are parallel to the x-axis and y-axis. The number of iterations needed to achieve near-parallelism increases without bound as the input ellipse becomes more circular.
While it may be impossible to compute the eigendecomposition of an arbitrary symmetric matrix, it is always possible to perturb the matrix by an arbitrarily small amount and compute the eigendecomposition of the resulting matrix. In the case when the matrix is depicted as a near-circle, the matrix can be replaced with one whose depiction is a perfect circle. In that case, the matrix is a multiple of the identity matrix, and its eigendecomposition is immediate. Be aware though that the resulting eigenbasis can be quite far from the original eigenbasis.
=== Speeding up: Shifting and deflation ===
The slowdown when the ellipse gets more circular has a converse: it turns out that when the ellipse gets more stretched, and less circular, the rotation of the ellipse becomes faster. Such a stretch can be induced when the matrix M which the ellipse represents is replaced with M − λI, where λ is approximately the smallest eigenvalue of M. In this case, the ratio of the two semi-axes of the ellipse approaches ∞. In higher dimensions, shifting like this makes the length of the smallest semi-axis of an ellipsoid small relative to the other semi-axes, which speeds up convergence to the smallest eigenvalue but does not speed up convergence to the other eigenvalues. This becomes useless once the smallest eigenvalue is fully determined, so the matrix must then be deflated, which simply means removing its last row and column.
The issue with the unstable fixed point also needs to be addressed. The shifting heuristic is often designed to deal with this problem as well: Practical shifts are often discontinuous and randomised. Wilkinson's shift—which is well-suited for symmetric matrices like the ones we're visualising—is in particular discontinuous.
== The implicit QR algorithm ==
In modern computational practice, the QR algorithm is performed in an implicit version which makes the use of multiple shifts easier to introduce. The matrix is first brought to upper Hessenberg form A_0 = QAQ^T as in the explicit version; then, at each step, the first column of A_k is transformed via a small-size Householder similarity transformation to the first column of p(A_k) (that is, p(A_k)e_1), where p, of degree r, is the polynomial that defines the shifting strategy (often p(x) = (x − λ)(x − λ̄), where λ and λ̄ are the two eigenvalues of the trailing 2 × 2 principal submatrix of A_k, the so-called implicit double-shift). Then successive Householder transformations of size r + 1 are performed in order to return the working matrix A_k to upper Hessenberg form. This operation is known as bulge chasing, due to the peculiar shape of the non-zero entries of the matrix along the steps of the algorithm. As in the first version, deflation is performed as soon as one of the sub-diagonal entries of A_k is sufficiently small.
=== Renaming proposal ===
Since in the modern implicit version of the procedure no QR decompositions are explicitly performed, some authors, for instance Watkins, suggested changing its name to Francis algorithm. Golub and Van Loan use the term Francis QR step.
== Interpretation and convergence ==
The QR algorithm can be seen as a more sophisticated variation of the basic "power" eigenvalue algorithm. Recall that the power algorithm repeatedly multiplies A times a single vector, normalizing after each iteration. The vector converges to an eigenvector of the largest eigenvalue. Instead, the QR algorithm works with a complete basis of vectors, using QR decomposition to renormalize (and orthogonalize). For a symmetric matrix A, upon convergence, AQ = QΛ, where Λ is the diagonal matrix of eigenvalues to which A converged, and where Q is a composite of all the orthogonal similarity transforms required to get there. Thus the columns of Q are the eigenvectors.
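The interpretation above can be checked directly. The following hedged NumPy sketch builds a symmetric matrix with known eigenvalues 1, 2, 3, 4, runs the basic unshifted iteration Aₖ₊₁ = RₖQₖ, and accumulates the orthogonal factors; the diagonal of Aₖ approaches the eigenvalues and AQ ≈ QΛ holds at convergence. The test matrix and iteration count are illustrative.

```python
import numpy as np

# Build a symmetric test matrix with known eigenvalues 1..4.
rng = np.random.default_rng(0)
Q0, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q0 @ np.diag([4.0, 3.0, 2.0, 1.0]) @ Q0.T

Ak = A.copy()
Qtot = np.eye(4)                 # accumulates the orthogonal similarity
for _ in range(300):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q                   # Ak stays similar to A: Ak = Qtot^T A Qtot
    Qtot = Qtot @ Q

# The diagonal of the (near-)converged Ak holds the eigenvalues, and the
# columns of Qtot approximate eigenvectors, so A @ Qtot ~ Qtot @ Lambda.
print(np.round(np.sort(np.diag(Ak)), 6))   # approaches [1, 2, 3, 4]
```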
== History ==
The QR algorithm was preceded by the LR algorithm, which uses the LU decomposition instead of the QR decomposition. The QR algorithm is more stable, so the LR algorithm is rarely used nowadays. However, it represents an important step in the development of the QR algorithm.
The LR algorithm was developed in the early 1950s by Heinz Rutishauser, who worked at that time as a research assistant of Eduard Stiefel at ETH Zurich. Stiefel suggested that Rutishauser use the sequence of moments y₀ᵀAᵏx₀, k = 0, 1, … (where x₀ and y₀ are arbitrary vectors) to find the eigenvalues of A. Rutishauser took an algorithm of Alexander Aitken for this task and developed it into the quotient–difference algorithm or qd algorithm. After arranging the computation in a suitable shape, he discovered that the qd algorithm is in fact the iteration Aₖ = LₖUₖ (LU decomposition), Aₖ₊₁ = UₖLₖ, applied on a tridiagonal matrix, from which the LR algorithm follows.
== Other variants ==
One variant of the QR algorithm, the Golub–Kahan–Reinsch algorithm, starts with reducing a general matrix into a bidiagonal one. This variant of the QR algorithm for the computation of singular values was first described by Golub & Kahan (1965). The LAPACK subroutine DBDSQR implements this iterative method, with some modifications to cover the case where the singular values are very small (Demmel & Kahan 1990). Together with a first step using Householder reflections and, if appropriate, QR decomposition, this forms the DGESVD routine for the computation of the singular value decomposition. The QR algorithm can also be implemented in infinite dimensions with corresponding convergence results.
== References ==
== Sources ==
Demmel, James; Kahan, William (1990). "Accurate singular values of bidiagonal matrices". SIAM Journal on Scientific and Statistical Computing. 11 (5): 873–912. CiteSeerX 10.1.1.48.3740. doi:10.1137/0911052.
Golub, Gene H.; Kahan, William (1965). "Calculating the singular values and pseudo-inverse of a matrix". Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis. 2 (2): 205–224. Bibcode:1965SJNA....2..205G. doi:10.1137/0702016. JSTOR 2949777.
== External links ==
Eigenvalue problem at PlanetMath.
Notes on orthogonal bases and the workings of the QR algorithm by Peter J. Olver
Module for the QR Method
C++ Library
In algebra, the terms left and right denote the order of a binary operation (usually, but not always, called "multiplication") in non-commutative algebraic structures.
A binary operation ∗ is usually written in the infix form:
s ∗ t
The argument s is placed on the left side, and the argument t is on the right side. Even if the symbol of the operation is omitted, the order of s and t does matter (unless ∗ is commutative).
A two-sided property is fulfilled on both sides. A one-sided property holds on one (unspecified) of the two sides.
Although the terms are similar, left–right distinction in algebraic parlance is not related either to left and right limits in calculus, or to left and right in geometry.
== Binary operation as an operator ==
A binary operation ∗ may be considered as a family of unary operators through currying:
Rt(s) = s ∗ t,
depending on t as a parameter – this is the family of right operations. Similarly,
Ls(t) = s ∗ t
defines the family of left operations parametrized with s.
If for some e, the left operation Le is the identity operation, then e is called a left identity. Similarly, if Re = id, then e is a right identity.
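The currying of a binary operation into families of left and right unary operations can be sketched in a few lines of Python. The example below uses string concatenation, a non-commutative operation whose two-sided identity is the empty string; the names `star`, `L`, and `R` are illustrative.

```python
# Currying a non-commutative binary operation: string concatenation.
def star(s, t):          # the binary operation s * t
    return s + t

def L(s):                # left operation L_s(t) = s * t
    return lambda t: star(s, t)

def R(t):                # right operation R_t(s) = s * t
    return lambda s: star(s, t)

e = ""                   # the empty string is a two-sided identity
assert L(e)("abc") == "abc"   # L_e = id, so e is a left identity
assert R(e)("abc") == "abc"   # R_e = id, so e is a right identity

# Concatenation is non-commutative, so the order of arguments matters:
assert star("ab", "cd") != star("cd", "ab")
```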
In ring theory, a subring which is invariant under any left multiplication in a ring is called a left ideal. Similarly, a right multiplication-invariant subring is a right ideal.
== Left and right modules ==
Over non-commutative rings, the left–right distinction is applied to modules, namely to specify the side where a scalar (module element) appears in the scalar multiplication.
The distinction is not purely syntactical because one gets two different associativity rules (the lowest row in the table) which link multiplication in a module with multiplication in a ring.
A bimodule is simultaneously a left and right module, with two different scalar multiplication operations, obeying an associativity condition on them.
== Other examples ==
Left eigenvectors
Left and right group actions
== In category theory ==
In category theory the usage of "left" and "right" has some algebraic resemblance, but refers to left and right sides of morphisms. See adjoint functors.
== See also ==
Operator associativity
== External links ==
Barile, Margherita. "right ideal". MathWorld.
Barile, Margherita. "left ideal". MathWorld.
Weisstein, Eric W. "left eigenvector". MathWorld.
In mathematics and physics, Laplace's equation is a second-order partial differential equation named after Pierre-Simon Laplace, who first studied its properties in 1786. This is often written as
{\displaystyle \nabla ^{2}\!f=0}
or
{\displaystyle \Delta f=0,}
where Δ = ∇ ⋅ ∇ = ∇² is the Laplace operator, ∇ ⋅ is the divergence operator (also symbolized "div"), ∇ is the gradient operator (also symbolized "grad"), and f(x, y, z) is a twice-differentiable real-valued function. The Laplace operator therefore maps a scalar function to another scalar function.
If the right-hand side is specified as a given function h(x, y, z), we have
{\displaystyle \Delta f=h}
This is called Poisson's equation, a generalization of Laplace's equation. Laplace's equation and Poisson's equation are the simplest examples of elliptic partial differential equations. Laplace's equation is also a special case of the Helmholtz equation.
The general theory of solutions to Laplace's equation is known as potential theory. The twice continuously differentiable solutions of Laplace's equation are the harmonic functions, which are important in multiple branches of physics, notably electrostatics, gravitation, and fluid dynamics. In the study of heat conduction, the Laplace equation is the steady-state heat equation. In general, Laplace's equation describes situations of equilibrium, or those that do not depend explicitly on time.
== Forms in different coordinate systems ==
In rectangular coordinates,
{\displaystyle \nabla ^{2}f={\frac {\partial ^{2}f}{\partial x^{2}}}+{\frac {\partial ^{2}f}{\partial y^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}=0.}
In cylindrical coordinates,
{\displaystyle \nabla ^{2}f={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \phi ^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}=0.}
In spherical coordinates, using the (r, θ, φ) convention,
{\displaystyle \nabla ^{2}f={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial f}{\partial \theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}}=0.}
More generally, in arbitrary curvilinear coordinates (ξi),
{\displaystyle \nabla ^{2}f={\frac {\partial }{\partial \xi ^{j}}}\left({\frac {\partial f}{\partial \xi ^{k}}}g^{kj}\right)+{\frac {\partial f}{\partial \xi ^{j}}}g^{jm}\Gamma _{mn}^{n}=0,}
or
{\displaystyle \nabla ^{2}f={\frac {1}{\sqrt {|g|}}}{\frac {\partial }{\partial \xi ^{i}}}\!\left({\sqrt {|g|}}g^{ij}{\frac {\partial f}{\partial \xi ^{j}}}\right)=0,\qquad (g=\det\{g_{ij}\})}
where gij is the Euclidean metric tensor relative to the new coordinates and Γ denotes its Christoffel symbols.
== Boundary conditions ==
The Dirichlet problem for Laplace's equation consists of finding a solution φ on some domain D such that φ on the boundary of D is equal to some given function. Since the Laplace operator appears in the heat equation, one physical interpretation of this problem is as follows: fix the temperature on the boundary of the domain according to the given specification of the boundary condition. Allow heat to flow until a stationary state is reached in which the temperature at each point on the domain does not change anymore. The temperature distribution in the interior will then be given by the solution to the corresponding Dirichlet problem.
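The heat-flow interpretation suggests a simple numerical scheme for the Dirichlet problem: replace the Laplacian with its five-point finite-difference approximation and relax the interior toward the steady state, each sweep replacing every interior value by the average of its four neighbours (Jacobi iteration). The sketch below is a minimal illustration; the grid size, boundary data, and tolerance are all assumptions for the example.

```python
import numpy as np

# Dirichlet problem on a square grid: hold the top edge at 1 and the
# other edges at 0, then relax the interior until it stops changing
# (the discrete steady-state temperature distribution).
n = 30
phi = np.zeros((n, n))
phi[0, :] = 1.0                  # fixed "hot" boundary row

for _ in range(20_000):
    new = phi.copy()
    new[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                              + phi[1:-1, :-2] + phi[1:-1, 2:])
    if np.max(np.abs(new - phi)) < 1e-10:
        phi = new
        break
    phi = new

# Maximum principle: interior values lie strictly between the
# boundary extremes 0 and 1.
print(phi[1:-1, 1:-1].min(), phi[1:-1, 1:-1].max())
```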
The Neumann boundary conditions for Laplace's equation specify not the function φ itself on the boundary of D but its normal derivative. Physically, this corresponds to the construction of a potential for a vector field whose effect is known at the boundary of D alone. For the example of the heat equation it amounts to prescribing the heat flux through the boundary. In particular, at an adiabatic boundary, the normal derivative of φ is zero.
Solutions of Laplace's equation are called harmonic functions; they are all analytic within the domain where the equation is satisfied. If any two functions are solutions to Laplace's equation (or any linear homogeneous differential equation), their sum (or any linear combination) is also a solution. This property, called the principle of superposition, is very useful. For example, solutions to complex problems can be constructed by summing simple solutions.
== In two dimensions ==
Laplace's equation in two independent variables in rectangular coordinates has the form
{\displaystyle {\frac {\partial ^{2}\psi }{\partial x^{2}}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}\equiv \psi _{xx}+\psi _{yy}=0.}
=== Analytic functions ===
The real and imaginary parts of a complex analytic function both satisfy the Laplace equation. That is, if z = x + iy, and if
{\displaystyle f(z)=u(x,y)+iv(x,y),}
then the necessary condition that f(z) be analytic is that u and v be differentiable and that the Cauchy–Riemann equations be satisfied:
{\displaystyle u_{x}=v_{y},\quad v_{x}=-u_{y}.}
where uₓ is the first partial derivative of u with respect to x.
It follows that
{\displaystyle u_{yy}=(-v_{x})_{y}=-(v_{y})_{x}=-(u_{x})_{x}.}
Therefore u satisfies the Laplace equation. A similar calculation shows that v also satisfies the Laplace equation.
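This calculation is easy to verify symbolically. The sketch below, using SymPy, takes an arbitrary analytic function (here z³, an illustrative choice), splits it into real and imaginary parts, and confirms that both are harmonic.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y
f = z**3                                   # any analytic function will do
u, v = sp.re(sp.expand(f)), sp.im(sp.expand(f))   # u = x^3 - 3xy^2, v = 3x^2y - y^3

laplace = lambda w: sp.diff(w, x, 2) + sp.diff(w, y, 2)
print(sp.simplify(laplace(u)), sp.simplify(laplace(v)))   # both 0
```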
Conversely, given a harmonic function, it is the real part of an analytic function, f(z) (at least locally). If a trial form is
{\displaystyle f(z)=\varphi (x,y)+i\psi (x,y),}
then the Cauchy–Riemann equations will be satisfied if we set
{\displaystyle \psi _{x}=-\varphi _{y},\quad \psi _{y}=\varphi _{x}.}
This relation does not determine ψ, but only its increments:
{\displaystyle d\psi =-\varphi _{y}\,dx+\varphi _{x}\,dy.}
The Laplace equation for φ implies that the integrability condition for ψ is satisfied:
{\displaystyle \psi _{xy}=\psi _{yx},}
and thus ψ may be defined by a line integral. The integrability condition and Stokes' theorem implies that the value of the line integral connecting two points is independent of the path. The resulting pair of solutions of the Laplace equation are called conjugate harmonic functions. This construction is only valid locally, or provided that the path does not loop around a singularity. For example, if r and θ are polar coordinates and
{\displaystyle \varphi =\log r,}
then a corresponding analytic function is
{\displaystyle f(z)=\log z=\log r+i\theta .}
However, the angle θ is single-valued only in a region that does not enclose the origin.
The close connection between the Laplace equation and analytic functions implies that any solution of the Laplace equation has derivatives of all orders, and can be expanded in a power series, at least inside a circle that does not enclose a singularity. This is in sharp contrast to solutions of the wave equation, which generally have less regularity.
There is an intimate connection between power series and Fourier series. If we expand a function f in a power series inside a circle of radius R, this means that
{\displaystyle f(z)=\sum _{n=0}^{\infty }c_{n}z^{n},}
with suitably defined coefficients whose real and imaginary parts are given by
{\displaystyle c_{n}=a_{n}+ib_{n}.}
Therefore
{\displaystyle f(z)=\sum _{n=0}^{\infty }\left[a_{n}r^{n}\cos n\theta -b_{n}r^{n}\sin n\theta \right]+i\sum _{n=1}^{\infty }\left[a_{n}r^{n}\sin n\theta +b_{n}r^{n}\cos n\theta \right],}
which is a Fourier series for f. These trigonometric functions can themselves be expanded, using multiple angle formulae.
=== Fluid flow ===
Let the quantities u and v be the horizontal and vertical components of the velocity field of a steady incompressible, irrotational flow in two dimensions. The continuity condition for an incompressible flow is that
{\displaystyle u_{x}+v_{y}=0,}
and the condition that the flow be irrotational is that
{\displaystyle \nabla \times \mathbf {V} =v_{x}-u_{y}=0.}
If we define the differential of a function ψ by
{\displaystyle d\psi =u\,dy-v\,dx,}
then the continuity condition is the integrability condition for this differential: the resulting function is called the stream function because it is constant along flow lines. The first derivatives of ψ are given by
{\displaystyle \psi _{x}=-v,\quad \psi _{y}=u,}
and the irrotationality condition implies that ψ satisfies the Laplace equation. The harmonic function φ that is conjugate to ψ is called the velocity potential. The Cauchy–Riemann equations imply that
{\displaystyle \varphi _{x}=\psi _{y}=u,\quad \varphi _{y}=-\psi _{x}=v.}
Thus every analytic function corresponds to a steady incompressible, irrotational, inviscid fluid flow in the plane. The real part is the velocity potential, and the imaginary part is the stream function.
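The correspondence can be checked symbolically for a concrete flow. The SymPy sketch below uses f(z) = z², an illustrative choice (a stagnation-point flow), extracts the velocity potential and stream function, and verifies the Cauchy–Riemann relations along with incompressibility and irrotationality of the resulting velocity field.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = (x + sp.I * y)**2                # analytic: stagnation-point flow
phi = sp.re(sp.expand(f))            # velocity potential  x^2 - y^2
psi = sp.im(sp.expand(f))            # stream function     2xy

u = sp.diff(phi, x)                  # u = phi_x = psi_y
v = sp.diff(phi, y)                  # v = phi_y = -psi_x

assert sp.simplify(u - sp.diff(psi, y)) == 0
assert sp.simplify(v + sp.diff(psi, x)) == 0
assert sp.simplify(sp.diff(u, x) + sp.diff(v, y)) == 0   # incompressible
assert sp.simplify(sp.diff(v, x) - sp.diff(u, y)) == 0   # irrotational
```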
=== Electrostatics ===
According to Maxwell's equations, an electric field (u, v) in two space dimensions that is independent of time satisfies
{\displaystyle \nabla \times (u,v,0)=(v_{x}-u_{y}){\hat {\mathbf {k} }}=\mathbf {0} ,}
and
{\displaystyle \nabla \cdot (u,v)=\rho ,}
where ρ is the charge density. The first Maxwell equation is the integrability condition for the differential
{\displaystyle d\varphi =-u\,dx-v\,dy,}
so the electric potential φ may be constructed to satisfy
{\displaystyle \varphi _{x}=-u,\quad \varphi _{y}=-v.}
The second of Maxwell's equations then implies that
{\displaystyle \varphi _{xx}+\varphi _{yy}=-\rho ,}
which is the Poisson equation. The Laplace equation can be used in three-dimensional problems in electrostatics and fluid flow just as in two dimensions.
== In three dimensions ==
=== Fundamental solution ===
A fundamental solution of Laplace's equation satisfies
{\displaystyle \Delta u=u_{xx}+u_{yy}+u_{zz}=-\delta (x-x',y-y',z-z'),}
where the Dirac delta function δ denotes a unit source concentrated at the point (x′, y′, z′). No function has this property: in fact it is a distribution rather than a function; but it can be thought of as a limit of functions whose integrals over space are unity, and whose support (the region where the function is non-zero) shrinks to a point (see weak solution). It is common to take a different sign convention for this equation than one typically does when defining fundamental solutions. This choice of sign is often convenient to work with because −Δ is a positive operator. The definition of the fundamental solution thus implies that, if the Laplacian of u is integrated over any volume that encloses the source point, then
{\displaystyle \iiint _{V}\nabla \cdot \nabla u\,dV=-1.}
The Laplace equation is unchanged under a rotation of coordinates, and hence we can expect that a fundamental solution may be obtained among solutions that only depend upon the distance r from the source point. If we choose the volume to be a ball of radius a around the source point, then Gauss's divergence theorem implies that
{\displaystyle -1=\iiint _{V}\nabla \cdot \nabla u\,dV=\iint _{S}{\frac {du}{dr}}\,dS=\left.4\pi a^{2}{\frac {du}{dr}}\right|_{r=a}.}
It follows that
{\displaystyle {\frac {du}{dr}}=-{\frac {1}{4\pi r^{2}}},}
on a sphere of radius r that is centered on the source point, and hence
{\displaystyle u={\frac {1}{4\pi r}}.}
Note that, with the opposite sign convention (used in physics), this is the potential generated by a point particle, for an inverse-square law force, arising in the solution of Poisson equation. A similar argument shows that in two dimensions
{\displaystyle u=-{\frac {\log(r)}{2\pi }}.}
where log(r) denotes the natural logarithm. Note that, with the opposite sign convention, this is the potential generated by a pointlike sink (see point particle), which is the solution of the Euler equations in two-dimensional incompressible flow.
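Away from the source point, both fundamental solutions can be verified to be harmonic by direct symbolic differentiation, as in the SymPy sketch below.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True, positive=True)

# 3D: u = 1/(4*pi*r) is harmonic away from the source point.
r3 = sp.sqrt(x**2 + y**2 + z**2)
u3 = 1 / (4 * sp.pi * r3)
lap3 = sp.diff(u3, x, 2) + sp.diff(u3, y, 2) + sp.diff(u3, z, 2)
assert sp.simplify(lap3) == 0

# 2D: u = -log(r)/(2*pi) is harmonic away from the source point.
r2 = sp.sqrt(x**2 + y**2)
u2 = -sp.log(r2) / (2 * sp.pi)
lap2 = sp.diff(u2, x, 2) + sp.diff(u2, y, 2)
assert sp.simplify(lap2) == 0
```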
=== Green's function ===
A Green's function is a fundamental solution that also satisfies a suitable condition on the boundary S of a volume V. For instance,
G(x, y, z; x′, y′, z′)
may satisfy
{\displaystyle \nabla \cdot \nabla G=-\delta (x-x',y-y',z-z')\qquad {\text{in }}V,}
{\displaystyle G=0\quad {\text{if}}\quad (x,y,z)\qquad {\text{on }}S.}
Now if u is any solution of the Poisson equation in V:
{\displaystyle \nabla \cdot \nabla u=-f,}
and u assumes the boundary values g on S, then we may apply Green's identity (a consequence of the divergence theorem), which states that
{\displaystyle \iiint _{V}\left[G\,\nabla \cdot \nabla u-u\,\nabla \cdot \nabla G\right]\,dV=\iiint _{V}\nabla \cdot \left[G\nabla u-u\nabla G\right]\,dV=\iint _{S}\left[Gu_{n}-uG_{n}\right]\,dS.\,}
The notations un and Gn denote normal derivatives on S. In view of the conditions satisfied by u and G, this result simplifies to
{\displaystyle u(x',y',z')=\iiint _{V}Gf\,dV-\iint _{S}G_{n}g\,dS.\,}
Thus the Green's function describes the influence at (x′, y′, z′) of the data f and g. For the case of the interior of a sphere of radius a, the Green's function may be obtained by means of a reflection (Sommerfeld 1949): the source point P at distance ρ from the center of the sphere is reflected along its radial line to a point P′ that is at a distance
{\displaystyle \rho '={\frac {a^{2}}{\rho }}.\,}
Note that if P is inside the sphere, then P′ will be outside the sphere. The Green's function is then given by
{\displaystyle {\frac {1}{4\pi R}}-{\frac {a}{4\pi \rho R'}},\,}
where R denotes the distance to the source point P and R′ denotes the distance to the reflected point P′. A consequence of this expression for the Green's function is the Poisson integral formula. Let ρ, θ, and φ be spherical coordinates for the source point P. Here θ denotes the angle with the vertical axis, which is contrary to the usual American mathematical notation, but agrees with standard European and physical practice. Then the solution of the Laplace equation with Dirichlet boundary values g inside the sphere is given by (Zachmanoglou & Thoe 1986, p. 228)
{\displaystyle u(P)={\frac {1}{4\pi }}a^{3}\left(1-{\frac {\rho ^{2}}{a^{2}}}\right)\int _{0}^{2\pi }\int _{0}^{\pi }{\frac {g(\theta ',\varphi ')\sin \theta '}{(a^{2}+\rho ^{2}-2a\rho \cos \Theta )^{\frac {3}{2}}}}d\theta '\,d\varphi '}
where
{\displaystyle \cos \Theta =\cos \theta \cos \theta '+\sin \theta \sin \theta '\cos(\varphi -\varphi ')}
is the cosine of the angle between (θ, φ) and (θ′, φ′). A simple consequence of this formula is that if u is a harmonic function, then the value of u at the center of the sphere is the mean value of its values on the sphere. This mean value property immediately implies that a non-constant harmonic function cannot assume its maximum value at an interior point.
=== Laplace's spherical harmonics ===
Laplace's equation in spherical coordinates is:
{\displaystyle \nabla ^{2}f={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial f}{\partial \theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}}=0.}
Consider the problem of finding solutions of the form f(r, θ, φ) = R(r) Y(θ, φ). By separation of variables, two differential equations result by imposing Laplace's equation:
{\displaystyle {\frac {1}{R}}{\frac {d}{dr}}\left(r^{2}{\frac {dR}{dr}}\right)=\lambda ,\qquad {\frac {1}{Y}}{\frac {1}{\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial Y}{\partial \theta }}\right)+{\frac {1}{Y}}{\frac {1}{\sin ^{2}\theta }}{\frac {\partial ^{2}Y}{\partial \varphi ^{2}}}=-\lambda .}
The second equation can be simplified under the assumption that Y has the form Y(θ, φ) = Θ(θ) Φ(φ). Applying separation of variables again to the second equation gives way to the pair of differential equations
{\displaystyle {\frac {1}{\Phi }}{\frac {d^{2}\Phi }{d\varphi ^{2}}}=-m^{2}}
{\displaystyle \lambda \sin ^{2}\theta +{\frac {\sin \theta }{\Theta }}{\frac {d}{d\theta }}\left(\sin \theta {\frac {d\Theta }{d\theta }}\right)=m^{2}}
for some number m. A priori, m is a complex constant, but because Φ must be a periodic function whose period evenly divides 2π, m is necessarily an integer and Φ is a linear combination of the complex exponentials e±imφ. The solution function Y(θ, φ) is regular at the poles of the sphere, where θ = 0, π. Imposing this regularity in the solution Θ of the second equation at the boundary points of the domain is a Sturm–Liouville problem that forces the parameter λ to be of the form λ = ℓ(ℓ + 1) for some non-negative integer ℓ with ℓ ≥ |m|; this is also explained below in terms of the orbital angular momentum. Furthermore, a change of variables t = cos θ transforms this equation into the Legendre equation, whose solution is a multiple of the associated Legendre polynomial Pℓm(cos θ). Finally, the equation for R has solutions of the form R(r) = Arℓ + Br−ℓ−1; requiring the solution to be regular throughout ℝ³ forces B = 0.
Here the solution was assumed to have the special form Y(θ, φ) = Θ(θ) Φ(φ). For a given value of ℓ, there are 2ℓ + 1 independent solutions of this form, one for each integer m with −ℓ ≤ m ≤ ℓ. These angular solutions are a product of trigonometric functions, here represented as a complex exponential, and associated Legendre polynomials:
{\displaystyle Y_{\ell }^{m}(\theta ,\varphi )=Ne^{im\varphi }P_{\ell }^{m}(\cos {\theta })}
which fulfill
{\displaystyle r^{2}\nabla ^{2}Y_{\ell }^{m}(\theta ,\varphi )=-\ell (\ell +1)Y_{\ell }^{m}(\theta ,\varphi ).}
Here Yℓm is called a spherical harmonic function of degree ℓ and order m, Pℓm is an associated Legendre polynomial, N is a normalization constant, and θ and φ represent colatitude and longitude, respectively. In particular, the colatitude θ, or polar angle, ranges from 0 at the North Pole, to π/2 at the Equator, to π at the South Pole, and the longitude φ, or azimuth, may assume all values with 0 ≤ φ < 2π. For a fixed integer ℓ, every solution Y(θ, φ) of the eigenvalue problem
{\displaystyle r^{2}\nabla ^{2}Y=-\ell (\ell +1)Y}
is a linear combination of Yℓm. In fact, for any such solution, rℓ Y(θ, φ) is the expression in spherical coordinates of a homogeneous polynomial that is harmonic (see below), and so counting dimensions shows that there are 2ℓ + 1 linearly independent such polynomials.
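The claim that rℓ Y(θ, φ) is a harmonic homogeneous polynomial can be verified symbolically for low degrees. The SymPy sketch below checks the Cartesian forms of a few solid harmonics (chosen for illustration, up to normalization) against the three-dimensional Laplacian.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

# Cartesian forms of solid harmonics r^l Y_l^m (up to normalization):
solids = [
    sp.Integer(1),                     # l = 0
    z,                                 # l = 1, m = 0   (r cos(theta))
    x + sp.I * y,                      # l = 1, m = 1   (r sin(theta) e^{i phi})
    3 * z**2 - (x**2 + y**2 + z**2),   # l = 2, m = 0
    (x + sp.I * y)**2,                 # l = 2, m = 2
]
for f in solids:
    assert sp.simplify(lap(f)) == 0    # each is a harmonic polynomial
```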
The general solution to Laplace's equation in a ball centered at the origin is a linear combination of the spherical harmonic functions multiplied by the appropriate scale factor rℓ,
{\displaystyle f(r,\theta ,\varphi )=\sum _{\ell =0}^{\infty }\sum _{m=-\ell }^{\ell }f_{\ell }^{m}r^{\ell }Y_{\ell }^{m}(\theta ,\varphi ),}
where the fℓm are constants and the factors rℓ Yℓm are known as solid harmonics. Such an expansion is valid in the ball
{\displaystyle r<R={\frac {1}{\limsup _{\ell \to \infty }|f_{\ell }^{m}|^{{1}/{\ell }}}}.}
For r > R, the solid harmonics with negative powers of r are chosen instead. In that case, one needs to expand the solution of known regions in Laurent series (about r = ∞), instead of Taylor series (about r = 0), to match the terms and find fℓm.
=== Electrostatics and magnetostatics ===
Let E be the electric field, ρ be the electric charge density, and ε₀ be the permittivity of free space. Then Gauss's law for electricity (Maxwell's first equation) in differential form states
{\displaystyle \nabla \cdot \mathbf {E} ={\frac {\rho }{\varepsilon _{0}}}.}
Now, the electric field can be expressed as the negative gradient of the electric potential V,
{\displaystyle \mathbf {E} =-\nabla V,}
if the field is irrotational, ∇ × E = 0. The irrotationality of E is also known as the electrostatic condition.
{\displaystyle \nabla \cdot \mathbf {E} =\nabla \cdot (-\nabla V)=-\nabla ^{2}V}
{\displaystyle \nabla ^{2}V=-\nabla \cdot \mathbf {E} }
Plugging this relation into Gauss's law, we obtain Poisson's equation for electricity,
{\displaystyle \nabla ^{2}V=-{\frac {\rho }{\varepsilon _{0}}}.}
In the particular case of a source-free region, ρ = 0, and Poisson's equation reduces to Laplace's equation for the electric potential.
If the electrostatic potential V is specified on the boundary of a region ℛ, then it is uniquely determined. If ℛ is surrounded by a conducting material with a specified charge density ρ, and if the total charge Q is known, then V is also unique.
For the magnetic field, when there is no free current,
{\displaystyle \nabla \times \mathbf {H} =\mathbf {0} .}
We can thus define a magnetic scalar potential, ψ, as
{\displaystyle \mathbf {H} =-\nabla \psi .}
With the definition of H:
{\displaystyle \nabla \cdot \mathbf {B} =\mu _{0}\nabla \cdot \left(\mathbf {H} +\mathbf {M} \right)=0,}
it follows that
{\displaystyle \nabla ^{2}\psi =-\nabla \cdot \mathbf {H} =\nabla \cdot \mathbf {M} .}
Similar to electrostatics, in a source-free region, M = 0, and Poisson's equation reduces to Laplace's equation for the magnetic scalar potential ψ,
{\displaystyle \nabla ^{2}\psi =0}
A potential that does not satisfy Laplace's equation together with the boundary condition is an invalid electrostatic or magnetic scalar potential.
== Gravitation ==
Let
g
{\displaystyle \mathbf {g} }
be the gravitational field,
ρ
{\displaystyle \rho }
the mass density, and
G
{\displaystyle G}
the gravitational constant. Then Gauss's law for gravitation in differential form is
{\displaystyle \nabla \cdot \mathbf {g} =-4\pi G\rho .}
The gravitational field is conservative and can therefore be expressed as the negative gradient of the gravitational potential:
{\displaystyle {\begin{aligned}\mathbf {g} &=-\nabla V,\\\nabla \cdot \mathbf {g} &=\nabla \cdot (-\nabla V)=-\nabla ^{2}V,\\\implies \nabla ^{2}V&=-\nabla \cdot \mathbf {g} .\end{aligned}}}
Using the differential form of Gauss's law of gravitation, we have
{\displaystyle \nabla ^{2}V=4\pi G\rho ,}
which is Poisson's equation for gravitational fields.
In empty space, ρ = 0 and we have
{\displaystyle \nabla ^{2}V=0,}
which is Laplace's equation for gravitational fields.
== In the Schwarzschild metric ==
S. Persides solved the Laplace equation in Schwarzschild spacetime on hypersurfaces of constant t. Using the canonical variables r, θ, φ the solution is
{\displaystyle \Psi (r,\theta ,\varphi )=R(r)Y_{l}(\theta ,\varphi ),}
where Yl(θ, φ) is a spherical harmonic function, and
{\displaystyle R(r)=(-1)^{l}{\frac {(l!)^{2}r_{s}^{l}}{(2l)!}}P_{l}\left(1-{\frac {2r}{r_{s}}}\right)+(-1)^{l+1}{\frac {2(2l+1)!}{(l)!^{2}r_{s}^{l+1}}}Q_{l}\left(1-{\frac {2r}{r_{s}}}\right).}
Here Pl and Ql are Legendre functions of the first and second kind, respectively, while rs is the Schwarzschild radius. The parameter l is an arbitrary non-negative integer.
== See also ==
6-sphere coordinates, a coordinate system under which Laplace's equation becomes R-separable
Helmholtz equation, a generalization of Laplace's equation
Spherical harmonic
Quadrature domains
Potential theory
Potential flow
Bateman transform
Earnshaw's theorem uses the Laplace equation to show that stable static ferromagnetic suspension is impossible
Vector Laplacian
Fundamental solution
== Sources ==
Courant, Richard; Hilbert, David (1962), Methods of Mathematical Physics, Volume I, Wiley-Interscience.
Sommerfeld, A. (1949). Partial Differential Equations in Physics. New York: Academic Press. Bibcode:1949pdep.book.....S.
Zachmanoglou, E. C.; Thoe, Dale W. (1986). Introduction to Partial Differential Equations with Applications. New York: Dover. ISBN 9780486652511.
== Further reading ==
Evans, L. C. (1998). Partial Differential Equations. Providence: American Mathematical Society. ISBN 978-0-8218-0772-9.
Petrovsky, I. G. (1967). Partial Differential Equations. Philadelphia: W. B. Saunders.
Polyanin, A. D. (2002). Handbook of Linear Partial Differential Equations for Engineers and Scientists. Boca Raton: Chapman & Hall/CRC Press. ISBN 978-1-58488-299-2.
== External links ==
"Laplace equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Laplace Equation (particular solutions and boundary value problems) at EqWorld: The World of Mathematical Equations.
Example initial-boundary value problems using Laplace's equation from exampleproblems.com.
Weisstein, Eric W. "Laplace's Equation". MathWorld.
Boundary value problems governed by Laplace's equation solved numerically by the boundary element method.
In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.
== Fundamental theory of matrix eigenvectors and eigenvalues ==
A (nonzero) vector v of dimension N is an eigenvector of a square N × N matrix A if it satisfies a linear equation of the form
{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {v} }
for some scalar λ. Then λ is called the eigenvalue corresponding to v. Geometrically speaking, the eigenvectors of A are the vectors that A merely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem.
This equation has a nonzero solution v exactly when the matrix A − λI is singular, which yields an equation for the eigenvalues
{\displaystyle p\left(\lambda \right)=\det \left(\mathbf {A} -\lambda \mathbf {I} \right)=0.}
We call p(λ) the characteristic polynomial, and the equation, called the characteristic equation, is an Nth-order polynomial equation in the unknown λ. This equation will have Nλ distinct solutions, where 1 ≤ Nλ ≤ N. The set of solutions, that is, the eigenvalues, is called the spectrum of A.
If the field of scalars is algebraically closed, then we can factor p as
{\displaystyle p(\lambda )=\left(\lambda -\lambda _{1}\right)^{n_{1}}\left(\lambda -\lambda _{2}\right)^{n_{2}}\cdots \left(\lambda -\lambda _{N_{\lambda }}\right)^{n_{N_{\lambda }}}=0.}
The integer ni is termed the algebraic multiplicity of eigenvalue λi. The algebraic multiplicities sum to N:
{\textstyle \sum _{i=1}^{N_{\lambda }}{n_{i}}=N.}
For each eigenvalue λi, we have a specific eigenvalue equation
{\displaystyle \left(\mathbf {A} -\lambda _{i}\mathbf {I} \right)\mathbf {v} =0.}
There will be 1 ≤ mi ≤ ni linearly independent solutions to each eigenvalue equation. The nonzero linear combinations of the mi solutions are the eigenvectors associated with the eigenvalue λi. The integer mi is termed the geometric multiplicity of λi. The algebraic multiplicity ni and geometric multiplicity mi may or may not be equal, but we always have mi ≤ ni; the simplest case is mi = ni = 1. The total number of linearly independent eigenvectors, Nv, can be calculated by summing the geometric multiplicities
{\displaystyle \sum _{i=1}^{N_{\lambda }}{m_{i}}=N_{\mathbf {v} }.}
The eigenvectors can be indexed by eigenvalues, using a double index, with vij being the jth eigenvector for the ith eigenvalue. The eigenvectors can also be indexed using the simpler notation of a single index vk, with k = 1, 2, ..., Nv.
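These quantities can be checked numerically. The sketch below (the matrix is a hypothetical example, not from the text) recovers the eigenvalues as roots of the characteristic polynomial and compares the algebraic and geometric multiplicities of a repeated eigenvalue:

```python
import numpy as np

# A lower-triangular example: eigenvalues 2, 2, 3, so lambda = 2 has
# algebraic multiplicity n_i = 2.
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
N = A.shape[0]

# Eigenvalues as roots of the characteristic polynomial det(A - x I).
char_poly = np.poly(A)                     # coefficients of p(x)
roots = np.sort(np.roots(char_poly).real)
eigvals = np.sort(np.linalg.eigvals(A).real)

# Geometric multiplicity of lambda = 2: dimension of null(A - 2I).
geo_mult = N - np.linalg.matrix_rank(A - 2.0 * np.eye(N))
```

Here the geometric multiplicity of λ = 2 comes out as 1, strictly less than its algebraic multiplicity 2, illustrating that mi ≤ ni can be strict.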
== Eigendecomposition of a matrix ==
Let A be a square n × n matrix with n linearly independent eigenvectors qi (where i = 1, ..., n). Then A can be factored as
{\displaystyle \mathbf {A} =\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}}
where Q is the square n × n matrix whose ith column is the eigenvector qi of A, and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, Λii = λi. Note that only diagonalizable matrices can be factorized in this way. For example, the defective matrix
{\displaystyle \left[{\begin{smallmatrix}1&1\\0&1\end{smallmatrix}}\right]}
(which is a shear matrix) cannot be diagonalized.
The n eigenvectors qi are usually normalized, but they need not be. A non-normalized set of n eigenvectors, vi, can also be used as the columns of Q: the magnitude of the eigenvectors in Q is canceled in the decomposition by the presence of Q−1. If one of the eigenvalues λi has multiple linearly independent eigenvectors (that is, the geometric multiplicity of λi is greater than 1), then the eigenvectors for this eigenvalue λi can be chosen to be mutually orthogonal; however, if two eigenvectors belong to two different eigenvalues, it may be impossible for them to be orthogonal to each other (see the example below). One special case is that if A is a normal matrix, then by the spectral theorem it is always possible to diagonalize A in an orthonormal basis {qi}.
The decomposition can be derived from the fundamental property of eigenvectors:
{\displaystyle {\begin{aligned}\mathbf {A} \mathbf {v} &=\lambda \mathbf {v} \\\mathbf {A} \mathbf {Q} &=\mathbf {Q} \mathbf {\Lambda } \\\mathbf {A} &=\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}.\end{aligned}}}
The linearly independent eigenvectors qi with nonzero eigenvalues form a basis (not necessarily orthonormal) for all possible products Ax, for x ∈ Cn, which is the same as the image (or range) of the corresponding matrix transformation, and also the column space of the matrix A. The number of linearly independent eigenvectors qi with nonzero eigenvalues is equal to the rank of the matrix A, and also the dimension of the image (or range) of the corresponding matrix transformation, as well as its column space.
The linearly independent eigenvectors qi with an eigenvalue of zero form a basis (which can be chosen to be orthonormal) for the null space (also known as the kernel) of the matrix transformation A.
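A minimal numerical check of the factorization, using numpy.linalg.eig on a small hypothetical matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
w, Q = np.linalg.eig(A)      # eigenvalues w; eigenvectors are the columns of Q
Lam = np.diag(w)

# Reconstruct A = Q Lam Q^{-1} and check A Q = Q Lam column by column.
A_rebuilt = Q @ Lam @ np.linalg.inv(Q)
col_check = np.allclose(A @ Q, Q @ Lam)
```

The eigenvalues here are 2 and 5 (trace 7, determinant 10), and the reconstruction matches A to machine precision.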
=== Example ===
The 2 × 2 real matrix A
{\displaystyle \mathbf {A} ={\begin{bmatrix}1&0\\1&3\\\end{bmatrix}}}
may be decomposed into a diagonal matrix by a similarity transformation with a non-singular matrix Q
{\displaystyle \mathbf {Q} ={\begin{bmatrix}a&b\\c&d\end{bmatrix}}\in \mathbb {R} ^{2\times 2}.}
Then
{\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}^{-1}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}x&0\\0&y\end{bmatrix}},}
for some real diagonal matrix {\displaystyle \left[{\begin{smallmatrix}x&0\\0&y\end{smallmatrix}}\right]}.
Multiplying both sides of the equation on the left by Q:
{\displaystyle {\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}a&b\\c&d\end{bmatrix}}{\begin{bmatrix}x&0\\0&y\end{bmatrix}}.}
The above equation can be decomposed into two simultaneous equations:
{\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}={\begin{bmatrix}ax\\cx\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}={\begin{bmatrix}by\\dy\end{bmatrix}}\end{cases}}.}
Factoring out the eigenvalues x and y:
{\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}=x{\begin{bmatrix}a\\c\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}=y{\begin{bmatrix}b\\d\end{bmatrix}}\end{cases}}}
Letting
{\displaystyle \mathbf {a} ={\begin{bmatrix}a\\c\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}b\\d\end{bmatrix}},}
this gives us two vector equations:
{\displaystyle {\begin{cases}\mathbf {A} \mathbf {a} =x\mathbf {a} \\\mathbf {A} \mathbf {b} =y\mathbf {b} \end{cases}}}
These can be represented by a single vector equation involving two solutions as eigenvalues:
{\displaystyle \mathbf {A} \mathbf {u} =\lambda \mathbf {u} }
where λ represents the two eigenvalues x and y, and u represents the vectors a and b.
Shifting λu to the left-hand side and factoring u out:
{\displaystyle \left(\mathbf {A} -\lambda \mathbf {I} \right)\mathbf {u} =\mathbf {0} }
Since Q is non-singular, u must be nonzero, so the matrix A − λI must be singular. Therefore,
{\displaystyle \det(\mathbf {A} -\lambda \mathbf {I} )=0}
Thus
{\displaystyle (1-\lambda )(3-\lambda )=0}
giving us the solutions of the eigenvalues for the matrix A as λ = 1 or λ = 3, and the resulting diagonal matrix from the eigendecomposition of A is thus
{\displaystyle \left[{\begin{smallmatrix}1&0\\0&3\end{smallmatrix}}\right]}.
Putting the solutions back into the above simultaneous equations
{\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}=1{\begin{bmatrix}a\\c\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}=3{\begin{bmatrix}b\\d\end{bmatrix}}\end{cases}}}
Solving the equations, we have
{\displaystyle a=-2c\quad {\text{and}}\quad b=0,\qquad c,d\in \mathbb {R} .}
Thus the matrix Q required for the eigendecomposition of A is
{\displaystyle \mathbf {Q} ={\begin{bmatrix}-2c&0\\c&d\end{bmatrix}},\qquad c,d\in \mathbb {R} ,}
that is:
{\displaystyle {\begin{bmatrix}-2c&0\\c&d\end{bmatrix}}^{-1}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}-2c&0\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\0&3\end{bmatrix}},\qquad c,d\in \mathbb {R} }
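The worked example can be verified numerically; any nonzero choice of c and d should give the same diagonal matrix (c = d = 1 below is an arbitrary choice):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
c, d = 1.0, 1.0                        # any nonzero values work
Q = np.array([[-2.0 * c, 0.0],
              [c, d]])

D = np.linalg.inv(Q) @ A @ Q           # should be diag(1, 3)
```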
=== Matrix inverse via eigendecomposition ===
If a matrix A can be eigendecomposed and if none of its eigenvalues are zero, then A is invertible and its inverse is given by
{\displaystyle \mathbf {A} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{-1}\mathbf {Q} ^{-1}}
If A is a symmetric matrix, since Q is formed from the eigenvectors of A (which can be chosen orthonormal), Q is guaranteed to be an orthogonal matrix, therefore
{\displaystyle \mathbf {Q} ^{-1}=\mathbf {Q} ^{\mathrm {T} }}
. Furthermore, because Λ is a diagonal matrix, its inverse is easy to calculate:
{\displaystyle \left[\mathbf {\Lambda } ^{-1}\right]_{ii}={\frac {1}{\lambda _{i}}}}
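A sketch of this formula on an arbitrary symmetric test matrix, where Q is orthogonal so Q−1 = QT:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # symmetric, eigenvalues 1 and 3
w, Q = np.linalg.eigh(A)               # for symmetric A, Q is orthogonal

# Invert by inverting the diagonal of eigenvalues: A^{-1} = Q Lam^{-1} Q^T.
A_inv = Q @ np.diag(1.0 / w) @ Q.T
```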
==== Practical implications ====
When eigendecomposition is used on a matrix of measured, real data, the inverse may be less valid when all eigenvalues are used unmodified in the form above. This is because as eigenvalues become relatively small, their contribution to the inversion is large. Those near zero or at the "noise" of the measurement system will have undue influence and could hamper solutions (detection) using the inverse.
Two mitigations have been proposed: truncating small or zero eigenvalues, and extending the lowest reliable eigenvalue to those below it. See also Tikhonov regularization as a statistically motivated but biased method for rolling off eigenvalues as they become dominated by noise.
The first mitigation method is similar to a sparse sample of the original matrix, removing components that are not considered valuable. However, if the solution or detection process is near the noise level, truncating may remove components that influence the desired solution.
The second mitigation extends the eigenvalue so that lower values have much less influence over inversion, but do still contribute, such that solutions near the noise will still be found.
The reliable eigenvalue can be found by assuming that eigenvalues of extremely similar and low value are a good representation of measurement noise (which is assumed low for most systems).
If the eigenvalues are rank-sorted by value, then the reliable eigenvalue can be found by minimization of the Laplacian of the sorted eigenvalues:
{\displaystyle \min \left|\nabla ^{2}\lambda _{\mathrm {s} }\right|}
where the eigenvalues are subscripted with an s to denote being sorted. The position of the minimization is the lowest reliable eigenvalue. In measurement systems, the square root of this reliable eigenvalue is the average noise over the components of the system.
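One possible reading of this heuristic, on a hypothetical sorted spectrum (all numbers below are made up for illustration, and the exact indexing convention is a judgment call):

```python
import numpy as np

# Hypothetical rank-sorted spectrum: a few strong components sitting on
# a nearly flat noise floor.
lam_s = np.array([9.0, 4.0, 1.5, 0.012, 0.011, 0.010, 0.009])

# Discrete Laplacian (second difference) of the sorted eigenvalues; it is
# smallest where the spectrum is flattest, i.e. at the noise floor.
lap = np.abs(np.diff(lam_s, n=2))
k = int(np.argmin(lap)) + 1            # candidate lowest reliable eigenvalue
noise_estimate = np.sqrt(lam_s[k])     # per the text, sqrt gives average noise
```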
== Functional calculus ==
The eigendecomposition allows for much easier computation of power series of matrices. If f (x) is given by
{\displaystyle f(x)=a_{0}+a_{1}x+a_{2}x^{2}+\cdots }
then we know that
{\displaystyle f\!\left(\mathbf {A} \right)=\mathbf {Q} \,f\!\left(\mathbf {\Lambda } \right)\mathbf {Q} ^{-1}}
Because Λ is a diagonal matrix, functions of Λ are very easy to calculate:
{\displaystyle \left[f\left(\mathbf {\Lambda } \right)\right]_{ii}=f\left(\lambda _{i}\right)}
The off-diagonal elements of f (Λ) are zero; that is, f (Λ) is also a diagonal matrix. Therefore, calculating f (A) reduces to just calculating the function on each of the eigenvalues.
A similar technique works more generally with the holomorphic functional calculus, using
{\displaystyle \mathbf {A} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{-1}\mathbf {Q} ^{-1}}
from above. Once again, we find that
{\displaystyle \left[f\left(\mathbf {\Lambda } \right)\right]_{ii}=f\left(\lambda _{i}\right)}
=== Examples ===
{\displaystyle {\begin{aligned}\mathbf {A} ^{2}&=\left(\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}\right)\left(\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}\right)=\mathbf {Q} \mathbf {\Lambda } \left(\mathbf {Q} ^{-1}\mathbf {Q} \right)\mathbf {\Lambda } \mathbf {Q} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{2}\mathbf {Q} ^{-1}\\[1.2ex]\mathbf {A} ^{n}&=\mathbf {Q} \mathbf {\Lambda } ^{n}\mathbf {Q} ^{-1}\\[1.2ex]\exp \mathbf {A} &=\mathbf {Q} \exp(\mathbf {\Lambda } )\mathbf {Q} ^{-1}\end{aligned}}}
which are examples for the functions
{\displaystyle f(x)=x^{2},\;f(x)=x^{n},\;f(x)=\exp {x}}. Furthermore, {\displaystyle \exp {\mathbf {A} }} is the matrix exponential.
== Decomposition for spectral matrices ==
Spectral matrices are matrices that possess distinct eigenvalues and a complete set of eigenvectors. This characteristic allows spectral matrices to be fully diagonalizable, meaning they can be decomposed into simpler forms using eigendecomposition. This decomposition process reveals fundamental insights into the matrix's structure and behavior, particularly in fields such as quantum mechanics, signal processing, and numerical analysis.
=== Normal matrices ===
A complex-valued square matrix A is normal (meaning A*A = AA*, where A* is the conjugate transpose) if and only if it can be decomposed as
{\displaystyle \mathbf {A} =\mathbf {U} \mathbf {\Lambda } \mathbf {U} ^{*}}
where U is a unitary matrix (meaning U* = U−1) and Λ = diag(λ1, …, λn) is a diagonal matrix. The columns u1, …, un of U form an orthonormal basis and are eigenvectors of A with corresponding eigenvalues λ1, …, λn.
For example, consider the 2 × 2 normal matrix
{\displaystyle \mathbf {A} ={\begin{bmatrix}1&2\\2&1\end{bmatrix}}}.
The eigenvalues are λ1 = 3 and λ2 = −1.
The (normalized) eigenvectors corresponding to these eigenvalues are
{\displaystyle \mathbf {u} _{1}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\1\end{bmatrix}}}
and
{\displaystyle \mathbf {u} _{2}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}-1\\1\end{bmatrix}}}.
The diagonalization is A = UΛU*, where
{\displaystyle \mathbf {U} ={\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}},
{\displaystyle \mathbf {\Lambda } ={\begin{bmatrix}3&0\\0&-1\end{bmatrix}}},
and
{\displaystyle \mathbf {U} ^{*}=\mathbf {U} ^{-1}={\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}}.
The verification is
{\displaystyle \mathbf {U} \mathbf {\Lambda } \mathbf {U} ^{*}={\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}{\begin{bmatrix}3&0\\0&-1\end{bmatrix}}{\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}={\begin{bmatrix}1&2\\2&1\end{bmatrix}}=\mathbf {A} }.
This example illustrates the process of diagonalizing a normal matrix A by finding its eigenvalues and eigenvectors, forming the unitary matrix U and the diagonal matrix Λ, and verifying the decomposition.
=== Real symmetric matrices ===
As a special case, for every n × n real symmetric matrix, the eigenvalues are real and the eigenvectors can be chosen real and orthonormal. Thus a real symmetric matrix A can be decomposed as
{\displaystyle \mathbf {A} =\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{\mathsf {T}}}
where Q is an orthogonal matrix whose columns are the real, orthonormal eigenvectors of A, and Λ is a diagonal matrix whose entries are the eigenvalues of A.
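For real symmetric matrices, numpy.linalg.eigh returns exactly this decomposition; a small check on an arbitrary symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])        # real symmetric
w, Q = np.linalg.eigh(A)               # real eigenvalues in ascending order

orthogonal = np.allclose(Q.T @ Q, np.eye(3))   # columns are orthonormal
rebuilt = Q @ np.diag(w) @ Q.T                  # A = Q Lam Q^T
```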
=== Diagonalizable matrices ===
Diagonalizable matrices can be decomposed using eigendecomposition, provided they have a full set of linearly independent eigenvectors. They can be expressed as
{\displaystyle \mathbf {A} =\mathbf {P} \mathbf {D} \mathbf {P} ^{-1}}
where P is a matrix whose columns are eigenvectors of A and D is a diagonal matrix consisting of the corresponding eigenvalues of A.
=== Positive definite matrices ===
Positive definite matrices are symmetric (or Hermitian) matrices for which all eigenvalues are positive. They can be decomposed as
{\displaystyle \mathbf {A} =\mathbf {L} \mathbf {L} ^{\mathsf {T}}}
using the Cholesky decomposition, where L is a lower triangular matrix.
=== Unitary and Hermitian matrices ===
Unitary matrices satisfy
{\displaystyle \mathbf {U} \mathbf {U} ^{*}=\mathbf {I} }
(equivalently written
{\displaystyle \mathbf {U} \mathbf {U} ^{\dagger }=\mathbf {I} }
), where U* and U† both denote the conjugate transpose; in the real case this reduces to the orthogonality condition UUT = I. They are diagonalized by unitary transformations.
Hermitian matrices satisfy
{\displaystyle \mathbf {H} =\mathbf {H} ^{\dagger }}
where H† denotes the conjugate transpose. They can be diagonalized using unitary or orthogonal matrices.
== Useful facts ==
=== Useful facts regarding eigenvalues ===
The product of the eigenvalues is equal to the determinant of A
{\displaystyle \det \left(\mathbf {A} \right)=\prod _{i=1}^{N_{\lambda }}{\lambda _{i}^{n_{i}}}}
Note that each eigenvalue is raised to the power ni, the algebraic multiplicity.
The sum of the eigenvalues is equal to the trace of A
{\displaystyle \operatorname {tr} \left(\mathbf {A} \right)=\sum _{i=1}^{N_{\lambda }}{{n_{i}}\lambda _{i}}}
Note that each eigenvalue is multiplied by ni, the algebraic multiplicity.
If the eigenvalues of A are λi, and A is invertible, then the eigenvalues of A−1 are simply λi−1.
If the eigenvalues of A are λi, then the eigenvalues of f (A) are simply f (λi), for any holomorphic function f.
=== Useful facts regarding eigenvectors ===
If A is Hermitian and full-rank, the basis of eigenvectors may be chosen to be mutually orthogonal. The eigenvalues are real.
The eigenvectors of A−1 are the same as the eigenvectors of A.
Eigenvectors are only defined up to a multiplicative constant. That is, if Av = λv then cv is also an eigenvector for any scalar c ≠ 0. In particular, −v and eiθv (for any θ) are also eigenvectors.
In the case of degenerate eigenvalues (an eigenvalue having more than one eigenvector), the eigenvectors have an additional freedom of linear transformation, that is to say, any linear (orthonormal) combination of eigenvectors sharing an eigenvalue (in the degenerate subspace) is itself an eigenvector (in the subspace).
=== Useful facts regarding eigendecomposition ===
A can be eigendecomposed if and only if the number of linearly independent eigenvectors, Nv, equals the dimension of the space: Nv = N
If the field of scalars is algebraically closed and if p(λ) has no repeated roots, that is, if
{\displaystyle N_{\lambda }=N,}
then A can be eigendecomposed.
The statement "A can be eigendecomposed" does not imply that A has an inverse, since some eigenvalues may be zero and a matrix with a zero eigenvalue is not invertible.
The statement "A has an inverse" does not imply that A can be eigendecomposed. A counterexample is
{\displaystyle \left[{\begin{smallmatrix}1&1\\0&1\end{smallmatrix}}\right]}, which is an invertible defective matrix.
=== Useful facts regarding matrix inverse ===
A can be inverted if and only if all eigenvalues are nonzero:
{\displaystyle \lambda _{i}\neq 0\quad \forall \,i}
If λi ≠ 0 and Nv = N, the inverse is given by
{\displaystyle \mathbf {A} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{-1}\mathbf {Q} ^{-1}}
== Numerical computations ==
=== Numerical computation of eigenvalues ===
Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using the characteristic polynomial. However, this is often impossible for larger matrices, in which case we must use a numerical method.
In practice, eigenvalues of large matrices are not computed using the characteristic polynomial. Computing the polynomial becomes expensive in itself, and exact (symbolic) roots of a high-degree polynomial can be difficult to compute and express: the Abel–Ruffini theorem implies that the roots of high-degree (5 or above) polynomials cannot in general be expressed simply using nth roots. Therefore, general algorithms to find eigenvectors and eigenvalues are iterative.
Iterative numerical algorithms for approximating roots of polynomials exist, such as Newton's method, but in general it is impractical to compute the characteristic polynomial and then apply these methods. One reason is that small round-off errors in the coefficients of the characteristic polynomial can lead to large errors in the eigenvalues and eigenvectors: the roots are an extremely ill-conditioned function of the coefficients.
A simple and accurate iterative method is the power method: a random vector v is chosen and a sequence of unit vectors is computed as
{\displaystyle {\frac {\mathbf {A} \mathbf {v} }{\left\|\mathbf {A} \mathbf {v} \right\|}},{\frac {\mathbf {A} ^{2}\mathbf {v} }{\left\|\mathbf {A} ^{2}\mathbf {v} \right\|}},{\frac {\mathbf {A} ^{3}\mathbf {v} }{\left\|\mathbf {A} ^{3}\mathbf {v} \right\|}},\ldots }
This sequence will almost always converge to an eigenvector corresponding to the eigenvalue of greatest magnitude, provided that v has a nonzero component of this eigenvector in the eigenvector basis (and also provided that there is only one eigenvalue of greatest magnitude). This simple algorithm is useful in some practical applications; for example, Google uses it to calculate the page rank of documents in their search engine. Also, the power method is the starting point for many more sophisticated algorithms. For instance, by keeping not just the last vector in the sequence, but instead looking at the span of all the vectors in the sequence, one can get a better (faster converging) approximation for the eigenvector, and this idea is the basis of Arnoldi iteration. Alternatively, the important QR algorithm is also based on a subtle transformation of a power method.
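A direct implementation of the power method as described (the matrix and iteration count are arbitrary choices for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])             # dominant eigenvalue (5 + sqrt(5))/2
rng = np.random.default_rng(0)
v = rng.standard_normal(2)             # random start vector

# Repeatedly apply A and renormalise; v converges to the dominant eigenvector.
for _ in range(100):
    v = A @ v
    v /= np.linalg.norm(v)

# The Rayleigh quotient of the converged vector estimates the eigenvalue.
lam = v @ A @ v
```

The convergence rate is governed by the ratio of the two largest eigenvalue magnitudes, which is why clustered dominant eigenvalues slow the method down.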
=== Numerical computation of eigenvectors ===
Once the eigenvalues are computed, the eigenvectors could be calculated by solving the equation
{\displaystyle \left(\mathbf {A} -\lambda _{i}\mathbf {I} \right)\mathbf {v} _{i,j}=\mathbf {0} }
using Gaussian elimination or any other method for solving matrix equations.
However, in practical large-scale eigenvalue methods, the eigenvectors are usually computed in other ways, as a byproduct of the eigenvalue computation. In power iteration, for example, the eigenvector is actually computed before the eigenvalue (which is typically computed by the Rayleigh quotient of the eigenvector). In the QR algorithm for a Hermitian matrix (or any normal matrix), the orthonormal eigenvectors are obtained as a product of the Q matrices from the steps in the algorithm. (For more general matrices, the QR algorithm yields the Schur decomposition first, from which the eigenvectors can be obtained by a backsubstitution procedure.) For Hermitian matrices, the Divide-and-conquer eigenvalue algorithm is more efficient than the QR algorithm if both eigenvectors and eigenvalues are desired.
== Additional topics ==
=== Generalized eigenspaces ===
Recall that the geometric multiplicity of an eigenvalue can be described as the dimension of the associated eigenspace, the nullspace of λI − A. The algebraic multiplicity can also be thought of as a dimension: it is the dimension of the associated generalized eigenspace (1st sense), which is the nullspace of the matrix (λI − A)k for any sufficiently large k. That is, it is the space of generalized eigenvectors (first sense), where a generalized eigenvector is any vector which eventually becomes 0 if λI − A is applied to it enough times successively. Any eigenvector is a generalized eigenvector, and so each eigenspace is contained in the associated generalized eigenspace. This provides an easy proof that the geometric multiplicity is always less than or equal to the algebraic multiplicity.
This usage should not be confused with the generalized eigenvalue problem described below.
=== Conjugate eigenvector ===
A conjugate eigenvector or coneigenvector is a vector sent after transformation to a scalar multiple of its conjugate, where the scalar is called the conjugate eigenvalue or coneigenvalue of the linear transformation. The coneigenvectors and coneigenvalues represent essentially the same information and meaning as the regular eigenvectors and eigenvalues, but arise when an alternative coordinate system is used. The corresponding equation is
A
v
=
λ
v
∗
.
{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {v} ^{*}.}
For example, in coherent electromagnetic scattering theory, the linear transformation A represents the action performed by the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave. In optics, the coordinate system is defined from the wave's viewpoint, known as the Forward Scattering Alignment (FSA), and gives rise to a regular eigenvalue equation, whereas in radar, the coordinate system is defined from the radar's viewpoint, known as the Back Scattering Alignment (BSA), and gives rise to a coneigenvalue equation.
=== Generalized eigenvalue problem ===
A generalized eigenvalue problem (second sense) is the problem of finding a (nonzero) vector v that obeys
{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {B} \mathbf {v} }
where A and B are matrices. If v obeys this equation, with some λ, then we call v the generalized eigenvector of A and B (in the second sense), and λ is called the generalized eigenvalue of A and B (in the second sense) which corresponds to the generalized eigenvector v. The possible values of λ must obey the following equation
{\displaystyle \det(\mathbf {A} -\lambda \mathbf {B} )=0.}
If n linearly independent vectors {v1, …, vn} can be found, such that for every i ∈ {1, …, n}, Avi = λiBvi, then we define the matrices P and D such that
{\displaystyle P={\begin{bmatrix}|&&|\\\mathbf {v} _{1}&\cdots &\mathbf {v} _{n}\\|&&|\end{bmatrix}}\equiv {\begin{bmatrix}(\mathbf {v} _{1})_{1}&\cdots &(\mathbf {v} _{n})_{1}\\\vdots &&\vdots \\(\mathbf {v} _{1})_{n}&\cdots &(\mathbf {v} _{n})_{n}\end{bmatrix}}}
{\displaystyle (D)_{ij}={\begin{cases}\lambda _{i},&{\text{if }}i=j\\0,&{\text{otherwise}}\end{cases}}}
Then the following equality holds
{\displaystyle \mathbf {A} =\mathbf {B} \mathbf {P} \mathbf {D} \mathbf {P} ^{-1}}
The proof is as follows:
{\displaystyle \mathbf {A} \mathbf {P} =\mathbf {A} {\begin{bmatrix}|&&|\\\mathbf {v} _{1}&\cdots &\mathbf {v} _{n}\\|&&|\end{bmatrix}}={\begin{bmatrix}|&&|\\A\mathbf {v} _{1}&\cdots &A\mathbf {v} _{n}\\|&&|\end{bmatrix}}={\begin{bmatrix}|&&|\\\lambda _{1}B\mathbf {v} _{1}&\cdots &\lambda _{n}B\mathbf {v} _{n}\\|&&|\end{bmatrix}}={\begin{bmatrix}|&&|\\B\mathbf {v} _{1}&\cdots &B\mathbf {v} _{n}\\|&&|\end{bmatrix}}\mathbf {D} =\mathbf {B} \mathbf {P} \mathbf {D} }
And since P is invertible, we multiply the equation from the right by its inverse, finishing the proof.
The set of matrices of the form A − λB, where λ is a complex number, is called a pencil; the term matrix pencil can also refer to the pair (A, B) of matrices.
If B is invertible, then the original problem can be written in the form
{\displaystyle \mathbf {B} ^{-1}\mathbf {A} \mathbf {v} =\lambda \mathbf {v} }
which is a standard eigenvalue problem. However, in most situations it is preferable not to perform the inversion, but rather to solve the generalized eigenvalue problem as stated originally. This is especially important if A and B are Hermitian matrices, since in this case B−1A is not generally Hermitian and important properties of the solution are no longer apparent.
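As a minimal illustration of the characteristic condition det(A − λB) = 0, a 2×2 case can be solved by hand, since the determinant expands to a quadratic in λ. The sketch below is a hand-rolled illustration (not from the article, and not a library routine; production code would use a QZ-style algorithm), assuming real eigenvalues:

```python
import math

def gen_eigvals_2x2(A, B):
    """Solve det(A - lam*B) = 0 for 2x2 matrices (real eigenvalues assumed).

    The determinant expands to a*lam^2 + b*lam + c with a = det(B),
    c = det(A), and b the mixed term computed below."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    a = b11 * b22 - b12 * b21                        # det(B)
    c = a11 * a22 - a12 * a21                        # det(A)
    b = -(a11 * b22 + a22 * b11 - a12 * b21 - a21 * b12)
    r = math.sqrt(b * b - 4 * a * c)                 # discriminant root
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])

# det(A - lam*B) vanishes at both returned values.
lams = gen_eigvals_2x2([[1, 2], [3, 4]], [[2, 0], [0, 1]])
```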
If A and B are both symmetric or Hermitian, and B is also a positive-definite matrix, the eigenvalues λi are real and eigenvectors v1 and v2 with distinct eigenvalues are B-orthogonal (v1*Bv2 = 0). In this case, eigenvectors can be chosen so that the matrix P defined above satisfies
{\displaystyle \mathbf {P} ^{*}\mathbf {B} \mathbf {P} =\mathbf {I} }
or
{\displaystyle \mathbf {P} \mathbf {P} ^{*}\mathbf {B} =\mathbf {I} ,}
and there exists a basis of generalized eigenvectors (it is not a defective problem). This case is sometimes called a Hermitian definite pencil or definite pencil.
== See also ==
Eigenvalue perturbation
Frobenius covariant
Householder transformation
Jordan normal form
List of matrices
Matrix decomposition
Singular value decomposition
Sylvester's formula
== Notes ==
== References ==
== External links ==
Interactive program & tutorial of Spectral Decomposition.
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Atomic physics typically refers to the study of atomic structure and the interaction between atoms. It is primarily concerned with the way in which electrons are arranged around the nucleus and
the processes by which these arrangements change. This comprises both neutral atoms and ions; unless otherwise stated, the term atom can be assumed to include ions.
The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics—which deals with the atom as a system consisting of a nucleus and electrons—and nuclear physics, which studies nuclear reactions and special properties of atomic nuclei.
As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified.
== Isolated atoms ==
Atomic physics primarily considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles.
While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though both deal with very large numbers of atoms.
== Electronic configuration ==
Electrons form notional shells around the nucleus. These are normally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically ions or other electrons).
Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization.
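The energy bookkeeping in this paragraph can be written out directly. The helper below is hypothetical and purely illustrative (energies in eV; the 21.2 eV photon in the test is an assumed example input):

```python
def photoelectron_kinetic_energy(photon_ev, binding_ev):
    """Conservation of energy in ionization: photon energy in excess of
    the binding energy becomes the freed electron's kinetic energy."""
    excess = photon_ev - binding_ev
    # Below threshold the photon cannot ionize (it may only excite).
    return excess if excess >= 0 else None

# e.g. a 21.2 eV photon on hydrogen (binding 13.6 eV) leaves ~7.6 eV kinetic
```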
If the electron absorbs a quantity of energy less than the binding energy, it will be transferred to an excited state. After a certain time, the electron in an excited state will "jump" (undergo a transition) to a lower state. In a neutral atom, the system will emit a photon of the difference in energy, since energy is conserved.
If an inner electron has absorbed more than the binding energy (so that the atom ionizes), then a more outer electron may undergo a transition to fill the inner orbital. In this case, a visible photon or a characteristic X-ray is emitted, or a phenomenon known as the Auger effect may take place, where the released energy is transferred to another bound electron, causing it to go into the continuum. The Auger effect allows one to multiply ionize an atom with a single photon.
There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light — however, there are no such rules for excitation by collision processes.
=== Bohr model of the atom ===
The Bohr model, proposed by Niels Bohr in 1913, is a revolutionary theory describing the structure of the hydrogen atom. It introduced the idea of quantized orbits for electrons, combining classical and quantum physics.
Key Postulates of the Bohr Model
Electrons Move in Circular Orbits
Electrons revolve around the nucleus in fixed, circular paths called orbits or energy levels.
These orbits are stable and do not radiate energy.
Quantization of Angular Momentum:
The angular momentum of an electron is quantized and given by:
{\displaystyle \ L=m_{\text{e}}vr=n\hbar ,\quad n=1,2,3,\ldots }
where:
{\displaystyle m_{\text{e}}}: electron mass
{\displaystyle v}: velocity of the electron
{\displaystyle r}: radius of the orbit
{\displaystyle \hbar }: reduced Planck constant ({\displaystyle \hbar ={h}/{2\pi }})
{\displaystyle n}: principal quantum number, representing the orbit
Energy Levels
Each orbit has a specific energy. The total energy of an electron in the {\displaystyle n}th orbit is:
{\displaystyle \ E_{n}=-{\frac {\mathrm {13.6~eV} }{n^{2}}},}
where {\displaystyle \mathrm {13.6~eV} } is the magnitude of the ground-state energy of the hydrogen atom.
Emission or Absorption of Energy
Electrons can transition between orbits by absorbing or emitting energy equal to the difference between the energy levels:
{\displaystyle \Delta E=E_{\text{f}}-E_{\text{i}}=h\nu ,}
where:
{\displaystyle h}: the Planck constant
{\displaystyle \nu }: frequency of emitted/absorbed radiation
{\displaystyle E_{\text{f}},E_{\text{i}}}: final and initial energy levels
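A quick numerical check of the level and transition formulas above (a Python sketch; the n = 2 → 1 case is the familiar Lyman-alpha line):

```python
E1 = -13.6  # ground-state energy of hydrogen in eV (Bohr model)

def energy_level(n):
    """Total energy of the nth Bohr orbit: E_n = -13.6 eV / n^2."""
    return E1 / n ** 2

def transition_energy(n_i, n_f):
    """|delta E| = |E_f - E_i|, the photon energy for the jump n_i -> n_f."""
    return abs(energy_level(n_f) - energy_level(n_i))

print(round(transition_energy(2, 1), 2))  # Lyman-alpha: 10.2 eV
```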
== History and developments ==
One of the earliest steps towards atomic physics was the recognition that matter was composed
of atoms. It forms a part of the texts written in 6th century BC to 2nd century BC, such as those of Democritus or the Vaiśeṣika Sūtra written by Kaṇāda. This theory was later developed in the modern sense of the basic unit of a chemical element by the British chemist and physicist John Dalton in the early 19th century. At this stage, it was not clear what atoms were, although they could be described and classified by their properties (in bulk). The invention of the periodic system of elements by Dmitri Mendeleev was another great step forward.
The true beginning of atomic physics is marked by the discovery of spectral lines and attempts to describe the phenomenon, most notably by Joseph von Fraunhofer. The study of these lines led to the Bohr atom model and to the birth of quantum mechanics. In seeking to explain atomic spectra, an entirely new mathematical model of matter was revealed. As far as atoms and their electron shells were concerned, not only did this yield a better overall description, i.e. the atomic orbital model, but it also provided a new theoretical basis for chemistry
(quantum chemistry) and spectroscopy.
Since the Second World War, both theoretical and experimental fields have advanced at a rapid pace. This can be attributed to progress in computing technology, which has allowed larger and more sophisticated models of atomic structure and associated collision processes. Similar technological advances in accelerators, detectors, magnetic field generation and lasers have greatly assisted experimental work.
Beyond the well-known phenomena that can be described with ordinary quantum mechanics, chaotic processes can occur which require different descriptions.
== Significant atomic physicists ==
== See also ==
Particle physics
Isomeric shift
Atomism
Ionisation
Quantum Mechanics
Electron Correlation
Quantum Chemistry
Bound State
== Bibliography ==
Will Raven (2025). Atomic Physics for Everyone. Springer Nature. doi:10.1007/978-3-031-69507-0. ISBN 978-3-031-69507-0.
Sommerfeld, A. (1923) Atomic structure and spectral lines. (translated from German "Atombau und Spektrallinien" 1921), Dutton Publisher.
Foot, CJ (2004). Atomic Physics. Oxford University Press. ISBN 978-0-19-850696-6.
Smirnov, B.E. (2003) Physics of Atoms and Ions, Springer. ISBN 0-387-95550-X.
Szász, L. (1992) The Electronic Structure of Atoms, John Wiley & Sons. ISBN 0-471-54280-6.
Herzberg, Gerhard (1979) [1945]. Atomic Spectra and Atomic Structure. New York: Dover. ISBN 978-0-486-60115-1.
Bethe, H.A. & Salpeter E.E. (1957) Quantum Mechanics of One- and Two Electron Atoms. Springer.
Born, M. (1937) Atomic Physics. Blackie & Son Limited.
Cox, P.A. (1996) Introduction to Quantum Theory and Atomic Spectra. Oxford University Press. ISBN 0-19-855916
Condon, E.U. & Shortley, G.H. (1935). The Theory of Atomic Spectra. Cambridge University Press. ISBN 978-0-521-09209-8.
Cowan, Robert D. (1981). The Theory of Atomic Structure and Spectra. University of California Press. ISBN 978-0-520-03821-9.
Lindgren, I. & Morrison, J. (1986). Atomic Many-Body Theory (Second ed.). Springer-Verlag. ISBN 978-0-387-16649-0.
== References ==
== External links ==
MIT-Harvard Center for Ultracold Atoms
Stanford QFARM Initiative for Quantum Science & Engineering
Joint Quantum Institute at University of Maryland and NIST
Atomic Physics on the Internet
JILA (Atomic Physics)
ORNL Physics Division
A multiplication algorithm is an algorithm (or method) to multiply two numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Numerous algorithms are known and there has been much research into the topic.
The oldest and simplest method, known since antiquity as long multiplication or grade-school multiplication, consists of multiplying every digit in the first number by every digit in the second and adding the results. This has a time complexity of
{\displaystyle O(n^{2})}
, where n is the number of digits. When done by hand, this may also be reframed as grid method multiplication or lattice multiplication. In software, this may be called "shift and add" due to bitshifts and addition being the only two operations needed.
In 1960, Anatoly Karatsuba discovered Karatsuba multiplication, unleashing a flood of research into fast multiplication algorithms. This method uses three multiplications rather than four to multiply two two-digit numbers. (A variant of this can also be used to multiply complex numbers quickly.) Done recursively, this has a time complexity of
{\displaystyle O(n^{\log _{2}3})}
. Splitting numbers into more than two parts results in Toom-Cook multiplication; for example, using three parts results in the Toom-3 algorithm. Using many parts can set the exponent arbitrarily close to 1, but the constant factor also grows, making it impractical.
In 1968, the Schönhage-Strassen algorithm, which makes use of a Fourier transform over a modulus, was discovered. It has a time complexity of
{\displaystyle O(n\log n\log \log n)}
. In 2007, Martin Fürer proposed an algorithm with complexity
{\displaystyle O(n\log n2^{\Theta (\log ^{*}n)})}
. In 2014, Harvey, Joris van der Hoeven, and Lecerf proposed one with complexity
{\displaystyle O(n\log n2^{3\log ^{*}n})}
, thus making the implicit constant explicit; this was improved to
{\displaystyle O(n\log n2^{2\log ^{*}n})}
in 2018. Lastly, in 2019, Harvey and van der Hoeven came up with a galactic algorithm with complexity
{\displaystyle O(n\log n)}
. This matches a guess by Schönhage and Strassen that this would be the optimal bound, although this remains a conjecture today.
Integer multiplication algorithms can also be used to multiply polynomials by means of the method of Kronecker substitution.
== Long multiplication ==
If a positional numeral system is used, a natural way of multiplying numbers is taught in schools
as long multiplication, sometimes called grade-school multiplication, sometimes called the Standard Algorithm:
multiply the multiplicand by each digit of the multiplier and then add up all the properly shifted results. It requires memorization of the multiplication table for single digits.
This is the usual algorithm for multiplying larger numbers by hand in base 10. A person doing long multiplication on paper will write down all the products and then add them together; an abacus-user will sum the products as soon as each one is computed.
=== Example ===
This example uses long multiplication to multiply 23,958,233 (multiplicand) by 5,830 (multiplier) and arrives at 139,676,498,390 for the result (product).
23958233
× 5830
———————————————
00000000 ( = 23,958,233 × 0)
71874699 ( = 23,958,233 × 30)
191665864 ( = 23,958,233 × 800)
+ 119791165 ( = 23,958,233 × 5,000)
———————————————
139676498390 ( = 139,676,498,390)
==== Other notations ====
In some countries such as Germany, the above multiplication is depicted similarly but with the original product kept horizontal and computation starting with the first digit of the multiplier:
23958233 · 5830
———————————————
119791165
191665864
71874699
00000000
———————————————
139676498390
Below pseudocode describes the process of above multiplication. It keeps only one row to maintain the sum which finally becomes the result. Note that the '+=' operator is used to denote sum to existing value and store operation (akin to languages such as Java and C) for compactness.
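The pseudocode itself is not reproduced here; the following Python sketch reconstructs the idea under the same convention — a single accumulator row updated with '+=' as each digit product is formed (digit lists are least-significant-digit first; helper names are illustrative):

```python
def long_multiply(a, b):
    """Grade-school multiplication of digit lists (least significant first),
    keeping only one accumulator row for the running sum."""
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            # '+=' accumulates into the existing partial sum, as in the text
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b)] += carry
    return result

def to_digits(n):
    return [int(d) for d in str(n)[::-1]]

def from_digits(ds):
    return int("".join(map(str, ds[::-1])))

print(from_digits(long_multiply(to_digits(23958233), to_digits(5830))))  # 139676498390
```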
=== Usage in computers ===
Some chips implement long multiplication, in hardware or in microcode, for various integer and floating-point word sizes. In arbitrary-precision arithmetic, it is common to use long multiplication with the base set to 2^w, where w is the number of bits in a word, for multiplying relatively small numbers. To multiply two numbers with n digits using this method, one needs about n² operations. More formally, multiplying two n-digit numbers using long multiplication requires Θ(n²) single-digit operations (additions and multiplications).
When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive. A typical solution is to represent the number in a small base, b, such that, for example, 8b is a representable machine integer. Several additions can then be performed before an overflow occurs. When the number becomes too large, we add part of it to the result, or we carry and map the remaining part back to a number that is less than b. This process is called normalization. Richard Brent used this approach in his Fortran package, MP.
Computers initially used a very similar algorithm to long multiplication in base 2, but modern processors have optimized circuitry for fast multiplications using more efficient algorithms, at the price of a more complex hardware realization. In base two, long multiplication is sometimes called "shift and add", because the algorithm simplifies and just consists of shifting left (multiplying by powers of two) and adding. Most currently available microprocessors implement this or other similar algorithms (such as Booth encoding) for various integer and floating-point sizes in hardware multipliers or in microcode.
On currently available processors, a bit-wise shift instruction is usually (but not always) faster than a multiply instruction and can be used to multiply (shift left) and divide (shift right) by powers of two. Multiplication by a constant and division by a constant can be implemented using a sequence of shifts and adds or subtracts. For example, there are several ways to multiply by 10 using only bit-shift and addition.
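Two such decompositions of multiplication by 10, rendered illustratively with Python's shift operators:

```python
def times_ten(x):
    # 10x = 8x + 2x: two left shifts and one addition
    return (x << 3) + (x << 1)

def times_ten_alt(x):
    # 10x = 2 * (4x + x): a different shift-and-add sequence
    return (x + (x << 2)) << 1
```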
In some cases such sequences of shifts and adds or subtracts will outperform hardware multipliers and especially dividers. A division by a number of the form
{\displaystyle 2^{n}}
or
{\displaystyle 2^{n}\pm 1}
often can be converted to such a short sequence.
== Algorithms for multiplying by hand ==
In addition to the standard long multiplication, there are several other methods used to perform multiplication by hand. Such algorithms may be devised for speed, ease of calculation, or educational value, particularly when computers or multiplication tables are unavailable.
=== Grid method ===
The grid method (or box method) is an introductory method for multiple-digit multiplication that is often taught to pupils at primary school or elementary school. It has been a standard part of the national primary school mathematics curriculum in England and Wales since the late 1990s.
Both factors are broken up ("partitioned") into their hundreds, tens and units parts, and the products of the parts are then calculated explicitly in a relatively simple multiplication-only stage, before these contributions are then totalled to give the final answer in a separate addition stage.
The calculation 34 × 13, for example, could be computed using the grid:
followed by addition to obtain 442, either in a single sum (see right), or through forming the row-by-row totals
(300 + 40) + (90 + 12) = 340 + 102 = 442.
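The partitioning and gathering-up stages can be sketched compactly in Python (the parts() helper does the place-value partitioning; names are illustrative):

```python
def grid_multiply(a, b):
    """Grid / partial-products method: split each factor into place-value
    parts, multiply every pair of parts, then total the sub-products."""
    def parts(n):
        # e.g. 34 -> [4, 30]; zero digits contribute nothing
        return [int(d) * 10 ** i for i, d in enumerate(str(n)[::-1]) if d != '0']
    return sum(pa * pb for pa in parts(a) for pb in parts(b))

print(grid_multiply(34, 13))  # 12 + 40 + 90 + 300 = 442
```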
This calculation approach (though not necessarily with the explicit grid arrangement) is also known as the partial products algorithm. Its essence is the calculation of the simple multiplications separately, with all addition being left to the final gathering-up stage.
The grid method can in principle be applied to factors of any size, although the number of sub-products becomes cumbersome as the number of digits increases. Nevertheless, it is seen as a usefully explicit method to introduce the idea of multiple-digit multiplications; and, in an age when most multiplication calculations are done using a calculator or a spreadsheet, it may in practice be the only multiplication algorithm that some students will ever need.
=== Lattice multiplication ===
Lattice, or sieve, multiplication is algorithmically equivalent to long multiplication. It requires the preparation of a lattice (a grid drawn on paper) which guides the calculation and separates all the multiplications from the additions. It was introduced to Europe in 1202 in Fibonacci's Liber Abaci. Fibonacci described the operation as mental, using his right and left hands to carry the intermediate calculations. Matrakçı Nasuh presented 6 different variants of this method in his 16th-century book, Umdet-ul Hisab. It was widely used in Enderun schools across the Ottoman Empire. Napier's bones, or Napier's rods, also used this method, as published by Napier in 1617, the year of his death.
As shown in the example, the multiplicand and multiplier are written above and to the right of a lattice, or a sieve. It is found in Muhammad ibn Musa al-Khwarizmi's "Arithmetic", one of Leonardo's sources mentioned by Sigler, author of "Fibonacci's Liber Abaci", 2002.
During the multiplication phase, the lattice is filled in with two-digit products of the corresponding digits labeling each row and column: the tens digit goes in the top-left corner.
During the addition phase, the lattice is summed on the diagonals.
Finally, if a carry phase is necessary, the answer as shown along the left and bottom sides of the lattice is converted to normal form by carrying ten's digits as in long addition or multiplication.
==== Example ====
The pictures on the right show how to calculate 345 × 12 using lattice multiplication. As a more complicated example, consider the picture below displaying the computation of 23,958,233 multiplied by 5,830 (multiplier); the result is 139,676,498,390. Notice 23,958,233 is along the top of the lattice and 5,830 is along the right side. The products fill the lattice and the sum of those products (on the diagonal) are along the left and bottom sides. Then those sums are totaled as shown.
=== Russian peasant multiplication ===
The binary method is also known as peasant multiplication, because it has been widely used by people who are classified as peasants and thus have not memorized the multiplication tables required for long multiplication. The algorithm was in use in ancient Egypt. Its main advantages are that it can be taught quickly, requires no memorization, and can be performed using tokens, such as poker chips, if paper and pencil aren't available. The disadvantage is that it takes more steps than long multiplication, so it can be unwieldy for large numbers.
==== Description ====
On paper, write down in one column the numbers you get when you repeatedly halve the multiplier, ignoring the remainder; in a column beside it repeatedly double the multiplicand. Cross out each row in which the last digit of the first number is even, and add the remaining numbers in the second column to obtain the product.
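The steps above translate directly into code; a Python sketch:

```python
def peasant_multiply(a, b):
    """Russian peasant multiplication: repeatedly halve a (ignoring the
    remainder) and double b, summing the rows where a is odd."""
    total = 0
    while a > 0:
        if a % 2 == 1:       # row is not crossed out
            total += b
        a //= 2              # halve, discarding the remainder
        b *= 2               # double
    return total

print(peasant_multiply(11, 3))  # 3 + 6 + 24 = 33
```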
==== Examples ====
This example uses peasant multiplication to multiply 11 by 3 to arrive at a result of 33.
Decimal: Binary:
11 3 1011 11
5 6 101 110
2 12 10 1100
1 24 1 11000
—— ——————
33 100001
Describing the steps explicitly:
11 and 3 are written at the top
11 is halved (5.5) and 3 is doubled (6). The fractional portion is discarded (5.5 becomes 5).
5 is halved (2.5) and 6 is doubled (12). The fractional portion is discarded (2.5 becomes 2). The figure in the left column (2) is even, so the figure in the right column (12) is discarded.
2 is halved (1) and 12 is doubled (24).
All not-scratched-out values are summed: 3 + 6 + 24 = 33.
The method works because multiplication is distributive, so:
{\displaystyle {\begin{aligned}3\times 11&=3\times (1\times 2^{0}+1\times 2^{1}+0\times 2^{2}+1\times 2^{3})\\&=3\times (1+2+8)\\&=3+6+24\\&=33.\end{aligned}}}
A more complicated example, using the figures from the earlier examples (23,958,233 and 5,830):
Decimal: Binary:
5830 23958233 1011011000110 1011011011001001011011001
2915 47916466 101101100011 10110110110010010110110010
1457 95832932 10110110001 101101101100100101101100100
728 191665864 1011011000 1011011011001001011011001000
364 383331728 101101100 10110110110010010110110010000
182 766663456 10110110 101101101100100101101100100000
91 1533326912 1011011 1011011011001001011011001000000
45 3066653824 101101 10110110110010010110110010000000
22 6133307648 10110 101101101100100101101100100000000
11 12266615296 1011 1011011011001001011011001000000000
5 24533230592 101 10110110110010010110110010000000000
2 49066461184 10 101101101100100101101100100000000000
1 98132922368 1 1011011011001001011011001000000000000
———————————— 1022143253354344244353353243222210110 (before carry)
139676498390 10000010000101010111100011100111010110
=== Quarter square multiplication ===
This formula can in some cases be used, to make multiplication tasks easier to complete:
{\displaystyle {\frac {\left(x+y\right)^{2}}{4}}-{\frac {\left(x-y\right)^{2}}{4}}={\frac {1}{4}}\left(\left(x^{2}+2xy+y^{2}\right)-\left(x^{2}-2xy+y^{2}\right)\right)={\frac {1}{4}}\left(4xy\right)=xy.}
In the case where {\displaystyle x} and {\displaystyle y} are integers, we have that
{\displaystyle (x+y)^{2}\equiv (x-y)^{2}{\bmod {4}}}
because {\displaystyle x+y} and {\displaystyle x-y}
are either both even or both odd. This means that
{\displaystyle {\begin{aligned}xy&={\frac {1}{4}}(x+y)^{2}-{\frac {1}{4}}(x-y)^{2}\\&=\left((x+y)^{2}{\text{ div }}4\right)-\left((x-y)^{2}{\text{ div }}4\right)\end{aligned}}}
and it's sufficient to (pre-)compute the integral part of squares divided by 4 like in the following example.
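A Python sketch of this precomputation and lookup, using the digit-sized table described in the next subsection:

```python
# Table of floor(k^2 / 4) for k = 0..18 — the quarter squares for the
# sums and differences of two single digits.
QUARTER_SQUARES = [k * k // 4 for k in range(19)]

def quarter_square_multiply(x, y):
    """xy via one sum, one difference, and two table lookups.

    Valid because (x+y)^2 and (x-y)^2 leave the same remainder mod 4,
    so the discarded remainders cancel."""
    return QUARTER_SQUARES[x + y] - QUARTER_SQUARES[abs(x - y)]

print(quarter_square_multiply(9, 3))  # table gives 36 - 9 = 27
```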
==== Examples ====
Below is a lookup table of quarter squares with the remainder discarded for the digits 0 through 18; this allows for the multiplication of numbers up to 9×9.
If, for example, you wanted to multiply 9 by 3, you observe that the sum and difference are 12 and 6 respectively. Looking both those values up on the table yields 36 and 9, the difference of which is 27, which is the product of 9 and 3.
==== History of quarter square multiplication ====
The quarter square method of multiplication, which relies on the floor function, is attributed by some sources to Babylonian mathematics (2000–1600 BC).
Antoine Voisin published a table of quarter squares from 1 to 1000 in 1817 as an aid in multiplication. A larger table of quarter squares from 1 to 100000 was published by Samuel Laundy in 1856, and a table from 1 to 200000 by Joseph Blater in 1888.
Quarter square multipliers were used in analog computers to form an analog signal that was the product of two analog input signals. In this application, the sum and difference of two input voltages are formed using operational amplifiers. The square of each of these is approximated using piecewise linear circuits. Finally the difference of the two squares is formed and scaled by a factor of one fourth using yet another operational amplifier.
In 1980, Everett L. Johnson proposed using the quarter square method in a digital multiplier. To form the product of two 8-bit integers, for example, the digital device forms the sum and difference, looks both quantities up in a table of squares, takes the difference of the results, and divides by four by shifting two bits to the right. For 8-bit integers the table of quarter squares will have 2⁹−1=511 entries (one entry for the full range 0..510 of possible sums, the differences using only the first 256 entries in range 0..255) or 2⁹−1=511 entries (using for negative differences the technique of 2's complement and 9-bit masking, which avoids testing the sign of differences), each entry being 16 bits wide (the entry values are from (0²/4)=0 to (510²/4)=65025).
The quarter square multiplier technique has benefited 8-bit systems that do not have any support for a hardware multiplier. Charles Putney implemented this for the 6502.
== Computational complexity of multiplication ==
A line of research in theoretical computer science is about the number of single-bit arithmetic operations necessary to multiply two {\displaystyle n}-bit integers. This is known as the computational complexity of multiplication. Usual algorithms done by hand have asymptotic complexity of
{\displaystyle O(n^{2})}
, but in 1960 Anatoly Karatsuba discovered that better complexity was possible (with the Karatsuba algorithm).
Currently, the algorithm with the best computational complexity is a 2019 algorithm of David Harvey and Joris van der Hoeven, which uses the strategies of using number-theoretic transforms introduced with the Schönhage–Strassen algorithm to multiply integers using only
{\displaystyle O(n\log n)}
operations. This is conjectured to be the best possible algorithm, but lower bounds of
{\displaystyle \Omega (n\log n)}
are not known.
=== Karatsuba multiplication ===
Karatsuba multiplication is an O(n^(log₂ 3)) ≈ O(n^1.585) divide-and-conquer algorithm that uses recursion to combine sub-calculations: rewriting the product formula makes the recursive sub-calculations possible, and carrying out the recursion yields a fast method.
Let {\displaystyle x} and {\displaystyle y} be represented as {\displaystyle n}-digit strings in some base {\displaystyle B}. For any positive integer {\displaystyle m} less than {\displaystyle n}, one can write the two given numbers as
{\displaystyle x=x_{1}B^{m}+x_{0},}
{\displaystyle y=y_{1}B^{m}+y_{0},}
where {\displaystyle x_{0}} and {\displaystyle y_{0}} are less than {\displaystyle B^{m}}. The product is then
{\displaystyle {\begin{aligned}xy&=(x_{1}B^{m}+x_{0})(y_{1}B^{m}+y_{0})\\&=x_{1}y_{1}B^{2m}+(x_{1}y_{0}+x_{0}y_{1})B^{m}+x_{0}y_{0}\\&=z_{2}B^{2m}+z_{1}B^{m}+z_{0},\\\end{aligned}}}
where
{\displaystyle z_{2}=x_{1}y_{1},}
{\displaystyle z_{1}=x_{1}y_{0}+x_{0}y_{1},}
{\displaystyle z_{0}=x_{0}y_{0}.}
These formulae require four multiplications and were known to Charles Babbage. Karatsuba observed that {\displaystyle xy} can be computed in only three multiplications, at the cost of a few extra additions. With {\displaystyle z_{0}} and {\displaystyle z_{2}} as before one can observe that
{\displaystyle {\begin{aligned}z_{1}&=x_{1}y_{0}+x_{0}y_{1}\\&=x_{1}y_{0}+x_{0}y_{1}+x_{1}y_{1}-x_{1}y_{1}+x_{0}y_{0}-x_{0}y_{0}\\&=x_{1}y_{0}+x_{0}y_{0}+x_{0}y_{1}+x_{1}y_{1}-x_{1}y_{1}-x_{0}y_{0}\\&=(x_{1}+x_{0})y_{0}+(x_{0}+x_{1})y_{1}-x_{1}y_{1}-x_{0}y_{0}\\&=(x_{1}+x_{0})(y_{0}+y_{1})-x_{1}y_{1}-x_{0}y_{0}\\&=(x_{1}+x_{0})(y_{1}+y_{0})-z_{2}-z_{0}.\\\end{aligned}}}
Because of the overhead of recursion, Karatsuba's multiplication is slower than long multiplication for small values of n; typical implementations therefore switch to long multiplication for small values of n.
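A Python sketch of the recursive scheme, including the fallback for small inputs (the 8-digit threshold is an arbitrary illustrative choice, and the built-in product stands in for long multiplication):

```python
def karatsuba(x, y, threshold_digits=8):
    """Karatsuba product of nonnegative integers (illustrative sketch)."""
    # Below the threshold, fall back to a direct product.
    if x < 10 ** threshold_digits or y < 10 ** threshold_digits:
        return x * y
    m = min(len(str(x)), len(str(y))) // 2
    base = 10 ** m
    x1, x0 = divmod(x, base)            # x = x1*B^m + x0
    y1, y0 = divmod(y, base)            # y = y1*B^m + y0
    z2 = karatsuba(x1, y1, threshold_digits)
    z0 = karatsuba(x0, y0, threshold_digits)
    # Only three recursive multiplications, per Karatsuba's identity.
    z1 = karatsuba(x1 + x0, y1 + y0, threshold_digits) - z2 - z0
    return z2 * base * base + z1 * base + z0
```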
==== General case with multiplication of N numbers ====
By exploring patterns after expansion, one sees the following:
{\displaystyle {\begin{alignedat}{5}(x_{1}B^{m}+x_{0})(y_{1}B^{m}+y_{0})(z_{1}B^{m}+z_{0})(a_{1}B^{m}+a_{0})&=a_{1}x_{1}y_{1}z_{1}B^{4m}&+a_{1}x_{1}y_{1}z_{0}B^{3m}&+a_{1}x_{1}y_{0}z_{1}B^{3m}&+a_{1}x_{0}y_{1}z_{1}B^{3m}\\&+a_{0}x_{1}y_{1}z_{1}B^{3m}&+a_{1}x_{1}y_{0}z_{0}B^{2m}&+a_{1}x_{0}y_{1}z_{0}B^{2m}&+a_{0}x_{1}y_{1}z_{0}B^{2m}\\&+a_{1}x_{0}y_{0}z_{1}B^{2m}&+a_{0}x_{1}y_{0}z_{1}B^{2m}&+a_{0}x_{0}y_{1}z_{1}B^{2m}&+a_{1}x_{0}y_{0}z_{0}B^{m{\phantom {1}}}\\&+a_{0}x_{1}y_{0}z_{0}B^{m{\phantom {1}}}&+a_{0}x_{0}y_{1}z_{0}B^{m{\phantom {1}}}&+a_{0}x_{0}y_{0}z_{1}B^{m{\phantom {1}}}&+a_{0}x_{0}y_{0}z_{0}{\phantom {B^{1m}}}\end{alignedat}}}
Each summand is associated to a unique binary number from 0 to
{\displaystyle 2^{N+1}-1}
, for example
{\displaystyle a_{1}x_{1}y_{1}z_{1}\longleftrightarrow 1111,\ a_{1}x_{0}y_{1}z_{0}\longleftrightarrow 1010}
etc. Furthermore, B is raised to the number of 1s in this binary string, multiplied by m.
If we express this in fewer terms, we get:
{\displaystyle \prod _{j=1}^{N}(x_{j,1}B^{m}+x_{j,0})=\sum _{i=1}^{2^{N+1}-1}\prod _{j=1}^{N}x_{j,c(i,j)}B^{m\sum _{j=1}^{N}c(i,j)}=\sum _{j=0}^{N}z_{j}B^{jm}}
, where
{\displaystyle c(i,j)}
denotes the digit of the number i at position j. Notice that
{\displaystyle c(i,j)\in \{0,1\}}
{\displaystyle {\begin{aligned}z_{0}&=\prod _{j=1}^{N}x_{j,0}\\z_{N}&=\prod _{j=1}^{N}x_{j,1}\\z_{N-1}&=\prod _{j=1}^{N}(x_{j,0}+x_{j,1})-\sum _{i\neq N-1}^{N}z_{i}\end{aligned}}}
==== History ====
Karatsuba's algorithm was the first known algorithm for multiplication that is asymptotically faster than long multiplication, and can thus be viewed as the starting point for the theory of fast multiplications.
=== Toom–Cook ===
Another method of multiplication is called Toom–Cook or Toom-3. The Toom–Cook method splits each number to be multiplied into multiple parts. The Toom–Cook method is one of the generalizations of the Karatsuba method. A three-way Toom–Cook can do a size-3N multiplication for the cost of five size-N multiplications. This accelerates the operation by a factor of 9/5, while the Karatsuba method accelerates it by 4/3.
Although using more and more parts can reduce the time spent on recursive multiplications further, the overhead from additions and digit management also grows. For this reason, the method of Fourier transforms is typically faster for numbers with several thousand digits, and asymptotically faster for even larger numbers.
=== Schönhage–Strassen ===
Every number in base B can be written as a polynomial:
{\displaystyle X=\sum _{i=0}^{N}{x_{i}B^{i}}}
Furthermore, multiplication of two numbers could be thought of as a product of two polynomials:
{\displaystyle XY=(\sum _{i=0}^{N}{x_{i}B^{i}})(\sum _{j=0}^{N}{y_{j}B^{j}})}
Because, for
{\displaystyle B^{k}}
:
{\displaystyle c_{k}=\sum _{(i,j):i+j=k}{a_{i}b_{j}}=\sum _{i=0}^{k}{a_{i}b_{k-i}}}
,
we have a convolution.
By using the FFT (fast Fourier transform) together with the convolution rule, we get
{\displaystyle {\hat {f}}(a*b)={\hat {f}}(\sum _{i=0}^{k}{a_{i}b_{k-i}})={\hat {f}}(a)\bullet {\hat {f}}(b)}
. That is,
{\displaystyle C_{k}=a_{k}\bullet b_{k}}
, where
{\displaystyle C_{k}}
is the corresponding coefficient in Fourier space. This can also be written as:
{\displaystyle \mathrm {fft} (a*b)=\mathrm {fft} (a)\bullet \mathrm {fft} (b)}
.
We have the same coefficients due to linearity under the Fourier transform, and because these polynomials consist of only one unique term per coefficient:
{\displaystyle {\hat {f}}(x^{n})=\left({\frac {i}{2\pi }}\right)^{n}\delta ^{(n)}}
and
{\displaystyle {\hat {f}}(a\,X(\xi )+b\,Y(\xi ))=a\,{\hat {X}}(\xi )+b\,{\hat {Y}}(\xi )}
Convolution rule:
{\displaystyle {\hat {f}}(X*Y)=\ {\hat {f}}(X)\bullet {\hat {f}}(Y)}
Through the FFT, we have reduced our convolution problem to a pointwise product problem.
By finding the inverse FFT (polynomial interpolation) for each
{\displaystyle c_{k}}
, one gets the desired coefficients.
The algorithm uses a divide-and-conquer strategy to split the problem into subproblems.
It has a time complexity of O(n log(n) log(log(n))).
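The convolution-by-FFT idea above can be illustrated with a small sketch. This is illustrative only: it uses floating-point complex FFTs with rounding rather than the exact modular transforms of the actual Schönhage–Strassen algorithm, and the `fft` and `multiply` helpers are hand-rolled names, not library calls.

```python
import cmath

def fft(a, invert=False):
    # Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def multiply(x, y, base=10):
    # Digit vectors, least significant digit first.
    a = [int(d) for d in reversed(str(x))]
    b = [int(d) for d in reversed(str(y))]
    n = 1
    while n < len(a) + len(b):
        n *= 2
    fa = fft([complex(d) for d in a] + [0j] * (n - len(a)))
    fb = fft([complex(d) for d in b] + [0j] * (n - len(b)))
    # Convolution theorem: pointwise product in Fourier space.
    fc = [u * v for u, v in zip(fa, fb)]
    c = [round(v.real / n) for v in fft(fc, invert=True)]
    # Carry propagation turns the convolution coefficients back into digits.
    result, carry = 0, 0
    for i, d in enumerate(c):
        carry, digit = divmod(d + carry, base)
        result += digit * base ** i
    return result + carry * base ** len(c)
```

The pointwise product in `fc` is exactly the relation fft(a∗b) = fft(a)∙fft(b) stated above; the inverse FFT then plays the role of polynomial interpolation.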
==== History ====
The algorithm was invented by Strassen (1968). It was made practical, and theoretical guarantees were provided, in 1971 by Schönhage and Strassen, resulting in the Schönhage–Strassen algorithm.
=== Further improvements ===
In 2007 the asymptotic complexity of integer multiplication was improved by the Swiss mathematician Martin Fürer of Pennsylvania State University to
{\textstyle O(n\log n\cdot {2}^{\Theta (\log ^{*}(n))})}
using Fourier transforms over complex numbers, where log* denotes the iterated logarithm. Anindya De, Chandan Saha, Piyush Kurur and Ramprasad Saptharishi gave a similar algorithm using modular arithmetic in 2008 achieving the same running time. In context of the above material, what these latter authors have achieved is to find N much less than 23k + 1, so that Z/NZ has a (2m)th root of unity. This speeds up computation and reduces the time complexity. However, these latter algorithms are only faster than Schönhage–Strassen for impractically large inputs.
In 2014, Harvey, Joris van der Hoeven and Lecerf gave a new algorithm that achieves a running time of
{\displaystyle O(n\log n\cdot 2^{3\log ^{*}n})}
, making explicit the implied constant in the
{\displaystyle O(\log ^{*}n)}
exponent. They also proposed a variant of their algorithm which achieves
{\displaystyle O(n\log n\cdot 2^{2\log ^{*}n})}
but whose validity relies on standard conjectures about the distribution of Mersenne primes. In 2016, Covanov and Thomé proposed an integer multiplication algorithm based on a generalization of Fermat primes that conjecturally achieves a complexity bound of
{\displaystyle O(n\log n\cdot 2^{2\log ^{*}n})}
. This matches the 2015 conditional result of Harvey, van der Hoeven, and Lecerf but uses a different algorithm and relies on a different conjecture. In 2018, Harvey and van der Hoeven used an approach based on the existence of short lattice vectors guaranteed by Minkowski's theorem to prove an unconditional complexity bound of
{\displaystyle O(n\log n\cdot 2^{2\log ^{*}n})}
.
In March 2019, David Harvey and Joris van der Hoeven announced their discovery of an O(n log n) multiplication algorithm. It was published in the Annals of Mathematics in 2021. Because Schönhage and Strassen predicted that n log(n) is the "best possible" result, Harvey said: "... our work is expected to be the end of the road for this problem, although we don't know yet how to prove this rigorously."
=== Lower bounds ===
There is a trivial lower bound of Ω(n) for multiplying two n-bit numbers on a single processor; no matching algorithm (on conventional machines, that is on Turing equivalent machines) nor any sharper lower bound is known. Multiplication lies outside of AC0[p] for any prime p, meaning there is no family of constant-depth, polynomial (or even subexponential) size circuits using AND, OR, NOT, and MODp gates that can compute a product. This follows from a constant-depth reduction of MODq to multiplication. Lower bounds for multiplication are also known for some classes of branching programs.
== Complex number multiplication ==
Complex multiplication normally involves four multiplications and two additions.
{\displaystyle (a+bi)(c+di)=(ac-bd)+(bc+ad)i.}
Or
{\displaystyle {\begin{array}{c|c|c}\times &a&bi\\\hline c&ac&bci\\\hline di&adi&-bd\end{array}}}
As observed by Peter Ungar in 1963, one can reduce the number of multiplications to three, using essentially the same computation as Karatsuba's algorithm. The product (a + bi) · (c + di) can be calculated in the following way.
k1 = c · (a + b)
k2 = a · (d − c)
k3 = b · (c + d)
Real part = k1 − k3
Imaginary part = k1 + k2.
This algorithm uses only three multiplications, rather than four, and five additions or subtractions rather than two. If a multiply is more expensive than three adds or subtracts, as when calculating by hand, then there is a gain in speed. On modern computers a multiply and an add can take about the same time so there may be no speed gain. There is a trade-off in that there may be some loss of precision when using floating point.
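The three-multiplication scheme above can be written out directly. A minimal sketch; the function name is illustrative.

```python
def complex_mul_3(a, b, c, d):
    """Compute (a + bi)(c + di) with three real multiplications (Ungar, 1963)."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    # Real part: k1 - k3 = ac - bd; imaginary part: k1 + k2 = bc + ad.
    return k1 - k3, k1 + k2
```

Note the trade-off discussed above: three multiplies and five additions or subtractions, versus the usual four multiplies and two additions.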
For fast Fourier transforms (FFTs) (or any linear transformation) the complex multiplies are by constant coefficients c + di (called twiddle factors in FFTs), in which case two of the additions (d−c and c+d) can be precomputed. Hence, only three multiplies and three adds are required. However, trading off a multiplication for an addition in this way may no longer be beneficial with modern floating-point units.
== Polynomial multiplication ==
All the above multiplication algorithms can also be expanded to multiply polynomials. Alternatively the Kronecker substitution technique may be used to convert the problem of multiplying polynomials into a single binary multiplication.
Long multiplication methods can be generalised to allow the multiplication of algebraic formulae:
14ac - 3ab + 2 multiplied by ac - ab + 1
14ac -3ab 2
ac -ab 1
————————————————————
14a2c2 -3a2bc 2ac
-14a2bc 3a2b2 -2ab
14ac -3ab 2
———————————————————————————————————————
14a2c2 -17a2bc 16ac 3a2b2 -5ab +2
=======================================
As a further example of column based multiplication, consider multiplying 23 long tons (t), 12 hundredweight (cwt) and 2 quarters (qtr) by 47. This example uses avoirdupois measures: 1 t = 20 cwt, 1 cwt = 4 qtr.
t cwt qtr
23 12 2
47 x
————————————————
141 94 94
940 470
29 23
————————————————
1110 587 94
————————————————
1110 7 2
================= Answer: 1110 ton 7 cwt 2 qtr
First multiply the quarters by 47, the result 94 is written into the first workspace. Next, multiply cwt 12*47 = (2 + 10)*47 but don't add up the partial results (94, 470) yet. Likewise multiply 23 by 47 yielding (141, 940). The quarters column is totaled and the result placed in the second workspace (a trivial move in this case). 94 quarters is 23 cwt and 2 qtr, so place the 2 in the answer and put the 23 in the next column left. Now add up the three entries in the cwt column giving 587. This is 29 t 7 cwt, so write the 7 into the answer and the 29 in the column to the left. Now add up the tons column. There is no adjustment to make, so the result is just copied down.
The same layout and methods can be used for any traditional measurements and non-decimal currencies such as the old British £sd system.
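The column-based mixed-radix multiplication above can be sketched in code. This is an illustrative helper, not from the source; the conversion factors follow the example (4 qtr = 1 cwt, 20 cwt = 1 t).

```python
def mixed_radix_multiply(amounts, factors, k):
    """Multiply a mixed-radix quantity by an integer k, with carry propagation.

    amounts: column values from least to most significant, e.g. [qtr, cwt, t].
    factors: units of each column per unit of the next, e.g. [4, 20].
    """
    carry = 0
    out = []
    for i, a in enumerate(amounts):
        total = a * k + carry
        if i < len(factors):
            carry, rem = divmod(total, factors[i])
        else:
            carry, rem = 0, total  # most significant column: no further carry
        out.append(rem)
    return out

# 23 t 12 cwt 2 qtr times 47, least significant column first:
print(mixed_radix_multiply([2, 12, 23], [4, 20], 47))  # -> [2, 7, 1110]
```

The result [2, 7, 1110] reads back as 1110 t 7 cwt 2 qtr, matching the worked answer above.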
== See also ==
Binary multiplier
Dadda multiplier
Division algorithm
Horner scheme for evaluating of a polynomial
Logarithm
Matrix multiplication algorithm
Mental calculation
Number-theoretic transform
Prosthaphaeresis
Slide rule
Trachtenberg system
Residue number system § Multiplication for another fast multiplication algorithm, specially efficient when many operations are done in sequence, such as in linear algebra
Wallace tree
== References ==
== Further reading ==
Warren Jr., Henry S. (2013). Hacker's Delight (2 ed.). Addison Wesley - Pearson Education, Inc. ISBN 978-0-321-84268-8.
Savard, John J. G. (2018) [2006]. "Advanced Arithmetic Techniques". quadibloc. Archived from the original on 2018-07-03. Retrieved 2018-07-16.
Johansson, Kenny (2008). Low Power and Low Complexity Shift-and-Add Based Computations (PDF) (Dissertation thesis). Linköping Studies in Science and Technology (1 ed.). Linköping, Sweden: Department of Electrical Engineering, Linköping University. ISBN 978-91-7393-836-5. ISSN 0345-7524. No. 1201. Archived (PDF) from the original on 2017-08-13. Retrieved 2021-08-23. (x+268 pages)
== External links ==
=== Basic arithmetic ===
The Many Ways of Arithmetic in UCSMP Everyday Mathematics
A Powerpoint presentation about ancient mathematics
Lattice Multiplication Flash Video
=== Advanced algorithms ===
Multiplication Algorithms used by GMP | Wikipedia/Multiplication_algorithm |
In the mathematical subfield of numerical analysis, de Boor's algorithm is a polynomial-time and numerically stable algorithm for evaluating spline curves in B-spline form. It is a generalization of de Casteljau's algorithm for Bézier curves. The algorithm was devised by German-American mathematician Carl R. de Boor. Simplified, potentially faster variants of the de Boor algorithm have been created but they suffer from comparatively lower stability.
== Introduction ==
A general introduction to B-splines is given in the main article. Here we discuss de Boor's algorithm, an efficient and numerically stable scheme to evaluate a spline curve
{\displaystyle \mathbf {S} (x)}
at position
{\displaystyle x}
. The curve is built from a sum of B-spline functions
{\displaystyle B_{i,p}(x)}
multiplied with potentially vector-valued constants
{\displaystyle \mathbf {c} _{i}}
, called control points,
{\displaystyle \mathbf {S} (x)=\sum _{i}\mathbf {c} _{i}B_{i,p}(x).}
B-splines of order
{\displaystyle p+1}
are connected piece-wise polynomial functions of degree
{\displaystyle p}
defined over a grid of knots
{\displaystyle {t_{0},\dots ,t_{i},\dots ,t_{m}}}
(we always use zero-based indices in the following). De Boor's algorithm uses O(p²) + O(p) operations to evaluate the spline curve. Note: the main article about B-splines and the classic publications use a different notation: the B-spline is indexed as
{\displaystyle B_{i,n}(x)}
with
{\displaystyle n=p+1}
.
== Local support ==
B-splines have local support, meaning that the polynomials are positive only in a compact domain and zero elsewhere. The Cox-de Boor recursion formula shows this:
{\displaystyle B_{i,0}(x):={\begin{cases}1&{\text{if }}\quad t_{i}\leq x<t_{i+1}\\0&{\text{otherwise}}\end{cases}}}
{\displaystyle B_{i,p}(x):={\frac {x-t_{i}}{t_{i+p}-t_{i}}}B_{i,p-1}(x)+{\frac {t_{i+p+1}-x}{t_{i+p+1}-t_{i+1}}}B_{i+1,p-1}(x).}
Let the index
{\displaystyle k}
define the knot interval that contains the position,
{\displaystyle x\in [t_{k},t_{k+1})}
. We can see in the recursion formula that only B-splines with
{\displaystyle i=k-p,\dots ,k}
are non-zero for this knot interval. Thus, the sum is reduced to:
{\displaystyle \mathbf {S} (x)=\sum _{i=k-p}^{k}\mathbf {c} _{i}B_{i,p}(x).}
It follows from
{\displaystyle i\geq 0}
that
{\displaystyle k\geq p}
. Similarly, we see in the recursion that the highest queried knot location is at index
{\displaystyle k+1+p}
. This means that any knot interval
{\displaystyle [t_{k},t_{k+1})}
which is actually used must have at least
{\displaystyle p}
additional knots before and after. In a computer program, this is typically achieved by repeating the first and last used knot location
{\displaystyle p}
times. For example, for
{\displaystyle p=3}
and real knot locations
{\displaystyle (0,1,2)}
, one would pad the knot vector to
{\displaystyle (0,0,0,0,1,2,2,2,2)}
.
== The algorithm ==
With these definitions, we can now describe de Boor's algorithm. The algorithm does not compute the B-spline functions
{\displaystyle B_{i,p}(x)}
directly. Instead it evaluates
{\displaystyle \mathbf {S} (x)}
through an equivalent recursion formula.
Let
{\displaystyle \mathbf {d} _{i,r}}
be new control points with
{\displaystyle \mathbf {d} _{i,0}:=\mathbf {c} _{i}}
for
{\displaystyle i=k-p,\dots ,k}
. For
{\displaystyle r=1,\dots ,p}
the following recursion is applied:
{\displaystyle \mathbf {d} _{i,r}=(1-\alpha _{i,r})\mathbf {d} _{i-1,r-1}+\alpha _{i,r}\mathbf {d} _{i,r-1};\quad i=k-p+r,\dots ,k}
{\displaystyle \alpha _{i,r}={\frac {x-t_{i}}{t_{i+1+p-r}-t_{i}}}.}
Once the iterations are complete, we have
{\displaystyle \mathbf {S} (x)=\mathbf {d} _{k,p}}
, meaning that
{\displaystyle \mathbf {d} _{k,p}}
is the desired result.
De Boor's algorithm is more efficient than an explicit calculation of B-splines
{\displaystyle B_{i,p}(x)}
with the Cox-de Boor recursion formula, because it does not compute terms which are guaranteed to be multiplied by zero.
== Optimizations ==
The algorithm above is not optimized for the implementation in a computer. It requires memory for
{\displaystyle (p+1)+p+\dots +1=(p+1)(p+2)/2}
temporary control points
{\displaystyle \mathbf {d} _{i,r}}
. Each temporary control point is written exactly once and read twice. By reversing the iteration over
{\displaystyle i}
(counting down instead of up), we can run the algorithm with memory for only
{\displaystyle p+1}
temporary control points, by letting
{\displaystyle \mathbf {d} _{i,r}}
reuse the memory for
{\displaystyle \mathbf {d} _{i,r-1}}
. Similarly, there is only one value of
{\displaystyle \alpha }
used in each step, so we can reuse the memory as well.
Furthermore, it is more convenient to use a zero-based index
{\displaystyle j=0,\dots ,p}
for the temporary control points. The relation to the previous index is
{\displaystyle i=j+k-p}
. Thus we obtain the improved algorithm:
Let
{\displaystyle \mathbf {d} _{j}:=\mathbf {c} _{j+k-p}}
for
{\displaystyle j=0,\dots ,p}
. Iterate for
{\displaystyle r=1,\dots ,p}
:
{\displaystyle \mathbf {d} _{j}:=(1-\alpha _{j})\mathbf {d} _{j-1}+\alpha _{j}\mathbf {d} _{j};\quad j=p,\dots ,r\quad }
{\displaystyle \alpha _{j}:={\frac {x-t_{j+k-p}}{t_{j+1+k-r}-t_{j+k-p}}}.}
Note that j must be counted down. After the iterations are complete, the result is
{\displaystyle \mathbf {S} (x)=\mathbf {d} _{p}}
.
== Example implementation ==
The following code in the Python programming language is a naive implementation of the optimized algorithm.
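The Python listing itself did not survive in this extract. The following sketch follows the optimized recurrence described above; the function name and argument order are illustrative and not necessarily those of the original listing.

```python
def de_boor(k, x, t, c, p):
    """Evaluate the spline S(x) at position x.

    k: index of the knot interval containing x, i.e. t[k] <= x < t[k+1]
    x: evaluation position
    t: padded knot vector
    c: control points (scalars here; vector-valued points work analogously)
    p: degree of the B-spline
    """
    # d[j] holds the temporary control point with zero-based index j.
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):  # j must be counted down
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]
```

With the Bézier-like knot vector t = [0, 0, 0, 1, 1, 1], p = 2 and k = 2, this reduces to quadratic Bézier evaluation on [0, 1).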
== See also ==
De Casteljau's algorithm
Bézier curve
Non-uniform rational B-spline
== External links ==
De Boor's Algorithm
The DeBoor-Cox Calculation
== Computer code ==
PPPACK: contains many spline algorithms in Fortran
GNU Scientific Library: C-library, contains a sub-library for splines ported from PPPACK
SciPy: Python-library, contains a sub-library scipy.interpolate with spline functions based on FITPACK
TinySpline: C-library for splines with a C++ wrapper and bindings for C#, Java, Lua, PHP, Python, and Ruby
Einspline: C-library for splines in 1, 2, and 3 dimensions with Fortran wrappers
== References ==
Works cited
Carl de Boor (2003). A Practical Guide to Splines, Revised Edition. Springer-Verlag. ISBN 0-387-95366-3. | Wikipedia/De_Boor's_algorithm |
In the mathematical field of numerical analysis, De Casteljau's algorithm is a recursive method to evaluate polynomials in Bernstein form or Bézier curves, named after its inventor Paul de Casteljau. De Casteljau's algorithm can also be used to split a single Bézier curve into two Bézier curves at an arbitrary parameter value.
The algorithm is numerically stable when compared to direct evaluation of polynomials. The computational complexity of this algorithm is
{\displaystyle O(dn^{2})}
, where d is the number of dimensions, and n is the number of control points. There exist faster alternatives.
== Definition ==
A Bézier curve
{\displaystyle B}
(of degree
{\displaystyle n}
, with control points
{\displaystyle \beta _{0},\ldots ,\beta _{n}}
) can be written in Bernstein form as follows
{\displaystyle B(t)=\sum _{i=0}^{n}\beta _{i}b_{i,n}(t),}
where
{\displaystyle b}
is a Bernstein basis polynomial
{\displaystyle b_{i,n}(t)={n \choose i}(1-t)^{n-i}t^{i}.}
The curve at point
{\displaystyle t_{0}}
can be evaluated with the recurrence relation
{\displaystyle {\begin{aligned}\beta _{i}^{(0)}&:=\beta _{i},&&i=0,\ldots ,n\\\beta _{i}^{(j)}&:=\beta _{i}^{(j-1)}(1-t_{0})+\beta _{i+1}^{(j-1)}t_{0},&&i=0,\ldots ,n-j,\ \ j=1,\ldots ,n\end{aligned}}}
Then,
{\displaystyle B}
at point
{\displaystyle t_{0}}
can be evaluated in
{\textstyle {\binom {n}{2}}}
operations. The result
{\displaystyle B(t_{0})}
is given by
{\displaystyle B(t_{0})=\beta _{0}^{(n)}.}
Moreover, the Bézier curve
{\displaystyle B}
can be split at point
{\displaystyle t_{0}}
into two curves with respective control points:
{\displaystyle {\begin{aligned}&\beta _{0}^{(0)},\beta _{0}^{(1)},\ldots ,\beta _{0}^{(n)}\\[1ex]&\beta _{0}^{(n)},\beta _{1}^{(n-1)},\ldots ,\beta _{n}^{(0)}\end{aligned}}}
=== Geometric interpretation ===
The geometric interpretation of De Casteljau's algorithm is straightforward.
Consider a Bézier curve with control points
{\displaystyle P_{0},\dots ,P_{n}}
. Connecting the consecutive points we create the control polygon of the curve.
Subdivide now each line segment of this polygon with the ratio
{\displaystyle t:(1-t)}
and connect the points you get. This way you arrive at the new polygon having one fewer segment.
Repeat the process until you arrive at the single point – this is the point of the curve corresponding to the parameter
{\displaystyle t}
.
The following picture shows this process for a cubic Bézier curve:
Note that the intermediate points that were constructed are in fact the control points for two new Bézier curves, both exactly coincident with the old one. This algorithm not only evaluates the curve at
{\displaystyle t}
, but splits the curve into two pieces at
{\displaystyle t}
, and provides the equations of the two sub-curves in Bézier form.
The interpretation given above is valid for a nonrational Bézier curve. To evaluate a rational Bézier curve in
{\displaystyle \mathbf {R} ^{n}}
, we may project the point into
{\displaystyle \mathbf {R} ^{n+1}}
; for example, a curve in three dimensions may have its control points
{\displaystyle \{(x_{i},y_{i},z_{i})\}}
and weights
{\displaystyle \{w_{i}\}}
projected to the weighted control points
{\displaystyle \{(w_{i}x_{i},w_{i}y_{i},w_{i}z_{i},w_{i})\}}
. The algorithm then proceeds as usual, interpolating in
{\displaystyle \mathbf {R} ^{4}}
. The resulting four-dimensional points may be projected back into three-space with a perspective divide.
In general, operations on a rational curve (or surface) are equivalent to operations on a nonrational curve in a projective space. This representation as the "weighted control points" and weights is often convenient when evaluating rational curves.
=== Notation ===
When doing the calculation by hand it is useful to write down the coefficients in a triangle scheme as
{\displaystyle {\begin{matrix}\beta _{0}&=\beta _{0}^{(0)}&&&\\&&\beta _{0}^{(1)}&&\\\beta _{1}&=\beta _{1}^{(0)}&&&\\&&&\ddots &\\\vdots &&\vdots &&\beta _{0}^{(n)}\\&&&&\\\beta _{n-1}&=\beta _{n-1}^{(0)}&&&\\&&\beta _{n-1}^{(1)}&&\\\beta _{n}&=\beta _{n}^{(0)}&&&\\\end{matrix}}}
When choosing a point t0 to evaluate a Bernstein polynomial we can use the two diagonals of the triangle scheme to construct a division of the polynomial
{\displaystyle B(t)=\sum _{i=0}^{n}\beta _{i}^{(0)}b_{i,n}(t),\quad t\in [0,1]}
into
{\displaystyle B_{1}(t)=\sum _{i=0}^{n}\beta _{0}^{(i)}b_{i,n}\left({\frac {t}{t_{0}}}\right)\!,\quad t\in [0,t_{0}]}
and
{\displaystyle B_{2}(t)=\sum _{i=0}^{n}\beta _{i}^{(n-i)}b_{i,n}\left({\frac {t-t_{0}}{1-t_{0}}}\right)\!,\quad t\in [t_{0},1].}
== Bézier curve ==
When evaluating a Bézier curve of degree n in 3-dimensional space with n + 1 control points Pi
{\displaystyle \mathbf {B} (t)=\sum _{i=0}^{n}\mathbf {P} _{i}b_{i,n}(t),\ t\in [0,1]}
with
{\displaystyle \mathbf {P} _{i}:={\begin{pmatrix}x_{i}\\y_{i}\\z_{i}\end{pmatrix}},}
we split the Bézier curve into three separate equations
{\displaystyle {\begin{aligned}B_{1}(t)&=\sum _{i=0}^{n}x_{i}b_{i,n}(t),&t\in [0,1]\\[1ex]B_{2}(t)&=\sum _{i=0}^{n}y_{i}b_{i,n}(t),&t\in [0,1]\\[1ex]B_{3}(t)&=\sum _{i=0}^{n}z_{i}b_{i,n}(t),&t\in [0,1]\end{aligned}}}
which we evaluate individually using De Casteljau's algorithm.
== Example ==
We want to evaluate the Bernstein polynomial of degree 2 with the Bernstein coefficients
{\displaystyle {\begin{aligned}\beta _{0}^{(0)}&=\beta _{0}\\[1ex]\beta _{1}^{(0)}&=\beta _{1}\\[1ex]\beta _{2}^{(0)}&=\beta _{2}\end{aligned}}}
at the point t0.
We start the recursion with
{\displaystyle {\begin{aligned}\beta _{0}^{(1)}&&=&&\beta _{0}^{(0)}(1-t_{0})+\beta _{1}^{(0)}t_{0}&&=&&\beta _{0}(1-t_{0})+\beta _{1}t_{0}\\[1ex]\beta _{1}^{(1)}&&=&&\beta _{1}^{(0)}(1-t_{0})+\beta _{2}^{(0)}t_{0}&&=&&\beta _{1}(1-t_{0})+\beta _{2}t_{0}\end{aligned}}}
and with the second iteration the recursion stops with
{\displaystyle {\begin{aligned}\beta _{0}^{(2)}&=\beta _{0}^{(1)}(1-t_{0})+\beta _{1}^{(1)}t_{0}\\\ &=\beta _{0}(1-t_{0})(1-t_{0})+\beta _{1}t_{0}(1-t_{0})+\beta _{1}(1-t_{0})t_{0}+\beta _{2}t_{0}t_{0}\\\ &=\beta _{0}(1-t_{0})^{2}+\beta _{1}2t_{0}(1-t_{0})+\beta _{2}t_{0}^{2}\end{aligned}}}
which is the expected Bernstein polynomial of degree 2.
== Implementations ==
Here are example implementations of De Casteljau's algorithm in various programming languages.
=== Haskell ===
=== Python ===
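The Python listing is missing from this extract. A minimal sketch of the recurrence from the Definition section, for scalar Bernstein coefficients; the function name is illustrative.

```python
def de_casteljau(beta, t):
    """Evaluate a polynomial in Bernstein form with coefficients beta at t."""
    beta = list(beta)  # work on a copy; beta[i] starts as beta_i^(0)
    n = len(beta) - 1
    for j in range(1, n + 1):
        # After this pass, beta[i] holds beta_i^(j).
        for i in range(n - j + 1):
            beta[i] = beta[i] * (1 - t) + beta[i + 1] * t
    return beta[0]  # beta_0^(n) = B(t)
```

For vector-valued control points the same loop applies componentwise.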
=== Java ===
=== Code Example in JavaScript ===
The following JavaScript function applies De Casteljau's algorithm to an array of control points, or "poles" as De Casteljau originally named them, reducing them step by step until reaching a point on the curve for a given t, where t is 0 at the first point of the curve and 1 at the last one.
For example,
var poles = [ [0, 128], [128, 0], [256, 0], [384, 128] ]
crlPtReduceDeCasteljau (poles, .5)
returns the array
[ [ [0, 128], [128, 0], [256, 0], [384, 128 ] ],
[ [64, 64], [192, 0], [320, 64] ],
[ [128, 32], [256, 32]],
[ [192, 32]],
]
which yields the points and segments plotted below:
== See also ==
Bézier curves
De Boor's algorithm
Horner scheme to evaluate polynomials in monomial form
Clenshaw algorithm to evaluate polynomials in Chebyshev form
== References ==
Farin, Gerald E.; Hansford, Dianne (2000). The Essentials of CAGD. Natick, MA: A.K. Peters. ISBN 978-1-56881-123-9.
== External links ==
Piecewise linear approximation of Bézier curves – description of De Casteljau's algorithm, including a criterion to determine when to stop the recursion
Bézier Curves and Picasso — Description and illustration of De Casteljau's algorithm applied to cubic Bézier curves.
de Casteljau's algorithm - Implementation help and interactive demonstration of the algorithm. | Wikipedia/De_Casteljau's_algorithm |
In numerical analysis, the Clenshaw algorithm, also called Clenshaw summation, is a recursive method to evaluate a linear combination of Chebyshev polynomials. The method was published by Charles William Clenshaw in 1955. It is a generalization of Horner's method for evaluating a linear combination of monomials.
It generalizes to more than just Chebyshev polynomials; it applies to any class of functions that can be defined by a three-term recurrence relation.
== Clenshaw algorithm ==
In full generality, the Clenshaw algorithm computes the weighted sum of a finite series of functions
{\displaystyle \phi _{k}(x)}
:
{\displaystyle S(x)=\sum _{k=0}^{n}a_{k}\phi _{k}(x)}
where
{\displaystyle \phi _{k},\;k=0,1,\ldots }
is a sequence of functions that satisfy the linear recurrence relation
{\displaystyle \phi _{k+1}(x)=\alpha _{k}(x)\,\phi _{k}(x)+\beta _{k}(x)\,\phi _{k-1}(x),}
where the coefficients
{\displaystyle \alpha _{k}(x)}
and
{\displaystyle \beta _{k}(x)}
are known in advance.
The algorithm is most useful when
{\displaystyle \phi _{k}(x)}
are functions that are complicated to compute directly, but
{\displaystyle \alpha _{k}(x)}
and
{\displaystyle \beta _{k}(x)}
are particularly simple. In the most common applications,
{\displaystyle \alpha (x)}
does not depend on
{\displaystyle k}
, and
{\displaystyle \beta }
is a constant that depends on neither
{\displaystyle x}
nor
{\displaystyle k}
.
To perform the summation for given series of coefficients
{\displaystyle a_{0},\ldots ,a_{n}}
, compute the values
{\displaystyle b_{k}(x)}
by the "reverse" recurrence formula:
{\displaystyle {\begin{aligned}b_{n+1}(x)&=b_{n+2}(x)=0,\\b_{k}(x)&=a_{k}+\alpha _{k}(x)\,b_{k+1}(x)+\beta _{k+1}(x)\,b_{k+2}(x).\end{aligned}}}
Note that this computation makes no direct reference to the functions
{\displaystyle \phi _{k}(x)}
. After computing
{\displaystyle b_{2}(x)}
and
{\displaystyle b_{1}(x)}
, the desired sum can be expressed in terms of them and the simplest functions
{\displaystyle \phi _{0}(x)}
and
{\displaystyle \phi _{1}(x)}
:
{\displaystyle S(x)=\phi _{0}(x)\,a_{0}+\phi _{1}(x)\,b_{1}(x)+\beta _{1}(x)\,\phi _{0}(x)\,b_{2}(x).}
See Fox and Parker for more information and stability analyses.
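The reverse recurrence above can be sketched generically. The callables `alpha` and `beta` and the function handles `phi0` and `phi1` are illustrative names, not from the source.

```python
def clenshaw(a, alpha, beta, phi0, phi1, x):
    """Evaluate S(x) = sum_k a[k] * phi_k(x) via the reverse recurrence.

    alpha(k, x) and beta(k, x) are the recurrence coefficients;
    phi0 and phi1 evaluate the first two basis functions.
    """
    n = len(a) - 1
    b1 = b2 = 0.0  # b_{n+1} and b_{n+2} are zero
    for k in range(n, 0, -1):
        b1, b2 = a[k] + alpha(k, x) * b1 + beta(k + 1, x) * b2, b1
    # S(x) = phi0 * a0 + phi1 * b1 + beta_1 * phi0 * b2
    return phi0(x) * a[0] + phi1(x) * b1 + beta(1, x) * phi0(x) * b2
```

Plugging in alpha(k, x) = x, beta = 0, phi0 = 1, phi1 = x recovers Horner's method, as the next section shows.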
== Examples ==
=== Horner as a special case of Clenshaw ===
A particularly simple case occurs when evaluating a polynomial of the form
{\displaystyle S(x)=\sum _{k=0}^{n}a_{k}x^{k}.}
The functions are simply
{\displaystyle {\begin{aligned}\phi _{0}(x)&=1,\\\phi _{k}(x)&=x^{k}=x\phi _{k-1}(x)\end{aligned}}}
and are produced by the recurrence coefficients
{\displaystyle \alpha (x)=x}
and
{\displaystyle \beta =0}
.
In this case, the recurrence formula to compute the sum is
{\displaystyle b_{k}(x)=a_{k}+xb_{k+1}(x)}
and, in this case, the sum is simply
{\displaystyle S(x)=a_{0}+xb_{1}(x)=b_{0}(x),}
which is exactly the usual Horner's method.
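As a sketch, this special case reduces to the familiar Horner loop (illustrative Python; the function name is an assumption):

```python
def horner(x, a):
    """Evaluate a[0] + a[1]*x + ... + a[n]*x**n by the b_k = a_k + x*b_{k+1} recurrence."""
    b = 0.0                      # b_{n+1}(x) = 0
    for a_k in reversed(a):
        b = a_k + x * b          # b_k(x) = a_k + x * b_{k+1}(x)
    return b                     # b_0(x) = S(x)
```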
=== Special case for Chebyshev series ===
Consider a truncated Chebyshev series
{\displaystyle p_{n}(x)=a_{0}+a_{1}T_{1}(x)+a_{2}T_{2}(x)+\cdots +a_{n}T_{n}(x).}
The coefficients in the recursion relation for the Chebyshev polynomials are
{\displaystyle \alpha (x)=2x,\quad \beta =-1,}
with the initial conditions
{\displaystyle T_{0}(x)=1,\quad T_{1}(x)=x.}
Thus, the recurrence is
{\displaystyle b_{k}(x)=a_{k}+2xb_{k+1}(x)-b_{k+2}(x)}
and the final results are
{\displaystyle b_{0}(x)=a_{0}+2xb_{1}(x)-b_{2}(x),}
{\displaystyle p_{n}(x)={\tfrac {1}{2}}\left[a_{0}+b_{0}(x)-b_{2}(x)\right].}
An equivalent expression for the sum is given by
{\displaystyle p_{n}(x)=a_{0}+xb_{1}(x)-b_{2}(x).}
=== Meridian arc length on the ellipsoid ===
Clenshaw summation is extensively used in geodetic applications. A simple application is summing the trigonometric series to compute the meridian arc distance on the surface of an ellipsoid. These have the form
{\displaystyle m(\theta )=C_{0}\,\theta +C_{1}\sin \theta +C_{2}\sin 2\theta +\cdots +C_{n}\sin n\theta .}
Leaving off the initial C_0 θ term, the remainder is a summation of the appropriate form. There is no leading term because ϕ_0(θ) = sin 0θ = sin 0 = 0.
The recurrence relation for sin kθ is
{\displaystyle \sin(k+1)\theta =2\cos \theta \sin k\theta -\sin(k-1)\theta ,}
making the coefficients in the recursion relation
{\displaystyle \alpha _{k}(\theta )=2\cos \theta ,\quad \beta _{k}=-1.}
and the evaluation of the series is given by
{\displaystyle {\begin{aligned}b_{n+1}(\theta )&=b_{n+2}(\theta )=0,\\b_{k}(\theta )&=C_{k}+2\cos \theta \,b_{k+1}(\theta )-b_{k+2}(\theta ),\quad \mathrm {for\ } n\geq k\geq 1.\end{aligned}}}
The final step is made particularly simple because ϕ_0(θ) = sin 0 = 0, so the end of the recurrence is simply b_1(θ) sin θ; the C_0 θ term is added separately:
{\displaystyle m(\theta )=C_{0}\,\theta +b_{1}(\theta )\sin \theta .}
Note that the algorithm requires only the evaluation of two trigonometric quantities, cos θ and sin θ.
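A short Python sketch of this series evaluation (illustrative function name; C[0..n] are the coefficients as in the formula above). Only one cosine and one sine are evaluated:

```python
import math

def sine_series(theta, C):
    """Evaluate m(theta) = C[0]*theta + sum_{k=1}^{n} C[k]*sin(k*theta)
    by Clenshaw summation, using only cos(theta) and sin(theta)."""
    c = math.cos(theta)
    b_k1 = b_k2 = 0.0                         # b_{n+1} = b_{n+2} = 0
    for C_k in C[:0:-1]:                      # k = n, ..., 1
        b_k1, b_k2 = C_k + 2 * c * b_k1 - b_k2, b_k1
    return C[0] * theta + b_k1 * math.sin(theta)
```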
=== Difference in meridian arc lengths ===
Sometimes it is necessary to compute the difference of two meridian arcs in a way that maintains high relative accuracy. This is accomplished by using trigonometric identities to write
{\displaystyle m(\theta _{1})-m(\theta _{2})=C_{0}(\theta _{1}-\theta _{2})+\sum _{k=1}^{n}2C_{k}\sin {\bigl (}{\textstyle {\frac {1}{2}}}k(\theta _{1}-\theta _{2}){\bigr )}\cos {\bigl (}{\textstyle {\frac {1}{2}}}k(\theta _{1}+\theta _{2}){\bigr )}.}
Clenshaw summation can be applied in this case, provided we simultaneously compute m(θ_1) + m(θ_2) and perform a matrix summation,
{\displaystyle {\mathsf {M}}(\theta _{1},\theta _{2})={\begin{bmatrix}(m(\theta _{1})+m(\theta _{2}))/2\\(m(\theta _{1})-m(\theta _{2}))/(\theta _{1}-\theta _{2})\end{bmatrix}}=C_{0}{\begin{bmatrix}\mu \\1\end{bmatrix}}+\sum _{k=1}^{n}C_{k}{\mathsf {F}}_{k}(\theta _{1},\theta _{2}),}
where
{\displaystyle {\begin{aligned}\delta &={\tfrac {1}{2}}(\theta _{1}-\theta _{2}),\\[1ex]\mu &={\tfrac {1}{2}}(\theta _{1}+\theta _{2}),\\[1ex]{\mathsf {F}}_{k}(\theta _{1},\theta _{2})&={\begin{bmatrix}\cos k\delta \sin k\mu \\{\dfrac {\sin k\delta }{\delta }}\cos k\mu \end{bmatrix}}.\end{aligned}}}
The first element of M(θ_1, θ_2) is the average value of m and the second element is the average slope.
F_k(θ_1, θ_2) satisfies the recurrence relation
{\displaystyle {\mathsf {F}}_{k+1}(\theta _{1},\theta _{2})={\mathsf {A}}(\theta _{1},\theta _{2}){\mathsf {F}}_{k}(\theta _{1},\theta _{2})-{\mathsf {F}}_{k-1}(\theta _{1},\theta _{2}),}
where
{\displaystyle {\mathsf {A}}(\theta _{1},\theta _{2})=2{\begin{bmatrix}\cos \delta \cos \mu &-\delta \sin \delta \sin \mu \\-\displaystyle {\frac {\sin \delta }{\delta }}\sin \mu &\cos \delta \cos \mu \end{bmatrix}}}
takes the place of α in the recurrence relation, and β = −1.
The standard Clenshaw algorithm can now be applied to yield
{\displaystyle {\begin{aligned}{\mathsf {B}}_{n+1}&={\mathsf {B}}_{n+2}={\mathsf {0}},\\[1ex]{\mathsf {B}}_{k}&=C_{k}{\mathsf {I}}+{\mathsf {A}}{\mathsf {B}}_{k+1}-{\mathsf {B}}_{k+2},\qquad \mathrm {for\ } n\geq k\geq 1,\\[1ex]{\mathsf {M}}(\theta _{1},\theta _{2})&=C_{0}{\begin{bmatrix}\mu \\1\end{bmatrix}}+{\mathsf {B}}_{1}{\mathsf {F}}_{1}(\theta _{1},\theta _{2}),\end{aligned}}}
where B_k are 2×2 matrices. Finally we have
{\displaystyle {\frac {m(\theta _{1})-m(\theta _{2})}{\theta _{1}-\theta _{2}}}={\mathsf {M}}_{2}(\theta _{1},\theta _{2}).}
This technique can be used in the limit θ_2 = θ_1 = μ and δ = 0 to simultaneously compute m(μ) and the derivative dm(μ)/dμ, provided that, in evaluating F_1 and A, we take lim_{δ→0} (sin kδ)/δ = k.
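The matrix recurrence can be sketched as follows (illustrative Python with NumPy; the function name is an assumption). It returns the pair [(m(θ_1)+m(θ_2))/2, (m(θ_1)−m(θ_2))/(θ_1−θ_2)] for θ_1 ≠ θ_2:

```python
import numpy as np

def arc_mean_and_slope(theta1, theta2, C):
    """Evaluate M(theta1, theta2) for m(theta) = C[0]*theta + sum_k C[k]*sin(k*theta)
    by the matrix Clenshaw recurrence.  Requires theta1 != theta2."""
    d = 0.5 * (theta1 - theta2)          # delta
    mu = 0.5 * (theta1 + theta2)
    A = 2 * np.array([[np.cos(d) * np.cos(mu), -d * np.sin(d) * np.sin(mu)],
                      [-np.sin(d) / d * np.sin(mu), np.cos(d) * np.cos(mu)]])
    F1 = np.array([np.cos(d) * np.sin(mu), np.sin(d) / d * np.cos(mu)])
    B_k1 = B_k2 = np.zeros((2, 2))       # B_{n+1} = B_{n+2} = 0
    for C_k in C[:0:-1]:                 # k = n, ..., 1
        B_k1, B_k2 = C_k * np.eye(2) + A @ B_k1 - B_k2, B_k1
    return np.array([C[0] * mu, C[0]]) + B_k1 @ F1   # C_0*[mu, 1] + B_1 F_1
```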
== See also ==
Horner scheme to evaluate polynomials in monomial form
De Casteljau's algorithm to evaluate polynomials in Bézier form
== References == | Wikipedia/Clenshaw_algorithm |
In mathematics, Lill's method is a visual method of finding the real roots of a univariate polynomial of any degree. It was developed by Austrian engineer Eduard Lill in 1867. A later paper by Lill dealt with the problem of complex roots.
Lill's method involves drawing a path of straight line segments making right angles, with lengths equal to the coefficients of the polynomial. The roots of the polynomial can then be found as the slopes of other right-angle paths, also connecting the start to the terminus, but with vertices on the lines of the first path.
== Description of the method ==
To employ the method, a diagram is drawn starting at the origin. A line segment is drawn rightwards by the magnitude of the leading coefficient, so that with a negative coefficient, the segment will end left of the origin. From the end of the first segment, another segment is drawn upwards by the magnitude of the second coefficient, then left by the magnitude of the third, then down by the magnitude of the fourth, and so on. The sequence of directions (not turns) is always rightward, upward, leftward, downward, then repeating itself. Thus, each turn is counterclockwise. The process continues for every coefficient of the polynomial, including zeros, with negative coefficients "walking backwards." The final point reached, at the end of the segment corresponding to the equation's constant term, is the terminus.
A line is then launched from the origin at some angle θ, reflected off of each line segment at a right angle (not necessarily the "natural" angle of reflection), and refracted at a right angle through the line through each segment (including a line for the zero coefficients) when the angled path does not hit the line segment on that line. The vertical and horizontal lines are reflected off or refracted through in the following sequence: the line containing the segment corresponding to the coefficient of xn−1, then of xn−2 etc. Choosing θ so that the path lands on the terminus, −tan(θ) is a root of this polynomial. For every real zero of the polynomial, there will be one unique initial angle and path that will land on the terminus. A quadratic with two real roots, for example, will have exactly two angles that satisfy the above conditions.
For complex roots, one must also find a series of similar triangles, but with the vertices of the root path displaced from the polynomial path by a distance equal to the imaginary part of the root. In this case, the root path will not be rectangular.
=== Explanation ===
The construction in effect evaluates the polynomial according to Horner's method. For the polynomial
a_n x^n + a_{n−1} x^{n−1} + a_{n−2} x^{n−2} + ⋯, the values of a_n x, (a_n x + a_{n−1})x, ((a_n x + a_{n−1})x + a_{n−2})x, ... are successively generated as distances between the vertices of the polynomial and root paths. For a root of the polynomial, the final value is zero, so the last vertex coincides with the polynomial path terminus.
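The same Horner recurrence can be checked numerically. The following Python sketch (illustrative name; one common arrangement of the recurrence) returns the successive partial values, the last of which is the polynomial evaluated at x, and so vanishes when x is a root:

```python
def horner_values(coeffs, x):
    """Given coeffs = [a_n, ..., a_1, a_0], return the successive Horner
    values; the last one is the polynomial evaluated at x."""
    values, acc = [], coeffs[0]
    for a in coeffs[1:]:
        acc = acc * x + a        # multiply by x, then add the next coefficient
        values.append(acc)
    return values
```

For 3x² + 5x − 2 (roots 1/3 and −2), the last value is zero at each root, matching the last vertex landing on the terminus.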
=== Additional properties ===
A solution line giving a root is similar to Lill's construction for the polynomial with that root removed, because the visual construction is analogous to synthetic division of the polynomial by the monic linear factor corresponding to that root (Ruffini's rule).
From the symmetry of the diagram, it can easily be seen that the roots of the reversed polynomial are the reciprocals of the original roots.
The construction can also be done using clockwise turns instead of counterclockwise turns. When a path is interpreted using the other convention, it corresponds to the mirrored polynomial (every odd coefficient's sign is changed), and the roots are negated.
When the right-angle path is traversed in the other direction but with the same direction convention, it corresponds to the reversed mirrored polynomial, and the roots are the negative reciprocals of the original roots.
=== Finding quadratic roots using Thales's theorem ===
Lill's method can be used with Thales's theorem to find the real roots of a quadratic polynomial.
In this example with 3x2 + 5x − 2, the polynomial's line segments are first drawn in black, as above. A circle is drawn with the straight line segment joining the start and end points forming a diameter.
According to Thales's theorem, the triangle containing these points and any other point on the circle is a right triangle. The intersections of this circle with the middle segment of Lill's method, extended if needed, thus define the two angled paths in Lill's method, colored blue and red.
The negatives of the gradients m of their first segments yield the real roots 1/3 and −2.
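This construction can be reproduced numerically. In the following illustrative Python sketch (function name assumed), the Lill path for ax² + bx + c runs from the origin to (a, 0) to (a, b) to the terminus (a − c, b), and each height y at which the circle meets the line through the middle segment gives a root −y/a:

```python
import math

def quadratic_roots_thales(a, b, c):
    """Real roots of a*x^2 + b*x + c via Lill's path and Thales's circle.
    The circle on the start-terminus diameter meets the line x = a at
    heights y; each intersection gives the root -y/a."""
    cx, cy = (a - c) / 2, b / 2              # circle centre (midpoint of diameter)
    r2 = cx * cx + cy * cy                   # squared radius (origin lies on circle)
    disc = r2 - (a - cx) ** 2                # (y - cy)^2 = disc on the line x = a
    if disc < 0:
        return []                            # circle misses the line: no real roots
    s = math.sqrt(disc)
    return [-(cy + s) / a, -(cy - s) / a]
```

With a = 3, b = 5, c = −2 this recovers the roots −2 and 1/3 of the worked example.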
=== Finding roots using paper folding ===
In 1936, Margherita Piazzola Beloch showed how Lill's method could be adapted to solve cubic equations using paper folding. If simultaneous folds are allowed, then any nth-degree equation with a real root can be solved using n − 2 simultaneous folds.
In this example with 3x3 + 2x2 − 7x + 2, the polynomial's line segments are first drawn on a sheet of paper (black). Lines passing through reflections of the start and end points in the second and third segments, respectively (faint circle and square), and parallel to them (grey lines), are drawn.
For each root, the paper is folded until the start point (black circle) and end point (black square) are reflected onto these lines. The axis of reflection (dash-dot line) defines the angled path corresponding to the root (blue, purple, and red). The negatives of the gradients m of their first segments yield the real roots 1/3, 1, and −2.
== See also ==
Carlyle circle, which is based on a slightly modified version of Lill's method for a normed quadratic.
== References ==
== External links ==
Animation for Lill's Method
Mathologer video: "Solving equations by shooting turtles with lasers" | Wikipedia/Lill's_method |
A microcontroller (MC, uC, or μC) or microcontroller unit (MCU) is a small computer on a single integrated circuit. A microcontroller contains one or more CPUs (processor cores) along with memory and programmable input/output peripherals. Program memory in the form of NOR flash, OTP ROM, or ferroelectric RAM is also often included on the chip, as well as a small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general-purpose applications consisting of various discrete chips.
In modern terminology, a microcontroller is similar to, but less sophisticated than, a system on a chip (SoC). A SoC may include a microcontroller as one of its components but usually integrates it with advanced peripherals like a graphics processing unit (GPU), a Wi-Fi module, or one or more coprocessors.
Microcontrollers are used in automatically controlled products and devices, such as automobile engine control systems, implantable medical devices, remote controls, office machines, appliances, power tools, toys, and other embedded systems. By reducing the size and cost compared to a design that uses a separate microprocessor, memory, and input/output devices, microcontrollers make digital control of more devices and processes practical. Mixed-signal microcontrollers are common, integrating analog components needed to control non-digital electronic systems. In the context of the Internet of Things, microcontrollers are an economical and popular means of data collection, sensing and actuating the physical world as edge devices.
Some microcontrollers may use four-bit words and operate at frequencies as low as 4 kHz for low power consumption (single-digit milliwatts or microwatts). They generally have the ability to retain functionality while waiting for an event such as a button press or other interrupt; power consumption while sleeping (CPU clock and most peripherals off) may be just nanowatts, making many of them well suited for long lasting battery applications. Other microcontrollers may serve performance-critical roles, where they may need to act more like a digital signal processor (DSP), with higher clock speeds and power consumption.
== History ==
=== Background ===
The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first single-chip microprocessor was the Intel 4004, released on a single MOS LSI chip in 1971. It was developed by Federico Faggin, using his silicon-gate MOS technology, along with Intel engineers Marcian Hoff and Stan Mazor, and Busicom engineer Masatoshi Shima. It was followed by the 4-bit Intel 4040, the 8-bit Intel 8008, and the 8-bit Intel 8080. All of these processors required several external chips to implement a working system, including memory and peripheral interface chips. As a result, the total system cost was several hundred (1970s US) dollars, making it impossible to economically computerize small appliances.
MOS Technology introduced its sub-$100 microprocessors, the 6501 and 6502, in 1975. Their chief aim was to reduce this cost barrier, but these microprocessors still required external support, memory, and peripheral chips which kept the total system cost in the hundreds of dollars.
=== Development ===
One book credits TI engineers Gary Boone and Michael Cochran with the successful creation of the first microcontroller in 1971. The result of their work was the TMS 1000, which became commercially available in 1974. It combined read-only memory, read/write memory, processor and clock on one chip and was targeted at embedded systems.
During the early-to-mid-1970s, Japanese electronics manufacturers began producing microcontrollers for automobiles, including 4-bit MCUs for in-car entertainment, automatic wipers, electronic locks, and dashboards, and 8-bit MCUs for engine control.
Partly in response to the existence of the single-chip TMS 1000, Intel developed a computer system on a chip optimized for control applications, the Intel 8048, with commercial parts first shipping in 1977. It combined RAM and ROM on the same chip with a microprocessor. Among numerous applications, this chip would eventually find its way into over one billion PC keyboards. At that time Intel's President, Luke J. Valenter, stated that the microcontroller was one of the most successful products in the company's history, and he expanded the microcontroller division's budget by over 25%.
Most microcontrollers at this time came in two variants. One had EPROM program memory, with a transparent quartz window in the lid of the package to allow it to be erased by exposure to ultraviolet light. These erasable chips were often used for prototyping. The other variant was either a mask-programmed ROM or a PROM variant which was only programmable once; the latter was sometimes designated OTP, standing for "one-time programmable". In an OTP microcontroller, the PROM was usually of identical type to the EPROM, but the chip package had no quartz window; because there was no way to expose the EPROM to ultraviolet light, it could not be erased. Because the erasable versions required ceramic packages with quartz windows, they were significantly more expensive than the OTP versions, which could be made in lower-cost opaque plastic packages. For the erasable variants, quartz was required, instead of less expensive glass, for its transparency to ultraviolet light, to which glass is largely opaque, but the main cost differentiator was the ceramic package itself. Piggyback microcontrollers were also used.
In 1993, the introduction of EEPROM memory allowed microcontrollers (beginning with the Microchip PIC16C84) to be electrically erased quickly without an expensive package as required for EPROM, allowing both rapid prototyping, and in-system programming. (EEPROM technology had been available prior to this time, but the earlier EEPROM was more expensive and less durable, making it unsuitable for low-cost mass-produced microcontrollers.) The same year, Atmel introduced the first microcontroller using Flash memory, a special type of EEPROM. Other companies rapidly followed suit, with both memory types.
Nowadays microcontrollers are cheap and readily available for hobbyists, with large online communities around certain processors.
=== Volume and cost ===
In 2002, about 55% of all CPUs sold in the world were 8-bit microcontrollers and microprocessors.
Over two billion 8-bit microcontrollers were sold in 1997, and according to Semico, over four billion 8-bit microcontrollers were sold in 2006. More recently, Semico has claimed the MCU market grew 36.5% in 2010 and 12% in 2011.
A typical home in a developed country is likely to have only four general-purpose microprocessors but around three dozen microcontrollers. A typical mid-range automobile has about 30 microcontrollers. They can also be found in many electrical devices such as washing machines, microwave ovens, and telephones.
Historically, the 8-bit segment has dominated the MCU market [..] 16-bit microcontrollers became the largest volume MCU category in 2011, overtaking 8-bit devices for the first time that year [..] IC Insights believes the makeup of the MCU market will undergo substantial changes in the next five years with 32-bit devices steadily grabbing a greater share of sales and unit volumes. By 2017, 32-bit MCUs are expected to account for 55% of microcontroller sales [..] In terms of unit volumes, 32-bit MCUs are expected account for 38% of microcontroller shipments in 2017, while 16-bit devices will represent 34% of the total, and 4-/8-bit designs are forecast to be 28% of units sold that year.
The 32-bit MCU market is expected to grow rapidly due to increasing demand for higher levels of precision in embedded-processing systems and the growth in connectivity using the Internet. [..] In the next few years, complex 32-bit MCUs are expected to account for over 25% of the processing power in vehicles.
Cost to manufacture can be under US$0.10 per unit.
Cost has plummeted over time, with the cheapest 8-bit microcontrollers being available for under US$0.03 in 2018, and some 32-bit microcontrollers around US$1 for similar quantities.
In 2012, following a global crisis, with a worst-ever annual sales decline and recovery and the average sales price plunging 17% year-over-year (the biggest reduction since the 1980s), the average price for a microcontroller was US$0.88 (US$0.69 for 4-/8-bit, US$0.59 for 16-bit, US$1.76 for 32-bit).
In 2012, worldwide sales of 8-bit microcontrollers were around US$4 billion, while 4-bit microcontrollers also saw significant sales.
In 2015, 8-bit microcontrollers could be bought for US$0.311 (1,000 units), 16-bit for US$0.385 (1,000 units), and 32-bit for US$0.378 (1,000 units, but at US$0.35 for 5,000).
In 2018, 8-bit microcontrollers could be bought for US$0.03, 16-bit for US$0.393 (1,000 units, but at US$0.563 for 100 or US$0.349 for full reel of 2,000), and 32-bit for US$0.503 (1,000 units, but at US$0.466 for 5,000).
In 2018, the low-priced microcontrollers above from 2015 were all more expensive (with inflation calculated between 2018 and 2015 prices for those specific units) at: the 8-bit microcontroller could be bought for US$0.319 (1,000 units) or 2.6% higher, the 16-bit one for US$0.464 (1,000 units) or 21% higher, and the 32-bit one for US$0.503 (1,000 units, but at US$0.466 for 5,000) or 33% higher.
=== Smallest computer ===
On 21 June 2018, the "world's smallest computer" was announced by the University of Michigan. The device is a "0.04 mm3 16 nW wireless and batteryless sensor system with integrated Cortex-M0+ processor and optical communication for cellular temperature measurement." It "measures just 0.3 mm to a side—dwarfed by a grain of rice. [...] In addition to the RAM and photovoltaics, the new computing devices have processors and wireless transmitters and receivers. Because they are too small to have conventional radio antennae, they receive and transmit data with visible light. A base station provides light for power and programming, and it receives the data." The device is 1⁄10th the size of IBM's previously claimed world-record-sized computer, announced in March 2018, which is "smaller than a grain of salt", has a million transistors, costs less than $0.10 to manufacture, and, combined with blockchain technology, is intended for logistics and "crypto-anchors"—digital fingerprint applications.
== Embedded design ==
A microcontroller can be considered a self-contained system with a processor, memory and peripherals and can be used as an embedded system. The majority of microcontrollers in use today are embedded in other machinery, such as automobiles, telephones, appliances, and peripherals for computer systems.
While some embedded systems are very sophisticated, many have minimal requirements for memory and program length, with no operating system, and low software complexity. Typical input and output devices include switches, relays, solenoids, LEDs, small or custom liquid-crystal displays, radio frequency devices, and sensors for data such as temperature, humidity, and light level. Embedded systems usually have no keyboard, screen, disks, printers, or other recognizable I/O devices of a personal computer, and may lack human interaction devices of any kind.
=== Interrupts ===
Microcontrollers must provide real-time (predictable, though not necessarily fast) response to events in the embedded system they are controlling. When certain events occur, an interrupt system can signal the processor to suspend processing the current instruction sequence and to begin an interrupt service routine (ISR, or "interrupt handler") which will perform any processing required based on the source of the interrupt, before returning to the original instruction sequence. Possible interrupt sources are device-dependent and often include events such as an internal timer overflow, completing an analog-to-digital conversion, a logic-level change on an input such as from a button being pressed, and data received on a communication link. Where power consumption is important as in battery devices, interrupts may also wake a microcontroller from a low-power sleep state where the processor is halted until required to do something by a peripheral event.
=== Programs ===
Typically microcontroller programs must fit in the available on-chip memory, since it would be costly to provide a system with external, expandable memory. Compilers and assemblers are used to convert both high-level and assembly language code into a compact machine code for storage in the microcontroller's memory. Depending on the device, the program memory may be permanent, read-only memory that can only be programmed at the factory, or it may be field-alterable flash or erasable read-only memory.
Manufacturers have often produced special versions of their microcontrollers in order to help the hardware and software development of the target system. Originally these included EPROM versions that have a "window" on the top of the device through which program memory can be erased by ultraviolet light, ready for reprogramming after a programming ("burn") and test cycle. Since 1998, EPROM versions are rare and have been replaced by EEPROM and flash, which are easier to use (can be erased electronically) and cheaper to manufacture.
Other versions may be available where the ROM is accessed as an external device rather than as internal memory, however these are becoming rare due to the widespread availability of cheap microcontroller programmers.
The use of field-programmable devices on a microcontroller may allow field update of the firmware or permit late factory revisions to products that have been assembled but not yet shipped. Programmable memory also reduces the lead time required for deployment of a new product.
Where hundreds of thousands of identical devices are required, using parts programmed at the time of manufacture can be economical. These "mask-programmed" parts have the program laid down in the same way as the logic of the chip, at the same time.
A customized microcontroller incorporates a block of digital logic that can be personalized for additional processing capability, peripherals and interfaces that are adapted to the requirements of the application. One example is the AT91CAP from Atmel.
=== Other microcontroller features ===
Microcontrollers usually contain from several to dozens of general purpose input/output pins (GPIO). GPIO pins are software configurable to either an input or an output state. When GPIO pins are configured to an input state, they are often used to read sensors or external signals. Configured to the output state, GPIO pins can drive external devices such as LEDs or motors, often indirectly, through external power electronics.
Many embedded systems need to read sensors that produce analog signals. However, because microcontrollers are built to interpret and process digital data, i.e. 1s and 0s, they cannot directly use the analog signals produced by such devices. An analog-to-digital converter (ADC) is therefore used to convert the incoming data into a form that the processor can recognize. A less common feature on some microcontrollers is a digital-to-analog converter (DAC) that allows the processor to output analog signals or voltage levels.
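As a sketch of the quantization step only, an idealized n-bit ADC maps an input voltage in [0, Vref] to one of 2^n codes (illustrative Python model, not any particular device's register interface; the reference voltage of 3.3 V and 10-bit resolution are assumptions):

```python
def adc_read(v_in, v_ref=3.3, bits=10):
    """Ideal n-bit ADC: map a voltage in [0, v_ref] to a code in [0, 2**bits - 1]."""
    code = int(v_in / v_ref * (2 ** bits))
    return max(0, min(code, 2 ** bits - 1))   # clamp to the representable range
```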
In addition to the converters, many embedded microprocessors include a variety of timers as well. One of the most common types of timers is the programmable interval timer (PIT). A PIT may either count down from some value to zero, or up to the capacity of the count register, overflowing to zero. Once it reaches zero, it sends an interrupt to the processor indicating that it has finished counting. This is useful for devices such as thermostats, which periodically test the temperature around them to see if they need to turn the air conditioner on/off, the heater on/off, etc.
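The countdown-and-interrupt behaviour described above can be modelled in a few lines. This is a software sketch of a PIT in periodic (auto-reload) mode, not real hardware register access; the names are illustrative:

```python
class IntervalTimer:
    """Software model of a countdown programmable interval timer (PIT):
    each tick decrements the counter; on reaching zero it fires the
    interrupt callback and reloads, mimicking periodic-interrupt mode."""

    def __init__(self, reload_value, on_interrupt):
        self.reload_value = reload_value
        self.count = reload_value
        self.on_interrupt = on_interrupt

    def tick(self):
        self.count -= 1
        if self.count == 0:
            self.on_interrupt()             # e.g. "check the thermostat"
            self.count = self.reload_value  # periodic mode: auto-reload
```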
A dedicated pulse-width modulation (PWM) block makes it possible for the CPU to control power converters, resistive loads, motors, etc., without using many CPU resources in tight timer loops.
A universal asynchronous receiver/transmitter (UART) block makes it possible to receive and transmit data over a serial line with very little load on the CPU. Dedicated on-chip hardware also often includes capabilities to communicate with other devices (chips) in digital formats such as Inter-Integrated Circuit (I²C), Serial Peripheral Interface (SPI), Universal Serial Bus (USB), and Ethernet.
== Higher integration ==
Microcontrollers may not implement an external address or data bus as they integrate RAM and non-volatile memory on the same chip as the CPU. Using fewer pins, the chip can be placed in a much smaller, cheaper package.
Integrating the memory and other peripherals on a single chip and testing them as a unit increases the cost of that chip, but often results in decreased net cost of the embedded system as a whole. Even if the cost of a CPU that has integrated peripherals is slightly more than the cost of a CPU and external peripherals, having fewer chips typically allows a smaller and cheaper circuit board, and reduces the labor required to assemble and test the circuit board, in addition to tending to decrease the defect rate for the finished assembly.
A microcontroller is a single integrated circuit, commonly with the following features:
central processing unit – ranging from small and simple 4-bit processors to complex 32-bit or 64-bit processors
volatile memory (RAM) for data storage
ROM, EPROM, EEPROM or Flash memory for program and operating parameter storage
discrete input and output bits, allowing control or detection of the logic state of an individual package pin
serial input/output such as serial ports (UARTs)
other serial communications interfaces like I²C, Serial Peripheral Interface and Controller Area Network for system interconnect
peripherals such as timers, event counters, PWM generators, and watchdog
clock generator – often an oscillator for a quartz timing crystal, resonator or RC circuit
many include analog-to-digital converters, some include digital-to-analog converters
in-circuit programming and in-circuit debugging support
This integration drastically reduces the number of chips and the amount of wiring and circuit board space that would be needed to produce equivalent systems using separate chips. Furthermore, on low pin count devices in particular, each pin may interface to several internal peripherals, with the pin function selected by software. This allows a part to be used in a wider variety of applications than if pins had dedicated functions.
Microcontrollers have proved to be highly popular in embedded systems since their introduction in the 1970s.
Some microcontrollers use a Harvard architecture: separate memory buses for instructions and data, allowing accesses to take place concurrently. Where a Harvard architecture is used, instruction words for the processor may be a different bit size than the length of internal memory and registers; for example: 12-bit instructions used with 8-bit data registers.
The decision of which peripherals to integrate is often difficult. Microcontroller vendors often trade operating frequencies and system design flexibility against time-to-market requirements from their customers and overall lower system cost. Manufacturers have to balance the need to minimize the chip size against additional functionality.
Microcontroller architectures vary widely. Some designs include general-purpose microprocessor cores, with one or more ROM, RAM, or I/O functions integrated onto the package. Other designs are purpose-built for control applications. A microcontroller instruction set usually has many instructions intended for bit manipulation (bit-wise operations) to make control programs more compact. For example, a general-purpose processor might require several instructions to test a bit in a register and branch if the bit is set, where a microcontroller could have a single instruction to provide that commonly required function.
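The bit-test-and-branch pattern described above has no single-instruction equivalent in a high-level language, but the mask-and-shift logic such an instruction performs can be sketched in plain Python. The register value and bit position below are invented for illustration; on a real microcontroller this whole pattern may compile to one bit-test instruction.

```python
# Sketch: test-and-branch on a single bit of a (hypothetical) status register.
# On many microcontrollers this entire pattern is a single machine instruction.

STATUS_REG = 0b0100_1010   # stand-in for a memory-mapped status register
READY_BIT = 3              # hypothetical bit position

def bit_is_set(reg: int, bit: int) -> bool:
    return (reg >> bit) & 1 == 1

def set_bit(reg: int, bit: int) -> int:
    return reg | (1 << bit)

def clear_bit(reg: int, bit: int) -> int:
    return reg & ~(1 << bit)

if bit_is_set(STATUS_REG, READY_BIT):
    print("ready")          # branch taken when bit 3 is 1
```

A general-purpose processor would typically load the register, apply the mask, compare, and branch as separate instructions; a bit-oriented microcontroller instruction set collapses these steps.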
Microcontrollers historically have not had math coprocessors, so floating-point arithmetic has been performed by software. However, some recent designs do include FPUs and DSP-optimized features. An example would be Microchip's PIC32 MIPS-based line.
== Programming environments ==
Microcontrollers were originally programmed only in assembly language, but various high-level programming languages, such as C, Python and JavaScript, are now also in common use to target microcontrollers and embedded systems. Compilers for general-purpose languages will typically have some restrictions as well as enhancements to better support the unique characteristics of microcontrollers. Some microcontrollers have environments to aid developing certain types of applications. Microcontroller vendors often make tools freely available to make it easier to adopt their hardware.
Microcontrollers with specialty hardware may require their own non-standard dialects of C, such as SDCC for the 8051, which can prevent using standard tools (such as code libraries or static analysis tools) even for code unrelated to hardware features. Interpreters may also contain nonstandard features, as in MicroPython, although a fork, CircuitPython, has sought to move hardware dependencies into libraries and keep the language closer to standard CPython.
Interpreter firmware is also available for some microcontrollers. For example, BASIC on the early microcontroller Intel 8052; BASIC and FORTH on the Zilog Z8 as well as some modern devices. Typically these interpreters support interactive programming.
Simulators are available for some microcontrollers. These allow a developer to analyze what the behavior of the microcontroller and their program would be if they were using the actual part. A simulator shows the internal processor state and also that of the outputs, and allows input signals to be generated. While most simulators are limited by being unable to simulate much other hardware in a system, they can exercise conditions that may otherwise be hard to reproduce at will in the physical implementation, and can be the quickest way to debug and analyze problems.
Recent microcontrollers are often integrated with on-chip debug circuitry that, when accessed by an in-circuit emulator (ICE) via JTAG, allows debugging of the firmware with a debugger. A real-time ICE may allow viewing and/or manipulating of internal states while running. A tracing ICE can record executed program and MCU states before/after a trigger point.
== Types ==
As of 2008, there are several dozen microcontroller architectures and vendors including:
ARM core processors (many vendors)
ARM Cortex-M cores are specifically targeted toward microcontroller applications
Microchip Technology Atmel AVR (8-bit), AVR32 (32-bit), and AT91SAM (32-bit)
Cypress Semiconductor's M8C core used in their Cypress PSoC
Freescale ColdFire (32-bit) and S08 (8-bit)
Freescale 68HC11 (8-bit), and others based on the Motorola 6800 family
Intel 8051, also manufactured by NXP Semiconductors, Infineon and many others
Infineon: 8-bit XC800, 16-bit XE166, 32-bit XMC4000 (ARM Cortex-M4F based), 32-bit TriCore, and 32-bit AURIX TriCore microcontrollers
Maxim Integrated MAX32600, MAX32620, MAX32625, MAX32630, MAX32650, MAX32640
MIPS
Microchip Technology PIC, (8-bit PIC16, PIC18, 16-bit dsPIC33 / PIC24), (32-bit PIC32)
NXP Semiconductors LPC1000, LPC2000, LPC3000, LPC4000 (32-bit), LPC900, LPC700 (8-bit)
Parallax Propeller
PowerPC ISE
Rabbit 2000 (8-bit)
Renesas Electronics: RL78 16-bit MCU; RX 32-bit MCU; SuperH; V850 32-bit MCU; H8; R8C 16-bit MCU
Silicon Laboratories Pipelined 8-bit 8051 microcontrollers and mixed-signal ARM-based 32-bit microcontrollers
STMicroelectronics STM8 (8-bit), ST10 (16-bit), STM32 (32-bit), SPC5 (automotive 32-bit)
Texas Instruments TI MSP430 (16-bit), MSP432 (32-bit), C2000 (32-bit)
Toshiba TLCS-870 (8-bit/16-bit)
Many others exist, some of which are used in a very narrow range of applications or are more like applications processors than microcontrollers. The microcontroller market is extremely fragmented, with numerous vendors, technologies, and markets. Note that many vendors sell or have sold multiple architectures.
== Interrupt latency ==
In contrast to general-purpose computers, microcontrollers used in embedded systems often seek to optimize interrupt latency over instruction throughput. Issues include both reducing the latency and making it more predictable (to support real-time control).
When an electronic device causes an interrupt, during the context switch the intermediate results (registers) have to be saved before the software responsible for handling the interrupt can run. They must also be restored after that interrupt handler is finished. If there are more processor registers, this saving and restoring process may take more time, increasing the latency. (If an ISR does not require the use of some registers, it may simply leave them alone rather than saving and restoring them, so in that case those registers are not involved with the latency.) Ways to reduce such context/restore latency include having relatively few registers in their central processing units (undesirable because it slows down most non-interrupt processing substantially), or at least having the hardware not save them all (this fails if the software then needs to compensate by saving the rest "manually"). Another technique involves spending silicon gates on "shadow registers": One or more duplicate registers used only by the interrupt software, perhaps supporting a dedicated stack.
Other factors affecting interrupt latency include:
Cycles needed to complete current CPU activities. To minimize those costs, microcontrollers tend to have short pipelines (often three instructions or less), small write buffers, and ensure that longer instructions are continuable or restartable. RISC design principles ensure that most instructions take the same number of cycles, helping avoid the need for most such continuation/restart logic.
The length of any critical section that needs to be interrupted. Entry to a critical section restricts concurrent data structure access. When a data structure must be accessed by an interrupt handler, the critical section must block that interrupt. Accordingly, interrupt latency is increased by however long that interrupt is blocked. When there are hard external constraints on system latency, developers often need tools to measure interrupt latencies and track down which critical sections cause slowdowns.
One common technique just blocks all interrupts for the duration of the critical section. This is easy to implement, but sometimes critical sections get uncomfortably long.
A more complex technique just blocks the interrupts that may trigger access to that data structure. This is often based on interrupt priorities, which tend to not correspond well to the relevant system data structures. Accordingly, this technique is used mostly in very constrained environments.
Processors may have hardware support for some critical sections. Examples include supporting atomic access to bits or bytes within a word, or other atomic access primitives like the LDREX/STREX exclusive access primitives introduced in the ARMv6 architecture.
Interrupt nesting. Some microcontrollers allow higher priority interrupts to interrupt lower priority ones. This allows software to manage latency by giving time-critical interrupts higher priority (and thus lower and more predictable latency) than less-critical ones.
Trigger rate. When interrupts occur back-to-back, microcontrollers may avoid an extra context save/restore cycle by a form of tail call optimization.
Lower end microcontrollers tend to support fewer interrupt latency controls than higher end ones.
== Memory technology ==
Two different kinds of memory are commonly used with microcontrollers, a non-volatile memory for storing firmware and a read–write memory for temporary data.
=== Data ===
From the earliest microcontrollers to today, six-transistor SRAM is almost always used as the read/write working memory, with a few more transistors per bit used in the register file.
In addition to the SRAM, some microcontrollers also have internal EEPROM and/or NVRAM for data storage; and ones that do not have any (such as the BASIC Stamp), or where the internal memory is insufficient, are often connected to an external EEPROM or flash memory chip.
Beginning in 2003, a few microcontrollers have offered "self-programmable" flash memory.
=== Firmware ===
The earliest microcontrollers used mask ROM to store firmware. Later microcontrollers (such as the early versions of the Freescale 68HC11 and early PIC microcontrollers) had EPROM memory, which used a translucent window to allow erasure via UV light, while production versions had no such window, being OTP (one-time-programmable). Firmware updates were equivalent to replacing the microcontroller itself, thus many products were not upgradeable.
The Motorola MC68HC805 was the first microcontroller to use EEPROM to store the firmware. EEPROM microcontrollers became more popular in 1993 when Microchip introduced the PIC16C84 and Atmel introduced an 8051-core microcontroller that was the first to use NOR flash memory to store the firmware. Today's microcontrollers almost all use flash memory, with a few models using FRAM and some ultra-low-cost parts still using OTP or mask ROM.
== See also ==
Microprocessor
System on a chip
List of common microcontrollers
List of Wi-Fi microcontrollers
List of open-source hardware projects
Microbotics
Programmable logic controller
Single-board microcontroller
== References ==
== External links == | Wikipedia/Microcontroller |
The felicific calculus is an algorithm formulated by utilitarian philosopher Jeremy Bentham (1748–1832) for calculating the degree or amount of pleasure that a specific action is likely to induce. Bentham, an ethical hedonist, believed the moral rightness or wrongness of an action to be a function of the amount of pleasure or pain that it produced. The felicific calculus could in principle, at least, determine the moral status of any considered act. The algorithm is also known as the utility calculus, the hedonistic calculus and the hedonic calculus.
To be included in this calculation are several variables (or vectors), which Bentham called "circumstances". These are:
Intensity: How strong is the pleasure?
Duration: How long will the pleasure last?
Certainty or uncertainty: How likely or unlikely is it that the pleasure will occur?
Propinquity or remoteness: How soon will the pleasure occur?
Fecundity: The probability that the action will be followed by sensations of the same kind.
Purity: The probability that it will not be followed by sensations of the opposite kind.
Extent: How many people will be affected?
== Bentham's instructions ==
To take an exact account of the general tendency of any act, by which the interests of a community are affected, proceed as follows. Begin with any one person of those whose interests seem most immediately to be affected by it: and take an account,
Of the value of each distinguishable pleasure which appears to be produced by it in the first instance.
Of the value of each pain which appears to be produced by it in the first instance.
Of the value of each pleasure which appears to be produced by it after the first. This constitutes the fecundity of the first pleasure and the impurity of the first pain.
Of the value of each pain which appears to be produced by it after the first. This constitutes the fecundity of the first pain, and the impurity of the first pleasure.
Sum up all the values of all the pleasures on the one side, and those of all the pains on the other. The balance, if it be on the side of pleasure, will give the good tendency of the act upon the whole, with respect to the interests of that individual person; if on the side of pain, the bad tendency of it upon the whole.
Take an account of the number of persons whose interests appear to be concerned; and repeat the above process with respect to each. Sum up the numbers expressive of the degrees of good tendency, which the act has, with respect to each individual, in regard to whom the tendency of it is good upon the whole. Do this again with respect to each individual, in regard to whom the tendency of it is bad upon the whole. Take the balance which if on the side of pleasure, will give the general good tendency of the act, with respect to the total number or community of individuals concerned; if on the side of pain, the general evil tendency, with respect to the same community.
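Bentham's procedure above is, in effect, an algorithm, and its summation steps can be rendered as a short program. Bentham gives no units, so all numeric values below are invented purely for illustration: positive numbers stand for pleasures, negative numbers for pains.

```python
# Sketch of Bentham's summation procedure. All values are hypothetical:
# positive values are pleasures, negative values are pains.

def personal_tendency(first_instance, after_first):
    """Steps 1-5: balance of pleasures and pains for one person."""
    return sum(first_instance) + sum(after_first)

def general_tendency(community):
    """Step 6: repeat for each person concerned and take the overall balance."""
    return sum(personal_tendency(f, a) for f, a in community)

community = [
    ([4, -1], [2]),    # person 1: values in the first instance, then after it
    ([-3], [1, -2]),   # person 2
]
balance = general_tendency(community)
print("good tendency" if balance > 0 else "bad tendency", balance)
```

The sign of the final balance corresponds to Bentham's "good tendency" or "bad tendency" of the act with respect to the community.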
To make his proposal easier to remember, Bentham devised what he called a "mnemonic doggerel" (also referred to as "memoriter verses"), which synthesized "the whole fabric of morals and legislation":
Intense, long, certain, speedy, fruitful, pure—
Such marks in pleasures and in pains endure.
Such pleasures seek if private be thy end:
If it be public, wide let them extend
Such pains avoid, whichever be thy view:
If pains must come, let them extend to few.
== Jevons' economics ==
W. Stanley Jevons used the algebra of pleasure and pain in his science of utility applied to economics. He described utility with graphs where marginal utility continuously declines. His figure 9 on page 173 has two curves: one for the painfulness of labour and the other for utility of production. As the amount of product increases there is a point where a "balance of pain" is reached and labour ceases.
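Jevons's balance point can be illustrated numerically. The marginal-utility and marginal-pain functions below are invented, not Jevons's own figures; the point of the sketch is only that labour ceases where declining marginal utility meets rising marginal painfulness.

```python
# Illustrative only: hypothetical marginal curves, not Jevons's actual data.
def marginal_utility(q):      # declines as more is produced
    return 10.0 - q

def marginal_pain(q):         # painfulness of labour rises with output
    return 1.0 + 0.5 * q

# Labour continues while each extra unit yields more utility than pain.
q = 0
while marginal_utility(q + 1) > marginal_pain(q + 1):
    q += 1
print("labour ceases after", q, "units")   # the 'balance of pain'
```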
== Hedonimetry ==
Hedonimetry is the study of happiness ("experienced utility") as a measurable economic asset. The first major work in the field was an 1881 publication of Mathematical Psychics by the famous statistician and economist Francis Ysidro Edgeworth, who hypothesized a way of measuring happiness in units.
The concept of measuring hedonic utility arose in Utilitarianism, with Classical Utilitarians acknowledging that the actual pleasure might not be easy to express quantitatively as a numeric value. Bentham, an early proponent of the concept, held that happiness is a sequence of episodes, each characterized by its intensity and duration. This definition formally makes episodes permutable, as the total pleasure does not depend on their order. Since practical experience teaches otherwise (enjoyment from a meal does depend on the order of courses), followers of Bentham argued that the order of episodes changes their intensity.
=== Units ===
The units of measurements used in the felicific calculus may be termed hedons and dolors.
== See also ==
Act utilitarianism
Bellman equation
Epicurus
Ethical calculus
Reinforcement learning
Science of morality
Utilitarian social choice rule - a mathematical formula for felicific calculus.
== References ==
== Sources ==
Skyrms, Brian; Narens, Louis (2019). "Measuring the hedonimeter". Philosophical Studies. 176 (12): 3199–3210. doi:10.1007/s11098-018-1170-z. ISSN 0031-8116. | Wikipedia/Felicific_calculus |
A finite difference is a mathematical expression of the form f(x + b) − f(x + a). Finite differences (or the associated difference quotients) are often used as approximations of derivatives, such as in numerical differentiation.
The difference operator, commonly denoted Δ, is the operator that maps a function f to the function Δ[f] defined by
{\displaystyle \Delta [f](x)=f(x+1)-f(x).}
A difference equation is a functional equation that involves the finite difference operator in the same way as a differential equation involves derivatives. There are many similarities between difference equations and differential equations. Certain recurrence relations can be written as difference equations by replacing iteration notation with finite differences.
In numerical analysis, finite differences are widely used for approximating derivatives, and the term "finite difference" is often used as an abbreviation of "finite difference approximation of derivatives".
Finite differences were introduced by Brook Taylor in 1715 and have also been studied as abstract self-standing mathematical objects in works by George Boole (1860), L. M. Milne-Thomson (1933), and Károly Jordan (1939). Finite differences trace their origins back to one of Jost Bürgi's algorithms (c. 1592) and work by others including Isaac Newton. The formal calculus of finite differences can be viewed as an alternative to the calculus of infinitesimals.
== Basic types ==
Three basic types are commonly considered: forward, backward, and central finite differences.
A forward difference, denoted Δh[f], of a function f is a function defined as
{\displaystyle \Delta _{h}[f](x)=f(x+h)-f(x).}
Depending on the application, the spacing h may be variable or constant. When omitted, h is taken to be 1; that is,
{\displaystyle \Delta [f](x)=\Delta _{1}[f](x)=f(x+1)-f(x).}
A backward difference uses the function values at x and x − h, instead of the values at x + h and x:
{\displaystyle \nabla _{h}[f](x)=f(x)-f(x-h)=\Delta _{h}[f](x-h).}
Finally, the central difference is given by
{\displaystyle \delta _{h}[f](x)=f(x+{\tfrac {h}{2}})-f(x-{\tfrac {h}{2}})=\Delta _{h/2}[f](x)+\nabla _{h/2}[f](x).}
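The three operators can be checked numerically; a minimal sketch, using f(x) = x² as an arbitrary example:

```python
# Forward, backward, and central differences of f at x with spacing h.
def forward(f, x, h):
    return f(x + h) - f(x)

def backward(f, x, h):
    return f(x) - f(x - h)

def central(f, x, h):
    return f(x + h / 2) - f(x - h / 2)

f = lambda x: x**2
# For f(x) = x^2: forward = 2xh + h^2, backward = 2xh - h^2, central = 2xh.
print(forward(f, 3, 1))    # 7
print(backward(f, 3, 1))   # 5
print(central(f, 3, 1))    # 6.0
```

The identity δh[f](x) = Δh/2[f](x) + ∇h/2[f](x) also holds term by term: `central(f, 3, 1)` equals `forward(f, 3, 0.5) + backward(f, 3, 0.5)`.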
== Relation with derivatives ==
The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems.
The derivative of a function f at a point x is defined by the limit
{\displaystyle f'(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}.}
If h has a fixed (non-zero) value instead of approaching zero, then the right-hand side of the above equation would be written
{\displaystyle {\frac {f(x+h)-f(x)}{h}}={\frac {\Delta _{h}[f](x)}{h}}.}
Hence, the forward difference divided by h approximates the derivative when h is small. The error in this approximation can be derived from Taylor's theorem. Assuming that f is twice differentiable, we have
{\displaystyle {\frac {\Delta _{h}[f](x)}{h}}-f'(x)=O(h)\to 0\quad {\text{as }}h\to 0.}
The same formula holds for the backward difference:
{\displaystyle {\frac {\nabla _{h}[f](x)}{h}}-f'(x)=O(h)\to 0\quad {\text{as }}h\to 0.}
However, the central (also called centered) difference yields a more accurate approximation. If f is three times differentiable,
{\displaystyle {\frac {\delta _{h}[f](x)}{h}}-f'(x)=O\left(h^{2}\right).}
The main problem with the central difference method, however, is that oscillating functions can yield zero derivative. If f(nh) = 1 for n odd, and f(nh) = 2 for n even, then f′(nh) = 0 if it is calculated with the central difference scheme. This is particularly troublesome if the domain of f is discrete. See also Symmetric derivative.
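The error rates above can be observed directly: halving h roughly halves the forward-difference error but quarters the central-difference error. A minimal sketch with f = exp at x = 0 (where f′ = 1), writing the symmetric quotient (f(x + h) − f(x − h)) / 2h, which is the central difference over a spacing of 2h:

```python
import math

def fwd(f, x, h):
    return (f(x + h) - f(x)) / h

def ctr(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)   # symmetric difference quotient

x, exact = 0.0, 1.0                           # derivative of exp at 0
for h in (0.1, 0.05, 0.025):
    e_f = abs(fwd(math.exp, x, h) - exact)    # shrinks like h
    e_c = abs(ctr(math.exp, x, h) - exact)    # shrinks like h^2
    print(f"h={h:<6} forward err={e_f:.2e}  central err={e_c:.2e}")
```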
Authors for whom finite differences mean finite difference approximations define the forward/backward/central differences as the quotients given in this section (instead of employing the definitions given in the previous section).
== Higher-order differences ==
In an analogous way, one can obtain finite difference approximations to higher order derivatives and differential operators. For example, by using the above central difference formula for f′(x + h/2) and f′(x − h/2) and applying a central difference formula for the derivative of f′ at x, we obtain the central difference approximation of the second derivative of f:
Second-order central
{\displaystyle f''(x)\approx {\frac {\delta _{h}^{2}[f](x)}{h^{2}}}={\frac {{\frac {f(x+h)-f(x)}{h}}-{\frac {f(x)-f(x-h)}{h}}}{h}}={\frac {f(x+h)-2f(x)+f(x-h)}{h^{2}}}.}
Similarly we can apply other differencing formulas in a recursive manner.
Second-order forward
{\displaystyle f''(x)\approx {\frac {\Delta _{h}^{2}[f](x)}{h^{2}}}={\frac {{\frac {f(x+2h)-f(x+h)}{h}}-{\frac {f(x+h)-f(x)}{h}}}{h}}={\frac {f(x+2h)-2f(x+h)+f(x)}{h^{2}}}.}
Second-order backward
{\displaystyle f''(x)\approx {\frac {\nabla _{h}^{2}[f](x)}{h^{2}}}={\frac {{\frac {f(x)-f(x-h)}{h}}-{\frac {f(x-h)-f(x-2h)}{h}}}{h}}={\frac {f(x)-2f(x-h)+f(x-2h)}{h^{2}}}.}
More generally, the n-th order forward, backward, and central differences are given by, respectively,
Forward
{\displaystyle \Delta _{h}^{n}[f](x)=\sum _{i=0}^{n}(-1)^{n-i}{\binom {n}{i}}f{\bigl (}x+ih{\bigr )},}
Backward
{\displaystyle \nabla _{h}^{n}[f](x)=\sum _{i=0}^{n}(-1)^{i}{\binom {n}{i}}f(x-ih),}
Central
{\displaystyle \delta _{h}^{n}[f](x)=\sum _{i=0}^{n}(-1)^{i}{\binom {n}{i}}f\left(x+\left({\frac {n}{2}}-i\right)h\right).}
These equations use binomial coefficients after the summation sign, shown as {\displaystyle {\binom {n}{i}}}. Each row of Pascal's triangle provides the coefficients for each value of i.
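The binomial formula for the n-th forward difference can be checked against repeated application of the first difference; a minimal sketch:

```python
from math import comb

def delta1(f, h):
    """Return the function Δh[f]."""
    return lambda x: f(x + h) - f(x)

def delta_n_repeated(f, x, h, n):
    """Apply the first forward difference n times."""
    g = f
    for _ in range(n):
        g = delta1(g, h)
    return g(x)

def delta_n_binomial(f, x, h, n):
    """Closed form with binomial coefficients."""
    return sum((-1) ** (n - i) * comb(n, i) * f(x + i * h) for i in range(n + 1))

cube = lambda x: x ** 3
for n in range(5):
    assert delta_n_repeated(cube, 2, 1, n) == delta_n_binomial(cube, 2, 1, n)
print(delta_n_binomial(cube, 0, 1, 3))   # 6, i.e. 3! h^3 for x^3 with h = 1
```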
Note that the central difference will, for odd n, have h multiplied by non-integers. This is often a problem because it amounts to changing the interval of discretization. The problem may be remedied by substituting the average of
{\displaystyle \delta ^{n}[f](x-{\tfrac {h}{2}})}
and
{\displaystyle \delta ^{n}[f](x+{\tfrac {h}{2}}).}
Forward differences applied to a sequence are sometimes called the binomial transform of the sequence, and have a number of interesting combinatorial properties. Forward differences may be evaluated using the Nörlund–Rice integral. The integral representation for these types of series is interesting, because the integral can often be evaluated using asymptotic expansion or saddle-point techniques; by contrast, the forward difference series can be extremely hard to evaluate numerically, because the binomial coefficients grow rapidly for large n.
The relationship of these higher-order differences with the respective derivatives is straightforward,
{\displaystyle {\frac {d^{n}f}{dx^{n}}}(x)={\frac {\Delta _{h}^{n}[f](x)}{h^{n}}}+O(h)={\frac {\nabla _{h}^{n}[f](x)}{h^{n}}}+O(h)={\frac {\delta _{h}^{n}[f](x)}{h^{n}}}+O\left(h^{2}\right).}
Higher-order differences can also be used to construct better approximations. As mentioned above, the first-order difference approximates the first-order derivative up to a term of order h. However, the combination
{\displaystyle {\frac {\Delta _{h}[f](x)-{\frac {1}{2}}\Delta _{h}^{2}[f](x)}{h}}=-{\frac {f(x+2h)-4f(x+h)+3f(x)}{2h}}}
approximates f′(x) up to a term of order h2. This can be proven by expanding the above expression in Taylor series, or by using the calculus of finite differences, explained below.
If necessary, the finite difference can be centered about any point by mixing forward, backward, and central differences.
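The gain in accuracy from the combined formula can be seen numerically. The sketch below compares the plain forward quotient with the forward-only combination for f = exp at x = 0; halving h roughly halves the first error but quarters the second.

```python
import math

def d1_forward(f, x, h):          # O(h) accurate
    return (f(x + h) - f(x)) / h

def d1_combined(f, x, h):         # forward-only, O(h^2) accurate
    return -(f(x + 2 * h) - 4 * f(x + h) + 3 * f(x)) / (2 * h)

for h in (0.1, 0.05):
    e1 = abs(d1_forward(math.exp, 0.0, h) - 1.0)
    e2 = abs(d1_combined(math.exp, 0.0, h) - 1.0)
    print(f"h={h}: forward err={e1:.2e}, combined err={e2:.2e}")
```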
== Polynomials ==
For a given polynomial of degree n ≥ 1, expressed in the function P(x), with real numbers a ≠ 0 and b and lower order terms (if any) marked as l.o.t.:
{\displaystyle P(x)=ax^{n}+bx^{n-1}+l.o.t.}
After n pairwise differences, the following result can be achieved, where h ≠ 0 is a real number marking the arithmetic difference:
{\displaystyle \Delta _{h}^{n}[P](x)=ah^{n}n!}
Only the coefficient of the highest-order term remains. As this result is constant with respect to x, any further pairwise differences will have the value 0.
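The identity is easy to verify numerically; a minimal sketch with an assumed example cubic (a = 5, n = 3, h = 2):

```python
from math import comb, factorial

def delta_n(f, x, h, n):
    """n-th forward difference via the binomial formula."""
    return sum((-1) ** (n - i) * comb(n, i) * f(x + i * h) for i in range(n + 1))

P = lambda x: 5 * x**3 - 2 * x**2 + 7     # example polynomial: a = 5, degree n = 3
a, n, h = 5, 3, 2

for x in (0, 1, 10):                      # constant in x, as the identity claims
    assert delta_n(P, x, h, n) == a * h**n * factorial(n)
print(delta_n(P, 0, h, n))                # 240 = 5 * 2**3 * 3!
print(delta_n(P, 0, h, n + 1))            # 0: any further difference vanishes
```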
=== Inductive proof ===
==== Base case ====
Let Q(x) be a polynomial of degree 1:
{\displaystyle \Delta _{h}[Q](x)=Q(x+h)-Q(x)=[a(x+h)+b]-[ax+b]=ah=ah^{1}1!}
This proves it for the base case.
==== Inductive step ====
Let R(x) be a polynomial of degree m − 1 where m ≥ 2 and the coefficient of the highest-order term be a ≠ 0. Assuming the following holds true for all polynomials of degree m − 1:
{\displaystyle \Delta _{h}^{m-1}[R](x)=ah^{m-1}(m-1)!}
Let S(x) be a polynomial of degree m. With one pairwise difference:
{\displaystyle \Delta _{h}[S](x)=[a(x+h)^{m}+b(x+h)^{m-1}+{\text{l.o.t.}}]-[ax^{m}+bx^{m-1}+{\text{l.o.t.}}]=ahmx^{m-1}+{\text{l.o.t.}}=T(x)}
As ahm ≠ 0, this results in a polynomial T(x) of degree m − 1, with ahm as the coefficient of the highest-order term. Given the assumption above and m − 1 pairwise differences (resulting in a total of m pairwise differences for S(x)), it can be found that:
{\displaystyle \Delta _{h}^{m-1}[T](x)=ahm\cdot h^{m-1}(m-1)!=ah^{m}m!}
This completes the proof.
=== Application ===
This identity can be used to find the lowest-degree polynomial that intercepts a number of points (x, y) where the difference on the x-axis from one point to the next is a constant h ≠ 0. For example, given the following points:
We can use a differences table, where for all cells to the right of the first y, the following relation to the cells in the column immediately to the left exists for a cell (a + 1, b + 1), with the top-leftmost cell being at coordinate (0, 0):
{\displaystyle (a+1,b+1)=(a,b+1)-(a,b)}
To find the first term, the following table can be used:
This arrives at a constant 648. The arithmetic difference is h = 3, as established above. Given the number of pairwise differences needed to reach the constant, it can be surmised this is a polynomial of degree 3. Thus, using the identity above:
{\displaystyle 648=a\cdot 3^{3}\cdot 3!=a\cdot 27\cdot 6=a\cdot 162}
Solving for a, it can be found to have the value 4. Thus, the first term of the polynomial is 4x3.
Then, subtracting out the first term, which lowers the polynomial's degree, and finding the finite difference again:
Here, the constant is achieved after only two pairwise differences, thus the following result:
{\displaystyle -306=a\cdot 3^{2}\cdot 2!=a\cdot 18}
Solving for a, which is −17, the polynomial's second term is −17x2.
Moving on to the next term, by subtracting out the second term:
Thus the constant is achieved after only one pairwise difference:
{\displaystyle 108=a\cdot 3^{1}\cdot 1!=a\cdot 3}
It can be found that a = 36 and thus the third term of the polynomial is 36x. Subtracting out the third term:
Without any pairwise differences, it is found that the 4th and final term of the polynomial is the constant −19. Thus, the lowest-degree polynomial intercepting all the points in the first table is found:
{\displaystyle 4x^{3}-17x^{2}+36x-19}
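The whole procedure can be automated. The original data table is not reproduced here, so the sketch below regenerates sample points from the recovered polynomial at an assumed starting point x = 1 with spacing h = 3, then peels off one leading term per stage, reproducing the constants 648, −306, and 108 from the text.

```python
from math import factorial

def diff_table_constant(ys):
    """Take difference rows until constant; return (constant, number of differences)."""
    n = 0
    while len(set(ys)) > 1:
        ys = [b - a for a, b in zip(ys, ys[1:])]
        n += 1
    return ys[0], n

h = 3
poly = lambda x: 4 * x**3 - 17 * x**2 + 36 * x - 19
xs = [1 + h * i for i in range(5)]        # assumed sample x-values, spacing h = 3
ys = [poly(x) for x in xs]

coeffs = {}
while any(ys):                            # peel off the leading term repeatedly
    c, n = diff_table_constant(ys)
    a = c // (h**n * factorial(n))        # identity: constant = a * h^n * n!
    coeffs[n] = a
    ys = [y - a * x**n for x, y in zip(xs, ys)]
print(coeffs)    # {3: 4, 2: -17, 1: 36, 0: -19}
```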
== Arbitrarily sized kernels ==
Using linear algebra one can construct finite difference approximations which utilize an arbitrary number of points to the left and a (possibly different) number of points to the right of the evaluation point, for any order derivative. This involves solving a linear system such that the Taylor expansion of the sum of those points around the evaluation point best approximates the Taylor expansion of the desired derivative. Such formulas can be represented graphically on a hexagonal or diamond-shaped grid.
This is useful for differentiating a function on a grid, where, as one approaches the edge of the grid, one must sample fewer and fewer points on one side.
Finite difference approximations for non-standard (and even non-integer) stencils given an arbitrary stencil and a desired derivative order may be constructed.
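A sketch of that linear-system construction, using exact rational arithmetic: given stencil offsets sᵢ and a derivative order d, solve the moment conditions Σᵢ wᵢ sᵢʲ = d!·[j = d] for the weights wᵢ, so that f⁽ᵈ⁾(x) ≈ h⁻ᵈ Σᵢ wᵢ f(x + sᵢh). The function name and interface are invented for illustration.

```python
from fractions import Fraction
from math import factorial

def fd_weights(offsets, d):
    """Finite-difference weights for the d-th derivative on the given stencil.

    Solves sum_i w_i * s_i**j = d! * (j == d) for j = 0..len(offsets)-1,
    by Gauss-Jordan elimination over the rationals.
    """
    n = len(offsets)
    # Augmented matrix [V | rhs], with V[j][i] = s_i**j.
    M = [[Fraction(s) ** j for s in offsets] + [Fraction(factorial(d) if j == d else 0)]
         for j in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]               # partial pivoting
        M[col] = [v / M[col][col] for v in M[col]]    # normalize pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[-1] for row in M]

print(fd_weights([-1, 0, 1], 2))   # central 2nd derivative: weights 1, -2, 1
print(fd_weights([0, 1, 2], 1))    # one-sided 1st derivative: -3/2, 2, -1/2
```

The one-sided result reproduces the familiar forward formula f′(x) ≈ (−3f(x) + 4f(x + h) − f(x + 2h)) / 2h.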
=== Properties ===
For all positive k and n
{\displaystyle \Delta _{kh}^{n}(f,x)=\sum \limits _{i_{1}=0}^{k-1}\sum \limits _{i_{2}=0}^{k-1}\cdots \sum \limits _{i_{n}=0}^{k-1}\Delta _{h}^{n}\left(f,x+i_{1}h+i_{2}h+\cdots +i_{n}h\right).}
Leibniz rule:
{\displaystyle \Delta _{h}^{n}(fg,x)=\sum \limits _{k=0}^{n}{\binom {n}{k}}\Delta _{h}^{k}(f,x)\Delta _{h}^{n-k}(g,x+kh).}
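The Leibniz rule can be verified exactly on polynomials; here is a check for n = 2 with h = 1 at x = 0, with arbitrarily chosen test functions:

```python
from math import comb

def delta_n(f, x, h, n):
    """n-th forward difference (binomial form)."""
    return sum((-1) ** (n - i) * comb(n, i) * f(x + i * h) for i in range(n + 1))

f = lambda x: x ** 2          # arbitrary test functions
g = lambda x: x ** 3 + 1
n, h, x = 2, 1, 0

lhs = delta_n(lambda t: f(t) * g(t), x, h, n)
rhs = sum(comb(n, k) * delta_n(f, x, h, k) * delta_n(g, x + k * h, h, n - k)
          for k in range(n + 1))
print(lhs, rhs)               # both sides equal 32
```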
== In differential equations ==
An important application of finite differences is in numerical analysis, especially in numerical differential equations, which aim at the numerical solution of ordinary and partial differential equations. The idea is to replace the derivatives appearing in the differential equation by finite differences that approximate them. The resulting methods are called finite difference methods.
Common applications of the finite difference method are in computational science and engineering disciplines, such as thermal engineering, fluid mechanics, etc.
== Newton's series ==
The Newton series consists of the terms of the Newton forward difference equation, named after Isaac Newton; in essence, it is the Gregory–Newton interpolation formula (named after Isaac Newton and James Gregory), first published in his Principia Mathematica in 1687, namely the discrete analog of the continuous Taylor expansion,
{\displaystyle f(x)=\sum _{k=0}^{\infty }{\frac {\Delta ^{k}[f](a)}{k!}}\,(x-a)_{k}=\sum _{k=0}^{\infty }{\binom {x-a}{k}}\,\Delta ^{k}[f](a),}
which holds for any polynomial function f and for many (but not all) analytic functions. (It does not hold when f is of exponential type π. This is easily seen, as the sine function vanishes at integer multiples of π; the corresponding Newton series is identically zero, as all finite differences are zero in this case. Yet clearly, the sine function is not zero.) Here, the expression
{\displaystyle {\binom {x}{k}}={\frac {(x)_{k}}{k!}}}
is the binomial coefficient, and
{\displaystyle (x)_{k}=x(x-1)(x-2)\cdots (x-k+1)}
is the "falling factorial" or "lower factorial", while the empty product (x)0 is defined to be 1. In this particular case, there is an assumption of unit steps for the changes in the values of x, h = 1 of the generalization below.
Note the formal correspondence of this result to Taylor's theorem. Historically, this, as well as the Chu–Vandermonde identity,
{\displaystyle (x+y)_{n}=\sum _{k=0}^{n}{\binom {n}{k}}(x)_{n-k}\,(y)_{k},}
(following from it, and corresponding to the binomial theorem), are included in the observations that matured to the system of umbral calculus.
Newton series expansions can be superior to Taylor series expansions when applied to discrete quantities like quantum spins (see Holstein–Primakoff transformation), bosonic operator functions or discrete counting statistics.
To illustrate how one may use Newton's formula in actual practice, consider the first few terms of doubling the Fibonacci sequence f = 2, 2, 4, ... One can find a polynomial that reproduces these values, by first computing a difference table, and then substituting the differences that correspond to x0 (underlined) into the formula as follows,
{\displaystyle {\begin{matrix}{\begin{array}{|c||c|c|c|}\hline x&f=\Delta ^{0}&\Delta ^{1}&\Delta ^{2}\\\hline 1&{\underline {2}}&&\\&&{\underline {0}}&\\2&2&&{\underline {2}}\\&&2&\\3&4&&\\\hline \end{array}}&\quad {\begin{aligned}f(x)&=\Delta ^{0}\cdot 1+\Delta ^{1}\cdot {\dfrac {(x-x_{0})_{1}}{1!}}+\Delta ^{2}\cdot {\dfrac {(x-x_{0})_{2}}{2!}}\quad (x_{0}=1)\\\\&=2\cdot 1+0\cdot {\dfrac {x-1}{1}}+2\cdot {\dfrac {(x-1)(x-2)}{2}}\\\\&=2+(x-1)(x-2)\\\end{aligned}}\end{matrix}}}
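The difference-table procedure above can be sketched in code. A minimal implementation (assuming unit steps in x, as in the example) reproduces the tabulated values and extrapolates beyond them:

```python
from math import factorial

def newton_forward(xs, ys, x):
    """Newton forward-difference interpolation, assuming unit steps in xs."""
    n = len(ys)
    table = [list(ys)]                       # row k holds the k-th differences
    for _ in range(1, n):
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    result, falling = 0.0, 1.0               # falling accumulates (x - x0)_k
    for k in range(n):
        result += table[k][0] * falling / factorial(k)
        falling *= x - xs[0] - k
    return result

xs, ys = [1, 2, 3], [2, 2, 4]                # the worked example above
print([newton_forward(xs, ys, x) for x in xs])   # reproduces [2.0, 2.0, 4.0]
print(newton_forward(xs, ys, 4))                 # 2 + (4-1)(4-2) = 8.0
```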
For the case of nonuniform steps in the values of x, Newton computes the divided differences,
{\displaystyle \Delta _{j,0}=y_{j},\qquad \Delta _{j,k}={\frac {\Delta _{j+1,k-1}-\Delta _{j,k-1}}{x_{j+k}-x_{j}}}\quad \ni \quad \left\{k>0,\;j\leq \max \left(j\right)-k\right\},\qquad \Delta 0_{k}=\Delta _{0,k}}
the series of products,
{\displaystyle {P_{0}}=1,\quad \quad P_{k+1}=P_{k}\cdot \left(\xi -x_{k}\right),}
and the resulting polynomial is the scalar product,
{\displaystyle f(\xi )=\Delta 0\cdot P\left(\xi \right).}
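A compact way to compute these divided differences and the product expansion is the standard in-place scheme sketched below; the nodes and values are arbitrary illustrations with nonuniform spacing.

```python
def newton_divided(xs, ys, xi):
    """Evaluate the Newton polynomial built from divided differences Δ_{0,k}."""
    n = len(xs)
    coef = list(ys)                          # coef[k] becomes Δ_{0,k}
    for k in range(1, n):
        for j in range(n - 1, k - 1, -1):
            coef[j] = (coef[j] - coef[j - 1]) / (xs[j] - xs[j - k])
    result, P = 0.0, 1.0                     # P_0 = 1, P_{k+1} = P_k (ξ - x_k)
    for k in range(n):
        result += coef[k] * P
        P *= xi - xs[k]
    return result

xs, ys = [0.0, 1.0, 4.0], [1.0, 3.0, 9.0]    # nonuniform steps
ok = all(abs(newton_divided(xs, ys, x) - y) < 1e-12 for x, y in zip(xs, ys))
print(ok)  # the polynomial reproduces all nodes
```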
In analysis with p-adic numbers, Mahler's theorem states that the assumption that f is a polynomial function can be weakened all the way to the assumption that f is merely continuous.
Carlson's theorem provides necessary and sufficient conditions for a Newton series to be unique, if it exists. However, a Newton series does not, in general, exist.
The Newton series, together with the Stirling series and the Selberg series, is a special case of the general difference series, all of which are defined in terms of suitably scaled forward differences.
In a compressed and slightly more general form, with equidistant nodes, the formula reads
{\displaystyle f(x)=\sum _{k=0}{\binom {\frac {x-a}{h}}{k}}\sum _{j=0}^{k}(-1)^{k-j}{\binom {k}{j}}f(a+jh).}
== Calculus of finite differences ==
The forward difference can be considered as an operator, called the difference operator, which maps the function f to Δh[f]. This operator amounts to
{\displaystyle \Delta _{h}=\operatorname {T} _{h}-\operatorname {I} \ ,}
where Th is the shift operator with step h, defined by Th[f](x) = f(x + h), and I is the identity operator.
Finite differences of higher orders can be defined recursively as Δnh ≡ Δh(Δn − 1h); an equivalent definition is Δnh ≡ [Th − I]n.
The difference operator Δh is a linear operator; as such, it satisfies Δh[α f + β g](x) = α Δh[f](x) + β Δh[g](x).
It also satisfies a special Leibniz rule:
{\displaystyle \ \operatorname {\Delta } _{h}{\bigl (}\ f(x)\ g(x)\ {\bigr )}\ =\ {\bigl (}\ \operatorname {\Delta } _{h}f(x)\ {\bigr )}\ g(x+h)\ +\ f(x)\ {\bigl (}\ \operatorname {\Delta } _{h}g(x)\ {\bigr )}~.}
Similar Leibniz rules hold for the backward and central differences.
Formally applying the Taylor series with respect to h, yields the operator equation
{\displaystyle \operatorname {\Delta } _{h}=h\operatorname {D} +{\frac {1}{2!}}h^{2}\operatorname {D} ^{2}+{\frac {1}{3!}}h^{3}\operatorname {D} ^{3}+\cdots =e^{h\operatorname {D} }-\operatorname {I} \ ,}
where D denotes the conventional, continuous derivative operator, mapping f to its derivative f′. The expansion is valid when both sides act on analytic functions, for sufficiently small h; in the special case that the series of derivatives terminates (when the function operated on is a finite polynomial) the expression is exact, for all finite stepsizes, h . Thus Th = eh D, and formally inverting the exponential yields
{\displaystyle h\operatorname {D} =\ln(1+\Delta _{h})=\Delta _{h}-{\tfrac {1}{2}}\,\Delta _{h}^{2}+{\tfrac {1}{3}}\,\Delta _{h}^{3}-\cdots ~.}
This formula holds in the sense that both operators give the same result when applied to a polynomial.
Even for analytic functions, the series on the right is not guaranteed to converge; it may be an asymptotic series. However, it can be used to obtain more accurate approximations for the derivative. For instance, retaining the first two terms of the series yields the second-order approximation to f ′(x) mentioned at the end of the section § Higher-order differences.
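The improvement from truncating ln(1 + Δh) after two terms rather than one is easy to observe numerically; the test function below is an arbitrary choice with a known derivative.

```python
from math import exp, comb

def fwd_diff(func, x, h, n):
    # Δ_h^n f(x) via the binomial expansion of (T_h - I)^n
    return sum((-1) ** (n - k) * comb(n, k) * func(x + k * h) for k in range(n + 1))

f, x, h = exp, 0.0, 0.1                       # f'(0) = 1 exactly
one_term = fwd_diff(f, x, h, 1) / h                                # O(h) error
two_terms = (fwd_diff(f, x, h, 1) - fwd_diff(f, x, h, 2) / 2) / h  # O(h²) error
print(abs(one_term - 1.0), abs(two_terms - 1.0))  # two-term error is smaller
```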
The analogous formulas for the backward and central difference operators are
{\displaystyle h\operatorname {D} =-\ln(1-\nabla _{h})\quad {\text{ and }}\quad h\operatorname {D} =2\operatorname {arsinh} \left({\tfrac {1}{2}}\,\delta _{h}\right)~.}
The calculus of finite differences is related to the umbral calculus of combinatorics. This remarkably systematic correspondence is due to the identity of the commutators of the umbral quantities to their continuum analogs (h → 0 limits): a large number of formal differential relations of standard calculus involving functions f(x) thus systematically map to umbral finite-difference analogs involving f(xT−1h).
For instance, the umbral analog of a monomial xn is a generalization of the above falling factorial (Pochhammer k-symbol),
{\displaystyle \ (x)_{n}\equiv \left(\ x\ \operatorname {T} _{h}^{-1}\right)^{n}=x\left(x-h\right)\left(x-2h\right)\cdots {\bigl (}x-\left(n-1\right)\ h{\bigr )}\ ,}
so that
{\displaystyle \ {\frac {\Delta _{h}}{h}}(x)_{n}=n\ (x)_{n-1}\ ,}
hence the above Newton interpolation formula (by matching coefficients in the expansion of an arbitrary function f(x) in such symbols), and so on.
For example, the umbral sine is
{\displaystyle \ \sin \left(x\ \operatorname {T} _{h}^{-1}\right)=x-{\frac {(x)_{3}}{3!}}+{\frac {(x)_{5}}{5!}}-{\frac {(x)_{7}}{7!}}+\cdots \ }
As in the continuum limit, the eigenfunction of Δh/h also happens to be an exponential,
{\displaystyle \ {\frac {\Delta _{h}}{h}}(1+\lambda h)^{\frac {x}{h}}={\frac {\Delta _{h}}{h}}e^{\ln(1+\lambda h){\frac {x}{h}}}=\lambda e^{\ln(1+\lambda h){\frac {x}{h}}}\ ,}
and hence Fourier sums of continuum functions are readily, faithfully mapped to umbral Fourier sums, i.e., involving the same Fourier coefficients multiplying these umbral basis exponentials. This umbral exponential thus amounts to the exponential generating function of the Pochhammer symbols.
Thus, for instance, the Dirac delta function maps to its umbral correspondent, the cardinal sine function
{\displaystyle \ \delta (x)\mapsto {\frac {\sin \left[{\frac {\pi }{2}}\left(1+{\frac {x}{h}}\right)\right]}{\pi (x+h)}}\ ,}
and so forth. Difference equations can often be solved with techniques very similar to those for solving differential equations.
The inverse operator of the forward difference operator, i.e. the umbral integral, is the indefinite sum or antidifference operator.
=== Rules for calculus of finite difference operators ===
Analogous to rules for finding the derivative, we have:
Constant rule: If c is a constant, then
{\displaystyle \ \Delta c=0\ }
Linearity: If a and b are constants,
{\displaystyle \ \Delta (a\ f+b\ g)=a\ \Delta f+b\ \Delta g\ }
All of the above rules apply equally well to any difference operator as to Δ, including δ and ∇.
Product rule:
{\displaystyle {\begin{aligned}\ \Delta (fg)&=f\,\Delta g+g\,\Delta f+\Delta f\,\Delta g\\[4pt]\nabla (fg)&=f\,\nabla g+g\,\nabla f-\nabla f\,\nabla g\ \end{aligned}}}
Quotient rule:
{\displaystyle \ \nabla \left({\frac {f}{g}}\right)=\left.\left(\det {\begin{bmatrix}\nabla f&\nabla g\\f&g\end{bmatrix}}\right)\right/\left(g\cdot \det {\begin{bmatrix}g&\nabla g\\1&1\end{bmatrix}}\right)}
or
{\displaystyle \nabla \left({\frac {f}{g}}\right)={\frac {g\,\nabla f-f\,\nabla g}{g\cdot (g-\nabla g)}}\ }
Summation rules:
{\displaystyle {\begin{aligned}\ \sum _{n=a}^{b}\Delta f(n)&=f(b+1)-f(a)\\\sum _{n=a}^{b}\nabla f(n)&=f(b)-f(a-1)\ \end{aligned}}}
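The summation rules are discrete analogs of the fundamental theorem of calculus and can be verified by telescoping; the choice of f and the summation range below are arbitrary.

```python
f = lambda n: n * n + 3 * n                  # arbitrary test function
a, b = 2, 10
forward_sum = sum(f(n + 1) - f(n) for n in range(a, b + 1))   # Σ Δf(n)
backward_sum = sum(f(n) - f(n - 1) for n in range(a, b + 1))  # Σ ∇f(n)
print(forward_sum == f(b + 1) - f(a), backward_sum == f(b) - f(a - 1))
```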
== Generalizations ==
A generalized finite difference is usually defined as
{\displaystyle \Delta _{h}^{\mu }[f](x)=\sum _{k=0}^{N}\mu _{k}f(x+kh),}
where μ = (μ0, …, μN) is its coefficient vector. An infinite difference is a further generalization, where the finite sum above is replaced by an infinite series. Another way of generalization is making the coefficients μk depend on the point x: μk = μk(x), thus considering a weighted finite difference. Also, one may make the step h depend on the point x: h = h(x). Such generalizations are useful for constructing different moduli of continuity.
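A generalized difference is straightforward to implement. The sketch below shows that the coefficient vector μ = (1, −2, 1) recovers the second-order forward difference; the function and step size are illustrative choices.

```python
def generalized_diff(mu, f, x, h):
    # Δ_h^μ[f](x) = Σ_k μ_k f(x + k h)
    return sum(m * f(x + k * h) for k, m in enumerate(mu))

f = lambda t: t ** 2
h = 0.5
# μ = (1, -2, 1) gives f(x) - 2f(x+h) + f(x+2h) = Δ_h² f(x) = 2h² for f(t) = t²
val = generalized_diff([1, -2, 1], f, 0.0, h)
print(val)  # 0.5
```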
The generalized difference can be seen as the polynomial rings R[Th]. It leads to difference algebras.
Difference operator generalizes to Möbius inversion over a partially ordered set.
As a convolution operator: Via the formalism of incidence algebras, difference operators and other Möbius inversion can be represented by convolution with a function on the poset, called the Möbius function μ; for the difference operator, μ is the sequence (1, −1, 0, 0, 0, …).
== Multivariate finite differences ==
Finite differences can be considered in more than one variable. They are analogous to partial derivatives in several variables.
Some partial derivative approximations are:
{\displaystyle {\begin{aligned}f_{x}(x,y)&\approx {\frac {f(x+h,y)-f(x-h,y)}{2h}}\\f_{y}(x,y)&\approx {\frac {f(x,y+k)-f(x,y-k)}{2k}}\\f_{xx}(x,y)&\approx {\frac {f(x+h,y)-2f(x,y)+f(x-h,y)}{h^{2}}}\\f_{yy}(x,y)&\approx {\frac {f(x,y+k)-2f(x,y)+f(x,y-k)}{k^{2}}}\\f_{xy}(x,y)&\approx {\frac {f(x+h,y+k)-f(x+h,y-k)-f(x-h,y+k)+f(x-h,y-k)}{4hk}}.\end{aligned}}}
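These stencils are easy to check against a function with known partial derivatives; the choice f(x, y) = x²y and the step sizes below are arbitrary.

```python
f = lambda x, y: x * x * y                   # f_x = 2xy, f_xx = 2y, f_xy = 2x
x, y, h, k = 1.0, 2.0, 1e-3, 1e-3

fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
fxy = (f(x + h, y + k) - f(x + h, y - k)
       - f(x - h, y + k) + f(x - h, y - k)) / (4 * h * k)
print(fx, fxx, fxy)  # close to 4, 4, 2
```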
Alternatively, for applications in which the computation of f is the most costly step, and both first and second derivatives must be computed, a more efficient formula for the last case is
{\displaystyle f_{xy}(x,y)\approx {\frac {f(x+h,y+k)-f(x+h,y)-f(x,y+k)+2f(x,y)-f(x-h,y)-f(x,y-k)+f(x-h,y-k)}{2hk}},}
since the only values to compute that are not already needed for the previous four equations are f(x + h, y + k) and f(x − h, y − k).
== See also ==
== References ==
== External links ==
"Finite-difference calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Table of useful finite difference formula generated using Mathematica
D. Gleich (2005), Finite Calculus: A Tutorial for Solving Nasty Sums
Discrete Second Derivative from Unevenly Spaced Points
In mathematics, Laplace's method, named after Pierre-Simon Laplace, is a technique used to approximate integrals of the form
{\displaystyle \int _{a}^{b}e^{Mf(x)}\,dx,}
where f is a twice-differentiable function, M is a large number, and the endpoints a and b could be infinite. This technique was originally presented in the book by Laplace (1774).
In Bayesian statistics, Laplace's approximation can refer to either approximating the posterior normalizing constant with Laplace's method or approximating the posterior distribution with a Gaussian centered at the maximum a posteriori estimate. Laplace approximations are used in the integrated nested Laplace approximations method for fast approximations of Bayesian inference.
== Concept ==
Let the function f(x) have a unique global maximum at x0, and let M > 0 be a constant. The following two functions are considered:
{\displaystyle {\begin{aligned}g(x)&=Mf(x),\\h(x)&=e^{Mf(x)}.\end{aligned}}}
Then, x0 is the global maximum of g and h as well. Hence:
{\displaystyle {\begin{aligned}{\frac {g(x_{0})}{g(x)}}&={\frac {Mf(x_{0})}{Mf(x)}}={\frac {f(x_{0})}{f(x)}},\\[4pt]{\frac {h(x_{0})}{h(x)}}&={\frac {e^{Mf(x_{0})}}{e^{Mf(x)}}}=e^{M(f(x_{0})-f(x))}.\end{aligned}}}
As M increases, the ratio for h will grow exponentially, while the ratio for g does not change. Thus, significant contributions to the integral of this function will come only from points x in a neighborhood of x0, which can then be estimated.
== General theory ==
To state and motivate the method, one must make several assumptions. It is assumed that x0 is not an endpoint of the interval of integration and that the values f(x) cannot be very close to f(x0) unless x is close to x0. f(x) can be expanded around x0 by Taylor's theorem,
{\displaystyle f(x)=f(x_{0})+f'(x_{0})(x-x_{0})+{\frac {1}{2}}f''(x_{0})(x-x_{0})^{2}+R}
where {\displaystyle R=O\left((x-x_{0})^{3}\right)} (see: big O notation).
Since f has a global maximum at x0, and x0 is not an endpoint, it is a stationary point, i.e. f′(x0) = 0. Therefore, the second-order Taylor polynomial approximating f(x) is
{\displaystyle f(x)\approx f(x_{0})+{\frac {1}{2}}f''(x_{0})(x-x_{0})^{2}.}
Then, just one more step is needed to get a Gaussian distribution. Since x0 is a global maximum of the function f, it can be stated, by definition of the second derivative, that f″(x0) ≤ 0, thus giving the relation
{\displaystyle f(x)\approx f(x_{0})-{\frac {1}{2}}|f''(x_{0})|(x-x_{0})^{2}}
for x close to x0. The integral can then be approximated with:
{\displaystyle \int _{a}^{b}e^{Mf(x)}\,dx\approx e^{Mf(x_{0})}\int _{a}^{b}e^{-{\frac {1}{2}}M|f''(x_{0})|(x-x_{0})^{2}}\,dx}
If f″(x0) < 0, this latter integral becomes a Gaussian integral if we replace the limits of integration by −∞ and +∞; when M is large this creates only a small error because the exponential decays very fast away from x0. Computing this Gaussian integral we obtain:
{\displaystyle \int _{a}^{b}e^{Mf(x)}\,dx\approx {\sqrt {\frac {2\pi }{M|f''(x_{0})|}}}e^{Mf(x_{0})}{\text{ as }}M\to \infty .}
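A quick sanity check compares this approximation against a direct numerical integral; the choice f(x) = −(x − 1)² on [0, 2] (so x0 = 1, f(x0) = 0, f″(x0) = −2) and the value of M are illustrative.

```python
from math import exp, pi, sqrt

def trapezoid(func, a, b, n=20000):
    # plain composite trapezoid rule for the comparison integral
    step = (b - a) / n
    total = 0.5 * (func(a) + func(b)) + sum(func(a + i * step) for i in range(1, n))
    return total * step

M = 50.0
f = lambda x: -(x - 1.0) ** 2
numeric = trapezoid(lambda x: exp(M * f(x)), 0.0, 2.0)
laplace = sqrt(2 * pi / (M * 2.0)) * exp(M * 0.0)   # √(2π/(M|f''(x0)|)) e^{M f(x0)}
print(numeric, laplace)  # the two agree closely for large M
```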
A generalization of this method and extension to arbitrary precision is provided by the book Fog (2008).
=== Formal statement and proof ===
Suppose f(x) is a twice continuously differentiable function on [a, b], and there exists a unique point x0 ∈ (a, b) such that:
{\displaystyle f(x_{0})=\max _{x\in [a,b]}f(x)\quad {\text{and}}\quad f''(x_{0})<0.}
Then:
{\displaystyle \lim _{n\to \infty }{\frac {\int _{a}^{b}e^{nf(x)}\,dx}{e^{nf(x_{0})}{\sqrt {\frac {2\pi }{n\left(-f''(x_{0})\right)}}}}}=1.}
The proof relies on four basic concepts; based on these, the relative error of the method can be derived.
== Other formulations ==
Laplace's approximation is sometimes written as
{\displaystyle \int _{a}^{b}h(x)e^{Mg(x)}\,dx\approx {\sqrt {\frac {2\pi }{M|g''(x_{0})|}}}h(x_{0})e^{Mg(x_{0})}\ {\text{ as }}M\to \infty }
where h is positive.
Importantly, the accuracy of the approximation depends on the variable of integration, that is, on what stays in g(x) and what goes into h(x).
In the multivariate case, where x is a d-dimensional vector and f(x) is a scalar function of x, Laplace's approximation is usually written as:
{\displaystyle \int h(\mathbf {x} )e^{Mf(\mathbf {x} )}\,d^{d}x\approx \left({\frac {2\pi }{M}}\right)^{d/2}{\frac {h(\mathbf {x} _{0})e^{Mf(\mathbf {x} _{0})}}{\left|-H(f)(\mathbf {x} _{0})\right|^{1/2}}}{\text{ as }}M\to \infty }
where H(f)(x0) is the Hessian matrix of f evaluated at x0 and where |·| denotes its matrix determinant. Analogously to the univariate case, the Hessian is required to be negative-definite.
== Steepest descent extension ==
In extensions of Laplace's method, complex analysis, and in particular Cauchy's integral formula, is used to find a contour of steepest descent for an (asymptotically with large M) equivalent integral, expressed as a line integral. In particular, if no point x0 where the derivative of f vanishes exists on the real line, it may be necessary to deform the integration contour to an optimal one, where the above analysis will be possible. Again, the main idea is to reduce, at least asymptotically, the calculation of the given integral to that of a simpler integral that can be explicitly evaluated. See the book of Erdelyi (1956) for a simple discussion (where the method is termed steepest descents).
The appropriate formulation for the complex z-plane is
{\displaystyle \int _{a}^{b}e^{Mf(z)}\,dz\approx {\sqrt {\frac {2\pi }{-Mf''(z_{0})}}}e^{Mf(z_{0})}{\text{ as }}M\to \infty .}
for a path passing through the saddle point at z0. Note the explicit appearance of a minus sign to indicate the direction of the second derivative: one must not take the modulus. Also note that if the integrand is meromorphic, one may have to add residues corresponding to poles traversed while deforming the contour (see for example section 3 of Okounkov's paper Symmetric functions and random partitions).
== Further generalizations ==
An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems.
Given a contour C in the complex sphere, a function f defined on that contour, and a special point, such as infinity, one seeks a function M holomorphic away from C, with prescribed jump across C, and with a given normalization at infinity. If f, and hence M, are matrices rather than scalars, this is a problem that in general does not admit an explicit solution.
An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour.
The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, "steepest descent contours" solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context back in the 1980s by Stahl, Gonchar and Rakhmanov).
The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics.
== Median-point approximation generalization ==
In the generalization, evaluation of the integral is considered equivalent to finding the norm of the distribution with density
{\displaystyle e^{Mf(x)}.}
Denoting the cumulative distribution F(x), if there is a diffeomorphic Gaussian distribution with density
{\displaystyle e^{-g-{\frac {\gamma }{2}}y^{2}}}
the norm is given by
{\displaystyle {\sqrt {2\pi \gamma ^{-1}}}e^{-g}}
and the corresponding diffeomorphism is
{\displaystyle y(x)={\frac {1}{\sqrt {\gamma }}}\Phi ^{-1}{\left({\frac {F(x)}{F(\infty )}}\right)},}
where Φ denotes the cumulative standard normal distribution function.
In general, any distribution diffeomorphic to the Gaussian distribution has density
{\displaystyle e^{-g-{\frac {\gamma }{2}}y^{2}(x)}y'(x)}
and the median-point is mapped to the median of the Gaussian distribution. Matching the logarithm of the density functions and their derivatives at the median point up to a given order yields a system of equations that determine the approximate values of γ and g.
The approximation was introduced in 2019 by D. Makogon and C. Morais Smith primarily in the context of partition function evaluation for a system of interacting fermions.
== Complex integrals ==
For complex integrals in the form:
{\displaystyle {\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }g(s)e^{st}\,ds}
with t ≫ 1, we make the substitution t = iu and the change of variable s = c + ix to get the bilateral Laplace transform:
{\displaystyle {\frac {1}{2\pi }}\int _{-\infty }^{\infty }g(c+ix)e^{-ux}e^{icu}\,dx.}
We then split g(c + ix) in its real and complex part, after which we recover u = t/i. This is useful for inverse Laplace transforms, the Perron formula and complex integration.
== Example: Stirling's approximation ==
Laplace's method can be used to derive Stirling's approximation
{\displaystyle N!\approx {\sqrt {2\pi N}}\left({\frac {N}{e}}\right)^{N}\,}
for a large integer N. From the definition of the Gamma function, we have
{\displaystyle N!=\Gamma (N+1)=\int _{0}^{\infty }e^{-x}x^{N}\,dx.}
Now we change variables, letting x = Nz, so that dx = N dz.
Plug these values back in to obtain
{\displaystyle {\begin{aligned}N!&=\int _{0}^{\infty }e^{-Nz}(Nz)^{N}N\,dz\\&=N^{N+1}\int _{0}^{\infty }e^{-Nz}z^{N}\,dz\\&=N^{N+1}\int _{0}^{\infty }e^{-Nz}e^{N\ln z}\,dz\\&=N^{N+1}\int _{0}^{\infty }e^{N(\ln z-z)}\,dz.\end{aligned}}}
This integral has the form necessary for Laplace's method with
{\displaystyle f(z)=\ln {z}-z}
which is twice-differentiable:
{\displaystyle f'(z)={\frac {1}{z}}-1,}
{\displaystyle f''(z)=-{\frac {1}{z^{2}}}.}
The maximum of f(z) lies at z0 = 1, and the second derivative of f(z) has the value −1 at this point. Therefore, we obtain
{\displaystyle N!\approx N^{N+1}{\sqrt {\frac {2\pi }{N}}}e^{-N}={\sqrt {2\pi N}}N^{N}e^{-N}.}
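The shrinking relative error of Stirling's approximation is easy to observe numerically; the sample values of N below are arbitrary.

```python
from math import factorial, sqrt, pi, e

def stirling(n):
    # √(2πN) (N/e)^N
    return sqrt(2 * pi * n) * (n / e) ** n

errors = [abs(stirling(n) - factorial(n)) / factorial(n) for n in (5, 10, 50)]
print(errors)  # roughly 1/(12N): decreasing as N grows
```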
== See also ==
Method of stationary phase
Method of steepest descent
Large deviations theory
Laplace principle (large deviations theory)
Laplace's approximation
== Notes ==
== References ==
This article incorporates material from saddle point approximation on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
A mathematical model is an abstract description of a concrete system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in applied mathematics and in the natural sciences (such as physics, biology, earth science, chemistry) and engineering disciplines (such as computer science, electrical engineering), as well as in non-physical systems such as the social sciences (such as economics, psychology, sociology, political science). It can also be taught as a subject in its own right.
The use of mathematical models to solve problems in business or military operations is a large part of the field of operations research. Mathematical models are also used in music, linguistics, and
philosophy (for example, intensively in analytic philosophy). A model may help to explain a system and to study the effects of different components, and to make predictions about behavior.
== Elements of a mathematical model ==
Mathematical models can take many forms, including dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. In the physical sciences, a traditional mathematical model contains most of the following elements:
Governing equations
Supplementary sub-models
Defining equations
Constitutive equations
Assumptions and constraints
Initial and boundary conditions
Classical constraints and kinematic equations
== Classifications ==
Mathematical models are of different types:
Linear vs. nonlinear. If all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear. A model is considered to be nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context, and linear models may have nonlinear expressions in them. For example, in a statistical linear model, it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators, but it can still have nonlinear expressions in it. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model. If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model. Linear structure implies that a problem can be decomposed into simpler parts that can be treated independently and/or analyzed at a different scale, and the results obtained will remain valid for the initial problem when recomposed and rescaled. Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity.
Static vs. dynamic. A dynamic model accounts for time-dependent changes in the state of the system, while a static (or steady-state) model calculates the system in equilibrium, and thus is time-invariant. Dynamic models typically are represented by differential equations or difference equations.
Explicit vs. implicit. If all of the input parameters of the overall model are known, and the output parameters can be calculated by a finite series of computations, the model is said to be explicit. But sometimes it is the output parameters which are known, and the corresponding inputs must be solved for by an iterative procedure, such as Newton's method or Broyden's method. In such a case the model is said to be implicit. For example, a jet engine's physical properties such as turbine and nozzle throat areas can be explicitly calculated given a design thermodynamic cycle (air and fuel flow rates, pressures, and temperatures) at a specific flight condition and power setting, but the engine's operating cycles at other flight conditions and power settings cannot be explicitly calculated from the constant physical properties.
Discrete vs. continuous. A discrete model treats objects as discrete, such as the particles in a molecular model or the states in a statistical model; while a continuous model represents the objects in a continuous manner, such as the velocity field of fluid in pipe flows, temperatures and stresses in a solid, and electric field that applies continuously over the entire model due to a point charge.
Deterministic vs. probabilistic (stochastic). A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables; therefore, a deterministic model always performs the same way for a given set of initial conditions. Conversely, in a stochastic model—usually called a "statistical model"—randomness is present, and variable states are not described by unique values, but rather by probability distributions.
Deductive, inductive, or floating. A deductive model is a logical structure based on a theory. An inductive model arises from empirical findings and generalization from them. The floating model rests on neither theory nor observation, but is merely the invocation of expected structure. Application of mathematics in social sciences outside of economics has been criticized for unfounded models. Application of catastrophe theory in science has been characterized as a floating model.
Strategic vs. non-strategic. Models used in game theory are different in the sense that they model agents with incompatible incentives, such as competing species or bidders in an auction. Strategic models assume that players are autonomous decision makers who rationally choose actions that maximize their objective function. A key challenge of using strategic models is defining and computing solution concepts such as Nash equilibrium. An interesting property of strategic models is that they separate reasoning about rules of the game from reasoning about behavior of the players.
== Construction ==
In business and engineering, mathematical models may be used to maximize a certain output. The system under consideration will require certain inputs. The system relating inputs to outputs depends on other variables too: decision variables, state variables, exogenous variables, and random variables. Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables).
Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally) as the number increases. For example, economists often apply linear algebra when using input–output models. Complicated mathematical models that have many variables may be consolidated by use of vectors where one symbol represents several variables.
=== A priori information ===
Mathematical modeling problems are often classified into black box or white box models, according to how much a priori information on the system is available. A black-box model is a system of which there is no a priori information available. A white-box model (also called glass box or clear box) is a system where all necessary information is available. Practically all systems are somewhere between the black-box and white-box models, so this concept is useful only as an intuitive guide for deciding which approach to take.
Usually, it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, white-box models are usually considered easier: if the information has been used correctly, then the model will behave correctly. Often the a priori information comes in the form of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function. But we are still left with several unknown parameters: how rapidly does the medicine amount decay, and what is the initial amount of medicine in the blood? This example is therefore not a completely white-box model. These parameters have to be estimated through some means before one can use the model.
In black-box models, one tries to estimate both the functional form of relations between variables and the numerical parameters in those functions. Using a priori information we could end up, for example, with a set of functions that probably could describe the system adequately. If there is no a priori information we would try to use functions as general as possible to cover all different models. An often-used approach for black-box models is the neural network, which usually does not make assumptions about incoming data. Alternatively, the NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) algorithms, which were developed as part of nonlinear system identification, can be used to select the model terms, determine the model structure, and estimate the unknown parameters in the presence of correlated and nonlinear noise. The advantage of NARMAX models compared to neural networks is that NARMAX produces models that can be written down and related to the underlying process, whereas neural networks produce an approximation that is opaque.
==== Subjective information ====
Sometimes it is useful to incorporate subjective information into a mathematical model. This can be done based on intuition, experience, or expert opinion, or based on convenience of mathematical form. Bayesian statistics provides a theoretical framework for incorporating such subjectivity into a rigorous analysis: we specify a prior probability distribution (which can be subjective), and then update this distribution based on empirical data.
An example of a situation in which such an approach would be necessary is one in which an experimenter bends a coin slightly and tosses it once, recording whether it comes up heads, and is then given the task of predicting the probability that the next flip comes up heads. After bending the coin, the true probability that the coin will come up heads is unknown, so the experimenter would need to make a decision (perhaps by looking at the shape of the coin) about what prior distribution to use. Incorporation of such subjective information might be important to get an accurate estimate of the probability.
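A minimal sketch of such a Bayesian update, assuming a Beta prior over the heads probability (a standard conjugate-prior choice, not something prescribed by the example itself):

```python
# Beta-Bernoulli sketch of the bent-coin example: a subjective prior
# Beta(a, b) over the heads probability is updated after one observed toss.
def posterior_mean(a, b, heads_observed):
    # The Beta prior is conjugate to the Bernoulli likelihood:
    # an observed head adds 1 to a, a tail adds 1 to b.
    a += heads_observed
    b += 1 - heads_observed
    return a / (a + b)

# An assumed symmetric prior Beta(2, 2) encodes a mild belief that the
# coin is near fair; the single observed head shifts the estimate up.
print(posterior_mean(2, 2, 1))  # 0.6
```

A more strongly bent coin could be encoded by a more asymmetric prior; the update rule is unchanged.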
=== Complexity ===
In general, model complexity involves a trade-off between simplicity and accuracy of the model. Occam's razor is a principle particularly relevant to modeling, its essential idea being that among models with roughly equal predictive power, the simplest one is the most desirable. While added complexity usually improves the realism of a model, it can make the model difficult to understand and analyze, and can also pose computational problems, including numerical instability. Thomas Kuhn argues that as science progresses, explanations tend to become more complex before a paradigm shift offers radical simplification.
For example, when modeling the flight of an aircraft, we could embed each mechanical part of the aircraft into our model and would thus acquire an almost white-box model of the system. However, the computational cost of adding such a huge amount of detail would effectively inhibit the usage of such a model. Additionally, the uncertainty would increase due to an overly complex system, because each separate part induces some amount of variance into the model. It is therefore usually appropriate to make some approximations to reduce the model to a sensible size. Engineers often can accept some approximations in order to get a more robust and simple model. For example, Newton's classical mechanics is an approximate model of the real world. Still, Newton's model is quite sufficient for most ordinary-life situations, that is, as long as particle speeds are well below the speed of light, and we study macro-particles only. Note that better accuracy does not necessarily mean a better model. Statistical models are prone to overfitting, which means that a model is fitted to data too closely and has lost its ability to generalize to new events that were not observed before.
=== Training, tuning, and fitting ===
Any model which is not pure white-box contains some parameters that can be used to fit the model to the system it is intended to describe. If the modeling is done by an artificial neural network or other machine learning technique, the optimization of parameters is called training, while the optimization of model hyperparameters is called tuning and often uses cross-validation. In more conventional modeling through explicitly given mathematical functions, parameters are often determined by curve fitting.
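As a sketch of such fitting, take the exponential-decay medicine model mentioned earlier with hypothetical noise-free data: the two unknown parameters (initial amount and decay rate) can be recovered by linear least squares after taking logarithms:

```python
import math

# Synthetic "measurements" of drug concentration c(t) = c0 * exp(-k*t),
# with c0 and k treated as the unknown parameters to be fitted.
# The values of c0 and k are made up for illustration.
true_c0, true_k = 10.0, 0.3
ts = [0, 1, 2, 3, 4, 5, 6]
cs = [true_c0 * math.exp(-true_k * t) for t in ts]

# Because the functional form is known a priori, fitting reduces to
# ordinary linear least squares on ln c = ln c0 - k*t.
n = len(ts)
ys = [math.log(c) for c in cs]
t_mean = sum(ts) / n
y_mean = sum(ys) / n
slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys)) / \
        sum((t - t_mean) ** 2 for t in ts)
fit_k, fit_c0 = -slope, math.exp(y_mean - slope * t_mean)
print(round(fit_k, 3), round(fit_c0, 3))  # recovers 0.3 and 10.0
```

With noisy data the same procedure gives estimates rather than exact values, and the residuals indicate how well the assumed functional form fits.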
=== Evaluation and assessment ===
A crucial part of the modeling process is the evaluation of whether or not a given mathematical model describes a system accurately. This question can be difficult to answer as it involves several different types of evaluation.
==== Prediction of empirical data ====
Usually, the easiest part of model evaluation is checking whether a model predicts experimental measurements or other empirical data not used in the model development. In models with parameters, a common approach is to split the data into two disjoint subsets: training data and verification data. The training data are used to estimate the model parameters. An accurate model will closely match the verification data even though these data were not used to set the model's parameters. This practice is referred to as cross-validation in statistics.
Defining a metric to measure distances between observed and predicted data is a useful tool for assessing model fit. In statistics, decision theory, and some economic models, a loss function plays a similar role. While it is rather straightforward to test the appropriateness of parameters, it can be more difficult to test the validity of the general mathematical form of a model. In general, more mathematical tools have been developed to test the fit of statistical models than models involving differential equations. Tools from nonparametric statistics can sometimes be used to evaluate how well the data fit a known distribution or to come up with a general model that makes only minimal assumptions about the model's mathematical form.
==== Scope of the model ====
Assessing the scope of a model, that is, determining what situations the model is applicable to, can be less straightforward. If the model was constructed based on a set of data, one must determine for which systems or situations the known data is a "typical" set of data. The question of whether the model describes well the properties of the system between data points is called interpolation, and the same question for events or data points outside the observed data is called extrapolation.
As an example of the typical limitations of the scope of a model, in evaluating Newtonian classical mechanics, we can note that Newton made his measurements without advanced equipment, so he could not measure properties of particles traveling at speeds close to the speed of light. Likewise, he did not measure the movements of molecules and other small particles, but macro particles only. It is then not surprising that his model does not extrapolate well into these domains, even though his model is quite sufficient for ordinary life physics.
==== Philosophical considerations ====
Many types of modeling implicitly involve claims about causality. This is usually (but not always) true of models involving differential equations. As the purpose of modeling is to increase our understanding of the world, the validity of a model rests not only on its fit to empirical observations, but also on its ability to extrapolate to situations or data beyond those originally described in the model. One can think of this as the differentiation between qualitative and quantitative predictions. One can also argue that a model is worthless unless it provides some insight which goes beyond what is already known from direct investigation of the phenomenon being studied.
An example of such criticism is the argument that the mathematical models of optimal foraging theory do not offer insight that goes beyond the common-sense conclusions of evolution and other basic principles of ecology. It should also be noted that while mathematical modeling uses mathematical concepts and language, it is not itself a branch of mathematics and does not necessarily conform to any mathematical logic, but is typically a branch of some science or other technical subject, with corresponding concepts and standards of argumentation.
== Significance in the natural sciences ==
Mathematical models are of great importance in the natural sciences, particularly in physics. Physical theories are almost invariably expressed using mathematical models. Throughout history, more and more accurate mathematical models have been developed. Newton's laws accurately describe many everyday phenomena, but at certain limits the theory of relativity and quantum mechanics must be used.
It is common to use idealized models in physics to simplify things. Massless ropes, point particles, ideal gases and the particle in a box are among the many simplified models used in physics. The laws of physics are represented with simple equations such as Newton's laws, Maxwell's equations and the Schrödinger equation. These laws are a basis for making mathematical models of real situations. Many real situations are very complex and thus modeled approximately on a computer: a model that is computationally feasible to compute is made from the basic laws or from approximate models derived from the basic laws. For example, molecules can be modeled by molecular orbital models that are approximate solutions to the Schrödinger equation. In engineering, physics models are often made by mathematical methods such as finite element analysis.
Different mathematical models use different geometries that are not necessarily accurate descriptions of the geometry of the universe. Euclidean geometry is much used in classical physics, while special relativity and general relativity are examples of theories that use geometries which are not Euclidean.
== Some applications ==
Often when engineers analyze a system to be controlled or optimized, they use a mathematical model. In analysis, engineers can build a descriptive model of the system as a hypothesis of how the system could work, or try to estimate how an unforeseeable event could affect the system. Similarly, in control of a system, engineers can try out different control approaches in simulations.
A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. Variables may be of many types; real or integer numbers, Boolean values or strings, for example. The variables represent some properties of the system, for example, the measured system outputs often in the form of signals, timing data, counters, and event occurrence. The actual model is the set of functions that describe the relations between the different variables.
== Examples ==
A popular example in computer science is the mathematical modeling of various machines. One such model is the deterministic finite automaton (DFA), which is defined as an abstract mathematical concept but, due to its deterministic nature, is implementable in hardware and software for solving various specific problems. For example, the following is a DFA M with a binary alphabet, which requires that the input contain an even number of 0s:
M = (Q, Σ, δ, q0, F)
where
Q = {S1, S2},
Σ = {0, 1},
q0 = S1,
F = {S1},
and δ is defined by the following state-transition table:
δ(S1, 0) = S2, δ(S1, 1) = S1
δ(S2, 0) = S1, δ(S2, 1) = S2
The state S1 represents that there has been an even number of 0s in the input so far, while S2 signifies an odd number. A 1 in the input does not change the state of the automaton. When the input ends, the state will show whether the input contained an even number of 0s or not. If the input did contain an even number of 0s, M will finish in state S1, an accepting state, so the input string will be accepted.
The language recognized by M is the regular language given by the regular expression 1*( 0 (1*) 0 (1*) )*, where "*" is the Kleene star; e.g., 1* denotes any non-negative number (possibly zero) of symbols "1".
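The DFA M above can be sketched directly in code, with the transition table δ as a dictionary:

```python
# The DFA M described above: state S1 means an even number of 0s seen
# so far, S2 an odd number; reading a 0 toggles the state, a 1 does not.
delta = {
    ("S1", "0"): "S2", ("S1", "1"): "S1",
    ("S2", "0"): "S1", ("S2", "1"): "S2",
}

def accepts(string, start="S1", accepting=("S1",)):
    state = start
    for symbol in string:
        state = delta[(state, symbol)]
    return state in accepting

print(accepts("1001"))  # True  (two 0s)
print(accepts("10"))    # False (one 0)
print(accepts(""))      # True  (zero 0s is even)
```

Because the model is deterministic, each input symbol causes exactly one table lookup, which is why DFAs map so directly onto hardware and software.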
Many everyday activities carried out without a thought are uses of mathematical models. A geographical map projection of a region of the earth onto a small, plane surface is a model which can be used for many purposes such as planning travel.
Another simple activity is predicting the position of a vehicle from its initial position, direction and speed of travel, using the equation that distance traveled is the product of time and speed. This is known as dead reckoning when used more formally. Mathematical modeling in this way does not necessarily require formal mathematics; animals have been shown to use dead reckoning.
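A minimal dead-reckoning sketch, assuming motion on a flat plane with the heading measured from the x-axis:

```python
import math

# Dead reckoning: new position = old position + (speed * time) along
# the current heading. Plane geometry is an assumption of this sketch.
def dead_reckon(x, y, heading_deg, speed, elapsed):
    distance = speed * elapsed
    angle = math.radians(heading_deg)
    return x + distance * math.cos(angle), y + distance * math.sin(angle)

# Travelling along the x-axis at 60 km/h for half an hour covers 30 km.
x, y = dead_reckon(0.0, 0.0, 0.0, 60.0, 0.5)
print(x, y)  # 30.0 0.0
```

Repeated application of this update, one leg at a time, is exactly the "more formal" dead reckoning used in navigation.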
Population Growth. A simple (though approximate) model of population growth is the Malthusian growth model. A slightly more realistic and largely used population growth model is the logistic function, and its extensions.
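A short sketch contrasting the two growth models, with illustrative (made-up) parameter values for the growth rate r and carrying capacity K:

```python
import math

# Malthusian (unbounded exponential) growth vs. the logistic model,
# which saturates at the carrying capacity K. Parameters are made up.
def malthusian(p0, r, t):
    return p0 * math.exp(r * t)

def logistic(p0, r, K, t):
    return K / (1 + (K / p0 - 1) * math.exp(-r * t))

p0, r, K = 100.0, 0.5, 1000.0
for t in (0, 5, 10, 20):
    print(t, round(malthusian(p0, r, t)), round(logistic(p0, r, K, t)))
```

The two models agree while the population is small relative to K, then diverge: the Malthusian curve grows without bound while the logistic curve levels off near K.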
Model of a particle in a potential field. In this model we consider a particle as a point mass which describes a trajectory in space, modeled by a function giving its coordinates in space as a function of time. The potential field is given by a function V : R³ → R, and the trajectory, that is, a function r : R → R³, is the solution of the differential equation:
{\displaystyle -{\frac {\mathrm {d} ^{2}\mathbf {r} (t)}{\mathrm {d} t^{2}}}m={\frac {\partial V[\mathbf {r} (t)]}{\partial x}}\mathbf {\hat {x}} +{\frac {\partial V[\mathbf {r} (t)]}{\partial y}}\mathbf {\hat {y}} +{\frac {\partial V[\mathbf {r} (t)]}{\partial z}}\mathbf {\hat {z}} ,}
which can also be written as
{\displaystyle m{\frac {\mathrm {d} ^{2}\mathbf {r} (t)}{\mathrm {d} t^{2}}}=-\nabla V[\mathbf {r} (t)].}
Note this model assumes the particle is a point mass, which is certainly known to be false in many cases in which we use this model; for example, as a model of planetary motion.
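A numerical sketch of this model, assuming a simple harmonic potential V(r) = k|r|²/2 (so that −∇V = −k r) and a velocity-Verlet integrator; the potential and all parameter values are illustrative choices:

```python
# Velocity-Verlet integration of m r'' = -grad V for the assumed
# harmonic potential V(r) = k|r|^2 / 2, whose force is simply -k*r.
def simulate(r, v, m=1.0, k=1.0, dt=0.001, steps=6283):
    a = [-k * ri / m for ri in r]
    for _ in range(steps):
        r = [ri + vi * dt + 0.5 * ai * dt * dt for ri, vi, ai in zip(r, v, a)]
        a_new = [-k * ri / m for ri in r]
        v = [vi + 0.5 * (ai + ani) * dt for vi, ai, ani in zip(v, a, a_new)]
        a = a_new
    return r, v

# For m = k = 1 the period is 2*pi, so 6283 steps of dt = 0.001 cover
# roughly one full oscillation and return the particle near its start.
r, v = simulate([1.0, 0.0, 0.0], [0.0, 0.0, 0.0])
print(round(r[0], 2))  # close to 1.0
```

For a general potential only the force line changes; the integrator itself is independent of V.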
Model of rational behavior for a consumer. In this model we assume a consumer faces a choice of n commodities labeled 1, 2, …, n, each with a market price p1, p2, …, pn. The consumer is assumed to have an ordinal utility function U (ordinal in the sense that only the sign of the differences between two utilities, and not the level of each utility, is meaningful), depending on the amounts of commodities x1, x2, …, xn consumed. The model further assumes that the consumer has a budget M which is used to purchase a vector x1, x2, …, xn in such a way as to maximize U(x1, x2, …, xn). The problem of rational behavior in this model then becomes a mathematical optimization problem, that is:
{\displaystyle \max \,U(x_{1},x_{2},\ldots ,x_{n})}
subject to:
{\displaystyle \sum _{i=1}^{n}p_{i}x_{i}\leq M,}
{\displaystyle x_{i}\geq 0\;\;\;{\text{ for all }}i=1,2,\dots ,n.}
This model has been used in a wide variety of economic contexts, such as in general equilibrium theory to show existence and Pareto efficiency of economic equilibria.
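As a sketch, assume a Cobb-Douglas utility with two goods (a common illustrative choice with a closed-form optimum, not the only possibility); all parameter values below are made up:

```python
# Consumer problem for an assumed Cobb-Douglas utility
# U(x1, x2) = x1^a * x2^(1-a). Its budget-constrained optimum has the
# closed form x1 = a*M/p1, x2 = (1-a)*M/p2.
a, M = 0.3, 100.0        # made-up preference weight and budget
p1, p2 = 2.0, 5.0        # made-up market prices

def U(x1, x2):
    return (x1 ** a) * (x2 ** (1 - a))

x1_opt, x2_opt = a * M / p1, (1 - a) * M / p2

# Coarse grid search along the budget line as a sanity check: no
# feasible bundle on the grid should beat the analytic optimum.
best = max(
    U(x1, (M - p1 * x1) / p2) for x1 in [i * 0.1 for i in range(1, 500)]
)
print(x1_opt, x2_opt, best <= U(x1_opt, x2_opt) + 1e-9)
```

For other utility functions the closed form generally disappears and the problem is handed to a numerical constrained optimizer instead.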
The neighbour-sensing model explains mushroom formation from the initially chaotic fungal network.
In computer science, mathematical models may be used to simulate computer networks.
In mechanics, mathematical models may be used to analyze the movement of a rocket model.
== See also ==
== References ==
== Further reading ==
=== Books ===
Aris, Rutherford [1978] (1994). Mathematical Modelling Techniques, New York: Dover. ISBN 0-486-68131-9
Bender, E.A. [1978] (2000). An Introduction to Mathematical Modeling, New York: Dover. ISBN 0-486-41180-X
Chartrand, Gary (1977). Graphs as Mathematical Models, Prindle, Webber & Schmidt. ISBN 0871502364
Dubois, G. (2018). Modeling and Simulation, Taylor & Francis, CRC Press.
Gershenfeld, N. (1998). The Nature of Mathematical Modeling, Cambridge University Press. ISBN 0-521-57095-6
Lin, C.C. & Segel, L.A. (1988). Mathematics Applied to Deterministic Problems in the Natural Sciences, Philadelphia: SIAM. ISBN 0-89871-229-7
Morgan, Mary S. & Morrison, Margaret, eds. (1999). Models as Mediators: Perspectives on Natural and Social Science.
Morgan, Mary S. (2012). The World in the Model: How Economists Work and Think.
=== Specific applications ===
Papadimitriou, Fivos. (2010). Mathematical Modelling of Spatial-Ecological Complex Systems: an Evaluation. Geography, Environment, Sustainability 1(3), 67–80. doi:10.24057/2071-9388-2010-3-1-67-80
Peierls, R. (1980). "Model-making in physics". Contemporary Physics. 21: 3–17. Bibcode:1980ConPh..21....3P. doi:10.1080/00107518008210938.
An Introduction to Infectious Disease Modelling by Emilia Vynnycky and Richard G White.
== External links ==
General reference
Patrone, F. Introduction to modeling via differential equations, with critical remarks.
Plus teacher and student package: Mathematical Modelling. Brings together all articles on mathematical modeling from Plus Magazine, the online mathematics magazine produced by the Millennium Mathematics Project at the University of Cambridge.
Philosophical
Frigg, R. and S. Hartmann, Models in Science, in: The Stanford Encyclopedia of Philosophy, (Spring 2006 Edition)
Griffiths, E. C. (2010) What is a model?
In classical mechanics, the Newton–Euler equations describe the combined translational and rotational dynamics of a rigid body.
Traditionally, the Newton–Euler equations are the grouping together of Euler's two laws of motion for a rigid body into a single equation with 6 components, using column vectors and matrices. These laws relate the motion of the center of gravity of a rigid body with the sum of forces and torques (or, synonymously, moments) acting on it.
== Center of mass frame ==
With respect to a coordinate frame whose origin coincides with the body's center of mass for τ (torque) and an inertial frame of reference for F (force), they can be expressed in matrix form as:
{\displaystyle \left({\begin{matrix}{\mathbf {F} }\\{\boldsymbol {\tau }}\end{matrix}}\right)=\left({\begin{matrix}m{\mathbf {I} _{3}}&0\\0&{\mathbf {I} }_{\rm {cm}}\end{matrix}}\right)\left({\begin{matrix}\mathbf {a} _{\rm {cm}}\\{\boldsymbol {\alpha }}\end{matrix}}\right)+\left({\begin{matrix}0\\{\boldsymbol {\omega }}\times {\mathbf {I} }_{\rm {cm}}\,{\boldsymbol {\omega }}\end{matrix}}\right),}
where
F = total force acting on the center of mass
m = mass of the body
I3 = the 3×3 identity matrix
acm = acceleration of the center of mass
vcm = velocity of the center of mass
τ = total torque acting about the center of mass
Icm = moment of inertia about the center of mass
ω = angular velocity of the body
α = angular acceleration of the body
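A numerical sketch of the center-of-mass form above, assuming a body-fixed principal-axis frame so that the inertia tensor is diagonal; all numerical values are made up for illustration:

```python
# Evaluate F = m*a_cm and tau = I_cm*alpha + omega x (I_cm*omega) for a
# rigid body whose inertia tensor is diagonal (principal-axis frame).
def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

m = 2.0                   # made-up mass
I_cm = [1.0, 2.0, 3.0]    # made-up principal moments of inertia
a_cm = [0.5, 0.0, 0.0]    # linear acceleration of the center of mass
alpha = [0.0, 0.1, 0.0]   # angular acceleration
omega = [0.2, 0.0, 1.5]   # angular velocity (not along a principal axis)

F = [m * ai for ai in a_cm]
I_omega = [Ii * wi for Ii, wi in zip(I_cm, omega)]
I_alpha = [Ii * ali for Ii, ali in zip(I_cm, alpha)]
# The omega x (I_cm omega) term is the gyroscopic torque; it vanishes
# only when omega is aligned with a principal axis.
tau = [t1 + t2 for t1, t2 in zip(I_alpha, cross(omega, I_omega))]
print(F, tau)
```

Choosing ω along a single principal axis makes the gyroscopic term zero, which is a quick way to sanity-check the implementation.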
== Any reference frame ==
With respect to a coordinate frame located at point P that is fixed in the body and not coincident with the center of mass, the equations assume the more complex form:
{\displaystyle \left({\begin{matrix}{\mathbf {F} }\\{\boldsymbol {\tau }}_{\rm {p}}\end{matrix}}\right)=\left({\begin{matrix}m{\mathbf {I} _{3}}&-m[{\mathbf {c} }]^{\times }\\m[{\mathbf {c} }]^{\times }&{\mathbf {I} }_{\rm {cm}}-m[{\mathbf {c} }]^{\times }[{\mathbf {c} }]^{\times }\end{matrix}}\right)\left({\begin{matrix}\mathbf {a} _{\rm {p}}\\{\boldsymbol {\alpha }}\end{matrix}}\right)+\left({\begin{matrix}m[{\boldsymbol {\omega }}]^{\times }[{\boldsymbol {\omega }}]^{\times }{\mathbf {c} }\\{[{\boldsymbol {\omega }}]}^{\times }({\mathbf {I} }_{\rm {cm}}-m[{\mathbf {c} }]^{\times }[{\mathbf {c} }]^{\times })\,{\boldsymbol {\omega }}\end{matrix}}\right),}
where c is the vector from P to the center of mass of the body expressed in the body-fixed frame,
and
{\displaystyle [\mathbf {c} ]^{\times }\equiv \left({\begin{matrix}0&-c_{z}&c_{y}\\c_{z}&0&-c_{x}\\-c_{y}&c_{x}&0\end{matrix}}\right)\qquad \qquad [\mathbf {\boldsymbol {\omega }} ]^{\times }\equiv \left({\begin{matrix}0&-\omega _{z}&\omega _{y}\\\omega _{z}&0&-\omega _{x}\\-\omega _{y}&\omega _{x}&0\end{matrix}}\right)}
denote skew-symmetric cross product matrices.
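These matrices can be checked numerically: multiplying [c]× by a vector reproduces the cross product c × v. A small sketch:

```python
# The skew-symmetric matrix [c]x turns a cross product into a
# matrix-vector product: [c]x v == c x v.
def skew(c):
    cx, cy, cz = c
    return [[0.0, -cz,  cy],
            [ cz, 0.0, -cx],
            [-cy,  cx, 0.0]]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

c, v = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
print(matvec(skew(c), v))  # [-3.0, 6.0, -3.0], equal to cross(c, v)
```

This identity is what lets the cross products in the Newton–Euler equations be folded into the spatial inertia matrix.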
The left hand side of the equation—which includes the sum of external forces, and the sum of external moments about P—describes a spatial wrench, see screw theory.
The inertial terms are contained in the spatial inertia matrix
{\displaystyle \left({\begin{matrix}m{\mathbf {I} _{3}}&-m[{\mathbf {c} }]^{\times }\\m[{\mathbf {c} }]^{\times }&{\mathbf {I} }_{\rm {cm}}-m[{\mathbf {c} }]^{\times }[{\mathbf {c} }]^{\times }\end{matrix}}\right),}
while the fictitious forces are contained in the term:
{\displaystyle \left({\begin{matrix}m{[{\boldsymbol {\omega }}]}^{\times }{[{\boldsymbol {\omega }}]}^{\times }{\mathbf {c} }\\{[{\boldsymbol {\omega }}]}^{\times }({\mathbf {I} }_{\rm {cm}}-m[{\mathbf {c} }]^{\times }[{\mathbf {c} }]^{\times })\,{\boldsymbol {\omega }}\end{matrix}}\right).}
When the center of mass is not coincident with the coordinate frame (that is, when c is nonzero), the translational and angular accelerations (a and α) are coupled, so that each is associated with force and torque components.
== Applications ==
The Newton–Euler equations are used as the basis for more complicated "multi-body" formulations (screw theory) that describe the dynamics of systems of rigid bodies connected by joints and other constraints. Multi-body problems can be solved by a variety of numerical algorithms.
== See also ==
Euler's laws of motion for a rigid body.
Euler angles
Inverse dynamics
Centrifugal force
Principal axes
Spatial acceleration
Screw theory of rigid body motion.
== References ==
In physics, motion is the change of an object's position with respect to a reference point over time. Motion is mathematically described in terms of displacement, distance, velocity, acceleration, speed, and frame of reference to an observer, measuring the change in position of the body relative to that frame with a change in time. The branch of physics describing the motion of objects without reference to its cause is called kinematics, while the branch studying forces and their effect on motion is called dynamics.
If an object is not in motion relative to a given frame of reference, it is said to be at rest, motionless, immobile, stationary, or to have a constant or time-invariant position with reference to its surroundings. Modern physics holds that, as there is no absolute frame of reference, Isaac Newton's concept of absolute motion cannot be determined. Everything in the universe can be considered to be in motion.
Motion applies to various physical systems: objects, bodies, matter particles, matter fields, radiation, radiation fields, radiation particles, curvature, and space-time. One can also speak of the motion of images, shapes, and boundaries. In general, the term motion signifies a continuous change in the position or configuration of a physical system in space. For example, one can talk about the motion of a wave or the motion of a quantum particle, where the configuration consists of the probabilities of the wave or particle occupying specific positions.
== Equations of motion ==
== Laws of motion ==
In physics, the motion of massive bodies is described through two related sets of laws of mechanics: classical mechanics for supra-atomic objects (larger than an atom, such as cars, projectiles, planets, cells, and humans) and quantum mechanics for atomic and sub-atomic objects (such as helium, protons, and electrons). Historically, Newton and Euler formulated three laws of classical mechanics:
=== Classical mechanics ===
Classical mechanics is used for describing the motion of macroscopic objects moving at speeds significantly slower than the speed of light, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets, stars, and galaxies. It produces very accurate results within these domains and is one of the oldest and largest scientific descriptions in science, engineering, and technology.
Classical mechanics is fundamentally based on Newton's laws of motion. These laws describe the relationship between the forces acting on a body and the motion of that body. They were first compiled by Sir Isaac Newton in his work Philosophiæ Naturalis Principia Mathematica, which was first published on July 5, 1687. Newton's three laws are:
A body at rest will remain at rest, and a body in motion will remain in motion unless it is acted upon by an external force. (This is known as the law of inertia.)
Force (F) is equal to the change in momentum per change in time (Δ(mv)/Δt). For a constant mass, force equals mass times acceleration (F = ma).
For every action, there is an equal and opposite reaction. (In other words, whenever one body exerts a force F onto a second body, the second body exerts the force −F back onto the first body. F and −F are equal in magnitude and opposite in direction, so the body that exerts F will be pushed backward.)
Newton's three laws of motion were the first to accurately provide a mathematical model for understanding orbiting bodies in outer space. This explanation unified the motion of celestial bodies and the motion of objects on Earth.
=== Relativistic mechanics ===
Modern kinematics developed with the study of electromagnetism and refers all velocities v to their ratio to the speed of light c. Velocity is then interpreted as rapidity, the hyperbolic angle φ for which tanh φ = v/c. Acceleration, the change of velocity over time, then changes rapidity according to Lorentz transformations. This part of mechanics is special relativity. Efforts to incorporate gravity into relativistic mechanics were made by W. K. Clifford and Albert Einstein. The development used differential geometry to describe a curved universe with gravity; the study is called general relativity.
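A small numerical sketch of the rapidity relation: because rapidities of collinear boosts add directly, converting velocities to rapidities, adding, and converting back reproduces the relativistic velocity-addition formula (working in units where c = 1):

```python
import math

# Rapidity phi is defined by tanh(phi) = v/c. Unlike velocities,
# rapidities of collinear boosts simply add.
c = 1.0  # units where the speed of light is 1

def rapidity(v):
    return math.atanh(v / c)

def velocity(phi):
    return c * math.tanh(phi)

v1, v2 = 0.6, 0.7  # illustrative sub-light velocities
combined = velocity(rapidity(v1) + rapidity(v2))
expected = (v1 + v2) / (1 + v1 * v2 / c**2)  # Einstein velocity addition
print(round(combined, 4))  # below 1, as required by special relativity
```

Note the combined velocity stays below c even though the naive sum v1 + v2 exceeds it.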
=== Quantum mechanics ===
Quantum mechanics is a set of principles describing physical reality at the atomic level of matter (molecules and atoms) and the subatomic particles (electrons, protons, neutrons, and even smaller elementary particles such as quarks). These descriptions include the simultaneous wave-like and particle-like behavior of both matter and radiation energy as described in the wave–particle duality.
In classical mechanics, accurate measurements and predictions of the state of objects can be calculated, such as location and velocity. In quantum mechanics, due to the Heisenberg uncertainty principle, the complete state of a subatomic particle, such as its location and velocity, cannot be simultaneously determined.
In addition to describing the motion of atomic level phenomena, quantum mechanics is useful in understanding some large-scale phenomena such as superfluidity, superconductivity, and biological systems, including the function of smell receptors and the structures of protein.
== Orders of magnitude ==
Humans, like all known things in the universe, are in constant motion; however, aside from obvious movements of the various external body parts and locomotion, humans are in motion in a variety of ways that are more difficult to perceive. Many of these "imperceptible motions" are only perceivable with the help of special tools and careful observation. The larger scales of imperceptible motions are difficult for humans to perceive for two reasons: Newton's laws of motion (particularly the third), which prevent the feeling of motion on a mass to which the observer is connected, and the lack of an obvious frame of reference that would allow individuals to easily see that they are moving. The smaller scales of these motions are too small to be detected conventionally with human senses.
=== Universe ===
Spacetime (the fabric of the universe) is expanding, meaning everything in the universe is stretching, like a rubber band. This motion is the most obscure, not involving physical movement but a fundamental change in the universe's nature. The primary source of verification of this expansion was provided by Edwin Hubble who demonstrated that all galaxies and distant astronomical objects were moving away from Earth, known as Hubble's law, predicted by a universal expansion.
=== Galaxy ===
The Milky Way Galaxy is moving through space and many astronomers believe the velocity of this motion to be approximately 600 kilometres per second (1,340,000 mph) relative to the observed locations of other nearby galaxies. Another reference frame is provided by the Cosmic microwave background. This frame of reference indicates that the Milky Way is moving at around 582 kilometres per second (1,300,000 mph).
=== Sun and Solar System ===
The Milky Way is rotating around its dense Galactic Center, and thus the Sun is moving in a circle within the galaxy's gravity. Away from the central bulge, on the outer rim, the typical stellar velocity is between 210 and 240 kilometres per second (470,000 and 540,000 mph). All planets and their moons move with the Sun. Thus, the Solar System is in motion.
=== Earth ===
The Earth is rotating or spinning around its axis, as evidenced by day and night; at the equator, the Earth has an eastward velocity of 0.4651 kilometres per second (1,040 mph). The Earth is also orbiting around the Sun in an orbital revolution. A complete orbit around the Sun takes one year, or about 365 days; it averages a speed of about 30 kilometres per second (67,000 mph).
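Both quoted speeds follow from dividing a circumference by a period. A rough sanity check, using standard values for Earth's radius and orbital radius that are not taken from the text:

```python
import math

# illustrative standard values, not taken from the text
equatorial_radius_km = 6378.137   # Earth's equatorial radius (WGS84)
sidereal_day_s = 86_164.0905      # one rotation relative to the stars, in seconds

speed_km_s = 2 * math.pi * equatorial_radius_km / sidereal_day_s
assert abs(speed_km_s - 0.4651) < 0.001   # matches the quoted 0.4651 km/s

orbit_radius_km = 149.6e6         # mean Earth-Sun distance (1 AU), km
year_s = 365.25 * 86_400          # one year in seconds
orbital_speed_km_s = 2 * math.pi * orbit_radius_km / year_s
assert 29 < orbital_speed_km_s < 31       # "about 30 kilometres per second"
```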
=== Continents ===
The Theory of Plate tectonics tells us that the continents are drifting on convection currents within the mantle, causing them to move across the surface of the planet at the slow speed of approximately 2.54 centimetres (1 in) per year. However, the velocities of plates range widely. The fastest-moving plates are the oceanic plates, with the Cocos Plate advancing at a rate of 75 millimetres (3.0 in) per year and the Pacific Plate moving 52–69 millimetres (2.0–2.7 in) per year. At the other extreme, the slowest-moving plate is the Eurasian Plate, progressing at a typical rate of about 21 millimetres (0.83 in) per year.
=== Internal body ===
The human heart regularly contracts to move blood throughout the body. Through the larger veins and arteries, blood has been found to travel at approximately 0.33 m/s, though considerable variation exists, and peak flows in the venae cavae have been measured between 0.1 and 0.45 metres per second (0.33 and 1.48 ft/s). Additionally, the smooth muscles of hollow internal organs are moving. The most familiar example is peristalsis, in which digested food is forced through the digestive tract. Though different foods travel through the body at different rates, an average speed through the human small intestine is 3.48 kilometres per hour (2.16 mph). The human lymphatic system is also constantly moving excess fluids, lipids, and immune-system-related products around the body. Lymph fluid has been found to move through a lymph capillary of the skin at approximately 0.0000097 m/s.
=== Cells ===
The cells of the human body have many structures and organelles that move throughout them. Cytoplasmic streaming is a way in which cells move molecular substances throughout the cytoplasm. Various motor proteins work as molecular motors within a cell, moving along the surface of cellular substrates such as microtubules. Motor proteins are typically powered by the hydrolysis of adenosine triphosphate (ATP) and convert chemical energy into mechanical work. Vesicles propelled by motor proteins have been found to have a velocity of approximately 0.00000152 m/s.
=== Particles ===
According to the laws of thermodynamics, all particles of matter are in constant random motion as long as the temperature is above absolute zero. Thus the molecules and atoms that make up the human body are vibrating, colliding, and moving. This motion can be detected as temperature; higher temperatures, which represent greater kinetic energy in the particles, feel warm to humans who sense the thermal energy transferring from the object being touched to their nerves. Similarly, when lower temperature objects are touched, the senses perceive the transfer of heat away from the body as a feeling of cold.
=== Subatomic particles ===
Within the standard atomic orbital model, electrons exist in a region around the nucleus of each atom. This region is called the electron cloud. According to Bohr's model of the atom, electrons have a high velocity, and the larger the nucleus they are orbiting the faster they would need to move. If electrons were to move about the electron cloud in strict paths the same way planets orbit the Sun, then electrons would be required to do so at speeds that would far exceed the speed of light. However, there is no reason that one must confine oneself to this strict conceptualization (that electrons move in paths the same way macroscopic objects do), rather one can conceptualize electrons to be 'particles' that capriciously exist within the bounds of the electron cloud. Inside the atomic nucleus, the protons and neutrons are also probably moving around due to the electrical repulsion of the protons and the presence of angular momentum of both particles.
== Light ==
Light moves at a speed of 299,792,458 m/s, or 299,792.458 kilometres per second (186,282.397 mi/s), in a vacuum. The speed of light in vacuum (denoted c) is also the speed of all massless particles and associated fields in a vacuum, and it is the upper limit on the speed at which energy, matter, information or causation can travel. The speed of light in vacuum is thus the upper limit for speed for all physical systems.
In addition, the speed of light is an invariant quantity: it has the same value, irrespective of the position or speed of the observer. This property makes the speed of light c a natural measurement unit for speed and a fundamental constant of nature.
In 2019, the speed of light was redefined alongside all seven SI base units using what the SI brochure calls "the explicit-constant formulation", where each "unit is defined indirectly by specifying explicitly an exact value for a well-recognized fundamental constant", as was done for the speed of light. A new, but completely equivalent, wording of the metre's definition was proposed: "The metre, symbol m, is the unit of length; its magnitude is set by fixing the numerical value of the speed of light in vacuum to be equal to exactly 299792458 when it is expressed in the SI unit m s−1." This implicit change to the speed of light was one of the changes incorporated in the 2019 revision of the SI, also termed the New SI.
=== Superluminal motion ===
Some motion appears to an observer to exceed the speed of light. Bursts of energy moving out along the relativistic jets emitted from some astronomical objects, such as radio galaxies and quasars, can have a proper motion that appears greater than the speed of light. These sources are thought to contain a black hole, responsible for the ejection of mass at high velocities. Light echoes can also produce apparent superluminal motion. This occurs owing to how motion is often calculated at long distances; such calculations often fail to account for the fact that the speed of light is finite. When measuring the movement of distant objects across the sky, there is a large time delay between what has been observed and what has occurred, due to the large distance the light from the distant object has to travel to reach us. The error in the naive calculation comes from the fact that when an object has a component of velocity directed towards the Earth, that time delay becomes smaller as the object moves closer to the Earth. This means that the apparent speed as calculated is greater than the actual speed. Correspondingly, if the object is moving away from the Earth, the calculation underestimates the actual speed.
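The effect described above is commonly sketched with the apparent transverse speed β_app = β sin θ / (1 − β cos θ), where β is the true speed as a fraction of c and θ is the angle to the line of sight (this formula is a standard textbook result, not stated in the text). For β close to 1 and small θ, the apparent speed exceeds c even though the true speed does not:

```python
import math

def apparent_speed(beta, theta):
    """Apparent transverse speed, in units of c, of a source moving at true
    speed beta (as a fraction of c) at angle theta to the line of sight."""
    return beta * math.sin(theta) / (1 - beta * math.cos(theta))

beta = 0.99                   # true speed: 99% of the speed of light
theta = math.radians(10)      # jet pointed nearly toward the observer
beta_apparent = apparent_speed(beta, theta)

assert beta_apparent > 1      # appears superluminal...
assert beta < 1               # ...though the true speed never is
```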
== Types of motion ==
Simple harmonic motion – motion in which the body oscillates such that the restoring force acting on it is directly proportional to the body's displacement; mathematically, the force is proportional to the negative of the displacement, the negative sign signifying the restoring nature of the force (e.g., that of a pendulum).
Linear motion – motion that follows a straight-line path, and whose displacement is exactly the same as its trajectory (also known as rectilinear motion).
Reciprocal motion
Brownian motion – the random movement of very small particles
Circular motion
Rotatory motion – motion about a fixed point (e.g., a Ferris wheel).
Curvilinear motion – motion along a curved path that may be planar or in three dimensions.
Rolling motion – (as of the wheel of a bicycle)
Oscillatory motion – (swinging from side to side)
Vibratory motion
Combination (or simultaneous) motions – a combination of two or more of the above motions
Projectile motion – uniform horizontal motion combined with vertically accelerated motion
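The last entry, projectile motion as uniform horizontal motion plus uniformly accelerated vertical motion, can be sketched directly (g and the launch parameters are illustrative values):

```python
import math

g = 9.81                             # gravitational acceleration, m/s^2
v0, angle = 20.0, math.radians(45)   # illustrative launch speed and angle
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)

def position(t):
    """Uniform horizontal motion plus uniformly accelerated vertical motion."""
    return vx * t, vy * t - 0.5 * g * t**2

flight_time = 2 * vy / g             # time until the height returns to zero
x_final, y_final = position(flight_time)

assert abs(y_final) < 1e-9                                     # lands at launch height
assert math.isclose(x_final, v0**2 * math.sin(2 * angle) / g)  # textbook range formula
```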
== Fundamental motions ==
Linear motion
Circular motion
Oscillation
Wave
Relative motion
Rotary motion
== See also ==
Deflection (physics) – Change in a moving object's trajectory due to a collision or force field
Flow (physics) – Aspects of fluid mechanics involving fluid flowPages displaying short descriptions of redirect targets
Kinematics – Branch of physics describing the motion of objects without considering forces
Simple machines – Mechanical device that changes the direction or magnitude of a forcePages displaying short descriptions of redirect targets
Kinematic chain – Mathematical model for a mechanical system
Power – Amount of energy transferred or converted per unit time
Machine – Powered mechanical devicePages displaying short descriptions of redirect targets
Microswimmer – Microscopic object able to traverse fluid
Motion (geometry) – Transformation of a geometric space preserving structure
Motion capture – Process of recording the movement of objects or people
Displacement – Vector relating the initial and the final positions of a moving pointPages displaying short descriptions of redirect targets
Translatory motion – Planar movement within a Euclidean space without rotationPages displaying short descriptions of redirect targets
== References ==
== External links ==
Feynman's lecture on motion
Media related to Motion at Wikimedia Commons
Calculus Made Easy is a book on infinitesimal calculus originally published in 1910 by Silvanus P. Thompson. The original text continues to be available as of 2008 from Macmillan and Co., but a 1998 update by Martin Gardner, available from St. Martin's Press, provides an introduction; three preliminary chapters explaining functions, limits, and derivatives; an appendix of recreational calculus problems; and notes for modern readers. Gardner changes "fifth form boys" to the more American-sounding (and gender-neutral) "high school students," updates many now-obsolescent mathematical notations or terms, and uses American decimal dollars and cents in currency examples.
Calculus Made Easy ignores the use of limits with its epsilon-delta definition, replacing it with a method of approximating (to arbitrary precision) directly to the correct answer in the infinitesimal spirit of Leibniz, now formally justified in modern nonstandard analysis and smooth infinitesimal analysis.
The first edition was published in 1910 and was reprinted four times. A second edition followed in 1914 and received fifteen reprints. A third edition, only slightly modified from the second, was reprinted six times by 1967. The original text is now in the public domain under US copyright law (although Macmillan's copyright under UK law is reproduced in the 1998 edition from St. Martin's Press). It can be freely accessed on Project Gutenberg.
== Further reading ==
Nature, Vol. 86, No. 2158 (March 9, 1911), p. 41. doi:10.1038/086041c0. A review of the first edition. Internet Archive; Google Books.
Carl Linderholm, College Mathematics Journal, Vol. 31, No. 1 (January 2000), pp. 77–79. A review of Silvanus P. Thompson, revised by Martin Gardner, Calculus Made Easy (St. Martin's Press, 1998).
== References ==
== External links ==
Silvanus P. Thompson, Calculus Made Easy: Being a Very-Simplest Introduction to Those Beautiful Methods of Reckoning which Are Generally Called by the Terrifying Names of the Differential Calculus and the Integral Calculus (New York: MacMillan Company, 2nd Ed., 1914). Also available as the (London: MacMillan and Co., Limited, 2nd Ed., 1914) printing, which isn't published under Thompson's name, but instead has the byline of "by F.R.S." (i.e., Fellow of the Royal Society).
Calculus Made Easy at Project Gutenberg (Re-typeset in LaTeX)
Calculus Made Easy public domain audiobook at LibriVox
Calculus Made Easy online
Public-domain modernized edition based on the Project Gutenberg text
In calculus, the differential represents the principal part of the change in a function y = f(x) with respect to changes in the independent variable. The differential dy is defined by
dy = f′(x) dx,
where f′(x) is the derivative of f with respect to x, and dx is an additional real variable (so that dy is a function of x and dx). The notation is such that the equation
dy = (dy/dx) dx
holds, where the derivative is represented in the Leibniz notation dy/dx, and this is consistent with regarding the derivative as the quotient of the differentials. One also writes
df(x) = f′(x) dx.
The precise meaning of the variables dy and dx depends on the context of the application and the required level of mathematical rigor. The domain of these variables may take on a particular geometrical significance if the differential is regarded as a particular differential form, or analytical significance if the differential is regarded as a linear approximation to the increment of a function. Traditionally, the variables dx and dy are considered to be very small (infinitesimal), and this interpretation is made rigorous in non-standard analysis.
== History and usage ==
The differential was first introduced via an intuitive or heuristic definition by Isaac Newton and furthered by Gottfried Leibniz, who thought of the differential dy as an infinitely small (or infinitesimal) change in the value y of the function, corresponding to an infinitely small change dx in the function's argument x. For that reason, the instantaneous rate of change of y with respect to x, which is the value of the derivative of the function, is denoted by the fraction dy/dx in what is called the Leibniz notation for derivatives. The quotient dy/dx is not infinitely small; rather it is a real number.
The use of infinitesimals in this form was widely criticized, for instance by the famous pamphlet The Analyst by Bishop Berkeley. Augustin-Louis Cauchy (1823) defined the differential without appeal to the atomism of Leibniz's infinitesimals. Instead, Cauchy, following d'Alembert, inverted the logical order of Leibniz and his successors: the derivative itself became the fundamental object, defined as a limit of difference quotients, and the differentials were then defined in terms of it. That is, one was free to define the differential dy by an expression
dy = f′(x) dx
in which dy and dx are simply new variables taking finite real values, not fixed infinitesimals as they had been for Leibniz.
According to Boyer (1959, p. 12), Cauchy's approach was a significant logical improvement over the infinitesimal approach of Leibniz because, instead of invoking the metaphysical notion of infinitesimals, the quantities dy and dx could now be manipulated in exactly the same manner as any other real quantities in a meaningful way. Cauchy's overall conceptual approach to differentials remains the standard one in modern analytical treatments, although the final word on rigor, a fully modern notion of the limit, was ultimately due to Karl Weierstrass.
In physical treatments, such as those applied to the theory of thermodynamics, the infinitesimal view still prevails. Courant & John (1999, p. 184) reconcile the physical use of infinitesimal differentials with the mathematical impossibility of them as follows. The differentials represent finite non-zero values that are smaller than the degree of accuracy required for the particular purpose for which they are intended. Thus "physical infinitesimals" need not appeal to a corresponding mathematical infinitesimal in order to have a precise sense.
Following twentieth-century developments in mathematical analysis and differential geometry, it became clear that the notion of the differential of a function could be extended in a variety of ways. In real analysis, it is more desirable to deal directly with the differential as the principal part of the increment of a function. This leads directly to the notion that the differential of a function at a point is a linear functional of an increment Δx. This approach allows the differential (as a linear map) to be developed for a variety of more sophisticated spaces, ultimately giving rise to such notions as the Fréchet or Gateaux derivative. Likewise, in differential geometry, the differential of a function at a point is a linear function of a tangent vector (an "infinitely small displacement"), which exhibits it as a kind of one-form: the exterior derivative of the function. In non-standard calculus, differentials are regarded as infinitesimals, which can themselves be put on a rigorous footing (see differential (infinitesimal)).
== Definition ==
The differential is defined in modern treatments of differential calculus as follows. The differential of a function f(x) of a single real variable x is the function df of two independent real variables x and Δx given by
df(x, Δx) := f′(x) Δx.
One or both of the arguments may be suppressed, i.e., one may see df(x) or simply df. If y = f(x), the differential may also be written as dy. Since dx(x, Δx) = Δx, it is conventional to write dx = Δx so that the following equality holds:
df(x) = f′(x) dx
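A small numerical sketch of this two-variable definition, using the illustrative function f(x) = x² (for which the error in the linear part is exactly (Δx)²):

```python
f = lambda x: x**2          # illustrative function
f_prime = lambda x: 2 * x   # its derivative

def df(x, dx):
    """The differential df(x, dx) = f'(x) dx, a function of two variables."""
    return f_prime(x) * dx

x, dx = 3.0, 0.01
delta_y = f(x + dx) - f(x)  # actual change in y: about 0.0601
dy = df(x, dx)              # linear (principal) part: 0.06

# for f(x) = x^2 the error delta_y - dy equals dx^2 exactly
assert abs((delta_y - dy) - dx**2) < 1e-12
```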
This notion of differential is broadly applicable when a linear approximation to a function is sought, in which the value of the increment Δx is small enough. More precisely, if f is a differentiable function at x, then the difference in y-values
Δy := f(x + Δx) − f(x)
satisfies
Δy = f′(x) Δx + ε = df(x) + ε
where the error ε in the approximation satisfies ε/Δx → 0 as Δx → 0. In other words, one has the approximate identity
Δy ≈ dy
in which the error can be made as small as desired relative to Δx by constraining Δx to be sufficiently small; that is to say,
(Δy − dy)/Δx → 0
as Δx → 0. For this reason, the differential of a function is known as the principal (linear) part in the increment of a function: the differential is a linear function of the increment Δx, and although the error ε may be nonlinear, it tends to zero rapidly as Δx tends to zero.
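The claim that the error ratio ε/Δx tends to zero can be checked numerically; here the illustrative function is f = sin with derivative cos, and the ratio shrinks roughly in proportion to Δx:

```python
import math

f, f_prime = math.sin, math.cos   # illustrative function and its derivative
x = 1.0

# the error ratio epsilon / dx should tend to zero with dx
ratios = []
for dx in (1e-1, 1e-2, 1e-3):
    delta_y = f(x + dx) - f(x)    # actual increment
    dy = f_prime(x) * dx          # differential (principal part)
    ratios.append(abs(delta_y - dy) / dx)

assert ratios[0] > ratios[1] > ratios[2]  # shrinks as dx shrinks
assert ratios[2] < 1e-3
```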
== Differentials in several variables ==
Following Goursat (1904, I, §15), for functions of more than one independent variable,
y = f(x_1, …, x_n),
the partial differential of y with respect to any one of the variables x_i is the principal part of the change in y resulting from a change dx_i in that one variable. The partial differential is therefore
(∂y/∂x_i) dx_i
involving the partial derivative of y with respect to x_i. The sum of the partial differentials with respect to all of the independent variables is the total differential
dy = (∂y/∂x_1) dx_1 + ⋯ + (∂y/∂x_n) dx_n,
which is the principal part of the change in y resulting from changes in the independent variables x_i.
More precisely, in the context of multivariable calculus, following Courant (1937b), if f is a differentiable function, then by the definition of differentiability, the increment
Δy := f(x_1 + Δx_1, …, x_n + Δx_n) − f(x_1, …, x_n)
   = (∂y/∂x_1) Δx_1 + ⋯ + (∂y/∂x_n) Δx_n + ε_1 Δx_1 + ⋯ + ε_n Δx_n
where the error terms ε_i tend to zero as the increments Δx_i jointly tend to zero. The total differential is then rigorously defined as
dy = (∂y/∂x_1) Δx_1 + ⋯ + (∂y/∂x_n) Δx_n.
Since, with this definition,
dx_i(Δx_1, …, Δx_n) = Δx_i,
one has
dy = (∂y/∂x_1) dx_1 + ⋯ + (∂y/∂x_n) dx_n.
As in the case of one variable, the approximate identity
dy ≈ Δy
holds, in which the total error can be made as small as desired relative to
√(Δx_1^2 + ⋯ + Δx_n^2)
by confining attention to sufficiently small increments.
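A sketch of the total differential for an illustrative function of two variables, f(x, y) = x²y, comparing it against the actual increment:

```python
# illustrative function of two variables and its partial derivatives
f = lambda x, y: x**2 * y
fx = lambda x, y: 2 * x * y     # partial derivative with respect to x
fy = lambda x, y: x**2          # partial derivative with respect to y

x0, y0 = 1.0, 2.0
dx, dy = 1e-3, 1e-3

total_differential = fx(x0, y0) * dx + fy(x0, y0) * dy
delta = f(x0 + dx, y0 + dy) - f(x0, y0)

# the difference is of second order in the increments
assert abs(delta - total_differential) < 1e-5
```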
=== Application of the total differential to error estimation ===
In measurement, the total differential is used in estimating the error Δf of a function f based on the errors Δx, Δy, … of the parameters x, y, …. Assuming that the interval is short enough for the change to be approximately linear:
Δf(x) = f′(x) Δx
and that all variables are independent, then for all variables,
Δf = f_x Δx + f_y Δy + ⋯
This is because the derivative f_x with respect to the particular parameter x gives the sensitivity of the function f to a change in x, in particular the error Δx. As they are assumed to be independent, the analysis describes the worst-case scenario. The absolute values of the component errors are used, because after simple computation, the derivative may have a negative sign. From this principle the error rules of summation, multiplication etc. are derived, e.g.:
For example, if f = ab, then Δf = |b| Δa + |a| Δb, and dividing by f = ab gives Δf/f = Δa/a + Δb/b. That is to say, in multiplication, the total relative error is the sum of the relative errors of the parameters.
To illustrate how this depends on the function considered, consider the case where the function is
f(a, b) = a ln b
instead. Then, it can be computed that the error estimate is
Δf/f = Δa/a + Δb/(b ln b)
with an extra ln b factor not found in the case of a simple product. This additional factor tends to make the error smaller, as the denominator b ln b is larger than a bare b.
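The two error estimates can be compared numerically; the values of a, b and their errors below are arbitrary illustrative choices:

```python
import math

a, b = 2.0, 10.0        # illustrative measured values
da, db = 0.01, 0.05     # illustrative measurement errors

# product f = a*b: the relative errors simply add
rel_err_product = da / a + db / b

# f = a*ln(b): the b contribution is damped by the extra ln(b) factor
rel_err_a_ln_b = da / a + db / (b * math.log(b))

assert rel_err_a_ln_b < rel_err_product   # ln b > 1 here, so the error shrinks
```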
== Higher-order differentials ==
Higher-order differentials of a function y = f(x) of a single variable x can be defined via:
d^2 y = d(dy) = d(f′(x) dx) = (df′(x)) dx = f″(x) (dx)^2,
and, in general,
d^n y = f^(n)(x) (dx)^n.
Informally, this motivates Leibniz's notation for higher-order derivatives
f^(n)(x) = d^n f / dx^n.
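The second-order differential d^2 y = f″(x)(dx)^2 can be checked against a second forward difference, f(x + 2dx) − 2f(x + dx) + f(x), which agrees with it to leading order; here the illustrative function is f = exp, so that f″ = exp as well:

```python
import math

f = math.exp          # illustrative function; its second derivative is also exp
x, dx = 0.5, 1e-4

# second forward difference approximates d^2 y = f''(x) (dx)^2
second_difference = f(x + 2 * dx) - 2 * f(x + dx) + f(x)
d2y = math.exp(x) * dx**2

assert abs(second_difference - d2y) / d2y < 1e-3   # agreement to leading order
```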
When the independent variable x itself is permitted to depend on other variables, then the expression becomes more complicated, as it must include also higher order differentials in x itself. Thus, for instance,
d^2 y = f″(x) (dx)^2 + f′(x) d^2 x
d^3 y = f‴(x) (dx)^3 + 3 f″(x) dx d^2 x + f′(x) d^3 x
and so forth.
Similar considerations apply to defining higher order differentials of functions of several variables. For example, if f is a function of two variables x and y, then
d^n f = Σ_{k=0}^n (n choose k) (∂^n f / (∂x^k ∂y^(n−k))) (dx)^k (dy)^(n−k),
where (n choose k) is a binomial coefficient. In more variables, an analogous expression holds, but with an appropriate multinomial expansion rather than binomial expansion.
Higher order differentials in several variables also become more complicated when the independent variables are themselves allowed to depend on other variables. For instance, for a function f of x and y which are allowed to depend on auxiliary variables, one has
d^2 f = (∂^2 f/∂x^2 (dx)^2 + 2 ∂^2 f/∂x∂y dx dy + ∂^2 f/∂y^2 (dy)^2) + ∂f/∂x d^2 x + ∂f/∂y d^2 y.
Because of this notational awkwardness, the use of higher order differentials was roundly criticized by Hadamard (1935), who concluded:
Enfin, que signifie ou que représente l'égalité
d^2 z = r dx^2 + 2s dx dy + t dy^2 ?
À mon avis, rien du tout.
That is: Finally, what is meant, or represented, by the equality [...]? In my opinion, nothing at all. In spite of this skepticism, higher order differentials did emerge as an important tool in analysis.
In these contexts, the n-th order differential of the function f applied to an increment Δx is defined by
d^n f(x, Δx) = (d^n/dt^n) f(x + tΔx) |_{t=0}
or an equivalent expression, such as
lim_{t→0} Δ^n_{tΔx} f / t^n
where Δ^n_{tΔx} f is an n-th forward difference with increment tΔx.
This definition makes sense as well if f is a function of several variables (for simplicity taken here as a vector argument). Then the n-th differential defined in this way is a homogeneous function of degree n in the vector increment Δx. Furthermore, the Taylor series of f at the point x is given by
f(x + Δx) ~ f(x) + df(x, Δx) + (1/2) d^2 f(x, Δx) + ⋯ + (1/n!) d^n f(x, Δx) + ⋯
The higher order Gateaux derivative generalizes these considerations to infinite dimensional spaces.
== Properties ==
A number of properties of the differential follow in a straightforward manner from the corresponding properties of the derivative, partial derivative, and total derivative. These include:
Linearity: For constants a and b and differentiable functions f and g,
d(af + bg) = a df + b dg.
Product rule: For two differentiable functions f and g,
d(fg) = f dg + g df.
An operation d with these two properties is known in abstract algebra as a derivation. They imply the power rule
d(f^n) = n f^(n−1) df.
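These identities are exact, so they can be verified with illustrative polynomial choices of f and g (all names here are example choices, not a library API):

```python
# illustrative functions and their differentials df = f'(x) dx
f = lambda x: x**3
g = lambda x: x**2 + 1
df = lambda x, dx: 3 * x**2 * dx
dg = lambda x, dx: 2 * x * dx

x, dx = 2.0, 0.5
a, b = 3.0, -4.0

# linearity: d(a f + b g) = a df + b dg
d_linear = (a * 3 * x**2 + b * 2 * x) * dx          # derivative of a*f + b*g, times dx
assert d_linear == a * df(x, dx) + b * dg(x, dx)

# product rule: d(fg) = f dg + g df
d_product = (3 * x**2 * g(x) + f(x) * 2 * x) * dx   # derivative of f*g, times dx
assert d_product == f(x) * dg(x, dx) + g(x) * df(x, dx)

# power rule for n = 2: d(f^2) = 2 f df, and (x^3)^2 = x^6 has derivative 6 x^5
d_square = 2 * f(x) * df(x, dx)
assert d_square == (6 * x**5) * dx
```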
In addition, various forms of the chain rule hold, in increasing level of generality:
If y = f(u) is a differentiable function of the variable u and u = g(x) is a differentiable function of x, then
dy = f′(u) du = f′(g(x)) g′(x) dx.
If y = f(x1, ..., xn) and all of the variables x1, ..., xn depend on another variable t, then by the chain rule for partial derivatives, one has
dy = (dy/dt) dt = (∂y/∂x_1) dx_1 + ⋯ + (∂y/∂x_n) dx_n
   = (∂y/∂x_1)(dx_1/dt) dt + ⋯ + (∂y/∂x_n)(dx_n/dt) dt.
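A sketch of this chain rule with illustrative intermediate variables x1 = cos t and x2 = sin t, for which y = x1·x2 = (1/2) sin 2t and hence dy = cos(2t) dt:

```python
import math

# illustrative intermediate variables: x1(t) = cos t, x2(t) = sin t,
# and y = x1 * x2, which equals (1/2) sin(2t)
t, dt = 0.3, 1.0          # dy is linear in dt, so dt may be any finite value

x1, x2 = math.cos(t), math.sin(t)
dx1 = -math.sin(t) * dt   # dx1 = (dx1/dt) dt
dx2 = math.cos(t) * dt    # dx2 = (dx2/dt) dt

# dy = (dy/dx1) dx1 + (dy/dx2) dx2, with partials dy/dx1 = x2 and dy/dx2 = x1
dy = x2 * dx1 + x1 * dx2

# direct route: y = (1/2) sin(2t) gives dy = cos(2t) dt
assert math.isclose(dy, math.cos(2 * t) * dt)
```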
Heuristically, the chain rule for several variables can itself be understood by dividing through both sides of this equation by the infinitely small quantity dt.
More general analogous expressions hold, in which the intermediate variables xi depend on more than one variable.
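The several-variable form can be illustrated the same way. In this sketch (my own example: y = x1 x2 with x1 = cos t and x2 = sin t), the partial-derivative form of dy is compared with the actual increment of y over a small dt:

```python
import math

t, dt = 0.4, 1e-6
x1, x2 = math.cos(t), math.sin(t)
# dy = (∂y/∂x1)(dx1/dt) dt + (∂y/∂x2)(dx2/dt) dt, with y = x1 * x2
dy = x2 * (-math.sin(t)) * dt + x1 * math.cos(t) * dt
increment = math.cos(t + dt) * math.sin(t + dt) - x1 * x2
# agreement up to O(dt**2)
```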
== General formulation ==
A consistent notion of differential can be developed for a function f : Rn → Rm between two Euclidean spaces. Let x,Δx ∈ Rn be a pair of Euclidean vectors. The increment in the function f is
{\displaystyle \Delta f=f(\mathbf {x} +\Delta \mathbf {x} )-f(\mathbf {x} ).}
If there exists an m × n matrix A such that
{\displaystyle \Delta f=A\Delta \mathbf {x} +\|\Delta \mathbf {x} \|{\boldsymbol {\varepsilon }}}
in which the vector ε → 0 as Δx → 0, then f is by definition differentiable at the point x. The matrix A is sometimes known as the Jacobian matrix, and the linear transformation that associates to the increment Δx ∈ Rn the vector AΔx ∈ Rm is, in this general setting, known as the differential df(x) of f at the point x. This is precisely the Fréchet derivative, and the same construction can be made to work for a function between any Banach spaces.
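A sketch of this definition in code (my own illustration; the Jacobian is estimated here by forward differences rather than computed exactly): the differential df(x, Δx) = A Δx reproduces the increment Δf up to terms of order ‖Δx‖²:

```python
import numpy as np

def differential(f, x, dx, eps=1e-6):
    # df(x, dx) = A dx, where A is the Jacobian of f at x,
    # estimated column by column with forward differences
    fx = f(x)
    A = np.column_stack([(f(x + eps * e) - fx) / eps for e in np.eye(len(x))])
    return A @ dx

f = lambda v: np.array([v[0] * v[1], np.sin(v[0]) + v[1] ** 2])
x = np.array([1.0, 2.0])
dx = np.array([1e-3, -2e-3])
lin = differential(f, x, dx)   # linear part of the increment
inc = f(x + dx) - f(x)         # actual increment Δf
```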
Another fruitful point of view is to define the differential directly as a kind of directional derivative:
{\displaystyle df(\mathbf {x} ,\mathbf {h} )=\lim _{t\to 0}{\frac {f(\mathbf {x} +t\mathbf {h} )-f(\mathbf {x} )}{t}}=\left.{\frac {d}{dt}}f(\mathbf {x} +t\mathbf {h} )\right|_{t=0},}
which is the approach already taken for defining higher order differentials (and is most nearly the definition set forth by Cauchy). If t represents time and x position, then h represents a velocity instead of a displacement as we have heretofore regarded it. This yields yet another refinement of the notion of differential: that it should be a linear function of a kinematic velocity. The set of all velocities through a given point of space is known as the tangent space, and so df gives a linear function on the tangent space: a differential form. With this interpretation, the differential of f is known as the exterior derivative, and has broad application in differential geometry because the notion of velocities and the tangent space makes sense on any differentiable manifold. If, in addition, the output value of f also represents a position (in a Euclidean space), then a dimensional analysis confirms that the output value of df must be a velocity. If one treats the differential in this manner, then it is known as the pushforward since it "pushes" velocities from a source space into velocities in a target space.
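This limit definition can be sketched directly as a difference quotient (my own illustration; the particular f, x, and h are arbitrary choices):

```python
import numpy as np

def df(f, x, h, t=1e-6):
    # central-difference estimate of df(x, h) = d/dt f(x + t h) at t = 0
    return (f(x + t * h) - f(x - t * h)) / (2 * t)

f = lambda v: v[0] ** 2 + 3 * v[1]
x = np.array([1.0, 2.0])
h = np.array([0.5, -1.0])
val = df(f, x, h)   # gradient [2, 3] dotted with h: 2*0.5 + 3*(-1) = -2
```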
== Other approaches ==
Although the notion of having an infinitesimal increment dx is not well-defined in modern mathematical analysis, a variety of techniques exist for defining the infinitesimal differential so that the differential of a function can be handled in a manner that does not clash with the Leibniz notation. These include:
Defining the differential as a kind of differential form, specifically the exterior derivative of a function. The infinitesimal increments are then identified with vectors in the tangent space at a point. This approach is popular in differential geometry and related fields, because it readily generalizes to mappings between differentiable manifolds.
Differentials as nilpotent elements of commutative rings. This approach is popular in algebraic geometry.
Differentials in smooth models of set theory. This approach is known as synthetic differential geometry or smooth infinitesimal analysis and is closely related to the algebraic geometric approach, except that ideas from topos theory are used to hide the mechanisms by which nilpotent infinitesimals are introduced.
Differentials as infinitesimals in hyperreal number systems, which are extensions of the real numbers which contain invertible infinitesimals and infinitely large numbers. This is the approach of nonstandard analysis pioneered by Abraham Robinson.
== Examples and applications ==
Differentials may be effectively used in numerical analysis to study the propagation of experimental errors in a calculation, and thus the overall numerical stability of a problem (Courant 1937a). Suppose that the variable x represents the outcome of an experiment and y is the result of a numerical computation applied to x. The question is to what extent errors in the measurement of x influence the outcome of the computation of y. If the x is known to within Δx of its true value, then Taylor's theorem gives the following estimate on the error Δy in the computation of y:
{\displaystyle \Delta y=f'(x)\Delta x+{\frac {(\Delta x)^{2}}{2}}f''(\xi )}
where ξ = x + θΔx for some 0 < θ < 1. If Δx is small, then the second order term is negligible, so that Δy is, for practical purposes, well-approximated by dy = f'(x) Δx.
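A quick numerical sketch of this error estimate (my own example with f = sin): the first-order term dy = f′(x) Δx approximates the true propagated error to within the second-order remainder:

```python
import math

x, dx = 1.0, 0.01           # measured value and measurement uncertainty
dy = math.cos(x) * dx       # first-order estimate dy = f'(x) Δx for f = sin
actual = math.sin(x + dx) - math.sin(x)   # true propagated error Δy
# the discrepancy is bounded by (Δx)**2/2 * max|f''|, about 5e-5 here
```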
The differential is often useful to rewrite a differential equation
{\displaystyle {\frac {dy}{dx}}=g(x)}
in the form
{\displaystyle dy=g(x)\,dx,}
in particular when one wants to separate the variables.
== Notes ==
== See also ==
Notation for differentiation
== References ==
Boyer, Carl B. (1959), The history of the calculus and its conceptual development, New York: Dover Publications, MR 0124178.
Cauchy, Augustin-Louis (1823), Résumé des Leçons données à l'Ecole royale polytechnique sur les applications du calcul infinitésimal, archived from the original on 2007-07-08, retrieved 2009-08-19.
Courant, Richard (1937a), Differential and integral calculus. Vol. I, Wiley Classics Library, New York: John Wiley & Sons (published 1988), ISBN 978-0-471-60842-4, MR 1009558.
Courant, Richard (1937b), Differential and integral calculus. Vol. II, Wiley Classics Library, New York: John Wiley & Sons (published 1988), ISBN 978-0-471-60840-0, MR 1009559.
Courant, Richard; John, Fritz (1999), Introduction to Calculus and Analysis Volume 1, Classics in Mathematics, Berlin, New York: Springer-Verlag, ISBN 3-540-65058-X, MR 1746554
Eisenbud, David; Harris, Joe (1998), The Geometry of Schemes, Springer-Verlag, ISBN 0-387-98637-5.
Fréchet, Maurice (1925), "La notion de différentielle dans l'analyse générale", Annales Scientifiques de l'École Normale Supérieure, Série 3, 42: 293–323, doi:10.24033/asens.766, ISSN 0012-9593, MR 1509268.
Goursat, Édouard (1904), A course in mathematical analysis: Vol 1: Derivatives and differentials, definite integrals, expansion in series, applications to geometry, E. R. Hedrick, New York: Dover Publications (published 1959), MR 0106155.
Hadamard, Jacques (1935), "La notion de différentiel dans l'enseignement", Mathematical Gazette, XIX (236): 341–342, doi:10.2307/3606323, JSTOR 3606323.
Hardy, Godfrey Harold (1908), A Course of Pure Mathematics, Cambridge University Press, ISBN 978-0-521-09227-2.
Hille, Einar; Phillips, Ralph S. (1974), Functional analysis and semi-groups, Providence, R.I.: American Mathematical Society, MR 0423094.
Itô, Kiyosi (1993), Encyclopedic Dictionary of Mathematics (2nd ed.), MIT Press, ISBN 978-0-262-59020-4.
Kline, Morris (1977), "Chapter 13: Differentials and the law of the mean", Calculus: An intuitive and physical approach, John Wiley and Sons.
Kline, Morris (1972), Mathematical thought from ancient to modern times (3rd ed.), Oxford University Press (published 1990), ISBN 978-0-19-506136-9
Keisler, H. Jerome (1986), Elementary Calculus: An Infinitesimal Approach (2nd ed.).
Kock, Anders (2006), Synthetic Differential Geometry (PDF) (2nd ed.), Cambridge University Press.
Moerdijk, I.; Reyes, G.E. (1991), Models for Smooth Infinitesimal Analysis, Springer-Verlag.
Robinson, Abraham (1996), Non-standard analysis, Princeton University Press, ISBN 978-0-691-04490-3.
Tolstov, G.P. (2001) [1994], "Differential", Encyclopedia of Mathematics, EMS Press.
== External links ==
Differential Of A Function at Wolfram Demonstrations Project
The parallelogram of forces is a method for solving (or visualizing) the results of applying two forces to an object.
When more than two forces are involved, the geometry is no longer a parallelogram, but the same principles apply to a polygon of forces.
The resultant force due to the application of a number of forces can be found geometrically by drawing arrows for each force.
The parallelogram of forces is a graphical manifestation of the addition of vectors.
== Newton's proof ==
=== Preliminary: the parallelogram of velocity ===
Suppose a particle moves at a uniform rate along a line from A to B (Figure 2) in a given time (say, one second), while in the same time, the line AB moves uniformly from its position at AB to a position at DC, remaining parallel to its original orientation throughout. Accounting for both motions, the particle traces the line AC. Because a displacement in a given time is a measure of velocity, the length of AB is a measure of the particle's velocity along AB, the length of AD is a measure of the line's velocity along AD, and the length of AC is a measure of the particle's velocity along AC. The particle's motion is the same as if it had moved with a single velocity along AC.
=== Newton's proof of the parallelogram of force ===
Suppose two forces act on a particle at the origin (the "tails" of the vectors) of Figure 1. Let the lengths of the vectors F1 and F2 represent the velocities the two forces could produce in the particle by acting for a given time, and let the direction of each represent the direction in which they act. Each force acts independently and will produce its particular velocity whether the other force acts or not. At the end of the given time, the particle has both velocities. By the above proof, they are equivalent to a single velocity, Fnet. By Newton's second law, this vector is also a measure of the force which would produce that velocity, thus the two forces are equivalent to a single force.
== Bernoulli's proof for perpendicular vectors ==
We model forces as Euclidean vectors or members of {\displaystyle \mathbb {R} ^{2}}. Our first assumption is that the resultant of two forces is in fact another force, so that for any two forces {\displaystyle \mathbf {F} ,\mathbf {G} \in \mathbb {R} ^{2}} there is another force {\displaystyle \mathbf {F} \oplus \mathbf {G} \in \mathbb {R} ^{2}}.
Our final assumption is that the resultant of two forces doesn't change when rotated. If {\displaystyle R:\mathbb {R} ^{2}\to \mathbb {R} ^{2}} is any rotation (any orthogonal map for the usual vector space structure of {\displaystyle \mathbb {R} ^{2}} with {\displaystyle \det R=1}), then for all forces {\displaystyle \mathbf {F} ,\mathbf {G} \in \mathbb {R} ^{2}}:
{\displaystyle R\left(\mathbf {F} \oplus \mathbf {G} \right)=R\left(\mathbf {F} \right)\oplus R\left(\mathbf {G} \right)}
Consider two perpendicular forces {\displaystyle \mathbf {F} _{1}} of length {\displaystyle a} and {\displaystyle \mathbf {F} _{2}} of length {\displaystyle b}, with {\displaystyle x} being the length of {\displaystyle \mathbf {F} _{1}\oplus \mathbf {F} _{2}}.
Let {\displaystyle \mathbf {G} _{1}:={\tfrac {a^{2}}{x^{2}}}\left(\mathbf {F} _{1}\oplus \mathbf {F} _{2}\right)} and {\displaystyle \mathbf {G} _{2}:={\tfrac {a}{x}}R(\mathbf {F} _{2})}, where {\displaystyle R} is the rotation between {\displaystyle \mathbf {F} _{1}} and {\displaystyle \mathbf {F} _{1}\oplus \mathbf {F} _{2}}, so {\displaystyle \mathbf {G_{1}} ={\tfrac {a}{x}}R\left(\mathbf {F} _{1}\right)}. Under the invariance of the rotation, we get
{\displaystyle \mathbf {F} _{1}={\frac {x}{a}}R^{-1}\left(\mathbf {G} _{1}\right)={\frac {a}{x}}R^{-1}\left(\mathbf {F} _{1}\oplus \mathbf {F} _{2}\right)={\frac {a}{x}}R^{-1}\left(\mathbf {F} _{1}\right)\oplus {\frac {a}{x}}R^{-1}\left(\mathbf {F} _{2}\right)=\mathbf {G} _{1}\oplus \mathbf {G} _{2}}
Similarly, consider two more forces {\displaystyle \mathbf {H} _{1}:=-\mathbf {G} _{2}} and {\displaystyle \mathbf {H} _{2}:={\tfrac {b^{2}}{x^{2}}}\left(\mathbf {F} _{1}\oplus \mathbf {F} _{2}\right)}. Let {\displaystyle T} be the rotation from {\displaystyle \mathbf {F} _{1}} to {\displaystyle \mathbf {H} _{1}}: {\displaystyle \mathbf {H} _{1}={\tfrac {b}{x}}T\left(\mathbf {F} _{1}\right)}, which by inspection makes {\displaystyle \mathbf {H} _{2}={\tfrac {b}{x}}T\left(\mathbf {F} _{2}\right)}.
{\displaystyle \mathbf {F} _{2}={\frac {x}{b}}T^{-1}\left(\mathbf {H} _{2}\right)={\frac {b}{x}}T^{-1}\left(\mathbf {F} _{1}\oplus \mathbf {F} _{2}\right)={\frac {b}{x}}T^{-1}\left(\mathbf {F} _{1}\right)\oplus {\frac {b}{x}}T^{-1}\left(\mathbf {F} _{2}\right)=\mathbf {H} _{1}\oplus \mathbf {H_{2}} }
Applying these two equations:
{\displaystyle \mathbf {F} _{1}\oplus \mathbf {F} _{2}=\left(\mathbf {G} _{1}\oplus \mathbf {G} _{2}\right)\oplus \left(\mathbf {H} _{1}\oplus \mathbf {H_{2}} \right)=\left(\mathbf {G} _{1}\oplus \mathbf {G} _{2}\right)\oplus \left(-\mathbf {G} _{2}\oplus \mathbf {H} _{2}\right)=\mathbf {G} _{1}\oplus \mathbf {H} _{2}}
Since {\displaystyle \mathbf {G} _{1}} and {\displaystyle \mathbf {H} _{2}} both lie along {\displaystyle \mathbf {F} _{1}\oplus \mathbf {F} _{2}}, their lengths are equal:
{\displaystyle x=\left|\mathbf {F} _{1}\oplus \mathbf {F} _{2}\right|=\left|\mathbf {G} _{1}\oplus \mathbf {H} _{2}\right|={\tfrac {a^{2}}{x}}+{\tfrac {b^{2}}{x}}}
{\displaystyle x={\sqrt {a^{2}+b^{2}}}}
which implies that {\displaystyle \mathbf {F} _{1}\oplus \mathbf {F} _{2}=a\mathbf {e} _{1}\oplus b\mathbf {e} _{2}} has length {\displaystyle {\sqrt {a^{2}+b^{2}}}}, which is the length of {\displaystyle a\mathbf {e} _{1}+b\mathbf {e} _{2}}. Thus for the case where {\displaystyle \mathbf {F} _{1}} and {\displaystyle \mathbf {F} _{2}} are perpendicular, {\displaystyle \mathbf {F} _{1}\oplus \mathbf {F} _{2}=\mathbf {F} _{1}+\mathbf {F} _{2}}. However, when combining our two sets of auxiliary forces we used the associativity of {\displaystyle \oplus }. Using this additional assumption, we will form an additional proof below.
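As a minimal numerical illustration of the conclusion (not part of the proof; the concrete lengths a = 3 and b = 4 are my own choice), ordinary vector addition of two perpendicular forces indeed gives a resultant of length √(a² + b²):

```python
import math

F1 = (3.0, 0.0)   # force of length a = 3 along e1
F2 = (0.0, 4.0)   # perpendicular force of length b = 4 along e2
resultant = (F1[0] + F2[0], F1[1] + F2[1])
length = math.hypot(resultant[0], resultant[1])   # sqrt(3**2 + 4**2) = 5.0
```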
== Algebraic proof of the parallelogram of force ==
We model forces as Euclidean vectors or members of {\displaystyle \mathbb {R} ^{2}}. Our first assumption is that the resultant of two forces is in fact another force, so that for any two forces {\displaystyle \mathbf {F} ,\mathbf {G} \in \mathbb {R} ^{2}} there is another force {\displaystyle \mathbf {F} \oplus \mathbf {G} \in \mathbb {R} ^{2}}. We assume commutativity, as these are forces being applied concurrently, so the order shouldn't matter: {\displaystyle \mathbf {F} \oplus \mathbf {G} =\mathbf {G} \oplus \mathbf {F} }.
Consider the map
{\displaystyle (a,b)=a\mathbf {e} _{1}+b\mathbf {e} _{2}\mapsto a\mathbf {e} _{1}\oplus b\mathbf {e} _{2}}
If {\displaystyle \oplus } is associative, then this map will be linear. Since it also sends {\displaystyle \mathbf {e} _{1}} to {\displaystyle \mathbf {e} _{1}} and {\displaystyle \mathbf {e} _{2}} to {\displaystyle \mathbf {e} _{2}}, it must also be the identity map. Thus {\displaystyle \oplus } must be equivalent to the normal vector addition operator.
== Controversy ==
The mathematical proof of the parallelogram of force is not generally accepted to be mathematically valid. Various proofs have been developed (chiefly Duchayla's and Poisson's), and these too drew objections. What was questioned was not whether the parallelogram of force was true, but why it was true. Today the parallelogram of force is accepted as an empirical fact, not reducible to Newton's first principles.
== See also ==
Newton's Mathematical Principles of Natural Philosophy, Axioms or Laws of Motion, Corollary I, at Wikisource
Vector (geometric)
Net force
== References ==
The Gauss–Newton algorithm is used to solve non-linear least squares problems, which is equivalent to minimizing a sum of squared function values. It is an extension of Newton's method for finding a minimum of a non-linear function. Since a sum of squares must be nonnegative, the algorithm can be viewed as using Newton's method to iteratively approximate zeroes of the components of the sum, and thus minimizing the sum. In this sense, the algorithm is also an effective method for solving overdetermined systems of equations. It has the advantage that second derivatives, which can be challenging to compute, are not required.
Non-linear least squares problems arise, for instance, in non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations.
The method is named after the mathematicians Carl Friedrich Gauss and Isaac Newton, and first appeared in Gauss's 1809 work Theoria motus corporum coelestium in sectionibus conicis solem ambientium.
== Description ==
Given {\displaystyle m} functions {\displaystyle {\textbf {r}}=(r_{1},\ldots ,r_{m})} (often called residuals) of {\displaystyle n} variables {\displaystyle {\boldsymbol {\beta }}=(\beta _{1},\ldots ,\beta _{n}),} with {\displaystyle m\geq n,} the Gauss–Newton algorithm iteratively finds the value of {\displaystyle {\boldsymbol {\beta }}} that minimizes the sum of squares
{\displaystyle S({\boldsymbol {\beta }})=\sum _{i=1}^{m}r_{i}({\boldsymbol {\beta }})^{2}.}
Starting with an initial guess {\displaystyle {\boldsymbol {\beta }}^{(0)}} for the minimum, the method proceeds by the iterations
{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}-\left(\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {J_{r}} \right)^{-1}\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right),}
where, if r and β are column vectors, the entries of the Jacobian matrix are
{\displaystyle \left(\mathbf {J_{r}} \right)_{ij}={\frac {\partial r_{i}\left({\boldsymbol {\beta }}^{(s)}\right)}{\partial \beta _{j}}},}
and the symbol {\displaystyle ^{\operatorname {T} }} denotes the matrix transpose.
At each iteration, the update {\displaystyle \Delta ={\boldsymbol {\beta }}^{(s+1)}-{\boldsymbol {\beta }}^{(s)}}
can be found by rearranging the previous equation in the following two steps:
{\displaystyle \Delta =-\left(\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {J_{r}} \right)^{-1}\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right)}
{\displaystyle \mathbf {J_{r}} ^{\operatorname {T} }\mathbf {J_{r}} \Delta =-\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right)}
With the substitutions {\textstyle A=\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {J_{r}} }, {\displaystyle \mathbf {b} =-\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right)}, and {\displaystyle \mathbf {x} =\Delta }, this turns into the conventional matrix equation {\displaystyle A\mathbf {x} =\mathbf {b} }, which can then be solved by a variety of methods (see Notes).
If m = n, the iteration simplifies to
{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}-\left(\mathbf {J_{r}} \right)^{-1}\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right),}
which is a direct generalization of Newton's method in one dimension.
In data fitting, where the goal is to find the parameters {\displaystyle {\boldsymbol {\beta }}} such that a given model function {\displaystyle \mathbf {f} (\mathbf {x} ,{\boldsymbol {\beta }})} best fits some data points {\displaystyle (x_{i},y_{i})}, the functions {\displaystyle r_{i}} are the residuals:
{\displaystyle r_{i}({\boldsymbol {\beta }})=y_{i}-f\left(x_{i},{\boldsymbol {\beta }}\right).}
Then, the Gauss–Newton method can be expressed in terms of the Jacobian {\displaystyle \mathbf {J_{f}} =-\mathbf {J_{r}} } of the function {\displaystyle \mathbf {f} } as
{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}+\left(\mathbf {J_{f}} ^{\operatorname {T} }\mathbf {J_{f}} \right)^{-1}\mathbf {J_{f}} ^{\operatorname {T} }\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right).}
Note that {\displaystyle \left(\mathbf {J_{f}} ^{\operatorname {T} }\mathbf {J_{f}} \right)^{-1}\mathbf {J_{f}} ^{\operatorname {T} }} is the left pseudoinverse of {\displaystyle \mathbf {J_{f}} }.
== Notes ==
The assumption m ≥ n in the algorithm statement is necessary, as otherwise the matrix {\displaystyle \mathbf {J_{r}} ^{T}\mathbf {J_{r}} } is not invertible and the normal equations cannot be solved (at least uniquely).
The Gauss–Newton algorithm can be derived by linearly approximating the vector of functions ri. Using Taylor's theorem, we can write at every iteration:
{\displaystyle \mathbf {r} ({\boldsymbol {\beta }})\approx \mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right)+\mathbf {J_{r}} \left({\boldsymbol {\beta }}^{(s)}\right)\Delta }
with {\displaystyle \Delta ={\boldsymbol {\beta }}-{\boldsymbol {\beta }}^{(s)}}. The task of finding {\displaystyle \Delta } minimizing the sum of squares of the right-hand side, i.e.
{\displaystyle \min \left\|\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right)+\mathbf {J_{r}} \left({\boldsymbol {\beta }}^{(s)}\right)\Delta \right\|_{2}^{2},}
is a linear least-squares problem, which can be solved explicitly, yielding the normal equations in the algorithm.
The normal equations are n simultaneous linear equations in the unknown increments {\displaystyle \Delta }. They may be solved in one step, using Cholesky decomposition, or, better, the QR factorization of {\displaystyle \mathbf {J_{r}} }. For large systems, an iterative method, such as the conjugate gradient method, may be more efficient. If there is a linear dependence between columns of Jr, the iterations will fail, as {\displaystyle \mathbf {J_{r}} ^{T}\mathbf {J_{r}} } becomes singular.
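The two direct solution strategies mentioned here can be compared on a small random instance (a sketch with arbitrary data, not from the source): solving the normal equations directly and using the QR factorization of Jr give the same increment:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((7, 2))   # Jacobian J_r (m = 7 residuals, n = 2 parameters)
r = rng.standard_normal(7)        # residual vector

# normal equations: (J^T J) Δ = -J^T r
delta_normal = np.linalg.solve(J.T @ J, -J.T @ r)

# QR factorization (numerically better conditioned): R Δ = -Q^T r
Q, R = np.linalg.qr(J)
delta_qr = np.linalg.solve(R, -Q.T @ r)
# both give the same Gauss–Newton increment
```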
When {\displaystyle \mathbf {r} } is complex, {\displaystyle \mathbf {r} :\mathbb {C} ^{n}\to \mathbb {C} }, the conjugate form should be used: {\displaystyle \left({\overline {\mathbf {J_{r}} }}^{\operatorname {T} }\mathbf {J_{r}} \right)^{-1}{\overline {\mathbf {J_{r}} }}^{\operatorname {T} }}.
== Example ==
In this example, the Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and model's predictions.
In a biology experiment studying the relation between substrate concentration [S] and reaction rate in an enzyme-mediated reaction, the data in the following table were obtained.
It is desired to find a curve (model function) of the form
{\displaystyle {\text{rate}}={\frac {V_{\text{max}}\cdot [S]}{K_{M}+[S]}}}
that fits best the data in the least-squares sense, with the parameters {\displaystyle V_{\text{max}}} and {\displaystyle K_{M}} to be determined.
Denote by {\displaystyle x_{i}} and {\displaystyle y_{i}} the values of [S] and rate respectively, with {\displaystyle i=1,\dots ,7}. Let {\displaystyle \beta _{1}=V_{\text{max}}} and {\displaystyle \beta _{2}=K_{M}}. We will find {\displaystyle \beta _{1}} and {\displaystyle \beta _{2}} such that the sum of squares of the residuals
{\displaystyle r_{i}=y_{i}-{\frac {\beta _{1}x_{i}}{\beta _{2}+x_{i}}},\quad (i=1,\dots ,7)}
is minimized.
The Jacobian {\displaystyle \mathbf {J_{r}} } of the vector of residuals {\displaystyle r_{i}} with respect to the unknowns {\displaystyle \beta _{j}} is a {\displaystyle 7\times 2} matrix with the {\displaystyle i}-th row having the entries
{\displaystyle {\frac {\partial r_{i}}{\partial \beta _{1}}}=-{\frac {x_{i}}{\beta _{2}+x_{i}}};\quad {\frac {\partial r_{i}}{\partial \beta _{2}}}={\frac {\beta _{1}\cdot x_{i}}{\left(\beta _{2}+x_{i}\right)^{2}}}.}
Starting with the initial estimates of {\displaystyle \beta _{1}=0.9} and {\displaystyle \beta _{2}=0.2}, after five iterations of the Gauss–Newton algorithm, the optimal values {\displaystyle {\hat {\beta }}_{1}=0.362} and {\displaystyle {\hat {\beta }}_{2}=0.556} are obtained. The sum of squares of residuals decreased from the initial value of 1.445 to 0.00784 after the fifth iteration. The plot in the figure on the right shows the curve determined by the model for the optimal parameters with the observed data.
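The iteration for this example can be sketched directly from the residuals and Jacobian given above. Note that the data table did not survive extraction; the values below are assumed from the standard presentation of this example and should be treated as illustrative:

```python
import numpy as np

# substrate concentrations [S] and measured rates; values assumed from the
# standard version of this example (the original table is not reproduced here)
x = np.array([0.038, 0.194, 0.425, 0.626, 1.253, 2.500, 3.740])
y = np.array([0.050, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317])

beta = np.array([0.9, 0.2])   # initial estimates (Vmax, KM)
for _ in range(10):
    r = y - beta[0] * x / (beta[1] + x)                      # residuals r_i
    J = np.column_stack([-x / (beta[1] + x),                 # dr_i/d(beta1)
                         beta[0] * x / (beta[1] + x) ** 2])  # dr_i/d(beta2)
    beta = beta + np.linalg.solve(J.T @ J, -J.T @ r)         # normal equations
# beta converges to approximately (0.362, 0.556)
```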
== Convergence properties ==
The Gauss–Newton iteration is guaranteed to converge toward a local minimum point {\displaystyle {\hat {\beta }}} under 4 conditions: the functions {\displaystyle r_{1},\ldots ,r_{m}} are twice continuously differentiable in an open convex set {\displaystyle D\ni {\hat {\beta }}}, the Jacobian {\displaystyle \mathbf {J} _{\mathbf {r} }({\hat {\beta }})} is of full column rank, the initial iterate {\displaystyle \beta ^{(0)}} is near {\displaystyle {\hat {\beta }}}, and the local minimum value {\displaystyle |S({\hat {\beta }})|} is small. The convergence is quadratic if {\displaystyle |S({\hat {\beta }})|=0}.
It can be shown that the increment Δ is a descent direction for S, and, if the algorithm converges, then the limit is a stationary point of S. For a large minimum value {\displaystyle |S({\hat {\beta }})|}, however, convergence is not guaranteed, not even local convergence as in Newton's method, or convergence under the usual Wolfe conditions.
The rate of convergence of the Gauss–Newton algorithm can approach quadratic. The algorithm may converge slowly or not at all if the initial guess is far from the minimum or the matrix {\displaystyle \mathbf {J_{r}^{\operatorname {T} }J_{r}} } is ill-conditioned. For example, consider the problem with {\displaystyle m=2} equations and {\displaystyle n=1} variable, given by
{\displaystyle {\begin{aligned}r_{1}(\beta )&=\beta +1,\\r_{2}(\beta )&=\lambda \beta ^{2}+\beta -1.\end{aligned}}}
For {\displaystyle \lambda <1}, {\displaystyle \beta =0} is a local optimum. If {\displaystyle \lambda =0}, then the problem is in fact linear and the method finds the optimum in one iteration. If |λ| < 1, then the method converges linearly and the error decreases asymptotically with a factor |λ| at every iteration. However, if |λ| > 1, then the method does not even converge locally.
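This behavior can be observed directly. The sketch below (my own illustration, with λ = 0.5 and an arbitrary small starting point) runs the iteration on the two residuals above; successive errors |β| shrink by roughly the factor λ per step:

```python
import numpy as np

lam = 0.5
beta = 0.1            # start near the local optimum beta = 0
errors = []
for _ in range(20):
    r = np.array([beta + 1, lam * beta ** 2 + beta - 1])   # residuals
    J = np.array([[1.0], [2 * lam * beta + 1]])            # Jacobian (2 x 1)
    beta = beta + np.linalg.solve(J.T @ J, -J.T @ r).item()
    errors.append(abs(beta))
# linear convergence: each error is roughly lam times the previous one
```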
== Solving overdetermined systems of equations ==
The Gauss–Newton iteration
{\displaystyle \mathbf {x} ^{(k+1)}=\mathbf {x} ^{(k)}-J(\mathbf {x} ^{(k)})^{\dagger }\mathbf {f} (\mathbf {x} ^{(k)})\,,\quad k=0,1,\ldots }
is an effective method for solving overdetermined systems of equations in the form {\displaystyle \mathbf {f} (\mathbf {x} )=\mathbf {0} } with
{\displaystyle \mathbf {f} (\mathbf {x} )={\begin{bmatrix}f_{1}(x_{1},\ldots ,x_{n})\\\vdots \\f_{m}(x_{1},\ldots ,x_{n})\end{bmatrix}}}
and {\displaystyle m>n}, where {\displaystyle J(\mathbf {x} )^{\dagger }} is the Moore–Penrose inverse (also known as the pseudoinverse) of the Jacobian matrix {\displaystyle J(\mathbf {x} )} of {\displaystyle \mathbf {f} (\mathbf {x} )}.
It can be considered an extension of Newton's method and enjoys the same local quadratic convergence toward isolated regular solutions.
If the solution doesn't exist but the initial iterate {\displaystyle \mathbf {x} ^{(0)}} is near a point {\displaystyle {\hat {\mathbf {x} }}=({\hat {x}}_{1},\ldots ,{\hat {x}}_{n})} at which the sum of squares {\textstyle \sum _{i=1}^{m}|f_{i}(x_{1},\ldots ,x_{n})|^{2}\equiv \|\mathbf {f} (\mathbf {x} )\|_{2}^{2}} reaches a small local minimum, the Gauss–Newton iteration linearly converges to {\displaystyle {\hat {\mathbf {x} }}}. The point {\displaystyle {\hat {\mathbf {x} }}} is often called a least squares solution of the overdetermined system.
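This iteration can be sketched on a small consistent overdetermined system (my own toy example: three equations in two unknowns with exact solution (1, 1), so the local quadratic convergence applies):

```python
import numpy as np

def f(v):
    x0, x1 = v
    return np.array([x0 ** 2 + x1 ** 2 - 2,   # circle of radius sqrt(2)
                     x0 - x1,                 # line x0 = x1
                     x0 + x1 - 2])            # line x0 + x1 = 2

def jac(v):
    x0, x1 = v
    return np.array([[2 * x0, 2 * x1],
                     [1.0, -1.0],
                     [1.0, 1.0]])

v = np.array([1.5, 0.5])
for _ in range(20):
    v = v - np.linalg.pinv(jac(v)) @ f(v)   # x_(k+1) = x_k - J(x_k)^† f(x_k)
# v converges to the solution (1, 1)
```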
== Derivation from Newton's method ==
In what follows, the Gauss–Newton algorithm will be derived from Newton's method for function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm can be quadratic under certain regularity conditions. In general (under weaker conditions), the convergence rate is linear.
The recurrence relation for Newton's method for minimizing a function S of parameters {\displaystyle {\boldsymbol {\beta }}} is
{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}-\mathbf {H} ^{-1}\mathbf {g} ,}
where g denotes the gradient vector of S, and H denotes the Hessian matrix of S.
Since {\textstyle S=\sum _{i=1}^{m}r_{i}^{2}}, the gradient is given by
{\displaystyle g_{j}=2\sum _{i=1}^{m}r_{i}{\frac {\partial r_{i}}{\partial \beta _{j}}}.}
Elements of the Hessian are calculated by differentiating the gradient elements, {\displaystyle g_{j}}, with respect to {\displaystyle \beta _{k}}:
{\displaystyle H_{jk}=2\sum _{i=1}^{m}\left({\frac {\partial r_{i}}{\partial \beta _{j}}}{\frac {\partial r_{i}}{\partial \beta _{k}}}+r_{i}{\frac {\partial ^{2}r_{i}}{\partial \beta _{j}\partial \beta _{k}}}\right).}
The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by
{\displaystyle H_{jk}\approx 2\sum _{i=1}^{m}J_{ij}J_{ik},}
where {\textstyle J_{ij}={\partial r_{i}}/{\partial \beta _{j}}} are entries of the Jacobian Jr. Note that when the exact Hessian is evaluated near an exact fit, the residuals {\displaystyle r_{i}} are near zero, so the second term becomes near zero as well, which justifies the approximation. The gradient and the approximate Hessian can be written in matrix notation as
{\displaystyle \mathbf {g} =2{\mathbf {J} _{\mathbf {r} }}^{\operatorname {T} }\mathbf {r} ,\quad \mathbf {H} \approx 2{\mathbf {J} _{\mathbf {r} }}^{\operatorname {T} }\mathbf {J_{r}} .}
These expressions are substituted into the recurrence relation above to obtain the operational equations
{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}+\Delta ;\quad \Delta =-\left(\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {J_{r}} \right)^{-1}\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {r} .}
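This update can be sketched in a few lines of NumPy. The exponential model, data, and starting point below are illustrative assumptions, not part of the article; the sketch just iterates the operational equations above on a zero-residual fitting problem:

```python
import numpy as np

def gauss_newton(residual, jacobian, beta0, steps=10):
    """Iterate beta <- beta + Delta with Delta = -(J^T J)^{-1} J^T r."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(steps):
        r = residual(beta)
        J = jacobian(beta)
        # Solve the normal equations J^T J * Delta = -J^T r for the step.
        delta = np.linalg.solve(J.T @ J, -J.T @ r)
        beta = beta + delta
    return beta

# Hypothetical example: fit y = b0 * exp(b1 * t) to noiseless synthetic data.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(0.5 * t)
residual = lambda b: b[0] * np.exp(b[1] * t) - y
jacobian = lambda b: np.column_stack([np.exp(b[1] * t),
                                      b[0] * t * np.exp(b[1] * t)])
beta = gauss_newton(residual, jacobian, [1.0, 0.0])
```

Because the residuals vanish at the optimum, this is exactly the "small residual" regime in which the Gauss–Newton approximation of the Hessian is good.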
Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation
{\displaystyle \left|r_{i}{\frac {\partial ^{2}r_{i}}{\partial \beta _{j}\partial \beta _{k}}}\right|\ll \left|{\frac {\partial r_{i}}{\partial \beta _{j}}}{\frac {\partial r_{i}}{\partial \beta _{k}}}\right|}
that needs to hold to be able to ignore the second-order derivative terms may be valid in two cases, for which convergence is to be expected:
The function values {\displaystyle r_{i}} are small in magnitude, at least around the minimum.
The functions are only "mildly" nonlinear, so that {\textstyle {\frac {\partial ^{2}r_{i}}{\partial \beta _{j}\partial \beta _{k}}}} is relatively small in magnitude.
== Improved versions ==
With the Gauss–Newton method the sum of squares of the residuals S may not decrease at every iteration. However, since Δ is a descent direction, unless {\displaystyle S\left({\boldsymbol {\beta }}^{s}\right)} is a stationary point, it holds that {\displaystyle S\left({\boldsymbol {\beta }}^{s}+\alpha \Delta \right)<S\left({\boldsymbol {\beta }}^{s}\right)} for all sufficiently small {\displaystyle \alpha >0}. Thus, if divergence occurs, one solution is to employ a fraction {\displaystyle \alpha } of the increment vector Δ in the updating formula:
{\displaystyle {\boldsymbol {\beta }}^{s+1}={\boldsymbol {\beta }}^{s}+\alpha \Delta .}
In other words, the increment vector is too long, but it still points "downhill", so going just a part of the way will decrease the objective function S. An optimal value for {\displaystyle \alpha } can be found by using a line search algorithm, that is, the magnitude of {\displaystyle \alpha } is determined by finding the value that minimizes S, usually using a direct search method in the interval {\displaystyle 0<\alpha <1} or a backtracking line search such as the Armijo line search. Typically, {\displaystyle \alpha } should be chosen such that it satisfies the Wolfe conditions or the Goldstein conditions.
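A backtracking search of the kind just described can be sketched as follows. The sufficient-decrease constant `c` and shrink factor `tau` are assumed values for illustration, not prescribed by the article:

```python
def backtracking_alpha(S, beta, delta, grad, c=1e-4, tau=0.5):
    """Shrink alpha until the Armijo sufficient-decrease condition holds:
    S(beta + alpha*delta) <= S(beta) + c * alpha * (grad . delta)."""
    alpha = 1.0
    slope = sum(g * d for g, d in zip(grad, delta))  # directional derivative
    while S([b + alpha * d for b, d in zip(beta, delta)]) > S(beta) + c * alpha * slope:
        alpha *= tau
    return alpha

# Toy quadratic objective S(beta) = beta1^2 + beta2^2 with a full step.
S = lambda b: b[0] ** 2 + b[1] ** 2
beta = [2.0, -1.0]
delta = [-2.0, 1.0]   # points exactly at the minimum
grad = [4.0, -2.0]    # gradient of S at beta
alpha = backtracking_alpha(S, beta, delta, grad)  # full step already decreases S
```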
In cases where the direction of the shift vector is such that the optimal fraction α is close to zero, an alternative method for handling divergence is the use of the Levenberg–Marquardt algorithm, a trust region method. The normal equations are modified in such a way that the increment vector is rotated towards the direction of steepest descent,
{\displaystyle \left(\mathbf {J^{\operatorname {T} }J+\lambda D} \right)\Delta =-\mathbf {J} ^{\operatorname {T} }\mathbf {r} ,}
where D is a positive diagonal matrix. Note that when D is the identity matrix I and {\displaystyle \lambda \to +\infty }, then {\displaystyle \lambda \Delta =\lambda \left(\mathbf {J^{\operatorname {T} }J} +\lambda \mathbf {I} \right)^{-1}\left(-\mathbf {J} ^{\operatorname {T} }\mathbf {r} \right)=\left(\mathbf {I} -\mathbf {J^{\operatorname {T} }J} /\lambda +\cdots \right)\left(-\mathbf {J} ^{\operatorname {T} }\mathbf {r} \right)\to -\mathbf {J} ^{\operatorname {T} }\mathbf {r} }; therefore, the direction of Δ approaches the direction of the negative gradient {\displaystyle -\mathbf {J} ^{\operatorname {T} }\mathbf {r} }.
The so-called Marquardt parameter {\displaystyle \lambda } may also be optimized by a line search, but this is inefficient, as the shift vector must be recalculated every time {\displaystyle \lambda } is changed. A more efficient strategy is this: When divergence occurs, increase the Marquardt parameter until there is a decrease in S. Then retain the value from one iteration to the next, but decrease it if possible until a cut-off value is reached, when the Marquardt parameter can be set to zero; the minimization of S then becomes a standard Gauss–Newton minimization.
== Large-scale optimization ==
For large-scale optimization, the Gauss–Newton method is of special interest because it is often (though certainly not always) true that the matrix {\displaystyle \mathbf {J} _{\mathbf {r} }} is more sparse than the approximate Hessian {\displaystyle \mathbf {J} _{\mathbf {r} }^{\operatorname {T} }\mathbf {J_{r}} }. In such cases, the step calculation itself will typically need to be done with an approximate iterative method appropriate for large and sparse problems, such as the conjugate gradient method.
In order to make this kind of approach work, one needs at least an efficient method for computing the product {\displaystyle {\mathbf {J} _{\mathbf {r} }}^{\operatorname {T} }\mathbf {J_{r}} \mathbf {p} } for some vector p. With sparse matrix storage, it is in general practical to store the rows of {\displaystyle \mathbf {J} _{\mathbf {r} }} in a compressed form (e.g., without zero entries), making a direct computation of the above product tricky due to the transposition. However, if one defines ci as row i of the matrix {\displaystyle \mathbf {J} _{\mathbf {r} }}, the following simple relation holds:
{\displaystyle {\mathbf {J} _{\mathbf {r} }}^{\operatorname {T} }\mathbf {J_{r}} \mathbf {p} =\sum _{i}\mathbf {c} _{i}\left(\mathbf {c} _{i}\cdot \mathbf {p} \right),}
so that every row contributes additively and independently to the product. In addition to respecting a practical sparse storage structure, this expression is well suited for parallel computations. Note that every row ci is the gradient of the corresponding residual ri; with this in mind, the formula above emphasizes the fact that residuals contribute to the problem independently of each other.
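The row-wise identity above can be sketched directly. This is a minimal dense illustration; a real implementation would iterate over compressed sparse rows:

```python
import numpy as np

def jtj_product(rows, p):
    """Compute J^T J p as sum_i c_i (c_i . p), one row c_i at a time."""
    out = np.zeros_like(p, dtype=float)
    for c in rows:            # each row contributes independently
        out += c * np.dot(c, p)
    return out

J = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
p = np.array([1.0, -1.0])
result = jtj_product(J, p)    # matches forming J^T J explicitly
```

Because each row's contribution is independent, the loop body can be distributed across workers and the partial sums reduced at the end.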
== Related algorithms ==
In a quasi-Newton method, such as that due to Davidon, Fletcher and Powell or Broyden–Fletcher–Goldfarb–Shanno (BFGS method), an estimate of the full Hessian {\textstyle {\frac {\partial ^{2}S}{\partial \beta _{j}\partial \beta _{k}}}} is built up numerically using first derivatives {\textstyle {\frac {\partial r_{i}}{\partial \beta _{j}}}} only, so that after n refinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. fit only nonlinear least-squares problems.
Another method for solving minimization problems using only first derivatives is gradient descent. However, this method does not take into account the second derivatives even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.
== Example implementations ==
=== Julia ===
The following implementation in Julia provides one method that uses a provided Jacobian and another that computes it with automatic differentiation.
== Notes ==
== References ==
Björck, A. (1996). Numerical methods for least squares problems. SIAM, Philadelphia. ISBN 0-89871-360-9.
Fletcher, Roger (1987). Practical methods of optimization (2nd ed.). New York: John Wiley & Sons. ISBN 978-0-471-91547-8.
Nocedal, Jorge; Wright, Stephen (1999). Numerical optimization. New York: Springer. ISBN 0-387-98793-2.
== External links ==
Probability, Statistics and Estimation The algorithm is detailed and applied to the biology experiment discussed as an example in this article (page 84 with the uncertainties on the estimated values).
=== Implementations ===
Artelys Knitro is a non-linear solver with an implementation of the Gauss–Newton method. It is written in C and has interfaces to C++/C#/Java/Python/MATLAB/R. | Wikipedia/Gauss–Newton_algorithm |
In mathematics and computational science, the Euler method (also called the forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method. The Euler method is named after Leonhard Euler, who first proposed it in his book Institutionum calculi integralis (published 1768–1770).
The Euler method is a first-order method, which means that the local error (error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size.
The Euler method often serves as the basis to construct more complex methods, e.g., predictor–corrector method.
== Geometrical description ==
=== Purpose and why it works ===
Consider the problem of calculating the shape of an unknown curve which starts at a given point and satisfies a given differential equation. Here, a differential equation can be thought of as a formula by which the slope of the tangent line to the curve can be computed at any point on the curve, once the position of that point has been calculated.
The idea is that while the curve is initially unknown, its starting point, which we denote by {\displaystyle A_{0},} is known (see Figure 1). Then, from the differential equation, the slope to the curve at {\displaystyle A_{0}} can be computed, and so, the tangent line.
Take a small step along that tangent line up to a point {\displaystyle A_{1}.} Along this small step, the slope does not change too much, so {\displaystyle A_{1}} will be close to the curve. If we pretend that {\displaystyle A_{1}} is still on the curve, the same reasoning as for the point {\displaystyle A_{0}} above can be used. After several steps, a polygonal curve ({\displaystyle A_{0},A_{1},A_{2},A_{3},\dots }) is computed. In general, this curve does not diverge too far from the original unknown curve, and the error between the two curves can be made small if the step size is small enough and the interval of computation is finite.
=== First-order process ===
When given the values for {\displaystyle t_{0}} and {\displaystyle y(t_{0})}, and the derivative of {\displaystyle y} as a given function of {\displaystyle t} and {\displaystyle y} denoted {\displaystyle y'(t)=f{\bigl (}t,y(t){\bigr )}}, begin the process by setting {\displaystyle y_{0}=y(t_{0})}. Next, choose a value {\displaystyle h} for the size of every step along the t-axis, and set {\displaystyle t_{n}=t_{0}+nh} (or equivalently {\displaystyle t_{n+1}=t_{n}+h}). Now, the Euler method is used to find {\displaystyle y_{n+1}} from {\displaystyle y_{n}} and {\displaystyle t_{n}}:
{\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n}).}
The value of {\displaystyle y_{n}} is an approximation of the solution at time {\displaystyle t_{n}}, i.e., {\displaystyle y_{n}\approx y(t_{n})}. The Euler method is explicit, i.e. the solution {\displaystyle y_{n+1}} is an explicit function of {\displaystyle y_{i}} for {\displaystyle i\leq n}.
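The update y_{n+1} = y_n + h f(t_n, y_n) translates into a few lines of code. A minimal sketch:

```python
def euler(f, t0, y0, h, n_steps):
    """Advance y' = f(t, y) from (t0, y0) by n_steps Euler steps of size h."""
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)   # follow the tangent line for one step
        t = t + h
    return y

# y' = y, y(0) = 1, approximated at t = 4 with h = 1 (four steps).
approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, 4)
```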
=== Higher-order process ===
While the Euler method integrates a first-order ODE, any ODE of order {\displaystyle N} can be represented as a system of first-order ODEs. When given the ODE of order {\displaystyle N} defined as
{\displaystyle y^{(N+1)}(t)=f\left(t,y(t),y'(t),\ldots ,y^{(N)}(t)\right),}
as well as {\displaystyle h}, {\displaystyle t_{0}}, and {\displaystyle y_{0},y'_{0},\dots ,y_{0}^{(N)}}, we implement the following formula until we reach the approximation of the solution to the ODE at the desired time:
{\displaystyle {\vec {y}}_{i+1}={\begin{pmatrix}y_{i+1}\\y'_{i+1}\\\vdots \\y_{i+1}^{(N-1)}\\y_{i+1}^{(N)}\end{pmatrix}}={\begin{pmatrix}y_{i}+h\cdot y'_{i}\\y'_{i}+h\cdot y''_{i}\\\vdots \\y_{i}^{(N-1)}+h\cdot y_{i}^{(N)}\\y_{i}^{(N)}+h\cdot f\left(t_{i},y_{i},y'_{i},\ldots ,y_{i}^{(N)}\right)\end{pmatrix}}}
These first-order systems can be handled by Euler's method or, in fact, by any other scheme for first-order systems.
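The vector update above can be sketched for a general first-order system. The harmonic-oscillator example is an illustrative assumption, chosen because its state vector (y, y') matches the construction in this section:

```python
def euler_system(f, t0, state0, h, n_steps):
    """Euler's method for y' = f(t, y) where y is a list (a first-order system)."""
    t, state = t0, list(state0)
    for _ in range(n_steps):
        deriv = f(t, state)
        state = [s + h * d for s, d in zip(state, deriv)]
        t = t + h
    return state

# y'' = -y (simple harmonic oscillator) rewritten as the system (y, v)' = (v, -y).
state = euler_system(lambda t, s: [s[1], -s[0]], 0.0, [1.0, 0.0], 0.1, 10)
```

For this system each Euler step multiplies y^2 + v^2 by exactly (1 + h^2), a small preview of the stability issues discussed later in the article.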
== First-order example ==
Given the initial value problem
{\displaystyle y'=y,\quad y(0)=1,}
we would like to use the Euler method to approximate {\displaystyle y(4)}.
=== Using step size equal to 1 (h = 1) ===
The Euler method is
{\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n}).}
So first we must compute {\displaystyle f(t_{0},y_{0})}. In this simple differential equation, the function {\displaystyle f} is defined by {\displaystyle f(t,y)=y}. We have {\displaystyle f(t_{0},y_{0})=f(0,1)=1.}
By doing the above step, we have found the slope of the line that is tangent to the solution curve at the point {\displaystyle (0,1)}. Recall that the slope is defined as the change in {\displaystyle y} divided by the change in {\displaystyle t}, or {\textstyle {\frac {\Delta y}{\Delta t}}}.
The next step is to multiply the above value by the step size {\displaystyle h}, which we take equal to one here:
{\displaystyle h\cdot f(y_{0})=1\cdot 1=1.}
Since the step size is the change in {\displaystyle t}, when we multiply the step size and the slope of the tangent, we get a change in {\displaystyle y} value. This value is then added to the initial {\displaystyle y} value to obtain the next value to be used for computations.
{\displaystyle y_{0}+hf(y_{0})=y_{1}=1+1\cdot 1=2.}
The above steps should be repeated to find {\displaystyle y_{2}}, {\displaystyle y_{3}} and {\displaystyle y_{4}}.
{\displaystyle {\begin{aligned}y_{2}&=y_{1}+hf(y_{1})=2+1\cdot 2=4,\\y_{3}&=y_{2}+hf(y_{2})=4+1\cdot 4=8,\\y_{4}&=y_{3}+hf(y_{3})=8+1\cdot 8=16.\end{aligned}}}
Due to the repetitive nature of this algorithm, it can be helpful to organize computations in a chart form, as seen below, to avoid making errors.
The conclusion of this computation is that {\displaystyle y_{4}=16}. The exact solution of the differential equation is {\displaystyle y(t)=e^{t}}, so {\displaystyle y(4)=e^{4}\approx 54.598}. Although the approximation of the Euler method was not very precise in this specific case, particularly due to a large step size {\displaystyle h}, its behaviour is qualitatively correct, as the figure shows.
=== Using other step sizes ===
As suggested in the introduction, the Euler method is more accurate if the step size {\displaystyle h} is smaller. The table below shows the result with different step sizes. The top row corresponds to the example in the previous section, and the second row is illustrated in the figure.
The error recorded in the last column of the table is the difference between the exact solution at {\displaystyle t=4} and the Euler approximation. In the bottom of the table, the step size is half the step size in the previous row, and the error is also approximately half the error in the previous row. This suggests that the error is roughly proportional to the step size, at least for fairly small values of the step size. This is true in general, also for other equations; see the section Global truncation error for more details.
Other methods, such as the midpoint method also illustrated in the figures, behave more favourably: the global error of the midpoint method is roughly proportional to the square of the step size. For this reason, the Euler method is said to be a first-order method, while the midpoint method is second order.
We can extrapolate from the above table that the step size needed to get an answer that is correct to three decimal places is approximately 0.00001, meaning that we need 400,000 steps. This large number of steps entails a high computational cost. For this reason, higher-order methods are employed such as Runge–Kutta methods or linear multistep methods, especially if a high accuracy is desired.
== Higher-order example ==
For this third-order example, assume that the following information is given:
{\displaystyle {\begin{aligned}&y'''+4ty''-t^{2}y'-(\cos {t})y=\sin {t}\\&t_{0}=0\\&y_{0}=y(t_{0})=2\\&y'_{0}=y'(t_{0})=-1\\&y''_{0}=y''(t_{0})=3\\&h=0.5\end{aligned}}}
From this we can isolate y''' to get the equation:
{\displaystyle f\left(t,y,y',y''\right)=y'''=\sin {t}+(\cos {t})y+t^{2}y'-4ty''}
Using that we can get the solution for {\displaystyle {\vec {y}}_{1}}:
{\displaystyle {\vec {y}}_{1}={\begin{pmatrix}y_{1}\\y_{1}'\\y_{1}''\end{pmatrix}}={\begin{pmatrix}y_{0}+h\cdot y'_{0}\\y'_{0}+h\cdot y''_{0}\\y''_{0}+h\cdot f\left(t_{0},y_{0},y'_{0},y''_{0}\right)\end{pmatrix}}={\begin{pmatrix}2+0.5\cdot -1\\-1+0.5\cdot 3\\3+0.5\cdot \left(\sin {0}+(\cos {0})\cdot 2+0^{2}\cdot (-1)-4\cdot 0\cdot 3\right)\end{pmatrix}}={\begin{pmatrix}1.5\\0.5\\4\end{pmatrix}}}
And using the solution for {\displaystyle {\vec {y}}_{1}}, we can get the solution for {\displaystyle {\vec {y}}_{2}}:
{\displaystyle {\vec {y}}_{2}={\begin{pmatrix}y_{2}\\y_{2}'\\y_{2}''\end{pmatrix}}={\begin{pmatrix}y_{1}+h\cdot y'_{1}\\y'_{1}+h\cdot y''_{1}\\y''_{1}+h\cdot f\left(t_{1},y_{1},y'_{1},y''_{1}\right)\end{pmatrix}}={\begin{pmatrix}1.5+0.5\cdot 0.5\\0.5+0.5\cdot 4\\4+0.5\cdot \left(\sin {0.5}+(\cos {0.5})\cdot 1.5+0.5^{2}\cdot 0.5-4\cdot 0.5\cdot 4\right)\end{pmatrix}}={\begin{pmatrix}1.75\\2.5\\0.9604...\end{pmatrix}}}
We can continue this process using the same formula as long as necessary to find whichever {\displaystyle {\vec {y}}_{i}} is desired.
== Derivation ==
The Euler method can be derived in a number of ways.
(1) Firstly, there is the geometrical description above.
(2) Another possibility is to consider the Taylor expansion of the function {\displaystyle y} around {\displaystyle t_{0}}:
{\displaystyle y(t_{0}+h)=y(t_{0})+hy'(t_{0})+{\tfrac {1}{2}}h^{2}y''(t_{0})+O\left(h^{3}\right).}
The differential equation states that {\displaystyle y'=f(t,y)}. If this is substituted in the Taylor expansion and the quadratic and higher-order terms are ignored, the Euler method arises.
The Taylor expansion is used below to analyze the error committed by the Euler method, and it can be extended to produce Runge–Kutta methods.
(3) A closely related derivation is to substitute the forward finite difference formula for the derivative,
{\displaystyle y'(t_{0})\approx {\frac {y(t_{0}+h)-y(t_{0})}{h}},}
in the differential equation {\displaystyle y'=f(t,y)}. Again, this yields the Euler method.
A similar computation leads to the midpoint method and the backward Euler method.
(4) Finally, one can integrate the differential equation from {\displaystyle t_{0}} to {\displaystyle t_{0}+h} and apply the fundamental theorem of calculus to get:
{\displaystyle y(t_{0}+h)-y(t_{0})=\int _{t_{0}}^{t_{0}+h}f{\bigl (}t,y(t){\bigr )}\,\mathrm {d} t.}
Now approximate the integral by the left-hand rectangle method (with only one rectangle):
{\displaystyle \int _{t_{0}}^{t_{0}+h}f{\bigl (}t,y(t){\bigr )}\,\mathrm {d} t\approx hf{\bigl (}t_{0},y(t_{0}){\bigr )}.}
Combining both equations, one finds again the Euler method.
This line of thought can be continued to arrive at various linear multistep methods.
== Local truncation error ==
The local truncation error of the Euler method is the error made in a single step. It is the difference between the numerical solution after one step, {\displaystyle y_{1}}, and the exact solution at time {\displaystyle t_{1}=t_{0}+h}. The numerical solution is given by
{\displaystyle y_{1}=y_{0}+hf(t_{0},y_{0}).}
For the exact solution, we use the Taylor expansion mentioned in the section Derivation above:
{\displaystyle y(t_{0}+h)=y(t_{0})+hy'(t_{0})+{\tfrac {1}{2}}h^{2}y''(t_{0})+O\left(h^{3}\right).}
The local truncation error (LTE) introduced by the Euler method is given by the difference between these equations:
{\displaystyle \mathrm {LTE} =y(t_{0}+h)-y_{1}={\tfrac {1}{2}}h^{2}y''(t_{0})+O\left(h^{3}\right).}
This result is valid if {\displaystyle y} has a bounded third derivative.
This shows that for small {\displaystyle h}, the local truncation error is approximately proportional to {\displaystyle h^{2}}. This makes the Euler method less accurate than higher-order techniques such as Runge–Kutta methods and linear multistep methods, for which the local truncation error is proportional to a higher power of the step size.
A slightly different formulation for the local truncation error can be obtained by using the Lagrange form for the remainder term in Taylor's theorem. If {\displaystyle y} has a continuous second derivative, then there exists a {\displaystyle \xi \in [t_{0},t_{0}+h]} such that
{\displaystyle \mathrm {LTE} =y(t_{0}+h)-y_{1}={\tfrac {1}{2}}h^{2}y''(\xi ).}
In the above expressions for the error, the second derivative of the unknown exact solution {\displaystyle y} can be replaced by an expression involving the right-hand side of the differential equation. Indeed, it follows from the equation {\displaystyle y'=f(t,y)} that
{\displaystyle y''(t_{0})={\frac {\partial f}{\partial t}}{\bigl (}t_{0},y(t_{0}){\bigr )}+{\frac {\partial f}{\partial y}}{\bigl (}t_{0},y(t_{0}){\bigr )}\,f{\bigl (}t_{0},y(t_{0}){\bigr )}.}
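The h^2 scaling of the local truncation error can be illustrated numerically. The sketch below uses y' = y, y(0) = 1 (exact solution e^t), an illustrative choice:

```python
from math import exp

def lte(h):
    """Error of a single Euler step for y' = y from y(0) = 1 (exact: e^h)."""
    y1 = 1.0 + h * 1.0        # one Euler step: y1 = y0 + h*f(t0, y0)
    return exp(h) - y1

# Since LTE ~ (1/2) h^2 y''(t0), halving h should divide the error by roughly 4.
r = lte(0.1) / lte(0.05)
```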
== Global truncation error ==
The global truncation error is the error at a fixed time {\displaystyle t_{i}}, after however many steps the method needs to take to reach that time from the initial time. The global truncation error is the cumulative effect of the local truncation errors committed in each step. The number of steps is easily determined to be {\textstyle {\frac {t_{i}-t_{0}}{h}}}, which is proportional to {\textstyle {\frac {1}{h}}}, and the error committed in each step is proportional to {\displaystyle h^{2}} (see the previous section). Thus, it is to be expected that the global truncation error will be proportional to {\displaystyle h}.
This intuitive reasoning can be made precise. If the solution {\displaystyle y} has a bounded second derivative and {\displaystyle f} is Lipschitz continuous in its second argument, then the global truncation error (denoted as {\displaystyle |y(t_{i})-y_{i}|}) is bounded by
{\displaystyle |y(t_{i})-y_{i}|\leq {\frac {hM}{2L}}\left(e^{L(t_{i}-t_{0})}-1\right)}
where {\displaystyle M} is an upper bound on the second derivative of {\displaystyle y} on the given interval and {\displaystyle L} is the Lipschitz constant of {\displaystyle f}. Or more simply, when {\displaystyle y'(t)=f(t,y)}, the value {\textstyle L={\text{max}}{\bigl (}|{\frac {d}{dy}}{\bigl [}f(t,y){\bigr ]}|{\bigr )}} (such that {\displaystyle t} is treated as a constant). In contrast, {\textstyle M={\text{max}}{\bigl (}|{\frac {d^{2}}{dt^{2}}}{\bigl [}y(t){\bigr ]}|{\bigr )}}, where the function {\displaystyle y(t)} is the exact solution, which only contains the {\displaystyle t} variable.
The precise form of this bound is of little practical importance, as in most cases the bound vastly overestimates the actual error committed by the Euler method. What is important is that it shows that the global truncation error is (approximately) proportional to {\displaystyle h}. For this reason, the Euler method is said to be first order.
=== Example ===
If we have the differential equation {\displaystyle y'=1+(t-y)^{2}} and the exact solution {\displaystyle y=t+{\frac {1}{t-1}}}, and we want to find {\displaystyle M} and {\displaystyle L} for when {\displaystyle 2\leq t\leq 3}:
{\displaystyle L={\text{max}}{\bigl (}|{\frac {d}{dy}}{\bigl [}f(t,y){\bigr ]}|{\bigr )}=\max _{2\leq t\leq 3}{\bigl (}|{\frac {d}{dy}}{\bigl [}1+(t-y)^{2}{\bigr ]}|{\bigr )}=\max _{2\leq t\leq 3}{\bigl (}|2(t-y)|{\bigr )}=\max _{2\leq t\leq 3}{\bigl (}|2(t-[t+{\frac {1}{t-1}}])|{\bigr )}=\max _{2\leq t\leq 3}{\bigl (}|-{\frac {2}{t-1}}|{\bigr )}=2}
{\displaystyle M={\text{max}}{\bigl (}|{\frac {d^{2}}{dt^{2}}}{\bigl [}y(t){\bigr ]}|{\bigr )}=\max _{2\leq t\leq 3}\left(|{\frac {d^{2}}{dt^{2}}}{\bigl [}t+{\frac {1}{t-1}}{\bigr ]}|\right)=\max _{2\leq t\leq 3}\left(|{\frac {2}{(t-1)^{3}}}|\right)=2}
Thus we can find the error bound at t=2.5 and h=0.5:
{\displaystyle {\text{error bound}}={\frac {hM}{2L}}\left(e^{L(t_{i}-t_{0})}-1\right)={\frac {0.5\cdot 2}{2\cdot 2}}\left(e^{2(2.5-2)}-1\right)=0.42957}
Notice that {\displaystyle t_{0}} is equal to 2 because it is the lower bound for {\displaystyle t} in {\displaystyle 2\leq t\leq 3}.
== Numerical stability ==
The Euler method can also be numerically unstable, especially for stiff equations, meaning that the numerical solution grows very large for equations where the exact solution does not. This can be illustrated using the linear equation
{\displaystyle y'=-2.3y,\qquad y(0)=1.}
The exact solution is {\displaystyle y(t)=e^{-2.3t}}, which decays to zero as {\displaystyle t\to \infty }. However, if the Euler method is applied to this equation with step size {\displaystyle h=1}, then the numerical solution is qualitatively wrong: it oscillates and grows (see the figure). This is what it means to be unstable. If a smaller step size is used, for instance {\displaystyle h=0.7}, then the numerical solution does decay to zero.
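This behaviour can be reproduced in a few lines; for the linear equation each Euler step simply multiplies y by (1 + hk), so the sketch below amounts to raising that factor to a power:

```python
def euler_linear(k, h, n):
    """n Euler steps for y' = k*y, y(0) = 1: returns (1 + h*k)^n."""
    y = 1.0
    for _ in range(n):
        y = y + h * k * y
    return y

unstable = euler_linear(-2.3, 1.0, 20)   # |1 + hk| = 1.3 > 1: grows
stable = euler_linear(-2.3, 0.7, 20)     # |1 + hk| = 0.61 < 1: decays
```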
If the Euler method is applied to the linear equation {\displaystyle y'=ky}, then the numerical solution is unstable if the product {\displaystyle hk} is outside the region
{\displaystyle {\bigl \{}z\in \mathbf {C} \,{\big |}\,|z+1|\leq 1{\bigr \}},}
illustrated on the right. This region is called the (linear) stability region. In the example, {\displaystyle k=-2.3}, so if {\displaystyle h=1} then {\displaystyle hk=-2.3}, which is outside the stability region, and thus the numerical solution is unstable.
This limitation, along with its slow convergence of error with {\displaystyle h}, means that the Euler method is not often used, except as a simple example of numerical integration. Frequently, models of physical systems contain terms representing fast-decaying elements (i.e. with large negative exponential arguments). Even when these are not of interest in the overall solution, the instability they can induce means that an exceptionally small timestep would be required if the Euler method is used.
== Rounding errors ==
In step {\displaystyle n} of the Euler method, the rounding error is roughly of the magnitude {\displaystyle \varepsilon y_{n}}, where {\displaystyle \varepsilon } is the machine epsilon. Assuming that the rounding errors are independent random variables, the expected total rounding error is proportional to {\textstyle {\frac {\varepsilon }{\sqrt {h}}}}. Thus, for extremely small values of the step size the truncation error will be small, but the effect of rounding error may be big. Most of the effect of rounding error can be easily avoided if compensated summation is used in the formula for the Euler method.
== Modifications and extensions ==
A simple modification of the Euler method which eliminates the stability problems noted above is the backward Euler method:
{\displaystyle y_{n+1}=y_{n}+hf(t_{n+1},y_{n+1}).}
This differs from the (standard, or forward) Euler method in that the function {\displaystyle f} is evaluated at the end point of the step, instead of the starting point. The backward Euler method is an implicit method, meaning that the formula for the backward Euler method has {\displaystyle y_{n+1}} on both sides, so when applying the backward Euler method we have to solve an equation. This makes the implementation more costly.
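For the linear test equation y' = ky the implicit equation can be solved in closed form, y_{n+1} = y_n / (1 - hk), which allows a small sketch of the stability gain:

```python
def backward_euler_linear(k, h, n):
    """Backward Euler for y' = k*y, y(0) = 1: solve y_{n+1} = y_n + h*k*y_{n+1}."""
    y = 1.0
    for _ in range(n):
        y = y / (1 - h * k)   # closed-form solution of the implicit equation
    return y

# Unlike forward Euler with h = 1 (see Numerical stability), this decays.
y = backward_euler_linear(-2.3, 1.0, 20)
```

For general nonlinear f, no closed form exists and each step requires a root-finding procedure, which is what makes the implicit method more costly.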
Other modifications of the Euler method that help with stability yield the exponential Euler method or the semi-implicit Euler method.
More complicated methods can achieve a higher order (and more accuracy). One possibility is to use more function evaluations. This is illustrated by the midpoint method which is already mentioned in this article:
{\displaystyle y_{n+1}=y_{n}+hf\left(t_{n}+{\tfrac {1}{2}}h,y_{n}+{\tfrac {1}{2}}hf(t_{n},y_{n})\right).}
This leads to the family of Runge–Kutta methods.
The other possibility is to use more past values, as illustrated by the two-step Adams–Bashforth method:
{\displaystyle y_{n+1}=y_{n}+{\tfrac {3}{2}}hf(t_{n},y_{n})-{\tfrac {1}{2}}hf(t_{n-1},y_{n-1}).}
This leads to the family of linear multistep methods. There are other modifications which use techniques from compressive sensing to minimize memory usage.
== In popular culture ==
In the film Hidden Figures, Katherine Johnson resorts to the Euler method in calculating the re-entry of astronaut John Glenn from Earth orbit.
== See also ==
Crank–Nicolson method
Gradient descent similarly uses finite steps, here to find minima of functions
List of Runge–Kutta methods
Linear multistep method
Numerical integration (for calculating definite integrals)
Numerical methods for ordinary differential equations
== Notes ==
== References ==
Atkinson, Kendall A. (1989). An Introduction to Numerical Analysis (2nd ed.). New York: John Wiley & Sons. ISBN 978-0-471-50023-0.
Ascher, Uri M.; Petzold, Linda R. (1998). Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 978-0-89871-412-8.
Butcher, John C. (2003). Numerical Methods for Ordinary Differential Equations. New York: John Wiley & Sons. ISBN 978-0-471-96758-3.
Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993). Solving ordinary differential equations I: Nonstiff problems. Berlin, New York: Springer-Verlag. ISBN 978-3-540-56670-0.
Iserles, Arieh (1996). A First Course in the Numerical Analysis of Differential Equations. Cambridge University Press. ISBN 978-0-521-55655-2.
Stoer, Josef; Bulirsch, Roland (2002). Introduction to Numerical Analysis (3rd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-95452-3.
Lakoba, Taras I. (2012), Simple Euler method and its modifications (PDF) (Lecture notes for MATH334), University of Vermont, retrieved 29 February 2012
Unni, M P. (2017). "Memory reduction for numerical solution of differential equations using compressive sensing". 2017 IEEE 13th International Colloquium on Signal Processing & its Applications (CSPA). IEEE CSPA. pp. 79–84. doi:10.1109/CSPA.2017.8064928. ISBN 978-1-5090-1184-1. S2CID 13082456.
== External links ==
Media related to Euler method at Wikimedia Commons
Euler method implementations in different languages by Rosetta Code
"Euler method", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Euler_method |
The Heaviside cover-up method, named after Oliver Heaviside, is a technique for quickly determining the coefficients when performing the partial-fraction expansion of a rational function in the case of linear factors.
== Method ==
Separation of a fractional algebraic expression into partial fractions is the reverse of the process of combining fractions by converting each fraction to the lowest common denominator (LCD) and adding the numerators. This separation can be accomplished by the Heaviside cover-up method, another method for determining the coefficients of a partial fraction. Case one has fractional expressions where factors in the denominator are unique. Case two has fractional expressions where some factors may repeat as powers of a binomial.
In integral calculus we would want to write a fractional algebraic expression as the sum of its partial fractions in order to take the integral of each simple fraction separately. Once the original denominator, D0, has been factored we set up a fraction for each factor in the denominator. We may use a subscripted D to represent the denominator of the respective partial fractions which are the factors in D0. Letters A, B, C, D, E, and so on will represent the numerators of the respective partial fractions. When a partial fraction term has a single (i.e. unrepeated) binomial in the denominator, the numerator is a residue of the function defined by the input fraction.
We calculate each respective numerator by (1) taking the root of the denominator (i.e. the value of x that makes the denominator zero) and (2) then substituting this root into the original expression but ignoring the corresponding factor in the denominator. Each root for the variable is the value which would give an undefined value to the expression since we do not divide by zero.
General formula for a cubic denominator with three distinct roots:
{\displaystyle {\frac {\ell x^{2}+mx+n}{(x-a)(x-b)(x-c)}}={\frac {A}{(x-a)}}+{\frac {B}{(x-b)}}+{\frac {C}{(x-c)}}}
Where
{\displaystyle A={\frac {\ell a^{2}+ma+n}{(a-b)(a-c)}};}
and where
{\displaystyle B={\frac {\ell b^{2}+mb+n}{(b-c)(b-a)}};}
and where
{\displaystyle C={\frac {\ell c^{2}+mc+n}{(c-a)(c-b)}}.}
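The three cover-up formulas above can be evaluated directly with exact rational arithmetic, as in this Python sketch (the function name `coverup_cubic` and its interface are illustrative assumptions, not from the article):

```python
from fractions import Fraction

def coverup_cubic(l, m, n, a, b, c):
    """Residues A, B, C of (l*x^2 + m*x + n) / ((x-a)(x-b)(x-c)), distinct roots a, b, c."""
    num = lambda x: l * x * x + m * x + n  # the numerator polynomial
    A = Fraction(num(a), (a - b) * (a - c))
    B = Fraction(num(b), (b - c) * (b - a))
    C = Fraction(num(c), (c - a) * (c - b))
    return A, B, C

# The cubic example worked out in this article:
# (3x^2 + 12x + 11) / ((x+1)(x+2)(x+3)), roots -1, -2, -3.
A, B, C = coverup_cubic(3, 12, 11, -1, -2, -3)
```

Using `Fraction` keeps the residues exact instead of introducing floating-point error.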
=== Case one ===
Factorize the expression in the denominator. Set up a partial fraction for each factor in the denominator. Apply the cover-up rule to solve for the new numerator of each partial fraction.
==== Example ====
{\displaystyle {\frac {3x^{2}+12x+11}{(x+1)(x+2)(x+3)}}={\frac {A}{x+1}}+{\frac {B}{x+2}}+{\frac {C}{x+3}}}
Set up a partial fraction for each factor in the denominator. With this framework we apply the cover-up rule to solve for A, B, and C.
D1 is x + 1; set it equal to zero. This gives the residue for A when x = −1.
Next, substitute this value of x into the fractional expression, but without D1.
Put this value down as the value of A.
Proceed similarly for B and C.
D2 is x + 2; For the residue B use x = −2.
D3 is x + 3; For residue C use x = −3.
Thus, to solve for A, use x = −1 in the expression but without D1:
{\displaystyle {\frac {3x^{2}+12x+11}{(x+2)(x+3)}}={\frac {3-12+11}{(1)(2)}}={\frac {2}{2}}=1=A.}
Thus, to solve for B, use x = −2 in the expression but without D2:
{\displaystyle {\frac {3x^{2}+12x+11}{(x+1)(x+3)}}={\frac {12-24+11}{(-1)(1)}}={\frac {-1}{(-1)}}=+1=B.}
Thus, to solve for C, use x = −3 in the expression but without D3:
{\displaystyle {\frac {3x^{2}+12x+11}{(x+1)(x+2)}}={\frac {27-36+11}{(-2)(-1)}}={\frac {2}{(+2)}}=+1=C.}
Thus,
{\displaystyle {\frac {3x^{2}+12x+11}{(x+1)(x+2)(x+3)}}={\frac {1}{x+1}}+{\frac {1}{x+2}}+{\frac {1}{x+3}}}
=== Case two ===
When factors of the denominator include powers of one expression we
Set up a partial fraction for each unique factor and each lower power of D;
Set up an equation showing the relation of the numerators if all were converted to the LCD.
From the equation of numerators we solve for each numerator, A, B, C, D, and so on.
This equation of the numerators is an absolute identity, true for all values of x. So, we may select any value of x and solve for the numerator.
==== Example ====
{\displaystyle {\frac {3x+5}{(1-2x)^{2}}}={\frac {A}{(1-2x)^{2}}}+{\frac {B}{1-2x}}}
Here, we set up a partial fraction for each descending power of the denominator. Then we solve for the numerators, A and B. As {\displaystyle (1-2x)} is a repeated factor, we need to find two numbers, so we need an additional relation in order to solve for both.
To write the relation of numerators the second fraction needs another factor of {\displaystyle (1-2x)} to convert it to the LCD, giving us {\displaystyle 3x+5=A+B(1-2x)}. In general, if a binomial factor is raised to the power of {\displaystyle n}, then {\displaystyle n} constants {\displaystyle A_{k}} will be needed, each appearing divided by successive powers, {\displaystyle (1-2x)^{k}}, where {\displaystyle k} runs from 1 to {\displaystyle n}. The cover-up rule can be used to find {\displaystyle A_{n}}, but it is still {\displaystyle A_{1}} that is called the residue. Here, {\displaystyle n=2}, {\displaystyle A=A_{2}}, and {\displaystyle B=A_{1}}.
To solve for {\displaystyle A}: {\displaystyle A} can be solved by setting the denominator of the first fraction to zero, {\displaystyle 1-2x=0}. Solving for {\displaystyle x} gives the cover-up value for {\displaystyle A}: when {\displaystyle x=1/2}. When we substitute this value, {\displaystyle x=1/2}, we get:
{\displaystyle 3\left({\frac {1}{2}}\right)+5=A+B(0)}
{\displaystyle A={\frac {3}{2}}+5={\frac {13}{2}}}
To solve for {\displaystyle B}: Since the equation of the numerators, here, {\displaystyle 3x+5=A+B(1-2x)}, is true for all values of {\displaystyle x}, pick a value for {\displaystyle x} and use it to solve for {\displaystyle B}. As we have solved for the value of {\displaystyle A} above, {\displaystyle A=13/2}, we may use that value to solve for {\displaystyle B}. We may pick {\displaystyle x=0}, use {\displaystyle A=13/2}, and then solve for {\displaystyle B}:
{\displaystyle {\begin{aligned}3x+5&=A+B(1-2x)\\0+5&={\frac {13}{2}}+B(1+0)\\{\frac {10}{2}}&={\frac {13}{2}}+B\\-{\frac {3}{2}}&=B\\\end{aligned}}}
We may pick {\displaystyle x=1}, then solve for {\displaystyle B}:
{\displaystyle {\begin{aligned}3x+5&=A+B(1-2x)\\3+5&={\frac {13}{2}}+B(1-2)\\8&={\frac {13}{2}}+B(-1)\\{\frac {16}{2}}&={\frac {13}{2}}-B\\B&=-{\frac {3}{2}}\end{aligned}}}
We may pick {\displaystyle x=-1} and solve for {\displaystyle B}:
{\displaystyle {\begin{aligned}3x+5&=A+B(1-2x)\\-3+5&={\frac {13}{2}}+B(1+2)\\{\frac {4}{2}}&={\frac {13}{2}}+3B\\-{\frac {9}{2}}&=3B\\-{\frac {3}{2}}&=B\end{aligned}}}
Hence,
{\displaystyle {\frac {3x+5}{(1-2x)^{2}}}={\frac {13/2}{(1-2x)^{2}}}+{\frac {-3/2}{(1-2x)}},}
or
{\displaystyle {\frac {3x+5}{(1-2x)^{2}}}={\frac {13}{2(1-2x)^{2}}}-{\frac {3}{2(1-2x)}}}
== References ==
== External links ==
http://www.math-cs.gordon.edu/courses/ma225/handouts/heavyside.pdf
MIT 18.03 Notes on Heaviside’s Cover-up Method by Prof. Arthur Mattuck. | Wikipedia/Heaviside_cover-up_method |
"Nova Methodus pro Maximis et Minimis" is the first published work on the subject of calculus. It was published by Gottfried Leibniz in the Acta Eruditorum in October 1684. It is considered to be the birth of infinitesimal calculus.
== Full title ==
The full title of the published work is "Nova methodus pro maximis et minimis, itemque tangentibus, quae nec fractas nec irrationales quantitates moratur, et singulare pro illis calculi genus." In English, the full title can be translated as "A new method for maxima and minima, and for tangents, that is not hindered by fractional or irrational quantities, and a singular kind of calculus for the above mentioned." It is from this title that this branch of mathematics takes the name calculus.
== Influence ==
Although calculus was also invented independently by Isaac Newton, most of the notation in modern calculus is from Leibniz. Leibniz's careful attention to his notation makes some believe that "his contribution to calculus was much more influential than Newton's."
== Citation and translations ==
Leibniz, Gottfried (1684). "Nova Methodus pro Maximis et Minimis". Acta Eruditorum (in Latin). 3: 467–473. Figures Tab. XII.
Leibniz, Gottfried (1768). "Nova Methodus pro Maximis et Minimis". In Dutens, Louis (ed.). Gothofredi Guillelmi Leibnitii Opera Omnia (in Latin). Vol. 3. Geneva: Fratres de Tournes. pp. 167–172. Figures Tab. VI.
Struik, Dirk J., ed. (1969). "A New Method for Maxima and Minima". A Source Book in Mathematics, 1200-1800. Harvard University Press. pp. 272–280.
Bruce, Ian, ed. (2014). "A New Method for Finding Maxima and Minima" (PDF). 17centurymaths.com. Archived from the original (PDF) on 2023-05-21.
== See also ==
Leibniz–Newton calculus controversy
== References ==
== External links ==
Mathematical Treasure: Leibniz's Papers on Calculus: "Nova Methodus pro Maximis et Minimis..." (Latin original) | Wikipedia/Nova_Methodus_pro_Maximis_et_Minimis |
In calculus, the inverse function rule is a formula that expresses the derivative of the inverse of a bijective and differentiable function f in terms of the derivative of f. More precisely, if the inverse of {\displaystyle f} is denoted as {\displaystyle f^{-1}}, where {\displaystyle f^{-1}(y)=x} if and only if {\displaystyle f(x)=y}, then the inverse function rule is, in Lagrange's notation,
{\displaystyle \left[f^{-1}\right]'(y)={\frac {1}{f'\left(f^{-1}(y)\right)}}.}
This formula holds in general whenever {\displaystyle f} is continuous and injective on an interval I, with {\displaystyle f} being differentiable at {\displaystyle f^{-1}(y)} ({\displaystyle \in I}) and where {\displaystyle f'(f^{-1}(y))\neq 0}. The same formula is also equivalent to the expression
{\displaystyle {\mathcal {D}}\left[f^{-1}\right]={\frac {1}{({\mathcal {D}}f)\circ \left(f^{-1}\right)}},}
where {\displaystyle {\mathcal {D}}} denotes the unary derivative operator (on the space of functions) and {\displaystyle \circ } denotes function composition.
Geometrically, a function and inverse function have graphs that are reflections, in the line {\displaystyle y=x}. This reflection operation turns the gradient of any line into its reciprocal.
Assuming that {\displaystyle f} has an inverse in a neighbourhood of {\displaystyle x} and that its derivative at that point is non-zero, its inverse is guaranteed to be differentiable at {\displaystyle x} and have a derivative given by the above formula.
The inverse function rule may also be expressed in Leibniz's notation. As that notation suggests,
{\displaystyle {\frac {dx}{dy}}\,\cdot \,{\frac {dy}{dx}}=1.}
This relation is obtained by differentiating the equation {\displaystyle f^{-1}(y)=x} in terms of x and applying the chain rule, yielding that:
{\displaystyle {\frac {dx}{dy}}\,\cdot \,{\frac {dy}{dx}}={\frac {dx}{dx}}}
considering that the derivative of x with respect to x is 1.
== Derivation ==
Let {\displaystyle f} be an invertible (bijective) function, let {\displaystyle x} be in the domain of {\displaystyle f}, and let {\displaystyle y=f(x).} Let {\displaystyle g=f^{-1}.} So, {\displaystyle f(g(y))=y.}
Differentiating this equation with respect to {\displaystyle y}, and using the chain rule, one gets
{\displaystyle f'(g(y))\cdot g'(y)=1.}
That is,
{\displaystyle g'(y)={\frac {1}{f'(g(y))}}}
or
{\displaystyle (f^{-1})^{\prime }(y)={\frac {1}{f^{\prime }(f^{-1}(y))}}.}
== Examples ==
{\displaystyle y=x^{2}} (for positive x) has inverse {\displaystyle x={\sqrt {y}}}.
{\displaystyle {\frac {dy}{dx}}=2x{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\frac {dx}{dy}}={\frac {1}{2{\sqrt {y}}}}={\frac {1}{2x}}}
{\displaystyle {\frac {dy}{dx}}\,\cdot \,{\frac {dx}{dy}}=2x\cdot {\frac {1}{2x}}=1.}
At {\displaystyle x=0}, however, there is a problem: the graph of the square root function becomes vertical, corresponding to a horizontal tangent for the square function.
{\displaystyle y=e^{x}} (for real x) has inverse {\displaystyle x=\ln {y}} (for positive {\displaystyle y})
{\displaystyle {\frac {dy}{dx}}=e^{x}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\frac {dx}{dy}}={\frac {1}{y}}=e^{-x}}
{\displaystyle {\frac {dy}{dx}}\,\cdot \,{\frac {dx}{dy}}=e^{x}\cdot e^{-x}=1.}
== Additional properties ==
Integrating this relationship gives
{\displaystyle {f^{-1}}(x)=\int {\frac {1}{f'({f^{-1}}(x))}}\,{dx}+C.}
This is only useful if the integral exists. In particular we need {\displaystyle f'(x)} to be non-zero across the range of integration.
It follows that a function that has a continuous derivative has an inverse in a neighbourhood of every point where the derivative is non-zero. This need not be true if the derivative is not continuous.
Another very interesting and useful property is the following:
{\displaystyle \int f^{-1}(x)\,{dx}=xf^{-1}(x)-F(f^{-1}(x))+C}
where {\displaystyle F} denotes the antiderivative of {\displaystyle f}.
The inverse of the derivative of f(x) is also of interest, as it is used in showing the convexity of the Legendre transform.
Let {\displaystyle z=f'(x)}; then we have, assuming {\displaystyle f''(x)\neq 0}:
{\displaystyle {\frac {d(f')^{-1}(z)}{dz}}={\frac {1}{f''(x)}}}
This can be shown using the previous notation {\displaystyle y=f(x)}. Then we have:
{\displaystyle f'(x)={\frac {dy}{dx}}={\frac {dy}{dz}}{\frac {dz}{dx}}={\frac {dy}{dz}}f''(x)\Rightarrow {\frac {dy}{dz}}={\frac {f'(x)}{f''(x)}}}
Therefore:
{\displaystyle {\frac {d(f')^{-1}(z)}{dz}}={\frac {dx}{dz}}={\frac {dy}{dz}}{\frac {dx}{dy}}={\frac {f'(x)}{f''(x)}}{\frac {1}{f'(x)}}={\frac {1}{f''(x)}}}
By induction, we can generalize this result for any integer {\displaystyle n\geq 1}, with {\displaystyle z=f^{(n)}(x)}, the nth derivative of f(x), and {\displaystyle y=f^{(n-1)}(x)}, assuming {\displaystyle f^{(i)}(x)\neq 0{\text{ for }}0<i\leq n+1}:
{\displaystyle {\frac {d(f^{(n)})^{-1}(z)}{dz}}={\frac {1}{f^{(n+1)}(x)}}}
== Higher derivatives ==
The chain rule given above is obtained by differentiating the identity {\displaystyle f^{-1}(f(x))=x} with respect to x. One can continue the same process for higher derivatives. Differentiating the identity twice with respect to x, one obtains
{\displaystyle {\frac {d^{2}y}{dx^{2}}}\,\cdot \,{\frac {dx}{dy}}+{\frac {d}{dx}}\left({\frac {dx}{dy}}\right)\,\cdot \,\left({\frac {dy}{dx}}\right)=0,}
that is simplified further by the chain rule as
{\displaystyle {\frac {d^{2}y}{dx^{2}}}\,\cdot \,{\frac {dx}{dy}}+{\frac {d^{2}x}{dy^{2}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{2}=0.}
Replacing the first derivative, using the identity obtained earlier, we get
{\displaystyle {\frac {d^{2}y}{dx^{2}}}=-{\frac {d^{2}x}{dy^{2}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{3}.}
Similarly for the third derivative:
{\displaystyle {\frac {d^{3}y}{dx^{3}}}=-{\frac {d^{3}x}{dy^{3}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{4}-3{\frac {d^{2}x}{dy^{2}}}\,\cdot \,{\frac {d^{2}y}{dx^{2}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{2}}
or using the formula for the second derivative,
{\displaystyle {\frac {d^{3}y}{dx^{3}}}=-{\frac {d^{3}x}{dy^{3}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{4}+3\left({\frac {d^{2}x}{dy^{2}}}\right)^{2}\,\cdot \,\left({\frac {dy}{dx}}\right)^{5}}
These formulas are generalized by the Faà di Bruno's formula.
These formulas can also be written using Lagrange's notation. If f and g are inverses, then
{\displaystyle g''(x)={\frac {-f''(g(x))}{[f'(g(x))]^{3}}}}
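The Lagrange-notation formula for g″ can be verified symbolically. This sketch assumes SymPy is available and picks f = eˣ with inverse g = ln x, the same pair used in the example that follows:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp   # f(x) = e^x
g = sp.log   # its inverse, g(x) = ln x

# Left side: g''(x) computed directly.
lhs = sp.diff(g(x), x, 2)

# Right side: -f''(g(x)) / [f'(g(x))]^3 from the formula above.
rhs = -sp.diff(f(x), x, 2).subs(x, g(x)) / sp.diff(f(x), x).subs(x, g(x)) ** 3
```

For this pair both sides reduce to −1/x², matching the direct calculation in the article's example.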
== Example ==
{\displaystyle y=e^{x}} has the inverse {\displaystyle x=\ln y}. Using the formula for the second derivative of the inverse function,
{\displaystyle {\frac {dy}{dx}}={\frac {d^{2}y}{dx^{2}}}=e^{x}=y{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}\left({\frac {dy}{dx}}\right)^{3}=y^{3};}
so that
{\displaystyle {\frac {d^{2}x}{dy^{2}}}\,\cdot \,y^{3}+y=0{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\frac {d^{2}x}{dy^{2}}}=-{\frac {1}{y^{2}}}},
which agrees with the direct calculation.
== See also ==
Calculus – Branch of mathematics
Chain rule – For derivatives of composed functions
Differentiation of trigonometric functions – Mathematical process of finding the derivative of a trigonometric function
Differentiation rules – Rules for computing derivatives of functions
Implicit function theorem – On converting relations to functions of several real variables
Integration of inverse functions – Mathematical theorem, used in calculusPages displaying short descriptions of redirect targets
Inverse function – Mathematical concept
Inverse function theorem – Theorem in mathematics
Table of derivatives – Rules for computing derivatives of functionsPages displaying short descriptions of redirect targets
Vector calculus identities – Mathematical identities
== References ==
Marsden, Jerrold E.; Weinstein, Alan (1981). "Chapter 8: Inverse Functions and the Chain Rule". Calculus unlimited (PDF). Menlo Park, Calif.: Benjamin/Cummings Pub. Co. ISBN 0-8053-6932-5. | Wikipedia/Inverse_function_rule |
In mathematics, the concept of a measure is a generalization and formalization of geometrical measures (length, area, volume) and other common notions, such as magnitude, mass, and probability of events. These seemingly distinct concepts have many similarities and can often be treated together in a single mathematical context. Measures are foundational in probability theory, integration theory, and can be generalized to assume negative values, as with electrical charge. Far-reaching generalizations (such as spectral measures and projection-valued measures) of measure are widely used in quantum physics and physics in general.
The intuition behind this concept dates back to ancient Greece, when Archimedes tried to calculate the area of a circle. But it was not until the late 19th and early 20th centuries that measure theory became a branch of mathematics. The foundations of modern measure theory were laid in the works of Émile Borel, Henri Lebesgue, Nikolai Luzin, Johann Radon, Constantin Carathéodory, and Maurice Fréchet, among others.
== Definition ==
Let {\displaystyle X} be a set and {\displaystyle \Sigma } a σ-algebra over {\displaystyle X.} A set function {\displaystyle \mu } from {\displaystyle \Sigma } to the extended real number line is called a measure if the following conditions hold:
Non-negativity: For all {\displaystyle E\in \Sigma ,\ \ \mu (E)\geq 0.}
{\displaystyle \mu (\varnothing )=0.}
Countable additivity (or σ-additivity): For all countable collections {\displaystyle \{E_{k}\}_{k=1}^{\infty }} of pairwise disjoint sets in Σ,
{\displaystyle \mu {\left(\bigcup _{k=1}^{\infty }E_{k}\right)}=\sum _{k=1}^{\infty }\mu (E_{k}).}
If at least one set {\displaystyle E} has finite measure, then the requirement {\displaystyle \mu (\varnothing )=0} is met automatically due to countable additivity:
{\displaystyle \mu (E)=\mu (E\cup \varnothing )=\mu (E)+\mu (\varnothing ),}
and therefore {\displaystyle \mu (\varnothing )=0.}
If the condition of non-negativity is dropped, and {\displaystyle \mu } takes on at most one of the values of {\displaystyle \pm \infty ,} then {\displaystyle \mu } is called a signed measure.
The pair {\displaystyle (X,\Sigma )} is called a measurable space, and the members of {\displaystyle \Sigma } are called measurable sets.
A triple {\displaystyle (X,\Sigma ,\mu )} is called a measure space. A probability measure is a measure with total measure one – that is, {\displaystyle \mu (X)=1.} A probability space is a measure space with a probability measure.
For measure spaces that are also topological spaces various compatibility conditions can be placed for the measure and the topology. Most measures met in practice in analysis (and in many cases also in probability theory) are Radon measures. Radon measures have an alternative definition in terms of linear functionals on the locally convex topological vector space of continuous functions with compact support. This approach is taken by Bourbaki (2004) and a number of other sources. For more details, see the article on Radon measures.
== Instances ==
Some important measures are listed here.
The counting measure is defined by {\displaystyle \mu (S)} = number of elements in {\displaystyle S.}
The Lebesgue measure on {\displaystyle \mathbb {R} } is a complete translation-invariant measure on a σ-algebra containing the intervals in {\displaystyle \mathbb {R} } such that {\displaystyle \mu ([0,1])=1}; and every other measure with these properties extends the Lebesgue measure.
Circular angle measure is invariant under rotation, and hyperbolic angle measure is invariant under squeeze mapping.
The Haar measure for a locally compact topological group is a generalization of the Lebesgue measure (and also of counting measure and circular angle measure) and has similar uniqueness properties.
Every (pseudo) Riemannian manifold {\displaystyle (M,g)} has a canonical measure {\displaystyle \mu _{g}} that in local coordinates {\displaystyle x_{1},\ldots ,x_{n}} looks like {\displaystyle {\sqrt {\left|\det g\right|}}d^{n}x} where {\displaystyle d^{n}x} is the usual Lebesgue measure.
The Hausdorff measure is a generalization of the Lebesgue measure to sets with non-integer dimension, in particular, fractal sets.
Every probability space gives rise to a measure which takes the value 1 on the whole space (and therefore takes all its values in the unit interval [0, 1]). Such a measure is called a probability measure or distribution. See the list of probability distributions for instances.
The Dirac measure δa (cf. Dirac delta function) is given by δa(S) = χS(a), where χS is the indicator function of {\displaystyle S.} The measure of a set is 1 if it contains the point {\displaystyle a} and 0 otherwise.
Other 'named' measures used in various theories include: Borel measure, Jordan measure, ergodic measure, Gaussian measure, Baire measure, Radon measure, Young measure, and Loeb measure.
In physics an example of a measure is spatial distribution of mass (see for example, gravity potential), or another non-negative extensive property, conserved (see conservation law for a list of these) or not. Negative values lead to signed measures, see "generalizations" below.
Liouville measure, known also as the natural volume form on a symplectic manifold, is useful in classical statistical and Hamiltonian mechanics.
Gibbs measure is widely used in statistical mechanics, often under the name canonical ensemble.
Measure theory is used in machine learning. One example is the Flow Induced Probability Measure in GFlowNet.
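Two of the simplest instances above, the counting measure and the Dirac measure, can be made concrete for finite sets in a few lines of Python. This is an illustrative sketch only; the function names are not from the article.

```python
def counting_measure(S):
    """mu(S) = number of elements of the (finite) set S."""
    return len(S)

def dirac_measure(a, S):
    """delta_a(S) = 1 if a is in S, else 0 (the indicator of S evaluated at a)."""
    return 1 if a in S else 0

# Finite additivity on pairwise disjoint sets:
# mu(E1 u E2 u E3) = mu(E1) + mu(E2) + mu(E3).
E1, E2, E3 = {0, 1}, {2}, {3, 4, 5}
union = E1 | E2 | E3
```

Countable additivity is the infinite-collection strengthening of the finite identity demonstrated here.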
== Basic properties ==
Let {\displaystyle \mu } be a measure.
=== Monotonicity ===
If {\displaystyle E_{1}} and {\displaystyle E_{2}} are measurable sets with {\displaystyle E_{1}\subseteq E_{2}} then {\displaystyle \mu (E_{1})\leq \mu (E_{2}).}
=== Measure of countable unions and intersections ===
==== Countable subadditivity ====
For any countable sequence {\displaystyle E_{1},E_{2},E_{3},\ldots } of (not necessarily disjoint) measurable sets {\displaystyle E_{n}} in {\displaystyle \Sigma :}
{\displaystyle \mu \left(\bigcup _{i=1}^{\infty }E_{i}\right)\leq \sum _{i=1}^{\infty }\mu (E_{i}).}
==== Continuity from below ====
If {\displaystyle E_{1},E_{2},E_{3},\ldots } are measurable sets that are increasing (meaning that {\displaystyle E_{1}\subseteq E_{2}\subseteq E_{3}\subseteq \ldots }) then the union of the sets {\displaystyle E_{n}} is measurable and
{\displaystyle \mu \left(\bigcup _{i=1}^{\infty }E_{i}\right)~=~\lim _{i\to \infty }\mu (E_{i})=\sup _{i\geq 1}\mu (E_{i}).}
==== Continuity from above ====
If {\displaystyle E_{1},E_{2},E_{3},\ldots } are measurable sets that are decreasing (meaning that {\displaystyle E_{1}\supseteq E_{2}\supseteq E_{3}\supseteq \ldots }) then the intersection of the sets {\displaystyle E_{n}} is measurable; furthermore, if at least one of the {\displaystyle E_{n}} has finite measure then
{\displaystyle \mu \left(\bigcap _{i=1}^{\infty }E_{i}\right)=\lim _{i\to \infty }\mu (E_{i})=\inf _{i\geq 1}\mu (E_{i}).}
This property is false without the assumption that at least one of the {\displaystyle E_{n}} has finite measure. For instance, for each {\displaystyle n\in \mathbb {N} ,} let {\displaystyle E_{n}=[n,\infty )\subseteq \mathbb {R} ,} which all have infinite Lebesgue measure, but the intersection is empty.
== Other properties ==
=== Completeness ===
A measurable set {\displaystyle X} is called a null set if {\displaystyle \mu (X)=0.}
A subset of a null set is called a negligible set. A negligible set need not be measurable, but every measurable negligible set is automatically a null set. A measure is called complete if every negligible set is measurable.
A measure can be extended to a complete one by considering the σ-algebra of subsets {\displaystyle Y} which differ by a negligible set from a measurable set {\displaystyle X,} that is, such that the symmetric difference of {\displaystyle X} and {\displaystyle Y} is contained in a null set. One defines {\displaystyle \mu (Y)} to equal {\displaystyle \mu (X).}
=== "Dropping the Edge" ===
If {\displaystyle f:X\to [0,+\infty ]} is {\displaystyle (\Sigma ,{\cal {B}}([0,+\infty ]))}-measurable, then
{\displaystyle \mu \{x\in X:f(x)\geq t\}=\mu \{x\in X:f(x)>t\}}
for almost all {\displaystyle t\in [-\infty ,\infty ].}
This property is used in connection with the Lebesgue integral.
=== Additivity ===
Measures are required to be countably additive. However, the condition can be strengthened as follows.
For any set {\displaystyle I} and any set of nonnegative {\displaystyle r_{i},i\in I} define:
{\displaystyle \sum _{i\in I}r_{i}=\sup \left\lbrace \sum _{i\in J}r_{i}:|J|<\infty ,J\subseteq I\right\rbrace .}
That is, we define the sum of the {\displaystyle r_{i}} to be the supremum of all the sums of finitely many of them.
A measure {\displaystyle \mu } on {\displaystyle \Sigma } is {\displaystyle \kappa }-additive if for any {\displaystyle \lambda <\kappa } and any family of disjoint sets {\displaystyle X_{\alpha },\alpha <\lambda } the following hold:
{\displaystyle \bigcup _{\alpha \in \lambda }X_{\alpha }\in \Sigma }
{\displaystyle \mu \left(\bigcup _{\alpha \in \lambda }X_{\alpha }\right)=\sum _{\alpha \in \lambda }\mu \left(X_{\alpha }\right).}
The second condition is equivalent to the statement that the ideal of null sets is {\displaystyle \kappa }-complete.
=== Sigma-finite measures ===
A measure space (X, Σ, μ) is called finite if μ(X) is a finite real number (rather than ∞). Nonzero finite measures are analogous to probability measures in the sense that any finite measure μ is proportional to the probability measure (1/μ(X)) μ.
A measure μ is called σ-finite if X can be decomposed into a countable union of measurable sets of finite measure. Analogously, a set in a measure space is said to have a σ-finite measure if it is a countable union of sets with finite measure.
For example, the real numbers with the standard Lebesgue measure are σ-finite but not finite. Consider the closed intervals [k, k+1] for all integers k; there are countably many such intervals, each has measure 1, and their union is the entire real line. Alternatively, consider the real numbers with the counting measure, which assigns to each finite set of reals the number of points in the set. This measure space is not σ-finite, because every set with finite measure contains only finitely many points, and it would take uncountably many such sets to cover the entire real line. The σ-finite measure spaces have some very convenient properties; σ-finiteness can be compared in this respect to the Lindelöf property of topological spaces. They can also be thought of as a vague generalization of the idea that a measure space may have "uncountable measure".
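The covering by unit intervals can be verified pointwise; a small sketch (elementary arithmetic only, no library assumptions beyond the standard `math` module):

```python
import math

# Sketch: every real x lies in the unit interval [k, k+1] with k = floor(x),
# so countably many intervals of Lebesgue measure 1 cover the real line.
def covering_interval(x):
    k = math.floor(x)
    return (k, k + 1)

for x in (-2.5, 0.0, 3.99, 7.0):
    k, k1 = covering_interval(x)
    assert k <= x <= k1   # x is covered by its interval
    assert k1 - k == 1    # each piece has measure 1
```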
=== Strictly localizable measures ===
=== Semifinite measures ===
Let X be a set, let 𝒜 be a sigma-algebra on X, and let μ be a measure on 𝒜. We say μ is semifinite to mean that for all A ∈ μ^pre{+∞}, 𝒫(A) ∩ μ^pre(ℝ_{>0}) ≠ ∅; in other words, every measurable set of infinite measure contains a measurable subset of finite positive measure.
Semifinite measures generalize sigma-finite measures, in such a way that some big theorems of measure theory that hold for sigma-finite but not arbitrary measures can be extended with little modification to hold for semifinite measures.
==== Basic examples ====
Every sigma-finite measure is semifinite.
Assume 𝒜 = 𝒫(X), let f : X → [0, +∞], and assume μ(A) = ∑_{a∈A} f(a) for all A ⊆ X. We have that μ is sigma-finite if and only if f(x) < +∞ for all x ∈ X and f^pre(ℝ_{>0}) is countable. We have that μ is semifinite if and only if f(x) < +∞ for all x ∈ X.
Taking f = X × {1} above (so that μ is counting measure on 𝒫(X)), we see that counting measure on 𝒫(X) is
sigma-finite if and only if X is countable; and
semifinite (without regard to whether X is countable).
(Thus, counting measure, on the power set 𝒫(X) of an arbitrary uncountable set X, gives an example of a semifinite measure that is not sigma-finite.)
Let d be a complete, separable metric on X, let ℬ be the Borel sigma-algebra induced by d, and let s ∈ ℝ_{>0}. Then the Hausdorff measure ℋ^s|ℬ is semifinite.
Let d be a complete, separable metric on X, let ℬ be the Borel sigma-algebra induced by d, and let s ∈ ℝ_{>0}. Then the packing measure ℋ^s|ℬ is semifinite.
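For the discrete measures above, semifiniteness (every set of infinite measure contains a subset of finite positive measure) can be checked by brute force when X is finite. This is an illustrative sketch only; the function names are ours, not standard API:

```python
from itertools import chain, combinations

INF = float("inf")

def mu(A, f):
    """Discrete measure mu(A) = sum of the weights f(a) over a in A."""
    return sum(f[a] for a in A)

def subsets(A):
    s = list(A)
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

def is_semifinite(X, f):
    """Brute-force semifiniteness test on a finite X: every subset of
    infinite measure must contain a subset of finite positive measure."""
    for A in subsets(X):
        if mu(A, f) == INF and not any(0 < mu(B, f) < INF for B in subsets(A)):
            return False
    return True

# f finite everywhere -> mu is semifinite (indeed sigma-finite here).
assert is_semifinite({0, 1, 2}, {0: 1.0, 1: 2.0, 2: 5.0})
# An atom of infinite weight with no finite positive part -> not semifinite.
assert not is_semifinite({0, 1}, {0: INF, 1: 0.0})
```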
==== Involved example ====
The zero measure is sigma-finite and thus semifinite. In addition, the zero measure is clearly less than or equal to μ. It can be shown there is a greatest measure with these two properties:
We say the semifinite part of μ to mean the semifinite measure μ_sf defined in the above theorem. We give some nice, explicit formulas, which some authors may take as definition, for the semifinite part:
μ_sf = (sup{ μ(B) : B ∈ 𝒫(A) ∩ μ^pre(ℝ_{≥0}) })_{A∈𝒜}.
μ_sf = (sup{ μ(A ∩ B) : B ∈ μ^pre(ℝ_{≥0}) })_{A∈𝒜}.
μ_sf = μ|_{μ^pre(ℝ_{>0})} ∪ { A ∈ 𝒜 : sup{ μ(B) : B ∈ 𝒫(A) } = +∞ } × {+∞} ∪ { A ∈ 𝒜 : sup{ μ(B) : B ∈ 𝒫(A) } < +∞ } × {0}.
Since μ_sf is semifinite, it follows that if μ = μ_sf then μ is semifinite. It is also evident that if μ is semifinite then μ = μ_sf.
==== Non-examples ====
Every 0−∞ measure that is not the zero measure is not semifinite. (Here, we say 0−∞ measure to mean a measure whose range lies in {0, +∞}: (∀A ∈ 𝒜)(μ(A) ∈ {0, +∞}).) Below we give examples of 0−∞ measures that are not zero measures.
Let X be nonempty, let 𝒜 be a σ-algebra on X, let f : X → {0, +∞} be not the zero function, and let μ = (∑_{x∈A} f(x))_{A∈𝒜}. It can be shown that μ is a measure.
μ = {(∅, 0)} ∪ (𝒜 ∖ {∅}) × {+∞}.
X = {0}, 𝒜 = {∅, X}, μ = {(∅, 0), (X, +∞)}.
Let X be uncountable, let 𝒜 be a σ-algebra on X, let 𝒞 = { A ∈ 𝒜 : A is countable } be the countable elements of 𝒜, and let μ = 𝒞 × {0} ∪ (𝒜 ∖ 𝒞) × {+∞}. It can be shown that μ is a measure.
==== Involved non-example ====
Measures that are not semifinite are very wild when restricted to certain sets. Every measure is, in a sense, semifinite once its 0−∞ part (the wild part) is taken away.
We say the 0−∞ part of μ to mean the measure μ_{0−∞} defined in the above theorem. Here is an explicit formula for μ_{0−∞}:
μ_{0−∞} = (sup{ μ(B) − μ_sf(B) : B ∈ 𝒫(A) ∩ μ_sf^pre(ℝ_{≥0}) })_{A∈𝒜}.
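The sense in which the wild part is "taken away" can be stated compactly: the two parts recombine to give back the original measure. This is a restatement of the decomposition discussed above, not a new result:

```latex
\mu \;=\; \mu_{\text{sf}} \;+\; \mu_{0\text{-}\infty},
```

where μ_sf is semifinite and μ_{0−∞} is a 0−∞ measure.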
==== Results regarding semifinite measures ====
Let 𝔽 be ℝ or ℂ, and let T : L^∞_𝔽(μ) → (L^1_𝔽(μ))* : g ↦ T_g = (∫ fg dμ)_{f ∈ L^1_𝔽(μ)}. Then μ is semifinite if and only if T is injective. (This result has import in the study of the dual space of L^1 = L^1_𝔽(μ).)
Let 𝔽 be ℝ or ℂ, and let 𝒯 be the topology of convergence in measure on L^0_𝔽(μ). Then μ is semifinite if and only if 𝒯 is Hausdorff.
(Johnson) Let X be a set, let 𝒜 be a sigma-algebra on X, let μ be a measure on 𝒜, let Y be a set, let ℬ be a sigma-algebra on Y, and let ν be a measure on ℬ. If μ and ν are both not a 0−∞ measure, then both μ and ν are semifinite if and only if (μ ×_cld ν)(A × B) = μ(A)ν(B) for all A ∈ 𝒜 and B ∈ ℬ. (Here, μ ×_cld ν is the measure defined in Theorem 39.1 in Berberian '65.)
=== Localizable measures ===
Localizable measures are a special case of semifinite measures and a generalization of sigma-finite measures.
Let X be a set, let 𝒜 be a sigma-algebra on X, and let μ be a measure on 𝒜. Let 𝔽 be ℝ or ℂ, and let T : L^∞_𝔽(μ) → (L^1_𝔽(μ))* : g ↦ T_g = (∫ fg dμ)_{f ∈ L^1_𝔽(μ)}. Then μ is localizable if and only if T is bijective (if and only if L^∞_𝔽(μ) "is" L^1_𝔽(μ)*).
=== s-finite measures ===
A measure is said to be s-finite if it is a countable sum of finite measures. S-finite measures are more general than sigma-finite ones and have applications in the theory of stochastic processes.
== Non-measurable sets ==
If the axiom of choice is assumed to be true, it can be proved that not all subsets of Euclidean space are Lebesgue measurable; examples of such sets include the Vitali set, and the non-measurable sets postulated by the Hausdorff paradox and the Banach–Tarski paradox.
== Generalizations ==
For certain purposes, it is useful to have a "measure" whose values are not restricted to the non-negative reals or infinity. For instance, a countably additive set function with values in the (signed) real numbers is called a signed measure, while such a function with values in the complex numbers is called a complex measure. Observe, however, that a complex measure is necessarily of finite variation; hence complex measures include finite signed measures but not, for example, the Lebesgue measure.
Measures that take values in Banach spaces have been studied extensively. A measure that takes values in the set of self-adjoint projections on a Hilbert space is called a projection-valued measure; these are used in functional analysis for the spectral theorem. When it is necessary to distinguish the usual measures which take non-negative values from generalizations, the term positive measure is used. Positive measures are closed under conical combination but not general linear combination, while signed measures are the linear closure of positive measures. More generally see measure theory in topological vector spaces.
Another generalization is the finitely additive measure, also known as a content. This is the same as a measure except that instead of requiring countable additivity we require only finite additivity. Historically, this definition was used first. It turns out that in general, finitely additive measures are connected with notions such as Banach limits, the dual of L^∞, and the Stone–Čech compactification. All these are linked in one way or another to the axiom of choice. Contents remain useful in certain technical problems in geometric measure theory; this is the theory of Banach measures.
A charge is a generalization in both directions: it is a finitely additive, signed measure. (Cf. ba space for information about bounded charges, where we say a charge is bounded to mean its range is a bounded subset of ℝ.)
== See also ==
== Notes ==
== Bibliography ==
== References ==
== External links ==
"Measure", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Tutorial: Measure Theory for Dummies
In the history of calculus, the calculus controversy (German: Prioritätsstreit, lit. 'priority dispute') was an argument between mathematicians Isaac Newton and Gottfried Wilhelm Leibniz over who had first discovered calculus. The question was a major intellectual controversy, beginning in 1699 and reaching its peak in 1712. Leibniz had published his work on calculus first, but Newton's supporters accused Leibniz of plagiarizing Newton's unpublished ideas. The modern consensus is that the two men independently developed their ideas. Their creation of calculus has been called "the greatest advance in mathematics that had taken place since the time of Archimedes."
Newton stated he had begun working on a form of calculus (which he called "The Method of Fluxions and Infinite Series") in 1666, at the age of 23, but did not publish it until 1737, decades later, as a minor annotation in the back of one of his works (a relevant Newton manuscript of October 1666 is now published among his mathematical papers). Gottfried Leibniz began working on his variant of calculus in 1674, and in 1684 published his first paper employing it, "Nova Methodus pro Maximis et Minimis". L'Hôpital published a text on Leibniz's calculus in 1696 (in which he recognized that Newton's Principia of 1687 was "nearly all about this calculus"). Meanwhile, Newton, though he explained his (geometrical) form of calculus in Section I of Book I of the Principia of 1687, did not explain his eventual fluxional notation for the calculus in print until 1693 (in part) and 1704 (in full).
The prevailing opinion in the 18th century was against Leibniz (in Britain, not in the German-speaking world). Today, the consensus is Leibniz and Newton independently invented and described calculus in Europe in the 17th century, with their work noted to be more than just a "synthesis of previously distinct pieces of mathematical technique, but it was certainly this in part".
It was certainly Isaac Newton who first devised a new infinitesimal calculus and elaborated it into a widely extensible algorithm, whose potentialities he fully understood; of equal certainty, differential and integral calculus, the fount of great developments flowing continuously from 1684 to the present day, was created independently by Gottfried Leibniz.
One author has identified the dispute as being about "profoundly different" methods:
Despite ... points of resemblance, the methods [of Newton and Leibniz] are profoundly different, so making the priority row a nonsense.
On the other hand, other authors have emphasized the equivalences and mutual translatability of the methods: here N Guicciardini (2003) appears to confirm L'Hôpital (1696) (already cited):
the Newtonian and Leibnizian schools shared a common mathematical method. They adopted two algorithms, the analytical method of fluxions, and the differential and integral calculus, which were translatable one into the other.
== Scientific priority in the 17th century ==
In the 17th century the question of scientific priority was of great importance to scientists; however, during this period scientific journals had just begun to appear, and a generally accepted mechanism for fixing priority when publishing information about discoveries had not yet been formed. Among the methods used by scientists were anagrams, sealed envelopes placed in a safe place, correspondence with other scientists, or a private message. A letter to the founder of the French Academy of Sciences, Marin Mersenne, for a French scientist, or to the secretary of the Royal Society of London, Henry Oldenburg, for an English one, had essentially the status of a published article. The discoverer could "time-stamp" the moment of his discovery, and prove that he knew of it at the point the letter was sealed and had not copied it from anything subsequently published; nevertheless, where an idea was subsequently published in conjunction with its use in a particularly valuable context, this might take priority over an earlier discoverer's work, which had no obvious application. Further, a mathematician's claim could be undermined by counter-claims that he had not truly invented an idea, but merely improved on someone else's idea, an improvement that required little skill and was based on facts that were already known.
A series of high-profile disputes about the scientific priority of the 17th century—the era that the American science historian D. Meli called "the golden age of the mud-slinging priority disputes"—is associated with Leibniz. The first of them occurred at the beginning of 1673, during his first visit to London, when in the presence of the famous mathematician John Pell he presented his method of approximating series by differences. To Pell's remark that this discovery had already been made by François Regnaud and published in 1670 in Lyon by Gabriel Mouton, Leibniz answered the next day. In a letter to Oldenburg, he wrote that, having looked at Mouton's book, he conceded Pell was correct, but he could provide his draft notes, which contain nuances not found by Regnaud and Mouton. Thus Leibniz's integrity was demonstrated, though the incident was recalled later. On the same visit to London, Leibniz found himself in the opposite position. On 1 February 1673, at a meeting of the Royal Society of London, he demonstrated his mechanical calculator. The curator of experiments of the Society, Robert Hooke, carefully examined the device and even removed the back cover. A few days later, in the absence of Leibniz, Hooke criticized the German scientist's machine, saying that he could make a simpler model. Leibniz, who learned of this, returned to Paris and categorically rejected Hooke's claim in a letter to Oldenburg, formulating principles of correct scientific behaviour: "We know that respectable and modest people prefer it when, having thought of something consistent with what someone else has discovered, they ascribe their own improvements and additions to the discoverer, so as not to arouse suspicions of intellectual dishonesty; the desire for true generosity should pursue them, instead of the lying thirst for dishonest profit."
To illustrate the proper behaviour, Leibniz gives an example of Nicolas-Claude Fabri de Peiresc and Pierre Gassendi, who performed astronomical observations similar to those made earlier by Galileo Galilei and Johannes Hevelius, respectively. Learning they did not make their discoveries first, the French scientists passed on their data to the discoverers.
Newton's approach to the priority problem can be illustrated by the example of the discovery of the inverse-square law as applied to the dynamics of bodies moving under the influence of gravity. Based on an analysis of Kepler's laws and his own calculations, Robert Hooke made the assumption that motion under such conditions should occur along orbits similar to ellipses. Unable to rigorously prove this claim, he reported it to Newton. Without entering into further correspondence with Hooke, Newton solved this problem, as well as its inverse, proving that the law of inverse squares follows from the ellipticity of the orbits. This discovery was set forth in his famous work Philosophiæ Naturalis Principia Mathematica without mentioning Hooke. At the insistence of the astronomer Edmond Halley, to whom the manuscript was handed over for editing and publication, the phrase was included in the text that the compliance of Kepler's first law with the law of inverse squares was "independently approved by Wren, Hooke and Halley."
According to the remark of Vladimir Arnold, Newton, choosing between refusal to publish his discoveries and constant struggle for priority, chose both of them.
== Background ==
=== Invention of Differential and Integral Calculus ===
By the time of Newton and Leibniz, European mathematicians had already made a significant contribution to the formation of the ideas of mathematical analysis. The Dutchman Simon Stevin (1548–1620), the Italian Luca Valerio (1553–1618), and the German Johannes Kepler (1571–1630) were engaged in the development of the ancient "method of exhaustion" for calculating areas and volumes. The latter's ideas apparently influenced—directly or through Galileo Galilei—the "method of indivisibles" developed by Bonaventura Cavalieri (1598–1647).
The last years of Leibniz's life, 1710–1716, were embittered by a long controversy with John Keill, Newton, and others, over whether Leibniz had discovered calculus independently of Newton, or whether he had merely invented another notation for ideas that were fundamentally Newton's. No participant doubted that Newton had already developed his method of fluxions when Leibniz began working on the differential calculus, yet there was seemingly no proof beyond Newton's word. He had published a calculation of a tangent with the note: "This is only a special case of a general method whereby I can calculate curves and determine maxima, minima, and centers of gravity." How this was done he explained to a pupil some 20 years later, when Leibniz's articles were already well-read. Newton's manuscripts came to light only after his death.
The infinitesimal calculus can be expressed either in the notation of fluxions or in that of differentials, or, as noted above, it was also expressed by Newton in geometrical form, as in the Principia of 1687. Newton employed fluxions as early as 1666, but did not publish an account of his notation until 1693. The earliest use of differentials in Leibniz's notebooks may be traced to 1675. He employed this notation in a 1677 letter to Newton. The differential notation also appeared in Leibniz's memoir of 1684.
The claim that Leibniz invented the calculus independently of Newton rests on the basis that Leibniz:
Published a description of his method some years before Newton printed anything on fluxions,
Always alluded to the discovery as being his own invention (this statement went unchallenged for some years),
Enjoyed the strong presumption that he acted in good faith
Demonstrated in his private papers his development of the ideas of calculus in a manner independent of the path taken by Newton.
According to Leibniz's detractors, the fact that Leibniz's claim went unchallenged for some years is immaterial. To rebut this case it is sufficient to show that he:
Saw some of Newton's papers on the subject in or before 1675 or at least 1677, and
Obtained the fundamental ideas of the calculus from those papers.
No attempt was made to rebut #4, which was not known at the time, but which provides the strongest evidence that Leibniz came to the calculus independently of Newton. This evidence, however, is still questionable based on the discovery, during the inquest and after, that Leibniz both back-dated and changed fundamentals of his "original" notes, not only in this intellectual conflict but in several others. He also published "anonymous" slanders of Newton regarding their controversy, of which he initially tried to deny authorship.
If good faith is nevertheless assumed, however, Leibniz's notes as presented to the inquest came first to integration, which he saw as a generalization of the summation of infinite series, whereas Newton began from derivatives. However, to view the development of calculus as entirely independent between the work of Newton and Leibniz misses that both had some knowledge of the methods of the other (though Newton did develop most fundamentals before Leibniz began) and worked together on a few aspects, in particular power series, as is shown in a letter to Henry Oldenburg dated 24 October 1676, where Newton remarks that Leibniz had developed a number of methods, one of which was new to him. Both Leibniz and Newton could see the other was far along towards inventing calculus (Leibniz in particular mentions it) but only Leibniz was prodded thereby into publication.
That Leibniz saw some of Newton's manuscripts had always been likely. In 1849, C. I. Gerhardt, while going through Leibniz's manuscripts, found extracts from Newton's De Analysi per Equationes Numero Terminorum Infinitas (published in 1704 as part of the De Quadratura Curvarum but also previously circulated among mathematicians starting with Newton giving a copy to Isaac Barrow in 1669 and Barrow sending it to John Collins) in Leibniz's handwriting, the existence of which had been previously unsuspected, along with notes re-expressing the content of these extracts in Leibniz's differential notation. Hence when these extracts were made becomes all-important. It is known that a copy of Newton's manuscript had been sent to Ehrenfried Walther von Tschirnhaus in May 1675, a time when he and Leibniz were collaborating; it is not impossible that these extracts were made then. It is also possible that they may have been made in 1676, when Leibniz discussed analysis by infinite series with Collins and Oldenburg. It is probable that they would have then shown him Newton's manuscript on the subject, a copy of which one or both of them surely possessed. On the other hand, it may be supposed that Leibniz made the extracts from the printed copy in or after 1704. Shortly before his death, Leibniz admitted in a letter to Abbé Antonio Schinella Conti, that in 1676 Collins had shown him some of Newton's papers, but Leibniz also implied that they were of little or no value. Presumably he was referring to Newton's letters of 13 June and 24 October 1676, and to the letter of 10 December 1672, on the method of tangents, extracts from which accompanied the letter of 13 June.
Whether Leibniz made use of the manuscript from which he had copied extracts, or whether he had previously invented the calculus, are questions on which no direct evidence is available at present. It is, however, worth noting that the unpublished Portsmouth Papers show that when Newton entered into the dispute in 1711, he picked this manuscript as the one which had likely fallen into Leibniz's hands. At that time there was no direct evidence that Leibniz had seen Newton's manuscript before it was printed in 1704; hence Newton's conjecture was not published. But Gerhardt's discovery of a copy made by Leibniz appears to confirm its accuracy. Those who question Leibniz's good faith allege that to a man of his ability, the manuscript, especially if supplemented by the letter of 10 December 1672, sufficed to give him a clue as to the methods of the calculus. Since Newton's work at issue did employ the fluxional notation, anyone building on that work would have to invent a notation, but some deny this.
== Development ==
The quarrel was a retrospective affair. In 1696, already some years later than the events that became the subject of the quarrel, the position still looked potentially peaceful: Newton and Leibniz had each made limited acknowledgements of the other's work, and L'Hôpital's 1696 book about the calculus from a Leibnizian point of view had also acknowledged Newton's published work of the 1680s as "nearly all about this calculus" ("presque tout de ce calcul"), while expressing preference for the convenience of Leibniz's notation.
At first, there was no reason to suspect Leibniz's good faith. In 1699, Nicolas Fatio de Duillier, a Swiss mathematician known for his work on the zodiacal light problem, publicly accused Leibniz of plagiarizing Newton, although he had privately accused Leibniz of plagiarism twice in letters to Christiaan Huygens in 1692. It was not until the 1704 publication of an anonymous review of Newton's tract on quadrature, which implied Newton had borrowed the idea of the fluxional calculus from Leibniz, that any responsible mathematician doubted Leibniz had invented the calculus independently of Newton. With respect to the review of Newton's quadrature work, all admit that there was no justification or authority for the statements made therein, which were rightly attributed to Leibniz. But the subsequent discussion led to a critical examination of the whole question, and doubts emerged: "Had Leibniz derived the fundamental idea of the calculus from Newton?" The case against Leibniz, as it appeared to Newton's friends, was summed up in the Commercium Epistolicum of 1712, which referenced all allegations. This document was thoroughly worked over by Newton.
No such summary (with facts, dates, and references) of the case for Leibniz was issued by his friends; but Johann Bernoulli attempted to indirectly weaken the evidence by attacking the personal character of Newton in a letter dated 7 June 1713. When pressed for an explanation, Bernoulli most solemnly denied having written the letter. In accepting the denial, Newton added in a private letter to Bernoulli the following remarks, Newton's claimed reasons for why he took part in the controversy. He said, "I have never grasped at fame among foreign nations, but I am very desirous to preserve my character for honesty, which the author of that epistle, as if by the authority of a great judge, had endeavoured to wrest from me. Now that I am old, I have little pleasure in mathematical studies, and I have never tried to propagate my opinions over the world, but I have rather taken care not to involve myself in disputes on account of them."
Leibniz explained his silence as follows, in a letter to Conti dated 9 April 1716:
In order to respond point by point to all the work published against me, I would have to go into much minutiae that occurred thirty, forty years ago, of which I remember little: I would have to search my old letters, of which many are lost. Moreover, in most cases, I did not keep a copy, and when I did, the copy is buried in a great heap of papers, which I could sort through only with time and patience. I have enjoyed little leisure, being so weighted down of late with occupations of a totally different nature.
In any event, a bias favouring Newton tainted the whole affair from the outset. The Royal Society, of which Isaac Newton was president at the time, set up a committee to pronounce on the priority dispute, in response to a letter it had received from Leibniz. That committee never asked Leibniz to give his version of the events. The report of the committee, finding in favour of Newton, was written and published as "Commercium Epistolicum" (mentioned above) by Newton early in 1713. But Leibniz did not see it until the autumn of 1714.
=== Leibniz's death and end of dispute ===
Leibniz never agreed to acknowledge Newton's priority in inventing calculus. He attempted to write his own version of the history of differential calculus, but, as with his history of the rulers of Braunschweig, he did not complete the work. At the end of 1715, Leibniz accepted Johann Bernoulli's offer to organize another mathematical competition, in which the different approaches had to prove their worth. This time the problem was taken from the area later called the calculus of variations: it was required to construct a tangent line to a family of curves. A letter was written on 25 November and transmitted in London to Newton through Abate Conti. The problem was formulated in unclear terms, and only later did it become evident that a general solution was required, not a particular one as Newton understood it. After the British side published their decision, Leibniz published his own, which was more general, and thus formally won this competition. For his part, Newton stubbornly sought to destroy his opponent. Not having achieved this with the "Report", he continued his research, spending hundreds of hours on it. His next study, entitled "Observations upon the preceding Epistle", was inspired by a letter from Leibniz to Conti in March 1716, which criticized Newton's philosophical views; no new facts were given in this document.
== See also ==
Possibility of transmission of Kerala School results to Europe
List of scientific priority disputes
== References ==
This article incorporates text from this source, which is in the public domain: Ball, W. W. Rouse (1908). A Short Account of the History of Mathematics. New York: Macmillan.
== Sources ==
Арнольд, В. И. (1989). Гюйгенс и Барроу, Ньютон и Гук - Первые шаги математического анализа и теории катастроф. М.: Наука. p. 98. ISBN 5-02-013935-1.
Arnold, Vladimir (1990). Huygens and Barrow, Newton and Hooke: Pioneers in mathematical analysis and catastrophe theory from evolvents to quasicrystals. Translated by Primrose, Eric J.F. Birkhäuser Verlag. ISBN 3-7643-2383-3.
W. W. Rouse Ball (1908) A Short Account of the History of Mathematics, 4th ed.
Bardi, Jason Socrates (2006). The Calculus Wars: Newton, Leibniz, and the Greatest Mathematical Clash of All Time. New York: Thunder's Mouth Press. ISBN 978-1-56025-992-3.
Boyer, C. B. (1949). The History of the Calculus and its conceptual development. Dover Publications, inc.
Richard C. Brown (2012) Tangled origins of the Leibnitzian Calculus: A case study of mathematical revolution, World Scientific ISBN 9789814390804
Ivor Grattan-Guinness (1997) The Norton History of the Mathematical Sciences. W W Norton.
Hall, A. R. (1980). Philosophers at War: The Quarrel between Newton and Leibniz. Cambridge University Press. p. 356. ISBN 0-521-22732-1.
Stephen Hawking (1988) A Brief History of Time: From the Big Bang to Black Holes. Bantam Books.
Kandaswamy, Anand. The Newton/Leibniz Conflict in Context.
Meli, D. B. (1993). Equivalence and Priority: Newton versus Leibniz: Including Leibniz's Unpublished Manuscripts on the Principia. Clarendon Press. p. 318. ISBN 0-19-850143-9.
== External links ==
Gottfried Wilhelm Leibniz, Sämtliche Schriften und Briefe, Reihe VII: Mathematische Schriften, vol. 5: Infinitesimalmathematik 1674-1676, Berlin: Akademie Verlag, 2008, pp. 288–295 ("Analyseos tetragonisticae pars secunda", 29 October 1675) and 321–331 ("Methodi tangentium inversae exempla", 11 November 1675).
Gottfried Wilhelm Leibniz, "Nova Methodus pro Maximis et Minimis...", 1684 (Latin original) (English translation)
Isaac Newton, "Newton's Waste Book (Part 3) (Normalized Version)": 16 May 1666 entry (The Newton Project)
Isaac Newton, "De Analysi per Equationes Numero Terminorum Infinitas (Of the Quadrature of Curves and Analysis by Equations of an Infinite Number of Terms)", in: Sir Isaac Newton's Two Treatises, James Bettenham, 1745.
In calculus, interchange of the order of integration is a methodology that transforms iterated integrals (or multiple integrals through the use of Fubini's theorem) of functions into other, hopefully simpler, integrals by changing the order in which the integrations are performed. In some cases, the order of integration can be validly interchanged; in others it cannot.
== Problem statement ==
The problem for examination is evaluation of an integral of the form
{\displaystyle \iint _{D}\ f(x,y)\ dx\,dy,}
where D is some two-dimensional area in the xy–plane. For some functions f straightforward integration is feasible, but where that is not true, the integral can sometimes be reduced to simpler form by changing the order of integration. The difficulty with this interchange is determining the change in description of the domain D.
The method also is applicable to other multiple integrals.
Sometimes, even though a full evaluation is difficult, or perhaps requires a numerical integration, a double integral can be reduced to a single integration, as illustrated next. Reduction to a single integration makes a numerical evaluation much easier and more efficient.
== Relation to integration by parts ==
Consider the iterated integral
{\displaystyle \int _{a}^{z}\,\int _{a}^{x}\,h(y)\,dy\,dx,}
In this expression, the inner integral is calculated first with respect to y while x is held constant: a strip of width dx is integrated in the y-direction, adding up an infinite number of rectangles of width dy along the y-axis. This forms a three-dimensional slice dx wide along the x-axis, from y = a to y = x along the y-axis, with height z = h(y). Because the thickness dx is infinitesimal, x varies only infinitesimally on the slice and can be treated as constant. This integration is as shown in the left panel of Figure 1, but it is inconvenient, especially when the function h(y) is not easily integrated. The integral can be reduced to a single integration by reversing the order of integration, as shown in the right panel of the figure. To accomplish this interchange of variables, the strip of width dy is first integrated from the line x = y to the limit x = z, and then the result is integrated from y = a to y = z, resulting in:
{\displaystyle \int _{a}^{z}\int _{a}^{x}h(y)\ dy\ dx=\int _{a}^{z}h(y)\ dy\ \int _{y}^{z}dx=\int _{a}^{z}\left(z-y\right)h(y)\,dy.}
This result can be seen to be an example of the formula for integration by parts, as stated below:
{\displaystyle \int _{a}^{z}f(x)g'(x)\,dx=\left[f(x)g(x)\right]_{a}^{z}-\int _{a}^{z}f'(x)g(x)\,dx}
Substitute:
{\displaystyle f(x)=\int _{a}^{x}h(y)\,dy~{\text{ and }}~g'(x)=1.}
Substituting these into the integration-by-parts formula gives the result.
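As a concrete check of the reduction above (a sketch, not part of the original text), one can pick a specific integrand, say h(y) = e^y with a = 0, and verify the identity symbolically with SymPy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
h = sp.exp(y)   # illustrative choice of h(y), with a = 0

# Left: inner integral in y, outer in x
lhs = sp.integrate(sp.integrate(h, (y, 0, x)), (x, 0, z))
# Right: single integral after reversing the order
rhs = sp.integrate((z - y) * sp.exp(y), (y, 0, z))

assert sp.simplify(lhs - rhs) == 0   # both equal exp(z) - z - 1
print(sp.simplify(lhs))
```

The double integration has been traded for a single integral of (z − y)h(y), which is exactly the reduction described in the text.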
== Principal-value integrals ==
For application to principal-value integrals, see Whittaker and Watson, Gakhov, Lu, or Zwillinger. See also the discussion of the Poincaré-Bertrand transformation in Obolashvili. An example where the order of integration cannot be exchanged is given by Kanwal:
{\displaystyle {\frac {1}{(2\pi i)^{2}}}\int _{L}^{*}{\frac {d{\tau }_{1}}{{\tau }_{1}-t}}\ \int _{L}^{*}\ g(\tau ){\frac {d\tau }{\tau -\tau _{1}}}={\frac {1}{4}}g(t)\ ,}
while:
{\displaystyle {\frac {1}{(2\pi i)^{2}}}\int _{L}^{*}g(\tau )\ d\tau \left(\int _{L}^{*}{\frac {d\tau _{1}}{\left(\tau _{1}-t\right)\left(\tau -\tau _{1}\right)}}\right)=0\ .}
The second form is evaluated using a partial fraction expansion and the Sokhotski–Plemelj formula:
{\displaystyle \int _{L}^{*}{\frac {d\tau _{1}}{\tau _{1}-t}}=\pi \ i\ .}
The notation
{\displaystyle \int _{L}^{*}}
indicates a Cauchy principal value. See Kanwal.
== Basic theorems ==
A discussion of the basis for reversing the order of integration is found in the book Fourier Analysis by T.W. Körner. He introduces his discussion with an example where interchange of integration leads to two different answers because the conditions of Theorem II below are not satisfied. Here is the example:
{\displaystyle \int _{1}^{\infty }{\frac {x^{2}-y^{2}}{\left(x^{2}+y^{2}\right)^{2}}}\ dy=\left[{\frac {y}{x^{2}+y^{2}}}\right]_{1}^{\infty }=-{\frac {1}{1+x^{2}}}\ \left[x\geq 1\right]\ .}
{\displaystyle \int _{1}^{\infty }\left(\int _{1}^{\infty }{\frac {x^{2}-y^{2}}{\left(x^{2}+y^{2}\right)^{2}}}\ dy\right)\ dx=-{\frac {\pi }{4}}\ .}
{\displaystyle \int _{1}^{\infty }\left(\int _{1}^{\infty }{\frac {x^{2}-y^{2}}{\left(x^{2}+y^{2}\right)^{2}}}\ dx\right)\ dy={\frac {\pi }{4}}\ .}
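Körner's counterexample can be reproduced symbolically. The sketch below (a check, not from the original text, and assuming SymPy evaluates these improper integrals in closed form) computes both iterated integrals and recovers the two different answers ∓π/4:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = (x**2 - y**2) / (x**2 + y**2)**2

# Integrate in y first: the antiderivative is y/(x**2 + y**2),
# so the inner integral is -1/(1 + x**2)
inner_y = sp.integrate(f, (y, 1, sp.oo))
I_dydx = sp.integrate(inner_y, (x, 1, sp.oo))   # -pi/4

# Reversing the order flips the sign, since f(x, y) = -f(y, x)
inner_x = sp.integrate(f, (x, 1, sp.oo))
I_dxdy = sp.integrate(inner_x, (y, 1, sp.oo))   # pi/4

print(I_dydx, I_dxdy)
```

The disagreement is possible because the integrand is not absolutely integrable over the quadrant, so the hypotheses of the theorems below fail.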
Two basic theorems governing admissibility of the interchange are quoted below from Chaudhry and Zubair:
The most important theorem for the applications is quoted from Protter and Morrey:
== See also ==
Fubini's theorem
== References and notes ==
== External links ==
Paul's Online Math Notes: Calculus III
Good 3D images showing the computation of "Double Integrals" using iterated integrals, the Department of Mathematics at Oregon State University.
Ron Miech's UCLA Calculus Problems More complex examples of changing the order of integration (see Problems 33, 35, 37, 39, 41 & 43)
Duane Nykamp's University of Minnesota website
In calculus, Newton's method (also called Newton–Raphson) is an iterative method for finding the roots of a differentiable function f, which are solutions to the equation f(x) = 0. However, to optimize a twice-differentiable f, the goal is to find the roots of f′. We can therefore use Newton's method on the derivative f′ to find solutions to f′(x) = 0, also known as the critical points of f. These solutions may be minima, maxima, or saddle points; see the section "Several variables" in Critical point (mathematics) and the section "Geometric interpretation" in this article. This is relevant in optimization, which aims to find (global) minima of the function f.
== Newton's method ==
The central problem of optimization is minimization of functions. Let us first consider the case of univariate functions, i.e., functions of a single real variable. We will later consider the more general and more practically useful multivariate case.
Given a twice differentiable function f : ℝ → ℝ, we seek to solve the optimization problem
{\displaystyle \min _{x\in \mathbb {R} }f(x).}
Newton's method attempts to solve this problem by constructing a sequence {x_k} from an initial guess (starting point) x_0 ∈ ℝ that converges towards a minimizer x_* of f by using a sequence of second-order Taylor approximations of f around the iterates. The second-order Taylor expansion of f around x_k is
{\displaystyle f(x_{k}+t)\approx f(x_{k})+f'(x_{k})t+{\frac {1}{2}}f''(x_{k})t^{2}.}
The next iterate x_{k+1} is defined so as to minimize this quadratic approximation in t, with x_{k+1} = x_k + t. If the second derivative is positive, the quadratic approximation is a convex function of t, and its minimum can be found by setting the derivative to zero. Since
{\displaystyle 0={\frac {\rm {d}}{{\rm {d}}t}}\left(f(x_{k})+f'(x_{k})t+{\frac {1}{2}}f''(x_{k})t^{2}\right)=f'(x_{k})+f''(x_{k})t,}
the minimum is achieved for
{\displaystyle t=-{\frac {f'(x_{k})}{f''(x_{k})}}.}
Putting everything together, Newton's method performs the iteration
{\displaystyle x_{k+1}=x_{k}+t=x_{k}-{\frac {f'(x_{k})}{f''(x_{k})}}.}
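The univariate iteration can be sketched in a few lines (the function and starting point below are illustrative choices, not from the text): Newton's update x_{k+1} = x_k − f′(x_k)/f″(x_k) applied to f(x) = x⁴ − 3x², whose positive critical point is √(3/2):

```python
def newton_minimize_1d(df, d2f, x0, tol=1e-10, max_iter=50):
    """Newton iteration on the derivative: x <- x - f'(x)/f''(x)."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative test function: f(x) = x**4 - 3*x**2, so
# f'(x) = 4x^3 - 6x and f''(x) = 12x^2 - 6; the positive
# critical point (a local minimum) is x* = sqrt(3/2).
x_star = newton_minimize_1d(lambda x: 4 * x**3 - 6 * x,
                            lambda x: 12 * x**2 - 6,
                            x0=1.0)
print(x_star)  # ~ 1.2247448...
```

Note that the iteration only seeks a zero of f′; whether that zero is a minimum, maximum, or saddle point depends on the sign of f″ there, as the surrounding text discusses.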
== Geometric interpretation ==
The geometric interpretation of Newton's method is that at each iteration it amounts to fitting a parabola to the graph of f(x) at the trial value x_k, having the same slope and curvature as the graph at that point, and then proceeding to the maximum or minimum of that parabola (in higher dimensions, this may also be a saddle point); see below. Note that if f happens to be a quadratic function, then the exact extremum is found in one step.
== Higher dimensions ==
The above iterative scheme can be generalized to d > 1 dimensions by replacing the derivative with the gradient (different authors use different notation for the gradient, including f′(x) = ∇f(x) = g_f(x) ∈ ℝ^d), and the reciprocal of the second derivative with the inverse of the Hessian matrix (different authors use different notation for the Hessian, including f″(x) = ∇²f(x) = H_f(x) ∈ ℝ^{d×d}). One thus obtains the iterative scheme
{\displaystyle x_{k+1}=x_{k}-[f''(x_{k})]^{-1}f'(x_{k}),\qquad k\geq 0.}
Often Newton's method is modified to include a small step size 0 < γ ≤ 1 instead of γ = 1:
{\displaystyle x_{k+1}=x_{k}-\gamma [f''(x_{k})]^{-1}f'(x_{k}).}
This is often done to ensure that the Wolfe conditions, or the much simpler and more efficient Armijo condition, are satisfied at each step of the method. For step sizes other than 1, the method is often referred to as the relaxed or damped Newton's method.
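The damped multivariate iteration can be sketched as follows (all names are illustrative; the Rosenbrock function is a standard test problem, not one from the text). The Newton direction is obtained by solving a linear system rather than forming the Hessian inverse explicitly, anticipating the point made in "Computing the Newton direction" below:

```python
import numpy as np

def newton_minimize(grad, hess, x0, gamma=1.0, tol=1e-10, max_iter=100):
    """Damped Newton: solve H h = -g for the direction instead of inverting H."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        h = np.linalg.solve(hess(x), -g)  # Newton direction
        x = x + gamma * h
    return x

# Rosenbrock test function f(x, y) = (1 - x)^2 + 100*(y - x^2)^2,
# with its well-known minimizer at (1, 1).
grad = lambda v: np.array([-2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0]**2),
                           200 * (v[1] - v[0]**2)])
hess = lambda v: np.array([[2 + 1200 * v[0]**2 - 400 * v[1], -400 * v[0]],
                           [-400 * v[0], 200.0]])
x_star = newton_minimize(grad, hess, [0.0, 0.0])
print(x_star)  # ~ [1. 1.]
```

With γ = 1 and this starting point the iteration happens to land on the minimizer in two steps; in general a line search on γ is used to guarantee progress.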
== Convergence ==
If f is a strongly convex function with Lipschitz Hessian, then provided that x_0 is close enough to x_* = arg min f(x), the sequence x_0, x_1, x_2, … generated by Newton's method will converge to the (necessarily unique) minimizer x_* of f quadratically fast. That is,
{\displaystyle \|x_{k+1}-x_{*}\|\leq {\frac {1}{2}}\|x_{k}-x_{*}\|^{2},\qquad \forall k\geq 0.}
== Computing the Newton direction ==
Finding the inverse of the Hessian in high dimensions to compute the Newton direction h = −(f″(x_k))⁻¹ f′(x_k) can be an expensive operation. In such cases, instead of directly inverting the Hessian, it is better to calculate the vector h as the solution to the system of linear equations
{\displaystyle [f''(x_{k})]h=-f'(x_{k})}
which may be solved by various factorizations or approximately (but to great accuracy) using iterative methods. Many of these methods are only applicable to certain types of equations; for example, the Cholesky factorization and conjugate gradient will only work if f″(x_k) is a positive definite matrix. While this may seem like a limitation, it is often a useful indicator of something gone wrong: for example, if a minimization problem is being approached and f″(x_k) is not positive definite, then the iterations are converging to a saddle point and not a minimum.
On the other hand, if a constrained optimization is done (for example, with Lagrange multipliers), the problem may become one of saddle point finding, in which case the Hessian will be symmetric indefinite and the solution of x_{k+1} will need to be done with a method that works for such systems, such as the LDLᵀ variant of Cholesky factorization or the conjugate residual method.
There also exist various quasi-Newton methods, where an approximation for the Hessian (or its inverse directly) is built up from changes in the gradient.
If the Hessian is close to a non-invertible matrix, the inverted Hessian can be numerically unstable and the solution may diverge. In this case, certain workarounds have been tried in the past, with varied success on certain problems. One can, for example, modify the Hessian by adding a correction matrix B_k so as to make f″(x_k) + B_k positive definite. One approach is to diagonalize the Hessian and choose B_k so that f″(x_k) + B_k has the same eigenvectors as the Hessian, but with each negative eigenvalue replaced by ε > 0.
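The eigenvalue-replacement correction just described can be realized in a few lines (a minimal sketch; the function name and threshold are ours) by diagonalizing the symmetric Hessian with numpy.linalg.eigh and clipping its spectrum from below:

```python
import numpy as np

def make_positive_definite(H, eps=1e-6):
    """Return H + B, where B keeps the eigenvectors of the symmetric
    matrix H but raises every eigenvalue below eps up to eps."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.maximum(w, eps)) @ V.T

H = np.array([[1.0, 0.0],
              [0.0, -2.0]])      # indefinite "Hessian"
H_pd = make_positive_definite(H)
print(np.linalg.eigvalsh(H_pd))  # both eigenvalues now positive
```

The corrected matrix can then be used safely with Cholesky-based solvers for the Newton system.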
An approach exploited in the Levenberg–Marquardt algorithm (which uses an approximate Hessian) is to add a scaled identity matrix μI to the Hessian, with the scale adjusted at every iteration as needed. For large μ and small Hessian, the iterations will behave like gradient descent with step size 1/μ. This results in slower but more reliable convergence where the Hessian does not provide useful information.
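The damping idea can be sketched directly (illustrative names; the policy for adapting μ between iterations is omitted): for large μ the computed direction is approximately −g/μ, i.e. a small gradient-descent step, as stated above.

```python
import numpy as np

def lm_direction(H, g, mu):
    """Damped direction: solve (H + mu*I) h = -g."""
    return np.linalg.solve(H + mu * np.eye(H.shape[0]), -g)

H = np.array([[1.0, 0.0],
              [0.0, -0.5]])      # indefinite, so plain Newton is unsafe here
g = np.array([1.0, 1.0])
h = lm_direction(H, g, mu=1e6)   # heavy damping
print(h)                         # ~ -g / mu: a tiny gradient-descent step
```

For small μ the same call recovers the pure Newton direction, which is why the scale is adjusted at every iteration.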
== Some caveats ==
Newton's method, in its original version, has several caveats:
It does not work if the Hessian is not invertible. This is clear from the very definition of Newton's method, which requires taking the inverse of the Hessian.
It may not converge at all, but can enter a cycle having more than 1 point. See Newton's method § Failure analysis.
It can converge to a saddle point instead of to a local minimum, see the section "Geometric interpretation" in this article.
The popular modifications of Newton's method, such as quasi-Newton methods or Levenberg-Marquardt algorithm mentioned above, also have caveats:
For example, it is usually required that the cost function be (strongly) convex and that the Hessian be globally bounded or Lipschitz continuous; this is mentioned in the section "Convergence" in this article. If one looks at the papers by Levenberg and Marquardt referenced for the Levenberg–Marquardt algorithm, which are the original sources for that method, one sees that there is essentially no theoretical analysis in the paper by Levenberg, while the paper by Marquardt only analyses a local situation and does not prove a global convergence result. One can compare with the backtracking line search method for gradient descent, which has good theoretical guarantees under more general assumptions, and which can be implemented and works well in practical large-scale problems such as deep neural networks.
== See also ==
Quasi-Newton method
Gradient descent
Gauss–Newton algorithm
Levenberg–Marquardt algorithm
Trust region
Optimization
Nelder–Mead method
Self-concordant function - a function for which Newton's method has a very good global convergence rate.
== Notes ==
== References ==
Avriel, Mordecai (2003). Nonlinear Programming: Analysis and Methods. Dover Publishing. ISBN 0-486-43227-0.
Bonnans, J. Frédéric; Gilbert, J. Charles; Lemaréchal, Claude; Sagastizábal, Claudia A. (2006). Numerical optimization: Theoretical and practical aspects. Universitext (Second revised ed. of translation of 1997 French ed.). Berlin: Springer-Verlag. doi:10.1007/978-3-540-35447-5. ISBN 3-540-35445-X. MR 2265882.
Fletcher, Roger (1987). Practical Methods of Optimization (2nd ed.). New York: John Wiley & Sons. ISBN 978-0-471-91547-8.
Givens, Geof H.; Hoeting, Jennifer A. (2013). Computational Statistics. Hoboken, New Jersey: John Wiley & Sons. pp. 24–58. ISBN 978-0-470-53331-4.
Nocedal, Jorge; Wright, Stephen J. (1999). Numerical Optimization. Springer-Verlag. ISBN 0-387-98793-2.
Kovalev, Dmitry; Mishchenko, Konstantin; Richtárik, Peter (2019). "Stochastic Newton and cubic Newton methods with simple local linear-quadratic rates". arXiv:1912.01597 [cs.LG].
== External links ==
Korenblum, Daniel (Aug 29, 2015). "Newton-Raphson visualization (1D)". Bl.ocks. ffe9653768cb80dfc0da.
In mathematics, a linear approximation is an approximation of a general function using a linear function (more precisely, an affine function). They are widely used in the method of finite differences to produce first order methods for solving or approximating solutions to equations.
== Definition ==
Given a twice continuously differentiable function f of one real variable, Taylor's theorem for the case n = 1 states that
{\displaystyle f(x)=f(a)+f'(a)(x-a)+R_{2}}
where R_2 is the remainder term. The linear approximation is obtained by dropping the remainder:
{\displaystyle f(x)\approx f(a)+f'(a)(x-a).}
This is a good approximation when x is close enough to a, since a curve, when closely observed, begins to resemble a straight line. The expression on the right-hand side is just the equation of the tangent line to the graph of f at (a, f(a)). For this reason, this process is also called the tangent line approximation. Linear approximations in this case are further improved when the second derivative at a, f″(a), is sufficiently small (close to zero), i.e. at or near an inflection point.
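The tangent-line approximation can be sketched in code (the names below are ours): approximating √x near a = 4 gives L(x) = 2 + (x − 4)/4, which is already accurate to about 10⁻⁴ at x = 4.1:

```python
import math

def linear_approx(f, df, a):
    """Return the tangent-line approximation L(x) = f(a) + f'(a)*(x - a)."""
    fa, dfa = f(a), df(a)
    return lambda x: fa + dfa * (x - a)

# Approximate sqrt near a = 4: L(x) = 2 + (x - 4)/4
L = linear_approx(math.sqrt, lambda x: 0.5 / math.sqrt(x), a=4.0)
print(L(4.1), math.sqrt(4.1))  # 2.025 vs 2.0248456...
```

The error grows as x moves away from a, in line with the remainder term dropped above.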
If f is concave down in the interval between x and a, the approximation will be an overestimate (since the derivative is decreasing in that interval). If f is concave up, the approximation will be an underestimate.
Linear approximations for vector functions of a vector variable are obtained in the same way, with the derivative at a point replaced by the Jacobian matrix. For example, given a differentiable function f(x, y) with real values, one can approximate f(x, y) for (x, y) close to (a, b) by the formula
{\displaystyle f\left(x,y\right)\approx f\left(a,b\right)+{\frac {\partial f}{\partial x}}\left(a,b\right)\left(x-a\right)+{\frac {\partial f}{\partial y}}\left(a,b\right)\left(y-b\right).}
The right-hand side is the equation of the plane tangent to the graph of z = f(x, y) at (a, b).
In the more general case of Banach spaces, one has
{\displaystyle f(x)\approx f(a)+Df(a)(x-a)}
where Df(a) is the Fréchet derivative of f at a.
== Applications ==
=== Optics ===
Gaussian optics is a technique in geometrical optics that describes the behaviour of light rays in optical systems by using the paraxial approximation, in which only rays which make small angles with the optical axis of the system are considered. In this approximation, trigonometric functions can be expressed as linear functions of the angles. Gaussian optics applies to systems in which all the optical surfaces are either flat or are portions of a sphere. In this case, simple explicit formulae can be given for parameters of an imaging system such as focal distance, magnification and brightness, in terms of the geometrical shapes and material properties of the constituent elements.
=== Period of oscillation ===
The period of swing of a simple gravity pendulum depends on its length, the local strength of gravity, and to a small extent on the maximum angle that the pendulum swings away from vertical, θ0, called the amplitude. It is independent of the mass of the bob. The true period T of a simple pendulum, the time taken for a complete cycle of an ideal simple gravity pendulum, can be written in several different forms (see pendulum), one example being the infinite series:
{\displaystyle T=2\pi {\sqrt {L \over g}}\left(1+{\frac {1}{16}}\theta _{0}^{2}+{\frac {11}{3072}}\theta _{0}^{4}+\cdots \right)}
where L is the length of the pendulum and g is the local acceleration of gravity.
However, if one takes the linear approximation (i.e. if the amplitude is limited to small swings, θ_0 ≪ 1) the period is:
{\displaystyle T\approx 2\pi {\sqrt {L \over g}}.}
In the linear approximation, the period of swing is approximately the same for different size swings: that is, the period is independent of amplitude. This property, called isochronism, is the reason pendulums are so useful for timekeeping. Successive swings of the pendulum, even if changing in amplitude, take the same amount of time.
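A quick numerical comparison illustrates how small the amplitude correction is for modest swings (a sketch; g = 9.81 m/s² and a 1 m pendulum are assumed values, not from the text):

```python
import math

def period_linear(L, g=9.81):
    """Small-angle (linear) approximation: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(L / g)

def period_series(L, theta0, g=9.81):
    """First three terms of the amplitude-dependent series above."""
    return period_linear(L, g) * (1 + theta0**2 / 16 + 11 * theta0**4 / 3072)

T0 = period_linear(1.0)                      # amplitude-independent estimate
T10 = period_series(1.0, math.radians(10))   # 10-degree swing
print(T0, T10)  # the correction at 10 degrees is only about 0.2 %
```

At a 10-degree amplitude the series correction is roughly 0.2 %, which is why isochronism holds so well for small swings.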
=== Electrical resistivity ===
The electrical resistivity of most materials changes with temperature. If the temperature T does not vary too much, a linear approximation is typically used:
{\displaystyle \rho (T)=\rho _{0}[1+\alpha (T-T_{0})]}
where α is called the temperature coefficient of resistivity, T_0 is a fixed reference temperature (usually room temperature), and ρ_0 is the resistivity at temperature T_0. The parameter α is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, α is different for different reference temperatures. For this reason it is usual to specify the temperature at which α was measured with a suffix, such as α_15, and the relationship only holds in a range of temperatures around the reference. When the temperature varies over a large temperature range, the linear approximation is inadequate and a more detailed analysis and understanding should be used.
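A minimal sketch of the model (the numeric values below are illustrative, roughly those commonly quoted for copper near room temperature, and are not taken from the text):

```python
def resistivity(T, rho0, alpha, T0=20.0):
    """Linear model: rho(T) = rho0 * (1 + alpha * (T - T0))."""
    return rho0 * (1 + alpha * (T - T0))

# Illustrative values, roughly those of copper at 20 C:
# rho0 ~ 1.68e-8 ohm-m, alpha ~ 0.00386 per kelvin.
rho_100 = resistivity(100.0, rho0=1.68e-8, alpha=0.00386)
print(rho_100)  # ~ 2.2e-8 ohm-m
```

As the text notes, the fitted α is tied to the chosen T_0, so the same material would get a slightly different α if the reference were, say, 0 °C.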
== See also ==
Binomial approximation
Euler's method
Finite differences
Finite difference methods
Newton's method
Power series
Taylor series
== Notes ==
== References ==
== Further reading ==
Weinstein, Alan; Marsden, Jerrold E. (1984). Calculus III. Berlin: Springer-Verlag. p. 775. ISBN 0-387-90985-0.
Strang, Gilbert (1991). Calculus. Wellesley College. p. 94. ISBN 0-9614088-2-0.
Bock, David; Hockett, Shirley O. (2005). How to Prepare for the AP Calculus. Hauppauge, NY: Barrons Educational Series. p. 118. ISBN 0-7641-2382-3.
In mathematics, integrals of inverse functions can be computed by means of a formula that expresses the antiderivatives of the inverse f⁻¹ of a continuous and invertible function f in terms of f⁻¹ and an antiderivative of f. This formula was published in 1905 by Charles-Ange Laisant.
== Statement of the theorem ==
Let I_1 and I_2 be two intervals of ℝ. Assume that f : I_1 → I_2 is a continuous and invertible function. It follows from the intermediate value theorem that f is strictly monotone. Consequently, f maps intervals to intervals, so is an open map and thus a homeomorphism. Since f and the inverse function f⁻¹ : I_2 → I_1 are continuous, they have antiderivatives by the fundamental theorem of calculus.
Laisant proved that if F is an antiderivative of f, then the antiderivatives of f⁻¹ are:
{\displaystyle \int f^{-1}(y)\,dy=yf^{-1}(y)-F\circ f^{-1}(y)+C,}
where C is an arbitrary real number. Note that it is not assumed that f⁻¹ is differentiable.
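Laisant's formula can be checked symbolically on a concrete case (a SymPy sketch, assuming SymPy is available): take f(x) = eˣ, so F(x) = eˣ and f⁻¹(y) = ln y, and the formula predicts ∫ ln y dy = y ln y − y + C:

```python
import sympy as sp

y = sp.symbols('y', positive=True)

# f(x) = exp(x), F(x) = exp(x), f^{-1}(y) = log(y).
# Laisant: y*f^{-1}(y) - F(f^{-1}(y)) = y*log(y) - y (up to a constant).
laisant = y * sp.log(y) - sp.exp(sp.log(y))
direct = sp.integrate(sp.log(y), y)   # y*log(y) - y
assert sp.simplify(laisant - direct) == 0
print(sp.simplify(laisant))
```

This is the familiar antiderivative of the logarithm, obtained here without integrating by parts directly.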
In his 1905 article, Laisant gave three proofs.
=== First proof ===
First, under the additional hypothesis that f⁻¹ is differentiable, one may differentiate the above formula, which completes the proof immediately.
=== Second proof ===
His second proof was geometric. If f(a) = c and f(b) = d, the theorem can be written:
{\displaystyle \int _{c}^{d}f^{-1}(y)\,dy+\int _{a}^{b}f(x)\,dx=bd-ac.}
The figure on the right is a proof without words of this formula. Laisant does not discuss the hypotheses necessary to make this proof rigorous, but it can be proved if f is just assumed to be strictly monotone (but not necessarily continuous, let alone differentiable). In this case, both f and f⁻¹ are Riemann integrable and the identity follows from a bijection between lower/upper Darboux sums of f and upper/lower Darboux sums of f⁻¹. The antiderivative version of the theorem then follows from the fundamental theorem of calculus in the case when f is also assumed to be continuous.
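The definite-integral form of the theorem can be verified on a concrete monotone function (a SymPy check; f(x) = x² on [1, 2] is our choice, so c = 1 and d = 4):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
a, b = 1, 2           # f(x) = x**2 on [1, 2]
c, d = a**2, b**2     # image interval [c, d] = [1, 4]

# The two integrals tile the region of area b*d - a*c:
# integral of f^{-1} = sqrt(y) over [c, d] plus integral of f over [a, b]
area = sp.integrate(sp.sqrt(y), (y, c, d)) + sp.integrate(x**2, (x, a, b))
assert area == b * d - a * c   # 14/3 + 7/3 == 8 - 1
print(area)  # 7
```

This is exactly the decomposition shown in the proof-without-words figure: the areas under the curve and under its inverse fill the rectangle difference bd − ac.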
=== Third proof ===
Laisant's third proof uses the additional hypothesis that f is differentiable. Beginning with f⁻¹(f(x)) = x, one multiplies by f′(x) and integrates both sides. The right-hand side is calculated using integration by parts to be
{\displaystyle xf(x)-\int f(x)\,dx,}
and the formula follows.
=== Details ===
One may also think as follows when f is differentiable. As f is continuous at any x, F(x) := ∫_0^x f is differentiable at all x by the fundamental theorem of calculus. Since f is invertible, its derivative can vanish at no more than countably many points; sort these points as ⋯ < t_{−1} < t_0 < t_1 < ⋯. Since g(y) := y f⁻¹(y) − F ∘ f⁻¹(y) + C is a composition of differentiable functions on each interval (t_i, t_{i+1}), the chain rule can be applied there:
{\displaystyle g'(y)=f^{-1}(y)+{\frac {y}{f'(f^{-1}(y))}}-f(f^{-1}(y))\cdot {\frac {1}{f'(f^{-1}(y))}}+0=f^{-1}(y),}
since f(f⁻¹(y)) = y. This shows that the restriction of g to (t_i, t_{i+1}) is an antiderivative of the restriction of f⁻¹ to that interval. We claim that g is also differentiable at each t_i and does not go unbounded if I_2 is compact; in such a case f⁻¹ is continuous and bounded. By continuity and the fundamental theorem of calculus, G(y) := C + ∫_0^y f⁻¹, where C is a constant, is a differentiable extension of g. But g is continuous, being a composition of continuous functions, and so is G by differentiability; therefore G = g. One can now use the fundamental theorem of calculus to compute ∫_{I_2} f⁻¹.
Nevertheless, it can be shown that this theorem holds even if f or f⁻¹ is not differentiable: it suffices, for example, to use the Stieltjes integral in the previous argument. On the other hand, even though general monotonic functions are differentiable almost everywhere, the proof of the general formula does not follow, unless f⁻¹ is absolutely continuous.
It is also possible to check that for every y in I₂, the derivative of the function
{\displaystyle y\mapsto yf^{-1}(y)-F(f^{-1}(y))}
is equal to f⁻¹(y). In other words:
{\displaystyle \forall x\in I_{1}\quad \lim _{h\to 0}{\frac {(x+h)f(x+h)-xf(x)-\left(F(x+h)-F(x)\right)}{f(x+h)-f(x)}}=x.}
To this end, it suffices to apply the mean value theorem to F between x and x + h, taking into account that f is monotonic.
== Examples ==
Assume that f(x) = exp(x), hence f⁻¹(y) = ln(y). The formula above gives immediately
{\displaystyle \int \ln(y)\,dy=y\ln(y)-\exp(\ln(y))+C=y\ln(y)-y+C.}
Similarly, with f(x) = cos(x) and f⁻¹(y) = arccos(y),
{\displaystyle \int \arccos(y)\,dy=y\arccos(y)-\sin(\arccos(y))+C.}
With f(x) = tan(x) and f⁻¹(y) = arctan(y),
{\displaystyle \int \arctan(y)\,dy=y\arctan(y)+\ln \left|\cos(\arctan(y))\right|+C.}
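The arctangent formula above can be spot-checked numerically: the candidate antiderivative, differentiated by central differences, should reproduce arctan. A minimal sketch (sample points chosen arbitrarily):

```python
import math

def antideriv(y):
    # candidate antiderivative from the arctan formula above
    return y * math.atan(y) + math.log(abs(math.cos(math.atan(y))))

# the numerical derivative of the candidate should equal arctan(y)
for y in (-2.0, -0.5, 0.3, 1.7):
    h = 1e-6
    d = (antideriv(y + h) - antideriv(y - h)) / (2 * h)
    assert abs(d - math.atan(y)) < 1e-6
```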
== History ==
Apparently, this theorem of integration was first discovered in 1905 by Charles-Ange Laisant, who "could hardly believe that this theorem is new", and hoped its use would henceforth spread among students and teachers. The result was published independently in 1912 by an Italian engineer, Alberto Caprilli, in an opuscule entitled "Nuove formole d'integrazione". It was rediscovered in 1955 by Parker, and by a number of mathematicians following him. Nevertheless, they all assume that f or f⁻¹ is differentiable.
The general version of the theorem, free from this additional assumption, was proposed by Michael Spivak in 1965, as an exercise in his Calculus, and a fairly complete proof following the same lines was published by Eric Key in 1994.
This proof relies on the very definition of the Darboux integral, and consists in showing that the upper Darboux sums of the function f are in one-to-one correspondence with the lower Darboux sums of f⁻¹.
In 2013, Michael Bensimhoun, judging that the general theorem was still insufficiently known, gave two other proofs. The second proof, based on the Stieltjes integral and on its formulae of integration by parts and of homeomorphic change of variables, is the most suitable to establish more complex formulae.
== Generalization to holomorphic functions ==
The above theorem generalizes in the obvious way to holomorphic functions:
Let U and V be two open and simply connected subsets of
{\displaystyle \mathbb {C} ,}
and assume that f : U → V is a biholomorphism. Then f and f⁻¹ have antiderivatives, and if F is an antiderivative of f, the general antiderivative of f⁻¹ is
{\displaystyle G(z)=zf^{-1}(z)-F\circ f^{-1}(z)+C.}
Because all holomorphic functions are differentiable, the proof is immediate by complex differentiation.
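As a concrete illustration (my own example, not from the source), take f(z) = z² on the right half-plane, a biholomorphism onto the plane cut along the negative real axis, with f⁻¹ the principal square root and F(z) = z³/3. The sketch below checks by complex finite differences that G′(z) = f⁻¹(z):

```python
import cmath

# f(z) = z**2 on the right half-plane; f_inv = principal branch of sqrt;
# F(z) = z**3/3 is an antiderivative of f.
f_inv = cmath.sqrt
F = lambda z: z ** 3 / 3

def G(z):
    # general antiderivative of f_inv (constant C omitted)
    return z * f_inv(z) - F(f_inv(z))

# complex central difference of G should reproduce f_inv
# (sample points avoid the branch cut on the negative real axis)
for z in (1 + 1j, 2 - 0.5j, 0.3 + 2j):
    h = 1e-6
    d = (G(z + h) - G(z - h)) / (2 * h)
    assert abs(d - f_inv(z)) < 1e-5
```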
== See also ==
Integration by parts
Legendre transformation
Young's inequality for products
== References ==
An ethical calculus is the application of mathematics to calculate issues in ethics.
== Scope ==
Generally, ethical calculus refers to any method of determining a course of action in a circumstance that is not explicitly evaluated in one's ethical code.
A formal philosophy of ethical calculus is a development in the study of ethics, combining elements of natural selection, self-organizing systems, emergence, and algorithm theory. According to ethical calculus, the most ethical course of action in a situation is an absolute, but rather than being based on a static ethical code, the ethical code itself is a function of circumstances. The optimal ethic is the best possible course of action taken by an individual with the given limitations.
While ethical calculus is, in some ways, similar to moral relativism, the former finds its grounds in the circumstance while the latter depends on social and cultural context for moral judgment. Ethical calculus would most accurately be regarded as a form of dynamic moral absolutism.
Ethical calculus is not to be confused with ethics in mathematics or ethics of quantification which study the moral questions coming from mathematical practice and quantification in society.
== Examples ==
Francis Hutcheson devoted a section of his 1725 work Inquiry into the Original of Our Ideas of Beauty and Virtue to "an attempt to introduce a Mathematical Calculation in subjects of Morality". Formulas included:
M = B × A
where,
M is the moral importance of any agent
B is the benevolence of the agent
A is the ability of the agent
Another example is the felicific calculus formulated by utilitarian philosopher Jeremy Bentham for calculating the degree or amount of pleasure that a specific action is likely to cause. Bentham, an ethical hedonist, believed the moral rightness or wrongness of an action to be a function of the amount of pleasure or pain that it produced. The felicific calculus could, in principle at least, determine the moral status of any considered act.
== See also ==
Ethics
Felicific calculus
Formal ethics
Moral absolutism
Morality
Science of morality
== References ==
In physics, potential energy is the energy of an object or system due to the body's position relative to other objects, or the configuration of its particles. The energy is equal to the work done against any restoring forces, such as gravity or those in a spring.
The term potential energy was introduced by the 19th-century Scottish engineer and physicist William Rankine, although it has links to the ancient Greek philosopher Aristotle's concept of potentiality.
Common types of potential energy include gravitational potential energy, the elastic potential energy of a deformed spring, and the electric potential energy of an electric charge and an electric field. The unit for energy in the International System of Units (SI) is the joule (symbol J).
Potential energy is associated with forces that act on a body in a way that the total work done by these forces on the body depends only on the initial and final positions of the body in space. These forces, whose total work is path independent, are called conservative forces. If the force acting on a body varies over space, then one has a force field; such a field is described by vectors at every point in space, which is, in turn, called a vector field. A conservative vector field can be simply expressed as the gradient of a certain scalar function, called a scalar potential. The potential energy is related to, and can be obtained from, this potential function.
== Overview ==
There are various types of potential energy, each associated with a particular type of force. For example, the work of an elastic force is called elastic potential energy; work of the gravitational force is called gravitational potential energy; work of the Coulomb force is called electric potential energy; work of the nuclear force acting on the baryon charge is called nuclear potential energy; work of intermolecular forces is called intermolecular potential energy. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of configurations of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their configuration.
Forces derivable from a potential are also called conservative forces. The work done by a conservative force is
{\displaystyle W=-\Delta U,}
where ΔU is the change in the potential energy associated with the force. The negative sign provides the convention that work done against a force field increases potential energy, while work done by the force field decreases potential energy. Common notations for potential energy are PE, U, V, and Ep.
Potential energy is the energy by virtue of an object's position relative to other objects. Potential energy is often associated with restoring forces such as a spring or the force of gravity. The action of stretching a spring or lifting a mass is performed by an external force that works against the force field of the potential. This work is stored in the force field as potential energy. If the external force is removed, the force field acts on the body to perform the work as it moves the body back to the initial position, reducing the stretch of the spring or causing the body to fall.
Consider a ball whose mass is m dropped from height h. The acceleration g of free fall is approximately constant, so the weight force of the ball mg is constant. The product of force and displacement gives the work done, which is equal to the gravitational potential energy, thus
{\displaystyle U_{\text{g}}=mgh.}
The more formal definition is that potential energy is the energy difference between the energy of an object in a given position and its energy at a reference position.
== History ==
From around 1840 scientists sought to define and understand energy and work.
The term "potential energy" was coined in 1853 by William Rankine, a Scottish engineer and physicist, as part of a specific effort to develop terminology. He chose the term as part of the pair "actual" vs "potential", going back to work by Aristotle. In his 1867 discussion of the same topic, Rankine describes potential energy as 'energy of configuration' in contrast to actual energy as 'energy of activity'. Also in 1867, William Thomson introduced "kinetic energy" as the opposite of "potential energy", asserting that all actual energy took the form (1/2)mv². Once this hypothesis became widely accepted, the term "actual energy" gradually faded.
== Work and potential energy ==
Potential energy is closely linked with forces. If the work done by a force on a body that moves from A to B does not depend on the path between these points (if the work is done by a conservative force), then the work of this force measured from A assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field.
If the work for an applied force is independent of the path, then the work done by the force is evaluated from the start to the end of the trajectory of the point of application. This means that there is a function U(x), called a "potential", that can be evaluated at the two points xA and xB to obtain the work over any trajectory between these two points. It is tradition to define this function with a negative sign so that positive work is a reduction in the potential, that is
{\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =U(\mathbf {x} _{\text{A}})-U(\mathbf {x} _{\text{B}}),}
where C is the trajectory taken from A to B. Because the work done is independent of the path taken, this expression is true for any trajectory, C, from A to B.
The function U(x) is called the potential energy associated with the applied force. Examples of forces that have potential energies are gravity and spring forces.
=== Derivable from a potential ===
In this section the relationship between work and potential energy is presented in more detail. The line integral that defines work along curve C takes a special form if the force F is related to a scalar field U′(x) so that
{\displaystyle \mathbf {F} ={\nabla U'}=\left({\frac {\partial U'}{\partial x}},{\frac {\partial U'}{\partial y}},{\frac {\partial U'}{\partial z}}\right).}
This means that the units of U′ must be units of energy. In this case, work along the curve is given by
{\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =\int _{C}\nabla U'\cdot d\mathbf {x} ,}
which can be evaluated using the gradient theorem to obtain
{\displaystyle W=U'(\mathbf {x} _{\text{B}})-U'(\mathbf {x} _{\text{A}}).}
This shows that when forces are derivable from a scalar field, the work of those forces along a curve C is computed by evaluating the scalar field at the start point A and the end point B of the curve. This means the work integral does not depend on the path between A and B and is said to be independent of the path.
Potential energy U = −U′(x) is traditionally defined as the negative of this scalar field so that work by the force field decreases potential energy, that is
{\displaystyle W=U(\mathbf {x} _{\text{A}})-U(\mathbf {x} _{\text{B}}).}
In this case, the application of the del operator to the work function yields,
{\displaystyle {\nabla W}=-{\nabla U}=-\left({\frac {\partial U}{\partial x}},{\frac {\partial U}{\partial y}},{\frac {\partial U}{\partial z}}\right)=\mathbf {F} ,}
and the force F is said to be "derivable from a potential". This also necessarily implies that F must be a conservative vector field. The potential U defines a force F at every point x in space, so the set of forces is called a force field.
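The relation F = −∇U can be checked numerically for any sample potential. The sketch below (the quadratic potential and spring constant are illustrative assumptions) estimates −∇U by central differences and compares it with the known force:

```python
# Numerically confirm F = -∇U for the sample potential
# U(x, y, z) = 0.5*k*(x² + y² + z²), whose force is F = -k·(x, y, z).
k = 3.0  # illustrative spring constant

def U(p):
    x, y, z = p
    return 0.5 * k * (x * x + y * y + z * z)

def neg_grad(U, p, h=1e-6):
    """Central-difference estimate of -∇U at point p."""
    out = []
    for i in range(3):
        q_plus = list(p); q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        out.append(-(U(q_plus) - U(q_minus)) / (2 * h))
    return out

F = neg_grad(U, (1.0, -2.0, 0.5))
expected = [-k * 1.0, -k * -2.0, -k * 0.5]
assert all(abs(a - b) < 1e-6 for a, b in zip(F, expected))
```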
=== Computing potential energy ===
Given a force field F(x), evaluation of the work integral using the gradient theorem can be used to find the scalar function associated with potential energy. This is done by introducing a parameterized curve γ(t) = r(t) from γ(a) = A to γ(b) = B, and computing,
{\displaystyle {\begin{aligned}\int _{\gamma }\nabla \Phi (\mathbf {r} )\cdot d\mathbf {r} &=\int _{a}^{b}\nabla \Phi (\mathbf {r} (t))\cdot \mathbf {r} '(t)\,dt\\&=\int _{a}^{b}{\frac {d}{dt}}\Phi (\mathbf {r} (t))\,dt=\Phi (\mathbf {r} (b))-\Phi (\mathbf {r} (a))=\Phi \left(\mathbf {x} _{B}\right)-\Phi \left(\mathbf {x} _{A}\right).\end{aligned}}}
For the force field F, let v = dr/dt, then the gradient theorem yields,
∫
γ
F
⋅
d
r
=
∫
a
b
F
⋅
v
d
t
,
=
−
∫
a
b
d
d
t
U
(
r
(
t
)
)
d
t
=
U
(
x
A
)
−
U
(
x
B
)
.
{\displaystyle {\begin{aligned}\int _{\gamma }\mathbf {F} \cdot d\mathbf {r} &=\int _{a}^{b}\mathbf {F} \cdot \mathbf {v} \,dt,\\&=-\int _{a}^{b}{\frac {d}{dt}}U(\mathbf {r} (t))\,dt=U(\mathbf {x} _{A})-U(\mathbf {x} _{B}).\end{aligned}}}
The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity v of the point of application, that is
{\displaystyle P(t)=-{\nabla U}\cdot \mathbf {v} =\mathbf {F} \cdot \mathbf {v} .}
Examples of work that can be computed from potential functions are gravity and spring forces.
== Potential energy for near-Earth gravity ==
For small height changes, gravitational potential energy can be computed using
{\displaystyle U_{\text{g}}=mgh,}
where m is the mass in kilograms, g is the local gravitational field (9.8 metres per second squared on Earth), h is the height above a reference level in metres, and U is the energy in joules.
In classical physics, gravity exerts a constant downward force F = (0, 0, Fz) on the center of mass of a body moving near the surface of the Earth. The work of gravity on a body moving along a trajectory r(t) = (x(t), y(t), z(t)), such as the track of a roller coaster, is calculated using its velocity, v = (vx, vy, vz), to obtain
{\displaystyle W=\int _{t_{1}}^{t_{2}}{\boldsymbol {F}}\cdot {\boldsymbol {v}}\,dt=\int _{t_{1}}^{t_{2}}F_{\text{z}}v_{\text{z}}\,dt=F_{\text{z}}\Delta z,}
where the integral of the vertical component of velocity is the vertical distance. The work of gravity depends only on the vertical movement of the curve r(t).
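The path independence of this work can be demonstrated numerically: integrating F·v along an arbitrary wiggly height profile gives the same answer as Fz·Δz. A minimal sketch (the mass and trajectory are invented for illustration):

```python
# The work of a constant gravity force along any trajectory depends only on
# the net vertical displacement: ∫ Fz·vz dt = Fz·Δz.
import math

m, g = 2.0, 9.8
Fz = -m * g  # downward force, N

def z_of_t(t):
    # an arbitrary wiggly height profile
    return 5.0 - 2.0 * t + 0.5 * math.sin(8 * t)

t1, t2 = 0.0, 1.5
n = 100_000
dt = (t2 - t1) / n
W = 0.0
for i in range(n):
    t = t1 + (i + 0.5) * dt
    vz = (z_of_t(t + 1e-7) - z_of_t(t - 1e-7)) / 2e-7  # numerical vz
    W += Fz * vz * dt

# matches Fz times the net vertical displacement
assert abs(W - Fz * (z_of_t(t2) - z_of_t(t1))) < 1e-3
```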
== Potential energy for a linear spring ==
A horizontal spring exerts a force F = (−kx, 0, 0) that is proportional to its deformation in the axial or x-direction. The work of this spring on a body moving along the space curve s(t) = (x(t), y(t), z(t)), is calculated using its velocity, v = (vx, vy, vz), to obtain
{\displaystyle W=\int _{0}^{t}\mathbf {F} \cdot \mathbf {v} \,dt=-\int _{0}^{t}kxv_{\text{x}}\,dt=-\int _{0}^{t}kx{\frac {dx}{dt}}\,dt=-\int _{x(t_{0})}^{x(t)}kx\,dx=-{\frac {1}{2}}kx^{2}.}
For convenience, consider that contact with the spring occurs at t = t₀ with x(t₀) = 0; then the integral of the product of the distance x and the x-velocity, xvₓ, is x²/2.
The function
{\displaystyle U(x)={\frac {1}{2}}kx^{2},}
is called the potential energy of a linear spring.
Elastic potential energy is the potential energy of an elastic object (for example a bow or a catapult) that is deformed under tension or compression (or stressed in formal terminology). It arises as a consequence of a force that tries to restore the object to its original shape, which is most often the electromagnetic force between the atoms and molecules that constitute the object. If the stretch is released, the energy is transformed into kinetic energy.
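The spring-work integral above is easy to verify numerically: integrating F·v along a sample motion x(t) = sin(t), the work done by the spring comes out as minus the stored potential energy ½kx². A sketch with illustrative values:

```python
# Integrate the spring force F = -kx along x(t) = sin(t) and check that the
# work done by the spring equals -½ k x(T)², i.e. minus the stored energy.
import math

k = 4.0          # illustrative spring constant
x = math.sin     # position
v = math.cos     # velocity dx/dt; x(0) = 0 (contact at t = 0)

T = 1.0
n = 100_000
dt = T / n
W = 0.0
for i in range(n):
    t = (i + 0.5) * dt
    W += -k * x(t) * v(t) * dt  # F·v with F = -kx

assert abs(W - (-0.5 * k * x(T) ** 2)) < 1e-6
```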
== Potential energy for gravitational forces between two bodies ==
The gravitational potential function, also known as gravitational potential energy, is:
{\displaystyle U=-{\frac {GMm}{r}}.}
The negative sign follows the convention that work is gained from a loss of potential energy.
=== Derivation ===
The gravitational force between two bodies of mass M and m separated by a distance r is given by Newton's law of universal gravitation
{\displaystyle \mathbf {F} =-{\frac {GMm}{r^{2}}}\mathbf {\hat {r}} ,}
where r̂ is a vector of length 1 pointing from M to m and G is the gravitational constant.
Let the mass m move at the velocity v; then the work of gravity on this mass as it moves from position r(t₁) to r(t₂) is given by
{\displaystyle W=-\int _{\mathbf {r} (t_{1})}^{\mathbf {r} (t_{2})}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot d\mathbf {r} =-\int _{t_{1}}^{t_{2}}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot \mathbf {v} \,dt.}
The position and velocity of the mass m are given by
{\displaystyle \mathbf {r} =r\mathbf {e} _{\text{r}},\qquad \mathbf {v} ={\dot {r}}\mathbf {e} _{\text{r}}+r{\dot {\theta }}\mathbf {e} _{\text{t}},}
where er and et are the radial and tangential unit vectors directed relative to the vector from M to m. Use this to simplify the formula for work of gravity to,
{\displaystyle W=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}(r\mathbf {e} _{\text{r}})\cdot ({\dot {r}}\mathbf {e} _{\text{r}}+r{\dot {\theta }}\mathbf {e} _{\text{t}})\,dt=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}r{\dot {r}}\,dt={\frac {GMm}{r(t_{2})}}-{\frac {GMm}{r(t_{1})}}.}
This calculation uses the fact that
{\displaystyle {\frac {d}{dt}}r^{-1}=-r^{-2}{\dot {r}}=-{\frac {\dot {r}}{r^{2}}}.}
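The end result of the derivation, W = GMm/r(t₂) − GMm/r(t₁), can be confirmed by direct numerical integration of the inverse-square force along a radial path. A sketch with an illustrative value of the product GMm:

```python
# Verify numerically that the work of inverse-square gravity along a radial
# path from r1 to r2 equals GMm/r2 - GMm/r1.
GMm = 5.0  # illustrative value of the product G*M*m

r1, r2 = 2.0, 7.0
n = 100_000
dr = (r2 - r1) / n
W = 0.0
for i in range(n):
    r = r1 + (i + 0.5) * dr
    W += -(GMm / r ** 2) * dr  # force points toward the center

# moving outward against gravity, the work of gravity is negative
assert abs(W - (GMm / r2 - GMm / r1)) < 1e-7
```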
== Potential energy for electrostatic forces between two bodies ==
The electrostatic force exerted by a charge Q on another charge q separated by a distance r is given by Coulomb's law
{\displaystyle \mathbf {F} ={\frac {1}{4\pi \varepsilon _{0}}}{\frac {Qq}{r^{2}}}\mathbf {\hat {r}} ,}
where r̂ is a vector of length 1 pointing from Q to q and ε₀ is the vacuum permittivity.
The work W required to move q from A to any point B in the electrostatic force field is given by the potential function
{\displaystyle U(r)={\frac {1}{4\pi \varepsilon _{0}}}{\frac {Qq}{r}}.}
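As a worked numerical example (charges and separation chosen for illustration): two like charges of 1 μC held 1 m apart store about 9 mJ of electrostatic potential energy.

```python
# Evaluate the Coulomb potential energy U = Qq/(4πε0 r) for two 1 μC
# charges 1 m apart; 1/(4πε0) ≈ 8.988e9 N·m²/C².
import math

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def coulomb_U(Q, q, r):
    return Q * q / (4 * math.pi * eps0 * r)

U = coulomb_U(1e-6, 1e-6, 1.0)
print(U)  # about 8.99e-3 J
```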
== Reference level ==
The potential energy is a function of the state a system is in, and is defined relative to that for a particular state. This reference state is not always a real state; it may also be a limit, such as with the distances between all bodies tending to infinity, provided that the energy involved in tending to that limit is finite, such as in the case of inverse-square law forces. Any arbitrary reference state could be used; therefore it can be chosen based on convenience.
Typically the potential energy of a system depends on the relative positions of its components only, so the reference state can also be expressed in terms of relative positions.
== Gravitational potential energy ==
Gravitational energy is the potential energy associated with gravitational force, as work is required to elevate objects against Earth's gravity. The potential energy due to elevated positions is called gravitational potential energy, and is evidenced by water in an elevated reservoir or kept behind a dam. If an object falls from one point to another point inside a gravitational field, the force of gravity will do positive work on the object, and the gravitational potential energy will decrease by the same amount.
Consider a book placed on top of a table. As the book is raised from the floor to the table, some external force works against the gravitational force. If the book falls back to the floor, the "falling" energy the book receives is provided by the gravitational force. Thus, if the book falls off the table, this potential energy goes to accelerate the mass of the book and is converted into kinetic energy. When the book hits the floor this kinetic energy is converted into heat, deformation, and sound by the impact.
The factors that affect an object's gravitational potential energy are its height relative to some reference point, its mass, and the strength of the gravitational field it is in. Thus, a book lying on a table has less gravitational potential energy than the same book on top of a taller cupboard and less gravitational potential energy than a heavier book lying on the same table. An object at a certain height above the Moon's surface has less gravitational potential energy than at the same height above the Earth's surface because the Moon's gravity is weaker. "Height" in the common sense of the term cannot be used for gravitational potential energy calculations when gravity is not assumed to be a constant. The following sections provide more detail.
=== Local approximation ===
The strength of a gravitational field varies with location. However, when the change of distance is small in relation to the distances from the center of the source of the gravitational field, this variation in field strength is negligible and we can assume that the force of gravity on a particular object is constant. Near the surface of the Earth, for example, we assume that the acceleration due to gravity is a constant g = 9.8 m/s2 (standard gravity). In this case, a simple expression for gravitational potential energy can be derived using the W = Fd equation for work, and the equation
{\displaystyle W_{\text{F}}=-\Delta U_{\text{F}}.}
The amount of gravitational potential energy held by an elevated object is equal to the work done against gravity in lifting it. The work done equals the force required to move it upward multiplied by the vertical distance it is moved (remember W = Fd). The upward force required while moving at a constant velocity is equal to the weight, mg, of an object, so the work done in lifting it through a height h is the product mgh. Thus, when accounting only for mass, gravity, and altitude, the equation is:
{\displaystyle U=mgh,}
where U is the potential energy of the object relative to its being on the Earth's surface, m is the mass of the object, g is the acceleration due to gravity, and h is the altitude of the object.
Hence, the potential difference is
{\displaystyle \Delta U=mg\Delta h.}
=== General formula ===
However, over large variations in distance, the approximation that g is constant is no longer valid, and we have to use calculus and the general mathematical definition of work to determine gravitational potential energy. For the computation of the potential energy, we can integrate the gravitational force, whose magnitude is given by Newton's law of gravitation, with respect to the distance r between the two bodies. Using that definition, the gravitational potential energy of a system of masses m1 and M2 at a distance r using the Newtonian constant of gravitation G is
{\displaystyle U=-G{\frac {m_{1}M_{2}}{r}}+K,}
where K is an arbitrary constant dependent on the choice of datum from which potential is measured. Choosing the convention that K = 0 (i.e. in relation to a point at infinity) makes calculations simpler, albeit at the cost of making U negative; for why this is physically reasonable, see below.
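The connection between the general formula and the near-Earth approximation U = mgh can be made quantitative: for small h, the exact difference of −GMm/r values reduces to mgh with a relative error of order h/R. A sketch using standard Earth values (G, Earth's mass and radius):

```python
# For small heights, the exact difference U(R+h) - U(R) of -GMm/r
# reduces to the near-Earth approximation m*g*h.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # kg, Earth mass
R = 6.371e6      # m, Earth radius
m = 1.0          # kg
h = 100.0        # m

exact = -G * M * m / (R + h) - (-G * M * m / R)   # U(R+h) - U(R)
g = G * M / R ** 2                                # local gravitational field
approx = m * g * h

# relative error is of order h/R ≈ 1.6e-5
assert abs(exact - approx) / approx < 1e-4
```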
Given this formula for U, the total potential energy of a system of n bodies is found by summing, over all n(n − 1)/2 pairs of two bodies, the potential energy of the system of those two bodies.
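This pairwise sum is straightforward to implement. A minimal sketch (the masses and positions are illustrative):

```python
# Total gravitational potential energy of n point masses as a sum over all
# n(n-1)/2 pairs, with U_pair = -G*m_i*m_j/r_ij.
import itertools
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def total_potential(masses, positions):
    U = 0.0
    for i, j in itertools.combinations(range(len(masses)), 2):
        r = math.dist(positions[i], positions[j])
        U += -G * masses[i] * masses[j] / r
    return U

# three equal masses at the corners of a unit equilateral triangle:
# three identical pairs, each contributing -G*m²/1
m = [1.0e3] * 3
p = [(0, 0), (1, 0), (0.5, math.sqrt(3) / 2)]
assert abs(total_potential(m, p) - 3 * (-G * 1e6)) < 1e-12
```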
Considering the system of bodies as the combined set of small particles the bodies consist of, and applying the previous on the particle level we get the negative gravitational binding energy. This potential energy is more strongly negative than the total potential energy of the system of bodies as such since it also includes the negative gravitational binding energy of each body. The potential energy of the system of bodies as such is the negative of the energy needed to separate the bodies from each other to infinity, while the gravitational binding energy is the energy needed to separate all particles from each other to infinity.
For example, the potential energy of a small mass m in the field of two larger masses M₁ and M₂ at distances r₁ and r₂ is
{\displaystyle U=-m\left(G{\frac {M_{1}}{r_{1}}}+G{\frac {M_{2}}{r_{2}}}\right),}
and therefore, for a collection of masses,
{\displaystyle U=-m\sum G{\frac {M}{r}}.}
=== Negative gravitational energy ===
As with all potential energies, only differences in gravitational potential energy matter for most physical purposes, and the choice of zero point is arbitrary. Given that there is no reasonable criterion for preferring one particular finite r over another, there seem to be only two reasonable choices for the distance at which U becomes zero: r = 0 and r = ∞. The choice of U = 0 at infinity may seem peculiar, and the consequence that gravitational energy is always negative may seem counterintuitive, but this choice allows gravitational potential energy values to be finite, albeit negative.
The singularity at r = 0 in the formula for gravitational potential energy means that the only other apparently reasonable alternative convention, U = 0 at r = 0, would result in potential energy being positive, but infinitely large for all nonzero values of r, and would make calculations involving sums or differences of potential energies impossible within the real number system. Since physicists abhor infinities in their calculations, and r is always non-zero in practice, the choice of U = 0 at infinity is by far the more preferable, even if the idea of negative energy in a gravity well appears peculiar at first.
The negative value for gravitational energy also has deeper implications that make it seem more reasonable in cosmological calculations where the total energy of the universe can meaningfully be considered; see inflation theory for more on this.
=== Uses ===
Gravitational potential energy has a number of practical uses, notably the generation of pumped-storage hydroelectricity. For example, in Dinorwig, Wales, there are two lakes, one at a higher elevation than the other. At times when surplus electricity is not required (and so is comparatively cheap), water is pumped up to the higher lake, thus converting the electrical energy (running the pump) to gravitational potential energy. At times of peak demand for electricity, the water flows back down through electrical generator turbines, converting the potential energy into kinetic energy and then back into electricity. The process is not completely efficient and some of the original energy from the surplus electricity is in fact lost to friction.
Gravitational potential energy is also used to power clocks in which falling weights operate the mechanism. It is also used by counterweights for lifting up an elevator, crane, or sash window.
Roller coasters are an entertaining way to utilize potential energy – chains are used to move a car up an incline (building up gravitational potential energy), to then have that energy converted into kinetic energy as it falls.
Another practical use is utilizing gravitational potential energy to descend (perhaps coast) downhill in transportation such as the descent of an automobile, truck, railroad train, bicycle, airplane, or fluid in a pipeline. In some cases the kinetic energy obtained from the potential energy of descent may be used to start ascending the next grade such as what happens when a road is undulating and has frequent dips. The commercialization of stored energy (in the form of rail cars raised to higher elevations) that is then converted to electrical energy when needed by an electrical grid, is being undertaken in the United States in a system called Advanced Rail Energy Storage (ARES).
== Chemical potential energy ==
Chemical potential energy is a form of potential energy related to the structural arrangement of atoms or molecules. This arrangement may be the result of chemical bonds within a molecule or otherwise. Chemical energy of a chemical substance can be transformed to other forms of energy by a chemical reaction. As an example, when a fuel is burned the chemical energy is converted to heat; the same is true of the digestion of food metabolized in a biological organism. Green plants transform solar energy to chemical energy through the process known as photosynthesis, and electrical energy can be converted to chemical energy through electrochemical reactions.
The similar term chemical potential is used to indicate the potential of a substance to undergo a change of configuration, be it in the form of a chemical reaction, spatial transport, particle exchange with a reservoir, etc.
== Electric potential energy ==
An object can have potential energy by virtue of its electric charge and the forces that charges exert on one another. There are two main types of this kind of potential energy: electrostatic potential energy and electrodynamic potential energy (also sometimes called magnetic potential energy).
=== Electrostatic potential energy ===
Electrostatic potential energy between two bodies in space is obtained from the force exerted by a charge Q on another charge q, which is given by
{\displaystyle \mathbf {F} _{e}=-{\frac {1}{4\pi \varepsilon _{0}}}{\frac {Qq}{r^{2}}}\mathbf {\hat {r}} ,}
where
{\displaystyle \mathbf {\hat {r}} } is a vector of length 1 pointing from Q to q and ε0 is the vacuum permittivity.
If the electric charge of an object can be assumed to be at rest, then it has potential energy due to its position relative to other charged objects. The electrostatic potential energy is the energy of an electrically charged particle (at rest) in an electric field. It is defined as the work that must be done to move it from an infinite distance away to its present location, adjusted for non-electrical forces on the object. This energy will generally be non-zero if there is another electrically charged object nearby.
The work W required to move q from A to any point B in the electrostatic force field is given by
{\displaystyle \Delta U_{AB}({\mathbf {r} })=-\int _{A}^{B}\mathbf {F_{e}} \cdot d\mathbf {r} }
typically given in J (joules). A related quantity called electric potential (commonly denoted with a V, for voltage) is equal to the electric potential energy per unit charge.
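For two point charges the integral above evaluates to U = Qq/(4πε₀r), and V = U/q. A small sketch with assumed charges and separation:

```python
import math

# Electrostatic potential energy of two point charges, U = Q*q / (4*pi*eps0*r),
# and the electric potential V = U/q of charge Q at distance r.
# Q, q, and r are illustrative values, not from the text.
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
Q = 1.0e-6                # source charge, C (assumed)
q = 2.0e-6                # test charge, C (assumed)
r = 0.05                  # separation, m (assumed)

k = 1.0 / (4.0 * math.pi * eps0)   # Coulomb constant, ~8.99e9 N m^2/C^2
U = k * Q * q / r                  # potential energy, joules
V = U / q                          # potential of Q at distance r; equals k*Q/r
print(U, V)
```

Note that V depends only on the source charge Q and the distance r, which is what makes electric potential a property of the field rather than of the test charge.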
=== Magnetic potential energy ===
A magnetic moment {\displaystyle {\boldsymbol {\mu }}} in an externally produced magnetic B-field B has potential energy
{\displaystyle U=-{\boldsymbol {\mu }}\cdot \mathbf {B} .}
The potential energy of a magnetization M in a field is
{\displaystyle U=-{\frac {1}{2}}\int \mathbf {M} \cdot \mathbf {B} \,dV,}
where the integral can be over all space or, equivalently, where M is nonzero.
Magnetic potential energy is the form of energy related not only to the distance between magnetic materials, but also to the orientation, or alignment, of those materials within the field. For example, the needle of a compass has the lowest magnetic potential energy when it is aligned with the north and south poles of the Earth's magnetic field. If the needle is moved by an outside force, torque is exerted on the magnetic dipole of the needle by the Earth's magnetic field, causing it to move back into alignment. The magnetic potential energy of the needle is highest when it points opposite to the Earth's magnetic field. Two magnets will have potential energy in relation to each other and the distance between them, but this also depends on their orientation. If the opposite poles are held apart, the potential energy will be higher the further they are apart and lower the closer they are. Conversely, like poles will have the highest potential energy when forced together, and the lowest when they spring apart.
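The compass behavior follows directly from U = −μ·B = −μB cos θ: the aligned orientation (θ = 0) minimizes the energy and the anti-aligned one (θ = π) maximizes it. A minimal sketch with an assumed needle moment:

```python
import math

# Potential energy of a magnetic moment in a field, U = -mu . B = -mu*B*cos(theta).
# mu is an assumed needle moment; B is roughly the Earth's field strength.
mu = 0.1      # magnetic moment magnitude, A*m^2 (assumed)
B = 5.0e-5    # field magnitude, T (order of the Earth's field)

def U(theta):
    """Energy as a function of the angle between mu and B."""
    return -mu * B * math.cos(theta)

print(U(0.0), U(math.pi))   # aligned (minimum) vs anti-aligned (maximum)
```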
== Nuclear potential energy ==
Nuclear potential energy is the potential energy of the particles inside an atomic nucleus. The nuclear particles are bound together by the strong nuclear force. Their rest mass provides the potential energy for certain kinds of radioactive decay, such as beta decay.
Nuclear particles like protons and neutrons are not destroyed in fission and fusion processes, but collections of them can have less mass than if they were individually free, in which case this mass difference can be liberated as heat and radiation in nuclear reactions. The process of hydrogen fusion occurring in the Sun is an example of this form of energy release – 600 million tonnes of hydrogen nuclei are fused into helium nuclei, with a loss of about 4 million tonnes of mass per second. This energy, now in the form of kinetic energy and gamma rays, keeps the solar core hot even as electromagnetic radiation carries electromagnetic energy into space.
== Forces and potential energy ==
Potential energy is closely linked with forces. If the work done by a force on a body that moves from A to B does not depend on the path between these points, then the work of this force measured from A assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field.
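The relation F = −∇U can be checked numerically. A sketch for a one-dimensional spring potential U(x) = ½kx², whose force is F = −kx (the spring constant below is an arbitrary assumption):

```python
# A conservative force is the negative gradient of its potential: F = -dU/dx.
# Checked for U(x) = 0.5*k*x^2 (a spring), whose exact force is F = -k*x.
k = 4.0          # spring constant (assumed)

def U(x):
    return 0.5 * k * x**2

def force(x, h=1e-6):
    # central difference approximation of -dU/dx
    return -(U(x + h) - U(x - h)) / (2 * h)

x = 1.5
print(force(x), -k * x)    # both approximately -6.0
```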
For example, gravity is a conservative force. The associated potential is the gravitational potential, often denoted by {\displaystyle \phi } or {\displaystyle V}, corresponding to the energy per unit mass as a function of position. The gravitational potential energy of two particles of mass M and m separated by a distance r is
{\displaystyle U=-{\frac {GMm}{r}}.}
The gravitational potential (specific energy) of the two bodies is
{\displaystyle \phi =-\left({\frac {GM}{r}}+{\frac {Gm}{r}}\right)=-{\frac {G(M+m)}{r}}=-{\frac {GMm}{\mu r}}={\frac {U}{\mu }}}
where {\displaystyle \mu } is the reduced mass.
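The identity ϕ = U/μ can be verified numerically. A sketch using approximate Earth–Moon values (rounded constants, for illustration):

```python
# Gravitational potential energy U = -G*M*m/r and potential phi = U/mu,
# where mu = M*m/(M+m) is the reduced mass. Approximate Earth-Moon values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # Earth mass, kg
m = 7.348e22         # Moon mass, kg
r = 3.844e8          # mean separation, m

U = -G * M * m / r             # potential energy of the pair, joules
mu = M * m / (M + m)           # reduced mass, kg
phi = -G * (M + m) / r         # gravitational potential of the pair, J/kg
print(U, phi)
```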
The work done against gravity by moving an infinitesimal mass from point A with {\displaystyle U=a} to point B with {\displaystyle U=b} is {\displaystyle (b-a)} and the work done going back the other way is {\displaystyle (a-b)}, so that the total work done in moving from A to B and returning to A is
{\displaystyle U_{A\to B\to A}=(b-a)+(a-b)=0.}
If the potential is redefined at A to be {\displaystyle a+c} and the potential at B to be {\displaystyle b+c}, where {\displaystyle c} is a constant (i.e. {\displaystyle c} can be any number, positive or negative, but it must be the same at A as it is at B), then the work done going from A to B is
{\displaystyle U_{A\to B}=(b+c)-(a+c)=b-a}
as before.
In practical terms, this means that one can set the zero of {\displaystyle U} and {\displaystyle \phi } anywhere one likes. One may set it to be zero at the surface of the Earth, or may find it more convenient to set zero at infinity (as in the expressions given earlier in this section).
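That freedom is easy to demonstrate: adding any constant c to the potential everywhere leaves the work U(B) − U(A) unchanged. A toy check with arbitrary numbers:

```python
# The work U(B) - U(A) is unchanged when a constant c is added to the potential
# everywhere, so the zero of potential can be placed anywhere. Toy numbers.
a, b = 3.0, 10.0          # potential energy at A and at B (assumed units)

for c in (0.0, -50.0, 123.4):
    work = (b + c) - (a + c)
    assert abs(work - (b - a)) < 1e-9   # same answer for every choice of zero
print(b - a)
```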
A conservative force can be expressed in the language of differential geometry as a closed form. As Euclidean space is contractible, its de Rham cohomology vanishes, so every closed form is also an exact form, and can be expressed as the gradient of a scalar field. This gives a mathematical justification of the fact that all conservative forces are gradients of a potential field.
== Notes ==
== References ==
Serway, Raymond A.; Jewett, John W. (2010). Physics for Scientists and Engineers (8th ed.). Brooks/Cole cengage. ISBN 978-1-4390-4844-3.
Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 0-7167-0809-4.
== External links ==
What is potential energy?
A calculus (pl.: calculi), often called a stone, is a concretion of material, usually mineral salts, that forms in an organ or duct of the body. Formation of calculi is known as lithiasis (). Stones can cause a number of medical conditions.
Some common principles (below) apply to stones at any location, but for specifics see the particular stone type in question.
Calculi are not to be confused with gastroliths, which are ingested rather than grown endogenously.
== Types ==
Calculi in the inner ear are called otoliths.
Calculi in the urinary system are called urinary calculi and include kidney stones (also called renal calculi or nephroliths) and bladder stones (also called vesical calculi or cystoliths). They can have any of several compositions, including mixed. Principal compositions include oxalate and urate.
Calculi in the prostate are called prostatic calculi.
Calculi in the mammary gland are called breast microcalcifications or mammary microcalcifications.
Calculi of the gallbladder and bile ducts are called gallstones and are primarily developed from bile salts and cholesterol derivatives.
Calculi in the nasal passages (rhinoliths) are rare.
Calculi in the gastrointestinal tract (enteroliths) can be enormous. Individual enteroliths weighing many pounds have been reported in horses.
Calculi in the stomach are called gastric calculi (not to be confused with gastroliths which are exogenous in nature).
Calculi in the salivary glands are called salivary calculi (sialoliths).
Calculi in the tonsils are called tonsillar calculi (tonsilloliths).
Calculi in the veins are called venous calculi (phleboliths).
Calculi in the skin, such as in sweat glands, are not common but occasionally occur.
Calculi in the navel are called omphaloliths.
Calculi are often asymptomatic, and large calculi may have taken many years to grow to their large size.
== Cause ==
Stones can form from an underlying abnormal excess of the mineral, e.g., elevated levels of calcium (hypercalcaemia) that may cause kidney stones, or dietary factors in the case of gallstones.
They can also form from local conditions at the site in question that promote their formation, e.g., local bacterial action (in kidney stones) or slower fluid flow rates, a possible explanation for the majority of salivary duct calculi occurring in the submandibular salivary gland.
Enteroliths are a type of calculus found in the intestines of animals (mostly ruminants) and humans, and may be composed of inorganic or organic constituents.
Bezoars are lumps of indigestible material in the stomach and/or intestines; most commonly, they consist of hair (in which case they are also known as hairballs). A bezoar may form the nidus of an enterolith.
In kidney stones, calcium oxalate is the most common mineral type (see nephrolithiasis). Uric acid is the second most common mineral type, but an in vitro study showed uric acid stones and crystals can promote the formation of calcium oxalate stones.
== Pathophysiology ==
Stones can cause disease by several mechanisms:
Irritation of nearby tissues, causing pain, swelling, and inflammation
Obstruction of an opening or duct, interfering with normal flow and disrupting the function of the organ in question
Predisposition to infection (often due to disruption of normal flow)
A number of important medical conditions are caused by stones:
Nephrolithiasis (kidney stones)
Can cause hydronephrosis (swollen kidneys) and kidney failure
Can predispose to pyelonephritis (kidney infections)
Can progress to urolithiasis
Urolithiasis (urinary bladder stones)
Can progress to bladder outlet obstruction
Cholelithiasis (gallstones)
Can predispose to cholecystitis (gall bladder infections) and ascending cholangitis (biliary tree infection)
Can progress to choledocholithiasis (gallstones in the bile duct) and gallstone pancreatitis (inflammation of the pancreas)
Gastric calculi can cause colic, obstruction, torsion, and necrosis.
== Diagnosis ==
Diagnostic workup varies by the stone type, but in general:
Clinical history and physical examination
Imaging studies:
Some stone types (mainly those with substantial calcium content) can be detected on X-ray and CT scan
Many stone types can be detected by ultrasound
Factors contributing to stone formation (as described under Cause) are often tested:
Laboratory testing can give levels of relevant substances in blood or urine
Some stones can be directly recovered (at surgery, or when they leave the body spontaneously) and sent to a laboratory for analysis of content
== Treatment ==
Modification of predisposing factors can sometimes slow or reverse stone formation. Treatment varies by stone type, but, in general:
Healthy diet and exercise (promotes flow of energy and nutrition)
Drinking fluids (water and electrolytes like lemon juice, diluted vinegar e.g. in pickles, salad dressings, sauces, soups, shrubs cocktail)
Surgery (lithotomy)
Medication / antibiotics
Extracorporeal shock wave lithotripsy (ESWL) for removal of calculi
== History ==
The earliest operation for curing stones is given in the Sushruta Samhita (6th century BCE). The operation involved exposure and going up through the floor of the bladder.
The care of this disease was forbidden to the physicians that had taken the Hippocratic Oath because:
There was a high probability of intraoperative and postoperative surgical complications like infection or bleeding
Physicians did not perform surgery themselves, because in ancient cultures medicine and surgery were two different professions
== Etymology ==
The word comes from Latin calculus "small stone", from calx "limestone, lime", probably related to Greek χάλιξ chalix "small stone, pebble, rubble", which many trace to a Proto-Indo-European language root for "split, break up". Calculus was a term used for various kinds of stones. In the 18th century it came to be used for accidental or incidental mineral buildups in human and animal bodies, like kidney stones and minerals on teeth.
== See also ==
Bezoar
Calculus (dental)
Lithotomy
== References ==
== External links ==
"The Little Treatise on the Medical Treatment of the Back and of Hemorrhoids" is a manuscript, from the 18th-century, in Arabic, which discusses the treatment of calculi | Wikipedia/Calculus_(medicine) |
The following are important identities involving derivatives and integrals in vector calculus.
== Operator notation ==
=== Gradient ===
For a function {\displaystyle f(x,y,z)} in three-dimensional Cartesian coordinate variables, the gradient is the vector field:
{\displaystyle \operatorname {grad} (f)=\nabla f={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}f={\frac {\partial f}{\partial x}}\mathbf {i} +{\frac {\partial f}{\partial y}}\mathbf {j} +{\frac {\partial f}{\partial z}}\mathbf {k} }
where i, j, k are the standard unit vectors for the x, y, z-axes. More generally, for a function of n variables {\displaystyle \psi (x_{1},\ldots ,x_{n})}, also called a scalar field, the gradient is the vector field:
{\displaystyle \nabla \psi ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x_{1}}},\ldots ,{\frac {\partial }{\partial x_{n}}}\end{pmatrix}}\psi ={\frac {\partial \psi }{\partial x_{1}}}\mathbf {e} _{1}+\dots +{\frac {\partial \psi }{\partial x_{n}}}\mathbf {e} _{n}}
where {\displaystyle \mathbf {e} _{i}\,(i=1,2,...,n)} are mutually orthogonal unit vectors.
As the name implies, the gradient is proportional to, and points in the direction of, the function's most rapid (positive) change.
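The componentwise definition above translates directly into a central-difference approximation. A sketch for an arbitrary sample function f(x, y, z) = xy + z², whose exact gradient is (y, x, 2z):

```python
# Central-difference gradient of f(x, y, z) = x*y + z**2.
# The exact gradient is (y, x, 2*z), so at (1, 2, 3) it is (2, 1, 6).
def f(x, y, z):
    return x * y + z**2

def gradient(f, p, h=1e-6):
    grads = []
    for i in range(3):
        q_plus = list(p); q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        grads.append((f(*q_plus) - f(*q_minus)) / (2 * h))
    return grads

print(gradient(f, [1.0, 2.0, 3.0]))   # close to [2.0, 1.0, 6.0]
```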
For a vector field {\displaystyle \mathbf {A} =\left(A_{1},\ldots ,A_{n}\right)}, also called a tensor field of order 1, the gradient or total derivative is the n × n Jacobian matrix:
{\displaystyle \mathbf {J} _{\mathbf {A} }=d\mathbf {A} =(\nabla \!\mathbf {A} )^{\textsf {T}}=\left({\frac {\partial A_{i}}{\partial x_{j}}}\right)_{\!ij}.}
For a tensor field {\displaystyle \mathbf {T} } of any order k, the gradient {\displaystyle \operatorname {grad} (\mathbf {T} )=d\mathbf {T} =(\nabla \mathbf {T} )^{\textsf {T}}} is a tensor field of order k + 1.
For a tensor field {\displaystyle \mathbf {T} } of order k > 0, the tensor field {\displaystyle \nabla \mathbf {T} } of order k + 1 is defined by the recursive relation
{\displaystyle (\nabla \mathbf {T} )\cdot \mathbf {C} =\nabla (\mathbf {T} \cdot \mathbf {C} )}
where {\displaystyle \mathbf {C} } is an arbitrary constant vector.
=== Divergence ===
In Cartesian coordinates, the divergence of a continuously differentiable vector field {\displaystyle \mathbf {F} =F_{x}\mathbf {i} +F_{y}\mathbf {j} +F_{z}\mathbf {k} } is the scalar-valued function:
{\displaystyle \operatorname {div} \mathbf {F} =\nabla \cdot \mathbf {F} ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}\cdot {\begin{pmatrix}F_{x},\ F_{y},\ F_{z}\end{pmatrix}}={\frac {\partial F_{x}}{\partial x}}+{\frac {\partial F_{y}}{\partial y}}+{\frac {\partial F_{z}}{\partial z}}.}
As the name implies, the divergence is a (local) measure of the degree to which vectors in the field diverge.
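The sum-of-partials definition above can be evaluated numerically. A sketch for an arbitrary sample field F = (x², xy, z), whose exact divergence is 2x + x + 1 = 3x + 1:

```python
# Central-difference divergence of F = (x**2, x*y, z).
# The exact divergence is 2*x + x + 1 = 3*x + 1, so 4.0 at x = 1.
def F(x, y, z):
    return (x**2, x * y, z)

def divergence(F, p, h=1e-6):
    total = 0.0
    for i in range(3):
        qp = list(p); qp[i] += h
        qm = list(p); qm[i] -= h
        total += (F(*qp)[i] - F(*qm)[i]) / (2 * h)
    return total

x, y, z = 1.0, 2.0, 3.0
print(divergence(F, [x, y, z]), 3 * x + 1)   # both close to 4.0
```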
The divergence of a tensor field {\displaystyle \mathbf {T} } of non-zero order k is written as {\displaystyle \operatorname {div} (\mathbf {T} )=\nabla \cdot \mathbf {T} }, a contraction of a tensor field of order k − 1. Specifically, the divergence of a vector is a scalar. The divergence of a higher-order tensor field may be found by decomposing the tensor field into a sum of outer products and using the identity,
{\displaystyle \nabla \cdot \left(\mathbf {A} \otimes \mathbf {T} \right)=\mathbf {T} (\nabla \cdot \mathbf {A} )+(\mathbf {A} \cdot \nabla )\mathbf {T} }
where {\displaystyle \mathbf {A} \cdot \nabla } is the directional derivative in the direction of {\displaystyle \mathbf {A} } multiplied by its magnitude. Specifically, for the outer product of two vectors,
{\displaystyle \nabla \cdot \left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)=\mathbf {B} (\nabla \cdot \mathbf {A} )+(\mathbf {A} \cdot \nabla )\mathbf {B} .}
For a tensor field {\displaystyle \mathbf {T} } of order k > 1, the tensor field {\displaystyle \nabla \cdot \mathbf {T} } of order k − 1 is defined by the recursive relation
{\displaystyle (\nabla \cdot \mathbf {T} )\cdot \mathbf {C} =\nabla \cdot (\mathbf {T} \cdot \mathbf {C} )}
where {\displaystyle \mathbf {C} } is an arbitrary constant vector.
=== Curl ===
In Cartesian coordinates, for {\displaystyle \mathbf {F} =F_{x}\mathbf {i} +F_{y}\mathbf {j} +F_{z}\mathbf {k} } the curl is the vector field:
{\displaystyle {\begin{aligned}\operatorname {curl} \mathbf {F} &=\nabla \times \mathbf {F} ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}\times {\begin{pmatrix}F_{x},\ F_{y},\ F_{z}\end{pmatrix}}={\begin{vmatrix}\mathbf {i} &\mathbf {j} &\mathbf {k} \\{\frac {\partial }{\partial x}}&{\frac {\partial }{\partial y}}&{\frac {\partial }{\partial z}}\\F_{x}&F_{y}&F_{z}\end{vmatrix}}\\[1em]&=\left({\frac {\partial F_{z}}{\partial y}}-{\frac {\partial F_{y}}{\partial z}}\right)\mathbf {i} +\left({\frac {\partial F_{x}}{\partial z}}-{\frac {\partial F_{z}}{\partial x}}\right)\mathbf {j} +\left({\frac {\partial F_{y}}{\partial x}}-{\frac {\partial F_{x}}{\partial y}}\right)\mathbf {k} \end{aligned}}}
where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively.
As the name implies, the curl is a measure of how much nearby vectors tend in a circular direction.
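The determinant formula above can be evaluated with central differences. A sketch for the arbitrary sample field F = (−y, x, 0), a pure rotation about the z-axis, whose exact curl is (0, 0, 2) everywhere:

```python
# Central-difference curl of F = (-y, x, 0), a field circulating about the z-axis.
# The exact curl is (0, 0, 2) at every point.
def F(x, y, z):
    return (-y, x, 0.0)

def curl(F, p, h=1e-6):
    def d(i, j):          # partial of component i with respect to coordinate j
        qp = list(p); qp[j] += h
        qm = list(p); qm[j] -= h
        return (F(*qp)[i] - F(*qm)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

print(curl(F, [0.3, -0.7, 1.2]))   # close to (0.0, 0.0, 2.0)
```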
In Einstein notation, the vector field {\displaystyle \mathbf {F} ={\begin{pmatrix}F_{1},\ F_{2},\ F_{3}\end{pmatrix}}} has curl given by:
{\displaystyle \nabla \times \mathbf {F} =\varepsilon ^{ijk}\mathbf {e} _{i}{\frac {\partial F_{k}}{\partial x_{j}}}}
where {\displaystyle \varepsilon } = ±1 or 0 is the Levi-Civita parity symbol.
For a tensor field {\displaystyle \mathbf {T} } of order k > 1, the tensor field {\displaystyle \nabla \times \mathbf {T} } of order k is defined by the recursive relation
{\displaystyle (\nabla \times \mathbf {T} )\cdot \mathbf {C} =\nabla \times (\mathbf {T} \cdot \mathbf {C} )}
where {\displaystyle \mathbf {C} } is an arbitrary constant vector.
A tensor field of order greater than one may be decomposed into a sum of outer products, and then the following identity may be used:
{\displaystyle \nabla \times \left(\mathbf {A} \otimes \mathbf {T} \right)=(\nabla \times \mathbf {A} )\otimes \mathbf {T} -\mathbf {A} \times (\nabla \mathbf {T} ).}
Specifically, for the outer product of two vectors,
{\displaystyle \nabla \times \left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)=(\nabla \times \mathbf {A} )\mathbf {B} ^{\textsf {T}}-\mathbf {A} \times (\nabla \mathbf {B} ).}
=== Laplacian ===
In Cartesian coordinates, the Laplacian of a function {\displaystyle f(x,y,z)} is
{\displaystyle \Delta f=\nabla ^{2}\!f=(\nabla \cdot \nabla )f={\frac {\partial ^{2}\!f}{\partial x^{2}}}+{\frac {\partial ^{2}\!f}{\partial y^{2}}}+{\frac {\partial ^{2}\!f}{\partial z^{2}}}.}
The Laplacian is a measure of how much a function is changing over a small sphere centered at the point.
When the Laplacian is equal to 0, the function is called a harmonic function. That is,
{\displaystyle \Delta f=0.}
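A classic harmonic function, in two variables for brevity, is f(x, y) = x² − y², since f_xx + f_yy = 2 + (−2) = 0. A sketch using a second-order central difference:

```python
# f(x, y) = x**2 - y**2 is harmonic: f_xx + f_yy = 2 + (-2) = 0 everywhere.
def f(x, y):
    return x**2 - y**2

def laplacian(f, x, y, h=1e-4):
    # second-order central differences for f_xx and f_yy
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    return fxx + fyy

print(laplacian(f, 1.3, 0.4))   # close to 0.0
```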
For a tensor field {\displaystyle \mathbf {T} }, the Laplacian is generally written as:
{\displaystyle \Delta \mathbf {T} =\nabla ^{2}\mathbf {T} =(\nabla \cdot \nabla )\mathbf {T} }
and is a tensor field of the same order.
For a tensor field {\displaystyle \mathbf {T} } of order k > 0, the tensor field {\displaystyle \nabla ^{2}\mathbf {T} } of order k is defined by the recursive relation
{\displaystyle \left(\nabla ^{2}\mathbf {T} \right)\cdot \mathbf {C} =\nabla ^{2}(\mathbf {T} \cdot \mathbf {C} )}
where {\displaystyle \mathbf {C} } is an arbitrary constant vector.
=== Special notations ===
In Feynman subscript notation,
{\displaystyle \nabla _{\mathbf {B} }\!\left(\mathbf {A{\cdot }B} \right)=\mathbf {A} {\times }\!\left(\nabla {\times }\mathbf {B} \right)+\left(\mathbf {A} {\cdot }\nabla \right)\mathbf {B} }
where the notation ∇B means the subscripted gradient operates on only the factor B.
More general but similar is the Hestenes overdot notation in geometric algebra. The above identity is then expressed as:
{\displaystyle {\dot {\nabla }}\left(\mathbf {A} {\cdot }{\dot {\mathbf {B} }}\right)=\mathbf {A} {\times }\!\left(\nabla {\times }\mathbf {B} \right)+\left(\mathbf {A} {\cdot }\nabla \right)\mathbf {B} }
where overdots define the scope of the vector derivative. The dotted vector, in this case B, is differentiated, while the (undotted) A is held constant.
The utility of the Feynman subscript notation lies in its use in the derivation of vector and tensor derivative identities, as in the following example which uses the algebraic identity C⋅(A×B) = (C×A)⋅B:
{\displaystyle {\begin{aligned}\nabla \cdot (\mathbf {A} \times \mathbf {B} )&=\nabla _{\mathbf {A} }\cdot (\mathbf {A} \times \mathbf {B} )+\nabla _{\mathbf {B} }\cdot (\mathbf {A} \times \mathbf {B} )\\[2pt]&=(\nabla _{\mathbf {A} }\times \mathbf {A} )\cdot \mathbf {B} +(\nabla _{\mathbf {B} }\times \mathbf {A} )\cdot \mathbf {B} \\[2pt]&=(\nabla _{\mathbf {A} }\times \mathbf {A} )\cdot \mathbf {B} -(\mathbf {A} \times \nabla _{\mathbf {B} })\cdot \mathbf {B} \\[2pt]&=(\nabla _{\mathbf {A} }\times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\nabla _{\mathbf {B} }\times \mathbf {B} )\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\nabla \times \mathbf {B} )\end{aligned}}}
An alternative method is to use the Cartesian components of the del operator as follows (with implicit summation over the index i):
{\displaystyle {\begin{aligned}\nabla \cdot (\mathbf {A} \times \mathbf {B} )&=\mathbf {e} _{i}\partial _{i}\cdot (\mathbf {A} \times \mathbf {B} )\\[2pt]&=\mathbf {e} _{i}\cdot \partial _{i}(\mathbf {A} \times \mathbf {B} )\\[2pt]&=\mathbf {e} _{i}\cdot (\partial _{i}\mathbf {A} \times \mathbf {B} +\mathbf {A} \times \partial _{i}\mathbf {B} )\\[2pt]&=\mathbf {e} _{i}\cdot (\partial _{i}\mathbf {A} \times \mathbf {B} )+\mathbf {e} _{i}\cdot (\mathbf {A} \times \partial _{i}\mathbf {B} )\\[2pt]&=(\mathbf {e} _{i}\times \partial _{i}\mathbf {A} )\cdot \mathbf {B} +(\mathbf {e} _{i}\times \mathbf {A} )\cdot \partial _{i}\mathbf {B} \\[2pt]&=(\mathbf {e} _{i}\times \partial _{i}\mathbf {A} )\cdot \mathbf {B} -(\mathbf {A} \times \mathbf {e} _{i})\cdot \partial _{i}\mathbf {B} \\[2pt]&=(\mathbf {e} _{i}\times \partial _{i}\mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\mathbf {e} _{i}\times \partial _{i}\mathbf {B} )\\[2pt]&=(\mathbf {e} _{i}\partial _{i}\times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\mathbf {e} _{i}\partial _{i}\times \mathbf {B} )\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\nabla \times \mathbf {B} )\end{aligned}}}
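The identity ∇·(A×B) = (∇×A)·B − A·(∇×B) derived above can be checked numerically. A sketch with arbitrary sample fields (chosen here for illustration, not taken from the text):

```python
# Numeric check of div(A x B) = curl(A).B - A.curl(B)
# for the arbitrary sample fields A = (y*z, 0, x) and B = (0, x*y, z).
h = 1e-5

def A(x, y, z): return (y * z, 0.0, x)
def B(x, y, z): return (0.0, x * y, z)

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v): return sum(a * b for a, b in zip(u, v))

def partial(F, i, j, p):      # dF_i/dx_j by central difference
    qp = list(p); qp[j] += h
    qm = list(p); qm[j] -= h
    return (F(*qp)[i] - F(*qm)[i]) / (2 * h)

def div(F, p): return sum(partial(F, i, i, p) for i in range(3))

def curl(F, p):
    return (partial(F, 2, 1, p) - partial(F, 1, 2, p),
            partial(F, 0, 2, p) - partial(F, 2, 0, p),
            partial(F, 1, 0, p) - partial(F, 0, 1, p))

p = [0.5, -1.0, 2.0]
AxB = lambda x, y, z: cross(A(x, y, z), B(x, y, z))
lhs = div(AxB, p)
rhs = dot(curl(A, p), B(*p)) - dot(A(*p), curl(B, p))
print(lhs, rhs)   # the two sides agree to finite-difference accuracy
```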
Another method of deriving vector and tensor derivative identities is to replace all occurrences of a vector in an algebraic identity by the del operator, provided that no variable occurs both inside and outside the scope of an operator or both inside the scope of one operator in a term and outside the scope of another operator in the same term (i.e., the operators must be nested). The validity of this rule follows from the validity of the Feynman method, for one may always substitute a subscripted del and then immediately drop the subscript under the condition of the rule.
For example, from the identity A⋅(B×C) = (A×B)⋅C
we may derive A⋅(∇×C) = (A×∇)⋅C but not ∇⋅(B×C) = (∇×B)⋅C,
nor from A⋅(B×A) = 0 may we derive A⋅(∇×A) = 0.
On the other hand, a subscripted del operates on all occurrences of the subscript in the term, so that A⋅(∇A×A) = ∇A⋅(A×A) = ∇⋅(A×A) = 0.
Also, from A×(A×C) = A(A⋅C) − (A⋅A)C we may derive ∇×(∇×C) = ∇(∇⋅C) − ∇2C,
but from (Aψ)⋅(Aφ) = (A⋅A)(ψφ) we may not derive (∇ψ)⋅(∇φ) = ∇2(ψφ).
A subscript c on a quantity indicates that it is temporarily considered to be a constant. Since a constant is not a variable, when the substitution rule (see the preceding paragraph) is used it, unlike a variable, may be moved into or out of the scope of a del operator, as in the following example:
{\displaystyle {\begin{aligned}\nabla \cdot (\mathbf {A} \times \mathbf {B} )&=\nabla \cdot (\mathbf {A} \times \mathbf {B} _{\mathrm {c} })+\nabla \cdot (\mathbf {A} _{\mathrm {c} }\times \mathbf {B} )\\[2pt]&=\nabla \cdot (\mathbf {A} \times \mathbf {B} _{\mathrm {c} })-\nabla \cdot (\mathbf {B} \times \mathbf {A} _{\mathrm {c} })\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} _{\mathrm {c} }-(\nabla \times \mathbf {B} )\cdot \mathbf {A} _{\mathrm {c} }\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -(\nabla \times \mathbf {B} )\cdot \mathbf {A} \end{aligned}}}
Another way to indicate that a quantity is a constant is to affix it as a subscript to the scope of a del operator, as follows:
{\displaystyle \nabla \left(\mathbf {A{\cdot }B} \right)_{\mathbf {A} }=\mathbf {A} {\times }\!\left(\nabla {\times }\mathbf {B} \right)+\left(\mathbf {A} {\cdot }\nabla \right)\mathbf {B} }
For the remainder of this article, Feynman subscript notation will be used where appropriate.
== First derivative identities ==
For scalar fields {\displaystyle \psi }, {\displaystyle \phi } and vector fields {\displaystyle \mathbf {A} }, {\displaystyle \mathbf {B} }, we have the following derivative identities.
=== Distributive properties ===
{\displaystyle {\begin{aligned}\nabla (\psi +\phi )&=\nabla \psi +\nabla \phi \\\nabla (\mathbf {A} +\mathbf {B} )&=\nabla \mathbf {A} +\nabla \mathbf {B} \\\nabla \cdot (\mathbf {A} +\mathbf {B} )&=\nabla \cdot \mathbf {A} +\nabla \cdot \mathbf {B} \\\nabla \times (\mathbf {A} +\mathbf {B} )&=\nabla \times \mathbf {A} +\nabla \times \mathbf {B} \end{aligned}}}
=== First derivative associative properties ===
{\displaystyle {\begin{aligned}(\mathbf {A} \cdot \nabla )\psi &=\mathbf {A} \cdot (\nabla \psi )\\(\mathbf {A} \cdot \nabla )\mathbf {B} &=\mathbf {A} \cdot (\nabla \mathbf {B} )\\(\mathbf {A} \times \nabla )\psi &=\mathbf {A} \times (\nabla \psi )\\(\mathbf {A} \times \nabla )\mathbf {B} &=\mathbf {A} \times (\nabla \mathbf {B} )\end{aligned}}}
=== Product rule for multiplication by a scalar ===
We have the following generalizations of the product rule in single-variable calculus.
{\displaystyle {\begin{aligned}\nabla (\psi \phi )&=\phi \,\nabla \psi +\psi \,\nabla \phi \\\nabla (\psi \mathbf {A} )&=(\nabla \psi )\mathbf {A} ^{\textsf {T}}+\psi \nabla \mathbf {A} \ =\ \nabla \psi \otimes \mathbf {A} +\psi \,\nabla \mathbf {A} \\\nabla \cdot (\psi \mathbf {A} )&=\psi \,\nabla {\cdot }\mathbf {A} +(\nabla \psi )\,{\cdot }\mathbf {A} \\\nabla {\times }(\psi \mathbf {A} )&=\psi \,\nabla {\times }\mathbf {A} +(\nabla \psi ){\times }\mathbf {A} \\\nabla ^{2}(\psi \phi )&=\psi \,\nabla ^{2\!}\phi +2\,\nabla \!\psi \cdot \!\nabla \phi +\phi \,\nabla ^{2\!}\psi \end{aligned}}}
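One of the product rules above, ∇·(ψA) = ψ ∇·A + (∇ψ)·A, can be checked with finite differences. A sketch using arbitrary sample fields (chosen for illustration):

```python
# Check div(psi * A) = psi * div(A) + grad(psi) . A
# for the arbitrary sample fields psi = x*y and A = (z, x, y**2).
h = 1e-5

def psi(x, y, z): return x * y
def A(x, y, z): return (z, x, y**2)

def psiA(x, y, z):
    s = psi(x, y, z)
    return tuple(s * a for a in A(x, y, z))

def partial_scalar(f, j, p):
    qp = list(p); qp[j] += h
    qm = list(p); qm[j] -= h
    return (f(*qp) - f(*qm)) / (2 * h)

def div(F, p):
    total = 0.0
    for i in range(3):
        qp = list(p); qp[i] += h
        qm = list(p); qm[i] -= h
        total += (F(*qp)[i] - F(*qm)[i]) / (2 * h)
    return total

p = [1.0, 2.0, -1.0]
grad_psi = [partial_scalar(psi, j, p) for j in range(3)]
lhs = div(psiA, p)
rhs = psi(*p) * div(A, p) + sum(g * a for g, a in zip(grad_psi, A(*p)))
print(lhs, rhs)
```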
=== Quotient rule for division by a scalar ===
{\displaystyle {\begin{aligned}\nabla \left({\frac {\psi }{\phi }}\right)&={\frac {\phi \,\nabla \psi -\psi \,\nabla \phi }{\phi ^{2}}}\\[1em]\nabla \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla \mathbf {A} -\nabla \phi \otimes \mathbf {A} }{\phi ^{2}}}\\[1em]\nabla \cdot \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla {\cdot }\mathbf {A} -\nabla \!\phi \cdot \mathbf {A} }{\phi ^{2}}}\\[1em]\nabla \times \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla {\times }\mathbf {A} -\nabla \!\phi \,{\times }\,\mathbf {A} }{\phi ^{2}}}\\[1em]\nabla ^{2}\left({\frac {\psi }{\phi }}\right)&={\frac {\phi \,\nabla ^{2\!}\psi -2\,\phi \,\nabla \!\left({\frac {\psi }{\phi }}\right)\cdot \!\nabla \phi -\psi \,\nabla ^{2\!}\phi }{\phi ^{2}}}\end{aligned}}}
=== Chain rule ===
Let
f
(
x
)
{\displaystyle f(x)}
be a one-variable function from scalars to scalars,
{\displaystyle \mathbf {r} (t)=(x_{1}(t),\ldots ,x_{n}(t))}
a parametrized curve,
{\displaystyle \phi \!:\mathbb {R} ^{n}\to \mathbb {R} }
a function from vectors to scalars, and
{\displaystyle \mathbf {A} \!:\mathbb {R} ^{n}\to \mathbb {R} ^{n}}
a vector field. We have the following special cases of the multi-variable chain rule.
{\displaystyle {\begin{aligned}\nabla (f\circ \phi )&=\left(f'\circ \phi \right)\nabla \phi \\(\mathbf {r} \circ f)'&=(\mathbf {r} '\circ f)f'\\(\phi \circ \mathbf {r} )'&=(\nabla \phi \circ \mathbf {r} )\cdot \mathbf {r} '\\(\mathbf {A} \circ \mathbf {r} )'&=\mathbf {r} '\cdot (\nabla \mathbf {A} \circ \mathbf {r} )\\\nabla (\phi \circ \mathbf {A} )&=(\nabla \mathbf {A} )\cdot (\nabla \phi \circ \mathbf {A} )\\\nabla \cdot (\mathbf {r} \circ \phi )&=\nabla \phi \cdot (\mathbf {r} '\circ \phi )\\\nabla \times (\mathbf {r} \circ \phi )&=\nabla \phi \times (\mathbf {r} '\circ \phi )\end{aligned}}}
For a vector transformation
{\displaystyle \mathbf {x} \!:\mathbb {R} ^{n}\to \mathbb {R} ^{n}}
we have:
{\displaystyle \nabla \cdot (\mathbf {A} \circ \mathbf {x} )=\mathrm {tr} \left((\nabla \mathbf {x} )\cdot (\nabla \mathbf {A} \circ \mathbf {x} )\right)}
Here we take the trace of the dot product of two second-order tensors, which corresponds to the product of their matrices.
=== Dot product rule ===
{\displaystyle {\begin{aligned}\nabla (\mathbf {A} \cdot \mathbf {B} )&\ =\ (\mathbf {A} \cdot \nabla )\mathbf {B} \,+\,(\mathbf {B} \cdot \nabla )\mathbf {A} \,+\,\mathbf {A} {\times }(\nabla {\times }\mathbf {B} )\,+\,\mathbf {B} {\times }(\nabla {\times }\mathbf {A} )\\&\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {B} }+\mathbf {B} \cdot \mathbf {J} _{\mathbf {A} }\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,+\,(\nabla \mathbf {A} )\cdot \mathbf {B} \end{aligned}}}
where
{\displaystyle \mathbf {J} _{\mathbf {A} }=(\nabla \!\mathbf {A} )^{\textsf {T}}=(\partial A_{i}/\partial x_{j})_{ij}}
denotes the Jacobian matrix of the vector field
{\displaystyle \mathbf {A} =(A_{1},\ldots ,A_{n})}
.
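The dot product rule can likewise be verified symbolically. In the sketch below, `adv(P, Q)` implements the directional derivative (P · ∇)Q componentwise; the fields `A` and `B` are illustrative choices.

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
A = x * y * N.i + z**2 * N.j + sp.cos(x) * N.k   # illustrative fields
B = sp.sin(y) * N.i + x * z * N.j + y * N.k

def adv(P, Q):
    # (P . del) Q, i.e. the directional derivative of Q along P, componentwise
    comps = [P.dot(gradient(Q.dot(e))) for e in (N.i, N.j, N.k)]
    return comps[0] * N.i + comps[1] * N.j + comps[2] * N.k

lhs = gradient(A.dot(B))
rhs = adv(A, B) + adv(B, A) + A.cross(curl(B)) + B.cross(curl(A))
diff = lhs - rhs
assert all(sp.simplify(diff.dot(e)) == 0 for e in (N.i, N.j, N.k))
```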
Alternatively, using Feynman subscript notation,
{\displaystyle \nabla (\mathbf {A} \cdot \mathbf {B} )=\nabla _{\mathbf {A} }(\mathbf {A} \cdot \mathbf {B} )+\nabla _{\mathbf {B} }(\mathbf {A} \cdot \mathbf {B} )\ .}
As a special case, when A = B,
{\displaystyle {\tfrac {1}{2}}\nabla \left(\mathbf {A} \cdot \mathbf {A} \right)\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {A} }\ =\ (\nabla \mathbf {A} )\cdot \mathbf {A} \ =\ (\mathbf {A} {\cdot }\nabla )\mathbf {A} \,+\,\mathbf {A} {\times }(\nabla {\times }\mathbf {A} )\ =\ A\nabla A.}
The generalization of the dot product formula to Riemannian manifolds is a defining property of a Riemannian connection, which differentiates a vector field to give a vector-valued 1-form.
=== Cross product rule ===
{\displaystyle {\begin{aligned}\nabla (\mathbf {A} \times \mathbf {B} )&\ =\ (\nabla \mathbf {A} )\times \mathbf {B} \,-\,(\nabla \mathbf {B} )\times \mathbf {A} \\[5pt]\nabla \cdot (\mathbf {A} \times \mathbf {B} )&\ =\ (\nabla {\times }\mathbf {A} )\cdot \mathbf {B} \,-\,\mathbf {A} \cdot (\nabla {\times }\mathbf {B} )\\[5pt]\nabla \times (\mathbf {A} \times \mathbf {B} )&\ =\ \mathbf {A} (\nabla {\cdot }\mathbf {B} )\,-\,\mathbf {B} (\nabla {\cdot }\mathbf {A} )\,+\,(\mathbf {B} {\cdot }\nabla )\mathbf {A} \,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ \mathbf {A} (\nabla {\cdot }\mathbf {B} )\,+\,(\mathbf {B} {\cdot }\nabla )\mathbf {A} \,-\,(\mathbf {B} (\nabla {\cdot }\mathbf {A} )\,+\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} )\\[2pt]&\ =\ \nabla {\cdot }\left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)\,-\,\nabla {\cdot }\left(\mathbf {B} \mathbf {A} ^{\textsf {T}}\right)\\[2pt]&\ =\ \nabla {\cdot }\left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\,-\,\mathbf {B} \mathbf {A} ^{\textsf {T}}\right)\\[5pt]\mathbf {A} \times (\nabla \times \mathbf {B} )&\ =\ \nabla _{\mathbf {B} }(\mathbf {A} {\cdot }\mathbf {B} )\,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {B} }\,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,-\,\mathbf {A} \cdot (\nabla \mathbf {B} )\\[2pt]&\ =\ \mathbf {A} \cdot (\mathbf {J} _{\mathbf {B} }\,-\,\mathbf {J} _{\mathbf {B} }^{\textsf {T}})\\[5pt](\mathbf {A} \times \nabla )\times \mathbf {B} &\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,-\,\mathbf {A} (\nabla {\cdot }\mathbf {B} )\\[2pt]&\ =\ \mathbf {A} \times (\nabla \times \mathbf {B} )\,+\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \,-\,\mathbf {A} (\nabla {\cdot }\mathbf {B} )\\[5pt](\mathbf {A} \times \nabla )\cdot \mathbf {B} &\ =\ \mathbf {A} \cdot (\nabla {\times }\mathbf {B} )\end{aligned}}}
Note that the matrix
{\displaystyle \mathbf {J} _{\mathbf {B} }\,-\,\mathbf {J} _{\mathbf {B} }^{\textsf {T}}}
is antisymmetric.
== Second derivative identities ==
=== Divergence of curl is zero ===
The divergence of the curl of any continuously twice-differentiable vector field A is always zero:
{\displaystyle \nabla \cdot (\nabla \times \mathbf {A} )=0}
This is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex.
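This identity can also be checked numerically with central finite differences, using only the standard library. Because central-difference operators commute, div(curl A) cancels to rounding error for any smooth field; the field `A` and evaluation point below are arbitrary illustrative choices.

```python
import math

h = 1e-4  # step for central differences

def A(x, y, z):
    # an arbitrary smooth test field (illustrative, not from the article)
    return (math.sin(y) * z, x * z * z, math.exp(x) * y)

def d(f, i, x, y, z):
    # central difference of scalar function f w.r.t. coordinate i
    p = [x, y, z]
    p[i] += h; fp = f(*p)
    p[i] -= 2 * h; fm = f(*p)
    return (fp - fm) / (2 * h)

def curl_A(x, y, z):
    Ax = lambda *q: A(*q)[0]; Ay = lambda *q: A(*q)[1]; Az = lambda *q: A(*q)[2]
    return (d(Az, 1, x, y, z) - d(Ay, 2, x, y, z),
            d(Ax, 2, x, y, z) - d(Az, 0, x, y, z),
            d(Ay, 0, x, y, z) - d(Ax, 1, x, y, z))

# divergence of the curl at an arbitrary point: zero up to rounding error
div_curl = sum(d(lambda *q, i=i: curl_A(*q)[i], i, 0.3, -1.2, 0.7)
               for i in range(3))
assert abs(div_curl) < 1e-6
```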
=== Divergence of gradient is Laplacian ===
The Laplacian of a scalar field is the divergence of its gradient:
{\displaystyle \Delta \psi =\nabla ^{2}\psi =\nabla \cdot (\nabla \psi )}
The result is a scalar quantity.
=== Divergence of divergence is not defined ===
The divergence of a vector field A is a scalar, and the divergence of a scalar quantity is undefined. Therefore,
{\displaystyle \nabla \cdot (\nabla \cdot \mathbf {A} ){\text{ is undefined.}}}
=== Curl of gradient is zero ===
The curl of the gradient of any continuously twice-differentiable scalar field
{\displaystyle \varphi }
(i.e., differentiability class
{\displaystyle C^{2}}
) is always the zero vector:
{\displaystyle \nabla \times (\nabla \varphi )=\mathbf {0} .}
It can be easily proved by expressing
{\displaystyle \nabla \times (\nabla \varphi )}
in a Cartesian coordinate system with Schwarz's theorem (also called Clairaut's theorem on equality of mixed partials). This result is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex.
=== Curl of curl ===
{\displaystyle \nabla \times \left(\nabla \times \mathbf {A} \right)\ =\ \nabla (\nabla {\cdot }\mathbf {A} )\,-\,\nabla ^{2\!}\mathbf {A} }
Here ∇2 is the vector Laplacian operating on the vector field A.
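The curl-of-curl decomposition can be confirmed symbolically as well; in the sketch below the field `A` is an arbitrary illustrative choice, and `laplacian` applies the vector Laplacian componentwise in Cartesian coordinates.

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient, laplacian

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
A = x * y**2 * N.i + sp.sin(z) * N.j + x * z * N.k   # illustrative field

# curl(curl A) = grad(div A) - laplacian(A)
diff2 = curl(curl(A)) - (gradient(divergence(A)) - laplacian(A))
assert all(sp.simplify(diff2.dot(e)) == 0 for e in (N.i, N.j, N.k))
```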
=== Curl of divergence is not defined ===
The divergence of a vector field A is a scalar, and the curl of a scalar quantity is undefined. Therefore,
{\displaystyle \nabla \times (\nabla \cdot \mathbf {A} ){\text{ is undefined.}}}
=== Second derivative associative properties ===
{\displaystyle {\begin{aligned}(\nabla \cdot \nabla )\psi &=\nabla \cdot (\nabla \psi )=\nabla ^{2}\psi \\(\nabla \cdot \nabla )\mathbf {A} &=\nabla \cdot (\nabla \mathbf {A} )=\nabla ^{2}\mathbf {A} \\(\nabla \times \nabla )\psi &=\nabla \times (\nabla \psi )=\mathbf {0} \\(\nabla \times \nabla )\mathbf {A} &=\nabla \times (\nabla \mathbf {A} )=\mathbf {0} \end{aligned}}}
=== A mnemonic ===
The figure to the right is a mnemonic for some of these identities. The abbreviations used are:
D: divergence,
C: curl,
G: gradient,
L: Laplacian,
CC: curl of curl.
Each arrow is labeled with the result of an identity, specifically, the result of applying the operator at the arrow's tail to the operator at its head. The blue circle in the middle means curl of curl exists, whereas the other two red circles (dashed) mean that DD and GG do not exist.
== Summary of important identities ==
=== Differentiation ===
==== Gradient ====
{\displaystyle \nabla (\psi +\phi )=\nabla \psi +\nabla \phi }
{\displaystyle \nabla (\psi \phi )=\phi \nabla \psi +\psi \nabla \phi }
{\displaystyle \nabla (\psi \mathbf {A} )=\nabla \psi \otimes \mathbf {A} +\psi \nabla \mathbf {A} }
{\displaystyle \nabla (\mathbf {A} \cdot \mathbf {B} )=(\mathbf {A} \cdot \nabla )\mathbf {B} +(\mathbf {B} \cdot \nabla )\mathbf {A} +\mathbf {A} \times (\nabla \times \mathbf {B} )+\mathbf {B} \times (\nabla \times \mathbf {A} )}
==== Divergence ====
{\displaystyle \nabla \cdot (\mathbf {A} +\mathbf {B} )=\nabla \cdot \mathbf {A} +\nabla \cdot \mathbf {B} }
{\displaystyle \nabla \cdot \left(\psi \mathbf {A} \right)=\psi \nabla \cdot \mathbf {A} +\mathbf {A} \cdot \nabla \psi }
{\displaystyle \nabla \cdot \left(\mathbf {A} \times \mathbf {B} \right)=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -(\nabla \times \mathbf {B} )\cdot \mathbf {A} }
==== Curl ====
{\displaystyle \nabla \times (\mathbf {A} +\mathbf {B} )=\nabla \times \mathbf {A} +\nabla \times \mathbf {B} }
{\displaystyle \nabla \times \left(\psi \mathbf {A} \right)=\psi \,(\nabla \times \mathbf {A} )-(\mathbf {A} \times \nabla )\psi =\psi \,(\nabla \times \mathbf {A} )+(\nabla \psi )\times \mathbf {A} }
{\displaystyle \nabla \times \left(\psi \nabla \phi \right)=\nabla \psi \times \nabla \phi }
{\displaystyle \nabla \times \left(\mathbf {A} \times \mathbf {B} \right)=\mathbf {A} \left(\nabla \cdot \mathbf {B} \right)-\mathbf {B} \left(\nabla \cdot \mathbf {A} \right)+\left(\mathbf {B} \cdot \nabla \right)\mathbf {A} -\left(\mathbf {A} \cdot \nabla \right)\mathbf {B} }
==== Vector-dot-Del Operator ====
{\displaystyle (\mathbf {A} \cdot \nabla )\mathbf {B} ={\frac {1}{2}}{\bigg [}\nabla (\mathbf {A} \cdot \mathbf {B} )-\nabla \times (\mathbf {A} \times \mathbf {B} )-\mathbf {B} \times (\nabla \times \mathbf {A} )-\mathbf {A} \times (\nabla \times \mathbf {B} )-\mathbf {B} (\nabla \cdot \mathbf {A} )+\mathbf {A} (\nabla \cdot \mathbf {B} ){\bigg ]}}
{\displaystyle (\mathbf {A} \cdot \nabla )\mathbf {A} ={\frac {1}{2}}\nabla |\mathbf {A} |^{2}-\mathbf {A} \times (\nabla \times \mathbf {A} )={\frac {1}{2}}\nabla |\mathbf {A} |^{2}+(\nabla \times \mathbf {A} )\times \mathbf {A} }
{\displaystyle \mathbf {A} \cdot \nabla (\mathbf {B} \cdot \mathbf {B} )=2\mathbf {B} \cdot (\mathbf {A} \cdot \nabla )\mathbf {B} }
==== Second derivatives ====
{\displaystyle \nabla \cdot (\nabla \times \mathbf {A} )=0}
{\displaystyle \nabla \times (\nabla \psi )=\mathbf {0} }
{\displaystyle \nabla \cdot (\nabla \psi )=\nabla ^{2}\psi }
(scalar Laplacian)
{\displaystyle \nabla \left(\nabla \cdot \mathbf {A} \right)-\nabla \times \left(\nabla \times \mathbf {A} \right)=\nabla ^{2}\mathbf {A} }
(vector Laplacian)
{\displaystyle \nabla \cdot {\big [}\nabla \mathbf {A} +(\nabla \mathbf {A} )^{\textsf {T}}{\big ]}=\nabla ^{2}\mathbf {A} +\nabla (\nabla \cdot \mathbf {A} )}
{\displaystyle \nabla \cdot (\phi \nabla \psi )=\phi \nabla ^{2}\psi +\nabla \phi \cdot \nabla \psi }
{\displaystyle \psi \nabla ^{2}\phi -\phi \nabla ^{2}\psi =\nabla \cdot \left(\psi \nabla \phi -\phi \nabla \psi \right)}
{\displaystyle \nabla ^{2}(\phi \psi )=\phi \nabla ^{2}\psi +2(\nabla \phi )\cdot (\nabla \psi )+\left(\nabla ^{2}\phi \right)\psi }
{\displaystyle \nabla ^{2}(\psi \mathbf {A} )=\mathbf {A} \nabla ^{2}\psi +2(\nabla \psi \cdot \nabla )\mathbf {A} +\psi \nabla ^{2}\mathbf {A} }
{\displaystyle \nabla ^{2}(\mathbf {A} \cdot \mathbf {B} )=\mathbf {A} \cdot \nabla ^{2}\mathbf {B} -\mathbf {B} \cdot \nabla ^{2}\!\mathbf {A} +2\nabla \cdot ((\mathbf {B} \cdot \nabla )\mathbf {A} +\mathbf {B} \times (\nabla \times \mathbf {A} ))}
(Green's vector identity)
==== Third derivatives ====
{\displaystyle \nabla ^{2}(\nabla \psi )=\nabla (\nabla \cdot (\nabla \psi ))=\nabla \left(\nabla ^{2}\psi \right)}
{\displaystyle \nabla ^{2}(\nabla \cdot \mathbf {A} )=\nabla \cdot (\nabla (\nabla \cdot \mathbf {A} ))=\nabla \cdot \left(\nabla ^{2}\mathbf {A} \right)}
{\displaystyle \nabla ^{2}(\nabla \times \mathbf {A} )=-\nabla \times (\nabla \times (\nabla \times \mathbf {A} ))=\nabla \times \left(\nabla ^{2}\mathbf {A} \right)}
=== Integration ===
Below, the curly symbol ∂ means "boundary of" a surface or solid.
==== Surface–volume integrals ====
In the following surface–volume integral theorems, V denotes a three-dimensional volume with a corresponding two-dimensional boundary S = ∂V (a closed surface):
{\displaystyle \oiint _{\partial V}\psi \,d\mathbf {S} \ =\ \iiint _{V}\nabla \psi \,dV}
{\displaystyle \oiint _{\partial V}\mathbf {A} \cdot d\mathbf {S} \ =\ \iiint _{V}\nabla \cdot \mathbf {A} \,dV}
(divergence theorem)
{\displaystyle \oiint _{\partial V}\mathbf {A} \times d\mathbf {S} \ =\ -\iiint _{V}\nabla \times \mathbf {A} \,dV}
{\displaystyle \oiint _{\partial V}\psi \nabla \!\varphi \cdot d\mathbf {S} \ =\ \iiint _{V}\left(\psi \nabla ^{2}\!\varphi +\nabla \!\varphi \cdot \nabla \!\psi \right)\,dV}
(Green's first identity)
{\displaystyle \oiint _{\partial V}\left(\psi \nabla \!\varphi -\varphi \nabla \!\psi \right)\cdot d\mathbf {S} \ =\ \oiint _{\partial V}\left(\psi {\frac {\partial \varphi }{\partial n}}-\varphi {\frac {\partial \psi }{\partial n}}\right)dS\ =\ \iiint _{V}\left(\psi \nabla ^{2}\!\varphi -\varphi \nabla ^{2}\!\psi \right)\,dV}
(Green's second identity)
{\displaystyle \iiint _{V}\mathbf {A} \cdot \nabla \psi \,dV\ =\ \oiint _{\partial V}\psi \mathbf {A} \cdot d\mathbf {S} -\iiint _{V}\psi \nabla \cdot \mathbf {A} \,dV}
(integration by parts)
{\displaystyle \iiint _{V}\psi \nabla \cdot \mathbf {A} \,dV\ =\ \oiint _{\partial V}\psi \mathbf {A} \cdot d\mathbf {S} -\iiint _{V}\mathbf {A} \cdot \nabla \psi \,dV}
(integration by parts)
{\displaystyle \iiint _{V}\mathbf {A} \cdot \left(\nabla \times \mathbf {B} \right)\,dV\ =\ -\oiint _{\partial V}\left(\mathbf {A} \times \mathbf {B} \right)\cdot d\mathbf {S} +\iiint _{V}\left(\nabla \times \mathbf {A} \right)\cdot \mathbf {B} \,dV}
(integration by parts)
{\displaystyle \oiint _{\partial V}\mathbf {A} \times \left(d\mathbf {S} \cdot \left(\mathbf {B} \mathbf {C} ^{\textsf {T}}\right)\right)\ =\ \iiint _{V}\mathbf {A} \times \left(\nabla \cdot \left(\mathbf {B} \mathbf {C} ^{\textsf {T}}\right)\right)\,dV+\iiint _{V}\mathbf {B} \cdot (\nabla \mathbf {A} )\times \mathbf {C} \,dV}
{\displaystyle \iiint _{V}\left(\nabla \cdot \mathbf {B} +\mathbf {B} \cdot \nabla \right)\mathbf {A} \,dV\ =\ \oiint _{\partial V}\left(\mathbf {B} \cdot d\mathbf {S} \right)\mathbf {A} }
==== Curve–surface integrals ====
In the following curve–surface integral theorems, S denotes a 2d open surface with a corresponding 1d boundary C = ∂S (a closed curve):
{\displaystyle \oint _{\partial S}\mathbf {A} \cdot d{\boldsymbol {\ell }}\ =\ \iint _{S}\left(\nabla \times \mathbf {A} \right)\cdot d\mathbf {S} }
(Stokes' theorem)
{\displaystyle \oint _{\partial S}\psi \,d{\boldsymbol {\ell }}\ =\ -\iint _{S}\nabla \psi \times d\mathbf {S} }
{\displaystyle \oint _{\partial S}\mathbf {A} \times d{\boldsymbol {\ell }}\ =\ -\iint _{S}\left(\nabla \mathbf {A} -(\nabla \cdot \mathbf {A} )\mathbf {1} \right)\cdot d\mathbf {S} \ =\ -\iint _{S}\left(d\mathbf {S} \times \nabla \right)\times \mathbf {A} }
{\displaystyle \oint _{\partial S}\mathbf {A} \times (\mathbf {B} \times d{\boldsymbol {\ell }})\ =\ \iint _{S}\left(\nabla \times \left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)\right)\cdot d\mathbf {S} +\iint _{S}\left(\nabla \cdot \left(\mathbf {B} \mathbf {A} ^{\textsf {T}}\right)\right)\times d\mathbf {S} }
{\displaystyle \oint _{\partial S}(\mathbf {B} \cdot d{\boldsymbol {\ell }})\mathbf {A} =\iint _{S}(d\mathbf {S} \cdot \left[\nabla \times \mathbf {B} -\mathbf {B} \times \nabla \right])\mathbf {A} }
Integration around a closed curve in the clockwise sense is the negative of the same line integral in the counterclockwise sense (analogous to interchanging the limits in a definite integral).
==== Endpoint-curve integrals ====
In the following endpoint–curve integral theorems, P denotes a 1d open path with signed 0d boundary points
{\displaystyle \mathbf {q} -\mathbf {p} =\partial P}
and integration along P is from
{\displaystyle \mathbf {p} }
to
{\displaystyle \mathbf {q} }
:
{\displaystyle \psi |_{\partial P}=\psi (\mathbf {q} )-\psi (\mathbf {p} )=\int _{P}\nabla \psi \cdot d{\boldsymbol {\ell }}}
(gradient theorem)
{\displaystyle \mathbf {A} |_{\partial P}=\mathbf {A} (\mathbf {q} )-\mathbf {A} (\mathbf {p} )=\int _{P}\left(d{\boldsymbol {\ell }}\cdot \nabla \right)\mathbf {A} }
{\displaystyle \mathbf {A} |_{\partial P}=\mathbf {A} (\mathbf {q} )-\mathbf {A} (\mathbf {p} )=\int _{P}\left(\nabla \mathbf {A} \right)\cdot d{\boldsymbol {\ell }}+\int _{P}\left(\nabla \times \mathbf {A} \right)\times d{\boldsymbol {\ell }}}
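The gradient theorem above is easy to verify numerically with only the standard library. The scalar field `psi` and the helical path `r` below are arbitrary illustrative choices; the line integral is approximated with the midpoint rule.

```python
import math

psi  = lambda x, y, z: x * y + math.sin(z)        # illustrative scalar field
grad = lambda x, y, z: (y, x, math.cos(z))        # its gradient, by hand
r    = lambda t: (math.cos(t), math.sin(t), t)    # path from t = 0 to t = 2
rdot = lambda t: (-math.sin(t), math.cos(t), 1.0)

n = 20_000
h = 2.0 / n
# midpoint-rule approximation of the line integral of grad(psi) along r
line_integral = sum(
    sum(g * v for g, v in zip(grad(*r(t)), rdot(t))) * h
    for t in (h * (i + 0.5) for i in range(n))
)
expected = psi(*r(2.0)) - psi(*r(0.0))            # psi(q) - psi(p)
assert abs(line_integral - expected) < 1e-5
```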
==== Tensor integrals ====
A tensor form of a vector integral theorem may be obtained by replacing the vector (or one of them) by a tensor, provided that the vector is first made to appear only as the right-most vector of each integrand. For example, Stokes' theorem becomes
{\displaystyle \oint _{\partial S}d{\boldsymbol {\ell }}\cdot \mathbf {T} \ =\ \iint _{S}d\mathbf {S} \cdot \left(\nabla \times \mathbf {T} \right)}
.
A scalar field may also be treated as a vector and replaced by a vector or tensor. For example, Green's first identity becomes
{\displaystyle \oiint _{\partial V}\psi \,d\mathbf {S} \cdot \nabla \!\mathbf {A} \ =\ \iiint _{V}\left(\psi \nabla ^{2}\!\mathbf {A} +\nabla \!\psi \cdot \nabla \!\mathbf {A} \right)\,dV}
.
Similar rules apply to algebraic and differentiation formulas. For algebraic formulas one may alternatively use the left-most vector position.
== See also ==
Comparison of vector algebra and geometric algebra
Del in cylindrical and spherical coordinates – Mathematical gradient operator in certain coordinate systems
Differentiation rules – Rules for computing derivatives of functions
Exterior calculus identities
Exterior derivative – Operation on differential forms
List of limits
Table of derivatives – Rules for computing derivatives of functions
Vector algebra relations – Formulas about vectors in three-dimensional Euclidean space
== References ==
== Further reading == | Wikipedia/Vector_calculus_identities |
Itô calculus, named after Kiyosi Itô, extends the methods of calculus to stochastic processes such as Brownian motion (see Wiener process). It has important applications in mathematical finance and stochastic differential equations.
The central concept is the Itô stochastic integral, a stochastic generalization of the Riemann–Stieltjes integral in analysis. The integrands and the integrators are now stochastic processes:
{\displaystyle Y_{t}=\int _{0}^{t}H_{s}\,dX_{s},}
where H is a locally square-integrable process adapted to the filtration generated by X (Revuz & Yor 1999, Chapter IV), which is a Brownian motion or, more generally, a semimartingale. The result of the integration is then another stochastic process. Concretely, the integral from 0 to any particular t is a random variable, defined as a limit of a certain sequence of random variables. The paths of Brownian motion fail to satisfy the requirements to be able to apply the standard techniques of calculus. So with the integrand a stochastic process, the Itô stochastic integral amounts to an integral with respect to a function which is not differentiable at any point and has infinite variation over every time interval.
The main insight is that the integral can be defined as long as the integrand H is adapted, which loosely speaking means that its value at time t can only depend on information available up until this time. Roughly speaking, one chooses a sequence of partitions of the interval from 0 to t and constructs Riemann sums. Every time we compute a Riemann sum, we use a particular instantiation of the integrator, and it is crucial which point in each of the small intervals is used to compute the value of the function; typically, the left end of the interval is used. The limit is then taken in probability as the mesh of the partition goes to zero. Numerous technical details have to be taken care of to show that this limit exists and is independent of the particular sequence of partitions.
Important results of Itô calculus include the integration by parts formula and Itô's lemma, which is a change of variables formula. These differ from the formulas of standard calculus, due to quadratic variation terms. This can be contrasted to the Stratonovich integral as an alternative formulation; it does follow the chain rule, and does not require Itô's lemma. The two integral forms can be converted to one-another. The Stratonovich integral is obtained as the limiting form of a Riemann sum that employs the average of stochastic variable over each small timestep, whereas the Itô integral considers it only at the beginning.
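The difference between the two evaluation points is easy to see numerically. In the sketch below (an illustrative simulation, not from the article), the left-endpoint sums converge to the Itô value ∫₀ᵀ B dB = (B_T² − T)/2, while the averaged sums telescope exactly to the Stratonovich value B_T²/2.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 200_000, 1.0
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)          # Brownian increments
B = np.concatenate(([0.0], np.cumsum(dB)))    # Brownian path on [0, T]

ito   = np.sum(B[:-1] * dB)                   # left endpoint: Itô sum
strat = np.sum(0.5 * (B[:-1] + B[1:]) * dB)   # averaged value: Stratonovich sum

# closed forms: Itô integral (B_T^2 - T)/2, Stratonovich integral B_T^2/2
assert abs(ito - (B[-1]**2 - T) / 2) < 0.02
assert abs(strat - B[-1]**2 / 2) < 1e-8
```

The gap between the two sums is half the quadratic variation of the path, which is why the conversion between the integrals involves a quadratic variation term.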
In mathematical finance, the described evaluation strategy of the integral is conceptualized as that we are first deciding what to do, then observing the change in the prices. The integrand is how much stock we hold, the integrator represents the movement of the prices, and the integral is how much money we have in total including what our stock is worth, at any given moment. The prices of stocks and other traded financial assets can be modeled by stochastic processes such as Brownian motion or, more often, geometric Brownian motion (see Black–Scholes). Then, the Itô stochastic integral represents the payoff of a continuous-time trading strategy consisting of holding an amount Ht of the stock at time t. In this situation, the condition that H is adapted corresponds to the necessary restriction that the trading strategy can only make use of the available information at any time. This prevents the possibility of unlimited gains through clairvoyance: buying the stock just before each uptick in the market and selling before each downtick. Similarly, the condition that H is adapted implies that the stochastic integral will not diverge when calculated as a limit of Riemann sums (Revuz & Yor 1999, Chapter IV).
== Notation ==
The process Y defined before as
{\displaystyle Y_{t}=\int _{0}^{t}H\,dX\equiv \int _{0}^{t}H_{s}\,dX_{s},}
is itself a stochastic process with time parameter t, which is also sometimes written as Y = H · X (Rogers & Williams 2000). Alternatively, the integral is often written in differential form dY = H dX, which is equivalent to Y − Y0 = H · X. As Itô calculus is concerned with continuous-time stochastic processes, it is assumed that an underlying filtered probability space is given
{\displaystyle (\Omega ,{\mathcal {F}},({\mathcal {F}}_{t})_{t\geq 0},\mathbb {P} ).}
The σ-algebra
{\displaystyle {\mathcal {F}}_{t}}
represents the information available up until time t, and a process X is adapted if Xt is
{\displaystyle {\mathcal {F}}_{t}}
-measurable. A Brownian motion B is understood to be an
{\displaystyle {\mathcal {F}}_{t}}
-Brownian motion, which is just a standard Brownian motion with the properties that Bt is
{\displaystyle {\mathcal {F}}_{t}}
-measurable and that Bt+s − Bt is independent of
{\displaystyle {\mathcal {F}}_{t}}
for all s,t ≥ 0 (Revuz & Yor 1999).
== Integration with respect to Brownian motion ==
The Itô integral can be defined in a manner similar to the Riemann–Stieltjes integral, that is as a limit in probability of Riemann sums; such a limit does not necessarily exist pathwise. Suppose that B is a Wiener process (Brownian motion) and that H is a right-continuous (càdlàg), adapted and locally bounded process. If
{\displaystyle \{\pi _{n}\}}
is a sequence of partitions of [0, t] with mesh width going to zero, then the Itô integral of H with respect to B up to time t is a random variable
{\displaystyle \int _{0}^{t}H\,dB=\lim _{n\rightarrow \infty }\sum _{[t_{i-1},t_{i}]\in \pi _{n}}H_{t_{i-1}}(B_{t_{i}}-B_{t_{i-1}}).}
It can be shown that this limit converges in probability.
For some applications, such as martingale representation theorems and local times, the integral is needed for processes that are not continuous. The predictable processes form the smallest class that is closed under taking limits of sequences and contains all adapted left-continuous processes. If H is any predictable process such that ∫0t H2 ds < ∞ for every t ≥ 0 then the integral of H with respect to B can be defined, and H is said to be B-integrable. Any such process can be approximated by a sequence Hn of left-continuous, adapted and locally bounded processes, in the sense that
{\displaystyle \int _{0}^{t}(H-H_{n})^{2}\,ds\to 0}
in probability. Then, the Itô integral is
{\displaystyle \int _{0}^{t}H\,dB=\lim _{n\to \infty }\int _{0}^{t}H_{n}\,dB}
where, again, the limit can be shown to converge in probability. The stochastic integral satisfies the Itô isometry
{\displaystyle \mathbb {E} \left[\left(\int _{0}^{t}H_{s}\,dB_{s}\right)^{2}\right]=\mathbb {E} \left[\int _{0}^{t}H_{s}^{2}\,ds\right]}
which holds when H is bounded or, more generally, when the integral on the right hand side is finite.
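The isometry can be checked by Monte Carlo simulation. The sketch below (an illustrative example with H = B) compares the two sides for the integral ∫₀¹ B dB, whose exact second moment is E[∫₀¹ B_s² ds] = ∫₀¹ s ds = 1/2.

```python
import numpy as np

rng = np.random.default_rng(1)
paths, n, t = 10_000, 200, 1.0
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), (paths, n))   # increments, one row per path
B_left = np.cumsum(dB, axis=1) - dB             # B at the left endpoint of each step

I = np.sum(B_left * dB, axis=1)                 # Itô sums for ∫ B dB, one per path
lhs = np.mean(I**2)                             # ~ E[(∫ B dB)^2]
rhs = np.mean(np.sum(B_left**2, axis=1) * dt)   # ~ E[∫ B^2 ds]

assert abs(lhs - rhs) < 0.1                     # Itô isometry
assert abs(rhs - 0.5) < 0.05                    # exact value is 1/2
```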
== Itô processes ==
An Itô process is defined to be an adapted stochastic process that can be expressed as the sum of an integral with respect to Brownian motion and an integral with respect to time,
{\displaystyle X_{t}=X_{0}+\int _{0}^{t}\sigma _{s}\,dB_{s}+\int _{0}^{t}\mu _{s}\,ds.}
Here, B is a Brownian motion and it is required that σ is a predictable B-integrable process, and μ is predictable and (Lebesgue) integrable. That is,
{\displaystyle \int _{0}^{t}(\sigma _{s}^{2}+|\mu _{s}|)\,ds<\infty }
for each t. The stochastic integral can be extended to such Itô processes,
{\displaystyle \int _{0}^{t}H\,dX=\int _{0}^{t}H_{s}\sigma _{s}\,dB_{s}+\int _{0}^{t}H_{s}\mu _{s}\,ds.}
This is defined for all locally bounded and predictable integrands. More generally, it is required that Hσ be B-integrable and Hμ be Lebesgue integrable, so that
{\displaystyle \int _{0}^{t}\left(H^{2}\sigma ^{2}+|H\mu |\right)ds<\infty .}
Such predictable processes H are called X-integrable.
An important result for the study of Itô processes is Itô's lemma. In its simplest form, for any twice continuously differentiable function f on the reals and Itô process X as described above, it states that
{\displaystyle Y_{t}=f(X_{t})}
is itself an Itô process satisfying
{\displaystyle dY_{t}=f^{\prime }(X_{t})\mu _{t}\,dt+{\tfrac {1}{2}}f^{\prime \prime }(X_{t})\sigma _{t}^{2}\,dt+f^{\prime }(X_{t})\sigma _{t}\,dB_{t}.}
This is the stochastic calculus version of the change of variables formula and chain rule. It differs from the standard result due to the additional term involving the second derivative of f, which comes from the property that Brownian motion has non-zero quadratic variation.
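Itô's lemma can be tested pathwise by simulation. In the sketch below (an illustrative example), X = B, so μ = 0 and σ = 1, and f(x) = eˣ; the lemma then says e^{B_T} − 1 should match the Euler sums of f′(B) dB + ½f″(B) dt along the same path, which fails without the ½f″ correction term.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 100_000, 1.0
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

# Ito's lemma for f(x) = exp(x), X = B:  dY = f'(B) dB + (1/2) f''(B) dt
lhs = np.exp(B[-1]) - 1.0                       # Y_T - Y_0
rhs = np.sum(np.exp(B[:-1]) * dB) + 0.5 * np.sum(np.exp(B[:-1])) * dt
assert abs(lhs - rhs) < 0.1
```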
== Semimartingales as integrators ==
The Itô integral is defined with respect to a semimartingale X. These are processes which can be decomposed as X = M + A for a local martingale M and finite variation process A. Important examples of such processes include Brownian motion, which is a martingale, and Lévy processes. For a left continuous, locally bounded and adapted process H the integral H · X exists, and can be calculated as a limit of Riemann sums. Let πn be a sequence of partitions of [0, t] with mesh going to zero,
{\displaystyle \int _{0}^{t}H\,dX=\lim _{n\to \infty }\sum _{t_{i-1},t_{i}\in \pi _{n}}H_{t_{i-1}}(X_{t_{i}}-X_{t_{i-1}}).}
This limit converges in probability. The stochastic integral of left-continuous processes is general enough for studying much of stochastic calculus. For example, it is sufficient for applications of Itô's Lemma, changes of measure via Girsanov's theorem, and for the study of stochastic differential equations. However, it is inadequate for other important topics such as martingale representation theorems and local times.
The integral extends to all predictable and locally bounded integrands, in a unique way, such that the dominated convergence theorem holds. That is, if Hn → H and |Hn| ≤ J for a locally bounded process J, then
{\displaystyle \int _{0}^{t}H_{n}\,dX\to \int _{0}^{t}H\,dX,}
in probability. The uniqueness of the extension from left-continuous to predictable integrands is a result of the monotone class lemma.
In general, the stochastic integral H · X can be defined even in cases where the predictable process H is not locally bounded. If K = 1 / (1 + |H|) then K and KH are bounded. Associativity of stochastic integration implies that H is X-integrable, with integral H · X = Y, if and only if Y0 = 0 and K · Y = (KH) · X. The set of X-integrable processes is denoted by L(X).
== Properties ==
The following properties can be found in works such as (Revuz & Yor 1999) and (Rogers & Williams 2000):
The stochastic integral is a càdlàg process. Furthermore, it is a semimartingale.
The discontinuities of the stochastic integral are given by the jumps of the integrator multiplied by the integrand. The jump of a càdlàg process at a time t is Xt − Xt−, and is often denoted by ΔXt. With this notation, Δ(H · X) = H ΔX. A particular consequence of this is that integrals with respect to a continuous process are always themselves continuous.
Associativity. Let J, K be predictable processes, and K be X-integrable. Then, J is K · X integrable if and only if JK is X-integrable, in which case
{\displaystyle J\cdot (K\cdot X)=(JK)\cdot X}
Dominated convergence. Suppose that Hn → H and |Hn| ≤ J, where J is an X-integrable process. Then Hn · X → H · X. Convergence is in probability at each time t; in fact, it converges uniformly on compact sets in probability.
The stochastic integral commutes with the operation of taking quadratic covariations. If X and Y are semimartingales then any X-integrable process will also be [X, Y]-integrable, and [H · X, Y] = H · [X, Y]. A consequence of this is that the quadratic variation process of a stochastic integral is equal to an integral of a quadratic variation process,
{\displaystyle [H\cdot X]=H^{2}\cdot [X]}
== Integration by parts ==
As with ordinary calculus, integration by parts is an important result in stochastic calculus. The integration by parts formula for the Itô integral differs from the standard result due to the inclusion of a quadratic covariation term. This term comes from the fact that Itô calculus deals with processes with non-zero quadratic variation, which only occurs for infinite variation processes (such as Brownian motion). If X and Y are semimartingales then
{\displaystyle X_{t}Y_{t}=X_{0}Y_{0}+\int _{0}^{t}X_{s-}\,dY_{s}+\int _{0}^{t}Y_{s-}\,dX_{s}+[X,Y]_{t}}
where [X, Y] is the quadratic covariation process.
The result is similar to the integration by parts theorem for the Riemann–Stieltjes integral but has an additional quadratic variation term.
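The discrete analogue of this formula is an exact telescoping identity, which a short sketch can exhibit. Taking X and Y to be two independent Brownian motions (an arbitrary illustrative choice), the cross term Σ ΔX ΔY approximates [X, Y]_T, which is zero in the independent case:

```python
import random, math

def parts_check(T=1.0, n=50_000, seed=3):
    """Discrete integration by parts for two independent Brownian motions:
    X_T Y_T = Σ X dY + Σ Y dX + Σ ΔX ΔY   (exact telescoping identity),
    where Σ ΔX ΔY approximates the quadratic covariation [X, Y]_T."""
    rng = random.Random(seed)
    dt = T / n
    X = Y = 0.0
    int_XdY = int_YdX = covar = 0.0
    for _ in range(n):
        dX = rng.gauss(0.0, math.sqrt(dt))
        dY = rng.gauss(0.0, math.sqrt(dt))
        int_XdY += X * dY      # left-endpoint sum for ∫ X dY
        int_YdX += Y * dX      # left-endpoint sum for ∫ Y dX
        covar += dX * dY       # discrete quadratic covariation
        X += dX
        Y += dY
    return X * Y, int_XdY + int_YdX + covar, covar

lhs, rhs, covar = parts_check()
print(lhs, rhs, covar)  # lhs == rhs exactly; covar ≈ 0 for independent drivers
```

Repeating the experiment with Y = X recovers the earlier identity B_T² = 2∫B dB + [B]_T, where the covariation term is ≈ T rather than 0.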
== Itô's lemma ==
Itô's lemma is the version of the chain rule or change of variables formula which applies to the Itô integral. It is one of the most powerful and frequently used theorems in stochastic calculus. For a continuous n-dimensional semimartingale X = (X1,...,Xn) and twice continuously differentiable function f from Rn to R, it states that f(X) is a semimartingale and,
{\displaystyle df(X_{t})=\sum _{i=1}^{n}f_{i}(X_{t})\,dX_{t}^{i}+{\frac {1}{2}}\sum _{i,j=1}^{n}f_{i,j}(X_{t})\,d[X^{i},X^{j}]_{t}.}
This differs from the chain rule used in standard calculus due to the term involving the quadratic covariation [Xi, Xj]. The formula can be generalized to include an explicit time-dependence in f, and in other ways (see Itô's lemma).
== Martingale integrators ==
=== Local martingales ===
An important property of the Itô integral is that it preserves the local martingale property. If M is a local martingale and H is a locally bounded predictable process then H · M is also a local martingale. For integrands which are not locally bounded, there are examples where H · M is not a local martingale. However, this can only occur when M is not continuous. If M is a continuous local martingale then a predictable process H is M-integrable if and only if
{\displaystyle \int _{0}^{t}H^{2}\,d[M]<\infty ,}
for each t, and H · M is always a local martingale.
The most general statement for a discontinuous local martingale M is that if (H2 · [M])1/2 is locally integrable then H · M exists and is a local martingale.
=== Square integrable martingales ===
For bounded integrands, the Itô stochastic integral preserves the space of square integrable martingales, which is the set of càdlàg martingales M such that E[Mt2] is finite for all t. For any such square integrable martingale M, the quadratic variation process [M] is integrable, and the Itô isometry states that
{\displaystyle \mathbb {E} \left[(H\cdot M_{t})^{2}\right]=\mathbb {E} \left[\int _{0}^{t}H^{2}\,d[M]\right].}
This equality holds more generally for any martingale M such that H2 · [M]t is integrable. The Itô isometry is often used as an important step in the construction of the stochastic integral, by defining H · M to be the unique extension of this isometry from a certain class of simple integrands to all bounded and predictable processes.
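A Monte Carlo sketch can make the isometry concrete. Taking M = B and the deterministic integrand H_s = s (an arbitrary illustrative choice), the right-hand side is ∫₀¹ s² ds = 1/3, so the sample second moment of the stochastic integral should be close to 1/3:

```python
import random, math

def isometry_mc(paths=2000, n=200, T=1.0, seed=11):
    """Monte Carlo check of the Itô isometry for H_s = s against Brownian
    motion: E[(∫_0^T s dB_s)^2] should equal ∫_0^T s^2 ds = T^3/3."""
    rng = random.Random(seed)
    dt = T / n
    second_moment = 0.0
    for _ in range(paths):
        integral = 0.0
        for i in range(n):
            s = i * dt                               # left endpoint
            integral += s * rng.gauss(0.0, math.sqrt(dt))
        second_moment += integral ** 2
    return second_moment / paths

est = isometry_mc()
print(est)  # ≈ 1/3, up to Monte Carlo and discretization error
```

The estimate carries both discretization bias (from the finite grid) and sampling noise (from the finite number of paths), but both shrink as n and the number of paths grow.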
=== p-Integrable martingales ===
For any p > 1, and bounded predictable integrand, the stochastic integral preserves the space of p-integrable martingales. These are càdlàg martingales such that E(|Mt|p) is finite for all t. However, this is not always true in the case where p = 1. There are examples of integrals of bounded predictable processes with respect to martingales which are not themselves martingales.
The maximum process of a càdlàg process M is written as M*t = sups ≤t |Ms|. For any p ≥ 1 and bounded predictable integrand, the stochastic integral preserves the space of càdlàg martingales M such that E[(M*t)p] is finite for all t. If p > 1 then this is the same as the space of p-integrable martingales, by Doob's inequalities.
The Burkholder–Davis–Gundy inequalities state that, for any given p ≥ 1, there exist positive constants c, C that depend on p, but not M or on t such that
{\displaystyle c\mathbb {E} \left[[M]_{t}^{\frac {p}{2}}\right]\leq \mathbb {E} \left[(M_{t}^{*})^{p}\right]\leq C\mathbb {E} \left[[M]_{t}^{\frac {p}{2}}\right]}
for all càdlàg local martingales M. These are used to show that if (M*t)p is integrable and H is a bounded predictable process then
{\displaystyle \mathbb {E} \left[((H\cdot M)_{t}^{*})^{p}\right]\leq C\mathbb {E} \left[(H^{2}\cdot [M]_{t})^{\frac {p}{2}}\right]<\infty }
and, consequently, H · M is a p-integrable martingale. More generally, this statement is true whenever (H2 · [M])p/2 is integrable.
== Existence of the integral ==
Proofs that the Itô integral is well defined typically proceed by first looking at very simple integrands, such as piecewise constant, left continuous and adapted processes where the integral can be written explicitly. Such simple predictable processes are linear combinations of terms of the form Ht = A1{t > T} for stopping times T and FT-measurable random variables A, for which the integral is
{\displaystyle H\cdot X_{t}\equiv \mathbf {1} _{\{t>T\}}A(X_{t}-X_{T}).}
This is extended to all simple predictable processes by the linearity of H · X in H.
For a Brownian motion B, the property that it has independent increments with zero mean and variance Var(Bt) = t can be used to prove the Itô isometry for simple predictable integrands,
{\displaystyle \mathbb {E} \left[(H\cdot B_{t})^{2}\right]=\mathbb {E} \left[\int _{0}^{t}H_{s}^{2}\,ds\right].}
By a continuous linear extension, the integral extends uniquely to all predictable integrands satisfying
{\displaystyle \mathbb {E} \left[\int _{0}^{t}H^{2}\,ds\right]<\infty ,}
in such way that the Itô isometry still holds. It can then be extended to all B-integrable processes by localization. This method allows the integral to be defined with respect to any Itô process.
For a general semimartingale X, the decomposition X = M + A into a local martingale M plus a finite variation process A can be used. Then, the integral can be shown to exist separately with respect to M and A and combined using linearity, H · X = H · M + H · A, to get the integral with respect to X. The standard Lebesgue–Stieltjes integral allows integration to be defined with respect to finite variation processes, so the existence of the Itô integral for semimartingales will follow from any construction for local martingales.
For a càdlàg square integrable martingale M, a generalized form of the Itô isometry can be used. First, the Doob–Meyer decomposition theorem is used to show that a decomposition M2 = N + ⟨M⟩ exists, where N is a martingale and ⟨M⟩ is a right-continuous, increasing and predictable process starting at zero. This uniquely defines ⟨M⟩, which is referred to as the predictable quadratic variation of M. The Itô isometry for square integrable martingales is then
{\displaystyle \mathbb {E} \left[(H\cdot M_{t})^{2}\right]=\mathbb {E} \left[\int _{0}^{t}H_{s}^{2}\,d\langle M\rangle _{s}\right],}
which can be proved directly for simple predictable integrands. As with the case above for Brownian motion, a continuous linear extension can be used to uniquely extend to all predictable integrands satisfying E[H2 · ⟨M⟩t] < ∞. This method can be extended to all local square integrable martingales by localization. Finally, the Doob–Meyer decomposition can be used to decompose any local martingale into the sum of a local square integrable martingale and a finite variation process, allowing the Itô integral to be constructed with respect to any semimartingale.
Many other proofs exist which apply similar methods but which avoid the need to use the Doob–Meyer decomposition theorem, such as the use of the quadratic variation [M] in the Itô isometry, the use of the Doléans measure for submartingales, or the use of the Burkholder–Davis–Gundy inequalities instead of the Itô isometry. The latter applies directly to local martingales without having to first deal with the square integrable martingale case.
Alternative proofs exist which only make use of the fact that X is càdlàg, adapted, and the set {H · Xt : H is simple previsible and |H| ≤ 1} is bounded in probability for each time t, which is an alternative definition for X to be a semimartingale. A continuous linear extension can be used to construct the integral for all left-continuous and adapted integrands with right limits everywhere (caglad or L-processes). This is general enough to be able to apply techniques such as Itô's lemma (Protter 2004). Also, a Khintchine inequality can be used to prove the dominated convergence theorem and extend the integral to general predictable integrands (Bichteler 2002).
== Differentiation in Itô calculus ==
The Itô calculus is first and foremost defined as an integral calculus as outlined above. However, there are also different notions of "derivative" with respect to Brownian motion:
=== Malliavin derivative ===
Malliavin calculus provides a theory of differentiation for random variables defined over Wiener space, including an integration by parts formula (Nualart 2006).
=== Martingale representation ===
The following result allows martingales to be expressed as Itô integrals: if M is a square-integrable martingale on a time interval [0, T] with respect to the filtration generated by a Brownian motion B, then there is a unique adapted square integrable process
{\displaystyle \alpha }
on [0, T] such that
{\displaystyle M_{t}=M_{0}+\int _{0}^{t}\alpha _{s}\,\mathrm {d} B_{s}}
almost surely, and for all t ∈ [0, T] (Rogers & Williams 2000, Theorem 36.5). This representation theorem can be interpreted formally as saying that α is the "time derivative" of M with respect to Brownian motion B, since α is precisely the process that must be integrated up to time t to obtain Mt − M0, as in deterministic calculus.
== Itô calculus for physicists ==
In physics, stochastic differential equations (SDEs), such as Langevin equations, are usually used rather than stochastic integrals. Here an Itô SDE is often formulated via
{\displaystyle {\dot {x}}_{k}=h_{k}+g_{kl}\xi _{l},}
where
{\displaystyle \xi _{j}}
is Gaussian white noise with
{\displaystyle \langle \xi _{k}(t_{1})\,\xi _{l}(t_{2})\rangle =\delta _{kl}\delta (t_{1}-t_{2})}
and Einstein's summation convention is used.
If
{\displaystyle y=y(x_{k})}
is a function of the xk, then Itô's lemma has to be used:
{\displaystyle {\dot {y}}={\frac {\partial y}{\partial x_{j}}}{\dot {x}}_{j}+{\frac {1}{2}}{\frac {\partial ^{2}y}{\partial x_{k}\,\partial x_{l}}}g_{km}g_{ml}.}
An Itô SDE as above also corresponds to a Stratonovich SDE which reads
{\displaystyle {\dot {x}}_{k}=h_{k}+g_{kl}\xi _{l}-{\frac {1}{2}}{\frac {\partial g_{kl}}{\partial {x_{m}}}}g_{ml}.}
SDEs frequently occur in physics in Stratonovich form, as limits of stochastic differential equations driven by colored noise if the correlation time of the noise term approaches zero.
For a recent treatment of different interpretations of stochastic differential equations see for example (Lau & Lubensky 2007).
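The Itô–Stratonovich drift correction above can be observed numerically. For the drift-free SDE dx = x ξ with x₀ = 1 (an arbitrary illustrative choice), the Itô solution is a martingale with E[x_T] = 1, while the Stratonovich reading carries the extra drift (1/2) g ∂g/∂x = x/2, giving E[x_T] = e^{T/2}. The sketch below compares the Euler–Maruyama scheme (which converges to the Itô solution) with the stochastic Heun/midpoint scheme (which converges to the Stratonovich solution); all step counts and seeds are arbitrary:

```python
import random, math

def sde_means(T=1.0, n=200, paths=4000, seed=5):
    """Compare Euler-Maruyama (Itô) with the stochastic Heun scheme
    (Stratonovich) for dx = x * ξ, x_0 = 1. Under Itô, E[x_T] = 1;
    under Stratonovich, the extra drift x/2 gives E[x_T] = exp(T/2)."""
    rng = random.Random(seed)
    dt = T / n
    ito_sum = strat_sum = 0.0
    for _ in range(paths):
        x_ito = x_str = 1.0
        for _ in range(n):
            dW = rng.gauss(0.0, math.sqrt(dt))
            x_ito += x_ito * dW                  # Euler-Maruyama step (Itô)
            pred = x_str + x_str * dW            # Heun predictor
            x_str += 0.5 * (x_str + pred) * dW   # Heun corrector (midpoint)
        ito_sum += x_ito
        strat_sum += x_str
    return ito_sum / paths, strat_sum / paths

m_ito, m_str = sde_means()
print(m_ito, m_str)  # ≈ 1.0 and ≈ exp(0.5) ≈ 1.65
```

The gap between the two sample means is precisely the effect of the −(1/2)(∂g/∂x)g conversion term in the Stratonovich form of the equation.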
== See also ==
== References == | Wikipedia/Itô_calculus |
Internal set theory (IST) is a mathematical theory of sets developed by Edward Nelson that provides an axiomatic basis for a portion of the nonstandard analysis introduced by Abraham Robinson. Instead of adding new elements to the real numbers, Nelson's approach modifies the axiomatic foundations through syntactic enrichment. Thus, the axioms introduce a new term, "standard", which can be used to make discriminations not possible under the conventional ZFC axioms for sets. In this way, IST is an enrichment of ZFC: all axioms of ZFC are satisfied for all classical predicates, while the new unary predicate "standard" satisfies three additional axioms I, S, and T. In particular, suitable nonstandard elements within the set of real numbers can be shown to have properties that correspond to the properties of infinitesimal and unlimited elements.
Nelson's formulation is made more accessible for the lay-mathematician by leaving out many of the complexities of meta-mathematical logic that were initially required to justify rigorously the consistency of number systems containing infinitesimal elements.
== Intuitive justification ==
Whilst IST has a perfectly formal axiomatic scheme, described below, an intuitive justification of the meaning of the term standard is desirable. This is not part of the formal theory, but is a pedagogical device that might help the student interpret the formalism. The essential distinction, similar to the concept of definable numbers, contrasts the finiteness of the domain of concepts that we can specify and discuss, with the unbounded infinity of the set of numbers; compare finitism.
The number of symbols one writes with is finite.
The number of mathematical symbols on any given page is finite.
The number of pages of mathematics a single mathematician can produce in a lifetime is finite.
Any workable mathematical definition is necessarily finite.
There are only a finite number of distinct objects a mathematician can define in a lifetime.
There will only be a finite number of mathematicians in the course of our (presumably finite) civilization.
Hence there is only a finite set of whole numbers our civilization can discuss in its allotted lifespan.
What that limit actually is, is unknowable to us, being contingent on many accidental cultural factors.
This limitation is not in itself susceptible to mathematical scrutiny, but that there is such a limit, whilst the set of whole numbers continues forever without bound, is a mathematical truth.
The term standard is therefore intuitively taken to correspond to some necessarily finite portion of "accessible" whole numbers. The argument can be applied to any infinite set of objects whatsoever – there are only so many elements that one can specify in finite time using a finite set of symbols and there are always those that lie beyond the limits of our patience and endurance, no matter how we persevere. We must admit to a profusion of nonstandard elements—too large or too anonymous to grasp—within any infinite set.
=== Principles of the standard predicate ===
The following principles follow from the above intuitive motivation and so should be deducible from the formal axioms. For the moment we take the domain of discussion as being the familiar set of whole numbers.
Any mathematical expression that does not use the new predicate standard explicitly or implicitly is an internal formula.
Any definition that does so is an external formula.
Any number uniquely specified by an internal formula is standard (by definition).
Nonstandard numbers are precisely those that cannot be uniquely specified (due to limitations of time and space) by an internal formula.
Nonstandard numbers are elusive: each one is too enormous to be manageable in decimal notation or any other representation, explicit or implicit, no matter how ingenious your notation. Whatever you succeed in producing is by definition merely another standard number.
Nevertheless, there are (many) nonstandard whole numbers in any infinite subset of N.
Nonstandard numbers are completely ordinary numbers, having decimal representations, prime factorizations, etc. Every classical theorem that applies to the natural numbers applies to the nonstandard natural numbers. We have created, not new numbers, but a new method of discriminating between existing numbers.
Moreover, any classical theorem that is true for all standard numbers is necessarily true for all natural numbers. Otherwise the formulation "the smallest number that fails to satisfy the theorem" would be an internal formula that uniquely defined a nonstandard number.
The predicate "nonstandard" is a logically consistent method for distinguishing large numbers—the usual term will be illimited. Reciprocals of these illimited numbers will necessarily be extremely small real numbers—infinitesimals. To avoid confusion with other interpretations of these words, in newer articles on IST those words are replaced with the constructs "i-large" and "i-small".
There are necessarily only finitely many standard numbers—but caution is required: we cannot gather them together and hold that the result is a well-defined mathematical set. This will not be supported by the formalism (the intuitive justification being that the precise bounds of this set vary with time and history). In particular we will not be able to talk about the largest standard number, or the smallest nonstandard number. It will be valid to talk about some finite set that contains all standard numbers—but this non-classical formulation could only apply to a nonstandard set.
== Formal axioms ==
IST is an axiomatic theory in the first-order logic with equality in a language containing a binary predicate symbol ∈ and a unary predicate symbol st(x). Formulas not involving st (i.e., formulas of the usual language of set theory) are called internal, other formulas are called external. We use the abbreviations
{\displaystyle {\begin{aligned}\exists ^{\mathrm {st} }x\,\phi (x)&=\exists x\,(\operatorname {st} (x)\land \phi (x)),\\\forall ^{\mathrm {st} }x\,\phi (x)&=\forall x\,(\operatorname {st} (x)\to \phi (x)).\end{aligned}}}
IST includes all axioms of the Zermelo–Fraenkel set theory with the axiom of choice (ZFC). Note that the ZFC schemata of separation and replacement are not extended to the new language, they can only be used with internal formulas. Moreover, IST includes three new axiom schemata – conveniently one for each initial in its name: Idealisation, Standardisation, and Transfer.
=== Idealisation ===
For any internal formula
{\displaystyle \phi }
without a free occurrence of z, the universal closure of the following formula is an axiom:
{\displaystyle \forall ^{\mathrm {st} }z\,(z{\text{ is finite}}\to \exists y\,\forall x\in z\,\phi (x,y,u_{1},\dots ,u_{n}))\leftrightarrow \exists y\,\forall ^{\mathrm {st} }x\,\phi (x,y,u_{1},\dots ,u_{n}).}
In words: For every internal relation R, and for arbitrary values for all other free variables, we have that if for each standard, finite set F, there exists a g such that
{\displaystyle R(g,f)}
holds for all f in F, then there is a particular G such that for any standard f we have
{\displaystyle R(G,f)}
, and conversely, if there exists G such that for any standard f, we have
{\displaystyle R(G,f)}
, then for each finite set F, there exists a g such that
{\displaystyle R(g,f)}
holds for all f in F.
The statement of this axiom comprises two implications. The right-to-left implication can be reformulated by the simple statement that elements of standard finite sets are standard. The more important left-to-right implication expresses that the collection of all standard sets is contained in a finite (nonstandard) set, and moreover, this finite set can be taken to satisfy any given internal property shared by all standard finite sets.
This very general axiom scheme upholds the existence of "ideal" elements in appropriate circumstances. Three particular applications demonstrate important consequences.
==== Applied to the relation ≠ ====
If S is standard and finite, we take for the relation
{\displaystyle R(g,f)}
: g and f are not equal and g is in S. Since "For every standard finite set F there is an element g in S such that
{\displaystyle g\neq f}
for all f in F" is false (no such g exists when F = S), we may use Idealisation to tell us that "There is a G in S such that
{\displaystyle G\neq f}
for all standard f" is also false, i.e. all the elements of S are standard.
If S is infinite, then we take for the relation
{\displaystyle R(g,f)}
: g and f are not equal and g is in S. Since "For every standard finite set F there is an element g in S such that
{\displaystyle g\neq f}
for all f in F" (the infinite set S is not a subset of the finite set F), we may use Idealisation to derive "There is a G in S such that
{\displaystyle G\neq f}
for all standard f." In other words, every infinite set contains a nonstandard element (many, in fact).
The power set of a standard finite set is standard (by Transfer) and finite, so all the subsets of a standard finite set are standard.
If S is nonstandard, we take for the relation
{\displaystyle R(g,f)}
: g and f are not equal and g is in S. Since "For every standard finite set F there is an element g in S such that
{\displaystyle g\neq f}
for all f in F" (the nonstandard set S is not a subset of the standard and finite set F), we may use Idealisation to derive "There is a G in S such that
{\displaystyle G\neq f}
for all standard f." In other words, every nonstandard set contains a nonstandard element.
As a consequence of all these results, all the elements of a set S are standard if and only if S is standard and finite.
==== Applied to the relation < ====
Since "For every standard, finite set of natural numbers F there is a natural number g such that
{\displaystyle g>f}
for all f in F" (say, g = max(F) + 1), we may use Idealisation to derive "There is a natural number G such that
{\displaystyle G>f}
for all standard natural numbers f." In other words, there exists a natural number greater than each standard natural number.
==== Applied to the relation ∈ ====
We take
{\displaystyle R(g,f)}
: g is a finite set containing element f. Since "For every standard, finite set F, there is a finite set g such that
{\displaystyle f\in g}
for all f in F" (e.g. g = F), we may use Idealisation to derive "There is a finite set G such that
{\displaystyle f\in G}
for all standard f." For any set S, the intersection of S with the set G is a finite subset of S that contains every standard element of S. G is necessarily nonstandard, by the ZFC regularity axiom.
=== Standardisation ===
If
{\displaystyle \phi }
is any formula (it may be external) without a free occurrence of y, the universal closure of
{\displaystyle \forall ^{\mathrm {st} }x\,\exists ^{\mathrm {st} }y\,\forall ^{\mathrm {st} }t\,(t\in y\leftrightarrow (t\in x\land \phi (t,u_{1},\dots ,u_{n})))}
is an axiom.
In words: If A is a standard set and P any property, internal or otherwise, then there is a unique, standard subset B of A whose standard elements are precisely the standard elements of A satisfying P (but the behaviour of B's nonstandard elements is not prescribed).
=== Transfer ===
If
{\displaystyle \phi (x,u_{1},\dots ,u_{n})}
is an internal formula with no other free variables than those indicated, then
{\displaystyle \forall ^{\mathrm {st} }u_{1}\dots \forall ^{\mathrm {st} }u_{n}\,(\forall ^{\mathrm {st} }x\,\phi (x,u_{1},\dots ,u_{n})\to \forall x\,\phi (x,u_{1},\dots ,u_{n}))}
is an axiom.
In words: If all the parameters A, B, C, ..., W of an internal formula F have standard values then F(x, A, B,..., W) holds for all x's as soon as it holds for all standard x's—from which it follows that all uniquely defined concepts or objects within classical mathematics are standard.
== Formal justification for the axioms ==
Aside from the intuitive motivations suggested above, it is necessary to justify that additional IST axioms do not lead to errors or inconsistencies in reasoning. Mistakes and philosophical weaknesses in reasoning about infinitesimal numbers in the work of Gottfried Leibniz, Johann Bernoulli, Leonhard Euler, Augustin-Louis Cauchy, and others were the reason that they were originally abandoned for the more cumbersome real number-based arguments developed by Georg Cantor, Richard Dedekind, and Karl Weierstrass, which were perceived as being more rigorous by Weierstrass's followers.
The approach for internal set theory is the same as that for any new axiomatic system—we construct a model for the new axioms using the elements of a simpler, more trusted, axiom scheme. This is quite similar to justifying the consistency of the axioms of elliptic non-Euclidean geometry by noting they can be modeled by an appropriate interpretation of great circles on a sphere in ordinary 3-space.
In fact via a suitable model a proof can be given of the relative consistency of IST as compared with ZFC: if ZFC is consistent, then IST is consistent. In fact, a stronger statement can be made: IST is a conservative extension of ZFC: any internal formula that can be proven within internal set theory can be proven in the Zermelo–Fraenkel axioms with the axiom of choice alone.
== Related theories ==
Related theories were developed by Karel Hrbacek and others.
== Notes ==
== References ==
Robert, Alain (1985). Nonstandard analysis. John Wiley & Sons. ISBN 0-471-91703-6.
Internal Set Theory, a chapter of an unfinished book by Nelson. | Wikipedia/Internal_set_theory |
Demography (from Ancient Greek δῆμος (dêmos) 'people, society' and -γραφία (-graphía) 'writing, drawing, description') is the statistical study of human populations: their size, composition (e.g., ethnic group, age), and how they change through the interplay of fertility (births), mortality (deaths), and migration.
Demographic analysis examines and measures the dimensions and dynamics of populations; it can cover whole societies or groups defined by criteria such as education, nationality, religion, and ethnicity. Educational institutions usually treat demography as a field of sociology, though there are a number of independent demography departments. These methods have primarily been developed to study human populations, but are extended to a variety of areas where researchers want to know how populations of social actors can change across time through processes of birth, death, and migration. In the context of human biological populations, demographic analysis uses administrative records to develop an independent estimate of the population.
Demographic analysis estimates are often considered a reliable standard for judging the accuracy of the census information gathered at any time. In the labor force, demographic analysis is used to estimate sizes and flows of populations of workers. In population ecology the focus is on the birth, death, migration and immigration of individuals in a population of living organisms; in the social sciences, the analysis may instead involve the movement of firms and institutional forms. Demographic analysis is used in a wide variety of contexts. For example, it is often used in business plans to describe the population connected to the geographic location of the business. Demographic analysis is usually abbreviated as DA. For the 2010 U.S. Census, the U.S. Census Bureau expanded its DA categories; DA now also includes comparative analysis between independent housing estimates and census address lists at different key time points.
Patient demographics form the core of the data for any medical institution, such as patient and emergency contact information and patient medical record data. They allow for the identification of a patient and their categorization into categories for the purpose of statistical analysis. Patient demographics include: date of birth, gender, date of death, postal code, ethnicity, blood type, emergency contact information, family doctor, insurance provider data, allergies, major diagnoses and major medical history.
Formal demography limits its object of study to the measurement of population processes, while the broader field of social demography or population studies also analyses the relationships between economic, social, institutional, cultural, and biological processes influencing a population.
== History ==
Demographic thought can be traced back to antiquity, and was present in many civilisations and cultures, like Ancient Greece, Ancient Rome, China and India. Made up of the prefix demo- and the suffix -graphy, the term demography refers to the overall study of population.
In ancient Greece, this can be found in the writings of Herodotus, Thucydides, Hippocrates, Epicurus, Protagoras, Polus, Plato and Aristotle. In Rome, writers and philosophers like Cicero, Seneca, Pliny the Elder, Marcus Aurelius, Epictetus, Cato, and Columella also expressed important ideas on this ground.
In the Middle Ages, Christian thinkers devoted much time to refuting the Classical ideas on demography. Important contributors to the field were William of Conches, Bartholomew of Lucca, William of Auvergne, William of Pagula, and Muslim sociologists such as Ibn Khaldun.
One of the earliest demographic studies in the modern period was Natural and Political Observations Made upon the Bills of Mortality (1662) by John Graunt, which contains a primitive form of life table. Among the study's findings were that one-third of the children in London died before their sixteenth birthday. Mathematicians, such as Edmond Halley, developed the life table as the basis for life insurance mathematics. Richard Price was credited with the first textbook on life contingencies published in 1771, followed later by Augustus De Morgan, On the Application of Probabilities to Life Contingencies (1838).
In 1755, Benjamin Franklin published his essay Observations Concerning the Increase of Mankind, Peopling of Countries, etc., projecting exponential growth in British colonies. His work influenced Thomas Robert Malthus, who, writing at the end of the 18th century, feared that, if unchecked, population growth would tend to outstrip growth in food production, leading to ever-increasing famine and poverty (see Malthusian catastrophe). Malthus is seen as the intellectual father of ideas of overpopulation and the limits to growth. Later, more sophisticated and realistic models were presented by Benjamin Gompertz and Pierre François Verhulst.
In 1855, the Belgian scholar Achille Guillard defined demography as the natural and social history of the human species, or the mathematical knowledge of populations, of their general changes, and of their physical, civil, intellectual, and moral condition.
The period 1860–1910 can be characterized as a period of transition in which demography emerged from statistics as a separate field of interest. During this period, a panoply of international 'great demographers', including Adolphe Quetelet (1796–1874), William Farr (1807–1883), Louis-Adolphe Bertillon (1821–1883) and his son Jacques (1851–1922), Joseph Körösi (1844–1906), Anders Nicolai Kiær (1838–1919), Richard Böckh (1824–1907), Émile Durkheim (1858–1917), Wilhelm Lexis (1837–1914), and Luigi Bodio (1840–1920), contributed to the development of demography and to the toolkit of methods and techniques of demographic analysis.
== Methods ==
Demography is the statistical and mathematical study of the size, composition, and spatial distribution of human populations and of how these features change over time. Data are obtained from censuses of the population and from registries: records of events like births, deaths, migrations, marriages, divorces, diseases, and employment. Using these data requires an understanding of how they are calculated and of the questions they answer, which is captured in four concepts: population change, standardization of population numbers, the demographic bookkeeping equation, and population composition.
There are two types of data collection—direct and indirect—with several methods of each type.
=== Direct methods ===
Direct data comes from vital statistics registries that track all births and deaths as well as certain changes in legal status such as marriage, divorce, and migration (registration of place of residence). In developed countries with good registration systems (such as the United States and much of Europe), registry statistics are the best method for estimating the number of births and deaths.
A census is the other common direct method of collecting demographic data. A census is usually conducted by a national government and attempts to enumerate every person in a country. In contrast to vital statistics data, which are typically collected continuously and summarized on an annual basis, censuses typically occur only every 10 years or so, and thus are not usually the best source of data on births and deaths. Analyses are conducted after a census to estimate how much over or undercounting took place. These compare the sex ratios from the census data to those estimated from natural values and mortality data.
Censuses do more than just count people. They typically collect information about families or households in addition to individual characteristics such as age, sex, marital status, literacy/education, employment status, occupation, and geographical location. They may also collect data on migration (or place of birth or of previous residence), language, religion, nationality (or ethnicity or race), and citizenship. In countries in which the vital registration system may be incomplete, censuses are also used as a direct source of information about fertility and mortality; for example, the censuses of the People's Republic of China gather information on births and deaths that occurred in the 18 months immediately preceding the census.
=== Indirect methods ===
Indirect methods of collecting data are required in countries and periods where full data are not available, such as is the case in much of the developing world, and most of historical demography. One of these techniques in contemporary demography is the sister method, where survey researchers ask women how many of their sisters have died or had children and at what age. With these surveys, researchers can then indirectly estimate birth or death rates for the entire population. Other indirect methods in contemporary demography include asking people about siblings, parents, and children. Other indirect methods are necessary in historical demography.
There are a variety of demographic methods for modelling population processes. They include models of mortality (including the life table, Gompertz models, hazards models, Cox proportional hazards models, multiple decrement life tables, Brass relational logits), fertility (Hermes model, Coale-Trussell models, parity progression ratios), marriage (Singulate Mean at Marriage, Page model), disability (Sullivan's method, multistate life tables), population projections (Lee-Carter model, the Leslie Matrix), and population momentum (Keyfitz).
The United Kingdom has a series of four national birth cohort studies, the first three spaced apart by 12 years: the 1946 National Survey of Health and Development, the 1958 National Child Development Study, the 1970 British Cohort Study, and the Millennium Cohort Study, begun much more recently in 2000. These have followed the lives of samples of people (typically beginning with around 17,000 in each study) for many years, and are still continuing. As the samples have been drawn in a nationally representative way, inferences can be drawn from these studies about the differences between four distinct generations of British people in terms of their health, education, attitudes, childbearing and employment patterns.
Indirect standardization is used when a population is small enough that the number of events (births, deaths, etc.) are also small. In this case, methods must be used to produce a standardized mortality rate (SMR) or standardized incidence rate (SIR).
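The SMR described above is simply observed deaths divided by the deaths that would be expected if a reference population's age-specific death rates applied. A minimal numerical sketch (all figures and the function name below are hypothetical, for illustration only):

```python
# Indirect standardization: expected deaths are obtained by applying a
# reference population's age-specific death rates to the study
# population's age structure; SMR = observed / expected.
def standardized_mortality_ratio(observed_deaths, person_years_by_age, reference_rates):
    expected = sum(person_years_by_age[age] * reference_rates[age]
                   for age in person_years_by_age)
    return observed_deaths / expected

# Hypothetical small town with 40 observed deaths in a year:
person_years = {"0-39": 3000, "40-64": 2000, "65+": 1000}
reference = {"0-39": 0.001, "40-64": 0.005, "65+": 0.030}  # deaths per person-year
smr = standardized_mortality_ratio(40, person_years, reference)
# expected deaths = 3 + 10 + 30 = 43, so SMR = 40/43, about 0.93
```

An SMR below 1 indicates fewer deaths than expected under the reference rates; above 1, more.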
== Population change ==
Population change is analyzed by measuring the difference between one population size and another. Global population continues to rise, which makes population change an essential component of demographics. It is calculated by subtracting the population size recorded in an earlier census from the current population size. The best way of measuring population change is the intercensal percentage change: the absolute change in population between the censuses divided by the population size in the earlier census, multiplied by 100 to give a percentage. With this statistic, population growth between two or more nations that differ in size can be accurately measured and examined.
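The intercensal percentage change reduces to a one-line formula; the census figures in this sketch are hypothetical:

```python
def intercensal_percent_change(earlier, later):
    # Absolute change between censuses divided by the earlier
    # population size, multiplied by 100 to give a percentage.
    return (later - earlier) / earlier * 100

# Hypothetical censuses taken ten years apart:
change = intercensal_percent_change(1_000_000, 1_150_000)
# change is approximately 15.0 (percent growth over the intercensal period)
```

Because the change is expressed relative to the earlier population, the statistic is comparable across countries of very different sizes.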
== Standardization of population numbers ==
For there to be a meaningful comparison, numbers must be adjusted for the size of the population under study. For example, the fertility rate is calculated as the ratio of the number of births to women of childbearing age to the total number of women in this age range. If these adjustments were not made, we could not tell whether a nation with a higher rate of births or deaths simply has more women of childbearing age or genuinely has more births per eligible woman.
Within the category of standardization, there are two major approaches: direct standardization and indirect standardization.
== Common rates and ratios ==
The crude birth rate, the annual number of live births per 1,000 people.
The general fertility rate, the annual number of live births per 1,000 women of childbearing age (often taken to be from 15 to 49 years old, but sometimes from 15 to 44).
The age-specific fertility rates, the annual number of live births per 1,000 women in particular age groups (usually age 15–19, 20–24 etc.)
The crude death rate, the annual number of deaths per 1,000 people.
The infant mortality rate, the annual number of deaths of children less than 1 year old per 1,000 live births.
The expectation of life (or life expectancy), the number of years that an individual at a given age could expect to live at present mortality levels.
The total fertility rate, the number of live births per woman completing her reproductive life, if her childbearing at each age reflected current age-specific fertility rates.
The replacement level fertility, the average number of children women must have in order to replace the population for the next generation. For example, the replacement level fertility in the US is 2.11.
The gross reproduction rate, the number of daughters who would be born to a woman completing her reproductive life at current age-specific fertility rates.
The net reproduction ratio is the expected number of daughters, per newborn prospective mother, who may or may not survive to and through the ages of childbearing.
A stable population, one that has had constant crude birth and death rates for such a long period of time that the percentage of people in every age class remains constant, or equivalently, the population pyramid has an unchanging structure.
A stationary population, one that is both stable and unchanging in size (the difference between crude birth rate and crude death rate is zero).
Measures of centralisation are concerned with the extent to which an area's population is concentrated in its urban centres.
A stable population does not necessarily remain fixed in size. It can be expanding or shrinking.
The crude death rate as defined above and applied to a whole population can give a misleading impression. For example, the number of deaths per 1,000 people can be higher in developed nations than in less-developed countries, despite standards of health being better in developed countries. This is because developed countries have proportionally more older people, who are more likely to die in a given year, so that the overall mortality rate can be higher even if the mortality rate at any given age is lower. A more complete picture of mortality is given by a life table, which summarizes mortality separately at each age. A life table is necessary to give a good estimate of life expectancy.
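The effect described above can be reproduced with made-up numbers (all rates and age structures below are illustrative, not real data): a population with lower mortality at every age can still show a higher crude death rate if it is older.

```python
# The "developed" country has LOWER age-specific mortality at every age
# yet a HIGHER crude death rate, because its population is older.
rates_developed = {"young": 0.0005, "old": 0.040}    # deaths per person-year
rates_developing = {"young": 0.0040, "old": 0.060}

structure_developed = {"young": 700, "old": 300}     # persons per 1,000
structure_developing = {"young": 950, "old": 50}

def crude_death_rate(rates, structure):
    # deaths per 1,000 people = sum over ages of (rate * persons in age group)
    return sum(rates[age] * structure[age] for age in rates)

cdr_developed = crude_death_rate(rates_developed, structure_developed)
cdr_developing = crude_death_rate(rates_developing, structure_developing)
# developed: about 0.35 + 12.0 = 12.35 per 1,000
# developing: about 3.8 + 3.0 = 6.8 per 1,000
```

This is why a life table, which records mortality separately at each age, gives a more faithful picture than the crude rate alone.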
== Basic equations for regional populations ==
Suppose that a country (or other entity) contains Population_t persons at time t.
What is the size of the population at time t + 1?
{\displaystyle {\text{Population}}_{t+1}={\text{Population}}_{t}+{\text{Natural Increase}}_{t}+{\text{Net Migration}}_{t}}
Natural increase from time t to t + 1:
{\displaystyle {\text{Natural Increase}}_{t}={\text{Births}}_{t}-{\text{Deaths}}_{t}}
Net migration from time t to t + 1:
{\displaystyle {\text{Net Migration}}_{t}={\text{Immigration}}_{t}-{\text{Emigration}}_{t}}
These basic equations can also be applied to subpopulations. For example, the population size of ethnic groups or nationalities within a given society or country is subject to the same sources of change. When dealing with ethnic groups, however, "net migration" might have to be subdivided into physical migration and ethnic reidentification (assimilation). Individuals who change their ethnic self-labels or whose ethnic classification in government statistics changes over time may be thought of as migrating or moving from one population subcategory to another.
More generally, while the basic demographic equation holds true by definition, in practice the recording and counting of events (births, deaths, immigration, emigration) and the enumeration of the total population size are subject to error. So allowance needs to be made for error in the underlying statistics when any accounting of population size or change is made.
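The basic demographic bookkeeping equation translates directly into code; the population figures in this sketch are hypothetical:

```python
def project_population(population, births, deaths, immigration, emigration):
    # Population(t+1) = Population(t) + Natural Increase(t) + Net Migration(t)
    natural_increase = births - deaths
    net_migration = immigration - emigration
    return population + natural_increase + net_migration

# One year in a hypothetical country of ten million:
next_year = project_population(10_000_000, births=120_000, deaths=90_000,
                               immigration=50_000, emigration=20_000)
# next_year == 10_060_000
```

The same function applies unchanged to a subpopulation such as an ethnic group, provided "migration" is understood to include reidentification into or out of the group.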
The figure in this section shows the latest (2004) United Nations projections of world population out to the year 2150 (red = high, orange = medium, green = low). The UN "medium" projection shows world population reaching an approximate equilibrium at 9 billion by 2075. Working independently, demographers at the International Institute for Applied Systems Analysis in Austria expect world population to peak at 9 billion by 2070. Throughout the 21st century, the average age of the population is likely to continue to rise.
== The doomsday equation for the Earth's population ==
A 1960 issue of Science magazine included an article by Heinz von Foerster and his colleagues, P. M. Mora and L. W. Amiot, proposing an equation representing the best fit to the historical data on the Earth's population available in 1958:
Fifty years ago, Science published a study with the provocative title “Doomsday: Friday, 13 November, A.D. 2026”. It fitted world population during the previous two millennia with P = 179 × 10^9/(2026.9 − t)^0.99. This “quasi-hyperbolic” equation (hyperbolic having exponent 1.00 in the denominator) projected to infinite population in 2026—and to an imaginary one thereafter.
—Taagepera, Rein. A world population growth model: Interaction with Earth's carrying capacity and technology in limited space Technological Forecasting and Social Change, vol. 82, February 2014, pp. 34–41
In 1975, von Hoerner suggested that von Foerster's doomsday equation can be written, without a significant loss of accuracy, in a simplified hyperbolic form (i.e. with the exponent in the denominator assumed to be 1.00):
{\displaystyle {\text{Global population}}={\frac {179000000000}{2026.9-t}},}
where
2026.9 corresponds to 13 November 2026 AD, the date of the so-called "demographic singularity" and von Foerster's 115th birthday;
t is the calendar year (Gregorian).
Despite its simplicity, von Foerster's equation is very accurate in the range from 4,000,000 BP to 1997 AD. For example, the doomsday equation (developed in 1958, when the Earth's population was 2,911,249,671) predicts a population of 5,986,622,074 for the beginning of the year 1997:
{\displaystyle {\frac {179000000000}{2026.9-1997}}=5986622074.}
The actual figure was 5,924,787,816.
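The calculation above is easy to verify:

```python
def doomsday_population(t):
    # Simplified hyperbolic form of von Foerster's equation (von Hoerner, 1975)
    return 179_000_000_000 / (2026.9 - t)

pop_1997 = doomsday_population(1997)
# pop_1997 is about 5.99 billion, close to the actual 1997 figure of ~5.92 billion
```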
The doomsday equation is so called because it predicts that the number of people living on the planet Earth grows without bound as the date approaches 13 November 2026, after which the formula turns negative. Said otherwise, taken literally the equation predicts that on 13 November 2026 all humans will instantaneously disappear.
== Science of population ==
Populations can change through three processes: fertility, mortality, and migration. Fertility involves the number of children that women have and is to be contrasted with fecundity (a woman's childbearing potential). Mortality is the study of the causes, consequences, and measurement of the processes affecting deaths among members of the population. Demographers most commonly study mortality using the life table, a statistical device that provides information about the mortality conditions (most notably the life expectancy) in the population.
Migration refers to the movement of persons from a locality of origin to a destination place across some predefined, political boundary. Migration researchers do not designate movements 'migrations' unless they are somewhat permanent. Thus, demographers do not consider tourists and travellers to be migrating. While demographers who study migration typically do so through census data on place of residence, indirect sources of data including tax forms and labour force surveys are also important.
Demography is today widely taught in many universities across the world, attracting students with initial training in social sciences, statistics or health studies. Being at the crossroads of several disciplines such as sociology, economics, epidemiology, geography, anthropology and history, demography offers tools to approach a large range of population issues by combining a more technical quantitative approach that represents the core of the discipline with many other methods borrowed from social or other sciences. Demographic research is conducted in universities, in research institutes, as well as in statistical departments and in several international agencies. Population institutions are part of the CICRED (International Committee for Coordination of Demographic Research) network while most individual scientists engaged in demographic research are members of the International Union for the Scientific Study of Population, or a national association such as the Population Association of America in the United States, or affiliates of the Federation of Canadian Demographers in Canada.
== Population composition ==
Population composition is the description of a population defined by characteristics such as age, race, sex or marital status. These descriptions can be necessary for understanding social dynamics in historical and comparative research. These data are often compared using a population pyramid.
Population composition is also a very important part of historical research. Data reaching back hundreds of years are not always useful, however, because the set of people for whom data are available may not supply the information that matters (such as total population size), and a lack of information on the original data-collection procedures may prevent accurate evaluation of data quality.
== Demographic analysis in institutions and organizations ==
=== Labor market ===
The demographic analysis of labor markets can be used to show slow population growth, population ageing, and the increased importance of immigration. The U.S. Census Bureau projects that over the next 100 years, the United States will face dramatic demographic changes: the population is expected to grow more slowly and age more rapidly than ever before, and the nation will increasingly become a nation of immigrants, with new immigrants and their children projected to account for over half the U.S. population. These demographic shifts could force major adjustments in the economy, particularly in labor markets.
=== Turnover in internal labor markets ===
People decide to exit organizations for many reasons, such as better jobs, dissatisfaction, and concerns within the family. The causes of turnover can be split into two separate factors, one linked with the culture of the organization and the other relating to all other factors. People who do not fully accept a culture might leave voluntarily. Or, some individuals might leave because they fail to fit in and fail to change within a particular organization.
=== Population ecology of organizations ===
A basic definition of population ecology is the study of the distribution and abundance of organisms. As it relates to organizations and demography, organizations face various liabilities to their continued survival. Hospitals, like all other large and complex organizations, are affected by the environment in which they operate. For example, a study of the closure of acute care hospitals in Florida over a particular period examined the effects of the size, age, and niche density of these hospitals. Population ecology theory holds that organizational outcomes are mostly determined by environmental factors. Among the several factors of the theory, four apply to the hospital closure example: size, age, density of the niches in which organizations operate, and density of the niches in which organizations are established.
==== Business organizations ====
Demographers may be called upon to assist business organizations with problems such as determining the best prospective location for a branch store or service outlet, predicting the demand for a new product, and analyzing the dynamics of a company's workforce. Typical examples include choosing a new location for a branch of a bank, choosing the area in which to open a new supermarket, advising a bank loan officer on whether a particular location would be a profitable site for a car wash, and determining which shopping area in a metropolitan area would be best to buy and redevelop.
Standardization is a useful demographic technique used in the analysis of a business. It can be used as an interpretive and analytic tool for the comparison of different markets.
==== Nonprofit organizations ====
These organizations are interested in the number and characteristics of their clients so that they can maximize the uptake of their products and services, their influence, and the reach of their beneficial works.
== See also ==
== References ==
== Further reading ==
Josef Ehmer, Jens Ehrhardt, Martin Kohli (Eds.): Fertility in the History of the 20th Century: Trends, Theories, Policies, Discourses. Historical Social Research 36 (2), 2011.
Glad, John. 2008. Future Human Evolution: Eugenics in the Twenty-First Century. Hermitage Publishers, ISBN 1-55779-154-6
Gavrilova N.S., Gavrilov L.A. 2011. Ageing and Longevity: Mortality Laws and Mortality Forecasts for Ageing Populations [In Czech: Stárnutí a dlouhověkost: Zákony a prognózy úmrtnosti pro stárnoucí populace]. Demografie, 53(2): 109–128.
Preston, Samuel, Patrick Heuveline, and Michel Guillot. 2000. Demography: Measuring and Modeling Population Processes. Blackwell Publishing.
Gavrilov L.A., Gavrilova N.S. 2010. Demographic Consequences of Defeating Aging. Rejuvenation Research, 13(2-3): 329–334.
Paul R. Ehrlich (1968), The Population Bomb. Controversial neo-Malthusian pamphlet.
Leonid A. Gavrilov & Natalia S. Gavrilova (1991), The Biology of Life Span: A Quantitative Approach. New York: Harwood Academic Publisher, ISBN 3-7186-4983-7
Andrey Korotayev & Daria Khaltourina (2006). Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow: URSS ISBN 5-484-00414-4 [2]
Uhlenberg P. (Editor), (2009) International Handbook of the Demography of Aging, New York: Springer-Verlag, pp. 113–131.
Paul Demeny and Geoffrey McNicoll (Eds.). 2003. The Encyclopedia of Population. New York, Macmillan Reference USA, vol.1, 32-37
Phillip Longman (2004), The Empty Cradle: how falling birth rates threaten global prosperity and what to do about it
Sven Kunisch, Stephan A. Boehm, Michael Boppel (eds) (2011). From Grey to Silver: Managing the Demographic Change Successfully, Springer-Verlag, Berlin Heidelberg, ISBN 978-3-642-15593-2
Joe McFalls (2007), Population: A Lively Introduction, Population Reference Bureau [3] Archived 1 June 2013 at the Wayback Machine
Ben J. Wattenberg (2004), How the New Demography of Depopulation Will Shape Our Future. Chicago: R. Dee, ISBN 1-56663-606-X
Perry, Marc J. & Mackun, Paul J. Population Change & Distribution: Census 2000 Brief. (2001)
Schutt, Russell K. 2006. "Investigating the Social World: The Process and Practice of Research". SAGE Publications.
Siegal, Jacob S. (2002), Applied Demography: Applications to Business, Government, Law, and Public Policy. San Diego: Academic Press.
== External links ==
Quick demography data lookup (archived 4 March 2016)
Historicalstatistics.org Links to historical demographic and economic statistics
United Nations Population Division: Homepage
World Population Prospects, the 2012 Revision, Population estimates and projections for 230 countries and areas (archived 6 May 2011)
World Urbanization Prospects, the 2011 Revision, Estimates and projections of urban and rural populations and urban agglomerations
Probabilistic Population Projections, the 2nd Revision, Probabilistic Population Projections, based on the 2010 Revision of the World Population Prospects (archived 13 December 2012)
Java Simulation of Population Dynamics.
Basic Guide to the World: Population changes and trends, 1960–2003
Brief review of world basic demographic trends
Family and Fertility Surveys (FFS)
Elementary Calculus: An Infinitesimal approach is a textbook by H. Jerome Keisler. The subtitle alludes to the infinitesimal numbers of the hyperreal number system of Abraham Robinson and is sometimes given as An approach using infinitesimals. The book is available freely online and is currently published by Dover.
== Textbook ==
Keisler's textbook is based on Robinson's construction of the hyperreal numbers. Keisler also published a companion book, Foundations of Infinitesimal Calculus, for instructors, which covers the foundational material in more depth.
Keisler defines all basic notions of the calculus such as continuity, derivative, and integral using infinitesimals. The usual definitions in terms of ε–δ techniques are provided at the end of Chapter 5 to enable a transition to a standard sequence.
In his textbook, Keisler used the pedagogical technique of an infinite-magnification microscope to represent graphically distinct hyperreal numbers that are infinitely close to each other. Similarly, an infinite-resolution telescope is used to represent infinite numbers.
When one examines a curve, say the graph of ƒ, under a magnifying glass, its curvature decreases proportionally to the magnification power of the lens. Similarly, an infinite-magnification microscope will transform an infinitesimal arc of the graph of ƒ into a straight line, up to an infinitesimal error (only visible by applying a higher-magnification "microscope"). The derivative of ƒ is then the (standard part of the) slope of that line (see figure).
Thus the microscope is used as a device in explaining the derivative.
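Floating-point numbers contain no actual infinitesimals, but the microscope idea can be mimicked numerically by shrinking the increment dx and watching the difference quotient settle toward the derivative (a sketch only, not Keisler's hyperreal construction):

```python
# As dx shrinks, the secant slope (f(x+dx) - f(x)) / dx approaches the
# derivative; the "standard part" corresponds to taking this limit.
def slope(f, x, dx):
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 2
slopes = [slope(f, 3.0, 10.0 ** -k) for k in range(1, 6)]
# slopes run 6.1, 6.01, 6.001, ... approaching 6.0, the derivative of x^2 at x = 3
```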
== Reception ==
The book was first reviewed by Errett Bishop, noted for his work in constructive mathematics. Bishop's review was harshly critical; see Criticism of nonstandard analysis. Shortly after, Martin Davis and Hausner published a detailed favorable review, as did Andreas Blass and Keith Stroyan. Keisler's student K. Sullivan, as part of her PhD thesis, performed a controlled experiment involving five schools, which found Elementary Calculus to have advantages over the standard method of teaching calculus. Despite the benefits described by Sullivan, the vast majority of mathematicians have not adopted infinitesimal methods in their teaching. Recently, Katz & Katz gave a positive account of a calculus course based on Keisler's book. O'Donovan also described his experience teaching calculus using infinitesimals. His initial point of view was positive, but later he found pedagogical difficulties with the approach to nonstandard calculus taken by this text and others.
G. R. Blackley remarked in a letter to Prindle, Weber & Schmidt, concerning Elementary Calculus: An Approach Using Infinitesimals, "Such problems as might arise with the book will be political. It is revolutionary. Revolutions are seldom welcomed by the established party, although revolutionaries often are."
Hrbacek writes that the definitions of continuity, derivative, and integral implicitly must be grounded in the ε–δ method in Robinson's theoretical framework, in order to extend definitions to include nonstandard values of the inputs, claiming that the hope that nonstandard calculus could be done without ε–δ methods could not be realized in full. Błaszczyk et al. detail the usefulness of microcontinuity in developing a transparent definition of uniform continuity, and characterize Hrbacek's criticism as a "dubious lament".
== Transfer principle ==
Between the first and second editions of Elementary Calculus, much of the theoretical material that was in the first chapter was moved to the epilogue at the end of the book, including the theoretical groundwork of nonstandard analysis.
In the second edition Keisler introduces the extension principle and the transfer principle in the following form:
Every real statement that holds for one or more particular real functions holds for the hyperreal natural extensions of these functions.
Keisler then gives a few examples of real statements to which the principle applies:
Closure law for addition: for any x and y, the sum x + y is defined.
Commutative law for addition: x + y = y + x.
A rule for order: if 0 < x < y then 0 < 1/y < 1/x.
Division by zero is never allowed: x/0 is undefined.
An algebraic identity:
{\displaystyle (x-y)^{2}=x^{2}-2xy+y^{2}}.
A trigonometric identity:
{\displaystyle \sin ^{2}x+\cos ^{2}x=1}.
A rule for logarithms: If x > 0 and y > 0, then
{\displaystyle \log _{10}(xy)=\log _{10}x+\log _{10}y}.
== See also ==
Criticism of nonstandard analysis
Influence of nonstandard analysis
Nonstandard calculus
Increment theorem
== Notes ==
== References ==
Bishop, Errett (1977), "Review: H. Jerome Keisler, Elementary calculus", Bull. Amer. Math. Soc., 83: 205–208, doi:10.1090/s0002-9904-1977-14264-x
Blass, Andreas (1978), "Review: Martin Davis, Applied nonstandard analysis, and K. D. Stroyan and W. A. J. Luxemburg, Introduction to the theory of infinitesimals, and H. Jerome Keisler, Foundations of infinitesimal calculus", Bull. Amer. Math. Soc., 84 (1): 34–41, doi:10.1090/S0002-9904-1978-14401-2
Blass writes: "I suspect that many mathematicians harbor, somewhere in the back of their minds, the formula
{\displaystyle \int {\sqrt {(dx)^{2}+(dy)^{2}}}}
for arc length (and quickly factor out dx before writing it down)" (p. 35).
"Often, as in the examples above, the nonstandard definition of a concept is simpler than the standard definition (both intuitively simpler and simpler in a technical sense, such as quantifiers over lower types or fewer alternations of quantifiers)" (p. 37).
"The relative simplicity of the nonstandard definitions of some concepts of elementary analysis suggests a pedagogical application in freshman calculus. One could make use of the students' intuitive ideas about infinitesimals (which are usually very vague, but so are their ideas about real numbers) to develop calculus on a nonstandard basis" (p. 38).
Davis, Martin (1977), "Review: J. Donald Monk, Mathematical logic", Bull. Amer. Math. Soc., 83: 1007–1011, doi:10.1090/S0002-9904-1977-14357-7
Davis, M.; Hausner, M (1978), "Book review. The Joy of Infinitesimals. J. Keisler's Elementary Calculus", Mathematical Intelligencer, 1: 168–170, doi:10.1007/bf03023265, S2CID 121679411.
Hrbacek, K.; Lessmann, O.; O’Donovan, R. (November 2010), "Analysis with Ultrasmall Numbers", American Mathematical Monthly, 117 (9): 801–816, doi:10.4169/000298910x521661, S2CID 5720030
Hrbacek, K. (2007), "Stratified Analysis?", in Van Den Berg, I.; Neves, V. (eds.), The Strength of Nonstandard Analysis, Springer
Katz, Karin Usadi; Katz, Mikhail G. (2010), "When is .999... less than 1?", The Montana Mathematics Enthusiast, 7 (1): 3–30, arXiv:1007.3018, Bibcode:2010arXiv1007.3018U, doi:10.54870/1551-3440.1381, S2CID 11544878, archived from the original on 20 July 2011
Keisler, H. Jerome (1976), Elementary Calculus: An Approach Using Infinitesimals, Prindle Weber & Schmidt, ISBN 978-0871509116
Keisler, H. Jerome (1976), Foundations of Infinitesimal Calculus, Prindle Weber & Schmidt, ISBN 978-0871502155, retrieved 10 January 2007 A companion to the textbook Elementary Calculus: An Approach Using Infinitesimals.
Keisler, H. Jerome (2011), Elementary Calculus: An Infinitesimal Approach (2nd ed.), New York: Dover Publications, ISBN 978-0-486-48452-5
Madison, E. W.; Stroyan, K. D. (June–July 1977), "Elementary Calculus. by H. Jerome Keisler", The American Mathematical Monthly, 84 (6): 496–500, doi:10.2307/2321930, JSTOR 2321930
O'Donovan, R. (2007), "Pre-University Analysis", in Van Den Berg, I.; Neves, V. (eds.), The Strength of Nonstandard Analysis, Springer
O'Donovan, R.; Kimber, J. (2006), "Nonstandard analysis at pre-university level: Naive magnitude analysis", in Cultand, N; Di Nasso, M.; Ross, D. (eds.), Nonstandard Methods and Applications in Mathematics, Lecture Notes in Logic, vol. 25
Stolzenberg, G. (June 1978), "Letter to the Editor", Notices of the American Mathematical Society, 25 (4): 242
Sullivan, Kathleen (1976), "The Teaching of Elementary Calculus Using the Nonstandard Analysis Approach", The American Mathematical Monthly, 83 (5), Mathematical Association of America: 370–375, doi:10.2307/2318657, JSTOR 2318657
Tall, David (1980), Intuitive infinitesimals in the calculus (poster) (PDF), Fourth International Congress on Mathematics Education, Berkeley
== External links ==
Book in PDF format | Wikipedia/Elementary_Calculus:_An_Infinitesimal_Approach |
In mathematics, synthetic differential geometry is a formalization of the theory of differential geometry in the language of topos theory. There are several insights that allow for such a reformulation. The first is that most of the analytic data for describing the class of smooth manifolds can be encoded into certain fibre bundles on manifolds: namely bundles of jets (see also jet bundle). The second insight is that the operation of assigning a bundle of jets to a smooth manifold is functorial in nature. The third insight is that over a certain category, these are representable functors. Furthermore, their representatives are related to the algebras of dual numbers, so that smooth infinitesimal analysis may be used.
Synthetic differential geometry can serve as a platform for formulating certain otherwise obscure or confusing notions from differential geometry. For example, the meaning of being natural (or invariant) has a particularly simple expression, even though the corresponding formulation in classical differential geometry may be quite difficult.
== Further reading ==
John Lane Bell, Two Approaches to Modelling the Universe: Synthetic Differential Geometry and Frame-Valued Sets (PDF file)
F.W. Lawvere, Outline of synthetic differential geometry (PDF file)
Anders Kock, Synthetic Differential Geometry (PDF file), Cambridge University Press, 2nd Edition, 2006.
R. Lavendhomme, Basic Concepts of Synthetic Differential Geometry, Springer-Verlag, 1996.
Michael Shulman, Synthetic Differential Geometry
Ryszard Paweł Kostecki, Differential Geometry in Toposes | Wikipedia/Synthetic_differential_geometry |
Method of Fluxions (Latin: De Methodis Serierum et Fluxionum) is a mathematical treatise by Sir Isaac Newton which served as the earliest written formulation of modern calculus. The book was completed in 1671 and posthumously published in 1736.
== Background ==
Fluxion is Newton's term for a derivative. He originally developed the method at Woolsthorpe Manor during the closing of Cambridge due to the Great Plague of London from 1665 to 1667. Newton did not choose to make his findings known (similarly, his findings which eventually became the Philosophiae Naturalis Principia Mathematica were developed at this time and hidden from the world in Newton's notes for many years). Gottfried Leibniz developed his form of calculus independently around 1673, seven years after Newton had developed the basis for differential calculus, as seen in surviving documents like "the method of fluxions and fluents..." from 1666. Leibniz, however, published his discovery of differential calculus in 1684, nine years before Newton formally published his fluxion notation form of calculus in part during 1693.
== Impact ==
The calculus notation in use today is mostly that of Leibniz, although Newton's dot notation for differentiation
x
˙
{\displaystyle {\dot {x}}}
is frequently used to denote derivatives with respect to time.
== Rivalry with Leibniz ==
Newton's Method of Fluxions was formally published posthumously, but following Leibniz's publication of the calculus, a bitter rivalry erupted between the two mathematicians over who had developed the calculus first, provoking Newton to reveal his work on fluxions.
== Newton's development of analysis ==
For a period of time encompassing Newton's working life, the discipline of analysis was a subject of controversy in the mathematical community. Although analytic techniques provided solutions to long-standing problems, including problems of quadrature and the finding of tangents, the proofs of these solutions were not known to be reducible to the synthetic rules of Euclidean geometry. Instead, analysts were often forced to invoke infinitesimal, or "infinitely small", quantities to justify their algebraic manipulations. Some of Newton's mathematical contemporaries, such as Isaac Barrow, were highly skeptical of such techniques, which had no clear geometric interpretation. Although in his early work Newton also used infinitesimals in his derivations without justifying them, he later developed something akin to the modern definition of limits in order to justify his work.
== See also ==
== References and notes ==
== External links ==
Method of Fluxions at the Internet Archive | Wikipedia/Method_of_fluxions |
In mathematics, the inverse function theorem is a theorem that asserts that, if a real function f has a continuous derivative near a point where its derivative is nonzero, then, near this point, f has an inverse function. The inverse function is also differentiable, and the inverse function rule expresses its derivative as the multiplicative inverse of the derivative of f.
The theorem applies verbatim to complex-valued functions of a complex variable. It generalizes to functions from
n-tuples (of real or complex numbers) to n-tuples, and to functions between vector spaces of the same finite dimension, by replacing "derivative" with "Jacobian matrix" and "nonzero derivative" with "nonzero Jacobian determinant".
If the function of the theorem belongs to a higher differentiability class, the same is true for the inverse function. There are also versions of the inverse function theorem for holomorphic functions, for differentiable maps between manifolds, for differentiable functions between Banach spaces, and so forth.
The theorem was first established by Picard and Goursat using an iterative scheme: the basic idea is to prove a fixed point theorem using the contraction mapping theorem.
== Statements ==
For functions of a single variable, the theorem states that if f is a continuously differentiable function with nonzero derivative at the point a, then f is injective (equivalently, bijective onto its image) in a neighborhood of a, the inverse is continuously differentiable near b = f(a), and the derivative of the inverse function at b is the reciprocal of the derivative of f at a:
{\displaystyle {\bigl (}f^{-1}{\bigr )}'(b)={\frac {1}{f'(a)}}={\frac {1}{f'(f^{-1}(b))}}.}
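As a quick numerical sanity check of this formula (an illustrative sketch, not part of the article), one can compare a finite-difference derivative of the inverse against 1/f′(a) for f = exp, whose inverse is log:

```python
import math

# f(x) = exp(x) is continuously differentiable with f'(a) = exp(a) != 0,
# and its inverse is log. The theorem predicts (f^-1)'(b) = 1/f'(a).
a = 1.0
b = math.exp(a)            # b = f(a)

h = 1e-6
# central-difference approximation of (log)'(b)
inv_deriv_numeric = (math.log(b + h) - math.log(b - h)) / (2 * h)
predicted = 1.0 / math.exp(a)   # 1 / f'(a)

assert abs(inv_deriv_numeric - predicted) < 1e-8
```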
It can happen that a function f is injective near a point a while f′(a) = 0. An example is f(x) = (x − a)³. In fact, for such a function the inverse cannot be differentiable at b = f(a), since if f⁻¹ were differentiable at b, then, by the chain rule, 1 = (f⁻¹ ∘ f)′(a) = (f⁻¹)′(b) f′(a), which would imply f′(a) ≠ 0. (The situation is different for holomorphic functions; see § Holomorphic inverse function theorem below.)
For functions of more than one variable, the theorem states that if f is a continuously differentiable function from an open subset A of R^n into R^n, and the derivative f′(a) is invertible at a point a (that is, the determinant of the Jacobian matrix of f at a is nonzero), then there exist neighborhoods U of a in A and V of b = f(a) such that f(U) ⊂ V and f : U → V is bijective. Writing f = (f₁, …, f_n), this means that the system of n equations y_i = f_i(x₁, …, x_n) has a unique solution for x₁, …, x_n in terms of y₁, …, y_n when x ∈ U, y ∈ V. Note that the theorem does not say f is bijective onto the image wherever f′ is invertible, but that it is locally bijective wherever f′ is invertible.
Moreover, the theorem says that the inverse function f⁻¹ : V → U is continuously differentiable, and its derivative at b = f(a) is the inverse map of f′(a); i.e.,
{\displaystyle (f^{-1})'(b)=f'(a)^{-1}.}
In other words, if Jf⁻¹(b) and Jf(a) are the Jacobian matrices representing (f⁻¹)′(b) and f′(a), this means:
{\displaystyle Jf^{-1}(b)=Jf(a)^{-1}.}
The hard part of the theorem is the existence and differentiability of f⁻¹. Assuming this, the inverse derivative formula follows from the chain rule applied to f⁻¹ ∘ f = I: indeed,
{\displaystyle 1=I'(a)=(f^{-1}\circ f)'(a)=(f^{-1})'(b)\circ f'(a).}
Since taking the inverse (of an invertible linear map) is infinitely differentiable, the formula for the derivative of the inverse shows that if f is continuously k times differentiable, with invertible derivative at the point a, then the inverse is also continuously k times differentiable. Here k is a positive integer or ∞.
There are two variants of the inverse function theorem. Given a continuously differentiable map f : U → R^m, the first is:

The derivative f′(a) is surjective (i.e., the Jacobian matrix representing it has rank m) if and only if there exists a continuously differentiable function g on a neighborhood V of b = f(a) such that f ∘ g = I near b,

and the second is:

The derivative f′(a) is injective if and only if there exists a continuously differentiable function g on a neighborhood V of b = f(a) such that g ∘ f = I near a.
In the first case (when f′(a) is surjective), the point b = f(a) is called a regular value. Since m = dim ker(f′(a)) + dim im(f′(a)), the first case is equivalent to saying that b = f(a) is not in the image of critical points a (a critical point being a point a at which the kernel of f′(a) is nonzero). The statement in the first case is a special case of the submersion theorem.
These variants are restatements of the inverse function theorem. Indeed, in the first case, when f′(a) is surjective, we can find an (injective) linear map T such that f′(a) ∘ T = I. Define h(x) = a + Tx so that we have:
{\displaystyle (f\circ h)'(0)=f'(a)\circ T=I.}
Thus, by the inverse function theorem, f ∘ h has an inverse near 0; i.e., f ∘ h ∘ (f ∘ h)⁻¹ = I near b. The second case (f′(a) is injective) is handled in a similar way.
== Example ==
Consider the vector-valued function F : R² → R² defined by:
{\displaystyle F(x,y)={\begin{bmatrix}{e^{x}\cos y}\\{e^{x}\sin y}\\\end{bmatrix}}.}
The Jacobian matrix of F at (x, y) is:
{\displaystyle JF(x,y)={\begin{bmatrix}{e^{x}\cos y}&{-e^{x}\sin y}\\{e^{x}\sin y}&{e^{x}\cos y}\\\end{bmatrix}}}
with the determinant:
{\displaystyle \det JF(x,y)=e^{2x}\cos ^{2}y+e^{2x}\sin ^{2}y=e^{2x}.}
The determinant e^{2x} is nonzero everywhere. Thus the theorem guarantees that, for every point p in R², there exists a neighborhood of p over which F is invertible. This does not mean F is invertible over its entire domain: in this case F is not even injective, since it is periodic: F(x, y) = F(x, y + 2π).
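A numerical illustration of this example (an illustrative sketch, not from the article): the script below approximates det JF by central differences and confirms both the formula e^{2x} and the 2π-periodicity that defeats global injectivity.

```python
import math

def F(x, y):
    # F(x, y) = (e^x cos y, e^x sin y)
    return (math.exp(x) * math.cos(y), math.exp(x) * math.sin(y))

def jacobian_det(x, y, h=1e-5):
    # 2x2 Jacobian of F via central differences, then its determinant
    dFdx = [(F(x + h, y)[i] - F(x - h, y)[i]) / (2 * h) for i in range(2)]
    dFdy = [(F(x, y + h)[i] - F(x, y - h)[i]) / (2 * h) for i in range(2)]
    return dFdx[0] * dFdy[1] - dFdx[1] * dFdy[0]

x, y = 0.3, 1.2
# det JF(x, y) = e^{2x}, nonzero everywhere
assert abs(jacobian_det(x, y) - math.exp(2 * x)) < 1e-6
# yet F is not globally injective: it is 2*pi-periodic in y
u1, v1 = F(x, y)
u2, v2 = F(x, y + 2 * math.pi)
assert abs(u1 - u2) < 1e-9 and abs(v1 - v2) < 1e-9
```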
== Counter-example ==
If one drops the assumption that the derivative is continuous, the function need no longer be invertible. For example, f(x) = x + 2x² sin(1/x) with f(0) = 0 has discontinuous derivative
{\displaystyle f'\!(x)=1-2\cos({\tfrac {1}{x}})+4x\sin({\tfrac {1}{x}})}
(for x ≠ 0) and f′(0) = 1, and f′ vanishes arbitrarily close to x = 0. These critical points are local max/min points of f, so f is not one-to-one (and not invertible) on any interval containing x = 0. Intuitively, the slope f′(0) = 1 does not propagate to nearby points, where the slopes are governed by a weak but rapid oscillation.
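This oscillation is easy to exhibit numerically. The sketch below (illustrative; the helper names are not from the article) evaluates f′ at x = 1/(2πk), where cos(1/x) = 1 and f′ ≈ −1, and at x = 1/((2k+1)π), where cos(1/x) = −1 and f′ ≈ 3, showing the derivative changes sign arbitrarily close to 0.

```python
import math

def fprime(x):
    # derivative of f(x) = x + 2x^2 sin(1/x) for x != 0
    return 1 - 2 * math.cos(1 / x) + 4 * x * math.sin(1 / x)

for k in (10, 100, 1000):
    x_neg = 1 / (2 * math.pi * k)        # cos(1/x) = 1  -> f'(x) ≈ -1
    x_pos = 1 / ((2 * k + 1) * math.pi)  # cos(1/x) = -1 -> f'(x) ≈  3
    # f' changes sign at points arbitrarily close to 0, so f cannot be
    # monotone (hence not injective) on any interval containing 0
    assert fprime(x_neg) < 0 < fprime(x_pos)
```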
== Methods of proof ==
As an important result, the inverse function theorem has been given numerous proofs. The proof most commonly seen in textbooks relies on the contraction mapping principle, also known as the Banach fixed-point theorem (which can also be used as the key step in the proof of existence and uniqueness of solutions to ordinary differential equations).
Since the fixed point theorem applies in infinite-dimensional (Banach space) settings, this proof generalizes immediately to the infinite-dimensional version of the inverse function theorem (see Generalizations below).
An alternate proof in finite dimensions hinges on the extreme value theorem for functions on a compact set. This approach has the advantage that the proof generalizes to a situation where there is no Cauchy completeness (see § Over a real closed field).
Yet another proof uses Newton's method, which has the advantage of providing an effective version of the theorem: bounds on the derivative of the function imply an estimate of the size of the neighborhood on which the function is invertible.
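As a rough sketch of this idea (assuming a simple scalar example; the names here are illustrative), Newton's method evaluates the local inverse pointwise: to compute f⁻¹(y), solve f(x) = y by iterating x ← x − (f(x) − y)/f′(x).

```python
def invert_newton(f, fprime, y, x0=0.0, tol=1e-12, max_iter=50):
    # Newton's method: solve f(x) = y for x near x0.
    x = x0
    for _ in range(max_iter):
        step = (f(x) - y) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

f = lambda x: x**3 + x          # f'(0) = 1 != 0, so f is locally invertible at 0
fp = lambda x: 3 * x**2 + 1
y = 0.3
x = invert_newton(f, fp, y)
assert abs(f(x) - y) < 1e-10    # x is (numerically) f^{-1}(y)
```

In the effective version of the theorem, bounds on f′ near a translate into an explicit radius on which such an iteration is guaranteed to converge.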
=== Proof for single-variable functions ===
We want to prove the following: Let D ⊆ R be an open set with x₀ ∈ D, let f : D → R be a continuously differentiable function defined on D, and suppose that f′(x₀) ≠ 0. Then there exists an open interval I with x₀ ∈ I such that f maps I bijectively onto the open interval J = f(I), the inverse function f⁻¹ : J → I is continuously differentiable, and for any y ∈ J, if x ∈ I is such that f(x) = y, then
{\displaystyle (f^{-1})'(y)={\dfrac {1}{f'(x)}}.}
We may assume without loss of generality that f′(x₀) > 0. Since D is an open set and f′ is continuous at x₀, there exists r > 0 such that (x₀ − r, x₀ + r) ⊆ D and
{\displaystyle |f'(x)-f'(x_{0})|<{\dfrac {f'(x_{0})}{2}}\qquad {\text{for all }}|x-x_{0}|<r.}
In particular,
{\displaystyle f'(x)>{\dfrac {f'(x_{0})}{2}}>0\qquad {\text{for all }}|x-x_{0}|<r.}
This shows that f is strictly increasing for all |x − x₀| < r. Let δ > 0 be such that δ < r. Then [x₀ − δ, x₀ + δ] ⊆ (x₀ − r, x₀ + r). By the intermediate value theorem, f maps the interval [x₀ − δ, x₀ + δ] bijectively onto [f(x₀ − δ), f(x₀ + δ)]. Denote I = (x₀ − δ, x₀ + δ) and J = (f(x₀ − δ), f(x₀ + δ)). Then f : I → J is a bijection and the inverse f⁻¹ : J → I exists. The fact that f⁻¹ : J → I is differentiable follows from the differentiability of f. In particular, the result follows from the fact that if f : I → R is a strictly monotonic and continuous function that is differentiable at x₀ ∈ I with f′(x₀) ≠ 0, then f⁻¹ : f(I) → R is differentiable with
{\displaystyle (f^{-1})'(y_{0})={\dfrac {1}{f'(x_{0})}},}
where y₀ = f(x₀) (a standard result in analysis). This completes the proof.
=== A proof using successive approximation ===
To prove existence, it can be assumed after an affine transformation that f(0) = 0 and f′(0) = I, so that a = b = 0.
By the mean value theorem for vector-valued functions, for a differentiable function u : [0, 1] → R^m,
{\textstyle \|u(1)-u(0)\|\leq \sup _{0\leq t\leq 1}\|u^{\prime }(t)\|.}
Setting u(t) = f(x + t(x′ − x)) − x − t(x′ − x), it follows that
{\displaystyle \|f(x)-f(x^{\prime })-x+x^{\prime }\|\leq \|x-x^{\prime }\|\,\sup _{0\leq t\leq 1}\|f^{\prime }(x+t(x^{\prime }-x))-I\|.}
Now choose δ > 0 so that ‖f′(x) − I‖ < 1/2 for ‖x‖ < δ. Suppose that ‖y‖ < δ/2 and define x_n inductively by x₀ = 0 and x_{n+1} = x_n + y − f(x_n). The assumptions show that if ‖x‖, ‖x′‖ < δ, then
{\displaystyle \|f(x)-f(x^{\prime })-x+x^{\prime }\|\leq \|x-x^{\prime }\|/2.}
In particular f(x) = f(x′) implies x = x′. In the inductive scheme ‖x_n‖ < δ and ‖x_{n+1} − x_n‖ < δ/2ⁿ. Thus (x_n) is a Cauchy sequence tending to some x. By construction f(x) = y as required.
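The iteration in this proof can be run directly on a concrete example. The sketch below (illustrative; assumes a scalar f with f(0) = 0 and f′(0) = 1, so that ‖f′ − 1‖ < 1/2 near 0) applies x_{n+1} = x_n + y − f(x_n) and converges to the preimage of y.

```python
def solve_by_iteration(f, y, tol=1e-12, max_iter=200):
    # successive approximation from the proof: x_{n+1} = x_n + y - f(x_n),
    # a contraction because f' stays within 1/2 of the identity near 0
    x = 0.0
    for _ in range(max_iter):
        x_next = x + y - f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

f = lambda x: x + 0.1 * x * x     # f(0) = 0, f'(0) = 1
y = 0.05
x = solve_by_iteration(f, y)
assert abs(f(x) - y) < 1e-10      # x solves f(x) = y
```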
To check that g = f⁻¹ is C¹, write g(y + k) = x + h, so that f(x + h) = f(x) + k. By the inequalities above, ‖h − k‖ < ‖h‖/2, so that ‖h‖/2 < ‖k‖ < 2‖h‖.
On the other hand, if A = f′(x), then ‖A − I‖ < 1/2. Using the geometric series for B = I − A, it follows that ‖A⁻¹‖ < 2. But then
{\displaystyle {\|g(y+k)-g(y)-f^{\prime }(g(y))^{-1}k\| \over \|k\|}={\|h-f^{\prime }(x)^{-1}[f(x+h)-f(x)]\| \over \|k\|}\leq 4{\|f(x+h)-f(x)-f^{\prime }(x)h\| \over \|h\|}}
tends to 0 as k and h tend to 0, proving that g is C¹ with g′(y) = f′(g(y))⁻¹.
The proof above is presented for a finite-dimensional space, but it applies equally well to Banach spaces. If an invertible function f is C^k with k > 1, then so too is its inverse. This follows by induction using the fact that the map F(A) = A⁻¹ on operators is C^k for any k (in the finite-dimensional case this is an elementary fact because the inverse of a matrix is given as the adjugate matrix divided by its determinant).
The method of proof here can be found in the books of Henri Cartan, Jean Dieudonné, Serge Lang, Roger Godement and Lars Hörmander.
=== A proof using the contraction mapping principle ===
Here is a proof based on the contraction mapping theorem. Specifically, following T. Tao, it uses the following consequence of the contraction mapping theorem.
Basically, the lemma says that a small perturbation of the identity map by a contraction map is injective and preserves a ball in some sense. Assuming the lemma for a moment, we prove the theorem first. As in the above proof, it is enough to prove the special case when a = 0, b = f(a) = 0 and f′(0) = I. Let g = f − I. The mean value inequality applied to t ↦ g(x + t(y − x)) says:
{\displaystyle |g(y)-g(x)|\leq |y-x|\sup _{0<t<1}|g'(x+t(y-x))|.}
Since g′(0) = I − I = 0 and g′ is continuous, we can find an r > 0 such that |g(y) − g(x)| ≤ 2⁻¹|y − x| for all x, y in B(0, r). Then the earlier lemma says that f = g + I is injective on B(0, r) and B(0, r/2) ⊂ f(B(0, r)). Then
{\displaystyle f:U=B(0,r)\cap f^{-1}(B(0,r/2))\to V=B(0,r/2)}
is bijective and thus has an inverse. Next, we show that the inverse f⁻¹ is continuously differentiable (this part of the argument is the same as that in the previous proof). This time, let g = f⁻¹ denote the inverse of f and let A = f′(x). For x = g(y), we write g(y + k) = x + h, or y + k = f(x + h). Now, by the earlier estimate, we have |h − k| = |f(x + h) − f(x) − h| ≤ |h|/2, and so |h|/2 ≤ |k|. Writing ‖·‖ for the operator norm,
{\displaystyle |g(y+k)-g(y)-A^{-1}k|=|h-A^{-1}(f(x+h)-f(x))|\leq \|A^{-1}\||Ah-f(x+h)+f(x)|.}
As k → 0, we have h → 0 and |h|/|k| is bounded. Hence, g is differentiable at y with the derivative g′(y) = f′(g(y))⁻¹. Also, g′ is the same as the composition ι ∘ f′ ∘ g, where ι : T ↦ T⁻¹; so g′ is continuous.
It remains to show the lemma. First, we have:
{\displaystyle |x-y|-|f(x)-f(y)|\leq |g(x)-g(y)|\leq c|x-y|,}
which is to say
{\displaystyle (1-c)|x-y|\leq |f(x)-f(y)|.}
This proves the first part. Next, we show f(B(0, r)) ⊃ B(0, (1 − c)r). The idea is to note that this is equivalent to the following: given a point y in B(0, (1 − c)r), find a fixed point of the map
{\displaystyle F:{\overline {B}}(0,r')\to {\overline {B}}(0,r'),\,x\mapsto y-g(x),}
where 0 < r′ < r is such that |y| ≤ (1 − c)r′ and the bar means a closed ball. To find a fixed point, we use the contraction mapping theorem; checking that F is a well-defined strict-contraction mapping is straightforward. Finally, we have f(B(0, r)) ⊂ B(0, (1 + c)r), since
{\displaystyle |f(x)|=|x+g(x)-g(0)|\leq (1+c)|x|.\square }
As might be clear, this proof is not substantially different from the previous one, as the proof of the contraction mapping theorem is by successive approximation.
== Applications ==
=== Implicit function theorem ===
The inverse function theorem can be used to solve a system of equations
{\displaystyle {\begin{aligned}&f_{1}(x)=y_{1}\\&\quad \vdots \\&f_{n}(x)=y_{n},\end{aligned}}}
i.e., expressing y₁, …, y_n as functions of x = (x₁, …, x_n), provided the Jacobian matrix is invertible. The implicit function theorem allows one to solve a more general system of equations:
{\displaystyle {\begin{aligned}&f_{1}(x,y)=0\\&\quad \vdots \\&f_{n}(x,y)=0\end{aligned}}}
for y in terms of x. Though more general, the theorem is actually a consequence of the inverse function theorem. First, the precise statement of the implicit function theorem is as follows:
given a map f : R^n × R^m → R^m, if f(a, b) = 0, f is continuously differentiable in a neighborhood of (a, b) and the derivative of y ↦ f(a, y) at b is invertible, then there exists a differentiable map g : U → V for some neighborhoods U, V of a, b such that f(x, g(x)) = 0. Moreover, if f(x, y) = 0 with x ∈ U, y ∈ V, then y = g(x); i.e., g(x) is the unique solution.
To see this, consider the map F(x, y) = (x, f(x, y)). By the inverse function theorem, F : U × V → W has an inverse G for some neighborhoods U, V, W. We then have:
{\displaystyle (x,y)=F(G_{1}(x,y),G_{2}(x,y))=(G_{1}(x,y),f(G_{1}(x,y),G_{2}(x,y))),}
implying x = G₁(x, y) and
{\displaystyle y=f(x,G_{2}(x,y)).}
Thus g(x) = G₂(x, 0) has the required property.
{\displaystyle \square }
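As a concrete illustration of the implicit function theorem (an illustrative sketch, not from the article): for f(x, y) = x² + y² − 1 near (a, b) = (0, 1), where ∂f/∂y = 2y ≠ 0, the solution y = g(x) can be computed pointwise by Newton's method in y, and it agrees with g(x) = √(1 − x²).

```python
import math

def implicit_solve(f, dfdy, x, y0, tol=1e-12, max_iter=50):
    # Newton's method in y: solve f(x, y) = 0 near y0,
    # assuming df/dy is invertible (nonzero) there
    y = y0
    for _ in range(max_iter):
        step = f(x, y) / dfdy(x, y)
        y -= step
        if abs(step) < tol:
            return y
    raise RuntimeError("did not converge")

f = lambda x, y: x * x + y * y - 1     # unit circle
dfdy = lambda x, y: 2 * y              # nonzero at (a, b) = (0, 1)

# the implicitly defined g(x) matches the explicit branch sqrt(1 - x^2)
for x in (0.0, 0.1, 0.5):
    g = implicit_solve(f, dfdy, x, y0=1.0)
    assert abs(g - math.sqrt(1 - x * x)) < 1e-10
```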
=== Giving a manifold structure ===
In differential geometry, the inverse function theorem is used to show that the pre-image of a regular value under a smooth map is a manifold. Indeed, let f : U → R^r be such a smooth map from an open subset of R^n (since the result is local, there is no loss of generality in considering such a map). Fix a point a in f⁻¹(b) and then, by permuting the coordinates on R^n, assume the matrix
{\displaystyle \left[{\frac {\partial f_{i}}{\partial x_{j}}}(a)\right]_{1\leq i,j\leq r}}
has rank r. Then the map
{\displaystyle F:U\to \mathbb {R} ^{r}\times \mathbb {R} ^{n-r}=\mathbb {R} ^{n},\,x\mapsto (f(x),x_{r+1},\dots ,x_{n})}
is such that F′(a) has rank n. Hence, by the inverse function theorem, we find a smooth inverse G of F defined in a neighborhood V × W of (b, a_{r+1}, …, a_n). We then have
{\displaystyle x=(F\circ G)(x)=(f(G(x)),G_{r+1}(x),\dots ,G_{n}(x)),}
which implies
{\displaystyle (f\circ G)(x_{1},\dots ,x_{n})=(x_{1},\dots ,x_{r}).}
That is, after the change of coordinates by G, f is a coordinate projection (this fact is known as the submersion theorem). Moreover, since G : V × W → U′ = G(V × W) is bijective, the map
{\displaystyle g=G(b,\cdot ):W\to f^{-1}(b)\cap U',\,(x_{r+1},\dots ,x_{n})\mapsto G(b,x_{r+1},\dots ,x_{n})}
is bijective with a smooth inverse. That is to say, g gives a local parametrization of f⁻¹(b) around a. Hence, f⁻¹(b) is a manifold.
{\displaystyle \square }
(Note that the proof is quite similar to the proof of the implicit function theorem and, in fact, the implicit function theorem can also be used instead.)
More generally, the theorem shows that if a smooth map f : P → E is transversal to a submanifold M ⊂ E, then the pre-image f⁻¹(M) ↪ P is a submanifold.
== Global version ==
The inverse function theorem is a local result; it applies to each point. A priori, the theorem thus only shows that the function f is locally bijective (or a local diffeomorphism of some class). The next topological lemma can be used to upgrade local injectivity to injectivity that is global to some extent.
Proof: First assume $X$ is compact. If the conclusion of the theorem is false, we can find two sequences $x_{i}\neq y_{i}$ such that $f(x_{i})=f(y_{i})$ and $x_{i},y_{i}$ each converge to some points $x,y$ in $A$. Since $f$ is injective on $A$, $x=y$. Now, if $i$ is large enough, $x_{i},y_{i}$ are in a neighborhood of $x=y$ where $f$ is injective; thus, $x_{i}=y_{i}$, a contradiction.
In general, consider the set $E=\{(x,y)\in X^{2}\mid x\neq y,\,f(x)=f(y)\}$. It is disjoint from $S\times S$ for any subset $S\subset X$ where $f$ is injective. Let $X_{1}\subset X_{2}\subset \cdots$ be an increasing sequence of compact subsets with union $X$ and with $X_{i}$ contained in the interior of $X_{i+1}$. Then, by the first part of the proof, for each $i$, we can find a neighborhood $U_{i}$ of $A\cap X_{i}$ such that $U_{i}^{2}\subset X^{2}-E$. Then $U=\bigcup _{i}U_{i}$ has the required property. $\square$
(See the references for an alternative approach.)
The lemma implies the following (a sort of) global version of the inverse function theorem:
Note that if $A$ is a point, then the above is the usual inverse function theorem.
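The gap between local and global injectivity can be seen in a small numeric sketch (the circle map below is my own illustrative choice): $f(t)=(\cos t,\sin t)$ has a nonvanishing derivative, hence is locally injective everywhere, yet $f(0)=f(2\pi)$. On a neighborhood of a compact set such as $A=[0,\pi]$, where $f$ is injective, injectivity persists, which is what the lemma provides.

```python
import math

def f(t):
    # locally injective everywhere (nonvanishing derivative),
    # but not globally injective: f(0) = f(2*pi)
    return (math.cos(t), math.sin(t))

a, b = f(0.0), f(2.0 * math.pi)
assert abs(a[0] - b[0]) < 1e-12 and abs(a[1] - b[1]) < 1e-12

# on a neighborhood of the compact set A = [0, pi], f stays injective:
# distinct samples in (-0.1, pi + 0.1) give distinct images
samples = [-0.1 + 0.05 * k for k in range(int((math.pi + 0.2) / 0.05))]
images = [f(t) for t in samples]
for i in range(len(images)):
    for j in range(i + 1, len(images)):
        assert images[i] != images[j]
```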
== Holomorphic inverse function theorem ==
There is a version of the inverse function theorem for holomorphic maps.
The theorem follows from the usual inverse function theorem. Indeed, let $J_{\mathbb {R} }(f)$ denote the Jacobian matrix of $f$ in the variables $x_{i},y_{i}$ and $J(f)$ that in $z_{j},{\overline {z}}_{j}$. Then we have $\det J_{\mathbb {R} }(f)=|\det J(f)|^{2}$, which is nonzero by assumption. Hence, by the usual inverse function theorem, $f$ is injective near $0$ with continuously differentiable inverse. By the chain rule, with $w=f(z)$,
$${\frac {\partial }{\partial {\overline {z}}_{j}}}(f_{j}^{-1}\circ f)(z)=\sum _{k}{\frac {\partial f_{j}^{-1}}{\partial w_{k}}}(w){\frac {\partial f_{k}}{\partial {\overline {z}}_{j}}}(z)+\sum _{k}{\frac {\partial f_{j}^{-1}}{\partial {\overline {w}}_{k}}}(w){\frac {\partial {\overline {f}}_{k}}{\partial {\overline {z}}_{j}}}(z)$$
where the left-hand side and the first term on the right vanish since $f_{j}^{-1}\circ f$ and $f_{k}$ are holomorphic. Thus, ${\frac {\partial f_{j}^{-1}}{\partial {\overline {w}}_{k}}}(w)=0$ for each $k$. $\square$
Similarly, there is the implicit function theorem for holomorphic functions.
As already noted earlier, an injective smooth function can have an inverse that is not smooth (e.g., $f(x)=x^{3}$ in a real variable). This is not the case for holomorphic functions: the inverse of an injective holomorphic function is automatically holomorphic.
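The identity $\det J_{\mathbb {R} }(f)=|\det J(f)|^{2}$ used in the proof above can be checked numerically. The sketch below uses a holomorphic map of my own choosing, $f(z)=z^{2}+1$ viewed as a map $\mathbb{R}^{2}\to \mathbb{R}^{2}$, for which $J(f)=f'(z)=2z$; the real Jacobian determinant is estimated by central differences.

```python
# Numerically check det J_R(f) = |det J(f)|^2 for the holomorphic
# map f(z) = z^2 + 1 viewed as a map R^2 -> R^2 (an illustrative sketch).
def f(z):
    return z * z + 1.0

def real_jacobian_det(z, h=1e-6):
    # central differences of (u, v) = (Re f, Im f) in the variables (x, y)
    ux = (f(z + h).real - f(z - h).real) / (2 * h)
    uy = (f(z + 1j * h).real - f(z - 1j * h).real) / (2 * h)
    vx = (f(z + h).imag - f(z - h).imag) / (2 * h)
    vy = (f(z + 1j * h).imag - f(z - 1j * h).imag) / (2 * h)
    return ux * vy - uy * vx

z0 = 0.7 - 0.4j
det_complex = 2 * z0            # J(f) = f'(z) = 2z here
assert abs(real_jacobian_det(z0) - abs(det_complex) ** 2) < 1e-5
```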
== Formulations for manifolds ==
The inverse function theorem can be rephrased in terms of differentiable maps between differentiable manifolds. In this context the theorem states that for a differentiable map $F:M\to N$ (of class $C^{1}$), if the differential of $F$, $dF_{p}:T_{p}M\to T_{F(p)}N$, is a linear isomorphism at a point $p$ in $M$, then there exists an open neighborhood $U$ of $p$ such that $F|_{U}:U\to F(U)$ is a diffeomorphism. Note that this implies that the connected components of $M$ and $N$ containing $p$ and $F(p)$ have the same dimension, as is already directly implied by the assumption that $dF_{p}$ is an isomorphism.
If the derivative of F is an isomorphism at all points p in M then the map F is a local diffeomorphism.
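A standard example of a local diffeomorphism that is not a global one (my own choice of illustration, not from the text) is the polar-coordinates map: its Jacobian determinant is $r$, so the differential is an isomorphism wherever $r\neq 0$, yet the map is not injective because the angle is periodic.

```python
import math

def F(r, theta):
    # the polar-coordinates map R^2 -> R^2
    return (r * math.cos(theta), r * math.sin(theta))

def dF_det(r, theta):
    # Jacobian [[cos, -r sin], [sin, r cos]] has determinant r
    return (math.cos(theta) * (r * math.cos(theta))
            - (-r * math.sin(theta)) * math.sin(theta))

assert abs(dF_det(2.0, 0.3) - 2.0) < 1e-12   # isomorphism wherever r != 0
# F is therefore a local diffeomorphism away from r = 0, yet not injective:
p, q = F(1.0, 0.5), F(1.0, 0.5 + 2 * math.pi)
assert abs(p[0] - q[0]) < 1e-12 and abs(p[1] - q[1]) < 1e-12
```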
== Generalizations ==
=== Banach spaces ===
The inverse function theorem can also be generalized to differentiable maps between Banach spaces $X$ and $Y$. Let $U$ be an open neighbourhood of the origin in $X$ and $F:U\to Y$ a continuously differentiable function, and assume that the Fréchet derivative $dF_{0}:X\to Y$ of $F$ at $0$ is a bounded linear isomorphism of $X$ onto $Y$. Then there exists an open neighbourhood $V$ of $F(0)$ in $Y$ and a continuously differentiable map $G:V\to X$ such that $F(G(y))=y$ for all $y$ in $V$. Moreover, $G(y)$ is the only sufficiently small solution $x$ of the equation $F(x)=y$.
There is also the inverse function theorem for Banach manifolds.
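The existence and uniqueness of the small solution can be sketched in the simplest Banach space, $X=Y=\mathbb{R}$ (a toy example of my own choosing): for $F(x)=x+x^{2}$ with $dF_{0}=\mathrm{id}$, iterating the contraction $x\mapsto x-dF_{0}^{-1}(F(x)-y)$ from the standard proof converges to the unique small root of $F(x)=y$.

```python
import math

def F(x):
    # continuously differentiable map with dF_0 = identity (here: 1)
    return x + x * x

def G(y, iters=100):
    """Solve F(x) = y for the small solution by the contraction
    x <- x - dF_0^{-1}(F(x) - y), as in the standard fixed-point proof."""
    x = 0.0
    for _ in range(iters):
        x = x - (F(x) - y)   # dF_0^{-1} is the identity here
    return x

y = 0.1
x = G(y)
assert abs(F(x) - y) < 1e-10
# compare with the explicit small root of x + x^2 = y
assert abs(x - (-1 + math.sqrt(1 + 4 * y)) / 2) < 1e-10
```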
=== Constant rank theorem ===
The inverse function theorem (and the implicit function theorem) can be seen as a special case of the constant rank theorem, which states that a smooth map with constant rank near a point can be put in a particular normal form near that point. Specifically, if $F:M\to N$ has constant rank near a point $p\in M$, then there are open neighborhoods $U$ of $p$ and $V$ of $F(p)$ and there are diffeomorphisms $u:T_{p}M\to U$ and $v:T_{F(p)}N\to V$ such that $F(U)\subseteq V$ and such that the derivative $dF_{p}:T_{p}M\to T_{F(p)}N$ is equal to $v^{-1}\circ F\circ u$. That is, $F$ "looks like" its derivative near $p$. The set of points $p\in M$ such that the rank is constant in a neighborhood of $p$ is an open dense subset of $M$; this is a consequence of semicontinuity of the rank function. Thus the constant rank theorem applies to a generic point of the domain.
When the derivative of F is injective (resp. surjective) at a point p, it is also injective (resp. surjective) in a neighborhood of p, and hence the rank of F is constant on that neighborhood, and the constant rank theorem applies.
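The semicontinuity of the rank can be seen on a small example of my own choosing: for $F(x,y)=(x,xy)$ the Jacobian is $\begin{pmatrix}1&0\\y&x\end{pmatrix}$, whose rank is 2 on the open dense set $x\neq 0$ but drops to 1 on the line $x=0$.

```python
def jac(x, y):
    # Jacobian of F(x, y) = (x, x*y): [[1, 0], [y, x]]
    return [[1.0, 0.0], [y, x]]

def rank2(m, tol=1e-12):
    # rank of a 2x2 matrix via its determinant and entries
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    if abs(det) > tol:
        return 2
    return 1 if any(abs(e) > tol for e in m[0] + m[1]) else 0

# rank 2 (maximal, hence locally constant) wherever x != 0, an open
# dense set; it drops to 1 on the line x = 0
assert rank2(jac(0.5, 3.0)) == 2
assert rank2(jac(0.0, 3.0)) == 1
```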
=== Polynomial functions ===
If true, the Jacobian conjecture would be a variant of the inverse function theorem for polynomials. It states that if a vector-valued polynomial function has a Jacobian determinant that is an invertible polynomial (that is, a nonzero constant), then it has an inverse that is also a polynomial function. It is unknown whether the conjecture holds even in the case of two variables; it is a major open problem in the theory of polynomials.
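A simple instance (my own choice) where the conclusion of the conjecture is visibly true is the triangular map $f(x,y)=(x,\,y+x^{2})$: its Jacobian $\begin{pmatrix}1&0\\2x&1\end{pmatrix}$ has constant determinant $1$, and its inverse $(u,v)\mapsto (u,\,v-u^{2})$ is again polynomial.

```python
# A polynomial map with constant (hence invertible) Jacobian determinant
# and a polynomial inverse, as the Jacobian conjecture predicts in general:
#   f(x, y) = (x, y + x^2),  f^{-1}(u, v) = (u, v - u^2),  det J(f) = 1.
def f(x, y):
    return (x, y + x * x)

def f_inv(u, v):
    return (u, v - u * u)

for x, y in [(0, 0), (2, -3), (-1.5, 4.0)]:
    u, v = f(x, y)
    assert f_inv(u, v) == (x, y)
```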
=== Selections ===
When $f:\mathbb {R} ^{n}\to \mathbb {R} ^{m}$ with $m\leq n$, $f$ is $k$ times continuously differentiable, and the Jacobian $A=\nabla f({\overline {x}})$ at a point ${\overline {x}}$ is of rank $m$, the inverse of $f$ may not be unique. However, there exists a local selection function $s$ such that $f(s(y))=y$ for all $y$ in a neighborhood of ${\overline {y}}=f({\overline {x}})$, $s({\overline {y}})={\overline {x}}$, $s$ is $k$ times continuously differentiable in this neighborhood, and $\nabla s({\overline {y}})=A^{T}(AA^{T})^{-1}$ ($\nabla s({\overline {y}})$ is the Moore–Penrose pseudoinverse of $A$).
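One concrete selection can be built by a pseudoinverse-based iteration. The sketch below is an assumption of mine, not the construction from the text (the theorem only asserts that some smooth selection exists): for $f(x_{1},x_{2})=x_{1}^{2}+x_{2}$ with $A=\nabla f(1,0)=(2,1)$ of rank $1$, iterating $x\leftarrow x+A^{T}(AA^{T})^{-1}(y-f(x))$ from ${\overline {x}}=(1,0)$ yields a map $s$ with $f(s(y))=y$ and $s({\overline {y}})={\overline {x}}$.

```python
def f(x1, x2):
    # smooth map R^2 -> R; gradient at xbar = (1, 0) is A = (2, 1), rank 1
    return x1 * x1 + x2

xbar = (1.0, 0.0)
ybar = f(*xbar)   # = 1

def s(y, iters=60):
    """A candidate local selection: iterate x <- x + A^T (A A^T)^{-1} (y - f(x))
    with A the (fixed) Jacobian at xbar. (A sketch under my own assumptions.)"""
    x1, x2 = xbar
    A = (2.0, 1.0)
    AAt = A[0] ** 2 + A[1] ** 2   # A A^T = 5
    for _ in range(iters):
        r = y - f(x1, x2)
        x1 += A[0] / AAt * r
        x2 += A[1] / AAt * r
    return (x1, x2)

assert s(ybar) == xbar                   # s(ybar) = xbar
assert abs(f(*s(1.1)) - 1.1) < 1e-9      # f(s(y)) = y near ybar
```

A finite-difference check also recovers $\nabla s({\overline {y}})=A^{T}(AA^{T})^{-1}=(0.4,\,0.2)$ for this example.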
=== Over a real closed field ===
The inverse function theorem also holds over a real closed field $k$ (or an O-minimal structure). Precisely, the theorem holds for a semialgebraic (or definable) map between open subsets of $k^{n}$ that is continuously differentiable.
The usual proof of the IFT uses Banach's fixed point theorem, which relies on Cauchy completeness. That part of the argument is replaced by the use of the extreme value theorem, which does not need completeness. Explicitly, in § A proof using the contraction mapping principle, Cauchy completeness is used only to establish the inclusion $B(0,r/2)\subset f(B(0,r))$. Here, we shall directly show $B(0,r/4)\subset f(B(0,r))$ instead (which is enough). Given a point $y$ in $B(0,r/4)$, consider the function $P(x)=|f(x)-y|^{2}$ defined on a neighborhood of ${\overline {B}}(0,r)$
. If $P'(x)=0$, then
$$0=P'(x)=2[f_{1}(x)-y_{1}\cdots f_{n}(x)-y_{n}]f'(x)$$
and so $f(x)=y$, since $f'(x)$ is invertible. Now, by the extreme value theorem, $P$ attains a minimum at some point $x_{0}$ on the closed ball ${\overline {B}}(0,r)$, which can be shown to lie in $B(0,r)$ using $2^{-1}|x|\leq |f(x)|$. Since $P'(x_{0})=0$, $f(x_{0})=y$, which proves the claimed inclusion. $\square$
Alternatively, one can deduce the theorem from the one over real numbers by Tarski's principle.
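The minimization argument can be mimicked numerically. The one-dimensional sketch below uses a map and a gradient-descent minimizer of my own choosing (the proof itself uses the extreme value theorem nonconstructively): to solve $f(x)=y$ for $y$ in a small ball, minimize $P(x)=|f(x)-y|^{2}$ over a closed ball, with no completeness-based fixed-point iteration involved.

```python
def f(x):
    # f(0) = 0, f'(0) = 1, invertible derivative on [-1, 1]
    return x + 0.25 * x * x

def solve_by_minimizing_P(y, r=1.0, steps=2000, lr=0.1):
    """Find f(x) = y by minimizing P(x) = (f(x) - y)^2 over [-r, r],
    mirroring the extreme-value-theorem argument (a numeric sketch)."""
    x = 0.0
    for _ in range(steps):
        grad = 2.0 * (f(x) - y) * (1.0 + 0.5 * x)   # P'(x)
        x -= lr * grad
        x = max(-r, min(r, x))   # stay in the closed ball
    return x

y = 0.2                      # a point in B(0, r/4)
x0 = solve_by_minimizing_P(y)
assert abs(f(x0) - y) < 1e-6   # the minimizer solves f(x) = y
```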
== See also ==
Nash–Moser theorem
== Notes ==
== References ==
Allendoerfer, Carl B. (1974). "Theorems about Differentiable Functions". Calculus of Several Variables and Differentiable Manifolds. New York: Macmillan. pp. 54–88. ISBN 0-02-301840-2.
Baxandall, Peter; Liebeck, Hans (1986). "The Inverse Function Theorem". Vector Calculus. New York: Oxford University Press. pp. 214–225. ISBN 0-19-859652-9.
Nijenhuis, Albert (1974). "Strong derivatives and inverse mappings". Amer. Math. Monthly. 81 (9): 969–980. doi:10.2307/2319298. hdl:10338.dmlcz/102482. JSTOR 2319298.
Griffiths, Phillip; Harris, Joseph (1978), Principles of Algebraic Geometry, John Wiley & Sons, ISBN 978-0-471-05059-9.
Hirsch, Morris W. (1976). Differential Topology. Springer-Verlag. ISBN 978-0-387-90148-0.
Protter, Murray H.; Morrey, Charles B. Jr. (1985). "Transformations and Jacobians". Intermediate Calculus (Second ed.). New York: Springer. pp. 412–420. ISBN 0-387-96058-9.
Renardy, Michael; Rogers, Robert C. (2004). An Introduction to Partial Differential Equations. Texts in Applied Mathematics 13 (Second ed.). New York: Springer-Verlag. pp. 337–338. ISBN 0-387-00444-0.
Rudin, Walter (1976). Principles of mathematical analysis. International Series in Pure and Applied Mathematics (Third ed.). New York: McGraw-Hill Book. pp. 221–223. ISBN 978-0-07-085613-4.
Spivak, Michael (1965). Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus. San Francisco: Benjamin Cummings. ISBN 0-8053-9021-9.