In computational algebra, the Cantor–Zassenhaus algorithm is a method for factoring polynomials over finite fields (also called Galois fields).
The algorithm consists mainly of exponentiation and polynomial GCD computations. It was invented by David G. Cantor and Hans Zassenhaus in 1981.
It is arguably the dominant algorithm for solving the problem, having replaced the earlier Berlekamp's algorithm of 1967. It is currently implemented in many computer algebra systems like PARI/GP.
== Overview ==
=== Background ===
The Cantor–Zassenhaus algorithm takes as input a square-free polynomial f(x) (i.e. one with no repeated factors) of degree n with coefficients in a finite field F_q whose irreducible polynomial factors are all of equal degree. (Algorithms exist for efficiently factoring arbitrary polynomials into a product of polynomials satisfying these conditions; for instance, f(x) / gcd(f(x), f′(x)) is a square-free polynomial with the same factors as f(x), so the Cantor–Zassenhaus algorithm can be used to factor arbitrary polynomials.) It gives as output a polynomial g(x) with coefficients in the same field such that g(x) divides f(x). The algorithm may then be applied recursively to these and subsequent divisors, until we find the decomposition of f(x) into powers of irreducible polynomials (recalling that the ring of polynomials over any field is a unique factorisation domain).
All possible factors of f(x) are contained within the factor ring R = F_q[x] / ⟨f(x)⟩. If we suppose that f(x) has irreducible factors p_1(x), p_2(x), …, p_s(x), all of degree d, then this factor ring is isomorphic to the direct product of factor rings S = ∏_{i=1}^{s} F_q[x] / ⟨p_i(x)⟩. The isomorphism from R to S, say φ, maps a polynomial g(x) ∈ R to the s-tuple of its reductions modulo each of the p_i(x), i.e. if:
g(x) ≡ g_1(x) (mod p_1(x)),
g(x) ≡ g_2(x) (mod p_2(x)),
⋮
g(x) ≡ g_s(x) (mod p_s(x)),
then φ(g(x) + ⟨f(x)⟩) = (g_1(x) + ⟨p_1(x)⟩, …, g_s(x) + ⟨p_s(x)⟩). It is important to note the following at this point, as it will be of critical importance later in the algorithm: since the p_i(x) are each irreducible, each of the factor rings in this direct product is in fact a field, of order q^d.
=== Core result ===
The core result underlying the Cantor–Zassenhaus algorithm is the following: If a(x) ∈ R is a polynomial satisfying
a(x) ≠ 0, ±1 and
a_i(x) ∈ {0, −1, 1} for i = 1, 2, …, s,
where a_i(x) is the reduction of a(x) modulo p_i(x) as before, and if any two of the following three sets are non-empty:
A = {i ∣ a_i(x) = 0},
B = {i ∣ a_i(x) = −1},
C = {i ∣ a_i(x) = 1},
then there exist the following non-trivial factors of f(x):
gcd(f(x), a(x)) = ∏_{i∈A} p_i(x),
gcd(f(x), a(x) + 1) = ∏_{i∈B} p_i(x),
gcd(f(x), a(x) − 1) = ∏_{i∈C} p_i(x).
=== Algorithm ===
The Cantor–Zassenhaus algorithm computes polynomials of the same type as a(x) above using the isomorphism discussed in the Background section. It proceeds as follows, in the case where the field F_q is of odd characteristic (the process can be generalised to characteristic-2 fields in a fairly straightforward way). Select a random polynomial b(x) ∈ R such that b(x) ≠ 0, ±1. Set m = (q^d − 1)/2 and compute b(x)^m. Since φ is an isomorphism, we have (using our now-established notation):
φ(b(x)^m) = (b_1^m(x) + ⟨p_1(x)⟩, …, b_s^m(x) + ⟨p_s(x)⟩).
Now, each b_i(x) + ⟨p_i(x)⟩ is an element of a field of order q^d, as noted earlier. The multiplicative group of this field has order q^d − 1 and so, unless b_i(x) = 0, we have b_i(x)^{q^d − 1} = 1 for each i, and hence b_i(x)^m = ±1 for each i. If b_i(x) = 0, then of course b_i(x)^m = 0. Hence b(x)^m is a polynomial of the same type as a(x) above. Further, since b(x) ≠ 0, ±1, with probability at least 1/2 at least two of the sets A, B and C are non-empty, and by computing the above GCDs we may obtain non-trivial factors; when the GCDs turn out trivial (for instance, when b(x)^m ≡ 1), a new random b(x) is selected. Since the ring of polynomials over a field is a Euclidean domain, we may compute these GCDs using the Euclidean algorithm.
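The splitting step just described can be sketched in Python for the prime-field case F_p (an illustrative sketch with ad-hoc polynomial helpers; coefficient lists are stored lowest degree first, and all function names are invented here):

```python
import random

def trim(a):
    """Drop leading zero coefficients (lists are lowest-degree-first)."""
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_mod(a, b, p):
    """Remainder of a divided by b, coefficients in F_p."""
    a, inv = a[:], pow(b[-1], -1, p)
    while len(a) >= len(b):
        c = a[-1] * inv % p
        shift = len(a) - len(b)
        for i, bc in enumerate(b):
            a[i + shift] = (a[i + shift] - c * bc) % p
        trim(a)
    return a

def poly_gcd(f, g, p):
    """Monic gcd via the Euclidean algorithm."""
    while g:
        f, g = g, poly_mod(f, g, p)
    inv = pow(f[-1], -1, p)
    return [c * inv % p for c in f]

def poly_mul_mod(a, b, f, p):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return poly_mod(res, f, p)

def poly_pow_mod(b, e, f, p):
    """Square-and-multiply computation of b(x)^e mod f(x)."""
    r, b = [1], poly_mod(b[:], f, p)
    while e:
        if e & 1:
            r = poly_mul_mod(r, b, f, p)
        b = poly_mul_mod(b, b, f, p)
        e >>= 1
    return r

def cz_split(f, d, p):
    """Find one non-trivial factor of f, a square-free product of
    irreducibles of equal degree d over F_p (p odd)."""
    m = (p ** d - 1) // 2
    while True:
        b = trim([random.randrange(p) for _ in range(len(f) - 1)])
        if len(b) < 2:                  # skip constants (0, ±1, other scalars)
            continue
        g = poly_gcd(f, b, p)           # lucky case: b already shares a factor
        if 1 < len(g) < len(f):
            return g
        a = poly_pow_mod(b, m, f, p)    # components of a are 0 or ±1
        a[0] = (a[0] - 1) % p           # a(x) - 1: picks out the set C
        a = trim(a)
        if not a:
            continue                    # b^m ≡ 1: retry with a new b
        g = poly_gcd(f, a, p)
        if 1 < len(g) < len(f):
            return g
        a[0] = (a[0] + 2) % p           # a(x) + 1: picks out the set B
        a = trim(a)
        if a:
            g = poly_gcd(f, a, p)
            if 1 < len(g) < len(f):
                return g
```

For example, over F_3 the polynomial x^4 + x^3 + x + 2 factors as (x^2 + 1)(x^2 + x + 2), and cz_split([2, 1, 0, 1, 1], 2, 3) returns one of the two quadratic factors.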
== Applications ==
One important application of the Cantor–Zassenhaus algorithm is in computing discrete logarithms over finite fields of prime-power order. Computing discrete logarithms is an important problem in public key cryptography. For a field of prime-power order, the fastest known method is the index calculus method, which involves the factorisation of field elements. If we represent the prime-power order field in the usual way – that is, as polynomials over the prime order base field, reduced modulo an irreducible polynomial of appropriate degree – then this is simply polynomial factorisation, as provided by the Cantor–Zassenhaus algorithm.
== Implementation in computer algebra systems ==
The Cantor–Zassenhaus algorithm is implemented in the PARI/GP computer algebra system as the factormod() function (formerly factorcantor()).
== See also ==
Polynomial factorization
Factorization of polynomials over finite fields
== References ==
== External links ==
https://web.archive.org/web/20200301213349/http://blog.fkraiem.org/2013/12/01/polynomial-factorisation-over-finite-fields-part-3-final-splitting-cantor-zassenhaus-in-odd-characteristic/
A hash function is any function that can be used to map data of arbitrary size to fixed-size values, though there are some hash functions that support variable-length output. The values returned by a hash function are called hash values, hash codes, (hash/message) digests, or simply hashes. The values are usually used to index a fixed-size table called a hash table. Use of a hash function to index a hash table is called hashing or scatter-storage addressing.
Hash functions and their associated hash tables are used in data storage and retrieval applications to access data in a small and nearly constant time per retrieval. They require an amount of storage space only fractionally greater than the total space required for the data or records themselves. Hashing is a computationally- and storage-space-efficient form of data access that avoids the non-constant access time of ordered and unordered lists and structured trees, and the often-exponential storage requirements of direct access of state spaces of large or variable-length keys.
Use of hash functions relies on statistical properties of key and function interaction: worst-case behavior is intolerably bad but rare, and average-case behavior can be nearly optimal (minimal collision).: 527
Hash functions are related to (and often confused with) checksums, check digits, fingerprints, lossy compression, randomization functions, error-correcting codes, and ciphers. Although the concepts overlap to some extent, each one has its own uses and requirements and is designed and optimized differently. The hash function differs from these concepts mainly in terms of data integrity. Hash tables may use non-cryptographic hash functions, while cryptographic hash functions are used in cybersecurity to secure sensitive data such as passwords.
== Overview ==
In a hash table, a hash function takes a key as an input, which is associated with a datum or record and used to identify it to the data storage and retrieval application. The keys may be fixed-length, like an integer, or variable-length, like a name. In some cases, the key is the datum itself. The output is a hash code used to index a hash table holding the data or records, or pointers to them.
A hash function may be considered to perform three functions:
Convert variable-length keys into fixed-length (usually machine-word-length or less) values, by folding them by words or other units using a parity-preserving operator like ADD or XOR,
Scramble the bits of the key so that the resulting values are uniformly distributed over the keyspace, and
Map the key values into ones less than or equal to the size of the table.
A good hash function satisfies two basic properties: it should be very fast to compute, and it should minimize duplication of output values (collisions). Hash functions rely on generating favorable probability distributions for their effectiveness, reducing access time to nearly constant. High table loading factors, pathological key sets, and poorly designed hash functions can result in access times approaching linear in the number of items in the table. Hash functions can be designed to give the best worst-case performance, good performance under high table loading factors, and in special cases, perfect (collisionless) mapping of keys into hash codes. Implementation is based on parity-preserving bit operations (XOR and ADD), multiply, or divide. A necessary adjunct to the hash function is a collision-resolution method that employs an auxiliary data structure like linked lists, or systematic probing of the table to find an empty slot.
== Hash tables ==
Hash functions are used in conjunction with hash tables to store and retrieve data items or data records. The hash function translates the key associated with each datum or record into a hash code, which is used to index the hash table. When an item is to be added to the table, the hash code may index an empty slot (also called a bucket), in which case the item is added to the table there. If the hash code indexes a full slot, then some kind of collision resolution is required: the new item may be omitted (not added to the table), or replace the old item, or be added to the table in some other location by a specified procedure. That procedure depends on the structure of the hash table. In chained hashing, each slot is the head of a linked list or chain, and items that collide at the slot are added to the chain. Chains may be kept in random order and searched linearly, or in serial order, or as a self-ordering list by frequency to speed up access. In open address hashing, the table is probed starting from the occupied slot in a specified manner, usually by linear probing, quadratic probing, or double hashing until an open slot is located or the entire table is probed (overflow). Searching for the item follows the same procedure until the item is located, an open slot is found, or the entire table has been searched (item not in table).
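The chained-hashing procedure described above can be sketched as follows (a minimal illustration; the slot count and the use of Python's built-in hash are arbitrary choices for the sketch):

```python
class ChainedHashTable:
    """Hash table with separate chaining: each slot heads a list of
    (key, value) pairs that collided at that slot."""

    def __init__(self, size=8):
        self.slots = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.slots)

    def put(self, key, value):
        chain = self.slots[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)   # key already present: replace the old item
                return
        chain.append((key, value))        # collision or empty slot: extend the chain

    def get(self, key):
        for k, v in self.slots[self._index(key)]:
            if k == key:
                return v
        return None                       # item not in table
```

Searching follows the same chain that insertion used, mirroring the lookup procedure described in the text.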
=== Specialized uses ===
Hash functions are also used to build caches for large data sets stored in slow media. A cache is generally simpler than a hashed search table, since any collision can be resolved by discarding or writing back the older of the two colliding items.
Hash functions are an essential ingredient of the Bloom filter, a space-efficient probabilistic data structure that is used to test whether an element is a member of a set.
A special case of hashing is known as geometric hashing or the grid method. In these applications, the set of all inputs is some sort of metric space, and the hashing function can be interpreted as a partition of that space into a grid of cells. The table is often an array with two or more indices (called a grid file, grid index, bucket grid, and similar names), and the hash function returns an index tuple. This principle is widely used in computer graphics, computational geometry, and many other disciplines, to solve many proximity problems in the plane or in three-dimensional space, such as finding closest pairs in a set of points, similar shapes in a list of shapes, similar images in an image database, and so on.
Hash tables are also used to implement associative arrays and dynamic sets.
== Properties ==
=== Uniformity ===
A good hash function should map the expected inputs as evenly as possible over its output range. That is, every hash value in the output range should be generated with roughly the same probability. The reason for this last requirement is that the cost of hashing-based methods goes up sharply as the number of collisions—pairs of inputs that are mapped to the same hash value—increases. If some hash values are more likely to occur than others, then a larger fraction of the lookup operations will have to search through a larger set of colliding table entries.
This criterion only requires the value to be uniformly distributed, not random in any sense. A good randomizing function is (barring computational efficiency concerns) generally a good choice as a hash function, but the converse need not be true.
Hash tables often contain only a small subset of the valid inputs. For instance, a club membership list may contain only a hundred or so member names, out of the very large set of all possible names. In these cases, the uniformity criterion should hold for almost all typical subsets of entries that may be found in the table, not just for the global set of all possible entries.
In other words, if a typical set of m records is hashed to n table slots, then the probability of a bucket receiving many more than m/n records should be vanishingly small. In particular, if m < n, then very few buckets should have more than one or two records. A small number of collisions is virtually inevitable, even if n is much larger than m—see the birthday problem.
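The birthday effect mentioned above can be quantified: the probability that m uniformly hashed keys land in m distinct slots out of n is ∏_{i=0}^{m−1}(1 − i/n). A quick sketch:

```python
def prob_all_distinct(m, n):
    """Probability that m uniformly random hash values over n slots
    are pairwise distinct (no collision)."""
    p = 1.0
    for i in range(m):
        p *= (n - i) / n
    return p
```

With n = 365 slots, a collision already becomes more likely than not at m = 23 keys, the classic birthday-problem threshold.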
In special cases when the keys are known in advance and the key set is static, a hash function can be found that achieves absolute (or collisionless) uniformity. Such a hash function is said to be perfect. There is no algorithmic way of constructing such a function—searching for one is a factorial function of the number of keys to be mapped versus the number of table slots that they are mapped into. Finding a perfect hash function over more than a very small set of keys is usually computationally infeasible; the resulting function is likely to be more computationally complex than a standard hash function and provides only a marginal advantage over a function with good statistical properties that yields a minimum number of collisions. See universal hash function.
=== Testing and measurement ===
When testing a hash function, the uniformity of the distribution of hash values can be evaluated by the chi-squared test. This test is a goodness-of-fit measure: it is the actual distribution of items in buckets versus the expected (or uniform) distribution of items. The formula is
(∑_{j=0}^{m−1} b_j(b_j + 1)/2) / ((n/2m)(n + 2m − 1)),
where n is the number of keys, m is the number of buckets, and bj is the number of items in bucket j.
A ratio within one confidence interval (such as 0.95 to 1.05) is indicative that the hash function evaluated has an expected uniform distribution.
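The ratio above can be computed directly from bucket counts; an even spread scores below 1 and heavy clustering scores well above it (a sketch, with the function name invented here):

```python
def uniformity_ratio(buckets):
    """Chi-squared style ratio: sum_j b_j(b_j + 1)/2 over its expectation
    (n/2m)(n + 2m - 1); a value near 1 indicates a uniform hash."""
    m = len(buckets)                                # number of buckets
    n = sum(buckets)                                # number of keys
    observed = sum(b * (b + 1) / 2 for b in buckets)
    expected = (n / (2 * m)) * (n + 2 * m - 1)
    return observed / expected
```

A perfectly even assignment scores slightly below 1 (better than random), while dumping every key into one bucket drives the ratio far above 1.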
Hash functions can have some technical properties that make it more likely that they will have a uniform distribution when applied. One is the strict avalanche criterion: whenever a single input bit is complemented, each of the output bits changes with a 50% probability. The reason for this property is that selected subsets of the keyspace may have low variability. For the output to be uniformly distributed, a low amount of variability, even one bit, should translate into a high amount of variability (i.e. distribution over the table space) in the output. Each bit should change with a probability of 50% because, if some bits are reluctant to change, then the keys become clustered around those values. If the bits change too readily, then the mapping approaches a fixed XOR function of a single bit. Standard tests for this property have been described in the literature, and the relevance of the criterion to multiplicative hash functions has also been assessed.
=== Efficiency ===
In data storage and retrieval applications, the use of a hash function is a trade-off between search time and data storage space. If search time were unbounded, then a very compact unordered linear list would be the best medium; if storage space were unbounded, then a randomly accessible structure indexable by the key-value would be very large and very sparse, but very fast. A hash function takes a finite amount of time to map a potentially large keyspace to a feasible amount of storage space searchable in a bounded amount of time regardless of the number of keys. In most applications, the hash function should be computable with minimum latency and secondarily in a minimum number of instructions.
Computational complexity varies with the number of instructions required and latency of individual instructions, with the simplest being the bitwise methods (folding), followed by the multiplicative methods, and the most complex (slowest) are the division-based methods.
Because collisions should be infrequent, and cause a marginal delay but are otherwise harmless, it is usually preferable to choose a faster hash function over one that needs more computation but saves a few collisions.
Division-based implementations can be of particular concern because a division requires multiple cycles on nearly all processor microarchitectures. Division (modulo) by a constant can be inverted to become a multiplication by the word-size multiplicative-inverse of that constant. This can be done by the programmer, or by the compiler. Division can also be reduced directly into a series of shift-subtracts and shift-adds, though minimizing the number of such operations required is a daunting problem; the number of machine-language instructions resulting may be more than a dozen and swamp the pipeline. If the microarchitecture has hardware multiply functional units, then the multiply-by-inverse is likely a better approach.
We can allow the table size n to not be a power of 2 and still not have to perform any remainder or division operation, as these computations are sometimes costly. For example, let n be significantly less than 2^b. Consider a pseudorandom number generator function P(key) that is uniform on the interval [0, 2^b − 1]. A hash function uniform on the interval [0, n − 1] is n P(key) / 2^b. We can replace the division by a (possibly faster) right bit shift: n P(key) >> b.
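The multiply-then-shift reduction just described can be sketched as follows (b = 32 is an arbitrary choice for the sketch):

```python
def reduce_range(r, n, b=32):
    """Map r, assumed uniform on [0, 2^b - 1], to [0, n - 1] using a
    multiplication and a right shift instead of a remainder operation."""
    return (n * r) >> b
```

The endpoints of the input range map to the endpoints of the output range, and intermediate values scale proportionally.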
If keys are being hashed repeatedly, and the hash function is costly, then computing time can be saved by precomputing the hash codes and storing them with the keys. Matching hash codes almost certainly means that the keys are identical. This technique is used for the transposition table in game-playing programs, which stores a 64-bit hashed representation of the board position.
=== Universality ===
A universal hashing scheme is a randomized algorithm that selects a hash function h among a family of such functions, in such a way that the probability of a collision of any two distinct keys is 1/m, where m is the number of distinct hash values desired—independently of the two keys. Universal hashing ensures (in a probabilistic sense) that the hash function application will behave as well as if it were using a random function, for any distribution of the input data. It will, however, have more collisions than perfect hashing and may require more operations than a special-purpose hash function.
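One well-known universal family (an illustrative choice, not prescribed by the text) is the Carter–Wegman construction h(x) = ((a·x + b) mod p) mod m for a prime p and randomly drawn a, b; the modulus 2^61 − 1 below is a Mersenne prime picked for the sketch:

```python
import random

def make_universal_hash(m, p=2**61 - 1, rng=random):
    """Draw one member of the Carter-Wegman family
    h(x) = ((a*x + b) mod p) mod m, with p prime and a, b random."""
    a = rng.randrange(1, p)   # a must be non-zero
    b = rng.randrange(p)
    return lambda x: ((a * x + b) % p) % m
```

Each call to make_universal_hash selects a fresh function from the family, which is what gives the collision probability of roughly 1/m for any fixed pair of distinct keys.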
=== Applicability ===
A hash function that allows only certain table sizes or strings only up to a certain length, or cannot accept a seed (i.e. allow double hashing) is less useful than one that does.
A hash function is applicable in a variety of situations. Particularly within cryptography, notable applications include:
Integrity checking: Identical hash values for different files imply equality, providing a reliable means to detect file modifications.
Key derivation: Minor input changes result in a random-looking output alteration, known as the diffusion property. Thus, hash functions are valuable for key derivation functions.
Message authentication codes (MACs): Through the integration of a confidential key with the input data, hash functions can generate MACs ensuring the genuineness of the data, such as in HMACs.
Password storage: The password's hash value does not expose any password details, emphasizing the importance of securely storing hashed passwords on the server.
Signatures: Message hashes are signed rather than the whole message.
=== Deterministic ===
A hash procedure must be deterministic—for a given input value, it must always generate the same hash value. In other words, it must be a function of the data to be hashed, in the mathematical sense of the term. This requirement excludes hash functions that depend on external variable parameters, such as pseudo-random number generators or the time of day. It also excludes functions that depend on the memory address of the object being hashed, because the address may change during execution (as may happen on systems that use certain methods of garbage collection), although sometimes rehashing of the item is possible.
The determinism is in the context of the reuse of the function. For example, Python adds the feature that hash functions make use of a randomized seed that is generated once when the Python process starts in addition to the input to be hashed. The Python hash (SipHash) is still a valid hash function when used within a single run, but if the values are persisted (for example, written to disk), they can no longer be treated as valid hash values, since in the next run the random value might differ.
=== Defined range ===
It is often desirable that the output of a hash function have fixed size (but see below). If, for example, the output is constrained to 32-bit integer values, then the hash values can be used to index into an array. Such hashing is commonly used to accelerate data searches. Producing fixed-length output from variable-length input can be accomplished by breaking the input data into chunks of specific size. Hash functions used for data searches use some arithmetic expression that iteratively processes chunks of the input (such as the characters in a string) to produce the hash value.
=== Variable range ===
In many applications, the range of hash values may be different for each run of the program or may change along the same run (for instance, when a hash table needs to be expanded). In those situations, one needs a hash function which takes two parameters—the input data z, and the number n of allowed hash values.
A common solution is to compute a fixed hash function with a very large range (say, 0 to 2^32 − 1), divide the result by n, and use the division's remainder. If n is itself a power of 2, this can be done by bit masking and bit shifting. When this approach is used, the hash function must be chosen so that the result has fairly uniform distribution between 0 and n − 1, for any value of n that may occur in the application. Depending on the function, the remainder may be uniform only for certain values of n, e.g. odd or prime numbers.
=== Variable range with minimal movement (dynamic hash function) ===
When the hash function is used to store values in a hash table that outlives the run of the program, and the hash table needs to be expanded or shrunk, the hash table is referred to as a dynamic hash table.
A hash function that will relocate the minimum number of records when the table is resized is desirable. What is needed is a hash function H(z,n) (where z is the key being hashed and n is the number of allowed hash values) such that H(z,n + 1) = H(z,n) with probability close to n/(n + 1).
Linear hashing and spiral hashing are examples of dynamic hash functions that execute in constant time but relax the property of uniformity to achieve the minimal movement property. Extendible hashing uses a dynamic hash function that requires space proportional to n to compute the hash function, and it becomes a function of the previous keys that have been inserted. Several algorithms that preserve the uniformity property but require time proportional to n to compute the value of H(z,n) have been invented.
A hash function with minimal movement is especially useful in distributed hash tables.
=== Data normalization ===
In some applications, the input data may contain features that are irrelevant for comparison purposes. For example, when looking up a personal name, it may be desirable to ignore the distinction between upper and lower case letters. For such data, one must use a hash function that is compatible with the data equivalence criterion being used: that is, any two inputs that are considered equivalent must yield the same hash value. This can be accomplished by normalizing the input before hashing it, as by upper-casing all letters.
== Hashing integer data types ==
There are several common algorithms for hashing integers. The method giving the best distribution is data-dependent. One of the simplest and most common methods in practice is the modulo division method.
=== Identity hash function ===
If the data to be hashed is small enough, then one can use the data itself (reinterpreted as an integer) as the hashed value. The cost of computing this identity hash function is effectively zero. This hash function is perfect, as it maps each input to a distinct hash value.
The meaning of "small enough" depends on the size of the type that is used as the hashed value. For example, in Java, the hash code is a 32-bit integer. Thus the 32-bit integer Integer and 32-bit floating-point Float objects can simply use the value directly, whereas the 64-bit integer Long and 64-bit floating-point Double cannot.
Other types of data can also use this hashing scheme. For example, when mapping character strings between upper and lower case, one can use the binary encoding of each character, interpreted as an integer, to index a table that gives the alternative form of that character ("A" for "a", "8" for "8", etc.). If each character is stored in 8 bits (as in extended ASCII or ISO Latin 1), the table has only 2^8 = 256 entries; in the case of Unicode characters, the table would have 17 × 2^16 = 1,114,112 entries.
The same technique can be used to map two-letter country codes like "us" or "za" to country names (26^2 = 676 table entries), 5-digit ZIP codes like 13083 to city names (100000 entries), etc. Invalid data values (such as the country code "xx" or the ZIP code 00000) may be left undefined in the table or mapped to some appropriate "null" value.
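The table-lookup idea above can be sketched for case mapping (only the ASCII letters are filled in for this sketch; the helper name is invented):

```python
# 256-entry table giving the alternative-case form of each 8-bit code;
# codes without an alternative form map to themselves ("8" for "8").
table = list(range(256))
for c in range(ord('a'), ord('z') + 1):
    table[c] = c - 32          # 'a' -> 'A', ..., 'z' -> 'Z'

def to_upper(s):
    """Map each character through the identity-hash-indexed table."""
    return ''.join(chr(table[ord(ch)]) for ch in s)
```

The character code itself serves as the (identity) hash, indexing the table directly with no computation beyond the lookup.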
=== Trivial hash function ===
If the keys are uniformly or sufficiently uniformly distributed over the key space, so that the key values are essentially random, then they may be considered to be already "hashed". In this case, any number of any bits in the key may be extracted and collated as an index into the hash table. For example, a simple hash function might mask off the m least significant bits and use the result as an index into a hash table of size 2^m.
=== Mid-squares ===
A mid-squares hash code is produced by squaring the input and extracting an appropriate number of middle digits or bits. For example, if the input is 123456789 and the hash table size 10000, then squaring the key produces 15241578750190521, so the hash code is taken as the middle 4 digits of the 17-digit number (ignoring the high digit): 8750. The mid-squares method produces a reasonable hash code if there are not a lot of leading or trailing zeros in the key. This is a variant of multiplicative hashing, but not as good because an arbitrary key is not a good multiplier.
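The worked example above can be reproduced directly (a decimal-digit sketch; production implementations would work on bits instead):

```python
def mid_square(key, digits=4):
    """Square the key and take the middle `digits` decimal digits."""
    sq = str(key * key)
    if len(sq) % 2:              # odd length: ignore the high digit, as in the text
        sq = sq[1:]
    start = (len(sq) - digits) // 2
    return int(sq[start:start + digits])
```

For the key 123456789 this squares to 15241578750190521 and yields the hash code 8750, matching the example in the text.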
=== Division hashing ===
A standard technique is to use a modulo function on the key, by selecting a divisor M which is a prime number close to the table size, so h(K) ≡ K (mod M). The table size is usually a power of 2. This gives a distribution over {0, …, M − 1}. This gives good results over a large number of key sets. A significant drawback of division hashing is that division requires multiple cycles on most modern architectures (including x86) and can be 10 times slower than multiplication. A second drawback is that it will not break up clustered keys. For example, the keys 123000, 456000, 789000, etc. modulo 1000 all map to the same address. This technique works well in practice because many key sets are sufficiently random already, and the probability that a key set will be cyclical by a large prime number is small.
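The clustering drawback above is easy to demonstrate; 1009, a prime close to a table size of 1024, is an illustrative choice:

```python
def div_hash(key, M):
    """Division hashing: h(K) = K mod M."""
    return key % M

# The clustered keys 123000, 456000, 789000 all collide modulo 1000,
# but a prime divisor close to the table size separates them.
clustered = (123000, 456000, 789000)
```

Using the composite divisor 1000 collapses the whole cluster onto address 0; the prime 1009 maps the three keys to three distinct addresses.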
=== Algebraic coding ===
Algebraic coding is a variant of the division method of hashing which uses division by a polynomial modulo 2 instead of an integer to map n bits to m bits.: 512–513 In this approach, M = 2^m, and we postulate an mth-degree polynomial Z(x) = x^m + ζ_{m−1}x^{m−1} + ⋯ + ζ_0. A key K = (k_{n−1}…k_1k_0)_2 can be regarded as the polynomial K(x) = k_{n−1}x^{n−1} + ⋯ + k_1x + k_0. The remainder using polynomial arithmetic modulo 2 is K(x) mod Z(x) = h_{m−1}x^{m−1} + ⋯ + h_1x + h_0. Then h(K) = (h_{m−1}…h_1h_0)_2. If Z(x) is constructed to have t or fewer non-zero coefficients, then keys which share fewer than t bits are guaranteed to not collide.
Z is a function of k, t, and n (the last of which is a divisor of 2^k − 1) and is constructed from the finite field GF(2^k). Knuth gives an example: taking (n,m,t) = (15,10,7) yields Z(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1. The derivation is as follows:
Let S be the smallest set of integers such that {1,2,…,t} ⊆ S and (2j mod n) ∈ S ∀j ∈ S.
Define P(x) = ∏_{j∈S} (x − α^j),
where α is an element of order n in GF(2^k) and the coefficients of P(x) are computed in this field. Then the degree of P(x) = |S|. Since α^{2j} is a root of P(x) whenever α^j is a root, it follows that the coefficients p_i of P(x) satisfy p_i^2 = p_i, so they are all 0 or 1. If R(x) = r_{n−1}x^{n−1} + ⋯ + r_1x + r_0 is any nonzero polynomial modulo 2 with at most t nonzero coefficients, then R(x) is not a multiple of P(x) modulo 2. It follows that the corresponding hash function will map keys with fewer than t bits in common to unique indices.: 542–543
The usual outcome is that either n will get large, or t will get large, or both, for the scheme to be computationally feasible. Therefore, it is more suited to hardware or microcode implementation.: 542–543
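The polynomial-modulo-2 remainder at the heart of this scheme can be sketched with integers as bit masks (a generic remainder routine, not Knuth's construction of Z(x)):

```python
def poly_mod2(k, z):
    """Remainder of K(x) mod Z(x) over GF(2); polynomials are packed as
    bit masks, with bit i holding the coefficient of x^i."""
    dz = z.bit_length() - 1                    # degree of Z(x)
    while k.bit_length() - 1 >= dz:
        # cancel the leading term of k by XORing a shifted copy of z
        k ^= z << (k.bit_length() - 1 - dz)
    return k
```

For instance, with Z(x) = x^3 + x + 1 (mask 0b1011), the key x^4 reduces to x^2 + x, and every hash code has degree below that of Z(x).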
=== Unique permutation hashing ===
Unique permutation hashing has a guaranteed best worst-case insertion time.
=== Multiplicative hashing ===
Standard multiplicative hashing uses the formula h_a(K) = ⌊(aK mod W) / (W/M)⌋, which produces a hash value in {0, …, M − 1}. The value a is an appropriately chosen value that should be relatively prime to W; it should be large, and its binary representation a random mix of 1s and 0s. An important practical special case occurs when W = 2^w and M = 2^m are powers of 2 and w is the machine word size. In this case, this formula becomes h_a(K) = ⌊(aK mod 2^w) / 2^(w−m)⌋. This is special because arithmetic modulo 2^w is done by default in low-level programming languages and integer division by a power of 2 is simply a right-shift, so, in C, for example, this function becomes
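a single multiplication followed by a right-shift. A minimal sketch, assuming w = 32 and treating the multiplier a and the output width m as parameters:

```c
#include <stdint.h>

/* h_a(K) = (a*K mod 2^w) >> (w - m), specialized to w = 32.
   The wrap-around of uint32_t arithmetic supplies the mod 2^32 for free. */
static inline uint32_t mult_hash(uint32_t K, uint32_t a, unsigned m)
{
    return (a * K) >> (32 - m);
}
```

For example, with a = 9E3779B9₁₆ and m = 16, the key 1 hashes to 9E37₁₆.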
and for fixed m and w this translates into a single integer multiplication and right-shift, making it one of the fastest hash functions to compute.
Multiplicative hashing is susceptible to a "common mistake" that leads to poor diffusion—higher-value input bits do not affect lower-value output bits. A transmutation on the input which shifts the span of retained top bits down and XORs or ADDs them to the key before the multiplication step corrects for this. The resulting function looks like:
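something like the following sketch (again assuming w = 32, with a and m as parameters; the down-shift amount matches the number of retained top bits):

```c
#include <stdint.h>

/* Same multiplicative hash, but the key's retained top bits are first
   folded back into its low bits, so that high input bits can influence
   every output bit. */
static inline uint32_t mult_hash_mixed(uint32_t K, uint32_t a, unsigned m)
{
    K ^= K >> (32 - m);            /* shift retained top bits down and XOR in */
    return (a * K) >> (32 - m);
}
```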
=== Fibonacci hashing ===
Fibonacci hashing is a form of multiplicative hashing in which the multiplier is 2^w / ϕ, where w is the machine word length and ϕ (phi) is the golden ratio (approximately 1.618). A property of this multiplier is that it uniformly distributes over the table space blocks of consecutive keys with respect to any block of bits in the key. Consecutive keys within the high bits or low bits of the key (or some other field) are relatively common. The multipliers for various word lengths are:
16: a = 9E37₁₆ = 40503₁₀
32: a = 9E3779B9₁₆ = 2654435769₁₀
48: a = 9E3779B97F4B₁₆ = 173961102589771₁₀
64: a = 9E3779B97F4A7C15₁₆ = 11400714819323198485₁₀
The multiplier should be odd, so the least significant bit of the output is invertible modulo 2^w. The last two values given above are rounded (up and down, respectively) by more than 1/2 of a least-significant bit to achieve this.
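With the 64-bit multiplier from the table above, Fibonacci hashing reduces to one multiply and one shift; a sketch:

```c
#include <stdint.h>

/* Fibonacci hashing for w = 64: multiply by 2^64/phi (rounded to an odd
   value) and keep the top m bits as the table index. */
static inline uint64_t fib_hash(uint64_t K, unsigned m)
{
    return (K * 0x9E3779B97F4A7C15ULL) >> (64 - m);
}
```

Note how the consecutive keys 1 and 2 land far apart in a 2^16-entry table.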
=== Zobrist hashing ===
Tabulation hashing, more generally known as Zobrist hashing after Albert Zobrist, is a method for constructing universal families of hash functions by combining table lookup with XOR operations. This algorithm has proven to be very fast and of high quality for hashing purposes (especially hashing of integer-number keys).
Zobrist hashing was originally introduced as a means of compactly representing chess positions in computer game-playing programs. A unique random number was assigned to represent each type of piece (six each for black and white) on each space of the board. Thus a table of 64×12 such numbers is initialized at the start of the program. The random numbers could be any length, but 64 bits was natural due to the 64 squares on the board. A position was transcribed by cycling through the pieces in a position, indexing the corresponding random numbers (vacant spaces were not included in the calculation) and XORing them together (the starting value could be 0 (the identity value for XOR) or a random seed). The resulting value was reduced by modulo, folding, or some other operation to produce a hash table index. The original Zobrist hash was stored in the table as the representation of the position.
Later, the method was extended to hashing integers by representing each byte in each of 4 possible positions in the word by a unique 32-bit random number. Thus, a table of 2^8×4 random numbers is constructed. A 32-bit hashed integer is transcribed by successively indexing the table with the value of each byte of the plain text integer and XORing the loaded values together (again, the starting value can be the identity value or a random seed). The natural extension to 64-bit integers is by use of a table of 2^8×8 64-bit random numbers.
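A sketch of the integer-hashing variant just described (initializing the tables with the C library's rand is purely illustrative; any source of random numbers will do):

```c
#include <stdint.h>
#include <stdlib.h>

/* One 256-entry table of random 32-bit values per byte position. */
static uint32_t zobrist_table[4][256];

static void zobrist_init(unsigned seed)
{
    srand(seed);
    for (int pos = 0; pos < 4; pos++)
        for (int b = 0; b < 256; b++)
            zobrist_table[pos][b] = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
}

/* Hash a 32-bit key by indexing each table with one byte of the key and
   XORing the loaded values together (starting value 0, the XOR identity). */
static uint32_t zobrist_hash(uint32_t K)
{
    uint32_t h = 0;
    for (int pos = 0; pos < 4; pos++)
        h ^= zobrist_table[pos][(K >> (8 * pos)) & 0xFF];
    return h;
}
```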
This kind of function has some nice theoretical properties, one of which is called 3-tuple independence, meaning that every 3-tuple of keys is equally likely to be mapped to any 3-tuple of hash values.
=== Customized hash function ===
A hash function can be designed to exploit existing entropy in the keys. If the keys have leading or trailing zeros, or particular fields that are unused, always zero or some other constant, or generally vary little, then masking out only the volatile bits and hashing on those will provide a better and possibly faster hash function. Selected divisors or multipliers in the division and multiplicative schemes may make more uniform hash functions if the keys are cyclic or have other redundancies.
== Hashing variable-length data ==
When the data values are long (or variable-length) character strings—such as personal names, web page addresses, or mail messages—their distribution is usually very uneven, with complicated dependencies. For example, text in any natural language has highly non-uniform distributions of characters, and character pairs, characteristic of the language. For such data, it is prudent to use a hash function that depends on all characters of the string—and depends on each character in a different way.
=== Middle and ends ===
Simplistic hash functions may add the first and last n characters of a string along with the length, or form a word-size hash from the middle 4 characters of a string. This saves iterating over the (potentially long) string, but hash functions that do not hash on all characters of a string can readily become linear due to redundancies, clustering, or other pathologies in the key set. Such strategies may be effective as a custom hash function if the structure of the keys is such that either the middle, ends, or other fields are zero or some other invariant constant that does not differentiate the keys; then the invariant parts of the keys can be ignored.
=== Character folding ===
The paradigmatic example of folding by characters is to add up the integer values of all the characters in the string. A better idea is to multiply the hash total by a constant, typically a sizable prime number, before adding in the next character, ignoring overflow. Using exclusive-or instead of addition is also a plausible alternative. The final operation would be a modulo, mask, or other function to reduce the word value to an index the size of the table. The weakness of this procedure is that information may cluster in the upper or lower bits of the bytes; this clustering will remain in the hashed result and cause more collisions than a proper randomizing hash. ASCII byte codes, for example, have an upper bit of 0, and printable strings do not use the last byte code or most of the first 32 byte codes, so the information, which uses the remaining byte codes, is clustered in the remaining bits in an unobvious manner.
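The folding scheme just described can be sketched as follows (the multiplier 31 and the final modulo reduction are illustrative choices, not prescribed by the text):

```c
#include <stdint.h>

/* Character folding: multiply the running total by a prime, add the next
   character, ignore overflow (uint32_t wraps mod 2^32), then reduce. */
static uint32_t fold_hash(const char *s, uint32_t table_size)
{
    uint32_t h = 0;
    for (; *s; s++)
        h = h * 31 + (uint8_t)*s;
    return h % table_size;          /* final reduction to a table index */
}
```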
The classic approach, dubbed the PJW hash based on the work of Peter J. Weinberger at Bell Labs in the 1970s, was originally designed for hashing identifiers into compiler symbol tables, as given in the "Dragon Book". This hash function shifts the running total left by 4 bits before adding in each byte. When the quantity wraps, the high 4 bits are shifted out and, if non-zero, XORed back into the low byte of the cumulative quantity. The result is a word-size hash code to which a modulo or other reducing operation can be applied to produce the final hash index.
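A sketch of this scheme, following the Dragon Book's hashpjw (modulo superficial C styling):

```c
#include <stdint.h>

/* PJW hash: shift the total left 4 bits per byte; fold any bits that reach
   the top nibble back down into a low byte, then clear them from the top. */
static uint32_t pjw_hash(const char *s)
{
    uint32_t h = 0, high;
    for (; *s; s++) {
        h = (h << 4) + (uint8_t)*s;
        high = h & 0xF0000000u;
        if (high) {
            h ^= high >> 24;   /* XOR the overflowing nibble into a low byte */
            h &= ~high;        /* and shift it out of the top */
        }
    }
    return h;
}
```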
Today, especially with the advent of 64-bit word sizes, much more efficient variable-length string hashing by word chunks is available.
=== Word length folding ===
Modern microprocessors will allow for much faster processing if 8-bit character strings are not hashed by processing one character at a time, but by interpreting the string as an array of 32-bit or 64-bit integers and hashing/accumulating these "wide word" integer values by means of arithmetic operations (e.g. multiplication by constant and bit-shifting). The final word, which may have unoccupied byte positions, is filled with zeros or a specified randomizing value before being folded into the hash. The accumulated hash code is reduced by a final modulo or other operation to yield an index into the table.
=== Radix conversion hashing ===
Analogous to the way an ASCII or EBCDIC character string representing a decimal number is converted to a numeric quantity for computing, a variable-length string can be converted as x_(k−1)a^(k−1) + x_(k−2)a^(k−2) + ⋯ + x_1a + x_0. This is simply a polynomial in a radix a > 1 that takes the components (x_0, x_1, ..., x_(k−1)) as the characters of the input string of length k. It can be used directly as the hash code, or a hash function applied to it to map the potentially large value to the hash table size. The value of a is usually a prime number large enough to hold the number of different characters in the character set of potential keys. Radix conversion hashing of strings minimizes the number of collisions. Available data sizes may restrict the maximum length of string that can be hashed with this method. For example, a 128-bit word will hash only a 26-character alphabetic string (ignoring case) with a radix of 29; a printable ASCII string is limited to 9 characters using radix 97 and a 64-bit word. However, alphabetic keys are usually of modest length, because keys must be stored in the hash table. Numeric character strings are usually not a problem; 64 bits can count up to 10^19, or 19 decimal digits with radix 10.
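Evaluated by Horner's rule, this conversion becomes a short loop (radix 97 echoes the printable-ASCII example; treating the first character as the highest-order coefficient is an illustrative choice):

```c
#include <stdint.h>

/* Horner evaluation of x_(k-1)*a^(k-1) + ... + x_1*a + x_0, where the x_i
   are the bytes of the string and a is the radix. */
static uint64_t radix_hash(const char *s, uint64_t a)
{
    uint64_t h = 0;
    for (; *s; s++)
        h = h * a + (uint8_t)*s;
    return h;
}
```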
=== Rolling hash ===
In some applications, such as substring search, one can compute a hash function h for every k-character substring of a given n-character string by advancing a window of width k characters along the string, where k is a fixed integer, and n > k. The straightforward solution, which is to extract such a substring at every character position in the text and compute h separately, requires a number of operations proportional to k·n. However, with the proper choice of h, one can use the technique of rolling hash to compute all those hashes with an effort proportional to mk + n where m is the number of occurrences of the substring.
The most familiar algorithm of this type is Rabin-Karp with best and average case performance O(n+mk) and worst case O(n·k) (in all fairness, the worst case here is gravely pathological: both the text string and substring are composed of a repeated single character, such as t="AAAAAAAAAAA", and s="AAA"). The hash function used for the algorithm is usually the Rabin fingerprint, designed to avoid collisions in 8-bit character strings, but other suitable hash functions are also used.
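The rolling update can be sketched as follows (a plain polynomial hash modulo 2^64 is used here for clarity; the Rabin fingerprint proper works with polynomials over GF(2)):

```c
#include <stddef.h>
#include <stdint.h>

/* Hash of the first window of k characters, as a radix-a polynomial. */
static uint64_t window_hash(const char *s, size_t k, uint64_t a)
{
    uint64_t h = 0;
    for (size_t i = 0; i < k; i++)
        h = h * a + (uint8_t)s[i];
    return h;
}

/* Slide the window one character: remove 'out', append 'in'.  apk1 must be
   a^(k-1) mod 2^64, precomputed once per window size, so each slide is O(1). */
static uint64_t roll(uint64_t h, uint8_t out, uint8_t in,
                     uint64_t apk1, uint64_t a)
{
    return (h - (uint64_t)out * apk1) * a + in;
}
```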
=== Fuzzy hash ===
=== Perceptual hash ===
== Analysis ==
Worst case results for a hash function can be assessed two ways: theoretical and practical. The theoretical worst case is the probability that all keys map to a single slot. The practical worst case is the expected longest probe sequence (hash function + collision resolution method). This analysis considers uniform hashing, that is, any key will map to any particular slot with probability 1/m, a characteristic of universal hash functions.
While Knuth worries about adversarial attack on real-time systems, Gonnet has shown that the probability of such a case is "ridiculously small". His result was that the probability of k of n keys mapping to a single slot is α^k / (e^α k!), where α is the load factor, n/m.
== History ==
The term hash offers a natural analogy with its non-technical meaning (to chop up or make a mess out of something), given how hash functions scramble their input data to derive their output. In his research for the precise origin of the term, Donald Knuth notes that, while Hans Peter Luhn of IBM appears to have been the first to use the concept of a hash function in a memo dated January 1953, the term itself did not appear in published literature until the late 1960s, in Herbert Hellerman's Digital Computer System Principles, even though it was already widespread jargon by then.
== See also ==
== Notes ==
== References ==
== External links ==
The Goulburn Hashing Function (PDF) by Mayur Patel
Hash Function Construction for Textual and Geometrical Data Retrieval (PDF) Latest Trends on Computers, Vol.2, pp. 483–489, CSCC Conference, Corfu, 2010
The Knuth–Bendix completion algorithm (named after Donald Knuth and Peter Bendix) is a semi-decision algorithm for transforming a set of equations (over terms) into a confluent term rewriting system. When the algorithm succeeds, it effectively solves the word problem for the specified algebra.
Buchberger's algorithm for computing Gröbner bases is a very similar algorithm. Although developed independently, it may also be seen as the instantiation of the Knuth–Bendix algorithm in the theory of polynomial rings.
== Introduction ==
For a set E of equations, its deductive closure (⁎⟷E) is the set of all equations that can be derived by applying equations from E in any order.
Formally, E is considered a binary relation, (⟶E) is its rewrite closure, and (⁎⟷E) is the equivalence closure of (⟶E).
For a set R of rewrite rules, its deductive closure (⁎⟶R ∘ ⁎⟵R) is the set of all equations that can be confirmed by applying rules from R left-to-right to both sides until they are literally equal.
Formally, R is again viewed as a binary relation, (⟶R) is its rewrite closure, (⟵R) is its converse, and (⁎⟶R ∘ ⁎⟵R) is the relation composition of their reflexive transitive closures (⁎⟶R and ⁎⟵R).
For example, if E = {1⋅x = x, x⁻¹⋅x = 1, (x⋅y)⋅z = x⋅(y⋅z)} are the group axioms, the derivation chain
a⁻¹⋅(a⋅b) ⁎⟷E (a⁻¹⋅a)⋅b ⁎⟷E 1⋅b ⁎⟷E b
demonstrates that a⁻¹⋅(a⋅b) ⁎⟷E b is a member of E's deductive closure.
If R = { 1⋅x → x, x⁻¹⋅x → 1, (x⋅y)⋅z → x⋅(y⋅z) } is a "rewrite rule" version of E, the derivation chains
(a⁻¹⋅a)⋅b ⁎⟶R 1⋅b ⁎⟶R b and b ⁎⟵R b
demonstrate that (a⁻¹⋅a)⋅b ⁎⟶R∘⁎⟵R b is a member of R's deductive closure.
However, there is no way to derive a⁻¹⋅(a⋅b) ⁎⟶R∘⁎⟵R b similarly, since a right-to-left application of the rule (x⋅y)⋅z → x⋅(y⋅z) is not allowed.
The Knuth–Bendix algorithm takes a set E of equations between terms, and a reduction ordering (>) on the set of all terms, and attempts to construct a confluent and terminating term rewriting system R that has the same deductive closure as E.
While proving consequences from E often requires human intuition, proving consequences from R does not.
For more details, see Confluence (abstract rewriting)#Motivating examples, which gives an example proof from group theory, performed both using E and using R.
== Rules ==
Given a set E of equations between terms, the following inference rules can be used to transform it into an equivalent convergent term rewrite system (if possible):
They are based on a user-given reduction ordering (>) on the set of all terms; it is lifted to a well-founded ordering (▻) on the set of rewrite rules by defining (s → t) ▻ (l → r) if
s >e l in the encompassment ordering, or
s and l are literally similar and t > r.
== Example ==
The following example run, obtained from the E theorem prover, computes a completion of the (additive) group axioms as in Knuth, Bendix (1970).
It starts with the three initial equations for the group (neutral element 0, inverse elements, associativity), using f(X,Y) for X+Y, and i(X) for −X.
The 10 starred equations turn out to constitute the resulting convergent rewrite system.
"pm" is short for "paramodulation", implementing deduce. Critical pair computation is an instance of paramodulation for equational unit clauses.
"rw" is rewriting, implementing compose, collapse, and simplify.
Orienting of equations is done implicitly and not recorded.
See also Word problem (mathematics) for another presentation of this example.
== String rewriting systems in group theory ==
An important case in computational group theory is string rewriting systems which can be used to give canonical labels to elements or cosets of a finitely presented group as products of the generators. This special case is the focus of this section.
=== Motivation in group theory ===
The critical pair lemma states that a term rewriting system is locally confluent (or weakly confluent) if and only if all its critical pairs are convergent. Furthermore, we have Newman's lemma which states that if an (abstract) rewriting system is strongly normalizing and weakly confluent, then the rewriting system is confluent. So, if we can add rules to the term rewriting system in order to force all critical pairs to be convergent while maintaining the strong normalizing property, then this will force the resultant rewriting system to be confluent.
Consider a finitely presented monoid M = ⟨X ∣ R⟩
where X is a finite set of generators and R is a set of defining relations on X. Let X* be the set of all words in X (i.e. the free monoid generated by X). Since the relations R generate an equivalence relation on X*, one can consider elements of M to be the equivalence classes of X* under R. For each class {w1, w2, ... } it is desirable to choose a standard representative wk. This representative is called the canonical or normal form for each word wk in the class. If there is a computable method to determine for each wk its normal form wi then the word problem is easily solved. A confluent rewriting system allows one to do precisely this.
Although the choice of a canonical form can theoretically be made in an arbitrary fashion this approach is generally not computable. (Consider that an equivalence relation on a language can produce an infinite number of infinite classes.) If the language is well-ordered then the order < gives a consistent method for defining minimal representatives, however computing these representatives may still not be possible. In particular, if a rewriting system is used to calculate minimal representatives then the order < should also have the property:
A < B → XAY < XBY for all words A,B,X,Y
This property is called translation invariance. An order that is both translation-invariant and a well-order is called a reduction order.
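For words over a finite alphabet, shortlex (compare lengths first, break ties lexicographically) is the standard example of a reduction order; a comparison sketch:

```c
#include <string.h>

/* Shortlex: shorter words come first; equal lengths fall back to the
   lexicographic order on the generators.  Shortlex is a well-order and is
   translation-invariant, hence a reduction order. */
static int shortlex_cmp(const char *a, const char *b)
{
    size_t la = strlen(a), lb = strlen(b);
    if (la != lb)
        return la < lb ? -1 : 1;
    return strcmp(a, b);
}
```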
From the presentation of the monoid it is possible to define a rewriting system given by the relations R. If A = B is a relation in R then either A < B, in which case B → A is a rule in the rewriting system; otherwise A > B and A → B is a rule. Since < is a reduction order, a given word W can be reduced W > W_1 > ... > W_n, where W_n is irreducible under the rewriting system. However, depending on the rules that are applied at each step W_i → W_(i+1), it is possible to end up with two different irreducible reductions W_n ≠ W′_m of W. However, if the rewriting system given by the relations is converted to a confluent rewriting system via the Knuth–Bendix algorithm, then all reductions are guaranteed to produce the same irreducible word, namely the normal form for that word.
=== Description of the algorithm for finitely presented monoids ===
Suppose we are given a presentation ⟨X ∣ R⟩, where X is a set of generators and R is a set of relations giving the rewriting system. Suppose further that we have a reduction ordering < among the words generated by X (e.g., shortlex order). For each relation P_i = Q_i in R, suppose Q_i < P_i. Thus we begin with the set of reductions P_i → Q_i.
First, if any relation P_i = Q_i can be reduced, replace P_i and Q_i with the reductions.
Next, we add more reductions (that is, rewriting rules) to eliminate possible exceptions of confluence. Suppose that P_i and P_j overlap.
Case 1: either the prefix of P_i equals the suffix of P_j, or vice versa. In the former case, we can write P_i = BC and P_j = AB; in the latter case, P_i = AB and P_j = BC.
Case 2: either P_i is completely contained in (surrounded by) P_j, or vice versa. In the former case, we can write P_i = B and P_j = ABC; in the latter case, P_i = ABC and P_j = B.
Reduce the word ABC using P_i first, then using P_j first. Call the results r_1 and r_2, respectively. If r_1 ≠ r_2, then we have an instance where confluence could fail. Hence, add the reduction max(r_1, r_2) → min(r_1, r_2) to R.
After adding a rule to R, remove any rules in R that might have reducible left sides (after checking if such rules have critical pairs with other rules).
Repeat the procedure until all overlapping left sides have been checked.
=== Examples ===
==== A terminating example ====
Consider the monoid ⟨x, y ∣ x³ = y³ = (xy)³ = 1⟩.
We use the shortlex order. This is an infinite monoid but nevertheless, the Knuth–Bendix algorithm is able to solve the word problem.
Our beginning three reductions are therefore
(1) x³ → 1,
(2) y³ → 1,
(3) xyxyxy → 1.
A suffix of x³ (namely x) is a prefix of (xy)³ = xyxyxy, so consider the overlapped word x³yxyxy. Reducing using (1), we get yxyxy. Reducing using (3), we get x². Hence, we get yxyxy = x², giving the reduction rule
(4) yxyxy → x².
Similarly, using xyxyxy³ and reducing using (2) and (3), we get xyxyx = y². Hence the reduction
(5) xyxyx → y².
Both of these rules make (3) obsolete, so we remove it.
Next, consider x³yxyx, formed by overlapping (1) and (5). Reducing, we get yxyx = x²y², so we add the rule
(6) yxyx → x²y².
Considering xyxyx³, formed by overlapping (1) and (5), we get xyxy = y²x², so we add the rule
(7) xyxy → y²x².
These make rules (4) and (5) obsolete, so we remove them.
Now, we are left with the rewriting system
(1) x³ → 1,
(2) y³ → 1,
(6) yxyx → x²y²,
(7) xyxy → y²x².
Checking the overlaps of these rules, we find no potential failures of confluence. Therefore, we have a confluent rewriting system, and the algorithm terminates successfully.
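Since the surviving rules never lengthen a word (x³ → 1, y³ → 1, yxyx → x²y², xyxy → y²x²), normal forms can be computed in place by repeated substitution. A naive sketch, with words over {x, y} encoded as C strings:

```c
#include <string.h>

/* The confluent system from the example, as string substitutions. */
static const char *lhs[] = { "xxx", "yyy", "yxyx", "xyxy" };
static const char *rhs[] = { "",    "",    "xxyy", "yyxx" };

/* Rewrite w until no left-hand side occurs.  Confluence guarantees the
   result is the same normal form whatever order the rules fire in. */
static void reduce(char *w)
{
    for (int changed = 1; changed; ) {
        changed = 0;
        for (int i = 0; i < 4; i++) {
            char *p = strstr(w, lhs[i]);
            if (p) {
                size_t ll = strlen(lhs[i]), lr = strlen(rhs[i]);
                memmove(p + lr, p + ll, strlen(p + ll) + 1);
                memcpy(p, rhs[i], lr);
                changed = 1;
            }
        }
    }
}
```

For instance, xyxyxy (that is, (xy)³) reduces to the empty word, confirming the relation (xy)³ = 1 and thereby solving instances of the word problem by comparing normal forms.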
==== A non-terminating example ====
The order of the generators may crucially affect whether the Knuth–Bendix completion terminates. As an example, consider the free Abelian group given by the monoid presentation
⟨x, y, x⁻¹, y⁻¹ ∣ xy = yx, xx⁻¹ = x⁻¹x = yy⁻¹ = y⁻¹y = 1⟩.
The Knuth–Bendix completion with respect to the lexicographic order x < x⁻¹ < y < y⁻¹ finishes with a convergent system; however, with the length-lexicographic order x < y < x⁻¹ < y⁻¹ it does not finish, for there are no finite convergent systems compatible with this latter order.
== Generalizations ==
If Knuth–Bendix does not succeed, it will either run forever and produce successive approximations to an infinite complete system, or fail when it encounters an unorientable equation (i.e. an equation that it cannot turn into a rewrite rule). An enhanced version will not fail on unorientable equations and produces a ground confluent system, providing a semi-algorithm for the word problem.
The notion of logged rewriting discussed in the paper by Heyworth and Wensley listed below allows some recording or logging of the rewriting process as it proceeds. This is useful for computing identities among relations for presentations of groups.
== References ==
D. Knuth; P. Bendix (1970). J. Leech (ed.). Simple Word Problems in Universal Algebras (PDF). Pergamon Press. pp. 263–297.
Gérard Huet (1981). "A Complete Proof of Correctness of the Knuth-Bendix Completion Algorithm" (PDF). J. Comput. Syst. Sci. 23 (1): 11–21. doi:10.1016/0022-0000(81)90002-7.
C. Sims. 'Computations with finitely presented groups.' Cambridge, 1994.
Anne Heyworth and C.D. Wensley. "Logged rewriting and identities among relators." Groups St. Andrews 2001 in Oxford. Vol. I, 256–276, London Math. Soc. Lecture Note Ser., 304, Cambridge Univ. Press, Cambridge, 2003.
== External links ==
Weisstein, Eric W. "Knuth–Bendix Completion Algorithm". MathWorld.
Knuth-Bendix Completion Visualizer
Fermat (named after Pierre de Fermat) is a computer algebra system developed by Prof. Robert H. Lewis of Fordham University. It can work on integers (of arbitrary size), rational numbers, real numbers, complex numbers, modular numbers, finite field elements, multivariable polynomials, rational functions, or polynomials modulo other polynomials. The main areas of application are multivariate rational function arithmetic and matrix algebra over rings of multivariate polynomials or rational functions. Fermat does not do simplification of transcendental functions or symbolic integration.
A session with Fermat usually starts by choosing rational or modular "mode" to establish the ground field (or ground ring) F as ℤ or ℤ/n. On top of this may be attached any number of symbolic variables t_1, t_2, …, t_n, thereby creating the polynomial ring F[t_1, t_2, …, t_n] and its quotient field. Further, some polynomials p, q, … involving some of the t_i can be chosen to mod out with, creating the quotient ring F(t_1, t_2, …)/(p, q, …). Finally, it is possible to allow Laurent polynomials, those with negative as well as positive exponents. Once the computational ring is established in this way, all computations are of elements of this ring. The computational ring can be changed later in the session.
The polynomial gcd procedures, which call each other in a highly recursive manner, are about 7000 lines of code.
Fermat has extensive built-in primitives for array and matrix manipulations, such as submatrix, sparse matrix, determinant, normalize, column reduce, row echelon, Smith normal form, and matrix inverse. It is consistently faster than some well known computer algebra systems, especially in multivariate polynomial gcd. It is also space efficient.
The basic data item in Fermat is a multivariate rational function or quolynomial. The numerator and denominator are polynomials with no common factor. Polynomials are implemented recursively as general linked lists, unlike some systems that implement polynomials as lists of monomials. To implement (most) finite fields, the user finds an irreducible monic polynomial in a symbolic variable, say p(t_1), and commands Fermat to mod out by it. This may be continued recursively, q(t_2, t_1), etc. Low-level data structures are set up to facilitate arithmetic and gcd over this newly created ground field. Two special fields, GF(2⁸) and GF(2¹⁶), are more efficiently implemented at the bit level.
== History ==
With Windows 10, and thanks to Bogdan Radu, it is now possible (May 2021) to run Fermat Linux natively on Windows. See the main web page http://home.bway.net/lewis
Fermat was last updated on 20 May 2020 (Mac and Linux; latest Windows version: 1 November 2011).
In an earlier version, called FFermat (Float Fermat), the basic number type is floating point numbers of 18 digits. That version allows for numerical computing techniques, has extensive graphics capabilities, no sophisticated polynomial gcd algorithms, and is available only for Mac OS 9.
Fermat was originally written in Pascal for a DEC VAX, then for the classic Mac OS during 1985–1996. It was ported to Microsoft Windows in 1998. In 2003 it was translated into C and ported to Linux (Intel machines) and Unix (Sparc/Sun). It is about 98,000 lines of C code.
The FFermat and (old) Windows Fermat Pascal source code have been made available to the public under a restrictive license.
The manual was extensively revised and updated on 25 July 2011 (latest small revision in June 2016, apparently another revision on 25 March 2020).
== See also ==
Comparison of computer algebra systems
== References ==
== External links ==
Official website
Windows Fermat Pascal source code
Float Fermat Pascal source code
Robert H. Lewis at academia.edu
Magma is a computer algebra system designed to solve problems in algebra, number theory, geometry and combinatorics. It is named after the algebraic structure magma. It runs on Unix-like operating systems, as well as Windows.
== Introduction ==
Magma is produced and distributed by the Computational Algebra Group within the Sydney School of Mathematics and Statistics at the University of Sydney.
In late 2006, the book Discovering Mathematics with Magma was published by Springer as volume 19 of the Algorithms and Computations in Mathematics series.
The Magma system is used extensively within pure mathematics. The Computational Algebra Group maintain a list of publications that cite Magma, and as of 2010 there are about 2600 citations, mostly in pure mathematics, but also including papers from areas as diverse as economics and geophysics.
== History ==
The predecessor of the Magma system was named Cayley (1982–1993), after Arthur Cayley.
Magma was officially released in August 1993 (version 1.0). Version 2.0 of Magma was released in June 1996 and subsequent versions of 2.X have been released approximately once per year.
In 2013, the Computational Algebra Group finalized an agreement with the Simons Foundation, whereby the Simons Foundation will underwrite all costs of providing Magma to all U.S. nonprofit, non-governmental scientific research or educational institutions. All students, researchers and faculty associated with a participating institution will be able to access Magma for free, through that institution.
== Mathematical areas covered by the system ==
Group theory
Magma includes permutation, matrix, finitely presented, soluble, abelian (finite or infinite), polycyclic, braid and straight-line program groups. Several databases of groups are also included.
Number theory
Magma contains asymptotically fast algorithms for all fundamental integer and polynomial operations, such as the Schönhage–Strassen algorithm for fast multiplication of integers and polynomials. Integer factorization algorithms include the Elliptic Curve Method, the Quadratic sieve and the Number field sieve.
Algebraic number theory
Magma includes the KANT computer algebra system for comprehensive computations in algebraic number fields. A special type also allows one to compute in the algebraic closure of a field.
Module theory and linear algebra
Magma contains asymptotically fast algorithms for all fundamental dense matrix operations, such as Strassen multiplication.
Sparse matrices
Magma contains the structured Gaussian elimination and Lanczos algorithms for reducing sparse systems which arise in index calculus methods, while Magma uses Markowitz pivoting for several other sparse linear algebra problems.
Lattices and the LLL algorithm
Magma has a provable implementation of fpLLL, which is an LLL algorithm for integer matrices which uses floating point numbers for the Gram–Schmidt coefficients, but such that the result is rigorously proven to be LLL-reduced.
Commutative algebra and Gröbner bases
Magma has an efficient implementation of the Faugère F4 algorithm for computing Gröbner bases.
Representation theory
Magma has extensive tools for computing in representation theory, including the computation of character tables of finite groups and the Meataxe algorithm.
Invariant theory
Magma has a type for invariant rings of finite groups, for which one can compute primary, secondary and fundamental invariants, and compute with the module structure.
Lie theory
Algebraic geometry
Arithmetic geometry
Finite incidence structures
Cryptography
Coding theory
Optimization
== See also ==
Comparison of computer algebra systems
== References ==
== External links ==
Official website
Magma Free Online Calculator
Magma's High Performance for computing Gröbner Bases (2004)
Magma's High Performance for computing Hermite Normal Forms of integer matrices
Magma V2.12 is apparently "Overall Best in the World at Polynomial GCD" :-)
Magma example code
List of publications citing Magma (in German) | Wikipedia/Magma_(computer_algebra_system)
Axiom is a free, general-purpose computer algebra system. It consists of an interpreter environment, a compiler and a library, which defines a strongly typed hierarchy.
== History ==
Two computer algebra systems named Scratchpad were developed by IBM. The first one was started in 1965 by James Griesmer at the request of Ralph Gomory, and written in Fortran. The development of this software was stopped before any public release. The second Scratchpad, originally named Scratchpad II, was developed from 1977 on, at Thomas J. Watson Research Center, under the direction of Richard Dimick Jenks.
The design is principally due to Richard D. Jenks (IBM Research), James H. Davenport (University of Bath), Barry M. Trager (IBM Research), David Y.Y. Yun (Southern Methodist University) and Victor S. Miller (IBM Research). Early consultants on the project were David Barton (University of California, Berkeley) and James W. Thatcher (IBM Research). Implementation included Robert Sutor (IBM Research), Scott C. Morrison (University of California, Berkeley), Christine J. Sundaresan (IBM Research), Timothy Daly (IBM Research), Patrizia Gianni (University of Pisa), Albrecht Fortenbacher (Universitaet Karlsruhe), Stephen M. Watt (IBM Research and University of Waterloo), Josh Cohen (Yale University), Michael Rothstein (Kent State University), Manuel Bronstein (IBM Research), Michael Monagan (Simon Fraser University), Jonathan Steinbach (IBM Research), William Burge (IBM Research), Jim Wen (IBM Research), William Sit (City College of New York), and Clifton Williamson (IBM Research).
Scratchpad II was renamed Axiom when IBM decided, circa 1990, to make it a commercial product. A few years later, it was sold to NAG. In 2001, it was withdrawn from the market and re-released under the Modified BSD License. Since then, the project's lead developer has been Tim Daly.
In 2007, Axiom was forked twice, originating two different open-source projects: OpenAxiom and FriCAS, following "serious disagreement about project goals". The Axiom project continued to be developed by Tim Daly.
The current research direction is "Proving Axiom Sane", that is, logical, rational, judicious, and sound.
== Documentation ==
Axiom is a literate program. Its source code is being published as a set of volumes, available on the www.nongnu.org/axiom website; these volumes contain the actual source code of the system.
The currently available documents are:
Combined Table of Contents
Volume 0: Axiom Jenks and Sutor—The main textbook
Volume 1: Axiom Tutorial—A simple introduction
Volume 2: Axiom Users Guide—Detailed examples of domain use (incomplete)
Volume 3: Axiom Programmers Guide—Guided examples of program writing (incomplete)
Volume 4: Axiom Developers Guide—Short essays on developer-specific topics (incomplete)
Volume 5: Axiom Interpreter—Source code for Axiom interpreter (incomplete)
Volume 6: Axiom Command—Source code for system commands and scripts (incomplete)
Volume 7: Axiom Hyperdoc—Source code and explanation of X11 Hyperdoc help browser
Volume 7.1 Axiom Hyperdoc Pages—Source code for Hyperdoc pages
Volume 8: Axiom Graphics—Source code for X11 Graphics subsystem
Volume 8.1 Axiom Gallery—A Gallery of Axiom images
Volume 9: Axiom Compiler—Source code for Spad compiler (incomplete)
Volume 10: Axiom Algebra Implementation—Essays on implementation issues (incomplete)
Volume 10.1: Axiom Algebra Theory—Essays containing background theory
Volume 10.2: Axiom Algebra Categories—Source code for Axiom categories
Volume 10.3: Axiom Algebra Domains—Source code for Axiom domains
Volume 10.4: Axiom Algebra Packages—Source code for Axiom packages
Volume 10.5: Axiom Algebra Numerics—Source code for Axiom numerics
Volume 11: Axiom Browser—Source pages for Axiom Firefox browser front end
Volume 12: Axiom Crystal—Source code for Axiom Crystal front end (incomplete)
Volume 13: Proving Axiom Correct—Prove Axiom Algebra (incomplete)
Volume 15: The Axiom SANE Compiler
Bibliography: Axiom Bibliography—Literature references
Bug List: Axiom Bug List—List of known bugs
Reference Card: Axiom Reference Card—Useful function summary
== Videos ==
The Axiom project has a major focus on providing documentation. Recently the project announced the first in a series of instructional videos, which are also available on the www.nongnu.org/axiom website. The first video provides details on the Axiom information sources.
== Philosophy ==
The Axiom project focuses on the “30 Year Horizon”. The primary philosophy is that Axiom needs to develop several fundamental features in order to be useful to the next generation of computational mathematicians. Knuth's literate programming technique is used throughout the source code. Axiom plans to use proof technology (such as Coq and ACL2) to prove the correctness of its algorithms.
Binary AXIOM packages are available for installation on a wide variety of platforms, such as Debian GNU/Linux.
== Design ==
In Axiom, each object has a type. Examples of types are mathematical structures (such as rings, fields, polynomials) as well as data structures from computer science (e.g., lists, trees, hash tables).
A function can take a type as argument, and its return value can also be a type. For example, Fraction is a function that takes an IntegralDomain as argument and returns the field of fractions of its argument. As another example, the ring of 4×4 matrices with rational entries would be constructed as SquareMatrix(4, Fraction Integer). Of course, when working in this domain, 1 is interpreted as the identity matrix and A^-1 would give the inverse of the matrix A, if it exists.
Several operations can have the same name, and the types of both the arguments and the result are used to determine which operation is applied (cf. function overloading).
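Axiom's Fraction constructor and SquareMatrix domain have no direct Python analogue, but the underlying idea of computing exactly in a field of fractions can be sketched with Python's fractions.Fraction (an illustration only, not Axiom code; the example uses a 2×2 matrix for brevity):

```python
from fractions import Fraction

# Exact inverse of a 2x2 integer matrix over the field of fractions,
# mirroring what Axiom's SquareMatrix(n, Fraction Integer) domain provides.
A = [[Fraction(1), Fraction(2)],
     [Fraction(3), Fraction(4)]]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]      # determinant = -2
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]

print(A_inv)
# [[Fraction(-2, 1), Fraction(1, 1)], [Fraction(3, 2), Fraction(-1, 2)]]
```

Because every entry stays an exact rational, no rounding occurs; this is the benefit of working over a fraction field rather than floating-point numbers.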
Axiom comes with an extension language called SPAD. All the mathematical knowledge of Axiom is written in this language. The interpreter accepts roughly the same language.
== Features ==
Within the interpreter environment, Axiom uses type inference and a heuristic algorithm to make explicit type annotations mostly unnecessary.
It features 'HyperDoc', an interactive browser-like help system, and can display two and three dimensional graphics, also providing interactive features like rotation and lighting. It also has a specialized interaction mode for Emacs, as well as a plugin for the TeXmacs editor.
Axiom has an implementation of the Risch algorithm for elementary integration, written by Manuel Bronstein and Barry Trager. While this implementation can find most elementary antiderivatives and decide whether one exists, some branches of the algorithm are not implemented, and Axiom raises an error when such cases are encountered during integration.
== See also ==
A# programming language
Aldor programming language
List of computer algebra systems
== References ==
== Further reading ==
James H. Griesmer; Richard D. Jenks (1971). SCRATCHPAD/1: An interactive facility for symbolic mathematics | Proceedings of the second ACM symposium on Symbolic and algebraic manipulation (SYMSAC '71). pp. 42–58.
Clemens G. Raab; Michael F. Singer (2022). Integration in Finite Terms: Fundamental Sources. Springer. ISBN 978-3030987664.
Richard D. Jenks (1971). META/PLUS - The Syntax Extension Facility for SCRATCHPAD (Research report). IBM Thomas J. Watson Research Center. RC 3259.
James H. Griesmer; Richard D. Jenks (1972). Experience with an online symbolic mathematics system | Proceedings of the ONLINE72 Conference. Vol. 1. Brunel University. pp. 457–476.
James H. Griesmer; Richard D. Jenks (1972). "Scratchpad". ACM SIGPLAN Notices. 7 (10): 93–102. doi:10.1145/942576.807019.
Richard D. Jenks (1974). "The SCRATCHPAD language". ACM SIGSAM Bulletin. 8 (2): 20–30. doi:10.1145/1086830.1086834. S2CID 14537956.
Arthur C. Norman (1975). "Computing with Formal Power Series". ACM Transactions on Mathematical Software. 1 (4): 346–356. doi:10.1145/355656.355660. ISSN 0098-3500. S2CID 18321863.
Richard D. Jenks (1976). A pattern compiler | Proceedings of the third ACM symposium on Symbolic and algebraic manipulation (SYMSAC '76). pp. 60–65.
E. Lueken (1977). Überlegungen zur Implementierung eines Formelmanipulationssystems [Considerations on the implementation of a formula manipulation system] (Master's thesis) (in German). Germany: Technische Universität Carolo-Wilhelmina zu Braunschweig.
George E. Andrews (1984). Ramanujan and SCRATCHPAD | Proceedings of the 1984 MACSYMA Users' Conference. Schenectady: General Electric. pp. 383–408.
James H. Davenport; P. Gianni; Richard D. Jenks; V. Miller; Scott Morrison; M. Rothstein; C. Sundaresan; Robert S. Sutor; Barry Trager (1984). Scratchpad. Mathematical Sciences Department, IBM Thomas J. Watson Research Center.
Richard D. Jenks (1984). "The New SCRATCHPAD Language and System for Computer Algebra". Proceedings of the 1984 MACSYMA Users' Conference: 409–416.
Richard D. Jenks (1984). A primer: 11 keys to New Scratchpad | Proceedings of International Symposium on Symbolic and Algebraic Computation '84. Springer. pp. 123–147.
Robert S. Sutor (1985). The Scratchpad II Computer Algebra Language and System | Proceedings of International Symposium on Symbolic and Algebraic Computation '85. Springer. pp. 32–33.
Rüdiger Gebauer; H. Michael Möller (1986). Buchberger's algorithm and staggered linear bases | Proceedings of the fifth ACM symposium on Symbolic and algebraic computation (International Symposium on Symbolic and Algebraic Computation '86). ACM. pp. 218–221. ISBN 978-0-89791-199-3.
Richard D. Jenks; Robert S. Sutor; Stephen M. Watt (1986). Scratchpad II: an abstract datatype system for mathematical computation (Research report). IBM Thomas J. Watson Research Center. RC 12327.
Michael Lucks; Bruce W. Char (1986). A fast implementation of polynomial factorization | Proceedings of SYMSAC '86. ACM. pp. 228–232. ISBN 978-0-89791-199-3.
J. Purtilo (1986). Applications of a software interconnection system in mathematical problem solving environments | Proceedings of SYMSAC '86. ACM. pp. 16–23. ISBN 978-0-89791-199-3.
William H. Burge; Stephen M. Watt (1987). Infinite Structure in SCRATCHPAD II (Research report). IBM Thomas J. Watson Research Center. RC 12794.
Pascale Sénéchaud; Françoise Siebert; Gilles Villard (1987). Scratchpad II: Présentation d'un nouveau langage de calcul formel. TIM (Research report) (in French). IMAG, Grenoble Institute of Technology. 640-M.
Robert S. Sutor; Richard D. Jenks (1987). "The type inference and coercion facilities in the scratchpad II interpreter". Papers of the Symposium on Interpreters and interpretive techniques - SIGPLAN '87. pp. 56–63. doi:10.1145/29650.29656. ISBN 978-0-89791-235-8. S2CID 17700911.
George E. Andrews (1988). R. Janssen (ed.). Application of SCRATCHPAD to problems in special functions and combinatorics | Trends in Computer Algebra. Lecture Notes in Computer Science. Springer. pp. 159–166.
James H. Davenport; Yvon Siret; Evelyne Tournier (1993) [1988]. Computer Algebra: Systems and Algorithms for Algebraic Computation. Academic Press. ISBN 978-0122042300.
Rüdiger Gebauer; H. Michael Möller (1988). "On an installation of Buchberger's algorithm". Journal of Symbolic Computation. 6 (2–3): 275–286. doi:10.1016/s0747-7171(88)80048-8. ISSN 0747-7171.
Fritz Schwarz (1988). R. Janssen (ed.). Programming with abstract data types: the symmetry package (SPDE) in Scratchpad | Trends in Computer Algebra. Lecture Notes in Computer Science. Springer. pp. 167–176.
David Shannon; Moss Sweedler (1988). "Using Gröbner bases to determine algebra membership, split surjective algebra homomorphisms determine birational equivalence". Journal of Symbolic Computation. 6 (2–3): 267–273. doi:10.1016/s0747-7171(88)80047-6.
Hans-J. Boehm (1989). "Type inference in the presence of type abstraction". ACM SIGPLAN Notices. 24 (7): 192–206. doi:10.1145/74818.74835.
Manuel Bronstein (1989). Simplification of real elementary functions | Proceedings of the International Symposium on Symbolic and Algebraic Computation (SIGSAM '89). ACM. pp. 207–211.
Claire Dicrescenzo; Dominique Duval (1989). P. Gianni (ed.). Algebraic extensions and algebraic closure in Scratchpad II | Symbolic and Algebraic Computation. Springer. pp. 440–446.
Timothy Daly "Axiom -- Thirty Years of Lisp"
Timothy Daly "Axiom" Invited Talk, Free Software Conference, Lyon, France, May, 2002
Timothy Daly "Axiom" Invited Talk, Libre Software Meeting, Metz, France, July 9–12, 2003
== External links ==
Media related to Axiom (computer algebra software) at Wikimedia Commons
Axiom Homepage
Source code repositories: GNU Savannah
Jenks, R.D. and Sutor, R. "Axiom, The Scientific Computation System"
Daly, T. "Axiom Volume 1: Tutorial"
Software forks:
OpenAxiom (SourceForge)
FriCAS (SourceForge) | Wikipedia/Axiom_(computer_algebra_system) |
Engineering Equation Solver (EES) is a commercial software package used for the solution of systems of simultaneous non-linear equations. It provides many specialized functions and equations for the solution of thermodynamics and heat transfer problems, making it a widely used program among mechanical engineers working in these fields. EES stores thermodynamic property data and performs the iterative solving itself, eliminating the tedious and time-consuming task of looking up thermodynamic properties by hand: the user's code simply calls the built-in property functions at the specified thermodynamic state.
EES also includes parametric tables that allow the user to compare a number of variables at a time; parametric tables can also be used to generate plots. EES can integrate, both as a command in code and in tables. It also provides optimization tools that minimize or maximize a chosen variable by varying a number of other variables. Lookup tables can be created to store information that can be accessed by a call in the code. EES code allows the user to input equations in any order and obtain a solution, and can contain if-then statements, which can be nested to create if-then-else constructs. Users can write functions for use in their code, as well as procedures, which are functions with multiple outputs.
Adjusting the preferences allows the user to choose a unit system; specify stop criteria, including the number of iterations; and enable or disable unit checking and unit recommendations, among other options. Users can also specify guess values and variable limits to aid the iterative solving process and help EES quickly and successfully find a solution.
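EES's solver is proprietary, but the kind of simultaneous nonlinear solving it performs from user-supplied guess values can be sketched with a plain Newton iteration for two unknowns (an illustration, not EES's actual algorithm):

```python
def newton_system(F, J, x, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 for two unknowns by Newton's method,
    starting from the guess values in x."""
    for _ in range(max_iter):
        f1, f2 = F(x)
        (a, b), (c, d) = J(x)                 # 2x2 Jacobian
        det = a * d - b * c
        dx = (d * f1 - b * f2) / det          # solve J * step = F (Cramer's rule)
        dy = (a * f2 - c * f1) / det
        x = (x[0] - dx, x[1] - dy)
        if abs(f1) < tol and abs(f2) < tol:
            return x
    return x

# Example system: x^2 + y^2 = 4 and x*y = 1, from the guess (2, 0.5)
F = lambda p: (p[0] ** 2 + p[1] ** 2 - 4, p[0] * p[1] - 1)
J = lambda p: ((2 * p[0], 2 * p[1]), (p[1], p[0]))
sol = newton_system(F, J, (2.0, 0.5))
print(sol)
```

As in EES, a good initial guess matters: Newton's method converges quickly near a solution but can diverge or find a different root from a poor starting point, which is why EES lets users set guess values and variable limits.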
The program is developed by F-Chart Software, a commercial spin-off of Prof. Sanford A. Klein of the Department of Mechanical Engineering at the University of Wisconsin–Madison.
EES is included as attached software for a number of undergraduate thermodynamics, heat-transfer and fluid mechanics textbooks from McGraw-Hill.
It integrates closely with the dynamic system simulation package TRNSYS, by some of the same authors.
== References ==
== External links ==
Official site | Wikipedia/Engineering_Equation_Solver |
GAP (Groups, Algorithms and Programming) is an open source computer algebra system for computational discrete algebra with particular emphasis on computational group theory.
== History ==
GAP was developed at Lehrstuhl D für Mathematik (LDFM), Rheinisch-Westfälische Technische Hochschule Aachen, Germany from 1986 to 1997. After the retirement of Joachim Neubüser from the chair of LDFM, the development and maintenance of GAP was coordinated by the School of Mathematical and Computational Sciences at the University of St Andrews, Scotland. In the summer of 2005 coordination was transferred to an equal partnership of four 'GAP Centres', located at the University of St Andrews, RWTH Aachen, Technische Universität Braunschweig, and Colorado State University at Fort Collins; in April 2020, a fifth GAP Centre located at the TU Kaiserslautern was added.
== Features ==
GAP contains a procedural programming language and a large collection of functions to create and manipulate various mathematical objects. It supports integers and rational numbers of arbitrary size, memory permitting. Finite groups can be defined as groups of permutations, and it is also possible to define finitely presented groups by specifying generators and relations. Several databases of important finite groups are included. GAP also allows users to work with matrices and with finite fields (which are represented using Conway polynomials). Rings, modules and Lie algebras are also supported.
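In GAP's own language one would write something like Group((1,2), (1,2,3)) to build a permutation group from generators. As a language-neutral illustration (not GAP code), the closure of a generating set of permutations under composition can be sketched in Python, with permutations on {0, ..., n-1} stored as tuples:

```python
from itertools import product

def generated_group(gens):
    """Return the group generated by a set of permutations,
    each given as a tuple p with p[i] the image of i."""
    group = set(gens)
    frontier = set(gens)
    while frontier:
        # compose every known element with every newly found one
        new = {tuple(p[q[i]] for i in range(len(q)))
               for p, q in product(group, frontier)} - group
        group |= new
        frontier = new
    return group

# S3 from a transposition (0 1) and a 3-cycle (0 1 2)
s3 = generated_group({(1, 0, 2), (1, 2, 0)})
print(len(s3))  # 6
```

GAP of course uses far more sophisticated machinery (stabilizer chains, the Schreier–Sims algorithm) so that it can handle permutation groups far too large to enumerate element by element.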
== Distribution ==
GAP and its sources, including packages (sets of user contributed programs), data library (including a list of small groups) and the manual, are distributed freely, subject to "copyleft" conditions. GAP runs on any Unix system, under Windows, and on Macintosh systems. The standard distribution requires about 300 MB (about 400 MB if all the packages are loaded).
The user contributed packages are an important feature of the system, adding a great deal of functionality. GAP offers package authors the opportunity to submit these packages for a process of peer review, hopefully improving the quality of the final packages, and providing recognition akin to an academic publication for their authors. As of March 2021, there are 151 packages distributed with GAP, of which approximately 71 have been through this process.
An interface is available for using the SINGULAR computer algebra system from within GAP. GAP is also included in the mathematical software system SageMath.
== Sample session ==
=== Permutation group ===
=== Euclidean ring ===
== See also ==
Comparison of computer algebra systems
== References ==
== External links ==
Official website
Gap-system on GitHub | Wikipedia/GAP_(computer_algebra_system) |
The differential analyser is a mechanical analogue computer designed to solve differential equations by integration, using wheel-and-disc mechanisms to perform the integration. It was one of the first advanced computing devices to be used operationally.
In addition to the integrator devices, the machine used an epicyclic differential mechanism to perform addition or subtraction, similar to that used on a front-wheel-drive car, where the speed of the two output shafts (driving the wheels) may differ but the speeds add up to the speed of the input shaft. Multiplication and division by integer values were achieved by simple gear ratios; multiplication by fractional values was achieved by means of a multiplier table, where a human operator had to keep a stylus tracking the slope of a bar. A variant of this human-operated table was used to implement other functions such as polynomials.
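The interconnection scheme can be illustrated in software: wiring two integrators in a feedback loop solves y'' = −y (simple harmonic motion), exactly the kind of setup Kelvin described. The following discrete-time sketch is a numerical idealization, not a model of any particular machine:

```python
import math

def analyser(y0=0.0, v0=1.0, dt=1e-4, t_end=math.pi):
    """Two chained 'integrators' wired for y'' = -y: the output y of the
    second integrator is fed back, negated, into the first, whose
    output v = y' drives the second."""
    y, v = y0, v0
    for _ in range(int(t_end / dt)):
        v += -y * dt      # first integrator accumulates -y
        y += v * dt       # second integrator accumulates v
    return y

print(analyser())  # close to sin(pi) = 0
```

With y(0) = 0 and y'(0) = 1 the exact solution is y = sin(t), so integrating to t = π should return nearly zero; on the mechanical machine the same accumulation was performed continuously by wheel-and-disc integrators rather than in time steps.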
== History ==
Research on solutions for differential equations using mechanical devices, discounting planimeters, started at least as early as 1836, when the French physicist Gaspard-Gustave Coriolis designed a mechanical device to integrate differential equations of the first order.
The first description of a device which could integrate differential equations of any order was published in 1876 by James Thomson, who was born in Belfast in 1822, but lived in Scotland from the age of 10. Though Thomson called his device an "integrating machine", it is his description of the device, together with the additional publication in 1876 of two further descriptions by his younger brother, Lord Kelvin, which represents the invention of the differential analyser.
One of the earliest practical uses of Thomson's concepts was a tide-predicting machine built by Kelvin starting in 1872–3. On Lord Kelvin's advice, Thomson's integrating machine was later incorporated into a fire-control system for naval gunnery being developed by Arthur Pollen, resulting in an electrically driven, mechanical analogue computer, which was completed by about 1912. Italian mathematician Ernesto Pascal also developed integraphs for the mechanical integration of differential equations and published details in 1914.
However, the first widely practical general-purpose differential analyser was constructed by Harold Locke Hazen and Vannevar Bush at MIT, 1928–1931, comprising six mechanical integrators. In the same year, Bush described this machine in a journal article as a "continuous integraph". When he published a further article on the device in 1931, he called it a "differential analyzer". In this article, Bush stated that "[the] present device incorporates the same basic idea of interconnection of integrating units as did [Lord Kelvin's]. In detail, however, there is little resemblance to the earlier model." According to his 1970 autobiography, Bush was "unaware of Kelvin’s work until after the first differential analyzer was operational." Claude Shannon was hired as a research assistant in 1936 to run the differential analyzer in Bush's lab.
Douglas Hartree of Manchester University brought Bush's design to England, where he constructed his first "proof of concept" model with his student, Arthur Porter, during 1934. As a result of this, the university acquired a full-scale machine incorporating four mechanical integrators in March 1935, which was built by Metropolitan-Vickers, and was, according to Hartree, "[the] first machine of its kind in operation outside the United States". During the next five years three more were added, at Cambridge University, Queen's University Belfast, and the Royal Aircraft Establishment in Farnborough. One of the integrators from this proof of concept is on display in the History of Computing section of the Science Museum in London, alongside a complete Manchester machine.
In Norway, the locally built Oslo Analyser was finished during 1938, based on the same principles as the MIT machine. This machine had 12 integrators, and was the largest analyser built for a period of four years.
In the United States, further differential analysers were built at the Ballistic Research Laboratory in Maryland and in the basement of the Moore School of Electrical Engineering at the University of Pennsylvania during the early 1940s. The latter was used extensively in the computation of artillery firing tables prior to the invention of the ENIAC, which, in many ways, was modelled on the differential analyser. Also in the early 1940s, with Samuel H. Caldwell, one of the initial contributors during the early 1930s, Bush attempted an electrical, rather than mechanical, variation, but the digital computer built elsewhere had much greater promise and the project ceased. In 1947, UCLA installed a differential analyser built for them by General Electric at a cost of $125,000. By 1950, this machine had been joined by three more. The UCLA differential analyzer appeared in 1950's Destination Moon, and the same footage in 1951's When Worlds Collide, where it was called "DA". A different shot appears in 1956's Earth vs. the Flying Saucers.
At Osaka Imperial University (present-day Osaka University) around 1944, a complete differential analyser was developed to calculate the movement of an object and other problems with mechanical components, drawing the results as graphs on paper with a pen. It was later transferred to the Tokyo University of Science and has been displayed at the school's Museum of Science in Shinjuku Ward. Restored in 2014, it is one of only two still-operational differential analysers produced before the end of World War II.
In Canada, a differential analyser was constructed at the University of Toronto in 1948 by Beatrice Helen Worsley, but it appears to have had little or no use.
A differential analyser may have been used in the development of the bouncing bomb, used to attack German hydroelectric dams during World War II. Differential analysers have also been used in the calculation of soil erosion by river control authorities.
The differential analyser was eventually rendered obsolete by electronic analogue computers and, later, digital computers.
== Use of Meccano ==
The model differential analyser built at Manchester University in 1934 by Douglas Hartree and Arthur Porter made extensive use of Meccano parts: this meant that the machine was less costly to build, and it proved "accurate enough for the solution of many scientific problems". A similar machine built by J.B. Bratt at Cambridge University in 1935 is now in the Museum of Transport and Technology (MOTAT) collection in Auckland, New Zealand. A memorandum written for the British military's Armament Research Department in 1944 describes how this machine had been modified during World War II for improved reliability and enhanced capability, and identifies its wartime applications as including research on the flow of heat, explosive detonations, and simulations of transmission lines.
It has been estimated, by Garry Tee that "about 15 Meccano model Differential Analysers were built for serious work by scientists and researchers around the world".
== See also ==
Torque amplifier
Ball-and-disk integrator
General purpose analog computer
== Notes ==
== Bibliography ==
Thomson, James (1876). "An Integrating Machine having a new Kinematic Principle". Proceedings of the Royal Society. 24 (164–170): 262–5. doi:10.1098/rspl.1875.0033.
Thomson, William (1876). "Mechanical Integration of Linear Differential Equations of the Second Order with Variable Coefficients". Proceedings of the Royal Society. 24 (164–170): 269–71. doi:10.1098/rspl.1875.0035. S2CID 62694536.
Thomson, William (1876). "Mechanical Integration of the general Linear Differential Equation of any Order with Variable Coefficients". Proceedings of the Royal Society. 24 (164–170): 271–5. doi:10.1098/rspl.1875.0036.
Bush, Vannevar (1936). "Instrumental analysis". Bulletin of the American Mathematical Society. 42 (10): 649–69. doi:10.1090/S0002-9904-1936-06390-1.
Hartree, D. R.; Porter, Arthur (1934–1935), "The construction and operation of a model differential analyser", Memoirs and Proceedings of the Manchester Literary and Philosophical Society, 79: 51–73, reprinted as a pamphlet July 1935
Worsley, Beatrice Helen (1947). A mathematical survey of computing devices with an appendix on an error analysis of differential analyzers (Master's Thesis, MIT).
Crank, J. (1947). The Differential Analyser, London: Longmans, Green (this is the only book that describes how to set up and operate a mechanical differential analyser).
MacNee, A.B. (1948). An electronic differential analyzer (RLE, Technical Report 90, MIT. Note that this paper describes a very early electronic analogue computer, not a mechanical differential analyser: it is included because the author clearly felt that the only way to introduce such an innovation was to describe it as an "electronic differential analyser").
== External links ==
Vannevar Bush bio which focuses on the Differential Analyzer
The Differential Analyser Explained (updated July 2009)
Tim Robinson's Meccano Differential Analyser
Professor Stephen Boyd at Stanford University provides a brief explanation of its working. | Wikipedia/Differential_analyser |
In mathematics, a rational function is any function that can be defined by a rational fraction, which is an algebraic fraction such that both the numerator and the denominator are polynomials. The coefficients of the polynomials need not be rational numbers; they may be taken in any field K. In this case, one speaks of a rational function and a rational fraction over K. The values of the variables may be taken in any field L containing K. Then the domain of the function is the set of the values of the variables for which the denominator is not zero, and the codomain is L.
The set of rational functions over a field K is a field, the field of fractions of the ring of the polynomial functions over K.
== Definitions ==
A function f is called a rational function if it can be written in the form

    f(x) = P(x) / Q(x)

where P and Q are polynomial functions of x and Q is not the zero function. The domain of f is the set of all values of x for which the denominator Q(x) is not zero.
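As a computational aside, the definition translates directly into code: evaluate the two polynomials and reject points where the denominator vanishes. The helper names below are illustrative, not from any particular library:

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial with coefficients [a0, a1, a2, ...]
    (constant term first) via Horner's rule."""
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a
    return result

def rational(p_coeffs, q_coeffs):
    """Return f(x) = P(x)/Q(x), raising ValueError outside the domain."""
    def f(x):
        q = poly_eval(q_coeffs, x)
        if q == 0:
            raise ValueError(f"{x} is not in the domain (Q(x) = 0)")
        return poly_eval(p_coeffs, x) / q
    return f

f = rational([1, 0, 1], [-1, 1])   # f(x) = (x^2 + 1) / (x - 1)
print(f(3))  # 5.0
```

Here f(3) = 10/2 = 5, while f(1) raises an error because Q(1) = 0, mirroring the exclusion of zeros of the denominator from the domain.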
However, if P and Q have a non-constant polynomial greatest common divisor R, then setting P = P1·R and Q = Q1·R produces a rational function

    f1(x) = P1(x) / Q1(x),

which may have a larger domain than f, and is equal to f on the domain of f. It is a common usage to identify f and f1, that is, to extend "by continuity" the domain of f to that of f1.
Indeed, one can define a rational fraction as an equivalence class of fractions of polynomials, where two fractions A(x)/B(x) and C(x)/D(x) are considered equivalent if A(x)D(x) = B(x)C(x). In this case P(x)/Q(x) is equivalent to P1(x)/Q1(x).
A proper rational function is a rational function in which the degree of P(x) is less than the degree of Q(x) and both are real polynomials, named by analogy to a proper fraction in Q (the rational numbers).
=== Complex rational functions ===
In complex analysis, a rational function

    f(z) = P(z) / Q(z)

is the ratio of two polynomials with complex coefficients, where Q is not the zero polynomial and P and Q have no common factor (this avoids f taking the indeterminate value 0/0).
The domain of f is the set of complex numbers z such that Q(z) ≠ 0.
Every rational function can be naturally extended to a function whose domain and range are the whole Riemann sphere (complex projective line).
A complex rational function with degree one is a Möbius transformation.
Rational functions are representative examples of meromorphic functions.
Iteration of rational functions on the Riemann sphere (i.e. a rational mapping) creates discrete dynamical systems.
Julia sets for rational maps
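As a sketch of such a dynamical system (the map below is chosen for illustration), iterating the degree-3 rational map arising from Newton's method for z³ = 1 sends almost every starting point to one of the three cube roots of unity; the boundary between the three basins of attraction is the Julia set of the map:

```python
def newton_map(z: complex) -> complex:
    """One step of the rational map N(z) = (2z^3 + 1) / (3z^2),
    i.e. Newton's iteration for z^3 - 1 = 0."""
    return (2 * z ** 3 + 1) / (3 * z ** 2)

def basin(z: complex, iterations: int = 50) -> complex:
    """Iterate the map; generic starting points converge to a cube root of 1."""
    for _ in range(iterations):
        z = newton_map(z)
    return z

print(basin(1 + 1j))   # close to one of the three cube roots of unity
```

Colouring each point of the plane by which root its orbit reaches produces the familiar three-lobed Newton fractal, a standard example of a Julia set for a rational map.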
=== Degree ===
There are several non-equivalent definitions of the degree of a rational function.
Most commonly, the degree of a rational function is the maximum of the degrees of its constituent polynomials P and Q, when the fraction is reduced to lowest terms. If the degree of f is d, then the equation
f(z) = w
has d distinct solutions in z except for certain values of w, called critical values, where two or more solutions coincide or where some solution is rejected at infinity (that is, when the degree of the equation decreases after having cleared the denominator).
The degree of the graph of a rational function is not the degree as defined above: it is the maximum of the degree of the numerator and one plus the degree of the denominator.
In some contexts, such as in asymptotic analysis, the degree of a rational function is the difference between the degrees of the numerator and the denominator.
In network synthesis and network analysis, a rational function of degree two (that is, the ratio of two polynomials of degree at most two) is often called a biquadratic function.
== Examples ==
The rational function {\displaystyle f(x)={\frac {x^{3}-2x}{2(x^{2}-5)}}} is not defined at {\displaystyle x^{2}=5\Leftrightarrow x=\pm {\sqrt {5}}.}
It is asymptotic to {\displaystyle {\tfrac {x}{2}}} as {\displaystyle x\to \infty .}
The rational function {\displaystyle f(x)={\frac {x^{2}+2}{x^{2}+1}}} is defined for all real numbers, but not for all complex numbers, since if x were a square root of {\displaystyle -1} (i.e. the imaginary unit or its negative), then formal evaluation would lead to division by zero:
{\displaystyle f(i)={\frac {i^{2}+2}{i^{2}+1}}={\frac {-1+2}{-1+1}}={\frac {1}{0}},}
which is undefined.
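The undefined point can be observed directly in any language with complex arithmetic. The sketch below is a plain-Python illustration (the helper name `f` is ours): evaluating at a real argument succeeds, while evaluating at the imaginary unit raises a division-by-zero error.

```python
# Evaluate f(x) = (x^2 + 2) / (x^2 + 1) numerically; at x = i the
# denominator i^2 + 1 vanishes and Python raises ZeroDivisionError.
def f(z):
    return (z**2 + 2) / (z**2 + 1)

print(f(2.0))        # defined on the reals: (4 + 2) / (4 + 1) = 1.2

try:
    f(1j)            # i^2 + 1 == 0, so this divides by zero
except ZeroDivisionError as e:
    print("undefined at i:", e)
```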
A constant function such as f(x) = π is a rational function since constants are polynomials. The function itself is rational, even though the value of f(x) is irrational for all x.
Every polynomial function {\displaystyle f(x)=P(x)} is a rational function with {\displaystyle Q(x)=1.}
A function that cannot be written in this form, such as {\displaystyle f(x)=\sin(x),} is not a rational function. However, the adjective "irrational" is not generally used for functions.
Every Laurent polynomial can be written as a rational function while the converse is not necessarily true, i.e., the ring of Laurent polynomials is a subring of the rational functions.
The rational function {\displaystyle f(x)={\tfrac {x}{x}}} is equal to 1 for all x except 0, where there is a removable singularity. The sum, product, or quotient (excepting division by the zero polynomial) of two rational functions is itself a rational function. However, the process of reduction to standard form may inadvertently result in the removal of such singularities unless care is taken. Using the definition of rational functions as equivalence classes gets around this, since x/x is equivalent to 1/1.
== Taylor series ==
The coefficients of a Taylor series of any rational function satisfy a linear recurrence relation, which can be found by equating the rational function to a Taylor series with indeterminate coefficients, and collecting like terms after clearing the denominator.
For example, {\displaystyle {\frac {1}{x^{2}-x+2}}=\sum _{k=0}^{\infty }a_{k}x^{k}.}
Multiplying through by the denominator and distributing,
{\displaystyle 1=(x^{2}-x+2)\sum _{k=0}^{\infty }a_{k}x^{k}}
{\displaystyle 1=\sum _{k=0}^{\infty }a_{k}x^{k+2}-\sum _{k=0}^{\infty }a_{k}x^{k+1}+2\sum _{k=0}^{\infty }a_{k}x^{k}.}
After adjusting the indices of the sums to get the same powers of x, we get
{\displaystyle 1=\sum _{k=2}^{\infty }a_{k-2}x^{k}-\sum _{k=1}^{\infty }a_{k-1}x^{k}+2\sum _{k=0}^{\infty }a_{k}x^{k}.}
Combining like terms gives
{\displaystyle 1=2a_{0}+(2a_{1}-a_{0})x+\sum _{k=2}^{\infty }(a_{k-2}-a_{k-1}+2a_{k})x^{k}.}
Since this holds true for all x in the radius of convergence of the original Taylor series, we can compute as follows. Since the constant term on the left must equal the constant term on the right it follows that
{\displaystyle a_{0}={\frac {1}{2}}.}
Then, since there are no powers of x on the left, all of the coefficients on the right must be zero, from which it follows that
{\displaystyle a_{1}={\frac {1}{4}}}
{\displaystyle a_{k}={\frac {1}{2}}(a_{k-1}-a_{k-2})\quad {\text{for}}\ k\geq 2.}
Conversely, any sequence that satisfies a linear recurrence determines a rational function when used as the coefficients of a Taylor series. This is useful in solving such recurrences, since by using partial fraction decomposition we can write any proper rational function as a sum of factors of the form 1 / (ax + b) and expand these as geometric series, giving an explicit formula for the Taylor coefficients; this is the method of generating functions.
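The recurrence above can be run with exact rational arithmetic to generate as many Taylor coefficients as needed. This is an illustrative sketch (the function name `taylor_coeffs` is ours, not a standard API):

```python
from fractions import Fraction

def taylor_coeffs(n):
    """First n Taylor coefficients of 1/(x^2 - x + 2) at x = 0, from the
    recurrence a_0 = 1/2, a_1 = 1/4, a_k = (a_{k-1} - a_{k-2}) / 2."""
    a = [Fraction(1, 2), Fraction(1, 4)]
    while len(a) < n:
        a.append((a[-1] - a[-2]) / 2)
    return a[:n]

coeffs = taylor_coeffs(8)
assert coeffs[0] == Fraction(1, 2) and coeffs[1] == Fraction(1, 4)
# Sanity check: for k >= 2 the coefficient of x^k in
# (x^2 - x + 2) * (Taylor series) must vanish.
for k in range(2, 8):
    assert coeffs[k - 2] - coeffs[k - 1] + 2 * coeffs[k] == 0
```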
== Abstract algebra ==
In abstract algebra the concept of a polynomial is extended to include formal expressions in which the coefficients of the polynomial can be taken from any field. In this setting, given a field F and some indeterminate X, a rational expression (also known as a rational fraction or, in algebraic geometry, a rational function) is any element of the field of fractions of the polynomial ring F[X]. Any rational expression can be written as the quotient of two polynomials P/Q with Q ≠ 0, although this representation isn't unique. P/Q is equivalent to R/S, for polynomials P, Q, R, and S, when PS = QR. However, since F[X] is a unique factorization domain, there is a unique representation for any rational expression P/Q with P and Q polynomials of lowest degree and Q chosen to be monic. This is similar to how a fraction of integers can always be written uniquely in lowest terms by canceling out common factors.
The field of rational expressions is denoted F(X). This field is said to be generated (as a field) over F by (a transcendental element) X, because F(X) does not contain any proper subfield containing both F and the element X.
=== Notion of a rational function on an algebraic variety ===
Like polynomials, rational expressions can also be generalized to n indeterminates X1,..., Xn, by taking the field of fractions of F[X1,..., Xn], which is denoted by F(X1,..., Xn).
An extended version of the abstract idea of rational function is used in algebraic geometry. There the function field of an algebraic variety V is formed as the field of fractions of the coordinate ring of V (more accurately said, of a Zariski-dense affine open set in V). Its elements f are considered as regular functions in the sense of algebraic geometry on non-empty open sets U, and also may be seen as morphisms to the projective line.
== Applications ==
Rational functions are used in numerical analysis for interpolation and approximation of functions, for example the Padé approximants introduced by Henri Padé. Approximations in terms of rational functions are well suited for computer algebra systems and other numerical software. Like polynomials, they can be evaluated straightforwardly, and at the same time they express more diverse behavior than polynomials.
Rational functions are used to approximate or model more complex equations in science and engineering including fields and forces in physics, spectroscopy in analytical chemistry, enzyme kinetics in biochemistry, electronic circuitry, aerodynamics, medicine concentrations in vivo, wave functions for atoms and molecules, optics and photography to improve image resolution, and acoustics and sound.
In signal processing, the Laplace transform (for continuous systems) or the z-transform (for discrete-time systems) of the impulse response of commonly-used linear time-invariant systems (filters) with infinite impulse response are rational functions over complex numbers.
== See also ==
Partial fraction decomposition
Partial fractions in integration
Function field of an algebraic variety
Algebraic fractions – a generalization of rational functions that allows taking integer roots
== References ==
== Further reading ==
"Rational function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. (2007), "Section 3.4. Rational Function Interpolation and Extrapolation", Numerical Recipes: The Art of Scientific Computing (3rd ed.), Cambridge University Press, ISBN 978-0-521-88068-8
== External links ==
Dynamic visualization of rational functions with JSXGraph
In mathematics, and more specifically in computer algebra, computational algebraic geometry, and computational commutative algebra, a Gröbner basis is a particular kind of generating set of an ideal in a polynomial ring
{\displaystyle K[x_{1},\ldots ,x_{n}]} over a field {\displaystyle K}. A Gröbner basis allows many important properties of the ideal and the associated algebraic variety to be deduced easily, such as the dimension and the number of zeros when it is finite. Gröbner basis computation is one of the main practical tools for solving systems of polynomial equations and computing the images of algebraic varieties under projections or rational maps.
Gröbner basis computation can be seen as a multivariate, non-linear generalization of both Euclid's algorithm for computing polynomial greatest common divisors, and Gaussian elimination for linear systems.
Gröbner bases were introduced by Bruno Buchberger in his 1965 Ph.D. thesis, which also included an algorithm to compute them (Buchberger's algorithm). He named them after his advisor Wolfgang Gröbner. In 2007, Buchberger received the Association for Computing Machinery's Paris Kanellakis Theory and Practice Award for this work.
However, the Russian mathematician Nikolai Günther had introduced a similar notion in 1913, published in various Russian mathematical journals. These papers were largely ignored by the mathematical community until their rediscovery in 1987 by Bodo Renschuch et al. An analogous concept for multivariate power series was developed independently by Heisuke Hironaka in 1964, who named them standard bases. This term has been used by some authors to also denote Gröbner bases.
The theory of Gröbner bases has been extended by many authors in various directions. It has been generalized to other structures such as polynomials over principal ideal rings or polynomial rings, and also some classes of non-commutative rings and algebras, like Ore algebras.
== Tools ==
=== Polynomial ring ===
Gröbner bases are primarily defined for ideals in a polynomial ring {\displaystyle R=K[x_{1},\ldots ,x_{n}]} over a field K. Although the theory works for any field, most Gröbner basis computations are done either when K is the field of rationals or the integers modulo a prime number.
In the context of Gröbner bases, a nonzero polynomial in {\displaystyle R=K[x_{1},\ldots ,x_{n}]} is commonly represented as a sum {\displaystyle c_{1}M_{1}+\cdots +c_{m}M_{m},}
where the {\displaystyle c_{i}} are nonzero elements of K, called coefficients, and the {\displaystyle M_{i}} are monomials (called power products by Buchberger and some of his followers) of the form {\displaystyle x_{1}^{a_{1}}\cdots x_{n}^{a_{n}},} where the {\displaystyle a_{i}} are nonnegative integers. The vector {\displaystyle A=[a_{1},\ldots ,a_{n}]} is called the exponent vector of the monomial. When the list {\displaystyle X=[x_{1},\ldots ,x_{n}]} of the variables is fixed, the notation of monomials is often abbreviated as {\displaystyle x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}=X^{A}.}
Monomials are uniquely defined by their exponent vectors, and, when a monomial ordering (see below) is fixed, a polynomial is uniquely represented by the ordered list of the ordered pairs formed by an exponent vector and the corresponding coefficient. This representation of polynomials is especially efficient for Gröbner basis computation in computers, although it is less convenient for other computations such as polynomial factorization and polynomial greatest common divisor.
If {\displaystyle F=\{f_{1},\ldots ,f_{k}\}} is a finite set of polynomials in the polynomial ring R, the ideal generated by F is the set of linear combinations of elements of F with coefficients in R; that is, the set of polynomials that can be written {\textstyle \sum _{i=1}^{k}g_{i}f_{i}} with {\displaystyle g_{1},\ldots ,g_{k}\in R.}
=== Monomial ordering ===
All operations related to Gröbner bases require the choice of a total order on the monomials, with the following properties of compatibility with multiplication. For all monomials M, N, P,
{\displaystyle M\leq N\Longleftrightarrow MP\leq NP}
{\displaystyle M\leq MP.}
A total order satisfying these conditions is sometimes called an admissible ordering.
These conditions imply that the order is a well-order, that is, every strictly decreasing sequence of monomials is finite.
Although Gröbner basis theory does not depend on a particular choice of an admissible monomial ordering, three monomial orderings are especially important for the applications:
Lexicographical ordering, commonly called lex or plex (for pure lexical ordering).
Total degree reverse lexicographical ordering, commonly called degrevlex.
Elimination ordering, lexdeg.
Gröbner basis theory was initially introduced for the lexicographical ordering. It was soon realised that the Gröbner basis for degrevlex is almost always much easier to compute, and that it is almost always easier to compute a lex Gröbner basis by first computing the degrevlex basis and then using a "change of ordering algorithm". When elimination is needed, degrevlex is not convenient; both lex and lexdeg may be used but, again, many computations are relatively easy with lexdeg and almost impossible with lex.
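As a concrete sketch (the helper names are ours), both lex and degrevlex can be realized as Python sort keys on exponent vectors, a monomial being larger when its key is larger:

```python
def lex_key(A):
    """Lexicographic order: compare exponent vectors left to right."""
    return tuple(A)

def degrevlex_key(A):
    """Total degree first; ties broken by *reverse* lexicographic
    comparison, where a smaller exponent in the last differing
    variable wins (hence the negated, reversed vector)."""
    return (sum(A), tuple(-a for a in reversed(A)))

# Degree-2 monomials in x, y, z as exponent vectors.
x2, xy, y2, xz, yz, z2 = (2,0,0), (1,1,0), (0,2,0), (1,0,1), (0,1,1), (0,0,2)

# Under degrevlex: x^2 > xy > y^2 > xz > yz > z^2.
assert sorted([z2, xz, y2, yz, xy, x2], key=degrevlex_key, reverse=True) == \
       [x2, xy, y2, xz, yz, z2]

# Under lex, x^2 beats any monomial with a smaller x-exponent,
# whatever its total degree.
assert lex_key((2, 0, 0)) > lex_key((1, 5, 0))
```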
=== Basic operations ===
==== Leading term, coefficient and monomial ====
Once a monomial ordering is fixed, the terms of a polynomial (product of a monomial with its nonzero coefficient) are naturally ordered by decreasing monomials (for this order). This makes the representation of a polynomial as a sorted list of pairs coefficient–exponent vector a canonical representation of the polynomials (that is, two polynomials are equal if and only if they have the same representation).
The first (greatest) term of a polynomial p for this ordering and the corresponding monomial and coefficient are respectively called the leading term, leading monomial and leading coefficient and denoted, in this article, lt(p), lm(p) and lc(p).
Most polynomial operations related to Gröbner bases involve the leading terms. So, the representation of polynomials as sorted lists makes these operations particularly efficient (reading the first element of a list takes a constant time, independently of the length of the list).
==== Polynomial operations ====
The other polynomial operations involved in Gröbner basis computations are also compatible with the monomial ordering; that is, they can be performed without reordering the result:
The addition of two polynomials consists in a merge of the two corresponding lists of terms, with a special treatment in the case of a conflict (that is, when the same monomial appears in the two polynomials).
The multiplication of a polynomial by a scalar consists of multiplying each coefficient by this scalar, without any other change in the representation.
The multiplication of a polynomial by a monomial m consists of multiplying each monomial of the polynomial by m. This does not change the term ordering by definition of a monomial ordering.
==== Divisibility of monomials ====
Let {\displaystyle M=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}} and {\displaystyle N=x_{1}^{b_{1}}\cdots x_{n}^{b_{n}}} be two monomials, with exponent vectors {\displaystyle A=[a_{1},\ldots ,a_{n}]} and {\displaystyle B=[b_{1},\ldots ,b_{n}].}
One says that M divides N, or that N is a multiple of M, if {\displaystyle a_{i}\leq b_{i}} for every i; that is, if A is componentwise not greater than B. In this case, the quotient {\textstyle {\frac {N}{M}}} is defined as {\textstyle {\frac {N}{M}}=x_{1}^{b_{1}-a_{1}}\cdots x_{n}^{b_{n}-a_{n}}.}
In other words, the exponent vector of {\textstyle {\frac {N}{M}}} is the componentwise subtraction of the exponent vectors of N and M.
The greatest common divisor gcd(M, N) of M and N is the monomial {\textstyle x_{1}^{\min(a_{1},b_{1})}\cdots x_{n}^{\min(a_{n},b_{n})}} whose exponent vector is the componentwise minimum of A and B. The least common multiple lcm(M, N) is defined similarly with max instead of min.
One has {\displaystyle \operatorname {lcm} (M,N)={\frac {MN}{\gcd(M,N)}}.}
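Because monomials are just exponent vectors, all of these operations reduce to componentwise integer arithmetic. A minimal sketch (the helper names are ours):

```python
def divides(A, B):
    """M = X^A divides N = X^B iff A is componentwise <= B."""
    return all(a <= b for a, b in zip(A, B))

def quotient(B, A):
    """Exponent vector of N / M, assuming divides(A, B)."""
    return tuple(b - a for a, b in zip(A, B))

def mono_gcd(A, B):
    """Componentwise minimum of the exponent vectors."""
    return tuple(min(a, b) for a, b in zip(A, B))

def mono_lcm(A, B):
    """Componentwise maximum of the exponent vectors."""
    return tuple(max(a, b) for a, b in zip(A, B))

# lcm(M, N) * gcd(M, N) == M * N, since min(a,b) + max(a,b) == a + b.
A, B = (3, 0, 2), (1, 4, 2)
assert tuple(l + g for l, g in zip(mono_lcm(A, B), mono_gcd(A, B))) == \
       tuple(a + b for a, b in zip(A, B))
assert divides(mono_gcd(A, B), A) and divides(A, mono_lcm(A, B))
```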
=== Reduction ===
The reduction of a polynomial by other polynomials with respect to a monomial ordering is central to Gröbner basis theory. It is a generalization of both the row reduction occurring in Gaussian elimination and the division steps of the Euclidean division of univariate polynomials. When completed as much as possible, it is sometimes called multivariate division, although its result is not uniquely defined.
Lead-reduction is a special case of reduction that is easier to compute. It is fundamental for Gröbner basis computation, since general reduction is needed only at the end of a Gröbner basis computation, for getting a reduced Gröbner basis from a non-reduced one.
Fix an admissible monomial ordering; every monomial comparison in this section refers to it.
A polynomial f is lead-reducible by another polynomial g if the leading monomial lm(f) is a multiple of lm(g). The polynomial f is reducible by g if some monomial of f is a multiple of lm(g). (So, if f is lead-reducible by g, it is also reducible, but f may be reducible without being lead-reducible.)
Suppose that f is reducible by g, and let cm be a term of f such that the monomial m is a multiple of lm(g). A one-step reduction of f by g consists of replacing f by
{\displaystyle \operatorname {red} _{1}(f,g)=f-{\frac {c}{\operatorname {lc} (g)}}\,{\frac {m}{\operatorname {lm} (g)}}\,g.}
This operation removes the monomial m from f without changing the terms with a monomial greater than m (for the monomial ordering). In particular, a one step lead-reduction of f produces a polynomial all of whose monomials are smaller than lm(f).
Given a finite set G of polynomials, one says that f is reducible or lead-reducible by G if it is reducible or lead-reducible, respectively, by at least one element g of G. In this case, a one-step reduction (resp. one-step lead-reduction) of f by G is any one-step reduction (resp. one-step lead-reduction) of f by an element of G.
The (complete) reduction (resp. lead-reduction) of f by G consists of iterating one-step reductions (resp. one-step lead-reductions) until getting a polynomial that is irreducible (resp. lead-irreducible) by G. It is sometimes called a normal form of f by G. In general this form is not uniquely defined because there are, in general, several elements of G that can be used for reducing f; this non-uniqueness is the starting point of Gröbner basis theory.
The definition of the reduction shows immediately that, if h is a normal form of f by G, one has
{\displaystyle f=h+\sum _{g\in G}q_{g}\,g,}
where h is irreducible by G and the {\displaystyle q_{g}} are polynomials such that {\displaystyle \operatorname {lm} (q_{g}\,g)\leq \operatorname {lm} (f).}
In the case of univariate polynomials, if G consists of a single element g, then h is the remainder of the Euclidean division of f by g, and qg is the quotient. Moreover, the division algorithm is exactly the process of lead-reduction. For this reason, some authors use the term multivariate division instead of reduction.
==== Non-uniqueness of reduction ====
In the example that follows, there are exactly two complete lead-reductions that produce two very different results. The fact that the results are irreducible (not only lead-irreducible) is specific to the example, although this is rather common with such small examples.
In this two-variable example, the monomial ordering that is used is the lexicographic order with {\displaystyle x>y,} and we consider the reduction of {\displaystyle f=2x^{3}-x^{2}y+y^{3}+3y} by {\displaystyle G=\{g_{1},g_{2}\},} with
{\displaystyle {\begin{aligned}g_{1}&=x^{2}+y^{2}-1,\\g_{2}&=xy-2.\end{aligned}}}
For the first reduction step, either the first or the second term of f may be reduced. However, the reduction of a term amounts to removing this term at the cost of adding new lower terms; if it is not the first reducible term that is reduced, it may occur that a further reduction adds a similar term, which must be reduced again. It is therefore always better to reduce first the largest (for the monomial order) reducible term; that is, in particular, to lead-reduce first until getting a lead-irreducible polynomial.
The leading term {\displaystyle 2x^{3}} of f is reducible by {\displaystyle g_{1}} and not by {\displaystyle g_{2}.} So the first reduction step consists of multiplying {\displaystyle g_{1}} by −2x and adding the result to f:
{\displaystyle f\;\xrightarrow {\overset {}{-2xg_{1}}} \;f_{1}=f-2xg_{1}=-x^{2}y-2xy^{2}+2x+y^{3}+3y.}
The leading term {\displaystyle -x^{2}y} of {\displaystyle f_{1}} is a multiple of the leading monomials of both {\displaystyle g_{1}} and {\displaystyle g_{2},} so one has two choices for the second reduction step. If one chooses {\displaystyle g_{2},} one gets a polynomial that can be reduced again by {\displaystyle g_{2}\colon }
{\displaystyle f\;\xrightarrow {\overset {}{-2xg_{1}}} \;f_{1}\;\xrightarrow {xg_{2}} \;-2xy^{2}+y^{3}+3y\;\xrightarrow {2yg_{2}} \;f_{2}=y^{3}-y.}
No further reduction is possible, so {\displaystyle f_{2}} is a complete reduction of f.
One gets a different result with the other choice for the second step:
{\displaystyle f\;\xrightarrow {\overset {}{-2xg_{1}}} \;f_{1}\;\xrightarrow {yg_{1}} \;-2xy^{2}+2x+2y^{3}+2y\;\xrightarrow {2yg_{2}} \;f_{3}=2x+2y^{3}-2y.}
Again, the result {\displaystyle f_{3}} is irreducible, although only lead-reductions were done.
In summary, the complete reduction of f can result in either {\displaystyle f_{2}=y^{3}-y} or {\displaystyle f_{3}=2x+2y^{3}-2y.}
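The arithmetic of the two reduction chains can be double-checked by evaluating each claimed identity at a grid of integer points, which is enough to distinguish polynomials of this degree. A quick sanity sketch (function names are ours):

```python
# f, g1, g2 from the example above, as plain Python functions.
def F(x, y):  return 2*x**3 - x**2*y + y**3 + 3*y
def G1(x, y): return x**2 + y**2 - 1
def G2(x, y): return x*y - 2

for x in range(-3, 4):
    for y in range(-3, 4):
        f1 = F(x, y) - 2*x*G1(x, y)          # first lead-reduction step
        # first choice: two further lead-reductions by g2 give f2 = y^3 - y
        assert f1 + x*G2(x, y) + 2*y*G2(x, y) == y**3 - y
        # second choice: reduce by g1, then by g2, giving f3 = 2x + 2y^3 - 2y
        assert f1 + y*G1(x, y) + 2*y*G2(x, y) == 2*x + 2*y**3 - 2*y
```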
It is for dealing with the problems raised by this non-uniqueness that Buchberger introduced Gröbner bases and S-polynomials. Intuitively, {\displaystyle 0=f-f} may be reduced to {\displaystyle f_{2}-f_{3}.}
This implies that {\displaystyle f_{2}-f_{3}} belongs to the ideal generated by G. So, this ideal is not changed by adding {\displaystyle f_{3}-f_{2}} to G, and this allows more reductions. In particular, {\displaystyle f_{3}} can be reduced to {\displaystyle f_{2}} by {\displaystyle f_{3}-f_{2}}, and this restores the uniqueness of the reduced form.
Here Buchberger's algorithm for Gröbner bases would begin by adding to G the polynomial {\displaystyle g_{3}=yg_{1}-xg_{2}=2x+y^{3}-y.}
This polynomial, called the S-polynomial by Buchberger, is the difference of the one-step reductions of the least common multiple {\displaystyle x^{2}y} of the leading monomials of {\displaystyle g_{1}} and {\displaystyle g_{2}}, by {\displaystyle g_{2}} and {\displaystyle g_{1}} respectively:
{\displaystyle g_{3}=\left(x^{2}y-{\frac {x^{2}y}{lt(g_{2})}}g_{2}\right)-\left(x^{2}y-{\frac {x^{2}y}{lt(g_{1})}}g_{1}\right)={\frac {x^{2}y}{lt(g_{1})}}g_{1}-{\frac {x^{2}y}{lt(g_{2})}}g_{2}}.
In this example, one has {\displaystyle g_{3}=f_{3}-f_{2}.}
This does not complete Buchberger's algorithm, as xy gives different results when reduced by {\displaystyle g_{2}} or {\displaystyle g_{3}.}
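The identity g3 = y·g1 − x·g2 = 2x + y³ − y, and the claim that it equals f3 − f2, can be sanity-checked by evaluation at integer points (a throwaway sketch, not part of any standard API):

```python
def G1(x, y): return x**2 + y**2 - 1
def G2(x, y): return x*y - 2

for x in range(-3, 4):
    for y in range(-3, 4):
        g3 = y*G1(x, y) - x*G2(x, y)       # the S-polynomial y*g1 - x*g2
        assert g3 == 2*x + y**3 - y
        f2 = y**3 - y                      # the two complete reductions of f
        f3 = 2*x + 2*y**3 - 2*y
        assert g3 == f3 - f2               # g3 = f3 - f2
```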
=== S-polynomial ===
Given a monomial ordering, the S-polynomial or critical pair of two polynomials f and g is the polynomial
{\displaystyle S(f,g)=\operatorname {red} _{1}(\mathrm {lcm} ,g)-\operatorname {red} _{1}(\mathrm {lcm} ,f),}
where lcm denotes the least common multiple of the leading monomials of f and g.
Using the definition of {\displaystyle \operatorname {red} _{1}}, this translates to:
{\displaystyle {\begin{aligned}S(f,g)&=\left(\mathrm {lcm} -{\frac {1}{\operatorname {lc} (g)}}\,{\frac {\mathrm {lcm} }{\operatorname {lm} (g)}}\,g\right)-\left(\mathrm {lcm} -{\frac {1}{\operatorname {lc} (f)}}\,{\frac {\mathrm {lcm} }{\operatorname {lm} (f)}}\,f\right)\\&={\frac {1}{\operatorname {lc} (f)}}\,{\frac {\mathrm {lcm} }{\operatorname {lm} (f)}}\,f-{\frac {1}{\operatorname {lc} (g)}}\,{\frac {\mathrm {lcm} }{\operatorname {lm} (g)}}\,g\\\end{aligned}}.}
Using the property that relates the lcm and the gcd, the S-polynomial can also be written as:
{\displaystyle S(f,g)={\frac {1}{\operatorname {lc} (f)}}\,{\frac {\operatorname {lm} (g)}{\mathrm {gcd} }}\,f-{\frac {1}{\operatorname {lc} (g)}}\,{\frac {\operatorname {lm} (f)}{\mathrm {gcd} }}\,g;}
where gcd denotes the greatest common divisor of the leading monomials of f and g.
As the monomials that are reducible by both f and g are exactly the multiples of lcm, one can deal with all cases of non-uniqueness of the reduction by considering only the S-polynomials. This is a fundamental fact for Gröbner basis theory and all algorithms for computing them.
To avoid fractions when dealing with polynomials with integer coefficients, the S-polynomial is often defined as
{\displaystyle S(f,g)=\operatorname {lc} (g)\,{\frac {\operatorname {lm} (g)}{\mathrm {gcd} }}\,f-\operatorname {lc} (f)\,{\frac {\operatorname {lm} (f)}{\mathrm {gcd} }}\,g.}
This changes nothing in the theory, since the two polynomials are associates.
== Definition ==
Let {\displaystyle R=F[x_{1},\ldots ,x_{n}]} be a polynomial ring over a field F. In this section, we suppose that an admissible monomial ordering has been fixed.
Let G be a finite set of polynomials in R that generates an ideal I. The set G is a Gröbner basis (with respect to the monomial ordering), or, more precisely, a Gröbner basis of I if
the ideal generated by the leading monomials of the polynomials in I equals the ideal generated by the leading monomials of G,
or, equivalently, the leading monomial of every nonzero polynomial in I is a multiple of the leading monomial of some element of G.
There are many characterizing properties, which can each be taken as an equivalent definition of Gröbner bases. For conciseness, in the following list, the notation "one-word/another word" means that one can take either "one-word" or "another word" for having two different characterizations of Gröbner bases. All the following assertions are characterizations of Gröbner bases:
Counting the above definition, this provides 12 characterizations of Gröbner bases. The fact that so many characterizations are possible makes Gröbner bases very useful. For example, condition 3 provides an algorithm for testing ideal membership; condition 4 provides an algorithm for testing whether a set of polynomials is a Gröbner basis and forms the basis of Buchberger's algorithm for computing Gröbner bases; conditions 5 and 6 allow computing in
{\displaystyle R/I} in a way that is very similar to modular arithmetic.
=== Existence ===
For every admissible monomial ordering and every finite set G of polynomials, there is a Gröbner basis that contains G and generates the same ideal. Moreover, such a Gröbner basis may be computed with Buchberger's algorithm.
This algorithm uses condition 4, and proceeds roughly as follows: for any two elements of G, compute the complete reduction by G of their S-polynomial, and add the result to G if it is not zero; repeat this operation with the new elements of G included until, eventually, all reductions produce zero.
The algorithm always terminates because of Dickson's lemma or because polynomial rings are Noetherian (Hilbert's basis theorem). Condition 4 ensures that the result is a Gröbner basis, and the definitions of S-polynomials and reduction ensure that the generated ideal is not changed.
The above method is an algorithm for computing Gröbner bases; however, it is very inefficient. Many improvements of the original Buchberger's algorithm, and several other algorithms have been proposed and implemented, which dramatically improve the efficiency. See § Algorithms and implementations, below.
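The procedure just described can be sketched in a few dozen lines of Python, storing a polynomial as a dictionary from exponent vectors to rational coefficients, with lex order given by tuple comparison. This is an illustrative toy under those assumptions (all names are ours), not an efficient implementation:

```python
from fractions import Fraction
from itertools import combinations

def lm(p):
    """Leading monomial under lex order (tuple comparison = lex)."""
    return max(p)

def sub_scaled(p, q, coef, shift):
    """Return p - coef * X^shift * q, dropping zero terms."""
    r = dict(p)
    for m, c in q.items():
        k = tuple(a + b for a, b in zip(m, shift))
        r[k] = r.get(k, Fraction(0)) - coef * c
        if r[k] == 0:
            del r[k]
    return r

def reduce_poly(f, G):
    """Complete reduction (multivariate division) of f by the set G."""
    h = dict(f)
    while True:
        for m in sorted(h, reverse=True):            # largest monomials first
            g = next((g for g in G
                      if all(a >= b for a, b in zip(m, lm(g)))), None)
            if g is not None:                        # m is reducible by g
                shift = tuple(a - b for a, b in zip(m, lm(g)))
                h = sub_scaled(h, g, h[m] / g[lm(g)], shift)
                break
        else:
            return h                                 # nothing left to reduce

def s_poly(f, g):
    """S-polynomial: (lcm/lt(f)) f - (lcm/lt(g)) g."""
    fm, gm = lm(f), lm(g)
    l = tuple(max(a, b) for a, b in zip(fm, gm))
    s = sub_scaled({}, f, Fraction(-1) / f[fm],
                   tuple(a - b for a, b in zip(l, fm)))
    return sub_scaled(s, g, Fraction(1) / g[gm],
                      tuple(a - b for a, b in zip(l, gm)))

def buchberger(F):
    """Naive Buchberger: add reduced S-polynomials until all reduce to 0."""
    G = [{m: Fraction(c) for m, c in f.items()} for f in F]
    pairs = list(combinations(range(len(G)), 2))
    while pairs:
        i, j = pairs.pop()
        r = reduce_poly(s_poly(G[i], G[j]), G)
        if r:                                        # nonzero remainder
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(r)
    return G

# The ideal <x^2 - y, x^3 - x> from the example below:
f = {(2, 0): 1, (0, 1): -1}                          # x^2 - y
g = {(3, 0): 1, (1, 0): -1}                          # x^3 - x
basis = buchberger([f, g])
# Every element of the ideal now reduces to zero, e.g. y^2 - y:
assert reduce_poly({(0, 2): Fraction(1), (0, 1): Fraction(-1)}, basis) == {}
```

The basis returned this way is neither minimal nor reduced; the post-processing of the next subsection would still have to be applied.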
=== Reduced Gröbner bases ===
A Gröbner basis is minimal if all leading monomials of its elements are irreducible by the other elements of the basis. Given a Gröbner basis of an ideal I, one gets a minimal Gröbner basis of I by removing the polynomials whose leading monomials are multiple of the leading monomial of another element of the Gröbner basis. However, if two polynomials of the basis have the same leading monomial, only one must be removed. So, every Gröbner basis contains a minimal Gröbner basis as a subset.
All minimal Gröbner bases of a given ideal (for a fixed monomial ordering) have the same number of elements, and the same leading monomials, and the non-minimal Gröbner bases have more elements than the minimal ones.
A Gröbner basis is reduced if every polynomial in it is irreducible by the other elements of the basis, and has 1 as leading coefficient. So, every reduced Gröbner basis is minimal, but a minimal Gröbner basis need not be reduced.
Given a Gröbner basis of an ideal I, one gets a reduced Gröbner basis of I by first removing the polynomials that are lead-reducible by other elements of the basis (for getting a minimal basis); then replacing each element of the basis by the result of the complete reduction by the other elements of the basis; and, finally, by dividing each element of the basis by its leading coefficient.
All reduced Gröbner bases of an ideal (for a fixed monomial ordering) are equal. It follows that two ideals are equal if and only if they have the same reduced Gröbner basis.
Sometimes, reduced Gröbner bases are defined without the condition on the leading coefficients. In this case, the uniqueness of reduced Gröbner bases is true only up to the multiplication of polynomials by a nonzero constant.
When working with polynomials over the field {\displaystyle \mathbb {Q} } of the rational numbers, it is useful to work only with polynomials with integer coefficients. In this case, the condition on the leading coefficients in the definition of a reduced basis may be replaced by the condition that all elements of the basis are primitive polynomials with integer coefficients, with positive leading coefficients. This restores the uniqueness of reduced bases.
=== Special cases ===
For every monomial ordering, the empty set of polynomials is the unique Gröbner basis of the zero ideal.
For every monomial ordering, a set of polynomials that contains a nonzero constant is a Gröbner basis of the unit ideal (the whole polynomial ring). Conversely, every Gröbner basis of the unit ideal contains a nonzero constant. The reduced Gröbner basis of the unit ideal is formed by the single polynomial 1.
In the case of polynomials in a single variable, there is a unique admissible monomial ordering, the ordering by degree. The minimal Gröbner bases are singletons, each consisting of a single polynomial; the reduced Gröbner bases are those whose polynomial is monic.
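In the univariate case, a Gröbner basis computation therefore amounts to a monic GCD computation. A minimal sketch using SymPy's `groebner` function (which returns reduced Gröbner bases):

```python
from sympy import symbols, groebner, gcd

x = symbols('x')

# In Q[x], the ideal <x**2 - 1, x**3 - 1> is generated by the GCD of
# its generators, so the reduced Groebner basis is the monic GCD.
G = groebner([x**2 - 1, x**3 - 1], x)

print(list(G.exprs))            # [x - 1]
print(gcd(x**2 - 1, x**3 - 1))  # x - 1
```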
== Example and counterexample ==
Let $R=\mathbb {Q} [x,y]$ be the ring of bivariate polynomials with rational coefficients and consider the ideal $I=\langle f,g\rangle $ generated by the polynomials
$f=x^{2}-y,\qquad g=x^{3}-x.$
By reducing g by f, one obtains a new polynomial k such that $I=\langle f,k\rangle :$
$k=g-xf=xy-x.$
Neither f nor k is reducible by the other, but xk is reducible by f, which gives another polynomial in I:
$h=xk-(y-1)f=y^{2}-y.$
Under the lexicographic ordering with $x>y$ we have
lt(f) = x^2
lt(k) = xy
lt(h) = y^2
As f, k and h belong to I, and none of them is reducible by the others, none of $\{f,k\},$ $\{f,h\},$ and $\{h,k\}$ is a Gröbner basis of I.
On the other hand, {f, k, h} is a Gröbner basis of I, since the S-polynomials
$\begin{aligned}yf-xk&=y(x^{2}-y)-x(xy-x)=f-h\\yk-xh&=y(xy-x)-x(y^{2}-y)=0\\y^{2}f-x^{2}h&=y(yf-xk)+x(yk-xh)\end{aligned}$
can be reduced to zero by f, k and h.
The method that has been used here for finding h and k, and for proving that {f, k, h} is a Gröbner basis, is a direct application of Buchberger's algorithm. So, it can be applied mechanically to any similar example, although, in general, there are many polynomials and S-polynomials to consider, and the computation is generally too large to be done without a computer.
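The computation above can be reproduced with any system implementing Buchberger's algorithm or one of its variants; for instance, with SymPy the reduced lexicographic Gröbner basis of I comes out as exactly {f, k, h}:

```python
from sympy import symbols, groebner

x, y = symbols('x y')
f = x**2 - y
g = x**3 - x

# Lexicographic ordering with x > y, as in the example above.
G = groebner([f, g], x, y, order='lex')

# The reduced basis is {f, k, h} = {x**2 - y, x*y - x, y**2 - y}.
print(list(G.exprs))
```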
== Properties and applications of Gröbner bases ==
Unless explicitly stated otherwise, all the results that follow hold for any monomial ordering (see Monomial order for the definitions of the different orderings that are mentioned below).
It is a common misconception that the lexicographical order is needed for some of these results. On the contrary, the lexicographical order is, almost always, the most difficult to compute, and using it makes impractical many computations that are relatively easy with graded reverse lexicographic order (grevlex), or, when elimination is needed, the elimination order (lexdeg) which restricts to grevlex on each block of variables.
=== Equality of ideals ===
Reduced Gröbner bases are unique for any given ideal and any monomial ordering. Thus two ideals are equal if and only if they have the same (reduced) Gröbner basis (Gröbner basis software usually produces reduced Gröbner bases).
=== Membership and inclusion of ideals ===
The reduction of a polynomial f by the Gröbner basis G of an ideal I yields 0 if and only if f is in I. This makes it possible to test the membership of an element in an ideal. Another method consists of verifying that the Gröbner basis of G ∪ {f} is equal to G.
To test whether the ideal I generated by f1, ..., fk is contained in the ideal J, it suffices to test that every fi is in J. One may also test the equality of the reduced Gröbner bases of J and J ∪ {f1, ..., fk}.
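Both membership tests can be run mechanically. A sketch with SymPy, reusing the ideal I = ⟨f, k⟩ of the example above; `GroebnerBasis.reduce` returns the quotients and the remainder of the multivariate division:

```python
from sympy import symbols, groebner

x, y = symbols('x y')
f = x**2 - y
k = x*y - x
G = groebner([f, k], x, y, order='lex')

# h = x*k - (y - 1)*f belongs to I, so its reduction by G yields 0.
h = y**2 - y
quotients, remainder = G.reduce(h)  # remainder == 0

in_ideal = G.contains(h)    # True
other = G.contains(x + 1)   # False: x + 1 does not vanish at (0, 0)
```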
=== Solutions of a system of algebraic equations ===
Any set of polynomials may be viewed as a system of polynomial equations by equating the polynomials to zero. The set of the solutions of such a system depends only on the generated ideal, and, therefore does not change when the given generating set is replaced by the Gröbner basis, for any ordering, of the generated ideal. Such a solution, with coordinates in an algebraically closed field containing the coefficients of the polynomials, is called a zero of the ideal. In the usual case of rational coefficients, this algebraically closed field is chosen as the complex field.
An ideal does not have any zero (the system of equations is inconsistent) if and only if 1 belongs to the ideal (this is Hilbert's Nullstellensatz), or, equivalently, if its Gröbner basis (for any monomial ordering) contains 1, or, also, if the corresponding reduced Gröbner basis is {1}.
Given a Gröbner basis G of an ideal I, the ideal has only a finite number of zeros if and only if, for each variable x, G contains a polynomial whose leading monomial is a power of x (without any other variable appearing in the leading term). If this is the case, then the number of zeros, counted with multiplicity, is equal to the number of monomials that are not multiples of any leading monomial of G. This number is called the degree of the ideal.
When the number of zeros is finite, the Gröbner basis for a lexicographical monomial ordering provides, theoretically, a solution: the first coordinate of a solution is a root of the greatest common divisor of polynomials of the basis that depend only on the first variable. After substituting this root in the basis, the second coordinate of this solution is a root of the greatest common divisor of the resulting polynomials that depend only on the second variable, and so on. This solving process is only theoretical, because it implies GCD computation and root-finding of polynomials with approximate coefficients, which are not practicable because of numeric instability. Therefore, other methods have been developed to solve polynomial systems through Gröbner bases (see System of polynomial equations for more details).
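A sketch of this in SymPy for the zero-dimensional ideal of the earlier example; with exact rational arithmetic the numerical-instability issue does not arise, and `solve` performs the back-substitution on the triangular lexicographic basis:

```python
from sympy import symbols, groebner, solve

x, y = symbols('x y')
G = groebner([x**2 - y, x*y - x], x, y, order='lex')

# The lex basis is triangular: its last element involves only y.
triangular = list(G.exprs)

# Back-substitution: roots of the last polynomial give y, then x.
solutions = solve(triangular, [x, y], dict=True)
# three zeros: (0, 0), (1, 1), (-1, 1)
```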
=== Dimension, degree and Hilbert series ===
The dimension of an ideal I in a polynomial ring R is the Krull dimension of the ring R/I and is equal to the dimension of the algebraic set of the zeros of I. It is also equal to the number of hyperplanes in general position that are needed to obtain an intersection with the algebraic set consisting of a finite number of points. The degree of the ideal and of its associated algebraic set is the number of points of this finite intersection, counted with multiplicity. In particular, the degree of a hypersurface is equal to the degree of its defining polynomial.
The dimension depends only on the set of the leading monomials of the Gröbner basis of the ideal, for any monomial ordering. The same is true for the degree if a degree-compatible monomial ordering is used; a monomial ordering is degree-compatible if, whenever a monomial is smaller for the degree, it is also smaller for the monomial ordering.
The dimension is the maximal size of a subset S of the variables such that there is no leading monomial depending only on the variables in S. Thus, if the ideal has dimension 0, then for each variable x there is a leading monomial in the Gröbner basis that is a power of x.
Both dimension and degree may be deduced from the Hilbert series of the ideal, which is the series $\textstyle \sum _{i=0}^{\infty }d_{i}t^{i}$, where $d_{i}$ is the number of monomials of degree i that are not multiples of any leading monomial in the Gröbner basis. The Hilbert series may be summed into a rational fraction
$\sum _{i=0}^{\infty }d_{i}t^{i}={\frac {P(t)}{(1-t)^{d}}},$
where d is the dimension of the ideal and $P(t)$ is a polynomial. The number $P(1)$ is the degree of the algebraic set defined by the ideal, in the case of a homogeneous ideal or of a monomial ordering compatible with the degree; that is, to compare two monomials, one compares their total degrees first.
The dimension does not depend on the choice of a monomial ordering, although the Hilbert series and the polynomial $P(t)$ may change with changes of the monomial ordering. However, for homogeneous ideals or monomial orderings compatible with the degree, the Hilbert series and the polynomial $P(t)$ do not depend on the choice of monomial ordering.
Most computer algebra systems that provide functions to compute Gröbner bases also provide functions for computing the Hilbert series, and thus also the dimension and the degree.
=== Elimination ===
The computation of Gröbner bases for an elimination monomial ordering allows computational elimination theory. This is based on the following theorem.
Consider a polynomial ring $K[x_{1},\ldots ,x_{n},y_{1},\ldots ,y_{m}]=K[X,Y],$
in which the variables are split into two subsets X and Y. Let us also choose an elimination monomial ordering "eliminating" X, that is, a monomial ordering for which two monomials are compared by comparing first the X-parts and, only in case of equality, considering the Y-parts. This implies that a monomial containing an X-variable is greater than every monomial independent of X.
If G is a Gröbner basis of an ideal I for this monomial ordering, then $G\cap K[Y]$ is a Gröbner basis of $I\cap K[Y]$ (this ideal is often called the elimination ideal). Moreover, $G\cap K[Y]$ consists exactly of the polynomials of G whose leading terms belong to K[Y] (this makes the computation of $G\cap K[Y]$ very easy, as only the leading monomials need to be checked).
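For example, eliminating x from the ideal ⟨x² − y, x³ − z⟩ (the curve y = x², z = x³) yields the relation y³ = z². A SymPy sketch, using the lexicographic ordering with x first as the elimination ordering:

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')

# lex with x > y > z is an elimination ordering for x: the basis
# elements free of x generate the elimination ideal I ∩ Q[y, z].
G = groebner([x**2 - y, x**3 - z], x, y, z, order='lex')
elimination = [p for p in G.exprs if not p.has(x)]
# elimination == [y**3 - z**2]
```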
This elimination property has many applications, some described in the next sections.
Another application, in algebraic geometry, is that elimination realizes the geometric operation of projection of an affine algebraic set into a subspace of the ambient space: with the above notation, the Zariski closure of the projection of the algebraic set defined by the ideal I into the Y-subspace is defined by the ideal $I\cap K[Y].$
The lexicographical ordering such that $x_{1}>\cdots >x_{n}$ is an elimination ordering for every partition $\{x_{1},\ldots ,x_{k}\},\{x_{k+1},\ldots ,x_{n}\}.$
Thus a Gröbner basis for this ordering carries much more information than usually necessary. This may explain why Gröbner bases for the lexicographical ordering are usually the most difficult to compute.
=== Intersecting ideals ===
If I and J are two ideals generated respectively by {f1, ..., fm} and {g1, ..., gk}, then a single Gröbner basis computation produces a Gröbner basis of their intersection I ∩ J. For this, one introduces a new indeterminate t, and one uses an elimination ordering such that the first block contains only t and the other block contains all the other variables (this means that a monomial containing t is greater than every monomial that does not contain t). With this monomial ordering, a Gröbner basis of I ∩ J consists of the polynomials that do not contain t in the Gröbner basis of the ideal
$K=\langle tf_{1},\ldots ,tf_{m},(1-t)g_{1},\ldots ,(1-t)g_{k}\rangle .$
In other words, I ∩ J is obtained by eliminating t in K.
This may be proven by observing that the ideal K consists of the polynomials $(a-b)t+b$ such that $a\in I$ and $b\in J$. Such a polynomial is independent of t if and only if a = b, which means that $b\in I\cap J.$
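A minimal SymPy sketch of this construction for I = ⟨x⟩ and J = ⟨y⟩, whose intersection is ⟨xy⟩; a lexicographic ordering with t first serves as the elimination ordering:

```python
from sympy import symbols, groebner

t, x, y = symbols('t x y')

# K = <t*f1, (1 - t)*g1> with f1 = x generating I and g1 = y generating J.
G = groebner([t*x, (1 - t)*y], t, x, y, order='lex')

intersection = [p for p in G.exprs if not p.has(t)]
# intersection == [x*y], i.e. I ∩ J = <x*y>
```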
=== Implicitization of a rational curve ===
A rational curve is an algebraic curve that has a set of parametric equations of the form
$\begin{aligned}x_{1}&={\frac {f_{1}(t)}{g_{1}(t)}}\\&\;\;\vdots \\x_{n}&={\frac {f_{n}(t)}{g_{n}(t)}},\end{aligned}$
where $f_{i}(t)$ and $g_{i}(t)$ are univariate polynomials for 1 ≤ i ≤ n. One may (and will) suppose that $f_{i}(t)$ and $g_{i}(t)$ are coprime (they have no non-constant common factors).
Implicitization consists of computing the implicit equations of such a curve. In the case n = 2, that is, for plane curves, this may be computed with the resultant. The implicit equation is the following resultant:
$\operatorname {Res} _{t}(g_{1}x_{1}-f_{1},g_{2}x_{2}-f_{2}).$
Elimination with Gröbner bases makes it possible to implicitize for any value of n, simply by eliminating t in the ideal
$\langle g_{1}x_{1}-f_{1},\ldots ,g_{n}x_{n}-f_{n}\rangle .$
If n = 2, the result is the same as with the resultant if the map $t\mapsto (x_{1},x_{2})$ is injective for almost every t. Otherwise, the resultant is a power of the result of the elimination.
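A classical illustration is the rational parametrization of the unit circle, x₁ = (1 − t²)/(1 + t²), x₂ = 2t/(1 + t²); eliminating t recovers the implicit equation. A SymPy sketch (writing x, y for x₁, x₂):

```python
from sympy import symbols, groebner

t, x, y = symbols('t x y')

# The generators g_i*x_i - f_i for the circle parametrization.
G = groebner([(1 + t**2)*x - (1 - t**2), (1 + t**2)*y - 2*t],
             t, x, y, order='lex')

implicit = [p for p in G.exprs if not p.has(t)]
# the implicit equation of the circle, x**2 + y**2 - 1, is recovered
```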
=== Saturation ===
When modeling a problem by polynomial equations, it is often assumed that some quantities are non-zero, so as to avoid degenerate cases. For example, when dealing with triangles, many properties become false if the triangle degenerates to a line segment, i.e. the length of one side is equal to the sum of the lengths of the other sides. In such situations, one cannot deduce relevant information from the polynomial system unless the degenerate solutions are ignored. More precisely, the system of equations defines an algebraic set which may have several irreducible components, and one must remove the components on which the degeneracy conditions are everywhere zero.
This is done by saturating the equations by the degeneracy conditions, which may be done via the elimination property of Gröbner bases.
==== Definition of the saturation ====
The localization of a ring consists of adjoining to it the formal inverses of some elements. This section concerns only the case of a single element, or equivalently a finite number of elements (adjoining the inverses of several elements is equivalent to adjoining the inverse of their product). The localization of a ring R by an element f is the ring $R_{f}=R[t]/(1-ft),$ where t is a new indeterminate representing the inverse of f. The localization of an ideal I of R is the ideal $I_{f}=R_{f}I$ of $R_{f}.$
When R is a polynomial ring, computing in $R_{f}$ is not efficient because of the need to manage the denominators. Therefore, localization is usually replaced by the operation of saturation.
The saturation with respect to f of an ideal I in R is the inverse image of $R_{f}I$ under the canonical map from R to $R_{f}.$ It is the ideal
$I:f^{\infty }=\{g\in R\mid (\exists k\in \mathbb {N} )f^{k}g\in I\},$
consisting of all elements of R whose product with some power of f belongs to I.
If J is the ideal generated by I and 1 − ft in R[t], then $I:f^{\infty }=J\cap R.$
It follows that, if R is a polynomial ring, a Gröbner basis computation eliminating t produces a Gröbner basis of the saturation of an ideal by a polynomial.
The important property of the saturation, which ensures that it removes from the algebraic set defined by the ideal I the irreducible components on which the polynomial f is zero, is the following: the primary decomposition of $I:f^{\infty }$ consists of the components of the primary decomposition of I that do not contain any power of f.
==== Computation of the saturation ====
A Gröbner basis of the saturation by f of a polynomial ideal generated by a finite set of polynomials F may be obtained by eliminating t in $F\cup \{1-tf\},$ that is, by keeping the polynomials independent of t in the Gröbner basis of $F\cup \{1-tf\}$ for an elimination ordering eliminating t.
Instead of using F, one may also start from a Gröbner basis of F. Which method is most efficient depends on the problem. However, if the saturation does not remove any component, that is if the ideal is equal to its saturated ideal, computing first the Gröbner basis of F is usually faster. On the other hand, if the saturation removes some components, the direct computation may be dramatically faster.
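A small SymPy sketch of this computation: saturating I = ⟨xy, xz⟩ by x removes the component on which x vanishes and leaves ⟨y, z⟩:

```python
from sympy import symbols, groebner

t, x, y, z = symbols('t x y z')

F = [x*y, x*z]  # generators of I
# Eliminate t from F ∪ {1 - t*x}, with t first in a lex ordering.
G = groebner(F + [1 - t*x], t, x, y, z, order='lex')

saturation = [p for p in G.exprs if not p.has(t)]
# saturation == [y, z]: the saturation I : x^oo is <y, z>
```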
If one wants to saturate with respect to several polynomials $f_{1},\ldots ,f_{k}$ or with respect to a single polynomial which is a product $f=f_{1}\cdots f_{k},$ there are three ways to proceed which give the same result but may have very different computation times (which one is the most efficient depends on the problem).
Saturating by $f=f_{1}\cdots f_{k}$ in a single Gröbner basis computation.
Saturating by $f_{1},$ then saturating the result by $f_{2},$ and so on.
Adding to F or to its Gröbner basis the polynomials $1-t_{1}f_{1},\ldots ,1-t_{k}f_{k},$ and eliminating the $t_{i}$ in a single Gröbner basis computation.
=== Effective Nullstellensatz ===
Hilbert's Nullstellensatz has two versions. The first one asserts that a set of polynomials has no common zeros over an algebraic closure of the field of the coefficients, if and only if 1 belongs to the generated ideal. This is easily tested with a Gröbner basis computation, because 1 belongs to an ideal if and only if 1 belongs to the Gröbner basis of the ideal, for any monomial ordering.
The second version asserts that the set of common zeros (in an algebraic closure of the field of the coefficients) of an ideal is contained in the hypersurface of the zeros of a polynomial f, if and only if a power of f belongs to the ideal. This may be tested by saturating the ideal by f; in fact, a power of f belongs to the ideal if and only if the saturation by f provides a Gröbner basis containing 1.
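A minimal SymPy check of the first version: x² − 1 and x − 2 have no common zero, and accordingly the reduced Gröbner basis of the ideal they generate is [1]:

```python
from sympy import symbols, groebner

x = symbols('x')

# (x**2 - 1) - (x + 2)*(x - 2) = 3, so 1 lies in the ideal and the
# system x**2 - 1 = 0, x - 2 = 0 is inconsistent.
G = groebner([x**2 - 1, x - 2], x)
print(list(G.exprs))  # [1]
```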
=== Implicitization in higher dimension ===
By definition, an affine rational variety of dimension k may be described by parametric equations of the form
$\begin{aligned}x_{1}&={\frac {p_{1}}{p_{0}}}\\&\;\;\vdots \\x_{n}&={\frac {p_{n}}{p_{0}}},\end{aligned}$
where $p_{0},\ldots ,p_{n}$ are n + 1 polynomials in the k variables (the parameters of the parameterization) $t_{1},\ldots ,t_{k}.$
Thus the parameters $t_{1},\ldots ,t_{k}$ and the coordinates $x_{1},\ldots ,x_{n}$ of the points of the variety are zeros of the ideal
$I=\left\langle p_{0}x_{1}-p_{1},\ldots ,p_{0}x_{n}-p_{n}\right\rangle .$
One could guess that it suffices to eliminate the parameters to obtain the implicit equations of the variety, as has been done in the case of curves. Unfortunately, this is not always the case. If the $p_{i}$ have a common zero (sometimes called a base point), every irreducible component of the non-empty algebraic set defined by the $p_{i}$ is an irreducible component of the algebraic set defined by I. It follows that, in this case, the direct elimination of the $t_{i}$ provides an empty set of polynomials.
Therefore, if k > 1, two Gröbner basis computations are needed to implicitize:
Saturate $I$ by $p_{0}$ to get a Gröbner basis $G$.
Eliminate the $t_{i}$ from $G$ to get a Gröbner basis of the ideal (of the implicit equations) of the variety.
== Algorithms and implementations ==
Buchberger's algorithm is the oldest algorithm for computing Gröbner bases. It was devised by Bruno Buchberger together with the Gröbner basis theory. It is straightforward to implement, but it soon appeared that raw implementations can solve only trivial problems. The main issues are the following:
Even when the resulting Gröbner basis is small, the intermediate polynomials can be huge. As a result, most of the computing time may be spent in memory management. So, specialized memory management algorithms may be a fundamental part of an efficient implementation.
The integers occurring during a computation may be sufficiently large to make fast multiplication algorithms and multimodular arithmetic useful. For this reason, most optimized implementations use the GMP library. Also, modular arithmetic, the Chinese remainder theorem and Hensel lifting are used in optimized implementations.
The choice of the S-polynomials to reduce, and of the polynomials used for reducing them, is left to heuristics. As in many computational problems, heuristics cannot detect most hidden simplifications, and better heuristic choices may yield a dramatic improvement in the algorithm's efficiency.
In most cases, most S-polynomials that are computed are reduced to zero; that is, most of the computing time is spent computing zero.
The monomial ordering that is most often needed for the applications (pure lexicographic) is not the ordering that leads to the easiest computation, which is generally degrevlex.
For solving issue 3, many improvements, variants and heuristics have been proposed before the introduction of the F4 and F5 algorithms by Jean-Charles Faugère. As these algorithms are designed for coefficients that are integers or integers modulo a prime number, Buchberger's algorithm remains useful for more general coefficients.
Roughly speaking, the F4 algorithm solves issue 3 by replacing many S-polynomial reductions by the row reduction of a single large matrix, for which advanced methods of linear algebra can be used. This partially solves issue 4, as reductions to zero in Buchberger's algorithm correspond to relations between rows of the matrix to be reduced, and the zero rows of the reduced matrix correspond to a basis of the vector space of these relations.
The F5 algorithm improves F4 by introducing a criterion that allows reducing the size of the matrices to be reduced. This criterion is almost optimal, since the matrices to be reduced have full rank in sufficiently regular cases (in particular, when the input polynomials form a regular sequence). Tuning F5 for general use is difficult, since its performance depends on the order of the input polynomials and on the balance between the increase of the working polynomial degree and the number of input polynomials that are considered. To date (2022), there is no distributed implementation that is significantly more efficient than F4, but, over modular integers, F5 has been used successfully for several cryptographic challenges; for example, for breaking the HFE challenge.
Issue 5 has been solved by the discovery of basis conversion algorithms that start from the Gröbner basis for one monomial ordering and compute a Gröbner basis for another monomial ordering. The FGLM algorithm is such a basis conversion algorithm; it works only in the zero-dimensional case (where the polynomials have a finite number of complex common zeros) and has a polynomial complexity in the number of common zeros. A basis conversion algorithm that works in the general case is the Gröbner walk algorithm. In its original form, FGLM may be the critical step for solving systems of polynomial equations, because FGLM does not take into account the sparsity of the matrices involved. This has been fixed by the introduction of sparse FGLM algorithms.
Most general-purpose computer algebra systems have implementations of one or several algorithms for Gröbner bases, often also embedded in other functions, such as for solving systems of polynomial equations or for simplifying trigonometric functions; this is the case, for example, of CoCoA, GAP, Macaulay 2, Magma, Maple, Mathematica, SINGULAR, SageMath and SymPy. When F4 is available, it is generally much more efficient than Buchberger's algorithm. The implementation techniques and algorithmic variants are not always documented, although they may have a dramatic effect on efficiency.
Implementations of F4 and (sparse) FGLM are included in the library Msolve. Besides Gröbner basis algorithms, Msolve contains fast algorithms for real-root isolation, and combines all these functions in an algorithm for computing the real solutions of systems of polynomial equations that dramatically outperforms the other software for this problem (Maple and Magma). Msolve is available on GitHub, and is interfaced with Julia, Maple and SageMath; this means that Msolve can be used directly from within these software environments.
== Complexity ==
The complexity of Gröbner basis computations is commonly evaluated in terms of the number n of variables and the maximal degree d of the input polynomials.
In the worst case, the main parameter of the complexity is the maximal degree of the elements of the resulting reduced Gröbner basis. More precisely, if the Gröbner basis contains an element of a large degree D, this element may contain
$\Omega (D^{n})$ nonzero terms whose computation requires a time of $\Omega (D^{n})>D^{\Omega (n)}.$
On the other hand, if all polynomials in the reduced Gröbner basis of a homogeneous ideal have a degree of at most D, the Gröbner basis can be computed by linear algebra on the vector space of polynomials of degree less than 2D, which has dimension $O(D^{n}).$ So, the complexity of this computation is $O(D^{n})^{O(1)}=D^{O(n)}.$
The worst-case complexity of a Gröbner basis computation is doubly exponential in n. More precisely, the complexity is upper bounded by a polynomial in $d^{2^{n}}.$ Using little-o notation, it is therefore bounded by $d^{2^{n+o(n)}}.$ On the other hand, examples have been given of reduced Gröbner bases containing polynomials of degree $d^{2^{\Omega (n)}},$ or containing $d^{2^{\Omega (n)}}$ elements. As every algorithm for computing a Gröbner basis must write its result, this provides a lower bound of the complexity.
Gröbner basis computation is EXPSPACE-complete.
== Generalizations ==
The concept and algorithms of Gröbner bases have been generalized to submodules of free modules over a polynomial ring. In fact, if L is a free module over a ring R, then one may consider the direct sum $R\oplus L$ as a ring by defining the product of two elements of L to be 0. This ring may be identified with $R[e_{1},\ldots ,e_{l}]/\left\langle \{e_{i}e_{j}\mid 1\leq i\leq j\leq l\}\right\rangle ,$ where $e_{1},\ldots ,e_{l}$ is a basis of L. This allows identifying a submodule of L generated by $g_{1},\ldots ,g_{k}$ with the ideal of $R[e_{1},\ldots ,e_{l}]$ generated by $g_{1},\ldots ,g_{k}$ and the products $e_{i}e_{j},$ $1\leq i\leq j\leq l.$ If R is a polynomial ring, this reduces the theory and the algorithms of Gröbner bases of modules to the theory and the algorithms of Gröbner bases of ideals.
The concept and algorithms of Gröbner bases have also been generalized to ideals over various rings, commutative or not, like polynomial rings over a principal ideal ring or Weyl algebras.
== Areas of applications ==
=== Error-Correcting Codes ===
Gröbner bases have been applied in the theory of error-correcting codes for algebraic decoding. By using Gröbner basis computation on various forms of error-correcting equations, decoding methods were developed for correcting errors of cyclic codes, affine variety codes, algebraic-geometric codes and even general linear block codes. Applying Gröbner basis in algebraic decoding is still a research area of channel coding theory.
== See also ==
Bergman's diamond lemma, an extension of Gröbner bases to non-commutative rings
Graver basis
Janet basis
Regular chains, an alternative way to represent algebraic sets
== References ==
== Further reading ==
Adams, William W.; Loustaunau, Philippe (1994). An Introduction to Gröbner Bases. Graduate Studies in Mathematics. Vol. 3. American Mathematical Society. ISBN 0-8218-3804-0.
Li, Huishi (2011). Gröbner Bases in Ring Theory. World Scientific. ISBN 978-981-4365-13-0.
Becker, Thomas; Weispfenning, Volker (1998). Gröbner Bases: A Computational Approach to Commutative Algebra. Graduate Texts in Mathematics. Vol. 141. Springer. ISBN 0-387-97971-9.
Buchberger, Bruno (1965). An Algorithm for Finding the Basis Elements of the Residue Class Ring of a Zero Dimensional Polynomial Ideal (PDF) (PhD). University of Innsbruck. — (2006). "Bruno Buchberger's PhD thesis 1965: An algorithm for finding the basis elements of the residue class ring of a zero dimensional polynomial ideal". Journal of Symbolic Computation. 41 (3–4). Translated by Abramson, M.: 471–511. doi:10.1016/j.jsc.2005.09.007. [This is Buchberger's thesis inventing Gröbner bases.]
Buchberger, Bruno (1970). "An Algorithmic Criterion for the Solvability of a System of Algebraic Equations" (PDF). Aequationes Mathematicae. 4: 374–383. doi:10.1007/BF01844169. S2CID 189834323. (This is the journal publication of Buchberger's thesis.)Burchberger, B.; Winkler, F., eds. (26 February 1998). "An Algorithmic Criterion for the Solvability of a System of Algebraic Equations". Gröbner Bases and Applications. London Mathematical Society Lecture Note Series. Vol. 251. Cambridge University Press. pp. 535–545. ISBN 978-0-521-63298-0.
Buchberger, Bruno; Kauers, Manuel (2010). "Gröbner Bases". Scholarpedia. 5 (10): 7763. Bibcode:2010SchpJ...5.7763B. doi:10.4249/scholarpedia.7763.
Fröberg, Ralf (1997). An Introduction to Gröbner Bases. Wiley. ISBN 0-471-97442-0.
Sturmfels, Bernd (November 2005). "What is ... a Gröbner Basis?" (PDF). Notices of the American Mathematical Society. 52 (10): 1199–1200. (A brief introduction.)
Shirshov, Anatoliĭ I. (1999). "Certain algorithmic problems for Lie algebras" (PDF). ACM SIGSAM Bulletin. 33 (2): 3–6. doi:10.1145/334714.334715. S2CID 37070503. (translated from Sibirsk. Mat. Zh. Siberian Mathematics Journal 3 (1962), 292–296).
Aschenbrenner, Matthias; Hillar, Christopher (2007). "Finite generation of symmetric ideals". Transactions of the American Mathematical Society. 359 (11): 5171–92. arXiv:math/0411514. doi:10.1090/S0002-9947-07-04116-5. S2CID 5656701. (on infinite dimensional Gröbner bases for polynomial rings in infinitely many indeterminates).
== External links ==
Faugère's own implementation of his F4 algorithm
"Gröbner basis", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Buchberger, B. (2003). "Gröbner Bases: A Short Introduction for Systems Theorists" (PDF). In Moreno-Diaz, R.; Buchberger, B.; Freire, J. (eds.). Computer Aided Systems Theory — EUROCAST 2001: A Selection of Papers from the 8th International Workshop on Computer Aided Systems Theory. Springer. pp. 1–19. ISBN 978-3-540-45654-4.
Buchberger, B.; Zapletal, A. "Gröbner Bases Bibliography".
Comparative Timings Page for Gröbner Bases Software
Prof. Bruno Buchberger
Weisstein, Eric W. "Gröbner Basis". MathWorld.
Gröbner basis introduction on Scholarpedia
Derive was a computer algebra system, developed as a successor to muMATH by the Soft Warehouse in Honolulu, Hawaii, now owned by Texas Instruments. Derive was implemented in muLISP, also by Soft Warehouse. The first release was in 1988 for DOS. It was discontinued on June 29, 2007, in favor of the TI-Nspire CAS. The final version is Derive 6.1 for Windows.
Since Derive required comparably little memory, it was suitable for use on older and smaller machines. It was available for the DOS and Windows platforms and served as an inspiration for the computer algebra system in certain TI pocket calculators.
== Books ==
Derive 1.0 - A Mathematical Assistant Program (2nd printing, 3rd ed.). Honolulu, Hawaii, USA: Soft Warehouse, Inc. August 1989 [June 1989 (September 1988)].
Jerry Glynn, Exploring Math from Algebra to Calculus with Derive, A Mathematical Assistant, Mathware Inc, 1992, ISBN 0-9623629-0-5
Leon Magiera, General Physics Problem Solving With Cas Derive, Nova Science Pub Inc 2001, ISBN 1-59033-057-9
Vladimir Dyakonov. Handbook on application system Derive. Moscow (Russia) 1996, Phismatlit, 320 p, ISBN 5-02-015100-9
Vladimir Dyakonov. Computer Algebra System Derive. Moscow (Russia) 2002, SOLON-R, 320 p, ISBN 5-93455-139-6
== See also ==
List of computer algebra systems
== References ==
== External links ==
Derive Review at scientific-computing.com Archived 2008-10-02 at the Wayback Machine
Derive Newsletter from the International Derive Users Group
In mathematics, Gosper's algorithm, due to Bill Gosper, is a procedure for finding sums of hypergeometric terms that are themselves hypergeometric terms. That is: suppose one has a(1) + ... + a(n) = S(n) − S(0), where S(n) is a hypergeometric term (i.e., S(n + 1)/S(n) is a rational function of n); then necessarily a(n) is itself a hypergeometric term, and given the formula for a(n) Gosper's algorithm finds that for S(n).
== Outline of the algorithm ==
Step 1: Find a polynomial p such that, writing b(n) = a(n)/p(n), the ratio b(n)/b(n − 1) has the form q(n)/r(n), where q and r are polynomials and q(n) has no nontrivial common factor with r(n + j) for j = 0, 1, 2, ... . (This is always possible, whether or not the series is summable in closed form.)
Step 2: Find a polynomial ƒ such that S(n) = q(n + 1)/p(n) ƒ(n) a(n). If the series is summable in closed form then clearly a rational function ƒ with this property exists; in fact it must always be a polynomial, and an upper bound on its degree can be found. Determining ƒ (or finding that there is no such ƒ) is then a matter of solving a system of linear equations.
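The output of these steps is a telescoping certificate: a hypergeometric term S(n) with S(n) − S(n − 1) = a(n). The sketch below (plain Python, not an implementation of the algorithm itself) only verifies such a certificate for the classical example a(n) = n·n!, whose antidifference is S(n) = (n + 1)! − 1:

```python
from math import factorial

# Gosper's algorithm produces a hypergeometric antidifference S with
# S(n) - S(n-1) = a(n).  Here we merely *verify* such a certificate for
# a(n) = n * n!, whose known antidifference is S(n) = (n + 1)! - 1.

def a(n):
    return n * factorial(n)

def S(n):
    return factorial(n + 1) - 1

# Check the telescoping identity on a range of sample points.
for n in range(1, 20):
    assert S(n) - S(n - 1) == a(n)

# Consequently a(1) + ... + a(N) = S(N) - S(0).
N = 10
assert sum(a(k) for k in range(1, N + 1)) == S(N) - S(0)
print(S(N) - S(0))  # 39916799, i.e. 11! - 1
```

Once the certificate is known, any definite sum of consecutive terms collapses to two evaluations of S.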
== Relationship to Wilf–Zeilberger pairs ==
Gosper's algorithm can be used to discover Wilf–Zeilberger pairs, where they exist. Suppose that F(n + 1, k) − F(n, k) = G(n, k + 1) − G(n, k) where F is known but G is not. Then feed a(k) := F(n + 1, k) − F(n, k) into Gosper's algorithm. (Treat this as a function of k whose coefficients happen to be functions of n rather than numbers; everything in the algorithm works in this setting.) If it successfully finds S(k) with S(k) − S(k − 1) = a(k), then we are done: this is the required G. If not, there is no such G.
== Definite versus indefinite summation ==
Gosper's algorithm finds (where possible) a hypergeometric closed form for the indefinite sum of hypergeometric terms. It can happen that there is no such closed form, but that the sum over all n, or some particular set of values of n, has a closed form. This question is only meaningful when the coefficients are themselves functions of some other variable. So, suppose a(n,k) is a hypergeometric term in both n and k: that is, a(n, k)/a(n − 1,k) and a(n, k)/a(n, k − 1) are rational functions of n and k. Then Zeilberger's algorithm and Petkovšek's algorithm may be used to find closed forms for the sum over k of a(n, k).
== History ==
Bill Gosper discovered this algorithm in the 1970s while working on the Macsyma computer algebra system at SAIL and MIT.
== Notes ==
== References ==
Gosper, Jr., Ralph William "Bill" (January 1978) [1977-09-26]. "Decision procedure for indefinite hypergeometric summation" (PDF). Proceedings of the National Academy of Sciences of the United States of America. 75 (1): 40–42. Bibcode:1978PNAS...75...40G. doi:10.1073/pnas.75.1.40. PMC 411178. PMID 16592483.
In computer science, model checking or property checking is a method for checking whether a finite-state model of a system meets a given specification (also known as correctness). This is typically associated with hardware or software systems, where the specification contains liveness requirements (such as avoidance of livelock) as well as safety requirements (such as avoidance of states representing a system crash).
In order to solve such a problem algorithmically, both the model of the system and its specification are formulated in some precise mathematical language. To this end, the problem is formulated as a task in logic, namely to check whether a structure satisfies a given logical formula. This general concept applies to many kinds of logic and many kinds of structures. A simple model-checking problem consists of verifying whether a formula in the propositional logic is satisfied by a given structure.
== Overview ==
Property checking is used for verification when two descriptions are not equivalent. During refinement, the specification is complemented with details that are unnecessary in the higher-level specification. The newly introduced properties cannot be verified against the original specification, since they do not appear there; the strict bidirectional equivalence check is therefore relaxed to a one-way property check. The implementation or design is regarded as a model of the system, whereas the specifications are properties that the model must satisfy.
An important class of model-checking methods has been developed for checking models of hardware and software designs where the specification is given by a temporal logic formula. Pioneering work in temporal logic specification was done by Amir Pnueli, who received the 1996 Turing award for "seminal work introducing temporal logic into computing science". Model checking began with the pioneering work of E. M. Clarke, E. A. Emerson, by J. P. Queille, and J. Sifakis. Clarke, Emerson, and Sifakis shared the 2007 Turing Award for their seminal work founding and developing the field of model checking.
Model checking is most often applied to hardware designs. For software, because of undecidability (see computability theory) the approach cannot simultaneously be fully algorithmic, apply to all systems, and always give an answer; in the general case, it may fail to prove or disprove a given property. In embedded-systems hardware, it is possible to validate a specification delivered, e.g., by means of UML activity diagrams or control-interpreted Petri nets.
The structure is usually given as a source code description in an industrial hardware description language or a special-purpose language. Such a program corresponds to a finite-state machine (FSM), i.e., a directed graph consisting of nodes (or vertices) and edges. A set of atomic propositions is associated with each node, typically stating which memory elements are one. The nodes represent states of a system, the edges represent possible transitions that may alter the state, while the atomic propositions represent the basic properties that hold at a point of execution.
Formally, the problem can be stated as follows: given a desired property, expressed as a temporal logic formula p, and a structure M with initial state s, decide if
{\displaystyle M,s\models p}
. If M is finite, as it is in hardware, model checking reduces to a graph search.
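For a finite structure, such a graph search can be sketched directly. The toy mutual-exclusion model below is illustrative (its state encoding is invented for the example, not drawn from any particular tool); checking the safety property "never both processes critical" amounts to a breadth-first search for a reachable bad state:

```python
from collections import deque

# Minimal explicit-state check of a safety property on a finite-state
# model: states are pairs (pc1, pc2) with pc in {"idle", "wait", "crit"},
# and checking AG(not bad) reduces to a reachability search.

def successors(state):
    pc1, pc2 = state
    nxt = []
    for i, (me, other) in enumerate([(pc1, pc2), (pc2, pc1)]):
        if me == "idle":
            step = "wait"
        elif me == "wait" and other != "crit":
            step = "crit"          # may enter only if the other is not critical
        elif me == "crit":
            step = "idle"
        else:
            continue               # blocked: waiting while the other is critical
        nxt.append((step, other) if i == 0 else (other, step))
    return nxt

def check_safety(init, bad):
    """BFS over reachable states; return a counterexample path or None."""
    parent = {init: None}
    queue = deque([init])
    while queue:
        s = queue.popleft()
        if bad(s):
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]      # counterexample trace from the initial state
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None                    # property holds on all reachable states

cex = check_safety(("idle", "idle"), lambda s: s == ("crit", "crit"))
print("mutual exclusion holds" if cex is None else f"violated: {cex}")
```

When the property fails, the reconstructed path is exactly the counterexample trace a model checker reports to the user.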
== Symbolic model checking ==
Instead of enumerating reachable states one at a time, the state space can sometimes be traversed more efficiently by considering large numbers of states at a single step. When such state-space traversal is based on representations of a set of states and transition relations as logical formulas, binary decision diagrams (BDD) or other related data structures, the model-checking method is symbolic.
Historically, the first symbolic methods used BDDs. After the success of propositional satisfiability in solving the planning problem in artificial intelligence (see satplan) in 1996, the same approach was generalized to model checking for linear temporal logic (LTL): the planning problem corresponds to model checking for safety properties. This method is known as bounded model checking. The success of Boolean satisfiability solvers in bounded model checking led to the widespread use of satisfiability solvers in symbolic model checking.
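The set-based traversal can be sketched as a least-fixed-point computation: repeatedly apply the image of the transition relation to an entire set of states until nothing new is added. In this illustrative sketch, Python sets stand in for the BDD or formula representation a real symbolic model checker would use:

```python
# Symbolic-style reachability: iterate the image operator on whole *sets*
# of states until a fixed point is reached, instead of visiting states
# one at a time.  The system here is a 3-bit counter that wraps around:
# x' = (x + 1) mod 8.

def image(states):
    return frozenset((s + 1) % 8 for s in states)

def reachable(init):
    frontier = reached = frozenset(init)
    while frontier:
        frontier = image(reached) - reached   # states discovered this step
        reached |= frontier                   # least fixed point grows
    return reached

print(sorted(reachable({0})))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The efficiency of real symbolic methods comes from representing `reached` and `image` implicitly (as BDDs or SAT formulas), so that one step can process astronomically many states at once.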
=== Example ===
One example of such a system requirement:
Between the time an elevator is called at a floor and the time it opens its doors at that floor, the elevator can arrive at that floor at most twice. The authors of "Patterns in Property Specification for Finite-State Verification" translate this requirement into the following LTL formula:
{\displaystyle {\begin{aligned}\Box {\Big (}({\texttt {call}}\land \Diamond {\texttt {open}})\to &{\big (}(\lnot {\texttt {atfloor}}\land \lnot {\texttt {open}})~{\mathcal {U}}\\&({\texttt {open}}\lor (({\texttt {atfloor}}\land \lnot {\texttt {open}})~{\mathcal {U}}\\&({\texttt {open}}\lor ((\lnot {\texttt {atfloor}}\land \lnot {\texttt {open}})~{\mathcal {U}}\\&({\texttt {open}}\lor (({\texttt {atfloor}}\land \lnot {\texttt {open}})~{\mathcal {U}}\\&({\texttt {open}}\lor (\lnot {\texttt {atfloor}}~{\mathcal {U}}~{\texttt {open}})))))))){\big )}{\Big )}\end{aligned}}}
Here, ◻ should be read as "always", ◊ as "eventually", and U as "until"; the other symbols are standard logical symbols: ∨ for "or", ∧ for "and", and ¬ for "not".
== Techniques ==
Model-checking tools face a combinatorial blow up of the state-space, commonly known as the state explosion problem, that must be addressed to solve most real-world problems. There are several approaches to combat this problem.
Symbolic algorithms avoid ever explicitly constructing the graph for the FSM; instead, they represent the graph implicitly using a formula in quantified propositional logic. The use of binary decision diagrams (BDDs) was made popular by the work of Ken McMillan, as well as of Olivier Coudert and Jean-Christophe Madre, and the development of open-source BDD manipulation libraries such as CUDD and BuDDy.
Bounded model-checking algorithms unroll the FSM for a fixed number of steps, k, and check whether a property violation can occur in k or fewer steps. This typically involves encoding the restricted model as an instance of SAT. The process can be repeated with larger and larger values of k until all possible violations have been ruled out (cf. iterative deepening depth-first search).
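The unrolling idea can be sketched as follows. A real bounded model checker would encode the k-step unrolling as a single SAT instance; this illustrative version simply enumerates input sequences of length at most k over a toy system invented for the example:

```python
from itertools import product

# Bounded model checking sketch: search for a path of length <= k from
# the initial state to a violating state.  The system is a saturating
# 2-bit counter with a "reset" input.

def step(state, reset):
    return 0 if reset else min(state + 1, 3)

def bmc(init, violates, k):
    """Return a violating input sequence of length <= k, or None."""
    for n in range(k + 1):
        for inputs in product([False, True], repeat=n):
            s = init
            for r in inputs:
                s = step(s, r)
            if violates(s):
                return inputs      # shortest witness, by increasing length
    return None

# The counter reaches 3 only after three uninterrupted increments,
# so the shortest witness has length 3.
witness = bmc(0, lambda s: s == 3, k=5)
print(witness)  # (False, False, False)
```

Because the search proceeds by increasing depth, the first witness found is a shortest one, mirroring the iterative-deepening character of the SAT-based method.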
Abstraction attempts to prove properties of a system by first simplifying it. The simplified system usually does not satisfy exactly the same properties as the original one so that a process of refinement may be necessary. Generally, one requires the abstraction to be sound (the properties proved on the abstraction are true of the original system); however, sometimes the abstraction is not complete (not all true properties of the original system are true of the abstraction). An example of abstraction is to ignore the values of non-Boolean variables and to only consider Boolean variables and the control flow of the program; such an abstraction, though it may appear coarse, may, in fact, be sufficient to prove e.g. properties of mutual exclusion.
Counterexample-guided abstraction refinement (CEGAR) begins checking with a coarse (i.e. imprecise) abstraction and iteratively refines it. When a violation (i.e. counterexample) is found, the tool analyzes it for feasibility (i.e., is the violation genuine or the result of an incomplete abstraction?). If the violation is feasible, it is reported to the user. If it is not, the proof of infeasibility is used to refine the abstraction and checking begins again.
Model-checking tools were initially developed to reason about the logical correctness of discrete state systems, but have since been extended to deal with real-time and limited forms of hybrid systems.
== First-order logic ==
Model checking is also studied in the field of computational complexity theory. Specifically, a first-order logical formula is fixed without free variables and the following decision problem is considered:
Given a finite interpretation, for instance, one described as a relational database, decide whether the interpretation is a model of the formula.
This problem is in the circuit class AC0. It is tractable when imposing some restrictions on the input structure: for instance, requiring that it has treewidth bounded by a constant (which more generally implies the tractability of model checking for monadic second-order logic), bounding the degree of every domain element, and more general conditions such as bounded expansion, locally bounded expansion, and nowhere-dense structures. These results have been extended to the task of enumerating all solutions to a first-order formula with free variables.
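For a fixed formula, model checking over a finite interpretation is a direct evaluation over the domain. As a sketch, the formula ∀x∃y E(x, y) ("every element has an E-successor") can be checked on a structure given as a domain plus an edge relation:

```python
# First-order model checking on a finite structure: the structure is a
# finite domain with relations (here a directed graph given as an edge
# set), and we decide whether it models the fixed formula
#   forall x. exists y. E(x, y).

def models_forall_exists_edge(domain, edges):
    return all(any((x, y) in edges for y in domain) for x in domain)

cycle = {(0, 1), (1, 2), (2, 0)}
path = {(0, 1), (1, 2)}                  # vertex 2 has no successor

print(models_forall_exists_edge({0, 1, 2}, cycle))  # True
print(models_forall_exists_edge({0, 1, 2}, path))   # False
```

The nested quantifiers translate directly into nested loops over the domain, which is why the data complexity of this problem is so low for a fixed formula.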
== Tools ==
Here is a list of significant model-checking tools:
Afra: a model checker for Rebeca which is an actor-based language for modeling concurrent and reactive systems
Alloy (Alloy Analyzer)
BLAST (Berkeley Lazy Abstraction Software Verification Tool)
CADP (Construction and Analysis of Distributed Processes) a toolbox for the design of communication protocols and distributed systems
CPAchecker: an open-source software model checker for C programs, based on the CPA framework
ECLAIR: a platform for the automatic analysis, verification, testing, and transformation of C and C++ programs
FDR2: a model checker for verifying real-time systems modelled and specified as CSP Processes
FizzBee: an easier-to-use alternative to TLA+ that uses a Python-like specification language and supports both behavioral modeling (as in TLA+) and probabilistic modeling (as in PRISM)
ISP code level verifier for MPI programs
Java Pathfinder: an open-source model checker for Java programs
Libdmc: a framework for distributed model checking
mCRL2 Toolset, Boost Software License, Based on ACP
NuSMV: a new symbolic model checker
PAT: an enhanced simulator, model checker and refinement checker for concurrent and real-time systems
Prism: a probabilistic symbolic model checker
Roméo: an integrated tool environment for modelling, simulation, and verification of real-time systems modelled as parametric, time, and stopwatch Petri nets
SPIN: a general tool for verifying the correctness of distributed software models in a rigorous and mostly automated fashion
Storm: a model checker for probabilistic systems
TAPAs: a tool for the analysis of process algebra
TAPAAL: an integrated tool environment for modelling, validation, and verification of Timed-Arc Petri Nets
TLA+ model checker by Leslie Lamport
UPPAAL: an integrated tool environment for modelling, validation, and verification of real-time systems modelled as networks of timed automata
Zing – experimental tool from Microsoft to validate state models of software at various levels: high-level protocol descriptions, work-flow specifications, web services, device drivers, and protocols in the core of the operating system. Zing is currently being used for developing drivers for Windows.
== See also ==
== References ==
== Further reading ==
In mathematics, an algebra over a field (often simply called an algebra) is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure consisting of a set together with operations of multiplication and addition and scalar multiplication by elements of a field and satisfying the axioms implied by "vector space" and "bilinear".
The multiplication operation in an algebra may or may not be associative, leading to the notions of associative algebras where associativity of multiplication is assumed, and non-associative algebras, where associativity is not assumed (but not excluded, either). Given an integer n, the ring of real square matrices of order n is an example of an associative algebra over the field of real numbers under matrix addition and matrix multiplication since matrix multiplication is associative. Three-dimensional Euclidean space with multiplication given by the vector cross product is an example of a nonassociative algebra over the field of real numbers since the vector cross product is nonassociative, satisfying the Jacobi identity instead.
An algebra is unital or unitary if it has an identity element with respect to the multiplication. The ring of real square matrices of order n forms a unital algebra since the identity matrix of order n is the identity element with respect to matrix multiplication. It is an example of a unital associative algebra, a (unital) ring that is also a vector space.
Many authors use the term algebra to mean associative algebra, or unital associative algebra, or in some subjects such as algebraic geometry, unital associative commutative algebra.
Replacing the field of scalars by a commutative ring leads to the more general notion of an algebra over a ring. Algebras are not to be confused with vector spaces equipped with a bilinear form, like inner product spaces, as, for such a space, the result of a product is not in the space, but rather in the field of coefficients.
== Definition and motivation ==
=== Motivating examples ===
=== Definition ===
Let K be a field, and let A be a vector space over K equipped with an additional binary operation from A × A to A, denoted here by · (that is, if x and y are any two elements of A, then x · y is an element of A that is called the product of x and y). Then A is an algebra over K if the following identities hold for all elements x, y, z in A, and all elements (often called scalars) a and b in K:
Right distributivity: (x + y) · z = x · z + y · z
Left distributivity: z · (x + y) = z · x + z · y
Compatibility with scalars: (ax) · (by) = (ab) (x · y).
These three axioms are another way of saying that the binary operation is bilinear. An algebra over K is sometimes also called a K-algebra, and K is called the base field of A. The binary operation is often referred to as multiplication in A. The convention adopted in this article is that multiplication of elements of an algebra is not necessarily associative, although some authors use the term algebra to refer to an associative algebra.
When a binary operation on a vector space is commutative, left distributivity and right distributivity are equivalent, and, in this case, only one distributivity requires a proof. In general, for non-commutative operations left distributivity and right distributivity are not equivalent, and require separate proofs.
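The three axioms can be spot-checked numerically for a concrete algebra. The sketch below uses the cross product on R³ (mentioned above as a nonassociative algebra) and also confirms the Jacobi identity on sample vectors; these checks are illustrations on particular inputs, not a proof:

```python
# The cross product makes R^3 a (non-associative) algebra over R.
# Spot-check the bilinearity axioms from the definition, plus the
# Jacobi identity, on a few sample integer vectors (exact arithmetic).

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, u):
    return tuple(c * a for a in u)

x, y, z = (1, 2, 3), (-4, 0, 5), (2, -1, 1)
a, b = 3, -2

# Right distributivity, left distributivity, compatibility with scalars:
assert cross(add(x, y), z) == add(cross(x, z), cross(y, z))
assert cross(z, add(x, y)) == add(cross(z, x), cross(z, y))
assert cross(scale(a, x), scale(b, y)) == scale(a * b, cross(x, y))

# Associativity fails on these vectors, but the Jacobi identity holds:
assert cross(cross(x, y), z) != cross(x, cross(y, z))
jac = add(add(cross(x, cross(y, z)), cross(y, cross(z, x))), cross(z, cross(x, y)))
assert jac == (0, 0, 0)
print("cross product: bilinear, non-associative, Jacobi identity holds")
```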
== Basic concepts ==
=== Algebra homomorphisms ===
Given K-algebras A and B, a homomorphism of K-algebras or K-algebra homomorphism is a K-linear map f: A → B such that f(xy) = f(x) f(y) for all x, y in A. If A and B are unital, then a homomorphism satisfying f(1A) = 1B is said to be a unital homomorphism. The space of all K-algebra homomorphisms between A and B is frequently written as
{\displaystyle \mathbf {Hom} _{K{\text{-alg}}}(A,B).}
A K-algebra isomorphism is a bijective K-algebra homomorphism.
=== Subalgebras and ideals ===
A subalgebra of an algebra over a field K is a linear subspace that has the property that the product of any two of its elements is again in the subspace. In other words, a subalgebra of an algebra is a non-empty subset of elements that is closed under addition, multiplication, and scalar multiplication. In symbols, we say that a subset L of a K-algebra A is a subalgebra if for every x, y in L and c in K, we have that x · y, x + y, and cx are all in L.
In the above example of the complex numbers viewed as a two-dimensional algebra over the real numbers, the one-dimensional real line is a subalgebra.
A left ideal of a K-algebra is a linear subspace that has the property that any element of the subspace multiplied on the left by any element of the algebra produces an element of the subspace. In symbols, we say that a subset L of a K-algebra A is a left ideal if for every x and y in L, z in A and c in K, we have the following three statements.
x + y is in L (L is closed under addition),
cx is in L (L is closed under scalar multiplication),
z · x is in L (L is closed under left multiplication by arbitrary elements).
If (3) were replaced with x · z is in L, then this would define a right ideal. A two-sided ideal is a subset that is both a left and a right ideal. The term ideal on its own is usually taken to mean a two-sided ideal. Of course when the algebra is commutative, then all of these notions of ideal are equivalent. Conditions (1) and (2) together are equivalent to L being a linear subspace of A. It follows from condition (3) that every left or right ideal is a subalgebra.
This definition is different from the definition of an ideal of a ring, in that here we require the condition (2). Of course if the algebra is unital, then condition (3) implies condition (2).
=== Extension of scalars ===
If we have a field extension F/K, which is to say a bigger field F that contains K, then there is a natural way to construct an algebra over F from any algebra over K. It is the same construction one uses to make a vector space over a bigger field, namely the tensor product
{\displaystyle V_{F}:=V\otimes _{K}F}
. So if A is an algebra over K, then
{\displaystyle A_{F}}
is an algebra over F.
== Kinds of algebras and examples ==
Algebras over fields come in many different types. These types are specified by insisting on some further axioms, such as commutativity or associativity of the multiplication operation, which are not required in the broad definition of an algebra. The theories corresponding to the different types of algebras are often very different.
=== Unital algebra ===
An algebra is unital or unitary if it has a unit or identity element I with Ix = x = xI for all x in the algebra.
=== Zero algebra ===
An algebra is called a zero algebra if uv = 0 for all u, v in the algebra, not to be confused with the algebra with one element. It is inherently non-unital (except in the case of only one element), associative and commutative.
A unital zero algebra is the direct sum
{\displaystyle K\oplus V}
of a field K and a K-vector space V, equipped with the unique multiplication that is zero on the vector space (or module) and makes it a unital algebra.
More precisely, every element of the algebra may be uniquely written as k + v with k ∈ K and v ∈ V, and the product is the only bilinear operation such that vw = 0 for every v and w in V. So, if k1, k2 ∈ K and v1, v2 ∈ V, one has
{\displaystyle (k_{1}+v_{1})(k_{2}+v_{2})=k_{1}k_{2}+(k_{1}v_{2}+k_{2}v_{1}).}
A classical example of unital zero algebra is the algebra of dual numbers, the unital zero R-algebra built from a one dimensional real vector space.
This definition extends verbatim to the definition of a unital zero algebra over a commutative ring, with the replacement of "field" and "vector space" with "commutative ring" and "module".
Unital zero algebras allow the unification of the theory of submodules of a given module and the theory of ideals of a unital algebra. Indeed, the submodules of a module V correspond exactly to the ideals of K ⊕ V that are contained in V.
For example, the theory of Gröbner bases was introduced by Bruno Buchberger for ideals in a polynomial ring R = K[x1, ..., xn] over a field. The construction of the unital zero algebra over a free R-module allows extending this theory as a Gröbner basis theory for submodules of a free module. This extension allows, for computing a Gröbner basis of a submodule, to use, without any modification, any algorithm and any software for computing Gröbner bases of ideals.
Similarly, unital zero algebras make it straightforward to deduce the Lasker–Noether theorem for modules (over a commutative ring) from the original Lasker–Noether theorem for ideals.
=== Associative algebra ===
Examples of associative algebras include
the algebra of all n-by-n matrices over a field (or commutative ring) K. Here the multiplication is ordinary matrix multiplication.
group algebras, where a group serves as a basis of the vector space and algebra multiplication extends group multiplication.
the commutative algebra K[x] of all polynomials over K (see polynomial ring).
algebras of functions, such as the R-algebra of all real-valued continuous functions defined on the interval [0,1], or the C-algebra of all holomorphic functions defined on some fixed open set in the complex plane. These are also commutative.
Incidence algebras are built on certain partially ordered sets.
algebras of linear operators, for example on a Hilbert space. Here the algebra multiplication is given by the composition of operators. These algebras also carry a topology; many of them are defined on an underlying Banach space, which turns them into Banach algebras. If an involution is given as well, we obtain B*-algebras and C*-algebras. These are studied in functional analysis.
=== Non-associative algebra ===
A non-associative algebra (or distributive algebra) over a field K is a K-vector space A equipped with a K-bilinear map
{\displaystyle A\times A\rightarrow A}
. The usage of "non-associative" here is meant to convey that associativity is not assumed, but it does not mean it is prohibited – that is, it means "not necessarily associative".
Examples detailed in the main article include:
Euclidean space R3 with multiplication given by the vector cross product
Octonions
Lie algebras
Jordan algebras
Alternative algebras
Flexible algebras
Power-associative algebras
== Algebras and rings ==
The definition of an associative K-algebra with unit is also frequently given in an alternative way. In this case, an algebra over a field K is a ring A together with a ring homomorphism
{\displaystyle \eta \colon K\to Z(A),}
where Z(A) is the center of A. Since η is a ring homomorphism, one must have either that A is the zero ring or that η is injective. This definition is equivalent to that above, with scalar multiplication
{\displaystyle K\times A\to A}
given by
{\displaystyle (k,a)\mapsto \eta (k)a.}
Given two such associative unital K-algebras A and B, a unital K-algebra homomorphism f: A → B is a ring homomorphism that commutes with the scalar multiplication defined by η, which one may write as
{\displaystyle f(ka)=kf(a)}
for all k ∈ K and a ∈ A. In other words, the following diagram commutes:
{\displaystyle {\begin{matrix}&&K&&\\&\eta _{A}\swarrow &\,&\eta _{B}\searrow &\\A&&{\begin{matrix}f\\\longrightarrow \end{matrix}}&&B\end{matrix}}}
== Structure coefficients ==
For algebras over a field, the bilinear multiplication from A × A to A is completely determined by the multiplication of basis elements of A.
Conversely, once a basis for A has been chosen, the products of basis elements can be set arbitrarily, and then extended in a unique way to a bilinear operator on A, i.e., so the resulting multiplication satisfies the algebra laws.
Thus, given the field K, any finite-dimensional algebra can be specified up to isomorphism by giving its dimension (say n), and specifying the n³ structure coefficients ci,j,k, which are scalars.
These structure coefficients determine the multiplication in A via the following rule:
{\displaystyle \mathbf {e} _{i}\mathbf {e} _{j}=\sum _{k=1}^{n}c_{i,j,k}\mathbf {e} _{k}}
where e1,...,en form a basis of A.
Note however that several different sets of structure coefficients can give rise to isomorphic algebras.
In mathematical physics, the structure coefficients are generally written with upper and lower indices, so as to distinguish their transformation properties under coordinate transformations. Specifically, lower indices are covariant indices, and transform via pullbacks, while upper indices are contravariant, transforming under pushforwards. Thus, the structure coefficients are often written ci,jk, and their defining rule is written using the Einstein notation as
e_i e_j = c_{i,j}^{k} e_k.
Applied to vectors written in index notation, this becomes
(xy)^k = c_{i,j}^{k} x^i y^j.
If K is only a commutative ring and not a field, then the same process works if A is a free module over K. If it isn't, then the multiplication is still completely determined by its action on a set that spans A; however, the structure constants can't be specified arbitrarily in this case, and knowing only the structure constants does not specify the algebra up to isomorphism.
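The rule above can be sketched directly in code: store the n³ coefficients and extend the products of basis elements bilinearly. As an illustrative (assumed) example, take the complex numbers as a 2-dimensional algebra over the reals with basis 1 and i:

```python
# Multiplication in a finite-dimensional algebra is determined by the
# structure coefficients c[i][j][k] with  e_i e_j = sum_k c[i][j][k] e_k.
# Example: C as a 2-dimensional R-algebra with basis e_0 = 1, e_1 = i,
# so e_1 e_1 = -e_0.

n = 2
c = [[[0.0] * n for _ in range(n)] for _ in range(n)]
c[0][0][0] = 1.0   # 1 * 1 = 1
c[0][1][1] = 1.0   # 1 * i = i
c[1][0][1] = 1.0   # i * 1 = i
c[1][1][0] = -1.0  # i * i = -1

def multiply(x, y):
    """Bilinear product of coordinate vectors x, y via the c[i][j][k]."""
    out = [0.0] * n
    for i in range(n):
        for j in range(n):
            for k in range(n):
                out[k] += c[i][j][k] * x[i] * y[j]
    return out

# (2 + 3i)(1 - 4i) = 2 - 8i + 3i - 12 i^2 = 14 - 5i
print(multiply([2.0, 3.0], [1.0, -4.0]))  # [14.0, -5.0]
```

Changing only the table c (say, to c[1][1][0] = 0.0) yields a different 2-dimensional algebra — the dual numbers — with the same code.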
== Classification of low-dimensional unital associative algebras over the complex numbers ==
Two-dimensional, three-dimensional and four-dimensional unital associative algebras over the field of complex numbers were completely classified up to isomorphism by Eduard Study.
There exist two such two-dimensional algebras. Each algebra consists of linear combinations (with complex coefficients) of two basis elements, 1 (the identity element) and a. According to the definition of an identity element,
{\displaystyle \textstyle 1\cdot 1=1\,,\quad 1\cdot a=a\,,\quad a\cdot 1=a\,.}
It remains to specify
{\displaystyle \textstyle aa=1}
for the first algebra,
{\displaystyle \textstyle aa=0}
for the second algebra.
There exist five such three-dimensional algebras. Each algebra consists of linear combinations of three basis elements, 1 (the identity element), a and b. Taking into account the definition of an identity element, it is sufficient to specify
{\displaystyle \textstyle aa=a\,,\quad bb=b\,,\quad ab=ba=0}
for the first algebra,
{\displaystyle \textstyle aa=a\,,\quad bb=0\,,\quad ab=ba=0}
for the second algebra,
{\displaystyle \textstyle aa=b\,,\quad bb=0\,,\quad ab=ba=0}
for the third algebra,
{\displaystyle \textstyle aa=1\,,\quad bb=0\,,\quad ab=-ba=b}
for the fourth algebra,
{\displaystyle \textstyle aa=0\,,\quad bb=0\,,\quad ab=ba=0}
for the fifth algebra.
The fourth of these algebras is non-commutative, and the others are commutative.
== Generalization: algebra over a ring ==
In some areas of mathematics, such as commutative algebra, it is common to consider the more general concept of an algebra over a ring, where a commutative ring R replaces the field K. The only part of the definition that changes is that A is assumed to be an R-module (instead of a K-vector space).
=== Associative algebras over rings ===
A ring A is always an associative algebra over its center, and over the integers. A classical example of an algebra over its center is the split-biquaternion algebra, which is isomorphic to ℍ × ℍ, the direct product of two quaternion algebras. The center of that ring is ℝ × ℝ, and hence it has the structure of an algebra over its center, which is not a field. Note that the split-biquaternion algebra is also naturally an 8-dimensional ℝ-algebra.
In commutative algebra, if A is a commutative ring, then any unital ring homomorphism R → A defines an R-module structure on A, and this is what is known as the R-algebra structure. So a ring comes with a natural ℤ-module structure, since one can take the unique homomorphism ℤ → A. On the other hand, not all rings can be given the structure of an algebra over a field (for example the integers). See Field with one element for a description of an attempt to give to every ring a structure that behaves like an algebra over a field.
== See also ==
Algebra over an operad
Alternative algebra
Clifford algebra
Composition algebra
Differential algebra
Free algebra
Geometric algebra
Max-plus algebra
Mutation (algebra)
Operator algebra
Zariski's lemma
== Notes ==
== References ==
Hazewinkel, Michiel; Gubareni, Nadiya; Kirichenko, Vladimir V. (2004). Algebras, rings and modules. Vol. 1. Springer. ISBN 1-4020-2690-0. | Wikipedia/Algebra_(ring_theory) |
In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is [3 0; 0 2], while an example of a 3×3 diagonal matrix is [6 0 0; 0 5 0; 0 0 4]. An identity matrix of any size, or any multiple of it, is a diagonal matrix called a scalar matrix, for example, [0.5 0; 0 0.5].
In geometry, a diagonal matrix may be used as a scaling matrix, since matrix multiplication with it results in changing scale (size) and possibly also shape; only a scalar matrix results in uniform change in scale.
== Definition ==
As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix D = (d_{i,j}) with n columns and n rows is diagonal if
d_{i,j} = 0 for all i, j ∈ {1, 2, …, n} with i ≠ j.
However, the main diagonal entries are unrestricted.
The term diagonal matrix may sometimes refer to a rectangular diagonal matrix, which is an m-by-n matrix with all the entries not of the form di,i being zero. For example:
[1 0 0; 0 4 0; 0 0 −3; 0 0 0]   or   [1 0 0 0 0; 0 4 0 0 0; 0 0 −3 0 0]
More often, however, diagonal matrix refers to square matrices, which can be specified explicitly as a square diagonal matrix. A square diagonal matrix is a symmetric matrix, so this can also be called a symmetric diagonal matrix.
The following matrix is a square diagonal matrix:
[1 0 0; 0 4 0; 0 0 −2]
If the entries are real numbers or complex numbers, then it is a normal matrix as well.
In the remainder of this article we will consider only square diagonal matrices, and refer to them simply as "diagonal matrices".
== Vector-to-matrix diag operator ==
A diagonal matrix D can be constructed from a vector a = [a_1 … a_n]^T using the diag operator:
D = diag(a_1, …, a_n).
This may be written more compactly as D = diag(a).
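As a concrete illustration (using NumPy, not part of the article), `np.diag` implements this vector-to-matrix construction, and the same function recovers the diagonal when given a matrix:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
D = np.diag(a)                      # vector-to-matrix: D = diag(a1, ..., an)
assert np.allclose(D, [[1, 0, 0],
                       [0, 2, 0],
                       [0, 0, 3]])
assert np.allclose(np.diag(D), a)   # applied to a matrix, np.diag inverts this
```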
The same operator is also used to represent block diagonal matrices as A = diag(A_1, …, A_n), where each argument A_i is a matrix.
The diag operator may be written as
diag(a) = (a 1^T) ∘ I,
where ∘ represents the Hadamard product and 1 is a constant vector with elements 1.
== Matrix-to-vector diag operator ==
The inverse matrix-to-vector diag operator is sometimes denoted by the identically named
diag(D) = [a_1 … a_n]^T,
where the argument is now a matrix, and the result is a vector of its diagonal entries.
The following property holds:
diag(AB) = Σ_j (A ∘ B^T)_{ij} = (A ∘ B^T) 1.
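This identity is easy to check numerically; a quick sketch in NumPy (the matrices are arbitrary, chosen only for the test):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

lhs = np.diag(A @ B)          # vector of diagonal entries of AB
rhs = (A * B.T) @ np.ones(4)  # row sums of the Hadamard product A ∘ B^T
assert np.allclose(lhs, rhs)
```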
== Scalar matrix ==
A diagonal matrix with equal diagonal entries is a scalar matrix; that is, a scalar multiple λ of the identity matrix I. Its effect on a vector is scalar multiplication by λ. For example, a 3×3 scalar matrix has the form:
[λ 0 0; 0 λ 0; 0 0 λ] ≡ λI_3
The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute with all other square matrices of the same size. By contrast, over a field (like the real numbers), a diagonal matrix with all diagonal elements distinct only commutes with diagonal matrices (its centralizer is the set of diagonal matrices). That is because if a diagonal matrix D = diag(a_1, …, a_n) has a_i ≠ a_j, then given a matrix M with m_{ij} ≠ 0, the (i, j) terms of the products are
(DM)_{ij} = a_i m_{ij}   and   (MD)_{ij} = m_{ij} a_j,
and a_j m_{ij} ≠ m_{ij} a_i (since one can divide by m_{ij}), so the two matrices do not commute unless the off-diagonal terms are zero. Diagonal matrices where the diagonal entries are not all equal or all distinct have centralizers intermediate between the whole space and only diagonal matrices.
For an abstract vector space V (rather than the concrete vector space K^n), the analog of scalar matrices are scalar transformations. This is true more generally for a module M over a ring R, with the endomorphism algebra End(M) (algebra of linear operators on M) replacing the algebra of matrices. Formally, scalar multiplication is a linear map, inducing a map R → End(M) (from a scalar λ to its corresponding scalar transformation, multiplication by λ) exhibiting End(M) as an R-algebra. For vector spaces, the scalar transforms are exactly the center of the endomorphism algebra, and, similarly, scalar invertible transforms are the center of the general linear group GL(V). The former is more generally true of free modules M ≅ R^n, for which the endomorphism algebra is isomorphic to a matrix algebra.
== Vector operations ==
Multiplying a vector by a diagonal matrix multiplies each of the terms by the corresponding diagonal entry. Given a diagonal matrix D = diag(a_1, …, a_n) and a vector v = [x_1 ⋯ x_n]^T, the product is:
D v = diag(a_1, …, a_n) [x_1; ⋮; x_n] = [a_1 x_1; ⋮; a_n x_n].
This can be expressed more compactly by using a vector instead of a diagonal matrix, d = [a_1 ⋯ a_n]^T, and taking the Hadamard product of the vectors (entrywise product), denoted d ∘ v:
D v = d ∘ v = [a_1; ⋮; a_n] ∘ [x_1; ⋮; x_n] = [a_1 x_1; ⋮; a_n x_n].
This is mathematically equivalent, but avoids storing all the zero terms of this sparse matrix. This product is thus used in machine learning, such as computing products of derivatives in backpropagation or multiplying IDF weights in TF-IDF, since some BLAS frameworks, which multiply matrices efficiently, do not include Hadamard product capability directly.
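For example (a NumPy illustration, not part of the article), the elementwise product reproduces the diagonal-matrix product without ever materializing the matrix:

```python
import numpy as np

d = np.array([2.0, 0.5, -1.0])   # diagonal entries
v = np.array([3.0, 4.0, 5.0])

dense = np.diag(d) @ v           # builds the full matrix: O(n^2) storage and work
fast = d * v                     # Hadamard product: O(n)
assert np.allclose(dense, fast)
assert np.allclose(fast, [6.0, 2.0, -5.0])
```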
== Matrix operations ==
The operations of matrix addition and matrix multiplication are especially simple for diagonal matrices. Write diag(a1, ..., an) for a diagonal matrix whose diagonal entries starting in the upper left corner are a1, ..., an. Then, for addition, we have
diag(a_1, …, a_n) + diag(b_1, …, b_n) = diag(a_1 + b_1, …, a_n + b_n)
and for matrix multiplication,
diag(a_1, …, a_n) diag(b_1, …, b_n) = diag(a_1 b_1, …, a_n b_n).
The diagonal matrix diag(a1, ..., an) is invertible if and only if the entries a1, ..., an are all nonzero. In this case, we have
diag(a_1, …, a_n)^{−1} = diag(a_1^{−1}, …, a_n^{−1}).
In particular, the diagonal matrices form a subring of the ring of all n-by-n matrices.
Multiplying an n-by-n matrix A from the left with diag(a1, ..., an) amounts to multiplying the i-th row of A by ai for all i; multiplying the matrix A from the right with diag(a1, ..., an) amounts to multiplying the i-th column of A by ai for all i.
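This row/column scaling can be seen, and done without forming D at all, via broadcasting (a NumPy illustration, not part of the article):

```python
import numpy as np

a = np.array([2.0, 3.0])          # diagonal entries
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
D = np.diag(a)

# Left multiplication scales row i by a_i ...
assert np.allclose(D @ A, [[2, 2], [3, 3]])
# ... right multiplication scales column i by a_i.
assert np.allclose(A @ D, [[2, 3], [2, 3]])
# Both are equivalent to cheap broadcasting, with no matrix product at all:
assert np.allclose(D @ A, a[:, None] * A)
assert np.allclose(A @ D, A * a[None, :])
```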
== Operator matrix in eigenbasis ==
As explained in determining coefficients of operator matrix, there is a special basis, e_1, …, e_n, for which the matrix A takes the diagonal form. Hence, in the defining equation
A e_j = Σ_i a_{i,j} e_i,
all coefficients a_{i,j} with i ≠ j are zero, leaving only one term per sum. The surviving diagonal elements, a_{i,i}, are known as eigenvalues and designated with λ_i in the equation, which reduces to
A e_i = λ_i e_i.
The resulting equation is known as the eigenvalue equation and used to derive the characteristic polynomial and, further, eigenvalues and eigenvectors.
In other words, the eigenvalues of diag(λ_1, …, λ_n) are λ_1, …, λ_n with associated eigenvectors e_1, …, e_n.
== Properties ==
The determinant of diag(a1, ..., an) is the product a1⋯an.
The adjugate of a diagonal matrix is again diagonal.
Where all matrices are square,
A matrix is diagonal if and only if it is triangular and normal.
A matrix is diagonal if and only if it is both upper- and lower-triangular.
A diagonal matrix is symmetric.
The identity matrix In and zero matrix are diagonal.
A 1×1 matrix is always diagonal.
The square of a 2×2 matrix with zero trace is always diagonal.
== Applications ==
Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operation and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a diagonal matrix.
In fact, a given n-by-n matrix A is similar to a diagonal matrix (meaning that there is a matrix X such that X−1AX is diagonal) if and only if it has n linearly independent eigenvectors. Such matrices are said to be diagonalizable.
Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix (if AA∗ = A∗A then there exists a unitary matrix U such that UAU∗ is diagonal). Furthermore, the singular value decomposition implies that for any matrix A, there exist unitary matrices U and V such that U∗AV is diagonal with positive entries.
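As an illustration (NumPy, not from the article): a real symmetric matrix is normal, so conjugating by its orthonormal eigenvector matrix yields a diagonal matrix of eigenvalues:

```python
import numpy as np

# A symmetric (hence normal) matrix is orthogonally diagonalizable.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, X = np.linalg.eigh(A)   # columns of X are orthonormal eigenvectors

D = X.T @ A @ X                      # X^{-1} A X, with X^{-1} = X^T here
assert np.allclose(D, np.diag(eigenvalues))
assert np.allclose(np.sort(eigenvalues), [1.0, 3.0])
```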
== Operator theory ==
In operator theory, particularly the study of PDEs, operators are particularly easy to understand and PDEs easy to solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable partial differential equation. Therefore, a key technique to understanding operators is a change of coordinates—in the language of operators, an integral transform—which changes the basis to an eigenbasis of eigenfunctions: which makes the equation separable. An important example of this is the Fourier transform, which diagonalizes constant coefficient differentiation operators (or more generally translation invariant operators), such as the Laplacian operator, say, in the heat equation.
Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function–the values of the function at each point correspond to the diagonal entries of a matrix.
== See also ==
== Notes ==
== References ==
== Sources ==
Horn, Roger Alan; Johnson, Charles Royal (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6 | Wikipedia/Scalar_transformation |
In mathematics, a Lie algebra 𝔤 is solvable if its derived series terminates in the zero subalgebra. The derived Lie algebra of the Lie algebra 𝔤 is the subalgebra of 𝔤, denoted [𝔤, 𝔤], that consists of all linear combinations of Lie brackets of pairs of elements of 𝔤. The derived series is the sequence of subalgebras
𝔤 ≥ [𝔤, 𝔤] ≥ [[𝔤, 𝔤], [𝔤, 𝔤]] ≥ [[[𝔤, 𝔤], [𝔤, 𝔤]], [[𝔤, 𝔤], [𝔤, 𝔤]]] ≥ …
If the derived series eventually arrives at the zero subalgebra, then the Lie algebra is called solvable. The derived series for Lie algebras is analogous to the derived series for commutator subgroups in group theory, and solvable Lie algebras are analogs of solvable groups.
Any nilpotent Lie algebra is a fortiori solvable but the converse is not true. The solvable Lie algebras and the semisimple Lie algebras form two large and generally complementary classes, as is shown by the Levi decomposition. The solvable Lie algebras are precisely those that can be obtained from semidirect products, starting from 0 and adding one dimension at a time.
A maximal solvable subalgebra is called a Borel subalgebra. The largest solvable ideal of a Lie algebra is called the radical.
== Characterizations ==
Let 𝔤 be a finite-dimensional Lie algebra over a field of characteristic 0. The following are equivalent.
(i) 𝔤 is solvable.
(ii) ad(𝔤), the adjoint representation of 𝔤, is solvable.
(iii) There is a finite sequence of ideals 𝔞_i of 𝔤: 𝔤 = 𝔞_0 ⊃ 𝔞_1 ⊃ … ⊃ 𝔞_r = 0, with [𝔞_i, 𝔞_i] ⊂ 𝔞_{i+1} for all i.
(iv) [𝔤, 𝔤] is nilpotent.
(v) For 𝔤 of dimension n, there is a finite sequence of subalgebras 𝔞_i of 𝔤: 𝔤 = 𝔞_0 ⊃ 𝔞_1 ⊃ … ⊃ 𝔞_n = 0, with dim 𝔞_i/𝔞_{i+1} = 1 for all i, and with each 𝔞_{i+1} an ideal in 𝔞_i. A sequence of this type is called an elementary sequence.
(vi) There is a finite sequence of subalgebras 𝔤_i of 𝔤: 𝔤 = 𝔤_0 ⊃ 𝔤_1 ⊃ … ⊃ 𝔤_r = 0, such that 𝔤_{i+1} is an ideal in 𝔤_i and 𝔤_i/𝔤_{i+1} is abelian.
(vii) The Killing form B of 𝔤 satisfies B(X, Y) = 0 for all X in 𝔤 and Y in [𝔤, 𝔤]. This is Cartan's criterion for solvability.
== Properties ==
Lie's theorem states that if V is a finite-dimensional vector space over an algebraically closed field of characteristic zero, 𝔤 is a solvable Lie algebra, and π is a representation of 𝔤 over V, then there exists a simultaneous eigenvector v ∈ V of the endomorphisms π(X) for all elements X ∈ 𝔤.
Every Lie subalgebra and every quotient of a solvable Lie algebra are solvable.
Given a Lie algebra 𝔤 and an ideal 𝔥 in it, 𝔤 is solvable if and only if both 𝔥 and 𝔤/𝔥 are solvable. The analogous statement is true for nilpotent Lie algebras provided 𝔥 is contained in the center. Thus, an extension of a solvable algebra by a solvable algebra is solvable, while a central extension of a nilpotent algebra by a nilpotent algebra is nilpotent.
A solvable nonzero Lie algebra has a nonzero abelian ideal, the last nonzero term in the derived series.
If 𝔞, 𝔟 ⊂ 𝔤 are solvable ideals, then so is 𝔞 + 𝔟. Consequently, if 𝔤 is finite-dimensional, then there is a unique solvable ideal 𝔯 ⊂ 𝔤 containing all solvable ideals in 𝔤. This ideal is the radical of 𝔤.
A solvable Lie algebra 𝔤 has a unique largest nilpotent ideal 𝔫, called the nilradical, the set of all X ∈ 𝔤 such that ad_X is nilpotent. If D is any derivation of 𝔤, then D(𝔤) ⊂ 𝔫.
== Completely solvable Lie algebras ==
A Lie algebra 𝔤 is called completely solvable or split solvable if it has an elementary sequence (in the sense of characterization (v) above) of ideals in 𝔤 from 0 to 𝔤. A finite-dimensional nilpotent Lie algebra is completely solvable, and a completely solvable Lie algebra is solvable. Over an algebraically closed field every solvable Lie algebra is completely solvable, but the 3-dimensional real Lie algebra of the group of Euclidean isometries of the plane is solvable but not completely solvable.
A solvable Lie algebra 𝔤 over a field k is split solvable if and only if the eigenvalues of ad_X are in k for all X in 𝔤.
== Examples ==
=== Abelian Lie algebras ===
Every abelian Lie algebra 𝔞 is solvable by definition, since its commutator [𝔞, 𝔞] = 0. This includes the Lie algebra of diagonal matrices in 𝔤𝔩(n), which for n = 3 are of the form
[∗ 0 0; 0 ∗ 0; 0 0 ∗].
The Lie algebra structure on a vector space V given by the trivial bracket [m, n] = 0 for any two matrices m, n ∈ End(V) gives another example.
=== Nilpotent Lie algebras ===
Another class of examples comes from nilpotent Lie algebras, since the adjoint representation is solvable. One example is the class of matrices of the form
[0 ∗ ∗; 0 0 ∗; 0 0 0],
called the Lie algebra of strictly upper triangular matrices. In addition, the Lie algebra of upper triangular matrices in 𝔤𝔩(n) forms a solvable Lie algebra. This includes matrices of the form
[∗ ∗ ∗; 0 ∗ ∗; 0 0 ∗]
and is denoted 𝔟_k.
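This solvability can be checked numerically. The following sketch (NumPy; the helper functions are our own, not from the article) computes the dimensions along the derived series of the upper triangular 3×3 matrices by repeatedly spanning all brackets; the series reaches 0, confirming solvability:

```python
import numpy as np
from itertools import product

def span_basis(mats, tol=1e-9):
    """Return a basis (as matrices) of the linear span of the given matrices."""
    if not mats:
        return []
    M = np.array([m.ravel() for m in mats])
    u, s, vt = np.linalg.svd(M)          # row space of M = span of the mats
    rank = int((s > tol).sum())
    return [vt[i].reshape(mats[0].shape) for i in range(rank)]

def derived_series_dims(basis):
    """Dimensions g >= [g,g] >= [[g,g],[g,g]] >= ... until 0 or stabilization."""
    dims = [len(basis)]
    while basis:
        brackets = [x @ y - y @ x for x, y in product(basis, repeat=2)]
        basis = span_basis(brackets)
        dims.append(len(basis))
        if dims[-1] == dims[-2]:         # stabilized above zero: not solvable
            break
    return dims

# Basis of the upper triangular 3x3 matrices.
E = lambda i, j: np.eye(3)[:, [i]] @ np.eye(3)[[j], :]   # matrix unit E_{i,j}
b3 = [E(i, j) for i in range(3) for j in range(3) if j >= i]
print(derived_series_dims(b3))   # [6, 3, 1, 0] -> terminates at 0, so solvable
```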
=== Solvable but not split-solvable ===
Let 𝔤 be the set of matrices of the form
X = [0 θ x; −θ 0 y; 0 0 0],   θ, x, y ∈ ℝ.
Then 𝔤 is solvable, but not split solvable. It is isomorphic to the Lie algebra of the group of translations and rotations in the plane.
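The failure of split solvability can be seen numerically: in the basis consisting of the rotation generator and the two translation generators, ad of the rotation generator has eigenvalues 0 and ±i, which do not lie in ℝ. A sketch (NumPy; the helper `ad_matrix` is our own):

```python
import numpy as np

# Basis: rotation generator r and translation generators x, y (3x3 matrices).
r = np.array([[0., 1, 0], [-1, 0, 0], [0, 0, 0]])
x = np.array([[0., 0, 1], [0, 0, 0], [0, 0, 0]])
y = np.array([[0., 0, 0], [0, 0, 1], [0, 0, 0]])
basis = [r, x, y]

def ad_matrix(X, basis):
    """Matrix of ad_X in the given basis, via least squares on flattened brackets."""
    B = np.array([b.ravel() for b in basis]).T          # 9 x 3
    cols = [np.linalg.lstsq(B, (X @ b - b @ X).ravel(), rcond=None)[0]
            for b in basis]
    return np.array(cols).T

eig = np.linalg.eigvals(ad_matrix(r, basis))
# The eigenvalues of ad_r are 0 and +-i: not all real, so g is not split
# solvable over the reals.
print(np.sort_complex(np.round(eig, 6)))
```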
=== Non-example ===
A semisimple Lie algebra 𝔩 is never solvable, since its radical Rad(𝔩), which is the largest solvable ideal in 𝔩, is trivial.
== Solvable Lie groups ==
Because the term "solvable" is also used for solvable groups in group theory, there are several possible definitions of solvable Lie group. For a Lie group G, there is:
termination of the usual derived series of the group G (as an abstract group);
termination of the closures of the derived series;
having a solvable Lie algebra.
== See also ==
Cartan's criterion
Killing form
Lie-Kolchin theorem
Solvmanifold
Dixmier mapping
== Notes ==
== References ==
Fulton, W.; Harris, J. (1991). Representation theory. A first course. Graduate Texts in Mathematics. Vol. 129. New York: Springer-Verlag. ISBN 978-0-387-97527-6. MR 1153249.
Humphreys, James E. (1972). Introduction to Lie Algebras and Representation Theory. Graduate Texts in Mathematics. Vol. 9. New York: Springer-Verlag. ISBN 0-387-90053-5.
Knapp, A. W. (2002). Lie groups beyond an introduction. Progress in Mathematics. Vol. 120 (2nd ed.). Boston·Basel·Berlin: Birkhäuser. ISBN 0-8176-4259-5..
Serre, Jean-Pierre (2001). Complex Semisimple Lie Algebras. Berlin: Springer. ISBN 3-5406-7827-1.
== External links ==
EoM article Lie algebra, solvable
EoM article Lie group, solvable | Wikipedia/Derived_algebra |
In mathematics, the special linear Lie algebra of order n over a field F, denoted 𝔰𝔩_n F or 𝔰𝔩(n, F), is the Lie algebra of all the n × n matrices (with entries in F) with trace zero and with the Lie bracket [X, Y] := XY − YX given by the commutator. This algebra is well studied and understood, and is often used as a model for the study of other Lie algebras. The Lie group that it generates is the special linear group.
== Applications ==
The Lie algebra 𝔰𝔩_2 ℂ is central to the study of special relativity, general relativity and supersymmetry: its fundamental representation is the so-called spinor representation, while its adjoint representation generates the Lorentz group SO(3,1) of special relativity.
The algebra 𝔰𝔩_2 ℝ plays an important role in the study of chaos and fractals, as it generates the Möbius group SL(2,R), which describes the automorphisms of the hyperbolic plane, the simplest Riemann surface of negative curvature; by contrast, SL(2,C) describes the automorphisms of the hyperbolic 3-dimensional ball.
== Representation theory ==
=== Representation theory of sl2C ===
The Lie algebra 𝔰𝔩_2 ℂ is a three-dimensional complex Lie algebra. Its defining feature is that it contains a basis e, h, f satisfying the commutation relations
[e, f] = h,   [h, f] = −2f,   and   [h, e] = 2e.
This is a Cartan–Weyl basis for 𝔰𝔩_2 ℂ.
It has an explicit realization in terms of 2-by-2 complex matrices with zero trace:
E = [0 1; 0 0],   F = [0 0; 1 0],   H = [1 0; 0 −1].
This is the fundamental or defining representation for 𝔰𝔩_2 ℂ.
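The commutation relations are a one-line check for these matrices (a NumPy illustration, not part of the article):

```python
import numpy as np

E = np.array([[0., 1], [0, 0]])
F = np.array([[0., 0], [1, 0]])
H = np.array([[1., 0], [0, -1]])

comm = lambda X, Y: X @ Y - Y @ X

assert np.allclose(comm(E, F), H)        # [e, f] = h
assert np.allclose(comm(H, E), 2 * E)    # [h, e] = 2e
assert np.allclose(comm(H, F), -2 * F)   # [h, f] = -2f
```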
The Lie algebra 𝔰𝔩_2 ℂ can be viewed as a subspace of its universal enveloping algebra U = U(𝔰𝔩_2 ℂ) and, in U, there are the following commutator relations, shown by induction:
[h, f^k] = −2k f^k,   [h, e^k] = 2k e^k,
[e, f^k] = −k(k − 1) f^{k−1} + k f^{k−1} h.
Note that, here, the powers f^k, etc. refer to powers as elements of the algebra U and not matrix powers. The first basic fact (that follows from the above commutator relations) is:
From this lemma, one deduces the following fundamental result:
The first statement is true since either v_j is zero or has h-eigenvalue distinct from the eigenvalues of the others that are nonzero. Saying v is a 𝔟-weight vector is equivalent to saying that it is simultaneously an eigenvector of h and e; a short calculation then shows that, in that case, the e-eigenvalue of v is zero: e ⋅ v = 0. Thus, for some integer N ≥ 0, v_N ≠ 0 and v_{N+1} = v_{N+2} = ⋯ = 0, and in particular, by the earlier lemma,
0 = e ⋅ v_{N+1} = (λ − (N + 1) + 1) v_N,
which implies that λ = N. It remains to show that W = span{ v_j ∣ j ≥ 0 } is irreducible. If 0 ≠ W′ ⊂ W is a subrepresentation, then it admits an eigenvector, which must have eigenvalue of the form N − 2j and is thus proportional to v_j. By the preceding lemma, v = v_0 is then in W′, and thus W′ = W. □
As a corollary, one deduces:
If V has finite dimension and is irreducible, then the h-eigenvalue of v is a nonnegative integer N and V has a basis v, fv, f²v, ⋯, f^N v.
Conversely, if the h-eigenvalue of v is a nonnegative integer and V is irreducible, then V has a basis v, fv, f²v, ⋯, f^N v; in particular, V has finite dimension.
The beautiful special case of 𝔰𝔩_2 shows a general way to find irreducible representations of Lie algebras. Namely, we divide the algebra into three subalgebras "h" (the Cartan subalgebra), "e", and "f", which behave approximately like their namesakes in 𝔰𝔩_2. Namely, in an irreducible representation, we have a "highest" eigenvector of "h", on which "e" acts by zero. The basis of the irreducible representation is generated by the action of "f" on the highest eigenvectors of "h". See the theorem of the highest weight.
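The highest-weight construction can be made concrete: for each nonnegative integer N there is an (N+1)-dimensional irreducible representation with basis v, fv, …, f^N v. The following sketch (NumPy; the normalization of e is one common choice, not prescribed by the article) builds these matrices and verifies the defining relations, together with the enveloping-algebra identity for [e, f^k]:

```python
import numpy as np

def sl2_irrep(N):
    """Matrices of e, f, h on the (N+1)-dimensional irreducible representation,
    in the basis v, f v, ..., f^N v (with a standard normalization)."""
    h = np.diag([N - 2 * j for j in range(N + 1)]).astype(float)
    f = np.zeros((N + 1, N + 1))
    e = np.zeros((N + 1, N + 1))
    for j in range(N):
        f[j + 1, j] = 1.0                 # f: v_j -> v_{j+1}
        e[j, j + 1] = (j + 1) * (N - j)   # e: v_{j+1} -> (j+1)(N-j) v_j
    return e, f, h

comm = lambda X, Y: X @ Y - Y @ X
power = np.linalg.matrix_power

for N in range(5):
    e, f, h = sl2_irrep(N)
    assert np.allclose(comm(e, f), h)
    assert np.allclose(comm(h, e), 2 * e)
    assert np.allclose(comm(h, f), -2 * f)
    # [e, f^k] = -k(k-1) f^{k-1} + k f^{k-1} h holds in every representation:
    for k in range(1, N + 2):
        lhs = comm(e, power(f, k))
        rhs = -k * (k - 1) * power(f, k - 1) + k * power(f, k - 1) @ h
        assert np.allclose(lhs, rhs)
```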
=== Representation theory of slnC ===
When 𝔤 = 𝔰𝔩_n ℂ = 𝔰𝔩(V) for a complex vector space V of dimension n, each finite-dimensional irreducible representation of 𝔤 can be found as a subrepresentation of a tensor power of V.
The Lie algebra can be explicitly realized as a matrix Lie algebra of traceless n × n matrices. This is the fundamental representation for 𝔰𝔩_n ℂ.
Set M_{i,j} to be the matrix with a one in the (i, j) entry and zeroes everywhere else. Then the matrices
H_i := M_{i,i} − M_{i+1,i+1}, with 1 ≤ i ≤ n − 1, and
M_{i,j}, with i ≠ j,
form a basis for 𝔰𝔩_n ℂ. This is technically an abuse of notation: these are really the images of the basis of 𝔰𝔩_n ℂ in the fundamental representation.
Furthermore, this is in fact a Cartan–Weyl basis, with the H_i spanning the Cartan subalgebra. Introducing the notation E_{i,j} = M_{i,j} if j > i, and F_{i,j} = M_{i,j}^T = M_{j,i}, also for j > i, the E_{i,j} are positive roots and the F_{i,j} are corresponding negative roots.
A basis of simple roots is given by
E
i
,
i
+
1
{\displaystyle E_{i,i+1}}
for
1
≤
i
≤
n
−
1
{\displaystyle 1\leq i\leq n-1}
.
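The basis just described is easy to realize concretely. The following sketch (assuming NumPy; the helper `M` and the containers `H`, `E`, `F` mirror the notation above but are otherwise ours) builds it for n = 3, checks that every element is traceless, that the count matches dim sl_n = n² − 1, and verifies one root relation, [H_1, E_{1,2}] = 2 E_{1,2}:

```python
import numpy as np

n = 3  # illustrate with sl_3(C); any n >= 2 works

def M(i, j, n=n):
    """Matrix unit M_{i,j}: 1 in the (i, j) entry (1-indexed), 0 elsewhere."""
    m = np.zeros((n, n))
    m[i - 1, j - 1] = 1.0
    return m

# Cartan generators H_i = M_{i,i} - M_{i+1,i+1}, for 1 <= i <= n-1
H = [M(i, i) - M(i + 1, i + 1) for i in range(1, n)]
# Off-diagonal units: E_{i,j} = M_{i,j} (j > i) and F_{i,j} = M_{j,i} (j > i)
E = {(i, j): M(i, j) for i in range(1, n + 1) for j in range(1, n + 1) if j > i}
F = {(i, j): M(j, i) for i in range(1, n + 1) for j in range(1, n + 1) if j > i}

# Every basis element is traceless, so it lies in sl_n
for X in H + list(E.values()) + list(F.values()):
    assert np.trace(X) == 0

# The count matches dim sl_n = n^2 - 1
assert len(H) + len(E) + len(F) == n * n - 1

# Root behaviour of a simple root vector: [H_1, E_{1,2}] = 2 E_{1,2}
comm = H[0] @ E[(1, 2)] - E[(1, 2)] @ H[0]
assert np.allclose(comm, 2 * E[(1, 2)])
```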
== Notes ==
== References ==
Etingof, Pavel. "Lecture Notes on Representation Theory".
Kac, Victor (1990). "Integrable Representations of Kac–Moody Algebras and the Weyl Group". Infinite dimensional Lie algebras (3rd ed.). Cambridge University Press. pp. 30–46. doi:10.1017/CBO9780511626234.004. ISBN 0-521-46693-8.
Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer
A. L. Onishchik, E. B. Vinberg, V. V. Gorbatsevich, Structure of Lie groups and Lie algebras. Lie groups and Lie algebras, III. Encyclopaedia of Mathematical Sciences, 41. Springer-Verlag, Berlin, 1994. iv+248 pp. (A translation of Current problems in mathematics. Fundamental directions. Vol. 41, Akad. Nauk SSSR, Vsesoyuz. Inst. Nauchn. i Tekhn. Inform., Moscow, 1990. Translation by V. Minachin. Translation edited by A. L. Onishchik and E. B. Vinberg) ISBN 3-540-54683-9
V. L. Popov, E. B. Vinberg, Invariant theory. Algebraic geometry. IV. Linear algebraic groups. Encyclopaedia of Mathematical Sciences, 55. Springer-Verlag, Berlin, 1994. vi+284 pp. (A translation of Algebraic geometry. 4, Akad. Nauk SSSR Vsesoyuz. Inst. Nauchn. i Tekhn. Inform., Moscow, 1989. Translation edited by A. N. Parshin and I. R. Shafarevich) ISBN 3-540-54682-0
Serre, Jean-Pierre (2001), Algèbres de Lie semi-simples complexes [Complex Semisimple Lie Algebras], Springer Monographs in Mathematics, translated by Jones, G. A., Springer, doi:10.1007/978-3-642-56884-8, ISBN 978-3-540-67827-4.
== See also ==
Affine Weyl group
Finite Coxeter group
Hasse diagram
Linear algebraic group
Nilpotent orbit
Root system
sl2-triple
Weyl group
In linear algebra, an eigenvector (EYE-gən-) or characteristic vector is a vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector {\displaystyle \mathbf {v} } of a linear transformation {\displaystyle T} is scaled by a constant factor {\displaystyle \lambda } when the linear transformation is applied to it: {\displaystyle T\mathbf {v} =\lambda \mathbf {v} }. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor {\displaystyle \lambda } (possibly negative).
Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed.
The eigenvectors and eigenvalues of a linear transformation serve to characterize it, and so they play important roles in all areas where linear algebra is applied, from geology to quantum mechanics. In particular, it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same transformation (feedback). In such an application, the largest eigenvalue is of particular importance, because it governs the long-term behavior of the system after many applications of the linear transformation, and the associated eigenvector is the steady state of the system.
== Matrices ==
For an {\displaystyle n{\times }n} matrix A and a nonzero vector {\displaystyle \mathbf {v} } of length {\displaystyle n}, if multiplying A by {\displaystyle \mathbf {v} } (denoted {\displaystyle A\mathbf {v} }) simply scales {\displaystyle \mathbf {v} } by a factor λ, where λ is a scalar, then {\displaystyle \mathbf {v} } is called an eigenvector of A, and λ is the corresponding eigenvalue. This relationship can be expressed as: {\displaystyle A\mathbf {v} =\lambda \mathbf {v} }.
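The defining relation is easy to check numerically. A short NumPy sketch, using the shear mapping discussed elsewhere in the article as the transformation:

```python
import numpy as np

# A shear mapping: each point moves horizontally in proportion to its height
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# A horizontal vector keeps its direction: it is an eigenvector with eigenvalue 1
v = np.array([1.0, 0.0])
assert np.allclose(A @ v, 1.0 * v)

# A generic vector is tilted by the shear, so it is not an eigenvector
w = np.array([1.0, 1.0])
Aw = A @ w
assert not np.allclose(Aw / np.linalg.norm(Aw), w / np.linalg.norm(w))
```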
Given an n-dimensional vector space and a choice of basis, there is a direct correspondence between linear transformations from the vector space into itself and n-by-n square matrices. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of linear transformations, or the language of matrices.
== Overview ==
Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen (cognate with the English word own) for 'proper', 'characteristic', 'own'. Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization.
In essence, an eigenvector v of a linear transformation T is a nonzero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation
{\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} ,}
referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex.
The example here, based on the Mona Lisa, provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either.
Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like {\displaystyle {\tfrac {d}{dx}}}, in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as
{\displaystyle {\frac {d}{dx}}e^{\lambda x}=\lambda e^{\lambda x}.}
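This eigenfunction relation can be verified symbolically, for instance with SymPy (an illustrative check, not part of the standard presentation):

```python
import sympy as sp

x, lam = sp.symbols("x lam")
f = sp.exp(lam * x)

# d/dx e^{lam x} = lam e^{lam x}: f is an eigenfunction of d/dx with eigenvalue lam
assert sp.simplify(sp.diff(f, x) - lam * f) == 0
```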
Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices. If the linear transformation is expressed in the form of an n by n matrix A, then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplication
{\displaystyle A\mathbf {v} =\lambda \mathbf {v} ,}
where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix, for example by diagonalizing it.
Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them:
The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation.
The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace, or the characteristic space of T associated with that eigenvalue.
If a set of eigenvectors of T forms a basis of the domain of T, then this basis is called an eigenbasis.
== History ==
Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.
In the 18th century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the importance of the principal axes. Joseph-Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix.
In the early 19th century, Augustin-Louis Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions. Cauchy also coined the term racine caractéristique (characteristic root), for what is now called eigenvalue; his term survives in characteristic equation.
Later, Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat equation by separation of variables in his 1822 treatise The Analytic Theory of Heat (Théorie analytique de la chaleur). Charles-François Sturm elaborated on Fourier's ideas further, and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues. This was extended by Charles Hermite in 1855 to what are now called Hermitian matrices.
Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle, and Alfred Clebsch found the corresponding result for skew-symmetric matrices. Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace, by realizing that defective matrices can cause instability.
In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory. Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.
At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices. He was the first to use the German word eigen, which means "own", to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Hermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today.
The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis and Vera Kublanovskaya in 1961.
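The power method is short enough to sketch. The implementation below is illustrative (the function name and parameters are ours, not von Mises' original formulation): it repeatedly applies the matrix to a vector and renormalizes, converging to the dominant eigenpair when the eigenvalue of largest magnitude is unique.

```python
import numpy as np

def power_method(A, num_iter=500, seed=0):
    """Approximate the dominant eigenpair of A by repeated multiplication."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iter):
        w = A @ v
        v = w / np.linalg.norm(w)  # renormalize to avoid overflow/underflow
    # Rayleigh quotient of the (unit) iterate gives the eigenvalue estimate
    lam = v @ A @ v
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_method(A)
assert np.isclose(lam, 3.0)          # dominant eigenvalue of this A
assert np.allclose(A @ v, lam * v)   # v is (numerically) an eigenvector
```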
== Eigenvalues and eigenvectors of matrices ==
Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.
Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices, which is especially common in numerical and computational applications.
Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors
{\displaystyle \mathbf {x} ={\begin{bmatrix}1\\-3\\4\end{bmatrix}}\quad {\mbox{and}}\quad \mathbf {y} ={\begin{bmatrix}-20\\60\\-80\end{bmatrix}}.}
These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that
{\displaystyle \mathbf {x} =\lambda \mathbf {y} .}
In this case, {\displaystyle \lambda =-{\frac {1}{20}}}.
Now consider the linear transformation of n-dimensional vectors defined by an n by n matrix A,
{\displaystyle A\mathbf {v} =\mathbf {w} ,}
or
{\displaystyle {\begin{bmatrix}A_{11}&A_{12}&\cdots &A_{1n}\\A_{21}&A_{22}&\cdots &A_{2n}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nn}\\\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n}\end{bmatrix}}={\begin{bmatrix}w_{1}\\w_{2}\\\vdots \\w_{n}\end{bmatrix}}}
where, for each row,
{\displaystyle w_{i}=A_{i1}v_{1}+A_{i2}v_{2}+\cdots +A_{in}v_{n}=\sum _{j=1}^{n}A_{ij}v_{j}.}
If it occurs that v and w are scalar multiples, that is if
{\displaystyle A\mathbf {v} =\lambda \mathbf {v} ,} (1)
then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A.
Equation (1) can be stated equivalently as
{\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} ,} (2)
where I is the n by n identity matrix and 0 is the zero vector.
=== Eigenvalues and the characteristic polynomial ===
Equation (2) has a nonzero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are the values of λ that satisfy the equation
{\displaystyle \det(A-\lambda I)=0.} (3)
Using the Leibniz formula for determinants, the left-hand side of equation (3) is a polynomial function of the variable λ and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always (−1)nλn. This polynomial is called the characteristic polynomial of A. Equation (3) is called the characteristic equation or the secular equation of A.
The fundamental theorem of algebra implies that the characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms,
{\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )(\lambda _{2}-\lambda )\cdots (\lambda _{n}-\lambda ),} (4)
where each λi may be real but in general is a complex number. The numbers λ1, λ2, ..., λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A.
As a brief example, which is described in more detail in the examples section later, consider the matrix
{\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.}
Taking the determinant of (A − λI), the characteristic polynomial of A is
{\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}=3-4\lambda +\lambda ^{2}.}
Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation
{\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} }. In this example, the eigenvectors are any nonzero scalar multiples of
{\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}1\\-1\end{bmatrix}},\quad \mathbf {v} _{\lambda =3}={\begin{bmatrix}1\\1\end{bmatrix}}.}
If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers.
The non-real roots of a real polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs.
=== Spectrum of a matrix ===
The spectrum of a matrix is the list of its eigenvalues, repeated according to multiplicity; in an alternative notation, it is the set of eigenvalues together with their multiplicities.
An important quantity associated with the spectrum is the maximum absolute value of any eigenvalue. This is known as the spectral radius of the matrix.
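The spectral radius is straightforward to compute once the eigenvalues are known. A NumPy sketch, using an arbitrary example matrix whose eigenvalues are the complex pair ±i√2:

```python
import numpy as np

A = np.array([[0.0, -2.0],
              [1.0,  0.0]])

# Eigenvalues may be complex; the spectral radius is the largest absolute value
eigenvalues = np.linalg.eigvals(A)       # here: +i*sqrt(2) and -i*sqrt(2)
spectral_radius = max(abs(eigenvalues))
assert np.isclose(spectral_radius, np.sqrt(2.0))
```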
=== Algebraic multiplicity ===
Let λi be an eigenvalue of an n by n matrix A. The algebraic multiplicity μA(λi) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (λ − λi)k divides that polynomial evenly.
Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. Whereas equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can also be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity,
{\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )^{\mu _{A}(\lambda _{1})}(\lambda _{2}-\lambda )^{\mu _{A}(\lambda _{2})}\cdots (\lambda _{d}-\lambda )^{\mu _{A}(\lambda _{d})}.}
If d = n then the right-hand side is the product of n linear terms and this is the same as equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as
{\displaystyle {\begin{aligned}1&\leq \mu _{A}(\lambda _{i})\leq n,\\\mu _{A}&=\sum _{i=1}^{d}\mu _{A}\left(\lambda _{i}\right)=n.\end{aligned}}}
If μA(λi) = 1, then λi is said to be a simple eigenvalue. If μA(λi) equals the geometric multiplicity of λi, γA(λi), defined in the next section, then λi is said to be a semisimple eigenvalue.
=== Eigenspaces, geometric multiplicity, and the eigenbasis for matrices ===
Given a particular eigenvalue λ of the n by n matrix A, define the set E to be all vectors v that satisfy equation (2),
{\displaystyle E=\left\{\mathbf {v} :\left(A-\lambda I\right)\mathbf {v} =\mathbf {0} \right\}.}
On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI). E is called the eigenspace or characteristic space of A associated with λ. In general λ is a complex number and the eigenvectors are complex n by 1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of
{\displaystyle \mathbb {C} ^{n}}.
Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written u, v ∈ E, then (u + v) ∈ E or equivalently A(u + v) = λ(u + v). This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if v ∈ E and α is a complex number, (αv) ∈ E or equivalently A(αv) = λ(αv). This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ.
The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity
{\displaystyle \gamma _{A}(\lambda )}. Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as
{\displaystyle \gamma _{A}(\lambda )=n-\operatorname {rank} (A-\lambda I).}
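This rank formula translates directly into code. A sketch (assuming NumPy; the function name `geometric_multiplicity` is ours, not standard) that also exhibits a defective matrix whose geometric multiplicity is smaller than its algebraic multiplicity:

```python
import numpy as np

def geometric_multiplicity(A, lam):
    """gamma_A(lam) = n - rank(A - lam*I), the dimension of the eigenspace."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

# A defective matrix: lam = 2 is a double root of the characteristic polynomial ...
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
# ... but its eigenspace is only one-dimensional
assert geometric_multiplicity(A, 2.0) == 1

# For the identity matrix, lam = 1 has a full n-dimensional eigenspace
assert geometric_multiplicity(np.eye(3), 1.0) == 3
```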
Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n.
{\displaystyle 1\leq \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )\leq n}
To prove the inequality {\displaystyle \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )}, consider how the definition of geometric multiplicity implies the existence of {\displaystyle \gamma _{A}(\lambda )} orthonormal eigenvectors {\displaystyle {\boldsymbol {v}}_{1},\,\ldots ,\,{\boldsymbol {v}}_{\gamma _{A}(\lambda )}}, such that {\displaystyle A{\boldsymbol {v}}_{k}=\lambda {\boldsymbol {v}}_{k}}. We can therefore find a (unitary) matrix V whose first {\displaystyle \gamma _{A}(\lambda )} columns are these eigenvectors, and whose remaining columns can be any orthonormal set of {\displaystyle n-\gamma _{A}(\lambda )} vectors orthogonal to these eigenvectors of A. Then V has full rank and is therefore invertible. Evaluating {\displaystyle D:=V^{T}AV}, we get a matrix whose top left block is the diagonal matrix {\displaystyle \lambda I_{\gamma _{A}(\lambda )}}. This can be seen by evaluating what the left-hand side does to the first column basis vectors. By reorganizing and adding {\displaystyle -\xi V} on both sides, we get {\displaystyle (A-\xi I)V=V(D-\xi I)} since I commutes with V. In other words, {\displaystyle A-\xi I} is similar to {\displaystyle D-\xi I}, and {\displaystyle \det(A-\xi I)=\det(D-\xi I)}. But from the definition of D, we know that {\displaystyle \det(D-\xi I)} contains a factor {\displaystyle (\xi -\lambda )^{\gamma _{A}(\lambda )}}, which means that the algebraic multiplicity of {\displaystyle \lambda } must satisfy {\displaystyle \mu _{A}(\lambda )\geq \gamma _{A}(\lambda )}.
Suppose A has {\displaystyle d\leq n} distinct eigenvalues {\displaystyle \lambda _{1},\ldots ,\lambda _{d}}, where the geometric multiplicity of {\displaystyle \lambda _{i}} is {\displaystyle \gamma _{A}(\lambda _{i})}. The total geometric multiplicity of A,
{\displaystyle {\begin{aligned}\gamma _{A}&=\sum _{i=1}^{d}\gamma _{A}(\lambda _{i}),\\d&\leq \gamma _{A}\leq n,\end{aligned}}}
is the dimension of the sum of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. If
{\displaystyle \gamma _{A}=n}, then
The direct sum of the eigenspaces of all of A's eigenvalues is the entire vector space {\displaystyle \mathbb {C} ^{n}}.
A basis of {\displaystyle \mathbb {C} ^{n}} can be formed from n linearly independent eigenvectors of A; such a basis is called an eigenbasis.
Any vector in {\displaystyle \mathbb {C} ^{n}} can be written as a linear combination of eigenvectors of A.
=== Additional properties ===
Let {\displaystyle A} be an arbitrary {\displaystyle n\times n} matrix of complex numbers with eigenvalues {\displaystyle \lambda _{1},\ldots ,\lambda _{n}}. Each eigenvalue appears {\displaystyle \mu _{A}(\lambda _{i})} times in this list, where {\displaystyle \mu _{A}(\lambda _{i})} is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues:
The trace of {\displaystyle A}, defined as the sum of its diagonal elements, is also the sum of all eigenvalues,
{\displaystyle \operatorname {tr} (A)=\sum _{i=1}^{n}a_{ii}=\sum _{i=1}^{n}\lambda _{i}=\lambda _{1}+\lambda _{2}+\cdots +\lambda _{n}.}
The determinant of {\displaystyle A} is the product of all its eigenvalues,
{\displaystyle \det(A)=\prod _{i=1}^{n}\lambda _{i}=\lambda _{1}\lambda _{2}\cdots \lambda _{n}.}
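Both identities are easy to spot-check numerically. A NumPy sketch with an arbitrary random matrix (the eigenvalues may be complex, but their sum and product are real here up to rounding):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
eigenvalues = np.linalg.eigvals(A)

# trace(A) equals the sum of the eigenvalues ...
assert np.isclose(np.trace(A), eigenvalues.sum().real)
# ... and det(A) equals their product
assert np.isclose(np.linalg.det(A), np.prod(eigenvalues).real)
```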
The eigenvalues of the {\displaystyle k}th power of {\displaystyle A}, i.e. the eigenvalues of {\displaystyle A^{k}}, for any positive integer {\displaystyle k}, are {\displaystyle \lambda _{1}^{k},\ldots ,\lambda _{n}^{k}}.
The matrix {\displaystyle A} is invertible if and only if every eigenvalue is nonzero.
If {\displaystyle A} is invertible, then the eigenvalues of {\displaystyle A^{-1}} are {\textstyle {\frac {1}{\lambda _{1}}},\ldots ,{\frac {1}{\lambda _{n}}}} and each eigenvalue's geometric multiplicity coincides. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues share the same algebraic multiplicity.
If {\displaystyle A} is equal to its conjugate transpose {\displaystyle A^{*}}, or equivalently if {\displaystyle A} is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix.
If {\displaystyle A} is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively.
If {\displaystyle A} is unitary, every eigenvalue has absolute value {\displaystyle |\lambda _{i}|=1}.
If {\displaystyle A} is an {\displaystyle n\times n} matrix and {\displaystyle \{\lambda _{1},\ldots ,\lambda _{k}\}} are its eigenvalues, then the eigenvalues of the matrix {\displaystyle I+A} (where {\displaystyle I} is the identity matrix) are {\displaystyle \{\lambda _{1}+1,\ldots ,\lambda _{k}+1\}}. Moreover, if {\displaystyle \alpha \in \mathbb {C} }, the eigenvalues of {\displaystyle \alpha I+A} are {\displaystyle \{\lambda _{1}+\alpha ,\ldots ,\lambda _{k}+\alpha \}}. More generally, for a polynomial {\displaystyle P}, the eigenvalues of the matrix {\displaystyle P(A)} are {\displaystyle \{P(\lambda _{1}),\ldots ,P(\lambda _{k})\}}.
=== Left and right eigenvectors ===
Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the
{\displaystyle n\times n} matrix {\displaystyle A} in the defining equation, equation (1),
{\displaystyle A\mathbf {v} =\lambda \mathbf {v} .}
The eigenvalue and eigenvector problem can also be defined for row vectors that left multiply matrix {\displaystyle A}. In this formulation, the defining equation is
{\displaystyle \mathbf {u} A=\kappa \mathbf {u} ,}
where {\displaystyle \kappa } is a scalar and {\displaystyle u} is a {\displaystyle 1\times n} matrix. Any row vector {\displaystyle u} satisfying this equation is called a left eigenvector of {\displaystyle A} and {\displaystyle \kappa } is its associated eigenvalue. Taking the transpose of this equation,
{\displaystyle A^{\textsf {T}}\mathbf {u} ^{\textsf {T}}=\kappa \mathbf {u} ^{\textsf {T}}.}
Comparing this equation to equation (1), it follows immediately that a left eigenvector of {\displaystyle A} is the same as the transpose of a right eigenvector of {\displaystyle A^{\textsf {T}}}, with the same eigenvalue. Furthermore, since the characteristic polynomial of {\displaystyle A^{\textsf {T}}} is the same as the characteristic polynomial of {\displaystyle A}, the left and right eigenvectors of {\displaystyle A} are associated with the same eigenvalues.
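This correspondence can be checked numerically: the right eigenvectors of the transpose are (as rows) left eigenvectors of the original matrix. A NumPy sketch with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

# Right eigenvectors of A^T are, transposed, left eigenvectors of A
eigenvalues, U = np.linalg.eig(A.T)

for lam, u in zip(eigenvalues, U.T):
    # u A = lam u: u is a left eigenvector (as a row vector)
    assert np.allclose(u @ A, lam * u)

# A and A^T share the same characteristic polynomial, hence the same eigenvalues
assert np.allclose(sorted(eigenvalues), sorted(np.linalg.eigvals(A)))
```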
=== Diagonalization and the eigendecomposition ===
Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A,
{\displaystyle Q={\begin{bmatrix}\mathbf {v} _{1}&\mathbf {v} _{2}&\cdots &\mathbf {v} _{n}\end{bmatrix}}.}
Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue,
{\displaystyle AQ={\begin{bmatrix}\lambda _{1}\mathbf {v} _{1}&\lambda _{2}\mathbf {v} _{2}&\cdots &\lambda _{n}\mathbf {v} _{n}\end{bmatrix}}.}
With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q. Then
{\displaystyle AQ=Q\Lambda .}
Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q−1,
{\displaystyle A=Q\Lambda Q^{-1},}
or by instead left multiplying both sides by Q−1,
{\displaystyle Q^{-1}AQ=\Lambda .}
A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ.
Conversely, suppose a matrix A is diagonalizable. Let P be a non-singular square matrix such that P−1AP is some diagonal matrix D. Left multiplying both by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable.
A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces.
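The eigendecomposition can be reproduced numerically. A NumPy sketch, using the symmetric example matrix from earlier in the article (`np.linalg.eig` returns the matrix Q of eigenvectors directly):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, Q = np.linalg.eig(A)   # columns of Q are eigenvectors of A
Lambda = np.diag(eigenvalues)       # eigenvalues along the diagonal

# A Q = Q Lambda, hence A = Q Lambda Q^{-1}
assert np.allclose(A @ Q, Q @ Lambda)
assert np.allclose(A, Q @ Lambda @ np.linalg.inv(Q))

# Conversely, Q^{-1} A Q is diagonal: A is diagonalizable
assert np.allclose(np.linalg.inv(Q) @ A @ Q, Lambda)
```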
=== Variational characterization ===
In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of {\displaystyle H} is the maximum value of the quadratic form {\displaystyle \mathbf {x} ^{\textsf {T}}H\mathbf {x} /\mathbf {x} ^{\textsf {T}}\mathbf {x} }. A value of {\displaystyle \mathbf {x} } that realizes that maximum is an eigenvector.
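A numerical illustration of this variational characterization (assuming NumPy; the symmetric matrix is the example used elsewhere in the article): random vectors never exceed the largest eigenvalue, and the corresponding eigenvector attains it.

```python
import numpy as np

H = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric (Hermitian) matrix

lam_max = np.linalg.eigvalsh(H).max()

def rayleigh(x):
    """Rayleigh quotient x^T H x / x^T x."""
    return (x @ H @ x) / (x @ x)

# Random vectors never exceed the largest eigenvalue ...
rng = np.random.default_rng(2)
for _ in range(100):
    x = rng.standard_normal(2)
    assert rayleigh(x) <= lam_max + 1e-12

# ... and the corresponding eigenvector attains the maximum
v = np.array([1.0, 1.0])
assert np.isclose(rayleigh(v), lam_max)
```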
=== Matrix examples ===
==== Two-dimensional matrix example ====
Consider the matrix
{\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.}
The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectors v of this transformation satisfy equation (1), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues.
Taking the determinant to find the characteristic polynomial of A,
{\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&1\\1&2\end{bmatrix}}-\lambda {\begin{bmatrix}1&0\\0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}\\[6pt]&=3-4\lambda +\lambda ^{2}\\[6pt]&=(\lambda -3)(\lambda -1).\end{aligned}}}
Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A.
For λ=1, equation (2) becomes
{\displaystyle (A-I)\mathbf {v} _{\lambda =1}={\begin{bmatrix}1&1\\1&1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}}
{\displaystyle 1v_{1}+1v_{2}=0}
Any nonzero vector with v1 = −v2 solves this equation. Therefore,
{\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}v_{1}\\-v_{1}\end{bmatrix}}={\begin{bmatrix}1\\-1\end{bmatrix}}}
is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector.
For λ=3, equation (2) becomes
{\displaystyle {\begin{aligned}(A-3I)\mathbf {v} _{\lambda =3}&={\begin{bmatrix}-1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}\\-1v_{1}+1v_{2}&=0;\\1v_{1}-1v_{2}&=0\end{aligned}}}
Any nonzero vector with v1 = v2 solves this equation. Therefore,
{\displaystyle \mathbf {v} _{\lambda =3}={\begin{bmatrix}v_{1}\\v_{1}\end{bmatrix}}={\begin{bmatrix}1\\1\end{bmatrix}}}
is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector.
Thus, the vectors vλ=1 and vλ=3 are eigenvectors of A associated with the eigenvalues λ=1 and λ=3, respectively.
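The worked example above can be verified with NumPy's eigensolver, which returns the eigenvalues together with unit-norm eigenvectors as the columns of a matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
evals, evecs = np.linalg.eig(A)

# Sort so the eigenvalues appear as 1 then 3.
order = np.argsort(evals)
evals = evals[order]
evecs = evecs[:, order]

# Each column v of evecs satisfies the eigenvalue equation A v = lambda v.
residual = np.linalg.norm(A @ evecs - evecs * evals)
```

The returned eigenvectors are scalar multiples of (1, −1) and (1, 1) found by hand above, normalized to unit length.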
==== Three-dimensional matrix example ====
Consider the matrix
{\displaystyle A={\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}.}
The characteristic polynomial of A is
{\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}-\lambda {\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &0&0\\0&3-\lambda &4\\0&4&9-\lambda \end{vmatrix}},\\[6pt]&=(2-\lambda ){\bigl [}(3-\lambda )(9-\lambda )-16{\bigr ]}=-\lambda ^{3}+14\lambda ^{2}-35\lambda +22.\end{aligned}}}
The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors
{\displaystyle {\begin{bmatrix}1&0&0\end{bmatrix}}^{\textsf {T}}}, {\displaystyle {\begin{bmatrix}0&-2&1\end{bmatrix}}^{\textsf {T}}}, and {\displaystyle {\begin{bmatrix}0&1&2\end{bmatrix}}^{\textsf {T}}}, or any nonzero multiple thereof.
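Both the characteristic polynomial and the eigenvalues of this example can be checked numerically; `np.poly` applied to a square matrix returns the coefficients of its (monic) characteristic polynomial:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])

# Coefficients of the monic characteristic polynomial
# λ^3 - 14λ^2 + 35λ - 22 (the text's polynomial multiplied by -1).
coeffs = np.poly(A)

evals = np.sort(np.linalg.eigvals(A))
```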
==== Three-dimensional matrix example with complex eigenvalues ====
Consider the cyclic permutation matrix
{\displaystyle A={\begin{bmatrix}0&1&0\\0&0&1\\1&0&0\end{bmatrix}}.}
This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 − λ3, whose roots are
{\displaystyle {\begin{aligned}\lambda _{1}&=1\\\lambda _{2}&=-{\frac {1}{2}}+i{\frac {\sqrt {3}}{2}}\\\lambda _{3}&=\lambda _{2}^{*}=-{\frac {1}{2}}-i{\frac {\sqrt {3}}{2}}\end{aligned}}}
where {\displaystyle i} is an imaginary unit with {\displaystyle i^{2}=-1}.
For the real eigenvalue λ1 = 1, any vector with three equal nonzero entries is an eigenvector. For example,
{\displaystyle A{\begin{bmatrix}5\\5\\5\end{bmatrix}}={\begin{bmatrix}5\\5\\5\end{bmatrix}}=1\cdot {\begin{bmatrix}5\\5\\5\end{bmatrix}}.}
For the complex conjugate pair of eigenvalues, note that
{\displaystyle \lambda _{2}\lambda _{3}=1,\quad \lambda _{2}^{2}=\lambda _{3},\quad \lambda _{3}^{2}=\lambda _{2}.}
Then
{\displaystyle A{\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}}={\begin{bmatrix}\lambda _{2}\\\lambda _{3}\\1\end{bmatrix}}=\lambda _{2}\cdot {\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}},}
and
{\displaystyle A{\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}={\begin{bmatrix}\lambda _{3}\\\lambda _{2}\\1\end{bmatrix}}=\lambda _{3}\cdot {\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}.}
Therefore, the other two eigenvectors of A are complex and are
{\displaystyle \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}1&\lambda _{2}&\lambda _{3}\end{bmatrix}}^{\textsf {T}}} and {\displaystyle \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}1&\lambda _{3}&\lambda _{2}\end{bmatrix}}^{\textsf {T}}} with eigenvalues λ2 and λ3, respectively. The two complex eigenvectors also appear in a complex conjugate pair,
{\displaystyle \mathbf {v} _{\lambda _{2}}=\mathbf {v} _{\lambda _{3}}^{*}.}
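Numerically, the eigenvalues of this permutation matrix are the three cube roots of unity, which a general (complex-capable) eigensolver recovers directly:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

evals = np.linalg.eigvals(A)

# The eigenvalues should be the three cube roots of unity.
expected = np.exp(2j * np.pi * np.arange(3) / 3)

# Compare as multisets by sorting (real part, then imaginary part).
evs = np.sort_complex(evals)
exp = np.sort_complex(expected)
```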
==== Diagonal matrix example ====
Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix
{\displaystyle A={\begin{bmatrix}1&0&0\\0&2&0\\0&0&3\end{bmatrix}}.}
The characteristic polynomial of A is
{\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),}
which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A.
Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors,
{\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\0\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},}
respectively, as well as scalar multiples of these vectors.
==== Triangular matrix example ====
A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal.
Consider the lower triangular matrix,
{\displaystyle A={\begin{bmatrix}1&0&0\\1&2&0\\2&3&3\end{bmatrix}}.}
The characteristic polynomial of A is
{\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),}
which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A.
These eigenvalues correspond to the eigenvectors,
{\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\-1\\{\frac {1}{2}}\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\-3\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},}
respectively, as well as scalar multiples of these vectors.
==== Matrix with repeated eigenvalues example ====
As in the previous example, the lower triangular matrix
{\displaystyle A={\begin{bmatrix}2&0&0&0\\1&2&0&0\\0&1&3&0\\0&0&1&3\end{bmatrix}},}
has a characteristic polynomial that is the product of its diagonal elements,
{\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &0&0&0\\1&2-\lambda &0&0\\0&1&3-\lambda &0\\0&0&1&3-\lambda \end{vmatrix}}=(2-\lambda )^{2}(3-\lambda )^{2}.}
The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues is μA = 4 = n, the order of the characteristic polynomial and the dimension of A.
On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector
{\displaystyle {\begin{bmatrix}0&1&-1&1\end{bmatrix}}^{\textsf {T}}}
and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector
{\displaystyle {\begin{bmatrix}0&0&0&1\end{bmatrix}}^{\textsf {T}}}
. The total geometric multiplicity γA is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in a later section.
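The two multiplicities of this example can be computed directly: the algebraic multiplicity is the number of repeated roots, while the geometric multiplicity of λ is the dimension of the null space of A − λI, i.e. n minus its rank:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0, 0.0],
              [1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0, 3.0]])

# Algebraic multiplicity: each eigenvalue appears twice among the roots.
evals = np.sort(np.linalg.eigvals(A).real)

# Geometric multiplicity of λ is dim ker(A - λI) = n - rank(A - λI).
geo_mult_2 = 4 - np.linalg.matrix_rank(A - 2.0 * np.eye(4))
geo_mult_3 = 4 - np.linalg.matrix_rank(A - 3.0 * np.eye(4))
```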
=== Eigenvector-eigenvalue identity ===
For a Hermitian matrix, the norm squared of the jth component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the corresponding minor matrix,
{\displaystyle |v_{i,j}|^{2}={\frac {\prod _{k}{(\lambda _{i}-\lambda _{k}(M_{j}))}}{\prod _{k\neq i}{(\lambda _{i}-\lambda _{k})}}},}
where
{\textstyle M_{j}}
is the submatrix formed by removing the jth row and column from the original matrix. This identity also extends to diagonalizable matrices, and has been rediscovered many times in the literature.
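The identity can be checked numerically on a random symmetric matrix (a sketch; the matrix here is arbitrary, chosen only so that its eigenvalues are almost surely distinct):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 4))
A = (X + X.T) / 2          # random real symmetric (Hermitian) matrix

evals, evecs = np.linalg.eigh(A)
n = A.shape[0]

def identity_rhs(i, j):
    # |v_{i,j}|^2 = prod_k (λ_i - λ_k(M_j)) / prod_{k≠i} (λ_i - λ_k),
    # where M_j drops the j-th row and column of A.
    Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)
    minor_evals = np.linalg.eigvalsh(Mj)
    num = np.prod(evals[i] - minor_evals)
    den = np.prod(np.delete(evals[i] - evals, i))
    return num / den

err = max(abs(abs(evecs[j, i]) ** 2 - identity_rhs(i, j))
          for i in range(n) for j in range(n))
```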
== Eigenvalues and eigenfunctions of differential operators ==
The definitions of eigenvalue and eigenvectors of a linear transformation T remains valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces are the differential operators on function spaces. Let D be a linear differential operator on the space C∞ of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation
{\displaystyle Df(t)=\lambda f(t)}
The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions.
=== Derivative operator example ===
Consider the derivative operator {\displaystyle {\tfrac {d}{dt}}} with eigenvalue equation
{\displaystyle {\frac {d}{dt}}f(t)=\lambda f(t).}
This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function
{\displaystyle f(t)=f(0)e^{\lambda t},}
is the eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant.
The main eigenfunction article gives other examples.
== General definition ==
The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V,
{\displaystyle T:V\to V.}
We say that a nonzero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that
{\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} .}
This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v.
=== Eigenspaces, geometric multiplicity, and the eigenbasis ===
Given an eigenvalue λ, consider the set
{\displaystyle E=\left\{\mathbf {v} :T(\mathbf {v} )=\lambda \mathbf {v} \right\},}
which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ.
By definition of a linear transformation,
{\displaystyle {\begin{aligned}T(\mathbf {x} +\mathbf {y} )&=T(\mathbf {x} )+T(\mathbf {y} ),\\T(\alpha \mathbf {x} )&=\alpha T(\mathbf {x} ),\end{aligned}}}
for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then
{\displaystyle {\begin{aligned}T(\mathbf {u} +\mathbf {v} )&=\lambda (\mathbf {u} +\mathbf {v} ),\\T(\alpha \mathbf {v} )&=\lambda (\alpha \mathbf {v} ).\end{aligned}}}
So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V.
If that subspace has dimension 1, it is sometimes called an eigenline.
The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue. By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector.
The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues.
Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable.
=== Spectral theory ===
If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse (T − λI)−1 does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue.
For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.
=== Associative algebras and representation theory ===
One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. The study of such actions is the field of representation theory.
The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively.
A Hecke eigensheaf is a tensor-multiple of itself and is considered in the Langlands correspondence.
== Dynamic equations ==
The simplest difference equations have the form
{\displaystyle x_{t}=a_{1}x_{t-1}+a_{2}x_{t-2}+\cdots +a_{k}x_{t-k}.}
The solution of this equation for x in terms of t is found by using its characteristic equation
{\displaystyle \lambda ^{k}-a_{1}\lambda ^{k-1}-a_{2}\lambda ^{k-2}-\cdots -a_{k-1}\lambda -a_{k}=0,}
which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k – 1 equations
{\displaystyle x_{t-1}=x_{t-1},\ \dots ,\ x_{t-k+1}=x_{t-k+1},}
giving a k-dimensional system of the first order in the stacked variable vector
{\displaystyle {\begin{bmatrix}x_{t}&\cdots &x_{t-k+1}\end{bmatrix}}}
in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots
{\displaystyle \lambda _{1},\,\ldots ,\,\lambda _{k},}
for use in the solution equation
{\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{k}\lambda _{k}^{t}.}
A similar procedure is used for solving a differential equation of the form
{\displaystyle {\frac {d^{k}x}{dt^{k}}}+a_{k-1}{\frac {d^{k-1}x}{dt^{k-1}}}+\cdots +a_{1}{\frac {dx}{dt}}+a_{0}x=0.}
== Calculation ==
The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice.
=== Classical method ===
The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetics such as floating-point.
==== Eigenvalues ====
The eigenvalues of a matrix {\displaystyle A} can be determined by finding the roots of the characteristic polynomial. This is easy for {\displaystyle 2\times 2} matrices, but the difficulty increases rapidly with the size of the matrix.
In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required accuracy. However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable round-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by Wilkinson's polynomial). Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is the determinant, which for an
{\displaystyle n\times n} matrix is a sum of {\displaystyle n!} different products.
Explicit algebraic formulas for the roots of a polynomial exist only if the degree {\displaystyle n} is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. (Generality matters because any polynomial with degree {\displaystyle n} is the characteristic polynomial of some companion matrix of order {\displaystyle n}.) Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods. Even the exact formula for the roots of a degree 3 polynomial is numerically impractical.
==== Eigenvectors ====
Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, that becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix
{\displaystyle A={\begin{bmatrix}4&1\\6&3\end{bmatrix}}}
we can find its eigenvectors by solving the equation {\displaystyle Av=6v}, that is
{\displaystyle {\begin{bmatrix}4&1\\6&3\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}=6\cdot {\begin{bmatrix}x\\y\end{bmatrix}}}
This matrix equation is equivalent to two linear equations
{\displaystyle \left\{{\begin{aligned}4x+y&=6x\\6x+3y&=6y\end{aligned}}\right.}
that is
{\displaystyle \left\{{\begin{aligned}-2x+y&=0\\6x-3y&=0\end{aligned}}\right.}
Both equations reduce to the single linear equation
{\displaystyle y=2x}. Therefore, any vector of the form {\displaystyle {\begin{bmatrix}a&2a\end{bmatrix}}^{\textsf {T}}}, for any nonzero real number {\displaystyle a}, is an eigenvector of {\displaystyle A} with eigenvalue {\displaystyle \lambda =6}.
The matrix {\displaystyle A} above has another eigenvalue {\displaystyle \lambda =1}. A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of {\displaystyle 3x+y=0}, that is, any vector of the form {\displaystyle {\begin{bmatrix}b&-3b\end{bmatrix}}^{\textsf {T}}}, for any nonzero real number {\displaystyle b}.
=== Simple iterative methods ===
The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix (optionally normalizing the vector to keep its elements of reasonable size); this makes the vector converge towards an eigenvector. A variation is to instead multiply the vector by
{\displaystyle (A-\mu I)^{-1}}; this causes it to converge to an eigenvector of the eigenvalue closest to {\displaystyle \mu \in \mathbb {C} }.
If {\displaystyle \mathbf {v} } is (a good approximation of) an eigenvector of {\displaystyle A}, then the corresponding eigenvalue can be computed as
{\displaystyle \lambda ={\frac {\mathbf {v} ^{*}A\mathbf {v} }{\mathbf {v} ^{*}\mathbf {v} }}}
where {\displaystyle \mathbf {v} ^{*}} denotes the conjugate transpose of {\displaystyle \mathbf {v} }.
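Putting the two steps together — repeated multiplication followed by the Rayleigh quotient — gives the classic power iteration, sketched here on the two-dimensional example matrix from earlier in the article:

```python
import numpy as np

def power_iteration(A, iters=200, seed=0):
    """Repeatedly apply A to a random start vector, normalizing each step."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    # Rayleigh quotient v* A v / v* v gives the eigenvalue estimate.
    lam = (v.conj() @ A @ v) / (v.conj() @ v)
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_iteration(A)   # converges to the dominant eigenpair
```

The iterate converges to the eigenvector of the largest-magnitude eigenvalue (here λ = 3), at a rate governed by the ratio of the two largest eigenvalue magnitudes.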
=== Modern methods ===
Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961. Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm. For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.
Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed.
== Applications ==
=== Geometric transformations ===
Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes.
The following table presents some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors.
The characteristic equation for a rotation is a quadratic equation with discriminant {\displaystyle D=-4(\sin \theta )^{2}}, which is a negative number whenever θ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, {\displaystyle \cos \theta \pm i\sin \theta }; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane.
A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues.
=== Principal component analysis ===
The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthogonal basis for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.
Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors). More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling.
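The core of PCA described above can be sketched in a few lines of NumPy (the synthetic data set here is hypothetical, constructed so that most variance lies along the direction (1, 1)):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic 2-D data: both variables follow a common factor t plus small noise.
n = 500
t = rng.normal(size=n)
data = np.column_stack([t + 0.1 * rng.normal(size=n),
                        t + 0.1 * rng.normal(size=n)])

# PCA: eigendecomposition of the sample covariance matrix.
cov = np.cov(data, rowvar=False)
evals, evecs = np.linalg.eigh(cov)      # eigenvalues in ascending order

pc1 = evecs[:, -1]                      # principal component (largest eigenvalue)
explained = evals[-1] / evals.sum()     # fraction of variance it explains
```

Here `pc1` comes out close to (1, 1)/√2 (up to sign), and it explains nearly all the variance, matching the construction of the data.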
=== Graphs ===
In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix {\displaystyle A}, or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either {\displaystyle D-A} (sometimes called the combinatorial Laplacian) or {\displaystyle I-D^{-1/2}AD^{-1/2}} (sometimes called the normalized Laplacian), where {\displaystyle D} is a diagonal matrix with {\displaystyle D_{ii}} equal to the degree of vertex {\displaystyle v_{i}}, and in {\displaystyle D^{-1/2}}, the {\displaystyle i}th diagonal entry is {\textstyle 1/{\sqrt {\deg(v_{i})}}}. The {\displaystyle k}th principal eigenvector of a graph is defined as either the eigenvector corresponding to the {\displaystyle k}th largest or {\displaystyle k}th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.
The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.
=== Markov chains ===
A Markov chain is represented by a matrix whose entries are the transition probabilities between states of a system. In particular the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. The Perron–Frobenius theorem gives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state.
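Concretely, a stationary distribution is a left eigenvector of the transition matrix for eigenvalue 1, normalized to sum to one. A sketch with a hypothetical three-state chain:

```python
import numpy as np

# Row-stochastic transition matrix of a 3-state Markov chain (hypothetical values).
P = np.array([[0.9, 0.05, 0.05],
              [0.1, 0.8,  0.1 ],
              [0.2, 0.2,  0.6 ]])

# A left eigenvector of P for eigenvalue 1 is a right eigenvector of P^T.
evals, evecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(evals - 1.0))
pi = np.real(evecs[:, idx])
pi /= pi.sum()                      # normalize to a probability distribution

residual = np.linalg.norm(pi @ P - pi)   # pi P = pi for a stationary distribution
```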
=== Vibration analysis ===
Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed by
{\displaystyle m{\ddot {x}}+kx=0}
or
{\displaystyle m{\ddot {x}}=-kx}
That is, acceleration is proportional to position (i.e., we expect {\displaystyle x} to be sinusoidal in time).
In {\displaystyle n} dimensions, {\displaystyle m} becomes a mass matrix and {\displaystyle k} a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem
{\displaystyle kx=\omega ^{2}mx}
where {\displaystyle \omega ^{2}} is the eigenvalue and {\displaystyle \omega } is the (imaginary) angular frequency. The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of {\displaystyle k} alone. Furthermore, damped vibration, governed by
{\displaystyle m{\ddot {x}}+c{\dot {x}}+kx=0}
leads to a so-called quadratic eigenvalue problem,
{\displaystyle \left(\omega ^{2}m+\omega c+k\right)x=0.}
This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system.
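The undamped generalized problem k x = ω²m x can be sketched numerically. For small systems with a non-singular mass matrix, it reduces to a standard eigenvalue problem for m⁻¹k (the two-mass spring values below are hypothetical):

```python
import numpy as np

# Hypothetical two-degree-of-freedom system: mass matrix m, stiffness matrix k.
m = np.array([[2.0, 0.0],
              [0.0, 1.0]])
k = np.array([[ 3.0, -1.0],
              [-1.0,  1.0]])

# Generalized problem k x = ω² m x, reduced to the standard problem (m⁻¹k) x = ω² x.
omega_sq, modes = np.linalg.eig(np.linalg.solve(m, k))
order = np.argsort(omega_sq.real)
omega_sq = omega_sq.real[order]
modes = modes.real[:, order]

freqs = np.sqrt(omega_sq)   # natural frequencies are the square roots of the eigenvalues

# Each mode x satisfies k x = ω² m x.
residual = max(np.linalg.norm(k @ modes[:, i] - omega_sq[i] * (m @ modes[:, i]))
               for i in range(2))
```

For large or ill-conditioned systems, dedicated generalized eigensolvers are preferable to forming m⁻¹k explicitly.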
The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear combination of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, which neatly generalizes the solution to scalar-valued vibration problems.
=== Tensor of moment of inertia ===
In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.
=== Stress tensor ===
In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components.
=== Schrödinger equation ===
An example of an eigenvalue equation where the transformation
T
{\displaystyle T}
is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:
H
ψ
E
=
E
ψ
E
{\displaystyle H\psi _{E}=E\psi _{E}\,}
where {\displaystyle H}, the Hamiltonian, is a second-order differential operator and {\displaystyle \psi _{E}}, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue {\displaystyle E}, interpreted as its energy.
However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for {\displaystyle \psi _{E}} within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which {\displaystyle \psi _{E}} and {\displaystyle H} can be represented as a one-dimensional array (i.e., a vector) and a matrix, respectively. This allows one to represent the Schrödinger equation in a matrix form.
The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by {\displaystyle |\Psi _{E}\rangle }. In this notation, the Schrödinger equation is:
{\displaystyle H|\Psi _{E}\rangle =E|\Psi _{E}\rangle }
where {\displaystyle |\Psi _{E}\rangle } is an eigenstate of {\displaystyle H} and {\displaystyle E} represents the eigenvalue. {\displaystyle H} is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above {\displaystyle H|\Psi _{E}\rangle } is understood to be the vector obtained by application of the transformation {\displaystyle H} to {\displaystyle |\Psi _{E}\rangle }.
=== Wave transport ===
Light, acoustic waves, and microwaves are randomly scattered numerous times when traversing a static disordered system. Even though multiple scattering repeatedly randomizes the waves, coherent wave transport through the system is ultimately a deterministic process which can be described by a field transmission matrix {\displaystyle \mathbf {t} }. The eigenvectors of the transmission operator {\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} } form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways waves can travel through the system. The eigenvalues, {\displaystyle \tau }, of {\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} } correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is their bimodal eigenvalue distribution with {\displaystyle \tau _{\max }=1} and {\displaystyle \tau _{\min }=0}. Furthermore, one of the striking properties of open eigenchannels, beyond the perfect transmittance, is the statistically robust spatial profile of the eigenchannels.
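As a numerical sketch (assuming a random complex matrix as a stand-in for a measured transmission matrix, scaled so that the strongest channel transmits fully):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random complex matrix as a stand-in for a measured field
# transmission matrix t (illustration only, not a physical model).
t = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
t /= np.linalg.norm(t, 2)   # scale so the largest singular value is 1

tt = t.conj().T @ t         # transmission operator t†t (Hermitian)
tau, wavefronts = np.linalg.eigh(tt)

# The transmittances tau are real and lie in [0, 1] after scaling;
# each column of `wavefronts` is the input wavefront of one eigenchannel.
assert np.all(tau >= -1e-9) and np.all(tau <= 1 + 1e-9)
print(tau.max())  # the most open channel transmits (almost) perfectly
```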
=== Molecular orbitals ===
In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by an iteration procedure, called in this case self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called Roothaan equations.
=== Geology and glaciology ===
In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about a clast's fabric can be summarized in 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can be compared graphically or as a stereographic projection. Graphically, many geologists use a Tri-Plot (Sneed and Folk) diagram. A stereographic projection projects 3-dimensional space onto a two-dimensional plane. One type of stereographic projection is the Wulff net, which is commonly used in crystallography to create stereograms.
The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are ordered {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},\mathbf {v} _{3}} by their eigenvalues {\displaystyle E_{1}\geq E_{2}\geq E_{3}}; {\displaystyle \mathbf {v} _{1}} then is the primary orientation/dip of clast, {\displaystyle \mathbf {v} _{2}} is the secondary and {\displaystyle \mathbf {v} _{3}} is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of {\displaystyle E_{1}}, {\displaystyle E_{2}}, and {\displaystyle E_{3}} are dictated by the nature of the sediment's fabric. If {\displaystyle E_{1}=E_{2}=E_{3}}, the fabric is said to be isotropic. If {\displaystyle E_{1}=E_{2}>E_{3}}, the fabric is said to be planar. If {\displaystyle E_{1}>E_{2}>E_{3}}, the fabric is said to be linear.
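The classification above can be sketched in a few lines; `classify_fabric` is a hypothetical helper, with the exact eigenvalue equalities replaced by a numerical tolerance:

```python
import numpy as np

def classify_fabric(orientation_tensor, tol=1e-6):
    """Classify a clast fabric from its (symmetric) orientation tensor.

    Hypothetical helper: eigenvalues are sorted E1 >= E2 >= E3 and
    compared as in the text (isotropic / planar / linear).
    """
    E = np.sort(np.linalg.eigvalsh(orientation_tensor))[::-1]  # E1 >= E2 >= E3
    if abs(E[0] - E[2]) < tol:
        return "isotropic"   # E1 = E2 = E3
    if abs(E[0] - E[1]) < tol:
        return "planar"      # E1 = E2 > E3
    return "linear"          # E1 > E2 > E3

print(classify_fabric(np.eye(3)))                 # isotropic
print(classify_fabric(np.diag([0.5, 0.5, 0.0])))  # planar
print(classify_fabric(np.diag([0.7, 0.2, 0.1])))  # linear
```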
=== Basic reproduction number ===
The basic reproduction number ({\displaystyle R_{0}}) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then {\displaystyle R_{0}} is the average number of people that one typical infectious person will infect. The generation time of an infection is the time, {\displaystyle t_{G}}, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time {\displaystyle t_{G}} has passed. The value {\displaystyle R_{0}} is then the largest eigenvalue of the next generation matrix.
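A minimal sketch (the 2×2 next generation matrix below is a made-up example for two interacting groups, not data from any real outbreak):

```python
import numpy as np

# Hypothetical next generation matrix K for two groups: K[i, j] is the
# mean number of new infections in group i caused by one case in group j.
K = np.array([[1.0, 0.5],
              [0.8, 0.6]])

# R0 is the largest eigenvalue (spectral radius) of K.
R0 = max(abs(np.linalg.eigvals(K)))
print(R0 > 1)  # True: this hypothetical infection would spread
```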
=== Eigenfaces ===
In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research on eigen vision systems for determining hand gestures has also been conducted.
Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.
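The eigenface construction can be sketched with random data standing in for face images (illustration only; a real pipeline would use actual normalized pictures):

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 "images" of 64 pixels each, standing in for normalized face
# pictures (random data, for illustration only).
faces = rng.normal(size=(50, 64))
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigenvectors of the covariance matrix are the eigenfaces.
cov = centered.T @ centered / (len(faces) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
eigenfaces = eigvecs[:, ::-1]  # reorder by decreasing variance

# Express one face as a linear combination of the top k eigenfaces:
# 64 pixel values are compressed to k coefficients.
k = 10
coords = eigenfaces[:, :k].T @ centered[0]
approx = mean_face + eigenfaces[:, :k] @ coords
print(coords.shape)  # (10,)
```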
== See also ==
Antieigenvalue theory
Eigenoperator
Eigenplane
Eigenmoments
Eigenvalue algorithm
Quantum states
Jordan normal form
List of numerical-analysis software
Nonlinear eigenproblem
Normal eigenvalue
Quadratic eigenvalue problem
Singular value
Spectrum of a matrix
== External links ==
What are Eigen Values? – non-technical introduction from PhysLink.com's "Ask the Experts"
Eigen Values and Eigen Vectors Numerical Examples – Tutorial and Interactive Program from Revoledu.
Introduction to Eigen Vectors and Eigen Values – lecture from Khan Academy
Eigenvectors and eigenvalues | Essence of linear algebra, chapter 10 – A visual explanation with 3Blue1Brown
Matrix Eigenvectors Calculator from Symbolab (select an {\displaystyle n\times n} matrix size, fill in the entries numerically, and click Go; complex entries are accepted)
Wikiversity uses introductory physics to introduce Eigenvalues and eigenvectors
=== Theory ===
Computation of Eigenvalues
Numerical solution of eigenvalue problems Edited by Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst
In abstract algebra, the direct sum is a construction which combines several modules into a new, larger module. The direct sum of modules is the smallest module which contains the given modules as submodules with no "unnecessary" constraints, making it an example of a coproduct. Contrast with the direct product, which is the dual notion.
The most familiar examples of this construction occur when considering vector spaces (modules over a field) and abelian groups (modules over the ring Z of integers). The construction may also be extended to cover Banach spaces and Hilbert spaces.
See the article decomposition of a module for a way to write a module as a direct sum of submodules.
== Construction for vector spaces and abelian groups ==
We give the construction first in these two cases, under the assumption that we have only two objects. Then we generalize to an arbitrary family of arbitrary modules. The key elements of the general construction are more clearly identified by considering these two cases in depth.
=== Construction for two vector spaces ===
Suppose V and W are vector spaces over the field K. The Cartesian product V × W can be given the structure of a vector space over K (Halmos 1974, §18) by defining the operations componentwise:
(v1, w1) + (v2, w2) = (v1 + v2, w1 + w2)
α (v, w) = (α v, α w)
for v, v1, v2 ∈ V, w, w1, w2 ∈ W, and α ∈ K.
The resulting vector space is called the direct sum of V and W and is usually denoted by a plus symbol inside a circle:
{\displaystyle V\oplus W}
It is customary to write the elements of an ordered sum not as ordered pairs (v, w), but as a sum v + w.
The subspace V × {0} of V ⊕ W is isomorphic to V and is often identified with V; similarly for {0} × W and W. (See internal direct sum below.) With this identification, every element of V ⊕ W can be written in one and only one way as the sum of an element of V and an element of W. The dimension of V ⊕ W is equal to the sum of the dimensions of V and W. One elementary use is the reconstruction of a finite-dimensional vector space from any subspace W and its orthogonal complement:
{\displaystyle \mathbb {R} ^{n}=W\oplus W^{\perp }}
This construction readily generalizes to any finite number of vector spaces.
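The componentwise operations can be sketched directly, representing an element of V ⊕ W as a pair of tuples (an illustration, not a library API):

```python
# A minimal sketch of the componentwise operations defining V ⊕ W,
# with vectors represented as plain Python tuples of floats.

def add(vw1, vw2):
    """(v1, w1) + (v2, w2) = (v1 + v2, w1 + w2), componentwise."""
    (v1, w1), (v2, w2) = vw1, vw2
    return (tuple(a + b for a, b in zip(v1, v2)),
            tuple(a + b for a, b in zip(w1, w2)))

def scale(alpha, vw):
    """alpha * (v, w) = (alpha * v, alpha * w)."""
    v, w = vw
    return (tuple(alpha * a for a in v), tuple(alpha * a for a in w))

# An element of R^2 ⊕ R^3 is a pair (v, w) with v in R^2 and w in R^3.
x = ((1.0, 2.0), (0.0, 1.0, 0.0))
y = ((3.0, 4.0), (1.0, 1.0, 1.0))
print(add(x, y))      # ((4.0, 6.0), (1.0, 2.0, 1.0))
print(scale(2.0, x))  # ((2.0, 4.0), (0.0, 2.0, 0.0))
```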
=== Construction for two abelian groups ===
For abelian groups G and H which are written additively, the direct product of G and H is also called a direct sum (Mac Lane & Birkhoff 1999, §V.6). Thus the Cartesian product G × H is equipped with the structure of an abelian group by defining the operations componentwise:
(g1, h1) + (g2, h2) = (g1 + g2, h1 + h2)
for g1, g2 in G, and h1, h2 in H.
Integral multiples are similarly defined componentwise by
n(g, h) = (ng, nh)
for g in G, h in H, and n an integer. This parallels the extension of the scalar product of vector spaces to the direct sum above.
The resulting abelian group is called the direct sum of G and H and is usually denoted by a plus symbol inside a circle:
{\displaystyle G\oplus H}
It is customary to write the elements of an ordered sum not as ordered pairs (g, h), but as a sum g + h.
The subgroup G × {0} of G ⊕ H is isomorphic to G and is often identified with G; similarly for {0} × H and H. (See internal direct sum below.) With this identification, it is true that every element of G ⊕ H can be written in one and only one way as the sum of an element of G and an element of H. The rank of G ⊕ H is equal to the sum of the ranks of G and H.
This construction readily generalises to any finite number of abelian groups.
== Construction for an arbitrary family of modules ==
One should notice a clear similarity between the definitions of the direct sum of two vector spaces and of two abelian groups. In fact, each is a special case of the construction of the direct sum of two modules. Additionally, by modifying the definition one can accommodate the direct sum of an infinite family of modules. The precise definition is as follows (Bourbaki 1989, §II.1.6).
Let R be a ring, and {Mi : i ∈ I} a family of left R-modules indexed by the set I. The direct sum of {Mi} is then defined to be the set of all sequences {\displaystyle (\alpha _{i})} where {\displaystyle \alpha _{i}\in M_{i}} and {\displaystyle \alpha _{i}=0} for cofinitely many indices i. (The direct product is analogous but the indices do not need to cofinitely vanish.)
It can also be defined as functions α from I to the disjoint union of the modules Mi such that α(i) ∈ Mi for all i ∈ I and α(i) = 0 for cofinitely many indices i. These functions can equivalently be regarded as finitely supported sections of the fiber bundle over the index set I, with the fiber over {\displaystyle i\in I} being {\displaystyle M_{i}}.
This set inherits the module structure via component-wise addition and scalar multiplication. Explicitly, two such sequences (or functions) α and β can be added by writing {\displaystyle (\alpha +\beta )_{i}=\alpha _{i}+\beta _{i}} for all i (note that this is again zero for all but finitely many indices), and such a function can be multiplied with an element r from R by defining {\displaystyle (r\alpha )_{i}=r\alpha _{i}} for all i. In this way, the direct sum becomes a left R-module, and it is denoted
{\displaystyle \bigoplus _{i\in I}M_{i}.}
It is customary to write the sequence {\displaystyle (\alpha _{i})} as a sum {\displaystyle \sum \alpha _{i}}. Sometimes a primed summation {\displaystyle \sum '\alpha _{i}} is used to indicate that cofinitely many of the terms are zero.
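The finitely supported representation suggests a direct implementation: a sketch using dicts over ℤ-modules (abelian groups), where an absent index denotes a zero component (illustration only):

```python
# Sketch: an element of a direct sum over an infinite index set is a
# dict mapping the finitely many indices with a nonzero component to
# that component; absent keys mean 0. Components here are integers,
# i.e. each M_i is the Z-module Z.

def add(alpha, beta):
    """(alpha + beta)_i = alpha_i + beta_i, dropping zero components."""
    out = dict(alpha)
    for i, b in beta.items():
        out[i] = out.get(i, 0) + b
        if out[i] == 0:
            del out[i]   # keep the support finite and canonical
    return out

def scale(r, alpha):
    """(r * alpha)_i = r * alpha_i."""
    return {i: r * a for i, a in alpha.items()} if r != 0 else {}

# Two elements supported on finitely many of the infinitely many indices.
a = {0: 2, 5: -1}
b = {5: 1, 7: 3}
print(add(a, b))    # {0: 2, 7: 3} -- the i = 5 components cancel
print(scale(3, a))  # {0: 6, 5: -3}
```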
== Properties ==
The direct sum is a submodule of the direct product of the modules Mi (Bourbaki 1989, §II.1.7). The direct product is the set of all functions α from I to the disjoint union of the modules Mi with α(i)∈Mi, but not necessarily vanishing for all but finitely many i. If the index set I is finite, then the direct sum and the direct product are equal.
Each of the modules Mi may be identified with the submodule of the direct sum consisting of those functions which vanish on all indices different from i. With these identifications, every element x of the direct sum can be written in one and only one way as a sum of finitely many elements from the modules Mi.
If the Mi are actually vector spaces, then the dimension of the direct sum is equal to the sum of the dimensions of the Mi. The same is true for the rank of abelian groups and the length of modules.
Every vector space over the field K is isomorphic to a direct sum of sufficiently many copies of K, so in a sense only these direct sums have to be considered. This is not true for modules over arbitrary rings.
The tensor product distributes over direct sums in the following sense: if N is some right R-module, then the direct sum of the tensor products of N with Mi (which are abelian groups) is naturally isomorphic to the tensor product of N with the direct sum of the Mi.
Direct sums are commutative and associative (up to isomorphism), meaning that it doesn't matter in which order one forms the direct sum.
The abelian group of R-linear homomorphisms from the direct sum to some left R-module L is naturally isomorphic to the direct product of the abelian groups of R-linear homomorphisms from Mi to L:
{\displaystyle \operatorname {Hom} _{R}{\biggl (}\bigoplus _{i\in I}M_{i},L{\biggr )}\cong \prod _{i\in I}\operatorname {Hom} _{R}\left(M_{i},L\right).}
Indeed, there is clearly a homomorphism τ from the left hand side to the right hand side, where τ(θ)(i) is the R-linear homomorphism sending x∈Mi to θ(x) (using the natural inclusion of Mi into the direct sum). The inverse of the homomorphism τ is defined by
{\displaystyle \tau ^{-1}(\beta )(\alpha )=\sum _{i\in I}\beta (i)(\alpha (i))}
for any α in the direct sum of the modules Mi. The key point is that the definition of τ−1 makes sense because α(i) is zero for all but finitely many i, and so the sum is finite. In particular, the dual vector space of a direct sum of vector spaces is isomorphic to the direct product of the duals of those spaces.
The finite direct sum of modules is a biproduct: If {\displaystyle p_{k}:A_{1}\oplus \cdots \oplus A_{n}\to A_{k}} are the canonical projection mappings and {\displaystyle i_{k}:A_{k}\to A_{1}\oplus \cdots \oplus A_{n}} are the inclusion mappings, then {\displaystyle i_{1}\circ p_{1}+\cdots +i_{n}\circ p_{n}} equals the identity morphism of A1 ⊕ ⋯ ⊕ An, and {\displaystyle p_{k}\circ i_{l}} is the identity morphism of Ak in the case l = k, and is the zero map otherwise.
== Internal direct sum ==
Suppose M is an R-module and Mi is a submodule of M for each i in I. If every x in M can be written in exactly one way as a sum of finitely many elements of the Mi, then we say that M is the internal direct sum of the submodules Mi (Halmos 1974, §18). In this case, M is naturally isomorphic to the (external) direct sum of the Mi as defined above (Adamson 1972, p.61).
A submodule N of M is a direct summand of M if there exists some other submodule N′ of M such that M is the internal direct sum of N and N′. In this case, N and N′ are called complementary submodules.
== Universal property ==
In the language of category theory, the direct sum is a coproduct and hence a colimit in the category of left R-modules, which means that it is characterized by the following universal property. For every i in I, consider the natural embedding
{\displaystyle j_{i}:M_{i}\rightarrow \bigoplus _{i\in I}M_{i}}
which sends the elements of Mi to those functions which are zero for all arguments but i. Now let M be an arbitrary R-module and fi : Mi → M be arbitrary R-linear maps for every i; then there exists precisely one R-linear map
{\displaystyle f:\bigoplus _{i\in I}M_{i}\rightarrow M}
such that f ∘ ji = fi for all i.
== Grothendieck group ==
The direct sum gives a collection of objects the structure of a commutative monoid, in that the addition of objects is defined, but not subtraction. In fact, subtraction can be defined, and every commutative monoid can be extended to an abelian group. This extension is known as the Grothendieck group. The extension is done by defining equivalence classes of pairs of objects, which allows certain pairs to be treated as inverses. The construction, detailed in the article on the Grothendieck group, is "universal", in that it has the universal property of being unique, and homomorphic to any other embedding of a commutative monoid in an abelian group.
== Direct sum of modules with additional structure ==
If the modules we are considering carry some additional structure (for example, a norm or an inner product), then the direct sum of the modules can often be made to carry this additional structure, as well. In this case, we obtain the coproduct in the appropriate category of all objects carrying the additional structure. Two prominent examples occur for Banach spaces and Hilbert spaces.
In some classical texts, the phrase "direct sum of algebras over a field" is also introduced for denoting the algebraic structure that is presently more commonly called a direct product of algebras; that is, the Cartesian product of the underlying sets with the componentwise operations. This construction, however, does not provide a coproduct in the category of algebras, but a direct product (see note below and the remark on direct sums of rings).
=== Direct sum of algebras ===
A direct sum of algebras {\displaystyle X} and {\displaystyle Y} is the direct sum as vector spaces, with product
{\displaystyle (x_{1}+y_{1})(x_{2}+y_{2})=(x_{1}x_{2}+y_{1}y_{2}).}
Consider these classical examples:
{\displaystyle \mathbf {R} \oplus \mathbf {R} } is ring isomorphic to split-complex numbers, also used in interval analysis.
{\displaystyle \mathbf {C} \oplus \mathbf {C} } is the algebra of tessarines introduced by James Cockle in 1848.
{\displaystyle \mathbf {H} \oplus \mathbf {H} ,} called the split-biquaternions, was introduced by William Kingdon Clifford in 1873.
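As a sketch of the componentwise product, here is the classical ring isomorphism between R ⊕ R and the split-complex numbers a + bj with j² = 1, via a + bj ↔ (a + b, a − b); the helper names are ours:

```python
# Multiplication in the algebra R ⊕ R is componentwise:
# (x1 + y1)(x2 + y2) = x1*x2 + y1*y2, with elements stored as pairs.

def mul(p, q):
    return (p[0] * q[0], p[1] * q[1])

# The pairing a + b*j  <->  (a + b, a - b) turns split-complex
# multiplication (with j^2 = 1) into componentwise multiplication.
def to_pair(a, b):
    return (a + b, a - b)

# Check: (1 + 2j)(3 + 4j) = 3 + 8 + (4 + 6)j = 11 + 10j since j^2 = 1.
p = mul(to_pair(1, 2), to_pair(3, 4))
a, b = (p[0] + p[1]) / 2, (p[0] - p[1]) / 2
print(a, b)  # 11.0 10.0
```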
Joseph Wedderburn exploited the concept of a direct sum of algebras in his classification of hypercomplex numbers. See his Lectures on Matrices (1934), page 151.
Wedderburn makes clear the distinction between a direct sum and a direct product of algebras: for the direct sum, the field of scalars acts jointly on both parts: {\displaystyle \lambda (x\oplus y)=\lambda x\oplus \lambda y} while for the direct product a scalar factor may be collected alternately with the parts, but not both: {\displaystyle \lambda (x,y)=(\lambda x,y)=(x,\lambda y).}
Ian R. Porteous uses the three direct sums above, denoting them {\displaystyle ^{2}R,\ ^{2}C,\ ^{2}H,} as rings of scalars in his analysis of Clifford Algebras and the Classical Groups (1995).
The construction described above, as well as Wedderburn's use of the terms direct sum and direct product follow a different convention than the one in category theory. In categorical terms, Wedderburn's direct sum is a categorical product, whilst Wedderburn's direct product is a coproduct (or categorical sum), which (for commutative algebras) actually corresponds to the tensor product of algebras.
=== Direct sum of Banach spaces ===
The direct sum of two Banach spaces {\displaystyle X} and {\displaystyle Y} is the direct sum of {\displaystyle X} and {\displaystyle Y} considered as vector spaces, with the norm {\displaystyle \|(x,y)\|=\|x\|_{X}+\|y\|_{Y}} for all {\displaystyle x\in X} and {\displaystyle y\in Y.}
Generally, if {\displaystyle X_{i}} is a collection of Banach spaces, where {\displaystyle i} traverses the index set {\displaystyle I,} then the direct sum {\displaystyle \bigoplus _{i\in I}X_{i}} is a module consisting of all functions {\displaystyle x} defined over {\displaystyle I} such that {\displaystyle x(i)\in X_{i}} for all {\displaystyle i\in I} and
{\displaystyle \sum _{i\in I}\|x(i)\|_{X_{i}}<\infty .}
The norm is given by the sum above. The direct sum with this norm is again a Banach space.
For example, if we take the index set {\displaystyle I=\mathbb {N} } and {\displaystyle X_{i}=\mathbb {R} ,} then the direct sum {\displaystyle \bigoplus _{i\in \mathbb {N} }X_{i}} is the space {\displaystyle \ell _{1},} which consists of all the sequences {\displaystyle \left(a_{i}\right)} of reals with finite norm {\textstyle \|a\|=\sum _{i}\left|a_{i}\right|.}
A closed subspace {\displaystyle A} of a Banach space {\displaystyle X} is complemented if there is another closed subspace {\displaystyle B} of {\displaystyle X} such that {\displaystyle X} is equal to the internal direct sum {\displaystyle A\oplus B.} Note that not every closed subspace is complemented; e.g. {\displaystyle c_{0}} is not complemented in {\displaystyle \ell ^{\infty }.}
=== Direct sum of modules with bilinear forms ===
Let {\displaystyle \left\{\left(M_{i},b_{i}\right):i\in I\right\}} be a family indexed by {\displaystyle I} of modules equipped with bilinear forms. The orthogonal direct sum is the module direct sum with bilinear form {\displaystyle B} defined by
{\displaystyle B\left(\left(x_{i}\right),\left(y_{i}\right)\right)=\sum _{i\in I}b_{i}\left(x_{i},y_{i}\right)}
in which the summation makes sense even for infinite index sets {\displaystyle I} because only finitely many of the terms are non-zero.
=== Direct sum of Hilbert spaces ===
If finitely many Hilbert spaces {\displaystyle H_{1},\ldots ,H_{n}} are given, one can construct their orthogonal direct sum as above (since they are vector spaces), defining the inner product as:
{\displaystyle \left\langle \left(x_{1},\ldots ,x_{n}\right),\left(y_{1},\ldots ,y_{n}\right)\right\rangle =\langle x_{1},y_{1}\rangle +\cdots +\langle x_{n},y_{n}\rangle .}
The resulting direct sum is a Hilbert space which contains the given Hilbert spaces as mutually orthogonal subspaces.
If infinitely many Hilbert spaces {\displaystyle H_{i}} for {\displaystyle i\in I} are given, we can carry out the same construction; notice that when defining the inner product, only finitely many summands will be non-zero. However, the result will only be an inner product space and it will not necessarily be complete. We then define the direct sum of the Hilbert spaces {\displaystyle H_{i}} to be the completion of this inner product space.
Alternatively and equivalently, one can define the direct sum of the Hilbert spaces {\displaystyle H_{i}} as the space of all functions α with domain {\displaystyle I,} such that {\displaystyle \alpha (i)} is an element of {\displaystyle H_{i}} for every {\displaystyle i\in I} and:
{\displaystyle \sum _{i}\left\|\alpha (i)\right\|^{2}<\infty .}
The inner product of two such functions α and β is then defined as:
{\displaystyle \langle \alpha ,\beta \rangle =\sum _{i}\langle \alpha _{i},\beta _{i}\rangle .}
This space is complete and we get a Hilbert space.
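For finitely many summands the completion step is unnecessary, and the inner product is just the finite sum of component inner products; a small numpy sketch:

```python
import numpy as np

# Sketch: the inner product on a finite orthogonal direct sum
# H_1 ⊕ ... ⊕ H_n is the sum of the component inner products,
# with each component stored as a numpy array.

def direct_sum_inner(xs, ys):
    """<(x_1,...,x_n), (y_1,...,y_n)> = sum_k <x_k, y_k>."""
    return sum(np.vdot(x, y) for x, y in zip(xs, ys))

x = [np.array([1.0, 0.0]), np.array([2.0])]  # element of R^2 ⊕ R^1
y = [np.array([0.0, 3.0]), np.array([4.0])]
print(direct_sum_inner(x, y))  # 8.0  (= 0 + 2*4)
```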
For example, if we take the index set {\displaystyle I=\mathbb {N} } and {\displaystyle X_{i}=\mathbb {R} ,} then the direct sum {\displaystyle \bigoplus _{i\in \mathbb {N} }X_{i}} is the space {\displaystyle \ell _{2},} which consists of all the sequences {\displaystyle \left(a_{i}\right)} of reals with finite norm {\textstyle \|a\|={\sqrt {\sum _{i}\left|a_{i}\right|^{2}}}.}
Comparing this with the example for Banach spaces, we see that the Banach space direct sum and the Hilbert space direct sum are not necessarily the same. But if there are only finitely many summands, then the Banach space direct sum is isomorphic to the Hilbert space direct sum, although the norm will be different.
Every Hilbert space is isomorphic to a direct sum of sufficiently many copies of the base field, which is either {\displaystyle \mathbb {R} {\text{ or }}\mathbb {C} .}
This is equivalent to the assertion that every Hilbert space has an orthonormal basis. More generally, every closed subspace of a Hilbert space is complemented because it admits an orthogonal complement. Conversely, the Lindenstrauss–Tzafriri theorem asserts that if every closed subspace of a Banach space is complemented, then the Banach space is isomorphic (topologically) to a Hilbert space.
== See also ==
Biproduct
Indecomposable module
Jordan–Hölder theorem
Krull–Schmidt theorem
Split exact sequence
== References ==
Adamson, Iain T. (1972), Elementary rings and modules, University Mathematical Texts, Oliver and Boyd, ISBN 0-05-002192-3.
Bourbaki, Nicolas (1989), Elements of mathematics, Algebra I, Springer-Verlag, ISBN 3-540-64243-9.
Dummit, David S.; Foote, Richard M. (1991), Abstract algebra, Englewood Cliffs, NJ: Prentice Hall, Inc., ISBN 0-13-004771-6.
Halmos, Paul (1974), Finite dimensional vector spaces, Springer, ISBN 0-387-90093-4
Mac Lane, S.; Birkhoff, G. (1999), Algebra, AMS Chelsea, ISBN 0-8218-1646-2.
In mathematics, a Lie algebra (pronounced LEE) is a vector space {\displaystyle {\mathfrak {g}}} together with an operation called the Lie bracket, an alternating bilinear map {\displaystyle {\mathfrak {g}}\times {\mathfrak {g}}\rightarrow {\mathfrak {g}}}, that satisfies the Jacobi identity. In other words, a Lie algebra is an algebra over a field for which the multiplication operation (called the Lie bracket) is alternating and satisfies the Jacobi identity. The Lie bracket of two vectors {\displaystyle x} and {\displaystyle y} is denoted {\displaystyle [x,y]}. A Lie algebra is typically a non-associative algebra. However, every associative algebra gives rise to a Lie algebra, consisting of the same vector space with the commutator Lie bracket, {\displaystyle [x,y]=xy-yx}.
Lie algebras are closely related to Lie groups, which are groups that are also smooth manifolds: every Lie group gives rise to a Lie algebra, which is the tangent space at the identity. (In this case, the Lie bracket measures the failure of commutativity for the Lie group.) Conversely, to any finite-dimensional Lie algebra over the real or complex numbers, there is a corresponding connected Lie group, unique up to covering spaces (Lie's third theorem). This correspondence allows one to study the structure and classification of Lie groups in terms of Lie algebras, which are simpler objects of linear algebra.
In more detail: for any Lie group, the multiplication operation near the identity element 1 is commutative to first order. In other words, every Lie group G is (to first order) approximately a real vector space, namely the tangent space {\displaystyle {\mathfrak {g}}} to G at the identity. To second order, the group operation may be non-commutative, and the second-order terms describing the non-commutativity of G near the identity give {\displaystyle {\mathfrak {g}}} the structure of a Lie algebra. It is a remarkable fact that these second-order terms (the Lie algebra) completely determine the group structure of G near the identity. They even determine G globally, up to covering spaces.
In physics, Lie groups appear as symmetry groups of physical systems, and their Lie algebras (tangent vectors near the identity) may be thought of as infinitesimal symmetry motions. Thus Lie algebras and their representations are used extensively in physics, notably in quantum mechanics and particle physics.
An elementary example (not directly coming from an associative algebra) is the 3-dimensional space {\displaystyle {\mathfrak {g}}=\mathbb {R} ^{3}} with Lie bracket defined by the cross product {\displaystyle [x,y]=x\times y.} This is skew-symmetric since {\displaystyle x\times y=-y\times x}, and instead of associativity it satisfies the Jacobi identity:
{\displaystyle x\times (y\times z)+y\times (z\times x)+z\times (x\times y)=0.}
This is the Lie algebra of the Lie group of rotations of space, and each vector {\displaystyle v\in \mathbb {R} ^{3}} may be pictured as an infinitesimal rotation around the axis {\displaystyle v}, with angular speed equal to the magnitude of {\displaystyle v}. The Lie bracket is a measure of the non-commutativity between two rotations. Since a rotation commutes with itself, one has the alternating property {\displaystyle [x,x]=x\times x=0}.
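These identities are easy to check numerically; a short sketch with numpy and random vectors (illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
x, y, z = rng.normal(size=(3, 3))  # three random vectors in R^3

def bracket(a, b):
    return np.cross(a, b)  # the Lie bracket [a, b] = a × b

# Alternating property: [x, x] = x × x = 0.
assert np.allclose(bracket(x, x), 0)

# Jacobi identity: x × (y × z) + y × (z × x) + z × (x × y) = 0.
jacobi = (bracket(x, bracket(y, z))
          + bracket(y, bracket(z, x))
          + bracket(z, bracket(x, y)))
assert np.allclose(jacobi, 0)
print("cross product satisfies the Lie algebra axioms checked")
```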
== History ==
Lie algebras were introduced to study the concept of infinitesimal transformations by Sophus Lie in the 1870s, and independently discovered by Wilhelm Killing in the 1880s. The name Lie algebra was given by Hermann Weyl in the 1930s; in older texts, the term infinitesimal group was used.
== Definition of a Lie algebra ==
A Lie algebra is a vector space {\displaystyle {\mathfrak {g}}} over a field {\displaystyle F} together with a binary operation {\displaystyle [\,\cdot \,,\cdot \,]:{\mathfrak {g}}\times {\mathfrak {g}}\to {\mathfrak {g}}} called the Lie bracket, satisfying the following axioms:
Bilinearity,
{\displaystyle [ax+by,z]=a[x,z]+b[y,z],}
{\displaystyle [z,ax+by]=a[z,x]+b[z,y]}
for all scalars {\displaystyle a,b} in {\displaystyle F} and all elements {\displaystyle x,y,z} in {\displaystyle {\mathfrak {g}}}.
The alternating property,
{\displaystyle [x,x]=0}
for all {\displaystyle x} in {\displaystyle {\mathfrak {g}}}.
The Jacobi identity,
{\displaystyle [x,[y,z]]+[z,[x,y]]+[y,[z,x]]=0}
for all {\displaystyle x,y,z} in {\displaystyle {\mathfrak {g}}}.
Given a Lie group, the Jacobi identity for its Lie algebra follows from the associativity of the group operation.
Using bilinearity to expand the Lie bracket {\displaystyle [x+y,x+y]} and using the alternating property shows that {\displaystyle [x,y]+[y,x]=0} for all {\displaystyle x,y} in {\displaystyle {\mathfrak {g}}}. Thus bilinearity and the alternating property together imply
Anticommutativity,
{\displaystyle [x,y]=-[y,x],}
for all {\displaystyle x,y} in {\displaystyle {\mathfrak {g}}}. If the field does not have characteristic 2, then anticommutativity implies the alternating property, since it implies {\displaystyle [x,x]=-[x,x].}
It is customary to denote a Lie algebra by a lower-case fraktur letter such as {\displaystyle {\mathfrak {g,h,b,n}}}. If a Lie algebra is associated with a Lie group, then the algebra is denoted by the fraktur version of the group's name: for example, the Lie algebra of SU(n) is {\displaystyle {\mathfrak {su}}(n)}.
=== Generators and dimension ===
The dimension of a Lie algebra over a field means its dimension as a vector space. In physics, a vector space basis of the Lie algebra of a Lie group G may be called a set of generators for G. (They are "infinitesimal generators" for G, so to speak.) In mathematics, a set S of generators for a Lie algebra $\mathfrak{g}$ means a subset of $\mathfrak{g}$ such that any Lie subalgebra (as defined below) that contains S must be all of $\mathfrak{g}$. Equivalently, $\mathfrak{g}$ is spanned (as a vector space) by all iterated brackets of elements of S.
== Basic examples ==
=== Abelian Lie algebras ===
A Lie algebra is called abelian if its Lie bracket is identically zero. Any vector space $V$ endowed with the identically zero Lie bracket becomes a Lie algebra. Every one-dimensional Lie algebra is abelian, by the alternating property of the Lie bracket.
=== The Lie algebra of matrices ===
On an associative algebra $A$ over a field $F$ with multiplication written as $xy$, a Lie bracket may be defined by the commutator $[x,y]=xy-yx$. With this bracket, $A$ is a Lie algebra. (The Jacobi identity follows from the associativity of the multiplication on $A$.)

The endomorphism ring of an $F$-vector space $V$ with the above Lie bracket is denoted $\mathfrak{gl}(V)$.
For a field F and a positive integer n, the space of n × n matrices over F, denoted $\mathfrak{gl}(n,F)$ or $\mathfrak{gl}_n(F)$, is a Lie algebra with bracket given by the commutator of matrices: $[X,Y]=XY-YX$. This is a special case of the previous example; it is a key example of a Lie algebra. It is called the general linear Lie algebra.
When F is the real numbers, $\mathfrak{gl}(n,\mathbb{R})$ is the Lie algebra of the general linear group $\mathrm{GL}(n,\mathbb{R})$, the group of invertible n × n real matrices (or equivalently, matrices with nonzero determinant), where the group operation is matrix multiplication. Likewise, $\mathfrak{gl}(n,\mathbb{C})$ is the Lie algebra of the complex Lie group $\mathrm{GL}(n,\mathbb{C})$. The Lie bracket on $\mathfrak{gl}(n,\mathbb{R})$ describes the failure of commutativity for matrix multiplication, or equivalently for the composition of linear maps. For any field F, $\mathfrak{gl}(n,F)$ can be viewed as the Lie algebra of the algebraic group $\mathrm{GL}(n)$ over F.
== Definitions ==
=== Subalgebras, ideals and homomorphisms ===
The Lie bracket is not required to be associative, meaning that $[[x,y],z]$ need not equal $[x,[y,z]]$. Nonetheless, much of the terminology for associative rings and algebras (and also for groups) has analogs for Lie algebras. A Lie subalgebra is a linear subspace $\mathfrak{h}\subseteq\mathfrak{g}$ which is closed under the Lie bracket. An ideal $\mathfrak{i}\subseteq\mathfrak{g}$ is a linear subspace that satisfies the stronger condition: $[\mathfrak{g},\mathfrak{i}]\subseteq\mathfrak{i}$.
In the correspondence between Lie groups and Lie algebras, subgroups correspond to Lie subalgebras, and normal subgroups correspond to ideals.
A Lie algebra homomorphism is a linear map compatible with the respective Lie brackets: $\phi\colon\mathfrak{g}\to\mathfrak{h}$ with $\phi([x,y])=[\phi(x),\phi(y)]$ for all $x,y\in\mathfrak{g}$. An isomorphism of Lie algebras is a bijective homomorphism.
As with normal subgroups in groups, ideals in Lie algebras are precisely the kernels of homomorphisms. Given a Lie algebra $\mathfrak{g}$ and an ideal $\mathfrak{i}$ in it, the quotient Lie algebra $\mathfrak{g}/\mathfrak{i}$ is defined, with a surjective homomorphism $\mathfrak{g}\to\mathfrak{g}/\mathfrak{i}$ of Lie algebras. The first isomorphism theorem holds for Lie algebras: for any homomorphism $\phi\colon\mathfrak{g}\to\mathfrak{h}$ of Lie algebras, the image of $\phi$ is a Lie subalgebra of $\mathfrak{h}$ that is isomorphic to $\mathfrak{g}/\ker(\phi)$.
For the Lie algebra of a Lie group, the Lie bracket is a kind of infinitesimal commutator. As a result, for any Lie algebra, two elements $x,y\in\mathfrak{g}$ are said to commute if their bracket vanishes: $[x,y]=0$.
The centralizer subalgebra of a subset $S\subset\mathfrak{g}$ is the set of elements commuting with $S$: that is, $\mathfrak{z}_{\mathfrak{g}}(S)=\{x\in\mathfrak{g}:[x,s]=0\text{ for all }s\in S\}$. The centralizer of $\mathfrak{g}$ itself is the center $\mathfrak{z}(\mathfrak{g})$. Similarly, for a subspace S, the normalizer subalgebra of $S$ is $\mathfrak{n}_{\mathfrak{g}}(S)=\{x\in\mathfrak{g}:[x,s]\in S\text{ for all }s\in S\}$. If $S$ is a Lie subalgebra, $\mathfrak{n}_{\mathfrak{g}}(S)$ is the largest subalgebra such that $S$ is an ideal of $\mathfrak{n}_{\mathfrak{g}}(S)$.
==== Example ====
The subspace $\mathfrak{t}_n$ of diagonal matrices in $\mathfrak{gl}(n,F)$ is an abelian Lie subalgebra. (It is a Cartan subalgebra of $\mathfrak{gl}(n)$, analogous to a maximal torus in the theory of compact Lie groups.) Here $\mathfrak{t}_n$ is not an ideal in $\mathfrak{gl}(n)$ for $n\geq 2$. For example, when $n=2$, this follows from the calculation:

$$\left[\begin{bmatrix}a&b\\c&d\end{bmatrix},\begin{bmatrix}x&0\\0&y\end{bmatrix}\right]=\begin{bmatrix}ax&by\\cx&dy\end{bmatrix}-\begin{bmatrix}ax&bx\\cy&dy\end{bmatrix}=\begin{bmatrix}0&b(y-x)\\c(x-y)&0\end{bmatrix}$$

(which is not always in $\mathfrak{t}_2$).
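This calculation can be reproduced numerically with specific values (an illustrative sketch with NumPy; the chosen entries are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # arbitrary [[a, b], [c, d]]
D = np.array([[5.0, 0.0], [0.0, 7.0]])   # diagonal diag(x, y)

B = A @ D - D @ A                         # the Lie bracket [A, D]

# Matches [[0, b(y-x)], [c(x-y), 0]] with a=1, b=2, c=3, d=4, x=5, y=7.
expected = np.array([[0.0, 2.0 * (7 - 5)], [3.0 * (5 - 7), 0.0]])
matches_formula = np.allclose(B, expected)

diag_part_zero = np.allclose(np.diag(B), 0)          # diagonal entries vanish
off_diagonal = not np.allclose(B, np.diag(np.diag(B)))  # but B is not diagonal
print(B, matches_formula, diag_part_zero, off_diagonal)
```

Since the bracket has nonzero off-diagonal entries, it leaves the subspace of diagonal matrices, confirming that $\mathfrak{t}_2$ is not an ideal.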
Every one-dimensional linear subspace of a Lie algebra $\mathfrak{g}$ is an abelian Lie subalgebra, but it need not be an ideal.
=== Product and semidirect product ===
For two Lie algebras $\mathfrak{g}$ and $\mathfrak{g}'$, the product Lie algebra is the vector space $\mathfrak{g}\times\mathfrak{g}'$ consisting of all ordered pairs $(x,x')$, $x\in\mathfrak{g}$, $x'\in\mathfrak{g}'$, with Lie bracket $[(x,x'),(y,y')]=([x,y],[x',y'])$. This is the product in the category of Lie algebras. Note that the copies of $\mathfrak{g}$ and $\mathfrak{g}'$ in $\mathfrak{g}\times\mathfrak{g}'$ commute with each other: $[(x,0),(0,x')]=0$.
Let $\mathfrak{g}$ be a Lie algebra and $\mathfrak{i}$ an ideal of $\mathfrak{g}$. If the canonical map $\mathfrak{g}\to\mathfrak{g}/\mathfrak{i}$ splits (i.e., admits a section $\mathfrak{g}/\mathfrak{i}\to\mathfrak{g}$ as a homomorphism of Lie algebras), then $\mathfrak{g}$ is said to be a semidirect product of $\mathfrak{i}$ and $\mathfrak{g}/\mathfrak{i}$, written $\mathfrak{g}=\mathfrak{g}/\mathfrak{i}\ltimes\mathfrak{i}$. See also semidirect sum of Lie algebras.
=== Derivations ===
For an algebra A over a field F, a derivation of A over F is a linear map $D\colon A\to A$ that satisfies the Leibniz rule $D(xy)=D(x)y+xD(y)$ for all $x,y\in A$. (The definition makes sense for a possibly non-associative algebra.) Given two derivations $D_1$ and $D_2$, their commutator $[D_1,D_2]:=D_1D_2-D_2D_1$ is again a derivation. This operation makes the space $\text{Der}_F(A)$ of all derivations of A over F into a Lie algebra.
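Both claims (the Leibniz rule, and closure of derivations under the commutator) can be checked numerically for a concrete algebra. The sketch below (illustrative, not from the article) uses the associative algebra of 3 × 3 real matrices, on which $D_X(A)=XA-AX$ is a derivation:

```python
import numpy as np

rng = np.random.default_rng(1)
X1, X2, A, B = (rng.standard_normal((3, 3)) for _ in range(4))

def D(X, A):
    """The derivation D_X(A) = XA - AX of the matrix algebra."""
    return X @ A - A @ X

# Leibniz rule for D_X1: D(AB) = D(A)B + A D(B).
leibniz_ok = np.allclose(D(X1, A @ B), D(X1, A) @ B + A @ D(X1, B))

# The commutator [D_X1, D_X2] = D_X1 D_X2 - D_X2 D_X1 is again a derivation:
def comm(A):
    return D(X1, D(X2, A)) - D(X2, D(X1, A))

comm_leibniz_ok = np.allclose(comm(A @ B), comm(A) @ B + A @ comm(B))
print(leibniz_ok, comm_leibniz_ok)
```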
Informally speaking, the space of derivations of A is the Lie algebra of the automorphism group of A. (This is literally true when the automorphism group is a Lie group, for example when F is the real numbers and A has finite dimension as a vector space.) For this reason, spaces of derivations are a natural way to construct Lie algebras: they are the "infinitesimal automorphisms" of A. Indeed, writing out the condition that $(1+\epsilon D)(xy)\equiv(1+\epsilon D)(x)\cdot(1+\epsilon D)(y)\pmod{\epsilon^2}$ (where 1 denotes the identity map on A) gives exactly the definition of D being a derivation.
Example: the Lie algebra of vector fields. Let A be the ring $C^\infty(X)$ of smooth functions on a smooth manifold X. Then a derivation of A over $\mathbb{R}$ is equivalent to a vector field on X. (A vector field v gives a derivation of the space of smooth functions by differentiating functions in the direction of v.) This makes the space $\text{Vect}(X)$ of vector fields into a Lie algebra (see Lie bracket of vector fields). Informally speaking, $\text{Vect}(X)$ is the Lie algebra of the diffeomorphism group of X. So the Lie bracket of vector fields describes the non-commutativity of the diffeomorphism group. An action of a Lie group G on a manifold X determines a homomorphism of Lie algebras $\mathfrak{g}\to\text{Vect}(X)$. (An example is illustrated below.)
A Lie algebra can be viewed as a non-associative algebra, and so each Lie algebra $\mathfrak{g}$ over a field F determines its Lie algebra of derivations, $\text{Der}_F(\mathfrak{g})$. That is, a derivation of $\mathfrak{g}$ is a linear map $D\colon\mathfrak{g}\to\mathfrak{g}$ such that $D([x,y])=[D(x),y]+[x,D(y)]$.
The inner derivation associated to any $x\in\mathfrak{g}$ is the adjoint mapping $\mathrm{ad}_x$ defined by $\mathrm{ad}_x(y):=[x,y]$. (This is a derivation as a consequence of the Jacobi identity.) That gives a homomorphism of Lie algebras, $\operatorname{ad}\colon\mathfrak{g}\to\text{Der}_F(\mathfrak{g})$. The image $\text{Inn}_F(\mathfrak{g})$ is an ideal in $\text{Der}_F(\mathfrak{g})$, and the Lie algebra of outer derivations is defined as the quotient Lie algebra, $\text{Out}_F(\mathfrak{g})=\text{Der}_F(\mathfrak{g})/\text{Inn}_F(\mathfrak{g})$. (This is exactly analogous to the outer automorphism group of a group.) For a semisimple Lie algebra (defined below) over a field of characteristic zero, every derivation is inner. This is related to the theorem that the outer automorphism group of a semisimple Lie group is finite.
In contrast, an abelian Lie algebra has many outer derivations. Namely, for a vector space $V$ with Lie bracket zero, the Lie algebra $\text{Out}_F(V)$ can be identified with $\mathfrak{gl}(V)$.
== Examples ==
=== Matrix Lie algebras ===
A matrix group is a Lie group consisting of invertible matrices, $G\subset\mathrm{GL}(n,\mathbb{R})$, where the group operation of G is matrix multiplication. The corresponding Lie algebra $\mathfrak{g}$ is the space of matrices which are tangent vectors to G inside the linear space $M_n(\mathbb{R})$: this consists of derivatives of smooth curves in G at the identity matrix $I$:

$$\mathfrak{g}=\{X=c'(0)\in M_n(\mathbb{R}):\text{smooth }c\colon\mathbb{R}\to G,\ c(0)=I\}.$$
The Lie bracket of $\mathfrak{g}$ is given by the commutator of matrices, $[X,Y]=XY-YX$. Given a Lie algebra $\mathfrak{g}\subset\mathfrak{gl}(n,\mathbb{R})$, one can recover the Lie group as the subgroup generated by the matrix exponential of elements of $\mathfrak{g}$. (To be precise, this gives the identity component of G, if G is not connected.) Here the exponential mapping $\exp\colon M_n(\mathbb{R})\to M_n(\mathbb{R})$ is defined by $\exp(X)=I+X+\tfrac{1}{2!}X^2+\tfrac{1}{3!}X^3+\cdots$, which converges for every matrix $X$.
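The exponential series can be computed directly by truncation. The sketch below (illustrative; a truncated series, not a production implementation) applies it to a skew-symmetric generator and recovers a rotation matrix, showing how a group element arises from a Lie algebra element:

```python
import numpy as np

def expm_series(X, terms=30):
    """exp(X) = I + X + X^2/2! + ... truncated after `terms` terms."""
    result = np.eye(X.shape[0])
    power = np.eye(X.shape[0])
    for k in range(1, terms):
        power = power @ X / k   # power now holds X^k / k!
        result = result + power
    return result

theta = 0.7
X = theta * np.array([[0.0, -1.0], [1.0, 0.0]])  # element of so(2)
R = expm_series(X)

# exp of this generator is rotation by theta.
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
rotation_ok = np.allclose(R, expected)
print(rotation_ok)
```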
The same comments apply to complex Lie subgroups of $\mathrm{GL}(n,\mathbb{C})$ and the complex matrix exponential, $\exp\colon M_n(\mathbb{C})\to M_n(\mathbb{C})$ (defined by the same formula).
Here are some matrix Lie groups and their Lie algebras.
For a positive integer n, the special linear group $\mathrm{SL}(n,\mathbb{R})$ consists of all real n × n matrices with determinant 1. This is the group of linear maps from $\mathbb{R}^n$ to itself that preserve volume and orientation. More abstractly, $\mathrm{SL}(n,\mathbb{R})$ is the commutator subgroup of the general linear group $\mathrm{GL}(n,\mathbb{R})$. Its Lie algebra $\mathfrak{sl}(n,\mathbb{R})$ consists of all real n × n matrices with trace 0. Similarly, one can define the analogous complex Lie group $\mathrm{SL}(n,\mathbb{C})$ and its Lie algebra $\mathfrak{sl}(n,\mathbb{C})$.
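A fact worth noting here is that the trace of any commutator vanishes, so the bracket of any two elements of $\mathfrak{gl}(n,\mathbb{R})$ already lies in $\mathfrak{sl}(n,\mathbb{R})$; in particular the trace-zero matrices are closed under the bracket. A quick numerical illustration (not from the article):

```python
import numpy as np

rng = np.random.default_rng(2)
X, Y = rng.standard_normal((2, 4, 4))  # two arbitrary 4x4 real matrices

# tr(XY) = tr(YX), so tr([X, Y]) = 0 for any X, Y in gl(n, R).
bracket = X @ Y - Y @ X
trace_zero = np.isclose(np.trace(bracket), 0.0)
print(trace_zero)
```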
The orthogonal group $\mathrm{O}(n)$ plays a basic role in geometry: it is the group of linear maps from $\mathbb{R}^n$ to itself that preserve the length of vectors. For example, rotations and reflections belong to $\mathrm{O}(n)$. Equivalently, this is the group of n × n orthogonal matrices, meaning that $A^{\mathrm{T}}=A^{-1}$, where $A^{\mathrm{T}}$ denotes the transpose of a matrix. The orthogonal group has two connected components; the identity component is called the special orthogonal group $\mathrm{SO}(n)$, consisting of the orthogonal matrices with determinant 1. Both groups have the same Lie algebra $\mathfrak{so}(n)$, the subspace of skew-symmetric matrices in $\mathfrak{gl}(n,\mathbb{R})$ ($X^{\mathrm{T}}=-X$). See also infinitesimal rotations with skew-symmetric matrices.
The complex orthogonal group $\mathrm{O}(n,\mathbb{C})$, its identity component $\mathrm{SO}(n,\mathbb{C})$, and the Lie algebra $\mathfrak{so}(n,\mathbb{C})$ are given by the same formulas applied to n × n complex matrices. Equivalently, $\mathrm{O}(n,\mathbb{C})$ is the subgroup of $\mathrm{GL}(n,\mathbb{C})$ that preserves the standard symmetric bilinear form on $\mathbb{C}^n$.
The unitary group $\mathrm{U}(n)$ is the subgroup of $\mathrm{GL}(n,\mathbb{C})$ that preserves the length of vectors in $\mathbb{C}^n$ (with respect to the standard Hermitian inner product). Equivalently, this is the group of n × n unitary matrices (satisfying $A^*=A^{-1}$, where $A^*$ denotes the conjugate transpose of a matrix). Its Lie algebra $\mathfrak{u}(n)$ consists of the skew-hermitian matrices in $\mathfrak{gl}(n,\mathbb{C})$ ($X^*=-X$). This is a Lie algebra over $\mathbb{R}$, not over $\mathbb{C}$. (Indeed, i times a skew-hermitian matrix is hermitian, rather than skew-hermitian.) Likewise, the unitary group $\mathrm{U}(n)$ is a real Lie subgroup of the complex Lie group $\mathrm{GL}(n,\mathbb{C})$. For example, $\mathrm{U}(1)$ is the circle group, and its Lie algebra (from this point of view) is $i\mathbb{R}\subset\mathbb{C}=\mathfrak{gl}(1,\mathbb{C})$.
The special unitary group $\mathrm{SU}(n)$ is the subgroup of matrices with determinant 1 in $\mathrm{U}(n)$. Its Lie algebra $\mathfrak{su}(n)$ consists of the skew-hermitian matrices with trace zero.
The symplectic group $\mathrm{Sp}(2n,\mathbb{R})$ is the subgroup of $\mathrm{GL}(2n,\mathbb{R})$ that preserves the standard alternating bilinear form on $\mathbb{R}^{2n}$. Its Lie algebra is the symplectic Lie algebra $\mathfrak{sp}(2n,\mathbb{R})$.
The classical Lie algebras are those listed above, along with variants over any field.
=== Two dimensions ===
Some Lie algebras of low dimension are described here. See the classification of low-dimensional real Lie algebras for further examples.
There is a unique nonabelian Lie algebra $\mathfrak{g}$ of dimension 2 over any field F, up to isomorphism. Here $\mathfrak{g}$ has a basis $X,Y$ for which the bracket is given by $[X,Y]=Y$. (This determines the Lie bracket completely, because the axioms imply that $[X,X]=0$ and $[Y,Y]=0$.) Over the real numbers, $\mathfrak{g}$ can be viewed as the Lie algebra of the Lie group $G=\mathrm{Aff}(1,\mathbb{R})$ of affine transformations of the real line, $x\mapsto ax+b$.
The affine group G can be identified with the group of matrices $\begin{pmatrix}a&b\\0&1\end{pmatrix}$ under matrix multiplication, with $a,b\in\mathbb{R}$, $a\neq 0$. Its Lie algebra is the Lie subalgebra $\mathfrak{g}$ of $\mathfrak{gl}(2,\mathbb{R})$ consisting of all matrices $\begin{pmatrix}c&d\\0&0\end{pmatrix}$. In these terms, the basis above for $\mathfrak{g}$ is given by the matrices

$$X=\begin{pmatrix}1&0\\0&0\end{pmatrix},\qquad Y=\begin{pmatrix}0&1\\0&0\end{pmatrix}.$$
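The defining relation $[X,Y]=Y$ can be confirmed directly for these matrices (a quick illustrative check):

```python
import numpy as np

X = np.array([[1.0, 0.0], [0.0, 0.0]])
Y = np.array([[0.0, 1.0], [0.0, 0.0]])

# [X, Y] = XY - YX should equal Y, the defining relation of the
# nonabelian 2-dimensional Lie algebra.
relation_ok = np.allclose(X @ Y - Y @ X, Y)
print(relation_ok)
```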
For any field $F$, the 1-dimensional subspace $F\cdot Y$ is an ideal in the 2-dimensional Lie algebra $\mathfrak{g}$, by the formula $[X,Y]=Y\in F\cdot Y$. Both of the Lie algebras $F\cdot Y$ and $\mathfrak{g}/(F\cdot Y)$ are abelian (because 1-dimensional). In this sense, $\mathfrak{g}$ can be broken into abelian "pieces", meaning that it is solvable (though not nilpotent), in the terminology below.
=== Three dimensions ===
The Heisenberg algebra $\mathfrak{h}_3(F)$ over a field F is the three-dimensional Lie algebra with a basis $X,Y,Z$ such that $[X,Y]=Z$, $[X,Z]=0$, $[Y,Z]=0$.
It can be viewed as the Lie algebra of 3 × 3 strictly upper-triangular matrices, with the commutator Lie bracket and the basis

$$X=\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix},\quad Y=\begin{pmatrix}0&0&0\\0&0&1\\0&0&0\end{pmatrix},\quad Z=\begin{pmatrix}0&0&1\\0&0&0\\0&0&0\end{pmatrix}.$$

Over the real numbers, $\mathfrak{h}_3(\mathbb{R})$ is the Lie algebra of the Heisenberg group $\mathrm{H}_3(\mathbb{R})$, that is, the group of matrices

$$\begin{pmatrix}1&a&c\\0&1&b\\0&0&1\end{pmatrix}$$

under matrix multiplication.
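The Heisenberg relations $[X,Y]=Z$, $[X,Z]=0$, $[Y,Z]=0$ can be verified for the strictly upper-triangular basis above (an illustrative check):

```python
import numpy as np

X = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
Y = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
Z = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]], dtype=float)

def bracket(A, B):
    return A @ B - B @ A

# [X, Y] = Z, and Z commutes with everything (it spans the center).
relations_ok = (np.allclose(bracket(X, Y), Z)
                and np.allclose(bracket(X, Z), 0)
                and np.allclose(bracket(Y, Z), 0))
print(relations_ok)
```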
For any field F, the center of $\mathfrak{h}_3(F)$ is the 1-dimensional ideal $F\cdot Z$, and the quotient $\mathfrak{h}_3(F)/(F\cdot Z)$ is abelian, isomorphic to $F^2$. In the terminology below, it follows that $\mathfrak{h}_3(F)$ is nilpotent (though not abelian).
The Lie algebra $\mathfrak{so}(3)$ of the rotation group SO(3) is the space of skew-symmetric 3 × 3 matrices over $\mathbb{R}$. A basis is given by the three matrices

$$F_1=\begin{pmatrix}0&0&0\\0&0&-1\\0&1&0\end{pmatrix},\quad F_2=\begin{pmatrix}0&0&1\\0&0&0\\-1&0&0\end{pmatrix},\quad F_3=\begin{pmatrix}0&-1&0\\1&0&0\\0&0&0\end{pmatrix}.$$
The commutation relations among these generators are

$$[F_1,F_2]=F_3,\qquad [F_2,F_3]=F_1,\qquad [F_3,F_1]=F_2.$$

The cross product of vectors in $\mathbb{R}^3$ is given by the same formula in terms of the standard basis; so that Lie algebra is isomorphic to $\mathfrak{so}(3)$. Also, $\mathfrak{so}(3)$ is equivalent to the spin angular-momentum component operators for spin-1 particles in quantum mechanics.
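Both the commutation relations and the isomorphism with $(\mathbb{R}^3,\times)$ can be checked numerically. The sketch below (illustrative) uses the standard "hat" map sending the basis vector $e_i$ to $F_i$, which intertwines the cross product with the matrix bracket:

```python
import numpy as np

F1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
F2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
F3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

def bracket(A, B):
    return A @ B - B @ A

# The cyclic commutation relations of so(3).
relations_ok = (np.allclose(bracket(F1, F2), F3)
                and np.allclose(bracket(F2, F3), F1)
                and np.allclose(bracket(F3, F1), F2))

def hat(v):
    """Map a vector in R^3 to the corresponding skew-symmetric matrix."""
    return v[0] * F1 + v[1] * F2 + v[2] * F3

# hat(u x v) = [hat(u), hat(v)]: the hat map is a Lie algebra isomorphism.
rng = np.random.default_rng(3)
u, v = rng.standard_normal((2, 3))
iso_ok = np.allclose(hat(np.cross(u, v)), bracket(hat(u), hat(v)))
print(relations_ok, iso_ok)
```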
The Lie algebra $\mathfrak{so}(3)$ cannot be broken into pieces in the way that the previous examples can: it is simple, meaning that it is not abelian and its only ideals are 0 and all of $\mathfrak{so}(3)$.
Another simple Lie algebra of dimension 3, in this case over $\mathbb{C}$, is the space $\mathfrak{sl}(2,\mathbb{C})$ of 2 × 2 matrices of trace zero. A basis is given by the three matrices

$$H=\begin{pmatrix}1&0\\0&-1\end{pmatrix},\quad E=\begin{pmatrix}0&1\\0&0\end{pmatrix},\quad F=\begin{pmatrix}0&0\\1&0\end{pmatrix}.$$
The Lie bracket is given by:

$$[H,E]=2E,\qquad [H,F]=-2F,\qquad [E,F]=H.$$

Using these formulas, one can show that the Lie algebra $\mathfrak{sl}(2,\mathbb{C})$ is simple, and classify its finite-dimensional representations (defined below). In the terminology of quantum mechanics, one can think of E and F as raising and lowering operators. Indeed, for any representation of $\mathfrak{sl}(2,\mathbb{C})$, the relations above imply that E maps the c-eigenspace of H (for a complex number c) into the $(c+2)$-eigenspace, while F maps the c-eigenspace into the $(c-2)$-eigenspace.
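The relations, and the raising behaviour of E in the defining representation, can be checked directly (an illustrative sketch; in the 2-dimensional defining representation the eigenvalues of H are +1 and -1):

```python
import numpy as np

H = np.array([[1, 0], [0, -1]], dtype=complex)
E = np.array([[0, 1], [0, 0]], dtype=complex)
F = np.array([[0, 0], [1, 0]], dtype=complex)

def bracket(A, B):
    return A @ B - B @ A

# The defining relations of sl(2, C).
relations_ok = (np.allclose(bracket(H, E), 2 * E)
                and np.allclose(bracket(H, F), -2 * F)
                and np.allclose(bracket(E, F), H))

# v is an eigenvector of H with eigenvalue -1; E raises it to the
# (-1 + 2) = +1 eigenspace.
v = np.array([0.0, 1.0], dtype=complex)
raised = E @ v
raising_ok = np.allclose(H @ raised, (-1 + 2) * raised)
print(relations_ok, raising_ok)
```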
The Lie algebra $\mathfrak{sl}(2,\mathbb{C})$ is isomorphic to the complexification of $\mathfrak{so}(3)$, meaning the tensor product $\mathfrak{so}(3)\otimes_{\mathbb{R}}\mathbb{C}$. The formulas for the Lie bracket are easier to analyze in the case of $\mathfrak{sl}(2,\mathbb{C})$. As a result, it is common to analyze complex representations of the group $\mathrm{SO}(3)$ by relating them to representations of the Lie algebra $\mathfrak{sl}(2,\mathbb{C})$.
=== Infinite dimensions ===
The Lie algebra of vector fields on a smooth manifold of positive dimension is an infinite-dimensional Lie algebra over $\mathbb{R}$.
The Kac–Moody algebras are a large class of infinite-dimensional Lie algebras, say over $\mathbb{C}$, with structure much like that of the finite-dimensional simple Lie algebras (such as $\mathfrak{sl}(n,\mathbb{C})$).
The Moyal algebra is an infinite-dimensional Lie algebra that contains all the classical Lie algebras as subalgebras.
The Virasoro algebra is important in string theory.
The functor that takes a Lie algebra over a field F to the underlying vector space has a left adjoint $V\mapsto L(V)$, called the free Lie algebra on a vector space V. It is spanned by all iterated Lie brackets of elements of V, modulo only the relations coming from the definition of a Lie algebra. The free Lie algebra $L(V)$ is infinite-dimensional for V of dimension at least 2.
== Representations ==
=== Definitions ===
Given a vector space V, let $\mathfrak{gl}(V)$ denote the Lie algebra consisting of all linear maps from V to itself, with bracket given by $[X,Y]=XY-YX$. A representation of a Lie algebra $\mathfrak{g}$ on V is a Lie algebra homomorphism $\pi\colon\mathfrak{g}\to\mathfrak{gl}(V)$. That is, $\pi$ sends each element of $\mathfrak{g}$ to a linear map from V to itself, in such a way that the Lie bracket on $\mathfrak{g}$ corresponds to the commutator of linear maps.
A representation is said to be faithful if its kernel is zero. Ado's theorem states that every finite-dimensional Lie algebra over a field of characteristic zero has a faithful representation on a finite-dimensional vector space. Kenkichi Iwasawa extended this result to finite-dimensional Lie algebras over a field of any characteristic. Equivalently, every finite-dimensional Lie algebra over a field F is isomorphic to a Lie subalgebra of $\mathfrak{gl}(n,F)$ for some positive integer n.
=== Adjoint representation ===
For any Lie algebra $\mathfrak{g}$, the adjoint representation is the representation $\operatorname{ad}\colon\mathfrak{g}\to\mathfrak{gl}(\mathfrak{g})$ given by $\operatorname{ad}(x)(y)=[x,y]$. (This is a representation of $\mathfrak{g}$ by the Jacobi identity.)
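The homomorphism property of ad, namely $\operatorname{ad}([x,y])=\operatorname{ad}(x)\operatorname{ad}(y)-\operatorname{ad}(y)\operatorname{ad}(x)$, is a rearrangement of the Jacobi identity; the sketch below (illustrative, using $\mathfrak{gl}(3,\mathbb{R})$) checks it numerically:

```python
import numpy as np

rng = np.random.default_rng(4)
x, y, z = rng.standard_normal((3, 3, 3))

def bracket(a, b):
    return a @ b - b @ a

def ad(a):
    """The adjoint map ad(a) = [a, -], acting on matrices."""
    return lambda b: bracket(a, b)

# ad([x, y]) applied to z vs. the commutator of ad(x) and ad(y) applied to z.
lhs = ad(bracket(x, y))(z)
rhs = ad(x)(ad(y)(z)) - ad(y)(ad(x)(z))
homomorphism_ok = np.allclose(lhs, rhs)
print(homomorphism_ok)
```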
=== Goals of representation theory ===
One important aspect of the study of Lie algebras (especially semisimple Lie algebras, as defined below) is the study of their representations. Although Ado's theorem is an important result, the primary goal of representation theory is not to find a faithful representation of a given Lie algebra $\mathfrak{g}$. Indeed, in the semisimple case, the adjoint representation is already faithful. Rather, the goal is to understand all possible representations of $\mathfrak{g}$. For a semisimple Lie algebra over a field of characteristic zero, Weyl's theorem says that every finite-dimensional representation is a direct sum of irreducible representations (those with no nontrivial invariant subspaces). The finite-dimensional irreducible representations are well understood from several points of view; see the representation theory of semisimple Lie algebras and the Weyl character formula.
=== Universal enveloping algebra ===
The functor that takes an associative algebra A over a field F to A as a Lie algebra (by $[X,Y]:=XY-YX$) has a left adjoint $\mathfrak{g}\mapsto U(\mathfrak{g})$, called the universal enveloping algebra. To construct this: given a Lie algebra $\mathfrak{g}$ over F, let

$$T(\mathfrak{g})=F\oplus\mathfrak{g}\oplus(\mathfrak{g}\otimes\mathfrak{g})\oplus(\mathfrak{g}\otimes\mathfrak{g}\otimes\mathfrak{g})\oplus\cdots$$

be the tensor algebra on $\mathfrak{g}$, also called the free associative algebra on the vector space $\mathfrak{g}$. Here $\otimes$ denotes the tensor product of F-vector spaces. Let I be the two-sided ideal in $T(\mathfrak{g})$ generated by the elements $XY-YX-[X,Y]$ for $X,Y\in\mathfrak{g}$; then the universal enveloping algebra is the quotient ring $U(\mathfrak{g})=T(\mathfrak{g})/I$. It satisfies the Poincaré–Birkhoff–Witt theorem: if $e_1,\ldots,e_n$ is a basis for $\mathfrak{g}$ as an F-vector space, then a basis for $U(\mathfrak{g})$ is given by all ordered products $e_1^{i_1}\cdots e_n^{i_n}$ with $i_1,\ldots,i_n$ natural numbers. In particular, the map $\mathfrak{g}\to U(\mathfrak{g})$ is injective.
Representations of
g
{\displaystyle {\mathfrak {g}}}
are equivalent to modules over the universal enveloping algebra. The fact that
g
→
U
(
g
)
{\displaystyle {\mathfrak {g}}\to U({\mathfrak {g}})}
is injective implies that every Lie algebra (possibly of infinite dimension) has a faithful representation (of infinite dimension), namely its representation on
U
(
g
)
{\displaystyle U({\mathfrak {g}})}
. This also shows that every Lie algebra is contained in the Lie algebra associated to some associative algebra.
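The bracket $[X,Y]=XY-YX$ on an associative algebra really is a Lie bracket. A quick numerical sketch (NumPy; random 4×4 matrices stand in for generic elements, an assumption of the example) checks antisymmetry and the Jacobi identity:

```python
import numpy as np

rng = np.random.default_rng(0)

def bracket(x, y):
    """Commutator bracket [X, Y] = XY - YX on an associative (matrix) algebra."""
    return x @ y - y @ x

x, y, z = (rng.standard_normal((4, 4)) for _ in range(3))

# Antisymmetry: [X, Y] = -[Y, X]
assert np.allclose(bracket(x, y), -bracket(y, x))

# Jacobi identity: [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0
jacobi = (bracket(x, bracket(y, z)) + bracket(y, bracket(z, x))
          + bracket(z, bracket(x, y)))
assert np.allclose(jacobi, 0)
```

The same check works for any associative algebra represented by matrices, which is the content of the functor described above.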
=== Representation theory in physics ===
The representation theory of Lie algebras plays an important role in various parts of theoretical physics. There, one considers operators on the space of states that satisfy certain natural commutation relations. These commutation relations typically come from a symmetry of the problem: specifically, they are the relations of the Lie algebra of the relevant symmetry group. An example is the angular momentum operators, whose commutation relations are those of the Lie algebra $\mathfrak{so}(3)$ of the rotation group $\mathrm{SO}(3)$. Typically, the space of states is far from being irreducible under the pertinent operators, but one can attempt to decompose it into irreducible pieces. In doing so, one needs to know the irreducible representations of the given Lie algebra. In the study of the hydrogen atom, for example, quantum mechanics textbooks classify (more or less explicitly) the finite-dimensional irreducible representations of the Lie algebra $\mathfrak{so}(3)$.
== Structure theory and classification ==
Lie algebras can be classified to some extent. This is a powerful approach to the classification of Lie groups.
=== Abelian, nilpotent, and solvable ===
Analogously to abelian, nilpotent, and solvable groups, one can define abelian, nilpotent, and solvable Lie algebras.
A Lie algebra $\mathfrak{g}$ is abelian if the Lie bracket vanishes; that is, [x,y] = 0 for all x and y in $\mathfrak{g}$. In particular, the Lie algebra of an abelian Lie group (such as the group $\mathbb{R}^n$ under addition or the torus group $\mathbb{T}^n$) is abelian. Every finite-dimensional abelian Lie algebra over a field $F$ is isomorphic to $F^n$ for some $n \geq 0$, meaning an n-dimensional vector space with Lie bracket zero.
A more general class of Lie algebras is defined by the vanishing of all commutators of given length. First, the commutator subalgebra (or derived subalgebra) of a Lie algebra $\mathfrak{g}$ is $[\mathfrak{g},\mathfrak{g}]$, meaning the linear subspace spanned by all brackets $[x,y]$ with $x, y \in \mathfrak{g}$. The commutator subalgebra is an ideal in $\mathfrak{g}$, in fact the smallest ideal such that the quotient Lie algebra is abelian. It is analogous to the commutator subgroup of a group.
A Lie algebra $\mathfrak{g}$ is nilpotent if the lower central series

$$\mathfrak{g} \supseteq [\mathfrak{g},\mathfrak{g}] \supseteq [[\mathfrak{g},\mathfrak{g}],\mathfrak{g}] \supseteq [[[\mathfrak{g},\mathfrak{g}],\mathfrak{g}],\mathfrak{g}] \supseteq \cdots$$

becomes zero after finitely many steps. Equivalently, $\mathfrak{g}$ is nilpotent if there is a finite sequence of ideals in $\mathfrak{g}$,

$$0 = \mathfrak{a}_0 \subseteq \mathfrak{a}_1 \subseteq \cdots \subseteq \mathfrak{a}_r = \mathfrak{g},$$

such that $\mathfrak{a}_j/\mathfrak{a}_{j-1}$ is central in $\mathfrak{g}/\mathfrak{a}_{j-1}$ for each j. By Engel's theorem, a Lie algebra over any field is nilpotent if and only if for every u in $\mathfrak{g}$ the adjoint endomorphism

$$\operatorname{ad}(u)\colon \mathfrak{g}\to\mathfrak{g},\quad \operatorname{ad}(u)v = [u,v]$$

is nilpotent.
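Engel's criterion can be checked concretely for the strictly upper-triangular algebra $\mathfrak{u}_3$: two applications of $\operatorname{ad}(u)$ kill every element, so each $\operatorname{ad}(u)$ is nilpotent. A small sketch (NumPy; random matrices stand in for generic elements):

```python
import numpy as np

rng = np.random.default_rng(1)

def strictly_upper(rng, n=3):
    """Random element of u_n: a strictly upper-triangular n x n matrix."""
    return np.triu(rng.standard_normal((n, n)), k=1)

def ad(u):
    """Adjoint endomorphism ad(u): v -> [u, v]."""
    return lambda v: u @ v - v @ u

u, v = strictly_upper(rng), strictly_upper(rng)

# ad(u)^2 vanishes on u_3 ([u, v] lands in the span of the corner matrix
# E_13, which commutes with everything in u_3), so ad(u) is nilpotent,
# consistent with Engel's theorem: u_3 is a nilpotent Lie algebra.
assert np.allclose(ad(u)(ad(u)(v)), 0)
```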
More generally, a Lie algebra $\mathfrak{g}$ is said to be solvable if the derived series

$$\mathfrak{g} \supseteq [\mathfrak{g},\mathfrak{g}] \supseteq [[\mathfrak{g},\mathfrak{g}],[\mathfrak{g},\mathfrak{g}]] \supseteq [[[\mathfrak{g},\mathfrak{g}],[\mathfrak{g},\mathfrak{g}]],[[\mathfrak{g},\mathfrak{g}],[\mathfrak{g},\mathfrak{g}]]] \supseteq \cdots$$

becomes zero after finitely many steps. Equivalently, $\mathfrak{g}$ is solvable if there is a finite sequence of Lie subalgebras,

$$0 = \mathfrak{m}_0 \subseteq \mathfrak{m}_1 \subseteq \cdots \subseteq \mathfrak{m}_r = \mathfrak{g},$$

such that $\mathfrak{m}_{j-1}$ is an ideal in $\mathfrak{m}_j$ with $\mathfrak{m}_j/\mathfrak{m}_{j-1}$ abelian for each j.
Every finite-dimensional Lie algebra over a field has a unique maximal solvable ideal, called its radical. Under the Lie correspondence, nilpotent (respectively, solvable) Lie groups correspond to nilpotent (respectively, solvable) Lie algebras over $\mathbb{R}$.
For example, for a positive integer n and a field F of characteristic zero, the radical of $\mathfrak{gl}(n,F)$ is its center, the 1-dimensional subspace spanned by the identity matrix. An example of a solvable Lie algebra is the space $\mathfrak{b}_n$ of upper-triangular matrices in $\mathfrak{gl}(n)$; this is not nilpotent when $n \geq 2$. An example of a nilpotent Lie algebra is the space $\mathfrak{u}_n$ of strictly upper-triangular matrices in $\mathfrak{gl}(n)$; this is not abelian when $n \geq 3$.
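The solvability of $\mathfrak{b}_n$ can be watched step by step for $n = 3$: brackets of upper-triangular matrices are strictly upper-triangular, brackets of those land in the span of the corner matrix $E_{13}$, and one more step gives zero. A sketch (NumPy; random samples stand in for generic elements, an assumption of the example):

```python
import numpy as np

rng = np.random.default_rng(2)

def bracket(x, y):
    return x @ y - y @ x

# Generic elements of b_3 (upper-triangular) and u_3 (strictly upper-triangular).
b1, b2 = (np.triu(rng.standard_normal((3, 3))) for _ in range(2))
u1, u2 = (np.triu(rng.standard_normal((3, 3)), k=1) for _ in range(2))
u3, u4 = (np.triu(rng.standard_normal((3, 3)), k=1) for _ in range(2))

# Step 1: [b_3, b_3] lies in u_3 (the diagonal of a commutator of
# triangular matrices vanishes, since diagonal parts commute).
c1 = bracket(b1, b2)
assert np.allclose(np.diag(c1), 0)

# Step 2: [u_3, u_3] lies in the span of the corner matrix E_13.
c2, c3 = bracket(u1, u2), bracket(u3, u4)
E13 = np.eye(3, 3, 2)
assert np.allclose(c2, c2[0, 2] * E13)

# Step 3: corner matrices commute, so the derived series of b_3 reaches 0
# in three steps: b_3 is solvable.
assert np.allclose(bracket(c2, c3), 0)
```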
=== Simple and semisimple ===
A Lie algebra $\mathfrak{g}$ is called simple if it is not abelian and the only ideals in $\mathfrak{g}$ are 0 and $\mathfrak{g}$. (In particular, a one-dimensional, necessarily abelian, Lie algebra $\mathfrak{g}$ is by definition not simple, even though its only ideals are 0 and $\mathfrak{g}$.) A finite-dimensional Lie algebra $\mathfrak{g}$ is called semisimple if the only solvable ideal in $\mathfrak{g}$ is 0. In characteristic zero, a Lie algebra $\mathfrak{g}$ is semisimple if and only if it is isomorphic to a product of simple Lie algebras, $\mathfrak{g} \cong \mathfrak{g}_1 \times \cdots \times \mathfrak{g}_r$.
For example, the Lie algebra $\mathfrak{sl}(n,F)$ is simple for every $n \geq 2$ and every field F of characteristic zero (or just of characteristic not dividing n). The Lie algebra $\mathfrak{su}(n)$ over $\mathbb{R}$ is simple for every $n \geq 2$. The Lie algebra $\mathfrak{so}(n)$ over $\mathbb{R}$ is simple if $n = 3$ or $n \geq 5$. (There are "exceptional isomorphisms" $\mathfrak{so}(3) \cong \mathfrak{su}(2)$ and $\mathfrak{so}(4) \cong \mathfrak{su}(2) \times \mathfrak{su}(2)$.)
The concept of semisimplicity for Lie algebras is closely related with the complete reducibility (semisimplicity) of their representations. When the ground field F has characteristic zero, every finite-dimensional representation of a semisimple Lie algebra is semisimple (that is, a direct sum of irreducible representations).
A finite-dimensional Lie algebra over a field of characteristic zero is called reductive if its adjoint representation is semisimple. Every reductive Lie algebra is isomorphic to the product of an abelian Lie algebra and a semisimple Lie algebra.
For example, $\mathfrak{gl}(n,F)$ is reductive for F of characteristic zero: for $n \geq 2$, it is isomorphic to the product

$$\mathfrak{gl}(n,F) \cong F \times \mathfrak{sl}(n,F),$$

where F denotes the center of $\mathfrak{gl}(n,F)$, the 1-dimensional subspace spanned by the identity matrix. Since the special linear Lie algebra $\mathfrak{sl}(n,F)$ is simple, $\mathfrak{gl}(n,F)$ contains few ideals: only 0, the center F, $\mathfrak{sl}(n,F)$, and all of $\mathfrak{gl}(n,F)$.
=== Cartan's criterion ===
Cartan's criterion (by Élie Cartan) gives conditions for a finite-dimensional Lie algebra of characteristic zero to be solvable or semisimple. It is expressed in terms of the Killing form, the symmetric bilinear form on $\mathfrak{g}$ defined by

$$K(u,v) = \operatorname{tr}(\operatorname{ad}(u)\operatorname{ad}(v)),$$

where tr denotes the trace of a linear operator. Namely: a Lie algebra $\mathfrak{g}$ is semisimple if and only if the Killing form is nondegenerate. A Lie algebra $\mathfrak{g}$ is solvable if and only if $K(\mathfrak{g},[\mathfrak{g},\mathfrak{g}]) = 0$.
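The Killing form can be computed directly from the definition for $\mathfrak{sl}(2,\mathbb{R})$. In the standard basis $e = E_{12}$, $h = \operatorname{diag}(1,-1)$, $f = E_{21}$ (a conventional choice, not fixed by the article) it works out to $K(h,h)=8$, $K(e,f)=4$, all other pairings zero, so the form is nondegenerate and Cartan's criterion confirms semisimplicity. A sketch:

```python
import numpy as np

e = np.array([[0., 1.], [0., 0.]])          # E_12
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])          # E_21
basis = [e, h, f]

def coords(m):
    """Coordinates of a traceless 2x2 matrix in the basis (e, h, f)."""
    return np.array([m[0, 1], m[0, 0], m[1, 0]])

def ad_matrix(x):
    """Matrix of ad(x): v -> [x, v] in the chosen basis."""
    return np.column_stack([coords(x @ b - b @ x) for b in basis])

# Killing form K(u, v) = tr(ad(u) ad(v)) as a 3x3 Gram matrix.
K = np.array([[np.trace(ad_matrix(u) @ ad_matrix(v)) for v in basis]
              for u in basis])

assert np.isclose(K[1, 1], 8) and np.isclose(K[0, 2], 4)  # K(h,h), K(e,f)
assert abs(np.linalg.det(K)) > 1e-9   # nondegenerate => semisimple
```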
=== Classification ===
The Levi decomposition asserts that every finite-dimensional Lie algebra over a field of characteristic zero is a semidirect product of its solvable radical and a semisimple Lie algebra. Moreover, a semisimple Lie algebra in characteristic zero is a product of simple Lie algebras, as mentioned above. This focuses attention on the problem of classifying the simple Lie algebras.
The simple Lie algebras of finite dimension over an algebraically closed field F of characteristic zero were classified by Killing and Cartan in the 1880s and 1890s, using root systems. Namely, every simple Lie algebra is of type An, Bn, Cn, Dn, E6, E7, E8, F4, or G2. Here the simple Lie algebra of type An is $\mathfrak{sl}(n+1,F)$, Bn is $\mathfrak{so}(2n+1,F)$, Cn is $\mathfrak{sp}(2n,F)$, and Dn is $\mathfrak{so}(2n,F)$. The other five are known as the exceptional Lie algebras.
The classification of finite-dimensional simple Lie algebras over $\mathbb{R}$ is more complicated, but it was also solved by Cartan (see simple Lie group for an equivalent classification). One can analyze a Lie algebra $\mathfrak{g}$ over $\mathbb{R}$ by considering its complexification $\mathfrak{g} \otimes_{\mathbb{R}} \mathbb{C}$.
In the years leading up to 2004, the finite-dimensional simple Lie algebras over an algebraically closed field of characteristic $p > 3$ were classified by Richard Earl Block, Robert Lee Wilson, Alexander Premet, and Helmut Strade. (See restricted Lie algebra#Classification of simple Lie algebras.) It turns out that there are many more simple Lie algebras in positive characteristic than in characteristic zero.
== Relation to Lie groups ==
Although Lie algebras can be studied in their own right, historically they arose as a means to study Lie groups.
The relationship between Lie groups and Lie algebras can be summarized as follows. Each Lie group determines a Lie algebra over $\mathbb{R}$ (concretely, the tangent space at the identity). Conversely, for every finite-dimensional Lie algebra $\mathfrak{g}$, there is a connected Lie group $G$ with Lie algebra $\mathfrak{g}$. This is Lie's third theorem; see the Baker–Campbell–Hausdorff formula. This Lie group is not determined uniquely; however, any two Lie groups with the same Lie algebra are locally isomorphic, and more strongly, they have the same universal cover. For instance, the special orthogonal group SO(3) and the special unitary group SU(2) have isomorphic Lie algebras, but SU(2) is a simply connected double cover of SO(3).
For simply connected Lie groups, there is a complete correspondence: taking the Lie algebra gives an equivalence of categories from simply connected Lie groups to Lie algebras of finite dimension over $\mathbb{R}$.
The correspondence between Lie algebras and Lie groups is used in several ways, including in the classification of Lie groups and the representation theory of Lie groups. For finite-dimensional representations, there is an equivalence of categories between representations of a real Lie algebra and representations of the corresponding simply connected Lie group. This simplifies the representation theory of Lie groups: it is often easier to classify the representations of a Lie algebra, using linear algebra.
Every connected Lie group is isomorphic to its universal cover modulo a discrete central subgroup. So classifying Lie groups becomes simply a matter of counting the discrete subgroups of the center, once the Lie algebra is known. For example, the real semisimple Lie algebras were classified by Cartan, and so the classification of semisimple Lie groups is well understood.
For infinite-dimensional Lie algebras, Lie theory works less well. The exponential map need not be a local homeomorphism (for example, in the diffeomorphism group of the circle, there are diffeomorphisms arbitrarily close to the identity that are not in the image of the exponential map). Moreover, in terms of the existing notions of infinite-dimensional Lie groups, some infinite-dimensional Lie algebras do not come from any group.
Lie theory also does not work so neatly for infinite-dimensional representations of a finite-dimensional group. Even for the additive group $G = \mathbb{R}$, an infinite-dimensional representation of $G$ can usually not be differentiated to produce a representation of its Lie algebra on the same space, or vice versa. The theory of Harish-Chandra modules is a more subtle relation between infinite-dimensional representations for groups and Lie algebras.
== Real form and complexification ==
Given a complex Lie algebra $\mathfrak{g}$, a real Lie algebra $\mathfrak{g}_0$ is said to be a real form of $\mathfrak{g}$ if the complexification $\mathfrak{g}_0 \otimes_{\mathbb{R}} \mathbb{C}$ is isomorphic to $\mathfrak{g}$. A real form need not be unique; for example, $\mathfrak{sl}(2,\mathbb{C})$ has two real forms up to isomorphism, $\mathfrak{sl}(2,\mathbb{R})$ and $\mathfrak{su}(2)$.
Given a semisimple complex Lie algebra $\mathfrak{g}$, a split form of it is a real form that splits; i.e., it has a Cartan subalgebra which acts via an adjoint representation with real eigenvalues. A split form exists and is unique (up to isomorphism). A compact form is a real form that is the Lie algebra of a compact Lie group. A compact form exists and is also unique up to isomorphism.
== Lie algebra with additional structures ==
A Lie algebra may be equipped with additional structures that are compatible with the Lie bracket. For example, a graded Lie algebra is a Lie algebra (or more generally a Lie superalgebra) with a compatible grading. A differential graded Lie algebra also comes with a differential, making the underlying vector space a chain complex.
For example, the homotopy groups of a simply connected topological space form a graded Lie algebra, using the Whitehead product. In a related construction, Daniel Quillen used differential graded Lie algebras over the rational numbers $\mathbb{Q}$ to describe rational homotopy theory in algebraic terms.
== Lie ring ==
The definition of a Lie algebra over a field extends to define a Lie algebra over any commutative ring R. Namely, a Lie algebra $\mathfrak{g}$ over R is an R-module with an alternating R-bilinear map $[\ ,\ ]\colon \mathfrak{g}\times\mathfrak{g}\to\mathfrak{g}$ that satisfies the Jacobi identity. A Lie algebra over the ring $\mathbb{Z}$ of integers is sometimes called a Lie ring. (This is not directly related to the notion of a Lie group.)
Lie rings are used in the study of finite p-groups (for a prime number p) through the Lazard correspondence. The lower central factors of a finite p-group are finite abelian p-groups. The direct sum of the lower central factors is given the structure of a Lie ring by defining the bracket to be the commutator of two coset representatives; see the example below.
p-adic Lie groups are related to Lie algebras over the field $\mathbb{Q}_p$ of p-adic numbers as well as over the ring $\mathbb{Z}_p$ of p-adic integers. Part of Claude Chevalley's construction of the finite groups of Lie type involves showing that a simple Lie algebra over the complex numbers comes from a Lie algebra over the integers, and then (with more care) a group scheme over the integers.
=== Examples ===
Here is a construction of Lie rings arising from the study of abstract groups. For elements $x, y$ of a group, define the commutator $[x,y] = x^{-1}y^{-1}xy$. Let

$$G = G_1 \supseteq G_2 \supseteq G_3 \supseteq \cdots \supseteq G_n \supseteq \cdots$$

be a filtration of a group $G$, that is, a chain of subgroups such that $[G_i, G_j]$ is contained in $G_{i+j}$ for all $i, j$. (For the Lazard correspondence, one takes the filtration to be the lower central series of G.) Then

$$L = \bigoplus_{i\geq 1} G_i/G_{i+1}$$

is a Lie ring, with addition given by the group multiplication (which is abelian on each quotient group $G_i/G_{i+1}$), and with Lie bracket

$$G_i/G_{i+1} \times G_j/G_{j+1} \to G_{i+j}/G_{i+j+1}$$

given by commutators in the group:

$$[xG_{i+1}, yG_{j+1}] := [x,y]G_{i+j+1}.$$

For example, the Lie ring associated to the lower central series on the dihedral group of order 8 is the Heisenberg Lie algebra of dimension 3 over the field $\mathbb{Z}/2\mathbb{Z}$.
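This example can be computed by hand or by machine. Encoding each element of the dihedral group of order 8 as a pair $(r, s)$ with $r \in \mathbb{Z}/4$, $s \in \mathbb{Z}/2$ (the encoding is an assumption of the sketch), the lower central series is $G \supseteq \{1, r^2\} \supseteq 1$; the two factors contribute $2 + 1 = 3$ dimensions over $\mathbb{Z}/2\mathbb{Z}$, and the nonzero bracket $[rG_2, sG_2] = r^2G_3$ is the Heisenberg relation:

```python
from itertools import product

# Dihedral group of order 8 as pairs (r, s), r in Z/4, s in Z/2, with
# (r1, s1)(r2, s2) = (r1 + (-1)^s1 * r2 mod 4, s1 + s2 mod 2).
def mul(a, b):
    (r1, s1), (r2, s2) = a, b
    return ((r1 + (-1) ** s1 * r2) % 4, (s1 + s2) % 2)

def inv(a):
    r, s = a
    return ((-((-1) ** s) * r) % 4, s)

def comm(x, y):
    """Group commutator [x, y] = x^-1 y^-1 x y."""
    return mul(mul(inv(x), inv(y)), mul(x, y))

G = list(product(range(4), range(2)))

def commutators(A, B):
    # Set of commutator values; for this group it is already a subgroup.
    return {comm(x, y) for x in A for y in B}

G2 = commutators(G, G)            # [G, G]
G3 = commutators(G, G2)           # [G, G2]

assert G2 == {(0, 0), (2, 0)}     # {1, r^2}: second term of the series
assert G3 == {(0, 0)}             # r^2 is central, so the series ends

# Heisenberg relation: the bracket of the classes of r and s in G/G2
# is the nonzero class r^2 in G2/G3.
assert comm((1, 0), (0, 1)) == (2, 0)
```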
== Definition using category-theoretic notation ==
The definition of a Lie algebra can be reformulated more abstractly in the language of category theory. Namely, one can define a Lie algebra in terms of linear maps—that is, morphisms in the category of vector spaces—without considering individual elements. (In this section, the field over which the algebra is defined is assumed to be of characteristic different from 2.)
For the category-theoretic definition of Lie algebras, two braiding isomorphisms are needed. If A is a vector space, the interchange isomorphism $\tau\colon A\otimes A\to A\otimes A$ is defined by

$$\tau(x\otimes y) = y\otimes x.$$

The cyclic-permutation braiding $\sigma\colon A\otimes A\otimes A\to A\otimes A\otimes A$ is defined as

$$\sigma = (\mathrm{id}\otimes\tau)\circ(\tau\otimes\mathrm{id}),$$

where $\mathrm{id}$ is the identity morphism. Equivalently, $\sigma$ is defined by

$$\sigma(x\otimes y\otimes z) = y\otimes z\otimes x.$$
With this notation, a Lie algebra can be defined as an object $A$ in the category of vector spaces together with a morphism $[\cdot,\cdot]\colon A\otimes A\to A$ that satisfies the two morphism equalities

$$[\cdot,\cdot]\circ(\mathrm{id}+\tau) = 0,$$

and

$$[\cdot,\cdot]\circ([\cdot,\cdot]\otimes\mathrm{id})\circ(\mathrm{id}+\sigma+\sigma^2) = 0.$$
== Generalization ==
Several generalizations of a Lie algebra have been proposed, many from physics. Among them are graded Lie algebras, Lie superalgebras, and Lie n-algebras.
== See also ==
== Remarks ==
== References ==
== Sources ==
Bourbaki, Nicolas (1989). Lie Groups and Lie Algebras: Chapters 1-3. Springer. ISBN 978-3-540-64242-8. MR 1728312.
Erdmann, Karin; Wildon, Mark (2006). Introduction to Lie Algebras. Springer. ISBN 1-84628-040-0. MR 2218355.
Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.
Hall, Brian C. (2015). Lie groups, Lie Algebras, and Representations: An Elementary Introduction. Graduate Texts in Mathematics. Vol. 222 (2nd ed.). Springer. doi:10.1007/978-3-319-13467-3. ISBN 978-3319134666. ISSN 0072-5285. MR 3331229.
Humphreys, James E. (1978). Introduction to Lie Algebras and Representation Theory. Graduate Texts in Mathematics. Vol. 9 (2nd ed.). Springer-Verlag. ISBN 978-0-387-90053-7. MR 0499562.
Jacobson, Nathan (1979) [1962]. Lie Algebras. Dover. ISBN 978-0-486-63832-4. MR 0559927.
Khukhro, E. I. (1998), p-Automorphisms of Finite p-Groups, Cambridge University Press, doi:10.1017/CBO9780511526008, ISBN 0-521-59717-X, MR 1615819
Knapp, Anthony W. (2001) [1986], Representation Theory of Semisimple Groups: an Overview Based on Examples, Princeton University Press, ISBN 0-691-09089-0, MR 1880691
Milnor, John (2010) [1986], "Remarks on infinite-dimensional Lie groups", Collected Papers of John Milnor, vol. 5, American Mathematical Soc., pp. 91–141, ISBN 978-0-8218-4876-0, MR 0830252
O'Connor, J.J; Robertson, E.F. (2000). "Marius Sophus Lie". MacTutor History of Mathematics Archive.
O'Connor, J.J; Robertson, E.F. (2005). "Wilhelm Karl Joseph Killing". MacTutor History of Mathematics Archive.
Quillen, Daniel (1969), "Rational homotopy theory", Annals of Mathematics, 90 (2): 205–295, doi:10.2307/1970725, JSTOR 1970725, MR 0258031
Serre, Jean-Pierre (2006). Lie Algebras and Lie Groups (2nd ed.). Springer. ISBN 978-3-540-55008-2. MR 2179691.
Varadarajan, Veeravalli S. (1984) [1974]. Lie Groups, Lie Algebras, and Their Representations. Springer. ISBN 978-0-387-90969-1. MR 0746308.
Wigner, Eugene (1959). Group Theory and its Application to the Quantum Mechanics of Atomic Spectra. Translated by J. J. Griffin. Academic Press. ISBN 978-0127505503. MR 0106711.
== External links ==
Kac, Victor G.; et al. Course notes for MIT 18.745: Introduction to Lie Algebras. Archived from the original on 2010-04-20.
"Lie algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
McKenzie, Douglas (2015). "An Elementary Introduction to Lie Algebras for Physicists".
In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform (with sign reversal) of the probability density function. Thus it provides an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables.
In addition to univariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can also be extended to more generic cases.
The characteristic function always exists when treated as a function of a real-valued argument, unlike the moment-generating function. There are relations between the behavior of the characteristic function of a distribution and properties of the distribution, such as the existence of moments and the existence of a density function.
== Introduction ==
The characteristic function is a way to describe a random variable X.
The characteristic function

$$\varphi_X(t) = \operatorname{E}\left[e^{itX}\right],$$

a function of t, determines the behavior and properties of the probability distribution of X.
It is equivalent to a probability density function or cumulative distribution function, since knowing one of these functions allows computation of the others, but they provide different insights into the features of the random variable. In particular cases, one or another of these equivalent functions may be easier to represent in terms of simple standard functions.
If a random variable admits a density function, then the characteristic function is its Fourier dual, in the sense that each of them is a Fourier transform of the other. If a random variable has a moment-generating function $M_X(t)$, then the domain of the characteristic function can be extended to the complex plane, and

$$\varphi_X(-it) = M_X(t).$$

Note however that the characteristic function of a distribution is well defined for all real values of t, even when the moment-generating function is not well defined for all real values of t.
The characteristic function approach is particularly useful in analysis of linear combinations of independent random variables: a classical proof of the Central Limit Theorem uses characteristic functions and Lévy's continuity theorem. Another important application is to the theory of the decomposability of random variables.
== Definition ==
For a scalar random variable X the characteristic function is defined as the expected value of eitX, where i is the imaginary unit, and t ∈ R is the argument of the characteristic function:
$$\varphi_X\colon \mathbb{R}\to\mathbb{C},\qquad \varphi_X(t) = \operatorname{E}\left[e^{itX}\right] = \int_{\mathbb{R}} e^{itx}\,dF_X(x) = \int_{\mathbb{R}} e^{itx} f_X(x)\,dx = \int_0^1 e^{itQ_X(p)}\,dp.$$

Here FX is the cumulative distribution function of X, fX is the corresponding probability density function, QX(p) is the corresponding inverse cumulative distribution function, also called the quantile function, and the integrals are of the Riemann–Stieltjes kind. If a random variable X has a probability density function, then the characteristic function is its Fourier transform with sign reversal in the complex exponential. This convention for the constants appearing in the definition of the characteristic function differs from the usual convention for the Fourier transform. For example, some authors define φX(t) = E[e−2πitX], which is essentially a change of parameter. Other notation may be encountered in the literature: $\hat{p}$ as the characteristic function for a probability measure p, or $\hat{f}$ as the characteristic function corresponding to a density f.
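As a quick numerical illustration of the definition, the empirical mean of $e^{itX}$ over samples of a standard normal converges to the known characteristic function $\varphi(t) = e^{-t^2/2}$. A sketch (NumPy; the sample size and tolerance are arbitrary choices of the example):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal(200_000)

def empirical_cf(x, t):
    """Monte Carlo estimate of the characteristic function E[exp(itX)]."""
    return np.exp(1j * t * x).mean()

for t in [0.0, 0.5, 1.0, 2.0]:
    estimate = empirical_cf(samples, t)
    exact = np.exp(-t**2 / 2)        # characteristic function of N(0, 1)
    assert abs(estimate - exact) < 0.02
```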
== Generalizations ==
The notion of characteristic functions generalizes to multivariate random variables and more complicated random elements. The argument of the characteristic function will always belong to the continuous dual of the space where the random variable X takes its values. For common cases such definitions are listed below:
If X is a k-dimensional random vector, then for t ∈ Rk

$$\varphi_X(t) = \operatorname{E}\left[\exp(it^T X)\right],$$

where $t^T$ is the transpose of the vector $t$.
If X is a k × p-dimensional random matrix, then for t ∈ Rk×p

$$\varphi_X(t) = \operatorname{E}\left[\exp\left(i\operatorname{tr}(t^T X)\right)\right],$$

where $\operatorname{tr}(\cdot)$ is the trace operator.
If X is a complex random variable, then for t ∈ C

$$\varphi_X(t) = \operatorname{E}\left[\exp\left(i\operatorname{Re}(\overline{t}X)\right)\right],$$

where $\overline{t}$ is the complex conjugate of $t$ and $\operatorname{Re}(z)$ is the real part of the complex number $z$.
If X is a k-dimensional complex random vector, then for t ∈ Ck

$$\varphi_X(t) = \operatorname{E}\left[\exp(i\operatorname{Re}(t^{*}X))\right],$$

where $t^{*}$ is the conjugate transpose of the vector $t$.
If X(s) is a stochastic process, then for all functions t(s) such that the integral $\int_{\mathbb{R}} t(s)X(s)\,ds$ converges for almost all realizations of X,

$$\varphi_X(t) = \operatorname{E}\left[\exp\left(i\int_{\mathbb{R}} t(s)X(s)\,ds\right)\right].$$
== Examples ==
Oberhettinger (1973) provides extensive tables of characteristic functions.
== Properties ==
The characteristic function of a real-valued random variable always exists, since it is an integral of a bounded continuous function over a space whose measure is finite.
A characteristic function is uniformly continuous on the entire space.
It is non-vanishing in a region around zero: φ(0) = 1.
It is bounded: |φ(t)| ≤ 1.
It is Hermitian: $\varphi(-t) = \overline{\varphi(t)}$. In particular, the characteristic function of a symmetric (around the origin) random variable is real-valued and even.
There is a bijection between probability distributions and characteristic functions. That is, for any two random variables X1, X2, both have the same probability distribution if and only if $\varphi_{X_1} = \varphi_{X_2}$.
If a random variable X has moments up to k-th order, then the characteristic function φX is k times continuously differentiable on the entire real line. In this case

$$\operatorname{E}[X^k] = i^{-k}\varphi_X^{(k)}(0).$$
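The moment formula can be checked numerically for the exponential distribution with rate 1, whose characteristic function $\varphi(t) = 1/(1-it)$ is known in closed form and whose moments are $\operatorname{E}[X^k] = k!$. A sketch using central finite differences (the step size is an arbitrary choice):

```python
def phi(t):
    """Characteristic function of the Exponential(1) distribution."""
    return 1.0 / (1.0 - 1j * t)

h = 1e-5

# First derivative at 0: E[X] = i^{-1} phi'(0); for Exp(1), E[X] = 1.
d1 = (phi(h) - phi(-h)) / (2 * h)
assert abs(d1 / 1j - 1.0) < 1e-3

# Second derivative at 0: E[X^2] = i^{-2} phi''(0); for Exp(1), E[X^2] = 2.
d2 = (phi(h) - 2 * phi(0) + phi(-h)) / h**2
assert abs(d2 / 1j**2 - 2.0) < 1e-3
```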
If a characteristic function φX has a k-th derivative at zero, then the random variable X has all moments up to k if k is even, but only up to k − 1 if k is odd. In this case

$$\varphi_X^{(k)}(0) = i^k\operatorname{E}[X^k].$$
If X1, ..., Xn are independent random variables, and a1, ..., an are some constants, then the characteristic function of the linear combination of the Xi variables is
φ
a
1
X
1
+
⋯
+
a
n
X
n
(
t
)
=
φ
X
1
(
a
1
t
)
⋯
φ
X
n
(
a
n
t
)
.
{\displaystyle \varphi _{a_{1}X_{1}+\cdots +a_{n}X_{n}}(t)=\varphi _{X_{1}}(a_{1}t)\cdots \varphi _{X_{n}}(a_{n}t).}
One specific case is the sum of two independent random variables X1 and X2 in which case one has
{\displaystyle \varphi _{X_{1}+X_{2}}(t)=\varphi _{X_{1}}(t)\cdot \varphi _{X_{2}}(t).}
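This product rule is easy to verify numerically for distributions whose characteristic functions are known in closed form. The sketch below is illustrative only (the helper `cf_normal` and the parameter choices are ours, not from the article); it uses the normal characteristic function exp(iμt − σ²t²/2), for which the product property reproduces the fact that the sum of independent normals is normal:

```python
import cmath

def cf_normal(t, mu, sigma2):
    # Characteristic function of N(mu, sigma2): exp(i*mu*t - sigma2*t^2/2)
    return cmath.exp(1j * mu * t - 0.5 * sigma2 * t * t)

# For independent X1 ~ N(1, 4) and X2 ~ N(-2, 9), the characteristic
# function of X1 + X2 must equal that of N(-1, 13).
for t in (-2.0, -0.5, 0.0, 0.7, 3.0):
    lhs = cf_normal(t, 1, 4) * cf_normal(t, -2, 9)
    rhs = cf_normal(t, -1, 13)
    assert abs(lhs - rhs) < 1e-12
```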
Let {\displaystyle X} and {\displaystyle Y} be two random variables with characteristic functions {\displaystyle \varphi _{X}} and {\displaystyle \varphi _{Y}}. Then {\displaystyle X} and {\displaystyle Y} are independent if and only if
{\displaystyle \varphi _{X,Y}(s,t)=\varphi _{X}(s)\varphi _{Y}(t)\quad {\text{ for all }}\quad (s,t)\in \mathbb {R} ^{2}}.
The tail behavior of the characteristic function determines the smoothness of the corresponding density function.
Let the random variable {\displaystyle Y=aX+b} be the linear transformation of a random variable {\displaystyle X}. The characteristic function of {\displaystyle Y} is {\displaystyle \varphi _{Y}(t)=e^{itb}\varphi _{X}(at)}. For random vectors {\displaystyle X} and {\displaystyle Y=AX+B} (where A is a constant matrix and B a constant vector), we have {\displaystyle \varphi _{Y}(t)=e^{it^{\top }B}\varphi _{X}(A^{\top }t)}.
=== Continuity ===
The bijection stated above between probability distributions and characteristic functions is sequentially continuous. That is, whenever a sequence of distribution functions Fj(x) converges (weakly) to some distribution F(x), the corresponding sequence of characteristic functions φj(t) will also converge, and the limit φ(t) will correspond to the characteristic function of law F. More formally, this is stated as
Lévy’s continuity theorem: A sequence Xj of n-variate random variables converges in distribution to random variable X if and only if the sequence φXj converges pointwise to a function φ which is continuous at the origin. Where φ is the characteristic function of X.
This theorem can be used to prove the law of large numbers and the central limit theorem.
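As a sketch of how the continuity theorem enters the central limit theorem, take X_j = ±1 with probability 1/2 each, whose characteristic function is cos t. The standardized sum then has characteristic function (cos(t/√n))^n, which converges pointwise to e^{−t²/2}, the characteristic function of N(0, 1). A small numerical illustration (the helper name and parameter choices are ours):

```python
import math

def cf_standardized_sum(t, n):
    # X_j = +/-1 with probability 1/2 has cf cos(t); the standardized sum
    # (X_1 + ... + X_n)/sqrt(n) therefore has cf cos(t/sqrt(n))**n.
    return math.cos(t / math.sqrt(n)) ** n

t = 1.0
gaussian_cf = math.exp(-t * t / 2)       # cf of N(0, 1) at t
for n in (10, 1000, 100000):
    print(n, cf_standardized_sum(t, n))  # approaches gaussian_cf
assert abs(cf_standardized_sum(t, 100000) - gaussian_cf) < 1e-4
```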
=== Inversion formula ===
There is a one-to-one correspondence between cumulative distribution functions and characteristic functions, so it is possible to find one of these functions if we know the other. The formula in the definition of characteristic function allows us to compute φ when we know the distribution function F (or density f). If, on the other hand, we know the characteristic function φ and want to find the corresponding distribution function, then one of the following inversion theorems can be used.
Theorem. If the characteristic function φX of a random variable X is integrable, then FX is absolutely continuous, and therefore X has a probability density function. In the univariate case (i.e. when X is scalar-valued) the density function is given by
{\displaystyle f_{X}(x)=F_{X}'(x)={\frac {1}{2\pi }}\int _{\mathbf {R} }e^{-itx}\varphi _{X}(t)\,dt.}
In the multivariate case it is
{\displaystyle f_{X}(x)={\frac {1}{(2\pi )^{n}}}\int _{\mathbf {R} ^{n}}e^{-i(t\cdot x)}\varphi _{X}(t)\lambda (dt)}
where {\textstyle t\cdot x} is the dot product.
The density function is the Radon–Nikodym derivative of the distribution μX with respect to the Lebesgue measure λ:
{\displaystyle f_{X}(x)={\frac {d\mu _{X}}{d\lambda }}(x).}
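When φX is integrable and decays quickly, the inversion integral above can be truncated and evaluated numerically. The sketch below is an illustration (the truncation bound T and step count are arbitrary choices of ours); it recovers the standard normal density from its characteristic function e^{−t²/2}:

```python
import cmath
import math

def density_from_cf(cf, x, T=12.0, steps=4000):
    # Trapezoidal approximation of f(x) = (1/2pi) * integral of
    # exp(-i*t*x) * cf(t) over [-T, T]; assumes cf is negligible beyond T.
    h = 2 * T / steps
    total = 0.0
    for k in range(steps + 1):
        t = -T + k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * (cmath.exp(-1j * t * x) * cf(t)).real
    return h * total / (2 * math.pi)

std_normal_cf = lambda t: math.exp(-t * t / 2)
# The standard normal density at 0 is 1/sqrt(2*pi)
assert abs(density_from_cf(std_normal_cf, 0.0) - 1 / math.sqrt(2 * math.pi)) < 1e-6
```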
Theorem (Lévy). If φX is characteristic function of distribution function FX, two points a < b are such that {x | a < x < b} is a continuity set of μX (in the univariate case this condition is equivalent to continuity of FX at points a and b), then
If X is scalar:
{\displaystyle F_{X}(b)-F_{X}(a)={\frac {1}{2\pi }}\lim _{T\to \infty }\int _{-T}^{+T}{\frac {e^{-ita}-e^{-itb}}{it}}\,\varphi _{X}(t)\,dt.}
This formula can be re-stated in a form more convenient for numerical computation as
{\displaystyle {\frac {F(x+h)-F(x-h)}{2h}}={\frac {1}{2\pi }}\int _{-\infty }^{\infty }{\frac {\sin ht}{ht}}e^{-itx}\varphi _{X}(t)\,dt.}
For a random variable bounded from below one can obtain {\displaystyle F(b)} by taking {\displaystyle a} such that {\displaystyle F(a)=0.} Otherwise, if a random variable is not bounded from below, the limit for {\displaystyle a\to -\infty } gives {\displaystyle F(b)}, but is numerically impractical.
If X is a vector random variable:
{\displaystyle \mu _{X}{\big (}\{a<x<b\}{\big )}={\frac {1}{(2\pi )^{n}}}\lim _{T_{1}\to \infty }\cdots \lim _{T_{n}\to \infty }\int \limits _{-T_{1}\leq t_{1}\leq T_{1}}\cdots \int \limits _{-T_{n}\leq t_{n}\leq T_{n}}\prod _{k=1}^{n}\left({\frac {e^{-it_{k}a_{k}}-e^{-it_{k}b_{k}}}{it_{k}}}\right)\varphi _{X}(t)\lambda (dt_{1}\times \cdots \times dt_{n})}
Theorem. If a is (possibly) an atom of X (in the univariate case this means a point of discontinuity of FX) then
If X is scalar:
{\displaystyle F_{X}(a)-F_{X}(a-0)=\lim _{T\to \infty }{\frac {1}{2T}}\int _{-T}^{+T}e^{-ita}\varphi _{X}(t)\,dt}
If X is a vector random variable:
{\displaystyle \mu _{X}(\{a\})=\lim _{T_{1}\to \infty }\cdots \lim _{T_{n}\to \infty }\left(\prod _{k=1}^{n}{\frac {1}{2T_{k}}}\right)\int \limits _{[-T_{1},T_{1}]\times \dots \times [-T_{n},T_{n}]}e^{-i(t\cdot a)}\varphi _{X}(t)\lambda (dt)}
Theorem (Gil-Pelaez). For a univariate random variable X, if x is a continuity point of FX then
{\displaystyle F_{X}(x)={\frac {1}{2}}-{\frac {1}{\pi }}\int _{0}^{\infty }{\frac {\operatorname {Im} [e^{-itx}\varphi _{X}(t)]}{t}}\,dt}
where the imaginary part of a complex number {\displaystyle z} is given by {\displaystyle \mathrm {Im} (z)=(z-z^{*})/2i}.
And its density function is:
{\displaystyle f_{X}(x)={\frac {1}{\pi }}\int _{0}^{\infty }\operatorname {Re} [e^{-itx}\varphi _{X}(t)]\,dt}
The integral may be not Lebesgue-integrable; for example, when X is the discrete random variable that is always 0, it becomes the Dirichlet integral.
Inversion formulas for multivariate distributions are available.
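The Gil-Pelaez formula is directly usable for numerical CDF evaluation when only the characteristic function is available in closed form. A minimal sketch (the truncation point, step count, and midpoint rule used to avoid the t = 0 endpoint are our arbitrary choices):

```python
import cmath
import math

def gil_pelaez_cdf(cf, x, T=10.0, steps=20000):
    # F(x) ~ 1/2 - (1/pi) * integral over (0, T] of Im[exp(-i*t*x)*cf(t)]/t dt.
    # Midpoint rule; assumes the integrand is negligible beyond T.
    h = T / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h     # midpoints avoid the t = 0 endpoint
        total += (cmath.exp(-1j * t * x) * cf(t)).imag / t
    return 0.5 - total * h / math.pi

std_normal_cf = lambda t: math.exp(-t * t / 2)
# Phi(1) = 0.8413447... for the standard normal distribution
assert abs(gil_pelaez_cdf(std_normal_cf, 1.0) - 0.8413447) < 1e-4
```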
=== Criteria for characteristic functions ===
The set of all characteristic functions is closed under certain operations:
A convex linear combination {\textstyle \sum _{n}a_{n}\varphi _{n}(t)} (with {\textstyle a_{n}\geq 0,\ \sum _{n}a_{n}=1}) of a finite or a countable number of characteristic functions is also a characteristic function.
The product of a finite number of characteristic functions is also a characteristic function. The same holds for an infinite product provided that it converges to a function continuous at the origin.
If φ is a characteristic function and α is a real number, then {\displaystyle {\bar {\varphi }}}, Re(φ), |φ|², and φ(αt) are also characteristic functions.
It is well known that any non-decreasing càdlàg function F with limits F(−∞) = 0, F(+∞) = 1 corresponds to a cumulative distribution function of some random variable. There is also interest in finding similar simple criteria for when a given function φ could be the characteristic function of some random variable. The central result here is Bochner’s theorem, although its usefulness is limited because the main condition of the theorem, non-negative definiteness, is very hard to verify. Other theorems also exist, such as Khinchine’s, Mathias’s, or Cramér’s, although their application is just as difficult. Pólya’s theorem, on the other hand, provides a very simple convexity condition which is sufficient but not necessary. Characteristic functions which satisfy this condition are called Pólya-type.
Bochner’s theorem. An arbitrary function φ : Rn → C is the characteristic function of some random variable if and only if φ is positive definite, continuous at the origin, and if φ(0) = 1.
Khinchine’s criterion. A complex-valued, absolutely continuous function φ, with φ(0) = 1, is a characteristic function if and only if it admits the representation
{\displaystyle \varphi (t)=\int _{\mathbf {R} }g(t+\theta ){\overline {g(\theta )}}\,d\theta .}
Mathias’ theorem. A real-valued, even, continuous, absolutely integrable function φ, with φ(0) = 1, is a characteristic function if and only if
{\displaystyle (-1)^{n}\left(\int _{\mathbf {R} }\varphi (pt)e^{-t^{2}/2}H_{2n}(t)\,dt\right)\geq 0}
for n = 0,1,2,..., and all p > 0. Here H2n denotes the Hermite polynomial of degree 2n.
Pólya’s theorem. If {\displaystyle \varphi } is a real-valued, even, continuous function which satisfies the conditions
{\displaystyle \varphi (0)=1},
{\displaystyle \varphi } is convex for {\displaystyle t>0},
{\displaystyle \varphi (\infty )=0},
then φ(t) is the characteristic function of an absolutely continuous distribution symmetric about 0.
== Uses ==
Because of the continuity theorem, characteristic functions are used in the most frequently seen proof of the central limit theorem. The main technique involved in making calculations with a characteristic function is recognizing the function as the characteristic function of a particular distribution.
=== Basic manipulations of distributions ===
Characteristic functions are particularly useful for dealing with linear functions of independent random variables. For example, if X1, X2, ..., Xn is a sequence of independent (and not necessarily identically distributed) random variables, and
{\displaystyle S_{n}=\sum _{i=1}^{n}a_{i}X_{i},\,\!}
where the ai are constants, then the characteristic function for Sn is given by
{\displaystyle \varphi _{S_{n}}(t)=\varphi _{X_{1}}(a_{1}t)\varphi _{X_{2}}(a_{2}t)\cdots \varphi _{X_{n}}(a_{n}t)\,\!}
In particular, φX+Y(t) = φX(t)φY(t). To see this, write out the definition of characteristic function:
{\displaystyle \varphi _{X+Y}(t)=\operatorname {E} \left[e^{it(X+Y)}\right]=\operatorname {E} \left[e^{itX}e^{itY}\right]=\operatorname {E} \left[e^{itX}\right]\operatorname {E} \left[e^{itY}\right]=\varphi _{X}(t)\varphi _{Y}(t)}
The independence of X and Y is required to establish the equality of the third and fourth expressions.
Another special case of interest for identically distributed random variables is when ai = 1 / n and then Sn is the sample mean. In this case, writing X for the mean,
{\displaystyle \varphi _{\overline {X}}(t)=\varphi _{X}\!\left({\tfrac {t}{n}}\right)^{n}}
=== Moments ===
Characteristic functions can also be used to find moments of a random variable. Provided that the n-th moment exists, the characteristic function can be differentiated n times:
{\displaystyle \operatorname {E} \left[X^{n}\right]=i^{-n}\left[{\frac {d^{n}}{dt^{n}}}\varphi _{X}(t)\right]_{t=0}=i^{-n}\varphi _{X}^{(n)}(0),\!}
This can be formally written using the derivatives of the Dirac delta function:
{\displaystyle f_{X}(x)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!}}\delta ^{(n)}(x)\operatorname {E} [X^{n}]}
which allows a formal solution to the moment problem.
For example, suppose X has a standard Cauchy distribution. Then φX(t) = e−|t|. This is not differentiable at t = 0, showing that the Cauchy distribution has no expectation. Also, the sample mean X of n independent observations has characteristic function φX(t) = (e−|t|/n)n = e−|t|, using the result from the previous section. This is the characteristic function of the standard Cauchy distribution: thus, the sample mean has the same distribution as the population itself.
As a further example, suppose X follows a Gaussian distribution, i.e. {\displaystyle X\sim {\mathcal {N}}(\mu ,\sigma ^{2})}. Then
{\displaystyle \varphi _{X}(t)=e^{\mu it-{\frac {1}{2}}\sigma ^{2}t^{2}}}
and
{\displaystyle \operatorname {E} \left[X\right]=i^{-1}\left[{\frac {d}{dt}}\varphi _{X}(t)\right]_{t=0}=i^{-1}\left[(i\mu -\sigma ^{2}t)\varphi _{X}(t)\right]_{t=0}=\mu }
A similar calculation shows
{\displaystyle \operatorname {E} \left[X^{2}\right]=\mu ^{2}+\sigma ^{2}}
and is easier to carry out than applying the definition of expectation and using integration by parts to evaluate {\displaystyle \operatorname {E} \left[X^{2}\right]}.
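The moment formula can also be exercised numerically, replacing the symbolic derivatives at 0 by finite differences. The sketch below is illustrative (the helper name and the step size h are our choices); it recovers E[X] = μ and E[X²] = μ² + σ² for the Gaussian characteristic function used above:

```python
import cmath

def cf_normal(t, mu, sigma2):
    # Gaussian characteristic function exp(i*mu*t - sigma2*t^2/2)
    return cmath.exp(1j * mu * t - 0.5 * sigma2 * t * t)

mu, sigma2, h = 2.0, 3.0, 1e-5
# E[X] = i^{-1} * phi'(0), with phi'(0) estimated by a central difference
d1 = (cf_normal(h, mu, sigma2) - cf_normal(-h, mu, sigma2)) / (2 * h)
mean = (d1 / 1j).real
# E[X^2] = i^{-2} * phi''(0) = -phi''(0), by a second central difference
d2 = (cf_normal(h, mu, sigma2) - 2 * cf_normal(0, mu, sigma2)
      + cf_normal(-h, mu, sigma2)) / (h * h)
second_moment = (-d2).real
assert abs(mean - mu) < 1e-6
assert abs(second_moment - (mu * mu + sigma2)) < 1e-3
```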
The logarithm of a characteristic function is a cumulant generating function, which is useful for finding cumulants; some instead define the cumulant generating function as the logarithm of the moment-generating function, and call the logarithm of the characteristic function the second cumulant generating function.
=== Data analysis ===
Characteristic functions can be used as part of procedures for fitting probability distributions to samples of data. Cases where this provides a practicable option compared to other possibilities include fitting the stable distribution since closed form expressions for the density are not available which makes implementation of maximum likelihood estimation difficult. Estimation procedures are available which match the theoretical characteristic function to the empirical characteristic function, calculated from the data. Paulson et al. (1975) and Heathcote (1977) provide some theoretical background for such an estimation procedure. In addition, Yu (2004) describes applications of empirical characteristic functions to fit time series models where likelihood procedures are impractical. Empirical characteristic functions have also been used by Ansari et al. (2020) and Li et al. (2020) for training generative adversarial networks.
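The empirical characteristic function mentioned above is simply the sample average of e^{itx_j}. A minimal sketch (sample size, seed, and tolerance are our arbitrary choices):

```python
import cmath
import math
import random

def empirical_cf(sample, t):
    # Empirical characteristic function: sample mean of exp(i*t*x_j)
    return sum(cmath.exp(1j * t * x) for x in sample) / len(sample)

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(20000)]
# For a large N(0,1) sample the ecf should be close to exp(-t^2/2)
for t in (0.5, 1.0, 2.0):
    assert abs(empirical_cf(sample, t) - math.exp(-t * t / 2)) < 0.05
```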
=== Example ===
The gamma distribution with scale parameter θ and a shape parameter k has the characteristic function
{\displaystyle (1-\theta it)^{-k}.}
Now suppose that we have
{\displaystyle X\sim \Gamma (k_{1},\theta ){\mbox{ and }}Y\sim \Gamma (k_{2},\theta )}
with X and Y independent from each other, and we wish to know what the distribution of X + Y is. The characteristic functions are
{\displaystyle \varphi _{X}(t)=(1-\theta it)^{-k_{1}},\,\qquad \varphi _{Y}(t)=(1-\theta it)^{-k_{2}}}
which by independence and the basic properties of characteristic function leads to
{\displaystyle \varphi _{X+Y}(t)=\varphi _{X}(t)\varphi _{Y}(t)=(1-\theta it)^{-k_{1}}(1-\theta it)^{-k_{2}}=\left(1-\theta it\right)^{-(k_{1}+k_{2})}.}
This is the characteristic function of the gamma distribution with scale parameter θ and shape parameter k1 + k2, and we therefore conclude
{\displaystyle X+Y\sim \Gamma (k_{1}+k_{2},\theta )}
The result can be expanded to n independent gamma distributed random variables with the same scale parameter and we get
{\displaystyle \forall i\in \{1,\ldots ,n\}:X_{i}\sim \Gamma (k_{i},\theta )\qquad \Rightarrow \qquad \sum _{i=1}^{n}X_{i}\sim \Gamma \left(\sum _{i=1}^{n}k_{i},\theta \right).}
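The closure of the gamma family under such sums can be confirmed numerically: for the principal branch of the complex power, (1 − θit)^{−k₁}(1 − θit)^{−k₂} = (1 − θit)^{−(k₁+k₂)} holds exactly. A small check (the helper name and parameter values are our choices):

```python
def cf_gamma(t, k, theta):
    # Characteristic function of Gamma(shape k, scale theta); Python's
    # complex ** uses the principal branch, and Re(1 - i*theta*t) > 0
    # keeps the identity below exact up to rounding.
    return (1 - 1j * theta * t) ** (-k)

k1, k2, theta = 1.5, 2.25, 0.5
for t in (-3.0, -0.4, 0.8, 2.0):
    lhs = cf_gamma(t, k1, theta) * cf_gamma(t, k2, theta)
    rhs = cf_gamma(t, k1 + k2, theta)
    assert abs(lhs - rhs) < 1e-12
```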
== Entire characteristic functions ==
As defined above, the argument of the characteristic function is treated as a real number: however, certain aspects of the theory of characteristic functions are advanced by extending the definition into the complex plane by analytic continuation, in cases where this is possible.
== Related concepts ==
Related concepts include the moment-generating function and the probability-generating function. The characteristic function exists for all probability distributions. This is not the case for the moment-generating function.
The characteristic function is closely related to the Fourier transform: the characteristic function of a probability density function p(x) is the complex conjugate of the continuous Fourier transform of p(x) (according to the usual convention; see continuous Fourier transform – other conventions).
{\displaystyle \varphi _{X}(t)=\langle e^{itX}\rangle =\int _{\mathbf {R} }e^{itx}p(x)\,dx={\overline {\left(\int _{\mathbf {R} }e^{-itx}p(x)\,dx\right)}}={\overline {P(t)}},}
where P(t) denotes the continuous Fourier transform of the probability density function p(x). Likewise, p(x) may be recovered from φX(t) through the inverse Fourier transform:
{\displaystyle p(x)={\frac {1}{2\pi }}\int _{\mathbf {R} }e^{itx}P(t)\,dt={\frac {1}{2\pi }}\int _{\mathbf {R} }e^{itx}{\overline {\varphi _{X}(t)}}\,dt.}
Indeed, even when the random variable does not have a density, the characteristic function may be seen as the Fourier transform of the measure corresponding to the random variable.
Another related concept is the representation of probability distributions as elements of a reproducing kernel Hilbert space via the kernel embedding of distributions. This framework may be viewed as a generalization of the characteristic function under specific choices of the kernel function.
== See also ==
Subindependence, a weaker condition than independence, that is defined in terms of characteristic functions.
Cumulant, a term of the cumulant generating function, which is the logarithm of the characteristic function.
== Notes ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
"Characteristic function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
This is a glossary of tensor theory. For expositions of tensor theory from different points of view, see:
Tensor
Tensor (intrinsic definition)
Application of tensor theory in engineering science
For some history of the abstract theory see also multilinear algebra.
== Classical notation ==
Ricci calculus
The earliest foundation of tensor theory – tensor index notation.
Order of a tensor
The components of a tensor with respect to a basis form an indexed array. The order of a tensor is the number of indices needed. Some texts may refer to the tensor order using the term degree or rank.
Rank of a tensor
The rank of a tensor is the minimum number of rank-one tensors that must be summed to obtain it. A rank-one tensor is one that can be expressed as an outer product of nonzero vectors, one vector for each index of the tensor.
Dyadic tensor
A dyadic tensor is a tensor of order two, and may be represented as a square matrix. In contrast, a dyad is specifically a dyadic tensor of rank one.
Einstein notation
This notation is based on the understanding that whenever a multidimensional array contains a repeated index letter, the default interpretation is that the product is summed over all permitted values of the index. For example, if aij is a matrix, then under this convention aii is its trace. The Einstein convention is widely used in physics and engineering texts, to the extent that if summation is not to be applied, it is normal to note that explicitly.
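As a minimal illustration (not part of the glossary itself), the summation convention can be mimicked with explicit loops: each repeated index becomes a `sum` over that index.

```python
def trace(a):
    # a_ii: the repeated index i is summed, giving the trace
    return sum(a[i][i] for i in range(len(a)))

def matmul(a, b):
    # c_ik = a_ij b_jk: the repeated index j is summed over
    return [[sum(a[i][j] * b[j][k] for j in range(len(b)))
             for k in range(len(b[0]))] for i in range(len(a))]

assert trace([[1, 2], [3, 4]]) == 5          # a_11 + a_22 = 1 + 4
assert matmul([[1, 2], [3, 4]], [[1, 0], [0, 1]]) == [[1, 2], [3, 4]]
```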
Kronecker delta
Levi-Civita symbol
Covariant tensor
Contravariant tensor
The classical interpretation is by components. For example, in the differential form aidxi the components ai are a covariant vector. That means all indices are lower; contravariant means all indices are upper.
Mixed tensor
This refers to any tensor that has both lower and upper indices.
Cartesian tensor
Cartesian tensors are widely used in various branches of continuum mechanics, such as fluid mechanics and elasticity. In classical continuum mechanics, the space of interest is usually 3-dimensional Euclidean space, as is the tangent space at each point. If we restrict the local coordinates to be Cartesian coordinates with the same scale centered at the point of interest, the metric tensor is the Kronecker delta. This means that there is no need to distinguish covariant and contravariant components, and furthermore there is no need to distinguish tensors and tensor densities. All Cartesian-tensor indices are written as subscripts. Cartesian tensors achieve considerable computational simplification at the cost of generality and of some theoretical insight.
Contraction of a tensor
Raising and lowering indices
Symmetric tensor
Antisymmetric tensor
Multiple cross products
== Algebraic notation ==
This avoids the initial use of components, and is distinguished by the explicit use of the tensor product symbol.
Tensor product
If v and w are vectors in vector spaces V and W respectively, then
{\displaystyle v\otimes w} is a tensor in {\displaystyle V\otimes W.}
That is, the ⊗ operation is a binary operation, but it takes values into a fresh space (it is in a strong sense external). The ⊗ operation is a bilinear map; but no other conditions are applied to it.
Pure tensor
A pure tensor of V ⊗ W is one that is of the form v ⊗ w.
It could be written dyadically aibj, or more accurately aibj ei ⊗ fj, where the ei are a basis for V and the fj a basis for W. Therefore, unless V and W have the same dimension, the array of components need not be square. Such pure tensors are not generic: if both V and W have dimension greater than 1, there will be tensors that are not pure, and there will be non-linear conditions for a tensor to satisfy, to be pure. For more see Segre embedding.
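The component array of a pure tensor is just the outer product of the two coordinate vectors, and the non-linear conditions mentioned above are the vanishing of its 2 × 2 minors. A small plain-Python illustration (the function name is ours):

```python
def outer(v, w):
    # Components of the pure tensor v (x) w: the array v_i * w_j
    return [[vi * wj for wj in w] for vi in v]

v, w = [1, 2, 3], [10, 20]       # dim V = 3, dim W = 2
t = outer(v, w)                  # a 3-by-2 (non-square) component array
assert t == [[10, 20], [20, 40], [30, 60]]
# Every 2x2 minor of a pure tensor vanishes (a non-linear rank-1 condition)
assert t[0][0] * t[1][1] - t[0][1] * t[1][0] == 0
```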
Tensor algebra
In the tensor algebra T(V) of a vector space V, the operation
{\displaystyle \otimes }
becomes a normal (internal) binary operation. A consequence is that T(V) has infinite dimension unless V has dimension 0. The free algebra on a set X is for practical purposes the same as the tensor algebra on the vector space with X as basis.
Hodge star operator
Exterior power
The wedge product is the anti-symmetric form of the ⊗ operation. The quotient space of T(V) on which it becomes an internal operation is the exterior algebra of V; it is a graded algebra, with the graded piece of weight k being called the k-th exterior power of V.
Symmetric power, symmetric algebra
This is the invariant way of constructing polynomial algebras.
== Applications ==
Metric tensor
Strain tensor
Stress–energy tensor
== Tensor field theory ==
Jacobian matrix
Tensor field
Tensor density
Lie derivative
Tensor derivative
Differential geometry
== Abstract algebra ==
Tensor product of fields
This is an operation on fields, that does not always produce a field.
Tensor product of R-algebras
Clifford module
A representation of a Clifford algebra which gives a realisation of a Clifford algebra as a matrix algebra.
Tor functors
These are the derived functors of the tensor product, and feature strongly in homological algebra. The name comes from the torsion subgroup in abelian group theory.
Symbolic method of invariant theory
Derived category
Grothendieck's six operations
These are highly abstract approaches used in some parts of geometry.
== Spinors ==
See:
Spin group
Spin-c group
Spinor
Pin group
Pinors
Spinor field
Killing spinor
Spin manifold
== References ==
== Books ==
Bishop, R.L.; Goldberg, S.I. (1968), Tensor Analysis on Manifolds (First Dover 1980 ed.), The Macmillan Company, ISBN 0-486-64039-6
Danielson, Donald A. (2003). Vectors and Tensors in Engineering and Physics (2/e ed.). Westview (Perseus). ISBN 978-0-8133-4080-7.
Dimitrienko, Yuriy (2002). Tensor Analysis and Nonlinear Tensor Functions. Kluwer Academic Publishers (Springer). ISBN 1-4020-1015-X.
Lovelock, David; Hanno Rund (1989) [1975]. Tensors, Differential Forms, and Variational Principles. Dover. ISBN 978-0-486-65840-7.
Synge, John L; Schild, Alfred (1949). Tensor Calculus. Dover Publications 1978 edition. ISBN 978-0-486-63612-2.
In mathematics, the exponential function is the unique real function which maps zero to one and has a derivative everywhere equal to its value. The exponential of a variable
The exponential of a variable {\displaystyle x} is denoted {\displaystyle \exp x} or {\displaystyle e^{x}}, with the two notations used interchangeably. It is called exponential because its argument can be seen as an exponent to which a constant number e ≈ 2.718, the base, is raised. There are several other definitions of the exponential function, which are all equivalent although they are of very different natures.
The exponential function converts sums to products: it maps the additive identity 0 to the multiplicative identity 1, and the exponential of a sum is equal to the product of separate exponentials,
{\displaystyle \exp(x+y)=\exp x\cdot \exp y}. Its inverse function, the natural logarithm, {\displaystyle \ln } or {\displaystyle \log }, converts products to sums: {\displaystyle \ln(x\cdot y)=\ln x+\ln y}.
The exponential function is occasionally called the natural exponential function, matching the name natural logarithm, for distinguishing it from some other functions that are also commonly called exponential functions. These functions include the functions of the form
{\displaystyle f(x)=b^{x}}, which is exponentiation with a fixed base {\displaystyle b}. More generally, and especially in applications, functions of the general form {\displaystyle f(x)=ab^{x}} are also called exponential functions. They grow or decay exponentially in that the rate that {\displaystyle f(x)} changes when {\displaystyle x} is increased is proportional to the current value of {\displaystyle f(x)}.
The exponential function can be generalized to accept complex numbers as arguments. This reveals relations between multiplication of complex numbers, rotations in the complex plane, and trigonometry. Euler's formula
{\displaystyle \exp i\theta =\cos \theta +i\sin \theta }
expresses and summarizes these relations.
The exponential function can be even further generalized to accept other types of arguments, such as matrices and elements of Lie algebras.
== Graph ==
The graph of {\displaystyle y=e^{x}} is upward-sloping, and increases faster than every power of {\displaystyle x}. The graph always lies above the x-axis, but becomes arbitrarily close to it for large negative x; thus, the x-axis is a horizontal asymptote. The equation {\displaystyle {\tfrac {d}{dx}}e^{x}=e^{x}}
means that the slope of the tangent to the graph at each point is equal to its height (its y-coordinate) at that point.
== Definitions and fundamental properties ==
There are several equivalent definitions of the exponential function, although they are of very different natures.
=== Differential equation ===
One of the simplest definitions is: The exponential function is the unique differentiable function that equals its derivative, and takes the value 1 for the value 0 of its variable.
This "conceptual" definition requires a uniqueness proof and an existence proof, but it allows an easy derivation of the main properties of the exponential function.
Uniqueness: If {\displaystyle f(x)} and {\displaystyle g(x)} are two functions satisfying the above definition, then the derivative of {\displaystyle f/g} is zero everywhere because of the quotient rule. It follows that {\displaystyle f/g} is constant; this constant is 1 since {\displaystyle f(0)=g(0)=1}.
Existence is proved in each of the two following sections.
=== Inverse of natural logarithm ===
The exponential function is the inverse function of the natural logarithm. The inverse function theorem implies that the natural logarithm has an inverse function, that satisfies the above definition. This is a first proof of existence. Therefore, one has
{\displaystyle {\begin{aligned}\ln(\exp x)&=x\\\exp(\ln y)&=y\end{aligned}}}
for every real number {\displaystyle x} and every positive real number {\displaystyle y.}
=== Power series ===
The exponential function is the sum of the power series
{\displaystyle {\begin{aligned}\exp(x)&=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}},\end{aligned}}}
where {\displaystyle n!} is the factorial of n (the product of the n first positive integers). This series is absolutely convergent for every {\displaystyle x} per the ratio test. So, the derivative of the sum can be computed by term-by-term differentiation, and this shows that the sum of the series satisfies the above definition. This is a second existence proof, and shows, as a byproduct, that the exponential function is defined for every {\displaystyle x}, and is everywhere the sum of its Maclaurin series.
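The series converges fast enough that a short partial sum already matches the library exponential to machine precision for moderate arguments. A sketch (the helper name and term count are our choices):

```python
import math

def exp_series(x, terms=30):
    # Partial sum of sum over n of x^n / n!, built term by term
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)      # turns x^n/n! into x^(n+1)/(n+1)!
    return total

for x in (-2.0, 0.0, 1.0, 3.5):
    assert abs(exp_series(x) - math.exp(x)) < 1e-12
```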
=== Functional equation ===
The exponential satisfies the functional equation:
{\displaystyle \exp(x+y)=\exp(x)\cdot \exp(y).}
This results from the uniqueness and the fact that the function {\displaystyle f(x)=\exp(x+y)/\exp(y)} satisfies the above definition.
It can be proved that a function that satisfies this functional equation has the form {\displaystyle x\mapsto \exp(cx)} if it is either continuous or monotonic. It is thus differentiable, and equals the exponential function if its derivative at 0 is 1.
=== Limit of integer powers ===
The exponential function is the limit, as the integer n goes to infinity,
{\displaystyle \exp(x)=\lim _{n\to +\infty }\left(1+{\frac {x}{n}}\right)^{n}.}
By continuity of the logarithm, this can be proved by taking logarithms and proving
{\displaystyle x=\lim _{n\to \infty }\ln \left(1+{\frac {x}{n}}\right)^{n}=\lim _{n\to \infty }n\ln \left(1+{\frac {x}{n}}\right),}
for example with Taylor's theorem.
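Numerically, the convergence of (1 + x/n)^n is easy to observe; the error behaves roughly like e^x x²/(2n), so large n is needed for high accuracy. A sketch (the helper name is ours):

```python
import math

def exp_limit(x, n):
    # (1 + x/n)^n, which tends to exp(x) as n grows
    return (1 + x / n) ** n

for n in (10, 1000, 1000000):
    print(n, exp_limit(1.0, n))   # approaches e = 2.71828...
assert abs(exp_limit(1.0, 10**6) - math.e) < 1e-4
```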
=== Properties ===
Reciprocal: The functional equation implies {\displaystyle e^{x}e^{-x}=1}. Therefore {\displaystyle e^{x}\neq 0} for every {\displaystyle x} and {\displaystyle {\frac {1}{e^{x}}}=e^{-x}.}
Positiveness: {\displaystyle e^{x}>0} for every real number {\displaystyle x}. This results from the intermediate value theorem, since {\displaystyle e^{0}=1} and, if one had {\displaystyle e^{x}<0} for some {\displaystyle x}, there would be a {\displaystyle y} between {\displaystyle 0} and {\displaystyle x} such that {\displaystyle e^{y}=0}. Since the exponential function equals its derivative, this implies that the exponential function is monotonically increasing.
Extension of exponentiation to positive real bases: Let b be a positive real number. Since the exponential function and the natural logarithm are inverses of each other, one has b = exp(ln b).
If n is an integer, the functional equation of the logarithm implies b^n = exp(ln b^n) = exp(n ln b).
Since the right-most expression is defined when n is any real number, this allows defining b^x for every positive real number b and every real number x: b^x = exp(x ln b).
In particular, if b is Euler's number e = exp(1), one has ln e = 1 (inverse function) and thus e^x = exp(x). This shows the equivalence of the two notations for the exponential function.
== General exponential functions ==
A function is commonly called an exponential function (with an indefinite article) if it has the form x ↦ b^x, that is, if it is obtained from exponentiation by fixing the base and letting the exponent vary.
More generally, and especially in applied contexts, the term exponential function is commonly used for functions of the form f(x) = ab^x. This may be motivated by the fact that, if the values of the function represent quantities, a change of measurement unit changes the value of a, so it would be nonsensical to impose a = 1.
These most general exponential functions are the differentiable functions that satisfy the following equivalent characterizations.
f(x) = ab^x for every x and some constants a and b > 0.
f(x) = ae^{kx} for every x and some constants a and k.
The value of f′(x)/f(x) is independent of x.
For every d, the value of f(x + d)/f(x) is independent of x; that is, f(x + d)/f(x) = f(y + d)/f(y) for every x, y.
The base of an exponential function is the base of the exponentiation that appears in it when written as x ↦ ab^x, namely b. The base is e^k in the second characterization, exp(f′(x)/f(x)) in the third one, and (f(x + d)/f(x))^{1/d} in the last one.
=== In applications ===
The last characterization is important in empirical sciences, since it allows a direct experimental test of whether a function is an exponential function.
Exponential growth or exponential decay—where the variable change is proportional to the variable value—are thus modeled with exponential functions. Examples are unlimited population growth leading to Malthusian catastrophe, continuously compounded interest, and radioactive decay.
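The constant-ratio criterion behind that experimental test can be sketched on sampled data as follows (the function name, sample functions, and tolerance are our illustrative choices):

```python
def is_exponential(samples, rel_tol=1e-9):
    """Return True if evenly spaced samples f(x0), f(x0 + d), f(x0 + 2d), ...
    have a constant successive ratio, the hallmark of f(x) = a * b**x."""
    ratios = [samples[i + 1] / samples[i] for i in range(len(samples) - 1)]
    return all(abs(r - ratios[0]) <= rel_tol * abs(ratios[0]) for r in ratios)

# f(x) = 3 * 2**x sampled at x = 0, 0.5, 1.0, ...: the ratio is always 2**0.5
exp_samples = [3 * 2 ** (0.5 * i) for i in range(8)]
# f(x) = 1 + x**2 is not exponential: the ratios drift
quad_samples = [1 + i * i for i in range(8)]

print(is_exponential(exp_samples))   # True
print(is_exponential(quad_samples))  # False
```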
If the modeling function has the form x ↦ ae^{kx}, or, equivalently, is a solution of the differential equation y′ = ky, the constant k is called, depending on the context, the decay constant, disintegration constant, rate constant, or transformation constant.
=== Equivalence proof ===
For proving the equivalence of the above properties, one can proceed as follows.
The first two characterizations are equivalent, since, if b = e^k and k = ln b, one has e^{kx} = (e^k)^x = b^x.
The basic properties of the exponential function (its derivative and its functional equation) immediately imply the third and the last conditions.
Suppose that the third condition holds, and let k be the constant value of f′(x)/f(x). Since ∂e^{kx}/∂x = ke^{kx}, the quotient rule for derivatives implies that ∂/∂x (f(x)/e^{kx}) = 0, and thus that there is a constant a such that f(x) = ae^{kx}.
If the last condition holds, let φ(d) = f(x + d)/f(x), which is independent of x. Using φ(0) = 1, one gets
(f(x + d) − f(x))/d = f(x) · (φ(d) − φ(0))/d.
Taking the limit as d tends to zero shows that the third condition holds with k = φ′(0). It follows that f(x) = ae^{kx} for some a, and that φ(d) = e^{kd}.
As a byproduct, one gets that (f(x + d)/f(x))^{1/d} = e^k is independent of both x and d.
== Compound interest ==
The earliest occurrence of the exponential function was in Jacob Bernoulli's study of compound interest in 1683. It was this study that led Bernoulli to consider the number
lim_{n→∞} (1 + 1/n)^n,
now known as Euler's number and denoted e.
The exponential function is involved as follows in the computation of continuously compounded interest.
If a principal amount of 1 earns interest at an annual rate of x compounded monthly, then the interest earned each month is x/12 times the current value, so each month the total value is multiplied by (1 + x/12), and the value at the end of the year is (1 + x/12)^12. If instead interest is compounded daily, this becomes (1 + x/365)^365. Letting the number of time intervals per year grow without bound leads to the limit definition of the exponential function,
exp x = lim_{n→∞} (1 + x/n)^n,
first given by Leonhard Euler.
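The compounding computation above can be sketched directly (the function name and the 5% rate are our illustrative choices):

```python
import math

def compounded(x: float, periods: int) -> float:
    """Value after one year of a principal of 1 at annual rate x,
    compounded `periods` times per year: (1 + x/periods)**periods."""
    return (1.0 + x / periods) ** periods

rate = 0.05  # illustrative 5% annual rate
print(compounded(rate, 12))    # monthly compounding
print(compounded(rate, 365))   # daily compounding
print(math.exp(rate))          # continuous compounding, the limit
```

More frequent compounding gives a larger year-end value, approaching exp(x) from below for positive rates.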
== Differential equations ==
Exponential functions occur very often in solutions of differential equations.
The exponential functions can be defined as solutions of differential equations. Indeed, the exponential function is a solution of the simplest possible differential equation, namely
y′ = y. Every other exponential function, of the form y = ab^x, is a solution of the differential equation y′ = ky, and every solution of this differential equation has this form.
The solutions of an equation of the form y′ + ky = f(x) involve exponential functions in a more sophisticated way, since they have the form
y = ce^{−kx} + e^{−kx} ∫ f(x)e^{kx} dx,
where c is an arbitrary constant and the integral denotes any antiderivative of its argument.
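A numerical spot-check of this solution formula, with the hypothetical concrete choices k = 1, f(x) = x, and c = 2 (so the antiderivative is ∫x e^x dx = (x − 1)e^x and the solution simplifies to y = c e^{−x} + x − 1):

```python
import math

k = 1.0
c = 2.0  # arbitrary constant in the general solution

def f(x):  # right-hand side of y' + k*y = f(x)
    return x

def y(x):
    # c*e^{-kx} + e^{-kx} * integral, with integral of x*e^x dx = (x-1)*e^x
    return c * math.exp(-k * x) + (x - 1.0)

def residual(x, h=1e-6):
    """y'(x) + k*y(x) - f(x), with y' from a central difference."""
    yprime = (y(x + h) - y(x - h)) / (2 * h)
    return yprime + k * y(x) - f(x)

for x in (0.0, 1.0, 2.5):
    print(x, residual(x))  # ~0 up to finite-difference error
```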
More generally, the solutions of every linear differential equation with constant coefficients can be expressed in terms of exponential functions and, when they are not homogeneous, antiderivatives. This holds true also for systems of linear differential equations with constant coefficients.
== Complex exponential ==
The exponential function can be naturally extended to a complex function, that is, a function with the complex numbers as domain and codomain, whose restriction to the reals is the above-defined exponential function, called the real exponential function in what follows. This extension is also called the exponential function, and also denoted e^z or exp(z). To distinguish the complex case from the real one, the extended function is also called the complex exponential function or simply the complex exponential.
Most of the definitions of the exponential function can be used verbatim to define the complex exponential function, and the proof of their equivalence is the same as in the real case. The complex exponential function can thus be defined in several equivalent ways, the same as in the real case.
The complex exponential is the unique complex function that equals its complex derivative and takes the value 1 at 0:
de^z/dz = e^z and e^0 = 1.
The complex exponential function is the sum of the series
e^z = ∑_{k=0}^{∞} z^k / k!.
This series is absolutely convergent for every complex number z. So, the complex exponential is an entire function.
The complex exponential function is the limit
e^z = lim_{n→∞} (1 + z/n)^n.
The functional equation e^{w+z} = e^w e^z holds for all complex numbers w and z. The complex exponential is the unique continuous function that satisfies this functional equation and takes the value 1 at z = 0.
The complex logarithm is a right inverse of the complex exponential: e^{log z} = z. However, since the complex logarithm is a multivalued function, one has
log e^z = {z + 2ikπ | k ∈ ℤ},
and it is difficult to define the complex exponential from the complex logarithm. On the contrary, it is the complex logarithm that is often defined from the complex exponential.
The complex exponential has the following properties:
1/e^z = e^{−z} and e^z ≠ 0 for every z ∈ ℂ.
It is a periodic function with period 2iπ; that is, e^{z + 2ikπ} = e^z for every k ∈ ℤ. This results from Euler's identity e^{iπ} = −1 and the functional equation.
The complex conjugate of the complex exponential is
{\displaystyle {\overline {e^{z}}}=e^{\overline {z}}.}
Its modulus is |e^z| = e^{ℜ(z)}, where ℜ(z) denotes the real part of z.
=== Relationship with trigonometry ===
Complex exponential and trigonometric functions are strongly related by Euler's formula:
e^{it} = cos(t) + i sin(t).
This formula provides the decomposition of complex exponential into real and imaginary parts:
e^{x+iy} = e^x cos y + i e^x sin y.
The trigonometric functions can be expressed in terms of complex exponentials:
cos x = (e^{ix} + e^{−ix}) / 2
sin x = (e^{ix} − e^{−ix}) / (2i)
tan x = i (1 − e^{2ix}) / (1 + e^{2ix})
In these formulas, x, y, t are commonly interpreted as real variables, but the formulas remain valid if the variables are interpreted as complex variables. These formulas may be used to define trigonometric functions of a complex variable.
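Euler's formula and the expressions above can be spot-checked numerically with the standard library's cmath module (a sketch; the test point is arbitrary):

```python
import cmath
import math

t = 0.73  # arbitrary real test point
z = cmath.exp(1j * t)
print(abs(z.real - math.cos(t)))  # ~0: real part is cos(t)
print(abs(z.imag - math.sin(t)))  # ~0: imaginary part is sin(t)

# Recovering cos and sin from complex exponentials
cos_t = (cmath.exp(1j * t) + cmath.exp(-1j * t)) / 2
sin_t = (cmath.exp(1j * t) - cmath.exp(-1j * t)) / (2j)
print(abs(cos_t - math.cos(t)), abs(sin_t - math.sin(t)))
```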
=== Plots ===
3D plots of real part, imaginary part, and modulus of the exponential function
Considering the complex exponential function as a function involving four real variables, v + iw = exp(x + iy), the graph of the exponential function is a two-dimensional surface curving through four dimensions.
Starting with a color-coded portion of the xy domain, the following are depictions of the graph as variously projected into two or three dimensions.
Graphs of the complex exponential function
The second image shows how the domain complex plane is mapped into the range complex plane:
zero is mapped to 1
the real x axis is mapped to the positive real v axis
the imaginary y axis is wrapped around the unit circle at a constant angular rate
values with negative real parts are mapped inside the unit circle
values with positive real parts are mapped outside of the unit circle
values with a constant real part are mapped to circles centered at zero
values with a constant imaginary part are mapped to rays extending from zero
The third and fourth images show how the graph in the second image extends into one of the other two dimensions not shown in the second image.
The third image shows the graph extended along the real x axis. It shows that the graph is a surface of revolution about the x axis of the graph of the real exponential function, producing a horn or funnel shape.
The fourth image shows the graph extended along the imaginary y axis. It shows that the graph's surface for positive and negative y values doesn't really meet along the negative real v axis, but instead forms a spiral surface about the y axis. Because its y values have been extended to ±2π, this image also better depicts the 2π periodicity in the imaginary y value.
== Matrices and Banach algebras ==
The power series definition of the exponential function makes sense for square matrices (for which the function is called the matrix exponential) and more generally in any unital Banach algebra B. In this setting, e^0 = 1, and e^x is invertible with inverse e^{−x} for any x in B. If xy = yx, then e^{x+y} = e^x e^y, but this identity can fail for noncommuting x and y.
Some alternative definitions lead to the same function. For instance, e^x can be defined as
lim_{n→∞} (1 + x/n)^n.
Or e^x can be defined as f_x(1), where f_x : ℝ → B is the solution to the differential equation df_x/dt(t) = x f_x(t), with initial condition f_x(0) = 1; it follows that f_x(t) = e^{tx} for every t in ℝ.
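The failure of e^{X+Y} = e^X e^Y for noncommuting matrices can be seen concretely with a truncated power series for 2 × 2 matrices (a self-contained sketch; the helper names and example matrices are our choices):

```python
import math

def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_exp(A, terms=30):
    """Truncated power series sum of A**k / k! for a 2x2 matrix A."""
    result = [[1.0, 0.0], [0.0, 1.0]]  # identity, the k = 0 term
    power = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        power = mat_mul(power, A)
        result = mat_add(result, [[power[i][j] / math.factorial(k)
                                   for j in range(2)] for i in range(2)])
    return result

X = [[0.0, 1.0], [0.0, 0.0]]
Y = [[0.0, 0.0], [1.0, 0.0]]  # X and Y do not commute: XY != YX

lhs = mat_exp(mat_add(X, Y))           # e^(X+Y)
rhs = mat_mul(mat_exp(X), mat_exp(Y))  # e^X e^Y
print(lhs)  # approximately [[cosh 1, sinh 1], [sinh 1, cosh 1]]
print(rhs)  # [[2, 1], [1, 1]]: the two sides differ
```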
== Lie algebras ==
Given a Lie group G and its associated Lie algebra 𝔤, the exponential map is a map 𝔤 → G satisfying similar properties. In fact, since ℝ is the Lie algebra of the Lie group of all positive real numbers under multiplication, the ordinary exponential function for real arguments is a special case of the Lie algebra situation. Similarly, since the Lie group GL(n,ℝ) of invertible n × n matrices has as Lie algebra M(n,ℝ), the space of all n × n matrices, the exponential function for square matrices is a special case of the Lie algebra exponential map.
The identity exp(x + y) = exp(x) exp(y) can fail for Lie algebra elements x and y that do not commute; the Baker–Campbell–Hausdorff formula supplies the necessary correction terms.
== Transcendency ==
The function e^z is a transcendental function, which means that it is not a root of a polynomial over the ring of rational fractions ℂ(z).
If a₁, ..., aₙ are distinct complex numbers, then e^{a₁z}, ..., e^{aₙz} are linearly independent over ℂ(z), and hence e^z is transcendental over ℂ(z).
== Computation ==
The Taylor series definition above is generally efficient for computing (an approximation of) e^x. However, when computing near the argument x = 0, the result will be close to 1, and computing the value of the difference e^x − 1 with floating-point arithmetic may lead to the loss of (possibly all) significant figures, producing a large relative error, possibly even a meaningless result.
Following a proposal by William Kahan, it may thus be useful to have a dedicated routine, often called expm1, which computes ex − 1 directly, bypassing computation of ex. For example,
one may use the Taylor series:
e^x − 1 = x + x^2/2 + x^3/6 + ⋯ + x^n/n! + ⋯.
This was first implemented in 1979 in the Hewlett-Packard HP-41C calculator, and is provided by several calculators, operating systems (for example Berkeley UNIX 4.3BSD), computer algebra systems, and programming languages (for example C99).
In addition to base e, the IEEE 754-2008 standard defines similar exponential functions near 0 for bases 2 and 10: 2^x − 1 and 10^x − 1.
A similar approach has been used for the logarithm; see log1p.
An identity in terms of the hyperbolic tangent,
expm1(x) = e^x − 1 = 2 tanh(x/2) / (1 − tanh(x/2)),
gives a high-precision value for small values of x on systems that do not implement expm1(x).
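The cancellation problem and the expm1/tanh workaround can be demonstrated with the standard library's math module (a sketch; the printed magnitudes depend on the platform's libm):

```python
import math

x = 1e-12
naive = math.exp(x) - 1.0   # subtraction destroys most significant digits
accurate = math.expm1(x)    # dedicated routine, accurate near 0
via_tanh = 2 * math.tanh(x / 2) / (1 - math.tanh(x / 2))

print(naive)     # only a few digits of the true value survive
print(accurate)  # close to x + x**2/2
print(via_tanh)  # agrees with expm1 to high precision
```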
=== Continued fractions ===
The exponential function can also be computed with continued fractions.
A continued fraction for ex can be obtained via an identity of Euler:
e^x = 1 + x / (1 − x / (x + 2 − 2x / (x + 3 − 3x / (x + 4 − ⋱))))
The following generalized continued fraction for ez converges more quickly:
e^z = 1 + 2z / (2 − z + z² / (6 + z² / (10 + z² / (14 + ⋱))))
or, by applying the substitution z = x/y:
e^{x/y} = 1 + 2x / (2y − x + x² / (6y + x² / (10y + x² / (14y + ⋱))))
with a special case for z = 2:
e² = 1 + 4 / (0 + 2² / (6 + 2² / (10 + 2² / (14 + ⋱)))) = 7 + 2 / (5 + 1 / (7 + 1 / (9 + 1 / (11 + ⋱))))
This formula also converges, though more slowly, for z > 2. For example:
e³ = 1 + 6 / (−1 + 3² / (6 + 3² / (10 + 3² / (14 + ⋱)))) = 13 + 54 / (7 + 9 / (14 + 9 / (18 + 9 / (22 + ⋱))))
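The generalized continued fraction above can be evaluated bottom-up with a fixed truncation depth; a small sketch (the function name and default depth are our choices):

```python
import math

def exp_cf(z: float, depth: int = 16) -> float:
    """Evaluate e**z via the generalized continued fraction
    1 + 2z / (2 - z + z^2/(6 + z^2/(10 + z^2/(14 + ...)))),
    truncated after `depth` partial denominators 6, 10, 14, ..."""
    t = 4.0 * depth + 2.0                # innermost denominator
    for k in range(depth - 1, 0, -1):
        t = 4.0 * k + 2.0 + z * z / t    # fold in 4k + 2 + z^2/t
    return 1.0 + 2.0 * z / (2.0 - z + z * z / t)

print(exp_cf(1.0), math.e)
print(exp_cf(3.0), math.exp(3.0))  # still converges for z > 2
```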
== See also ==
== Notes ==
== References ==
== External links ==
"Exponential function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Database theory encapsulates a broad range of topics related to the study and research of the theoretical realm of databases and database management systems.
Theoretical aspects of data management include, among other areas, the foundations of query languages, computational complexity and expressive power of queries, finite model theory, database design theory, dependency theory, foundations of concurrency control and database recovery, deductive databases, temporal and spatial databases, real-time databases, managing uncertain data and probabilistic databases, and Web data.
Most research work has traditionally been based on the relational model, since this model is usually considered the simplest and most foundational model of interest. Corresponding results for other data models, such as object-oriented or semi-structured models, or, more recently, graph data models and XML, are often derivable from those for the relational model.
Database theory helps one to understand the complexity and power of query languages and their connection to logic. Starting from relational algebra and first-order logic (which are equivalent by Codd's theorem) and the insight that important queries such as graph reachability are not expressible in this language, more powerful languages based on logic programming and fixpoint logic, such as Datalog, were studied. The theory also explores foundations of query optimization and data integration. Here most work studied conjunctive queries, which admit query optimization even under constraints using the chase algorithm.
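The gap between first-order queries and fixpoint languages can be made concrete: graph reachability is expressible by a two-rule Datalog program, whose naive bottom-up evaluation is sketched below (names and the example edge set are ours):

```python
def reachability(edges):
    """Naive bottom-up evaluation of the Datalog program
       reach(x, y) :- edge(x, y).
       reach(x, y) :- reach(x, z), edge(z, y).
    Apply the rules repeatedly until a fixpoint is reached."""
    reach = set(edges)
    while True:
        new = {(x, w) for (x, y) in reach for (z, w) in edges if y == z}
        if new <= reach:
            return reach
        reach |= new

edges = {(1, 2), (2, 3), (3, 1), (4, 4)}
print(sorted(reachability(edges)))
```

On the cycle 1 → 2 → 3 → 1 the fixpoint contains every pair of cycle vertices, including the reflexive pairs, which no single first-order formula can express over arbitrary graphs.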
The main research conferences in the area are the ACM Symposium on Principles of Database Systems (PODS) and the International Conference on Database Theory (ICDT).
== See also ==
Data integration
Conjunctive query
Expressive power
== References ==
=== General references ===
Abiteboul, Serge; Hull, Richard B.; Vianu, Victor (1995), Foundations of Databases, Addison-Wesley, ISBN 0-201-53771-0
David Maier, The Theory of Relational Databases. Copyright 1983 David Maier. Available at http://web.cecs.pdx.edu/~maier/TheoryBook/TRD.html
== External links ==
Media related to Database theory at Wikimedia Commons
In commutative algebra, the constructible topology on the spectrum Spec(A) of a commutative ring A is a topology where each closed set is the image of Spec(B) in Spec(A) for some algebra B over A. An important feature of this construction is that the map Spec(B) → Spec(A) is a closed map with respect to the constructible topology.
With respect to this topology, Spec(A) is a compact, Hausdorff, and totally disconnected topological space (i.e., a Stone space). In general, the constructible topology is a finer topology than the Zariski topology, and the two topologies coincide if and only if A/nil(A) is a von Neumann regular ring, where nil(A) is the nilradical of A.
Despite the terminology being similar, the constructible topology is not the same as the set of all constructible sets.
== See also ==
Constructible set (topology)
== References ==
Atiyah, Michael Francis; Macdonald, I.G. (1969), Introduction to Commutative Algebra, Westview Press, p. 87, ISBN 978-0-201-40751-8
Knight, J. T. (1971), Commutative Algebra, Cambridge University Press, pp. 121–123, ISBN 0-521-08193-9
In the mathematical field of model theory, a theory is called stable if it satisfies certain combinatorial restrictions on its complexity. Stable theories are rooted in the proof of Morley's categoricity theorem and were extensively studied as part of Saharon Shelah's classification theory, which showed a dichotomy that either the models of a theory admit a nice classification or the models are too numerous to have any hope of a reasonable classification. A first step of this program was showing that if a theory is not stable then its models are too numerous to classify.
Stable theories were the predominant subject of pure model theory from the 1970s through the 1990s, so their study shaped modern model theory and there is a rich framework and set of tools to analyze them. A major direction in model theory is "neostability theory," which tries to generalize the concepts of stability theory to broader contexts, such as simple and NIP theories.
== Motivation and history ==
A common goal in model theory is to study a first-order theory by analyzing the complexity of the Boolean algebras of (parameter) definable sets in its models. One can equivalently analyze the complexity of the Stone duals of these Boolean algebras, which are type spaces. Stability restricts the complexity of these type spaces by restricting their cardinalities. Since types represent the possible behaviors of elements in a theory's models, restricting the number of types restricts the complexity of these models.
Stability theory has its roots in Michael Morley's 1965 proof of Łoś's conjecture on categorical theories. In this proof, the key notion was that of a totally transcendental theory, defined by restricting the topological complexity of the type spaces. However, Morley showed that (for countable theories) this topological restriction is equivalent to a cardinality restriction, a strong form of stability now called ω-stability, and he made significant use of this equivalence. In the course of generalizing Morley's categoricity theorem to uncountable theories, Frederick Rowbottom generalized ω-stability by introducing κ-stable theories for a cardinal κ, and finally Shelah introduced stable theories.
Stability theory was much further developed in the course of Shelah's classification theory program. The main goal of this program was to show a dichotomy that either the models of a first-order theory can be nicely classified up to isomorphism using a tree of cardinal-invariants (generalizing, for example, the classification of vector spaces over a fixed field by their dimension), or are so complicated that no reasonable classification is possible. Among the concrete results from this classification theory were theorems on the possible spectrum functions of a theory, counting the number of models of cardinality κ as a function of κ. Shelah's approach was to identify a series of "dividing lines" for theories. A dividing line is a property of a theory such that both it and its negation have strong structural consequences; one should imply the models of the theory are chaotic, while the other should yield a positive structure theory. Stability was the first such dividing line in the classification theory program, and since its failure was shown to rule out any reasonable classification, all further work could assume the theory to be stable. Thus much of classification theory was concerned with analyzing stable theories and various subsets of stable theories given by further dividing lines, such as superstable theories.
One of the key features of stable theories developed by Shelah is that they admit a general notion of independence called non-forking independence, generalizing linear independence from vector spaces and algebraic independence from field theory. Although non-forking independence makes sense in arbitrary theories, and remains a key tool beyond stable theories, it has particularly good geometric and combinatorial properties in stable theories. As with linear independence, this allows the definition of independent sets and of local dimensions as the cardinalities of maximal instances of these independent sets, which are well-defined under additional hypotheses. These local dimensions then give rise to the cardinal-invariants classifying models up to isomorphism.
== Definition and alternate characterizations ==
Let T be a complete first-order theory.
For a given infinite cardinal κ, T is κ-stable if for every set A of cardinality κ in a model of T, the set S(A) of complete types over A also has cardinality κ. This is the smallest the cardinality of S(A) can be, while it can be as large as 2^κ. For the case κ = ℵ₀, it is common to say T is ω-stable rather than ℵ₀-stable.
T is stable if it is κ-stable for some infinite cardinal κ.
Restrictions on the cardinals κ for which a theory can simultaneously be κ-stable are described by the stability spectrum, which singles out the even tamer subset of superstable theories.
A common alternate definition of stable theories is that they do not have the order property. A theory has the order property if there is a formula φ(x̄, ȳ) and two infinite sequences of tuples A = (ā_i : i ∈ ℕ), B = (b̄_j : j ∈ ℕ) in some model M such that φ defines an infinite half graph on A × B, i.e. φ(ā_i, b̄_j) is true in M ⟺ i ≤ j. This is equivalent to there being a formula ψ(x̄, ȳ) and an infinite sequence of tuples A = (ā_i : i ∈ ℕ) in some model M such that ψ defines an infinite linear order on A, i.e. ψ(ā_i, ā_j) is true in M ⟺ i ≤ j.
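On finite samples the half-graph condition can be checked directly; a minimal sketch (the function and example formulas are our illustrative choices), witnessing the condition for the order formula x ≤ y on an initial segment of a linear order:

```python
def has_half_graph(phi, A, B):
    """Check whether phi defines the half graph on the finite sequences
    A, B: phi(A[i], B[j]) holds exactly when i <= j."""
    return all(phi(a, b) == (i <= j)
               for i, a in enumerate(A)
               for j, b in enumerate(B))

n = 20
A = list(range(n))
B = list(range(n))

# The order formula phi(x, y) := x <= y realizes the half-graph pattern
# on the increasing sample a_i = b_i = i.
print(has_half_graph(lambda x, y: x <= y, A, B))            # True
# A parity-based relation does not follow the half-graph pattern.
print(has_half_graph(lambda x, y: (x + y) % 2 == 0, A, B))  # False
```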
There are numerous further characterizations of stability. As with Morley's totally transcendental theories, the cardinality restrictions of stability are equivalent to bounding the topological complexity of type spaces in terms of Cantor-Bendixson rank. Another characterization is via the properties that non-forking independence has in stable theories, such as being symmetric. This characterizes stability in the sense that any theory with an abstract independence relation satisfying certain of these properties must be stable and the independence relation must be non-forking independence.
Any of these definitions, except via an abstract independence relation, can instead be used to define what it means for a single formula to be stable in a given theory T. Then T can be defined to be stable if every formula is stable in T. Localizing results to stable formulas allows these results to be applied to stable formulas in unstable theories, and this localization to single formulas is often useful even in the case of stable theories.
== Examples and non-examples ==
For an unstable theory, consider the theory DLO of dense linear orders without endpoints. Then the atomic order relation has the order property. Alternatively, unrealized 1-types over a set A correspond to cuts (generalized Dedekind cuts, without the requirements that the two sets be non-empty and that the lower set have no greatest element) in the ordering of A, and there exist dense orders of any cardinality κ with 2^κ-many cuts.
Another unstable theory is the theory of the Rado graph, where the atomic edge relation has the order property.
For a stable theory, consider the theory ACF_p of algebraically closed fields of characteristic p, allowing p = 0. Then if K is a model of ACF_p, counting types over a set A ⊂ K is equivalent to counting types over the field k generated by A in K. There is a (continuous) bijection from the space of n-types over k to the space of prime ideals in the polynomial ring k[X₁, …, Xₙ]. Since such ideals are finitely generated, there are only |k| + ℵ₀ many, so ACF_p is κ-stable for all infinite κ.
Some further examples of stable theories are listed below.
The theory of any module over a ring (in particular, any theory of vector spaces or abelian groups).
The theory of non-abelian free groups.
The theory of differentially closed fields of characteristic p. When p = 0, the theory is ω-stable.
The theory of any nowhere dense graph class. These include graph classes with bounded expansion, which in turn include planar graphs and any graph class of bounded degree.
== Geometric stability theory ==
Geometric stability theory is concerned with the fine analysis of local geometries in models and how their properties influence global structure. This line of results was later key in various applications of stability theory, for example to Diophantine geometry. It is usually taken to start in the late 1970s with Boris Zilber's analysis of totally categorical theories, eventually showing that they are not finitely axiomatizable. Every model of a totally categorical theory is controlled by (i.e. is prime and minimal over) a strongly minimal set, which carries a matroid structure determined by (model-theoretic) algebraic closure that gives notions of independence and dimension. In this setting, geometric stability theory then asks the local question of what the possibilities are for the structure of the strongly minimal set, and the local-to-global question of how the strongly minimal set controls the whole model.
The second question is answered by Zilber's Ladder Theorem, showing every model of a totally categorical theory is built up by a finite sequence of something like "definable fiber bundles" over the strongly minimal set. For the first question, Zilber's Trichotomy Conjecture was that the geometry of a strongly minimal set must be either like that of a set with no structure, or the set must essentially carry the structure of a vector space, or the structure of an algebraically closed field, with the first two cases called locally modular. This conjecture illustrates two central themes. First, that (local) modularity serves to divide combinatorial or linear behavior from nonlinear, geometric complexity as in algebraic geometry. Second, that complicated combinatorial geometry necessarily comes from algebraic objects; this is akin to the classical problem of finding a coordinate ring for an abstract projective plane defined by incidences, and further examples are the group configuration theorems showing certain combinatorial dependencies among elements must arise from multiplication in a definable group. By developing analogues of parts of algebraic geometry in strongly minimal sets, such as intersection theory, Zilber proved a weak form of the Trichotomy Conjecture for uncountably categorical theories. Although Ehud Hrushovski developed the Hrushovski construction to disprove the full conjecture, it was later proved with additional hypotheses in the setting of "Zariski geometries".
Notions from Shelah's classification program, such as regular types, forking, and orthogonality, allowed these ideas to be carried to greater generality, especially in superstable theories. Here, sets defined by regular types play the role of strongly minimal sets, with their local geometry determined by forking dependence rather than algebraic dependence. In place of the single strongly minimal set controlling models of a totally categorical theory, there may be many such local geometries defined by regular types, and orthogonality describes when these types have no interaction.
== Applications ==
While stable theories are fundamental in model theory, this section lists applications of stable theories to other areas of mathematics. This list does not aim for completeness, but rather a sense of breadth.
Since the theory of differentially closed fields of characteristic 0 is ω-stable, there are many applications of stability theory in differential algebra. For example, the existence and uniqueness of the differential closure of such a field (an analogue of the algebraic closure) were proved by Lenore Blum and Shelah respectively, using general results on prime models in ω-stable theories.
In Diophantine geometry, Ehud Hrushovski used geometric stability theory to prove the Mordell-Lang conjecture for function fields in all characteristics, which generalizes Faltings's theorem about counting rational points on curves and the Manin-Mumford conjecture about counting torsion points on curves. The key point in the proof was using Zilber's Trichotomy in differential fields to show certain arithmetically defined groups are locally modular.
In online machine learning, the Littlestone dimension of a concept class is a complexity measure characterizing learnability, analogous to the VC-dimension in PAC learning. Bounding the Littlestone dimension of a concept class is equivalent to a combinatorial characterization of stability involving binary trees. This equivalence has been used, for example, to prove that online learnability of a concept class is equivalent to differentially private PAC learnability.
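The binary-tree characterization mentioned above can be made concrete for finite concept classes. The following Python sketch (exponential-time brute force; the function name and conventions are mine, not from any library) implements the standard recursion for the Littlestone dimension: a class has dimension at least d + 1 exactly when some point splits it into two subclasses, each of dimension at least d.

```python
def ldim(concepts, domain):
    """Littlestone dimension of a finite concept class (brute force).

    `concepts` is a set of concepts, each given as a frozenset of the
    domain points it labels 1.  Recursion: Ldim(C) >= d + 1 iff some
    point x splits C into the subclasses labeling x by 0 and by 1,
    each of Littlestone dimension >= d; a class with at most one
    concept has dimension 0.
    """
    if len(concepts) <= 1:
        return 0
    best = 0
    for x in domain:
        zero = frozenset(c for c in concepts if x not in c)
        one = frozenset(c for c in concepts if x in c)
        if zero and one:
            best = max(best, 1 + min(ldim(zero, domain), ldim(one, domain)))
    return best

# Thresholds on {0, 1, 2, 3}: a chain of 5 concepts, whose Littlestone
# dimension is floor(log2(5)) = 2.
domain = list(range(4))
thresholds = {frozenset(x for x in domain if x >= t) for t in range(5)}
assert ldim(thresholds, domain) == 2
```

The recursion mirrors the mistake-tree definition: each split point is an adversary's query, and the minimum over the two branches is the depth of the shallower subtree.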
In functional analysis, Jean-Louis Krivine and Bernard Maurey defined a notion of stability for Banach spaces, equivalent to stating that no quantifier-free formula has the order property (in continuous logic, rather than first-order logic). They then showed that every stable Banach space admits an almost-isometric embedding of ℓp for some p ∈ [1, ∞). This is part of a broader interplay between functional analysis and stability in continuous logic; for example, early results of Alexander Grothendieck in functional analysis can be interpreted as equivalent to fundamental results of stability theory.
A countable (possibly finite) structure is ultrahomogeneous if every finite partial automorphism extends to an automorphism of the full structure. Gregory Cherlin and Alistair Lachlan provided a general classification theory for stable ultrahomogeneous structures, including all finite ones. In particular, their results show that for any fixed finite relational language, the finite homogeneous structures fall into finitely many infinite families with members parametrized by numerical invariants and finitely many sporadic examples. Furthermore, every sporadic example becomes part of an infinite family in some richer language, and new sporadic examples always appear in suitably richer languages.
In arithmetic combinatorics, Hrushovski proved results on the structure of approximate subgroups, for example implying a strengthened version of Gromov's theorem on groups of polynomial growth. Although this did not directly use stable theories, the key insight was that fundamental results from stable group theory could be generalized and applied in this setting. This directly led to the Breuillard-Green-Tao theorem classifying approximate subgroups.
== Generalizations ==
For about twenty years after its introduction, stability was the main subject of pure model theory. A central direction of modern pure model theory, sometimes called "neostability" or "classification theory," consists of generalizing the concepts and techniques developed for stable theories to broader classes of theories, and this has fed into many of the more recent applications of model theory.
Two notable examples of such broader classes are simple and NIP theories. These are orthogonal generalizations of stable theories, since a theory is both simple and NIP if and only if it is stable. Roughly, NIP theories keep the good combinatorial behavior from stable theories, while simple theories keep the good geometric behavior of non-forking independence. In particular, simple theories can be characterized by non-forking independence being symmetric, while NIP can be characterized by bounding the number of types realized over either finite or infinite sets.
Another direction of generalization is to recapitulate classification theory beyond the setting of complete first-order theories, such as in abstract elementary classes.
== See also ==
Stability spectrum
Spectrum of a theory
Morley's categoricity theorem
NIP theories
== Notes ==
== References ==
== External links ==
A map of the model-theoretic classification of theories, highlighting stability
Two book reviews discussing stability and classification theory for non-model theorists: Fundamentals of Stability Theory and Classification Theory
An overview of (geometric) stability theory for non-model theorists
In mathematics, Stone's representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to a certain field of sets. The theorem is fundamental to the deeper understanding of Boolean algebra that emerged in the first half of the 20th century. The theorem was first proved by Marshall H. Stone. Stone was led to it by his study of the spectral theory of operators on a Hilbert space.
== Stone spaces ==
Each Boolean algebra B has an associated topological space, denoted here S(B), called its Stone space. The points in S(B) are the ultrafilters on B, or equivalently the homomorphisms from B to the two-element Boolean algebra. The topology on S(B) is generated by a basis consisting of all sets of the form
{x ∈ S(B) ∣ b ∈ x},
where b is an element of B. These sets are also closed and so are clopen (both closed and open). This is the topology of pointwise convergence of nets of homomorphisms into the two-element Boolean algebra.
For every Boolean algebra B, S(B) is a compact totally disconnected Hausdorff space; such spaces are called Stone spaces (also profinite spaces). Conversely, given any Stone space X, the collection of subsets of X that are clopen is a Boolean algebra.
== Representation theorem ==
A simple version of Stone's representation theorem states that every Boolean algebra B is isomorphic to the algebra of clopen subsets of its Stone space S(B). The isomorphism sends an element b ∈ B to the set of all ultrafilters that contain b. This is a clopen set because of the choice of topology on S(B) and because B is a Boolean algebra.
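For a finite Boolean algebra, the representation can be checked by brute force. The Python sketch below (helper names mine) takes B to be the algebra of subsets of a 3-element set, enumerates its ultrafilters (they turn out to be the three principal ones, one per point), and verifies that the Stone map b ↦ {ultrafilters containing b} is injective and preserves joins and meets; the Stone space here is a discrete 3-point space, so every subset is clopen.

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# B: the Boolean algebra of all subsets of a 3-element set X,
# with join = union, meet = intersection, complement relative to X.
X = {0, 1, 2}
B = powerset(X)

def is_ultrafilter(U, B):
    """U is an ultrafilter on B: a proper filter (upward closed, closed
    under meets, not containing 0) that contains exactly one of b and
    its complement for every b in B."""
    if not U or frozenset() in U:
        return False
    for a in U:
        for b in B:
            if a <= b and b not in U:       # not upward closed
                return False
        for b in U:
            if a & b not in U:              # not closed under meets
                return False
    return all((b in U) != (frozenset(X - b) in U) for b in B)

ultrafilters = [U for U in powerset(B) if is_ultrafilter(U, B)]
assert len(ultrafilters) == 3               # one principal ultrafilter per point of X

# Stone map: b ↦ {ultrafilters containing b}.  Injectivity plus
# preservation of joins and meets exhibits the isomorphism.
stone = {b: frozenset(U for U in ultrafilters if b in U) for b in B}
assert len(set(stone.values())) == len(B)
assert all(stone[p | q] == stone[p] | stone[q] for p in B for q in B)
```

In the finite case the theorem reduces to the classical fact that a finite Boolean algebra is the powerset algebra of its set of atoms.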
Restated in the language of category theory, the theorem says that there is a duality between the category of Boolean algebras and the category of Stone spaces. This duality means that in addition to the correspondence between Boolean algebras and their Stone spaces, each homomorphism from a Boolean algebra A to a Boolean algebra B corresponds in a natural way to a continuous function from S(B) to S(A). In other words, there is a contravariant functor that gives an equivalence between the categories. This was an early example of a nontrivial duality of categories.
The theorem is a special case of Stone duality, a more general framework for dualities between topological spaces and partially ordered sets.
The proof requires either the axiom of choice or a weakened form of it. Specifically, the theorem is equivalent to the Boolean prime ideal theorem, a weakened choice principle that states that every Boolean algebra has a prime ideal.
An extension of the classical Stone duality to the category of Boolean spaces (that is, zero-dimensional locally compact Hausdorff spaces) and continuous maps (respectively, perfect maps) was obtained by G. D. Dimov (respectively, by H. P. Doctor).
== See also ==
Stone's representation theorem for distributive lattices
Representation theorem – Proof that every structure with certain properties is isomorphic to another structure
Field of sets – Algebraic concept in measure theory, also referred to as an algebra of sets
List of Boolean algebra topics
Stonean space – Topological space in which the closure of every open set is open
Stone functor – Functor in category theory
Profinite group – Topological group that is in a certain sense assembled from a system of finite groups
Ultrafilter lemma – Maximal proper filter
== Citations ==
== References ==
Halmos, Paul; Givant, Steven (1998). Logic as Algebra. Dolciani Mathematical Expositions. Vol. 21. The Mathematical Association of America. ISBN 0-88385-327-2.
Johnstone, Peter T. (1982). Stone Spaces. Cambridge University Press. ISBN 0-521-23893-5.
Burris, Stanley N.; Sankappanavar, H.P. (1981). A Course in Universal Algebra. Springer. ISBN 3-540-90578-2.
In logic, mathematics, computer science, and linguistics, a formal language is a set of strings whose symbols are taken from a set called "alphabet".
The alphabet of a formal language consists of symbols that concatenate into strings (also called "words"). Words that belong to a particular formal language are sometimes called well-formed words. A formal language is often defined by means of a formal grammar such as a regular grammar or context-free grammar.
In computer science, formal languages are used, among other things, as the basis for defining the grammar of programming languages and formalized versions of subsets of natural languages, in which the words of the language represent concepts that are associated with meanings or semantics. In computational complexity theory, decision problems are typically defined as formal languages, and complexity classes are defined as the sets of the formal languages that can be parsed by machines with limited computational power. In logic and the foundations of mathematics, formal languages are used to represent the syntax of axiomatic systems, and mathematical formalism is the philosophy that all of mathematics can be reduced to the syntactic manipulation of formal languages in this way.
The field of formal language theory studies primarily the purely syntactic aspects of such languages—that is, their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the syntactic regularities of natural languages.
== History ==
In the 17th century, Gottfried Leibniz imagined and described the characteristica universalis, a universal and formal language which utilised pictographs. Later, Carl Friedrich Gauss investigated the problem of Gauss codes.
Gottlob Frege attempted to realize Leibniz's ideas, through a notational system first outlined in Begriffsschrift (1879) and more fully developed in his 2-volume Grundgesetze der Arithmetik (1893/1903). This described a "formal language of pure thought."
In the first half of the 20th century, several developments were made with relevance to formal languages. Axel Thue published four papers relating to words and language between 1906 and 1914. The last of these introduced what Emil Post later termed 'Thue Systems', and gave an early example of an undecidable problem. Post would later use this paper as the basis for a 1947 proof "that the word problem for semigroups was recursively insoluble", and later devised the canonical system for the creation of formal languages.
In 1907, Leonardo Torres Quevedo introduced a formal language for the description of mechanical drawings (mechanical devices), in Vienna. He published "Sobre un sistema de notaciones y símbolos destinados a facilitar la descripción de las máquinas" ("On a system of notations and symbols intended to facilitate the description of machines"). Heinz Zemanek rated it as an equivalent to a programming language for the numerical control of machine tools.
Noam Chomsky devised an abstract representation of formal and natural languages, known as the Chomsky hierarchy. In 1959 John Backus developed the Backus–Naur form to describe the syntax of a high-level programming language, following his work in the creation of FORTRAN. Peter Naur was the secretary/editor for the ALGOL60 Report in which he used Backus–Naur form to describe the formal part of ALGOL60.
== Words over an alphabet ==
An alphabet, in the context of formal languages, can be any set; its elements are called letters. An alphabet may contain an infinite number of elements; however, most definitions in formal language theory specify alphabets with a finite number of elements, and many results apply only to them. It often makes sense to use an alphabet in the usual sense of the word, or more generally any finite character encoding such as ASCII or Unicode.
A word over an alphabet can be any finite sequence (i.e., string) of letters. The set of all words over an alphabet Σ is usually denoted by Σ* (using the Kleene star). The length of a word is the number of letters it is composed of. For any alphabet, there is only one word of length 0, the empty word, which is often denoted by e, ε, λ or even Λ. By concatenation one can combine two words to form a new word, whose length is the sum of the lengths of the original words. The result of concatenating a word with the empty word is the original word.
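For concreteness, these notions can be sketched in a few lines of Python over the alphabet Σ = {a, b} (the helper name is mine): a finite slice of Σ*, word length, concatenation, and the empty word ε as its identity.

```python
from itertools import product

sigma = ("a", "b")

def words_up_to(n):
    """All words over sigma of length at most n: a finite slice of Σ*."""
    return ["".join(p) for k in range(n + 1) for p in product(sigma, repeat=k)]

epsilon = ""                             # the unique word of length 0
assert len(words_up_to(2)) == 1 + 2 + 4  # ε, plus 2 words of length 1, plus 4 of length 2

u, v = "ab", "ba"
assert len(u + v) == len(u) + len(v)     # concatenation adds lengths
assert u + epsilon == u == epsilon + u   # ε is the identity for concatenation
```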
In some applications, especially in logic, the alphabet is also known as the vocabulary and words are known as formulas or sentences; this breaks the letter/word metaphor and replaces it by a word/sentence metaphor.
== Definition ==
Given a non-empty set Σ, a formal language L over Σ is a subset of Σ*, which is the set of all possible finite-length words over Σ. We call the set Σ the alphabet of L. On the other hand, given a formal language L over Σ, a word w ∈ Σ* is well-formed if w ∈ L. Similarly, an expression E ⊆ Σ* is well-formed if E ⊆ L. Sometimes, a formal language L over Σ has a set of clear rules and constraints for the creation of all possible well-formed words from Σ*.
In computer science and mathematics, which do not usually deal with natural languages, the adjective "formal" is often omitted as redundant. On the other hand, we can just say "a formal language L" when its alphabet Σ is clear from the context.
While formal language theory usually concerns itself with formal languages that are described by some syntactic rules, the actual definition of the concept "formal language" is only as above: a (possibly infinite) set of finite-length strings composed from a given alphabet, no more and no less. In practice, there are many languages that can be described by rules, such as regular languages or context-free languages. The notion of a formal grammar may be closer to the intuitive concept of a "language", one described by syntactic rules. By an abuse of the definition, a particular formal language is often thought of as being accompanied with a formal grammar that describes it.
== Examples ==
The following rules describe a formal language L over the alphabet Σ = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, =}:
Every nonempty string that does not contain "+" or "=" and does not start with "0" is in L.
The string "0" is in L.
A string containing "=" is in L if and only if there is exactly one "=", and it separates two valid strings of L.
A string containing "+" but not "=" is in L if and only if every "+" in the string separates two valid strings of L.
No string is in L other than those implied by the previous rules.
Under these rules, the string "23+4=555" is in L, but the string "=234=+" is not. This formal language expresses natural numbers, well-formed additions, and well-formed addition equalities, but it expresses only what they look like (their syntax), not what they mean (semantics). For instance, nowhere in these rules is there any indication that "0" means the number zero, "+" means addition, "23+4=555" is false, etc.
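The five rules above translate directly into a recognizer. The following Python sketch (function names are mine) decides membership in L:

```python
DIGITS = set("0123456789")

def is_number(s):
    # Rules 1 and 2: the string "0", or a nonempty digit string
    # that does not start with "0".
    return s == "0" or (s != "" and set(s) <= DIGITS and s[0] != "0")

def in_L(s):
    if "=" in s:
        # Rule 3: exactly one "=", separating two valid strings of L.
        left, _, right = s.partition("=")
        return "=" not in right and in_L(left) and in_L(right)
    if "+" in s:
        # Rule 4: every "+" separates two valid "="-free strings of L,
        # equivalently every "+"-separated chunk is a valid number.
        return all(is_number(part) for part in s.split("+"))
    return is_number(s)

assert in_L("23+4=555") and not in_L("=234=+")
```

Note that the recognizer confirms the article's point: it checks syntax only, and happily accepts the arithmetically false "23+4=555".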
=== Constructions ===
For finite languages, one can explicitly enumerate all well-formed words. For example, we can describe a language L as just L = {a, b, ab, cba}. The degenerate case of this construction is the empty language, which contains no words at all (L = ∅).
However, even over a finite (non-empty) alphabet such as Σ = {a, b} there are an infinite number of finite-length words that can potentially be expressed: "a", "abb", "ababba", "aaababbbbaab", .... Therefore, formal languages are typically infinite, and describing an infinite formal language is not as simple as writing L = {a, b, ab, cba}. Here are some examples of formal languages:
L = Σ*, the set of all words over Σ;
L = {a}* = {a^n}, where n ranges over the natural numbers and "a^n" means "a" repeated n times (this is the set of words consisting only of the symbol "a");
the set of syntactically correct programs in a given programming language (the syntax of which is usually defined by a context-free grammar);
the set of inputs upon which a certain Turing machine halts; or
the set of maximal strings of alphanumeric ASCII characters on this line, i.e., the set {the, set, of, maximal, strings, alphanumeric, ASCII, characters, on, this, line, i, e}.
== Language-specification formalisms ==
Formal languages are used as tools in multiple disciplines. However, formal language theory rarely concerns itself with particular languages (except as examples), but is mainly concerned with the study of various types of formalisms to describe languages. For instance, a language can be given as
those strings generated by some formal grammar;
those strings described or matched by a particular regular expression;
those strings accepted by some automaton, such as a Turing machine or finite-state automaton;
those strings for which some decision procedure (an algorithm that asks a sequence of related YES/NO questions) produces the answer YES.
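As a minimal illustration of the automaton style of specification, here is a Python sketch of a deterministic finite automaton given by an explicit transition table (all names are illustrative); the language it defines is the set of words it accepts.

```python
def make_dfa(transitions, start, accepting):
    """A deterministic finite automaton, packaged as a membership test."""
    def accepts(word):
        state = start
        for symbol in word:
            state = transitions[(state, symbol)]
        return state in accepting
    return accepts

# Example: the language of binary strings with an even number of 1s.
even_ones = make_dfa(
    transitions={("even", "0"): "even", ("even", "1"): "odd",
                 ("odd", "0"): "odd", ("odd", "1"): "even"},
    start="even",
    accepting={"even"},
)
assert even_ones("0110") and not even_ones("0010")
```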
Typical questions asked about such formalisms include:
What is their expressive power? (Can formalism X describe every language that formalism Y can describe? Can it describe other languages?)
What is their recognizability? (How difficult is it to decide whether a given word belongs to a language described by formalism X?)
What is their comparability? (How difficult is it to decide whether two languages, one described in formalism X and one in formalism Y, or in X again, are actually the same language?).
Surprisingly often, the answer to these decision problems is "it cannot be done at all", or "it is extremely expensive" (with a characterization of how expensive). Therefore, formal language theory is a major application area of computability theory and complexity theory. Formal languages may be classified in the Chomsky hierarchy based on the expressive power of their generative grammar as well as the complexity of their recognizing automaton. Context-free grammars and regular grammars provide a good compromise between expressivity and ease of parsing, and are widely used in practical applications.
== Operations on languages ==
Certain operations on languages are common. This includes the standard set operations, such as union, intersection, and complement. Another class of operation is the element-wise application of string operations.
Examples: suppose L₁ and L₂ are languages over some common alphabet Σ.
The concatenation L₁ ⋅ L₂ consists of all strings of the form vw where v is a string from L₁ and w is a string from L₂.
The intersection L₁ ∩ L₂ of L₁ and L₂ consists of all strings that are contained in both languages.
The complement ¬L₁ of L₁ with respect to Σ consists of all strings over Σ that are not in L₁.
The Kleene star: the language consisting of all words that are concatenations of zero or more words in the original language;
Reversal:
Let ε be the empty word, then ε^R = ε, and
for each non-empty word w = σ₁⋯σₙ (where σ₁, …, σₙ are elements of some alphabet), let w^R = σₙ⋯σ₁,
then for a formal language L, L^R = {w^R ∣ w ∈ L}.
String homomorphism
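For finite languages, these operations can be sketched directly in Python (function names are mine; since Σ* is infinite, the complement can only be materialized relative to a finite length bound):

```python
from itertools import product

def concat(L1, L2):
    """Concatenation: all strings vw with v from L1 and w from L2."""
    return {v + w for v in L1 for w in L2}

def reversal(L):
    """Reversal: reverse every word of L."""
    return {w[::-1] for w in L}

def complement_upto(L, sigma, n):
    """Complement of L, restricted to words over sigma of length at most n."""
    words = {"".join(p) for k in range(n + 1) for p in product(sigma, repeat=k)}
    return words - L

L1, L2 = {"a", "ab"}, {"b"}
assert concat(L1, L2) == {"ab", "abb"}
assert reversal({"ab", "ba"}) == {"ba", "ab"}
assert L1 & {"ab", "b"} == {"ab"}        # intersection is plain set intersection
assert "b" in complement_upto(L1, ("a", "b"), 2)
```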
Such string operations are used to investigate closure properties of classes of languages. A class of languages is closed under a particular operation when the operation, applied to languages in the class, always produces a language in the same class again. For instance, the context-free languages are known to be closed under union, concatenation, and intersection with regular languages, but not closed under intersection or complement. The theory of trios and abstract families of languages studies the most common closure properties of language families in their own right.
== Applications ==
=== Programming languages ===
A compiler usually has two distinct components. A lexical analyzer, sometimes generated by a tool like lex, identifies the tokens of the programming language grammar, e.g. identifiers or keywords, numeric and string literals, punctuation and operator symbols, which are themselves specified by a simpler formal language, usually by means of regular expressions. At the most basic conceptual level, a parser, sometimes generated by a parser generator like yacc, attempts to decide if the source program is syntactically valid, that is if it is well formed with respect to the programming language grammar for which the compiler was built.
Of course, compilers do more than just parse the source code – they usually translate it into some executable format. Because of this, a parser usually outputs more than a yes/no answer, typically an abstract syntax tree. This is used by subsequent stages of the compiler to eventually generate an executable containing machine code that runs directly on the hardware, or some intermediate code that requires a virtual machine to execute.
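A lexical analyzer of the kind described above can be sketched in a few lines of Python using regular expressions; the token classes below are illustrative, not those of any particular tool or language.

```python
import re

# Token classes, each specified by a regular expression, in the spirit of lex.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),                  # whitespace, discarded below
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Split source text into (token class, lexeme) pairs.
    Note: a sketch only -- characters matching no pattern are silently skipped."""
    return [(m.lastgroup, m.group()) for m in MASTER.finditer(source)
            if m.lastgroup != "SKIP"]

assert tokenize("x = 2 + y1") == [
    ("IDENT", "x"), ("OP", "="), ("NUMBER", "2"), ("OP", "+"), ("IDENT", "y1")]
```

The token stream produced here is what a parser would then check against the grammar.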
=== Formal theories, systems, and proofs ===
In mathematical logic, a formal theory is a set of sentences expressed in a formal language.
A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation rules, which may be interpreted as valid rules of inference, or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions. Although a formal language can be identified with its formulas, a formal system cannot be likewise identified by its theorems. Two formal systems FS and FS′ may have all the same theorems and yet differ in some significant proof-theoretic way (a formula A may be a syntactic consequence of a formula B in one but not another, for instance).
A formal proof or derivation is a finite sequence of well-formed formulas (which may be interpreted as sentences, or propositions) each of which is an axiom or follows from the preceding formulas in the sequence by a rule of inference. The last sentence in the sequence is a theorem of a formal system. Formal proofs are useful because their theorems can be interpreted as true propositions.
==== Interpretations and models ====
Formal languages are entirely syntactic in nature, but may be given semantics that give meaning to the elements of the language. For instance, in mathematical logic, the set of possible formulas of a particular logic is a formal language, and an interpretation assigns a meaning to each of the formulas—usually, a truth value.
The study of interpretations of formal languages is called formal semantics. In mathematical logic, this is often done in terms of model theory. In model theory, the terms that occur in a formula are interpreted as objects within mathematical structures, and fixed compositional interpretation rules determine how the truth value of the formula can be derived from the interpretation of its terms; a model for a formula is an interpretation of terms such that the formula becomes true.
== See also ==
Combinatorics on words
Formal method
Free monoid
Grammar framework
Mathematical notation
String (computer science)
== Notes ==
== References ==
=== Citations ===
=== Sources ===
Works cited
General references
== External links ==
"Formal language", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
University of Maryland, Formal Language Definitions
James Power, "Notes on Formal Language Theory and Parsing" Archived 21 November 2007 at the Wayback Machine, 29 November 2002.
Drafts of some chapters in the "Handbook of Formal Language Theory", Vol. 1–3, G. Rozenberg and A. Salomaa (eds.), Springer Verlag, (1997):
Alexandru Mateescu and Arto Salomaa, "Preface" in Vol.1, pp. v–viii, and "Formal Languages: An Introduction and a Synopsis", Chapter 1 in Vol. 1, pp. 1–39
Sheng Yu, "Regular Languages", Chapter 2 in Vol. 1
Jean-Michel Autebert, Jean Berstel, Luc Boasson, "Context-Free Languages and Push-Down Automata", Chapter 3 in Vol. 1
Christian Choffrut and Juhani Karhumäki, "Combinatorics of Words", Chapter 6 in Vol. 1
Tero Harju and Juhani Karhumäki, "Morphisms", Chapter 7 in Vol. 1, pp. 439–510
Jean-Eric Pin, "Syntactic semigroups", Chapter 10 in Vol. 1, pp. 679–746
M. Crochemore and C. Hancart, "Automata for matching patterns", Chapter 9 in Vol. 2
Dora Giammarresi, Antonio Restivo, "Two-dimensional Languages", Chapter 4 in Vol. 3, pp. 215–267
This is a glossary of arithmetic and diophantine geometry in mathematics, areas growing out of the traditional study of Diophantine equations to encompass large parts of number theory and algebraic geometry. Much of the theory is in the form of proposed conjectures, which can be related at various levels of generality.
Diophantine geometry in general is the study of algebraic varieties V over fields K that are finitely generated over their prime fields—including as of special interest number fields and finite fields—and over local fields. Of those, only the complex numbers are algebraically closed; over any other K the existence of points of V with coordinates in K is something to be proved and studied as an extra topic, even knowing the geometry of V.
Arithmetic geometry can be more generally defined as the study of schemes of finite type over the spectrum of the ring of integers. Arithmetic geometry has also been defined as the application of the techniques of algebraic geometry to problems in number theory.
See also the glossary of number theory terms at Glossary of number theory.
== A ==
abc conjecture
The abc conjecture of Masser and Oesterlé attempts to state as much as possible about repeated prime factors in an equation a + b = c. For example, 3 + 125 = 128, that is 3 + 5^3 = 2^7, but the prime powers here are exceptional.
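To make the example concrete, the following Python sketch (helper name mine) computes the radical rad(abc), the product of the distinct primes dividing abc, and the standard "quality" log c / log rad(abc); triples with quality above 1, like this one, are the exceptional ones the conjecture concerns.

```python
import math

def radical(n):
    """Product of the distinct prime factors of n (trial-division sketch)."""
    rad, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            rad *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        rad *= n
    return rad

a, b, c = 3, 125, 128                    # the example above: 3 + 5^3 = 2^7
assert a + b == c
assert radical(a * b * c) == 2 * 3 * 5   # rad(abc) = 30, far smaller than c
quality = math.log(c) / math.log(radical(a * b * c))
assert quality > 1                       # quality > 1 marks an exceptional triple
```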
Arakelov class group
The Arakelov class group is the analogue of the ideal class group or divisor class group for Arakelov divisors.
Arakelov divisor
An Arakelov divisor (or replete divisor) on a global field is an extension of the concept of divisor or fractional ideal. It is a formal linear combination of places of the field with finite places having integer coefficients and the infinite places having real coefficients.
Arakelov height
The Arakelov height on a projective space over the field of algebraic numbers is a global height function with local contributions coming from Fubini–Study metrics on the Archimedean fields and the usual metric on the non-Archimedean fields.
Arakelov theory
Arakelov theory is an approach to arithmetic geometry that explicitly includes the 'infinite primes'.
Arithmetic of abelian varieties
See main article arithmetic of abelian varieties
Artin L-functions
Artin L-functions are defined for quite general Galois representations. The introduction of étale cohomology in the 1960s meant that Hasse–Weil L-functions could be regarded as Artin L-functions for the Galois representations on l-adic cohomology groups.
== B ==
Bad reduction
See good reduction.
Birch and Swinnerton-Dyer conjecture
The Birch and Swinnerton-Dyer conjecture on elliptic curves postulates a connection between the rank of an elliptic curve and the order of pole of its Hasse–Weil L-function. It has been an important landmark in Diophantine geometry since the mid-1960s, with results such as the Coates–Wiles theorem, Gross–Zagier theorem and Kolyvagin's theorem.
== C ==
Canonical height
The canonical height on an abelian variety is a height function that is a distinguished quadratic form. See Néron–Tate height.
Chabauty's method
Chabauty's method, based on p-adic analytic functions, is a special application but capable of proving cases of the Mordell conjecture for curves whose Jacobian's rank is less than its dimension. It developed ideas from Thoralf Skolem's method for an algebraic torus. (Other older methods for Diophantine problems include Runge's method.)
Coates–Wiles theorem
The Coates–Wiles theorem states that an elliptic curve with complex multiplication by an imaginary quadratic field of class number 1 and positive rank has an L-function with a zero at s = 1. This is a special case of the Birch and Swinnerton-Dyer conjecture.
Crystalline cohomology
Crystalline cohomology is a p-adic cohomology theory in characteristic p, introduced by Alexander Grothendieck to fill the gap left by étale cohomology which is deficient in using mod p coefficients in this case. It is one of a number of theories deriving in some way from Dwork's method, and has applications outside purely arithmetical questions.
== D ==
Diagonal forms
Diagonal forms are some of the simplest projective varieties to study from an arithmetic point of view (including the Fermat varieties). Their local zeta-functions are computed in terms of Jacobi sums. Waring's problem is the most classical case.
Diophantine dimension
The Diophantine dimension of a field is the smallest natural number k, if it exists, such that the field is of class C_k: that is, such that any homogeneous polynomial of degree d in N variables has a non-trivial zero whenever N > d^k. Algebraically closed fields are of Diophantine dimension 0; quasi-algebraically closed fields are of dimension 1.
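Finite fields are quasi-algebraically closed (class C_1) by the Chevalley–Warning theorem, so the defining condition can be observed by brute force. A small illustrative sketch, with the field and polynomials chosen arbitrarily:

```python
from itertools import product

def has_nontrivial_zero(poly, nvars, p):
    """Exhaustively search the nonzero vectors of F_p^nvars for a zero of poly mod p."""
    return any(poly(*pt) % p == 0
               for pt in product(range(p), repeat=nvars)
               if any(pt))

# N = 3 variables, degree d = 2, so N > d^1 and the C_1 property of F_5
# guarantees a non-trivial zero (for instance 1^2 + 2^2 + 0^2 = 5).
print(has_nontrivial_zero(lambda x, y, z: x**2 + y**2 + z**2, 3, 5))  # True

# With N = d the guarantee lapses: x^2 + y^2 has no non-trivial zero mod 3.
print(has_nontrivial_zero(lambda x, y: x**2 + y**2, 2, 3))            # False
```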
Discriminant of a point
The discriminant of a point refers to two related concepts relative to a point P on an algebraic variety V defined over a number field K: the geometric (logarithmic) discriminant d(P) and the arithmetic discriminant, defined by Vojta. The difference between the two may be compared to the difference between the arithmetic genus of a singular curve and the geometric genus of the desingularisation. The arithmetic genus is larger than the geometric genus, and the height of a point may be bounded in terms of the arithmetic genus. Obtaining similar bounds involving the geometric genus would have significant consequences.
Dwork's method
Bernard Dwork used distinctive methods of p-adic analysis, p-adic algebraic differential equations, Koszul complexes and other techniques that have not all been absorbed into general theories such as crystalline cohomology. He first proved the rationality of local zeta-functions, the initial advance in the direction of the Weil conjectures.
== E ==
Étale cohomology
The search for a Weil cohomology (q.v.) was at least partially fulfilled in the étale cohomology theory of Alexander Grothendieck and Michael Artin. It provided a proof of the functional equation for the local zeta-functions, and was basic in the formulation of the Tate conjecture (q.v.) and numerous other theories.
== F ==
Faltings height
The Faltings height of an elliptic curve or abelian variety defined over a number field is a measure of its complexity introduced by Faltings in his proof of the Mordell conjecture.
Fermat's Last Theorem
Fermat's Last Theorem, the most celebrated conjecture of Diophantine geometry, was proved by Andrew Wiles and Richard Taylor.
Flat cohomology
Flat cohomology is, for the school of Grothendieck, one terminal point of development. It has the disadvantage of being quite hard to compute with. The reason that the flat topology has been considered the 'right' foundational topos for scheme theory goes back to the fact of faithfully-flat descent, the discovery of Grothendieck that the representable functors are sheaves for it (i.e. a very general gluing axiom holds).
Function field analogy
It was realised in the nineteenth century that the ring of integers of a number field has analogies with the affine coordinate ring of an algebraic curve or compact Riemann surface, with a point or more removed corresponding to the 'infinite places' of a number field. This idea is more precisely encoded in the theory that global fields should all be treated on the same basis. The idea goes further. Thus elliptic surfaces over the complex numbers, also, have some quite strict analogies with elliptic curves over number fields.
== G ==
Geometric class field theory
The extension of class field theory-style results on abelian coverings to varieties of dimension at least two is often called geometric class field theory.
Good reduction
Fundamental to local analysis in arithmetic problems is to reduce modulo all prime numbers p or, more generally, prime ideals. In the typical situation this presents little difficulty for almost all p; for example denominators of fractions are tricky, in that reduction modulo a prime in the denominator looks like division by zero, but that rules out only finitely many p per fraction. With a little extra sophistication, homogeneous coordinates allow clearing of denominators by multiplying by a common scalar. For a given, single point one can do this and not leave a common factor p. However singularity theory enters: a non-singular point may become a singular point on reduction modulo p, because the Zariski tangent space can become larger when linear terms reduce to 0 (the geometric formulation shows it is not the fault of a single set of coordinates). Good reduction refers to the reduced variety having the same properties as the original, for example, an algebraic curve having the same genus, or a smooth variety remaining smooth. In general there will be a finite set S of primes for a given variety V, assumed smooth, such that there is otherwise a smooth reduced Vp over Z/pZ. For abelian varieties, good reduction is connected with ramification in the field of division points by the Néron–Ogg–Shafarevich criterion. The theory is subtle, in the sense that the freedom to change variables to try to improve matters is rather unobvious: see Néron model, potential good reduction, Tate curve, semistable abelian variety, semistable elliptic curve, Serre–Tate theorem.
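For an elliptic curve in short Weierstrass form y^2 = x^3 + ax + b, and away from the primes 2 and 3 (and ignoring the possibility of improving the model by a change of variables, as discussed above), the finite set S can be read off the discriminant: the reduction mod p stays a smooth cubic exactly when p does not divide it. A hedged sketch (the helper name is ours):

```python
def bad_primes(a, b):
    """Primes dividing the discriminant of y^2 = x^3 + a*x + b; away from
    these (and modulo minimality of the model) the curve has good reduction."""
    disc = -16 * (4 * a**3 + 27 * b**2)
    n, primes, p = abs(disc), [], 2
    while p * p <= n:
        if n % p == 0:
            primes.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        primes.append(n)
    return primes

# y^2 = x^3 + x + 1 has discriminant -496 = -(2^4)(31):
print(bad_primes(1, 1))  # [2, 31]
```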
Grothendieck–Katz conjecture
The Grothendieck–Katz p-curvature conjecture applies reduction modulo primes to algebraic differential equations, to derive information on algebraic function solutions. The initial result of this type was Eisenstein's theorem.
== H ==
Hasse principle
The Hasse principle states that solubility for a global field is the same as solubility in all relevant local fields. One of the main objectives of Diophantine geometry is to classify cases where the Hasse principle holds. Generally that is for a large number of variables, when the degree of an equation is held fixed. The Hasse principle is often associated with the success of the Hardy–Littlewood circle method. When the circle method works, it can provide extra, quantitative information such as asymptotic number of solutions. Reducing the number of variables makes the circle method harder; therefore failures of the Hasse principle, for example for cubic forms in small numbers of variables (and in particular for elliptic curves as cubic curves) are at a general level connected with the limitations of the analytic approach.
Hasse–Weil L-function
A Hasse–Weil L-function, sometimes called a global L-function, is an Euler product formed from local zeta-functions. The properties of such L-functions remain largely in the realm of conjecture, with the proof of the Taniyama–Shimura conjecture being a breakthrough. The Langlands philosophy is largely complementary to the theory of global L-functions.
Height function
A height function in Diophantine geometry quantifies the size of solutions to Diophantine equations.
Hilbertian fields
A Hilbertian field K is one for which the projective spaces over K are not thin sets in the sense of Jean-Pierre Serre. This is a geometric take on Hilbert's irreducibility theorem which shows the rational numbers are Hilbertian. Results are applied to the inverse Galois problem. Thin sets (the French word is mince) are in some sense analogous to the meagre sets (French maigre) of the Baire category theorem.
== I ==
Igusa zeta-function
An Igusa zeta-function, named for Jun-ichi Igusa, is a generating function counting numbers of points on an algebraic variety modulo high powers p^n of a fixed prime number p. General rationality theorems are now known, drawing on methods of mathematical logic.
Infinite descent
Infinite descent was Pierre de Fermat's classical method for Diophantine equations. It became one half of the standard proof of the Mordell–Weil theorem, with the other being an argument with height functions (q.v.). Descent is something like division by two in a group of principal homogeneous spaces (often called 'descents', when written out by equations); in more modern terms in a Galois cohomology group which is to be proved finite. See Selmer group.
Iwasawa theory
Iwasawa theory builds up from the analytic number theory and Stickelberger's theorem as a theory of ideal class groups as Galois modules and p-adic L-functions (with roots in Kummer congruence on Bernoulli numbers). In its early days in the late 1960s it was called Iwasawa's analogue of the Jacobian. The analogy was with the Jacobian variety J of a curve C over a finite field F (qua Picard variety), where the finite field has roots of unity added to make finite field extensions F′. The local zeta-function (q.v.) of C can be recovered from the points J(F′) as Galois module. In the same way, Iwasawa added p^n-power roots of unity for fixed p and with n → ∞, for his analogue, to a number field K, and considered the inverse limit of class groups, finding a p-adic L-function earlier introduced by Kubota and Leopoldt.
== K ==
K-theory
Algebraic K-theory is on one hand a quite general theory with an abstract algebra flavour, and, on the other hand, implicated in some formulations of arithmetic conjectures. See for example Birch–Tate conjecture, Lichtenbaum conjecture.
== L ==
Lang conjecture
Enrico Bombieri (dimension 2), Serge Lang and Paul Vojta (integral points case) and Piotr Blass have conjectured that algebraic varieties of general type do not have Zariski dense subsets of K-rational points, for K a finitely-generated field. This circle of ideas includes the understanding of analytic hyperbolicity and the Lang conjectures on that, and the Vojta conjectures. An analytically hyperbolic algebraic variety V over the complex numbers is one such that no holomorphic mapping from the whole complex plane to it exists, that is not constant. Examples include compact Riemann surfaces of genus g > 1. Lang conjectured that V is analytically hyperbolic if and only if all subvarieties are of general type.
Linear torus
A linear torus is a geometrically irreducible Zariski-closed subgroup of an affine torus (product of multiplicative groups).
Local zeta-function
A local zeta-function is a generating function for the number of points on an algebraic variety V over a finite field F, over the finite field extensions of F. According to the Weil conjectures (q.v.) these functions, for non-singular varieties, exhibit properties closely analogous to the Riemann zeta-function, including the Riemann hypothesis.
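Concretely, the local zeta-function is Z(t) = exp(Σ_{m≥1} N_m t^m / m), where N_m counts points over F_{q^m}. A short sketch (the function names are ours) assembling this series from the counts, checked against the projective line, where N_m = q^m + 1 and Z(t) = 1/((1 − t)(1 − qt)):

```python
from fractions import Fraction

def zeta_series(point_counts, terms):
    """Truncated power series of Z(t) = exp(sum_m N_m t^m / m),
    given the counts N_m of points over F_{q^m} for m = 1, 2, ..."""
    # Coefficients of log Z(t); the t^m coefficient is N_m / m.
    logz = [Fraction(0)] * terms
    for m in range(1, terms):
        logz[m] = Fraction(point_counts[m - 1], m)
    # Exponentiate via Z' = (log Z)' * Z, solved coefficient by coefficient.
    z = [Fraction(0)] * terms
    z[0] = Fraction(1)
    for n in range(1, terms):
        z[n] = sum(k * logz[k] * z[n - k] for k in range(1, n + 1)) / n
    return z

# Projective line over F_5: the t^n coefficient of 1/((1-t)(1-5t))
# is 1 + 5 + ... + 5^n.
counts = [5**m + 1 for m in range(1, 6)]
print([int(c) for c in zeta_series(counts, 6)])  # [1, 6, 31, 156, 781, 3906]
```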
== M ==
Manin–Mumford conjecture
The Manin–Mumford conjecture, now proved by Michel Raynaud, states that a curve C in its Jacobian variety J can only contain a finite number of points that are of finite order in J, unless C = J.
Mordell conjecture
The Mordell conjecture is now the Faltings theorem, and states that a curve of genus at least two has only finitely many rational points. The Uniformity conjecture states that there should be a uniform bound on the number of such points, depending only on the genus and the field of definition.
Mordell–Lang conjecture
The Mordell–Lang conjecture, now proved by McQuillan following work of Laurent, Raynaud, Hindry, Vojta, and Faltings, is a conjecture of Lang unifying the Mordell conjecture and Manin–Mumford conjecture in an abelian variety or semiabelian variety.
Mordell–Weil theorem
The Mordell–Weil theorem is a foundational result stating that for an abelian variety A over a number field K the group A(K) is a finitely-generated abelian group. This was proved initially for number fields K, but extends to all finitely-generated fields.
Mordellic variety
A Mordellic variety is an algebraic variety which has only finitely many points in any finitely generated field.
== N ==
Naive height
The naive height or classical height of a vector of rational numbers is the maximum absolute value of the vector of coprime integers obtained by multiplying through by a lowest common denominator. This may be used to define height on a point in projective space over Q, or of a polynomial, regarded as a vector of coefficients, or of an algebraic number, from the height of its minimal polynomial.
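As a sketch of the definition just given (the helper name is ours): clear denominators, divide out any common factor, and take the maximum absolute value.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def naive_height(coords):
    """Naive (classical) height of a point of projective space over Q,
    given by a vector of rationals: multiply through by a common
    denominator to get coprime integers, then take max |coordinate|."""
    fracs = [Fraction(c) for c in coords]
    lcm = reduce(lambda a, b: a * b // gcd(a, b),
                 (f.denominator for f in fracs), 1)
    ints = [int(f * lcm) for f in fracs]
    g = reduce(gcd, (abs(n) for n in ints))
    return max(abs(n // g) for n in ints)

# (1/2 : 3/5 : 1) clears to (5 : 6 : 10), so the height is 10.
print(naive_height([Fraction(1, 2), Fraction(3, 5), 1]))  # 10
```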
Néron symbol
The Néron symbol is a bimultiplicative pairing between divisors and algebraic cycles on an Abelian variety used in Néron's formulation of the Néron–Tate height as a sum of local contributions. The global Néron symbol, which is the sum of the local symbols, is just the negative of the height pairing.
Néron–Tate height
The Néron–Tate height (also often referred to as the canonical height) on an abelian variety A is a height function (q.v.) that is essentially intrinsic, and an exact quadratic form, rather than approximately quadratic with respect to the addition on A as provided by the general theory of heights. It can be defined from a general height by a limiting process; there are also formulae, in the sense that it is a sum of local contributions.
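The limiting process mentioned above can be sketched numerically: repeatedly double a point and watch h(x(2^n P)) / 4^n settle down. The limit is the canonical height up to a normalization convention; the curve and point below are just a convenient non-torsion example, and the helper names are ours.

```python
from fractions import Fraction
from math import log

def double(P, a):
    """Duplication on y^2 = x^3 + a*x + b (b enters only through P)."""
    x, y = P
    lam = (3 * x * x + a) / (2 * y)
    x2 = lam * lam - 2 * x
    return (x2, lam * (x - x2) - y)

def h(x):
    """Logarithmic naive height of a rational number."""
    return log(max(abs(x.numerator), x.denominator))

# P = (3, 5) is a non-torsion point on y^2 = x^3 - 2 (a = 0, b = -2);
# the values h(x(2^n P)) / 4^n approach the canonical height of P
# (up to normalization), while the raw heights roughly quadruple.
P, approx = (Fraction(3), Fraction(5)), []
for n in range(5):
    approx.append(h(P[0]) / 4**n)
    P = double(P, 0)
print([round(v, 3) for v in approx])
```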
Nevanlinna invariant
The Nevanlinna invariant of an ample divisor D on a normal projective variety X is a real number which describes the rate of growth of the number of rational points on the variety with respect to the embedding defined by the divisor. It has similar formal properties to the abscissa of convergence of the height zeta function and it is conjectured that they are essentially the same.
== O ==
Ordinary reduction
An Abelian variety A of dimension d has ordinary reduction at a prime p if it has good reduction at p and in addition the p-torsion has rank d.
== Q ==
Quasi-algebraic closure
The topic of quasi-algebraic closure, i.e. solubility guaranteed by a number of variables polynomial in the degree of an equation, grew out of studies of the Brauer group and the Chevalley–Warning theorem. It stalled in the face of counterexamples; but see Ax–Kochen theorem from mathematical logic.
== R ==
Reduction modulo a prime number or ideal
See good reduction.
Replete ideal
A replete ideal in a number field K is a formal product of a fractional ideal of K and a vector of positive real numbers with components indexed by the infinite places of K. A replete divisor is an Arakelov divisor.
== S ==
Sato–Tate conjecture
The Sato–Tate conjecture describes the distribution of Frobenius elements in the Tate modules of the elliptic curves over finite fields obtained from reducing a given elliptic curve over the rationals. Mikio Sato and, independently, John Tate suggested it around 1960. It is a prototype for Galois representations in general.
Skolem's method
See Chabauty's method.
Special set
The special set in an algebraic variety is the subset in which one might expect to find many rational points. The precise definition varies according to context. One definition is the Zariski closure of the union of images of algebraic groups under non-trivial rational maps; alternatively one may take images of abelian varieties; another definition is the union of all subvarieties that are not of general type. For abelian varieties the definition would be the union of all translates of proper abelian subvarieties. For a complex variety, the holomorphic special set is the Zariski closure of the images of all non-constant holomorphic maps from C. Lang conjectured that the analytic and algebraic special sets are equal.
Subspace theorem
Schmidt's subspace theorem shows that points of small height in projective space lie in a finite number of hyperplanes. A quantitative form of the theorem, bounding the number of subspaces containing all the solutions, was also obtained by Schmidt, and the theorem was generalised by Schlickewei (1977) to allow more general absolute values on number fields. The theorem may be used to obtain results on Diophantine equations such as Siegel's theorem on integral points and solution of the S-unit equation.
== T ==
Tamagawa numbers
The direct Tamagawa number definition works well only for linear algebraic groups. There the Weil conjecture on Tamagawa numbers was eventually proved. For abelian varieties, and in particular the Birch–Swinnerton-Dyer conjecture (q.v.), the Tamagawa number approach to a local–global principle fails on a direct attempt, though it has had heuristic value over many years. Now a sophisticated equivariant Tamagawa number conjecture is a major research problem.
Tate conjecture
The Tate conjecture (John Tate, 1963) provided an analogue to the Hodge conjecture, also on algebraic cycles, but well within arithmetic geometry. It also gave, for elliptic surfaces, an analogue of the Birch–Swinnerton-Dyer conjecture (q.v.), leading quickly to a clarification of the latter and a recognition of its importance.
Tate curve
The Tate curve is a particular elliptic curve over the p-adic numbers introduced by John Tate to study bad reduction (see good reduction).
Tsen rank
The Tsen rank of a field, named for C. C. Tsen who introduced the concept in 1936, is the smallest natural number i, if it exists, such that the field is of class T_i: that is, such that any system of polynomials with no constant term, of degrees d_j, in n variables has a non-trivial zero whenever n > Σ d_j^i. Algebraically closed fields are of Tsen rank zero. The Tsen rank is greater than or equal to the Diophantine dimension, but it is not known if they are equal except in the case of rank zero.
== U ==
Uniformity conjecture
The uniformity conjecture states that for any number field K and g ≥ 2, there is a uniform bound B(g,K) on the number of K-rational points on any curve of genus g. The conjecture would follow from the Bombieri–Lang conjecture.
Unlikely intersection
An unlikely intersection is an algebraic subgroup intersecting a subvariety of a torus or abelian variety in a set of unusually large dimension, such as is involved in the Mordell–Lang conjecture.
== V ==
Vojta conjecture
The Vojta conjecture is a complex of conjectures by Paul Vojta, making analogies between Diophantine approximation and Nevanlinna theory.
== W ==
Weights
The yoga of weights is a formulation by Alexander Grothendieck of analogies between Hodge theory and l-adic cohomology.
Weil cohomology
The initial idea, later somewhat modified, for proving the Weil conjectures (q.v.), was to construct a cohomology theory applying to algebraic varieties over finite fields that would both be as good as singular homology at detecting topological structure, and have Frobenius mappings acting in such a way that the Lefschetz fixed-point theorem could be applied to the counting in local zeta-functions. For later history see motive (algebraic geometry), motivic cohomology.
Weil conjectures
The Weil conjectures were three highly influential conjectures of André Weil, made public around 1949, on local zeta-functions. The proof was completed in 1973. Those being proved, there remain extensions of the Chevalley–Warning theorem congruence, which comes from an elementary method, and improvements of Weil bounds, e.g. better estimates for curves of the number of points than come from Weil's basic theorem of 1940. The latter turn out to be of interest for Algebraic geometry codes.
Weil distributions on algebraic varieties
André Weil proposed a theory in the 1920s and 1930s on prime ideal decomposition of algebraic numbers in coordinates of points on algebraic varieties. It has remained somewhat under-developed.
Weil function
A Weil function on an algebraic variety is a real-valued function defined off some Cartier divisor which generalises the concept of Green's function in Arakelov theory. They are used in the construction of the local components of the Néron–Tate height.
Weil height machine
The Weil height machine is an effective procedure for assigning a height function to any divisor on a smooth projective variety over a number field (or to Cartier divisors on non-smooth varieties).
== See also ==
Glossary of number theory
Arithmetic topology
Arithmetic dynamics
== References ==
Bombieri, Enrico; Gubler, Walter (2006). Heights in Diophantine Geometry. New Mathematical Monographs. Vol. 4. Cambridge University Press. ISBN 978-0-521-71229-3. Zbl 1130.11034.
Hindry, Marc; Silverman, Joseph H. (2000). Diophantine Geometry: An Introduction. Graduate Texts in Mathematics. Vol. 201. ISBN 0-387-98981-1. Zbl 0948.11023.
Lang, Serge (1988). Introduction to Arakelov theory. New York: Springer-Verlag. ISBN 0-387-96793-1. MR 0969124. Zbl 0667.14001.
Lang, Serge (1997). Survey of Diophantine Geometry. Springer-Verlag. ISBN 3-540-61223-8. Zbl 0869.11051.
Neukirch, Jürgen (1999). Algebraic Number Theory. Grundlehren der Mathematischen Wissenschaften. Vol. 322. Springer-Verlag. ISBN 978-3-540-65399-8. Zbl 0956.11021.
== Further reading ==
Dino Lorenzini (1996), An invitation to arithmetic geometry, AMS Bookstore, ISBN 978-0-8218-0267-0
In mathematics, the André–Oort conjecture is a problem in Diophantine geometry, a branch of number theory, that can be seen as a non-abelian analogue of the Manin–Mumford conjecture, which is now a theorem (proven in several different ways).
The conjecture concerns itself with a characterization of the Zariski closure of sets of special points in Shimura varieties.
A special case of the conjecture was stated by Yves André in 1989 and a more general statement (albeit with a restriction on the type of the Shimura variety) was conjectured by Frans Oort in 1995. The modern version is a natural generalization of these two conjectures.
== Statement ==
The conjecture in its modern form is as follows. Each irreducible component of the Zariski closure of a set of special points in a Shimura variety is a special subvariety.
André's first version of the conjecture was just for one dimensional irreducible components, while Oort proposed that it should be true for irreducible components of arbitrary dimension in the moduli space of principally polarised Abelian varieties of dimension g.
It seems that André was motivated by applications to transcendence theory, while Oort was motivated by the analogy with the Manin–Mumford conjecture.
== Results ==
Various results have been established towards the full conjecture by Ben Moonen, Yves André, Andrei Yafaev, Bas Edixhoven, Laurent Clozel, Bruno Klingler and Emmanuel Ullmo, among others. Some of these results were conditional upon the generalized Riemann hypothesis (GRH) being true.
In fact, the proof of the full conjecture under GRH was published by Bruno Klingler, Emmanuel Ullmo and Andrei Yafaev in 2014 in the Annals of Mathematics.
In 2006, Umberto Zannier and Jonathan Pila used techniques from o-minimal geometry and transcendental number theory to develop an approach to the Manin-Mumford-André-Oort type of problems.
In 2009, Jonathan Pila proved the André-Oort conjecture unconditionally for arbitrary products of modular curves, a result which earned him the 2011 Clay Research Award.
Bruno Klingler, Emmanuel Ullmo and Andrei Yafaev proved, in 2014, the functional transcendence result needed for the general Pila-Zannier approach and Emmanuel Ullmo has deduced from it a technical result needed for the induction step in the strategy. The remaining technical ingredient was the problem of bounding below the Galois degrees of special points.
For the case of the Siegel modular variety, this bound was deduced by Jacob Tsimerman in 2015 from the averaged Colmez conjecture and the Masser–Wüstholz isogeny estimates. The averaged Colmez conjecture was proved by Xinyi Yuan and Shou-Wu Zhang and independently by Andreatta, Goren, Howard and Madapusi Pera.
In 2019–2020, Gal Binyamini, Harry Schmidt and Andrei Yafaev, building on previous work and ideas of Harry Schmidt on torsion points in tori and abelian varieties and on Gal Binyamini's point-counting results, formulated a conjecture on bounds for the heights of special points and deduced from its validity the bounds on the Galois degrees of special points needed for the proof of the full André–Oort conjecture.
In September 2021, Jonathan Pila, Ananth Shankar, and Jacob Tsimerman claimed in a paper (featuring an appendix written by Hélène Esnault and Michael Groechenig) a proof of the Binyamini–Schmidt–Yafaev height conjecture, thus completing the proof of the André–Oort conjecture using the Pila–Zannier strategy.
== Coleman–Oort conjecture ==
A related conjecture that has two forms, equivalent if the André–Oort conjecture is assumed, is the Coleman–Oort conjecture. Robert Coleman conjectured that for sufficiently large g, there are only finitely many smooth projective curves C of genus g, such that the Jacobian variety J(C) is an abelian variety of CM-type. Oort then conjectured that the Torelli locus – of the moduli space of abelian varieties of dimension g – has for sufficiently large g no special subvariety of dimension > 0 that intersects the image of the Torelli mapping in a dense open subset.
== Generalizations ==
The Manin–Mumford and André–Oort conjectures can be generalized in many directions, for example by relaxing the properties of the points being 'special' (and considering the so-called 'unlikely locus' instead), or by looking at more general ambient varieties: abelian or semi-abelian schemes, mixed Shimura varieties, and so on. These generalizations are colloquially known as the Zilber–Pink conjectures, because problems of this type were proposed by Richard Pink and Boris Zilber.
Most of these questions are open and are a subject of current active research.
== See also ==
Zilber–Pink conjecture
== Further reading ==
Zannier, Umberto (2012). "About the André–Oort conjecture". Some Problems of Unlikely Intersections in Arithmetic and Geometry. Princeton: Princeton University Press. pp. 96–127. ISBN 978-0-691-15370-4.
Yafaev, Andrei (2007), Burns, David; Buzzard, Kevin; Nekovar, Jan (eds.), "The André-Oort conjecture - a survey", L-functions and Galois Representations, Cambridge: Cambridge University Press, pp. 381–406, doi:10.1017/cbo9780511721267.011, ISBN 978-0-511-72126-7
== G ==
Geometric class field theory
The extension of class field theory-style results on abelian coverings to varieties of dimension at least two is often called geometric class field theory.
Good reduction
Fundamental to local analysis in arithmetic problems is to reduce modulo all prime numbers p or, more generally, prime ideals. In the typical situation this presents little difficulty for almost all p; for example denominators of fractions are tricky, in that reduction modulo a prime in the denominator looks like division by zero, but that rules out only finitely many p per fraction. With a little extra sophistication, homogeneous coordinates allow clearing of denominators by multiplying by a common scalar. For a given, single point one can do this and not leave a common factor p. However singularity theory enters: a non-singular point may become a singular point on reduction modulo p, because the Zariski tangent space can become larger when linear terms reduce to 0 (the geometric formulation shows it is not the fault of a single set of coordinates). Good reduction refers to the reduced variety having the same properties as the original, for example, an algebraic curve having the same genus, or a smooth variety remaining smooth. In general there will be a finite set S of primes for a given variety V, assumed smooth, such that there is otherwise a smooth reduced Vp over Z/pZ. For abelian varieties, good reduction is connected with ramification in the field of division points by the Néron–Ogg–Shafarevich criterion. The theory is subtle, in the sense that the freedom to change variables to try to improve matters is rather unobvious: see Néron model, potential good reduction, Tate curve, semistable abelian variety, semistable elliptic curve, Serre–Tate theorem.
Grothendieck–Katz conjecture
The Grothendieck–Katz p-curvature conjecture applies reduction modulo primes to algebraic differential equations, to derive information on algebraic function solutions. The initial result of this type was Eisenstein's theorem.
== H ==
Hasse principle
The Hasse principle states that solubility for a global field is the same as solubility in all relevant local fields. One of the main objectives of Diophantine geometry is to classify cases where the Hasse principle holds. Generally that is for a large number of variables, when the degree of an equation is held fixed. The Hasse principle is often associated with the success of the Hardy–Littlewood circle method. When the circle method works, it can provide extra, quantitative information such as asymptotic number of solutions. Reducing the number of variables makes the circle method harder; therefore failures of the Hasse principle, for example for cubic forms in small numbers of variables (and in particular for elliptic curves as cubic curves) are at a general level connected with the limitations of the analytic approach.
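The classical failure mentioned here can be probed by machine. Selmer's cubic 3x³ + 4y³ + 5z³ = 0 has non-trivial solutions in R and in every p-adic field Q_p, yet none in Q. The sketch below is a first, incomplete local check (full Q_p-solubility needs Hensel-lifting arguments at the bad primes): it brute-forces non-trivial zeros modulo small primes.

```python
# Selmer's cubic 3x^3 + 4y^3 + 5z^3 = 0: soluble in every Q_p and in R,
# but with no non-trivial rational solution -- a failure of the Hasse principle.
# As a first (incomplete) local check, search for non-trivial zeros mod p.

def nontrivial_zero_mod_p(p):
    """Return a non-trivial solution of 3x^3 + 4y^3 + 5z^3 = 0 mod p, or None."""
    for x in range(p):
        for y in range(p):
            for z in range(p):
                if (x, y, z) != (0, 0, 0) and (3*x**3 + 4*y**3 + 5*z**3) % p == 0:
                    return (x, y, z)
    return None

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]
solutions = {p: nontrivial_zero_mod_p(p) for p in primes}
```

A primitive Z_p-solution reduces to a non-trivial solution mod p, so Selmer's local solubility guarantees the search succeeds at every prime.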
Hasse–Weil L-function
A Hasse–Weil L-function, sometimes called a global L-function, is an Euler product formed from local zeta-functions. The properties of such L-functions remain largely in the realm of conjecture, with the proof of the Taniyama–Shimura conjecture being a breakthrough. The Langlands philosophy is largely complementary to the theory of global L-functions.
Height function
A height function in Diophantine geometry quantifies the size of solutions to Diophantine equations.
Hilbertian fields
A Hilbertian field K is one for which the projective spaces over K are not thin sets in the sense of Jean-Pierre Serre. This is a geometric take on Hilbert's irreducibility theorem which shows the rational numbers are Hilbertian. Results are applied to the inverse Galois problem. Thin sets (the French word is mince) are in some sense analogous to the meagre sets (French maigre) of the Baire category theorem.
== I ==
Igusa zeta-function
An Igusa zeta-function, named for Jun-ichi Igusa, is a generating function counting numbers of points on an algebraic variety modulo high powers p^n of a fixed prime number p. General rationality theorems are now known, drawing on methods of mathematical logic.
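As a toy illustration of the counts such a zeta-function packages, the sketch below tallies N_n, the number of solutions of x² ≡ 0 modulo p^n, for p = 3. The resulting sequence 3^⌊n/2⌋ already shows the regular growth that underlies the rationality theorems.

```python
def count_mod(f, p, n):
    """Number of residues x mod p**n with f(x) ≡ 0 mod p**n."""
    q = p**n
    return sum(1 for x in range(q) if f(x) % q == 0)

p = 3
counts = [count_mod(lambda x: x*x, p, n) for n in range(1, 7)]
# x^2 ≡ 0 mod 3^n forces x ≡ 0 mod 3^ceil(n/2), so N_n = 3^(n - ceil(n/2)) = 3^floor(n/2).
```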
Infinite descent
Infinite descent was Pierre de Fermat's classical method for Diophantine equations. It became one half of the standard proof of the Mordell–Weil theorem, with the other being an argument with height functions (q.v.). Descent is something like division by two in a group of principal homogeneous spaces (often called 'descents', when written out by equations); in more modern terms in a Galois cohomology group which is to be proved finite. See Selmer group.
Iwasawa theory
Iwasawa theory builds up from analytic number theory and Stickelberger's theorem as a theory of ideal class groups as Galois modules and p-adic L-functions (with roots in Kummer congruence on Bernoulli numbers). In its early days in the late 1960s it was called Iwasawa's analogue of the Jacobian. The analogy was with the Jacobian variety J of a curve C over a finite field F (qua Picard variety), where the finite field has roots of unity added to make finite field extensions F′. The local zeta-function (q.v.) of C can be recovered from the points J(F′) as Galois module. In the same way, Iwasawa added p^n-power roots of unity for fixed p and with n → ∞, for his analogue, to a number field K, and considered the inverse limit of class groups, finding a p-adic L-function earlier introduced by Kubota and Leopoldt.
== K ==
K-theory
Algebraic K-theory is on one hand a quite general theory with an abstract algebra flavour, and, on the other hand, implicated in some formulations of arithmetic conjectures. See for example Birch–Tate conjecture, Lichtenbaum conjecture.
== L ==
Lang conjecture
Enrico Bombieri (dimension 2), Serge Lang and Paul Vojta (integral points case) and Piotr Blass have conjectured that algebraic varieties of general type do not have Zariski dense subsets of K-rational points, for K a finitely-generated field. This circle of ideas includes the understanding of analytic hyperbolicity and the Lang conjectures on that, and the Vojta conjectures. An analytically hyperbolic algebraic variety V over the complex numbers is one such that no holomorphic mapping from the whole complex plane to it exists, that is not constant. Examples include compact Riemann surfaces of genus g > 1. Lang conjectured that V is analytically hyperbolic if and only if all subvarieties are of general type.
Linear torus
A linear torus is a geometrically irreducible Zariski-closed subgroup of an affine torus (product of multiplicative groups).
Local zeta-function
A local zeta-function is a generating function for the number of points on an algebraic variety V over a finite field F, over the finite field extensions of F. According to the Weil conjectures (q.v.) these functions, for non-singular varieties, exhibit properties closely analogous to the Riemann zeta-function, including the Riemann hypothesis.
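The first of these point counts can be made concrete. The following sketch (illustrative only) brute-forces the number of F_p-points on the affine conic x² + y² = 1 for several primes p; the full local zeta-function of a variety would collect the analogous counts over every finite extension of one fixed F_p.

```python
def affine_points_on_circle(p):
    """Count solutions of x^2 + y^2 = 1 over the prime field F_p."""
    return sum(1 for x in range(p) for y in range(p) if (x*x + y*y - 1) % p == 0)

# For odd p the count is p - (-1)^((p-1)//2):
# p - 1 points if p ≡ 1 (mod 4), p + 1 points if p ≡ 3 (mod 4).
counts = {p: affine_points_on_circle(p) for p in [3, 5, 7, 13]}
```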
== M ==
Manin–Mumford conjecture
The Manin–Mumford conjecture, now proved by Michel Raynaud, states that a curve C in its Jacobian variety J can only contain a finite number of points that are of finite order in J, unless C = J.
Mordell conjecture
The Mordell conjecture is now the Faltings theorem, and states that a curve of genus at least two has only finitely many rational points. The Uniformity conjecture states that there should be a uniform bound on the number of such points, depending only on the genus and the field of definition.
Mordell–Lang conjecture
The Mordell–Lang conjecture, now proved by McQuillan following work of Laurent, Raynaud, Hindry, Vojta, and Faltings, is a conjecture of Lang unifying the Mordell conjecture and Manin–Mumford conjecture in an abelian variety or semiabelian variety.
Mordell–Weil theorem
The Mordell–Weil theorem is a foundational result stating that for an abelian variety A over a number field K the group A(K) is a finitely-generated abelian group. This was proved initially for number fields K, but extends to all finitely-generated fields.
Mordellic variety
A Mordellic variety is an algebraic variety which has only finitely many points in any finitely generated field.
== N ==
Naive height
The naive height or classical height of a vector of rational numbers is the maximum absolute value of the vector of coprime integers obtained by multiplying through by a lowest common denominator. This may be used to define height on a point in projective space over Q, or of a polynomial, regarded as a vector of coefficients, or of an algebraic number, from the height of its minimal polynomial.
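This recipe is easy to make executable. The sketch below (using Python's standard-library fractions and math modules) computes the naive height of a point of projective space over Q by clearing denominators and common factors.

```python
from fractions import Fraction
from math import gcd, lcm

def naive_height(coords):
    """Naive (classical) height of a point in projective space over Q:
    clear denominators, strip the common factor, take the max absolute value."""
    fracs = [Fraction(c) for c in coords]
    d = lcm(*[f.denominator for f in fracs])   # lowest common denominator
    ints = [int(f * d) for f in fracs]         # integer coordinates
    g = gcd(*ints)                             # remove the common factor
    ints = [a // g for a in ints]
    return max(abs(a) for a in ints)
```

For instance, the point (1/2 : 3/4 : 1) scales to the coprime vector (2, 3, 4) and so has naive height 4.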
Néron symbol
The Néron symbol is a bimultiplicative pairing between divisors and algebraic cycles on an Abelian variety used in Néron's formulation of the Néron–Tate height as a sum of local contributions. The global Néron symbol, which is the sum of the local symbols, is just the negative of the height pairing.
Néron–Tate height
The Néron–Tate height (also often referred to as the canonical height) on an abelian variety A is a height function (q.v.) that is essentially intrinsic, and an exact quadratic form, rather than approximately quadratic with respect to the addition on A as provided by the general theory of heights. It can be defined from a general height by a limiting process; there are also formulae, in the sense that it is a sum of local contributions.
Nevanlinna invariant
The Nevanlinna invariant of an ample divisor D on a normal projective variety X is a real number which describes the rate of growth of the number of rational points on the variety with respect to the embedding defined by the divisor. It has similar formal properties to the abscissa of convergence of the height zeta function and it is conjectured that they are essentially the same.
== O ==
Ordinary reduction
An Abelian variety A of dimension d has ordinary reduction at a prime p if it has good reduction at p and in addition the p-torsion has rank d.
== Q ==
Quasi-algebraic closure
The topic of quasi-algebraic closure, i.e. solubility guaranteed by a number of variables polynomial in the degree of an equation, grew out of studies of the Brauer group and the Chevalley–Warning theorem. It stalled in the face of counterexamples; but see Ax–Kochen theorem from mathematical logic.
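The Chevalley–Warning theorem mentioned here makes finite fields quasi-algebraically closed (class C1): a form of degree d in more than d variables over a finite field has a non-trivial zero. The brute-force sketch below exhibits such zeros for the quadratic form x² + y² + z² (3 variables > degree 2) over several prime fields.

```python
def nontrivial_zero(p):
    """Find a non-trivial zero of x^2 + y^2 + z^2 over F_p.
    Chevalley-Warning guarantees one exists: 3 variables > degree 2."""
    for x in range(p):
        for y in range(p):
            for z in range(p):
                if (x, y, z) != (0, 0, 0) and (x*x + y*y + z*z) % p == 0:
                    return (x, y, z)
    return None  # never reached for a prime p, by Chevalley-Warning

zeros = {p: nontrivial_zero(p) for p in [2, 3, 5, 7, 11, 13]}
```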
== R ==
Reduction modulo a prime number or ideal
See good reduction.
Replete ideal
A replete ideal in a number field K is a formal product of a fractional ideal of K and a vector of positive real numbers with components indexed by the infinite places of K. A replete divisor is an Arakelov divisor.
== S ==
Sato–Tate conjecture
The Sato–Tate conjecture describes the distribution of Frobenius elements in the Tate modules of the elliptic curves over finite fields obtained from reducing a given elliptic curve over the rationals. Mikio Sato and, independently, John Tate suggested it around 1960. It is a prototype for Galois representations in general.
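The Frobenius data in question can be sampled by brute force. In the sketch below, the curve y² = x³ + x + 1 is an arbitrary illustrative choice (assumed non-CM); it computes the trace a_p = p − #{affine points over F_p}. Hasse's bound |a_p| ≤ 2√p holds at every prime of good reduction, and Sato–Tate concerns the limiting distribution of a_p / (2√p) in [−1, 1] as p varies.

```python
def a_p(p, a=1, b=1):
    """Trace of Frobenius a_p = p - (affine point count) for y^2 = x^3 + a*x + b over F_p."""
    n = sum(1 for x in range(p) for y in range(p)
            if (y*y - (x**3 + a*x + b)) % p == 0)
    return p - n

# Primes of good reduction for y^2 = x^3 + x + 1 (discriminant -16*31).
primes = [5, 7, 11, 13, 17, 19, 23, 29]
traces = {p: a_p(p) for p in primes}
# Hasse's bound: a_p**2 <= 4*p; Sato-Tate describes how a_p/(2*sqrt(p))
# distributes over [-1, 1] as p runs over the primes.
```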
Skolem's method
See Chabauty's method.
Special set
The special set in an algebraic variety is the subset in which one might expect to find many rational points. The precise definition varies according to context. One definition is the Zariski closure of the union of images of algebraic groups under non-trivial rational maps; alternatively one may take images of abelian varieties; another definition is the union of all subvarieties that are not of general type. For abelian varieties the definition would be the union of all translates of proper abelian subvarieties. For a complex variety, the holomorphic special set is the Zariski closure of the images of all non-constant holomorphic maps from C. Lang conjectured that the analytic and algebraic special sets are equal.
Subspace theorem
Schmidt's subspace theorem shows that points of small height in projective space lie in a finite number of hyperplanes. A quantitative form of the theorem, bounding the number of subspaces containing all solutions, was also obtained by Schmidt, and the theorem was generalised by Schlickewei (1977) to allow more general absolute values on number fields. The theorem may be used to obtain results on Diophantine equations such as Siegel's theorem on integral points and solution of the S-unit equation.
== T ==
Tamagawa numbers
The direct Tamagawa number definition works well only for linear algebraic groups. There the Weil conjecture on Tamagawa numbers was eventually proved. For abelian varieties, and in particular the Birch–Swinnerton-Dyer conjecture (q.v.), the Tamagawa number approach to a local–global principle fails on a direct attempt, though it has had heuristic value over many years. Now a sophisticated equivariant Tamagawa number conjecture is a major research problem.
Tate conjecture
The Tate conjecture (John Tate, 1963) provided an analogue to the Hodge conjecture, also on algebraic cycles, but well within arithmetic geometry. It also gave, for elliptic surfaces, an analogue of the Birch–Swinnerton-Dyer conjecture (q.v.), leading quickly to a clarification of the latter and a recognition of its importance.
Tate curve
The Tate curve is a particular elliptic curve over the p-adic numbers introduced by John Tate to study bad reduction (see good reduction).
Tsen rank
The Tsen rank of a field, named for C. C. Tsen who introduced its study in 1936, is the smallest natural number i, if it exists, such that the field is of class Ti: that is, such that any system of polynomials with no constant term, of degrees dj in n variables, has a non-trivial zero whenever n > Σ dj^i. Algebraically closed fields are of Tsen rank zero. The Tsen rank is greater than or equal to the Diophantine dimension, but it is not known whether they are equal except in the case of rank zero.
== U ==
Uniformity conjecture
The uniformity conjecture states that for any number field K and g > 2, there is a uniform bound B(g,K) on the number of K-rational points on any curve of genus g. The conjecture would follow from the Bombieri–Lang conjecture.
Unlikely intersection
An unlikely intersection is an algebraic subgroup intersecting a subvariety of a torus or abelian variety in a set of unusually large dimension, such as is involved in the Mordell–Lang conjecture.
== V ==
Vojta conjecture
The Vojta conjecture is a complex of conjectures by Paul Vojta, making analogies between Diophantine approximation and Nevanlinna theory.
== W ==
Weights
The yoga of weights is a formulation by Alexander Grothendieck of analogies between Hodge theory and l-adic cohomology.
Weil cohomology
The initial idea, later somewhat modified, for proving the Weil conjectures (q.v.), was to construct a cohomology theory applying to algebraic varieties over finite fields that would both be as good as singular homology at detecting topological structure, and have Frobenius mappings acting in such a way that the Lefschetz fixed-point theorem could be applied to the counting in local zeta-functions. For later history see motive (algebraic geometry), motivic cohomology.
Weil conjectures
The Weil conjectures were three highly influential conjectures of André Weil, made public around 1949, on local zeta-functions. The proof was completed in 1973. Those being proved, there remain extensions of the Chevalley–Warning theorem congruence, which comes from an elementary method, and improvements of Weil bounds, e.g. better estimates for curves of the number of points than come from Weil's basic theorem of 1940. The latter turn out to be of interest for Algebraic geometry codes.
Weil distributions on algebraic varieties
André Weil proposed a theory in the 1920s and 1930s on prime ideal decomposition of algebraic numbers in coordinates of points on algebraic varieties. It has remained somewhat under-developed.
Weil function
A Weil function on an algebraic variety is a real-valued function defined off some Cartier divisor which generalises the concept of Green's function in Arakelov theory. They are used in the construction of the local components of the Néron–Tate height.
Weil height machine
The Weil height machine is an effective procedure for assigning a height function to any divisor on smooth projective variety over a number field (or to Cartier divisors on non-smooth varieties).
== See also ==
Glossary of number theory
Arithmetic topology
Arithmetic dynamics
== References ==
Bombieri, Enrico; Gubler, Walter (2006). Heights in Diophantine Geometry. New Mathematical Monographs. Vol. 4. Cambridge University Press. ISBN 978-0-521-71229-3. Zbl 1130.11034.
Hindry, Marc; Silverman, Joseph H. (2000). Diophantine Geometry: An Introduction. Graduate Texts in Mathematics. Vol. 201. ISBN 0-387-98981-1. Zbl 0948.11023.
Lang, Serge (1988). Introduction to Arakelov theory. New York: Springer-Verlag. ISBN 0-387-96793-1. MR 0969124. Zbl 0667.14001.
Lang, Serge (1997). Survey of Diophantine Geometry. Springer-Verlag. ISBN 3-540-61223-8. Zbl 0869.11051.
Neukirch, Jürgen (1999). Algebraic Number Theory. Grundlehren der Mathematischen Wissenschaften. Vol. 322. Springer-Verlag. ISBN 978-3-540-65399-8. Zbl 0956.11021.
== Further reading ==
Dino Lorenzini (1996), An invitation to arithmetic geometry, AMS Bookstore, ISBN 978-0-8218-0267-0
In model theory—a branch of mathematical logic—a minimal structure is an infinite one-sorted structure such that every subset of its domain that is definable with parameters is either finite or cofinite. A strongly minimal theory is a complete theory all models of which are minimal. A strongly minimal structure is a structure whose theory is strongly minimal.
Thus a structure is minimal only if the parametrically definable subsets of its domain cannot be avoided, because they are already parametrically definable in the pure language of equality.
Strong minimality was one of the early notions in the new field of classification theory and stability theory that was opened up by Morley's theorem on totally categorical structures.
The nontrivial standard examples of strongly minimal theories are the one-sorted theories of infinite-dimensional vector spaces, and the theories ACFp of algebraically closed fields of characteristic p. As the example ACFp shows, the parametrically definable subsets of the square of the domain of a minimal structure can be relatively complicated ("curves").
More generally, a subset of a structure that is defined as the set of realizations of a formula φ(x) is called a minimal set if every parametrically definable subset of it is either finite or cofinite. It is called a strongly minimal set if this is true even in all elementary extensions.
A strongly minimal set, equipped with the closure operator given by algebraic closure in the model-theoretic sense, is an infinite matroid, or pregeometry. A model of a strongly minimal theory is determined up to isomorphism by its dimension as a matroid. Totally categorical theories are controlled by a strongly minimal set; this fact explains (and is used in the proof of) Morley's theorem. Boris Zilber conjectured that the only pregeometries that can arise from strongly minimal sets are those that arise in vector spaces, projective spaces, or algebraically closed fields. This conjecture was refuted by Ehud Hrushovski, who developed a method known as "Hrushovski construction" to build new strongly minimal structures from finite structures.
== See also ==
C-minimal theory
o-minimal theory
== References ==
Baldwin, John T.; Lachlan, Alistair H. (1971), "On Strongly Minimal Sets", The Journal of Symbolic Logic, 36 (1): 79–96, doi:10.2307/2271517, JSTOR 2271517
Hrushovski, Ehud (1993), "A new strongly minimal set", Annals of Pure and Applied Logic, 62 (2): 147, doi:10.1016/0168-0072(93)90171-9
In model theory, a branch of mathematical logic, a complete theory T is said to satisfy NIP ("not the independence property") if none of its formulae satisfy the independence property—that is, if none of its formulae can pick out any given subset of an arbitrarily large finite set.
== Definition ==
Let T be a complete L-theory. An L-formula φ(x,y) is said to have the independence property (with respect to x, y) if in every model M of T there is, for each n < ω (identified with the set {0,1,...,n − 1}), a family of tuples b0,...,bn−1 such that for each of the 2^n subsets X of n there is a tuple a in M for which
M ⊨ φ(a, bi) ⇔ i ∈ X.
The theory T is said to have the independence property if some formula has the independence property. If no L-formula has the independence property then T is called dependent, or said to satisfy NIP. An L-structure is said to have the independence property (respectively, NIP) if its theory has the independence property (respectively, NIP). The terminology comes from the notion of independence in the sense of boolean algebras.
In the nomenclature of Vapnik–Chervonenkis theory, we may say that a collection S of subsets of X shatters a set B ⊆ X if every subset of B is of the form B ∩ S for some S ∈ S. Then T has the independence property if in some model M of T there is a definable family (Sa | a ∈ M^n) of subsets of M^k that shatters arbitrarily large finite subsets of M^k. In other words, (Sa | a ∈ M^n) has infinite Vapnik–Chervonenkis dimension.
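Shattering can be checked by brute force on finite approximations. The sketch below tests the family of half-lines {x : x ≤ a}, definable in any linear order: it shatters singletons but no 2-element set (VC dimension 1), consistent with linear orders satisfying NIP.

```python
from itertools import chain, combinations

def shatters(family, B):
    """Does the set family shatter B? Every subset of B must be B ∩ S for some S."""
    traces = {frozenset(B) & frozenset(S) for S in family}
    subsets = {frozenset(c) for c in chain.from_iterable(
        combinations(B, r) for r in range(len(B) + 1))}
    return subsets <= traces

# Half-lines S_a = {x : x <= a} on the points 0..9, i.e. the formula "x <= a".
points = range(10)
half_lines = [{x for x in points if x <= a} for a in points]

s1 = shatters(half_lines, [5])     # singletons are shattered (VC dimension >= 1)
s2 = shatters(half_lines, [3, 7])  # but {7} is never a trace on {3, 7}, so False
```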
== Examples ==
Any complete theory T that has the independence property is unstable.
In arithmetic, i.e. the structure (N,+,·), the formula "y divides x" has the independence property. This formula is just
(∃k)(y·k = x).
So, for any finite n we take the n 1-tuples bi to be the first n prime numbers, and then for any subset X of {0,1,...,n − 1} we let a be the product of those bi such that i is in X. Then bi divides a if and only if i ∈ X.
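This construction can be verified mechanically. The sketch below takes the first n primes as the bi and checks that, for every subset X of {0,...,n − 1}, the divisibility pattern of the product a recovers X exactly.

```python
from itertools import combinations
from math import prod

def first_primes(n):
    """The first n prime numbers, by trial division."""
    primes, k = [], 2
    while len(primes) < n:
        if all(k % p for p in primes):
            primes.append(k)
        k += 1
    return primes

n = 4
b = first_primes(n)                      # b_0..b_3 = 2, 3, 5, 7
ok = True
for r in range(n + 1):
    for X in combinations(range(n), r):
        a = prod(b[i] for i in X)        # empty product = 1
        recovered = {i for i in range(n) if a % b[i] == 0}
        ok = ok and (recovered == set(X))
```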
Every o-minimal theory satisfies NIP. This fact has had unexpected applications to neural network learning.
Examples of NIP theories include also the theories of all the following structures:
linear orders, trees, abelian linearly ordered groups, algebraically closed valued fields, and the p-adic field for any p.
== Notes ==
== References ==
Anthony, Martin; Bartlett, Peter L. (1999). Neural network learning: theoretical foundations. Cambridge University Press. ISBN 978-0-521-57353-5.
Hodges, Wilfrid (1993). Model theory. Cambridge University Press. ISBN 978-0-521-30442-9.
Knight, Julia; Pillay, Anand; Steinhorn, Charles (1986). "Definable sets in ordered structures II". Transactions of the American Mathematical Society. 295 (2): 593–605. doi:10.2307/2000053. JSTOR 2000053.
Pillay, Anand; Steinhorn, Charles (1986). "Definable sets in ordered structures I". Transactions of the American Mathematical Society. 295 (2): 565–592. doi:10.2307/2000052. JSTOR 2000052.
Poizat, Bruno (2000). A Course in Model Theory. Springer. ISBN 978-0-387-98655-5.
Simon, Pierre (2015). A Guide to NIP Theories. Cambridge University Press. ISBN 9781107057753.
Model-theoretic grammars, also known as constraint-based grammars, contrast with generative grammars in the way they define sets of sentences: they state constraints on syntactic structure rather than providing operations for generating syntactic objects. A generative grammar provides a set of operations such as rewriting, insertion, deletion, movement, or combination, and is interpreted as a definition of the set of all and only the objects that these operations are capable of producing through iterative application. A model-theoretic grammar simply states a set of conditions that an object must meet, and can be regarded as defining the set of all and only the structures of a certain sort that satisfy all of the constraints. The approach applies the mathematical techniques of model theory to the task of syntactic description: a grammar is a theory in the logician's sense (a consistent set of statements) and the well-formed structures are the models that satisfy the theory.
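The contrast can be caricatured on a toy formal language. The sketch below defines balanced bracket strings twice (this illustrates the two styles of definition, not a linguistic grammar): once generatively, by iterating rewrite rules, and once model-theoretically, by stating constraints every string must satisfy.

```python
from itertools import product

def generate(depth):
    """Generative style: iterate the rewrite rules S -> "", S -> (S), S -> SS."""
    lang = {""}
    for _ in range(depth):
        lang |= {"(" + w + ")" for w in lang} | {u + v for u in lang for v in lang}
    return lang

def satisfies_constraints(w):
    """Model-theoretic style: state conditions a well-formed string must meet."""
    depths = [w[:i].count("(") - w[:i].count(")") for i in range(len(w) + 1)]
    return all(d >= 0 for d in depths) and depths[-1] == 0 and set(w) <= {"(", ")"}

# The two definitions agree on all strings of length up to 4.
short = [''.join(t) for n in range(5) for t in product("()", repeat=n)]
```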
== History ==
David E. Johnson and Paul M. Postal introduced the idea of model-theoretic syntax in their 1980 book Arc Pair Grammar.
== Examples of model-theoretic grammars ==
The following is a sample of grammars falling under the model-theoretic umbrella:
the non-procedural variant of Transformational grammar (TG) of George Lakoff, that formulates constraints on potential tree sequences
Johnson and Postal's formalization of Relational grammar (RG) (1980), Generalized phrase structure grammar (GPSG) in the variants developed by Gazdar et al. (1988), Blackburn et al. (1993) and Rogers (1997)
Lexical functional grammar (LFG) in the formalization of Ronald Kaplan (1995)
Head-driven phrase structure grammar (HPSG) in the formalization of King (1999)
Constraint Handling Rules (CHR) grammars
The implicit model underlying The Cambridge Grammar of the English Language
== Strengths ==
One benefit of model-theoretic grammars over generative grammars is that they allow for gradience in grammaticality. A structure may deviate only slightly from a theory or it may be highly deviant. Generative grammars, in contrast, "entail a sharp boundary between the perfect and the nonexistent, and do not even permit gradience in ungrammaticality to be represented."
== References ==
This page is about the concept in mathematical logic. For the concepts in sociology, see Institutional theory and Institutional logic.
In mathematical logic, institutional model theory generalizes a large portion of first-order model theory to an arbitrary logical system.
== Overview ==
The notion of "logical system" here is formalized as an institution. Institutions constitute a model-oriented meta-theory on logical systems, similar to how the theory of rings and modules constitutes a meta-theory for classical linear algebra. Another analogy can be made with universal algebra versus groups, rings, modules, etc. By abstracting away from the particulars of conventional logics, institution theory in fact comes closer to the realities of non-conventional logics.
Institutional model theory analyzes and generalizes classical model-theoretic notions and results, like
elementary diagrams
elementary embeddings
ultraproducts, Łoś's theorem
saturated models
axiomatizability
varieties, Birkhoff axiomatizability
Craig interpolation
Robinson consistency
Beth definability
Gödel's completeness theorem
For each concept and theorem, the infrastructure and properties required are analyzed and formulated as conditions on institutions, thus providing a detailed insight to which properties of first-order logic they rely on and how much they can be generalized to other logics.
== References ==
Răzvan Diaconescu: Institution-Independent Model Theory. Birkhäuser, 2008. ISBN 978-3-7643-8707-5.
Răzvan Diaconescu: Jewels of Institution-Independent Model Theory. In: K. Futatsugi, J.-P. Jouannaud, J. Meseguer (eds.): Algebra, Meaning and Computation. Essays Dedicated to Joseph A. Goguen on the Occasion of His 65th Birthday. Lecture Notes in Computer Science 4060, p. 65-98, Springer-Verlag, 2006.
Marius Petria and Rãzvan Diaconescu: Abstract Beth definability in institutions. Journal of Symbolic Logic 71(3), p. 1002-1028, 2006.
Daniel Gǎinǎ and Andrei Popescu: An institution-independent generalisation of Tarski's elementary chain theorem, Journal of Logic and Computation 16(6), p. 713-735, 2006.
Till Mossakowski, Joseph Goguen, Rãzvan Diaconescu, Andrzej Tarlecki: What is a Logic?. In Jean-Yves Beziau, editor, Logica Universalis, pages 113-133. Birkhauser, 2005.
Andrzej Tarlecki: Quasi-varieties in abstract algebraic institutions. Journal of Computer and System Sciences 33(3), p. 333-360, 1986.
== External links ==
Răzvan Diaconescu's publication list - contains recent work on institutional model theory
In mathematical logic, an omega-categorical theory is a theory that has exactly one countably infinite model up to isomorphism. Omega-categoricity is the special case κ = ℵ0 = ω of κ-categoricity, and omega-categorical theories are also referred to as ω-categorical. The notion is most important for countable first-order theories.
== Equivalent conditions for omega-categoricity ==
Many conditions on a theory are equivalent to the property of omega-categoricity. In 1959 Erwin Engeler, Czesław Ryll-Nardzewski and Lars Svenonius, proved several independently. Despite this, the literature still widely refers to the Ryll-Nardzewski theorem as a name for these conditions. The conditions included with the theorem vary between authors.
Given a countable complete first-order theory T with infinite models, the following are equivalent:
The theory T is omega-categorical.
Every countable model of T has an oligomorphic automorphism group (that is, there are finitely many orbits on M^n for every n).
Some countable model of T has an oligomorphic automorphism group.
The theory T has a model which, for every natural number n, realizes only finitely many n-types, that is, the Stone space Sn(T) is finite.
For every natural number n, T has only finitely many n-types.
For every natural number n, every n-type is isolated.
For every natural number n, up to equivalence modulo T there are only finitely many formulas with n free variables, in other words, for every n, the nth Lindenbaum–Tarski algebra of T is finite.
Every model of T is atomic.
Every countable model of T is atomic.
The theory T has a countable atomic and saturated model.
The theory T has a saturated prime model.
== Examples ==
The theory of any countably infinite structure which is homogeneous over a finite relational language is omega-categorical. More generally, the theory of the Fraïssé limit of any uniformly locally finite Fraïssé class is omega-categorical. Hence, the following theories are omega-categorical:
The theory of dense linear orders without endpoints (Cantor's isomorphism theorem)
The theory of the Rado graph
The theory of infinite linear spaces over any finite field
The theory of atomless Boolean algebras
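As an informal illustration of why the Rado graph's theory is omega-categorical, recall that it is axiomatized by extension axioms: for disjoint finite vertex sets U and V there is a vertex adjacent to everything in U and nothing in V. A large random graph G(n, 1/2) satisfies the small extension axioms with high probability. The following hedged Python sketch (the graph size, seed, and sample sets are choices made here for illustration) checks a few of them:

```python
import random
from itertools import combinations

def random_graph(n, seed=0):
    """Sample G(n, 1/2) as a set of frozenset edges."""
    rng = random.Random(seed)
    return {frozenset(e) for e in combinations(range(n), 2) if rng.random() < 0.5}

def has_extension(adj, n, U, V):
    """Is there a vertex adjacent to all of U and to none of V?"""
    return any(
        all(frozenset((z, u)) in adj for u in U)
        and all(frozenset((z, v)) not in adj for v in V)
        for z in range(n) if z not in U | V
    )

n = 400
adj = random_graph(n)
# Check a few extension axioms with |U| + |V| <= 4 on disjoint sample sets.
ok = all(
    has_extension(adj, n, U, V)
    for U in [set(), {0}, {0, 1}]
    for V in [set(), {2}, {2, 3}]
)
print(ok)
```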
== Notes ==
== References ==
Cameron, Peter J. (1990), Oligomorphic permutation groups, London Mathematical Society Lecture Note Series, vol. 152, Cambridge: Cambridge University Press, ISBN 0-521-38836-8, Zbl 0813.20002
Chang, Chen Chung; Keisler, H. Jerome (1989) [1973], Model Theory, Elsevier, ISBN 978-0-7204-0692-4
Hodges, Wilfrid (1993), Model theory, Cambridge: Cambridge University Press, ISBN 978-0-521-30442-9
Hodges, Wilfrid (1997), A shorter model theory, Cambridge: Cambridge University Press, ISBN 978-0-521-58713-6
Macpherson, Dugald (2011), "A survey of homogeneous structures", Discrete Mathematics, 311 (15): 1599–1634, doi:10.1016/j.disc.2011.01.024, MR 2800979
Poizat, Bruno (2000), A Course in Model Theory: An Introduction to Contemporary Mathematical Logic, Berlin, New York: Springer-Verlag, ISBN 978-0-387-98655-5
Rothmaler, Philipp (2000), Introduction to Model Theory, New York: Taylor & Francis, ISBN 978-90-5699-313-9 | Wikipedia/Omega-categorical_theory |
In model theory, a first-order theory is called model complete if every embedding of its models is an elementary embedding.
Equivalently, every first-order formula is equivalent to a universal formula.
This notion was introduced by Abraham Robinson.
== Model companion and model completion ==
A companion of a theory T is a theory T* such that every model of T can be embedded in a model of T* and vice versa.
A model companion of a theory T is a companion of T that is model complete. Robinson proved that a theory has at most one model companion. Not every theory is model-companionable, e.g. the theory of groups. However, if T is an {\displaystyle \aleph _{0}} -categorical theory, then it always has a model companion.
A model completion for a theory T is a model companion T* such that for any model M of T, the theory of T* together with the diagram of M is complete. Roughly speaking, this means every model of T is embeddable in a model of T* in a unique way.
If T* is a model companion of T then the following conditions are equivalent:
T* is a model completion of T
T has the amalgamation property.
If T also has universal axiomatization, both of the above are also equivalent to:
T* has elimination of quantifiers
== Examples ==
Any theory with elimination of quantifiers is model complete.
The theory of algebraically closed fields is the model completion of the theory of fields. It is model complete but not complete.
The model completion of the theory of equivalence relations is the theory of equivalence relations with infinitely many equivalence classes, each containing an infinite number of elements.
The theory of real closed fields, in the language of ordered rings, is a model completion of the theory of ordered fields (or even ordered domains).
The theory of real closed fields, in the language of rings, is the model companion for the theory of formally real fields, but is not a model completion.
== Non-examples ==
The theory of dense linear orders with a first and last element is complete but not model complete.
The theory of groups (in a language with symbols for the identity, product, and inverses) has the amalgamation property but does not have a model companion.
== Sufficient condition for completeness of model-complete theories ==
If T is a model complete theory and there is a model of T that embeds into any model of T, then T is complete.
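The argument is short; a sketch in the article's notation (with M0 the model that embeds into every model of T):

```latex
% Standard argument: let M_0 be a model of T that embeds into every model.
% Model completeness upgrades each embedding to an elementary one.
\begin{align*}
  &M_0 \hookrightarrow M \text{ and } T \text{ model complete}
   \;\Longrightarrow\; M_0 \preceq M
   \;\Longrightarrow\; M \equiv M_0,\\
  &\text{so any two models } M, N \text{ satisfy } M \equiv M_0 \equiv N,
   \text{ i.e. } T \text{ is complete.}
\end{align*}
```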
== Notes ==
== References ==
Chang, Chen Chung; Keisler, H. Jerome (1990) [1973]. Model Theory. Studies in Logic and the Foundations of Mathematics (3rd ed.). Elsevier. ISBN 978-0-444-88054-3.
Chang, Chen Chung; Keisler, H. Jerome (2012) [1990]. Model Theory. Dover Books on Mathematics (3rd ed.). Dover Publications. p. 672. ISBN 978-0-486-48821-9.
Hirschfeld, Joram; Wheeler, William H. (1975). "Model-completions and model-companions". Forcing, Arithmetic, Division Rings. Lecture Notes in Mathematics. Vol. 454. Springer. pp. 44–54. doi:10.1007/BFb0064085. ISBN 978-3-540-07157-0. MR 0389581.
Marker, David (2002). Model Theory: An Introduction. Graduate Texts in Mathematics 217. New York: Springer-Verlag. ISBN 0-387-98760-6.
Saracino, D. (August 1973). "Model Companions for ℵ0-Categorical Theories". Proceedings of the American Mathematical Society. 39 (3): 591–598.
Simmons, H. (1976). "Large and Small Existentially Closed Structures". Journal of Symbolic Logic. 41 (2): 379–390. | Wikipedia/Model_completion |
Computable model theory is a branch of model theory which deals with questions of computability as they apply to model-theoretical structures.
Computable model theory introduces the ideas of computable and decidable models and theories, and one of its basic problems is discovering whether or not computable or decidable models fulfilling certain model-theoretic conditions can be shown to exist.
Computable model theory was developed almost simultaneously by mathematicians in the West, primarily located in the United States and Australia, and Soviet Russia during the middle of the 20th century. Because of the Cold War there was little communication between these two groups and so a number of important results were discovered independently.
== See also ==
Vaught conjecture
== References ==
Harizanov, V. S. (1998), "Pure Computable Model Theory", in Ershov, Iurii Leonidovich (ed.), Handbook of Recursive Mathematics, Volume 1: Recursive Model Theory, Studies in Logic and the Foundations of Mathematics, vol. 138, North Holland, pp. 3–114, ISBN 978-0-444-50003-8, MR 1673621. | Wikipedia/Computable_model_theory |
In mathematics, a basic semialgebraic set is a set defined by polynomial equalities and polynomial inequalities, and a semialgebraic set is a finite union of basic semialgebraic sets. A semialgebraic function is a function with a semialgebraic graph. Such sets and functions are mainly studied in real algebraic geometry which is the appropriate framework for algebraic geometry over the real numbers.
== Definition ==
Let {\displaystyle \mathbb {F} } be a real closed field (for example, {\displaystyle \mathbb {F} } could be the field of real numbers {\displaystyle \mathbb {R} }).
A subset {\displaystyle S} of {\displaystyle \mathbb {F} ^{n}} is a semialgebraic set if it is a finite union of sets defined by polynomial equalities of the form
{\displaystyle \{(x_{1},...,x_{n})\in \mathbb {F} ^{n}\mid P(x_{1},...,x_{n})=0\}}
and of sets defined by polynomial inequalities of the form
{\displaystyle \{(x_{1},...,x_{n})\in \mathbb {F} ^{n}\mid P(x_{1},...,x_{n})>0\}.}
== Properties ==
Similarly to algebraic subvarieties, finite unions and intersections of semialgebraic sets are still semialgebraic sets. Furthermore, unlike subvarieties, the complement of a semialgebraic set is again semialgebraic. Finally, and most importantly, the Tarski–Seidenberg theorem says that they are also closed under the projection operation: in other words a semialgebraic set projected onto a linear subspace yields another semialgebraic set (as is the case for quantifier elimination). These properties together mean that semialgebraic sets form an o-minimal structure on R.
A semialgebraic set (or function) is said to be defined over a subring A of R if there is some description, as in the definition, where the polynomials can be chosen to have coefficients in A.
On a dense open subset of the semialgebraic set S, it is (locally) a submanifold. One can define the dimension of S to be the largest dimension at points at which it is a submanifold. It is not hard to see that a semialgebraic set lies inside an algebraic subvariety of the same dimension.
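As a purely numerical illustration of the projection property (the disc example and sampling sizes are choices made here, not part of the theorem's statement): projecting sampled points of the disc x² + y² ≤ 1 onto the x-axis recovers the semialgebraic set {x : 1 − x² ≥ 0}, i.e. the interval [−1, 1], as the Tarski–Seidenberg theorem guarantees.

```python
import numpy as np

# Sample the semialgebraic set {(x, y) : x^2 + y^2 <= 1} and project
# onto the x-axis; the projection fills out the interval [-1, 1].
rng = np.random.default_rng(0)
pts = rng.uniform(-2.0, 2.0, size=(200_000, 2))
disc = pts[pts[:, 0] ** 2 + pts[:, 1] ** 2 <= 1.0]
proj = disc[:, 0]

print(round(float(proj.min()), 2), round(float(proj.max()), 2))  # close to -1 and 1
# Every projected value satisfies the defining inequality 1 - x^2 >= 0:
print(bool(np.all(1.0 - proj ** 2 >= 0.0)))
```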
== See also ==
Łojasiewicz inequality
Existential theory of the reals
Subanalytic set
Piecewise algebraic space
== References ==
Bochnak, J.; Coste, M.; Roy, M.-F. (1998), Real algebraic geometry, Berlin: Springer-Verlag, ISBN 9783662037188.
Bierstone, Edward; Milman, Pierre D. (1988), "Semianalytic and subanalytic sets", Inst. Hautes Études Sci. Publ. Math., 67: 5–42, doi:10.1007/BF02699126, MR 0972342, S2CID 56006439.
van den Dries, L. (1998), Tame topology and o-minimal structures, Cambridge University Press, ISBN 9780521598385.
== External links ==
PlanetMath page | Wikipedia/Semialgebraic_sets |
In mathematical logic, and more specifically in model theory, an infinite structure (M,<,...) that is totally ordered by < is called an o-minimal structure if and only if every definable subset X ⊆ M (with parameters taken from M) is a finite union of intervals and points.
O-minimality can be regarded as a weak form of quantifier elimination. A structure M is o-minimal if and only if every formula with one free variable and parameters in M is equivalent to a quantifier-free formula involving only the ordering, also with parameters in M. This is analogous to minimal structures, which have the analogous property with respect to equality.
A theory T is an o-minimal theory if every model of T is o-minimal. It is known that the complete theory T of an o-minimal structure is an o-minimal theory. This result is remarkable because, in contrast, the complete theory of a minimal structure need not be a strongly minimal theory, that is, there may be an elementarily equivalent structure that is not minimal.
== Set-theoretic definition ==
O-minimal structures can be defined without recourse to model theory. Here we define a structure on a nonempty set M in a set-theoretic manner, as a sequence S = (Sn), n = 0,1,2,... such that
Sn is a boolean algebra of subsets of Mn
if D ∈ Sn then M × D and D ×M are in Sn+1
the set {(x1,...,xn) ∈ Mn : x1 = xn} is in Sn
if D ∈ Sn+1 and π : Mn+1 → Mn is the projection map on the first n coordinates, then π(D) ∈ Sn.
For a subset A of M, we consider the smallest structure S(A) containing S such that every finite subset of A is contained in S1. A subset D of Mn is called A-definable if it is contained in Sn(A); in that case A is called a set of parameters for D. A subset is called definable if it is A-definable for some A.
If M has a dense linear order without endpoints on it, say <, then a structure S on M is called o-minimal (with respect to <) if it satisfies the extra axioms
the set < (={(x,y) ∈ M2 : x < y}) is in S2
the definable subsets of M are precisely the finite unions of intervals and points.
The "o" stands for "order", since any o-minimal structure requires an ordering on the underlying set.
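A small stdlib-only Python sketch of the second axiom over the real field: a set defined by a polynomial inequality p(x) > 0 decomposes into finitely many open intervals with endpoints among the roots of p, exactly the shape o-minimality demands of every definable subset of the line. (The helper is illustrative and assumes the real roots are supplied.)

```python
def positivity_intervals(p, roots):
    """Open intervals (between consecutive roots / +-infinity) where p > 0."""
    bounds = [float('-inf')] + sorted(roots) + [float('inf')]
    out = []
    for a, b in zip(bounds, bounds[1:]):
        # p has constant sign on (a, b), so one interior sample decides it
        if a == float('-inf'):
            s = b - 1.0
        elif b == float('inf'):
            s = a + 1.0
        else:
            s = (a + b) / 2.0
        if p(s) > 0:
            out.append((a, b))
    return out

# x^3 - x > 0 holds exactly on (-1, 0) U (1, oo): a finite union of intervals.
print(positivity_intervals(lambda x: x**3 - x, [-1.0, 0.0, 1.0]))
# -> [(-1.0, 0.0), (1.0, inf)]
```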
== Model theoretic definition ==
O-minimal structures originated in model theory and so have a simpler — but equivalent — definition using the language of model theory. Specifically if L is a language including a binary relation <, and (M,<,...) is an L-structure where < is interpreted to satisfy the axioms of a dense linear order, then (M,<,...) is called an o-minimal structure if for any definable set X ⊆ M there are finitely many open intervals I1,..., Ir in M ∪ {±∞} and a finite set X0 such that
{\displaystyle X=X_{0}\cup I_{1}\cup \ldots \cup I_{r}.}
== Examples ==
Examples of o-minimal theories are:
The complete theory of dense linear orders in the language with just the ordering.
RCF, the theory of real closed fields.
The complete theory of the real field with restricted analytic functions added (i.e. analytic functions on a neighborhood of [0,1]n, restricted to [0,1]n; note that the unrestricted sine function has infinitely many roots, and so cannot be definable in an o-minimal structure.)
The complete theory of the real field with a symbol for the exponential function by Wilkie's theorem. More generally, the complete theory of the real numbers with Pfaffian functions added.
The last two examples can be combined: given any o-minimal expansion of the real field (such as the real field with restricted analytic functions), one can define its Pfaffian closure, which is again an o-minimal structure. (The Pfaffian closure of a structure is, in particular, closed under Pfaffian chains where arbitrary definable functions are used in place of polynomials.)
In the case of RCF, the definable sets are the semialgebraic sets. Thus the study of o-minimal structures and theories generalises real algebraic geometry. A major line of current research is based on discovering expansions of the real ordered field that are o-minimal. Despite the generality of application, one can show a great deal about the geometry of sets definable in o-minimal structures. There is a cell decomposition theorem, Whitney and Verdier stratification theorems, and a good notion of dimension and Euler characteristic.
Moreover, continuously differentiable definable functions in an o-minimal structure satisfy a generalization of the Łojasiewicz inequality, a property that has been used to guarantee the convergence of some non-smooth optimization methods, such as the stochastic subgradient method (under some mild assumptions).
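A toy sketch of the optimization remark (the objective, step sizes, and iteration count are illustrative choices, not the cited method's exact setup): the subgradient method on the semialgebraic, hence definable, nonsmooth function f(x) = |x| + x²/2, whose minimizer is 0.

```python
def subgradient(x):
    # A subgradient of |x| + x^2/2: sign(x) + x (any value in [-1, 1] + 0 at 0).
    return (1.0 if x > 0 else -1.0 if x < 0 else 0.0) + x

x = 5.0
for k in range(1, 2001):
    x -= (1.0 / k) * subgradient(x)   # classical diminishing steps 1/k

print(abs(x) < 0.01)   # iterates settle at the minimizer x = 0
```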
== See also ==
Semialgebraic set
Real algebraic geometry
Strongly minimal theory
Weakly o-minimal structure
C-minimal theory
Tame topology
== Notes ==
== References ==
van den Dries, Lou (1998). Tame Topology and o-minimal Structures. London Mathematical Society Lecture Note Series. Vol. 248. Cambridge: Cambridge University Press. ISBN 978-0-521-59838-5. Zbl 0953.03045.
Marker, David (2000). "Review of "Tame Topology and o-minimal Structures"" (PDF). Bulletin of the American Mathematical Society. 37 (3): 351–357. doi:10.1090/S0273-0979-00-00866-1.
Marker, David (2002). Model theory: An introduction. Graduate Texts in Mathematics. Vol. 217. New York, NY: Springer-Verlag. ISBN 978-0-387-98760-6. Zbl 1003.03034.
Pillay, Anand; Steinhorn, Charles (1986). "Definable Sets in Ordered Structures I" (PDF). Transactions of the American Mathematical Society. 295 (2): 565–592. doi:10.2307/2000052. JSTOR 2000052. Zbl 0662.03023.
Knight, Julia; Pillay, Anand; Steinhorn, Charles (1986). "Definable Sets in Ordered Structures II". Transactions of the American Mathematical Society. 295 (2): 593–605. doi:10.2307/2000053. JSTOR 2000053. Zbl 0662.03024.
Pillay, Anand; Steinhorn, Charles (1988). "Definable Sets in Ordered Structures III". Transactions of the American Mathematical Society. 309 (2): 469–476. doi:10.2307/2000920. JSTOR 2000920. Zbl 0707.03024.
Wilkie, A.J. (1996). "Model completeness results for expansions of the ordered field of real numbers by restricted Pfaffian functions and the exponential function" (PDF). Journal of the American Mathematical Society. 9 (4): 1051–1095. doi:10.1090/S0894-0347-96-00216-0.
Denef, J.; van den Dries, L. (1989). "p-adic and real subanalytic sets". Annals of Mathematics. 128 (1): 79–138. doi:10.2307/1971463. JSTOR 1971463.
== External links ==
Model Theory preprint server
Real Algebraic and Analytic Geometry Preprint Server | Wikipedia/O-minimal_theory |
In the mathematical field of set theory, the continuum means the real numbers, or the corresponding (infinite) cardinal number, denoted by {\displaystyle {\mathfrak {c}}}. Georg Cantor proved that the cardinality {\displaystyle {\mathfrak {c}}} is larger than the smallest infinity, namely, {\displaystyle \aleph _{0}}. He also proved that {\displaystyle {\mathfrak {c}}} is equal to {\displaystyle 2^{\aleph _{0}}\!}, the cardinality of the power set of the natural numbers.
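Cantor's identity c = 2^ℵ0 has a finite analogue that can be checked directly: a set with n elements has exactly 2^n subsets. A small illustrative Python check (a statement about finite sets only, not about infinite cardinals):

```python
from itertools import combinations

def powerset_size(s):
    """Count all subsets of s by enumerating subsets of each size."""
    return sum(1 for r in range(len(s) + 1) for _ in combinations(s, r))

# |P(S)| = 2^|S| for small finite sets S.
print([powerset_size(range(n)) == 2 ** n for n in range(8)])
```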
The cardinality of the continuum is the size of the set of real numbers. The continuum hypothesis is sometimes stated by saying that no cardinality lies between that of the continuum and that of the natural numbers, {\displaystyle \aleph _{0}}, or alternatively, that {\displaystyle {\mathfrak {c}}=\aleph _{1}}.
== Linear continuum ==
According to Raymond Wilder (1965), there are four axioms that make a set C and the relation < into a linear continuum:
C is simply ordered with respect to <.
If [A,B] is a cut of C, then either A has a last element or B has a first element. (compare Dedekind cut)
There exists a non-empty, countable subset S of C such that, if x,y ∈ C such that x < y, then there exists z ∈ S such that x < z < y. (separability axiom)
C has no first element and no last element. (Unboundedness axiom)
These axioms characterize the order type of the real number line.
== See also ==
Aleph null
Suslin's problem
Transfinite number
== References ==
== Bibliography == | Wikipedia/Continuum_(set_theory) |
In mathematical logic, abstract model theory is a generalization of model theory that studies the general properties of extensions of first-order logic and their models.
Abstract model theory provides an approach that allows us to step back and study a wide range of logics and their relationships. The starting point for the study of abstract model theory, and the source of its first good examples, was Lindström's theorem.
In 1974 Jon Barwise provided an axiomatization of abstract model theory.
== See also ==
Lindström's theorem
Institution (computer science)
Institutional model theory
== References ==
== Further reading ==
Jon Barwise; Solomon Feferman (1985). Model-theoretic logics. Springer-Verlag. ISBN 978-0-387-90936-3. | Wikipedia/Abstract_model_theory |
The term transformation theory refers to a procedure and a "picture" used by Paul Dirac in his early formulation of quantum theory, from around 1927.
This "transformation" idea refers to the changes a quantum state undergoes in the course of time, whereby its vector "moves" between "positions" or "orientations" in its Hilbert space. Time evolution, quantum transitions, and symmetry transformations in quantum mechanics may thus be viewed as the systematic theory of abstract, generalized rotations in this space of quantum state vectors.
Remaining in full use today, it would be regarded as a topic in the mathematics of Hilbert space, although, technically speaking, it is somewhat more general in scope. While the terminology is reminiscent of rotations of vectors in ordinary space, the Hilbert space of a quantum object is more general and holds its entire quantum state.
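A minimal numerical sketch of this "rotation" picture (assuming ħ = 1 and a two-level system with a diagonal Hamiltonian, both illustrative choices): time evolution multiplies each component of the state vector by a phase, moving the vector around Hilbert space while preserving its norm, exactly as a rotation preserves length.

```python
import cmath

def evolve(state, energies, t):
    """Evolve an energy-eigenbasis state: each amplitude picks up exp(-i E t)."""
    return [a * cmath.exp(-1j * e * t) for a, e in zip(state, energies)]

def norm_sq(state):
    """Total probability carried by the state vector."""
    return sum(abs(a) ** 2 for a in state)

psi0 = [3 / 5, 4j / 5]                  # normalized two-component state
psi_t = evolve(psi0, [1.0, 2.5], t=0.7)

print(abs(norm_sq(psi_t) - 1.0) < 1e-12)   # unitarity: the norm is preserved
```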
(The term further sometimes evokes the wave–particle duality, according to which a particle (a "small" physical object) may display either particle or wave aspects, depending on the observational situation. Or, indeed, a variety of intermediate aspects, as the situation demands.)
== References == | Wikipedia/Transformation_theory_(quantum_mechanics) |
In physics, complementarity is a conceptual aspect of quantum mechanics that Niels Bohr regarded as an essential feature of the theory. The complementarity principle holds that certain pairs of complementary properties cannot all be observed or measured simultaneously: for example, position and momentum, frequency and lifetime, or optical phase and photon number. In contemporary terms, complementarity encompasses both the uncertainty principle and wave-particle duality.
Bohr considered one of the foundational truths of quantum mechanics to be the fact that setting up an experiment to measure one quantity of a pair, for instance the position of an electron, excludes the possibility of measuring the other, yet understanding both experiments is necessary to characterize the object under study. In Bohr's view, the behavior of atomic and subatomic objects cannot be separated from the measuring instruments that create the context in which the measured objects behave. Consequently, there is no "single picture" that unifies the results obtained in these different experimental contexts, and only the "totality of the phenomena" together can provide a completely informative description.
== History ==
=== Background ===
Complementarity as a physical model derives from Niels Bohr's 1927 lecture during the Como Conference in Italy, at a scientific celebration of the work of Alessandro Volta 100 years previous. Bohr's subject was complementarity, the idea that measurements of quantum events provide complementary information through seemingly contradictory results. While Bohr's presentation was not well received, it did crystallize the issues ultimately leading to the modern wave-particle duality concept. The contradictory results that triggered Bohr's ideas had been building up over the previous 20 years.
This contradictory evidence came both from light and from electrons.
The wave theory of light, broadly successful for over a hundred years, had been challenged by Planck's 1901 model of blackbody radiation and Einstein's 1905 interpretation of the photoelectric effect. These theoretical models use discrete energy, a quantum, to describe the interaction of light with matter. Despite confirmation by various experimental observations, the photon theory (as it came to be called later) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light. The experimental evidence of particle-like momentum seemingly contradicted other experiments demonstrating the wave-like interference of light.
The contradictory evidence from electrons arrived in the opposite order. Many experiments by J. J. Thomson, Robert Millikan, and Charles Wilson, among others, had shown that free electrons had particle properties. However, in 1924, Louis de Broglie proposed that electrons had an associated wave and Schrödinger demonstrated that wave equations accurately account for electron properties in atoms. Again some experiments showed particle properties and others wave properties.
Bohr's resolution of these contradictions is to accept them. In his Como lecture he says: "our interpretation of the experimental material rests essentially upon the classical concepts." Direct observation being impossible, observations of quantum effects are necessarily classical. Whatever the nature of quantum events, our only information will arrive via classical results. If experiments sometimes produce wave results and sometimes particle results, that is the nature of light and of the ultimate constituents of matter.
=== Bohr's lectures ===
Niels Bohr apparently conceived of the principle of complementarity during a skiing vacation in Norway in February and March 1927, during which he received a letter from Werner Heisenberg regarding an as-yet-unpublished result, a thought experiment about a microscope using gamma rays. This thought experiment implied a tradeoff between uncertainties that would later be formalized as the uncertainty principle. To Bohr, Heisenberg's paper did not make clear the distinction between a position measurement merely disturbing the momentum value that a particle carried and the more radical idea that momentum was meaningless or undefinable in a context where position was measured instead. Upon returning from his vacation, by which time Heisenberg had already submitted his paper for publication, Bohr convinced Heisenberg that the uncertainty tradeoff was a manifestation of the deeper concept of complementarity. Heisenberg duly appended a note to this effect to his paper, before its publication, stating:
Bohr has brought to my attention [that] the uncertainty in our observation does not arise exclusively from the occurrence of discontinuities, but is tied directly to the demand that we ascribe equal validity to the quite different experiments which show up in the [particulate] theory on one hand, and in the wave theory on the other hand.
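The tradeoff behind Heisenberg's thought experiment can be illustrated numerically (assuming ħ = 1; the Gaussian packet, its width, and the grid are illustrative choices): for a Gaussian wave packet the spreads satisfy Δx·Δp = 1/2, the floor of the uncertainty relation.

```python
import math

sigma = 0.8
dx = 0.001
xs = [i * dx for i in range(-8000, 8001)]   # grid covering +-8 = 10 sigma

def psi(x):
    """Normalized Gaussian wave packet of width sigma."""
    return (2 * math.pi * sigma**2) ** -0.25 * math.exp(-x * x / (4 * sigma**2))

# <x^2> and <p^2> = integral of |dpsi/dx|^2 (central difference), by Riemann sum
x2 = sum(x * x * psi(x) ** 2 * dx for x in xs)
p2 = sum(((psi(x + dx) - psi(x - dx)) / (2 * dx)) ** 2 * dx for x in xs)

print(round(math.sqrt(x2) * math.sqrt(p2), 3))   # Delta-x * Delta-p ~ 0.5
```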
Bohr publicly introduced the principle of complementarity in a lecture he delivered on 16 September 1927 at the International Physics Congress held in Como, Italy, attended by most of the leading physicists of the era, with the notable exceptions of Einstein, Schrödinger, and Dirac. However, these three were in attendance one month later when Bohr again presented the principle at the Fifth Solvay Congress in Brussels, Belgium. The lecture was published in the proceedings of both of these conferences, and was republished the following year in Naturwissenschaften (in German) and in Nature (in English).
In his original lecture on the topic, Bohr pointed out that just as the finitude of the speed of light implies the impossibility of a sharp separation between space and time (relativity), the finitude of the quantum of action implies the impossibility of a sharp separation between the behavior of a system and its interaction with the measuring instruments and leads to the well-known difficulties with the concept of 'state' in quantum theory; the notion of complementarity is intended to capture this new situation in epistemology created by quantum theory. Physicists F.A.M. Frescura and Basil Hiley have summarized the reasons for the introduction of the principle of complementarity in physics as follows:
In the traditional view, it is assumed that there exists a reality in space-time and that this reality is a given thing, all of whose aspects can be viewed or articulated at any given moment. Bohr was the first to point out that quantum mechanics called this traditional outlook into question. To him the "indivisibility of the quantum of action" [...] implied that not all aspects of a system can be viewed simultaneously. By using one particular piece of apparatus only certain features could be made manifest at the expense of others, while with a different piece of apparatus another complementary aspect could be made manifest in such a way that the original set became non-manifest, that is, the original attributes were no longer well defined. For Bohr, this was an indication that the principle of complementarity, a principle that he had previously known to appear extensively in other intellectual disciplines but which did not appear in classical physics, should be adopted as a universal principle.
=== Debate following the lectures ===
Complementarity was a central feature of Bohr's reply to the EPR paradox, an attempt by Albert Einstein, Boris Podolsky and Nathan Rosen to argue that quantum particles must have position and momentum even without being measured and so quantum mechanics must be an incomplete theory. The thought experiment proposed by Einstein, Podolsky and Rosen involved producing two particles and sending them far apart. The experimenter could choose to measure either the position or the momentum of one particle. Given that result, they could in principle make a precise prediction of what the corresponding measurement on the other, faraway particle would find. To Einstein, Podolsky and Rosen, this implied that the faraway particle must have precise values of both quantities whether or not that particle is measured in any way. Bohr argued in response that the deduction of a position value could not be transferred over to the situation where a momentum value is measured, and vice versa.
Later expositions of complementarity by Bohr include a 1938 lecture in Warsaw and a 1949 article written for a festschrift honoring Albert Einstein. It was also covered in a 1953 essay by Bohr's collaborator Léon Rosenfeld.
== Mathematical formalism ==
For Bohr, complementarity was the "ultimate reason" behind the uncertainty principle. All attempts to grapple with atomic phenomena using classical physics were eventually frustrated, he wrote, leading to the recognition that those phenomena have "complementary aspects". But classical physics can be generalized to address this, and with "astounding simplicity", by describing physical quantities using non-commutative algebra. This mathematical expression of complementarity builds on the work of Hermann Weyl and Julian Schwinger, starting with Hilbert spaces and unitary transformation, leading to the theorems of mutually unbiased bases.
In the mathematical formulation of quantum mechanics, physical quantities that classical mechanics had treated as real-valued variables become self-adjoint operators on a Hilbert space. These operators, called "observables", can fail to commute, in which case they are called "incompatible":
{\displaystyle \left[{\hat {A}},{\hat {B}}\right]:={\hat {A}}{\hat {B}}-{\hat {B}}{\hat {A}}\neq {\hat {0}}.}
Incompatible observables cannot have a complete set of common eigenstates; there can be some simultaneous eigenstates of {\displaystyle {\hat {A}}} and {\displaystyle {\hat {B}}}, but not enough in number to constitute a complete basis. The canonical commutation relation
{\displaystyle \left[{\hat {x}},{\hat {p}}\right]=i\hbar }
implies that this applies to position and momentum. In a Bohrian view, this is a mathematical statement that position and momentum are complementary aspects. Likewise, an analogous relationship holds for any two of the spin observables defined by the Pauli matrices; measurements of spin along perpendicular axes are complementary. The Pauli spin observables are defined for a quantum system described by a two-dimensional Hilbert space; mutually unbiased bases generalize these observables to Hilbert spaces of arbitrary finite dimension. Two bases {\displaystyle \{|a_{j}\rangle \}} and {\displaystyle \{|b_{k}\rangle \}} for an {\displaystyle N}-dimensional Hilbert space are mutually unbiased when
{\displaystyle |\langle a_{j}|b_{k}\rangle |^{2}={\frac {1}{N}}\ {\text{for all}}\ j,k=1,\dots ,N.}
Here the basis vector {\displaystyle a_{1}}, for example, has the same overlap with every {\displaystyle b_{k}}; there is equal transition probability between a state in one basis and any state in the other basis. Each basis corresponds to an observable, and the observables for two mutually unbiased bases are complementary to each other. This leads to a definition of the principle of complementarity as:
For each degree of freedom the dynamical variables are a pair of complementary observables.

The concept of complementarity has also been applied to quantum measurements described by positive-operator-valued measures (POVMs).
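The mutually-unbiased-bases condition above can be checked numerically. The following sketch (in Python with NumPy; the choice N = 4 and the discrete-Fourier basis are illustrative assumptions, not drawn from the text) verifies that the computational and Fourier bases satisfy the defining overlap condition, and that the corresponding N = 2 observables, Pauli Z and X, are incompatible:

```python
import numpy as np

N = 4  # dimension of the Hilbert space; an illustrative choice

# Basis 1: the computational (standard) basis.
standard = np.eye(N)

# Basis 2: the discrete-Fourier-transform basis, a standard example of a
# basis mutually unbiased with the computational one.
omega = np.exp(2j * np.pi / N)
fourier = np.array([[omega ** (j * k) for k in range(N)]
                    for j in range(N)]) / np.sqrt(N)

# Defining condition: |<a_j|b_k>|^2 = 1/N for every pair j, k.
overlaps = np.abs(standard.conj() @ fourier.T) ** 2
assert np.allclose(overlaps, 1 / N)

# For N = 2 the Z and X eigenbases are mutually unbiased, and the
# corresponding observables are incompatible (they do not commute).
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
assert not np.allclose(Z @ X, X @ Z)
```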
== Continuous complementarity ==
While complementarity is often discussed in terms of two experimental extremes, a continuous trade-off is also possible.
In 1979, Wootters and Zurek introduced an information-theoretic treatment of the double-slit experiment, providing an approach to a quantitative model of complementarity.
The wave–particle relation, introduced by Daniel Greenberger and Allaine Yasin in 1988, and since then refined by others, quantifies the trade-off between measuring particle path distinguishability, {\displaystyle D}, and wave interference fringe visibility, {\displaystyle V}:
{\displaystyle D^{2}+V^{2}\ \leq \ 1}
The values of {\displaystyle D} and {\displaystyle V} can vary between 0 and 1 individually, but any experiment that combines particle and wave detection will limit one or the other, or both. The detailed definitions of the two terms vary among applications, but the relation expresses the verified constraint that efforts to detect particle paths will result in less visible wave interference.
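A minimal numerical sketch of this trade-off follows (Python with NumPy; modelling the two paths with a 2×2 density matrix and using the a-priori predictability P = |ρ₁₁ − ρ₂₂| as the distinguishability measure, following the Greenberger–Yasin formulation, are assumptions for illustration). Pure path states saturate the bound, while mixing pushes strictly below it:

```python
import numpy as np

def path_predictability_and_visibility(rho):
    """For a two-path (2x2) density matrix rho, return the a-priori path
    predictability P = |rho_11 - rho_22| and the fringe visibility
    V = 2|rho_12| (standard two-beam interferometry expressions)."""
    P = abs(rho[0, 0].real - rho[1, 1].real)
    V = 2 * abs(rho[0, 1])
    return P, V

# Pure state cos(t)|path1> + sin(t)|path2> saturates the bound.
t = 0.3
psi = np.array([np.cos(t), np.sin(t)])
rho_pure = np.outer(psi, psi.conj())
P, V = path_predictability_and_visibility(rho_pure)
assert np.isclose(P**2 + V**2, 1.0)

# Mixing in which-path noise pushes the state strictly below the bound.
rho_mixed = 0.6 * rho_pure + 0.4 * np.eye(2) / 2
P, V = path_predictability_and_visibility(rho_mixed)
assert P**2 + V**2 < 1.0
```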
== Modern role ==
While many of the early discussions of complementarity concerned hypothetical experiments, advances in technology have allowed experimental tests of this concept. Experiments like the quantum eraser verify the key ideas in complementarity; modern exploration of quantum entanglement builds directly on complementarity:
The most sensible position, according to quantum mechanics, is to assume that no such waves preexist before any measurement.
In his Nobel lecture, physicist Julian Schwinger linked complementarity to quantum field theory:
Indeed, relativistic quantum mechanics-the union of the complementarity principle of Bohr with the relativity principle of Einstein-is quantum field theory.
The consistent histories interpretation of quantum mechanics takes a generalized form of complementarity as a key defining postulate.
== See also ==
Copenhagen interpretation
Canonical coordinates
Conjugate variables
Interpretations of quantum mechanics
Wave–particle duality
== References ==
== Further reading ==
Berthold-Georg Englert, Marlan O. Scully & Herbert Walther, Quantum Optical Tests of Complementarity, Nature, Vol 351, pp 111–116 (9 May 1991) and (same authors) The Duality in Matter and Light Scientific American, pg 56–61, (December 1994).
Niels Bohr, Causality and Complementarity: supplementary papers edited by Jan Faye and Henry J. Folse. The Philosophical Writings of Niels Bohr, Volume IV. Ox Bow Press. 1998.
Rhodes, Richard (1986). The Making of the Atomic Bomb. Simon & Schuster. ISBN 0-671-44133-7. OCLC 231117096.
== External links ==
Discussions with Einstein on Epistemological Problems in Atomic Physics
Einstein's Reply to Criticisms | Wikipedia/Complementarity_(physics) |
Modern Quantum Mechanics, often called Sakurai or Sakurai and Napolitano, is a standard graduate-level quantum mechanics textbook written originally by J. J. Sakurai and edited by San Fu Tuan in 1985, with later editions coauthored by Jim Napolitano. Sakurai died in 1982 before he could finish the textbook, and both the first edition of the book, published in 1985 by Benjamin Cummings, and the revised edition of 1994, published by Addison-Wesley, were edited and completed by Tuan posthumously. Napolitano updated the book for two later editions. The second edition was initially published by Addison-Wesley in 2010 and rereleased as an eBook by Cambridge University Press, which released a third edition in 2020.
== Table of contents (3rd edition) ==
Prefaces
Chapter 1: Fundamental Concepts
Chapter 2: Quantum Dynamics
Chapter 3: Theory of Angular Momentum
Chapter 4: Symmetry in Quantum Mechanics
Chapter 5: Approximation Methods
Chapter 6: Scattering Theory
Chapter 7: Identical Particles
Chapter 8: Relativistic Quantum Mechanics
Appendix A: Electromagnetic Units
Appendix B: Elementary Solutions to Schrödinger's Wave Equation
Appendix C: Hamiltonian for a Charge in an Electromagnetic Field
Appendix D: Proof of the Angular-Momentum Rule (3.358)
Appendix E: Finding Clebsch-Gordan Coefficients
Appendix F: Notes on Complex Variables
Bibliography
Index
== Reception ==
Early editions of the book have received several reviews. It is a standard textbook on the subject: it is recommended in other works, has inspired other textbooks, and is used as a point of comparison in book reviews. Along with Griffiths's Introduction to Quantum Mechanics, the book was also analyzed in a review of the "Philosophical Standpoints of Textbooks in Quantum Mechanics" in June 2020.
== Publication history ==
Sakurai, J. J. (1985). Tuan, San Fu (ed.). Modern Quantum Mechanics (1st ed.). Menlo Park, Calif.: Benjamin Cummings. ISBN 0-8053-7501-5. OCLC 11518382.
Sakurai, J. J. (1994). Tuan, San Fu (ed.). Modern Quantum Mechanics (Rev. ed.). Reading, Mass.: Addison-Wesley. ISBN 0-201-53929-2. OCLC 28065703. (hardcover)
Sakurai, J. J.; Napolitano, Jim (2010). Modern quantum mechanics (2nd ed.). Boston: Addison-Wesley. ISBN 978-0-8053-8291-4. OCLC 641998678. (hardcover)
Sakurai, J. J.; Napolitano, Jim (2017). Modern Quantum Mechanics (2nd ed.). Cambridge. ISBN 978-1-108-49999-6. OCLC 1105708539.{{cite book}}: CS1 maint: location missing publisher (link) (eBook)
Sakurai, J. J.; Napolitano, Jim (2020). Modern Quantum Mechanics (3rd ed.). Cambridge. ISBN 978-1-108-47322-4. OCLC 1202949320.{{cite book}}: CS1 maint: location missing publisher (link) (hardcover)
Sakurai, J. J.; Napolitano, Jim (2020). Modern Quantum Mechanics (3rd ed.). Cambridge. ISBN 978-1-108-64592-8. OCLC 1202949320.{{cite book}}: CS1 maint: location missing publisher (link) (eBook)
== See also ==
Introduction to Quantum Mechanics, an undergraduate text by David J. Griffiths
List of textbooks on classical mechanics and quantum mechanics
== References ==
== External links ==
Publisher's website for the 2nd edition
Publisher's website for the 3rd edition
Book in the Internet Archive | Wikipedia/Modern_Quantum_Mechanics |
Quantum networks form an important element of quantum computing and quantum communication systems. Quantum networks facilitate the transmission of information in the form of quantum bits, also called qubits, between physically separated quantum processors. A quantum processor is a machine able to perform quantum circuits on a certain number of qubits. Quantum networks work in a similar way to classical networks. The main difference is that quantum networking, like quantum computing, is better at solving certain problems, such as modeling quantum systems.
== Basics ==
=== Quantum networks for computation ===
Networked quantum computing or distributed quantum computing works by linking multiple quantum processors through a quantum network by sending qubits in between them. Doing this creates a quantum computing cluster and therefore creates more computing potential. Less powerful computers can be linked in this way to create one more powerful processor. This is analogous to connecting several classical computers to form a computer cluster in classical computing. Like classical computing, this system is scalable by adding more and more quantum computers to the network. Currently quantum processors are only separated by short distances.
=== Quantum networks for communication ===
In the realm of quantum communication, one wants to send qubits from one quantum processor to another over long distances. This way, local quantum networks can be interconnected into a quantum internet. A quantum internet supports many applications, which derive their power from the fact that by creating quantum entangled qubits, information can be transmitted between the remote quantum processors. Most applications of a quantum internet require only very modest quantum processors. For most quantum internet protocols, such as quantum key distribution in quantum cryptography, it is sufficient if these processors are capable of preparing and measuring only a single qubit at a time. This is in contrast to quantum computing, where interesting applications can be realized only if the (combined) quantum processors comprise more qubits than a classical computer can easily simulate (around 60). Quantum internet applications require only small quantum processors, often just a single qubit, because quantum entanglement can already be realized between just two qubits. A simulation of an entangled quantum system on a classical computer cannot simultaneously provide the same security and speed.
=== Overview of the elements of a quantum network ===
The basic structure of a quantum network and more generally a quantum internet is analogous to a classical network. First, we have end nodes on which applications are ultimately run. These end nodes are quantum processors of at least one qubit. Some applications of a quantum internet require quantum processors of several qubits as well as a quantum memory at the end nodes.
Second, to transport qubits from one node to another, we need communication lines. For the purpose of quantum communication, standard telecom fibers can be used. For networked quantum computing, in which quantum processors are linked at short distances, different wavelengths are chosen depending on the exact hardware platform of the quantum processor.
Third, to make maximum use of communication infrastructure, one requires optical switches capable of delivering qubits to the intended quantum processor. These switches need to preserve quantum coherence, which makes them more challenging to realize than standard optical switches.
Finally, one requires a quantum repeater to transport qubits over long distances. Repeaters appear in between end nodes. Since qubits cannot be copied (No-cloning theorem), classical signal amplification is not possible. By necessity, a quantum repeater works in a fundamentally different way than a classical repeater.
== Elements of a quantum network ==
=== End nodes: quantum processors ===
End nodes can both receive and emit information. Telecommunication lasers and parametric down-conversion combined with photodetectors can be used for quantum key distribution. In this case, the end nodes can in many cases be very simple devices consisting only of beamsplitters and photodetectors.
However, for many protocols more sophisticated end nodes are desirable. These systems provide advanced processing capabilities and can also be used as quantum repeaters. Their chief advantage is that they can store and retransmit quantum information without disrupting the underlying quantum state. The quantum state being stored can either be the relative spin of an electron in a magnetic field or the energy state of an electron. They can also perform quantum logic gates.
One way of realizing such end nodes is by using color centers in diamond, such as the nitrogen-vacancy center. This system forms a small quantum processor featuring several qubits. NV centers can be utilized at room temperatures. Small-scale quantum algorithms and quantum error correction have already been demonstrated in this system, as well as the ability to entangle two and three quantum processors, and perform deterministic quantum teleportation.
Another possible platform are quantum processors based on ion traps, which utilize radio-frequency magnetic fields and lasers. In a multispecies trapped-ion node network, photons entangled with a parent atom are used to entangle different nodes. Also, cavity quantum electrodynamics (Cavity QED) is one possible method of doing this. In Cavity QED, photonic quantum states can be transferred to and from atomic quantum states stored in single atoms contained in optical cavities. This allows for the transfer of quantum states between single atoms using optical fiber in addition to the creation of remote entanglement between distant atoms.
=== Communication lines: physical layer ===
Over long distances, the primary method of operating quantum networks is to use optical networks and photon-based qubits. This is due to optical networks having a reduced chance of decoherence. Optical networks have the advantage of being able to re-use existing optical fiber. Alternately, free space networks can be implemented that transmit quantum information through the atmosphere or through a vacuum.
==== Fiber optic networks ====
Optical networks using existing telecommunication fiber can be implemented using hardware similar to existing telecommunication equipment. This fiber can be either single-mode or multi-mode, with single-mode allowing for more precise communication. At the sender, a single photon source can be created by heavily attenuating a standard telecommunication laser such that the mean number of photons per pulse is less than 1. For receiving, an avalanche photodetector can be used. Various methods of phase or polarization control can be used such as interferometers and beam splitters. In the case of entanglement based protocols, entangled photons can be generated through spontaneous parametric down-conversion. In both cases, the telecom fiber can be multiplexed to send non-quantum timing and control signals.
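The photon statistics of such an attenuated laser pulse are Poissonian, so the chosen mean photon number controls how often a pulse is empty and, importantly for security, how often it contains more than one photon. A small sketch (Python; the mean value 0.1 is an illustrative choice, not a parameter quoted in the text):

```python
import math

def poisson_pmf(n, mu):
    """Probability of exactly n photons in a pulse with mean photon number mu."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

mu = 0.1  # illustrative mean photon number per attenuated pulse
p_empty = poisson_pmf(0, mu)
p_single = poisson_pmf(1, mu)
p_multi = 1 - p_empty - p_single

# Most pulses carry no photon, single-photon pulses are the useful ones,
# and multi-photon pulses (a security concern) are rare:
# roughly 0.905, 0.090 and 0.005 respectively for mu = 0.1.
print(p_empty, p_single, p_multi)
```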
In 2020 a team of researchers affiliated with several institutions in China has succeeded in sending entangled quantum memories over a 50-kilometer coiled fiber cable.
==== Free space networks ====
Free space quantum networks operate similar to fiber optic networks but rely on line of sight between the communicating parties instead of using a fiber optic connection. Free space networks can typically support higher transmission rates than fiber optic networks and do not have to account for polarization scrambling caused by optical fiber. However, over long distances, free space communication is subject to an increased chance of environmental disturbance on the photons.
Free space communication is also possible from a satellite to the ground. A quantum satellite capable of entanglement distribution over a distance of 1,203 km has been demonstrated. The experimental exchange of single photons from a global navigation satellite system at a slant distance of 20,000 km has also been reported. These satellites can play an important role in linking smaller ground-based networks over larger distances. In free-space networks, atmospheric conditions such as turbulence, scattering, and absorption present challenges that affect the fidelity of transmitted quantum states. To mitigate these effects, researchers employ adaptive optics, advanced modulation schemes, and error correction techniques. The resilience of QKD protocols against eavesdropping plays a crucial role in ensuring the security of the transmitted data. Specifically, protocols like BB84 and decoy-state schemes have been adapted for free-space environments to improve robustness against potential security vulnerabilities.
=== Repeaters ===
Long-distance communication is hindered by the effects of signal loss and decoherence inherent to most transport mediums such as optical fiber. In classical communication, amplifiers can be used to boost the signal during transmission, but in a quantum network amplifiers cannot be used since qubits cannot be copied – known as the no-cloning theorem. That is, to implement an amplifier, the complete state of the flying qubit would need to be determined, something which is both unwanted and impossible.
==== Trusted repeaters ====
An intermediary step which allows the testing of communication infrastructure is the use of trusted repeaters. Importantly, a trusted repeater cannot be used to transmit qubits over long distances. Instead, a trusted repeater can only be used to perform quantum key distribution with the additional assumption that the repeater is trusted. Consider two end nodes A and B, and a trusted repeater R in the middle. A and R now perform quantum key distribution to generate a key {\displaystyle k_{AR}}. Similarly, R and B run quantum key distribution to generate a key {\displaystyle k_{RB}}. A and B can now obtain a key {\displaystyle k_{AB}} between themselves as follows: A sends {\displaystyle k_{AB}} to R encrypted with the key {\displaystyle k_{AR}}. R decrypts to obtain {\displaystyle k_{AB}}. R then re-encrypts {\displaystyle k_{AB}} using the key {\displaystyle k_{RB}} and sends it to B. B decrypts to obtain {\displaystyle k_{AB}}. A and B now share the key {\displaystyle k_{AB}}. The key is secure from an outside eavesdropper, but clearly the repeater R also knows {\displaystyle k_{AB}}. This means that any subsequent communication between A and B does not provide end-to-end security, but is only secure as long as A and B trust the repeater R.
==== Quantum repeaters ====
A true quantum repeater allows the end to end generation of quantum entanglement, and thus – by using quantum teleportation – the end to end transmission of qubits. In quantum key distribution protocols one can test for such entanglement. This means that when making encryption keys, the sender and receiver are secure even if they do not trust the quantum repeater. Any other application of a quantum internet also requires the end to end transmission of qubits, and thus a quantum repeater.
Quantum repeaters allow entanglement to be established at distant nodes without physically sending an entangled qubit the entire distance.
In this case, the quantum network consists of many short-distance links of perhaps tens or hundreds of kilometers. In the simplest case of a single repeater, two pairs of entangled qubits are established: {\displaystyle |A\rangle } and {\displaystyle |R_{a}\rangle } located at the sender and the repeater, and a second pair {\displaystyle |R_{b}\rangle } and {\displaystyle |B\rangle } located at the repeater and the receiver. These initial entangled qubits can be easily created, for example through parametric down-conversion, with one qubit physically transmitted to an adjacent node. At this point, the repeater can perform a Bell measurement on the qubits {\displaystyle |R_{a}\rangle } and {\displaystyle |R_{b}\rangle }, thus teleporting the quantum state of {\displaystyle |R_{a}\rangle } onto {\displaystyle |B\rangle }. This has the effect of "swapping" the entanglement such that {\displaystyle |A\rangle } and {\displaystyle |B\rangle } are now entangled at a distance twice that of the initial entangled pairs. It can be seen that a network of such repeaters can be used linearly or in a hierarchical fashion to establish entanglement over great distances.
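The swapping step can be checked directly on state vectors. The following sketch (Python with NumPy; post-selecting on the |Φ⁺⟩ outcome of the Bell measurement is a simplification, since the other three outcomes merely require a local Pauli correction on B) shows that projecting the repeater's two qubits onto a Bell state leaves A and B maximally entangled:

```python
import numpy as np

# |Phi+> = (|00> + |11>)/sqrt(2), written as a 2x2 amplitude matrix M[i, j].
phi_plus = np.eye(2) / np.sqrt(2)

# Four qubits in order (A, Ra, Rb, B): pair A-Ra and pair Rb-B each start
# in |Phi+>; A and B share no entanglement yet.
psi = np.einsum('ar,sb->arsb', phi_plus, phi_plus)

# The repeater projects (Ra, Rb) onto |Phi+>, one of the four outcomes
# of its Bell measurement.
unnormalized = np.einsum('arsb,rs->ab', psi, phi_plus.conj())
prob = np.sum(np.abs(unnormalized) ** 2)   # probability of this outcome
post = unnormalized / np.sqrt(prob)        # post-measurement state of (A, B)

assert np.isclose(prob, 0.25)              # each Bell outcome: 1/4
assert np.allclose(post, phi_plus)         # A and B are now in |Phi+>
```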
Hardware platforms suitable as end nodes above can also function as quantum repeaters. However, there are also hardware platforms specific only to the task of acting as a repeater, without the capabilities of performing quantum gates.
==== Error correction ====
Error correction can be used in quantum repeaters. Due to technological limitations, however, the applicability is limited to very short distances as quantum error correction schemes capable of protecting qubits over long distances would require an extremely large amount of qubits and hence extremely large quantum computers.
Errors in communication can be broadly classified into two types: Loss errors (due to optical fiber/environment) and operation errors (such as depolarization, dephasing etc.). While redundancy can be used to detect and correct classical errors, redundant qubits cannot be created due to the no-cloning theorem. As a result, other types of error correction must be introduced such as the Shor code or one of a number of more general and efficient codes. All of these codes work by distributing the quantum information across multiple entangled qubits so that operation errors as well as loss errors can be corrected.
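The simplest quantum code illustrating this idea is the three-qubit bit-flip repetition code, a building block of the Shor code mentioned above. A sketch (Python with NumPy; reading the syndrome from the state vector's support is a simulation shortcut standing in for the projective parity measurements a real device would perform):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

def op(gate, k):
    """gate acting on qubit k (0-indexed, qubit 0 most significant) of 3."""
    mats = [gate if i == k else I2 for i in range(3)]
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

# Encode a|0> + b|1>  ->  a|000> + b|111>  (bit-flip repetition code).
a, b = 0.6, 0.8
logical = np.zeros(8)
logical[0], logical[7] = a, b

# A bit-flip error hits qubit 1 during transmission.
corrupted = op(X, 1) @ logical

# Syndrome: parities of qubit pairs (0,1) and (1,2), read here from a
# basis state in the support of the corrupted state vector.
basis = int(np.argmax(np.abs(corrupted)))
bits = [(basis >> (2 - i)) & 1 for i in range(3)]
s01, s12 = bits[0] ^ bits[1], bits[1] ^ bits[2]
flipped = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s01, s12)]
recovered = op(X, flipped) @ corrupted if flipped is not None else corrupted

assert np.allclose(recovered, logical)  # the error has been corrected
```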
In addition to quantum error correction, classical error correction can be employed by quantum networks in special cases such as quantum key distribution. In these cases, the goal of the quantum communication is to securely transmit a string of classical bits. Traditional error correction codes such as Hamming codes can be applied to the bit string before encoding and transmission on the quantum network.
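As a concrete example of such classical coding, a Hamming(7,4) encoder/decoder that corrects any single flipped bit might look like this (a sketch in Python; the bit ordering, with parity bits at positions 1, 2 and 4, follows the standard construction):

```python
def hamming74_encode(d):
    """Encode 4 data bits as the Hamming(7,4) codeword
    [p1, p2, d1, p3, d2, d3, d4] (parity bits at positions 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    pos = s3 * 4 + s2 * 2 + s1       # 1-indexed error position; 0 = no error
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[4] ^= 1                          # the channel flips one bit
assert hamming74_decode(word) == data  # the single error is corrected
```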
==== Entanglement purification ====
Quantum decoherence can occur when one qubit from a maximally entangled bell state is transmitted across a quantum network. Entanglement purification allows for the creation of nearly maximally entangled qubits from a large number of arbitrary weakly entangled qubits, and thus provides additional protection against errors. Entanglement purification (also known as Entanglement distillation) has already been demonstrated in Nitrogen-vacancy centers in diamond.
== Applications ==
A quantum internet supports numerous applications, enabled by quantum entanglement. In general, quantum entanglement is well suited for tasks that require coordination, synchronization or privacy.
Examples of such applications include quantum key distribution, clock stabilization, protocols for distributed system problems such as leader election or Byzantine agreement, extending the baseline of telescopes, as well as position verification, secure identification and two-party cryptography in the noisy-storage model. A quantum internet also enables secure access to a quantum computer in the cloud. Specifically, a quantum internet enables very simple quantum devices to connect to a remote quantum computer in such a way that computations can be performed there without the quantum computer finding out what this computation actually is (the input and output quantum states can not be measured without destroying the computation, but the circuit composition used for the calculation will be known).
=== Secure communications ===
When it comes to communicating in any form the largest issue has always been keeping these communications private. Quantum networks would allow for information to be created, stored and transmitted, potentially achieving "a level of privacy, security and computational clout that is impossible to achieve with today’s Internet."
By applying a quantum operator that the user selects to a system of information, the information can be sent to the receiver without an eavesdropper being able to accurately record the sent information without either the sender or receiver knowing. Unlike classical information, which is transmitted in bits assigned either a 0 or 1 value, the quantum information used in quantum networks uses quantum bits (qubits), which can have both 0 and 1 values at the same time, being in a state of superposition. This works because if a listener tries to listen in, they will change the information in an unintended way by listening, thereby tipping their hand to the people they are attacking. Secondly, without the proper quantum operator to decode the information, they will corrupt the sent information without being able to use it themselves. Furthermore, qubits can be encoded in a variety of materials, including in the polarization of photons or the spin states of electrons.
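The basis-choice-and-sifting logic behind a protocol such as BB84 can be sketched classically (Python; the simulation models only random basis choices and sifting, not the physical photon transmission, and omits the eavesdropper, whose intercept-resend attack would introduce roughly 25% disagreement in the sifted key):

```python
import random

rng = random.Random(7)  # seeded for reproducibility
n = 64

# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
alice_bits  = [rng.randint(0, 1) for _ in range(n)]
alice_bases = [rng.randint(0, 1) for _ in range(n)]

# Bob measures each photon in a randomly chosen basis. With no
# eavesdropper, a matching basis reproduces Alice's bit; a mismatched
# basis yields a uniformly random result.
bob_bases = [rng.randint(0, 1) for _ in range(n)]
bob_bits = [a if ab == bb else rng.randint(0, 1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: keep only positions where the bases agree (about half of them).
sifted = [(a, b) for a, b, ab, bb
          in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
key_alice = [a for a, _ in sifted]
key_bob   = [b for _, b in sifted]

assert key_alice == key_bob  # without Eve, the sifted keys agree exactly
```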
== Current status ==
=== Quantum internet ===
One example of a prototype quantum communication network is the eight-user city-scale quantum network described in a paper published in September 2020. The network, located in Bristol, used already-deployed fibre infrastructure and worked without active switching or trusted nodes.
In 2022, researchers at the University of Science and Technology of China and the Jinan Institute of Quantum Technology demonstrated quantum entanglement between two memory devices located 12.5 km apart within an urban environment.
In the same year, physicists at Delft University of Technology in the Netherlands took a significant step toward the network of the future by using quantum teleportation to send data between three physical locations, which was previously only possible between two.
In 2024, researchers in the U.K. and Germany achieved a first by producing, storing, and retrieving quantum information. This milestone involved interfacing a quantum dot light source and a quantum memory system, paving the way for practical applications despite challenges like quantum information loss over long distances.
In February 2025, researchers from Oxford University experimentally demonstrated the distribution of quantum computations between two photonically interconnected trapped-ion modules. Each module contained dedicated network and circuit qubits, and they were separated by approximately two meters. The team achieved deterministic teleportation of a controlled-Z gate between two circuit qubits located in separate modules, attaining an 86% fidelity. This experiment also marked the first implementation of a distributed quantum algorithm comprising multiple non-local two-qubit gates, specifically Grover's search algorithm, which was executed with a 71% success rate. These advancements represented significant progress toward scalable quantum computing and the development of a quantum internet.
=== Quantum networks for computation ===
In 2021, researchers at the Max Planck Institute of Quantum Optics in Germany reported a first prototype of quantum logic gates for distributed quantum computers.
=== Experimental quantum modems ===
A research team at the Max-Planck-Institute of Quantum Optics in Garching, Germany is finding success in transporting quantum data from flying and stable qubits via infrared spectrum matching. This requires a sophisticated, super-cooled yttrium silicate crystal to sandwich erbium in a mirrored environment to achieve resonance matching of infrared wavelengths found in fiber optic networks. The team successfully demonstrated the device works without data loss.
=== Mobile quantum networks ===
In 2021, researchers in China reported the successful transmission of entangled photons between drones, used as nodes for the development of mobile quantum networks or flexible network extensions. This could be the first work in which entangled particles were sent between two moving devices. The application of quantum communication to improving 6G mobile networks has also been researched, for joint detection and data transfer with quantum entanglement, where possible advantages include security and energy efficiency.
=== Quantum key distribution networks ===
Several test networks have been deployed that are tailored to the task of quantum key distribution either at short distances (but connecting many users), or over larger distances by relying on trusted repeaters. These networks do not yet allow for the end to end transmission of qubits or the end to end creation of entanglement between far away nodes.
DARPA Quantum Network
Starting in the early 2000s, DARPA began sponsorship of a quantum network development project with the aim of implementing secure communication. The DARPA Quantum Network became operational within the BBN Technologies laboratory in late 2003 and was expanded further in 2004 to include nodes at Harvard and Boston Universities. The network consists of multiple physical layers, including fiber optics supporting phase-modulated lasers and entangled photons, as well as free-space links.
SECOQC Vienna QKD network
From 2003 to 2008 the Secure Communication based on Quantum Cryptography (SECOQC) project developed a collaborative network between a number of European institutions. The architecture chosen for the SECOQC project is a trusted repeater architecture which consists of point-to-point quantum links between devices where long distance communication is accomplished through the use of repeaters.
Chinese hierarchical network
In May 2009, a hierarchical quantum network was demonstrated in Wuhu, China. The hierarchical network consists of a backbone network of four nodes connecting a number of subnets. The backbone nodes are connected through an optical switching quantum router. Nodes within each subnet are also connected through an optical switch and are connected to the backbone network through a trusted relay.
Geneva area network (SwissQuantum)
The SwissQuantum network, developed and tested between 2009 and 2011, linked facilities at CERN with the University of Geneva and hepia in Geneva. The SwissQuantum program focused on transitioning the technologies developed in the SECOQC and other research quantum networks into a production environment, in particular on integration with existing telecommunication networks and on reliability and robustness.
Tokyo QKD network
In 2010, a number of organizations from Japan and the European Union set up and tested the Tokyo QKD network. The Tokyo network built upon existing QKD technologies and adopted a SECOQC-like network architecture. For the first time, one-time-pad encryption was implemented at data rates high enough to support popular end-user applications such as secure voice and video conferencing. Previous large-scale QKD networks typically used classical encryption algorithms such as AES for high-rate data transfer, using the quantum-derived keys for low-rate data or for regularly re-keying the classical encryption algorithms.
Beijing-Shanghai Trunk Line
In September 2017, a 2,000-km quantum key distribution network between Beijing and Shanghai, China, was officially opened. This trunk line serves as a backbone connecting quantum networks in Beijing, Shanghai, Jinan in Shandong province and Hefei in Anhui province. During the opening ceremony, two employees from the Bank of Communications completed a transaction from Shanghai to Beijing using the network. The State Grid Corporation of China is also developing a managing application for the link. The line uses 32 trusted nodes as repeaters. A quantum telecommunication network has also been put into service in Wuhan, capital of central China's Hubei Province, and will be connected to the trunk. Other similar city quantum networks along the Yangtze River are planned to follow.
In 2021, researchers working on this network of networks reported that they combined over 700 optical fibers with two QKD-ground-to-satellite links using a trusted relay structure for a total distance between nodes of up to ~4,600 km, which makes it Earth's largest integrated quantum communication network.
IQNET
IQNET (Intelligent Quantum Networks and Technologies) was founded in 2017 by Caltech and AT&T. Together, they are collaborating with the Fermi National Accelerator Laboratory and the Jet Propulsion Laboratory. In December 2020, IQNET published a work in PRX Quantum that reported successful teleportation of time-bin qubits across 44 km of fiber. For the first time, the published work included theoretical modelling of the experimental setup. The two test beds for the performed measurements were the Caltech Quantum Network and the Fermilab Quantum Network. This research represents an important step in establishing a quantum internet of the future, which would revolutionise the fields of secure communication, data storage, precision sensing, and computing.
== See also ==
Quantum mechanics
Quantum computer
Quantum bus
== References ==
== Further reading ==
== External links ==
https://web.archive.org/web/20090716121402/http://itvibe.com/news/2583/
http://www.vnunet.com/vnunet/news/2125164/first-quantum-computr-network-goes-online
Elliott, Chip (2004). "The DARPA Quantum Network". arXiv:quant-ph/0412029.
http://www.cse.wustl.edu/~jain/cse571-07/ftp/quantum/
https://web.archive.org/web/20141229113448/http://www.ipod.org.uk/reality/reality_quantum_entanglement.asp
Quantum Theory: Concepts and Methods is a 1993 quantum physics textbook by Israeli physicist Asher Peres. Well-regarded among the physics community, it is known for unconventional choices of topics to include.
== Contents ==
In his preface, Peres summarized his goals as follows:
The purpose of this book is to clarify the conceptual meaning of quantum theory, and to explain some of the mathematical methods that it utilizes. This text is not concerned with specialized topics such as atomic structure, or strong or weak interactions, but with the very foundations of the theory. This is not, however, a book on the philosophy of science. The approach is pragmatic and strictly instrumentalist. This attitude will undoubtedly antagonize some readers, but it has its own logic: quantum phenomena do not occur in a Hilbert space, they occur in a laboratory.
The book is divided into three parts. The first, "Gathering the Tools", introduces quantum mechanics as a theory of "preparations" and "tests", and it develops the mathematical formalism of Hilbert spaces, concluding with the spectral theory used to understand the quantum mechanics of continuous-valued observables. Part II, "Cryptodeterminism and Quantum Inseparability", focuses on Bell's theorem and other demonstrations that quantum mechanics is incompatible with local hidden-variable theories. (Within its substantial discussion of the failure of hidden variable theories, the book includes a FORTRAN program for testing whether a list of vectors forms a Kochen–Specker configuration.) Part III, "Quantum Dynamics and Information", covers the role of spacetime symmetry in quantum physics, the relation of quantum information to thermodynamics, semiclassical approximation methods, quantum chaos, and the treatment of measurement in quantum mechanics.
To generate the figures in his chapter on quantum chaos, including plots in phase space of chaotic motion, Peres wrote PostScript code that executed simulations in the printer itself.
The book develops the methodology of mathematically representing quantum measurements by POVMs, and it provided the first pedagogical treatment of how to use a POVM for quantum key distribution. Peres downplayed the importance of the uncertainty principle: that specific term appears only once in his index, and its entry points back to the very index page on which it appears. The text itself does discuss the uncertainty principle, pointing out how an oversimplified "derivation" of it breaks down, and posing as a homework problem the task of finding three quantum-physics textbooks with a demonstrably incorrect uncertainty relation.
== Reception ==
Physicist Leslie E. Ballentine gave the textbook a positive review, declaring it a good introduction to quantum foundations and ongoing research therein. John C. Baez also gave the book a positive assessment, calling it "clear-headed" and finding that it contained "a lot of gems that I hadn't seen", such as the Wigner–Araki–Yanase theorem. Michael Nielsen wrote of the textbook, "Revelation! Suddenly, all the key results of 30 years of work (several of those results due to Asher) were distilled into beautiful and simple explanations." Nielsen and Isaac Chuang said in their own influential textbook that Peres' was "superb", providing "an extremely clear exposition of elementary quantum mechanics" as well as an "extensive discussion of the Bell inequalities and related results". Jochen Rau's introductory textbook on quantum physics described Peres' work as "an excellent place to start" learning about Bell inequalities and related topics like Gleason's theorem.
N. David Mermin wrote that Peres had bridged the "textual gap" between conceptually-oriented books, aimed at understanding what quantum physics implies about the nature of the world, and more practical books intended to teach how to apply quantum mechanics. Mermin found the book praiseworthy, noting that he had "only a few complaints". He wrote:
Peres is careless in discriminating among the various kinds of assumptions one needs to prove the impossibility of a no-hidden-variables theory that reproduces the statistical predictions of quantum mechanics. I would guess that this is because even though he is a master practitioner of this particular art form, deep in his heart he is so firmly convinced that hidden variables cannot capture the essence of quantum mechanics, that he is simply not interested in precisely what you need to assume to prove that they cannot.
Mermin called the book "a treasure trove of novel perspectives on quantum mechanics" and said that Peres' choice of topics is "a catalogue of common omissions" from other approaches.
Meinhard E. Mayer declared that he would "recommend it to anyone teaching or studying quantum mechanics", finding Part II the most interesting of the book. While he noted some disappointment with Peres' selection of topics to include in the chapter on measurement, he reserved most of his negativity for the publisher, saying (as Ballentine also did) that they had priced the book beyond the reach of graduate students:
Such pricing practices are not justified when one considers that many publishers provide very little copyediting or typesetting any more, as is obvious from the "TeX"-ish look of most books published recently, this one included.
Mermin, Mayer and Baez noted that Peres briefly dismissed the many-worlds interpretation of quantum mechanics. Peres argued that all varieties of many-worlds interpretations merely shifted the arbitrariness or vagueness of the wavefunction collapse idea to the question of when "worlds" can be regarded as separate, and that no objective criterion for that separation can actually be formulated. Moreover, Peres dismissed "spontaneous collapse" models like Ghirardi–Rimini–Weber theory in the same brief section, designating them "mutations" of quantum mechanics. In a review that praised the book's thoroughness, Tony Sudbery noted that Peres disparaged the idea that human consciousness plays a special role in quantum mechanics.
Manuel Bächtold analyzed Peres' textbook from a standpoint of philosophical pragmatism. John Conway and Simon Kochen used a Kochen–Specker configuration from the book in order to prove their free will theorem. Peres' insistence in his textbook that the classical analogue of a quantum state is a Liouville density function was influential in the development of QBism.
== Related works ==
John Watrous places Peres' textbook among the "indispensable references", along with Nielsen and Chuang's Quantum Computation and Quantum Information and Mark Wilde's Quantum Information Theory. In their obituary for Peres, William Wootters, Charles Bennett and coauthors call Quantum Theory: Concepts and Methods the "modern successor" to John von Neumann's 1955 Mathematical Foundations of Quantum Mechanics.
== Editions ==
Peres, Asher (1993). Quantum Theory: Concepts and Methods. Kluwer. ISBN 0-7923-2549-4. OCLC 28854083. Original hardcover.
Peres, Asher (1995). Quantum Theory: Concepts and Methods. Kluwer. ISBN 9780792336327. OCLC 901395752. Paperback reprint.
Peres, Asher (2001). ペレス量子論の概念と手法―先端研究へのアプローチ (in Japanese). Translated by Ōba, Ichirō; Yamanaka, Yoshiya; Nakazato, Hiromichi. Maruzen. ISBN 9784621049228. OCLC 834645102.
== Notes ==
== References ==
Symmetries in quantum mechanics describe features of spacetime and particles which are unchanged under some transformation, in the context of quantum mechanics, relativistic quantum mechanics and quantum field theory, and with applications in the mathematical formulation of the standard model and condensed matter physics. In general, symmetry in physics, invariance, and conservation laws are fundamentally important constraints for formulating physical theories and models. In practice, they are powerful methods for solving problems and predicting what can happen. While conservation laws do not always give the answer to a problem directly, they form the correct constraints and the first steps to solving a multitude of problems. In application, understanding symmetries can also provide insight into the eigenstates that can be expected. For example, the existence of degenerate states can be inferred from the presence of non-commuting symmetry operators, and non-degenerate states are also eigenvectors of symmetry operators.
This article outlines the connection between the classical form of continuous symmetries as well as their quantum operators, and relates them to the Lie groups, and relativistic transformations in the Lorentz group and Poincaré group.
== Notation ==
The notational conventions used in this article are as follows. Boldface indicates vectors, four vectors, matrices, and vectorial operators, while quantum states use bra–ket notation. Wide hats are for operators, narrow hats are for unit vectors (including their components in tensor index notation). The summation convention on the repeated tensor indices is used, unless stated otherwise. The Minkowski metric signature is (+−−−).
== Symmetry transformations on the wavefunction in non-relativistic quantum mechanics ==
=== Continuous symmetries ===
Generally, the correspondence between continuous symmetries and conservation laws is given by Noether's theorem.
The form of the fundamental quantum operators, for example the energy operator as a partial time derivative and momentum operator as a spatial gradient, becomes clear when one considers the initial state, then changes one parameter of it slightly. This can be done for displacements (lengths), durations (time), and angles (rotations). Additionally, the invariance of certain quantities can be seen by making such changes in lengths and angles, illustrating conservation of these quantities.
In what follows, transformations on only one-particle wavefunctions in the form:
{\displaystyle {\widehat {\Omega }}\psi (\mathbf {r} ,t)=\psi (\mathbf {r} ',t')}
are considered, where {\displaystyle {\widehat {\Omega }}} denotes a unitary operator. Unitarity is generally required for operators representing transformations of space, time, and spin, since the norm of a state (representing the total probability of finding the particle somewhere with some spin) must be invariant under these transformations. The inverse is the Hermitian conjugate {\displaystyle {\widehat {\Omega }}^{-1}={\widehat {\Omega }}^{\dagger }}. The results can be extended to many-particle wavefunctions. Written in Dirac notation as standard, the transformations on quantum state vectors are:
{\displaystyle {\widehat {\Omega }}\left|\mathbf {r} (t)\right\rangle =\left|\mathbf {r} '(t')\right\rangle }
Now, the action of {\displaystyle {\widehat {\Omega }}} changes ψ(r, t) to ψ(r′, t′), so the inverse {\displaystyle {\widehat {\Omega }}^{-1}={\widehat {\Omega }}^{\dagger }} changes ψ(r′, t′) back to ψ(r, t). Thus, an operator {\displaystyle {\widehat {A}}} invariant under {\displaystyle {\widehat {\Omega }}} satisfies:
{\displaystyle {\widehat {A}}\psi ={\widehat {\Omega }}^{\dagger }{\widehat {A}}{\widehat {\Omega }}\psi \quad \Rightarrow \quad {\widehat {\Omega }}{\widehat {A}}\psi ={\widehat {A}}{\widehat {\Omega }}\psi .}
Concomitantly, {\displaystyle [{\widehat {\Omega }},{\widehat {A}}]\psi =0} for any state ψ. Quantum operators representing observables are also required to be Hermitian so that their eigenvalues are real numbers, i.e. the operator equals its Hermitian conjugate, {\displaystyle {\widehat {A}}={\widehat {A}}^{\dagger }}.
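The equivalence of invariance (A = Ω†AΩ) and a vanishing commutator can be illustrated in finite dimensions. The following is a minimal NumPy sketch, not from the article: the symmetry is a rotation about the z-axis, and the invariant observable is an axially symmetric matrix chosen for illustration.

```python
import numpy as np

# A unitary (here, real orthogonal) symmetry: rotation about the z-axis.
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])

# An observable invariant under that rotation (axially symmetric, illustrative choice).
A = np.diag([2.0, 2.0, 5.0])

# Invariance A = U† A U ...
assert np.allclose(Rz.conj().T @ A @ Rz, A)
# ... is equivalent to the vanishing commutator [U, A] = 0.
assert np.allclose(Rz @ A - A @ Rz, np.zeros((3, 3)))
```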
=== Overview of Lie group theory ===
Following are the key points of group theory relevant to quantum theory; examples are given throughout the article. For an alternative approach using matrix groups, see the books of Hall.
Let G be a Lie group, which is a group that locally is parameterized by a finite number N of real continuously varying parameters ξ1, ξ2, ..., ξN. In more mathematical language, this means that G is a smooth manifold that is also a group, for which the group operations are smooth.
the dimension of the group, N, is the number of parameters it has.
the group elements, g, in G are functions of the parameters:
{\displaystyle g=G(\xi _{1},\xi _{2},\dots )}
and all parameters set to zero returns the identity element of the group:
{\displaystyle I=G(0,0,\dots )}
Group elements are often matrices which act on vectors, or transformations acting on functions.
The generators of the group are the partial derivatives of the group elements with respect to the group parameters with the result evaluated when the parameter is set to zero:
{\displaystyle X_{j}=\left.{\frac {\partial g}{\partial \xi _{j}}}\right|_{\xi _{j}=0}}
In the language of manifolds, the generators are the elements of the tangent space to G at the identity. The generators are also known as infinitesimal group elements or as the elements of the Lie algebra of G. (See the discussion below of the commutator.) One aspect of generators in theoretical physics is they can be constructed themselves as operators corresponding to symmetries, which may be written as matrices, or as differential operators. In quantum theory, for unitary representations of the group, the generators require a factor of i:
{\displaystyle X_{j}=i\left.{\frac {\partial g}{\partial \xi _{j}}}\right|_{\xi _{j}=0}}
The generators of the group form a vector space, which means that linear combinations of generators are also generators.
The generators (whether matrices or differential operators) satisfy the commutation relations:
{\displaystyle \left[X_{a},X_{b}\right]=if_{abc}X_{c}}
where fabc are the (basis dependent) structure constants of the group. This makes, together with the vector space property, the set of all generators of a group a Lie algebra. Due to the antisymmetry of the bracket, the structure constants of the group are antisymmetric in the first two indices.
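For a concrete case, the structure constants of su(2) can be extracted numerically from the spin-1/2 generators X_a = σ_a/2, using the normalization Tr(X_c X_d) = δ_cd/2; they should reproduce the Levi-Civita symbol. This is an illustrative sketch, not part of the article:

```python
import numpy as np

# Pauli matrices and the su(2) generators X_a = sigma_a / 2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
X = [s / 2 for s in (sx, sy, sz)]

def comm(a, b):
    return a @ b - b @ a

# From [X_a, X_b] = i f_abc X_c and Tr(X_c X_d) = delta_cd / 2,
# f_abc = -2i Tr([X_a, X_b] X_c).
f = np.zeros((3, 3, 3))
for a in range(3):
    for b in range(3):
        for c in range(3):
            f[a, b, c] = np.real(-2j * np.trace(comm(X[a], X[b]) @ X[c]))

# The structure constants of su(2) are the Levi-Civita symbol.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
assert np.allclose(f, eps)
```

Note the antisymmetry f_abc = −f_bac is visible directly in the computed array, as stated above.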
The representations of the group then describe the ways that the group G (or its Lie algebra) can act on a vector space. (The vector space might be, for example, the space of eigenvectors for a Hamiltonian having G as its symmetry group.) We denote the representations using a capital D. One can then differentiate D to obtain a representation of the Lie algebra, often also denoted by D. These two representations are related as follows:
{\displaystyle D[g(\xi _{j})]\equiv D(\xi _{j})=e^{i\xi _{j}D(X_{j})}}
without summation on the repeated index j. Representations are linear operators that take in group elements and preserve the composition rule:
{\displaystyle D(\xi _{a})D(\xi _{b})=D(\xi _{a}\xi _{b}).}
A representation which cannot be decomposed into a direct sum of other representations, is called irreducible. It is conventional to label irreducible representations by a superscripted number n in brackets, as in D(n), or if there is more than one number, we write D(n, m, ...).
There is an additional subtlety that arises in quantum theory, where two vectors that differ by multiplication by a scalar represent the same physical state. Here, the pertinent notion of representation is a projective representation, one that only satisfies the composition law up to a scalar. In the context of quantum mechanical spin, such representations are called spinorial.
=== Momentum and energy as generators of translation and time evolution, and rotation ===
The space translation operator {\displaystyle {\widehat {T}}(\Delta \mathbf {r} )} acts on a wavefunction to shift the space coordinates by an infinitesimal displacement Δr. The explicit expression for {\displaystyle {\widehat {T}}} can be quickly determined by a Taylor expansion of ψ(r + Δr, t) about r, then (keeping the first order term and neglecting second and higher order terms) replacing the space derivatives by the momentum operator {\displaystyle {\widehat {\mathbf {p} }}}. Similarly for the time translation operator acting on the time parameter: the Taylor expansion of ψ(r, t + Δt) is about t, and the time derivative is replaced by the energy operator {\displaystyle {\widehat {E}}}.
The exponential functions arise by definition as those limits, due to Euler, and can be understood physically and mathematically as follows. A net translation can be composed of many small translations, so to obtain the translation operator for a finite increment, replace Δr by Δr/N and Δt by Δt/N, where N is a positive non-zero integer. Then as N increases, the magnitudes of Δr and Δt become ever smaller, while leaving the directions unchanged. Applying the infinitesimal operators to the wavefunction N times and taking the limit as N tends to infinity gives the finite operators.
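The limit of many small transformations, (I + A/N)^N → exp(A), can be checked numerically. The following sketch (illustrative, not from the article) uses the 2D rotation generator, so the limit should be the rotation matrix through angle θ:

```python
import numpy as np

# Generator of SO(2); A = theta * G is an "infinitesimal rotation" scaled up.
theta = 0.5
G = np.array([[0.0, -1.0], [1.0, 0.0]])
A = theta * G

# Compose N tiny transformations (I + A/N); as N grows this tends to exp(A).
N = 200_000
approx = np.linalg.matrix_power(np.eye(2) + A / N, N)

# exp(theta * G) is the finite rotation through theta.
exact = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
assert np.allclose(approx, exact, atol=1e-4)
```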
Space and time translations commute, which means the operators and generators commute.
For a time-independent Hamiltonian, energy is conserved in time and quantum states are stationary states: the eigenstates of the Hamiltonian have energy eigenvalues E:
{\displaystyle {\widehat {U}}(t)=\exp \left(-{\frac {i\Delta tE}{\hbar }}\right)}
and all stationary states have the form
{\displaystyle \psi (\mathbf {r} ,t+t_{0})={\widehat {U}}(t-t_{0})\psi (\mathbf {r} ,t_{0})}
where t0 is the initial time, usually set to zero since no generality is lost in doing so.
An alternative notation is {\displaystyle {\widehat {U}}(t-t_{0})\equiv U(t,t_{0})}.
=== Angular momentum as the generator of rotations ===
==== Orbital angular momentum ====
The rotation operator, {\displaystyle {\widehat {R}}}, acts on a wavefunction to rotate the spatial coordinates of a particle by a constant angle Δθ:
{\displaystyle {\widehat {R}}(\Delta \theta ,{\hat {\mathbf {a} }})\psi (\mathbf {r} ,t)=\psi (\mathbf {r} ',t)}
where r′ are the rotated coordinates about an axis defined by a unit vector {\displaystyle {\hat {\mathbf {a} }}=(a_{1},a_{2},a_{3})} through an angular increment Δθ, given by:
{\displaystyle \mathbf {r} '={\widehat {R}}(\Delta \theta ,{\hat {\mathbf {a} }})\mathbf {r} \,.}
where {\displaystyle {\widehat {R}}(\Delta \theta ,{\hat {\mathbf {a} }})} is a rotation matrix dependent on the axis and angle. In group theoretic language, the rotation matrices are group elements, and the angles and axis {\displaystyle \Delta \theta {\hat {\mathbf {a} }}=\Delta \theta (a_{1},a_{2},a_{3})} are the parameters of the three-dimensional special orthogonal group, SO(3). The rotation matrices about the standard Cartesian basis vectors {\displaystyle {\hat {\mathbf {e} }}_{x},{\hat {\mathbf {e} }}_{y},{\hat {\mathbf {e} }}_{z}} through angle Δθ, and the corresponding generators of rotations J = (Jx, Jy, Jz), are:
More generally, for rotations about an axis defined by {\displaystyle {\hat {\mathbf {a} }}}, the rotation matrix elements are:
{\displaystyle [{\widehat {R}}(\theta ,{\hat {\mathbf {a} }})]_{ij}=(\delta _{ij}-a_{i}a_{j})\cos \theta -\varepsilon _{ijk}a_{k}\sin \theta +a_{i}a_{j}}
where δij is the Kronecker delta, and εijk is the Levi-Civita symbol.
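The axis-angle formula above can be implemented and checked directly: for the z-axis it must reduce to the standard rotation matrix, and for any unit axis it must give an orthogonal matrix with determinant +1. A NumPy sketch:

```python
import numpy as np

def rotation(theta, a):
    """R_ij = (delta_ij - a_i a_j) cos(theta) - eps_ijk a_k sin(theta) + a_i a_j."""
    a = np.asarray(a, dtype=float)
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
    return ((np.eye(3) - np.outer(a, a)) * np.cos(theta)
            - np.einsum('ijk,k->ij', eps, a) * np.sin(theta)
            + np.outer(a, a))

# Special case: rotation about the z-axis reduces to the standard matrix.
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])
assert np.allclose(rotation(theta, [0, 0, 1]), Rz)

# General case: any axis-angle rotation is orthogonal with determinant +1.
R = rotation(1.1, np.array([1.0, 2.0, 2.0]) / 3.0)
assert np.allclose(R.T @ R, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```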
It is not as obvious how to determine the rotation operator compared to space and time translations. We may consider a special case (rotations about the x, y, or z-axis) and then infer the general result, or use the general rotation matrix directly together with tensor index notation involving δij and εijk. To derive the infinitesimal rotation operator, which corresponds to small Δθ, we use the small angle approximations sin(Δθ) ≈ Δθ and cos(Δθ) ≈ 1, then Taylor expand about r or ri, keep the first order term, and substitute the angular momentum operator components.
The z-component of angular momentum can be replaced by the component along the axis defined by {\displaystyle {\hat {\mathbf {a} }}}, using the dot product {\displaystyle {\hat {\mathbf {a} }}\cdot {\widehat {\mathbf {L} }}}.
Again, a finite rotation can be made from many small rotations, replacing Δθ by Δθ/N and taking the limit as N tends to infinity gives the rotation operator for a finite rotation.
Rotations about the same axis do commute, for example a rotation through angles θ1 and θ2 about axis i can be written
{\displaystyle R(\theta _{1}+\theta _{2},\mathbf {e} _{i})=R(\theta _{1}\mathbf {e} _{i})R(\theta _{2}\mathbf {e} _{i})\,,\quad [R(\theta _{1}\mathbf {e} _{i}),R(\theta _{2}\mathbf {e} _{i})]=0\,.}
However, rotations about different axes do not commute. The general commutation rules are summarized by
{\displaystyle [L_{i},L_{j}]=i\hbar \varepsilon _{ijk}L_{k}.}
In this sense, orbital angular momentum has the common sense properties of rotations. Each of the above commutators can be easily demonstrated by holding an everyday object and rotating it through the same angle about any two different axes in both possible orderings; the final configurations are different.
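The commutation relations above can be verified in a concrete matrix representation. The sketch below (illustrative, with ħ set to 1) uses the adjoint representation of the rotation generators, (L_a)_bc = −i ε_abc, and checks [L_i, L_j] = i ε_ijk L_k for every pair:

```python
import numpy as np

# Levi-Civita symbol.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# Adjoint representation of the rotation generators: (L_a)_{bc} = -i eps_{abc}.
L = [-1j * eps[a] for a in range(3)]

# Verify [L_i, L_j] = i eps_{ijk} L_k (hbar = 1) for all pairs (i, j).
for i in range(3):
    for j in range(3):
        lhs = L[i] @ L[j] - L[j] @ L[i]
        rhs = 1j * sum(eps[i, j, k] * L[k] for k in range(3))
        assert np.allclose(lhs, rhs)
```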
In quantum mechanics, there is another form of rotation which mathematically appears similar to the orbital case, but has different properties, described next.
==== Spin angular momentum ====
All previous quantities have classical definitions. Spin is a quantity possessed by particles in quantum mechanics without any classical analogue, having the units of angular momentum. The spin vector operator is denoted {\displaystyle {\widehat {\mathbf {S} }}=({\widehat {S_{x}}},{\widehat {S_{y}}},{\widehat {S_{z}}})}. The eigenvalues of its components are the possible outcomes (in units of {\displaystyle \hbar }) of a measurement of the spin projected onto one of the basis directions.
A rotation of ordinary space through angle θ about an axis defined by the unit vector {\displaystyle {\hat {\mathbf {a} }}}, acting on a multicomponent wave function (spinor) at a point in space, is represented by:
However, unlike orbital angular momentum, in which the z-projection quantum number can take only integer values (positive, negative, or zero), the z-projection spin quantum number can also take all positive and negative half-integer values. There are rotation matrices for each spin quantum number.
Evaluating the exponential for a given z-projection spin quantum number s gives a (2s + 1)-dimensional spin matrix. This can be used to define a spinor as a column vector of 2s + 1 components which transforms to a rotated coordinate system according to the spin matrix at a fixed point in space.
For the simplest non-trivial case of s = 1/2, the spin operator is given by {\displaystyle {\widehat {\mathbf {S} }}={\frac {\hbar }{2}}{\boldsymbol {\sigma }}} where the Pauli matrices in the standard representation are:
{\displaystyle \sigma _{1}=\sigma _{x}={\begin{pmatrix}0&1\\1&0\end{pmatrix}}\,,\quad \sigma _{2}=\sigma _{y}={\begin{pmatrix}0&-i\\i&0\end{pmatrix}}\,,\quad \sigma _{3}=\sigma _{z}={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}}
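The spin-1/2 rotation operator exp(−iθ (a·σ)/2) can be evaluated numerically and compared with its closed form: since (a·σ)² = I, it equals cos(θ/2) I − i sin(θ/2) a·σ, and a full 2π rotation gives −I, the hallmark of a spinor. An illustrative NumPy sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_half_rotation(theta, n):
    """exp(-i theta (n.sigma)/2), computed by diagonalizing the Hermitian n.sigma."""
    ns = n[0] * sx + n[1] * sy + n[2] * sz
    vals, vecs = np.linalg.eigh(ns)
    return vecs @ np.diag(np.exp(-1j * theta / 2 * vals)) @ vecs.conj().T

n = np.array([1.0, 2.0, 2.0]) / 3.0   # unit axis
theta = 0.8
U = spin_half_rotation(theta, n)

# Closed form, valid because (n.sigma)^2 = I.
closed_form = (np.cos(theta / 2) * np.eye(2)
               - 1j * np.sin(theta / 2) * (n[0] * sx + n[1] * sy + n[2] * sz))
assert np.allclose(U, closed_form)

# A 2*pi rotation of a spinor is -I, not +I.
assert np.allclose(spin_half_rotation(2 * np.pi, n), -np.eye(2))
```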
==== Total angular momentum ====
The total angular momentum operator is the sum of the orbital and spin operators, {\displaystyle {\widehat {\mathbf {J} }}={\widehat {\mathbf {L} }}+{\widehat {\mathbf {S} }}}, and is an important quantity for multi-particle systems, especially in nuclear physics and the quantum chemistry of multi-electron atoms and molecules.
We have a similar rotation matrix:
{\displaystyle {\widehat {J}}(\theta ,{\hat {\mathbf {a} }})=\exp \left(-{\frac {i}{\hbar }}\theta {\hat {\mathbf {a} }}\cdot {\widehat {\mathbf {J} }}\right)}
=== Conserved quantities in the quantum harmonic oscillator ===
The dynamical symmetry group of the n-dimensional quantum harmonic oscillator is the special unitary group SU(n). As an example, the numbers of infinitesimal generators of the corresponding Lie algebras of SU(2) and SU(3) are three and eight, respectively. This leads to exactly three and eight independent conserved quantities (other than the Hamiltonian) in these systems.
The two dimensional quantum harmonic oscillator has the expected conserved quantities of the Hamiltonian and the angular momentum, but has additional hidden conserved quantities of energy level difference and another form of angular momentum.
== Lorentz group in relativistic quantum mechanics ==
Following is an overview of the Lorentz group; a treatment of boosts and rotations in spacetime. Throughout this section, see (for example) T. Ohlsson (2011) and E. Abers (2004).
Lorentz transformations can be parametrized by rapidity φ for a boost in the direction of a three-dimensional unit vector {\displaystyle {\hat {\mathbf {n} }}=(n_{1},n_{2},n_{3})}, and a rotation angle θ about a three-dimensional unit vector {\displaystyle {\hat {\mathbf {a} }}=(a_{1},a_{2},a_{3})} defining an axis, so {\displaystyle \varphi {\hat {\mathbf {n} }}=\varphi (n_{1},n_{2},n_{3})} and {\displaystyle \theta {\hat {\mathbf {a} }}=\theta (a_{1},a_{2},a_{3})} are together six parameters of the Lorentz group (three for rotations and three for boosts). The Lorentz group is 6-dimensional.
=== Pure rotations in spacetime ===
The rotation matrices and rotation generators considered above form the spacelike part of a four-dimensional matrix, representing pure-rotation Lorentz transformations. Three of the Lorentz group elements {\displaystyle {\widehat {R}}_{x},{\widehat {R}}_{y},{\widehat {R}}_{z}} and generators J = (J1, J2, J3) for pure rotations are:
The rotation matrices act on any four-vector A = (A0, A1, A2, A3) and rotate the space-like components according to {\displaystyle \mathbf {A} '={\widehat {R}}(\Delta \theta ,{\hat {\mathbf {n} }})\mathbf {A} } leaving the time-like coordinate unchanged. In matrix expressions, A is treated as a column vector.
=== Pure boosts in spacetime ===
A boost with velocity c tanh φ in the x, y, or z direction, given by the standard Cartesian basis vectors {\displaystyle {\hat {\mathbf {e} }}_{x},{\hat {\mathbf {e} }}_{y},{\hat {\mathbf {e} }}_{z}}, is described by a boost transformation matrix. These matrices {\displaystyle {\widehat {B}}_{x},{\widehat {B}}_{y},{\widehat {B}}_{z}} and the corresponding generators K = (K1, K2, K3) are the remaining three group elements and generators of the Lorentz group:
The boost matrices act on any four-vector A = (A0, A1, A2, A3) and mix the time-like and the space-like components, according to:
{\displaystyle \mathbf {A} '={\widehat {B}}(\varphi ,{\hat {\mathbf {n} }})\mathbf {A} }
The term "boost" refers to the relative velocity between two frames, and is not to be conflated with momentum as the generator of translations, as explained below.
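A boost's defining properties can be checked numerically: it preserves the Minkowski metric η = diag(1, −1, −1, −1), i.e. BᵀηB = η, and it carries a particle at rest to one moving with speed c tanh φ. The following sketch (illustrative, with the sign convention that a positive-rapidity boost along x gives the rest particle velocity −c tanh φ in the new frame) uses an explicit x-boost matrix:

```python
import numpy as np

# Pure boost along x with rapidity phi, acting on (ct, x, y, z).
phi = 1.2
ch, sh = np.cosh(phi), np.sinh(phi)
B = np.array([[ ch, -sh, 0, 0],
              [-sh,  ch, 0, 0],
              [  0,   0, 1, 0],
              [  0,   0, 0, 1]], dtype=float)

# The boost preserves the Minkowski metric: B^T eta B = eta.
eta = np.diag([1.0, -1.0, -1.0, -1.0])
assert np.allclose(B.T @ eta @ B, eta)

# A particle at rest, worldline point (ct, 0, 0, 0), acquires velocity
# x'/t' = -c tanh(phi) under this boost (units with c = 1 here).
xp = B @ np.array([1.0, 0.0, 0.0, 0.0])
assert np.isclose(xp[1] / xp[0], -np.tanh(phi))
```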
=== Combining boosts and rotations ===
Products of rotations give another rotation (rotations form a subgroup), while products of two boosts, or of a rotation and a boost, cannot in general be expressed as pure boosts or pure rotations. In general, any Lorentz transformation can be expressed as a product of a pure rotation and a pure boost. For more background see (for example) B.R. Durney (2011) and H.L. Berk et al. and references therein.
The boost and rotation generators have representations denoted D(K) and D(J) respectively; the capital D in this context indicates a group representation.
For the Lorentz group, the representations D(K) and D(J) of the generators K and J fulfill the following commutation rules.
In all the commutators, the boost generators mix with those for rotations, although commutators of rotation generators alone simply give another rotation generator. Exponentiating the generators gives the boost and rotation operators which combine into the general Lorentz transformation, under which the spacetime coordinates transform from one rest frame to another boosted and/or rotating frame. Likewise, exponentiating the representations of the generators gives the representations of the boost and rotation operators, under which a particle's spinor field transforms.
In the literature, the boost generators K and rotation generators J are sometimes combined into one generator for Lorentz transformations M, an antisymmetric four-dimensional matrix with entries:
{\displaystyle M^{0a}=-M^{a0}=K_{a}\,,\quad M^{ab}=\varepsilon _{abc}J_{c}\,.}
and correspondingly, the boost and rotation parameters are collected into another antisymmetric four-dimensional matrix ω, with entries:
{\displaystyle \omega _{0a}=-\omega _{a0}=\varphi n_{a}\,,\quad \omega _{ab}=\theta \varepsilon _{abc}a_{c}\,,}
The general Lorentz transformation is then:
{\displaystyle \Lambda (\varphi ,{\hat {\mathbf {n} }},\theta ,{\hat {\mathbf {a} }})=\exp \left(-{\frac {i}{2}}\omega _{\alpha \beta }M^{\alpha \beta }\right)=\exp \left[-{\frac {i}{2}}\left(\varphi {\hat {\mathbf {n} }}\cdot \mathbf {K} +\theta {\hat {\mathbf {a} }}\cdot \mathbf {J} \right)\right]}
with summation over repeated matrix indices α and β. The Λ matrices act on any four vector A = (A0, A1, A2, A3) and mix the time-like and the space-like components, according to:
{\displaystyle \mathbf {A} '=\Lambda (\varphi ,{\hat {\mathbf {n} }},\theta ,{\hat {\mathbf {a} }})\mathbf {A} }
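As a numerical sketch (not from the article; the rapidity value and the use of numpy are assumptions for illustration), a pure boost along x, written in closed cosh/sinh form rather than by exponentiating a generator, can be checked to preserve the Minkowski metric while mixing the time-like and space-like components of a four-vector:

```python
import numpy as np

phi = 0.7  # rapidity (assumed value for this illustration)
ch, sh = np.cosh(phi), np.sinh(phi)
# pure boost along x in closed form
B = np.array([[ch, -sh, 0, 0],
              [-sh, ch, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1.0]])

eta = np.diag([1.0, -1, -1, -1])  # Minkowski metric, signature (+,-,-,-)

# a Lorentz transformation must preserve the metric: B^T eta B = eta
assert np.allclose(B.T @ eta @ B, eta)

# it mixes time-like and space-like components of a four-vector:
A = np.array([1.0, 0.0, 0.0, 0.0])  # a purely time-like four-vector
A_prime = B @ A                     # now has a space-like component too
```

The condition B^T η B = η is exactly what makes B a Lorentz transformation; the same check applies to any product of boosts and rotations.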
=== Transformations of spinor wavefunctions in relativistic quantum mechanics ===
In relativistic quantum mechanics, wavefunctions are no longer single-component scalar fields, but now 2(2s + 1) component spinor fields, where s is the spin of the particle. The transformations of these functions in spacetime are given below.
Under a proper orthochronous Lorentz transformation (r, t) → Λ(r, t) in Minkowski space, all one-particle quantum states ψσ locally transform under some representation D of the Lorentz group:
{\displaystyle \psi _{\sigma }(\mathbf {r} ,t)\rightarrow D(\Lambda )\psi _{\sigma }(\Lambda ^{-1}(\mathbf {r} ,t))}
where D(Λ) is a finite-dimensional representation, in other words a (2s + 1)×(2s + 1) dimensional square matrix, and ψ is thought of as a column vector containing components with the (2s + 1) allowed values of σ:
{\displaystyle \psi (\mathbf {r} ,t)={\begin{bmatrix}\psi _{\sigma =s}(\mathbf {r} ,t)\\\psi _{\sigma =s-1}(\mathbf {r} ,t)\\\vdots \\\psi _{\sigma =-s+1}(\mathbf {r} ,t)\\\psi _{\sigma =-s}(\mathbf {r} ,t)\end{bmatrix}}\quad \rightleftharpoons \quad {\psi (\mathbf {r} ,t)}^{\dagger }={\begin{bmatrix}{\psi _{\sigma =s}(\mathbf {r} ,t)}^{\star }&{\psi _{\sigma =s-1}(\mathbf {r} ,t)}^{\star }&\cdots &{\psi _{\sigma =-s+1}(\mathbf {r} ,t)}^{\star }&{\psi _{\sigma =-s}(\mathbf {r} ,t)}^{\star }\end{bmatrix}}}
=== Real irreducible representations and spin ===
The irreducible representations of D(K) and D(J), in short "irreps", can be used to build up the spin representations of the Lorentz group. Defining new operators:
{\displaystyle \mathbf {A} ={\frac {\mathbf {J} +i\mathbf {K} }{2}}\,,\quad \mathbf {B} ={\frac {\mathbf {J} -i\mathbf {K} }{2}}\,,}
so that A and B are simply complex conjugates of each other, it follows that they satisfy the symmetrically formed commutators:
{\displaystyle \left[A_{i},A_{j}\right]=\varepsilon _{ijk}A_{k}\,,\quad \left[B_{i},B_{j}\right]=\varepsilon _{ijk}B_{k}\,,\quad \left[A_{i},B_{j}\right]=0\,,}
and these are essentially the commutators the orbital and spin angular momentum operators satisfy. Therefore, A and B form operator algebras analogous to angular momentum; same ladder operators, z-projections, etc., independently of each other, since their components mutually commute. By analogy with the spin quantum number, we can introduce positive integers or half-integers, a and b, with the corresponding sets of values m = a, a − 1, ..., −a + 1, −a and n = b, b − 1, ..., −b + 1, −b. The matrices satisfying the above commutation relations are the same as for spins a and b, and have components given by multiplying Kronecker delta values with angular momentum matrix elements:
{\displaystyle \left(A_{x}\right)_{m'n',mn}=\delta _{n'n}\left(J_{x}^{(m)}\right)_{m'm}\,\quad \left(B_{x}\right)_{m'n',mn}=\delta _{m'm}\left(J_{x}^{(n)}\right)_{n'n}}
{\displaystyle \left(A_{y}\right)_{m'n',mn}=\delta _{n'n}\left(J_{y}^{(m)}\right)_{m'm}\,\quad \left(B_{y}\right)_{m'n',mn}=\delta _{m'm}\left(J_{y}^{(n)}\right)_{n'n}}
{\displaystyle \left(A_{z}\right)_{m'n',mn}=\delta _{n'n}\left(J_{z}^{(m)}\right)_{m'm}\,\quad \left(B_{z}\right)_{m'n',mn}=\delta _{m'm}\left(J_{z}^{(n)}\right)_{n'n}}
where in each case the row number m′n′ and column number mn are separated by a comma, and in turn:
{\displaystyle \left(J_{z}^{(m)}\right)_{m'm}=m\delta _{m'm}\,\quad \left(J_{x}^{(m)}\pm iJ_{y}^{(m)}\right)_{m'm}=\delta _{m',m\pm 1}{\sqrt {(a\mp m)(a\pm m+1)}}}
and similarly for J(n). The three J(m) matrices are each (2m + 1)×(2m + 1) square matrices, and the three J(n) are each (2n + 1)×(2n + 1) square matrices. The integers or half-integers m and n enumerate all the irreducible representations by, in equivalent notations used by authors: D(m, n) ≡ (m, n) ≡ D(m) ⊗ D(n), which are each [(2m + 1)(2n + 1)]×[(2m + 1)(2n + 1)] square matrices.
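A minimal sketch of this construction (an illustration, assuming numpy; the convention [J_i, J_j] = i ε_ijk J_k is used here, whereas the passage above absorbs the factor of i into the generators): for m = n = 1/2 the two sets of operators act on the tensor-product space as J ⊗ I and I ⊗ J, giving two mutually commuting copies of the angular momentum algebra:

```python
import numpy as np

# spin-1/2 angular momentum matrices J = sigma/2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
J = [s / 2 for s in (sx, sy, sz)]

I2 = np.eye(2)
A = [np.kron(j, I2) for j in J]   # acts on the first (m) factor
B = [np.kron(I2, j) for j in J]   # acts on the second (n) factor

comm = lambda X, Y: X @ Y - Y @ X

# each family closes on itself as an su(2) algebra ...
assert np.allclose(comm(A[0], A[1]), 1j * A[2])
assert np.allclose(comm(B[0], B[1]), 1j * B[2])
# ... and the two families mutually commute: [A_i, B_j] = 0
assert all(np.allclose(comm(a, b), 0) for a in A for b in B)
```

The resulting 4×4 matrices act on the [(2m + 1)(2n + 1)]-dimensional space described above, here the D(1/2, 1/2) representation.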
Applying this to particles with spin s;
left-handed (2s + 1)-component spinors transform under the real irreps D(s, 0),
right-handed (2s + 1)-component spinors transform under the real irreps D(0, s),
taking direct sums symbolized by ⊕ (see direct sum of matrices for the simpler matrix concept), one obtains the representations under which 2(2s + 1)-component spinors transform: D(m, n) ⊕ D(n, m) where m + n = s. These are also real irreps, but as shown above, they split into complex conjugates.
In these cases the D refers to any of D(J), D(K), or a full Lorentz transformation D(Λ).
=== Relativistic wave equations ===
In the context of the Dirac equation and Weyl equation, the Weyl spinors satisfying the Weyl equation transform under the simplest irreducible spin representations of the Lorentz group, since the spin quantum number in this case is the smallest non-zero number allowed: 1/2. The 2-component left-handed Weyl spinor transforms under D(1/2, 0) and the 2-component right-handed Weyl spinor transforms under D(0, 1/2). Dirac spinors satisfying the Dirac equation transform under the representation D(1/2, 0) ⊕ D(0, 1/2), the direct sum of the irreps for the Weyl spinors.
== The Poincaré group in relativistic quantum mechanics and field theory ==
Space translations, time translations, rotations, and boosts, all taken together, constitute the Poincaré group. The group elements are the three rotation matrices and three boost matrices (as in the Lorentz group), plus one for time translations and three for space translations in spacetime. There is a generator for each. Therefore, the Poincaré group is 10-dimensional.
In special relativity, space and time can be collected into a four-position vector X = (ct, −r), and in parallel so can energy and momentum which combine into a four-momentum vector P = (E/c, −p). With relativistic quantum mechanics in mind, the time duration and spatial displacement parameters (four in total, one for time and three for space) combine into a spacetime displacement ΔX = (cΔt, −Δr), and the energy and momentum operators are inserted in the four-momentum to obtain a four-momentum operator,
{\displaystyle {\widehat {\mathbf {P} }}=\left({\frac {\widehat {E}}{c}},-{\widehat {\mathbf {p} }}\right)=i\hbar \left({\frac {1}{c}}{\frac {\partial }{\partial t}},\nabla \right)\,,}
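A small numerical sketch (the values of p, E and the step size are illustrative, with ħ = 1 and one spatial dimension): applied to a plane wave, the energy and momentum operators return E and p as eigenvalues, here approximated by central finite differences:

```python
import cmath

# plane wave psi(x, t) = exp(i(p x - E t)), with hbar = 1
p, E = 2.0, 3.0
psi = lambda x, t: cmath.exp(1j * (p * x - E * t))

h = 1e-6          # finite-difference step
x0, t0 = 0.4, 0.1  # arbitrary evaluation point

dpsi_dx = (psi(x0 + h, t0) - psi(x0 - h, t0)) / (2 * h)
dpsi_dt = (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2 * h)

p_eig = -1j * dpsi_dx / psi(x0, t0)  # -i d/dx gives the momentum p
E_eig = 1j * dpsi_dt / psi(x0, t0)   # +i d/dt gives the energy E
```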
which are the generators of spacetime translations (four in total, one time and three space):
{\displaystyle {\widehat {X}}(\Delta \mathbf {X} )=\exp \left(-{\frac {i}{\hbar }}\Delta \mathbf {X} \cdot {\widehat {\mathbf {P} }}\right)=\exp \left[-{\frac {i}{\hbar }}\left(\Delta t{\widehat {E}}+\Delta \mathbf {r} \cdot {\widehat {\mathbf {p} }}\right)\right]\,.}
There are commutation relations between the components of the four-momentum P (generators of spacetime translations) and the angular momentum M (generators of Lorentz transformations) that define the Poincaré algebra:
{\displaystyle [P_{\mu },P_{\nu }]=0\,}
{\displaystyle {\frac {1}{i}}[M_{\mu \nu },P_{\rho }]=\eta _{\mu \rho }P_{\nu }-\eta _{\nu \rho }P_{\mu }\,}
{\displaystyle {\frac {1}{i}}[M_{\mu \nu },M_{\rho \sigma }]=\eta _{\mu \rho }M_{\nu \sigma }-\eta _{\mu \sigma }M_{\nu \rho }-\eta _{\nu \rho }M_{\mu \sigma }+\eta _{\nu \sigma }M_{\mu \rho }\,}
where η is the Minkowski metric tensor. (It is common to drop any hats for the four-momentum operators in the commutation relations). These equations are an expression of the fundamental properties of space and time as far as they are known today. They have a classical counterpart where the commutators are replaced by Poisson brackets.
To describe spin in relativistic quantum mechanics, the Pauli–Lubanski pseudovector
{\displaystyle W_{\mu }={\frac {1}{2}}\varepsilon _{\mu \nu \rho \sigma }J^{\nu \rho }P^{\sigma },}
a Casimir operator, is the constant spin contribution to the total angular momentum, and there are commutation relations between P and W and between M and W:
{\displaystyle \left[P^{\mu },W^{\nu }\right]=0\,,}
{\displaystyle \left[J^{\mu \nu },W^{\rho }\right]=i\left(\eta ^{\rho \nu }W^{\mu }-\eta ^{\rho \mu }W^{\nu }\right)\,,}
{\displaystyle \left[W_{\mu },W_{\nu }\right]=-i\epsilon _{\mu \nu \rho \sigma }W^{\rho }P^{\sigma }\,.}
Invariants constructed from W, instances of Casimir invariants, can be used to classify irreducible representations of the Lorentz group.
== Symmetries in quantum field theory and particle physics ==
=== Unitary groups in quantum field theory ===
Group theory is an abstract way of mathematically analyzing symmetries. Unitary operators are paramount to quantum theory, so unitary groups are important in particle physics. The group of N dimensional unitary square matrices is denoted U(N). Unitary operators preserve inner products which means probabilities are also preserved, so the quantum mechanics of the system is invariant under unitary transformations. Let
{\displaystyle {\widehat {U}}}
be a unitary operator, so the inverse is the Hermitian adjoint
{\displaystyle {\widehat {U}}^{-1}={\widehat {U}}^{\dagger }}
, which commutes with the Hamiltonian:
{\displaystyle \left[{\widehat {U}},{\widehat {H}}\right]=0}
then the observable corresponding to the operator
{\displaystyle {\widehat {U}}}
is conserved, and the Hamiltonian is invariant under the transformation
{\displaystyle {\widehat {U}}}.
Since the predictions of quantum mechanics should be invariant under the action of a group, physicists look for unitary transformations to represent the group.
Important subgroups of each U(N) are those unitary matrices which have unit determinant (or are "unimodular"): these are called the special unitary groups and are denoted SU(N).
==== U(1) ====
The simplest unitary group is U(1), which is just the complex numbers of modulus 1. This one-dimensional matrix entry is of the form:
{\displaystyle U=e^{-i\theta }}
in which θ is the parameter of the group, and the group is Abelian since one-dimensional matrices always commute under matrix multiplication. Lagrangians in quantum field theory for complex scalar fields are often invariant under U(1) transformations. If there is a quantum number a associated with the U(1) symmetry, for example baryon number and the three lepton numbers in electromagnetic interactions, we have:
{\displaystyle U=e^{-ia\theta }}
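A one-line sketch of why such a symmetry is physically harmless (the numerical values are illustrative, not from the article): a global U(1) phase changes the complex amplitude but leaves the observable |ψ|² untouched:

```python
import cmath

a, theta = 1.0, 0.8            # quantum number and group parameter
U = cmath.exp(-1j * a * theta)  # a U(1) element: a phase of modulus 1
psi = 0.6 + 0.8j               # some amplitude, |psi|^2 = 1
psi_new = U * psi              # transformed amplitude

# probabilities, hence all predictions, are unchanged
assert abs(abs(psi_new) ** 2 - abs(psi) ** 2) < 1e-12
```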
==== U(2) and SU(2) ====
The general form of an element of U(2) is parametrized by two complex numbers a and b:
{\displaystyle U={\begin{pmatrix}a&b\\-b^{\star }&a^{\star }\\\end{pmatrix}}}
and for SU(2), the determinant is restricted to 1:
{\displaystyle \det(U)=aa^{\star }+bb^{\star }={|a|}^{2}+{|b|}^{2}=1}
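A quick numerical sketch (the particular a and b are arbitrary choices satisfying the constraint, assuming numpy): any pair with |a|² + |b|² = 1 plugged into the matrix above yields a unitary matrix of determinant 1, i.e. an element of SU(2):

```python
import numpy as np

a = 0.6 + 0.3j
b = np.sqrt(1 - abs(a) ** 2) * np.exp(1j * 0.5)  # any phase works

# the parametrized matrix from the text
U = np.array([[a, b], [-b.conjugate(), a.conjugate()]])

assert np.allclose(U.conj().T @ U, np.eye(2))  # unitary
assert np.isclose(np.linalg.det(U), 1)         # unimodular: in SU(2)
```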
In group theoretic language, the Pauli matrices are the generators of the special unitary group in two dimensions, denoted SU(2). Their commutation relation is the same as for orbital angular momentum, aside from a factor of 2:
{\displaystyle [\sigma _{a},\sigma _{b}]=2i\hbar \varepsilon _{abc}\sigma _{c}}
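This relation is easy to verify numerically (a sketch with ħ set to 1, assuming numpy), checking all index pairs at once:

```python
import numpy as np

# the three Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def eps(i, j, k):
    # Levi-Civita symbol for indices 0, 1, 2
    return (i - j) * (j - k) * (k - i) / 2

# verify [sigma_a, sigma_b] = 2i eps_abc sigma_c (hbar = 1)
for i in range(3):
    for j in range(3):
        lhs = sigma[i] @ sigma[j] - sigma[j] @ sigma[i]
        rhs = sum(2j * eps(i, j, k) * sigma[k] for k in range(3))
        assert np.allclose(lhs, rhs)
```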
A group element of SU(2) can be written:
{\displaystyle U(\theta ,{\hat {\mathbf {e} }}_{j})=e^{i\theta \sigma _{j}/2}}
where σj is a Pauli matrix, and the group parameters are the angles turned through about an axis.
The two-dimensional isotropic quantum harmonic oscillator has symmetry group SU(2), while the symmetry algebra of the rational anisotropic oscillator is a nonlinear extension of u(2).
==== U(3) and SU(3) ====
The eight Gell-Mann matrices λn (see article for them and the structure constants) are important for quantum chromodynamics. They originally arose in the SU(3) theory of flavor, which is still of practical importance in nuclear physics. They are the generators of the SU(3) group, so an element of SU(3) can be written analogously to an element of SU(2):
{\displaystyle U(\theta ,{\hat {\mathbf {e} }}_{j})=\exp \left(-{\frac {i}{2}}\sum _{n=1}^{8}\theta _{n}\lambda _{n}\right)}
where θn are eight independent parameters. The λn matrices satisfy the commutator:
{\displaystyle \left[\lambda _{a},\lambda _{b}\right]=2if_{abc}\lambda _{c}}
where the indices a, b, c take the values 1, 2, 3, ..., 8. The structure constants fabc are totally antisymmetric in all indices analogous to those of SU(2). In the standard colour charge basis (r for red, g for green, b for blue):
{\displaystyle |r\rangle ={\begin{pmatrix}1\\0\\0\end{pmatrix}}\,,\quad |g\rangle ={\begin{pmatrix}0\\1\\0\end{pmatrix}}\,,\quad |b\rangle ={\begin{pmatrix}0\\0\\1\end{pmatrix}}}
the colour states are eigenstates of the λ3 and λ8 matrices, while the other matrices mix colour states together.
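A sketch of these statements for a few of the Gell-Mann matrices (illustrative, assuming numpy; only λ1, λ2, λ3, λ8 are written out): the colour basis states are eigenstates of the diagonal matrices λ3 and λ8, an off-diagonal matrix mixes colours, and a structure constant such as f123 = 1 can be recovered from the trace identity f_abc = Tr([λa, λb] λc) / (4i):

```python
import numpy as np

s3 = np.sqrt(3)
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l3 = np.diag([1.0, -1.0, 0.0]).astype(complex)
l8 = np.diag([1.0, 1.0, -2.0]).astype(complex) / s3

r = np.array([1, 0, 0], dtype=complex)  # |r>
g = np.array([0, 1, 0], dtype=complex)  # |g>

# r and g are eigenstates of the diagonal matrices l3 and l8 ...
assert np.allclose(l3 @ r, +1 * r) and np.allclose(l3 @ g, -1 * g)
assert np.allclose(l8 @ r, r / s3)
# ... while an off-diagonal Gell-Mann matrix mixes colour states:
assert np.allclose(l1 @ r, g)

# structure constant via the trace identity (normalization Tr(la lb) = 2 d_ab)
f123 = np.trace((l1 @ l2 - l2 @ l1) @ l3) / 4j
assert np.isclose(f123, 1.0)
```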
The eight gluon states (8-dimensional column vectors) are simultaneous eigenstates of the adjoint representation of SU(3), the 8-dimensional representation acting on its own Lie algebra su(3), for the λ3 and λ8 matrices. By forming tensor products of representations (the standard representation and its dual) and taking appropriate quotients, protons, neutrons, and other hadrons are eigenstates of various representations of SU(3) of color. The representations of SU(3) can be described by a "theorem of the highest weight".
=== Matter and antimatter ===
In relativistic quantum mechanics, relativistic wave equations predict a remarkable symmetry of nature: that every particle has a corresponding antiparticle. This is mathematically contained in the spinor fields which are the solutions of the relativistic wave equations.
Charge conjugation switches particles and antiparticles. Physical laws and interactions unchanged by this operation have C symmetry.
=== Discrete spacetime symmetries ===
Parity mirrors the orientation of the spatial coordinates from left-handed to right-handed. Informally, space is "reflected" into its mirror image. Physical laws and interactions unchanged by this operation have P symmetry.
Time reversal flips the time coordinate, which amounts to time running from future to past. A curious property of time, which space does not have, is that it is unidirectional: particles traveling forwards in time are equivalent to antiparticles traveling back in time. Physical laws and interactions unchanged by this operation have T symmetry.
=== C, P, T symmetries ===
Parity (physics) § Molecules
CPT theorem
CP violation
PT symmetry
Lorentz violation
=== Gauge theory ===
In quantum electrodynamics, the local symmetry group is U(1) and is abelian. In quantum chromodynamics, the local symmetry group is SU(3) and is non-abelian.
The electromagnetic interaction is mediated by photons, which have no electric charge. The electromagnetic tensor has an electromagnetic four-potential field possessing gauge symmetry.
The strong (color) interaction is mediated by gluons, which can have eight color charges. There are eight gluon field strength tensors with corresponding gluon four-potential fields, each possessing gauge symmetry.
=== The strong (color) interaction ===
==== Color charge ====
Analogous to the spin operator, there are color charge operators in terms of the Gell-Mann matrices λj:
{\displaystyle {\hat {F}}_{j}={\frac {1}{2}}\lambda _{j}}
and since color charge is a conserved charge, all color charge operators must commute with the Hamiltonian:
{\displaystyle \left[{\hat {F}}_{j},{\hat {H}}\right]=0}
==== Isospin ====
Isospin is conserved in strong interactions.
=== The weak and electromagnetic interactions ===
==== Duality transformation ====
Magnetic monopoles can be theoretically realized, although current observations and theory are consistent both with their existence and with their non-existence. Electric and magnetic charges can effectively be "rotated into one another" by a duality transformation.
==== Electroweak symmetry ====
Electroweak symmetry
Electroweak symmetry breaking
=== Supersymmetry ===
A Lie superalgebra is an algebra in which (suitable) basis elements either have a commutation relation or have an anticommutation relation. Symmetries have been proposed to the effect that all fermionic particles have bosonic analogues, and vice versa. These symmetries have theoretical appeal in that no extra assumptions (such as the existence of strings) barring such symmetries are made. In addition, by assuming supersymmetry, a number of puzzling issues can be resolved. These symmetries, which are represented by Lie superalgebras, have not been confirmed experimentally. It is now believed that, if they exist, they are broken symmetries. It has also been speculated that dark matter consists of gravitinos, massive spin-3/2 particles whose supersymmetric partner is the graviton.
== Exchange symmetry ==
The concept of exchange symmetry is derived from a fundamental postulate of quantum statistics, which states that no observable physical quantity should change after exchanging two identical particles. Because all observables are proportional to
{\displaystyle \left|\psi \right|^{2}}
for a system of identical particles, the wave function
{\displaystyle \psi }
must either remain the same or change sign upon such an exchange. More generally, for a system of n identical particles the wave function
{\displaystyle \psi }
must transform as an irreducible representation of the finite symmetric group Sn. It turns out that, according to the spin-statistics theorem, fermion states transform as the antisymmetric irreducible representation of Sn and boson states as the symmetric irreducible representation.
Because the exchange of two identical particles is mathematically equivalent to the rotation of each particle by 180 degrees (and so to the rotation of one particle's frame by 360 degrees), the symmetric nature of the wave function depends on the particle's spin after the rotation operator is applied to it. Integer-spin particles do not change the sign of their wave function upon a 360 degree rotation, so the sign of the wave function of the entire system does not change. Half-integer-spin particles change the sign of their wave function upon a 360 degree rotation (see more in spin–statistics theorem).
Particles for which the wave function does not change sign upon exchange are called bosons, or particles with a symmetric wave function. The particles for which the wave function of the system changes sign are called fermions, or particles with an antisymmetric wave function.
Fermions therefore obey different statistics (called Fermi–Dirac statistics) than bosons (which obey Bose–Einstein statistics). One of the consequences of Fermi–Dirac statistics is the exclusion principle for fermions—no two identical fermions can share the same quantum state (in other words, the wave function of two identical fermions in the same state is zero). This in turn results in degeneracy pressure for fermions—the strong resistance of fermions to compression into smaller volume. This resistance gives rise to the “stiffness” or “rigidity” of ordinary atomic matter (as atoms contain electrons which are fermions).
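These statements can be sketched with small discrete "wavefunctions" (the vectors are arbitrary illustrative amplitudes, assuming numpy): exchanging the two particles transposes ψ(x1, x2), flipping the sign of the antisymmetric combination while leaving |ψ|², and hence every observable, unchanged:

```python
import numpy as np

# two single-particle states on a 3-point grid (illustrative amplitudes)
phi_a = np.array([1.0, 2.0, 0.5])
phi_b = np.array([0.3, -1.0, 2.0])

# two-particle combinations psi(x1, x2) as outer products
psi_F = np.outer(phi_a, phi_b) - np.outer(phi_b, phi_a)  # fermions
psi_B = np.outer(phi_a, phi_b) + np.outer(phi_b, phi_a)  # bosons

# exchanging the particles maps psi(x1, x2) -> psi(x2, x1), a transpose
assert np.allclose(psi_F.T, -psi_F)  # antisymmetric: sign flips
assert np.allclose(psi_B.T, psi_B)   # symmetric: unchanged
# |psi|^2 is unchanged either way, as the postulate requires
assert np.allclose(np.abs(psi_F.T) ** 2, np.abs(psi_F) ** 2)

# Pauli exclusion: two fermions in the same state give psi identically 0
psi_same = np.outer(phi_a, phi_a) - np.outer(phi_a, phi_a)
assert np.allclose(psi_same, 0)
```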
== See also ==
== Footnotes ==
== References ==
== Further reading ==
== External links ==
The molecular symmetry group[1] @ The University of Western Ontario
(2010) Irreducible Tensor Operators and the Wigner-Eckart Theorem Archived 2014-07-20 at the Wayback Machine
Reece, R.D. (2006). "A Derivation of the Quantum Mechanical Momentum Operator in the Position Representation".
Soper, D.E. (2011). "Position and momentum in quantum mechanics" (PDF).
Lie groups
Porter, F. (2009). "Lie Groups and Lie Algebras" (PDF). Archived from the original (PDF) on 2017-03-29. Retrieved 2013-06-05.
Continuous Groups, Lie Groups, and Lie Algebras Archived 2016-03-04 at the Wayback Machine
Mulders, P.J. (November 2011). "Quantum field theory" (PDF). Department of Theoretical Physics, VU University. 6.04.
Hall, B.C. (2000). "An Elementary Introduction to Groups and Representations". arXiv:math-ph/0005032.
Quantization (in British English quantisation) is the systematic transition procedure from a classical understanding of physical phenomena to a newer understanding known as quantum mechanics. It is a procedure for constructing quantum mechanics from classical mechanics. A generalization involving infinite degrees of freedom is field quantization, as in the "quantization of the electromagnetic field", referring to photons as field "quanta" (for instance as light quanta). This procedure is basic to theories of atomic physics, chemistry, particle physics, nuclear physics, condensed matter physics, and quantum optics.
== Historical overview ==
In 1901, when Max Planck was developing the distribution function of statistical mechanics to solve the ultraviolet catastrophe problem, he realized that the properties of blackbody radiation can be explained by the assumption that energy comes in countable fundamental units, i.e. the amount of energy is not continuous but discrete. That is, a minimum unit of energy exists and the following relationship holds
{\displaystyle E=h\nu }
for the frequency
{\displaystyle \nu }
. Here,
{\displaystyle h}
is called the Planck constant, which represents the amount of the quantum mechanical effect. It means a fundamental change of mathematical model of physical quantities.
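As a concrete illustration (the wavelength is an arbitrary choice; the constants are the standard SI values): the quantum of energy carried by green light follows directly from E = hν:

```python
h = 6.62607015e-34   # Planck constant, J*s (exact by SI definition)
c = 2.99792458e8     # speed of light, m/s
wavelength = 500e-9  # 500 nm, green light (illustrative choice)

nu = c / wavelength  # frequency of the radiation
E = h * nu           # E = h*nu: one indivisible unit of energy

# roughly 2.5 eV, the scale relevant to the photoelectric effect
E_eV = E / 1.602176634e-19
```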
In 1905, Albert Einstein published a paper, "On a heuristic viewpoint concerning the emission and transformation of light", which explained the photoelectric effect on quantized electromagnetic waves. The energy quantum referred to in this paper was later called "photon". In July 1913, Niels Bohr used quantization to describe the spectrum of a hydrogen atom in his paper "On the constitution of atoms and molecules".
The preceding theories were successful, but they were highly phenomenological. The French mathematician Henri Poincaré was the first to give a systematic and rigorous definition of what quantization is, in his 1912 paper "Sur la théorie des quanta".
The term "quantum physics" was first used in Johnston's Planck's Universe in Light of Modern Physics. (1931).
== Canonical quantization ==
Canonical quantization develops quantum mechanics from classical mechanics. One introduces a commutation relation among canonical coordinates. Technically, one converts coordinates to operators, through combinations of creation and annihilation operators. The operators act on quantum states of the theory. The lowest energy state is called the vacuum state.
== Quantization schemes ==
Even within the setting of canonical quantization, there is difficulty associated to quantizing arbitrary observables on the classical phase space. This is the ordering ambiguity: classically, the position and momentum variables x and p commute, but their quantum mechanical operator counterparts do not. Various quantization schemes have been proposed to resolve this ambiguity, of which the most popular is the Weyl quantization scheme. Nevertheless, Groenewold's theorem dictates that no perfect quantization scheme exists. Specifically, if the quantizations of x and p are taken to be the usual position and momentum operators, then no quantization scheme can perfectly reproduce the Poisson bracket relations among the classical observables.
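The ordering ambiguity can be sketched numerically (an illustration, not a full quantization scheme; a finite matrix truncation of the harmonic-oscillator ladder operators is used, with ħ = m = ω = 1): x̂p̂ and p̂x̂ differ by iħ, and the naive product x̂p̂ is not even Hermitian, whereas the Weyl-symmetrized combination is:

```python
import numpy as np

N = 6  # truncation dimension (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
x = (a + a.T) / np.sqrt(2)                  # position operator
p = 1j * (a.T - a) / np.sqrt(2)             # momentum operator

xp, px = x @ p, p @ x
# classically xp = px, but the operator orderings differ:
assert not np.allclose(xp, px)
# [x, p] = i holds on the truncated space, except the last corner entry
comm = xp - px
assert np.allclose(comm[:-1, :-1], 1j * np.eye(N)[:-1, :-1])

# the naive product x*p is not Hermitian, so not an observable, while
# the Weyl-symmetrized (xp + px)/2 is:
weyl = (xp + px) / 2
assert not np.allclose(xp, xp.conj().T)
assert np.allclose(weyl, weyl.conj().T)
```

The corner defect in the commutator is a truncation artifact; in the full infinite-dimensional space [x, p] = iħ exactly, which is why the ordering question has no classical counterpart.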
== Covariant canonical quantization ==
There is a way to perform a canonical quantization without having to resort to the non-covariant approach of foliating spacetime and choosing a Hamiltonian. This method is based upon a classical action, but is different from the functional integral approach.
The method does not apply to all possible actions (for instance, actions with a noncausal structure or actions with gauge "flows"). It starts with the classical algebra of all (smooth) functionals over the configuration space. This algebra is quotiented over by the ideal generated by the Euler–Lagrange equations. Then, this quotient algebra is converted into a Poisson algebra by introducing a Poisson bracket derivable from the action, called the Peierls bracket. This Poisson algebra is then ℏ -deformed in the same way as in canonical quantization.
In quantum field theory, there is also a way to quantize actions with gauge "flows". It involves the Batalin–Vilkovisky formalism, an extension of the BRST formalism.
== Deformation quantization ==
One of the earliest attempts at a natural quantization was Weyl quantization, proposed by Hermann Weyl in 1927. Here, an attempt is made to associate a quantum-mechanical observable (a self-adjoint operator on a Hilbert space) with a real-valued function on classical phase space. The position and momentum in this phase space are mapped to the generators of the Heisenberg group, and the Hilbert space appears as a group representation of the Heisenberg group. In 1946, H. J. Groenewold considered the product of a pair of such observables and asked what the corresponding function would be on the classical phase space. This led him to discover the phase-space star-product of a pair of functions.
More generally, this technique leads to deformation quantization, where the ★-product is taken to be a deformation of the algebra of functions on a symplectic manifold or Poisson manifold. However, as a natural quantization scheme (a functor), Weyl's map is not satisfactory.
For example, the Weyl map of the classical angular-momentum-squared is not just the quantum angular momentum squared operator, but it further contains a constant term 3ħ2/2. (This extra term offset is pedagogically significant, since it accounts for the nonvanishing angular momentum of the ground-state Bohr orbit in the hydrogen atom, even though the standard QM ground state of the atom has vanishing l.)
As a mere representation change, however, Weyl's map is useful and important, as it underlies the alternate equivalent phase space formulation of conventional quantum mechanics.
== Geometric quantization ==
In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in.
A more geometric approach to quantization, in which the classical phase space can be a general symplectic manifold, was developed in the 1970s by Bertram Kostant and Jean-Marie Souriau. The method proceeds in two stages. First, one constructs a "prequantum Hilbert space" consisting of square-integrable functions (or, more properly, sections of a line bundle) over the phase space. Here one can construct operators satisfying commutation relations corresponding exactly to the classical Poisson-bracket relations. On the other hand, this prequantum Hilbert space is too big to be physically meaningful. One then restricts to functions (or sections) depending on half the variables on the phase space, yielding the quantum Hilbert space.
== Path integral quantization ==
A classical mechanical theory is given by an action with the permissible configurations being the ones which are extremal with respect to functional variations of the action. A quantum-mechanical description of the classical system can also be constructed from the action of the system by means of the path integral formulation.
== Other types ==
Loop quantum gravity (loop quantization)
Uncertainty principle (quantum statistical mechanics approach)
Schwinger's quantum action principle
== See also ==
== References ==
Ali, S. T., & Engliš, M. (2005). "Quantization methods: a guide for physicists and analysts". Reviews in Mathematical Physics 17 (04), 391-490. arXiv:math-ph/0405065 doi:10.1142/S0129055X05002376
Abraham, R. & Marsden (1985): Foundations of Mechanics, ed. Addison–Wesley, ISBN 0-8053-0102-X
Hall, Brian C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, vol. 267, Springer, Bibcode:2013qtm..book.....H
Woodhouse, Nicholas M. J. (2007). Geometric quantization. Oxford mathematical monographs (2. ed., repr ed.). Oxford: Clarendon Press. ISBN 978-0-19-850270-8.
Landsman, N. P. (2005-07-25). "Between classical and quantum". arXiv:quant-ph/0506082.
M. Peskin, D. Schroeder, An Introduction to Quantum Field Theory (Westview Press, 1995) ISBN 0-201-50397-2
Weinberg, Steven, The Quantum Theory of Fields (3 volumes)
Curtright, T. L.; Zachos, C. K. (2012). "Quantum Mechanics in Phase Space". Asia Pacific Physics Newsletter. 01: 37–46. arXiv:1104.5269. doi:10.1142/S2251158X12000069. S2CID 119230734.
G. Giachetta, L. Mangiarotti, G. Sardanashvily, Geometric and Algebraic Topological Methods in Quantum Mechanics (World Scientific, 2005) ISBN 981-256-129-3
Todorov, Ivan (2012). ""Quantization is a mystery"". arXiv:1206.3116 [math-ph].
== Notes ==
In the interpretation of quantum mechanics, a local hidden-variable theory is a hidden-variable theory that satisfies the principle of locality. These models attempt to account for the probabilistic features of quantum mechanics via the mechanism of underlying but inaccessible variables, with the additional requirement that distant events be statistically independent.
The mathematical implications of a local hidden-variable theory with regards to quantum entanglement were explored by physicist John Stewart Bell, who in 1964 proved that broad classes of local hidden-variable theories cannot reproduce the correlations between measurement outcomes that quantum mechanics predicts, a result since confirmed by a range of detailed Bell test experiments.
== Models ==
=== Single qubit ===
A collection of related theorems, beginning with Bell's proof in 1964, show that quantum mechanics is incompatible with local hidden variables. However, as Bell pointed out, restricted sets of quantum phenomena can be imitated using local hidden-variable models. Bell provided a local hidden-variable model for quantum measurements upon a spin-1/2 particle, or in the terminology of quantum information theory, a single qubit. Bell's model was later simplified by N. David Mermin, and a closely related model was presented by Simon B. Kochen and Ernst Specker. The existence of these models is related to the fact that Gleason's theorem does not apply to the case of a single qubit.
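The single-qubit model mentioned above can be simulated directly. The sketch below is my reconstruction of the standard Kochen–Specker construction (the density and sampling choices are assumptions of this sketch, not formulas quoted in the text): the hidden variable is a unit vector λ drawn with density (s·λ)/π on the hemisphere around the state's Bloch vector s, the outcome of measuring spin along axis n is sign(n·λ), and Monte Carlo averaging reproduces the Born-rule probability cos²(θ/2).

```python
# Local hidden-variable model for one qubit (Kochen-Specker construction).
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000
# Sample lambda with density (s . lambda)/pi on the hemisphere around the
# state axis s = z: the polar marginal is sin(2a), so a = arcsin(sqrt(u)).
alpha = np.arcsin(np.sqrt(rng.random(N)))
phi = 2 * np.pi * rng.random(N)
lam = np.stack([np.sin(alpha) * np.cos(phi),
                np.sin(alpha) * np.sin(phi),
                np.cos(alpha)])

def p_plus(theta):
    """P(outcome +1) for a measurement axis at angle theta from s."""
    n = np.array([np.sin(theta), 0.0, np.cos(theta)])
    return float(np.mean(n @ lam > 0))  # outcome is sign(n . lambda)

for theta in (0.4, 1.0, 2.2):
    # Agrees with the quantum prediction cos^2(theta/2) to Monte Carlo accuracy.
    print(p_plus(theta), np.cos(theta / 2) ** 2)
```

The existence of such a model shows concretely why Bell's theorem needs two entangled particles: one qubit's statistics alone are classically reproducible.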
=== Bipartite quantum states ===
Bell also pointed out that up until then, discussions of quantum entanglement focused on cases where the results of measurements upon two particles were either perfectly correlated or perfectly anti-correlated. These special cases can also be explained using local hidden variables.
For separable states of two particles, there is a simple hidden-variable model for any measurements on the two parties. Surprisingly, there are also entangled states for which all von Neumann measurements can be described by a hidden-variable model. Such states are entangled, but do not violate any Bell inequality. The so-called Werner states are a single-parameter family of states that are invariant under any transformation of the type
{\displaystyle U\otimes U,} where {\displaystyle U} is a unitary matrix. For two qubits, they are noisy singlets given as
{\displaystyle \varrho =p\vert \psi ^{-}\rangle \langle \psi ^{-}\vert +(1-p){\frac {\mathbb {I} }{4}},}
where the singlet is defined as {\displaystyle \vert \psi ^{-}\rangle ={\tfrac {1}{\sqrt {2}}}\left(\vert 01\rangle -\vert 10\rangle \right)}.
Reinhard F. Werner showed that such states allow for a hidden-variable model for {\displaystyle p\leq 1/2}, while they are entangled if {\displaystyle p>1/3}. The bound for hidden-variable models could be improved until {\displaystyle p=2/3}
. Hidden-variable models have been constructed for Werner states even if positive operator-valued measurements (POVMs) are allowed, not only von Neumann measurements. Hidden-variable models have also been constructed for noisy maximally entangled states, and even extended to arbitrary pure states mixed with white noise. Besides bipartite systems, there are also results for the multipartite case: a hidden-variable model for any von Neumann measurements at the parties has been presented for a three-qubit quantum state.
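The two thresholds above can be checked numerically. The sketch below is my illustration (the Peres–Horodecki partial-transpose test is a standard quantum-information tool, not a method described in this article): it builds the two-qubit Werner state ϱ = p|ψ⁻⟩⟨ψ⁻| + (1 − p)𝕀/4 and tests for entanglement, which for two qubits occurs exactly when the partial transpose has a negative eigenvalue, here (1 − 3p)/4 < 0, i.e. p > 1/3.

```python
# Werner state entanglement via the Peres-Horodecki (PPT) criterion.
import numpy as np

def werner(p):
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # singlet (|01> - |10>)/sqrt(2)
    return p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4

def min_pt_eigenvalue(rho):
    # Partial transpose on the second qubit: swap its row and column indices.
    r = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return float(np.linalg.eigvalsh(r).min())

# Below p = 1/3 the partial transpose stays positive (state separable);
# above it, the eigenvalue (1 - 3p)/4 turns negative (state entangled).
print(min_pt_eigenvalue(werner(0.2)))  # positive: separable
print(min_pt_eigenvalue(werner(0.6)))  # negative: entangled, yet Werner's
                                       # local model still exists for p <= 1/2
```

The window 1/3 < p ≤ 1/2 is exactly the regime discussed above: states that are entangled but admit a local hidden-variable model for von Neumann measurements.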
== Time-dependent variables ==
Some hypotheses have been proposed concerning the role of time in constructing a hidden-variable theory. One approach, suggested by K. Hess and W. Philipp, relies upon possible consequences of time dependencies of hidden variables; this hypothesis has been criticized by Richard D. Gill, Gregor Weihs, Anton Zeilinger and Marek Żukowski, as well as by D. M. Appleby.
== See also ==
EPR paradox
Bohr–Einstein debates
== References == | Wikipedia/Local_hidden-variable_theory |
Physics World is the membership magazine of the Institute of Physics, one of the largest physical societies in the world. It is an international monthly magazine covering all areas of physics, pure and applied, and is aimed at physicists in research, industry, physics outreach, and education worldwide.
== Overview ==
The magazine was launched in 1988 by IOP Publishing Ltd, under the founding editorship of Philip Campbell. The magazine is available free of charge to members of the Institute of Physics, who can access a digital edition; selected articles can be read by anyone for free online. It was redesigned in September 2005 and has an audited circulation of just under 35,000.
The current editor is Matin Durrani. Others on the team are Michael Banks (news editor) and Tushna Commissariat and Sarah Teah (features editors). Hamish Johnston, Margaret Harris and Tami Freeman are online editors.
Alongside the print and online magazine, Physics World produces films and two podcasts. The Physics World Stories podcast is hosted by Andrew Glester and is produced monthly. The Physics World Weekly podcast is hosted by James Dacey.
== Breakthrough of the Year ==
The magazine makes two awards each year. These are the Physics World Breakthrough of the Year and the Physics World Book of the Year, which have both been awarded annually since 2009.
Top 10 works and winners of the Breakthrough of the Year
2009: "to Jonathan Home and colleagues at NIST for unveiling the first small-scale device that could be described as a complete 'quantum computer'"
Top results from Tevatron
Spins spotted in room-temperature silicon
Graphane makes its debut
Magnetic monopoles spotted in spin ices
Water on the Moon
Atoms teleport information over long distance
Black-hole analogue traps sound
Dark matter spotted in Minnesota
A 2.36 TeV big bang at the LHC
2010: "to the ALPHA and ASACUSA collaborations at CERN for creating new ways of controlling antihydrogen"
Exoplanet atmosphere laid bare
Quantum effects seen in a visible object
Visible-light cloaking of large objects
Hail the first sound lasers
A Bose–Einstein condensate from light
Relativity with a human touch
Towards a Star Wars telepresence
Proton is smaller than we thought
CERN achieves landmark collisions
2011: Aephraim M. Steinberg and colleagues from the University of Toronto in Canada for using the technique of "weak measurement" to track the average paths of single photons passing through a Young's interference experiment.
Measuring the wavefunction
Cloaking in space and time
Measuring the universe using black holes
Turning darkness into light
Taking the temperature of the early universe
Catching the flavour of a neutrino oscillation
Living laser brought to life
Complete quantum computer made on a single chip
Seeing pure relics from the Big Bang
2012: "to the ATLAS and CMS collaborations at CERN for their joint discovery of a Higgs-like particle at the Large Hadron Collider".
Majorana fermions
Time-reversal violation
Galaxy-cluster motion
Peering through opaque materials
Room-temperature maser
Wiping data will cost you energy
Entangling twisted beams
Neutrino-based communication
Generating and storing energy in one step
2013: "the IceCube Neutrino Observatory for making the first observations of high-energy cosmic neutrinos".
Nuclear physics goes pear-shaped
Creating 'molecules' of light
Planck reveals 'almost perfect' universe
'Quantum microscope' peers into the hydrogen atom
Quantum state endures for 39 minutes at room temperature
The first carbon-nanotube computer
B-mode polarization spotted in cosmic microwave background
The first laser-cooled Bose–Einstein condensate
Hofstadter's butterfly spotted in graphene
2014: "to the European Space Agency for landing the Philae spacecraft on 67P/Churyumov–Gerasimenko", the first time a probe had been landed on a comet
Quasar shines a bright light on cosmic web
Neutrinos spotted from Sun's main nuclear reaction
Laser fusion passes milestone
Electrons' magnetic interactions isolated at long last
Disorder sharpens optical-fibre images
Data stored in magnetic holograms
Lasers ignite 'supernovae' in the lab
Quantum data are compressed for the first time
Physicists sound-out acoustic tractor beam
2015: "to the team that was first to achieve the simultaneous quantum teleportation of two inherent properties of a fundamental particle – the photon"
Cyclotron radiation from a single electron is measured for the first time
Weyl fermions are spotted at long last
Physicists claim 'loophole-free' Bell-violation experiment
First visible light detected directly from an exoplanet
LHCb claims discovery of two pentaquarks
Hydrogen sulphide is warmest ever superconductor at 203 K
Portable 'battlefield MRI' comes out of the lab
Fermionic microscope sees first light
Silicon quantum logic gate is a first
2016: "to LIGO's gravitational wave discovery".
Schrödinger's cat lives and dies in two boxes at once
Elusive nuclear-clock transition spotted in thorium-229
New gravimeter-on-a-chip is tiny yet extremely sensitive
Negative refraction of electrons spotted in graphene
Rocky planet found in habitable zone around Sun's nearest neighbour
Physicists take entanglement beyond identical ions
'Radical' new microscope lens combines high resolution with large field of view
Quantum computer simulates fundamental particle interactions for the first time
The single-atom engine that could
2017: "to the first multimessenger observation of a neutron star merger"
Physicists create first ‘topological’ laser
Lightning makes radioactive isotopes
Super-resolution microscope combines Nobel-winning technologies
Particle-free quantum communication is achieved in the lab
Ultra-high-energy cosmic rays have extra-galactic origins
‘Time crystals’ built in the lab
Metamaterial enhances natural cooling without power input
Three-photon interference measured at long last
Muons reveal hidden void in Egyptian pyramid
2018: "Discovery that led to the development of “twistronics”, which is a new and very promising technique for adjusting the electronic properties of graphene by rotating adjacent layers of the material."
Multifunctional carbon fibres enable “massless” energy storage
Compensator expands global access to advanced radiotherapy
IPCC Special Report on 1.5 °C climate change
EXPLORER PET/CT produces first total-body scans
Combustion-free, propeller-free plane takes flight
Quantum mechanics defies causal order, experiment confirms
Activating retinal stem cells restores vision in mice
Ancient hydrogen reveals clues to dark matter’s identity
Superconductivity spotted in a quasicrystal
2019: "First direct observation of a black hole and its ‘shadow’ by the Event Horizon Telescope"
Neuroprosthetic devices translate brain activity into speech
First detection of a “Marsquake”
CERN physicists spot symmetry violation in charm mesons
“Little Big Coil” creates record-breaking continuous magnetic field
Casimir effect creates “quantum trap” for tiny objects
Antimatter quantum interferometry makes its debut
Quantum computer outperforms conventional supercomputer
Trapped interferometer makes a compact gravity probe
Wearable MEG scanner used with children for the first time
2020: "Silicon-based light with a direct band gap in microelectronics"
Taking snapshots of a quantum measurement
Quantum correlations discovered in massive mirrors
Borexino spots solar neutrinos from elusive fusion cycle
First observation of a ferroelectric nematic liquid crystal
Thin-film perovskite detectors slash imaging dose
Fundamental constants set limit on speed of sound
Expanding twistronics to photons
Mixed beams enhance particle therapy accuracy
The first room-temperature superconductor
2021: "Quantum entanglement of two macroscopic objects"
Restoring speech in a paralysed man
Making 30 lasers emit as one
Quantifying wave–particle duality
Milestone for laser fusion
Innovative particle cooling techniques
Observing a black hole’s magnetic field
Achieving coherent quantum control of nuclei
Observing Pauli blocking in ultracold fermionic gases
Confirming the muon’s theory-defying magnetism
2022: "Deflection of a near-Earth asteroid by DART satellite"
Ushering in a new era for ultracold chemistry
Observing the tetraneutron
Super-efficient electricity generation
The fastest possible optoelectronic switch
Opening a new window on the universe by JWST
First-in-human FLASH proton therapy
Perfecting light transmission and absorption
Cubic boron arsenide is a champion semiconductor
Detecting an Aharonov–Bohm effect for gravity
2023: "Brain–computer interface that allowed a paralysed man to walk"
Growing electrodes inside living tissue
Neutrinos probe the proton’s structure
Simulating an expanding universe in a BEC
A double slit in time
Building blocks for a large-scale quantum network
First X-ray image of a single atom
“Smoking gun” evidence of early galaxies transforming the universe
Supersonic cracks in materials
Antimatter does not fall up
Fusion energy breakthrough
2024: "Quantum error correction with 48 logical qubits; and, independently, quantum error correction below the surface-code threshold"
Light-absorbing dye turns skin of live mouse transparent
Laser cooling positronium
Modelling lung cells to personalize radiotherapy
A semiconductor and a novel switch made from graphene
Detecting the decay of individual nuclei
Two distinct descriptions of nuclei unified for the first time
New titanium:sapphire laser is tiny, low-cost and tuneable
Entangled photons conceal and enhance images
First samples returned from the Moon’s far side
== Book of the Year ==
Top 10 books and the Book of the Year winner
2009: The Strangest Man: The Hidden Life of Paul Dirac, Quantum Genius by Graham Farmelo
The Physics of Rugby – Trevor Davis (Nottingham University Press)
First Principles: The Crazy Business of Doing Serious Science – Howard Burton (Key Porter Books)
Oliver Heaviside: Maverick Mastermind of Electricity – Basil Mahon (Institute of Engineering and Technology)
Atomic: The First War of Physics and the Secret History of the Atom Bomb – Jim Baggott (Icon Books)
Lives in Science – Joseph C Hermanowicz (University of Chicago Press)
13 Things That Don't Make Sense – Michael Brooks (Profile Books)
Deciphering the Cosmic Number: The Strange Friendship of Wolfgang Pauli and Carl Jung – Arthur I Miller (W W Norton)
Perfect Rigor – Masha Gessen (Houghton Mifflin Harcourt)
Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World – Eugenie Samuel Reich (Palgrave Macmillan)
2010: The Edge of Physics: Dispatches from the Frontiers of Cosmology by Anil Ananthaswamy
The Tunguska Mystery – Vladimir Rubtsov (Springer)
Coming Climate Crisis? Consider the Past, Beware the Big Fix – Claire L Parkinson (Rowman & Littlefield)
How It Ends – Chris Impey (W W Norton)
Lake Views: This World and the Universe – Steven Weinberg (Harvard University Press)
The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It – Scott Patterson (Crown Business)
Newton and the Counterfeiter – Thomas Levenson (Faber and Faber)
Packing for Mars – Mary Roach (One World Publications/ W W Norton)
Massive: The Hunt for the God Particle – Ian Sample (Virgin Books/Basic Books)
How to Teach Quantum Physics to Your Dog – Chad Orzel
2011: Quantum Man: Richard Feynman's Life in Science by Lawrence Krauss from Case Western Reserve University
Engineering Animals – Mark Denny and Alan McFadzean
Measure of the Earth: the Enlightenment Expedition that Reshaped the World – Larrie Ferreiro
The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos – Brian Greene
Lab Coats in Hollywood: Science, Scientists and Cinema – David Kirby
Quantum Man: Richard Feynman's Life in Science – Lawrence Krauss
Rising Force: the Magic of Magnetic Levitation – James Livingston
Modernist Cuisine – Nathan Myhrvold, Chris Young and Maxime Bilet
The 4% Universe: Dark Matter, Dark Energy, and the Race to Discover the Rest of Reality – Richard Panek
Radioactive: Marie and Pierre Curie, A Tale of Love and Fallout – Lauren Redniss
Hindsight and Popular Astronomy – Alan Whiting
2012: How the Hippies Saved Physics by David Kaiser from the Massachusetts Institute of Technology
A Hole at the Bottom of the Sea: The Race to Kill the BP Oil Gusher – Joel Achenbach
The Science Magpie: A Hoard of Fascinating Facts – Simon Flynn
The Idea Factory: Bell Labs and the Great Age of American Innovation – Jon Gertner
Erwin Schrödinger and the Quantum Revolution – John Gribbin
The Geek Manifesto: Why Science Matters – Mark Henderson
Life's Ratchet: How Molecular Machines Extract Order from Chaos – Peter M Hoffmann
How the Hippies Saved Physics: Science, Counterculture and the Quantum Revival – David Kaiser
How to Teach Relativity to Your Dog – Chad Orzel
Pricing the Future: Finance, Physics and the 300-Year Journey to the Black–Scholes Equation – George Szpiro
Physics on the Fringe: Smoke Rings, Circlons, and Alternative Theories of Everything – Margaret Wertheim
2013: Physics in Mind: a Quantum View of the Brain by the biophysicist Werner Loewenstein
The Spark of Life: Electricity in the Human Body – Frances Ashcroft
The Particle at the End of the Universe: How the Hunt for the Higgs Boson Leads Us to the Edge of a New World – Sean M. Carroll
Hans Christian Ørsted: Reading Nature's Mind – Dan Charly Christensen
Churchill's Bomb: a Hidden History of Science, War and Politics – Graham Farmelo
Physics in Mind: a Quantum View of the Brain – Werner Loewenstein
J Robert Oppenheimer: A Life Inside the Center – Ray Monk
The Simpsons and their Mathematical Secrets – Simon Singh
Time Reborn: From the Crisis in Physics to the Future of the Universe – Lee Smolin
The Theoretical Minimum: What You Need to Know to Start Doing Physics – Leonard Susskind and George Hrabovsky
Weird Life: the Search for Life That Is Very, Very Different from Our Own – David Toomey
2014: Stuff Matters: The Strange Stories of the Marvellous Materials that Shape our Man-made World - Mark Miodownik
Wizards, Aliens & Starships: Physics and Math in Fantasy and Science Fiction - Charles Adler
Serving the Reich: the Struggle for the Soul of Physics Under Hitler - Philip Ball
Five Billion Years of Solitude: the Search for Life Among the Stars - Lee Billings
Plutopia: Nuclear Families, Atomic Cities, and the Great Soviet and American Plutonium Disasters - Kate Brown
Smashing Physics: Inside the World’s Biggest Experiment - Jon Butterworth
Sonic Wonderland: a Scientific Odyssey of Sound - Trevor Cox
The Perfect Theory: a Century of Geniuses and the Battle Over General Relativity - Pedro G Ferreira
Stuff Matters: the Strange Stories of the Marvellous Materials that Shape Our Man-made World - Mark Miodownik
Einstein and the Quantum: the Quest of the Valiant Swabian - Douglas Stone
Island on Fire: the Extraordinary Story of Laki, the Volcano that Turned Eighteenth-century Europe Dark - Alexandra Witze and Jeff Kanipe
2015: Trespassing on Einstein’s Lawn: a Father, a Daughter, the Meaning of Nothing and the Beginning of Everything - Amanda Gefter
Life on the Edge: the Coming of Age of Quantum Biology - Jim Al-Khalili and Johnjoe McFadden
Physics on Your Feet: Ninety Minutes of Shame but a PhD for the Rest of Your Life - Dmitry Budker and Alexander Sushkov
Half-Life: the Divided Life of Bruno Pontecorvo, Physicist or Spy - Frank Close
Beyond: Our Future in Space - Chris Impey
The Water Book: the Extraordinary Story of Our Most Ordinary Substance - Alok Jha
Monsters: the Hindenburg Disaster and the Birth of Pathological Technology - Ed Regis
Tunnel Visions: the Rise and Fall of the Superconducting Super Collider - Michael Riordan, Lillian Hoddeson, Adrienne Kolb
The Copernicus Complex: the Quest for our Cosmic (In)Significance - Caleb Scharf
Atoms Under the Floorboards: the Surprising Science Hidden in Your Home - Chris Woodford
2016: Why String Theory? - Joseph Conlon
The Jazz of Physics: the Secret Link Between Music and the Structure of the Universe - Stephon Alexander
Storm in a Teacup: the Physics of Everyday Life - Helen Czerski
Big Science: Ernest Lawrence and the Invention that Launched the Military-Industrial Complex - Michael Hiltzik
Strange Glow: the Story of Radiation - Timothy Jorgensen
Cosmos: the Infographic Book of Space - Stuart Lowe and Chris North
Spooky Action at a Distance: the Phenomenon that Reimagines Space and Time - George Musser
Goldilocks and the Water Bears: the Search for Life in the Universe - Louisa Preston
Reality Is Not What It Seems: the Journey to Quantum Gravity - Carlo Rovelli
The Pope of Physics: Enrico Fermi and the Birth of the Atomic Age - Gino Segrè and Bettina Hoerlin
2017: Inferior: How Science Got Women Wrong and the New Research That’s Rewriting the Story - Angela Saini
Marconi: the Man Who Networked the World by Marc Raboy
Hidden Figures: the Untold Story of the African American Women Who Helped Win the Space Race by Margot Lee Shetterly
The Glass Universe: How the Ladies of the Harvard Observatory Took the Measure of the Stars by Dava Sobel
Scale: the Universal Laws of Life and Death in Organisms, Cities and Companies by Geoffrey West
Not A Scientist: How Politicians Mistake, Misrepresent and Utterly Mangle Science by Dave Levitan
Inferior: How Science Got Women Wrong and the New Research That’s Rewriting the Story by Angela Saini
Mapping the Heavens: the Radical Scientific Ideas That Reveal the Cosmos by Priyamvada Natarajan
We Have No Idea by Jorge Cham and Daniel Whiteson
The Secret Science of Superheroes edited by Mark Lorch and Andy Miah
The Death of Expertise: the Campaign Against Established Knowledge and Why it Matters by Tom Nichols
2018: Beyond Weird: Why Everything You Thought You Knew About Quantum Physics is Different - Philip Ball
Treknology: the Science of Star Trek from Tricorders to Warp Drives by Ethan Siegel
Ad Astra: an Illustrated Guide to Leaving the Planet by Dallas Campbell
Exact Thinking in Demented Times: the Vienna Circle and the Epic Quest for the Foundations of Science by Karl Sigmund
Beyond Weird: Why Everything You Thought You Knew About Quantum Physics is Different by Philip Ball
The Order of Time by Carlo Rovelli
Lost in Math: How Beauty Leads Physics Astray by Sabine Hossenfelder
The Dialogues: Conversations About the Nature of the Universe by Clifford V Johnson
When the Uncertainty Principle Goes to 11: Or How to Explain Quantum Physics with Heavy Metal by Philip Moriarty
What is Real: the Unfinished Quest for the Meaning of Quantum Physics by Adam Becker
Hello World: How to be Human in the Age of the Machine by Hannah Fry
2019: The Demon in the Machine: How Hidden Webs of Information are Solving the Mystery of Life - Paul Davies
The Moon: a History for the Future by Oliver Morton
The Case Against Reality: How Evolution Hid the Truth from Our Eyes by Donald D Hoffman
Fire, Ice and Physics: the Science of Game of Thrones by Rebecca C Thompson
Underland: A Deep Time Journey by Robert Macfarlane
The Demon in the Machine: How Hidden Webs of Information are Solving the Mystery of Life by Paul Davies
The Second Kind of Impossible: the Extraordinary Quest For A New Form of Matter by Paul J Steinhardt
Superior: the Return of Race Science by Angela Saini
Einstein’s Unfinished Revolution: the Search for What Lies Beyond the Quantum by Lee Smolin
The Universe Speaks in Numbers: How Modern Maths Reveals Nature’s Deepest Secrets by Graham Farmelo
Catching Stardust: Comets, Asteroids and the Birth of the Solar System by Natalie Starkey
== Pictures of the Year ==
Top 10 Favourite Pictures of the Year
2015:
New Horizons uncovers Pluto's icy secrets
Lasers reveal previously unseen fossil details
Clap your eyes on the first 'images' of thunder
Could lasers guide and control the path of lightning?
Gravitational lensing creates 'Einstein's cross' of distant supernova
Revealing the secret strength of a sea sponge
Satellite sensor unexpectedly detects waves in upper atmosphere
Balloon bursts approach the speed of sound
Imaging the polarity of individual chemical bonds
Organic microflowers bloom bright
== References ==
== External links ==
Official website
Physics World's channel on YouTube | Wikipedia/Physics_World |
In physics, a hidden-variable theory is a deterministic model which seeks to explain the probabilistic nature of quantum mechanics by introducing additional, possibly inaccessible, variables.
The mathematical formulation of quantum mechanics assumes that the state of a system prior to measurement is indeterminate; quantitative bounds on this indeterminacy are expressed by the Heisenberg uncertainty principle. Most hidden-variable theories are attempts to avoid this indeterminacy, but possibly at the expense of requiring that nonlocal interactions be allowed. One notable hidden-variable theory is the de Broglie–Bohm theory.
In their 1935 EPR paper, Albert Einstein, Boris Podolsky, and Nathan Rosen argued that quantum entanglement might imply that quantum mechanics is an incomplete description of reality. In 1964, John Stewart Bell proved in his eponymous theorem that correlations between particles under any local hidden-variable theory must obey certain constraints. Subsequently, Bell test experiments have demonstrated broad violation of these constraints, ruling out such theories. Bell's theorem, however, does not rule out the possibility of nonlocal theories or superdeterminism; these therefore cannot be falsified by Bell tests.
== Motivation ==
Macroscopic physics requires classical mechanics, which allows accurate predictions of mechanical motion with reproducible, high precision. Quantum phenomena require quantum mechanics, which allows accurate predictions only of statistical averages. If quantum states had hidden variables awaiting ingenious new measurement technologies, then the latter (statistical results) might be convertible to a form of the former (classical-mechanical motion).
This classical mechanics description would eliminate unsettling characteristics of quantum theory like the uncertainty principle. More fundamentally, however, a successful model of quantum phenomena with hidden variables implies quantum entities with intrinsic values independent of measurements. Existing quantum mechanics asserts that state properties can only be known after a measurement. As N. David Mermin puts it:
It is a fundamental quantum doctrine that a measurement does not, in general, reveal a pre-existing value of the measured property. On the contrary, the outcome of a measurement is brought into being by the act of measurement itself...
In other words, whereas a hidden-variable theory would imply intrinsic particle properties, in quantum mechanics an electron has no definite position and velocity to even be revealed.
== History ==
=== "God does not play dice" ===
In June 1926, Max Born published a paper in which he was the first to clearly enunciate the probabilistic interpretation of the quantum wave function, which had been introduced by Erwin Schrödinger earlier in the year. Born concluded the paper as follows:
Here the whole problem of determinism comes up. From the standpoint of our quantum mechanics there is no quantity which in any individual case causally fixes the consequence of the collision; but also experimentally we have so far no reason to believe that there are some inner properties of the atom which condition a definite outcome for the collision. Ought we to hope later to discover such properties ... and determine them in individual cases? Or ought we to believe that the agreement of theory and experiment—as to the impossibility of prescribing conditions for a causal evolution—is a pre-established harmony founded on the nonexistence of such conditions? I myself am inclined to give up determinism in the world of atoms. But that is a philosophical question for which physical arguments alone are not decisive.
Born's interpretation of the wave function was criticized by Schrödinger, who had previously attempted to interpret it in real physical terms, but Albert Einstein's response became one of the earliest and most famous assertions that quantum mechanics is incomplete:
Quantum mechanics is very worthy of respect. But an inner voice tells me this is not the genuine article after all. The theory delivers much but it hardly brings us closer to the Old One's secret. In any event, I am convinced that He is not playing dice.
Niels Bohr reportedly replied to Einstein's later expression of this sentiment by advising him to "stop telling God what to do."
=== Early attempts at hidden-variable theories ===
Shortly after making his famous "God does not play dice" comment, Einstein attempted to formulate a deterministic counter proposal to quantum mechanics, presenting a paper at a meeting of the Academy of Sciences in Berlin, on 5 May 1927, titled "Bestimmt Schrödinger's Wellenmechanik die Bewegung eines Systems vollständig oder nur im Sinne der Statistik?" ("Does Schrödinger's wave mechanics determine the motion of a system completely or only in the statistical sense?"). However, as the paper was being prepared for publication in the academy's journal, Einstein decided to withdraw it, possibly because he discovered that, contrary to his intention, his use of Schrödinger's field to guide localized particles allowed just the kind of non-local influences he intended to avoid.
At the Fifth Solvay Congress, held in Belgium in October 1927 and attended by all the major theoretical physicists of the era, Louis de Broglie presented his own version of a deterministic hidden-variable theory, apparently unaware of Einstein's aborted attempt earlier in the year. In his theory, every particle had an associated, hidden "pilot wave" which served to guide its trajectory through space. The theory was subject to criticism at the Congress, particularly by Wolfgang Pauli, which de Broglie did not adequately answer; de Broglie abandoned the theory shortly thereafter.
=== Declaration of completeness of quantum mechanics, and the Bohr–Einstein debates ===
Also at the Fifth Solvay Congress, Max Born and Werner Heisenberg made a presentation summarizing the recent tremendous theoretical development of quantum mechanics. At the conclusion of the presentation, they declared:
[W]hile we consider ... a quantum mechanical treatment of the electromagnetic field ... as not yet finished, we consider quantum mechanics to be a closed theory, whose fundamental physical and mathematical assumptions are no longer susceptible of any modification...
On the question of the 'validity of the law of causality' we have this opinion: as long as one takes into account only experiments that lie in the domain of our currently acquired physical and quantum mechanical experience, the assumption of indeterminism in principle, here taken as fundamental, agrees with experience.
Although there is no record of Einstein responding to Born and Heisenberg during the technical sessions of the Fifth Solvay Congress, he did challenge the completeness of quantum mechanics at various times. In his tribute article for Born's retirement he discussed the quantum representation of a macroscopic ball bouncing elastically between rigid barriers. He argued that such a quantum representation does not represent a specific ball but a "time ensemble of systems". As such the representation is correct, but incomplete, because it does not represent the real individual macroscopic case. Einstein considered quantum mechanics incomplete "because the state function, in general, does not even describe the individual event/system".
=== Von Neumann's proof ===
John von Neumann in his 1932 book Mathematical Foundations of Quantum Mechanics had presented a proof that there could be no "hidden parameters" in quantum mechanics. The validity of von Neumann's proof was questioned by Grete Hermann in 1935, who found a flaw in the proof. The critical issue concerned averages over ensembles. Von Neumann assumed that a relation between the expected values of different observable quantities holds for each possible value of the "hidden parameters", rather than only for a statistical average over them. However Hermann's work went mostly unnoticed until its rediscovery by John Stewart Bell more than 30 years later.
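The assumption Hermann and Bell criticized can be made concrete with a simple example (my illustration; the specific observables are a standard textbook choice, not ones named in the text). Quantum averages satisfy ⟨A + B⟩ = ⟨A⟩ + ⟨B⟩, but von Neumann required this additivity to hold value-by-value for hypothetical dispersion-free states. For A = σx and B = σz, the possible values of A + B are ±√2, which are never sums of the individual eigenvalues ±1.

```python
# Value-wise additivity fails for non-commuting observables: the eigenvalues
# of sigma_x + sigma_z are not sums of eigenvalues of sigma_x and sigma_z.
import itertools
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

eig_sum = np.linalg.eigvalsh(sx + sz)  # eigenvalues of sigma_x + sigma_z: +/- sqrt(2)
# Every dispersion-free assignment gives +/-1 to each observable separately.
candidate_sums = sorted({a + b for a, b in itertools.product([+1, -1], repeat=2)})

print(eig_sum)         # [-1.414..., 1.414...]
print(candidate_sums)  # [-2, 0, 2] -- disjoint from +/- sqrt(2)
```

Hermann's and Bell's point was precisely this: additivity of expectation values is a property of statistical averages, so demanding it of each individual hidden-variable assignment begs the question against hidden variables.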
The validity and definitiveness of von Neumann's proof were also questioned by Hans Reichenbach, and possibly in conversation though not in print by Albert Einstein. Reportedly, in a conversation circa 1938 with his assistants Peter Bergmann and Valentine Bargmann, Einstein pulled von Neumann's book off his shelf, pointed to the same assumption critiqued by Hermann and Bell, and asked why one should believe in it. Simon Kochen and Ernst Specker rejected von Neumann's key assumption as early as 1961, but did not publish a criticism of it until 1967.
=== EPR paradox ===
Einstein argued that quantum mechanics could not be a complete theory of physical reality. He wrote,
Consider a mechanical system consisting of two partial systems A and B which interact with each other only during a limited time. Let the ψ function [i.e., wavefunction] before their interaction be given. Then the Schrödinger equation will furnish the ψ function after the interaction has taken place. Let us now determine the physical state of the partial system A as completely as possible by measurements. Then quantum mechanics allows us to determine the ψ function of the partial system B from the measurements made, and from the ψ function of the total system. This determination, however, gives a result which depends upon which of the physical quantities (observables) of A have been measured (for instance, coordinates or momenta). Since there can be only one physical state of B after the interaction which cannot reasonably be considered to depend on the particular measurement we perform on the system A separated from B it may be concluded that the ψ function is not unambiguously coordinated to the physical state. This coordination of several ψ functions to the same physical state of system B shows again that the ψ function cannot be interpreted as a (complete) description of a physical state of a single system.
Together with Boris Podolsky and Nathan Rosen, Einstein published a paper that gave a related but distinct argument against the completeness of quantum mechanics. They proposed a thought experiment involving a pair of particles prepared in what would later become known as an entangled state. Einstein, Podolsky, and Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is impossible according to the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", positing that: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." From this, they inferred that the second particle must have a definite value of both position and of momentum prior to either quantity being measured. But quantum mechanics considers these two observables incompatible and thus does not associate simultaneous values for both to any system. Einstein, Podolsky, and Rosen therefore concluded that quantum theory does not provide a complete description of reality.
Bohr answered the Einstein–Podolsky–Rosen challenge as follows:
[The argument of] Einstein, Podolsky and Rosen contains an ambiguity as regards the meaning of the expression "without in any way disturbing a system." ... [E]ven at this stage [i.e., the measurement of, for example, a particle that is part of an entangled pair], there is essentially the question of an influence on the very conditions which define the possible types of predictions regarding the future behavior of the system. Since these conditions constitute an inherent element of the description of any phenomenon to which the term "physical reality" can be properly attached, we see that the argumentation of the mentioned authors does not justify their conclusion that quantum-mechanical description is essentially incomplete.
Bohr is here choosing to define a "physical reality" as limited to a phenomenon that is immediately observable by an arbitrarily chosen and explicitly specified technique, using his own special definition of the term 'phenomenon'. He wrote in 1948:
As a more appropriate way of expression, one may strongly advocate limitation of the use of the word phenomenon to refer exclusively to observations obtained under specified circumstances, including an account of the whole experiment.
This was, of course, in conflict with the EPR criterion of reality.
=== Bell's theorem ===
In 1964, John Stewart Bell showed through his famous theorem that if local hidden variables exist, certain experiments could be performed involving quantum entanglement where the result would satisfy a Bell inequality. If, on the other hand, statistical correlations resulting from quantum entanglement could not be explained by local hidden variables, the Bell inequality would be violated. Another no-go theorem concerning hidden-variable theories is the Kochen–Specker theorem.
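The quantitative content of Bell's result is easiest to see in the CHSH variant of the inequality: local hidden-variable theories bound a particular combination of four correlations by 2, while quantum mechanics predicts up to 2√2 for an entangled singlet pair. The sketch below is an illustration, not part of the source; it evaluates the quantum correlation E(a, b) = −cos(a − b) at the standard angle settings.

```python
import math

def E(a, b):
    # Quantum-mechanical correlation of spin measurements on a singlet
    # pair, for analyser angles a and b (radians): E(a, b) = -cos(a - b).
    return -math.cos(a - b)

# Standard CHSH angle settings that maximise the quantum value.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))

# Any local hidden-variable theory satisfies S <= 2 (the CHSH inequality);
# the quantum prediction reaches 2*sqrt(2) ~ 2.83, violating that bound.
print(S)
```

Experiments of the kind mentioned below measure S directly and find values above 2, in agreement with the quantum prediction.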
Physicists such as Alain Aspect and Paul Kwiat have performed experiments that have found violations of these inequalities up to 242 standard deviations. This rules out local hidden-variable theories, but does not rule out non-local ones. Theoretically, there could be experimental problems that affect the validity of the experimental findings.
Gerard 't Hooft has disputed the validity of Bell's theorem on the basis of the superdeterminism loophole and proposed some ideas to construct local deterministic models.
== Bohm's hidden-variable theory ==
In 1952, David Bohm proposed a hidden variable theory. Bohm unknowingly rediscovered (and extended) the pilot wave theory that Louis de Broglie had proposed in 1927 (and then abandoned); hence this theory is commonly called "de Broglie–Bohm theory". Assuming the validity of Bell's theorem, any deterministic hidden-variable theory that is consistent with quantum mechanics would have to be non-local, maintaining the existence of instantaneous or faster-than-light relations (correlations) between physically separated entities.
Bohm posited both the quantum particle, e.g. an electron, and a hidden 'guiding wave' that governs its motion. Thus, in this theory electrons are quite clearly particles. When a double-slit experiment is performed, the electron goes through either one of the slits. Also, the slit passed through is not random but is governed by the (hidden) pilot wave, resulting in the wave pattern that is observed.
In Bohm's interpretation, the (non-local) quantum potential constitutes an implicate (hidden) order which organizes a particle, and which may itself be the result of yet a further implicate order: a superimplicate order which organizes a field. Nowadays Bohm's theory is considered to be one of many interpretations of quantum mechanics. Some consider it the simplest theory to explain quantum phenomena. Nevertheless, it is a hidden-variable theory, and necessarily so. The major reference for Bohm's theory today is his book with Basil Hiley, published posthumously.
A possible weakness of Bohm's theory is that some (including Einstein, Pauli, and Heisenberg) feel that it looks contrived. (Indeed, Bohm thought this of his original formulation of the theory.) Bohm said he considered his theory to be unacceptable as a physical theory due to the guiding wave's existence in an abstract multi-dimensional configuration space, rather than three-dimensional space.
== Recent developments ==
In August 2011, Roger Colbeck and Renato Renner published a proof that any extension of quantum mechanical theory, whether using hidden variables or otherwise, cannot provide a more accurate prediction of outcomes, assuming that observers can freely choose the measurement settings. Colbeck and Renner write: "In the present work, we have ... excluded the possibility that any extension of quantum theory (not necessarily in the form of local hidden variables) can help predict the outcomes of any measurement on any quantum state. In this sense, we show the following: under the assumption that measurement settings can be chosen freely, quantum theory really is complete".
In January 2013, Giancarlo Ghirardi and Raffaele Romano described a model which, "under a different free choice assumption [...] violates [the statement by Colbeck and Renner] for almost all states of a bipartite two-level system, in a possibly experimentally testable way".
== See also ==
== References ==
== Bibliography ==
Peres, Asher; Zurek, Wojciech (1982). "Is quantum theory universally valid?". American Journal of Physics. 50 (9): 807–810. Bibcode:1982AmJPh..50..807P. doi:10.1119/1.13086.
Jammer, Max (1985). "The EPR Problem in Its Historical Development". In Lahti, P.; Mittelstaedt, P. (eds.). Symposium on the Foundations of Modern Physics: 50 years of the Einstein–Podolsky–Rosen Gedankenexperiment. Singapore: World Scientific. pp. 129–149.
Fine, Arthur (1986). The Shaky Game: Einstein, Realism and the Quantum Theory. Chicago: University of Chicago Press.
In physics, especially quantum field theory, regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator. The regulator, also known as a "cutoff", models our lack of knowledge about physics at unobserved scales (e.g. scales of small size or large energy levels). It allows for the possibility, given a separation of scales, that "new physics" may be discovered at scales which the present theory is unable to model, while enabling the current theory to give accurate predictions as an "effective theory" within its intended scale of use.
It is distinct from renormalization, another technique to control infinities without assuming new physics, by adjusting for self-interaction feedback.
Regularization was for many decades controversial even amongst its inventors, as it combines physical and epistemological claims into the same equations. However, it is now well understood and has proven to yield useful, accurate predictions.
== Overview ==
Regularization procedures deal with infinite, divergent, and nonsensical expressions by introducing the auxiliary concept of a regulator (for example, a minimal distance ε in space, which is useful if the divergences arise from short-distance physical effects). The correct physical result is obtained in the limit in which the regulator goes away (in our example, ε → 0), but the virtue of the regulator is that for any finite value of it, the result is finite.
However, the result usually includes terms proportional to expressions like 1/ε which are not well defined in the limit ε → 0. Regularization is the first step towards obtaining a completely finite and meaningful result; in quantum field theory it must usually be followed by a related but independent technique called renormalization. Renormalization is based on the requirement that some physical quantities (expressed by seemingly divergent expressions such as 1/ε) are equal to the observed values. Such a constraint allows one to calculate a finite value for many other quantities that looked divergent.
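The interplay between the regulator and the renormalization step can be mimicked with a deliberately simple toy integral (an illustration only, not a field-theory computation): ∫ dr/r² over (0, 1] diverges at its lower limit, a short-distance cutoff ε makes it finite at the cost of a 1/ε term, and suitable differences of regulated quantities stay finite as ε → 0.

```python
def regulated(eps):
    # Cutoff version of the divergent integral:
    # ∫_eps^1 dr / r^2 = 1/eps - 1.
    return 1.0 / eps - 1.0

# Finite for any finite regulator, but blows up as eps -> 0.
for eps in (1e-1, 1e-3, 1e-6):
    print(eps, regulated(eps))

def renormalized_difference(eps):
    # A renormalization-style subtraction: the 1/eps pieces cancel,
    # leaving a result independent of the regulator.
    return regulated(eps) - (1.0 / eps - 2.0)

print(renormalized_difference(1e-8))
```

The subtraction here plays the role of fixing a divergent expression to an observed value: once one 1/ε quantity is pinned down, differences against it are finite and regulator-independent.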
The existence of a limit as ε goes to zero and the independence of the final result from the regulator are nontrivial facts. The underlying reason for them lies in universality, as shown by Kenneth Wilson and Leo Kadanoff, and in the existence of a second-order phase transition. Sometimes, taking the limit as ε goes to zero is not possible. This is the case when we have a Landau pole and for nonrenormalizable couplings like the Fermi interaction. However, even for these two examples, if the regulator only gives reasonable results for ε ≫ ℏc/Λ (where Λ is an upper energy cutoff) and we are working with scales of the order of ℏc/Λ′, then regulators with ℏc/Λ ≪ ε ≪ ℏc/Λ′ still give pretty accurate approximations. The physical reason why we cannot take the limit of ε going to zero is the existence of new physics below Λ.
It is not always possible to define a regularization such that the limit of ε going to zero is independent of the regularization. In this case, one says that the theory contains an anomaly. Anomalous theories have been studied in great detail and are often founded on the celebrated Atiyah–Singer index theorem or variations thereof (see, for example, the chiral anomaly).
== Classical physics example ==
The problem of infinities first arose in the classical electrodynamics of point particles in the 19th and early 20th century.
The mass of a charged particle should include the mass–energy in its electrostatic field (electromagnetic mass). Assume that the particle is a charged spherical shell of radius re. The mass–energy in the field is
{\displaystyle m_{\mathrm {em} }=\int {1 \over 2}E^{2}\,dV=\int _{r_{e}}^{\infty }{\frac {1}{2}}\left({q \over 4\pi r^{2}}\right)^{2}4\pi r^{2}\,dr={q^{2} \over 8\pi r_{e}},}
which becomes infinite as re → 0. This implies that the point particle would have infinite inertia, making it unable to be accelerated. Incidentally, the value of re that makes
{\displaystyle m_{\mathrm {em} }}
equal to the electron mass is called the classical electron radius, which (setting
{\displaystyle q=e}
and restoring factors of c and
{\displaystyle \varepsilon _{0}}
) turns out to be
{\displaystyle r_{e}={e^{2} \over 4\pi \varepsilon _{0}m_{\mathrm {e} }c^{2}}=\alpha {\hbar \over m_{\mathrm {e} }c}\approx 2.8\times 10^{-15}\ \mathrm {m} .}
where
{\displaystyle \alpha \approx 1/137.040}
is the fine-structure constant, and
{\displaystyle \hbar /m_{\mathrm {e} }c}
is the reduced Compton wavelength of the electron.
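The quoted value can be checked directly from standard constants; the sketch below (an illustration, not part of the source) evaluates both expressions for the classical electron radius and confirms they agree.

```python
import math

# CODATA values (SI units, rounded)
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m_e  = 9.1093837015e-31   # electron mass, kg
c    = 2.99792458e8       # speed of light, m/s
hbar = 1.054571817e-34    # reduced Planck constant, J*s

# First form: r_e = e^2 / (4*pi*eps0 * m_e * c^2)
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)

# Second form: r_e = alpha * (reduced Compton wavelength)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
r_e_alt = alpha * hbar / (m_e * c)

print(r_e)        # ~ 2.82e-15 m
print(1 / alpha)  # ~ 137.04
```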
Regularization: Classical physics theory breaks down at small scales, e.g., the difference between an electron and a point particle shown above. Addressing this problem requires new kinds of additional physical constraints. For instance, in this case, assuming a finite electron radius (i.e., regularizing the electron mass-energy) suffices to explain the system below a certain size. Similar regularization arguments work in other renormalization problems. For example, a theory may hold under one narrow set of conditions, but due to calculations involving infinities or singularities, it may break down under other conditions or scales. In the case of the electron, another way to avoid infinite mass-energy while retaining the point nature of the particle is to postulate tiny additional dimensions over which the particle could 'spread out', rather than restricting its motion solely to 3D space. This is precisely the motivation behind string theory and other multi-dimensional models, including those with multiple time dimensions. Renormalization, which assumes interactions between the particle and other surrounding particles in its environment rather than unknown new physics, offers an alternative strategy for resolving infinities in such classical problems.
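The finite electron radius described above acts exactly like a regulator: the field mass-energy m_em = q²/(8π·r_e) is finite for any finite shell radius and diverges only in the point limit r_e → 0. A minimal sketch (illustrative units with q = 1, matching the prefactor of the formula above):

```python
import math

def m_em(q, r_e):
    # Field mass-energy of a charged shell of radius r_e:
    # m_em = q^2 / (8*pi*r_e), as in the electrostatic integral above.
    return q**2 / (8 * math.pi * r_e)

# Finite radius -> finite mass-energy; shrinking the radius
# (i.e. removing the regulator) makes it diverge as 1/r_e.
for r in (1.0, 1e-3, 1e-6):
    print(r, m_em(1.0, r))
```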
== Specific types ==
Specific types of regularization procedures include
Dimensional regularization
Pauli–Villars regularization
Lattice regularization
Zeta function regularization
Causal regularization
Hadamard regularization
== Realistic regularization ==
=== Conceptual problem ===
Perturbative predictions by quantum field theory about quantum scattering of elementary particles, implied by a corresponding Lagrangian density, are computed using the Feynman rules. However, a regularization method is needed to circumvent ultraviolet divergences so as to obtain finite results for Feynman diagrams containing loops. Also a renormalization scheme is needed. Regularization results in regularized n-point Green's functions (propagators), and a suitable limiting procedure (a renormalization scheme) leading to perturbative S-matrix elements. These are independent of the particular regularization method used, and enable one to model perturbatively the measurable physical processes (cross sections, probability amplitudes, decay widths and lifetimes of excited states). However, so far no known regularized n-point Green's functions can be regarded as being based on a physically realistic theory of quantum-scattering since the derivation of each disregards some of the basic tenets of conventional physics (e.g., by not being Lorentz-invariant, by introducing either unphysical particles with a negative metric or wrong statistics, or discrete space-time, or lowering the dimensionality of space-time, or some combination thereof). So the available regularization methods are understood as formalistic technical devices, devoid of any direct physical meaning. In addition, there are qualms about renormalization. For a history and comments on this more than half-a-century old open conceptual problem, see e.g.
=== Pauli's conjecture ===
As it seems that the vertices of non-regularized Feynman series adequately describe interactions in quantum scattering, it is taken that their ultraviolet divergences are due to the asymptotic, high-energy behavior of the Feynman propagators. So it is a prudent, conservative approach to retain the vertices in Feynman series, and modify only the Feynman propagators to create a regularized Feynman series. This is the reasoning behind the formal Pauli–Villars covariant regularization by modification of Feynman propagators through auxiliary unphysical particles; cf. the representation of physical reality by Feynman diagrams.
In 1949 Pauli conjectured there is a realistic regularization, which is implied by a theory that respects all the established principles of contemporary physics. So its propagators (i) do not need to be regularized, and (ii) can be regarded as such a regularization of the propagators used in quantum field theories that might reflect the underlying physics. The additional parameters of such a theory do not need to be removed (i.e. the theory needs no renormalization) and may provide some new information about the physics of quantum scattering, though they may turn out experimentally to be negligible. By contrast, any present regularization method introduces formal coefficients that must eventually be disposed of by renormalization.
=== Opinions ===
Paul Dirac was persistently and extremely critical of renormalization procedures. In 1963, he wrote, "… in the renormalization theory we have a theory that has defied all the attempts of the mathematician to make it sound. I am inclined to suspect that the renormalization theory is something that will not survive in the future,…" He further observed that "One can distinguish between two main procedures for a theoretical physicist. One of them is to work from the experimental basis ... The other procedure is to work from the mathematical basis. One examines and criticizes the existing theory. One tries to pin-point the faults in it and then tries to remove them. The difficulty here is to remove the faults without destroying the very great successes of the existing theory."
Abdus Salam remarked in 1972, "Field-theoretic infinities first encountered in Lorentz's computation of electron have persisted in classical electrodynamics for seventy and in quantum electrodynamics for some thirty-five years. These long years of frustration have left in the subject a curious affection for the infinities and a passionate belief that they are an inevitable part of nature; so much so that even the suggestion of a hope that they may after all be circumvented - and finite values for the renormalization constants computed - is considered irrational."
However, in Gerard ’t Hooft’s opinion, "History tells us that if we hit upon some obstacle, even if it looks like a pure formality or just a technical complication, it should be carefully scrutinized. Nature might be telling us something, and we should find out what it is."
The difficulty with a realistic regularization is that so far there is none, although nothing could be destroyed by its bottom-up approach; and there is no experimental basis for it.
=== Minimal realistic regularization ===
Considering distinct theoretical problems, Dirac in 1963 suggested: "I believe separate ideas will be needed to solve these distinct problems and that they will be solved one at a time through successive stages in the future evolution of physics. At this point I find myself in disagreement with most physicists. They are inclined to think one master idea will be discovered that will solve all these problems together. I think it is asking too much to hope that anyone will be able to solve all these problems together. One should separate them one from another as much as possible and try to tackle them separately. And I believe the future development of physics will consist of solving them one at a time, and that after any one of them has been solved there will still be a great mystery about how to attack further ones."
According to Dirac, "Quantum electrodynamics is the domain of physics that we know most about, and presumably it will have to be put in order before we can hope to make any fundamental progress with other field theories, although these will continue to develop on the experimental basis."
Dirac’s two preceding remarks suggest that we should start searching for a realistic regularization in the case of quantum electrodynamics (QED) in the four-dimensional Minkowski spacetime, starting with the original QED Lagrangian density.
The path-integral formulation provides the most direct way from the Lagrangian density to the corresponding Feynman series in its Lorentz-invariant form. The free-field part of the Lagrangian density determines the Feynman propagators, whereas the rest determines the vertices. As the QED vertices are considered to adequately describe interactions in QED scattering, it makes sense to modify only the free-field part of the Lagrangian density so as to obtain such regularized Feynman series that the Lehmann–Symanzik–Zimmermann reduction formula provides a perturbative S-matrix that: (i) is Lorentz-invariant and unitary; (ii) involves only the QED particles; (iii) depends solely on QED parameters and those introduced by the modification of the Feynman propagators—for particular values of these parameters it is equal to the QED perturbative S-matrix; and (iv) exhibits the same symmetries as the QED perturbative S-matrix. Let us refer to such a regularization as the minimal realistic regularization, and start searching for the corresponding, modified free-field parts of the QED Lagrangian density.
== Transport theoretic approach ==
According to Bjorken and Drell, it would make physical sense to sidestep ultraviolet divergences by using more detailed description than can be provided by differential field equations. And Feynman noted about the use of differential equations: "... for neutron diffusion it is only an approximation that is good when the distance over which we are looking is large compared with the mean free path. If we looked more closely, we would see individual neutrons running around." And then he wondered, "Could it be that the real world consists of little X-ons which can be seen only at very tiny distances? And that in our measurements we are always observing on such a large scale that we can’t see these little X-ons, and that is why we get the differential equations? ... Are they [therefore] also correct only as a smoothed-out imitation of a really much more complicated microscopic world?"
Already in 1938, Heisenberg proposed that a quantum field theory can provide only an idealized, large-scale description of quantum dynamics, valid for distances larger than some fundamental length, expected also by Bjorken and Drell in 1965. Feynman's preceding remark provides a possible physical reason for its existence; either that or it is just another way of saying the same thing (there is a fundamental unit of distance) but having no new information.
== Hints at new physics ==
The need for regularization terms in any quantum field theory of quantum gravity is a major motivation for physics beyond the standard model. Infinities of the non-gravitational forces in QFT can be controlled via renormalization alone, but additional regularization, and hence new physics, is required uniquely for gravity. The regularizers model, and work around, the breakdown of QFT at small scales, and thus show clearly the need for some other theory to come into play beyond QFT at these scales. A. Zee (Quantum Field Theory in a Nutshell, 2003) considers this to be a benefit of the regularization framework: theories can work well in their intended domains but also contain information about their own limitations and point clearly to where new physics is needed.
== See also ==
Zeldovich regularization
== References ==
Atomic theory is the scientific theory that matter is composed of particles called atoms. The definition of the word "atom" has changed over the years in response to scientific discoveries. Initially, it referred to a hypothetical concept of there being some fundamental particle of matter, too small to be seen by the naked eye, that could not be divided. Then the definition was refined to being the basic particles of the chemical elements, when chemists observed that elements seemed to combine with each other in ratios of small whole numbers. Then physicists discovered that these particles had an internal structure of their own and therefore perhaps did not deserve to be called "atoms", but renaming atoms would have been impractical by that point.
Atomic theory is one of the most important scientific developments in history, crucial to all the physical sciences. At the start of The Feynman Lectures on Physics, physicist and Nobel laureate Richard Feynman offers the atomic hypothesis as the single most prolific scientific concept.
== Philosophical atomism ==
The basic idea that matter is made up of tiny indivisible particles is an old idea that appeared in many ancient cultures. The word atom is derived from the ancient Greek word atomos, which means "uncuttable". This ancient idea was based in philosophical reasoning rather than scientific reasoning. Modern atomic theory is not based on these old concepts. In the early 19th century, the scientist John Dalton noticed that chemical substances seemed to combine with each other by discrete and consistent units of weight, and he decided to use the word atom to refer to these units.
== Groundwork ==
Working in the late 17th century, Robert Boyle developed the concept of a chemical element as a substance distinct from a compound.
Near the end of the 18th century, a number of important developments in chemistry emerged without reference to the notion of an atomic theory. The first came from Antoine Lavoisier, who showed that compounds consist of elements in constant proportion, redefining an element as a substance which scientists could not decompose into simpler substances by experimentation. This brought an end to the ancient idea of the elements of matter being fire, earth, air, and water, which had no experimental support. Lavoisier showed that water can be decomposed into hydrogen and oxygen, which in turn he could not decompose into anything simpler, thereby proving that these are elements. Lavoisier also defined the law of conservation of mass, which states that in a chemical reaction matter does not appear or disappear into thin air; the total mass remains the same even if the substances involved are transformed. Finally, there was the law of definite proportions, established by the French chemist Joseph Proust in 1797, which states that if a compound is broken down into its constituent chemical elements, then the masses of those constituents will always have the same proportions by weight, regardless of the quantity or source of the original compound. This definition distinguished compounds from mixtures.
== Dalton's law of multiple proportions ==
John Dalton studied data gathered by himself and by other scientists. He noticed a pattern that later came to be known as the law of multiple proportions: in compounds which contain two particular elements, the amount of Element A per measure of Element B will differ across these compounds by ratios of small whole numbers. This suggested that each element combines with other elements in multiples of a basic quantity.
In 1804, Dalton explained his atomic theory to his friend and fellow chemist Thomas Thomson, who published an explanation of Dalton's theory in his book A System of Chemistry in 1807. According to Thomson, Dalton's idea first occurred to him when experimenting with "olefiant gas" (ethylene) and "carburetted hydrogen gas" (methane). Dalton found that "carburetted hydrogen gas" contains twice as much hydrogen per measure of carbon as "olefiant gas", and concluded that a molecule of "olefiant gas" is one carbon atom and one hydrogen atom, and a molecule of "carburetted hydrogen gas" is one carbon atom and two hydrogen atoms. In reality, an ethylene molecule has two carbon atoms and four hydrogen atoms (C2H4), and a methane molecule has one carbon atom and four hydrogen atoms (CH4). In this particular case, Dalton was mistaken about the formulas of these compounds, but he got them right in the following examples:
Example 1 — tin oxides: Dalton identified two types of tin oxide. One is a grey powder that Dalton referred to as "the protoxide of tin", which is 88.1% tin and 11.9% oxygen. The other is a white powder which Dalton referred to as "the deutoxide of tin", which is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the grey powder there is about 13.5 g of oxygen for every 100 g of tin, and in the white powder there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. These compounds are known today as tin(II) oxide (SnO) and tin(IV) oxide (SnO2). In Dalton's terminology, a "protoxide" is a molecule containing a single oxygen atom, and a "deutoxide" molecule has two. The modern equivalents of his terms would be monoxide and dioxide.
Example 2 — iron oxides: Dalton identified two oxides of iron. There is one type of iron oxide that is a black powder which Dalton referred to as "the protoxide of iron", which is 78.1% iron and 21.9% oxygen. The other iron oxide is a red powder, which Dalton referred to as "the intermediate or red oxide of iron" which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black powder there is about 28 g of oxygen for every 100 g of iron, and in the red powder there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. These compounds are iron(II) oxide and iron(III) oxide and their formulas are FeO and Fe2O3 respectively. Iron(II) oxide's formula is normally written as FeO, but since it is a crystalline substance one could alternatively write it as Fe2O2, and when we contrast that with Fe2O3, the 2:3 ratio stands out plainly. Dalton described the "intermediate oxide" as being "2 atoms protoxide and 1 of oxygen", which adds up to two atoms of iron and three of oxygen. That averages to one and a half atoms of oxygen for every iron atom, putting it midway between a "protoxide" and a "deutoxide".
Example 3 — nitrogen oxides: Dalton was aware of three oxides of nitrogen: "nitrous oxide", "nitrous gas", and "nitric acid". These compounds are known today as nitrous oxide, nitric oxide, and nitrogen dioxide respectively. "Nitrous oxide" is 63.6% nitrogen and 36.4% oxygen, which means it has 80 g of oxygen for every 140 g of nitrogen. "Nitrous gas" is 46.7% nitrogen and 53.3% oxygen, which means there is 160 g of oxygen for every 140 g of nitrogen. "Nitric acid" is 30.4% nitrogen and 69.6% oxygen, which means it has 320 g of oxygen for every 140 g of nitrogen. 80 g, 160 g, and 320 g form a ratio of 1:2:4. The formulas for these compounds are N2O, NO, and NO2.
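The arithmetic in these examples reduces to one step: convert each percentage composition into grams of oxygen per fixed mass of the other element, then compare. A short sketch reproducing the tin-oxide figures from Example 1:

```python
def oxygen_per_100g(metal_pct, oxygen_pct):
    # Grams of oxygen combined with 100 g of the metal,
    # computed from a percentage composition by mass.
    return 100.0 * oxygen_pct / metal_pct

# Dalton's two tin oxides (modern formulas SnO and SnO2).
grey  = oxygen_per_100g(88.1, 11.9)   # "protoxide of tin"
white = oxygen_per_100g(78.7, 21.3)   # "deutoxide of tin"

print(round(grey, 1))          # about 13.5
print(round(white, 1))         # about 27.1
print(round(white / grey, 2))  # about 2.0, i.e. a 1:2 ratio
```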
Dalton defined an atom as being the "ultimate particle" of a chemical substance, and he used the term "compound atom" to refer to "ultimate particles" which contain two or more elements. This is inconsistent with the modern definition, wherein an atom is the basic particle of a chemical element and a molecule is an agglomeration of atoms. The term "compound atom" was confusing to some of Dalton's contemporaries as the word "atom" implies indivisibility, but he responded that if a carbon dioxide "atom" is divided, it ceases to be carbon dioxide. The carbon dioxide "atom" is indivisible in the sense that it cannot be divided into smaller carbon dioxide particles.
Dalton made the following assumptions on how "elementary atoms" combined to form "compound atoms" (what we today refer to as molecules). When two elements can only form one compound, he assumed it was one atom of each, which he called a "binary compound". If two elements can form two compounds, the first compound is a binary compound and the second is a "ternary compound" consisting of one atom of the first element and two of the second. If two elements can form three compounds between them, then the third compound is a "quaternary" compound containing one atom of the first element and three of the second. Dalton thought that water was a "binary compound", i.e. one hydrogen atom and one oxygen atom. Dalton did not know that in their natural gaseous state, the ultimate particles of oxygen, nitrogen, and hydrogen exist in pairs (O2, N2, and H2). Nor was he aware of valencies. These properties of atoms were discovered later in the 19th century.
Because atoms were too small to be directly weighed using the methods of the 19th century, Dalton instead expressed the weights of atoms of the various elements as multiples of the weight of the hydrogen atom, hydrogen being the lightest element. By his measurements, 7 grams of oxygen will combine with 1 gram of hydrogen to make 8 grams of water with nothing left over, and assuming a water molecule to be one oxygen atom and one hydrogen atom, he concluded that oxygen's atomic weight is 7. In reality it is 16. Aside from the crudity of early 19th-century measurement tools, the main reason for this error was that Dalton didn't know that the water molecule in fact has two hydrogen atoms, not one. Had he known, he would have doubled his estimate to a more accurate 14. This error was corrected in 1811 by Amedeo Avogadro. Avogadro proposed that equal volumes of any two gases, at equal temperature and pressure, contain equal numbers of molecules (in other words, the mass of a gas's particles does not affect the volume that it occupies). Avogadro's hypothesis, now usually called Avogadro's law, provided a method for deducing the relative weights of the molecules of gaseous elements, for if the hypothesis is correct relative gas densities directly indicate the relative weights of the particles that compose the gases. This way of thinking led directly to a second hypothesis: the particles of certain elemental gases were pairs of atoms, and when reacting chemically these molecules often split in two. For instance, the fact that two liters of hydrogen will react with just one liter of oxygen to produce two liters of water vapor (at constant pressure and temperature) suggested that a single oxygen molecule splits in two in order to form two molecules of water. Thus the formula of water is H2O, not HO. Avogadro measured oxygen's atomic weight to be 15.074.
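How strongly the inferred atomic weight depends on the assumed molecular formula can be made explicit. The helper below is illustrative, not historical:

```python
def inferred_weight(combining_mass, reference_mass, atoms_of_reference=1):
    """Relative atomic weight implied by combining masses, given an
    assumed number of reference (hydrogen) atoms per molecule."""
    return (combining_mass / reference_mass) * atoms_of_reference

# Dalton's data: 7 g of oxygen combine with 1 g of hydrogen.
print(inferred_weight(7, 1, atoms_of_reference=1))  # HO assumed  -> 7.0
print(inferred_weight(7, 1, atoms_of_reference=2))  # H2O assumed -> 14.0
```

With the modern combining ratio of 8 g of oxygen per gram of hydrogen, the same H2O correction gives the accepted value of 16.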
== Opposition to atomic theory ==
Dalton's atomic theory attracted widespread interest but not everyone accepted it at first. The law of multiple proportions was shown not to be a universal law when it came to organic substances, whose molecules can be quite large. For instance, in oleic acid there is 34 g of hydrogen for every 216 g of carbon, and in methane there is 72 g of hydrogen for every 216 g of carbon. 34 and 72 form a ratio of 17:36, which is not a ratio of small whole numbers. We know now that carbon-based substances can have very large molecules, larger than molecules formed by any other element. Oleic acid's formula is C18H34O2 and methane's is CH4. The law of multiple proportions by itself was not complete proof, and atomic theory was not universally accepted until the end of the 19th century.
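Reducing the two combining masses by their greatest common divisor shows why organic compounds strained the law. The snippet assumes the integer gram figures given above:

```python
from math import gcd

def reduced_ratio(a, b):
    """Reduce two combining masses to lowest terms."""
    g = gcd(a, b)
    return a // g, b // g

# Hydrogen per 216 g of carbon: 34 g in oleic acid, 72 g in methane
print(reduced_ratio(34, 72))   # (17, 36) -- not small whole numbers
# Contrast Dalton's iron oxides: 28 g vs. 42 g of oxygen per 100 g iron
print(reduced_ratio(28, 42))   # (2, 3)
```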
One problem was the lack of uniform nomenclature. The word "atom" implied indivisibility, but Dalton defined an atom as being the ultimate particle of any chemical substance, not just the elements or even matter per se. This meant that "compound atoms" such as carbon dioxide could be divided, as opposed to "elementary atoms". Dalton disliked the word "molecule", regarding it as "diminutive". Amedeo Avogadro did the opposite: he exclusively used the word "molecule" in his writings, eschewing the word "atom" and instead using the term "elementary molecule". Jöns Jacob Berzelius used the term "organic atoms" to refer to particles containing three or more elements, because he thought such particles existed only in organic compounds. Jean-Baptiste Dumas used the terms "physical atoms" and "chemical atoms"; a "physical atom" was a particle that could not be divided by physical means such as temperature and pressure, and a "chemical atom" was a particle that could not be divided by chemical reactions.
The modern definitions of atom and molecule—an atom being the basic particle of an element, and a molecule being an agglomeration of atoms—were established in the latter half of the 19th century. A key event was the Karlsruhe Congress in Germany in 1860. As the first international congress of chemists, its goal was to establish some standards in the community. A major proponent of the modern distinction between atoms and molecules was Stanislao Cannizzaro.
In Cannizzaro's words: "The various quantities of a particular element involved in the constitution of different molecules are integral multiples of a fundamental quantity that always manifests itself as an indivisible entity and which must properly be named atom."
Cannizzaro criticized past chemists such as Berzelius for not accepting that the particles of certain gaseous elements are actually pairs of atoms, which led to mistakes in their formulation of certain compounds. Berzelius believed that hydrogen gas and chlorine gas particles are solitary atoms. But he observed that when one liter of hydrogen reacts with one liter of chlorine, they form two liters of hydrogen chloride instead of one. Berzelius decided that Avogadro's law does not apply to compounds. Cannizzaro preached that if scientists just accepted the existence of single-element molecules, such discrepancies in their findings would be easily resolved. But Berzelius did not even have a word for that. Berzelius used the term "elementary atom" for a gas particle which contained just one element and "compound atom" for particles which contained two or more elements, but there was nothing to distinguish H2 from H since Berzelius did not believe in H2. So Cannizzaro called for a redefinition so that scientists could understand that a hydrogen molecule can split into two hydrogen atoms in the course of a chemical reaction.
A second objection to atomic theory was philosophical. Scientists in the 19th century had no way of directly observing atoms. They inferred the existence of atoms through indirect observations, such as Dalton's law of multiple proportions. Some scientists adopted positions aligned with the philosophy of positivism, arguing that scientists should not attempt to deduce the deeper reality of the universe, but only systematize what patterns they could directly observe.: 232
This generation of anti-atomists can be grouped in two camps.
The "equivalentists", like Marcellin Berthelot, believed the theory of equivalent weights was adequate for scientific purposes. This generalization of Proust's law of definite proportions summarized observations. For example, 1 gram of hydrogen will combine with 8 grams of oxygen to form 9 grams of water, therefore the "equivalent weight" of oxygen is 8 grams. The "energeticists", like Ernst Mach and Wilhelm Ostwald, were philosophically opposed to hypotheses about underlying reality altogether. In their view, only energy, as treated in thermodynamics, should be the basis of physical models.: 237
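The equivalentists' bookkeeping needed no atoms at all, only combining masses. A minimal sketch of the calculation, with names of our choosing:

```python
def equivalent_weight(mass_combining, mass_hydrogen):
    """Mass of an element that combines with 1 g of hydrogen;
    no molecular formula is assumed anywhere."""
    return mass_combining / mass_hydrogen

# 8 g of oxygen combine with 1 g of hydrogen to form 9 g of water,
# so oxygen's equivalent weight is 8.
print(equivalent_weight(8.0, 1.0))  # -> 8.0
```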
These positions were eventually quashed by two important advancements that happened later in the 19th century: the development of the periodic table and the discovery that molecules have an internal architecture that determines their properties.
== Isomerism ==
Scientists discovered some substances have the exact same chemical content but different properties. For instance, in 1827, Friedrich Wöhler discovered that silver fulminate and silver cyanate are both 107 parts silver, 12 parts carbon, 14 parts nitrogen, and 16 parts oxygen (both are now known to have the formula AgCNO). In 1830 Jöns Jacob Berzelius introduced the term isomerism to describe the phenomenon. In 1860, Louis Pasteur hypothesized that the molecules of isomers might have the same set of atoms but in different arrangements.
In 1874, Jacobus Henricus van 't Hoff proposed that the carbon atom bonds to other atoms in a tetrahedral arrangement. Working from this, he explained the structures of organic molecules in such a way that he could predict how many isomers a compound could have. Consider, for example, pentane (C5H12). In van 't Hoff's way of modelling molecules, there are three possible configurations for pentane, and scientists did go on to discover three and only three isomers of pentane.
Isomerism was not something that could be fully explained by alternative theories to atomic theory, such as radical theory and the theory of types.
== Mendeleev's periodic table ==
Dmitrii Mendeleev noticed that when he arranged the elements in a row according to their atomic weights, there was a certain periodicity to them.: 117 For instance, the second element, lithium, had similar properties to the ninth element, sodium, and the sixteenth element, potassium — a period of seven. Likewise, beryllium, magnesium, and calcium were similar and all were seven places apart from each other on Mendeleev's table. Using these patterns, Mendeleev predicted the existence and properties of new elements, which were later discovered in nature: scandium, gallium, and germanium.: 118 Moreover, the periodic table could predict how many atoms of other elements an atom could bond with — e.g., germanium and carbon are in the same group on the table and their atoms both combine with two oxygen atoms each (GeO2 and CO2). Mendeleev found these patterns validated atomic theory because they showed that the elements could be categorized by their atomic weight. Inserting a new element into the middle of a period would break the parallel between that period and the next, and would also violate Dalton's law of multiple proportions.
The elements on the periodic table were originally arranged in order of increasing atomic weight. However, in a number of places chemists chose to swap the positions of certain adjacent elements so that they appeared in a group with other elements with similar properties. For instance, tellurium is placed before iodine even though tellurium is heavier (127.6 vs 126.9) so that iodine can be in the same column as the other halogens. The modern periodic table is instead based on atomic number, which is equivalent to the nuclear charge; this change had to wait for the discovery of the nucleus.: 228
In addition, an entire row of the table was not shown because the noble gases had not been discovered when Mendeleev devised his table.: 222
== Statistical mechanics ==
In 1738, Swiss physicist and mathematician Daniel Bernoulli postulated that the pressure of gases and heat were both caused by the underlying motion of particles. Using his model he could predict the ideal gas law at constant temperature and suggested that the temperature was proportional to the velocity of the particles. These results were largely ignored for a century.: 25
James Clerk Maxwell, a vocal proponent of atomism, revived the kinetic theory in 1860 and 1867. His key insight was that the velocity of particles in a gas would vary around an average value, introducing the concept of a distribution function.: 26 Ludwig Boltzmann and Rudolf Clausius expanded his work on gases and the laws of thermodynamics, especially the second law, which concerns entropy. In the 1870s, Josiah Willard Gibbs extended the laws of entropy and thermodynamics and coined the term "statistical mechanics."
Boltzmann defended the atomistic hypothesis against major detractors from the time like Ernst Mach or energeticists like Wilhelm Ostwald, who considered that energy was the elementary quantity of reality.
At the beginning of the 20th century, Albert Einstein independently reinvented Gibbs' laws, because they had only been printed in an obscure American journal. Einstein later commented that had he known of Gibbs' work, he would "not have published those papers at all, but confined myself to the treatment of some few points [that were distinct]." All of statistical mechanics and the laws of heat, gas, and entropy took the existence of atoms as a necessary postulate.
=== Brownian motion ===
In 1827, the British botanist Robert Brown observed that dust particles inside pollen grains floating in water constantly jiggled about for no apparent reason. In 1905, Einstein theorized that this Brownian motion was caused by the water molecules continuously knocking the grains about, and developed a mathematical model to describe it. This model was validated experimentally in 1908 by French physicist Jean Perrin, who used Einstein's equations to measure the size of atoms.
== Discovery of the electron ==
Atoms were thought to be the smallest possible division of matter until 1899 when J. J. Thomson discovered the electron through his work on cathode rays.: 86 : 364
A Crookes tube is a sealed glass container in which two electrodes are separated by a vacuum. When a voltage is applied across the electrodes, cathode rays are generated, creating a glowing patch where they strike the glass at the opposite end of the tube. Through experimentation, Thomson discovered that the rays could be deflected by electric fields and magnetic fields, which meant that these rays were not a form of light but were composed of very light charged particles, and their charge was negative. Thomson called these particles "corpuscles". He measured their mass-to-charge ratio to be several orders of magnitude smaller than that of the hydrogen atom, the smallest atom. This ratio was the same regardless of what the electrodes were made of and what the trace gas in the tube was.
In contrast to those corpuscles, positive ions created by electrolysis or X-ray radiation had mass-to-charge ratios that varied depending on the material of the electrodes and the type of gas in the reaction chamber, indicating they were different kinds of particles.: 363
In 1898, Thomson measured the charge on ions to be roughly 6 × 10−10 electrostatic units (2 × 10−19 coulombs).: 85 In 1899, he showed that negative electricity created by ultraviolet light landing on a metal (known now as the photoelectric effect) has the same mass-to-charge ratio as cathode rays; then he applied his previous method for determining the charge on ions to the negative electric particles created by ultraviolet light.: 86 By this combination he showed that the electron's mass was 0.0014 times that of hydrogen ions. These "corpuscles" were so light yet carried so much charge that Thomson concluded they must be the basic particles of electricity, and for that reason other scientists decided that these "corpuscles" should instead be called electrons, following an 1894 suggestion by George Johnstone Stoney for naming the basic unit of electrical charge.
In 1904, Thomson published a paper describing a new model of the atom. Electrons reside within atoms, and in an electric current they pass from one atom to the next in a chain. When electrons do not flow, their negative charge must be balanced out by some source of positive charge within the atom so as to render the atom electrically neutral. Having no clue as to the source of this positive charge, Thomson tentatively proposed that the positive charge was everywhere in the atom, the atom being shaped like a sphere—this was the mathematically simplest model to fit the available evidence (or lack of it). The balance of electrostatic forces would distribute the electrons throughout this sphere in a more or less even manner. Thomson further explained that ions are atoms that have a surplus or shortage of electrons.
Thomson's model is popularly known as the plum pudding model, based on the idea that the electrons are distributed throughout the sphere of positive charge with the same density as raisins in a plum pudding. Neither Thomson nor his colleagues ever used this analogy. It seems to have been a conceit of popular science writers. The analogy suggests that the positive sphere is like a solid, but Thomson likened it to a liquid, as he proposed that the electrons moved around in it in patterns governed by the electrostatic forces. Thus the positive electrification in Thomson's model was a provisional concept. Thomson's model was incomplete: it could not predict any of the known properties of the atom, such as emission spectra or valencies.
In 1906, Robert A. Millikan and Harvey Fletcher performed the oil drop experiment in which they measured the charge of an electron to be about -1.6 × 10−19 coulombs, a value now defined as -1 e. Since the hydrogen ion and the electron were known to be indivisible and a hydrogen atom is neutral in charge, it followed that the positive charge in hydrogen was equal to this value, i.e. 1 e.
== Discovery of the nucleus ==
Thomson's plum pudding model was challenged in 1911 by one of his former students, Ernest Rutherford, who presented a new model to explain new experimental data. The new model proposed a concentrated center of charge and mass that was later dubbed the atomic nucleus.: 296
Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden came to have doubts about the Thomson model after they encountered difficulties when they tried to build an instrument to measure the charge-to-mass ratio of alpha particles (these are positively-charged particles emitted by certain radioactive substances such as radium). The alpha particles were being scattered by the air in the detection chamber, which made the measurements unreliable. Thomson had encountered a similar problem in his work on cathode rays, which he solved by creating a near-perfect vacuum in his instruments. Rutherford didn't think he'd run into this same problem because alpha particles usually have much more momentum than electrons. According to Thomson's model of the atom, the positive charge in the atom is not concentrated enough to produce an electric field strong enough to deflect an alpha particle. Yet there was scattering, so Rutherford and his colleagues decided to investigate this scattering carefully.
Between 1908 and 1913, Rutherford and his colleagues performed a series of experiments in which they bombarded thin foils of metal with a beam of alpha particles. They spotted alpha particles being deflected by angles greater than 90°. According to Thomson's model, all of the alpha particles should have passed through with negligible deflection. Rutherford deduced that the positive charge of the atom is not distributed throughout the atom's volume as Thomson believed, but is concentrated in a tiny nucleus at the center. This nucleus also carries most of the atom's mass. Only such an intense concentration of charge, anchored by its high mass, could produce an electric field strong enough to deflect the alpha particles as observed. Rutherford's model, being supported primarily by scattering data unfamiliar to many scientists, did not catch on until Niels Bohr joined Rutherford's lab and developed a new model for the electrons.: 304
Rutherford's model predicted that the scattering of alpha particles would be proportional to the square of the atomic charge. Geiger and Marsden based their analysis on setting the charge to half of the atomic weight of the foil's material (gold, aluminium, etc.). Amateur physicist Antonius van den Broek noted that there was a more precise relation between the charge and the element's numeric sequence in the order of atomic weights. The sequence number came to be called the atomic number, and it replaced atomic weight in organizing the periodic table.
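The square-of-the-charge dependence can be sketched as a relative scattering rate; constants and the angular dependence of the full Rutherford formula are omitted, and the reference element is our choice:

```python
def relative_scattering(Z, Z_ref=13):
    """Scattering rate of a foil with nuclear charge Z, relative to a
    reference foil (here aluminium, Z = 13); proportional to Z**2."""
    return (Z / Z_ref) ** 2

# Gold (Z = 79) should scatter roughly 37 times more than aluminium.
print(round(relative_scattering(79), 1))
```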
== Bohr model ==
Rutherford deduced the existence of the atomic nucleus through his experiments but he had nothing to say about how the electrons were arranged around it. In 1912, Niels Bohr joined Rutherford's lab and began his work on a quantum model of the atom.: 19
Max Planck in 1900 and Albert Einstein in 1905 had postulated that light energy is emitted or absorbed in discrete amounts known as quanta (singular, quantum). This led to a series of atomic models with some quantum aspects, such as that of Arthur Erich Haas in 1910: 197 and the 1912 John William Nicholson atomic model with angular momentum quantized as h/2π. The dynamical structure of these models was still classical, but in 1913, Bohr abandoned the classical approach. He started his Bohr model of the atom with a quantum hypothesis: an electron could only orbit the nucleus in particular circular orbits with fixed angular momentum and energy, its distance from the nucleus (i.e., its radius) being proportional to its energy.: 197 Under this model an electron could not lose energy in a continuous manner; instead, it could only make instantaneous "quantum leaps" between the fixed energy levels. When this occurred, light was emitted or absorbed at a frequency proportional to the change in energy (hence the absorption and emission of light in discrete spectra).
In a trilogy of papers Bohr described and applied his model to derive the Balmer series of lines in the atomic spectrum of hydrogen and the related spectrum of He+.: 197 He also used the model to describe the structure of the periodic table and aspects of chemical bonding. Together these results led to Bohr's model being widely accepted by the end of 1915.: 91
Bohr's model was not perfect. It could only predict the spectral lines of hydrogen, not those of multielectron atoms. Worse still, it could not even account for all features of the hydrogen spectrum: as spectrographic technology improved, it was discovered that applying a magnetic field caused spectral lines to multiply in a way that Bohr's model couldn't explain. In 1916, Arnold Sommerfeld added elliptical orbits to the Bohr model to explain the extra emission lines, but this made the model very difficult to use, and it still couldn't explain more complex atoms.
== Discovery of isotopes ==
While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one variety of some elements. The term isotope was coined by Margaret Todd as a suitable name for these varieties.
That same year, J. J. Thomson conducted an experiment in which he channeled a stream of neon ions through magnetic and electric fields, striking a photographic plate at the other end. He observed two glowing patches on the plate, which suggested two different deflection trajectories. Thomson concluded this was because some of the neon ions had a different mass. The nature of this differing mass would later be explained by the discovery of neutrons in 1932: all atoms of the same element contain the same number of protons, while different isotopes have different numbers of neutrons.
== Discovery of the proton ==
Back in 1815, William Prout observed that the atomic weights of the known elements were multiples of hydrogen's atomic weight, so he hypothesized that all atoms are agglomerations of hydrogen, a particle which he dubbed "the protyle". Prout's hypothesis was put into doubt when some elements were found to deviate from this pattern—e.g. chlorine atoms on average weigh 35.45 daltons—but when isotopes were discovered in 1913, Prout's observation gained renewed attention.
In 1898, J. J. Thomson found that the positive charge of a hydrogen ion was equal to the negative charge of a single electron.
In an April 1911 paper concerning his studies on alpha particle scattering, Ernest Rutherford estimated that the charge of an atomic nucleus, expressed as a multiplier of hydrogen's nuclear charge (qe), is roughly half the atom's atomic weight.
In June 1911, Van den Broek noted that on the periodic table, each successive chemical element increased in atomic weight on average by 2, which in turn suggested that each successive element's nuclear charge increased by 1 qe. In 1913, van den Broek further proposed that the electric charge of an atom's nucleus, expressed as a multiplier of the elementary charge, is equal to the element's sequential position on the periodic table. Rutherford defined this position as being the element's atomic number.
In 1913, Henry Moseley measured the X-ray emissions of all the elements on the periodic table and found that the frequency of the X-ray emissions was a mathematical function of the element's atomic number and the charge of a hydrogen nucleus (see Moseley's law).
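In its modern form, Moseley's law for the Kα line says the frequency grows as the square of (Z − 1). The sketch below uses the rounded Rydberg frequency; the function name is ours:

```python
RYDBERG_FREQ = 3.29e15  # Hz, Rydberg constant times the speed of light

def k_alpha_frequency(Z):
    """Approximate K-alpha X-ray frequency for atomic number Z
    (Moseley's law: nu = (3/4) * R * c * (Z - 1)**2)."""
    return 0.75 * RYDBERG_FREQ * (Z - 1) ** 2

# Copper, Z = 29: about 1.9e18 Hz, in line with the measured Cu K-alpha line.
print(f"{k_alpha_frequency(29):.2e}")
```

Because the frequency depends on Z and not on atomic weight, Moseley's measurements pinned down each element's atomic number directly.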
In 1917 Rutherford bombarded nitrogen gas with alpha particles and observed hydrogen ions being emitted from the gas. Rutherford concluded that the alpha particles struck the nuclei of the nitrogen atoms, causing hydrogen ions to split off.
These observations led Rutherford to conclude that the hydrogen nucleus was a singular particle with a positive charge equal to that of the electron's negative charge. The name "proton" was suggested by Rutherford at an informal meeting of fellow physicists in Cardiff in 1920.
The charge number of an atomic nucleus was found to be equal to the element's ordinal position on the periodic table. The nuclear charge number thus provided a simple and clear-cut way of distinguishing the chemical elements from each other, as opposed to Lavoisier's classic definition of a chemical element being a substance that cannot be broken down into simpler substances by chemical reactions. The charge number or proton number was thereafter referred to as the atomic number of the element. In 1923, the International Committee on Chemical Elements officially declared the atomic number to be the distinguishing quality of a chemical element.
During the 1920s, some writers defined the atomic number as being the number of "excess protons" in a nucleus. Before the discovery of the neutron, scientists believed that the atomic nucleus contained a number of "nuclear electrons" which cancelled out the positive charge of some of its protons. This explained why the atomic weights of most atoms were higher than their atomic numbers. Helium, for instance, was thought to have four protons and two nuclear electrons in the nucleus, leaving two excess protons and a net nuclear charge of 2+. After the neutron was discovered, scientists realized the helium nucleus in fact contained two protons and two neutrons.
== Discovery of the neutron ==
Physicists in the 1920s believed that the atomic nucleus contained protons plus a number of "nuclear electrons" that reduced the overall charge. These "nuclear electrons" were distinct from the electrons that orbited the nucleus. This incorrect hypothesis would have explained why the atomic numbers of the elements were less than their atomic weights, and why radioactive elements emit electrons (beta radiation) in the process of nuclear decay. Rutherford even hypothesized that a proton and an electron could bind tightly together into a "neutral doublet". Rutherford wrote that the existence of such "neutral doublets" moving freely through space would provide a more plausible explanation for how the heavier elements could have formed in the genesis of the Universe, given that it is hard for a lone proton to fuse with a large atomic nucleus because of the repulsive electric field.
In 1928, Walter Bothe observed that beryllium emitted a highly penetrating, electrically neutral radiation when bombarded with alpha particles. It was later discovered that this radiation could knock hydrogen atoms out of paraffin wax. Initially it was thought to be high-energy gamma radiation, since gamma radiation had a similar effect on electrons in metals, but James Chadwick found that the ionization effect was too strong for it to be due to electromagnetic radiation, so long as energy and momentum were conserved in the interaction. In 1932, Chadwick exposed various elements, such as hydrogen and nitrogen, to the mysterious "beryllium radiation", and by measuring the energies of the recoiling charged particles, he deduced that the radiation was actually composed of electrically neutral particles which could not be massless like the gamma ray, but instead were required to have a mass similar to that of a proton. Chadwick called this new particle "the neutron" and believed it to be a proton and electron fused together, because the neutron had about the same mass as a proton and an electron's mass is negligible by comparison. Neutrons are not in fact a fusion of a proton and an electron.
== Modern quantum mechanical models ==
In 1924, Louis de Broglie proposed that all particles—particularly subatomic particles such as electrons—have an associated wave. Erwin Schrödinger, fascinated by this idea, developed an equation that describes an electron as a wave function instead of a point. This approach predicted many of the spectral phenomena that Bohr's model failed to explain, but it was difficult to visualize, and faced opposition. One of its critics, Max Born, proposed instead that Schrödinger's wave function did not describe the physical extent of an electron (like a charge distribution in classical electromagnetism), but rather gave the probability that an electron would, when measured, be found at a particular point. This reconciled the ideas of wave-like and particle-like electrons: the behavior of an electron, or of any other subatomic entity, has both wave-like and particle-like aspects, and whether one aspect or the other is observed depends upon the experiment.
A consequence of describing particles as waveforms rather than points is that it is mathematically impossible to calculate with precision both the position and momentum of a particle at a given point in time. This became known as the uncertainty principle, a concept first introduced by Werner Heisenberg in 1927.
Schrödinger's wave model for hydrogen replaced Bohr's model, with its neat, clearly defined circular orbits. The modern model of the atom describes the positions of electrons in an atom in terms of probabilities. An electron can potentially be found at any distance from the nucleus, but, depending on its energy level and angular momentum, exists more frequently in certain regions around the nucleus than others; this pattern is referred to as its atomic orbital. The orbitals come in a variety of shapes—sphere, dumbbell, torus, etc.—with the nucleus in the middle. The shapes of atomic orbitals are found by solving the Schrödinger equation. Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the hydrogen atom and the hydrogen molecular ion. Beginning with the helium atom—which contains just two electrons—numerical methods are used to solve the Schrödinger equation.
Qualitatively, the shapes of the atomic orbitals of multi-electron atoms resemble the states of the hydrogen atom. The Pauli principle requires the distribution of these electrons within the atomic orbitals such that no more than two electrons are assigned to any one orbital; this requirement profoundly affects the atomic properties and ultimately the bonding of atoms into molecules.: 182
== See also ==
== Footnotes ==
== Bibliography ==
Feynman, R.P.; Leighton, R.B.; Sands, M. (1963). The Feynman Lectures on Physics. Vol. 1. ISBN 978-0-201-02116-5.
Andrew G. van Melsen (1960) [First published 1952]. From Atomos to Atom: The History of the Concept Atom. Translated by Henry J. Koren. Dover Publications. ISBN 0-486-49584-1.
J. P. Millington (1906). John Dalton. J. M. Dent & Co. (London); E. P. Dutton & Co. (New York).
Jaume Navarro (2012). A History of the Electron: J. J. and G. P. Thomson. Cambridge University Press. ISBN 978-1-107-00522-8.
Trusted, Jennifer (1999). The Mystery of Matter. MacMillan. ISBN 0-333-76002-6.
Bernard Pullman (1998). The Atom in the History of Human Thought. Translated by Axel Reisinger. Oxford University Press. ISBN 0-19-511447-7.
Jean Perrin (1910) [1909]. Brownian Movement and Molecular Reality. Translated by F. Soddy. Taylor and Francis.
Ida Freund (1904). The Study of Chemical Composition. Cambridge University Press.
Thomas Thomson (1807). A System of Chemistry: In Five Volumes, Volume 3. John Brown.
Thomas Thomson (1831). The History of Chemistry, Volume 2. H. Colburn, and R. Bentley.
John Dalton (1808). A New System of Chemical Philosophy vol. 1.
John Dalton (1817). A New System of Chemical Philosophy vol. 2.
Stanislao Cannizzaro (1858). Sketch of a Course of Chemical Philosophy. The Alembic Club.
== Further reading ==
Charles Adolphe Wurtz (1881) The Atomic Theory, D. Appleton and Company, New York.
Alan J. Rocke (1984) Chemical Atomism in the Nineteenth Century: From Dalton to Cannizzaro, Ohio State University Press, Columbus (open access full text at http://digital.case.edu/islandora/object/ksl%3Ax633gj985).
== External links ==
Atomism by S. Mark Cohen.
Atomic Theory – detailed information on atomic theory with respect to electrons and electricity.
The Feynman Lectures on Physics Vol. I Ch. 1: Atoms in Motion
Objective-collapse theories, also known as spontaneous collapse models or dynamical reduction models, are proposed solutions to the measurement problem in quantum mechanics. As with other interpretations of quantum mechanics, they are possible explanations of why and how quantum measurements always give definite outcomes, not a superposition of them as predicted by the Schrödinger equation, and more generally how the classical world emerges from quantum theory. The fundamental idea is that the unitary evolution of the wave function describing the state of a quantum system is approximate: it works well for microscopic systems, but progressively loses its validity as the mass and complexity of the system increase.
In collapse theories, the Schrödinger equation is supplemented with additional nonlinear and stochastic terms (spontaneous collapses) which localize the wave function in space. The resulting dynamics is such that for microscopic isolated systems, the new terms have a negligible effect; therefore, the usual quantum properties are recovered, apart from very tiny deviations. Such deviations can potentially be detected in dedicated experiments, and efforts are increasing worldwide towards testing them.
An inbuilt amplification mechanism makes sure that for macroscopic systems consisting of many particles, the collapse becomes stronger than the quantum dynamics. Then their wave function is always well-localized in space, so well-localized that it behaves, for all practical purposes, like a point moving in space according to Newton's laws.
In this sense, collapse models provide a unified description of microscopic and macroscopic systems, avoiding the conceptual problems associated to measurements in quantum theory.
The most well-known examples of such theories are:
Ghirardi–Rimini–Weber (GRW) model
Continuous spontaneous localization (CSL) model
Diósi–Penrose (DP) model
Collapse theories stand in opposition to many-worlds interpretation theories, in that they hold that a process of wave function collapse curtails the branching of the wave function and removes unobserved behaviour.
== History of collapse theories ==
Philip Pearle's 1976 paper pioneered quantum nonlinear stochastic equations to model the collapse of the wave function in a dynamical way; this formalism was later used for the CSL model. However, these models lacked the character of "universality" of the dynamics, i.e. applicability to an arbitrary physical system (at least at the non-relativistic level), a necessary condition for any model to become a viable option.
The next major advance came in 1986, when Ghirardi, Rimini and Weber published the paper with the meaningful title “Unified dynamics for microscopic and macroscopic systems”, where they presented what is now known as the GRW model, after the initials of the authors. The model has two guiding principles:
The position basis states are used in the dynamic state reduction (the "preferred basis" is position);
The modification must reduce superpositions for macroscopic objects without altering the microscopic predictions.
In 1990 the efforts of the GRW group on one side, and of P. Pearle on the other, were brought together in the formulation of the Continuous Spontaneous Localization (CSL) model, in which the Schrödinger dynamics and a randomly fluctuating classical field produce collapse into spatially localized eigenstates.
In the late 1980s and 1990s, Diósi, Penrose and others independently formulated the idea that the wave function collapse is related to gravity. The dynamical equation is structurally similar to the CSL equation.
== Most popular models ==
Three models are most widely discussed in the literature:
Ghirardi–Rimini–Weber (GRW) model: It is assumed that each constituent of a physical system independently undergoes spontaneous collapses. The collapses are random in time, distributed according to a Poisson distribution; they are random in space and are more likely to occur where the wave function is larger. In between collapses, the wave function evolves according to the Schrödinger equation. For composite systems, the collapse on each constituent causes the collapse of the center of mass wave functions.
Continuous spontaneous localization (CSL) model: The Schrödinger equation is supplemented with a nonlinear and stochastic diffusion process driven by a suitably chosen universal noise coupled to the mass-density of the system, which counteracts the quantum spread of the wave function. As for the GRW model, the larger the system, the stronger the collapse, thus explaining the quantum-to-classical transition as a progressive breakdown of quantum linearity, when the system's mass increases. The CSL model is formulated in terms of identical particles.
Diósi–Penrose (DP) model: Diósi and Penrose formulated the idea that gravity is responsible for the collapse of the wave function. Penrose argued that, in a quantum gravity scenario where a spatial superposition creates the superposition of two different spacetime curvatures, gravity does not tolerate such superpositions and spontaneously collapses them. He also provided a phenomenological formula for the collapse time. Independently and prior to Penrose, Diósi presented a dynamical model that collapses the wave function with the same time scale suggested by Penrose.
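As a purely illustrative toy (not any of the models above in full: the Schrödinger evolution between hits and all physical constants are omitted, and the grid, packet and collapse-width parameters are invented), the GRW-style localization step can be sketched in one dimension. A collapse centre is drawn with probability |ψ|², the wave function is multiplied by a Gaussian around that centre, and renormalized:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D illustration of a single GRW localization event.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# Start from a wide superposition: two well-separated Gaussian packets.
psi = np.exp(-(x - 4.0) ** 2) + np.exp(-(x + 4.0) ** 2)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

def grw_collapse(psi, x, dx, rc=1.0):
    """One GRW 'hit': pick a centre with probability |psi|^2,
    multiply by a Gaussian of (assumed) width rc, renormalize."""
    prob = np.abs(psi) ** 2 * dx
    centre = rng.choice(x, p=prob / prob.sum())
    psi = psi * np.exp(-((x - centre) ** 2) / (2 * rc ** 2))
    return psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

def spread(psi, x, dx):
    p = np.abs(psi) ** 2 * dx
    mean = np.sum(p * x)
    return np.sqrt(np.sum(p * (x - mean) ** 2))

before = spread(psi, x, dx)
after = spread(grw_collapse(psi, x, dx), x, dx)
print(after < before)  # True: the hit localizes the wave function
```

The same mechanism, applied more often to larger systems, is what the amplification argument above relies on.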
The Quantum Mechanics with Universal Position Localization (QMUPL) model should also be mentioned, as should the extension of the GRW model to identical particles formulated by Tumulka, for which several important mathematical results regarding the collapse equations have been proved.
In all models listed so far, the noise responsible for the collapse is Markovian (memoryless): either a Poisson process in the discrete GRW model, or a white noise in the continuous models. The models can be generalized to include arbitrary (colored) noises, possibly with a frequency cutoff: the CSL model has been extended to its colored version (cCSL), as well as the QMUPL model (cQMUPL). In these new models the collapse properties remain basically unaltered, but specific physical predictions can change significantly.
In all collapse models, the noise effect must break quantum mechanical linearity and unitarity, and thus cannot be described within quantum mechanics.
Because the noise responsible for the collapse induces Brownian motion on each constituent of a physical system, energy is not conserved. The kinetic energy increases at a constant rate. Such a feature can be modified, without altering the collapse properties, by including appropriate dissipative effects in the dynamics. This has been achieved for the GRW, CSL, QMUPL and DP models, obtaining their dissipative counterparts (dGRW, dCSL, dQMUPL and dDP). The QMUPL model has been further generalized to include both colored noise and dissipative effects (the dcQMUPL model).
== Tests of collapse models ==
Collapse models modify the Schrödinger equation; therefore, they make predictions that differ from standard quantum mechanical predictions. Although the deviations are difficult to detect, there is a growing number of experiments searching for spontaneous collapse effects. They can be classified in two groups:
Interferometric experiments. They are refined versions of the double-slit experiment, showing the wave nature of matter (and light). The modern versions are meant to increase the mass of the system, the time of flight, and/or the delocalization distance in order to create ever larger superpositions. The most prominent experiments of this kind are with atoms, molecules and phonons.
Non-interferometric experiments. These are based on the fact that the collapse noise, besides collapsing the wave function, also induces diffusion on top of the particles' motion, which acts at all times, even when the wave function is already localized. Experiments of this kind involve cold atoms, opto-mechanical systems, gravitational wave detectors and underground experiments.
== Problems and criticisms to collapse theories ==
=== Violation of the principle of the conservation of energy ===
According to collapse theories, energy is not conserved, even for isolated particles. More precisely, in the GRW, CSL and DP models the kinetic energy increases at a constant rate, which is small but non-zero.
This is often presented as an unavoidable consequence of Heisenberg's uncertainty principle: the collapse in position causes a larger uncertainty in momentum. This explanation is wrong; in collapse theories the collapse in position also determines a localization in momentum, driving the wave function to an almost minimum uncertainty state both in position and in momentum, compatibly with Heisenberg's principle. The reason the energy increases is that the collapse noise diffuses the particle, thus accelerating it.
This is the same situation as in classical Brownian motion, and similarly this increase can be stopped by adding dissipative effects. Dissipative versions of the QMUPL, GRW, CSL and DP models exist, where the collapse properties are left unaltered with respect to the original models, while the energy thermalizes to a finite value (therefore it can even decrease, depending on its initial value).
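The classical analogy can be checked numerically. In the toy simulation below (all parameters are arbitrary illustrative values), a velocity driven only by random kicks heats up indefinitely, while adding a friction term (an Ornstein–Uhlenbeck process) makes the mean kinetic energy settle near the finite value D/(2γ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Classical analogue of the energy behaviour of collapse models:
# random kicks alone heat the particle; kicks plus friction gamma
# (Ornstein-Uhlenbeck dynamics) thermalize the kinetic energy.
D = 1.0      # noise strength (toy value)
gamma = 5.0  # friction coefficient (toy value)

def mean_kinetic_energy(gamma, steps=5_000, n=1_000, dt=1e-3):
    v = np.zeros(n)  # an ensemble of n independent particles
    for _ in range(steps):
        v += -gamma * v * dt + rng.normal(0.0, np.sqrt(2 * D * dt), size=n)
    return 0.5 * np.mean(v ** 2)

e_heating = mean_kinetic_energy(0.0)    # no dissipation: grows with time
e_thermal = mean_kinetic_energy(gamma)  # dissipative: near D / (2 * gamma)
print(e_heating > e_thermal)            # True
```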
Still, in the dissipative models the energy is not strictly conserved. A resolution might come from treating the noise as a dynamical variable with its own energy, which is exchanged with the quantum system in such a way that the total energy of system and noise together is conserved.
=== Relativistic collapse models ===
One of the biggest challenges in collapse theories is to make them compatible with relativistic requirements. The GRW, CSL and DP models are not. The biggest difficulty is how to combine the nonlocal character of the collapse, which is necessary in order to make it compatible with the experimentally verified violation of Bell inequalities, with the relativistic principle of locality. Models exist that attempt to generalize in a relativistic sense the GRW and CSL models, but their status as relativistic theories is still unclear. The formulation of a proper Lorentz-covariant theory of continuous objective collapse is still a matter of research.
=== Tails problem ===
In all collapse theories, the wave function is never fully contained within one (small) region of space, because the Schrödinger term of the dynamics will always spread it outside. Therefore, wave functions always contain tails stretching out to infinity, although their "weight" is smaller in larger systems. Critics of collapse theories argue that it is not clear how to interpret these tails. Two distinct problems have been discussed in the literature. The first is the "bare" tails problem: it is not clear how to interpret these tails because they amount to the system never being really fully localized in space. A special case of this problem is known as the "counting anomaly". Supporters of collapse theories mostly dismiss this criticism as a misunderstanding of the theory, as in the context of dynamical collapse theories, the absolute square of the wave function is interpreted as an actual matter density. In this case, the tails merely represent an immeasurably small amount of smeared-out matter. This leads into the second problem, however, the so-called "structured tails problem": it is not clear how to interpret these tails because even though their "amount of matter" is small, that matter is structured like a perfectly legitimate world. Thus, after the box is opened and Schrödinger's cat has collapsed to the "alive" state, there still exists a tail of the wave function containing a "low matter" entity structured like a dead cat. Collapse theorists have offered a range of possible solutions to the structured tails problem, but it remains an open problem.
== See also ==
== References ==
== External links ==
Giancarlo Ghirardi, Collapse Theories, Stanford Encyclopedia of Philosophy (First published Thu Mar 7, 2002; substantive revision Fri May 15, 2020)
"Physics Experiments Spell Doom for Quantum 'Collapse' Theory". Quanta Magazine. 2022-10-20. Retrieved 2022-10-21.
Matrix mechanics is a formulation of quantum mechanics created by Werner Heisenberg, Max Born, and Pascual Jordan in 1925. It was the first conceptually autonomous and logically consistent formulation of quantum mechanics. Its account of quantum jumps supplanted the Bohr model's electron orbits. It did so by interpreting the physical properties of particles as matrices that evolve in time. It is equivalent to the Schrödinger wave formulation of quantum mechanics, as manifest in Dirac's bra–ket notation.
In some contrast to the wave formulation, it produces spectra of (mostly energy) operators by purely algebraic, ladder operator methods. Relying on these methods, Wolfgang Pauli derived the hydrogen atom spectrum in 1926, before the development of wave mechanics.
== Development of matrix mechanics ==
In 1925, Werner Heisenberg, Max Born, and Pascual Jordan formulated the matrix mechanics representation of quantum mechanics.
=== Epiphany at Helgoland ===
In 1925 Werner Heisenberg was working in Göttingen on the problem of calculating the spectral lines of hydrogen. By May 1925 he began trying to describe atomic systems by observables only. On June 7, after weeks of failing to alleviate his hay fever with aspirin and cocaine, Heisenberg left for the pollen-free North Sea island of Helgoland. While there, in between climbing and memorizing poems from Goethe's West-östlicher Diwan, he continued to ponder the spectral issue and eventually realised that adopting non-commuting observables might solve the problem. He later wrote:
It was about three o'clock at night when the final result of the calculation lay before me. At first I was deeply shaken. I was so excited that I could not think of sleep. So I left the house and awaited the sunrise on the top of a rock.
=== The three fundamental papers ===
After Heisenberg returned to Göttingen, he showed Wolfgang Pauli his calculations, commenting at one point:
Everything is still vague and unclear to me, but it seems as if the electrons will no more move on orbits.
On July 9 Heisenberg gave the same paper of his calculations to Max Born, saying that "he had written a crazy paper and did not dare to send it in for publication, and that Born should read it and advise him" prior to publication. Heisenberg then departed for a while, leaving Born to analyse the paper.
In the paper, Heisenberg formulated quantum theory without sharp electron orbits. Hendrik Kramers had earlier calculated the relative intensities of spectral lines in the Sommerfeld model by interpreting the Fourier coefficients of the orbits as intensities. But his answer, like all other calculations in the old quantum theory, was only correct for large orbits.
Heisenberg, after a collaboration with Kramers, began to understand that the transition probabilities were not quite classical quantities, because the only frequencies that appear in the Fourier series should be the ones that are observed in quantum jumps, not the fictional ones that come from Fourier-analyzing sharp classical orbits. He replaced the classical Fourier series with a matrix of coefficients, a fuzzed-out quantum analog of the Fourier series. Classically, the Fourier coefficients give the intensity of the emitted radiation, so in quantum mechanics the magnitude of the matrix elements of the position operator were the intensity of radiation in the bright-line spectrum. The quantities in Heisenberg's formulation were the classical position and momentum, but now they were no longer sharply defined. Each quantity was represented by a collection of Fourier coefficients with two indices, corresponding to the initial and final states.
When Born read the paper, he recognized the formulation as one which could be transcribed and extended to the systematic language of matrices, which he had learned from his study under Jakob Rosanes at Breslau University. Born, with the help of his assistant and former student Pascual Jordan, began immediately to make the transcription and extension, and they submitted their results for publication; the paper was received for publication just 60 days after Heisenberg's paper.
A follow-on paper was submitted for publication before the end of the year by all three authors. (A brief review of Born's role in the development of the matrix mechanics formulation of quantum mechanics along with a discussion of the key formula involving the non-commutativity of the probability amplitudes can be found in an article by Jeremy Bernstein. A detailed historical and technical account can be found in Mehra and Rechenberg's book The Historical Development of Quantum Theory. Volume 3. The Formulation of Matrix Mechanics and Its Modifications 1925–1926.)
Up until this time, matrices were seldom used by physicists; they were considered to belong to the realm of pure mathematics. Gustav Mie had used them in a paper on electrodynamics in 1912 and Born had used them in his work on the lattice theory of crystals in 1921. While matrices were used in these cases, the algebra of matrices with their multiplication did not enter the picture as it did in the matrix formulation of quantum mechanics.
Born, however, had learned matrix algebra from Rosanes, as already noted, but he had also learned Hilbert's theory of integral equations and quadratic forms in infinitely many variables, as is apparent from his citation of Hilbert's work Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen, published in 1912.
Jordan, too, was well equipped for the task. For a number of years, he had been an assistant to Richard Courant at Göttingen in the preparation of Courant and David Hilbert's book Methoden der mathematischen Physik I, which was published in 1924. This book, fortuitously, contained a great many of the mathematical tools necessary for the continued development of quantum mechanics.
In 1926, John von Neumann became assistant to David Hilbert, and he would coin the term Hilbert space to describe the algebra and analysis which were used in the development of quantum mechanics.
A linchpin contribution to this formulation was achieved in Dirac's reinterpretation/synthesis paper of 1925, which invented the language and framework usually employed today, in full display of the noncommutative structure of the entire construction.
=== Heisenberg's reasoning ===
Before matrix mechanics, the old quantum theory described the motion of a particle by a classical orbit, with well defined position and momentum X(t), P(t), with the restriction that the time integral over one period T of the momentum times the velocity must be a positive integer multiple of the Planck constant
{\displaystyle \int _{0}^{T}P\;{\frac {dX}{dt}}\;dt=\int _{0}^{T}P\;dX=nh.}
While this restriction correctly selects orbits with more or less the right energy values En, the old quantum mechanical formalism did not describe time-dependent processes, such as the emission or absorption of radiation.
When a classical particle is weakly coupled to a radiation field, so that the radiative damping can be neglected, it will emit radiation in a pattern that repeats itself every orbital period. The frequencies that make up the outgoing wave are then integer multiples of the orbital frequency, and this is a reflection of the fact that X(t) is periodic, so that its Fourier representation has frequencies 2πn/T only.
{\displaystyle X(t)=\sum _{n=-\infty }^{\infty }e^{2\pi int/T}X_{n}.}
The coefficients Xn are complex numbers. The ones with negative frequencies must be the complex conjugates of the ones with positive frequencies, so that X(t) will always be real,
{\displaystyle X_{n}=X_{-n}^{*}.}
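The reality condition on the Fourier coefficients is easy to verify numerically. The sketch below (with an arbitrary sample orbit standing in for X(t)) computes the coefficients of a real periodic signal with a discrete Fourier transform and checks that X_n equals the complex conjugate of X_{-n}:

```python
import numpy as np

# Fourier coefficients of a real periodic signal X(t), period T = 1.
# The signal itself is an arbitrary illustrative choice.
N = 256
t = np.arange(N) / N
X = 1.5 * np.cos(2 * np.pi * t) + 0.4 * np.sin(6 * np.pi * t)  # real X(t)

# numpy stores the coefficient for frequency -n at index N - n,
# so coeffs[-n] is exactly X_{-n}.
coeffs = np.fft.fft(X) / N
for n in range(1, N // 2):
    assert np.allclose(coeffs[n], np.conj(coeffs[-n]))
print("X_n = conj(X_-n) holds for a real signal")
```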
A quantum mechanical particle, on the other hand, cannot emit radiation continuously; it can only emit photons. Assuming that the quantum particle started in orbit number n, emitted a photon, then ended up in orbit number m, the energy of the photon is En − Em, which means that its frequency is (En − Em)/h.
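As a concrete instance of the rule ν = (En − Em)/h, the sketch below uses the Bohr energy levels of hydrogen, En = −13.6 eV/n², to recover the wavelength of the Lyman-alpha line (the constants are standard values):

```python
# Photon frequency for a quantum jump, nu = (E_n - E_m) / h, illustrated
# with the Bohr energy levels of hydrogen, E_n = -13.6 eV / n^2.
h_eV = 4.135667696e-15   # Planck constant in eV*s
c = 2.99792458e8         # speed of light in m/s

def bohr_energy(n):
    return -13.6 / n**2  # energy of level n in eV (Bohr model)

# Lyman-alpha transition: n = 2 -> m = 1, photon energy 10.2 eV.
nu = (bohr_energy(2) - bohr_energy(1)) / h_eV  # frequency in Hz
wavelength_nm = c / nu * 1e9
print(round(wavelength_nm, 1))  # ~121.6 nm, the observed Lyman-alpha line
```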
For large n and m, but with n − m relatively small, these are the classical frequencies by Bohr's correspondence principle
{\displaystyle E_{n}-E_{m}\approx {\frac {h(n-m)}{T}}.}
In the formula above, T is the classical period of either orbit n or orbit m, since the difference between them is higher order in h. But for small n and m, or if n − m is large, the frequencies are not integer multiples of any single frequency.
Since the frequencies that the particle emits are the same as the frequencies in the Fourier description of its motion, this suggests that something in the time-dependent description of the particle is oscillating with frequency (En − Em)/h. Heisenberg called this quantity Xnm, and demanded that it should reduce to the classical Fourier coefficients in the classical limit. For large values of n and m but with n − m relatively small, Xnm is the (n − m)th Fourier coefficient of the classical motion at orbit n. Since Xnm has opposite frequency to Xmn, the condition that X is real becomes
{\displaystyle X_{nm}=X_{mn}^{*}.}
By definition, Xnm only has the frequency (En − Em)/h, so its time evolution is simple:
{\displaystyle X_{nm}(t)=e^{2\pi i(E_{n}-E_{m})t/h}X_{nm}(0)=e^{i(E_{n}-E_{m})t/\hbar }X_{nm}(0).}
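This evolution law can be checked numerically: multiplying each element by its phase is the same as conjugating the matrix by exp(iHt/ħ) when H is diagonal. The energies and matrix below are arbitrary illustrative values, in units with ħ = 1:

```python
import numpy as np

rng = np.random.default_rng(2)

# Heisenberg's law X_nm(t) = exp(i (E_n - E_m) t / hbar) X_nm(0) is
# conjugation by U = exp(i H t / hbar) for a diagonal Hamiltonian H.
hbar, t = 1.0, 0.7
E = np.array([0.5, 1.5, 2.5])   # assumed energy levels
X0 = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
X0 = (X0 + X0.conj().T) / 2     # Hermitian, like a position matrix

# Element-wise phases e^{i (E_n - E_m) t / hbar} ...
phases = np.exp(1j * (E[:, None] - E[None, :]) * t / hbar)
Xt_elementwise = phases * X0

# ... agree with U X0 U^dagger.
U = np.diag(np.exp(1j * E * t / hbar))
Xt_conjugation = U @ X0 @ U.conj().T
print(np.allclose(Xt_elementwise, Xt_conjugation))  # True
```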
This is the original form of Heisenberg's equation of motion.
Given two arrays Xnm and Pnm describing two physical quantities, Heisenberg could form a new array of the same type by combining the terms XnkPkm, which also oscillate with the right frequency. Since the Fourier coefficients of the product of two quantities are the convolution of the Fourier coefficients of each one separately, the correspondence with Fourier series allowed Heisenberg to deduce the rule by which the arrays should be multiplied,
{\displaystyle (XP)_{mn}=\sum _{k=0}^{\infty }X_{mk}P_{kn}.}
Born pointed out that this is the law of matrix multiplication, so that the position, the momentum, the energy, all the observable quantities in the theory, are interpreted as matrices. Under this multiplication rule, the product depends on the order: XP is different from PX.
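Born's observation can be checked directly. In the minimal numpy illustration below, two arbitrary 2×2 Hermitian matrices stand in for observables; the combination rule is ordinary matrix multiplication, and the product depends on the order:

```python
import numpy as np

# The rule (XP)_mn = sum_k X_mk P_kn is matrix multiplication, and the
# product is order-dependent. X and P here are arbitrary illustrative
# Hermitian matrices, not the oscillator matrices built later.
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])
P = np.array([[0.0, -1.0j],
              [1.0j, 0.0]])

XP = X @ P
PX = P @ X
print(np.allclose(XP, PX))  # False: XP differs from PX
```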
The X matrix is a complete description of the motion of a quantum mechanical particle. Because the frequencies in the quantum motion are not multiples of a common frequency, the matrix elements cannot be interpreted as the Fourier coefficients of a sharp classical trajectory. Nevertheless, as matrices, X(t) and P(t) satisfy the classical equations of motion; also see Ehrenfest's theorem, below.
=== Matrix basics ===
When it was introduced by Werner Heisenberg, Max Born and Pascual Jordan in 1925, matrix mechanics was not immediately accepted and was at first a source of controversy. Schrödinger's later introduction of wave mechanics was greatly favored.
Part of the reason was that Heisenberg's formulation was in an odd mathematical language, for the time, while Schrödinger's formulation was based on familiar wave equations. But there was also a deeper sociological reason. Quantum mechanics had been developing by two paths, one led by Einstein, who emphasized the wave–particle duality he proposed for photons, and the other led by Bohr, that emphasized the discrete energy states and quantum jumps that Bohr discovered. De Broglie had reproduced the discrete energy states within Einstein's framework – the quantum condition is the standing wave condition, and this gave hope to those in the Einstein school that all the discrete aspects of quantum mechanics would be subsumed into a continuous wave mechanics.
Matrix mechanics, on the other hand, came from the Bohr school, which was concerned with discrete energy states and quantum jumps. Bohr's followers did not appreciate physical models that pictured electrons as waves, or as anything at all. They preferred to focus on the quantities that were directly connected to experiments.
In atomic physics, spectroscopy gave observational data on atomic transitions arising from the interactions of atoms with light quanta. The Bohr school required that only those quantities that were in principle measurable by spectroscopy should appear in the theory. These quantities include the energy levels and their intensities but they do not include the exact location of a particle in its Bohr orbit. It is very hard to imagine an experiment that could determine whether an electron in the ground state of a hydrogen atom is to the right or to the left of the nucleus. It was a deep conviction that such questions did not have an answer.
The matrix formulation was built on the premise that all physical observables are represented by matrices, whose elements are indexed by two different energy levels. The set of eigenvalues of the matrix were eventually understood to be the set of all possible values that the observable can have. Since Heisenberg's matrices are Hermitian, the eigenvalues are real.
If an observable is measured and the result is a certain eigenvalue, the corresponding eigenvector is the state of the system immediately after the measurement. The act of measurement in matrix mechanics collapses the state of the system. If one measures two observables simultaneously, the state of the system collapses to a common eigenvector of the two observables. Since most matrices don't have any eigenvectors in common, most observables can never be measured precisely at the same time. This is the uncertainty principle.
If two matrices share their eigenvectors, they can be simultaneously diagonalized. In the basis where they are both diagonal, it is clear that their product does not depend on their order because multiplication of diagonal matrices is just multiplication of numbers. The uncertainty principle, by contrast, is an expression of the fact that two matrices A and B do not always commute, i.e., that AB − BA does not necessarily equal 0. The fundamental commutation relation of matrix mechanics,
{\displaystyle \sum _{k}\left(X_{nk}P_{km}-P_{nk}X_{km}\right)=i\hbar \,\delta _{nm}}
implies then that there are no states that simultaneously have a definite position and momentum.
This principle of uncertainty holds for many other pairs of observables as well. For example, the energy does not commute with the position either, so it is impossible to precisely determine the position and energy of an electron in an atom.
=== Nobel Prize ===
In 1928, Albert Einstein nominated Heisenberg, Born, and Jordan for the Nobel Prize in Physics. The announcement of the Nobel Prize in Physics for 1932 was delayed until November 1933. It was at that time that it was announced Heisenberg had won the Prize for 1932 "for the creation of quantum mechanics, the application of which has, inter alia, led to the discovery of the allotropic forms of hydrogen" and Erwin Schrödinger and Paul Adrien Maurice Dirac shared the 1933 Prize "for the discovery of new productive forms of atomic theory".
It might well be asked why Born was not awarded the Prize in 1932, along with Heisenberg, and Bernstein proffers speculations on this matter. One of them relates to Jordan joining the Nazi Party on May 1, 1933, and becoming a stormtrooper. Jordan's Party affiliations and Jordan's links to Born may well have affected Born's chance at the Prize at that time. Bernstein further notes that when Born finally won the Prize in 1954, Jordan was still alive, while the Prize was awarded for the statistical interpretation of quantum mechanics, attributable to Born alone.
Heisenberg's reactions to Born for Heisenberg receiving the Prize for 1932 and for Born receiving the Prize in 1954 are also instructive in evaluating whether Born should have shared the Prize with Heisenberg. On November 25, 1933, Born received a letter from Heisenberg in which he said he had been delayed in writing due to a "bad conscience" that he alone had received the Prize "for work done in Göttingen in collaboration – you, Jordan and I". Heisenberg went on to say that Born and Jordan's contribution to quantum mechanics cannot be changed by "a wrong decision from the outside".
In 1954, Heisenberg wrote an article honoring Max Planck for his insight in 1900. In the article, Heisenberg credited Born and Jordan for the final mathematical formulation of matrix mechanics and Heisenberg went on to stress how great their contributions were to quantum mechanics, which were not "adequately acknowledged in the public eye".
== Mathematical development ==
Once Heisenberg introduced the matrices for X and P, he could find their matrix elements in special cases by guesswork, guided by the correspondence principle. Since the matrix elements are the quantum mechanical analogs of Fourier coefficients of the classical orbits, the simplest case is the harmonic oscillator, where the classical position and momentum, X(t) and P(t), are sinusoidal.
=== Harmonic oscillator ===
In units where the mass and frequency of the oscillator are equal to one (see nondimensionalization), the energy of the oscillator is
{\displaystyle H={\tfrac {1}{2}}\left(P^{2}+X^{2}\right).}
The level sets of H are the clockwise orbits, and they are nested circles in phase space. The classical orbit with energy E is
{\displaystyle X(t)={\sqrt {2E}}\cos(t),\qquad P(t)=-{\sqrt {2E}}\sin(t)~.}
The old quantum condition dictates that the integral of P dX over an orbit, which is the area of the circle in phase space, must be an integer multiple of the Planck constant. The area of the circle of radius √2E is 2πE. So
{\displaystyle E={\frac {nh}{2\pi }}=n\hbar \,,}
or, in natural units where ħ = 1, the energy is an integer.
The Fourier components of X(t) and P(t) are simple, and more so if they are combined into the quantities
{\displaystyle A(t)=X(t)+iP(t)={\sqrt {2E}}\,e^{-it},\quad A^{\dagger }(t)=X(t)-iP(t)={\sqrt {2E}}\,e^{it}.}
Both A and A† have only a single frequency, and X and P can be recovered from their sum and difference.
Since A(t) has a classical Fourier series with only the lowest frequency, and the matrix element Amn is the (m − n)th Fourier coefficient of the classical orbit, the matrix for A is nonzero only on the line just above the diagonal, where it is equal to √(2Eₙ). The matrix for A† is likewise only nonzero on the line below the diagonal, with the same elements. Thus, from A and A†, reconstruction yields
{\displaystyle {\sqrt {2}}X(0)={\sqrt {\hbar }}\;{\begin{bmatrix}0&{\sqrt {1}}&0&0&0&\cdots \\{\sqrt {1}}&0&{\sqrt {2}}&0&0&\cdots \\0&{\sqrt {2}}&0&{\sqrt {3}}&0&\cdots \\0&0&{\sqrt {3}}&0&{\sqrt {4}}&\cdots \\\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \\\end{bmatrix}},}
and
{\displaystyle {\sqrt {2}}P(0)={\sqrt {\hbar }}\;{\begin{bmatrix}0&-i{\sqrt {1}}&0&0&0&\cdots \\i{\sqrt {1}}&0&-i{\sqrt {2}}&0&0&\cdots \\0&i{\sqrt {2}}&0&-i{\sqrt {3}}&0&\cdots \\0&0&i{\sqrt {3}}&0&-i{\sqrt {4}}&\cdots \\\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \\\end{bmatrix}},}
which, up to the choice of units, are the Heisenberg matrices for the harmonic oscillator. Both matrices are Hermitian, since they are constructed from the Fourier coefficients of real quantities.
Finding X(t) and P(t) is direct, since they are quantum Fourier coefficients so they evolve simply with time,
{\displaystyle X_{mn}(t)=X_{mn}(0)e^{i(E_{m}-E_{n})t},\quad P_{mn}(t)=P_{mn}(0)e^{i(E_{m}-E_{n})t}~.}
The matrix product of X and P is not Hermitian, but has a real and imaginary part. The real part is one half the symmetric expression XP + PX, while the imaginary part is proportional to the commutator
{\displaystyle [X,P]=(XP-PX).}
It is simple to verify explicitly that, in the case of the harmonic oscillator, XP − PX is iħ multiplied by the identity.
It is likewise simple to verify that the matrix
{\displaystyle H={\tfrac {1}{2}}\left(X^{2}+P^{2}\right)}
is a diagonal matrix, with eigenvalues Ei.
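Both verifications can be carried out numerically on a finite truncation of the infinite matrices. The sketch below (assuming ħ = 1 and an N × N truncation, which corrupts only the last row and column) builds the ladder matrix with √n just above the diagonal, recovers X and P, and checks the commutator and the diagonality of H:

```python
import numpy as np

N = 12  # truncation size; the true matrices are infinite
n = np.arange(1, N)

# hbar = 1.  The ladder matrix A has sqrt(n) just above the diagonal.
A = np.diag(np.sqrt(n), k=1)
X = (A + A.conj().T) / np.sqrt(2)          # X = (A + A†)/√2
P = (A - A.conj().T) / (np.sqrt(2) * 1j)   # P = (A − A†)/(i√2)

# Canonical commutator: [X, P] = i times the identity, up to
# truncation error confined to the last row and column.
C = X @ P - P @ X
assert np.allclose(C[:-1, :-1], 1j * np.eye(N - 1))

# H = (X² + P²)/2 is diagonal; the entries come out as n + 1/2
# (the half-integer zero-point shift the old quantum condition
# missed), again except for the truncated corner.
H = (X @ X + P @ P) / 2
assert np.allclose(np.diag(H)[:-1], np.arange(N - 1) + 0.5)
```

The slices `[:-1]` are needed because a finite truncation of the ladder matrix cannot satisfy the commutation relation in its last entry: the trace of a commutator of finite matrices is always zero.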
=== Conservation of energy ===
The harmonic oscillator is too special a case: it is easy to find the matrices exactly, but too hard to discover general conditions from these special forms. For this reason, Heisenberg investigated the anharmonic oscillator, with Hamiltonian
{\displaystyle H={\tfrac {1}{2}}P^{2}+{\tfrac {1}{2}}X^{2}+\varepsilon X^{3}~.}
In this case, the X and P matrices are no longer simple off-diagonal matrices, since the corresponding classical orbits are slightly squashed and displaced, so that they have Fourier coefficients at every classical frequency. To determine the matrix elements, Heisenberg required that the classical equations of motion be obeyed as matrix equations,
{\displaystyle {\frac {dX}{dt}}=P~,\qquad {\frac {dP}{dt}}=-X-3\varepsilon X^{2}~.}
He noticed that if this could be done, then H, considered as a matrix function of X and P, would have zero time derivative.
{\displaystyle {\frac {dH}{dt}}=P*{\frac {dP}{dt}}+\left(X+3\varepsilon X^{2}\right)*{\frac {dX}{dt}}=0~,}
where A∗B is the anticommutator,
{\displaystyle A*B={\tfrac {1}{2}}(AB+BA)~.}
Given that all the off-diagonal elements have a nonzero frequency, the constancy of H implies that H is diagonal.
It was clear to Heisenberg that energy could be exactly conserved in an arbitrary quantum system of this kind, a very encouraging sign.
The process of emission and absorption of photons seemed to demand that the conservation of energy would hold at best on average. If a wave containing exactly one photon passes over some atoms, and one of them absorbs it, that atom needs to tell the others that they can no longer absorb the photon. But if the atoms are far apart, no signal can reach the other atoms in time, and they might end up absorbing the same photon anyway and dissipating the energy to the environment. When the signal reached them, the other atoms would have to somehow recall that energy. This paradox led Bohr, Kramers and Slater to abandon exact conservation of energy. Heisenberg's formalism, when extended to include the electromagnetic field, was obviously going to sidestep this problem, a hint that the interpretation of the theory would involve wavefunction collapse.
=== Differentiation trick — canonical commutation relations ===
Demanding that the classical equations of motion are preserved is not a strong enough condition to determine the matrix elements. The Planck constant does not appear in the classical equations, so that the matrices could be constructed for many different values of ħ and still satisfy the equations of motion, but with different energy levels.
So, in order to implement his program, Heisenberg needed to use the old quantum condition to fix the energy levels, then fill in the matrices with Fourier coefficients of the classical equations, then alter the matrix coefficients and the energy levels slightly to make sure the classical equations are satisfied. This is clearly not satisfactory. The old quantum conditions refer to the area enclosed by the sharp classical orbits, which do not exist in the new formalism.
The most important thing that Heisenberg discovered is how to translate the old quantum condition into a simple statement in matrix mechanics.
To do this, he investigated the action integral as a matrix quantity,
{\displaystyle \int _{0}^{T}\sum _{k}P_{mk}(t){\frac {dX_{kn}}{dt}}dt\,\,{\stackrel {\scriptstyle ?}{\approx }}\,\,J_{mn}~.}
There are several problems with this integral, all stemming from the incompatibility of the matrix formalism with the old picture of orbits. Which period T should be used? Semiclassically, it should be the period of either orbit m or orbit n, but the difference is of order ħ, and an answer to order ħ is sought. The quantum condition tells us that Jmn is 2πn on the diagonal, so the fact that J is classically constant tells us that the off-diagonal elements are zero.
His crucial insight was to differentiate the quantum condition with respect to n. This idea only makes complete sense in the classical limit, where n is not an integer but the continuous action variable J, but Heisenberg performed analogous manipulations with matrices, where the intermediate expressions are sometimes discrete differences and sometimes derivatives.
In the following discussion, for the sake of clarity, the differentiation will be performed on the classical variables, and the transition to matrix mechanics will be done afterwards, guided by the correspondence principle.
In the classical setting, the derivative is the derivative with respect to J of the integral which defines J, so it is tautologically equal to 1.
{\displaystyle {\begin{aligned}{}{\frac {d}{dJ}}\int _{0}^{T}PdX&=1\\&=\int _{0}^{T}dt\left({\frac {dP}{dJ}}{\frac {dX}{dt}}+P{\frac {d}{dJ}}{\frac {dX}{dt}}\right)\\&=\int _{0}^{T}dt\left({\frac {dP}{dJ}}{\frac {dX}{dt}}-{\frac {dP}{dt}}{\frac {dX}{dJ}}\right)\end{aligned}}}
where the derivatives dP/dJ and dX/dJ should be interpreted as differences with respect to J at corresponding times on nearby orbits, exactly what would be obtained if the Fourier coefficients of the orbital motion were differentiated. (These derivatives are symplectically orthogonal in phase space to the time derivatives dP/dt and dX/dt).
The final expression is clarified by introducing the variable canonically conjugate to J, which is called the angle variable θ: the derivative with respect to time is a derivative with respect to θ, up to a factor of 2π/T,
{\displaystyle {\frac {2\pi }{T}}\int _{0}^{T}dt\left({\frac {dP}{dJ}}{\frac {dX}{d\theta }}-{\frac {dP}{d\theta }}{\frac {dX}{dJ}}\right)=1\,.}
So the quantum condition integral is the average value over one cycle of the Poisson bracket of X and P.
An analogous differentiation of the Fourier series of P dX demonstrates that the off-diagonal elements of the Poisson bracket are all zero. The Poisson bracket of two canonically conjugate variables, such as X and P, is the constant value 1, so this integral really is the average value of 1; so it is 1, as we knew all along, because it is dJ/dJ after all. But Heisenberg, Born and Jordan, unlike Dirac, were not familiar with the theory of Poisson brackets, so, for them, the differentiation effectively evaluated {X, P} in J,θ coordinates.
The Poisson bracket, unlike the action integral, does have a simple translation to matrix mechanics – it normally corresponds to the imaginary part of the product of two variables, the commutator.
To see this, examine the (antisymmetrized) product of two matrices A and B in the correspondence limit, where the matrix elements are slowly varying functions of the index, keeping in mind that the answer is zero classically.
In the correspondence limit, when indices m, n are large and nearby, while k, r are small, the rate of change of the matrix elements in the diagonal direction is the matrix element of the J derivative of the corresponding classical quantity. So it is possible to shift any matrix element diagonally through the correspondence,
{\displaystyle A_{(m+r)(n+r)}-A_{mn}\approx r\;\left({\frac {dA}{dJ}}\right)_{mn}}
where the right hand side is really only the (m − n)th Fourier component of dA/dJ at the orbit near m to this semiclassical order, not a full well-defined matrix.
The semiclassical time derivative of a matrix element is obtained up to a factor of i by multiplying by the distance from the diagonal,
{\displaystyle ikA_{m(m+k)}\approx \left({\frac {T}{2\pi }}{\frac {dA}{dt}}\right)_{m(m+k)}=\left({\frac {dA}{d\theta }}\right)_{m(m+k)}\,.}
Here the coefficient Am(m+k) is semiclassically the kth Fourier coefficient of the mth classical orbit.
The imaginary part of the product of A and B can be evaluated by shifting the matrix elements around so as to reproduce the classical answer, which is zero.
The leading nonzero residual is then given entirely by the shifting. Since all the matrix elements are at indices which have a small distance from the large index position (m,m), it helps to introduce two temporary notations:
A[r, k] = A(m+r)(m+k) for the matrices, and (dA/dJ)[r] for the rth Fourier components of classical quantities,
{\displaystyle {\begin{aligned}(AB-BA)[0,k]&=\sum _{r=-\infty }^{\infty }{\bigl (}A[0,r]B[r,k]-A[r,k]B[0,r]{\bigr )}\\&=\sum _{r}\left(A[-r+k,k]+(r-k){\frac {dA}{dJ}}[r]\right)\left(B[0,k-r]+r{\frac {dB}{dJ}}[r-k]\right)-\sum _{r}A[r,k]B[0,r]\,.\end{aligned}}}
Flipping the summation variable in the first sum from r to r′ = k − r, the matrix element becomes,
{\displaystyle \sum _{r'}\left(A[r',k]-r'{\frac {dA}{dJ}}[k-r']\right)\left(B[0,r']+(k-r'){\frac {dB}{dJ}}[r']\right)-\sum _{r}A[r,k]B[0,r]}
and it is clear that the principal (classical) part cancels.
The leading quantum part, neglecting the higher order product of derivatives in the residual expression, is then equal to
{\displaystyle \sum _{r'}\left({\frac {dB}{dJ}}[r'](k-r')A[r',k]-{\frac {dA}{dJ}}[k-r']r'B[0,r']\right)}
so that, finally,
{\displaystyle (AB-BA)[0,k]=\sum _{r'}\left({\frac {dB}{dJ}}[r']i{\frac {dA}{d\theta }}[k-r']-{\frac {dA}{dJ}}[k-r']i{\frac {dB}{d\theta }}[r']\right)}
which can be identified with i times the kth classical Fourier component of the Poisson bracket.
Heisenberg's original differentiation trick was eventually extended to a full semiclassical derivation of the quantum condition, in collaboration with Born and Jordan.
Once they were able to establish that
{\displaystyle i\hbar \{X,P\}_{\mathrm {PB} }\qquad \longmapsto \qquad [X,P]\equiv XP-PX=i\hbar \,,}
this condition replaced and extended the old quantization rule, allowing the matrix elements of P and X for an arbitrary system to be determined simply from the form of the Hamiltonian.
The new quantization rule was assumed to be universally true, even though the derivation from the old quantum theory required semiclassical reasoning.
(A full quantum treatment, however, for more elaborate arguments of the brackets, was appreciated in the 1940s to amount to extending Poisson brackets to Moyal brackets.)
=== State vectors and the Heisenberg equation ===
To make the transition to standard quantum mechanics, the most important further addition was the quantum state vector, now written |ψ⟩,
which is the vector that the matrices act on. Without the state vector, it is not clear which particular motion the Heisenberg matrices are describing, since they include all the motions somewhere.
The interpretation of the state vector, whose components are written ψm, was furnished by Born. This interpretation is statistical: the result of a measurement of the physical quantity corresponding to the matrix A is random, with an average value equal to
{\displaystyle \sum _{mn}\psi _{m}^{*}A_{mn}\psi _{n}\,.}
Alternatively, and equivalently, the state vector gives the probability amplitude ψn for the quantum system to be in the energy state n.
Once the state vector was introduced, matrix mechanics could be rotated to any basis, where the H matrix need no longer be diagonal. The Heisenberg equation of motion in its original form states that Amn evolves in time like a Fourier component,
{\displaystyle A_{mn}(t)=e^{i(E_{m}-E_{n})t}A_{mn}(0)~,}
which can be recast in differential form
{\displaystyle {\frac {dA_{mn}}{dt}}=i(E_{m}-E_{n})A_{mn}~,}
and it can be restated so that it is true in an arbitrary basis, by noting that the H matrix is diagonal with diagonal values Em,
{\displaystyle {\frac {dA}{dt}}=i(HA-AH)~.}
This is now a matrix equation, so it holds in any basis. This is the modern form of the Heisenberg equation of motion.
Its formal solution is:
{\displaystyle A(t)=e^{iHt}A(0)e^{-iHt}~.}
All these forms of the equation of motion above say the same thing, that A(t) is equivalent to A(0) through a basis rotation by the unitary matrix e^{iHt}, a systematic picture elucidated by Dirac in his bra–ket notation.
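This equivalence is easy to test numerically. The sketch below (assuming ħ = 1, with a randomly chosen Hermitian H and observable purely for illustration) builds e^{iHt} from the eigendecomposition of H and compares a finite-difference time derivative of A(t) = e^{iHt} A(0) e^{−iHt} with i(HA − AH):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6

# Random Hermitian Hamiltonian H and observable A(0); hbar = 1.
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (M + M.conj().T) / 2
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A0 = (B + B.conj().T) / 2

# The unitary e^{iHt} from the eigendecomposition of H.
w, V = np.linalg.eigh(H)
def U(t):
    return V @ np.diag(np.exp(1j * w * t)) @ V.conj().T

def A(t):
    return U(t) @ A0 @ U(-t)   # U(-t) = e^{-iHt} = U(t)†

# A centered finite difference of A(t) matches i(HA - AH).
t, h = 0.7, 1e-5
dAdt = (A(t + h) - A(t - h)) / (2 * h)
assert np.allclose(dAdt, 1j * (H @ A(t) - A(t) @ H), atol=1e-6)
```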
Conversely, by rotating the basis for the state vector at each time by e^{iHt}, the time dependence in the matrices can be undone. The matrices are now time independent, but the state vector rotates,
{\displaystyle |\psi (t)\rangle =e^{-iHt}|\psi (0)\rangle ,\qquad {\frac {d|\psi \rangle }{dt}}=-iH|\psi \rangle \,.}
This is the Schrödinger equation for the state vector, and this time-dependent change of basis amounts to transformation to the Schrödinger picture, with ⟨x|ψ⟩ = ψ(x).
In quantum mechanics in the Heisenberg picture the state vector |ψ⟩ does not change with time, while an observable A satisfies the Heisenberg equation of motion,
{\displaystyle {\frac {dA}{dt}}=i(HA-AH)+{\frac {\partial A}{\partial t}}~.}
The extra term is for operators such as
{\displaystyle A=\left(X+t^{2}P\right)}
which have an explicit time dependence, in addition to the time dependence from the unitary evolution discussed.
The Heisenberg picture does not distinguish time from space, so it is better suited to relativistic theories than the Schrödinger equation. Moreover, the similarity to classical physics is more manifest: the Hamiltonian equations of motion for classical mechanics are recovered by replacing the commutator above by the Poisson bracket (see also below). By the Stone–von Neumann theorem, the Heisenberg picture and the Schrödinger picture must be unitarily equivalent, as detailed below.
== Further results ==
Matrix mechanics rapidly developed into modern quantum mechanics, and gave interesting physical results on the spectra of atoms.
=== Wave mechanics ===
Jordan noted that the commutation relations ensure that P acts as a differential operator.
The operator identity
{\displaystyle [a,bc]=abc-bca=abc-bac+bac-bca=[a,b]c+b[a,c]}
allows the evaluation of the commutator of P with any power of X, and it implies that
{\displaystyle \left[P,X^{n}\right]=-in~X^{n-1}}
which, together with linearity, implies that a P-commutator effectively differentiates any analytic matrix function of X.
Assuming limits are defined sensibly, this extends to arbitrary functions; but the extension need not be made explicit until a certain degree of mathematical rigor is required.
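Both the operator identity and the resulting differentiation rule can be checked on the truncated harmonic-oscillator matrices constructed earlier (a sketch assuming ħ = 1; the truncation corrupts only the last few rows and columns, so the comparison is restricted to the interior block):

```python
import numpy as np

# Truncated harmonic-oscillator X and P (hbar = 1), as in the earlier section.
N = 30
Aop = np.diag(np.sqrt(np.arange(1, N)), k=1)
X = (Aop + Aop.T) / np.sqrt(2)
P = (Aop - Aop.T) / (np.sqrt(2) * 1j)

def comm(u, v):
    return u @ v - v @ u

# The operator identity [a, bc] = [a, b]c + b[a, c] holds for any matrices.
a, b, c = P, X, X @ X
assert np.allclose(comm(a, b @ c), comm(a, b) @ c + b @ comm(a, c))

# Iterating it gives [P, X^n] = -i n X^(n-1); here n = 3.  Truncation
# error is confined to the last few rows and columns, so compare the
# interior block only.
lhs = comm(P, np.linalg.matrix_power(X, 3))
rhs = -3j * (X @ X)
assert np.allclose(lhs[:-3, :-3], rhs[:-3, :-3])
```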
Since X is a Hermitian matrix, it should be diagonalizable, and it will be clear from the eventual form of P that every real number can be an eigenvalue. This makes some of the mathematics subtle, since there is a separate eigenvector for every point in space.
In the basis where X is diagonal, an arbitrary state can be written as a superposition of states with eigenvalues x,
{\displaystyle |\psi \rangle =\int _{x}\psi (x)|x\rangle \,,}
so that ψ(x) = ⟨x|ψ⟩, and the operator X multiplies each eigenvector by x,
{\displaystyle X|\psi \rangle =\int _{x}x\psi (x)|x\rangle ~.}
Define a linear operator D which differentiates ψ,
{\displaystyle D\int _{x}\psi (x)|x\rangle =\int _{x}\psi '(x)|x\rangle \,,}
and note that
{\displaystyle (DX-XD)|\psi \rangle =\int _{x}\left[\left(x\psi (x)\right)'-x\psi '(x)\right]|x\rangle =\int _{x}\psi (x)|x\rangle =|\psi \rangle \,,}
so that the operator −iD obeys the same commutation relation as P. Thus, the difference between P and −iD must commute with X,
{\displaystyle [P+iD,X]=0\,,}
so it may be simultaneously diagonalized with X: its value acting on any eigenstate of X is some function f of the eigenvalue x.
This function must be real, because both P and −iD are Hermitian,
{\displaystyle (P+iD)|x\rangle =f(x)|x\rangle \,,}
rotating each state |x⟩ by a phase f(x), that is, redefining the phase of the wavefunction:
{\displaystyle \psi (x)\rightarrow e^{-if(x)}\psi (x)\,.}
The operator iD is redefined by an amount:
{\displaystyle iD\rightarrow iD+f(X)\,,}
which means that, in the rotated basis, P is equal to −iD.
Hence, there is always a basis for the eigenvalues of X where the action of P on any wavefunction is known:
{\displaystyle P\int _{x}\psi (x)|x\rangle =\int _{x}-i\psi '(x)|x\rangle \,,}
and the Hamiltonian in this basis is a linear differential operator on the state-vector components,
{\displaystyle \left[{\frac {P^{2}}{2m}}+V(X)\right]\int _{x}\psi _{x}|x\rangle =\int _{x}\left[-{\frac {1}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+V(x)\right]\psi _{x}|x\rangle }
Thus, the equation of motion for the state vector is but the celebrated Schrödinger equation,
{\displaystyle i{\frac {\partial \psi }{\partial t}}=\left[-{\frac {1}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+V(x)\right]\psi \,.}
Since D is a differential operator, in order for it to be sensibly defined, there must be eigenvalues of X which neighbor every given value. This suggests that the only possibility is that the space of all eigenvalues of X is all real numbers, and that P is −iD, up to a phase rotation.
To make this rigorous requires a sensible discussion of the limiting space of functions, and in this space this is the Stone–von Neumann theorem: any operators X and P which obey the commutation relations can be made to act on a space of wavefunctions, with P a derivative operator. This implies that a Schrödinger picture is always available.
Matrix mechanics easily extends to many degrees of freedom in a natural way. Each degree of freedom has a separate X operator and a separate effective differential operator P, and the wavefunction is a function of all the possible eigenvalues of the independent commuting X variables.
{\displaystyle {\begin{aligned}\left[X_{i},X_{j}\right]&=0\\[1ex]\left[P_{i},P_{j}\right]&=0\\[1ex]\left[X_{i},P_{j}\right]&=i\delta _{ij}\,.\end{aligned}}}
In particular, this means that a system of N interacting particles in 3 dimensions is described by one vector whose components in a basis where all the X are diagonal is a mathematical function of 3N-dimensional space describing all their possible positions, effectively a much bigger collection of values than the mere collection of N three-dimensional wavefunctions in one physical space. Schrödinger came to the same conclusion independently, and eventually proved the equivalence of his own formalism to Heisenberg's.
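The tensor-product structure behind this can be illustrated with the truncated oscillator matrices from above. A minimal sketch (assuming ħ = 1) builds two commuting degrees of freedom with Kronecker products:

```python
import numpy as np

# One-oscillator X and P (truncated; hbar = 1), as in the oscillator section.
N = 8
Aop = np.diag(np.sqrt(np.arange(1, N)), k=1)
X = (Aop + Aop.T) / np.sqrt(2)
P = (Aop - Aop.T) / (np.sqrt(2) * 1j)
I = np.eye(N)

# Two degrees of freedom act on a tensor-product space of dimension
# N * N, one Kronecker factor per degree of freedom.
X1, X2 = np.kron(X, I), np.kron(I, X)
P1, P2 = np.kron(P, I), np.kron(I, P)

def comm(u, v):
    return u @ v - v @ u

# Operators belonging to different degrees of freedom commute.
assert np.allclose(comm(X1, X2), 0)
assert np.allclose(comm(X1, P2), 0)
assert np.allclose(comm(P1, P2), 0)
```

The state space grows multiplicatively with each degree of freedom, which is the precise sense in which the N-particle wavefunction lives in a much bigger space than N one-particle wavefunctions.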
Since the wavefunction is a property of the whole system, not of any one part, the description in quantum mechanics is not entirely local. The description of several quantum particles has them correlated, or entangled. This entanglement leads to strange correlations between distant particles which violate Bell's inequality.
Even if the particles can each be in only two positions, the wavefunction for N particles requires 2^N complex numbers, one for each total configuration of positions. This is exponentially many numbers in N, so simulating quantum mechanics on a computer requires exponential resources. Conversely, this suggests that it might be possible to find quantum systems of size N which physically compute the answers to problems which classically require 2^N bits to solve. This is the aspiration behind quantum computing.
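The exponential growth is easy to see concretely: representing each two-position particle by a two-component vector, the joint state of N particles is the Kronecker product of the individual states, with 2^N amplitudes (a sketch; the basis-vector names are illustrative):

```python
import numpy as np

# A particle restricted to two positions is described by two amplitudes.
here = np.array([1.0, 0.0])
there = np.array([0.0, 1.0])

# The joint state of N such particles is a Kronecker product with
# one amplitude per total configuration of positions: 2**N in all.
N = 10
state = here
for _ in range(N - 1):
    state = np.kron(state, there)   # one fixed configuration

assert state.shape == (2**N,)       # 1024 amplitudes for just 10 particles
```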
=== Ehrenfest theorem ===
For the time-independent operators X and P, ∂A/∂t = 0 so the Heisenberg equation above reduces to:
{\displaystyle i\hbar {\frac {dA}{dt}}=[A,H]=AH-HA,}
where the square brackets [ , ] denote the commutator. For a Hamiltonian which is p2/2m + V(x), the X and P operators satisfy:
{\displaystyle {\frac {dX}{dt}}={\frac {P}{m}},\quad {\frac {dP}{dt}}=-\nabla V,}
where the first is classically the velocity, and the second is classically the force, or potential gradient. These reproduce Hamilton's form of Newton's laws of motion. In the Heisenberg picture, the X and P operators satisfy the classical equations of motion. One can take the expectation value of both sides of the equation to see that, in any state |ψ⟩:
{\displaystyle {\begin{aligned}{\frac {d}{dt}}\langle X\rangle &={\frac {d}{dt}}\langle \psi |X|\psi \rangle ={\frac {1}{m}}\langle \psi |P|\psi \rangle ={\frac {1}{m}}\langle P\rangle \\[1.5ex]{\frac {d}{dt}}\langle P\rangle &={\frac {d}{dt}}\langle \psi |P|\psi \rangle =\langle \psi |(-\nabla V)|\psi \rangle =-\langle \nabla V\rangle \,.\end{aligned}}}
So Newton's laws are exactly obeyed by the expected values of the operators in any given state. This is Ehrenfest's theorem, which is an obvious corollary of the Heisenberg equations of motion, but is less trivial in the Schrödinger picture, where Ehrenfest discovered it.
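The first Ehrenfest relation can be checked numerically for the harmonic oscillator (a sketch with ħ = m = 1, truncated matrices, and a state supported on the two lowest levels so that truncation plays no role):

```python
import numpy as np

# Truncated oscillator (m = 1, hbar = 1): H = (P^2 + X^2)/2.
N = 40
Aop = np.diag(np.sqrt(np.arange(1, N)), k=1)
X = (Aop + Aop.T) / np.sqrt(2)
P = (Aop - Aop.T) / (np.sqrt(2) * 1j)
H = np.diag(np.arange(N) + 0.5)     # H is diagonal in this basis

# A state on the two lowest levels, far from the truncation edge.
psi0 = np.zeros(N, complex)
psi0[0] = psi0[1] = 1 / np.sqrt(2)

w = np.diag(H)
def psi(t):
    return np.exp(-1j * w * t) * psi0   # Schrödinger-picture evolution

def expect(Op, t):
    return (psi(t).conj() @ Op @ psi(t)).real

# Ehrenfest: d<X>/dt = <P>/m  (with m = 1 here).
t, h = 0.3, 1e-5
dX = (expect(X, t + h) - expect(X, t - h)) / (2 * h)
assert np.isclose(dX, expect(P, t), atol=1e-6)
```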
=== Transformation theory ===
In classical mechanics, a canonical transformation of phase space coordinates is one which preserves the structure of the Poisson brackets. The new variables x′, p′ have the same Poisson brackets with each other as the original variables x, p. Time evolution is a canonical transformation, since the phase space at any time is just as good a choice of variables as the phase space at any other time.
The Hamiltonian flow is the canonical transformation:
{\displaystyle {\begin{aligned}x&\rightarrow x+dx=x+{\frac {\partial H}{\partial p}}dt\\[1ex]p&\rightarrow p+dp=p-{\frac {\partial H}{\partial x}}dt~.\end{aligned}}}
Since the Hamiltonian can be an arbitrary function of x and p, there are such infinitesimal canonical transformations corresponding to every classical quantity G, where G serves as the Hamiltonian to generate a flow of points in phase space for an increment of time s,
{\displaystyle {\begin{aligned}dx&={\frac {\partial G}{\partial p}}ds=\left\{G,X\right\}ds\\[1ex]dp&=-{\frac {\partial G}{\partial x}}ds=\left\{G,P\right\}ds\,.\end{aligned}}}
For a general function A(x,p) on phase space, its infinitesimal change at every step ds under this map is
{\displaystyle dA={\frac {\partial A}{\partial x}}dx+{\frac {\partial A}{\partial p}}dp=\{A,G\}ds\,.}
The quantity G is called the infinitesimal generator of the canonical transformation.
In quantum mechanics, the quantum analog G is now a Hermitian matrix, and the equations of motion are given by commutators,
{\displaystyle dA=i[G,A]ds\,.}
The infinitesimal canonical motions can be formally integrated, just as the Heisenberg equation of motion was integrated,
{\displaystyle A'=U^{\dagger }AU}
where U = e^{iGs} and s is an arbitrary parameter.
The definition of a quantum canonical transformation is thus an arbitrary unitary change of basis on the space of all state vectors. U is an arbitrary unitary matrix, a complex rotation in phase space,
{\displaystyle U^{\dagger }=U^{-1}\,.}
These transformations leave the sum of the absolute square of the wavefunction components invariant, while they take states which are multiples of each other (including states which are imaginary multiples of each other) to states which are the same multiple of each other.
The interpretation of the matrices is that they act as generators of motions on the space of states.
For example, the motion generated by P can be found by solving the Heisenberg equation of motion using P as a Hamiltonian,
{\displaystyle {\begin{aligned}dX&=i[X,P]ds=ds\\[1ex]dP&=i[P,P]ds=0\,.\end{aligned}}}
These are translations of the matrix X by a multiple of the identity matrix,
{\displaystyle X\rightarrow X+sI~.}
This is the interpretation of the derivative operator D: e^{iPs} = e^{sD}, the exponential of a derivative operator is a translation (so Lagrange's shift operator).
The X operator likewise generates translations in P. The Hamiltonian generates translations in time, the angular momentum generates rotations in physical space, and the operator X2 + P2 generates rotations in phase space.
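The identification of e^{iPs} with a shift can be demonstrated on a periodic grid, where P = −i d/dx is diagonal in Fourier space (a sketch; the grid size and test function are chosen arbitrarily for illustration):

```python
import numpy as np

# Periodic grid on [0, 2*pi); in Fourier space P = -i d/dx multiplies
# mode k by k, so e^{iPs} multiplies it by e^{iks} -- a shift by s.
n, L = 256, 2 * np.pi
x = np.arange(n) * L / n
f = np.exp(np.sin(x))               # any smooth periodic function

s = 0.5
k = np.fft.fftfreq(n, d=1.0 / n)    # integer wavenumbers for period 2*pi
shifted = np.fft.ifft(np.exp(1j * k * s) * np.fft.fft(f)).real

# e^{iPs} f(x) = f(x + s): Lagrange's shift operator in action.
assert np.allclose(shifted, np.exp(np.sin(x + s)), atol=1e-10)
```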
When a transformation, like a rotation in physical space, commutes with the Hamiltonian, the transformation is called a symmetry (behind a degeneracy) of the Hamiltonian – the Hamiltonian expressed in terms of rotated coordinates is the same as the original Hamiltonian. This means that the change in the Hamiltonian under the infinitesimal symmetry generator L vanishes,
{\displaystyle {\frac {dH}{ds}}=i[L,H]=0\,.}
It then follows that the change in the generator under time translation also vanishes,
{\displaystyle {\frac {dL}{dt}}=i[H,L]=0}
so that the matrix L is constant in time: it is conserved.
The one-to-one association of infinitesimal symmetry generators and conservation laws was discovered by Emmy Noether for classical mechanics, where the commutators are Poisson brackets, but the quantum-mechanical reasoning is identical. In quantum mechanics, any unitary symmetry transformation yields a conservation law, since if the matrix U has the property that
{\displaystyle U^{-1}HU=H}
so it follows that
{\displaystyle UH=HU}
and that the time derivative of U is zero – it is conserved.
The eigenvalues of unitary matrices are pure phases, so that the value of a unitary conserved quantity is a complex number of unit magnitude, not a real number. Another way of saying this is that a unitary matrix is the exponential of i times a Hermitian matrix, so that the additive conserved real quantity, the phase, is only well-defined up to an integer multiple of 2π. Only when the unitary symmetry matrix is part of a family that comes arbitrarily close to the identity are the conserved real quantities single-valued, and then the demand that they are conserved becomes a much more exacting constraint.
Symmetries which can be continuously connected to the identity are called continuous, and translations, rotations, and boosts are examples. Symmetries which cannot be continuously connected to the identity are discrete, and the operation of space-inversion, or parity, and charge conjugation are examples.
The interpretation of the matrices as generators of canonical transformations is due to Paul Dirac. The correspondence between symmetries and matrices was shown by Eugene Wigner to be complete, if antiunitary matrices which describe symmetries which include time-reversal are included.
=== Selection rules ===
It was physically clear to Heisenberg that the absolute squares of the matrix elements of X, which are the Fourier coefficients of the oscillation, would yield the rate of emission of electromagnetic radiation.
In the classical limit of large orbits, if a charge with position X(t) and charge q is oscillating next to an equal and opposite charge at position 0, the instantaneous dipole moment is q X(t), and the time variation of this moment translates directly into the space-time variation of the vector potential, which yields nested outgoing spherical waves.
For atoms, the wavelength of the emitted light is about 10,000 times the atomic radius, and the dipole moment is the only contribution to the radiative field, while all other details of the atomic charge distribution can be ignored.
Ignoring back-reaction, the power radiated in each outgoing mode is a sum of separate contributions from the square of each independent time Fourier mode of d,
{\displaystyle P(\omega )={\tfrac {2}{3}}{\omega ^{4}}|d_{i}|^{2}~.}
Now, in Heisenberg's representation, the Fourier coefficients of the dipole moment are the matrix elements of X. This correspondence allowed Heisenberg to provide the rule for the transition intensities, the fraction of the time that, starting from an initial state i, a photon is emitted and the atom jumps to a final state j,
{\displaystyle P_{ij}={\tfrac {2}{3}}\left(E_{i}-E_{j}\right)^{4}\left|X_{ij}\right|^{2}\,.}
This then allowed the magnitude of the matrix elements to be interpreted statistically: they give the intensity of the spectral lines, the probability for quantum jumps from the emission of dipole radiation.
Since the transition rates are given by the matrix elements of X, wherever Xij is zero, the corresponding transition should be absent. These were called the selection rules, which were a puzzle until the advent of matrix mechanics.
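A minimal numerical illustration (using the harmonic oscillator rather than hydrogen for simplicity, in assumed units ħ = m = ω = 1): building the position matrix X from ladder operators shows that only the matrix elements with Δn = ±1 are nonzero, so the oscillator's selection rule follows directly from the vanishing of Xij.

```python
import numpy as np

# Harmonic-oscillator position matrix in the energy basis (hbar = m = omega = 1).
N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # lowering operator: a|n> = sqrt(n)|n-1>
X = (a + a.T) / np.sqrt(2)                   # X = (a + a^dagger) / sqrt(2)

# Every nonzero matrix element X_ij has j = i +/- 1: the Delta-n = +/-1 selection rule.
nonzero = np.argwhere(~np.isclose(X, 0))
print(all(abs(i - j) == 1 for i, j in nonzero))  # True
```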
An arbitrary state of the hydrogen atom, ignoring spin, is labelled by |n;l,m⟩, where the value of l is a measure of the total orbital angular momentum and m is its z-component, which defines the orbit orientation. The components of the angular momentum pseudovector are
{\displaystyle L_{i}=\varepsilon _{ijk}X^{j}P^{k}}
where the products in this expression are independent of order and real, because different components of X and P commute.
The commutation relations of L with all three coordinate matrices X, Y, Z (or with any vector) are easy to find,
{\displaystyle \left[L_{i},X_{j}\right]=i\varepsilon _{ijk}X_{k}\,,}
which confirms that the operator L generates rotations between the three components of the vector of coordinate matrices X.
From this, the commutator of Lz and the coordinate matrices X, Y, Z can be read off,
{\displaystyle {\begin{aligned}\left[L_{z},X\right]&=iY\,,\\[1ex]\left[L_{z},Y\right]&=-iX\,.\end{aligned}}}
This means that the quantities X + iY and X − iY have a simple commutation rule,
{\displaystyle {\begin{aligned}\left[L_{z},X+iY\right]&=(X+iY)\,,\\[1ex]\left[L_{z},X-iY\right]&=-(X-iY)\,.\end{aligned}}}
Just like the matrix elements of X + iP and X − iP for the harmonic oscillator Hamiltonian, this commutation law implies that these operators only have certain off diagonal matrix elements in states of definite m,
{\displaystyle L_{z}{\bigl (}(X+iY)|m\rangle {\bigr )}=(X+iY)L_{z}|m\rangle +(X+iY)|m\rangle =(m+1)(X+iY)|m\rangle }
meaning that the matrix (X + iY) takes an eigenvector of Lz with eigenvalue m to an eigenvector with eigenvalue m + 1. Similarly, (X − iY) decreases m by one unit, while Z does not change the value of m.
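This ladder action can be checked numerically. Since X + iY obeys the same commutation rule with Lz as the angular-momentum raising operator L+ = Lx + iLy itself, a minimal sketch using the spin-1 (l = 1) matrices (an illustrative choice, with ħ = 1) demonstrates the algebra:

```python
import numpy as np

# Spin-1 (l = 1) angular momentum matrices in the basis |m>, m = +1, 0, -1 (hbar = 1).
Lz = np.diag([1.0, 0.0, -1.0])
Lp = np.sqrt(2) * np.array([[0., 1., 0.],
                            [0., 0., 1.],
                            [0., 0., 0.]])   # raising operator L+ = Lx + iLy
Lm = Lp.T                                    # lowering operator L- = Lx - iLy

def comm(A, B):
    return A @ B - B @ A

print(np.allclose(comm(Lz, Lp), Lp))    # [Lz, L+] = +L+  -> True
print(np.allclose(comm(Lz, Lm), -Lm))   # [Lz, L-] = -L-  -> True

# Acting on the eigenvector with m = 0, L+ yields an eigenvector of Lz with m = +1:
v = np.array([0., 1., 0.])
w = Lp @ v
print(np.allclose(Lz @ w, 1.0 * w))     # True
```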
So, in a basis of |l,m⟩ states where L2 and Lz have definite values, the matrix elements of any of the three components of the position are zero, except when m is the same or changes by one unit.
This places a constraint on the change in total angular momentum. Any state can be rotated so that its angular momentum is in the z-direction as much as possible, where m = l. The matrix element of the position acting on |l,m⟩ can only produce values of m which are bigger by one unit, so that if the coordinates are rotated so that the final state is |l′,l′⟩, the value of l′ can be at most one bigger than the biggest value of l that occurs in the initial state. So l′ is at most l + 1.
The matrix elements vanish for l′ > l + 1, and the reverse matrix element is determined by Hermiticity, so these vanish also when l′ < l − 1: Dipole transitions are forbidden with a change in angular momentum of more than one unit.
=== Sum rules ===
The Heisenberg equation of motion determines the matrix elements of P in the Heisenberg basis from the matrix elements of X.
{\displaystyle P_{ij}=m{\frac {d}{dt}}X_{ij}=im\left(E_{i}-E_{j}\right)X_{ij}\,,}
which turns the diagonal part of the commutation relation into a sum rule for the magnitude of the matrix elements:
{\displaystyle \sum _{j}X_{ij}P_{ji}-P_{ij}X_{ji}=i\sum _{j}2m\left(E_{j}-E_{i}\right)\left|X_{ij}\right|^{2}=i\,.}
This yields a relation for the sum of the spectroscopic intensities to and from any given state, although to be absolutely correct, contributions from the radiative capture probability for unbound scattering states must be included in the sum:
{\displaystyle \sum _{j}2m\left(E_{j}-E_{i}\right)\left|X_{ij}\right|^{2}=1\,.}
== See also ==
Interaction picture
Bra–ket notation
Introduction to quantum mechanics
Heisenberg's entryway to matrix mechanics
== References ==
== Further reading ==
Bernstein, Jeremy (2005). "Max Born and the quantum theory". American Journal of Physics. 73 (11). American Association of Physics Teachers (AAPT): 999–1008. Bibcode:2005AmJPh..73..999B. doi:10.1119/1.2060717. ISSN 0002-9505.
Max Born The statistical interpretation of quantum mechanics. Nobel Lecture – December 11, 1954.
Nancy Thorndike Greenspan, "The End of the Certain World: The Life and Science of Max Born" (Basic Books, 2005) ISBN 0-7382-0693-8. Also published in Germany: Max Born - Baumeister der Quantenwelt. Eine Biographie (Spektrum Akademischer Verlag, 2005), ISBN 3-8274-1640-X.
Max Jammer The Conceptual Development of Quantum Mechanics (McGraw-Hill, 1966)
Jagdish Mehra and Helmut Rechenberg The Historical Development of Quantum Theory. Volume 3. The Formulation of Matrix Mechanics and Its Modifications 1925–1926. (Springer, 2001) ISBN 0-387-95177-6
B. L. van der Waerden, editor, Sources of Quantum Mechanics (Dover Publications, 1968) ISBN 0-486-61881-1
Aitchison, Ian J. R.; MacManus, David A.; Snyder, Thomas M. (2004). "Understanding Heisenberg's "magical" paper of July 1925: A new look at the calculational details". American Journal of Physics. 72 (11). American Association of Physics Teachers (AAPT): 1370–1379. arXiv:quant-ph/0404009. doi:10.1119/1.1775243. ISSN 0002-9505. S2CID 53118117.
Thomas F. Jordan, Quantum Mechanics in Simple Matrix Form, (Dover publications, 2005) ISBN 978-0486445304
Merzbacher, E (1968). "Matrix methods in quantum mechanics". Am. J. Phys. 36 (9): 814–821. doi:10.1119/1.1975154.
== External links ==
An Overview of Matrix Mechanics
Matrix Methods in Quantum Mechanics
Heisenberg Quantum Mechanics Archived 2010-02-16 at the Wayback Machine (the theory's origins and its historical development, 1925–27)
Werner Heisenberg 1970 CBC radio Interview
On Matrix Mechanics at MathPages
In physics, physical optics, or wave optics, is the branch of optics that studies interference, diffraction, polarization, and other phenomena for which the ray approximation of geometric optics is not valid. This usage tends not to include effects such as quantum noise in optical communication, which is studied in the sub-branch of coherence theory.
== Principle ==
Physical optics is also the name of an approximation commonly used in optics, electrical engineering and applied physics. In this context, it is an intermediate method between geometric optics, which ignores wave effects, and full wave electromagnetism, which is a precise theory. The word "physical" means that it is more physical than geometric or ray optics and not that it is an exact physical theory.
This approximation consists of using ray optics to estimate the field on a surface and then integrating that field over the surface to calculate the transmitted or scattered field. This resembles the Born approximation, in that the details of the problem are treated as a perturbation.
In optics, it is a standard way of estimating diffraction effects. In radio, this approximation is used to estimate some effects that resemble optical effects. It models several interference, diffraction and polarization effects but not the dependence of diffraction on polarization. Since this is a high-frequency approximation, it is often more accurate in optics than for radio.
In optics, it typically consists of integrating ray-estimated field over a lens, mirror or aperture to calculate the transmitted or scattered field.
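A minimal sketch of this procedure for a single slit (all parameters assumed for illustration): the field in the aperture is estimated by ray optics as uniform with unit amplitude, then integrated with the far-zone phase factor to obtain the diffracted field, reproducing the Fraunhofer sinc² pattern:

```python
import numpy as np

# Physical-optics estimate of single-slit diffraction (illustrative parameters).
wavelength = 1.0
k = 2 * np.pi / wavelength
a = 10 * wavelength                          # slit width

x = np.linspace(-a / 2, a / 2, 2001)         # points across the aperture
E_aperture = np.ones_like(x)                 # ray-optics estimate of the field

def trapezoid(y, dx):
    """Trapezoidal rule on a uniform grid."""
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

def far_field(theta):
    # Integrate the aperture field against the far-zone phase factor.
    phase = np.exp(-1j * k * x * np.sin(theta))
    return trapezoid(E_aperture * phase, x[1] - x[0])

thetas = np.linspace(-0.3, 0.3, 601)
I = np.abs([far_field(t) for t in thetas])**2
I = I / I.max()

# Analytic Fraunhofer result: I(theta) = sinc^2(a sin(theta) / wavelength)
I_exact = np.sinc(a * np.sin(thetas) / wavelength)**2
print(np.allclose(I, I_exact, atol=1e-3))    # True
```

The same integrate-the-estimated-field idea carries over to lenses, mirrors and radar scatterers, with the ray-optics field replaced by the appropriate tangent-plane estimate.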
In radar scattering it usually means taking the current that would be found on a tangent plane of similar material as the current at each point on the front, i.e., the geometrically illuminated part, of a scatterer. Current on the shadowed parts is taken as zero. The approximate scattered field is then obtained by an integral over these approximate currents. This is useful for bodies with large smooth convex shapes and for lossy (low-reflection) surfaces.
The ray-optics field or current is generally not accurate near edges or shadow boundaries, unless supplemented by diffraction and creeping wave calculations.
The standard theory of physical optics has some defects in the evaluation of scattered fields, leading to decreased accuracy away from the specular direction. An improved theory introduced in 2004 gives exact solutions to problems involving wave diffraction by conducting scatterers.
== See also ==
Optical physics
Electromagnetic modeling
Fourier optics
History of optics
Negative-index metamaterials
== References ==
Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7.
Akhmanov, A; Nikitin, S. Yu (1997). Physical Optics. Oxford University Press. ISBN 0-19-851795-5.
Hay, S.G. (August 2005). "A double-edge-diffraction Gaussian-series method for efficient physical optics analysis of dual-shaped-reflector antennas". IEEE Transactions on Antennas and Propagation. 53 (8): 2597. Bibcode:2005ITAP...53.2597H. doi:10.1109/tap.2005.851855. S2CID 10050665.
Asvestas, J. S. (February 1980). "The physical optics method in electromagnetic scattering". Journal of Mathematical Physics. 21 (2): 290–299. Bibcode:1980JMP....21..290A. doi:10.1063/1.524413.
== External links ==
Media related to Physical optics at Wikimedia Commons
A scientific theory is an explanation of an aspect of the natural world that can be or that has been repeatedly tested and has corroborating evidence in accordance with the scientific method, using accepted protocols of observation, measurement, and evaluation of results. Where possible, theories are tested under controlled conditions in an experiment. In circumstances not amenable to experimental testing, theories are evaluated through principles of abductive reasoning. Established scientific theories have withstood rigorous scrutiny and embody scientific knowledge.
A scientific theory differs from a scientific fact: a fact is an observation and a theory organizes and explains multiple observations. Furthermore, a theory is expected to make predictions which could be confirmed or refuted with additional observations. Stephen Jay Gould wrote that "...facts and theories are different things, not rungs in a hierarchy of increasing certainty. Facts are the world's data. Theories are structures of ideas that explain and interpret facts."
A theory differs from a scientific law in that a law is an empirical description of a relationship between facts and/or other laws. For example, Newton's Law of Gravity is a mathematical equation that can be used to predict the attraction between bodies, but it is not a theory to explain how gravity works.
The meaning of the term scientific theory (often contracted to theory for brevity) as used in the disciplines of science is significantly different from the common vernacular usage of theory. In everyday speech, theory can imply an explanation that represents an unsubstantiated and speculative guess, whereas in a scientific context it most often refers to an explanation that has already been tested and is widely accepted as valid.
The strength of a scientific theory is related to the diversity of phenomena it can explain and its simplicity. As additional scientific evidence is gathered, a scientific theory may be modified and ultimately rejected if it cannot be made to fit the new findings; in such circumstances, a more accurate theory is then required. Some theories are so well-established that they are unlikely ever to be fundamentally changed (for example, scientific theories such as evolution, heliocentric theory, cell theory, theory of plate tectonics, germ theory of disease, etc.). In certain cases, a scientific theory or scientific law that fails to fit all data can still be useful (due to its simplicity) as an approximation under specific conditions. An example is Newton's laws of motion, which are a highly accurate approximation to special relativity at velocities that are small relative to the speed of light.
Scientific theories are testable and make verifiable predictions. They describe the causes of a particular natural phenomenon and are used to explain and predict aspects of the physical universe or specific areas of inquiry (for example, electricity, chemistry, and astronomy). As with other forms of scientific knowledge, scientific theories are both deductive and inductive, aiming for predictive and explanatory power. Scientists use theories to further scientific knowledge, as well as to facilitate advances in technology or medicine. Scientific hypotheses can never be "proven" because scientists are not able to fully confirm that their hypothesis is true. Instead, scientists say that the study "supports" or is consistent with their hypothesis.
== Types ==
Albert Einstein described two different types of scientific theories: "Constructive theories" and "principle theories". Constructive theories are constructive models for phenomena: for example, kinetic theory. Principle theories are empirical generalisations, one such example being Newton's laws of motion.
== Characteristics ==
=== Essential criteria ===
For a theory to be accepted within most of academia there is usually one essential criterion: the theory's claims must be observable and repeatable. This requirement guards against fraud and sustains the practice of science itself.
The defining characteristic of all scientific knowledge, including theories, is the ability to make falsifiable or testable predictions. The relevance and specificity of those predictions determine how potentially useful the theory is. A would-be theory that makes no observable predictions is not a scientific theory at all. Predictions not sufficiently specific to be tested are similarly not useful. In both cases, the term "theory" is not applicable.
A body of descriptions of knowledge can be called a theory if it fulfills the following criteria:
It makes falsifiable predictions with consistent accuracy across a broad area of scientific inquiry (such as mechanics).
It is well-supported by many independent strands of evidence, rather than a single foundation.
It is consistent with preexisting experimental results and at least as accurate in its predictions as are any preexisting theories.
These qualities are certainly true of such established theories as special and general relativity, quantum mechanics, plate tectonics, the modern evolutionary synthesis, etc.
=== Other criteria ===
In addition, most scientists prefer to work with a theory that meets the following qualities:
It can be subjected to minor adaptations to account for new data that do not fit it perfectly, as they are discovered, thus increasing its predictive capability over time.
It is among the most parsimonious explanations, economical in the use of proposed entities or explanatory steps as per Occam's razor. This is because for each accepted explanation of a phenomenon, there may be an extremely large, perhaps even incomprehensible, number of possible and more complex alternatives, because one can always burden failing explanations with ad hoc hypotheses to prevent them from being falsified; therefore, simpler theories are preferable to more complex ones because they are more testable.
=== Definitions from scientific organizations ===
The United States National Academy of Sciences defines scientific theories as follows:
The formal scientific definition of theory is quite different from the everyday meaning of the word. It refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence. Many scientific theories are so well established that no new evidence is likely to alter them substantially. For example, no new evidence will demonstrate that the Earth does not orbit around the Sun (heliocentric theory), or that living things are not made of cells (cell theory), that matter is not composed of atoms, or that the surface of the Earth is not divided into solid plates that have moved over geological timescales (the theory of plate tectonics)...One of the most useful properties of scientific theories is that they can be used to make predictions about natural events or phenomena that have not yet been observed.
From the American Association for the Advancement of Science:
A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not "guesses" but reliable accounts of the real world. The theory of biological evolution is more than "just a theory". It is as factual an explanation of the universe as the atomic theory of matter or the germ theory of disease. Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact.
Note that the term theory would not be appropriate for describing untested but intricate hypotheses or even scientific models.
== Formation ==
The scientific method involves the proposal and testing of hypotheses, by deriving predictions from the hypotheses about the results of future experiments, then performing those experiments to see whether the predictions are valid. This provides evidence either for or against the hypothesis. When enough experimental results have been gathered in a particular area of inquiry, scientists may propose an explanatory framework that accounts for as many of these as possible. This explanation is also tested, and if it fulfills the necessary criteria (see above), then the explanation becomes a theory. This can take many years, as it can be difficult or complicated to gather sufficient evidence.
Once all of the criteria have been met, it will be widely accepted by scientists (see scientific consensus) as the best available explanation of at least some phenomena. It will have made predictions of phenomena that previous theories could not explain or could not predict accurately, and it will have many repeated bouts of testing. The strength of the evidence is evaluated by the scientific community, and the most important experiments will have been replicated by multiple independent groups.
Theories do not have to be perfectly accurate to be scientifically useful. For example, the predictions made by classical mechanics are known to be inaccurate in the relativistic realm, but they are almost exactly correct at the comparatively low velocities of common human experience. In chemistry, there are many acid-base theories providing highly divergent explanations of the underlying nature of acidic and basic compounds, but they are very useful for predicting their chemical behavior. Like all knowledge in science, no theory can ever be completely certain, since it is possible that future experiments might conflict with the theory's predictions. However, theories supported by the scientific consensus have the highest level of certainty of any scientific knowledge; for example, that all objects are subject to gravity or that life on Earth evolved from a common ancestor.
Acceptance of a theory does not require that all of its major predictions be tested if it is already supported by sufficient evidence. For example, certain tests may be unfeasible or technically difficult. As a result, theories may make predictions that have not yet been confirmed or proven incorrect; in this case, the predicted results may be described informally with the term "theoretical". These predictions can be tested at a later time, and if they are incorrect, this may lead to the revision or rejection of the theory. As Richard Feynman puts it: "It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong."
== Modification and improvement ==
If experimental results contrary to a theory's predictions are observed, scientists first evaluate whether the experimental design was sound, and if so they confirm the results by independent replication. A search for potential improvements to the theory then begins. Solutions may require minor or major changes to the theory, or none at all if a satisfactory explanation is found within the theory's existing framework. Over time, as successive modifications build on top of each other, theories consistently improve and greater predictive accuracy is achieved. Since each new version of a theory (or a completely new theory) must have more predictive and explanatory power than the last, scientific knowledge consistently becomes more accurate over time.
If modifications to the theory or other explanations seem to be insufficient to account for the new results, then a new theory may be required. Since scientific knowledge is usually durable, this occurs much less commonly than modification. Furthermore, until such a theory is proposed and accepted, the previous theory will be retained. This is because it is still the best available explanation for many other phenomena, as verified by its predictive power in other contexts. For example, it has been known since 1859 that the observed perihelion precession of Mercury violates Newtonian mechanics, but the theory remained the best explanation available until relativity was supported by sufficient evidence. Also, while new theories may be proposed by a single person or by many, the cycle of modifications eventually incorporates contributions from many different scientists.
After the changes, the accepted theory will explain more phenomena and have greater predictive power (if it did not, the changes would not be adopted); this new explanation will then be open to further replacement or modification. If a theory does not require modification despite repeated tests, this implies that the theory is very accurate. This also means that accepted theories continue to accumulate evidence over time, and the length of time that a theory (or any of its principles) remains accepted often indicates the strength of its supporting evidence.
=== Unification ===
In some cases, two or more theories may be replaced by a single theory that explains the previous theories as approximations or special cases, analogous to the way a theory is a unifying explanation for many confirmed hypotheses; this is referred to as unification of theories. For example, electricity and magnetism are now known to be two aspects of the same phenomenon, referred to as electromagnetism.
When the predictions of different theories appear to contradict each other, this is also resolved by either further evidence or unification. For example, physical theories in the 19th century implied that the Sun could not have been burning long enough to allow certain geological changes as well as the evolution of life. This was resolved by the discovery of nuclear fusion, the main energy source of the Sun. Contradictions can also be explained as the result of theories approximating more fundamental (non-contradictory) phenomena. For example, atomic theory is an approximation of quantum mechanics. Current theories describe three separate fundamental phenomena of which all other theories are approximations; the potential unification of these is sometimes called the Theory of Everything.
=== Example: Relativity ===
In 1905, Albert Einstein published the theory of special relativity.
He started with a principle known for three hundred years, since the time of Galileo Galilei: the principle of relativity, together with a prediction from a well-established theory of electromagnetism known as Maxwell's equations, namely that the speed of light in a vacuum does not depend on the relative motion of source and receiver. Einstein proposed, or hypothesized, that the concept of Galilean relativity should be modified to align mechanical physics with electromagnetism. In addition to unifying two branches of physics, this modification led to specific consequences such as time dilation and length contraction. Careful, repeated experiments have confirmed both that Einstein's postulates are valid and that the predictions of the special theory of relativity match experiment.
Einstein next sought to generalize the invariance principle to all reference frames, whether inertial or accelerating. Rejecting Newtonian gravitation—a central force acting instantly at a distance—Einstein presumed a gravitational field. In 1907, Einstein's equivalence principle implied that a free fall within a uniform gravitational field is equivalent to inertial motion. By extending special relativity's effects into three dimensions, general relativity extended length contraction into space contraction, conceiving of 4D space-time as the gravitational field that alters geometrically and sets all local objects' pathways. Even massless energy exerts gravitational motion on local objects by "curving" the geometrical "surface" of 4D space-time. Yet unless the energy is vast, its relativistic effects of contracting space and slowing time are negligible when merely predicting motion. Although general relativity is embraced as the more explanatory theory via scientific realism, Newton's theory remains successful as merely a predictive theory via instrumentalism. To calculate trajectories, engineers and NASA still use Newton's equations, which are simpler to operate.
== Theories and laws ==
Both scientific laws and scientific theories are produced from the scientific method through the formation and testing of hypotheses and can predict the behavior of the natural world. Both are also typically well-supported by observations and/or experimental evidence. However, scientific laws are descriptive accounts of how nature will behave under certain conditions. Scientific theories are broader in scope and give overarching explanations of how nature works and why it exhibits certain characteristics. Theories are supported by evidence from many different sources and may contain one or several laws.
A common misconception is that scientific theories are rudimentary ideas that will eventually graduate into scientific laws when enough data and evidence have been accumulated. A theory does not change into a scientific law with the accumulation of new or better evidence. A theory will always remain a theory; a law will always remain a law. Both theories and laws could potentially be falsified by countervailing evidence.
=== Theories and facts ===
A scientific theory does not turn into a fact. Scientific facts are observations that theories organize and explain. As new facts appear, a theory may be revised or new theories may emerge that encompass these additional facts. While American vernacular speech uses "theory" as similar to a "guess" in opposition to a "fact", in science the word "theory" means a model that is expected to explain a wide range of facts.
== About theories ==
=== Theories as axioms ===
The logical positivists thought of scientific theories as statements in a formal language. First-order logic is an example of a formal language. The logical positivists envisaged a similar scientific language. In addition to scientific theories, the language also included observation sentences ("the sun rises in the east"), definitions, and mathematical statements. The phenomena explained by the theories, if they could not be directly observed by the senses (for example, atoms and radio waves), were treated as theoretical concepts. In this view, theories function as axioms: predicted observations are derived from the theories much like theorems are derived in Euclidean geometry. However, the predictions are then tested against reality to verify the predictions, and the "axioms" can be revised as a direct result.
The phrase "the received view of theories" is used to describe this approach. Terms commonly associated with it are "linguistic" (because theories are components of a language) and "syntactic" (because a language has rules about how symbols can be strung together). Problems in defining this kind of language precisely, e.g., are objects seen in microscopes observed or are they theoretical objects, led to the effective demise of logical positivism in the 1970s.
=== Theories as models ===
The semantic view of theories, which identifies scientific theories with models rather than propositions, has replaced the received view as the dominant position in theory formulation in the philosophy of science. A model is a logical framework intended to represent reality (a "model of reality"), similar to the way that a map is a graphical model that represents the territory of a city or country.
In this approach, theories are a specific category of models that fulfil the necessary criteria (see above). One can use language to describe a model; however, the theory is the model (or a collection of similar models), and not the description of the model. A model of the Solar System, for example, might consist of abstract objects that represent the Sun and the planets. These objects have associated properties, e.g., positions, velocities, and masses. The model parameters, e.g., Newton's Law of Gravitation, determine how the positions and velocities change with time. This model can then be tested to see whether it accurately predicts future observations; astronomers can verify that the positions of the model's objects over time match the actual positions of the planets. For most planets, the Newtonian model's predictions are accurate; for Mercury, it is slightly inaccurate and the model of general relativity must be used instead.
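An illustrative sketch of such a model (in simplified assumed units: GM☉ = 4π², distances in astronomical units, time in years, a single planet on a circular orbit): the model's "parameters", here Newton's law of gravitation, determine how the positions and velocities change with time, and the computed positions can then be compared with observation.

```python
import numpy as np

# Toy model of a planet orbiting the Sun (G*M_sun = 4*pi^2, AU, years).
GM = 4 * np.pi**2
r = np.array([1.0, 0.0])          # Earth-like orbit: 1 AU from the Sun
v = np.array([0.0, 2 * np.pi])    # circular-orbit speed, 2*pi AU/yr

dt = 0.001
for _ in range(int(1.0 / dt)):    # integrate one year with the leapfrog method
    acc = -GM * r / np.linalg.norm(r)**3
    v = v + 0.5 * dt * acc        # kick
    r = r + dt * v                # drift
    acc = -GM * r / np.linalg.norm(r)**3
    v = v + 0.5 * dt * acc        # kick

# After one orbital period the model planet returns to its starting position.
print(np.allclose(r, [1.0, 0.0], atol=1e-3))  # True
```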
The word "semantic" refers to the way that a model represents the real world. The representation (literally, "re-presentation") describes particular aspects of a phenomenon or the manner of interaction among a set of phenomena. For instance, a scale model of a house or of the Solar System is clearly not an actual house or an actual Solar System; the aspects of an actual house or the actual Solar System represented in a scale model are, only in certain limited ways, representative of the actual entity. A scale model of a house is not a house; but to someone who wants to learn about houses, analogous to a scientist who wants to understand reality, a sufficiently detailed scale model may suffice.
==== Differences between theory and model ====
Several commentators have stated that the distinguishing characteristic of theories is that they are explanatory as well as descriptive, while models are only descriptive (although still predictive in a more limited sense). Philosopher Stephen Pepper also distinguished between theories and models and said in 1948 that general models and theories are predicated on a "root" metaphor that constrains how scientists theorize and model a phenomenon and thus arrive at testable hypotheses.
Engineering practice makes a distinction between "mathematical models" and "physical models"; the cost of fabricating a physical model can be minimized by first creating a mathematical model using a computer software package, such as a computer-aided design tool. The component parts are each themselves modelled, and the fabrication tolerances are specified. An exploded view drawing is used to lay out the fabrication sequence. Simulation packages for displaying each of the subassemblies allow the parts to be rotated, and magnified, in realistic detail. Software packages for creating the bill of materials for construction allow subcontractors to specialize in assembly processes, which spreads the cost of manufacturing machinery among multiple customers. See: Computer-aided engineering, Computer-aided manufacturing, and 3D printing
=== Assumptions in formulating theories ===
An assumption (or axiom) is a statement that is accepted without evidence. For example, assumptions can be used as premises in a logical argument. Isaac Asimov described assumptions as follows:
...it is incorrect to speak of an assumption as either true or false, since there is no way of proving it to be either (If there were, it would no longer be an assumption). It is better to consider assumptions as either useful or useless, depending on whether deductions made from them correspond to reality...Since we must start somewhere, we must have assumptions, but at least let us have as few assumptions as possible.
Certain assumptions are necessary for all empirical claims (e.g. the assumption that reality exists). However, theories do not generally make assumptions in the conventional sense (statements accepted without evidence). While assumptions are often incorporated during the formation of new theories, these are either supported by evidence (such as from previously existing theories) or the evidence is produced in the course of validating the theory. This may be as simple as observing that the theory makes accurate predictions, which is evidence that any assumptions made at the outset are correct or approximately correct under the conditions tested.
Conventional assumptions, without evidence, may be used if the theory is only intended to apply when the assumption is valid (or approximately valid). For example, the special theory of relativity assumes an inertial frame of reference. The theory makes accurate predictions when the assumption is valid, and does not make accurate predictions when the assumption is not valid. Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of relativity works in non-inertial reference frames as well).
The term "assumption" is actually broader than its standard use, etymologically speaking. The Oxford English Dictionary (OED) and online Wiktionary indicate its Latin source as assumere ("accept, to take to oneself, adopt, usurp"), which is a conjunction of ad- ("to, towards, at") and sumere (to take). The root survives, with shifted meanings, in the Italian assumere and Spanish sumir. The first sense of "assume" in the OED is "to take unto (oneself), receive, accept, adopt". The term was originally employed in religious contexts as in "to receive up into heaven", especially "the reception of the Virgin Mary into heaven, with body preserved from corruption", (1297 CE) but it was also simply used to refer to "receive into association" or "adopt into partnership". Moreover, other senses of assumere included (i) "investing oneself with (an attribute)", (ii) "to undertake" (especially in Law), (iii) "to take to oneself in appearance only, to pretend to possess", and (iv) "to suppose a thing to be" (all senses from OED entry on "assume"; the OED entry for "assumption" is almost perfectly symmetrical in senses). Thus, "assumption" connotes other associations than the contemporary standard sense of "that which is assumed or taken for granted; a supposition, postulate" (only the 11th of 12 senses of "assumption", and the 10th of 11 senses of "assume").
== Descriptions ==
=== From philosophers of science ===
Karl Popper described the characteristics of a scientific theory as follows:
It is easy to obtain confirmations, or verifications, for nearly every theory—if we look for confirmations.
Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory—an event which would have refuted the theory.
Every "good" scientific theory is a prohibition: it forbids certain things from happening. The more a theory forbids, the better it is.
A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
Every genuine test of a theory is an attempt to falsify it or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
Confirming evidence should not count except when it is the result of a genuine test of the theory, and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of "corroborating evidence".)
Some genuinely testable theories, when found to be false, might still be upheld by their admirers—for example by introducing post hoc (after the fact) some auxiliary hypothesis or assumption, or by reinterpreting the theory post hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status, by tampering with evidence. The temptation to tamper can be minimized by first taking the time to write down the testing protocol before embarking on the scientific work.
Popper summarized these statements by saying that the central criterion of the scientific status of a theory is its "falsifiability, or refutability, or testability". Echoing this, Stephen Hawking states, "A theory is a good theory if it satisfies two requirements: It must accurately describe a large class of observations on the basis of a model that contains only a few arbitrary elements, and it must make definite predictions about the results of future observations." He also discusses the "unprovable but falsifiable" nature of theories, which is a necessary consequence of inductive logic, and that "you can disprove a theory by finding even a single observation that disagrees with the predictions of the theory".
Several philosophers and historians of science have, however, argued that Popper's definition of theory as a set of falsifiable statements is wrong because, as Philip Kitcher has pointed out, if one took a strictly Popperian view of "theory", observations of Uranus when first discovered in 1781 would have "falsified" Newton's celestial mechanics. Rather, people suggested that another planet influenced Uranus' orbit—and this prediction was indeed eventually confirmed.
Kitcher agrees with Popper that "There is surely something right in the idea that a science can succeed only if it can fail." He also says that scientific theories include statements that cannot be falsified, and that good theories must also be creative. He insists we view scientific theories as an "elaborate collection of statements", some of which are not falsifiable, while others—those he calls "auxiliary hypotheses"—are.
According to Kitcher, good scientific theories must have three features:
Unity: "A science should be unified.... Good theories consist of just one problem-solving strategy, or a small family of problem-solving strategies, that can be applied to a wide range of problems."
Fecundity: "A great scientific theory, like Newton's, opens up new areas of research.... Because a theory presents a new way of looking at the world, it can lead us to ask new questions, and so to embark on new and fruitful lines of inquiry.... Typically, a flourishing science is incomplete. At any time, it raises more questions than it can currently answer. But incompleteness is not vice. On the contrary, incompleteness is the mother of fecundity.... A good theory should be productive; it should raise new questions and presume those questions can be answered without giving up its problem-solving strategies."
Auxiliary hypotheses that are independently testable: "An auxiliary hypothesis ought to be testable independently of the particular problem it is introduced to solve, independently of the theory it is designed to save." (For example, the evidence for the existence of Neptune is independent of the anomalies in Uranus's orbit.)
Like other definitions of theories, including Popper's, Kitcher makes it clear that a theory must include statements that have observational consequences. But, like the observation of irregularities in the orbit of Uranus, falsification is only one possible consequence of observation. The production of new hypotheses is another possible and equally important result.
=== Analogies and metaphors ===
The concept of a scientific theory has also been described using analogies and metaphors. For example, the logical empiricist Carl Gustav Hempel likened the structure of a scientific theory to a "complex spatial network:"
Its terms are represented by the knots, while the threads connecting the latter correspond, in part, to the definitions and, in part, to the fundamental and derivative hypotheses included in the theory. The whole system floats, as it were, above the plane of observation and is anchored to it by the rules of interpretation. These might be viewed as strings which are not part of the network but link certain points of the latter with specific places in the plane of observation. By virtue of these interpretive connections, the network can function as a scientific theory: From certain observational data, we may ascend, via an interpretive string, to some point in the theoretical network, thence proceed, via definitions and hypotheses, to other points, from which another interpretive string permits a descent to the plane of observation.
Michael Polanyi made an analogy between a theory and a map:
A theory is something other than myself. It may be set out on paper as a system of rules, and it is the more truly a theory the more completely it can be put down in such terms. Mathematical theory reaches the highest perfection in this respect. But even a geographical map fully embodies in itself a set of strict rules for finding one's way through a region of otherwise uncharted experience. Indeed, all theory may be regarded as a kind of map extended over space and time.
A scientific theory can also be thought of as a book that captures the fundamental information about the world, a book that must be researched, written, and shared. In 1623, Galileo Galilei wrote:
Philosophy [i.e. physics] is written in this grand book—I mean the universe—which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth.
The book metaphor could also be applied in the following passage, by the contemporary philosopher of science Ian Hacking:
I myself prefer an Argentine fantasy. God did not write a Book of Nature of the sort that the old Europeans imagined. He wrote a Borgesian library, each book of which is as brief as possible, yet each book of which is inconsistent with every other. No book is redundant. For every book there is some humanly accessible bit of Nature such that that book, and no other, makes possible the comprehension, prediction and influencing of what is going on...Leibniz said that God chose a world which maximized the variety of phenomena while choosing the simplest laws. Exactly so: but the best way to maximize phenomena and have simplest laws is to have the laws inconsistent with each other, each applying to this or that but none applying to all.
== In physics ==
In physics, the term theory is generally used for a mathematical framework—derived from a small set of basic postulates (usually symmetries—like equality of locations in space or in time, or identity of electrons, etc.)—that is capable of producing experimental predictions for a given category of physical systems. A good example is classical electromagnetism, which encompasses results derived from gauge symmetry (sometimes called gauge invariance) in a form of a few equations called Maxwell's equations. The specific mathematical aspects of classical electromagnetic theory are termed "laws of electromagnetism", reflecting the level of consistent and reproducible evidence that supports them. Within electromagnetic theory generally, there are numerous hypotheses about how electromagnetism applies to specific situations. Many of these hypotheses are already considered to be adequately tested, with new ones always in the making and perhaps untested. An example of the latter might be the radiation reaction force. As of 2009, its effects on the periodic motion of charges are detectable in synchrotrons, but only as averaged effects over time. Some researchers are now considering experiments that could observe these effects at the instantaneous level (i.e. not averaged over time).
== Examples ==
Note that many fields of inquiry do not have specific named theories, e.g. developmental biology. Scientific knowledge outside a named theory can still have a high level of certainty, depending on the amount of evidence supporting it. Also note that since theories draw evidence from many fields, the categorization is not absolute.
Biology: cell theory, theory of evolution (modern evolutionary synthesis), abiogenesis, germ theory, particulate inheritance theory, dual inheritance theory, Young–Helmholtz theory, opponent process, cohesion-tension theory
Chemistry: collision theory, kinetic theory of gases, Lewis theory, molecular theory, molecular orbital theory, transition state theory, valence bond theory
Physics: atomic theory, Big Bang theory, Dynamo theory, perturbation theory, theory of relativity (successor to classical mechanics), quantum field theory
Earth science: Climate change theory (from climatology), plate tectonics theory (from geology), theories of the origin of the Moon, theories for the Moon illusion
Astronomy: Self-gravitating system, Stellar evolution, solar nebular model, stellar nucleosynthesis
== Further reading ==
Sellers, Piers (17 August 2016). "Space, Climate Change, and the Real Meaning of Theory". The New Yorker. Retrieved 18 August 2016. Essay by a British/American meteorologist and NASA astronaut on anthropogenic global warming and "theory".
Quantum mechanics is the study of matter and its interactions with energy on the scale of atomic and subatomic particles. By contrast, classical physics explains matter and energy only on a scale familiar to human experience, including the behavior of astronomical bodies such as the Moon. Classical physics is still used in much of modern science and technology. However, towards the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain. The desire to resolve inconsistencies between observed phenomena and classical theory led to a revolution in physics, a shift in the original scientific paradigm: the development of quantum mechanics.
Many aspects of quantum mechanics are counterintuitive and can seem paradoxical because they describe behavior quite different from that seen at larger scales. In the words of quantum physicist Richard Feynman, quantum mechanics deals with "nature as She is—absurd". Features of quantum mechanics often defy simple explanations in everyday language. One example of this is the uncertainty principle: precise measurements of position cannot be combined with precise measurements of velocity. Another example is entanglement: a measurement made on one particle (such as an electron that is measured to have spin 'up') will correlate with a measurement on a second particle (an electron will be found to have spin 'down') if the two particles have a shared history. This will apply even if it is impossible for the result of the first measurement to have been transmitted to the second particle before the second measurement takes place.
Quantum mechanics helps people understand chemistry, because it explains how atoms interact with each other and form molecules. Many remarkable phenomena can be explained using quantum mechanics, like superfluidity. For example, if liquid helium cooled to a temperature near absolute zero is placed in a container, it spontaneously flows up and over the rim of its container; this is an effect which cannot be explained by classical physics.
== History ==
James C. Maxwell's unification of the equations governing electricity, magnetism, and light in the late 19th century led to experiments on the interaction of light and matter. Some of these experiments had aspects which could not be explained until quantum mechanics emerged in the early part of the 20th century.
=== Evidence of quanta from the photoelectric effect ===
The seeds of the quantum revolution appear in the discovery by J.J. Thomson in 1897 that cathode rays were not continuous but "corpuscles" (electrons). Electrons had been named just six years earlier as part of the emerging theory of atoms. In 1900, Max Planck, unconvinced by the atomic theory, discovered that he needed discrete entities like atoms or electrons to explain black-body radiation.
Very hot – red hot or white hot – objects look similar when heated to the same temperature. This look results from a common curve of light intensity at different frequencies (colors), which is called black-body radiation. White hot objects have intensity across many colors in the visible range. Frequencies just below the visible range are infrared light, which also gives off heat. Continuous wave theories of light and matter cannot explain the black-body radiation curve. Planck spread the heat energy among individual "oscillators" of an undefined character but with discrete energy capacity; this model explained black-body radiation.
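Planck's oscillator model leads to his radiation law, which gives the common intensity curve described above. The following minimal sketch evaluates that law numerically; the constants are rounded SI values and the Wien peak coefficient (≈ 5.879 × 10¹⁰ Hz/K) is used only to pick a representative frequency for each temperature.

```python
import math

# Rounded physical constants (SI units) -- illustrative values only.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
kB = 1.381e-23  # Boltzmann constant, J/K

def planck_spectral_radiance(freq_hz, temp_k):
    """Planck's law: spectral radiance of a black body at one frequency.

    B(nu, T) = (2 h nu^3 / c^2) / (exp(h nu / kB T) - 1)
    """
    return (2 * h * freq_hz**3 / c**2) / math.expm1(h * freq_hz / (kB * temp_k))

# Two bodies at the same temperature share the same curve, which is why
# they "look" alike. The peak frequency scales linearly with temperature
# (Wien's displacement law, frequency form):
for T in (3000, 6000):
    peak = 5.879e10 * T  # approximate peak frequency, Hz
    print(T, "K -> radiance at peak:", planck_spectral_radiance(peak, T))
```

The hotter body's curve is higher at its peak and the peak sits at a higher frequency, matching the shift from red hot toward white hot.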
At the time, electrons, atoms, and discrete oscillators were all exotic ideas to explain exotic phenomena. But in 1905 Albert Einstein proposed that light was also corpuscular, consisting of "energy quanta", in contradiction to the established science of light as a continuous wave, stretching back a hundred years to Thomas Young's work on diffraction.
Einstein's revolutionary proposal started by reanalyzing Planck's black-body theory, arriving at the same conclusions by using the new "energy quanta". Einstein then showed how energy quanta connected to Thomson's electron. In 1902, Philipp Lenard directed light from an arc lamp onto freshly cleaned metal plates housed in an evacuated glass tube. He measured the electric current coming off the metal plate, at higher and lower intensities of light and for different metals. Lenard showed that the amount of current – the number of electrons – depended on the intensity of the light, but that the velocity of these electrons did not depend on intensity. This is the photoelectric effect. The continuous wave theories of the time predicted that more light intensity would accelerate the same amount of current to higher velocity, contrary to this experiment. Einstein's energy quanta explained the increase in current: one electron is ejected for each quantum; more quanta mean more electrons.
Einstein then predicted that the electron velocity would increase in direct proportion to the light frequency above a fixed value that depended upon the metal. Here the idea is that energy in energy-quanta depends upon the light frequency; the energy transferred to the electron comes in proportion to the light frequency. The type of metal gives a barrier, the fixed value, that the electrons must climb over to exit their atoms, to be emitted from the metal surface and be measured.
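Einstein's prediction can be written as E_k = hf − W, where W is the metal-dependent barrier (work function). A minimal numeric sketch, assuming a rounded Planck constant and an approximate work function for sodium (about 2.28 eV; the exact value varies by surface preparation):

```python
h = 6.626e-34        # Planck constant, J*s (rounded)
eV = 1.602e-19       # joules per electronvolt
W_sodium = 2.28 * eV # approximate work function of sodium, J (assumed value)

def ejected_electron_energy(freq_hz, work_function_j):
    """Maximum kinetic energy of an ejected electron: E_k = h*f - W.

    Returns None below the threshold frequency -- no electrons are emitted,
    no matter how intense the light.
    """
    E = h * freq_hz - work_function_j
    return E if E > 0 else None

threshold = W_sodium / h  # frequency at which emission just begins
print(f"threshold frequency ~ {threshold:.3e} Hz")
print(ejected_electron_energy(8e14, W_sodium))  # above threshold: positive energy
print(ejected_electron_energy(3e14, W_sodium))  # below threshold: None
```

The kinetic energy grows linearly with frequency above the threshold, which is exactly the proportionality Millikan later verified.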
Ten years elapsed before Millikan's definitive experiment verified Einstein's prediction. During that time many scientists rejected the revolutionary idea of quanta. But Planck's and Einstein's concept was in the air and soon began to affect other physics and quantum theories.
=== Quantization of bound electrons in atoms ===
Experiments with light and matter in the late 1800s uncovered a reproducible but puzzling regularity. When light was shone through purified gases, certain frequencies (colors) did not pass. These dark absorption 'lines' followed a distinctive pattern: the gaps between the lines decreased steadily. By 1889, the Rydberg formula predicted the lines for hydrogen gas using only a constant number and the integers to index the lines. The origin of this regularity was unknown. Solving this mystery would eventually become the first major step toward quantum mechanics.
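The Rydberg formula is 1/λ = R(1/n₁² − 1/n₂²) with integers n₂ > n₁. A short sketch computing the visible (Balmer, n₁ = 2) hydrogen lines, assuming the standard value of the Rydberg constant:

```python
R = 1.0973731e7  # Rydberg constant for hydrogen, 1/m

def hydrogen_line_nm(n1, n2):
    """Wavelength in nm of the hydrogen line for the transition n2 -> n1."""
    inv_wavelength = R * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inv_wavelength

# Balmer series: the lines crowd together as n2 grows, which is exactly
# the "gaps between the lines decreased steadily" pattern seen in spectra.
for n2 in range(3, 7):
    print(f"n={n2} -> n=2: {hydrogen_line_nm(2, n2):.1f} nm")
```

The successive lines converge toward a series limit near 364.6 nm, reproducing the distinctive pattern that the formula was built to index.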
Throughout the 19th century evidence grew for the atomic nature of matter. With Thomson's discovery of the electron in 1897, scientists began the search for a model of the interior of the atom. Thomson proposed negative electrons swimming in a pool of positive charge. Between 1908 and 1911, Rutherford showed that the positive part was only 1/3000th of the diameter of the atom.
Models of "planetary" electrons orbiting a nuclear "Sun" were proposed, but could not explain why the electron does not simply fall into the positive charge. In 1913 Niels Bohr and Ernest Rutherford connected the new atom models to the mystery of the Rydberg formula: the orbital radii of the electrons were constrained, and the resulting energy differences matched the energy differences in the absorption lines. This meant that absorption and emission of light from atoms was energy quantized: only specific energies that matched the difference in orbital energy would be emitted or absorbed.
Trading one mystery – the regular pattern of the Rydberg formula – for another mystery – constraints on electron orbits – might not seem like a big advance, but the new atom model summarized many other experimental findings. The quantization of the photoelectric effect and now the quantization of the electron orbits set the stage for the final revolution.
Throughout both the early and the modern era of quantum mechanics, the concept that classical mechanics must be valid macroscopically constrained possible quantum models. This concept was formalized by Bohr in 1923 as the correspondence principle. It requires quantum theory to converge to classical limits.
A related concept is Ehrenfest's theorem, which shows that the average values obtained from quantum mechanics (e.g. position and momentum) obey classical laws.
=== Quantization of spin ===
In 1922 Otto Stern and Walther Gerlach demonstrated that the magnetic properties of silver atoms defy classical explanation, the work contributing to Stern's 1943 Nobel Prize in Physics. They fired a beam of silver atoms through a magnetic field. According to classical physics, the atoms should have emerged in a spray, with a continuous range of directions. Instead, the beam separated into two, and only two, diverging streams of atoms. Unlike the other quantum effects known at the time, this striking result involves the state of a single atom. In 1927, T.E. Phipps and J.B. Taylor obtained a similar, but less pronounced, effect using hydrogen atoms in their ground state, thereby eliminating any doubts that may have been caused by the use of silver atoms.
In 1924, Wolfgang Pauli called it "two-valuedness not describable classically" and associated it with electrons in the outermost shell. These experiments led, in 1925, to the formulation by Samuel Goudsmit and George Uhlenbeck, under the advice of Paul Ehrenfest, of a theory in which the effect arises from the spin of the electron.
=== Quantization of matter ===
In 1924 Louis de Broglie proposed that electrons in an atom are constrained not in "orbits" but as standing waves. In detail his solution did not work, but his hypothesis – that the electron "corpuscle" moves in the atom as a wave – spurred Erwin Schrödinger to develop a wave equation for electrons; when applied to hydrogen the Rydberg formula was accurately reproduced.
Max Born's 1924 paper "Zur Quantenmechanik" was the first use of the words "quantum mechanics" in print. His later work included developing quantum collision models; in a footnote to a 1926 paper he proposed the Born rule connecting theoretical models to experiment.
In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target and observed a diffraction pattern, indicating the wave nature of the electron; the theory was fully explained by Hans Bethe. A similar experiment by George Paget Thomson and Alexander Reid, firing electrons at thin celluloid foils and later metal films and observing rings, independently demonstrated the matter-wave nature of electrons.
=== Further developments ===
In 1928 Paul Dirac published his relativistic wave equation simultaneously incorporating relativity, predicting anti-matter, and providing a complete theory for the Stern–Gerlach result. These successes launched a new fundamental understanding of our world at small scale: quantum mechanics.
Planck and Einstein started the revolution with quanta that broke down the continuous models of matter and light. Twenty years later "corpuscles" like electrons came to be modeled as continuous waves. This result came to be called wave-particle duality, one iconic idea along with the uncertainty principle that sets quantum mechanics apart from older models of physics.
=== Quantum radiation, quantum fields ===
In 1923 Compton demonstrated that the Planck–Einstein energy quanta from light also had momentum; three years later the "energy quanta" received a new name, the "photon". Despite its role in almost all stages of the quantum revolution, no explicit model for light quanta existed until 1927, when Paul Dirac began work on a quantum theory of radiation that became quantum electrodynamics. Over the following decades this work evolved into quantum field theory, the basis for modern quantum optics and particle physics.
== Wave–particle duality ==
The concept of wave–particle duality says that neither the classical concept of "particle" nor of "wave" can fully describe the behavior of quantum-scale objects, either photons or matter. Wave–particle duality is an example of the principle of complementarity in quantum physics. An elegant example of wave-particle duality is the double-slit experiment.
In the double-slit experiment, as originally performed by Thomas Young in 1803, and then Augustin Fresnel a decade later, a beam of light is directed through two narrow, closely spaced slits, producing an interference pattern of light and dark bands on a screen. The same behavior can be demonstrated in water waves: the double-slit experiment was seen as a demonstration of the wave nature of light.
Variations of the double-slit experiment have been performed using electrons, atoms, and even large molecules, and the same type of interference pattern is seen. Thus it has been demonstrated that all matter possesses wave characteristics.
If the source intensity is turned down, the same interference pattern will slowly build up, one "count" or particle (e.g. photon or electron) at a time. The quantum system acts as a wave when passing through the double slits, but as a particle when it is detected. This is a typical feature of quantum complementarity: a quantum system acts as a wave in an experiment to measure its wave-like properties, and like a particle in an experiment to measure its particle-like properties. The point on the detector screen where any individual particle shows up is the result of a random process. However, the distribution pattern of many individual particles mimics the diffraction pattern produced by waves.
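This one-count-at-a-time buildup can be sketched in a toy simulation. The geometry below (slit separation, screen distance, wavelength) is an arbitrary illustrative choice, and ideal point slits are assumed, so the pattern is a pure cos² fringe; real slits add a diffraction envelope.

```python
import math
import random

random.seed(0)  # deterministic run for reproducibility

def interference_prob(x, wavelength=1.0, slit_sep=5.0, screen_dist=100.0):
    """Unnormalized two-slit intensity at screen position x (ideal point slits)."""
    phase = math.pi * slit_sep * x / (wavelength * screen_dist)
    return math.cos(phase) ** 2

def detect_one():
    """One particle detection: a random position drawn from the intensity curve
    by rejection sampling. Each individual landing point is random."""
    while True:
        x = random.uniform(-50.0, 50.0)
        if random.random() < interference_prob(x):
            return x

# Accumulate detections one at a time; the histogram recovers the fringes.
counts = [detect_one() for _ in range(20000)]

first_zero = 10.0  # position of the first dark fringe for this geometry
near_bright = sum(abs(x) < 1.0 for x in counts)
near_dark = sum(abs(x - first_zero) < 1.0 for x in counts)
print("counts near bright fringe:", near_bright)
print("counts near dark fringe:", near_dark)
```

Each detection is individually unpredictable, yet the pile of many detections reproduces the wave pattern: many counts near the central bright fringe, almost none at the dark fringe.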
== Uncertainty principle ==
Suppose it is desired to measure the position and speed of an object—for example, a car going through a radar speed trap. It can be assumed that the car has a definite position and speed at a particular moment in time. How accurately these values can be measured depends on the quality of the measuring equipment. If the precision of the measuring equipment is improved, it provides a result closer to the true value. It might be assumed that the speed of the car and its position could be operationally defined and measured simultaneously, as precisely as might be desired.
In 1927, Heisenberg proved that this last assumption is not correct. Quantum mechanics shows that certain pairs of physical properties, for example, position and speed, cannot be simultaneously measured, nor defined in operational terms, to arbitrary precision: the more precisely one property is measured, or defined in operational terms, the less precisely can the other be thus treated. This statement is known as the uncertainty principle. The uncertainty principle is not only a statement about the accuracy of our measuring equipment but, more deeply, is about the conceptual nature of the measured quantities—the assumption that the car had simultaneously defined position and speed does not work in quantum mechanics. On a scale of cars and people, these uncertainties are negligible, but when dealing with atoms and electrons they become critical.
Heisenberg gave, as an illustration, the measurement of the position and momentum of an electron using a photon of light. In measuring the electron's position, the higher the frequency of the photon, the more accurate is the measurement of the position of the impact of the photon with the electron, but the greater is the disturbance of the electron. This is because from the impact with the photon, the electron absorbs a random amount of energy, rendering the measurement obtained of its momentum increasingly uncertain, for one is necessarily measuring its post-impact disturbed momentum from the collision products and not its original momentum (momentum which should be simultaneously measured with position). With a photon of lower frequency, the disturbance (and hence uncertainty) in the momentum is less, but so is the accuracy of the measurement of the position of the impact.
At the heart of the uncertainty principle is a fact that for any mathematical analysis in the position and velocity domains, achieving a sharper (more precise) curve in the position domain can only be done at the expense of a more gradual (less precise) curve in the speed domain, and vice versa. More sharpness in the position domain requires contributions from more frequencies in the speed domain to create the narrower curve, and vice versa. It is a fundamental tradeoff inherent in any such related or complementary measurements, but is only really noticeable at the smallest (Planck) scale, near the size of elementary particles.
The uncertainty principle shows mathematically that the product of the uncertainty in the position and momentum of a particle (momentum is velocity multiplied by mass) could never be less than a certain value, and that this value is related to the Planck constant.
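For a Gaussian wave packet this tradeoff is exact: the position spread Δx and momentum spread Δp satisfy Δx·Δp = ħ/2, the minimum permitted by Δx·Δp ≥ ħ/2. A minimal numeric sketch (rounded value of ħ assumed):

```python
hbar = 1.055e-34  # reduced Planck constant, J*s (rounded)

def gaussian_uncertainty_product(sigma_x):
    """For a Gaussian wave packet with position spread sigma_x, the
    Fourier-transform width tradeoff gives momentum spread hbar/(2*sigma_x),
    so the product is always hbar/2."""
    delta_x = sigma_x
    delta_p = hbar / (2.0 * sigma_x)
    return delta_x * delta_p

# Narrowing the packet in position broadens it in momentum; the product
# stays fixed at hbar/2 no matter which width we choose.
for sigma in (1e-10, 1e-12):
    print(sigma, "->", gaussian_uncertainty_product(sigma))
```

Squeezing the position curve by a factor of 100 widens the momentum curve by the same factor, which is the "sharper here means more gradual there" statement of the previous paragraph in its cleanest form.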
== Wave function collapse ==
Wave function collapse means that a measurement has forced or converted a quantum (probabilistic or potential) state into a definite measured value. This phenomenon is only seen in quantum mechanics rather than classical mechanics.
For example, before a photon actually "shows up" on a detection screen it can be described only with a set of probabilities for where it might show up. When it does appear, for instance in the CCD of an electronic camera, the time and space where it interacted with the device are known within very tight limits. However, the photon has disappeared in the process of being captured (measured), and its quantum wave function has disappeared with it. In its place, some macroscopic physical change in the detection screen has appeared, e.g., an exposed spot in a sheet of photographic film, or a change in electric potential in some cell of a CCD.
== Eigenstates and eigenvalues ==
Because of the uncertainty principle, statements about both the position and momentum of particles can assign only a probability that the position or momentum has some numerical value. Therefore, it is necessary to formulate clearly the difference between the state of something indeterminate, such as an electron in a probability cloud, and the state of something having a definite value. When an object can definitely be "pinned-down" in some respect, it is said to possess an eigenstate.
In the Stern–Gerlach experiment discussed above, the quantum model predicts two possible values of spin for the atom compared to the magnetic axis. These two eigenstates are named arbitrarily 'up' and 'down'. The quantum model predicts these states will be measured with equal probability, but no intermediate values will be seen. This is what the Stern–Gerlach experiment shows.
The eigenstates of spin about the vertical axis are not simultaneously eigenstates of spin about the horizontal axis, so this atom has an equal probability of being found to have either value of spin about the horizontal axis. As described in the section above, measuring the spin about the horizontal axis can allow an atom that was spin up to become spin down: measuring its spin about the horizontal axis collapses its wave function into one of the eigenstates of this measurement, which means it is no longer in an eigenstate of spin about the vertical axis, so can take either value.
== The Pauli exclusion principle ==
In 1924, Wolfgang Pauli proposed a new quantum degree of freedom (or quantum number), with two possible values, to resolve inconsistencies between observed molecular spectra and the predictions of quantum mechanics. In particular, the spectrum of atomic hydrogen had a doublet, or pair of lines differing by a small amount, where only one line was expected. Pauli formulated his exclusion principle, stating, "There cannot exist an atom in such a quantum state that two electrons within [it] have the same set of quantum numbers."
A year later, Uhlenbeck and Goudsmit identified Pauli's new degree of freedom with the property called spin whose effects were observed in the Stern–Gerlach experiment.
== Dirac wave equation ==
In 1928, Paul Dirac extended the Pauli equation, which described spinning electrons, to account for special relativity. The result was a theory that dealt properly with events, such as the speed at which an electron orbits the nucleus, occurring at a substantial fraction of the speed of light. By using the simplest electromagnetic interaction, Dirac was able to predict the value of the magnetic moment associated with the electron's spin and found the experimentally observed value, which was too large to be that of a spinning charged sphere governed by classical physics. He was able to solve for the spectral lines of the hydrogen atom and to reproduce from physical first principles Sommerfeld's successful formula for the fine structure of the hydrogen spectrum.
Dirac's equations sometimes yielded a negative value for energy, for which he proposed a novel solution: he posited the existence of an antielectron and a dynamical vacuum. This led to the many-particle quantum field theory.
== Quantum entanglement ==
In quantum physics, a group of particles can interact or be created together in such a way that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. This is known as quantum entanglement.
An early landmark in the study of entanglement was the Einstein–Podolsky–Rosen (EPR) paradox, a thought experiment proposed by Albert Einstein, Boris Podolsky and Nathan Rosen which argues that the description of physical reality provided by quantum mechanics is incomplete. In a 1935 paper titled "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?", they argued for the existence of "elements of reality" that were not part of quantum theory, and speculated that it should be possible to construct a theory containing these hidden variables.
The thought experiment involves a pair of particles prepared in what would later become known as an entangled state. Einstein, Podolsky, and Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", positing that: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." From this, they inferred that the second particle must have a definite value of both position and of momentum prior to either quantity being measured. But quantum mechanics considers these two observables incompatible and thus does not associate simultaneous values for both to any system. Einstein, Podolsky, and Rosen therefore concluded that quantum theory does not provide a complete description of reality. In the same year, Erwin Schrödinger used the word "entanglement" and declared: "I would not call that one but rather the characteristic trait of quantum mechanics."
The Irish physicist John Stewart Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be named the Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles are able to interact instantaneously no matter how widely they ever become separated. Performing experiments like those that Bell suggested, physicists have found that nature obeys quantum mechanics and violates Bell inequalities. In other words, the results of these experiments are incompatible with any local hidden variable theory.
== Quantum field theory ==
The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted to quantize the energy of the electromagnetic field, just as the energy of an electron in the hydrogen atom is quantized in quantum mechanics. Quantization is a procedure for constructing a quantum theory starting from a classical theory.
Merriam-Webster defines a field in physics as "a region or space in which a given effect (such as magnetism) exists". Other effects that manifest themselves as fields are gravitation and static electricity. In 2008, physicist Richard Hammond wrote:
Sometimes we distinguish between quantum mechanics (QM) and quantum field theory (QFT). QM refers to a system in which the number of particles is fixed, and the fields (such as the electromagnetic field) are continuous classical entities. QFT ... goes a step further and allows for the creation and annihilation of particles ...
He added, however, that quantum mechanics is often used to refer to "the entire notion of quantum view".
In 1931, Dirac proposed the existence of particles that later became known as antimatter. Dirac shared the Nobel Prize in Physics for 1933 with Schrödinger "for the discovery of new productive forms of atomic theory".
=== Quantum electrodynamics ===
Quantum electrodynamics (QED) is the name of the quantum theory of the electromagnetic force. Understanding QED begins with understanding electromagnetism. Electromagnetism can be called "electrodynamics" because it is a dynamic interaction between electrical and magnetic forces. Electromagnetism begins with the electric charge.
Electric charges are the sources of, and create, electric fields. An electric field is a field that exerts a force on any particles that carry electric charges, at any point in space. This includes the electron, proton, and even quarks, among others. As a force is exerted, electric charges move, a current flows, and a magnetic field is produced. The changing magnetic field, in turn, causes electric current (often moving electrons). The physical description of interacting charged particles, electrical currents, electrical fields, and magnetic fields is called electromagnetism.
In 1928 Paul Dirac produced a relativistic quantum theory of electromagnetism. This was the progenitor to modern quantum electrodynamics, in that it had essential ingredients of the modern theory. However, the problem of unsolvable infinities developed in this relativistic quantum theory. Years later, renormalization largely solved this problem. Initially viewed as a provisional, suspect procedure by some of its originators, renormalization eventually was embraced as an important and self-consistent tool in QED and other fields of physics. Also, in the late 1940s Feynman diagrams provided a way to make predictions with QED by finding a probability amplitude for each possible way that an interaction could occur. The diagrams showed in particular that the electromagnetic force is the exchange of photons between interacting particles.
The Lamb shift is an example of a quantum electrodynamics prediction that has been experimentally verified. It is an effect whereby the quantum nature of the electromagnetic field makes the energy levels in an atom or ion deviate slightly from what they would otherwise be. As a result, spectral lines may shift or split.
Similarly, within a freely propagating electromagnetic wave, the current can also be just an abstract displacement current, instead of involving charge carriers. In QED, its full description makes essential use of short-lived virtual particles. There, QED again validates an earlier, rather mysterious concept.
=== Standard Model ===
The Standard Model of particle physics is the quantum field theory that describes three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifies all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy.
Although the Standard Model is believed to be theoretically self-consistent and has demonstrated success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain baryon asymmetry, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses. Accordingly, it is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations.
== Interpretations ==
The physical measurements, equations, and predictions pertinent to quantum mechanics are all consistent and hold a very high level of confirmation. However, the question of what these abstract models say about the underlying nature of the real world has received competing answers. These interpretations are widely varying and sometimes somewhat abstract. For instance, the Copenhagen interpretation states that before a measurement, statements about a particle's properties are completely meaningless, while the many-worlds interpretation describes the existence of a multiverse made up of every possible universe.
Light behaves in some aspects like particles and in other aspects like waves. Matter—the "stuff" of the universe consisting of particles such as electrons and atoms—exhibits wavelike behavior too. Some light sources, such as neon lights, give off only certain specific frequencies of light, a small set of distinct pure colors determined by neon's atomic structure. Quantum mechanics shows that light, along with all other forms of electromagnetic radiation, comes in discrete units, called photons, and predicts its spectral energies (corresponding to pure colors), and the intensities of its light beams. A single photon is a quantum, or smallest observable particle, of the electromagnetic field. A partial photon is never experimentally observed. More broadly, quantum mechanics shows that many properties of objects, such as position, speed, and angular momentum, that appeared continuous in the zoomed-out view of classical mechanics, turn out to be (in the very tiny, zoomed-in scale of quantum mechanics) quantized. Such properties of elementary particles are required to take on one of a set of small, discrete allowable values, and since the gap between these values is also small, the discontinuities are only apparent at very tiny (atomic) scales.
== Applications ==
=== Everyday applications ===
The relationship between the frequency of electromagnetic radiation and the energy of each photon is why ultraviolet light can cause sunburn, but visible or infrared light cannot. A photon of ultraviolet light delivers a high amount of energy—enough to contribute to cellular damage such as occurs in a sunburn. A photon of infrared light delivers less energy—only enough to warm one's skin. So, an infrared lamp can warm a large surface, perhaps large enough to keep people comfortable in a cold room, but it cannot give anyone a sunburn.
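The comparison above follows directly from E = hf = hc/λ. A minimal sketch (the specific wavelengths 300 nm and 1000 nm are illustrative assumptions):

```python
# Sketch: photon energy E = h·c/λ for a UV and an IR wavelength (values assumed).
H = 6.62607015e-34    # Planck constant, J·s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m: float) -> float:
    """Energy of a single photon, in electronvolts."""
    return H * C / wavelength_m / EV

uv = photon_energy_ev(300e-9)   # ultraviolet, ~300 nm
ir = photon_energy_ev(1000e-9)  # near infrared, ~1000 nm
print(f"UV: {uv:.2f} eV, IR: {ir:.2f} eV")
```

The UV photon comes out at roughly 4 eV, comparable to chemical bond energies, while the IR photon carries only about 1.2 eV, which is why one can damage cells and the other merely warms them.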
=== Technological applications ===
Applications of quantum mechanics include the laser, the transistor, the electron microscope, and magnetic resonance imaging. A special class of quantum mechanical applications is related to macroscopic quantum phenomena such as superfluid helium and superconductors. The study of semiconductors led to the invention of the diode and the transistor, which are indispensable for modern electronics.
In even a simple light switch, quantum tunneling is absolutely vital, as otherwise the electrons in the electric current could not penetrate the potential barrier made up of a layer of oxide. Flash memory chips found in USB drives also use quantum tunneling, to erase their memory cells.
== See also ==
Einstein's thought experiments
Macroscopic quantum phenomena
Philosophy of physics
Quantum computing
Virtual particle
Teaching quantum mechanics
List of textbooks on classical and quantum mechanics
== References ==
== Bibliography ==
Bernstein, Jeremy (2005). "Max Born and the quantum theory". American Journal of Physics. 73 (11): 999–1008. Bibcode:2005AmJPh..73..999B. doi:10.1119/1.2060717.
Beller, Mara (2001). Quantum Dialogue: The Making of a Revolution. University of Chicago Press.
Bohr, Niels (1958). Atomic Physics and Human Knowledge. John Wiley & Sons. ISBN 0486479285. OCLC 530611.
de Broglie, Louis (1953). The Revolution in Physics. Noonday Press. LCCN 53010401.
Bronner, Patrick; Strunz, Andreas; Silberhorn, Christine; Meyn, Jan-Peter (2009). "Demonstrating quantum random with single photons". European Journal of Physics. 30 (5): 1189–1200. Bibcode:2009EJPh...30.1189B. doi:10.1088/0143-0807/30/5/026. S2CID 7903179.
Einstein, Albert (1934). Essays in Science. Philosophical Library. ISBN 0486470113. LCCN 55003947.
Feigl, Herbert; Brodbeck, May (1953). Readings in the Philosophy of Science. Appleton-Century-Crofts. ISBN 0390304883. LCCN 53006438.
Feynman, Richard P. (1949). "Space-Time Approach to Quantum Electrodynamics". Physical Review. 76 (6): 769–89. Bibcode:1949PhRv...76..769F. doi:10.1103/PhysRev.76.769.
Feynman, Richard P. (1990). QED, The Strange Theory of Light and Matter. Penguin Books. ISBN 978-0140125054.
Fowler, Michael (1999). The Bohr Atom. University of Virginia.
Heisenberg, Werner (1958). Physics and Philosophy. Harper and Brothers. ISBN 0061305499. LCCN 99010404.
Lakshmibala, S. (2004). "Heisenberg, Matrix Mechanics and the Uncertainty Principle". Resonance: Journal of Science Education. 9 (8): 46–56. doi:10.1007/bf02837577. S2CID 29893512.
Liboff, Richard L. (1992). Introductory Quantum Mechanics (2nd ed.). Addison-Wesley Pub. Co. ISBN 9780201547153.
Lindsay, Robert Bruce; Margenau, Henry (1957). Foundations of Physics. Dover. ISBN 0918024188. LCCN 57014416.
McEvoy, J. P.; Zarate, Oscar (2004). Introducing Quantum Theory. Icon Books. ISBN 1874166374.
Nave, Carl Rod (2005). "Quantum Physics". HyperPhysics. Georgia State University.
Peat, F. David (2002). From Certainty to Uncertainty: The Story of Science and Ideas in the Twenty-First Century. Joseph Henry Press.
Reichenbach, Hans (1944). Philosophic Foundations of Quantum Mechanics. University of California Press. ISBN 0486404595. LCCN a44004471.
Schilpp, Paul Arthur (1949). Albert Einstein: Philosopher-Scientist. Tudor Publishing Company. LCCN 50005340.
Scientific American Reader, 1953.
Sears, Francis Weston (1949). Optics (3rd ed.). Addison-Wesley. ISBN 0195046013. LCCN 51001018.
Shimony, A. (1983). "(title not given in citation)". Foundations of Quantum Mechanics in the Light of New Technology (S. Kamefuchi et al., eds.). Tokyo: Japan Physical Society. p. 225.; cited in: Popescu, Sandu; Daniel Rohrlich (1996). "Action and Passion at a Distance: An Essay in Honor of Professor Abner Shimony". arXiv:quant-ph/9605004.
Tavel, Morton; Tavel, Judith (illustrations) (2002). Contemporary physics and the limits of knowledge. Rutgers University Press. ISBN 978-0813530772.
Van Vleck, J. H. (1928). "The Correspondence Principle in the Statistical Interpretation of Quantum Mechanics". Proc. Natl. Acad. Sci. 14: 179.
Westmoreland; Benjamin Schumacher (1998). "Quantum Entanglement and the Nonexistence of Superluminal Signals". arXiv:quant-ph/9801014.
Wheeler, John Archibald; Feynman, Richard P. (1949). "Classical Electrodynamics in Terms of Direct Interparticle Action" (PDF). Reviews of Modern Physics. 21 (3): 425–33. Bibcode:1949RvMP...21..425W. doi:10.1103/RevModPhys.21.425.
Wieman, Carl; Perkins, Katherine (2005). "Transforming Physics Education". Physics Today. 58 (11): 36. Bibcode:2005PhT....58k..36W. doi:10.1063/1.2155756.
== Further reading ==
The following titles, all by working physicists, attempt to communicate quantum theory to laypeople, using a minimum of technical apparatus.
Jim Al-Khalili (2003). Quantum: A Guide for the Perplexed. Weidenfeld & Nicolson. ISBN 978-1780225340.
Chester, Marvin (1987). Primer of Quantum Mechanics. John Wiley. ISBN 0486428788.
Brian Cox and Jeff Forshaw (2011) The Quantum Universe. Allen Lane. ISBN 978-1846144325.
Richard Feynman (1985). QED: The Strange Theory of Light and Matter. Princeton University Press. ISBN 0691083886.
Ford, Kenneth (2005). The Quantum World. Harvard Univ. Press. Includes elementary particle physics.
Ghirardi, GianCarlo (2004). Sneaking a Look at God's Cards, Gerald Malsbary, trans. Princeton Univ. Press. The most technical of the works cited here. Passages using algebra, trigonometry, and bra–ket notation can be passed over on a first reading.
Tony Hey and Walters, Patrick (2003). The New Quantum Universe. Cambridge Univ. Press. Includes much about the technologies quantum theory has made possible. ISBN 978-0521564571.
Vladimir G. Ivancevic, Tijana T. Ivancevic (2008). Quantum leap: from Dirac and Feynman, Across the universe, to human body and mind. World Scientific Publishing Company. Provides an intuitive introduction in non-mathematical terms and an introduction in comparatively basic mathematical terms. ISBN 978-9812819277.
J. P. McEvoy and Oscar Zarate (2004). Introducing Quantum Theory. Totem Books. ISBN 1840465778.
N. David Mermin (1990). "Spooky actions at a distance: mysteries of the QT" in his Boojums all the way through. Cambridge Univ. Press: 110–76. The author is a rare physicist who tries to communicate to philosophers and humanists. ISBN 978-0521388801.
Roland Omnès (1999). Understanding Quantum Mechanics. Princeton Univ. Press. ISBN 978-0691004358.
Victor Stenger (2000). Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpts. 5–8. ISBN 978-1573928595.
Martinus Veltman (2003). Facts and Mysteries in Elementary Particle Physics. World Scientific Publishing Company. ISBN 978-9812381491.
== External links ==
"Microscopic World – Introduction to Quantum Mechanics". by Takada, Kenjiro, emeritus professor at Kyushu University
The Quantum Exchange (tutorials and open-source learning software).
Atoms and the Periodic Table
Single and double slit interference
Time-Evolution of a Wavepacket in a Square Well An animated demonstration of a wave packet dispersion over time.
Carroll, Sean M. "Quantum Mechanics (an embarrassment)". Sixty Symbols. Brady Haran for the University of Nottingham.
In quantum mechanics, an energy level is degenerate if it corresponds to two or more different measurable states of a quantum system. Conversely, two or more different states of a quantum mechanical system are said to be degenerate if they give the same value of energy upon measurement. The number of different states corresponding to a particular energy level is known as the degree of degeneracy (or simply the degeneracy) of the level. It is represented mathematically by the Hamiltonian for the system having more than one linearly independent eigenstate with the same energy eigenvalue. When this is the case, energy alone is not enough to characterize what state the system is in, and other quantum numbers are needed to characterize the exact state when distinction is desired. In classical mechanics, this can be understood in terms of different possible trajectories corresponding to the same energy.
Degeneracy plays a fundamental role in quantum statistical mechanics. For an N-particle system in three dimensions, a single energy level may correspond to several different wave functions or energy states. These degenerate states at the same level all have an equal probability of being filled. The number of such states gives the degeneracy of a particular energy level.
== Mathematics ==
The possible states of a quantum mechanical system may be treated mathematically as abstract vectors in a separable, complex Hilbert space, while the observables may be represented by linear Hermitian operators acting upon them. By selecting a suitable basis, the components of these vectors and the matrix elements of the operators in that basis may be determined.
If A is an N × N matrix, X a non-zero vector, and λ a scalar, such that {\displaystyle AX=\lambda X}, then the scalar λ is said to be an eigenvalue of A and the vector X is said to be the eigenvector corresponding to λ. Together with the zero vector, the set of all eigenvectors corresponding to a given eigenvalue λ forms a subspace of Cn, which is called the eigenspace of λ. An eigenvalue λ which corresponds to two or more different linearly independent eigenvectors is said to be degenerate, i.e., {\displaystyle AX_{1}=\lambda X_{1}} and {\displaystyle AX_{2}=\lambda X_{2}}, where {\displaystyle X_{1}} and {\displaystyle X_{2}} are linearly independent eigenvectors. The dimension of the eigenspace corresponding to that eigenvalue is known as its degree of degeneracy, which can be finite or infinite. An eigenvalue is said to be non-degenerate if its eigenspace is one-dimensional.
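The definition above is easy to check numerically. A minimal sketch with NumPy (the particular 3 × 3 matrix is an illustrative assumption, not taken from the text):

```python
# Sketch: a Hermitian matrix with a doubly degenerate eigenvalue.
import numpy as np

A = np.diag([2.0, 2.0, 5.0])  # eigenvalue 2 has two independent eigenvectors
eigvals, eigvecs = np.linalg.eigh(A)  # eigh returns eigenvalues in ascending order

# Degree of degeneracy = dimension of the eigenspace = multiplicity of the eigenvalue.
degeneracy = int(np.sum(np.isclose(eigvals, 2.0)))
print(degeneracy)  # 2

# Any linear combination of the two degenerate eigenvectors is again an
# eigenvector with the same eigenvalue:
v = 0.3 * eigvecs[:, 0] + 0.7 * eigvecs[:, 1]
assert np.allclose(A @ v, 2.0 * v)
```

The closing assertion is the "closed under linear combinations" property of eigenspaces that the later sections rely on.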
The eigenvalues of the matrices representing physical observables in quantum mechanics give the measurable values of these observables while the eigenstates corresponding to these eigenvalues give the possible states in which the system may be found, upon measurement. The measurable values of the energy of a quantum system are given by the eigenvalues of the Hamiltonian operator, while its eigenstates give the possible energy states of the system. A value of energy is said to be degenerate if there exist at least two linearly independent energy states associated with it. Moreover, any linear combination of two or more degenerate eigenstates is also an eigenstate of the Hamiltonian operator corresponding to the same energy eigenvalue. This clearly follows from the fact that the eigenspace of the energy value eigenvalue λ is a subspace (being the kernel of the Hamiltonian minus λ times the identity), hence is closed under linear combinations.
== Effect of degeneracy on the measurement of energy ==
In the absence of degeneracy, if a measured value of energy of a quantum system is determined, the corresponding state of the system is assumed to be known, since only one eigenstate corresponds to each energy eigenvalue. However, if the Hamiltonian {\displaystyle {\hat {H}}} has a degenerate eigenvalue {\displaystyle E_{n}} of degree gn, the eigenstates associated with it form a vector subspace of dimension gn. In such a case, several final states can be possibly associated with the same result {\displaystyle E_{n}}, all of which are linear combinations of the gn orthonormal eigenvectors {\displaystyle |E_{n,i}\rangle }.
In this case, the probability that the energy value measured for a system in the state {\displaystyle |\psi \rangle } will yield the value {\displaystyle E_{n}} is given by the sum of the probabilities of finding the system in each of the states in this basis, i.e.,
{\displaystyle P(E_{n})=\sum _{i=1}^{g_{n}}|\langle E_{n,i}|\psi \rangle |^{2}}
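The sum over the degenerate basis can be illustrated with a toy example (the 3-dimensional state space and the particular vectors are assumptions chosen for arithmetic simplicity):

```python
# Sketch: P(E_n) = Σ_i |<E_n,i|ψ>|² over an orthonormal basis of a
# doubly degenerate level (toy 3-dimensional example).
import numpy as np

# Orthonormal eigenvectors spanning the degenerate level E_n:
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

psi = np.array([0.6, 0.48, 0.64])  # normalised: 0.36 + 0.2304 + 0.4096 = 1
assert np.isclose(np.linalg.norm(psi), 1.0)

# Probability of measuring E_n = sum of squared overlaps with each basis vector:
p_En = abs(e1 @ psi) ** 2 + abs(e2 @ psi) ** 2
print(p_En)  # 0.5904
```

Note that the answer is independent of which orthonormal basis of the eigenspace is chosen, since it equals the squared norm of the projection of {\displaystyle |\psi \rangle } onto that eigenspace.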
== Degeneracy in different dimensions ==
This section intends to illustrate the existence of degenerate energy levels in quantum systems studied in different dimensions. The study of one and two-dimensional systems aids the conceptual understanding of more complex systems.
=== Degeneracy in one dimension ===
In several cases, analytic results can be obtained more easily in the study of one-dimensional systems. For a quantum particle with a wave function {\displaystyle |\psi \rangle } moving in a one-dimensional potential {\displaystyle V(x)}, the time-independent Schrödinger equation can be written as
{\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}+V\psi =E\psi }
Since this is an ordinary differential equation, there are at most two independent eigenfunctions for a given energy {\displaystyle E}, so that the degree of degeneracy never exceeds two. It can be proven that in one dimension, there are no degenerate bound states for normalizable wave functions. A sufficient condition on a piecewise continuous potential {\displaystyle V} and the energy {\displaystyle E} is the existence of two real numbers {\displaystyle M,x_{0}} with {\displaystyle M\neq 0} such that {\displaystyle \forall x>x_{0}} we have {\displaystyle V(x)-E\geq M^{2}}. In particular, {\displaystyle V} is bounded below in this criterion.
=== Degeneracy in two-dimensional quantum systems ===
Two-dimensional quantum systems exist in all three states of matter and much of the variety seen in three dimensional matter can be created in two dimensions. Real two-dimensional materials are made of monoatomic layers on the surface of solids. Some examples of two-dimensional electron systems achieved experimentally include MOSFET, two-dimensional superlattices of Helium, Neon, Argon, Xenon etc. and surface of liquid Helium.
The presence of degenerate energy levels is studied in the cases of Particle in a box and two-dimensional harmonic oscillator, which act as useful mathematical models for several real world systems.
=== Particle in a rectangular plane ===
Consider a free particle in a plane of dimensions {\displaystyle L_{x}} and {\displaystyle L_{y}} bounded by impenetrable walls. The time-independent Schrödinger equation for this system with wave function {\displaystyle |\psi \rangle } can be written as
{\displaystyle -{\frac {\hbar ^{2}}{2m}}\left({\frac {\partial ^{2}\psi }{\partial x^{2}}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}\right)=E\psi }
The permitted energy values are
{\displaystyle E_{n_{x},n_{y}}={\frac {\pi ^{2}\hbar ^{2}}{2m}}\left({\frac {n_{x}^{2}}{L_{x}^{2}}}+{\frac {n_{y}^{2}}{L_{y}^{2}}}\right)}
The normalized wave function is
{\displaystyle \psi _{n_{x},n_{y}}(x,y)={\frac {2}{\sqrt {L_{x}L_{y}}}}\sin \left({\frac {n_{x}\pi x}{L_{x}}}\right)\sin \left({\frac {n_{y}\pi y}{L_{y}}}\right)}
where {\displaystyle n_{x},n_{y}=1,2,3,\dots }
So, quantum numbers {\displaystyle n_{x}} and {\displaystyle n_{y}} are required to describe the energy eigenvalues, and the lowest energy of the system is given by
{\displaystyle E_{1,1}=\pi ^{2}{\frac {\hbar ^{2}}{2m}}\left({\frac {1}{L_{x}^{2}}}+{\frac {1}{L_{y}^{2}}}\right)}
For some commensurate ratios of the two lengths {\displaystyle L_{x}} and {\displaystyle L_{y}}, certain pairs of states are degenerate. If {\displaystyle L_{x}/L_{y}=p/q}, where p and q are integers, the states {\displaystyle (n_{x},n_{y})} and {\displaystyle (pn_{y}/q,qn_{x}/p)} have the same energy and so are degenerate to each other.
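This can be verified directly for a concrete choice of p, q and quantum numbers (the values below are illustrative assumptions; energies are in units where π²ħ²/2m = 1):

```python
# Sketch: with Lx/Ly = p/q, the states (nx, ny) and (p·ny/q, q·nx/p)
# share the same energy whenever the partner indices are integers.
def energy(nx: int, ny: int, Lx: float, Ly: float) -> float:
    """Box energy in units of pi^2*hbar^2/2m."""
    return (nx / Lx) ** 2 + (ny / Ly) ** 2

p, q = 3, 2
Lx, Ly = 3.0, 2.0                      # Lx/Ly = p/q

nx, ny = 3, 4                          # chosen so the partner indices are integers
partner = (p * ny // q, q * nx // p)   # (6, 2)

e_state = energy(nx, ny, Lx, Ly)
e_partner = energy(*partner, Lx, Ly)
print(partner, e_state)                # (6, 2) 5.0
assert abs(e_state - e_partner) < 1e-12
```

Here (3, 4) and (6, 2) both have energy 1² + 2² = 5 in these units, a degeneracy produced purely by the commensurate aspect ratio.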
=== Particle in a square box ===
In this case, the dimensions of the box are {\displaystyle L_{x}=L_{y}=L} and the energy eigenvalues are given by
{\displaystyle E_{n_{x},n_{y}}={\frac {\pi ^{2}\hbar ^{2}}{2mL^{2}}}(n_{x}^{2}+n_{y}^{2})}
Since {\displaystyle n_{x}} and {\displaystyle n_{y}} can be interchanged without changing the energy, each energy level has a degeneracy of at least two when {\displaystyle n_{x}} and {\displaystyle n_{y}} are different. Degenerate states are also obtained when the sum of squares of quantum numbers corresponding to different energy levels are the same. For example, the three states (nx = 7, ny = 1), (nx = 1, ny = 7) and (nx = ny = 5) all have
{\displaystyle E=50{\frac {\pi ^{2}\hbar ^{2}}{2mL^{2}}}}
and constitute a degenerate set.
Degrees of degeneracy of different energy levels for a particle in a square box:
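Such a table of degeneracies can be reproduced by counting, for each value of nx² + ny², how many ordered pairs produce it. A sketch (the cutoff N = 8 is an arbitrary assumption):

```python
# Sketch: degeneracy of each square-box level = number of ordered pairs
# (nx, ny) with the same nx^2 + ny^2 (energy in units of pi^2*hbar^2/2mL^2).
from collections import Counter

N = 8  # scan quantum numbers 1..N (cutoff chosen arbitrarily)
counts = Counter(nx**2 + ny**2
                 for nx in range(1, N + 1)
                 for ny in range(1, N + 1))

for s in sorted(counts)[:3]:
    print(s, counts[s])
# 2 1   <- (1,1): non-degenerate ground state
# 5 2   <- (1,2) and (2,1)
# 8 1   <- (2,2)

# The level E = 50 from the text has degeneracy 3: (1,7), (7,1), (5,5).
assert counts[50] == 3
```

Counting ordered pairs this way captures both the swap degeneracy and the "accidental" degeneracies where distinct sums of squares coincide.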
=== Particle in a cubic box ===
In this case, the dimensions of the box are {\displaystyle L_{x}=L_{y}=L_{z}=L} and the energy eigenvalues depend on three quantum numbers:
{\displaystyle E_{n_{x},n_{y},n_{z}}={\frac {\pi ^{2}\hbar ^{2}}{2mL^{2}}}(n_{x}^{2}+n_{y}^{2}+n_{z}^{2})}
Since {\displaystyle n_{x}}, {\displaystyle n_{y}} and {\displaystyle n_{z}} can be interchanged without changing the energy, each energy level has a degeneracy of at least three when the three quantum numbers are not all equal.
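The same counting idea extends to three dimensions; a sketch (cutoff N = 6 assumed):

```python
# Sketch: cubic-box degeneracies by tallying nx^2 + ny^2 + nz^2 over
# ordered triples (energy in units of pi^2*hbar^2/2mL^2).
from collections import Counter

N = 6
counts = Counter(nx**2 + ny**2 + nz**2
                 for nx in range(1, N + 1)
                 for ny in range(1, N + 1)
                 for nz in range(1, N + 1))

assert counts[3] == 1    # (1,1,1): all quantum numbers equal, non-degenerate
assert counts[6] == 3    # permutations of (1,1,2): two equal, one different
assert counts[14] == 6   # permutations of (1,2,3): all distinct
print(counts[3], counts[6], counts[14])
```

The three cases shown match the rule in the text: all-equal triples are non-degenerate, while triples with distinct entries give at least 3 (here 6) permutational partners.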
== Finding a unique eigenbasis in case of degeneracy ==
If two operators {\displaystyle {\hat {A}}} and {\displaystyle {\hat {B}}} commute, i.e. {\displaystyle [{\hat {A}},{\hat {B}}]=0}, then for every eigenvector {\displaystyle |\psi \rangle } of {\displaystyle {\hat {A}}}, {\displaystyle {\hat {B}}|\psi \rangle } is also an eigenvector of {\displaystyle {\hat {A}}} with the same eigenvalue. However, if this eigenvalue, say {\displaystyle \lambda }, is degenerate, it can only be said that {\displaystyle {\hat {B}}|\psi \rangle } belongs to the eigenspace {\displaystyle E_{\lambda }} of {\displaystyle {\hat {A}}}, which is said to be globally invariant under the action of {\displaystyle {\hat {B}}}.
For two commuting observables A and B, one can construct an orthonormal basis of the state space with eigenvectors common to the two operators. However, if {\displaystyle \lambda } is a degenerate eigenvalue of {\displaystyle {\hat {A}}}, then its eigenspace is invariant under the action of {\displaystyle {\hat {B}}}, so the representation of {\displaystyle {\hat {B}}} in the eigenbasis of {\displaystyle {\hat {A}}} is not a diagonal but a block-diagonal matrix, i.e. the degenerate eigenvectors of {\displaystyle {\hat {A}}} are not, in general, eigenvectors of {\displaystyle {\hat {B}}}. However, it is always possible to choose, in every degenerate eigensubspace of {\displaystyle {\hat {A}}}, a basis of eigenvectors common to {\displaystyle {\hat {A}}} and {\displaystyle {\hat {B}}}.
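This block-diagonal picture can be illustrated numerically. The sketch below uses small illustrative Hermitian matrices (not tied to any particular physical system): A has a doubly degenerate eigenvalue, B commutes with A but is not diagonal in the naive eigenbasis, and diagonalizing B within the degenerate eigenspace yields a common eigenbasis:

```python
import numpy as np

# A has the doubly degenerate eigenvalue 1 (eigenspace spanned by e1, e2);
# B commutes with A but mixes e1 and e2.
A = np.diag([1.0, 1.0, 2.0])
B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 5.0]])
assert np.allclose(A @ B, B @ A)       # the operators commute

# Diagonalize B restricted to the degenerate eigenspace of A
sub = B[:2, :2]
_, vecs = np.linalg.eigh(sub)
common = np.zeros((3, 2))
common[:2, :] = vecs

# Each resulting vector is an eigenvector of both A and B
for v in common.T:
    assert np.allclose(A @ v, 1.0 * v)          # eigenvector of A, eigenvalue 1
    lam = v @ (B @ v)                           # Rayleigh quotient (v is normalized)
    assert np.allclose(B @ v, lam * v)          # eigenvector of B as well
```

In this example the common eigenvectors are (e1 ± e2)/√2, with B-eigenvalues ±1.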
=== Choosing a complete set of commuting observables ===
If a given observable A is non-degenerate, there exists a unique basis formed by its eigenvectors. On the other hand, if one or several eigenvalues of {\displaystyle {\hat {A}}} are degenerate, specifying an eigenvalue is not sufficient to characterize a basis vector. If, by choosing an observable {\displaystyle {\hat {B}}} that commutes with {\displaystyle {\hat {A}}}, it is possible to construct an orthonormal basis of eigenvectors common to {\displaystyle {\hat {A}}} and {\displaystyle {\hat {B}}} that is unique for each of the possible pairs of eigenvalues {a,b}, then {\displaystyle {\hat {A}}} and {\displaystyle {\hat {B}}} are said to form a complete set of commuting observables. However, if a unique set of eigenvectors still cannot be specified for at least one of the pairs of eigenvalues, a third observable {\displaystyle {\hat {C}}} that commutes with both {\displaystyle {\hat {A}}} and {\displaystyle {\hat {B}}} can be found such that the three form a complete set of commuting observables.
It follows that the eigenfunctions of the Hamiltonian of a quantum system with a common energy value must be labelled by giving some additional information, which can be done by choosing an operator that commutes with the Hamiltonian. These additional labels are required to name a unique energy eigenfunction and are usually related to the constants of motion of the system.
=== Degenerate energy eigenstates and the parity operator ===
The parity operator is defined by its action in the {\displaystyle |r\rangle } representation of changing r to −r, i.e.
{\displaystyle \langle r|P|\psi \rangle =\psi (-r)}
The eigenvalues of P can be shown to be limited to {\displaystyle \pm 1}, which are both degenerate eigenvalues in an infinite-dimensional state space. An eigenvector of P with eigenvalue +1 is said to be even, while one with eigenvalue −1 is said to be odd.
Now, an even operator {\displaystyle {\hat {A}}} is one that satisfies
{\displaystyle {\tilde {A}}=P{\hat {A}}P}
{\displaystyle [P,{\hat {A}}]=0}
while an odd operator {\displaystyle {\hat {B}}} is one that satisfies
{\displaystyle P{\hat {B}}+{\hat {B}}P=0}
Since the square of the momentum operator {\displaystyle {\hat {p}}^{2}} is even, if the potential V(r) is even, the Hamiltonian {\displaystyle {\hat {H}}} is said to be an even operator. In that case, if each of its eigenvalues is non-degenerate, each eigenvector is necessarily an eigenstate of P, and therefore it is possible to look for the eigenstates of {\displaystyle {\hat {H}}} among even and odd states. However, if one of the energy eigenstates has no definite parity, it can be asserted that the corresponding eigenvalue is degenerate, and {\displaystyle P|\psi \rangle } is an eigenvector of {\displaystyle {\hat {H}}} with the same eigenvalue as {\displaystyle |\psi \rangle }.
== Degeneracy and symmetry ==
The physical origin of degeneracy in a quantum-mechanical system is often the presence of some symmetry in the system. Studying the symmetry of a quantum system can, in some cases, enable us to find the energy levels and degeneracies without solving the Schrödinger equation, hence reducing effort.
Mathematically, the relation of degeneracy with symmetry can be clarified as follows. Consider a symmetry operation associated with a unitary operator S. Under such an operation, the new Hamiltonian is related to the original Hamiltonian by a similarity transformation generated by the operator S, such that
{\displaystyle H'=SHS^{-1}=SHS^{\dagger }}
since S is unitary. If the Hamiltonian remains unchanged under the transformation operation S, we have
{\displaystyle {\begin{aligned}SHS^{\dagger }&=H\\[1ex]SHS^{-1}&=H\\[1ex]SH&=HS\\[1ex][S,H]&=0\end{aligned}}}
Now, if {\displaystyle |\alpha \rangle } is an energy eigenstate,
{\displaystyle H|\alpha \rangle =E|\alpha \rangle }
where E is the corresponding energy eigenvalue.
{\displaystyle HS|\alpha \rangle =SH|\alpha \rangle =SE|\alpha \rangle =ES|\alpha \rangle }
which means that {\displaystyle S|\alpha \rangle } is also an energy eigenstate with the same eigenvalue E. If the two states {\displaystyle |\alpha \rangle } and {\displaystyle S|\alpha \rangle } are linearly independent (i.e. physically distinct), they are therefore degenerate.
In cases where S is characterized by a continuous parameter {\displaystyle \epsilon }, all states of the form {\displaystyle S(\epsilon )|\alpha \rangle } have the same energy eigenvalue.
=== Symmetry group of the Hamiltonian ===
The set of all operators which commute with the Hamiltonian of a quantum system are said to form the symmetry group of the Hamiltonian. The commutators of the generators of this group determine the algebra of the group. An n-dimensional representation of the symmetry group preserves the multiplication table of the symmetry operators. The possible degeneracies of the Hamiltonian with a particular symmetry group are given by the dimensionalities of the irreducible representations of the group. The eigenfunctions corresponding to an n-fold degenerate eigenvalue form a basis for an n-dimensional irreducible representation of the symmetry group of the Hamiltonian.
== Types of degeneracy ==
Degeneracies in a quantum system can be systematic or accidental in nature.
=== Systematic or essential degeneracy ===
This is also called a geometrical or normal degeneracy and arises due to the presence of some kind of symmetry in the system under consideration, i.e. the invariance of the Hamiltonian under a certain operation, as described above. The representation obtained from a normal degeneracy is irreducible and the corresponding eigenfunctions form a basis for this representation.
=== Accidental degeneracy ===
It is a type of degeneracy resulting from some special features of the system or the functional form of the potential under consideration, and is possibly related to a hidden dynamical symmetry in the system. It also results in conserved quantities, which are often not easy to identify. Accidental symmetries lead to these additional degeneracies in the discrete energy spectrum. An accidental degeneracy can be due to the fact that the group of the Hamiltonian is not complete. These degeneracies are connected to the existence of bound orbits in classical physics.
==== Examples: Coulomb and Harmonic Oscillator potentials ====
For a particle in a central 1/r potential, the Laplace–Runge–Lenz vector is a conserved quantity resulting from an accidental degeneracy, in addition to the conservation of angular momentum due to rotational invariance.
For a particle moving on a cone under the influence of 1/r and r2 potentials, centred at the tip of the cone, the conserved quantities corresponding to accidental symmetry will be two components of an equivalent of the Runge-Lenz vector, in addition to one component of the angular momentum vector. These quantities generate SU(2) symmetry for both potentials.
==== Example: Particle in a constant magnetic field ====
A particle moving under the influence of a constant magnetic field, undergoing cyclotron motion on a circular orbit is another important example of an accidental symmetry. The symmetry multiplets in this case are the Landau levels which are infinitely degenerate.
== Examples ==
=== The hydrogen atom ===
In atomic physics, the bound states of an electron in a hydrogen atom provide useful examples of degeneracy. In this case, the Hamiltonian commutes with the total orbital angular momentum {\displaystyle {\hat {L}}^{2}}, its component along the z-direction, {\displaystyle {\hat {L}}_{z}}, the total spin angular momentum {\displaystyle {\hat {S}}^{2}} and its z-component {\displaystyle {\hat {S}}_{z}}. The quantum numbers corresponding to these operators are {\displaystyle \ell }, {\displaystyle m_{\ell }}, {\displaystyle s} (always 1/2 for an electron) and {\displaystyle m_{s}} respectively.
The energy levels in the hydrogen atom depend only on the principal quantum number n. For a given n, all the states corresponding to {\displaystyle \ell =0,\ldots ,n-1} have the same energy and are degenerate. Similarly, for given values of n and ℓ, the {\displaystyle (2\ell +1)} states with {\displaystyle m_{\ell }=-\ell ,\ldots ,\ell } are degenerate. The degree of degeneracy of the energy level En is therefore
{\displaystyle \sum _{\ell =0}^{n-1}(2\ell +1)=n^{2},}
which is doubled if the spin degeneracy is included.
The degeneracy with respect to {\displaystyle m_{\ell }} is an essential degeneracy which is present for any central potential, and arises from the absence of a preferred spatial direction. The degeneracy with respect to {\displaystyle \ell } is often described as an accidental degeneracy, but it can be explained in terms of special symmetries of the Schrödinger equation which are only valid for the hydrogen atom, in which the potential energy is given by Coulomb's law.
=== Isotropic three-dimensional harmonic oscillator ===
Consider a spinless particle of mass m moving in three-dimensional space, subject to a central force whose absolute value is proportional to the distance of the particle from the centre of force,
{\displaystyle F=-kr}
It is said to be isotropic since the potential {\displaystyle V(r)} acting on it is rotationally invariant, i.e.
{\displaystyle V(r)={\tfrac {1}{2}}m\omega ^{2}r^{2}}
where {\displaystyle \omega } is the angular frequency given by {\textstyle {\sqrt {k/m}}}.
Since the state space of such a particle is the tensor product of the state spaces associated with the individual one-dimensional wave functions, the time-independent Schrödinger equation for such a system is given by
{\displaystyle -{\frac {\hbar ^{2}}{2m}}\left({\frac {\partial ^{2}\psi }{\partial x^{2}}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}+{\frac {\partial ^{2}\psi }{\partial z^{2}}}\right)+{\frac {1}{2}}{m\omega ^{2}\left(x^{2}+y^{2}+z^{2}\right)\psi }=E\psi }
So, the energy eigenvalues are
{\displaystyle E_{n_{x},n_{y},n_{z}}=\left(n_{x}+n_{y}+n_{z}+{\tfrac {3}{2}}\right)\hbar \omega }
or, equivalently,
{\displaystyle E_{n}=\left(n+{\tfrac {3}{2}}\right)\hbar \omega }
where n is a non-negative integer.
So, the energy levels are degenerate and the degree of degeneracy is equal to the number of different sets {\displaystyle \{n_{x},n_{y},n_{z}\}} satisfying
{\displaystyle n_{x}+n_{y}+n_{z}=n}
The degeneracy of the {\displaystyle n}-th state can be found by considering the distribution of {\displaystyle n} quanta across {\displaystyle n_{x}}, {\displaystyle n_{y}} and {\displaystyle n_{z}}. Having 0 quanta in {\displaystyle n_{x}} gives {\displaystyle n+1} possibilities for the distribution across {\displaystyle n_{y}} and {\displaystyle n_{z}}; having 1 quantum in {\displaystyle n_{x}} gives {\displaystyle n} possibilities, and so on. In general, a given value of {\displaystyle n_{x}} leaves {\displaystyle n-n_{x}+1} possibilities, and summing over all {\displaystyle n_{x}} gives the degeneracy of the {\displaystyle n}-th state,
{\displaystyle \sum _{n_{x}=0}^{n}(n-n_{x}+1)={\frac {(n+1)(n+2)}{2}}}
For the ground state {\displaystyle n=0}, the degeneracy is {\displaystyle 1}, so the state is non-degenerate. For all higher states, the degeneracy is greater than 1, so those states are degenerate.
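The closed form (n + 1)(n + 2)/2 can be verified by direct enumeration of the triples (nx, ny, nz); a short sketch:

```python
def degeneracy(n):
    # Count triples (nx, ny, nz) of non-negative integers with nx+ny+nz = n;
    # once nx and ny are chosen, nz = n - nx - ny is fixed.
    return sum(1 for nx in range(n + 1) for ny in range(n + 1 - nx))

# Matches the closed-form expression for the first few levels
for n in range(20):
    assert degeneracy(n) == (n + 1) * (n + 2) // 2

assert degeneracy(0) == 1   # non-degenerate ground state
assert degeneracy(1) == 3   # first excited level: (1,0,0), (0,1,0), (0,0,1)
```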
== Removing degeneracy ==
The degeneracy in a quantum mechanical system may be removed if the underlying symmetry is broken by an external perturbation. This causes splitting in the degenerate energy levels. This is essentially a splitting of the original irreducible representations into lower-dimensional such representations of the perturbed system.
Mathematically, the splitting due to the application of a small perturbation potential can be calculated using time-independent degenerate perturbation theory. This is an approximation scheme that can be applied to find the solution to the eigenvalue equation for the Hamiltonian H of a quantum system with an applied perturbation, given the solution for the Hamiltonian H0 for the unperturbed system. It involves expanding the eigenvalues and eigenkets of the Hamiltonian H in a perturbation series.
The degenerate eigenstates with a given energy eigenvalue form a vector subspace, but not every basis of eigenstates of this space is a good starting point for perturbation theory, because typically there would not be any eigenstates of the perturbed system near them. The correct basis to choose is one that diagonalizes the perturbation Hamiltonian within the degenerate subspace.
=== Physical examples of removal of degeneracy by a perturbation ===
Some important examples of physical situations where degenerate energy levels of a quantum system are split by the application of an external perturbation are given below.
=== Symmetry breaking in two-level systems ===
A two-level system essentially refers to a physical system having two states whose energies are close together and very different from those of the other states of the system. All calculations for such a system are performed on a two-dimensional subspace of the state space.
If the ground state of a physical system is two-fold degenerate, any coupling between the two corresponding states lowers the energy of the ground state of the system, and makes it more stable.
If {\displaystyle E_{1}} and {\displaystyle E_{2}} are the energy levels of the system, such that {\displaystyle E_{1}=E_{2}=E}, and the perturbation {\displaystyle W} is represented in the two-dimensional subspace as the following 2×2 matrix
{\displaystyle \mathbf {W} ={\begin{bmatrix}0&W_{12}\\[1ex]W_{12}^{*}&0\end{bmatrix}}.}
then the perturbed energies are
{\displaystyle {\begin{aligned}E_{+}&=E+|W_{12}|\\E_{-}&=E-|W_{12}|\end{aligned}}}
Examples of two-state systems in which the degeneracy in energy states is broken by the presence of off-diagonal terms in the Hamiltonian resulting from an internal interaction due to an inherent property of the system include:
Benzene, with two possible dispositions of the three double bonds between neighbouring carbon atoms.
The ammonia molecule, where the nitrogen atom can be either above or below the plane defined by the three hydrogen atoms.
The H2+ molecular ion, in which the electron may be localized around either of the two nuclei.
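The splitting E± = E ± |W12| follows from diagonalizing the perturbed Hamiltonian in the two-dimensional subspace. A numerical sketch with arbitrary illustrative values of E and W12:

```python
import numpy as np

E, W12 = 3.0, 0.4 + 0.3j               # degenerate energy and a complex coupling
H = np.array([[E, W12],
              [np.conj(W12), E]])       # H0 + W restricted to the 2D subspace

levels = np.linalg.eigvalsh(H)          # eigenvalues in ascending order
assert np.allclose(levels, [E - abs(W12), E + abs(W12)])   # E -/+ |W12|
```

With |W12| = 0.5 here, the degenerate level at E = 3.0 splits into 2.5 and 3.5; the lower level illustrates the stabilization of the ground state mentioned above.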
=== Fine-structure splitting ===
The corrections to the Coulomb interaction between the electron and the proton in a hydrogen atom due to relativistic motion and spin–orbit coupling result in breaking the degeneracy in energy levels for different values of ℓ corresponding to a single principal quantum number n.
The perturbation Hamiltonian due to the relativistic correction is given by
{\displaystyle H_{r}=-p^{4}/8m^{3}c^{2}}
where {\displaystyle p} is the momentum operator and {\displaystyle m} is the mass of the electron. The first-order relativistic energy correction in the {\displaystyle |n\ell m\rangle } basis is given by
{\displaystyle E_{r}=\left(-1/8m^{3}c^{2}\right)\left\langle n\ell m\right|p^{4}\left|n\ell m\right\rangle }
Now
{\displaystyle p^{4}=4m^{2}(H^{0}+e^{2}/r)^{2}}
so that
{\displaystyle {\begin{aligned}E_{r}&=-{\frac {1}{2mc^{2}}}\left[E_{n}^{2}+2E_{n}e^{2}\left\langle {\frac {1}{r}}\right\rangle +e^{4}\left\langle {\frac {1}{r^{2}}}\right\rangle \right]\\&=-{\frac {1}{2}}mc^{2}\alpha ^{4}\left[-3/(4n^{4})+1/{n^{3}(\ell +1/2)}\right]\end{aligned}}}
where {\displaystyle \alpha } is the fine-structure constant.
The spin–orbit interaction refers to the interaction between the intrinsic magnetic moment of the electron with the magnetic field experienced by it due to the relative motion with the proton. The interaction Hamiltonian is
{\displaystyle H_{so}=-{\frac {e}{mc}}{\frac {\mathbf {m} \cdot \mathbf {L} }{r^{3}}}={\frac {e^{2}}{m^{2}c^{2}r^{3}}}\mathbf {S} \cdot \mathbf {L} }
which may be written as
H
s
o
=
e
2
4
m
2
c
2
r
3
[
J
2
−
L
2
−
S
2
]
{\displaystyle H_{so}={\frac {e^{2}}{4m^{2}c^{2}r^{3}}}\left[J^{2}-L^{2}-S^{2}\right]}
The first-order energy correction in the {\displaystyle |j,m,\ell ,1/2\rangle } basis, where the perturbation Hamiltonian is diagonal, is given by
{\displaystyle E_{so}={\frac {\hbar ^{2}e^{2}}{4m^{2}c^{2}}}{\frac {j(j+1)-\ell (\ell +1)-{\frac {3}{4}}}{a_{0}^{3}n^{3}\ell (\ell +{\frac {1}{2}})(\ell +1)}}}
where {\displaystyle a_{0}} is the Bohr radius.
The total fine-structure energy shift is given by
{\displaystyle E_{fs}=-{\frac {mc^{2}\alpha ^{4}}{2n^{3}}}\left[1/(j+1/2)-3/4n\right]}
for {\textstyle j=\ell \pm {\tfrac {1}{2}}}.
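Since the shift depends on n and j only, states with the same n split by j. A small sketch evaluating the total fine-structure formula, in units of the electron rest energy mc² (the value of α is an assumed numerical input):

```python
alpha = 1 / 137.035999   # fine-structure constant (approximate value)

def E_fs(n, j):
    # Total fine-structure shift in units of m c^2:
    # E_fs = -(alpha^4 / 2n^3) * [1/(j + 1/2) - 3/(4n)]
    return -(alpha**4 / (2 * n**3)) * (1 / (j + 0.5) - 3 / (4 * n))

# For n = 2 the level splits by j; both shifts are negative,
# and the j = 1/2 states lie below the j = 3/2 states.
assert E_fs(2, 0.5) < E_fs(2, 1.5) < 0

# The bracket gives 5/8 for j = 1/2 and 1/8 for j = 3/2: a ratio of 5
assert abs(E_fs(2, 0.5) / E_fs(2, 1.5) - 5) < 1e-12
```

States sharing n and j (such as 2s1/2 and 2p1/2) remain degenerate at this order.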
=== Zeeman effect ===
The splitting of the energy levels of an atom placed in an external magnetic field, due to the interaction of the magnetic moment {\displaystyle {\vec {m}}} of the atom with the applied field, is known as the Zeeman effect.
Taking into consideration the orbital and spin angular momenta, {\displaystyle \mathbf {L} } and {\displaystyle \mathbf {S} } respectively, of a single electron in the hydrogen atom, the perturbation Hamiltonian is given by
{\displaystyle {\hat {V}}=-(\mathbf {m} _{\ell }+\mathbf {m} _{s})\cdot \mathbf {B} }
where {\displaystyle \mathbf {m} _{\ell }=-e\mathbf {L} /2m} and {\displaystyle \mathbf {m} _{s}=-e\mathbf {S} /m}.
Thus,
{\displaystyle {\hat {V}}={\frac {e}{2m}}(\mathbf {L} +2\mathbf {S} )\cdot \mathbf {B} }
Now, in the case of the weak-field Zeeman effect, when the applied field is weak compared to the internal field, the spin–orbit coupling dominates and {\textstyle \mathbf {L} } and {\textstyle \mathbf {S} } are not separately conserved. The good quantum numbers are n, ℓ, j and mj, and in this basis, the first-order energy correction can be shown to be given by
{\displaystyle E_{z}=-\mu _{B}g_{j}Bm_{j},}
where {\displaystyle \mu _{B}={e\hbar }/2m} is called the Bohr magneton. Thus, depending on the value of {\displaystyle m_{j}}, each degenerate energy level splits into several levels.
In the case of the strong-field Zeeman effect, when the applied field is strong enough that the orbital and spin angular momenta decouple, the good quantum numbers are now n, ℓ, mℓ, and ms. Here, Lz and Sz are conserved, so the perturbation Hamiltonian is given by
{\displaystyle {\hat {V}}=eB(L_{z}+2S_{z})/2m}
assuming the magnetic field to be along the z-direction. So,
{\displaystyle {\hat {V}}=eB(m_{\ell }+2m_{s})/2m}
For each value of mℓ, there are two possible values of ms, {\displaystyle \pm 1/2}.
=== Stark effect ===
The splitting of the energy levels of an atom or molecule when subjected to an external electric field is known as the Stark effect.
For the hydrogen atom, the perturbation Hamiltonian is
{\displaystyle {\hat {H}}_{s}=-|e|Ez}
if the electric field is chosen along the z-direction.
The energy corrections due to the applied field are given by the expectation value of {\displaystyle {\hat {H}}_{s}} in the {\displaystyle |n\ell m\rangle } basis. It can be shown by the selection rules that
{\displaystyle \langle n\ell m_{\ell }|z|n_{1}\ell _{1}m_{\ell 1}\rangle \neq 0}
when {\displaystyle \ell =\ell _{1}\pm 1} and {\displaystyle m_{\ell }=m_{\ell 1}}.
The degeneracy is lifted, in first order, only for certain states obeying the selection rules. The first-order splitting in the energy levels for the degenerate states {\displaystyle |2,0,0\rangle } and {\displaystyle |2,1,0\rangle }, both corresponding to n = 2, is given by
{\displaystyle \Delta E_{2,1,m_{\ell }}=\pm |e|\hbar ^{2}/(m_{e}e^{2})E}.
== See also ==
Density of states
== References ==
== Further reading ==
Cohen-Tannoudji, Claude; Diu, Bernard; Laloë, Franck. Quantum Mechanics. Vol. 1. Hermann. ISBN 978-2-7056-8392-4.
Shankar, Ramamurti (2013). Principles of Quantum Mechanics. Springer. ISBN 978-1-4615-7675-4.
Larson, Ron; Falvo, David C. (30 March 2009). Elementary Linear Algebra, Enhanced Edition. Cengage Learning. pp. 8–. ISBN 978-1-305-17240-1.
Hobson; Riley (27 August 2004). Mathematical Methods For Physics And Engineering (Clpe) 2Ed. Cambridge University Press. ISBN 978-0-521-61296-8.
Hemmer (2005). Kvantemekanikk: P.C. Hemmer. Tapir akademisk forlag. Tillegg 3: supplement to sections 3.1, 3.3, and 3.5. ISBN 978-82-519-2028-5.
Quantum degeneracy in two dimensional systems, Debnarayan Jana, Dept. of Physics, University College of Science and Technology
Al-Hashimi, Munir (2008). Accidental Symmetry in Quantum Physics.
In physics, the Majorana equation is a relativistic wave equation. It is named after the Italian physicist Ettore Majorana, who proposed it in 1937 as a means of describing fermions that are their own antiparticle. Particles corresponding to this equation are termed Majorana particles, although that term now has a more expansive meaning, referring to any (possibly non-relativistic) fermionic particle that is its own anti-particle (and is therefore electrically neutral).
There have been proposals that massive neutrinos are described by Majorana particles; there are various extensions to the Standard Model that enable this. The article on Majorana particles presents status for the experimental searches, including details about neutrinos. This article focuses primarily on the mathematical development of the theory, with attention to its discrete and continuous symmetries. The discrete symmetries are charge conjugation, parity transformation and time reversal; the continuous symmetry is Lorentz invariance.
Charge conjugation plays an outsize role, as it is the key symmetry that allows the Majorana particles to be described as electrically neutral. A particularly remarkable aspect is that electrical neutrality allows several global phases to be freely chosen, one each for the left and right chiral fields. This implies that, without explicit constraints on these phases, the Majorana fields are naturally CP violating. Another aspect of electric neutrality is that the left and right chiral fields can be given distinct masses. That is, electric charge is a Lorentz invariant, and also a constant of motion; whereas chirality is a Lorentz invariant, but is not a constant of motion for massive fields. Electrically neutral fields are thus less constrained than charged fields. Under charge conjugation, the two free global phases appear in the mass terms (as they are Lorentz invariant), and so the Majorana mass is described by a complex matrix, rather than a single number. In short, the discrete symmetries of the Majorana equation are considerably more complicated than those for the Dirac equation, where the electrical charge
{\displaystyle U(1)} symmetry constrains and removes these freedoms.
== Definition ==
The Majorana equation can be written in several distinct forms:
As the Dirac equation written so that the Dirac operator is purely Hermitian, thus giving purely real solutions.
As an operator that relates a four-component spinor to its charge conjugate.
As a 2×2 differential equation acting on a complex two-component spinor, resembling the Weyl equation with a properly Lorentz covariant mass term.
These three forms are equivalent, and can be derived from one another. Each offers slightly different insight into the nature of the equation. The first form emphasises that purely real solutions can be found. The second form clarifies the role of charge conjugation. The third form provides the most direct contact with the representation theory of the Lorentz group.
=== Purely real four-component form ===
The conventional starting point is to state that "the Dirac equation can be written in Hermitian form", when the gamma matrices are taken in the Majorana representation. The Dirac equation is then written as
{\displaystyle \left(\,-i\,{\frac {\partial }{\partial t}}-i\,{\hat {\alpha }}\cdot \nabla +\beta \,m\,\right)\,\psi =0}
with {\displaystyle {\hat {\alpha }}} being purely real 4×4 symmetric matrices, and {\displaystyle \beta } being purely imaginary and skew-symmetric, as required to ensure that the operator (the part inside the parentheses) is Hermitian. In this case, purely real 4‑spinor solutions to the equation can be found; these are the Majorana spinors.
=== Charge-conjugate four-component form ===
The Majorana equation is
{\displaystyle i\,{\partial \!\!\!{\big /}}\psi -m\,\psi _{c}=0~}
with the derivative operator {\displaystyle {\partial \!\!\!{\big /}}} written in Feynman slash notation to include the gamma matrices as well as a summation over the spinor components. The spinor {\textstyle \,\psi _{c}\,} is the charge conjugate of {\textstyle \,\psi \,.}
By construction, charge conjugates are necessarily given by
{\displaystyle \psi _{c}=\eta _{c}\,C\,{\overline {\psi }}^{\mathsf {T}}~}
where {\displaystyle \,(\cdot )^{\mathsf {T}}\,} denotes the transpose, {\displaystyle \,\eta _{c}\,} is an arbitrary phase factor with {\displaystyle \,|\eta _{c}|=1\,,} conventionally taken as {\displaystyle \,\eta _{c}=1\,,} and {\displaystyle \,C\,} is a 4×4 matrix, the charge conjugation matrix. The matrix representation of {\displaystyle \,C\,} depends on the choice of the representation of the gamma matrices. By convention, the conjugate spinor is written as
{\displaystyle {\overline {\psi }}=\psi ^{\dagger }\,\gamma ^{0}~.}
A number of algebraic identities follow from the charge conjugation matrix {\displaystyle C.} One states that in any representation of the gamma matrices, including the Dirac, Weyl, and Majorana representations,
{\displaystyle \,C\,\gamma _{\mu }=-\gamma _{\mu }^{\mathsf {T}}\,C\,}
and so one may write
{\displaystyle \psi _{c}=-\eta _{c}\,\gamma ^{0}\,C\,\psi ^{*}~}
where {\displaystyle \,\psi ^{*}\,} is the complex conjugate of {\displaystyle \,\psi \,.}
The charge conjugation matrix {\displaystyle \,C\,} also has the property that
{\displaystyle C^{-1}=C^{\dagger }=C^{\mathsf {T}}=-C}
in all representations (Dirac, chiral, Majorana). From this, and a fair bit of algebra, one may obtain the equivalent equation:
{\displaystyle i\,{\partial \!\!\!{\big /}}\psi _{c}-m\,\psi =0}
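These matrix identities can be checked numerically. The sketch below works in the Dirac representation and uses the common convention C = iγ²γ⁰ for the charge conjugation matrix in that representation (the overall phase of C is convention-dependent):

```python
import numpy as np

# Dirac-representation gamma matrices, built from the Pauli matrices
I2 = np.eye(2, dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, Z], [Z, -I2]])
gam = [g0] + [np.block([[Z, s], [-s, Z]]) for s in (sx, sy, sz)]

# Charge conjugation matrix in this representation (one common convention)
C = 1j * gam[2] @ g0

# C^{-1} = C^dagger = C^T = -C
assert np.allclose(np.linalg.inv(C), -C)
assert np.allclose(C.conj().T, -C)
assert np.allclose(C.T, -C)

# C gamma^mu = -(gamma^mu)^T C for every mu
# (the identity with lowered indices follows by linearity)
for g in gam:
    assert np.allclose(C @ g, -g.T @ C)
```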
A detailed discussion of the physical interpretation of the matrix {\displaystyle C} as charge conjugation can be found in the article on charge conjugation. In short, it is involved in mapping particles to their antiparticles, which includes, among other things, the reversal of the electric charge. Although {\displaystyle \psi ^{c}} is defined as "the charge conjugate" of {\displaystyle \psi ,} the charge conjugation operator has not one but two eigenvalues. This allows a second spinor, the ELKO spinor, to be defined. This is discussed in greater detail below.
=== Complex two-component form ===
The Majorana operator, {\displaystyle \,\mathrm {D} _{\text{L}}\,,} is defined as
{\displaystyle \mathrm {D} _{\text{L}}\equiv i\,{\overline {\sigma }}^{\mu }\,\partial _{\mu }+\eta \,m\,\omega \,K}
where
{\displaystyle {\overline {\sigma }}^{\mu }={\begin{bmatrix}\sigma ^{0}&-\sigma ^{1}&-\sigma ^{2}&-\sigma ^{3}\end{bmatrix}}={\begin{bmatrix}I_{2}&-\sigma _{\text{x}}&-\sigma _{\text{y}}&-\sigma _{\text{z}}\end{bmatrix}}}
is a vector whose components are the 2×2 identity matrix {\displaystyle \,I_{2}\,} for {\displaystyle \,\mu =0\,} and (minus) the Pauli matrices for {\displaystyle \,\mu \in \{1,\,2,\,3\}\,.}
The {\displaystyle \,\eta \,} is an arbitrary phase factor, {\displaystyle \,|\eta |=1\,,} typically taken to be one: {\displaystyle \,\eta =1\,.}
The {\displaystyle \,\omega \,} is a 2×2 matrix that can be interpreted as the symplectic form for the symplectic group {\displaystyle \,\operatorname {Sp} (2,\mathbb {C} )\,,} which is a double covering of the Lorentz group. It is
{\displaystyle \omega =i\,\sigma _{2}={\begin{bmatrix}0&1\\-1&0\end{bmatrix}}~,}
which happens to be isomorphic to the imaginary unit "i" (i.e. {\displaystyle \omega ^{2}=-I\,} and {\displaystyle \,a\,I+b\,\omega \cong a+b\,i\in \mathbb {C} \,} for {\displaystyle \,a,b\in \mathbb {R} }), with the matrix transpose being the analog of complex conjugation.
Finally, the {\displaystyle \,K\,} is a short-hand reminder to take the complex conjugate. The Majorana equation for a left-handed complex-valued two-component spinor {\displaystyle \,\psi _{\text{L}}\,} is then
{\displaystyle \mathrm {D} _{\text{L}}\psi _{\text{L}}=0}
or, equivalently,
{\displaystyle i\,{\overline {\sigma }}^{\mu }\,\partial _{\mu }\psi _{\text{L}}(x)+\eta \,m\,\omega \,\psi _{\text{L}}^{*}(x)=0}
with
ψ
L
∗
(
x
)
{\displaystyle \,\psi _{\text{L}}^{*}(x)\,}
the complex conjugate of
ψ
L
(
x
)
.
{\displaystyle \,\psi _{\text{L}}(x)\,.}
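The two algebraic facts about ω quoted above can be spot-checked numerically. The following is an illustrative sketch (not part of the article), verifying that ω² = −I and that the map a + b·i ↦ a·I + b·ω respects multiplication:

```python
# Numeric check of two claims above: ω² = −I, and a·I + b·ω behaves
# exactly like the complex number a + b·i under multiplication.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

omega = [[0, 1], [-1, 0]]          # ω = iσ₂ as a real 2×2 matrix

# ω² = −I
assert matmul(omega, omega) == [[-1, 0], [0, -1]]

def emb(a, b):
    """Embed a + b·i as the real matrix a·I + b·ω."""
    return [[a, b], [-b, a]]

# The embedding is a ring homomorphism: products match complex products.
z, w = complex(2, 3), complex(-1, 4)
prod = z * w
assert matmul(emb(z.real, z.imag), emb(w.real, w.imag)) == emb(prod.real, prod.imag)
print("omega^2 = -I and aI + b*omega ~ a + bi verified")
```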
The subscript L is used throughout this section to denote a left-handed chiral spinor; under a parity transformation, this can be taken to a right-handed spinor, and so one also has a right-handed form of the equation. This applies to the four-component equation as well; further details are presented below.
== Key ideas ==
Some of the properties of the Majorana equation, its solution and its Lagrangian formulation are summarized here.
The Majorana equation is similar to the Dirac equation, in the sense that it involves four-component spinors, gamma matrices, and mass terms, but includes the charge conjugate {\textstyle \psi _{c}} of a spinor {\textstyle \psi }. In contrast, the Weyl equation describes a massless two-component spinor.
Solutions to the Majorana equation can be interpreted as electrically neutral particles that are their own anti-particle. By convention, the charge conjugation operator takes particles to their anti-particles, and so the Majorana spinor is conventionally defined as the solution where {\displaystyle \psi =\psi _{c}.} That is, the Majorana spinor is "its own antiparticle". Insofar as charge conjugation takes an electrically charged particle to its anti-particle with opposite charge, one must conclude that the Majorana spinor is electrically neutral.
The Majorana equation is Lorentz covariant, and a variety of Lorentz scalars can be constructed from its spinors. This allows several distinct Lagrangians to be constructed for Majorana fields.
When the Lagrangian is expressed in terms of two-component left and right chiral spinors, it may contain three distinct mass terms: left and right Majorana mass terms, and a Dirac mass term. These manifest physically as two distinct masses; this is the key idea of the seesaw mechanism for describing low-mass neutrinos with a left-handed coupling to the Standard model, with the right-handed component corresponding to a sterile neutrino at GUT-scale masses.
The discrete symmetries of C, P and T conjugation are intimately controlled by a freely chosen phase factor on the charge conjugation operator. This manifests itself as distinct complex phases on the mass terms. This allows both CP-symmetric and CP-violating Lagrangians to be written.
The Majorana fields are CPT invariant, but the invariance is, in a sense, "freer" than it is for charged particles. This is because charge is necessarily a Lorentz-invariant property, and is thus constrained for charged fields. The neutral Majorana fields are not constrained in this way, and can mix.
== Two-component Majorana equation ==
The Majorana equation can be written both in terms of a real four-component spinor, and as a complex two-component spinor. Both can be constructed from the Weyl equation, with the addition of a properly Lorentz-covariant mass term. This section provides an explicit construction and articulation.
=== Weyl equation ===
The Weyl equation describes the time evolution of a massless complex-valued two-component spinor. It is conventionally written as
{\displaystyle \sigma ^{\mu }\partial _{\mu }\psi =0}
Written out explicitly, it is
{\displaystyle I_{2}{\frac {\partial \psi }{\partial t}}+\sigma _{x}{\frac {\partial \psi }{\partial x}}+\sigma _{y}{\frac {\partial \psi }{\partial y}}+\sigma _{z}{\frac {\partial \psi }{\partial z}}=0}
The Pauli four-vector is
{\displaystyle \sigma ^{\mu }={\begin{pmatrix}\sigma ^{0}&\sigma ^{1}&\sigma ^{2}&\sigma ^{3}\end{pmatrix}}={\begin{pmatrix}I_{2}&\sigma _{x}&\sigma _{y}&\sigma _{z}\end{pmatrix}}}
that is, a vector whose components are the 2 × 2 identity matrix {\displaystyle I_{2}} for μ = 0 and the Pauli matrices for μ = 1, 2, 3. Under the parity transformation {\displaystyle {\vec {x}}\to {\vec {x}}^{\prime }=-{\vec {x}}} one obtains a dual equation
{\displaystyle {\bar {\sigma }}^{\mu }\partial _{\mu }\psi =0}
where
{\displaystyle {\bar {\sigma }}^{\mu }={\begin{pmatrix}I_{2}&-\sigma _{x}&-\sigma _{y}&-\sigma _{z}\end{pmatrix}}}
These are two distinct forms of the Weyl equation; their solutions are distinct as well. It can be shown that the solutions have left-handed and right-handed helicity, and thus chirality. It is conventional to label these two distinct forms explicitly, thus:
{\displaystyle \sigma ^{\mu }\partial _{\mu }\psi _{\rm {R}}=0\qquad {\bar {\sigma }}^{\mu }\partial _{\mu }\psi _{\rm {L}}=0~.}
=== Lorentz invariance ===
The Weyl equation describes a massless particle; the Majorana equation adds a mass term. The mass must be introduced in a Lorentz invariant fashion. This is achieved by observing that the special linear group
{\displaystyle \operatorname {SL} (2,\mathbb {C} )} is isomorphic to the symplectic group {\displaystyle \operatorname {Sp} (2,\mathbb {C} ).} Both of these groups are double covers of the Lorentz group {\displaystyle \operatorname {SO} (1,3).} The Lorentz invariance of the derivative term (from the Weyl equation) is conventionally worded in terms of the action of the group {\displaystyle \operatorname {SL} (2,\mathbb {C} )} on spinors, whereas the Lorentz invariance of the mass term requires invocation of the defining relation for the symplectic group.
The double-covering of the Lorentz group is given by
{\displaystyle {\overline {\sigma }}_{\mu }{\Lambda ^{\mu }}_{\nu }=S{\overline {\sigma }}_{\nu }S^{\dagger }}
where {\displaystyle \Lambda \in \operatorname {SO} (1,3)} and {\displaystyle S\in \operatorname {SL} (2,\mathbb {C} )} and {\displaystyle S^{\dagger }} is the Hermitian transpose. This is used to relate the transformation properties of the differentials under a Lorentz transformation {\displaystyle x\mapsto x^{\prime }=\Lambda x} to the transformation properties of the spinors.
The symplectic group {\displaystyle \operatorname {Sp} (2,\mathbb {C} )} is defined as the set of all complex 2×2 matrices {\displaystyle S} that satisfy
{\displaystyle \omega ^{-1}S^{\textsf {T}}\omega =S^{-1}}
where
{\displaystyle \omega =i\sigma _{2}={\begin{bmatrix}0&1\\-1&0\end{bmatrix}}}
is a skew-symmetric matrix. It is used to define a symplectic bilinear form on {\displaystyle \mathbb {C} ^{2}.} Writing a pair of arbitrary two-vectors {\displaystyle u,v\in \mathbb {C} ^{2}} as
{\displaystyle u={\begin{pmatrix}u_{1}\\u_{2}\end{pmatrix}}\qquad v={\begin{pmatrix}v_{1}\\v_{2}\end{pmatrix}}}
the symplectic product is
{\displaystyle \langle u,v\rangle =-\langle v,u\rangle =u_{1}v_{2}-u_{2}v_{1}=u^{\textsf {T}}\omega v}
where {\displaystyle u^{\textsf {T}}} is the transpose of {\displaystyle u~.} This form is invariant under Lorentz transformations, in that
{\displaystyle \langle u,v\rangle =\langle Su,Sv\rangle }
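The invariance of the symplectic product can be illustrated numerically. In the 2×2 case, Sp(2,ℂ) coincides with SL(2,ℂ), so any complex matrix rescaled to determinant 1 serves as a test element; the following sketch (not from the article) checks ⟨Su, Sv⟩ = ⟨u, v⟩:

```python
# Sanity check: for any S with det S = 1 (recall Sp(2,C) ≅ SL(2,C)),
# the symplectic product ⟨u,v⟩ = u₁v₂ − u₂v₁ is preserved.
from cmath import sqrt

def symp(u, v):
    return u[0] * v[1] - u[1] * v[0]

def apply(S, u):
    return [S[0][0] * u[0] + S[0][1] * u[1],
            S[1][0] * u[0] + S[1][1] * u[1]]

# An arbitrary complex 2×2 matrix, rescaled to have determinant 1.
A = [[2 + 1j, 0.5 - 1j], [1j, 1 - 0.5j]]
detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]
s = sqrt(detA)
S = [[a / s for a in row] for row in A]   # det S = 1, so S ∈ SL(2,C) ≅ Sp(2,C)

u, v = [1 + 2j, -1j], [0.5, 3 - 1j]
assert abs(symp(apply(S, u), apply(S, v)) - symp(u, v)) < 1e-12
print("symplectic product is SL(2,C)-invariant")
```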
The skew matrix takes the Pauli matrices to minus their transpose:
{\displaystyle \omega \sigma _{k}\omega ^{-1}=-\sigma _{k}^{\textsf {T}}}
for {\displaystyle k=1,2,3.}
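This identity is easy to confirm by direct matrix multiplication; the following is a small numeric sketch (not from the article):

```python
# Direct numeric check of ω σ_k ω⁻¹ = −σ_kᵀ for the three Pauli matrices.
def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def T(A):   # transpose
    return [[A[j][i] for j in range(2)] for i in range(2)]

def neg(A):
    return [[-x for x in row] for row in A]

omega = [[0, 1], [-1, 0]]
omega_inv = [[0, -1], [1, 0]]       # ω⁻¹ = −ω, since ω² = −I

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

for s in (sx, sy, sz):
    assert mm(mm(omega, s), omega_inv) == neg(T(s))
print("omega sigma_k omega^-1 = -sigma_k^T holds for k = 1, 2, 3")
```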
The skew matrix can be interpreted as the product of a parity transformation and a transposition acting on two-spinors. However, as will be emphasized in a later section, it can also be interpreted as one of the components of the charge conjugation operator, the other component being complex conjugation. Applying it to the Lorentz transformation yields
{\displaystyle \sigma _{\mu }{\Lambda ^{\mu }}_{\nu }=\left(S^{-1}\right)^{\dagger }\sigma _{\nu }S^{-1}}
These two variants describe the covariance properties of the differentials acting on the left and right spinors, respectively.
=== Differentials ===
Under the Lorentz transformation {\displaystyle x\mapsto x^{\prime }=\Lambda x} the differential term transforms as
{\displaystyle \sigma ^{\mu }{\frac {\partial }{\partial x^{\mu }}}\psi _{\rm {R}}(x)\mapsto \sigma ^{\mu }{\frac {\partial }{\partial x^{\prime \mu }}}\psi _{\rm {R}}(x^{\prime })=\left(S^{-1}\right)^{\dagger }\sigma ^{\mu }{\frac {\partial }{\partial x^{\mu }}}\psi _{\rm {R}}(x)}
provided that the right-handed field transforms as
{\displaystyle \psi _{\rm {R}}(x)\mapsto \psi _{\rm {R}}^{\prime }(x^{\prime })=S\psi _{\rm {R}}(x)}
Similarly, the left-handed differential transforms as
{\displaystyle {\overline {\sigma }}^{\mu }{\frac {\partial }{\partial x^{\mu }}}\psi _{\rm {L}}(x)\mapsto {\overline {\sigma }}^{\mu }{\frac {\partial }{\partial x^{\prime \mu }}}\psi _{\rm {L}}(x^{\prime })=S{\overline {\sigma }}^{\mu }{\frac {\partial }{\partial x^{\mu }}}\psi _{\rm {L}}(x)}
provided that the left-handed spinor transforms as
{\displaystyle \psi _{\rm {L}}(x)\mapsto \psi _{\rm {L}}^{\prime }(x^{\prime })=\left(S^{\dagger }\right)^{-1}\psi _{\rm {L}}(x)}
=== Mass term ===
The complex conjugate of the right-handed spinor field transforms as
{\displaystyle \psi _{\rm {R}}^{*}(x)\mapsto \psi _{\rm {R}}^{\prime *}(x^{\prime })=S^{*}\psi _{\rm {R}}^{*}(x)}
The defining relationship for {\displaystyle \operatorname {Sp} (2,\mathbb {C} )} can be rewritten as
{\displaystyle \omega S^{*}=\left(S^{\dagger }\right)^{-1}\omega \,.}
From this, one concludes that the skew-complex field transforms as
{\displaystyle m\omega \psi _{\rm {R}}^{*}(x)\mapsto m\omega \psi _{\rm {R}}^{\prime *}(x^{\prime })=\left(S^{\dagger }\right)^{-1}m\omega \psi _{\rm {R}}^{*}(x)}
This is fully compatible with the covariance property of the differential. Taking {\displaystyle \eta =e^{i\phi }} to be an arbitrary complex phase factor, the linear combination
{\displaystyle i\sigma ^{\mu }\partial _{\mu }\psi _{\rm {R}}(x)+\eta m\omega \psi _{\rm {R}}^{*}(x)}
transforms in a covariant fashion. Setting this to zero gives the complex two-component Majorana equation for the right-handed field. Similarly, the left-chiral Majorana equation (including an arbitrary phase factor {\displaystyle \zeta }) is
{\displaystyle i{\overline {\sigma }}^{\mu }\partial _{\mu }\psi _{\rm {L}}(x)+\zeta m\omega \psi _{\rm {L}}^{*}(x)=0}
The left and right chiral versions are related by a parity transformation. As shown below, these square to the Klein–Gordon operator only if {\displaystyle \eta =\zeta .} The skew complex conjugate {\displaystyle \omega \psi ^{*}=i\sigma ^{2}\psi ^{*}} can be recognized as the charge conjugate form of {\displaystyle \psi ~;} this is articulated in greater detail below. Thus, the Majorana equation can be read as an equation that connects a spinor to its charge-conjugate form.
=== Left and right Majorana operators ===
Define a pair of operators, the Majorana operators,
{\displaystyle {\begin{aligned}\mathrm {D} _{\rm {L}}&=i{\overline {\sigma }}^{\mu }\partial _{\mu }+\zeta m\omega K&\mathrm {D} _{\rm {R}}&=i\sigma ^{\mu }\partial _{\mu }+\eta m\omega K\end{aligned}}}
where {\displaystyle K} is a short-hand reminder to take the complex conjugate. Under Lorentz transformations, these transform as
{\displaystyle {\begin{aligned}\mathrm {D} _{\rm {L}}\mapsto \mathrm {D} _{\rm {L}}^{\prime }&=S\mathrm {D} _{\rm {L}}S^{\dagger }&\mathrm {D} _{\rm {R}}\mapsto \mathrm {D} _{\rm {R}}^{\prime }&=\left(S^{\dagger }\right)^{-1}\mathrm {D} _{\rm {R}}S^{-1}\end{aligned}}}
whereas the Weyl spinors transform as
{\displaystyle {\begin{aligned}\psi _{\rm {L}}\mapsto \psi _{\rm {L}}^{\prime }&=\left(S^{\dagger }\right)^{-1}\psi _{\rm {L}}&\psi _{\rm {R}}\mapsto \psi _{\rm {R}}^{\prime }&=S\psi _{\rm {R}}\end{aligned}}}
just as above. Thus, the matched combinations of these are Lorentz covariant, and one may take
{\displaystyle {\begin{aligned}\mathrm {D} _{\rm {L}}\psi _{\rm {L}}&=0&\mathrm {D} _{\rm {R}}\psi _{\rm {R}}&=0\end{aligned}}}
as a pair of complex 2-spinor Majorana equations.
The products {\displaystyle \mathrm {D} _{\rm {L}}\mathrm {D} _{\rm {R}}} and {\displaystyle \mathrm {D} _{\rm {R}}\mathrm {D} _{\rm {L}}} are both Lorentz covariant. The product is explicitly
{\displaystyle \mathrm {D} _{\rm {R}}\mathrm {D} _{\rm {L}}=\left(i\sigma ^{\mu }\partial _{\mu }+\eta m\omega K\right)\left(i{\overline {\sigma }}^{\mu }\partial _{\mu }+\zeta m\omega K\right)=-\left(\partial _{t}^{2}-{\vec {\nabla }}\cdot {\vec {\nabla }}+\eta \zeta ^{*}m^{2}\right)=-\left(\square +\eta \zeta ^{*}m^{2}\right)}
Verifying this requires keeping in mind that {\displaystyle \omega ^{2}=-1} and that {\displaystyle Ki=-iK~.} The RHS reduces to the Klein–Gordon operator provided that {\displaystyle \eta \zeta ^{*}=1}, that is, {\displaystyle \eta =\zeta ~.}
These two Majorana operators are thus "square roots" of the Klein–Gordon operator.
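The antilinear bookkeeping behind the mass-squared term can be checked numerically. The sketch below (illustrative, not from the article) composes the two mass operators and confirms (η m ω K)(ζ m ω K) ψ = −η ζ* m² ψ, which is the m² part of the product above:

```python
# The mass terms are antilinear (they contain K). Check that composing the
# two mass operators gives (η m ω K)(ζ m ω K) ψ = −η ζ* m² ψ.
import cmath

omega = [[0, 1], [-1, 0]]

def mass_op(phase, m, psi):
    """Apply  phase · m · ω · K  to a 2-component spinor psi."""
    c = [z.conjugate() for z in psi]              # K: complex conjugation
    return [phase * m * (omega[i][0] * c[0] + omega[i][1] * c[1])
            for i in range(2)]

m = 2.5
eta  = cmath.exp(0.7j)    # arbitrary phases with |η| = |ζ| = 1
zeta = cmath.exp(0.2j)
psi = [1 + 2j, -0.5 + 1j]

lhs = mass_op(eta, m, mass_op(zeta, m, psi))
rhs = [-eta * zeta.conjugate() * m**2 * z for z in psi]
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
print("(eta m omega K)(zeta m omega K) = -eta zeta* m^2")
```

Note that the ζ* (rather than ζ) appears precisely because K anticommutes with i, which is why the Klein–Gordon reduction requires ηζ* = 1.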
== Four-component Majorana equation ==
The real four-component version of the Majorana equation can be constructed from the complex two-component equation as follows. Given the complex field
{\displaystyle \psi _{\rm {L}}} satisfying {\displaystyle \mathrm {D} _{\rm {L}}\psi _{\rm {L}}=0} as above, define
{\displaystyle \chi _{\rm {R}}\equiv -\eta \omega \psi _{\rm {L}}^{*}}
Using the algebraic machinery given above, it is not hard to show that
{\displaystyle \left(i\sigma ^{\mu }\partial _{\mu }-\eta m\omega K\right)\chi _{\rm {R}}=0}
Defining a conjugate operator
{\displaystyle \delta _{\rm {R}}=i\sigma ^{\mu }\partial _{\mu }-\eta m\omega K}
the four-component Majorana equation is then
{\displaystyle \left(\mathrm {D} _{\rm {L}}\oplus \delta _{\rm {R}}\right)\left(\psi _{\rm {L}}\oplus \chi _{\rm {R}}\right)=0}
Writing this out in detail, one has
{\displaystyle \mathrm {D} _{\rm {L}}\oplus \delta _{\rm {R}}={\begin{bmatrix}\mathrm {D} _{\rm {L}}&0\\0&\delta _{\rm {R}}\end{bmatrix}}=i{\begin{bmatrix}I&0\\0&I\end{bmatrix}}\partial _{t}+i{\begin{bmatrix}-\sigma ^{k}&0\\0&\sigma ^{k}\end{bmatrix}}\nabla _{k}+m{\begin{bmatrix}\eta \omega K&0\\0&-\eta \omega K\end{bmatrix}}}
Multiplying on the left by
{\displaystyle \beta =\gamma ^{0}={\begin{bmatrix}0&I\\I&0\end{bmatrix}}}
brings the above into a matrix form wherein the gamma matrices in the chiral representation can be recognized. This is
{\displaystyle \beta \left(\mathrm {D} _{\rm {L}}\oplus \delta _{\rm {R}}\right)={\begin{bmatrix}0&\delta _{\rm {R}}\\\mathrm {D} _{\rm {L}}&0\end{bmatrix}}=i\beta \partial _{t}+i{\begin{bmatrix}0&\sigma ^{k}\\-\sigma ^{k}&0\end{bmatrix}}\nabla _{k}-m{\begin{bmatrix}0&\eta \omega K\\-\eta \omega K&0\end{bmatrix}}}
That is,
{\displaystyle \beta \left(\mathrm {D} _{\rm {L}}\oplus \delta _{\rm {R}}\right)=i\gamma ^{\mu }\partial _{\mu }-m{\begin{bmatrix}0&\eta \omega K\\-\eta \omega K&0\end{bmatrix}}}
Applying this to the 4-spinor
{\displaystyle \psi _{\rm {L}}\oplus \chi _{\rm {R}}={\begin{pmatrix}\psi _{\rm {L}}\\\chi _{\rm {R}}\end{pmatrix}}={\begin{pmatrix}\psi _{\rm {L}}\\-\eta \omega \psi _{\rm {L}}^{*}\end{pmatrix}}}
and recalling that {\displaystyle \omega ^{2}=-1} one finds that the spinor is an eigenstate of the mass term,
{\displaystyle {\begin{bmatrix}0&\eta \omega K\\-\eta \omega K&0\end{bmatrix}}{\begin{pmatrix}\psi _{\rm {L}}\\-\eta \omega \psi _{\rm {L}}^{*}\end{pmatrix}}={\begin{pmatrix}\psi _{\rm {L}}\\-\eta \omega \psi _{\rm {L}}^{*}\end{pmatrix}}}
and so, for this particular spinor, the four-component Majorana equation reduces to the Dirac equation
{\displaystyle \left(i\gamma ^{\mu }\partial _{\mu }-m\right){\begin{pmatrix}\psi _{\rm {L}}\\-\eta \omega \psi _{\rm {L}}^{*}\end{pmatrix}}=0}
The skew matrix can be identified with the charge conjugation operator (in the Weyl basis). Explicitly, this is
{\displaystyle {\mathsf {C}}={\begin{bmatrix}0&\eta \omega K\\-\eta \omega K&0\end{bmatrix}}}
Given an arbitrary four-component spinor {\displaystyle \psi ~,} its charge conjugate is
{\displaystyle {\mathsf {C}}\psi =\psi ^{c}=\eta C{\overline {\psi }}^{\textsf {T}}}
with {\displaystyle C} an ordinary 4×4 matrix, having a form explicitly given in the article on gamma matrices. In conclusion, the 4-component Majorana equation can be written as
{\displaystyle {\begin{aligned}0&=\left(i\gamma ^{\mu }\partial _{\mu }-m{\mathsf {C}}\right)\psi \\&=i\gamma ^{\mu }\partial _{\mu }\psi -m\psi ^{c}\end{aligned}}}
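The eigenstate property of the mass term can be confirmed directly. The following numeric sketch (illustrative, not from the article) builds χ_R = −ηωψ_L* from an arbitrary ψ_L and applies the antilinear block matrix:

```python
# Check that the 4-spinor (ψ_L, −ηωψ_L*) is a +1 eigenstate of the
# antilinear mass matrix [[0, ηωK], [−ηωK, 0]].
import cmath

def omega_apply(u):                       # ω = [[0,1],[-1,0]]
    return [u[1], -u[0]]

def conj(u):
    return [z.conjugate() for z in u]

eta = cmath.exp(0.3j)                     # arbitrary phase, |η| = 1
psi_L = [1 + 1j, 2 - 0.5j]
chi_R = [-eta * z for z in omega_apply(conj(psi_L))]    # χ_R = −ηωψ_L*

# Apply the block matrix: top ← ηωK·(bottom block), bottom ← −ηωK·(top block)
top    = [eta * z for z in omega_apply(conj(chi_R))]
bottom = [-eta * z for z in omega_apply(conj(psi_L))]

assert all(abs(a - b) < 1e-12 for a, b in zip(top, psi_L))
assert all(abs(a - b) < 1e-12 for a, b in zip(bottom, chi_R))
print("(psi_L, -eta omega psi_L*) is a +1 eigenstate of the mass term")
```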
== Charge conjugation and parity ==
The charge conjugation operator appears directly in the 4-component version of the Majorana equation. When the spinor field is a charge conjugate of itself, that is, when
{\displaystyle \psi ^{c}=\psi ,} then the Majorana equation reduces to the Dirac equation, and any solution can be interpreted as describing an electrically neutral field. However, the charge conjugation operator has not one, but two distinct eigenstates, one of which is the ELKO spinor; it does not solve the Majorana equation, but rather a sign-flipped version of it.
The charge conjugation operator {\displaystyle {\mathsf {C}}} for a four-component spinor is defined as
{\displaystyle {\mathsf {C}}\psi =\psi _{c}=\eta C\left({\overline {\psi }}\right)^{\textsf {T}}}
A general discussion of the physical interpretation of this operator in terms of electrical charge is given in the article on charge conjugation. Additional discussions are provided by Bjorken & Drell or Itzykson & Zuber. In more abstract terms, it is the spinorial equivalent of complex conjugation of the {\displaystyle U(1)} coupling of the electromagnetic field. This can be seen as follows. If one has a single, real scalar field, it cannot couple to electromagnetism; however, a pair of real scalar fields, arranged as a complex number, can. For scalar fields, charge conjugation is the same as complex conjugation. The discrete symmetries of the {\displaystyle U(1)} gauge theory follow from the "trivial" observation that
{\displaystyle *:U(1)\to U(1)\quad e^{i\phi }\mapsto e^{-i\phi }}
is an automorphism of {\displaystyle U(1).} For spinorial fields, the situation is more confusing. Roughly speaking, however, one can say that the Majorana field is electrically neutral, and that taking an appropriate combination of two Majorana fields can be interpreted as a single electrically charged Dirac field. The charge conjugation operator given above corresponds to the automorphism of {\displaystyle U(1).}
In the above, {\displaystyle C} is a 4×4 matrix, given in the article on the gamma matrices. Its explicit form is representation-dependent. The operator {\displaystyle {\mathsf {C}}} cannot be written as a 4×4 matrix, as it takes the complex conjugate of {\displaystyle \psi }, and complex conjugation cannot be achieved with a complex 4×4 matrix. It can be written as a real 8×8 matrix, presuming one also writes {\displaystyle \psi } as a purely real 8-component spinor. Letting {\displaystyle K} stand for complex conjugation, so that {\displaystyle K(x+iy)=x-iy,} one can then write, for four-component spinors,
{\displaystyle {\mathsf {C}}=-\eta \gamma ^{0}CK}
It is not hard to show that {\displaystyle {\mathsf {C}}^{2}=1} and that {\displaystyle {\mathsf {C}}\gamma ^{\mu }{\mathsf {C}}=-\gamma ^{\mu }~.}
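Both identities can be verified numerically in the Weyl basis, where 𝖢ψ = Mψ* with M the block matrix [[0, ηω], [−ηω, 0]] given below. The sketch takes η = 1 for concreteness (illustrative, not from the article); with 𝖢 antilinear, 𝖢² = M M* and 𝖢γ^μ𝖢 = M γ^μ* M* as ordinary matrix products:

```python
# Verify C^2 = 1 and C gamma^mu C = -gamma^mu, with Cψ = Mψ* and η = 1.
def mm(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def conjm(A):
    return [[complex(z).conjugate() for z in row] for row in A]

def close(A, B):
    return all(abs(a - b) < 1e-12 for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# Chiral (Weyl) basis gamma matrices: γ⁰ = [[0,I],[I,0]], γᵏ = [[0,σᵏ],[−σᵏ,0]]
g0 = [[0,0,1,0],[0,0,0,1],[1,0,0,0],[0,1,0,0]]
g1 = [[0,0,0,1],[0,0,1,0],[0,-1,0,0],[-1,0,0,0]]
g2 = [[0,0,0,-1j],[0,0,1j,0],[0,1j,0,0],[-1j,0,0,0]]
g3 = [[0,0,1,0],[0,0,0,-1],[-1,0,0,0],[0,1,0,0]]

# M = [[0, ω], [−ω, 0]] with ω = [[0,1],[-1,0]] (η = 1, so M is real)
M = [[0,0,0,1],[0,0,-1,0],[0,-1,0,0],[1,0,0,0]]
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]

# C²ψ = M(Mψ*)* = (M M*)ψ, so C² = 1 amounts to M M* = I
assert close(mm(M, conjm(M)), I4)

# (CγC)ψ = M(γMψ*)* = (M γ* M*)ψ, so check M γ* M* = −γ for each γ^μ
for g in (g0, g1, g2, g3):
    lhs = mm(M, mm(conjm(g), conjm(M)))
    neg = [[-z for z in row] for row in g]
    assert close(lhs, neg)
print("C^2 = 1 and C gamma^mu C = -gamma^mu hold in the Weyl basis")
```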
It follows from the first identity that {\displaystyle {\mathsf {C}}} has two eigenvalues, which may be written as
{\displaystyle {\mathsf {C}}\psi ^{(\pm )}=\pm \psi ^{(\pm )}}
The eigenvectors are readily found in the Weyl basis. From the above, in this basis, {\displaystyle {\mathsf {C}}} is explicitly
{\displaystyle {\mathsf {C}}={\begin{bmatrix}0&\eta \omega K\\-\eta \omega K&0\end{bmatrix}}}
and thus
{\displaystyle \psi _{\text{Weyl}}^{(\pm )}={\begin{pmatrix}\psi _{\rm {L}}\\\mp \eta \omega \psi _{\rm {L}}^{*}\end{pmatrix}}}
Both eigenvectors are clearly solutions to the Majorana equation. However, only the positive eigenvector is a solution to the Dirac equation:
{\displaystyle 0=\left(i\gamma ^{\mu }\partial _{\mu }-m{\mathsf {C}}\right)\psi ^{(+)}=\left(i\gamma ^{\mu }\partial _{\mu }-m\right)\psi ^{(+)}}
The negative eigenvector "doesn't work": it has the incorrect sign on the Dirac mass term. It still solves the Klein–Gordon equation, however. The negative eigenvector is termed the ELKO spinor.
=== Parity ===
Under parity, the left-handed spinors transform to right-handed spinors. The two eigenvectors of the charge conjugation operator, again in the Weyl basis, are
{\displaystyle \psi _{\rm {{R},{\text{Weyl}}}}^{(\pm )}={\begin{pmatrix}\pm \eta \omega \psi _{\rm {R}}^{*}\\\psi _{\rm {R}}\end{pmatrix}}}
As before, both solve the four-component Majorana equation, but only one also solves the Dirac equation. This can be shown by constructing the parity-dual four-component equation. This takes the form
{\displaystyle \beta \left(\delta _{\rm {L}}\oplus \mathrm {D} _{\rm {R}}\right)=i\gamma ^{\mu }\partial _{\mu }+m{\mathsf {C}}}
where
{\displaystyle \delta _{\rm {L}}=i{\overline {\sigma }}^{\mu }\partial _{\mu }-\eta m\omega K}
Given the two-component spinor {\displaystyle \psi _{\rm {R}}} define its conjugate as {\displaystyle \chi _{\rm {L}}=-\eta \omega \psi _{\rm {R}}^{*}.} It is not hard to show that
{\displaystyle \mathrm {D} _{\rm {R}}\psi _{\rm {R}}=-\eta \omega (\delta _{\rm {L}}\chi _{\rm {L}})}
and that therefore, if {\displaystyle \mathrm {D} _{\rm {R}}\psi _{\rm {R}}=0} then also {\displaystyle \delta _{\rm {L}}\chi _{\rm {L}}=0} and therefore that
{\displaystyle 0=\left(\delta _{\rm {L}}\oplus \mathrm {D} _{\rm {R}}\right)\left(\chi _{\rm {L}}\oplus \psi _{\rm {R}}\right)}
or equivalently
{\displaystyle 0=(i\gamma ^{\mu }\partial _{\mu }+m{\mathsf {C}}){\begin{pmatrix}\chi _{\rm {L}}\\\psi _{\rm {R}}\end{pmatrix}}}
This works, because
{\displaystyle {\mathsf {C}}(\chi _{\rm {L}}\oplus \psi _{\rm {R}})=-(\chi _{\rm {L}}\oplus \psi _{\rm {R}})}
and so this reduces to the Dirac equation for
{\displaystyle \psi _{{\rm {R}},{\text{Weyl}}}^{(-)}=\chi _{\rm {L}}\oplus \psi _{\rm {R}}={\begin{pmatrix}\chi _{\rm {L}}\\\psi _{\rm {R}}\end{pmatrix}}}
To conclude, and reiterate, the Majorana equation is
{\displaystyle 0=\left(i\gamma ^{\mu }\partial _{\mu }-m{\mathsf {C}}\right)\psi =i\gamma ^{\mu }\partial _{\mu }\psi -m\psi _{c}}
It has four inequivalent, linearly independent solutions, {\displaystyle \psi _{\rm {L,R}}^{(\pm )}.} Of these, only two are also solutions to the Dirac equation: namely {\displaystyle \psi _{\rm {L}}^{(+)}} and {\displaystyle \psi _{\rm {R}}^{(-)}~.}
== Solutions ==
=== Spin eigenstates ===
One convenient starting point for writing the solutions is to work in the rest frame of the spinors. Writing the quantum Hamiltonian with the conventional sign convention {\displaystyle H=i\partial _{t}} leads to the Majorana equation taking the form
{\displaystyle i\partial _{t}\psi =-i{\vec {\alpha }}\cdot \nabla \psi +m\beta \psi _{c}}
In the chiral (Weyl) basis, one has that
{\displaystyle \gamma ^{0}=\beta ={\begin{pmatrix}0&I\\I&0\end{pmatrix}},\quad {\vec {\alpha }}={\begin{pmatrix}{\vec {\sigma }}&0\\0&-{\vec {\sigma }}\end{pmatrix}}}
with {\displaystyle {\vec {\sigma }}} the Pauli vector. The sign convention here is consistent with the article gamma matrices. Plugging in the positive charge conjugation eigenstate {\displaystyle \psi _{\text{Weyl}}^{(+)}} given above, one obtains an equation for the two-component spinor
{\displaystyle i\partial _{t}\psi _{\rm {L}}=-i{\vec {\sigma }}\cdot \nabla \psi _{\rm {L}}+m(i\sigma _{2}\psi _{\rm {L}}^{*})}
and likewise
{\displaystyle i\partial _{t}(i\sigma _{2}\psi _{\rm {L}}^{*})=+i{\vec {\sigma }}\cdot \nabla (i\sigma _{2}\psi _{\rm {L}}^{*})+m\psi _{\rm {L}}}
These two are in fact the same equation, which can be verified by noting that {\displaystyle \sigma _{2}} yields the complex conjugate of the Pauli matrices:
{\displaystyle \sigma _{2}\left({\vec {k}}\cdot {\vec {\sigma }}\right)\sigma _{2}=-{\vec {k}}\cdot {\vec {\sigma }}^{*}.}
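This conjugation identity is straightforward to check by direct computation; the following is an illustrative numeric sketch (not from the article):

```python
# Numeric check of σ₂ (k·σ) σ₂ = −(k·σ)* for a real 3-vector k.
def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]
s2 = sy

k = (0.3, -1.2, 2.0)                    # an arbitrary real 3-vector
kdots = [[sum(kc * s[i][j] for kc, s in zip(k, (sx, sy, sz)))
          for j in range(2)] for i in range(2)]

lhs = mm(mm(s2, kdots), s2)
rhs = [[-kdots[i][j].conjugate() for j in range(2)] for i in range(2)]
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
print("sigma_2 (k.sigma) sigma_2 = -(k.sigma)* verified")
```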
The plane wave solutions can be developed for the energy-momentum {\displaystyle \left(k_{0},{\vec {k}}\right)} and are most easily stated in the rest frame. The spin-up rest-frame solution is
{\displaystyle \psi _{\rm {L}}^{(u)}={\begin{pmatrix}e^{-imt}\\e^{imt}\end{pmatrix}}}
while the spin-down solution is
{\displaystyle \psi _{\rm {L}}^{(d)}={\begin{pmatrix}e^{imt}\\-e^{-imt}\end{pmatrix}}}
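At rest the gradient term drops out, so the two-component equation reduces to i∂ₜψ_L = m(iσ₂ψ_L*). The sketch below (illustrative, not from the article) plugs in the spin-up solution at an arbitrary time and mass:

```python
# Spot-check that the spin-up rest-frame spinor ψ_L = (e^{−imt}, e^{imt})
# satisfies  i ∂ₜψ_L = m (iσ₂ ψ_L*)  when ∇ψ_L = 0.
import cmath

m, t = 1.7, 0.4
psi = [cmath.exp(-1j * m * t), cmath.exp(1j * m * t)]

# i times the exact time derivative of each component:
lhs = [1j * (-1j * m) * psi[0], 1j * (1j * m) * psi[1]]

# m · iσ₂ ψ*, with iσ₂ = [[0, 1], [-1, 0]]:
c = [z.conjugate() for z in psi]
rhs = [m * c[1], -m * c[0]]

assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
print("spin-up rest-frame spinor solves the Majorana equation")
```

The spin-down solution can be checked the same way.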
That these are being correctly interpreted can be seen by re-expressing them in the Dirac basis, as Dirac spinors. In this case, they take the form
{\displaystyle \psi _{\text{Dirac}}^{(u)}={\begin{bmatrix}e^{-imt}\\0\\0\\-e^{imt}\end{bmatrix}}}
and
{\displaystyle \psi _{\text{Dirac}}^{(d)}={\begin{bmatrix}0\\e^{-imt}\\-e^{imt}\\0\end{bmatrix}}}
These are the rest-frame spinors. They can be seen as a linear combination of both the positive and the negative-energy solutions to the Dirac equation. These are the only two solutions; the Majorana equation has only two linearly independent solutions, unlike the Dirac equation, which has four. The doubling of the degrees of freedom of the Dirac equation can be ascribed to the Dirac spinors carrying charge.
=== Momentum eigenstates ===
In a general momentum frame, the Majorana spinor can be written as
== Electric charge ==
The appearance of both {\textstyle \psi } and {\textstyle \psi _{c}} in the Majorana equation means that the field {\textstyle \psi } cannot be coupled to a charged electromagnetic field without violating charge conservation, since particles have the opposite charge to their own antiparticles. To satisfy this restriction, {\textstyle \psi } must be taken to be electrically neutral. This can be articulated in greater detail.
The Dirac equation can be written in a purely real form when the gamma matrices are taken in the Majorana representation; it then reads
{\displaystyle \left(-i{\frac {\partial }{\partial t}}-i{\hat {\alpha }}\cdot \nabla +\beta m\right)\psi =0}
with {\displaystyle {\hat {\alpha }}} being purely real symmetric matrices, and {\displaystyle \beta } being purely imaginary skew-symmetric. In this case, purely real solutions to the equation can be found; these are the Majorana spinors. Under the action of Lorentz transformations, these transform under the (purely real) spin group {\displaystyle \operatorname {Spin} (1,3).} This stands in contrast to the Dirac spinors, which are only covariant under the action of the complexified spin group {\displaystyle \operatorname {Spin} ^{\mathbb {C} }(1,3).} The interpretation is that the complexified spin group encodes the electromagnetic potential, while the real spin group does not.
This can also be stated in a different way: the Dirac equation and the Dirac spinors contain a sufficient amount of gauge freedom to naturally encode electromagnetic interactions. This can be seen by noting that the electromagnetic potential can very simply be added to the Dirac equation without requiring any additional modifications or extensions to either the equation or the spinor. The location of this extra degree of freedom is pin-pointed by the charge conjugation operator, and the imposition of the Majorana constraint {\textstyle \psi =\psi _{c}} removes this extra degree of freedom. Once removed, there cannot be any coupling to the electromagnetic potential; ergo, the Majorana spinor is necessarily electrically neutral. An electromagnetic coupling can only be obtained by adding back in a complex-number-valued phase factor, and coupling this phase factor to the electromagnetic potential.
The above can be further sharpened by examining the situation in {\displaystyle (p,q)} spacetime dimensions. In this case, the complexified spin group {\displaystyle \operatorname {Spin} ^{\mathbb {C} }(p,q)} has a double covering by {\displaystyle \operatorname {SO} (p,q)\times S^{1}} with {\displaystyle S^{1}\cong U(1)} the circle. The implication is that {\displaystyle \operatorname {SO} (p,q)} encodes the generalized Lorentz transformations (of course), while the circle can be identified with the {\displaystyle \mathrm {U} (1)} action of the gauge group on electric charges. That is, the gauge-group action of the complexified spin group on a Dirac spinor can be split into a purely-real Lorentzian part, and an electromagnetic part. This can be further elaborated on non-flat (non-Minkowski-flat) spin manifolds. In this case, the Dirac operator acts on the spinor bundle. Decomposed into distinct terms, it includes the usual covariant derivative {\displaystyle d+A.} The {\displaystyle A} field can be seen to arise directly from the curvature of the complexified part of the spin bundle, in that the gauge transformations couple to the complexified part, and not the real-spinor part. That the {\displaystyle A} field corresponds to the electromagnetic potential can be seen by noting that (for example) the square of the Dirac operator is the Laplacian plus the scalar curvature {\displaystyle R} (of the underlying manifold that the spinor field sits on) plus the (electromagnetic) field strength {\displaystyle F=dA.} For the Majorana case, one has only the Lorentz transformations acting on the Majorana spinor; the complexification plays no role. A detailed treatment of these topics can be found in Jost, while the {\displaystyle (p,q)=(1,3)} case is articulated in Bleecker. Unfortunately, neither text explicitly articulates the Majorana spinor in direct form.
== Field quanta ==
The quanta of the Majorana equation allow for two classes of particles, a neutral particle and its neutral antiparticle. The frequently applied supplemental condition
{\textstyle \Psi =\Psi _{c}}
corresponds to the Majorana spinor.
=== Majorana particle ===
Particles corresponding to Majorana spinors are known as Majorana particles, due to the above self-conjugacy constraint. All the fermions of the Standard Model with non-zero electric charge are excluded as Majorana fermions, since a charged particle cannot be its own antiparticle; the electrically neutral neutrino is the one possible exception.
If the neutrino is a Majorana particle, then neutrinoless double-beta decay, as well as a range of lepton-number-violating meson and charged-lepton decays, becomes possible. A number of experiments probing whether the neutrino is a Majorana particle are currently underway.
== Notes ==
== References ==
== Additional reading ==
"Majorana Legacy in Contemporary Physics", Electronic Journal of Theoretical Physics (EJTP) Volume 3, Issue 10 (April 2006) Special issue for the Centenary of Ettore Majorana (1906-1938?). ISSN 1729-5254
Frank Wilczek, (2009) "Majorana returns", Nature Physics Vol. 5 pages 614–618. | Wikipedia/Majorana_equation |
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it.
At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables.
The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics, biology, chemistry, engineering, economics, history, and medicine. Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept.
== Overview ==
The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, a difference equation, or an equation on another time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit.
Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system.
For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:
The systems studied may only be known approximately—the parameters of the system may not be known precisely or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their equivalence changes with the different notions of stability.
The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. Linear dynamical systems and systems that have two numbers describing a state are examples of dynamical systems where the possible classes of orbits are understood.
The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the transition to turbulence of a fluid.
The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of chaos.
== History ==
Many people regard French mathematician Henri Poincaré as the founder of dynamical systems. Poincaré published two now-classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotics, and so on). These papers included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state.
Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system.
In 1913, George David Birkhoff proved Poincaré's "Last Geometric Theorem", a special case of the three-body problem, a result that made him world-famous. In 1927, he published his Dynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called the ergodic theorem. Combining insights from physics on the ergodic hypothesis with measure theory, this theorem solved, at least in principle, a fundamental problem of statistical mechanics. The ergodic theorem has also had repercussions for dynamics.
Stephen Smale made significant advances as well. His first contribution was the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others.
Oleksandr Mykolaiovych Sharkovsky developed Sharkovsky's theorem on the periods of discrete dynamical systems in 1964. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period.
In the late 20th century the dynamical system perspective to partial differential equations started gaining popularity. Palestinian mechanical engineer Ali H. Nayfeh applied nonlinear dynamics in mechanical and engineering systems. His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of machines and structures that are common in daily life, such as ships, cranes, bridges, buildings, skyscrapers, jet engines, rocket engines, aircraft and spacecraft.
== Formal definition ==
In the most general sense,
a dynamical system is a tuple (T, X, Φ) where T is a monoid, written additively, X is a non-empty set and Φ is a function
{\displaystyle \Phi :U\subseteq (T\times X)\to X}
with {\displaystyle \mathrm {proj} _{2}(U)=X} (where {\displaystyle \mathrm {proj} _{2}} is the second projection map)
and for any x in X:
{\displaystyle \Phi (0,x)=x}
{\displaystyle \Phi (t_{2},\Phi (t_{1},x))=\Phi (t_{2}+t_{1},x),}
for {\displaystyle \,t_{1},\,t_{2}+t_{1}\in I(x)} and {\displaystyle \ t_{2}\in I(\Phi (t_{1},x))}, where we have defined the set {\displaystyle I(x):=\{t\in T:(t,x)\in U\}}
for any x in X.
In particular, in the case that {\displaystyle U=T\times X} we have for every x in X that {\displaystyle I(x)=T} and thus that Φ defines a monoid action of T on X.
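The two monoid-action axioms above can be checked directly on a toy system. The following sketch (the monoid T, the state set X, and the rule Φ below are illustrative choices, not taken from the text) verifies Φ(0, x) = x and Φ(t2, Φ(t1, x)) = Φ(t2 + t1, x):

```python
# A toy dynamical system (T, X, Phi): T is the non-negative integers
# (a monoid under addition) and X is the integers modulo 12.
# Both choices are purely illustrative.

def phi(t: int, x: int) -> int:
    """Evolution function: advance the state x by t steps of +3 (mod 12)."""
    return (x + 3 * t) % 12

# Axiom 1: Phi(0, x) = x  (the identity of the monoid acts trivially)
assert all(phi(0, x) == x for x in range(12))

# Axiom 2: Phi(t2, Phi(t1, x)) = Phi(t2 + t1, x)  (the monoid action law)
assert all(
    phi(t2, phi(t1, x)) == phi(t2 + t1, x)
    for x in range(12) for t1 in range(5) for t2 in range(5)
)
```

Since U = T × X here, every I(x) equals T and Φ is a full monoid action.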
The function Φ(t,x) is called the evolution function of the dynamical system: it associates to every point x in the set X a unique image, depending on the variable t, called the evolution parameter. X is called phase space or state space, while the variable x represents an initial state of the system.
We often write {\displaystyle \Phi _{x}(t)\equiv \Phi (t,x)} or {\displaystyle \Phi ^{t}(x)\equiv \Phi (t,x)}
if we take one of the variables as constant. The function
{\displaystyle \Phi _{x}:I(x)\to X}
is called the flow through x and its graph is called the trajectory through x. The set
{\displaystyle \gamma _{x}\equiv \{\Phi (t,x):t\in I(x)\}}
is called the orbit through x.
The orbit through x is the image of the flow through x.
A subset S of the state space X is called Φ-invariant if for all x in S and all t in T
{\displaystyle \Phi (t,x)\in S.}
Thus, in particular, if S is Φ-invariant,
{\displaystyle I(x)=T}
for all x in S. That is, the flow through x must be defined for all time for every element of S.
More commonly there are two classes of definitions for a dynamical system: one is motivated by ordinary differential equations and is geometrical in flavor; and the other is motivated by ergodic theory and is measure theoretical in flavor.
=== Geometrical definition ===
In the geometrical definition, a dynamical system is the tuple
{\displaystyle \langle {\mathcal {T}},{\mathcal {M}},f\rangle }. {\displaystyle {\mathcal {T}}} is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. {\displaystyle {\mathcal {M}}} is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. f is an evolution rule t → f t (with {\displaystyle t\in {\mathcal {T}}}) such that f t is a diffeomorphism of the manifold to itself. So, f is a "smooth" mapping of the time-domain {\displaystyle {\mathcal {T}}} into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism for every time t in the domain {\displaystyle {\mathcal {T}}}.
==== Real dynamical system ====
A real dynamical system, real-time dynamical system, continuous time dynamical system, or flow is a tuple (T, M, Φ) with T an open interval in the real numbers R, M a manifold locally diffeomorphic to a Banach space, and Φ a continuous function. If Φ is continuously differentiable we say the system is a differentiable dynamical system. If the manifold M is locally diffeomorphic to Rn, the dynamical system is finite-dimensional; if not, the dynamical system is infinite-dimensional. This does not assume a symplectic structure. When T is taken to be the reals, the dynamical system is called global or a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow.
==== Discrete dynamical system ====
A discrete dynamical system, discrete-time dynamical system is a tuple (T, M, Φ), where M is a manifold locally diffeomorphic to a Banach space, and Φ is a function. When T is taken to be the integers, it is a cascade or a map. If T is restricted to the non-negative integers we call the system a semi-cascade.
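A cascade can be made concrete with an invertible map. The sketch below uses Arnold's cat map on the 2-torus as an example (a standard illustration, not prescribed by the text): since the map is invertible, the evolution is defined for all integer times T = Z, making it a cascade rather than a semi-cascade.

```python
# A cascade (T = Z): Arnold's cat map on the 2-torus, an invertible
# discrete-time dynamical system. Coordinates are taken in [0,1) x [0,1).

def cat(x: float, y: float):
    """One forward step: (x, y) -> (2x + y, x + y) mod 1."""
    return ((2 * x + y) % 1.0, (x + y) % 1.0)

def cat_inv(x: float, y: float):
    """Inverse step; the matrix [[2,1],[1,1]] has determinant 1,
    with integer inverse [[1,-1],[-1,2]]."""
    return ((x - y) % 1.0, (-x + 2 * y) % 1.0)

# Invertibility: stepping forward then backward returns the initial state,
# so the orbit extends to negative times as well.
p = (0.125, 0.375)
q = cat_inv(*cat(*p))
assert abs(q[0] - p[0]) < 1e-12 and abs(q[1] - p[1]) < 1e-12
```

Restricting the same map to non-negative iterates would give a semi-cascade.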
==== Cellular automaton ====
A cellular automaton is a tuple (T, M, Φ), with T a lattice such as the integers or a higher-dimensional integer grid, M is a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such cellular automata are dynamical systems. The lattice in M represents the "space" lattice, while the one in T represents the "time" lattice.
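A minimal instance of such a system is a one-dimensional elementary cellular automaton. The sketch below uses rule 90 (each cell becomes the XOR of its two neighbours) on a finite periodic window; the rule and window size are illustrative choices.

```python
# A 1-D cellular automaton as a dynamical system: the "space" lattice is Z
# (truncated here to a finite window with periodic boundary), a state is a
# function from the lattice to the finite set {0, 1}, and the evolution
# rule is local. Rule 90 is used purely as a familiar example.

def step_rule90(cells: list) -> list:
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

# Evolve a single live cell; rule 90 traces out a Sierpinski-like pattern.
state = [0] * 16
state[8] = 1
for _ in range(4):
    state = step_rule90(state)
```

Each application of `step_rule90` is one tick of the "time" lattice T.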
==== Multidimensional generalization ====
Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems are defined over multiple independent variables and are therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing.
==== Compactification of a dynamical system ====
Given a global dynamical system (R, X, Φ) on a locally compact and Hausdorff topological space X, it is often useful to study the continuous extension Φ* of Φ to the one-point compactification X* of X. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R, X*, Φ*).
In compact dynamical systems the limit set of any orbit is non-empty, compact and connected.
=== Measure theoretical definition ===
A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet (T, (X, Σ, μ), Φ). Here, T is a monoid (usually the non-negative integers), X is a set, and (X, Σ, μ) is a probability space, meaning that Σ is a sigma-algebra on X and μ is a finite measure on (X, Σ). A map Φ: X → X is said to be Σ-measurable if and only if, for every σ in Σ, one has
{\displaystyle \Phi ^{-1}\sigma \in \Sigma }
. A map Φ is said to preserve the measure if and only if, for every σ in Σ, one has
{\displaystyle \mu (\Phi ^{-1}\sigma )=\mu (\sigma )}
. Combining the above, a map Φ is said to be a measure-preserving transformation of X , if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. The triplet (T, (X, Σ, μ), Φ), for such a Φ, is then defined to be a dynamical system.
The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates
{\displaystyle \Phi ^{n}=\Phi \circ \Phi \circ \dots \circ \Phi }
for every integer n are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated.
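The measure-preservation condition μ(Φ⁻¹σ) = μ(σ) can be seen concretely for the doubling map on the unit interval, a standard example (not taken from the text): the preimage of an interval consists of two intervals of half the length, so Lebesgue measure is preserved.

```python
# Measure preservation for the doubling map Phi(x) = 2x mod 1 on [0, 1):
# the preimage of [a, b) is the union of [a/2, b/2) and [(a+1)/2, (b+1)/2),
# two intervals of half the length, so mu(Phi^{-1} E) = mu(E) for Lebesgue
# measure mu. Checked here for intervals (a sketch, not a proof).

def preimage_length(a: float, b: float) -> float:
    """Total length of Phi^{-1}([a, b)) for Phi(x) = 2x mod 1."""
    return (b / 2 - a / 2) + ((b + 1) / 2 - (a + 1) / 2)

# The preimage has the same measure as the original interval.
assert abs(preimage_length(0.2, 0.7) - (0.7 - 0.2)) < 1e-12
```

Note that the forward image of a set need not have the same measure; preservation is stated through preimages precisely for this reason.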
==== Relation to geometric definition ====
The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the Krylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance.
Some systems have a natural measure, such as the Liouville measure in Hamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic dissipative systems the choice of invariant measure is technically more challenging. The measure needs to be supported on the attractor, but attractors have zero Lebesgue measure and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution.
For hyperbolic dynamical systems, the Sinai–Ruelle–Bowen measures appear to be the natural choice. They are constructed on the geometrical structure of stable and unstable manifolds of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems.
== Construction of dynamical systems ==
The concept of evolution in time is central to the theory of dynamical systems, as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of the time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamical system. For example, consider an initial value problem such as the following:
{\displaystyle {\dot {\boldsymbol {x}}}={\boldsymbol {v}}(t,{\boldsymbol {x}})}
{\displaystyle {\boldsymbol {x}}|_{t=0}={\boldsymbol {x}}_{0}}
where
{\displaystyle {\dot {\boldsymbol {x}}}} represents the velocity of the material point x
M is a finite dimensional manifold
v: T × M → TM is a vector field in Rn or Cn and represents the change of velocity induced by the known forces acting on the given material point in the phase space M. The change is not a vector in the phase space M, but is instead in the tangent space TM.
There is no need for higher order derivatives in the equation, nor for the parameter t in v(t,x), because these can be eliminated by considering systems of higher dimensions.
Depending on the properties of this vector field, the mechanical system is called
autonomous, when v(t, x) = v(x)
homogeneous when v(t, 0) = 0 for all t
The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above
{\displaystyle {\boldsymbol {x}}(t)=\Phi (t,{\boldsymbol {x}}_{0})}
The dynamical system is then (T, M, Φ).
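In practice the evolution function Φ(t, x0) is usually obtained numerically. The sketch below approximates it with a fixed-step fourth-order Runge–Kutta integrator; the vector field v(x) = −x and the step count are illustrative choices, picked because the exact flow x0·e^(−t) is known for comparison.

```python
# Turning an ODE dx/dt = v(x) into an (approximate) evolution function
# Phi(t, x0) by numerical integration with a fixed-step RK4 scheme.
import math

def rk4_flow(v, t: float, x0: float, steps: int = 1000) -> float:
    """Approximate the evolution function Phi(t, x0) for dx/dt = v(x)."""
    h = t / steps
    x = x0
    for _ in range(steps):
        k1 = v(x)
        k2 = v(x + 0.5 * h * k1)
        k3 = v(x + 0.5 * h * k2)
        k4 = v(x + h * k3)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# For v(x) = -x the exact flow is Phi(t, x0) = x0 * exp(-t).
x_num = rk4_flow(lambda x: -x, t=2.0, x0=1.5)
assert abs(x_num - 1.5 * math.exp(-2.0)) < 1e-9
```

The flow property Φ(t2, Φ(t1, x0)) = Φ(t1 + t2, x0) holds exactly for the true solution and up to discretization error for the numerical one.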
Some formal manipulation of the system of differential equations shown above gives a more general form of equations a dynamical system must satisfy
{\displaystyle {\dot {\boldsymbol {x}}}-{\boldsymbol {v}}(t,{\boldsymbol {x}})=0\qquad \Leftrightarrow \qquad {\mathfrak {G}}\left(t,\Phi (t,{\boldsymbol {x}}_{0})\right)=0}
where
{\displaystyle {\mathfrak {G}}:{{(T\times M)}^{M}}\to \mathbf {C} }
is a functional from the set of evolution functions to the field of the complex numbers.
This equation is useful when modeling mechanical systems with complicated constraints.
Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations.
== Examples ==
== Linear dynamical systems ==
Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t).
=== Flows ===
For a flow, the vector field v(x) is an affine function of the position in the phase space, that is,
{\displaystyle {\dot {x}}=v(x)=Ax+b,}
with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity).
The case b ≠ 0 with A = 0 is just a straight line in the direction of b:
{\displaystyle \Phi ^{t}(x_{1})=x_{1}+bt.}
When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there.
For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0,
{\displaystyle \Phi ^{t}(x_{0})=e^{tA}x_{0}.}
When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the eigenvectors of A it is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin.
The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior.
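The matrix-exponential flow above can be computed directly. The sketch below builds e^(tA) from a truncated Taylor series (adequate for small ‖tA‖; a production code would use a library routine) and checks it against the rotation generator A = [[0, 1], [−1, 0]], whose exponential is known in closed form; all values are illustrative.

```python
# The linear flow Phi^t(x0) = e^{tA} x0, with e^{tA} computed from its
# truncated Taylor series. For A = [[0,1],[-1,0]] the exponential is the
# rotation matrix [[cos t, sin t], [-sin t, cos t]], giving an exact check.
import math
import numpy as np

def expm_taylor(M: np.ndarray, terms: int = 30) -> np.ndarray:
    """Truncated Taylor series for the matrix exponential."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k     # M^k / k!
        result = result + term
    return result

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
t = 0.7
flow = expm_taylor(t * A)
expected = np.array([[math.cos(t), math.sin(t)],
                     [-math.sin(t), math.cos(t)]])
assert np.allclose(flow, expected)

# Orbit of an initial point under the flow: Phi^t(x0) = e^{tA} x0.
x0 = np.array([1.0, 0.0])
xt = flow @ x0
assert np.allclose(xt, [math.cos(t), -math.sin(t)])
```

Here the eigenvalues of A are ±i, on the imaginary axis, so orbits neither converge to nor diverge from the origin: they circle it.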
=== Maps ===
A discrete-time, affine dynamical system has the form of a matrix difference equation:
{\displaystyle x_{n+1}=Ax_{n}+b,}
with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)⁻¹b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system Aⁿx0.
The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.
As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u1 is an eigenvector of A with a real eigenvalue smaller than one, then the straight line given by the points along αu1, with α ∈ R, is an invariant curve of the map. Points on this straight line run into the fixed point.
There are also many other discrete dynamical systems.
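The affine map and its fixed point can be checked numerically. In the sketch below (the matrix A and vector b are illustrative choices with both eigenvalues of A inside the unit circle), iterates converge to the fixed point x* = (I − A)⁻¹b, exactly as the change of coordinates above predicts:

```python
# Iterating the affine map x_{n+1} = A x_n + b. With x* = (I - A)^{-1} b,
# the substitution y = x - x* reduces the map to y_{n+1} = A y_n; when all
# eigenvalues of A lie inside the unit circle, every orbit converges to x*.
import numpy as np

A = np.array([[0.5, 0.1],
              [0.0, 0.3]])      # triangular: eigenvalues 0.5 and 0.3
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(np.eye(2) - A, b)   # fixed point (I - A)^{-1} b

x = np.array([10.0, -4.0])
for _ in range(100):
    x = A @ x + b              # one step of the cascade

assert np.allclose(x, x_star)  # the orbit has run into the fixed point
```

With an eigenvalue outside the unit circle the same iteration would instead diverge, illustrating sensitive dependence in the linear setting.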
== Local dynamics ==
The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible.
=== Rectification ===
A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem.
The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches.
=== Near periodic orbits ===
In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point x0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points are a Poincaré section S(γ, x0), of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the time it takes x0.
The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x2), so a change of coordinates h can only be expected to simplify F to its linear part
{\displaystyle h^{-1}\circ F\circ h(x)=J\cdot x.}
This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If λ1, ..., λν are the eigenvalues of J, they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form λi − Σ (multiples of other eigenvalues) occur in the denominator of the terms for the function h, the non-resonant condition is also known as the small divisor problem.
=== Conjugation results ===
The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not in the unit circle, the dynamics near the fixed point x0 of F is called hyperbolic and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic.
In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic.
The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point.
== Bifurcation theory ==
When the evolution map Φt (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation.
Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems.
The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory.
Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations.
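The period-doubling cascade can be observed numerically in the logistic map f(x) = r·x·(1 − x), a standard example (not introduced in the text above). In the sketch below, the parameter values sampled and the period-detection tolerance are illustrative choices; the orbit's period doubles as r crosses the first bifurcation points.

```python
# Feigenbaum period doubling in the logistic map f(x) = r*x*(1-x):
# as the parameter r increases, the attracting orbit's period doubles,
# 1 -> 2 -> 4 -> ...

def logistic(x, r):
    return r * x * (1.0 - x)

def attractor_period(r, n_transient=2000, max_period=16, tol=1e-6):
    """Iterate past the transient, then find the smallest p with
    f^p(x) = x on the attracting orbit (None if period > max_period)."""
    x = 0.5
    for _ in range(n_transient):
        x = logistic(x, r)
    start = x
    for p in range(1, max_period + 1):
        x = logistic(x, r)
        if abs(x - start) < tol:
            return p
    return None

assert attractor_period(2.8) == 1   # stable fixed point
assert attractor_period(3.2) == 2   # after the first period doubling
assert attractor_period(3.5) == 4   # after the second
```

The bifurcation points here are exactly the parameter values at which an eigenvalue (here the multiplier of the orbit) crosses the unit circle.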
== Ergodic systems ==
In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points Φ t(A) and invariance of the phase space means that
{\displaystyle \mathrm {vol} (A)=\mathrm {vol} (\Phi ^{t}(A)).}
In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure.
In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution.
For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let F be a phase space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms.
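Recurrence can be watched directly in a simple volume-preserving system. The sketch below uses an irrational rotation of the circle (the rotation number, starting point, neighbourhood size, and step count are all illustrative choices); the orbit keeps re-entering any neighbourhood of its starting point.

```python
# Poincare recurrence for a measure-preserving map: the irrational
# rotation Phi(x) = x + alpha mod 1 preserves Lebesgue measure on the
# circle, and every orbit returns arbitrarily close to where it started.
import math

alpha = math.sqrt(2) - 1       # irrational rotation number
x0 = 0.2
x = x0
returns = []
for n in range(1, 5000):
    x = (x + alpha) % 1.0
    # distance measured around the circle, not along the interval
    if min(abs(x - x0), 1.0 - abs(x - x0)) < 1e-3:
        returns.append(n)

# The orbit re-enters the 1e-3 neighbourhood of x0 again and again.
assert len(returns) > 1
```

Shrinking the neighbourhood only delays the first return; it never prevents it, which is the content of the recurrence theorem.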
One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω).
The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that to each point of the phase space associates a number (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function φ t. This introduces an operator U t, the transfer operator,
{\displaystyle (U^{t}a)(x)=a(\Phi ^{-t}(x)).}
By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φ t. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ t gets mapped into an infinite-dimensional linear problem involving U.
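A toy sketch of this linearization (the cubic map and the observables are hypothetical choices): even when the dynamics Φ is nonlinear, the operator U defined by (U a)(x) = a(Φ⁻¹(x)) acts linearly on observables:

```python
import math

# A nonlinear, invertible "time-one" map on the real line and its inverse.
phi = lambda x: x ** 3                                     # Phi^1
phi_inv = lambda x: math.copysign(abs(x) ** (1 / 3), x)    # Phi^{-1}

# Koopman-style operator: (U a)(x) = a(Phi^{-1}(x)).
def U(a):
    return lambda x: a(phi_inv(x))

a, b, x = math.cos, math.sin, 1.7

# U is linear in the observable even though Phi is nonlinear:
lhs = U(lambda y: 2 * a(y) + 3 * b(y))(x)
rhs = 2 * U(a)(x) + 3 * U(b)(x)
print(lhs, rhs)  # equal
```

This is the sense in which the finite-dimensional nonlinear problem becomes an infinite-dimensional linear one: U acts on the (infinite-dimensional) space of observables.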
The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems.
== Nonlinear dynamical systems and chaos ==
Simple nonlinear dynamical systems, including piecewise linear systems, can exhibit strongly unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent spaces perpendicular to an orbit can be decomposed into a combination of two parts: one with the points that converge towards the orbit (the stable manifold) and another of the points that diverge from the orbit (the unstable manifold).
This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?"
The chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The Pomeau–Manneville scenario of the logistic map and the Fermi–Pasta–Ulam–Tsingou problem arose with just second-degree polynomials; the horseshoe map is piecewise linear.
=== Solutions of finite duration ===
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that in these solutions the system will reach the value zero at some time, called an ending time, and then stay there forever after. This can occur only when system trajectories are not uniquely determined forwards and backwards in time by the dynamics; thus solutions of finite duration imply a form of "backwards-in-time unpredictability" closely related to the forwards-in-time unpredictability of chaos. This behavior cannot happen for Lipschitz-continuous differential equations, by the Picard–Lindelöf theorem. These solutions are non-Lipschitz functions at their ending times and cannot be analytic functions on the whole real line.
As an example, the equation
{\displaystyle y'=-{\text{sgn}}(y){\sqrt {|y|}},\,\,y(0)=1}
admits the finite-duration solution
{\displaystyle y(t)={\frac {1}{4}}\left(1-{\frac {t}{2}}+\left|1-{\frac {t}{2}}\right|\right)^{2}}
that is zero for t ≥ 2 and is not Lipschitz continuous at its ending time t = 2.
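The closed-form solution above can be checked numerically; this sketch verifies the initial condition, the ending time, and the ODE itself via a central finite difference:

```python
import math

def y(t):
    # y(t) = (1/4)(1 - t/2 + |1 - t/2|)^2 = (1 - t/2)^2 for t < 2, else 0.
    u = 1 - t / 2
    return 0.25 * (u + abs(u)) ** 2

def rhs(yv):
    # Right-hand side of the ODE: -sgn(y) sqrt(|y|).
    return -math.copysign(math.sqrt(abs(yv)), yv) if yv else 0.0

print(y(0))        # 1.0: matches the initial condition y(0) = 1
print(y(2), y(5))  # 0.0 0.0: the solution reaches zero at t = 2 and stays there

# Check y' = -sgn(y) sqrt(|y|) by a central difference before the ending time.
t, h = 1.0, 1e-6
deriv = (y(t + h) - y(t - h)) / (2 * h)
print(abs(deriv - rhs(y(t))) < 1e-6)   # True
```

At t = 2 the right-hand side fails to be Lipschitz in y, which is exactly what allows the trajectory to merge with the constant solution y = 0.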
== External links ==
Arxiv preprint server has daily submissions of (non-refereed) manuscripts in dynamical systems.
Encyclopedia of dynamical systems A part of Scholarpedia — peer-reviewed and written by invited experts.
Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G. Wiens
Sci.Nonlinear FAQ 2.0 (Sept 2003) provides definitions, explanations and resources related to nonlinear science
Online books or lecture notes
Geometrical theory of dynamical systems. Nils Berglund's lecture notes for a course at ETH at the advanced undergraduate level.
Dynamical systems. George D. Birkhoff's 1927 book already takes a modern approach to dynamical systems.
Chaos: classical and quantum. An introduction to dynamical systems from the periodic orbit point of view.
Learning Dynamical Systems. Tutorial on learning dynamical systems.
Ordinary Differential Equations and Dynamical Systems. Lecture notes by Gerald Teschl
Research groups
Dynamical Systems Group Groningen, IWI, University of Groningen.
Chaos @ UMD. Concentrates on the applications of dynamical systems.
[2], SUNY Stony Brook. Lists of conferences, researchers, and some open problems.
Center for Dynamics and Geometry, Penn State.
Control and Dynamical Systems, Caltech.
Laboratory of Nonlinear Systems, Ecole Polytechnique Fédérale de Lausanne (EPFL).
Center for Dynamical Systems, University of Bremen
Systems Analysis, Modelling and Prediction Group, University of Oxford
Non-Linear Dynamics Group, Instituto Superior Técnico, Technical University of Lisbon
Dynamical Systems Archived 2017-06-02 at the Wayback Machine, IMPA, Instituto Nacional de Matemática Pura e Aplicada.
Nonlinear Dynamics Workgroup Archived 2015-01-21 at the Wayback Machine, Institute of Computer Science, Czech Academy of Sciences.
UPC Dynamical Systems Group Barcelona, Polytechnical University of Catalonia.
Center for Control, Dynamical Systems, and Computation, University of California, Santa Barbara.
In various interpretations of quantum mechanics, wave function collapse, also called reduction of the state vector, occurs when a wave function—initially in a superposition of several eigenstates—reduces to a single eigenstate due to interaction with the external world. This interaction is called an observation and is the essence of a measurement in quantum mechanics, which connects the wave function with classical observables such as position and momentum. Collapse is one of the two processes by which quantum systems evolve in time; the other is the continuous evolution governed by the Schrödinger equation.
In the Copenhagen interpretation, wave function collapse connects quantum to classical models, with a special role for the observer. By contrast, objective-collapse proposes an origin in physical processes. In the many-worlds interpretation, collapse does not exist; all wave function outcomes occur while quantum decoherence accounts for the appearance of collapse.
Historically, Werner Heisenberg was the first to use the idea of wave function reduction to explain quantum measurement.
== Mathematical description ==
In quantum mechanics each measurable physical quantity of a quantum system is called an observable which, for example, could be the position r and the momentum p, but also the energy E, the z components of spin (s_z), and so on. The observable acts as a linear function on the states of the system; its eigenvectors correspond to the quantum states (i.e. eigenstates) and the eigenvalues to the possible values of the observable. The collection of eigenstate/eigenvalue pairs represents all possible values of the observable. Writing φ_i for an eigenstate and c_i for the corresponding complex coefficient, any arbitrary state of the quantum system can be expressed as a vector using bra–ket notation:
{\displaystyle |\psi \rangle =\sum _{i}c_{i}|\phi _{i}\rangle .}
The kets |φ_i⟩ specify the different available quantum "alternatives", i.e., particular quantum states.
The wave function is a specific representation of a quantum state. Wave functions can therefore always be expressed as eigenstates of an observable though the converse is not necessarily true.
=== Collapse ===
To account for the experimental result that repeated measurements of a quantum system give the same results, the theory postulates a "collapse" or "reduction of the state vector" upon observation, abruptly converting an arbitrary state into a single component eigenstate of the observable:
{\displaystyle |\psi \rangle =\sum _{i}c_{i}|\phi _{i}\rangle \rightarrow |\psi '\rangle =|\phi _{i}\rangle .}
where the arrow represents a measurement of the observable corresponding to the φ basis.
For any single event, only one eigenvalue is measured, chosen randomly from among the possible values.
=== Meaning of the expansion coefficients ===
The complex coefficients {c_i} in the expansion of a quantum state in terms of eigenstates |φ_i⟩,
{\displaystyle |\psi \rangle =\sum _{i}c_{i}|\phi _{i}\rangle ,}
can be written as a (complex) overlap of the corresponding eigenstate and the quantum state:
{\displaystyle c_{i}=\langle \phi _{i}|\psi \rangle .}
They are called the probability amplitudes. The square modulus |c_i|² is the probability that a measurement of the observable yields the eigenstate |φ_i⟩. The sum of the probabilities over all possible outcomes must be one:
{\displaystyle \langle \psi |\psi \rangle =\sum _{i}|c_{i}|^{2}=1.}
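These relations can be illustrated with a small simulation (the three amplitudes are hypothetical): sampling measurement outcomes with probabilities |c_i|² reproduces the Born-rule statistics:

```python
import random

random.seed(1)

# Hypothetical amplitudes for a three-level system, chosen so sum |c_i|^2 = 1.
c = [complex(0.6, 0.0), complex(0.0, 0.48), complex(0.64, 0.0)]
probs = [abs(ci) ** 2 for ci in c]
print(sum(probs))  # 1.0 (within rounding): <psi|psi> = 1

# Each measurement "collapses" |psi> to one |phi_i> with probability |c_i|^2.
def measure():
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

counts = [0, 0, 0]
for _ in range(100_000):
    counts[measure()] += 1
freqs = [n / 100_000 for n in counts]
print(freqs)  # close to [0.36, 0.2304, 0.4096]
```

Any single run of `measure()` gives one definite outcome; only the accumulated frequencies reveal the underlying amplitudes, mirroring the statistical character of quantum measurement described above.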
As examples, individual counts in a double-slit experiment with electrons appear at random locations on the detector; after many counts are summed the distribution shows a wave interference pattern. In a Stern–Gerlach experiment with silver atoms, each particle appears in one of two areas unpredictably, but over many events the two areas collect equal numbers of counts.
This statistical aspect of quantum measurements differs fundamentally from classical mechanics. In quantum mechanics the only information we have about a system is its wave function and measurements of its wave function can only give statistical information.
== Terminology ==
The two terms "reduction of the state vector" (or "state reduction" for short) and "wave function collapse" are used to describe the same concept. A quantum state is a mathematical description of a quantum system; a quantum state vector uses Hilbert space vectors for the description. Reduction of the state vector replaces the full state vector with a single eigenstate of the observable.
The term "wave function" is typically used for a different mathematical representation of the quantum state, one that uses spatial coordinates, also called the "position representation". When the wave function representation is used, the "reduction" is called "wave function collapse".
== The measurement problem ==
The Schrödinger equation describes quantum systems but does not describe their measurement. Solutions to the equation include all possible observable values for measurements, but measurements only result in one definite outcome. This difference is called the measurement problem of quantum mechanics. To predict measurement outcomes from quantum solutions, the orthodox interpretation of quantum theory postulates wave function collapse and uses the Born rule to compute the probable outcomes. Despite the widespread quantitative success of these postulates, scientists remain dissatisfied and have sought more detailed physical models. Rather than suspending the Schrödinger equation during the process of measurement, the measurement apparatus should be included and governed by the laws of quantum mechanics.
== Physical approaches to collapse ==
Quantum theory offers no dynamical description of the "collapse" of the wave function. Viewed as a statistical theory, no description is expected. As Fuchs and Peres put it, "collapse is something that happens in our description of the system, not to the system itself".
Various interpretations of quantum mechanics attempt to provide a physical model for collapse. Three treatments of collapse can be found among the common interpretations. The first group includes hidden-variable theories like de Broglie–Bohm theory; here random outcomes only result from unknown values of hidden variables. Results from tests of Bell's theorem show that these variables would need to be non-local. The second group models measurement as quantum entanglement between the quantum state and the measurement apparatus. This results in a simulation of classical statistics called quantum decoherence. This group includes the many-worlds interpretation and consistent-histories models. The third group postulates an additional, but as yet undetected, physical basis for the randomness; this group includes for example the objective-collapse interpretations. While models in all groups have contributed to better understanding of quantum theory, no alternative explanation for individual events has emerged as more useful than collapse followed by statistical prediction with the Born rule.
The significance ascribed to the wave function varies from interpretation to interpretation and even within an interpretation (such as the Copenhagen interpretation). If the wave function merely encodes an observer's knowledge of the universe, then the wave function collapse corresponds to the receipt of new information. This is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent.
=== Quantum decoherence ===
Quantum decoherence explains why a system interacting with an environment transitions from being a pure state, exhibiting superpositions, to a mixed state, an incoherent combination of classical alternatives. This transition is fundamentally reversible, as the combined state of system and environment is still pure, but for all practical purposes irreversible in the same sense as in the second law of thermodynamics: the environment is a very large and complex quantum system, and it is not feasible to reverse their interaction. Decoherence is thus very important for explaining the classical limit of quantum mechanics, but cannot explain wave function collapse, as all classical alternatives are still present in the mixed state, and wave function collapse selects only one of them.
The form of decoherence known as environment-induced superselection proposes that when a quantum system interacts with the environment, the superpositions apparently reduce to mixtures of classical alternatives. The combined wave function of the system and environment continues to obey the Schrödinger equation throughout this apparent collapse. More importantly, this is not enough to explain actual wave function collapse, as decoherence does not reduce the state to a single eigenstate.
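A minimal numerical sketch of this pure-to-mixed transition (the qubit state and the dephasing factor are illustrative choices): suppressing the off-diagonal coherences of a density matrix lowers its purity Tr(ρ²) from 1 (pure) to 1/2 (maximally mixed), while both classical alternatives remain present on the diagonal:

```python
# Equal superposition |+> = (|0> + |1>)/sqrt(2) as a 2x2 density matrix.
rho = [[0.5, 0.5],
       [0.5, 0.5]]

def dephase(r, d):
    # Environment interaction suppresses off-diagonal coherences by factor d,
    # turning a pure state into a mixed one; the diagonal is untouched.
    return [[r[0][0], r[0][1] * d],
            [r[1][0] * d, r[1][1]]]

def purity(r):
    # Tr(rho^2): 1 for a pure state, 1/2 for a maximally mixed qubit.
    return sum(r[i][k] * r[k][i] for i in range(2) for k in range(2))

print(purity(rho))                # 1.0: pure superposition
print(purity(dephase(rho, 0.0)))  # 0.5: incoherent classical mixture
```

Note that the fully dephased state still assigns probability 0.5 to each alternative; nothing in this process selects one outcome, which is why decoherence alone does not explain collapse.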
== History ==
The concept of wavefunction collapse was introduced by Werner Heisenberg in his 1927 paper on the uncertainty principle, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", and incorporated into the mathematical formulation of quantum mechanics by John von Neumann, in his 1932 treatise Mathematische Grundlagen der Quantenmechanik. Heisenberg did not try to specify exactly what the collapse of the wavefunction meant. However, he emphasized that it should not be understood as a physical process. Niels Bohr never mentions wave function collapse in his published work, but he repeatedly cautioned that we must give up a "pictorial representation". Despite the differences between Bohr and Heisenberg, their views are often grouped together as the "Copenhagen interpretation", of which wave function collapse is regarded as a key feature.
John von Neumann's influential 1932 work Mathematical Foundations of Quantum Mechanics took a more formal approach, developing an "ideal" measurement scheme that postulated that there were two processes of wave function change:
The probabilistic, non-unitary, non-local, discontinuous change brought about by observation and measurement (state reduction or collapse).
The deterministic, unitary, continuous time evolution of an isolated system that obeys the Schrödinger equation.
In 1957 Hugh Everett III proposed a model of quantum mechanics that dropped von Neumann's first postulate. Everett observed that the measurement apparatus was also a quantum system and its quantum interaction with the system under observation should determine the results. He proposed that the discontinuous change is instead a splitting of a wave function representing the universe. While Everett's approach rekindled interest in foundational quantum mechanics, it left core issues unresolved. Two key issues relate to the origin of the observed classical results: what causes quantum systems to appear classical and to resolve with the observed probabilities of the Born rule.
Beginning in 1970, H. Dieter Zeh sought a detailed quantum decoherence model for the discontinuous change without postulating collapse. Further work by Wojciech H. Zurek in 1980 led eventually to a large number of papers on many aspects of the concept. Decoherence assumes that every quantum system interacts quantum mechanically with its environment and such interaction is not separable from the system, a concept called an "open system". Decoherence has been shown to work very quickly and within a minimal environment, but as yet it has not succeeded in providing a detailed model replacing the collapse postulate of orthodox quantum mechanics.
By explicitly dealing with the interaction of object and measuring instrument, von Neumann described a quantum mechanical measurement scheme consistent with wave function collapse. However, he did not prove the necessity of such a collapse. Von Neumann's projection postulate was conceived based on experimental evidence available during the 1930s, in particular Compton scattering. Later work refined the notion of measurements into the more easily discussed first kind, that will give the same value when immediately repeated, and the second kind that give different values when repeated.
== External links ==
Quotations related to Wave function collapse at Wikiquote
Quantum cryptography is the science of exploiting quantum mechanical properties to perform cryptographic tasks. The best known example of quantum cryptography is quantum key distribution, which offers an information-theoretically secure solution to the key exchange problem. The advantage of quantum cryptography lies in the fact that it allows the completion of various cryptographic tasks that are proven or conjectured to be impossible using only classical (i.e. non-quantum) communication. For example, it is impossible to copy data encoded in a quantum state. If one attempts to read the encoded data, the quantum state will be changed due to wave function collapse (no-cloning theorem). This could be used to detect eavesdropping in quantum key distribution (QKD).
== History ==
In the early 1970s, Stephen Wiesner, then at Columbia University in New York, introduced the concept of quantum conjugate coding. His seminal paper titled "Conjugate Coding" was rejected by the IEEE Information Theory Society but was eventually published in 1983 in SIGACT News. In this paper he showed how to store or transmit two messages by encoding them in two "conjugate observables", such as linear and circular polarization of photons, so that either, but not both, properties may be received and decoded. It was not until Charles H. Bennett, of IBM's Thomas J. Watson Research Center, and Gilles Brassard met in 1979 at the 20th IEEE Symposium on the Foundations of Computer Science, held in Puerto Rico, that they discovered how to incorporate Wiesner's findings. "The main breakthrough came when we realized that photons were never meant to store information, but rather to transmit it." In 1984, building upon this work, Bennett and Brassard proposed a method for secure communication, which is now called BB84, the first quantum key distribution system. Independently, in 1991 Artur Ekert proposed to use Bell's inequalities to achieve secure key distribution. Ekert's protocol for the key distribution, as it was subsequently shown by Dominic Mayers and Andrew Yao, offers device-independent quantum key distribution.
Companies that manufacture quantum cryptography systems include MagiQ Technologies, Inc. (Boston), ID Quantique (Geneva), QuintessenceLabs (Canberra, Australia), Toshiba (Tokyo), QNu Labs (India) and SeQureNet (Paris).
== Advantages ==
Cryptography is the strongest link in the chain of data security. However, interested parties cannot assume that cryptographic keys will remain secure indefinitely. Quantum cryptography has the potential to encrypt data for longer periods than classical cryptography. Using classical cryptography, scientists cannot guarantee encryption beyond approximately 30 years, but some stakeholders need longer periods of protection. Take, for example, the healthcare industry. As of 2017, 85.9% of office-based physicians are using electronic medical record systems to store and transmit patient data. Under the Health Insurance Portability and Accountability Act, medical records must be kept secret. Quantum key distribution can protect electronic records for periods of up to 100 years. Quantum cryptography also has useful applications for governments and militaries as, historically, governments have kept military data secret for periods of over 60 years. It has also been proven that quantum key distribution can travel through a noisy channel over a long distance and be secure: it can be reduced from a noisy quantum scheme to a classical noiseless scheme, which can then be analyzed with classical probability theory. This consistent protection over a noisy channel can be achieved through the implementation of quantum repeaters, which have the ability to resolve quantum communication errors in an efficient way. Quantum repeaters, which are quantum computers, can be stationed as segments over the noisy channel to ensure the security of communication: they purify the segments of the channel before connecting them, creating a secure line of communication. Even sub-par quantum repeaters can provide adequate security through the noisy channel over a long distance.
== Applications ==
Quantum cryptography is a general subject that covers a broad range of cryptographic practices and protocols. Some of the most notable applications and protocols are discussed below.
=== Quantum key distribution ===
The best-known and developed application of quantum cryptography is QKD, which is the process of using quantum communication to establish a shared key between two parties (Alice and Bob, for example) without a third party (Eve) learning anything about that key, even if Eve can eavesdrop on all communication between Alice and Bob. If Eve tries to learn information about the key being established, discrepancies will arise causing Alice and Bob to notice. Once the key is established, it is then typically used for encrypted communication using classical techniques. For instance, the exchanged key could be used for symmetric cryptography (e.g. one-time pad).
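The sifting stage of a BB84-style key exchange can be sketched as follows (an idealized toy model with no eavesdropper, noise, or privacy amplification; the parameters are illustrative):

```python
import random

random.seed(7)
n = 64

# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
alice_bits = [random.randrange(2) for _ in range(n)]
alice_bases = [random.randrange(2) for _ in range(n)]

# Bob measures each photon in a random basis; when bases match he recovers the
# bit exactly, otherwise his result is modeled as an independent coin flip.
bob_bases = [random.randrange(2) for _ in range(n)]
bob_bits = [b if ab == bb else random.randrange(2)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: both publicly compare bases and keep only the matching positions.
key_a = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
key_b = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
print(key_a == key_b)  # True: without an eavesdropper the sifted keys agree
```

An eavesdropper measuring in random bases would disturb roughly a quarter of the sifted bits, which Alice and Bob can detect by publicly comparing a sample of their keys.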
The security of quantum key distribution can be proven mathematically without imposing any restrictions on the abilities of an eavesdropper, something not possible with classical key distribution. This is usually described as "unconditional security", although there are some minimal assumptions required, including that the laws of quantum mechanics apply and that Alice and Bob are able to authenticate each other, i.e. Eve should not be able to impersonate Alice or Bob as otherwise a man-in-the-middle attack would be possible.
While QKD is secure, its practical application faces some challenges. There are in fact limitations for the key generation rate at increasing transmission distances. Recent studies have allowed important advancements in this regard. In 2018, the protocol of twin-field QKD was proposed as a mechanism to overcome the limits of lossy communication. The rate of the twin-field protocol was shown to overcome the secret key-agreement capacity of the lossy communication channel, known as the repeater-less PLOB bound, at 340 km of optical fiber; its ideal rate surpasses this bound already at 200 km and follows the rate-loss scaling of the higher repeater-assisted secret key-agreement capacity. The protocol suggests that optimal key rates are achievable on "550 kilometers of standard optical fibre", which is already commonly used in communications today. The theoretical result was confirmed in the first experimental demonstration of QKD beyond the PLOB bound, which has been characterized as the first effective quantum repeater. Notable developments in terms of achieving high rates at long distances are the sending-not-sending (SNS) version of the TF-QKD protocol and the no-phase-postselected twin-field scheme.
=== Mistrustful quantum cryptography ===
In mistrustful cryptography the participating parties do not trust each other. For example, Alice and Bob collaborate to perform some computation where both parties enter some private inputs. But Alice does not trust Bob and Bob does not trust Alice. Thus, a secure implementation of a cryptographic task requires that after completing the computation, Alice can be guaranteed that Bob has not cheated and Bob can be guaranteed that Alice has not cheated either. Examples of tasks in mistrustful cryptography are commitment schemes and secure computations, the latter including the further examples of coin flipping and oblivious transfer. Key distribution does not belong to the area of mistrustful cryptography. Mistrustful quantum cryptography studies the area of mistrustful cryptography using quantum systems.
In contrast to quantum key distribution where unconditional security can be achieved based only on the laws of quantum physics, in the case of various tasks in mistrustful cryptography there are no-go theorems showing that it is impossible to achieve unconditionally secure protocols based only on the laws of quantum physics. However, some of these tasks can be implemented with unconditional security if the protocols not only exploit quantum mechanics but also special relativity. For example, unconditionally secure quantum bit commitment was shown impossible by Mayers and by Lo and Chau. Unconditionally secure ideal quantum coin flipping was shown impossible by Lo and Chau. Moreover, Lo showed that there cannot be unconditionally secure quantum protocols for one-out-of-two oblivious transfer and other secure two-party computations. However, unconditionally secure relativistic protocols for coin flipping and bit-commitment have been shown by Kent.
==== Quantum coin flipping ====
Unlike quantum key distribution, quantum coin flipping is a protocol that is used between two participants who do not trust each other. The participants communicate via a quantum channel and exchange information through the transmission of qubits. But because Alice and Bob do not trust each other, each expects the other to cheat. Therefore, more effort must be spent on ensuring that neither Alice nor Bob can gain a significant advantage over the other to produce a desired outcome. An ability to influence a particular outcome is referred to as a bias, and there is a significant focus on developing protocols to reduce the bias of a dishonest player, otherwise known as cheating. Quantum communication protocols, including quantum coin flipping, have been shown to provide significant security advantages over classical communication, though they may be considered difficult to realize in the practical world.
A coin flip protocol generally occurs like this:
Alice chooses a basis (either rectilinear or diagonal) and generates a string of photons to send to Bob in that basis.
Bob randomly chooses to measure each photon in a rectilinear or diagonal basis, noting which basis he used and the measured value.
Bob publicly guesses which basis Alice used to send her qubits.
Alice announces the basis she used and sends her original string to Bob.
Bob confirms by comparing Alice's string to his table. It should be perfectly correlated with the values Bob measured using Alice's basis and completely uncorrelated with the opposite.
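The steps above can be sketched as an idealized simulation (honest parties, perfect devices, no losses; all parameters are illustrative):

```python
import random

random.seed(3)
n = 32

# Step 1: Alice picks a basis (0 = rectilinear, 1 = diagonal) and a bit string.
alice_basis = random.randrange(2)
alice_bits = [random.randrange(2) for _ in range(n)]

# Step 2: Bob measures each photon in a random basis; a mismatched basis is
# modeled as yielding an independent random bit.
bob_bases = [random.randrange(2) for _ in range(n)]
bob_bits = [b if bb == alice_basis else random.randrange(2)
            for b, bb in zip(alice_bits, bob_bases)]

# Step 3: Bob publicly guesses Alice's basis; step 4: Alice announces it.
bob_guess = random.randrange(2)

# Step 5: Bob checks that the positions he measured in Alice's basis are
# perfectly correlated with her announced string.
same = [i for i in range(n) if bob_bases[i] == alice_basis]
honest = all(bob_bits[i] == alice_bits[i] for i in same)
print(honest, bob_guess == alice_basis)  # verification passes; coin outcome
```

In this honest run the verification always succeeds; a cheating Alice who swaps her announced string after hearing Bob's guess would fail the correlation check with probability growing exponentially in the number of qubits.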
Cheating occurs when one player attempts to influence, or increase the probability of a particular outcome. The protocol discourages some forms of cheating; for example, Alice could cheat at step 4 by claiming that Bob incorrectly guessed her initial basis when he guessed correctly, but Alice would then need to generate a new string of qubits that perfectly correlates with what Bob measured in the opposite table. Her chance of generating a matching string of qubits will decrease exponentially with the number of qubits sent, and if Bob notes a mismatch, he will know she was lying. Alice could also generate a string of photons using a mixture of states, but Bob would easily see that her string will correlate partially (but not fully) with both sides of the table, and know she cheated in the process. There is also an inherent flaw that comes with current quantum devices. Errors and lost qubits will affect Bob's measurements, resulting in holes in Bob's measurement table. Significant losses in measurement will affect Bob's ability to verify Alice's qubit sequence in step 5.
One theoretically surefire way for Alice to cheat is to utilize the Einstein-Podolsky-Rosen (EPR) paradox. Two photons in an EPR pair are anticorrelated; that is, they will always be found to have opposite polarizations, provided that they are measured in the same basis. Alice could generate a string of EPR pairs, sending one photon per pair to Bob and storing the other herself. When Bob states his guess, she could measure her EPR pair photons in the opposite basis and obtain a perfect correlation to Bob's opposite table. Bob would never know she cheated. However, this requires capabilities that quantum technology currently does not possess, making it impossible to do in practice. To successfully execute this, Alice would need to be able to store all the photons for a significant amount of time as well as measure them with near perfect efficiency. This is because any photon lost in storage or in measurement would result in a hole in her string that she would have to fill by guessing. The more guesses she has to make, the more she risks detection by Bob for cheating.
==== Quantum commitment ====
In addition to quantum coin-flipping, quantum commitment protocols are implemented when distrustful parties are involved. A commitment scheme allows a party Alice to fix a certain value (to "commit") in such a way that Alice cannot change that value while at the same time ensuring that the recipient Bob cannot learn anything about that value until Alice reveals it. Such commitment schemes are commonly used in cryptographic protocols (e.g. Quantum coin flipping, Zero-knowledge proof, secure two-party computation, and Oblivious transfer).
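As a point of comparison only, a classical hash-based commitment illustrates the hiding and binding properties such schemes aim for (this is not a quantum protocol, it is only computationally secure, and the helper names are hypothetical):

```python
import hashlib
import secrets

# Commit: Alice binds herself to a value without revealing it. The random
# nonce hides the value; the hash binds her to it.
def commit(value: bytes):
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + value).digest(), nonce

# Reveal: Bob checks that the opened (nonce, value) matches the commitment.
def reveal_ok(commitment: bytes, nonce: bytes, value: bytes) -> bool:
    return hashlib.sha256(nonce + value).digest() == commitment

c, n = commit(b"heads")
print(reveal_ok(c, n, b"heads"))  # True: Alice's opening checks out
print(reveal_ok(c, n, b"tails"))  # False: she cannot change her committed value
```

Quantum and relativistic commitment protocols aim for these same two properties without relying on the computational hardness assumptions that protect the hash function here.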
In the quantum setting, they would be particularly useful: Crépeau and Kilian showed that from a commitment and a quantum channel, one can construct an unconditionally secure protocol for performing so-called oblivious transfer. Oblivious transfer, on the other hand, had been shown by Kilian to allow implementation of almost any distributed computation in a secure way (so-called secure multi-party computation). (Note: The results by Crépeau and Kilian together do not directly imply that given a commitment and a quantum channel one can perform secure multi-party computation. This is because the results do not guarantee "composability", that is, when plugging them together, one might lose security.)
Early quantum commitment protocols were shown to be flawed. In fact, Mayers showed that (unconditionally secure) quantum commitment is impossible: a computationally unlimited attacker can break any quantum commitment protocol.
Yet, the result by Mayers does not preclude the possibility of constructing quantum commitment protocols (and thus secure multi-party computation protocols) under assumptions that are much weaker than the assumptions needed for commitment protocols that do not use quantum communication. The bounded quantum storage model described below is an example of a setting in which quantum communication can be used to construct commitment protocols. A breakthrough in November 2013 offers "unconditional" security of information by harnessing quantum theory and relativity, which has been successfully demonstrated on a global scale for the first time. More recently, Wang et al. proposed another commitment scheme in which the "unconditional hiding" is perfect.
Physical unclonable functions can be also exploited for the construction of cryptographic commitments.
=== Bounded- and noisy-quantum-storage model ===
One possibility to construct unconditionally secure quantum commitment and quantum oblivious transfer (OT) protocols is to use the bounded quantum storage model (BQSM). In this model, it is assumed that the amount of quantum data that an adversary can store is limited by some known constant Q. However, no limit is imposed on the amount of classical (i.e., non-quantum) data the adversary may store.
In the BQSM, one can construct commitment and oblivious transfer protocols. The underlying idea is the following: The protocol parties exchange more than Q quantum bits (qubits). Since even a dishonest party cannot store all that information (the quantum memory of the adversary is limited to Q qubits), a large part of the data will have to be either measured or discarded. Forcing dishonest parties to measure a large part of the data allows the protocol to circumvent the impossibility result; commitment and oblivious transfer protocols can then be implemented.
The protocols in the BQSM presented by Damgård, Fehr, Salvail, and Schaffner do not assume that honest protocol participants store any quantum information; the technical requirements are similar to those in quantum key distribution protocols. These protocols can thus, at least in principle, be realized with today's technology. The communication complexity is only a constant factor larger than the bound Q on the adversary's quantum memory.
The advantage of the BQSM is that the assumption that the adversary's quantum memory is limited is quite realistic. With today's technology, storing even a single qubit reliably over a sufficiently long time is difficult. (What "sufficiently long" means depends on the protocol details. By introducing an artificial pause in the protocol, the amount of time over which the adversary needs to store quantum data can be made arbitrarily large.)
An extension of the BQSM is the noisy-storage model introduced by Wehner, Schaffner and Terhal. Instead of considering an upper bound on the physical size of the adversary's quantum memory, an adversary is allowed to use imperfect quantum storage devices of arbitrary size. The level of imperfection is modelled by noisy quantum channels. For high enough noise levels, the same primitives as in the BQSM can be achieved and the BQSM forms a special case of the noisy-storage model.
In the classical setting, similar results can be achieved when assuming a bound on the amount of classical (non-quantum) data that the adversary can store. It was proven, however, that in this model also the honest parties have to use a large amount of memory (namely the square-root of the adversary's memory bound). This makes these protocols impractical for realistic memory bounds. (Note that with today's technology such as hard disks, an adversary can cheaply store large amounts of classical data.)
=== Position-based quantum cryptography ===
The goal of position-based quantum cryptography is to use the geographical location of a player as its (only) credential. For example, one wants to send a message to a player at a specified position with the guarantee that it can only be read if the receiving party is located at that particular position. In the basic task of position-verification, a player, Alice, wants to convince the (honest) verifiers that she is located at a particular point. It has been shown by Chandran et al. that position-verification using classical protocols is impossible against colluding adversaries (who control all positions except the prover's claimed position). Under various restrictions on the adversaries, schemes are possible.
Under the name of 'quantum tagging', the first position-based quantum schemes were investigated in 2002 by Kent. A US patent was granted in 2006. The notion of using quantum effects for location verification first appeared in the scientific literature in 2010. After several other quantum protocols for position verification were suggested in 2010, Buhrman et al. claimed a general impossibility result: using an enormous amount of quantum entanglement (they use a doubly exponential number of EPR pairs, in the number of qubits the honest player operates on), colluding adversaries are always able to make it look to the verifiers as if they were at the claimed position. However, this result does not exclude the possibility of practical schemes in the bounded- or noisy-quantum-storage model (see above). Later, Beigi and König improved the amount of EPR pairs needed in the general attack against position-verification protocols to exponential. They also showed that a particular protocol remains secure against adversaries who control only a linear amount of EPR pairs. It is argued that due to time-energy coupling the possibility of formal unconditional location verification via quantum effects remains an open problem. The study of position-based quantum cryptography also has connections with the protocol of port-based quantum teleportation, which is a more advanced version of quantum teleportation, where many EPR pairs are simultaneously used as ports.
=== Device-independent quantum cryptography ===
A quantum cryptographic protocol is device-independent if its security does not rely on trusting that the quantum devices used are truthful. Thus the security analysis of such a protocol needs to consider scenarios of imperfect or even malicious devices. Mayers and Yao proposed the idea of designing quantum protocols using "self-testing" quantum apparatus, the internal operations of which can be uniquely determined by their input-output statistics. Subsequently, Roger Colbeck in his thesis proposed the use of Bell tests for checking the honesty of the devices. Since then, several problems have been shown to admit unconditionally secure and device-independent protocols, even when the actual devices performing the Bell test are substantially "noisy", i.e., far from being ideal. These problems include quantum key distribution, randomness expansion, and randomness amplification.
In 2018, theoretical studies performed by Arnon-Friedman et al. suggest that exploiting a property of entropy that is later referred to as the "Entropy Accumulation Theorem (EAT)", an extension of the asymptotic equipartition property, can guarantee the security of a device-independent protocol.
== Post-quantum cryptography ==
Quantum computers may become a technological reality; it is therefore important to study cryptographic schemes that are secure against adversaries with access to a quantum computer. The study of such schemes is often referred to as post-quantum cryptography. The need for post-quantum cryptography arises from the fact that many popular encryption and signature schemes (schemes based on ECC and RSA) can be broken using Shor's algorithm for factoring and computing discrete logarithms on a quantum computer. Examples of schemes that are, as of today's knowledge, secure against quantum adversaries are McEliece and lattice-based schemes, as well as most symmetric-key algorithms. Surveys of post-quantum cryptography are available.
There is also research into how existing cryptographic techniques have to be modified to be able to cope with quantum adversaries. For example, when trying to develop zero-knowledge proof systems that are secure against quantum adversaries, new techniques need to be used: In a classical setting, the analysis of a zero-knowledge proof system usually involves "rewinding", a technique that makes it necessary to copy the internal state of the adversary. In a quantum setting, copying a state is not always possible (no-cloning theorem); a variant of the rewinding technique has to be used.
Post-quantum algorithms are also called "quantum resistant", because – unlike quantum key distribution – it is not known or provable that there will not be potential future quantum attacks against them. Even though they may possibly be vulnerable to such attacks in the future, the NSA has announced plans to transition to quantum-resistant algorithms. The National Institute of Standards and Technology (NIST) believes that it is time to think of quantum-safe primitives.
== Quantum cryptography beyond key distribution ==
So far, quantum cryptography has been mainly identified with the development of quantum key distribution protocols. Symmetric cryptosystems with keys that have been distributed by means of quantum key distribution become inefficient for large networks (many users), because of the necessity for the establishment and the manipulation of many pairwise secret keys (the so-called "key-management problem"). Moreover, this distribution alone does not address many other cryptographic tasks and functions, which are of vital importance in everyday life. Kak's three-stage protocol has been proposed as a method for secure communication that is entirely quantum, unlike quantum key distribution, in which the cryptographic transformation uses classical algorithms.
Besides quantum commitment and oblivious transfer (discussed above), research on quantum cryptography beyond key distribution revolves around quantum message authentication, quantum digital signatures, quantum one-way functions and public-key encryption, quantum key-exchange, quantum fingerprinting and entity authentication (for example, see Quantum readout of PUFs), etc.
== Y-00 protocol ==
H. P. Yuen presented Y-00 as a stream cipher using quantum noise around 2000 and applied it for the U.S. Defense Advanced Research Projects Agency (DARPA) High-Speed and High-Capacity Quantum Cryptography Project as an alternative to quantum key distribution.
Unlike quantum key distribution protocols, the main purpose of Y-00 is to transmit a message without its being monitored by an eavesdropper, not to distribute a key. Therefore, privacy amplification may be used only for key distribution. Currently, research is being conducted mainly in Japan and China.
The principle of operation is as follows. First, legitimate users share a key and change it to a pseudo-random keystream using the same pseudo-random number generator. Then, the legitimate parties can perform conventional optical communications based on the shared key by transforming it appropriately. For attackers who do not share the key, the wire-tap channel model of Aaron D. Wyner is implemented. The legitimate users' advantage based on the shared key is called "advantage creation". The goal is to achieve longer covert communication than the information-theoretic security limit (one-time pad) set by Shannon. The source of the noise in the above wire-tap channel is the uncertainty principle of the electromagnetic field itself, which is a theoretical consequence of the theory of the laser described by Roy J. Glauber and E. C. George Sudarshan (coherent states). Therefore, existing optical communication technologies are sufficient for implementation, as some reviews describe.
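The shared-keystream step above can be sketched classically (a toy illustration only: the basis count `M`, the encoding rule, and every name below are invented for this sketch, and the quantum-noise masking that actually protects Y-00 against eavesdroppers is omitted entirely):

```python
import random

M = 64  # hypothetical number of signal bases

def keystream(seed, n):
    # Both parties expand the shared secret seed with the same PRNG.
    rng = random.Random(seed)
    return [rng.randrange(M) for _ in range(n)]

def encode(bits, ks):
    # Each data bit is sent on a basis chosen by the keystream; the
    # transmitted signal level depends on both the bit and the basis.
    return [(b + 2 * k) % (2 * M) for b, k in zip(bits, ks)]

def decode(signals, ks):
    # Knowing the keystream, the receiver inverts the basis shift.
    return [(s - 2 * k) % (2 * M) for s, k in zip(signals, ks)]

seed, bits = 1234, [1, 0, 1, 1, 0]
ks = keystream(seed, len(bits))
signals = encode(bits, ks)
print(decode(signals, ks))  # legitimate receiver recovers [1, 0, 1, 1, 0]
```

An attacker without the seed sees only signal levels whose basis offsets look random; in the real protocol, quantum noise additionally masks neighboring levels.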
Furthermore, since it uses ordinary communication laser light, it is compatible with existing communication infrastructure and can be used for high-speed, long-distance communication and routing.
Although the main purpose of the protocol is to transmit the message, key distribution is possible by simply replacing the message with a key. Since it is a symmetric-key cipher, the parties must share an initial key beforehand; however, a method of initial key agreement has also been proposed.
On the other hand, it is currently unclear which implementations realize information-theoretic security, and the security of this protocol has long been a matter of debate.
== Implementation in practice ==
In theory, quantum cryptography seems to be a successful turning point in the information security sector. However, no cryptographic method can ever be absolutely secure. In practice, quantum cryptography is only conditionally secure, dependent on a key set of assumptions.
=== Single-photon source assumption ===
The theoretical basis for quantum key distribution assumes the use of single-photon sources. However, such sources are difficult to construct, and most real-world quantum cryptography systems use faint laser sources as a medium for information transfer. These multi-photon sources open the possibility for eavesdropper attacks, particularly a photon-splitting attack. An eavesdropper, Eve, can split off a photon from a multi-photon pulse and retain it for herself. The other photons are then transmitted to Bob without any measurement or trace that Eve captured a copy of the data. Scientists believe they can retain security with a multi-photon source by using decoy states that test for the presence of an eavesdropper. However, in 2016, scientists developed a near-perfect single-photon source, and estimate that a practical one could be available in the near future.
=== Identical detector efficiency assumption ===
In practice, multiple single-photon detectors are used in quantum key distribution devices, one for Alice and one for Bob. These photodetectors are tuned to detect an incoming photon during a short window of only a few nanoseconds. Due to manufacturing differences between the two detectors, their respective detection windows will be shifted by some finite amount. An eavesdropper, Eve, can take advantage of this detector inefficiency by measuring Alice's qubit and sending a "fake state" to Bob. Eve first captures the photon sent by Alice and then generates another photon to send to Bob. Eve manipulates the phase and timing of the "faked" photon in a way that prevents Bob from detecting the presence of an eavesdropper. The only way to eliminate this vulnerability is to eliminate differences in photodetector efficiency, which is difficult to do given finite manufacturing tolerances that cause optical path length differences, wire length differences, and other defects.
=== Deprecation of quantum key distributions from governmental institutions ===
Because of the practical problems with quantum key distribution, some governmental organizations recommend the use of post-quantum cryptography (quantum-resistant cryptography) instead. For example, the US National Security Agency, the European Union Agency for Cybersecurity (ENISA), the UK's National Cyber Security Centre, the French Secretariat for Defense and Security (ANSSI), and the German Federal Office for Information Security (BSI) recommend post-quantum cryptography.
For example, the US National Security Agency addresses five issues:
Quantum key distribution is only a partial solution. QKD generates keying material for an encryption algorithm that provides confidentiality. Such keying material could also be used in symmetric key cryptographic algorithms to provide integrity and authentication if one has the cryptographic assurance that the original QKD transmission comes from the desired entity (i.e. entity source authentication). QKD does not provide a means to authenticate the QKD transmission source. Therefore, source authentication requires the use of asymmetric cryptography or pre-placed keys to provide that authentication. Moreover, the confidentiality services QKD offers can be provided by quantum-resistant cryptography, which is typically less expensive with a better understood risk profile.
Quantum key distribution requires special purpose equipment. QKD is based on physical properties, and its security derives from unique physical layer communications. This requires users to lease dedicated fiber connections or physically manage free-space transmitters. It cannot be implemented in software or as a service on a network, and cannot be easily integrated into existing network equipment. Since QKD is hardware-based it also lacks flexibility for upgrades or security patches.
Quantum key distribution increases infrastructure costs and insider-threat risks. QKD networks frequently necessitate the use of trusted relays, entailing additional cost for secure facilities and additional security risk from insider threats. This eliminates many use cases from consideration.
Securing and validating quantum key distribution is a significant challenge. The actual security provided by a QKD system is not the theoretical unconditional security from the laws of physics (as modeled and often suggested), but rather the more limited security that can be achieved by hardware and engineering designs. The tolerance for error in cryptographic security, however, is many orders of magnitude smaller than what is available in most physical engineering scenarios, making it very difficult to validate. The specific hardware used to perform QKD can introduce vulnerabilities, resulting in several well-publicized attacks on commercial QKD systems.
Quantum key distribution increases the risk of denial of service. The sensitivity to an eavesdropper as the theoretical basis for QKD security claims also shows that denial of service is a significant risk for QKD.
In response to problem 1 above, attempts to deliver authentication keys using post-quantum cryptography (or quantum-resistant cryptography) have been proposed worldwide. On the other hand, quantum-resistant cryptography belongs to the class of computationally secure cryptography. In 2015, a research result was already published that "sufficient care must be taken in implementation to achieve information-theoretic security for the system as a whole when authentication keys that are not information-theoretic secure are used" (if the authentication key is not information-theoretically secure, an attacker can break it to bring all classical and quantum communications under control and relay them to launch a man-in-the-middle attack).
Ericsson, a private company, also cites and points out the above problems, and presents a report suggesting that QKD may not be able to support the zero-trust security model, a recent trend in network security technology.
=== Quantum cryptography in education ===
Quantum cryptography, specifically the BB84 protocol, has become an important topic in physics and computer science education. The challenge of teaching quantum cryptography lies in the technical requirements and the conceptual complexity of quantum mechanics. However, simplified experimental setups for educational purposes are becoming more common, allowing undergraduate students to engage with the core principles of quantum key distribution (QKD) without requiring advanced quantum technology.
== References ==
In physics, quantum dynamics is the quantum version of classical dynamics. Quantum dynamics deals with the motions, and energy and momentum exchanges of systems whose behavior is governed by the laws of quantum mechanics. Quantum dynamics is relevant for burgeoning fields, such as quantum computing and atomic optics.
In mathematics, quantum dynamics is the study of the mathematics behind quantum mechanics. Specifically, as a study of dynamics, this field investigates how quantum mechanical observables change over time. Most fundamentally, this involves the study of one-parameter automorphisms of the algebra of all bounded operators on the Hilbert space of observables (which are self-adjoint operators). These dynamics were understood as early as the 1930s, after Wigner, Stone, Hahn and Hellinger worked in the field. Recently, mathematicians in the field have studied irreversible quantum mechanical systems on von Neumann algebras.
== Relation to classical dynamics ==
Equations to describe quantum systems can be seen as equivalent to those of classical dynamics on a macroscopic scale, except for the important detail that the variables don't follow the commutative laws of multiplication. Hence, as a fundamental principle, these variables are instead described as "q-numbers", conventionally represented by operators or Hermitian matrices on a Hilbert space. Indeed, the state of the system in the atomic and subatomic scale is described not by dynamic variables with specific numerical values, but by state functions that are dependent on the c-number time. In this realm of quantum systems, the equation of motion governing dynamics heavily relies on the Hamiltonian, also known as the total energy. Therefore, to anticipate the time evolution of the system, one only needs to determine the initial condition of the state function |Ψ(t)⟩ and its first derivative with respect to time.
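As a minimal numerical sketch of this dependence on the Hamiltonian (a hypothetical two-level system with ħ = 1, invented for illustration), the state at time t follows from the initial condition alone via |Ψ(t)⟩ = e^(−iHt)|Ψ(0)⟩:

```python
import numpy as np

# Toy two-level system: H = (omega/2) * sigma_x, with hbar = 1.
omega = 1.0
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
H = 0.5 * omega * sigma_x

psi0 = np.array([1.0, 0.0], dtype=complex)  # initial condition |Psi(0)> = |0>

# |Psi(t)> = exp(-i H t) |Psi(0)>, built from the eigendecomposition of H.
t = np.pi
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
psi_t = U @ psi0

print(np.abs(psi_t) ** 2)          # occupation probabilities, here ~[0, 1]
print(np.vdot(psi_t, psi_t).real)  # norm stays 1: the evolution is unitary
```

The Hermitian matrix H plays exactly the role described above: once H and |Ψ(0)⟩ are fixed, the entire trajectory of the state is determined.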
For example, quasi-free states and automorphisms are the Fermionic counterparts of classical Gaussian measures (Fermions' descriptors are Grassmann operators).
== See also ==
Quantum Field Theory
Perturbation theory
Semigroups
Pseudodifferential operators
Brownian motion
Dilation theory
Quantum probability
Free probability
== References ==
Chaos theory is an interdisciplinary area of scientific study and branch of mathematics. It focuses on underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions. These were once thought to have completely random states of disorder and irregularities. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnection, constant feedback loops, repetition, self-similarity, fractals and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause or prevent a tornado in Texas.: 181–184
Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general. This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as:
Chaos: When the present determines the future but the approximate present does not approximately determine the future.
Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through the analysis of a chaotic mathematical model or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology, anthropology, sociology, environmental science, computer science, engineering, economics, ecology, and pandemic crisis management. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory and self-assembly processes.
== Introduction ==
Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random.
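The exponential growth of forecast uncertainty can be illustrated numerically (a sketch using the fully chaotic logistic map x → 4x(1 − x), whose Lyapunov exponent is ln 2; the initial error and tolerance below are arbitrary choices):

```python
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)

eps0, tol = 1e-10, 0.1          # initial measurement error, forecast tolerance
x, y = 0.3, 0.3 + eps0          # two nearly identical initial states
lam = math.log(2)               # Lyapunov exponent of this map

# Predictability horizon: error ~ eps0 * exp(lam * n) reaches tol after
# n ~ ln(tol / eps0) / lam iterations.
horizon = math.log(tol / eps0) / lam
print(f"predicted horizon: ~{horizon:.0f} iterations")  # ~30

step = 0
while abs(x - y) < tol and step < 100:
    x, y = logistic(x), logistic(y)
    step += 1
print(f"trajectories separated after {step} iterations")
```

Because the error grows exponentially, halving the initial uncertainty buys only one extra iteration of predictability, in line with the Lyapunov-time discussion above.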
== Chaotic dynamics ==
In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties:
it must be sensitive to initial conditions,
it must be topologically transitive,
it must have dense periodic orbits.
In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In the discrete-time case, this is true for all continuous maps on metric spaces. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition.
If attention is restricted to intervals, the second property implies the other two. An alternative and a generally weaker definition of chaos uses only the first two properties in the above list.
=== Sensitivity to initial conditions ===
Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior.
Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different.
As suggested in Lorenz's book entitled The Essence of Chaos, published in 1993,: 8 "sensitive dependence can serve as an acceptable definition of chaos". In the same book, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration.": 23 The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions.: 189–204 A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories).
A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future—only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will not naturally reach 100 °C (212 °F) or fall below −130 °C (−202 °F) on earth (during the current geologic era), but we cannot predict exactly which day will have the hottest temperature of the year.
In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of the rate of exponential divergence from the perturbed initial conditions. More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation {\displaystyle \delta \mathbf {Z} _{0}}, the two trajectories end up diverging at a rate given by

{\displaystyle |\delta \mathbf {Z} (t)|\approx e^{\lambda t}|\delta \mathbf {Z} _{0}|,}

where {\displaystyle t} is the time and {\displaystyle \lambda } is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE, coupled with the solution's boundedness, is usually taken as an indication that the system is chaotic.
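For a one-dimensional map, the MLE can be estimated as the orbit average of ln |f′(x)| (a sketch for the logistic map x → 4x(1 − x), whose exact exponent is ln 2 ≈ 0.693; the transient length and sample count are arbitrary choices):

```python
import math

def f(x):
    return 4.0 * x * (1.0 - x)   # the logistic map

x = 0.3
for _ in range(1000):            # discard transient iterations
    x = f(x)

n, total = 10_000, 0.0
for _ in range(n):
    total += math.log(abs(4.0 - 8.0 * x))  # ln |f'(x)|, with f'(x) = 4 - 8x
    x = f(x)

mle = total / n
print(f"estimated MLE: {mle:.3f} (exact value: ln 2 = {math.log(2):.3f})")
```

A positive estimate like this one, together with the boundedness of the orbit in [0, 1], is the usual numerical evidence for chaos mentioned above.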
In addition to the above property, other properties related to sensitivity to initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system.
=== Non-periodicity ===
A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact, will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior.
=== Topological mixing ===
Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system.
Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity.
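The doubling example can be made concrete with a minimal sketch: nearby points under x → 2x separate exponentially fast, yet both orbits simply run off monotonically to infinity, so regions of phase space never mix.

```python
x, y = 1.0, 1.0 + 1e-9       # two nearby initial values
for _ in range(40):
    x, y = 2.0 * x, 2.0 * y  # the doubling system x -> 2x

print(abs(y - x))  # separation amplified by 2**40: sensitive dependence
print(x, y)        # but both orbits just diverge to +infinity: no mixing
```

Sensitive dependence alone is therefore not enough; without topological mixing the behavior remains trivially predictable.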
=== Topological transitivity ===
A map {\displaystyle f:X\to X} is said to be topologically transitive if for any pair of non-empty open sets {\displaystyle U,V\subset X}, there exists {\displaystyle k>0} such that {\displaystyle f^{k}(U)\cap V\neq \emptyset }. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point x and a region V, there exists a point y near x whose orbit passes through V. This implies that it is impossible to decompose the system into two disjoint open sets that are each invariant under the map.
An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity. The Birkhoff Transitivity Theorem states that if X is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in X that have dense orbits.
=== Density of periodic orbits ===
For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by x → 4 x (1 – x) is one of the simplest systems with density of periodic orbits. For example,
{\displaystyle {\tfrac {5-{\sqrt {5}}}{8}}\to {\tfrac {5+{\sqrt {5}}}{8}}\to {\tfrac {5-{\sqrt {5}}}{8}}}
(or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).
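The period-2 orbit above is easy to verify numerically; the multiplier computation below is a standard stability check (not from the text) confirming that the orbit is repelling:

```python
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)

p = (5.0 - math.sqrt(5.0)) / 8.0   # ~0.3454915
q = (5.0 + math.sqrt(5.0)) / 8.0   # ~0.9045085

# The map swaps the two points, so each is fixed by the second iterate.
# The multiplier of the 2-cycle is f'(p)*f'(q) = 16*(1-2p)*(1-2q) = -4;
# since |-4| > 1, the orbit is unstable (repelling).
multiplier = 16.0 * (1.0 - 2.0 * p) * (1.0 - 2.0 * q)
```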
Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits.
=== Strange attractors ===
Some dynamical systems, like the one-dimensional logistic map defined by x → 4 x (1 – x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region.
An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed both orbits shown in the figure on the right give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly.
Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them.
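As a hedged sketch, the Hénon map mentioned above can be iterated directly; the initial point and the bounds in the comments are illustrative choices:

```python
def henon(x, y, a=1.4, b=0.3):
    """One step of the Henon map; (a, b) = (1.4, 0.3) is the classic
    parameter choice that produces the strange attractor."""
    return 1.0 - a * x * x + y, b * x

x, y = 0.1, 0.1
points = []
for i in range(1000):
    x, y = henon(x, y)
    if i >= 100:           # discard the transient approach to the attractor
        points.append((x, y))
# The orbit settles onto a bounded, fractal-looking set (the attractor),
# lying roughly within |x| < 1.5 and |y| < 0.45.
```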
=== Coexisting attractors ===
In contrast to single-type chaotic solutions, studies using Lorenz models have emphasized the importance of considering various types of solutions. For example, coexisting chaotic and non-chaotic solutions may appear within the same model (e.g., the double pendulum system) using the same modeling configurations but different initial conditions. The findings of attractor coexistence, obtained from classical and generalized Lorenz models, suggested a revised view that "the entirety of weather possesses a dual nature of chaos and order with distinct predictability", in contrast to the conventional view of "weather is chaotic".
=== Minimum complexity of a chaotic system ===
Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional.
The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations such as:
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} x}{\mathrm {d} t}}&=\sigma y-\sigma x,\\{\frac {\mathrm {d} y}{\mathrm {d} t}}&=\rho x-xz-y,\\{\frac {\mathrm {d} z}{\mathrm {d} t}}&=xy-\beta z.\end{aligned}}}
where {\displaystyle x}, {\displaystyle y}, and {\displaystyle z} make up the system state, {\displaystyle t} is time, and {\displaystyle \sigma }, {\displaystyle \rho }, {\displaystyle \beta } are the system parameters. Five of the terms on the right-hand side are linear, while two are quadratic, for a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms and only one nonlinear term, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore are well behaved.
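A minimal sketch of these equations in code, using a crude forward-Euler integrator and the commonly used parameter values σ = 10, ρ = 28, β = 8/3 (an assumption; the text does not fix them), shows both boundedness and sensitive dependence:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations above (a rough
    integrator, adequate for illustration only)."""
    x, y, z = state
    dx = sigma * y - sigma * x
    dy = rho * x - x * z - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)       # differs in x by one part in 10^8
for _ in range(3000):            # roughly 30 time units
    a, b = lorenz_step(a), lorenz_step(b)

gap = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
# Both orbits stay in a bounded region, yet the tiny initial difference
# has grown to a macroscopic separation on the attractor.
```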
While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can still exhibit some chaotic properties. Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite dimensional. A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis.
The above set of three ordinary differential equations has been referred to as the three-dimensional Lorenz model. Since 1963, higher-dimensional Lorenz models have been developed in numerous studies for examining the impact of an increased degree of nonlinearity, as well as its collective effect with heating and dissipation, on solution stability.
=== Infinite dimensional maps ===
A straightforward generalization of coupled discrete maps is based upon a convolution integral that mediates the interaction between spatially distributed maps:
{\displaystyle \psi _{n+1}({\vec {r}},t)=\int K({\vec {r}}-{\vec {r}}^{\prime },t)\,f[\psi _{n}({\vec {r}}^{\prime },t)]\,d{\vec {r}}^{\prime }},
where the kernel {\displaystyle K({\vec {r}}-{\vec {r}}^{\prime },t)} is a propagator derived as the Green function of the relevant physical system, and {\displaystyle f[\psi _{n}({\vec {r}},t)]} might be a logistic-map-like nonlinearity such as {\displaystyle \psi \rightarrow G\psi [1-\tanh(\psi )]}
or a complex map. Examples of complex maps include the Julia set map {\displaystyle f[\psi ]=\psi ^{2}} and the Ikeda map {\displaystyle \psi _{n+1}=A+B\psi _{n}e^{i(|\psi _{n}|^{2}+C)}}. When wave propagation problems at a distance {\displaystyle L=ct} with wavelength {\displaystyle \lambda =2\pi /k} are considered, the kernel {\displaystyle K} may take the form of the Green function for the Schrödinger equation:
{\displaystyle K({\vec {r}}-{\vec {r}}^{\prime },L)={\frac {ik\exp[ikL]}{2\pi L}}\exp \left[{\frac {ik|{\vec {r}}-{\vec {r}}^{\prime }|^{2}}{2L}}\right]}.
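As an illustration, the Ikeda-type map quoted above can be iterated for a single (zero-dimensional) field value; the parameters A, B, C below are arbitrary choices, not taken from the text:

```python
import cmath

def ikeda_step(psi, A=1.0, B=0.9, C=0.4):
    """One step of the Ikeda-type map psi -> A + B*psi*exp(i(|psi|^2 + C)).
    The parameter values here are illustrative, not from the text."""
    return A + B * psi * cmath.exp(1j * (abs(psi) ** 2 + C))

psi = 0.1 + 0.0j
orbit = []
for _ in range(500):
    psi = ikeda_step(psi)
    orbit.append(psi)
# Since |psi_{n+1}| <= A + B*|psi_n| and B < 1, the orbit is trapped in
# the disk |psi| <= A/(1-B) = 10, however irregularly it wanders there.
```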
== Spontaneous order ==
Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system.
Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions.
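A minimal sketch of Kuramoto synchronization, assuming Gaussian natural frequencies and a mean-field Euler integration (all numeric choices below are illustrative, not from the text):

```python
import math
import random

def simulate_kuramoto(n=50, K=4.0, dt=0.01, steps=2000, seed=1):
    """Euler simulation of the Kuramoto model
    d(theta_i)/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i),
    written in the equivalent mean-field form with order parameter
    r * e^{i*phi}.  Returns the final order parameter r."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 1.0) for _ in range(n)]        # natural frequencies
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        cx = sum(math.cos(t) for t in theta) / n
        sx = sum(math.sin(t) for t in theta) / n
        r, phi = math.hypot(cx, sx), math.atan2(sx, cx)
        theta = [t + dt * (w + K * r * math.sin(phi - t))
                 for t, w in zip(theta, omega)]
    cx = sum(math.cos(t) for t in theta) / n
    sx = sum(math.sin(t) for t in theta) / n
    return math.hypot(cx, sx)   # ~0: incoherent, ~1: synchronized
```

With strong coupling the oscillators fall into near-lockstep (r close to 1); with the coupling switched off they stay incoherent.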
Moreover, from the theoretical physics standpoint, dynamical chaos itself, in its most general manifestation, is a spontaneous order. The essence here is that most orders in nature arise from the spontaneous breakdown of various symmetries. This large family of phenomena includes elasticity, superconductivity, ferromagnetism, and many others. According to the supersymmetric theory of stochastic dynamics, chaos, or more precisely, its stochastic generalization, is also part of this family. The corresponding symmetry being broken is the topological supersymmetry which is hidden in all stochastic (partial) differential equations, and the corresponding order parameter is a field-theoretic embodiment of the butterfly effect.
== History ==
James Clerk Maxwell first emphasized the "butterfly effect", and is seen as being one of the earliest to discuss chaos theory, with work in the 1860s and 1870s. An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point. In 1898, Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards". Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent.
Chaos theory began in the field of ergodic theory. Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff, Andrey Nikolaevich Kolmogorov, Mary Lucy Cartwright and John Edensor Littlewood, and Stephen Smale. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing.
Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments like that of the logistic map. What had been attributed to measurement imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems. In 1959 Boris Valerianovich Chirikov proposed a criterion for the emergence of classical chaos in Hamiltonian systems (Chirikov criterion). He applied this criterion to explain some experimental results on plasma confinement in open mirror traps. This is regarded as the very first physical theory of chaos, which succeeded in explaining a concrete experiment. And Boris Chirikov himself is considered as a pioneer in classical and quantum chaos.
The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.
Edward Lorenz was an early pioneer of the theory. His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz and his collaborators Ellen Fetter and Margaret Hamilton were using a simple digital computer, a Royal McBee LGP-30, to run weather simulations. They wanted to see a sequence of data again, and to save time they started the simulation in the middle of its course. They did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To their surprise, the weather the machine began to predict was completely different from the previous calculation. They tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modeling cannot, in general, make precise long-term weather predictions.
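Lorenz's rounding accident is easy to reproduce in miniature. The sketch below uses the logistic map as a stand-in for his weather model (an assumption made purely for illustration):

```python
def step(x):
    """Stand-in chaotic map (logistic, r = 4).  Lorenz's actual program
    was a multi-variable weather model; this only illustrates the
    rounding effect described above."""
    return 4.0 * x * (1.0 - x)

full, rounded = 0.506127, 0.506   # same 3-digit printout, different states
max_gap = 0.0
for _ in range(60):
    full, rounded = step(full), step(rounded)
    max_gap = max(max_gap, abs(full - rounded))
# The ~1e-4 discrepancy roughly doubles each iteration, so within a few
# dozen steps the two runs are completely decorrelated.
```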
In 1963, Benoit Mandelbrot, studying information theory, discovered that noise in many phenomena (including stock prices and telephone circuits) was patterned like a Cantor set, a set of points with infinite roughness and detail. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Arguing that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or snowflake, which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1982, Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory.
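The fractal dimensions quoted above follow from the similarity-dimension formula for self-similar sets: a set made of N copies of itself, each scaled down by a factor s, has dimension log N / log s.

```python
import math

def similarity_dimension(copies, scale):
    """Similarity dimension of a self-similar fractal made of `copies`
    pieces, each scaled down by `scale`."""
    return math.log(copies) / math.log(scale)

koch = similarity_dimension(4, 3)        # Koch curve: 4 pieces at scale 1/3
sierpinski = similarity_dimension(3, 2)  # Sierpinski gasket: 3 pieces at 1/2
# koch ~ 1.2619, matching the "circa 1.2619" quoted above.
```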
In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw, and the meteorologist Edward Lorenz. The following year Pierre Coullet and Charles Tresser published "Itérations d'endomorphismes et groupe de renormalisation", and Mitchell Feigenbaum's article "Quantitative Universality for a Class of Nonlinear Transformations" finally appeared in a journal, after 3 years of referee rejections. Thus Feigenbaum (1975) and Coullet & Tresser (1978) discovered the universality in chaos, permitting the application of chaos theory to many different phenomena.
In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum for their inspiring achievements.
In 1986, the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye tracking dysfunction among people with schizophrenia. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example, in the study of pathological cardiac cycles.
In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters describing for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity arises in nature.
Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including earthquakes (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks), solar flares, fluctuations in economic systems such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires, landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws.
Also in 1987 James Gleick published Chaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public. Initially the domain of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift exposed in The Structure of Scientific Revolutions (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick.
The availability of cheaper, more powerful computers has broadened the applicability of chaos theory. Currently, chaos theory remains an active area of research, involving many different disciplines such as mathematics, topology, physics, social systems, population modeling, biology, meteorology, astrophysics, information theory, computational neuroscience, pandemic crisis management, etc.
== A popular but inaccurate analogy for chaos ==
The sensitive dependence on initial conditions (i.e., the butterfly effect) has been illustrated using the following folklore:

For want of a nail, the shoe was lost.
For want of a shoe, the horse was lost.
For want of a horse, the rider was lost.
For want of a rider, the battle was lost.
For want of a battle, the kingdom was lost.
And all for the want of a horseshoe nail.
Based on the above, many people mistakenly believe that the impact of a tiny initial perturbation monotonically increases with time and that any tiny perturbation can eventually produce a large impact on numerical integrations. However, in 2008, Lorenz stated that he did not feel that this verse described true chaos; rather, it better illustrates the simpler phenomenon of instability, since the verse implicitly suggests that subsequent small events will not reverse the outcome. On this analysis, the verse indicates only divergence, not boundedness, yet boundedness is important for the finite size of a butterfly pattern. The characteristic of the aforementioned verse has accordingly been described as "finite-time sensitive dependence".
== Applications ==
Although chaos theory was born from observing weather patterns, it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today are geology, mathematics, biology, computer science, economics, engineering, finance, meteorology, philosophy, anthropology, physics, politics, population dynamics, and robotics. A few categories are listed below with examples, but this is by no means a comprehensive list as new applications are appearing.
=== Cryptography ===
Chaos theory has been used for many years in cryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic primitives, including image encryption algorithms, hash functions, secure pseudo-random number generators, stream ciphers, watermarking, and steganography. The majority of these algorithms are based on uni-modal chaotic maps, and a large portion of them use the control parameters and the initial condition of the chaotic maps as their keys. From a wider perspective, the similarities between chaotic maps and cryptographic systems are the main motivation for the design of chaos-based cryptographic algorithms. One type of encryption, secret key or symmetric key, relies on diffusion and confusion, which is modeled well by chaos theory. Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information. However, many of the DNA–chaos cryptographic algorithms have been shown to be insecure, or the technique applied has been suggested to be inefficient.
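A toy example of the idea (the initial condition of a chaotic map acting as a secret key) can be sketched as follows; this is purely illustrative and deliberately NOT a secure construction:

```python
def logistic_keystream(key, n, r=3.99):
    """Toy keystream from a logistic map whose initial condition serves
    as the secret key.  Illustrative of the chaos-cryptography idea
    described above -- NOT a secure cipher."""
    x = key                      # key must lie in (0, 1)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_cipher(data, key):
    """XOR the data against the chaotic keystream; applying it twice
    with the same key recovers the original bytes."""
    ks = logistic_keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

ciphertext = xor_cipher(b"chaos", 0.12345)
```

Real chaos-based primitives are far more elaborate, and (as the text notes) many published designs have been broken.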
=== Robotics ===
Robotics is another area that has recently benefited from chaos theory. Instead of robots acting in a trial-and-error type of refinement to interact with their environment, chaos theory has been used to build a predictive model.
Chaotic dynamics have been exhibited by passive walking biped robots.
=== Biology ===
For over a hundred years, biologists have been keeping track of populations of different species with population models. Most models are continuous, but recently scientists have been able to implement chaotic models in certain populations. For example, a study on models of Canadian lynx showed there was chaotic behavior in the population growth. Chaos can also be found in ecological systems, such as hydrology. While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory. Another biological application is found in cardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs of fetal hypoxia can be obtained through chaotic modeling.
As Perry points out, modeling of chaotic time series in ecology is helped by constraint. There is always potential difficulty in distinguishing real chaos from chaos that exists only in the model, so both constraint in the model and duplicate time-series data for comparison help to constrain the model to something close to reality, as in Perry & Wall (1984). Gene-for-gene co-evolution sometimes shows chaotic dynamics in allele frequencies. Adding variables exaggerates this: chaos is more common in models incorporating additional variables to reflect additional facets of real populations. Robert M. May himself did some of these foundational crop co-evolution studies, which in turn helped shape the entire field. Even in a steady environment, merely combining one crop and one pathogen may result in quasi-periodic or chaotic oscillations in pathogen population.
=== Economics ===
It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task. Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships.
Chaos could be found in economics by the means of recurrence quantification analysis. In fact, Orlando et al. by the means of the so-called recurrence quantification correlation index were able to detect hidden changes in time series. Then, the same technique was employed to detect transitions from laminar (regular) to turbulent (chaotic) phases as well as differences between macroeconomic variables and highlight hidden features of economic dynamics. Finally, chaos theory could help in modeling how an economy operates as well as in embedding shocks due to external events such as COVID-19.
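The simplest recurrence-quantification measure, the recurrence rate, can be sketched as follows; the recurrence quantification correlation index used by Orlando et al. is a more refined statistic, so this is only a schematic illustration:

```python
def recurrence_rate(series, eps):
    """Fraction of index pairs (i, j) whose values lie within eps of
    each other -- the simplest recurrence-quantification measure,
    computed here on a raw scalar series rather than an embedded
    trajectory."""
    n = len(series)
    hits = sum(1 for i in range(n) for j in range(n)
               if abs(series[i] - series[j]) < eps)
    return hits / (n * n)

# Apply it to a chaotic series from the logistic map.
x, series = 0.3, []
for _ in range(300):
    x = 4.0 * x * (1.0 - x)
    series.append(x)

rr = recurrence_rate(series, 0.05)   # close returns occur, but sparsely
```

In practice the series is first embedded in a delay-coordinate space and further measures (e.g. determinism from diagonal-line structures) distinguish laminar from turbulent phases.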
=== Finite predictability in weather and climate ===
Due to the sensitive dependence of solutions on initial conditions (SDIC), also known as the butterfly effect, chaotic systems like the Lorenz 1963 model imply a finite predictability horizon. This means that while accurate predictions are possible over a finite time period, they are not feasible over an infinite time span. Considering the nature of Lorenz's chaotic solutions, the committee led by Charney et al. in 1966 extrapolated a doubling time of five days from a general circulation model, suggesting a predictability limit of two weeks. This connection between the five-day doubling time and the two-week predictability limit was also recorded in a 1969 report by the Global Atmospheric Research Program (GARP). To acknowledge the combined direct and indirect influences from the Mintz and Arakawa model and Lorenz's models, as well as the leadership of Charney et al., Shen et al. refer to the two-week predictability limit as the "Predictability Limit Hypothesis," drawing an analogy to Moore's Law.
=== AI-extended modeling framework ===
In AI-driven large language models, responses can exhibit sensitivities to factors like alterations in formatting and variations in prompts. These sensitivities are akin to butterfly effects. Although classifying AI-powered large language models as classical deterministic chaotic systems poses challenges, chaos-inspired approaches and techniques (such as ensemble modeling) may be employed to extract reliable information from these expansive language models (see also "Butterfly Effect in Popular Culture").
=== Other areas ===
In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck. In celestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets. Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical engineering, the study of large arrays of Josephson junctions benefitted greatly from chaos theory. Closer to home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths. Until recently, there was no reliable way to predict when they would occur. But these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately.
Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from lack of reproducibility; poor external validity; and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass and Mandell and Selz have found that no EEG study has as yet indicated the presence of strange attractors or other signs of chaotic behavior.
Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but also when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so.
In their 1995 paper, Metcalf and Allen maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r.
Time series and first delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. The various phase trajectory plots and spectral analyses, on the other hand, do not match up well enough with the other graphs or with the overall theory to lead inexorably to a chaotic diagnosis. For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternative interpretations. All of this ambiguity necessitates some serpentine, post-hoc explanation to show that the results fit a chaotic model.
By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions. Modern organizations are increasingly seen as open complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces that may contribute chaos. For instance, team building and group development is increasingly being researched as an inherently unpredictable system, as the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable.
Traffic forecasting may benefit from applications of chaos theory. Better predictions of when a congestion will occur would allow measures to be taken to disperse it before it would have occurred. Combining chaos theory principles with a few other methods has led to a more accurate short-term prediction model (see the plot of the BML traffic model at right).
Chaos theory has been applied to environmental water cycle data (also hydrological data), such as rainfall and streamflow. These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimension chaotic dynamics.
== See also ==
Examples of chaotic systems
Other related topics
People
== References ==
Attribution
This article incorporates text from a free content work. Licensed under CC-BY (license statement/permission). Text taken from Three Kinds of Butterfly Effects within Lorenz Models, Bo-Wen Shen, Roger A. Pielke, Sr., Xubin Zeng, Jialin Cui, Sara Faghih-Naini, Wei Paxson, and Robert Atlas, MDPI. Encyclopedia.
== Further reading ==
=== Articles ===
Sharkovskii, A.N. (1964). "Co-existence of cycles of a continuous mapping of the line into itself". Ukrainian Math. J. 16: 61–71.
Li, T.Y.; Yorke, J.A. (1975). "Period Three Implies Chaos" (PDF). American Mathematical Monthly. 82 (10): 985–92. Bibcode:1975AmMM...82..985L. CiteSeerX 10.1.1.329.5038. doi:10.2307/2318254. JSTOR 2318254. Archived from the original (PDF) on 2009-12-29. Retrieved 2009-08-12.
Alemansour, Hamed; Miandoab, Ehsan Maani; Pishkenari, Hossein Nejat (March 2017). "Effect of size on the chaotic behavior of nano resonators". Communications in Nonlinear Science and Numerical Simulation. 44: 495–505. Bibcode:2017CNSNS..44..495A. doi:10.1016/j.cnsns.2016.09.010.
Crutchfield, J.P.; Farmer, J.D.; Packard, N.H.; Shaw, R.S. (December 1986). "Chaos". Scientific American. 255 (6): 38–49 (bibliography p.136). Bibcode:1986SciAm.255d..38T. doi:10.1038/scientificamerican1286-46. Online version (Note: the volume and page citation cited for the online text differ from that cited here. The citation here is from a photocopy, which is consistent with other citations found online that don't provide article views. The online content is identical to the hardcopy text. Citation variations are related to country of publication).
Kolyada, S.F. (2004). "Li-Yorke sensitivity and other concepts of chaos". Ukrainian Math. J. 56 (8): 1242–57. doi:10.1007/s11253-005-0055-4. S2CID 207251437.
Day, R.H.; Pavlov, O.V. (2004). "Computing Economic Chaos". Computational Economics. 23 (4): 289–301. arXiv:2211.02441. doi:10.1023/B:CSEM.0000026787.81469.1f. S2CID 119972392. SSRN 806124.
Strelioff, C.; Hübler, A. (2006). "Medium-Term Prediction of Chaos" (PDF). Phys. Rev. Lett. 96 (4): 044101. Bibcode:2006PhRvL..96d4101S. doi:10.1103/PhysRevLett.96.044101. PMID 16486826. 044101. Archived from the original (PDF) on 2013-04-26.
Hübler, A.; Foster, G.; Phelps, K. (2007). "Managing Chaos: Thinking out of the Box" (PDF). Complexity. 12 (3): 10–13. Bibcode:2007Cmplx..12c..10H. doi:10.1002/cplx.20159. Archived from the original (PDF) on 2012-10-30. Retrieved 2011-07-17.
Motter, Adilson E.; Campbell, David K. (2013). "Chaos at 50". Physics Today. 66 (5): 27. arXiv:1306.5777. Bibcode:2013PhT....66e..27M. doi:10.1063/PT.3.1977. S2CID 54005470.
=== Textbooks ===
Alligood, K.T.; Sauer, T.; Yorke, J.A. (1997). Chaos: an introduction to dynamical systems. Springer-Verlag. ISBN 978-0-387-94677-1.
Baker, G. L. (1996). Chaos, Scattering and Statistical Mechanics. Cambridge University Press. ISBN 978-0-521-39511-3.
Badii, R.; Politi A. (1997). Complexity: hierarchical structures and scaling in physics. Cambridge University Press. ISBN 978-0-521-66385-4.
Collet, Pierre; Eckmann, Jean-Pierre (1980). Iterated Maps on the Interval as Dynamical Systems. Birkhauser. ISBN 978-0-8176-4926-5.
Devaney, Robert L. (2003). An Introduction to Chaotic Dynamical Systems (2nd ed.). Westview Press. ISBN 978-0-8133-4085-2.
Robinson, Clark (1995). Dynamical systems: Stability, symbolic dynamics, and chaos. CRC Press. ISBN 0-8493-8493-1.
Feldman, D. P. (2012). Chaos and Fractals: An Elementary Introduction. Oxford University Press. ISBN 978-0-19-956644-0. Archived from the original on 2019-12-31. Retrieved 2016-12-29.
Gollub, J. P.; Baker, G. L. (1996). Chaotic dynamics. Cambridge University Press. ISBN 978-0-521-47685-0.
Guckenheimer, John; Holmes, Philip (1983). Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer-Verlag. ISBN 978-0-387-90819-9.
Gulick, Denny (1992). Encounters with Chaos. McGraw-Hill. ISBN 978-0-07-025203-5.
Gutzwiller, Martin (1990). Chaos in Classical and Quantum Mechanics. Springer-Verlag. ISBN 978-0-387-97173-5.
Hoover, William Graham (2001) [1999]. Time Reversibility, Computer Simulation, and Chaos. World Scientific. ISBN 978-981-02-4073-8.
Kautz, Richard (2011). Chaos: The Science of Predictable Random Motion. Oxford University Press. ISBN 978-0-19-959458-0.
Kiel, L. Douglas; Elliott, Euel W. (1997). Chaos Theory in the Social Sciences. Perseus Publishing. ISBN 978-0-472-08472-2.
Moon, Francis (1990). Chaotic and Fractal Dynamics. Springer-Verlag. ISBN 978-0-471-54571-2.
Orlando, Giuseppe; Pisarchick, Alexander; Stoop, Ruedi (2021). Nonlinearities in Economics. Dynamic Modeling and Econometrics in Economics and Finance. Vol. 29. doi:10.1007/978-3-030-70982-2. ISBN 978-3-030-70981-5. S2CID 239756912.
Ott, Edward (2002). Chaos in Dynamical Systems. Cambridge University Press. ISBN 978-0-521-01084-9.
Strogatz, Steven (2000). Nonlinear Dynamics and Chaos. Perseus Publishing. ISBN 978-0-7382-0453-6.
Sprott, Julien Clinton (2003). Chaos and Time-Series Analysis. Oxford University Press. ISBN 978-0-19-850840-3.
Tél, Tamás; Gruiz, Márton (2006). Chaotic dynamics: An introduction based on classical mechanics. Cambridge University Press. ISBN 978-0-521-83912-9.
Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
Thompson JM, Stewart HB (2001). Nonlinear Dynamics And Chaos. John Wiley and Sons Ltd. ISBN 978-0-471-87645-8.
Tufillaro; Reilly (1992). An experimental approach to nonlinear dynamics and chaos. American Journal of Physics. Vol. 61. Addison-Wesley. p. 958. Bibcode:1993AmJPh..61..958T. doi:10.1119/1.17380. ISBN 978-0-201-55441-0.
Wiggins, Stephen (2003). Introduction to Applied Dynamical Systems and Chaos. Springer. ISBN 978-0-387-00177-7.
Zaslavsky, George M. (2005). Hamiltonian Chaos and Fractional Dynamics. Oxford University Press. ISBN 978-0-19-852604-9.
=== Semitechnical and popular works ===
Christophe Letellier, Chaos in Nature, World Scientific Publishing Company, 2012, ISBN 978-981-4374-42-2.
Abraham, Ralph H.; Ueda, Yoshisuke, eds. (2000). The Chaos Avant-Garde: Memoirs of the Early Days of Chaos Theory. World Scientific Series on Nonlinear Science Series A. Vol. 39. World Scientific. Bibcode:2000cagm.book.....A. doi:10.1142/4510. ISBN 978-981-238-647-2.
Barnsley, Michael F. (2000). Fractals Everywhere. Morgan Kaufmann. ISBN 978-0-12-079069-2.
Bird, Richard J. (2003). Chaos and Life: Complexity and Order in Evolution and Thought. Columbia University Press. ISBN 978-0-231-12662-5.
John Briggs and David Peat, Turbulent Mirror: An Illustrated Guide to Chaos Theory and the Science of Wholeness, Harper Perennial 1990, 224 pp.
John Briggs and David Peat, Seven Life Lessons of Chaos: Spiritual Wisdom from the Science of Change, Harper Perennial 2000, 224 pp.
Cunningham, Lawrence A. (1994). "From Random Walks to Chaotic Crashes: The Linear Genealogy of the Efficient Capital Market Hypothesis". George Washington Law Review. 62: 546.
Predrag Cvitanović, Universality in Chaos, Adam Hilger 1989, 648 pp.
Leon Glass and Michael C. Mackey, From Clocks to Chaos: The Rhythms of Life, Princeton University Press 1988, 272 pp.
James Gleick, Chaos: Making a New Science, New York: Penguin, 1988. 368 pp.
John Gribbin. Deep Simplicity. Penguin Press Science. Penguin Books.
L Douglas Kiel, Euel W Elliott (ed.), Chaos Theory in the Social Sciences: Foundations and Applications, University of Michigan Press, 1997, 360 pp.
Arvind Kumar, Chaos, Fractals and Self-Organisation: New Perspectives on Complexity in Nature, National Book Trust, 2003.
Hans Lauwerier, Fractals, Princeton University Press, 1991.
Edward Lorenz, The Essence of Chaos, University of Washington Press, 1996.
Marshall, Alan (2002). The Unity of Nature - Wholeness and Disintegration in Ecology and Science. doi:10.1142/9781860949548. ISBN 9781860949548.
David Peak and Michael Frame, Chaos Under Control: The Art and Science of Complexity, Freeman, 1994.
Heinz-Otto Peitgen and Dietmar Saupe (Eds.), The Science of Fractal Images, Springer 1988, 312 pp.
Nuria Perpinya, Caos, virus, calma. La Teoría del Caos aplicada al desórden artístico, social y político, Páginas de Espuma, 2021.
Clifford A. Pickover, Computers, Pattern, Chaos, and Beauty: Graphics from an Unseen World, St Martins Pr 1991.
Clifford A. Pickover, Chaos in Wonderland: Visual Adventures in a Fractal World, St Martins Pr 1994.
Ilya Prigogine and Isabelle Stengers, Order Out of Chaos, Bantam 1984.
Peitgen, Heinz-Otto; Richter, Peter H. (1986). The Beauty of Fractals. doi:10.1007/978-3-642-61717-1. ISBN 978-3-642-61719-5.
David Ruelle, Chance and Chaos, Princeton University Press 1993.
Ivars Peterson, Newton's Clock: Chaos in the Solar System, Freeman, 1993.
Ian Roulstone; John Norbury (2013). Invisible in the Storm: the role of mathematics in understanding weather. Princeton University Press. ISBN 978-0691152721.
Ruelle, D. (1989). Chaotic Evolution and Strange Attractors. doi:10.1017/CBO9780511608773. ISBN 9780521362726.
Manfred Schroeder, Fractals, Chaos, and Power Laws, Freeman, 1991.
Smith, Peter (1998). Explaining Chaos. doi:10.1017/CBO9780511554544. ISBN 9780511554544.
Ian Stewart, Does God Play Dice?: The Mathematics of Chaos, Blackwell Publishers, 1990.
Steven Strogatz, Sync: The emerging science of spontaneous order, Hyperion, 2003.
Yoshisuke Ueda, The Road To Chaos, Aerial Pr, 1993.
M. Mitchell Waldrop, Complexity : The Emerging Science at the Edge of Order and Chaos, Simon & Schuster, 1992.
Antonio Sawaya, Financial Time Series Analysis : Chaos and Neurodynamics Approach, Lambert, 2012.
== External links ==
"Chaos", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Nonlinear Dynamics Research Group with Animations in Flash
The Chaos group at the University of Maryland
The Chaos Hypertextbook. An introductory primer on chaos and fractals
ChaosBook.org An advanced graduate textbook on chaos (no fractals)
Society for Chaos Theory in Psychology & Life Sciences
Nonlinear Dynamics Research Group at CSDC, Florence, Italy
Nonlinear dynamics: how science comprehends chaos, talk presented by Sunny Auyang, 1998.
Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G. Wiens
Gleick's Chaos (excerpt) Archived 2007-02-02 at the Wayback Machine
Systems Analysis, Modelling and Prediction Group at the University of Oxford
A page about the Mackey-Glass equation
High Anxieties — The Mathematics of Chaos (2008) BBC documentary directed by David Malone
The chaos theory of evolution – article published in New Scientist on similarities between evolution and non-linear systems, including the fractal nature of life and chaos.
Jos Leys, Étienne Ghys et Aurélien Alvarez, Chaos, A Mathematical Adventure. Nine films about dynamical systems, the butterfly effect and chaos theory, intended for a wide audience.
"Chaos Theory", BBC Radio 4 discussion with Susan Greenfield, David Papineau & Neil Johnson (In Our Time, May 16, 2002)
Chaos: The Science of the Butterfly Effect (2019) an explanation presented by Derek Muller | Wikipedia/Chaotic_systems |
In physics, particularly in quantum field theory, the Weyl equation is a relativistic wave equation for describing massless spin-1/2 particles called Weyl fermions. The equation is named after Hermann Weyl. The Weyl fermions are one of the three possible types of elementary fermions, the other two being the Dirac and the Majorana fermions.
None of the elementary particles in the Standard Model are Weyl fermions. Prior to the confirmation of neutrino oscillations, it was considered possible that the neutrino might be a Weyl fermion (it is now expected to be either a Dirac or a Majorana fermion). In condensed matter physics, some materials can display quasiparticles that behave as Weyl fermions, leading to the notion of Weyl semimetals.
Mathematically, any Dirac fermion can be decomposed into two Weyl fermions of opposite chirality, coupled by the mass term.
== History ==
The Dirac equation was published in 1928 by Paul Dirac, and was first used to model spin-1/2 particles in the framework of relativistic quantum mechanics. Hermann Weyl published his equation in 1929 as a simplified version of the Dirac equation. In 1933, Wolfgang Pauli argued against Weyl's equation because it violated parity. However, three years earlier, Pauli had predicted the existence of a new elementary fermion, the neutrino, to explain beta decay; the neutrino was eventually described using the Weyl equation.
In 1937, Conyers Herring proposed that Weyl fermions may exist as quasiparticles in condensed matter.
Neutrinos were experimentally observed in 1956 as particles with extremely small masses (and historically were even sometimes thought to be massless). The same year the Wu experiment showed that parity could be violated by the weak interaction, addressing Pauli's criticism. This was followed by the measurement of the neutrino's helicity in 1958. As experiments showed no signs of a neutrino mass, interest in the Weyl equation resurfaced. Thus, the Standard Model was built under the assumption that neutrinos were Weyl fermions.
While Italian physicist Bruno Pontecorvo had proposed in 1957 the possibility of neutrino masses and neutrino oscillations, it was not until 1998 that Super-Kamiokande eventually confirmed the existence of neutrino oscillations, and their non-zero mass. This discovery confirmed that Weyl's equation cannot completely describe the propagation of neutrinos, as the equations can only describe massless particles.
In 2015, the first Weyl semimetal was demonstrated experimentally in crystalline tantalum arsenide (TaAs) through the collaboration of M.Z. Hasan's (Princeton University) and H. Ding's (Chinese Academy of Sciences) teams. Independently, the same year, M. Soljačić's team (Massachusetts Institute of Technology) also observed Weyl-like excitations in photonic crystals.
== Equation ==
The Weyl equation comes in two forms. The right-handed form can be written as follows:
{\displaystyle \sigma ^{\mu }\partial _{\mu }\psi =0}
Expanding this equation, and inserting c for the speed of light, it becomes
{\displaystyle I_{2}{\frac {1}{c}}{\frac {\partial \psi }{\partial t}}+\sigma _{x}{\frac {\partial \psi }{\partial x}}+\sigma _{y}{\frac {\partial \psi }{\partial y}}+\sigma _{z}{\frac {\partial \psi }{\partial z}}=0}
where
{\displaystyle \sigma ^{\mu }={\begin{pmatrix}\sigma ^{0}&\sigma ^{1}&\sigma ^{2}&\sigma ^{3}\end{pmatrix}}={\begin{pmatrix}I_{2}&\sigma _{x}&\sigma _{y}&\sigma _{z}\end{pmatrix}}}
is a vector whose components are the 2×2 identity matrix I_2 for μ = 0 and the Pauli matrices for μ = 1, 2, 3, and ψ is the wavefunction – one of the Weyl spinors. The left-handed form of the Weyl equation is usually written as:
{\displaystyle {\bar {\sigma }}^{\mu }\partial _{\mu }\psi =0}
where
{\displaystyle {\bar {\sigma }}^{\mu }={\begin{pmatrix}I_{2}&-\sigma _{x}&-\sigma _{y}&-\sigma _{z}\end{pmatrix}}~.}
The solutions of the right- and left-handed Weyl equations are different: they have right- and left-handed helicity, and thus chirality, respectively. It is convenient to indicate this explicitly, as follows:
{\displaystyle \sigma ^{\mu }\partial _{\mu }\psi _{\rm {R}}=0}
and
{\displaystyle {\bar {\sigma }}^{\mu }\partial _{\mu }\psi _{\rm {L}}=0~.}
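The matrix algebra underlying these equations can be spot-checked numerically. Below is a minimal NumPy sketch (not part of the article) that builds σ^μ and σ̄^μ as defined above and verifies the Clifford-type identity σ^μ σ̄^ν + σ^ν σ̄^μ = 2η^{μν} I₂:

```python
import numpy as np

# Pauli matrices and the 2x2 identity, as defined in the text
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

sigma = [I2, sx, sy, sz]                 # sigma^mu
sigmabar = [I2, -sx, -sy, -sz]           # sigmabar^mu
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric diag(+1, -1, -1, -1)

# Verify sigma^mu sigmabar^nu + sigma^nu sigmabar^mu = 2 eta^{mu nu} I_2
for mu in range(4):
    for nu in range(4):
        lhs = sigma[mu] @ sigmabar[nu] + sigma[nu] @ sigmabar[mu]
        assert np.allclose(lhs, 2 * eta[mu, nu] * I2)
```

This identity is what makes the product of the two Weyl operators collapse to the scalar wave operator in the calculations further below.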
== Plane wave solutions ==
The plane-wave solutions to the Weyl equation are referred to as the left- and right-handed Weyl spinors, each with two components. Both have the form
{\displaystyle \psi \left(\mathbf {r} ,t\right)={\begin{pmatrix}\psi _{1}\\\psi _{2}\\\end{pmatrix}}=\chi e^{-i(\mathbf {k} \cdot \mathbf {r} -\omega t)}=\chi e^{-i(\mathbf {p} \cdot \mathbf {r} -Et)/\hbar }}
where
{\displaystyle \chi ={\begin{pmatrix}\chi _{1}\\\chi _{2}\\\end{pmatrix}}}
is a momentum-dependent two-component spinor which satisfies
{\displaystyle \sigma ^{\mu }p_{\mu }\chi =\left(I_{2}E-{\vec {\sigma }}\cdot {\vec {p}}\right)\chi =0}
or
{\displaystyle {\bar {\sigma }}^{\mu }p_{\mu }\chi =\left(I_{2}E+{\vec {\sigma }}\cdot {\vec {p}}\right)\chi =0~.}
By direct manipulation, one obtains that
{\displaystyle \left({\bar {\sigma }}^{\nu }p_{\nu }\right)\left(\sigma ^{\mu }p_{\mu }\right)\chi =\left(\sigma ^{\nu }p_{\nu }\right)\left({\bar {\sigma }}^{\mu }p_{\mu }\right)\chi =p_{\mu }p^{\mu }\chi =\left(E^{2}-{\vec {p}}\cdot {\vec {p}}\right)\chi =0}
and concludes that the equations correspond to a particle that is massless. As a result, the magnitude of momentum p relates directly to the wave-vector k by the de Broglie relations as:
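The masslessness argument above can be illustrated numerically. This is a small NumPy sketch (units with ħ = c = 1 assumed) showing that the product of the two momentum-space operators vanishes exactly on the massless shell E = |p|, and that the right-handed constraint then admits a nontrivial spinor χ:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

p = np.array([0.3, -1.2, 0.8])           # an arbitrary 3-momentum
sp = p[0] * sx + p[1] * sy + p[2] * sz   # sigma . p
E = np.linalg.norm(p)                    # massless dispersion E = |p|

# (I_2 E + sigma.p)(I_2 E - sigma.p) = (E^2 - |p|^2) I_2 vanishes on shell
assert np.allclose((E * I2 + sp) @ (E * I2 - sp), 0)

# The right-handed constraint (I_2 E - sigma.p) chi = 0 has a nontrivial
# solution: the eigenvector of sigma.p with eigenvalue +|p|
w, v = np.linalg.eigh(sp)
chi = v[:, np.argmax(w)]
assert np.allclose(sp @ chi, E * chi)
```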
{\displaystyle |\mathbf {p} |=\hbar |\mathbf {k} |={\frac {\hbar \omega }{c}}\,\Rightarrow \,|\mathbf {k} |={\frac {\omega }{c}}}
The equation can be written in terms of left- and right-handed spinors as:
{\displaystyle {\begin{aligned}\sigma ^{\mu }\partial _{\mu }\psi _{\rm {R}}&=0\\{\bar {\sigma }}^{\mu }\partial _{\mu }\psi _{\rm {L}}&=0\end{aligned}}}
=== Helicity ===
The left and right components correspond to the helicity λ of the particles, the projection of the angular momentum operator J onto the linear momentum p:
{\displaystyle \mathbf {p} \cdot \mathbf {J} \left|\mathbf {p} ,\lambda \right\rangle =\lambda |\mathbf {p} |\left|\mathbf {p} ,\lambda \right\rangle }
Here
{\textstyle \lambda =\pm {\frac {1}{2}}~.}
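A quick numerical illustration, assuming the conventions above: for a right-handed plane-wave solution, σ·p̂ χ = χ, so χ is the +1/2 eigenvector of the helicity operator ½ σ·p̂. The sketch below diagonalizes that operator for an arbitrary momentum direction:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

p = np.array([1.0, 2.0, -2.0])
phat = p / np.linalg.norm(p)

# Helicity operator for spin 1/2: (1/2) sigma . phat
h = 0.5 * (phat[0] * sx + phat[1] * sy + phat[2] * sz)

w, v = np.linalg.eigh(h)          # eigenvalues come out ascending: [-1/2, +1/2]
assert np.allclose(w.real, [-0.5, 0.5])

# The +1/2 eigenvector is the right-handed plane-wave spinor for this momentum
chi_plus = v[:, 1]
assert np.allclose(h @ chi_plus, 0.5 * chi_plus)
```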
== Lorentz invariance ==
Both equations are Lorentz invariant under the Lorentz transformation
{\displaystyle x\mapsto x^{\prime }=\Lambda x}
where
{\displaystyle \Lambda \in \mathrm {SO} (1,3)~.}
More precisely, the equations transform as
{\displaystyle \sigma ^{\mu }{\frac {\partial }{\partial x^{\mu }}}\psi _{\rm {R}}(x)\mapsto \sigma ^{\mu }{\frac {\partial }{\partial x^{\prime \mu }}}\psi _{\rm {R}}^{\prime }\left(x^{\prime }\right)=\left(S^{-1}\right)^{\dagger }\sigma ^{\mu }{\frac {\partial }{\partial x^{\mu }}}\psi _{\rm {R}}(x)}
where S† is the Hermitian transpose, provided that the right-handed field transforms as
{\displaystyle \psi _{\rm {R}}(x)\mapsto \psi _{\rm {R}}^{\prime }\left(x^{\prime }\right)=S\psi _{\rm {R}}(x)}
The matrix S ∈ SL(2, C) is related to the Lorentz transform by means of the double covering of the Lorentz group by the special linear group SL(2, C), given by
{\displaystyle \sigma _{\mu }{\Lambda ^{\mu }}_{\nu }=\left(S^{-1}\right)^{\dagger }\sigma _{\nu }S^{-1}}
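The double covering can be made concrete in the simplest case of a spatial rotation, where S is unitary so that (S⁻¹)† = S. The hedged NumPy sketch below checks that conjugation by S = exp(−iθσ_z/2) rotates the Pauli vector by θ, while a full 2π rotation gives S = −I₂, the hallmark of the double cover:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def S_rot_z(theta):
    """exp(-i theta sigma_z / 2): the SL(2,C) element covering a rotation by theta about z."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sz

theta = 0.7
S = S_rot_z(theta)

# For a rotation S is unitary, so (S^-1)^dagger = S, and conjugation
# rotates the Pauli vector: S sigma_x S^dagger = cos(theta) sx + sin(theta) sy
assert np.allclose(S @ sx @ S.conj().T, np.cos(theta) * sx + np.sin(theta) * sy)

# Double cover: a rotation by 2*pi is the identity in SO(1,3),
# yet maps to -I_2 in SL(2,C)
assert np.allclose(S_rot_z(2 * np.pi), -I2)
```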
Thus, if the untransformed differential vanishes in one Lorentz frame, then it also vanishes in another. Similarly
{\displaystyle {\overline {\sigma }}^{\mu }{\frac {\partial }{\partial x^{\mu }}}\psi _{\rm {L}}(x)\mapsto {\overline {\sigma }}^{\mu }{\frac {\partial }{\partial x^{\prime \mu }}}\psi _{\rm {L}}^{\prime }\left(x^{\prime }\right)=S{\overline {\sigma }}^{\mu }{\frac {\partial }{\partial x^{\mu }}}\psi _{\rm {L}}(x)}
provided that the left-handed field transforms as
{\displaystyle \psi _{\rm {L}}(x)\mapsto \psi _{\rm {L}}^{\prime }\left(x^{\prime }\right)=\left(S^{\dagger }\right)^{-1}\psi _{\rm {L}}(x)~.}
Proof: Neither of these transformation properties is in any way "obvious", and so they deserve a careful derivation. Begin with the form
{\displaystyle \psi _{\rm {R}}(x)\mapsto \psi _{\rm {R}}^{\prime }\left(x^{\prime }\right)=R\psi _{\rm {R}}(x)}
for some unknown R ∈ SL(2, C) to be determined. The Lorentz transform, in coordinates, is
{\displaystyle x^{\prime \mu }={\Lambda ^{\mu }}_{\nu }x^{\nu }}
or, equivalently,
{\displaystyle x^{\nu }={\left(\Lambda ^{-1}\right)^{\nu }}_{\mu }x^{\prime \mu }}
This leads to
{\displaystyle {\begin{aligned}\sigma ^{\mu }\partial _{\mu }^{\prime }\psi _{\rm {R}}^{\prime }\left(x^{\prime }\right)&=\sigma ^{\mu }{\frac {\partial }{\partial x^{\prime \mu }}}\psi _{\rm {R}}^{\prime }\left(x^{\prime }\right)\\&=\sigma ^{\mu }{\frac {\partial x^{\nu }}{\partial x^{\prime \mu }}}{\frac {\partial }{\partial x^{\nu }}}R\psi _{\rm {R}}(x)\\&=\sigma ^{\mu }{\left(\Lambda ^{-1}\right)^{\nu }}_{\mu }{\frac {\partial }{\partial x^{\nu }}}R\psi _{\rm {R}}(x)\\&=\sigma ^{\mu }{\left(\Lambda ^{-1}\right)^{\nu }}_{\mu }\partial _{\nu }R\psi _{\rm {R}}(x)\end{aligned}}}
In order to make use of the Weyl map
{\displaystyle \sigma _{\mu }{\Lambda ^{\mu }}_{\nu }=\left(S^{-1}\right)^{\dagger }\sigma _{\nu }S^{-1}}
a few indices must be raised and lowered. This is easier said than done, as it invokes the identity
{\displaystyle \eta \Lambda ^{\mathsf {T}}\eta =\Lambda ^{-1}}
where
{\displaystyle \eta ={\mbox{diag}}(+1,-1,-1,-1)}
is the flat-space Minkowski metric. The above identity is often used to define the elements
{\displaystyle \Lambda \in \mathrm {SO} (1,3).}
One takes the transpose:
{\displaystyle {\left(\Lambda ^{-1}\right)^{\nu }}_{\mu }={\left(\Lambda ^{-1{\mathsf {T}}}\right)_{\mu }}^{\nu }}
to write
{\displaystyle {\begin{aligned}\sigma ^{\mu }{\left(\Lambda ^{-1}\right)^{\nu }}_{\mu }\partial _{\nu }R\psi _{\rm {R}}(x)&=\sigma ^{\mu }{\left(\Lambda ^{-1{\mathsf {T}}}\right)_{\mu }}^{\nu }\partial _{\nu }R\psi _{\rm {R}}(x)\\&=\sigma _{\mu }{\Lambda ^{\mu }}_{\nu }\partial ^{\nu }R\psi _{\rm {R}}(x)\\&=\left(S^{-1}\right)^{\dagger }\sigma _{\mu }\partial ^{\mu }S^{-1}R\psi _{\rm {R}}(x)\end{aligned}}}
One thus regains the original form if
{\displaystyle S^{-1}R=1,}
that is,
{\displaystyle R=S.}
Performing the same manipulations for the left-handed equation, one concludes that
{\displaystyle \psi _{\rm {L}}(x)\mapsto \psi _{\rm {L}}^{\prime }\left(x^{\prime }\right)=L\psi _{\rm {L}}(x)}
with
{\displaystyle L=\left(S^{\dagger }\right)^{-1}.}
=== Relationship to Majorana ===
The Weyl equation is conventionally interpreted as describing a massless particle. However, with a slight alteration, one may obtain a two-component version of the Majorana equation. This arises because the special linear group SL(2, C) is isomorphic to the symplectic group Sp(2, C).
The symplectic group is defined as the set of all complex 2×2 matrices that satisfy
{\displaystyle S^{\mathsf {T}}\omega S=\omega }
where
{\displaystyle \omega =i\sigma _{2}={\begin{bmatrix}0&1\\-1&0\end{bmatrix}}}
The defining relationship can be rewritten as
{\displaystyle \omega S^{*}=\left(S^{\dagger }\right)^{-1}\omega }
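Because Sp(2, C) coincides with SL(2, C), the defining relation holds for any unit-determinant 2×2 complex matrix. The following NumPy sketch checks both the defining relation and its rewritten form for a randomly generated S (the normalization trick is just an illustrative way to land in SL(2, C)):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random element of SL(2,C): rescale a random complex matrix to det = 1
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
S = A / np.sqrt(np.linalg.det(A))
assert np.allclose(np.linalg.det(S), 1)

omega = np.array([[0, 1], [-1, 0]], dtype=complex)  # omega = i sigma_2

# Defining relation of Sp(2,C) -- automatic for any det-1 matrix,
# since S^T omega S = det(S) omega in two dimensions
assert np.allclose(S.T @ omega @ S, omega)

# The rewritten form: omega S* = (S^dagger)^{-1} omega
assert np.allclose(omega @ S.conj(), np.linalg.inv(S.conj().T) @ omega)
```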
where S* is the complex conjugate. The right-handed field, as noted earlier, transforms as
{\displaystyle \psi _{\rm {R}}(x)\mapsto \psi _{\rm {R}}^{\prime }\left(x^{\prime }\right)=S\psi _{\rm {R}}(x)}
and so the complex conjugate field transforms as
{\displaystyle \psi _{\rm {R}}^{*}(x)\mapsto \psi _{\rm {R}}^{\prime *}\left(x^{\prime }\right)=S^{*}\psi _{\rm {R}}^{*}(x)}
Applying the defining relationship, one concludes that
{\displaystyle m\omega \psi _{\rm {R}}^{*}(x)\mapsto m\omega \psi _{\rm {R}}^{\prime *}\left(x^{\prime }\right)=\left(S^{\dagger }\right)^{-1}m\omega \psi _{\rm {R}}^{*}(x)}
which is exactly the same Lorentz covariance property noted earlier. Thus, the linear combination, using an arbitrary complex phase factor
{\displaystyle \eta =e^{i\phi }}
{\displaystyle i\sigma ^{\mu }\partial _{\mu }\psi _{\rm {R}}(x)+\eta m\omega \psi _{\rm {R}}^{*}(x)}
transforms in a covariant fashion; setting this to zero gives the complex two-component Majorana equation. The Majorana equation is conventionally written as a four-component real equation, rather than a two-component complex equation; the above can be brought into four-component form (see that article for details). Similarly, the left-chiral Majorana equation (including an arbitrary phase factor ζ) is
{\displaystyle i{\overline {\sigma }}^{\mu }\partial _{\mu }\psi _{\rm {L}}(x)+\zeta m\omega \psi _{\rm {L}}^{*}(x)=0}
As noted earlier, the left and right chiral versions are related by a parity transformation. The skew complex conjugate
{\displaystyle \omega \psi ^{*}=i\sigma ^{2}\psi ^{*}}
can be recognized as the charge conjugate form of ψ.
Thus, the Majorana equation can be read as an equation that connects a spinor to its charge-conjugate form. The two distinct phases on the mass term are related to the two distinct eigenvalues of the charge conjugation operator; see charge conjugation and Majorana equation for details.
Define a pair of operators, the Majorana operators,
{\displaystyle D_{\rm {L}}=i{\overline {\sigma }}^{\mu }\partial _{\mu }+\zeta m\omega K\qquad D_{\rm {R}}=i\sigma ^{\mu }\partial _{\mu }+\eta m\omega K}
where K is a short-hand reminder to take the complex conjugate. Under Lorentz transformations, these transform as
{\displaystyle D_{\rm {L}}\mapsto D_{\rm {L}}^{\prime }=SD_{\rm {L}}S^{\dagger }\qquad D_{\rm {R}}\mapsto D_{\rm {R}}^{\prime }=\left(S^{\dagger }\right)^{-1}D_{\rm {R}}S^{-1}}
whereas the Weyl spinors transform as
{\displaystyle \psi _{\rm {L}}\mapsto \psi _{\rm {L}}^{\prime }=\left(S^{\dagger }\right)^{-1}\psi _{\rm {L}}\qquad \psi _{\rm {R}}\mapsto \psi _{\rm {R}}^{\prime }=S\psi _{\rm {R}}}
just as above. Thus, the matched combinations of these are Lorentz covariant, and one may take
{\displaystyle D_{\rm {L}}\psi _{\rm {L}}=0\qquad D_{\rm {R}}\psi _{\rm {R}}=0}
as a pair of complex 2-spinor Majorana equations.
The products
{\displaystyle D_{\rm {L}}D_{\rm {R}}}
and
{\displaystyle D_{\rm {R}}D_{\rm {L}}}
are both Lorentz covariant. The product is explicitly
{\displaystyle D_{\rm {R}}D_{\rm {L}}=\left(i\sigma ^{\mu }\partial _{\mu }+\eta m\omega K\right)\left(i{\overline {\sigma }}^{\mu }\partial _{\mu }+\zeta m\omega K\right)=-\left(\partial _{t}^{2}-{\vec {\nabla }}\cdot {\vec {\nabla }}+\eta \zeta ^{*}m^{2}\right)=-\left(\square +\eta \zeta ^{*}m^{2}\right)}
Verifying this requires keeping in mind that
{\displaystyle \omega ^{2}=-1}
and that
{\displaystyle Ki=-iK~.}
The RHS reduces to the Klein–Gordon operator provided that
{\displaystyle \eta \zeta ^{*}=1}
, that is,
{\displaystyle \eta =\zeta ~.}
These two Majorana operators are thus "square roots" of the Klein–Gordon operator.
== Lagrangian densities ==
The equations are obtained from the Lagrangian densities
{\displaystyle {\mathcal {L}}=i\psi _{\rm {R}}^{\dagger }\sigma ^{\mu }\partial _{\mu }\psi _{\rm {R}}~,}
{\displaystyle {\mathcal {L}}=i\psi _{\rm {L}}^{\dagger }{\bar {\sigma }}^{\mu }\partial _{\mu }\psi _{\rm {L}}~.}
By treating the spinor and its conjugate (denoted by †) as independent variables, the relevant Weyl equation is obtained.
== Weyl spinors ==
The term Weyl spinor is also frequently used in a more general setting, as an element of a Clifford module. This is closely related to the solutions given above, and gives a natural geometric interpretation to spinors as geometric objects living on a manifold. This general setting has multiple strengths: it clarifies their interpretation as fermions in physics, and it shows precisely how to define spin in General Relativity, or, indeed, for any Riemannian manifold or pseudo-Riemannian manifold. This is informally sketched as follows.
The Weyl equation is invariant under the action of the Lorentz group. This means that, as boosts and rotations are applied, the form of the equation itself does not change. However, the form of the spinor ψ itself does change. Ignoring spacetime entirely, the algebra of the spinors is described by a (complexified) Clifford algebra. The spinors transform under the action of the spin group. This is entirely analogous to how one might talk about a vector, and how it transforms under the rotation group, except that now, it has been adapted to the case of spinors.
Given an arbitrary pseudo-Riemannian manifold M of dimension (p, q), one may consider its tangent bundle TM. At any given point x ∈ M, the tangent space T_xM is a (p, q)-dimensional vector space. Given this vector space, one can construct the Clifford algebra Cl(p, q) on it. If {e_i} are a vector space basis on T_xM, one may construct a pair of Weyl spinors as
{\displaystyle w_{j}={\frac {1}{\sqrt {2}}}\left(e_{2j}+ie_{2j+1}\right)}
and
{\displaystyle w_{j}^{*}={\frac {1}{\sqrt {2}}}\left(e_{2j}-ie_{2j+1}\right)}
When properly examined in light of the Clifford algebra, these are naturally anti-commuting, that is, one has that
{\displaystyle w_{j}w_{m}=-w_{m}w_{j}~.}
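The anticommutation can be verified in a small concrete representation. The sketch below is an illustrative Euclidean-signature example (not the article's own construction): it builds four mutually anticommuting generators from Pauli matrices and checks that the resulting w_j anticommute and square to zero:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Four mutually anticommuting Hermitian generators (a 4x4 representation
# of a Euclidean Clifford algebra: e_i e_j + e_j e_i = 2 delta_ij)
e = [np.kron(sx, I2), np.kron(sy, I2), np.kron(sz, sx), np.kron(sz, sy)]
for i in range(4):
    for j in range(4):
        assert np.allclose(e[i] @ e[j] + e[j] @ e[i], 2 * (i == j) * np.eye(4))

# Weyl-spinor combinations w_j = (e_{2j} + i e_{2j+1}) / sqrt(2)
w = [(e[0] + 1j * e[1]) / np.sqrt(2), (e[2] + 1j * e[3]) / np.sqrt(2)]

# They anticommute, and are nilpotent: the algebraic shadow of Pauli exclusion
assert np.allclose(w[0] @ w[1], -w[1] @ w[0])
assert np.allclose(w[0] @ w[0], 0)
assert np.allclose(w[1] @ w[1], 0)
```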
This can be interpreted as the mathematical realization of the Pauli exclusion principle, thus allowing these abstractly defined formal structures to be interpreted as fermions. For (p, q) = (1, 3) dimensional Minkowski space-time, there are only two such spinors possible, by convention labelled "left" and "right", as described above. A more formal, general presentation of Weyl spinors can be found in the article on the spin group.
The abstract, general-relativistic form of the Weyl equation can be understood as follows: given a pseudo-Riemannian manifold M, one constructs a fiber bundle above it, with the spin group as the fiber. The spin group Spin(p, q) is a double cover of the special orthogonal group SO(p, q), and so one can identify the spin group fiber-wise with the frame bundle over M. When this is done, the resulting structure is called a spin structure.
Selecting a single point on the fiber corresponds to selecting a local coordinate frame for spacetime; two different points on the fiber are related by a (Lorentz) boost/rotation, that is, by a local change of coordinates. The natural inhabitants of the spin structure are the Weyl spinors, in that the spin structure completely describes how the spinors behave under (Lorentz) boosts/rotations.
Given a spin manifold, the analog of the metric connection is the spin connection; this is effectively "the same thing" as the normal connection, just with spin indices attached to it in a consistent fashion. The covariant derivative can be defined in terms of the connection in an entirely conventional way. It acts naturally on the Clifford bundle, which is the space in which the spinors live. The general exploration of such structures and their relationships is termed spin geometry.
=== Mathematical definition ===
For even n, the even subalgebra Cl⁰(n) of the complex Clifford algebra Cl(n) is isomorphic to
{\displaystyle \mathrm {End} (\mathbb {C} ^{N/2})\oplus \mathrm {End} (\mathbb {C} ^{N/2})=:\Delta _{n}^{+}\oplus \Delta _{n}^{-}}
where N = 2^{n/2}. A left-handed (respectively, right-handed) complex Weyl spinor in n-dimensional space is an element of Δ_n^+ (respectively, Δ_n^−).
=== Special cases ===
There are three important special cases that can be constructed from Weyl spinors. One is the Dirac spinor, which can be taken to be a pair of Weyl spinors, one left-handed and one right-handed. These are coupled together in such a way as to represent an electrically charged fermion field. The electric charge arises because the Dirac field transforms under the action of the complexified spin group Spin^C(p, q). This group has the structure
{\displaystyle \mathrm {Spin} ^{\mathbb {C} }(p,q)\cong \mathrm {Spin} (p,q)\times _{\mathbb {Z} _{2}}S^{1}}
where S¹ ≅ U(1) is the circle, and can be identified with the U(1) of electromagnetism. The product ×_{Z_2} is just fancy notation denoting the product Spin(p, q) × S¹ with opposite points (s, u) = (−s, −u) identified (a double covering).
The Majorana spinor is again a pair of Weyl spinors, but this time arranged so that the left-handed spinor is the charge conjugate of the right-handed spinor. The result is a field with two fewer degrees of freedom than the Dirac spinor. It is unable to interact with the electromagnetic field, since it transforms as a scalar under the action of the Spin^C group. That is, it transforms as a spinor, but transversally, such that it is invariant under the U(1) action of the spin group.
The third special case is the ELKO spinor, constructed much as the Majorana spinor, except with an additional minus sign between the charge-conjugate pair. This again renders it electrically neutral, but introduces a number of other quite surprising properties.
== Notes ==
== References ==
== Further reading ==
McMahon, D. (2008). Quantum Field Theory Demystified. USA: McGraw-Hill. ISBN 978-0-07-154382-8.
Martin, B.R.; Shaw, G. (2008). Particle Physics. Manchester Physics (2nd ed.). John Wiley & Sons. ISBN 978-0-470-03294-7.
Martin, Brian R.; Shaw, Graham (2013). Particle Physics (3rd ed.). ISBN 9781118681664 – via Google Books.
LaBelle, P. (2010). Supersymmetry Demystified. USA: McGraw-Hill. ISBN 978-0-07-163641-4 – via Google Books.
Penrose, Roger (2007). The Road to Reality. Vintage Books. ISBN 978-0-679-77631-4.
Johnston, Hamish (23 July 2015). "Weyl fermions are spotted at long last". Physics World. Retrieved 22 November 2018.
Ciudad, David (20 August 2015). "Massless yet real". Nature Materials. 14 (9): 863. doi:10.1038/nmat4411. ISSN 1476-1122. PMID 26288972.
Vishwanath, Ashvin (8 September 2015). "Where the Weyl things are". APS Physics. Vol. 8. Retrieved 22 November 2018.
Jia, Shuang; Xu, Su-Yang; Hasan, M. Zahid (25 October 2016). "Weyl semimetals, Fermi arcs and chiral anomaly". Nature Materials. 15 (11): 1140–1144. arXiv:1612.00416. Bibcode:2016NatMa..15.1140J. doi:10.1038/nmat4787. PMID 27777402. S2CID 1115349.
== External links ==
http://aesop.phys.utk.edu/qft/2004-5/2-2.pdf
http://www.nbi.dk/~kleppe/random/ll/l2.html
http://www.tfkp.physik.uni-erlangen.de/download/research/DW-derivation.pdf
http://www.weylmann.com/weyldirac.pdf
In particle physics, the Dirac equation is a relativistic wave equation derived by British physicist Paul Dirac in 1928. In its free form, or including electromagnetic interactions, it describes all spin-1/2 massive particles, called "Dirac particles", such as electrons and quarks for which parity is a symmetry. It is consistent with both the principles of quantum mechanics and the theory of special relativity, and was the first theory to account fully for special relativity in the context of quantum mechanics. The equation is validated by its rigorous accounting of the observed fine structure of the hydrogen spectrum and has become vital in the building of the Standard Model.
The equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved and which was experimentally confirmed several years later. It also provided a theoretical justification for the introduction of several component wave functions in Pauli's phenomenological theory of spin. The wave functions in the Dirac theory are vectors of four complex numbers (known as bispinors), two of which resemble the Pauli wavefunction in the non-relativistic limit, in contrast to the Schrödinger equation, which described wave functions of only one complex value. Moreover, in the limit of zero mass, the Dirac equation reduces to the Weyl equation.
In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-1/2 particles.
Dirac did not fully appreciate the importance of his results; however, the entailed explanation of spin as a consequence of the union of quantum mechanics and relativity—and the eventual discovery of the positron—represents one of the great triumphs of theoretical physics. This accomplishment has been described as fully on par with the works of Newton, Maxwell, and Einstein before him. The equation has been deemed by some physicists to be the "real seed of modern physics". The equation has also been described as the "centerpiece of relativistic quantum mechanics", with it also stated that "the equation is perhaps the most important one in all of quantum mechanics".
The Dirac equation is inscribed upon a plaque on the floor of Westminster Abbey. Unveiled on 13 November 1995, the plaque commemorates Dirac's life.
The equation, in its natural units formulation, is also prominently displayed in the auditorium at the ‘Paul A.M. Dirac’ Lecture Hall at the Patrick M.S. Blackett Institute (formerly The San Domenico Monastery) of the Ettore Majorana Foundation and Centre for Scientific Culture in Erice, Sicily.
== History ==
The Dirac equation in the form originally proposed by Dirac is:
{\displaystyle \left(\beta mc^{2}+c\sum _{n=1}^{3}\alpha _{n}p_{n}\right)\psi (x,t)=i\hbar {\frac {\partial \psi (x,t)}{\partial t}}}
where ψ(x, t) is the wave function for an electron of rest mass m with spacetime coordinates x, t. p1, p2, p3 are the components of the momentum, understood to be the momentum operator in the Schrödinger equation. c is the speed of light, and ħ is the reduced Planck constant; these fundamental physical constants reflect special relativity and quantum mechanics, respectively. αn and β are 4 × 4 gamma matrices.
Dirac's purpose in casting this equation was to explain the behavior of the relativistically moving electron, thus allowing the atom to be treated in a manner consistent with relativity. He hoped that the corrections introduced this way might have a bearing on the problem of atomic spectra.
Up until that time, attempts to make the old quantum theory of the atom compatible with the theory of relativity—which were based on discretizing the angular momentum stored in the electron's possibly non-circular orbit of the atomic nucleus—had failed, and the new quantum mechanics of Heisenberg, Pauli, Jordan, Schrödinger, and Dirac himself had not developed sufficiently to treat this problem. Although Dirac's original intentions were satisfied, his equation had far deeper implications for the structure of matter and introduced new mathematical classes of objects that are now essential elements of fundamental physics.
The new elements in this equation are the four 4 × 4 matrices α1, α2, α3 and β, and the four-component wave function ψ. There are four components in ψ because the evaluation of it at any given point in configuration space is a bispinor. It is interpreted as a superposition of a spin-up electron, a spin-down electron, a spin-up positron, and a spin-down positron.
The 4 × 4 matrices αk and β are all Hermitian and are involutory:
{\displaystyle \alpha _{i}^{2}=\beta ^{2}=I_{4}}
and they all mutually anti-commute:
{\displaystyle {\begin{aligned}\alpha _{i}\alpha _{j}+\alpha _{j}\alpha _{i}&=0\quad (i\neq j)\\\alpha _{i}\beta +\beta \alpha _{i}&=0\end{aligned}}}
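These algebraic conditions can be checked concretely in the standard (Dirac–Pauli) representation, where each α_k carries the Pauli matrix σ_k in its off-diagonal blocks and β = diag(I₂, −I₂). A minimal numpy sketch (the representation choice is the standard one, not the only possibility):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2), np.zeros((2, 2))

# Dirac-Pauli representation: alpha_k off-diagonal in sigma_k, beta = diag(I2, -I2)
alphas = [np.block([[Z2, s], [s, Z2]]) for s in (sx, sy, sz)]
beta = np.block([[I2, Z2], [Z2, -I2]])
I4 = np.eye(4)

# involutory: alpha_i^2 = beta^2 = I4
assert all(np.allclose(a @ a, I4) for a in alphas)
assert np.allclose(beta @ beta, I4)

# mutual anticommutation
for i in range(3):
    assert np.allclose(alphas[i] @ beta + beta @ alphas[i], 0)
    for j in range(i + 1, 3):
        assert np.allclose(alphas[i] @ alphas[j] + alphas[j] @ alphas[i], 0)

# Hermiticity
assert all(np.allclose(a, a.conj().T) for a in alphas)
assert np.allclose(beta, beta.conj().T)
print("all alpha/beta relations verified")
```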
These matrices and the form of the wave function have a deep mathematical significance. The algebraic structure represented by the gamma matrices had been created some 50 years earlier by the English mathematician W. K. Clifford. In turn, Clifford's ideas had emerged from the mid-19th-century work of German mathematician Hermann Grassmann in his Lineare Ausdehnungslehre (Theory of Linear Expansion).
=== Making the Schrödinger equation relativistic ===
The Dirac equation is superficially similar to the Schrödinger equation for a massive free particle:
{\displaystyle -{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\phi =i\hbar {\frac {\partial }{\partial t}}\phi ~.}
The left side represents the square of the momentum operator divided by twice the mass, which is the non-relativistic kinetic energy. Because relativity treats space and time as a whole, a relativistic generalization of this equation requires that space and time derivatives must enter symmetrically as they do in the Maxwell equations that govern the behavior of light – the equations must be differentially of the same order in space and time. In relativity, the momentum and the energies are the space and time parts of a spacetime vector, the four-momentum, and they are related by the relativistically invariant relation
{\displaystyle E^{2}=m^{2}c^{4}+p^{2}c^{2},}
which says that the length of this four-vector is proportional to the rest mass m. Substituting the operator equivalents of the energy and momentum from the Schrödinger theory produces the Klein–Gordon equation describing the propagation of waves, constructed from relativistically invariant objects,
{\displaystyle \left(-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}+\nabla ^{2}\right)\phi ={\frac {m^{2}c^{2}}{\hbar ^{2}}}\phi ,}
with the wave function ϕ being a relativistic scalar: a complex number that has the same numerical value in all frames of reference. Space and time derivatives both enter to second order. This has a telling consequence for the interpretation of the equation. Because the equation is second order in the time derivative, one must specify initial values both of the wave function itself and of its first time-derivative in order to solve definite problems. Since both may be specified more or less arbitrarily, the wave function cannot maintain its former role of determining the probability density of finding the electron in a given state of motion. In the Schrödinger theory, the probability density is given by the positive definite expression
{\displaystyle \rho =\phi ^{*}\phi }
and this density is convected according to the probability current vector
{\displaystyle J=-{\frac {i\hbar }{2m}}(\phi ^{*}\nabla \phi -\phi \nabla \phi ^{*})}
with the conservation of probability current and density following from the continuity equation:
{\displaystyle \nabla \cdot J+{\frac {\partial \rho }{\partial t}}=0~.}
The fact that the density is positive definite and convected according to this continuity equation implies that one may integrate the density over a certain domain and set the total to 1, and this condition will be maintained by the conservation law. A proper relativistic theory with a probability density current must also share this feature. To maintain the notion of a convected density, one must generalize the Schrödinger expression of the density and current so that space and time derivatives again enter symmetrically in relation to the scalar wave function. The Schrödinger expression can be kept for the current, but the probability density must be replaced by the symmetrically formed expression
{\displaystyle \rho ={\frac {i\hbar }{2mc^{2}}}\left(\psi ^{*}\partial _{t}\psi -\psi \partial _{t}\psi ^{*}\right),}
which now becomes the 4th component of a spacetime vector, and the entire probability 4-current density has the relativistically covariant expression
{\displaystyle J^{\mu }={\frac {i\hbar }{2m}}\left(\psi ^{*}\partial ^{\mu }\psi -\psi \partial ^{\mu }\psi ^{*}\right).}
The continuity equation is as before. Everything is compatible with relativity now, but the expression for the density is no longer positive definite; the initial values of both ψ and ∂tψ may be freely chosen, and the density may thus become negative, something that is impossible for a legitimate probability density. Thus, one cannot get a simple generalization of the Schrödinger equation under the naive assumption that the wave function is a relativistic scalar and that the equation it satisfies is second order in time.
Although it is not a successful relativistic generalization of the Schrödinger equation, this equation is resurrected in the context of quantum field theory, where it is known as the Klein–Gordon equation, and describes a spinless particle field (e.g. pi meson or Higgs boson). Historically, Schrödinger himself arrived at this equation before the one that bears his name but soon discarded it. In the context of quantum field theory, the indefinite density is understood to correspond to the charge density, which can be positive or negative, and not the probability density.
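The sign problem is easy to see concretely with plane waves. In the sketch below (natural units ħ = m = c = 1 are an assumption for the demo), a positive-frequency mode e^{−iEt} gives ρ = E/mc² > 0, while a negative-frequency mode gives ρ < 0:

```python
import numpy as np

hbar = m = c = 1.0  # natural units, assumed for this demo
E = 1.5             # energy of the plane-wave mode

def kg_density(sign, t=0.3):
    """rho = (i hbar / 2 m c^2)(psi* dt psi - psi dt psi*) for psi = exp(-sign*iEt/hbar)."""
    psi = np.exp(sign * -1j * E * t / hbar)
    dpsi = sign * (-1j * E / hbar) * psi          # analytic time derivative
    return (1j * hbar / (2 * m * c**2)) * (np.conj(psi) * dpsi - psi * np.conj(dpsi))

rho_pos = kg_density(+1).real   # positive-frequency solution
rho_neg = kg_density(-1).real   # negative-frequency solution
assert rho_pos > 0 and rho_neg < 0   # the density is not positive definite
print(rho_pos, rho_neg)              # E/mc^2 and -E/mc^2
```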
=== Dirac's coup ===
Dirac thus thought to try an equation that was first order in both space and time. He postulated an equation of the form
{\displaystyle E\psi =({\vec {\alpha }}\cdot {\vec {p}}+\beta m)\psi }
where the operators (α, β) must be independent of (p, t) for linearity and independent of (x, t) for space-time homogeneity. These constraints implied additional dynamical variables that the (α, β) operators will depend upon; from this requirement Dirac concluded that the operators would depend upon 4 × 4 matrices, related to the Pauli matrices.
One could, for example, formally (i.e. by abuse of notation, since it is not straightforward to take a functional square root of the sum of two differential operators) take the relativistic expression for the energy
{\displaystyle E=c{\sqrt {p^{2}+m^{2}c^{2}}}~,}
replace p by its operator equivalent, expand the square root in an infinite series of derivative operators, set up an eigenvalue problem, then solve the equation formally by iterations. Most physicists had little faith in such a process, even if it were technically possible.
As the story goes, Dirac was staring into the fireplace at Cambridge, pondering this problem, when he hit upon the idea of taking the square root of the wave operator (see also half derivative) thus:
{\displaystyle \nabla ^{2}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}=\left(A\partial _{x}+B\partial _{y}+C\partial _{z}+{\frac {i}{c}}D\partial _{t}\right)\left(A\partial _{x}+B\partial _{y}+C\partial _{z}+{\frac {i}{c}}D\partial _{t}\right)~.}
On multiplying out the right side it is apparent that, in order to get all the cross-terms such as ∂x∂y to vanish, one must assume
{\displaystyle AB+BA=0,~\ldots ~}
with
{\displaystyle A^{2}=B^{2}=\dots =1~.}
Dirac, who had just then been intensely involved with working out the foundations of Heisenberg's matrix mechanics, immediately understood that these conditions could be met if A, B, C and D are matrices, with the implication that the wave function has multiple components. This immediately explained the appearance of two-component wave functions in Pauli's phenomenological theory of spin, something that up until then had been regarded as mysterious, even to Pauli himself. However, one needs at least 4 × 4 matrices to set up a system with the properties required – so the wave function had four components, not two, as in the Pauli theory, or one, as in the bare Schrödinger theory. The four-component wave function represents a new class of mathematical object in physical theories that makes its first appearance here.
Given the factorization in terms of these matrices, one can now write down immediately an equation
{\displaystyle \left(A\partial _{x}+B\partial _{y}+C\partial _{z}+{\frac {i}{c}}D\partial _{t}\right)\psi =\kappa \psi }
with κ to be determined. Applying again the matrix operator on both sides yields
{\displaystyle \left(\nabla ^{2}-{\frac {1}{c^{2}}}\partial _{t}^{2}\right)\psi =\kappa ^{2}\psi ~.}
Taking κ = mc/ħ shows that all the components of the wave function individually satisfy the relativistic energy–momentum relation. Thus the sought-for equation that is first-order in both space and time is
{\displaystyle \left(A\partial _{x}+B\partial _{y}+C\partial _{z}+{\frac {i}{c}}D\partial _{t}-{\frac {mc}{\hbar }}\right)\psi =0~.}
Setting
{\displaystyle A=i\beta \alpha _{1}\,,\,B=i\beta \alpha _{2}\,,\,C=i\beta \alpha _{3}\,,\,D=\beta ~,}
and because D² = β² = I₄, the Dirac equation is produced as written above.
=== Covariant form and relativistic invariance ===
To demonstrate the relativistic invariance of the equation, it is advantageous to cast it into a form in which the space and time derivatives appear on an equal footing. New matrices are introduced as follows:
{\displaystyle {\begin{aligned}D&=\gamma ^{0},\\A&=i\gamma ^{1},\quad B=i\gamma ^{2},\quad C=i\gamma ^{3},\end{aligned}}}
and the equation takes the form (remembering the definition of the covariant components of the 4-gradient and especially that ∂₀ = (1/c)∂ₜ)
{\displaystyle i\hbar \gamma ^{\mu }\partial _{\mu }\psi -mc\psi =0}
where there is an implied summation over the values of the twice-repeated index μ = 0, 1, 2, 3, and ∂μ is the 4-gradient. In practice one often writes the gamma matrices in terms of 2 × 2 sub-matrices taken from the Pauli matrices and the 2 × 2 identity matrix. Explicitly the standard representation is
{\displaystyle \gamma ^{0}={\begin{pmatrix}I_{2}&0\\0&-I_{2}\end{pmatrix}},\quad \gamma ^{1}={\begin{pmatrix}0&\sigma _{x}\\-\sigma _{x}&0\end{pmatrix}},\quad \gamma ^{2}={\begin{pmatrix}0&\sigma _{y}\\-\sigma _{y}&0\end{pmatrix}},\quad \gamma ^{3}={\begin{pmatrix}0&\sigma _{z}\\-\sigma _{z}&0\end{pmatrix}}.}
The complete system is summarized using the Minkowski metric on spacetime in the form
{\displaystyle \left\{\gamma ^{\mu },\gamma ^{\nu }\right\}=2\eta ^{\mu \nu }I_{4}}
where the bracket expression
{\displaystyle \{a,b\}=ab+ba}
denotes the anticommutator. These are the defining relations of a Clifford algebra over a pseudo-orthogonal 4-dimensional space with metric signature (+ − − −). The specific Clifford algebra employed in the Dirac equation is known today as the Dirac algebra. Although not recognized as such by Dirac at the time the equation was formulated, in hindsight the introduction of this geometric algebra represents an enormous stride forward in the development of quantum theory.
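In the standard representation written above, the defining Clifford relations can be verified by direct matrix multiplication. A minimal numpy sketch:

```python
import numpy as np

# Pauli matrices and 2x2 building blocks
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2), np.zeros((2, 2))

# standard (Dirac) representation of the gamma matrices
g0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric signature (+ - - -)
I4 = np.eye(4)
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * I4)
print("Clifford relations {gamma^mu, gamma^nu} = 2 eta^{mu nu} I_4 verified")
```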
The Dirac equation may now be interpreted as an eigenvalue equation, where the rest mass is proportional to an eigenvalue of the 4-momentum operator, the proportionality constant being the speed of light:
{\displaystyle \operatorname {P} _{\mathsf {op}}\psi =mc\psi .}
Using
{\displaystyle {\partial \!\!\!/}\mathrel {\stackrel {\mathrm {def} }{=}} \gamma ^{\mu }\partial _{\mu }}
(pronounced "d-slash"), according to Feynman slash notation, the Dirac equation becomes:
{\displaystyle i\hbar {\partial \!\!\!{\big /}}\psi -mc\psi =0.}
In practice, physicists often use units of measure such that ħ = c = 1, known as natural units. The equation then takes the simple form
{\displaystyle i{\partial \!\!\!{\big /}}\psi -m\psi =0~.}
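A useful consequence of the Clifford relations is that the slash of any four-vector squares to its invariant length, (γ^μp_μ)² = p²I₄, so on the mass shell p² = m² every plane-wave solution automatically satisfies the energy–momentum relation. A numeric sketch (numpy; natural units and a specific momentum are assumptions of the demo):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2), np.zeros((2, 2))
g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]  # gamma^1..gamma^3

m = 1.0
p = np.array([0.3, -0.4, 1.2])     # spatial momentum p^i (arbitrary choice)
E = np.sqrt(m**2 + p @ p)          # on-shell energy

# Feynman slash: gamma^mu p_mu = gamma^0 E - gamma^i p^i (lowering flips spatial signs)
pslash = E * g0 - sum(pi * gi for pi, gi in zip(p, gs))
assert np.allclose(pslash @ pslash, m**2 * np.eye(4))

# a plane-wave spinor built as (pslash + m) u0 solves (pslash - m) u = 0
u = (pslash + m * np.eye(4)) @ np.array([1, 0, 0, 0], dtype=complex)
assert np.allclose((pslash - m * np.eye(4)) @ u, 0)
print("on-shell slash identity and plane-wave solution verified")
```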
A foundational theorem states that if two distinct sets of matrices are given that both satisfy the Clifford relations, then they are connected to each other by a similarity transform:
{\displaystyle \gamma ^{\mu \prime }=S^{-1}\gamma ^{\mu }S~.}
If in addition the matrices are all unitary, as are the Dirac set, then S itself is unitary:
{\displaystyle \gamma ^{\mu \prime }=U^{\dagger }\gamma ^{\mu }U~.}
The transformation U is unique up to a multiplicative factor of absolute value 1. Let us now imagine a Lorentz transformation to have been performed on the space and time coordinates, and on the derivative operators, which form a covariant vector. For the operator γμ∂μ to remain invariant, the gammas must transform among themselves as a contravariant vector with respect to their spacetime index. These new gammas will themselves satisfy the Clifford relations, because of the orthogonality of the Lorentz transformation. By the previously mentioned foundational theorem, one may replace the new set by the old set subject to a unitary transformation. In the new frame, remembering that the rest mass is a relativistic scalar, the Dirac equation will then take the form
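The premise used here — that a unitarily transformed set U†γ^μU again satisfies the Clifford relations — can be spot-checked numerically with a random unitary. A sketch (numpy assumed; the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2), np.zeros((2, 2))
g0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# random 4x4 unitary via QR decomposition of a random complex matrix
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)
assert np.allclose(U.conj().T @ U, np.eye(4))

# transformed gammas still satisfy {g'^mu, g'^nu} = 2 eta^{mu nu} I_4
primed = [U.conj().T @ g @ U for g in gammas]
for mu in range(4):
    for nu in range(4):
        anti = primed[mu] @ primed[nu] + primed[nu] @ primed[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("transformed gammas still satisfy the Clifford relations")
```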
{\displaystyle {\begin{aligned}\left(iU^{\dagger }\gamma ^{\mu }U\partial _{\mu }^{\prime }-m\right)\psi \left(x^{\prime },t^{\prime }\right)&=0\\U^{\dagger }(i\gamma ^{\mu }\partial _{\mu }^{\prime }-m)U\psi \left(x^{\prime },t^{\prime }\right)&=0~.\end{aligned}}}
If the transformed spinor is defined as
{\displaystyle \psi ^{\prime }=U\psi }
then the transformed Dirac equation is produced in a way that demonstrates manifest relativistic invariance:
{\displaystyle \left(i\gamma ^{\mu }\partial _{\mu }^{\prime }-m\right)\psi ^{\prime }\left(x^{\prime },t^{\prime }\right)=0~.}
Thus, settling on any unitary representation of the gammas is final, provided the spinor is transformed according to the unitary transformation that corresponds to the given Lorentz transformation.
The various representations of the Dirac matrices employed will bring into focus particular aspects of the physical content in the Dirac wave function. The representation shown here is known as the standard representation – in it, the wave function's upper two components go over into Pauli's 2 spinor wave function in the limit of low energies and small velocities in comparison to light.
The considerations above reveal the origin of the gammas in geometry, hearkening back to Grassmann's original motivation; they represent a fixed basis of unit vectors in spacetime. Similarly, products of the gammas such as γμγν represent oriented surface elements, and so on. With this in mind, one can find the form of the unit volume element on spacetime in terms of the gammas as follows. By definition, it is
{\displaystyle V={\frac {1}{4!}}\epsilon _{\mu \nu \alpha \beta }\gamma ^{\mu }\gamma ^{\nu }\gamma ^{\alpha }\gamma ^{\beta }.}
For this to be an invariant, the epsilon symbol must be a tensor, and so must contain a factor of √g, where g is the determinant of the metric tensor. Since this is negative, that factor is imaginary. Thus
{\displaystyle V=i\gamma ^{0}\gamma ^{1}\gamma ^{2}\gamma ^{3}.}
This matrix is given the special symbol γ5, owing to its importance when one is considering improper transformations of space-time, that is, those that change the orientation of the basis vectors. In the standard representation, it is
{\displaystyle \gamma ^{5}={\begin{pmatrix}0&I_{2}\\I_{2}&0\end{pmatrix}}.}
This matrix will also be found to anticommute with the other four Dirac matrices:
{\displaystyle \gamma ^{5}\gamma ^{\mu }+\gamma ^{\mu }\gamma ^{5}=0}
It takes a leading role when questions of parity arise because the volume element as a directed magnitude changes sign under a space-time reflection. Taking the positive square root above thus amounts to choosing a handedness convention on spacetime.
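Both claims — that iγ⁰γ¹γ²γ³ reproduces the off-diagonal block form of γ⁵ in the standard representation, and that γ⁵ anticommutes with each γ^μ — can be checked by direct multiplication. A numpy sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2), np.zeros((2, 2))
g0 = np.block([[I2, Z2], [Z2, -I2]])
g1, g2, g3 = (np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz))

# volume element V = i g0 g1 g2 g3
g5 = 1j * g0 @ g1 @ g2 @ g3

# standard-representation form: off-diagonal identity blocks
assert np.allclose(g5, np.block([[Z2, I2], [I2, Z2]]))
# anticommutes with each gamma^mu
for g in (g0, g1, g2, g3):
    assert np.allclose(g5 @ g + g @ g5, 0)
print("gamma5 = i g0 g1 g2 g3 verified")
```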
== Comparison with related theories ==
=== Pauli theory ===
The necessity of introducing half-integer spin goes back experimentally to the results of the Stern–Gerlach experiment. A beam of atoms is run through a strong inhomogeneous magnetic field, which then splits into N parts depending on the intrinsic angular momentum of the atoms. It was found that for silver atoms, the beam was split in two; the ground state therefore could not be integer, because even if the intrinsic angular momentum of the atoms were as small as possible, 1, the beam would be split into three parts, corresponding to atoms with Lz = −1, 0, +1. The conclusion is that silver atoms have net intrinsic angular momentum of 1/2. Pauli set up a theory that explained this splitting by introducing a two-component wave function and a corresponding correction term in the Hamiltonian, representing a semi-classical coupling of this wave function to an applied magnetic field, as so in SI units: (Note that bold faced characters imply Euclidean vectors in 3 dimensions, whereas the Minkowski four-vector Aμ can be defined as
{\displaystyle A_{\mu }=\left(\phi /c,-\mathbf {A} \right)}.)
{\displaystyle H={\frac {1}{\ 2\ m\ }}\ {\Bigl (}{\boldsymbol {\sigma }}\cdot {\bigl (}\mathbf {p} -e\ \mathbf {A} {\bigr )}{\Bigr )}^{2}+e\ \phi .}
Here A and ϕ represent the components of the electromagnetic four-potential in their standard SI units, and the three sigmas are the Pauli matrices. On squaring out the first term, a residual interaction with the magnetic field is found, along with the usual classical Hamiltonian of a charged particle interacting with an applied field in SI units:
{\displaystyle H={\frac {1}{\ 2\ m\ }}\ {\bigl (}\mathbf {p} -e\ \mathbf {A} {\bigr )}^{2}+e\ \phi -{\frac {e\ \hbar }{\ 2\ m\ }}\ {\boldsymbol {\sigma }}\cdot \mathbf {B} ~.}
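The σ·B term traces back to the Pauli-matrix identity (σ·a)(σ·b) = (a·b)I₂ + iσ·(a×b): for the operator p − eA the components do not commute, so the "cross" piece survives and is proportional to B. For plain numeric vectors the identity itself can be checked directly (a numpy sketch; the vectors a and b are arbitrary choices):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def sdot(v):
    """sigma . v for a numeric 3-vector v."""
    return sum(vi * si for vi, si in zip(v, sigma))

a = np.array([0.5, -1.0, 2.0])
b = np.array([1.5, 0.25, -0.75])

# (sigma.a)(sigma.b) = (a.b) I + i sigma.(a x b)
lhs = sdot(a) @ sdot(b)
rhs = (a @ b) * np.eye(2) + 1j * sdot(np.cross(a, b))
assert np.allclose(lhs, rhs)

# special case a = b: the square collapses to |a|^2 I (no residual term)
assert np.allclose(sdot(a) @ sdot(a), (a @ a) * np.eye(2))
print("Pauli identity verified")
```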
This Hamiltonian is now a 2 × 2 matrix, so the Schrödinger equation based on it must use a two-component wave function. On introducing the external electromagnetic 4-vector potential into the Dirac equation in a similar way, known as minimal coupling, it takes the form:
{\displaystyle {\Bigl (}\gamma ^{\mu }\ {\bigl (}i\ \hbar \ \partial _{\mu }-e\ A_{\mu }{\bigr )}-m\ c{\Bigr )}\ \psi =0~.}
A second application of the Dirac operator will now reproduce the Pauli term exactly as before, because the spatial Dirac matrices, multiplied by i, have the same squaring and commutation properties as the Pauli matrices. What is more, the value of the gyromagnetic ratio of the electron, standing in front of Pauli's new term, is explained from first principles. This was a major achievement of the Dirac equation and gave physicists great faith in its overall correctness. There is more, however. The Pauli theory may be seen as the low-energy limit of the Dirac theory in the following manner. First the equation is written in the form of coupled equations for 2-spinors with the SI units restored:
{\displaystyle {\begin{pmatrix}mc^{2}-E+e\phi \quad &+c{\boldsymbol {\sigma }}\cdot \left(\mathbf {p} -e\mathbf {A} \right)\\-c{\boldsymbol {\sigma }}\cdot \left(\mathbf {p} -e\mathbf {A} \right)&mc^{2}+E-e\phi \end{pmatrix}}{\begin{pmatrix}\psi _{+}\\\psi _{-}\end{pmatrix}}={\begin{pmatrix}0\\0\end{pmatrix}}~.}
so
{\displaystyle {\begin{aligned}(E-e\phi )\ \psi _{+}-c{\boldsymbol {\sigma }}\cdot \left(\mathbf {p} -e\mathbf {A} \right)\ \psi _{-}&=mc^{2}\ \psi _{+}\\c{\boldsymbol {\sigma }}\cdot \left(\mathbf {p} -e\mathbf {A} \right)\ \psi _{+}-\left(E-e\phi \right)\ \psi _{-}&=mc^{2}\ \psi _{-}\end{aligned}}~.}
Assuming the field is weak and the motion of the electron non-relativistic, the total energy of the electron is approximately equal to its rest energy, and the momentum going over to the classical value,
{\displaystyle {\begin{aligned}E-e\phi &\approx mc^{2}\\\mathbf {p} &\approx m\mathbf {v} \end{aligned}}}
and so the second equation may be written
{\displaystyle \psi _{-}\approx {\frac {1}{\ 2\ mc\ }}\ {\boldsymbol {\sigma }}\cdot {\Bigl (}\mathbf {p} -e\ \mathbf {A} {\Bigr )}\ \psi _{+},}
which is of order v/c.
Thus, at typical energies and velocities, the bottom components of the Dirac spinor in the standard representation are much suppressed in comparison to the top components. Substituting this expression into the first equation gives after some rearrangement
{\displaystyle {\bigl (}E-mc^{2}{\bigr )}\ \psi _{+}={\frac {1}{\ 2m\ }}\ {\Bigl [}{\boldsymbol {\sigma }}\cdot {\bigl (}\mathbf {p} -e\mathbf {A} {\bigr )}{\Bigr ]}^{2}\ \psi _{+}+e\ \phi \ \psi _{+}}
The operator on the left represents the particle's total energy reduced by its rest energy, which is just its classical kinetic energy, so one can recover Pauli's theory upon identifying his 2-spinor with the top components of the Dirac spinor in the non-relativistic approximation. A further approximation gives the Schrödinger equation as the limit of the Pauli theory. Thus, the Schrödinger equation may be seen as the far non-relativistic approximation of the Dirac equation when one may neglect spin and work only at low energies and velocities. This also was a great triumph for the new equation, as it traced the mysterious i that appears in it, and the necessity of a complex wave function, back to the geometry of spacetime through the Dirac algebra. It also highlights why the Schrödinger equation, although ostensibly in the form of a diffusion equation, actually represents wave propagation.
It should be strongly emphasized that the entire Dirac spinor represents an irreducible whole. The separation, done here, of the Dirac spinor into large and small components depends on the low-energy approximation being valid. The components that were neglected above, to show that the Pauli theory can be recovered by a low-velocity approximation of Dirac's equation, are necessary to produce new phenomena observed in the relativistic regime – among them antimatter, and the creation and annihilation of particles.
=== Weyl theory ===
In the massless case {\displaystyle m=0}, the Dirac equation reduces to the Weyl equation, which describes relativistic massless spin-1/2 particles.
The theory acquires a second {\displaystyle {\text{U}}(1)} symmetry: see below.
== Physical interpretation ==
=== Identification of observables ===
The critical physical question in a quantum theory is this: what are the physically observable quantities defined by the theory? According to the postulates of quantum mechanics, such quantities are defined by self-adjoint operators that act on the Hilbert space of possible states of a system. The eigenvalues of these operators are then the possible results of measuring the corresponding physical quantity. In the Schrödinger theory, the simplest such object is the overall Hamiltonian, which represents the total energy of the system. To maintain this interpretation on passing to the Dirac theory, the Hamiltonian must be taken to be
{\displaystyle H=\gamma ^{0}\left[mc^{2}+c\gamma ^{k}\left(p_{k}-qA_{k}\right)\right]+cqA^{0}.}
where, as always, there is an implied summation over the twice-repeated index k = 1, 2, 3. This looks promising, because one can see by inspection the rest energy of the particle and, in the case of A = 0, the energy of a charge placed in an electric potential cqA0. What about the term involving the vector potential? In classical electrodynamics, the energy of a charge moving in an applied potential is
{\displaystyle H=c{\sqrt {\left(\mathbf {p} -q\mathbf {A} \right)^{2}+m^{2}c^{2}}}+qA^{0}.}
Thus, the Dirac Hamiltonian is fundamentally distinguished from its classical counterpart, and one must take great care to correctly identify what is observable in this theory. Much of the apparently paradoxical behavior implied by the Dirac equation amounts to a misidentification of these observables.
=== Hole theory ===
The negative E solutions to the equation are problematic, for it was assumed that the particle has a positive energy. Mathematically speaking, however, there seems to be no reason for us to reject the negative-energy solutions. Since they exist, they cannot simply be ignored, for once the interaction between the electron and the electromagnetic field is included, any electron placed in a positive-energy eigenstate would decay into negative-energy eigenstates of successively lower energy. Real electrons obviously do not behave in this way, or they would disappear by emitting energy in the form of photons.
To cope with this problem, Dirac introduced the hypothesis, known as hole theory, that the vacuum is the many-body quantum state in which all the negative-energy electron eigenstates are occupied. This description of the vacuum as a "sea" of electrons is called the Dirac sea. Since the Pauli exclusion principle forbids electrons from occupying the same state, any additional electron would be forced to occupy a positive-energy eigenstate, and positive-energy electrons would be forbidden from decaying into negative-energy eigenstates.
Dirac further reasoned that if the negative-energy eigenstates are incompletely filled, each unoccupied eigenstate – called a hole – would behave like a positively charged particle. The hole possesses a positive energy because energy is required to create a particle–hole pair from the vacuum. As noted above, Dirac initially thought that the hole might be the proton, but Hermann Weyl pointed out that the hole should behave as if it had the same mass as an electron, whereas the proton is over 1800 times heavier. The hole was eventually identified as the positron, experimentally discovered by Carl Anderson in 1932.
It is not entirely satisfactory to describe the "vacuum" using an infinite sea of negative-energy electrons. The infinitely negative contributions from the sea of negative-energy electrons have to be canceled by an infinite positive "bare" energy and the contribution to the charge density and current coming from the sea of negative-energy electrons is exactly canceled by an infinite positive "jellium" background so that the net electric charge density of the vacuum is zero. In quantum field theory, a Bogoliubov transformation on the creation and annihilation operators (turning an occupied negative-energy electron state into an unoccupied positive energy positron state and an unoccupied negative-energy electron state into an occupied positive energy positron state) allows us to bypass the Dirac sea formalism even though, formally, it is equivalent to it.
In certain applications of condensed matter physics, however, the underlying concepts of "hole theory" are valid. The sea of conduction electrons in an electrical conductor, called a Fermi sea, contains electrons with energies up to the chemical potential of the system. An unfilled state in the Fermi sea behaves like a positively charged electron, and although it too is referred to as an "electron hole", it is distinct from a positron. The negative charge of the Fermi sea is balanced by the positively charged ionic lattice of the material.
=== In quantum field theory ===
In quantum field theories such as quantum electrodynamics, the Dirac field is subject to a process of second quantization, which resolves some of the paradoxical features of the equation.
== Mathematical formulation ==
In its modern formulation for field theory, the Dirac equation is written in terms of a Dirac spinor field {\displaystyle \psi } taking values in a complex vector space described concretely as {\displaystyle \mathbb {C} ^{4}}, defined on flat spacetime (Minkowski space) {\displaystyle \mathbb {R} ^{1,3}}. Its expression also contains gamma matrices and a parameter {\displaystyle m>0} interpreted as the mass, as well as other physical constants. Dirac first obtained his equation through a factorization of Einstein's energy-momentum-mass equivalence relation, assuming a scalar product of momentum vectors determined by the metric tensor, and quantized the resulting relation by associating momenta to their respective operators.
In terms of a field {\displaystyle \psi :\mathbb {R} ^{1,3}\rightarrow \mathbb {C} ^{4}}, the Dirac equation is then
{\displaystyle (i\hbar \gamma ^{\mu }\partial _{\mu }-mc)\psi (x)=0}
and in natural units, with Feynman slash notation,
{\displaystyle (i\partial \!\!\!/-m)\psi (x)=0}
The gamma matrices are a set of four complex matrices (elements of {\displaystyle {\text{Mat}}_{4\times 4}(\mathbb {C} )}) that satisfy the defining anti-commutation relations:
{\displaystyle \{\gamma ^{\mu },\gamma ^{\nu }\}=2\eta ^{\mu \nu }I_{4}}
where {\displaystyle \eta ^{\mu \nu }} is the Minkowski metric element, and the indices {\displaystyle \mu ,\nu } run over 0, 1, 2 and 3. These matrices can be realized explicitly under a choice of representation. Two common choices are the Dirac representation and the chiral representation. The Dirac representation is
{\displaystyle \gamma ^{0}={\begin{pmatrix}I_{2}&0\\0&-I_{2}\end{pmatrix}},\quad \gamma ^{i}={\begin{pmatrix}0&\sigma ^{i}\\-\sigma ^{i}&0\end{pmatrix}},}
where {\displaystyle \sigma ^{i}} are the Pauli matrices.
For the chiral representation the {\displaystyle \gamma ^{i}} are the same, but
{\displaystyle \gamma ^{0}={\begin{pmatrix}0&I_{2}\\I_{2}&0\end{pmatrix}}~.}
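The defining anti-commutation relations can be verified directly. The following is a minimal numerical sketch (not part of the standard presentation) that builds both representations from the Pauli matrices and checks {γ^μ, γ^ν} = 2η^{μν}I₄ with the mostly-minus metric used here:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
I2 = np.eye(2, dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Dirac representation: gamma^0 = diag(I, -I); chiral: gamma^0 off-diagonal.
# The spatial gamma^i are the same in both representations.
spatial = [np.block([[Z, s], [-s, Z]]) for s in sigma]
dirac_gammas = [np.block([[I2, Z], [Z, -I2]])] + spatial
chiral_gammas = [np.block([[Z, I2], [I2, Z]])] + spatial

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)

def check_clifford(gammas):
    # {gamma^mu, gamma^nu} = 2 eta^{mu nu} I_4 for all index pairs
    return all(
        np.allclose(gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu],
                    2 * eta[mu, nu] * np.eye(4))
        for mu in range(4) for nu in range(4)
    )

dirac_ok = check_clifford(dirac_gammas)
chiral_ok = check_clifford(chiral_gammas)
print(dirac_ok, chiral_ok)
```

Any set of matrices satisfying these relations gives an equivalent theory; the two representations above differ only by a change of basis on spinor space.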
The slash notation is a compact notation for {\displaystyle A\!\!\!/:=\gamma ^{\mu }A_{\mu }}
where {\displaystyle A} is a four-vector (often it is the four-vector differential operator {\displaystyle \partial _{\mu }}). The summation over the index {\displaystyle \mu } is implied.
Alternatively the four coupled linear first-order partial differential equations for the four quantities that make up the wave function can be written as a vector. In Planck units this becomes:
{\displaystyle i\partial _{x}{\begin{bmatrix}+\psi _{4}\\+\psi _{3}\\-\psi _{2}\\-\psi _{1}\end{bmatrix}}+\partial _{y}{\begin{bmatrix}+\psi _{4}\\-\psi _{3}\\-\psi _{2}\\+\psi _{1}\end{bmatrix}}+i\partial _{z}{\begin{bmatrix}+\psi _{3}\\-\psi _{4}\\-\psi _{1}\\+\psi _{2}\end{bmatrix}}-m{\begin{bmatrix}+\psi _{1}\\+\psi _{2}\\+\psi _{3}\\+\psi _{4}\end{bmatrix}}=i\partial _{t}{\begin{bmatrix}-\psi _{1}\\-\psi _{2}\\+\psi _{3}\\+\psi _{4}\end{bmatrix}}}
which makes it clearer that it is a set of four partial differential equations with four unknown functions.
(Note that the {\displaystyle \partial _{y}} term is not preceded by i because σy is imaginary.)
=== Dirac adjoint and the adjoint equation ===
The Dirac adjoint of the spinor field {\displaystyle \psi (x)} is defined as
{\displaystyle {\bar {\psi }}(x)=\psi (x)^{\dagger }\gamma ^{0}.}
Using the property of gamma matrices (which follows straightforwardly from the Hermiticity properties of the {\displaystyle \gamma ^{\mu }}) that
{\displaystyle (\gamma ^{\mu })^{\dagger }=\gamma ^{0}\gamma ^{\mu }\gamma ^{0},}
one can derive the adjoint Dirac equation by taking the Hermitian conjugate of the Dirac equation and multiplying on the right by {\displaystyle \gamma ^{0}}:
{\displaystyle {\bar {\psi }}(x)(-i\gamma ^{\mu }{\overleftarrow {\partial }}_{\mu }-m)=0}
where the partial derivative {\displaystyle {\overleftarrow {\partial }}_{\mu }} acts from the right on {\displaystyle {\bar {\psi }}(x)}: written in the usual way in terms of a left action of the derivative, we have
{\displaystyle -i\partial _{\mu }{\bar {\psi }}(x)\gamma ^{\mu }-m{\bar {\psi }}(x)=0.}
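The Hermiticity identity used in this derivation is easy to confirm numerically. A minimal sketch in the Dirac representation (the identity is representation-dependent only up to unitary equivalence) checking (γ^μ)† = γ^0 γ^μ γ^0 for all four matrices:

```python
import numpy as np

# Dirac-representation gamma matrices built from the Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[I2, Z], [Z, -I2]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sigma]

# (gamma^mu)^dagger = gamma^0 gamma^mu gamma^0 for every mu
hermiticity_ok = all(
    np.allclose(g.conj().T, gamma[0] @ g @ gamma[0]) for g in gamma
)
print(hermiticity_ok)
```

Concretely, γ^0 is Hermitian and the spatial γ^i are anti-Hermitian, which is exactly what the identity encodes.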
=== Klein–Gordon equation ===
Applying {\displaystyle i\partial \!\!\!/+m} to the Dirac equation gives
{\displaystyle (\partial _{\mu }\partial ^{\mu }+m^{2})\psi (x)=0.}
That is, each component of the Dirac spinor field satisfies the Klein–Gordon equation.
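The algebraic fact behind this is that the square of the slashed momentum is a multiple of the identity: (γ^μ p_μ)² = p_μ p^μ I₄, a direct consequence of the anti-commutation relations. A quick numerical sketch, with an arbitrary (assumed) 4-momentum chosen only for illustration:

```python
import numpy as np

# Dirac-representation gamma matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[I2, Z], [Z, -I2]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sigma]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# An arbitrary 4-momentum p^mu (assumed values, not on-shell)
p_upper = np.array([2.0, 0.3, -0.4, 0.5])
p_lower = eta @ p_upper                    # p_mu = eta_{mu nu} p^nu
pslash = sum(gamma[mu] * p_lower[mu] for mu in range(4))

# (gamma^mu p_mu)^2 should equal (p_mu p^mu) I_4
p2 = p_upper @ p_lower
squares_match = np.allclose(pslash @ pslash, p2 * np.eye(4))
print(squares_match)
```

For a plane wave, p̸ p̸ = p² then turns the squared Dirac operator into the Klein–Gordon operator acting componentwise.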
=== Conserved current ===
A conserved current of the theory is
{\displaystyle J^{\mu }={\bar {\psi }}\gamma ^{\mu }\psi .}
Another approach to derive this expression is by variational methods, applying Noether's theorem for the global {\displaystyle {\text{U}}(1)} symmetry to derive the conserved current {\displaystyle J^{\mu }.}
=== Solutions ===
Since the Dirac operator acts on 4-tuples of square-integrable functions, its solutions should be members of the same Hilbert space. The fact that the energies of the solutions do not have a lower bound is unexpected.
==== Plane-wave solutions ====
Plane-wave solutions are those arising from an ansatz
{\displaystyle \psi (x)=u(\mathbf {p} )e^{-ip\cdot x}}
which models a particle with definite 4-momentum {\displaystyle p=(E_{\mathbf {p} },\mathbf {p} )} where {\textstyle E_{\mathbf {p} }={\sqrt {m^{2}+|\mathbf {p} |^{2}}}.}
For this ansatz, the Dirac equation becomes an equation for {\displaystyle u(\mathbf {p} )}:
{\displaystyle \left(\gamma ^{\mu }p_{\mu }-m\right)u(\mathbf {p} )=0.}
After picking a representation for the gamma matrices {\displaystyle \gamma ^{\mu }}, solving this is a matter of solving a system of linear equations. It is a representation-free property of gamma matrices that the solution space is two-dimensional.
For example, in the chiral representation for {\displaystyle \gamma ^{\mu }}, the solution space is parametrised by a {\displaystyle \mathbb {C} ^{2}} vector {\displaystyle \xi }, with
{\displaystyle u(\mathbf {p} )={\begin{pmatrix}{\sqrt {\sigma ^{\mu }p_{\mu }}}\xi \\{\sqrt {{\bar {\sigma }}^{\mu }p_{\mu }}}\xi \end{pmatrix}}}
where {\displaystyle \sigma ^{\mu }=(I_{2},\sigma ^{i}),{\bar {\sigma }}^{\mu }=(I_{2},-\sigma ^{i})} and {\displaystyle {\sqrt {\cdot }}} is the Hermitian matrix square root.
These plane-wave solutions provide a starting point for canonical quantization.
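The matrix-square-root construction above can be verified numerically. The following sketch (with an assumed mass and 3-momentum, chosen only for illustration) builds u(p) in the chiral representation via an eigendecomposition square root and checks that it solves (γ^μ p_μ − m)u = 0:

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

# Chiral-representation gamma matrices
gamma = [np.block([[Z, I2], [I2, Z]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sigma]

def herm_sqrt(M):
    # Square root of a positive-definite Hermitian matrix via eigendecomposition
    w, v = np.linalg.eigh(M)
    return v @ np.diag(np.sqrt(w)) @ v.conj().T

m = 1.0                                    # assumed mass
p_vec = np.array([0.3, -0.2, 0.4])         # assumed 3-momentum p^i
E = np.sqrt(m**2 + p_vec @ p_vec)          # on-shell energy
p_lower = np.array([E, -p_vec[0], -p_vec[1], -p_vec[2]])  # p_mu, metric (+,-,-,-)

sigma_mu = [I2] + sigma                    # sigma^mu = (I_2, sigma^i)
sigma_bar_mu = [I2] + [-s for s in sigma]  # sigma-bar^mu = (I_2, -sigma^i)
sp = sum(sigma_mu[mu] * p_lower[mu] for mu in range(4))
sbp = sum(sigma_bar_mu[mu] * p_lower[mu] for mu in range(4))

xi = np.array([1.0, 0.0], dtype=complex)   # arbitrary 2-spinor parameter
u = np.concatenate([herm_sqrt(sp) @ xi, herm_sqrt(sbp) @ xi])

pslash = sum(gamma[mu] * p_lower[mu] for mu in range(4))
residual = np.linalg.norm((pslash - m * np.eye(4)) @ u)
solves_dirac = residual < 1e-8
print(solves_dirac)
```

The check works because (σ·p)(σ̄·p) = p² I₂ = m² I₂ on shell, so the two square-root blocks multiply to m, exactly as the solution formula requires.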
=== Lagrangian formulation ===
Both the Dirac equation and the adjoint Dirac equation can be obtained by varying the action with a specific Lagrangian density given by:
{\displaystyle {\mathcal {L}}=i\hbar c{\overline {\psi }}\gamma ^{\mu }\partial _{\mu }\psi -mc^{2}{\overline {\psi }}\psi }
If one varies this with respect to {\displaystyle \psi } one gets the adjoint Dirac equation. Meanwhile, if one varies this with respect to {\displaystyle {\bar {\psi }}} one gets the Dirac equation.
In natural units and with the slash notation, the action is then
{\displaystyle S=\int d^{4}x\,{\bar {\psi }}\,(i\partial \!\!\!{\big /}-m)\,\psi .}
For this action, the conserved current {\displaystyle J^{\mu }} above arises as the conserved current corresponding to the global {\displaystyle {\text{U}}(1)} symmetry through Noether's theorem for field theory. Gauging this field theory by changing the symmetry to a local, spacetime point dependent one gives gauge symmetry (really, gauge redundancy). The resultant theory is quantum electrodynamics or QED. See below for a more detailed discussion.
=== Lorentz invariance ===
The Dirac equation is invariant under Lorentz transformations, that is, under the action of the Lorentz group {\displaystyle {\text{SO}}(1,3)} or strictly {\displaystyle {\text{SO}}(1,3)^{+}}, the component connected to the identity.
For a Dirac spinor viewed concretely as taking values in {\displaystyle \mathbb {C} ^{4}}, the transformation under a Lorentz transformation {\displaystyle \Lambda } is given by a {\displaystyle 4\times 4} complex matrix {\displaystyle S[\Lambda ]}. There are some subtleties in defining the corresponding {\displaystyle S[\Lambda ]}, as well as a standard abuse of notation.
Most treatments occur at the Lie algebra level. The Lorentz group of {\displaystyle 4\times 4} real matrices acting on {\displaystyle \mathbb {R} ^{1,3}} is generated by a set of six matrices {\displaystyle \{M^{\mu \nu }\}} with components
{\displaystyle (M^{\mu \nu })^{\rho }{}_{\sigma }=\eta ^{\mu \rho }\delta ^{\nu }{}_{\sigma }-\eta ^{\nu \rho }\delta ^{\mu }{}_{\sigma }.}
When both the {\displaystyle \rho ,\sigma } indices are raised or lowered, these are simply the 'standard basis' of antisymmetric matrices.
These satisfy the Lorentz algebra commutation relations
{\displaystyle [M^{\mu \nu },M^{\rho \sigma }]=M^{\mu \sigma }\eta ^{\nu \rho }-M^{\nu \sigma }\eta ^{\mu \rho }+M^{\nu \rho }\eta ^{\mu \sigma }-M^{\mu \rho }\eta ^{\nu \sigma }.}
In the article on the Dirac algebra, it is also found that the spin generators {\displaystyle S^{\mu \nu }={\frac {1}{4}}[\gamma ^{\mu },\gamma ^{\nu }]}
satisfy the Lorentz algebra commutation relations.
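This claim can be checked by brute force over all index combinations. A minimal numerical sketch (Dirac representation assumed; the relations are representation-independent) verifying that the spin generators satisfy the same commutation relations as the {M^{μν}} above:

```python
import numpy as np

# Dirac-representation gamma matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[I2, Z], [Z, -I2]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sigma]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Spin generators S^{mu nu} = (1/4)[gamma^mu, gamma^nu]
S = [[0.25 * (gamma[m] @ gamma[n] - gamma[n] @ gamma[m])
      for n in range(4)] for m in range(4)]

def comm(A, B):
    return A @ B - B @ A

# [S^{mn}, S^{rt}] = S^{mt} eta^{nr} - S^{nt} eta^{mr}
#                  + S^{nr} eta^{mt} - S^{mr} eta^{nt}
lorentz_algebra_ok = all(
    np.allclose(comm(S[m][n], S[r][t]),
                S[m][t] * eta[n, r] - S[n][t] * eta[m, r]
                + S[n][r] * eta[m, t] - S[m][r] * eta[n, t])
    for m in range(4) for n in range(4) for r in range(4) for t in range(4)
)
print(lorentz_algebra_ok)
```

Since the S^{μν} close under the same bracket as the M^{μν}, exponentiating them gives a (projective) representation of the Lorentz group on spinor space.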
A Lorentz transformation {\displaystyle \Lambda } can be written as
{\displaystyle \Lambda =\exp \left({\frac {1}{2}}\omega _{\mu \nu }M^{\mu \nu }\right)}
where the components {\displaystyle \omega _{\mu \nu }} are antisymmetric in {\displaystyle \mu ,\nu }.
The corresponding transformation on spin space is
{\displaystyle S[\Lambda ]=\exp \left({\frac {1}{2}}\omega _{\mu \nu }S^{\mu \nu }\right).}
This is an abuse of notation, but a standard one. The reason is that {\displaystyle S[\Lambda ]} is not a well-defined function of {\displaystyle \Lambda }, since there are two different sets of components {\displaystyle \omega _{\mu \nu }} (up to equivalence) that give the same {\displaystyle \Lambda } but different {\displaystyle S[\Lambda ]}. In practice we implicitly pick one of these {\displaystyle \omega _{\mu \nu }} and then {\displaystyle S[\Lambda ]} is well defined in terms of {\displaystyle \omega _{\mu \nu }.}
Under a Lorentz transformation, the Dirac equation
{\displaystyle i\gamma ^{\mu }\partial _{\mu }\psi (x)-m\psi (x)=0}
becomes
{\displaystyle i\gamma ^{\mu }((\Lambda ^{-1})_{\mu }{}^{\nu }\partial _{\nu })S[\Lambda ]\psi (\Lambda ^{-1}x)-mS[\Lambda ]\psi (\Lambda ^{-1}x)=0.}
Associated to Lorentz invariance is a conserved Noether current, or rather a tensor of conserved Noether currents {\displaystyle ({\mathcal {J}}^{\rho \sigma })^{\mu }}. Similarly, since the equation is invariant under translations, there is a tensor of conserved Noether currents {\displaystyle T^{\mu \nu }}, which can be identified as the stress-energy tensor of the theory. The Lorentz current {\displaystyle ({\mathcal {J}}^{\rho \sigma })^{\mu }} can be written in terms of the stress-energy tensor in addition to a tensor representing internal angular momentum.
==== Further discussion of Lorentz covariance of the Dirac equation ====
The Dirac equation is Lorentz covariant. Articulating this helps illuminate not only the Dirac equation, but also the Majorana spinor and Elko spinor, which although closely related, have subtle and important differences.
Understanding Lorentz covariance is simplified by keeping in mind the geometric character of the process. Let {\displaystyle a} be a single, fixed point in the spacetime manifold. Its location can be expressed in multiple coordinate systems. In the physics literature, these are written as {\displaystyle x} and {\displaystyle x'}, with the understanding that both {\displaystyle x} and {\displaystyle x'} describe the same point {\displaystyle a}, but in different local frames of reference (a frame of reference over a small extended patch of spacetime).
One can imagine {\displaystyle a} as having a fiber of different coordinate frames above it. In geometric terms, one says that spacetime can be characterized as a fiber bundle, and specifically, the frame bundle. The difference between two points {\displaystyle x} and {\displaystyle x'} in the same fiber is a combination of rotations and Lorentz boosts. A choice of coordinate frame is a (local) section through that bundle.
Coupled to the frame bundle is a second bundle, the spinor bundle. A section through the spinor bundle is just the particle field (the Dirac spinor, in the present case). Different points in the spinor fiber correspond to the same physical object (the fermion) but expressed in different Lorentz frames. Clearly, the frame bundle and the spinor bundle must be tied together in a consistent fashion to get consistent results; formally, one says that the spinor bundle is the associated bundle; it is associated to a principal bundle, which in the present case is the frame bundle. Differences between points on the fiber correspond to the symmetries of the system. The spinor bundle has two distinct generators of its symmetries: the total angular momentum and the intrinsic angular momentum. Both correspond to Lorentz transformations, but in different ways.
The presentation here follows that of Itzykson and Zuber. It is very nearly identical to that of Bjorken and Drell. A similar derivation in a general relativistic setting can be found in Weinberg. Here we fix our spacetime to be flat, that is, our spacetime is Minkowski space.
Under a Lorentz transformation {\displaystyle x\mapsto x',} the Dirac spinor is defined to transform as
{\displaystyle \psi '(x')=S\psi (x)}
It can be shown that an explicit expression for {\displaystyle S} is given by
{\displaystyle S=\exp \left({\frac {-i}{4}}\omega ^{\mu \nu }\sigma _{\mu \nu }\right)}
where {\displaystyle \omega ^{\mu \nu }} parameterizes the Lorentz transformation, and {\displaystyle \sigma _{\mu \nu }} are the six 4×4 matrices satisfying:
{\displaystyle \sigma ^{\mu \nu }={\frac {i}{2}}[\gamma ^{\mu },\gamma ^{\nu }]~.}
This matrix can be interpreted as the intrinsic angular momentum of the Dirac field. That it deserves this interpretation arises by contrasting it to the generator {\displaystyle J_{\mu \nu }} of Lorentz transformations, having the form
{\displaystyle J_{\mu \nu }={\frac {1}{2}}\sigma _{\mu \nu }+i(x_{\mu }\partial _{\nu }-x_{\nu }\partial _{\mu })}
This can be interpreted as the total angular momentum. It acts on the spinor field as
{\displaystyle \psi ^{\prime }(x)=\exp \left({\frac {-i}{2}}\omega ^{\mu \nu }J_{\mu \nu }\right)\psi (x)}
Note the {\displaystyle x} above does not have a prime on it: the above is obtained by transforming {\displaystyle x\mapsto x'}, obtaining the change to {\displaystyle \psi (x)\mapsto \psi '(x')}, and then returning to the original coordinate system {\displaystyle x'\mapsto x}.
The geometrical interpretation of the above is that the frame field is affine, having no preferred origin. The generator {\displaystyle J_{\mu \nu }} generates the symmetries of this space: it provides a relabelling of a fixed point {\displaystyle x}. The generator {\displaystyle \sigma _{\mu \nu }} generates a movement from one point in the fiber to another: a movement from {\displaystyle x\mapsto x'} with both {\displaystyle x} and {\displaystyle x'} still corresponding to the same spacetime point {\displaystyle a.}
These perhaps obtuse remarks can be elucidated with explicit algebra.
Let {\displaystyle x'=\Lambda x} be a Lorentz transformation. The Dirac equation is
{\displaystyle i\gamma ^{\mu }{\frac {\partial }{\partial x^{\mu }}}\psi (x)-m\psi (x)=0}
If the Dirac equation is to be covariant, then it should have exactly the same form in all Lorentz frames:
{\displaystyle i\gamma ^{\mu }{\frac {\partial }{\partial x^{\prime \mu }}}\psi ^{\prime }(x^{\prime })-m\psi ^{\prime }(x^{\prime })=0}
The two spinors {\displaystyle \psi } and {\displaystyle \psi ^{\prime }} should both describe the same physical field, and so should be related by a transformation that does not change any physical observables (charge, current, mass, etc.). The transformation should encode only the change of coordinate frame. It can be shown that such a transformation is a 4×4 unitary matrix. Thus, one may presume that the relation between the two frames can be written as
{\displaystyle \psi ^{\prime }(x^{\prime })=S(\Lambda )\psi (x)}
Inserting this into the transformed equation, the result is
{\displaystyle i\gamma ^{\mu }{\frac {\partial x^{\nu }}{\partial x^{\prime \mu }}}{\frac {\partial }{\partial x^{\nu }}}S(\Lambda )\psi (x)-mS(\Lambda )\psi (x)=0}
The coordinates related by Lorentz transformation satisfy:
{\displaystyle {\frac {\partial x^{\nu }}{\partial x^{\prime \mu }}}={\left(\Lambda ^{-1}\right)^{\nu }}_{\mu }}
The original Dirac equation is then regained if
{\displaystyle S(\Lambda )\gamma ^{\mu }S^{-1}(\Lambda )={\left(\Lambda ^{-1}\right)^{\mu }}_{\nu }\gamma ^{\nu }}
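This covariance condition can be tested for a finite transformation. The sketch below (an illustration, not part of the standard presentation) picks assumed values for a boost rapidity and a rotation angle, builds Λ = exp(ω) on vector indices and S(Λ) = exp(−(i/4)ω_{μν}σ^{μν}) on spinor indices, and checks the relation; a hand-rolled power-series matrix exponential keeps it numpy-only:

```python
import numpy as np

# Dirac-representation gamma matrices
sigma2 = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[I2, Z], [Z, -I2]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sigma2]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def expm(A, terms=80):
    # Matrix exponential by truncated power series (fine for small matrices)
    out = np.eye(A.shape[0], dtype=complex)
    term = out.copy()
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Antisymmetric parameters omega^{mu nu}: an x-boost plus an x-y rotation
omega_up = np.zeros((4, 4))
omega_up[0, 1], omega_up[1, 0] = 0.3, -0.3   # boost rapidity (assumed value)
omega_up[1, 2], omega_up[2, 1] = 0.2, -0.2   # rotation angle (assumed value)

omega_mixed = omega_up @ eta                 # omega^mu_nu
omega_low = eta @ omega_up @ eta             # omega_{mu nu}
Lambda_inv = expm(-omega_mixed)              # (Lambda^{-1})^mu_nu

# sigma^{mu nu} = (i/2)[gamma^mu, gamma^nu]; S = exp(-(i/4) omega_{mu nu} sigma^{mu nu})
sigma_up = [[0.5j * (gamma[m] @ gamma[n] - gamma[n] @ gamma[m])
             for n in range(4)] for m in range(4)]
gen = sum(omega_low[m, n] * sigma_up[m][n] for m in range(4) for n in range(4))
S = expm(-0.25j * gen)
S_inv = expm(0.25j * gen)

# Check S gamma^mu S^{-1} = (Lambda^{-1})^mu_nu gamma^nu for each mu
covariance_ok = all(
    np.allclose(S @ gamma[m] @ S_inv,
                sum(Lambda_inv[m, n] * gamma[n] for n in range(4)))
    for m in range(4)
)
print(covariance_ok)
```

Note that for the boost part S(Λ) is not unitary, which is consistent with the relation holding at the level of similarity transformations rather than unitary conjugation.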
An explicit expression for {\displaystyle S(\Lambda )} (equal to the expression given above) can be obtained by considering a Lorentz transformation of infinitesimal rotation near the identity transformation:
{\displaystyle {\Lambda ^{\mu }}_{\nu }={g^{\mu }}_{\nu }+{\omega ^{\mu }}_{\nu }\ ,\ {(\Lambda ^{-1})^{\mu }}_{\nu }={g^{\mu }}_{\nu }-{\omega ^{\mu }}_{\nu }}
where {\displaystyle {g^{\mu }}_{\nu }} is the metric tensor:
{\displaystyle {g^{\mu }}_{\nu }=g^{\mu \nu '}g_{\nu '\nu }={\delta ^{\mu }}_{\nu }}
and is symmetric, while {\displaystyle \omega _{\mu \nu }={\omega ^{\alpha }}_{\nu }g_{\alpha \mu }} is antisymmetric. Substituting these and working to first order in {\displaystyle \omega }, one obtains
{\displaystyle S(\Lambda )=I+{\frac {-i}{4}}\omega ^{\mu \nu }\sigma _{\mu \nu }+{\mathcal {O}}\left(\omega ^{2}\right),}
which is the (infinitesimal) form for {\displaystyle S} above and yields the relation {\displaystyle \sigma ^{\mu \nu }={\frac {i}{2}}[\gamma ^{\mu },\gamma ^{\nu }]}. To obtain the affine relabelling, write
{\displaystyle {\begin{aligned}\psi '(x')&=\left(I+{\frac {-i}{4}}\omega ^{\mu \nu }\sigma _{\mu \nu }\right)\psi (x)\\&=\left(I+{\frac {-i}{4}}\omega ^{\mu \nu }\sigma _{\mu \nu }\right)\psi (x'+{\omega ^{\mu }}_{\nu }\,x^{\prime \,\nu })\\&=\left(I+{\frac {-i}{4}}\omega ^{\mu \nu }\sigma _{\mu \nu }-x_{\mu }^{\prime }\omega ^{\mu \nu }\partial _{\nu }\right)\psi (x')\\&=\left(I+{\frac {-i}{2}}\omega ^{\mu \nu }J_{\mu \nu }\right)\psi (x')\\\end{aligned}}}
After properly antisymmetrizing, one obtains the generator of symmetries {\displaystyle J_{\mu \nu }} given earlier. Thus, both {\displaystyle J_{\mu \nu }} and {\displaystyle \sigma _{\mu \nu }} can be said to be the "generators of Lorentz transformations", but with a subtle distinction: the first corresponds to a relabelling of points on the affine frame bundle, which forces a translation along the fiber of the spinor on the spin bundle, while the second corresponds to translations along the fiber of the spin bundle (taken as a movement {\displaystyle x\mapsto x'} along the frame bundle, as well as a movement {\displaystyle \psi \mapsto \psi '} along the fiber of the spin bundle.) Weinberg provides additional arguments for the physical interpretation of these as total and intrinsic angular momentum.
== Other formulations ==
The Dirac equation can be formulated in a number of other ways.
=== Curved spacetime ===
This article has developed the Dirac equation in flat spacetime according to special relativity. It is possible to formulate the Dirac equation in curved spacetime.
=== The algebra of physical space ===
This article developed the Dirac equation using four-vectors and Schrödinger operators. The Dirac equation in the algebra of physical space uses a Clifford algebra over the real numbers, a type of geometric algebra.
=== Coupled Weyl spinors ===
As mentioned above, the massless Dirac equation immediately reduces to the homogeneous Weyl equation. By using the chiral representation of the gamma matrices, the nonzero-mass equation can also be decomposed into a pair of coupled inhomogeneous Weyl equations acting on the first and last pairs of indices of the original four-component spinor, i.e.
{\displaystyle \psi ={\begin{pmatrix}\psi _{L}\\\psi _{R}\end{pmatrix}}}
, where {\displaystyle \psi _{L}} and {\displaystyle \psi _{R}} are each two-component Weyl spinors. This is because the skew block form of the chiral gamma matrices means that they swap the {\displaystyle \psi _{L}} and {\displaystyle \psi _{R}} and apply the two-by-two Pauli matrices to each:
{\displaystyle \gamma ^{\mu }{\begin{pmatrix}\psi _{L}\\\psi _{R}\end{pmatrix}}={\begin{pmatrix}\sigma ^{\mu }\psi _{R}\\{\overline {\sigma }}^{\mu }\psi _{L}\end{pmatrix}}.}
So the Dirac equation
{\displaystyle (i\gamma ^{\mu }\partial _{\mu }-m){\begin{pmatrix}\psi _{L}\\\psi _{R}\end{pmatrix}}=0}
becomes
{\displaystyle i{\begin{pmatrix}\sigma ^{\mu }\partial _{\mu }\psi _{R}\\{\overline {\sigma }}^{\mu }\partial _{\mu }\psi _{L}\end{pmatrix}}=m{\begin{pmatrix}\psi _{L}\\\psi _{R}\end{pmatrix}},}
which in turn is equivalent to a pair of inhomogeneous Weyl equations for massless left- and right-helicity spinors, where the coupling strength is proportional to the mass:
{\displaystyle i\sigma ^{\mu }\partial _{\mu }\psi _{R}=m\psi _{L}}
{\displaystyle i{\overline {\sigma }}^{\mu }\partial _{\mu }\psi _{L}=m\psi _{R}.}
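Iterating the two coupled equations on a plane wave must reproduce the Klein–Gordon relation p² = m², which rests on the identity (σ^μ p_μ)(σ̄^ν p_ν) = p² I₂. A minimal numerical sketch, using an assumed 4-momentum purely for illustration:

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
sigma_mu = [I2] + sigma                    # sigma^mu = (I_2, sigma^i)
sigma_bar_mu = [I2] + [-s for s in sigma]  # sigma-bar^mu = (I_2, -sigma^i)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

p_up = np.array([1.5, 0.2, -0.3, 0.6])    # assumed 4-momentum p^mu
p_low = eta @ p_up
sp = sum(sigma_mu[m] * p_low[m] for m in range(4))
sbp = sum(sigma_bar_mu[m] * p_low[m] for m in range(4))

# (sigma.p)(sigma-bar.p) = (sigma-bar.p)(sigma.p) = p^2 I_2, so substituting one
# Weyl equation into the other gives p^2 psi = m^2 psi on a plane wave.
p2 = p_up @ p_low
identity_ok = np.allclose(sp @ sbp, p2 * I2) and np.allclose(sbp @ sp, p2 * I2)
print(identity_ok)
```

The identity shows why the coupling is consistent: substituting iσ̄·∂ applied to the first equation into the second gives −∂²ψ_R = m²ψ_R, the Klein–Gordon equation for each chiral component.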
This has been proposed as an intuitive explanation of Zitterbewegung, as these massless components would propagate at the speed of light and move in opposite directions, since the helicity is the projection of the spin onto the direction of motion. Here the role of the "mass" {\displaystyle m} is not to make the velocity less than the speed of light, but instead to control the average rate at which these reversals occur; specifically, the reversals can be modeled as a Poisson process.
== U(1) symmetry ==
Natural units are used in this section. The coupling constant is labelled by convention with {\displaystyle e}: this parameter can also be viewed as modelling the electron charge.
=== Vector symmetry ===
The Dirac equation and action admit a {\displaystyle {\text{U}}(1)} symmetry where the fields {\displaystyle \psi ,{\bar {\psi }}} transform as
{\displaystyle {\begin{aligned}\psi (x)&\mapsto e^{i\alpha }\psi (x),\\{\bar {\psi }}(x)&\mapsto e^{-i\alpha }{\bar {\psi }}(x).\end{aligned}}}
This is a global symmetry, known as the {\displaystyle {\text{U}}(1)} vector symmetry (as opposed to the {\displaystyle {\text{U}}(1)} axial symmetry: see below). By Noether's theorem there is a corresponding conserved current: this has been mentioned previously as
{\displaystyle J^{\mu }(x)={\bar {\psi }}(x)\gamma ^{\mu }\psi (x).}
=== Gauging the symmetry ===
If we 'promote' the global symmetry, parametrised by the constant {\displaystyle \alpha }, to a local symmetry, parametrised by a function {\displaystyle \alpha :\mathbb {R} ^{1,3}\to \mathbb {R} }, or equivalently {\displaystyle e^{i\alpha }:\mathbb {R} ^{1,3}\to {\text{U}}(1),}
the Dirac equation is no longer invariant: there is a residual derivative of {\displaystyle \alpha (x)}.
The fix proceeds as in scalar electrodynamics: the partial derivative is promoted to a covariant derivative {\displaystyle D_{\mu }}:
{\displaystyle D_{\mu }\psi =\partial _{\mu }\psi +ieA_{\mu }\psi ,}
{\displaystyle D_{\mu }{\bar {\psi }}=\partial _{\mu }{\bar {\psi }}-ieA_{\mu }{\bar {\psi }}.}
The covariant derivative depends on the field being acted on. The newly introduced {\displaystyle A_{\mu }} is the 4-vector potential from electrodynamics, but can also be viewed as a {\displaystyle {\text{U}}(1)} gauge field (which, mathematically, is defined as a {\displaystyle {\text{U}}(1)} connection).
The transformation law for A_μ under gauge transformations is then the usual
{\displaystyle A_{\mu }(x)\mapsto A_{\mu }(x)-{\frac {1}{e}}\partial _{\mu }\alpha (x)}
but can also be derived by asking that covariant derivatives transform under a gauge transformation as
{\displaystyle D_{\mu }\psi (x)\mapsto e^{i\alpha (x)}D_{\mu }\psi (x),}
{\displaystyle D_{\mu }{\bar {\psi }}(x)\mapsto e^{-i\alpha (x)}D_{\mu }{\bar {\psi }}(x).}
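The consistency of these transformation rules can be checked numerically. Below is a minimal plain-Python sketch (not from the article): it takes ψ(x) = e^{ikx}, a linear gauge parameter α(x) = cx and a constant potential A, applies the transformation A ↦ A − (1/e)∂_μα (the sign consistent with D_μψ = ∂_μψ + ieA_μψ and ψ ↦ e^{iα}ψ; conventions differ with the sign chosen for the charge), and verifies D′ψ′ = e^{iα}Dψ pointwise.

```python
# Numerical sanity check (not from the article) that the covariant derivative
# D psi = d psi + i e A psi transforms covariantly under a local U(1)
# transformation psi -> e^{i alpha(x)} psi, when A -> A - (1/e) d_alpha.
# We work in one dimension with analytically differentiable choices:
#   psi(x) = e^{ikx},  alpha(x) = c x,  A(x) = a (constant).
import cmath

e, k, c, a = 0.3, 1.7, 0.9, 0.4   # arbitrary nonzero constants

def psi(x):      return cmath.exp(1j * k * x)
def dpsi(x):     return 1j * k * psi(x)
def alpha(x):    return c * x
def A(x):        return a
def A_new(x):    return a - c / e          # A - (1/e) d_alpha, since d_alpha = c

def D(psi_f, dpsi_f, A_f, x):
    """Covariant derivative D psi = d psi + i e A psi evaluated at x."""
    return dpsi_f(x) + 1j * e * A_f(x) * psi_f(x)

def psi_new(x):  return cmath.exp(1j * alpha(x)) * psi(x)
def dpsi_new(x): return 1j * (k + c) * psi_new(x)

for x in (0.0, 0.5, 2.3):
    lhs = D(psi_new, dpsi_new, A_new, x)                 # D' psi'
    rhs = cmath.exp(1j * alpha(x)) * D(psi, dpsi, A, x)  # e^{i alpha} D psi
    assert abs(lhs - rhs) < 1e-12
print("covariant derivative transforms covariantly")
```

The derivative terms produced by the local phase cancel exactly against the shift of the gauge field, which is the whole point of the construction.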
We then obtain a gauge-invariant Dirac action by promoting the partial derivative to a covariant one:
{\displaystyle S=\int d^{4}x\,{\bar {\psi }}\,(iD\!\!\!\!{\big /}-m)\,\psi =\int d^{4}x\,{\bar {\psi }}\,(i\gamma ^{\mu }D_{\mu }-m)\,\psi .}
The final step needed to write down a gauge-invariant Lagrangian is to add a Maxwell Lagrangian term,
{\displaystyle S_{\text{Maxwell}}=\int d^{4}x\,\left[-{\frac {1}{4}}F^{\mu \nu }F_{\mu \nu }\right].}
Putting these together gives the QED action
{\displaystyle S_{\text{QED}}=S+S_{\text{Maxwell}}.}
Expanding out the covariant derivative allows the action to be written in a second useful form:
{\displaystyle S_{\text{QED}}=\int d^{4}x\,\left[-{\frac {1}{4}}F^{\mu \nu }F_{\mu \nu }+{\bar {\psi }}\,(i\partial \!\!\!{\big /}-m)\,\psi -eJ^{\mu }A_{\mu }\right],}
in which the gauge field couples to the conserved vector current J^μ introduced above.
=== Axial symmetry ===
Massless Dirac fermions, that is, fields ψ(x) satisfying the Dirac equation with m = 0, admit a second, inequivalent U(1) symmetry.
This is seen most easily by writing the four-component Dirac fermion ψ(x) as a pair of two-component fields,
{\displaystyle \psi (x)={\begin{pmatrix}\psi _{1}(x)\\\psi _{2}(x)\end{pmatrix}},}
and adopting the chiral representation for the gamma matrices, so that iγ^μ∂_μ may be written
{\displaystyle i\gamma ^{\mu }\partial _{\mu }={\begin{pmatrix}0&i\sigma ^{\mu }\partial _{\mu }\\i{\bar {\sigma }}^{\mu }\partial _{\mu }&0\end{pmatrix}},}
where σ^μ has components (I₂, σ^i) and σ̄^μ has components (I₂, −σ^i).
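These block formulas can be verified mechanically. The sketch below (plain Python, no external libraries; not part of the article) assembles the chiral-representation gamma matrices from the σ^μ and σ̄^μ blocks and checks the defining Clifford relation {γ^μ, γ^ν} = 2η^{μν}I₄, with the mostly-minus metric η = diag(1, −1, −1, −1).

```python
# Assemble chiral-representation gamma matrices from 2x2 sigma blocks and
# verify the Clifford algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu} I_4.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def anticommutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] + BA[i][j] for j in range(len(A))] for i in range(len(A))]

I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

sigma = [I2, sx, sy, sz]                                   # sigma^mu = (I2, sigma^i)
sigmabar = [I2] + [[[-v for v in row] for row in s] for s in (sx, sy, sz)]  # (I2, -sigma^i)

def gamma_matrix(mu):
    """Chiral-representation gamma^mu = ((0, sigma^mu), (sigmabar^mu, 0))."""
    top = [[0, 0] + sigma[mu][i] for i in range(2)]
    bottom = [sigmabar[mu][i] + [0, 0] for i in range(2)]
    return top + bottom

gamma = [gamma_matrix(mu) for mu in range(4)]
eta = [1, -1, -1, -1]                                      # diagonal of the Minkowski metric

for mu in range(4):
    for nu in range(4):
        anti = anticommutator(gamma[mu], gamma[nu])
        target = 2 * eta[mu] if mu == nu else 0
        assert all(abs(anti[i][j] - (target if i == j else 0)) < 1e-12
                   for i in range(4) for j in range(4))
print("Clifford algebra verified")
```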
The Dirac action then takes the form
{\displaystyle S=\int d^{4}x\,\psi _{1}^{\dagger }(i\sigma ^{\mu }\partial _{\mu })\psi _{1}+\psi _{2}^{\dagger }(i{\bar {\sigma }}^{\mu }\partial _{\mu })\psi _{2}.}
That is, it decouples into a theory of two Weyl spinors or Weyl fermions.
The earlier vector symmetry is still present, with ψ₁ and ψ₂ rotating identically. This form of the action makes the second, inequivalent U(1) symmetry manifest:
{\displaystyle {\begin{aligned}\psi _{1}(x)&\mapsto e^{i\beta }\psi _{1}(x),\\\psi _{2}(x)&\mapsto e^{-i\beta }\psi _{2}(x).\end{aligned}}}
This can also be expressed at the level of the Dirac fermion as
{\displaystyle \psi (x)\mapsto \exp(i\beta \gamma ^{5})\psi (x),}
where exp is the exponential map for matrices.
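Since (γ⁵)² = I, the matrix exponential reduces to exp(iβγ⁵) = cos(β)I + i sin(β)γ⁵, which makes the opposite phase rotations on the two Weyl components explicit. A small plain-Python check (not from the article; it assumes the diagonal convention γ⁵ = diag(1, 1, −1, −1), matching the component transformation above, and sign conventions for γ⁵ vary between texts):

```python
# Check that exp(i beta gamma5) rotates the two Weyl components with opposite
# phases.  Because gamma5 squared is the identity, the exponential series sums to
#   exp(i beta gamma5) = cos(beta) I + i sin(beta) gamma5.
import cmath, math

beta = 0.77
g5_diag = [1, 1, -1, -1]   # assumed convention: gamma5 = diag(+1, +1, -1, -1)

# exp(i beta gamma5) computed from the cos/sin identity (a diagonal matrix)
U = [cmath.cos(beta) + 1j * math.sin(beta) * d for d in g5_diag]

# It equals diag(e^{i beta}, e^{i beta}, e^{-i beta}, e^{-i beta})
for i, d in enumerate(g5_diag):
    assert abs(U[i] - cmath.exp(1j * beta * d)) < 1e-12

# Acting on a Dirac spinor (psi1, psi2): the upper block picks up e^{+i beta},
# the lower block e^{-i beta}, reproducing the axial transformation above.
psi = [1 + 2j, 0.5j, -1.0, 3.0]
psi_rot = [U[i] * psi[i] for i in range(4)]
assert abs(psi_rot[0] - cmath.exp(1j * beta) * psi[0]) < 1e-12
assert abs(psi_rot[2] - cmath.exp(-1j * beta) * psi[2]) < 1e-12
print("axial rotation acts with opposite phases on the two Weyl blocks")
```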
This is not the only U(1) symmetry possible, but it is conventional: any 'linear combination' of the vector and axial symmetries is also a U(1) symmetry.
Classically, the axial symmetry admits a well-formulated gauge theory. But at the quantum level, there is an anomaly, that is, an obstruction to gauging.
=== Extension to color symmetry ===
We can extend this discussion from an abelian U(1) symmetry to a general non-abelian symmetry under a gauge group G, the group of color symmetries for a theory. For concreteness, we fix G = SU(N), the special unitary group of matrices acting on ℂ^N.
Before this section, ψ(x) could be viewed as a spinor field on Minkowski space, in other words a function ψ : ℝ^{1,3} → ℂ⁴, and its components in ℂ⁴ are labelled by spin indices, conventionally Greek indices taken from the start of the alphabet: α, β, γ, ⋯.
Promoting the theory to a gauge theory, informally ψ acquires a part transforming like ℂ^N, and these components are labelled by color indices, conventionally Latin indices i, j, k, ⋯. In total, ψ(x) has 4N components, given in indices by ψ^{i,α}(x). The 'spinor' label describes only how the field transforms under spacetime transformations.
Formally, ψ(x) is valued in a tensor product; that is, it is a function
{\displaystyle \psi :\mathbb {R} ^{1,3}\to \mathbb {C} ^{4}\otimes \mathbb {C} ^{N}.}
Gauging proceeds similarly to the abelian U(1) case, with a few differences. Under a gauge transformation U : ℝ^{1,3} → SU(N), the spinor fields transform as
{\displaystyle \psi (x)\mapsto U(x)\psi (x),}
{\displaystyle {\bar {\psi }}(x)\mapsto {\bar {\psi }}(x)U^{\dagger }(x).}
The matrix-valued gauge field A_μ, or SU(N) connection, transforms as
{\displaystyle A_{\mu }(x)\mapsto U(x)A_{\mu }(x)U(x)^{-1}+{\frac {i}{g}}(\partial _{\mu }U(x))U(x)^{-1},}
and the covariant derivatives, defined by
{\displaystyle D_{\mu }\psi =\partial _{\mu }\psi +igA_{\mu }\psi ,}
{\displaystyle D_{\mu }{\bar {\psi }}=\partial _{\mu }{\bar {\psi }}-ig{\bar {\psi }}A_{\mu }^{\dagger },}
transform as
{\displaystyle D_{\mu }\psi (x)\mapsto U(x)D_{\mu }\psi (x),}
{\displaystyle D_{\mu }{\bar {\psi }}(x)\mapsto (D_{\mu }{\bar {\psi }}(x))U(x)^{\dagger }.}
Writing down a gauge-invariant action proceeds exactly as in the U(1) case, replacing the Maxwell Lagrangian with the Yang–Mills Lagrangian
{\displaystyle S_{\text{Y-M}}=\int d^{4}x\,\left[-{\frac {1}{4}}{\text{Tr}}(F^{\mu \nu }F_{\mu \nu })\right],}
where the Yang–Mills field strength, or curvature, is defined here as
{\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }+ig\left[A_{\mu },A_{\nu }\right],}
in which [·,·] is the matrix commutator.
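The commutator term is what distinguishes the non-abelian field strength from the Maxwell case: for constant gauge fields the derivative terms vanish, yet F_{μν} can still be nonzero. A small plain-Python illustration (not from the article), using Pauli matrices, a standard basis for su(2) up to factors of i:

```python
# Illustration: for constant gauge fields the abelian field strength vanishes,
# but the non-abelian commutator term survives.  Take A_mu, A_nu proportional
# to Pauli matrices sigma_x, sigma_y; then [A_mu, A_nu] = 2i sigma_z != 0.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

comm = commutator(sx, sy)
for i in range(2):
    for j in range(2):
        assert comm[i][j] == 2j * sz[i][j]   # [sigma_x, sigma_y] = 2i sigma_z
print("constant non-abelian fields give a nonzero commutator term in F")
```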
The full action is then the sum of the Yang–Mills action and the Dirac action with partial derivatives promoted to covariant derivatives, exactly as in the abelian case.
==== Physical applications ====
For physical applications, the case N = 3 describes the quark sector of the Standard Model, which models strong interactions. Quarks are modelled as Dirac spinors; the gauge field is the gluon field. The case N = 2 describes part of the electroweak sector of the Standard Model. Leptons such as electrons and neutrinos are the Dirac spinors; the gauge field is the W gauge boson.
==== Generalisations ====
This expression can be generalised to an arbitrary Lie group G with connection A_μ and a representation (ρ, G, V), where the colour part of ψ is valued in V. Formally, the Dirac field is a function
{\displaystyle \psi :\mathbb {R} ^{1,3}\to \mathbb {C} ^{4}\otimes V.}
Then ψ transforms under a gauge transformation g : ℝ^{1,3} → G as
{\displaystyle \psi (x)\mapsto \rho (g(x))\psi (x),}
and the covariant derivative is defined by
{\displaystyle D_{\mu }\psi =\partial _{\mu }\psi +\rho (A_{\mu })\psi ,}
where here we view ρ as a representation of the Lie algebra 𝔤 = L(G) associated to G.
This theory can be generalised to curved spacetime, but there are subtleties that arise in gauge theory on a general spacetime (or, more generally, a manifold) which can be ignored on flat spacetime. This is ultimately due to the contractibility of flat spacetime, which allows a gauge field and gauge transformations to be defined globally on ℝ^{1,3}.
== See also ==
== References ==
=== Citations ===
=== Selected papers ===
Anderson, Carl (1933). "The Positive Electron". Physical Review. 43 (6): 491. Bibcode:1933PhRv...43..491A. doi:10.1103/PhysRev.43.491.
Arminjon, M.; F. Reifler (2013). "Equivalent forms of Dirac equations in curved spacetimes and generalized de Broglie relations". Brazilian Journal of Physics. 43 (1–2): 64–77. arXiv:1103.3201. Bibcode:2013BrJPh..43...64A. doi:10.1007/s13538-012-0111-0. S2CID 38235437.
Dirac, P. A. M. (1928). "The Quantum Theory of the Electron" (PDF). Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 117 (778): 610–624. Bibcode:1928RSPSA.117..610D. doi:10.1098/rspa.1928.0023. JSTOR 94981. Archived (PDF) from the original on 2 January 2015.
Dirac, P. A. M. (1930). "A Theory of Electrons and Protons". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 126 (801): 360–365. Bibcode:1930RSPSA.126..360D. doi:10.1098/rspa.1930.0013. JSTOR 95359.
Frisch, R.; Stern, O. (1933). "Über die magnetische Ablenkung von Wasserstoffmolekülen und das magnetische Moment des Protons. I". Zeitschrift für Physik. 85 (1–2): 4. Bibcode:1933ZPhy...85....4F. doi:10.1007/BF01330773. S2CID 120793548.
=== Textbooks ===
Bjorken, J D; Drell, S (1964). Relativistic Quantum mechanics. New York, McGraw-Hill.
Halzen, Francis; Martin, Alan (1984). Quarks & Leptons: An Introductory Course in Modern Particle Physics. John Wiley & Sons. ISBN 9780471887416.
Griffiths, D.J. (2008). Introduction to Elementary Particles (2nd ed.). Wiley-VCH. ISBN 978-3-527-40601-2.
Rae, Alastair I. M.; Jim Napolitano (2015). Quantum Mechanics (6th ed.). Routledge. ISBN 978-1482299182.
Schiff, L.I. (1968). Quantum Mechanics (3rd ed.). McGraw-Hill.
Shankar, R. (1994). Principles of Quantum Mechanics (2nd ed.). Plenum.
Thaller, B. (1992). The Dirac Equation. Texts and Monographs in Physics. Springer.
== External links ==
The history of the positron Lecture given by Dirac in 1975
The Dirac Equation at MathPages
The Dirac equation for a spin 1⁄2 particle
The Dirac Equation in natural units at the Paul M. Dirac Lecture Hall, EMFCSC, Erice, Sicily
A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized.
In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on further and further from the nucleus. The shells correspond with the principal quantum numbers (n = 1, 2, 3, 4, ...) or are labeled alphabetically with letters used in the X-ray notation (K, L, M, N, ...).
Each shell can contain only a fixed number of electrons: The first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n2 electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the more inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells see electron configuration.
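The 2n² formula follows from counting subshells: the nth shell contains subshells with azimuthal quantum number l = 0, …, n−1, each holding 2(2l + 1) electrons (a factor of 2 for spin times 2l + 1 magnetic substates). A short Python check:

```python
# Shell capacity as a sum over subshells: the nth shell holds
# sum over l = 0 .. n-1 of 2(2l + 1) electrons, which equals 2 n^2.

def shell_capacity(n):
    return sum(2 * (2 * l + 1) for l in range(n))

capacities = [shell_capacity(n) for n in range(1, 5)]
assert capacities == [2, 8, 18, 32]          # K, L, M, N shells
assert all(shell_capacity(n) == 2 * n * n for n in range(1, 10))
print(capacities)   # → [2, 8, 18, 32]
```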
If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative potential energy.
If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. An energy level is regarded as degenerate if there is more than one measurable quantum mechanical state associated with it.
== Explanation ==
Quantized energy levels result from the wave behavior of particles, which gives a relationship between a particle's energy and its wavelength. For a confined particle such as an electron in an atom, the wave functions that have well defined energies have the form of a standing wave. States having well-defined energies are called stationary states because they are the states that do not change in time. Informally, these states correspond to a whole number of wavelengths of the wavefunction along a closed path (a path that ends where it started), such as a circular orbit around an atom, where the number of wavelengths gives the type of atomic orbital (0 for s-orbitals, 1 for p-orbitals and so on). Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator.
Any superposition (linear combination) of energy states is also a quantum state, but such states change with time and do not have well-defined energies. A measurement of the energy results in the collapse of the wavefunction, which results in a new state that consists of just a single energy state. Measurement of the possible energy levels of an object is called spectroscopy.
== History ==
The first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The notion of energy levels was proposed in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom. The modern quantum mechanical theory giving an explanation of these energy levels in terms of the Schrödinger equation was advanced by Erwin Schrödinger and Werner Heisenberg in 1926.
== Atoms ==
=== Intrinsic energy levels ===
In the formulas below for the energy of electrons at various levels in an atom, the zero point of energy is set where the electron in question has completely left the atom, i.e. where the electron's principal quantum number n = ∞. When the electron is bound to the atom with any smaller value of n, the electron's energy is lower and is considered negative.
==== Orbital state energy level: atom/ion with nucleus + one electron ====
Assume there is one electron in a given atomic orbital in a hydrogen-like atom (ion). The energy of its state is mainly determined by the electrostatic interaction of the (negative) electron with the (positive) nucleus. The energy levels of an electron around a nucleus are given by:
{\displaystyle E_{n}=-hcR_{\infty }{\frac {Z^{2}}{n^{2}}}}
(typically between 1 eV and 103 eV), where R∞ is the Rydberg constant, Z is the atomic number, n is the principal quantum number, h is the Planck constant, and c is the speed of light. For hydrogen-like atoms (ions) only, the Rydberg levels depend only on the principal quantum number n.
This equation is obtained from combining the Rydberg formula for any hydrogen-like element (shown below) with E = hν = hc / λ assuming that the principal quantum number n above = n1 in the Rydberg formula and n2 = ∞ (principal quantum number of the energy level the electron descends from, when emitting a photon). The Rydberg formula was derived from empirical spectroscopic emission data.
{\displaystyle {\frac {1}{\lambda }}=RZ^{2}\left({\frac {1}{n_{1}^{2}}}-{\frac {1}{n_{2}^{2}}}\right)}
An equivalent formula can be derived quantum mechanically from the time-independent Schrödinger equation with a kinetic energy Hamiltonian operator using a wave function as an eigenfunction to obtain the energy levels as eigenvalues, but the Rydberg constant would be replaced by other fundamental physics constants.
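As a numerical illustration (using the rounded CODATA values hcR∞ ≈ 13.6057 eV and R∞ ≈ 1.0974 × 10⁷ m⁻¹, which are not quoted in the text above), the two formulas reproduce the familiar hydrogen numbers: E₁ ≈ −13.6 eV, E₂ ≈ −3.4 eV, and the Lyman-alpha line (n = 2 → 1) near 121.5 nm:

```python
# Numerical illustration of the two formulas above for hydrogen (Z = 1),
# using rounded CODATA constants (not taken from the article text).
RYDBERG_EV = 13.605693          # hcR_infty, the Rydberg unit of energy, in eV
R_INV_M = 1.0973731e7           # Rydberg constant R_infty in 1/m

def energy_level(n, Z=1):
    """E_n = -hcR_infty Z^2 / n^2, in eV."""
    return -RYDBERG_EV * Z**2 / n**2

def wavelength_nm(n1, n2, Z=1):
    """Emission wavelength for a drop from n2 to n1, from 1/lambda = R Z^2 (1/n1^2 - 1/n2^2)."""
    inv_lambda = R_INV_M * Z**2 * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inv_lambda

assert abs(energy_level(1) + 13.6) < 0.01    # ground state ~ -13.6 eV
assert abs(energy_level(2) + 3.4) < 0.01     # n = 2 ~ -3.4 eV
# Lyman-alpha (n = 2 -> 1) sits near 121.5 nm, in the ultraviolet
assert abs(wavelength_nm(1, 2) - 121.5) < 0.1
print(energy_level(1), wavelength_nm(1, 2))
```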
==== Electron–electron interactions in atoms ====
If there is more than one electron around the atom, electron–electron interactions raise the energy level. These interactions are often neglected if the spatial overlap of the electron wavefunctions is low.
For multi-electron atoms, interactions between electrons cause the preceding equation to be no longer accurate as stated simply with Z as the atomic number. A simple (though not complete) way to understand this is as a shielding effect, where the outer electrons see an effective nucleus of reduced charge, since the inner electrons are bound tightly to the nucleus and partially cancel its charge. This leads to an approximate correction where Z is substituted with an effective nuclear charge symbolized as Zeff that depends strongly on the principal quantum number.
{\displaystyle E_{n,\ell }=-hcR_{\infty }{\frac {{Z_{\rm {eff}}}^{2}}{n^{2}}}}
In such cases, the orbital types (determined by the azimuthal quantum number ℓ) as well as their levels within the molecule affect Zeff and therefore also affect the various atomic electron energy levels. The Aufbau principle of filling an atom with electrons for an electron configuration takes these differing energy levels into account. For filling an atom with electrons in the ground state, the lowest energy levels are filled first and consistent with the Pauli exclusion principle, the Aufbau principle, and Hund's rule.
==== Fine structure splitting ====
Fine structure arises from relativistic kinetic energy corrections, spin–orbit coupling (an electrodynamic interaction between the electron's spin and motion and the nucleus's electric field) and the Darwin term (contact term interaction of s shell electrons inside the nucleus). These affect the levels by a typical order of magnitude of 10−3 eV.
==== Hyperfine structure ====
This even finer structure is due to electron–nucleus spin–spin interaction, resulting in a typical change in the energy levels by a typical order of magnitude of 10−4 eV.
=== Energy levels due to external fields ===
==== Zeeman effect ====
There is an interaction energy associated with the magnetic dipole moment, μL, arising from the electronic orbital angular momentum, L, given by
{\displaystyle U=-{\boldsymbol {\mu }}_{L}\cdot \mathbf {B} ,}
with
{\displaystyle -{\boldsymbol {\mu }}_{L}={\dfrac {e\hbar }{2m}}\mathbf {L} =\mu _{B}\mathbf {L} .}
The magnetic moment arising from the electron spin must also be taken into account. Due to relativistic effects (Dirac equation), there is a magnetic moment μ_S arising from the electron spin,
{\displaystyle -{\boldsymbol {\mu }}_{S}=-\mu _{\text{B}}g_{S}\mathbf {S} ,}
with gS the electron-spin g-factor (about 2), resulting in a total magnetic moment, μ,
{\displaystyle {\boldsymbol {\mu }}={\boldsymbol {\mu }}_{L}+{\boldsymbol {\mu }}_{S}.}
The interaction energy therefore becomes
{\displaystyle U_{B}=-{\boldsymbol {\mu }}\cdot \mathbf {B} =\mu _{\text{B}}B(M_{L}+g_{S}M_{S}).}
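As an order-of-magnitude illustration (using the rounded value μ_B ≈ 5.788 × 10⁻⁵ eV/T and g_S ≈ 2, neither quoted in the text above), the interaction energy formula shows that a 1 T field splits adjacent M_L levels by only tens of microelectronvolts, far below the eV scale of the gross structure:

```python
# Order-of-magnitude illustration of the Zeeman interaction energy
# U_B = mu_B B (M_L + g_S M_S), with rounded constants (not from the article).
MU_B_EV_PER_T = 5.788e-5     # Bohr magneton in eV per tesla (rounded)
G_S = 2.0                    # electron-spin g-factor, approximately 2

def zeeman_shift_ev(B, M_L, M_S):
    """Energy shift of a level with quantum numbers M_L, M_S in field B (tesla)."""
    return MU_B_EV_PER_T * B * (M_L + G_S * M_S)

# In a 1 T field, adjacent M_L levels split by ~6e-5 eV, far smaller than the
# ~eV scale of the gross structure.
delta = zeeman_shift_ev(1.0, 1, 0) - zeeman_shift_ev(1.0, 0, 0)
assert abs(delta - MU_B_EV_PER_T) < 1e-12
print(delta)
```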
==== Stark effect ====
== Molecules ==
Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms, which generally means the sum energy level for the involved atoms in the molecule is lower than if the atoms were not so bonded. As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels to form bonding and antibonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the antibonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower energy bonding orbital, which may be signified by such symbols as σ or π depending on the situation. Corresponding anti-bonding orbitals can be signified by adding an asterisk to get σ* or π* orbitals. A non-bonding orbital in a molecule is an orbital with electrons in outer shells which do not participate in bonding and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals. The electrons in an n orbital are typically lone pairs.
In polyatomic molecules, different vibrational and rotational energy levels are also involved.
Roughly speaking, a molecular energy state (i.e., an eigenstate of the molecular Hamiltonian) is the sum of the electronic, vibrational, rotational, nuclear, and translational components, such that:
{\displaystyle E=E_{\text{electronic}}+E_{\text{vibrational}}+E_{\text{rotational}}+E_{\text{nuclear}}+E_{\text{translational}}}
where Eelectronic is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule.
The molecular energy levels are labelled by the molecular term symbols. The specific energies of these components vary with the specific energy state and the substance.
=== Energy level diagrams ===
There are various types of energy level diagrams for bonds between atoms in a molecule.
Examples include molecular orbital diagrams, Jablonski diagrams, and Franck–Condon diagrams.
== Energy level transitions ==
Electrons in atoms and molecules can change (make transitions in) energy levels by emitting or absorbing a photon (of electromagnetic radiation), whose energy must be exactly equal to the energy difference between the two levels.
Electrons can also be completely removed from a chemical species such as an atom, molecule, or ion. Complete removal of an electron from an atom can be a form of ionization, which is effectively moving the electron out to an orbital with an infinite principal quantum number, in effect so far away as to have practically no further effect on the remaining atom (ion). For various types of atoms, there are 1st, 2nd, 3rd, etc. ionization energies for removing the 1st, then the 2nd, then the 3rd, etc. of the highest energy electrons, respectively, from the atom originally in the ground state. Energy in corresponding opposite quantities can also be released, sometimes in the form of photon energy, when electrons are added to positively charged ions or sometimes atoms. Molecules can also undergo transitions in their vibrational or rotational energy levels. Energy level transitions can also be nonradiative, meaning emission or absorption of a photon is not involved.
If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. Such a species can be excited to a higher energy level by absorbing a photon whose energy is equal to the energy difference between the levels. Conversely, an excited species can go to a lower energy level by spontaneously emitting a photon equal to the energy difference. A photon's energy is equal to the Planck constant (h) times its frequency (f) and thus is proportional to its frequency, or inversely to its wavelength (λ).
ΔE = hf = hc / λ,
since c, the speed of light, equals fλ.
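A worked example of this relation, using the convenient rounded combination hc ≈ 1239.84 eV·nm (not stated in the text): a 500 nm green photon carries about 2.5 eV, and the hydrogen n = 2 → 1 transition energy of about 10.2 eV corresponds to an ultraviolet photon near 122 nm.

```python
# Worked example of Delta E = hf = hc / lambda, with the rounded combination
# hc ~ 1239.84 eV nm (an assumed constant, not quoted in the article).
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a wavelength given in nm."""
    return HC_EV_NM / wavelength_nm

def transition_wavelength_nm(delta_e_ev):
    """Wavelength in nm of the photon matching a level difference in eV."""
    return HC_EV_NM / delta_e_ev

assert abs(photon_energy_ev(500.0) - 2.48) < 0.01                 # green light ~ 2.5 eV
assert abs(transition_wavelength_nm(13.6 - 3.4) - 121.55) < 0.5   # Lyman-alpha, UV
print(photon_energy_ev(500.0))
```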
Correspondingly, many kinds of spectroscopy are based on detecting the frequency or wavelength of the emitted or absorbed photons to provide information on the material analyzed, including information on the energy levels and electronic structure of materials obtained by analyzing the spectrum.
An asterisk is commonly used to designate an excited state. An electron transition in a molecule's bond from a ground state to an excited state may have a designation such as σ → σ*, π → π*, or n → π* meaning excitation of an electron from a σ bonding to a σ antibonding orbital, from a π bonding to a π antibonding orbital, or from an n non-bonding to a π antibonding orbital. Reverse electron transitions for all these types of excited molecules are also possible to return to their ground states, which can be designated as σ* → σ, π* → π, or π* → n.
A transition in an energy level of an electron in a molecule may be combined with a vibrational transition and called a vibronic transition. A vibrational and rotational transition may be combined by rovibrational coupling. In rovibronic coupling, electron transitions are simultaneously combined with both vibrational and rotational transitions. Photons involved in transitions may have energy of various ranges in the electromagnetic spectrum, such as X-ray, ultraviolet, visible light, infrared, or microwave radiation, depending on the type of transition. In a very general way, energy level differences between electronic states are larger, differences between vibrational levels are intermediate, and differences between rotational levels are smaller, although there can be overlap. Translational energy levels are practically continuous and can be calculated as kinetic energy using classical mechanics.
Higher temperature causes fluid atoms and molecules to move faster, increasing their translational energy, and thermally excites molecules to higher average amplitudes of vibrational and rotational modes (excites the molecules to higher internal energy levels). This means that as temperature rises, translational, vibrational, and rotational contributions to molecular heat capacity let molecules absorb heat and hold more internal energy. Conduction of heat typically occurs as molecules or atoms collide, transferring the heat between each other. At even higher temperatures, electrons can be thermally excited to higher energy orbitals in atoms or molecules. A subsequent drop of an electron to a lower energy level can release a photon, causing a possibly coloured glow.
An electron further from the nucleus has higher potential energy than an electron closer to the nucleus, thus it becomes less bound to the nucleus, since its potential energy is negative and inversely dependent on its distance from the nucleus.
== Crystalline materials ==
Crystalline solids are found to have energy bands, instead of or in addition to energy levels. Electrons can take on any energy within an unfilled band. At first this appears to be an exception to the requirement for energy levels. However, as shown in band theory, energy bands are actually made up of many discrete energy levels which are too close together to resolve. Within a band the number of levels is of the order of the number of atoms in the crystal, so although electrons are actually restricted to these energies, they appear to be able to take on a continuum of values. The important energy levels in a crystal are the top of the valence band, the bottom of the conduction band, the Fermi level, the vacuum level, and the energy levels of any defect states in the crystal.
== See also ==
Perturbation theory (quantum mechanics)
Atomic clock
Computational chemistry
== References ==
QED: The Strange Theory of Light and Matter is an adaptation for the general reader of four lectures on quantum electrodynamics (QED) published in 1985 by American physicist and Nobel laureate Richard Feynman.
QED was designed to be a popular science book, written in a witty style, and containing just enough quantum-mechanical mathematics to allow the solving of very basic problems in quantum electrodynamics by an educated lay audience. It is unusual for a popular science book in the level of mathematical detail it goes into, actually allowing the reader to solve simple optics problems, as might be found in an actual textbook. But unlike in a typical textbook, the mathematics is taught in very simple terms, with no attempt to solve problems efficiently, use standard terminology, or facilitate further advancement in the field. The focus instead is on nurturing a basic conceptual understanding of what is really going on in such calculations. Complex numbers are taught, for instance, by asking the reader to imagine that there are tiny clocks attached to subatomic particles. The book was first published in 1985 by the Princeton University Press.
== The book ==
In an acknowledgement Feynman wrote:
This book purports to be a record of the lectures on quantum electrodynamics I gave at UCLA, transcribed and edited by my good friend Ralph Leighton. Actually, the manuscript has undergone considerable modification. Mr. Leighton's experience in teaching and in writing was of considerable value in this attempt at presenting this central part of physics to a wider audience.
People are always asking for the latest developments in the unification of this theory with that theory, and they don't give us a chance to tell them anything about what we know pretty well. They always want to know the things we don't know. — Richard Feynman
Much of Feynman's discussion springs from an everyday phenomenon: the way any transparent sheet of glass partly reflects any light shining on it. Feynman also pays homage to Isaac Newton's struggles to come to terms with the nature of light.
Feynman's lectures were originally given as the Sir Douglas Robb lectures at the University of Auckland, New Zealand in 1979. Videotapes of these lectures were made publicly available on a not-for-profit basis in 1996 and more recently have been placed online by the Vega Science Trust.
The book is based on Feynman's delivery of the first Alix G. Mautner Memorial Lecture series for the general public at the University of California, Los Angeles (UCLA) in 1983. The differences between the book and the original Auckland lectures were discussed in June 1996 in the American Journal of Physics.
In 2006, Princeton University Press published a new edition with a new introduction by Anthony Zee. He introduces Feynman's peculiar take on explaining physics, and quotes: "According to Feynman, to learn QED you have two choices: you can go through seven years of physics education or read this book".
== The four lectures ==
1. Photons - Corpuscles of Light
In the first lecture, which acts as a gentle lead-in to the subject of quantum electrodynamics, Feynman describes the basic properties of photons. He discusses how to measure the probability that a photon will reflect or transmit through a partially reflective piece of glass.
2. Fits of Reflection and Transmission - Quantum Behaviour
In the second lecture, Feynman looks at the different paths a photon can take as it travels from one point to another and how this affects phenomena like reflection and diffraction.
3. Electrons and Their Interactions
The third lecture describes quantum phenomena such as the famous double-slit experiment and Werner Heisenberg's uncertainty principle, thus describing the transmission and reflection of photons. It also introduces his famous "Feynman diagrams" and how quantum electrodynamics describes the interactions of subatomic particles.
4. New Queries
In the fourth lecture, Feynman discusses the meaning of quantum electrodynamics and some of its problems. He then describes "the rest of physics", giving a brief look at quantum chromodynamics, the weak interaction and gravity, and how they relate to quantum electrodynamics.
== Notes ==
== References ==
Dean, Chris. "The Vega Science Trust - Richard Feynman - Science Videos". www.vega.org.uk. Retrieved 2016-10-26.
Dudley, J.M.; A.M. Kwan (June 1996). "Richard Feynman's popular lectures on quantum electrodynamics: The 1979 Robb Lectures at Auckland University". American Journal of Physics. 64 (6): 694–698. Bibcode:1996AmJPh..64..694D. doi:10.1119/1.18234.
Feynman, Richard (1985). QED: The strange theory of light and matter. Princeton University Press. ISBN 0-691-08388-6.
Feynman, Richard (2006). QED: The strange theory of light and matter. Princeton University Press. ISBN 0-691-12575-9.
The Douglas Robb Memorial Lectures: Video of the four public lectures in New Zealand of which the four chapters of this book QED: The Strange Theory of Light and Matter are transcripts.
== External links ==
Richard Feynman - Science Videos - The Douglas Robb Memorial Lectures (4 parts) on Vega Science Trust, from the University of Auckland (New Zealand)
The Strange Theory of Light — Interactive animation computer programs inspired by the Czech translation of this book by Ladislav Szántó et al.
In mathematical physics, the WKB approximation or WKB method is a technique for finding approximate solutions to linear differential equations with spatially varying coefficients. It is typically used for a semiclassical calculation in quantum mechanics in which the wave function is recast as an exponential function, semiclassically expanded, and then either the amplitude or the phase is taken to be changing slowly.
The name is an initialism for Wentzel–Kramers–Brillouin. It is also known as the LG or Liouville–Green method. Other often-used letter combinations include JWKB and WKBJ, where the "J" stands for Jeffreys.
== Brief history ==
This method is named after physicists Gregor Wentzel, Hendrik Anthony Kramers, and Léon Brillouin, who all developed it in 1926. In 1923, mathematician Harold Jeffreys had developed a general method of approximating solutions to linear, second-order differential equations, a class that includes the Schrödinger equation. The Schrödinger equation itself was not developed until two years later, and Wentzel, Kramers, and Brillouin were apparently unaware of this earlier work, so Jeffreys is often not credited. Early texts in quantum mechanics contain any number of combinations of their initials, including WBK, BWK, WKBJ, JWKB and BWKJ. An authoritative discussion and critical survey has been given by Robert B. Dingle.
Earlier appearances of essentially equivalent methods are: Francesco Carlini in 1817, Joseph Liouville in 1837, George Green in 1837, Lord Rayleigh in 1912 and Richard Gans in 1915. Liouville and Green may be said to have founded the method in 1837, and it is also commonly referred to as the Liouville–Green or LG method.
The important contribution of Jeffreys, Wentzel, Kramers, and Brillouin to the method was the inclusion of the treatment of turning points, connecting the evanescent and oscillatory solutions at either side of the turning point. For example, this may occur in the Schrödinger equation, due to a potential energy hill.
== Formulation ==
Generally, WKB theory is a method for approximating the solution of a differential equation whose highest derivative is multiplied by a small parameter ε. The method of approximation is as follows.
For a differential equation
{\displaystyle \varepsilon {\frac {d^{n}y}{dx^{n}}}+a(x){\frac {d^{n-1}y}{dx^{n-1}}}+\cdots +k(x){\frac {dy}{dx}}+m(x)y=0,}
assume a solution of the form of an asymptotic series expansion
{\displaystyle y(x)\sim \exp \left[{\frac {1}{\delta }}\sum _{n=0}^{\infty }\delta ^{n}S_{n}(x)\right]}
in the limit δ → 0. The asymptotic scaling of δ in terms of ε will be determined by the equation – see the example below.
Substituting the above ansatz into the differential equation and cancelling out the exponential terms allows one to solve for an arbitrary number of terms Sn(x) in the expansion.
WKB theory is a special case of multiple scale analysis.
== An example ==
This example comes from the text of Carl M. Bender and Steven Orszag. Consider the second-order homogeneous linear differential equation
{\displaystyle \epsilon ^{2}{\frac {d^{2}y}{dx^{2}}}=Q(x)y,}
where
{\displaystyle Q(x)\neq 0}
. Substituting
{\displaystyle y(x)=\exp \left[{\frac {1}{\delta }}\sum _{n=0}^{\infty }\delta ^{n}S_{n}(x)\right]}
results in the equation
{\displaystyle \epsilon ^{2}\left[{\frac {1}{\delta ^{2}}}\left(\sum _{n=0}^{\infty }\delta ^{n}S_{n}^{\prime }\right)^{2}+{\frac {1}{\delta }}\sum _{n=0}^{\infty }\delta ^{n}S_{n}^{\prime \prime }\right]=Q(x).}
To leading order in ϵ (assuming, for the moment, the series will be asymptotically consistent), the above can be approximated as
{\displaystyle {\frac {\epsilon ^{2}}{\delta ^{2}}}{S_{0}^{\prime }}^{2}+{\frac {2\epsilon ^{2}}{\delta }}S_{0}^{\prime }S_{1}^{\prime }+{\frac {\epsilon ^{2}}{\delta }}S_{0}^{\prime \prime }=Q(x).}
In the limit δ → 0, the dominant balance is given by
{\displaystyle {\frac {\epsilon ^{2}}{\delta ^{2}}}{S_{0}^{\prime }}^{2}\sim Q(x).}
So δ is proportional to ϵ. Setting them equal and comparing powers yields
{\displaystyle \epsilon ^{0}:\quad {S_{0}^{\prime }}^{2}=Q(x),}
which can be recognized as the eikonal equation, with solution
{\displaystyle S_{0}(x)=\pm \int _{x_{0}}^{x}{\sqrt {Q(x')}}\,dx'.}
Considering first-order powers of ϵ fixes
{\displaystyle \epsilon ^{1}:\quad 2S_{0}^{\prime }S_{1}^{\prime }+S_{0}^{\prime \prime }=0.}
This has the solution
{\displaystyle S_{1}(x)=-{\frac {1}{4}}\ln Q(x)+k_{1},}
where k1 is an arbitrary constant.
We now have a pair of approximations to the system (a pair, because S0 can take two signs); the first-order WKB-approximation will be a linear combination of the two:
{\displaystyle y(x)\approx c_{1}Q^{-{\frac {1}{4}}}(x)\exp \left[{\frac {1}{\epsilon }}\int _{x_{0}}^{x}{\sqrt {Q(t)}}\,dt\right]+c_{2}Q^{-{\frac {1}{4}}}(x)\exp \left[-{\frac {1}{\epsilon }}\int _{x_{0}}^{x}{\sqrt {Q(t)}}\,dt\right].}
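As a concrete check, the first-order formula above can be compared against a direct numerical integration. The sketch below (pure Python, no external libraries) assumes Q(x) = (1+x)² and ε = 0.05, neither of which appears in the text; it follows the growing solution, which is stable under forward integration, launching an RK4 integrator from WKB-matched initial conditions. The relative deviation at x = 1 is of order ε.

```python
import math

# Sanity check of the first-order WKB formula for eps^2 y'' = Q(x) y.
# Take Q(x) = (1+x)^2, so sqrt(Q) = 1+x and the growing WKB solution is
#   y_WKB(x) = Q(x)^(-1/4) * exp((1/eps) * int_0^x sqrt(Q) dt)
#            = (1+x)^(-1/2) * exp((x + x^2/2) / eps)

eps = 0.05

def Q(x):
    return (1.0 + x) ** 2

def y_wkb(x):
    return (1.0 + x) ** -0.5 * math.exp((x + 0.5 * x * x) / eps)

# Initial conditions taken from the WKB form at x = 0:
# y(0) = 1,  y'(0) = y(0) * (sqrt(Q)/eps - Q'/(4Q)) = 1/eps - 1/2
y, v = 1.0, 1.0 / eps - 0.5

def f(x, y, v):
    # first-order system: y' = v, v' = Q(x) y / eps^2
    return v, Q(x) * y / eps ** 2

# classical RK4 from x = 0 to x = 1
h, x = 1e-4, 0.0
for _ in range(10000):
    k1y, k1v = f(x, y, v)
    k2y, k2v = f(x + h / 2, y + h / 2 * k1y, v + h / 2 * k1v)
    k3y, k3v = f(x + h / 2, y + h / 2 * k2y, v + h / 2 * k2v)
    k4y, k4v = f(x + h, y + h * k3y, v + h * k3v)
    y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
    v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    x += h

rel_err = abs(y - y_wkb(1.0)) / y_wkb(1.0)
print(rel_err)  # relative error of order eps: well under a percent here
```

The growing branch is used deliberately: integrating the decaying branch forward would be contaminated by the growing mode.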
Higher-order terms can be obtained by looking at equations for higher powers of δ. Explicitly,
{\displaystyle 2S_{0}^{\prime }S_{n}^{\prime }+S_{n-1}^{\prime \prime }+\sum _{j=1}^{n-1}S_{j}^{\prime }S_{n-j}^{\prime }=0}
for n ≥ 2.
=== Precision of the asymptotic series ===
The asymptotic series for y(x) is usually a divergent series, whose general term δn Sn(x) starts to increase after a certain value n = nmax. Therefore, the smallest error achieved by the WKB method is at best of the order of the last included term.
For the equation
{\displaystyle \epsilon ^{2}{\frac {d^{2}y}{dx^{2}}}=Q(x)y,}
with Q(x) < 0 an analytic function, the value {\displaystyle n_{\max }} and the magnitude of the last term can be estimated as follows:
{\displaystyle n_{\max }\approx 2\epsilon ^{-1}\left|\int _{x_{0}}^{x_{\ast }}{\sqrt {-Q(z)}}\,dz\right|,}
{\displaystyle \delta ^{n_{\max }}S_{n_{\max }}(x_{0})\approx {\sqrt {\frac {2\pi }{n_{\max }}}}\exp[-n_{\max }],}
where {\displaystyle x_{0}} is the point at which {\displaystyle y(x_{0})} needs to be evaluated and {\displaystyle x_{\ast }} is the (complex) turning point where {\displaystyle Q(x_{\ast })=0}, closest to {\displaystyle x=x_{0}}.
The number nmax can be interpreted as the number of oscillations between {\displaystyle x_{0}} and the closest turning point.
If {\displaystyle \epsilon ^{-1}Q(x)} is a slowly changing function,
{\displaystyle \epsilon \left|{\frac {dQ}{dx}}\right|\ll |Q|^{3/2},}
the number nmax will be large, and the minimum error of the asymptotic series will be exponentially small.
== Application in non-relativistic quantum mechanics ==
The above example may be applied specifically to the one-dimensional, time-independent Schrödinger equation,
{\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}\Psi (x)+V(x)\Psi (x)=E\Psi (x),}
which can be rewritten as
{\displaystyle {\frac {d^{2}}{dx^{2}}}\Psi (x)={\frac {2m}{\hbar ^{2}}}\left(V(x)-E\right)\Psi (x).}
=== Approximation away from the turning points ===
The wavefunction can be rewritten as the exponential of another function S (closely related to the action), which could be complex,
{\displaystyle \Psi (\mathbf {x} )=e^{iS(\mathbf {x} ) \over \hbar },}
so that its substitution in Schrödinger's equation gives:
{\displaystyle i\hbar \nabla ^{2}S(\mathbf {x} )-(\nabla S(\mathbf {x} ))^{2}=2m\left(V(\mathbf {x} )-E\right),}
Next, the semiclassical approximation is used. This means that each function is expanded as a power series in ħ.
{\displaystyle S=S_{0}+\hbar S_{1}+\hbar ^{2}S_{2}+\cdots }
Substituting in the equation, and only retaining terms up to first order in ℏ, we get:
{\displaystyle (\nabla S_{0}+\hbar \nabla S_{1})^{2}-i\hbar (\nabla ^{2}S_{0})=2m(E-V(\mathbf {x} ))}
which gives the following two relations:
{\displaystyle {\begin{aligned}(\nabla S_{0})^{2}=2m(E-V(\mathbf {x} ))=(p(\mathbf {x} ))^{2}\\2\nabla S_{0}\cdot \nabla S_{1}-i\nabla ^{2}S_{0}=0\end{aligned}}}
which can be solved for 1D systems, the first equation resulting in:
{\displaystyle S_{0}(x)=\pm \int {\sqrt {2m\left(E-V(x)\right)}}\,dx=\pm \int p(x)\,dx}
and the second equation, computed for the possible values of the above, yields a wavefunction generally expressed as:
{\displaystyle \Psi (x)\approx C_{+}{\frac {e^{+{\frac {i}{\hbar }}\int p(x)\,dx}}{\sqrt {|p(x)|}}}+C_{-}{\frac {e^{-{\frac {i}{\hbar }}\int p(x)\,dx}}{\sqrt {|p(x)|}}}}
Thus, the resulting wavefunction in the first-order WKB approximation is presented as above. In the classically allowed region, namely the region where {\displaystyle V(x)<E}, the integrand in the exponent is imaginary and the approximate wave function is oscillatory. In the classically forbidden region {\displaystyle V(x)>E}, the solutions are growing or decaying. It is evident in the denominator that both of these approximate solutions become singular near the classical turning points, where E = V(x), and cannot be valid. (The turning points are the points where the classical particle changes direction.)
Hence, when {\displaystyle E>V(x)}, the wavefunction can be chosen to be expressed as:
{\displaystyle \Psi (x')\approx C{\frac {\cos {({\frac {1}{\hbar }}\int |p(x)|\,dx}+\alpha )}{\sqrt {|p(x)|}}}+D{\frac {\sin {(-{\frac {1}{\hbar }}\int |p(x)|\,dx}+\alpha )}{\sqrt {|p(x)|}}}}
and for {\displaystyle V(x)>E},
{\displaystyle \Psi (x')\approx {\frac {C_{+}e^{+{\frac {i}{\hbar }}\int |p(x)|\,dx}}{\sqrt {|p(x)|}}}+{\frac {C_{-}e^{-{\frac {i}{\hbar }}\int |p(x)|\,dx}}{\sqrt {|p(x)|}}}.}
The integration in this solution is computed between the classical turning point and the arbitrary position x'.
=== Validity of WKB solutions ===
From the condition:
{\displaystyle (S_{0}'(x))^{2}-(p(x))^{2}+\hbar (2S_{0}'(x)S_{1}'(x)-iS_{0}''(x))=0}
It follows that:
{\textstyle \hbar \mid 2S_{0}'(x)S_{1}'(x)\mid +\hbar \mid iS_{0}''(x)\mid \ll \mid (S_{0}'(x))^{2}\mid +\mid (p(x))^{2}\mid }
from which the following two inequalities, used in the WKB approximation, follow, since the corresponding terms on either side are equal:
{\displaystyle {\begin{aligned}\hbar \mid S_{0}''(x)\mid \ll \mid (S_{0}'(x))^{2}\mid \\2\hbar \mid S_{0}'S_{1}'\mid \ll \mid (p'(x))^{2}\mid \end{aligned}}}
The first inequality can be used to show the following:
{\displaystyle {\begin{aligned}\hbar \mid S_{0}''(x)\mid \ll \mid (p(x))\mid ^{2}\\{\frac {1}{2}}{\frac {\hbar }{|p(x)|}}\left|{\frac {dp^{2}}{dx}}\right|\ll |p(x)|^{2}\\\lambda \left|{\frac {dV}{dx}}\right|\ll {\frac {|p|^{2}}{m}}\\\end{aligned}}}
where {\textstyle |S_{0}'(x)|=|p(x)|} is used and {\textstyle \lambda (x)} is the local de Broglie wavelength of the wavefunction. The inequality implies that the variation of the potential is assumed to be slowly varying. This condition can also be restated as the fractional change of {\textstyle E-V(x)}, or that of the momentum {\textstyle p(x)}, over the wavelength {\textstyle \lambda }, being much smaller than {\textstyle 1}.
Similarly it can be shown that {\textstyle \lambda (x)} also has restrictions based on underlying assumptions for the WKB approximation that:
{\displaystyle \left|{\frac {d\lambda }{dx}}\right|\ll 1}
which implies that the de Broglie wavelength of the particle is slowly varying.
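As a minimal numerical illustration of this condition (assuming units m = ħ = 1 and a harmonic potential V(x) = x²/2, neither of which is specified in the text above), the sketch below evaluates |dλ/dx| deep inside the classically allowed region and just inside a classical turning point, where the WKB assumption visibly breaks down:

```python
import math

# Validity check |d(lambda)/dx| << 1 for V(x) = x^2/2, m = hbar = 1, E = 10.
# Local de Broglie wavelength: lambda(x) = 2*pi/p(x), p(x) = sqrt(2*(E - V(x))),
# so d(lambda)/dx = 2*pi*x / p(x)^3, which diverges as p -> 0 at the turning point.

E = 10.0
x_turn = math.sqrt(2 * E)  # classical turning point, where E = V(x)

def dlambda_dx(x):
    p = math.sqrt(2 * (E - 0.5 * x * x))
    return 2 * math.pi * x / p ** 3

far = abs(dlambda_dx(1.0))             # deep inside the allowed region
near = abs(dlambda_dx(0.99 * x_turn))  # just inside the turning point

print(far, near)  # far << 1 (WKB valid), near >> 1 (WKB fails)
```

This is exactly the singularity of the approximate solutions at the turning points discussed in the next subsection.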
=== Behavior near the turning points ===
We now consider the behavior of the wave function near the turning points. For this, we need a different method. Near the first turning point, x1, the term
{\displaystyle {\frac {2m}{\hbar ^{2}}}\left(V(x)-E\right)}
can be expanded in a power series,
{\displaystyle {\frac {2m}{\hbar ^{2}}}\left(V(x)-E\right)=U_{1}\cdot (x-x_{1})+U_{2}\cdot (x-x_{1})^{2}+\cdots \;.}
To first order, one finds
{\displaystyle {\frac {d^{2}}{dx^{2}}}\Psi (x)=U_{1}\cdot (x-x_{1})\cdot \Psi (x).}
This differential equation is known as the Airy equation, and the solution may be written in terms of Airy functions,
{\displaystyle \Psi (x)=C_{A}\operatorname {Ai} \left({\sqrt[{3}]{U_{1}}}\cdot (x-x_{1})\right)+C_{B}\operatorname {Bi} \left({\sqrt[{3}]{U_{1}}}\cdot (x-x_{1})\right)=C_{A}\operatorname {Ai} \left(u\right)+C_{B}\operatorname {Bi} \left(u\right).}
Although for any fixed value of {\displaystyle \hbar }, the wave function is bounded near the turning points, the wave function will be peaked there, as can be seen in the images above. As {\displaystyle \hbar } gets smaller, the height of the wave function at the turning points grows. It also follows from this approximation that:
{\displaystyle {\frac {1}{\hbar }}\int p(x)dx={\sqrt {U_{1}}}\int {\sqrt {x-a}}\,dx={\frac {2}{3}}({\sqrt[{3}]{U_{1}}}(x-a))^{\frac {3}{2}}={\frac {2}{3}}u^{\frac {3}{2}}}
=== Connection conditions ===
It now remains to construct a global (approximate) solution to the Schrödinger equation. For the wave function to be square-integrable, we must take only the exponentially decaying solution in the two classically forbidden regions. These must then "connect" properly through the turning points to the classically allowed region. For most values of E, this matching procedure will not work: The function obtained by connecting the solution near
{\displaystyle +\infty } to the classically allowed region will not agree with the function obtained by connecting the solution near {\displaystyle -\infty } to the classically allowed region. The requirement that the two functions agree imposes a condition on the energy E, which will give an approximation to the exact quantum energy levels. The wavefunction's coefficients can be calculated for a simple problem shown in the figure. Let the first turning point, where the potential is decreasing over x, occur at {\displaystyle x=x_{1}}, and the second turning point, where the potential is increasing over x, occur at {\displaystyle x=x_{2}}. Given that we expect wavefunctions to be of the following form, we can calculate their coefficients by connecting the different regions using Airy and Bairy functions.
{\displaystyle {\begin{aligned}\Psi _{V>E}(x)\approx A{\frac {e^{{\frac {2}{3}}u^{\frac {3}{2}}}}{\sqrt[{4}]{u}}}+B{\frac {e^{-{\frac {2}{3}}u^{\frac {3}{2}}}}{\sqrt[{4}]{u}}}\\\Psi _{E>V}(x)\approx C{\frac {\cos {({\frac {2}{3}}u^{\frac {3}{2}}-\alpha )}}{\sqrt[{4}]{u}}}+D{\frac {\sin {({\frac {2}{3}}u^{\frac {3}{2}}-\alpha )}}{\sqrt[{4}]{u}}}\\\end{aligned}}}
==== First classical turning point ====
For {\displaystyle U_{1}<0}, i.e. the decreasing-potential condition, or {\displaystyle x=x_{1}} in the given example shown by the figure, we require the exponential function to decay for negative values of x so that the wavefunction goes to zero. Considering the Bairy functions to be the required connection formula, we get:
{\displaystyle {\begin{aligned}\operatorname {Bi} (u)\rightarrow -{\frac {1}{\sqrt {\pi }}}{\frac {1}{\sqrt[{4}]{u}}}\sin {\left({\frac {2}{3}}|u|^{\frac {3}{2}}-{\frac {\pi }{4}}\right)}\quad {\textrm {where,}}\quad u\rightarrow -\infty \\\operatorname {Bi} (u)\rightarrow {\frac {1}{\sqrt {\pi }}}{\frac {1}{\sqrt[{4}]{u}}}e^{{\frac {2}{3}}u^{\frac {3}{2}}}\quad {\textrm {where,}}\quad u\rightarrow +\infty \\\end{aligned}}}
We cannot use the Airy function, since it gives growing exponential behaviour for negative x. When compared to the WKB solutions, matching their behaviours at {\displaystyle \pm \infty }, we conclude:
{\displaystyle A=-D=N}, {\displaystyle B=C=0} and {\displaystyle \alpha ={\frac {\pi }{4}}}.
Thus, letting some normalization constant be {\displaystyle N}, the wavefunction is given for decreasing potential (with x) as:
{\displaystyle \Psi _{\text{WKB}}(x)={\begin{cases}-{\frac {N}{\sqrt {|p(x)|}}}\exp {(-{\frac {1}{\hbar }}\int _{x}^{x_{1}}|p(x)|dx)}&{\text{if }}x<x_{1}\\{\frac {N}{\sqrt {|p(x)|}}}\sin {({\frac {1}{\hbar }}\int _{x}^{x_{1}}|p(x)|dx-{\frac {\pi }{4}})}&{\text{if }}x_{2}>x>x_{1}\\\end{cases}}}
==== Second classical turning point ====
For {\displaystyle U_{1}>0}, i.e. the increasing-potential condition, or {\displaystyle x=x_{2}} in the given example shown by the figure, we require the exponential function to decay for positive values of x so that the wavefunction goes to zero. Considering the Airy functions to be the required connection formula, we get:
{\displaystyle {\begin{aligned}\operatorname {Ai} (u)\rightarrow {\frac {1}{2{\sqrt {\pi }}}}{\frac {1}{\sqrt[{4}]{u}}}e^{-{\frac {2}{3}}u^{\frac {3}{2}}}\quad {\textrm {where,}}\quad u\rightarrow +\infty \\\operatorname {Ai} (u)\rightarrow {\frac {1}{\sqrt {\pi }}}{\frac {1}{\sqrt[{4}]{u}}}\cos {\left({\frac {2}{3}}|u|^{\frac {3}{2}}-{\frac {\pi }{4}}\right)}\quad {\textrm {where,}}\quad u\rightarrow -\infty \\\end{aligned}}}
We cannot use the Bairy function, since it gives growing exponential behaviour for positive x. When compared to the WKB solutions, matching their behaviours at {\displaystyle \pm \infty }, we conclude:
{\displaystyle 2B=C=N'}, {\displaystyle D=A=0} and {\displaystyle \alpha ={\frac {\pi }{4}}}.
Thus, letting some normalization constant be {\displaystyle N'}, the wavefunction is given for increasing potential (with x) as:
{\displaystyle \Psi _{\text{WKB}}(x)={\begin{cases}{\frac {N'}{\sqrt {|p(x)|}}}\cos {({\frac {1}{\hbar }}\int _{x}^{x_{2}}|p(x)|dx-{\frac {\pi }{4}})}&{\text{if }}x_{1}<x<x_{2}\\{\frac {N'}{2{\sqrt {|p(x)|}}}}\exp {(-{\frac {1}{\hbar }}\int _{x_{2}}^{x}|p(x)|dx)}&{\text{if }}x>x_{2}\\\end{cases}}}
==== Common oscillating wavefunction ====
Matching the two solutions for the region {\displaystyle x_{1}<x<x_{2}}, it is required that the difference between the angles in these functions is {\displaystyle \pi (n+1/2)}, where the {\displaystyle {\frac {\pi }{2}}} phase difference accounts for changing cosine to sine for the wavefunction and the {\displaystyle n\pi } difference arises since negation of the function can occur by letting {\displaystyle N=(-1)^{n}N'}. Thus:
{\displaystyle \int _{x_{1}}^{x_{2}}{\sqrt {2m\left(E-V(x)\right)}}\,dx=(n+1/2)\pi \hbar ,}
where n is a non-negative integer. This condition can also be rewritten as saying that:
The area enclosed by the classical energy curve is {\displaystyle 2\pi \hbar (n+1/2)}.
Either way, the condition on the energy is a version of the Bohr–Sommerfeld quantization condition, with a "Maslov correction" equal to 1/2.
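A quick way to see the quality of this condition is the harmonic oscillator, where the Maslov-corrected Bohr–Sommerfeld rule reproduces the exact spectrum E_n = (n + 1/2)ħω. The sketch below (units m = ħ = ω = 1 assumed, not taken from the text) evaluates the action integral numerically and solves the quantization condition by bisection:

```python
import math

# Bohr-Sommerfeld with Maslov correction for V(x) = x^2/2, m = hbar = omega = 1:
# int_{x1}^{x2} sqrt(2(E - x^2/2)) dx = (n + 1/2) * pi, exact spectrum E_n = n + 1/2.

def action(E, steps=2000):
    """int sqrt(2E - x^2) dx between turning points +-sqrt(2E), via the
    substitution x = sqrt(2E) sin(theta): the integrand becomes 2E cos^2(theta),
    which removes the square-root singularity at the endpoints."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps):
        theta = -math.pi / 2 + (i + 0.5) * h  # midpoint rule
        total += 2 * E * math.cos(theta) ** 2 * h
    return total

def wkb_energy(n):
    """Bisect action(E) = (n + 1/2)*pi for E (action is increasing in E)."""
    target = (n + 0.5) * math.pi
    lo, hi = 1e-9, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if action(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

energies = [wkb_energy(n) for n in range(4)]
print(energies)  # close to [0.5, 1.5, 2.5, 3.5], the exact oscillator levels
```

For the oscillator the agreement is exact (the action integral is πE), which is a well-known coincidence; for generic potentials the rule is only an approximation.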
It is possible to show that after piecing together the approximations in the various regions, one obtains a good approximation to the actual eigenfunction. In particular, the Maslov-corrected Bohr–Sommerfeld energies are good approximations to the actual eigenvalues of the Schrödinger operator. Specifically, the error in the energies is small compared to the typical spacing of the quantum energy levels. Thus, although the "old quantum theory" of Bohr and Sommerfeld was ultimately replaced by the Schrödinger equation, some vestige of that theory remains, as an approximation to the eigenvalues of the appropriate Schrödinger operator.
==== General connection conditions ====
Thus, from the two cases the connection formula is obtained at a classical turning point, {\displaystyle x=a}:
{\displaystyle {\frac {N}{\sqrt {|p(x)|}}}\sin {\left({\frac {1}{\hbar }}\int _{x}^{a}|p(x)|dx-{\frac {\pi }{4}}\right)}\Longrightarrow -{\frac {N}{\sqrt {|p(x)|}}}\exp {\left({\frac {1}{\hbar }}\int _{a}^{x}|p(x)|dx\right)}}
and:
{\displaystyle {\frac {N'}{\sqrt {|p(x)|}}}\cos {\left({\frac {1}{\hbar }}\int _{x}^{a}|p(x)|dx-{\frac {\pi }{4}}\right)}\Longleftarrow {\frac {N'}{2{\sqrt {|p(x)|}}}}\exp {\left(-{\frac {1}{\hbar }}\int _{a}^{x}|p(x)|dx\right)}}
Away from the classical turning point, the WKB wavefunction is approximated by an oscillatory sine or cosine function in the classically allowed region, represented on the left, and by growing or decaying exponentials in the forbidden region, represented on the right. The implication follows from the dominance of the growing exponential over the decaying exponential. Thus, the solutions of the oscillating or exponential part of the wavefunction can imply the form of the wavefunction in the other region of the potential, as well as at the associated turning point.
=== Probability density ===
One can then compute the probability density associated to the approximate wave function. The probability that the quantum particle will be found in the classically forbidden region is small. In the classically allowed region, meanwhile, the probability the quantum particle will be found in a given interval is approximately the fraction of time the classical particle spends in that interval over one period of motion. Since the classical particle's velocity goes to zero at the turning points, it spends more time near the turning points than in other classically allowed regions. This observation accounts for the peak in the wave function (and its probability density) near the turning points.
Applications of the WKB method to Schrödinger equations with a large variety of potentials and comparison with perturbation methods and path integrals are treated in Müller-Kirsten.
== Examples in quantum mechanics ==
Although the WKB approximation only applies to smoothly varying potentials, in examples where rigid walls produce infinities in the potential, it can still be used to approximate wavefunctions in the regions of smoothly varying potential. Since the rigid walls make the potential highly discontinuous, the connection condition cannot be used at these points, and the results obtained can also differ from those of the above treatment.
=== Bound states for 1 rigid wall ===
The potential of such systems can be given in the form:
{\displaystyle V(x)={\begin{cases}V(x)&{\text{if }}x\geq x_{1}\\\infty &{\text{if }}x<x_{1}\\\end{cases}}}
where {\textstyle x_{1}<x_{2}}.
Finding the wavefunction in the bound region, i.e. within the classical turning points {\textstyle x_{1}} and {\textstyle x_{2}}, and considering approximations far from {\textstyle x_{1}} and {\textstyle x_{2}} respectively, we have two solutions:
{\displaystyle \Psi _{\text{WKB}}(x)={\frac {A}{\sqrt {|p(x)|}}}\sin {\left({\frac {1}{\hbar }}\int _{x}^{x_{1}}|p(x)|dx+\alpha \right)}}
{\displaystyle \Psi _{\text{WKB}}(x)={\frac {B}{\sqrt {|p(x)|}}}\cos {\left({\frac {1}{\hbar }}\int _{x}^{x_{2}}|p(x)|dx+\beta \right)}}
Since the wavefunction must vanish near {\textstyle x_{1}}, we conclude {\textstyle \alpha =0}. For Airy functions near {\textstyle x_{2}}, we require {\textstyle \beta =-{\frac {\pi }{4}}}. We require that the angles within these functions have a phase difference of {\displaystyle \pi (n+1/2)}, where the {\displaystyle {\frac {\pi }{2}}} phase difference accounts for changing sine to cosine and {\displaystyle n\pi } allows {\displaystyle B=(-1)^{n}A}.
{\displaystyle {\frac {1}{\hbar }}\int _{x_{1}}^{x_{2}}|p(x)|dx=\pi \left(n+{\frac {3}{4}}\right)}
where n is a non-negative integer. Note that the right-hand side of this would instead be {\displaystyle \pi (n-1/4)} if n were only allowed to take non-zero natural numbers.
Thus we conclude that, for {\textstyle n=1,2,3,\cdots },
{\displaystyle \int _{x_{1}}^{x_{2}}{\sqrt {2m\left(E-V(x)\right)}}\,dx=\left(n-{\frac {1}{4}}\right)\pi \hbar }
In three dimensions with spherical symmetry, the same condition holds with the position x replaced by the radial distance r, owing to the similarity of the two problems.
=== Bound states within 2 rigid walls ===
The potential of such systems can be given in the form:
{\displaystyle V(x)={\begin{cases}\infty &{\text{if }}x>x_{2}\\V(x)&{\text{if }}x_{2}\geq x\geq x_{1}\\\infty &{\text{if }}x<x_{1}\\\end{cases}}}
where {\textstyle x_{1}<x_{2}}.
For {\textstyle E\geq V(x)} between {\textstyle x_{1}} and {\textstyle x_{2}}, which are thus the classical turning points, considering approximations far from {\textstyle x_{1}} and {\textstyle x_{2}} respectively, we have two solutions:
{\displaystyle \Psi _{\text{WKB}}(x)={\frac {A}{\sqrt {|p(x)|}}}\sin {\left({\frac {1}{\hbar }}\int _{x}^{x_{1}}|p(x)|dx\right)}}
{\displaystyle \Psi _{\text{WKB}}(x)={\frac {B}{\sqrt {|p(x)|}}}\sin {\left({\frac {1}{\hbar }}\int _{x}^{x_{2}}|p(x)|dx\right)}}
Since the wavefunctions must vanish at {\textstyle x_{1}} and {\textstyle x_{2}}, here the phase difference only needs to account for {\displaystyle n\pi }, which allows {\displaystyle B=(-1)^{n}A}. Hence the condition becomes:
{\displaystyle \int _{x_{1}}^{x_{2}}{\sqrt {2m\left(E-V(x)\right)}}\,dx=n\pi \hbar }
where {\textstyle n=1,2,3,\cdots } but not equal to zero, since that makes the wavefunction zero everywhere.
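With V = 0 between the walls, this condition reproduces the exact infinite-square-well spectrum E_n = n²π²ħ²/(2mL²). The sketch below (m = ħ = 1 and L = 2 assumed, not specified in the text) evaluates the integral numerically and solves the condition by bisection, mimicking what one would do for a general V:

```python
import math

# WKB condition for two rigid walls: int_0^L sqrt(2m(E - V(x))) dx = n*pi*hbar.
# With V = 0 this is the infinite square well, whose exact levels are
# E_n = n^2 * pi^2 * hbar^2 / (2 m L^2); WKB happens to be exact here.

m, hbar, L = 1.0, 1.0, 2.0

def V(x):
    return 0.0

def action(E, steps=2000):
    # midpoint-rule evaluation of int_0^L sqrt(2m(E - V(x))) dx
    h = L / steps
    return sum(math.sqrt(2 * m * (E - V((i + 0.5) * h))) * h for i in range(steps))

def wkb_energy(n):
    # bisect action(E) = n*pi*hbar (action is increasing in E)
    lo, hi = 1e-9, 200.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if action(mid) < n * math.pi * hbar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def exact(n):
    return (n * math.pi * hbar) ** 2 / (2 * m * L ** 2)

print([(wkb_energy(n), exact(n)) for n in (1, 2, 3)])  # pairs agree
```

Replacing `V` with a smooth potential between the walls turns this into the general one-wall/two-wall recipe of this section.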
=== Quantum bouncing ball ===
Consider the following potential a bouncing ball is subjected to:
{\displaystyle V(x)={\begin{cases}mgx&{\text{if }}x\geq 0\\\infty &{\text{if }}x<0\\\end{cases}}}
The wavefunction solutions of the above can be obtained using the WKB method by considering only the odd-parity solutions of the alternative potential {\displaystyle V(x)=mg|x|}. The classical turning points are identified as {\textstyle x_{1}=-{E \over mg}} and {\textstyle x_{2}={E \over mg}}. Thus, applying the quantization condition obtained in WKB:
{\displaystyle \int _{x_{1}}^{x_{2}}{\sqrt {2m\left(E-V(x)\right)}}\,dx=(n_{\text{odd}}+1/2)\pi \hbar }
Letting {\textstyle n_{\text{odd}}=2n-1} where {\textstyle n=1,2,3,\cdots }, and solving for {\textstyle E} with the given {\displaystyle V(x)=mg|x|}, we get the quantum mechanical energy of a bouncing ball:
{\displaystyle E={\left(3\left(n-{\frac {1}{4}}\right)\pi \right)^{\frac {2}{3}} \over 2}(mg^{2}\hbar ^{2})^{\frac {1}{3}}.}
This result is also consistent with the use of equation from bound state of one rigid wall without needing to consider an alternative potential.
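The WKB levels can be checked against the exact spectrum, which is set by the zeros aₙ of the Airy function Ai via Eₙ = −aₙ (mg²ħ²/2)^{1/3}. The sketch below (units m = g = ħ = 1 assumed; the first three Airy zeros quoted as known constants rather than computed) shows agreement to better than one percent already at n = 1:

```python
import math

# WKB bouncing-ball energies, E_n = ((3(n - 1/4) pi)^(2/3) / 2) * (m g^2 hbar^2)^(1/3),
# versus the exact levels E_n = -a_n * (m g^2 hbar^2 / 2)^(1/3), where a_n are
# the zeros of Ai.  Units m = g = hbar = 1.

airy_zeros = [-2.33811, -4.08795, -5.52056]  # a_1, a_2, a_3 (known constants)

for n, a in enumerate(airy_zeros, start=1):
    e_wkb = (3 * (n - 0.25) * math.pi) ** (2 / 3) / 2
    e_exact = -a * 0.5 ** (1 / 3)
    print(n, e_wkb, e_exact)  # relative error shrinks as n grows
```

The WKB formula is in fact the leading asymptotic expansion of the Airy zeros, which is why the agreement improves with n.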
=== Quantum Tunneling ===
The potential of such systems can be given in the form:
{\displaystyle V(x)={\begin{cases}0&{\text{if }}x<x_{1}\\V(x)&{\text{if }}x_{2}\geq x\geq x_{1}\\0&{\text{if }}x>x_{2}\\\end{cases}}}
where {\textstyle x_{1}<x_{2}}.
Its solution for an incident wave is given as:
{\displaystyle \psi (x)={\begin{cases}A\exp({ip_{0}x \over \hbar })+B\exp({-ip_{0}x \over \hbar })&{\text{if }}x<x_{1}\\{\frac {C}{\sqrt {|p(x)|}}}\exp {(-{\frac {1}{\hbar }}\int _{x_{1}}^{x}|p(x)|dx)}&{\text{if }}x_{2}\geq x\geq x_{1}\\D\exp({ip_{0}x \over \hbar })&{\text{if }}x>x_{2}\\\end{cases}}}
where the wavefunction in the classically forbidden region is the WKB approximation but neglecting the growing exponential. This is a fair assumption for wide potential barriers through which the wavefunction is not expected to grow to high magnitudes.
By the requirement of continuity of wavefunction and its derivatives, the following relation can be shown:
{\displaystyle {\frac {|D|^{2}}{|A|^{2}}}={\frac {4}{(1+{a_{1}^{2}}/{p_{0}^{2}})}}{\frac {a_{1}}{a_{2}}}\exp \left(-{\frac {2}{\hbar }}\int _{x_{1}}^{x_{2}}|p(x')|dx'\right)}
where {\displaystyle a_{1}=|p(x_{1})|} and {\displaystyle a_{2}=|p(x_{2})|}.
Using {\textstyle \mathbf {J} (\mathbf {x} ,t)={\frac {i\hbar }{2m}}(\psi ^{*}\nabla \psi -\psi \nabla \psi ^{*})} we express the values without signs as:
{\textstyle J_{\text{inc.}}={\frac {\hbar }{2m}}({\frac {2p_{0}}{\hbar }}|A|^{2})}
{\textstyle J_{\text{ref.}}={\frac {\hbar }{2m}}({\frac {2p_{0}}{\hbar }}|B|^{2})}
{\textstyle J_{\text{trans.}}={\frac {\hbar }{2m}}({\frac {2p_{0}}{\hbar }}|D|^{2})}
Thus, the transmission coefficient is found to be:
{\displaystyle T={\frac {|D|^{2}}{|A|^{2}}}={\frac {4}{(1+{a_{1}^{2}}/{p_{0}^{2}})}}{\frac {a_{1}}{a_{2}}}\exp \left(-{\frac {2}{\hbar }}\int _{x_{1}}^{x_{2}}|p(x')|dx'\right)}
where {\textstyle p(x)={\sqrt {2m(E-V(x))}}}, {\displaystyle a_{1}=|p(x_{1})|} and {\displaystyle a_{2}=|p(x_{2})|}. The result can be stated as {\textstyle T\sim e^{-2\gamma }}
where {\textstyle \gamma ={\frac {1}{\hbar }}\int _{x_{1}}^{x_{2}}|p(x')|dx'}, so that the exponent matches the transmission coefficient above.
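As an illustration, the estimate T ∼ e^{−2γ} can be compared with the exact transmission through a rectangular barrier of height V₀ and width a, for which a closed-form answer is known. The parameter values below are hypothetical, in assumed natural units ħ = m = 1; for a thick barrier the two results agree up to a modest prefactor:

```python
import math

m = hbar = 1.0              # natural units (assumption)
V0, E, a = 5.0, 1.0, 3.0    # barrier height, particle energy, barrier width (hypothetical)

kappa = math.sqrt(2 * m * (V0 - E)) / hbar   # |p(x)| / hbar inside the barrier (constant here)
gamma = kappa * a                            # (1/hbar) * integral of |p| across the barrier
T_wkb = math.exp(-2 * gamma)                 # WKB estimate T ~ e^{-2 gamma}

# exact transmission through a rectangular barrier, for comparison
T_exact = 1.0 / (1.0 + (V0**2 * math.sinh(kappa * a) ** 2) / (4 * E * (V0 - E)))

assert T_wkb < 1e-6 and T_exact < 1e-6       # both exponentially small for a thick barrier
assert 0.1 < T_wkb / T_exact < 10.0          # agreement up to an O(1) prefactor
```

The exponential factor dominates, which is why the WKB estimate is useful even though it drops the prefactor involving a₁, a₂, and p₀.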
== See also ==
== References ==
=== Further reading ===
Child, M. S. (1991). Semiclassical mechanics with molecular applications. Oxford: Clarendon Press. ISBN 0-19-855654-3.
Fröman, N.; Fröman, P.-O. (1965). JWKB Approximation: Contributions to the Theory. Amsterdam: North-Holland.
Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. ISBN 0-13-111892-7.
Hall, Brian C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, vol. 267, Springer, Bibcode:2013qtm..book.....H, ISBN 978-1461471158
Liboff, Richard L. (2003). Introductory Quantum Mechanics (4th ed.). Addison-Wesley. ISBN 0-8053-8714-5.
Olver, Frank William John (1974). Asymptotics and Special Functions. Academic Press. ISBN 0-12-525850-X.
Razavy, Mohsen (2003). Quantum Theory of Tunneling. World Scientific. ISBN 981-238-019-1.
== External links ==
Fitzpatrick, Richard (2002). "The W.K.B. Approximation". (An application of the WKB approximation to the scattering of radio waves from the ionosphere.)
Quantum neural networks are computational neural network models which are based on the principles of quantum mechanics. The first ideas on quantum neural computation were published independently in 1995 by Subhash Kak and Ron Chrisley, engaging with the theory of quantum mind, which posits that quantum effects play a role in cognitive function. However, typical research in quantum neural networks involves combining classical artificial neural network models (which are widely used in machine learning for the important task of pattern recognition) with the advantages of quantum information in order to develop more efficient algorithms. One important motivation for these investigations is the difficulty of training classical neural networks, especially in big data applications. The hope is that features of quantum computing such as quantum parallelism or the effects of interference and entanglement can be used as resources. Since the technological implementation of a quantum computer is still at an early stage, such quantum neural network models are mostly theoretical proposals that await their full implementation in physical experiments.
Most quantum neural networks are developed as feed-forward networks. Similar to their classical counterparts, this structure takes input from one layer of qubits and passes it on to another layer of qubits, which evaluates the information and passes the output to the next layer, until the path reaches the final layer. The layers do not have to be of the same width: they need not have the same number of qubits as the layers before or after them. This structure is trained on which path to take, similar to classical artificial neural networks, as discussed in a later section. Quantum neural networks refer to three different categories: quantum computers with classical data, classical computers with quantum data, and quantum computers with quantum data.
== Examples ==
Quantum neural network research is still in its infancy, and a conglomeration of proposals and ideas of varying scope and mathematical rigor have been put forward. Most of them are based on the idea of replacing classical binary or McCulloch-Pitts neurons with a qubit (which can be called a “quron”), resulting in neural units that can be in a superposition of the state ‘firing’ and ‘resting’.
=== Quantum perceptrons ===
A lot of proposals attempt to find a quantum equivalent for the perceptron unit from which neural nets are constructed. A problem is that nonlinear activation functions do not immediately correspond to the mathematical structure of quantum theory, since a quantum evolution is described by linear operations and leads to probabilistic observation. Ideas to imitate the perceptron activation function with a quantum mechanical formalism reach from special measurements to postulating non-linear quantum operators (a mathematical framework that is disputed). A direct implementation of the activation function using the circuit-based model of quantum computation has recently been proposed by Schuld, Sinayskiy and Petruccione based on the quantum phase estimation algorithm.
=== Quantum networks ===
At a larger scale, researchers have attempted to generalize neural networks to the quantum setting. One way of constructing a quantum neuron is to first generalise classical neurons and then generalising them further to make unitary gates. Interactions between neurons can be controlled quantumly, with unitary gates, or classically, via measurement of the network states. This high-level theoretical technique can be applied broadly, by taking different types of networks and different implementations of quantum neurons, such as photonically implemented neurons and quantum reservoir processor (quantum version of reservoir computing). Most learning algorithms follow the classical model of training an artificial neural network to learn the input-output function of a given training set and use classical feedback loops to update parameters of the quantum system until they converge to an optimal configuration. Learning as a parameter optimisation problem has also been approached by adiabatic models of quantum computing.
Quantum neural networks can be applied to algorithmic design: given qubits with tunable mutual interactions, one can attempt to learn interactions following the classical backpropagation rule from a training set of desired input-output relations, taken to be the desired output algorithm's behavior. The quantum network thus ‘learns’ an algorithm.
=== Quantum associative memory ===
The first quantum associative memory algorithm was introduced by Dan Ventura and Tony Martinez in 1999. The authors do not attempt to translate the structure of artificial neural network models into quantum theory, but propose an algorithm for a circuit-based quantum computer that simulates associative memory. The memory states (in Hopfield neural networks saved in the weights of the neural connections) are written into a superposition, and a Grover-like quantum search algorithm retrieves the memory state closest to a given input. As such, this is not a fully content-addressable memory, since only incomplete patterns can be retrieved.
The first truly content-addressable quantum memory, which can retrieve patterns also from corrupted inputs, was proposed by Carlo A. Trugenberger. Both memories can store an exponential (in terms of n qubits) number of patterns but can be used only once due to the no-cloning theorem and their destruction upon measurement.
Trugenberger, however, has shown that his probabilistic model of quantum associative memory can be efficiently implemented and re-used multiple times for any polynomial number of stored patterns, a large advantage with respect to classical associative memories.
=== Classical neural networks inspired by quantum theory ===
A substantial amount of interest has been given to a “quantum-inspired” model that uses ideas from quantum theory to implement a neural network based on fuzzy logic.
== Training ==
Quantum neural networks can be theoretically trained similarly to classical artificial neural networks. A key difference lies in communication between the layers of a neural network. For classical neural networks, at the end of a given operation, the current perceptron copies its output to the next layer of perceptron(s) in the network. However, in a quantum neural network, where each perceptron is a qubit, this would violate the no-cloning theorem. A proposed generalized solution is to replace the classical fan-out method with an arbitrary unitary that spreads out, but does not copy, the output of one qubit to the next layer of qubits. Using this fan-out unitary ({\displaystyle U_{f}}) with a dummy state qubit in a known state (e.g. {\displaystyle |0\rangle } in the computational basis), also known as an ancilla bit, the information from the qubit can be transferred to the next layer of qubits. This process adheres to the quantum operation requirement of reversibility.
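The distinction between spreading and copying can be illustrated with a minimal sketch. Using a CNOT as a stand-in fan-out unitary (an assumption for illustration; the proposals above allow a more general U_f), computational-basis states are spread faithfully onto an ancilla prepared in |0⟩, while a superposition becomes entangled with the ancilla rather than copied, exactly as the no-cloning theorem requires:

```python
import numpy as np

# single-qubit basis states and a 2-qubit CNOT (control = first qubit)
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def fan_out(psi):
    """Spread the state of one qubit onto an ancilla prepared in |0>."""
    return CNOT @ np.kron(psi, zero)

# basis states are faithfully spread: |1>|0> -> |1>|1>
assert np.allclose(fan_out(one), np.kron(one, one))

# but a superposition is NOT copied: CNOT |+>|0> is the entangled Bell state,
# not the product |+>|+>
plus = (zero + one) / np.sqrt(2)
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
assert np.allclose(fan_out(plus), bell)
assert not np.allclose(fan_out(plus), np.kron(plus, plus))
```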
Using this quantum feed-forward network, deep neural networks can be executed and trained efficiently. A deep neural network is essentially a network with many hidden layers, as seen in the sample model neural network above. Since the quantum neural network being discussed uses fan-out unitary operators, and each operator only acts on its respective input, only two layers are used at any given time. In other words, no unitary operator acts on the entire network at any given time, so the number of qubits required for a given step depends on the number of inputs in a given layer. Since quantum computers can run many iterations in a short period of time, the efficiency of a quantum neural network depends solely on the number of qubits in any given layer, and not on the depth of the network.
=== Cost functions ===
To determine the effectiveness of a neural network, a cost function is used, which measures the proximity of the network's output to the expected or desired output. In a classical neural network, the weights ({\displaystyle w}) and biases ({\displaystyle b}) at each step determine the outcome of the cost function {\displaystyle C(w,b)}. When training a classical neural network, the weights and biases are adjusted after each iteration; given equation 1 below, where {\displaystyle y(x)} is the desired output and {\displaystyle a^{\text{out}}(x)} is the actual output, the cost function is optimized when {\displaystyle C(w,b)} = 0. For a quantum neural network, the cost function is determined by measuring the fidelity of the outcome state ({\displaystyle \rho ^{\text{out}}}) with the desired outcome state ({\displaystyle \phi ^{\text{out}}}), as seen in equation 2 below. In this case, the unitary operators are adjusted after each iteration, and the cost function is optimized when C = 1.
Equation 1
{\displaystyle C(w,b)={1 \over N}\sum _{x}{||y(x)-a^{\text{out}}(x)||^{2} \over 2}}
Equation 2
{\displaystyle C={1 \over N}\sum _{x}^{N}{\langle \phi ^{\text{out}}|\rho ^{\text{out}}|\phi ^{\text{out}}\rangle }}
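Equation 2 can be sketched for a single training pair (the example states and names below are hypothetical): the fidelity ⟨φ_out|ρ_out|φ_out⟩ equals 1 when the network's output density matrix matches the desired pure state, and drops below 1 otherwise:

```python
import numpy as np

def fidelity_cost(rho_out, phi_out):
    """Equation 2 for a single training pair: <phi|rho|phi> (real part)."""
    return float(np.real(phi_out.conj() @ rho_out @ phi_out))

phi = np.array([1.0, 0.0])                 # desired output state |0>
rho_perfect = np.outer(phi, phi.conj())    # network output exactly |0><0|
rho_bad = np.array([[0.5, 0.0],            # maximally mixed output
                    [0.0, 0.5]])

assert np.isclose(fidelity_cost(rho_perfect, phi), 1.0)   # optimal: C = 1
assert np.isclose(fidelity_cost(rho_bad, phi), 0.5)       # sub-optimal: C < 1
```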
=== Barren plateaus ===
Gradient descent is widely used and successful in classical algorithms. However, although the simplified structure is very similar to neural networks such as CNNs, QNNs perform much worse.
Since the dimension of the quantum state space grows exponentially with the number of qubits, observables concentrate around their mean value at an exponential rate, and the gradients become exponentially small as well.
This situation is known as the barren plateau problem, because most of the initial parameters are trapped on a "plateau" of almost-zero gradient, which approximates random wandering rather than gradient descent. This makes the model untrainable.
In fact, not only QNNs but almost all deeper VQA algorithms have this problem. In the present NISQ era, this is one of the problems that must be solved if the various VQA algorithms, including QNN, are to find wider application.
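The concentration effect behind barren plateaus can be illustrated with a toy model (a sketch, not a full QNN gradient computation): overlaps of Haar-like random states with a fixed reference state concentrate around 1/2ⁿ, shrinking exponentially with the number of qubits:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_overlap(n_qubits, samples=2000):
    """Average |<0...0|psi>|^2 over random complex-Gaussian states of n qubits."""
    dim = 2 ** n_qubits
    vals = []
    for _ in range(samples):
        psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        psi /= np.linalg.norm(psi)            # normalize to a valid quantum state
        vals.append(abs(psi[0]) ** 2)
    return float(np.mean(vals))

# the mean overlap concentrates at ~ 1/2^n, i.e. exponentially small in n
for n in (2, 4, 6):
    assert abs(mean_overlap(n) - 1 / 2 ** n) < 0.5 / 2 ** n
```

The same exponential concentration applies to the cost landscape of a randomly initialized deep circuit, which is why its gradients vanish exponentially in the number of qubits.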
== See also ==
Differentiable programming
Optical neural network
Holographic associative memory
Quantum cognition
Quantum machine learning
== References ==
== External links ==
Recent review of quantum neural networks by M. Schuld, I. Sinayskiy and F. Petruccione
Review of quantum neural networks by Wei
Article by P. Gralewicz on the plausibility of quantum computing in biological neural networks
Training a neural net to recognize images
The timeline of quantum mechanics is a list of key events in the history of quantum mechanics, quantum field theories and quantum chemistry.
The initiation of quantum science occurred in 1900, originating from the problem of the oscillator that had been studied since the mid-19th century.
== 19th century ==
1801 – Thomas Young establishes the wave nature of light with his double-slit experiment.
1859 – Gustav Kirchhoff introduces the concept of a blackbody and proves that its emission spectrum depends only on its temperature.
1860–1900 – Ludwig Eduard Boltzmann, James Clerk Maxwell and others develop the theory of statistical mechanics. Boltzmann argues that entropy is a measure of disorder.
1877 – Boltzmann suggests that the energy levels of a physical system could be discrete based on statistical mechanics and mathematical arguments; also produces the first circle diagram representation, or atomic model of a molecule (such as an iodine gas molecule) in terms of the overlapping terms α and β, later (in 1928) called molecular orbitals, of the constituting atoms.
1885 – Johann Jakob Balmer discovers a numerical relationship between visible spectral lines of hydrogen, the Balmer series.
1887 – Heinrich Hertz discovers the photoelectric effect, shown by Einstein in 1905 to involve quanta of light.
1888 – Hertz demonstrates experimentally that electromagnetic waves exist, as predicted by Maxwell.
1888 – Johannes Rydberg modifies the Balmer formula to include all spectral series of lines for the hydrogen atom, producing the Rydberg formula that is employed later by Niels Bohr and others to verify Bohr's first quantum model of the atom.
1895 – Wilhelm Conrad Röntgen discovers X-rays in experiments with electron beams in plasma.
1896 – Antoine Henri Becquerel accidentally discovers radioactivity while investigating the work of Wilhelm Conrad Röntgen; he finds that uranium salts emit radiation that resembled Röntgen's X-rays in their penetrating power. In one experiment, Becquerel wraps a sample of a phosphorescent substance, potassium uranyl sulfate, in photographic plates surrounded by very thick black paper in preparation for an experiment with bright sunlight; then, to his surprise, the photographic plates are already exposed before the experiment starts, showing a projected image of his sample.
1896–1897 – Pieter Zeeman first observes the Zeeman splitting effect by applying a magnetic field to light sources.
1896–1897 – Marie Curie (née Skłodowska, Becquerel's doctoral student) investigates uranium salt samples using a very sensitive electrometer device that was invented 15 years before by her husband and his brother Jacques Curie to measure electrical charge. She discovers that rays emitted by the uranium salt samples make the surrounding air electrically conductive, and measures the emitted rays' intensity. In April 1898, through a systematic search of substances, she finds that thorium compounds, like those of uranium, emitted "Becquerel rays", thus preceding the work of Frederick Soddy and Ernest Rutherford on the nuclear decay of thorium to radium by three years.
1897:
Ivan Borgman demonstrates that X-rays and radioactive materials induce thermoluminescence.
J. J. Thomson's experimentation with cathode rays leads him to suggest a fundamental unit more than 1,000 times smaller than an atom, based on its high charge-to-mass ratio. He calls the particle a "corpuscle", but later scientists prefer the term electron.
Joseph Larmor explains the splitting of the spectral lines in a magnetic field by the oscillation of electrons.
Larmor creates the first solar-system model of the atom. He also postulates the proton, calling it a "positive electron", and says the destruction of this type of atom making up matter "is an occurrence of infinitely small probability".
1899–1903 – Ernest Rutherford investigates radioactivity. He coins the terms alpha and beta rays in 1899 to describe the two distinct types of radiation emitted by thorium and uranium salts. Rutherford is joined at McGill University in 1900 by Frederick Soddy and together they discover nuclear transmutation when they find in 1902 that radioactive thorium is converting itself into radium through a process of nuclear decay and a gas (later found to be 42He); they report their interpretation of radioactivity in 1903. Rutherford becomes known as the "father of nuclear physics" with his nuclear atom model of 1911.
== 20th century ==
=== 1900–1909 ===
1900 – To explain black-body radiation (1862), Max Planck suggests that electromagnetic energy could only be emitted in quantized form, i.e. the energy could only be a multiple of an elementary unit E = hν, where h is the Planck constant and ν is the frequency of the radiation.
1902 – To explain the octet rule (1893), Gilbert N. Lewis develops the "cubical atom" theory in which electrons in the form of dots are positioned at the corner of a cube. Predicts that single, double, or triple "bonds" result when two atoms are held together by multiple pairs of electrons (one pair for each bond) located between the two atoms.
1903 – Antoine Becquerel, Pierre Curie and Marie Curie share the 1903 Nobel Prize in Physics for their work on spontaneous radioactivity.
1904 – Richard Abegg notes the pattern that the numerical difference between the maximum positive valence, such as +6 for H2SO4, and the maximum negative valence, such as −2 for H2S, of an element tends to be eight (Abegg's rule).
1905 :
Albert Einstein explains the photoelectric effect (reported in 1887 by Heinrich Hertz), i.e. that shining light on certain materials can function to eject electrons from the material. He postulates, as based on Planck's quantum hypothesis (1900), that light itself consists of individual quantum particles (photons).
Einstein explains the effects of Brownian motion as caused by the kinetic energy (i.e., movement) of atoms, which was subsequently, experimentally verified by Jean Baptiste Perrin, thereby settling the century-long dispute about the validity of John Dalton's atomic theory.
Einstein publishes his special theory of relativity.
Einstein theoretically derives the equivalence of matter and energy.
1907 to 1917 – Ernest Rutherford: To test his planetary model of 1904, later known as the Rutherford model, he sent a beam of positively charged alpha particles onto a gold foil and noticed that some bounced back, thus showing that an atom has a small-sized positively charged atomic nucleus at its center. However, he received in 1908 the Nobel Prize in Chemistry "for his investigations into the disintegration of the elements, and the chemistry of radioactive substances", which followed on the work of Marie Curie, not for his planetary model of the atom; he is also widely credited with first "splitting the atom" in 1917. In 1911 Ernest Rutherford explained the Geiger–Marsden experiment by invoking a nuclear atom model and derived the Rutherford cross section.
1909 – Geoffrey Ingram Taylor demonstrates that interference patterns of light were generated even when the light energy introduced consisted of only one photon. This discovery of the wave–particle duality of matter and energy is fundamental to the later development of quantum field theory.
1909 and 1916 – Einstein shows that, if Planck's law of black-body radiation is accepted, the energy quanta must also carry momentum p = h / λ, making them full-fledged particles.
=== 1910–1919 ===
1911:
Lise Meitner and Otto Hahn perform an experiment that shows that the energies of electrons emitted by beta decay had a continuous rather than discrete spectrum. This is in apparent contradiction to the law of conservation of energy, as it appeared that energy was lost in the beta decay process. A second problem is that the spin of the nitrogen-14 atom was 1, in contradiction to the Rutherford prediction of 1⁄2. These anomalies are later explained by the discoveries of the neutrino and the neutron.
Ștefan Procopiu performs experiments in which he determines the correct value of electron's magnetic dipole moment, μB = 9.27×10−21 erg·Oe−1 (in 1913 he is also able to calculate a theoretical value of the Bohr magneton based on Planck's quantum theory).
John William Nicholson is noted as the first to create an atomic model that quantized angular momentum as h/2π. Niels Bohr quoted him in his 1913 paper of the Bohr model of the atom.
1912 – Victor Hess discovers the existence of cosmic radiation.
1912 – Henri Poincaré publishes an influential mathematical argument in support of the essential nature of energy quanta.
1913:
Robert Andrews Millikan publishes the results of his "oil drop" experiment, in which he precisely determines the electric charge of the electron. Determination of the fundamental unit of electric charge makes it possible to calculate the Avogadro constant (which is the number of atoms or molecules in one mole of any substance) and thereby to determine the atomic weight of the atoms of each element.
Niels Bohr publishes his 1913 paper of the Bohr model of the atom.
Ștefan Procopiu publishes a theoretical paper with the correct value of the electron's magnetic dipole moment μB.
Niels Bohr obtains theoretically the value of the electron's magnetic dipole moment μB as a consequence of his atom model
Johannes Stark and Antonino Lo Surdo independently discover the shifting and splitting of the spectral lines of atoms and molecules due to the presence of the light source in an external static electric field.
To explain the Rydberg formula (1888), which correctly modeled the light emission spectra of atomic hydrogen, Bohr hypothesizes that negatively charged electrons revolve around a positively charged nucleus at certain fixed "quantum" distances and that each of these "spherical orbits" has a specific energy associated with it such that electron movements between orbits requires "quantum" emissions or absorptions of energy.
1914 – James Franck and Gustav Hertz report their experiment on electron collisions with mercury atoms, which provides a new test of Bohr's quantized model of atomic energy levels.
1915 – Einstein first presents to the Prussian Academy of Science what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter is present, and form the core of Einstein's General Theory of Relativity. Although this theory is not directly applicable to quantum mechanics, theorists of quantum gravity seek to reconcile them.
1916 – Paul Epstein and Karl Schwarzschild, working independently, derive equations for the linear and quadratic Stark effect in hydrogen.
1916 – Gilbert N. Lewis conceives the theoretical basis of Lewis dot formulas, diagrams that show the bonding between atoms of a molecule and the lone pairs of electrons that may exist in the molecule.
1916 – To account for the Zeeman effect (1896), i.e. that atomic absorption or emission spectral lines change when the light source is subjected to a magnetic field, Arnold Sommerfeld suggests there might be "elliptical orbits" in atoms in addition to spherical orbits.
1918 – Sir Ernest Rutherford notices that, when alpha particles are shot into nitrogen gas, his scintillation detectors shows the signatures of hydrogen nuclei. Rutherford determines that the only place this hydrogen could have come from was the nitrogen, and therefore nitrogen must contain hydrogen nuclei. He thus suggests that the hydrogen nucleus, which is known to have an atomic number of 1, is an elementary particle, which he decides must be the protons hypothesized by Eugen Goldstein.
1919 – Building on the work of Lewis (1916), Irving Langmuir coins the term "covalence" and postulates that coordinate covalent bonds occur when two electrons of a pair of atoms come from both atoms and are equally shared by them, thus explaining the fundamental nature of chemical bonding and molecular chemistry.
=== 1920–1929 ===
1920 – Hendrik Kramers uses Bohr–Sommerfeld quantization to derive formulas for intensities of spectral transitions of the Stark effect. Kramers also includes the effect of fine structure, including corrections for relativistic kinetic energy and coupling between electron spin and orbit.
1921–1922 – Frederick Soddy receives the Nobel Prize for 1921 in Chemistry one year later, in 1922, "for his contributions to our knowledge of the chemistry of radioactive substances, and his investigations into the origin and nature of isotopes"; he writes in his Nobel Lecture of 1922: "The interpretation of radioactivity which was published in 1903 by Sir Ernest Rutherford and myself ascribed the phenomena to the spontaneous disintegration of the atoms of the radio-element, whereby a part of the original atom was violently ejected as a radiant particle, and the remainder formed a totally new kind of atom with a distinct chemical and physical character."
1922:
Arthur Compton finds that X-ray wavelengths increase due to scattering of the radiant energy by free electrons. The scattered quanta have less energy than the quanta of the original ray. This discovery, known as the Compton effect or Compton scattering, demonstrates the particle concept of electromagnetic radiation.
Otto Stern and Walther Gerlach perform the Stern–Gerlach experiment, which detects discrete values of angular momentum for atoms in the ground state passing through an inhomogeneous magnetic field leading to the discovery of the spin of the electron.
Bohr updates his model of the atom to better explain the properties of the periodic table by assuming that certain numbers of electrons (for example 2, 8 and 18) corresponded to stable "closed shells", presaging orbital theory.
1923:
Pierre Auger discovers the Auger effect, where filling the inner-shell vacancy of an atom is accompanied by the emission of an electron from the same atom.
Louis de Broglie extends wave–particle duality to particles, postulating that electrons in motion are associated with waves. He predicts that the wavelengths are given by the Planck constant h divided by the momentum p = mv of the electron: λ = h / mv = h / p.
Gilbert N. Lewis creates the theory of Lewis acids and bases based on the properties of electrons in molecules, defining an acid as accepting an electron lone pair from a base.
1924 – Satyendra Nath Bose explains Planck's law using a new statistical law that governs bosons, and Einstein generalizes it to predict Bose–Einstein condensate. The theory becomes known as Bose–Einstein statistics.
1924 – Wolfgang Pauli outlines the "Pauli exclusion principle", which states that no two identical fermions may occupy the same quantum state simultaneously, a fact that explains many features of the periodic table.
1925:
George Uhlenbeck and Samuel Goudsmit postulate the existence of electron spin.
Friedrich Hund outlines Hund's rule of Maximum Multiplicity, which states that when electrons are added successively to an atom as many levels or orbits are singly occupied as possible before any pairing of electrons with opposite spin occurs and made the distinction that the inner electrons in molecules remained in atomic orbitals and only the valence electrons needed to be in molecular orbitals involving both nuclei.
Werner Heisenberg published his Umdeutung paper, reinterpreting quantum mechanics using non-commutative algebra.
Heisenberg, Max Born, and Pascual Jordan develop the matrix mechanics formulation of quantum Mechanics.
1926:
Lewis coins the term photon in a letter to the scientific journal Nature, which he derives from the Greek word for light, φως (transliterated phôs).
Oskar Klein and Walter Gordon state their relativistic quantum wave equation, later called the Klein–Gordon equation.
Enrico Fermi discovers the spin–statistics theorem connection.
Paul Dirac introduces Fermi–Dirac statistics.
Erwin Schrödinger uses De Broglie's electron wave postulate (1924) to develop a "wave equation" that represents mathematically the distribution of a charge of an electron distributed through space, being spherically symmetric or prominent in certain directions, i.e. directed valence bonds, which gives the correct values for spectral lines of the hydrogen atom; also introduces the Hamiltonian operator in quantum mechanics.
Max Born postulates the statistical interpretation of the quantum mechanical wave function (Born rule)
Paul Epstein reconsiders the linear and quadratic Stark effect from the point of view of the new quantum theory, using the equations of Schrödinger and others. The derived equations for the line intensities are a decided improvement over previous results obtained by Hans Kramers.
1926 to 1932 – John von Neumann develops the mathematical formulation of quantum mechanics in terms of Hermitian operators on Hilbert spaces, subsequently published in 1932 as Mathematical Foundations of Quantum Mechanics, a basic textbook on the subject.
1927:
Werner Heisenberg formulates the quantum uncertainty principle.
Niels Bohr and Werner Heisenberg develop the Copenhagen interpretation of the probabilistic nature of wavefunctions.
Born and J. Robert Oppenheimer introduce the Born–Oppenheimer approximation, which allows the quick approximation of the energy and wavefunctions of smaller molecules.
Walter Heitler and Fritz London introduce the concepts of valence bond theory and apply it to the hydrogen molecule.
Llewellyn Thomas and Fermi develop the Thomas–Fermi model for a gas in a box.
Chandrasekhara Venkata Raman studies optical photon scattering by electrons.
Dirac states his relativistic electron quantum wave equation, the Dirac equation.
Charles Galton Darwin and Walter Gordon solve the Dirac equation for a Coulomb potential.
Charles Drummond Ellis (along with James Chadwick and colleagues) finally establishes clearly that the beta decay spectrum is in fact continuous and not discrete, posing a problem that will later be solved by theorizing (and later discovering) the existence of the neutrino.
Walter Heitler uses Schrödinger's wave equation to show how two hydrogen atom wavefunctions join, with plus, minus, and exchange terms, to form a covalent bond.
Robert Mulliken works, in coordination with Hund, to develop a molecular orbital theory where electrons are assigned to states that extend over an entire molecule and, in 1932, introduces many new molecular orbital terminologies, such as σ bond, π bond, and δ bond.
Eugene Wigner relates degeneracies of quantum states to irreducible representations of symmetry groups.
Hermann Klaus Hugo Weyl proves in collaboration with his student Fritz Peter a fundamental theorem in harmonic analysis—the Peter–Weyl theorem—relevant to group representations in quantum theory (including the complete reducibility of unitary representations of a compact topological group); introduces the Weyl quantization, and earlier, in 1918, introduces the concept of gauge and a gauge theory; later, in 1935, he introduces and characterizes with Richard Brauer the concept of spinor in n dimensions.
1928:
Linus Pauling outlines the nature of the chemical bond: uses Heitler's quantum mechanical covalent bond model to outline the quantum mechanical basis for all types of molecular structure and bonding and suggests that different types of bonds in molecules can become equalized by rapid shifting of electrons, a process called "resonance" (1931), such that resonance hybrids contain contributions from the different possible electronic configurations.
Friedrich Hund and Robert S. Mulliken introduce the concept of molecular orbitals.
Born and Vladimir Fock formulate and prove the adiabatic theorem, which states that a physical system shall remain in its instantaneous eigenstate if a given perturbation is acting on it slowly enough and if there is a gap between the eigenvalue and the rest of the Hamiltonian's spectrum.
1929:
Oskar Klein discovers the Klein paradox.
Oskar Klein and Yoshio Nishina derive the Klein–Nishina cross section for high energy photon scattering by electrons.
Sir Nevill Mott derives the Mott cross section for the Coulomb scattering of relativistic electrons.
John Lennard-Jones introduces the linear combination of atomic orbitals approximation for the calculation of molecular orbitals.
Fritz Houtermans and Robert d'Escourt Atkinson propose that stars release energy by nuclear fusion.
=== 1930–1939 ===
1930:
Dirac hypothesizes the existence of the positron.
Dirac's textbook The Principles of Quantum Mechanics is published, becoming a standard reference book that is still used today.
Erich Hückel introduces the Hückel molecular orbital method, which expands on orbital theory to determine the energies of orbitals of pi electrons in conjugated hydrocarbon systems.
Fritz London explains van der Waals forces as due to the interacting fluctuating dipole moments between molecules.
Pauli suggests in a famous letter that, in addition to electrons and protons, atoms also contain an extremely light neutral particle that he calls the "neutron". He suggests that this "neutron" is also emitted during beta decay and has simply not yet been observed. Later it is determined that this particle is actually the almost massless neutrino.
1931:
John Lennard-Jones proposes the Lennard-Jones inter-atomic potential.
Walther Bothe and Herbert Becker find that if the very energetic alpha particles emitted from polonium fall on certain light elements, specifically beryllium, boron, or lithium, an unusually penetrating radiation is produced. At first this radiation is thought to be gamma radiation, although it is more penetrating than any gamma rays known, and the details of experimental results are very difficult to interpret on this basis. Some scientists begin to hypothesize the possible existence of another fundamental particle.
Erich Hückel redefines the property of aromaticity in a quantum mechanical context by introducing the 4n+2 rule, or Hückel's rule, which predicts whether an organic planar ring molecule will have aromatic properties.
Ernst Ruska creates the first electron microscope.
Ernest Lawrence creates the first cyclotron and founds the Radiation Laboratory, later the Lawrence Berkeley National Laboratory; in 1939 he is awarded the Nobel Prize in Physics for his work on the cyclotron.
1932:
Irène Joliot-Curie and Frédéric Joliot show that if the unknown radiation generated by alpha particles falls on paraffin or any other hydrogen-containing compound, it ejects protons of very high energy. This is not in itself inconsistent with the proposed gamma ray nature of the new radiation, but detailed quantitative analysis of the data becomes increasingly difficult to reconcile with such a hypothesis.
James Chadwick performs a series of experiments showing that the gamma ray hypothesis for the unknown radiation produced by alpha particles is untenable, and that the new particles must be the neutrons hypothesized by Rutherford.
Werner Heisenberg applies perturbation theory to the two-electron problem to show how resonance arising from electron exchange can explain exchange forces.
Mark Oliphant, building upon the nuclear transmutation experiments of Ernest Rutherford done a few years earlier, observes fusion of light nuclei (hydrogen isotopes). The steps of the main cycle of nuclear fusion in stars are subsequently worked out by Hans Bethe over the next decade.
Carl D. Anderson experimentally proves the existence of the positron.
1933 – Following Chadwick's experiments, Fermi renames Pauli's "neutron" the neutrino, to distinguish it from Chadwick's much more massive neutron.
1933 – Leó Szilárd first theorizes the concept of a nuclear chain reaction. He files a patent for his idea of a simple nuclear reactor the following year.
1934:
Fermi publishes a very successful model of beta decay in which neutrinos are produced.
Fermi studies the effects of bombarding uranium isotopes with neutrons.
N. N. Semyonov develops the quantitative theory of chemical chain reactions, later the basis of various technologies that use the combustion of gas mixtures; the idea is also applied to the description of nuclear chain reactions.
Irène Joliot-Curie and Frédéric Joliot-Curie discover artificial radioactivity and are jointly awarded the 1935 Nobel Prize in Chemistry.
1935:
Einstein, Boris Podolsky, and Nathan Rosen describe the EPR paradox, which challenges the completeness of quantum mechanics as it was theorized up to that time. Assuming that local realism is valid, they demonstrated that there would need to be hidden parameters to explain how measuring the quantum state of one particle could influence the quantum state of another particle without apparent contact between them.
Schrödinger develops the Schrödinger's cat thought experiment. It illustrates what he saw as the problems of the Copenhagen interpretation of quantum mechanics if subatomic particles can be in two contradictory quantum states at once.
Hideki Yukawa predicts the existence of the pion, stating that the short-range nuclear potential arises from the exchange of a massive scalar field, the field of the pion. Prior to Yukawa's paper, it was believed that the scalar fields of the fundamental forces necessitated massless particles.
1936 – Alexandru Proca publishes prior to Hideki Yukawa his relativistic quantum field equations for a massive vector meson of spin-1 as a basis for nuclear forces.
1936 – Garrett Birkhoff and John von Neumann introduce Quantum Logic in an attempt to reconcile the apparent inconsistency of classical, Boolean logic with the Heisenberg Uncertainty Principle of quantum mechanics as applied, for example, to the measurement of complementary (noncommuting) observables in quantum mechanics, such as position and momentum; current approaches to quantum logic involve noncommutative and non-associative many-valued logic.
1936 – Carl D. Anderson discovers muons while he is studying cosmic radiation.
1937 – Hermann Arthur Jahn and Edward Teller prove, using group theory, that non-linear degenerate molecules are unstable. The Jahn–Teller theorem essentially states that any non-linear molecule with a degenerate electronic ground state will undergo a geometrical distortion that removes that degeneracy, because the distortion lowers the overall energy of the complex. The latter process is called the Jahn–Teller effect; this effect was recently considered also in relation to the superconductivity mechanism in YBCO and other high temperature superconductors. The details of the Jahn–Teller effect are presented with several examples and EPR data in the basic textbook by Abragam and Bleaney (1970).
1938 – Charles Coulson makes the first accurate calculation of a molecular orbital wavefunction with the hydrogen molecule.
1938 – Otto Hahn and his assistant Fritz Strassmann send a manuscript to Naturwissenschaften reporting they have detected the element barium after bombarding uranium with neutrons. Hahn calls this new phenomenon a 'bursting' of the uranium nucleus. Simultaneously, Hahn communicates these results to Lise Meitner. Meitner, and her nephew Otto Robert Frisch, correctly interpret these results as being a nuclear fission. Frisch confirms this experimentally on 13 January 1939.
1939 – Leó Szilárd and Fermi discover neutron multiplication in uranium, proving that a chain reaction is indeed possible.
=== 1940–1949 ===
1942 – A team led by Enrico Fermi creates the first artificial self-sustaining nuclear chain reaction, called Chicago Pile-1, in a racquets court below the bleachers of Stagg Field at the University of Chicago on December 2, 1942.
1942 to 1946 – J. Robert Oppenheimer successfully leads the Manhattan Project, having earlier predicted quantum tunneling and proposed the Oppenheimer–Phillips process in nuclear fusion.
1945 – the Manhattan Project produces the first nuclear fission explosion on July 16, 1945, in the Trinity test in New Mexico.
1945 – John Archibald Wheeler and Richard Feynman originate Wheeler–Feynman absorber theory, an interpretation of electrodynamics that supposes that elementary particles are not self-interacting.
1946 – Theodor V. Ionescu and Vasile Mihu report the construction of the first hydrogen maser by stimulated emission of radiation in molecular hydrogen.
1947 – Willis Lamb and Robert Retherford measure a small difference in energy between the energy levels 2S1/2 and 2P1/2 of the hydrogen atom, known as the Lamb shift.
1947 – George Rochester and Clifford Charles Butler publish two cloud chamber photographs of cosmic ray-induced events, one showing what appears to be a neutral particle decaying into two charged pions, and one that appears to be a charged particle decaying into a charged pion and something neutral. The estimated mass of the new particles is very rough, about half a proton's mass. More examples of these "V-particles" were slow in coming, and they are soon given the name kaons.
1948 – Sin-Itiro Tomonaga and Julian Schwinger independently introduce perturbative renormalization as a method of correcting the original Lagrangian of a quantum field theory so as to eliminate a series of infinite terms that would otherwise result.
1948 – Richard Feynman states the path integral formulation of quantum mechanics.
1949 – Freeman Dyson determines the equivalence of two formulations of quantum electrodynamics: Feynman's diagrammatic path integral formulation and the operator method developed by Julian Schwinger and Tomonaga. A by-product of that demonstration is the invention of the Dyson series.
=== 1950–1959 ===
1951:
Clemens C. J. Roothaan and George G. Hall derive the Roothaan–Hall equations, putting rigorous molecular orbital methods on a firm basis.
Edward Teller, physicist and "father of the hydrogen bomb", and Stanislaw Ulam, mathematician, are reported to have written jointly in March 1951 a classified report on "Hydrodynamic Lenses and Radiation Mirrors" that results in the next step in the Manhattan Project.
1951 and 1952 – at the Manhattan Project, the first planned fusion thermonuclear reaction experiment is carried out successfully in the spring of 1951 at Eniwetok, based on the work of Edward Teller and Hans A. Bethe. The Los Alamos Laboratory proposes a date in November 1952 for a full-scale hydrogen bomb test, which is apparently carried out.
Felix Bloch and Edward Mills Purcell receive a shared Nobel Prize in Physics for their first observations of the quantum phenomenon of nuclear magnetic resonance previously reported in 1949. Purcell reports his contribution as Research in Nuclear Magnetism, and gives credit to his coworkers such as Herbert S. Gutowsky for their NMR contributions, as well as theoretical researchers of nuclear magnetism such as John Hasbrouck Van Vleck.
1952 – Albert W. Overhauser formulates a theory of dynamic nuclear polarization, also known as the Overhauser Effect; other contenders are the subsequent theory of Ionel Solomon reported in 1955 that includes the Solomon equations for the dynamics of coupled spins, and that of R. Kaiser in 1963. The general Overhauser effect is first demonstrated experimentally by T. R. Carver and Charles P. Slichter in 1953.
1952 – Donald A. Glaser creates the bubble chamber, which allows detection of electrically charged particles by surrounding them by a bubble. Properties of the particles such as momentum can be determined by studying their helical paths. Glaser receives a Nobel prize in 1960 for his invention.
1953 – Charles H. Townes, collaborating with James P. Gordon, and Herbert J. Zeiger, builds the first ammonia maser; receives a Nobel prize in 1964 for his experimental success in producing coherent radiation by atoms and molecules.
1954 – Chen Ning Yang and Robert Mills derive a gauge theory for nonabelian groups, leading to the successful formulation of both electroweak unification and quantum chromodynamics.
1955 – Ionel Solomon develops the first nuclear magnetic resonance theory of magnetic dipole coupled nuclear spins and of the Nuclear Overhauser effect.
1956 – P. Kuroda predicts that self-sustaining nuclear chain reactions should occur in natural uranium deposits.
1956 – Chien-Shiung Wu carries out the Wu Experiment, which observes parity violation in cobalt-60 decay, showing that parity violation is present in the weak interaction.
1956 – Clyde L. Cowan and Frederick Reines experimentally prove the existence of the neutrino.
1957 – John Bardeen, Leon Cooper and John Robert Schrieffer propose their quantum BCS theory of low temperature superconductivity, for which they receive a Nobel prize in 1972. The theory represents superconductivity as a macroscopic quantum coherence phenomenon involving phonon coupled electron pairs with opposite spin
1957 – William Alfred Fowler, Margaret Burbidge, Geoffrey Burbidge, and Fred Hoyle, in their 1957 paper Synthesis of the Elements in Stars, show that the abundances of essentially all but the lightest chemical elements can be explained by the process of nucleosynthesis in stars.
1957 – Hugh Everett formulates the many-worlds interpretation of quantum mechanics, which states that every possible quantum outcome is realized in divergent, non-communicating parallel universes in quantum superposition.
1958–1959 – Magic angle spinning is described by Edward Raymond Andrew, A. Bradbury, and R. G. Eades, and independently in 1959 by I. J. Lowe.
=== 1960–1969 ===
1961 – Claus Jönsson performs Young's double-slit experiment (1909) for the first time with particles other than photons by using electrons and with similar results, confirming that massive particles also behaved according to the wave–particle duality that is a fundamental principle of quantum field theory.
1961 – Anatole Abragam publishes the fundamental textbook on the quantum theory of Nuclear Magnetic Resonance entitled The Principles of Nuclear Magnetism;
1961 – Sheldon Glashow extends the electroweak interaction models developed by Julian Schwinger by including a short-range neutral current, the Z⁰. The resulting symmetry structure that Glashow proposes, SU(2) × U(1), forms the basis of the accepted theory of the electroweak interactions.
1962 – Leon M. Lederman, Melvin Schwartz and Jack Steinberger show that more than one type of neutrino exists by detecting interactions of the muon neutrino (already hypothesised with the name "neutretto").
1962 – Jeffrey Goldstone, Yoichiro Nambu, Abdus Salam, and Steven Weinberg develop what is now known as Goldstone's Theorem: if there is a continuous symmetry transformation under which the Lagrangian is invariant, then either the vacuum state is also invariant under the transformation, or there must be spinless particles of zero mass, thereafter called Nambu–Goldstone bosons.
1962 to 1973 – Brian David Josephson correctly predicts the quantum tunneling effect involving superconducting currents while he is a PhD student under the supervision of Professor Brian Pippard at the Royal Society Mond Laboratory in Cambridge, UK; subsequently, in 1964, he applies his theory to coupled superconductors. The effect is later demonstrated experimentally at Bell Labs in the USA. For his important quantum discovery he is awarded the Nobel Prize in Physics in 1973.
1963 – Eugene P. Wigner lays the foundation for the theory of symmetries in quantum mechanics as well as for basic research into the structure of the atomic nucleus; makes important "contributions to the theory of the atomic nucleus and the elementary particles, particularly through the discovery and application of fundamental symmetry principles"; he shares half of his Nobel prize in Physics with Maria Goeppert-Mayer and J. Hans D. Jensen.
1963 – Maria Goeppert Mayer and J. Hans D. Jensen share with Eugene P. Wigner half of the Nobel Prize in Physics in 1963 "for their discoveries concerning nuclear shell structure theory".
1964 – John Stewart Bell puts forth Bell's theorem, which uses testable inequality relations to expose the flaws in the earlier Einstein–Podolsky–Rosen paradox and to prove that no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. This inaugurates the study of quantum entanglement, the phenomenon in which separate particles share the same quantum state despite being at a distance from each other.
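The testable inequality Bell's work led to is most often quoted in its CHSH form; the sketch below (a hand-rolled numerical check, not drawn from Bell's paper) assumes the standard quantum prediction for the singlet state, a correlation E(a, b) = −cos(a − b) between detector angles, and shows that quantum mechanics reaches |S| = 2√2, beyond the local-hidden-variable bound of 2.

```python
import numpy as np

# Numerical check of the CHSH form of Bell's theorem. Assumes the standard
# singlet-state prediction E(a, b) = -cos(a - b) for the correlation between
# measurements at detector angles a and b. Local hidden-variable theories
# obey |S| <= 2 for the combination of correlations below.

def E(a, b):
    """Singlet-state correlation for measurements at angles a and b (radians)."""
    return -np.cos(a - b)

# Conventional optimal CHSH measurement angles.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, violating the local-realist bound of 2
```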
1964 – Nikolai G. Basov and Aleksandr M. Prokhorov share the Nobel Prize in Physics in 1964 for, respectively, semiconductor lasers and quantum electronics; they also share the prize with Charles Hard Townes, the inventor of the ammonia maser.
1969 to 1977 – Sir Nevill Mott and Philip Warren Anderson publish quantum theories for electrons in non-crystalline solids, such as glasses and amorphous semiconductors; receive in 1977 a Nobel prize in Physics for their investigations into the electronic structure of magnetic and disordered systems, which allow for the development of electronic switching and memory devices in computers. The prize is shared with John Hasbrouck Van Vleck for his contributions to the understanding of the behavior of electrons in magnetic solids; he established the fundamentals of the quantum mechanical theory of magnetism and the crystal field theory (chemical bonding in metal complexes) and is regarded as the Father of modern Magnetism.
1969 and 1970 – Theodor V. Ionescu, Radu Pârvan and I.C. Baianu observe and report quantum amplified stimulation of electromagnetic radiation in hot deuterium plasmas in a longitudinal magnetic field; publish a quantum theory of the amplified coherent emission of radiowaves and microwaves by focused electron beams coupled to ions in hot plasmas.
=== 1971–1979 ===
1971 – Martinus J. G. Veltman and Gerardus 't Hooft show that, if the symmetries of Yang–Mills theory are broken according to the method suggested by Peter Higgs, then Yang–Mills theory can be renormalized. The renormalization of Yang–Mills Theory predicts the existence of a massless particle, called the gluon, which could explain the nuclear strong force. It also explains how the particles of the weak interaction, the W and Z bosons, obtain their mass via spontaneous symmetry breaking and the Yukawa interaction.
1972 – Francis Perrin discovers "natural nuclear fission reactors" in uranium deposits in Oklo, Gabon, where analysis of isotope ratios demonstrates that self-sustaining nuclear chain reactions have occurred. The conditions under which a natural nuclear reactor could exist were predicted in 1956 by P. Kuroda.
1973 – Peter Mansfield formulates the physical theory of nuclear magnetic resonance imaging (NMRI) aka magnetic resonance imaging (MRI).
1974 – Pier Giorgio Merli performs Young's double-slit experiment (1909) using a single electron with similar results, confirming the existence of quantum fields for massive particles.
1977 – Ilya Prigogine develops non-equilibrium, irreversible thermodynamics and quantum operator theory, especially the time superoperator theory; he is awarded the Nobel Prize in Chemistry in 1977 "for his contributions to non-equilibrium thermodynamics, particularly the theory of dissipative structures".
1978 – Pyotr Kapitsa observes new phenomena in hot deuterium plasmas excited by very high power microwaves in attempts to obtain controlled thermonuclear fusion reactions in such plasmas placed in longitudinal magnetic fields, using a novel and low-cost design of thermonuclear reactor, similar in concept to that reported by Theodor V. Ionescu et al. in 1969. Receives a Nobel prize for early low temperature physics experiments on helium superfluidity carried out in 1937 at the Cavendish Laboratory in Cambridge, UK, and discusses his 1977 thermonuclear reactor results in his Nobel lecture on December 8, 1978.
1979 – Kenneth A. Rubinson and coworkers, at the Cavendish Laboratory, observe ferromagnetic spin wave resonant excitations in metallic glasses and interpret the observations in terms of two-magnon dispersion and a spin exchange Hamiltonian, similar in form to that of a Heisenberg ferromagnet.
=== 1980–1999 ===
1980 to 1982 – Alain Aspect verifies experimentally the quantum entanglement hypothesis; his Bell test experiments provide strong evidence that a quantum event at one location can affect an event at another location without any obvious mechanism for communication between the two locations. This result corroborates the 1972 experimental verification of quantum entanglement by John F. Clauser and Stuart Freedman. Aspect later shared the 2022 Nobel Prize in Physics with Clauser and Anton Zeilinger "for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science".
1982 to 1997 – Tokamak Fusion Test Reactor (TFTR) at PPPL, Princeton, USA: operated since 1982, it produces 10.7 MW of controlled fusion power for only 0.21 s in 1994 by using T–D nuclear fusion in a tokamak reactor with "a toroidal 6 T magnetic field for plasma confinement, a 3 MA plasma current and an electron density of 1.0×10²⁰ m⁻³ of 13.5 keV".
1983 – Carlo Rubbia and Simon van der Meer, at the Super Proton Synchrotron, see unambiguous signals of W particles in January. The actual experiments are called UA1 (led by Rubbia) and UA2 (led by Peter Jenni), and are the collaborative effort of many people. Simon van der Meer is the driving force on the use of the accelerator. UA1 and UA2 find the Z particle a few months later, in May 1983.
1983 to 2011 – The largest and most powerful experimental nuclear fusion tokamak reactor in the world, the Joint European Torus (JET), begins operation at the Culham Facility in the UK; it operates with T–D plasma pulses and has a reported gain factor Q of 0.7 in 2009, with an input of 40 MW for plasma heating and a 2800-ton iron magnet for confinement; in 1997, in a tritium–deuterium experiment, JET produces 16 MW of fusion power, a total of 22 MJ of fusion energy, and a steady fusion power of 4 MW maintained for 4 seconds.
1985 to 2010 – The JT-60 (Japan Torus) begins operation in 1985 with an experimental D–D nuclear fusion tokamak similar to the JET; in 2010 JT-60 holds the record for the highest value of the fusion triple product achieved: 1.77×10²⁸ K·s·m⁻³ = 1.53×10²¹ keV·s·m⁻³. JT-60 claims it would have an equivalent energy gain factor, Q, of 1.25 if it were operated with a T–D plasma instead of the D–D plasma, and on May 9, 2006, attains a fusion hold time of 28.6 s in full operation; moreover, a high-power microwave gyrotron construction is completed that is capable of 1.5 MW output for 1 s, thus meeting the conditions for the planned ITER large-scale nuclear fusion reactor. JT-60 is disassembled in 2010 to be upgraded to a more powerful nuclear fusion reactor—the JT-60SA—by using niobium–titanium superconducting coils for the magnet confining the ultra-hot D–D plasma.
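The two triple-product figures quoted above are the same quantity with the temperature expressed in kelvin versus keV; a quick consistency check (using the CODATA value of the Boltzmann constant, which is not part of the JT-60 report) reproduces the second figure from the first:

```python
# Consistency check of the two quoted JT-60 triple-product figures.
# Conversion from K·s·m^-3 to keV·s·m^-3 uses the Boltzmann constant,
# k_B ~ 8.617e-8 keV/K (CODATA value, an external assumption here).
K_B_KEV_PER_K = 8.617333e-8

triple_product_K = 1.77e28                # K·s·m^-3, as quoted
triple_product_keV = triple_product_K * K_B_KEV_PER_K
print(triple_product_keV)                 # ~1.53e21 keV·s·m^-3, matching the text
```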
1986 – Johannes Georg Bednorz and Karl Alexander Müller produce unambiguous experimental proof of high temperature superconductivity involving Jahn–Teller polarons in orthorhombic La2CuO4, YBCO and other perovskite-type oxides; promptly receive a Nobel prize in 1987 and deliver their Nobel lecture on December 8, 1987.
1986 – Vladimir Gershonovich Drinfeld introduces the concept of quantum groups as Hopf algebras in his seminal address on quantum theory at the International Congress of Mathematicians, and also connects them to the study of the Yang–Baxter equation, which is a necessary condition for the solvability of statistical mechanics models; he also generalizes Hopf algebras to quasi-Hopf algebras, and introduces the study of Drinfeld twists, which can be used to factorize the R-matrix corresponding to the solution of the Yang–Baxter equation associated with a quasitriangular Hopf algebra.
1988 to 1998 – Mihai Gavrilă discovers in 1988 the new quantum phenomenon of atomic dichotomy in hydrogen and subsequently publishes a book on the structure and decay of hydrogen atoms placed in ultra-intense, high-frequency laser fields.
1991 – Richard R. Ernst develops two-dimensional nuclear magnetic resonance spectroscopy (2D-FT NMRS) for small molecules in solution and is awarded the Nobel Prize in Chemistry in 1991 "for his contributions to the development of the methodology of high resolution nuclear magnetic resonance (NMR) spectroscopy".
1995 – Eric Cornell, Carl Wieman and co-workers at JILA create the first "pure" Bose–Einstein condensate. They do this by cooling a dilute vapor consisting of approximately two thousand rubidium-87 atoms to below 170 nK using a combination of laser cooling and magnetic evaporative cooling. About four months later, an independent effort led by Wolfgang Ketterle at MIT creates a condensate made of sodium-23. Ketterle's condensate has about a hundred times more atoms, allowing him to obtain several important results such as the observation of quantum mechanical interference between two different condensates.
1997 – Peter Shor publishes Shor's algorithm, a quantum computing algorithm for finding the prime factors of integers. The algorithm is one of the few known quantum algorithms with immediate potential applications, as it likely offers a superpolynomial speedup over the best known classical algorithms.
1999 to 2013 – NSTX—The National Spherical Torus Experiment at PPPL, Princeton, USA launches a nuclear fusion project on February 12, 1999, for "an innovative magnetic fusion device that was constructed by the Princeton Plasma Physics Laboratory (PPPL) in collaboration with the Oak Ridge National Laboratory, Columbia University, and the University of Washington at Seattle"; NSTX is being used to study the physics principles of spherically shaped plasmas.
== 21st century ==
2001 – Researchers at IBM physically implement Shor's algorithm with an NMR setup, factoring 15 into 3 times 5 using seven qubits.
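The number-theoretic core of that factorization can be illustrated classically; in the sketch below (base a = 7 is a common textbook choice, not taken from the IBM experiment) the period-finding step that a quantum computer would perform is done by brute force.

```python
from math import gcd

# Classical illustration of the core of Shor's algorithm for N = 15.
# Only the period-finding step is quantum on real hardware; here it is
# done by brute-force trial of exponents.
N, a = 15, 7

# Find the period r: the smallest r > 0 with a**r = 1 (mod N).
r = 1
while pow(a, r, N) != 1:
    r += 1

# For even r, gcd(a**(r//2) +/- 1, N) yields nontrivial factors of N.
half = pow(a, r // 2)
p, q = gcd(half - 1, N), gcd(half + 1, N)
print(r, p, q)  # period 4, factors 3 and 5
```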
2002 – Leonid I. Vainerman organizes a meeting at Strasbourg of theoretical physicists and mathematicians focused on quantum group and quantum groupoid applications in quantum theories; the proceedings of the meeting are published in 2003 in a book edited by the meeting organizer.
2009 – Aaron D. O'Connell invents the first quantum machine, applying quantum mechanics to a macroscopic object just large enough to be seen by the naked eye, which is able to vibrate by a small amount and a large amount simultaneously.
2011 – Zachary Dutton demonstrates how photons can co-exist in superconductors ("Direct Observation of Coherent Population Trapping in a Superconducting Artificial Atom").
2012 – The existence of the Higgs boson is confirmed by the ATLAS and CMS collaborations, based on proton–proton collisions in the Large Hadron Collider at CERN. Peter Higgs and François Englert are awarded the 2013 Nobel Prize in Physics for their theoretical predictions.
2015 – The first loophole-free Bell tests are performed by three independent teams, led by Ronald Hanson and Bas Hensen at TU Delft, by Sae Woo Nam and Krister Shalm at NIST, and by Anton Zeilinger and Marissa Giustina at the University of Vienna, confirming precisely the predictions of quantum mechanics and ruling out any local-realistic description of nature. These experiments are the culmination of a series of experiments started by John Clauser in the 1970s and significantly advanced by Alain Aspect in the 1980s among others. Clauser, Aspect, and Zeilinger share the Nobel Prize in Physics 2022 for their results.
== Bibliography ==
Peacock, Kent A. (2008). The Quantum Revolution: A Historical Perspective. Westport, Conn.: Greenwood Press. ISBN 9780313334481.
Ben-Menahem, A. (2009). "Historical timeline of quantum mechanics 1925–1989". Historical Encyclopedia of Natural and Mathematical Sciences (1st ed.). Berlin: Springer. pp. 4342–4349. ISBN 9783540688310.
== External links ==
Learning materials related to the history of Quantum Mechanics at Wikiversity
Coherence expresses the potential for two waves to interfere. Two monochromatic beams from a single source always interfere. Wave sources are not strictly monochromatic: they may be partly coherent.
When interfering, two waves add together to create a wave of greater amplitude than either one (constructive interference) or subtract from each other to create a wave of smaller amplitude, which may be zero (destructive interference), depending on their relative phase. Constructive and destructive interference are limiting cases; two waves always interfere, even if the result of the addition is complicated or unremarkable.
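The dependence of the summed amplitude on relative phase can be sketched numerically; the frequency and sampling grid below are arbitrary illustration choices.

```python
import numpy as np

# Two equal-amplitude monochromatic waves with relative phase delta.
# Their sum has peak amplitude 2*A*|cos(delta/2)|: 2A for delta = 0
# (constructive) down to 0 for delta = pi (destructive).
A = 1.0            # common amplitude of both waves
f = 5.0            # wave frequency (arbitrary units)
t = np.linspace(0.0, 1.0, 1000, endpoint=False)

def superposed_amplitude(delta):
    """Peak amplitude of the sum of two equal waves with relative phase delta."""
    wave1 = A * np.cos(2 * np.pi * f * t)
    wave2 = A * np.cos(2 * np.pi * f * t + delta)
    return np.max(np.abs(wave1 + wave2))

print(superposed_amplitude(0.0))        # constructive: amplitude doubles to 2A
print(superposed_amplitude(np.pi))      # destructive: complete cancellation
print(superposed_amplitude(np.pi / 2))  # intermediate: ~sqrt(2) * A
```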
Two waves with constant relative phase will be coherent. The amount of coherence can readily be measured by the interference visibility, which looks at the size of the interference fringes relative to the input waves (as the phase offset is varied); a precise mathematical definition of the degree of coherence is given by means of correlation functions. More broadly, coherence describes the statistical similarity of a field, such as an electromagnetic field or quantum wave packet, at different points in space or time.
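The visibility measure can be illustrated with the standard two-beam partial-coherence law I(φ) = I₁ + I₂ + 2√(I₁I₂)|γ|cos φ (assumed here; |γ| is the degree of coherence). For equal beam intensities the fringe visibility equals |γ| directly:

```python
import numpy as np

# Fringe visibility V = (I_max - I_min) / (I_max + I_min) as a coherence
# measure, assuming the standard two-beam interference law
# I(phi) = I1 + I2 + 2*sqrt(I1*I2)*|gamma|*cos(phi).

def fringe_visibility(I1, I2, gamma):
    """Visibility of the fringes swept out as the phase offset phi is varied."""
    phi = np.linspace(0.0, 2.0 * np.pi, 1000)
    I = I1 + I2 + 2.0 * np.sqrt(I1 * I2) * gamma * np.cos(phi)
    return (I.max() - I.min()) / (I.max() + I.min())

print(fringe_visibility(1.0, 1.0, 1.0))  # fully coherent -> visibility ~1
print(fringe_visibility(1.0, 1.0, 0.5))  # partially coherent -> ~0.5
print(fringe_visibility(1.0, 1.0, 0.0))  # incoherent -> 0 (no fringes)
```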
== Qualitative concept ==
Coherence controls the visibility or contrast of interference patterns. For example, the visibility of the double-slit experiment pattern requires that both slits be illuminated by a coherent wave, as illustrated in the figure. Large sources without collimation, or sources that mix many different frequencies, will have lower visibility.
Coherence contains several distinct concepts. Spatial coherence describes the correlation (or predictable relationship) between waves at different points in space, either lateral or longitudinal. Temporal coherence describes the correlation between waves observed at different moments in time. Both are observed in the Michelson–Morley experiment and Young's interference experiment. Once the fringes are obtained in the Michelson interferometer, when one of the mirrors is moved away gradually from the beam-splitter, the time for the beam to travel increases and the fringes become dull and finally disappear, showing temporal coherence. Similarly, in a double-slit experiment, if the space between the two slits is increased, the coherence dies gradually and finally the fringes disappear, showing spatial coherence. In both cases, the fringe amplitude slowly disappears, as the path difference increases past the coherence length.
Coherence was originally conceived in connection with Thomas Young's double-slit experiment in optics but is now used in any field that involves waves, such as acoustics, electrical engineering, neuroscience, and quantum mechanics. The property of coherence is the basis for commercial applications such as holography, the Sagnac gyroscope, radio antenna arrays, optical coherence tomography and telescope interferometers (Astronomical optical interferometers and radio telescopes).
== Mathematical definition ==
The coherence function between two signals x(t) and y(t) is defined as

  γ_xy²(f) = |S_xy(f)|² / (S_xx(f) S_yy(f))
where S_xy(f) is the cross-spectral density of the signals and S_xx(f) and S_yy(f) are the power spectral density functions of x(t) and y(t), respectively. The cross-spectral density and the power spectral density are defined as the Fourier transforms of the cross-correlation and the autocorrelation signals, respectively. For instance, if the signals are functions of time, the cross-correlation is a measure of the similarity of the two signals as a function of the time lag relative to each other, and the autocorrelation is a measure of the similarity of each signal with itself at different instants of time. In this case the coherence is a function of frequency. Analogously, if x(t) and y(t) are functions of space, the cross-correlation measures the similarity of the two signals at different points in space and the autocorrelation the similarity of the signal relative to itself for a certain separation distance. In that case, coherence is a function of wavenumber (spatial frequency).
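In practice the coherence function is estimated from sampled data by averaging cross- and auto-spectra over segments (Welch's method). The following is a minimal sketch of such an estimator; the function name, test-signal values, and segment length are illustrative choices, not from the text:

```python
import numpy as np

def coherence_welch(x, y, fs, nperseg=1024):
    """Welch-averaged magnitude-squared coherence gamma^2_xy(f)."""
    nseg = len(x) // nperseg
    win = np.hanning(nperseg)
    Sxx = np.zeros(nperseg // 2 + 1)
    Syy = np.zeros(nperseg // 2 + 1)
    Sxy = np.zeros(nperseg // 2 + 1, dtype=complex)
    for k in range(nseg):
        xs = x[k * nperseg:(k + 1) * nperseg] * win
        ys = y[k * nperseg:(k + 1) * nperseg] * win
        X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
        Sxx += np.abs(X) ** 2        # segment-averaged spectra; common scale
        Syy += np.abs(Y) ** 2        # factors cancel in the ratio below
        Sxy += X * np.conj(Y)
    f = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return f, np.abs(Sxy) ** 2 / (Sxx * Syy)

rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
y = 0.5 * x + 0.1 * rng.standard_normal(t.size)   # linearly related plus noise

f, gamma2 = coherence_welch(x, y, fs)
print(gamma2[np.argmin(np.abs(f - 50.0))])  # close to 1 at the shared 50 Hz tone
```

Averaging over several segments is essential: with a single segment the estimator returns exactly 1 at every frequency regardless of the signals.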
The coherence varies in the interval [0, 1]. If γ_xy²(f) = 1 the signals are perfectly correlated (linearly related), and if γ_xy²(f) = 0 they are totally uncorrelated. If a noise-free linear system is being measured, with x(t) the input and y(t) the output, the coherence function is unity over the whole spectrum. However, if non-linearities are present in the system, the coherence falls within the limits given above.
== Coherence and correlation ==
The coherence of two waves expresses how well correlated the waves are as quantified by the cross-correlation function. Cross-correlation quantifies the ability to predict the phase of the second wave by knowing the phase of the first. As an example, consider two waves perfectly correlated for all times (by using a monochromatic light source). At any time, the phase difference between the two waves will be constant. If, when they are combined, they exhibit perfect constructive interference, perfect destructive interference, or something in-between but with constant phase difference, then it follows that they are perfectly coherent. As will be discussed below, the second wave need not be a separate entity. It could be the first wave at a different time or position. In this case, the measure of correlation is the autocorrelation function (sometimes called self-coherence). The degree of correlation is quantified by correlation functions.
== Examples of wave-like states ==
These states are unified by the fact that their behavior is described by a wave equation or some generalization thereof.
Waves in a rope (up and down) or slinky (compression and expansion)
Surface waves in a liquid
Electromagnetic signals (fields) in transmission lines
Sound
Radio waves and microwaves
Light waves (optics)
Matter waves associated with, for example, electrons and atoms
In systems with macroscopic waves, one can measure the wave directly, so its correlation with another wave can simply be calculated. However, in optics one cannot measure the electric field directly, as it oscillates much faster than any detector's time resolution. Instead, one measures the intensity of the light. Most of the concepts involving coherence introduced below were developed in the field of optics and then adopted in other fields. Therefore, many of the standard measurements of coherence are indirect measurements, even in fields where the wave can be measured directly.
== Temporal coherence ==
Temporal coherence is the measure of the average correlation between the value of a wave and itself delayed by τ, at any pair of times. Temporal coherence tells us how monochromatic a source is. In other words, it characterizes how well a wave can interfere with itself at a different time. The delay over which the phase or amplitude wanders by a significant amount (and hence the correlation decreases by a significant amount) is defined as the coherence time τ_c. At a delay of τ = 0 the degree of coherence is perfect, whereas it drops significantly as the delay passes τ = τ_c. The coherence length L_c is defined as the distance the wave travels in the time τ_c.
The coherence time is not the time duration of the signal; the coherence length differs from the coherence area (see below).
=== The relationship between coherence time and bandwidth ===
The larger the bandwidth (the range of frequencies Δf a wave contains), the faster the wave decorrelates, and hence the smaller τ_c is:

  τ_c Δf ≳ 1.
Formally, this follows from the convolution theorem in mathematics, which relates the Fourier transform of the power spectrum (the intensity of each frequency) to its autocorrelation.
Narrow-bandwidth lasers have long coherence lengths (up to hundreds of meters). For example, a stabilized, monomode helium–neon laser can easily produce light with coherence lengths of 300 m. Not all lasers are highly monochromatic, however (e.g. a mode-locked Ti:sapphire laser has Δλ ≈ 2 nm – 70 nm).
LEDs are characterized by Δλ ≈ 50 nm, and tungsten filament lights exhibit Δλ ≈ 600 nm, so these sources have shorter coherence times than the most monochromatic lasers.
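These figures can be checked with a back-of-the-envelope estimate. From τ_c Δf ≳ 1, the coherence length is L_c = c τ_c ≈ c/Δf, and for a spectral width Δλ at center wavelength λ, Δf = c Δλ/λ², so L_c ≈ λ²/Δλ. The center wavelengths and the 1 MHz laser linewidth below are assumed for illustration, not taken from the text:

```python
c = 3.0e8  # speed of light, m/s

# tau_c * df ~ 1  =>  L_c = c * tau_c ~ c / df.
# For a spectral width dlam at center wavelength lam, df = c * dlam / lam**2,
# so L_c ~ lam**2 / dlam.
def coherence_length(lam, dlam):
    return lam ** 2 / dlam

print(coherence_length(550e-9, 50e-9))   # LED (assumed 550 nm center): ~6e-6 m
print(coherence_length(550e-9, 600e-9))  # tungsten filament: ~5e-7 m
print(c / 1e6)                           # laser with assumed 1 MHz linewidth: 300 m
```

The micrometre-scale coherence lengths of LEDs and filament lamps, against hundreds of meters for a stabilized laser, illustrate the enormous range the τ_c Δf ≳ 1 relation spans.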
=== Examples of temporal coherence ===
Examples of temporal coherence include:
A wave containing only a single frequency (monochromatic) is perfectly correlated with itself at all time delays, in accordance with the above relation. (See Figure 1)
Conversely, a wave whose phase drifts quickly will have a short coherence time. (See Figure 2)
Similarly, pulses (wave packets) of waves, which naturally have a broad range of frequencies, also have a short coherence time since the amplitude of the wave changes quickly. (See Figure 3)
Finally, white light, which has a very broad range of frequencies, is a wave which varies quickly in both amplitude and phase. Since it consequently has a very short coherence time (just 10 periods or so), it is often called incoherent.
Holography requires light with a long coherence time. In contrast, optical coherence tomography, in its classical version, uses light with a short coherence time.
=== Measurement of temporal coherence ===
In optics, temporal coherence is measured in an interferometer such as the Michelson interferometer or the Mach–Zehnder interferometer. In these devices, a wave is combined with a copy of itself delayed by a time τ. A detector measures the time-averaged intensity of the light exiting the interferometer. The resulting visibility of the interference pattern (e.g. see Figure 4) gives the temporal coherence at delay τ. Since for most natural light sources the coherence time is much shorter than the time resolution of any detector, the detector itself does the time averaging. Consider the example shown in Figure 3. At a fixed delay, here 2τ_c, an infinitely fast detector would measure an intensity that fluctuates significantly over a time t equal to τ. In this case, to find the temporal coherence at 2τ_c, one would manually time-average the intensity.
== Spatial coherence ==
In some systems, such as water waves or optics, wave-like states can extend over one or two dimensions. Spatial coherence describes the ability of two spatial points x1 and x2 in the extent of a wave to interfere when averaged over time. More precisely, the spatial coherence is the cross-correlation between two points in a wave for all times. If a wave has only a single value of amplitude over an infinite length, it is perfectly spatially coherent. The range of separation between the two points over which there is significant interference defines the diameter of the coherence area, A_c. (Coherence length l_c, often a feature of a source, is usually an industrial term related to the coherence time of the source, not the coherence area in the medium.)
A_c is the relevant type of coherence for the Young's double-slit interferometer. It is also used in optical imaging systems and particularly in various types of astronomy telescopes.
At a distance z away from an incoherent source with surface area A_s,

  A_c = λ² z² / A_s
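A rough numerical illustration of the relation A_c = λ²z²/A_s, using assumed round figures for sunlight at Earth (wavelength, Sun–Earth distance, and solar radius below are illustrative values, not from the text):

```python
import math

# A_c = lam**2 * z**2 / A_s for an incoherent source of surface area A_s
# viewed from a distance z.
def coherence_area(lam, z, A_s):
    return lam ** 2 * z ** 2 / A_s

# Sunlight at Earth with assumed round numbers: lam = 500 nm, z = 1.5e11 m,
# A_s = the Sun's disc area with radius 6.96e8 m.
A_c = coherence_area(500e-9, 1.5e11, math.pi * 6.96e8 ** 2)
d_c = 2.0 * math.sqrt(A_c / math.pi)  # diameter of a circular coherence area

print(A_c)  # ~3.7e-9 m^2
print(d_c)  # ~7e-5 m, i.e. tens of micrometres
```

A coherence-area diameter of a few tens of micrometres is why sunlight produces visible double-slit fringes only when the slits are very close together or the light is first passed through a small aperture.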
Sometimes people also use "spatial coherence" to refer to the visibility when a wave-like state is combined with a spatially shifted copy of itself.
=== Examples ===
Spatial coherence
Consider a tungsten light-bulb filament. Different points in the filament emit light independently and have no fixed phase relationship. In detail, at any point in time the profile of the emitted light will be distorted, and the profile will change randomly over the coherence time τ_c. Since for a white-light source such as a light-bulb τ_c is small, the filament is considered a spatially incoherent source. In contrast, a radio antenna array has large spatial coherence because antennas at opposite ends of the array emit with a fixed phase relationship. Light waves produced by a laser often have high temporal and spatial coherence (though the degree of coherence depends strongly on the exact properties of the laser). Spatial coherence of laser beams also manifests itself as speckle patterns and as diffraction fringes seen at the edges of a shadow.
Holography requires temporally and spatially coherent light. Its inventor, Dennis Gabor, produced successful holograms more than ten years before lasers were invented. To produce coherent light he passed the monochromatic light from an emission line of a mercury-vapor lamp through a pinhole spatial filter.
In February 2011 it was reported that helium atoms, cooled to near absolute zero into a Bose–Einstein condensate state, can be made to flow and behave as a coherent beam, as occurs in a laser.
== Spectral coherence of short pulses ==
Waves of different frequencies (in light these are different colours) can interfere to form a pulse if they have a fixed relative phase-relationship (see Fourier transform). Conversely, if waves of different frequencies are not coherent, then, when combined, they create a wave that is continuous in time (e.g. white light or white noise). The temporal duration of the pulse
Δt is limited by the spectral bandwidth of the light Δf according to

  Δt Δf ≳ 1,

which follows from the properties of the Fourier transform and results in Küpfmüller's uncertainty principle (for quantum particles it also results in the Heisenberg uncertainty principle).
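The relation between pulse duration and bandwidth can be illustrated numerically. Converting a spectral width Δλ at center wavelength λ to Δf = cΔλ/λ² gives the shortest (transform-limited) pulse the bandwidth supports; the Ti:sapphire figures below are assumed for illustration:

```python
c = 3.0e8  # speed of light, m/s

# Delta_t * Delta_f >~ 1: the shortest (transform-limited) pulse a given
# bandwidth supports.  For a spectral width dlam at center wavelength lam,
# df = c * dlam / lam**2.
def min_pulse_duration(lam, dlam):
    return lam ** 2 / (c * dlam)

# Mode-locked Ti:sapphire laser (assumed center 800 nm, dlam = 70 nm):
print(min_pulse_duration(800e-9, 70e-9))  # ~3e-14 s: tens of femtoseconds
```

This is why broadband mode-locked lasers, despite their poor monochromaticity, are the standard sources for femtosecond pulses.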
If the phase depends linearly on the frequency (i.e. θ(f) ∝ f) then the pulse will have the minimum time duration for its bandwidth (a transform-limited pulse); otherwise it is chirped (see dispersion).
=== Measurement of spectral coherence ===
Measurement of the spectral coherence of light requires a nonlinear optical interferometer, such as an intensity optical correlator, frequency-resolved optical gating (FROG), or spectral phase interferometry for direct electric-field reconstruction (SPIDER).
== Polarization and coherence ==
Light also has a polarization, which is the direction in which the electric or magnetic field oscillates. Unpolarized light is composed of incoherent light waves with random polarization angles. The electric field of the unpolarized light wanders in every direction and changes in phase over the coherence time of the two light waves. An absorbing polarizer rotated to any angle will always transmit half the incident intensity when averaged over time.
If the electric field wanders by a smaller amount the light will be partially polarized so that at some angle, the polarizer will transmit more than half the intensity. If a wave is combined with an orthogonally polarized copy of itself delayed by less than the coherence time, partially polarized light is created.
The polarization of a light beam is represented by a vector in the Poincaré sphere. For polarized light the end of the vector lies on the surface of the sphere, whereas the vector has zero length for unpolarized light. The vector for partially polarized light lies within the sphere.
== Quantum coherence ==
The signature property of quantum matter waves, wave interference, relies on coherence. While initially patterned after optical coherence, the theory and experimental understanding of quantum coherence greatly expanded the topic.
=== Matter wave coherence ===
The simplest extension of optical coherence applies optical concepts to matter waves. For example, when performing the double-slit experiment with atoms instead of light waves, a sufficiently collimated atomic beam creates a coherent atomic wave-function illuminating both slits. Each slit acts as a separate but in-phase beam contributing to the intensity pattern on a screen. These two contributions give rise to an intensity pattern of bright bands due to constructive interference, interlaced with dark bands due to destructive interference, on a downstream screen. Many variations of this experiment have been demonstrated.
As with light, transverse coherence (across the direction of propagation) of matter waves is controlled by collimation. Because light at all frequencies travels at the same velocity, longitudinal and temporal coherence are linked; in matter waves these are independent. In matter waves, velocity (energy) selection controls longitudinal coherence, and pulsing or chopping controls temporal coherence.
=== Quantum optics ===
The discovery of the Hanbury Brown and Twiss effect (correlation of light upon coincidence) triggered Glauber's creation of a uniquely quantum coherence analysis. Classical optical coherence becomes a classical limit for first-order quantum coherence; higher degrees of coherence lead to many phenomena in quantum optics.
=== Macroscopic quantum coherence ===
Macroscopic scale quantum coherence leads to novel phenomena, the so-called macroscopic quantum phenomena. For instance, the laser, superconductivity and superfluidity are examples of highly coherent quantum systems whose effects are evident at the macroscopic scale. The macroscopic quantum coherence (off-diagonal long-range order, ODLRO) for superfluidity, and laser light, is related to first-order (1-body) coherence/ODLRO, while superconductivity is related to second-order coherence/ODLRO. (For fermions, such as electrons, only even orders of coherence/ODLRO are possible.) For bosons, a Bose–Einstein condensate is an example of a system exhibiting macroscopic quantum coherence through a multiple occupied single-particle state.
The classical electromagnetic field exhibits macroscopic quantum coherence. The most obvious examples are the carrier signals for radio and TV; they satisfy Glauber's quantum description of coherence.
=== Quantum coherence as a resource ===
Recently M. B. Plenio and co-workers constructed an operational formulation of quantum coherence as a resource theory. They introduced coherence monotones analogous to the entanglement monotones. Quantum coherence has been shown to be equivalent to quantum entanglement in the sense that coherence can be faithfully described as entanglement, and conversely that each entanglement measure corresponds to a coherence measure.
== Applications ==
=== Holography ===
Coherent superpositions of optical wave fields include holography. Holographic photographs have been used as art and as difficult-to-forge security labels.
=== Non-optical wave fields ===
Further applications concern the coherent superposition of non-optical wave fields. In quantum mechanics, for example, one considers a probability field, which is related to the wave function ψ(r) (interpretation: density of the probability amplitude). Here the applications concern, among others, the future technologies of quantum computing and the already available technology of quantum cryptography. Additionally, the problems of the following subsection are treated.
=== Modal analysis ===
Coherence is used to check the quality of the transfer functions (FRFs) being measured. Low coherence can be caused by a poor signal-to-noise ratio and/or inadequate frequency resolution.
== See also ==
== References ==
== External links ==
Dr. SkySkull (2008-09-03). "Optics basics: Coherence". Skulls in the Stars. | Wikipedia/Coherence_(physics) |
Nanotechnology is the manipulation of matter with at least one dimension sized from 1 to 100 nanometers (nm). At this scale, commonly known as the nanoscale, surface area and quantum mechanical effects become important in describing properties of matter. This definition of nanotechnology includes all types of research and technologies that deal with these special properties. It is common to see the plural form "nanotechnologies" as well as "nanoscale technologies" to refer to research and applications whose common trait is scale. An earlier understanding of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabricating macroscale products, now referred to as molecular nanotechnology.
Nanotechnology defined by scale includes fields of science such as surface science, organic chemistry, molecular biology, semiconductor physics, energy storage, engineering, microfabrication, and molecular engineering. The associated research and applications range from extensions of conventional device physics to molecular self-assembly, from developing new materials with dimensions on the nanoscale to direct control of matter on the atomic scale.
Nanotechnology may be able to create new materials and devices with diverse applications, such as in nanomedicine, nanoelectronics, agricultural sectors, biomaterials energy production, and consumer products. However, nanotechnology raises issues, including concerns about the toxicity and environmental impact of nanomaterials, and their potential effects on global economics, as well as various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.
== Origins ==
The concepts that seeded nanotechnology were first discussed in 1959 by physicist Richard Feynman in his talk There's Plenty of Room at the Bottom, in which he described the possibility of synthesis via direct manipulation of atoms.
The term "nano-technology" was first used by Norio Taniguchi in 1974, though it was not widely known. Inspired by Feynman's concepts, K. Eric Drexler used the term "nanotechnology" in his 1986 book Engines of Creation: The Coming Era of Nanotechnology, which achieved popular success and helped thrust nanotechnology into the public sphere. In it he proposed the idea of a nanoscale "assembler" that would be able to build a copy of itself and of other items of arbitrary complexity with atom-level control. Also in 1986, Drexler co-founded The Foresight Institute to increase public awareness and understanding of nanotechnology concepts and implications.
The emergence of nanotechnology as a field in the 1980s occurred through the convergence of Drexler's theoretical and public work, which developed and popularized a conceptual framework, and experimental advances that drew additional attention to the prospects. In the 1980s, two breakthroughs helped to spark the growth of nanotechnology. First, the invention of the scanning tunneling microscope in 1981 enabled visualization of individual atoms and bonds, and was successfully used to manipulate individual atoms in 1989. The microscope's developers Gerd Binnig and Heinrich Rohrer at IBM Zurich Research Laboratory received a Nobel Prize in Physics in 1986. Binnig, Quate and Gerber also invented the analogous atomic force microscope that year.
Second, fullerenes (buckyballs) were discovered in 1985 by Harry Kroto, Richard Smalley, and Robert Curl, who together won the 1996 Nobel Prize in Chemistry. C60 was not initially described as nanotechnology; the term was used regarding subsequent work with related carbon nanotubes (sometimes called graphene tubes or Bucky tubes) which suggested potential applications for nanoscale electronics and devices. The discovery of carbon nanotubes is attributed to Sumio Iijima of NEC in 1991, for which Iijima won the inaugural 2008 Kavli Prize in Nanoscience.
In the early 2000s, the field garnered increased scientific, political, and commercial attention that led to both controversy and progress. Controversies emerged regarding the definitions and potential implications of nanotechnologies, exemplified by the Royal Society's report on nanotechnology. Challenges were raised regarding the feasibility of applications envisioned by advocates of molecular nanotechnology, which culminated in a public debate between Drexler and Smalley in 2001 and 2003.
Meanwhile, commercial products based on advancements in nanoscale technologies began emerging. These products were limited to bulk applications of nanomaterials and did not involve atomic control of matter. Some examples include the Silver Nano platform for using silver nanoparticles as an antibacterial agent, nanoparticle-based sunscreens, carbon fiber strengthening using silica nanoparticles, and carbon nanotubes for stain-resistant textiles.
Governments moved to promote and fund research into nanotechnology, in the United States through the National Nanotechnology Initiative, which formalized a size-based definition of nanotechnology and established research funding, and in Europe via the European Framework Programmes for Research and Technological Development.
By the mid-2000s scientific attention began to flourish. Nanotechnology roadmaps centered on atomically precise manipulation of matter and discussed existing and projected capabilities, goals, and applications.
== Fundamental concepts ==
Nanotechnology is the science and engineering of functional systems at the molecular scale. In its original sense, nanotechnology refers to the projected ability to construct items from the bottom up, making complete, high-performance products.
One nanometer (nm) is one billionth, or 10−9, of a meter. By comparison, typical carbon–carbon bond lengths, or the spacing between these atoms in a molecule, are in the range 0.12–0.15 nm, and DNA's diameter is around 2 nm. On the other hand, the smallest cellular life forms, the bacteria of the genus Mycoplasma, are around 200 nm in length. By convention, nanotechnology is taken as the scale range 1 to 100 nm, following the definition used by the American National Nanotechnology Initiative. The lower limit is set by the size of atoms (hydrogen has the smallest atoms, which have an approximately 0.25 nm kinetic diameter). The upper limit is more or less arbitrary, but is around the size below which phenomena not observed in larger structures start to become apparent and can be made use of. These phenomena make nanotechnology distinct from devices that are merely miniaturized versions of an equivalent macroscopic device; such devices are on a larger scale and come under the description of microtechnology.
To put that scale in another context, the comparative size of a nanometer to a meter is the same as that of a marble to the size of the earth.
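The analogy can be checked with a line of arithmetic; the marble and Earth sizes below are assumed round figures:

```python
# Checking the marble-to-Earth analogy with assumed round figures:
# marble diameter ~1 cm, Earth diameter ~1.27e7 m.
nm_to_m = 1e-9 / 1.0
marble_to_earth = 1e-2 / 1.27e7

print(nm_to_m)          # 1e-09
print(marble_to_earth)  # ~7.9e-10, the same order of magnitude
```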
Two main approaches are used in nanotechnology. In the "bottom-up" approach, materials and devices are built from molecular components which assemble themselves chemically by principles of molecular recognition. In the "top-down" approach, nano-objects are constructed from larger entities without atomic-level control.
Areas of physics such as nanoelectronics, nanomechanics, nanophotonics and nanoionics have evolved to provide nanotechnology's scientific foundation.
=== Larger to smaller: a materials perspective ===
Several phenomena become pronounced as the size of the system decreases. These include statistical mechanical effects, as well as quantum mechanical effects, for example the "quantum size effect" in which the electronic properties of solids alter along with reductions in particle size. Such effects do not apply at macro or micro dimensions; however, quantum effects can become significant when the nanometer size range is reached. Additionally, physical (mechanical, electrical, optical, etc.) properties change compared with macroscopic systems. One example is the increase in surface area to volume ratio, altering the mechanical, thermal, and catalytic properties of materials. Diffusion and reactions can be different as well. Systems with fast ion transport are referred to as nanoionics. The mechanical properties of nanosystems are of interest in nanomechanics research.
=== Simple to complex: a molecular perspective ===
Modern synthetic chemistry can prepare small molecules of almost any structure. These methods are used to manufacture a wide variety of useful chemicals such as pharmaceuticals or commercial polymers. This ability raises the question of extending this kind of control to the next-larger level, seeking methods to assemble single molecules into supramolecular assemblies consisting of many molecules arranged in a well-defined manner.
These approaches utilize the concepts of molecular self-assembly and/or supramolecular chemistry so that components automatically arrange themselves into a useful conformation through a bottom-up approach. The concept of molecular recognition is important: molecules can be designed so that a specific configuration or arrangement is favored due to non-covalent intermolecular forces. The Watson–Crick basepairing rules are a direct result of this, as is the specificity of an enzyme targeting a single substrate, or the specific folding of a protein. Thus, components can be designed to be complementary and mutually attractive so that they make a more complex and useful whole.
Such bottom-up approaches should be capable of producing devices in parallel and be much cheaper than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases. Most useful structures require complex and thermodynamically unlikely arrangements of atoms. Nevertheless, many examples of self-assembly based on molecular recognition exist in biology, most notably Watson–Crick basepairing and enzyme–substrate interactions.
=== Molecular nanotechnology: a long-term view ===
Molecular nanotechnology, sometimes called molecular manufacturing, concerns engineered nanosystems (nanoscale machines) operating on the molecular scale. Molecular nanotechnology is especially associated with molecular assemblers, machines that can produce a desired structure or device atom-by-atom using the principles of mechanosynthesis. Manufacturing in the context of productive nanosystems is not related to conventional technologies used to manufacture nanomaterials such as carbon nanotubes and nanoparticles.
When Drexler independently coined and popularized the term "nanotechnology", he envisioned manufacturing technology based on molecular machine systems. The premise was that molecular-scale biological analogies of traditional machine components demonstrated molecular machines were possible: biology was full of examples of sophisticated, stochastically optimized biological machines.
Drexler and other researchers have proposed that advanced nanotechnology ultimately could be based on mechanical engineering principles, namely, a manufacturing technology based on the mechanical functionality of these components (such as gears, bearings, motors, and structural members) that would enable programmable, positional assembly to atomic specification. The physics and engineering performance of exemplar designs were analyzed in Drexler's book Nanosystems: Molecular Machinery, Manufacturing, and Computation.
In general, assembling devices on the atomic scale requires positioning atoms on other atoms of comparable size and stickiness. Carlo Montemagno's view is that future nanosystems will be hybrids of silicon technology and biological molecular machines. Richard Smalley argued that mechanosynthesis was impossible due to difficulties in mechanically manipulating individual molecules.
This led to an exchange of letters in the ACS publication Chemical & Engineering News in 2003. Though biology clearly demonstrates that molecular machines are possible, non-biological molecular machines remained in their infancy. Alex Zettl and colleagues at Lawrence Berkeley Laboratories and UC Berkeley constructed at least three molecular devices whose motion is controlled via changing voltage: a nanotube nanomotor, a molecular actuator, and a nanoelectromechanical relaxation oscillator.
Ho and Lee at Cornell University in 1999 used a scanning tunneling microscope to move an individual carbon monoxide molecule (CO) to an individual iron atom (Fe) sitting on a flat silver crystal and chemically bound the CO to the Fe by applying a voltage.
== Research ==
=== Nanomaterials ===
Many areas of science develop or study materials having unique properties arising from their nanoscale dimensions.
Interface and colloid science produced many materials that may be useful in nanotechnology, such as carbon nanotubes and other fullerenes, and various nanoparticles and nanorods. Nanomaterials with fast ion transport are related to nanoionics and nanoelectronics.
Nanoscale materials can be used for bulk applications; most commercial applications of nanotechnology are of this flavor.
Progress has been made in using these materials for medical applications, including tissue engineering, drug delivery, antibacterials and biosensors.
Nanoscale materials such as nanopillars are used in solar cells.
Applications incorporating semiconductor nanoparticles in products such as display technology, lighting, solar cells and biological imaging; see quantum dots.
=== Bottom-up approaches ===
The bottom-up approach seeks to arrange smaller components into more complex assemblies.
DNA nanotechnology utilizes Watson–Crick basepairing to construct well-defined structures out of DNA and other nucleic acids.
Approaches from the field of "classical" chemical synthesis (inorganic and organic synthesis) aim at designing molecules with well-defined shape (e.g. bis-peptides).
More generally, molecular self-assembly seeks to use concepts of supramolecular chemistry, and molecular recognition in particular, to cause single-molecule components to automatically arrange themselves into some useful conformation.
Atomic force microscope tips can be used as a nanoscale "write head" to deposit a chemical upon a surface in a desired pattern in a process called dip-pen nanolithography. This technique fits into the larger subfield of nanolithography.
Molecular-beam epitaxy allows for bottom-up assemblies of materials, most notably semiconductor materials commonly used in chip and computing applications, stacks, gating, and nanowire lasers.
=== Top-down approaches ===
These seek to create smaller devices by using larger ones to direct their assembly.
Many technologies that descended from conventional solid-state silicon methods for fabricating microprocessors are capable of creating features smaller than 100 nm. Giant magnetoresistance-based hard drives already on the market fit this description, as do atomic layer deposition (ALD) techniques. Peter Grünberg and Albert Fert received the Nobel Prize in Physics in 2007 for their discovery of giant magnetoresistance and contributions to the field of spintronics.
Solid-state techniques can be used to create nanoelectromechanical systems or NEMS, which are related to microelectromechanical systems or MEMS.
Focused ion beams can directly remove material, or even deposit material when suitable precursor gases are applied at the same time. For example, this technique is used routinely to create sub-100 nm sections of material for analysis in transmission electron microscopy.
Atomic force microscope tips can be used as a nanoscale "write head" to deposit a resist, which is then followed by an etching process to remove material in a top-down method.
=== Functional approaches ===
Functional approaches seek to develop useful components without regard to how they might be assembled.
Magnetic assembly is used for the synthesis of anisotropic superparamagnetic materials such as magnetic nanochains.
Molecular scale electronics seeks to develop molecules with useful electronic properties. These could be used as single-molecule components in a nanoelectronic device, such as rotaxane.
Synthetic chemical methods can be used to create synthetic molecular motors, such as in a so-called nanocar.
=== Biomimetic approaches ===
Bionics or biomimicry seeks to apply biological methods and systems found in nature to the study and design of engineering systems and modern technology. Biomineralization is one example of the systems studied.
Bionanotechnology is the use of biomolecules for applications in nanotechnology, including the use of viruses and lipid assemblies. Nanocellulose, a nanopolymer often used for bulk-scale applications, has gained interest owing to its useful properties such as abundance, high aspect ratio, good mechanical properties, renewability, and biocompatibility.
=== Speculative ===
These subfields seek to anticipate what inventions nanotechnology might yield, or attempt to propose an agenda along which inquiry could progress. These often take a big-picture view, with more emphasis on societal implications than engineering details.
Molecular nanotechnology is a proposed approach that involves manipulating single molecules in finely controlled, deterministic ways. This is more theoretical than the other subfields, and many of its proposed techniques are beyond current capabilities.
Nanorobotics considers self-sufficient machines operating at the nanoscale. There are hopes for applying nanorobots in medicine. Nevertheless, progress on innovative materials and patented methodologies has been demonstrated.
Productive nanosystems are "systems of nanosystems" that could produce atomically precise parts for other nanosystems, not necessarily using novel nanoscale-emergent properties, but well-understood fundamentals of manufacturing. Because of the discrete (i.e. atomic) nature of matter and the possibility of exponential growth, this stage could form the basis of another industrial revolution. Mihail Roco proposed four states of nanotechnology that seem to parallel the technical progress of the Industrial Revolution, progressing from passive nanostructures to active nanodevices to complex nanomachines and ultimately to productive nanosystems.
Programmable matter seeks to design materials whose properties can be easily, reversibly and externally controlled through a fusion of information science and materials science.
Due to the popularity and media exposure of the term nanotechnology, the words picotechnology and femtotechnology have been coined in analogy to it, although these are used only informally.
=== Dimensionality in nanomaterials ===
Nanomaterials can be classified as 0D, 1D, 2D and 3D nanomaterials. Dimensionality plays a major role in determining the characteristics of nanomaterials, including their physical, chemical, and biological properties. With decreasing dimensionality, an increase in surface-to-volume ratio is observed, meaning that lower-dimensional nanomaterials have a higher surface area compared to 3D nanomaterials. Two-dimensional (2D) nanomaterials have been extensively investigated for electronic, biomedical, drug delivery and biosensor applications.
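The surface-to-volume scaling mentioned above can be made concrete with a short numerical sketch. The following Python snippet (an illustrative calculation, not from the article) evaluates the ratio S/V = 3/r for spheres shrinking from micron to nanometre radius:

```python
# Illustrative sketch: for a sphere, S/V = (4*pi*r^2)/((4/3)*pi*r^3) = 3/r,
# so the ratio grows as the radius shrinks toward the nanoscale.
def surface_to_volume(radius_m: float) -> float:
    """Surface-area-to-volume ratio of a sphere of the given radius, in 1/m."""
    return 3.0 / radius_m

for radius_nm in (1000.0, 100.0, 10.0, 1.0):   # 1 micron down to 1 nm
    ratio = surface_to_volume(radius_nm * 1e-9)
    print(f"r = {radius_nm:6g} nm  ->  S/V = {ratio:.3e} 1/m")
```

A 1 nm sphere thus exposes a thousand times more surface per unit volume than a 1 µm sphere, which is why nanoscale materials are dominated by surface effects.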
== Tools and techniques ==
=== Scanning microscopes ===
The atomic force microscope (AFM) and the scanning tunneling microscope (STM) are two versions of scanning probes used for nanoscale observation. Other types of scanning probe microscopy have much higher resolution, since they are not limited by the wavelengths of sound or light.
The tip of a scanning probe can also be used to manipulate nanostructures (positional assembly). Feature-oriented scanning may be a promising way to implement these nanoscale manipulations via an automatic algorithm. However, this is still a slow process because of the low velocity of the microscope.
The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are made. Scanning probe microscopy is an important technique both for characterization and synthesis. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, they can be used for carving out structures on surfaces and to help guide self-assembling structures. By using, for example, feature-oriented scanning approach, atoms or molecules can be moved around on a surface with scanning probe microscopy techniques.
=== Lithography ===
Various techniques of lithography, such as optical lithography, X-ray lithography, dip pen lithography, electron beam lithography or nanoimprint lithography offer top-down fabrication techniques where a bulk material is reduced to a nano-scale pattern.
Another group of nano-technological techniques include those used for fabrication of nanotubes and nanowires, those used in semiconductor fabrication such as deep ultraviolet lithography, electron beam lithography, focused ion beam machining, nanoimprint lithography, atomic layer deposition, and molecular vapor deposition, and further including molecular self-assembly techniques such as those employing di-block copolymers.
==== Bottom-up ====
In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule. These techniques include chemical synthesis, self-assembly and positional assembly. Dual-polarization interferometry is one tool suitable for characterization of self-assembled thin films. Another variation of the bottom-up approach is molecular-beam epitaxy, or MBE. Researchers at Bell Telephone Laboratories, including John R. Arthur, Alfred Y. Cho, and Art C. Gossard, developed and implemented MBE as a research tool in the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum Hall effect, for which the 1998 Nobel Prize in Physics was awarded. MBE lays down atomically precise layers of atoms and, in the process, builds up complex structures. Important for research on semiconductors, MBE is also widely used to make samples and devices for the newly emerging field of spintronics.
Therapeutic products based on responsive nanomaterials, such as the highly deformable, stress-sensitive Transfersome vesicles, are approved for human use in some countries.
== Applications ==
As of August 21, 2008, the Project on Emerging Nanotechnologies estimated that over 800 manufacturer-identified nanotech products were publicly available, with new ones hitting the market at a pace of 3–4 per week. Most applications are "first generation" passive nanomaterials that include titanium dioxide in sunscreen, cosmetics, surface coatings, and some food products; carbon allotropes used to produce gecko tape; silver in food packaging, clothing, disinfectants, and household appliances; zinc oxide in sunscreens and cosmetics, surface coatings, paints and outdoor furniture varnishes; and cerium oxide as a fuel catalyst.
In the electric car industry, single-wall carbon nanotubes (SWCNTs) address key lithium-ion battery challenges, including energy density, charge rate, service life, and cost. SWCNTs connect electrode particles during the charge/discharge process, preventing premature battery degradation. Their exceptional ability to wrap active material particles enhances electrical conductivity and physical properties, setting them apart from multi-walled carbon nanotubes and carbon black.
Further applications allow tennis balls to last longer, golf balls to fly straighter, and bowling balls to become more durable. Trousers and socks have been infused with nanotechnology to last longer and lower temperature in the summer. Bandages are infused with silver nanoparticles to heal cuts faster. Video game consoles and personal computers may become cheaper, faster, and contain more memory thanks to nanotechnology. Nanotechnology may also be used to build structures for on-chip computing with light, for example on-chip optical quantum information processing and picosecond transmission of information.
Nanotechnology may have the ability to make existing medical applications cheaper and easier to use in places like doctors' offices and homes. Cars may use nanomaterials so that car parts require less metal during manufacturing and less fuel to operate.
Nanoencapsulation involves the enclosure of active substances within carriers. Typically, these carriers offer advantages, such as enhanced bioavailability, controlled release, targeted delivery, and protection of the encapsulated substances. In the medical field, nanoencapsulation plays a significant role in drug delivery. It facilitates more efficient drug administration, reduces side effects, and increases treatment effectiveness. Nanoencapsulation is particularly useful for improving the bioavailability of poorly water-soluble drugs, enabling controlled and sustained drug release, and supporting the development of targeted therapies. These features collectively contribute to advancements in medical treatments and patient care.
Nanotechnology may play a role in tissue engineering. When designing scaffolds, researchers attempt to mimic the nanoscale features of a cell's microenvironment to direct its differentiation down a suitable lineage. For example, when creating scaffolds to support bone growth, researchers may mimic osteoclast resorption pits.
Researchers used DNA origami-based nanobots capable of carrying out logic functions to target drug delivery in cockroaches.
A nano bible (a 0.5 mm² silicon chip) was created by the Technion in order to increase youth interest in nanotechnology.
== Implications ==
One concern is the effect that industrial-scale manufacturing and use of nanomaterials will have on human health and the environment, as suggested by nanotoxicology research. For these reasons, some groups advocate that nanotechnology be regulated. However, regulation might stifle scientific research and the development of beneficial innovations. Public health research agencies, such as the National Institute for Occupational Safety and Health, research potential health effects stemming from exposures to nanoparticles.
Nanoparticle products may have unintended consequences. Researchers have discovered that bacteriostatic silver nanoparticles used in socks to reduce foot odor are released in the wash. These particles are then flushed into the wastewater stream and may destroy bacteria that are critical components of natural ecosystems, farms, and waste treatment processes.
Public deliberations on risk perception in the US and UK carried out by the Center for Nanotechnology in Society found that participants were more positive about nanotechnologies for energy applications than for health applications, with health applications raising moral and ethical dilemmas such as cost and availability.
Experts, including director of the Woodrow Wilson Center's Project on Emerging Nanotechnologies David Rejeski, testified that commercialization depends on adequate oversight, risk research strategy, and public engagement. As of 2006, Berkeley, California was the only US city to regulate nanotechnology.
=== Health and environmental concerns ===
Inhaling airborne nanoparticles and nanofibers may contribute to pulmonary diseases, e.g. fibrosis. Researchers found that when rats breathed in nanoparticles, the particles settled in the brain and lungs, which led to significant increases in biomarkers for inflammation and stress response, and that nanoparticles induce skin aging through oxidative stress in hairless mice.
A two-year study at UCLA's School of Public Health found lab mice consuming nano-titanium dioxide showed DNA and chromosome damage to a degree "linked to all the big killers of man, namely cancer, heart disease, neurological disease and aging".
A Nature Nanotechnology study suggested that some forms of carbon nanotubes could be as harmful as asbestos if inhaled in sufficient quantities. Anthony Seaton of the Institute of Occupational Medicine in Edinburgh, Scotland, who contributed to the article on carbon nanotubes said "We know that some of them probably have the potential to cause mesothelioma. So those sorts of materials need to be handled very carefully." In the absence of specific regulation forthcoming from governments, Paull and Lyons (2008) have called for an exclusion of engineered nanoparticles in food. A newspaper article reports that workers in a paint factory developed serious lung disease and nanoparticles were found in their lungs.
== Regulation ==
Calls for tighter regulation of nanotechnology have accompanied a debate related to human health and safety risks. Some regulatory agencies cover some nanotechnology products and processes – by "bolting on" nanotechnology to existing regulations – leaving clear gaps. Davies proposed a road map describing steps to deal with these shortcomings.
Andrew Maynard, chief science advisor to the Woodrow Wilson Center's Project on Emerging Nanotechnologies, reported insufficient funding for human health and safety research, and as a result inadequate understanding of human health and safety risks. Some academics called for stricter application of the precautionary principle, slowing marketing approval, enhanced labelling and additional safety data.
A Royal Society report identified a risk of nanoparticles or nanotubes being released during disposal, destruction and recycling, and recommended that "manufacturers of products that fall under extended producer responsibility regimes such as end-of-life regulations publish procedures outlining how these materials will be managed to minimize possible human and environmental exposure".
== See also ==
== References ==
== External links ==
What is Nanotechnology? (A Vega/BBC/OU Video Discussion).
In theoretical physics, the Rarita–Schwinger equation is the
relativistic field equation of spin-3/2 fermions in a four-dimensional flat spacetime. It is similar to the Dirac equation for spin-1/2 fermions. This equation was first introduced by William Rarita and Julian Schwinger in 1941.
In modern notation it can be written as:
{\displaystyle \left(\epsilon ^{\mu \kappa \rho \nu }\gamma _{5}\gamma _{\kappa }\partial _{\rho }-im\sigma ^{\mu \nu }\right)\psi _{\nu }=0,}
where {\displaystyle \epsilon ^{\mu \kappa \rho \nu }} is the Levi-Civita symbol, {\displaystyle \gamma _{\kappa }} are Dirac matrices (with {\displaystyle \kappa =0,1,2,3}) and {\displaystyle \gamma _{5}=i\gamma _{0}\gamma _{1}\gamma _{2}\gamma _{3}}, {\displaystyle m} is the mass, {\displaystyle \sigma ^{\mu \nu }\equiv {\frac {i}{2}}[\gamma ^{\mu },\gamma ^{\nu }]}, and {\displaystyle \psi _{\nu }} is a vector-valued spinor with additional components compared to the four-component spinor in the Dirac equation. It corresponds to the (1/2, 1/2) ⊗ ((1/2, 0) ⊕ (0, 1/2)) representation of the Lorentz group, or rather, its (1, 1/2) ⊕ (1/2, 1) part.
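The gamma-matrix identities quoted above can be checked numerically. The following Python sketch (an independent sanity check using the standard Dirac representation, not part of the article's derivation) verifies the Clifford algebra, the definition of γ5, and the antisymmetry of σ^{μν}:

```python
# Sanity check of the Dirac-representation gamma matrices with NumPy:
#   {g^mu, g^nu} = 2 eta^{mu nu} I,  gamma_5 = i g^0 g^1 g^2 g^3,
#   sigma^{mu nu} = (i/2)[g^mu, g^nu]  is antisymmetric in mu <-> nu.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
g = [g0] + [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])          # metric, signature (+,-,-,-)

# Clifford algebra: anticommutator equals twice the metric
for mu in range(4):
    for nu in range(4):
        anti = g[mu] @ g[nu] + g[nu] @ g[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]
assert np.allclose(g5 @ g5, np.eye(4))           # gamma_5 squares to the identity

sigma = lambda mu, nu: 0.5j * (g[mu] @ g[nu] - g[nu] @ g[mu])
assert np.allclose(sigma(0, 1), -sigma(1, 0))    # antisymmetric, as required
print("gamma-matrix identities verified")
```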
This field equation can be derived as the Euler–Lagrange equation corresponding to the Rarita–Schwinger Lagrangian:
{\displaystyle {\mathcal {L}}=-{\tfrac {1}{2}}\;{\bar {\psi }}_{\mu }\left(\epsilon ^{\mu \kappa \rho \nu }\gamma _{5}\gamma _{\kappa }\partial _{\rho }-im\sigma ^{\mu \nu }\right)\psi _{\nu },}
where the bar above {\displaystyle \psi _{\mu }} denotes the Dirac adjoint.
This equation controls the propagation of the wave function of composite objects such as the delta baryons (Δ) or of the conjectural gravitino. So far, no elementary particle with spin 3/2 has been found experimentally.
The massless Rarita–Schwinger equation has a fermionic gauge symmetry: it is invariant under the gauge transformation {\displaystyle \psi _{\mu }\rightarrow \psi _{\mu }+\partial _{\mu }\epsilon }, where {\displaystyle \epsilon \equiv \epsilon _{\alpha }} is an arbitrary spinor field. This is simply the local supersymmetry of supergravity, and the field must be a gravitino.
"Weyl" and "Majorana" versions of the Rarita–Schwinger equation also exist.
== Equations of motion in the massless case ==
Consider a massless Rarita–Schwinger field described by the Lagrangian density
{\displaystyle {\mathcal {L}}_{RS}={\bar {\psi }}_{\mu }\gamma ^{\mu \nu \rho }\partial _{\nu }\psi _{\rho },}
where the sum over spin indices is implicit, {\displaystyle \psi _{\mu }} are Majorana spinors, and

{\displaystyle \gamma ^{\mu \nu \rho }\equiv {\frac {1}{3!}}\gamma ^{[\mu }\gamma ^{\nu }\gamma ^{\rho ]}.}
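The antisymmetry of γ^{μνρ} is what makes the fermionic gauge symmetry of the massless equation work: for a plane wave, the shift ψ_μ → ψ_μ + ∂_μ ε contributes a term proportional to γ^{μνρ} k_ν k_ρ, which vanishes because an antisymmetric object is contracted with the symmetric product k_ν k_ρ. The following Python sketch (an illustrative check in the Dirac representation; upper versus lower index placement is glossed over, which does not affect the antisymmetry argument) confirms this numerically:

```python
# Build gamma^{mu nu rho} = (1/3!) sum over signed permutations of the index
# triple, then check gamma^{mu nu rho} k_nu k_rho = 0 for a random momentum k.
from itertools import permutations
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g = [np.block([[I2, 0 * I2], [0 * I2, -I2]])] + \
    [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in (sx, sy, sz)]

def sign(p):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def gamma3(mu, nu, rho):
    """Totally antisymmetrized product of three gamma matrices."""
    idx = (mu, nu, rho)
    total = np.zeros((4, 4), dtype=complex)
    for p in permutations(range(3)):
        total += sign(p) * g[idx[p[0]]] @ g[idx[p[1]]] @ g[idx[p[2]]]
    return total / 6.0

rng = np.random.default_rng(0)
k = rng.normal(size=4)                       # arbitrary momentum components
for mu in range(4):
    contraction = sum(gamma3(mu, nu, rho) * k[nu] * k[rho]
                      for nu in range(4) for rho in range(4))
    assert np.allclose(contraction, 0.0)     # antisymmetric x symmetric = 0
print("gamma^{mu nu rho} k_nu k_rho = 0 for all mu")
```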
To obtain the equations of motion we vary the Lagrangian with respect to the fields {\displaystyle \psi _{\mu }}, obtaining:
{\displaystyle \delta {\mathcal {L}}_{RS}=\delta {\bar {\psi }}_{\mu }\gamma ^{\mu \nu \rho }\partial _{\nu }\psi _{\rho }+{\bar {\psi }}_{\mu }\gamma ^{\mu \nu \rho }\partial _{\nu }\delta \psi _{\rho }=\delta {\bar {\psi }}_{\mu }\gamma ^{\mu \nu \rho }\partial _{\nu }\psi _{\rho }-\partial _{\nu }{\bar {\psi }}_{\mu }\gamma ^{\mu \nu \rho }\delta \psi _{\rho }+{\text{ boundary terms}}}
Using the Majorana flip properties, we see that the two terms on the RHS are equal, concluding that
{\displaystyle \delta {\mathcal {L}}_{RS}=2\delta {\bar {\psi }}_{\mu }\gamma ^{\mu \nu \rho }\partial _{\nu }\psi _{\rho },}
plus unimportant boundary terms.
Imposing {\displaystyle \delta {\mathcal {L}}_{RS}=0}, we thus see that the equation of motion for a massless Majorana Rarita–Schwinger spinor reads:
{\displaystyle \gamma ^{\mu \nu \rho }\partial _{\nu }\psi _{\rho }=0.}
The gauge symmetry of the massless Rarita–Schwinger equation allows the choice of the gauge {\displaystyle \gamma ^{\mu }\psi _{\mu }=0}, reducing the equations to:
{\displaystyle \gamma ^{\nu }{\partial }_{\nu }\psi _{\mu }=0,\quad \partial ^{\mu }\psi _{\mu }=0,\quad \gamma ^{\mu }\psi _{\mu }=0.}
A solution with spins 1/2 and 3/2 is given by:
{\displaystyle \psi _{0}=\kappa ,\quad \psi _{i}=\psi _{i}^{TT}+{\frac {\gamma ^{j}\partial _{j}}{\nabla ^{2}}}\gamma _{0}\partial _{i}\kappa ,}
where {\displaystyle \nabla ^{2}} is the spatial Laplacian, {\displaystyle \psi _{i}^{TT}} is doubly transverse, carrying spin 3/2, and {\displaystyle \kappa } satisfies the massless Dirac equation, therefore carrying spin 1/2.
== Drawbacks of the equation ==
The current description of massive, higher spin fields through either Rarita–Schwinger or Fierz–Pauli formalisms is afflicted with several maladies.
=== Superluminal propagation ===
As in the case of the Dirac equation, electromagnetic interaction can be added by promoting the partial derivative to gauge covariant derivative:
{\displaystyle \partial _{\mu }\rightarrow D_{\mu }=\partial _{\mu }-ieA_{\mu }.}
In 1969, Velo and Zwanziger showed that the Rarita–Schwinger Lagrangian coupled to electromagnetism leads to an equation with solutions representing wavefronts, some of which propagate faster than light. In other words,
the field then suffers from acausal, superluminal propagation; consequently, the quantization in interaction with electromagnetism is essentially flawed. In extended supergravity, though, Das and Freedman have shown that local supersymmetry solves this problem.
== References ==
== Sources ==
Rarita, William; Schwinger, Julian (1941-07-01). "On a Theory of Particles with Half-Integral Spin". Physical Review. 60 (1). American Physical Society (APS): 61. Bibcode:1941PhRv...60...61R. doi:10.1103/physrev.60.61. ISSN 0031-899X.
Collins P.D.B., Martin A.D., Squires E.J., Particle physics and cosmology (1989) Wiley, Section 1.6.
Velo, Giorgio; Zwanziger, Daniel (1969-10-25). "Propagation and Quantization of Rarita-Schwinger Waves in an External Electromagnetic Potential". Physical Review. 186 (5). American Physical Society (APS): 1337–1341. Bibcode:1969PhRv..186.1337V. doi:10.1103/physrev.186.1337. ISSN 0031-899X.
Velo, Giorgio; Zwanzinger, Daniel (1969-12-25). "Noncausality and Other Defects of Interaction Lagrangians for Particles with Spin One and Higher". Physical Review. 188 (5). American Physical Society (APS): 2218–2222. Bibcode:1969PhRv..188.2218V. doi:10.1103/physrev.188.2218. ISSN 0031-899X.
Kobayashi, M.; Shamaly, A. (1978-04-15). "Minimal electromagnetic coupling for massive spin-two fields". Physical Review D. 17 (8). American Physical Society (APS): 2179–2181. Bibcode:1978PhRvD..17.2179K. doi:10.1103/physrevd.17.2179. ISSN 0556-2821.
In various interpretations of quantum mechanics, wave function collapse, also called reduction of the state vector, occurs when a wave function—initially in a superposition of several eigenstates—reduces to a single eigenstate due to interaction with the external world. This interaction is called an observation and is the essence of a measurement in quantum mechanics, which connects the wave function with classical observables such as position and momentum. Collapse is one of the two processes by which quantum systems evolve in time; the other is the continuous evolution governed by the Schrödinger equation.
In the Copenhagen interpretation, wave function collapse connects quantum to classical models, with a special role for the observer. By contrast, objective-collapse proposes an origin in physical processes. In the many-worlds interpretation, collapse does not exist; all wave function outcomes occur while quantum decoherence accounts for the appearance of collapse.
Historically, Werner Heisenberg was the first to use the idea of wave function reduction to explain quantum measurement.
== Mathematical description ==
In quantum mechanics each measurable physical quantity of a quantum system is called an observable which, for example, could be the position {\displaystyle r} and the momentum {\displaystyle p} but also energy {\displaystyle E}, {\displaystyle z} components of spin ({\displaystyle s_{z}}), and so on. The observable acts as a linear function on the states of the system; its eigenvectors correspond to the quantum states (i.e. eigenstates) and the eigenvalues to the possible values of the observable. The collection of eigenstate/eigenvalue pairs represents all possible values of the observable. Writing {\displaystyle \phi _{i}} for an eigenstate and {\displaystyle c_{i}} for the corresponding observed value, any arbitrary state of the quantum system can be expressed as a vector using bra–ket notation:
{\displaystyle |\psi \rangle =\sum _{i}c_{i}|\phi _{i}\rangle .}
The kets {\displaystyle \{|\phi _{i}\rangle \}} specify the different available quantum "alternatives", i.e., particular quantum states.
The wave function is a specific representation of a quantum state. Wave functions can therefore always be expressed as eigenstates of an observable though the converse is not necessarily true.
=== Collapse ===
To account for the experimental result that repeated measurements of a quantum system give the same results, the theory postulates a "collapse" or "reduction of the state vector" upon observation,: 566 abruptly converting an arbitrary state into a single component eigenstate of the observable:
{\displaystyle |\psi \rangle =\sum _{i}c_{i}|\phi _{i}\rangle \rightarrow |\psi '\rangle =|\phi _{i}\rangle .}
where the arrow represents a measurement of the observable corresponding to the {\displaystyle \phi } basis.
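The collapse rule above can be illustrated with a toy numerical simulation (a hedged sketch, not part of the formalism; the sampling probabilities anticipate the Born rule described in the next subsection, and the random seed is an arbitrary choice):

```python
# Toy simulation: a state given by amplitudes c_i collapses on measurement to
# one eigenstate |phi_i>, chosen with probability |c_i|^2. Measuring again
# returns the same outcome, reproducing the repeatability that motivates the
# collapse postulate.
import numpy as np

rng = np.random.default_rng(42)

def measure(c):
    """Return (outcome index, collapsed amplitude vector)."""
    probs = np.abs(c) ** 2
    i = rng.choice(len(c), p=probs / probs.sum())
    collapsed = np.zeros_like(c)
    collapsed[i] = 1.0                      # state is now the eigenstate |phi_i>
    return i, collapsed

c = np.array([0.6, 0.8j])                   # |c_0|^2 = 0.36, |c_1|^2 = 0.64
first, psi = measure(c)
second, _ = measure(psi)                    # repeat the measurement
assert first == second                      # collapse makes results repeatable
print(f"outcome {first}, repeated outcome {second}")
```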
For any single event, only one eigenvalue is measured, chosen randomly from among the possible values.
=== Meaning of the expansion coefficients ===
The complex coefficients {\displaystyle \{c_{i}\}} in the expansion of a quantum state in terms of eigenstates {\displaystyle \{|\phi _{i}\rangle \}},

{\displaystyle |\psi \rangle =\sum _{i}c_{i}|\phi _{i}\rangle ,}

can be written as a (complex) overlap of the corresponding eigenstate and the quantum state:
{\displaystyle c_{i}=\langle \phi _{i}|\psi \rangle .}
They are called the probability amplitudes. The square modulus {\displaystyle |c_{i}|^{2}} is the probability that a measurement of the observable yields the eigenstate {\displaystyle |\phi _{i}\rangle }. The sum of the probability over all possible outcomes must be one:
{\displaystyle \langle \psi |\psi \rangle =\sum _{i}|c_{i}|^{2}=1.}
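A minimal numerical illustration of these formulas (an assumption-laden sketch: the Pauli matrix σ_x stands in for a generic observable, and the chosen state is arbitrary) computes the amplitudes c_i = ⟨φ_i|ψ⟩ and checks the normalization:

```python
# Expansion coefficients c_i = <phi_i|psi> in the eigenbasis of an observable,
# and the normalization sum_i |c_i|^2 = 1 for a normalized state |psi>.
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])       # observable (sigma_x as a stand-in)
eigvals, eigvecs = np.linalg.eigh(X)         # columns of eigvecs are |phi_i>

psi = np.array([1.0, 1.0j]) / np.sqrt(2)     # an arbitrary normalized state
c = eigvecs.conj().T @ psi                   # c_i = <phi_i|psi>

probs = np.abs(c) ** 2                       # Born-rule probabilities
print("eigenvalues  :", eigvals)             # ascending: [-1., 1.]
print("probabilities:", probs)
assert np.isclose(probs.sum(), 1.0)          # total probability is one
```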
As examples, individual counts in a double-slit experiment with electrons appear at random locations on the detector; after many counts are summed the distribution shows a wave interference pattern. In a Stern–Gerlach experiment with silver atoms, each particle appears in one of two areas unpredictably, but over many events the two areas accumulate equal numbers of counts.
This statistical aspect of quantum measurements differs fundamentally from classical mechanics. In quantum mechanics the only information we have about a system is its wave function and measurements of its wave function can only give statistical information.: 17
== Terminology ==
The two terms "reduction of the state vector" (or "state reduction" for short) and "wave function collapse" are used to describe the same concept. A quantum state is a mathematical description of a quantum system; a quantum state vector uses Hilbert space vectors for the description.: 159 Reduction of the state vector replaces the full state vector with a single eigenstate of the observable.
The term "wave function" is typically used for a different mathematical representation of the quantum state, one that uses spatial coordinates also called the "position representation".: 324 When the wave function representation is used, the "reduction" is called "wave function collapse".
== The measurement problem ==
The Schrödinger equation describes quantum systems but does not describe their measurement. Solutions to the equation include all possible observable values for measurements, but measurements only result in one definite outcome. This difference is called the measurement problem of quantum mechanics. To predict measurement outcomes from quantum solutions, the orthodox interpretation of quantum theory postulates wave function collapse and uses the Born rule to compute the probable outcomes. Despite the widespread quantitative success of these postulates scientists remain dissatisfied and have sought more detailed physical models. Rather than suspending the Schrödinger equation during the process of measurement, the measurement apparatus should be included and governed by the laws of quantum mechanics.: 127
== Physical approaches to collapse ==
Quantum theory offers no dynamical description of the "collapse" of the wave function. Viewed as a statistical theory, no description is expected. As Fuchs and Peres put it, "collapse is something that happens in our description of the system, not to the system itself".
Various interpretations of quantum mechanics attempt to provide a physical model for collapse.: 816 Three treatments of collapse can be found among the common interpretations. The first group includes hidden-variable theories like de Broglie–Bohm theory; here random outcomes only result from unknown values of hidden variables. Results from tests of Bell's theorem show that these variables would need to be non-local. The second group models measurement as quantum entanglement between the quantum state and the measurement apparatus. This results in a simulation of classical statistics called quantum decoherence. This group includes the many-worlds interpretation and consistent histories models. The third group postulates an additional, but as yet undetected, physical basis for the randomness; this group includes for example the objective-collapse interpretations. While models in all groups have contributed to better understanding of quantum theory, no alternative explanation for individual events has emerged as more useful than collapse followed by statistical prediction with the Born rule.: 819
The significance ascribed to the wave function varies from interpretation to interpretation and even within an interpretation (such as the Copenhagen interpretation). If the wave function merely encodes an observer's knowledge of the universe, then the wave function collapse corresponds to the receipt of new information. This is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent.
=== Quantum decoherence ===
Quantum decoherence explains why a system interacting with an environment transitions from being a pure state, exhibiting superpositions, to a mixed state, an incoherent combination of classical alternatives. This transition is fundamentally reversible, as the combined state of system and environment is still pure, but for all practical purposes irreversible in the same sense as in the second law of thermodynamics: the environment is a very large and complex quantum system, and it is not feasible to reverse their interaction. Decoherence is thus very important for explaining the classical limit of quantum mechanics, but cannot explain wave function collapse, as all classical alternatives are still present in the mixed state, and wave function collapse selects only one of them.
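The point made above, that decoherence produces an incoherent mixture but never selects a single outcome, can be made concrete with a toy dephasing model (an illustrative sketch; the exponential damping of the coherences is an assumed noise model, not derived here):

```python
# Toy dephasing of a qubit: the off-diagonal terms of the density matrix decay
# as the environment is traced out, turning a pure superposition into a
# classical-looking mixture, yet both outcomes remain present on the diagonal.
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)     # |+> = (|0> + |1>)/sqrt(2)
rho = np.outer(plus, plus.conj())            # pure-state density matrix

def dephase(rho, gamma):
    """Suppress coherences by exp(-gamma); the diagonal is untouched."""
    out = rho.copy()
    out[0, 1] *= np.exp(-gamma)
    out[1, 0] *= np.exp(-gamma)
    return out

purity = lambda r: np.real(np.trace(r @ r))  # Tr(rho^2): 1 for pure states
print("purity before:", purity(rho))         # 1.0 (pure superposition)
mixed = dephase(rho, gamma=10.0)
print("purity after :", purity(mixed))       # ~0.5 (incoherent mixture)
print("diagonal     :", np.real(np.diag(mixed)))  # both outcomes still present
```

The purity drops toward 1/2, the value for a maximally mixed qubit, while the diagonal probabilities are unchanged; no collapse to a single eigenstate has occurred.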
The form of decoherence known as environment-induced superselection proposes that when a quantum system interacts with its environment, the superpositions apparently reduce to mixtures of classical alternatives. The combined wave function of the system and environment continues to obey the Schrödinger equation throughout this apparent collapse. More importantly, this is not enough to explain actual wave function collapse, as decoherence does not reduce the state to a single eigenstate.
== History ==
The concept of wavefunction collapse was introduced by Werner Heisenberg in his 1927 paper on the uncertainty principle, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", and incorporated into the mathematical formulation of quantum mechanics by John von Neumann, in his 1932 treatise Mathematische Grundlagen der Quantenmechanik. Heisenberg did not try to specify exactly what the collapse of the wavefunction meant. However, he emphasized that it should not be understood as a physical process. Niels Bohr never mentions wave function collapse in his published work, but he repeatedly cautioned that we must give up a "pictorial representation". Despite the differences between Bohr and Heisenberg, their views are often grouped together as the "Copenhagen interpretation", of which wave function collapse is regarded as a key feature.
John von Neumann's influential 1932 work Mathematical Foundations of Quantum Mechanics took a more formal approach, developing an "ideal" measurement scheme that postulated two processes of wave function change:
The probabilistic, non-unitary, non-local, discontinuous change brought about by observation and measurement (state reduction or collapse).
The deterministic, unitary, continuous time evolution of an isolated system that obeys the Schrödinger equation.
In 1957 Hugh Everett III proposed a model of quantum mechanics that dropped von Neumann's first postulate. Everett observed that the measurement apparatus was also a quantum system and that its quantum interaction with the system under observation should determine the results. He proposed that the discontinuous change is instead a splitting of a wave function representing the universe. While Everett's approach rekindled interest in foundational quantum mechanics, it left core issues unresolved. Two key issues relate to the origin of the observed classical results: what causes quantum systems to appear classical, and what causes them to resolve with the observed probabilities of the Born rule.
Beginning in 1970 H. Dieter Zeh sought a detailed quantum decoherence model for the discontinuous change without postulating collapse. Further work by Wojciech H. Zurek in 1980 led eventually to a large number of papers on many aspects of the concept. Decoherence assumes that every quantum system interacts quantum mechanically with its environment and that such interaction is not separable from the system, a concept called an "open system". Decoherence has been shown to work very quickly and within a minimal environment, but as yet it has not succeeded in providing a detailed model replacing the collapse postulate of orthodox quantum mechanics.
By explicitly dealing with the interaction of object and measuring instrument, von Neumann described a quantum mechanical measurement scheme consistent with wave function collapse. However, he did not prove the necessity of such a collapse. Von Neumann's projection postulate was conceived based on experimental evidence available during the 1930s, in particular Compton scattering. Later work refined the notion of measurements into the more easily discussed first kind, which give the same value when immediately repeated, and the second kind, which may give different values when repeated.
== External links ==
Quotations related to Wave function collapse at Wikiquote | Wikipedia/Collapse_of_the_wavefunction |
Energy (from Ancient Greek ἐνέργεια (enérgeia) 'activity') is the quantitative property that is transferred to a body or to a physical system, recognizable in the performance of work and in the form of heat and light. Energy is a conserved quantity—the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The unit of measurement for energy in the International System of Units (SI) is the joule (J).
Forms of energy include the kinetic energy of a moving object, the potential energy stored by an object (for instance due to its position in a field), the elastic energy stored in a solid object, chemical energy associated with chemical reactions, the radiant energy carried by electromagnetic radiation, the internal energy contained within a thermodynamic system, and rest energy associated with an object's rest mass. These are not mutually exclusive.
All living organisms constantly take in and release energy. The Earth's climate and ecosystems processes are driven primarily by radiant energy from the sun. The energy industry provides the energy required for human civilization to function, which it obtains from energy resources such as fossil fuels, nuclear fuel, and renewable energy.
== Forms ==
The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the object's components – while potential energy reflects the potential of an object to have motion, generally being based upon the object's position within a field or what is stored within the field itself.
While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as its own form. For example, the sum of translational and rotational kinetic and potential energy within a system is referred to as mechanical energy, whereas nuclear energy refers to the combined potentials within an atomic nucleus from either the nuclear force or the weak force, among other examples.
== History ==
The word energy derives from the Ancient Greek: ἐνέργεια, romanized: energeia, lit. 'activity, operation', which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure.
In the late 17th century, Gottfried Leibniz proposed the idea of the Latin vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the motions of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. Writing in the early 18th century, Émilie du Châtelet proposed the concept of conservation of energy in the marginalia of her French-language translation of Newton's Principia Mathematica, which represented the first formulation of a conserved measurable quantity that was distinct from momentum, and which would later be called "energy".
In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat.
These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.
== Units of measure ==
In the International System of Units (SI), the unit of energy is the joule. It is a derived unit that is equal to the energy expended, or work done, in applying a force of one newton through a distance of one metre. However, energy can also be expressed in many other units not part of the SI, such as ergs, calories, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units.
The SI unit of power, defined as energy per unit of time, is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce.
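As a rough illustration of the conversion factors named above, here is a minimal Python sketch. The function name `to_joules` and the unit set are illustrative choices; the factors are the standard definitional values (watt-hour, erg, thermochemical kilocalorie, electronvolt):

```python
# Sketch: converting among common energy units. The conversion factors
# are the standard definitional values; treat this as illustrative only.
JOULE_PER_WATT_HOUR = 3600.0        # 1 W*h = 3600 J (one watt for one hour)
JOULE_PER_ERG = 1e-7                # CGS erg
JOULE_PER_KCAL = 4184.0             # thermochemical kilocalorie
JOULE_PER_EV = 1.602176634e-19      # electronvolt

def to_joules(value, unit):
    """Convert an energy value in the given unit to joules."""
    factors = {
        "J": 1.0,
        "Wh": JOULE_PER_WATT_HOUR,
        "erg": JOULE_PER_ERG,
        "kcal": JOULE_PER_KCAL,
        "eV": JOULE_PER_EV,
    }
    return value * factors[unit]

print(to_joules(1, "Wh"))   # 3600.0
```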
In 1843, English physicist James Prescott Joule, namesake of the unit of measure, discovered that the gravitational potential energy lost by a descending weight attached via a string was equal to the internal energy gained by the water through friction with the paddle.
== Scientific use ==
=== Classical mechanics ===
In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept.
Work, a function of energy, is force times distance.
{\displaystyle W=\int _{C}\mathbf {F} \cdot \mathrm {d} \mathbf {s} }
This says that the work (W) is equal to the line integral of the force F along a path C; for details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball.
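The line integral above can be approximated numerically. The following sketch (function names and the example force field are illustrative, not from the article) sums F · ds over small segments of a parametrized path:

```python
# Sketch: numerically approximating W = integral over C of F . ds.
# force(point) returns a force vector; path(t) returns a point for t in [0, 1].
def work_along_path(force, path, n=10_000):
    """Approximate the line integral of force along the parametrized path."""
    total = 0.0
    prev = path(0.0)
    for i in range(1, n + 1):
        cur = path(i / n)
        mid = [(p + c) / 2 for p, c in zip(prev, cur)]   # midpoint rule
        ds = [c - p for p, c in zip(prev, cur)]          # segment displacement
        f = force(mid)
        total += sum(fi * di for fi, di in zip(f, ds))   # F . ds
        prev = cur
    return total

# Constant 2 N force along x over a straight 3 m path along x: W = F*d = 6 J.
F = lambda p: (2.0, 0.0)
C = lambda t: (3.0 * t, 0.0)
print(work_along_path(F, C))  # ≈ 6.0
```

For a constant force along a straight path the sum reproduces the familiar W = F·d exactly, up to floating-point rounding.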
The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have direct analogs in nonrelativistic quantum mechanics.
Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).
Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law.
=== Chemistry ===
In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular, or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is usually accompanied by a decrease, and sometimes an increase, of the total energy of the substances involved. Some energy may be transferred between the surroundings and the reactants in the form of heat or light; thus the products of a reaction have sometimes more but usually less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the less common case of endothermic reactions the situation is the reverse.
Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann's population factor e−E/kT; that is, the probability of a molecule to have energy greater than or equal to E at a given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy.
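The Boltzmann population factor described above can be sketched directly. The activation-energy value here is an assumed, typical-order number for illustration, not one taken from the article:

```python
import math

# Sketch of the Boltzmann population factor e^(-E/kT) from the text.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_factor(E, T):
    """Probability-like factor e^(-E/kT) for energy E (J) at temperature T (K)."""
    return math.exp(-E / (K_B * T))

# An assumed activation energy of ~0.5 eV; doubling T raises the factor
# dramatically, which is the Arrhenius-type temperature dependence.
E_a = 8e-20
print(boltzmann_factor(E_a, 300))
print(boltzmann_factor(E_a, 600))
```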
=== Biology ===
In biology, energy is an attribute of all biological systems, from the biosphere to the smallest living organism. Within an organism it is responsible for the growth and development of a biological cell or organelle. Energy used in respiration is stored in substances such as carbohydrates (including sugars), lipids, and proteins, which are stored by cells. In human terms, the human equivalent (H-e) (human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, using as a standard an average human energy expenditure of 6,900 kJ per day and a basal metabolic rate of 80 watts.
For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80), i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300 watts; for an activity kept up all day, 150 watts is about the maximum. The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy.
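The H-e arithmetic above is just a ratio against the 80 W basal rate, which a short sketch makes explicit (the function name is illustrative):

```python
# Sketch: the human equivalent (H-e) figure from the text, computed as a
# ratio against the stated 80 W average basal metabolic rate.
BASAL_RATE_W = 80.0

def human_equivalent(power_watts):
    """Express a power in multiples of the average human basal rate."""
    return power_watts / BASAL_RATE_W

print(human_equivalent(100))  # 1.25, the light-bulb example from the text
print(human_equivalent(746))  # one official horsepower, ≈ 9.3 H-e
```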
Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, proteins and oxygen. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark in a forest fire, or it may be made available more slowly for animal or human metabolism when organic molecules are ingested and catabolism is triggered by enzyme action.
All living creatures rely on an external source of energy to be able to grow and reproduce – radiant energy from the Sun in the case of green plants and chemical energy (in some form) in the case of animals. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as food molecules, mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidized to carbon dioxide and water in the mitochondria
C6H12O6 + 6 O2 ⟶ 6 CO2 + 6 H2O
C57H110O6 + 81½ O2 ⟶ 57 CO2 + 55 H2O
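Both respiration equations above can be checked for atom balance mechanically. The helper below (an illustrative sketch, not from the article) sums element counts on each side:

```python
# Sketch: checking that the two respiration equations above are
# atom-balanced, using simple element-count dictionaries.
def combine(*terms):
    """Sum (coefficient, formula-dict) terms into total atom counts."""
    total = {}
    for coeff, atoms in terms:
        for element, n in atoms.items():
            total[element] = total.get(element, 0) + coeff * n
    return total

glucose = {"C": 6, "H": 12, "O": 6}
stearin = {"C": 57, "H": 110, "O": 6}
o2, co2, h2o = {"O": 2}, {"C": 1, "O": 2}, {"H": 2, "O": 1}

# C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
assert combine((1, glucose), (6, o2)) == combine((6, co2), (6, h2o))
# C57H110O6 + 81.5 O2 -> 57 CO2 + 55 H2O
assert combine((1, stearin), (81.5, o2)) == combine((57, co2), (55, h2o))
print("both equations balance")
```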
and some of the energy is used to convert ADP into ATP.
The rest of the chemical energy of the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:
gain in kinetic energy of a sprinter during a 100 m race: 4 kJ
gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
daily food intake of a normal adult: 6–8 MJ
It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy); most machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism's tissues to be highly ordered with regard to the molecules they are built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings"). Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology. As an example, to take just the first step in the food chain: of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat.
=== Earth sciences ===
In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations in our atmosphere brought about by solar energy.
Sunlight is the main input to Earth's energy budget, which accounts for its temperature and climate stability. Sunlight may be stored as gravitational potential energy after it strikes the Earth, for example when water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives most weather phenomena, save a few exceptions, such as those generated by volcanic events. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, suddenly give up some of their thermal energy to power a few days of violent air movement.
In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may later be transformed into active kinetic energy during landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars (which created these atoms).
=== Cosmology ===
In cosmology and astronomy the phenomena of stars, nova, supernova, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen).
The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight.
=== Quantum mechanics ===
In quantum mechanics, energy is defined in terms of the energy operator
(Hamiltonian) as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation:
{\displaystyle E=h\nu }
where h is the Planck constant and ν is the frequency. In the case of an electromagnetic wave these energy states are called quanta of light or photons.
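The Planck relation is a one-line computation. In this sketch the chosen frequency (roughly that of green light) is an illustrative value, not one from the article:

```python
# Sketch: photon energy from the Planck relation E = h*nu.
H_PLANCK = 6.62607015e-34  # Planck constant, J*s (exact SI value)

def photon_energy(frequency_hz):
    """Energy in joules of one quantum of the given frequency."""
    return H_PLANCK * frequency_hz

# An assumed green-light frequency of ~5.6e14 Hz gives ~3.7e-19 J per photon.
print(photon_energy(5.6e14))
```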
=== Relativity ===
When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body:
{\displaystyle E_{0}=m_{0}c^{2},}
where
m0 is the rest mass of the body,
c is the speed of light in vacuum,
E0 is the rest energy.
For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons.
In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.
Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws.
In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of spacetime (= boosts).
== Transformation ==
Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery (from chemical energy to electric energy), a dam (from gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator), and a heat engine (from heat to work).
Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. The Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that itself (since it still contains the same total energy even in different forms) but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy.
There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.
Energy transformations in the universe over time are characterized by various kinds of potential energy, that has been available since the Big Bang, being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae to "store" energy in the creation of heavy isotopes (such as uranium and thorium), and nuclear decay, a process in which energy is released that was originally stored in these heavy elements, before they were incorporated into the Solar System and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic and thermal energy in a very short time.
Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at its maximum. At its lowest point the kinetic energy is at its maximum and is equal to the decrease in potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever.
Energy is also constantly transferred from potential energy (Ep) to kinetic energy (Ek) and back again. This is referred to as conservation of energy. In this isolated system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:
{\displaystyle E_{\text{p,initial}}+E_{\text{k,initial}}=E_{\text{p,final}}+E_{\text{k,final}}}
The equation can then be simplified further since {\displaystyle E_{p}=mgh} (mass times acceleration due to gravity times the height) and {\textstyle E_{k}={\frac {1}{2}}mv^{2}} (half mass times velocity squared). Then the total amount of energy can be found by adding {\displaystyle E_{p}+E_{k}=E_{\text{total}}}.
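The pendulum bookkeeping above can be sketched numerically. Here the mass and height are illustrative values; the sketch assumes the frictionless case described in the text:

```python
# Sketch: E_p = m*g*h and E_k = (1/2)*m*v^2 trade off while E_total is fixed
# (frictionless pendulum, as assumed in the text).
G = 9.81  # gravitational acceleration, m/s^2

def potential(m, h):
    return m * G * h

def kinetic(m, v):
    return 0.5 * m * v**2

m, h_top = 2.0, 1.0                  # assumed 2 kg bob raised 1 m
e_total = potential(m, h_top)        # all potential at the highest point
v_bottom = (2 * G * h_top) ** 0.5    # speed at the lowest point
print(kinetic(m, v_bottom))          # ≈ 19.62 J, equal to e_total
```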
=== Conservation of energy and mass in transformation ===
Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass–energy equivalence. The formula E = mc², derived by Albert Einstein (1905), quantifies the relationship between relativistic mass and energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J. J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass–energy equivalence#History for further information).
Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since {\displaystyle c^{2}} is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~9×10¹⁶ joules, equivalent to 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons.
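The 1 kg figure quoted above follows directly from E0 = m0c². In this sketch the TNT equivalence uses the conventional 4.184×10⁹ J per ton of TNT:

```python
# Sketch: rest energy of 1 kg via E0 = m0*c^2, and the megatons-of-TNT
# comparison from the text (1 ton TNT = 4.184e9 J by convention).
C_LIGHT = 299_792_458.0  # speed of light, m/s (exact SI value)
TON_TNT_J = 4.184e9

def rest_energy(mass_kg):
    return mass_kg * C_LIGHT**2

e = rest_energy(1.0)
print(e)                       # ≈ 8.99e16 J
print(e / (1e6 * TON_TNT_J))   # ≈ 21 megatons of TNT
```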
Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics. Often, however, the complete conversion of matter (such as atoms) to non-matter (such as photons) is forbidden by conservation laws.
=== Reversible and non-reversible transformations ===
Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above.
In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as thermal energy and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states, in the universe (such as an expansion of matter, or a randomization in a crystal).
As the universe evolves with time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or as other kinds of increases in disorder). This has led to the hypothesis of the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), continues to decrease.
== Conservation of energy ==
The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out as work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.
While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations. The total energy of a system can be calculated by adding up all forms of energy in the system.
Richard Feynman said during a 1961 lecture:
There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law – it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same.
Most kinds of energy (with gravitational energy being a notable exception) are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.
This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonically conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle – it is impossible to define the exact amount of energy during any definite time interval (though this is practically significant only for very short time intervals). The uncertainty principle should not be confused with energy conservation – rather it provides mathematical limits to which energy can in principle be defined and measured.
Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass, whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it.
In quantum mechanics, energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by

ΔE Δt ≥ ℏ/2 {\displaystyle \Delta E\Delta t\geq {\frac {\hbar }{2}}}
which is similar in form to the Heisenberg Uncertainty Principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).
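As a hedged numeric illustration of this energy–time relation: an excited atomic state with a radiative lifetime of about 10⁻⁸ s (an assumed, representative value) has a minimum energy width of roughly ℏ/(2Δt), i.e. a natural linewidth of order 10⁻⁸ eV:

```python
hbar = 1.0546e-34   # reduced Planck constant, J*s
eV = 1.602e-19      # joules per electronvolt

dt = 1e-8                 # assumed excited-state lifetime, s
dE_min = hbar / (2 * dt)  # minimum energy uncertainty, J

print(f"ΔE ≥ {dE_min:.2e} J = {dE_min / eV:.2e} eV")  # ~5.3e-27 J ≈ 3.3e-8 eV
```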
In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum. The exchange of virtual particles with real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for the Van der Waals force and some other observable phenomena.
== Energy transfer ==
=== Closed systems ===
Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat. Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy, tidal interactions, and the conductive transfer of thermal energy.
Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law:

E = W + Q {\displaystyle E=W+Q}

where E {\displaystyle E} is the amount of energy transferred, W {\displaystyle W} represents the work done on or by the system, and Q {\displaystyle Q} represents the heat flow into or out of the system. As a simplification, the heat term, Q {\displaystyle Q}, can sometimes be ignored, especially for fast processes involving gases, which are poor conductors of heat, or when the thermal efficiency of the transfer is high. For such adiabatic processes,

E = W {\displaystyle E=W}

This simplified equation is the one used to define the joule, for example.
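A sketch of the bookkeeping, under the sign convention (an assumption; textbooks differ) that work done on the system and heat flowing into it are both positive: a gas that absorbs 150 J of heat while doing 100 J of work on its surroundings gains 50 J of internal energy.

```python
Q = 150.0   # heat flowing INTO the system, J
W = -100.0  # work done ON the system, J (negative: the gas does work on surroundings)

delta_E = W + Q   # first law for a closed system
print(delta_E)    # 50.0

# Adiabatic special case: with Q ignored, delta_E = W
```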
=== Open systems ===
Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (this process is illustrated by injection of an air-fuel mixture into a car engine, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by E_matter {\displaystyle E_{\text{matter}}}, one may write

E = W + Q + E_matter {\displaystyle E=W+Q+E_{\text{matter}}}
== Thermodynamics ==
=== Internal energy ===
Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in the form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.
=== First law of thermodynamics ===
The first law of thermodynamics asserts that the total energy of a system and its surroundings (but not necessarily thermodynamic free energy) is always conserved and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as
dE = T dS − P dV, {\displaystyle \mathrm {d} E=T\mathrm {d} S-P\mathrm {d} V\,,}
where the first term on the right is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and its change dS is positive when heat is added to the system), and the last term on the right hand side is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system).
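For instance, integrating the −P dV term for a slow isothermal compression of one mole of ideal gas (an illustrative calculation with assumed values, not from the text): at constant temperature the internal energy of an ideal gas is unchanged, so the work done on the gas, nRT ln(V₁/V₂), leaves as heat through the T dS term.

```python
import math

R = 8.314          # gas constant, J/(mol*K)
n, T = 1.0, 300.0  # one mole at 300 K (assumed values)

# Compress to half the volume: V1/V2 = 2
W_on = n * R * T * math.log(2)   # work done on the gas = -∫P dV along the isotherm
print(f"W = {W_on:.0f} J")       # ~1729 J; an equal amount of heat flows out, so dE = 0
```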
This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, as well as effects such as advection of any form of energy other than heat and PV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by
dE = δQ + δW {\displaystyle \mathrm {d} E=\delta Q+\delta W}
where δQ {\displaystyle \delta Q} is the heat supplied to the system and δW {\displaystyle \delta W} is the work applied to the system.
=== Equipartition of energy ===
The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over a whole cycle, or over many cycles, average energy is equally split between kinetic and potential. This is an example of the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom, on average.
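The half-and-half split for the oscillator can be checked numerically; a minimal sketch with assumed mass, angular frequency, and amplitude:

```python
import math

m, omega, A = 1.0, 2.0, 0.5   # assumed mass (kg), angular frequency (rad/s), amplitude (m)
N = 100000

ke_sum = pe_sum = 0.0
for i in range(N):
    t = i * (2 * math.pi / omega) / N      # sample one full period uniformly
    x = A * math.sin(omega * t)            # displacement
    v = A * omega * math.cos(omega * t)    # velocity
    ke_sum += 0.5 * m * v**2               # kinetic energy
    pe_sum += 0.5 * m * omega**2 * x**2    # potential energy (spring constant k = m*omega^2)

ke_avg, pe_avg = ke_sum / N, pe_sum / N
print(ke_avg, pe_avg)   # both ~0.25 = E/2, where E = 0.5*m*omega^2*A^2 = 0.5
```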
This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is part of the second law of thermodynamics. The second law of thermodynamics is simple only for systems which are near or in a physical equilibrium state. For non-equilibrium systems, the laws governing the systems' behavior are still debatable. One of the guiding principles for these systems is the principle of maximum entropy production. It states that nonequilibrium systems behave in such a way as to maximize their entropy production.
== See also ==
== Notes ==
== References ==
== Further reading ==
=== Journals ===
The Journal of Energy History / Revue d'histoire de l'énergie (JEHRHE), 2018–
== External links ==
Differences between Heat and Thermal energy (Archived 2016-08-27 at the Wayback Machine) – BioCab
In physics, a hidden-variable theory is a deterministic model which seeks to explain the probabilistic nature of quantum mechanics by introducing additional, possibly inaccessible, variables.
The mathematical formulation of quantum mechanics assumes that the state of a system prior to measurement is indeterminate; quantitative bounds on this indeterminacy are expressed by the Heisenberg uncertainty principle. Most hidden-variable theories are attempts to avoid this indeterminacy, but possibly at the expense of requiring that nonlocal interactions be allowed. One notable hidden-variable theory is the de Broglie–Bohm theory.
In their 1935 EPR paper, Albert Einstein, Boris Podolsky, and Nathan Rosen argued that quantum entanglement might imply that quantum mechanics is an incomplete description of reality. John Stewart Bell in 1964, in his eponymous theorem proved that correlations between particles under any local hidden variable theory must obey certain constraints. Subsequently, Bell test experiments have demonstrated broad violation of these constraints, ruling out such theories. Bell's theorem, however, does not rule out the possibility of nonlocal theories or superdeterminism; these therefore cannot be falsified by Bell tests.
== Motivation ==
Macroscopic physics requires classical mechanics, which allows accurate predictions of mechanical motion with reproducible, high precision. Quantum phenomena require quantum mechanics, which allows accurate predictions of statistical averages only. If quantum states had hidden variables awaiting ingenious new measurement technologies, then the latter (statistical results) might be convertible to a form of the former (classical-mechanical motion).
This classical mechanics description would eliminate unsettling characteristics of quantum theory like the uncertainty principle. More fundamentally, however, a successful model of quantum phenomena with hidden variables implies quantum entities with intrinsic values independent of measurements. Existing quantum mechanics asserts that state properties can only be known after a measurement. As N. David Mermin puts it:

It is a fundamental quantum doctrine that a measurement does not, in general, reveal a pre-existing value of the measured property. On the contrary, the outcome of a measurement is brought into being by the act of measurement itself...
In other words, whereas a hidden-variable theory would imply intrinsic particle properties, in quantum mechanics an electron has no definite position and velocity to even be revealed.
== History ==
=== "God does not play dice" ===
In June 1926, Max Born published a paper, in which he was the first to clearly enunciate the probabilistic interpretation of the quantum wave function, which had been introduced by Erwin Schrödinger earlier in the year. Born concluded the paper as follows:

Here the whole problem of determinism comes up. From the standpoint of our quantum mechanics there is no quantity which in any individual case causally fixes the consequence of the collision; but also experimentally we have so far no reason to believe that there are some inner properties of the atom which conditions a definite outcome for the collision. Ought we to hope later to discover such properties ... and determine them in individual cases? Or ought we to believe that the agreement of theory and experiment—as to the impossibility of prescribing conditions for a causal evolution—is a pre-established harmony founded on the nonexistence of such conditions? I myself am inclined to give up determinism in the world of atoms. But that is a philosophical question for which physical arguments alone are not decisive.

Born's interpretation of the wave function was criticized by Schrödinger, who had previously attempted to interpret it in real physical terms, but Albert Einstein's response became one of the earliest and most famous assertions that quantum mechanics is incomplete:

Quantum mechanics is very worthy of respect. But an inner voice tells me this is not the genuine article after all. The theory delivers much but it hardly brings us closer to the Old One's secret. In any event, I am convinced that He is not playing dice.

Niels Bohr reportedly replied to Einstein's later expression of this sentiment by advising him to "stop telling God what to do."
=== Early attempts at hidden-variable theories ===
Shortly after making his famous "God does not play dice" comment, Einstein attempted to formulate a deterministic counter proposal to quantum mechanics, presenting a paper at a meeting of the Academy of Sciences in Berlin, on 5 May 1927, titled "Bestimmt Schrödinger's Wellenmechanik die Bewegung eines Systems vollständig oder nur im Sinne der Statistik?" ("Does Schrödinger's wave mechanics determine the motion of a system completely or only in the statistical sense?"). However, as the paper was being prepared for publication in the academy's journal, Einstein decided to withdraw it, possibly because he discovered that, contrary to his intention, his use of Schrödinger's field to guide localized particles allowed just the kind of non-local influences he intended to avoid.
At the Fifth Solvay Congress, held in Belgium in October 1927 and attended by all the major theoretical physicists of the era, Louis de Broglie presented his own version of a deterministic hidden-variable theory, apparently unaware of Einstein's aborted attempt earlier in the year. In his theory, every particle had an associated, hidden "pilot wave" which served to guide its trajectory through space. The theory was subject to criticism at the Congress, particularly by Wolfgang Pauli, which de Broglie did not adequately answer; de Broglie abandoned the theory shortly thereafter.
=== Declaration of completeness of quantum mechanics, and the Bohr–Einstein debates ===
Also at the Fifth Solvay Congress, Max Born and Werner Heisenberg made a presentation summarizing the recent tremendous theoretical development of quantum mechanics. At the conclusion of the presentation, they declared:

[W]hile we consider ... a quantum mechanical treatment of the electromagnetic field ... as not yet finished, we consider quantum mechanics to be a closed theory, whose fundamental physical and mathematical assumptions are no longer susceptible of any modification...
On the question of the 'validity of the law of causality' we have this opinion: as long as one takes into account only experiments that lie in the domain of our currently acquired physical and quantum mechanical experience, the assumption of indeterminism in principle, here taken as fundamental, agrees with experience.

Although there is no record of Einstein responding to Born and Heisenberg during the technical sessions of the Fifth Solvay Congress, he did challenge the completeness of quantum mechanics at various times. In his tribute article for Born's retirement he discussed the quantum representation of a macroscopic ball bouncing elastically between rigid barriers. He argued that such a quantum representation does not represent a specific ball, but a "time ensemble of systems". As such the representation is correct, but incomplete because it does not represent the real individual macroscopic case. Einstein considered quantum mechanics incomplete "because the state function, in general, does not even describe the individual event/system".
=== Von Neumann's proof ===
John von Neumann in his 1932 book Mathematical Foundations of Quantum Mechanics had presented a proof that there could be no "hidden parameters" in quantum mechanics. The validity of von Neumann's proof was questioned by Grete Hermann in 1935, who found a flaw in the proof. The critical issue concerned averages over ensembles. Von Neumann assumed that a relation between the expected values of different observable quantities holds for each possible value of the "hidden parameters", rather than only for a statistical average over them. However Hermann's work went mostly unnoticed until its rediscovery by John Stewart Bell more than 30 years later.
The validity and definitiveness of von Neumann's proof were also questioned by Hans Reichenbach, and possibly in conversation though not in print by Albert Einstein. Reportedly, in a conversation circa 1938 with his assistants Peter Bergmann and Valentine Bargmann, Einstein pulled von Neumann's book off his shelf, pointed to the same assumption critiqued by Hermann and Bell, and asked why one should believe in it. Simon Kochen and Ernst Specker rejected von Neumann's key assumption as early as 1961, but did not publish a criticism of it until 1967.
=== EPR paradox ===
Einstein argued that quantum mechanics could not be a complete theory of physical reality. He wrote,
Consider a mechanical system consisting of two partial systems A and B which interact with each other only during a limited time. Let the ψ function [i.e., wavefunction] before their interaction be given. Then the Schrödinger equation will furnish the ψ function after the interaction has taken place. Let us now determine the physical state of the partial system A as completely as possible by measurements. Then quantum mechanics allows us to determine the ψ function of the partial system B from the measurements made, and from the ψ function of the total system. This determination, however, gives a result which depends upon which of the physical quantities (observables) of A have been measured (for instance, coordinates or momenta). Since there can be only one physical state of B after the interaction which cannot reasonably be considered to depend on the particular measurement we perform on the system A separated from B it may be concluded that the ψ function is not unambiguously coordinated to the physical state. This coordination of several ψ functions to the same physical state of system B shows again that the ψ function cannot be interpreted as a (complete) description of a physical state of a single system.
Together with Boris Podolsky and Nathan Rosen, Einstein published a paper that gave a related but distinct argument against the completeness of quantum mechanics. They proposed a thought experiment involving a pair of particles prepared in what would later become known as an entangled state. Einstein, Podolsky, and Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is impossible according to the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", positing that: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." From this, they inferred that the second particle must have a definite value of both position and of momentum prior to either quantity being measured. But quantum mechanics considers these two observables incompatible and thus does not associate simultaneous values for both to any system. Einstein, Podolsky, and Rosen therefore concluded that quantum theory does not provide a complete description of reality.
Bohr answered the Einstein–Podolsky–Rosen challenge as follows:
[The argument of] Einstein, Podolsky and Rosen contains an ambiguity as regards the meaning of the expression "without in any way disturbing a system." ... [E]ven at this stage [i.e., the measurement of, for example, a particle that is part of an entangled pair], there is essentially the question of an influence on the very conditions which define the possible types of predictions regarding the future behavior of the system. Since these conditions constitute an inherent element of the description of any phenomenon to which the term "physical reality" can be properly attached, we see that the argumentation of the mentioned authors does not justify their conclusion that quantum-mechanical description is essentially incomplete.
Bohr is here choosing to define a "physical reality" as limited to a phenomenon that is immediately observable by an arbitrarily chosen and explicitly specified technique, using his own special definition of the term 'phenomenon'. He wrote in 1948:
As a more appropriate way of expression, one may strongly advocate limitation of the use of the word phenomenon to refer exclusively to observations obtained under specified circumstances, including an account of the whole experiment.
This was, of course, in conflict with the EPR criterion of reality.
=== Bell's theorem ===
In 1964, John Stewart Bell showed through his famous theorem that if local hidden variables exist, certain experiments could be performed involving quantum entanglement where the result would satisfy a Bell inequality. If, on the other hand, statistical correlations resulting from quantum entanglement could not be explained by local hidden variables, the Bell inequality would be violated. Another no-go theorem concerning hidden-variable theories is the Kochen–Specker theorem.
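The quantitative gap can be sketched with the CHSH form of the inequality: local hidden variables bound |S| ≤ 2, while the quantum singlet-state correlation E(a, b) = −cos(a − b) reaches |S| = 2√2 at the standard angle settings used below:

```python
import math

def E(a, b):
    # Quantum correlation of spin measurements at angles a, b on a singlet pair
    return -math.cos(a - b)

# Standard CHSH angle settings (radians)
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # 2.828... = 2*sqrt(2) > 2, violating the local-hidden-variable bound
```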
Physicists such as Alain Aspect and Paul Kwiat have performed experiments that have found violations of these inequalities up to 242 standard deviations. This rules out local hidden-variable theories, but does not rule out non-local ones. Theoretically, there could be experimental problems that affect the validity of the experimental findings.
Gerard 't Hooft has disputed the validity of Bell's theorem on the basis of the superdeterminism loophole and proposed some ideas to construct local deterministic models.
== Bohm's hidden-variable theory ==
In 1952, David Bohm proposed a hidden variable theory. Bohm unknowingly rediscovered (and extended) the pilot wave theory that Louis de Broglie had proposed in 1927 (and abandoned) – hence this theory is commonly called "de Broglie–Bohm theory". Assuming the validity of Bell's theorem, any deterministic hidden-variable theory that is consistent with quantum mechanics would have to be non-local, maintaining the existence of instantaneous or faster-than-light relations (correlations) between physically separated entities.
Bohm posited both the quantum particle, e.g. an electron, and a hidden 'guiding wave' that governs its motion. Thus, in this theory electrons are quite clearly particles. When a double-slit experiment is performed, the electron goes through either one of the slits. Also, the slit passed through is not random but is governed by the (hidden) pilot wave, resulting in the wave pattern that is observed.
In Bohm's interpretation, the (non-local) quantum potential constitutes an implicate (hidden) order which organizes a particle, and which may itself be the result of yet a further implicate order: a superimplicate order which organizes a field. Nowadays Bohm's theory is considered to be one of many interpretations of quantum mechanics. Some consider it the simplest theory to explain quantum phenomena. Nevertheless, it is a hidden-variable theory, and necessarily so. The major reference for Bohm's theory today is his book with Basil Hiley, published posthumously.
A possible weakness of Bohm's theory is that some (including Einstein, Pauli, and Heisenberg) feel that it looks contrived. (Indeed, Bohm thought this of his original formulation of the theory.) Bohm said he considered his theory to be unacceptable as a physical theory due to the guiding wave's existence in an abstract multi-dimensional configuration space, rather than three-dimensional space.
== Recent developments ==
In August 2011, Roger Colbeck and Renato Renner published a proof that any extension of quantum mechanical theory, whether using hidden variables or otherwise, cannot provide a more accurate prediction of outcomes, assuming that observers can freely choose the measurement settings. Colbeck and Renner write: "In the present work, we have ... excluded the possibility that any extension of quantum theory (not necessarily in the form of local hidden variables) can help predict the outcomes of any measurement on any quantum state. In this sense, we show the following: under the assumption that measurement settings can be chosen freely, quantum theory really is complete".
In January 2013, Giancarlo Ghirardi and Raffaele Romano described a model which, "under a different free choice assumption [...] violates [the statement by Colbeck and Renner] for almost all states of a bipartite two-level system, in a possibly experimentally testable way".
== See also ==
== References ==
== Bibliography ==
Peres, Asher; Zurek, Wojciech (1982). "Is quantum theory universally valid?". American Journal of Physics. 50 (9): 807–810. Bibcode:1982AmJPh..50..807P. doi:10.1119/1.13086.
Jammer, Max (1985). "The EPR Problem in Its Historical Development". In Lahti, P.; Mittelstaedt, P. (eds.). Symposium on the Foundations of Modern Physics: 50 years of the Einstein–Podolsky–Rosen Gedankenexperiment. Singapore: World Scientific. pp. 129–149.
Fine, Arthur (1986). The Shaky Game: Einstein, Realism and the Quantum Theory. Chicago: University of Chicago Press.
In atomic physics, the Bohr model or Rutherford–Bohr model was a model of the atom that incorporated some early quantum concepts. Developed from 1911 to 1918 by Niels Bohr and building on Ernest Rutherford's nuclear model, it supplanted the plum pudding model of J. J. Thomson only to be replaced by the quantum atomic model in the 1920s. It consists of a small, dense nucleus surrounded by orbiting electrons. It is analogous to the structure of the Solar System, but with attraction provided by electrostatic force rather than gravity, and with the electron energies quantized (assuming only discrete values).
In the history of atomic physics, it followed, and ultimately replaced, several earlier models, including Joseph Larmor's Solar System model (1897), Jean Perrin's model (1901), the cubical model (1902), Hantaro Nagaoka's Saturnian model (1904), the plum pudding model (1904), Arthur Haas's quantum model (1910), the Rutherford model (1911), and John William Nicholson's nuclear quantum model (1912). The improvement over the 1911 Rutherford model mainly concerned the new quantum mechanical interpretation introduced by Haas and Nicholson, while forsaking any attempt to explain radiation according to classical physics.
The model's key success lies in explaining the Rydberg formula for hydrogen's spectral emission lines. While the Rydberg formula had been known experimentally, it did not gain a theoretical basis until the Bohr model was introduced. Not only did the Bohr model explain the reasons for the structure of the Rydberg formula, it also provided a justification for the fundamental physical constants that make up the formula's empirical results.
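A minimal sketch of that explanation: Bohr's energy levels Eₙ = −13.6 eV/n² reproduce the Balmer series (transitions down to n = 2), e.g. the Hα line near 656 nm.

```python
RY = 13.6057   # hydrogen ground-state binding energy, eV (Rydberg energy)
HC = 1239.842  # h*c in eV*nm

def balmer_wavelength(n):
    """Wavelength (nm) of the photon emitted in the n -> 2 transition."""
    dE = RY * (1 / 2**2 - 1 / n**2)   # energy difference between Bohr levels, eV
    return HC / dE

for n in (3, 4, 5):
    print(n, round(balmer_wavelength(n), 1))   # ~656.1, ~486.0, ~433.9 nm
```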
The Bohr model is a relatively primitive model of the hydrogen atom, compared to the valence shell model. As a theory, it can be derived as a first-order approximation of the hydrogen atom using the broader and much more accurate quantum mechanics and thus may be considered to be an obsolete scientific theory. However, because of its simplicity, and its correct results for selected systems (see below for application), the Bohr model is still commonly taught to introduce students to quantum mechanics or energy level diagrams before moving on to the more accurate, but more complex, valence shell atom. A related quantum model was proposed by Arthur Erich Haas in 1910 but was rejected until the 1911 Solvay Congress where it was thoroughly discussed. The quantum theory of the period between Planck's discovery of the quantum (1900) and the advent of a mature quantum mechanics (1925) is often referred to as the old quantum theory.
== Background ==
Until the second decade of the 20th century, atomic models were generally speculative. Even the concept of atoms, let alone atoms with internal structure, faced opposition from some scientists.
=== Planetary models ===
In the late 1800s speculations on the possible structure of the atom included planetary models with orbiting charged electrons.
These models faced a significant constraint.
In 1897, Joseph Larmor showed that an accelerating charge would radiate power according to classical electrodynamics, a result known as the Larmor formula. Since electrons forced to remain in orbit are continuously accelerating, they would be mechanically unstable. Larmor noted that the electromagnetic effects of multiple electrons, suitably arranged, would cancel each other. Thus subsequent atomic models based on classical electrodynamics needed to adopt such special multi-electron arrangements.
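The scale of the problem can be illustrated with the Larmor formula P = q²a²/(6πε₀c³): for an electron on a classical circular orbit at the Bohr radius (orbit radius and constants assumed below), the radiated power drains the ~10⁻¹⁸ J orbital energy in well under a nanosecond, so a classical planetary atom would collapse almost instantly.

```python
import math

e = 1.602e-19     # elementary charge, C
m_e = 9.109e-31   # electron mass, kg
eps0 = 8.854e-12  # vacuum permittivity, F/m
c = 2.998e8       # speed of light, m/s
r = 5.29e-11      # Bohr radius, m (assumed classical orbit)
k = 1 / (4 * math.pi * eps0)

a = k * e**2 / (m_e * r**2)                     # centripetal acceleration of the electron
P = e**2 * a**2 / (6 * math.pi * eps0 * c**3)   # Larmor radiated power

E_orbit = k * e**2 / (2 * r)                    # magnitude of the orbital energy (~13.6 eV)
print(f"P ≈ {P:.1e} W, drain time ≈ {E_orbit / P:.1e} s")  # ~4.7e-08 W, ~5e-11 s
```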
=== Thomson's atom model ===
When Bohr began his work on a new atomic theory in the summer of 1912,: 237 the atomic model proposed by J. J. Thomson, now known as the plum pudding model, was the best available.: 37 Thomson proposed a model with electrons rotating in coplanar rings within an atomic-sized, positively charged, spherical volume. Thomson showed by lengthy calculations that this model was mechanically stable, and that it was electrodynamically stable under his original assumption of thousands of electrons per atom. Moreover, he suggested that the particularly stable configurations of electrons in rings were connected to the chemical properties of the atoms. He developed a formula for the scattering of beta particles that seemed to match experimental results.: 38
However, Thomson himself later showed that the atom had a factor of a thousand fewer electrons, challenging the stability argument and forcing the poorly understood positive sphere to contain most of the atom's mass. Thomson was also unable to explain the many lines in atomic spectra.: 18
=== Rutherford nuclear model ===
In 1908, Hans Geiger and Ernest Marsden demonstrated that alpha particles occasionally scatter at large angles, a result inconsistent with Thomson's model.
In 1911 Ernest Rutherford developed a new scattering model, showing that the observed large angle scattering could be explained by a compact, highly charged mass at the center of the atom.
Rutherford scattering did not involve the electrons and thus his model of the atom was incomplete.
Bohr begins his first paper on his atomic model by describing Rutherford's atom as consisting of a small, dense, positively charged nucleus attracting negatively charged electrons.
=== Atomic spectra ===
By the early twentieth century, it was expected that the atom would account for the many atomic spectral lines. These lines were summarized in empirical formulae by Johann Balmer and Johannes Rydberg. In 1897, Lord Rayleigh showed that vibrations of electrical systems predicted spectral lines that depend on the square of the vibrational frequency, contradicting the empirical formulae which depended directly on the frequency.: 18
In 1907 Arthur W. Conway showed that, rather than the entire atom vibrating, vibrations of only one of the electrons in the system described by Thomson might be sufficient to account for spectral series.: II:106 Although Bohr's model would also rely on just the electron to explain the spectrum, he did not assume an electrodynamical model for the atom.
The other important advance in the understanding of atomic spectra was the Rydberg–Ritz combination principle which related atomic spectral line frequencies to differences between 'terms', special frequencies characteristic of each element.: 173 Bohr would recognize the terms as energy levels of the atom divided by the Planck constant, leading to the modern view that the spectral lines result from energy differences.: 847
=== Haas atomic model ===
In 1910, Arthur Erich Haas proposed a model of the hydrogen atom with an electron circulating on the surface of a sphere of positive charge. The model resembled Thomson's plum pudding model, but Haas added a radical new twist: he constrained the electron's potential energy {\displaystyle E_{\text{pot}}} on a sphere of radius a to equal the frequency, f, of the electron's orbit on the sphere times the Planck constant:: 197
{\displaystyle E_{\text{pot}}={\frac {-e^{2}}{a}}=hf}
where e represents the charge on the electron and the sphere. Haas combined this constraint with the balance-of-forces equation. The attractive force between the electron and the sphere balances the centrifugal force:
{\displaystyle {\frac {e^{2}}{a^{2}}}=ma(2\pi f)^{2}}
where m is the mass of the electron. This combination relates the radius of the sphere to the Planck constant:
{\displaystyle a={\frac {h^{2}}{4\pi ^{2}e^{2}m}}}
Haas solved for the Planck constant using the then-current value for the radius of the hydrogen atom.
Three years later, Bohr would use similar equations with a different interpretation. Bohr took the Planck constant as a given value and used the equations to predict the radius a of the electron orbiting in the ground state of the hydrogen atom. This value is now called the Bohr radius.: 197
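Haas's relation can be checked numerically. The sketch below is an illustration, not from Haas's paper; it uses SI units, so the Coulomb constant k_e appears where the Gaussian-units formula above has a bare e². Reading the equation Bohr-style, with the Planck constant as given, it recovers the Bohr radius:

```python
import math

# Standard SI constants (CODATA values)
h = 6.62607015e-34        # Planck constant, J*s
e = 1.602176634e-19       # elementary charge, C
m_e = 9.1093837015e-31    # electron mass, kg
k_e = 8.9875517923e9      # Coulomb constant, N*m^2/C^2

# a = h^2 / (4 pi^2 k_e e^2 m_e): Haas's radius with h taken
# as given and the atomic radius predicted.
a = h**2 / (4 * math.pi**2 * k_e * e**2 * m_e)
print(a)  # ~5.29e-11 m, the Bohr radius
```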
=== Influence of the Solvay Conference ===
The first Solvay Conference, in 1911, was one of the first international physics conferences. Nine Nobel or future Nobel laureates attended, including
Ernest Rutherford, Bohr's mentor.: 271
Bohr did not attend but he read the Solvay reports and discussed them with Rutherford.: 233
The subject of the conference was the theory of radiation and the energy quanta of Max Planck's oscillators.
Planck's lecture at the conference ended with comments about atoms and the discussion that followed it concerned atomic models. Hendrik Lorentz raised the question of the composition of the atom based on Haas's model, a form of Thomson's plum pudding model with a quantum modification. Lorentz explained that the size of atoms could be taken to determine the Planck constant as Haas had done or the Planck constant could be taken as determining the size of atoms.: 273 Bohr would adopt the second path.
The discussions outlined the need for the quantum theory to be included in the atom. Planck explicitly mentions the failings of classical mechanics.: 273 While Bohr had already expressed a similar opinion in his PhD thesis, at Solvay the leading scientists of the day discussed a break with classical theories.: 244 Bohr's first paper on his atomic model cites the Solvay proceedings saying: "Whatever the alteration in the laws of motion of the electrons may be, it seems necessary to introduce in the laws in question a quantity foreign to the classical electrodynamics, i.e. Planck's constant, or as it often is called the elementary quantum of action." Encouraged by the Solvay discussions, Bohr would assume the atom was stable and abandon the efforts to stabilize classical models of the atom: 199
=== Nicholson atom theory ===
In 1911 John William Nicholson published a model of the atom which would influence Bohr's model. Nicholson developed his model based on the analysis of astrophysical spectroscopy. He connected the observed spectral line frequencies with the orbits of electrons in his atoms. The connection he adopted associated the atomic electron orbital angular momentum with the Planck constant.
Whereas Planck focused on a quantum of energy, Nicholson's angular momentum quantum relates to orbital frequency.
This new concept gave the Planck constant an atomic meaning for the first time.: 169 In his 1913 paper Bohr cites Nicholson as finding quantized angular momentum important for the atom.
The other critical influence of Nicholson's work was his detailed analysis of spectra. Before Nicholson's work Bohr thought the spectral data was not useful for understanding atoms. In comparing his work to Nicholson's, Bohr came to understand the spectral data and their value. When he then learned from a friend about Balmer's compact formula for the spectral line data, Bohr quickly realized his model would match it in detail.: 178
Nicholson's model was based on classical electrodynamics along the lines of J.J. Thomson's plum pudding model, but with his negative electrons orbiting a positive nucleus rather than circulating in a sphere. To avoid immediate collapse of this system he required that electrons come in pairs so that the rotational acceleration of each electron was matched across the orbit.: 163 By 1913 Bohr had already shown, from the analysis of alpha particle energy loss, that hydrogen had only a single electron, not a matched pair.: 195 Bohr's atomic model would abandon classical electrodynamics.
Nicholson's model of radiation was quantum but was attached to the orbits of the electrons. Bohr quantization would associate it with differences in energy levels of his model of hydrogen rather than the orbital frequency.
=== Bohr's previous work ===
Bohr completed his PhD in 1911 with a thesis 'Studies on the Electron Theory of Metals', an application of the classical electron theory of Hendrik Lorentz. Bohr noted two deficits of the classical model. The first concerned the specific heat of metals, a problem James Clerk Maxwell had noted in 1875: every additional degree of freedom in a theory of metals, like subatomic electrons, caused more disagreement with experiment. The second was that the classical theory could not explain magnetism.: 194
After his PhD, Bohr worked briefly in the lab of J.J. Thomson before moving to Rutherford's lab in Manchester to study radioactivity. He arrived just after Rutherford completed his proposal of a compact nuclear core for atoms. Charles Galton Darwin, also at Manchester, had just completed an analysis of alpha particle energy loss in metals, concluding that electron collisions were the dominant cause of the loss. Bohr showed in a subsequent paper that Darwin's results would improve by accounting for electron binding energy. Importantly, this allowed Bohr to conclude that hydrogen atoms have a single electron.: 195
== Development ==
Next, Bohr was told by his friend, Hans Hansen, that the Balmer series is calculated using the Balmer formula, an empirical equation discovered by Johann Balmer in 1885 that described wavelengths of some spectral lines of hydrogen. This was further generalized by Johannes Rydberg in 1888, resulting in what is now known as the Rydberg formula.
After this, Bohr declared, "everything became clear".
In 1913 Niels Bohr put forth three postulates to provide an electron model consistent with Rutherford's nuclear model:
The electron is able to revolve in certain stable orbits around the nucleus without radiating any energy, contrary to what classical electromagnetism suggests. These stable orbits are called stationary orbits and are attained at certain discrete distances from the nucleus. The electron cannot have any other orbit in between the discrete ones.
The stationary orbits are attained at distances for which the angular momentum of the revolving electron is an integer multiple of the reduced Planck constant:
{\displaystyle m_{\mathrm {e} }vr=n\hbar }, where {\displaystyle n=1,2,3,\ldots } is called the principal quantum number, and {\displaystyle \hbar =h/2\pi }. The lowest value of {\displaystyle n} is 1; this gives the smallest possible orbital radius, known as the Bohr radius, of 0.0529 nm for hydrogen. Once an electron is in this lowest orbit, it can get no closer to the nucleus. Starting from the angular momentum quantum rule, which, as Bohr admits, was previously given by Nicholson in his 1912 paper, Bohr was able to calculate the energies of the allowed orbits of the hydrogen atom and other hydrogen-like atoms and ions. These orbits are associated with definite energies and are also called energy shells or energy levels. In these orbits, the electron's acceleration does not result in radiation and energy loss. The Bohr model of an atom was based upon Planck's quantum theory of radiation.
Electrons can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting electromagnetic radiation with a frequency
{\displaystyle \nu } determined by the energy difference of the levels according to the Planck relation:
{\displaystyle \Delta E=E_{2}-E_{1}=h\nu }
where {\displaystyle h} is the Planck constant.
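As a worked example of the Planck relation (a numeric sketch using standard constants, not part of Bohr's original presentation), the n = 3 → n = 2 jump in hydrogen emits the red Balmer line near 656 nm:

```python
# Energy levels E_n = -13.6057/n^2 eV; a jump 3 -> 2 emits a photon
# with frequency nu = dE/h (Planck relation dE = h*nu).
h = 6.62607015e-34     # Planck constant, J*s
c = 2.99792458e8       # speed of light, m/s
eV = 1.602176634e-19   # joules per electronvolt

E = lambda n: -13.6057 / n**2          # energy of level n, in eV
dE = (E(3) - E(2)) * eV                # energy released, in joules
nu = dE / h                            # photon frequency, Hz
lam = c / nu                           # wavelength, m
print(lam)  # ~6.56e-7 m: the H-alpha line
```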
Other points are:
Like Einstein's theory of the photoelectric effect, Bohr's formula assumes that during a quantum jump a discrete amount of energy is radiated. However, unlike Einstein, Bohr stuck to the classical Maxwell theory of the electromagnetic field. Quantization of the electromagnetic field was explained by the discreteness of the atomic energy levels; Bohr did not believe in the existence of photons.
According to the Maxwell theory the frequency {\displaystyle \nu } of classical radiation is equal to the rotation frequency {\displaystyle \nu _{\text{rot}}} of the electron in its orbit, with harmonics at integer multiples of this frequency. This result is obtained from the Bohr model for jumps between energy levels {\displaystyle E_{n}} and {\displaystyle E_{n-k}} when {\displaystyle k} is much smaller than {\displaystyle n}. These jumps reproduce the frequency of the {\displaystyle k}-th harmonic of orbit {\displaystyle n}. For sufficiently large values of {\displaystyle n} (so-called Rydberg states), the two orbits involved in the emission process have nearly the same rotation frequency, so that the classical orbital frequency is not ambiguous. But for small {\displaystyle n} (or large {\displaystyle k}), the radiation frequency has no unambiguous classical interpretation. This marks the birth of the correspondence principle, requiring quantum theory to agree with the classical theory only in the limit of large quantum numbers.
The Bohr–Kramers–Slater theory (BKS theory) is a failed attempt to extend the Bohr model, which violates the conservation of energy and momentum in quantum jumps, with the conservation laws only holding on average.
Bohr's condition, that the angular momentum be an integer multiple of
{\displaystyle \hbar }, was later reinterpreted in 1924 by de Broglie as a standing wave condition: the electron is described by a wave and a whole number of wavelengths must fit along the circumference of the electron's orbit:
{\displaystyle n\lambda =2\pi r.}
According to de Broglie's hypothesis, matter particles such as the electron behave as waves. The de Broglie wavelength of an electron is
{\displaystyle \lambda ={\frac {h}{mv}},}
which implies that
{\displaystyle {\frac {nh}{mv}}=2\pi r,}
or
{\displaystyle {\frac {nh}{2\pi }}=mvr,}
where
where {\displaystyle mvr} is the angular momentum of the orbiting electron. Writing {\displaystyle \ell } for this angular momentum, the previous equation becomes
{\displaystyle \ell ={\frac {nh}{2\pi }},}
which is Bohr's second postulate.
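The equivalence can also be confirmed numerically: if the orbit radius satisfies Bohr's quantum rule, then exactly n de Broglie wavelengths fit around the orbit's circumference. (A sketch with standard constants; the variable names are illustrative.)

```python
import math

hbar = 1.054571817e-34    # reduced Planck constant, J*s
h = 2 * math.pi * hbar    # Planck constant
m_e = 9.1093837015e-31    # electron mass, kg
e = 1.602176634e-19       # elementary charge, C
k_e = 8.9875517923e9      # Coulomb constant

for n in (1, 2, 3):
    r = n**2 * hbar**2 / (k_e * e**2 * m_e)  # Bohr orbit radius for Z = 1
    v = n * hbar / (m_e * r)                 # speed from m_e v r = n hbar
    lam = h / (m_e * v)                      # de Broglie wavelength
    # Standing-wave condition: n wavelengths equal the circumference
    assert abs(n * lam - 2 * math.pi * r) < 1e-12
```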
Bohr described the angular momentum of the electron orbit as an integer multiple of {\displaystyle h/2\pi }, while de Broglie's wavelength {\displaystyle \lambda =h/p} described {\displaystyle h} divided by the electron momentum. In 1913, however, Bohr justified his rule by appealing to the correspondence principle, without providing any sort of wave interpretation. In 1913, the wave behavior of matter particles such as the electron was not suspected.
In 1925, a new kind of mechanics was proposed, quantum mechanics, in which Bohr's model of electrons traveling in quantized orbits was extended into a more accurate model of electron motion. The new theory was proposed by Werner Heisenberg. Another form of the same theory, wave mechanics, was discovered by the Austrian physicist Erwin Schrödinger independently, and by different reasoning. Schrödinger employed de Broglie's matter waves, but sought wave solutions of a three-dimensional wave equation describing electrons that were constrained to move about the nucleus of a hydrogen-like atom, by being trapped by the potential of the positive nuclear charge.
== Electron energy levels ==
The Bohr model gives almost exact results only for a system where two charged points orbit each other at speeds much less than that of light. This not only involves one-electron systems such as the hydrogen atom, singly ionized helium, and doubly ionized lithium, but it includes positronium and Rydberg states of any atom where one electron is far away from everything else. It can be used for K-line X-ray transition calculations if other assumptions are added (see Moseley's law below). In high energy physics, it can be used to calculate the masses of heavy quark mesons.
Calculation of the orbits requires two assumptions.
Classical mechanics
The electron is held in a circular orbit by electrostatic attraction. The centripetal force is equal to the Coulomb force.
{\displaystyle {\frac {m_{\mathrm {e} }v^{2}}{r}}={\frac {Zk_{\mathrm {e} }e^{2}}{r^{2}}},}
where me is the electron's mass, e is the elementary charge, ke is the Coulomb constant and Z is the atom's atomic number. It is assumed here that the mass of the nucleus is much larger than the electron mass (which is a good assumption). This equation determines the electron's speed at any radius:
{\displaystyle v={\sqrt {\frac {Zk_{\mathrm {e} }e^{2}}{m_{\mathrm {e} }r}}}.}
It also determines the electron's total energy at any radius:
{\displaystyle E=-{\frac {1}{2}}m_{\mathrm {e} }v^{2}.}
The total energy is negative and inversely proportional to r. This means that it takes energy to pull the orbiting electron away from the proton. For infinite values of r, the energy is zero, corresponding to a motionless electron infinitely far from the proton. The total energy is half the potential energy, the difference being the kinetic energy of the electron. This is also true for noncircular orbits by the virial theorem.
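These classical relations can be checked at the Bohr radius (a numeric sketch with standard constants): the total energy comes out to about −13.6 eV and equals half the potential energy, as the virial theorem requires.

```python
m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C
k_e = 8.9875517923e9     # Coulomb constant
eV = 1.602176634e-19     # joules per electronvolt
r = 5.29177e-11          # Bohr radius, m (Z = 1)

v = (k_e * e**2 / (m_e * r))**0.5   # speed from the force balance
T = 0.5 * m_e * v**2                # kinetic energy
U = -k_e * e**2 / r                 # electrostatic potential energy
E = T + U                           # total energy

print(E / eV)   # ~ -13.6 eV
print(E / U)    # ~ 0.5: total energy is half the potential energy
```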
A quantum rule
The angular momentum L = mevr is an integer multiple of ħ:
{\displaystyle m_{\mathrm {e} }vr=n\hbar .}
=== Derivation ===
In classical mechanics, if an electron is orbiting around an atom with period T, and if its coupling to the electromagnetic field is weak, so that the orbit doesn't decay very much in one cycle, it will emit electromagnetic radiation in a pattern repeating at every period, so that the Fourier transform of the pattern will only have frequencies which are multiples of 1/T.
However, in quantum mechanics, the quantization of angular momentum leads to discrete energy levels of the orbits, and the emitted frequencies are quantized according to the energy differences between these levels. This discrete nature of energy levels introduces a fundamental departure from the classical radiation law, giving rise to distinct spectral lines in the emitted radiation.
Bohr assumes that the electron is circling the nucleus in an elliptical orbit obeying the rules of classical mechanics, but with no loss of radiation due to the Larmor formula.
Denoting the total energy as E, the electron charge as −e, the nucleus charge as K = Ze, the electron mass as me, half the major axis of the ellipse as a, he starts with these equations:: 3
E is assumed to be negative, because a positive energy is required to unbind the electron from the nucleus and put it at rest at an infinite distance.
Eq. (1a) is obtained by equating the centripetal force to the Coulomb force acting between the nucleus and the electron, considering that {\displaystyle E=T+U} (where T is the average kinetic energy and U the average electrostatic potential), and that, by Kepler's second law, the average separation between the electron and the nucleus is a.
Eq. (1b) is obtained from the same premises of eq. (1a) plus the virial theorem, stating that, for an elliptical orbit,
Then Bohr assumes that {\displaystyle \vert E\vert } is an integer multiple of the energy of a quantum of light with half the frequency of the electron's revolution frequency,: 4 i.e.:
From eq. (1a, 1b, 2), it follows that:
He further assumes that the orbit is circular, i.e. {\displaystyle a=r}, and, denoting the angular momentum of the electron as L, introduces the equation:
Eq. (4) stems from the virial theorem, and from the classical mechanics relationships between the angular momentum, the kinetic energy and the frequency of revolution.
From eq. (1c, 2, 4), it follows that:
where:
that is:
This result states that the angular momentum of the electron is an integer multiple of the reduced Planck constant.: 15
Substituting the expression for the velocity gives an equation for r in terms of n:
{\displaystyle m_{\text{e}}{\sqrt {\dfrac {k_{\text{e}}Ze^{2}}{m_{\text{e}}r}}}r=n\hbar ,}
so that the allowed orbit radius at any n is
{\displaystyle r_{n}={\frac {n^{2}\hbar ^{2}}{Zk_{\mathrm {e} }e^{2}m_{\mathrm {e} }}}.}
The smallest possible value of r in the hydrogen atom (Z = 1) is called the Bohr radius and is equal to:
{\displaystyle r_{1}={\frac {\hbar ^{2}}{k_{\mathrm {e} }e^{2}m_{\mathrm {e} }}}\approx 5.29\times 10^{-11}~\mathrm {m} =52.9~\mathrm {pm} .}
The energy of the n-th level for any atom is determined by the radius and quantum number:
{\displaystyle E=-{\frac {Zk_{\mathrm {e} }e^{2}}{2r_{n}}}=-{\frac {Z^{2}(k_{\mathrm {e} }e^{2})^{2}m_{\mathrm {e} }}{2\hbar ^{2}n^{2}}}\approx {\frac {-13.6\ Z^{2}}{n^{2}}}~\mathrm {eV} .}
An electron in the lowest energy level of hydrogen (n = 1) therefore has about 13.6 eV less energy than a motionless electron infinitely far from the nucleus. The next energy level (n = 2) is −3.4 eV. The third (n = 3) is −1.51 eV, and so on. For larger values of n, these are also the binding energies of a highly excited atom with one electron in a large circular orbit around the rest of the atom. The hydrogen formula also coincides with the Wallis product.
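A short numeric check of the closed-form energy levels (a sketch; the constants are standard SI values):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C
k_e = 8.9875517923e9     # Coulomb constant
eV = 1.602176634e-19     # joules per electronvolt

def E_level(n, Z=1):
    """Bohr energy of level n in eV: -Z^2 (k_e e^2)^2 m_e / (2 hbar^2 n^2)."""
    return -Z**2 * (k_e * e**2)**2 * m_e / (2 * hbar**2 * n**2) / eV

levels = [E_level(n) for n in (1, 2, 3)]
print(levels)  # ~[-13.6, -3.40, -1.51] eV, matching the text
```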
The combination of natural constants in the energy formula is called the Rydberg energy (RE):
{\displaystyle R_{\mathrm {E} }={\frac {(k_{\mathrm {e} }e^{2})^{2}m_{\mathrm {e} }}{2\hbar ^{2}}}.}
This expression is clarified by interpreting it in combinations that form more natural units:
{\displaystyle m_{\mathrm {e} }c^{2}} is the rest mass energy of the electron (511 keV),
{\displaystyle {\frac {k_{\mathrm {e} }e^{2}}{\hbar c}}=\alpha \approx {\frac {1}{137}}} is the fine-structure constant, so that
{\displaystyle R_{\mathrm {E} }={\frac {1}{2}}(m_{\mathrm {e} }c^{2})\alpha ^{2}.}
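The two expressions for the Rydberg energy are algebraically identical, which a quick numeric sketch confirms (standard constants; any tiny residual is floating-point error):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C
k_e = 8.9875517923e9     # Coulomb constant

alpha = k_e * e**2 / (hbar * c)                    # fine-structure constant
R_E_direct = (k_e * e**2)**2 * m_e / (2 * hbar**2) # (k_e e^2)^2 m_e / 2 hbar^2
R_E_natural = 0.5 * m_e * c**2 * alpha**2          # (1/2) m_e c^2 alpha^2

print(1 / alpha)                                   # ~137.0
print(abs(R_E_direct - R_E_natural) / R_E_direct)  # ~0: same quantity
```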
Since this derivation is with the assumption that the nucleus is orbited by one electron, we can generalize this result by letting the nucleus have a charge q = Ze, where Z is the atomic number. This will now give us energy levels for hydrogenic (hydrogen-like) atoms, which can serve as a rough order-of-magnitude approximation of the actual energy levels. So for nuclei with Z protons, the energy levels are (to a rough approximation):
{\displaystyle E_{n}=-{\frac {Z^{2}R_{\mathrm {E} }}{n^{2}}}.}
The actual energy levels cannot be solved analytically for more than one electron (see n-body problem) because the electrons are not only affected by the nucleus but also interact with each other via the Coulomb force.
When Z = 1/α (Z ≈ 137), the motion becomes highly relativistic, and Z2 cancels the α2 in R; the orbit energy begins to be comparable to rest energy. Sufficiently large nuclei, if they were stable, would reduce their charge by creating a bound electron from the vacuum, ejecting the positron to infinity. This is the theoretical phenomenon of electromagnetic charge screening which predicts a maximum nuclear charge. Emission of such positrons has been observed in the collisions of heavy ions to create temporary super-heavy nuclei.
The Bohr formula properly uses the reduced mass of electron and proton in all situations, instead of the mass of the electron,
{\displaystyle m_{\text{red}}={\frac {m_{\mathrm {e} }m_{\mathrm {p} }}{m_{\mathrm {e} }+m_{\mathrm {p} }}}=m_{\mathrm {e} }{\frac {1}{1+m_{\mathrm {e} }/m_{\mathrm {p} }}}.}
However, these numbers are very nearly the same, due to the much larger mass of the proton, about 1836.1 times the mass of the electron, so that the reduced mass in the system is the mass of the electron multiplied by the constant 1836.1/(1 + 1836.1) = 0.99946. This fact was historically important in convincing Rutherford of the importance of Bohr's model, for it explained the fact that the frequencies of lines in the spectra for singly ionized helium do not differ from those of hydrogen by a factor of exactly 4, but rather by 4 times the ratio of the reduced mass for the hydrogen vs. the helium systems, which was much closer to the experimental ratio than exactly 4.
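The reduced-mass factor and the helium/hydrogen line ratio follow directly from the masses (a sketch; the nuclear masses are standard values and the comparison is illustrative):

```python
m_e = 9.1093837015e-31      # electron mass, kg
m_p = 1.67262192369e-27     # proton mass (hydrogen nucleus), kg
m_alpha = 6.6446573357e-27  # alpha-particle mass (He-4 nucleus), kg

factor = 1 / (1 + m_e / m_p)   # reduced-mass correction for hydrogen
print(factor)                  # ~0.99946, as quoted in the text

# Line-frequency ratio He+/H: 4 times the ratio of reduced masses
mu_H = m_e * m_p / (m_e + m_p)
mu_He = m_e * m_alpha / (m_e + m_alpha)
print(4 * mu_He / mu_H)        # ~4.0016, not exactly 4
```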
For positronium, the formula uses the reduced mass also, but in this case, it is exactly the electron mass divided by 2. For any value of the radius, the electron and the positron are each moving at half the speed around their common center of mass, and each has only one fourth the kinetic energy. The total kinetic energy is half what it would be for a single electron moving around a heavy nucleus.
{\displaystyle E_{n}=-{\frac {R_{\mathrm {E} }}{2n^{2}}}} (positronium).
== Rydberg formula ==
Beginning in the late 1860s, Johann Balmer and later Johannes Rydberg and Walther Ritz developed increasingly accurate empirical formulae matching measured atomic spectral lines.
Critical for Bohr's later work, Rydberg expressed his formula in terms of wave-number, proportional to frequency. These formulae contained a constant, {\displaystyle R}, now known as the Rydberg constant, and a pair of integers indexing the lines:: 247
{\displaystyle \nu =R\left({\frac {1}{m^{2}}}-{\frac {1}{n^{2}}}\right).}
Despite many attempts, no theory of the atom could reproduce these relatively simple formulae.: 169
Bohr's theory, describing the energies of transitions or quantum jumps between orbital energy levels, is able to explain these formulae. For the hydrogen atom Bohr starts with his derived formula for the energy released as a free electron moves into a stable circular orbit indexed by {\displaystyle \tau }:
{\displaystyle W_{\tau }={\frac {2\pi ^{2}me^{4}}{h^{2}\tau ^{2}}}}
The energy difference between two such levels is then:
{\displaystyle h\nu =W_{\tau _{2}}-W_{\tau _{1}}={\frac {2\pi ^{2}me^{4}}{h^{2}}}\left({\frac {1}{\tau _{2}^{2}}}-{\frac {1}{\tau _{1}^{2}}}\right)}
Therefore, Bohr's theory gives the Rydberg formula and moreover the numerical value of the Rydberg constant for hydrogen in terms of more fundamental constants of nature, including the electron's charge, the electron's mass, and the Planck constant:: 31
{\displaystyle cR_{\text{H}}={\frac {2\pi ^{2}me^{4}}{h^{3}}}.}
Since the energy of a photon is
{\displaystyle E={\frac {hc}{\lambda }},}
these results can be expressed in terms of the wavelength of the photon given off:
{\displaystyle {\frac {1}{\lambda }}=R\left({\frac {1}{n_{\text{f}}^{2}}}-{\frac {1}{n_{\text{i}}^{2}}}\right).}
Bohr's derivation of the Rydberg constant, as well as the concomitant agreement of Bohr's formula with experimentally observed spectral lines of the Lyman (nf = 1), Balmer (nf = 2), and Paschen (nf = 3) series, and successful theoretical prediction of other lines not yet observed, was one reason that his model was immediately accepted.: 34
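Numerically, the Rydberg constant follows from the same fundamental constants as the Rydberg energy, R = R_E/(hc), and the series wavelengths drop out of the formula above (a sketch; it uses the infinite-nuclear-mass value, so the wavelengths differ very slightly from measured lines):

```python
h = 6.62607015e-34       # Planck constant, J*s
hbar = 1.054571817e-34   # reduced Planck constant
c = 2.99792458e8         # speed of light, m/s
m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C
k_e = 8.9875517923e9     # Coulomb constant

R_E = (k_e * e**2)**2 * m_e / (2 * hbar**2)  # Rydberg energy, J
R = R_E / (h * c)                            # Rydberg constant, 1/m
print(R)                                     # ~1.097e7 1/m

def line(nf, ni):
    """Wavelength (m) of the transition ni -> nf in hydrogen."""
    return 1 / (R * (1 / nf**2 - 1 / ni**2))

print(line(1, 2))  # Lyman-alpha, ~1.22e-7 m
print(line(2, 3))  # Balmer H-alpha, ~6.56e-7 m
```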
To apply to atoms with more than one electron, the Rydberg formula can be modified by replacing Z with Z − b or n with n − b, where b is a constant representing a screening effect due to the inner-shell and other electrons (see Electron shell and the later discussion of the "Shell Model of the Atom" below). This was established empirically before Bohr presented his model.
== Shell model (heavier atoms) ==
Bohr's original three papers in 1913 described mainly the electron configuration in lighter elements. Bohr called his electron shells, "rings" in 1913. Atomic orbitals within shells did not exist at the time of his planetary model. Bohr explains in Part 3 of his famous 1913 paper that the maximum electrons in a shell is eight, writing: "We see, further, that a ring of n electrons cannot rotate in a single ring round a nucleus of charge ne unless n < 8." For smaller atoms, the electron shells would be filled as follows: "rings of electrons will only join together if they contain equal numbers of electrons; and that accordingly the numbers of electrons on inner rings will only be 2, 4, 8". However, in larger atoms the innermost shell would contain eight electrons, "on the other hand, the periodic system of the elements strongly suggests that already in neon N = 10 an inner ring of eight electrons will occur". Bohr wrote "From the above we are led to the following possible scheme for the arrangement of the electrons in light atoms:"
In Part III of Bohr's third 1913 paper, called "Systems Containing Several Nuclei", he says that two atoms form molecules on a symmetrical plane, and he reverts to describing hydrogen. The 1913 Bohr model did not discuss higher elements in detail; John William Nicholson was one of the first to show, in 1914, that it couldn't work for lithium, but that it was an attractive theory for hydrogen and ionized helium.
In 1921, following the work of chemists and others involved in work on the periodic table, Bohr extended the model of hydrogen to give an approximate model for heavier atoms. This gave a physical picture that reproduced many known atomic properties for the first time, although these properties were proposed contemporaneously with the identical work of chemist Charles Rugeley Bury.
Bohr's partner in research from 1914 to 1916 was Walther Kossel, who corrected Bohr's work to show that electrons interacted through the outer rings, and Kossel called the rings "shells". Irving Langmuir is credited with the first viable arrangement of electrons in shells with only two in the first shell and going up to eight in the next according to the octet rule of 1904, although Kossel had already predicted a maximum of eight per shell in 1916. Heavier atoms have more protons in the nucleus, and more electrons to cancel the charge. Bohr took from these chemists the idea that each discrete orbit could only hold a certain number of electrons. Per Kossel, once an orbit is full, the next level would have to be used. This gives the atom a shell structure designed by Kossel, Langmuir, and Bury, in which each shell corresponds to a Bohr orbit.
This model is even more approximate than the model of hydrogen, because it treats the electrons in each shell as non-interacting. But the repulsions of electrons are taken into account somewhat by the phenomenon of screening. The electrons in outer orbits do not only orbit the nucleus, but they also move around the inner electrons, so the effective charge Z that they feel is reduced by the number of the electrons in the inner orbit.
For example, the lithium atom has two electrons in the lowest 1s orbit, and these orbit at Z = 2. Each one sees the nuclear charge of Z = 3 minus the screening effect of the other, which crudely reduces the nuclear charge by 1 unit. This means that the innermost electrons orbit at approximately 1/2 the Bohr radius. The outermost electron in lithium orbits at roughly the Bohr radius, since the two inner electrons reduce the nuclear charge by 2. This outer electron should be at nearly one Bohr radius from the nucleus. Because the electrons strongly repel each other, the effective charge description is very approximate; the effective charge Z doesn't usually come out to be an integer.
The shell model was able to qualitatively explain many of the mysterious properties of atoms which became codified in the late 19th century in the periodic table of the elements. One property was the size of atoms, which could be determined approximately by measuring the viscosity of gases and density of pure crystalline solids. Atoms tend to get smaller toward the right in the periodic table, and become much larger at the next line of the table. Atoms to the right of the table tend to gain electrons, while atoms to the left tend to lose them. Every element on the last column of the table is chemically inert (noble gas).
In the shell model, this phenomenon is explained by shell-filling. Successive atoms become smaller because they are filling orbits of the same size, until the orbit is full, at which point the next atom in the table has a loosely bound outer electron, causing it to expand. The first Bohr orbit is filled when it has two electrons, which explains why helium is inert. The second orbit allows eight electrons, and when it is full the atom is neon, again inert. The third orbital contains eight again, except that in the more correct Sommerfeld treatment (reproduced in modern quantum mechanics) there are extra "d" electrons. The third orbit may hold an extra 10 d electrons, but these positions are not filled until a few more orbitals from the next level are filled (filling the n = 3 d-orbitals produces the 10 transition elements). The irregular filling pattern is an effect of interactions between electrons, which are not taken into account in either the Bohr or Sommerfeld models and which are difficult to calculate even in the modern treatment.
== Moseley's law and calculation (K-alpha X-ray emission lines) ==
Niels Bohr said in 1962: "You see actually the Rutherford work was not taken seriously. We cannot understand today, but it was not taken seriously at all. There was no mention of it any place. The great change came from Moseley."
In 1913, Henry Moseley found an empirical relationship between the strongest X-ray line emitted by atoms under electron bombardment (then known as the K-alpha line), and their atomic number Z. Moseley's empirical formula was found to be derivable from Rydberg's formula and later Bohr's formula (Moseley actually mentions only Ernest Rutherford and Antonius Van den Broek in terms of models, as these had been published before Moseley's work, and Moseley's 1913 paper was published the same month as the first Bohr model paper). The derivation requires two additional assumptions: [1] that this X-ray line comes from a transition between energy levels with quantum numbers 1 and 2, and [2] that the atomic number Z, when used in the formula for atoms heavier than hydrogen, should be diminished by 1, giving (Z − 1)².
Moseley wrote to Bohr, puzzled about his results, but Bohr was not able to help. At that time, he thought that the postulated innermost "K" shell of electrons should have at least four electrons, not the two which would have neatly explained the result. So Moseley published his results without a theoretical explanation.
It was Walther Kossel in 1914 and in 1916 who explained that in the periodic table new elements would be created as electrons were added to the outer shell. In Kossel's paper, he writes: "This leads to the conclusion that the electrons, which are added further, should be put into concentric rings or shells, on each of which ... only a certain number of electrons—namely, eight in our case—should be arranged. As soon as one ring or shell is completed, a new one has to be started for the next element; the number of electrons, which are most easily accessible, and lie at the outermost periphery, increases again from element to element and, therefore, in the formation of each new shell the chemical periodicity is repeated." Later, chemist Langmuir realized that the effect was caused by charge screening, with an inner shell containing only 2 electrons. In his 1919 paper, Irving Langmuir postulated the existence of "cells" which could each contain only two electrons, and these were arranged in "equidistant layers".
In the Moseley experiment, one of the innermost electrons in the atom is knocked out, leaving a vacancy in the lowest Bohr orbit, which contains a single remaining electron. This vacancy is then filled by an electron from the next orbit, which has n=2. But the n=2 electrons see an effective charge of Z − 1, which is the value appropriate for the charge of the nucleus, when a single electron remains in the lowest Bohr orbit to screen the nuclear charge +Z, and lower it by −1 (due to the electron's negative charge screening the nuclear positive charge). The energy gained by an electron dropping from the second shell to the first gives Moseley's law for K-alpha lines,
{\displaystyle E=h\nu =E_{i}-E_{f}=R_{\mathrm {E} }(Z-1)^{2}\left({\frac {1}{1^{2}}}-{\frac {1}{2^{2}}}\right),}
or
{\displaystyle f=\nu =R_{\mathrm {v} }\left({\frac {3}{4}}\right)(Z-1)^{2}=(2.46\times 10^{15}~{\text{Hz}})(Z-1)^{2}.}
Here, Rv = RE/h is the Rydberg constant, in terms of frequency equal to 3.28×1015 Hz. For values of Z between 11 and 31 this latter relationship had been empirically derived by Moseley, in a simple (linear) plot of the square root of X-ray frequency against atomic number (however, for silver, Z = 47, the experimentally obtained screening term should be replaced by 0.4). Notwithstanding its restricted validity, Moseley's law not only established the objective meaning of atomic number, but as Bohr noted, it also did more than the Rydberg derivation to establish the validity of the Rutherford/Van den Broek/Bohr nuclear model of the atom, with atomic number (place on the periodic table) standing for whole units of nuclear charge. Van den Broek had published his model in January 1913 showing the periodic table was arranged according to charge while Bohr's atomic model was not published until July 1913.
The K-alpha line of Moseley's time is now known to be a pair of close lines, written as (Kα1 and Kα2) in Siegbahn notation.
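As a numerical illustration (not part of Moseley's paper, and with the constant values assumed from the text above), the frequency formula can be evaluated directly; for copper (Z = 29) it predicts a K-alpha wavelength near 0.155 nm, close to the measured Cu Kα line at about 0.154 nm:

```python
# Sketch: evaluating Moseley's law f = (3/4) R_v (Z - 1)^2 for the K-alpha line.
R_v = 3.28e15  # Rydberg constant expressed as a frequency, Hz
c = 2.998e8    # speed of light, m/s

def moseley_kalpha_frequency(Z):
    """Predicted K-alpha frequency (Hz) for atomic number Z."""
    return 0.75 * R_v * (Z - 1) ** 2

for Z, name in [(29, "Cu"), (42, "Mo")]:
    f = moseley_kalpha_frequency(Z)
    wavelength_nm = c / f * 1e9
    print(f"{name} (Z={Z}): f = {f:.3e} Hz, lambda = {wavelength_nm:.4f} nm")
```

The residual discrepancy with measured lines reflects the approximate screening term, as noted above for silver.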
== Shortcomings ==
The Bohr model gives an incorrect value L=ħ for the ground state orbital angular momentum: The angular momentum in the true ground state is known to be zero from experiment. Although mental pictures fail somewhat at these levels of scale, an electron in the lowest modern "orbital" with no orbital momentum may be thought of not as revolving "around" the nucleus at all, but merely as going tightly around it in an ellipse with zero area (this may be pictured as "back and forth", without striking or interacting with the nucleus). This is only reproduced in a more sophisticated semiclassical treatment like Sommerfeld's. Still, even the most sophisticated semiclassical model fails to explain the fact that the lowest energy state is spherically symmetric – it doesn't point in any particular direction.
In modern quantum mechanics, the electron in hydrogen is a spherical cloud of probability that grows denser near the nucleus. The rate-constant of probability-decay in hydrogen is equal to the inverse of the Bohr radius, but since Bohr worked with circular orbits, not zero area ellipses, the fact that these two numbers exactly agree is considered a "coincidence". (However, many such coincidental agreements are found between the semiclassical vs. full quantum mechanical treatment of the atom; these include identical energy levels in the hydrogen atom and the derivation of a fine-structure constant, which arises from the relativistic Bohr–Sommerfeld model (see below) and which happens to be equal to an entirely different concept, in full modern quantum mechanics).
The Bohr model also failed to explain:
Much of the spectra of larger atoms. At best, it can make predictions about the K-alpha and some L-alpha X-ray emission spectra for larger atoms, if two additional ad hoc assumptions are made. Emission spectra for atoms with a single outer-shell electron (atoms in the lithium group) can also be approximately predicted. Also, if the empirical electron–nuclear screening factors for many atoms are known, many other spectral lines can be deduced from the information, in similar atoms of differing elements, via the Ritz–Rydberg combination principles (see Rydberg formula). All these techniques essentially make use of Bohr's Newtonian energy-potential picture of the atom.
The relative intensities of spectral lines; although in some simple cases Bohr's formula, or modifications of it, provided reasonable estimates (for example, calculations by Kramers for the Stark effect).
The existence of fine structure and hyperfine structure in spectral lines, which are known to be due to a variety of relativistic and subtle effects, as well as complications from electron spin.
The Zeeman effect – changes in spectral lines due to external magnetic fields; these are also due to more complicated quantum principles interacting with electron spin and orbital magnetic fields.
Doublets and triplets appear in the spectra of some atoms as very close pairs of lines. Bohr's model cannot say why some energy levels should be very close together.
Multi-electron atoms do not have energy levels predicted by the model. It does not work for (neutral) helium.
== Refinements ==
Several enhancements to the Bohr model were proposed, most notably the Sommerfeld or Bohr–Sommerfeld models, which suggested that electrons travel in elliptical orbits around a nucleus instead of the Bohr model's circular orbits. This model supplemented the quantized angular momentum condition of the Bohr model with an additional radial quantization condition, the Wilson–Sommerfeld quantization condition
{\displaystyle \int _{0}^{T}p_{\text{r}}\,dq_{\text{r}}=nh,}
where pr is the radial momentum canonically conjugate to the coordinate qr, which is the radial position, and T is one full orbital period. The integral is the action of action-angle coordinates. This condition, suggested by the correspondence principle, is the only one possible, since the quantum numbers are adiabatic invariants.
The Bohr–Sommerfeld model was fundamentally inconsistent and led to many paradoxes. The magnetic quantum number measured the tilt of the orbital plane relative to the xy plane, and it could only take a few discrete values. This contradicted the obvious fact that an atom could have any orientation relative to the coordinates, without restriction. The Sommerfeld quantization can be performed in different canonical coordinates and sometimes gives different answers. The incorporation of radiation corrections was difficult, because it required finding action-angle coordinates for a combined radiation/atom system, which is difficult when the radiation is allowed to escape. The whole theory did not extend to non-integrable motions, which meant that many systems could not be treated even in principle. In the end, the model was replaced by the modern quantum-mechanical treatment of the hydrogen atom, which was first given by Wolfgang Pauli in 1925, using Heisenberg's matrix mechanics. The current picture of the hydrogen atom is based on the atomic orbitals of wave mechanics, which Erwin Schrödinger developed in 1926.
However, this is not to say that the Bohr–Sommerfeld model was without its successes. Calculations based on the Bohr–Sommerfeld model were able to accurately explain a number of more complex atomic spectral effects. For example, up to first-order perturbations, the Bohr model and quantum mechanics make the same predictions for the spectral line splitting in the Stark effect. At higher-order perturbations, however, the Bohr model and quantum mechanics differ, and measurements of the Stark effect under high field strengths helped confirm the correctness of quantum mechanics over the Bohr model. The prevailing theory behind this difference lies in the shapes of the orbitals of the electrons, which vary according to the energy state of the electron.
The Bohr–Sommerfeld quantization conditions lead to questions in modern mathematics. A consistent semiclassical quantization condition requires a certain type of structure on the phase space, which places topological limitations on the types of symplectic manifolds which can be quantized. In particular, the symplectic form should be the curvature form of a connection of a Hermitian line bundle, which is called a prequantization.
Bohr also updated his model in 1922, assuming that certain numbers of electrons (for example, 2, 8, and 18) correspond to stable "closed shells".
== Model of the chemical bond ==
Niels Bohr proposed a model of the atom and a model of the chemical bond. According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. The dynamic equilibrium of the molecular system is achieved through the balance of forces between the forces of attraction of nuclei to the plane of the ring of electrons and the forces of mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion – the electrons in the ring are at the maximum distance from each other.
== Symbolism of planetary atomic models ==
Although Bohr's atomic model was superseded by quantum models in the 1920s, the visual image of electrons orbiting a nucleus has remained the popular concept of atoms.
The concept of an atom as a tiny planetary system has been widely used as a symbol for atoms and even for "atomic" energy (even though this is more properly considered nuclear energy). Examples of its use over the past century include but are not limited to:
The logo of the United States Atomic Energy Commission, which was in part responsible for its later usage in relation to nuclear fission technology in particular.
The flag of the International Atomic Energy Agency is a "crest-and-spinning-atom emblem", enclosed in olive branches.
The US minor league baseball Albuquerque Isotopes' logo shows baseballs as electrons orbiting a large letter "A".
A similar symbol, the atomic whirl, was chosen as the symbol for the American Atheists, and has come to be used as a symbol of atheism in general.
The Unicode Miscellaneous Symbols code point U+269B (⚛) for an atom looks like a planetary atom model.
The television show The Big Bang Theory uses a planetary-like image in its print logo.
The JavaScript library React uses a planetary-like image as its logo.
On maps, it is generally used to indicate a nuclear power installation.
== See also ==
== References ==
=== Footnotes ===
=== Primary sources ===
Bohr, N. (July 1913). "I. On the constitution of atoms and molecules". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 26 (151): 1–25. Bibcode:1913PMag...26....1B. doi:10.1080/14786441308634955.
Bohr, N. (September 1913). "XXXVII. On the constitution of atoms and molecules". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 26 (153): 476–502. Bibcode:1913PMag...26..476B. doi:10.1080/14786441308634993.
Bohr, N. (1 November 1913). "LXXIII. On the constitution of atoms and molecules". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 26 (155): 857–875. Bibcode:1913PMag...26..857B. doi:10.1080/14786441308635031.
Bohr, N. (October 1913). "The Spectra of Helium and Hydrogen". Nature. 92 (2295): 231–232. Bibcode:1913Natur..92..231B. doi:10.1038/092231d0. S2CID 11988018.
Bohr, N. (March 1921). "Atomic Structure". Nature. 107 (2682): 104–107. Bibcode:1921Natur.107..104B. doi:10.1038/107104a0. S2CID 4035652.
A. Einstein (1917). "Zum Quantensatz von Sommerfeld und Epstein". Verhandlungen der Deutschen Physikalischen Gesellschaft. 19: 82–92. Reprinted in The Collected Papers of Albert Einstein, A. Engel translator, (1997) Princeton University Press, Princeton. 6 p. 434. (provides an elegant reformulation of the Bohr–Sommerfeld quantization conditions, as well as an important insight into the quantization of non-integrable (chaotic) dynamical systems.)
de Broglie, Maurice; Langevin, Paul; Solvay, Ernest; Einstein, Albert (1912). La théorie du rayonnement et les quanta : rapports et discussions de la réunion tenue à Bruxelles, du 30 octobre au 3 novembre 1911, sous les auspices de M.E. Solvay (in French). Gauthier-Villars. OCLC 1048217622.
== Further reading ==
Linus Carl Pauling (1970). "Chapter 5-1". General Chemistry (3rd ed.). San Francisco: W.H. Freeman & Co.
Reprint: Linus Pauling (1988). General Chemistry. New York: Dover Publications. ISBN 0-486-65622-5.
George Gamow (1985). "Chapter 2". Thirty Years That Shook Physics. Dover Publications.
Walter J. Lehmann (1972). "Chapter 18". Atomic and Molecular Structure: the development of our concepts. John Wiley and Sons. ISBN 0-471-52440-9.
Paul Tipler and Ralph Llewellyn (2002). Modern Physics (4th ed.). W. H. Freeman. ISBN 0-7167-4345-0.
Klaus Hentschel: Elektronenbahnen, Quantensprünge und Spektren, in: Charlotte Bigg & Jochen Hennig (eds.) Atombilder. Ikonografien des Atoms in Wissenschaft und Öffentlichkeit des 20. Jahrhunderts, Göttingen: Wallstein-Verlag 2009, pp. 51–61
Steven and Susan Zumdahl (2010). "Chapter 7.4". Chemistry (8th ed.). Brooks/Cole. ISBN 978-0-495-82992-8.
Kragh, Helge (November 2011). "Conceptual objections to the Bohr atomic theory — do electrons have a 'free will'?". The European Physical Journal H. 36 (3): 327–352. Bibcode:2011EPJH...36..327K. doi:10.1140/epjh/e2011-20031-x. S2CID 120859582.
== External links ==
Standing waves in Bohr's atomic model—An interactive simulation to intuitively explain the quantization condition of standing waves in Bohr's atomic model
The kinetic theory of gases is a simple classical model of the thermodynamic behavior of gases. Its introduction allowed many principal concepts of thermodynamics to be established. It treats a gas as composed of numerous particles, too small to be seen with a microscope, in constant, random motion. These particles are now known to be the atoms or molecules of the gas. The kinetic theory of gases uses their collisions with each other and with the walls of their container to explain the relationship between the macroscopic properties of gases, such as volume, pressure, and temperature, as well as transport properties such as viscosity, thermal conductivity and mass diffusivity.
The basic version of the model describes an ideal gas. It treats the collisions as perfectly elastic and as the only interaction between the particles, which are additionally assumed to be much smaller than their average distance apart.
Due to the time reversibility of microscopic dynamics (microscopic reversibility), the kinetic theory is also connected to the principle of detailed balance, in terms of the fluctuation-dissipation theorem (for Brownian motion) and the Onsager reciprocal relations.
The theory was historically significant as the first explicit exercise of the ideas of statistical mechanics.
== History ==
=== Kinetic theory of matter ===
==== Antiquity ====
In about 50 BCE, the Roman philosopher Lucretius proposed that apparently static macroscopic bodies were composed on a small scale of rapidly moving atoms all bouncing off each other. This Epicurean atomistic point of view was rarely considered in the subsequent centuries, when Aristotelian ideas were dominant.
==== Modern era ====
===== "Heat is motion" =====
One of the first and boldest statements on the relationship between motion of particles and heat was by the English philosopher Francis Bacon in 1620. "It must not be thought that heat generates motion, or motion heat (though in some respects this be true), but that the very essence of heat ... is motion and nothing else." "not a ... motion of the whole, but of the small particles of the body." In 1623, in The Assayer, Galileo Galilei, in turn, argued that heat, pressure, smell and other phenomena perceived by our senses are apparent properties only, caused by the movement of particles, which is a real phenomenon.
In 1665, in Micrographia, the English polymath Robert Hooke repeated Bacon's assertion, and in 1675, his colleague, Anglo-Irish scientist Robert Boyle noted that a hammer's "impulse" is transformed into the motion of a nail's constituent particles, and that this type of motion is what heat consists of. Boyle also believed that all macroscopic properties, including color, taste and elasticity, are caused by and ultimately consist of nothing but the arrangement and motion of indivisible particles of matter. In a lecture of 1681, Hooke asserted a direct relationship between the temperature of an object and the speed of its internal particles. "Heat ... is nothing but the internal Motion of the Particles of [a] Body; and the hotter a Body is, the more violently are the Particles moved." In a manuscript published 1720, the English philosopher John Locke made a very similar statement: "What in our sensation is heat, in the object is nothing but motion." Locke too talked about the motion of the internal particles of the object, which he referred to as its "insensible parts".
In his 1744 paper Meditations on the Cause of Heat and Cold, Russian polymath Mikhail Lomonosov made a relatable appeal to everyday experience to gain acceptance of the microscopic and kinetic nature of matter and heat: "Movement should not be denied based on the fact it is not seen. Who would deny that the leaves of trees move when rustled by a wind, despite it being unobservable from large distances? Just as in this case motion remains hidden due to perspective, it remains hidden in warm bodies due to the extremely small sizes of the moving particles. In both cases, the viewing angle is so small that neither the object nor their movement can be seen." Lomonosov also insisted that movement of particles is necessary for the processes of dissolution, extraction and diffusion, providing as examples the dissolution and diffusion of salts by the action of water particles on the "molecules of salt", the dissolution of metals in mercury, and the extraction of plant pigments by alcohol.
Also the transfer of heat was explained by the motion of particles. Around 1760, Scottish physicist and chemist Joseph Black wrote: "Many have supposed that heat is a tremulous ... motion of the particles of matter, which ... motion they imagined to be communicated from one body to another."
=== Kinetic theory of gases ===
In 1738 Daniel Bernoulli published Hydrodynamica, which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the pressure of the gas, and that their average kinetic energy determines the temperature of the gas. The theory was not immediately accepted, in part because conservation of energy had not yet been established, and it was not obvious to physicists how the collisions between molecules could be perfectly elastic.
Pioneers of the kinetic theory, whose work was also largely neglected by their contemporaries, were Mikhail Lomonosov (1747), Georges-Louis Le Sage (ca. 1780, published 1818), John Herapath (1816) and John James Waterston (1843), who connected their research with the development of mechanical explanations of gravitation.
In 1856 August Krönig created a simple gas-kinetic model, which only considered the translational motion of the particles. In 1857 Rudolf Clausius developed a similar, but more sophisticated version of the theory, which included translational and, contrary to Krönig, also rotational and vibrational molecular motions. In this same work he introduced the concept of mean free path of a particle. In 1859, after reading a paper about the diffusion of molecules by Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. In his 1873 thirteen-page article 'Molecules', Maxwell states: "we are told that an 'atom' is a material point, invested and surrounded by 'potential forces' and that when 'flying molecules' strike against a solid body in constant succession it causes what is called pressure of air and other gases."
In 1871, Ludwig Boltzmann generalized Maxwell's achievement and formulated the Maxwell–Boltzmann distribution. The logarithmic connection between entropy and probability was also first stated by Boltzmann.
At the beginning of the 20th century, atoms were considered by many physicists to be purely hypothetical constructs, rather than real objects. An important turning point was Albert Einstein's (1905) and Marian Smoluchowski's (1906) papers on Brownian motion, which succeeded in making certain accurate quantitative predictions based on the kinetic theory.
Following the development of the Boltzmann equation, a framework for its use in developing transport equations was developed independently by David Enskog and Sydney Chapman in 1917 and 1916. The framework provided a route to prediction of the transport properties of dilute gases, and became known as Chapman–Enskog theory. The framework was gradually expanded throughout the following century, eventually becoming a route to prediction of transport properties in real, dense gases.
== Assumptions ==
The application of kinetic theory to ideal gases makes the following assumptions:
The gas consists of very small particles. This smallness of their size is such that the sum of the volume of the individual gas molecules is negligible compared to the volume of the container of the gas. This is equivalent to stating that the average distance separating the gas particles is large compared to their size, and that the elapsed time during a collision between particles and the container's wall is negligible when compared to the time between successive collisions.
The number of particles is so large that a statistical treatment of the problem is well justified. This assumption is sometimes referred to as the thermodynamic limit.
The rapidly moving particles constantly collide among themselves and with the walls of the container, and all these collisions are perfectly elastic.
Interactions (i.e. collisions) between particles are strictly binary and uncorrelated, meaning that there are no three-body (or higher) interactions, and the particles have no memory.
Except during collisions, the interactions among molecules are negligible. They exert no other forces on one another.
Thus, the dynamics of particle motion can be treated classically, and the equations of motion are time-reversible.
As a simplifying assumption, the particles are usually assumed to have the same mass as one another; however, the theory can be generalized to a mass distribution, with each mass type contributing to the gas properties independently of one another in agreement with Dalton's law of partial pressures. Many of the model's predictions are the same whether or not collisions between particles are included, so they are often neglected as a simplifying assumption in derivations (see below).
More modern developments, such as the revised Enskog theory and the extended Bhatnagar–Gross–Krook model, relax one or more of the above assumptions. These can accurately describe the properties of dense gases, and gases with internal degrees of freedom, because they include the volume of the particles as well as contributions from intermolecular and intramolecular forces as well as quantized molecular rotations, quantum rotational-vibrational symmetry effects, and electronic excitation. While theories relaxing the assumptions that the gas particles occupy negligible volume and that collisions are strictly elastic have been successful, it has been shown that relaxing the requirement of interactions being binary and uncorrelated will eventually lead to divergent results.
== Equilibrium properties ==
=== Pressure and kinetic energy ===
In the kinetic theory of gases, the pressure is assumed to be equal to the force (per unit area) exerted by the individual gas atoms or molecules hitting and rebounding from the gas container's surface.
Consider a gas particle traveling at velocity {\textstyle v_{i}} along the {\displaystyle {\hat {i}}}-direction in an enclosed volume with characteristic length {\displaystyle L_{i}}, cross-sectional area {\displaystyle A_{i}}, and volume {\displaystyle V=A_{i}L_{i}}. The gas particle encounters a boundary after characteristic time
{\displaystyle t=L_{i}/v_{i}.}
The momentum of the gas particle can then be described as
{\displaystyle p_{i}=mv_{i}=mL_{i}/t.}
We combine the above with Newton's second law, which states that the force experienced by a particle is related to the time rate of change of its momentum, such that
{\displaystyle F_{i}={\frac {\mathrm {d} p_{i}}{\mathrm {d} t}}={\frac {mL_{i}}{t^{2}}}={\frac {mv_{i}^{2}}{L_{i}}}.}
Now consider a large number, {\displaystyle N}, of gas particles with random orientation in a three-dimensional volume. Because the orientation is random, the average particle speed, {\textstyle v}, in every direction is identical:
{\displaystyle v_{x}^{2}=v_{y}^{2}=v_{z}^{2}.}
Further, assume that the volume is symmetrical about its three dimensions, {\displaystyle {\hat {i}},{\hat {j}},{\hat {k}}}, such that
{\displaystyle {\begin{aligned}V={}&V_{i}=V_{j}=V_{k},\\F={}&F_{i}=F_{j}=F_{k},\\&A_{i}=A_{j}=A_{k}.\end{aligned}}}
The total surface area on which the gas particles act is therefore
{\displaystyle A=3A_{i}.}
The pressure exerted by the collisions of the {\displaystyle N} gas particles with the surface can then be found by adding the force contribution of every particle and dividing by the interior surface area of the volume,
{\displaystyle P={\frac {N{\overline {F}}}{A}}={\frac {NLF}{V}}}
{\displaystyle \Rightarrow PV=NLF={\frac {N}{3}}mv^{2}.}
The total translational kinetic energy {\displaystyle K_{\text{t}}} of the gas is defined as
{\displaystyle K_{\text{t}}={\frac {N}{2}}mv^{2},}
providing the result
{\displaystyle PV={\frac {2}{3}}K_{\text{t}}.} (1)
This is an important, non-trivial result of the kinetic theory because it relates pressure, a macroscopic property, to the translational kinetic energy of the molecules, which is a microscopic property.
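A quick numerical check of this pressure–energy relation (with illustrative values chosen here, not taken from the text: one mole of helium-like particles in a 1 m³ box at roughly room temperature):

```python
# Sketch: verify PV = (2/3) K_t against the direct form PV = (N/3) m v^2.
N = 6.022e23    # number of particles (one mole)
m = 6.646e-27   # particle mass, kg (helium atom)
v2 = 1.87e6     # mean-square speed, m^2/s^2 (roughly 300 K for helium)
V = 1.0         # volume, m^3

K_t = 0.5 * N * m * v2                   # total translational kinetic energy
P_from_energy = (2.0 / 3.0) * K_t / V    # pressure from PV = (2/3) K_t
P_from_speed = N * m * v2 / (3.0 * V)    # pressure from PV = (N/3) m v^2
print(f"P = {P_from_energy:.1f} Pa")     # about 2.5 kPa, i.e. nRT/V for these values
```

The two expressions agree identically, since one is an algebraic rearrangement of the other.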
The mass density of a gas {\displaystyle \rho } is expressed through the total mass of the gas particles and the volume of the gas: {\displaystyle \rho ={\frac {Nm}{V}}}. Taking this into account, the pressure is equal to
{\displaystyle P={\frac {\rho v^{2}}{3}}.}
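This relation can be inverted to estimate the root-mean-square molecular speed from two macroscopic quantities. A minimal sketch, using assumed sea-level values for air (not given in the text):

```python
import math

# Sketch: v_rms = sqrt(3 P / rho), obtained by inverting P = rho v^2 / 3.
P = 101325.0   # pressure, Pa (standard atmosphere)
rho = 1.225    # density of air, kg/m^3 (sea level, 15 C)

v_rms = math.sqrt(3.0 * P / rho)
print(f"v_rms of air ~ {v_rms:.0f} m/s")  # about 500 m/s
```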
The relativistic expression for this formula is
{\displaystyle P={\frac {2\rho c^{2}}{3}}\left({\left(1-{\overline {v^{2}}}/c^{2}\right)}^{-1/2}-1\right),}
where {\displaystyle c} is the speed of light. In the limit of small speeds, the expression becomes {\displaystyle P\approx \rho {\overline {v^{2}}}/3}.
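The small-speed limit can be checked numerically: expanding the Lorentz factor to first order in v²/c² recovers the classical formula. A sketch with an assumed room-temperature gas (values illustrative only):

```python
import math

# Sketch: the relativistic pressure reduces to rho * v2 / 3 when v << c.
c = 2.998e8  # speed of light, m/s

def pressure_relativistic(rho, v2_mean):
    gamma = (1.0 - v2_mean / c**2) ** -0.5
    return (2.0 * rho * c**2 / 3.0) * (gamma - 1.0)

rho = 1.2            # gas density, kg/m^3 (illustrative)
v2 = 500.0 ** 2      # mean-square speed, m^2/s^2 (room-temperature scale)
P_rel = pressure_relativistic(rho, v2)
P_classical = rho * v2 / 3.0
print(P_rel / P_classical)   # ratio approaches 1 as v/c -> 0
```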
=== Temperature and kinetic energy ===
Rewriting the above result for the pressure as {\textstyle PV={\frac {1}{3}}Nmv^{2}}, we may combine it with the ideal gas law
{\displaystyle PV=Nk_{\mathrm {B} }T,}
where {\displaystyle k_{\mathrm {B} }} is the Boltzmann constant and {\displaystyle T} is the absolute temperature defined by the ideal gas law, to obtain
{\displaystyle k_{\mathrm {B} }T={\frac {1}{3}}mv^{2},}
which leads to a simplified expression of the average translational kinetic energy per molecule,
{\displaystyle {\frac {1}{2}}mv^{2}={\frac {3}{2}}k_{\mathrm {B} }T.}
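Solving this expression for the rms speed gives {\textstyle v={\sqrt {3k_{\mathrm {B} }T/m}}}. A minimal sketch, evaluated for nitrogen at an assumed 300 K:

```python
import math

# Sketch: solving (1/2) m v^2 = (3/2) k_B T for the rms molecular speed.
k_B = 1.380649e-23   # Boltzmann constant, J/K

def v_rms(m, T):
    """Root-mean-square speed (m/s) for particle mass m (kg) at temperature T (K)."""
    return math.sqrt(3.0 * k_B * T / m)

m_N2 = 28.0 * 1.6605e-27   # mass of an N2 molecule, kg
print(f"N2 at 300 K: v_rms ~ {v_rms(m_N2, 300.0):.0f} m/s")  # about 517 m/s
```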
The translational kinetic energy of the system is {\displaystyle N} times that of a molecule, namely {\textstyle K_{\text{t}}={\frac {1}{2}}Nmv^{2}}. The temperature {\displaystyle T} is related to the translational kinetic energy by the description above, resulting in
{\displaystyle T={\frac {2}{3}}{\frac {K_{\text{t}}}{Nk_{\mathrm {B} }}},} (2)
which becomes
{\displaystyle K_{\text{t}}={\frac {3}{2}}Nk_{\mathrm {B} }T.} (3)
Equation (3) is one important result of the kinetic theory: The average molecular kinetic energy is proportional to the ideal gas law's absolute temperature.
From equations (1) and (3), we have
{\displaystyle PV=Nk_{\mathrm {B} }T.} (4)
Thus, the product of pressure and volume per mole is proportional to the average translational molecular kinetic energy.
Equations (1) and (4) are called the "classical results"; they can also be derived from statistical mechanics.
The equipartition theorem requires that kinetic energy is partitioned equally between all kinetic degrees of freedom, D. A monatomic gas is axially symmetric about each spatial axis, so that D = 3 comprising translational motion along each axis. A diatomic gas is axially symmetric about only one axis, so that D = 5, comprising translational motion along three axes and rotational motion along two axes. A polyatomic gas, like water, is not radially symmetric about any axis, resulting in D = 6, comprising 3 translational and 3 rotational degrees of freedom.
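The degree-of-freedom counts above translate directly into molar heat capacities at constant volume via C_V = (D/2)R, a standard consequence of equipartition stated here only as an illustration:

```python
# Sketch: C_V = (D/2) R for the degree-of-freedom counts listed above.
R = 8.314  # ideal gas constant, J/(mol K)

degrees_of_freedom = {
    "monatomic (e.g. He)": 3,    # 3 translational
    "diatomic (e.g. N2)": 5,     # 3 translational + 2 rotational
    "polyatomic (e.g. H2O)": 6,  # 3 translational + 3 rotational
}
for gas, D in degrees_of_freedom.items():
    print(f"{gas}: D = {D}, C_V = {D / 2 * R:.2f} J/(mol K)")
```

These values hold only while vibrational modes stay frozen out, consistent with the temperature dependence discussed below.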
Because the equipartition theorem requires that kinetic energy is partitioned equally, the total kinetic energy is
{\displaystyle K=DK_{\text{t}}={\frac {D}{2}}Nmv^{2}.}
Thus, the energy added to the system per gas particle kinetic degree of freedom is
{\displaystyle {\frac {K}{ND}}={\frac {1}{2}}k_{\text{B}}T.}
Therefore, the kinetic energy per kelvin of one mole of monatomic ideal gas (D = 3) is
{\displaystyle K={\frac {D}{2}}k_{\text{B}}N_{\text{A}}={\frac {3}{2}}R,}
where
{\displaystyle N_{\text{A}}}
is the Avogadro constant, and R is the ideal gas constant.
Thus, the ratio of the kinetic energy to the absolute temperature of an ideal monatomic gas can be calculated easily:
per mole: 12.47 J/K
per molecule: 20.7 yJ/K = 129 μeV/K
At standard temperature (273.15 K), the kinetic energy can also be obtained:
per mole: 3406 J
per molecule: 5.65 zJ = 35.2 meV.
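The figures above can be checked directly from the constants. A minimal sketch, assuming CODATA values for the Boltzmann and Avogadro constants:

```python
# Numerical check of the kinetic-energy figures quoted above,
# using CODATA values for k_B and N_A (assumed here).
k_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol
R = k_B * N_A        # ideal gas constant, J/(mol K)

per_mole_per_K = 1.5 * R        # (3/2) R
per_molecule_per_K = 1.5 * k_B  # (3/2) k_B

T = 273.15  # standard temperature, K
per_mole = per_mole_per_K * T
per_molecule = per_molecule_per_K * T

print(f"{per_mole_per_K:.2f} J/K per mole")          # ~12.47 J/K
print(f"{per_molecule_per_K:.3e} J/K per molecule")  # ~2.07e-23 J/K = 20.7 yJ/K
print(f"{per_mole:.0f} J per mole at 273.15 K")      # ~3407 J
print(f"{per_molecule:.2e} J per molecule")          # ~5.66e-21 J
```

The tiny differences from the quoted values (3406 J, 5.65 zJ) are rounding in the article's figures.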
At higher temperatures (typically thousands of kelvins), vibrational modes become active to provide additional degrees of freedom, creating a temperature-dependence on D and the total molecular energy. Quantum statistical mechanics is needed to accurately compute these contributions.
=== Collisions with container wall ===
For an ideal gas in equilibrium, the rate of collisions with the container wall and velocity distribution of particles hitting the container wall can be calculated based on naive kinetic theory, and the results can be used for analyzing effusive flow rates, which is useful in applications such as the gaseous diffusion method for isotope separation.
Assume that in the container, the number density (number per unit volume) is
{\displaystyle n=N/V}
and that the particles obey Maxwell's velocity distribution:
{\displaystyle f_{\text{Maxwell}}(v_{x},v_{y},v_{z})\,dv_{x}\,dv_{y}\,dv_{z}=\left({\frac {m}{2\pi k_{\text{B}}T}}\right)^{3/2}e^{-{\frac {mv^{2}}{2k_{\text{B}}T}}}\,dv_{x}\,dv_{y}\,dv_{z}}
Then for a small area {\displaystyle dA} on the container wall, a particle with speed {\displaystyle v} at angle {\displaystyle \theta } from the normal of the area {\displaystyle dA} will collide with the area within time interval {\displaystyle dt}, if it is within the distance {\displaystyle v\,dt} from the area {\displaystyle dA}. Therefore, all the particles with speed {\displaystyle v} at angle {\displaystyle \theta } from the normal that can reach area {\displaystyle dA} within time interval {\displaystyle dt} are contained in the tilted pipe with a height of {\displaystyle v\cos(\theta )\,dt} and a volume of {\displaystyle v\cos(\theta )\,dA\,dt}.
The total number of particles that reach area {\displaystyle dA} within time interval {\displaystyle dt} also depends on the velocity distribution; all in all, it calculates to be
{\displaystyle nv\cos(\theta )\,dA\,dt\times \left({\frac {m}{2\pi k_{\text{B}}T}}\right)^{3/2}e^{-{\frac {mv^{2}}{2k_{\text{B}}T}}}\left(v^{2}\sin(\theta )\,dv\,d\theta \,d\phi \right).}
Integrating this over all appropriate velocities within the constraint {\displaystyle v>0}, {\textstyle 0<\theta <{\frac {\pi }{2}}}, {\displaystyle 0<\phi <2\pi } yields the number of atomic or molecular collisions with a wall of a container per unit area per unit time:
{\displaystyle J_{\text{collision}}={\frac {\displaystyle \int _{0}^{\pi /2}\cos(\theta )\sin(\theta )\,d\theta }{\displaystyle \int _{0}^{\pi }\sin(\theta )\,d\theta }}\times n{\bar {v}}={\frac {1}{4}}n{\bar {v}}={\frac {n}{4}}{\sqrt {\frac {8k_{\mathrm {B} }T}{\pi m}}}.}
This quantity is also known as the "impingement rate" in vacuum physics. Note that to calculate the average speed {\displaystyle {\bar {v}}} of the Maxwell velocity distribution, one has to integrate over {\displaystyle v>0}, {\displaystyle 0<\theta <\pi }, {\displaystyle 0<\phi <2\pi }.
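As a numerical illustration of the impingement rate, consider a nitrogen-like gas near room conditions; the gas choice, temperature, and pressure here are illustrative assumptions, not values from the text:

```python
import math

# Order-of-magnitude sketch of the impingement rate J = n*v_bar/4
# for a nitrogen-like gas at assumed room conditions.
k_B = 1.380649e-23           # J/K
T = 300.0                    # K (assumed)
P = 101325.0                 # Pa (assumed)
m = 28.0e-3 / 6.02214076e23  # mass of one N2-like molecule, kg

n = P / (k_B * T)                               # number density, 1/m^3
v_bar = math.sqrt(8 * k_B * T / (math.pi * m))  # mean speed, m/s
J = n * v_bar / 4                               # impingement rate, 1/(m^2 s)

print(f"n     = {n:.3e} m^-3")       # ~2.4e25
print(f"v_bar = {v_bar:.0f} m/s")    # ~476
print(f"J     = {J:.2e} m^-2 s^-1")  # ~2.9e27
```

Roughly 3x10^27 molecules strike each square metre of wall per second, which is why even modest leaks matter in vacuum practice.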
The momentum transfer to the container wall from particles hitting the area {\displaystyle dA} with speed {\displaystyle v} at angle {\displaystyle \theta } from the normal, in time interval {\displaystyle dt}, is
{\displaystyle [2mv\cos(\theta )]\times nv\cos(\theta )\,dA\,dt\times \left({\frac {m}{2\pi k_{\text{B}}T}}\right)^{3/2}e^{-{\frac {mv^{2}}{2k_{\text{B}}T}}}\left(v^{2}\sin(\theta )\,dv\,d\theta \,d\phi \right).}
Integrating this over all appropriate velocities within the constraint {\displaystyle v>0}, {\textstyle 0<\theta <{\frac {\pi }{2}}}, {\displaystyle 0<\phi <2\pi } yields the pressure (consistent with the ideal gas law):
{\displaystyle P={\frac {\displaystyle 2\int _{0}^{\pi /2}\cos ^{2}(\theta )\sin(\theta )\,d\theta }{\displaystyle \int _{0}^{\pi }\sin(\theta )\,d\theta }}\times nmv_{\text{rms}}^{2}={\frac {1}{3}}nmv_{\text{rms}}^{2}={\frac {2}{3}}n\langle E_{\text{kin}}\rangle =nk_{\mathrm {B} }T}
If this small area {\displaystyle A} is punched to become a small hole, the effusive flow rate will be
{\displaystyle \Phi _{\text{effusion}}=J_{\text{collision}}A=nA{\sqrt {\frac {k_{\mathrm {B} }T}{2\pi m}}}.}
Combined with the ideal gas law, this yields
{\displaystyle \Phi _{\text{effusion}}={\frac {PA}{\sqrt {2\pi mk_{\mathrm {B} }T}}}.}
The above expression is consistent with Graham's law.
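Because the effusion rate scales as the inverse square root of the molecular mass, Graham's law gives a small separation factor between isotopologues. A sketch for the classic gaseous-diffusion example, uranium hexafluoride (the specific species and molar masses are illustrative assumptions, not from the text above):

```python
import math

# Graham's law: the lighter species effuses faster by sqrt(M_heavy / M_light).
M_235 = 235.04 + 6 * 18.998  # g/mol, assumed molar mass of 235-UF6
M_238 = 238.05 + 6 * 18.998  # g/mol, assumed molar mass of 238-UF6

separation_factor = math.sqrt(M_238 / M_235)
print(f"{separation_factor:.5f}")  # ~1.0043 per stage
```

The per-stage enrichment is tiny, which is why gaseous diffusion plants cascade thousands of stages.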
To calculate the velocity distribution of particles hitting this small area, we must take into account that all the particles with {\displaystyle (v,\theta ,\phi )} that hit the area {\displaystyle dA} within the time interval {\displaystyle dt} are contained in the tilted pipe with a height of {\displaystyle v\cos(\theta )\,dt} and a volume of {\displaystyle v\cos(\theta )\,dA\,dt}; therefore, compared to the Maxwell distribution, the velocity distribution will have an extra factor of {\displaystyle v\cos \theta }:
{\displaystyle f(v,\theta ,\phi )\,dv\,d\theta \,d\phi =\lambda v\cos {\theta }\left({\frac {m}{2\pi k_{\mathrm {B} }T}}\right)^{3/2}e^{-{\frac {mv^{2}}{2k_{\mathrm {B} }T}}}(v^{2}\sin {\theta }\,dv\,d\theta \,d\phi )}
with the constraint {\textstyle v>0}, {\textstyle 0<\theta <{\frac {\pi }{2}}}, {\displaystyle 0<\phi <2\pi }. The constant {\displaystyle \lambda } can be determined by the normalization condition {\textstyle \int f(v,\theta ,\phi )\,dv\,d\theta \,d\phi =1} to be {\textstyle 4/{\bar {v}}}, and overall:
{\displaystyle f(v,\theta ,\phi )\,dv\,d\theta \,d\phi ={\frac {1}{2\pi }}\left({\frac {m}{k_{\mathrm {B} }T}}\right)^{2}e^{-{\frac {mv^{2}}{2k_{\mathrm {B} }T}}}(v^{3}\sin {\theta }\cos {\theta }\,dv\,d\theta \,d\phi );\quad v>0,\,0<\theta <{\frac {\pi }{2}},\,0<\phi <2\pi }
=== Speed of molecules ===
From the kinetic energy formula it can be shown that
{\displaystyle v_{\text{p}}={\sqrt {2\cdot {\frac {k_{\mathrm {B} }T}{m}}}},}
{\displaystyle {\bar {v}}={\frac {2}{\sqrt {\pi }}}v_{\text{p}}={\sqrt {{\frac {8}{\pi }}\cdot {\frac {k_{\mathrm {B} }T}{m}}}},}
{\displaystyle v_{\text{rms}}={\sqrt {\frac {3}{2}}}v_{\text{p}}={\sqrt {3\cdot {\frac {k_{\mathrm {B} }T}{m}}}},}
where v is in m/s, T is in kelvin, and m is the mass of one molecule of gas in kg. The most probable (or mode) speed {\displaystyle v_{\text{p}}} is 81.6% of the root-mean-square speed {\displaystyle v_{\text{rms}}}, and the mean (arithmetic mean, or average) speed {\displaystyle {\bar {v}}} is 92.1% of the rms speed (isotropic distribution of speeds).
See: Average, Root-mean-square speed, Arithmetic mean, Mean, Mode (statistics)
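The two percentages quoted above are fixed ratios that follow directly from the speed formulas and are independent of temperature and molecular mass. A quick check:

```python
import math

# Ratios between the characteristic speeds of the Maxwell distribution:
# v_p = sqrt(2 kT/m), v_bar = sqrt(8 kT/(pi m)), v_rms = sqrt(3 kT/m).
vp_over_vrms = math.sqrt(2.0 / 3.0)                # v_p / v_rms
vbar_over_vrms = math.sqrt(8.0 / (3.0 * math.pi))  # v_bar / v_rms

print(f"v_p / v_rms   = {vp_over_vrms:.3f}")   # ~0.816, i.e. 81.6%
print(f"v_bar / v_rms = {vbar_over_vrms:.3f}") # ~0.921, i.e. 92.1%
```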
=== Mean free path ===
In the kinetic theory of gases, the mean free path is the average distance traveled by a molecule before it makes its first collision. Let {\displaystyle \sigma } be the collision cross section of one molecule colliding with another. As in the previous section, the number density {\displaystyle n} is defined as the number of molecules per (extensive) volume, or {\displaystyle n=N/V}. The collision cross section per volume, or collision cross section density, is {\displaystyle n\sigma }, and it is related to the mean free path {\displaystyle \ell } by
{\displaystyle \ell ={\frac {1}{n\sigma {\sqrt {2}}}}}
Notice that the unit of the collision cross section per volume {\displaystyle n\sigma } is reciprocal of length.
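To get a feel for the scale of the mean free path, here is a rough estimate for a nitrogen-like gas; the kinetic diameter and the conditions are illustrative assumptions, not values given in the text:

```python
import math

# Mean-free-path estimate from l = 1/(sqrt(2) * n * sigma).
k_B = 1.380649e-23  # J/K
T = 300.0           # K (assumed)
P = 101325.0        # Pa (assumed)
d = 3.7e-10         # assumed kinetic diameter, m (roughly N2)

n = P / (k_B * T)         # number density, 1/m^3
sigma = math.pi * d ** 2  # hard-sphere collision cross section, m^2
ell = 1.0 / (math.sqrt(2.0) * n * sigma)  # mean free path, m

print(f"ell = {ell:.1e} m")  # ~6.7e-8 m, a few tens of nanometres
```

At atmospheric pressure a molecule travels only tens of nanometres between collisions, far less than typical container dimensions.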
== Transport properties ==
The kinetic theory of gases deals not only with gases in thermodynamic equilibrium, but also, very importantly, with gases not in thermodynamic equilibrium. This means using kinetic theory to consider what are known as "transport properties", such as viscosity, thermal conductivity, mass diffusivity and thermal diffusion.
In its most basic form, kinetic gas theory is only applicable to dilute gases. The extension of kinetic gas theory to dense gas mixtures, Revised Enskog Theory, was developed in 1983–1987 by E. G. D. Cohen, J. M. Kincaid and M. López de Haro, building on work by H. van Beijeren and M. H. Ernst.
=== Viscosity and kinetic momentum ===
In books on elementary kinetic theory one can find results for dilute gas modeling that are used in many fields. Derivation of the kinetic model for shear viscosity usually starts by considering a Couette flow where two parallel plates are separated by a gas layer. The upper plate is moving at a constant velocity to the right due to a force F. The lower plate is stationary, and an equal and opposite force must therefore be acting on it to keep it at rest. The molecules in the gas layer have a forward velocity component {\displaystyle u} which increases uniformly with distance {\displaystyle y} above the lower plate. The non-equilibrium flow is superimposed on a Maxwell–Boltzmann equilibrium distribution of molecular motions.
Inside a dilute gas in a Couette flow setup, let {\displaystyle u_{0}} be the forward velocity of the gas at a horizontal flat layer (labeled as {\displaystyle y=0}); {\displaystyle u_{0}} is along the horizontal direction. The number of molecules arriving at the area {\displaystyle dA} on one side of the gas layer, with speed {\displaystyle v} at angle {\displaystyle \theta } from the normal, in time interval {\displaystyle dt} is
{\displaystyle nv\cos(\theta )\,dA\,dt\times \left({\frac {m}{2\pi k_{\mathrm {B} }T}}\right)^{3/2}\,e^{-{\frac {mv^{2}}{2k_{\mathrm {B} }T}}}(v^{2}\sin {\theta }\,dv\,d\theta \,d\phi )}
These molecules made their last collision at {\displaystyle y=\pm \ell \cos \theta }, where {\displaystyle \ell } is the mean free path. Each molecule will contribute a forward momentum of
{\displaystyle p_{x}^{\pm }=m\left(u_{0}\pm \ell \cos \theta {\frac {du}{dy}}\right),}
where the plus sign applies to molecules from above, and the minus sign to those from below. Note that the forward velocity gradient {\displaystyle du/dy} can be considered to be constant over a distance of one mean free path.
Integrating over all appropriate velocities within the constraint {\displaystyle v>0}, {\textstyle 0<\theta <{\frac {\pi }{2}}}, {\displaystyle 0<\phi <2\pi } yields the forward momentum transfer per unit time per unit area (also known as shear stress):
{\displaystyle \tau ^{\pm }={\frac {1}{4}}{\bar {v}}n\cdot m\left(u_{0}\pm {\frac {2}{3}}\ell {\frac {du}{dy}}\right)}
The net rate of momentum per unit area that is transported across the imaginary surface is thus
{\displaystyle \tau =\tau ^{+}-\tau ^{-}={\frac {1}{3}}{\bar {v}}nm\cdot \ell {\frac {du}{dy}}}
Combining the above kinetic equation with Newton's law of viscosity
{\displaystyle \tau =\eta {\frac {du}{dy}}}
gives the equation for shear viscosity, which is usually denoted {\displaystyle \eta _{0}} when it is a dilute gas:
{\displaystyle \eta _{0}={\frac {1}{3}}{\bar {v}}nm\ell }
Combining this equation with the equation for mean free path gives
{\displaystyle \eta _{0}={\frac {1}{3{\sqrt {2}}}}{\frac {m{\bar {v}}}{\sigma }}}
Maxwell-Boltzmann distribution gives the average (equilibrium) molecular speed as
{\displaystyle {\bar {v}}={\frac {2}{\sqrt {\pi }}}v_{p}=2{\sqrt {{\frac {2}{\pi }}{\frac {k_{\mathrm {B} }T}{m}}}}}
where {\displaystyle v_{p}} is the most probable speed. We note that
{\displaystyle k_{\text{B}}N_{\text{A}}=R\quad {\text{and}}\quad M=mN_{\text{A}}}
and insert the velocity in the viscosity equation above. This gives the well-known equation (with {\displaystyle \sigma } subsequently estimated below) for shear viscosity for dilute gases:
{\displaystyle \eta _{0}={\frac {2}{3{\sqrt {\pi }}}}\cdot {\frac {\sqrt {mk_{\mathrm {B} }T}}{\sigma }}={\frac {2}{3{\sqrt {\pi }}}}\cdot {\frac {\sqrt {MRT}}{\sigma N_{\text{A}}}}}
and {\displaystyle M} is the molar mass. The equation above presupposes that the gas density is low (i.e. the pressure is low). This implies that the transport of momentum through the gas due to the translational motion of molecules is much larger than the transport due to momentum being transferred between molecules during collisions. The transfer of momentum between molecules is explicitly accounted for in Revised Enskog theory, which relaxes the requirement of a gas being dilute. The viscosity equation further presupposes that there is only one type of gas molecule, and that the gas molecules are perfectly elastic, hard-core particles of spherical shape. This assumption of elastic, hard-core spherical molecules, like billiard balls, implies that the collision cross section of one molecule can be estimated by
{\displaystyle \sigma =\pi \left(2r\right)^{2}=\pi d^{2}}
The radius {\displaystyle r} is called the collision cross section radius or kinetic radius, and the diameter {\displaystyle d} is called the collision cross section diameter or kinetic diameter of a molecule in a monomolecular gas. There is no simple general relation between the collision cross section and the hard-core size of the (fairly spherical) molecule. The relation depends on the shape of the potential energy of the molecule. For a real spherical molecule (i.e. a noble gas atom or a reasonably spherical molecule) the interaction potential is more like the Lennard-Jones potential or Morse potential, which have a negative part that attracts the other molecule from distances longer than the hard-core radius. The radius for zero Lennard-Jones potential may then be used as a rough estimate for the kinetic radius. However, using this estimate will typically lead to an erroneous temperature dependency of the viscosity. For such interaction potentials, significantly more accurate results are obtained by numerical evaluation of the required collision integrals.
The expression for viscosity obtained from Revised Enskog Theory reduces to the above expression in the limit of infinite dilution, and can be written as
{\displaystyle \eta =(1+\alpha _{\eta })\eta _{0}+\eta _{c}}
where {\displaystyle \alpha _{\eta }} is a term that tends to zero in the limit of infinite dilution that accounts for excluded volume, and {\displaystyle \eta _{c}} is a term accounting for the transfer of momentum over a non-zero distance between particles during a collision.
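A rough numerical sketch of the hard-sphere viscosity formula derived above; the gas (nitrogen-like), temperature, and cross section are illustrative assumptions:

```python
import math

# Hard-sphere estimate of the dilute-gas shear viscosity:
# eta_0 = (2 / (3 sqrt(pi))) * sqrt(m k_B T) / sigma.
k_B = 1.380649e-23                # J/K
T = 300.0                         # K (assumed)
m = 28.0e-3 / 6.02214076e23       # kg per molecule (N2-like, assumed)
sigma = math.pi * (3.7e-10) ** 2  # assumed cross section, m^2

eta0 = (2.0 / (3.0 * math.sqrt(math.pi))) * math.sqrt(m * k_B * T) / sigma
print(f"eta_0 ~ {eta0:.1e} Pa s")  # ~1.2e-5
```

This lands within a factor of two of the measured viscosity of nitrogen (~1.8e-5 Pa s), which is about the accuracy one can expect from the hard-sphere picture.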
=== Thermal conductivity and heat flux ===
Following a similar logic as above, one can derive the kinetic model for thermal conductivity of a dilute gas:
Consider two parallel plates separated by a gas layer. Both plates have uniform temperatures, and are so massive compared to the gas layer that they can be treated as thermal reservoirs. The upper plate has a higher temperature than the lower plate. The molecules in the gas layer have a molecular kinetic energy {\displaystyle \varepsilon } which increases uniformly with distance {\displaystyle y} above the lower plate. The non-equilibrium energy flow is superimposed on a Maxwell–Boltzmann equilibrium distribution of molecular motions.
Let {\displaystyle \varepsilon _{0}} be the molecular kinetic energy of the gas at an imaginary horizontal surface inside the gas layer. The number of molecules arriving at an area {\displaystyle dA} on one side of the gas layer, with speed {\displaystyle v} at angle {\displaystyle \theta } from the normal, in time interval {\displaystyle dt} is
{\displaystyle nv\cos(\theta )\,dA\,dt\times \left({\frac {m}{2\pi k_{\mathrm {B} }T}}\right)^{3/2}e^{-{\frac {mv^{2}}{2k_{\mathrm {B} }T}}}(v^{2}\sin(\theta )\,dv\,d\theta \,d\phi )}
These molecules made their last collision at a distance {\displaystyle \ell \cos \theta } above and below the gas layer, and each will contribute a molecular kinetic energy of
{\displaystyle \varepsilon ^{\pm }=\left(\varepsilon _{0}\pm mc_{v}\ell \cos \theta \,{\frac {dT}{dy}}\right),}
where {\displaystyle c_{v}} is the specific heat capacity. Again, the plus sign applies to molecules from above, and the minus sign to those from below. Note that the temperature gradient {\displaystyle dT/dy} can be considered to be constant over a distance of one mean free path.
Integrating over all appropriate velocities within the constraint {\displaystyle v>0}, {\textstyle 0<\theta <{\frac {\pi }{2}}}, {\displaystyle 0<\phi <2\pi } yields the energy transfer per unit time per unit area (also known as heat flux):
{\displaystyle q_{y}^{\pm }=-{\frac {1}{4}}{\bar {v}}n\cdot \left(\varepsilon _{0}\pm {\frac {2}{3}}mc_{v}\ell {\frac {dT}{dy}}\right)}
Note that the energy transfer from above is in the {\displaystyle -y} direction, hence the overall minus sign in the equation. The net heat flux across the imaginary surface is thus
{\displaystyle q=q_{y}^{+}-q_{y}^{-}=-{\frac {1}{3}}{\bar {v}}nmc_{v}\ell \,{\frac {dT}{dy}}}
Combining the above kinetic equation with Fourier's law
{\displaystyle q=-\kappa \,{\frac {dT}{dy}}}
gives the equation for thermal conductivity, which is usually denoted {\displaystyle \kappa _{0}} when it is a dilute gas:
{\displaystyle \kappa _{0}={\frac {1}{3}}{\bar {v}}nmc_{v}\ell }
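Because this expression shares the factor (1/3) v̄ n m ℓ with the viscosity formula derived earlier, the elementary model predicts κ₀/η₀ = c_v exactly. A numerical check with illustrative (assumed) nitrogen-like inputs:

```python
import math

# The kinetic formulas kappa_0 = (1/3) v_bar n m c_v l and
# eta_0 = (1/3) v_bar n m l imply kappa_0 / eta_0 = c_v.
k_B = 1.380649e-23
T, P = 300.0, 101325.0            # assumed conditions
m = 28.0e-3 / 6.02214076e23       # assumed molecular mass, kg
sigma = math.pi * (3.7e-10) ** 2  # assumed cross section, m^2
c_v = 2.5 * k_B / m               # assumed diatomic (D = 5) specific heat, J/(kg K)

n = P / (k_B * T)
v_bar = math.sqrt(8 * k_B * T / (math.pi * m))
ell = 1 / (math.sqrt(2) * n * sigma)

eta0 = v_bar * n * m * ell / 3
kappa0 = v_bar * n * m * c_v * ell / 3
print(f"kappa_0 / eta_0 = {kappa0 / eta0:.1f} J/(kg K)")  # equals c_v
```

Real gases show κ₀/η₀ somewhat larger than c_v (the Eucken correction), because faster molecules carry disproportionately more energy.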
Similarly to viscosity, Revised Enskog theory yields an expression for thermal conductivity that reduces to the above expression in the limit of infinite dilution, and which can be written as
{\displaystyle \kappa =\alpha _{\kappa }\kappa _{0}+\kappa _{c}}
where {\displaystyle \alpha _{\kappa }} is a term that tends to unity in the limit of infinite dilution, accounting for excluded volume, and {\displaystyle \kappa _{c}} is a term accounting for the transfer of energy across a non-zero distance between particles during a collision.
=== Diffusion coefficient and diffusion flux ===
Following a similar logic as above, one can derive the kinetic model for mass diffusivity of a dilute gas:
Consider a steady diffusion between two regions of the same gas with perfectly flat and parallel boundaries separated by a layer of the same gas. Both regions have uniform number densities, but the upper region has a higher number density than the lower region. In the steady state, the number density at any point is constant (that is, independent of time). However, the number density {\displaystyle n} in the layer increases uniformly with distance {\displaystyle y} above the lower plate. The non-equilibrium molecular flow is superimposed on a Maxwell–Boltzmann equilibrium distribution of molecular motions.
Let {\displaystyle n_{0}} be the number density of the gas at an imaginary horizontal surface inside the layer. The number of molecules arriving at an area {\displaystyle dA} on one side of the gas layer, with speed {\displaystyle v} at angle {\displaystyle \theta } from the normal, in time interval {\displaystyle dt} is
{\displaystyle nv\cos(\theta )\,dA\,dt\times \left({\frac {m}{2\pi k_{\mathrm {B} }T}}\right)^{3/2}e^{-{\frac {mv^{2}}{2k_{\mathrm {B} }T}}}(v^{2}\sin(\theta )\,dv\,d\theta \,d\phi )}
These molecules made their last collision at a distance {\displaystyle \ell \cos \theta } above and below the gas layer, where the local number density is
{\displaystyle n^{\pm }=\left(n_{0}\pm \ell \cos \theta \,{\frac {dn}{dy}}\right)}
Again, the plus sign applies to molecules from above, and the minus sign to those from below. Note that the number density gradient {\displaystyle dn/dy} can be considered to be constant over a distance of one mean free path.
Integrating over all appropriate velocities within the constraint {\displaystyle v>0}, {\textstyle 0<\theta <{\frac {\pi }{2}}}, {\displaystyle 0<\phi <2\pi } yields the molecular transfer per unit time per unit area (also known as diffusion flux):
{\displaystyle J_{y}^{\pm }=-{\frac {1}{4}}{\bar {v}}\cdot \left(n_{0}\pm {\frac {2}{3}}\ell \,{\frac {dn}{dy}}\right)}
Note that the molecular transfer from above is in the {\displaystyle -y} direction, hence the overall minus sign in the equation. The net diffusion flux across the imaginary surface is thus
{\displaystyle J=J_{y}^{+}-J_{y}^{-}=-{\frac {1}{3}}{\bar {v}}\ell {\frac {dn}{dy}}}
Combining the above kinetic equation with Fick's first law of diffusion
{\displaystyle J=-D{\frac {dn}{dy}}}
gives the equation for mass diffusivity, which is usually denoted {\displaystyle D_{0}} when it is a dilute gas:
{\displaystyle D_{0}={\frac {1}{3}}{\bar {v}}\ell }
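A quick order-of-magnitude estimate of D₀ = (1/3) v̄ ℓ, reusing the same illustrative nitrogen-like assumptions as in the sketches above:

```python
import math

# Self-diffusivity estimate D_0 = (1/3) * v_bar * l.
k_B = 1.380649e-23
T, P = 300.0, 101325.0            # assumed conditions
m = 28.0e-3 / 6.02214076e23       # assumed molecular mass, kg
sigma = math.pi * (3.7e-10) ** 2  # assumed cross section, m^2

n = P / (k_B * T)
v_bar = math.sqrt(8 * k_B * T / (math.pi * m))
ell = 1 / (math.sqrt(2) * n * sigma)

D0 = v_bar * ell / 3
print(f"D_0 ~ {D0:.1e} m^2/s")  # ~1.1e-5
```

This is the right order of magnitude for gas-phase diffusivities (typically 1e-5 to 1e-4 m^2/s at atmospheric pressure).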
The corresponding expression obtained from Revised Enskog Theory may be written as
{\displaystyle D=\alpha _{D}D_{0}}
where {\displaystyle \alpha _{D}} is a factor that tends to unity in the limit of infinite dilution, which accounts for excluded volume and the variation of chemical potentials with density.
== Detailed balance ==
=== Fluctuation and dissipation ===
The kinetic theory of gases entails that due to the microscopic reversibility of the gas particles' detailed dynamics, the system must obey the principle of detailed balance. Specifically, the fluctuation-dissipation theorem applies to the Brownian motion (or diffusion) and the drag force, which leads to the Einstein–Smoluchowski equation:
{\displaystyle D=\mu \,k_{\text{B}}T,}
where
D is the mass diffusivity;
μ is the "mobility", or the ratio of the particle's terminal drift velocity to an applied force, μ = vd/F;
kB is the Boltzmann constant;
T is the absolute temperature.
Note that the mobility μ = vd/F can be calculated based on the viscosity of the gas; therefore, the Einstein–Smoluchowski equation also provides a relation between the mass diffusivity and the viscosity of the gas.
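As a sketch of how the relation connects diffusivity to viscosity, consider a Brownian particle suspended in a gas. The mobility model here (Stokes drag on a sphere), the particle radius, and the gas viscosity are all illustrative assumptions beyond the text above:

```python
import math

# Einstein-Smoluchowski relation D = mu * k_B * T, with the mobility
# taken from Stokes drag, mu = 1/(6 pi eta r) (an assumption).
k_B = 1.380649e-23  # J/K
T = 300.0           # K (assumed)
eta = 1.8e-5        # Pa s, roughly air (assumed)
r = 1.0e-6          # m, a micron-sized Brownian particle (assumed)

mu = 1.0 / (6.0 * math.pi * eta * r)  # mobility, m/(N s)
D = mu * k_B * T                      # diffusivity, m^2/s
print(f"D ~ {D:.1e} m^2/s")  # ~1.2e-11
```

The tiny diffusivity (compared to ~1e-5 m^2/s for the gas molecules themselves) reflects how strongly drag suppresses the Brownian motion of a large particle.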
=== Onsager reciprocal relations ===
The mathematical similarity between the expressions for shear viscosity, thermal conductivity and diffusion coefficient of the ideal (dilute) gas is not a coincidence; it is a direct result of the Onsager reciprocal relations (i.e. the detailed balance of the reversible dynamics of the particles), when applied to the convection (matter flow due to temperature gradient, and heat flow due to pressure gradient) and advection (matter flow due to the velocity of particles, and momentum transfer due to pressure gradient) of the ideal (dilute) gas.
== See also ==
Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy of equations
Boltzmann equation
Chapman–Enskog theory
Collision theory
Critical temperature
Gas laws
Heat
Interatomic potential
Magnetohydrodynamics
Maxwell–Boltzmann distribution
Mixmaster universe
Thermodynamics
Vicsek model
Vlasov equation
== References ==
=== Citations ===
=== Sources cited ===
== Further reading ==
Sydney Chapman and Thomas George Cowling (1939/1970), The Mathematical Theory of Non-uniform Gases: An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases, (first edition 1939, second edition 1952), third edition 1970 prepared in co-operation with D. Burnett, Cambridge University Press, London
Joseph Oakland Hirschfelder, Charles Francis Curtiss, and Robert Byron Bird (1964), Molecular Theory of Gases and Liquids, revised edition (Wiley-Interscience), ISBN 978-0471400653
Richard Lawrence Liboff (2003), Kinetic Theory: Classical, Quantum, and Relativistic Descriptions, third edition (Springer), ISBN 978-0-387-21775-8
Behnam Rahimi and Henning Struchtrup Archived 2021-07-25 at the Wayback Machine (2016), "Macroscopic and kinetic modelling of rarefied polyatomic gases", Journal of Fluid Mechanics, 806, 437–505, DOI 10.1017/jfm.2016.604
== External links ==
PHYSICAL CHEMISTRY – Gases
Early Theories of Gases
Thermodynamics Archived 2017-02-28 at the Wayback Machine - a chapter from an online textbook
Temperature and Pressure of an Ideal Gas: The Equation of State on Project PHYSNET.
Introduction to the kinetic molecular theory of gases, from The Upper Canada District School Board
Java animation illustrating the kinetic theory from University of Arkansas
Flowchart linking together kinetic theory concepts, from HyperPhysics
Interactive Java Applets allowing high school students to experiment and discover how various factors affect rates of chemical reactions.
https://www.youtube.com/watch?v=47bF13o8pb8&list=UUXrJjdDeqLgGjJbP1sMnH8A A demonstration apparatus for the thermal agitation in gases.
The Klein–Gordon equation (Klein–Fock–Gordon equation or sometimes Klein–Gordon–Fock equation) is a relativistic wave equation, related to the Schrödinger equation. It is named after Oskar Klein and Walter Gordon. It is second-order in space and time and manifestly Lorentz-covariant. It is a differential equation version of the relativistic energy–momentum relation
{\displaystyle E^{2}=(pc)^{2}+\left(m_{0}c^{2}\right)^{2}.}
== Statement ==
The Klein–Gordon equation can be written in different ways. The equation itself usually refers to the position space form, where it can be written in terms of separated space and time components
{\displaystyle (t,\mathbf {x} )} or by combining them into a four-vector {\displaystyle x^{\mu }=(ct,\mathbf {x} ).}
By Fourier transforming the field into momentum space, the solution is usually written in terms of a superposition of plane waves whose energy and momentum obey the energy-momentum dispersion relation from special relativity. Here, the Klein–Gordon equation is given for both of the two common metric signature conventions
{\displaystyle \eta _{\mu \nu }={\text{diag}}(\pm 1,\mp 1,\mp 1,\mp 1).}
Here, {\displaystyle \Box =\pm \eta ^{\mu \nu }\partial _{\mu }\partial _{\nu }} is the wave operator and {\displaystyle \nabla ^{2}} is the Laplace operator. The speed of light {\displaystyle c} and the Planck constant {\displaystyle \hbar } tend to clutter the equations, so they are often absorbed into natural units where {\displaystyle c=\hbar =1.}
Unlike the Schrödinger equation, the Klein–Gordon equation admits two values of ω for each k: One positive and one negative. Only by separating out the positive and negative frequency parts does one obtain an equation describing a relativistic wavefunction. For the time-independent case, the Klein–Gordon equation becomes
{\displaystyle \left[\nabla ^{2}-{\frac {m^{2}c^{2}}{\hbar ^{2}}}\right]\psi (\mathbf {r} )=0,}
which is formally the same as the homogeneous screened Poisson equation. In addition, the Klein–Gordon equation can also be represented as:
{\displaystyle {\hat {p}}^{\mu }{\hat {p}}_{\mu }\psi =m^{2}c^{2}\psi }
where the momentum operator is given as
{\displaystyle {\hat {p}}^{\mu }=i\hbar {\frac {\partial }{\partial x_{\mu }}}=i\hbar \left({\frac {\partial }{\partial (ct)}},-{\frac {\partial }{\partial x}},-{\frac {\partial }{\partial y}},-{\frac {\partial }{\partial z}}\right)=\left({\frac {\hat {E}}{c}},\mathbf {\hat {p}} \right).}
== Relevance ==
The equation is to be understood first as a classical continuous scalar field equation that can be quantized. The quantization process then introduces a quantum field whose quanta are spinless particles. Its theoretical relevance is similar to that of the Dirac equation.
The equation solutions include a scalar or pseudoscalar field. In the realm of particle physics, electromagnetic interactions can be incorporated, forming the topic of scalar electrodynamics, but the practical utility for particles like pions is limited. There is a second version of the equation for a complex scalar field that is theoretically important, being the equation of the Higgs boson. In the realm of condensed matter it can be used for many approximations of quasi-particles without spin.
The equation can be put into the form of a Schrödinger equation. In this form it is expressed as two coupled differential equations, each of first order in time. The solutions have two components, reflecting the charge degree of freedom in relativity. It admits a conserved quantity, but this is not positive definite. The wave function cannot therefore be interpreted as a probability amplitude. The conserved quantity is instead interpreted as electric charge, and the norm squared of the wave function is interpreted as a charge density. The equation describes all spinless particles with positive, negative, and zero charge.
Any solution of the free Dirac equation is, for each of its four components, a solution of the free Klein–Gordon equation. Although it was historically invented as a single-particle equation, the Klein–Gordon equation cannot form the basis of a consistent quantum relativistic one-particle theory, since any relativistic theory implies creation and annihilation of particles beyond a certain energy threshold.
== Solution for free particle ==
Here, the Klein–Gordon equation in natural units,
{\displaystyle (\Box +m^{2})\psi (x)=0}
, with the metric signature
η
μ
ν
=
diag
(
+
1
,
−
1
,
−
1
,
−
1
)
{\displaystyle \eta _{\mu \nu }={\text{diag}}(+1,-1,-1,-1)}
is solved by Fourier transformation. Inserting the Fourier transformation
ψ
(
x
)
=
∫
d
4
p
(
2
π
)
4
e
−
i
p
⋅
x
ψ
(
p
)
{\displaystyle \psi (x)=\int {\frac {\mathrm {d} ^{4}p}{(2\pi )^{4}}}e^{-ip\cdot x}\psi (p)}
and using orthogonality of the complex exponentials gives the dispersion relation
p
2
=
(
p
0
)
2
−
p
2
=
m
2
{\displaystyle p^{2}=(p^{0})^{2}-\mathbf {p} ^{2}=m^{2}}
This restricts the momenta to those that lie on shell, giving positive and negative energy solutions
p
0
=
±
E
(
p
)
where
E
(
p
)
=
p
2
+
m
2
.
{\displaystyle p^{0}=\pm E(\mathbf {p} )\quad {\text{where}}\quad E(\mathbf {p} )={\sqrt {\mathbf {p} ^{2}+m^{2}}}.}
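Numerically, the on-shell condition simply fixes the magnitude of p⁰ for a given three-momentum. A minimal sketch of this check (the mass value is illustrative, not from the text):

```python
import numpy as np

m = 0.140  # illustrative pion-like mass in GeV, natural units

def energy(p):
    """On-shell energy E(p) = sqrt(|p|^2 + m^2)."""
    p = np.asarray(p, dtype=float)
    return np.sqrt(np.dot(p, p) + m**2)

p = np.array([0.1, 0.0, 0.0])
E = energy(p)
# Both roots p0 = +E and p0 = -E satisfy the dispersion relation p^2 = m^2:
for p0 in (+E, -E):
    assert abs(p0**2 - np.dot(p, p) - m**2) < 1e-12
```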
For a new set of constants {\displaystyle C(p)}, the solution then becomes
{\displaystyle \psi (x)=\int {\frac {\mathrm {d} ^{4}p}{(2\pi )^{4}}}e^{ip\cdot x}C(p)\delta ((p^{0})^{2}-E(\mathbf {p} )^{2}).}
It is common to handle the positive and negative energy solutions by separating out the negative energies and working only with positive {\displaystyle p^{0}}:
{\displaystyle {\begin{aligned}\psi (x)=&\int {\frac {\mathrm {d} ^{4}p}{(2\pi )^{4}}}\delta ((p^{0})^{2}-E(\mathbf {p} )^{2})\left(A(p)e^{-ip^{0}x^{0}+ip^{i}x^{i}}+B(p)e^{+ip^{0}x^{0}+ip^{i}x^{i}}\right)\theta (p^{0})\\=&\int {\frac {\mathrm {d} ^{4}p}{(2\pi )^{4}}}\delta ((p^{0})^{2}-E(\mathbf {p} )^{2})\left(A(p)e^{-ip^{0}x^{0}+ip^{i}x^{i}}+B(-p)e^{+ip^{0}x^{0}-ip^{i}x^{i}}\right)\theta (p^{0})\\\rightarrow &\int {\frac {\mathrm {d} ^{4}p}{(2\pi )^{4}}}\delta ((p^{0})^{2}-E(\mathbf {p} )^{2})\left(A(p)e^{-ip\cdot x}+B(p)e^{+ip\cdot x}\right)\theta (p^{0})\\\end{aligned}}}
In the last step, {\displaystyle B(p)\rightarrow B(-p)} was renamed. Now we can perform the {\displaystyle p^{0}}-integration, picking up the positive frequency part from the delta function only:
{\displaystyle {\begin{aligned}\psi (x)&=\int {\frac {\mathrm {d} ^{4}p}{(2\pi )^{4}}}{\frac {\delta (p^{0}-E(\mathbf {p} ))}{2E(\mathbf {p} )}}\left(A(p)e^{-ip\cdot x}+B(p)e^{+ip\cdot x}\right)\theta (p^{0})\\&=\int \left.{\frac {\mathrm {d} ^{3}p}{(2\pi )^{3}}}{\frac {1}{2E(\mathbf {p} )}}\left(A(\mathbf {p} )e^{-ip\cdot x}+B(\mathbf {p} )e^{+ip\cdot x}\right)\right|_{p^{0}=+E(\mathbf {p} )}.\end{aligned}}}
This is commonly taken as a general solution to the free Klein–Gordon equation. Note that because the initial Fourier transformation contained only Lorentz-invariant quantities like {\displaystyle p\cdot x=p_{\mu }x^{\mu }}, the last expression is also a Lorentz-invariant solution to the Klein–Gordon equation. If one does not require Lorentz invariance, one can absorb the {\displaystyle 1/2E(\mathbf {p} )} factor into the coefficients {\displaystyle A(p)} and {\displaystyle B(p)}.
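One can verify directly that each on-shell plane-wave mode solves the equation. A small SymPy sketch of this check (our illustration, in natural units with signature (+, −, −, −)):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
p1, p2, p3, m = sp.symbols('p1 p2 p3 m', real=True, positive=True)

E = sp.sqrt(p1**2 + p2**2 + p3**2 + m**2)   # on-shell energy E(p)
phase = E*t - (p1*x + p2*y + p3*z)          # p.x with signature (+,-,-,-)

# Positive- and negative-frequency plane waves both satisfy (Box + m^2) psi = 0:
for psi in (sp.exp(-sp.I*phase), sp.exp(sp.I*phase)):
    box_psi = (sp.diff(psi, t, 2)
               - sp.diff(psi, x, 2) - sp.diff(psi, y, 2) - sp.diff(psi, z, 2))
    assert sp.simplify(box_psi + m**2*psi) == 0
```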
== History ==
The equation was named after the physicists Oskar Klein and Walter Gordon, who in 1926 proposed that it describes relativistic electrons. Vladimir Fock also discovered the equation independently in 1926, slightly after Klein's work: Klein's paper was received on 28 April 1926, Fock's on 30 July 1926, and Gordon's on 29 September 1926. Other authors making similar claims in that same year include Johann Kudar, Théophile de Donder and Frans-H. van den Dungen, and Louis de Broglie. Although it turned out that modeling the electron's spin required the Dirac equation, the Klein–Gordon equation correctly describes spinless relativistic composite particles, like the pion. On 4 July 2012, the European Organization for Nuclear Research (CERN) announced the discovery of the Higgs boson. Since the Higgs boson is a spin-zero particle, it is the first observed ostensibly elementary particle to be described by the Klein–Gordon equation. Further experimentation and analysis are required to discern whether the Higgs boson observed is that of the Standard Model or a more exotic, possibly composite, form.
The Klein–Gordon equation was first considered as a quantum wave equation by Erwin Schrödinger in his search for an equation describing de Broglie waves. The equation is found in his notebooks from late 1925, and he appears to have prepared a manuscript applying it to the hydrogen atom. Yet, because it fails to take into account the electron's spin, the equation predicts the hydrogen atom's fine structure incorrectly, including overestimating the overall magnitude of the splitting pattern by a factor of 4n/(2n − 1) for the n-th energy level. The relativistic spectrum of the Dirac equation is, however, easily recovered if the orbital-momentum quantum number l is replaced by the total angular-momentum quantum number j. In January 1926, Schrödinger submitted for publication instead his equation, a non-relativistic approximation that predicts the Bohr energy levels of hydrogen without fine structure.
In 1926, soon after the Schrödinger equation was introduced, Vladimir Fock wrote an article about its generalization for the case of magnetic fields, where forces were dependent on velocity, and independently derived this equation. Both Klein and Fock used Kaluza and Klein's method. Fock also determined the gauge theory for the wave equation. The Klein–Gordon equation for a free particle has a simple plane-wave solution.
== Derivation ==
The non-relativistic equation for the energy of a free particle is
{\displaystyle {\frac {\mathbf {p} ^{2}}{2m}}=E.}
By quantizing this, we get the non-relativistic Schrödinger equation for a free particle:
{\displaystyle {\frac {\mathbf {\hat {p}} ^{2}}{2m}}\psi ={\hat {E}}\psi ,}
where {\displaystyle \mathbf {\hat {p}} =-i\hbar \mathbf {\nabla } } is the momentum operator (∇ being the del operator), and {\displaystyle {\hat {E}}=i\hbar {\frac {\partial }{\partial t}}} is the energy operator.
The Schrödinger equation suffers from not being relativistically invariant, meaning that it is inconsistent with special relativity.
It is natural to try to use the identity from special relativity describing the energy:
{\displaystyle {\sqrt {\mathbf {p} ^{2}c^{2}+m^{2}c^{4}}}=E.}
Then, just inserting the quantum-mechanical operators for momentum and energy yields the equation
{\displaystyle {\sqrt {(-i\hbar \mathbf {\nabla } )^{2}c^{2}+m^{2}c^{4}}}\,\psi =i\hbar {\frac {\partial }{\partial t}}\psi .}
The square root of a differential operator can be defined with the help of Fourier transformations, but due to the asymmetry of space and time derivatives, Dirac found it impossible to include external electromagnetic fields in a relativistically invariant way. So he looked for another equation that can be modified in order to describe the action of electromagnetic forces. In addition, this equation, as it stands, is nonlocal (see also Introduction to nonlocal equations).
Klein and Gordon instead began with the square of the above identity, i.e.
{\displaystyle \mathbf {p} ^{2}c^{2}+m^{2}c^{4}=E^{2},}
which, when quantized, gives
{\displaystyle \left((-i\hbar \mathbf {\nabla } )^{2}c^{2}+m^{2}c^{4}\right)\psi =\left(i\hbar {\frac {\partial }{\partial t}}\right)^{2}\psi ,}
which simplifies to
{\displaystyle -\hbar ^{2}c^{2}\mathbf {\nabla } ^{2}\psi +m^{2}c^{4}\psi =-\hbar ^{2}{\frac {\partial ^{2}}{\partial t^{2}}}\psi .}
Rearranging terms yields
{\displaystyle {\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\psi -\mathbf {\nabla } ^{2}\psi +{\frac {m^{2}c^{2}}{\hbar ^{2}}}\psi =0.}
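That the quantized equation encodes exactly the relativistic energy–momentum relation can be checked by substituting a plane wave. A SymPy sketch of this check (our illustration, in one spatial dimension, with ħ and c kept explicit):

```python
import sympy as sp

t, x, hbar, c, m, p = sp.symbols('t x hbar c m p', real=True, positive=True)

E = sp.sqrt(p**2*c**2 + m**2*c**4)     # relativistic energy
psi = sp.exp(sp.I*(p*x - E*t)/hbar)    # 1-D plane wave

# -hbar^2 c^2 psi'' + m^2 c^4 psi equals -hbar^2 d^2 psi/dt^2
# precisely because E^2 = p^2 c^2 + m^2 c^4:
lhs = -hbar**2*c**2*sp.diff(psi, x, 2) + m**2*c**4*psi
rhs = -hbar**2*sp.diff(psi, t, 2)
assert sp.simplify(lhs - rhs) == 0
```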
Since all reference to imaginary numbers has been eliminated from this equation, it can be applied to fields that are real-valued, as well as those that have complex values.
Rewriting the first two terms using the inverse of the Minkowski metric diag(−c², 1, 1, 1), and writing the Einstein summation convention explicitly, we get
{\displaystyle -\eta ^{\mu \nu }\partial _{\mu }\,\partial _{\nu }\psi \equiv \sum _{\mu =0}^{3}\sum _{\nu =0}^{3}-\eta ^{\mu \nu }\partial _{\mu }\,\partial _{\nu }\psi ={\frac {1}{c^{2}}}\partial _{0}^{2}\psi -\sum _{\nu =1}^{3}\partial _{\nu }\,\partial _{\nu }\psi ={\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\psi -\mathbf {\nabla } ^{2}\psi .}
Thus the Klein–Gordon equation can be written in a covariant notation. This often means an abbreviation in the form of
{\displaystyle (\Box +\mu ^{2})\psi =0,}
where
{\displaystyle \mu ={\frac {mc}{\hbar }},}
and
{\displaystyle \Box ={\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}-\nabla ^{2}.}
This operator is called the wave operator.
Today this form is interpreted as the relativistic field equation for spin-0 particles. Furthermore, any component of any solution to the free Dirac equation (for a spin-1/2 particle) is automatically a solution to the free Klein–Gordon equation. This generalizes to particles of any spin due to the Bargmann–Wigner equations. Furthermore, in quantum field theory, every component of every quantum field must satisfy the free Klein–Gordon equation, making the equation a generic expression of quantum fields.
=== Klein–Gordon equation in a potential ===
The Klein–Gordon equation can be generalized to describe a field in some potential {\displaystyle V(\psi )} as
{\displaystyle \Box \psi +{\frac {\partial V}{\partial \psi }}=0.}
Then the Klein–Gordon equation is the case {\displaystyle V(\psi )=M^{2}{\bar {\psi }}\psi }.
Another common choice of potential which arises in interacting theories is the {\displaystyle \phi ^{4}} potential for a real scalar field {\displaystyle \phi },
{\displaystyle V(\phi )={\frac {1}{2}}m^{2}\phi ^{2}+\lambda \phi ^{4}.}
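Since the field equation is just □φ + ∂V/∂φ = 0, a computer algebra system can produce the interacting equation of motion mechanically. A small SymPy sketch (our illustration, in 1+1 dimensions):

```python
import sympy as sp

t, x, m, lam = sp.symbols('t x m lambda', real=True, positive=True)
phi = sp.Function('phi')(t, x)

# phi^4 potential for a real scalar field
V = sp.Rational(1, 2)*m**2*phi**2 + lam*phi**4

# Box phi + dV/dphi = 0 is the field equation; Box in 1+1 dimensions:
box_phi = sp.diff(phi, t, 2) - sp.diff(phi, x, 2)
field_eq = box_phi + sp.diff(V, phi)

# The interaction contributes the cubic term 4*lambda*phi^3:
assert sp.simplify(field_eq - (box_phi + m**2*phi + 4*lam*phi**3)) == 0
```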
==== Higgs sector ====
The pure Higgs boson sector of the Standard Model is modelled by a Klein–Gordon field with a potential, denoted {\displaystyle H} for this section. The Standard Model is a gauge theory, and so while the field transforms trivially under the Lorentz group, it transforms as a {\displaystyle \mathbb {C} ^{2}}-valued vector under the action of the {\displaystyle {\text{SU}}(2)} part of the gauge group. Therefore, while it is a vector field {\displaystyle H:\mathbb {R} ^{1,3}\rightarrow \mathbb {C} ^{2}}, it is still referred to as a scalar field, as "scalar" describes its transformation (formally, representation) under the Lorentz group. This is also discussed below in the scalar chromodynamics section.
The Higgs field is modelled by a potential
{\displaystyle V(H)=-m^{2}H^{\dagger }H+\lambda (H^{\dagger }H)^{2},}
which can be viewed as a generalization of the {\displaystyle \phi ^{4}} potential, but has an important difference: it has a circle of minima. This observation is an important one in the theory of spontaneous symmetry breaking in the Standard Model.
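The circle of minima can be made concrete by treating V as a function of the invariant r = H†H. A short SymPy check (our illustration; the variable names are ours):

```python
import sympy as sp

m, lam, r = sp.symbols('m lambda r', positive=True)  # r = H^dagger H >= 0
V = -m**2*r + lam*r**2

# Stationary point of V in r:
crit = sp.solve(sp.diff(V, r), r)
assert crit == [m**2/(2*lam)]

# Any H with H^dagger H = m^2/(2 lambda) minimizes V, so the minima form
# a whole set of field values (a 3-sphere in C^2), not a single point.
assert sp.simplify(V.subs(r, crit[0])) == -m**4/(4*lam)
```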
== Conserved U(1) current ==
The Klein–Gordon equation (and action) for a complex field {\displaystyle \psi } admits a {\displaystyle {\text{U}}(1)} symmetry. That is, under the transformations
{\displaystyle \psi (x)\mapsto e^{i\theta }\psi (x),}
{\displaystyle {\bar {\psi }}(x)\mapsto e^{-i\theta }{\bar {\psi }}(x),}
the Klein–Gordon equation is invariant, as is the action (see below). By Noether's theorem for fields, corresponding to this symmetry there is a current {\displaystyle J^{\mu }} defined as
{\displaystyle J^{\mu }(x)={\frac {e}{2m}}\left(\,{\bar {\psi }}(x)\partial ^{\mu }\psi (x)-\psi (x)\partial ^{\mu }{\bar {\psi }}(x)\,\right),}
which satisfies the conservation equation
{\displaystyle \partial _{\mu }J^{\mu }(x)=0.}
The form of the conserved current can be derived systematically by applying Noether's theorem to the {\displaystyle {\text{U}}(1)} symmetry. We will not do so here, but simply verify that this current is conserved.
We start from the Klein–Gordon equation for a complex field {\displaystyle \psi (x)} of mass {\displaystyle m}, written in covariant notation and mostly plus signature,
{\displaystyle (\square +m^{2})\psi (x)=0}
and its complex conjugate
{\displaystyle (\square +m^{2}){\bar {\psi }}(x)=0.}
Multiplying on the left by {\displaystyle {\bar {\psi }}(x)} and {\displaystyle \psi (x)} respectively (and omitting for brevity the explicit {\displaystyle x} dependence),
{\displaystyle {\bar {\psi }}(\square +m^{2})\psi =0,}
{\displaystyle \psi (\square +m^{2}){\bar {\psi }}=0.}
Subtracting the former from the latter, we obtain
{\displaystyle {\bar {\psi }}\square \psi -\psi \square {\bar {\psi }}=0,}
or in index notation,
{\displaystyle {\bar {\psi }}\partial _{\mu }\partial ^{\mu }\psi -\psi \partial _{\mu }\partial ^{\mu }{\bar {\psi }}=0.}
Applying this to the derivative of the current
{\displaystyle J^{\mu }(x)\equiv \psi ^{*}(x)\partial ^{\mu }\psi (x)-\psi (x)\partial ^{\mu }\psi ^{*}(x),}
one finds
{\displaystyle \partial _{\mu }J^{\mu }(x)=0.}
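Conservation can also be checked mechanically on an explicit solution. The following SymPy sketch (our illustration, in 1+1 dimensions and dropping the overall e/2m factor) verifies ∂μJ^μ = 0 for a superposition of on-shell plane waves:

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
m, p, a, b = sp.symbols('m p a b', real=True, positive=True)

E = sp.sqrt(p**2 + m**2)
# Superposition of a positive- and a negative-frequency plane-wave solution:
psi = a*sp.exp(-sp.I*(E*t - p*x)) + b*sp.exp(sp.I*(E*t - p*x))
psib = sp.conjugate(psi)

# J^0 and J^1 (up to the e/2m factor), signature (+,-) so d^1 = -d_x:
J0 = psib*sp.diff(psi, t) - psi*sp.diff(psib, t)
J1 = -(psib*sp.diff(psi, x) - psi*sp.diff(psib, x))

# The divergence vanishes on-shell:
assert sp.simplify(sp.diff(J0, t) + sp.diff(J1, x)) == 0
```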
This {\displaystyle {\text{U}}(1)} symmetry is a global symmetry, but it can also be gauged to create a local or gauge symmetry: see scalar QED below. The name "gauge symmetry" is somewhat misleading: it is really a redundancy, while the global symmetry is a genuine symmetry.
== Lagrangian formulation ==
The Klein–Gordon equation can also be derived by a variational method, arising as the Euler–Lagrange equation of the action
{\displaystyle {\mathcal {S}}=\int \left(-\hbar ^{2}\eta ^{\mu \nu }\partial _{\mu }{\bar {\psi }}\,\partial _{\nu }\psi -M^{2}c^{2}{\bar {\psi }}\psi \right)\mathrm {d} ^{4}x.}
In natural units, with signature mostly minus, the actions take the simple form
{\displaystyle {\mathcal {S}}=\int \left({\frac {1}{2}}\partial ^{\mu }\phi \,\partial _{\mu }\phi -{\frac {1}{2}}m^{2}\phi ^{2}\right)\mathrm {d} ^{4}x}
for a real scalar field of mass {\displaystyle m}, and
{\displaystyle {\mathcal {S}}=\int \left(\partial ^{\mu }{\bar {\psi }}\,\partial _{\mu }\psi -M^{2}{\bar {\psi }}\psi \right)\mathrm {d} ^{4}x}
for a complex scalar field of mass {\displaystyle M}.
Applying the formula for the stress–energy tensor to the Lagrangian density (the quantity inside the integral), we can derive the stress–energy tensor of the scalar field. It is
{\displaystyle T^{\mu \nu }=\hbar ^{2}\left(\eta ^{\mu \alpha }\eta ^{\nu \beta }+\eta ^{\mu \beta }\eta ^{\nu \alpha }-\eta ^{\mu \nu }\eta ^{\alpha \beta }\right)\partial _{\alpha }{\bar {\psi }}\,\partial _{\beta }\psi -\eta ^{\mu \nu }M^{2}c^{2}{\bar {\psi }}\psi ,}
and in natural units,
{\displaystyle T^{\mu \nu }=2\partial ^{\mu }{\bar {\psi }}\partial ^{\nu }\psi -\eta ^{\mu \nu }(\partial ^{\rho }{\bar {\psi }}\partial _{\rho }\psi -M^{2}{\bar {\psi }}\psi ).}
By integration of the time–time component T00 over all space, one may show that both the positive- and negative-frequency plane-wave solutions can be physically associated with particles with positive energy. This is not the case for the Dirac equation and its energy–momentum tensor.
The stress–energy tensor is the set of conserved currents corresponding to the invariance of the Klein–Gordon equation under space-time translations {\displaystyle x^{\mu }\mapsto x^{\mu }+c^{\mu }}. Therefore, each component is conserved, that is, {\displaystyle \partial _{\mu }T^{\mu \nu }=0} (this holds only on-shell, that is, when the Klein–Gordon equations are satisfied). It follows that the integral of {\displaystyle T^{0\nu }} over space is a conserved quantity for each {\displaystyle \nu }. These have the physical interpretation of total energy for {\displaystyle \nu =0} and total momentum for {\displaystyle \nu =i} with {\displaystyle i\in \{1,2,3\}}.
== Non-relativistic limit ==
=== Classical field ===
Taking the non-relativistic limit (v ≪ c) of a classical Klein–Gordon field ψ(x, t) begins with the ansatz factoring out the oscillatory rest-mass energy term,
{\displaystyle \psi (\mathbf {x} ,t)=\phi (\mathbf {x} ,t)\,e^{-{\frac {i}{\hbar }}mc^{2}t}\quad {\textrm {where}}\quad \phi (\mathbf {x} ,t)=u_{E}(\mathbf {x} )e^{-{\frac {i}{\hbar }}E't}.}
Defining the kinetic energy
{\displaystyle E'=E-mc^{2}={\sqrt {m^{2}c^{4}+c^{2}p^{2}}}-mc^{2}\approx {\frac {p^{2}}{2m}},}
we have {\displaystyle E'\ll mc^{2}} in the non-relativistic limit {\displaystyle v=p/m\ll c}, and hence
{\displaystyle i\hbar {\frac {\partial \phi }{\partial t}}=E'\phi \ll mc^{2}\phi \quad {\textrm {and}}\quad (i\hbar )^{2}{\frac {\partial ^{2}\phi }{\partial t^{2}}}=(E')^{2}\phi \ll (mc^{2})^{2}\phi .}
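The quality of the approximation E′ ≈ p²/2m can be checked numerically; the relative error falls off like (p/mc)². A small sketch with illustrative values:

```python
import numpy as np

m, c = 1.0, 1.0                       # illustrative units with m = c = 1
for p in (0.001, 0.01, 0.1):          # p << mc, i.e. v << c
    E_kin = np.sqrt(m**2*c**4 + c**2*p**2) - m*c**2   # exact kinetic energy E'
    approx = p**2/(2*m)                               # non-relativistic value
    # The relative error of p^2/2m shrinks like (p/(m c))^2:
    assert abs(E_kin - approx)/E_kin < (p/(m*c))**2
```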
Applying this yields the non-relativistic limit of the second time derivative of {\displaystyle \psi }:
{\displaystyle {\frac {\partial \psi }{\partial t}}=\left(-i{\frac {mc^{2}}{\hbar }}\phi +{\frac {\partial \phi }{\partial t}}\right)\,e^{-{\frac {i}{\hbar }}mc^{2}t}\approx -i{\frac {mc^{2}}{\hbar }}\phi \,e^{-{\frac {i}{\hbar }}mc^{2}t}}
{\displaystyle {\frac {\partial ^{2}\psi }{\partial t^{2}}}=-\left(i{\frac {2mc^{2}}{\hbar }}{\frac {\partial \phi }{\partial t}}+\left({\frac {mc^{2}}{\hbar }}\right)^{2}\phi -{\frac {\partial ^{2}\phi }{\partial t^{2}}}\right)e^{-{\frac {i}{\hbar }}mc^{2}t}\approx -\left(i{\frac {2mc^{2}}{\hbar }}{\frac {\partial \phi }{\partial t}}+\left({\frac {mc^{2}}{\hbar }}\right)^{2}\phi \right)e^{-{\frac {i}{\hbar }}mc^{2}t}}
Substituting into the free Klein–Gordon equation, {\displaystyle c^{-2}\partial _{t}^{2}\psi =\nabla ^{2}\psi -({\frac {mc}{\hbar }})^{2}\psi }, yields
{\displaystyle -{\frac {1}{c^{2}}}\left(i{\frac {2mc^{2}}{\hbar }}{\frac {\partial \phi }{\partial t}}+\left({\frac {mc^{2}}{\hbar }}\right)^{2}\phi \right)e^{-{\frac {i}{\hbar }}mc^{2}t}\approx \left(\nabla ^{2}-\left({\frac {mc}{\hbar }}\right)^{2}\right)\phi \,e^{-{\frac {i}{\hbar }}mc^{2}t}}
which (by dividing out the exponential and subtracting the mass term) simplifies to
{\displaystyle i\hbar {\frac {\partial \phi }{\partial t}}=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\phi .}
This is a classical Schrödinger field.
=== Quantum field ===
The analogous limit of a quantum Klein–Gordon field is complicated by the non-commutativity of the field operator. In the limit v ≪ c, the creation and annihilation operators decouple and behave as independent quantum Schrödinger fields.
== Scalar electrodynamics ==
There is a way to make the complex Klein–Gordon field {\displaystyle \psi } interact with electromagnetism in a gauge-invariant way: we replace the (partial) derivative with the gauge-covariant derivative. Under a local {\displaystyle {\text{U}}(1)} gauge transformation, the fields transform as
{\displaystyle \psi \mapsto \psi '=e^{i\theta (x)}\psi ,}
{\displaystyle {\bar {\psi }}\mapsto {\bar {\psi }}'=e^{-i\theta (x)}{\bar {\psi }},}
where {\displaystyle \theta (x)=\theta (t,{\textbf {x}})} is a function of spacetime, thus making it a local transformation, as opposed to a constant over all of spacetime, which would be a global {\displaystyle {\text{U}}(1)} transformation. A subtle point is that global transformations can arise as local ones, when the function {\displaystyle \theta (x)} is taken to be a constant function.
A well-formulated theory should be invariant under such transformations. Precisely, this means that the equations of motion and action (see below) are invariant. To achieve this, ordinary derivatives {\displaystyle \partial _{\mu }} must be replaced by gauge-covariant derivatives {\displaystyle D_{\mu }}, defined as
{\displaystyle D_{\mu }\psi =(\partial _{\mu }-ieA_{\mu })\psi ,}
{\displaystyle D_{\mu }{\bar {\psi }}=(\partial _{\mu }+ieA_{\mu }){\bar {\psi }},}
where the 4-potential or gauge field {\displaystyle A_{\mu }} transforms under a gauge transformation {\displaystyle \theta } as
{\displaystyle A_{\mu }\mapsto A'_{\mu }=A_{\mu }+{\frac {1}{e}}\partial _{\mu }\theta .}
With these definitions, the covariant derivative transforms as
{\displaystyle D_{\mu }\psi \mapsto e^{i\theta }D_{\mu }\psi .}
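This covariance can be verified symbolically. The sketch below (our illustration, for a single coordinate x and the corresponding gauge-field component) checks that D_μψ picks up exactly the phase e^{iθ}:

```python
import sympy as sp

t, x, e = sp.symbols('t x e', real=True)
theta = sp.Function('theta')(t, x)   # local gauge parameter theta(x)
psi = sp.Function('psi')(t, x)
A = sp.Function('A')(t, x)           # the x-component of the gauge field

# Covariant derivative D_x psi = (d_x - i e A_x) psi:
D_psi = sp.diff(psi, x) - sp.I*e*A*psi

# Gauge-transformed field and potential:
psi_p = sp.exp(sp.I*theta)*psi
A_p = A + sp.diff(theta, x)/e
D_psi_p = sp.diff(psi_p, x) - sp.I*e*A_p*psi_p

# D_x psi transforms covariantly: D'_x psi' = e^{i theta} D_x psi
assert sp.simplify(D_psi_p - sp.exp(sp.I*theta)*D_psi) == 0
```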
In natural units, the Klein–Gordon equation therefore becomes
{\displaystyle D_{\mu }D^{\mu }\psi -M^{2}\psi =0.}
Since an ungauged
U
(
1
)
{\displaystyle {\text{U}}(1)}
symmetry is only present in complex Klein–Gordon theory, this coupling and promotion to a gauged
U
(
1
)
{\displaystyle {\text{U}}(1)}
symmetry is compatible only with complex Klein–Gordon theory and not real Klein–Gordon theory.
In natural units and mostly minus signature, the scalar QED Lagrangian is
{\displaystyle {\mathcal {L}}=D^{\mu }{\bar {\psi }}\,D_{\mu }\psi -M^{2}{\bar {\psi }}\psi -{\frac {1}{4}}F^{\mu \nu }F_{\mu \nu },}
where
{\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }}
is known as the Maxwell tensor, field strength or curvature, depending on viewpoint.
This theory is often known as scalar quantum electrodynamics or scalar QED, although all aspects we've discussed here are classical.
=== Scalar chromodynamics ===
It is possible to extend this to a non-abelian gauge theory with a gauge group {\displaystyle G}, where we couple the scalar Klein–Gordon action to a Yang–Mills Lagrangian. Here, the field is actually vector-valued, but is still described as a scalar field: the scalar describes its transformation under space-time transformations, but not its transformation under the action of the gauge group.
For concreteness we fix {\displaystyle G} to be {\displaystyle {\text{SU}}(N)}, the special unitary group, for some {\displaystyle N\geq 2}. Under a gauge transformation {\displaystyle U(x)}, which can be described as a function {\displaystyle U:\mathbb {R} ^{1,3}\rightarrow {\text{SU}}(N),} the scalar field {\displaystyle \psi } transforms as a {\displaystyle \mathbb {C} ^{N}} vector:
{\displaystyle \psi (x)\mapsto U(x)\psi (x),}
{\displaystyle \psi ^{\dagger }(x)\mapsto \psi ^{\dagger }(x)U^{\dagger }(x).}
The covariant derivative is
{\displaystyle D_{\mu }\psi =\partial _{\mu }\psi -igA_{\mu }\psi ,}
{\displaystyle D_{\mu }\psi ^{\dagger }=\partial _{\mu }\psi ^{\dagger }+ig\psi ^{\dagger }A_{\mu }^{\dagger },}
where the gauge field or connection transforms as
{\displaystyle A_{\mu }\mapsto UA_{\mu }U^{-1}-{\frac {i}{g}}\partial _{\mu }U\,U^{-1}.}
This field can be seen as a matrix-valued field which acts on the vector space {\displaystyle \mathbb {C} ^{N}}.
Finally, defining the chromomagnetic field strength or curvature,
{\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }+g(A_{\mu }A_{\nu }-A_{\nu }A_{\mu }),}
we can define the action.
== Klein–Gordon on curved spacetime ==
In general relativity, we include the effect of gravity by replacing partial derivatives with covariant derivatives, and the Klein–Gordon equation becomes (in the mostly pluses signature)
{\displaystyle {\begin{aligned}0&=-g^{\mu \nu }\nabla _{\mu }\nabla _{\nu }\psi +{\dfrac {m^{2}c^{2}}{\hbar ^{2}}}\psi =-g^{\mu \nu }\nabla _{\mu }(\partial _{\nu }\psi )+{\dfrac {m^{2}c^{2}}{\hbar ^{2}}}\psi \\&=-g^{\mu \nu }\partial _{\mu }\partial _{\nu }\psi +g^{\mu \nu }\Gamma ^{\sigma }{}_{\mu \nu }\partial _{\sigma }\psi +{\dfrac {m^{2}c^{2}}{\hbar ^{2}}}\psi ,\end{aligned}}}
or equivalently,
{\displaystyle {\frac {-1}{\sqrt {-g}}}\partial _{\mu }\left(g^{\mu \nu }{\sqrt {-g}}\partial _{\nu }\psi \right)+{\frac {m^{2}c^{2}}{\hbar ^{2}}}\psi =0,}
where {\displaystyle g^{\mu \nu }} is the inverse of the metric tensor (the gravitational potential field), {\displaystyle g} is the determinant of the metric tensor, {\displaystyle \nabla _{\mu }} is the covariant derivative, and {\displaystyle \Gamma ^{\sigma }{}_{\mu \nu }} is the Christoffel symbol (the gravitational force field).
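As a sanity check on the √−g form of the operator, the following SymPy sketch (our illustration, in natural units) evaluates it for the flat metric written in spherical coordinates, confirming that for a spherically symmetric field it reduces to the ordinary wave operator plus mass term:

```python
import sympy as sp

t, r, th, ph, m = sp.symbols('t r theta phi m', positive=True)
psi = sp.Function('psi')(t, r)          # a spherically symmetric field

# Flat (Minkowski) metric in spherical coordinates, mostly plus signature:
g = sp.diag(-1, 1, r**2, r**2*sp.sin(th)**2)
g_inv = g.inv()
sqrt_mg = sp.sqrt(-g.det())             # sqrt(-g) = r^2 sin(theta)
coords = (t, r, th, ph)

# -(1/sqrt(-g)) d_mu (g^{mu nu} sqrt(-g) d_nu psi) + m^2 psi:
kg = -sum(sp.diff(g_inv[mu, nu]*sqrt_mg*sp.diff(psi, coords[nu]), coords[mu])
          for mu in range(4) for nu in range(4))/sqrt_mg + m**2*psi

# Expected: d_t^2 psi - (1/r^2) d_r (r^2 d_r psi) + m^2 psi
expected = (sp.diff(psi, t, 2)
            - sp.diff(r**2*sp.diff(psi, r), r)/r**2
            + m**2*psi)
assert sp.simplify(kg - expected) == 0
```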
With natural units this becomes
{\displaystyle -{\frac {1}{\sqrt {-g}}}\partial _{\mu }\left(g^{\mu \nu }{\sqrt {-g}}\partial _{\nu }\psi \right)+m^{2}\psi =0.}
This also admits an action formulation on a spacetime (Lorentzian) manifold {\displaystyle M}. Using abstract index notation and in mostly plus signature this is
{\displaystyle S=\int \left(-{\frac {1}{2}}g^{ab}\nabla _{a}\psi \nabla _{b}\psi -{\frac {1}{2}}m^{2}\psi ^{2}\right){\sqrt {-g}}\,\mathrm {d} ^{4}x}
or, after integrating by parts,
{\displaystyle S=\int \left({\frac {1}{2}}\psi \Box \psi -{\frac {1}{2}}m^{2}\psi ^{2}\right){\sqrt {-g}}\,\mathrm {d} ^{4}x.}
== See also ==
Quantum field theory
Quartic interaction
Relativistic wave equations
Dirac equation (spin 1/2)
Proca action (spin 1)
Rarita–Schwinger equation (spin 3/2)
Scalar field theory
Sine–Gordon equation
== Remarks ==
== Notes ==
== References ==
Davydov, A. S. (1976). Quantum Mechanics, 2nd Edition. Pergamon Press. ISBN 0-08-020437-6.
Feshbach, H.; Villars, F. (1958). "Elementary relativistic wave mechanics of spin 0 and spin 1/2 particles". Reviews of Modern Physics. 30 (1): 24–45. Bibcode:1958RvMP...30...24F. doi:10.1103/RevModPhys.30.24.
Gordon, Walter (1926). "Der Comptoneffekt nach der Schrödingerschen Theorie". Zeitschrift für Physik. 40 (1–2): 117. Bibcode:1926ZPhy...40..117G. doi:10.1007/BF01390840. S2CID 122254400.
Greiner, W. (2000). Relativistic Quantum Mechanics. Wave Equations (3rd ed.). Springer Verlag. ISBN 3-540-67457-8.
Greiner, W.; Müller, B. (1994). Quantum Mechanics: Symmetries (2nd ed.). Springer. ISBN 978-3540580805.
Gross, F. (1993). Relativistic Quantum Mechanics and Field Theory (1st ed.). Wiley-VCH. ISBN 978-0471591139.
Klein, O. (1926). "Quantentheorie und fünfdimensionale Relativitätstheorie". Zeitschrift für Physik. 37 (12): 895. Bibcode:1926ZPhy...37..895K. doi:10.1007/BF01397481.
Sakurai, J. J. (1967). Advanced Quantum Mechanics. Addison Wesley. ISBN 0-201-06710-2.
Weinberg, S. (2002). The Quantum Theory of Fields. Vol. I. Cambridge University Press. ISBN 0-521-55001-7.
== External links ==
"Klein–Gordon equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Klein-Gordon Equation". MathWorld.
Linear Klein–Gordon Equation at EqWorld: The World of Mathematical Equations.
Nonlinear Klein–Gordon Equation at EqWorld: The World of Mathematical Equations.
Introduction to nonlocal equations.
Relational quantum mechanics (RQM) is an interpretation of quantum mechanics which treats the state of a quantum system as being relational, that is, the state is the relation between the observer and the system. This interpretation was first delineated by Carlo Rovelli in a 1994 preprint, and has since been expanded upon by a number of theorists. It is inspired by the key idea behind special relativity, that the details of an observation depend on the reference frame of the observer, and Wheeler's idea that information theory would make sense of quantum mechanics.
The physical content of the theory has to do not with objects themselves, but with the relations between them. As Rovelli puts it:
"Quantum mechanics is a theory about the physical description of physical systems relative to other systems, and this is a complete description of the world".
The essential idea behind RQM is that different observers may give different accurate accounts of the same system. For example, to one observer, a system is in a single, "collapsed" eigenstate. To a second observer, the same system is in a superposition of two or more states and the first observer is in a correlated superposition of two or more states. RQM argues that this is a complete picture of the world because the notion of "state" is always relative to some observer. There is no privileged, "real" account.
The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system.
The terms "observer" and "observed" apply to any arbitrary system, microscopic or macroscopic. The classical limit is a consequence of aggregate systems of very highly correlated subsystems.
A "measurement event" is thus described as an ordinary physical interaction where two systems become correlated to some degree with respect to each other.
Rovelli criticizes describing this as a form of "observer-dependence" which suggests reality depends upon the presence of a conscious observer, when his point is instead that reality is relational and thus the state of a system can be described even in relation to any physical object and not necessarily a human observer.
The proponents of the relational interpretation argue that this approach resolves some of the traditional interpretational difficulties with quantum mechanics. By giving up our preconception of a global privileged state, issues around the measurement problem and local realism are resolved.
== History and development ==
Relational quantum mechanics arose from a comparison of the quandaries posed by the interpretations of quantum mechanics with those resulting from Lorentz transformations prior to the development of special relativity. Rovelli suggested that just as pre-relativistic interpretations of Lorentz's equations were complicated by incorrectly assuming an observer-independent time exists, a similarly incorrect assumption frustrates attempts to make sense of the quantum formalism. The assumption rejected by relational quantum mechanics is the existence of an observer-independent state of a system.
The idea has been expanded upon by Lee Smolin and Louis Crane, who have both applied the concept to quantum cosmology, and the interpretation has been applied to the EPR paradox, revealing not only a peaceful co-existence between quantum mechanics and special relativity, but a formal indication of a completely local character to reality.
In 2020, Rovelli published an account of the main ideas of the relational interpretation in his pop-science book Helgoland.
In 2023, Rovelli published a paper with Emily Adlam that presented a significant revision to relational quantum mechanics. Adlam and Rovelli introduced a new axiom, positing the existence of "cross-perspective links", which has been seen as a major move away from relationalism.
== The problem of the observer and the observed ==
This problem was initially discussed in detail in Everett's thesis, The Theory of the Universal Wavefunction. Consider observer $O$, measuring the state of the quantum system $S$. We assume that $O$ has complete information on the system, and that $O$ can write down the wavefunction $|\psi\rangle$ describing it. At the same time, there is another observer $O'$, who is interested in the state of the entire $O$–$S$ system, and $O'$ likewise has complete information.
To analyse this system formally, we consider a system $S$ which may take one of two states, which we shall designate $|{\uparrow}\rangle$ and $|{\downarrow}\rangle$, ket vectors in the Hilbert space $H_S$. Now, the observer $O$ wishes to make a measurement on the system. At time $t_1$, this observer may characterize the system as follows:

$$|\psi\rangle = \alpha|{\uparrow}\rangle + \beta|{\downarrow}\rangle,$$
where $|\alpha|^2$ and $|\beta|^2$ are the probabilities of finding the system in the respective states, and these add up to 1. For our purposes here, we can assume that in a single experiment the outcome is the eigenstate $|{\uparrow}\rangle$ (but this can be substituted throughout, without loss of generality, by $|{\downarrow}\rangle$). So, we may represent the sequence of events in this experiment, with observer $O$ doing the observing, as follows:

$$\begin{matrix}t_1 & \rightarrow & t_2 \\ \alpha|{\uparrow}\rangle + \beta|{\downarrow}\rangle & \rightarrow & |{\uparrow}\rangle.\end{matrix}$$
This is the description of the measurement event given by observer $O$. Now, any measurement is also a physical interaction between two or more systems. Accordingly, we can consider the tensor product Hilbert space $H_S \otimes H_O$, where $H_O$ is the Hilbert space inhabited by state vectors describing $O$. If the initial state of $O$ is $|\text{init}\rangle$, some degrees of freedom in $O$ become correlated with the state of $S$ after the measurement, and this correlation can take one of two values: $|O_{\uparrow}\rangle$ or $|O_{\downarrow}\rangle$, where the direction of the arrows in the subscripts corresponds to the outcome of the measurement that $O$ has made on $S$. If we now consider the description of the measurement event by the other observer, $O'$, who describes the combined $S + O$ system but does not interact with it, the following gives the description of the measurement event according to $O'$, from the linearity inherent in the quantum formalism:
$$\begin{matrix}t_1 & \rightarrow & t_2 \\ \left(\alpha|{\uparrow}\rangle + \beta|{\downarrow}\rangle\right) \otimes |\text{init}\rangle & \rightarrow & \alpha|{\uparrow}\rangle \otimes |O_{\uparrow}\rangle + \beta|{\downarrow}\rangle \otimes |O_{\downarrow}\rangle.\end{matrix}$$

Thus, on the assumption (see hypothesis 2 below) that quantum mechanics is complete, the two observers $O$ and $O'$ give different but equally correct accounts of the events $t_1 \rightarrow t_2$.
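The two accounts can be checked with a short numerical sketch (an illustration of the standard formalism in NumPy, not part of the original treatment): it constructs $O'$'s entangled description of $S + O$ and verifies that it is not a product state.

```python
import numpy as np

# Basis vectors for the system S and for the observer O
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])      # |up>, |down>
O_up, O_down = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # |O_up>, |O_down>

alpha, beta = 0.6, 0.8   # |alpha|^2 + |beta|^2 = 1

# O's description after the measurement: a definite outcome |up>
state_O = up

# O''s description: linear (unitary) evolution entangles S with O
state_Oprime = alpha * np.kron(up, O_up) + beta * np.kron(down, O_down)

# A pure bipartite state is a product state iff its 2x2 coefficient
# matrix has rank 1 (Schmidt rank); here the rank is 2.
schmidt_rank = np.linalg.matrix_rank(state_Oprime.reshape(2, 2))
print(schmidt_rank)  # 2
```

Because the Schmidt rank is invariant under local changes of basis, the rank-2 result confirms that $O'$'s description is not factorisable in any basis.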
Note that the above scenario is directly linked to the Wigner's friend thought experiment, which serves as a prime example in comparing different interpretations of quantum theory.
== Central principles ==
=== Observer-dependence of state ===
According to $O$, at $t_2$, the system $S$ is in a determinate state, namely spin up. And, if quantum mechanics is complete, then so is this description. But, for $O'$, $S$ is not uniquely determinate, but is rather entangled with the state of $O$ – note that his description of the situation at $t_2$ is not factorisable no matter what basis is chosen. But, if quantum mechanics is complete, then the description that $O'$ gives is also complete.
Thus the standard mathematical formulation of quantum mechanics allows different observers to give different accounts of the same sequence of events. There are many ways to overcome this perceived difficulty. It could be described as an epistemic limitation: an observer with full knowledge of the system, we might say, could give a complete and unique description of the state of affairs, but obtaining this knowledge is impossible in practice. But full knowledge for whom? What makes $O$'s description better than that of $O'$, or vice versa? Alternatively, we could claim that quantum mechanics is not a complete theory, and that by adding more structure we could arrive at a universal description (the troubled hidden-variables approach). Yet another option is to give a preferred status to a particular observer or type of observer, and assign the epithet of correctness to their description alone. This has the disadvantage of being ad hoc, since there are no clearly defined or physically intuitive criteria by which this super-observer ("who can observe all possible sets of observations by all observers over the entire universe") ought to be chosen.
RQM, however, takes the point illustrated by this problem at face value. Instead of trying to modify quantum mechanics to make it fit with prior assumptions that we might have about the world, Rovelli says that we should modify our view of the world to conform to what amounts to our best physical theory of motion. Just as forsaking the notion of absolute simultaneity helped clear up the problems associated with the interpretation of the Lorentz transformations, so many of the conundrums associated with quantum mechanics dissolve, provided that the state of a system is assumed to be observer-dependent – like simultaneity in Special Relativity. This insight follows logically from the two main hypotheses which inform this interpretation:
Hypothesis 1: the equivalence of systems. There is no a priori distinction that should be drawn between quantum and macroscopic systems. All systems are, fundamentally, quantum systems.
Hypothesis 2: the completeness of quantum mechanics. There are no hidden variables or other factors which may be appropriately added to quantum mechanics, in light of current experimental evidence.
Thus, if a state is to be observer-dependent, then a description of a system would follow the form "system S is in state x with reference to observer O" or similar constructions, much like in relativity theory. In RQM it is meaningless to refer to the absolute, observer-independent state of any system.
=== Information and correlation ===
It is generally well established that any quantum mechanical measurement can be reduced to a set of yes–no questions or bits that are either 1 or 0. RQM makes use of this fact to formulate the state of a quantum system (relative to a given observer!) in terms of the physical notion of information developed by Claude Shannon. Any yes/no question can be described as a single bit of information. This should not be confused with the idea of a qubit from quantum information theory, because a qubit can be in a superposition of values, whilst the "questions" of RQM are ordinary binary variables.
Any quantum measurement is fundamentally a physical interaction between the system being measured and some form of measuring apparatus. By extension, any physical interaction may be seen to be a form of quantum measurement, as all systems are seen as quantum systems in RQM. A physical interaction is seen by other observers unaware of the result, as establishing a correlation between the system and the observer, and this correlation is what is described and predicted by the quantum formalism.
But, Rovelli points out, this form of correlation is precisely the same as the definition of information in Shannon's theory. Specifically, an observer O observing a system S will, after measurement, have some degrees of freedom correlated with those of S, as described by another observer unaware of the result. The amount of this correlation is given by log2k bits, where k is the number of possible values which this correlation may take – the number of "options" there are, as described by the other observer.
Note that if the other observer is aware of the measurement result, there is only one possible value for the correlation, so they will not regard the (first observer's) measurement as producing any information, as expected.
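As a minimal numerical illustration (mine, assuming an ideal spin measurement), the correlation established by a spin measurement carries $\log_2 2 = 1$ bit for an observer who does not know the outcome, and zero bits for one who does:

```python
import math

# Number of values the correlation between O and S may take,
# as described by an observer who does not know the outcome
k = 2                  # spin up or spin down
bits = math.log2(k)
print(bits)            # 1.0 bit of correlation

# An observer who already knows the result has only one option left
print(math.log2(1))    # 0.0 -> no information gained
```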
=== All systems are quantum systems ===
All physical interactions are, at bottom, quantum interactions, and must ultimately be governed by the same rules. Thus, an interaction between two particles does not, in RQM, differ fundamentally from an interaction between a particle and some "apparatus". There is no true wave collapse, in the sense in which it occurs in some interpretations.
Because "state" is expressed in RQM as the correlation between two systems, there can be no meaning to "self-measurement". If observer $O$ measures system $S$, $S$'s "state" is represented as a correlation between $O$ and $S$. $O$ itself cannot say anything with respect to its own "state", because its own "state" is defined only relative to another observer, $O'$. If the $S + O$ compound system does not interact with any other systems, then it will possess a clearly defined state relative to $O'$. However, because $O$'s measurement of $S$ breaks its unitary evolution with respect to $O$, $O$ will not be able to give a full description of the $S + O$ system (since it can only speak of the correlation between $S$ and itself, not its own behaviour). A complete description of the $(S + O) + O'$ system can only be given by a further, external observer, and so forth.
Taking the model system discussed above, if $O'$ has full information on the $S + O$ system, it will know the Hamiltonians of both $S$ and $O$, including the interaction Hamiltonian. Thus, the system will evolve entirely unitarily (without any form of collapse) relative to $O'$, if $O$ measures $S$. The only reason that $O$ will perceive a "collapse" is that $O$ has incomplete information on the system (specifically, $O$ does not know its own Hamiltonian, or the interaction Hamiltonian for the measurement).
== Consequences and implications ==
=== Coherence ===
In our system above, $O'$ may be interested in ascertaining whether or not the state of $O$ accurately reflects the state of $S$. We can draw up for $O'$ an operator, $M$, which is specified as:

$$M\left(|{\uparrow}\rangle \otimes |O_{\uparrow}\rangle\right) = |{\uparrow}\rangle \otimes |O_{\uparrow}\rangle$$
$$M\left(|{\uparrow}\rangle \otimes |O_{\downarrow}\rangle\right) = 0$$
$$M\left(|{\downarrow}\rangle \otimes |O_{\uparrow}\rangle\right) = 0$$
$$M\left(|{\downarrow}\rangle \otimes |O_{\downarrow}\rangle\right) = |{\downarrow}\rangle \otimes |O_{\downarrow}\rangle$$
with an eigenvalue of 1 meaning that $O$ indeed accurately reflects the state of $S$. So there is a 0 probability of $O$ reflecting the state of $S$ as being $|{\uparrow}\rangle$ if it is in fact $|{\downarrow}\rangle$, and so forth. The implication of this is that at time $t_2$, $O'$ can predict with certainty that the $S + O$ system is in some eigenstate of $M$, but cannot say which eigenstate it is in, unless $O'$ itself interacts with the $S + O$ system.
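This prediction can be verified numerically. The sketch below (an illustration in NumPy; the basis ordering is my own choice) represents $M$ as a diagonal matrix in the product basis and checks that $O'$'s entangled description of $S + O$ is an eigenstate with eigenvalue 1:

```python
import numpy as np

# Basis ordering: |up,O_up>, |up,O_down>, |down,O_up>, |down,O_down>
# M projects onto the subspace where O's record agrees with S
M = np.diag([1.0, 0.0, 0.0, 1.0])

alpha, beta = 0.6, 0.8
# O''s description of S+O after O's measurement:
# alpha |up,O_up> + beta |down,O_down>
state = np.array([alpha, 0.0, 0.0, beta])

# The state is an eigenvector of M with eigenvalue 1
assert np.allclose(M @ state, state)

# So O' predicts with certainty that the records agree,
# without knowing which of the two outcomes occurred
prob_agree = state @ M @ state
print(prob_agree)   # 1.0
```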
An apparent paradox arises when one considers the comparison, between two observers, of the specific outcome of a measurement. In the problem of the observer observed section above, let us imagine that the two experimenters want to compare results. It is obvious that if the observer $O'$ has the full Hamiltonians of both $S$ and $O$, he will be able to say with certainty that at time $t_2$, $O$ has a determinate result for $S$'s spin, but he will not be able to say what $O$'s result is without interaction, and hence breaking the unitary evolution of the compound system (because he doesn't know his own Hamiltonian). The distinction between knowing "that" and knowing "what" is a common one in everyday life: everyone knows that the weather will be like something tomorrow, but no-one knows exactly what the weather will be like.
But, let us imagine that $O'$ measures the spin of $S$, and finds it to have spin down (and note that nothing in the analysis above precludes this from happening). What happens if he talks to $O$, and they compare the results of their experiments? $O$, it will be remembered, measured a spin up on the particle. This would appear to be paradoxical: the two observers, surely, will realise that they have disparate results.
However, this apparent paradox only arises as a result of the question being framed incorrectly: as long as we presuppose an "absolute" or "true" state of the world, this would, indeed, present an insurmountable obstacle for the relational interpretation. However, in a fully relational context, there is no way in which the problem can even be coherently expressed. The consistency inherent in the quantum formalism, exemplified by the "M-operator" defined above, guarantees that there will be no contradictions between records. The interaction between $O'$ and whatever he chooses to measure, be it the $S + O$ compound system or $O$ and $S$ individually, will be a physical interaction, a quantum interaction, and so a complete description of it can only be given by a further observer $O''$, who will have a similar "M-operator" guaranteeing coherency, and so on out. In other words, a situation such as that described above cannot violate any physical observation, as long as the physical content of quantum mechanics is taken to refer only to relations.
=== Relational networks ===
An interesting implication of RQM arises when we consider that interactions between material systems can only occur within the constraints prescribed by Special Relativity, namely within the intersections of the light cones of the systems: when they are spatiotemporally contiguous, in other words. Relativity tells us that objects have location only relative to other objects. By extension, a network of relations could be built up based on the properties of a set of systems, which determines which systems have properties relative to which others, and when (since properties are no longer well defined relative to a specific observer after unitary evolution breaks down for that observer). On the assumption that all interactions are local (which is backed up by the analysis of the EPR paradox presented below), one could say that the ideas of "state" and spatiotemporal contiguity are two sides of the same coin: spacetime location determines the possibility of interaction, but interactions determine spatiotemporal structure. The full extent of this relationship, however, has not yet fully been explored.
=== RQM and quantum cosmology ===
The universe is the sum total of everything in existence with any possibility of direct or indirect interaction with a local observer. A (physical) observer outside of the universe would require a physical breaking of gauge invariance, and a concomitant alteration in the mathematical structure of the gauge-invariant theory.
Similarly, RQM conceptually forbids the possibility of an external observer. Since the assignment of a quantum state requires at least two "objects" (system and observer), which must both be physical systems, there is no meaning in speaking of the "state" of the entire universe. This is because this state would have to be ascribed to a correlation between the universe and some other physical observer, but this observer in turn would have to form part of the universe. As was discussed above, it is not possible for an object to contain a complete specification of itself. Following the idea of relational networks above, an RQM-oriented cosmology would have to account for the universe as a set of partial systems providing descriptions of one another. Such a construction was developed in particular by Francesca Vidotto.
== Relationship with other interpretations ==
The only group of interpretations of quantum mechanics with which RQM is almost completely incompatible is that of hidden variables theories. RQM shares some deep similarities with other views, but differs from them all to the extent to which the other interpretations do not accord with the "relational world" put forward by RQM.
=== Copenhagen interpretation ===
RQM is, in essence, quite similar to the Copenhagen interpretation, but with an important difference. In the Copenhagen interpretation, the macroscopic world is assumed to be intrinsically classical in nature, and wave function collapse occurs when a quantum system interacts with macroscopic apparatus. In RQM, any interaction, be it micro or macroscopic, causes the linearity of Schrödinger evolution to break down. RQM could recover a Copenhagen-like view of the world by assigning a privileged status (not dissimilar to a preferred frame in relativity) to the classical world. However, by doing this one would lose sight of the key features that RQM brings to our view of the quantum world.
=== Hidden-variables theories ===
Bohm's interpretation of QM does not sit well with RQM. One of the explicit hypotheses in the construction of RQM is that quantum mechanics is a complete theory, that is it provides a full account of the world. Moreover, the Bohmian view seems to imply an underlying, "absolute" set of states of all systems, which is also ruled out as a consequence of RQM.
We find a similar incompatibility between RQM and suggestions such as that of Penrose, which postulate that some process (in Penrose's case, gravitational effects) violate the linear evolution of the Schrödinger equation for the system.
=== Relative-state formulation ===
The many-worlds family of interpretations (MWI) shares an important feature with RQM, that is, the relational nature of all value assignments (that is, properties). Everett, however, maintains that the universal wavefunction gives a complete description of the entire universe, while Rovelli argues that this is problematic, both because this description is not tied to a specific observer (and hence is "meaningless" in RQM), and because RQM maintains that there is no single, absolute description of the universe as a whole, but rather a net of interrelated partial descriptions.
=== Consistent histories approach ===
In the consistent histories approach to QM, instead of assigning probabilities to single values for a given system, the emphasis is given to sequences of values, in such a way as to exclude (as physically impossible) all value assignments which result in inconsistent probabilities being attributed to observed states of the system. This is done by means of ascribing values to "frameworks", and all values are hence framework-dependent.
RQM accords perfectly well with this view. However, the consistent histories approach does not give a full description of the physical meaning of framework-dependent value (that is it does not account for how there can be "facts" if the value of any property depends on the framework chosen). By incorporating the relational view into this approach, the problem is solved: RQM provides the means by which the observer-independent, framework-dependent probabilities of various histories are reconciled with observer-dependent descriptions of the world.
== EPR and quantum non-locality ==
RQM provides an unusual solution to the EPR paradox. Indeed, it manages to dissolve the problem altogether, inasmuch as there is no superluminal transportation of information involved in a Bell test experiment: the principle of locality is preserved inviolate for all observers.
=== The problem ===
In the EPR thought experiment, a radioactive source produces two electrons in a singlet state, meaning that the sum of the spin on the two electrons is zero. These electrons are fired off at time $t_1$ towards two spacelike separated observers, Alice and Bob, who can perform spin measurements, which they do at time $t_2$.
. The fact that the two electrons are a singlet means that if Alice measures z-spin up on her electron, Bob will measure z-spin down on his, and vice versa: the correlation is perfect. If Alice measures z-axis spin, and Bob measures the orthogonal y-axis spin, however, the correlation will be zero. Intermediate angles give intermediate correlations in a way that, on careful analysis, proves inconsistent with the idea that each particle has a definite, independent probability of producing the observed measurements (the correlations violate Bell's inequality).
This subtle dependence of one measurement on the other holds even when measurements are made simultaneously and a great distance apart, which gives the appearance of a superluminal communication taking place between the two electrons. Put simply, how can Bob's electron "know" what Alice measured on hers, so that it can adjust its own behavior accordingly?
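The quantum predictions behind this tension can be stated compactly: for the singlet state, the correlation between spin measurements along directions separated by an angle $\theta$ is $-\cos\theta$. The sketch below (my own illustration; the CHSH combination and measurement angles are the textbook choices, not from this article) evaluates this formula and exhibits the violation of the Bell–CHSH bound:

```python
import math

# Singlet-state correlation between spin measurements along axes
# at angles a and b: E(a, b) = -cos(a - b)  (standard QM prediction)
def E(a, b):
    return -math.cos(a - b)

# Perfect anti-correlation for equal axes, zero for orthogonal axes
print(E(0.0, 0.0))                    # -1.0
print(round(E(0.0, math.pi / 2), 10)) # 0.0

# CHSH combination: any local hidden-variable model obeys |S| <= 2
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(S))                         # 2.828... = 2*sqrt(2) > 2
```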
=== Relational solution ===
In RQM, an interaction between a system and an observer is necessary for the system to have clearly defined properties relative to that observer. Since the two measurement events take place at spacelike separation, they do not lie in the intersection of Alice's and Bob's light cones. Indeed, there is no observer who can instantaneously measure both electrons' spin.
The key to the RQM analysis is to remember that the results obtained on each "wing" of the experiment only become determinate for a given observer once that observer has interacted with the other observer involved. As far as Alice is concerned, the specific results obtained on Bob's wing of the experiment are indeterminate for her, although she will know that Bob has a definite result. In order to find out what result Bob has, she has to interact with him at some time $t_3$ in their future light cones, through ordinary classical information channels.
The question then becomes one of whether the expected correlations in results will appear: will the two particles behave in accordance with the laws of quantum mechanics? Let us denote by $M_A(\alpha)$ the idea that the observer $A$ (Alice) measures the state of the system $\alpha$ (Alice's particle).
So, at time $t_2$, Alice knows the value of $M_A(\alpha)$: the spin of her particle, relative to herself. But, since the particles are in a singlet state, she knows that

$$M_A(\alpha) + M_A(\beta) = 0,$$

and so if she measures her particle's spin to be $\sigma$, she can predict that Bob's particle ($\beta$) will have spin $-\sigma$. All this follows from standard quantum mechanics, and there is no "spooky action at a distance" yet. From the "coherence-operator" discussed above, Alice also knows that if at $t_3$ she measures Bob's particle and then measures Bob (that is, asks him what result he got) – or vice versa – the results will be consistent:
$$M_A(B) = M_A(\beta).$$
Finally, if a third observer (Charles, say) comes along and measures Alice, Bob, and their respective particles, he will find that everyone still agrees, because his own "coherence-operator" demands that

$$M_C(A) = M_C(\alpha) \quad \text{and} \quad M_C(B) = M_C(\beta),$$

while knowledge that the particles were in a singlet state tells him that

$$M_C(\alpha) + M_C(\beta) = 0.$$
Thus the relational interpretation, by shedding the notion of an "absolute state" of the system, allows for an analysis of the EPR paradox which neither violates traditional locality constraints, nor implies superluminal information transfer, since we can assume that all observers are moving at comfortable sub-light velocities. And, most importantly, the results of every observer are in full accordance with those expected by conventional quantum mechanics.
Whether or not this account of locality is successful has been a matter of debate.
== Derivation ==
A promising feature of the relational interpretation is that it offers the possibility of being derived from a small number of axioms, or postulates, based on experimental observations. Rovelli's derivation of RQM uses three fundamental postulates. However, it has been suggested that it may be possible to reformulate the third postulate as a weaker statement, or possibly even do away with it altogether. The derivation of RQM parallels, to a large extent, quantum logic. The first two postulates are motivated entirely by experimental results, while the third postulate, although it accords perfectly with what we have discovered experimentally, is introduced as a means of recovering the full Hilbert space formalism of quantum mechanics from the other two postulates. The two empirical postulates are:
Postulate 1: there is a maximum amount of relevant information that may be obtained from a quantum system.
Postulate 2: it is always possible to obtain new information from a system.
We let $W(S)$ denote the set of all possible questions that may be "asked" of a quantum system, the individual questions being denoted $Q_i$, $i \in W$. We may experimentally find certain relations between these questions: $\{\land, \lor, \neg, \supset, \bot\}$, corresponding to {intersection, orthogonal sum, orthogonal complement, inclusion, and orthogonality} respectively, where $Q_1 \bot Q_2 \equiv Q_1 \supset \neg Q_2$.
=== Structure ===
From the first postulate, it follows that we may choose a subset $Q_c^{(i)}$ of $N$ mutually independent questions, where $N$ is the number of bits contained in the maximum amount of information. We call such a question $Q_c^{(i)}$ a complete question. The value of $Q_c^{(i)}$ can be expressed as an $N$-tuple sequence of binary valued numerals, which has $2^N = k$ possible permutations of "0" and "1" values. There will also be more than one possible complete question. If we further assume that the relations $\{\land, \lor\}$ are defined for all $Q_i$, then $W(S)$ is an orthomodular lattice, while all the possible unions of sets of complete questions form a Boolean algebra with the $Q_c^{(i)}$ as atoms.
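A complete question's answer space is easy to enumerate explicitly. The snippet below (an illustration, not part of the derivation) lists the $2^N = k$ possible $N$-tuples for a small $N$:

```python
from itertools import product

# A complete question on a system carrying N bits of information has
# an answer that is an N-tuple of binary values: 2**N = k possibilities.
N = 3
answers = list(product([0, 1], repeat=N))
k = len(answers)
assert k == 2 ** N        # 8 possible answers for N = 3
print(answers[:2])        # [(0, 0, 0), (0, 0, 1)]
```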
The second postulate governs the event of further questions being asked by an observer $O_1$ of a system $S$, when $O_1$ already has a full complement of information on the system (an answer to a complete question). We denote by $p(Q|Q_c^{(j)})$ the probability that a "yes" answer to a question $Q$ will follow the complete question $Q_c^{(j)}$. If $Q$ is independent of $Q_c^{(j)}$, then $p = 0.5$, or it might be fully determined by $Q_c^{(j)}$, in which case $p = 1$. There is also a range of intermediate possibilities, and this case is examined below.
If the question that $O_1$ wants to ask the system is another complete question, $Q_b^{(i)}$, the probability $p^{ij} = p\left(Q_b^{(i)}|Q_c^{(j)}\right)$ of a "yes" answer has certain constraints upon it:
1. $0 \leq p^{ij} \leq 1,$
2. $\sum_i p^{ij} = 1,$
3. $\sum_j p^{ij} = 1.$
The three constraints above are inspired by the most basic properties of probabilities, and are satisfied if

$$p^{ij} = \left|U^{ij}\right|^2,$$

where $U^{ij}$ is a unitary matrix.
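That the unitarity condition suffices is easy to check numerically: the sketch below (my own illustration) takes a $2 \times 2$ rotation matrix as $U$ and verifies that $p^{ij} = |U^{ij}|^2$ satisfies all three constraints:

```python
import numpy as np

# Probabilities p^{ij} = |U^{ij}|^2 for a unitary U automatically
# satisfy all three constraints: p is a doubly stochastic matrix.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a real 2x2 unitary (rotation)

p = np.abs(U) ** 2

assert np.all((0 <= p) & (p <= 1))        # constraint 1
assert np.allclose(p.sum(axis=0), 1.0)    # constraint 2: sum over i is 1
assert np.allclose(p.sum(axis=1), 1.0)    # constraint 3: sum over j is 1
print(p)
```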
Postulate 3: if $b$ and $c$ are two complete questions, then the unitary matrix $U_{bc}$ associated with their probability described above satisfies the equality $U_{cd} = U_{cb} U_{bd}$, for all $b$, $c$ and $d$.
This third postulate implies that if we set a complete question $|Q_c^{(i)}\rangle$ as a basis vector in a complex Hilbert space, we may then represent any other question $|Q_b^{(j)}\rangle$ as a linear combination:

$$|Q_b^{(j)}\rangle = \sum_i U_{bc}^{ij} |Q_c^{(i)}\rangle.$$
And the conventional probability rule of quantum mechanics states that if two sets of basis vectors are in the relation above, then the probability $p^{ij}$ is

$$p^{ij} = |\langle Q_c^{(i)}|Q_b^{(j)}\rangle|^2 = |U_{bc}^{ij}|^2.$$
=== Dynamics ===
The Heisenberg picture of time evolution accords most easily with RQM. Questions may be labelled by a time parameter $t \rightarrow Q(t)$, and are regarded as distinct if they are specified by the same operator but are performed at different times. Because time evolution is a symmetry in the theory (it forms a necessary part of the full formal derivation of the theory from the postulates), the set of all possible questions at time $t_2$ is isomorphic to the set of all possible questions at time $t_1$. It follows, by standard arguments in quantum logic, from the derivation above that the orthomodular lattice $W(S)$ has the structure of the set of linear subspaces of a Hilbert space, with the relations between the questions corresponding to the relations between linear subspaces.
It follows that there must be a unitary transformation {\displaystyle U\left(t_{2}-t_{1}\right)} that satisfies:
{\displaystyle Q(t_{2})=U\left(t_{2}-t_{1}\right)Q(t_{1})U^{-1}\left(t_{2}-t_{1}\right)}
and
{\displaystyle U\left(t_{2}-t_{1}\right)=\exp({-i\left(t_{2}-t_{1}\right)H})}
where {\displaystyle H} is the Hamiltonian, a self-adjoint operator on the Hilbert space, and the unitary matrices form an abelian group.
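The two dynamical relations above are easy to verify concretely. The sketch below uses an illustrative two-level Hamiltonian H = diag(1, −1) with ħ = 1 (an assumption chosen only so that U(t) is diagonal), and checks both the abelian group property U(a)U(b) = U(a + b) and that Heisenberg conjugation preserves the operator's algebraic structure.

```python
import cmath

def matmul(A, B):
    # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def U(dt):
    # U(dt) = exp(-i * dt * H) for the illustrative H = diag(1, -1)
    return [[cmath.exp(-1j * dt), 0], [0, cmath.exp(1j * dt)]]

def Uinv(dt):
    return U(-dt)

# Abelian group property: U(a) U(b) = U(a + b)
AB, C = matmul(U(0.3), U(0.5)), U(0.8)
assert all(abs(AB[i][j] - C[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Heisenberg evolution of Q = sigma_x between t1 and t2
Q1 = [[0, 1], [1, 0]]
t1, t2 = 0.2, 1.0
Q2 = matmul(matmul(U(t2 - t1), Q1), Uinv(t2 - t1))

# Unitary conjugation preserves the spectrum: Q2^2 = I, just like Q1^2
Q2sq = matmul(Q2, Q2)
assert abs(Q2sq[0][0] - 1) < 1e-12 and abs(Q2sq[0][1]) < 1e-12
```

The same check goes through for any self-adjoint H; the diagonal choice merely avoids needing a matrix exponential routine.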
== Problems and discussion ==
The question is whether RQM denies any objective reality, or, stated otherwise, whether it admits only a subjectively knowable reality. Rovelli limits the scope of this claim by stating that RQM relates to the variables of a physical system and not to constant, intrinsic properties, such as the mass and charge of an electron. Indeed, mechanics in general only predicts the behavior of a physical system under various conditions. In classical mechanics this behavior is mathematically represented in a phase space with certain degrees of freedom; in quantum mechanics this is a state space, mathematically represented as a multidimensional complex Hilbert space, in which the dimensions correspond to the above variables.
Dorato, however, argues that all intrinsic properties of a physical system, including mass and charge, are only knowable in a subjective interaction between the observer and the physical system. The unspoken thought behind this is that intrinsic properties are essentially quantum mechanical properties as well.
== See also ==
== Notes ==
== References ==
Bitbol, M.: "An analysis of the Einstein–Podolsky–Rosen correlations in terms of events"; Physics Letters 96A, 1983: 66–70.
Crane, L.: "Clock and Category: Is Quantum Gravity Algebraic?"; Journal of Mathematical Physics 36; 1993: 6180–6193; arXiv:gr-qc/9504038 .
Everett, H.: "The Theory of the Universal Wavefunction"; Princeton University Doctoral Dissertation; in DeWitt, B.S. & Graham, R.N. (eds.): "The Many-Worlds Interpretation of Quantum Mechanics"; Princeton University Press; 1973.
Finkelstein, D.R.: "Quantum Relativity: A Synthesis of the Ideas of Einstein and Heisenberg"; Springer-Verlag; 1996.
Floridi, L.: "Informational Realism"; Computers and Philosophy 2003 - Selected Papers from the Computer and Philosophy conference (CAP 2003), Conferences in Research and Practice in Information Technology, '37', 2004, edited by J. Weckert. and Y. Al-Saggaf, ACS, pp. 7–12. [1].
Laudisa, F.: "The EPR Argument in a Relational Interpretation of Quantum Mechanics"; Foundations of Physics Letters, 14 (2); 2001: pp. 119–132; arXiv:quant-ph/0011016 .
Laudisa, F. & Rovelli, C.: "Relational Quantum Mechanics"; The Stanford Encyclopedia of Philosophy (Fall 2005 Edition), Edward N. Zalta (ed.); online article.
Pienaar, J.: "Comment on 'The Notion of Locality in Relational Quantum Mechanics'"; Foundations of Physics 49 2019; 1404–1414; arXiv:1807.06457 .
Rovelli, C.: Helgoland; Adelphi; 2020; English translation 2021 Helgoland: Making Sense of the Quantum Revolution.
Rovelli, C. & Smerlak, M.: "Relational EPR"; Preprint: arXiv:quant-ph/0604064 .
Rovelli, C.: "Relational Quantum Mechanics"; International Journal of Theoretical Physics 35; 1996: 1637–1678; arXiv:quant-ph/9609002 .
Smolin, L.: "The Bekenstein Bound, Topological Quantum Field Theory and Pluralistic Quantum Field Theory"; Preprint: arXiv:gr-qc/9508064 .
Wheeler, J. A.: "Information, physics, quantum: The search for links"; in Zurek, W., ed.: "Complexity, Entropy and the Physics of Information"; pp. 3–28; Addison-Wesley; 1990.
== Further reading ==
Pienaar, Jacques (2021). "A Quintet of Quandaries: Five No-Go Theorems for Relational Quantum Mechanics". Foundations of Physics. 51 (5): 97. arXiv:2107.00670. Bibcode:2021FoPh...51...97P. doi:10.1007/s10701-021-00500-6.
Muciño, Ricardo; Okon, Elias; Sudarsky, Daniel (2022). "Assessing relational quantum mechanics". Synthese. 200 (5): 399. arXiv:2107.00670. doi:10.1007/s11229-022-03886-6.
Di Biagio, Andrea; Rovelli, Carlo (2022). "Relational quantum mechanics is about facts, not states: A reply to Pienaar and Brukner". Foundations of Physics. 52 (3): 62. arXiv:2110.03610. Bibcode:2022FoPh...52...62D. doi:10.1007/s10701-022-00579-5. PMID 35694217.
Calosi, Claudio; Riedel, Timotheus (2024). "Relational Quantum Mechanics at the Crossroads". Foundations of Physics. 54 (6): 74. Bibcode:2024FoPh...54...74C. doi:10.1007/s10701-024-00810-5.
== External links ==
Relational Quantum Mechanics, The Stanford Encyclopedia of Philosophy (revised edition, 2019)
Adlam, Emily; Rovelli, Carlo (17 November 2023). "Information is Physical: Cross-Perspective Links in Relational Quantum Mechanics". Philosophy of Physics. 1 (1): 4. arXiv:2203.13342.
Quantum stochastic calculus is a generalization of stochastic calculus to noncommuting variables. The tools provided by quantum stochastic calculus are of great use for modeling the random evolution of systems undergoing measurement, as in quantum trajectories.: 148 Just as the Lindblad master equation provides a quantum generalization to the Fokker–Planck equation, quantum stochastic calculus allows for the derivation of quantum stochastic differential equations (QSDE) that are analogous to classical Langevin equations.
For the remainder of this article stochastic calculus will be referred to as classical stochastic calculus, in order to clearly distinguish it from quantum stochastic calculus.
== Heat baths ==
An important physical scenario in which a quantum stochastic calculus is needed is the case of a system interacting with a heat bath. It is appropriate in many circumstances to model the heat bath as an assembly of harmonic oscillators. One type of interaction between the system and the bath can be modeled (after making a canonical transformation) by the following Hamiltonian:: 42, 45
{\displaystyle H=H_{\mathrm {sys} }(\mathbf {Z} )+{\frac {1}{2}}\sum _{n}\left((p_{n}-\kappa _{n}X)^{2}+\omega _{n}^{2}q_{n}^{2}\right)\,,}
where {\displaystyle H_{\mathrm {sys} }} is the system Hamiltonian, {\displaystyle \mathbf {Z} } is a vector containing the system variables corresponding to a finite number of degrees of freedom, {\displaystyle n} is an index for the different bath modes, {\displaystyle \omega _{n}} is the frequency of a particular mode, {\displaystyle p_{n}} and {\displaystyle q_{n}} are bath operators for a particular mode, {\displaystyle X} is a system operator, and {\displaystyle \kappa _{n}} quantifies the coupling between the system and a particular bath mode.
In this scenario the equation of motion for an arbitrary system operator {\displaystyle Y} is called the quantum Langevin equation and may be written as:: 46–47
where {\displaystyle [\cdot ,\cdot ]} and {\displaystyle \{\cdot ,\cdot \}} denote the commutator and anticommutator (respectively), the memory function {\displaystyle f} is defined as:
{\displaystyle f(t)\equiv \sum _{n}\kappa _{n}^{2}\cos(\omega _{n}t)\,,}
and the time dependent noise operator {\displaystyle \xi } is defined as:
{\displaystyle \xi (t)\equiv i\sum _{n}\kappa _{n}{\sqrt {\frac {\hbar \omega _{n}}{2}}}\left(-a_{n}(t_{0})e^{-i\omega _{n}(t-t_{0})}+a_{n}^{\dagger }(t_{0})e^{i\omega _{n}(t-t_{0})}\right)\,,}
where the bath annihilation operator {\displaystyle a_{n}} is defined as:
{\displaystyle a_{n}\equiv {\frac {\omega _{n}q_{n}+ip_{n}}{\sqrt {2\hbar \omega _{n}}}}\,.}
Oftentimes this equation is more general than is needed, and further approximations are made to simplify the equation.
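The role of the memory function can be made concrete numerically: for a dense bath spectrum with flat couplings, f(t) = Σ κ_n² cos(ω_n t) is sharply peaked at t = 0 and small elsewhere, which is what licenses the Markovian simplifications introduced below. The mode grid and coupling value in this sketch are illustrative assumptions only, not parameters from the text.

```python
import math

# Hypothetical bath: a dense grid of mode frequencies with flat couplings.
omegas = [0.01 * n for n in range(1, 2001)]   # omega_n up to 20 (illustrative)
kappa2 = 0.01                                  # kappa_n^2, constant (illustrative)

def f(t):
    # Memory function f(t) = sum_n kappa_n^2 cos(omega_n t)
    return sum(kappa2 * math.cos(w * t) for w in omegas)

# f is large at t = 0 and the oscillating terms cancel at later times,
# so the bath "forgets" quickly: the structure behind the Markov limit.
assert f(0.0) > 10 * abs(f(5.0))
```

Making the frequency grid denser and wider pushes f(t) ever closer to a delta function at t = 0, which is the white-noise limit used in the next section.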
=== White noise formalism ===
For many purposes it is convenient to make approximations about the nature of the heat bath in order to achieve a white noise formalism. In such a case the interaction may be modeled by the Hamiltonian
{\displaystyle H=H_{\mathrm {sys} }+H_{B}+H_{\mathrm {int} }}
where:: 3762
{\displaystyle H_{B}=\hbar \int _{-\infty }^{\infty }\mathrm {d} \omega \,\omega b^{\dagger }(\omega )b(\omega )\,,}
and
{\displaystyle H_{\mathrm {int} }=i\hbar \int _{-\infty }^{\infty }\mathrm {d} \omega \,\kappa (\omega )\left(b^{\dagger }(\omega )c-c^{\dagger }b(\omega )\right)\,,}
where {\displaystyle b(\omega )} are annihilation operators for the bath with the commutation relation {\displaystyle [b(\omega ),b^{\dagger }(\omega ^{\prime })]=\delta (\omega -\omega ^{\prime })}, {\displaystyle c} is an operator on the system, {\displaystyle \kappa (\omega )} quantifies the strength of the coupling of the bath modes to the system, and {\displaystyle H_{\mathrm {sys} }}
describes the free system evolution.: 148 This model uses the rotating wave approximation and extends the lower limit of
{\displaystyle \omega } to {\displaystyle -\infty } in order to admit a mathematically simple white noise formalism. The coupling strengths are also usually simplified to a constant in what is sometimes called the first Markov approximation:: 3763
{\displaystyle \kappa (\omega )={\sqrt {\frac {\gamma }{2\pi }}}\,.}
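With this flat coupling, the bath correlation function becomes proportional to a delta function as the frequency cutoff is taken to infinity. The sketch below checks this numerically for a finite cutoff Ω (an illustrative parameter, not part of the text): the correlation function integrates to γ while its width shrinks as 1/Ω.

```python
import math

gamma = 2.0
Omega = 200.0   # finite frequency cutoff; the ideal model takes Omega -> infinity

def corr(tau):
    # (gamma / 2 pi) * integral_{-Omega}^{Omega} exp(-i w tau) dw
    # = (gamma / pi) * sin(Omega * tau) / tau, which is real by symmetry
    if tau == 0.0:
        return gamma * Omega / math.pi
    return gamma * math.sin(Omega * tau) / (math.pi * tau)

# The correlation function approximates gamma * delta(tau):
# its integral over tau is ~gamma, while its peak grows with Omega.
dt = 1e-4
area = sum(corr(k * dt) for k in range(-100000, 100001)) * dt
assert abs(area - gamma) < 0.05
assert corr(0.0) > 100 * abs(corr(1.0))
```

Increasing Ω narrows the peak without changing the enclosed area, which is exactly the delta-correlated ("white") noise assumed by the formalism.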
Systems coupled to a bath of harmonic oscillators can be thought of as being driven by a noise input and radiating a noise output.: 43 The input noise operator at time
{\displaystyle t} is defined by:: 150 : 3763
{\displaystyle b_{\mathrm {in} }(t)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }\mathrm {d} \omega \,e^{-i\omega (t-t_{0})}b_{0}(\omega )\,,}
where {\displaystyle b_{0}(\omega )=\left.b(\omega )\right\vert _{t=t_{0}}}, since this operator is expressed in the Heisenberg picture. Satisfaction of the commutation relation {\displaystyle [b_{\mathrm {in} }(t),b_{\mathrm {in} }^{\dagger }(t^{\prime })]=\delta (t-t^{\prime })}
allows the model to have a strict correspondence with a Markovian master equation.: 142
In the white noise setting described so far, the quantum Langevin equation for an arbitrary system operator {\displaystyle a} takes a simpler form:: 3763
For the case most closely corresponding to classical white noise, the input to the system is described by a density operator giving the following expectation value:: 154
=== Quantum Wiener process ===
In order to define quantum stochastic integration, it is important to define a quantum Wiener process:: 155 : 3765
{\displaystyle B(t,t_{0})=\int _{t_{0}}^{t}b_{\mathrm {in} }(t^{\prime })\mathrm {d} t^{\prime }\,.}
This definition gives the quantum Wiener process the commutation relation {\displaystyle [B(t,t_{0}),B^{\dagger }(t,t_{0})]=t-t_{0}}. The property of the bath annihilation operators in (WN2) implies that the quantum Wiener process has an expectation value of:
{\displaystyle \langle B^{\dagger }(t,t_{0})B(t,t_{0})\rangle _{\rho (t,t_{0})}=N(t-t_{0})\,.}
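The linear growth in t − t₀ above mirrors the variance growth of a classical Wiener process. As a purely classical analogue (the operator character of B is of course lost), one can simulate Wiener paths and check that the increment over [t₀, t] has mean zero and variance t − t₀; the path and step counts below are arbitrary simulation choices.

```python
import random

random.seed(1)
t0, t = 0.0, 2.0
n_steps, n_paths = 200, 2000
dt = (t - t0) / n_steps

# Simulate classical Wiener increments W(t) - W(t0)
finals = []
for _ in range(n_paths):
    w = 0.0
    for _ in range(n_steps):
        w += random.gauss(0.0, dt ** 0.5)   # Gaussian step with variance dt
    finals.append(w)

mean = sum(finals) / n_paths
var = sum((x - mean) ** 2 for x in finals) / n_paths
assert abs(mean) < 0.15            # E[W(t) - W(t0)] = 0
assert abs(var - (t - t0)) < 0.3   # Var[W(t) - W(t0)] = t - t0
```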
The quantum Wiener processes are also specified such that their quasiprobability distributions are Gaussian by defining the density operator:
{\displaystyle \rho (t,t_{0})=(1-e^{-\kappa })\exp \left[-{\frac {\kappa B^{\dagger }(t,t_{0})B(t,t_{0})}{t-t_{0}}}\right]\,,}
where {\displaystyle N=1/(e^{\kappa }-1)}.: 3765
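The density operator above is a Gaussian thermal state, so the occupation number N = 1/(e^κ − 1) is simply the Bose–Einstein mean of the implied geometric number distribution p_n = (1 − e^{−κ}) e^{−κn}. A quick numerical check, with an arbitrary illustrative value of κ:

```python
import math

kappa = 0.7   # illustrative value

# Thermal (geometric) number distribution implied by the Gaussian
# density operator: p_n = (1 - e^{-kappa}) e^{-kappa n}
p = [(1 - math.exp(-kappa)) * math.exp(-kappa * n) for n in range(200)]

assert abs(sum(p) - 1) < 1e-12                 # normalised
mean_n = sum(n * p_n for n, p_n in enumerate(p))
N = 1 / (math.exp(kappa) - 1)                  # stated occupation number
assert abs(mean_n - N) < 1e-9
```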
== Quantum stochastic integration ==
The stochastic evolution of system operators can also be defined in terms of the stochastic integration of given equations.
=== Quantum Itô integral ===
The quantum Itô integral of a system operator {\displaystyle g(t)} is given by:: 155
{\displaystyle (\mathbf {I} )\int _{t_{0}}^{t}g(t^{\prime })\mathrm {d} B(t^{\prime })=\lim _{n\to \infty }\sum _{i=1}^{n}g(t_{i})\left(B(t_{i+1},t_{0})-B(t_{i},t_{0})\right)\,,}
where the bold (I) preceding the integral stands for Itô. One of the characteristics of defining the integral in this way is that the increments {\displaystyle \mathrm {d} B} and {\displaystyle \mathrm {d} B^{\dagger }} commute with the system operator.
=== Itô quantum stochastic differential equation ===
In order to define the Itô QSDE, it is necessary to know something about the bath statistics.: 159 In the context of the white noise formalism described earlier, the Itô QSDE can be defined as:: 156
{\displaystyle (\mathbf {I} )\,\mathrm {d} a=-{\frac {i}{\hbar }}[a,H_{\mathrm {sys} }]\mathrm {d} t+\gamma \left((N+1){\mathcal {D}}[c^{\dagger }]a+N{\mathcal {D}}[c]a\right)\mathrm {d} t-{\sqrt {\gamma }}\left([a,c^{\dagger }]\mathrm {d} B(t)-\mathrm {d} B^{\dagger }(t)[a,c]\right)\,,}
where the equation has been simplified using the Lindblad superoperator:: 105
{\displaystyle {\mathcal {D}}[A]a\equiv AaA^{\dagger }-{\frac {1}{2}}\left(A^{\dagger }Aa+aA^{\dagger }A\right)\,.}
This differential equation is interpreted as defining the system operator {\displaystyle a} as the quantum Itô integral of the right hand side, and is equivalent to the Langevin equation (WN1).: 3765
=== Quantum Stratonovich integral ===
The quantum Stratonovich integral of a system operator {\displaystyle g(t)} is given by:: 157
{\displaystyle (\mathbf {S} )\int _{t_{0}}^{t}g(t^{\prime })\mathrm {d} B(t^{\prime })=\lim _{n\to \infty }\sum _{i=1}^{n}{\frac {g(t_{i})+g(t_{i+1})}{2}}\left(B(t_{i+1},t_{0})-B(t_{i},t_{0})\right)\,,}
where the bold (S) preceding the integral stands for Stratonovich. Unlike the Itô formulation, the increments in the Stratonovich integral do not commute with the system operator, and it can be shown that:
{\displaystyle (\mathbf {S} )\int _{t_{0}}^{t}g(t^{\prime })\mathrm {d} B(t^{\prime })-(\mathbf {S} )\int _{t_{0}}^{t}\mathrm {d} B(t^{\prime })g(t^{\prime })={\frac {\sqrt {\gamma }}{2}}\int _{t_{0}}^{t}\mathrm {d} t^{\prime }\,[g(t^{\prime }),c(t^{\prime })]\,.}
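The Itô/Stratonovich distinction has an exact classical counterpart that is easy to simulate: for ∫ W dW, the left-endpoint (Itô) and midpoint (Stratonovich) sums differ by half the quadratic variation, T/2. The sketch below demonstrates this for a classical Wiener path; the noncommutativity of the quantum increments is of course absent here.

```python
import random

random.seed(7)
T, n = 1.0, 20000
dt = T / n

# One classical Wiener path
W = [0.0]
for _ in range(n):
    W.append(W[-1] + random.gauss(0.0, dt ** 0.5))

# Ito (left endpoint) and Stratonovich (midpoint) sums for int W dW
ito = sum(W[i] * (W[i + 1] - W[i]) for i in range(n))
strat = sum(0.5 * (W[i] + W[i + 1]) * (W[i + 1] - W[i]) for i in range(n))

# Stratonovich reproduces ordinary calculus: int W dW = W(T)^2 / 2 exactly
# (the midpoint sum telescopes).
assert abs(strat - W[-1] ** 2 / 2) < 1e-9
# The Ito sum differs by the quadratic variation: strat - ito -> T / 2.
assert abs((strat - ito) - T / 2) < 0.05
```

The midpoint sum telescopes exactly, which is the discrete shadow of the statement that Stratonovich integration preserves the ordinary chain rule.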
=== Stratonovich quantum stochastic differential equation ===
The Stratonovich QSDE can be defined as:: 158
{\displaystyle (\mathbf {S} )\,\mathrm {d} a=-{\frac {i}{\hbar }}[a,H_{\mathrm {sys} }]\mathrm {d} t-{\frac {\gamma }{2}}\left([a,c^{\dagger }]c-c^{\dagger }[a,c]\right)\mathrm {d} t-{\sqrt {\gamma }}\left([a,c^{\dagger }]\mathrm {d} B(t)-\mathrm {d} B^{\dagger }(t)[a,c]\right)\,.}
This differential equation is interpreted as defining the system operator {\displaystyle a} as the quantum Stratonovich integral of the right hand side, and is in the same form as the Langevin equation (WN1).: 3766–3767
=== Relation between Itô and Stratonovich integrals ===
The two definitions of quantum stochastic integrals relate to one another in the following way, assuming a bath with {\displaystyle N} defined as before:
{\displaystyle (\mathbf {S} )\int _{t_{0}}^{t}g(t^{\prime })\mathrm {d} B(t^{\prime })=(\mathbf {I} )\int _{t_{0}}^{t}g(t^{\prime })\mathrm {d} B(t^{\prime })+{\frac {1}{2}}{\sqrt {\gamma }}N\int _{t_{0}}^{t}\mathrm {d} t^{\prime }\,[g(t^{\prime }),c(t^{\prime })]\,.}
=== Calculus rules ===
Just as with classical stochastic calculus, the appropriate product rule can be derived for Itô and Stratonovich integration, respectively:: 156, 159
{\displaystyle (\mathbf {I} )\,\mathrm {d} (ab)=a\,\mathrm {d} b+b\,\mathrm {d} a+\mathrm {d} a\,\mathrm {d} b\,,}
{\displaystyle (\mathbf {S} )\,\mathrm {d} (ab)=a\,\mathrm {d} b+\mathrm {d} a\,b\,.}
As is the case in classical stochastic calculus, the Stratonovich form is the one which preserves the ordinary calculus (which in this case is noncommuting). A peculiarity in the quantum generalization is the necessity to define both Itô and Stratonovich integration in order to prove that the Stratonovich form preserves the rules of noncommuting calculus.: 155
== Quantum trajectories ==
Quantum trajectories can generally be thought of as the path through Hilbert space that the state of a quantum system traverses over time. In a stochastic setting, these trajectories are often conditioned upon measurement results. The unconditioned Markovian evolution of a quantum system (averaged over all possible measurement outcomes) is given by a Lindblad equation. In order to describe the conditioned evolution in these cases, it is necessary to unravel the Lindblad equation by choosing a consistent QSDE. In the case where the conditioned system state is always pure, the unraveling could be in the form of a stochastic Schrödinger equation (SSE). If the state may become mixed, then it is necessary to use a stochastic master equation (SME).: 148
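Before turning to unravelings, the unconditioned Lindblad evolution itself can be integrated directly. The sketch below Euler-integrates ρ̇ = D[c]ρ − i[H_sys, ρ] for a decaying two-level system with the illustrative choices H_sys = 0 and c = σ₋, confirming that the trace is preserved and the excited population decays exponentially.

```python
import numpy as np

def D(A, rho):
    # Lindblad superoperator: A rho A^dag - (A^dag A rho + rho A^dag A) / 2
    Ad = A.conj().T
    return A @ rho @ Ad - 0.5 * (Ad @ A @ rho + rho @ Ad @ A)

# sigma_minus maps |e> (index 1) to |g> (index 0); H_sys = 0 for simplicity
c = np.array([[0, 1], [0, 0]], dtype=complex)
rho = np.array([[0, 0], [0, 1]], dtype=complex)   # start in |e><e|

dt, steps = 1e-3, 2000
for _ in range(steps):
    rho = rho + dt * D(c, rho)    # forward Euler step of the master equation

t = dt * steps
assert abs(np.trace(rho) - 1) < 1e-9              # trace preserved
assert abs(rho[1, 1].real - np.exp(-t)) < 1e-2    # population decays as e^{-t}
```

A forward Euler step is crude but sufficient here; for stiff problems one would use a higher-order or exponential integrator instead.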
=== Example unravelings ===
Consider the following Lindblad master equation for a system interacting with a vacuum bath:: 145
{\displaystyle {\dot {\rho }}={\mathcal {D}}[c]\rho -i[H_{\mathrm {sys} },\rho ]\,.}
This describes the evolution of the system state averaged over the outcomes of any particular measurement that might be made on the bath. The following SME describes the evolution of the system conditioned on the results of a continuous photon-counting measurement performed on the bath:
{\displaystyle \mathrm {d} \rho _{I}(t)=\left(\mathrm {d} N(t){\mathcal {G}}[c]-\mathrm {d} t{\mathcal {H}}[iH_{\mathrm {sys} }+{\frac {1}{2}}c^{\dagger }c]\right)\rho _{I}(t)\,,}
where
{\displaystyle {\begin{array}{rcl}{\mathcal {G}}[r]\rho &\equiv &{\frac {r\rho r^{\dagger }}{\operatorname {Tr} [r\rho r^{\dagger }]}}-\rho \\{\mathcal {H}}[r]\rho &\equiv &r\rho +\rho r^{\dagger }-\operatorname {Tr} [r\rho +\rho r^{\dagger }]\rho \end{array}}}
are nonlinear superoperators and {\displaystyle N(t)} is the photocount, indicating how many photons have been detected at time {\displaystyle t} and giving the following jump probability:: 152, 155
{\displaystyle \operatorname {E} [\mathrm {d} N(t)]=\mathrm {d} t\operatorname {Tr} [c^{\dagger }c\rho _{I}(t)]\,,}
where {\displaystyle \operatorname {E} [\cdot ]} denotes the expected value. Another type of measurement that could be made on the bath is homodyne detection, which results in quantum trajectories given by the following SME:
{\displaystyle \mathrm {d} \rho _{J}(t)=-i[H_{\mathrm {sys} },\rho _{J}(t)]\mathrm {d} t+\mathrm {d} t{\mathcal {D}}[c]\rho _{J}(t)+\mathrm {d} W(t){\mathcal {H}}[c]\rho _{J}(t)\,,}
where {\displaystyle \mathrm {d} W(t)} is a Wiener increment satisfying:: 161
{\displaystyle {\begin{array}{rcl}\mathrm {d} W(t)^{2}&=&\mathrm {d} t\\\operatorname {E} [\mathrm {d} W(t)]&=&0\,.\end{array}}}
Although these two SMEs look wildly different, calculating their expected evolution shows that they are both indeed unravelings of the same Lindblad master equation:
{\displaystyle \operatorname {E} [\mathrm {d} \rho _{I}(t)]=\operatorname {E} [\mathrm {d} \rho _{J}(t)]={\dot {\rho }}\mathrm {d} t\,.}
=== Computational considerations ===
One important application of quantum trajectories is reducing the computational resources required to simulate a master equation. For a Hilbert space of dimension d, the number of real numbers required to store the density matrix is of order d², and the time required to compute the master equation evolution is of order d⁴. Storing the state vector for a SSE, on the other hand, only requires a number of real numbers of order d, and the time to compute trajectory evolution is only of order d². The master equation evolution can then be approximated by averaging over many individual trajectories simulated using the SSE, a technique sometimes referred to as the Monte Carlo wave-function approach. Although the number of calculated trajectories n must be very large in order to accurately approximate the master equation, good results can be obtained for trajectory counts much less than d². Not only does this technique yield faster computation time, but it also allows for the simulation of master equations on machines that do not have enough memory to store the entire density matrix.: 153
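A minimal sketch of the Monte Carlo wave-function approach for a decaying two-level system, assuming a unit decay rate γ and a pure-state jump unraveling of photon counting (all parameters here are illustrative): each trajectory stores only the two state amplitudes, and averaging over trajectories recovers the master-equation decay e^{−γt}.

```python
import random, math

random.seed(3)
gamma, dt = 1.0, 1e-3          # decay rate and time step (illustrative)
steps, n_traj = 1000, 1000

pop_sum = [0.0] * (steps + 1)  # accumulated excited-state population
for _ in range(n_traj):
    g, e = 0.0, 1.0            # amplitudes (ground, excited); start excited
    for k in range(steps + 1):
        pop_sum[k] += e * e
        if random.random() < gamma * dt * e * e:   # jump: photon detected
            g, e = 1.0, 0.0
        else:                                      # no-jump evolution, renormalise
            e *= math.exp(-gamma * dt / 2)
            norm = math.sqrt(g * g + e * e)
            g, e = g / norm, e / norm

# The trajectory average reproduces the master-equation result exp(-gamma t)
t_check = 0.5
avg = pop_sum[int(t_check / dt)] / n_traj
assert abs(avg - math.exp(-gamma * t_check)) < 0.07
```

Each trajectory here costs O(d) storage (two amplitudes) rather than the O(d²) of the density matrix, which is the scaling advantage described above.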
== References ==
An interpretation of quantum mechanics is an attempt to explain how the mathematical theory of quantum mechanics might correspond to experienced reality. Quantum mechanics has held up to rigorous and extremely precise tests in an extraordinarily broad range of experiments. However, there exist a number of contending schools of thought over its interpretation. These views on interpretation differ on such fundamental questions as whether quantum mechanics is deterministic or stochastic, local or non-local, which elements of quantum mechanics can be considered real, and what the nature of measurement is, among other matters.
While some variation of the Copenhagen interpretation is commonly presented in textbooks, many other interpretations have been developed.
Despite nearly a century of debate and experiment, no consensus has been reached among physicists and philosophers of physics concerning which interpretation best "represents" reality.
== History ==
The definition of quantum theorists' terms, such as wave function and matrix mechanics, progressed through many stages. For instance, Erwin Schrödinger originally viewed the electron's wave function as its charge density smeared across space, but Max Born reinterpreted the absolute square value of the wave function as the electron's probability density distributed across space;: 24–33 the Born rule, as it is now called, matched experiment, whereas Schrödinger's charge density view did not.
The views of several early pioneers of quantum mechanics, such as Niels Bohr and Werner Heisenberg, are often grouped together as the "Copenhagen interpretation", though physicists and historians of physics have argued that this terminology obscures differences between the views so designated. Copenhagen-type ideas were never universally embraced, and challenges to a perceived Copenhagen orthodoxy gained increasing attention in the 1950s with the pilot-wave interpretation of David Bohm and the many-worlds interpretation of Hugh Everett III.
The physicist N. David Mermin once quipped, "New interpretations appear every year. None ever disappear." (Mermin also coined the saying "Shut up and calculate" to describe many physicists' attitude to quantum theory, a remark which is often misattributed to Richard Feynman.) As a rough guide to development of the mainstream view during the 1990s and 2000s, a "snapshot" of opinions was collected in a poll by Schlosshauer et al. at the "Quantum Physics and the Nature of Reality" conference of July 2011. The authors reference a similarly informal poll carried out by Max Tegmark at the "Fundamental Problems in Quantum Theory" conference in August 1997. The main conclusion of the authors is that "the Copenhagen interpretation still reigns supreme", receiving the most votes in their poll (42%), besides the rise to mainstream notability of the many-worlds interpretations: "The Copenhagen interpretation still reigns supreme here, especially if we lump it together with intellectual offsprings such as information-based interpretations and the quantum Bayesian interpretation. In Tegmark's poll, the Everett interpretation received 17% of the vote, which is similar to the number of votes (18%) in our poll."
Some concepts originating from studies of interpretations have found more practical application in quantum information science.
== Interpretive challenges ==
Abstract, mathematical nature of quantum field theories: the mathematical structure of quantum mechanics is abstract and does not result in a single, clear interpretation of its quantities.
Apparent indeterministic and irreversible processes: in classical field theory, a physical property at a given location in the field is readily derived. In most mathematical formulations of quantum mechanics, measurement (understood as an interaction with a given state) has a special role in the theory, as it is the sole process that can cause a nonunitary, irreversible evolution of the state.
Role of the observer in determining outcomes. Copenhagen-type interpretations imply that the wavefunction is a calculational tool, and represents reality only immediately after a measurement performed by an observer. Everettian interpretations grant that all possible outcomes are real, and that measurement-type interactions cause a branching process in which each possibility is realised.
Classically unexpected correlations between remote objects: entangled quantum systems, as illustrated in the EPR paradox, obey statistics that seem to violate principles of local causality by action at a distance.
Complementarity of proffered descriptions: complementarity holds that no set of classical physical concepts can simultaneously refer to all properties of a quantum system. For instance, wave description A and particulate description B can each describe quantum system S, but not simultaneously. This implies the composition of physical properties of S does not obey the rules of classical propositional logic when using propositional connectives (see "Quantum logic"). Like contextuality, the "origin of complementarity lies in the non-commutativity of operators" that describe quantum objects.
Contextual behaviour of systems locally: Quantum contextuality demonstrates that classical intuitions, in which properties of a system hold definite values independent of the manner of their measurement, fail even for local systems. Also, physical principles such as Leibniz's Principle of the identity of indiscernibles no longer apply in the quantum domain, signaling that most classical intuitions may be incorrect about the quantum world.
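The entangled-correlation challenge above can be quantified with the CHSH quantity: for spin measurements on a singlet state the quantum correlation is E(a, b) = −cos(a − b), and at the standard optimal analyzer angles the CHSH combination reaches 2√2, exceeding the bound of 2 obeyed by any local hidden-variable model.

```python
import math

def E(a, b):
    # Quantum correlation of singlet spin measurements at analyzer angles a, b
    return -math.cos(a - b)

# Standard optimal CHSH angle settings
a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, -math.pi / 4

S = abs(E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp))
assert S > 2.0                              # violates the local bound |S| <= 2
assert abs(S - 2 * math.sqrt(2)) < 1e-12    # saturates Tsirelson's bound
```

Any local hidden-variable assignment of ±1 outcomes keeps |S| ≤ 2, so the value 2√2 is a direct numerical expression of the "classically unexpected correlations" listed above.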
== Influential interpretations ==
=== Copenhagen interpretation ===
The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics principally attributed to Niels Bohr and Werner Heisenberg. It is one of the oldest attitudes towards quantum mechanics, as features of it date to the development of quantum mechanics during 1925–1927, and it remains one of the most commonly taught. There is no definitive historical statement of what is the Copenhagen interpretation, and there were in particular fundamental disagreements between the views of Bohr and Heisenberg. For example, Heisenberg emphasized a sharp "cut" between the observer (or the instrument) and the system being observed,: 133 while Bohr offered an interpretation that is independent of a subjective observer or measurement or collapse, which relies on an "irreversible" or effectively irreversible process that imparts the classical behavior of "observation" or "measurement".
Features common to Copenhagen-type interpretations include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states certain pairs of complementary properties cannot all be observed or measured simultaneously. Moreover, properties only result from the act of "observing" or "measuring"; the theory avoids assuming definite values from unperformed experiments. Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' mental arbitrariness.: 85–90 The statistical interpretation of wavefunctions due to Max Born differs sharply from Schrödinger's original intent, which was to have a theory with continuous time evolution and in which wavefunctions directly described physical reality.: 24–33
=== Many worlds ===
The many-worlds interpretation is an interpretation of quantum mechanics in which a universal wavefunction obeys the same deterministic, reversible laws at all times; in particular there is no (indeterministic and irreversible) wavefunction collapse associated with measurement. The phenomena associated with measurement are claimed to be explained by decoherence, which occurs when states interact with the environment. More precisely, the parts of the wavefunction describing observers become increasingly entangled with the parts of the wavefunction describing their experiments. Although all possible outcomes of experiments continue to lie in the wavefunction's support, the times at which they become correlated with observers effectively "split" the universe into mutually unobservable alternate histories.
=== Quantum information theories ===
Quantum informational approaches have attracted growing support. They subdivide into two kinds.
Information ontologies, such as J. A. Wheeler's "it from bit". These approaches have been described as a revival of immaterialism.
Interpretations where quantum mechanics is said to describe an observer's knowledge of the world, rather than the world itself. This approach has some similarity with Bohr's thinking. Collapse (also known as reduction) is often interpreted as an observer acquiring information from a measurement, rather than as an objective event. These approaches have been appraised as similar to instrumentalism. James Hartle writes,
The state is not an objective property of an individual system but is that information, obtained from a knowledge of how a system was prepared, which can be used for making predictions about future measurements. ... A quantum mechanical state being a summary of the observer's information about an individual physical system changes both by dynamical laws, and whenever the observer acquires new information about the system through the process of measurement. The existence of two laws for the evolution of the state vector ... becomes problematical only if it is believed that the state vector is an objective property of the system ... The "reduction of the wavepacket" does take place in the consciousness of the observer, not because of any unique physical process which takes place there, but only because the state is a construct of the observer and not an objective property of the physical system.
=== Relational quantum mechanics ===
The essential idea behind relational quantum mechanics, following the precedent of special relativity, is that different observers may give different accounts of the same series of events: for example, to one observer at a given point in time, a system may be in a single, "collapsed" eigenstate, while to another observer at the same time, it may be in a superposition of two or more states. Consequently, if quantum mechanics is to be a complete theory, relational quantum mechanics argues that the notion of "state" describes not the observed system itself, but the relationship, or correlation, between the system and its observer(s). The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system. However, it is held by relational quantum mechanics that this applies to all physical objects, whether or not they are conscious or macroscopic. Any "measurement event" is seen simply as an ordinary physical interaction, an establishment of the sort of correlation discussed above. Thus the physical content of the theory has to do not with objects themselves, but the relations between them.
=== QBism ===
QBism, which originally stood for "quantum Bayesianism", is an interpretation of quantum mechanics that takes an agent's actions and experiences as the central concerns of the theory. This interpretation is distinguished by its use of a subjective Bayesian account of probabilities to understand the quantum mechanical Born rule as a normative addition to good decision-making. QBism draws from the fields of quantum information and Bayesian probability and aims to eliminate the interpretational conundrums that have beset quantum theory.
QBism deals with common questions in the interpretation of quantum theory about the nature of wavefunction superposition, quantum measurement, and entanglement. According to QBism, many, but not all, aspects of the quantum formalism are subjective in nature. For example, in this interpretation, a quantum state is not an element of reality—instead it represents the degrees of belief an agent has about the possible outcomes of measurements. For this reason, some philosophers of science have deemed QBism a form of anti-realism. The originators of the interpretation disagree with this characterization, proposing instead that the theory more properly aligns with a kind of realism they call "participatory realism", wherein reality consists of more than can be captured by any putative third-person account of it.
=== Consistent histories ===
The consistent histories interpretation generalizes the conventional Copenhagen interpretation and attempts to provide a natural interpretation of quantum cosmology. The theory is based on a consistency criterion that allows the history of a system to be described so that the probabilities for each history obey the additive rules of classical probability. It is claimed to be consistent with the Schrödinger equation.
According to this interpretation, the purpose of a quantum-mechanical theory is to predict the relative probabilities of various alternative histories (for example, of a particle).
=== Ensemble interpretation ===
The ensemble interpretation, also called the statistical interpretation, can be viewed as a minimalist interpretation. That is, it claims to make the fewest assumptions associated with the standard mathematics. It takes the statistical interpretation of Born to the fullest extent. The interpretation states that the wave function does not apply to an individual system – for example, a single particle – but is an abstract statistical quantity that only applies to an ensemble (a vast multitude) of similarly prepared systems or particles. In the words of Einstein:
The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems.
The most prominent current advocate of the ensemble interpretation is Leslie E. Ballentine, professor at Simon Fraser University, author of the text book Quantum Mechanics, A Modern Development.
=== De Broglie–Bohm theory ===
The de Broglie–Bohm theory of quantum mechanics (also known as the pilot wave theory) is a theory by Louis de Broglie, extended later by David Bohm to include measurements. Particles, which always have positions, are guided by the wavefunction. The wavefunction evolves according to the Schrödinger wave equation, and the wavefunction never collapses. The theory takes place in a single spacetime, is non-local, and is deterministic. The simultaneous determination of a particle's position and velocity is subject to the usual uncertainty principle constraint. The theory is considered to be a hidden-variable theory, and by embracing explicit non-locality it is consistent with Bell's theorem. The measurement problem is resolved, since the particles have definite positions at all times. Collapse is explained as phenomenological.
=== Transactional interpretation ===
The transactional interpretation of quantum mechanics (TIQM) by John G. Cramer is an interpretation of quantum mechanics inspired by the Wheeler–Feynman absorber theory. It describes the collapse of the wave function as resulting from a time-symmetric transaction between a possibility wave from the source to the receiver (the wave function) and a possibility wave from the receiver to the source (the complex conjugate of the wave function). This interpretation is unique in that it treats as real not only the wave function but also its complex conjugate, which appears in the Born rule for calculating the expected value of an observable.
=== Consciousness causes collapse ===
Eugene Wigner argued that human experimenter consciousness (or maybe even animal consciousness) was critical for the collapse of the wavefunction, but he later abandoned this interpretation after learning about quantum decoherence.
Some specific proposals for consciousness-caused wave-function collapse have been shown to be unfalsifiable and, more broadly, reasonable assumptions about consciousness lead to the same conclusion.
=== Quantum logic ===
Quantum logic can be regarded as a kind of propositional logic suitable for understanding the apparent anomalies regarding quantum measurement, most notably those concerning composition of measurement operations of complementary variables. This research area and its name originated in the 1936 paper by Garrett Birkhoff and John von Neumann, who attempted to reconcile some of the apparent inconsistencies of classical Boolean logic with the facts related to measurement and observation in quantum mechanics.
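The central formal point made by Birkhoff and von Neumann is that the lattice of quantum propositions (closed subspaces of a Hilbert space, ordered by inclusion) fails the distributive law of Boolean logic. A minimal numerical sketch of this failure for spin-1/2, using NumPy (the `meet`/`join` constructions below are one standard way to realize the lattice operations on projectors; they are illustrative, not from the source):

```python
import numpy as np

def projector(v):
    """Projector onto the line spanned by vector v."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

def join(P, Q):
    """Projector onto the span of the two subspaces (lattice 'or')."""
    w, V = np.linalg.eigh(P + Q)
    cols = V[:, w > 1e-10]              # eigenvectors outside the common kernel
    if cols.shape[1] == 0:
        return np.zeros_like(P)
    return cols @ cols.conj().T

def meet(P, Q):
    """Projector onto the intersection of the two subspaces (lattice 'and')."""
    w, V = np.linalg.eigh(P + Q)
    cols = V[:, np.abs(w - 2) < 1e-10]  # eigenvalue 2 <=> vector lies in both
    if cols.shape[1] == 0:
        return np.zeros_like(P)
    return cols @ cols.conj().T

# Propositions about a spin-1/2 particle:
# q = "spin up along z", r = "spin down along z", p = "spin up along x".
q = projector(np.array([1.0, 0.0]))
r = projector(np.array([0.0, 1.0]))
p = projector(np.array([1.0, 1.0]) / np.sqrt(2))

lhs = meet(p, join(q, r))            # p AND (q OR r): q OR r is trivially true, so this is p
rhs = join(meet(p, q), meet(p, r))   # (p AND q) OR (p AND r): both conjunctions are impossible
print(np.allclose(lhs, p), np.allclose(rhs, 0))  # True True -> distributivity fails
```

Classically the two sides would have to agree; here the left side is the nontrivial proposition p while the right side is the absurd proposition, which is the anomaly quantum logic was built to accommodate.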
=== Modal interpretations of quantum theory ===
Modal interpretations of quantum mechanics were first conceived of in 1972 by Bas van Fraassen, in his paper "A formal approach to the philosophy of science". Van Fraassen introduced a distinction between a dynamical state, which describes what might be true about a system and which always evolves according to the Schrödinger equation, and a value state, which indicates what is actually true about a system at a given time. The term "modal interpretation" now is used to describe a larger set of models that grew out of this approach. The Stanford Encyclopedia of Philosophy describes several versions, including proposals by Kochen, Dieks, Clifton, Dickson, and Bub. According to Michel Bitbol, Schrödinger's views on how to interpret quantum mechanics progressed through as many as four stages, ending with a non-collapse view that in some respects resembles the interpretations of Everett and van Fraassen. Because Schrödinger subscribed to a kind of post-Machian neutral monism, in which "matter" and "mind" are only different aspects or arrangements of the same common elements, treating the wavefunction as ontic and treating it as epistemic became interchangeable.
=== Time-symmetric theories ===
Time-symmetric interpretations of quantum mechanics were first suggested by Walter Schottky in 1921. Several theories have been proposed that modify the equations of quantum mechanics to be symmetric with respect to time reversal. (See Wheeler–Feynman time-symmetric theory.) This creates retrocausality: events in the future can affect ones in the past, exactly as events in the past can affect ones in the future. In these theories, a single measurement cannot fully determine the state of a system (making them a type of hidden-variables theory), but given two measurements performed at different times, it is possible to calculate the exact state of the system at all intermediate times. The collapse of the wavefunction is therefore not a physical change to the system, just a change in our knowledge of it due to the second measurement. Similarly, they explain entanglement as not being a true physical state but just an illusion created by ignoring retrocausality. The point where two particles appear to "become entangled" is simply a point where each particle is being influenced by events that occur to the other particle in the future.
Not all advocates of time-symmetric causality favour modifying the unitary dynamics of standard quantum mechanics. Thus a leading exponent of the two-state vector formalism, Lev Vaidman, states that the two-state vector formalism dovetails well with Hugh Everett's many-worlds interpretation.
=== Other interpretations ===
As well as the mainstream interpretations discussed above, a number of other interpretations have been proposed that have not made a significant scientific impact for whatever reason. These range from proposals by mainstream physicists to the more occult ideas of quantum mysticism.
== Related concepts ==
Some ideas are discussed in the context of interpreting quantum mechanics but are not necessarily regarded as interpretations themselves.
=== Quantum Darwinism ===
Quantum Darwinism is a theory meant to explain the emergence of the classical world from the quantum world as due to a process of Darwinian natural selection induced by the environment interacting with the quantum system; where the many possible quantum states are selected against in favor of a stable pointer state. It was proposed in 2003 by Wojciech Zurek and a group of collaborators including Ollivier, Poulin, Paz and Blume-Kohout. The development of the theory is due to the integration of a number of Zurek's research topics pursued over the course of twenty-five years including pointer states, einselection and decoherence.
=== Objective-collapse theories ===
Objective-collapse theories differ from the Copenhagen interpretation by regarding both the wave function and the process of collapse as ontologically objective (meaning these exist and occur independent of the observer). In objective theories, collapse occurs either randomly ("spontaneous localization") or when some physical threshold is reached, with observers having no special role. Thus, objective-collapse theories are realistic, indeterministic, no-hidden-variables theories. Standard quantum mechanics does not specify any mechanism of collapse; quantum mechanics would need to be extended if objective collapse is correct. The requirement for an extension means that objective-collapse theories are alternatives to quantum mechanics rather than interpretations of it. Examples include
the Ghirardi–Rimini–Weber theory
the continuous spontaneous localization model
the Penrose interpretation
== Comparisons ==
The most common interpretations are summarized in the table below. The values shown in the cells of the table are not without controversy, for the precise meanings of some of the concepts involved are unclear and, in fact, are themselves at the center of the controversy surrounding the given interpretation. For another table comparing interpretations of quantum theory, see reference.
No experimental evidence exists that distinguishes among these interpretations. To that extent, the physical theory stands, and is consistent with itself and with reality. Nevertheless, designing experiments that would test the various interpretations is the subject of active research.
Most of these interpretations have variants. For example, it is difficult to get a precise definition of the Copenhagen interpretation as it was developed and argued by many people.
== The silent approach ==
Although interpretational opinions are openly and widely discussed today, that was not always the case. A notable exponent of a tendency of silence was Paul Dirac who once wrote: "The interpretation of quantum mechanics has been dealt with by many authors, and I do not want to discuss it here. I want to deal with more fundamental things." This position is not uncommon among practitioners of quantum mechanics.
Similarly Richard Feynman wrote many popularizations of quantum mechanics without ever publishing about interpretation issues like quantum measurement.
Others, like Nico van Kampen and Willis Lamb, have openly criticized non-orthodox interpretations of quantum mechanics.
== See also ==
== References ==
== Sources ==
Bub, J.; Clifton, R. (1996). "A uniqueness theorem for interpretations of quantum mechanics". Studies in History and Philosophy of Modern Physics. 27B: 181–219. doi:10.1016/1355-2198(95)00019-4.
Rudolf Carnap, 1939, "The interpretation of physics", in Foundations of Logic and Mathematics of the International Encyclopedia of Unified Science. Chicago, Illinois: University of Chicago Press.
Dickson, M., 1994, "Wavefunction tails in the modal interpretation" in Hull, D., Forbes, M., and Burian, R., eds., Proceedings of the PSA, vol. 1: 366–376. East Lansing, Michigan: Philosophy of Science Association.
--------, and Clifton, R., 1998, "Lorentz-invariance in modal interpretations" in Dieks, D. and Vermaas, P., eds., The Modal Interpretation of Quantum Mechanics. Dordrecht: Kluwer Academic Publishers: 9–48.
Fuchs, Christopher, 2002, "Quantum Mechanics as Quantum Information (and only a little more)". arXiv:quant-ph/0205039
--------, and A. Peres, 2000, "Quantum theory needs no 'interpretation'", Physics Today.
Herbert, N., 1985. Quantum Reality: Beyond the New Physics. New York: Doubleday. ISBN 0-385-23569-0.
Hey, Anthony, and Walters, P., 2003. The New Quantum Universe, 2nd ed. Cambridge University Press. ISBN 0-521-56457-3.
Jackiw, Roman; Kleppner, D. (2000). "One Hundred Years of Quantum Physics". Science. 289 (5481): 893–898. arXiv:quant-ph/0008092. Bibcode:2000quant.ph..8092K. doi:10.1126/science.289.5481.893. PMID 17839156. S2CID 6604344.
Max Jammer, 1966. The Conceptual Development of Quantum Mechanics. McGraw-Hill.
--------, 1974. The Philosophy of Quantum Mechanics. Wiley & Sons.
Al-Khalili, 2003. Quantum: A Guide for the Perplexed. London: Weidenfeld & Nicolson.
de Muynck, W. M., 2002. Foundations of quantum mechanics, an empiricist approach. Dordrecht: Kluwer Academic Publishers. ISBN 1-4020-0932-1.
Roland Omnès, 1999. Understanding Quantum Mechanics. Princeton, New Jersey: Princeton University Press.
Karl Popper, 1963. Conjectures and Refutations. London: Routledge and Kegan Paul. The chapter "Three views Concerning Human Knowledge" addresses, among other things, instrumentalism in the physical sciences.
Hans Reichenbach, 1944. Philosophic Foundations of Quantum Mechanics. University of California Press.
Tegmark, Max; Wheeler, J. A. (2001). "100 Years of Quantum Mysteries". Scientific American. 284 (2): 68–75. Bibcode:2001SciAm.284b..68T. doi:10.1038/scientificamerican0201-68. S2CID 119375538.
Bas van Fraassen, 1972, "A formal approach to the philosophy of science", in R. Colodny, ed., Paradigms and Paradoxes: The Philosophical Challenge of the Quantum Domain. Univ. of Pittsburgh Press: 303–366.
John A. Wheeler and Wojciech Hubert Zurek (eds), Quantum Theory and Measurement, Princeton, New Jersey: Princeton University Press, ISBN 0-691-08316-9, LoC QC174.125.Q38 1983.
== Further reading ==
Almost all authors below are professional physicists.
David Z Albert, 1992. Quantum Mechanics and Experience. Cambridge, Massachusetts: Harvard University Press. ISBN 0-674-74112-9.
John S. Bell, 1987. Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press, ISBN 0-521-36869-3. The 2004 edition (ISBN 0-521-52338-9) includes two additional papers and an introduction by Alain Aspect.
Dmitrii Ivanovich Blokhintsev, 1968. The Philosophy of Quantum Mechanics. D. Reidel Publishing Company. ISBN 90-277-0105-9.
David Bohm, 1980. Wholeness and the Implicate Order. London: Routledge. ISBN 0-7100-0971-2.
Adan Cabello (15 November 2004). "Bibliographic guide to the foundations of quantum mechanics and quantum information". arXiv:quant-ph/0012089.
David Deutsch, 1997. The Fabric of Reality. London: Allen Lane. ISBN 0-14-027541-X; ISBN 0-7139-9061-9. Argues forcefully against instrumentalism. For general readers.
F. J. Duarte (2014). Quantum Optics for Engineers. New York: CRC. ISBN 978-1439888537. Provides a pragmatic perspective on interpretations. For general readers.
Bernard d'Espagnat, 1976. Conceptual Foundation of Quantum Mechanics, 2nd ed. Addison Wesley. ISBN 0-8133-4087-X.
Bernard d'Espagnat, 1983. In Search of Reality. Springer. ISBN 0-387-11399-1.
Bernard d'Espagnat, 2003. Veiled Reality: An Analysis of Quantum Mechanical Concepts. Westview Press.
Bernard d'Espagnat, 2006. On Physics and Philosophy. Princeton, New Jersey: Princeton University Press.
Arthur Fine, 1986. The Shaky Game: Einstein Realism and the Quantum Theory. Science and its Conceptual Foundations. Chicago, Illinois: University of Chicago Press. ISBN 0-226-24948-4.
Ghirardi, Giancarlo, 2004. Sneaking a Look at God's Cards. Princeton, New Jersey: Princeton University Press.
Gregg Jaeger (2009) Entanglement, Information, and the Interpretation of Quantum Mechanics. Springer. ISBN 978-3-540-92127-1.
N. David Mermin (1990) Boojums all the way through. Cambridge University Press. ISBN 0-521-38880-5.
Roland Omnès, 1994. The Interpretation of Quantum Mechanics. Princeton, New Jersey: Princeton University Press. ISBN 0-691-03669-1.
Roland Omnès, 1999. Understanding Quantum Mechanics. Princeton, New Jersey: Princeton University Press.
Roland Omnès, 1999. Quantum Philosophy: Understanding and Interpreting Contemporary Science. Princeton, New Jersey: Princeton University Press.
Roger Penrose, 1989. The Emperor's New Mind. Oxford University Press. ISBN 0-19-851973-7. Especially chapter 6.
Roger Penrose, 1994. Shadows of the Mind. Oxford University Press. ISBN 0-19-853978-9.
Roger Penrose, 2004. The Road to Reality. New York: Alfred A. Knopf. Argues that quantum theory is incomplete.
Lee Phillips, 2017. A brief history of quantum alternatives. Ars Technica.
Styer, Daniel F.; Balkin, Miranda S.; Becker, Kathryn M.; Burns, Matthew R.; Dudley, Christopher E.; Forth, Scott T.; Gaumer, Jeremy S.; Kramer, Mark A.; et al. (March 2002). "Nine formulations of quantum mechanics" (PDF). American Journal of Physics. 70 (3): 288–297. Bibcode:2002AmJPh..70..288S. doi:10.1119/1.1445404.
Baggott, Jim (25 April 2024). "'Shut up and calculate': how Einstein lost the battle to explain quantum reality". Nature. 629 (8010): 29–32. Bibcode:2024Natur.629...29B. doi:10.1038/d41586-024-01216-z. PMID 38664517.
== External links ==
Stanford Encyclopedia of Philosophy:
"Bohmian mechanics" by Sheldon Goldstein.
"Collapse Theories." by Giancarlo Ghirardi.
"Copenhagen Interpretation of Quantum Mechanics" by Jan Faye.
"Everett's Relative State Formulation of Quantum Mechanics" by Jeffrey Barrett.
"Many-Worlds Interpretation of Quantum Mechanics" by Lev Vaidman.
"Modal Interpretation of Quantum Mechanics" by Michael Dickson and Dennis Dieks.
"Philosophical Issues in Quantum Theory" by Wayne Myrvold.
"Quantum-Bayesian and Pragmatist Views of Quantum Theory" by Richard Healey.
"Quantum Entanglement and Information" by Jeffrey Bub.
"Quantum mechanics" by Jenann Ismael.
"Quantum Logic and Probability Theory" by Alexander Wilce.
"Relational Quantum Mechanics" by Federico Laudisa and Carlo Rovelli.
"The Role of Decoherence in Quantum Mechanics" by Guido Bacciagaluppi.
Internet Encyclopedia of Philosophy:
"Interpretations of Quantum Mechanics" by Peter J. Lewis.
"Everettian Interpretations of Quantum Mechanics" by Christina Conroy. | Wikipedia/Interpretations_of_quantum_mechanics |
A theory of everything (TOE), final theory, ultimate theory, unified field theory, or master theory is a hypothetical singular, all-encompassing, coherent theoretical framework of physics that fully explains and links together all aspects of the universe. Finding a theory of everything is one of the major unsolved problems in physics.
Over the past few centuries, two theoretical frameworks have been developed that, together, most closely resemble a theory of everything. These two theories upon which all modern physics rests are general relativity and quantum mechanics. General relativity is a theoretical framework that focuses on gravity for understanding the universe in regions of both large scale and high mass: planets, stars, galaxies, clusters of galaxies, etc. On the other hand, quantum mechanics is a theoretical framework that focuses primarily on three non-gravitational forces for understanding the universe in regions of both very small scale and low mass: subatomic particles, atoms, and molecules. Quantum mechanics successfully implemented the Standard Model, which describes the three non-gravitational forces – the strong nuclear, weak nuclear, and electromagnetic forces – as well as all observed elementary particles.
General relativity and quantum mechanics have been repeatedly validated in their separate fields of relevance. Since the usual domains of applicability of general relativity and quantum mechanics are so different, most situations require that only one of the two theories be used. The two theories are considered incompatible in regions of extremely small scale – the Planck scale – such as those that exist within a black hole or during the beginning stages of the universe (i.e., the moment immediately following the Big Bang). To resolve the incompatibility, a theoretical framework revealing a deeper underlying reality, unifying gravity with the other three interactions, must be discovered to harmoniously integrate the realms of general relativity and quantum mechanics into a seamless whole: a theory of everything may be defined as a comprehensive theory that, in principle, would be capable of describing all physical phenomena in the universe.
In pursuit of this goal, quantum gravity has become one area of active research. One example is string theory, which evolved into a candidate for the theory of everything, but not without drawbacks (most notably, its apparent lack of currently testable predictions) and controversy. String theory posits that at the beginning of the universe (up to 10⁻⁴³ seconds after the Big Bang), the four fundamental forces were once a single fundamental force. According to string theory, every particle in the universe, at its most ultramicroscopic level (Planck length), consists of varying combinations of vibrating strings (or strands) with preferred patterns of vibration. String theory further claims that it is through these specific oscillatory patterns of strings that a particle of unique mass and force charge is created (that is to say, the electron is a type of string that vibrates one way, while the up quark is a type of string vibrating another way, and so forth). String theory/M-theory proposes six or seven dimensions of spacetime in addition to the four common dimensions for a ten- or eleven-dimensional spacetime.
== Name ==
Initially, the term theory of everything was used with an ironic reference to various overgeneralized theories. For example, a grandfather of Ijon Tichy – a character from a cycle of Stanisław Lem's science fiction stories of the 1960s – was known to work on the "General Theory of Everything". Physicist Harald Fritzsch used the term in his 1977 lectures in Varenna. Physicist John Ellis claims to have introduced the acronym "TOE" into the technical literature in an article in Nature in 1986. Over time, the term stuck in popularizations of theoretical physics research.
== Historical antecedents ==
=== Antiquity to 19th century ===
Many ancient cultures, such as the Babylonian astronomers and practitioners of Indian astronomy, studied the pattern of the Seven Sacred Luminaires/Classical Planets against the background of stars. Their interest was to relate celestial movement to human events (astrology), and their goal was to predict events by recording them against a time measure and then looking for recurrent patterns. The debate between the universe having either a beginning or eternal cycles can be traced to ancient Babylonia. Hindu cosmology posits that time is infinite with a cyclic universe, where the current universe was preceded and will be followed by an infinite number of universes. Time scales mentioned in Hindu cosmology correspond to those of modern scientific cosmology. Its cycles run from an ordinary day and night to a day and night of Brahma, 8.64 billion years long.
The natural philosophy of atomism appeared in several ancient traditions. In ancient Greek philosophy, the pre-Socratic philosophers speculated that the apparent diversity of observed phenomena was due to a single type of interaction, namely the motions and collisions of atoms. The concept of 'atom' proposed by Democritus was an early philosophical attempt to unify phenomena observed in nature. The concept of 'atom' also appeared in the Nyaya-Vaisheshika school of ancient Indian philosophy.
Archimedes was possibly the first philosopher to have described nature with axioms (or principles) and then deduce new results from them. Any "theory of everything" is similarly expected to be based on axioms and to deduce all observable phenomena from them.
Following earlier atomistic thought, the mechanical philosophy of the 17th century posited that all forces could be ultimately reduced to contact forces between the atoms, then imagined as tiny solid particles.
In the late 17th century, Isaac Newton's description of the long-distance force of gravity implied that not all forces in nature result from things coming into contact. Newton's work in his Mathematical Principles of Natural Philosophy dealt with this in a further example of unification, in this case unifying Galileo's work on terrestrial gravity, Kepler's laws of planetary motion and the phenomenon of tides by explaining these apparent actions at a distance under one single law: the law of universal gravitation. Newton achieved the first great unification in physics, and he further is credited with laying the foundations of future endeavors for a grand unified theory.
In 1814, building on these results, Laplace famously suggested that a sufficiently powerful intellect could, if it knew the position and velocity of every particle at a given time, along with the laws of nature, calculate the position of any particle at any other time:
An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
Laplace thus envisaged a combination of gravitation and mechanics as a theory of everything. Modern quantum mechanics implies that uncertainty is inescapable, and thus that Laplace's vision has to be amended: a theory of everything must include gravitation and quantum mechanics. Even ignoring quantum mechanics, chaos theory is sufficient to guarantee that the future of any sufficiently complex mechanical or astronomical system is unpredictable.
In 1820, Hans Christian Ørsted discovered a connection between electricity and magnetism, triggering decades of work that culminated in 1865, in James Clerk Maxwell's theory of electromagnetism, which achieved the second great unification in physics. During the 19th and early 20th centuries, it gradually became apparent that many common examples of forces – contact forces, elasticity, viscosity, friction, and pressure – result from electrical interactions between the smallest particles of matter.
In his experiments of 1849–1850, Michael Faraday was the first to search for a unification of gravity with electricity and magnetism. However, he found no connection.
In 1900, David Hilbert published a famous list of mathematical problems. In Hilbert's sixth problem, he challenged researchers to find an axiomatic basis to all of physics. In this problem he thus asked for what today would be called a theory of everything.
=== Early 20th century ===
In the late 1920s, the then new quantum mechanics showed that the chemical bonds between atoms were examples of (quantum) electrical forces, justifying Dirac's boast that "the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known".
After 1915, when Albert Einstein published the theory of gravity (general relativity), the search for a unified field theory combining gravity with electromagnetism began with renewed interest. In Einstein's day, the strong and the weak forces had not yet been discovered, yet he found the potential existence of two other distinct forces, gravity and electromagnetism, far more alluring. This launched his 40-year voyage in search of the so-called "unified field theory" that he hoped would show that these two forces are really manifestations of one grand, underlying principle. During the last few decades of his life, this ambition alienated Einstein from the mainstream of physics, which was instead far more excited about the emerging framework of quantum mechanics. Einstein wrote to a friend in the early 1940s, "I have become a lonely old chap who is mainly known because he doesn't wear socks and who is exhibited as a curiosity on special occasions." Prominent contributors were Gunnar Nordström, Hermann Weyl, Arthur Eddington, David Hilbert, Theodor Kaluza, Oskar Klein (see Kaluza–Klein theory), and most notably, Albert Einstein and his collaborators. Einstein searched in earnest for, but ultimately failed to find, a unifying theory (see Einstein–Maxwell–Dirac equations).
=== Late 20th century and the nuclear interactions ===
In the 20th century, the search for a unifying theory was interrupted by the discovery of the strong and weak nuclear forces, which differ both from gravity and from electromagnetism. A further hurdle was the acceptance that in a theory of everything, quantum mechanics had to be incorporated from the outset, rather than emerging as a consequence of a deterministic unified theory, as Einstein had hoped.
Gravity and electromagnetism are able to coexist as entries in a list of classical forces, but for many years it seemed that gravity could not be incorporated into the quantum framework, let alone unified with the other fundamental forces. For this reason, work on unification, for much of the 20th century, focused on understanding the three forces described by quantum mechanics: electromagnetism and the weak and strong forces. The first two were combined in 1967–1968 by Sheldon Glashow, Steven Weinberg, and Abdus Salam into the electroweak force.
Electroweak unification is a broken symmetry: the electromagnetic and weak forces appear distinct at low energies because the particles carrying the weak force, the W and Z bosons, have non-zero masses (80.4 GeV/c² and 91.2 GeV/c², respectively), whereas the photon, which carries the electromagnetic force, is massless. At higher energies W bosons and Z bosons can be created easily and the unified nature of the force becomes apparent.
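The two boson masses are not independent of the unification: at tree level the electroweak theory relates them through the weak mixing (Weinberg) angle via cos θ_W = m_W / m_Z. A back-of-the-envelope check using the masses quoted above (a sketch of the standard relation, not a precision calculation; the measured effective value of sin²θ_W is about 0.231 once quantum corrections are included):

```python
# Tree-level electroweak relation: cos(theta_W) = m_W / m_Z.
m_W = 80.4   # W boson mass in GeV/c^2
m_Z = 91.2   # Z boson mass in GeV/c^2

cos_theta_W = m_W / m_Z
sin2_theta_W = 1 - cos_theta_W**2   # the weak mixing parameter sin^2(theta_W)
print(round(sin2_theta_W, 3))       # 0.223
```

That the crude mass ratio lands within a few percent of the measured mixing angle is one of the quantitative successes of the unified electroweak description.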
While the strong and electroweak forces coexist under the Standard Model of particle physics, they remain distinct. Thus, the pursuit of a theory of everything remained unsuccessful: neither a unification of the strong and electroweak forces – which Laplace would have called 'contact forces' – nor a unification of these forces with gravitation had been achieved.
== Modern physics ==
=== Conventional sequence of theories ===
A theory of everything would unify all the fundamental interactions of nature: gravitation, the strong interaction, the weak interaction, and electromagnetism. Because the weak interaction can transform elementary particles from one kind into another, the theory of everything should also predict all the different kinds of particles possible. The usual assumed path of theories is given in the following graph, where each unification step leads one level up on the graph.
In this graph, electroweak unification occurs at around 100 GeV, grand unification is predicted to occur at 10¹⁶ GeV, and unification of the GUT force with gravity is expected at the Planck energy, roughly 10¹⁹ GeV.
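The Planck energy at the top of this graph is not an arbitrary cutoff: it is the scale built from the fundamental constants ħ, c, and G, E_P = √(ħc⁵/G). A minimal numerical check that this combination indeed lands near 10¹⁹ GeV (using CODATA-style constant values):

```python
import math

# Fundamental constants in SI units
hbar = 1.054_571_817e-34   # reduced Planck constant, J s
c    = 2.997_924_58e8      # speed of light, m/s
G    = 6.674_30e-11        # Newtonian gravitational constant, m^3 kg^-1 s^-2

# Planck energy: E_P = sqrt(hbar * c^5 / G)
E_planck_J   = math.sqrt(hbar * c**5 / G)
E_planck_GeV = E_planck_J / 1.602_176_634e-10   # 1 GeV = 1.602...e-10 J
print(f"{E_planck_GeV:.3e} GeV")                # 1.221e+19 GeV
```

This is some fifteen orders of magnitude beyond the energies reachable by current accelerators, which is why the final unification step in the graph remains experimentally inaccessible.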
Several Grand Unified Theories (GUTs) have been proposed to unify electromagnetism and the weak and strong forces. Grand unification would imply the existence of an electronuclear force; it is expected to set in at energies of the order of 10¹⁶ GeV, far greater than could be reached by any currently feasible particle accelerator. Although the simplest grand unified theories have been experimentally ruled out, the idea of a grand unified theory, especially when linked with supersymmetry, remains a favorite candidate in the theoretical physics community. Supersymmetric grand unified theories seem plausible not only for their theoretical "beauty", but because they naturally produce large quantities of dark matter, and because the inflationary force may be related to grand unified theory physics (although it does not seem to form an inevitable part of the theory). Yet grand unified theories are clearly not the final answer; both the current standard model and all proposed GUTs are quantum field theories which require the problematic technique of renormalization to yield sensible answers. This is usually regarded as a sign that these are only effective field theories, omitting crucial phenomena relevant only at very high energies.
The final step in the graph requires resolving the separation between quantum mechanics and gravitation, often equated with general relativity. Numerous researchers concentrate their efforts on this specific step; nevertheless, no accepted theory of quantum gravity, and thus no accepted theory of everything, has emerged with observational evidence. It is usually assumed that the theory of everything will also solve the remaining problems of grand unified theories.
In addition to explaining the forces listed in the graph, a theory of everything may also explain the status of at least two candidate forces suggested by modern cosmology: an inflationary force and dark energy. Furthermore, cosmological experiments also suggest the existence of dark matter, supposedly composed of fundamental particles outside the scheme of the standard model. However, the existence of these forces and particles has not been proven.
=== String theory and M-theory ===
Since the 1990s, some physicists such as Edward Witten believe that 11-dimensional M-theory, which is described in some limits by one of the five perturbative superstring theories, and in another by the maximally-supersymmetric eleven-dimensional supergravity, is the theory of everything. There is no widespread consensus on this issue.
One remarkable property of string/M-theory is that seven extra dimensions are required for the theory's consistency, on top of the four dimensions in our universe. In this regard, string theory can be seen as building on the insights of the Kaluza–Klein theory, in which it was realized that applying general relativity to a 5-dimensional universe, with one space dimension small and curled up, looks from the 4-dimensional perspective like the usual general relativity together with Maxwell's electrodynamics. This lent credence to the idea of unifying gauge and gravity interactions, and to extra dimensions, but did not address the detailed experimental requirements. Another important property of string theory is its supersymmetry, which together with extra dimensions are the two main proposals for resolving the hierarchy problem of the standard model, which is (roughly) the question of why gravity is so much weaker than any other force. The extra-dimensional solution involves allowing gravity to propagate into the other dimensions while keeping other forces confined to a 4-dimensional spacetime, an idea that has been realized with explicit stringy mechanisms.
Research into string theory has been encouraged by a variety of theoretical and experimental factors. On the experimental side, the particle content of the standard model supplemented with neutrino masses fits into a spinor representation of SO(10), a subgroup of E8 that routinely emerges in string theory, such as in heterotic string theory or (sometimes equivalently) in F-theory. String theory has mechanisms that may explain why fermions come in three hierarchical generations, and explain the mixing rates between quark generations. On the theoretical side, it has begun to address some of the key questions in quantum gravity, such as resolving the black hole information paradox, counting the correct entropy of black holes and allowing for topology-changing processes. It has also led to many insights in pure mathematics and in ordinary, strongly-coupled gauge theory due to the Gauge/String duality.
In the late 1990s, it was noted that one major hurdle in this endeavor is that the number of possible 4-dimensional universes is incredibly large. The small, "curled up" extra dimensions can be compactified in an enormous number of different ways (one estimate is 10⁵⁰⁰), each of which leads to different properties for the low-energy particles and forces. This array of models is known as the string theory landscape.
One proposed solution is that many or all of these possibilities are realized in one or another of a huge number of universes, but that only a small number of them are habitable. Hence what we normally conceive as the fundamental constants of the universe are ultimately the result of the anthropic principle rather than dictated by theory. This has led to criticism of string theory, arguing that it cannot make useful (i.e., original, falsifiable, and verifiable) predictions and regarding it as a pseudoscience/philosophy. Others disagree, and string theory remains an active topic of investigation in theoretical physics.
=== Loop quantum gravity ===
Current research on loop quantum gravity may eventually play a fundamental role in a theory of everything, but that is not its primary aim. Loop quantum gravity also introduces a lower bound on the possible length scales.
There have been recent claims that loop quantum gravity may be able to reproduce features resembling the Standard Model. So far only the first generation of fermions (leptons and quarks), with correct parity properties, has been modelled by Sundance Bilson-Thompson using preons constituted of braids of spacetime as the building blocks. However, there is no derivation of the Lagrangian that would describe the interactions of such particles, nor is it possible to show that such particles are fermions, nor that the gauge groups or interactions of the Standard Model are realised. Use of quantum computing concepts made it possible to demonstrate that the particles are able to survive quantum fluctuations.
This model leads to an interpretation of electric and color charge as topological quantities (electric as number and chirality of twists carried on the individual ribbons and colour as variants of such twisting for fixed electric charge).
Bilson-Thompson's original paper suggested that the higher-generation fermions could be represented by more complicated braidings, although explicit constructions of these structures were not given. The electric charge, color, and parity properties of such fermions would arise in the same way as for the first generation. The model was expressly generalized for an infinite number of generations and for the weak force bosons (but not for photons or gluons) in a 2008 paper by Bilson-Thompson, Hackett, Kauffman and Smolin.
=== Other attempts ===
Among other attempts to develop a theory of everything is the theory of causal fermion systems, giving the two current physical theories (general relativity and quantum field theory) as limiting cases.
Another theory is called Causal Sets. Like some of the approaches mentioned above, its direct goal is not necessarily to achieve a theory of everything but primarily a working theory of quantum gravity, which might eventually include the standard model and become a candidate for a theory of everything. Its founding principle is that spacetime is fundamentally discrete and that the spacetime events are related by a partial order. This partial order has the physical meaning of the causality relations between relative past and future distinguishing spacetime events.
Causal dynamical triangulation does not assume any pre-existing arena (dimensional space), but rather attempts to show how the spacetime fabric itself evolves.
Another attempt may be related to ER=EPR, a conjecture in physics stating that entangled particles are connected by a wormhole (or Einstein–Rosen bridge).
=== Present status ===
At present, there is no candidate theory of everything that includes the standard model of particle physics and general relativity and that, at the same time, is able to calculate the fine-structure constant or the mass of the electron. Most particle physicists expect that the outcome of ongoing experiments – the search for new particles at the large particle accelerators and for dark matter – is needed in order to provide further input for a theory of everything.
== Arguments against ==
In parallel to the intense search for a theory of everything, various scholars have debated the possibility of its discovery.
=== Gödel's incompleteness theorem ===
A number of scholars claim that Gödel's incompleteness theorem suggests that attempts to construct a theory of everything are bound to fail. Gödel's theorem, informally stated, asserts that any formal theory sufficient to express elementary arithmetical facts and strong enough for them to be proved is either inconsistent (both a statement and its denial can be derived from its axioms) or incomplete, in the sense that there is a true statement that can't be derived in the formal theory.
Stanley Jaki, in his 1966 book The Relevance of Physics, pointed out that, because a "theory of everything" will certainly be a consistent non-trivial mathematical theory, it must be incomplete. He claims that this dooms searches for a deterministic theory of everything.
Freeman Dyson has stated that "Gödel's theorem implies that pure mathematics is inexhaustible. No matter how many problems we solve, there will always be other problems that cannot be solved within the existing rules. […] Because of Gödel's theorem, physics is inexhaustible too. The laws of physics are a finite set of rules, and include the rules for doing mathematics, so that Gödel's theorem applies to them."
Stephen Hawking was originally a believer in the Theory of Everything, but after considering Gödel's Theorem, he concluded that one was not obtainable. "Some people will be very disappointed if there is not an ultimate theory that can be formulated as a finite number of principles. I used to belong to that camp, but I have changed my mind."
Jürgen Schmidhuber (1997) has argued against this view; he asserts that Gödel's theorems are irrelevant for computable physics. In 2000, Schmidhuber explicitly constructed limit-computable, deterministic universes whose pseudo-randomness based on undecidable, Gödel-like halting problems is extremely hard to detect but does not prevent formal theories of everything describable by very few bits of information.
Related critique was offered by Solomon Feferman and others. Douglas S. Robertson offers Conway's game of life as an example: The underlying rules are simple and complete, but there are formally undecidable questions about the game's behaviors. Analogously, it may (or may not) be possible to completely state the underlying rules of physics with a finite number of well-defined laws, but there is little doubt that there are questions about the behavior of physical systems which are formally undecidable on the basis of those underlying laws.
Since most physicists would consider the statement of the underlying rules to suffice as the definition of a "theory of everything", most physicists argue that Gödel's Theorem does not mean that a theory of everything cannot exist. On the other hand, the scholars invoking Gödel's Theorem appear, at least in some cases, to be referring not to the underlying rules, but to the understandability of the behavior of all physical systems, as when Hawking mentions arranging blocks into rectangles, turning the computation of prime numbers into a physical question. This definitional discrepancy may explain some of the disagreement among researchers.
=== Fundamental limits in accuracy ===
No physical theory to date is believed to be precisely accurate. Instead, physics has proceeded by a series of "successive approximations" allowing more and more accurate predictions over a wider and wider range of phenomena. Some physicists believe that it is therefore a mistake to confuse theoretical models with the true nature of reality, and hold that the series of approximations will never terminate in the "truth". Einstein himself expressed this view on occasion.
=== Definition of fundamental laws ===
There is a philosophical debate within the physics community as to whether a theory of everything deserves to be called the fundamental law of the universe. One view is the hard reductionist position that the theory of everything is the fundamental law and that all other theories that apply within the universe are a consequence of the theory of everything. Another view is that emergent laws, which govern the behavior of complex systems, should be seen as equally fundamental. Examples of emergent laws are the second law of thermodynamics and the theory of natural selection. The advocates of emergence argue that emergent laws, especially those describing complex or living systems, are independent of the low-level, microscopic laws. In this view, emergent laws are as fundamental as a theory of everything.
A well-known debate over this took place between Steven Weinberg and Philip Anderson.
==== Impossibility of calculation ====
Weinberg points out that calculating the precise motion of an actual projectile in the Earth's atmosphere is impossible. So how can we know we have an adequate theory for describing the motion of projectiles? Weinberg suggests that we know principles (Newton's laws of motion and gravitation) that work "well enough" for simple examples, like the motion of planets in empty space. These principles have worked so well on simple examples that we can be reasonably confident they will work for more complex examples. For example, although general relativity includes equations that do not have exact solutions, it is widely accepted as a valid theory because all of its equations with exact solutions have been experimentally verified. Likewise, a theory of everything must work for a wide range of simple examples in such a way that we can be reasonably confident it will work for every situation in physics. Difficulties in creating a theory of everything often begin to appear when combining quantum mechanics with the theory of general relativity, as the equations of quantum mechanics begin to falter when the force of gravity is applied to them.
== See also ==
== References ==
=== Bibliography ===
Pais, Abraham (1982) Subtle is the Lord: The Science and the Life of Albert Einstein (Oxford University Press, Oxford). Ch. 17, ISBN 0-19-853907-X
Weinberg, Steven (1993) Dreams of a Final Theory: The Search for the Fundamental Laws of Nature, Hutchinson Radius, London, ISBN 0-09-177395-4
Corey S. Powell (2015) Relativity versus quantum mechanics: the battle for the universe, The Guardian
== External links ==
The Elegant Universe, Nova episode about the search for the theory of everything and string theory.
Theory of Everything, freeview video by the Vega Science Trust, BBC and Open University.
The Theory of Everything: Are we getting closer, or is a final theory of matter and the universe impossible? Debate between John Ellis (physicist), Frank Close and Nicholas Maxwell.
Why The World Exists, a discussion between physicist Laura Mersini-Houghton, cosmologist George Francis Rayner Ellis and philosopher David Wallace about dark matter, parallel universes and explaining why these and the present Universe exist.
Theories of Everything, BBC Radio 4 discussion with Brian Greene, John Barrow & Val Gibson (In Our Time, March 25, 2004).
In quantum mechanics, the Pauli equation or Schrödinger–Pauli equation is the formulation of the Schrödinger equation for spin-1/2 particles, which takes into account the interaction of the particle's spin with an external electromagnetic field. It is the non-relativistic limit of the Dirac equation and can be used where particles are moving at speeds much less than the speed of light, so that relativistic effects can be neglected. It was formulated by Wolfgang Pauli in 1927. In its linearized form it is known as the Lévy-Leblond equation.
== Equation ==
For a particle of mass m and electric charge q, in an electromagnetic field described by the magnetic vector potential A and the electric scalar potential ϕ, the Pauli equation reads:
{\displaystyle \left[{\frac {1}{2m}}\left[{\boldsymbol {\sigma }}\cdot \left(\mathbf {\hat {p}} -q\mathbf {A} \right)\right]^{2}+q\phi \right]|\psi \rangle =i\hbar {\frac {\partial }{\partial t}}|\psi \rangle }
Here σ = (σ_x, σ_y, σ_z) are the Pauli operators collected into a vector for convenience, and p̂ = −iℏ∇ is the momentum operator in position representation. The state of the system, |ψ⟩ (written in Dirac notation), can be considered as a two-component spinor wavefunction, or a column vector (after choice of basis):
{\displaystyle |\psi \rangle =\psi _{+}|{\mathord {\uparrow }}\rangle +\psi _{-}|{\mathord {\downarrow }}\rangle \,{\stackrel {\cdot }{=}}\,{\begin{bmatrix}\psi _{+}\\\psi _{-}\end{bmatrix}}}.
The Hamiltonian operator is a 2 × 2 matrix because of the Pauli operators.
{\displaystyle {\hat {H}}={\frac {1}{2m}}\left[{\boldsymbol {\sigma }}\cdot (\mathbf {\hat {p}} -q\mathbf {A} )\right]^{2}+q\phi }
Substitution into the Schrödinger equation gives the Pauli equation. This Hamiltonian is similar to the classical Hamiltonian for a charged particle interacting with an electromagnetic field. See Lorentz force for details of this classical case. The kinetic energy term for a free particle in the absence of an electromagnetic field is just
p²/2m, where p is the kinetic momentum, while in the presence of an electromagnetic field it involves the minimal coupling Π = p − qA, where now Π is the kinetic momentum and p is the canonical momentum.
The Pauli operators can be removed from the kinetic energy term using the Pauli vector identity:
{\displaystyle ({\boldsymbol {\sigma }}\cdot \mathbf {a} )({\boldsymbol {\sigma }}\cdot \mathbf {b} )=\mathbf {a} \cdot \mathbf {b} +i{\boldsymbol {\sigma }}\cdot \left(\mathbf {a} \times \mathbf {b} \right)}
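This identity is easy to sanity-check numerically. The following sketch (an illustration, not part of the original article) builds the Pauli matrices with NumPy and verifies the identity for random real vectors a and b:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[0j + 1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def sigma_dot(v):
    """Return sigma . v for a numeric 3-vector v."""
    return v[0] * sx + v[1] * sy + v[2] * sz

rng = np.random.default_rng(0)
a = rng.standard_normal(3)
b = rng.standard_normal(3)

# (sigma.a)(sigma.b) = (a.b) I + i sigma.(a x b)
lhs = sigma_dot(a) @ sigma_dot(b)
rhs = np.dot(a, b) * I2 + 1j * sigma_dot(np.cross(a, b))
assert np.allclose(lhs, rhs)
```

For commuting (numeric) vectors the cross-product term is the only trace of the non-commuting Pauli algebra; the next section shows what happens when a and b are replaced by the operator p̂ − qA.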
Note that unlike a vector, the differential operator p̂ − qA = −iℏ∇ − qA has a non-zero cross product with itself. This can be seen by considering the cross product applied to a scalar function ψ:
{\displaystyle {\begin{aligned}\left[\left(\mathbf {\hat {p}} -q\mathbf {A} \right)\times \left(\mathbf {\hat {p}} -q\mathbf {A} \right)\right]\psi &=-q\left[\mathbf {\hat {p}} \times \left(\mathbf {A} \psi \right)+\mathbf {A} \times \left(\mathbf {\hat {p}} \psi \right)\right]\\&=iq\hbar \left[\nabla \times \left(\mathbf {A} \psi \right)+\mathbf {A} \times \left(\nabla \psi \right)\right]\\&=iq\hbar \left[\psi \left(\nabla \times \mathbf {A} \right)-\mathbf {A} \times \left(\nabla \psi \right)+\mathbf {A} \times \left(\nabla \psi \right)\right]=iq\hbar \mathbf {B} \psi \end{aligned}}}
where B = ∇ × A is the magnetic field.
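This operator computation can be reproduced symbolically. The sketch below (an illustration with assumed names, not from the article) uses SymPy with a constant field B = (0, 0, B0) in the symmetric gauge A = ½ B × r, and checks that the z-component of (p̂ − qA) × (p̂ − qA) acting on a generic ψ equals iqℏB_z ψ:

```python
import sympy as sp

x, y, z, q, hbar, B0 = sp.symbols('x y z q hbar B0', real=True)
psi = sp.Function('psi')(x, y, z)
coords = (x, y, z)

# Symmetric gauge for a constant field B = (0, 0, B0): A = (1/2) B x r
A = sp.Matrix([-B0*y/2, B0*x/2, 0])

def P(i, f):
    """i-th component of (p_hat - q A) acting on f, with p_hat = -i*hbar*grad."""
    return -sp.I*hbar*sp.diff(f, coords[i]) - q*A[i]*f

# z-component of the operator cross product acting on psi:
# [(p_hat - qA) x (p_hat - qA)]_z psi = P_x(P_y psi) - P_y(P_x psi)
cross_z = sp.expand(P(0, P(1, psi)) - P(1, P(0, psi)))

# The pure-derivative pieces cancel; only i q hbar B_z psi survives.
assert sp.simplify(cross_z - sp.I*q*hbar*B0*psi) == 0
```

The cancellation of the ∇ × (∇ψ)-type terms in the symbolic result mirrors the third line of the derivation above.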
For the full Pauli equation, one then obtains
{\displaystyle \left[{\frac {1}{2m}}\left[\left(\mathbf {\hat {p}} -q\mathbf {A} \right)^{2}-q\hbar \,{\boldsymbol {\sigma }}\cdot \mathbf {B} \right]+q\phi \right]|\psi \rangle =i\hbar {\frac {\partial }{\partial t}}|\psi \rangle }
for which only a few analytic results are known, e.g., in the context of Landau quantization with homogeneous magnetic fields or for an idealized, Coulomb-like, inhomogeneous magnetic field.
=== Weak magnetic fields ===
For the case where the magnetic field is constant and homogeneous, one may expand (p̂ − qA)² using the symmetric gauge Â = ½ B × r̂, where r̂ is the position operator and Â is now an operator. We obtain
{\displaystyle (\mathbf {\hat {p}} -q\mathbf {\hat {A}} )^{2}=|\mathbf {\hat {p}} |^{2}-q(\mathbf {\hat {r}} \times \mathbf {\hat {p}} )\cdot \mathbf {B} +{\frac {1}{4}}q^{2}\left(|\mathbf {B} |^{2}|\mathbf {\hat {r}} |^{2}-|\mathbf {B} \cdot \mathbf {\hat {r}} |^{2}\right)\approx \mathbf {\hat {p}} ^{2}-q\mathbf {\hat {L}} \cdot \mathbf {B} \,,}
where L̂ is the particle angular momentum operator and we neglected terms quadratic in the magnetic field B². Therefore, we obtain
{\displaystyle \left[{\frac {1}{2m}}\left(|\mathbf {\hat {p}} |^{2}-q\left(\mathbf {\hat {L}} +2\mathbf {\hat {S}} \right)\cdot \mathbf {B} \right)+q\phi \right]|\psi \rangle =i\hbar {\frac {\partial }{\partial t}}|\psi \rangle }
where S = ℏσ/2 is the spin of the particle. The factor of 2 in front of the spin is known as the Dirac g-factor. The term in B is of the form −μ·B, which is the usual interaction between a magnetic moment μ and a magnetic field, as in the Zeeman effect.
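For ordinary commuting vectors, the symmetric-gauge expansion used above reduces to a classical vector identity, which can be checked numerically. A quick NumPy sketch (illustrative, not part of the original derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
p, r, B = rng.standard_normal((3, 3))   # random "momentum", "position", field
q = 0.7                                  # arbitrary charge

A = 0.5 * np.cross(B, r)                 # symmetric gauge: A = (1/2) B x r
lhs = np.dot(p - q * A, p - q * A)
rhs = (np.dot(p, p)
       - q * np.dot(np.cross(r, p), B)   # the -q (r x p) . B term
       + 0.25 * q**2 * (np.dot(B, B) * np.dot(r, r) - np.dot(B, r)**2))
assert np.isclose(lhs, rhs)
```

The check relies on the scalar triple product p·(B×r) = B·(r×p) and the Lagrange identity |B×r|² = |B|²|r|² − (B·r)²; for the quantum operators the same rearrangement goes through because r̂ and p̂ components entering each product commute.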
For an electron of charge −e in an isotropic constant magnetic field, one can further reduce the equation using the total angular momentum J = L + S and the Wigner–Eckart theorem. Thus we find
{\displaystyle \left[{\frac {|\mathbf {p} |^{2}}{2m}}+\mu _{\rm {B}}g_{J}m_{j}|\mathbf {B} |-e\phi \right]|\psi \rangle =i\hbar {\frac {\partial }{\partial t}}|\psi \rangle }
where μ_B = eℏ/2m is the Bohr magneton and m_j is the magnetic quantum number related to J. The term g_J is known as the Landé g-factor, and is given here by
{\displaystyle g_{J}={\frac {3}{2}}+{\frac {{\frac {3}{4}}-\ell (\ell +1)}{2j(j+1)}},}
where ℓ is the orbital quantum number related to L² and j is the total angular momentum quantum number related to J².
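As a worked example, the Landé formula can be evaluated exactly with rational arithmetic (the helper name below is hypothetical); for an s-state (ℓ = 0, j = 1/2) it recovers the pure-spin Dirac value g = 2:

```python
from fractions import Fraction

def lande_g(l, j):
    """Landé g-factor for spin-1/2: g_J = 3/2 + (3/4 - l(l+1)) / (2 j (j+1))."""
    l, j = Fraction(l), Fraction(j)
    return Fraction(3, 2) + (Fraction(3, 4) - l * (l + 1)) / (2 * j * (j + 1))

print(lande_g(0, Fraction(1, 2)))   # s_1/2 state: 2
print(lande_g(1, Fraction(1, 2)))   # p_1/2 state: 2/3
print(lande_g(1, Fraction(3, 2)))   # p_3/2 state: 4/3
```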
== From Dirac equation ==
The Pauli equation can be inferred from the non-relativistic limit of the Dirac equation, which is the relativistic quantum equation of motion for spin-1/2 particles.
=== Derivation ===
The Dirac equation can be written as:
{\displaystyle i\hbar \,\partial _{t}{\begin{pmatrix}\psi _{1}\\\psi _{2}\end{pmatrix}}=c\,{\begin{pmatrix}{\boldsymbol {\sigma }}\cdot {\boldsymbol {\Pi }}\,\psi _{2}\\{\boldsymbol {\sigma }}\cdot {\boldsymbol {\Pi }}\,\psi _{1}\end{pmatrix}}+q\,\phi \,{\begin{pmatrix}\psi _{1}\\\psi _{2}\end{pmatrix}}+mc^{2}\,{\begin{pmatrix}\psi _{1}\\-\psi _{2}\end{pmatrix}},}
where ∂_t = ∂/∂t and ψ₁, ψ₂ are two-component spinors, together forming a bispinor.
Using the following ansatz:
{\displaystyle {\begin{pmatrix}\psi _{1}\\\psi _{2}\end{pmatrix}}=e^{-i{\tfrac {mc^{2}t}{\hbar }}}{\begin{pmatrix}\psi \\\chi \end{pmatrix}},}
with two new spinors ψ, χ, the equation becomes
{\displaystyle i\hbar \partial _{t}{\begin{pmatrix}\psi \\\chi \end{pmatrix}}=c\,{\begin{pmatrix}{\boldsymbol {\sigma }}\cdot {\boldsymbol {\Pi }}\,\chi \\{\boldsymbol {\sigma }}\cdot {\boldsymbol {\Pi }}\,\psi \end{pmatrix}}+q\,\phi \,{\begin{pmatrix}\psi \\\chi \end{pmatrix}}+{\begin{pmatrix}0\\-2\,mc^{2}\,\chi \end{pmatrix}}.}
In the non-relativistic limit, ∂_t χ and the kinetic and electrostatic energies are small with respect to the rest energy mc², leading to the Lévy-Leblond equation. Thus
{\displaystyle \chi \approx {\frac {{\boldsymbol {\sigma }}\cdot {\boldsymbol {\Pi }}\,\psi }{2\,mc}}\,.}
Inserted into the upper component of the Dirac equation, we find the Pauli equation (general form):
{\displaystyle i\hbar \,\partial _{t}\,\psi =\left[{\frac {({\boldsymbol {\sigma }}\cdot {\boldsymbol {\Pi }})^{2}}{2\,m}}+q\,\phi \right]\psi .}
=== From a Foldy–Wouthuysen transformation ===
The rigorous derivation of the Pauli equation follows from the Dirac equation in an external field by performing a Foldy–Wouthuysen transformation, keeping terms up to order O(1/mc). Similarly, higher-order corrections to the Pauli equation can be determined, giving rise to spin–orbit and Darwin interaction terms, when expanding up to order O(1/(mc)²) instead.
== Pauli coupling ==
Pauli's equation is derived by requiring minimal coupling, which provides a g-factor g=2. Most elementary particles have anomalous g-factors, different from 2. In the domain of relativistic quantum field theory, one defines a non-minimal coupling, sometimes called Pauli coupling, in order to add an anomalous factor
{\displaystyle \gamma ^{\mu }p_{\mu }\to \gamma ^{\mu }p_{\mu }-q\gamma ^{\mu }A_{\mu }+a\sigma _{\mu \nu }F^{\mu \nu }}
where p_μ is the four-momentum operator, A_μ is the electromagnetic four-potential, a is proportional to the anomalous magnetic dipole moment, F^{μν} = ∂^μ A^ν − ∂^ν A^μ is the electromagnetic tensor, and σ_{μν} = (i/2)[γ_μ, γ_ν] are the Lorentzian spin matrices, the commutator of the gamma matrices γ^μ. In the context of non-relativistic quantum mechanics, instead of working with the Schrödinger equation, Pauli coupling is equivalent to using the Pauli equation (or postulating Zeeman energy) for an arbitrary g-factor.
== See also ==
Semiclassical physics
Atomic, molecular, and optical physics
Group contraction
Gordon decomposition
== Footnotes ==
== References ==
=== Books ===
Schwabl, Franz (2004). Quantenmechanik I. Springer. ISBN 978-3540431060.
Schwabl, Franz (2005). Quantenmechanik für Fortgeschrittene. Springer. ISBN 978-3540259046.
Claude Cohen-Tannoudji; Bernard Diu; Frank Laloe (2006). Quantum Mechanics 2. Wiley. ISBN 978-0471569527.
In physics, the kinetic energy of an object is the form of energy that it possesses due to its motion.
In classical mechanics, the kinetic energy of a non-rotating object of mass m traveling at a speed v is
½mv².
The kinetic energy of an object is equal to the work, or force (F) in the direction of motion times its displacement (s), needed to accelerate the object from rest to its given speed. The same amount of work is done by the object when decelerating from its current speed to a state of rest.
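This work–energy relationship is easy to verify numerically. The following sketch (with arbitrary illustrative values) accelerates a mass from rest with a constant force and compares the work done with ½mv²:

```python
import math

# Constant force F accelerates mass m from rest for time t.
m, F, t = 2.0, 5.0, 3.0        # kg, N, s (illustrative values)
a = F / m                      # acceleration
v = a * t                      # final speed
s = 0.5 * a * t**2             # displacement covered while accelerating

work = F * s                   # work done by the force along the motion
kinetic = 0.5 * m * v**2       # resulting kinetic energy
assert math.isclose(work, kinetic)
```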
The SI unit of energy is the joule, while the English unit of energy is the foot-pound.
In relativistic mechanics, ½mv² is a good approximation of kinetic energy only when v is much less than the speed of light.
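The quality of the classical approximation can be made concrete by comparing ½mv² with the relativistic kinetic energy (γ − 1)mc². A short illustrative script:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def ke_classical(m, v):
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    return (gamma - 1.0) * m * c**2

m = 1.0  # kg
for frac in (0.01, 0.1, 0.5, 0.9):
    v = frac * c
    ratio = ke_relativistic(m, v) / ke_classical(m, v)
    print(f"v = {frac:>4}c  relativistic/classical = {ratio:.4f}")
```

At one percent of the speed of light the two expressions agree to better than a part in ten thousand, while at 0.9c the classical formula underestimates the kinetic energy severalfold.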
== History and etymology ==
The adjective kinetic has its roots in the Greek word κίνησις kinesis, meaning "motion". The dichotomy between kinetic energy and potential energy can be traced back to Aristotle's concepts of actuality and potentiality.
The principle of classical mechanics that E ∝ mv² is conserved was first developed by Gottfried Leibniz and Johann Bernoulli, who described kinetic energy as the living force or vis viva. Willem 's Gravesande of the Netherlands provided experimental evidence of this relationship in 1722. By dropping weights from different heights into a block of clay, Gravesande determined that their penetration depth was proportional to the square of their impact speed. Émilie du Châtelet recognized the implications of the experiment and published an explanation.
The terms kinetic energy and work in their present scientific meanings date back to the mid-19th century. Early understandings of these ideas can be attributed to Thomas Young, who in his 1802 lecture to the Royal Society, was the first to use the term energy to refer to kinetic energy in its modern sense, instead of vis viva. Gaspard-Gustave Coriolis published in 1829 the paper titled Du Calcul de l'Effet des Machines outlining the mathematics of kinetic energy. William Thomson, later Lord Kelvin, is given the credit for coining the term "kinetic energy" c. 1849–1851. William Rankine, who had introduced the term "potential energy" in 1853, and the phrase "actual energy" to complement it, later cites William Thomson and Peter Tait as substituting the word "kinetic" for "actual".
== Overview ==
Energy occurs in many forms, including chemical energy, thermal energy, electromagnetic radiation, gravitational energy, electric energy, elastic energy, nuclear energy, and rest energy. These can be categorized in two main classes: potential energy and kinetic energy. Kinetic energy is the movement energy of an object. Kinetic energy can be transferred between objects and transformed into other kinds of energy.
Kinetic energy may be best understood by examples that demonstrate how it is transformed to and from other forms of energy. For example, a cyclist transfers chemical energy provided by food to the bicycle and cyclist's store of kinetic energy as they increase their speed. On a level surface, this speed can be maintained without further work, except to overcome air resistance and friction. The chemical energy has been converted into kinetic energy, the energy of motion, but the process is not completely efficient and produces thermal energy within the cyclist.
The kinetic energy in the moving cyclist and the bicycle can be converted to other forms. For example, the cyclist could encounter a hill just high enough to coast up, so that the bicycle comes to a complete halt at the top. The kinetic energy has now largely been converted to gravitational potential energy that can be released by freewheeling down the other side of the hill. Since the bicycle lost some of its energy to friction, it never regains all of its speed without additional pedaling. The energy is not destroyed; it has only been converted to another form by friction. Alternatively, the cyclist could connect a dynamo to one of the wheels and generate some electrical energy on the descent. The bicycle would be traveling slower at the bottom of the hill than without the generator because some of the energy has been diverted into electrical energy. Another possibility would be for the cyclist to apply the brakes, in which case the kinetic energy would be dissipated through friction as heat.
Like any physical quantity that is a function of velocity, the kinetic energy of an object depends on the relationship between the object and the observer's frame of reference. Thus, the kinetic energy of an object is not invariant.
Spacecraft use chemical energy to launch and gain considerable kinetic energy to reach orbital velocity. In an entirely circular orbit, this kinetic energy remains constant because there is almost no friction in near-earth space. However, it becomes apparent at re-entry when some of the kinetic energy is converted to heat. If the orbit is elliptical or hyperbolic, then throughout the orbit kinetic and potential energy are exchanged; kinetic energy is greatest and potential energy lowest at closest approach to the earth or other massive body, while potential energy is greatest and kinetic energy the lowest at maximum distance. Disregarding loss or gain however, the sum of the kinetic and potential energy remains constant.
Kinetic energy can be passed from one object to another. In the game of billiards, the player imposes kinetic energy on the cue ball by striking it with the cue stick. If the cue ball collides with another ball, it slows down dramatically, and the ball it hit accelerates as the kinetic energy is passed on to it. Collisions in billiards are effectively elastic collisions, in which kinetic energy is preserved. In inelastic collisions, kinetic energy is dissipated in various forms of energy, such as heat, sound and binding energy (breaking bound structures).
Flywheels have been developed as a method of energy storage. This illustrates that kinetic energy is also stored in rotational motion.
Several mathematical descriptions of kinetic energy exist that describe it in the appropriate physical situation. For objects and processes in common human experience, the formula ½mv² given by classical mechanics is suitable. However, if the speed of the object is comparable to the speed of light, relativistic effects become significant and the relativistic formula is used. If the object is on the atomic or sub-atomic scale, quantum mechanical effects are significant, and a quantum mechanical model must be employed.
== Kinetic energy for non-relativistic velocity ==
Treatments of kinetic energy depend upon the relative velocity of objects compared to the fixed speed of light. Speeds experienced directly by humans are non-relativistic; higher speeds require the theory of relativity.
=== Kinetic energy of rigid bodies ===
In classical mechanics, the kinetic energy of a point object (an object so small that its mass can be assumed to exist at one point), or a non-rotating rigid body depends on the mass of the body as well as its speed. The kinetic energy is equal to half the product of the mass and the square of the speed. In formula form:
{\displaystyle E_{\text{k}}={\frac {1}{2}}mv^{2}}
where {\displaystyle m} is the mass and {\displaystyle v} is the speed (magnitude of the velocity) of the body. In SI units, mass is measured in kilograms, speed in metres per second, and the resulting kinetic energy is in joules.
For example, one would calculate the kinetic energy of an 80 kg mass (about 180 lbs) traveling at 18 metres per second (about 40 mph, or 65 km/h) as
{\displaystyle E_{\text{k}}={\frac {1}{2}}\cdot 80\,{\text{kg}}\cdot \left(18\,{\text{m/s}}\right)^{2}=12,960\,{\text{J}}=12.96\,{\text{kJ}}}
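As a quick numerical sketch (plain Python, no libraries; the function name is ours, not from any standard source), the worked example can be reproduced directly from the formula:

```python
# Classical kinetic energy E_k = (1/2) m v^2.
# Reproduces the worked example in the text: 80 kg at 18 m/s.
def kinetic_energy(mass_kg: float, speed_m_s: float) -> float:
    """Return the classical (non-relativistic) kinetic energy in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

print(kinetic_energy(80, 18))  # 12960.0 J = 12.96 kJ
```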
When a person throws a ball, the person does work on it to give it speed as it leaves the hand. The moving ball can then hit something and push it, doing work on what it hits. The kinetic energy of a moving object is equal to the work required to bring it from rest to that speed, or the work the object can do while being brought to rest: net force × displacement = kinetic energy, i.e.,
{\displaystyle Fs={\frac {1}{2}}mv^{2}}
Since the kinetic energy increases with the square of the speed, an object doubling its speed has four times as much kinetic energy. For example, a car traveling twice as fast as another requires four times as much distance to stop, assuming a constant braking force. As a consequence of this quadrupling, it takes four times the work to double the speed.
The kinetic energy of an object is related to its momentum by the equation:
{\displaystyle E_{\text{k}}={\frac {p^{2}}{2m}}}
where {\displaystyle p} is the momentum and {\displaystyle m} is the mass of the body.
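As a small numerical sketch (plain Python; variable names are ours), the momentum form agrees with the ½mv² form, since p = mv for a non-relativistic body. The values reuse the 80 kg example above:

```python
# E_k = p^2 / (2 m) agrees with (1/2) m v^2, since p = m v
# for a non-relativistic body.  Values from the 80 kg example above.
m, v = 80.0, 18.0              # kg, m/s
p = m * v                      # momentum, kg·m/s
ek_from_momentum = p ** 2 / (2 * m)
ek_from_speed = 0.5 * m * v ** 2
print(ek_from_momentum, ek_from_speed)  # both 12960.0 J
```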
The translational kinetic energy, that is, the kinetic energy associated with rectilinear motion, of a rigid body with constant mass {\displaystyle m} whose center of mass is moving in a straight line with speed {\displaystyle v} is, as seen above, equal to
{\displaystyle E_{\text{t}}={\frac {1}{2}}mv^{2}}
where {\displaystyle m} is the mass of the body and {\displaystyle v} is the speed of its center of mass.
The kinetic energy of any entity depends on the reference frame in which it is measured. However, the total energy of an isolated system, i.e. one in which energy can neither enter nor leave, does not change over time in the reference frame in which it is measured. Thus, the chemical energy converted to kinetic energy by a rocket engine is divided differently between the rocket ship and its exhaust stream depending upon the chosen reference frame. This is called the Oberth effect. But the total energy of the system, including kinetic energy, fuel chemical energy, heat, etc., is conserved over time, regardless of the choice of reference frame. Different observers moving with different reference frames would however disagree on the value of this conserved energy.
The kinetic energy of such systems depends on the choice of reference frame: the reference frame that gives the minimum value of that energy is the center of momentum frame, i.e. the reference frame in which the total momentum of the system is zero. This minimum kinetic energy contributes to the invariant mass of the system as a whole.
==== Derivation ====
===== Without vector calculus =====
The work W done by a force F on an object over a distance s parallel to F equals
{\displaystyle W=F\cdot s.}
Using Newton's second law
{\displaystyle F=ma}
with m the mass and a the acceleration of the object and
{\displaystyle s={\frac {at^{2}}{2}}}
the distance traveled by the accelerated object in time t, we find with
{\displaystyle v=at}
for the velocity v of the object
{\displaystyle W=ma{\frac {at^{2}}{2}}={\frac {m(at)^{2}}{2}}={\frac {mv^{2}}{2}}.}
===== With vector calculus =====
The work done in accelerating a particle with mass m during the infinitesimal time interval dt is given by the dot product of force F and the infinitesimal displacement dx
{\displaystyle \mathbf {F} \cdot d\mathbf {x} =\mathbf {F} \cdot \mathbf {v} dt={\frac {d\mathbf {p} }{dt}}\cdot \mathbf {v} dt=\mathbf {v} \cdot d\mathbf {p} =\mathbf {v} \cdot d(m\mathbf {v} )\,,}
where we have assumed the relationship p = m v and the validity of Newton's second law. (However, also see the special relativistic derivation below.)
Applying the product rule we see that:
{\displaystyle d(\mathbf {v} \cdot \mathbf {v} )=(d\mathbf {v} )\cdot \mathbf {v} +\mathbf {v} \cdot (d\mathbf {v} )=2(\mathbf {v} \cdot d\mathbf {v} ).}
Therefore (assuming constant mass so that dm = 0), we have
{\displaystyle \mathbf {v} \cdot d(m\mathbf {v} )={\frac {m}{2}}d(\mathbf {v} \cdot \mathbf {v} )={\frac {m}{2}}dv^{2}=d\left({\frac {mv^{2}}{2}}\right).}
Since this is a total differential (that is, it only depends on the final state, not how the particle got there), we can integrate it and call the result kinetic energy:
{\displaystyle E_{\text{k}}=\int _{v_{1}}^{v_{2}}\mathbf {p} \cdot d\mathbf {v} =\int _{v_{1}}^{v_{2}}m\mathbf {v} \cdot d\mathbf {v} ={mv^{2} \over 2}{\bigg \vert }_{v_{1}}^{v_{2}}={1 \over 2}m(v_{2}^{2}-v_{1}^{2}).}
This equation states that the kinetic energy (Ek) is equal to the integral of the dot product of the momentum (p) of a body and the infinitesimal change of the velocity (v) of the body. It is assumed that the body starts with no kinetic energy when it is at rest (motionless).
=== Rotating bodies ===
If a rigid body Q is rotating about any line through the center of mass, then it has rotational kinetic energy ({\displaystyle E_{\text{r}}\,}), which is simply the sum of the kinetic energies of its moving parts, and is thus given by:
{\displaystyle E_{\text{r}}=\int _{Q}{\frac {v^{2}dm}{2}}=\int _{Q}{\frac {(r\omega )^{2}dm}{2}}={\frac {\omega ^{2}}{2}}\int _{Q}{r^{2}}dm={\frac {\omega ^{2}}{2}}I={\frac {1}{2}}I\omega ^{2}}
where:
ω is the body's angular velocity
r is the distance of any mass dm from that line
{\displaystyle I} is the body's moment of inertia, equal to {\textstyle \int _{Q}{r^{2}}dm}.
(In this equation the moment of inertia must be taken about an axis through the center of mass and the rotation measured by ω must be around that axis; more general equations exist for systems where the object is subject to wobble due to its eccentric shape).
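As an illustrative sketch of E_r = ½Iω² (the flywheel dimensions below are invented for the example, and the disk formula I = ½mr² is the standard result for a uniform solid disk):

```python
import math

# Rotational kinetic energy E_r = (1/2) I ω² for a uniform solid disk,
# for which I = (1/2) m r².  The flywheel parameters are invented
# purely for illustration.
m, r = 100.0, 0.5                  # mass (kg) and radius (m)
I = 0.5 * m * r ** 2               # moment of inertia, kg·m²
omega = 2 * math.pi * 50           # 50 rev/s expressed in rad/s
E_r = 0.5 * I * omega ** 2
print(round(E_r))  # 616850 J, about 0.6 MJ
```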
=== Kinetic energy of systems ===
A system of bodies may have internal kinetic energy due to the relative motion of the bodies in the system. For example, in the Solar System the planets and planetoids are orbiting the Sun. In a tank of gas, the molecules are moving in all directions. The kinetic energy of the system is the sum of the kinetic energies of the bodies it contains.
A macroscopic body that is stationary (i.e. a reference frame has been chosen to correspond to the body's center of momentum) may have various kinds of internal energy at the molecular or atomic level, which may be regarded as kinetic energy, due to molecular translation, rotation, and vibration, electron translation and spin, and nuclear spin. These all contribute to the body's mass, as provided by the special theory of relativity. When discussing movements of a macroscopic body, the kinetic energy referred to is usually that of the macroscopic movement only. However, all internal energies of all types contribute to a body's mass, inertia, and total energy.
=== Fluid dynamics ===
In fluid dynamics, the kinetic energy per unit volume at each point in an incompressible fluid flow field is called the dynamic pressure at that point.
{\displaystyle E_{\text{k}}={\frac {1}{2}}mv^{2}}
Dividing by V, the unit of volume:
{\displaystyle {\begin{aligned}{\frac {E_{\text{k}}}{V}}&={\frac {1}{2}}{\frac {m}{V}}v^{2}\\q&={\frac {1}{2}}\rho v^{2}\end{aligned}}}
where {\displaystyle q} is the dynamic pressure, and ρ is the density of the incompressible fluid.
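A minimal numerical sketch of q = ½ρv² (the air density and airspeed below are illustrative round numbers, not taken from the text):

```python
# Dynamic pressure q = (1/2) ρ v².  Sea-level air density and the
# airspeed below are illustrative round numbers, not from the text.
rho = 1.225      # kg/m³, approximate sea-level air density
v = 30.0         # m/s
q = 0.5 * rho * v ** 2
print(q)  # ≈ 551.25 Pa
```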
=== Frame of reference ===
The speed, and thus the kinetic energy of a single object is frame-dependent (relative): it can take any non-negative value, by choosing a suitable inertial frame of reference. For example, a bullet passing an observer has kinetic energy in the reference frame of this observer. The same bullet is stationary to an observer moving with the same velocity as the bullet, and so has zero kinetic energy. By contrast, the total kinetic energy of a system of objects cannot be reduced to zero by a suitable choice of the inertial reference frame, unless all the objects have the same velocity. In any other case, the total kinetic energy has a non-zero minimum, as no inertial reference frame can be chosen in which all the objects are stationary. This minimum kinetic energy contributes to the system's invariant mass, which is independent of the reference frame.
The total kinetic energy of a system depends on the inertial frame of reference: it is the sum of the total kinetic energy in a center of momentum frame and the kinetic energy the total mass would have if it were concentrated in the center of mass.
This may be simply shown: let {\displaystyle \textstyle \mathbf {V} } be the relative velocity of the center of mass frame i in the frame k. Since
{\displaystyle v^{2}=\left(v_{i}+V\right)^{2}=\left(\mathbf {v} _{i}+\mathbf {V} \right)\cdot \left(\mathbf {v} _{i}+\mathbf {V} \right)=\mathbf {v} _{i}\cdot \mathbf {v} _{i}+2\mathbf {v} _{i}\cdot \mathbf {V} +\mathbf {V} \cdot \mathbf {V} =v_{i}^{2}+2\mathbf {v} _{i}\cdot \mathbf {V} +V^{2},}
Then,
{\displaystyle E_{\text{k}}=\int {\frac {v^{2}}{2}}dm=\int {\frac {v_{i}^{2}}{2}}dm+\mathbf {V} \cdot \int \mathbf {v} _{i}dm+{\frac {V^{2}}{2}}\int dm.}
However, let {\textstyle \int {\frac {v_{i}^{2}}{2}}dm=E_{i}} be the kinetic energy in the center of mass frame; {\textstyle \int \mathbf {v} _{i}dm} is simply the total momentum, which is by definition zero in the center of mass frame; and let the total mass be {\textstyle \int dm=M}. Substituting, we get:
{\displaystyle E_{\text{k}}=E_{i}+{\frac {MV^{2}}{2}}.}
Thus the kinetic energy of a system is lowest in center-of-momentum reference frames, i.e., frames of reference in which the center of mass is stationary (either the center of mass frame or any other center of momentum frame). In any different frame of reference, there is additional kinetic energy corresponding to the total mass moving at the speed of the center of mass. The kinetic energy of the system in the center of momentum frame is a quantity that is invariant (all observers see it to be the same).
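The decomposition E_k = E_i + MV²/2 can be checked numerically; a minimal sketch for a two-body system (the masses and 1-D velocities are arbitrary illustrative values):

```python
# Numerical check of E_k = E_i + M V²/2 for a two-body system
# (1-D velocities; the masses and velocities are arbitrary).
masses = [2.0, 3.0]          # kg
vels = [4.0, -1.0]           # m/s in some frame k
M = sum(masses)
V = sum(m * v for m, v in zip(masses, vels)) / M     # centre-of-mass velocity
E_total = sum(0.5 * m * v ** 2 for m, v in zip(masses, vels))
E_cm = sum(0.5 * m * (v - V) ** 2 for m, v in zip(masses, vels))
print(E_total, E_cm + 0.5 * M * V ** 2)  # 17.5 17.5
```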
=== Rotation in systems ===
It sometimes is convenient to split the total kinetic energy of a body into the sum of the body's center-of-mass translational kinetic energy and the energy of rotation around the center of mass (rotational energy):
{\displaystyle E_{\text{k}}=E_{\text{t}}+E_{\text{r}}}
where:
Ek is the total kinetic energy
Et is the translational kinetic energy
Er is the rotational energy or angular kinetic energy in the rest frame
Thus the kinetic energy of a tennis ball in flight is the kinetic energy due to its rotation, plus the kinetic energy due to its translation.
== Relativistic kinetic energy ==
If a body's speed is a significant fraction of the speed of light, it is necessary to use relativistic mechanics to calculate its kinetic energy. In relativity, the total energy is given by the energy-momentum relation:
{\displaystyle E^{2}=(p{\textrm {c}})^{2}+\left(m_{0}{\textrm {c}}^{2}\right)^{2}\,}
Here we use the relativistic expression for linear momentum: {\displaystyle p=m\gamma v}, where {\textstyle \gamma =1/{\sqrt {1-v^{2}/c^{2}}}}, with {\displaystyle m} being the object's (rest) mass, {\displaystyle v} its speed, and c the speed of light in vacuum.
Then kinetic energy is the total relativistic energy minus the rest energy:
{\displaystyle E_{K}=E-m_{0}c^{2}={\sqrt {(p{\textrm {c}})^{2}+\left(m_{0}{\textrm {c}}^{2}\right)^{2}}}-m_{0}c^{2}}
At low speeds, the square root can be expanded and the rest energy drops out, giving the Newtonian kinetic energy.
=== Derivation ===
Start with the expression for linear momentum {\displaystyle \mathbf {p} =m\gamma \mathbf {v} }, where {\textstyle \gamma =1/{\sqrt {1-v^{2}/c^{2}}}}. Integrating by parts yields:
{\displaystyle E_{\text{k}}=\int \mathbf {v} \cdot d\mathbf {p} =\int \mathbf {v} \cdot d(m\gamma \mathbf {v} )=m\gamma \mathbf {v} \cdot \mathbf {v} -\int m\gamma \mathbf {v} \cdot d\mathbf {v} =m\gamma v^{2}-{\frac {m}{2}}\int \gamma d\left(v^{2}\right)}
Since {\displaystyle \gamma =\left(1-v^{2}/c^{2}\right)^{-{\frac {1}{2}}}},
{\displaystyle {\begin{aligned}E_{\text{k}}&=m\gamma v^{2}-{\frac {-mc^{2}}{2}}\int \gamma d\left(1-{\frac {v^{2}}{c^{2}}}\right)\\&=m\gamma v^{2}+mc^{2}\left(1-{\frac {v^{2}}{c^{2}}}\right)^{\frac {1}{2}}-E_{0}\end{aligned}}}
where {\displaystyle E_{0}} is a constant of integration for the indefinite integral. Rearranging to combine the factor {\displaystyle m\gamma } gives
{\displaystyle {\begin{aligned}E_{\text{k}}&=m\gamma \left(v^{2}+c^{2}\left(1-{\frac {v^{2}}{c^{2}}}\right)\right)-E_{0}\\&=m\gamma \left(v^{2}+c^{2}-v^{2}\right)-E_{0}\\&=m\gamma c^{2}-E_{0}\end{aligned}}}
{\displaystyle E_{0}} is found by observing that when {\displaystyle \mathbf {v} =0,\ \gamma =1} and {\displaystyle E_{\text{k}}=0}, the result is the "rest energy":
{\displaystyle E_{0}=mc^{2}}
Substituting this back results in the formula:
{\displaystyle E_{\text{k}}=m\gamma c^{2}-mc^{2}={\frac {mc^{2}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-mc^{2}=(\gamma -1)mc^{2}}
This formula shows that the work expended accelerating an object from rest approaches infinity as the velocity approaches the speed of light. Thus it is impossible to accelerate an object across this boundary.
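A minimal numerical sketch of the formula E_k = (γ − 1)mc², comparing it with the classical value for an illustrative 1 kg mass at half the speed of light (the mass and speed are our choices):

```python
import math

# Relativistic kinetic energy E_k = (γ − 1) m c² versus the classical
# (1/2) m v², for a 1 kg mass at half the speed of light.
c = 299_792_458.0                       # speed of light, m/s
m, v = 1.0, 0.5 * c
gamma = 1 / math.sqrt(1 - v ** 2 / c ** 2)
relativistic = (gamma - 1) * m * c ** 2
classical = 0.5 * m * v ** 2
print(relativistic / classical)  # ≈ 1.24: the classical value is too low
```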
=== Low speed limit ===
The mathematical by-product of this calculation is the mass–energy equivalence formula: mass and energy are essentially the same thing:
{\displaystyle E_{\text{rest}}=E_{0}=mc^{2}}
At a low speed (v ≪ c), the relativistic kinetic energy is approximated well by the classical kinetic energy. To see this, apply the binomial approximation or take the first two terms of the Taylor expansion in powers of {\displaystyle v^{2}} for the reciprocal square root:
{\displaystyle E_{\text{k}}\approx mc^{2}\left(1+{\frac {1}{2}}{\frac {v^{2}}{c^{2}}}\right)-mc^{2}={\frac {1}{2}}mv^{2}}
So, the total energy can be partitioned into the rest mass energy plus the non-relativistic kinetic energy at low speeds.
When objects move at a speed much slower than light (e.g. in everyday phenomena on Earth), the first two terms of the series predominate. The next term in the Taylor series approximation
{\displaystyle E_{\text{k}}\approx mc^{2}\left(1+{\frac {1}{2}}{\frac {v^{2}}{c^{2}}}+{\frac {3}{8}}{\frac {v^{4}}{c^{4}}}\right)-mc^{2}={\frac {1}{2}}mv^{2}+{\frac {3}{8}}m{\frac {v^{4}}{c^{2}}}}
is small for low speeds. For example, for a speed of 10 km/s (22,000 mph) the correction to the non-relativistic kinetic energy is 0.0417 J/kg (on a non-relativistic kinetic energy of 50 MJ/kg) and for a speed of 100 km/s it is 417 J/kg (on a non-relativistic kinetic energy of 5 GJ/kg).
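The quoted correction figures can be checked directly from the (3/8)mv⁴/c² term (plain Python; only the speed of light is assumed):

```python
# Check of the figures quoted above: the next Taylor term
# (3/8) m v⁴ / c² per unit mass, at 10 km/s and 100 km/s.
c = 299_792_458.0                      # speed of light, m/s
for v in (10e3, 100e3):
    newtonian = 0.5 * v ** 2               # J/kg
    correction = 3 / 8 * v ** 4 / c ** 2   # J/kg
    print(v, newtonian, correction)
# 10 km/s:  5.0e7 J/kg with a correction of ≈ 0.0417 J/kg
# 100 km/s: 5.0e9 J/kg with a correction of ≈ 417 J/kg
```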
The relativistic relation between kinetic energy and momentum is given by
{\displaystyle E_{\text{k}}={\sqrt {p^{2}c^{2}+m^{2}c^{4}}}-mc^{2}}
This can also be expanded as a Taylor series, the first term of which is the simple expression from Newtonian mechanics:
{\displaystyle E_{\text{k}}\approx {\frac {p^{2}}{2m}}-{\frac {p^{4}}{8m^{3}c^{2}}}.}
This suggests that the formulae for energy and momentum are not special and axiomatic, but concepts emerging from the equivalence of mass and energy and the principles of relativity.
=== General relativity ===
Using the convention that
{\displaystyle g_{\alpha \beta }\,u^{\alpha }\,u^{\beta }\,=\,-c^{2}}
where the four-velocity of a particle is {\displaystyle u^{\alpha }\,=\,{\frac {dx^{\alpha }}{d\tau }}} and {\displaystyle \tau } is the proper time of the particle, there is also an expression for the kinetic energy of the particle in general relativity.
If the particle has momentum
{\displaystyle p_{\beta }\,=\,m\,g_{\beta \alpha }\,u^{\alpha }}
as it passes by an observer with four-velocity uobs, then the expression for total energy of the particle as observed (measured in a local inertial frame) is
{\displaystyle E\,=\,-\,p_{\beta }\,u_{\text{obs}}^{\beta }}
and the kinetic energy can be expressed as the total energy minus the rest energy:
{\displaystyle E_{k}\,=\,-\,p_{\beta }\,u_{\text{obs}}^{\beta }\,-\,m\,c^{2}\,.}
Consider the case of a metric that is diagonal and spatially isotropic (gtt, gss, gss, gss). Since
{\displaystyle u^{\alpha }={\frac {dx^{\alpha }}{dt}}{\frac {dt}{d\tau }}=v^{\alpha }u^{t}}
where vα is the ordinary velocity measured w.r.t. the coordinate system, we get
{\displaystyle -c^{2}=g_{\alpha \beta }u^{\alpha }u^{\beta }=g_{tt}\left(u^{t}\right)^{2}+g_{ss}v^{2}\left(u^{t}\right)^{2}\,.}
Solving for ut gives
{\displaystyle u^{t}=c{\sqrt {\frac {-1}{g_{tt}+g_{ss}v^{2}}}}\,.}
Thus for a stationary observer (v = 0)
{\displaystyle u_{\text{obs}}^{t}=c{\sqrt {\frac {-1}{g_{tt}}}}}
and thus the kinetic energy takes the form
{\displaystyle E_{\text{k}}=-mg_{tt}u^{t}u_{\text{obs}}^{t}-mc^{2}=mc^{2}{\sqrt {\frac {g_{tt}}{g_{tt}+g_{ss}v^{2}}}}-mc^{2}\,.}
Factoring out the rest energy gives:
{\displaystyle E_{\text{k}}=mc^{2}\left({\sqrt {\frac {g_{tt}}{g_{tt}+g_{ss}v^{2}}}}-1\right)\,.}
This expression reduces to the special relativistic case for the flat-space metric where
{\displaystyle {\begin{aligned}g_{tt}&=-c^{2}\\g_{ss}&=1\,.\end{aligned}}}
In the Newtonian approximation to general relativity
{\displaystyle {\begin{aligned}g_{tt}&=-\left(c^{2}+2\Phi \right)\\g_{ss}&=1-{\frac {2\Phi }{c^{2}}}\end{aligned}}}
where Φ is the Newtonian gravitational potential. This means clocks run slower and measuring rods are shorter near massive bodies.
== Kinetic energy in quantum mechanics ==
In quantum mechanics, observables like kinetic energy are represented as operators. For one particle of mass m, the kinetic energy operator appears as a term in the Hamiltonian and is defined in terms of the more fundamental momentum operator {\displaystyle {\hat {p}}}. The kinetic energy operator in the non-relativistic case can be written as
{\displaystyle {\hat {T}}={\frac {{\hat {p}}^{2}}{2m}}.}
Notice that this can be obtained by replacing {\displaystyle p} by {\displaystyle {\hat {p}}} in the classical expression for kinetic energy in terms of momentum,
{\displaystyle E_{\text{k}}={\frac {p^{2}}{2m}}.}
In the Schrödinger picture, {\displaystyle {\hat {p}}} takes the form {\displaystyle -i\hbar \nabla }, where the derivative is taken with respect to position coordinates, and hence
{\displaystyle {\hat {T}}=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}.}
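As an illustrative numerical sketch (our own example, assuming NumPy is available), the position-space operator above can be applied to a discretized wavefunction. For a one-dimensional Gaussian ψ(x) ∝ exp(−x²/2) with ħ = m = 1, the exact expectation value of the kinetic energy operator is 1/4, which a simple central-difference estimate reproduces:

```python
import numpy as np

# Finite-difference sketch of ⟨T̂⟩ = ⟨ψ| −ħ²/(2m) d²/dx² |ψ⟩ in 1-D,
# with ħ = m = 1.  For ψ(x) ∝ exp(−x²/2) the exact value is 1/4.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
psi = np.exp(-x ** 2 / 2)
psi /= np.sqrt(np.sum(psi ** 2) * dx)            # normalise
# Second derivative by central differences (boundary wrap-around from
# np.roll is negligible because psi vanishes at the ends of the grid).
lap = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx ** 2
T = -0.5 * np.sum(psi * lap) * dx
print(T)  # ≈ 0.25
```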
The expectation value of the electron kinetic energy,
⟨
T
^
⟩
{\displaystyle \left\langle {\hat {T}}\right\rangle }
, for a system of N electrons described by the wavefunction
|
ψ
⟩
{\displaystyle \vert \psi \rangle }
is a sum of 1-electron operator expectation values:
{\displaystyle \left\langle {\hat {T}}\right\rangle =\left\langle \psi \left\vert \sum _{i=1}^{N}{\frac {-\hbar ^{2}}{2m_{\text{e}}}}\nabla _{i}^{2}\right\vert \psi \right\rangle =-{\frac {\hbar ^{2}}{2m_{\text{e}}}}\sum _{i=1}^{N}\left\langle \psi \left\vert \nabla _{i}^{2}\right\vert \psi \right\rangle }
where {\displaystyle m_{\text{e}}} is the mass of the electron, {\displaystyle \nabla _{i}^{2}} is the Laplacian operator acting upon the coordinates of the ith electron, and the summation runs over all electrons.
The density functional formalism of quantum mechanics requires knowledge of the electron density only; it formally does not require knowledge of the wavefunction. Given an electron density {\displaystyle \rho (\mathbf {r} )}, the exact N-electron kinetic energy functional is unknown; however, for the specific case of a one-electron system, the kinetic energy can be written as
{\displaystyle T[\rho ]={\frac {1}{8}}\int {\frac {\nabla \rho (\mathbf {r} )\cdot \nabla \rho (\mathbf {r} )}{\rho (\mathbf {r} )}}d^{3}r}
where {\displaystyle T[\rho ]} is known as the von Weizsäcker kinetic energy functional.
== See also ==
Escape velocity
Foot-pound
Joule
Kinetic energy penetrator
Kinetic energy per unit mass of projectiles
Kinetic projectile
Parallel axis theorem
Potential energy
Recoil
== Notes ==
== References ==
Physics Classroom (2000). "Kinetic Energy". Retrieved 2015-07-19.
School of Mathematics and Statistics, University of St Andrews (2000). "Biography of Gaspard-Gustave de Coriolis (1792–1843)". Retrieved 2006-03-03.
Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7.
Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 0-7167-0809-4.
Tipler, Paul; Llewellyn, Ralph (2002). Modern Physics (4th ed.). W. H. Freeman. ISBN 0-7167-4345-0.
== External links ==
Media related to Kinetic energy at Wikimedia Commons
Quantum physics is a branch of modern physics in which energy and matter are described at their most fundamental level, that of energy quanta, elementary particles, and quantum fields. Quantum physics encompasses any discipline concerned with systems that exhibit notable quantum-mechanical effects, where waves have properties of particles, and particles behave like waves. Applications of quantum mechanics include explaining phenomena found in nature as well as developing technologies that rely upon quantum effects, like integrated circuits and lasers.
Quantum mechanics is also critically important for understanding how individual atoms are joined by covalent bonds to form molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. Quantum mechanics can also provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others and the magnitudes of the energies involved.
Historically, the first applications of quantum mechanics to physical systems were the algebraic determination of the hydrogen spectrum by Wolfgang Pauli and the treatment of diatomic molecules by Lucy Mensing.
In many aspects modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA.
== Electronics ==
Many modern electronic devices are designed using quantum mechanics. Examples include lasers, electron microscopes, magnetic resonance imaging (MRI) devices and the components used in computing hardware. The study of semiconductors led to the invention of the diode and the transistor, which are indispensable parts of modern electronics systems, computer and telecommunications devices. Another application is for making laser diodes and light-emitting diodes, which are a high-efficiency source of light. The global positioning system (GPS) makes use of atomic clocks to measure precise time differences and therefore determine a user's location.
Many electronic devices operate using the effect of quantum tunneling. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells. Some negative differential resistance devices, such as resonant tunneling diodes, also utilize the quantum tunneling effect. Unlike in classical diodes, the current in a resonant tunneling diode is carried by resonant tunneling through two or more potential barriers. Its negative resistance behavior can only be understood with quantum mechanics: as the confined state moves close to the Fermi level, the tunnel current increases; as it moves away, the current decreases. Quantum mechanics is necessary to understand and design such electronic devices.
== Cryptography ==
Many scientists are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to more fully develop quantum cryptography, which will theoretically allow guaranteed secure transmission of information.
An inherent advantage yielded by quantum cryptography when compared to classical cryptography is the detection of passive eavesdropping. This is a natural result of the behavior of quantum bits; due to the observer effect, if a bit in a superposition state were to be observed, the superposition state would collapse into an eigenstate. Because the intended recipient was expecting to receive the bit in a superposition state, the intended recipient would know there was an attack, because the bit's state would no longer be in a superposition.
== Quantum computing ==
Another goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Instead of using classical bits, quantum computers use qubits, which can be in superpositions of states. Quantum programmers are able to manipulate the superposition of qubits in order to solve problems that classical computing cannot do effectively, such as searching unsorted databases or integer factorization. IBM claims that the advent of quantum computing may progress the fields of medicine, logistics, financial services, artificial intelligence and cloud security.
Another active research topic is quantum teleportation, which deals with techniques to transmit quantum information over arbitrary distances.
== Macroscale quantum effects ==
While quantum mechanics primarily applies to the smaller atomic regimes of matter and energy, some systems exhibit quantum mechanical effects on a large scale. Superfluidity, the frictionless flow of a liquid at temperatures near absolute zero, is one well-known example. So is the closely related phenomenon of superconductivity, the frictionless flow of an electron gas in a conducting material (an electric current) at sufficiently low temperatures. The fractional quantum Hall effect is a topological ordered state which corresponds to patterns of long-range quantum entanglement. States with different topological orders (or different patterns of long range entanglements) cannot change into each other without a phase transition.
== Other phenomena ==
Quantum theory also provides accurate descriptions for many previously unexplained phenomena, such as black-body radiation and the stability of the orbitals of electrons in atoms. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures. Recent work on photosynthesis has provided evidence that quantum correlations play an essential role in this fundamental process of plants and many other organisms. Even so, classical physics can often provide good approximations to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers. Since classical formulas are much simpler and easier to compute than quantum formulas, classical approximations are used and preferred when the system is large enough to render the effects of quantum mechanics insignificant.
== Notes ==
== References ==
In physics, relativistic quantum mechanics (RQM) is any Poincaré-covariant formulation of quantum mechanics (QM). This theory is applicable to massive particles propagating at all velocities up to those comparable to the speed of light c, and can accommodate massless particles. The theory has application in high-energy physics, particle physics and accelerator physics, as well as atomic physics, chemistry and condensed matter physics. Non-relativistic quantum mechanics refers to the mathematical formulation of quantum mechanics applied in the context of Galilean relativity, more specifically quantizing the equations of classical mechanics by replacing dynamical variables by operators. Relativistic quantum mechanics (RQM) is quantum mechanics applied with special relativity. Although the earlier formulations, like the Schrödinger picture and Heisenberg picture, were originally formulated in a non-relativistic background, a few of them (e.g. the Dirac or path-integral formalism) also work with special relativity.
Key features common to all RQMs include: the prediction of antimatter, spin magnetic moments of elementary spin-1/2 fermions, fine structure, and quantum dynamics of charged particles in electromagnetic fields. The key result is the Dirac equation, from which these predictions emerge automatically. By contrast, in non-relativistic quantum mechanics, terms have to be introduced artificially into the Hamiltonian operator to achieve agreement with experimental observations.
The most successful (and most widely used) RQM is relativistic quantum field theory (QFT), in which elementary particles are interpreted as field quanta. A unique consequence of QFT that has been tested against other RQMs is the failure of conservation of particle number, for example, in matter creation and annihilation.
Paul Dirac's work between 1927 and 1933 shaped the synthesis of special relativity and quantum mechanics. His work was instrumental, as he formulated the Dirac equation and also originated quantum electrodynamics, both of which were successful in combining the two theories.
In this article, the equations are written in familiar 3D vector calculus notation and use hats for operators (not necessarily in the literature), and where space and time components can be collected, tensor index notation is shown also (frequently used in the literature), in addition the Einstein summation convention is used. SI units are used here; Gaussian units and natural units are common alternatives. All equations are in the position representation; for the momentum representation the equations have to be Fourier-transformed – see position and momentum space.
== Combining special relativity and quantum mechanics ==
One approach is to modify the Schrödinger picture to be consistent with special relativity.
A postulate of quantum mechanics is that the time evolution of any quantum system is given by the Schrödinger equation:
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi ={\hat {H}}\psi }
using a suitable Hamiltonian operator Ĥ corresponding to the system. The solution is a complex-valued wavefunction ψ(r, t), a function of the 3D position vector r of the particle at time t, describing the behavior of the system.
Every particle has a non-negative spin quantum number s. The number 2s is an integer, odd for fermions and even for bosons. Each s has 2s + 1 z-projection quantum numbers; σ = s, s − 1, ... , −s + 1, −s. This is an additional discrete variable the wavefunction requires; ψ(r, t, σ).
Historically, in the early 1920s Pauli, Kronig, Uhlenbeck and Goudsmit were the first to propose the concept of spin. The inclusion of spin in the wavefunction incorporates the Pauli exclusion principle (1925) and the more general spin–statistics theorem (1939) due to Fierz, rederived by Pauli a year later. This is the explanation for a diverse range of subatomic particle behavior and phenomena: from the electronic configurations of atoms, nuclei (and therefore all elements on the periodic table and their chemistry), to the quark configurations and colour charge (hence the properties of baryons and mesons).
A fundamental prediction of special relativity is the relativistic energy–momentum relation; for a particle of rest mass m, and in a particular frame of reference with energy E and 3-momentum p with magnitude in terms of the dot product
{\displaystyle p={\sqrt {\mathbf {p} \cdot \mathbf {p} }}}, it is:
{\displaystyle E^{2}=c^{2}\mathbf {p} \cdot \mathbf {p} +(mc^{2})^{2}\,.}
These equations are used together with the energy and momentum operators, which are respectively:
{\displaystyle {\hat {E}}=i\hbar {\frac {\partial }{\partial t}}\,,\quad {\hat {\mathbf {p} }}=-i\hbar \nabla \,,}
to construct a relativistic wave equation (RWE): a partial differential equation consistent with the energy–momentum relation, which is solved for ψ to predict the quantum dynamics of the particle. For space and time to be placed on equal footing, as in relativity, the orders of the space and time partial derivatives should be equal, and ideally as low as possible, so that no initial values of the derivatives need to be specified. This is important for probability interpretations, exemplified below. The lowest possible order of any differential equation is the first (zeroth-order derivatives would not form a differential equation).
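As a quick consistency check on this construction, the energy and momentum operators applied to a free plane wave reproduce E and p, and squaring them reproduces the energy–momentum relation acting on ψ. A minimal symbolic sketch in one spatial dimension, assuming SymPy is available:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
p, m, c, hbar = sp.symbols('p m c hbar', positive=True)

# Free plane wave psi = exp(i(px - Et)/hbar) with the on-shell energy
E = sp.sqrt(c**2 * p**2 + (m * c**2)**2)
psi = sp.exp(sp.I * (p * x - E * t) / hbar)

# Energy and momentum operators: E_hat = i hbar d/dt, p_hat = -i hbar d/dx
assert sp.simplify(sp.I * hbar * sp.diff(psi, t) - E * psi) == 0
assert sp.simplify(-sp.I * hbar * sp.diff(psi, x) - p * psi) == 0

# Squaring the operators gives the energy-momentum relation acting on psi
lhs = -hbar**2 * sp.diff(psi, t, 2)                             # E_hat^2 psi
rhs = -c**2 * hbar**2 * sp.diff(psi, x, 2) + (m * c**2)**2 * psi  # c^2 p_hat^2 psi + (mc^2)^2 psi
assert sp.simplify(lhs - rhs) == 0
```

The last assertion is precisely the statement that a plane wave solves the second-order wave equation obtained from the relation (the Klein–Gordon equation discussed below).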
The Heisenberg picture is another formulation of QM, in which case the wavefunction ψ is time-independent, and the operators A(t) contain the time dependence, governed by the equation of motion:
{\displaystyle {\frac {d}{dt}}A={\frac {1}{i\hbar }}[A,{\hat {H}}]+{\frac {\partial }{\partial t}}A\,,}
This equation is also true in RQM, provided the Heisenberg operators are modified to be consistent with SR.
Historically, around 1926, Schrödinger and Heisenberg showed that wave mechanics and matrix mechanics are equivalent, a result later furthered by Dirac using transformation theory.
A more modern approach to RWEs, first introduced during the time RWEs were developing for particles of any spin, is to apply representations of the Lorentz group.
=== Space and time ===
In classical mechanics and non-relativistic QM, time is an absolute quantity all observers and particles can always agree on, "ticking away" in the background independent of space. Thus in non-relativistic QM one has for a many particle system ψ(r1, r2, r3, ..., t, σ1, σ2, σ3...).
In relativistic mechanics, the spatial coordinates and coordinate time are not absolute; any two observers moving relative to each other can measure different locations and times of events. The position and time coordinates combine naturally into a four-dimensional spacetime position X = (ct, r) corresponding to events, and the energy and 3-momentum combine naturally into the four-momentum P = (E/c, p) of a dynamic particle, as measured in some reference frame. These change according to a Lorentz transformation as one measures in a different frame boosted and/or rotated relative to the original frame in consideration. The derivative operators, and hence the energy and 3-momentum operators, are also non-invariant and change under Lorentz transformations.
Under a proper orthochronous Lorentz transformation (r, t) → Λ(r, t) in Minkowski space, all one-particle quantum states ψσ locally transform under some representation D of the Lorentz group:
{\displaystyle \psi _{\sigma }(\mathbf {r} ,t)\rightarrow D(\Lambda )\psi _{\sigma }(\Lambda ^{-1}(\mathbf {r} ,t))}
where D(Λ) is a finite-dimensional representation, in other words a (2s + 1)×(2s + 1) square matrix. Again, ψ is thought of as a column vector containing components with the (2s + 1) allowed values of σ. The quantum numbers s and σ, as well as other labels, continuous or discrete, representing other quantum numbers, are suppressed. One value of σ may occur more than once depending on the representation.
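As an illustration of how D(Λ) acts on the σ components, consider the spin-1/2 case and a rotation about the z-axis, for which D is the 2 × 2 matrix exp(−iθσz/2) = diag(e^{−iθ/2}, e^{+iθ/2}); a 2π rotation returns a spinor to minus itself, a hallmark of half-integer spin. A minimal numerical sketch, assuming NumPy:

```python
import numpy as np

def D(theta):
    # Spin-1/2 representation of a rotation by theta about z:
    # exp(-i theta sigma_z / 2) = diag(e^{-i theta/2}, e^{+i theta/2})
    return np.diag([np.exp(-0.5j * theta), np.exp(0.5j * theta)])

psi = np.array([1.0, 0.0], dtype=complex)     # "spin up" component

assert np.allclose(D(2 * np.pi) @ psi, -psi)  # a 2*pi rotation flips the sign
assert np.allclose(D(4 * np.pi) @ psi,  psi)  # a 4*pi rotation is the identity
```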
=== Non-relativistic and relativistic Hamiltonians ===
The classical Hamiltonian for a particle in a potential is the kinetic energy p·p/2m plus the potential energy V(r, t), with the corresponding quantum operator in the Schrödinger picture:
{\displaystyle {\hat {H}}={\frac {{\hat {\mathbf {p} }}\cdot {\hat {\mathbf {p} }}}{2m}}+V(\mathbf {r} ,t)}
and substituting this into the above Schrödinger equation gives a non-relativistic QM equation for the wavefunction: the procedure is a straightforward substitution of a simple expression. By contrast this is not as easy in RQM; the energy–momentum equation is quadratic in energy and momentum leading to difficulties. Naively setting:
{\displaystyle {\hat {H}}={\hat {E}}={\sqrt {c^{2}{\hat {\mathbf {p} }}\cdot {\hat {\mathbf {p} }}+(mc^{2})^{2}}}\quad \Rightarrow \quad i\hbar {\frac {\partial }{\partial t}}\psi ={\sqrt {c^{2}{\hat {\mathbf {p} }}\cdot {\hat {\mathbf {p} }}+(mc^{2})^{2}}}\,\psi }
is not helpful for several reasons. The square root of the operators cannot be used as it stands; it would have to be expanded in a power series before the momentum operator, raised to a power in each term, could act on ψ. As a result of the power series, the space and time derivatives are completely asymmetric: infinite-order in space derivatives but only first order in the time derivative, which is inelegant and unwieldy. Again, there is the problem of the non-invariance of the energy operator, equated to the square root which is also not invariant. Another problem, less obvious and more severe, is that it can be shown to be nonlocal and can even violate causality: if the particle is initially localized at a point r0 so that ψ(r0, t = 0) is finite and zero elsewhere, then at any later time the equation predicts delocalization ψ(r, t) ≠ 0 everywhere, even for |r| > ct which means the particle could arrive at a point before a pulse of light could. This would have to be remedied by the additional constraint ψ(|r| > ct, t) = 0.
There is also the problem of incorporating spin in the Hamiltonian, which isn't a prediction of the non-relativistic Schrödinger theory. Particles with spin have a corresponding spin magnetic moment quantized in units of μB, the Bohr magneton:
{\displaystyle {\hat {\boldsymbol {\mu }}}_{S}=-{\frac {g\mu _{B}}{\hbar }}{\hat {\mathbf {S} }}\,,\quad \left|{\boldsymbol {\mu }}_{S}\right|=-g\mu _{B}\sigma \,,}
where g is the (spin) g-factor for the particle, and S the spin operator, so they interact with electromagnetic fields. For a particle in an externally applied magnetic field B, the interaction term
{\displaystyle {\hat {H}}_{B}=-\mathbf {B} \cdot {\hat {\boldsymbol {\mu }}}_{S}}
has to be added to the above non-relativistic Hamiltonian. By contrast, a relativistic Hamiltonian introduces spin automatically as a consequence of enforcing the relativistic energy–momentum relation.
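For a spin-1/2 particle with the field along z, the interaction term above reduces to the 2 × 2 matrix (gμB B/2)σz, whose eigenvalues are the two Zeeman energy levels. A numerical sketch, assuming NumPy (the g-factor and field strength are illustrative values, not taken from this article):

```python
import numpy as np

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

g = 2.0023         # electron spin g-factor (approximate)
mu_B = 9.274e-24   # Bohr magneton in J/T
B = 1.0            # illustrative field strength in tesla, along z

# H_B = -B . mu_S = (g mu_B / hbar) B . S = (g mu_B B / 2) sigma_z for B along z
H_B = 0.5 * g * mu_B * B * sigma_z

levels = np.sort(np.linalg.eigvalsh(H_B))
# Two levels split symmetrically about zero, separated by g mu_B B
assert np.allclose(levels, [-0.5 * g * mu_B * B, 0.5 * g * mu_B * B])
```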
Relativistic Hamiltonians are analogous to those of non-relativistic QM in the following respect; there are terms including rest mass and interaction terms with externally applied fields, similar to the classical potential energy term, as well as momentum terms like the classical kinetic energy term. A key difference is that relativistic Hamiltonians contain spin operators in the form of matrices, in which the matrix multiplication runs over the spin index σ, so in general a relativistic Hamiltonian:
{\displaystyle {\hat {H}}={\hat {H}}(\mathbf {r} ,t,{\hat {\mathbf {p} }},{\hat {\mathbf {S} }})}
is a function of space, time, and the momentum and spin operators.
=== The Klein–Gordon and Dirac equations for free particles ===
Substituting the energy and momentum operators directly into the energy–momentum relation may at first sight seem appealing, to obtain the Klein–Gordon equation:
{\displaystyle {\hat {E}}^{2}\psi =c^{2}{\hat {\mathbf {p} }}\cdot {\hat {\mathbf {p} }}\psi +(mc^{2})^{2}\psi \,,}
and was discovered by many people because of the straightforward way of obtaining it, notably by Schrödinger in 1925 before he found the non-relativistic equation named after him, and by Klein and Gordon in 1927, who included electromagnetic interactions in the equation. This is relativistically invariant, yet this equation alone is not a sufficient foundation for RQM for at least two reasons: one is that negative-energy states are solutions, another is that the density (given below) is not positive-definite; in addition, the equation as it stands is only applicable to spinless particles. This equation can be factored into the form:
{\displaystyle \left({\hat {E}}-c{\boldsymbol {\alpha }}\cdot {\hat {\mathbf {p} }}-\beta mc^{2}\right)\left({\hat {E}}+c{\boldsymbol {\alpha }}\cdot {\hat {\mathbf {p} }}+\beta mc^{2}\right)\psi =0\,,}
where α = (α1, α2, α3) and β are not simply numbers or vectors, but 4 × 4 Hermitian matrices that are required to anticommute for i ≠ j:
{\displaystyle \alpha _{i}\beta =-\beta \alpha _{i},\quad \alpha _{i}\alpha _{j}=-\alpha _{j}\alpha _{i}\,,}
and square to the identity matrix:
{\displaystyle \alpha _{i}^{2}=\beta ^{2}=I\,,}
so that terms with mixed second-order derivatives cancel while the second-order derivatives purely in space and time remain. The first factor:
{\displaystyle \left({\hat {E}}-c{\boldsymbol {\alpha }}\cdot {\hat {\mathbf {p} }}-\beta mc^{2}\right)\psi =0\quad \Leftrightarrow \quad {\hat {H}}=c{\boldsymbol {\alpha }}\cdot {\hat {\mathbf {p} }}+\beta mc^{2}}
is the Dirac equation. The other factor is also the Dirac equation, but for a particle of negative mass. Each factor is relativistically invariant. The reasoning can be done the other way round: propose the Hamiltonian in the above form, as Dirac did in 1928, then pre-multiply the equation by the other factor of operators E + cα · p + βmc2, and comparison with the KG equation determines the constraints on α and β. The positive mass equation can continue to be used without loss of continuity. The matrices multiplying ψ suggest it isn't a scalar wavefunction as permitted in the KG equation, but must instead be a four-component entity. The Dirac equation still predicts negative energy solutions, so Dirac postulated that negative energy states are always occupied, because according to the Pauli principle, electronic transitions from positive to negative energy levels in atoms would be forbidden. See Dirac sea for details.
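The constraints on α and β can be verified with explicit matrices in the standard Dirac representation, where β = diag(I, −I) and each αi has the Pauli matrix σi in its off-diagonal blocks. A numerical sketch, assuming NumPy:

```python
import numpy as np
from itertools import combinations

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Dirac representation: beta = diag(I, -I), alpha_i = off-diagonal sigma_i blocks
beta = np.block([[I2, Z2], [Z2, -I2]])
alpha = [np.block([[Z2, s], [s, Z2]]) for s in pauli]

I4 = np.eye(4, dtype=complex)
assert np.allclose(beta @ beta, I4)                # beta^2 = I
for a in alpha:
    assert np.allclose(a @ a, I4)                  # alpha_i^2 = I
    assert np.allclose(a @ beta + beta @ a, 0)     # alpha_i anticommutes with beta
for a, b in combinations(alpha, 2):
    assert np.allclose(a @ b + b @ a, 0)           # alpha_i anticommutes with alpha_j (i != j)
```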
=== Densities and currents ===
In non-relativistic quantum mechanics, the square modulus of the wavefunction ψ gives the probability density function ρ = |ψ|2. This is the Copenhagen interpretation, circa 1927. In RQM, while ψ(r, t) is a wavefunction, the probability interpretation is not the same as in non-relativistic QM. Some RWEs do not predict a probability density ρ or probability current j (really meaning probability current density) because they are not positive-definite functions of space and time. The Dirac equation does:
{\displaystyle \rho =\psi ^{\dagger }\psi ,\quad \mathbf {j} =\psi ^{\dagger }\gamma ^{0}{\boldsymbol {\gamma }}\psi \quad \rightleftharpoons \quad J^{\mu }=\psi ^{\dagger }\gamma ^{0}\gamma ^{\mu }\psi }
where the dagger denotes the Hermitian adjoint (authors usually write ψ̄ = ψ†γ0 for the Dirac adjoint) and Jμ is the probability four-current, while the Klein–Gordon equation does not:
{\displaystyle \rho ={\frac {i\hbar }{2mc^{2}}}\left(\psi ^{*}{\frac {\partial \psi }{\partial t}}-\psi {\frac {\partial \psi ^{*}}{\partial t}}\right)\,,\quad \mathbf {j} =-{\frac {i\hbar }{2m}}\left(\psi ^{*}\nabla \psi -\psi \nabla \psi ^{*}\right)\quad \rightleftharpoons \quad J^{\mu }={\frac {i\hbar }{2m}}(\psi ^{*}\partial ^{\mu }\psi -\psi \partial ^{\mu }\psi ^{*})}
where ∂μ is the four-gradient. Since the initial values of both ψ and ∂ψ/∂t may be freely chosen, the density can be negative.
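The sign problem can be made explicit with a spatially uniform solution ψ = exp(∓iEt/ħ) of the free KG equation (so p = 0 and E = mc²): the density formula above evaluates to ±E/(mc²), so the negative-energy branch carries negative ρ. A symbolic sketch, assuming SymPy:

```python
import sympy as sp

t = sp.Symbol('t', real=True)
E, m, c, hbar = sp.symbols('E m c hbar', positive=True)

def kg_density(psi):
    # rho = (i hbar / 2 m c^2) (psi* dpsi/dt - psi dpsi*/dt)
    return sp.simplify(sp.I * hbar / (2 * m * c**2)
                       * (sp.conjugate(psi) * sp.diff(psi, t)
                          - psi * sp.diff(sp.conjugate(psi), t)))

rho_pos = kg_density(sp.exp(-sp.I * E * t / hbar))  # positive-energy solution
rho_neg = kg_density(sp.exp(+sp.I * E * t / hbar))  # negative-energy solution

assert sp.simplify(rho_pos - E / (m * c**2)) == 0   # rho = +E/(mc^2) > 0
assert sp.simplify(rho_neg + E / (m * c**2)) == 0   # rho = -E/(mc^2) < 0
```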
Instead, what appear at first sight to be a "probability density" and "probability current" have to be reinterpreted as charge density and current density when multiplied by the electric charge. Then, the wavefunction ψ is not a wavefunction at all, but reinterpreted as a field. The density and current of electric charge always satisfy a continuity equation:
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {J} =0\quad \rightleftharpoons \quad \partial _{\mu }J^{\mu }=0\,,}
as charge is a conserved quantity. Probability density and current also satisfy a continuity equation, because probability is conserved; however, this is only possible in the absence of interactions.
== Spin and electromagnetically interacting particles ==
Including interactions in RWEs is generally difficult. Minimal coupling is a simple way to include the electromagnetic interaction. For one charged particle of electric charge q in an electromagnetic field, given by the magnetic vector potential A(r, t) defined by the magnetic field B = ∇ × A, and electric scalar potential ϕ(r, t), this is:
{\displaystyle {\hat {E}}\rightarrow {\hat {E}}-q\phi \,,\quad {\hat {\mathbf {p} }}\rightarrow {\hat {\mathbf {p} }}-q\mathbf {A} \quad \rightleftharpoons \quad {\hat {P}}_{\mu }\rightarrow {\hat {P}}_{\mu }-qA_{\mu }}
where Pμ is the four-momentum that has a corresponding 4-momentum operator, and Aμ the four-potential. In the following, the non-relativistic limit refers to the limiting cases:
{\displaystyle E-e\phi \approx mc^{2}\,,\quad \mathbf {p} \approx m\mathbf {v} \,,}
that is, the total energy of the particle is approximately the rest energy for small electric potentials, and the momentum is approximately the classical momentum.
=== Spin 0 ===
In RQM, the KG equation admits the minimal coupling prescription;
{\displaystyle {({\hat {E}}-q\phi )}^{2}\psi =c^{2}{({\hat {\mathbf {p} }}-q\mathbf {A} )}^{2}\psi +(mc^{2})^{2}\psi \quad \rightleftharpoons \quad \left[{({\hat {P}}_{\mu }-qA_{\mu })}{({\hat {P}}^{\mu }-qA^{\mu })}-{(mc)}^{2}\right]\psi =0.}
In the case where the charge is zero, the equation reduces trivially to the free KG equation, so nonzero charge is assumed below. This is a scalar equation that is invariant under the irreducible one-dimensional scalar (0,0) representation of the Lorentz group. This means that all of its solutions will belong to a direct sum of (0,0) representations. Solutions that do not belong to the irreducible (0,0) representation will have two or more independent components. Such solutions cannot in general describe particles with nonzero spin since spin components are not independent. Other constraints would have to be imposed for that, e.g. the Dirac equation for spin 1/2 (see below). Thus if a system satisfies the KG equation only, it can only be interpreted as a system with zero spin.
The electromagnetic field is treated classically according to Maxwell's equations and the particle is described by a wavefunction, the solution to the KG equation. The equation is, as it stands, not always very useful, because massive spinless particles, such as the π-mesons, experience the much stronger strong interaction in addition to the electromagnetic interaction. It does, however, correctly describe charged spinless bosons in the absence of other interactions.
The KG equation is applicable to spinless charged bosons in an external electromagnetic potential. As such, the equation cannot be applied to the description of atoms, since the electron is a spin 1/2 particle. In the non-relativistic limit the equation reduces to the Schrödinger equation for a spinless charged particle in an electromagnetic field:
{\displaystyle \left(i\hbar {\frac {\partial }{\partial t}}-q\phi \right)\psi ={\frac {1}{2m}}{({\hat {\mathbf {p} }}-q\mathbf {A} )}^{2}\psi \quad \Leftrightarrow \quad {\hat {H}}={\frac {1}{2m}}{({\hat {\mathbf {p} }}-q\mathbf {A} )}^{2}+q\phi .}
=== Spin 1/2 ===
Non-relativistically, spin was phenomenologically introduced in the Pauli equation by Pauli in 1927 for particles in an electromagnetic field:
{\displaystyle \left(i\hbar {\frac {\partial }{\partial t}}-q\phi \right)\psi =\left[{\frac {1}{2m}}{({\boldsymbol {\sigma }}\cdot (\mathbf {p} -q\mathbf {A} ))}^{2}\right]\psi \quad \Leftrightarrow \quad {\hat {H}}={\frac {1}{2m}}{({\boldsymbol {\sigma }}\cdot (\mathbf {p} -q\mathbf {A} ))}^{2}+q\phi }
by means of the 2 × 2 Pauli matrices, and ψ is not just a scalar wavefunction as in the non-relativistic Schrödinger equation, but a two-component spinor field:
{\displaystyle \psi ={\begin{pmatrix}\psi _{\uparrow }\\\psi _{\downarrow }\end{pmatrix}}}
where the subscripts ↑ and ↓ refer to the "spin up" (σ = +1/2) and "spin down" (σ = −1/2) states.
In RQM, the Dirac equation can also incorporate minimal coupling, rewritten from above;
{\displaystyle \left(i\hbar {\frac {\partial }{\partial t}}-q\phi \right)\psi =\gamma ^{0}\left[c{\boldsymbol {\gamma }}\cdot {({\hat {\mathbf {p} }}-q\mathbf {A} )}-mc^{2}\right]\psi \quad \rightleftharpoons \quad \left[\gamma ^{\mu }({\hat {P}}_{\mu }-qA_{\mu })-mc^{2}\right]\psi =0}
and was the first equation to accurately predict spin, a consequence of the 4 × 4 gamma matrices γ0 = β, γ = (γ1, γ2, γ3) = βα = (βα1, βα2, βα3). There is a 4 × 4 identity matrix pre-multiplying the energy operator (including the potential energy term), conventionally not written for simplicity and clarity (i.e. treated like the number 1). Here ψ is a four-component spinor field, which is conventionally split into two two-component spinors in the form:
{\displaystyle \psi ={\begin{pmatrix}\psi _{+}\\\psi _{-}\end{pmatrix}}={\begin{pmatrix}\psi _{+\uparrow }\\\psi _{+\downarrow }\\\psi _{-\uparrow }\\\psi _{-\downarrow }\end{pmatrix}}}
The 2-spinor ψ+ corresponds to a particle with 4-momentum (E, p) and charge q and two spin states (σ = ±1/2, as before). The other 2-spinor ψ− corresponds to a similar particle with the same mass and spin states, but negative 4-momentum −(E, p) and negative charge −q, that is, negative energy states, time-reversed momentum, and negated charge. This was the first interpretation and prediction of a particle and corresponding antiparticle. See Dirac spinor and bispinor for further description of these spinors. In the non-relativistic limit the Dirac equation reduces to the Pauli equation (see Dirac equation for how). When applied to a one-electron atom or ion, setting A = 0 and ϕ to the appropriate electrostatic potential, additional relativistic terms include the spin–orbit interaction, electron gyromagnetic ratio, and Darwin term. In ordinary QM these terms have to be put in by hand and treated using perturbation theory. The positive energies do account accurately for the fine structure.
Within RQM, for massless particles the Dirac equation reduces to:
{\displaystyle \left({\frac {\hat {E}}{c}}+{\boldsymbol {\sigma }}\cdot {\hat {\mathbf {p} }}\right)\psi _{+}=0\,,\quad \left({\frac {\hat {E}}{c}}-{\boldsymbol {\sigma }}\cdot {\hat {\mathbf {p} }}\right)\psi _{-}=0\quad \rightleftharpoons \quad \sigma ^{\mu }{\hat {P}}_{\mu }\psi _{+}=0\,,\quad \sigma _{\mu }{\hat {P}}^{\mu }\psi _{-}=0\,,}
the first of which is the Weyl equation, a considerable simplification applicable for massless neutrinos. This time there is a 2 × 2 identity matrix pre-multiplying the energy operator conventionally not written. In RQM it is useful to take this as the zeroth Pauli matrix σ0 which couples to the energy operator (time derivative), just as the other three matrices couple to the momentum operator (spatial derivatives).
The Pauli and gamma matrices were introduced here, in theoretical physics, rather than in pure mathematics itself. They have applications to quaternions and to the SU(2) and SO(3) Lie groups, because they satisfy the important commutator [ , ] and anticommutator [ , ]+ relations respectively:
{\displaystyle \left[\sigma _{a},\sigma _{b}\right]=2i\varepsilon _{abc}\sigma _{c}\,,\quad \left[\sigma _{a},\sigma _{b}\right]_{+}=2\delta _{ab}\sigma _{0}}
where εabc is the three-dimensional Levi-Civita symbol. The gamma matrices form bases in Clifford algebra, and have a connection to the components of the flat spacetime Minkowski metric ηαβ in the anticommutation relation:
{\displaystyle \left[\gamma ^{\alpha },\gamma ^{\beta }\right]_{+}=\gamma ^{\alpha }\gamma ^{\beta }+\gamma ^{\beta }\gamma ^{\alpha }=2\eta ^{\alpha \beta }\,,}
(This can be extended to curved spacetime by introducing vierbeins, but is not the subject of special relativity).
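Both sets of relations can be checked directly with explicit matrices; in the Dirac representation γ0 = β and γi = βαi, giving γi the blocks [[0, σi], [−σi, 0]]. A numerical sketch, assuming NumPy:

```python
import numpy as np
from itertools import product

s0 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def eps(a, b, c):
    # Three-dimensional Levi-Civita symbol on indices 0, 1, 2
    return (b - a) * (c - a) * (c - b) / 2

# Pauli commutator and anticommutator relations
for a, b in product(range(3), repeat=2):
    comm = sig[a] @ sig[b] - sig[b] @ sig[a]
    assert np.allclose(comm, sum(2j * eps(a, b, c) * sig[c] for c in range(3)))
    anti = sig[a] @ sig[b] + sig[b] @ sig[a]
    assert np.allclose(anti, 2 * (a == b) * s0)

# Gamma matrices (Dirac representation) and the Minkowski metric
Z2 = np.zeros((2, 2), dtype=complex)
g0 = np.block([[s0, Z2], [Z2, -s0]])
gamma = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]
eta = np.diag([1.0, -1.0, -1.0, -1.0])
for a, b in product(range(4), repeat=2):
    anti = gamma[a] @ gamma[b] + gamma[b] @ gamma[a]
    assert np.allclose(anti, 2 * eta[a, b] * np.eye(4))
```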
In 1929, the Breit equation was found to describe two or more electromagnetically interacting massive spin 1/2 fermions to first-order relativistic corrections; one of the first attempts to describe such a relativistic quantum many-particle system. This is, however, still only an approximation, and the Hamiltonian includes numerous long and complicated sums.
=== Helicity and chirality ===
The helicity operator is defined by;
{\displaystyle {\hat {h}}={\hat {\mathbf {S} }}\cdot {\frac {\hat {\mathbf {p} }}{|\mathbf {p} |}}={\hat {\mathbf {S} }}\cdot {\frac {c{\hat {\mathbf {p} }}}{\sqrt {E^{2}-(m_{0}c^{2})^{2}}}}}
where p is the momentum operator, S the spin operator for a particle of spin s, E is the total energy of the particle, and m0 its rest mass. Helicity indicates the orientations of the spin and translational momentum vectors. Helicity is frame-dependent because of the 3-momentum in the definition, and is quantized due to spin quantization, which has discrete positive values for parallel alignment, and negative values for antiparallel alignment.
An automatic occurrence in the Dirac equation (and the Weyl equation) is the projection of the spin 1/2 operator on the 3-momentum (times c), σ · c p, which is the helicity (for the spin 1/2 case) times
{\displaystyle {\sqrt {E^{2}-(m_{0}c^{2})^{2}}}}.
For massless particles the helicity simplifies to:
{\displaystyle {\hat {h}}={\hat {\mathbf {S} }}\cdot {\frac {c{\hat {\mathbf {p} }}}{E}}}
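For spin 1/2, the helicity operator is (ħ/2)σ·p̂/|p|, and its eigenvalues are ±ħ/2 whatever the direction of p, corresponding to spin parallel or antiparallel to the momentum. A numerical sketch in units of ħ, assuming NumPy (the momentum direction below is an arbitrary illustrative choice):

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

p = np.array([3.0, -4.0, 12.0])   # an arbitrary 3-momentum
n = p / np.linalg.norm(p)         # unit vector along p

# Helicity operator for spin 1/2, in units of hbar: (1/2) sigma . n
h = 0.5 * sum(n[i] * sig[i] for i in range(3))

eigenvalues = np.sort(np.linalg.eigvalsh(h))
assert np.allclose(eigenvalues, [-0.5, 0.5])  # antiparallel and parallel alignment
```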
=== Higher spins ===
The Dirac equation can only describe particles of spin 1/2. Beyond the Dirac equation, RWEs have been applied to free particles of various spins. In 1936, Dirac extended his equation to all fermions; three years later Fierz and Pauli rederived the same equation. The Bargmann–Wigner equations were found in 1948 using Lorentz group theory, applicable for all free particles with any spin. Considering the factorization of the KG equation above, and more rigorously by Lorentz group theory, it becomes apparent that spin should be introduced in the form of matrices.
The wavefunctions are multicomponent spinor fields, which can be represented as column vectors of functions of space and time:
{\displaystyle \psi (\mathbf {r} ,t)={\begin{bmatrix}\psi _{\sigma =s}(\mathbf {r} ,t)\\\psi _{\sigma =s-1}(\mathbf {r} ,t)\\\vdots \\\psi _{\sigma =-s+1}(\mathbf {r} ,t)\\\psi _{\sigma =-s}(\mathbf {r} ,t)\end{bmatrix}}\quad \rightleftharpoons \quad {\psi (\mathbf {r} ,t)}^{\dagger }={\begin{bmatrix}{\psi _{\sigma =s}(\mathbf {r} ,t)}^{\star }&{\psi _{\sigma =s-1}(\mathbf {r} ,t)}^{\star }&\cdots &{\psi _{\sigma =-s+1}(\mathbf {r} ,t)}^{\star }&{\psi _{\sigma =-s}(\mathbf {r} ,t)}^{\star }\end{bmatrix}}}
where the expression on the right is the Hermitian conjugate. For a massive particle of spin s, there are 2s + 1 components for the particle, and another 2s + 1 for the corresponding antiparticle (there are 2s + 1 possible σ values in each case), altogether forming a 2(2s + 1)-component spinor field:
{\displaystyle \psi (\mathbf {r} ,t)={\begin{bmatrix}\psi _{+,\,\sigma =s}(\mathbf {r} ,t)\\\psi _{+,\,\sigma =s-1}(\mathbf {r} ,t)\\\vdots \\\psi _{+,\,\sigma =-s+1}(\mathbf {r} ,t)\\\psi _{+,\,\sigma =-s}(\mathbf {r} ,t)\\\psi _{-,\,\sigma =s}(\mathbf {r} ,t)\\\psi _{-,\,\sigma =s-1}(\mathbf {r} ,t)\\\vdots \\\psi _{-,\,\sigma =-s+1}(\mathbf {r} ,t)\\\psi _{-,\,\sigma =-s}(\mathbf {r} ,t)\end{bmatrix}}\quad \rightleftharpoons \quad {\psi (\mathbf {r} ,t)}^{\dagger }={\begin{bmatrix}{\psi _{+,\,\sigma =s}(\mathbf {r} ,t)}^{\star }&{\psi _{+,\,\sigma =s-1}(\mathbf {r} ,t)}^{\star }&\cdots &{\psi _{-,\,\sigma =-s}(\mathbf {r} ,t)}^{\star }\end{bmatrix}}}
with the + subscript indicating the particle and − subscript for the antiparticle. However, for massless particles of spin s, there are only ever two-component spinor fields; one is for the particle in one helicity state corresponding to +s and the other for the antiparticle in the opposite helicity state corresponding to −s:
ψ
(
r
,
t
)
=
(
ψ
+
(
r
,
t
)
ψ
−
(
r
,
t
)
)
{\displaystyle \psi (\mathbf {r} ,t)={\begin{pmatrix}\psi _{+}(\mathbf {r} ,t)\\\psi _{-}(\mathbf {r} ,t)\end{pmatrix}}}
According to the relativistic energy–momentum relation, all massless particles travel at the speed of light, so particles traveling at the speed of light are also described by two-component spinors. Historically, Élie Cartan found the most general form of spinors in 1913, prior to the spinors revealed in the RWEs after 1927.
For equations describing higher-spin particles, the inclusion of interactions is nowhere near as simple as minimal coupling; it leads to incorrect predictions and self-inconsistencies. For spin greater than ħ/2, the RWE is not fixed by the particle's mass, spin, and electric charge; the electromagnetic moments (electric dipole moments and magnetic dipole moments) allowed by the spin quantum number are arbitrary. (Theoretically, magnetic charge would contribute also.) For example, the spin 1/2 case only allows a magnetic dipole, but for spin 1 particles magnetic quadrupoles and electric dipoles are also possible. For more on this topic, see multipole expansion and (for example) Cédric Lorcé (2009).
== Velocity operator ==
The Schrödinger/Pauli velocity operator can be defined for a massive particle using the classical definition p = m v, and substituting quantum operators in the usual way:
{\displaystyle {\hat {\mathbf {v} }}={\frac {1}{m}}{\hat {\mathbf {p} }}}
which has eigenvalues that take any value. In RQM, the Dirac theory, it is:
{\displaystyle {\hat {\mathbf {v} }}={\frac {i}{\hbar }}\left[{\hat {H}},{\hat {\mathbf {r} }}\right]}
which must have eigenvalues between ±c. See Foldy–Wouthuysen transformation for more theoretical background.
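The ±c eigenvalues can be made concrete. For the free Dirac Hamiltonian H = cα·p + βmc², the commutator above works out to the velocity operator cα, and each α matrix has eigenvalues ±1. A short numerical check in the Dirac representation (a sketch, with ħ = c = 1):

```python
import numpy as np

# Pauli matrix sigma_x; the Dirac alpha_x in the Dirac representation is
# the 4x4 block matrix [[0, sigma_x], [sigma_x, 0]]
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.zeros((2, 2))
alpha_x = np.block([[Z, sx], [sx, Z]])

c = 1.0                              # natural units
vx = c * alpha_x                     # velocity component from (i/hbar)[H, x]
eigs = np.sort(np.linalg.eigvalsh(vx))
print(eigs)                          # eigenvalues are exactly ±c
```

The same holds for alpha_y and alpha_z, since every α matrix squares to the identity.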
== Relativistic quantum Lagrangians ==
The Hamiltonian operators in the Schrödinger picture are one approach to forming the differential equations for ψ. An equivalent alternative is to determine a Lagrangian (really meaning Lagrangian density), then generate the differential equation by the field-theoretic Euler–Lagrange equation:
{\displaystyle \partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\psi )}}\right)-{\frac {\partial {\mathcal {L}}}{\partial \psi }}=0\,}
For some RWEs, a Lagrangian can be found by inspection. For example, the Dirac Lagrangian is:
{\displaystyle {\mathcal {L}}={\overline {\psi }}(\gamma ^{\mu }P_{\mu }-mc)\psi }
and the Klein–Gordon Lagrangian is:
{\displaystyle {\mathcal {L}}=-{\frac {\hbar ^{2}}{m}}\eta ^{\mu \nu }\partial _{\mu }\psi ^{*}\partial _{\nu }\psi -mc^{2}\psi ^{*}\psi \,.}
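As a check, applying the Euler–Lagrange equation to this Lagrangian reproduces the Klein–Gordon equation. The short derivation below treats ψ and ψ* as independent fields and assumes the metric signature (−, +, +, +), which makes the kinetic term positive; conventions vary between texts.

```latex
% Vary with respect to \psi^* :
\frac{\partial \mathcal{L}}{\partial \psi^*} = -mc^2 \psi , \qquad
\frac{\partial \mathcal{L}}{\partial(\partial_\mu \psi^*)}
  = -\frac{\hbar^2}{m}\,\partial^\mu \psi
% Substituting into the Euler--Lagrange equation:
\partial_\mu\!\left(-\frac{\hbar^2}{m}\,\partial^\mu \psi\right) + mc^2 \psi = 0
\quad\Longrightarrow\quad
\partial_\mu \partial^\mu \psi = \left(\frac{mc}{\hbar}\right)^{2} \psi ,
% i.e. the Klein--Gordon equation in this signature.
```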
This is not possible for all RWEs, and is one reason the Lorentz group theoretic approach is important and appealing: fundamental invariance and symmetries in space and time can be used to derive RWEs using appropriate group representations. The Lagrangian approach with the field interpretation of ψ is the subject of QFT rather than RQM: Feynman's path integral formulation uses invariant Lagrangians rather than Hamiltonian operators, since the latter can become extremely complicated; see (for example) Weinberg (1995).
== Relativistic quantum angular momentum ==
In non-relativistic QM, the angular momentum operator is formed from the classical pseudovector definition L = r × p. In RQM, the position and momentum operators are inserted directly where they appear in the orbital relativistic angular momentum tensor defined from the four-dimensional position and momentum of the particle, equivalently a bivector in the exterior algebra formalism:
{\displaystyle M^{\alpha \beta }=X^{\alpha }P^{\beta }-X^{\beta }P^{\alpha }=2X^{[\alpha }P^{\beta ]}\quad \rightleftharpoons \quad \mathbf {M} =\mathbf {X} \wedge \mathbf {P} \,,}
which has six components altogether: three are the non-relativistic 3-orbital angular momenta; M12 = L3, M23 = L1, M31 = L2, and the other three M01, M02, M03 are boosts of the centre of mass of the rotating object. An additional relativistic-quantum term has to be added for particles with spin. For a particle of rest mass m, the total angular momentum tensor is:
{\displaystyle J^{\alpha \beta }=2X^{[\alpha }P^{\beta ]}+{\frac {1}{m^{2}}}\varepsilon ^{\alpha \beta \gamma \delta }W_{\gamma }p_{\delta }\quad \rightleftharpoons \quad \mathbf {J} =\mathbf {X} \wedge \mathbf {P} +{\frac {1}{m^{2}}}\star (\mathbf {W} \wedge \mathbf {P} )}
where the star denotes the Hodge dual, and
{\displaystyle W_{\alpha }={\frac {1}{2}}\varepsilon _{\alpha \beta \gamma \delta }M^{\beta \gamma }p^{\delta }\quad \rightleftharpoons \quad \mathbf {W} =\star (\mathbf {M} \wedge \mathbf {P} )}
is the Pauli–Lubanski pseudovector. For more on relativistic spin, see (for example) Troshin & Tyurin (1994).
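One convention-independent property of the Pauli–Lubanski pseudovector is that it is orthogonal to the four-momentum, W_α p^α = 0, which follows directly from the total antisymmetry of ε. This can be checked with a small numerical sketch (NumPy; the angular momentum tensor and momentum are random illustrative values, and the sign convention chosen for ε_{0123} does not affect the result):

```python
import numpy as np
from itertools import permutations

# 4D Levi-Civita symbol: +1/-1 on even/odd permutations, 0 otherwise
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    sign, p = 1, list(perm)
    for i in range(4):                # parity via transposition count
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    eps[perm] = sign

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
M = A - A.T                           # arbitrary antisymmetric M^{beta gamma}
p = rng.normal(size=4)                # arbitrary four-momentum p^delta

# W_alpha = (1/2) eps_{alpha beta gamma delta} M^{beta gamma} p^delta
W = 0.5 * np.einsum('abgd,bg,d->a', eps, M, p)

# Contracting W with p gives eps contracted with the symmetric product p p,
# which vanishes identically
print(abs(np.dot(W, p)))              # ~0 up to rounding
```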
=== Thomas precession and spin–orbit interactions ===
In 1926 the Thomas precession was discovered: a relativistic correction to the spin of elementary particles, with applications in the spin–orbit interaction of atoms and the rotation of macroscopic objects. In 1939, Wigner derived the Thomas precession.
In classical electromagnetism and special relativity, an electron moving with a velocity v through an electric field E but not a magnetic field B, will in its own frame of reference experience a Lorentz-transformed magnetic field B′:
{\displaystyle \mathbf {B} '={\frac {\mathbf {E} \times \mathbf {v} }{c^{2}{\sqrt {1-\left(v/c\right)^{2}}}}}\,.}
In the non-relativistic limit v << c:
{\displaystyle \mathbf {B} '={\frac {\mathbf {E} \times \mathbf {v} }{c^{2}}}\,,}
so the non-relativistic spin interaction Hamiltonian becomes:
{\displaystyle {\hat {H}}=-\mathbf {B} '\cdot {\hat {\boldsymbol {\mu }}}_{S}=-\left(\mathbf {B} +{\frac {\mathbf {E} \times \mathbf {v} }{c^{2}}}\right)\cdot {\hat {\boldsymbol {\mu }}}_{S}\,,}
where the first term is already the non-relativistic magnetic moment interaction, and the second term the relativistic correction of order (v/c)², but this disagrees with experimental atomic spectra by a factor of 1⁄2. It was pointed out by L. Thomas that there is a second relativistic effect: An electric field component perpendicular to the electron velocity causes an additional acceleration of the electron perpendicular to its instantaneous velocity, so the electron moves in a curved path. The electron moves in a rotating frame of reference, and this additional precession of the electron is called the Thomas precession. It can be shown that the net result of this effect is that the spin–orbit interaction is reduced by half, as if the magnetic field experienced by the electron has only one-half the value, and the relativistic correction in the Hamiltonian is:
{\displaystyle {\hat {H}}=-\mathbf {B} '\cdot {\hat {\boldsymbol {\mu }}}_{S}=-\left(\mathbf {B} +{\frac {\mathbf {E} \times \mathbf {v} }{2c^{2}}}\right)\cdot {\hat {\boldsymbol {\mu }}}_{S}\,.}
In the case of RQM, the factor of 1⁄2 is predicted by the Dirac equation.
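The field transformation and the Thomas factor above can be checked numerically. The sketch below (NumPy; the field strength and speed are arbitrary illustrative choices) compares the exact Lorentz-transformed field with its non-relativistic limit at v = 0.01c, and confirms that the Thomas-corrected spin–orbit term is half the naive one:

```python
import numpy as np

c = 299792458.0                        # speed of light, m/s
E = np.array([1.0e5, 0.0, 0.0])        # electric field, V/m (illustrative)
v = np.array([0.0, 0.01 * c, 0.0])     # electron velocity, 1% of c

gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / c**2)
B_exact = gamma * np.cross(E, v) / c**2   # Lorentz-transformed field B'
B_nonrel = np.cross(E, v) / c**2          # non-relativistic limit

# relative difference is (gamma - 1)/gamma ~ (1/2)(v/c)^2 ~ 5e-5 here
rel_err = np.linalg.norm(B_exact - B_nonrel) / np.linalg.norm(B_exact)
print(rel_err)

# Thomas precession halves the spin-orbit term: E x v / (2 c^2)
B_thomas = np.cross(E, v) / (2 * c**2)
print(np.linalg.norm(B_nonrel) / np.linalg.norm(B_thomas))   # 2.0
```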
== History ==
The events which led to and established RQM, and the continuation beyond into quantum electrodynamics (QED), are summarized below [see, for example, R. Resnick and R. Eisberg (1985), and P.W. Atkins (1974)]. More than half a century of experimental and theoretical research, from the 1890s through to the 1950s, in the new and mysterious quantum theory revealed that a number of phenomena cannot be explained by QM alone. SR, found at the turn of the 20th century, was seen to be a necessary component, leading to unification: RQM. Theoretical predictions and experiments mainly focused on the newly found atomic physics, nuclear physics, and particle physics, by considering spectroscopy, diffraction and scattering of particles, and the electrons and nuclei within atoms and molecules. Numerous results are attributed to the effects of spin.
=== Relativistic description of particles in quantum phenomena ===
In 1905, Albert Einstein explained the photoelectric effect with a particle description of light as photons. In 1916, Sommerfeld explained fine structure: the splitting of the spectral lines of atoms due to first-order relativistic corrections. The Compton effect of 1923 provided more evidence that special relativity does apply, in this case to a particle description of photon–electron scattering. De Broglie extended wave–particle duality to matter: the de Broglie relations, which are consistent with special relativity and quantum mechanics. By 1927, Davisson and Germer, and separately G. Thomson, successfully diffracted electrons, providing experimental evidence of wave–particle duality.
=== Experiments ===
1897 J. J. Thomson discovers the electron and measures its mass-to-charge ratio. Discovery of the Zeeman effect: the splitting of a spectral line into several components in the presence of a static magnetic field.
1908 Millikan measures the charge on the electron and finds experimental evidence of its quantization, in the oil drop experiment.
1911 Alpha particle scattering in the Geiger–Marsden experiment, led by Rutherford, showed that atoms possess an internal structure: the atomic nucleus.
1913 The Stark effect is discovered: splitting of spectral lines due to a static electric field (compare with the Zeeman effect).
1922 Stern–Gerlach experiment: experimental evidence of spin and its quantization.
1924 Stoner studies splitting of energy levels in magnetic fields.
1932 Experimental discovery of the neutron by Chadwick, and positrons by Anderson, confirming the theoretical prediction of positrons.
1958 Discovery of the Mössbauer effect: resonant and recoil-free emission and absorption of gamma radiation by atomic nuclei bound in a solid, useful for accurate measurements of gravitational redshift and time dilation, and in the analysis of nuclear electromagnetic moments in hyperfine interactions.
=== Quantum non-locality and relativistic locality ===
In 1935, Einstein, Podolsky and Rosen published a paper concerning quantum entanglement of particles, questioning quantum nonlocality and the apparent violation of causality upheld in SR: particles can appear to interact instantaneously at arbitrary distances. This was a misconception since information is not and cannot be transferred in the entangled states; rather the information transmission is in the process of measurement by two observers (one observer has to send a signal to the other, which cannot exceed c). QM does not violate SR. In 1959, Bohm and Aharonov publish a paper on the Aharonov–Bohm effect, questioning the status of electromagnetic potentials in QM. The EM field tensor and EM 4-potential formulations are both applicable in SR, but in QM the potentials enter the Hamiltonian (see above) and influence the motion of charged particles even in regions where the fields are zero. In 1964, Bell's theorem was published in a paper on the EPR paradox, showing that QM cannot be derived from local hidden-variable theories if locality is to be maintained.
=== The Lamb shift ===
In 1947, the Lamb shift was discovered: a small difference in the 2S1⁄2 and 2P1⁄2 levels of hydrogen, due to the interaction between the electron and the vacuum. Lamb and Retherford experimentally measured stimulated radio-frequency transitions between the 2S1⁄2 and 2P1⁄2 hydrogen levels using microwave radiation. An explanation of the Lamb shift was presented by Bethe. Papers on the effect were published in the early 1950s.
=== Development of quantum electrodynamics ===
1927 Dirac establishes the field of QED, also coining the term "quantum electrodynamics".
1943 Tomonaga begins work on renormalization, influential in QED.
1947 Schwinger calculates the anomalous magnetic moment of the electron. Kusch measures the anomalous electron magnetic moment, confirming one of QED's great predictions.
== See also ==
== Footnotes ==
== References ==
=== Selected books ===
Dirac, P.A.M. (1981). Principles of Quantum Mechanics (4th ed.). Clarendon Press. ISBN 978-0-19-852011-5.
Dirac, P.A.M. (1964). Lectures on Quantum Mechanics. Courier Dover Publications. ISBN 978-0-486-41713-4.
Thaller, B. (2010). The Dirac Equation. Springer. ISBN 978-3-642-08134-7.
Pauli, W. (1980). General Principles of Quantum Mechanics. Springer. ISBN 978-3-540-09842-3.
Merzbacher, E. (1998). Quantum Mechanics (3rd ed.). John Wiley & Sons. ISBN 978-0-471-88702-7.
Messiah, A. (1961). Quantum Mechanics. Vol. 1. John Wiley & Sons. ISBN 978-0-471-59766-7.
Bjorken, J.D.; Drell, S.D. (1964). Relativistic Quantum Mechanics (Pure & Applied Physics). McGraw-Hill. ISBN 978-0-07-005493-6.
Feynman, R.P.; Leighton, R.B.; Sands, M. (1965). Feynman Lectures on Physics. Vol. 3. Addison-Wesley. ISBN 978-0-201-02118-9.
Schiff, L.I. (1968). Quantum Mechanics (3rd ed.). McGraw-Hill.
Dyson, F. (2011). Advanced Quantum Mechanics (2nd ed.). World Scientific. ISBN 978-981-4383-40-0.
Clifton, R.K. (2011). Perspectives on Quantum Reality: Non-Relativistic, Relativistic, and Field-Theoretic. Springer. ISBN 978-90-481-4643-7.
Tannoudji, C.; Diu, B.; Laloë, F. (1977). Quantum Mechanics. Vol. 1. Wiley VCH. ISBN 978-0-471-16433-3.
Tannoudji, C.; Diu, B.; Laloë, F. (1977). Quantum Mechanics. Vol. 2. Wiley VCH. ISBN 978-0-471-16435-7.
Rae, A.I.M. (2008). Quantum Mechanics. Vol. 2 (5th ed.). Taylor & Francis. ISBN 978-1-58488-970-0.
Pilkuhn, H. (2005). Relativistic Quantum Mechanics. Texts and Monographs in Physics Series (2nd ed.). Springer. ISBN 978-3-540-28522-9.
Parthasarathy, R. (2010). Relativistic quantum mechanics. Alpha Science International. ISBN 978-1-84265-573-3.
Kaldor, U.; Wilson, S. (2003). Theoretical Chemistry and Physics of Heavy and Superheavy Elements. Springer. ISBN 978-1-4020-1371-3.
Thaller, B. (2005). Advanced visual quantum mechanics. Springer. Bibcode:2005avqm.book.....T. ISBN 978-0-387-27127-9.
Breuer, H.P.; Petruccione, F. (2000). Relativistic Quantum Measurement and Decoherence. Springer. ISBN 978-3-540-41061-4.
Shepherd, P.J. (2013). A Course in Theoretical Physics. John Wiley & Sons. ISBN 978-1-118-51692-8.
Bethe, H.A.; Jackiw, R.W. (1997). Intermediate Quantum Mechanics. Addison-Wesley. ISBN 978-0-201-32831-8.
Heitler, W. (1954). The Quantum Theory of Radiation (3rd ed.). Courier Dover Publications. ISBN 978-0-486-64558-2.
Gottfried, K.; Yan, T. (2003). Quantum Mechanics: Fundamentals (2nd ed.). Springer. p. 245. ISBN 978-0-387-95576-6.
Schwabl, F. (2010). Quantum Mechanics. Springer. p. 220. ISBN 978-3-540-71933-5.
Sachs, R.G. (1987). The Physics of Time Reversal (2nd ed.). University of Chicago Press. p. 280. ISBN 978-0-226-73331-9.
=== Group theory in quantum physics ===
Weyl, H. (1950). The theory of groups and quantum mechanics. Courier Dover Publications. p. 203. ISBN 9780486602691.
Tung, W.K. (1985). Group Theory in Physics. World Scientific. ISBN 978-9971-966-56-0.
Heine, V. (1993). Group Theory in Quantum Mechanics: An Introduction to Its Present Usage. Courier Dover Publications. ISBN 978-0-486-67585-5.
=== Selected papers ===
Dirac, P.A.M. (1932). "Relativistic Quantum Mechanics". Proceedings of the Royal Society A. 136 (829): 453–464. Bibcode:1932RSPSA.136..453D. doi:10.1098/rspa.1932.0094.
Pauli, W. (1945). "Exclusion principle and quantum mechanics" (PDF).
Antoine, J.P. (2004). "Relativistic Quantum Mechanics". J. Phys. A. 37 (4): 1465. Bibcode:2004JPhA...37.1463P. CiteSeerX 10.1.1.499.2793. doi:10.1088/0305-4470/37/4/B01.
Henneaux, M.; Teitelboim, C. (1982). "Relativistic quantum mechanics of supersymmetric particles". Vol. 143.
Fanchi, J.R. (1986). "Parametrizing relativistic quantum mechanics". Phys. Rev. A. 34 (3): 1677–1681. Bibcode:1986PhRvA..34.1677F. doi:10.1103/PhysRevA.34.1677. PMID 9897446.
Ord, G.N. (1983). "Fractal space-time: a geometric analogue of relativistic quantum mechanics". J. Phys. A. 16 (9): 1869–1884. Bibcode:1983JPhA...16.1869O. doi:10.1088/0305-4470/16/9/012.
Coester, F.; Polyzou, W.N. (1982). "Relativistic quantum mechanics of particles with direct interactions". Phys. Rev. D. 26 (6): 1348–1367. Bibcode:1982PhRvD..26.1348C. doi:10.1103/PhysRevD.26.1348.
Mann, R.B.; Ralph, T.C. (2012). "Relativistic quantum information". Class. Quantum Grav. 29 (22): 220301. Bibcode:2012CQGra..29v0301M. doi:10.1088/0264-9381/29/22/220301. S2CID 123341332.
Low, S.G. (1997). "Canonically Relativistic Quantum Mechanics: Representations of the Unitary Semidirect Heisenberg Group, U(1,3) *s H(1,3)". J. Math. Phys. 38: 2197–2209. arXiv:physics/9703008.
Fronsdal, C.; Lundberg, L.E. (1970). "Relativistic Quantum Mechanics of Two Interacting Particles". Phys. Rev. D. 1 (12): 3247–3258. Bibcode:1970PhRvD...1.3247F. doi:10.1103/PhysRevD.1.3247.
Bordovitsyn, V.A.; Myagkii, A.N. (2004). "Spin–orbital motion and Thomas precession in the classical and quantum theories" (PDF). American Journal of Physics. 72 (1): 51–52. arXiv:physics/0310016. Bibcode:2004AmJPh..72...51K. doi:10.1119/1.1615526. S2CID 119533324.
Rȩbilas, K. (2013). "Comment on 'Elementary analysis of the special relativistic combination of velocities, Wigner rotation and Thomas precession'". Eur. J. Phys. 34 (3): L55 – L61. Bibcode:2013EJPh...34L..55R. doi:10.1088/0143-0807/34/3/L55. S2CID 122527454.
Corben, H.C. (1993). "Factors of 2 in magnetic moments, spin–orbit coupling, and Thomas precession". Am. J. Phys. 61 (6): 551. Bibcode:1993AmJPh..61..551C. doi:10.1119/1.17207.
== Further reading ==
=== Relativistic quantum mechanics and field theory ===
Ohlsson, T. (2011). Relativistic Quantum Physics: From Advanced Quantum Mechanics to Introductory Quantum Field Theory. Cambridge University Press. p. 10. ISBN 978-1-139-50432-4.
Aitchison, I.J.R.; Hey, A.J.G. (2002). Gauge Theories in Particle Physics: From Relativistic Quantum Mechanics to QED. Vol. 1 (3rd ed.). CRC Press. ISBN 978-0-8493-8775-3.
Griffiths, D. (2008). Introduction to Elementary Particles. John Wiley & Sons. ISBN 978-3-527-61847-7.
Capri, Anton Z. (2002). Relativistic quantum mechanics and introduction to quantum field theory. World Scientific. Bibcode:2002rqmi.book.....C. ISBN 978-981-238-137-8.
Wu, Ta-you; Hwang, W.Y. Pauchy (1991). Relativistic quantum mechanics and quantum fields. World Scientific. ISBN 978-981-02-0608-6.
Nagashima, Y. (2010). Elementary particle physics, Quantum Field Theory. Vol. 1. ISBN 978-3-527-40962-4.
Bjorken, J.D.; Drell, S.D. (1965). Relativistic Quantum Fields (Pure & Applied Physics). McGraw-Hill. ISBN 978-0-07-005494-3.
Weinberg, S. (1996). The Quantum Theory of Fields. Vol. 2. Cambridge University Press. ISBN 978-0-521-55002-4.
Weinberg, S. (2000). The Quantum Theory of Fields. Vol. 3. Cambridge University Press. ISBN 978-0-521-66000-6.
Gross, F. (2008). Relativistic Quantum Mechanics and Field Theory. John Wiley & Sons. ISBN 978-3-527-61734-0.
Nazarov, Y.V.; Danon, J. (2013). Advanced Quantum Mechanics: A Practical Guide. Cambridge University Press. ISBN 978-0-521-76150-5.
Bogolubov, N.N. (1989). General Principles of Quantum Field Theory (2nd ed.). Springer. p. 272. ISBN 978-0-7923-0540-8.
Mandl, F.; Shaw, G. (2010). Quantum Field Theory (2nd ed.). John Wiley & Sons. ISBN 978-0-471-49683-0.
Lindgren, I. (2011). Relativistic Many-body Theory: A New Field-theoretical Approach. Springer series on atomic, optical, and plasma physics. Vol. 63. Springer. ISBN 978-1-4419-8309-1.
Grant, I.P. (2007). Relativistic Quantum theory of atoms and molecules. Atomic, optical, and plasma physics. Springer. ISBN 978-0-387-34671-7.
=== Quantum theory and applications in general ===
Aruldhas, G.; Rajagopal, P. (2005). Modern Physics. PHI Learning Pvt. Ltd. p. 395. ISBN 978-81-203-2597-5.
Hummel, R.E. (2011). Electronic properties of materials. Springer. p. 395. ISBN 978-1-4419-8164-6.
Pavia, D.L. (2005). Introduction to Spectroscopy (4th ed.). Cengage Learning. p. 105. ISBN 978-0-495-11478-9.
Mizutani, U. (2001). Introduction to the Electron Theory of Metals. Cambridge University Press. p. 387. ISBN 978-0-521-58709-9.
Choppin, G.R. (2002). Radiochemistry and nuclear chemistry (3 ed.). Butterworth-Heinemann. p. 308. ISBN 978-0-7506-7463-8.
Sitenko, A.G. (1990). Theory of nuclear reactions. World Scientific. p. 443. ISBN 978-9971-5-0482-3.
Nolting, W.; Ramakanth, A. (2008). Quantum theory of magnetism. Springer. ISBN 978-3-540-85416-6.
Luth, H. (2013). Quantum Physics in the Nanoworld. Graduate texts in physics. Springer. p. 149. ISBN 978-3-642-31238-0.
Sattler, K.D. (2010). Handbook of Nanophysics: Functional Nanomaterials. CRC Press. pp. 40–43. ISBN 978-1-4200-7553-3.
Kuzmany, H. (2009). Solid-State Spectroscopy. Springer. p. 256. ISBN 978-3-642-01480-2.
Reid, J.M. (1984). The Atomic Nucleus (2nd ed.). Manchester University Press. ISBN 978-0-7190-0978-5.
Schwerdtfeger, P. (2002). Relativistic Electronic Structure Theory - Fundamentals. Theoretical and Computational Chemistry. Vol. 11. Elsevier. p. 208. ISBN 978-0-08-054046-7.
Piela, L. (2006). Ideas of Quantum Chemistry. Elsevier. p. 676. ISBN 978-0-08-046676-7.
Kumar, M. (2009). Quantum (book). ISBN 978-1-84831-035-3.
== External links ==
Maiani, Luciano; Benhar, Omar (2024). Relativistic Quantum Mechanics: An Introduction to Relativistic Quantum Fields. Taylor & Francis. ISBN 9781003436263.
Pfeifer, W. (2008) [2004]. Relativistic Quantum Mechanics, an Introduction.
Lukačević, Igor (2013). "Relativistic Quantum Mechanics (Lecture Notes)" (PDF). Archived from the original (PDF) on 2014-08-26.
"Relativistic Quantum Mechanics" (PDF). Cavendish Laboratory. University of Cambridge.
Miller, David J. (2008). "Relativistic Quantum Mechanics" (PDF). University of Glasgow. Archived from the original (PDF) on 2020-12-19. Retrieved 2020-11-17.
In quantum physics, unitarity is (or a unitary process has) the condition that the time evolution of a quantum state according to the Schrödinger equation is mathematically represented by a unitary operator. This is typically taken as an axiom or basic postulate of quantum mechanics, while generalizations of or departures from unitarity are part of speculations about theories that may go beyond quantum mechanics. A unitarity bound is any inequality that follows from the unitarity of the evolution operator, i.e. from the statement that time evolution preserves inner products in Hilbert space.
== Hamiltonian evolution ==
Time evolution described by a time-independent Hamiltonian is represented by a one-parameter family of unitary operators, for which the Hamiltonian is a generator:
{\displaystyle U(t)=e^{-i{\hat {H}}t/\hbar }}.
In the Schrödinger picture, the unitary operators are taken to act upon the system's quantum state, whereas in the Heisenberg picture, the time dependence is incorporated into the observables instead.
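A minimal numerical illustration of this (NumPy; the Hamiltonian is a random Hermitian matrix, with ħ = 1): the operator U(t) = e^(−iĤt), built here from the spectral decomposition of Ĥ, is unitary and therefore preserves the norm of any state.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2             # random Hermitian "Hamiltonian" (hbar = 1)

t = 0.7
w, V = np.linalg.eigh(H)             # spectral decomposition H = V diag(w) V†
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T   # U(t) = exp(-i H t)

# U is unitary: U† U = I, so inner products (and probabilities) are preserved
print(np.allclose(U.conj().T @ U, np.eye(4)))       # True

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
print(np.isclose(np.linalg.norm(U @ psi), np.linalg.norm(psi)))  # True
```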
=== Implications of unitarity on measurement results ===
In quantum mechanics, every state is described as a vector in Hilbert space. When a measurement is performed, it is convenient to describe this space using a vector basis in which every basis vector has a defined result of the measurement – e.g., a vector basis of defined momentum in case momentum is measured. The measurement operator is diagonal in this basis.
The probability to get a particular measured result depends on the probability amplitude, given by the inner product of the physical state {\displaystyle |\psi \rangle } with the basis vectors {\displaystyle \{|\phi _{i}\rangle \}} that diagonalize the measurement operator. For a physical state that is measured after it has evolved in time, the probability amplitude can be described either by the inner product of the physical state after time evolution with the relevant basis vectors, or equivalently by the inner product of the physical state with the basis vectors that are evolved backwards in time. Using the time evolution operator {\displaystyle e^{-i{\hat {H}}t/\hbar }}, we have:
{\displaystyle \left\langle \phi _{i}\left|e^{-i{\hat {H}}t/\hbar }\psi \right.\right\rangle =\left\langle \left.e^{-i{\hat {H}}(-t)/\hbar }\phi _{i}\right|\psi \right\rangle }
But by definition of Hermitian conjugation, this is also:
{\displaystyle \left\langle \phi _{i}\left|e^{-i{\hat {H}}t/\hbar }\psi \right.\right\rangle =\left\langle \left.\phi _{i}\left(e^{-i{\hat {H}}t/\hbar }\right)^{\dagger }\right|\psi \right\rangle =\left\langle \left.\phi _{i}e^{-i{\hat {H}}^{\dagger }(-t)/\hbar }\right|\psi \right\rangle }
Since these equalities are true for every two vectors, we get
{\displaystyle {\hat {H}}^{\dagger }={\hat {H}}}
This means that the Hamiltonian is Hermitian and the time evolution operator {\displaystyle e^{-i{\hat {H}}t/\hbar }} is unitary.
Since by the Born rule the norm determines the probability to get a particular result in a measurement, unitarity together with the Born rule guarantees the sum of probabilities is always one. Furthermore, unitarity together with the Born rule implies that the measurement operators in Heisenberg picture indeed describe how the measurement results are expected to evolve in time.
=== Implications on the form of the Hamiltonian ===
That the time evolution operator is unitary is equivalent to the Hamiltonian being Hermitian. Equivalently, this means that the possible measured energies, which are the eigenvalues of the Hamiltonian, are always real numbers.
== Scattering amplitude and the optical theorem ==
The S-matrix is used to describe how the physical system changes in a scattering process. It is in fact equal to the time evolution operator over a very long time (approaching infinity) acting on momentum states of particles (or bound complex of particles) at infinity. Thus it must be a unitary operator as well; a calculation yielding a non-unitary S-matrix often implies a bound state has been overlooked.
=== Optical theorem ===
Unitarity of the S-matrix implies, among other things, the optical theorem. This can be seen as follows:
The S-matrix can be written as:
{\displaystyle S=1+iT}
where {\displaystyle T} is the part of the S-matrix that is due to interactions; e.g. {\displaystyle T=0} implies the S-matrix is 1, no interactions occur, and all states remain unchanged.
Unitarity of the S-matrix:
{\displaystyle S^{\dagger }S=1}
is then equivalent to:
{\displaystyle -i\left(T-T^{\dagger }\right)=T^{\dagger }T}
The left-hand side is twice the imaginary part of the S-matrix. In order to see what the right-hand side is, let us look at any specific element of this matrix, e.g. between some initial state {\displaystyle |I\rangle } and final state {\displaystyle \langle F|}, each of which may include many particles. The matrix element is then:
{\displaystyle \left\langle F\left|T^{\dagger }T\right|I\right\rangle =\sum _{i}\left\langle F|T^{\dagger }|A_{i}\right\rangle \left\langle A_{i}|T|I\right\rangle }
where {Ai} is the set of possible on-shell states - i.e. momentum states of particles (or bound complex of particles) at infinity.
Thus, twice the imaginary part of the S-matrix is equal to a sum of products of contributions from all the scatterings of the initial state of the S-matrix into any other physical state at infinity, with the scatterings of the latter into the final state of the S-matrix. Since the imaginary part of the S-matrix can be calculated by virtual particles appearing in intermediate states of the Feynman diagrams, it follows that these virtual particles must only consist of real particles that may also appear as final states. The mathematical machinery which is used to ensure this includes gauge symmetry and sometimes also Faddeev–Popov ghosts.
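The unitarity relation −i(T − T†) = T†T can be verified numerically for a finite-dimensional toy S-matrix (a sketch; S is generated from a random Hermitian matrix, so it is unitary by construction):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
H = (A + A.conj().T) / 2                  # Hermitian generator

# unitary toy S-matrix via eigendecomposition: S = exp(iH)
w, V = np.linalg.eigh(H)
S = V @ np.diag(np.exp(1j * w)) @ V.conj().T

T = (S - np.eye(5)) / 1j                  # from S = 1 + iT
lhs = -1j * (T - T.conj().T)
rhs = T.conj().T @ T
print(np.allclose(lhs, rhs))              # True
```

Expanding S†S = (1 − iT†)(1 + iT) = 1 shows algebraically why the check passes: the cross terms i(T − T†) must cancel against T†T.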
=== Unitarity bounds ===
According to the optical theorem, the probability amplitude M (= iT) for any scattering process must obey
{\displaystyle |M|^{2}=2\operatorname {Im} (M)}
Similar unitarity bounds imply that amplitudes and cross sections cannot increase too much with energy, or must decrease as quickly as a certain formula dictates. For example, the Froissart bound says that the total cross section of two-particle scattering is bounded by
{\displaystyle c\ln ^{2}s}, where {\displaystyle c} is a constant, and {\displaystyle s} is the square of the center-of-mass energy (see Mandelstam variables).
== See also ==
Antiunitary operator
the Born rule
Probability axioms
Quantum channel
Stone's theorem on one-parameter unitary groups
Wigner's theorem
== References ==
Physics Physique Физика, also known as various punctuations of Physics, Physique, Fizika, and as Physics for short, was a scientific journal published from 1964 through 1968. Founded by Philip Warren Anderson and Bernd T. Matthias, who were inspired by wide-circulation literary magazines like Harper's, the journal's original goal was to print papers of interest to scientists in all branches of physics. It is best known for publishing John Stewart Bell's paper on the result now known as Bell's theorem. Failing to attract sufficient interest as an unspecialized journal, Physics Physique Физика soon focused on solid-state physics before folding altogether in 1968. The four volumes of this journal were eventually made freely available online by the American Physical Society.
Bell chose to publish his theorem in this journal because it did not require page charges, and at the time it in fact paid the authors who published there. Because the journal did not provide free reprints of articles for the authors to distribute, however, Bell had to spend the money he received to buy copies that he could send to other physicists. While the articles printed in the journal themselves listed the publication's name simply as Physics, the covers carried the trilingual version Physics Physique Физика to reflect that it would print articles in English, French and Russian. In 1967, the unusual title caught the attention of John Clauser, who then discovered Bell's paper and began to consider how to perform a Bell test experiment in the laboratory. Clauser and Stuart Freedman would go on to perform a Bell test experiment in 1972.
== Selected publications ==
The following are among the most highly cited articles published in the journal during its four-year time span.
Susskind, Leonard; Glogower, Jonathan (1964-07-01). "Quantum mechanical phase and time operator". Physics Physique Fizika. 1 (1): 49–61. doi:10.1103/PhysicsPhysiqueFizika.1.49.
Gell-Mann, Murray (1964-07-01). "The symmetry group of vector and axial vector currents". Physics Physique Fizika. 1 (1): 63–75. doi:10.1103/PhysicsPhysiqueFizika.1.63.
Bell, J. S. (1964-11-01). "On the Einstein Podolsky Rosen paradox". Physics Physique Fizika. 1 (3): 195–200. doi:10.1103/PhysicsPhysiqueFizika.1.195.
Kadanoff, Leo P. (1966-06-01). "Scaling laws for Ising models near Tc". Physics Physique Fizika. 2 (6): 263–272. doi:10.1103/PhysicsPhysiqueFizika.2.263.
Fisher, Michael E. (1967-10-01). "The theory of condensation and the critical point". Physics Physique Fizika. 3 (5): 255–283. doi:10.1103/PhysicsPhysiqueFizika.3.255.
== See also ==
Epistemological Letters
== References ==
== External links ==
Full text of Physics Physique Физика from the American Physical Society
Zero-point energy (ZPE) is the lowest possible energy that a quantum mechanical system may have. Unlike in classical mechanics, quantum systems constantly fluctuate in their lowest energy state as described by the Heisenberg uncertainty principle. Therefore, even at absolute zero, atoms and molecules retain some vibrational motion. Apart from atoms and molecules, the empty space of the vacuum also has these properties. According to quantum field theory, the universe can be thought of not as isolated particles but continuous fluctuating fields: matter fields, whose quanta are fermions (i.e., leptons and quarks), and force fields, whose quanta are bosons (e.g., photons and gluons). All these fields have zero-point energy. These fluctuating zero-point fields lead to a kind of reintroduction of an aether in physics since some systems can detect the existence of this energy. However, this aether cannot be thought of as a physical medium if it is to be Lorentz invariant such that there is no contradiction with Albert Einstein’s theory of special relativity.
The notion of a zero-point energy is also important for cosmology, and physics currently lacks a full theoretical model for understanding zero-point energy in this context; in particular, the discrepancy between theorized and observed vacuum energy in the universe is a source of major contention. Yet according to Einstein's theory of general relativity, any such energy would gravitate, and the experimental evidence from the expansion of the universe, dark energy and the Casimir effect shows any such energy to be exceptionally weak. One proposal that attempts to address this issue is to say that the fermion field has a negative zero-point energy, while the boson field has positive zero-point energy and thus these energies somehow cancel out each other. This idea would be true if supersymmetry were an exact symmetry of nature; however, the Large Hadron Collider at CERN has so far found no evidence to support it. Moreover, it is known that if supersymmetry is valid at all, it is at most a broken symmetry, only true at very high energies, and no one has been able to show a theory where zero-point cancellations occur in the low-energy universe we observe today. This discrepancy is known as the cosmological constant problem and it is one of the greatest unsolved mysteries in physics. Many physicists believe that "the vacuum holds the key to a full understanding of nature".
== Etymology and terminology ==
The term zero-point energy (ZPE) is a translation from the German Nullpunktsenergie. Sometimes used interchangeably with it are the terms zero-point radiation and ground state energy. The term zero-point field (ZPF) can be used when referring to a specific vacuum field, for instance the QED vacuum which specifically deals with quantum electrodynamics (e.g., electromagnetic interactions between photons, electrons and the vacuum) or the QCD vacuum which deals with quantum chromodynamics (e.g., color charge interactions between quarks, gluons and the vacuum). A vacuum can be viewed not as empty space but as the combination of all zero-point fields. In quantum field theory this combination of fields is called the vacuum state, and its associated zero-point energy is called the vacuum energy.
== Overview ==
In classical mechanics all particles can be thought of as having some energy made up of their potential energy and kinetic energy. Temperature, for example, arises from the intensity of random particle motion caused by kinetic energy (known as Brownian motion). As temperature is reduced to absolute zero, it might be thought that all motion ceases and particles come completely to rest. In fact, however, kinetic energy is retained by particles even at the lowest possible temperature. The random motion corresponding to this zero-point energy never vanishes; it is a consequence of the uncertainty principle of quantum mechanics.
The uncertainty principle states that no object can ever have precise values of position and velocity simultaneously. The total energy of a quantum mechanical object (potential and kinetic) is described by its Hamiltonian, which also describes the system as a harmonic oscillator, or wave function, that fluctuates between various energy states (see wave-particle duality). All quantum mechanical systems undergo fluctuations even in their ground state, a consequence of their wave-like nature. The uncertainty principle requires every quantum mechanical system to have a fluctuating zero-point energy greater than the minimum of its classical potential well. This results in motion even at absolute zero. For example, liquid helium does not freeze under atmospheric pressure regardless of temperature due to its zero-point energy.
Given the equivalence of mass and energy expressed by Albert Einstein's E = mc2, any point in space that contains energy can be thought of as having mass to create particles. Modern physics has developed quantum field theory (QFT) to understand the fundamental interactions between matter and forces; it treats every single point of space as a quantum harmonic oscillator. According to QFT the universe is made up of matter fields, whose quanta are fermions (i.e. leptons and quarks), and force fields, whose quanta are bosons (e.g. photons and gluons). All these fields have zero-point energy. Recent experiments support the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum, and that all properties of matter are merely vacuum fluctuations arising from interactions of the zero-point field.
The idea that "empty" space can have an intrinsic energy associated with it, and that there is no such thing as a "true vacuum", is seemingly unintuitive. It is often argued that the entire universe is completely bathed in the zero-point radiation, and as such it can add only some constant amount to calculations. Physical measurements will therefore reveal only deviations from this value. For many practical calculations zero-point energy is dismissed by fiat in the mathematical model as a term that has no physical effect. Such treatment causes problems, however, as in Einstein's theory of general relativity the absolute energy value of space is not an arbitrary constant and gives rise to the cosmological constant. For decades most physicists assumed that there was some undiscovered fundamental principle that would remove the infinite zero-point energy (discussed further below) and make it completely vanish. If the vacuum had no intrinsic, absolute value of energy it would not gravitate. It was believed that as the universe expands from the aftermath of the Big Bang, the energy contained in any unit of empty space would decrease as the total energy spread out to fill the volume of the universe; galaxies and all matter in the universe would begin to decelerate. This possibility was ruled out in 1998 by the discovery that the expansion of the universe is not slowing down but is in fact accelerating, meaning empty space does indeed have some intrinsic energy. The discovery of dark energy is best explained by zero-point energy, though it still remains a mystery as to why the value appears to be so small compared to the huge value obtained through theory – the cosmological constant problem.
Many physical effects attributed to zero-point energy have been experimentally verified, such as spontaneous emission, Casimir force, Lamb shift, magnetic moment of the electron and Delbrück scattering. These effects are usually called "radiative corrections". In more complex nonlinear theories (e.g. QCD) zero-point energy can give rise to a variety of complex phenomena such as multiple stable states, symmetry breaking, chaos and emergence. Active areas of research include the effects of virtual particles, quantum entanglement, the difference (if any) between inertial and gravitational mass, variation in the speed of light, a reason for the observed value of the cosmological constant and the nature of dark energy.
== History ==
=== Early aether theories ===
Zero-point energy evolved from historical ideas about the vacuum. To Aristotle the vacuum was τὸ κενόν, "the empty"; i.e., space independent of body. He believed this concept violated basic physical principles and asserted that the elements of fire, air, earth, and water were not made of atoms, but were continuous. To the atomists the concept of emptiness had absolute character: it was the distinction between existence and nonexistence. Debate about the characteristics of the vacuum was largely confined to the realm of philosophy. It was not until much later, with the beginning of the Renaissance, that Otto von Guericke invented the first vacuum pump and the first testable scientific ideas began to emerge. It was thought that a totally empty volume of space could be created by simply removing all gases. This was the first generally accepted concept of the vacuum.
Late in the 19th century, however, it became apparent that the evacuated region still contained thermal radiation. The existence of the aether as a substitute for a true void was the most prevalent theory of the time. According to the successful electromagnetic aether theory based upon Maxwell's electrodynamics, this all-encompassing aether was endowed with energy and hence very different from nothingness. The fact that electromagnetic and gravitational phenomena were transmitted in empty space was considered evidence that their associated aethers were part of the fabric of space itself. However Maxwell noted that for the most part these aethers were ad hoc:
To those who maintained the existence of a plenum as a philosophical principle, nature's abhorrence of a vacuum was a sufficient reason for imagining an all-surrounding aether ... Aethers were invented for the planets to swim in, to constitute electric atmospheres and magnetic effluvia, to convey sensations from one part of our bodies to another, and so on, till a space had been filled three or four times with aethers.
Moreover, the results of the Michelson–Morley experiment in 1887 were the first strong evidence that the then-prevalent aether theories were seriously flawed, and initiated a line of research that eventually led to special relativity, which ruled out the idea of a stationary aether altogether. To scientists of the period, it seemed that a true vacuum in space might be created by cooling and thus eliminating all radiation or energy. From this idea evolved the second concept of achieving a real vacuum: cool a region of space down to absolute zero temperature after evacuation. Absolute zero was technically impossible to achieve in the 19th century, so the debate remained unresolved.
=== Second quantum theory ===
In 1900, Max Planck derived the average energy ε of a single energy radiator, e.g., a vibrating atomic unit, as a function of absolute temperature:
{\displaystyle \varepsilon ={\frac {h\nu }{e^{h\nu /(kT)}-1}}\,,}
where h is the Planck constant, ν is the frequency, k is the Boltzmann constant, and T is the absolute temperature. The zero-point energy makes no contribution to Planck's original law, as its existence was unknown to Planck in 1900.
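Planck's 1900 law can be checked numerically. The short Python sketch below (an illustration using CODATA SI constants) confirms that the average energy approaches the classical equipartition value kT when hν ≪ kT, and is exponentially suppressed in the opposite limit:

```python
import math

h = 6.62607015e-34   # Planck constant, J·s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck_mean_energy(nu, T):
    """Planck's 1900 average energy h*nu / (exp(h*nu/(k*T)) - 1)."""
    # math.expm1 avoids loss of precision when h*nu << k*T
    return h * nu / math.expm1(h * nu / (k * T))

# High-temperature limit: for a 1 THz resonator at 10,000 K the average
# energy is within 1% of the classical equipartition value k*T.
ratio = planck_mean_energy(1e12, 1e4) / (k * 1e4)
```

At the other extreme (the same resonator at 1 K), the average energy is suppressed by a factor of roughly e⁻⁴⁸ and is negligible, in accordance with the correspondence principle.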
The concept of zero-point energy was developed by Max Planck in Germany in 1911 as a corrective term added to a zero-grounded formula developed in his original quantum theory in 1900.
In 1912, Max Planck published the first journal article to describe the discontinuous emission of radiation, based on the discrete quanta of energy. In Planck's "second quantum theory" resonators absorbed energy continuously, but emitted energy in discrete energy quanta only when they reached the boundaries of finite cells in phase space, where their energies became integer multiples of hν. This theory led Planck to his new radiation law, but in this version energy resonators possessed a zero-point energy, the smallest average energy a resonator could take on. Planck's radiation equation contained a residual energy factor, one hν/2, as an additional term dependent on the frequency ν, which was greater than zero (where h is the Planck constant). It is therefore widely agreed that "Planck's equation marked the birth of the concept of zero-point energy." In a series of papers from 1911 to 1913, Planck found the average energy of an oscillator to be:
{\displaystyle \varepsilon ={\frac {h\nu }{2}}+{\frac {h\nu }{e^{h\nu /(kT)}-1}}~.}
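The hν/2 term changes the low-temperature behaviour qualitatively. A brief Python sketch (illustrative frequency and temperature chosen for the example) contrasts the two laws near absolute zero:

```python
import math

h = 6.62607015e-34   # Planck constant, J·s
k = 1.380649e-23     # Boltzmann constant, J/K

def mean_energy_1900(nu, T):
    """Planck's original (1900) law: vanishes as T -> 0."""
    return h * nu / math.expm1(h * nu / (k * T))

def mean_energy_1912(nu, T):
    """Planck's second theory: keeps the residual h*nu/2 as T -> 0."""
    return h * nu / 2 + mean_energy_1900(nu, T)

# A 100 THz oscillator at 100 K, well below h*nu/k of about 4800 K:
# the 1900 law gives essentially zero, while the 1912 law levels off
# at the zero-point value h*nu/2.
nu, T = 1e14, 100.0
zpe = h * nu / 2
```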
Soon, the idea of zero-point energy attracted the attention of Albert Einstein and his assistant Otto Stern. In 1913 they published a paper that attempted to prove the existence of zero-point energy by calculating the specific heat of hydrogen gas and comparing it with the experimental data. However, although they initially believed they had succeeded, they retracted their support for the idea shortly after publication, because they found that Planck's second theory might not apply to their example. In a letter to Paul Ehrenfest of the same year Einstein declared zero-point energy "dead as a doornail". Zero-point energy was also invoked by Peter Debye, who noted that zero-point energy of the atoms of a crystal lattice would cause a reduction in the intensity of the diffracted radiation in X-ray diffraction even as the temperature approached absolute zero. In 1916 Walther Nernst proposed that empty space was filled with zero-point electromagnetic radiation. With the development of general relativity Einstein found the energy density of the vacuum to contribute towards a cosmological constant in order to obtain static solutions to his field equations; the idea that empty space, or the vacuum, could have some intrinsic energy associated with it had returned, with Einstein stating in 1920:
There is a weighty argument to be adduced in favour of the aether hypothesis. To deny the aether is ultimately to assume that empty space has no physical qualities whatever. The fundamental facts of mechanics do not harmonize with this view ... according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an aether. According to the general theory of relativity space without aether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this aether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it.
Kurt Bennewitz and Francis Simon (1923), who worked at Walther Nernst's laboratory in Berlin, studied the melting process of chemicals at low temperatures. Their calculations of the melting points of hydrogen, argon and mercury led them to conclude that the results provided evidence for a zero-point energy. Moreover, they suggested correctly, as was later verified by Simon (1934), that this quantity was responsible for the difficulty in solidifying helium even at absolute zero. In 1924 Robert Mulliken provided direct evidence for the zero-point energy of molecular vibrations by comparing the band spectrum of 10BO and 11BO: the isotopic difference in the transition frequencies between the ground vibrational states of two different electronic levels would vanish if there were no zero-point energy, in contrast to the observed spectra. Then just a year later in 1925, with the development of matrix mechanics in Werner Heisenberg's article "Quantum theoretical re-interpretation of kinematic and mechanical relations" the zero-point energy was derived from quantum mechanics.
In 1913 Niels Bohr had proposed what is now called the Bohr model of the atom, but despite this it remained a mystery as to why electrons do not fall into their nuclei. According to classical ideas, the fact that an accelerating charge loses energy by radiating implied that an electron should spiral into the nucleus and that atoms should not be stable. This problem of classical mechanics was nicely summarized by James Hopwood Jeans in 1915: "There would be a very real difficulty in supposing that the (force) law 1/r2 held down to the zero values of r. For the force between two charges at zero distance would be infinite; we should have charges of opposite sign continually rushing together and, when once together, no force would be adequate to separate them. [...] Thus the matter in the universe would tend to shrink into nothing or to diminish indefinitely in size." The resolution to this puzzle came in 1926 when Erwin Schrödinger introduced the Schrödinger equation. This equation explained the new, non-classical fact that an electron confined to be close to a nucleus would necessarily have a large kinetic energy so that the minimum total energy (kinetic plus potential) actually occurs at some positive separation rather than at zero separation; in other words, zero-point energy is essential for atomic stability.
=== Quantum field theory and beyond ===
In 1926, Pascual Jordan published the first attempt to quantize the electromagnetic field. In a joint paper with Max Born and Werner Heisenberg he considered the field inside a cavity as a superposition of quantum harmonic oscillators. In his calculation he found that in addition to the "thermal energy" of the oscillators there also had to exist an infinite zero-point energy term. He was able to obtain the same fluctuation formula that Einstein had obtained in 1909. However, Jordan did not think that his infinite zero-point energy term was "real", writing to Einstein that "it is just a quantity of the calculation having no direct physical meaning". Jordan found a way to get rid of the infinite term, publishing a joint work with Pauli in 1928, performing what has been called "the first infinite subtraction, or renormalisation, in quantum field theory".
Building on the work of Heisenberg and others, Paul Dirac's theory of emission and absorption (1927) was the first application of the quantum theory of radiation. Dirac's work was seen as crucially important to the emerging field of quantum mechanics; it dealt directly with the process in which "particles" are actually created: spontaneous emission. Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. The theory showed that spontaneous emission depends upon the zero-point energy fluctuations of the electromagnetic field in order to get started. In a process in which a photon is annihilated (absorbed), the photon can be thought of as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. In the words of Dirac:
The light-quantum has the peculiarity that it apparently ceases to exist when it is in one of its stationary states, namely, the zero state, in which its momentum and therefore also its energy, are zero. When a light-quantum is absorbed it can be considered to jump into this zero state, and when one is emitted it can be considered to jump from the zero state to one in which it is physically in evidence, so that it appears to have been created. Since there is no limit to the number of light-quanta that may be created in this way, we must suppose that there are an infinite number of light quanta in the zero state ...
Contemporary physicists, when asked to give a physical explanation for spontaneous emission, generally invoke the zero-point energy of the electromagnetic field. This view was popularized by Victor Weisskopf who in 1935 wrote:
From quantum theory there follows the existence of so called zero-point oscillations; for example each oscillator in its lowest state is not completely at rest but always is moving about its equilibrium position. Therefore electromagnetic oscillations also can never cease completely. Thus the quantum nature of the electromagnetic field has as its consequence zero point oscillations of the field strength in the lowest energy state, in which there are no light quanta in space ... The zero point oscillations act on an electron in the same way as ordinary electrical oscillations do. They can change the eigenstate of the electron, but only in a transition to a state with the lowest energy, since empty space can only take away energy, and not give it up. In this way spontaneous radiation arises as a consequence of the existence of these unique field strengths corresponding to zero point oscillations. Thus spontaneous radiation is induced radiation of light quanta produced by zero point oscillations of empty space
This view was also later supported by Theodore Welton (1948), who argued that spontaneous emission "can be thought of as forced emission taking place under the action of the fluctuating field". This new theory, which Dirac coined quantum electrodynamics (QED), predicted a fluctuating zero-point or "vacuum" field existing even in the absence of sources.
Throughout the 1940s improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, now known as the Lamb shift, and of the magnetic moment of the electron. Discrepancies between these experiments and Dirac's theory led to the idea of incorporating renormalisation into QED to deal with zero-point infinities. Renormalisation was originally developed by Hans Kramers and also by Victor Weisskopf (1936), and first successfully applied to the calculation of a finite value for the Lamb shift by Hans Bethe (1947). As with spontaneous emission, these effects can in part be understood through interactions with the zero-point field. But given that renormalisation is able to remove some zero-point infinities from calculations, not all physicists were comfortable attributing any physical meaning to zero-point energy, viewing it instead as a mathematical artifact that might one day be eliminated. In Wolfgang Pauli's 1945 Nobel lecture he made clear his opposition to the idea of zero-point energy, stating "It is clear that this zero-point energy has no physical reality".
In 1948 Hendrik Casimir showed that one consequence of the zero-point field is an attractive force between two uncharged, perfectly conducting parallel plates, the so-called Casimir effect. At the time, Casimir was studying the properties of colloidal solutions. These are viscous materials, such as paint and mayonnaise, that contain micron-sized particles in a liquid matrix. The properties of such solutions are determined by Van der Waals forces – short-range, attractive forces that exist between neutral atoms and molecules. One of Casimir's colleagues, Theo Overbeek, realized that the theory that was used at the time to explain Van der Waals forces, which had been developed by Fritz London in 1930, did not properly explain the experimental measurements on colloids. Overbeek therefore asked Casimir to investigate the problem. Working with Dirk Polder, Casimir discovered that the interaction between two neutral molecules could be correctly described only if the fact that light travels at a finite speed was taken into account. Soon afterwards, following a conversation with Bohr about zero-point energy, Casimir noticed that this result could be interpreted in terms of vacuum fluctuations. He then asked himself what would happen if there were two mirrors – rather than two molecules – facing each other in a vacuum. It was this work that led to his prediction of an attractive force between reflecting plates. The work by Casimir and Polder opened up the way to a unified theory of van der Waals and Casimir forces and a smooth continuum between the two phenomena. This was done by Lifshitz (1956) in the case of plane parallel dielectric plates. The generic name for both van der Waals and Casimir forces is dispersion forces, because both of them are caused by dispersions of the operator of the dipole moment. The role of relativistic forces becomes dominant at separations on the order of a hundred nanometers.
In 1951 Herbert Callen and Theodore Welton proved the quantum fluctuation-dissipation theorem (FDT), which was originally formulated in classical form by Nyquist (1928) as an explanation for observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy, in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of the FDT is that the vacuum could be treated as a heat bath coupled to a dissipative force, and as such energy could, in part, be extracted from the vacuum for potentially useful work. The FDT has been shown to be true experimentally under certain quantum, non-classical, conditions.
In 1963 the Jaynes–Cummings model was developed describing the system of a two-level atom interacting with a quantized field mode (i.e. the vacuum) within an optical cavity. It gave nonintuitive predictions, such as that an atom's spontaneous emission could be driven by a field of effectively constant frequency (Rabi frequency). In the 1970s experiments were being performed to test aspects of quantum optics and showed that the rate of spontaneous emission of an atom could be controlled using reflecting surfaces. These results were at first regarded with suspicion in some quarters: it was argued that no modification of a spontaneous emission rate would be possible; after all, how can the emission of a photon be affected by an atom's environment when the atom can only "see" its environment by emitting a photon in the first place? These experiments gave rise to cavity quantum electrodynamics (CQED), the study of effects of mirrors and cavities on radiative corrections. Spontaneous emission can be suppressed (or "inhibited") or amplified. Amplification was first predicted by Purcell in 1946 (the Purcell effect) and has been experimentally verified. This phenomenon can be understood, partly, in terms of the action of the vacuum field on the atom.
== Uncertainty principle ==
Zero-point energy is fundamentally related to the Heisenberg uncertainty principle. Roughly speaking, the uncertainty principle states that complementary variables (such as a particle's position and momentum, or a field's value and derivative at a point in space) cannot simultaneously be specified precisely by any given quantum state. In particular, there cannot exist a state in which the system simply sits motionless at the bottom of its potential well, for then its position and momentum would both be completely determined to arbitrarily great precision. Therefore, the lowest-energy state (the ground state) of the system must have a distribution in position and momentum that satisfies the uncertainty principle, which implies its energy must be greater than the minimum of the potential well.
Near the bottom of a potential well, the Hamiltonian of a general system (the quantum-mechanical operator giving its energy) can be approximated as a quantum harmonic oscillator,
{\displaystyle {\hat {H}}=V_{0}+{\tfrac {1}{2}}k\left({\hat {x}}-x_{0}\right)^{2}+{\frac {1}{2m}}{\hat {p}}^{2}\,,}
where V0 is the minimum of the classical potential well.
The uncertainty principle tells us that
{\displaystyle {\sqrt {\left\langle \left({\hat {x}}-x_{0}\right)^{2}\right\rangle }}{\sqrt {\left\langle {\hat {p}}^{2}\right\rangle }}\geq {\frac {\hbar }{2}}\,,}
making the expectation values of the kinetic and potential terms above satisfy
{\displaystyle \left\langle {\tfrac {1}{2}}k\left({\hat {x}}-x_{0}\right)^{2}\right\rangle \left\langle {\frac {1}{2m}}{\hat {p}}^{2}\right\rangle \geq \left({\frac {\hbar }{4}}\right)^{2}{\frac {k}{m}}\,.}
The expectation value of the energy must therefore be at least
{\displaystyle \left\langle {\hat {H}}\right\rangle \geq V_{0}+{\frac {\hbar }{2}}{\sqrt {\frac {k}{m}}}=V_{0}+{\frac {\hbar \omega }{2}}}
where ω = √(k/m) is the angular frequency at which the system oscillates.
A more thorough treatment, showing that the energy of the ground state actually saturates this bound and is exactly E0 = V0 + ħω/2, requires solving for the ground state of the system.
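The saturation of this bound can also be seen variationally. In the sketch below (dimensionless units ħ = m = ω = 1 and V0 = 0, chosen purely for illustration), a Gaussian trial state of position variance s has ⟨p²⟩ = 1/(4s), so ⟨H⟩ = 1/(8s) + s/2; minimising over the width lands exactly on the zero-point energy ħω/2:

```python
# Gaussian trial state with position variance s (dimensionless units
# hbar = m = omega = 1, V0 = 0; an illustrative assumption):
# <p^2> = 1/(4 s), hence <H> = 1/(8 s) + s / 2.
def trial_energy(s):
    return 1.0 / (8.0 * s) + s / 2.0

# Scan trial widths; the minimum sits at s = 1/2 with <H> = 1/2,
# i.e. exactly hbar*omega/2 above the bottom of the well.
e_min = min(trial_energy(0.01 * i) for i in range(1, 500))
```

The Gaussian family contains the true ground state, which is why the scan saturates the uncertainty bound rather than merely approaching it.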
== Atomic physics ==
The idea of a quantum harmonic oscillator and its associated energy can apply to either an atom or a subatomic particle. In ordinary atomic physics, the zero-point energy is the energy associated with the ground state of the system. The professional physics literature tends to measure frequency, as denoted by ν above, using angular frequency, denoted with ω and defined by ω = 2πν. This leads to a convention of writing the Planck constant h with a bar through its top (ħ) to denote the quantity h/2π. In these terms, an example of zero-point energy is the above E = ħω/2 associated with the ground state of the quantum harmonic oscillator. In quantum mechanical terms, the zero-point energy is the expectation value of the Hamiltonian of the system in the ground state.
If more than one ground state exists, they are said to be degenerate. Many systems have degenerate ground states. Degeneracy occurs whenever there exists a unitary operator which acts non-trivially on a ground state and commutes with the Hamiltonian of the system.
According to the third law of thermodynamics, a system at absolute zero temperature exists in its ground state; thus, its entropy is determined by the degeneracy of the ground state. Many systems, such as a perfect crystal lattice, have a unique ground state and therefore have zero entropy at absolute zero. It is also possible for the highest excited state to have absolute zero temperature for systems that exhibit negative temperature.
The wave function of the ground state of a particle in a one-dimensional well is a half-period sine wave which goes to zero at the two edges of the well. The energy of the particle is given by:
{\displaystyle {\frac {h^{2}n^{2}}{8mL^{2}}}}
where h is the Planck constant, m is the mass of the particle, n is the energy state (n = 1 corresponds to the ground-state energy), and L is the width of the well.
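As a worked example (SI constants; the 1 nm well width is an arbitrary illustrative choice), the zero-point energy of an electron confined to a nanometre-scale well is far from negligible:

```python
h = 6.62607015e-34      # Planck constant, J·s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # joules per electronvolt

def box_energy(n, m, L):
    """E_n = h^2 n^2 / (8 m L^2) for a particle in a 1-D box of width L."""
    return h**2 * n**2 / (8 * m * L**2)

# Ground state (n = 1) of an electron in a 1 nm well: roughly 0.38 eV
# of zero-point energy remains, even at absolute zero.
E1 = box_energy(1, m_e, 1e-9)
```

Because the energy scales as 1/L², tighter confinement raises the zero-point energy, which is the same mechanism that stabilises atoms against collapse.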
== Quantum field theory ==
In quantum field theory (QFT), the fabric of "empty" space is visualized as consisting of fields, with the field at every point in space and time being a quantum harmonic oscillator, with neighboring oscillators interacting with each other. According to QFT the universe is made up of matter fields whose quanta are fermions (e.g. electrons and quarks), force fields whose quanta are bosons (e.g. photons and gluons) and a Higgs field whose quantum is the Higgs boson. The matter and force fields have zero-point energy. A related term is zero-point field (ZPF), which is the lowest energy state of a particular field. The vacuum can be viewed not as empty space, but as the combination of all zero-point fields.
In QFT the zero-point energy of the vacuum state is called the vacuum energy and the average expectation value of the Hamiltonian is called the vacuum expectation value (also called condensate or simply VEV). The QED vacuum is a part of the vacuum state which specifically deals with quantum electrodynamics (e.g. electromagnetic interactions between photons, electrons and the vacuum) and the QCD vacuum deals with quantum chromodynamics (e.g. color charge interactions between quarks, gluons and the vacuum). Recent experiments support the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum, and that all properties of matter are merely vacuum fluctuations arising from interactions with the zero-point field.
Each point in space makes a contribution of E = ħω/2, resulting in a calculation of infinite zero-point energy in any finite volume; this is one reason renormalization is needed to make sense of quantum field theories. In cosmology, the vacuum energy is one possible explanation for the cosmological constant and the source of dark energy.
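The divergence can be made concrete. Summing ħω/2 over the electromagnetic modes of a unit volume up to an angular-frequency cutoff Ω (standard mode-counting with two polarizations, mode density ω²/π²c³) gives an energy density ħΩ⁴/(8π²c³), which grows without bound as the cutoff is raised. A minimal sketch of the quartic growth:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J·s
c = 2.99792458e8        # speed of light, m/s

def zpe_density(cutoff):
    """Zero-point energy density (J/m^3): the integral of (hbar*omega/2)
    times the mode density omega^2 / (pi^2 c^3) up to the cutoff."""
    return hbar * cutoff**4 / (8 * math.pi**2 * c**3)

# Doubling the cutoff multiplies the energy density by 2**4 = 16;
# the sum has no finite limit as the cutoff is removed.
growth = zpe_density(2e15) / zpe_density(1e15)
```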
Scientists are not in agreement about how much energy is contained in the vacuum. Quantum mechanics suggests that the energy is very large, as Paul Dirac claimed it is: like a sea of energy. Other scientists specializing in general relativity require the energy to be small enough for the curvature of space to agree with observed astronomy. The Heisenberg uncertainty principle allows the energy to be as large as needed to promote quantum actions for a brief moment of time, even if the average energy is small enough to satisfy relativity and flat space. To cope with these disagreements, the vacuum energy is described as a virtual energy potential of positive and negative energy.
In quantum perturbation theory, it is sometimes said that the contribution of one-loop and multi-loop Feynman diagrams to elementary particle propagators is the contribution of vacuum fluctuations, or of the zero-point energy, to the particle masses.
=== Quantum electrodynamic vacuum ===
The oldest and best known quantized force field is the electromagnetic field. Maxwell's equations have been superseded by quantum electrodynamics (QED). By considering the zero-point energy that arises from QED it is possible to gain a characteristic understanding of zero-point energy that arises not just through electromagnetic interactions but in all quantum field theories.
==== Redefining the zero of energy ====
In the quantum theory of the electromagnetic field, classical wave amplitudes α and α* are replaced by operators a and a† that satisfy:
{\displaystyle \left[a,a^{\dagger }\right]=1}
The classical quantity |α|2 appearing in the classical expression for the energy of a field mode is replaced in quantum theory by the photon number operator a†a. The fact that:
{\displaystyle \left[a,a^{\dagger }a\right]\neq 1}
implies that quantum theory does not allow states of the radiation field for which the photon number and a field amplitude can be precisely defined, i.e., we cannot have simultaneous eigenstates for a†a and a. The reconciliation of wave and particle attributes of the field is accomplished via the association of a probability amplitude with a classical mode pattern. The calculation of field modes is an entirely classical problem, while the quantum properties of the field are carried by the mode "amplitudes" a† and a associated with these classical modes.
The zero-point energy of the field arises formally from the non-commutativity of a and a†. This is true for any harmonic oscillator: the zero-point energy ħω/2 appears when we write the Hamiltonian:
{\displaystyle {\begin{aligned}H_{cl}&={\frac {p^{2}}{2m}}+{\tfrac {1}{2}}m\omega ^{2}{q}^{2}\\&={\tfrac {1}{2}}\hbar \omega \left(aa^{\dagger }+a^{\dagger }a\right)\\&=\hbar \omega \left(a^{\dagger }a+{\tfrac {1}{2}}\right)\end{aligned}}}
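The zero-point term ħω/2 in this Hamiltonian can be seen directly in a truncated matrix representation of the ladder operators. A minimal Python sketch (the truncation dimension N is an arbitrary illustrative choice, and units are chosen so that ħω = 1):

```python
import numpy as np

# Truncated Fock-space representation of the ladder operators: a|n> = sqrt(n)|n-1>.
# N is an illustrative truncation dimension, not anything physical.
N = 12
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
adag = a.conj().T                            # creation operator

hbar_omega = 1.0  # work in units where hbar*omega = 1
H = hbar_omega * (adag @ a + 0.5 * np.eye(N))

# Eigenvalues come out as (n + 1/2) for n = 0..N-1, so the ground state
# carries the zero-point energy hbar*omega/2 rather than zero.
evals = np.sort(np.linalg.eigvalsh(H))
print(evals[:4])  # -> [0.5 1.5 2.5 3.5]
```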
It is often argued that the entire universe is completely bathed in the zero-point electromagnetic field, and as such it can add only some constant amount to expectation values. Physical measurements will therefore reveal only deviations from the vacuum state. Thus the zero-point energy can be dropped from the Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on Heisenberg equations of motion. Thus we can choose to declare by fiat that the ground state has zero energy and a field Hamiltonian, for example, can be replaced by:
{\displaystyle {\begin{aligned}H_{F}-\left\langle 0|H_{F}|0\right\rangle &={\tfrac {1}{2}}\hbar \omega \left(aa^{\dagger }+a^{\dagger }a\right)-{\tfrac {1}{2}}\hbar \omega \\&=\hbar \omega \left(a^{\dagger }a+{\tfrac {1}{2}}\right)-{\tfrac {1}{2}}\hbar \omega \\&=\hbar \omega a^{\dagger }a\end{aligned}}}
without affecting any physical predictions of the theory. The new Hamiltonian is said to be normally ordered (or Wick ordered) and is denoted by a double-dot symbol. The normally ordered Hamiltonian is denoted :HF, i.e.:
{\displaystyle :H_{F}:\equiv {\tfrac {1}{2}}\hbar \omega :\left(aa^{\dagger }+a^{\dagger }a\right):\equiv \hbar \omega a^{\dagger }a}
In other words, within the normal ordering symbol we can commute a and a†. Since zero-point energy is intimately connected to the non-commutativity of a and a†, the normal ordering procedure eliminates any contribution from the zero-point field. This is especially reasonable in the case of the field Hamiltonian, since the zero-point term merely adds a constant energy which can be eliminated by a simple redefinition for the zero of energy. Moreover, this constant energy in the Hamiltonian obviously commutes with a and a† and so cannot have any effect on the quantum dynamics described by the Heisenberg equations of motion.
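The claim that the constant zero-point term cannot affect the Heisenberg dynamics can be checked numerically: in a truncated Fock basis, HF and its normally ordered version differ by a multiple of the identity, so their commutators with a coincide. A small Python sketch (the truncation N is illustrative; units with ħω = 1):

```python
import numpy as np

# Compare the Heisenberg commutators [H, a] for the full Hamiltonian and its
# normally ordered version, which differ only by the constant (1/2)*hbar*omega.
# The constant commutes with a, so both generate the same equation of motion.
N = 15
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

H_full = adag @ a + 0.5 * np.eye(N)   # hbar*omega (a†a + 1/2)
H_no = adag @ a                       # hbar*omega a†a  (normally ordered)

comm_full = H_full @ a - a @ H_full
comm_no = H_no @ a - a @ H_no
print(np.allclose(comm_full, comm_no))  # -> True
```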
However, things are not quite that simple. The zero-point energy cannot be eliminated by dropping its energy from the Hamiltonian: When we do this and solve the Heisenberg equation for a field operator, we must include the vacuum field, which is the homogeneous part of the solution for the field operator. In fact we can show that the vacuum field is essential for the preservation of the commutators and the formal consistency of QED. When we calculate the field energy we obtain not only a contribution from particles and forces that may be present but also a contribution from the vacuum field itself i.e. the zero-point field energy. In other words, the zero-point energy reappears even though we may have deleted it from the Hamiltonian.
==== Electromagnetic field in free space ====
From Maxwell's equations, the electromagnetic energy of a "free" field i.e. one with no sources, is described by:
H
F
=
1
8
π
∫
d
3
r
(
E
2
+
B
2
)
=
k
2
2
π
|
α
(
t
)
|
2
{\displaystyle {\begin{aligned}H_{F}&={\frac {1}{8\pi }}\int d^{3}r\left(\mathbf {E} ^{2}+\mathbf {B} ^{2}\right)\\&={\frac {k^{2}}{2\pi }}|\alpha (t)|^{2}\end{aligned}}}
We introduce the "mode function" A0(r) that satisfies the Helmholtz equation:
{\displaystyle \left(\nabla ^{2}+k^{2}\right)\mathbf {A} _{0}(\mathbf {r} )=0}
where k = ω/c and assume it is normalized such that:
{\displaystyle \int d^{3}r\left|\mathbf {A} _{0}(\mathbf {r} )\right|^{2}=1}
We wish to "quantize" the electromagnetic energy of free space for a multimode field. The field intensity of free space should be independent of position such that |A0(r)|2 should be independent of r for each mode of the field. The mode function satisfying these conditions is:
{\displaystyle \mathbf {A} _{0}(\mathbf {r} )=e_{\mathbf {k} }e^{i\mathbf {k} \cdot \mathbf {r} }}
where k · ek = 0 in order to have the transversality condition ∇ · A(r,t) = 0 satisfied for the Coulomb gauge in which we are working.
To achieve the desired normalization we pretend space is divided into cubes of volume V = L3 and impose on the field the periodic boundary condition:
{\displaystyle \mathbf {A} (x+L,y+L,z+L,t)=\mathbf {A} (x,y,z,t)}
or equivalently
{\displaystyle \left(k_{x},k_{y},k_{z}\right)={\frac {2\pi }{L}}\left(n_{x},n_{y},n_{z}\right)}
where nx, ny and nz can assume any integer values. This allows us to consider the field in any one of the imaginary cubes and to define the mode function:
{\displaystyle \mathbf {A} _{\mathbf {k} }(\mathbf {r} )={\frac {1}{\sqrt {V}}}e_{\mathbf {k} }e^{i\mathbf {k} \cdot \mathbf {r} }}
which satisfies the Helmholtz equation, transversality, and the "box normalization":
{\displaystyle \int _{V}d^{3}r\left|\mathbf {A} _{\mathbf {k} }(\mathbf {r} )\right|^{2}=1}
where ek is chosen to be a unit vector which specifies the polarization of the field mode. The condition k · ek = 0 means that there are two independent choices of ek, which we call ek1 and ek2 where ek1 · ek2 = 0 and ek1² = ek2² = 1. Thus we define the mode functions:
{\displaystyle \mathbf {A} _{\mathbf {k} \lambda }(\mathbf {r} )={\frac {1}{\sqrt {V}}}e_{\mathbf {k} \lambda }e^{i\mathbf {k} \cdot \mathbf {r} }\,,\quad \lambda ={\begin{cases}1\\2\end{cases}}}
in terms of which the vector potential becomes:
{\displaystyle \mathbf {A} _{\mathbf {k} \lambda }(\mathbf {r} ,t)={\sqrt {\frac {2\pi \hbar c^{2}}{\omega _{k}V}}}\left[a_{\mathbf {k} \lambda }(0)e^{i\mathbf {k} \cdot \mathbf {r} }+a_{\mathbf {k} \lambda }^{\dagger }(0)e^{-i\mathbf {k} \cdot \mathbf {r} }\right]e_{\mathbf {k} \lambda }}
or:
{\displaystyle \mathbf {A} _{\mathbf {k} \lambda }(\mathbf {r} ,t)={\sqrt {\frac {2\pi \hbar c^{2}}{\omega _{k}V}}}\left[a_{\mathbf {k} \lambda }(0)e^{-i(\omega _{k}t-\mathbf {k} \cdot \mathbf {r} )}+a_{\mathbf {k} \lambda }^{\dagger }(0)e^{i(\omega _{k}t-\mathbf {k} \cdot \mathbf {r} )}\right]e_{\mathbf {k} \lambda }}
where ωk = kc and akλ, a†kλ are photon annihilation and creation operators for the mode with wave vector k and polarization λ. This gives the vector potential for a plane wave mode of the field. The condition for (kx, ky, kz) shows that there are infinitely many such modes. The linearity of Maxwell's equations allows us to write:
{\displaystyle \mathbf {A} (\mathbf {r} ,t)=\sum _{\mathbf {k} \lambda }{\sqrt {\frac {2\pi \hbar c^{2}}{\omega _{k}V}}}\left[a_{\mathbf {k} \lambda }(0)e^{i\mathbf {k} \cdot \mathbf {r} }+a_{\mathbf {k} \lambda }^{\dagger }(0)e^{-i\mathbf {k} \cdot \mathbf {r} }\right]e_{\mathbf {k} \lambda }}
for the total vector potential in free space. Using the fact that:
{\displaystyle \int _{V}d^{3}r\mathbf {A} _{\mathbf {k} \lambda }(\mathbf {r} )\cdot \mathbf {A} _{\mathbf {k} '\lambda '}^{\ast }(\mathbf {r} )=\delta _{\mathbf {k} ,\mathbf {k} '}^{3}\delta _{\lambda ,\lambda '}}
we find the field Hamiltonian is:
{\displaystyle H_{F}=\sum _{\mathbf {k} \lambda }\hbar \omega _{k}\left(a_{\mathbf {k} \lambda }^{\dagger }a_{\mathbf {k} \lambda }+{\tfrac {1}{2}}\right)}
This is the Hamiltonian for an infinite number of uncoupled harmonic oscillators. Thus different modes of the field are independent and satisfy the commutation relations:
{\displaystyle {\begin{aligned}\left[a_{\mathbf {k} \lambda }(t),a_{\mathbf {k} '\lambda '}^{\dagger }(t)\right]&=\delta _{\mathbf {k} ,\mathbf {k} '}^{3}\delta _{\lambda ,\lambda '}\\[10px]\left[a_{\mathbf {k} \lambda }(t),a_{\mathbf {k} '\lambda '}(t)\right]&=\left[a_{\mathbf {k} \lambda }^{\dagger }(t),a_{\mathbf {k} '\lambda '}^{\dagger }(t)\right]=0\end{aligned}}}
Clearly the least eigenvalue for HF is:
{\displaystyle \sum _{\mathbf {k} \lambda }{\tfrac {1}{2}}\hbar \omega _{k}}
This state describes the zero-point energy of the vacuum. It appears that this sum is divergent – in fact highly divergent, as putting in the density factor
{\displaystyle {\frac {8\pi \nu ^{2}\,d\nu }{c^{3}}}V}
shows. The summation becomes approximately the integral:
{\displaystyle {\frac {4\pi hV}{c^{3}}}\int \nu ^{3}\,d\nu }
for high values of ν. It diverges proportionally to ν4 for large ν.
There are two separate questions to consider. First, is the divergence a real one, such that the zero-point energy really is infinite? If we suppose the volume V is enclosed by perfectly conducting walls, very high frequencies can only be contained by making the conduction more and more perfect; no actual method of containing the high frequencies is possible. Such modes will not be stationary in our box and thus not countable in the stationary energy content. So from this physical point of view the above sum should only extend to those frequencies which are countable; a cut-off energy is thus eminently reasonable. However, on the scale of a "universe", questions of general relativity must be included. Suppose even the boxes could be reproduced, fit together and closed nicely by curving spacetime. Then exact conditions for running waves may be possible. However the very high frequency quanta will still not be contained. As per John Wheeler's "geons", these will leak out of the system. So again a cut-off is permissible, almost necessary. The question here becomes one of consistency, since the very high energy quanta will act as a mass source and start curving the geometry.
This leads to the second question. Divergent or not, finite or infinite, is the zero-point energy of any physical significance? Ignoring the whole zero-point energy is often encouraged for all practical calculations. The reason for this is that energies are not typically defined by an arbitrary zero point, but rather by changes in energy, so adding or subtracting a constant (even if infinite) should be allowed. However, this is not the whole story; in reality energy is not so arbitrarily defined: in general relativity the seat of the curvature of spacetime is the energy content, and there the absolute amount of energy has real physical meaning. There is no such thing as an arbitrary additive constant with density of field energy. Energy density curves space, and an increase in energy density produces an increase of curvature. Furthermore, the zero-point energy density has other physical consequences, e.g. the Casimir effect, the contribution to the Lamb shift, and the anomalous magnetic moment of the electron; it is clear it is not just a mathematical constant or artifact that can be cancelled out.
==== Necessity of the vacuum field in QED ====
The vacuum state of the "free" electromagnetic field (that with no sources) is defined as the ground state in which nkλ = 0 for all modes (k, λ). The vacuum state, like all stationary states of the field, is an eigenstate of the Hamiltonian but not of the electric and magnetic field operators. In the vacuum state, therefore, the electric and magnetic fields do not have definite values. We can imagine them to be fluctuating about their mean value of zero.
In a process in which a photon is annihilated (absorbed), we can think of the photon as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. An atom, for instance, can be considered to be "dressed" by emission and reabsorption of "virtual photons" from the vacuum. The vacuum state energy described by Σkλ ħωk/2 is infinite. We can make the replacement:
{\displaystyle \sum _{\mathbf {k} \lambda }\longrightarrow \sum _{\lambda }\left({\frac {L}{2\pi }}\right)^{3}\int d^{3}k={\frac {V}{8\pi ^{3}}}\sum _{\lambda }\int d^{3}k}
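The replacement of the mode sum by (V/8π³)Σλ∫d³k can be sanity-checked by counting the allowed lattice modes directly: the wave vectors k = (2π/L)(nx, ny, nz) inside a sphere |k| ≤ K should be well approximated by (V/8π³)(4π/3)K³ = (4π/3)N³ with K = 2πN/L. A minimal Python sketch (the cutoff N is an arbitrary illustrative choice):

```python
import numpy as np

# Count integer lattice points (nx, ny, nz) with |n| <= N and compare with the
# continuum estimate (4/3) pi N^3, which is the integral form of the mode sum.
N = 25
n = np.arange(-N, N + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
count = int(np.sum(nx**2 + ny**2 + nz**2 <= N**2))

estimate = 4.0 / 3.0 * np.pi * N**3
print(count, round(estimate))  # the two agree to within a few percent
```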
the zero-point energy density is:
{\displaystyle {\begin{aligned}{\frac {1}{V}}\sum _{\mathbf {k} \lambda }{\tfrac {1}{2}}\hbar \omega _{k}&={\frac {2}{8\pi ^{3}}}\int d^{3}k{\tfrac {1}{2}}\hbar \omega _{k}\\&={\frac {4\pi }{4\pi ^{3}}}\int dk\,k^{2}\left({\tfrac {1}{2}}\hbar \omega _{k}\right)\\&={\frac {\hbar }{2\pi ^{2}c^{3}}}\int d\omega \,\omega ^{3}\end{aligned}}}
or in other words the spectral energy density of the vacuum field:
{\displaystyle \rho _{0}(\omega )={\frac {\hbar \omega ^{3}}{2\pi ^{2}c^{3}}}}
The zero-point energy density in the frequency range from ω1 to ω2 is therefore:
{\displaystyle \int _{\omega _{1}}^{\omega _{2}}d\omega \,\rho _{0}(\omega )={\frac {\hbar }{8\pi ^{2}c^{3}}}\left(\omega _{2}^{4}-\omega _{1}^{4}\right)}
This can be large even in relatively narrow "low frequency" regions of the spectrum. In the optical region from 400 to 700 nm, for instance, the above equation yields around 220 erg/cm3.
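The ~220 erg/cm3 figure can be checked numerically from the integrated spectral density above. A short Python sketch in SI units (the conversion used is 1 J/m3 = 10 erg/cm3):

```python
import math

# Zero-point energy density in the optical band 400-700 nm:
#   u = (hbar / 8 pi^2 c^3) * (omega2^4 - omega1^4)
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s

def omega(lam):
    """Angular frequency for vacuum wavelength lam (metres)."""
    return 2.0 * math.pi * c / lam

w1, w2 = omega(700e-9), omega(400e-9)
u = hbar / (8.0 * math.pi**2 * c**3) * (w2**4 - w1**4)  # J/m^3

print(u * 10)  # in erg/cm^3: roughly 220, as quoted
```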
We showed in the above section that the zero-point energy can be eliminated from the Hamiltonian by the normal ordering prescription. However, this elimination does not mean that the vacuum field has been rendered unimportant or without physical consequences. To illustrate this point we consider a linear dipole oscillator in the vacuum. The Hamiltonian for the oscillator plus the field with which it interacts is:
{\displaystyle H={\frac {1}{2m}}\left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)^{2}+{\tfrac {1}{2}}m\omega _{0}^{2}\mathbf {x} ^{2}+H_{F}}
This has the same form as the corresponding classical Hamiltonian and the Heisenberg equations of motion for the oscillator and the field are formally the same as their classical counterparts. For instance the Heisenberg equations for the coordinate x and the canonical momentum p = mẋ +eA/c of the oscillator are:
{\displaystyle {\begin{aligned}\mathbf {\dot {x}} &=(i\hbar )^{-1}[\mathbf {x} ,H]={\frac {1}{m}}\left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)\\\mathbf {\dot {p}} &=(i\hbar )^{-1}[\mathbf {p} ,H]=-{\frac {1}{2m}}\nabla \left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)^{2}-m\omega _{0}^{2}\mathbf {x} \\&=-{\frac {1}{m}}\left[\left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)\cdot \nabla \right]\left[-{\frac {e}{c}}\mathbf {A} \right]-{\frac {1}{m}}\left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)\times \nabla \times \left[-{\frac {e}{c}}\mathbf {A} \right]-m\omega _{0}^{2}\mathbf {x} \\&={\frac {e}{c}}(\mathbf {\dot {x}} \cdot \nabla )\mathbf {A} +{\frac {e}{c}}\mathbf {\dot {x}} \times \mathbf {B} -m\omega _{0}^{2}\mathbf {x} \end{aligned}}}
or:
{\displaystyle {\begin{aligned}m\mathbf {\ddot {x}} &=\mathbf {\dot {p}} -{\frac {e}{c}}\mathbf {\dot {A}} \\&=-{\frac {e}{c}}\left[\mathbf {\dot {A}} -\left(\mathbf {\dot {x}} \cdot \nabla \right)\mathbf {A} \right]+{\frac {e}{c}}\mathbf {\dot {x}} \times \mathbf {B} -m\omega _{0}^{2}\mathbf {x} \\&=e\mathbf {E} +{\frac {e}{c}}\mathbf {\dot {x}} \times \mathbf {B} -m\omega _{0}^{2}\mathbf {x} \end{aligned}}}
since the rate of change of the vector potential in the frame of the moving charge is given by the convective derivative
{\displaystyle \mathbf {\dot {A}} ={\frac {\partial \mathbf {A} }{\partial t}}+(\mathbf {\dot {x}} \cdot \nabla )\mathbf {A} \,.}
For nonrelativistic motion we may neglect the magnetic force and replace the expression for mẍ by:
{\displaystyle {\begin{aligned}\mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} &\approx {\frac {e}{m}}\mathbf {E} \\&\approx {\frac {e}{m}}\sum _{\mathbf {k} \lambda }{\sqrt {\frac {2\pi \hbar \omega _{k}}{V}}}\left[a_{\mathbf {k} \lambda }(t)+a_{\mathbf {k} \lambda }^{\dagger }(t)\right]e_{\mathbf {k} \lambda }\end{aligned}}}
Above we have made the electric dipole approximation in which the spatial dependence of the field is neglected. The Heisenberg equation for akλ is found similarly from the Hamiltonian to be:
{\displaystyle {\dot {a}}_{\mathbf {k} \lambda }=-i\omega _{k}a_{\mathbf {k} \lambda }+ie{\sqrt {\frac {2\pi }{\hbar \omega _{k}V}}}\mathbf {\dot {x}} \cdot e_{\mathbf {k} \lambda }}
in the electric dipole approximation.
In deriving these equations for x, p, and akλ we have used the fact that equal-time particle and field operators commute. This follows from the assumption that particle and field operators commute at some time (say, t = 0) when the matter–field interaction is presumed to begin, together with the fact that a Heisenberg-picture operator A(t) evolves in time as A(t) = U†(t)A(0)U(t), where U(t) is the time evolution operator satisfying
{\displaystyle i\hbar {\dot {U}}=HU\,,\quad U^{\dagger }(t)=U^{-1}(t)\,,\quad U(0)=1\,.}
Alternatively, we can argue that these operators must commute if we are to obtain the correct equations of motion from the Hamiltonian, just as the corresponding Poisson brackets in classical theory must vanish in order to generate the correct Hamilton equations. The formal solution of the field equation is:
{\displaystyle a_{\mathbf {k} \lambda }(t)=a_{\mathbf {k} \lambda }(0)e^{-i\omega _{k}t}+ie{\sqrt {\frac {2\pi }{\hbar \omega _{k}V}}}\int _{0}^{t}dt'\,e_{\mathbf {k} \lambda }\cdot \mathbf {\dot {x}} (t')e^{i\omega _{k}\left(t'-t\right)}}
and therefore the equation of motion for x may be written:
{\displaystyle \mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} ={\frac {e}{m}}\mathbf {E} _{0}(t)+{\frac {e}{m}}\mathbf {E} _{RR}(t)}
where
{\displaystyle \mathbf {E} _{0}(t)=i\sum _{\mathbf {k} \lambda }{\sqrt {\frac {2\pi \hbar \omega _{k}}{V}}}\left[a_{\mathbf {k} \lambda }(0)e^{-i\omega _{k}t}-a_{\mathbf {k} \lambda }^{\dagger }(0)e^{i\omega _{k}t}\right]e_{\mathbf {k} \lambda }}
and
{\displaystyle \mathbf {E} _{RR}(t)=-{\frac {4\pi e}{V}}\sum _{\mathbf {k} \lambda }\int _{0}^{t}dt'\left[e_{\mathbf {k} \lambda }\cdot \mathbf {\dot {x}} \left(t'\right)\right]\cos \omega _{k}\left(t'-t\right)}
It can be shown that in the radiation reaction field, if the mass m is regarded as the "observed" mass then we can take
{\displaystyle \mathbf {E} _{RR}(t)={\frac {2e}{3c^{3}}}\mathbf {\overset {...}{x}} }
The total field acting on the dipole has two parts, E0(t) and ERR(t). E0(t) is the free or zero-point field acting on the dipole. It is the homogeneous solution of the Maxwell equation for the field acting on the dipole, i.e., the solution, at the position of the dipole, of the wave equation
{\displaystyle \left[\nabla ^{2}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\right]\mathbf {E} =0}
satisfied by the field in the (source free) vacuum. For this reason E0(t) is often referred to as the "vacuum field", although it is of course a Heisenberg-picture operator acting on whatever state of the field happens to be appropriate at t = 0. ERR(t) is the source field, the field generated by the dipole and acting on the dipole.
Using the above equation for ERR(t) we obtain an equation for the Heisenberg-picture operator x(t) that is formally the same as the classical equation for a linear dipole oscillator:
{\displaystyle \mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} -\tau \mathbf {\overset {...}{x}} ={\frac {e}{m}}\mathbf {E} _{0}(t)}
where τ = 2e2/3mc3. In this instance we have considered a dipole in the vacuum, without any "external" field acting on it. The role of the external field in the above equation is played by the vacuum electric field acting on the dipole.
Classically, a dipole in the vacuum is not acted upon by any "external" field: if there are no sources other than the dipole itself, then the only field acting on the dipole is its own radiation reaction field. In quantum theory however there is always an "external" field, namely the source-free or vacuum field E0(t).
According to our earlier equation for akλ(t), the free field is the only field in existence at t = 0, the time at which the interaction between the dipole and the field is "switched on". The state vector of the dipole–field system at t = 0 is therefore of the form
{\displaystyle |\Psi \rangle =|{\text{vac}}\rangle |\psi _{D}\rangle \,,}
where |vac⟩ is the vacuum state of the field and |ψD⟩ is the initial state of the dipole oscillator. The expectation value of the free field is therefore at all times equal to zero:
{\displaystyle \langle \mathbf {E} _{0}(t)\rangle =\langle \Psi |\mathbf {E} _{0}(t)|\Psi \rangle =0}
since akλ(0)|vac⟩ = 0. However, the energy density associated with the free field is infinite:
{\displaystyle {\begin{aligned}{\frac {1}{4\pi }}\left\langle \mathbf {E} _{0}^{2}(t)\right\rangle &={\frac {1}{4\pi }}\sum _{\mathbf {k} \lambda }\sum _{\mathbf {k'} \lambda '}{\sqrt {\frac {2\pi \hbar \omega _{k}}{V}}}{\sqrt {\frac {2\pi \hbar \omega _{k'}}{V}}}\times \left\langle a_{\mathbf {k} \lambda }(0)a_{\mathbf {k'} \lambda '}^{\dagger }(0)\right\rangle \\&={\frac {1}{4\pi }}\sum _{\mathbf {k} \lambda }\left({\frac {2\pi \hbar \omega _{k}}{V}}\right)\\&=\int _{0}^{\infty }d\omega \,\rho _{0}(\omega )\end{aligned}}}
The important point of this is that the zero-point field energy HF does not affect the Heisenberg equation for akλ, since it is a c-number or constant (i.e. an ordinary number rather than an operator) and commutes with akλ. We can therefore drop the zero-point field energy from the Hamiltonian, as is usually done. But the zero-point field re-emerges as the homogeneous solution for the field equation. A charged particle in the vacuum will therefore always see a zero-point field of infinite density. This is the origin of one of the infinities of quantum electrodynamics, and it cannot be eliminated by the trivial expedient of dropping the term Σkλ ħωk/2 from the field Hamiltonian.
The free field is in fact necessary for the formal consistency of the theory. In particular, it is necessary for the preservation of the commutation relations, which is required by the unitarity of time evolution in quantum theory:
{\displaystyle {\begin{aligned}\left[z(t),p_{z}(t)\right]&=\left[U^{\dagger }(t)z(0)U(t),U^{\dagger }(t)p_{z}(0)U(t)\right]\\&=U^{\dagger }(t)\left[z(0),p_{z}(0)\right]U(t)\\&=i\hbar U^{\dagger }(t)U(t)\\&=i\hbar \end{aligned}}}
We can calculate [z(t),pz(t)] from the formal solution of the operator equation of motion
{\displaystyle \mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} -\tau \mathbf {\overset {...}{x}} ={\frac {e}{m}}\mathbf {E} _{0}(t)}
Using the fact that
{\displaystyle \left[a_{\mathbf {k} \lambda }(0),a_{\mathbf {k'} \lambda '}^{\dagger }(0)\right]=\delta _{\mathbf {kk'} }^{3}\delta _{\lambda \lambda '}}
and that equal-time particle and field operators commute, we obtain:
{\displaystyle {\begin{aligned}[z(t),p_{z}(t)]&=\left[z(t),m{\dot {z}}(t)\right]+\left[z(t),{\frac {e}{c}}A_{z}(t)\right]\\&=\left[z(t),m{\dot {z}}(t)\right]\\&=\left({\frac {i\hbar e^{2}}{2\pi ^{2}mc^{3}}}\right)\left({\frac {8\pi }{3}}\right)\int _{0}^{\infty }{\frac {d\omega \,\omega ^{4}}{\left(\omega ^{2}-\omega _{0}^{2}\right)^{2}+\tau ^{2}\omega ^{6}}}\end{aligned}}}
For the dipole oscillator under consideration it can be assumed that the radiative damping rate is small compared with the natural oscillation frequency, i.e., τω0 ≪ 1. Then the integrand above is sharply peaked at ω = ω0 and:
{\displaystyle {\begin{aligned}\left[z(t),p_{z}(t)\right]&\approx {\frac {2i\hbar e^{2}}{3\pi mc^{3}}}\omega _{0}^{3}\int _{-\infty }^{\infty }{\frac {dx}{x^{2}+\tau ^{2}\omega _{0}^{6}}}\\&=\left({\frac {2i\hbar e^{2}\omega _{0}^{3}}{3\pi mc^{3}}}\right)\left({\frac {\pi }{\tau \omega _{0}^{3}}}\right)\\&=i\hbar \end{aligned}}}
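The evaluation above rests on the Lorentzian integral ∫dx/(x² + τ²ω06) = π/τω03, which is what makes the commutator come out to exactly iħ. A quick numeric sketch of that identity (the value of a = τω03 is arbitrary and purely illustrative):

```python
import numpy as np

# Numerically integrate 1/(x^2 + a^2) over a wide symmetric window and compare
# with the exact value pi/a used in the commutator calculation above.
a = 2.5  # illustrative value of tau * omega0^3

x = np.linspace(-1e4 * a, 1e4 * a, 2_000_001)
integrand = 1.0 / (x**2 + a**2)

# Trapezoidal rule with uniform spacing (avoids depending on np.trapz naming).
dx = x[1] - x[0]
numeric = dx * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

exact = np.pi / a
print(abs(numeric - exact) / exact < 1e-3)  # -> True
```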
The necessity of the vacuum field can also be appreciated by making the small damping approximation in
{\displaystyle {\begin{aligned}&\mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} -\tau \mathbf {\overset {...}{x}} ={\frac {e}{m}}\mathbf {E} _{0}(t)\\&\mathbf {\ddot {x}} \approx -\omega _{0}^{2}\mathbf {x} (t)&&\mathbf {\overset {...}{x}} \approx -\omega _{0}^{2}\mathbf {\dot {x}} \end{aligned}}}
and
{\displaystyle \mathbf {\ddot {x}} +\tau \omega _{0}^{2}\mathbf {\dot {x}} +\omega _{0}^{2}\mathbf {x} \approx {\frac {e}{m}}\mathbf {E} _{0}(t)}
Without the free field E0(t) in this equation the operator x(t) would be exponentially damped, and commutators like [z(t),pz(t)] would approach zero for t ≫ 1/τω02. With the vacuum field included, however, the commutator is iħ at all times, as required by unitarity, and as we have just shown. A similar result is easily worked out for the case of a free particle instead of a dipole oscillator.
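The exponential damping that would occur without the free field can be illustrated by integrating the classical analogue of the small-damping equation with E0 set to zero. A Python sketch (ω0, τ and the step size are illustrative values, not physical constants):

```python
# Integrate x'' + tau*w0^2*x' + w0^2*x = 0 (vacuum field set to zero) with
# semi-implicit Euler: the oscillation amplitude decays like exp(-tau*w0^2*t/2),
# so any quantity built from x(t) decays away, as commutators would.
w0, tau = 1.0, 0.05
dt, steps = 1e-3, 200_000  # integrate out to t = 200, many damping times

x, v = 1.0, 0.0
for _ in range(steps):
    acc = -tau * w0**2 * v - w0**2 * x  # damped-oscillator acceleration
    v += acc * dt
    x += v * dt

amp = (x * x + v * v) ** 0.5  # oscillation amplitude (w0 = 1)
print(amp < 1e-2)  # -> True: the motion has essentially died out
```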
What we have here is an example of a "fluctuation-dissipation relation". Generally speaking, if a system is coupled to a bath that can take energy from the system in an effectively irreversible way, then the bath must also cause fluctuations. The fluctuations and the dissipation go hand in hand; we cannot have one without the other. In the current example the coupling of a dipole oscillator to the electromagnetic field has a dissipative component, in the form of the radiation reaction field, and a fluctuation component, in the form of the zero-point (vacuum) field; given the existence of radiation reaction, the vacuum field must also exist in order to preserve the canonical commutation rule and all it entails.
The spectral density of the vacuum field is fixed by the form of the radiation reaction field, or vice versa: because the radiation reaction field varies with the third derivative of x, the spectral energy density of the vacuum field must be proportional to the third power of ω in order for [z(t),pz(t)] = iħ to hold. In the case of a dissipative force proportional to ẋ, by contrast, the fluctuation force must be proportional to ω in order to maintain the canonical commutation relation. This relation between the form of the dissipation and the spectral density of the fluctuation is the essence of the fluctuation-dissipation theorem.
The fact that the canonical commutation relation for a harmonic oscillator coupled to the vacuum field is preserved implies that the zero-point energy of the oscillator is preserved. It is easy to show that after a few damping times the zero-point motion of the oscillator is in fact sustained by the driving zero-point field.
=== Quantum chromodynamic vacuum ===
The QCD vacuum is the vacuum state of quantum chromodynamics (QCD). It is an example of a non-perturbative vacuum state, characterized by non-vanishing condensates, such as the gluon condensate and the quark condensate in the complete theory which includes quarks. The presence of these condensates characterizes the confined phase of quark matter. In technical terms, gluons are vector gauge bosons that mediate strong interactions of quarks in quantum chromodynamics (QCD). Gluons themselves carry the color charge of the strong interaction. This is unlike the photon, which mediates the electromagnetic interaction but lacks an electric charge. Gluons therefore participate in the strong interaction in addition to mediating it, making QCD significantly harder to analyze than QED (quantum electrodynamics), as it deals with nonlinear equations characterizing such interactions.
=== Higgs field ===
The Standard Model hypothesises a field called the Higgs field (symbol: ϕ), which has the unusual property of a non-zero amplitude in its ground state (zero-point) energy after renormalization; i.e., a non-zero vacuum expectation value. It can have this effect because of its unusual "Mexican hat" shaped potential whose lowest "point" is not at its "centre". Below a certain extremely high energy level the existence of this non-zero vacuum expectation spontaneously breaks electroweak gauge symmetry which in turn gives rise to the Higgs mechanism and triggers the acquisition of mass by those particles interacting with the field. The Higgs mechanism occurs whenever a charged field has a vacuum expectation value. This effect occurs because scalar field components of the Higgs field are "absorbed" by the massive bosons as degrees of freedom, and couple to the fermions via Yukawa coupling, thereby producing the expected mass terms. The expectation value of ϕ0 in the ground state (the vacuum expectation value or VEV) is then ⟨ϕ0⟩ = v/√2, where v = |μ|/√λ. The measured value of this parameter is approximately 246 GeV/c2. It has units of mass, and is the only free parameter of the Standard Model that is not a dimensionless number.
The Higgs mechanism is a type of superconductivity which occurs in the vacuum. It occurs when all of space is filled with a sea of particles which are charged and thus the field has a nonzero vacuum expectation value. Interaction with the vacuum energy filling the space prevents certain forces from propagating over long distances (as it does in a superconducting medium; e.g., in the Ginzburg–Landau theory).
== Experimental observations ==
Zero-point energy has many observed physical consequences. It is important to note that zero-point energy is not merely an artifact of mathematical formalism that can, for instance, be dropped from a Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on the Heisenberg equations of motion, without further consequence. Indeed, such treatment could create a problem at a deeper, as yet undiscovered, level of theory. For instance, in general relativity the zero of energy (i.e. the energy density of the vacuum) contributes to a cosmological constant of the type introduced by Einstein in order to obtain static solutions to his field equations. The zero-point energy density of the vacuum, due to all quantum fields, is extremely large, even when we cut off the largest allowable frequencies based on plausible physical arguments. It implies a cosmological constant larger than the limits imposed by observation by about 120 orders of magnitude. This "cosmological constant problem" remains one of the greatest unsolved mysteries of physics.
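The "about 120 orders of magnitude" figure can be sketched numerically by cutting the zero-point spectral density off at the Planck frequency and comparing with the observed vacuum energy density. A rough Python sketch (the observed value ~5.3×10−10 J/m3 is an assumed round figure, not taken from the text above):

```python
import math

# Zero-point density with a Planck-frequency cutoff:
#   rho = (hbar / 8 pi^2 c^3) * omega_max^4,  omega_max = sqrt(c^5 / hbar G)
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s
G = 6.674e-11           # m^3 kg^-1 s^-2

omega_planck = math.sqrt(c**5 / (hbar * G))                 # ~1.9e43 rad/s
rho_zpe = hbar / (8 * math.pi**2 * c**3) * omega_planck**4  # J/m^3
rho_obs = 5.3e-10  # J/m^3, assumed observed vacuum energy density

orders = math.log10(rho_zpe / rho_obs)
print(round(orders))  # somewhere near 120 orders of magnitude
```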
=== Casimir effect ===
A phenomenon that is commonly presented as evidence for the existence of zero-point energy in vacuum is the Casimir effect, proposed in 1948 by Dutch physicist Hendrik Casimir, who considered the quantized electromagnetic field between a pair of grounded, neutral metal plates. The vacuum energy contains contributions from all wavelengths, except those excluded by the spacing between plates. As the plates draw together, more wavelengths are excluded and the vacuum energy decreases. The decrease in energy means there must be a force doing work on the plates as they move.
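The magnitude of the attraction can be illustrated with the standard ideal-plate result, P = π²ħc/(240 d⁴), valid for perfectly conducting plates at zero temperature (a textbook formula, not derived in this article):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J·s
c    = 2.99792458e8      # speed of light, m/s

def casimir_pressure(d):
    """Attractive Casimir pressure (Pa) between ideal parallel plates at separation d (m)."""
    return math.pi**2 * hbar * c / (240 * d**4)

# Negligible at everyday scales, dramatic at nanometre gaps:
for d in (1e-6, 1e-7, 1e-8):
    print(f"d = {d*1e9:6.1f} nm -> P ≈ {casimir_pressure(d):.3g} Pa")
```

The d⁻⁴ scaling means the pressure grows from a milli-pascal at micrometre separations to the order of one atmosphere at a 10 nm gap, which is why the effect only became cleanly measurable with modern precision experiments.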
Early experimental tests from the 1950s onwards gave positive results showing the force was real, but other external factors could not be ruled out as the primary cause, with the range of experimental error sometimes being nearly 100%. That changed in 1997 with Lamoreaux conclusively showing that the Casimir force was real. Results have been repeatedly replicated since then.
In 2009, Munday et al. published experimental proof that (as predicted in 1961) the Casimir force could also be repulsive as well as being attractive. Repulsive Casimir forces could allow quantum levitation of objects in a fluid and lead to a new class of switchable nanoscale devices with ultra-low static friction.
An interesting hypothetical side effect of the Casimir effect is the Scharnhorst effect, a hypothetical phenomenon in which light signals travel slightly faster than c between two closely spaced conducting plates.
=== Lamb shift ===
The quantum fluctuations of the electromagnetic field have important physical consequences. In addition to the Casimir effect, they also lead to a splitting between the two energy levels 2S1/2 and 2P1/2 (in term symbol notation) of the hydrogen atom which was not predicted by the Dirac equation, according to which these states should have the same energy. Charged particles can interact with the fluctuations of the quantized vacuum field, leading to slight shifts in energy; this effect is called the Lamb shift. The shift of about 4.38×10⁻⁶ eV is roughly 10⁻⁷ of the difference between the energies of the 1s and 2s levels, and amounts to 1,058 MHz in frequency units. A small part of this shift (27 MHz ≈ 3%) arises not from fluctuations of the electromagnetic field, but from fluctuations of the electron–positron field. The creation of (virtual) electron–positron pairs has the effect of screening the Coulomb field and acts as a vacuum dielectric constant. This effect is much more important in muonic atoms.
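As a sanity check on the quoted figures, the energy shift converts to frequency via ν = E/h:

```python
# Quick unit check: express the quoted 4.38e-6 eV Lamb shift in MHz.
h_eV = 4.135667696e-15      # Planck constant, eV·s

shift_eV = 4.38e-6          # quoted 2S1/2 - 2P1/2 shift
nu_MHz = shift_eV / h_eV / 1e6
print(f"Lamb shift ≈ {nu_MHz:.0f} MHz")            # ≈ 1059 MHz, matching the quoted 1,058 MHz

# Fraction attributed to the electron-positron (vacuum polarization) part:
print(f"27 MHz ≈ {27 / nu_MHz:.1%} of the total")  # ≈ 2.5%, roughly the quoted 3%
```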
=== Fine-structure constant ===
Taking ħ (the Planck constant divided by 2π), c (the speed of light), and e² = qe²/4πε₀ (the electromagnetic coupling constant, i.e. a measure of the strength of the electromagnetic force, where qe is the absolute value of the electronic charge and ε₀ is the vacuum permittivity), we can form a dimensionless quantity called the fine-structure constant:
{\displaystyle \alpha ={\frac {e^{2}}{\hbar c}}={\frac {q_{e}^{2}}{4\pi \varepsilon _{0}\hbar c}}\approx {\frac {1}{137}}}
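The numerical value follows directly from the CODATA constants:

```python
import math

qe   = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
hbar = 1.054571817e-34      # reduced Planck constant, J·s
c    = 2.99792458e8         # speed of light, m/s

alpha = qe**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha ≈ {alpha:.6e}")
print(f"1/alpha ≈ {1/alpha:.3f}")   # ≈ 137.036
```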
The fine-structure constant is the coupling constant of quantum electrodynamics (QED) determining the strength of the interaction between electrons and photons. It turns out that the fine-structure constant is not really a constant at all owing to the zero-point energy fluctuations of the electron-positron field. The quantum fluctuations caused by zero-point energy have the effect of screening electric charges: owing to (virtual) electron-positron pair production, the charge of the particle measured far from the particle is far smaller than the charge measured when close to it.
The Heisenberg inequality, where ħ = h/2π and Δx, Δp are the standard deviations of position and momentum, states that:
{\displaystyle \Delta x\,\Delta p\geq {\frac {1}{2}}\hbar }
It means that a short distance implies large momentum and therefore high energy, i.e. particles of high energy must be used to explore short distances. QED concludes that the fine-structure constant is an increasing function of energy. It has been shown that at energies of the order of the Z⁰ boson rest energy, mZc² ≈ 90 GeV:
{\displaystyle \alpha \approx {\frac {1}{129}}}
rather than the low-energy value α ≈ 1/137. The renormalization procedure of eliminating zero-point energy infinities allows the choice of an arbitrary energy (or distance) scale for defining α. All in all, α depends on the energy scale characteristic of the process under study, and also on details of the renormalization procedure. The energy dependence of α has been observed for several years in precision high-energy physics experiments.
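The shift from 1/137 toward 1/129 can be estimated with the one-loop vacuum-polarization (charge-screening) formula, summing over the charged fermions lighter than the Z boson. The light-quark masses below are rough "constituent" values chosen for illustration (an assumption; in practice the hadronic contribution is extracted from e⁺e⁻ data rather than computed this way):

```python
import math

alpha0 = 1 / 137.036          # fine-structure constant at low energy
MZ = 91.19                    # Z boson mass, GeV

# Charged fermions lighter than MZ: (charge Q, color factor Nc, mass in GeV).
fermions = [
    (-1,   1, 0.000511),  # electron
    (-1,   1, 0.10566),   # muon
    (-1,   1, 1.777),     # tau
    ( 2/3, 3, 0.3),       # u (effective mass, assumed)
    (-1/3, 3, 0.3),       # d (effective mass, assumed)
    (-1/3, 3, 0.5),       # s (effective mass, assumed)
    ( 2/3, 3, 1.27),      # c
    (-1/3, 3, 4.18),      # b
]

# One-loop QED running due to vacuum polarization:
# 1/alpha(MZ) = 1/alpha(0) - (1/3pi) * sum_f Nc * Q^2 * ln(MZ^2 / m_f^2)
delta = sum(Nc * Q**2 * math.log(MZ**2 / m**2) for Q, Nc, m in fermions)
inv_alpha_MZ = 1 / alpha0 - delta / (3 * math.pi)
print(f"1/alpha(MZ) ≈ {inv_alpha_MZ:.1f}")   # close to the measured ~129
```

Even this crude sketch reproduces the screening effect: probing at higher energy (shorter distance) penetrates the cloud of virtual electron–positron pairs, so the effective charge, and hence α, grows.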
=== Vacuum birefringence ===
In the presence of strong electrostatic fields it is predicted that virtual particles become separated from the vacuum state and form real matter. The fact that electromagnetic radiation can be transformed into matter and vice versa leads to fundamentally new features in quantum electrodynamics. One of the most important consequences is that, even in the vacuum, the Maxwell equations have to be replaced by more complicated formulas. In general, it will not be possible to separate processes in the vacuum from the processes involving matter, since electromagnetic fields can create matter if the field fluctuations are strong enough. This leads to a highly complex nonlinear interaction: gravity will have an effect on the light at the same time the light has an effect on gravity. These effects were first predicted by Werner Heisenberg and Hans Heinrich Euler in 1936, and independently the same year by Victor Weisskopf, who stated: "The physical properties of the vacuum originate in the "zero-point energy" of matter, which also depends on absent particles through the external field strengths and therefore contributes an additional term to the purely Maxwellian field energy". Thus strong magnetic fields vary the energy contained in the vacuum. The scale above which the electromagnetic field is expected to become nonlinear is known as the Schwinger limit. At this point the vacuum has all the properties of a birefringent medium, thus in principle a rotation of the polarization frame (the Faraday effect) can be observed in empty space.
Einstein's theories of special and general relativity both state that light should pass freely through a vacuum without being altered, a principle known as Lorentz invariance. Yet, in theory, large nonlinear self-interaction of light due to quantum fluctuations should lead to this principle being measurably violated if the interactions are strong enough. Nearly all theories of quantum gravity predict that Lorentz invariance is not an exact symmetry of nature. It is predicted that the speed at which light travels through the vacuum depends on its direction, polarization and the local strength of the magnetic field. There have been a number of inconclusive results which claim to show evidence of a Lorentz violation by finding a rotation of the polarization plane of light coming from distant galaxies. The first concrete evidence for vacuum birefringence was published in 2017, when a team of astronomers looked at the light coming from the star RX J1856.5-3754, the closest discovered neutron star to Earth.
Roberto Mignani at the National Institute for Astrophysics in Milan who led the team of astronomers has commented that "When Einstein came up with the theory of general relativity 100 years ago, he had no idea that it would be used for navigational systems. The consequences of this discovery probably will also have to be realised on a longer timescale." The team found that visible light from the star had undergone linear polarisation of around 16%. If the birefringence had been caused by light passing through interstellar gas or plasma, the effect should have been no more than 1%. Definitive proof would require repeating the observation at other wavelengths and on other neutron stars. At X-ray wavelengths the polarization from the quantum fluctuations should be near 100%. Although no telescope currently exists that can make such measurements, there are several proposed X-ray telescopes that may soon be able to verify the result conclusively such as China's Hard X-ray Modulation Telescope (HXMT) and NASA's Imaging X-ray Polarimetry Explorer (IXPE).
== Speculated involvement in other phenomena ==
=== Dark energy ===
In the late 1990s it was discovered that very distant supernovae were dimmer than expected suggesting that the universe's expansion was accelerating rather than slowing down. This revived discussion that Einstein's cosmological constant, long disregarded by physicists as being equal to zero, was in fact some small positive value. This would indicate empty space exerted some form of negative pressure or energy.
There is no natural candidate for the cause of what has been called dark energy. The current best guess is that it is the zero-point energy of the vacuum, but this guess is known to be off by 120 orders of magnitude.
The European Space Agency's Euclid telescope, launched on 1 July 2023, will map galaxies up to 10 billion light years away. By seeing how dark energy influences their arrangement and shape, the mission will allow scientists to see if the strength of dark energy has changed. If dark energy is found to vary throughout time it would indicate it is due to quintessence, where observed acceleration is due to the energy of a scalar field, rather than the cosmological constant. No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses again due to zero-point energy.
=== Cosmic inflation ===
Cosmic inflation is a phase of accelerated cosmic expansion just after the Big Bang. It explains the origin of the large-scale structure of the cosmos. It is believed that quantum vacuum fluctuations caused by zero-point energy, arising in the microscopic inflationary period, were later magnified to cosmic size, becoming the gravitational seeds for galaxies and structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the Universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the Universe is flat, and why no magnetic monopoles have been observed.
The mechanism for inflation is unclear; it is similar in effect to dark energy, but is a far more energetic and short-lived process. As with dark energy, the best explanation is some form of vacuum energy arising from quantum fluctuations. It may be that inflation caused baryogenesis, the hypothetical physical processes that produced an asymmetry (imbalance) between baryons and antibaryons in the very early universe, but this is far from certain.
=== Cosmology ===
Paul S. Wesson examined the cosmological implications of assuming that zero-point energy is real. Among numerous difficulties, general relativity requires that such energy not gravitate, so it cannot be similar to electromagnetic radiation.
== Alternative theories ==
There has been a long debate over the question of whether zero-point fluctuations of quantized vacuum fields are "real", i.e. do they have physical effects that cannot be interpreted by an equally valid alternative theory? Schwinger, in particular, attempted to formulate QED without reference to zero-point fluctuations via his "source theory". From such an approach it is possible to derive the Casimir effect without reference to a fluctuating field. Such a derivation was first given by Schwinger (1975) for a scalar field, and then generalized to the electromagnetic case by Schwinger, DeRaad, and Milton (1978), in which they state "the vacuum is regarded as truly a state with all physical properties equal to zero". Jaffe (2005) has highlighted a similar approach in deriving the Casimir effect, stating "the concept of zero-point fluctuations is a heuristic and calculational aid in the description of the Casimir effect, but not a necessity in QED."
Milonni has shown the necessity of the vacuum field for the formal consistency of QED. Modern physics does not know any better way to construct gauge-invariant, renormalizable theories than with zero-point energy and they would seem to be a necessity for any attempt at a unified theory.
Nevertheless, as pointed out by Jaffe, "no known phenomenon, including the Casimir effect, demonstrates that zero point energies are 'real'".
== Chaotic and emergent phenomena ==
The mathematical models used in classical electromagnetism, quantum electrodynamics (QED) and the Standard Model all view the electromagnetic vacuum as a linear system with no overall observable consequence. For example, in the case of the Casimir effect, Lamb shift, and so on these phenomena can be explained by alternative mechanisms other than action of the vacuum by arbitrary changes to the normal ordering of field operators. See the alternative theories section. This is a consequence of viewing electromagnetism as a U(1) gauge theory, which topologically does not allow the complex interaction of a field with and on itself. In higher symmetry groups and in reality, the vacuum is not a calm, randomly fluctuating, largely immaterial and passive substance, but at times can be viewed as a turbulent virtual plasma that can have complex vortices (i.e. solitons vis-à-vis particles), entangled states and a rich nonlinear structure. There are many observed nonlinear physical electromagnetic phenomena such as Aharonov–Bohm (AB) and Altshuler–Aronov–Spivak (AAS) effects, Berry, Aharonov–Anandan, Pancharatnam and Chiao–Wu phase rotation effects, Josephson effect, Quantum Hall effect, the De Haas–Van Alphen effect, the Sagnac effect and many other physically observable phenomena which would indicate that the electromagnetic potential field has real physical meaning rather than being a mathematical artifact and therefore an all encompassing theory would not confine electromagnetism as a local force as is currently done, but as a SU(2) gauge theory or higher geometry. Higher symmetries allow for nonlinear, aperiodic behaviour which manifest as a variety of complex non-equilibrium phenomena that do not arise in the linearised U(1) theory, such as multiple stable states, symmetry breaking, chaos and emergence.
What are called Maxwell's equations today are in fact a simplified version of the original equations as reformulated by Heaviside, FitzGerald, Lodge and Hertz. The original equations used Hamilton's more expressive quaternion notation, a kind of Clifford algebra, which fully subsumes the standard Maxwell vectorial equations largely used today. In the late 1880s there was a debate over the relative merits of vector analysis and quaternions. According to Heaviside the electromagnetic potential field was purely metaphysical, an arbitrary mathematical fiction, that needed to be "murdered". It was concluded that there was no need for the greater physical insights provided by the quaternions if the theory was purely local in nature. Local vector analysis has become the dominant way of using Maxwell's equations ever since. However, this strictly vectorial approach has led to a restrictive topological understanding in some areas of electromagnetism; for example, a full understanding of the energy transfer dynamics in Tesla's oscillator-shuttle-circuit can only be achieved in quaternionic algebra or higher SU(2) symmetries. It has often been argued that quaternions are not compatible with special relativity, but multiple papers have shown ways of incorporating relativity.
A good example of nonlinear electromagnetics is in high energy dense plasmas, where vortical phenomena occur which seemingly violate the second law of thermodynamics by increasing the energy gradient within the electromagnetic field and violate Maxwell's laws by creating ion currents which capture and concentrate their own and surrounding magnetic fields. In particular Lorentz force law, which elaborates Maxwell's equations is violated by these force free vortices. These apparent violations are due to the fact that the traditional conservation laws in classical and quantum electrodynamics (QED) only display linear U(1) symmetry (in particular, by the extended Noether theorem, conservation laws such as the laws of thermodynamics need not always apply to dissipative systems, which are expressed in gauges of higher symmetry). The second law of thermodynamics states that in a closed linear system entropy flow can only be positive (or exactly zero at the end of a cycle). However, negative entropy (i.e. increased order, structure or self-organisation) can spontaneously appear in an open nonlinear thermodynamic system that is far from equilibrium, so long as this emergent order accelerates the overall flow of entropy in the total system. The 1977 Nobel Prize in Chemistry was awarded to thermodynamicist Ilya Prigogine for his theory of dissipative systems that described this notion. Prigogine described the principle as "order through fluctuations" or "order out of chaos". It has been argued by some that all emergent order in the universe from galaxies, solar systems, planets, weather, complex chemistry, evolutionary biology to even consciousness, technology and civilizations are themselves examples of thermodynamic dissipative systems; nature having naturally selected these structures to accelerate entropy flow within the universe to an ever-increasing degree. 
For example, it has been estimated that the human body is 10,000 times more effective at dissipating energy per unit of mass than the Sun.
One may query what this has to do with zero-point energy. Given the complex and adaptive behaviour that arises from nonlinear systems considerable attention in recent years has gone into studying a new class of phase transitions which occur at absolute zero temperature. These are quantum phase transitions which are driven by EM field fluctuations as a consequence of zero-point energy. A good example of a spontaneous phase transition that are attributed to zero-point fluctuations can be found in superconductors. Superconductivity is one of the best known empirically quantified macroscopic electromagnetic phenomena whose basis is recognised to be quantum mechanical in origin. The behaviour of the electric and magnetic fields under superconductivity is governed by the London equations. However, it has been questioned in a series of journal articles whether the quantum mechanically canonised London equations can be given a purely classical derivation. Bostick, for instance, has claimed to show that the London equations do indeed have a classical origin that applies to superconductors and to some collisionless plasmas as well. In particular it has been asserted that the Beltrami vortices in the plasma focus display the same paired flux-tube morphology as Type II superconductors. Others have also pointed out this connection, Fröhlich has shown that the hydrodynamic equations of compressible fluids, together with the London equations, lead to a macroscopic parameter (
μ = electric charge density / mass density), without involving either quantum phase factors or the Planck constant. In essence, it has been asserted that Beltrami plasma vortex structures are able to at least simulate the morphology of Type I and Type II superconductors. This occurs because the "organised" dissipative energy of the vortex configuration comprising the ions and electrons far exceeds the "disorganised" dissipative random thermal energy. The transition from disorganised fluctuations to organised helical structures is a phase transition involving a change in the condensate's energy (i.e. the ground state or zero-point energy) but without any associated rise in temperature. This is an example of zero-point energy having multiple stable states (see Quantum phase transition, Quantum critical point, Topological degeneracy, Topological order) and where the overall system structure is independent of a reductionist or deterministic view, that "classical" macroscopic order can also causally affect quantum phenomena. Furthermore, the pair production of Beltrami vortices has been compared to the morphology of pair production of virtual particles in the vacuum.
== Purported applications ==
Physicists overwhelmingly reject any possibility that the zero-point energy field can be exploited to obtain useful energy (work) or uncompensated momentum; such efforts are seen as tantamount to perpetual motion machines.
Nevertheless, the allure of free energy has motivated such research, usually falling in the category of fringe science. As long ago as 1889 (before quantum theory or discovery of the zero point energy) Nikola Tesla proposed that useful energy could be obtained from free space, or what was assumed at that time to be an all-pervasive aether. Others have since claimed to exploit zero-point or vacuum energy with a large amount of pseudoscientific literature causing ridicule around the subject. Despite rejection by the scientific community, harnessing zero-point energy remains an interest of research, particularly in the US where it has attracted the attention of major aerospace/defence contractors and the U.S. Department of Defense as well as in China, Germany, Russia and Brazil.
=== Casimir batteries and engines ===
A common assumption is that the Casimir force is of little practical use; the argument is made that the only way to actually gain energy from the two plates is to allow them to come together (getting them apart again would then require more energy), and therefore it is a one-use-only tiny force in nature. In 1984 Robert Forward published work showing how a "vacuum-fluctuation battery" could be constructed; the battery can be recharged by making the electrical forces slightly stronger than the Casimir force to reexpand the plates.
In 1999, Pinto, a former scientist at NASA's Jet Propulsion Laboratory at Caltech in Pasadena, published in Physical Review his thought experiment (Gedankenexperiment) for a "Casimir engine". The paper showed that continuous positive net exchange of energy from the Casimir effect was possible, even stating in the abstract "In the event of no other alternative explanations, one should conclude that major technological advances in the area of endless, by-product free-energy production could be achieved."
Garret Moddel at the University of Colorado has highlighted that he believes such devices hinge on the assumption that the Casimir force is a nonconservative force. He argues that there is sufficient evidence (e.g. analysis by Scandurra (2001)) to say that the Casimir effect is a conservative force, and therefore even though such an engine can exploit the Casimir force for useful work, it cannot produce more output energy than has been input into the system.
In 2008, DARPA solicited research proposals in the area of Casimir Effect Enhancement (CEE). The goal of the program is to develop new methods to control and manipulate attractive and repulsive forces at surfaces based on engineering of the Casimir force.
A 2008 patent by Haisch and Moddel details a device that is able to extract power from zero-point fluctuations using a gas that circulates through a Casimir cavity. A published test of this concept by Moddel was performed in 2012 and seemed to give excess energy that could not be attributed to another source. However, it has not been conclusively shown to be from zero-point energy, and the theory requires further investigation.
=== Single heat baths ===
In 1951 Callen and Welton proved the quantum fluctuation-dissipation theorem (FDT), which was originally formulated in classical form by Nyquist (1928) as an explanation for observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy, in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of the FDT is that the vacuum could be treated as a heat bath coupled to a dissipative force, and as such energy could, in part, be extracted from the vacuum for potentially useful work. Such a theory has met with resistance: Macdonald (1962) and Harris (1971) claimed that extracting power from the zero-point energy is impossible, so the FDT could not be true. Grau and Kleen (1982) and Kleen (1986) argued that the Johnson noise of a resistor connected to an antenna must satisfy Planck's thermal radiation formula, thus the noise must be zero at zero temperature and the FDT must be invalid. Kiss (1988) pointed out that the existence of the zero-point term may indicate that there is a renormalization problem (i.e., a mathematical artifact) producing an unphysical term that is not actually present in measurements (in analogy with renormalization problems of ground states in quantum electrodynamics). Later, Abbott et al. (1996) arrived at a different but unclear conclusion that "zero-point energy is infinite thus it should be renormalized but not the 'zero-point fluctuations'". Despite such criticism, the FDT has been shown to be true experimentally under certain quantum, non-classical conditions. Zero-point fluctuations can, and do, contribute towards systems which dissipate energy.
A paper by Armen Allahverdyan and Theo Nieuwenhuizen in 2000 showed the feasibility of extracting zero-point energy for useful work from a single bath, without contradicting the laws of thermodynamics, by exploiting certain quantum mechanical properties.
There have been a growing number of papers showing that in some instances the classical laws of thermodynamics, such as limits on the Carnot efficiency, can be violated by exploiting negative entropy of quantum fluctuations.
Despite efforts to reconcile quantum mechanics and thermodynamics over the years, their compatibility is still an open fundamental problem. The full extent to which quantum properties can alter classical thermodynamic bounds is unknown.
=== Space travel and gravitational shielding ===
The use of zero-point energy for space travel is speculative and does not form part of the mainstream scientific consensus. A complete quantum theory of gravitation (that would deal with the role of quantum phenomena like zero-point energy) does not yet exist. Speculative papers explaining a relationship between zero-point energy and gravitational shielding effects have been proposed, but the interaction (if any) is not yet fully understood. According to the general theory of relativity, rotating matter can generate a new force of nature, known as the gravitomagnetic interaction, whose intensity is proportional to the rate of spin. In certain conditions the gravitomagnetic field can be repulsive. In neutron stars for example, it can produce a gravitational analogue of the Meissner effect, but the force produced in such an example is theorized to be exceedingly weak.
In 1963 Robert Forward, a physicist and aerospace engineer at Hughes Research Laboratories, published a paper showing how within the framework of general relativity "anti-gravitational" effects might be achieved. Since all atoms have spin, gravitational permeability may be able to differ from material to material. A strong toroidal gravitational field that acts against the force of gravity could be generated by materials that have nonlinear properties that enhance time-varying gravitational fields. Such an effect would be analogous to the nonlinear electromagnetic permeability of iron, making it an effective core (i.e. the doughnut of iron) in a transformer, whose properties are dependent on magnetic permeability. In 1966 Dewitt was first to identify the significance of gravitational effects in superconductors. Dewitt demonstrated that a magnetic-type gravitational field must result in the presence of fluxoid quantization. In 1983, Dewitt's work was substantially expanded by Ross.
From 1971 to 1974 Henry William Wallace, a scientist at GE Aerospace was issued with three patents. Wallace used Dewitt's theory to develop an experimental apparatus for generating and detecting a secondary gravitational field, which he named the kinemassic field (now better known as the gravitomagnetic field). In his three patents, Wallace describes three different methods used for detection of the gravitomagnetic field – change in the motion of a body on a pivot, detection of a transverse voltage in a semiconductor crystal, and a change in the specific heat of a crystal material having spin-aligned nuclei. There are no publicly available independent tests verifying Wallace's devices. Such an effect if any would be small. Referring to Wallace's patents, a New Scientist article in 1980 stated "Although the Wallace patents were initially ignored as cranky, observers believe that his invention is now under serious but secret investigation by the military authorities in the USA. The military may now regret that the patents have already been granted and so are available for anyone to read." A further reference to Wallace's patents occur in an electric propulsion study prepared for the Astronautics Laboratory at Edwards Air Force Base which states: "The patents are written in a very believable style which include part numbers, sources for some components, and diagrams of data. Attempts were made to contact Wallace using patent addresses and other sources but he was not located nor is there a trace of what became of his work. The concept can be somewhat justified on general relativistic grounds since rotating frames of time varying fields are expected to emit gravitational waves."
In 1986 the U.S. Air Force's then Rocket Propulsion Laboratory (RPL) at Edwards Air Force Base solicited "Non Conventional Propulsion Concepts" under a small business research and innovation program. One of the six areas of interest was "Esoteric energy sources for propulsion, including the quantum dynamic energy of vacuum space..." In the same year BAE Systems launched "Project Greenglow" to provide a "focus for research into novel propulsion systems and the means to power them".
In 1988 Kip Thorne et al. published work showing how traversable wormholes can exist in spacetime only if they are threaded by quantum fields generated by some form of exotic matter that has negative energy. In 1993 Scharnhorst and Barton showed that the speed of a photon will be increased if it travels between two Casimir plates, an example of negative energy. In the most general sense, the exotic matter needed to create wormholes would share the repulsive properties of the inflationary energy, dark energy or zero-point radiation of the vacuum.
In 1992 Evgeny Podkletnov published a heavily debated journal article claiming a specific type of rotating superconductor could shield gravitational force. Independently of this, from 1991 to 1993 Ning Li and Douglas Torr published a number of articles about gravitational effects in superconductors. One finding they derived is the source of gravitomagnetic flux in a type II superconductor material is due to spin alignment of the lattice ions. Quoting from their third paper: "It is shown that the coherent alignment of lattice ion spins will generate a detectable gravitomagnetic field, and in the presence of a time-dependent applied magnetic vector potential field, a detectable gravitoelectric field." The claimed size of the generated force has been disputed by some but defended by others. In 1997 Li published a paper attempting to replicate Podkletnov's results and showed the effect was very small, if it existed at all. Li is reported to have left the University of Alabama in 1999 to found the company AC Gravity LLC. AC Gravity was awarded a U.S. Department of Defense grant for $448,970 in 2001 to continue anti-gravity research. The grant period ended in 2002 but no results from this research were made public.
In 2002 Phantom Works, Boeing's advanced research and development facility in Seattle, approached Evgeny Podkletnov directly, but was blocked by Russian technology-transfer controls. At the time, Lieutenant General George Muellner, the outgoing head of the Boeing Phantom Works, confirmed that attempts by Boeing to work with Podkletnov had been blocked by the Russian government, also commenting that "The physical principles – and Podkletnov's device is not the only one – appear to be valid... There is basic science there. They're not breaking the laws of physics. The issue is whether the science can be engineered into something workable."
Froning and Roach (2002) put forward a paper that builds on the work of Puthoff, Haisch and Alcubierre. They used fluid-dynamic simulations to model the interaction of a vehicle (like that proposed by Alcubierre) with the zero-point field. Vacuum-field perturbations are simulated by fluid-field perturbations, and the aerodynamic resistance of viscous drag exerted on the interior of the vehicle is compared to the Lorentz force exerted by the zero-point field (a Casimir-like force is exerted on the exterior by unbalanced zero-point radiation pressures). They find that the negative energy required for an Alcubierre drive is minimized for a saucer-shaped vehicle with toroidal electromagnetic fields. The EM fields distort the vacuum-field perturbations surrounding the craft sufficiently to affect the permeability and permittivity of space.
In 2009, Giorgio Fontana and Bernd Binder presented a new method to potentially extract the zero-point energy of the electromagnetic field and nuclear forces in the form of gravitational waves. In the spheron model of the nucleus, proposed by the two-time Nobel laureate Linus Pauling, dineutrons are among the components of this structure. Similarly to a dumbbell put in a suitable rotational state, but with nuclear mass density, dineutrons are nearly ideal sources of gravitational waves at X-ray and gamma-ray frequencies. The dynamical interplay, mediated by nuclear forces, between the electrically neutral dineutrons and the electrically charged core nucleus is the fundamental mechanism by which nuclear vibrations can be converted to a rotational state of dineutrons with emission of gravitational waves. Gravity and gravitational waves are well described by general relativity, which is not a quantum theory; this implies that there is no zero-point energy for gravity in this theory, so dineutrons will emit gravitational waves like any other known source of gravitational waves. In Fontana and Binder's paper, nuclear species with dynamical instabilities, related to the zero-point energy of the electromagnetic field and nuclear forces, and possessing dineutrons, will emit gravitational waves. In experimental physics this approach is still unexplored.
In 2014 NASA's Eagleworks Laboratories announced that they had successfully validated the use of a quantum vacuum plasma thruster, which makes use of the Casimir effect for propulsion. In 2016 a scientific paper by the team of NASA scientists passed peer review for the first time. The paper suggests that the zero-point field acts as a pilot wave and that the thrust may be due to particles pushing off the quantum vacuum. While peer review doesn't guarantee that a finding or observation is valid, it does indicate that independent scientists looked over the experimental setup, results, and interpretation, could not find any obvious errors in the methodology, and found the results reasonable. In the paper, the authors identify and discuss nine potential sources of experimental error, including rogue air currents, leaky electromagnetic radiation, and magnetic interactions. Not all of them could be completely ruled out, and further peer-reviewed experimentation is needed to rule them out.
== See also ==
== References ==
=== Notes ===
=== Articles in the press ===
=== Bibliography ===
== Further reading ==
=== Press articles ===
=== Journal articles ===
=== Books ===
== External links ==
Nima Arkani-Hamed on the issue of vacuum energy and dark energy.
Steven Weinberg on the cosmological constant problem.
Macroscopic quantum phenomena are processes showing quantum behavior at the macroscopic scale, rather than at the atomic scale where quantum effects are prevalent. The best-known examples of macroscopic quantum phenomena are superfluidity and superconductivity; other examples include the quantum Hall effect, Josephson effect and topological order. Since 2000 there has been extensive experimental work on quantum gases, particularly Bose–Einstein condensates.
Between 1996 and 2016 six Nobel Prizes were given for work related to macroscopic quantum phenomena. Macroscopic quantum phenomena can be observed in superfluid helium and in superconductors, but also in dilute quantum gases, dressed photons such as polaritons and in laser light. Although these media are very different, they are all similar in that they show macroscopic quantum behavior, and in this respect they all can be referred to as quantum fluids.
Quantum phenomena are generally classified as macroscopic when the quantum states are occupied by a large number of particles (of the order of the Avogadro number) or the quantum states involved are macroscopic in size (up to kilometer-sized in superconducting wires).
== Consequences of the macroscopic occupation ==
The concept of macroscopically occupied quantum states was introduced by Fritz London. In this section it will be explained what it means if a single state is occupied by a very large number of particles. We start with the wave function of the state written as
with Ψ0 the amplitude and φ the phase. The wave function is normalized so that
The physical interpretation of the quantity
depends on the number of particles. Fig. 1 represents a container with a certain number of particles with a small control volume ΔV inside. We check from time to time how many particles are in the control box. We distinguish three cases:
There is only one particle. In this case the control volume is empty most of the time. However, there is a certain probability of finding the particle in it, given by Eq. (3). The probability is proportional to ΔV. The factor ΨΨ∗ is called the probability density.
If the number of particles is a bit larger there are usually some particles inside the box. We can define an average, but the actual number of particles in the box has relatively large fluctuations around this average.
In the case of a very large number of particles there will always be a lot of particles in the small box. The number will fluctuate but the fluctuations around the average are relatively small. The average number is proportional to ΔV and ΨΨ∗ is now interpreted as the particle density.
In quantum mechanics the particle probability flow density Jp (unit: particles per second per m2), also called probability current, can be derived from the Schrödinger equation to be
with q the charge of the particle and {\displaystyle {\vec {A}}} the vector potential; cc stands for the complex conjugate of the other term inside the brackets. For neutral particles q = 0; for superconductors q = −2e (with e the elementary charge), the charge of Cooper pairs. With Eq. (1)
If the wave function is macroscopically occupied the particle probability flow density becomes a particle flow density. We introduce the fluid velocity vs via the mass flow density
The density (mass per volume) is
so Eq. (5) results in
This important relation connects the velocity, a classical concept, of the condensate with the phase of the wave function, a quantum-mechanical concept.
== Superfluidity ==
At temperatures below the lambda point, helium shows the unique property of superfluidity. The fraction of the liquid that forms the superfluid component is a macroscopic quantum fluid. The helium atom is a neutral particle, so q = 0. Furthermore, when considering helium-4, the relevant particle mass is m = m4, so Eq. (8) reduces to
For an arbitrary loop in the liquid, this gives
Due to the single-valued nature of the wave function
with n integer, we have
The quantity
is the quantum of circulation. For a circular motion with radius r
In case of a single quantum (n = 1)
When superfluid helium is put in rotation, Eq. (13) will not be satisfied for all loops inside the liquid unless the rotation is organized around vortex lines (as depicted in Fig. 2). These lines have a vacuum core with a diameter of about 1 Å (which is smaller than the average particle distance). The superfluid helium rotates around the core with very high speeds. Just outside the core (r = 1 Å), the velocity is as large as 160 m/s. The cores of the vortex lines and the container rotate as a solid body around the rotation axes with the same angular velocity. The number of vortex lines increases with the angular velocity (as shown in the upper half of the figure). Note that the two right figures both contain six vortex lines, but the lines are organized in different stable patterns.
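The numbers quoted in this paragraph can be checked directly from the quantum of circulation κ = h/m4 and the single-quantum velocity v = κ/(2πr). A quick numeric sketch (the constants below are standard CODATA values, not taken from the source text):

```python
import math

h = 6.62607015e-34   # Planck constant (J s)
m4 = 6.6446573e-27   # mass of a helium-4 atom (kg)

kappa = h / m4       # quantum of circulation (m^2/s), ~1.0e-7
r = 1e-10            # just outside the vortex core: 1 angstrom, in metres
v = kappa / (2 * math.pi * r)   # velocity for a single quantum (n = 1)

print(f"kappa = {kappa:.3e} m^2/s")
print(f"v(r = 1 A) = {v:.0f} m/s")   # ~159 m/s, matching the ~160 m/s in the text
```

The result reproduces the "as large as 160 m/s" figure for the flow just outside a vortex core.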
== Superconductivity ==
In the original paper Ginzburg and Landau observed the existence of two types of superconductors depending
on the energy of the interface between the normal and superconducting states. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state consisting of a baroque pattern of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II.
The most important finding from Ginzburg–Landau theory was made by Alexei Abrikosov in 1957.
He used Ginzburg–Landau theory to explain experiments on superconducting alloys and thin films. He found that in a type-II superconductor in a high magnetic field, the field penetrates in a triangular lattice of quantized tubes of flux vortices. For this and related work, he was awarded the Nobel Prize in 2003 with Ginzburg and Leggett.
=== Fluxoid quantization ===
For superconductors the bosons involved are the so-called Cooper pairs which are quasiparticles formed by two electrons. Hence m = 2me and q = −2e where me and e are the mass of an electron and the elementary charge. It follows from Eq. (8) that
Integrating Eq. (15) over a closed loop gives
As in the case of helium we define the vortex strength
and use the general relation
where Φ is the magnetic flux enclosed by the loop. The so-called fluxoid is defined by
In general the values of κ and Φ depend on the choice of the loop. Due to the single-valued nature of the wave function and Eq. (16) the fluxoid is quantized
The unit of quantization is called the flux quantum
The flux quantum plays a very important role in superconductivity. The Earth's magnetic field is very small (about 50 μT), yet it generates roughly one flux quantum in an area of 6 μm by 6 μm. So the flux quantum is very small. Nevertheless it was measured to an accuracy of nine digits, as shown in Eq. (21). Nowadays the value given by Eq. (21) is exact by definition.
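Both claims in this paragraph, the size of the flux quantum Φ0 = h/2e and the Earth's-field estimate, are a few lines of arithmetic (a sketch; since the 2019 SI redefinition, h and e are exact, so Φ0 is exact by definition):

```python
h = 6.62607015e-34   # Planck constant (J s), exact
e = 1.602176634e-19  # elementary charge (C), exact

phi0 = h / (2 * e)   # flux quantum (Wb)
print(f"Phi_0 = {phi0:.6e} Wb")             # ~2.067834e-15 Wb

B_earth = 50e-6                              # Earth's field, about 50 uT
area = (6e-6) ** 2                           # a 6 um x 6 um loop
flux = B_earth * area
print(f"flux / Phi_0 = {flux / phi0:.2f}")   # ~0.9, i.e. about one flux quantum
```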
In Fig. 3 two situations are depicted of superconducting rings in an external magnetic field. One case is a thick-walled ring and in the other case the ring is also thick-walled, but is interrupted by a weak link. In the latter case we will meet the famous Josephson relations. In both cases we consider a loop inside the material. In general a superconducting circulation current will flow in the material. The total magnetic flux in the loop is the sum of the applied flux Φa and the self-induced flux Φs induced by the circulation current
=== Thick ring ===
The first case is a thick ring in an external magnetic field (Fig. 3a). The currents in a superconductor flow only in a thin layer at the surface. The thickness of this layer is determined by the so-called London penetration depth; it is of μm size or less. We consider a loop far away from the surface, so that vs = 0 everywhere and hence κ = 0. In that case the fluxoid is equal to the magnetic flux (Φv = Φ). With vs = 0, Eq. (15) reduces to
Taking the rotation gives
Using the well-known relations {\displaystyle {\vec {\nabla }}\times {\vec {\nabla }}\varphi =0} and {\displaystyle {\vec {\nabla }}\times {\vec {A}}={\vec {B}}} shows that the magnetic field in the bulk of the superconductor is zero as well. So, for thick rings, the total magnetic flux in the loop is quantized according to
=== Interrupted ring, weak links ===
Weak links play a very important role in modern superconductivity. In most cases weak links are oxide barriers between two superconducting thin films, but it can also be a crystal boundary (in the case of high-Tc superconductors). A schematic representation is given in Fig. 4. Now consider the ring which is thick everywhere except for a small section where the ring is closed via a weak link (Fig. 3b). The velocity is zero except near the weak link. In these regions the velocity contribution to the total phase change in the loop is given by (with Eq. (15))
The line integral is over the contact from one side to the other in such a way that the end points of the line are well inside the bulk of the superconductor where vs = 0. So the value of the line integral is well-defined (e.g. independent of the choice of the end points). With Eqs. (19), (22), and (26)
Without proof we state that the supercurrent through the weak link is given by the so-called DC Josephson relation
The voltage over the contact is given by the AC Josephson relation
The names of these relations (DC and AC relations) are misleading, since they both hold in DC and AC situations. In the steady state (constant Δφ∗) Eq. (29) shows that V = 0 while a nonzero current flows through the junction. In the case of a constant applied voltage (voltage bias) Eq. (29) can be integrated easily and gives
Substitution in Eq. (28) gives
This is an AC current. The frequency
is called the Josephson frequency. One μV gives a frequency of about 500 MHz. By using Eq. (32) the flux quantum is determined with high precision, as given in Eq. (21).
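The Josephson frequency ν = 2eV/h of Eq. (32) is easy to evaluate numerically; the helper function name below is illustrative, not from the source:

```python
h = 6.62607015e-34   # Planck constant (J s)
e = 1.602176634e-19  # elementary charge (C)

def josephson_frequency(voltage):
    """AC Josephson frequency nu = 2 e V / h, as in Eq. (32)."""
    return 2 * e * voltage / h

nu = josephson_frequency(1e-6)   # a bias of one microvolt
print(f"nu = {nu / 1e6:.1f} MHz")  # ~483.6 MHz, the "about 500 MHz" of the text
```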
The energy difference of a Cooper pair, moving from one side of the contact to the other, is ΔE = 2eV. With this expression Eq. (32) can be written as ΔE = hν which is the relation for the energy of a photon with frequency ν.
The AC Josephson relation (Eq. (29)) can be easily understood in terms of Newton's law (or from one of the London equations). We start with Newton's law
{\displaystyle {\vec {F}}=m{\frac {\mathrm {d} {\vec {v}}_{s}}{\mathrm {d} t}}.}
Substituting the expression for the Lorentz force
{\displaystyle {\vec {F}}=q\left({\vec {E}}+{\vec {v}}_{s}\times {\vec {B}}\right)}
and using the general expression for the co-moving time derivative
{\displaystyle {\frac {\mathrm {d} {\vec {v}}_{s}}{\mathrm {d} t}}={\frac {\partial {\vec {v}}_{s}}{\partial t}}+{\frac {1}{2}}{\vec {\nabla }}v_{s}^{2}-{\vec {v}}_{s}\times \left({\vec {\nabla }}\times {\vec {v}}_{s}\right)}
gives
{\displaystyle {\frac {q}{m}}\left({\vec {E}}+{\vec {v}}_{s}\times {\vec {B}}\right)={\frac {\partial {\vec {v}}_{s}}{\partial t}}+{\frac {1}{2}}{\vec {\nabla }}v_{s}^{2}-{\vec {v}}_{s}\times \left({\vec {\nabla }}\times {\vec {v}}_{s}\right).}
Eq. (8) gives
{\displaystyle 0={\vec {\nabla }}\times {\vec {v}}_{s}+{\frac {q}{m}}{\vec {\nabla }}\times {\vec {A}}={\vec {\nabla }}\times {\vec {v}}_{s}+{\frac {q}{m}}{\vec {B}}}
so
{\displaystyle {\frac {q}{m}}{\vec {E}}={\frac {\partial {\vec {v}}_{s}}{\partial t}}+{\frac {1}{2}}{\vec {\nabla }}v_{s}^{2}.}
Take the line integral of this expression. At the end points the velocities are zero, so the ∇v² term gives no contribution. Using
{\displaystyle \int {\vec {E}}\cdot \mathrm {d} {\vec {\ell }}=-V}
and Eq. (26), with q = −2e and m = 2me, gives Eq. (29).
=== DC SQUID ===
Fig. 5 shows a so-called DC SQUID. It consists of two superconductors connected by two weak links. The fluxoid quantization of a loop through the two bulk superconductors and the two weak links demands
If the self-inductance of the loop can be neglected the magnetic flux in the loop Φ is equal to the applied flux
with B the magnetic field, applied perpendicular to the surface, and A the surface area of the loop. The total supercurrent is given by
Substitution of Eq. (33) in Eq. (35) gives
Using a well-known geometrical formula we get
Since the sin-function can vary only between −1 and +1 a steady solution is only possible if the applied current is below a critical current given by
Note that the critical current is periodic in the applied flux with period Φ0. The dependence of the critical current on the applied flux is depicted in Fig. 6. It has a strong resemblance to the interference pattern generated by a laser beam behind a double slit. In practice the critical current is not zero at half-integer values of the flux quantum of the applied flux. This is because the self-inductance of the loop cannot be neglected.
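The interference-like modulation of the critical current can be sketched in a few lines. The closed-form expression used below, I_c(Φ) = 2I_0 |cos(πΦ/Φ0)|, is the standard ideal-SQUID result under the assumptions already made in the text (identical junctions, negligible self-inductance); function and parameter names are illustrative:

```python
import math

def squid_critical_current(flux, i0=1.0, phi0=1.0):
    """Critical current of an ideal DC SQUID (identical junctions,
    negligible self-inductance): I_c = 2 I_0 |cos(pi * flux / Phi_0)|.
    The pattern is periodic in the applied flux with period Phi_0."""
    return 2 * i0 * abs(math.cos(math.pi * flux / phi0))

# Periodicity: the pattern repeats every flux quantum
assert math.isclose(squid_critical_current(0.3), squid_critical_current(1.3))

print(squid_critical_current(0.0))              # maximum: 2 I_0
print(round(squid_critical_current(0.5), 9))    # zero at half-integer flux (ideal case)
```

In a real device the half-integer minima do not reach zero, precisely because the self-inductance of the loop cannot be neglected.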
=== Type II superconductivity ===
Type-II superconductivity is characterized by two critical fields, called Bc1 and Bc2. At a magnetic field Bc1 the applied magnetic field starts to penetrate the sample, but the sample is still superconducting. Only at a field of Bc2 does the sample become completely normal. For fields between Bc1 and Bc2 magnetic flux penetrates the superconductor in well-organized patterns, the so-called Abrikosov vortex lattice, similar to the pattern shown in Fig. 2. A cross section of the superconducting plate is given in Fig. 7. Far away from the plate the field is homogeneous, but in the material superconducting currents flow which squeeze the field into bundles of exactly one flux quantum. The typical field in the core is as big as 1 tesla. The currents around the vortex core flow in a layer of about 50 nm with current densities on the order of 15×10¹² A/m². That corresponds to 15 million amperes in a wire of one mm².
== Dilute quantum gases ==
The classical types of quantum systems, superconductors and superfluid helium, were discovered in the beginning of the 20th century. Near the end of the 20th century, scientists discovered how to create very dilute atomic or molecular gases, cooled first by laser cooling and then by evaporative cooling. They are trapped using magnetic fields or optical dipole potentials in ultrahigh vacuum chambers. Isotopes which have been used include rubidium (Rb-87 and Rb-85), strontium (Sr-87, Sr-86, and Sr-84), potassium (K-39 and K-40), sodium (Na-23), lithium (Li-7 and Li-6), and hydrogen (H-1). The temperatures to which they can be cooled are as low as a few nanokelvin. The developments have been very fast in the past few years. A team of NIST and the University of Colorado has succeeded in creating and observing vortex quantization in these systems. The concentration of vortices increases with the angular velocity of the rotation, similar to the case of superfluid helium and superconductivity.
== See also ==
== References and footnotes ==
The de Broglie–Bohm theory is an interpretation of quantum mechanics which postulates that, in addition to the wavefunction, an actual configuration of particles exists, even when unobserved. The evolution over time of the configuration of all particles is defined by a guiding equation. The evolution of the wave function over time is given by the Schrödinger equation. The theory is named after Louis de Broglie (1892–1987) and David Bohm (1917–1992).
The theory is deterministic and explicitly nonlocal: the velocity of any one particle depends on the value of the guiding equation, which depends on the configuration of all the particles under consideration.
Measurements are a particular case of quantum processes described by the theory, for which it yields the same quantum predictions as other interpretations of quantum mechanics. The theory does not have a "measurement problem", because the particles have a definite configuration at all times. The Born rule in de Broglie–Bohm theory is not a postulate. Rather, in this theory the link between the probability density and the wave function has the status of a theorem, a result of a separate postulate, the "quantum equilibrium hypothesis", which is additional to the basic principles governing the wave function.
There are several equivalent mathematical formulations of the theory.
== Overview ==
De Broglie–Bohm theory is based on the following postulates:
There is a configuration q of the universe, described by coordinates q^k, which is an element of the configuration space Q. The configuration space is different for different versions of pilot-wave theory. For example, this may be the space of positions Q_k of N particles, or, in the case of field theory, the space of field configurations ϕ(x). The configuration evolves (for spin = 0) according to the guiding equation
{\displaystyle m_{k}{\frac {dq^{k}}{dt}}(t)=\hbar \nabla _{k}\operatorname {Im} \ln \psi (q,t)=\hbar \operatorname {Im} \left({\frac {\nabla _{k}\psi }{\psi }}\right)(q,t)={\frac {m_{k}\mathbf {j} _{k}}{\psi ^{*}\psi }}=\operatorname {Re} \left({\frac {\mathbf {\hat {P}} _{k}\Psi }{\Psi }}\right),}
where j is the probability current (probability flux) and P̂ is the momentum operator. Here, ψ(q, t) is the standard complex-valued wavefunction from quantum theory, which evolves according to the Schrödinger equation
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi (q,t)=-\sum _{i=1}^{N}{\frac {\hbar ^{2}}{2m_{i}}}\nabla _{i}^{2}\psi (q,t)+V(q)\psi (q,t).}
This completes the specification of the theory for any quantum theory with Hamilton operator of type {\textstyle H=\sum {\frac {1}{2m_{i}}}{\hat {p}}_{i}^{2}+V({\hat {q}})}.
The configuration is distributed according to |ψ(q, t)|² at some moment of time t, and this consequently holds for all times. Such a state is named quantum equilibrium. With quantum equilibrium, this theory agrees with the results of standard quantum mechanics.
Even though this latter relation is frequently presented as an axiom of the theory, Bohm presented it as derivable from statistical-mechanical arguments in the original papers of 1952. This argument was further supported by the work of Bohm in 1953 and was substantiated by Vigier and Bohm's paper of 1954, in which they introduced stochastic fluid fluctuations that drive a process of asymptotic relaxation from quantum non-equilibrium to quantum equilibrium (ρ → |ψ|2).
=== Double-slit experiment ===
The double-slit experiment is an illustration of wave–particle duality. In it, a beam of particles (such as electrons) travels through a barrier that has two slits. If a detector screen is on the side beyond the barrier, the pattern of detected particles shows interference fringes characteristic of waves arriving at the screen from two sources (the two slits); however, the interference pattern is made up of individual dots corresponding to particles that had arrived on the screen. The system seems to exhibit the behaviour of both waves (interference patterns) and particles (dots on the screen).
If this experiment is modified so that one slit is closed, no interference pattern is observed. Thus, the state of both slits affects the final results. It can also be arranged to have a minimally invasive detector at one of the slits to detect which slit the particle went through. When that is done, the interference pattern disappears.
In de Broglie–Bohm theory, the wavefunction is defined at both slits, but each particle has a well-defined trajectory that passes through exactly one of the slits. The final position of the particle on the detector screen and the slit through which the particle passes is determined by the initial position of the particle. Such initial position is not knowable or controllable by the experimenter, so there is an appearance of randomness in the pattern of detection. In Bohm's 1952 papers he used the wavefunction to construct a quantum potential that, when included in Newton's equations, gave the trajectories of the particles streaming through the two slits. In effect the wavefunction interferes with itself and guides the particles by the quantum potential in such a way that the particles avoid the regions in which the interference is destructive and are attracted to the regions in which the interference is constructive, resulting in the interference pattern on the detector screen.
To explain the behavior when the particle is detected to go through one slit, one needs to appreciate the role of the conditional wavefunction and how it results in the collapse of the wavefunction; this is explained below. The basic idea is that the environment registering the detection effectively separates the two wave packets in configuration space.
== Theory ==
=== Pilot wave ===
The de Broglie–Bohm theory describes a pilot wave ψ(q, t) ∈ ℂ in a configuration space Q and trajectories q(t) ∈ Q of particles as in classical mechanics but defined by non-Newtonian mechanics. At every moment of time there exists not only a wavefunction, but also a well-defined configuration of the whole universe (i.e., the system as defined by the boundary conditions used in solving the Schrödinger equation).
The de Broglie–Bohm theory works on particle positions and trajectories like classical mechanics, but the dynamics are different. In classical mechanics, the accelerations of the particles are imparted directly by forces, which exist in physical three-dimensional space. In de Broglie–Bohm theory, the quantum field "exerts a new kind of 'quantum-mechanical' force".: 76  Bohm hypothesized that each particle has a "complex and subtle inner structure" that provides the capacity to react to the information provided by the wavefunction via the quantum potential. Also, unlike in classical mechanics, physical properties (e.g., mass, charge) are spread out over the wavefunction in de Broglie–Bohm theory, not localized at the position of the particle.
The wavefunction itself, and not the particles, determines the dynamical evolution of the system: the particles do not act back onto the wave function. As Bohm and Hiley worded it, "the Schrödinger equation for the quantum field does not have sources, nor does it have any other way by which the field could be directly affected by the condition of the particles [...] the quantum theory can be understood completely in terms of the assumption that the quantum field has no sources or other forms of dependence on the particles". P. Holland considers this lack of reciprocal action of particles and wave function to be one "[a]mong the many nonclassical properties exhibited by this theory". Holland later called this a merely apparent lack of back reaction, due to the incompleteness of the description.
In what follows below, the setup for one particle moving in ℝ³ is given, followed by the setup for N particles moving in 3 dimensions. In the first instance, configuration space and real space are the same, while in the second, real space is still ℝ³, but configuration space becomes ℝ³ᴺ. While the particle positions themselves are in real space, the velocity field and wavefunction are on configuration space, which is how particles are entangled with each other in this theory.
Extensions to this theory include spin and more complicated configuration spaces.
We use variations of Q for particle positions, while ψ represents the complex-valued wavefunction on configuration space.
=== Guiding equation ===
For a spinless single particle moving in ℝ³, the particle's velocity is
{\displaystyle {\frac {d\mathbf {Q} }{dt}}(t)={\frac {\hbar }{m}}\operatorname {Im} \left({\frac {\nabla \psi }{\psi }}\right)(\mathbf {Q} ,t).}
For many particles, labeled Q_k for the k-th particle, their velocities are
{\displaystyle {\frac {d\mathbf {Q} _{k}}{dt}}(t)={\frac {\hbar }{m_{k}}}\operatorname {Im} \left({\frac {\nabla _{k}\psi }{\psi }}\right)(\mathbf {Q} _{1},\mathbf {Q} _{2},\ldots ,\mathbf {Q} _{N},t).}
The main fact to notice is that this velocity field depends on the actual positions of all of the N particles in the universe. As explained below, in most experimental situations, the influence of all of those particles can be encapsulated into an effective wavefunction for a subsystem of the universe.
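The single-particle guiding equation can be integrated numerically. The sketch below (an illustration under stated assumptions, not from the source) evolves one Bohmian trajectory in a free 1D Gaussian packet with ħ = m = σ0 = 1, for which the exact trajectory x(t) = x(0)·√(1 + (ħt/2mσ0²)²) is known, and compares the two:

```python
import cmath
import math

hbar, m, sigma0 = 1.0, 1.0, 1.0  # natural units; initial packet width sigma0

def psi(x, t):
    """Free-particle Gaussian wave packet, zero mean momentum (unnormalized)."""
    s = sigma0**2 + 1j * hbar * t / (2 * m)   # complex width parameter
    return cmath.exp(-x**2 / (4 * s)) / cmath.sqrt(s)

def velocity(x, t, dx=1e-6):
    """Guiding equation: v = (hbar/m) Im( (dpsi/dx) / psi )."""
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2 * dx)
    return (hbar / m) * (dpsi / psi(x, t)).imag

# Integrate one trajectory with a simple Euler scheme
x, t, dt = 1.0, 0.0, 1e-3
while t < 2.0:
    x += velocity(x, t) * dt
    t += dt

# Exact result: trajectories scale with the packet width
exact = 1.0 * math.sqrt(1 + (hbar * t / (2 * m * sigma0**2))**2)
print(x, exact)  # the two values agree to roughly the step size
```

Note how the trajectory simply rides the spreading of |ψ|², a direct consequence of the velocity field being built from the phase of the wavefunction.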
=== Schrödinger equation ===
The one-particle Schrödinger equation governs the time evolution of a complex-valued wavefunction on ℝ³. The equation represents a quantized version of the total energy of a classical system evolving under a real-valued potential function V on ℝ³:
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi =-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\psi +V\psi .}
For many particles, the equation is the same except that ψ and V are now on configuration space, ℝ³ᴺ:
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi =-\sum _{k=1}^{N}{\frac {\hbar ^{2}}{2m_{k}}}\nabla _{k}^{2}\psi +V\psi .}
This is the same wavefunction as in conventional quantum mechanics.
=== Relation to the Born rule ===
In Bohm's original papers, he discusses how de Broglie–Bohm theory results in the usual measurement results of quantum mechanics. The main idea is that this is true if the positions of the particles satisfy the statistical distribution given by |ψ|². That distribution is guaranteed to hold for all time by the guiding equation if the initial distribution of the particles satisfies |ψ|².
For a given experiment, one can postulate this as being true and verify it experimentally. But, as argued by Dürr et al., one needs to argue that this distribution for subsystems is typical. The authors argue that {\displaystyle |\psi |^{2}}, by virtue of its equivariance under the dynamical evolution of the system, is the appropriate measure of typicality for initial conditions of the positions of the particles. They then prove that the vast majority of possible initial configurations will give rise to statistics obeying the Born rule (i.e., {\displaystyle |\psi |^{2}}) for measurement outcomes. In summary, in a universe governed by the de Broglie–Bohm dynamics, Born-rule behavior is typical.
The situation is thus analogous to the situation in classical statistical physics. A low-entropy initial condition will, with overwhelmingly high probability, evolve into a higher-entropy state: behavior consistent with the second law of thermodynamics is typical. There are anomalous initial conditions that would give rise to violations of the second law; however, in the absence of some very detailed evidence supporting the realization of one of those conditions, it would be quite unreasonable to expect anything but the actually observed uniform increase of entropy. Similarly in the de Broglie–Bohm theory, there are anomalous initial conditions that would produce measurement statistics in violation of the Born rule (conflicting with the predictions of standard quantum theory), but the typicality theorem shows that, absent some specific reason to believe one of those special initial conditions was in fact realized, Born-rule behavior is what one should expect.
It is in this qualified sense that the Born rule is, for the de Broglie–Bohm theory, a theorem rather than (as in ordinary quantum theory) an additional postulate.
It can also be shown that a distribution of particles which is not distributed according to the Born rule (that is, a distribution "out of quantum equilibrium") and evolving under the de Broglie–Bohm dynamics is overwhelmingly likely to evolve dynamically into a state distributed as {\displaystyle |\psi |^{2}}.
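The equivariance property can be seen explicitly in the free-Gaussian example, where the Bohmian trajectories are known in closed form, x(t) = x(0)·σ(t)/σ(0) (a standard textbook result for this packet; the demonstration below, with hbar = m = σ₀ = 1, is our own):

```python
import numpy as np

# Equivariance sketch: sample an ensemble from |psi(x, 0)|^2, transport each
# sample along its Bohmian trajectory x(t) = x(0) * sigma(t)/sigma(0), and
# check that the ensemble still matches |psi(x, t)|^2.
rng = np.random.default_rng(0)
hbar = m = 1.0
sigma0 = 1.0
a = hbar / (2 * m * sigma0**2)

# |psi(x, 0)|^2 for a Gaussian packet of width sigma0 is N(0, sigma0^2).
x0 = rng.normal(0.0, sigma0, size=200_000)

t = 3.0
sigma_t = sigma0 * np.sqrt(1 + (a * t)**2)   # spread packet width at time t
x_t = x0 * sigma_t / sigma0                  # transported along trajectories

# The evolved ensemble should be Gaussian with width sigma_t, i.e. |psi(x, t)|^2.
print(np.std(x_t), sigma_t)
```

The deterministic trajectories simply carry the initial {\displaystyle |\psi |^{2}} distribution into the later {\displaystyle |\psi |^{2}} distribution, which is the content of equivariance in this special case.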
=== The conditional wavefunction of a subsystem ===
In the formulation of the de Broglie–Bohm theory, there is only a wavefunction for the entire universe (which always evolves by the Schrödinger equation). Here, the "universe" is simply the system limited by the same boundary conditions used to solve the Schrödinger equation. However, once the theory is formulated, it is convenient to introduce a notion of wavefunction also for subsystems of the universe. Let us write the wavefunction of the universe as {\displaystyle \psi (t,q^{\text{I}},q^{\text{II}})}, where {\displaystyle q^{\text{I}}} denotes the configuration variables associated to some subsystem (I) of the universe, and {\displaystyle q^{\text{II}}} denotes the remaining configuration variables. Denote respectively by {\displaystyle Q^{\text{I}}(t)} and {\displaystyle Q^{\text{II}}(t)} the actual configuration of subsystem (I) and of the rest of the universe. For simplicity, we consider here only the spinless case. The conditional wavefunction of subsystem (I) is defined by
{\displaystyle \psi ^{\text{I}}(t,q^{\text{I}})=\psi (t,q^{\text{I}},Q^{\text{II}}(t)).}
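The definition amounts to slicing the universal wavefunction at the actual configuration of the environment. A toy two-variable illustration (entirely our own construction) makes this concrete:

```python
import numpy as np

# Conditional wavefunction sketch: psi_I_cond(qI) = psi(qI, Q_II_actual),
# the universal wavefunction evaluated at the actual configuration of the
# rest of the universe.
qI = np.linspace(-3, 3, 301)
qII = np.linspace(-3, 3, 301)

# A product (unentangled) universal wavefunction psi = psi_I(qI) * psi_II(qII).
psi_I = np.exp(-qI**2)
psi_II = np.exp(-(qII - 1.0)**2 / 4)
psi = np.outer(psi_I, psi_II)

Q_II_actual = 0.5                       # actual Bohmian configuration of (II)
j = np.argmin(np.abs(qII - Q_II_actual))
conditional = psi[:, j]                 # slice at qII = Q_II(t)

# For a product state the slice equals psi_I up to a constant scalar factor.
ratio = conditional / psi_I
print(ratio.min(), ratio.max())         # constant across qI
```

For an entangled universal wavefunction the slice would genuinely depend on {\displaystyle Q^{\text{II}}(t)}, which is why the conditional wavefunction need not obey its own Schrödinger equation, as discussed below.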
It follows immediately from the fact that {\displaystyle Q(t)=(Q^{\text{I}}(t),Q^{\text{II}}(t))} satisfies the guiding equation that the configuration {\displaystyle Q^{\text{I}}(t)} also satisfies a guiding equation identical to the one presented in the formulation of the theory, with the universal wavefunction {\displaystyle \psi } replaced with the conditional wavefunction {\displaystyle \psi ^{\text{I}}}. Also, the fact that {\displaystyle Q(t)} is random with probability density given by the square modulus of {\displaystyle \psi (t,\cdot )} implies that the conditional probability density of {\displaystyle Q^{\text{I}}(t)} given {\displaystyle Q^{\text{II}}(t)} is given by the square modulus of the (normalized) conditional wavefunction {\displaystyle \psi ^{\text{I}}(t,\cdot )} (in the terminology of Dürr et al. this fact is called the fundamental conditional probability formula).
Unlike the universal wavefunction, the conditional wavefunction of a subsystem does not always evolve by the Schrödinger equation, but in many situations it does. For instance, if the universal wavefunction factors as
{\displaystyle \psi (t,q^{\text{I}},q^{\text{II}})=\psi ^{\text{I}}(t,q^{\text{I}})\psi ^{\text{II}}(t,q^{\text{II}}),}
then the conditional wavefunction of subsystem (I) is (up to an irrelevant scalar factor) equal to {\displaystyle \psi ^{\text{I}}} (this is what standard quantum theory would regard as the wavefunction of subsystem (I)). If, in addition, the Hamiltonian does not contain an interaction term between subsystems (I) and (II), then {\displaystyle \psi ^{\text{I}}} does satisfy a Schrödinger equation. More generally, assume that the universal wave function {\displaystyle \psi } can be written in the form
{\displaystyle \psi (t,q^{\text{I}},q^{\text{II}})=\psi ^{\text{I}}(t,q^{\text{I}})\psi ^{\text{II}}(t,q^{\text{II}})+\phi (t,q^{\text{I}},q^{\text{II}}),}
where {\displaystyle \phi } solves the Schrödinger equation and {\displaystyle \phi (t,q^{\text{I}},Q^{\text{II}}(t))=0} for all {\displaystyle t} and {\displaystyle q^{\text{I}}}. Then, again, the conditional wavefunction of subsystem (I) is (up to an irrelevant scalar factor) equal to {\displaystyle \psi ^{\text{I}}}, and if the Hamiltonian does not contain an interaction term between subsystems (I) and (II), then {\displaystyle \psi ^{\text{I}}} satisfies a Schrödinger equation.
The fact that the conditional wavefunction of a subsystem does not always evolve by the Schrödinger equation is related to the fact that the usual collapse rule of standard quantum theory emerges from the Bohmian formalism when one considers conditional wavefunctions of subsystems.
== Extensions ==
=== Relativity ===
Pilot-wave theory is explicitly nonlocal, which is in ostensible conflict with special relativity. Various extensions of "Bohm-like" mechanics exist that attempt to resolve this problem. Bohm himself in 1953 presented an extension of the theory satisfying the Dirac equation for a single particle. However, this was not extensible to the many-particle case because it used an absolute time.
A renewed interest in constructing Lorentz-invariant extensions of Bohmian theory arose in the 1990s; see Bohm and Hiley: The Undivided Universe and references therein. Another approach is given by Dürr et al., who use Bohm–Dirac models and a Lorentz-invariant foliation of space-time.
Thus, Dürr et al. (1999) showed that it is possible to formally restore Lorentz invariance for the Bohm–Dirac theory by introducing additional structure. This approach still requires a foliation of space-time. While this is in conflict with the standard interpretation of relativity, the preferred foliation, if unobservable, does not lead to any empirical conflicts with relativity. In 2013, Dürr et al. suggested that the required foliation could be covariantly determined by the wavefunction.
The relation between nonlocality and preferred foliation can be better understood as follows. In de Broglie–Bohm theory, nonlocality manifests as the fact that the velocity and acceleration of one particle depends on the instantaneous positions of all other particles. On the other hand, in the theory of relativity the concept of instantaneousness does not have an invariant meaning. Thus, to define particle trajectories, one needs an additional rule that defines which space-time points should be considered instantaneous. The simplest way to achieve this is to introduce a preferred foliation of space-time by hand, such that each hypersurface of the foliation defines a hypersurface of equal time.
Initially, it had been considered impossible to set out a description of photon trajectories in the de Broglie–Bohm theory in view of the difficulties of describing bosons relativistically. In 1996, Partha Ghose presented a relativistic quantum-mechanical description of spin-0 and spin-1 bosons starting from the Duffin–Kemmer–Petiau equation, setting out Bohmian trajectories for massive bosons and for massless bosons (and therefore photons). In 2001, Jean-Pierre Vigier emphasized the importance of deriving a well-defined description of light in terms of particle trajectories in the framework of either the Bohmian mechanics or the Nelson stochastic mechanics. The same year, Ghose worked out Bohmian photon trajectories for specific cases. Subsequent weak-measurement experiments yielded trajectories that coincide with the predicted trajectories. The significance of these experimental findings is controversial.
Chris Dewdney and G. Horton have proposed a relativistically covariant, wave-functional formulation of Bohm's quantum field theory and have extended it to a form that allows the inclusion of gravity.
Nikolić has proposed a Lorentz-covariant formulation of the Bohmian interpretation of many-particle wavefunctions. He has developed a generalized relativistic-invariant probabilistic interpretation of quantum theory, in which {\displaystyle |\psi |^{2}} is no longer a probability density in space, but a probability density in space-time. He uses this generalized probabilistic interpretation to formulate a relativistic-covariant version of de Broglie–Bohm theory without introducing a preferred foliation of space-time. His work also covers the extension of the Bohmian interpretation to a quantization of fields and strings.
Roderick I. Sutherland at the University of Sydney has a Lagrangian formalism for the pilot wave and its beables. It draws on Yakir Aharonov's retrocausal weak measurements to explain many-particle entanglement in a special-relativistic way without the need for configuration space. The basic idea was already published by Olivier Costa de Beauregard in the 1950s and is also used by John Cramer in his transactional interpretation, except for the beables that exist between the von Neumann strong projection-operator measurements. Sutherland's Lagrangian includes two-way action-reaction between pilot wave and beables. It is therefore a post-quantum non-statistical theory with final boundary conditions that violate the no-signal theorems of quantum theory. Just as special relativity is a limiting case of general relativity when the spacetime curvature vanishes, so too is statistical no-entanglement-signaling quantum theory with the Born rule a limiting case of the post-quantum action-reaction Lagrangian when the reaction is set to zero and the final boundary condition is integrated out.
=== Spin ===
To incorporate spin, the wavefunction becomes complex-vector-valued. The value space is called spin space; for a spin-1/2 particle, spin space can be taken to be {\displaystyle \mathbb {C} ^{2}}. The guiding equation is modified by taking inner products in spin space to reduce the complex vectors to complex numbers. The Schrödinger equation is modified by adding a Pauli spin term:
{\displaystyle {\begin{aligned}{\frac {d\mathbf {Q} _{k}}{dt}}(t)&={\frac {\hbar }{m_{k}}}\operatorname {Im} \left({\frac {(\psi ,D_{k}\psi )}{(\psi ,\psi )}}\right)(\mathbf {Q} _{1},\ldots ,\mathbf {Q} _{N},t),\\i\hbar {\frac {\partial }{\partial t}}\psi &=\left(-\sum _{k=1}^{N}{\frac {\hbar ^{2}}{2m_{k}}}D_{k}^{2}+V-\sum _{k=1}^{N}\mu _{k}{\frac {\mathbf {S} _{k}}{\hbar s_{k}}}\cdot \mathbf {B} (\mathbf {q} _{k})\right)\psi ,\end{aligned}}}
where
{\displaystyle m_{k},e_{k},\mu _{k}} are the mass, charge and magnetic moment of the {\displaystyle k}-th particle,
{\displaystyle \mathbf {S} _{k}} is the appropriate spin operator acting in the {\displaystyle k}-th particle's spin space,
{\displaystyle s_{k}} is the spin quantum number of the {\displaystyle k}-th particle ({\displaystyle s_{k}=1/2} for an electron),
{\displaystyle \mathbf {A} } is the vector potential in {\displaystyle \mathbb {R} ^{3}},
{\displaystyle \mathbf {B} =\nabla \times \mathbf {A} } is the magnetic field in {\displaystyle \mathbb {R} ^{3}},
{\textstyle D_{k}=\nabla _{k}-{\frac {ie_{k}}{\hbar }}\mathbf {A} (\mathbf {q} _{k})} is the covariant derivative, involving the vector potential, ascribed to the coordinates of the {\displaystyle k}-th particle (in SI units), and
{\displaystyle \psi } is the wavefunction defined on the multidimensional configuration space; e.g. a system consisting of two spin-1/2 particles and one spin-1 particle has a wavefunction of the form
{\displaystyle \psi :\mathbb {R} ^{9}\times \mathbb {R} \to \mathbb {C} ^{2}\otimes \mathbb {C} ^{2}\otimes \mathbb {C} ^{3},}
where {\displaystyle \otimes } is a tensor product, so this spin space is 12-dimensional, and
{\displaystyle (\cdot ,\cdot )} is the inner product in spin space {\displaystyle \mathbb {C} ^{d}}:
{\displaystyle (\phi ,\psi )=\sum _{s=1}^{d}\phi _{s}^{*}\psi _{s}.}
=== Stochastic electrodynamics ===
Stochastic electrodynamics (SED) is an extension of the de Broglie–Bohm interpretation of quantum mechanics, with the electromagnetic zero-point field (ZPF) playing a central role as the guiding pilot wave. Modern approaches to SED, such as those proposed by the group around the late Gerhard Grössing, among others, consider wave and particle-like quantum effects as well-coordinated emergent systems. These emergent systems are the result of speculated and calculated sub-quantum interactions with the zero-point field.
=== Quantum field theory ===
In Dürr et al., the authors describe an extension of de Broglie–Bohm theory for handling creation and annihilation operators, which they refer to as "Bell-type quantum field theories". The basic idea is that configuration space becomes the (disjoint) space of all possible configurations of any number of particles. For part of the time, the system evolves deterministically under the guiding equation with a fixed number of particles. But under a stochastic process, particles may be created and annihilated. The distribution of creation events is dictated by the wavefunction. The wavefunction itself is evolving at all times over the full multi-particle configuration space.
Hrvoje Nikolić introduces a purely deterministic de Broglie–Bohm theory of particle creation and destruction, according to which particle trajectories are continuous, but particle detectors behave as if particles have been created or destroyed even when a true creation or destruction of particles does not take place.
=== Curved space ===
To extend de Broglie–Bohm theory to curved space (Riemannian manifolds in mathematical parlance), one simply notes that all of the elements of these equations make sense, such as gradients and Laplacians. Thus, we use equations that have the same form as above. Topological and boundary conditions may apply in supplementing the evolution of the Schrödinger equation.
For a de Broglie–Bohm theory on curved space with spin, the spin space becomes a vector bundle over configuration space, and the potential in the Schrödinger equation becomes a local self-adjoint operator acting on that space.
The field equations for the de Broglie–Bohm theory in the relativistic case with spin can also be given for curved space-times with torsion.
In a general spacetime with curvature and torsion, the guiding equation for the four-velocity {\displaystyle u^{i}} of an elementary fermion particle is
{\displaystyle u^{i}={\frac {e_{\mu }^{i}{\bar {\psi }}\gamma ^{\mu }\psi }{{\bar {\psi }}\psi }},}
where the wave function {\displaystyle \psi } is a spinor, {\displaystyle {\bar {\psi }}} is the corresponding adjoint, {\displaystyle \gamma ^{\mu }} are the Dirac matrices, and {\displaystyle e_{\mu }^{i}} is a tetrad. If the wave function propagates according to the curved Dirac equation, then the particle moves according to the Mathisson–Papapetrou equations of motion, which are an extension of the geodesic equation. This relativistic wave-particle duality follows from the conservation laws for the spin tensor and energy-momentum tensor, and also from the covariant Heisenberg picture equation of motion.
=== Exploiting nonlocality ===
De Broglie and Bohm's causal interpretation of quantum mechanics was later extended by Bohm, Vigier, Hiley, Valentini and others to include stochastic properties. Bohm and other physicists, including Valentini, view the Born rule linking {\displaystyle R} to the probability density function {\displaystyle \rho =R^{2}} as representing not a basic law, but a result of a system having reached quantum equilibrium during the course of the time development under the Schrödinger equation. It can be shown that, once an equilibrium has been reached, the system remains in such equilibrium over the course of its further evolution: this follows from the continuity equation associated with the Schrödinger evolution of {\displaystyle \psi }. It is less straightforward to demonstrate whether and how such an equilibrium is reached in the first place.
Antony Valentini has extended de Broglie–Bohm theory to include signal nonlocality that would allow entanglement to be used as a stand-alone communication channel without a secondary classical "key" signal to "unlock" the message encoded in the entanglement. This violates orthodox quantum theory but has the virtue of making the parallel universes of the chaotic inflation theory observable in principle.
Unlike in de Broglie–Bohm theory, in Valentini's theory the wavefunction evolution also depends on the ontological variables. This introduces an instability, a feedback loop that pushes the hidden variables out of "sub-quantal heat death". The resulting theory becomes nonlinear and non-unitary. Valentini argues that the laws of quantum mechanics are emergent and form a "quantum equilibrium" that is analogous to thermal equilibrium in classical dynamics, such that other "quantum non-equilibrium" distributions may in principle be observed and exploited, for which the statistical predictions of quantum theory are violated. It is controversially argued that quantum theory is merely a special case of a much wider nonlinear physics, a physics in which non-local (superluminal) signalling is possible, and in which the uncertainty principle can be violated.
== Results ==
Below are some highlights of the results that arise out of an analysis of de Broglie–Bohm theory. Experimental results agree with all of quantum mechanics' standard predictions insofar as it has them. But while standard quantum mechanics is limited to discussing the results of "measurements", de Broglie–Bohm theory governs the dynamics of a system without the intervention of outside observers (p. 117 in Bell).
The basis for agreement with standard quantum mechanics is that the particles are distributed according to {\displaystyle |\psi |^{2}}. This is a statement of observer ignorance: the initial positions are represented by a statistical distribution, so deterministic trajectories will result in a statistical distribution.
=== Measuring spin and polarization ===
According to ordinary quantum theory, it is not possible to measure the spin or polarization of a particle directly; instead, the component in one direction is measured; the outcome from a single particle may be 1, meaning that the particle is aligned with the measuring apparatus, or −1, meaning that it is aligned the opposite way. An ensemble of particles prepared by a polarizer to be in state 1 will all be measured as polarized in state 1 by a subsequent apparatus. A polarized ensemble sent through a polarizer set at an angle to the first will result in some values of 1 and some of −1, with a probability that depends on the relative alignment. For a full explanation of this, see the Stern–Gerlach experiment.
In de Broglie–Bohm theory, the results of a spin experiment cannot be analyzed without some knowledge of the experimental setup. It is possible to modify the setup so that the trajectory of the particle is unaffected, but that the particle with one setup registers as spin-up, while in the other setup it registers as spin-down. Thus, for the de Broglie–Bohm theory, the particle's spin is not an intrinsic property of the particle; instead spin is, so to speak, in the wavefunction of the particle in relation to the particular device being used to measure the spin. This is an illustration of what is sometimes referred to as contextuality and is related to naive realism about operators. Interpretationally, measurement results are a deterministic property of the system and its environment, which includes information about the experimental setup including the context of co-measured observables; in no sense does the system itself possess the property being measured, as would have been the case in classical physics.
=== Measurements, the quantum formalism, and observer independence ===
De Broglie–Bohm theory gives almost the same results as (non-relativistic) quantum mechanics. It treats the wavefunction as a fundamental object in the theory, as the wavefunction describes how the particles move. This means that no experiment can distinguish between the two theories. This section outlines the ideas as to how the standard quantum formalism arises out of de Broglie–Bohm theory.
==== Collapse of the wavefunction ====
De Broglie–Bohm theory is a theory that applies primarily to the whole universe. That is, there is a single wavefunction governing the motion of all of the particles in the universe according to the guiding equation. Theoretically, the motion of one particle depends on the positions of all of the other particles in the universe. In some situations, such as in experimental systems, we can represent the system itself in terms of a de Broglie–Bohm theory in which the wavefunction of the system is obtained by conditioning on the environment of the system. Thus, the system can be analyzed with the Schrödinger equation and the guiding equation, with an initial {\displaystyle |\psi |^{2}} distribution for the particles in the system (see the section on the conditional wavefunction of a subsystem for details).
It requires a special setup for the conditional wavefunction of a system to obey a quantum evolution. When a system interacts with its environment, such as through a measurement, the conditional wavefunction of the system evolves in a different way. The evolution of the universal wavefunction can become such that the wavefunction of the system appears to be in a superposition of distinct states. But if the environment has recorded the results of the experiment, then using the actual Bohmian configuration of the environment to condition on, the conditional wavefunction collapses to just one alternative, the one corresponding with the measurement results.
Collapse of the universal wavefunction never occurs in de Broglie–Bohm theory. Its entire evolution is governed by the Schrödinger equation, and the particles' evolutions are governed by the guiding equation. Collapse only occurs in a phenomenological way for systems that seem to follow their own Schrödinger equation. As this is an effective description of the system, it is a matter of choice as to what to define the experimental system to include, and this will affect when "collapse" occurs.
==== Operators as observables ====
In the standard quantum formalism, measuring observables is generally thought of as measuring operators on the Hilbert space. For example, measuring position is considered to be a measurement of the position operator. This relationship between physical measurements and Hilbert space operators is, for standard quantum mechanics, an additional axiom of the theory. The de Broglie–Bohm theory, by contrast, requires no such measurement axioms (and measurement as such is not a dynamically distinct or special sub-category of physical processes in the theory). In particular, the usual operators-as-observables formalism is, for de Broglie–Bohm theory, a theorem. A major point of the analysis is that many of the measurements of the observables do not correspond to properties of the particles; they are (as in the case of spin discussed above) measurements of the wavefunction.
In the history of de Broglie–Bohm theory, the proponents have often had to deal with claims that this theory is impossible. Such arguments are generally based on inappropriate analysis of operators as observables. If one believes that spin measurements are indeed measuring the spin of a particle that existed prior to the measurement, then one does reach contradictions. De Broglie–Bohm theory deals with this by noting that spin is not a feature of the particle, but rather that of the wavefunction. As such, it only has a definite outcome once the experimental apparatus is chosen. Once that is taken into account, the impossibility theorems become irrelevant.
There are also objections to this theory based on what it says about particular situations usually involving eigenstates of an operator. For example, the ground state of hydrogen is a real wavefunction. According to the guiding equation, this means that the electron is at rest when in this state. Nevertheless, it is distributed according to {\displaystyle |\psi |^{2}}, and no contradiction to experimental results is possible to detect.
The operators-as-observables formalism leads many to believe that many operators are equivalent. De Broglie–Bohm theory, from this perspective, chooses the position observable as a favored observable rather than, say, the momentum observable. Again, the link to the position observable is a consequence of the dynamics. The motivation for de Broglie–Bohm theory is to describe a system of particles. This implies that the goal of the theory is to describe the positions of those particles at all times. Other observables do not have this compelling ontological status. Having definite positions explains having definite results such as flashes on a detector screen. Other observables would not lead to that conclusion, but there need not be any problem in defining a mathematical theory for other observables; see Hyman et al. for an exploration of the fact that a probability density and probability current can be defined for any set of commuting operators.
==== Hidden variables ====
De Broglie–Bohm theory is often referred to as a "hidden-variable" theory. Bohm used this description in his original papers on the subject, writing: "From the point of view of the usual interpretation, these additional elements or parameters [permitting a detailed causal and continuous description of all processes] could be called 'hidden' variables." Bohm and Hiley later stated that they found Bohm's choice of the term "hidden variables" to be too restrictive. In particular, they argued that a particle is not actually hidden but rather "is what is most directly manifested in an observation [though] its properties cannot be observed with arbitrary precision (within the limits set by uncertainty principle)". However, others nevertheless treat the term "hidden variable" as a suitable description.
Generalized particle trajectories can be extrapolated from numerous weak measurements on an ensemble of equally prepared systems, and such trajectories coincide with the de Broglie–Bohm trajectories. In particular, an experiment with two entangled photons, in which a set of Bohmian trajectories for one of the photons was determined using weak measurements and postselection, can be understood in terms of a nonlocal connection between that photon's trajectory and the other photon's polarization. However, not only the de Broglie–Bohm interpretation, but also many other interpretations of quantum mechanics that do not include such trajectories are consistent with such experimental evidence.
=== Different predictions ===
A specialized version of the double slit experiment has been devised to test characteristics of the trajectory predictions.
Results from one such experiment agreed with the predictions of standard quantum mechanics and disagreed with the Bohm predictions when they conflicted. These conclusions have been the subject of debate.
=== Heisenberg's uncertainty principle ===
Heisenberg's uncertainty principle states that when two complementary measurements are made, there is a limit to the product of their accuracies. As an example, if one measures the position with an accuracy of {\displaystyle \Delta x} and the momentum with an accuracy of {\displaystyle \Delta p}, then
{\displaystyle \Delta x\,\Delta p\gtrsim h.}
In de Broglie–Bohm theory, there is always a matter of fact about the position and momentum of a particle. Each particle has a well-defined trajectory, as well as a wavefunction. Observers have limited knowledge as to what this trajectory is (and thus of the position and momentum). It is the lack of knowledge of the particle's trajectory that accounts for the uncertainty relation. What one can know about a particle at any given time is described by the wavefunction. Since the uncertainty relation can be derived from the wavefunction in other interpretations of quantum mechanics, it can be likewise derived (in the epistemic sense mentioned above) on the de Broglie–Bohm theory.
To put the statement differently, the particles' positions are only known statistically. As in classical mechanics, successive observations of the particles' positions refine the experimenter's knowledge of the particles' initial conditions. Thus, with succeeding observations, the initial conditions become more and more restricted. This formalism is consistent with the normal use of the Schrödinger equation.
For the derivation of the uncertainty relation, see Heisenberg uncertainty principle, noting that this article describes the principle from the viewpoint of the Copenhagen interpretation.
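Because the uncertainty relation here is epistemic, it can be read off directly from the wavefunction. A quick numerical check (our own sketch, with hbar = 1 and an arbitrarily chosen minimum-uncertainty Gaussian) computes Δx and Δp from a wavefunction on a grid:

```python
import numpy as np

# Compute position and momentum spreads from a wavefunction on a grid and
# verify Δx Δp ≈ hbar/2 for a minimum-uncertainty Gaussian (hbar = 1).
hbar = 1.0
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

sigma = 0.8
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Position spread (the mean is zero by symmetry).
dx_unc = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

# Momentum spread from the Fourier transform of psi.
prob_k = np.abs(np.fft.fft(psi))**2
prob_k /= prob_k.sum()
dp_unc = hbar * np.sqrt(np.sum(k**2 * prob_k))

print(dx_unc * dp_unc)  # ≈ hbar/2 for a Gaussian
```

For non-Gaussian wavefunctions the same computation yields a strictly larger product, in line with the inequality.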
=== Quantum entanglement, Einstein–Podolsky–Rosen paradox, Bell's theorem, and nonlocality ===
De Broglie–Bohm theory highlighted the issue of nonlocality: it inspired John Stewart Bell to prove his now-famous theorem, which in turn led to the Bell test experiments.
In the Einstein–Podolsky–Rosen paradox, the authors describe a thought experiment that one could perform on a pair of particles that have interacted, the results of which they interpreted as indicating that quantum mechanics is an incomplete theory.
Decades later John Bell proved Bell's theorem (see p. 14 in Bell), in which he showed that, if they are to agree with the empirical predictions of quantum mechanics, all such "hidden-variable" completions of quantum mechanics must either be nonlocal (as the Bohm interpretation is) or give up the assumption that experiments produce unique results (see counterfactual definiteness and many-worlds interpretation). In particular, Bell proved that any local theory with unique results must make empirical predictions satisfying a statistical constraint called "Bell's inequality".
Alain Aspect performed a series of Bell test experiments that test Bell's inequality using an EPR-type setup. Aspect's results show experimentally that Bell's inequality is in fact violated, meaning that the relevant quantum-mechanical predictions are correct. In these Bell test experiments, entangled pairs of particles are created; the particles are separated, traveling to remote measuring apparatus. The orientation of the measuring apparatus can be changed while the particles are in flight, demonstrating the apparent nonlocality of the effect.
The de Broglie–Bohm theory makes the same (empirically correct) predictions for the Bell test experiments as ordinary quantum mechanics. It is able to do this because it is manifestly nonlocal. It is often criticized or rejected based on this; Bell's attitude was: "It is a merit of the de Broglie–Bohm version to bring this [nonlocality] out so explicitly that it cannot be ignored."
The de Broglie–Bohm theory describes the physics in the Bell test experiments as follows: to understand the evolution of the particles, we need to set up a wave equation for both particles; the orientation of the apparatus affects the wavefunction. The particles in the experiment follow the guidance of the wavefunction. It is the wavefunction that carries the faster-than-light effect of changing the orientation of the apparatus. Maudlin provides an analysis of exactly what kind of nonlocality is present and how it is compatible with relativity. Bell has shown that the nonlocality does not allow superluminal communication. Maudlin has shown this in greater detail.
=== Classical limit ===
Bohm's formulation of de Broglie–Bohm theory in a classical-looking version has the merits that the emergence of classical behavior seems to follow immediately for any situation in which the quantum potential is negligible, as noted by Bohm in 1952. Modern methods of decoherence are relevant to an analysis of this limit. See Allori et al. for steps towards a rigorous analysis.
=== Quantum trajectory method ===
Work by Robert E. Wyatt in the early 2000s attempted to use the Bohm "particles" as an adaptive mesh that follows the actual trajectory of a quantum state in time and space. In the "quantum trajectory" method, one samples the quantum wavefunction with a mesh of quadrature points. One then evolves the quadrature points in time according to the Bohm equations of motion. At each time step, one then re-synthesizes the wavefunction from the points, recomputes the quantum forces, and continues the calculation. (QuickTime movies of this for H + H2 reactive scattering can be found on the Wyatt group web-site at UT Austin.)
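The evolve-the-sample-points step of this method can be illustrated in a toy setting. The sketch below is not Wyatt's adaptive-mesh code: it takes a free 1D Gaussian packet whose wavefunction is known in closed form (an illustrative assumption that lets us skip the re-synthesis step entirely) and simply propagates a few sample points along the Bohmian velocity field, in units where ħ = m = 1 and the initial width is 1.

```python
import numpy as np

def psi(x, t):
    # Unnormalized free Gaussian packet; the x-independent normalization
    # cancels in the velocity formula v = Im(psi'/psi).
    return np.exp(-x**2 / (4.0 * (1.0 + 0.5j * t)))

def bohm_velocity(x, t, h=1e-5):
    # v = (hbar/m) * Im(grad(psi)/psi), with hbar = m = 1,
    # using a central finite difference for the gradient.
    dpsi = (psi(x + h, t) - psi(x - h, t)) / (2.0 * h)
    return np.imag(dpsi / psi(x, t))

# Evolve a few sample points with forward Euler.
pts = np.array([0.5, 1.0, -1.5])
dt, nsteps = 1e-3, 2000
for n in range(nsteps):
    pts = pts + dt * bohm_velocity(pts, n * dt)

# For this packet the exact Bohmian trajectories just scale with the
# spreading packet width: x(t) = x(0) * sqrt(1 + t^2/4).
expected = np.array([0.5, 1.0, -1.5]) * np.sqrt(1.0 + 2.0**2 / 4.0)
```

In a real quantum-trajectory calculation the wavefunction is not known in closed form, which is why the method re-synthesizes it from the moving quadrature points at each step; here the closed form stands in for that step.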
This approach has been adapted, extended, and used by a number of researchers in the chemical physics community as a way to compute semi-classical and quasi-classical molecular dynamics. A 2007 issue of The Journal of Physical Chemistry A was dedicated to Prof. Wyatt and his work on "computational Bohmian dynamics".
Eric R. Bittner's group at the University of Houston has advanced a statistical variant of this approach that uses a Bayesian sampling technique to sample the quantum density and compute the quantum potential on a structureless mesh of points. This technique was recently used to estimate quantum effects in the heat capacity of small Nen clusters for n ≈ 100.
There remain difficulties using the Bohmian approach, mostly associated with the formation of singularities in the quantum potential due to nodes in the quantum wavefunction. In general, nodes forming due to interference effects lead to the case where {\displaystyle R^{-1}\nabla ^{2}R\to \infty .}
This results in an infinite force on the sample particles, forcing them to move away from the node and often to cross the paths of other sample points (which violates single-valuedness). Various schemes have been developed to overcome this; however, no general solution has yet emerged.
These methods, as does Bohm's Hamilton–Jacobi formulation, do not apply to situations in which the full dynamics of spin need to be taken into account.
The properties of trajectories in the de Broglie–Bohm theory differ significantly from the Moyal quantum trajectories as well as the quantum trajectories from the unraveling of an open quantum system.
== Similarities with the many-worlds interpretation ==
Kim Joris Boström has proposed a non-relativistic quantum mechanical theory that combines elements of de Broglie–Bohm mechanics and Everett's many-worlds. In particular, the unreal many-worlds interpretation of Hawking and Weinberg is similar to the Bohmian concept of unreal empty branch worlds:
The second issue with Bohmian mechanics may, at first sight, appear rather harmless, but which on a closer look develops considerable destructive power: the issue of empty branches. These are the components of the post-measurement state that do not guide any particles because they do not have the actual configuration q in their support. At first sight, the empty branches do not appear problematic but on the contrary very helpful as they enable the theory to explain unique outcomes of measurements. Also, they seem to explain why there is an effective "collapse of the wavefunction", as in ordinary quantum mechanics. On a closer view, though, one must admit that these empty branches do not actually disappear. As the wavefunction is taken to describe a really existing field, all their branches really exist and will evolve forever by the Schrödinger dynamics, no matter how many of them will become empty in the course of the evolution. Every branch of the global wavefunction potentially describes a complete world which is, according to Bohm's ontology, only a possible world that would be the actual world if only it were filled with particles, and which is in every respect identical to a corresponding world in Everett's theory. Only one branch at a time is occupied by particles, thereby representing the actual world, while all other branches, though really existing as part of a really existing wavefunction, are empty and thus contain some sort of "zombie worlds" with planets, oceans, trees, cities, cars and people who talk like us and behave like us, but who do not actually exist. Now, if the Everettian theory may be accused of ontological extravagance, then Bohmian mechanics could be accused of ontological wastefulness. On top of the ontology of empty branches comes the additional ontology of particle positions that are, on account of the quantum equilibrium hypothesis, forever unknown to the observer. 
Yet, the actual configuration is never needed for the calculation of the statistical predictions in experimental reality, for these can be obtained by mere wavefunction algebra. From this perspective, Bohmian mechanics may appear as a wasteful and redundant theory. I think it is considerations like these that are the biggest obstacle in the way of a general acceptance of Bohmian mechanics.
Many authors have expressed critical views of de Broglie–Bohm theory by comparing it to Everett's many-worlds approach. Many (but not all) proponents of de Broglie–Bohm theory (such as Bohm and Bell) interpret the universal wavefunction as physically real. According to some supporters of Everett's theory, if the (never collapsing) wavefunction is taken to be physically real, then it is natural to interpret the theory as having the same many worlds as Everett's theory. In the Everettian view the role of the Bohmian particle is to act as a "pointer", tagging, or selecting, just one branch of the universal wavefunction (the assumption that this branch indicates which wave packet determines the observed result of a given experiment is called the "result assumption"); the other branches are designated "empty" and implicitly assumed by Bohm to be devoid of conscious observers. H. Dieter Zeh comments on these "empty" branches:
It is usually overlooked that Bohm's theory contains the same "many worlds" of dynamically separate branches as the Everett interpretation (now regarded as "empty" wave components), since it is based on precisely the same ... global wave function ...
David Deutsch has expressed the same point more "acerbically":
Pilot-wave theories are parallel-universe theories in a state of chronic denial.
This conclusion has been challenged by Detlef Dürr and Justin Lazarovici:
The Bohmian, of course, cannot accept this argument. For her, it is decidedly the particle configuration in three-dimensional space and not the wave function on the abstract configuration space that constitutes a world (or rather, the world). Instead, she will accuse the Everettian of not having local beables (in Bell's sense) in her theory, that is, the ontological variables that refer to localized entities in three-dimensional space or four-dimensional spacetime. The many worlds of her theory thus merely appear as a grotesque consequence of this omission.
== Occam's-razor criticism ==
Both Hugh Everett III and Bohm treated the wavefunction as a physically real field. Everett's many-worlds interpretation is an attempt to demonstrate that the wavefunction alone is sufficient to account for all our observations. When we see the particle detectors flash or hear the click of a Geiger counter, Everett's theory interprets this as our wavefunction responding to changes in the detector's wavefunction, which is responding in turn to the passage of another wavefunction (which we think of as a "particle", but is actually just another wave packet). No particle (in the Bohm sense of having a defined position and velocity) exists according to that theory. For this reason Everett sometimes referred to his own many-worlds approach as the "pure wave theory". Of Bohm's 1952 approach, Everett said:
Our main criticism of this view is on the grounds of simplicity – if one desires to hold the view that {\displaystyle \psi } is a real field, then the associated particle is superfluous, since, as we have endeavored to illustrate, the pure wave theory is itself satisfactory.
In the Everettian view, then, the Bohm particles are superfluous entities, similar to, and equally as unnecessary as, for example, the luminiferous ether, which was found to be unnecessary in special relativity. This argument is sometimes called the "redundancy argument", since the superfluous particles are redundant in the sense of Occam's razor.
According to Brown & Wallace, the de Broglie–Bohm particles play no role in the solution of the measurement problem. For these authors, the "result assumption" (see above) is inconsistent with the view that there is no measurement problem in the predictable outcome (i.e. single-outcome) case. They also say that a standard tacit assumption of de Broglie–Bohm theory (that an observer becomes aware of configurations of particles of ordinary objects by means of correlations between such configurations and the configuration of the particles in the observer's brain) is unreasonable. This conclusion has been challenged by Valentini, who argues that the entirety of such objections arises from a failure to interpret de Broglie–Bohm theory on its own terms.
According to Peter R. Holland, in a wider Hamiltonian framework, theories can be formulated in which particles do act back on the wave function.
== Derivations ==
De Broglie–Bohm theory has been derived many times and in many ways. Below are six derivations, all of which are very different and lead to different ways of understanding and extending this theory.
The Schrödinger equation can be derived by using Einstein's light quanta hypothesis {\displaystyle E=\hbar \omega } and de Broglie's hypothesis {\displaystyle \mathbf {p} =\hbar \mathbf {k} }.
The guiding equation can be derived in a similar fashion. We assume a plane wave: {\displaystyle \psi (\mathbf {x} ,t)=Ae^{i(\mathbf {k} \cdot \mathbf {x} -\omega t)}}. Notice that {\displaystyle i\mathbf {k} =\nabla \psi /\psi }. Assuming that {\displaystyle \mathbf {p} =m\mathbf {v} } for the particle's actual velocity, we have that {\displaystyle \mathbf {v} ={\frac {\hbar }{m}}\operatorname {Im} \left({\frac {\nabla \psi }{\psi }}\right)}. Thus, we have the guiding equation.
Notice that this derivation does not use the Schrödinger equation.
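The plane-wave derivation above admits a quick numerical sanity check. The sketch below (an illustrative example in units ħ = m = 1, with an arbitrary choice of k, ω, and A) applies the guiding-equation formula to a plane wave and confirms it returns the classical velocity ħk/m:

```python
import numpy as np

k, omega, A = 1.7, 0.5, 2.0   # arbitrary illustrative parameters

def psi(x, t):
    # plane wave psi = A * exp(i(k x - w t))
    return A * np.exp(1j * (k * x - omega * t))

def guiding_velocity(x, t, h=1e-6):
    # v = (hbar/m) * Im(grad(psi)/psi), with hbar = m = 1
    dpsi = (psi(x + h, t) - psi(x - h, t)) / (2.0 * h)
    return np.imag(dpsi / psi(x, t))

v = guiding_velocity(0.3, 1.2)
# v should equal hbar*k/m = k, independent of x, t, and the amplitude A
```

Note that neither the amplitude A nor the frequency ω enters the result, consistent with v depending only on the phase gradient.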
Preserving the density under the time evolution is another method of derivation. This is the method that Bell cites. It is this method that generalizes to many possible alternative theories. The starting point is the continuity equation
{\displaystyle -{\frac {\partial \rho }{\partial t}}=\nabla \cdot (\rho v^{\psi })} for the density {\displaystyle \rho =|\psi |^{2}}. This equation describes a probability flow along a current. We take the velocity field associated with this current as the velocity field whose integral curves yield the motion of the particle.
A method applicable for particles without spin is to do a polar decomposition of the wavefunction and transform the Schrödinger equation into two coupled equations: the continuity equation from above and the Hamilton–Jacobi equation. This is the method used by Bohm in 1952. The decomposition and equations are as follows:
Decomposition: {\displaystyle \psi (\mathbf {x} ,t)=R(\mathbf {x} ,t)e^{iS(\mathbf {x} ,t)/\hbar }.} Note that {\displaystyle R^{2}(\mathbf {x} ,t)} corresponds to the probability density {\displaystyle \rho (\mathbf {x} ,t)=|\psi (\mathbf {x} ,t)|^{2}}.
Continuity equation: {\displaystyle -{\frac {\partial \rho (\mathbf {x} ,t)}{\partial t}}=\nabla \cdot \left(\rho (\mathbf {x} ,t){\frac {\nabla S(\mathbf {x} ,t)}{m}}\right)}.
Hamilton–Jacobi equation: {\displaystyle {\frac {\partial S(\mathbf {x} ,t)}{\partial t}}=-\left[{\frac {1}{2m}}(\nabla S(\mathbf {x} ,t))^{2}+V-{\frac {\hbar ^{2}}{2m}}{\frac {\nabla ^{2}R(\mathbf {x} ,t)}{R(\mathbf {x} ,t)}}\right].}
The Hamilton–Jacobi equation is the equation derived from a Newtonian system with potential {\displaystyle V-{\frac {\hbar ^{2}}{2m}}{\frac {\nabla ^{2}R}{R}}} and velocity field {\displaystyle {\frac {\nabla S}{m}}.} The potential {\displaystyle V} is the classical potential that appears in the Schrödinger equation, and the other term involving {\displaystyle R} is the quantum potential, terminology introduced by Bohm.
This leads to viewing the quantum theory as particles moving under the classical force modified by a quantum force. However, unlike standard Newtonian mechanics, the initial velocity field is already specified by {\displaystyle {\frac {\nabla S}{m}}}, which is a symptom of this being a first-order theory, not a second-order theory.
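The quantum potential term can be evaluated concretely. The sketch below (an illustrative example, with ħ = m = 1 and an arbitrary Gaussian amplitude; it is not drawn from Bohm's papers) computes Q = -(ħ²/2m)(∇²R)/R numerically by finite differences and compares it with the closed form obtainable by differentiating the Gaussian by hand:

```python
import numpy as np

sigma = 1.3                        # free width parameter of this example
x = np.linspace(-2.0, 2.0, 401)
h = x[1] - x[0]

# Gaussian amplitude R(x) = exp(-x^2 / (4 sigma^2))
R = np.exp(-x**2 / (4.0 * sigma**2))

# second derivative by the standard three-point stencil
lap_R = (np.roll(R, -1) - 2.0 * R + np.roll(R, 1)) / h**2

# quantum potential Q = -(hbar^2 / 2m) * lap(R)/R, with hbar = m = 1
Q_num = -0.5 * lap_R / R

# closed form: for this R, lap(R)/R = x^2/(4 sigma^4) - 1/(2 sigma^2)
Q_exact = 0.5 * (1.0 / (2.0 * sigma**2) - x**2 / (4.0 * sigma**4))

# np.roll wraps around at the endpoints, so compare the interior only
interior = slice(1, -1)
```

Note that Q is largest (most positive) at the packet center and decreases quadratically, which is what pushes Bohmian particles outward as a free packet spreads.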
A fourth derivation was given by Dürr et al. In their derivation, they derive the velocity field by demanding the appropriate transformation properties given by the various symmetries that the Schrödinger equation satisfies, once the wavefunction is suitably transformed. The guiding equation is what emerges from that analysis.
A fifth derivation, given by Dürr et al., is appropriate for generalization to quantum field theory and the Dirac equation. The idea is that a velocity field can also be understood as a first-order differential operator acting on functions; thus, if we know how it acts on functions, we know what it is. Then, given the Hamiltonian operator {\displaystyle H}, the equation to satisfy for all functions {\displaystyle f} (with associated multiplication operator {\displaystyle {\hat {f}}}) is {\displaystyle (v(f))(q)=\operatorname {Re} {\frac {\left(\psi ,{\frac {i}{\hbar }}[H,{\hat {f}}]\psi \right)}{(\psi ,\psi )}}(q)}, where {\displaystyle (v,w)} is the local Hermitian inner product on the value space of the wavefunction.
This formulation allows for stochastic theories such as the creation and annihilation of particles.
A further derivation has been given by Peter R. Holland, on which he bases his quantum-physics textbook The Quantum Theory of Motion. It is based on three basic postulates and an additional fourth postulate that links the wavefunction to measurement probabilities:
A physical system consists in a spatiotemporally propagating wave and a point particle guided by it.
The wave is described mathematically by a solution {\displaystyle \psi } to the Schrödinger wave equation.
The particle motion is described by a solution to {\displaystyle \mathbf {\dot {x}} (t)=[\nabla S(\mathbf {x} (t),t)]/m} in dependence on the initial condition {\displaystyle \mathbf {x} (t=0)}, with {\displaystyle S} the phase of {\displaystyle \psi }.
The fourth postulate is subsidiary yet consistent with the first three:
The probability {\displaystyle \rho (\mathbf {x} (t))} to find the particle in the differential volume {\displaystyle d^{3}x} at time t equals {\displaystyle |\psi (\mathbf {x} (t))|^{2}}.
== History ==
The theory was historically developed in the 1920s by de Broglie, who, in 1927, was persuaded to abandon it in favour of the then-mainstream Copenhagen interpretation. David Bohm, dissatisfied with the prevailing orthodoxy, rediscovered de Broglie's pilot-wave theory in 1952. Bohm's suggestions were not then widely received, partly due to reasons unrelated to their content, such as Bohm's youthful communist affiliations. The de Broglie–Bohm theory was widely deemed unacceptable by mainstream theorists, mostly because of its explicit non-locality. On the theory, John Stewart Bell, author of the 1964 Bell's theorem wrote in 1982: Bohm showed explicitly how parameters could indeed be introduced, into nonrelativistic wave mechanics, with the help of which the indeterministic description could be transformed into a deterministic one. More importantly, in my opinion, the subjectivity of the orthodox version, the necessary reference to the "observer", could be eliminated. ...But why then had Born not told me of this "pilot wave"? If only to point out what was wrong with it? Why did von Neumann not consider it? More extraordinarily, why did people go on producing "impossibility" proofs, after 1952, and as recently as 1978?... Why is the pilot wave picture ignored in text books? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show us that vagueness, subjectivity, and indeterminism, are not forced on us by experimental facts, but by deliberate theoretical choice?
Since the 1990s, there has been renewed interest in formulating extensions to de Broglie–Bohm theory, attempting to reconcile it with special relativity and quantum field theory, besides other features such as spin or curved spatial geometries.
De Broglie–Bohm theory has a history of different formulations and names. In this section, each stage is given a name and a main reference.
=== Pilot-wave theory ===
Louis de Broglie presented his pilot wave theory at the 1927 Solvay Conference, after close collaboration with Schrödinger, who developed his wave equation for de Broglie's theory. At the end of the presentation, Wolfgang Pauli pointed out that it was not compatible with a semi-classical technique Fermi had previously adopted in the case of inelastic scattering. Contrary to a popular legend, de Broglie actually gave the correct rebuttal that the particular technique could not be generalized for Pauli's purpose, although the audience might have been lost in the technical details and de Broglie's mild manner left the impression that Pauli's objection was valid. He was eventually persuaded to abandon this theory nonetheless because he was "discouraged by criticisms which [it] roused". De Broglie's theory already applies to multiple spin-less particles, but lacks an adequate theory of measurement as no one understood quantum decoherence at the time. An analysis of de Broglie's presentation is given in Bacciagaluppi et al. Also, in 1932 John von Neumann published a no hidden variables proof in his book Mathematical Foundations of Quantum Mechanics, that was widely believed to prove that all hidden-variable theories are impossible. This sealed the fate of de Broglie's theory for the next two decades.
In 1926, Erwin Madelung had developed a hydrodynamic version of the Schrödinger equation, which is incorrectly considered a basis for the density-current derivation of the de Broglie–Bohm theory. The Madelung equations, being the quantum analog of the Euler equations of fluid dynamics, differ philosophically from de Broglie–Bohm mechanics and are the basis of the stochastic interpretation of quantum mechanics.
Peter R. Holland has pointed out that, earlier in 1927, Einstein had actually submitted a preprint with a similar proposal but, not convinced, had withdrawn it before publication. According to Holland, failure to appreciate key points of the de Broglie–Bohm theory has led to confusion, the key point being "that the trajectories of a many-body quantum system are correlated not because the particles exert a direct force on one another (à la Coulomb) but because all are acted upon by an entity – mathematically described by the wavefunction or functions of it – that lies beyond them". This entity is the quantum potential.
After publishing his popular textbook Quantum Theory that adhered entirely to the Copenhagen orthodoxy, Bohm was persuaded by Einstein to take a critical look at von Neumann's no hidden variables proof. The result was 'A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables" I and II' [Bohm 1952]. It was an independent origination of the pilot wave theory, and extended it to incorporate a consistent theory of measurement, and to address a criticism of Pauli that de Broglie did not properly respond to; it is taken to be deterministic (though Bohm hinted in the original papers that there should be disturbances to this, in the way Brownian motion disturbs Newtonian mechanics). This stage is known as the de Broglie–Bohm Theory in Bell's work [Bell 1987] and is the basis for 'The Quantum Theory of Motion' [Holland 1993].
This stage applies to multiple particles, and is deterministic.
The de Broglie–Bohm theory is an example of a hidden-variables theory. Bohm originally hoped that hidden variables could provide a local, causal, objective description that would resolve or eliminate many of the paradoxes of quantum mechanics, such as Schrödinger's cat, the measurement problem and the collapse of the wavefunction. However, Bell's theorem complicates this hope, as it demonstrates that there can be no local hidden-variable theory that is compatible with the predictions of quantum mechanics. The Bohmian interpretation is causal but not local.
Bohm's paper was largely ignored or panned by other physicists. Albert Einstein, who had suggested that Bohm search for a realist alternative to the prevailing Copenhagen approach, did not consider Bohm's interpretation to be a satisfactory answer to the quantum nonlocality question, calling it "too cheap", while Werner Heisenberg considered it a "superfluous 'ideological superstructure' ". Wolfgang Pauli, who had been unconvinced by de Broglie in 1927, conceded to Bohm as follows:
I just received your long letter of 20th November, and I also have studied more thoroughly the details of your paper. I do not see any longer the possibility of any logical contradiction as long as your results agree completely with those of the usual wave mechanics and as long as no means is given to measure the values of your hidden parameters both in the measuring apparatus and in the observe [sic] system. As far as the whole matter stands now, your 'extra wave-mechanical predictions' are still a check, which cannot be cashed.
He subsequently described Bohm's theory as "artificial metaphysics".
According to physicist Max Dresden, when Bohm's theory was presented at the Institute for Advanced Study in Princeton, many of the objections were ad hominem, focusing on Bohm's sympathy with communists as exemplified by his refusal to give testimony to the House Un-American Activities Committee.
In 1979, Chris Philippidis, Chris Dewdney and Basil Hiley were the first to perform numeric computations on the basis of the quantum potential to deduce ensembles of particle trajectories. Their work renewed the interests of physicists in the Bohm interpretation of quantum physics.
Eventually John Bell began to defend the theory. In "Speakable and Unspeakable in Quantum Mechanics" [Bell 1987], several of the papers refer to hidden-variables theories (which include Bohm's).
The trajectories of the Bohm model that would result for particular experimental arrangements were termed "surreal" by some. Still in 2016, mathematical physicist Sheldon Goldstein said of Bohm's theory: "There was a time when you couldn't even talk about it because it was heretical. It probably still is the kiss of death for a physics career to be actually working on Bohm, but maybe that's changing."
=== Bohmian mechanics ===
Bohmian mechanics is the same theory, but with an emphasis on the notion of current flow, which is determined on the basis of the quantum equilibrium hypothesis that the probability follows the Born rule. The term "Bohmian mechanics" is also often used to include most of the further extensions past the spin-less version of Bohm. While de Broglie–Bohm theory has Lagrangians and Hamilton–Jacobi equations as a primary focus and backdrop, with the icon of the quantum potential, Bohmian mechanics considers the continuity equation as primary and has the guiding equation as its icon. They are mathematically equivalent insofar as the Hamilton–Jacobi formulation applies, i.e., to spin-less particles.
All of non-relativistic quantum mechanics can be fully accounted for in this theory. Recent studies have used this formalism to compute the evolution of many-body quantum systems, with a considerable increase in speed as compared to other quantum-based methods.
=== Causal interpretation and ontological interpretation ===
Bohm developed his original ideas, calling them the Causal Interpretation. Later he felt that causal sounded too much like deterministic and preferred to call his theory the Ontological Interpretation. The main reference is "The Undivided Universe" (Bohm, Hiley 1993).
This stage covers work by Bohm and in collaboration with Jean-Pierre Vigier and Basil Hiley. Bohm is clear that this theory is non-deterministic (the work with Hiley includes a stochastic theory). As such, this theory is not strictly speaking a formulation of de Broglie–Bohm theory, but it deserves mention here because the term "Bohm Interpretation" is ambiguous between this theory and de Broglie–Bohm theory.
In 1996 philosopher of science Arthur Fine gave an in-depth analysis of possible interpretations of Bohm's model of 1952.
William Simpson has suggested a hylomorphic interpretation of Bohmian mechanics, in which the cosmos is an Aristotelian substance composed of material particles and a substantial form. The wave function is assigned a dispositional role in choreographing the trajectories of the particles.
=== Hydrodynamic quantum analogs ===
Experiments on hydrodynamical analogs of quantum mechanics beginning with the work of Couder and Fort (2006) have purported to show that macroscopic classical pilot-waves can exhibit characteristics previously thought to be restricted to the quantum realm. Hydrodynamic pilot-wave analogs have been claimed to duplicate the double slit experiment, tunneling, quantized orbits, and numerous other quantum phenomena which have led to a resurgence in interest in pilot wave theories.
The analogs have been compared to the Faraday wave.
These results have been disputed: experiments fail to reproduce aspects of the double-slit experiments. High precision measurements in the tunneling case point to a different origin of the unpredictable crossing: rather than initial position uncertainty or environmental noise, interactions at the barrier seem to be involved.
Another classical analog has been reported in surface gravity waves.
== Surrealistic trajectories ==
In 1992, Englert, Scully, Sussman, and Walther proposed experiments that would show particles taking paths that differ from the Bohm trajectories. They described the Bohm trajectories as "surrealistic"; their proposal was later referred to as ESSW after the last names of the authors.
In 2016, Mahler et al. verified the ESSW predictions. However they propose the surrealistic effect is a consequence of the nonlocality inherent in Bohm's theory.
== See also ==
Madelung equations
Local hidden-variable theory
Superfluid vacuum theory
Fluid analogs in quantum mechanics
Probability current
== Notes ==
== References ==
== Sources ==
== Further reading ==
== External links == | Wikipedia/Bohmian_mechanics |
In physics, a charge is any of many different quantities, such as the electric charge in electromagnetism or the color charge in quantum chromodynamics. Charges correspond to the time-invariant generators of a symmetry group, and specifically, to the generators that commute with the Hamiltonian. Charges are often denoted by {\displaystyle Q}, and so the invariance of the charge corresponds to the vanishing commutator {\displaystyle [Q,H]=0}, where {\displaystyle H} is the Hamiltonian. Thus, charges are associated with conserved quantum numbers; these are the eigenvalues of the generator {\displaystyle Q}. A "charge" can also refer to a point-shaped object with an electric charge and a position, such as in the method of image charges.
== Abstract definition ==
Abstractly, a charge is any generator of a continuous symmetry of the physical system under study. When a physical system has a symmetry of some sort, Noether's theorem implies the existence of a conserved current. The thing that "flows" in the current is the "charge"; the charge is the generator of the (local) symmetry group. This charge is sometimes called the Noether charge.
Thus, for example, the electric charge is the generator of the U(1) symmetry of electromagnetism. The conserved current is the electric current.
In the case of local, dynamical symmetries, associated with every charge is a gauge field; when quantized, the gauge field becomes a gauge boson. The charges of the theory "radiate" the gauge field. Thus, for example, the gauge field of electromagnetism is the electromagnetic field; and the gauge boson is the photon.
The word "charge" is often used as a synonym for both the generator of a symmetry and the conserved quantum number (eigenvalue) of the generator. Thus, letting the upper-case letter {\displaystyle Q} refer to the generator, one has that the generator commutes with the Hamiltonian: {\displaystyle \left[Q,H\right]=0~.} Commutation implies that the eigenvalues (lower-case) {\displaystyle q} are time-invariant: {\displaystyle {\tfrac {\operatorname {d} q}{\operatorname {d} t}}=0~.}
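This conservation statement is easy to demonstrate in a toy finite-dimensional model (an illustrative sketch, not a field theory): if a Hermitian "charge" Q commutes with the Hamiltonian H, the expectation value ⟨ψ(t)|Q|ψ(t)⟩ under ψ(t) = exp(-iHt)ψ(0) does not change with t.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2.0        # random Hermitian Hamiltonian
Q = H @ H + 2.0 * H               # any polynomial in H commutes with H

# time evolution exp(-iHt) via the eigendecomposition of H
evals, V = np.linalg.eigh(H)
def evolve(psi0, t):
    return V @ (np.exp(-1j * evals * t) * (V.conj().T @ psi0))

psi0 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi0 /= np.linalg.norm(psi0)

# <Q> at three different times; all three should coincide
expQ = [np.real(np.vdot(evolve(psi0, t), Q @ evolve(psi0, t)))
        for t in (0.0, 0.7, 3.1)]
```

A charge built from an operator that does not commute with H (e.g. a random Hermitian matrix unrelated to H) would generically fail this check, which is exactly the content of dq/dt = 0 above.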
So, for example, when the symmetry group is a Lie group, then the charge operators correspond to the simple roots of the root system of the Lie algebra; the discreteness of the root system accounting for the quantization of the charge. The simple roots are used, as all the other roots can be obtained as linear combinations of these. The general roots are often called raising and lowering operators, or ladder operators.
The charge quantum numbers then correspond to the weights of the highest-weight modules of a given representation of the Lie algebra. So, for example, when a particle in a quantum field theory belongs to a symmetry, then it transforms according to a particular representation of that symmetry; the charge quantum number is then the weight of the representation.
== Examples ==
Various charge quantum numbers have been introduced by theories of particle physics. These include the charges of the Standard Model:
The color charge of quarks. The color charge generates the SU(3) color symmetry of quantum chromodynamics.
The weak isospin quantum numbers of the electroweak interaction. It generates the SU(2) part of the electroweak SU(2) × U(1) symmetry. Weak isospin is a local symmetry, whose gauge bosons are the W and Z bosons.
The electric charge for electromagnetic interactions. In mathematics texts, this is sometimes referred to as the {\displaystyle u_{1}}-charge of a Lie algebra module.
Note that these charge quantum numbers show up in the Lagrangian via the gauge covariant derivative of the Standard Model.
Charges of approximate symmetries:
The strong isospin charges. The symmetry group is SU(2) flavor symmetry; the gauge bosons are the pions. The pions are not elementary particles, and the symmetry is only approximate. It is a special case of flavor symmetry.
Other quark-flavor charges, such as strangeness or charm. Together with the u–d isospin mentioned above, these generate the global SU(6) flavor symmetry of the fundamental particles; this symmetry is badly broken by the masses of the heavy quarks. Charges include the hypercharge, the X-charge and the weak hypercharge.
Hypothetical charges of extensions to the Standard Model:
The hypothetical magnetic charge is another charge in the theory of electromagnetism. Magnetic charges are not seen experimentally in laboratory experiments, but would be present for theories including magnetic monopoles.
In supersymmetry:
The supercharge refers to the generator that rotates the fermions into bosons, and vice versa, in the supersymmetry.
In conformal field theory:
The central charge of the Virasoro algebra, sometimes referred to as the conformal central charge or the conformal anomaly. Here, the term 'central' is used in the sense of the center in group theory: it is an operator that commutes with all the other operators in the algebra. The central charge is the eigenvalue of the central generator of the algebra; here, it is the energy–momentum tensor of the two-dimensional conformal field theory.
In gravitation:
Eigenvalues of the energy–momentum tensor correspond to physical mass.
== Charge conjugation ==
In the formalism of particle theories, charge-like quantum numbers can sometimes be inverted by means of a charge conjugation operator called C. Charge conjugation simply means that a given symmetry group occurs in two inequivalent (but still isomorphic) group representations. It is usually the case that the two charge-conjugate representations are complex conjugate fundamental representations of the Lie group. Their product then forms the adjoint representation of the group.
Thus, a common example is that the product of two charge-conjugate fundamental representations of SL(2,C) (the spinors) forms the adjoint rep of the Lorentz group SO(3,1); abstractly, one writes
{\displaystyle 2\otimes {\overline {2}}=3\oplus 1.}
That is, the product of two (Lorentz) spinors is a (Lorentz) vector and a (Lorentz) scalar. Note that the complex Lie algebra sl(2,C) has a compact real form su(2) (in fact, all Lie algebras have a unique compact real form). The same decomposition holds for the compact form as well: the product of two spinors in su(2) being a vector in the rotation group O(3) and a singlet. The decomposition is given by the Clebsch–Gordan coefficients.
A similar phenomenon occurs in the compact group SU(3), where there are two charge-conjugate but inequivalent fundamental representations, dubbed {\displaystyle 3} and {\displaystyle {\overline {3}}}, the number 3 denoting the dimension of the representation, with the quarks transforming under {\displaystyle 3} and the antiquarks transforming under {\displaystyle {\overline {3}}}. The Kronecker product of the two gives
{\displaystyle 3\otimes {\overline {3}}=8\oplus 1.}
That is, an eight-dimensional representation, the octet of the eight-fold way, and a singlet. The decomposition of such products of representations into direct sums of irreducible representations can in general be written as
{\displaystyle \Lambda \otimes \Lambda '=\bigoplus _{i}{\mathcal {L}}_{i}\Lambda _{i}}
for representations {\displaystyle \Lambda }. The dimensions of the representations obey the "dimension sum rule":
{\displaystyle d_{\Lambda }\cdot d_{\Lambda '}=\sum _{i}{\mathcal {L}}_{i}d_{\Lambda _{i}}.}
Here, {\displaystyle d_{\Lambda }} is the dimension of the representation {\displaystyle \Lambda }, and the integers {\displaystyle {\mathcal {L}}_{i}} are the Littlewood–Richardson coefficients. The decomposition of the representations is again given by the Clebsch–Gordan coefficients, this time in the general Lie-algebra setting.
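The dimension sum rule is easy to check mechanically. The short sketch below verifies it for the two decompositions quoted in this section; the helper function is hypothetical, written just for this illustration:

```python
# Check d_L * d_R = sum_i L_i * d_i for the decompositions quoted above.
# dimension_sum_rule_holds is a hypothetical helper for this illustration.

def dimension_sum_rule_holds(d_left, d_right, pieces):
    """pieces: list of (Littlewood-Richardson coefficient L_i, dimension d_i)."""
    return d_left * d_right == sum(L * d for L, d in pieces)

# 2 (x) 2-bar = 3 (+) 1   (SL(2,C) spinors -> vector + scalar)
assert dimension_sum_rule_holds(2, 2, [(1, 3), (1, 1)])

# 3 (x) 3-bar = 8 (+) 1   (SU(3) quark and antiquark -> octet + singlet)
assert dimension_sum_rule_holds(3, 3, [(1, 8), (1, 1)])
```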
== See also ==
Casimir operator
== References ==
The old quantum theory is a collection of results from the years 1900–1925, which predate modern quantum mechanics. The theory was never complete or self-consistent, but was instead a set of heuristic corrections to classical mechanics. The theory has come to be understood as the semi-classical approximation to modern quantum mechanics. The main and final accomplishments of the old quantum theory were the determination of the modern form of the periodic table by Edmund Stoner and the Pauli exclusion principle, both of which were premised on Arnold Sommerfeld's enhancements to the Bohr model of the atom.
The main tool of the old quantum theory was the Bohr–Sommerfeld quantization condition, a procedure for selection of certain allowed states of a classical system: the system can then only exist in one of the allowed states and not in any other state.
== History ==
The old quantum theory was instigated by the 1900 work of Max Planck on the emission and absorption of light in a black body with his discovery of Planck's law introducing his quantum of action, and began in earnest after the work of Albert Einstein on the specific heats of solids in 1907 brought him to the attention of Walther Nernst. Einstein, followed by Debye, applied quantum principles to the motion of atoms, explaining the specific heat anomaly.
In 1910, Arthur Erich Haas further developed J. J. Thomson's atomic model in a paper that outlined a treatment of the hydrogen atom involving quantization of electronic orbitals, thus anticipating the Bohr model (1913) by three years.
John William Nicholson is noted as the first to create an atomic model that quantized angular momentum as {\displaystyle h/(2\pi )}. Niels Bohr quoted him in his 1913 paper on the Bohr model of the atom.
In 1913, Niels Bohr displayed rudiments of the later defined correspondence principle and used it to formulate a model of the hydrogen atom which explained the line spectrum. In the next few years Arnold Sommerfeld extended the quantum rule to arbitrary integrable systems making use of the principle of adiabatic invariance of the quantum numbers introduced by Lorentz and Einstein. Sommerfeld made a crucial contribution by quantizing the z-component of the angular momentum, which in the old quantum era was called "space quantization" (German: Richtungsquantelung). This model, which became known as the Bohr–Sommerfeld model, allowed the orbits of the electron to be ellipses instead of circles, and introduced the concept of quantum degeneracy. The theory would have correctly explained the Zeeman effect, except for the issue of electron spin. Sommerfeld's model was much closer to the modern quantum mechanical picture than Bohr's.
Throughout the 1910s and well into the 1920s, many problems were attacked using the old quantum theory with mixed results. Molecular rotation and vibration spectra were understood and the electron's spin was discovered, leading to the confusion of half-integer quantum numbers. Max Planck introduced the zero point energy and Arnold Sommerfeld semiclassically quantized the relativistic hydrogen atom. Hendrik Kramers explained the Stark effect. Bose and Einstein gave the correct quantum statistics for photons.
Kramers gave a prescription for calculating transition probabilities between quantum states in terms of Fourier components of the motion, ideas which were extended in collaboration with Werner Heisenberg to a semiclassical matrix-like description of atomic transition probabilities. Heisenberg went on to reformulate all of quantum theory in terms of a version of these transition matrices, creating matrix mechanics.
In 1924, Louis de Broglie introduced the wave theory of matter, which was extended to a semiclassical equation for matter waves by Albert Einstein a short time later. In 1926 Erwin Schrödinger found a completely quantum mechanical wave-equation, which reproduced all the successes of the old quantum theory without ambiguities and inconsistencies. Schrödinger's wave mechanics developed separately from matrix mechanics until Schrödinger and others proved that the two methods predicted the same experimental consequences. Paul Dirac later proved in 1926 that both methods can be obtained from a more general method called transformation theory.
In the 1950s Joseph Keller updated Bohr–Sommerfeld quantization using Einstein's interpretation of 1917, now known as Einstein–Brillouin–Keller method. In 1971, Martin Gutzwiller took into account that this method only works for integrable systems and derived a semiclassical way of quantizing chaotic systems from path integrals.
== Basic principles ==
The basic idea of the old quantum theory is that the motion in an atomic system is quantized, or discrete. The system obeys classical mechanics except that not every motion is allowed, only those motions which obey the quantization condition:
{\displaystyle \oint _{H(p,q)=E}p_{i}\,dq_{i}=n_{i}h}
where the {\displaystyle p_{i}} are the momenta of the system and the {\displaystyle q_{i}} are the corresponding coordinates. The quantum numbers {\displaystyle n_{i}} are integers and the integral is taken over one period of the motion at constant energy (as described by the Hamiltonian). The integral is an area in phase space, which is a quantity called the action and is quantized in units of the (unreduced) Planck constant. For this reason, the Planck constant was often called the quantum of action.
In order for the old quantum condition to make sense, the classical motion must be separable, meaning that there are separate coordinates {\displaystyle q_{i}} in terms of which the motion is periodic. The periods of the different motions do not have to be the same; they can even be incommensurate, but there must be a set of coordinates where the motion decomposes in a multi-periodic way.
The motivation for the old quantum condition was the correspondence principle, complemented by the physical observation that the quantities which are quantized must be adiabatic invariants. Given Planck's quantization rule for the harmonic oscillator, either condition determines the correct classical quantity to quantize in a general system up to an additive constant.
This quantization condition is often known as the Wilson–Sommerfeld rule, proposed independently by William Wilson and Arnold Sommerfeld.
== Examples ==
=== Thermal properties of the harmonic oscillator ===
The simplest system in the old quantum theory is the harmonic oscillator, whose Hamiltonian is:
{\displaystyle H={p^{2} \over 2m}+{m\omega ^{2}q^{2} \over 2}.}
The old quantum theory yields a recipe for the quantization of the energy levels of the harmonic oscillator, which, when combined with the Boltzmann probability distribution of thermodynamics, yields the correct expression for the stored energy and specific heat of a quantum oscillator both at low and at ordinary temperatures. Applied as a model for the specific heat of solids, this resolved a discrepancy in pre-quantum thermodynamics that had troubled 19th-century scientists. Let us now describe this.
The level sets of H are the orbits, and the quantum condition is that the area enclosed by an orbit in phase space is an integer multiple of the Planck constant. It follows that the energy is quantized according to the Planck rule:
{\displaystyle E=n\hbar \omega ,}
a result which was known well before, and used to formulate the old quantum condition. This result differs by {\displaystyle {\tfrac {1}{2}}\hbar \omega } from the results found with the help of quantum mechanics. This constant is neglected in the derivation of the old quantum theory, and its value cannot be determined using it.
The thermal properties of a quantized oscillator may be found by averaging the energy in each of the discrete states assuming that they are occupied with a Boltzmann weight:
{\displaystyle U={\sum _{n}\hbar \omega ne^{-\beta n\hbar \omega } \over \sum _{n}e^{-\beta n\hbar \omega }}={\hbar \omega e^{-\beta \hbar \omega } \over 1-e^{-\beta \hbar \omega }},\;\;\;{\rm {where}}\;\;\beta ={\frac {1}{kT}},}
Here kT is the Boltzmann constant times the absolute temperature, which is the temperature as measured in more natural units of energy. The quantity {\displaystyle \beta } is more fundamental in thermodynamics than the temperature, because it is the thermodynamic potential associated to the energy.
From this expression, it is easy to see that for large values of {\displaystyle \beta }, that is, for very low temperatures, the average energy U in the harmonic oscillator approaches zero very quickly, exponentially fast. The reason is that kT is the typical energy of random motion at temperature T, and when this is smaller than {\displaystyle \hbar \omega }, there is not enough energy to give the oscillator even one quantum of energy. So the oscillator stays in its ground state, storing next to no energy at all.
This means that at very cold temperatures, the change in energy with respect to beta, or equivalently the change in energy with respect to temperature, is also exponentially small. The change in energy with respect to temperature is the specific heat, so the specific heat is exponentially small at low temperatures, going to zero like {\displaystyle \exp(-\hbar \omega /kT)}.
At small values of {\displaystyle \beta }, that is, at high temperatures, the average energy U is equal to {\displaystyle 1/\beta =kT}. This reproduces the equipartition theorem of classical thermodynamics: every harmonic oscillator at temperature T has energy kT on average. This means that the specific heat of an oscillator is constant in classical mechanics and equal to k. For a collection of atoms connected by springs, a reasonable model of a solid, the total specific heat is equal to the total number of oscillators times k. There are overall three oscillators for each atom, corresponding to the three possible directions of independent oscillations in three dimensions. So the specific heat of a classical solid is always 3k per atom, or in chemistry units, 3R per mole of atoms.
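Both limits described above can be checked numerically. The sketch below works in units where ℏω = 1 and compares the closed-form average energy against a direct Boltzmann-weighted sum over the levels E_n = nℏω:

```python
import math

# Units with h-bar * omega = 1; beta = 1/kT as in the text.

def U_closed(beta):
    # closed form: U = e^{-beta} / (1 - e^{-beta})
    return math.exp(-beta) / (1.0 - math.exp(-beta))

def U_sum(beta, n_max=2000):
    # direct Boltzmann average over the discrete levels E_n = n
    num = sum(n * math.exp(-beta * n) for n in range(n_max))
    den = sum(math.exp(-beta * n) for n in range(n_max))
    return num / den

assert abs(U_closed(2.0) - U_sum(2.0)) < 1e-12   # sum matches closed form
assert abs(U_closed(0.01) * 0.01 - 1.0) < 0.01   # high T: U ~ kT = 1/beta
assert U_closed(20.0) < 1e-8                     # low T: exponentially small
```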
Monatomic solids at room temperatures have approximately the same specific heat of 3k per atom, but at low temperatures they don't. The specific heat is smaller at colder temperatures, and it goes to zero at absolute zero. This is true for all material systems, and this observation is called the third law of thermodynamics. Classical mechanics cannot explain the third law, because in classical mechanics the specific heat is independent of the temperature.
This contradiction between classical mechanics and the specific heat of cold materials was noted by James Clerk Maxwell in the 19th century, and remained a deep puzzle for those who advocated an atomic theory of matter. Einstein resolved this problem in 1906 by proposing that atomic motion is quantized. This was the first application of quantum theory to mechanical systems. A short while later, Peter Debye gave a quantitative theory of solid specific heats in terms of quantized oscillators with various frequencies (see Einstein solid and Debye model).
=== One-dimensional potential: U = 0 ===
One-dimensional problems are easy to solve. At any energy E, the value of the momentum p is found from the conservation equation:
{\displaystyle {\sqrt {2m(E-U(q))}}={\sqrt {2mE}}=p={\text{const.}}}
which is integrated over all values of q between the classical turning points, the places where the momentum vanishes. The integral is easiest for a particle in a box of length L, where the quantum condition is:
{\displaystyle 2\int _{0}^{L}p\,dq=nh}
which gives the allowed momenta:
{\displaystyle p={nh \over 2L}}
and the energy levels
{\displaystyle E_{n}={p^{2} \over 2m}={n^{2}h^{2} \over 8mL^{2}}}
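These box levels are consistent with the quantization condition they came from: substituting E_n back gives an action of exactly nh. A quick numeric check, using SI values for an electron in a 1 nm box (an illustrative choice):

```python
import math

h = 6.626e-34    # Planck constant, J*s
m = 9.109e-31    # electron mass, kg (illustrative choice)
L = 1e-9         # box length: 1 nm (illustrative choice)

def E_box(n):
    # E_n = n^2 h^2 / (8 m L^2), derived above
    return n**2 * h**2 / (8 * m * L**2)

def action(n):
    # 2 * integral_0^L p dq with constant p = sqrt(2 m E_n); should equal n h
    p = math.sqrt(2 * m * E_box(n))
    return 2 * p * L

for n in (1, 2, 3):
    assert abs(action(n) / h - n) < 1e-12
```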
=== One-dimensional potential: U = Fx ===
Another easy case to solve with the old quantum theory is a linear potential on the positive halfline, the constant confining force F binding a particle to an impenetrable wall. This case is much more difficult in the full quantum mechanical treatment, and unlike the other examples, the semiclassical answer here is not exact but approximate, becoming more accurate at large quantum numbers.
The quantization-condition integral is
{\displaystyle 2\int _{0}^{\frac {E}{F}}{\sqrt {2m(E-Fx)}}\ dx=nh}
so that the quantum condition is
{\displaystyle {4 \over 3}{\sqrt {2m}}{E^{3/2} \over F}=nh}
which determines the energy levels,
{\displaystyle E_{n}=\left({3nhF \over 4{\sqrt {2m}}}\right)^{2/3}}
In the specific case F=mg, the particle is confined by the gravitational potential of the earth and the "wall" here is the surface of the earth.
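The energy formula can be cross-checked against the quantization integral it was derived from. The sketch below (in arbitrary units with m = h = F = 1) evaluates the integral numerically by the midpoint rule and confirms it returns nh at E = E_n:

```python
import math

m, h, F = 1.0, 1.0, 1.0   # arbitrary units

def E_linear(n):
    # E_n = (3 n h F / (4 sqrt(2m)))^(2/3), from the quantum condition above
    return (3 * n * h * F / (4 * math.sqrt(2 * m))) ** (2.0 / 3.0)

def action(E, steps=200_000):
    # 2 * integral_0^{E/F} sqrt(2m(E - Fx)) dx, by the midpoint rule
    a = E / F
    dx = a / steps
    return 2 * sum(math.sqrt(2 * m * (E - F * (i + 0.5) * dx)) * dx
                   for i in range(steps))

for n in (1, 2, 5):
    assert abs(action(E_linear(n)) / h - n) < 1e-6
```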
=== One-dimensional potential: U = ½kx² ===
This case is also easy to solve, and the semiclassical answer here agrees with the quantum one to within the ground-state energy. Its quantization-condition integral is
{\displaystyle 2\int _{-{\sqrt {\frac {2E}{k}}}}^{\sqrt {\frac {2E}{k}}}{\sqrt {2m\left(E-{\frac {1}{2}}kx^{2}\right)}}\ dx=nh}
with solution
{\displaystyle E=n{\frac {h}{2\pi }}{\sqrt {\frac {k}{m}}}=n\hbar \omega }
for oscillation angular frequency {\displaystyle \omega }, as before.
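As a numerical sanity check, the phase-space area enclosed by the orbit at E = nℏω is nh, since the orbit is an ellipse of area 2πE/ω. The sketch below (units with m = k = ℏ = 1) evaluates the quantization integral directly:

```python
import math

m, k, hbar = 1.0, 1.0, 1.0     # arbitrary units
omega = math.sqrt(k / m)
h = 2 * math.pi * hbar

def action(E, steps=200_000):
    # 2 * integral_{-A}^{A} sqrt(2m(E - k x^2 / 2)) dx, turning points A = sqrt(2E/k)
    A = math.sqrt(2 * E / k)
    dx = 2 * A / steps
    total = 0.0
    for i in range(steps):
        x = -A + (i + 0.5) * dx
        # max() guards against tiny negative arguments from rounding at the edges
        total += math.sqrt(max(0.0, 2 * m * (E - 0.5 * k * x * x))) * dx
    return 2 * total

for n in (1, 2, 3):
    assert abs(action(n * hbar * omega) / h - n) < 1e-4
```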
=== Rotator ===
Another simple system is the rotator. A rotator consists of a mass M at the end of a massless rigid rod of length R and in two dimensions has the Lagrangian:
{\displaystyle L={MR^{2} \over 2}{\dot {\theta }}^{2}}
which determines that the angular momentum J conjugate to {\displaystyle \theta }, the polar angle, is {\displaystyle J=MR^{2}{\dot {\theta }}}.
The old quantum condition requires that J multiplied by the period of {\displaystyle \theta } is an integer multiple of the Planck constant:
{\displaystyle 2\pi J=nh}
This requires the angular momentum to be an integer multiple of {\displaystyle \hbar }. In the Bohr model, this restriction imposed on circular orbits was enough to determine the energy levels.
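With J = nℏ and the energy purely kinetic, the rotator's spectrum follows immediately; a minimal sketch in units with ℏ = M = R = 1 (arbitrary choices):

```python
# The old quantum condition 2*pi*J = n*h gives J = n*hbar; the rotator's
# energy is purely kinetic, E = J^2 / (2 M R^2).  Units are arbitrary.

hbar, M, R = 1.0, 1.0, 1.0

def E_rotator(n):
    J = n * hbar
    return J**2 / (2 * M * R**2)

levels = [E_rotator(n) for n in range(4)]      # 0.0, 0.5, 2.0, 4.5
gaps = [levels[i + 1] - levels[i] for i in range(3)]
assert gaps == [0.5, 1.5, 2.5]                 # spacings grow linearly with n
```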
In three dimensions, a rigid rotator can be described by two angles, {\displaystyle \theta } and {\displaystyle \phi }, where {\displaystyle \theta } is the inclination relative to an arbitrarily chosen z-axis while {\displaystyle \phi } is the rotator angle in the projection to the x–y plane. The kinetic energy is again the only contribution to the Lagrangian:
{\displaystyle L={MR^{2} \over 2}{\dot {\theta }}^{2}+{MR^{2} \over 2}(\sin(\theta ){\dot {\phi }})^{2}}
And the conjugate momenta are {\displaystyle p_{\theta }={\dot {\theta }}} and {\displaystyle p_{\phi }=\sin(\theta )^{2}{\dot {\phi }}}.
The equation of motion for {\displaystyle \phi } is trivial: {\displaystyle p_{\phi }} is a constant,
{\displaystyle p_{\phi }=l_{\phi }}
which is the z-component of the angular momentum. The quantum condition demands that the integral of the constant {\displaystyle l_{\phi }} as {\displaystyle \phi } varies from 0 to {\displaystyle 2\pi } is an integer multiple of h:
{\displaystyle l_{\phi }=m\hbar }
And m is called the magnetic quantum number, because the z component of the angular momentum is the magnetic moment of the rotator along the z direction in the case where the particle at the end of the rotator is charged.
Since the three-dimensional rotator is rotating about an axis, the total angular momentum should be restricted in the same way as the two-dimensional rotator. The two quantum conditions restrict the total angular momentum and the z-component of the angular momentum to be the integers l,m. This condition is reproduced in modern quantum mechanics, but in the era of the old quantum theory it led to a paradox: how can the orientation of the angular momentum relative to the arbitrarily chosen z-axis be quantized? This seems to pick out a direction in space.
This phenomenon, the quantization of angular momentum about an axis, was given the name space quantization, because it seemed incompatible with rotational invariance. In modern quantum mechanics, the angular momentum is quantized the same way, but the discrete states of definite angular momentum in any one orientation are quantum superpositions of the states in other orientations, so that the process of quantization does not pick out a preferred axis. For this reason, the name "space quantization" fell out of favor, and the same phenomenon is now called the quantization of angular momentum.
=== Hydrogen atom ===
The angular part of the hydrogen atom is just the rotator, and gives the quantum numbers l and m. The only remaining variable is the radial coordinate, which executes a periodic one-dimensional potential motion, which can be solved.
For a fixed value of the total angular momentum L, the Hamiltonian for a classical Kepler problem is (the unit of mass and unit of energy redefined to absorb two constants):
{\displaystyle H={p_{r}^{2} \over 2}+{l^{2} \over 2r^{2}}-{1 \over r}.}
Fixing the energy to be (a negative) constant and solving for the radial momentum {\displaystyle p_{r}}, the quantum condition integral is:
{\displaystyle \oint {\sqrt {2E-{l^{2} \over r^{2}}+{2 \over r}}}\ dr=kh}
which can be solved with the method of residues, and gives a new quantum number {\displaystyle k} which determines the energy in combination with {\displaystyle l}. The energy is:
{\displaystyle E=-{1 \over 2(k+l)^{2}}}
and it only depends on the sum of k and l, which is the principal quantum number n. Since k is positive, the allowed values of l for any given n are no bigger than n. The energies reproduce those in the Bohr model, except with the correct quantum mechanical multiplicities, with some ambiguity at the extreme values.
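Two features of this result are easy to verify: the energy depends only on n = k + l, and counting the allowed (l, m) pairs for each n reproduces the n² multiplicity of the Bohr levels. A short sketch (energies in the rescaled units of the Hamiltonian above):

```python
# E = -1/(2(k+l)^2) depends only on n = k + l, so states with the same n
# are degenerate.  For each l < n the rotator supplies 2l+1 values of m,
# giving the familiar n^2 multiplicity of the Bohr levels.

def bohr_energy(n):
    # energy in the rescaled units of the Hamiltonian above
    return -1.0 / (2.0 * n**2)

def multiplicity(n):
    return sum(2 * l + 1 for l in range(n))

for n in (1, 2, 3, 4):
    assert multiplicity(n) == n**2

# The level-ratio E_2 / E_1 = 1/4 comes out directly:
assert bohr_energy(2) / bohr_energy(1) == 0.25
```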
== De Broglie waves ==
In 1905, Einstein noted that the entropy of the quantized electromagnetic field oscillators in a box is, for short wavelength, equal to the entropy of a gas of point particles in the same box. The number of point particles is equal to the number of quanta. Einstein concluded that the quanta could be treated as if they were localizable objects, particles of light. Today we call them photons (a name coined by Gilbert N. Lewis in a letter to Nature).
Einstein's theoretical argument was based on thermodynamics, on counting the number of states, and so was not completely convincing. Nevertheless, he concluded that light had attributes of both waves and particles, more precisely that an electromagnetic standing wave with frequency {\displaystyle \omega } and the quantized energy
{\displaystyle E=n\hbar \omega }
should be thought of as consisting of n photons each with an energy {\displaystyle \hbar \omega }. Einstein could not describe how the photons were related to the wave.
The photons have momentum as well as energy, and the momentum had to be {\displaystyle \hbar k}, where {\displaystyle k} is the wavenumber of the electromagnetic wave. This is required by relativity, because the momentum and energy form a four-vector, as do the frequency and wave-number.
In 1924, as a PhD candidate, Louis de Broglie proposed a new interpretation of the quantum condition. He suggested that all matter, electrons as well as photons, are described by waves obeying the relation
{\displaystyle p=\hbar k}
or, expressed in terms of wavelength {\displaystyle \lambda } instead,
{\displaystyle p={h \over \lambda }}
He then noted that the quantum condition:
{\displaystyle \int p\,dx=\hbar \int k\,dx=2\pi \hbar n}
counts the change in phase for the wave as it travels along the classical orbit, and requires that it be an integer multiple of {\displaystyle 2\pi }. Expressed in wavelengths, the number of wavelengths along a classical orbit must be an integer. This is the condition for constructive interference, and it explained the reason for quantized orbits: the matter waves make standing waves only at discrete frequencies, at discrete energies.
For example, for a particle confined in a box, a standing wave must fit an integer number of wavelengths into twice the distance between the walls. The condition becomes:
{\displaystyle n\lambda =2L}
so that the quantized momenta are:
{\displaystyle p={\frac {nh}{2L}}}
reproducing the old quantum energy levels.
This development was given a more mathematical form by Einstein, who noted that the phase function for the waves, {\displaystyle \theta (J,x)}, in a mechanical system should be identified with the solution to the Hamilton–Jacobi equation, an equation which, already in the 19th century, William Rowan Hamilton believed to be a short-wavelength limit of a sort of wave mechanics. Schrödinger then found the proper wave equation which matched the Hamilton–Jacobi equation for the phase; this is now known as the Schrödinger equation.
== Kramers transition matrix ==
The old quantum theory was formulated only for special mechanical systems which could be separated into action angle variables which were periodic. It did not deal with the emission and absorption of radiation. Nevertheless, Hendrik Kramers was able to find heuristics for describing how emission and absorption should be calculated.
Kramers suggested that the orbits of a quantum system should be Fourier analyzed, decomposed into harmonics at multiples of the orbit frequency:
{\displaystyle X_{n}(t)=\sum _{k=-\infty }^{\infty }e^{ik\omega t}X_{n;k}}
The index n describes the quantum numbers of the orbit; it would be n–l–m in the Sommerfeld model. The frequency {\displaystyle \omega } is the angular frequency of the orbit, {\displaystyle 2\pi /T_{n}}, while k is an index for the Fourier mode. Bohr had suggested that the k-th harmonic of the classical motion corresponds to the transition from level n to level n−k.
Kramers proposed that the transitions between states were analogous to classical emission of radiation, which happens at frequencies at multiples of the orbit frequencies. The rate of emission of radiation is proportional to {\displaystyle |X_{k}|^{2}}, as it would be in classical mechanics. The description was approximate, since the Fourier components did not have frequencies that exactly match the energy spacings between levels.
This idea led to the development of matrix mechanics.
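Kramers's starting point, the Fourier decomposition of a periodic orbit, can be illustrated numerically. The orbit below is a made-up periodic signal (not a solution of any particular atomic model); its components X_k are recovered by discretizing X_k = (1/T)∫ x(t)e^(-ikωt) dt:

```python
import cmath, math

omega = 1.0
T = 2 * math.pi / omega

def x(t):
    # illustrative periodic orbit: fundamental plus a weak third harmonic
    return math.cos(omega * t) + 0.1 * math.cos(3 * omega * t)

def X(k, samples=4096):
    # X_k = (1/T) * integral_0^T x(t) e^{-ik omega t} dt, uniform Riemann sum
    dt = T / samples
    return sum(x(i * dt) * cmath.exp(-1j * k * omega * i * dt)
               for i in range(samples)) * dt / T

assert abs(X(1) - 0.5) < 1e-10    # cos(omega t) contributes 1/2 at k = +-1
assert abs(X(3) - 0.05) < 1e-10   # the weak harmonic shows up at k = +-3
assert abs(X(2)) < 1e-10          # absent harmonics vanish
```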
== Limitations ==
The old quantum theory had some limitations:
The old quantum theory provides no means to calculate the intensities of the spectral lines.
It fails to explain the anomalous Zeeman effect (that is, where the spin of the electron cannot be neglected).
It cannot quantize "chaotic" systems, i.e. dynamical systems in which trajectories are neither closed nor periodic and for which no analytical form exists. This presents a problem for systems as simple as a 2-electron atom, which is classically chaotic analogously to the famous gravitational three-body problem.
However it can be used to describe atoms with more than one electron (e.g. Helium) and the Zeeman effect.
It was later proposed that the old quantum theory is in fact the semi-classical approximation to the canonical quantum mechanics, but its limitations are still under investigation.
== See also ==
Bohr model
Bohr–Sommerfeld model
BKS theory
== References ==
== Further reading ==
Thewlis, J., ed. (1962). Encyclopaedic Dictionary of Physics.
Pais, Abraham (1982). "Max Born's Statistical Interpretation of Quantum Mechanics" (PDF). Science. 218 (4578): 1193–8. Bibcode:1982Sci...218.1193P. doi:10.1126/science.218.4578.1193. PMID 17802457. S2CID 34406257. Address to annual meeting of the Optical Society of America October 21, 1982 (Tucson AZ). Retrieved 2013-09-08.
Planck, Max (1922). The origin and development of the quantum theory. Translated by Silberstein, L.; Clarke, H. T. Oxford: Clarendon Press.