The parametrization for formula_214 and formula_215 takes advantage of the Todorov effective external potential forms (as seen in the above section on the two-body Klein–Gordon equations) and at the same time displays the correct static-limit form for the Pauli reduction to Schrödinger-like form. The choice of these parametrizations (as with the two-body Klein–Gordon equations) is closely tied to classical or quantum field theories for separate scalar and vector interactions. This amounts to working in the Feynman gauge with the simplest relation between space- and timelike parts of the vector interaction.
The mass and energy potentials are respectively |
in which formula_223 is a Green function determined from the Schrödinger equation. Because of the similarity between the Schrödinger equation Eq. () and the relativistic constraint equation (), one can derive the same type of equation as the above
called the quasipotential equation, with a formula_215 very similar to that given in the Lippmann–Schwinger equation. The difference is that with the quasipotential equation, one starts with the scattering amplitudes formula_226 of quantum field theory, as determined from Feynman diagrams, and deduces the quasipotential Φ perturbatively. One can then use that Φ in () to compute the energy levels of two-particle systems implied by the field theory. Constraint dynamics provides one of an infinite number of different types of quasipotential equations (three-dimensional truncations of the Bethe–Salpeter equation), which differ from one another by the choice of formula_215.
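The algebraic structure behind the quasipotential equation can be illustrated with a toy model. The following sketch uses assumed 2×2 numeric matrices in place of the field-theoretic quantities in the text: an equation of the Lippmann–Schwinger form T = V + V G T can be solved by iteration (a Born-type series) or in closed form as T = (I − V G)⁻¹ V, and the two answers agree when the series converges.

```python
# Toy 2x2 model (assumed numbers, not the quantum field theory in the text):
# solve T = V + V*G*T by iteration and compare with the closed form.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def inv2(m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

V = [[0.2, 0.1], [0.1, 0.3]]   # assumed "quasipotential" matrix
G = [[0.5, 0.0], [0.0, 0.5]]   # assumed "Green function" matrix

# Iterate T <- V + V G T; converges since the entries of V G are small.
T = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(100):
    T = madd(V, matmul(matmul(V, G), T))

VG = matmul(V, G)
I_minus_VG = [[1.0 - VG[0][0], -VG[0][1]], [-VG[1][0], 1.0 - VG[1][1]]]
T_closed = matmul(inv2(I_minus_VG), V)

diff = max(abs(T[i][j] - T_closed[i][j]) for i in range(2) for j in range(2))
print(diff)   # the iterated series matches the closed-form solution
```

In the text the role of G is played by the constraint Green function formula_223 and the iteration corresponds to building Φ perturbatively from the amplitudes formula_226.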
In general relativity, the Komar superpotential, corresponding to the invariance of the Hilbert–Einstein Lagrangian formula_1, is the tensor density: |
associated with a vector field formula_3, and where formula_4 denotes covariant derivative with respect to the Levi-Civita connection. |
where formula_6 denotes the interior product, generalizes to an arbitrary vector field formula_7 the Komar superpotential above, which was originally derived for timelike Killing vector fields.
The Komar superpotential is affected by the anomalous factor problem: when evaluated, for example, on the Kerr–Newman solution, it produces the correct angular momentum but only one-half of the expected mass.
In physics, specifically statistical mechanics, an ensemble (also statistical ensemble) is an idealization consisting of a large number of virtual copies (sometimes infinitely many) of a system, considered all at once, each of which represents a possible state that the real system might be in. In other words, a statistical ensemble is a probability distribution for the state of the system. The concept of an ensemble was introduced by J. Willard Gibbs in 1902. |
A thermodynamic ensemble is a specific variety of statistical ensemble that, among other properties, is in statistical equilibrium (defined below), and is used to derive the properties of thermodynamic systems from the laws of classical or quantum mechanics. |
The ensemble formalises the notion that an experimenter repeating an experiment again and again under the same macroscopic conditions, but unable to control the microscopic details, may expect to observe a range of different outcomes. |
The notional size of ensembles in thermodynamics, statistical mechanics and quantum statistical mechanics can be very large, including every possible microscopic state the system could be in, consistent with its observed macroscopic properties. For many important physical cases, it is possible to calculate averages directly over the whole of the thermodynamic ensemble, to obtain explicit formulas for many of the thermodynamic quantities of interest, often in terms of the appropriate partition function. |
The concept of an equilibrium or stationary ensemble is crucial to many applications of statistical ensembles. Although a mechanical system certainly evolves over time, the ensemble does not necessarily have to evolve. In fact, the ensemble will not evolve if it contains all past and future phases of the system. Such a statistical ensemble, one that does not change over time, is called "stationary" and can be said to be in "statistical equilibrium". |
The study of thermodynamics is concerned with systems that appear to human perception to be "static" (despite the motion of their internal parts), and which can be described simply by a set of macroscopically observable variables. These systems can be described by statistical ensembles that depend on a few observable parameters, and which are in statistical equilibrium. Gibbs noted that different macroscopic constraints lead to different types of ensembles, with particular statistical characteristics. Three important thermodynamic ensembles were defined by Gibbs: the microcanonical ensemble, the canonical ensemble, and the grand canonical ensemble.
The calculations that can be made using each of these ensembles are explored further in their respective articles. |
Other thermodynamic ensembles can also be defined, corresponding to different physical requirements, for which analogous formulae can often be derived.
For example, in the reaction ensemble, particle-number fluctuations are only allowed to occur according to the stoichiometry of the chemical reactions present in the system.
Representations of statistical ensembles in statistical mechanics. |
The precise mathematical expression for a statistical ensemble has a distinct form depending on the type of mechanics under consideration (quantum or classical). In the classical case, the ensemble is a probability distribution over the microstates. In quantum mechanics, this notion, due to von Neumann, is a way of assigning a probability distribution over the results of each complete set of commuting observables. |
In classical mechanics, the ensemble is instead written as a probability distribution in phase space; the microstates are the result of partitioning phase space into equal-sized units, although the size of these units can be chosen somewhat arbitrarily. |
Putting aside for the moment the question of how statistical ensembles are generated operationally, we should be able to perform the following two operations on ensembles "A", "B" of the same system: |
Under certain conditions, therefore, equivalence classes of statistical ensembles have the structure of a convex set. |
A statistical ensemble in quantum mechanics (also known as a mixed state) is most often represented by a density matrix, denoted by formula_1. The density matrix provides a fully general tool that can incorporate both quantum uncertainties (present even if the state of the system were completely known) and classical uncertainties (due to a lack of knowledge) in a unified manner. Any physical observable in quantum mechanics can be written as an operator, . The expectation value of this operator on the statistical ensemble formula_2 is given by the following trace: |
This can be used to evaluate averages (operator ), variances (using operator ), covariances (using operator ), etc. The density matrix must always have a trace of 1: formula_4 (this essentially is the condition that the probabilities must add up to one). |
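A minimal sketch of this trace rule, using an assumed example rather than anything from the text: a qubit ensemble that mixes the pure states |0⟩ and |+⟩ with classical probabilities, for which the density matrix has unit trace and expectation values are given by Tr(ρA).

```python
# Assumed toy ensemble: 70% of systems prepared in |0>, 30% in |+>.
# Real 2x2 matrices suffice for this example.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def trace(m):
    return sum(m[i][i] for i in range(len(m)))

rho0 = [[1.0, 0.0], [0.0, 0.0]]       # |0><0|
rho_plus = [[0.5, 0.5], [0.5, 0.5]]   # |+><+|

# Classical mixture of the two preparations.
rho = [[0.7 * rho0[i][j] + 0.3 * rho_plus[i][j] for j in range(2)]
       for i in range(2)]

sigma_z = [[1.0, 0.0], [0.0, -1.0]]   # observable

print(trace(rho))                     # normalization, ~1.0
print(trace(matmul(rho, sigma_z)))    # <sigma_z> = 0.7*1 + 0.3*0, ~0.7
```

The same trace formula applies unchanged to variances and covariances by substituting the appropriate operator products.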
In general, the ensemble evolves over time according to the von Neumann equation. |
Equilibrium ensembles (those that do not evolve over time, formula_5) can be written solely as a function of conserved variables. For example, the microcanonical ensemble and canonical ensemble are strictly functions of the total energy, which is measured by the total energy operator (Hamiltonian). The grand canonical ensemble is additionally a function of the particle number, measured by the total particle number operator . Such an equilibrium ensemble is a diagonal matrix in the orthogonal basis of states that simultaneously diagonalizes each conserved variable. In bra–ket notation, the density matrix is
where the , indexed by , are the elements of a complete and orthogonal basis. (Note that in other bases, the density matrix is not necessarily diagonal.) |
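As a concrete sketch with assumed numbers: for a canonical ensemble the diagonal entries of the density matrix in the energy basis are the Boltzmann weights e^(−βE_i)/Z. The three-level system below is hypothetical, chosen only to show the normalization and the resulting mean energy.

```python
import math

# Assumed toy system: three energy eigenvalues at inverse temperature beta.
beta = 1.0                       # inverse temperature (assumed units)
energies = [0.0, 1.0, 2.0]       # eigenvalues of the Hamiltonian

Z = sum(math.exp(-beta * E) for E in energies)       # partition function
probs = [math.exp(-beta * E) / Z for E in energies]  # diagonal of rho

print(sum(probs))     # trace of rho, ~1
print(sum(p * E for p, E in zip(probs, energies)))   # mean energy
```

In any other basis this ensemble would have off-diagonal entries, as the parenthetical note above points out; only the simultaneous eigenbasis of the conserved variables makes it diagonal.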
In classical mechanics, an ensemble is represented by a probability density function defined over the system's phase space. While an individual system evolves according to Hamilton's equations, the density function (the ensemble) evolves over time according to Liouville's equation. |
In a mechanical system with a defined number of parts, the phase space has generalized coordinates called , and associated canonical momenta called . The ensemble is then represented by a joint probability density function . |
If the number of parts in the system is allowed to vary among the systems in the ensemble (as in a grand ensemble where the number of particles is a random quantity), then it is a probability distribution over an extended phase space that includes further variables such as particle numbers (first kind of particle), (second kind of particle), and so on up to (the last kind of particle; is how many different kinds of particles there are). The ensemble is then represented by a joint probability density function . The number of coordinates varies with the numbers of particles. |
Any mechanical quantity can be written as a function of the system's phase. The expectation value of any such quantity is given by an integral over the entire phase space of this quantity weighted by : |
The condition of probability normalization applies, requiring |
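These phase-space averages can be made concrete with a hypothetical system not taken from the text: a 1D harmonic oscillator with H = p²/2 + q²/2 in a canonical ensemble at inverse temperature β. Its phase-space density factorizes into Gaussians in q and p, so the ensemble can be sampled directly and the expectation value ⟨H⟩ estimated as a weighted average, approaching the equipartition value 1/β.

```python
import math
import random

# Assumed toy ensemble: canonical distribution ~ exp(-beta * H) for a
# 1D harmonic oscillator, H = p^2/2 + q^2/2 (unit mass and frequency).
random.seed(0)
beta = 2.0
sigma = 1.0 / math.sqrt(beta)   # std. dev. of both q and p

samples = [(random.gauss(0, sigma), random.gauss(0, sigma))
           for _ in range(200_000)]
mean_H = sum(0.5 * q * q + 0.5 * p * p for q, p in samples) / len(samples)
print(mean_H)   # close to 1/beta = 0.5
```

Sampling plays the role of the phase-space integral here; normalization is automatic because the samples are drawn from the (already normalized) density.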
Phase space is a continuous space containing an infinite number of distinct physical states within any small region. In order to connect the probability "density" in phase space to a probability "distribution" over microstates, it is necessary to somehow partition phase space into blocks that represent the different states of the system in a fair way. It turns out that the correct way to do this simply results in equal-sized blocks of canonical phase space, and so a microstate in classical mechanics is an extended region in the phase space of canonical coordinates that has a particular volume. In particular, the probability density function in phase space, , is related to the probability distribution over microstates, by a factor
Since can be chosen arbitrarily, the notional size of a microstate is also arbitrary. Still, the value of influences the offsets of quantities such as entropy and chemical potential, and so it is important to be consistent with the value of when comparing different systems. |
It is in general difficult to find a coordinate system that uniquely encodes each physical state. As a result, it is usually necessary to use a coordinate system with multiple copies of each state, and then to recognize and remove the overcounting. |
A crude way to remove the overcounting would be to manually define a subregion of phase space that includes each physical state only once and then exclude all other parts of phase space. In a gas, for example, one could include only those phases where the particles' coordinates are sorted in ascending order. While this would solve the problem, the resulting integral over phase space would be tedious to perform due to its unusual boundary shape. (In this case, the factor introduced above would be set to , and the integral would be restricted to the selected subregion of phase space.) |
A simpler way to correct the overcounting is to integrate over all of phase space but to reduce the weight of each phase in order to exactly compensate the overcounting. This is accomplished by the factor introduced above, which is a whole number that represents how many ways a physical state can be represented in phase space. Its value does not vary with the continuous canonical coordinates, so overcounting can be corrected simply by integrating over the full range of canonical coordinates, then dividing the result by the overcounting factor. However, does vary strongly with discrete variables such as numbers of particles, and so it must be applied before summing over particle numbers. |
As mentioned above, the classic example of this overcounting is for a fluid system containing various kinds of particles, where any two particles of the same kind are indistinguishable and exchangeable. When the state is written in terms of the particles' individual positions and momenta, then the overcounting related to the exchange of identical particles is corrected by using |
This is known as "correct Boltzmann counting". |
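The N!-fold overcounting can be seen in a hypothetical discrete toy model (not from the text): place three identical particles in three distinct single-particle states. Every permutation of the particle labels describes the same physical state, so the labeled count exceeds the physical count by exactly N!.

```python
import math
from itertools import permutations

# Assumed toy model: N identical particles, one in each of N distinct
# single-particle states. Labeled configurations overcount each physical
# state by N!, so dividing by math.factorial(N) corrects the count.
states = ('a', 'b', 'c')
labeled = set(permutations(states))   # all labelings of one physical state
N = len(states)

print(len(labeled))                          # 6 labeled configurations
print(len(labeled) // math.factorial(N))     # 1 physical state
```

In the continuum case the same division by N! is applied to the phase-space integral, which is exactly the correction described above.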
The formulation of statistical ensembles used in physics has now been widely adopted in other fields, in part because it has been recognized that the canonical ensemble or Gibbs measure serves to maximize the entropy of a system, subject to a set of constraints: this is the principle of maximum entropy. This principle has now been widely applied to problems in linguistics, robotics, and the like. |
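The maximum-entropy origin of the Gibbs measure can be sketched numerically under assumed data: among all distributions over a set of energy levels with a fixed mean energy, the entropy maximizer is the Gibbs distribution p_i ∝ e^(−βE_i), where β is the Lagrange multiplier enforcing the constraint. The levels and target mean below are hypothetical; β is found by bisection.

```python
import math

# Assumed setup: discrete energy levels and a target mean energy.
energies = [0.0, 1.0, 2.0]
target = 0.7   # assumed constraint <E> = 0.7

def mean_energy(beta):
    w = [math.exp(-beta * E) for E in energies]
    Z = sum(w)
    return sum(wi * E for wi, E in zip(w, energies)) / Z

# <E> decreases monotonically in beta, so bisection finds the multiplier.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_energy(mid) > target:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
print(mean_energy(beta))   # matches the constraint, ~0.7
```

Different constraint sets (fixed energy, fixed mean energy, fixed mean particle number) reproduce the different thermodynamic ensembles in the same way.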
In addition, statistical ensembles in physics are often built on a principle of locality: that all interactions are only between neighboring atoms or nearby molecules. Thus, for example, lattice models, such as the Ising model, model ferromagnetic materials by means of nearest-neighbor interactions between spins. The statistical formulation of the principle of locality is now seen to be a form of the Markov property in the broad sense; nearest neighbors are now Markov blankets. Thus, the general notion of a statistical ensemble with nearest-neighbor interactions leads to Markov random fields, which again find broad applicability; for example in Hopfield networks. |
The discussion so far, while rigorous, has taken for granted that the notion of an ensemble is valid a priori, as is commonly done in physical contexts. What has not been shown is that the ensemble "itself" (not the consequent results) is a precisely defined mathematical object. For instance,
In this section, we attempt to partially answer this question. |
Suppose we have a "preparation procedure" for a system in a physics |
lab: For example, the procedure might involve a physical apparatus and |
some protocols for manipulating the apparatus. As a result of this preparation procedure, some system |
is produced and maintained in isolation for some small period of time. |
By repeating this laboratory preparation procedure we obtain a sequence of systems ...,"X""k", which in our mathematical idealization we assume to be infinite. The systems are similar in that they were all produced in the same way. This infinite sequence is an ensemble.
In a laboratory setting, each one of these prepped systems might be used as input |
for "one" subsequent "testing procedure". Again, the testing procedure |
involves a physical apparatus and some protocols; as a result of the |
testing procedure we obtain a "yes" or "no" answer. |
Given a testing procedure "E" applied to each prepared system, we obtain a sequence of values |
..., Meas ("E", "X""k"). Each one of these values is a 0 (no) or a 1 (yes).
For quantum mechanical systems, an important assumption made in the |
quantum logic approach to quantum mechanics is the identification of "yes-no" questions with the
lattice of closed subspaces of a Hilbert space. With some additional |
technical assumptions one can then infer that states are given by |
We see this reflects the definition of quantum states in general: A quantum state is a mapping from the observables to their expectation values. |
Dirac equation in the algebra of physical space |
The Dirac equation, as the relativistic equation that describes |
spin-1/2 particles in quantum mechanics, can be written in terms of the algebra of physical space (APS), a special case of a Clifford (geometric) algebra based on the use of paravectors.
The Dirac equation in APS, including the electromagnetic interaction, reads |
Another form of the Dirac equation in terms of the Space time algebra was given earlier by David Hestenes. |
In general, the Dirac equation in the formalism of geometric algebra has the advantage of |
The spinor can be written in a null basis as |
such that the representation of the spinor in terms of the Pauli matrices is |
The standard form of the Dirac equation can be recovered by decomposing the spinor into its right- and left-handed components, which are extracted with the help of the projector
The Dirac equation can be also written as |
Without electromagnetic interaction, the following equation is obtained from |
the two equivalent forms of the Dirac equation |
where the second column of the right and left spinors can be dropped by defining the |
The standard relativistic covariant form of the Dirac equation in the Weyl |
Given two spinors formula_18 and formula_19 in APS and |
their respective spinors in the standard form as formula_20 and |
formula_21, one can verify the following identity |
The Dirac equation is invariant under a global right rotation applied |
so that the kinetic term of the Dirac equation transforms as |
so that we can verify the invariance of the form of the Dirac equation. |
A more demanding requirement is that the Dirac equation should be |
invariant under a local gauge transformation of the type formula_28 |
In this case, the kinetic term transforms as |
so that the left side of the Dirac equation transforms covariantly as |
where we identify the need to perform an electromagnetic gauge transformation. |
The mass term transforms as in the case of a global rotation, so the form
An application of the Dirac equation on itself leads to the second order Dirac equation |
A solution for the free particle with momentum formula_34 and positive energy formula_35 is |
and the current resembles the classical proper velocity |
A solution for the free particle with negative energy and momentum |
and the current resembles the classical proper velocity formula_38 |