The expression of the second law for closed systems (allowing heat exchange and moving boundaries, but not exchange of matter) is:
The equality sign holds when only reversible processes take place inside the system. If irreversible processes take place (which is the case in real systems in operation), the >-sign holds. If heat is supplied to the system at several places, we have to take the algebraic sum of the corresponding terms.
For open systems (also allowing exchange of matter):
Here formula_47 is the flow of entropy into the system associated with the flow of matter entering the system. It should not be confused with the time derivative of the entropy. If matter is supplied at several places, we have to take the algebraic sum of these contributions.
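The stripped displays can be reconstructed along standard lines; the following is a hedged sketch of both statements (the original's exact symbols may differ, and formula_47 corresponds to the matter-borne term written here as \(\dot S_{\text{matter}}\)):

```latex
% Closed systems: entropy change bounded below by the entropy supplied with heat
\frac{\mathrm{d}S}{\mathrm{d}t} \;\ge\; \frac{\dot Q}{T}

% Open systems: matter carries entropy in at a rate \dot S_{\text{matter}},
% and internal (irreversible) production \dot S_{\text{gen}} is non-negative
\frac{\mathrm{d}S}{\mathrm{d}t}
  \;=\; \frac{\dot Q}{T} \;+\; \dot S_{\text{matter}} \;+\; \dot S_{\text{gen}},
\qquad \dot S_{\text{gen}} \;\ge\; 0
```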
The first mechanical argument of the kinetic theory of gases that molecular collisions entail an equalization of temperatures, and hence a tendency towards equilibrium, was due to James Clerk Maxwell in 1860; Ludwig Boltzmann, with his H-theorem of 1872, also argued that due to collisions gases should over time tend toward the Maxwell–Boltzmann distribution.
Due to Loschmidt's paradox, derivations of the second law have to make an assumption regarding the past, namely that the system is uncorrelated at some time in the past; this allows for a simple probabilistic treatment. This assumption is usually thought of as a boundary condition, and thus the second law is ultimately a consequence of the initial conditions somewhere in the past, probably at the beginning of the universe (the Big Bang), though other scenarios have also been suggested.
Given these assumptions, in statistical mechanics the second law is not a postulate; rather, it is a consequence of the fundamental postulate, also known as the equal prior probability postulate, so long as one is clear that simple probability arguments are applied only to the future, while for the past there are auxiliary sources of information which tell us that it was low entropy. The first part of the second law, which states that the entropy of a thermally isolated system can only increase, is a trivial consequence of the equal prior probability postulate if we restrict the notion of entropy to systems in thermal equilibrium. The entropy of an isolated system in thermal equilibrium containing an amount of energy formula_48 is:
where formula_50 is the number of quantum states in a small interval between formula_48 and formula_52. Here formula_53 is a macroscopically small energy interval that is kept fixed. Strictly speaking, this means that the entropy depends on the choice of formula_53. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on formula_53.
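As a toy illustration of why the specific entropy becomes insensitive to such details in the thermodynamic limit, one can count microstates directly for N two-level spins (a sketch with k_B = 1; the binomial model is an assumption chosen for illustration, not the article's system):

```python
from math import comb, log

def entropy_per_spin(N, frac):
    """S/N = (1/N) ln Omega for N two-level spins with n = frac*N excited.
    Omega = C(N, n) counts the microstates at the given energy (k_B = 1)."""
    n = round(frac * N)
    return log(comb(N, n)) / N

# As N grows, S/N approaches the intensive limit -[p ln p + (1-p) ln(1-p)],
# independent of the microscopic bookkeeping details:
exact = -(0.25 * log(0.25) + 0.75 * log(0.75))
for N in (100, 1000, 10000):
    print(N, entropy_per_spin(N, 0.25), exact)
```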
Suppose we have an isolated system whose macroscopic state is specified by a number of variables. These macroscopic variables can, e.g., refer to the total volume, the positions of pistons in the system, etc. Then formula_56 will depend on the values of these variables. If a variable is not fixed (e.g. we do not clamp a piston in a certain position), then, because all accessible states are equally likely in equilibrium, the free variable in equilibrium will be such that formula_56 is maximized, as that is the most probable situation in equilibrium.
If the variable was initially fixed to some value, then upon release, when the new equilibrium has been reached, the fact that the variable will adjust itself so that formula_56 is maximized implies that the entropy will have increased or stayed the same (if the value at which the variable was fixed happened to be the equilibrium value).
Suppose we start from an equilibrium situation and we suddenly remove a constraint on a variable. Then, right after we do this, there are a number formula_56 of accessible microstates, but equilibrium has not yet been reached, so the actual probabilities of the system being in some accessible state are not yet equal to the prior probability of formula_60. We have already seen that in the final equilibrium state the entropy will have increased or stayed the same relative to the previous equilibrium state. Boltzmann's H-theorem, however, proves that the quantity increases monotonically as a function of time during the intermediate out-of-equilibrium state.
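The monotone approach to equilibrium can be caricatured with a discrete model: any doubly stochastic transition matrix drives the Gibbs entropy −Σ p ln p upward, a finite-state cousin of the H-theorem (the particular matrix and initial distribution below are arbitrary illustrative choices):

```python
import numpy as np

# A doubly stochastic transition matrix plays the role of the collision
# dynamics: it preserves the uniform (equal-probability) distribution.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

p = np.array([0.9, 0.05, 0.05])   # far-from-equilibrium initial distribution

def gibbs_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

entropies = [gibbs_entropy(p)]
for _ in range(20):
    p = T @ p
    entropies.append(gibbs_entropy(p))

# The entropy rises monotonically toward ln(3), the uniform-state value.
assert all(b >= a - 1e-12 for a, b in zip(entropies, entropies[1:]))
```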
Derivation of the entropy change for reversible processes.
The second part of the second law states that the entropy change of a system undergoing a reversible process is given by:
See here for the justification for this definition. Suppose that the system has some external parameter, "x", that can be changed. In general, the energy eigenstates of the system will depend on "x". According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.
The generalized force, "X", corresponding to the external variable "x" is defined such that formula_63 is the work performed by the system if "x" is increased by an amount "dx". For example, if "x" is the volume, then "X" is the pressure. The generalized force for a system known to be in energy eigenstate formula_64 is given by:
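For a concrete check of this definition, consider a particle in a 1-D box, where the eigenvalue E_n(L) = n²π²ħ²/(2mL²) depends on the external parameter L; the generalized force −dE_n/dL then equals 2E_n/L, the "pressure" the state exerts on the wall (a sketch with ħ = m = 1; the box example is an illustration, not the article's system):

```python
import math

def E(n, L):
    # Particle-in-a-box eigenvalue with hbar = m = 1
    return n**2 * math.pi**2 / (2 * L**2)

def force(n, L, h=1e-6):
    # Generalized force X = -dE/dL, via a central finite difference
    return -(E(n, L + h) - E(n, L - h)) / (2 * h)

n, L = 3, 2.0
# Analytically X = 2 E_n / L for this system.
assert abs(force(n, L) - 2 * E(n, L) / L) < 1e-5
```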
Since the system can be in any energy eigenstate within an interval of formula_53, we define the generalized force for the system as the expectation value of the above expression:
To evaluate the average, we partition the formula_50 energy eigenstates by counting how many of them have a value for formula_69 within a range between formula_70 and formula_71. Calling this number formula_72, we have:
The average defining the generalized force can now be written:
We can relate this to the derivative of the entropy with respect to "x" at constant energy "E" as follows. Suppose we change "x" to "x" + "dx". Then formula_50 will change because the energy eigenstates depend on "x", causing energy eigenstates to move into or out of the range between formula_48 and formula_77. Let's focus again on the energy eigenstates for which formula_78 lies within the range between formula_70 and formula_71. Since these energy eigenstates increase in energy by "Y" "dx", all such energy eigenstates that are in the interval ranging from "E" − "Y" "dx" to "E" move from below "E" to above "E". There are
such energy eigenstates. If formula_82, all these energy eigenstates will move into the range between formula_48 and formula_77 and contribute to an increase in formula_56. The number of energy eigenstates that move from below formula_77 to above formula_77 is given by formula_88. The difference
is thus the net contribution to the increase in formula_56. Note that if "Y" "dx" is larger than formula_53, there will be energy eigenstates that move from below "E" to above formula_77. They are counted in both formula_93 and formula_88, therefore the above expression is also valid in that case.
Expressing the above expression as a derivative with respect to "E" and summing over "Y" yields the expression:
The logarithmic derivative of formula_56 with respect to "x" is thus given by:
The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and will thus vanish in the thermodynamic limit. We have thus found that:
Derivation for systems described by the canonical ensemble.
If a system is in thermal contact with a heat bath at some temperature "T", then, in equilibrium, the probability distribution over the energy eigenvalues is given by the canonical ensemble:
Here "Z" is a factor that normalizes the sum of all the probabilities to 1; this function is known as the partition function. We now consider an infinitesimal reversible change in the temperature and in the external parameters on which the energy levels depend. It follows from the general formula for the entropy:
Inserting the formula for formula_104 for the canonical ensemble in here gives:
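A minimal numerical sketch of this result (three arbitrary levels, k_B = 1): with the energy levels held fixed, an infinitesimal reversible temperature change delivers heat dQ = dE, and the Gibbs entropy S = −Σ p_i ln p_i of the canonical ensemble indeed responds as dS = dQ/T:

```python
import math

levels = [0.0, 1.0, 2.5]   # fixed energy eigenvalues (arbitrary units, k_B = 1)

def ensemble(T):
    # Canonical ensemble: Z normalizes the Boltzmann weights to probabilities
    Z = sum(math.exp(-e / T) for e in levels)
    p = [math.exp(-e / T) / Z for e in levels]
    E = sum(pi * e for pi, e in zip(p, levels))
    S = -sum(pi * math.log(pi) for pi in p)
    return E, S

# With the levels fixed, reversible heat is dQ = dE, so dS should equal dQ/T.
T, dT = 1.3, 1e-5
E1, S1 = ensemble(T - dT / 2)
E2, S2 = ensemble(T + dT / 2)
assert abs((S2 - S1) - (E2 - E1) / T) < 1e-9
```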
As elaborated above, it is thought that the second law of thermodynamics is a result of the very low-entropy initial conditions at the Big Bang. From a statistical point of view, these were very special conditions. On the other hand, they were quite simple, as the universe - or at least the part thereof from which the observable universe developed - seems to have been extremely uniform.
This may seem somewhat paradoxical, since in many physical systems uniform conditions (e.g. mixed rather than separated gases) have high entropy. The paradox is solved once one realizes that gravitational systems have negative heat capacity, so that when gravity is important, uniform conditions (e.g. gas of uniform density) in fact have lower entropy compared to non-uniform ones (e.g. black holes in empty space). Yet another approach is that the universe had high (or even maximal) entropy given its size, but as the universe grew it rapidly came out of thermodynamic equilibrium, its entropy only slightly increased compared to the increase in maximal possible entropy, and thus it arrived at a very low entropy compared to the much larger possible maximum given its later size.
As for the reason why the initial conditions were such, one suggestion is that cosmological inflation was enough to wipe off non-smoothness, while another is that the universe was created spontaneously, where the mechanism of creation implies low-entropy initial conditions.
There are two principal ways of formulating thermodynamics: (a) through passages from one state of thermodynamic equilibrium to another, and (b) through cyclic processes, by which the system is left unchanged while the total entropy of the surroundings is increased. These two ways help to understand the processes of life. The thermodynamics of living organisms has been considered by many authors, such as Erwin Schrödinger, Léon Brillouin and Isaac Asimov.
Furthermore, the ability of living organisms to grow and increase in complexity, as well as to form correlations with their environment in the form of adaptation and memory, is not opposed to the second law - rather, it is akin to general results following from it: under some definitions, an increase in entropy also results in an increase in complexity, and for a finite system interacting with finite reservoirs, an increase in entropy is equivalent to an increase in correlations between the system and the reservoirs.
Living organisms may be considered as open systems, because matter passes into and out of them. Thermodynamics of open systems is currently often considered in terms of passages from one state of thermodynamic equilibrium to another, or in terms of flows in the approximation of local thermodynamic equilibrium. The problem for living organisms may be further simplified by assuming a steady state with unchanging flows. General principles of entropy production for such approximations are a subject of unsettled current debate and research.
Commonly, systems for which gravity is not important have a positive heat capacity, meaning that their temperature rises with their internal energy. Therefore, when energy flows from a high-temperature object to a low-temperature object, the source temperature decreases while the sink temperature increases; hence temperature differences tend to diminish over time.
This is not always the case for systems in which the gravitational force is important: systems that are bound by their own gravity, such as stars, can have negative heat capacities. As they contract, both their total energy and their entropy decrease, but their internal temperature may increase. This can be significant for protostars and even gas giant planets such as Jupiter.
As gravity is the most important force operating on cosmological scales, it may be difficult or impossible to apply the second law to the universe as a whole.
The theory of classical or equilibrium thermodynamics is idealized. A main postulate or assumption, often not even explicitly stated, is the existence of systems in their own internal states of thermodynamic equilibrium. In general, a region of space containing a physical system at a given time, as it may be found in nature, is not in thermodynamic equilibrium, read in the most stringent terms. In looser terms, nothing in the entire universe is or has ever been truly in exact thermodynamic equilibrium.
In all cases, the assumption of thermodynamic equilibrium, once made, implies as a consequence that no putative candidate "fluctuation" alters the entropy of the system.
It can easily happen that a physical system exhibits internal macroscopic changes that are fast enough to invalidate the assumption of the constancy of the entropy, or that a physical system has so few particles that its particulate nature is manifest in observable fluctuations. Then the assumption of thermodynamic equilibrium is to be abandoned. There is no unqualified general definition of entropy for non-equilibrium states.
There are intermediate cases in which the assumption of local thermodynamic equilibrium is a very good approximation, but strictly speaking it is still an approximation, not theoretically ideal.
For non-equilibrium situations in general, it may be useful to consider statistical mechanical definitions of other quantities that may be conveniently called 'entropy', but they should not be confused or conflated with thermodynamic entropy properly defined for the second law. These other quantities indeed belong to statistical mechanics, not to thermodynamics, the primary realm of the second law.
The physics of macroscopically observable fluctuations is beyond the scope of this article.
The second law of thermodynamics is a physical law that is not symmetric under reversal of the time direction. This does not conflict with symmetries observed in the fundamental laws of physics (particularly CPT symmetry), since the second law applies statistically on time-asymmetric boundary conditions. The second law has been related to the difference between moving forwards and backwards in time, or to the principle that cause precedes effect (the causal arrow of time, or causality).
Irreversibility in thermodynamic processes is a consequence of the asymmetric character of thermodynamic operations, and not of any internally irreversible microscopic properties of the bodies. Thermodynamic operations are macroscopic external interventions imposed on the participating bodies, not derived from their internal properties. There are reputed "paradoxes" that arise from failure to recognize this.
Loschmidt's paradox, also known as the reversibility paradox, is the objection that it should not be possible to deduce an irreversible process from the time-symmetric dynamics that describe the microscopic evolution of a macroscopic system.
James Clerk Maxwell imagined one container divided into two parts, "A" and "B". Both parts are filled with the same gas at equal temperatures and placed next to each other, separated by a wall. Observing the molecules on both sides, an imaginary demon guards a microscopic trapdoor in the wall. When a faster-than-average molecule from "A" flies towards the trapdoor, the demon opens it, and the molecule flies from "A" to "B". The average speed of the molecules in "B" will have increased, while in "A" they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in "A" and increases in "B", contrary to the second law of thermodynamics.
One response to this question was suggested in 1929 by Leó Szilárd and later by Léon Brillouin. Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy.
Maxwell's 'demon' repeatedly alters the permeability of the wall between "A" and "B". It is therefore performing thermodynamic operations on a microscopic scale, not just observing ordinary spontaneous or natural macroscopic thermodynamic processes.
In particle physics, the Dirac equation is a relativistic wave equation derived by British physicist Paul Dirac in 1928. In its free form, or including electromagnetic interactions, it describes all spin-½ massive particles, such as electrons and quarks, for which parity is a symmetry. It is consistent with both the principles of quantum mechanics and the theory of special relativity, and was the first theory to account fully for special relativity in the context of quantum mechanics. It was validated by accounting for the fine details of the hydrogen spectrum in a completely rigorous way.
The equation also implied the existence of a new form of matter, "antimatter", previously unsuspected and unobserved, which was experimentally confirmed several years later. It also provided a "theoretical" justification for the introduction of several-component wave functions in Pauli's phenomenological theory of spin. The wave functions in the Dirac theory are vectors of four complex numbers (known as bispinors), two of which resemble the Pauli wave function in the non-relativistic limit, in contrast to the Schrödinger equation, which described wave functions of only one complex value. Moreover, in the limit of zero mass, the Dirac equation reduces to the Weyl equation.
Although Dirac did not at first fully appreciate the importance of his results, the entailed explanation of spin as a consequence of the union of quantum mechanics and relativity - and the eventual discovery of the positron - represents one of the great triumphs of theoretical physics. This accomplishment has been described as fully on a par with the works of Newton, Maxwell, and Einstein before him. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-½ particles.
The Dirac equation appears on the floor of Westminster Abbey on the plaque commemorating Paul Dirac's life, which was unveiled on 13 November 1995.
The Dirac equation in the form originally proposed by Dirac is:
where ψ = ψ("x", "t") is the wave function for the electron of rest mass "m" with spacetime coordinates "x", "t". The "p"1, "p"2, "p"3 are the components of the momentum, understood to be the momentum operator in the Schrödinger equation. Also, "c" is the speed of light, and "ħ" is the reduced Planck constant. These fundamental physical constants reflect special relativity and quantum mechanics, respectively.
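The lost display can be reconstructed; in modern notation, Dirac's original form reads (a standard reconstruction, not the article's own typesetting):

```latex
i\hbar \frac{\partial \psi}{\partial t}(\mathbf{x}, t)
  = \left( \beta m c^2 + c \sum_{k=1}^{3} \alpha_k p_k \right) \psi(\mathbf{x}, t),
\qquad p_k = -i\hbar \frac{\partial}{\partial x_k}
```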
Dirac's purpose in casting this equation was to explain the behavior of the relativistically moving electron, and so to allow the atom to be treated in a manner consistent with relativity. His rather modest hope was that the corrections introduced this way might have a bearing on the problem of atomic spectra.
Up until that time, attempts to make the old quantum theory of the atom compatible with the theory of relativity, attempts based on discretizing the angular momentum stored in the electron's possibly non-circular orbit of the atomic nucleus, had failed - and the new quantum mechanics of Heisenberg, Pauli, Jordan, Schrödinger, and Dirac himself had not developed sufficiently to treat this problem. Although Dirac's original intentions were satisfied, his equation had far deeper implications for the structure of matter and introduced new mathematical classes of objects that are now essential elements of fundamental physics.
The new elements in this equation are the four matrices α1, α2, α3 and β, and the four-component wave function ψ. There are four components in ψ because the evaluation of it at any given point in configuration space is a bispinor. It is interpreted as a superposition of a spin-up electron, a spin-down electron, a spin-up positron, and a spin-down positron (see below for further discussion).
The matrices α"k" and β are all Hermitian and are involutory:
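These properties can be verified directly in the standard (Dirac) representation, where the matrices are built from 2 × 2 Pauli blocks (a sketch; the representation choice is conventional, not forced by the equation):

```python
import numpy as np

# Pauli matrices and the 2x2 identity
I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
Z2 = np.zeros((2, 2))

# Dirac representation: alpha_k off-diagonal Pauli blocks, beta = diag(I, -I)
alphas = [np.block([[Z2, s], [s, Z2]]) for s in sigma]
beta = np.block([[I2, Z2], [Z2, -I2]])

I4 = np.eye(4)
for M in alphas + [beta]:
    assert np.allclose(M, M.conj().T)      # Hermitian
    assert np.allclose(M @ M, I4)          # involutory: M^2 = I

# Distinct matrices mutually anticommute: {M, N} = 0
mats = alphas + [beta]
for i in range(4):
    for j in range(i + 1, 4):
        assert np.allclose(mats[i] @ mats[j] + mats[j] @ mats[i], 0)
```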
These matrices, and the form of the wave function, have a deep mathematical significance. The algebraic structure represented by the gamma matrices had been created some 50 years earlier by the English mathematician W. K. Clifford. In turn, Clifford's ideas had emerged from the mid-19th-century work of the German mathematician Hermann Grassmann in his "Lineale Ausdehnungslehre" ("Theory of Linear Extensions"), a work regarded as well-nigh incomprehensible by most of his contemporaries. The appearance of something so seemingly abstract, at such a late date, and in such a direct physical manner, is one of the most remarkable chapters in the history of physics.
The single symbolic equation thus unravels into four coupled linear first-order partial differential equations for the four quantities that make up the wave function. The equation can be written more explicitly in Planck units as:
which makes it clearer that it is a set of four partial differential equations with four unknown functions.
The Dirac equation is superficially similar to the Schrödinger equation for a massive free particle:
The left side represents the square of the momentum operator divided by twice the mass, which is the non-relativistic kinetic energy. Because relativity treats space and time as a whole, a relativistic generalization of this equation requires that space and time derivatives enter symmetrically, as they do in the Maxwell equations that govern the behavior of light: the equations must be differentially of the "same order" in space and time. In relativity, the momentum and the energy are the space and time parts of a spacetime vector, the four-momentum, and they are related by the relativistically invariant relation
which says that the length of this four-vector is proportional to the rest mass "m". Substituting the operator equivalents of the energy and momentum from the Schrödinger theory, we get the Klein–Gordon equation describing the propagation of waves, constructed from relativistically invariant objects,
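Carrying out that substitution (E → iħ∂/∂t, p → −iħ∇) in the invariant relation E² = (pc)² + (mc²)² gives, in a standard reconstruction of the lost display:

```latex
\left( \frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}} - \nabla^{2}
     + \frac{m^{2} c^{2}}{\hbar^{2}} \right) \psi(\mathbf{x}, t) = 0
```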
with the wave function ψ being a relativistic scalar: a complex number which has the same numerical value in all frames of reference. Space and time derivatives both enter to second order. This has a telling consequence for the interpretation of the equation. Because the equation is second order in the time derivative, one must specify initial values both of the wave function itself and of its first time derivative in order to solve definite problems. Since both may be specified more or less arbitrarily, the wave function cannot maintain its former role of determining the probability density of finding the electron in a given state of motion. In the Schrödinger theory, the probability density is given by the positive definite expression
and this density is convected according to the probability current vector
with the conservation of probability current and density following from the continuity equation:
The fact that the density is positive definite and convected according to this continuity equation implies that we may integrate the density over a certain domain and set the total to 1, and this condition will be maintained by the conservation law. A proper relativistic theory with a probability density current must also share this feature. Now, if we wish to maintain the notion of a convected density, then we must generalize the Schrödinger expression of the density and current so that space and time derivatives again enter symmetrically in relation to the scalar wave function. We are allowed to keep the Schrödinger expression for the current, but must replace the probability density by the symmetrically formed expression
which now becomes the fourth component of a spacetime vector, and the entire probability 4-current density has the relativistically covariant expression
The continuity equation is as before. Everything is compatible with relativity now, but we see immediately that the expression for the density is no longer positive definite: the initial values of both ψ and ∂ψ/∂"t" may be freely chosen, and the density may thus become negative, something that is impossible for a legitimate probability density. Thus, we cannot get a simple generalization of the Schrödinger equation under the naive assumption that the wave function is a relativistic scalar and the equation it satisfies second order in time.
Although it is not a successful relativistic generalization of the Schrödinger equation, this equation is resurrected in the context of quantum field theory, where it is known as the Klein–Gordon equation, and describes a spinless particle field (e.g. the pi meson or the Higgs boson). Historically, Schrödinger himself arrived at this equation before the one that bears his name, but soon discarded it. In the context of quantum field theory, the indefinite density is understood to correspond to the "charge" density, which can be positive or negative, and not the probability density.
Dirac thus thought to try an equation that was "first order" in both space and time. One could, for example, formally (i.e. by abuse of notation) take the relativistic expression for the energy
replace "p" by its operator equivalent, expand the square root in an infinite series of derivative operators, set up an eigenvalue problem, then solve the equation formally by iterations. Most physicists had little faith in such a process, even if it were technically possible.
As the story goes, Dirac was staring into the fireplace at Cambridge, pondering this problem, when he hit upon the idea of taking the square root of the wave operator thus:
On multiplying out the right side we see that, in order to get all the cross-terms to vanish, we must assume
Dirac, who had just then been intensely involved with working out the foundations of Heisenberg's matrix mechanics, immediately understood that these conditions could be met if the coefficients are "matrices", with the implication that the wave function has "multiple components". This immediately explained the appearance of two-component wave functions in Pauli's phenomenological theory of spin, something that up until then had been regarded as mysterious, even by Pauli himself. However, one needs at least 4 × 4 matrices to set up a system with the properties required - so the wave function had "four" components, not two, as in the Pauli theory, or one, as in the bare Schrödinger theory. The four-component wave function represents a new class of mathematical object in physical theories that makes its first appearance here.
Given the factorization in terms of these matrices, one can now write down immediately an equation
with formula_19 to be determined. Applying the matrix operator again on both sides yields
On taking formula_21 we find that all the components of the wave function "individually" satisfy the relativistic energy–momentum relation. Thus the sought-for equation that is first order in both space and time is
and we get the Dirac equation as written above.
To demonstrate the relativistic invariance of the equation, it is advantageous to cast it into a form in which the space and time derivatives appear on an equal footing. New matrices are introduced as follows:
and the equation takes the form (remembering the definition of the covariant components of the 4-gradient, and especially that ∂0 = (1/"c") ∂/∂"t")
where there is an implied summation over the values of the twice-repeated index μ, and ∂μ is the 4-gradient. In practice one often writes the gamma matrices in terms of 2 × 2 sub-matrices taken from the Pauli matrices and the 2 × 2 identity matrix. Explicitly, the standard representation is
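The standard representation and its Clifford relations can be checked numerically: γ⁰ = β and γ^k = βα_k give block matrices of Pauli sub-matrices (a sketch using the signature (+, −, −, −)):

```python
import numpy as np

Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Standard (Dirac) representation: gamma^0 = diag(I, -I), gamma^k off-diagonal
gamma = [np.block([[I2, Z], [Z, -I2]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sigma]

eta = np.diag([1, -1, -1, -1])    # Minkowski metric, signature (+, -, -, -)

# Defining Clifford relations: {gamma^mu, gamma^nu} = 2 eta^{mu nu} I
for mu in range(4):
    for nu in range(4):
        anticomm = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anticomm, 2 * eta[mu, nu] * np.eye(4))
```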
The complete system is summarized using the Minkowski metric on spacetime in the form
where the braces denote the anticommutator. These are the defining relations of a Clifford algebra over a pseudo-orthogonal 4-dimensional space with metric signature (+, −, −, −). The specific Clifford algebra employed in the Dirac equation is known today as the Dirac algebra. Although not recognized as such by Dirac at the time the equation was formulated, in hindsight the introduction of this "geometric algebra" represents an enormous stride forward in the development of quantum theory.
The Dirac equation may now be interpreted as an eigenvalue equation, where the rest mass is proportional to an eigenvalue of the 4-momentum operator, the proportionality constant being the speed of light:
Using formula_31 (formula_32 is pronounced "d-slash"), according to Feynman slash notation, the Dirac equation becomes:
In practice, physicists often use units of measure such that "ħ" = "c" = 1, known as natural units. The equation then takes the simple form
A fundamental theorem states that if two distinct sets of matrices are given that both satisfy the Clifford relations, then they are connected to each other by a similarity transformation:
If in addition the matrices are all unitary, as is the Dirac set, then the transformation itself is unitary;
The transformation is unique up to a multiplicative factor of absolute value 1. Let us now imagine a Lorentz transformation to have been performed on the space and time coordinates, and on the derivative operators, which form a covariant vector. For the operator to remain invariant, the gammas must transform among themselves as a contravariant vector with respect to their spacetime index. These new gammas will themselves satisfy the Clifford relations, because of the orthogonality of the Lorentz transformation. By the fundamental theorem, we may replace the new set by the old set subject to a unitary transformation. In the new frame, remembering that the rest mass is a relativistic scalar, the Dirac equation will then take the form
If we now define the transformed spinor
then we have the transformed Dirac equation in a way that demonstrates manifest relativistic invariance:
Thus, once we settle on any unitary representation of the gammas, it is final, provided we transform the spinor according to the unitary transformation that corresponds to the given Lorentz transformation.
The various representations of the Dirac matrices employed will bring into focus particular aspects of the physical content of the Dirac wave function (see below). The representation shown here is known as the "standard" representation - in it, the wave function's upper two components go over into Pauli's 2-spinor wave function in the limit of low energies and small velocities in comparison to light.
The considerations above reveal the origin of the gammas in "geometry", hearkening back to Grassmann's original motivation: they represent a fixed basis of unit vectors in spacetime. Similarly, products of the gammas represent "oriented surface elements", and so on. With this in mind, we can find the form of the unit volume element on spacetime in terms of the gammas as follows. By definition, it is
For this to be an invariant, the epsilon symbol must be a tensor, and so must contain a factor of √"g", where "g" is the determinant of the metric tensor. Since this is negative, that factor is "imaginary". Thus
This matrix is given the special symbol γ5, owing to its importance when one is considering improper transformations of spacetime, that is, those that change the orientation of the basis vectors. In the standard representation, it is
This matrix will also be found to anticommute with the other four Dirac matrices:
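Both claims are easy to verify numerically in the standard representation, where γ⁵ = iγ⁰γ¹γ²γ³ works out to the off-diagonal block matrix (a sketch):

```python
import numpy as np

Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[I2, Z], [Z, -I2]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sigma]

# The unit volume element: gamma^5 = i gamma^0 gamma^1 gamma^2 gamma^3
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

# In the standard representation gamma^5 is the off-diagonal block matrix...
assert np.allclose(gamma5, np.block([[Z, I2], [I2, Z]]))
# ...and it anticommutes with all four Dirac matrices
for g in gamma:
    assert np.allclose(gamma5 @ g + g @ gamma5, 0)
```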