A spectrum (plural "spectra" or "spectrums") is a condition that is not limited to a specific set of values but can vary, without steps, across a continuum. The word was first used scientifically in optics to describe the rainbow of colors in visible light after passing through a prism. As scientific understanding of light advanced, it came to apply to the entire electromagnetic spectrum. It thereby became a mapping of a range of magnitudes (wavelengths) to a range of qualities, which are the perceived "colors of the rainbow" and other properties which correspond to wavelengths that lie outside of the visible light spectrum.
Spectrum has since been applied by analogy to topics outside optics. Thus, one might talk about the "spectrum of political opinion", or the "spectrum of activity" of a drug, or the "autism spectrum". In these uses, values within a spectrum may not be associated with precisely quantifiable numbers or definitions. Such uses imply a broad range of conditions or behaviors grouped together and studied under a single title for ease of discussion. Nonscientific uses of the term "spectrum" are sometimes misleading. For instance, a single left–right spectrum of political opinion does not capture the full range of people's political beliefs. Political scientists use a variety of biaxial and multiaxial systems to more accurately characterize political opinion.
In most modern usages of "spectrum" there is a unifying theme between the extremes at either end. This was not always true in older usage.
In Latin, "spectrum" means "image" or "apparition", including the meaning "spectre". Spectral evidence is testimony about what was done by spectres of persons not present physically, or hearsay evidence about what ghosts or apparitions of Satan said. It was used to convict a number of persons of witchcraft at Salem, Massachusetts in the late 17th century. The word "spectrum" [Spektrum] was strictly used to designate a ghostly optical afterimage by Goethe in his "Theory of Colors" and Schopenhauer in "On Vision and Colors".
The prefix "spectro-" is used to form words relating to spectra. For example, a spectrometer is a device used to record spectra and spectroscopy is the use of a spectrometer for chemical analysis.
In the 17th century, the word "spectrum" was introduced into optics by Isaac Newton, referring to the range of colors observed when white light was dispersed through a prism. Soon the term referred to a plot of light intensity or power as a function of frequency or wavelength, also known as a "spectral density plot".
The term "spectrum" was expanded to apply to other waves, such as sound waves, that could also be measured as a function of frequency, yielding the frequency spectrum and the power spectrum of a signal. The term now applies to any signal that can be measured or decomposed along a continuous variable, such as energy in electron spectroscopy or mass-to-charge ratio in mass spectrometry. "Spectrum" is also used to refer to a graphical representation of the signal as a function of the dependent variable.
Light from many different sources contains various colors, each with its own brightness or intensity. A rainbow, or prism, sends these component colors in different directions, making them individually visible at different angles. A graph of the intensity plotted against the frequency (showing the brightness of each color) is the frequency spectrum of the light. When all the visible frequencies are present equally, the perceived color of the light is white, and the spectrum is a flat line. Therefore, flat-line spectra in general are often referred to as "white", whether they represent light or another type of wave phenomenon (sound, for example, or vibration in a structure).
In astronomical spectroscopy, the strength, shape, and position of absorption and emission lines, as well as the overall spectral energy distribution of the continuum, reveal many properties of astronomical objects. Stellar classification is the categorisation of stars based on their characteristic electromagnetic spectra. The spectral flux density is used to represent the spectrum of a light-source, such as a star.
In radiometry and colorimetry (or color science more generally), the spectral power distribution (SPD) of a light source is a measure of the power contributed by each frequency or color in a light source. The light spectrum is usually measured at points (often 31) along the visible spectrum, in wavelength space instead of frequency space, which makes it not strictly a spectral density. Some spectrophotometers can measure increments as fine as one to two nanometers. The values are used to calculate other specifications and then plotted to show the spectral attributes of the source. This can be helpful in analyzing the color characteristics of a particular source.
A plot of ion abundance as a function of mass-to-charge ratio is called a mass spectrum. It can be produced by a mass spectrometer instrument. The mass spectrum can be used to determine the quantity and mass of atoms and molecules. Tandem mass spectrometry is used to determine molecular structure.
In physics, the energy spectrum of a particle is the number of particles or intensity of a particle beam as a function of particle energy. Examples of techniques that produce an energy spectrum are alpha-particle spectroscopy, electron energy loss spectroscopy, and mass-analyzed ion-kinetic-energy spectrometry.
In physics, particularly in quantum mechanics, some differential operators have discrete spectra, with gaps between values. Common cases include the Hamiltonian and the angular momentum operator.
In acoustics, a spectrogram is a visual representation of the frequency spectrum of sound as a function of time or another variable.
A source of sound can have many different frequencies mixed. A musical tone's timbre is characterized by its harmonic spectrum. Sound in our environment that we refer to as "noise" includes many different frequencies. When a sound signal contains a mixture of all audible frequencies, distributed equally over the audio spectrum, it is called white noise.
The spectrum analyzer is an instrument which can be used to convert the sound wave of the musical note into a visual display of the constituent frequencies. This visual display is referred to as an acoustic spectrogram. Software-based audio spectrum analyzers are available at low cost, providing easy access not only to industry professionals but also to academics, students, and hobbyists. The acoustic spectrogram generated by the spectrum analyzer provides an acoustic signature of the musical note. In addition to revealing the fundamental frequency and its overtones, the spectrogram is also useful for analysis of the temporal attack, decay, sustain, and release of the musical note.
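A short numerical sketch of the "flat spectrum" idea, assuming NumPy is available (the sample rate and band edges below are arbitrary choices): generated white noise should show roughly equal average magnitude in widely separated frequency bands.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000                          # sample rate in Hz (arbitrary choice)
x = rng.standard_normal(fs)        # one second of white noise

spectrum = np.abs(np.fft.rfft(x))  # magnitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

# Average magnitude in two widely separated bands:
low = spectrum[(freqs > 100) & (freqs < 1000)].mean()
high = spectrum[(freqs > 2000) & (freqs < 3000)].mean()
print(round(low / high, 2))        # close to 1: the spectrum is flat on average
```

A pure tone would instead concentrate its magnitude near a single frequency bin, which is exactly what a spectrum analyzer displays.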
Antibiotic spectrum of activity is a component of antibiotic classification. A broad-spectrum antibiotic is active against a wide range of bacteria, whereas a narrow-spectrum antibiotic is effective against specific families of bacteria. An example of a commonly used broad-spectrum antibiotic is ampicillin. An example of a narrow-spectrum antibiotic is dicloxacillin, which acts on beta-lactamase-producing Gram-positive bacteria such as "Staphylococcus aureus".
In psychiatry, the spectrum approach uses the term spectrum to describe a range of linked conditions, sometimes also extending to include singular symptoms and traits. For example, the autism spectrum describes a range of conditions classified as neurodevelopmental disorders.
In mathematics, the spectrum of a matrix is the multiset of the eigenvalues of the matrix.
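For example, using NumPy (the matrix below is an arbitrary illustration):

```python
import numpy as np

# The spectrum of a matrix is the multiset of its eigenvalues.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # upper triangular, so the eigenvalues sit on the diagonal
spectrum = np.linalg.eigvals(A)
print(np.sort(spectrum.real))     # the spectrum {2, 3}
```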
In functional analysis, the concept of the spectrum of a bounded operator is a generalization of the eigenvalue concept for matrices.
In algebraic topology, a spectrum is an object representing a generalized cohomology theory.
In social science, economic spectrum is used to indicate the range of social class along some indicator of wealth or income. In political science, the term political spectrum refers to a system of classifying political positions in one or more dimensions, for example in a range including right wing and left wing.
In dynamical system theory, a phase space is a space in which all possible states of a system are represented, with each possible state corresponding to one unique point in the phase space. For mechanical systems, the phase space usually consists of all possible values of position and momentum variables. The concept of phase space was developed in the late 19th century by Ludwig Boltzmann, Henri Poincaré, and Josiah Willard Gibbs.
In classical mechanics, any choice of generalized coordinates q_i for the position (i.e. coordinates on configuration space) defines conjugate generalized momenta p_i, which together define coordinates on phase space. More abstractly, in classical mechanics phase space is the cotangent bundle of configuration space, and in this interpretation the procedure above expresses that a choice of local coordinates on configuration space induces a choice of natural local Darboux coordinates for the standard symplectic structure on a cotangent space.
The motion of an ensemble of systems in this space is studied by classical statistical mechanics. The local density of points in such systems obeys Liouville's theorem, and so can be taken as constant. Within the context of a model system in classical mechanics, the phase space coordinates of the system at any given time are composed of all of the system's dynamic variables. Because of this, it is possible to calculate the state of the system at any given time in the future or the past, through integration of Hamilton's or Lagrange's equations of motion.
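As a minimal sketch of this determinism, the following snippet (unit mass and frequency are illustrative choices) integrates Hamilton's equations for a one-dimensional harmonic oscillator with a symplectic leapfrog step, then integrates backward in time and recovers the initial phase-space point:

```python
def hamilton_step(q, p, dt):
    """One leapfrog (symplectic) step for H = p^2/2 + q^2/2."""
    p -= 0.5 * dt * q      # dp/dt = -dH/dq = -q
    q += dt * p            # dq/dt =  dH/dp =  p
    p -= 0.5 * dt * q
    return q, p

q, p = 1.0, 0.0            # initial point in phase space
dt, n = 0.01, 1000
for _ in range(n):         # integrate forward to t = 10
    q, p = hamilton_step(q, p, dt)
for _ in range(n):         # integrate backward with -dt
    q, p = hamilton_step(q, p, -dt)
print(round(q, 6), round(p, 6))   # back at the starting point (1, 0)
```

The leapfrog step is used here because it is time-reversible, which makes the recovery of the initial state exact up to floating-point error.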
For simple systems, there may be as few as one or two degrees of freedom. One degree of freedom occurs when one has an autonomous ordinary differential equation in a single variable, dx/dt = f(x), with the resulting one-dimensional system being called a phase line, and the qualitative behaviour of the system being immediately visible from the phase line. The simplest non-trivial examples are the exponential growth model/decay (one unstable/stable equilibrium) and the logistic growth model (two equilibria, one stable, one unstable).
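The logistic model's phase-line behaviour can be sketched numerically (the growth rate r = 1 and the Euler step size are illustrative choices): trajectories starting on either side of the stable equilibrium x = 1 are driven toward it, while the equilibrium at x = 0 repels nearby states.

```python
# Phase-line sketch for the logistic growth model dx/dt = r*x*(1 - x).
def logistic(x, r=1.0):
    return r * x * (1.0 - x)

for x0 in (0.01, 1.5):             # start below and above the stable equilibrium
    x = x0
    for _ in range(2000):          # forward-Euler steps out to t = 20
        x += 0.01 * logistic(x)
    print(x0, "->", round(x, 3))   # both approach the stable equilibrium x = 1
```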
The phase space of a two-dimensional system is called a phase plane, which occurs in classical mechanics for a single particle moving in one dimension, and where the two variables are position and velocity. In this case, a sketch of the phase portrait may give qualitative information about the dynamics of the system, such as the limit cycle of the Van der Pol oscillator shown in the diagram.
Here, the horizontal axis gives the position and vertical axis the velocity. As the system evolves, its state follows one of the lines (trajectories) on the phase diagram.
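A numerical sketch of this attraction, assuming NumPy (the step size, integration time, and mu = 1 are illustrative choices): a trajectory started near the origin spirals out to the limit cycle, whose amplitude in x is about 2.

```python
import numpy as np

def van_der_pol(state, mu=1.0):
    """Van der Pol oscillator: x'' - mu*(1 - x^2)*x' + x = 0."""
    x, v = state
    return np.array([v, mu * (1.0 - x * x) * v - x])

def rk4_step(f, s, dt):
    """One classical Runge-Kutta step."""
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([0.1, 0.0])       # start near the unstable rest point
for _ in range(5000):          # let transients die out (t = 50)
    s = rk4_step(van_der_pol, s, 0.01)

amp = 0.0                      # peak |x| over one further stretch of the cycle
for _ in range(1000):
    s = rk4_step(van_der_pol, s, 0.01)
    amp = max(amp, abs(s[0]))
print(round(amp, 1))           # limit-cycle amplitude, near 2 for mu = 1
```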
Chaos theory provides classic examples of phase diagrams, the best known being the Lorenz attractor.
A plot of position and momentum variables as a function of time is sometimes called a phase plot or a phase diagram. However, the latter expression, "phase diagram", is more usually reserved in the physical sciences for a diagram showing the various regions of stability of the thermodynamic phases of a chemical system as a function of pressure, temperature, and composition.
In quantum mechanics, the coordinates "p" and "q" of phase space normally become Hermitian operators in a Hilbert space.
But they may alternatively retain their classical interpretation, provided functions of them compose in novel algebraic ways (through Groenewold's 1946 star product). This is consistent with the uncertainty principle of quantum mechanics.
Every quantum mechanical observable corresponds to a unique function or distribution on phase space, and vice versa, as specified by Hermann Weyl (1927) and supplemented by John von Neumann (1931); Eugene Wigner (1932); and, in a grand synthesis, by H J Groenewold (1946).
With J E Moyal (1949), these completed the foundations of the phase space formulation of quantum mechanics, a complete and logically autonomous reformulation of quantum mechanics. (Its modern abstractions include deformation quantization and geometric quantization.)
Expectation values in phase-space quantization are obtained isomorphically to tracing operator observables with the density matrix in Hilbert space: they are obtained by phase-space integrals of observables, with the Wigner quasi-probability distribution effectively serving as a measure.
Thus, by expressing quantum mechanics in phase space (the same ambit as for classical mechanics), the Weyl map facilitates recognition of quantum mechanics as a deformation (generalization) of classical mechanics, with deformation parameter "ħ/S", where "S" is the action of the relevant process. (Other familiar deformations in physics involve the deformation of classical Newtonian into relativistic mechanics, with deformation parameter "v"/"c"; or the deformation of Newtonian gravity into General Relativity, with deformation parameter Schwarzschild radius/characteristic-dimension.)
Classical expressions, observables, and operations (such as Poisson brackets) are modified by ħ-dependent quantum corrections, as the conventional commutative multiplication applying in classical mechanics is generalized to the noncommutative star-multiplication characterizing quantum mechanics and underlying its uncertainty principle.
The phase space can also refer to the space that is parameterized by the "macroscopic" states of the system, such as pressure, temperature, etc. For instance, one may view the pressure-volume diagram or entropy-temperature diagrams as describing part of this phase space. A point in this phase space is correspondingly called a macrostate. There may easily be more than one microstate with the same macrostate. For example, for a fixed temperature, the system could have many dynamic configurations at the microscopic level. When used in this sense, a phase is a region of phase space in which the system in question lies, for example the liquid phase or the solid phase.
Since there are many more microstates than macrostates, the phase space in the first sense is usually a manifold of much larger dimensions than in the second sense. Clearly, many more parameters are required to register every detail of the system down to the molecular or atomic scale than to simply specify, say, the temperature or the pressure of the system.
Phase space is extensively used in nonimaging optics, the branch of optics devoted to illumination. It is also an important concept in Hamiltonian optics.
In classical statistical mechanics (continuous energies) the concept of phase space provides a classical analog to the partition function (sum over states) known as the phase integral. Instead of summing the Boltzmann factor over discretely spaced energy states (defined by
appropriate integer quantum numbers for each degree of freedom) one may integrate over continuous phase space. Such integration essentially consists of two parts: integration of the momentum component of all degrees of freedom (momentum space) and integration of the position component of all degrees of freedom (configuration space). Once the phase integral is known, it may be related to the classical partition function by multiplication of a normalization constant representing the number of quantum energy states per unit phase space. This normalization constant is simply the inverse of Planck's constant raised to a power equal to the number of degrees of freedom for the system.
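A minimal illustration for a one-dimensional harmonic oscillator, in units where ħ, the angular frequency, and Boltzmann's constant are all 1 (an illustrative choice): the phase integral, normalized by Planck's constant, reproduces the quantum partition function at high temperature.

```python
import math

def z_classical(T):
    """Phase integral for H = p^2/2 + q^2/2, divided by Planck's constant.
    The two Gaussian integrals give (2*pi/beta) / (2*pi*hbar) = T in these units."""
    return T

def z_quantum(T, nmax=10_000):
    """Sum of Boltzmann factors over discrete oscillator levels E_n = n + 1/2."""
    beta = 1.0 / T
    return sum(math.exp(-beta * (n + 0.5)) for n in range(nmax))

for T in (0.5, 5.0, 50.0):
    print(T, round(z_classical(T), 3), round(z_quantum(T), 3))
```

At low temperature the two disagree, because the discreteness of the energy levels matters there; the normalization constant 1/h per degree of freedom is what makes the high-temperature agreement quantitative.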
The macroscopic scale is the length scale on which objects or phenomena are large enough to be visible with the naked eye, without magnifying optical instruments. It is the opposite of microscopic.
When applied to physical phenomena and bodies, the macroscopic scale describes things as a person can directly perceive them, without the aid of magnifying devices. This is in contrast to observations (microscopy) or theories (microphysics, statistical physics) of objects of geometric lengths smaller than perhaps some hundreds of micrometers.
A macroscopic view of a ball is just that: a ball. A microscopic view could reveal a thick round skin seemingly composed entirely of puckered cracks and fissures (as viewed through a microscope) or, further down in scale, a collection of molecules in a roughly spherical shape (as viewed through an electron microscope). An example of a physical theory that takes a deliberately macroscopic viewpoint is thermodynamics. An example of a topic that extends from macroscopic to microscopic viewpoints is histology.
In the Quantum Measurement Problem the issue of what constitutes the macroscopic world and what constitutes the quantum world is unresolved and possibly unsolvable. The related Correspondence Principle can be articulated thus: every macroscopic phenomenon can be formulated as a problem in quantum theory. A violation of the Correspondence Principle would thus ensure an empirical distinction between the macroscopic and the quantum.
In pathology, macroscopic diagnostics generally involves gross pathology, in contrast to microscopic histopathology.
The term "megascopic" is a synonym. "Macroscopic" may also refer to a "larger view", namely a view available only from a large perspective (a hypothetical "macroscope"). A macroscopic position could be considered the "big picture".
Finally, when reaching the quantum particle level, the high energy domain is revealed. The proton has a mass-energy of ~938 MeV; some other massive quantum particles, both elementary and hadronic, have yet higher mass-energies. Quantum particles with lower mass-energies are also part of high energy physics; they also have a mass-energy that is far higher than that at the macroscopic scale (such as electrons), or are equally involved in reactions at the particle level (such as neutrinos). Relativistic effects, as in particle accelerators and cosmic rays, can further increase the accelerated particles' energy by many orders of magnitude, as well as the total energy of the particles emanating from their collision and annihilation.
The term zero-point energy (ZPE) is a translation from the German Nullpunktsenergie.
Sometimes used interchangeably with it are the terms zero-point radiation and ground state energy. The term zero-point field (ZPF) can be used when referring to a specific vacuum field, for instance the QED vacuum which specifically deals with quantum electrodynamics (e.g., electromagnetic interactions between photons, electrons and the vacuum) or the QCD vacuum which deals with quantum chromodynamics (e.g., color charge interactions between quarks, gluons and the vacuum). A vacuum can be viewed not as empty space but as the combination of all zero-point fields. In quantum field theory this combination of fields is called the vacuum state, its associated zero-point energy is called the vacuum energy and the average energy value is called the vacuum expectation value (VEV) also called its condensate.
In classical mechanics all particles can be thought of as having some energy made up of their potential energy and kinetic energy. Temperature, for example, arises from the intensity of random particle motion caused by kinetic energy (known as Brownian motion). As temperature is reduced to absolute zero, it might be thought that all motion ceases and particles come completely to rest. In fact, however, kinetic energy is retained by particles even at the lowest possible temperature. The random motion corresponding to this zero-point energy never vanishes as a consequence of the uncertainty principle of quantum mechanics.
The uncertainty principle states that no object can ever have precise values of position and velocity simultaneously. The total energy of a quantum mechanical object (potential and kinetic) is described by its Hamiltonian, which also describes the system as a harmonic oscillator, or wave function, that fluctuates between various energy states (see wave-particle duality). All quantum mechanical systems undergo fluctuations even in their ground state, a consequence of their wave-like nature. The uncertainty principle requires every quantum mechanical system to have a fluctuating zero-point energy greater than the minimum of its classical potential well. This results in motion even at absolute zero. For example, liquid helium does not freeze under atmospheric pressure, regardless of temperature, due to its zero-point energy.
Late in the 19th century, however, it became apparent that the evacuated region still contained thermal radiation. The existence of the aether as a substitute for a true void was the most prevalent theory of the time. According to the successful electromagnetic aether theory based upon Maxwell's electrodynamics, this all-encompassing aether was endowed with energy and hence very different from nothingness. The fact that electromagnetic and gravitational phenomena were easily transmitted in empty space indicated that their associated aethers were part of the fabric of space itself. Maxwell himself emphasized this point.
However, the results of the Michelson–Morley experiment in 1887 were the first strong evidence that the then-prevalent aether theories were seriously flawed, and initiated a line of research that eventually led to special relativity, which ruled out the idea of a stationary aether altogether. To scientists of the period, it seemed that a true vacuum in space might be created by cooling, thus eliminating all radiation or energy. From this idea evolved the second concept of achieving a real vacuum: cool a region down to absolute zero temperature after evacuation. Absolute zero was technically impossible to achieve in the 19th century, so the debate remained unresolved.
In 1900, Max Planck derived the average energy ε of a single "energy radiator", e.g., a vibrating atomic unit, as a function of absolute temperature:

ε = hν / (e^(hν/kT) − 1),

where h is Planck's constant, ν is the frequency, k is Boltzmann's constant, and T is the absolute temperature. The zero-point energy makes no contribution to Planck's original law, as its existence was unknown to Planck in 1900.
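The limiting behaviour can be checked numerically (units with h = k = 1 are an illustrative choice): Planck's 1900 expression vanishes as T → 0, while adding the later zero-point term hν/2 leaves a residual energy.

```python
import math

def planck_mean_energy(nu, T):
    """Planck's 1900 average oscillator energy, in units with h = k = 1."""
    return nu / (math.exp(nu / T) - 1.0)

def planck_with_zpe(nu, T):
    """The same law with the temperature-independent zero-point term h*nu/2 added."""
    return planck_mean_energy(nu, T) + 0.5 * nu

print(round(planck_mean_energy(1.0, 0.01), 6))  # essentially 0: no energy as T -> 0
print(round(planck_with_zpe(1.0, 0.01), 6))     # 0.5: the zero-point energy remains
```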
The concept of zero-point energy was developed by Max Planck in Germany in 1911 as a corrective term added to a zero-grounded formula developed in his original quantum theory in 1900.
Contemporary physicists, when asked to give a physical explanation for spontaneous emission, generally invoke the zero-point energy of the electromagnetic field. This view was popularized by Victor Weisskopf who in 1935 wrote:
This view was also later supported by Theodore Welton (1948), who argued that spontaneous emission "can be thought of as forced emission taking place under the action of the fluctuating field." This new theory, which Dirac coined quantum electrodynamics (QED), predicted a fluctuating zero-point or "vacuum" field existing even in the absence of sources.
In 1951 Herbert Callen and Theodore Welton proved the quantum fluctuation-dissipation theorem (FDT), which was originally formulated in classical form by Nyquist (1928) as an explanation for observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of the FDT is that the vacuum could be treated as a heat bath coupled to a dissipative force, and as such energy could, in part, be extracted from the vacuum for potentially useful work. The FDT has been shown to be true experimentally under certain quantum, non-classical conditions.
Zero-point energy is fundamentally related to the Heisenberg uncertainty principle. Roughly speaking, the uncertainty principle states that complementary variables (such as a particle's position and momentum, or a field's value and derivative at a point in space) cannot simultaneously be specified precisely by any given quantum state. In particular, there cannot exist a state in which the system simply sits motionless at the bottom of its potential well: for, then, its position and momentum would both be completely determined to arbitrarily great precision. Therefore, instead, the lowest-energy state (the ground state) of the system must have a distribution in position and momentum that satisfies the uncertainty principle, which implies its energy must be greater than the minimum of the potential well.
Near the bottom of a potential well, the Hamiltonian of a general system (the quantum-mechanical operator giving its energy) can be approximated as a quantum harmonic oscillator,

H = V(x_0) + (1/2) m ω² (x − x_0)² + p²/(2m),

where V(x_0) is the minimum of the classical potential well. The uncertainty principle tells us that

√⟨(x − x_0)²⟩ · √⟨p²⟩ ≥ ħ/2,

making the expectation values of the kinetic and potential terms above satisfy

⟨p²/(2m)⟩ · ⟨(1/2) m ω² (x − x_0)²⟩ ≥ (ħω/4)².

The expectation value of the energy must therefore be at least

⟨H⟩ ≥ V(x_0) + ħω/2,

where ω is the angular frequency at which the system oscillates. A more thorough treatment, showing that the energy of the ground state actually saturates this bound and is exactly E_0 = V(x_0) + ħω/2, requires solving for the ground state of the system.
The idea of a quantum harmonic oscillator and its associated energy can apply to either an atom or subatomic particle. In ordinary atomic physics, the zero-point energy is the energy associated with the ground state of the system. The professional physics literature tends to measure frequency, as denoted by ν above, using angular frequency, denoted by ω and defined by ω = 2πν. This leads to a convention of writing Planck's constant with a bar through its top (ħ) to denote the quantity h/2π. In these terms, the most famous such example of zero-point energy is the E = ħω/2 above, associated with the ground state of the quantum harmonic oscillator. In quantum mechanical terms, the zero-point energy is the expectation value of the Hamiltonian of the system in the ground state.
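This can be checked numerically; the sketch below (assuming NumPy, with ħ = m = ω = 1 as an illustrative choice) diagonalizes a finite-difference Hamiltonian for the harmonic well V(x) = x²/2 and finds the lowest eigenvalue close to the zero-point energy ħω/2:

```python
import numpy as np

n, span = 1000, 20.0
x = np.linspace(-span / 2, span / 2, n)   # position grid
dx = x[1] - x[0]

# Kinetic energy -(1/2) d^2/dx^2 via the three-point stencil, plus the potential:
main = np.full(n, 1.0 / dx**2) + 0.5 * x**2
off = np.full(n - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E0 = np.linalg.eigvalsh(H)[0]
print(round(E0, 3))   # ~0.5, i.e. hbar*omega/2 above the well minimum
```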
If more than one ground state exists, they are said to be degenerate. Many systems have degenerate ground states. Degeneracy occurs whenever there exists a unitary operator which acts non-trivially on a ground state and commutes with the Hamiltonian of the system.
According to the third law of thermodynamics, a system at absolute zero temperature exists in its ground state; thus, its entropy is determined by the degeneracy of the ground state. Many systems, such as a perfect crystal lattice, have a unique ground state and therefore have zero entropy at absolute zero. It is also possible for the highest excited state to have absolute zero temperature for systems that exhibit negative temperature.
The wave function of the ground state of a particle in a one-dimensional well is a half-period sine wave which goes to zero at the two edges of the well. The energy of the particle is given by:

E_n = n²h² / (8mL²),

where h is the Planck constant, m is the mass of the particle, n is the energy state (n = 1 corresponds to the ground-state energy), and L is the width of the well.
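Plugging in numbers for a concrete (illustrative) case, an electron confined to a 1 nm well:

```python
# Energy levels of a particle in a one-dimensional box: E_n = n^2 h^2 / (8 m L^2).
h = 6.626e-34          # Planck constant, J*s
m = 9.109e-31          # electron mass, kg
L = 1e-9               # well width, m (illustrative choice)

def box_energy(n):
    return n**2 * h**2 / (8 * m * L**2)

# Ground-state (n = 1) zero-point energy in electronvolts:
print(round(box_energy(1) / 1.602e-19, 3))   # ~0.38 eV
```

Even in its lowest state the confined electron retains this energy; narrowing the well (smaller L) raises it, in keeping with the uncertainty principle.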
In quantum field theory (QFT), the fabric of "empty" space is visualized as consisting of fields, with the field at every point in space and time being a quantum harmonic oscillator, with neighboring oscillators interacting with each other. According to QFT the universe is made up of matter fields whose quanta are fermions (e.g. electrons and quarks), force fields whose quanta are bosons (e.g. photons and gluons) and a Higgs field whose quantum is the Higgs boson. The matter and force fields have zero-point energy. A related term is "zero-point field" (ZPF), which is the lowest energy state of a particular field. The vacuum can be viewed not as empty space, but as the combination of all zero-point fields.
In QFT the zero-point energy of the vacuum state is called the vacuum energy and the average expectation value of the Hamiltonian is called the vacuum expectation value (also called condensate or simply VEV). The QED vacuum is a part of the vacuum state which specifically deals with quantum electrodynamics (e.g. electromagnetic interactions between photons, electrons and the vacuum) and the QCD vacuum deals with quantum chromodynamics (e.g. color charge interactions between quarks, gluons and the vacuum). Recent experiments advocate the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum, and that all properties of matter are merely vacuum fluctuations arising from interactions with the zero-point field.
Each point in space makes a contribution of E = ħω/2, resulting in a calculation of infinite zero-point energy in any finite volume; this is one reason renormalization is needed to make sense of quantum field theories. In cosmology, the vacuum energy is one possible explanation for the cosmological constant and the source of dark energy.
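The divergence can be illustrated with a crude mode-counting sum (units with ħ = c = 1; the cutoff values are arbitrary, and the mode density ω²/π² per unit volume is the standard free-space count including both polarizations): the integrated zero-point energy density grows as the fourth power of the frequency cutoff.

```python
import math

def zpe_density(w_max, steps=100_000):
    """Riemann sum of (hbar*w/2) * (mode density w^2/pi^2) up to the cutoff w_max."""
    dw = w_max / steps
    return sum(0.5 * (i * dw) ** 3 / math.pi**2 * dw for i in range(1, steps + 1))

r1 = zpe_density(1.0)
r2 = zpe_density(2.0)
print(round(r2 / r1, 1))   # ~16 = 2**4: the density scales as the cutoff to the fourth power
```

Without a cutoff the sum grows without bound, which is the formal statement that the naive zero-point energy of any finite volume is infinite.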
Scientists are not in agreement about how much energy is contained in the vacuum. Quantum mechanics requires the energy to be large, as Paul Dirac claimed it is, like a sea of energy. Other scientists specializing in General Relativity require the energy to be small enough for the curvature of space to agree with observed astronomy. The Heisenberg uncertainty principle allows the energy to be as large as needed to promote quantum actions for a brief moment of time, even if the average energy is small enough to satisfy relativity and flat space. To cope with disagreements, the vacuum energy is described as a virtual energy potential of positive and negative energy.
In quantum perturbation theory, it is sometimes said that the contribution of one-loop and multi-loop Feynman diagrams to elementary particle propagators are the contribution of vacuum fluctuations, or the zero-point energy to the particle masses.
The oldest and best known quantized force field is the electromagnetic field. Maxwell's equations have been superseded by quantum electrodynamics (QED). By considering the zero-point energy that arises from QED it is possible to gain a characteristic understanding of zero-point energy that arises not just through electromagnetic interactions but in all quantum field theories.
In the quantum theory of the electromagnetic field, classical wave amplitudes α and α* are replaced by operators a and a† that satisfy:

[a, a†] = 1.
The classical quantity |α|² appearing in the classical expression for the energy of a field mode is replaced in quantum theory by the photon number operator a†a. The fact that:

[a†a, a] = −a

implies that quantum theory does not allow states of the radiation field for which the photon number and a field amplitude can be precisely defined, i.e., we cannot have simultaneous eigenstates for a†a and a. The reconciliation of wave and particle attributes of the field is accomplished via the association of a probability amplitude with a classical mode pattern. The calculation of field modes is an entirely classical problem, while the quantum properties of the field are carried by the mode "amplitudes" a and a† associated with these classical modes.
The zero-point energy of the field arises formally from the non-commutativity of a and a†. This is true for any harmonic oscillator: the zero-point energy ħω/2 appears when we write the Hamiltonian:

H = (ħω/2)(aa† + a†a) = ħω(a†a + 1/2).
It is often argued that the entire universe is completely bathed in the zero-point electromagnetic field, and as such it can add only some constant amount to expectation values. Physical measurements will therefore reveal only deviations from the vacuum state. Thus the zero-point energy can be dropped from the Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on the Heisenberg equations of motion. Thus we can choose to declare by fiat that the ground state has zero energy, and a field Hamiltonian, for example, can be replaced by:

H_F = ħω a†a

without affecting any physical predictions of the theory. The new Hamiltonian is said to be normally ordered (or Wick ordered) and is denoted by a double-dot symbol. The normally ordered Hamiltonian is denoted :H_F:, i.e.:

:H_F: = ħω a†a.

In other words, within the normal-ordering symbol we can commute a and a†. Since zero-point energy is intimately connected to the non-commutativity of a and a†, the normal ordering procedure eliminates any contribution from the zero-point field. This is especially reasonable in the case of the field Hamiltonian, since the zero-point term merely adds a constant energy which can be eliminated by a simple redefinition for the zero of energy. Moreover, this constant energy in the Hamiltonian obviously commutes with a and a† and so cannot have any effect on the quantum dynamics described by the Heisenberg equations of motion.
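These operator relations can be checked directly with truncated matrices, assuming NumPy (the truncation size N is an arbitrary choice; in a finite truncation the commutation relation holds everywhere except at the truncation edge):

```python
import numpy as np

# Truncated N-level matrix representations of the ladder operators a and a†.
N = 20
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)   # annihilation operator
ad = a.T                                      # creation operator (real matrices)

# [a, a†] = 1 holds away from the truncation edge:
comm = a @ ad - ad @ a
print(np.allclose(comm[:N - 1, :N - 1], np.eye(N - 1)))   # True

H = 0.5 * (a @ ad + ad @ a)   # field-mode Hamiltonian in units with hbar*omega = 1
H_normal = ad @ a             # normally ordered :H:, zero-point term dropped
print(H[0, 0], H_normal[0, 0])   # 0.5 versus 0.0: the constant hbar*omega/2 offset
```

The two Hamiltonians differ by exactly the constant ħω/2 on every state, which is the term the normal-ordering prescription removes.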
However, things are not quite that simple. The zero-point energy cannot be eliminated by dropping its energy from the Hamiltonian: when we do this and solve the Heisenberg equation for a field operator, we must include the vacuum field, which is the homogeneous part of the solution for the field operator. In fact we can show that the vacuum field is essential for the preservation of the commutators and the formal consistency of QED. When we calculate the field energy we obtain not only a contribution from particles and forces that may be present but also a contribution from the vacuum field itself, i.e. the zero-point field energy. In other words, the zero-point energy reappears even though we may have deleted it from the Hamiltonian.
From Maxwell's equations, the electromagnetic energy of a "free" field, i.e. one with no sources, is described by:

H_F = (1/8π) ∫ (E² + B²) d³r.

We introduce the "mode function" A_0(r) that satisfies the Helmholtz equation:

∇²A_0 + k²A_0 = 0,

where k = ω/c, and assume it is normalized such that:

∫ |A_0(r)|² d³r = 1.
We wish to "quantize" the electromagnetic energy of free space for a multimode field. The field intensity of free space should be independent of position, such that |A_0(r)|² should be independent of r for each mode of the field. The mode function satisfying these conditions is:

A_0(r) = ê exp(ik·r),

where k·ê = 0 in order to have the transversality condition ∇·A = 0 satisfied for the Coulomb gauge in which we are working.
To achieve the desired normalization we pretend space is divided into cubes of volume V = L³ and impose on the field the periodic boundary condition:

exp(ik_x(x + L)) = exp(ik_x x),

and similarly for the y and z directions, so that k = (2π/L)(n_x, n_y, n_z), where each n can assume any integer value. This allows us to consider the field in any one of the imaginary cubes and to define the mode function:

A_k(r) = (1/√V) ê_k exp(ik·r),
which satisfies the Helmholtz equation, transversality, and the "box normalization":

∫_V |A_k(r)|² d³r = 1,

where ê_k is chosen to be a unit vector which specifies the polarization of the field mode. The condition k·ê_k = 0 means that there are two independent choices of ê_k, which we call ê_{k1} and ê_{k2}, where ê_{k1}·ê_{k2} = 0 and ê_{k1}² = ê_{k2}² = 1. Thus we define the mode functions:

A_{kλ}(r) = (1/√V) ê_{kλ} exp(ik·r),   λ = 1, 2,
in terms of which the vector potential becomes:

A_{kλ}(r, t) = (2πħc²/ω_k V)^(1/2) [a_{kλ}(t) exp(ik·r) + a†_{kλ}(t) exp(−ik·r)] ê_{kλ},

where ω_k = kc and a_{kλ}, a†_{kλ} are photon annihilation and creation operators for the mode with wave vector k and polarization λ. This gives the vector potential for a plane wave mode of the field. The condition ω_k = kc for all k shows that there are infinitely many such modes. The linearity of Maxwell's equations allows us to write:

A(r, t) = Σ_{kλ} (2πħc²/ω_k V)^(1/2) [a_{kλ}(t) exp(ik·r) + a†_{kλ}(t) exp(−ik·r)] ê_{kλ}

for the total vector potential in free space. Using the fact that: