Observation. On 11 February 2016, the LIGO Scientific Collaboration and the Virgo collaboration announced the first direct detection of gravitational waves, representing the first observation of a black hole merger. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. Gaia mission observations have found evidence of a Sun-like star orbiting a black hole, named Gaia BH1, about 1,560 light-years away; evidence suggests a red giant star orbits Gaia BH2. Though only a couple dozen black holes have been found so far in the Milky Way, there are thought to be hundreds of millions, most of which are solitary and do not emit radiation; they would therefore be detectable only by gravitational lensing. Etymology. Science writer Marcia Bartusiak traces the term "black hole" to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive.
The term "black hole" was used in print by "Life" and "Science News" magazines in 1963, and by science journalist Ann Ewing in her article Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. In December 1967, a student reportedly suggested the phrase "black hole" at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and it quickly caught on, leading some to credit Wheeler with coining the phrase. Properties and structure. The escape velocity from a black hole exceeds the speed of light. The formula for escape velocity is formula_1 for an object at radius from a spherical mass , with being the gravitational constant. When the velocity is the speed of light, , the radius, formula_2 is called the Schwarzschild radius. A technical definition of a black hole is any object whose mass is contained in a radius is smaller than its Schwarzschild radius, a limit derived from one solution to the equations of general relativity.
The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes under the laws of modern physics is currently an unsolved problem. These properties are special because they are visible from outside a black hole. For example, a charged black hole repels other like charges just like any other charged object. Similarly, the total mass inside a sphere containing a black hole can be found by using the gravitational analogue of Gauss's law (through the ADM mass), far away from the black hole. Likewise, the angular momentum (or spin) can be measured from far away using frame dragging by the gravitomagnetic field, through for example the Lense–Thirring effect.
When an object falls into a black hole, any information about the shape of the object or distribution of charge on it is evenly distributed along the horizon of the black hole, and is lost to outside observers. The behaviour of the horizon in this situation is that of a dissipative system closely analogous to a conductive, elastic membrane with friction and electrical resistance—the membrane paradigm. This is different from other field theories such as electromagnetism, which have no friction or resistivity at the microscopic level, because they are time-reversible. Because a black hole eventually achieves a stable state with only three parameters, there is no way to avoid losing information about the initial conditions: the gravitational and electric fields of a black hole give very little information about what went in. The information that is lost includes every quantity that cannot be measured far away from the black hole horizon, including approximately conserved quantum numbers such as the total baryon number and lepton number. This behaviour is so puzzling that it has been called the black hole information loss paradox.
Physical properties. The simplest static black holes have mass but neither electric charge nor angular momentum. These black holes are often referred to as Schwarzschild black holes after Karl Schwarzschild who discovered this solution in 1916. According to Birkhoff's theorem, it is the only vacuum solution that is spherically symmetric. This means there is no observable difference at a distance between the gravitational field of such a black hole and that of any other spherical object of the same mass. The popular notion of a black hole "sucking in everything" in its surroundings is therefore correct only near a black hole's horizon; far away, the external gravitational field is identical to that of any other body of the same mass. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum.
While the mass of a black hole can take any positive value, the charge and angular momentum are constrained by the mass. The total electric charge "Q" and the total angular momentum "J" are expected to satisfy the inequality $\frac{Q^2}{4\pi\varepsilon_0} + \frac{c^2 J^2}{G M^2} \le G M^2$ for a black hole of mass "M". Black holes with the minimum possible mass satisfying this inequality are called extremal. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These solutions have so-called naked singularities that can be observed from the outside, and hence are deemed "unphysical". The cosmic censorship hypothesis rules out the formation of such singularities, when they are created through the gravitational collapse of realistic matter. This is supported by numerical simulations. Due to the relatively large strength of the electromagnetic force, black holes forming from the collapse of stars are expected to retain the nearly neutral charge of the star. Rotation, however, is expected to be a universal feature of compact astrophysical objects. The black-hole candidate binary X-ray source GRS 1915+105 appears to have an angular momentum near the maximum allowed value. That uncharged limit is
$J_{\max} = \frac{GM^2}{c},$ allowing definition of a dimensionless spin parameter $a_* = \frac{cJ}{GM^2}$ such that $0 \le a_* \le 1.$ Black holes are commonly classified according to their mass, independent of angular momentum, "J". The size of a black hole, as determined by the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, "M", through $r_{\rm s} = \frac{2GM}{c^2} \approx 2.95\,\frac{M}{M_\odot}\ \mathrm{km},$ where $r_{\rm s}$ is the Schwarzschild radius and $M_\odot$ is the mass of the Sun. For a black hole with nonzero spin or electric charge, the radius is smaller, until an extremal black hole could have an event horizon close to $\frac{GM}{c^2}$ (half the Schwarzschild radius). Event horizon. The defining feature of a black hole is the appearance of an event horizon—a boundary in spacetime through which matter and light can pass only inward towards the mass of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach an outside observer, making it impossible to determine whether such an event occurred.
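As a hedged illustration of the dimensionless spin parameter defined above, the following sketch (with made-up values, not measured data) checks how close a given angular momentum is to the extremal bound $J_{\max} = GM^2/c$:

```python
G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def spin_parameter(J: float, M: float) -> float:
    """Dimensionless spin a* = cJ / (G M^2); a* = 1 is the extremal limit."""
    return c * J / (G * M**2)

M = 10 * M_SUN                 # a 10 solar-mass black hole (illustrative)
J_max = G * M**2 / c           # maximum (extremal) angular momentum
print(spin_parameter(0.9 * J_max, M))  # -> 0.9, i.e. 90% of extremal
```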
As predicted by general relativity, the presence of a mass deforms spacetime in such a way that the paths taken by particles bend towards the mass. At the event horizon of a black hole, this deformation becomes so strong that there are no paths that lead away from the black hole. In a thought experiment, a distant observer can imagine clocks near a black hole which would appear to tick more slowly than those farther away from the black hole. This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approaches the event horizon, taking an infinite amount of time to reach it. All processes on this object would appear to slow down, from the viewpoint of a fixed outside observer, and any light emitted by the object to appear redder and dimmer, an effect known as gravitational redshift. Eventually, the falling object fades away until it can no longer be seen. Typically this process happens very rapidly with an object disappearing from view within less than a second.
On the other hand, imaginary, indestructible observers falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally; they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle. The topology of the event horizon of a black hole at equilibrium is always spherical. For non-rotating (static) black holes the geometry of the event horizon is precisely spherical, while for rotating black holes the event horizon is oblate. Singularity. At the centre of a black hole, as described by general relativity, may lie a gravitational singularity, a region where the spacetime curvature becomes infinite. For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation. In both cases, the singular region has zero volume. It can also be shown that the singular region contains all the mass of the black hole solution. The singular region can thus be thought of as having infinite density.
Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. They can prolong the experience by accelerating away to slow their descent, but only up to a limit. When they reach the singularity, they are crushed to infinite density and their mass is added to the total of the black hole. Before that happens, they will have been torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the "noodle effect". In the case of a charged (Reissner–Nordström) or rotating (Kerr) black hole, it is possible to avoid the singularity. Extending these solutions as far as possible reveals the hypothetical possibility of exiting the black hole into a different spacetime with the black hole acting as a wormhole. The possibility of travelling to another universe is, however, only theoretical since any perturbation would destroy this possibility. It also appears to be possible to follow closed timelike curves (returning to one's own past) around the Kerr singularity, which leads to problems with causality like the grandfather paradox. It is expected that none of these peculiar effects would survive in a proper quantum treatment of rotating and charged black holes.
The appearance of singularities in general relativity is commonly perceived as signalling the breakdown of the theory. This breakdown, however, is expected; it occurs in a regime where quantum effects should describe these interactions, owing to the extremely high density and the resulting particle interactions. To date, it has not been possible to combine quantum and gravitational effects into a single theory, although there exist attempts to formulate such a theory of quantum gravity. It is generally expected that such a theory will not feature any singularities. Photon sphere. The photon sphere is a spherical boundary where photons that move on tangents to that sphere would be trapped in a circular orbit around the black hole. For non-rotating black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius. These orbits are dynamically unstable, hence any small perturbation, such as a particle of infalling matter, would cause an instability that would grow over time, either setting the photon on an outward trajectory, causing it to escape the black hole, or on an inward spiral where it would eventually cross the event horizon.
While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Hence any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. For a Kerr black hole the radius of the photon sphere depends on the spin parameter and on the details of the photon orbit, which can be prograde (the photon rotates in the same sense as the black hole's spin) or retrograde. Ergosphere. Rotating black holes are surrounded by a region of spacetime in which it is impossible to stand still, called the ergosphere. This is the result of a process known as frame-dragging; general relativity predicts that any rotating mass will tend to slightly "drag" along the spacetime immediately surrounding it. Any object near the rotating mass will tend to start moving in the direction of rotation. For a rotating black hole, this effect is so strong near the event horizon that an object would have to move faster than the speed of light in the opposite direction to just stand still.
The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the "ergosurface", which coincides with the event horizon at the poles but is at a much greater distance around the equator. Objects and radiation can escape normally from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole; the rotation of the black hole thereby slows down. A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. Innermost stable circular orbit (ISCO). In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists an innermost stable circular orbit (often called the ISCO), for which any infinitesimal inward perturbation to a circular orbit will lead to spiraling into the black hole, and any outward perturbation will, depending on the energy, result in spiraling in, stably orbiting between apastron and periastron, or escaping to infinity. The location of the ISCO depends on the spin of the black hole; in the case of a Schwarzschild black hole (spin zero) it is $r_{\rm ISCO} = \frac{6GM}{c^2} = 3r_{\rm s}.$
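A quick numerical check of the Schwarzschild ISCO formula above (a sketch, not from the article):

```python
G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def isco_radius(mass_kg: float) -> float:
    """Schwarzschild ISCO: r_isco = 6GM/c^2, i.e. three Schwarzschild radii."""
    return 6 * G * mass_kg / c**2

print(f"{isco_radius(10 * M_SUN) / 1e3:.1f} km")  # ~88.6 km for 10 M_sun
```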
This radius decreases with increasing black hole spin for particles orbiting in the same direction as the spin. Plunging region. The final observable region of spacetime around a black hole is called the plunging region. In this area it is no longer possible for matter to follow circular orbits or to stop a final descent into the black hole. Instead it will rapidly plunge toward the black hole close to the speed of light. Formation and evolution. Given the bizarre character of black holes, it was long questioned whether such objects could actually exist in nature or whether they were merely pathological solutions to Einstein's equations. Einstein himself wrongly thought black holes would not form, because he held that the angular momentum of collapsing particles would stabilise their motion at some radius. This led the general relativity community to dismiss all results to the contrary for many years. However, a minority of relativists continued to contend that black holes were physical objects, and by the end of the 1960s, they had persuaded the majority of researchers in the field that there is no obstacle to the formation of an event horizon.
Penrose demonstrated that once an event horizon forms, general relativity without quantum mechanics requires that a singularity will form within. Shortly afterwards, Hawking showed that many cosmological solutions that describe the Big Bang have singularities without scalar fields or other exotic matter. The Kerr solution, the no-hair theorem, and the laws of black hole thermodynamics showed that the physical properties of black holes were simple and comprehensible, making them respectable subjects for research. Conventional black holes are formed by gravitational collapse of heavy objects such as stars, but they can also in theory be formed by other processes. Gravitational collapse. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. For stars this usually occurs either because a star has too little "fuel" left to maintain its temperature through stellar nucleosynthesis, or because a star that would have been stable receives extra matter in a way that does not raise its core temperature. In either case the star's temperature is no longer high enough to prevent it from collapsing under its own weight.
The collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. The result is one of the various types of compact star. Which type forms depends on the mass of the remnant of the original star left if the outer layers have been blown away (for example, in a Type II supernova). The mass of the remnant, the collapsed object that survives the explosion, can be substantially less than that of the original star. Remnants exceeding are produced by stars that were over before the collapse. If the mass of the remnant exceeds about (the Tolman–Oppenheimer–Volkoff limit), either because the original star was very heavy or because the remnant collected additional mass through accretion of matter, even the degeneracy pressure of neutrons is insufficient to stop the collapse. No known mechanism (except possibly quark degeneracy pressure) is powerful enough to stop the implosion and the object will inevitably collapse to form a black hole.
The gravitational collapse of heavy stars is assumed to be responsible for the formation of stellar mass black holes. Star formation in the early universe may have resulted in very massive stars, which upon their collapse would have produced black holes of up to . These black holes could be the seeds of the supermassive black holes found in the centres of most galaxies. It has further been suggested that massive black holes with typical masses of ~ could have formed from the direct collapse of gas clouds in the young universe. These massive objects have been proposed as the seeds that eventually formed the earliest quasars observed already at redshift $z \sim 7$. Some candidates for such objects have been found in observations of the young universe. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the light emitted just before the event horizon forms delayed an infinite amount of time. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away.
Primordial black holes and the Big Bang. Gravitational collapse requires great density. In the current epoch of the universe these high densities are found only in stars, but in the early universe shortly after the Big Bang densities were much greater, possibly allowing for the creation of black holes. High density alone is not enough to allow black hole formation since a uniform mass distribution will not allow the mass to bunch up. In order for primordial black holes to have formed in such a dense medium, there must have been initial density perturbations that could then grow under their own gravity. Different models for the early universe vary widely in their predictions of the scale of these fluctuations. Various models predict the creation of primordial black holes ranging in mass from the Planck mass ($m_{\rm P} = \sqrt{\hbar c/G} \approx 1.2\times10^{19}\ \mathrm{GeV}/c^2 \approx 2.2\times10^{-8}\ \mathrm{kg}$) to hundreds of thousands of solar masses. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the expansion rate was greater than the attraction. According to inflation theory, there was a net repulsive gravitation in the beginning until the end of inflation. Since then the Hubble flow has been slowed by the energy density of the universe.
Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. High-energy collisions. Gravitational collapse is not the only process that could create black holes. In principle, black holes could be formed in high-energy collisions that achieve sufficient density. As of 2002, no such events have been detected, either directly or indirectly as a deficiency of the mass balance in particle accelerator experiments. This suggests that there must be a lower limit for the mass of black holes. Theoretically, this boundary is expected to lie around the Planck mass, where quantum effects are expected to invalidate the predictions of general relativity. This would put the creation of black holes firmly out of reach of any high-energy process occurring on or near the Earth. However, certain developments in quantum gravity suggest that the minimum black hole mass could be much lower: some braneworld scenarios for example put the boundary as low as . This would make it conceivable for micro black holes to be created in the high-energy collisions that occur when cosmic rays hit the Earth's atmosphere, or possibly in the Large Hadron Collider at CERN. These theories are very speculative, and the creation of black holes in these processes is deemed unlikely by many specialists. Even if micro black holes could be formed, it is expected that they would evaporate in about 10 seconds, posing no threat to the Earth.
Growth. Once a black hole has formed, it can continue to grow by absorbing additional matter. Any black hole will continually absorb gas and interstellar dust from its surroundings. This growth process is one possible way through which some supermassive black holes may have been formed, although the formation of supermassive black holes is still an open field of research. A similar process has been suggested for the formation of intermediate-mass black holes found in globular clusters. Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Evaporation. In 1974, Hawking predicted that black holes are not entirely black but emit small amounts of thermal radiation at a temperature $T = \frac{\hbar c^3}{8\pi G M k_{\rm B}}$; this effect has become known as Hawking radiation. By applying quantum field theory to a static black hole background, he determined that a black hole should emit particles that display a perfect black body spectrum. Since Hawking's publication, many others have verified the result through various approaches. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which, for a Schwarzschild black hole, is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.
A stellar black hole of one solar mass has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. If a black hole is very small, the radiation effects are expected to become very strong. A black hole with the mass of a car would have a diameter of about 10⁻²⁴ m and take a nanosecond to evaporate, during which time it would briefly have a luminosity of more than 200 times that of the Sun. Lower-mass black holes are expected to evaporate even faster; for example, a black hole of mass 1 TeV/c² would take less than 10⁻⁸⁸ seconds to evaporate completely. For such a small black hole, quantum gravity effects are expected to play an important role and could hypothetically make such a small black hole stable, although current developments in quantum gravity do not indicate this is the case.
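A hedged sketch of the formulas at work here: the Hawking temperature given above and the standard lifetime estimate $t \approx 5120\pi G^2 M^3/(\hbar c^4)$, which counts photon emission only, so real lifetimes would be somewhat shorter once other particle species are included.

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K
M_SUN = 1.989e30   # solar mass, kg

def hawking_temperature(M: float) -> float:
    """T = hbar c^3 / (8 pi G M k_B), in kelvins."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time(M: float) -> float:
    """Photon-only lifetime estimate, t = 5120 pi G^2 M^3 / (hbar c^4), in seconds."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

print(hawking_temperature(M_SUN))         # ~6.2e-8 K, the 62 nK quoted above
print(evaporation_time(M_SUN) / 3.156e7)  # ~2e67 years for one solar mass
```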
The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception, however, is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possibility of existence of low mass primordial black holes. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, will continue the search for these flashes. If black holes evaporate via Hawking radiation, a solar mass black hole will evaporate (beginning once the temperature of the cosmic microwave background drops below that of the black hole) over a period of 10 years. A supermassive black hole with a mass of will evaporate in around 2×10 years. Some monster black holes in the universe are predicted to continue to grow up to perhaps during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 10 years. Observational evidence.
By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. For example, a black hole's existence can sometimes be inferred by observing its gravitational influence on its surroundings. Direct interferometry. The Event Horizon Telescope (EHT) is an active program that directly observes the immediate environment of black holes' event horizons, such as the black hole at the centre of the Milky Way. In April 2017, EHT began observing the black hole at the centre of Messier 87. "In all, eight radio observatories on six mountains and four continents observed the galaxy in Virgo on and off for 10 days in April 2017" to provide the data yielding the image in April 2019. After two years of data processing, EHT released its first image of a black hole, at the centre of the Messier 87 galaxy. What is visible is not the black hole itself, which appears black because all light within this dark region is lost; it is instead the glowing gas at the edge of the event horizon, displayed as orange or red, that outlines the black hole.
On 12 May 2022, the EHT released the first image of Sagittarius A*, the supermassive black hole at the centre of the Milky Way galaxy. The published image displayed the same ring-like structure and "shadow" seen in the M87* black hole. The boundary of the shadow or area of less brightness matches the predicted gravitationally lensed photon orbits. The image was created using the same techniques as for the M87 black hole. The imaging process for Sagittarius A*, which is more than a thousand times smaller and less massive than M87*, was significantly more complex because of the instability of its surroundings. The image of Sagittarius A* was partially blurred by turbulent plasma on the way to the galactic centre, an effect which prevents resolution of the image at longer wavelengths. The brightening of this material in the 'bottom' half of the processed EHT image is thought to be caused by Doppler beaming, whereby material approaching the viewer at relativistic speeds is perceived as brighter than material moving away. In the case of a black hole, this phenomenon implies that the visible material is rotating at relativistic speeds (>), the only speeds at which it is possible to centrifugally balance the immense gravitational attraction of the singularity, and thereby remain in orbit above the event horizon. This configuration of bright material implies that the EHT observed M87* from a perspective catching the black hole's accretion disc nearly edge-on, as the whole system rotated clockwise.
The extreme gravitational lensing associated with black holes produces the illusion of a perspective that sees the accretion disc from above. In reality, most of the ring in the EHT image was created when the light emitted by the far side of the accretion disc bent around the black hole's gravity well and escaped, meaning that most of the possible perspectives on M87* can see the entire disc, even that directly behind the "shadow". In 2015, the EHT detected magnetic fields just outside the event horizon of Sagittarius A* and even discerned some of their properties. The field lines that pass through the accretion disc were a complex mixture of ordered and tangled. Theoretical studies of black holes had predicted the existence of magnetic fields. In April 2023, an image of the shadow of the Messier 87 black hole and the related high-energy jet, viewed together for the first time, was presented. Detection of gravitational waves from merging black holes. On 14 September 2015, the LIGO gravitational wave observatory made the first-ever successful direct observation of gravitational waves. The signal was consistent with theoretical predictions for the gravitational waves produced by the merger of two black holes: one with about 36 solar masses, and the other around 29 solar masses. This observation provides the most concrete evidence for the existence of black holes to date. For instance, the gravitational wave signal suggests that the separation of the two objects before the merger was just 350 km, or roughly four times the Schwarzschild radius corresponding to the inferred masses. The objects must therefore have been extremely compact, leaving black holes as the most plausible interpretation.
More importantly, the signal observed by LIGO also included the start of the post-merger ringdown, the signal produced as the newly formed compact object settles down to a stationary state. Arguably, the ringdown is the most direct way of observing a black hole. From the LIGO signal, it is possible to extract the frequency and damping time of the dominant mode of the ringdown. From these, it is possible to infer the mass and angular momentum of the final object, which match independent predictions from numerical simulations of the merger. The frequency and decay time of the dominant mode are determined by the geometry of the photon sphere. Hence, observation of this mode confirms the presence of a photon sphere; however, it cannot exclude possible exotic alternatives to black holes that are compact enough to have a photon sphere. The observation also provides the first observational evidence for the existence of stellar-mass black hole binaries. Furthermore, it is the first observational evidence of stellar-mass black holes weighing 25 solar masses or more.
Since then, many more gravitational wave events have been observed. Stars orbiting Sagittarius A*. The proper motions of stars near the centre of our own Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*. By fitting their motions to Keplerian orbits, the astronomers were able to infer, in 1998, that a object must be contained in a volume with a radius of 0.02 light-years to cause the motions of those stars. Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the calculations of the mass to and a radius of less than 0.002 light-years for the object causing the orbital motion of those stars. The upper limit on the object's size is still too large to test whether it is smaller than its Schwarzschild radius. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes.
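To illustrate how Keplerian orbital fits constrain the central mass, here is a rough Python sketch using Kepler's third law with approximate published values for the star S2 (orbital period about 16 years, semi-major axis about 970 AU); these numbers are assumptions for illustration, not data from this article.

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
AU = 1.496e11      # astronomical unit, m
YEAR = 3.156e7     # s
M_SUN = 1.989e30   # kg

def enclosed_mass(a_m: float, T_s: float) -> float:
    """Kepler's third law solved for the central mass: M = 4 pi^2 a^3 / (G T^2)."""
    return 4 * math.pi**2 * a_m**3 / (G * T_s**2)

M = enclosed_mass(970 * AU, 16 * YEAR)
print(f"~{M / M_SUN:.1e} solar masses")  # on the order of a few million M_sun
```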
Accretion of matter. Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object. Artists' impressions such as the accompanying representation of a black hole with corona commonly depict the black hole as if it were a flat-space body hiding the part of the disk just behind it, but in reality gravitational lensing would greatly distort the image of the accretion disk. Within such a disk, friction would cause angular momentum to be transported outward, allowing matter to fall farther inward, thus releasing potential energy and increasing the temperature of the gas. When the accreting object is a neutron star or a black hole, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the compact object. The resulting friction is so significant that it heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays). These bright X-ray sources may be detected by telescopes. This process of accretion is one of the most efficient energy-producing processes known. Up to 40% of the rest mass of the accreted material can be emitted as radiation. In nuclear fusion only about 0.7% of the rest mass will be emitted as energy. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data.
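A back-of-the-envelope comparison of the two efficiencies quoted above (a sketch, with the percentages taken from the text):

```python
c = 2.998e8                       # speed of light, m/s
rest_energy_per_kg = c**2         # E = m c^2, J per kg of matter

accretion = 0.40 * rest_energy_per_kg   # upper limit for accretion, ~3.6e16 J/kg
fusion = 0.007 * rest_energy_per_kg     # hydrogen fusion, ~6.3e14 J/kg
print(accretion, fusion, accretion / fusion)  # accretion can be ~50x more efficient
```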
As such, many of the universe's more energetic phenomena have been attributed to the accretion of matter onto black holes. In particular, active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. Similarly, X-ray binaries are generally accepted to be binary star systems in which one of the two stars is a compact object accreting matter from its companion. It has also been suggested that some ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. Stars have been observed to get torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. In November 2011 the first direct observation of a quasar accretion disk around a supermassive black hole was reported. X-ray binaries. X-ray binaries are binary star systems that emit a majority of their radiation in the X-ray part of the spectrum. These X-ray emissions are generally thought to result when one of the stars (compact object) accretes matter from another (regular) star. The presence of an ordinary star in such a system provides an opportunity for studying the central object and to determine if it might be a black hole.
If such a system emits signals that can be directly traced back to the compact object, it cannot be a black hole. The absence of such a signal does, however, not exclude the possibility that the compact object is a neutron star. By studying the companion star it is often possible to obtain the orbital parameters of the system and to obtain an estimate for the mass of the compact object. If this is much larger than the Tolman–Oppenheimer–Volkoff limit (the maximum mass a star can have without collapsing) then the object cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Some doubt remained, due to the uncertainties that result from the companion star being much heavier than the candidate black hole. Currently, better candidates for black holes are found in a class of X-ray binaries called soft X-ray transients. In this class of system, the companion star is of relatively low mass allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star during this period. One of the best such candidates is V404 Cygni.
Quasi-periodic oscillations. The X-ray emissions from accretion disks sometimes flicker at certain frequencies. These signals are called quasi-periodic oscillations and are thought to be caused by material moving along the inner edge of the accretion disk (the innermost stable circular orbit). As such their frequency is linked to the mass of the compact object. They can thus be used as an alternative way to determine the mass of candidate black holes. Galactic nuclei. Astronomers use the term "active galaxy" to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the activity in these active galactic nuclei (AGN) may be explained by the presence of supermassive black holes, which can be millions of times more massive than stellar ones. The models of these AGN consist of a central black hole that may be millions or billions of times more massive than the Sun; a disk of interstellar gas and dust called an accretion disk; and two jets perpendicular to the accretion disk.
Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been more carefully studied in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, M32, M87, NGC 3115, NGC 3377, NGC 4258, NGC 4889, NGC 1277, OJ 287, APM 08279+5255 and the Sombrero Galaxy. It is now widely accepted that the centre of nearly every galaxy, not just active ones, contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself. Microlensing. Another way the black hole nature of an object may be tested is through observation of effects caused by a strong gravitational field in their vicinity. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, much as light passing through an optical lens. Observations have been made of weak gravitational lensing, in which light rays are deflected by only a few arcseconds. Microlensing occurs when the sources are unresolved and the observer sees a small brightening. The turn of the millennium saw the first 3 candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole.
Another possibility for observing gravitational lensing by a black hole would be to observe stars orbiting the black hole. There are several candidates for such an observation in orbit around Sagittarius A*. Alternatives. The evidence for stellar black holes strongly relies on the existence of an upper limit for the mass of a neutron star. The size of this limit heavily depends on the assumptions made about the properties of dense matter. New exotic phases of matter could push up this bound. A phase of free quarks at high density might allow the existence of dense quark stars, and some supersymmetric models predict the existence of Q stars. Some extensions of the standard model posit the existence of preons as fundamental building blocks of quarks and leptons, which could hypothetically form preon stars. These hypothetical models could potentially explain a number of observations of stellar black hole candidates. However, it can be shown from arguments in general relativity that any such object will have a maximum mass.
Since the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass, supermassive black holes are much less dense than stellar black holes. The average density of a sufficiently large supermassive black hole can be comparable to that of water. Consequently, the physics of matter forming a supermassive black hole is much better understood and the possible alternative explanations for supermassive black hole observations are much more mundane. For example, a supermassive black hole could be modelled by a large cluster of very dark objects. However, such alternatives are typically not stable enough to explain the supermassive black hole candidates. The evidence for the existence of stellar and supermassive black holes implies that in order for black holes not to form, general relativity must fail as a theory of gravity, perhaps due to the onset of quantum mechanical corrections. A much anticipated feature of a theory of quantum gravity is that it will not feature singularities or event horizons, in which case black holes would not be true objects but only effective descriptions. For example, in the fuzzball model based on string theory, the individual states of a black hole solution do not generally have an event horizon or singularity, but for a classical/semiclassical observer the statistical average of such states appears just as an ordinary black hole as deduced from general relativity.
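A short sketch of the inverse-square density scaling just described, using the mean density inside the Schwarzschild radius, $\rho = 3c^6/(32\pi G^3 M^2)$:

```python
import math

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def mean_density(M: float) -> float:
    """Mass divided by the volume of a sphere of Schwarzschild radius."""
    r_s = 2 * G * M / c**2
    return M / ((4 / 3) * math.pi * r_s**3)

print(mean_density(10 * M_SUN))    # ~1.8e17 kg/m^3, above nuclear density
print(mean_density(1e8 * M_SUN))   # ~1.8e3 kg/m^3, roughly the density of water
```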
A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. These include the gravastar, the black star, related nestar and the dark-energy star. Open questions. Entropy and thermodynamics. In 1971, Hawking showed under general conditions that the total area of the event horizons of any collection of classical black holes can never decrease, even if they collide and merge. This result, now known as the second law of black hole mechanics, is remarkably similar to the second law of thermodynamics, which states that the total entropy of an isolated system can never decrease. As with classical objects at absolute zero temperature, it was assumed that black holes had zero entropy. If this were the case, the second law of thermodynamics would be violated by entropy-laden matter entering a black hole, resulting in a decrease in the total entropy of the universe. Therefore, Bekenstein proposed that a black hole should have an entropy, and that it should be proportional to its horizon area.
The link with the laws of thermodynamics was further strengthened by Hawking's discovery in 1974 that quantum field theory predicts that a black hole radiates blackbody radiation at a constant temperature. This seemingly causes a violation of the second law of black hole mechanics, since the radiation will carry away energy from the black hole causing it to shrink. The radiation also carries away entropy, and it can be proven under general assumptions that the sum of the entropy of the matter surrounding a black hole and one quarter of the area of the horizon as measured in Planck units is in fact always increasing. This allows the formulation of the first law of black hole mechanics as an analogue of the first law of thermodynamics, with the mass acting as energy, the surface gravity as temperature and the area as entropy. One puzzling feature is that the entropy of a black hole scales with its area rather than with its volume, since entropy is normally an extensive quantity that scales linearly with the volume of the system. This odd property led Gerard 't Hooft and Leonard Susskind to propose the holographic principle, which suggests that anything that happens in a volume of spacetime can be described by data on the boundary of that volume.
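The area scaling of the entropy can be made concrete with the Bekenstein–Hawking formula, $S = k_B A/(4 l_P^2)$ with Planck area $l_P^2 = G\hbar/c^3$; the following sketch (not from the article) evaluates it for one solar mass.

```python
import math

G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
M_SUN = 1.989e30

def bh_entropy_over_kB(M: float) -> float:
    """Bekenstein-Hawking entropy in units of k_B: one quarter of the
    horizon area measured in Planck areas."""
    r_s = 2 * G * M / c**2
    area = 4 * math.pi * r_s**2
    planck_area = G * hbar / c**3
    return area / (4 * planck_area)

print(bh_entropy_over_kB(M_SUN))  # ~1e77 -- scales as M^2, i.e. with area
```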
Although general relativity can be used to perform a semiclassical calculation of black hole entropy, this situation is theoretically unsatisfying. In statistical mechanics, entropy is understood as counting the number of microscopic configurations of a system that have the same macroscopic qualities, such as mass, charge, pressure, etc. Without a satisfactory theory of quantum gravity, one cannot perform such a computation for black holes. Some progress has been made in various approaches to quantum gravity. In 1995, Andrew Strominger and Cumrun Vafa showed that counting the microstates of a specific supersymmetric black hole in string theory reproduced the Bekenstein–Hawking entropy. Since then, similar results have been reported for different black holes both in string theory and in other approaches to quantum gravity like loop quantum gravity. Information loss paradox. Because a black hole has only a few internal parameters, most of the information about the matter that went into forming the black hole is lost. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum are conserved. As long as black holes were thought to persist forever this information loss is not that problematic, as the information can be thought of as existing inside the black hole, inaccessible from the outside, but represented on the event horizon in accordance with the holographic principle. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information appears to be gone forever.
The question whether information is truly lost in black holes (the black hole information paradox) has divided the theoretical physics community. In quantum mechanics, loss of information corresponds to the violation of a property called unitarity, and it has been argued that loss of unitarity would also imply violation of conservation of energy, though this has also been disputed. Over recent years evidence has been building that indeed information and unitarity are preserved in a full quantum gravitational treatment of the problem. One attempt to resolve the black hole information paradox is known as black hole complementarity. In 2012, the "firewall paradox" was introduced with the goal of demonstrating that black hole complementarity fails to solve the information paradox. According to quantum field theory in curved spacetime, a single emission of Hawking radiation involves two mutually entangled particles. The outgoing particle escapes and is emitted as a quantum of Hawking radiation; the infalling particle is swallowed by the black hole. Assume a black hole formed a finite time in the past and will fully evaporate away in some finite time in the future. Then, it will emit only a finite amount of information encoded within its Hawking radiation. According to research by physicists like Don Page and Leonard Susskind, there will eventually be a time by which an outgoing particle must be entangled with all the Hawking radiation the black hole has previously emitted.
This seemingly creates a paradox: a principle called "monogamy of entanglement" requires that, like any quantum system, the outgoing particle cannot be fully entangled with two other systems at the same time; yet here the outgoing particle appears to be entangled both with the infalling particle and, independently, with past Hawking radiation. In order to resolve this contradiction, physicists may eventually be forced to give up one of three time-tested principles: Einstein's equivalence principle, unitarity, or local quantum field theory. One possible solution, which violates the equivalence principle, is that a "firewall" destroys incoming particles at the event horizon. In general, which—if any—of these assumptions should be abandoned remains a topic of debate. In science fiction. Christopher Nolan's 2014 science fiction epic "Interstellar" features a black hole known as Gargantua, which is the central object of a planetary system in a distant galaxy. Humanity accessed this system via a wormhole in the outer solar system, near Saturn.
Beta decay
In nuclear physics, beta decay (β-decay) is a type of radioactive decay in which an atomic nucleus emits a beta particle (a fast, energetic electron or positron), transforming into an isobar of that nuclide. For example, beta decay of a neutron transforms it into a proton by the emission of an electron accompanied by an antineutrino; or, conversely, a proton is converted into a neutron by the emission of a positron with a neutrino in what is called "positron emission". Neither the beta particle nor its associated (anti-)neutrino exists within the nucleus prior to beta decay; both are created in the decay process. By this process, unstable atoms obtain a more stable ratio of protons to neutrons. The probability of a nuclide decaying due to beta and other forms of decay is determined by its nuclear binding energy. The binding energies of all existing nuclides form what is called the nuclear band or valley of stability. For either electron or positron emission to be energetically possible, the energy release (see below) or "Q" value must be positive.
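As a hedged illustration of the Q-value condition, the sketch below computes the β− Q value from atomic masses, Q = (m_parent − m_daughter)c²; the mass values are approximate figures from standard tables, assumed here for illustration. For β− decay, atomic masses can be compared directly because the daughter atom's extra electron accounts for the emitted beta particle.

```python
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, MeV

def q_beta_minus(m_parent_u: float, m_daughter_u: float) -> float:
    """Beta-minus Q value in MeV; the decay is energetically allowed only if Q > 0."""
    return (m_parent_u - m_daughter_u) * U_TO_MEV

# Carbon-14 -> nitrogen-14, with approximate atomic masses in u:
print(q_beta_minus(14.003242, 14.003074))  # ~0.16 MeV > 0, so C-14 can decay
```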
Beta decay is a consequence of the weak force, which is characterized by relatively long decay times. Nucleons are composed of up quarks and down quarks, and the weak force allows a quark to change its flavour by means of a virtual W boson leading to creation of an electron/antineutrino or positron/neutrino pair. For example, a neutron, composed of two down quarks and an up quark, decays to a proton composed of a down quark and two up quarks. Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In electron capture, an inner atomic electron is captured by a proton in the nucleus, transforming it into a neutron, and an electron neutrino is released. Description. The two types of beta decay are known as "beta minus" and "beta plus". In beta minus (β−) decay, a neutron is converted to a proton, and the process creates an electron and an electron antineutrino; while in beta plus (β+) decay, a proton is converted to a neutron and the process creates a positron and an electron neutrino. β+ decay is also known as positron emission.
Beta decay conserves a quantum number known as the lepton number, or the number of electrons and their associated neutrinos (other leptons are the muon and tau particles). These particles have lepton number +1, while their antiparticles have lepton number −1. Since a proton or neutron has lepton number zero, β+ decay (a positron, or antielectron) must be accompanied with an electron neutrino, while β− decay (an electron) must be accompanied by an electron antineutrino. An example of electron emission (β− decay) is the decay of carbon-14 into nitrogen-14 with a half-life of about 5,730 years: $^{14}_{6}\mathrm{C} \rightarrow {}^{14}_{7}\mathrm{N} + e^{-} + \bar{\nu}_e.$ In this form of decay, the original element becomes a new chemical element in a process known as nuclear transmutation. This new element has an unchanged mass number A, but an atomic number Z that is increased by one. As in all nuclear decays, the decaying element (in this case carbon-14) is known as the "parent nuclide" while the resulting element (in this case nitrogen-14) is known as the "daughter nuclide". Another example is the decay of hydrogen-3 (tritium) into helium-3 with a half-life of about 12.3 years: $^{3}_{1}\mathrm{H} \rightarrow {}^{3}_{2}\mathrm{He} + e^{-} + \bar{\nu}_e.$
An example of positron emission (β+ decay) is the decay of magnesium-23 into sodium-23 with a half-life of about 11.3 s: $^{23}_{12}\mathrm{Mg} \rightarrow {}^{23}_{11}\mathrm{Na} + e^{+} + \nu_e.$ β+ decay also results in nuclear transmutation, with the daughter element having an atomic number that is decreased by one. The beta spectrum, or distribution of energy values for the beta particles, is continuous. The total energy of the decay process is divided between the electron, the antineutrino, and the recoiling nuclide. Consider, for example, an electron emitted with 0.40 MeV of energy in the beta decay of 210Bi. In this example, the total decay energy is 1.16 MeV, so the antineutrino has the remaining energy: 1.16 MeV − 0.40 MeV = 0.76 MeV. An electron at the far right of the spectrum would have the maximum possible kinetic energy, leaving the energy of the neutrino to be only its small rest mass. History. Discovery and initial characterization. Radioactivity was discovered in 1896 by Henri Becquerel in uranium, and subsequently observed by Marie and Pierre Curie in thorium and in the newly discovered elements polonium and radium. In 1899, Ernest Rutherford separated radioactive emissions into two types: alpha and beta (now beta minus), based on penetration of objects and ability to cause ionization. Alpha rays could be stopped by thin sheets of paper or aluminium, whereas beta rays could penetrate several millimetres of aluminium. In 1900, Paul Villard identified a still more penetrating type of radiation, which Rutherford identified as a fundamentally new type in 1903 and termed gamma rays. Alpha, beta, and gamma are the first three letters of the Greek alphabet.
In 1900, Becquerel measured the mass-to-charge ratio (m/e) for beta particles using the method J. J. Thomson had used to study cathode rays and identify the electron. He found that m/e for a beta particle is the same as for Thomson's electron, and therefore suggested that the beta particle is in fact an electron. In 1901, Rutherford and Frederick Soddy showed that alpha and beta radioactivity involves the transmutation of atoms into atoms of other chemical elements. In 1913, after the products of more radioactive decays were known, Soddy and Kazimierz Fajans independently proposed their radioactive displacement law, which states that beta (i.e., β−) emission from one element produces another element one place to the right in the periodic table, while alpha emission produces an element two places to the left. Neutrinos.
A second problem is related to the conservation of angular momentum. Molecular band spectra showed that the nuclear spin of nitrogen-14 is 1 (i.e., equal to the reduced Planck constant) and more generally that the spin is integral for nuclei of even mass number and half-integral for nuclei of odd mass number. This was later explained by the proton-neutron model of the nucleus. Beta decay leaves the mass number unchanged, so the change of nuclear spin must be an integer. However, the electron spin is 1/2, hence angular momentum would not be conserved if beta decay were simply electron emission. From 1920 to 1927, Charles Drummond Ellis (along with Chadwick and colleagues) further established that the beta decay spectrum is continuous. In 1933, Ellis and Nevill Mott obtained strong evidence that the beta spectrum has an effective upper bound in energy. Niels Bohr had suggested that the beta spectrum could be explained if conservation of energy was true only in a statistical sense, thus this principle might be violated in any given decay. However, the upper bound in beta energies determined by Ellis and Mott ruled out that notion. Now, the problem of how to account for the variability of energy in known beta decay products, as well as for conservation of momentum and angular momentum in the process, became acute.
In a famous letter written in 1930, Wolfgang Pauli attempted to resolve the beta-particle energy conundrum by suggesting that, in addition to electrons and protons, atomic nuclei also contained an extremely light neutral particle, which he called the neutron. He suggested that this "neutron" was also emitted during beta decay (thus accounting for the known missing energy, momentum, and angular momentum), but that it had simply not yet been observed. In 1931, Enrico Fermi renamed Pauli's "neutron" the "neutrino" ('little neutral one' in Italian). In 1933, Fermi published his landmark theory of beta decay, in which he applied the principles of quantum mechanics to matter particles, supposing that they can be created and annihilated just as light quanta are in atomic transitions. Thus, according to Fermi, neutrinos are created in the beta-decay process rather than contained in the nucleus, and the same is true of the emitted electrons. The neutrino's interaction with matter is so weak that detecting it proved a severe experimental challenge. Further indirect evidence of the existence of the neutrino was obtained by observing the recoil of nuclei that emitted such a particle after absorbing an electron. Neutrinos were finally detected directly in 1956 by the American physicists Clyde Cowan and Frederick Reines in the Cowan–Reines neutrino experiment. The properties of neutrinos were (with a few minor modifications) as predicted by Pauli and Fermi.
β+ decay and electron capture. In 1934, Frédéric and Irène Joliot-Curie bombarded aluminium with alpha particles to effect the nuclear reaction 27Al + α → 30P + n, and observed that the product isotope 30P emits a positron identical to those found in cosmic rays (discovered by Carl David Anderson in 1932). This was the first example of β+ decay (positron emission), which they termed artificial radioactivity, since 30P is a short-lived nuclide which does not exist in nature. In recognition of their discovery, the couple were awarded the Nobel Prize in Chemistry in 1935. The theory of electron capture was first discussed by Gian-Carlo Wick in a 1934 paper, and then developed by Hideki Yukawa and others. K-electron capture was first observed in 1937 by Luis Alvarez, in the nuclide 48V. Alvarez went on to study electron capture in 67Ga and other nuclides. Non-conservation of parity. In 1956, Tsung-Dao Lee and Chen Ning Yang noticed that there was no evidence that parity was conserved in weak interactions, and so they postulated that this symmetry might not be preserved by the weak force. They sketched the design of an experiment to test conservation of parity in the laboratory. Later that year, Chien-Shiung Wu and coworkers conducted the Wu experiment, showing an asymmetrical beta decay of 60Co at cold temperatures that proved that parity is not conserved in beta decay. This surprising result overturned long-held assumptions about parity and the weak force. In recognition of their theoretical work, Lee and Yang were awarded the Nobel Prize in Physics in 1957. Wu, however, was not awarded the Nobel Prize.
β− decay. In β− decay, the weak interaction converts an atomic nucleus into a nucleus with atomic number increased by one, while emitting an electron (e−) and an electron antineutrino (ν̄e). β− decay generally occurs in neutron-rich nuclei. The generic equation is: A,Z X → A,Z+1 X′ + e− + ν̄e, where A and Z are the mass number and atomic number of the decaying nucleus, and X and X′ are the initial and final elements, respectively. Another example is the decay of the free neutron (n) into a proton (p) by β− decay: n → p + e− + ν̄e. At the fundamental level, this is caused by the conversion of a negatively charged (−1/3 e) down quark into a positively charged (+2/3 e) up quark through emission of a virtual W− boson; the W− boson subsequently decays into an electron and an electron antineutrino. β+ decay. In β+ decay, or positron emission, the weak interaction converts an atomic nucleus into a nucleus with atomic number decreased by one, while emitting a positron (e+) and an electron neutrino (νe). β+ decay generally occurs in proton-rich nuclei. The generic equation is: A,Z X → A,Z−1 X′ + e+ + νe.
This may be considered as the decay of a proton inside the nucleus to a neutron: p → n + e+ + νe. However, β+ decay cannot occur in an isolated proton, because it requires energy: the mass of the neutron is greater than the mass of the proton. β+ decay can only happen inside nuclei when the daughter nucleus has a greater binding energy (and therefore a lower total energy) than the mother nucleus. The difference between these energies goes into the reaction of converting a proton into a neutron, a positron, and a neutrino, and into the kinetic energy of these particles. This process is the opposite of negative beta decay: the weak interaction converts a proton into a neutron by converting an up quark into a down quark, resulting in the emission of a W+ boson or the absorption of a W− boson. When a W+ boson is emitted, it decays into a positron and an electron neutrino: W+ → e+ + νe. Electron capture (K-capture/L-capture). In all cases where β+ decay (positron emission) of a nucleus is allowed energetically, electron capture is allowed as well. This is a process during which a nucleus captures one of its atomic electrons, resulting in the emission of a neutrino: p + e− → n + νe.
An example of electron capture is one of the decay modes of krypton-81 into bromine-81: 81Kr + e− → 81Br + νe. All emitted neutrinos are of the same energy. In proton-rich nuclei where the energy difference between the initial and final states is less than 2m_e c² (about 1.022 MeV), β+ decay is not energetically possible, and electron capture is the sole decay mode. If the captured electron comes from the innermost shell of the atom, the K-shell, which has the highest probability of interacting with the nucleus, the process is called K-capture. If it comes from the L-shell, the process is called L-capture, etc. Electron capture is a competing (simultaneous) decay process for all nuclei that can undergo β+ decay. The converse, however, is not true: electron capture is the "only" type of decay that is allowed in proton-rich nuclides that do not have sufficient energy to emit a positron and neutrino. Nuclear transmutation. If the proton and neutron are part of an atomic nucleus, the decay processes described above transmute one chemical element into another. For example, β− decay converts 137Cs into 137Ba, while β+ decay converts 22Na into 22Ne.
Beta decay does not change the number (A) of nucleons in the nucleus, but changes only its charge Z. Thus the set of all nuclides with the same A can be introduced; these "isobaric" nuclides may turn into each other via beta decay. For a given A there is one nuclide that is most stable. It is said to be beta stable, because it presents a local minimum of the mass excess: if such a nucleus has numbers (A, Z), the neighbouring nuclei (A, Z − 1) and (A, Z + 1) have higher mass excess and can beta decay into (A, Z), but not vice versa. For all odd mass numbers A, there is only one known beta-stable isobar. For even A, there are up to three different beta-stable isobars experimentally known; for example, 124Sn, 124Te, and 124Xe are all beta-stable. There are about 350 known beta-decay stable nuclides.
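The isobaric mass-parabola picture can be made concrete with a short numerical sketch. The following Python fragment locates the Z at the bottom of the parabola for a given A using the semi-empirical mass formula; the coefficient values are one common textbook set and the function names are illustrative assumptions, not anything from this article:

```python
# Illustrative sketch: find the Z at the bottom of the isobaric mass parabola
# for a given mass number A, using the semi-empirical mass formula.

M_N = 939.565   # neutron rest energy, MeV
M_H = 938.783   # hydrogen-atom rest energy (proton + electron), MeV

a_V, a_S, a_C, a_A, a_P = 15.75, 17.8, 0.711, 23.7, 11.18  # textbook values, MeV

def binding_energy(A, Z):
    """Semi-empirical binding energy in MeV (volume, surface, Coulomb,
    asymmetry and pairing terms)."""
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        delta = a_P / A**0.5      # even-even: extra binding
    elif Z % 2 == 1 and N % 2 == 1:
        delta = -a_P / A**0.5     # odd-odd: reduced binding
    else:
        delta = 0.0               # odd A
    return (a_V * A - a_S * A**(2 / 3) - a_C * Z * (Z - 1) / A**(1 / 3)
            - a_A * (A - 2 * Z)**2 / A + delta)

def atomic_mass(A, Z):
    """Approximate atomic rest energy in MeV (electron binding neglected)."""
    return Z * M_H + (A - Z) * M_N - binding_energy(A, Z)

def beta_stable_Z(A):
    """Charge number minimizing the atomic mass along the isobar."""
    return min(range(1, A), key=lambda Z: atomic_mass(A, Z))

print(beta_stable_Z(99))    # odd A: one parabola, one minimum
print(beta_stable_Z(124))   # even A: pairing favours even-even isobars
```

For odd A the pairing term vanishes and there is a single parabola with one minimum; for even A the even-even and odd-odd isobars lie on two displaced parabolas, which is how two or three beta-stable isobars can coexist. Being a smooth approximation, the formula can miss the true beta-stable Z by a unit, but it reproduces the parabola picture.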
Competition of beta decay types. Usually unstable nuclides are clearly either "neutron rich" or "proton rich", with the former undergoing beta decay and the latter undergoing electron capture (or, more rarely, because of the higher energy requirements, positron decay). However, in a few cases of odd-proton, odd-neutron radionuclides, it may be energetically favorable for the radionuclide to decay to an even-proton, even-neutron isobar either by undergoing beta-positive or beta-negative decay. Three types of beta decay in competition are illustrated by the single isotope 64Cu (29 protons, 35 neutrons), which has a half-life of about 12.7 hours. This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay. This particular nuclide is almost equally likely to undergo proton decay (by positron emission, 18%, or by electron capture, 43%, both forming 64Ni) or neutron decay (by electron emission, 39%, forming 64Zn). Stability of naturally occurring nuclides. Most naturally occurring nuclides on earth are beta stable. Nuclides that are not beta stable have half-lives ranging from under a second to periods of time significantly greater than the age of the universe. One common example of a long-lived isotope is the odd-proton odd-neutron nuclide 40K, which undergoes all three types of beta decay (β−, β+ and electron capture) with a half-life of 1.25 × 10⁹ years. Conservation rules for beta decay. Baryon number is conserved: formula_1 where n_q is the number of constituent quarks and n_q̄ the number of constituent antiquarks. Beta decay changes a neutron into a proton (or, in the case of positron emission and electron capture, a proton into a neutron), so the number of individual quarks does not change. Only the quark flavour changes, here labelled by the isospin.
Up and down quarks have total isospin formula_4 and isospin projections formula_5 All other quarks have I = 0. In general formula_6 Lepton number is conserved: formula_7 so all leptons are assigned a value of +1, antileptons −1, and non-leptonic particles 0. formula_8 Angular momentum. For allowed decays, the net orbital angular momentum is zero, hence only spin quantum numbers are considered. The electron and antineutrino are fermions, spin-1/2 objects, therefore they may couple to total formula_9 (parallel) or formula_10 (anti-parallel). For forbidden decays, orbital angular momentum must also be taken into consideration. Energy release. The Q value is defined as the total energy released in a given nuclear decay. In beta decay, Q is therefore also the sum of the kinetic energies of the emitted beta particle, neutrino, and recoiling nucleus. (Because of the large mass of the nucleus compared to that of the beta particle and neutrino, the kinetic energy of the recoiling nucleus can generally be neglected.) Beta particles can therefore be emitted with any kinetic energy ranging from 0 to Q. A typical Q is around 1 MeV, but it can range from a few keV to a few tens of MeV.
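Since these kinetic energies are comparable to the electron rest energy, converting them into speeds requires relativistic kinematics. The sketch below is purely illustrative; the two keV-scale sample energies are the standard-table endpoint energies of 187Re and tritium, both of which come up just below:

```python
import math

ME_C2 = 0.511  # electron rest energy, MeV

def beta_speed(T):
    """v/c for a beta particle of kinetic energy T (MeV), via gamma = 1 + T / (m_e c^2)."""
    gamma = 1.0 + T / ME_C2
    return math.sqrt(1.0 - 1.0 / gamma**2)

# Endpoints: 187Re (~2.5 keV), tritium (~18.6 keV), then MeV-scale decays.
for T in (0.0025, 0.0186, 1.0, 10.0):
    print(f"T = {1000 * T:8.1f} keV  ->  v/c = {beta_speed(T):.3f}")
```

The first entry reproduces the roughly 10% of the speed of light quoted for 187Re, while the MeV-scale entries come out above 0.9 c, i.e., ultrarelativistic.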
Since the rest mass of the electron is 511 keV, the most energetic beta particles are ultrarelativistic, with speeds very close to the speed of light. In the case of 187Re, which has an exceptionally small decay energy, the maximum speed of the beta particle is only 9.8% of the speed of light. Tritium β− decay, with its similarly low endpoint energy, is used in the KATRIN experimental search for sterile neutrinos. β− decay. Consider the generic equation for β− decay given above. The Q value for this decay is the difference in mass energies of the initial and final states, where formula_12 is the mass of the nucleus of the A,Z X atom, formula_13 is the mass of the electron, and formula_14 is the mass of the electron antineutrino. In other words, the total energy released is the mass energy of the initial nucleus, minus the mass energy of the final nucleus, electron, and antineutrino. The mass of the nucleus is related to the standard atomic mass by formula_15 That is, the total atomic mass is the mass of the nucleus, plus the mass of the electrons, minus the sum of all electron binding energies for the atom. This equation is rearranged to find formula_12, and formula_17 is found similarly. Substituting these nuclear masses into the Q-value equation, while neglecting the nearly-zero antineutrino mass and the difference in electron binding energies, which is very small for high-Z atoms, we have
formula_18 This energy is carried away as kinetic energy by the electron and antineutrino. Because the reaction will proceed only when the Q value is positive, β− decay can occur when the mass of atom A,Z X is greater than the mass of atom A,Z+1 X′. β+ decay. The equations for β+ decay are similar, with the generic equation giving formula_19 However, in this equation the electron masses do not cancel, and we are left with formula_20 Because the reaction will proceed only when the Q value is positive, β+ decay can occur when the mass of atom A,Z X exceeds that of A,Z−1 X′ by at least twice the mass of the electron. Electron capture. The analogous calculation for electron capture must take into account the binding energy of the electrons. This is because the atom will be left in an excited state after capturing the electron, and the binding energy of the captured innermost electron is significant. Using the generic equation for electron capture, we have formula_21 which simplifies to formula_22 where B_n is the binding energy of the captured electron.
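These relations can be evaluated directly from tabulated atomic masses. The short sketch below is illustrative: the mass-excess values are rounded standard-table numbers, and the function names are invented for this example.

```python
# Q values (MeV) from atomic mass excesses, following the relations above.
MASS_EXCESS = {       # rounded standard-table values, MeV
    "3H": 14.950, "3He": 14.931,
    "22Na": -5.181, "22Ne": -8.025,
}
ME_C2 = 0.511         # electron rest energy, MeV

def q_beta_minus(parent, daughter):
    # Electron masses cancel: Q = [m(parent) - m(daughter)] c^2
    return MASS_EXCESS[parent] - MASS_EXCESS[daughter]

def q_beta_plus(parent, daughter):
    # Two electron masses remain: Q = [m(parent) - m(daughter) - 2 m_e] c^2
    return MASS_EXCESS[parent] - MASS_EXCESS[daughter] - 2 * ME_C2

def q_electron_capture(parent, daughter, B_n=0.0):
    # Q = [m(parent) - m(daughter)] c^2 - B_n, with B_n the binding energy
    # of the captured electron (neglected here for simplicity).
    return MASS_EXCESS[parent] - MASS_EXCESS[daughter] - B_n

print(q_beta_minus("3H", "3He"))           # ~0.019 MeV: the tritium endpoint
print(q_beta_plus("22Na", "22Ne"))         # ~1.82 MeV
print(q_electron_capture("22Na", "22Ne"))  # ~2.84 MeV: EC allowed whenever beta+ is
```

The 22Na case illustrates the point made next: the electron-capture Q value exceeds the β+ Q value by two electron masses, so capture remains open even when positron emission is marginal.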
Because the binding energy of the electron is much less than the mass of the electron, nuclei that can undergo β+ decay can always also undergo electron capture, but the reverse is not true. Beta emission spectrum. Beta decay can be considered as a perturbation as described in quantum mechanics, and thus Fermi's golden rule can be applied. This leads to an expression for the kinetic energy spectrum of emitted betas as follows: formula_23 where T is the kinetic energy, C is a shape function that depends on the forbiddenness of the decay (it is constant for allowed decays), F(Z, T) is the Fermi function (see below) with Z the charge of the final-state nucleus, E is the total energy, formula_24 is the momentum, and Q is the Q value of the decay. The kinetic energy of the emitted neutrino is given approximately by Q minus the kinetic energy of the beta. A classic example is the beta decay spectrum of 210Bi (historically called RaE). Fermi function. The Fermi function that appears in the beta spectrum formula accounts for the Coulomb attraction or repulsion between the emitted beta and the final-state nucleus. Approximating the associated wavefunctions to be spherically symmetric, the Fermi function can be analytically calculated to be:
formula_25 where p is the final momentum, Γ the gamma function, and (with α the fine-structure constant and r_N the radius of the final-state nucleus) formula_26, formula_27 (+ for electrons, − for positrons), and formula_28. For non-relativistic betas (Q ≪ m_e c²), this expression can be approximated by: formula_29 Other approximations can be found in the literature. Kurie plot. A Kurie plot (also known as a Fermi–Kurie plot) is a graph used in studying beta decay, developed by Franz N. D. Kurie, in which the square root of the number of beta particles whose momenta (or energy) lie within a certain narrow range, divided by the Fermi function, is plotted against beta-particle energy. It is a straight line for allowed transitions and some forbidden transitions, in accord with the Fermi beta-decay theory. The energy-axis (x-axis) intercept of a Kurie plot corresponds to the maximum energy imparted to the electron/positron (the decay's Q value). With a Kurie plot one can find an upper limit on the effective mass of a neutrino.
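The spectrum formula, the Fermi function, and the Kurie construction can be tied together numerically. The sketch below is illustrative only: it uses the non-relativistic approximation of the Fermi function quoted above, together with the 210Bi parameters from the earlier example (daughter charge Z = 84, Q = 1.16 MeV). By construction, the Kurie ordinate comes out proportional to Q − T, i.e., a straight line whose intercept is Q:

```python
import math

ME_C2 = 0.511          # electron rest energy, MeV
ALPHA = 1 / 137.036    # fine-structure constant

def fermi_nonrel(Z, T):
    """Non-relativistic Fermi function 2*pi*eta / (1 - exp(-2*pi*eta)),
    with eta = Z * alpha / (v/c) for electrons (attractive case)."""
    gamma = 1.0 + T / ME_C2
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    eta = Z * ALPHA / beta
    return 2 * math.pi * eta / (1 - math.exp(-2 * math.pi * eta))

def spectrum(Z, Q, T):
    """Unnormalized allowed spectrum N(T) = F(Z, T) * p * E * (Q - T)^2."""
    if not 0 < T < Q:
        return 0.0
    E = T + ME_C2                       # total energy, MeV
    p = math.sqrt(E**2 - ME_C2**2)      # momentum, MeV/c
    return fermi_nonrel(Z, T) * p * E * (Q - T)**2

Z, Q = 84, 1.16   # daughter charge and Q value for the 210Bi example
for i in range(1, 12):
    T = i * Q / 12
    E = T + ME_C2
    p = math.sqrt(E**2 - ME_C2**2)
    kurie = math.sqrt(spectrum(Z, Q, T) / (fermi_nonrel(Z, T) * p * E))
    print(f"T = {T:.3f} MeV   Kurie ordinate = {kurie:.3f}  (equals Q - T here)")
```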
Helicity (polarization) of neutrinos, electrons and positrons emitted in beta decay. After the discovery of parity non-conservation (see History above), it was found that, in beta decay, electrons are emitted mostly with negative helicity, i.e., they move, naively speaking, like left-handed screws driven into a material (they have negative longitudinal polarization). Conversely, positrons have mostly positive helicity, i.e., they move like right-handed screws. Neutrinos (emitted in positron decay) have negative helicity, while antineutrinos (emitted in electron decay) have positive helicity. The higher the energy of the particles, the higher their polarization. Types of beta decay transitions. Beta decays can be classified according to the angular momentum (L value) and total spin (S value) of the emitted radiation. Since total angular momentum must be conserved, including orbital and spin angular momentum, beta decay occurs by a variety of quantum state transitions to various nuclear angular momentum or spin states, known as "Fermi" or "Gamow–Teller" transitions. When the beta decay particles carry no angular momentum (L = 0), the decay is referred to as "allowed"; otherwise it is "forbidden".
Other decay modes, which are rare, are known as bound state decay and double beta decay. Fermi transitions. A Fermi transition is a beta decay in which the spins of the emitted electron (positron) and anti-neutrino (neutrino) couple to total spin formula_10, leading to an angular momentum change formula_31 between the initial and final states of the nucleus (assuming an allowed transition). In the non-relativistic limit, the nuclear part of the operator for a Fermi transition is given by formula_32 with formula_33 the weak vector coupling constant, formula_34 the isospin raising and lowering operators, and formula_35 running over all protons and neutrons in the nucleus. Gamow–Teller transitions. A Gamow–Teller transition is a beta decay in which the spins of the emitted electron (positron) and anti-neutrino (neutrino) couple to total spin formula_9, leading to an angular momentum change formula_37 between the initial and final states of the nucleus (assuming an allowed transition). In this case, the nuclear part of the operator is given by
formula_38 with formula_39 the weak axial-vector coupling constant, and formula_40 the spin Pauli matrices, which can produce a spin-flip in the decaying nucleon. Forbidden transitions. When L > 0, the decay is referred to as "forbidden". Nuclear selection rules require high L values to be accompanied by changes in nuclear spin (J) and parity (π). The selection rules for the Lth forbidden transitions are: formula_41 where Δπ = 1 or −1 corresponds to no parity change or parity change, respectively. The special case of a transition between isobaric analogue states, where the structure of the final state is very similar to the structure of the initial state, is referred to as "superallowed" for beta decay, and proceeds very quickly. The following table lists the ΔJ and Δπ values for the first few values of L:

Transition          ΔJ        Δπ (parity change?)
Superallowed        0         no
Allowed             0, 1      no
First forbidden     0, 1, 2   yes
Second forbidden    1, 2, 3   no
Third forbidden     2, 3, 4   yes
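The selection rules in the table can be expressed as a small helper function. This is an illustrative sketch; the function name and the cutoff after the fifth degree of forbiddenness are arbitrary choices:

```python
def transition_type(delta_J, parity_change):
    """Classify a beta transition from ΔJ and whether the parity changes."""
    # Allowed: ΔJ = 0 or 1 with no parity change (L = 0).
    if delta_J in (0, 1) and not parity_change:
        return "allowed"
    # Lth forbidden: ΔJ = L - 1, L or L + 1, with parity change iff L is odd.
    for L in range(1, 6):
        if delta_J in (L - 1, L, L + 1) and parity_change == (L % 2 == 1):
            return f"forbidden (L = {L})"
    return "higher forbidden"

print(transition_type(0, False))  # allowed
print(transition_type(2, True))   # forbidden (L = 1)
print(transition_type(3, False))  # forbidden (L = 2)
```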
Rare decay modes. Bound-state β decay. A very small minority of free neutron decays (about four per million) are "two-body decays": the proton, electron and antineutrino are produced, but the electron fails to gain the 13.6 eV energy necessary to escape the proton and therefore simply remains bound to it as a neutral hydrogen atom. In this type of beta decay, in essence all of the neutron decay energy is carried off by the antineutrino. For fully ionized atoms (bare nuclei), it is likewise possible for the electron to fail to escape the atom and instead to be emitted from the nucleus into a low-lying atomic bound state (orbital). This cannot occur for neutral atoms, whose low-lying bound states are already filled by electrons. Bound-state β decays were predicted by Daudel, Jean, and Lecoin in 1947, and the phenomenon in fully ionized atoms was first observed for 163Dy in 1992 by Jung et al. of the Darmstadt Heavy-Ion Research Center. Though neutral 163Dy is stable, fully ionized 163Dy undergoes β− decay into the K and L shells with a half-life of 47 days. The resulting nucleus, 163Ho, is stable only in this almost fully ionized state and, in the neutral state, decays via electron capture back into 163Dy. Likewise, while stable as a neutral atom, fully ionized 205Tl undergoes bound-state β− decay to 205Pb with a half-life of 291 days. The half-lives of neutral 163Ho and 205Pb are respectively 4570 years and 1.73 × 10⁷ years. In addition, it is estimated that β− decay, energetically impossible for the neutral atom, is theoretically possible when fully ionized for 193Ir, 194Au, 202Tl, 215At, 243Am, and 246Bk.
Another possibility is that a fully ionized atom undergoes greatly accelerated β decay, as observed for 187Re by Bosch et al., also at Darmstadt. Neutral 187Re does undergo β− decay, with a half-life of about 42 billion years, but for fully ionized 187Re this is shortened to only 32.9 years. This is because 187Re is energetically allowed to undergo β− decay to the first excited state of 187Os, a process energetically disallowed for the neutral atom. Similarly, neutral 241Pu undergoes β− decay with a half-life of 14.3 years, but in its fully ionized state its beta-decay half-life decreases to 4.2 days. For comparison, the variation of the decay rates of other nuclear processes due to the chemical environment is less than 1%. Moreover, current mass determinations cannot decisively determine whether 222Rn can undergo β decay at all (the decay energy given in AME2020 is (−6 ± 8) keV), but in either case β decay is predicted to be greatly accelerated in fully ionized 222Rn. Double beta decay. Some nuclei can undergo double beta decay (2β), in which the charge of the nucleus changes by two units. Double beta decay is difficult to study, as it has an extremely long half-life. In nuclei for which both β decay and 2β are possible, the rarer 2β process is effectively impossible to observe. However, in nuclei where β decay is forbidden but 2β is allowed, the process can be seen and its half-life measured. Thus, 2β is usually studied only for beta-stable nuclei. Like single beta decay, double beta decay does not change A; thus, at least one of the nuclides with a given A must be stable with regard to both single and double beta decay. "Ordinary" 2β results in the emission of two electrons and two antineutrinos. If neutrinos are Majorana particles (i.e., their own antiparticles), then a decay known as neutrinoless double beta decay will occur. Most neutrino physicists believe that neutrinoless 2β has never been observed.
Blitzkrieg Blitzkrieg is a word used to describe a combined-arms surprise attack using a rapid, overwhelming force concentration that may consist of armored and motorized or mechanized infantry formations, together with artillery, air assault, and close air support. The intent is to break through an opponent's lines of defense, dislocate the defenders, confuse the enemy by making it difficult to respond to the continuously changing front, and defeat them in a decisive "Vernichtungsschlacht": a battle of annihilation. During the interwar period, aircraft and tank technologies matured and were combined with the systematic application of the traditional German tactic of "Bewegungskrieg" (maneuver warfare), involving deep penetrations and the bypassing of enemy strong points to encircle and destroy opposing forces in a "Kesselschlacht" (cauldron battle/battle of encirclement). During the invasion of Poland, Western journalists adopted the term "blitzkrieg" to describe that form of armored warfare. The term had appeared in 1935, in the German military periodical "Deutsche Wehr" ("German Defence"), in connection with quick or lightning warfare.
German maneuver operations were successful during the campaigns of 1939–1941, involving the invasions of Belgium, the Netherlands, and France, and by 1940 the term "blitzkrieg" was being used extensively in Western media. Blitzkrieg operations capitalised on surprise penetrations, such as that through the Ardennes forest, the Allies' general lack of preparedness, and their inability to match the pace of the German attack. During the Battle of France, the French made attempts to reform defensive lines along rivers but were frustrated when German forces arrived first and pressed on. Despite being common in German and English-language journalism during World War II, the word was never used as an official military term by the Wehrmacht, except for propaganda, and it was never officially adopted as a concept or doctrine. According to David Reynolds, "Hitler himself called the term Blitzkrieg 'a completely idiotic word' ("ein ganz blödsinniges Wort")". Some senior German officers, including Kurt Student, Franz Halder, and Johann Adolf von Kielmansegg, even disputed the idea that it was a military concept. Kielmansegg asserted that what many regarded as blitzkrieg was nothing more than "ad hoc solutions that simply popped out of the prevailing situation". Kurt Student described it as ideas that "naturally emerged from the existing circumstances" as a response to operational challenges.
In 2005, the historian Karl-Heinz Frieser summarized blitzkrieg as the result of German commanders using the latest technology in the most advantageous way, according to traditional military principles, and employing "the right units in the right place at the right time". Modern historians now understand blitzkrieg as the combination of traditional German military principles, methods and doctrines of the 19th century with the military technology of the interwar period. Modern historians use the term casually as a generic description for the style of maneuver warfare practised by Germany during the early part of World War II, rather than as an explanation. According to Frieser, in the context of the thinking of Heinz Guderian on mobile combined-arms formations, blitzkrieg can be used as a synonym for modern maneuver warfare on the operational level. Definition.
Origin of term. The origin of the term "blitzkrieg" is obscure. It was never used in the title of a military doctrine or handbook of the German Army or Air Force, and no "coherent doctrine" or "unifying concept of blitzkrieg" existed; German High Command mostly referred to the group of tactics as "Bewegungskrieg" (Maneuver Warfare). The term seems to have been rarely used in the German military press before 1939, and recent research at the German "Militärgeschichtliches Forschungsamt", at Potsdam, found it in only two military articles from the 1930s. Both used the term to mean a swift strategic knockout, rather than a radically new military doctrine or approach to war. The first article (1935) dealt primarily with supplies of food and materiel in wartime. The term "blitzkrieg" was used in reference to German efforts to win a quick victory in the First World War but was not associated with the use of armored, mechanized or air forces. It argued that Germany must develop self-sufficiency in food because it might again prove impossible to deal a swift knockout to its enemies, which would lead to a long war.
In the second article (1938), launching a swift strategic knockout was described as an attractive idea for Germany but difficult to achieve on land under modern conditions (especially against systems of fortification like the Maginot Line) unless an exceptionally high degree of surprise could be achieved. The author vaguely suggested that a massive strategic air attack might hold out better prospects, but the topic was not explored in detail. A third relatively early use of the term in German occurred in "Die Deutsche Kriegsstärke" (German War Strength) by Fritz Sternberg, a Jewish Marxist political economist and refugee from Nazi Germany, published in 1938 in Paris and in London as "Germany and a Lightning War". Sternberg wrote that Germany was not prepared economically for a long war but might win a quick war ("Blitzkrieg"). He did not go into detail about tactics or suggest that the German armed forces had evolved a radically new operational method. His book offered scant clues as to how German lightning victories might be won.
In English and other languages, the term had been used since the 1920s. The term was first used in the publications of Ferdinand Otto Miksche, first in the magazine "Army Quarterly", and in his 1941 book "Blitzkrieg", in which he defined the concept. In September 1939, "Time" magazine termed the German military action as a "war of quick penetration and obliteration – "Blitzkrieg", lightning war". After the invasion of Poland, the British press commonly used the term to describe German successes in that campaign. J. P. Harris called the term "a piece of journalistic sensationalism – a buzz-word with which to label the spectacular early successes of the Germans in the Second World War". The word was later applied to the bombing of Britain, particularly London, hence "The Blitz". The German popular press followed suit nine months later, after the Fall of France in 1940; thus, although the word had first been used in Germany, it was popularized by British journalism. Heinz Guderian referred to it as a word coined by the Allies: "as a result of the successes of our rapid campaigns our enemies ... coined the word "Blitzkrieg"". After the German failure in the Soviet Union in 1941, the use of the term began to be frowned upon in Nazi Germany, and Hitler then denied ever using the term and said in a speech in November 1941, "I have never used the word "Blitzkrieg", because it is a very silly word". In early January 1942, Hitler dismissed it as "Italian phraseology".
Military evolution, 1919–1939. Germany. In 1914, German strategic thinking derived from the writings of Carl von Clausewitz (1 June 1780 – 16 November 1831), Helmuth von Moltke the Elder (26 October 1800 – 24 April 1891) and Alfred von Schlieffen (28 February 1833 – 4 January 1913), who advocated maneuver, mass and envelopment to create the conditions for a decisive battle ("Vernichtungsschlacht"). During the war, officers such as Willy Rohr developed tactics to restore maneuver on the battlefield. Specialist light infantry ("Stosstruppen", "storm troops") were to exploit weak spots to make gaps for larger infantry units to advance with heavier weapons, exploit the success, and leave isolated strong points to the troops following up. Infiltration tactics were combined with short hurricane artillery bombardments using massed artillery. Devised by Colonel Georg Bruchmüller, the attacks relied on speed and surprise rather than on weight of numbers. The tactics met with great success in Operation Michael, the German spring offensive of 1918, and temporarily restored the war of movement once the Allied trench system had been overrun. The German armies pushed on towards Amiens and then Paris, coming within striking distance before supply deficiencies and Allied reinforcements halted the advance.
The historian James Corum criticised the German leadership for failing to understand the technical advances of the First World War, having conducted no studies of the machine gun prior to the war and given tank production the lowest priority during it. After Germany's defeat, the Treaty of Versailles limited the Reichswehr to a maximum of 100,000 men, which prevented the deployment of mass armies. The German General Staff was abolished by the treaty but continued covertly as the "Truppenamt" (Troop Office), disguised as an administrative body. Committees of veteran staff officers were formed within the "Truppenamt" to evaluate 57 issues of the war and revise German operational theories. By the time of the Second World War, their reports had led to doctrinal and training publications, including H. Dv. 487, "Führung und Gefecht der verbundenen Waffen" ("Command and Battle of the Combined Arms"), known as "Das Fug" (1921–1923), and "Truppenführung" (1933–1934), containing standard procedures for combined-arms warfare. The "Reichswehr" was influenced by its analysis of pre-war German military thought, in particular the infiltration tactics which at the end of the war had produced some breakthroughs on the Western Front, and the maneuver warfare which dominated the Eastern Front.
On the Eastern Front, the war did not bog down into trench warfare since the German and the Russian Armies fought a war of maneuver over thousands of miles, which gave the German leadership unique experience that was unavailable to the trench-bound Western Allies. Studies of operations in the East led to the conclusion that small and coordinated forces possessed more combat power than large uncoordinated forces. After the war, the "Reichswehr" expanded and improved infiltration tactics. The commander in chief, Hans von Seeckt, argued that there had been an excessive focus on encirclement and emphasised speed instead. Seeckt inspired a revision of "Bewegungskrieg" (maneuver warfare) thinking and its associated "Auftragstaktik" in which the commander expressed his goals to subordinates and gave them discretion in how to achieve them. The governing principle was "the higher the authority, the more general the orders were"; it was the responsibility of the lower echelons to fill in the details. Implementation of higher orders remained within limits that were determined by the training doctrine of an elite officer corps.
Delegation of authority to local commanders increased the tempo of operations, which had great influence on the success of German armies in the early war period. Seeckt, who believed in the Prussian tradition of mobility, developed the German army into a mobile force and advocated technical advances that would lead to a qualitative improvement of its forces and better coordination between motorized infantry, tanks, and planes. Britain. The British Army took lessons from the successful infantry and artillery offensives on the Western Front in late 1918. To obtain the best co-operation between all arms, emphasis was placed on detailed planning, rigid control and adherence to orders. Mechanization of the army, as part of a combined-arms theory of war, was considered a means to avoid mass casualties and the indecisive nature of offensives. The four editions of "Field Service Regulations" that were published after 1918 held that only combined-arms operations could create enough fire power to enable mobility on a battlefield. That theory of war also emphasised consolidation and recommended caution against overconfidence and ruthless exploitation.
During the Sinai and Palestine campaign, operations involved some aspects of what would later be called blitzkrieg. The decisive Battle of Megiddo included concentration, surprise and speed. Success depended on attacking only in terrain favouring the movement of large formations around the battlefield and on tactical improvements in the British artillery and infantry attack. General Edmund Allenby used infantry to attack the strong Ottoman front line in co-operation with supporting artillery, augmented by the guns of two destroyers. Through constant pressure by infantry and cavalry, two Ottoman armies in the Judean Hills were kept off-balance and virtually encircled during the Battles of Sharon and Nablus (Battle of Megiddo). The British methods induced "strategic paralysis" among the Ottomans and led to their rapid and complete collapse. During the advance, captures were estimated at tens of thousands of prisoners and 260 guns. Liddell Hart considered that important aspects of the operation had been the extent to which Ottoman commanders were denied intelligence on the British preparations for the attack through British air superiority and air attacks on their headquarters and telephone exchanges, which paralyzed attempts to react to the rapidly deteriorating situation.
France. Norman Stone detects early blitzkrieg operations in offensives by the French generals Charles Mangin and Marie-Eugène Debeney in 1918. However, French doctrine in the interwar years became defence-oriented. Colonel Charles de Gaulle advocated the concentration of armor and airplanes. His opinions appeared in his 1934 book "Vers l'Armée de métier" ("Towards the Professional Army"). Like von Seeckt, de Gaulle concluded that France could no longer maintain the huge armies of conscripts and reservists that had fought the First World War, and he sought to use tanks, mechanized forces and aircraft to allow a smaller number of highly trained soldiers to have greater impact in battle. His views endeared him little to the French high command but, according to historian Henrik Bering, were studied with great interest by Heinz Guderian. Russia and Soviet Union. In 1916, General Alexei Brusilov had used surprise and infiltration tactics during the Brusilov Offensive. Later, Marshal Mikhail Tukhachevsky (1893–1937), Georgii Isserson (1898–1976) and other members of the Red Army developed a concept of deep battle from the experience of the Polish–Soviet War of 1919–1920. Those concepts would guide Red Army doctrine throughout the Second World War. Realising the limitations of infantry and cavalry, Tukhachevsky advocated mechanized formations and the large-scale industrialisation they required. Robert Watt (2008) wrote that blitzkrieg has little in common with Soviet deep battle. In 2002, H. P. Willmott had noted that deep battle contained two important differences from blitzkrieg: it was a doctrine of total war, not of limited operations, and it rejected decisive battle in favour of several large simultaneous offensives.
The "Reichswehr" and the Red Army began a secret collaboration in the Soviet Union to evade the Treaty of Versailles occupational agent, the Inter-Allied Commission. In 1926 war games and tests began at Kazan and Lipetsk, in the Soviet Russia. The centers served to field-test aircraft and armored vehicles up to the battalion level and housed aerial- and armoured-warfare schools through which officers rotated. Nazi Germany. After becoming Chancellor of Germany in 1933, Adolf Hitler ignored the provisions of the Treaty of Versailles. Within the Wehrmacht, which was established in 1935, the command for motorized armored forces was named the "Panzerwaffe" in 1936. The "Luftwaffe", the German air force, was officially established in February 1935, and development began on ground-attack aircraft and doctrines. Hitler strongly supported the new strategy. He read Guderian's 1937 book "Achtung – Panzer!" and upon observing armored field exercises at Kummersdorf, he remarked, "That is what I want – and that is what I will have".
Guderian. Guderian summarized combined-arms tactics as the way to get the mobile and motorized armored divisions to work together and support each other to achieve decisive success; he set out these views in his 1950 book "Panzer Leader". Guderian believed that developments in technology were required to support the theory, especially by equipping armored divisions, tanks foremost, with wireless communications. In 1933, he insisted to the high command that every tank in the German armored force must be equipped with a radio. At the start of World War II, only the German Army was thus prepared, with all of its tanks radio-equipped. That proved critical in early tank battles, in which German tank commanders exploited the organizational advantage that radio communication gave them over the Allies. All Allied armies would later copy the innovation. During the Polish campaign, the performance of armored troops, under the influence of Guderian's ideas, won over a number of skeptics who had initially expressed doubt about armored warfare, such as von Rundstedt and Rommel.
Rommel. According to David A. Grossman, by the Twelfth Battle of Isonzo (October–November 1917), while conducting a light-infantry operation, Rommel had perfected his maneuver-warfare principles, which were the very same ones that were applied during the blitzkrieg against France in 1940 and were repeated in the Coalition ground offensive against Iraq in the 1991 Gulf War. During the Battle of France, and against the advice of his staff, Hitler ordered that everything should be completed in a few weeks. Fortunately for the Germans, Rommel and Guderian disobeyed the General Staff's orders (particularly those of General Paul Ludwig Ewald von Kleist) and forged ahead, making quicker progress than anyone had expected, on the way "inventing the idea of Blitzkrieg". It was Rommel who created the new archetype of blitzkrieg by leading his division far ahead of flanking divisions. MacGregor and Williamson remark that Rommel's version of blitzkrieg displayed a significantly better understanding of combined-arms warfare than that of Guderian. General Hermann Hoth submitted an official report in July 1940 which declared that Rommel had "explored new paths in the command of Panzer divisions".
Methods of operations. "Schwerpunkt". "Schwerpunktprinzip" was a heuristic device (conceptual tool or thinking formula) used in the German Army from the nineteenth century onward to make decisions, from tactics to strategy, about priority. "Schwerpunkt" has been translated as "center of gravity", "crucial", "focal point" and "point of main effort". None of those forms is sufficient to describe the universal importance of the term and the concept of "Schwerpunktprinzip". Every unit in the army, from the company to the supreme command, decided on a "Schwerpunkt" through "Schwerpunktbildung", as did the support services, which meant that commanders always knew what was most important and why. The German army was trained to support the "Schwerpunkt" even when risks had to be taken elsewhere to support the point of main effort, and to attack with overwhelming firepower. "Schwerpunktbildung" allowed the German Army to achieve superiority at the "Schwerpunkt", whether attacking or defending, to turn local success at the "Schwerpunkt" into the progressive disorganisation of the opposing force, and to create more opportunities to exploit that advantage even if the Germans were numerically and strategically inferior in general. In the 1930s, Guderian summarized this attitude as "Klotzen, nicht kleckern!" (roughly, "strike concentrated, not dispersed").
Pursuit. Having achieved a breakthrough of the enemy's line, units comprising the "Schwerpunkt" were not supposed to become decisively engaged with enemy front line units to the right and the left of the breakthrough area. Units pouring through the hole were to drive upon set objectives behind the enemy front line. During the Second World War, German Panzer forces used their motorized mobility to paralyze the opponent's ability to react. Fast-moving mobile forces seized the initiative, exploited weaknesses and acted before the opposing forces could respond. Central to that was the decision cycle (tempo). Through superior mobility and faster decision-making cycles, mobile forces could act faster than the forces opposing them. Directive control was a fast and flexible method of command. Rather than receiving an explicit order, a commander would be told of his superior's intent and the role that his unit was to fill in that concept. The method of execution was then a matter for the discretion of the subordinate commander. The staff burden was reduced at the top and spread among tiers of command with knowledge about their situation. Delegation and the encouragement of initiative aided implementation, and important decisions could be taken quickly and communicated verbally or with only brief written orders.
Mopping-up. The last part of an offensive operation was the destruction of unsubdued pockets of resistance, which had been enveloped earlier and bypassed by the fast-moving armored and motorized spearheads. The "Kesselschlacht" ("cauldron battle") was a concentric attack on such pockets. It was there that most losses were inflicted upon the enemy, primarily through the mass capture of prisoners and weapons. During Operation Barbarossa, huge encirclements in 1941 produced nearly 3.5 million Soviet prisoners, along with masses of equipment. Air power. Close air support was provided in the form of the dive bomber and medium bomber, which would support the focal point of attack from the air. German successes are closely related to the extent to which the German "Luftwaffe" was able to control the air war in early campaigns in Western and Central Europe and in the Soviet Union. However, the "Luftwaffe" was a broadly based force with no constricting central doctrine other than that its resources should be used generally to support national strategy. It was flexible and could carry out both operational-tactical and strategic bombing.
Flexibility was the strength of the "Luftwaffe" in 1939 to 1941. Paradoxically, that became its weakness. While Allied air forces were tied to the support of the army, the "Luftwaffe" deployed its resources in a more general, operational way. It switched from air superiority missions to medium-range interdiction, to strategic strikes, to close support duties, depending on the needs of the ground forces. In fact, far from being a specialist panzer spearhead arm, less than 15 percent of the "Luftwaffe" was intended for close support of the army in 1939. Stimulants. Methamphetamine use among troops, especially Temmler's 3 mg Pervitin tablets, likely contributed to the Wehrmacht's "blitzkrieg" success by enabling synchronized, high-endurance operations with minimal rest. Limitations and countermeasures.
Air superiority. The influence of air forces over forces on the ground changed significantly over the course of the Second World War. Early German successes were conducted when Allied aircraft could not make a significant impact on the battlefield. In May 1940, there was near parity in numbers of aircraft between the "Luftwaffe" and the Allies, but the "Luftwaffe" had been developed to support Germany's ground forces, had liaison officers with the mobile formations and operated a higher number of sorties per aircraft. In addition, the Germans' air parity or superiority allowed the unencumbered movement of ground forces, their unhindered assembly into concentrated attack formations, aerial reconnaissance, aerial resupply of fast moving formations and close air support at the point of attack. The Allied air forces had no close air support aircraft, training or doctrine. The Allies flew 434 French and 160 British sorties a day but methods of attacking ground targets had yet to be developed and so Allied aircraft caused negligible damage. Against the Allies' 600 sorties, the "Luftwaffe" on average flew 1,500 sorties a day.
On 13 May, "Fliegerkorps" VIII flew 1,000 sorties in support of the crossing of the Meuse. The following day the Allies made repeated attempts to destroy the German pontoon bridges, but German fighter aircraft, ground fire and "Luftwaffe" flak batteries with the panzer forces destroyed 56 percent of the attacking Allied aircraft, and the bridges remained intact. Allied air superiority became a significant hindrance to German operations during the later years of the war. By June 1944, the Western Allies had the complete control of the air over the battlefield, and their fighter-bomber aircraft were very effective at attacking ground forces. On D-Day, the Allies flew 14,500 sorties over the battlefield area alone, not including sorties flown over Northwestern Europe. Against them the "Luftwaffe" flew some 300 sorties on 6 June. Though German fighter presence over Normandy increased over the next days and weeks, it never approached the numbers that the Allies commanded. Fighter-bomber attacks on German formations made movement during daylight almost impossible.
Shortages soon developed in food, fuel and ammunition, severely hampering the German defenders. German vehicle crews and even flak units experienced great difficulty moving during daylight. Indeed, the final German offensive operation in the west, Operation Wacht am Rhein, was planned to take place during poor weather to minimise interference by Allied aircraft. Under those conditions, it was difficult for German commanders to employ the "armored idea", if at all. Counter-tactics. Blitzkrieg is vulnerable to an enemy that is robust enough to weather the shock of the attack and that does not panic at the idea of enemy formations in its rear area. That is especially true if the attacking formation lacks the reserve to keep funnelling forces into the spearhead or the mobility to provide infantry, artillery and supplies into the attack. If the defender can hold the shoulders of the breach, it has the opportunity to counter-attack into the flank of the attacker and potentially to cut off the spearhead, as happened to Kampfgruppe Peiper in the Ardennes.
During the Battle of France in 1940, the 4th Armoured Division (Major-General Charles de Gaulle) and elements of the 1st Army Tank Brigade (British Expeditionary Force) made probing attacks on the German flank and at times pushed into the rear of the advancing armored columns. That may have been a reason for Hitler to call a halt to the German advance. Those attacks, combined with Maxime Weygand's hedgehog tactic, would become the major basis for responding to blitzkrieg attacks in the future: deployment in depth and the holding of the "shoulders" of a penetration were essential to channelling the enemy attack, and artillery, properly employed at the shoulders, could take a heavy toll on attackers. Allied forces in 1940 lacked the experience to develop those strategies successfully, which resulted in the French armistice with heavy losses, but those strategies characterized later Allied operations. At the Battle of Kursk, the Red Army used a combination of defence in great depth, extensive minefields and tenacious defense of breakthrough shoulders. In that way, they depleted German combat power even as German forces advanced. The reverse can be seen in the Soviet summer offensive of 1944, Operation Bagration, which resulted in the destruction of Army Group Center. German attempts to weather the storm and fight out of encirclements failed because of the Soviets' ability to continue to feed armored units into the attack, maintain the mobility and strength of the offensive, and arrive in force deep in the rear areas faster than the Germans could regroup.
Logistics. Although effective in quick campaigns against Poland and France, mobile operations could not be sustained by Germany in later years. Strategies based on maneuver have the inherent danger of the attacking force overextending its supply lines, and they can be defeated by a determined foe who is willing and able to sacrifice territory for time in which to regroup and rearm, as the Soviets did on the Eastern Front (as opposed to, for example, the Dutch, who had no territory to sacrifice). Tank and vehicle production was a constant problem for Germany; indeed, late in the war, many panzer "divisions" had no more than a few dozen tanks. As the end of the war approached, Germany also experienced critical shortages in fuel and ammunition stocks as a result of Anglo-American strategic bombing and blockade. Although production of "Luftwaffe" fighter aircraft continued, they could not fly for lack of fuel. What fuel there was went to panzer divisions, and even then, they could not operate normally. Of the Tiger tanks lost against the US Army, nearly half were abandoned for lack of fuel.
Military operations. Spanish Civil War. German volunteers first used armor in live field conditions during the Spanish Civil War (1936–1939). The armor commitment consisted of Panzer Battalion 88, a force built around three companies of Panzer I tanks that functioned as a training cadre for Spain's Nationalists. The Luftwaffe deployed squadrons of fighters, dive-bombers and transport aircraft as the "Condor Legion". Guderian said that the tank deployment was "on too small a scale to allow accurate assessments to be made". (The true test of his "armored idea" would have to wait for the Second World War.) However, the "Luftwaffe" also provided volunteers to Spain to test both tactics and aircraft in combat, including the first combat use of the "Stuka". During the war, the "Condor Legion" undertook the 1937 bombing of Guernica, which had a tremendous psychological effect on the populations of Europe. The results were exaggerated, and the Western Allies concluded that the "city-busting" techniques were now part of the German way in war. The targets of the German aircraft were actually the rail lines and bridges, but lacking the ability to hit them with accuracy (only three or four Ju 87s saw action in Spain), the "Luftwaffe" chose a method of carpet bombing, resulting in heavy civilian casualties.
Poland, 1939. Although journalists popularized the term "blitzkrieg" during the September 1939 invasion of Poland, the historians Matthew Cooper and J. P. Harris have written that German operations during the campaign were consistent with traditional methods. The Wehrmacht strategy was more in line with "Vernichtungsgedanke": a focus on envelopment to create pockets in broad-front annihilation. The German generals dispersed Panzer forces among the three German concentrations with little emphasis on independent use. They deployed tanks to create or destroy close pockets of Polish forces and to seize operational-depth terrain in support of the largely unmotorized infantry which followed. The Wehrmacht used available models of tanks, Stuka dive-bombers and concentrated forces in the Polish campaign, but the majority of the fighting involved conventional infantry and artillery warfare, and most Luftwaffe action was independent of the ground campaign. John Ellis wrote that "there is considerable justice in Matthew Cooper's assertion that the panzer divisions were not given the kind of 'strategic' mission that was to characterize authentic armored blitzkrieg, and were almost always closely subordinated to the various mass infantry armies". Steven Zaloga wrote, "Whilst Western accounts of the September campaign have stressed the shock value of the panzer and Stuka attacks, they have tended to underestimate the punishing effect of German artillery on Polish units. Mobile and available in significant quantity, artillery shattered as many units as any other branch of the Wehrmacht."
Low Countries and France, 1940. The German invasion of France, with subsidiary attacks on Belgium and the Netherlands, consisted of two phases, Operation Yellow ("Fall Gelb") and Operation Red ("Fall Rot"). Yellow opened with a feint conducted against the Netherlands and Belgium by two armored corps and paratroopers. Most of the German armored forces were placed in Panzer Group Kleist, which attacked through the Ardennes, a lightly defended sector that the French planned to reinforce if necessary before the Germans could bring up heavy and siege artillery. There was no time for the French to send such reinforcement, as the Germans did not wait for siege artillery but reached the Meuse and achieved a breakthrough at the Battle of Sedan in three days. Panzer Group Kleist raced to the English Channel, reached the coast at Abbeville and cut off the BEF, the Belgian Army and some of the best-equipped divisions of the French Army in northern France. Armored and motorized units under Guderian, Rommel and others advanced far beyond the marching and horse-drawn infantry divisions and far in excess of what Hitler and the German high command had expected or wished. When the Allies counter-attacked at Arras by using the heavily armored British Matilda I and Matilda II tanks, a brief panic ensued in the German High Command.
Hitler halted his armored and motorized forces outside the port of Dunkirk, which the Royal Navy had started using to evacuate the Allied forces. Hermann Göring promised that the Luftwaffe would complete the destruction of the encircled armies, but aerial operations failed to prevent the evacuation of the majority of the Allied troops. In Operation Dynamo, some 330,000 French and British troops escaped. Case Yellow surprised everyone by overcoming the Allies' 4,000 armored vehicles, many of which were better than their German equivalents in armor and gun power. The French and British frequently used their tanks in the dispersed role of infantry support, rather than concentrating force at the point of attack to create overwhelming firepower. The French armies were much reduced in strength and the confidence of their commanders shaken. With much of their own armor and heavy equipment lost in Northern France, they lacked the means to fight a mobile war. The Germans followed their initial success with Operation Red, a triple-pronged offensive. The XV Panzer Corps attacked towards Brest, the XIV Panzer Corps attacked east of Paris, towards Lyon, and the XIX Panzer Corps encircled the Maginot Line. The French, hard pressed to organise any sort of counter-attack, were continually ordered to form new defensive lines and found that German forces had already bypassed them and moved on. An armored counter-attack organized by Colonel Charles de Gaulle could not be sustained, and he had to retreat.
Prior to the German offensive in May, Winston Churchill had said, "Thank God for the French Army". That same French army collapsed after barely two months of fighting, in shocking contrast to the four years of trench warfare in which French forces had engaged during the First World War. French Prime Minister Paul Reynaud analyzed the collapse in a speech on 21 May 1940. The Germans had not used paratroop attacks in France; they made only one large drop, in the Netherlands, to capture three bridges, and some small glider landings were conducted in Belgium to take bottlenecks on routes of advance before the arrival of the main force (the most renowned being the landing on Fort Eben-Emael in Belgium). Eastern Front, 1941–1944. The use of armored forces was crucial for both sides on the Eastern Front. Operation Barbarossa, the German invasion of the Soviet Union in June 1941, involved a number of breakthroughs and encirclements by motorized forces. Its goal, according to Führer Directive 21 (18 December 1940), was "to destroy the Russian forces deployed in the West and to prevent their escape into the wide-open spaces of Russia". The Red Army was to be destroyed west of the Dvina and Dnieper rivers, which lay several hundred kilometres east of the Soviet border, to be followed by a mopping-up operation. The surprise attack resulted in the near annihilation of the Voyenno-Vozdushnye Sily (VVS, Soviet Air Force) by simultaneous attacks on airfields, allowing the Luftwaffe to achieve total air supremacy over all the battlefields within the first week. On the ground, four German panzer groups outflanked and encircled disorganized Red Army units, while the marching infantry completed the encirclements and defeated the trapped forces. In late July, after the 2nd Panzer Group (commanded by Guderian) captured the watersheds of the Dvina and Dnieper rivers near Smolensk, the panzers had to defend the encirclement, because the marching infantry divisions remained hundreds of kilometers to the west.
The Germans conquered large areas of the Soviet Union, but their failure to destroy the Red Army before the winter of 1941–1942 was a strategic failure that made German tactical superiority and territorial gains irrelevant. The Red Army had survived enormous losses and regrouped with new formations far to the rear of the front line. During the Battle of Moscow (October 1941 to January 1942), the Red Army defeated the German Army Group Centre and for the first time in the war seized the strategic initiative. In the summer of 1942, Germany launched another offensive, this time focusing on Stalingrad and the Caucasus in the southern Soviet Union. The Soviets again lost tremendous amounts of territory, only to counter-attack once more during winter. The German gains were ultimately limited because Hitler diverted forces from the attack on Stalingrad to drive towards the Caucasus oilfields simultaneously. The "Wehrmacht" became overstretched: although it won operationally, it could not inflict a decisive defeat as the durability of the Soviet Union's manpower, resources, industrial base and aid from the Western Allies began to take effect.
In July 1943, the "Wehrmacht" conducted Operation Zitadelle (Citadel) against a salient at Kursk that Soviet troops heavily defended. Soviet defensive tactics had by now hugely improved, particularly in the use of artillery and air support. By April 1943, the Stavka had learned of German intentions through intelligence supplied by front-line reconnaissance and Ultra intercepts. In the following months, the Red Army constructed deep defensive belts along the paths of the planned German attack. The Soviets made a concerted effort to disguise their knowledge of German plans and the extent of their own defensive preparations, and the German commanders still hoped to achieve operational surprise when the attack commenced. The Germans did not achieve surprise, however, and could not outflank or break through into enemy rear areas during the operation. Several historians assert that Operation Citadel was planned and intended to be a blitzkrieg operation, but many of the German participants who wrote about the operation after the war, including Erich von Manstein, make no mention of blitzkrieg in their accounts. In 2000, Niklas Zetterling and Anders Frankson characterised only the southern pincer of the German offensive as a "classical blitzkrieg attack". Pier Battistelli wrote that the operational planning marked a change in German offensive thinking away from blitzkrieg and that more priority was given to brute force and firepower than to speed and maneuver.
In 1995, David Glantz stated that at Kursk, blitzkrieg was defeated for the first time in summer, as the opposing Soviet forces mounted a successful counter-offensive. The Battle of Kursk ended with two Soviet counter-offensives and the revival of deep operations. In the summer of 1944, the Red Army destroyed Army Group Centre in Operation Bagration, using combined-arms tactics for armor, infantry and air power in a coordinated strategic assault known as deep operations, which led to an advance of hundreds of kilometers in six weeks. Western Front, 1944–1945. Allied armies began using combined-arms formations and deep-penetration strategies that Germany had used in the opening years of the war. Many Allied operations in the Western Desert and on the Eastern Front relied on firepower to establish breakthroughs by fast-moving armored units. These artillery-based tactics were also decisive in Western Front operations after Operation Overlord in 1944, and the British Commonwealth and American armies developed flexible and powerful systems for using artillery support. What the Soviets lacked in flexibility, they made up for in numbers of rocket launchers, guns and mortars. The Germans never achieved the kind of fire concentrations that their enemies were achieving by 1944.
After the Allied landings in Normandy (June 1944), the Germans began a counter-offensive to overwhelm the landing force with armored attacks, but these failed for lack of co-ordination and because of Allied superiority in anti-tank defense and in the air. The most notable attempt to use deep-penetration operations in Normandy was Operation Lüttich at Mortain, which only hastened the formation of the Falaise Pocket and the destruction of German forces in Normandy. The Mortain counter-attack was defeated by the American 12th Army Group with little effect on its own offensive operations. The last German offensive on the Western Front, the Battle of the Bulge (Operation Wacht am Rhein), was launched towards the port of Antwerp in December 1944. Mounted in poor weather against a thinly held Allied sector, it achieved surprise and initial success because Allied air power was grounded by cloud cover. Determined defense by American troops in places throughout the Ardennes, the lack of good roads and German supply shortages caused delays. Allied forces deployed to the flanks of the German penetration, and as soon as the skies cleared, Allied aircraft returned to the battlefield. Allied counter-attacks soon forced back the Germans, who abandoned much equipment for lack of fuel.
Post-war controversy. Blitzkrieg has been called a Revolution in Military Affairs (RMA), but many writers and historians have concluded that the Germans did not invent a new form of warfare but applied new technologies to traditional ideas of "Bewegungskrieg" (maneuver warfare) to achieve decisive victory. Strategy. In 1965, Captain Robert O'Neill, Professor of the History of War at the University of Oxford, produced an example of the popular view in "Doctrine and Training in the German Army 1919–1939". Other historians wrote that blitzkrieg was an operational doctrine of the German armed forces and a strategic concept on which the leadership of Nazi Germany based its strategic and economic planning. Military planners and bureaucrats in the war economy appear rarely, if ever, to have employed the term "blitzkrieg" in official documents. That the German army had a "blitzkrieg doctrine" was rejected in the late 1970s by Matthew Cooper. The concept of a blitzkrieg "Luftwaffe" was challenged by Richard Overy in the late 1970s and by Williamson Murray in the mid-1980s. That Nazi Germany went to war on the basis of "blitzkrieg economics" was criticized by Richard Overy in the 1980s, and George Raudzens described the contradictory senses in which historians have used the word. The notion of a German blitzkrieg concept or doctrine nonetheless survives in popular history, and many historians still support the thesis.
Frieser wrote that, after the failure of the Schlieffen Plan in 1914, the German army concluded that decisive battles were no longer possible in the changed conditions of the twentieth century. He argued that the Oberkommando der Wehrmacht (OKW), which was created in 1938, had intended to avoid the decisive-battle concepts of its predecessors and planned for a long war of exhaustion ("Ermattungskrieg"). It was only after the improvised plan for the Battle of France in 1940 proved unexpectedly successful that the German General Staff came to believe that a war of annihilation ("Vernichtungskrieg") was still feasible. German thinking then reverted to the possibility of a quick and decisive war for the Balkan campaign and Operation Barbarossa. Doctrine.
Historian Victor Davis Hanson states that "Blitzkrieg" "played on the myth of German technological superiority and industrial dominance" and adds that German successes, particularly those of its Panzer divisions, were "instead predicated on the poor preparation and morale of Germany's enemies". Hanson also reports that at a public address in Munich in November 1941, Hitler "disowned" the concept of "Blitzkrieg" by calling it an "idiotic word". Further, successful "Blitzkrieg" operations were predicated on superior numbers and air support, and were possible only for short periods without sufficient supply lines. For all intents and purposes, "Blitzkrieg" ended on the Eastern Front once the German forces had given up Stalingrad, after they faced hundreds of new T-34 tanks, when the Luftwaffe became unable to assure air dominance and after the stalemate at Kursk. To that end, Hanson concludes that German military success was not accompanied by the adequate provisioning of its troops with food and materiel far from the source of supply, which contributed to Germany's ultimate failure. Despite its later disappointments as German troops overextended their lines, the very specter of armored "Blitzkrieg" forces initially proved victorious against the Polish, Dutch, Belgian and French armies early in the war.
Economics. In the 1960s, Alan Milward developed a theory of blitzkrieg economics: Germany could not fight a long war, and so it chose to avoid comprehensive rearmament and armed in breadth to win quick victories. Milward described an economy positioned between a full war economy and a peacetime economy. The purpose of the blitzkrieg economy was to allow the German people to enjoy high living standards in the event of hostilities and to avoid the economic hardships of the First World War. Overy wrote that blitzkrieg as a "coherent military and economic concept has proven a difficult strategy to defend in light of the evidence". Milward's theory was contrary to the intentions of Hitler and the German planners. The Germans, aware of the errors of the First World War, rejected the concept of organizing their economy to fight only a short war. Therefore, focus was given to the development of armament in depth for a long war, instead of armament in breadth for a short war. Hitler claimed that relying on surprise alone was "criminal" and that "we have to prepare for a long war along with surprise attack". During the winter of 1939–1940, Hitler demobilized many troops from the army to return as skilled workers to factories, because the war would be decided by production, not by a quick "Panzer operation".
In the 1930s, Hitler had ordered rearmament programs that could not be considered limited. In November 1937, he had indicated that most of the armament projects would be completed by 1943–1945. The rearmament of the "Kriegsmarine" was to have been completed in 1949, and the "Luftwaffe" rearmament program was to have matured in 1942, with a force capable of strategic bombing with heavy bombers. The construction and training of motorized forces and a full mobilization of the rail networks would not begin until 1943 and 1944, respectively. Hitler needed to avoid war until these projects were complete, but his misjudgements in 1939 forced Germany into war before rearmament was finished. After the war, Albert Speer claimed that the German economy achieved greater armaments output not because of diversions of capacity from civilian to military industry but through streamlining of the economy. Overy pointed out that some 23 percent of German output was military by 1939. Between 1937 and 1939, 70 percent of investment capital went into the rubber, synthetic fuel, aircraft and shipbuilding industries. Hermann Göring had consistently stated that the task of the Four Year Plan was to rearm Germany for total war. Hitler's correspondence with his economists also reveals that his intent was to wage war in 1943–1945, when the resources of central Europe had been absorbed into Nazi Germany.